curl -s \
  -H "Authorization: Token <paste-your-token-here>" \
  https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu

{
  "id": "gm3qorzdhgbfurvjtvhg6dckhu",
  "model": "replicate/hello-world",
  "version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
  "input": {
    "text": "Alice"
  },
  "logs": "",
  "output": "hello Alice",
  "error": null,
  "status": "succeeded",
  "created_at": "2023-09-08T16:19:34.765994Z",
  "started_at": "2023-09-08T16:19:34.779176Z",
  "completed_at": "2023-09-08T16:19:34.791859Z",
  "metrics": {
    "predict_time": 0.012683
  },
  "urls": {
    "cancel": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu/cancel",
    "get": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu"
  }
}

status will be one of:

- starting: the prediction is starting up. If this status lasts longer than a few seconds, it's typically because a new worker is being started to run the prediction.
- processing: the predict() method of the model is currently running.
- succeeded: the prediction completed successfully.
- failed: the prediction encountered an error during processing.
- canceled: the prediction was canceled by its creator.

output will be an object containing the output of the model. Any files will be represented as HTTPS URLs. You'll need to pass the Authorization header to request them.

error will contain the error encountered during the prediction, if any.

Terminated predictions (those with a status of succeeded, failed, or canceled) will include a metrics object with a predict_time property showing the amount of CPU or GPU time, in seconds, that the prediction used while running. It won't include time waiting for the prediction to start.

Output files are served from replicate.delivery and its subdomains. If you use an allow list of external domains for your assets, add replicate.delivery and *.replicate.delivery to it.
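While a prediction is still in the starting or processing state, you can poll this endpoint until it reaches a terminal state. Here is a minimal polling sketch, assuming the jq CLI is installed and that REPLICATE_API_TOKEN and PREDICTION_ID are set in your environment (both names are placeholders, not part of the API):

# Poll the prediction until it reaches a terminal state.
# Assumes jq is installed and REPLICATE_API_TOKEN / PREDICTION_ID are set.
while true; do
  status=$(curl -s \
    -H "Authorization: Token $REPLICATE_API_TOKEN" \
    "https://api.replicate.com/v1/predictions/$PREDICTION_ID" \
    | jq -r '.status')
  echo "status: $status"
  case "$status" in
    succeeded|failed|canceled) break ;;
  esac
  sleep 1
done

In production you'd typically prefer webhooks over a tight polling loop, but polling is handy for quick scripts and debugging.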
To list your recent predictions, send a GET request to the predictions collection endpoint:

curl --location --request GET 'https://api.replicate.com/v1/predictions/' \
  --header 'Authorization: Bearer <token>'
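As a rough sketch, you could combine the list endpoint with jq to print the id and status of each recent prediction. The results field used below is an assumption about the shape of the paginated list response, so inspect the actual payload before relying on it:

# Sketch: print id and status for recent predictions.
# Assumes jq is installed and that the list response contains a "results" array.
curl -s --location --request GET 'https://api.replicate.com/v1/predictions/' \
  --header "Authorization: Bearer $REPLICATE_API_TOKEN" \
  | jq -r '.results[] | "\(.id)\t\(.status)"'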