The Prompts API lets you run AI prompts from your project. You can trigger prompts with different streaming options depending on your application's needs.
Available Endpoints
Run a specific prompt and get the complete response in a single API call. Recommended for classifiers and other decision-making flows, since the full response is available before a decision is made.
POST /prompt-runs/run-sync
{
  "promptId": "string",
  "promptVersionId": "string",
  "replacements": {
    "key1": "value1",
    "key2": "value2"
  }
}
Returns the complete response as a single JSON payload.
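As an illustration, a synchronous call might look like the sketch below. The base URL is a placeholder and the response shape is an assumption; only the endpoint path and request body come from this reference.

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical base URL, replace with yours


def build_run_body(prompt_id: str, prompt_version_id: str, replacements: dict) -> dict:
    """Assemble the JSON body shared by all prompt-run endpoints."""
    return {
        "promptId": prompt_id,
        "promptVersionId": prompt_version_id,
        "replacements": replacements,
    }


def run_sync(prompt_id: str, prompt_version_id: str, replacements: dict) -> dict:
    """POST to /prompt-runs/run-sync and return the parsed JSON response."""
    body = json.dumps(build_run_body(prompt_id, prompt_version_id, replacements))
    req = urllib.request.Request(
        f"{API_BASE}/prompt-runs/run-sync",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the full response arrives in one payload, a classifier can inspect it immediately and branch on the result.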
Run a specific prompt and stream the response over HTTP/2. This is useful for real-time applications where you want to display the response as it's being generated.
POST /prompt-runs/async
{
  "promptId": "string",
  "promptVersionId": "string",
  "replacements": {
    "key1": "value1",
    "key2": "value2"
  }
}
Streams chunks of data as they become available.
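A client typically renders each chunk as soon as it arrives while also accumulating the full text. The sketch below shows that consumption pattern; since this reference does not specify the chunk encoding, a stand-in generator simulates the streamed body.

```python
from typing import Iterable, Iterator


def display_stream(chunks: Iterable[str]) -> str:
    """Render each chunk as it arrives and return the assembled response."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # render incrementally, no buffering
        parts.append(chunk)
    return "".join(parts)


def fake_stream() -> Iterator[str]:
    """Stand-in for the chunked HTTP/2 response body."""
    yield from ["The quick ", "brown fox ", "jumps."]


full_text = display_stream(fake_stream())  # full_text holds the complete response
```

In a real client, `fake_stream()` would be replaced by an iterator over the HTTP/2 response body from an HTTP/2-capable client library.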
Run a specific prompt and stream the response using Server-Sent Events (SSE) protocol. This is useful for web applications that need to display the response as it’s being generated.
POST /prompt-runs/run-sse
{
  "promptId": "string",
  "promptVersionId": "string",
  "replacements": {
    "key1": "value1",
    "key2": "value2"
  }
}
Streams events with data as they become available.
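In the SSE wire format, each event is one or more `data:` lines terminated by a blank line, with multiple `data:` lines joined by newlines. The minimal parser below follows that format; the exact payload the API places in each event is not specified here, so it is treated as opaque text.

```python
from typing import Iterable, Iterator


def parse_sse(lines: Iterable[str]) -> Iterator[str]:
    """Yield the data payload of each Server-Sent Event.

    Each event is one or more `data:` lines ended by a blank line;
    multiple data lines within one event are joined with "\n".
    """
    data: list[str] = []
    for line in lines:
        line = line.rstrip("\n")
        if line == "":
            if data:  # blank line closes the current event
                yield "\n".join(data)
                data = []
        elif line.startswith("data:"):
            data.append(line[5:].lstrip())
    if data:  # flush a trailing event with no final blank line
        yield "\n".join(data)
```

For example, `list(parse_sse(["data: Hel", "", "data: lo", ""]))` yields the two event payloads `["Hel", "lo"]`, which a web client would append to the page as they arrive.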
Endpoint summary:
POST /prompt-runs/run-sync
POST /prompt-runs/async
POST /prompt-runs/run-sse