API Fundamentals
This page explains the fundamental concepts and patterns that are consistent across all FASHN API endpoints. Understanding these concepts will help you integrate any current or future endpoint.
Authentication
All API requests require authentication using a Bearer token in the `Authorization` header:
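As a minimal sketch in Python, building the required header (the `FASHN_API_KEY` environment variable name is an assumption for illustration, not mandated by the API):

```python
import os

# Read the key from an environment variable rather than hard-coding it.
# "FASHN_API_KEY" is an assumed variable name; any secure storage works.
api_key = os.environ.get("FASHN_API_KEY", "example-api-key")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```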
You can obtain your API key from the Developer API Dashboard.
Request Pattern
All FASHN model endpoints follow a consistent request pattern to the same `/v1/run` endpoint:
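A hedged sketch of submitting a job with the Python standard library; the model name and input parameters you pass depend entirely on the endpoint you are calling (the values below are placeholders):

```python
import json
import urllib.request

RUN_URL = "https://api.fashn.ai/v1/run"

def run_prediction(api_key: str, model_name: str, inputs: dict) -> dict:
    """Submit a job to POST /v1/run and return the parsed JSON response."""
    body = json.dumps({"model_name": model_name, "inputs": inputs}).encode("utf-8")
    request = urllib.request.Request(
        RUN_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```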
Universal Request Properties
`model_name` (string, required)
Specifies which model/endpoint to use for processing. Each endpoint has its own unique model name.

`inputs` (object, required)
Contains all the input parameters for the selected model. The structure of this object varies by endpoint.
Response Pattern
Initial Response
When you submit a request to `/v1/run`, you'll receive an immediate response with a prediction ID:
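For illustration, a minimal response of this shape (the value is a placeholder, and the actual payload may include additional fields):

```json
{
  "id": "example-prediction-id"
}
```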
Status Polling
Use the prediction ID to poll for status and results:
GET https://api.fashn.ai/v1/status/{id}
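A polling sketch in Python, assuming the `status` field documented below; the 2-second interval is a suggestion that stays well under the default `/v1/status` rate limit of 50 requests per 10 seconds:

```python
import json
import time
import urllib.request

STATUS_URL = "https://api.fashn.ai/v1/status/{id}"

def poll_prediction(api_key: str, prediction_id: str,
                    interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Poll GET /v1/status/{id} until the prediction completes or fails."""
    url = STATUS_URL.format(id=prediction_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        request = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(request) as response:
            result = json.loads(response.read())
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"Prediction {prediction_id} did not finish in {timeout}s")
```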
Response States
Poll for the status of a specific prediction using its ID.
status
'starting' | 'in_queue' | 'processing' | 'completed' | 'failed'
The current state of your prediction:
- `starting` - Prediction is being initialized
- `in_queue` - Prediction is waiting to be processed
- `processing` - Model is actively generating your result
- `completed` - Generation finished successfully, output available
- `failed` - Generation failed, check error details
Example Status Responses
In Progress:
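An illustrative sketch (values are placeholders):

```json
{
  "id": "example-prediction-id",
  "status": "processing"
}
```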
Completed:
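An illustrative sketch; the `output` field name and CDN URL below are assumptions for illustration, not the exact payload:

```json
{
  "id": "example-prediction-id",
  "status": "completed",
  "output": ["https://cdn.fashn.ai/example/output.png"]
}
```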
- CDN URLs (default): Available for 72 hours after completion
- Base64 outputs (when `return_base64: true`): Available for 60 minutes after completion
Learn more in the Data Retention & Privacy section.
Failed:
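An illustrative sketch; the error object's exact shape is an assumption, and the error name is a placeholder (runtime error types vary by endpoint, as described below):

```json
{
  "id": "example-prediction-id",
  "status": "failed",
  "error": {
    "name": "ExampleRuntimeError",
    "message": "A description of what went wrong"
  }
}
```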
Error Handling
There are two distinct types of errors you might encounter when interacting with the API:
API-Level Errors
These errors occur before your request is accepted for processing—meaning no prediction ID is returned. They are communicated using standard HTTP status codes, along with a descriptive error payload:
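For example, an invalid key could produce a payload along these lines (the error name comes from the table below; the message wording and exact shape are illustrative):

```json
{
  "error": {
    "name": "UnauthorizedAccess",
    "message": "Invalid or missing API key"
  }
}
```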
Note that there are no `id` or `status` fields in these responses, as the request never reached the processing stage.
Below is a complete list of possible API-level errors, including their HTTP status codes, causes, and recommended solutions:
| Code | Error | Cause | Solution |
|------|-------|-------|----------|
| 400 | BadRequest | Invalid request format | Check request structure and required parameters |
| 401 | UnauthorizedAccess | Invalid/missing API key | Verify your API key in the Authorization header |
| 404 | NotFound | Resource not found | Check endpoint URL and prediction ID |
| 429 | RateLimitExceeded | Too many requests | Implement request throttling |
| 429 | ConcurrencyLimitExceeded | Too many concurrent requests | Wait for current requests to complete |
| 429 | OutOfCredits | No API credits remaining | Purchase more credits |
| 500 | InternalServerError | Server error | Retry after delay, contact support if persistent |
If you’re just getting started, our Troubleshooting Guide covers common early-stage issues and how to resolve them effectively.
Runtime Errors
These errors occur during model execution, after you've successfully submitted a request and received a prediction ID. You'll get a `200` HTTP status with `status: "failed"` when polling `/v1/status/{id}`.
Runtime errors are endpoint-specific - each model may have its own unique error types based on the inputs it expects.
Rate Limits
These are the default rate limits that apply to all endpoints unless stated otherwise in the specific endpoint documentation:
| Endpoint | Limit |
|----------|-------|
| /v1/run | 50 requests per 60 seconds |
| /v1/status | 50 requests per 10 seconds |
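One way to implement the client-side request throttling recommended above (a sketch, not an official SDK feature; a sliding-window limiter matching the default `/v1/run` limit):

```python
import collections
import time

class RateLimiter:
    """Client-side throttle: allow at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.calls = collections.deque()  # timestamps of recent calls

    def wait(self) -> None:
        """Block until another call is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have fallen outside the sliding window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Matches the default /v1/run limit of 50 requests per 60 seconds.
run_limiter = RateLimiter(max_calls=50, period=60.0)
```

Call `run_limiter.wait()` immediately before each request to `/v1/run` to stay within the limit.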
Concurrency Limits
The API has a default concurrency limit of 6 requests. This means you can have up to 6 concurrent requests being processed at any given time.
Our API rate limits are in place to ensure fair usage and prevent misuse of our services. However, we understand that legitimate applications may require higher limits as they grow. If your app's usage nears the specified rate limits, and this usage is justified by your application's needs, we will gladly increase your rate limit. Please reach out to us at support@fashn.ai to discuss your specific requirements.
Webhooks
Instead of polling for status, you can configure webhooks to receive notifications when predictions complete. See the Webhooks Guide for setup instructions.