
API Fundamentals

This page explains the fundamental concepts and patterns that are consistent across all FASHN API endpoints. Understanding these concepts will help you integrate any current or future endpoint.

Authentication

All API requests require authentication using a Bearer token in the Authorization header:

Authorization: Bearer YOUR_API_KEY

You can obtain your API key from the Developer API Dashboard.

Request Pattern

All FASHN model endpoints follow a consistent request pattern to the same /v1/run endpoint:

POST https://api.fashn.ai/v1/run

Request Examples

curl -X POST https://api.fashn.ai/v1/run \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -d '{
           "model_name": "endpoint-specific-model-name",
           "inputs": {
             // endpoint-specific parameters
           }
         }'
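
The same request in Python: a minimal sketch assuming the third-party requests library; the model name and inputs keys are placeholders, exactly as in the curl example above.

import requests

API_KEY = "YOUR_API_KEY"

response = requests.post(
    "https://api.fashn.ai/v1/run",
    # json= sets the Content-Type: application/json header automatically
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model_name": "endpoint-specific-model-name",  # replace with the model you want to run
        "inputs": {
            # endpoint-specific parameters
        },
    },
)
response.raise_for_status()
prediction_id = response.json()["id"]  # use this ID to poll /v1/status/{id}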

Universal Request Properties

model_name (string, required)

Specifies which model/endpoint to use for processing. Each endpoint has its own unique model name.

inputs (object, required)

Contains all the input parameters for the selected model. The structure of this object varies by endpoint.

Response Pattern

Initial Response

When you submit a request to /v1/run, you'll receive an immediate response with a prediction ID:

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "error": null
}

Status Polling

Use the prediction ID to poll for status and results:

GET https://api.fashn.ai/v1/status/{id}

Response States

Poll for the status of a specific prediction using its ID.

status: 'starting' | 'in_queue' | 'processing' | 'completed' | 'failed'

The current state of your prediction:

  • starting - Prediction is being initialized
  • in_queue - Prediction is waiting to be processed
  • processing - Model is actively generating your result
  • completed - Generation finished successfully, output available
  • failed - Generation failed, check error details

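A minimal polling sketch that handles the states listed above (it assumes the requests library and a prediction_id returned by /v1/run; the 3-second interval is an arbitrary choice, not an API requirement):

import time
import requests

API_KEY = "YOUR_API_KEY"
prediction_id = "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1"  # returned by /v1/run

while True:
    response = requests.get(
        f"https://api.fashn.ai/v1/status/{prediction_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    response.raise_for_status()
    prediction = response.json()

    if prediction["status"] == "completed":
        print("Output URLs:", prediction["output"])
        break
    if prediction["status"] == "failed":
        print("Prediction failed:", prediction["error"])
        break

    # 'starting', 'in_queue', or 'processing': wait and poll again
    time.sleep(3)
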
Example Status Responses

In Progress:

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "status": "processing",
  "error": null
}

Completed:

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "status": "completed",
  "output": [
    "https://cdn.fashn.ai/123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1/output_0.png"
  ],
  "error": null
}

Output Availability Time Limits
  • CDN URLs (default): Available for 72 hours after completion
  • Base64 outputs (when return_base64: true): Available for 60 minutes after completion

Learn more in the Data Retention & Privacy section.
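
Because CDN URLs expire after 72 hours, you may want to persist results on your own storage as soon as a prediction completes. A minimal sketch, using the output URL from the completed response above:

import requests

output_url = "https://cdn.fashn.ai/123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1/output_0.png"

# Download the result before the 72-hour CDN window closes
image = requests.get(output_url)
image.raise_for_status()

with open("output_0.png", "wb") as f:
    f.write(image.content)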

Failed:

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "status": "failed",
  "error": {
    "name": "ImageLoadError",
    "message": "Error loading model image: Invalid URL format"
  }
}

Error Handling

There are two distinct types of errors you might encounter when interacting with the API:

API-Level Errors

These errors occur before your request is accepted for processing—meaning no prediction ID is returned. They are communicated using standard HTTP status codes, along with a descriptive error payload:

// HTTP 401 Unauthorized
{
  "error": "UnauthorizedAccess",
  "message": "Unauthorized: Invalid token"
}

// HTTP 400 Bad Request
{
  "error": "BadRequest", 
  "message": "Invalid request body"
}

Note that there are no id or status fields in these responses, as the request never reached the processing stage.

Below is a complete list of possible API-level errors, including their HTTP status codes, causes, and recommended solutions:

Code  Error                      Cause                          Solution
400   BadRequest                 Invalid request format         Check request structure and required parameters
401   UnauthorizedAccess         Invalid/missing API key        Verify your API key in the Authorization header
404   NotFound                   Resource not found             Check endpoint URL and prediction ID
429   RateLimitExceeded          Too many requests              Implement request throttling
429   ConcurrencyLimitExceeded   Too many concurrent requests   Wait for current requests to complete
429   OutOfCredits               No API credits remaining       Purchase more credits
500   InternalServerError        Server error                   Retry after delay, contact support if persistent

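One client-side pattern for these errors is to retry transient failures (429 and 500) with a backoff and surface everything else immediately. The sketch below is illustrative, not an official client; the backoff schedule is arbitrary, and a 429 OutOfCredits error will not be resolved by retrying.

import time
import requests

API_KEY = "YOUR_API_KEY"
RETRYABLE_STATUS_CODES = {429, 500}

def submit_run(payload, max_attempts=5):
    """POST to /v1/run, retrying transient errors with exponential backoff."""
    for attempt in range(max_attempts):
        response = requests.post(
            "https://api.fashn.ai/v1/run",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
        )
        if response.ok:
            return response.json()
        if response.status_code in RETRYABLE_STATUS_CODES and attempt < max_attempts - 1:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        # Non-retryable error, or retries exhausted: surface the error payload
        error = response.json()
        raise RuntimeError(f"{error['error']}: {error['message']}")
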
If you’re just getting started, our Troubleshooting Guide covers common early-stage issues and how to resolve them effectively.

Runtime Errors

These errors occur during model execution, after you've successfully submitted a request and received a prediction ID. You'll get a 200 HTTP status with status: "failed" when polling /v1/status/{id}:

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "status": "failed",
  "error": {
    "name": "ImageLoadError",
    "message": "Error loading model image: Invalid URL format"
  }
}

Runtime errors are endpoint-specific - each model may have its own unique error types based on the inputs it expects.

Rate Limits

These are the default rate limits that apply to all endpoints unless stated otherwise in the specific endpoint documentation:

Endpoint      Limit
/v1/run       50 requests per 60 seconds
/v1/status    50 requests per 10 seconds
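
To stay under the /v1/run limit, a simple client-side option is to space out submissions. The sketch below (not an official mechanism) keeps calls below 50 per 60 seconds by waiting at least 1.2 seconds between them:

import time
import requests

API_KEY = "YOUR_API_KEY"
MIN_INTERVAL = 60 / 50  # at most 50 /v1/run calls per 60 seconds

_last_call = 0.0

def throttled_run(payload):
    """Submit to /v1/run, spacing calls to stay within the default rate limit."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    response = requests.post(
        "https://api.fashn.ai/v1/run",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
    )
    response.raise_for_status()
    return response.json()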

Concurrency Limits

The API has a default concurrency limit of 6 requests, meaning you can have at most 6 requests being processed at any given time.
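
A common way to respect this limit is to cap parallelism on the client side, for example with a thread pool of at most 6 workers. The sketch below assumes the requests library; run_and_wait is an illustrative helper, not part of the API:

import time
import requests
from concurrent.futures import ThreadPoolExecutor

API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def run_and_wait(payload):
    """Submit one job to /v1/run and poll /v1/status until it finishes."""
    prediction_id = requests.post(
        "https://api.fashn.ai/v1/run", headers=HEADERS, json=payload
    ).json()["id"]
    while True:
        prediction = requests.get(
            f"https://api.fashn.ai/v1/status/{prediction_id}", headers=HEADERS
        ).json()
        if prediction["status"] in ("completed", "failed"):
            return prediction
        time.sleep(3)

# Placeholder jobs; in practice each payload would carry its own inputs
payloads = [{"model_name": "endpoint-specific-model-name", "inputs": {}} for _ in range(12)]

# At most 6 predictions in flight at any time, matching the default concurrency limit
with ThreadPoolExecutor(max_workers=6) as pool:
    results = list(pool.map(run_and_wait, payloads))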

Rate Limit Adjustments

Our API rate limits exist to ensure fair usage and prevent misuse of our services. However, we understand that legitimate applications may require higher limits as they grow. If your app's usage is approaching the specified rate limits and the increase is justified by your application's needs, we will gladly raise your rate limit. Please reach out to us at support@fashn.ai to discuss your specific requirements.

Webhooks

Instead of polling for status, you can configure webhooks to receive notifications when predictions complete. See the Webhooks Guide for setup instructions.
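
A minimal receiver sketch is shown below. The payload shape is an assumption based on the /v1/status response format, and the /fashn-webhook path and Flask framework are illustrative choices; see the Webhooks Guide for the actual delivery format and any signature verification it specifies.

from flask import Flask, request

app = Flask(__name__)

@app.route("/fashn-webhook", methods=["POST"])
def fashn_webhook():
    # Assumption: the webhook delivers JSON similar to the /v1/status response
    prediction = request.get_json()
    if prediction.get("status") == "completed":
        print("Outputs ready:", prediction.get("output"))
    elif prediction.get("status") == "failed":
        print("Prediction failed:", prediction.get("error"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)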
