
Error Handling

Understanding how the API reports errors helps you respond quickly and keep your integration resilient. There are two categories:

  • API-level errors: The request was rejected before a prediction ID was issued.
  • Runtime errors: The request was accepted, but the model failed during execution while you were polling or waiting on a webhook.

API-Level Errors

These errors return an HTTP status code and do not include an id or status because the request never entered processing. They are typically caused by auth, validation, or quota issues.

// HTTP 401 UnauthorizedAccess
{
  "error": "UnauthorizedAccess",
  "message": "Unauthorized: Invalid token"
}

| Code | Error | Cause | How to fix |
| --- | --- | --- | --- |
| 400 | BadRequest | Invalid request format | Check JSON structure and required parameters. |
| 401 | UnauthorizedAccess | Invalid or missing API key | Verify the Authorization: Bearer YOUR_API_KEY header. |
| 404 | NotFound | Resource not found | Confirm the endpoint URL and prediction ID. |
| 429 | RateLimitExceeded | Too many requests in the window | Add client-side throttling or backoff. |
| 429 | ConcurrencyLimitExceeded | Too many concurrent predictions | Wait for in-flight jobs to finish before sending new ones. |
| 429 | OutOfCredits | No API credits remaining | Refill credits before retrying. |
| 500 | InternalServerError | Server-side error | Retry with backoff; contact support if it persists. |
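
As a rough sketch, a client can map these codes to a next step. The error names and status codes below come from the table above; the helper function and its return values are illustrative, not part of the API.

```python
# Illustrative helper: map an API-level error response to a next step.
# Error names and status codes are taken from the table above.
RETRYABLE = {"RateLimitExceeded", "ConcurrencyLimitExceeded", "InternalServerError"}

def next_step(status_code: int, body: dict) -> str:
    name = body.get("error", "")
    if name == "OutOfCredits":
        return "refill credits"          # retrying without credits will keep failing
    if name in RETRYABLE or status_code >= 500:
        return "retry with backoff"
    return "fix the request"             # 400 / 401 / 404: correct the payload, key, or URL first
```
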
Retries and idempotency

If you retry after an API-level error, send the same payload. Because no prediction ID was issued, duplicate processing is not a risk.
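
A minimal retry sketch using the Python requests library. The endpoint URL and payload shape are placeholders for whichever endpoint you call; only the error codes and the "resend the same payload" behavior come from this page.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.fashn.ai/v1/run"    # placeholder: substitute the endpoint you are calling

def submit_with_retry(payload: dict, attempts: int = 5) -> dict:
    delay = 1.0
    for attempt in range(attempts):
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,                   # identical payload on every attempt
            timeout=30,
        )
        if resp.ok:
            return resp.json()              # contains the prediction id
        if resp.status_code not in (429, 500) or attempt == attempts - 1:
            resp.raise_for_status()         # non-retryable error, or out of attempts: surface it
        time.sleep(delay)                   # back off before retrying
        delay *= 2
    raise RuntimeError("retry attempts exhausted")
```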

Runtime Errors

Runtime errors are reported with status: "failed", either when you poll /v1/status/{id} (the HTTP status is still 200) or in the webhook payload. The response includes the prediction ID and an error object.

{
  "id": "123a87r9-4129-4bb3-be18-9c9fb5bd7fc1-u1",
  "status": "failed",
  "error": {
    "name": "ImageLoadError",
    "message": "Error loading model image: Invalid URL format"
  }
}
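
For context, here is a minimal polling sketch in Python that surfaces runtime failures. The base URL and the "completed" terminal status are assumptions; only the /v1/status/{id} path and the "failed" status are documented above.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"

def wait_for_result(prediction_id: str, poll_interval: float = 3.0) -> dict:
    # Base URL is assumed; the /v1/status/{id} path matches the description above.
    url = f"https://api.fashn.ai/v1/status/{prediction_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    while True:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()             # API-level errors (401, 404, ...) surface here
        body = resp.json()
        if body["status"] == "failed":
            err = body["error"]
            raise RuntimeError(f"Prediction {body['id']} failed: {err['name']}: {err['message']}")
        if body["status"] == "completed":   # assumed terminal success status
            return body
        time.sleep(poll_interval)           # still queued or processing: keep polling
```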

Common runtime errors

Most runtime issues fall into a handful of shared cases across endpoints. Start with these before checking model-specific validation rules.

| Name | Cause | How to fix |
| --- | --- | --- |
| ImageLoadError | The API could not fetch or decode an input image or asset. | Use publicly accessible URLs with a correct image Content-Type, or prefix base64 data with data:image/<format>;base64, and confirm the data is valid. |
| ContentModerationError | An input image or text prompt violated content policies. | Replace or adjust the input to comply with content policies. If the endpoint supports moderation controls (for example, moderation_level), choose the lowest level that still meets your safety requirements. |
| InputValidationError | Parameters were invalid or inconsistent. | Follow the error message to correct field values or required combinations before retrying. |
| ThirdPartyError | An upstream provider refused or failed the request. | Retry with backoff. Some upstream services (e.g. captioning, translation) may silently block content; if retries continue to fail, treat it like a content policy block and adjust inputs accordingly. |
| UnavailableError | A service needed to fulfill the request was temporarily overloaded or unavailable. | Retry with backoff. |
| PipelineError | An unexpected failure occurred inside the pipeline. | Retry with backoff. |
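
As a rough illustration, these shared names can drive a simple dispatch. The grouping below just mirrors the "How to fix" column; the function itself is hypothetical.

```python
# Hypothetical dispatch on the error names listed in the table above.
TRANSIENT = {"ThirdPartyError", "UnavailableError", "PipelineError"}
INPUT_RELATED = {"ImageLoadError", "ContentModerationError", "InputValidationError"}

def handle_runtime_error(error: dict) -> str:
    name = error.get("name", "")
    if name in TRANSIENT:
        return "retry with backoff"      # transient upstream or pipeline issue
    if name in INPUT_RELATED:
        return "fix the inputs"          # adjust image URLs, prompts, or parameters first
    return "check endpoint docs"         # endpoint-specific error names (see below)
```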

Endpoint-specific runtime errors

Some models add extra validation tied to their workflow (for example, pose detection on virtual try-on or LoRA loading for variation). Refer to the Runtime Errors section on each endpoint page for those model-specific names and fixes.

Retries and support

  • Failed predictions do not consume credits.
  • If you still see runtime failures after aligning inputs to the schema and retrying with backoff, contact support@fashn.ai with the prediction ID so we can investigate.
