Error Handling
Understanding how the API reports errors helps you respond quickly and keep your integration resilient. There are two categories:
- API-level errors: The request was rejected before a prediction ID was issued.
- Runtime errors: The request was accepted, but the model failed during execution while you were polling or waiting on a webhook.
API-Level Errors
These errors return an HTTP status code and do not include an id or status because the request never entered processing. They are typically caused by auth, validation, or quota issues.
| Code | Error | Cause | How to fix |
|---|---|---|---|
| 400 | BadRequest | Invalid request format | Check JSON structure and required parameters. |
| 401 | UnauthorizedAccess | Invalid or missing API key | Verify the Authorization: Bearer YOUR_API_KEY header. |
| 404 | NotFound | Resource not found | Confirm the endpoint URL and prediction ID. |
| 429 | RateLimitExceeded | Too many requests in the window | Add client-side throttling or backoff. |
| 429 | ConcurrencyLimitExceeded | Too many concurrent predictions | Wait for in-flight jobs to finish before sending new ones. |
| 429 | OutOfCredits | No API credits remaining | Refill credits before retrying. |
| 500 | InternalServerError | Server-side error | Retry with backoff; contact support if it persists. |
If you retry after an API-level error, send the same payload. Because no prediction ID was issued, duplicate processing is not a risk.
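Because the retry decision depends on which error you received, it helps to separate transient errors (safe to resend) from ones that need a fix on your side. The sketch below is a minimal example, assuming the error names from the table above are available to your client; the helper names and retry limits are illustrative, not part of the API.

```python
import random

# Hypothetical classification based on the table above: only transient
# errors are safe to retry automatically. Auth, validation, and credit
# errors (400, 401, 404, OutOfCredits) need a fix on your side first.
RETRYABLE_ERRORS = {"RateLimitExceeded", "ConcurrencyLimitExceeded", "InternalServerError"}

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: ~1s, ~2s, ~4s, ... capped at 30s."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def should_retry(error_name, attempt, max_attempts=5):
    """Decide whether to resend the same payload after an API-level error."""
    return attempt < max_attempts and error_name in RETRYABLE_ERRORS
```

Jitter spreads retries out so that many clients hitting a rate limit at once do not all retry in the same instant.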
Runtime Errors
Runtime errors surface with status: "failed", either when you poll /v1/status/{id} (the HTTP response is still 200) or in a webhook payload. In both cases the response includes the prediction ID and an error object.
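A polling loop for this flow can be sketched as below. This is a minimal example under stated assumptions: fetch_status stands in for whatever HTTP client you use to GET /v1/status/{id}, and the response body is assumed to be a dict with at least a "status" field (plus "error" on failure).

```python
import time

# Statuses that end polling. The exact set of non-terminal statuses
# (e.g. "starting", "processing") is an assumption here.
TERMINAL_STATUSES = {"completed", "failed"}

def poll_prediction(fetch_status, prediction_id, interval=2.0, timeout=300.0):
    """Poll until the prediction reaches a terminal status or the timeout expires.

    fetch_status is a callable that GETs /v1/status/{id} and returns the
    decoded JSON body as a dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = fetch_status(prediction_id)
        if body["status"] in TERMINAL_STATUSES:
            return body
        time.sleep(interval)
    raise TimeoutError(f"prediction {prediction_id} did not finish in {timeout}s")
```

If you use webhooks instead, the same terminal check applies to the payload you receive; there is no loop to manage.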
Common runtime errors
Most runtime issues fall into a handful of shared cases across endpoints. Start with these before checking model-specific validation rules.
| Name | Cause | How to fix |
|---|---|---|
| ImageLoadError | The API could not fetch or decode an input image or asset. | Use publicly accessible URLs with correct image Content-Type, or prefix base64 with data:image/<format>;base64, and confirm the data is valid. |
| ContentModerationError | An input image or text prompt violated content policies. | Replace or adjust the input to comply with content policies. If the endpoint supports moderation controls (for example, moderation_level), choose the lowest level that still meets your safety requirements. |
| InputValidationError | Parameters were invalid or inconsistent. | Follow the error message to correct field values or required combinations before retrying. |
| ThirdPartyError | An upstream provider refused or failed the request. | Retry with backoff. Some upstream services (e.g. captioning, translation) may silently block content; if retries continue to fail, treat it like a content policy block and adjust inputs accordingly. |
| UnavailableError | A service needed to fulfill the request was temporarily overloaded or unavailable. | Retry with backoff. |
| PipelineError | An unexpected failure occurred inside the pipeline. | Retry with backoff. |
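The fixes in the table split into two actions: retry with backoff, or correct the input first. A client can encode that split as a small dispatch table; this is a sketch, assuming the error object carries its name in a "name" field (the exact field name is an assumption, not documented above).

```python
# Mapping derived from the table above. "retry" means retry with backoff;
# "fix_input" means correct the request before resending.
RUNTIME_ERROR_ACTIONS = {
    "ImageLoadError": "fix_input",
    "ContentModerationError": "fix_input",
    "InputValidationError": "fix_input",
    "ThirdPartyError": "retry",
    "UnavailableError": "retry",
    "PipelineError": "retry",
}

def action_for(error):
    """error is the error object from a failed prediction, assumed to look
    like {"name": "ImageLoadError", ...}. Unknown names (e.g. the
    endpoint-specific errors below) default to fix_input so the client
    does not retry blindly."""
    return RUNTIME_ERROR_ACTIONS.get(error.get("name"), "fix_input")
```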
Endpoint-specific runtime errors
Some models add extra validation tied to their workflow (for example, pose detection on virtual try-on or LoRA loading for variation). Refer to the Runtime Errors section on each endpoint page for those model-specific names and fixes.
Retries and support
- Failed predictions do not consume credits.
- If you still see runtime failures after aligning inputs to the schema and retrying with backoff, contact support@fashn.ai with the prediction ID so we can investigate.