Introduction
Sylph provides an OpenAI-compatible API for interacting with various AI models across different providers. Our API follows RESTful practices and returns JSON responses.
GET
/health
Check API health status and provider availability.
Request
```shell
curl /health
```
Response
```json
{
  "status": "ok",
  "providers": {
    "available": ["HackClubProvider", "GoogleProvider", "..."],
    "disabled": [],
    "errors": {}
  }
}
```
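Before routing traffic to the API, a client can check both the overall status and whether any provider is actually available. A minimal sketch in Python, using a hardcoded copy of the response above in place of a live call (`is_healthy` is a hypothetical helper, not part of the API):

```python
import json

# Sample /health response, as documented above (provider list trimmed)
health_json = '''
{ "status": "ok",
  "providers": { "available": ["HackClubProvider", "GoogleProvider"],
                 "disabled": [], "errors": {} } }
'''

def is_healthy(payload: str) -> bool:
    """True when the API reports OK and at least one provider is available."""
    body = json.loads(payload)
    return body.get("status") == "ok" and bool(body["providers"]["available"])

print(is_healthy(health_json))  # True
```

Checking `providers.available` as well as `status` guards against the case where the API itself is up but every upstream provider is disabled or erroring.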
Authentication
Authentication is optional but recommended. You can provide any string as the API key:
HTTP Header
Authorization: Bearer any-string-works
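Since any string is accepted, a client only needs to send the standard Bearer header shown above. A minimal sketch in Python (`auth_headers` is a hypothetical helper for building the header dict passed to an HTTP client):

```python
def auth_headers(api_key: str = "any-string-works") -> dict:
    """Build the Authorization header expected by the API."""
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("my-key"))  # {'Authorization': 'Bearer my-key'}
```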
Models
Browse all available models at /models
GET
/v1/models
List all available models across enabled providers.
Request
```shell
curl /v1/models \
  -H "Authorization: Bearer any-string"
```
Response
```json
{
  "object": "list",
  "data": [
    {
      "id": "hackclub/llama-3.3-70b-versatile",
      "object": "model",
      "created": 1743648062945,
      "owned_by": "https://hackclub.com",
      "permission": [],
      "root": "llama",
      "context_length": 128000,
      "capabilities": { "text": true, "images": false },
      "health": { "status": "operational", "latency": 312 }
    }
  ]
}
```
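A common use of this endpoint is filtering the list down to models that are actually usable. Assuming the response shape above, a Python sketch with the sample payload hardcoded (`operational_text_models` is a hypothetical helper):

```python
import json

# Sample /v1/models response, truncated to the fields this sketch reads
models_json = '''
{ "object": "list", "data": [
  { "id": "hackclub/llama-3.3-70b-versatile",
    "context_length": 128000,
    "capabilities": { "text": true, "images": false },
    "health": { "status": "operational", "latency": 312 } } ] }
'''

def operational_text_models(payload: str) -> list[str]:
    """IDs of models that support text and report operational health."""
    data = json.loads(payload)["data"]
    return [m["id"] for m in data
            if m["capabilities"]["text"]
            and m["health"]["status"] == "operational"]

print(operational_text_models(models_json))
# ['hackclub/llama-3.3-70b-versatile']
```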
Chat
POST
/v1/chat/completions
Create a chat completion with the specified model.
Request Parameters
| Parameter | Type | Description |
|---|---|---|
| model* | string | ID of the model to use |
| messages* | array | Array of messages for the conversation |
| stream | boolean | Whether to stream responses (default: false) |
| temperature | number | Controls randomness (0-2, default: 0.7) |
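The parameters above map directly onto the JSON request body. A minimal sketch of assembling that body in Python (`chat_payload` is a hypothetical helper; only the documented parameters are set):

```python
def chat_payload(model: str, user_message: str,
                 temperature: float = 0.7, stream: bool = False) -> dict:
    """Assemble a /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "stream": stream,
    }

payload = chat_payload("hackclub/llama-3.3-70b-versatile", "Hello!")
```

Serialized with `json.dumps`, this produces the same body as the curl example below.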
Request
```shell
curl /v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer any-string" \
  -d '{
    "model": "hackclub/llama-3.3-70b-versatile",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 0.7
  }'
```
Response
```json
{
  "id": "chat-12345",
  "object": "chat.completion",
  "created": 1743648062945,
  "model": "hackclub/llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 8,
    "total_tokens": 18
  }
}
```
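The assistant's reply lives at `choices[0].message.content`. Assuming the non-streaming response shape above, a Python sketch with the sample payload hardcoded (`first_reply` is a hypothetical helper):

```python
import json

# Sample chat completion response, truncated to the fields this sketch reads
completion_json = '''
{ "choices": [
    { "index": 0,
      "message": { "role": "assistant",
                   "content": "Hello! How can I help you today?" },
      "finish_reason": "stop" } ],
  "usage": { "prompt_tokens": 10, "completion_tokens": 8,
             "total_tokens": 18 } }
'''

def first_reply(payload: str) -> str:
    """Content of the first choice's assistant message."""
    return json.loads(payload)["choices"][0]["message"]["content"]

print(first_reply(completion_json))
# Hello! How can I help you today?
```

Note that with `"stream": true` the response is instead a sequence of chunks, so this single-document parse only applies to the default non-streaming mode.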
Errors
| Code | Description |
|---|---|
| 400 | Invalid request parameters |
| 401 | Invalid or missing API key |
| 404 | Model not found |
| 429 | Too many requests |
| 500 | Server or provider error |
Error Response Example
```json
{
  "error": {
    "message": "Model not found: invalid-model",
    "type": "invalid_request_error",
    "code": "model_not_found",
    "param": "model"
  }
}
```
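Combining the status codes in the table with the error body above, a client can decide between retrying and fixing its request. A minimal sketch in Python (`describe_error` and the retry policy are illustrative assumptions, not part of the API):

```python
import json

# Sample error response, as documented above
error_json = '''
{ "error": { "message": "Model not found: invalid-model",
             "type": "invalid_request_error",
             "code": "model_not_found", "param": "model" } }
'''

# Assumption: only rate limits and server/provider errors are worth retrying
RETRYABLE = {429, 500}

def describe_error(status: int, payload: str) -> str:
    """Render an error response into a one-line log message."""
    err = json.loads(payload)["error"]
    action = "retry later" if status in RETRYABLE else "fix the request"
    return f"{status} {err['code']}: {err['message']} ({action})"

print(describe_error(404, error_json))
# 404 model_not_found: Model not found: invalid-model (fix the request)
```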