    Synthetic API Documentation

    Getting Started

    This guide will walk you through making your first API call to Synthetic in just a few minutes.

    Tip

    Check out our Guides for step-by-step tutorials on setting up various frontends and development tools with Synthetic.

    Step 1: Get Your API Key

    Your API key is displayed on this page once you log in to your Synthetic account.
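
    A common pattern is to export the key as the SYNTHETIC_API_KEY environment variable (used in Step 3 below) and fail fast if it's missing. A minimal sketch; the helper name is ours, not part of any SDK:

```python
import os

def get_api_key(env_var: str = "SYNTHETIC_API_KEY") -> str:
    """Read the API key from the environment, raising a clear error if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before making API calls.")
    return key
```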

    Step 2: Install an OpenAI Client Library

    Since Synthetic is OpenAI-compatible, you can use any OpenAI client library:

    Python:

    pip install openai

    Node.js:

    npm install openai

    Other Languages

    Synthetic works with OpenAI-compatible client libraries in many other languages, including:

    • Go: go-openai
    • Rust: async-openai
    • Ruby: ruby-openai
    • PHP: openai-php
    • Java: openai-java

    Step 3: Configure Your Client

    Set up your client to point to Synthetic's API:

    Python:

    import os
    import openai

    client = openai.OpenAI(
        api_key=os.environ.get("SYNTHETIC_API_KEY"),
        base_url="https://api.glhf.chat/v1/",
    )

    Step 4: Make Your First API Call

    Chat Completions

    Python:

    completion = client.chat.completions.create(
        model="hf:zai-org/GLM-4.6",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain quantum computing in simple terms."}
        ]
    )

    print(completion.choices[0].message.content)
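
    For a multi-turn conversation, you append the assistant's reply and the next user message to the messages list before the next call. A small illustrative helper (our own, not part of the OpenAI client):

```python
def extend_conversation(messages, assistant_reply, next_user_message):
    """Return a new messages list with the assistant's reply and the user's
    follow-up appended, ready for the next chat.completions.create call."""
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_message},
    ]
```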

    Streaming Responses

    For long responses, streaming provides a better user experience:

    Python:

    completion = client.chat.completions.create(
        model="hf:zai-org/GLM-4.6",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a detailed explanation of machine learning."}
        ],
        stream=True
    )

    for chunk in completion:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end='', flush=True)
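
    If you also want the complete text after the stream finishes, you can collect the deltas as they arrive. A tiny sketch of that accumulation logic, skipping None deltas just as the loop above does:

```python
def join_deltas(deltas):
    """Join streamed content deltas into the full response text,
    ignoring None deltas (e.g. the final chunk)."""
    return "".join(d for d in deltas if d is not None)
```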

    Step 5: Choose Your Model

    Explore available models using the /models endpoint:

    Python:

    models = client.models.list()
    for model in models.data:
        print(model.id)

    Tip

    The /models endpoint will always show all always-on models, as well as any on-demand models you've recently used.
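
    Model IDs on Synthetic carry an hf: prefix (see the troubleshooting checklist below). If you want to group the returned IDs by their Hugging Face organization, something like this works; the helper name is ours:

```python
def models_by_org(model_ids):
    """Group "hf:org/name" model IDs by organization."""
    grouped = {}
    for model_id in model_ids:
        org = model_id.removeprefix("hf:").split("/", 1)[0]
        grouped.setdefault(org, []).append(model_id)
    return grouped
```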

    Error Handling

    Python:

    try:
        completion = client.chat.completions.create(
            model="hf:zai-org/GLM-4.6",
            messages=[{"role": "user", "content": "Hello!"}]
        )
    # Catch the more specific RateLimitError before the general APIError,
    # since RateLimitError is a subclass of APIError.
    except openai.RateLimitError as e:
        print(f"Rate limit exceeded: {e}")
    except openai.APIError as e:
        print(f"API error: {e}")
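
    When you do hit a rate limit, retrying with exponential backoff plus jitter is the usual remedy. A sketch of the delay schedule only (the parameter values are illustrative, not Synthetic's documented limits):

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Seconds to sleep before retry number `attempt` (0-based):
    exponential growth, capped at `cap`, with full jitter."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```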
    

    Next Steps

    • Explore the API Reference: Check out detailed documentation for /chat/completions and /messages.
    • Integration Guides: See how to integrate with tools like Octofriend, Claude Code, and more!
    • Monitor Usage: Track your API usage on your Billing page.

    Need Help?

    If you run into issues:

    1. Check that your API key is correctly set
    2. Verify the model name includes the hf: prefix
    3. Ask for help on our Discord Server!

    Happy coding! 🚀