
    Completions

    POST https://api.synthetic.new/openai/v1/completions

    Create a completion for a provided prompt and parameters.

    Request Body

    • model (string, required): Model name (must be prefixed with hf:). See supported Models.
    • prompt (string or array, required): Text prompt(s) to generate completions for. Can be a string, an array of strings, an array of numbers, or an array of arrays of numbers.
    • echo (boolean, optional): Echo back the prompt in addition to the completion.
    • frequency_penalty (number, optional): Penalty for token frequency (-2.0 to 2.0). Reduces repetition.
    • logit_bias (object, optional): Modify token likelihood. Maps token IDs to bias values (-100 to 100).
    • logprobs (number, optional): Include log probabilities on the most likely tokens.
    • max_completion_tokens (number, optional): Maximum tokens for the completion.
    • max_tokens (number, optional): Maximum number of tokens to generate.
    • min_p (number, optional): Minimum probability for nucleus sampling.
    • n (number, optional): Number of completions to generate (default: 1).
    • presence_penalty (number, optional): Penalty for token presence (-2.0 to 2.0). Encourages new topics.
    • reasoning_effort (string, optional): Control reasoning effort for thinking models: low, medium, or high.
    • stop (string or array, optional): Stop sequence(s).
    • stream (boolean, optional): Stream the response using server-sent events.
    • stream_options (object, optional): Options for streaming (when stream: true).
    • temperature (number, optional): Sampling randomness (0.0-2.0). Higher values are more random.
    • top_k (number, optional): Limit sampling to the top K tokens.
    • top_p (number, optional): Nucleus sampling threshold (0.0-1.0).
    • user (string, optional): Unique identifier representing your end user.
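
    Several of these parameters interact with each other. As one illustration, a request body combining the sampling and bias knobs might look like the following (a sketch only; the token ID 9906 is a made-up placeholder, not a real vocabulary lookup):

```python
# Illustrative request body for POST /openai/v1/completions.
# "9906" is a hypothetical token ID used only to show the logit_bias shape.
body = {
    "model": "hf:deepseek-ai/DeepSeek-V3-0324",
    "prompt": "The future of artificial intelligence is",
    "max_tokens": 50,
    "temperature": 0.7,            # 0.0-2.0; higher is more random
    "frequency_penalty": 0.5,      # -2.0 to 2.0; discourages repeated tokens
    "logit_bias": {"9906": -100},  # -100 effectively bans this token ID
    "stop": ["\n"],
}

# Sanity checks mirroring the documented ranges
assert body["model"].startswith("hf:")
assert -2.0 <= body["frequency_penalty"] <= 2.0
```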

    Example Request

    • Python
    import openai
    
    client = openai.OpenAI(
      api_key="SYNTHETIC_API_KEY",
      base_url="https://api.synthetic.new/openai/v1"
    )
    
    completion = client.completions.create(
      model="hf:deepseek-ai/DeepSeek-V3-0324",
      prompt="The future of artificial intelligence is",
      max_tokens=50,
      temperature=0.7,
      stop=["\n"]
    )
    
    print(completion.choices[0].text)

    Example Response

    • json
    {
      "id": "49fb0537-dafc-441c-ad42-b4c4aa2f5193",
      "object": "text_completion",
      "created": 1757645512,
      "model": "accounts/fireworks/models/deepseek-v3-0324",
      "choices": [
        {
          "index": 0,
          "text": " undeniably bright and holds the potential to revolutionize every aspect of our lives. As we stand on the cusp of technological advancements, AI is poised to become more sophisticated, integrated, and ethical. From transforming industries to enhancing daily conveniences, AI’",
          "logprobs": null,
          "finish_reason": "length"
        }
      ],
      "usage": {
        "prompt_tokens": 7,
        "total_tokens": 57,
        "completion_tokens": 50
      }
    }
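
    The finish_reason and usage fields are worth checking programmatically, since "length" indicates the completion was cut off at the token limit rather than ending naturally. A minimal sketch, using a response dict shaped like (and trimmed from) the example above:

```python
# Trimmed stand-in for a /completions response like the example above
response = {
    "object": "text_completion",
    "choices": [
        {"index": 0, "text": " undeniably bright", "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 7, "total_tokens": 57, "completion_tokens": 50},
}

truncated = [
    c["index"] for c in response["choices"]
    if c["finish_reason"] == "length"  # hit max_tokens; "stop" means a clean end
]
print("truncated choices:", truncated)  # -> truncated choices: [0]

usage = response["usage"]
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
```

    Detecting truncation this way lets a caller decide whether to retry with a larger max_tokens or continue from the partial text.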
    

    Multiple Prompts

    You can send multiple prompts in a single request:

    • Python
    completion = client.completions.create(
      model="hf:deepseek-ai/DeepSeek-V3-0324",
      prompt=[
        "The capital of France is",
        "The largest planet in our solar system is"
      ],
      max_tokens=10
    )
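
    With multiple prompts, the response carries one choice per prompt, matched back to its prompt by the choice's index field. A small sketch of pairing them up (the choices list below is a hand-built stand-in for completion.choices, and its text values are illustrative, not real API output):

```python
prompts = [
    "The capital of France is",
    "The largest planet in our solar system is",
]

# Stand-in for completion.choices from a live request with the prompts above
choices = [
    {"index": 0, "text": " Paris."},
    {"index": 1, "text": " Jupiter."},
]

# Match choices to prompts by index rather than relying on list order
by_index = {c["index"]: c["text"] for c in choices}
paired = [(p, by_index[i]) for i, p in enumerate(prompts)]
for prompt, text in paired:
    print(prompt + text)
```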

    Streaming

    When stream: true is set, the response will be a series of Server-Sent Events:

    • Python
    completion = client.completions.create(
      model="hf:deepseek-ai/DeepSeek-V3-0324",
      prompt="Once upon a time",
      max_tokens=50,
      stream=True
    )
    
    for chunk in completion:
      # Guard against chunks with no choices (e.g. a final usage-only chunk)
      if chunk.choices and chunk.choices[0].text:
        print(chunk.choices[0].text, end='', flush=True)

    Streaming Response

    data: {"id":"cmpl-abc123","object":"text_completion","created":1757644754,"model":"accounts/fireworks/models/deepseek-v3-0324","choices":[{"text":" in","index":0,"finish_reason":null}]}
    
    data: {"id":"cmpl-abc123","object":"text_completion","created":1757644754,"model":"accounts/fireworks/models/deepseek-v3-0324","choices":[{"text":" a","index":0,"finish_reason":null}]}
    
    data: {"id":"cmpl-abc123","object":"text_completion","created":1757644754,"model":"accounts/fireworks/models/deepseek-v3-0324","choices":[{"text":" far","index":0,"finish_reason":"length"}],"usage":{"prompt_tokens":5,"total_tokens":55,"completion_tokens":50}}
    
    data: [DONE]
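
    If you are not using an SDK, the stream is plain Server-Sent Events: each event is a data: line carrying one JSON chunk, and the stream is terminated by data: [DONE]. A minimal standalone parser over lines shaped like those above (no network involved; the sample payloads are abbreviated):

```python
import json

def parse_sse_lines(lines):
    """Yield text deltas from 'data: {...}' SSE lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separator lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            if choice.get("text"):
                yield choice["text"]

sample = [
    'data: {"choices":[{"text":" in","index":0,"finish_reason":null}]}',
    "",
    'data: {"choices":[{"text":" a","index":0,"finish_reason":null}]}',
    "",
    "data: [DONE]",
]
print("".join(parse_sse_lines(sample)))  # -> " in a"
```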