Embeddings
POST https://api.synthetic.new/openai/v1/embeddings
Create a vector representation of a given input for similarity, retrieval, clustering, and other applications.
> Tip: /embeddings requests do not count against your subscription limits!
Supported Embeddings Models
| Model | Context length | Status |
|---|---|---|
| hf:nomic-ai/nomic-embed-text-v1.5 | 8k tokens | ✓ Included |
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (must be prefixed with hf:) |
| input | string/array | Yes | Input text to embed, encoded as a string or an array of strings |
| dimensions | number | No | Number of dimensions the resulting output embeddings should have |
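The dimensions parameter controls the length of the returned vectors. As a minimal sketch (the target size of 256 is illustrative, and the set of sizes a given model supports may be limited):

```python
import openai

client = openai.OpenAI(
    api_key="SYNTHETIC_API_KEY",
    base_url="https://api.synthetic.new/openai/v1",
)

# Ask for shorter vectors via the dimensions parameter.
response = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input="The quick brown fox jumps over the lazy dog.",
    dimensions=256,  # illustrative target size
)
print(len(response.data[0].embedding))  # 256 if the model honors the request
```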
Example Request
```python
import openai

# Point the OpenAI client at the Synthetic endpoint.
client = openai.OpenAI(
    api_key="SYNTHETIC_API_KEY",
    base_url="https://api.synthetic.new/openai/v1",
)

response = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input="The quick brown fox jumps over the lazy dog.",
)

embedding = response.data[0].embedding
print(f"Embedding dimensions: {len(embedding)}")
```
Example Response
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        0.0023064255,
        -0.009327292,
        -0.0028842222,
        "... (768 more values)"
      ]
    }
  ],
  "model": "hf:nomic-ai/nomic-embed-text-v1.5",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
```
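The returned vectors can be compared directly for the similarity use cases mentioned above. A minimal sketch, reusing the client from the Example Request (the cosine_similarity helper and the sentence pair are illustrative, not part of the API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

pair = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input=[
        "The quick brown fox jumps over the lazy dog.",
        "A fast auburn fox leaps over a sleepy hound.",
    ],
)
score = cosine_similarity(pair.data[0].embedding, pair.data[1].embedding)
print(f"Cosine similarity: {score:.4f}")
```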
Multiple Inputs
You can embed multiple texts in a single request:
```python
response = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input=[
        "The quick brown fox jumps over the lazy dog.",
        "Pack my box with five dozen liquor jugs.",
        "How vexingly quick daft zebras jump!",
    ],
)

embeddings = [data.embedding for data in response.data]
print(f"Created {len(embeddings)} embeddings")
```
Multiple Inputs Response
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023064255, -0.009327292, "..."]
    },
    {
      "object": "embedding",
      "index": 1,
      "embedding": [0.0019234521, -0.007891234, "..."]
    },
    {
      "object": "embedding",
      "index": 2,
      "embedding": [0.0021456789, -0.008765432, "..."]
    }
  ],
  "model": "hf:nomic-ai/nomic-embed-text-v1.5",
  "usage": {
    "prompt_tokens": 24,
    "total_tokens": 24
  }
}
```
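The index field lets you match each embedding back to its position in the input array, which is what makes batch requests convenient for retrieval. A minimal sketch that ranks the three sentences above against a query, reusing the client and the cosine_similarity helper from the earlier examples (the query string is illustrative):

```python
documents = [
    "The quick brown fox jumps over the lazy dog.",
    "Pack my box with five dozen liquor jugs.",
    "How vexingly quick daft zebras jump!",
]

# Embed the documents in one batch request and the query separately.
doc_response = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input=documents,
)
doc_vectors = [item.embedding for item in doc_response.data]

query_vector = client.embeddings.create(
    model="hf:nomic-ai/nomic-embed-text-v1.5",
    input="Which sentence mentions a fox?",
).data[0].embedding

# Rank documents by cosine similarity to the query.
scored = sorted(
    ((cosine_similarity(query_vector, vec), text) for text, vec in zip(documents, doc_vectors)),
    reverse=True,
)
for score, text in scored:
    print(f"{score:.4f}  {text}")
```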