API Documentation
The LongMemory API allows you to add infinite semantic memory to your AI agents. We provide a unified layer that handles vector embeddings, storage, and retrieval automatically.
Base URL
https://api.longmemory.io/v4

Version
v1.0

Authentication
Authentication is handled via Bearer tokens. You can obtain your API key from the dashboard. Always keep your keys secure and never expose them in client-side code.
Python SDK
The client automatically reads its credentials from environment variables, so you do not need to pass the API key in code.
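The original list of environment variables was lost from this page, so the variable name below is a placeholder; check the SDK README or your dashboard for the exact names.

```shell
# Placeholder name: confirm the exact variable name in the SDK documentation.
export LONGMEMORY_API_KEY="your-api-key"
```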
POST /ingest
Stores a new memory chunk. The system automatically processes the text, generates vector embeddings with our state-of-the-art models, and indexes them for low-latency retrieval.
Parameters
| Property | Type | Required | Description |
|---|---|---|---|
| `user_id` | string | Yes | Unique identifier for the user or agent owner. |
| `text` | string | Yes | The conversation content or fact to be stored. |
| `speaker` | string | No | Name of the speaker (e.g., 'Bob', 'Alice'). |
| `created_at` | string (ISO 8601) | No | Timestamp for historical backfilling (e.g., '2023-10-01T12:00:00Z'). |
Python Client
cURL Request
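The original cURL example was lost; this one is reconstructed from the parameter table above, with illustrative values.

```shell
curl -X POST "https://api.longmemory.io/v4/ingest" \
  -H "Authorization: Bearer $LONGMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "user_id": "user-123",
        "text": "Alice prefers window seats.",
        "speaker": "Alice"
      }'
```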
Response
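The original response body was lost from this page; the shape below is illustrative only, and none of the field names are confirmed.

```json
{
  "status": "ok",
  "memory_id": "mem_abc123"
}
```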
POST /query
Retrieves the stored memories most relevant to the supplied query text.
Parameters
| Property | Type | Required | Description |
|---|---|---|---|
| `user_id` | string | Yes | Unique identifier for the user. |
| `query` | string | Yes | The question or statement to retrieve context for. |
Python Client
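As with `/ingest`, the original SDK snippet is missing, so this sketch targets the documented endpoint directly with the standard library; the `LONGMEMORY_API_KEY` variable name is an assumption.

```python
import json
import os
import urllib.request

BASE_URL = "https://api.longmemory.io/v4"

def query(user_id: str, question: str) -> urllib.request.Request:
    """Build a POST /query request from the documented parameters."""
    payload = {"user_id": user_id, "query": question}
    return urllib.request.Request(
        f"{BASE_URL}/query",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('LONGMEMORY_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = query("user-123", "What seat does Alice prefer?")
# Send with urllib.request.urlopen(req) once your API key is set.
```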
cURL Request
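Reconstructed from the parameter table above, with illustrative values.

```shell
curl -X POST "https://api.longmemory.io/v4/query" \
  -H "Authorization: Bearer $LONGMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"user_id": "user-123", "query": "What seat does Alice prefer?"}'
```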
Response
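The original response body was lost; the shape below is a guess for illustration only, and none of the field names or score semantics are confirmed.

```json
{
  "memories": [
    {
      "text": "Alice prefers window seats.",
      "speaker": "Alice",
      "score": 0.92
    }
  ]
}
```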
POST /ask (Coming Soon)
Let LongMemory handle the LLM generation for you. We securely proxy the request, retrieve the relevant context, and prompt the LLM of your choice in one step.
Parameters
| Property | Type | Required | Description |
|---|---|---|---|
| `user_id` | string | Yes | Unique identifier for the user. |
| `query` | string | Yes | The question to ask the agent. |
| `llm_model` | string | Yes | The model ID (e.g., 'gpt-4o', 'claude-3'). |
| `llm_api_key` | string | Yes | Your LLM provider's API key (we do not store this). |
Python Client
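Since this endpoint is not yet live and the original snippet was lost, the sketch below only builds the request from the documented parameters; the `LONGMEMORY_API_KEY` variable name is an assumption.

```python
import json
import os
import urllib.request

BASE_URL = "https://api.longmemory.io/v4"

def ask(user_id: str, question: str, llm_model: str,
        llm_api_key: str) -> urllib.request.Request:
    """Build a POST /ask request from the documented parameters."""
    payload = {
        "user_id": user_id,
        "query": question,
        "llm_model": llm_model,      # e.g. 'gpt-4o' or 'claude-3'
        "llm_api_key": llm_api_key,  # proxied to your provider, not stored
    }
    return urllib.request.Request(
        f"{BASE_URL}/ask",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('LONGMEMORY_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = ask("user-123", "What seat does Alice prefer?", "gpt-4o", "sk-provider-key")
```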
cURL Request
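Reconstructed from the parameter table above, with illustrative values; the endpoint is not yet live.

```shell
curl -X POST "https://api.longmemory.io/v4/ask" \
  -H "Authorization: Bearer $LONGMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "user_id": "user-123",
        "query": "What seat does Alice prefer?",
        "llm_model": "gpt-4o",
        "llm_api_key": "sk-provider-key"
      }'
```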
Response
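The original response body was lost; the shape below is illustrative only, and the field name is not confirmed.

```json
{
  "answer": "Alice prefers window seats."
}
```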