# Crystal AI Documentation
Crystal AI is a local-first AI agent framework for developers. Define agents as YAML files, version-control them, run them in the terminal, and inspect them in a local dashboard. Think of it as Prisma for AI agents.
## Key Principles

- **Config as Code**: Agents, tools, and workflows are YAML files. Version control them with Git.
- **Local First**: No cloud service. SQLite storage. Keys stay on your machine.
- **Provider Agnostic**: Five built-in providers. Switch with a one-line YAML change.
## Installation

Install the SDK package. It includes `@crystralai/core` (the runtime engine) automatically.

```bash
npm install @crystralai/sdk

# or with other package managers
pnpm add @crystralai/sdk
yarn add @crystralai/sdk
```
### Prerequisites

| Requirement | Details |
|---|---|
| Node.js | 18 or later (check with `node --version`) |
| Package Manager | npm, pnpm, or yarn |
| API Key | At least one LLM provider key (OpenAI, Anthropic, etc.) |

Building for the browser or an edge runtime? Use `@crystralai/client` instead: a zero-dependency package that works anywhere `fetch` is available.
## Quick Start

### 1. Create project config

Every Crystal AI project needs a `crystral.config.yaml` at its root:

```yaml
version: 1
project: my-project
```
### 2. Define your agent

Create `agents/assistant.yaml`:

```yaml
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant. Be concise and accurate.
temperature: 0.7
max_tokens: 4096
```
### 3. Set your API key

Put the key for your chosen provider in a `.env` file at the project root, and add `.env` to `.gitignore`:

```bash
OPENAI_API_KEY=sk-your-key-here
```
### 4. Run the agent

Create `index.ts`:

```typescript
import { Crystral } from '@crystralai/sdk';

const client = new Crystral();
const result = await client.run('assistant', 'What is the capital of France?');

console.log(result.content);     // "Paris"
console.log(result.usage.total); // 42
console.log(result.durationMs);  // 823
```

Run it:

```bash
npx tsx index.ts
```
## Project Structure

```
my-project/
├── crystral.config.yaml   # Project config (required)
├── agents/                # Agent YAML definitions
│   └── assistant.yaml
├── tools/                 # Tool YAML definitions
├── workflows/             # Workflow definitions
├── rag/                   # RAG document collections
│   └── my-docs/
├── .crystalai/            # Auto-generated (add to .gitignore)
│   └── agents.db          # SQLite database
├── .env                   # API keys
└── .gitignore
```
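The layout above can be created in one go. A throwaway scaffold script, sketched here for convenience (this is not part of the Crystal AI CLI; the directory names come from the tree above):

```typescript
// scaffold.ts — create the standard Crystal AI project layout.
import { mkdirSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";

// The directories every project uses, per the tree above.
export const PROJECT_DIRS = ["agents", "tools", "workflows", "rag", ".crystalai"];

export function scaffold(root: string): void {
  for (const dir of PROJECT_DIRS) {
    mkdirSync(join(root, dir), { recursive: true });
  }
  const config = join(root, "crystral.config.yaml");
  if (!existsSync(config)) {
    // Minimal required config (see Quick Start, step 1).
    writeFileSync(config, "version: 1\nproject: my-project\n");
  }
}
```

Run it once with `npx tsx scaffold.ts` and delete it afterwards.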
## Agents

An agent is a configured AI persona backed by a large language model. Each agent is a YAML file in `agents/`.
### Full Agent Example

```yaml
version: 1
name: support-agent
description: Customer support agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent for {company_name}.
  Always be polite and professional.
temperature: 0.3
max_tokens: 2048
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.75
  match_count: 5
```
### Agent Fields

| Field | Type | Required | Description |
|---|---|---|---|
| version | integer | Yes | Must be 1 |
| name | string | Yes | Must match filename |
| provider | string | Yes | openai, anthropic, groq, google, together |
| model | string | Yes | Model ID (e.g. gpt-4o) |
| system_prompt | string | No | Supports {variable} templates |
| temperature | number | No | 0.0–2.0, default 1.0 |
| max_tokens | integer | No | 1–1,000,000, default 4096 |
| tools | list | No | Tool names from tools/ |
| rag | object | No | RAG configuration |
| mcp | list | No | MCP server connections |
| output | object | No | Structured output (JSON schema) |
| retry | object | No | Retry policy |
| fallback | list | No | Fallback providers |
| guardrails | object | No | Input/output filtering |
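As an illustration of how these constraints compose, here is a minimal validity check over an already-parsed agent object. This is a sketch only, not the SDK's real validator; `crystalai validate` is the supported way to check config files:

```typescript
// validate-agent.ts — sanity-check a parsed agent config against the field table.
type AgentConfig = {
  version: number;
  name: string;
  provider: string;
  model: string;
  temperature?: number;
  max_tokens?: number;
};

const PROVIDERS = ["openai", "anthropic", "groq", "google", "together"];

export function validateAgent(cfg: AgentConfig, filename: string): string[] {
  const errors: string[] = [];
  if (cfg.version !== 1) errors.push("version must be 1");
  if (cfg.name !== filename.replace(/\.yaml$/, ""))
    errors.push("name must match filename");
  if (!PROVIDERS.includes(cfg.provider))
    errors.push(`unknown provider: ${cfg.provider}`);
  if (!cfg.model) errors.push("model is required");
  if (cfg.temperature !== undefined && (cfg.temperature < 0 || cfg.temperature > 2))
    errors.push("temperature must be in 0.0-2.0");
  if (cfg.max_tokens !== undefined && (cfg.max_tokens < 1 || cfg.max_tokens > 1_000_000))
    errors.push("max_tokens must be in 1-1,000,000");
  return errors;
}
```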
## Tools

Tools give agents the ability to take actions. Crystal AI supports four tool types:
| Type | Description | Use Case |
|---|---|---|
rest_api | Call any HTTP endpoint | External APIs, webhooks |
javascript | Sandboxed JS with timeout | Calculations, data transforms |
web_search | Brave Search API | Real-time information |
agent | Delegate to another agent | Specialist sub-agents |
### REST API Tool

```yaml
version: 1
name: get-weather
description: Get current weather for a city
type: rest_api
endpoint: https://wttr.in/{city}?format=j1
method: GET
response_path: current_condition.0
parameters:
  - name: city
    type: string
    required: true
    description: City name (e.g. "London")
```
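To make `endpoint` and `response_path` concrete: `{city}` is substituted from the call arguments, and the dotted path walks the JSON response, with numeric segments indexing into arrays. A self-contained sketch of that behavior (illustrative; not the engine's actual code):

```typescript
// rest-tool.ts — illustrate {param} substitution and dotted response_path lookup.

// Replace each {name} placeholder with the URL-encoded argument value.
export function interpolate(endpoint: string, args: Record<string, string>): string {
  return endpoint.replace(/\{(\w+)\}/g, (_m: string, key: string) =>
    encodeURIComponent(args[key] ?? ""),
  );
}

// "current_condition.0" walks objects by key and arrays by numeric index.
export function extractPath(data: unknown, path: string): unknown {
  return path.split(".").reduce<any>((node, key) => node?.[key], data);
}
```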
### JavaScript Tool

```yaml
version: 1
name: calculate
description: Evaluate a math expression
type: javascript
timeout_ms: 5000
parameters:
  - name: expression
    type: string
    required: true
    description: Math expression (e.g. "2 + 2")
code: |
  const result = new Function('return ' + args.expression)();
  return { result: Number(result) };
```
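A sketch of how a `javascript` tool body could be executed with `timeout_ms` enforced. This is illustrative only: it races a plain `new Function` call against a timer, whereas the real runtime sandboxes the code:

```typescript
// run-js-tool.ts — execute a tool's `code` body with arguments and a timeout.
export async function runJsTool(
  code: string,
  args: Record<string, unknown>,
  timeoutMs: number,
): Promise<unknown> {
  // The tool body sees its parameters as `args`, matching the YAML example above.
  const fn = new Function("args", code);
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`tool timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
  });
  try {
    return await Promise.race([Promise.resolve(fn(args)), timeout]);
  } finally {
    clearTimeout(timer); // avoid a dangling timer once the race settles
  }
}
```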
## Providers

Switch providers by changing the `provider` and `model` lines in the agent YAML. No code changes required.

```yaml
# Just change these two lines:
provider: anthropic
model: claude-sonnet-4-20250514
```
### Credential Resolution

API keys are resolved in priority order:

1. Environment variable (e.g. `OPENAI_API_KEY`)
2. Project `.env` file
3. Global credentials file (`~/.crystalai/credentials`)
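The priority order amounts to a left-to-right fallback. A sketch with the three sources passed in explicitly so the order is easy to test (the runtime itself reads `process.env`, the project `.env`, and `~/.crystalai/credentials`):

```typescript
// resolve-key.ts — pick an API key by the documented priority order.
type KeySource = Record<string, string | undefined>;

export function resolveKey(
  envVar: string,
  env: KeySource,         // 1. environment variables
  dotenv: KeySource,      // 2. project .env file
  globalCreds: KeySource, // 3. global credentials file
): string | undefined {
  return env[envVar] ?? dotenv[envVar] ?? globalCreds[envVar];
}
```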
| Provider | Env Variable | Chat | Embeddings | Vision |
|---|---|---|---|---|
| OpenAI | OPENAI_API_KEY | Yes | Yes | Yes |
| Anthropic | ANTHROPIC_API_KEY | Yes | No | Yes |
| Google | GOOGLE_API_KEY | Yes | Yes | Yes |
| Groq | GROQ_API_KEY | Yes | No | No |
| Together | TOGETHER_API_KEY | Yes | No | No |
## RAG (Retrieval-Augmented Generation)

Give agents access to your documents with built-in vector search powered by sqlite-vec.

### Setup

1. Place documents in `rag/<collection-name>/`
2. Add RAG config to your agent YAML
3. Collections are indexed automatically on first use
```yaml
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.7
  match_count: 5
```

Supported formats: `.md`, `.txt`, `.pdf`, `.html`
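To see what `match_threshold` and `match_count` do: retrieved chunks are scored by embedding similarity, anything below the threshold is dropped, and the top `match_count` survive. A sketch using cosine similarity (the runtime issues a sqlite-vec query instead of scoring in JavaScript):

```typescript
// rag-match.ts — how match_threshold and match_count filter retrieved chunks.

// Cosine similarity between two embedding vectors of equal length.
export function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score, threshold, sort descending, and keep the top matchCount chunks.
export function topMatches(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  matchThreshold: number,
  matchCount: number,
): { text: string; score: number }[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(query, c.embedding) }))
    .filter((m) => m.score >= matchThreshold)
    .sort((x, y) => y.score - x.score)
    .slice(0, matchCount);
}
```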
## Workflows

Orchestrate multiple specialist agents with a single YAML file. The orchestrator LLM decides task routing; there are no explicit graphs.
```yaml
version: 1
name: content-pipeline
description: Research and produce content
orchestrator:
  provider: openai
  model: gpt-4o
  system_prompt: |
    You orchestrate content production.
    Delegate to specialist agents.
  strategy: auto
  max_iterations: 20
agents:
  - name: researcher
    agent: research-agent
    description: Gathers information
  - name: writer
    agent: writing-agent
    description: Writes final content
context:
  shared_memory: true
  max_context_tokens: 8000
```
Run it from the SDK:

```typescript
const workflow = client.loadWorkflow('content-pipeline');
const result = await workflow.run('Write an article about AI');

console.log(result.content);
console.log(result.agentResults);
```
## MCP Servers

Connect to Model Context Protocol servers for dynamic tool discovery.

```yaml
mcp:
  - transport: stdio
    name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - transport: sse
    name: github
    url: http://localhost:3000/mcp
```

MCP tools are exposed as `mcp_{serverName}_{toolName}` and are available alongside static tools.
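The naming scheme can be captured in two small helpers (a sketch; note that a server name containing underscores would make the reverse mapping ambiguous):

```typescript
// mcp-name.ts — the mcp_{serverName}_{toolName} naming scheme for discovered tools.
export function mcpToolName(serverName: string, toolName: string): string {
  return `mcp_${serverName}_${toolName}`;
}

// Split a qualified name back into server and tool. The server segment is
// taken up to the first underscore, so underscored server names are ambiguous.
export function parseMcpToolName(name: string): { server: string; tool: string } | null {
  const m = name.match(/^mcp_([^_]+)_(.+)$/);
  return m ? { server: m[1], tool: m[2] } : null;
}
```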
## TypeScript SDK

```typescript
import { Crystral } from '@crystralai/sdk';

const client = new Crystral();

// Single-shot
const result = await client.run('assistant', 'Hello!');
console.log(result.content);
```
### Streaming

```typescript
const result = await client.run('assistant', 'Write a haiku.', {
  stream: true,
  onToken: (token) => process.stdout.write(token),
});
```
### Sessions

Conversations persist automatically in SQLite. Pass `sessionId` to continue:

```typescript
const r1 = await client.run('assistant', 'My name is Alice.');
const r2 = await client.run('assistant', 'What is my name?', {
  sessionId: r1.sessionId,
});
// r2.content -> "Your name is Alice."
```
## Browser Client

For frontends, React Native, or edge runtimes, use the zero-dependency client:

```typescript
import { CrystralClient } from '@crystralai/client';

const client = new CrystralClient({
  provider: 'openai',
  model: 'gpt-4o',
  apiKey: userProvidedKey,
  systemPrompt: 'You are a helpful assistant.',
});

const result = await client.run('What is 2+2?');
```
## Structured Output

```yaml
output:
  format: json
  strict: true
  schema:
    type: object
    required: [summary, items]
    properties:
      summary:
        type: string
      items:
        type: array
        items:
          type: object
          required: [name, score]
          properties:
            name: { type: string }
            score: { type: number }
```
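With `strict: true`, a response missing a required key should be rejected. A toy checker for the `required`/`properties`/`items` subset used above (full JSON Schema validation covers much more; this is only a sketch of the idea):

```typescript
// check-output.ts — verify required keys recursively for a small schema subset.
type Schema = {
  type: string;
  required?: string[];
  properties?: Record<string, Schema>;
  items?: Schema;
};

export function checkRequired(value: any, schema: Schema, path = "$"): string[] {
  const errors: string[] = [];
  if (schema.type === "object") {
    for (const key of schema.required ?? []) {
      if (value?.[key] === undefined) errors.push(`${path}.${key} is missing`);
    }
    // Recurse into any present properties.
    for (const [key, sub] of Object.entries(schema.properties ?? {})) {
      if (value?.[key] !== undefined)
        errors.push(...checkRequired(value[key], sub, `${path}.${key}`));
    }
  } else if (schema.type === "array" && schema.items && Array.isArray(value)) {
    value.forEach((v, i) =>
      errors.push(...checkRequired(v, schema.items!, `${path}[${i}]`)),
    );
  }
  return errors;
}
```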
## Retry & Fallback

```yaml
provider: openai
model: gpt-4o
retry:
  max_attempts: 3
  backoff: exponential
  retry_on:
    - rate_limit
    - server_error
    - timeout
fallback:
  - provider: anthropic
    model: claude-sonnet-4-20250514
  - provider: google
    model: gemini-1.5-pro
```
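The retry policy above amounts to a backoff loop over an allowlist of error classes. A sketch of that loop; the `kind` property used to classify errors and the 250 ms base delay are assumptions for illustration, not the runtime's actual internals:

```typescript
// retry.ts — exponential backoff with an error-class allowlist.
type RetryableError = "rate_limit" | "server_error" | "timeout";

export async function withRetry<T>(
  attempt: () => Promise<T>,
  opts: { maxAttempts: number; retryOn: RetryableError[]; baseDelayMs?: number },
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  const base = opts.baseDelayMs ?? 250;
  for (let i = 1; ; i++) {
    try {
      return await attempt();
    } catch (err: any) {
      // Give up if attempts are exhausted or the error class is not retryable.
      const kind = err?.kind as RetryableError | undefined;
      if (i >= opts.maxAttempts || !kind || !opts.retryOn.includes(kind)) throw err;
      await sleep(base * 2 ** (i - 1)); // 250ms, 500ms, 1s, ...
    }
  }
}
```

If every attempt fails, the runtime moves on to the `fallback` providers in order.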
## Guardrails

```yaml
guardrails:
  input:
    max_length: 10000
    block_patterns:
      - "(?i)ignore previous instructions"
    pii_action: redact
  output:
    max_length: 5000
    block_patterns:
      - "(?i)internal use only"
```
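Applying the input rules is a length check plus pattern matching. One wrinkle worth showing: the `(?i)` prefix in the YAML is a PCRE-style inline flag, so a JavaScript implementation has to translate it to the `i` regex flag. A sketch:

```typescript
// guardrails.ts — apply max_length and block_patterns from the config above.

// YAML patterns use PCRE-style "(?i)…"; JavaScript needs the "i" flag instead.
function compile(pattern: string): RegExp {
  return pattern.startsWith("(?i)")
    ? new RegExp(pattern.slice(4), "i")
    : new RegExp(pattern);
}

export function checkInput(
  text: string,
  rules: { max_length?: number; block_patterns?: string[] },
): { ok: boolean; reason?: string } {
  if (rules.max_length !== undefined && text.length > rules.max_length) {
    return { ok: false, reason: "input exceeds max_length" };
  }
  for (const pattern of rules.block_patterns ?? []) {
    if (compile(pattern).test(text)) {
      return { ok: false, reason: `blocked by pattern: ${pattern}` };
    }
  }
  return { ok: true };
}
```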
## Caching

```yaml
cache:
  enabled: true
  ttl: 3600  # seconds
```
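The cache semantics reduce to a TTL-keyed map. A sketch with an injectable clock so expiry is testable; the `agent:prompt` key shape in the usage note is an assumption, not the runtime's actual key:

```typescript
// cache.ts — a TTL response cache, matching the ttl-in-seconds config above.
export class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlSeconds: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (this.now() > hit.expiresAt) {
      this.store.delete(key); // expired entries are evicted lazily on read
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlSeconds * 1000 });
  }
}
```

Usage might look like `cache.get('assistant:hello')` before calling the provider, and `cache.set(...)` after.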
## CLI Commands

| Command | Description |
|---|---|
| `crystalai run <agent> "prompt"` | Run an agent |
| `crystalai run <agent> --stream` | Stream output |
| `crystalai studio` | Launch Studio dashboard |
| `crystalai auth add <provider>` | Add API key |
| `crystalai auth list` | List configured providers |
| `crystalai validate` | Validate all config files |
## Config Spec

All config files require `version: 1`. The `name` field must match the filename (without `.yaml`). See the full CONFIG_SPEC.md for every field and validation rule.
## Provider Comparison
| Provider | Chat | Embeddings | Vision | Tool Calling | Streaming |
|---|---|---|---|---|---|
| OpenAI | Yes | Yes | Yes | Yes | Yes |
| Anthropic | Yes | No | Yes | Yes | Yes |
| Google Gemini | Yes | Yes | Yes | Yes | Yes |
| Groq | Yes | No | No | Yes | Yes |
| Together AI | Yes | No | No | Yes | Yes |