Build agents with a modern YAML stack
Crystal AI is a local-first framework for building AI agents and workflows. Define everything in YAML, connect any LLM provider, ship fast.
npm install @crystralai/sdk
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant.
  Be concise and accurate.
temperature: 0.7
max_tokens: 4096
tools:
  - web-search
  - calculate
Works with your favorite LLM provider
Build and iterate
Agents, tools, RAG, and workflows — all defined in YAML. Write your agent logic in config files, iterate with Git, and run anywhere.
Define agents, tools, and workflows as YAML files. Version control with Git. Switch models or providers without touching your application code.
version: 1
name: support-agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent.
temperature: 0.3
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
- REST API — call any HTTP endpoint
- JavaScript — sandboxed VM execution
- Web Search — Brave search integration
- Agent Delegation — agent-to-agent calls
sqlite-vec vector search over your documents. Markdown, text, PDF, HTML. No external vector database. Embeddings stay on your machine.
- Automatic chunking & embedding
- Configurable threshold & count
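The threshold and count settings amount to a filter over similarity scores. A minimal TypeScript sketch of that semantics (the scoring and filtering here are illustrative, not Crystal AI's actual internals):

```typescript
// Illustrative: how match_threshold and match_count could filter
// retrieved chunks. Not Crystal AI's real implementation.
type Chunk = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], chunks: Chunk[], threshold = 0.75, count = 5): Chunk[] {
  return chunks
    .map((c) => ({ chunk: c, score: cosine(query, c.embedding) }))
    .filter((r) => r.score >= threshold) // match_threshold
    .sort((a, b) => b.score - a.score)
    .slice(0, count)                     // match_count
    .map((r) => r.chunk);
}

// Tiny demo with 2-d stand-in "embeddings".
const docs: Chunk[] = [
  { text: "refund policy", embedding: [1, 0] },
  { text: "shipping times", embedding: [0.9, 0.1] },
  { text: "unrelated", embedding: [0, 1] },
];
const hits = retrieve([1, 0], docs, 0.75, 5);
```

Raising `match_threshold` trades recall for precision; `match_count` caps how much retrieved text reaches the prompt.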
Orchestrate specialist agents from a single YAML file. LLM-driven routing with shared context — no explicit graph definitions needed.
- Auto strategy routing
- Shared memory between agents
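Conceptually, auto-strategy routing means the orchestrator picks a specialist per task and appends results to a shared context. A hedged sketch in TypeScript, where a simple description-overlap score stands in for the LLM's routing decision:

```typescript
// Illustrative: LLM-driven routing simulated with keyword overlap.
// In Crystal AI the orchestrator model makes this choice; this is
// only a stand-in to show the shape of auto routing.
type Specialist = { name: string; description: string };

const specialists: Specialist[] = [
  { name: "researcher", description: "Gathers information from the web" },
  { name: "writer", description: "Writes polished final content" },
];

const sharedMemory: string[] = []; // context visible to every agent

function route(task: string, pool: Specialist[]): Specialist {
  const words = task.toLowerCase().split(/\s+/);
  let best = pool[0];
  let bestScore = -1;
  for (const agent of pool) {
    const desc = agent.description.toLowerCase();
    const score = words.filter((w) => desc.includes(w)).length;
    if (score > bestScore) {
      best = agent;
      bestScore = score;
    }
  }
  return best;
}

const chosen = route("gathers sources from the web", specialists);
sharedMemory.push(`${chosen.name}: handled research step`);
```

Because routing is driven by agent descriptions rather than an explicit graph, adding a specialist is just another YAML entry.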
Zero cloud dependency. API keys stay on your machine. Sessions and logs in local SQLite. Inference calls go direct to providers.
- No telemetry or tracking
- MIT licensed, fully auditable
Run your agents in 3 lines
Write your agent config in YAML. Run with the TypeScript SDK. It's that simple.
version: 1
name: support-agent
description: Customer support agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent for {company_name}.
  Always be polite and professional.
temperature: 0.3
max_tokens: 2048
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.75
  match_count: 5
import { Crystral } from '@crystralai/sdk';
const client = new Crystral();
// Single-shot
const result = await client.run('support-agent', 'My order is late');
console.log(result.content);
// Streaming
const stream = await client.run('support-agent', 'Help me', {
stream: true,
onToken: (token) => process.stdout.write(token),
});
// Multi-turn sessions
const r1 = await client.run('support-agent', 'Hi, I need help');
const r2 = await client.run('support-agent', 'Order #12345', {
sessionId: r1.sessionId,
});
version: 1
name: content-pipeline
description: Research, analyze, and produce content
orchestrator:
  provider: openai
  model: gpt-4o
  system_prompt: |
    You orchestrate content production.
    Delegate tasks to specialist agents.
  strategy: auto
  max_iterations: 20
agents:
  - name: researcher
    agent: research-agent
    description: Gathers information from the web
  - name: writer
    agent: writing-agent
    description: Writes polished final content
context:
  shared_memory: true
  max_context_tokens: 8000
Three steps. That's it.
From zero to running agent in under 5 minutes. No boilerplate, no glue code to debug.
Define
Create a YAML file for your agent — provider, model, prompt, tools, RAG. Everything in one readable file.
agents/assistant.yaml
Connect
Drop your API key in .env. Credentials resolve automatically. No init files or setup scripts.
Run
Three lines of TypeScript. Or use the CLI. Or launch the built-in Studio dashboard.
await client.run('assistant', '...')
One YAML line to switch
Change your provider with a single config change. No code modifications, no abstractions to learn.
Plus any OpenAI-compatible endpoint — Ollama, vLLM, LM Studio, Fireworks
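As a sketch of that one-line switch, an agent config pointed at a local OpenAI-compatible server might look like this (the `base_url` field comes from the FAQ; the URL and model name here are placeholders, not documented values):

```yaml
version: 1
name: assistant
provider: openai
# Any OpenAI-compatible server works; this URL and model are examples
# (e.g. a local Ollama instance).
base_url: http://localhost:11434/v1
model: llama3
system_prompt: |
  You are a helpful assistant.
```

Only `provider`, `base_url`, and `model` change; the prompt, tools, and application code stay the same.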
Built for real work
From customer support to data processing — Crystal AI adapts to your production needs.
Customer Support
RAG-powered agents that search docs, call ticket APIs, and escalate when needed. Multi-turn sessions with full context.
Content Generation
Multi-agent workflows: researcher, writer, editor — orchestrated in one YAML file. Produce content at scale.
Code Review
Security auditors that find OWASP vulnerabilities with structured severity reports. Integrate into CI pipelines.
Data Processing
JS tool agents that extract, transform, and classify data with REST API integrations. Reliable and repeatable.
Running in under 5 minutes
npm install @crystralai/sdk
version: 1
project: my-project
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant.
import { Crystral } from '@crystralai/sdk';
const client = new Crystral();
const result = await client.run('assistant', 'Hello!');
console.log(result.content);
Common questions
Is Crystal AI free?
Yes. Fully open source under the MIT license. You only pay for LLM API usage from the providers you choose — OpenAI, Anthropic, Google, Groq, Together AI. The framework itself is free for any project.
Does it send data to the cloud?
No. Runs entirely on your machine. API calls go directly to the LLM provider. Sessions, logs, and vector embeddings are stored locally in SQLite. Your API keys never leave your environment.
Which LLM providers are supported?
5 built-in: OpenAI, Anthropic, Google Gemini, Groq, Together AI. Plus any OpenAI-compatible endpoint (Ollama, vLLM, LM Studio) via the base_url field. Switching is a one-line YAML change.
How is it different from LangChain?
Configuration-first, not code-first. Agents, tools, and workflows are YAML files — no chain abstractions, no class hierarchies. Think Prisma (config-driven, type-safe, local) vs. LangChain (code-driven, chain-based). Less boilerplate, easier debugging, Git-friendly configs.
Can I use it in production?
Yes. It ships with retry with exponential backoff, fallback providers, response caching, guardrails, structured output validation, and inference logging. The TypeScript SDK runs in any Node.js 18+ environment.
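The retry-with-backoff behavior can be pictured with a short TypeScript sketch (generic stand-in logic, not Crystal AI's code; in the framework this is configured in YAML rather than written by hand):

```typescript
// Illustrative: retry with exponential backoff around a flaky call.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Demo: a "provider" that fails twice, then succeeds.
let calls = 0;
async function flakyProvider(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("rate limited");
  return "ok";
}

const result = await withRetry(flakyProvider);
```

A fallback provider is the same idea one level up: when all retries against the primary provider are exhausted, the request is replayed against the next provider in the list.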
Does it support RAG?
Yes. Built-in vector search with sqlite-vec. Place documents in a rag/ directory, reference the collection in YAML, and Crystal AI handles chunking, embedding, and retrieval. No external vector DB required.
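To illustrate the chunking step, here is a minimal fixed-size chunker with overlap in TypeScript (Crystal AI's actual chunk sizes and strategy are not specified in these docs; the numbers are placeholders):

```typescript
// Illustrative: fixed-size chunking with overlap, so context is not
// lost at chunk boundaries. Assumes overlap < size.
function chunkText(text: string, size = 40, overlap = 10): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // reached the end
  }
  return chunks;
}

const doc = "a".repeat(100);
const chunks = chunkText(doc); // 0-40, 30-70, 60-100
```

Each chunk is then embedded and stored in sqlite-vec; at query time the question is embedded the same way and matched against these vectors.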