TypeScript AI Framework

Build agents with a modern YAML stack

Crystal AI is a local-first framework for building AI agents and workflows. Define everything in YAML, connect any LLM provider, ship fast.

$ npm install @crystralai/sdk
agents/assistant.yaml
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant.
  Be concise and accurate.
temperature: 0.7
max_tokens: 4096
tools:
  - web-search
  - calculate

Works with your favorite LLM provider

OpenAI, Anthropic, Google Gemini, Groq, Together AI, Ollama, vLLM, LM Studio
Features

Build and iterate

Agents, tools, RAG, and workflows — all defined in YAML. Write your agent logic in config files, iterate with Git, and run anywhere.

Core Concept
Config as code

Define agents, tools, and workflows as YAML files. Version control with Git. Switch models or providers without touching your application code.

agents/support-agent.yaml
version: 1
name: support-agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent.
temperature: 0.3
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
Integrations
5 LLM providers

OpenAI, Anthropic, Gemini, Groq, Together AI. Plus any OpenAI-compatible endpoint.

Extensible
4 tool types
  • REST API — call any HTTP endpoint
  • JavaScript — sandboxed VM execution
  • Web Search — Brave search integration
  • Agent Delegation — agent-to-agent calls
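
The tool definition schema isn't shown on this page, so every field name below is an assumption. By analogy with the agent files elsewhere on the page, a REST API tool (the first type above) might be declared like this:

```yaml
# tools/get-ticket.yaml — hypothetical sketch; the tool schema is not
# documented here, so all field names below are assumptions.
version: 1
name: get-ticket
type: rest
method: GET
url: https://api.example.com/tickets/{ticket_id}
headers:
  Authorization: Bearer ${TICKET_API_KEY}
```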
Search
Local RAG

sqlite-vec vector search over your documents. Markdown, text, PDF, HTML. No external vector database. Embeddings stay on your machine.

  • Automatic chunking & embedding
  • Configurable threshold & count
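
The two knobs above can be pictured as a post-filter over similarity scores. A self-contained sketch (not Crystal AI internals) of how a match threshold and a match count might combine:

```typescript
interface Match {
  id: string;
  score: number; // cosine similarity, higher is closer
}

// Keep only matches at or above the threshold, best first, capped at count.
// Illustrative only; this is not Crystal AI's actual retrieval code.
function filterMatches(matches: Match[], threshold: number, count: number): Match[] {
  return matches
    .filter((m) => m.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, count);
}
```

With `match_threshold: 0.75` and `match_count: 5`, only the five closest chunks scoring at least 0.75 would reach the prompt.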
Orchestration
Multi-agent workflows

Orchestrate specialist agents from a single YAML file. LLM-driven routing with shared context — no explicit graph definitions needed.

  • Auto strategy routing
  • Shared memory between agents
Security
Privacy first

Zero cloud dependency. API keys stay on your machine. Sessions and logs live in local SQLite. Inference calls go directly to providers.

  • No telemetry or tracking
  • MIT licensed, fully auditable
Developer Experience

Run your agents in 3 lines

Write your agent config in YAML. Run with the TypeScript SDK. It's that simple.

agents/support-agent.yaml
version: 1
name: support-agent
description: Customer support agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent for {company_name}.
  Always be polite and professional.
temperature: 0.3
max_tokens: 2048
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.75
  match_count: 5
index.ts
import { Crystral } from '@crystralai/sdk';

const client = new Crystral();

// Single-shot
const result = await client.run('support-agent', 'My order is late');
console.log(result.content);

// Streaming
const stream = await client.run('support-agent', 'Help me', {
  stream: true,
  onToken: (token) => process.stdout.write(token),
});

// Multi-turn sessions
const r1 = await client.run('support-agent', 'Hi, I need help');
const r2 = await client.run('support-agent', 'Order #12345', {
  sessionId: r1.sessionId,
});
workflows/content-pipeline.yaml
version: 1
name: content-pipeline
description: Research, analyze, and produce content

orchestrator:
  provider: openai
  model: gpt-4o
  system_prompt: |
    You orchestrate content production.
    Delegate tasks to specialist agents.
  strategy: auto
  max_iterations: 20

agents:
  - name: researcher
    agent: research-agent
    description: Gathers information from the web
  - name: writer
    agent: writing-agent
    description: Writes polished final content

context:
  shared_memory: true
  max_context_tokens: 8000
Quick Start

Three steps. That's it.

From zero to a running agent in under 5 minutes. No boilerplate, no setup scripts to debug.

1. Define

Create a YAML file for your agent — provider, model, prompt, tools, RAG. Everything in one readable file.

agents/assistant.yaml

2. Connect

Drop your API key in .env. Credentials resolve automatically. No init files or setup scripts.

OPENAI_API_KEY=sk-...
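
"Credentials resolve automatically" presumably means the SDK reads each provider's conventional environment variable. A minimal sketch of that idea (the resolver and its behavior are assumptions, not the SDK's code; the variable names are the providers' own conventions):

```typescript
// Map each provider to the environment variable it conventionally uses.
const ENV_KEYS: Record<string, string> = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  gemini: 'GEMINI_API_KEY',
  groq: 'GROQ_API_KEY',
  together: 'TOGETHER_API_KEY',
};

// Hypothetical resolver: look up the provider's key in an env map
// (pass process.env in real code) and fail loudly when it is missing.
function resolveApiKey(provider: string, env: Record<string, string | undefined>): string {
  const name = ENV_KEYS[provider];
  const key = name ? env[name] : undefined;
  if (!key) throw new Error(`No API key found for provider "${provider}"`);
  return key;
}
```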

3. Run

Three lines of TypeScript. Or use the CLI. Or launch the built-in Studio dashboard.

await client.run('assistant', '...')
Providers

One YAML line to switch

Change your provider with a single config change. No code modifications, no abstractions to learn.

OpenAI
Chat · Embed · Vision · Stream
Anthropic
Chat · Embed · Vision · Stream
Google Gemini
Chat · Embed · Vision · Stream
Groq
Chat · Embed · Vision · Stream
Together AI
Chat · Embed · Vision · Stream

Plus any OpenAI-compatible endpoint — Ollama, vLLM, LM Studio, Fireworks
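
Per the FAQ below, the switch is a one-line change to `provider`, and OpenAI-compatible endpoints are reached via the `base_url` field. A sketch, reusing the assistant agent from above and assuming the same schema:

```yaml
# agents/assistant.yaml — same agent, pointed at a local Ollama server.
# base_url is the field named in the FAQ; the rest mirrors the files above.
version: 1
name: assistant
provider: openai                      # OpenAI-compatible wire format
base_url: http://localhost:11434/v1   # Ollama's default local endpoint
model: llama3.1
system_prompt: |
  You are a helpful assistant.
```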

Use Cases

Built for real work

From customer support to data processing — Crystal AI adapts to your production needs.

Customer Support

RAG-powered agents that search docs, call ticket APIs, and escalate when needed. Multi-turn sessions with full context.

Content Generation

Multi-agent workflows: researcher, writer, editor — orchestrated in one YAML file. Produce content at scale.

Code Review

Security auditors that find OWASP vulnerabilities with structured severity reports. Integrate into CI pipelines.

Data Processing

JS tool agents that extract, transform, and classify data with REST API integrations. Reliable and repeatable.

Setup

Running in under 5 minutes

1. Install
npm install @crystralai/sdk
2. Configure
version: 1
project: my-project
3. Define Agent
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant.
4. Run
import { Crystral } from '@crystralai/sdk';
const client = new Crystral();
const result = await client.run('assistant', 'Hello!');
console.log(result.content);
FAQ

Common questions

Is Crystal AI free?

Yes. Fully open source under the MIT license. You only pay for LLM API usage from the providers you choose — OpenAI, Anthropic, Google, Groq, Together AI. The framework itself is free for any project.

Does it send data to the cloud?

No. Runs entirely on your machine. API calls go directly to the LLM provider. Sessions, logs, and vector embeddings are stored locally in SQLite. Your API keys never leave your environment.

Which LLM providers are supported?

5 built-in: OpenAI, Anthropic, Google Gemini, Groq, Together AI. Plus any OpenAI-compatible endpoint (Ollama, vLLM, LM Studio) via the base_url field. Switching is a one-line YAML change.

How is it different from LangChain?

Configuration-first, not code-first. Agents, tools, and workflows are YAML files — no chain abstractions, no class hierarchies. Think Prisma (config-driven, type-safe, local) vs. LangChain (code-driven, chain-based). Less boilerplate, easier debugging, Git-friendly configs.

Can I use it in production?

Yes. Retry with exponential backoff, fallback providers, response caching, guardrails, structured output validation, and inference logging. The TypeScript SDK runs in any Node.js 18+ environment.
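
As a generic illustration of the retry behavior mentioned above (not the SDK's actual implementation), exponential backoff doubles the wait after each failed attempt:

```typescript
// Retry an async operation with exponential backoff: the delay doubles
// after each failure (baseMs, 2*baseMs, 4*baseMs, ...). Illustrative only.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```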

Does it support RAG?

Yes. Built-in vector search with sqlite-vec. Place documents in a rag/ directory, reference the collection in YAML, and Crystal AI handles chunking, embedding, and retrieval. No external vector DB required.
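
"Chunking" here typically means splitting documents into fixed-size, overlapping windows before embedding. A naive sketch of that step (the sizes, and the strategy itself, are assumptions; the page doesn't specify Crystal AI's chunker):

```typescript
// Split text into windows of `size` characters with `overlap` characters
// shared between neighbors. Naive illustration; real chunkers usually
// split on sentence or markdown boundaries instead of raw offsets.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = size - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded once and stored in sqlite-vec for retrieval.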

YAML defines,
TypeScript ships.