
Crystal AI Documentation

Crystal AI is a local-first AI agent framework for developers. Define agents as YAML files, version-control them, run them in the terminal, and inspect them in a local dashboard. Think of it as Prisma for AI agents.

New here? Follow the Quick Start guide to have an agent running in under 5 minutes.

Key Principles

Config as Code

Agents, tools, and workflows are YAML files. Version control with Git.

Local First

No cloud service. SQLite storage. Keys stay on your machine.

Provider Agnostic

5 built-in providers. Switch with a one-line YAML change.

Installation

Install the SDK package. It includes @crystralai/core (runtime engine) automatically.

Terminal
npm install @crystralai/sdk

# or with other package managers
pnpm add @crystralai/sdk
yarn add @crystralai/sdk

Prerequisites

| Requirement | Details |
|---|---|
| Node.js | 18 or later (node --version) |
| Package Manager | npm, pnpm, or yarn |
| API Key | At least one LLM provider key (OpenAI, Anthropic, etc.) |
Tip: For browser or React Native apps, use @crystralai/client instead — a zero-dependency package that works anywhere fetch does.

Quick Start

1. Create project config

Every Crystal AI project needs a crystral.config.yaml at its root:

crystral.config.yaml
version: 1
project: my-project

2. Define your agent

agents/assistant.yaml
version: 1
name: assistant
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful assistant. Be concise and accurate.
temperature: 0.7
max_tokens: 4096

3. Set your API key

.env
OPENAI_API_KEY=sk-your-key-here
Never commit your .env file. Add it to .gitignore.

4. Run the agent

index.ts
import { Crystral } from '@crystralai/sdk';

const client = new Crystral();
const result = await client.run('assistant', 'What is the capital of France?');

console.log(result.content);    // "Paris"
console.log(result.usage.total); // 42
console.log(result.durationMs);  // 823
Terminal
npx tsx index.ts

Project Structure

File Tree
my-project/
├── crystral.config.yaml     # Project config (required)
├── agents/                  # Agent YAML definitions
│   └── assistant.yaml
├── tools/                   # Tool YAML definitions
├── workflows/               # Workflow definitions
├── rag/                     # RAG document collections
│   └── my-docs/
├── .crystalai/              # Auto-generated (add to .gitignore)
│   └── agents.db            # SQLite database
├── .env                     # API keys
└── .gitignore

Agents

An agent is a configured AI persona backed by a large language model. Each agent is a YAML file in agents/.

Full Agent Example

agents/support-agent.yaml
version: 1
name: support-agent
description: Customer support agent
provider: openai
model: gpt-4o
system_prompt: |
  You are a helpful support agent for {company_name}.
  Always be polite and professional.
temperature: 0.3
max_tokens: 2048
tools:
  - get-ticket
  - send-email
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.75
  match_count: 5
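
The {company_name} placeholder in the system prompt above is filled in at run time. A minimal sketch of how {variable} substitution could work — the renderPrompt helper below is illustrative, not an SDK API:

```typescript
// Illustrative helper: substitute {name} placeholders in a system prompt.
// Not part of the SDK; shown only to clarify the template syntax.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match // unknown placeholders are left as-is
  );
}

const prompt = renderPrompt(
  "You are a helpful support agent for {company_name}.",
  { company_name: "Acme" },
);
console.log(prompt); // "You are a helpful support agent for Acme."
```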

Agent Fields

| Field | Type | Required | Description |
|---|---|---|---|
| version | integer | Yes | Must be 1 |
| name | string | Yes | Must match filename |
| provider | string | Yes | openai, anthropic, groq, google, together |
| model | string | Yes | Model ID (e.g. gpt-4o) |
| system_prompt | string | No | Supports {variable} templates |
| temperature | number | No | 0.0–2.0, default 1.0 |
| max_tokens | integer | No | 1–1,000,000, default 4096 |
| tools | list | No | Tool names from tools/ |
| rag | object | No | RAG configuration |
| mcp | list | No | MCP server connections |
| output | object | No | Structured output (JSON schema) |
| retry | object | No | Retry policy |
| fallback | list | No | Fallback providers |
| guardrails | object | No | Input/output filtering |

Tools

Tools give agents the ability to take actions. Crystal AI supports 4 tool types:

| Type | Description | Use Case |
|---|---|---|
| rest_api | Call any HTTP endpoint | External APIs, webhooks |
| javascript | Sandboxed JS with timeout | Calculations, data transforms |
| web_search | Brave Search API | Real-time information |
| agent | Delegate to another agent | Specialist sub-agents |

REST API Tool

tools/get-weather.yaml
version: 1
name: get-weather
description: Get current weather for a city
type: rest_api
endpoint: https://wttr.in/{city}?format=j1
method: GET
response_path: current_condition.0
parameters:
  - name: city
    type: string
    required: true
    description: City name (e.g. "London")
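
The response_path field selects a fragment of the JSON response before it is handed back to the model. A sketch of the dotted-path lookup, assuming paths mix object keys and array indexes as in current_condition.0 — the extractPath helper is illustrative:

```typescript
// Illustrative: resolve a dotted response_path against parsed JSON.
// Numeric segments index into arrays ("0" selects the first element).
function extractPath(data: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>((node, key) => {
    if (node == null) return undefined;
    return (node as Record<string, unknown>)[key];
  }, data);
}

const response = { current_condition: [{ temp_C: "18", humidity: "60" }] };
const fragment = extractPath(response, "current_condition.0");
console.log(fragment); // the first current_condition entry
```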

JavaScript Tool

tools/calculate.yaml
version: 1
name: calculate
description: Evaluate a math expression
type: javascript
timeout_ms: 5000
parameters:
  - name: expression
    type: string
    required: true
    description: Math expression (e.g. "2 + 2")
code: |
  const result = new Function('return ' + args.expression)();
  return { result: Number(result) };

Providers

Switch providers by changing one line. No code changes required.

One-line switch
# Just change these two lines:
provider: anthropic
model: claude-sonnet-4-20250514

Credential Resolution

API keys are resolved in priority order:

  1. Environment variable (e.g. OPENAI_API_KEY)
  2. Project .env file
  3. Global credentials file (~/.crystalai/credentials)
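
The priority order amounts to a first-match scan over the three sources. A sketch under the assumption that each source yields either a key string or nothing — resolveKey is illustrative, not the SDK's internal logic:

```typescript
// Illustrative: return the first non-empty key in priority order.
function resolveKey(sources: Array<string | undefined>): string | undefined {
  return sources.find((key) => key !== undefined && key !== "");
}

const apiKey = resolveKey([
  process.env.OPENAI_API_KEY, // 1. environment variable
  undefined,                  // 2. value parsed from the project .env file
  undefined,                  // 3. value read from ~/.crystalai/credentials
]);
```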
| Provider | Env Variable | Chat | Embeddings | Vision |
|---|---|---|---|---|
| OpenAI | OPENAI_API_KEY | Yes | Yes | Yes |
| Anthropic | ANTHROPIC_API_KEY | Yes | No | Yes |
| Google | GOOGLE_API_KEY | Yes | Yes | Yes |
| Groq | GROQ_API_KEY | Yes | No | No |
| Together | TOGETHER_API_KEY | Yes | No | No |

RAG (Retrieval-Augmented Generation)

Give agents access to your documents with built-in vector search powered by sqlite-vec.

Setup

  1. Place documents in rag/<collection-name>/
  2. Add RAG config to your agent YAML
  3. Collections are indexed automatically on first use
Agent with RAG
rag:
  collections:
    - product-docs
  embedding_provider: openai
  embedding_model: text-embedding-3-small
  match_threshold: 0.7
  match_count: 5

Supported formats: .md, .txt, .pdf, .html
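
match_threshold and match_count work together: chunks scoring below the threshold are discarded, and at most match_count of the most similar survivors are returned. A sketch of that selection step, assuming similarity scores in [0, 1] — selectMatches is illustrative:

```typescript
// Illustrative: filter retrieved chunks by similarity, keep the top N.
interface Match {
  chunk: string;
  similarity: number; // cosine similarity in [0, 1]
}

function selectMatches(matches: Match[], threshold: number, count: number): Match[] {
  return matches
    .filter((m) => m.similarity >= threshold)    // match_threshold
    .sort((a, b) => b.similarity - a.similarity) // most similar first
    .slice(0, count);                            // match_count
}
```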

Workflows

Orchestrate multiple specialist agents with a single YAML file. The orchestrator LLM decides task routing — no explicit graphs.

workflows/content-pipeline.yaml
version: 1
name: content-pipeline
description: Research and produce content

orchestrator:
  provider: openai
  model: gpt-4o
  system_prompt: |
    You orchestrate content production.
    Delegate to specialist agents.
  strategy: auto
  max_iterations: 20

agents:
  - name: researcher
    agent: research-agent
    description: Gathers information
  - name: writer
    agent: writing-agent
    description: Writes final content

context:
  shared_memory: true
  max_context_tokens: 8000
Running a workflow
const workflow = client.loadWorkflow('content-pipeline');
const result = await workflow.run('Write an article about AI');
console.log(result.content);
console.log(result.agentResults);

MCP Servers

Connect to Model Context Protocol servers for dynamic tool discovery.

Agent with MCP
mcp:
  - transport: stdio
    name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - transport: sse
    name: github
    url: http://localhost:3000/mcp

MCP tools are exposed as mcp_{serverName}_{toolName} and available alongside static tools.
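
A sketch of that naming rule, assuming server and tool names are used verbatim:

```typescript
// Illustrative: MCP tools are namespaced per server.
function mcpToolName(serverName: string, toolName: string): string {
  return `mcp_${serverName}_${toolName}`;
}

console.log(mcpToolName("filesystem", "read_file")); // "mcp_filesystem_read_file"
```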

TypeScript SDK

Basic usage
import { Crystral } from '@crystralai/sdk';

const client = new Crystral();

// Single-shot
const result = await client.run('assistant', 'Hello!');
console.log(result.content);

Streaming

Stream tokens
const result = await client.run('assistant', 'Write a haiku.', {
  stream: true,
  onToken: (token) => process.stdout.write(token),
});

Sessions

Conversations persist automatically in SQLite. Pass sessionId to continue:

Multi-turn
const r1 = await client.run('assistant', 'My name is Alice.');
const r2 = await client.run('assistant', 'What is my name?', {
  sessionId: r1.sessionId,
});
// r2.content -> "Your name is Alice."

Browser Client

For frontends, React Native, or edge runtimes — zero dependencies:

Browser usage
import { CrystralClient } from '@crystralai/client';

const client = new CrystralClient({
  provider: 'openai',
  model: 'gpt-4o',
  apiKey: userProvidedKey,
  systemPrompt: 'You are a helpful assistant.',
});

const result = await client.run('What is 2+2?');

Structured Output

JSON schema output
output:
  format: json
  strict: true
  schema:
    type: object
    required: [summary, items]
    properties:
      summary:
        type: string
      items:
        type: array
        items:
          type: object
          required: [name, score]
          properties:
            name: { type: string }
            score: { type: number }
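
With format: json the reply content should be a JSON string conforming to the schema. A sketch of consuming it on the caller's side, assuming result.content carries the raw JSON text — the Report type mirrors the schema above by hand and is not generated by the SDK:

```typescript
// Illustrative: parse a structured-output reply into a typed object.
// The Report interface is written to match the YAML schema manually.
interface Report {
  summary: string;
  items: Array<{ name: string; score: number }>;
}

const content = '{"summary":"ok","items":[{"name":"latency","score":0.9}]}';
const report: Report = JSON.parse(content);
console.log(report.items[0].name); // "latency"
```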

Retry & Fallback

Resilient agent
provider: openai
model: gpt-4o

retry:
  max_attempts: 3
  backoff: exponential
  retry_on:
    - rate_limit
    - server_error
    - timeout

fallback:
  - provider: anthropic
    model: claude-sonnet-4-20250514
  - provider: google
    model: gemini-1.5-pro
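
A sketch of what backoff: exponential implies for the waits between attempts. The 1-second base delay is an assumption; the docs do not specify it:

```typescript
// Illustrative: exponential backoff. The first attempt runs immediately;
// each retry waits twice as long as the previous one (base delay assumed).
function backoffDelays(maxAttempts: number, baseMs = 1000): number[] {
  return Array.from({ length: maxAttempts - 1 }, (_, i) => baseMs * 2 ** i);
}

console.log(backoffDelays(3)); // [1000, 2000]
```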

Guardrails

Input/output filtering
guardrails:
  input:
    max_length: 10000
    block_patterns:
      - "(?i)ignore previous instructions"
    pii_action: redact
  output:
    max_length: 5000
    block_patterns:
      - "(?i)internal use only"

Caching

Response cache
cache:
  enabled: true
  ttl: 3600  # seconds

CLI Commands

| Command | Description |
|---|---|
| crystalai run <agent> "prompt" | Run an agent |
| crystalai run <agent> --stream | Stream output |
| crystalai studio | Launch Studio dashboard |
| crystalai auth add <provider> | Add API key |
| crystalai auth list | List configured providers |
| crystalai validate | Validate all config files |

Config Spec

All config files require version: 1. The name field must match the filename (without .yaml). See the full CONFIG_SPEC.md for every field and validation rule.
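
The two universal rules can be checked mechanically. A sketch, assuming a parsed YAML object — checkConfig is illustrative; crystalai validate performs the real validation:

```typescript
// Illustrative: enforce the two rules every config file shares.
import path from "node:path";

function checkConfig(file: string, config: { version?: number; name?: string }): string[] {
  const errors: string[] = [];
  if (config.version !== 1) errors.push("version must be 1");
  if (config.name !== path.basename(file, ".yaml")) {
    errors.push("name must match the filename");
  }
  return errors;
}

console.log(checkConfig("agents/assistant.yaml", { version: 1, name: "assistant" })); // []
```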

Provider Comparison

| Provider | Chat | Embeddings | Vision | Tool Calling | Streaming |
|---|---|---|---|---|---|
| OpenAI | Yes | Yes | Yes | Yes | Yes |
| Anthropic | Yes | No | Yes | Yes | Yes |
| Google Gemini | Yes | Yes | Yes | Yes | Yes |
| Groq | Yes | No | No | Yes | Yes |
| Together AI | Yes | No | No | Yes | Yes |