Introduction to OpenCode: The Terminal-Native AI Agent
While tech giants like Google build polished, proprietary IDEs, the open-source community fights back with OpenCode, a radically different approach to AI-assisted coding.
OpenCode is a terminal-native AI coding assistant that integrates directly into your existing workflow, whether you use Neovim, Emacs, VS Code, or just raw Vim over SSH.
Philosophy: The Developer is in Control
OpenCode was built on three core principles:
- Transparency: You should know what the AI is doing
- Privacy: Your code should never leave your machine unless you want it to
- Flexibility: Use any editor, any model, any workflow
Unlike Antigravity's "walled garden," OpenCode applies the Unix philosophy to AI:
- Do one thing well (agentic coding)
- Work with everything else
- Be composable
How It Works: Terminal-Native
OpenCode runs as a CLI tool in your terminal. You invoke it with natural language:
# Refactor a file
oc refactor auth.ts "use async/await instead of promises"
# Debug an error
oc fix "why is my API returning 500?"
# Generate tests
oc test user-service.ts
# Research
oc research "best way to handle rate limiting in Node.js"
The agent reads your project context, performs the task, and outputs the result, all without leaving your terminal.
Integration with Existing Editors
OpenCode doesn't replace your editor. It enhances it:
Neovim Example:
:OpenCode "add JSDoc comments to this function"
VS Code Example:
Use the OpenCode extension to highlight code and run:
Cmd+Shift+P > OpenCode: Explain Selection
Pure Terminal:
oc explain < myfile.ts
This editor-agnostic approach means you can use OpenCode on a remote server, inside Docker, or anywhere you have a shell.
Model Agnostic: Use Any AI
Unlike Antigravity (locked to Gemini), OpenCode supports any model via the Model Context Protocol (MCP):
Cloud Models
- Claude Sonnet/Opus (Anthropic)
- GPT-4.5 (OpenAI)
- Gemini 3 (Google)
- Mixtral (Mistral AI)
Local Models (via Ollama)
- Llama 3.3 70B
- Qwen 2.5 Coder
- DeepSeek Coder 33B
Configuration Example:
# ~/.opencode/config.yaml
providers:
  - name: claude
    model: claude-4.5-opus
    api_key: ${ANTHROPIC_KEY}
  - name: local
    model: llama3.3:70b-instruct
    endpoint: http://localhost:11434
default: claude
Switch models mid-session:
oc --model local research "async patterns in Rust"
This flexibility is crucial for:
- Cost Control: Use cheap local models for simple tasks, expensive cloud models for complex ones
- Privacy: Keep proprietary code local
- Experimentation: Try new models as they're released
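The cost-control and privacy points above can be sketched as a small routing layer. This is a hypothetical illustration, not OpenCode's actual implementation; the provider names, the 200-character threshold, and the `touchesProprietaryCode` flag are all assumptions made for the example:

```typescript
// Hypothetical model router: proprietary code stays local, short
// prompts go to the free local model, everything else goes to a
// cloud model. All names and thresholds are illustrative.
type Provider = "local" | "claude";

interface Task {
  prompt: string;
  touchesProprietaryCode: boolean;
}

function pickProvider(task: Task): Provider {
  // Privacy rule: proprietary code never leaves the machine.
  if (task.touchesProprietaryCode) return "local";
  // Cost rule: simple, short prompts don't need an expensive model.
  return task.prompt.length < 200 ? "local" : "claude";
}
```

A router like this is why model-agnostic tooling pays off: the policy lives in one place, and swapping a provider is a one-line change.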
Key Features
1. Privacy-First Architecture
When using local models, your code never leaves your machine:
# Run Llama 3.3 locally via Ollama
ollama run llama3.3:70b-instruct
# Use it with OpenCode
oc --model ollama/llama3.3 refactor database.ts
For enterprises with strict data policies (finance, defense, healthcare), this is non-negotiable.
2. Context-Aware Commands
OpenCode maintains a context window of your project:
# Initial context building
oc index
# Now it knows your codebase
oc "where is the authentication logic?"
# > Found in: src/auth/jwt.ts, src/middleware/auth.ts
oc "refactor all auth files to use a shared config"
# Modifies multiple files with consistent patterns
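One way an `oc index` step could map a query like "authentication" to candidate files is an inverted index from words to paths. The sketch below is a deliberately naive, in-memory illustration; a real implementation would likely use embeddings or an AST-aware index:

```typescript
// Minimal inverted index: every lowercase word in a file maps to the
// set of files containing it. Purely illustrative of the idea behind
// "oc index", not the tool's actual mechanism.
function buildIndex(files: Record<string, string>): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const [path, source] of Object.entries(files)) {
    for (const word of source.toLowerCase().match(/[a-z]+/g) ?? []) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word)!.add(path);
    }
  }
  return index;
}

function lookup(index: Map<string, Set<string>>, query: string): string[] {
  return [...(index.get(query.toLowerCase()) ?? [])];
}
```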
3. GitHub Integration
OpenCode integrates directly into GitHub workflows:
Trigger via PR Comment:
Comment: /oc fix linting errors
GitHub Action runs:
name: OpenCode Auto-Fix
on:
  issue_comment:
    types: [created]
jobs:
  fix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g opencode
      - run: oc fix "address linting errors"
      # CI runners have no git identity by default, so set one before committing
      - run: |
          git config user.name "opencode-bot"
          git config user.email "opencode-bot@users.noreply.github.com"
          git commit -am "Auto-fix by OpenCode"
          git push
This enables "self-healing" CI/CD pipelines.
4. Custom Recipes
You can create reusable "recipes" for common tasks:
# ~/.opencode/recipes/add-api-endpoint.yaml
name: add-api-endpoint
description: Create a new REST API endpoint
prompt: |
  Create a new {method} endpoint at {path}
  - Add route handler in src/routes/
  - Add validation schema
  - Add tests in __tests__/
  - Update OpenAPI spec
Use it:
oc recipe add-api-endpoint --method POST --path /api/users
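Under the hood, a recipe runner only needs to substitute the `{method}` and `{path}` placeholders from CLI flags before sending the prompt to the model. A minimal sketch of that substitution (the function name and behavior for unknown placeholders are assumptions, not OpenCode's documented API):

```typescript
// Fill {name}-style placeholders in a recipe prompt from a flag map.
// Unknown placeholders are left untouched so the user can spot them.
function renderRecipe(template: string, args: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
    key in args ? args[key] : match
  );
}
```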
5. Diff-Based Editing
OpenCode shows exactly what it will change before applying:
oc refactor auth.ts "use bcrypt instead of md5"
Proposed changes:
───────────────────
- import crypto from 'crypto';
+ import bcrypt from 'bcrypt';
- const hash = crypto.createHash('md5').update(password).digest('hex');
+ const hash = await bcrypt.hash(password, 10);
Apply? [y/n]
This transparency prevents "AI surprises."
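The -/+ preview above can be produced with a very small amount of code. The sketch below is a naive set-difference "diff" for illustration only; a real tool would compute a proper LCS-based diff to handle moved and repeated lines:

```typescript
// Naive change preview: lines missing from the new version are
// printed with "-", lines new to it with "+". Illustrative only;
// repeated or reordered lines would need a real diff algorithm.
function previewDiff(oldLines: string[], newLines: string[]): string[] {
  const out: string[] = [];
  for (const line of oldLines) {
    if (!newLines.includes(line)) out.push(`- ${line}`);
  }
  for (const line of newLines) {
    if (!oldLines.includes(line)) out.push(`+ ${line}`);
  }
  return out;
}
```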
Real-World Workflows
Workflow 1: Backend Developer
# Morning routine
cd ~/projects/api-server
oc index
# Work on feature
oc "add rate limiting to /api/public endpoints"
# Review changes
git diff
# Run tests
oc test "check if rate limiting works"
# Commit
git add .
git commit -m "Add rate limiting"
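A request like "add rate limiting" in the workflow above would typically yield something like a token bucket. The sketch below is a self-contained illustration of that pattern, not output from the tool; the capacity and refill rate are arbitrary, and it is not wired into any web framework:

```typescript
// Token-bucket rate limiter: each request spends one token; tokens
// refill continuously up to a fixed capacity. Time is passed in
// explicitly so the logic is deterministic and testable.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // maximum burst size
    private refillPerSec: number, // tokens restored per second
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  allow(now: number = Date.now()): boolean {
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```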
Workflow 2: DevOps Engineer (Remote Server)
ssh production-server
# Investigate issue
oc "why is nginx returning 502?"
# > Found: Upstream server is down. Check pm2 status.
pm2 status
# > App crashed due to memory leak
oc "find memory leak in app.js"
# > Likely cause: Line 42, unclosed database connections
oc fix app.js "close all db connections properly"
pm2 restart app
Workflow 3: Security Auditing
oc audit
# > Found: SQL injection vulnerability in user-search.ts
# > Found: Missing CSRF token validation
# > Found: Hardcoded API key in config.ts
oc fix --all-security-issues
Advanced Use Cases
Multi-Step Automation
OpenCode can chain commands:
oc "migrate from Express to Fastify, update all routes, fix tests, ensure 100% test coverage"
The agent will:
- Analyze the current Express setup
- Generate Fastify equivalents
- Run tests after each change
- Report which files need manual review
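The chaining described above amounts to running steps in order and stopping at the first failure so the remaining files can be flagged for manual review. A minimal sketch of that control flow, with step names and the boolean success convention assumed for illustration:

```typescript
// Run migration steps in order; stop at the first failure and report
// which steps completed and which one failed. Step contents are
// illustrative stand-ins for "analyze", "convert", "run tests", etc.
interface Step {
  name: string;
  run: () => boolean; // true = step succeeded (e.g. tests passed)
}

function runPipeline(steps: Step[]): { completed: string[]; failed?: string } {
  const completed: string[] = [];
  for (const step of steps) {
    if (!step.run()) return { completed, failed: step.name };
    completed.push(step.name);
  }
  return { completed };
}
```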
Cross-Language Support
OpenCode works with any language:
oc "convert this Python script to Rust"
oc "explain this C++ template metaprogramming"
oc "optimize this SQL query for PostgreSQL 16"
Documentation Generation
oc document --format openapi src/routes/**/*.ts > api-spec.yaml
oc document --format markdown src/ > README.md
Limitations
- No GUI: If you hate terminals, OpenCode isn't for you
- Setup Required: You need to configure models, context, and recipes
- Less Polished: No drag-and-drop, no slick animations
- Smaller Community: Fewer plugins than VS Code ecosystem
Cost
Free Forever (Open Source)
- No subscription
- Pay only for API usage if using cloud models
- Local models are 100% free
Typical Costs:
- Claude Sonnet: ~$0.05 per task
- Local Llama 3.3: $0.00 (requires GPU)
Who Should Use OpenCode?
Perfect For:
- Backend developers
- DevOps/SRE engineers
- Security researchers
- Anyone working on remote servers
- Developers who value privacy
- Vim/Emacs power users
Not Ideal For:
- Beginners unfamiliar with terminals
- Frontend-only developers who need visual feedback
- Teams wanting zero setup
Comparison: OpenCode vs. Traditional CLIs
| Feature | OpenCode | git / make / npm |
|---|---|---|
| Task Understanding | Natural language | Command memorization |
| Error Handling | Self-correcting | Manual debugging |
| Cross-Project | Adapts to any project | Requires per-project setup |
Getting Started
# Install
npm install -g opencode
# Or
brew install opencode
# Configure
oc config init
# Index your project
cd ~/my-project
oc index
# Start coding
oc "refactor utils.ts to use functional programming"
The Future of OpenCode
The roadmap includes:
- Web UI (optional) for teams who want visual dashboards
- Plugin System for custom tools (linters, formatters, deployers)
- Multi-Agent Mode (like Antigravity's parallel agents)
But the core will always remain: a terminal-native, privacy-first, model-agnostic AI agent.
Conclusion
OpenCode proves that you don't need a multi-billion-dollar company to build powerful AI tools. It represents the hacker ethos: build what you need, share it, and keep it open.
If you value control, privacy, and flexibility over polish and convenience, OpenCode is the future of your workflow.