AI-powered modular development assistant - currently in early preview.
Caution
This project is a research demonstrator. It is in early development and may change significantly. Running permissive AI tools on your computer requires careful attention to security and close human supervision, and even then things can still go wrong. Use it with caution and at your own risk; we have NOT built in the safety systems yet. We are doing this exploration in the open so others can join the conversation, not shipping a product or "official release".
Note
Looking for the earlier Claude Code-based version? The previous version of Amplifier, built on top of Claude Code, has been moved to the amplifier-claude branch.
Amplifier brings AI assistance to your command line with a modular, extensible architecture.
This CLI is just one interface—the reference implementation. The real power is the modular platform underneath. Soon you'll see web interfaces, mobile apps, voice-driven coding, and even Amplifier-to-Amplifier collaborative experiences. The community will build custom interfaces, mixing and matching modules dynamically to craft tailored AI experiences.
Important
Amplifier is currently developed and tested on macOS, Linux, and Windows Subsystem for Linux (WSL). Native Windows shells have known issues—use WSL unless you're actively contributing Windows fixes.
# macOS/Linux/WSL
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows PowerShell
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

uv tool install git+https://github.com/microsoft/amplifier

# First-time wizard (auto-detects missing config)
amplifier init
# Ask a question
amplifier run "Explain async/await in Python"
# Start chat mode
amplifier

amplifier collection add git+https://github.com/microsoft/amplifier-collection-toolkit@main
amplifier collection add git+https://github.com/microsoft/amplifier-collection-design-intelligence@main
# Explore their profiles
amplifier profile use toolkit-dev
amplifier profile use designer

Both collections also ship focused agents you can invoke from any profile by name; use /agents in chat and look for the ones with the toolkit: or design-intelligence: prefixes.
First time? Quick setup wizard:
With Anthropic Claude (recommended)
Provider? [1] Anthropic [2] OpenAI [3] Azure OpenAI [4] Ollama: 1
API key: ••••••••
Get one: https://console.anthropic.com/settings/keys
✓ Saved
Model? [1] claude-sonnet-4-5 [2] claude-opus-4-1 [3] custom: 1
✓ Using claude-sonnet-4-5
Profile? [1] dev [2] base [3] full: 1
✓ Using 'dev' profile
Ready! Starting chat...
>
With Azure OpenAI (enterprise)
Provider? [1] Anthropic [2] OpenAI [3] Azure OpenAI [4] Ollama: 3
Azure endpoint: https://my-resource.openai.azure.com/
✓ Saved
Authentication? [1] API key [2] Azure CLI (az login): 2
✓ Using DefaultAzureCredential
(Works with 'az login' locally or managed identity in Azure)
Deployment name: gpt-5.1-codex
Note: Use your Azure deployment name, not the model name
✓ Configured
Profile? [1] dev [2] base [3] full: 1
✓ Using 'dev' profile
Ready! Starting chat...
>
With OpenAI
Provider? [1] Anthropic [2] OpenAI [3] Azure OpenAI [4] Ollama: 2
API key: ••••••••
Get one: https://platform.openai.com/api-keys
✓ Saved
Model? [1] gpt-5.1 [2] gpt-5-mini [3] gpt-5.1-codex [4] o1 [5] custom: 1
✓ Using gpt-5.1
Profile? [1] dev [2] base [3] full: 1
✓ Using 'dev' profile
Ready! Starting chat...
>
With Ollama (local, free)
Provider? [1] Anthropic [2] OpenAI [3] Azure OpenAI [4] Ollama: 4
Model? [1] llama3 [2] codellama [3] mistral [4] custom: 1
✓ Using llama3
Make sure Ollama is running:
ollama serve
ollama pull llama3
Profile? [1] dev [2] base [3] full: 1
✓ Using 'dev' profile
Ready! Starting chat...
>
That's it! From nothing to a productive AI assistant in 90 seconds.
First of all, this is still VERY early and we have not brought most of our features over yet, so keep your expectations low; we'll ramp things up quickly over the next week or two. Consider this an early sneak peek.
- Generate code - From simple functions to full applications
- Debug problems - Systematic error resolution with the bug-hunter agent
- Design systems - Architecture planning with the zen-architect agent
- Research solutions - Find patterns and best practices with the researcher agent
- Build modules - Use Amplifier to create new Amplifier modules (yes, really!)
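Each of these maps onto a plain amplifier run prompt. The prompts below are only illustrations, and the agent names refer to the bundled agents described later in this README:

# Generate code
amplifier run "Create a small CLI that converts CSV files to JSON"
# Debug with the bug-hunter agent
amplifier run "Use bug-hunter to debug this error: [paste error]"
# Plan architecture with zen-architect
amplifier run "Use zen-architect to design a caching layer for this service"
# Research with the researcher agent
amplifier run "Use researcher to compare retry and backoff patterns for HTTP clients"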
Key features:
- Modular: Swap AI providers, tools, and behaviors like LEGO bricks
- Profile-based: Pre-configured capability sets for different scenarios
- Session persistence: Pick up where you left off, even across projects
- Extensible: Build your own modules, interfaces, or entire custom experiences
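Each of these shows up directly in the CLI. A quick taste, using commands that are covered in more detail later in this README:

# Swap the AI provider
amplifier provider use openai
# Switch to a lighter profile
amplifier profile use base
# Pull in an extra tool module
amplifier module add tool-jupyter
# Pick up your last session where you left off
amplifier continue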
Developer Tools:
- Log Viewer: Web-based tool for debugging sessions with real-time log streaming and interactive JSON inspection
# Install and run the log viewer while developing
uv tool install git+https://github.com/microsoft/amplifier-app-log-viewer@main
amplifier-log-viewer

Amplifier works with multiple AI providers:
- Anthropic Claude - Recommended, most tested (Sonnet 4.5, Opus models)
- OpenAI - Good alternative (GPT-5, GPT-5-Mini, GPT-5-Codex)
- Azure OpenAI - Enterprise users with Azure subscriptions (supports managed identity)
- Ollama - Local, free, no API key needed (llama3, codellama, etc.)
Switch providers anytime:
# Switch provider (interactive - prompts for model/config)
amplifier provider use openai
# Or explicit
amplifier provider use anthropic --model claude-opus-4-1
amplifier provider use azure-openai --deployment gpt-5.1-codex

Note: We've done most of our early testing with Anthropic Claude. Other providers are supported but may have rough edges we're actively smoothing out.
# Start a conversation
amplifier
# Or explicitly
amplifier run --mode chat

In chat mode:

- Context persists across messages
- Use /help to see available commands
- Use /tools, /agents, /status, /config to inspect the session
- Use /think and /do to toggle plan mode
- Type exit or Ctrl+C to quit
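For instance, a typical interactive session might involve typing the following (responses omitted; this is only an illustration, and it assumes /think turns plan mode on while /do turns it back off):

/status    # inspect the current session
/agents    # list the agents you can call on
/think     # toggle plan mode on before a big change
/do        # toggle plan mode off when you're ready to execute
exit       # end the session; the conversation is saved automatically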
# Get quick answers
amplifier run "Explain async/await in Python"
# Generate code
amplifier run "Create a REST API for a todo app with FastAPI"
# Debug issues
amplifier run "Why does this code throw a TypeError: [paste code]"Profiles are pre-configured capability sets for different scenarios:
# See available profiles
amplifier profile list
# Use a specific profile
amplifier run --profile dev "Your prompt"
# Set as default
amplifier profile use dev

Bundled profiles:

- foundation - Absolute minimum (provider + orchestrator only)
- base - Essential tools (filesystem, bash, logging)
- dev - Full development setup (web, search, agents) — default & recommended
- test - Focused testing utilities layered on top of base
- full - Showcase build with nearly every module enabled; great for demos, less optimal for day-to-day work
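In practice you can pick the profile per task; for example (the prompts are illustrative):

# Lightweight local edits with just the essential tools
amplifier run --profile base "Rename the helper functions in utils.py"
# Full dev setup when you need web, search, and agents
amplifier run --profile dev "Research FastAPI auth patterns and draft a design"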
Specialized agents for focused tasks:
# Let the AI delegate to specialized agents
amplifier run "Design a caching layer with careful consideration"
# The AI will use zen-architect when appropriate
# Or request specific agents
amplifier run "Use bug-hunter to debug this error: [paste error]"Bundled agents:
- zen-architect - System design with ruthless simplicity
- bug-hunter - Systematic debugging
- researcher - Content research and synthesis
- modular-builder - Code implementation
- explorer - Breadth-first exploration of local code, docs, and other files with citation-ready summaries
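Two more illustrative requests that lean on the remaining agents (the file path here is hypothetical):

amplifier run "Use explorer to map the modules in this repo and cite the relevant files"
amplifier run "Use modular-builder to implement the design in docs/cache-design.md"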
Every interaction is automatically saved:
# Resume most recent session
amplifier continue
# Resume with new prompt (single-shot mode)
amplifier continue "follow-up question"
# List your recent sessions (current project only)
amplifier session list
# See all sessions across all projects
amplifier session list --all-projects
# View session details
amplifier session show <session-id>
# Resume a specific session (interactive mode)
amplifier session resume <session-id>
# Resume specific session with new prompt
amplifier run --resume <session-id> "new question"

Sessions are project-scoped—when you're in /home/user/myapp, you see only myapp sessions. Change directories, see different sessions. Your work stays organized.
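For example, with two checkouts side by side (the second path is hypothetical):

cd /home/user/myapp
amplifier session list                  # only myapp sessions
cd /home/user/otherapp
amplifier session list                  # only otherapp sessions
amplifier session list --all-projects   # everything, from any directory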
# Switch provider (interactive - prompts for model)
amplifier provider use openai
# Or explicit
amplifier provider use anthropic --model claude-opus-4-1
# Azure OpenAI (needs endpoint + deployment)
amplifier provider use azure-openai
Azure endpoint: https://my-resource.openai.azure.com/
Auth? [1] API key [2] Azure CLI: 2
Deployment: gpt-5.1-codex
# Configure where to save
amplifier provider use openai --model gpt-5.1 --local # Just you
amplifier provider use anthropic --model claude-opus-4-1 --project # Team
# See what's active
amplifier provider current

# Switch profile
amplifier profile use dev
amplifier profile use base
amplifier profile use test
amplifier profile use foundation
# See what's active
amplifier profile current

# Add module
amplifier module add tool-jupyter
amplifier module add tool-custom --project
# See loaded modules
amplifier module current

See docs/USER_ONBOARDING.md#quick-reference for the complete command reference.
Profiles configure your Amplifier environment with providers, tools, agents, and settings.
→ Profile Authoring Guide - Complete guide to creating profiles
API Reference: amplifier-profiles
Agents are specialized AI personas for focused tasks.
→ Agent Authoring Guide - Complete guide to creating agents
Core Libraries:
- amplifier-core - Kernel mechanisms and contracts
- amplifier-profiles - Profile/agent loading and compilation
- amplifier-collections - Collections system
- amplifier-config - Configuration management
- amplifier-module-resolution - Module source resolution
Reference Implementation:
- amplifier-app-cli - CLI application (this implementation)
Architecture:
- Repository Rules - Where docs go, what references what
- Module Catalog - Available providers, tools, hooks, orchestrators
Note: Amplifier is under active development. Some documentation links are being consolidated. If you encounter issues, please report them.
Today: A powerful CLI for AI-assisted development.
Tomorrow: A platform where:
- Multiple interfaces coexist - CLI, web, mobile, voice, IDE plugins
- Community modules extend capabilities infinitely
- Dynamic mixing - Amplifier composes custom solutions from available modules
- AI builds AI - Use Amplifier to create new modules with minimal manual coding
- Collaborative AI - Amplifier instances work together on complex tasks
The modular foundation we're building today enables all of this. You're getting in early on something that's going to fundamentally change how we work with AI.
This is an early preview release:
- APIs are stabilizing but may change
- Some features are experimental
- Documentation is catching up with code
- We're moving fast—breaking changes happen
What works today:
- ✅ Core AI interactions (Anthropic Claude)
- ✅ Profile-based configuration
- ✅ Agent delegation
- ✅ Session persistence
- ✅ Module loading from git sources
What's rough around the edges:
- ⚠️ Other providers need more testing
- ⚠️ Some error messages could be clearer
- ⚠️ Documentation is incomplete in places
- ⚠️ Installation experience will improve
Join us on this journey! Fork, experiment, build modules, share feedback. This is the ground floor.
Note
This project is not currently accepting external contributions, but we're actively working toward opening this up. We value community input and look forward to collaborating in the future. For now, feel free to fork and experiment!
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit Contributor License Agreements.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.