feat(ai-proxy): add Anthropic LLM provider support #1435
base: main
Conversation
Qlty bot: 2 new issues. Coverage impact: this PR will not change total coverage (3 modified files with diff coverage).
tool_calls: msg.tool_calls.map(tc => ({
  id: tc.id,
  name: tc.function.name,
  args: JSON.parse(tc.function.arguments),
JSON.parse() can throw if tc.function.arguments contains malformed JSON. This call sits outside the try-catch block (line 210), so errors propagate as an unhandled SyntaxError instead of being wrapped in AnthropicUnprocessableError. The OpenAI API can return malformed JSON for tool-call arguments in edge cases (there are documented issues).
Fix (packages/ai-proxy/src/provider-dispatcher.ts:259): wrap JSON.parse in a try-catch, or move the convertMessagesToLangChain call inside the existing try-catch block at line 210; a sketch follows.
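A minimal sketch of the suggested wrapping, reusing the error class this PR introduces; the helper name safeParseToolArguments is hypothetical:

```typescript
// Stand-in for the error class added in this PR; the real one lives in ai-proxy.
class AnthropicUnprocessableError extends Error {}

// Hypothetical helper: surface malformed tool-call arguments as a provider
// error instead of letting a raw SyntaxError escape the dispatcher.
function safeParseToolArguments(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw) as Record<string, unknown>;
  } catch (error) {
    throw new AnthropicUnprocessableError(
      `Malformed JSON in tool call arguments: ${String(error)}`,
    );
  }
}
```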
});
}

return new AIMessage(msg.content);
LangChain message constructors (SystemMessage, HumanMessage, AIMessage) fail with null content, while the OpenAI API allows content: null for assistant messages (per the API spec).
The OpenAIMessage interface at line 117 defines content: string but should allow null. Line 255 handles this correctly with msg.content || '', but lines 249, 251, 264, 267, and 271 pass content through directly.
Suggested change:
- return new AIMessage(msg.content);
+ return new AIMessage(msg.content || '');
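A minimal sketch of the fallback the comment asks for, assuming a message shape matching the OpenAIMessage interface above; the converter name is hypothetical:

```typescript
import { AIMessage, HumanMessage, SystemMessage } from '@langchain/core/messages';

// Normalize null content before handing it to the LangChain constructors,
// which expect a string.
function toLangChainMessage(msg: { role: string; content: string | null }) {
  const content = msg.content ?? '';
  switch (msg.role) {
    case 'system':
      return new SystemMessage(content);
    case 'user':
      return new HumanMessage(content);
    case 'assistant':
      return new AIMessage(content);
    default:
      throw new Error(`Unknown message role: ${msg.role}`);
  }
}
```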
BREAKING CHANGE: isModelSupportingTools is no longer exported from ai-proxy
- Add AIModelNotSupportedError for descriptive error messages
- Move model validation from agent.addAi() to the Router constructor
- Make isModelSupportingTools internal (not exported from index)
- The error is thrown immediately at Router init if the model doesn't support tools (sketched below)

This is a bug fix: validation should happen at proxy initialization, not at the agent level. This ensures consistent behavior regardless of how the Router is instantiated.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
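A sketch of the fail-fast behavior this commit describes; only AIModelNotSupportedError and isModelSupportingTools are named in the commit, the constructor shape is assumed:

```typescript
class AIModelNotSupportedError extends Error {}

class Router {
  constructor(
    private readonly options: { model: string },
    // Injected here for the sketch; in the PR this is the now-internal
    // isModelSupportingTools helper from supported-models.ts.
    isModelSupportingTools: (model: string) => boolean,
  ) {
    // Validation happens at proxy initialization, so the error surfaces the
    // same way no matter how the Router is instantiated.
    if (!isModelSupportingTools(options.model)) {
      throw new AIModelNotSupportedError(
        `Model "${options.model}" does not support tool calls`,
      );
    }
  }
}
```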
Add comprehensive integration tests that run against the real OpenAI API:
- ai-query route: simple chat, tool calls, tool_choice, parallel_tool_calls
- remote-tools route: listing tools (empty, Brave search, MCP tools)
- invoke-remote-tool route: error handling
- MCP server integration: calculator tools with add/multiply
- Error handling: validation errors

Also adds:
- .env-test support for credentials (via dotenv)
- .env-test.example template for developers
- Jest setup to load environment variables

Run with: yarn workspace @forestadmin/ai-proxy test openai.integration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…adiness
- Add multi-turn conversation test with tool results
- Add AINotConfiguredError test for missing AI config
- Add MCP error handling tests (unreachable server, auth failure)
- Skip flaky tests due to LangChain retry behavior
- Ensure tests work on main branch without Zod validation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Set maxRetries: 0 by default when creating the ChatOpenAI instance. This makes our library a simple passthrough without automatic retries, giving users full control over retry behavior.

Also enables previously skipped integration tests that were flaky due to retry delays.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
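The passthrough comes down to a single constructor option (maxRetries is a real @langchain/openai option); a minimal sketch, with the model name purely illustrative:

```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  model: 'gpt-4o', // illustrative
  apiKey: process.env.OPENAI_API_KEY,
  // Disable LangChain's built-in retry loop so callers own the retry policy.
  maxRetries: 0,
});
```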
Addresses all issues identified in PR review:
- Add invoke-remote-tool success tests for MCP tools (add, multiply)
- Strengthen weak error assertions with proper regex patterns
- Fix the 'select AI configuration by name' test to verify no fallback warning
- Add a test for fallback behavior when the config is not found
- Add logger verification in MCP error handling tests

Tests now verify:
- Error messages match expected patterns (not just toThrow())
- The logger is called with the correct level and message on errors
- Config selection works without silent fallback

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The test was incorrectly checking for finish_reason: 'tool_calls'. When forcing a specific function via tool_choice, OpenAI returns finish_reason: 'stop' but still includes the tool_calls array.

The correct assertion is to verify that the tool_calls array contains the expected function name, not the finish_reason.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
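An excerpt-style sketch of the corrected assertion, assuming a standard OpenAI chat-completion response shape; the variable and function names are illustrative:

```typescript
const choice = response.choices[0];

// finish_reason is 'stop' when tool_choice forces a function, so assert on
// the tool_calls array instead of the finish reason.
expect(choice.message.tool_calls?.[0]?.function.name).toBe('getWeather');
```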
Move the agent-specific message "Please call addAi() on your agent" from ai-proxy to the agent package where it belongs.
- ai-proxy: AINotConfiguredError now uses the generic "AI is not configured"
- agent: catches AINotConfiguredError and adds agent-specific guidance

This keeps ai-proxy decoupled from agent-specific terminology.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
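A sketch of the re-wrapping on the agent side; the messages come from the commit, the handler itself is hypothetical:

```typescript
// Stand-in for the ai-proxy error with its now-generic message.
class AINotConfiguredError extends Error {}

// Hypothetical agent-side handler that layers agent-specific guidance on top.
function rethrowWithAgentGuidance(error: unknown): never {
  if (error instanceof AINotConfiguredError) {
    throw new Error(`${error.message}. Please call addAi() on your agent.`);
  }
  throw error;
}
```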
The test was not properly waiting for the HTTP server to close. Changed afterAll to use a Promise wrapper around the server.close() callback. This removes the need for forceExit: true in the Jest config.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests typically complete in 200-1600 ms; 30-second timeouts were excessive.
- Single API calls: 10s timeout (was 30s)
- Multi-turn conversation: 15s timeout (was 60s)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add support for Anthropic's Claude models in the ai-proxy package using @langchain/anthropic. This allows users to configure Claude as their AI provider alongside OpenAI.

Changes:
- Add @langchain/anthropic dependency
- Add ANTHROPIC_MODELS constant with supported Claude models
- Add AnthropicConfiguration type and AnthropicModel type
- Add AnthropicUnprocessableError for Anthropic-specific errors
- Implement message conversion from OpenAI format to LangChain format
- Implement response conversion from LangChain format back to OpenAI format
- Add tool binding support for Anthropic with tool_choice conversion (sketched below)
- Add comprehensive tests for the Anthropic provider

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
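One piece worth illustrating is the tool_choice conversion; a sketch based on the two public API shapes, with a hypothetical helper name:

```typescript
type OpenAIToolChoice =
  | 'auto'
  | 'none'
  | 'required'
  | { type: 'function'; function: { name: string } };

// Map OpenAI tool_choice values onto Anthropic's: 'required' becomes 'any'
// (the model must call some tool) and a forced function becomes a named
// 'tool' choice. 'auto' and 'none' are assumed to map one-to-one.
function toAnthropicToolChoice(choice: OpenAIToolChoice) {
  if (choice === 'required') return { type: 'any' as const };
  if (typeof choice === 'object') {
    return { type: 'tool' as const, name: choice.function.name };
  }
  return { type: choice };
}
```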
… dispatcher
- Move convertMessagesToLangChain inside the try-catch to properly handle JSON.parse errors
- Update the OpenAIMessage interface to allow null content (per the OpenAI API spec)
- Add null content handling for all message types, with a fallback to the empty string

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mirror the OpenAI integration tests for the Anthropic provider. Requires the ANTHROPIC_API_KEY environment variable.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Merge the OpenAI and Anthropic integration tests into a single file, using describe.each to run the same tests against both providers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
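A sketch of the describe.each layout; the provider names and env vars follow the commit messages, the test body is illustrative:

```typescript
describe.each([
  { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
  { provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY },
])('$provider integration', ({ provider, apiKey }) => {
  it('answers a simple chat message', async () => {
    // provider and apiKey select the configuration under test;
    // the same test body runs once per table row.
  });
});
```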
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…ropic models

Tests tool execution across all supported models, with informative skip messages for deprecated/unavailable models.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…list
- Import the AnthropicModel type from @anthropic-ai/sdk for autocomplete
- Allow custom strings with the (string & NonNullable<unknown>) pattern (see the sketch below)
- Remove the ANTHROPIC_MODELS constant export (now test-only)
- Add @anthropic-ai/sdk as an explicit dependency
- Add ANTHROPIC_API_KEY to the env example
- Fix Jest module resolution for @anthropic-ai/sdk submodules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
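The (string & NonNullable<unknown>) pattern keeps literal autocomplete while still accepting any string; a self-contained sketch with illustrative literals (the PR imports the real union from @anthropic-ai/sdk):

```typescript
// Known ids keep editor autocomplete; the (string & NonNullable<unknown>)
// branch widens the type so new or custom model ids still type-check.
type KnownAnthropicModel = 'claude-sonnet-4-5' | 'claude-opus-4-5'; // illustrative
type AnthropicModel = KnownAnthropicModel | (string & NonNullable<unknown>);

const known: AnthropicModel = 'claude-sonnet-4-5'; // suggested by the editor
const custom: AnthropicModel = 'my-internal-model'; // still accepted
```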
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The function is only used in Router, so it makes sense to keep it there. This simplifies the provider-dispatcher module and keeps related code together.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… file
- Create supported-models.test.ts with direct function tests
- Remove duplicate model list tests from router.test.ts
- Simplify index.ts exports using the export * pattern

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove duplicate isModelSupportingTools in router.ts (use the supported-models import)
- Guard validateConfigurations to only apply OpenAI model checks
- Add status-based error handling for Anthropic (429, 401) matching the OpenAI pattern (sketched below)
- Move message conversion outside the try-catch so input errors propagate directly
- Add explicit validation for tool_call_id on tool messages
- Add JSON.parse error handling with a descriptive AIBadRequestError
- Throw on unknown message roles instead of silently falling back to HumanMessage
- Use nullish coalescing (??) for usage metadata defaults
- Fix import ordering in the integration test
- Align router test model lists with supported-models.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
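A sketch of the status-based mapping from the error-handling bullet; the 429/401 class names are stand-ins, since the commit only says they match the OpenAI pattern:

```typescript
// Stand-in error classes; the PR's actual names are not shown in this excerpt.
class AIRateLimitError extends Error {}
class AIAuthenticationError extends Error {}

function mapAnthropicError(error: { status?: number; message: string }): Error {
  switch (error.status) {
    case 429:
      return new AIRateLimitError(`Anthropic rate limit exceeded: ${error.message}`);
    case 401:
      return new AIAuthenticationError(`Invalid Anthropic API key: ${error.message}`);
    default:
      return new Error(error.message);
  }
}
```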
The integration tests need a full rework for Anthropic support.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add Anthropic API integration tests mirroring the existing OpenAI tests:
- Basic chat, tool calls, tool_choice: required, multi-turn conversations
- Error handling for invalid API keys
- Model discovery via anthropic.models.list() with tool support verification (see the sketch below)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
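A sketch of the discovery call, using the @anthropic-ai/sdk models API; the function wrapper is illustrative:

```typescript
import Anthropic from '@anthropic-ai/sdk';

// List the model ids visible to the configured API key.
async function listAnthropicModelIds(): Promise<string[]> {
  const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
  const page = await client.models.list();
  return page.data.map(model => model.id);
}
```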
Summary
Adds Anthropic LLM provider support to @forestadmin/ai-proxy using @langchain/anthropic; providers are configured via (name, provider, model, apiKey).
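A hedged sketch of a configuration with those fields; only the field names come from this summary, the surrounding registration API is not shown in this excerpt:

```typescript
const anthropicAiConfiguration = {
  name: 'claude',             // configuration name, illustrative
  provider: 'anthropic',
  model: 'claude-sonnet-4-5', // illustrative model id
  apiKey: process.env.ANTHROPIC_API_KEY,
};
```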
Changes
- Add the @langchain/anthropic dependency
- Add the AnthropicConfiguration type with all Anthropic-specific options
- Add the ANTHROPIC_MODELS constant with supported Claude models
- Add AnthropicUnprocessableError for error handling

Test plan
🤖 Generated with Claude Code