Improve tool calling, Ollama resilience, and response handling #45
Closed
MichaelAnders wants to merge 7 commits into Fast-Editor:main from …
Conversation
- Add fallback parsing for Ollama models that return tool calls as JSON text in message content instead of using the structured `tool_calls` field (see the sketch below)
- Return tool results directly to the CLI instead of making a follow-up LLM call, reducing latency and preventing hallucinated rewrites of output
- Add a dedicated Glob tool returning plain text (one path per line) instead of JSON, with `workspace_list` accepting both 'pattern' and 'patterns'
- Clarify why Glob is not aliased to `workspace_list` (format mismatch)
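A minimal sketch of what such fallback parsing might look like in TypeScript; `extractToolCalls` and the message shapes are assumptions for illustration, not the project's actual API:

```typescript
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

interface ChatMessage {
  content: string;
  tool_calls?: ToolCall[];
}

function extractToolCalls(message: ChatMessage): ToolCall[] {
  // Prefer the structured field when the model populated it.
  if (message.tool_calls?.length) return message.tool_calls;

  // Fallback: some Ollama models print {"name": ..., "arguments": ...}
  // (or an array of such objects) directly in the content string.
  try {
    const parsed = JSON.parse(message.content.trim());
    const candidates: unknown[] = Array.isArray(parsed) ? parsed : [parsed];
    return candidates.filter(
      (c): c is ToolCall =>
        !!c &&
        typeof (c as ToolCall).name === "string" &&
        typeof (c as ToolCall).arguments === "object"
    );
  } catch {
    return []; // Plain prose, not a disguised tool call.
  }
}
```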
- Additional logging for tool call parsing and execution
- Hard-coded shell commands for reliable tool execution
- Deduplication of tool calls within a single response (sketched below)
- Collect and return results from all called tools
- Ollama uses the specified Ollama model
- Fix double-serialized JSON parameters from some providers (sketched below)
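Hedged sketches of two of these fixes, deduplication and double-serialized arguments; all names are illustrative, and `ToolCall` mirrors the shape from the previous sketch:

```typescript
type ToolCall = { name: string; arguments: Record<string, unknown> };

// Some providers double-serialize arguments: a JSON string whose parsed
// value is itself a JSON string. Unwrap up to twice until we reach an object.
function normalizeArguments(raw: unknown): Record<string, unknown> {
  let value: unknown = raw;
  for (let i = 0; i < 2 && typeof value === "string"; i++) {
    try {
      value = JSON.parse(value);
    } catch {
      break;
    }
  }
  return typeof value === "object" && value !== null
    ? (value as Record<string, unknown>)
    : {};
}

// Drop exact duplicates (same tool, same arguments) within one response.
function dedupeToolCalls(calls: ToolCall[]): ToolCall[] {
  const seen = new Set<string>();
  return calls.filter((call) => {
    const key = `${call.name}:${JSON.stringify(call.arguments)}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```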
Enable fs_read to handle paths outside the workspace (~/... and absolute paths) via a two-phase approval flow: the tool first returns a 403 asking the LLM to get user confirmation, then reads the file on a second call with user_approved=true. Write/edit remain workspace-only.
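A sketch of how the two-phase flow could be wired; the `fsRead`/`FsReadArgs` names and return shape are hypothetical, and only the 403 phase, the `user_approved` flag, and the `~/` handling come from the commit:

```typescript
import { readFile } from "node:fs/promises";
import * as os from "node:os";
import * as path from "node:path";

interface FsReadArgs {
  path: string;
  user_approved?: boolean;
}

async function fsRead(args: FsReadArgs, workspaceRoot: string) {
  // Expand ~/ and normalize to an absolute path.
  const expanded = args.path.startsWith("~/")
    ? path.join(os.homedir(), args.path.slice(2))
    : args.path;
  const resolved = path.resolve(expanded);
  const outsideWorkspace = !resolved.startsWith(
    path.resolve(workspaceRoot) + path.sep
  );

  // Phase 1: refuse with a 403-style result so the LLM asks the user first.
  if (outsideWorkspace && !args.user_approved) {
    return {
      status: 403,
      error:
        `${resolved} is outside the workspace. ` +
        "Ask the user to confirm, then call fs_read again with user_approved=true.",
    };
  }

  // Phase 2 (or any in-workspace read): return the file contents.
  return { status: 200, content: await readFile(resolved, "utf8") };
}
```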
Tool results now loop back to the model for natural language synthesis (sketched below) instead of being returned raw to the CLI. This fixes the bug where conversational messages (e.g. "hi") triggered tool calls and dumped raw output. Additional improvements:

- Context-aware tiered compression that scales with the model context window
- Empty response detection with retry-then-fallback
- `_noToolInjection` flag to prevent provider-level tool re-injection
- Auto-approve external file reads in the tool executor
- Conversation context search in `workspace_search`
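A minimal sketch of that loop-back under an assumed OpenAI-style message list; `callModel`, `executeTool`, and the step cap are placeholders for the project's own agent loop:

```typescript
type ToolCall = { name: string; arguments: Record<string, unknown> };
type Role = "system" | "user" | "assistant" | "tool";
interface Msg { role: Role; content: string }

declare function callModel(
  messages: Msg[]
): Promise<{ message: Msg; toolCalls: ToolCall[] }>;
declare function executeTool(call: ToolCall): Promise<string>;

async function agentTurn(messages: Msg[]): Promise<string> {
  for (let step = 0; step < 5; step++) {          // bounded loop, no runaway
    const { message, toolCalls } = await callModel(messages);
    if (toolCalls.length === 0) return message.content; // natural-language answer
    messages.push(message);
    for (const call of toolCalls) {
      // Feed raw tool output back so the model can synthesize prose,
      // instead of dumping it straight to the CLI.
      messages.push({ role: "tool", content: await executeTool(call) });
    }
  }
  return "Tool loop limit reached.";
}
```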
Three concurrent runAgentLoop calls per user message waste GPU time with large models (~30s each). This adds a SUGGESTION_MODE_MODEL config to skip suggestion mode ("none") or redirect it to a lighter model. Also adds ISO timestamps and mode tags to the debug logs.
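A plausible shape for that gate; `maybeRunSuggestionMode`, `runAgentLoop`, and `DEFAULT_MODEL` are hypothetical names, and only the `SUGGESTION_MODE_MODEL` variable and its `none` sentinel come from the commit:

```typescript
declare function runAgentLoop(opts: {
  model?: string;
  prompt: string;
  tools: unknown[];
}): Promise<string>;

const suggestionModel = process.env.SUGGESTION_MODE_MODEL ?? "";

async function maybeRunSuggestionMode(prompt: string): Promise<string | null> {
  if (suggestionModel === "none") return null; // skip the extra GPU round-trip
  // Redirect to the lighter model when one is configured; otherwise fall
  // back to the main model (DEFAULT_MODEL is a placeholder).
  const model = suggestionModel || process.env.DEFAULT_MODEL;
  return runAgentLoop({ model, prompt, tools: [] });
}
```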
- Add ECONNREFUSED to retryable errors and check undici TypeError `.cause.code` so connection-refused errors get retried with backoff (see the sketch below)
- Wrap invokeModel in try/catch, returning a structured 503 with a `provider_unreachable` error instead of letting a raw TypeError bubble up to the Express error middleware
- Fix the suggestion mode early return response shape (json -> body) to match router expectations
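A sketch of the retry predicate and the 503 wrapper under assumed names (`isRetryable`, `invokeModel`, a `/chat` route); the backoff loop itself is omitted:

```typescript
import express from "express";

declare function invokeModel(body: unknown): Promise<unknown>;

// undici wraps connection failures in a TypeError whose .cause carries the
// system error code, so check both the error and its cause.
function isRetryable(err: unknown): boolean {
  const direct = (err as { code?: string })?.code;
  const nested = (err as { cause?: { code?: string } })?.cause?.code;
  return direct === "ECONNREFUSED" || nested === "ECONNREFUSED";
}

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  try {
    res.json(await invokeModel(req.body));
  } catch (err) {
    if (isRetryable(err) || err instanceof TypeError) {
      // Structured 503 instead of a raw TypeError reaching the
      // Express error middleware.
      res.status(503).json({ error: "provider_unreachable" });
      return;
    }
    res.status(500).json({ error: "internal_error" });
  }
});
```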
… responses

The heuristic-based stripThinkingBlocks() matched standard markdown bullets (- item, * item) as "thinking block markers" and dropped all subsequent content. Replace it with stripThinkTags(), which only strips `<think>...</think>` tags used by models like DeepSeek and Qwen for chain-of-thought reasoning.
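A plausible shape for the replacement; the regex is an assumption consistent with the description (strip only explicit `<think>` blocks, leave everything else alone):

```typescript
// Only remove explicit <think>...</think> blocks; markdown bullets survive.
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

// Example: the reasoning block goes, the list stays.
// stripThinkTags("<think>plan steps</think>\n- item one\n- item two")
//   => "- item one\n- item two"
```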
Summary
- Fix `stripThinkingBlocks()` destroying markdown bullet points in Ollama responses
- Add `SUGGESTION_MODE_MODEL` env var to control suggestion mode LLM calls
- Remove `tool_result_direct` short-circuit and improve the agentic loop

Problem
Several issues with the agentic loop and Ollama integration:
- `stripThinkingBlocks()` was destroying valid response content: the regex matched standard markdown bullets as "thinking markers", causing most list-based responses to be truncated after the first heading.
- No graceful handling when Ollama is offline: requests would fail hard with no retry or user-friendly error.
- Suggestion mode wasted GPU resources: every user message triggered a full agentic loop with tools for suggestion prediction, blocking responses on large models (70b+).
- Tool calling response handling had edge cases with Ollama's response format.
Changes
- Replace the `stripThinkingBlocks()` heuristic with explicit `<think>` tag stripping
- Add `SUGGESTION_MODE_MODEL` env var (`none` to disable, or redirect to a smaller model)
- Remove the `tool_result_direct` short-circuit that could skip tool execution

Testing
- Suggestion mode disabled (`SUGGESTION_MODE_MODEL=none`)