
Add response optimisation to list_ tools in default toolsets #2016

Open
kerobbi wants to merge 12 commits into main from kerobbi/response-optimisation

Conversation

@kerobbi
Contributor

@kerobbi kerobbi commented Feb 16, 2026

Summary

Adds a generic response optimisation package that reduces token usage in list_ tool responses by applying six strategies at runtime: nested object flattening, URL elimination, zero-value elimination, whitespace normalisation, collection summarisation, and fill-rate filtering.

Two config mechanisms drive it (see the sketch after this list):

  • preservedFields (html_url, draft, prerelease) - exempt from destructive strategies
  • collectionFieldExtractors (labels -> name, requested_reviewers -> login, requested_teams -> name) - extract important values from nested collections instead of summarising them as [N items]
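
For illustration, a minimal Go sketch of how these two mechanisms might be modelled. The type and field names below are assumptions for readability, not the actual pkg/response API:

```go
// Illustrative only: the real pkg/response types may be shaped differently.
package response

// Config drives the optimisation pipeline for a given list_ tool.
type Config struct {
	// PreservedFields are exempt from destructive strategies such as
	// URL elimination and zero-value elimination.
	PreservedFields []string

	// CollectionFieldExtractors maps a collection key to the subfield
	// whose values are extracted instead of summarising the collection
	// as "[N items]".
	CollectionFieldExtractors map[string]string
}

// Example configuration mirroring the values described in this PR.
var listPullRequestsConfig = Config{
	PreservedFields: []string{"html_url", "draft", "prerelease"},
	CollectionFieldExtractors: map[string]string{
		"labels":              "name",
		"requested_reviewers": "login",
		"requested_teams":     "name",
	},
}
```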

Why

Current JSON responses are token-heavy, with lots of redundant nested objects, URL fields, and zero-value noise. A single list_pull_requests tool call at 25 items can use 180k tokens. This pipeline brings that down to 55k (a ~69% reduction) without losing anything the model actually needs.

What changed

  • Added pkg/response - optimisation pipeline with two config mechanisms (preservedFields and collectionFieldExtractors)
  • Added tests covering all strategies and config interactions
  • Updated existing tests for flattened output format
  • Wired response.MarshalItems into 6 default list_ tools (list_pull_requests, list_issues, list_commits, list_tags, list_releases, list_branches); see the call-site sketch after this list
  • Did not wire list_issue_types, as its data is already flat and the pipeline produces no benefit
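
Below is a hypothetical call-site sketch of that wiring. The signature of response.MarshalItems, the Config type, the import path, and the fallback behaviour are assumptions based on this description, not the code in the PR:

```go
// Hypothetical helper showing where the pipeline sits in a list_ tool handler:
// instead of json.Marshal on the raw API structs, items pass through
// response.MarshalItems with the tool's optimisation config.
package github

import (
	"encoding/json"
	"fmt"

	"github.com/github/github-mcp-server/pkg/response" // assumed module path
)

func marshalListItems(items any, cfg response.Config) (string, error) {
	optimised, err := response.MarshalItems(items, cfg) // signature assumed
	if err != nil {
		// Assumed fallback: emit unoptimised JSON rather than failing the tool call.
		raw, jerr := json.Marshal(items)
		if jerr != nil {
			return "", fmt.Errorf("marshal list items: %w", jerr)
		}
		return string(raw), nil
	}
	return string(optimised), nil
}
```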

Token reduction

  • list_commits: ~21-25%
  • list_tags: ~73-74%
  • list_releases: ~91-94%
  • list_issues: ~5-15%
  • list_pull_requests: ~54-73%
  • list_branches: ~5-9%

Measured using OpenAI's tiktoken library (o200k_base) at 2, 10, and 25 items. Model accuracy was validated with GPT 5.1 and Opus 4.5 across six prompts per tool; no degradation was observed.

MCP impact

  • No tool or API changes
  • Tool schema or behavior changed
  • New tool added

Prompts tested (tool changes only)

Security / limits

  • No security or limits impact
  • Auth / permissions considered
  • Data exposure, filtering, or token/size limits considered

Tool renaming

  • I am renaming tools as part of this PR (e.g. a part of a consolidation effort)
    • I have added the new tool aliases in deprecated_tool_aliases.go
  • I am not renaming tools as part of this PR

Note: if you're renaming tools, you must add the tool aliases. For more information on how to do so, please refer to the official docs.

Lint & tests

  • Linted locally with ./script/lint
  • Tested locally with ./script/test

Docs

  • Not needed
  • Updated (README / docs / examples)

@kerobbi kerobbi changed the title from "WIP: Add response optimisation for list_ tools" to "Add response optimisation to list_ tools in default toolsets" on Feb 18, 2026
@kerobbi kerobbi marked this pull request as ready for review February 18, 2026 11:01
@kerobbi kerobbi requested a review from a team as a code owner February 18, 2026 11:01
Copilot AI review requested due to automatic review settings February 18, 2026 11:01
Contributor

Copilot AI left a comment

Pull request overview

This PR adds a response optimization package (pkg/response) that reduces token usage in list_ tool responses by 5-94% (depending on the tool). The optimization applies six strategies: nested object flattening with dot-notation keys, URL field elimination (except preserved ones), zero-value removal, whitespace normalization, collection summarization to [N items] or extracted fields, and fill-rate filtering to remove rarely-populated fields.
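
To make the flattening and collection-summarisation strategies concrete, here is a small self-contained Go sketch. It is not the pkg/response implementation, and it simplifies zero-value elimination to empty strings and nulls:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// flatten walks a decoded JSON object and produces dot-notation keys, drops
// *_url fields, empty strings, and nulls, trims whitespace, and summarises
// slices as "[N items]". A sketch of the strategies, not the real pipeline.
func flatten(prefix string, in map[string]any, out map[string]any) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		switch val := v.(type) {
		case map[string]any:
			flatten(key, val, out) // nested object flattening
		case []any:
			if len(val) > 0 {
				out[key] = fmt.Sprintf("[%d items]", len(val)) // collection summarisation
			}
		case string:
			// whitespace normalisation + URL elimination + empty-string removal
			if s := strings.TrimSpace(val); s != "" && !strings.HasSuffix(k, "_url") {
				out[key] = s
			}
		case nil:
			// drop nulls
		default:
			out[key] = val
		}
	}
}

func main() {
	raw := map[string]any{
		"number":   1347,
		"state":    "open",
		"body":     "  Fix flaky test  ",
		"user":     map[string]any{"login": "octocat", "avatar_url": "https://example.com/a.png"},
		"labels":   []any{map[string]any{"name": "bug"}, map[string]any{"name": "ci"}},
		"assignee": nil,
	}
	flat := map[string]any{}
	flatten("", raw, flat)

	keys := make([]string, 0, len(flat))
	for k := range flat {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s: %v\n", k, flat[k])
	}
}
```

Running it prints dot-notation keys such as user.login, drops avatar_url and the null assignee, and summarises labels as [2 items].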

The package provides two configuration mechanisms:

  • preservedFields: exempts specific keys (html_url, draft, prerelease) from destructive optimizations
  • collectionFieldExtractors: controls extraction of important subfields from collections (e.g., labels→name, requested_reviewers→login)

Changes:

  • Added pkg/response package with optimization pipeline and comprehensive test coverage
  • Wired response.MarshalItems() into 6 list tools: list_commits (depth 3 for commit.author.name), list_tags, list_releases, list_branches, list_pull_requests, list_issues
  • Updated existing tests to work with flattened output structure (map[string]any instead of typed structs)
  • Intentionally excluded list_issue_types (already flat, no benefit)

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

Summary per file:

  • pkg/response/optimize.go - Core optimization pipeline with flattening, URL removal, zero-value elimination, whitespace normalization, collection handling, and fill-rate filtering
  • pkg/response/optimize_test.go - Comprehensive test coverage for all optimization strategies and configuration interactions
  • pkg/github/repositories.go - Integrated response.MarshalItems into list_commits (depth 3), list_branches, list_tags, list_releases
  • pkg/github/repositories_test.go - Updated test assertions to work with flattened map structure instead of MinimalCommit structs
  • pkg/github/pullrequests.go - Integrated response.MarshalItems into list_pull_requests
  • pkg/github/issues.go - Integrated response.MarshalItems into list_issues, renamed response variable to issueResponse to avoid package name shadowing
  • pkg/github/issues_test.go - Updated test assertions to work with map[string]any structure instead of typed structs, removed unused verifyOrder field

@tonytrg
Contributor

tonytrg commented Feb 19, 2026

I have left a few comments; let me know what you think.
