
Conversation

@KrshnKush commented Jan 29, 2026

================================================================================
PR DESCRIPTION (for use as PR title / first comment)

Title suggestion:
Fix unintended user message text that triggers content filters (Closes #4249)

Short description:
Replaces the fallback user message text "Handle the requests as specified in the System Instruction." with "Handle the incoming request according to the provided requirements." in lite_llm.py. The previous phrasing trips OpenAI/Azure prompt-injection (jailbreak) content filters when LiteLLM is used with OpenAI/Azure models after tool calls, causing a 400 ContentPolicyViolationError. The new wording avoids this while preserving the intent of the fallback.

================================================================================
FILLED PULL REQUEST TEMPLATE

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

1. Link to an existing issue (if applicable): #4249

2. Or, if no issue exists, describe the change:

Problem:
When using LiteLLM with OpenAI/Azure models, after tool calls the fallback user message "Handle the requests as specified in the System Instruction." is injected into the conversation. This phrasing triggers Azure OpenAI's content management policy (jailbreak detection), resulting in:

openai.BadRequestError: ... 'code':'content_filter','innererror': {'code':'ResponsibleAIPolicyViolation','content_filter_result': {'jailbreak': {'filtered': True,'detected': True} ...

The flow is: user message → model tool call → tool response (function_response) → _append_fallback_user_content_if_missing adds the fallback text → that text is sent as a user message and is flagged as prompt injection.
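
For illustration, this is roughly the payload shape that reaches OpenAI/Azure in that flow, assuming the standard chat-completions message format (the surrounding messages and tool name are made up for this example; only the final fallback text comes from lite_llm.py):

  # Roughly the message sequence LiteLLM forwards after a tool call
  # (illustrative shapes only, not taken from the ADK source).
  messages = [
      {"role": "system", "content": "You are a calculator assistant."},
      {"role": "user", "content": "What is 12 * 7?"},
      {
          "role": "assistant",
          "tool_calls": [{
              "id": "call_1",
              "type": "function",
              "function": {"name": "multiply", "arguments": '{"a": 12, "b": 7}'},
          }],
      },
      {"role": "tool", "tool_call_id": "call_1", "content": "84"},
      # Appended by _append_fallback_user_content_if_missing because the last
      # turn carries no user text; this is the wording Azure's jailbreak
      # detector flags as prompt injection:
      {"role": "user", "content": "Handle the requests as specified in the System Instruction."},
  ]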

Solution:
Replace the fallback text with neutral wording that does not trigger content filters: "Handle the incoming request according to the provided requirements." This keeps the fallback behavior for backends that need a user message with content, while avoiding the jailbreak/self-referential phrasing that causes OpenAI/Azure to reject the request. Changes are limited to the two occurrences in _append_fallback_user_content_if_missing() in src/google/adk/models/lite_llm.py (Part.from_text inline append and the new Content role="user" branch).
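
A minimal sketch of what the two patched occurrences amount to, assuming a helper that receives the request contents and mutates them in place (this is not the ADK source; the real function's checks and signature in lite_llm.py differ in detail):

  from google.genai import types

  FALLBACK_USER_TEXT = (
      "Handle the incoming request according to the provided requirements."
  )


  def _append_fallback_user_content_if_missing(
      contents: list[types.Content],
  ) -> None:
    """Sketch of the fallback behavior described above, not the ADK source."""
    last = contents[-1] if contents else None
    has_user_text = (
        last is not None
        and last.role == "user"
        and any(p.text for p in (last.parts or []))
    )
    if has_user_text:
      return  # Real user text already present; nothing to add.
    if last is not None and last.role == "user":
      # Inline-append occurrence: the trailing user turn exists (e.g. it holds
      # only a function_response after a tool call) but carries no text.
      last.parts = list(last.parts or []) + [
          types.Part.from_text(text=FALLBACK_USER_TEXT)
      ]
    else:
      # New-Content occurrence: no trailing user turn at all, so append one.
      contents.append(
          types.Content(
              role="user",
              parts=[types.Part.from_text(text=FALLBACK_USER_TEXT)],
          )
      )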

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Please include a summary of passed pytest results.

Manual End-to-End (E2E) Tests:

  1. Install google-adk (e.g. 1.22.1 or later) with this change.
  2. Create an agent with at least one tool, using LiteLLM with an OpenAI or Azure OpenAI model (a minimal sketch of such an agent follows this list).
  3. Run a conversation that triggers a tool call (e.g. ask a math question if the tool is a calculator).
  4. Verify that no ContentPolicyViolationError / content_filter error occurs.
  5. Optionally inspect the request payload to confirm the fallback user message now uses "Handle the incoming request according to the provided requirements." instead of the previous text.
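
A minimal sketch of such a reproduction agent (the model string, agent name, and tool are example choices, not requirements; point the LiteLLM model at your own OpenAI or Azure deployment):

  # agent.py -- example reproduction agent for the E2E check above.
  from google.adk.agents import Agent
  from google.adk.models.lite_llm import LiteLlm


  def multiply(a: float, b: float) -> float:
    """Multiplies two numbers."""
    return a * b


  root_agent = Agent(
      name="calculator_agent",
      model=LiteLlm(model="azure/gpt-4o"),  # example; any OpenAI/Azure model via LiteLLM
      instruction="Use the multiply tool to answer arithmetic questions.",
      tools=[multiply],
  )

Running this (e.g. via adk run or adk web) and asking a question like "What is 12 * 7?" should trigger a tool call; with the fix, the follow-up request should complete without a content_filter error.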

Please provide logs or screenshots to help reviewers better understand the fix.

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

================================================================================
CODE CHANGES (for reference)

File: src/google/adk/models/lite_llm.py

  1. Around line 491–494 (inline append):
    Replace:
    text="Handle the requests as specified in the System Instruction."
    With:
    text="Handle the incoming request according to the provided requirements."

  2. Around lines 502–505 (new Content branch):
    Replace:
    text=(
    "Handle the requests as specified in the System"
    " Instruction."
    )
    With:
    text="Handle the incoming request according to the provided requirements."

@gemini-code-assist (Contributor)

Summary of Changes

Hello @KrshnKush, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the default fallback message that LiteLLM's OpenAI/Azure integration uses when user content is missing. The text has been updated to 'Handle the incoming request according to the provided requirements.' According to the PR description, this is part of an effort to prevent unintended user message injection that could trigger content filters; note, however, that the specific logic change for function_response handling mentioned in the description is not present in the provided diffs.

Highlights

  • Fallback Message Update: The default fallback user message text within the _append_fallback_user_content_if_missing function in src/google/adk/models/lite_llm.py has been updated. The previous message 'Handle the requests as specified in the System Instruction.' is now 'Handle the incoming request according to the provided requirements.'
  • Test Updates: Unit tests in tests/unittests/models/test_litellm.py have been updated to expect the new fallback message text, keeping them in sync with the change (a sketch of the kind of assertion involved follows).
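
A sketch of the kind of assertion those tests now imply (hypothetical, self-contained example; the real tests in test_litellm.py build full requests, and the helper's actual signature should be checked before reusing this):

  from google.genai import types
  from google.adk.models import lite_llm


  def test_fallback_user_text_is_neutral():
    # Hypothetical scenario: the last content is a user turn carrying only a
    # function response, i.e. no user text.
    contents = [
        types.Content(
            role="user",
            parts=[
                types.Part(
                    function_response=types.FunctionResponse(
                        name="multiply", response={"result": 84}
                    )
                )
            ],
        )
    ]
    # Assumes the helper mutates `contents` in place; verify the real
    # signature in lite_llm.py before copying this.
    lite_llm._append_fallback_user_content_if_missing(contents)
    assert (
        contents[-1].parts[-1].text
        == "Handle the incoming request according to the provided requirements."
    )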


@adk-bot added the models [Component] Issues related to model support label on Jan 29, 2026

adk-bot commented Jan 29, 2026

Response from ADK Triaging Agent

Hello @KrshnKush, thank you for creating this PR!

Could you please update the pull request description to include a testing plan and provide logs or a screenshot demonstrating the fix? This information will help reviewers review your PR more efficiently.

For more details, please see the contribution guidelines. Thanks!

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request aims to fix an unintended user message injection when using LiteLLM with OpenAI/Azure. The implemented change updates the text of the fallback message. However, as noted in the detailed comment, there's a significant discrepancy between this change and the solution outlined in the PR description. The description points to a necessary logic change in _part_has_payload to prevent the fallback message from being added in the first place after a tool call, which appears to be missing. Please clarify if the intended logic fix is part of this PR, as the current changes may not resolve the underlying issue.

Comment on lines 492 to 494

  types.Part.from_text(
  -   text="Handle the requests as specified in the System Instruction."
  +   text="Handle the incoming request according to the provided requirements."
  )

Severity: critical

There seems to be a discrepancy between the implemented change and the solution described in the pull request. The PR description states the fix is to treat function_response as a valid payload in _part_has_payload to prevent the fallback message from being injected incorrectly. However, this change only modifies the text of the fallback message itself.

This text change doesn't seem to address the core issue of the unintended message injection. Was the logic change to _part_has_payload intended to be part of this PR? Without it, the bug described (issue #4249) may not be fully resolved.
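
For context, the logic change this comment refers to might look roughly like the following (hypothetical sketch; _part_has_payload is named in the discussion, but its actual contents in lite_llm.py may differ):

  from google.genai import types


  def _part_has_payload(part: types.Part) -> bool:
    # Counting function_response as a payload would stop the fallback user
    # message from being appended right after a tool response at all.
    return bool(
        part.text
        or part.inline_data
        or part.file_data
        or part.function_call
        or part.function_response  # <- the addition this comment asks about
    )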


Development

Successfully merging this pull request may close these issues.

Unintended user message injection breaks tool calling with LiteLLM + OpenAI/Azure
