
Conversation


@drorIvry drorIvry commented Feb 9, 2026

Description

Related Issue

Type of Change

  • 📚 Examples / docs / tutorials / dependencies update
  • 🔧 Bug fix (non-breaking change which fixes an issue)
  • 🥂 Improvement (non-breaking change which improves an existing feature)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to change)
  • 🔐 Security fix

Checklist

  • I've read the CODE_OF_CONDUCT.md document.
  • I've read the CONTRIBUTING.md guide.
  • I've updated the code style using make codestyle.
  • I've written tests for all new methods and classes that I created.
  • I've written the docstring in Google format for all the methods and classes that I used.

Summary by CodeRabbit

  • New Features

    • Added an option to include tool definitions and tool calls in policy assertions.
  • Chores

    • Bumped project version to 0.17.0.
  • Tests

    • Added tests verifying the new policy-include-tools option defaults and explicit setting.


coderabbitai bot commented Feb 9, 2026

Walkthrough

Version bumped to 0.17.0; mypy config changed to use a string for python_version. Added optional boolean policy_include_tools to evaluate() and EvaluationRequest (default False), with tests validating default and explicit True.

Changes

  • Configuration (pyproject.toml): Project version bumped from 0.16.0 to 0.17.0; [tool.mypy] python_version changed from the numeric 3.8 to the string "3.8".
  • Feature Implementation (qualifire/client.py, qualifire/types.py): Added an optional policy_include_tools: bool = False parameter to evaluate() and a matching policy_include_tools: bool = False field to EvaluationRequest; the value is propagated into the request payload.
  • Tests (tests/test_types.py): Added tests verifying that policy_include_tools defaults to False and can be set to True when constructing EvaluationRequest.
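
The parameter threading described in the summary can be sketched as follows. The names policy_include_tools, evaluate, and EvaluationRequest come from the PR; the dataclass shape and the Client wrapper here are assumptions for illustration, not the library's actual implementation.

```python
from dataclasses import dataclass, asdict


@dataclass
class EvaluationRequest:
    # Simplified stand-in for qualifire.types.EvaluationRequest.
    input: str
    # New optional flag: include tool definitions and tool calls
    # in policy assertions. Defaults to False.
    policy_include_tools: bool = False


class Client:
    # Simplified stand-in for the client; evaluate() threads the
    # flag into the request payload unchanged.
    def evaluate(self, input: str, policy_include_tools: bool = False) -> dict:
        request = EvaluationRequest(
            input=input,
            policy_include_tools=policy_include_tools,
        )
        return asdict(request)


payload = Client().evaluate("hello")
print(payload)  # {'input': 'hello', 'policy_include_tools': False}
```

Because the field carries a default, existing callers that never pass the flag keep their current behavior, which is what makes this a non-breaking change.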

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Feature - user metadata  #467: Follows identical pattern of adding optional field to evaluate method signature and threading it through EvaluationRequest.

Suggested reviewers

  • yuval-qf

Poem

🐰 I nibbled on code with careful paws,
A tiny flag hopped in without a pause,
From client call to request it flew,
Tests gave a nod — true or false, we knew.
Hop, hop — version bumped, a new view! 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): The title 'feature - policy tools' is vague and generic, and doesn't convey meaningful information about the specific changeset. Resolution: revise the title to be more specific and descriptive, such as 'Add policy_include_tools parameter to evaluation request' or 'Support tool definitions in policy assertions'.

✅ Passed checks (1 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


No actionable comments were generated in the recent review. 🎉

Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `pyproject.toml`:
- Line 100: The mypy target is set to python_version = "3.9", which conflicts with the project's declared minimum Python support (requires-python >=3.8,<4 and the py38 targets in the classifiers/black/isort config). Change python_version from "3.9" to "3.8" so mypy's target matches requires-python and the existing black target-version / isort py_version entries, keeping typing and syntax compatible with Python 3.8.
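
A sketch of the corrected section, assuming the surrounding pyproject.toml layout (neighboring keys are not reproduced here):

```toml
[tool.mypy]
# Quote the value: in TOML an unquoted version is a float (a value like
# 3.10 would even collapse to 3.1), while mypy expects a string here.
# "3.8" matches requires-python >=3.8,<4 declared for this project.
python_version = "3.8"
```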

In `tests/test_types.py`:
- Around lines 115-121: Combine the two tests into one parametrized pytest case: create a single test (e.g., test_policy_include_tools) decorated with @pytest.mark.parametrize("value,expected", [(None, False), (True, True)]) or with explicit values [False, True], instantiate EvaluationRequest accordingly (for None, use the default constructor), and assert req.policy_include_tools == expected. Ensure pytest is imported and that the test body references EvaluationRequest, replacing the two existing functions test_policy_include_tools_defaults_false and test_policy_include_tools_can_be_set.

Comment on lines +115 to +121

    def test_policy_include_tools_defaults_false(self):
        req = EvaluationRequest(input="test")
        assert req.policy_include_tools is False

    def test_policy_include_tools_can_be_set(self):
        req = EvaluationRequest(input="test", policy_include_tools=True)
        assert req.policy_include_tools is True

⚠️ Potential issue | 🟡 Minor

Parametrize the new policy_include_tools tests.

The two standalone tests can be combined into a single parametrized case to follow test guidelines and reduce duplication.

♻️ Suggested refactor
-    def test_policy_include_tools_defaults_false(self):
-        req = EvaluationRequest(input="test")
-        assert req.policy_include_tools is False
-
-    def test_policy_include_tools_can_be_set(self):
-        req = EvaluationRequest(input="test", policy_include_tools=True)
-        assert req.policy_include_tools is True
+    @pytest.mark.parametrize(
+        "kwargs,expected",
+        [
+            ({}, False),
+            ({"policy_include_tools": True}, True),
+        ],
+    )
+    def test_policy_include_tools(self, kwargs, expected):
+        req = EvaluationRequest(input="test", **kwargs)
+        assert req.policy_include_tools is expected

As per coding guidelines, "tests/**/*.py: Use parametrized pytest test cases in the tests/ directory for comprehensive test coverage".
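
For reference, the refactor above can be exercised in isolation. The stub below stands in for the real EvaluationRequest from qualifire/types.py (an assumption made for self-containment; the real model is assumed to accept `input` and the new flag as keyword arguments), while the test itself mirrors the suggested diff:

```python
from dataclasses import dataclass

import pytest


# Stub standing in for qualifire.types.EvaluationRequest.
@dataclass
class EvaluationRequest:
    input: str
    policy_include_tools: bool = False


@pytest.mark.parametrize(
    "kwargs,expected",
    [
        ({}, False),
        ({"policy_include_tools": True}, True),
    ],
)
def test_policy_include_tools(kwargs, expected):
    # Each parameter tuple becomes its own collected test case.
    req = EvaluationRequest(input="test", **kwargs)
    assert req.policy_include_tools is expected
```

Run with `pytest -k policy_include_tools`; pytest reports one pass or failure per parameter tuple, so a regression in either the default or the explicit setting is pinpointed directly.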


@yuval-qf yuval-qf left a comment


LGTM
Just take a look at rabbit's comment about mypy version

@drorIvry drorIvry enabled auto-merge (squash) February 10, 2026 10:23
@drorIvry drorIvry merged commit ffcdab3 into qualifire-dev:main Feb 10, 2026
5 of 9 checks passed