
Conversation

@giles17 (Contributor) commented Jan 16, 2026

Motivation and Context

This PR adds comprehensive unit tests for OpenAI content types across three test files, improving test coverage for various content handling scenarios in the agent framework's OpenAI clients.

Description

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • All unit tests pass, and I have added new tests where possible
  • Is this a breaking change? If yes, add "[BREAKING]" prefix to the title of the PR.

Copilot AI review requested due to automatic review settings January 16, 2026 23:08
@github-actions github-actions bot changed the title Added tests for OpenAI content types + Unit test improvement Python: Added tests for OpenAI content types + Unit test improvement Jan 16, 2026
@markwallace-microsoft (Member) commented Jan 16, 2026

Python Test Coverage

Python Test Coverage Report

| File  | Stmts | Miss | Cover | Missing |
|-------|-------|------|-------|---------|
| TOTAL | 17429 | 2587 | 85%   |         |
report-only-changed-files is enabled. No files were changed during this commit :)

Python Unit Test Overview

| Tests | Skipped | Failures | Errors | Time     |
|-------|---------|----------|--------|----------|
| 3191  | 213 💤  | 0 ❌     | 0 🔥   | 1m 5s ⏱️ |

Copilot AI (Contributor) left a comment


Pull request overview

This PR adds comprehensive unit tests for OpenAI content types across three test files, improving test coverage for various content handling scenarios in the agent framework's OpenAI clients.

Changes (a representative test sketch follows this list):

  • Added 15+ new test cases for OpenAI Responses client covering content type preparation, parsing, response format handling, and edge cases
  • Added 14+ new test cases for OpenAI Chat client covering reasoning content, approval content, usage content, refusal handling, and various edge cases
  • Added 2 new test cases for OpenAI Assistants client covering code interpreter and MCP server tool call parsing
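
The new cases broadly follow a construct-content, prepare, assert-wire-format pattern. As a minimal sketch of that pattern, here is a hypothetical rejected-approval counterpart to the approved-case test quoted later in this review. The import paths are assumed from the agent_framework package layout, and the expectation that a rejected approval maps to `approve == False` is an assumption, not confirmed by this PR:

```python
# Sketch only: mirrors the pattern of the new tests. Import paths and the
# rejected-approval expectation (approve == False) are assumptions.
from agent_framework import FunctionApprovalResponseContent, FunctionCallContent, Role
from agent_framework.openai import OpenAIResponsesClient


def test_prepare_content_for_openai_rejected_approval() -> None:
    """Sketch: a rejected approval should map to approve == False (assumed)."""
    client = OpenAIResponsesClient(model_id="test-model", api_key="test-key")
    function_call = FunctionCallContent(call_id="call_456", name="delete_file", arguments="{}")
    rejection = FunctionApprovalResponseContent(
        approved=False,
        id="approval_002",
        function_call=function_call,
    )

    result = client._prepare_content_for_openai(Role.ASSISTANT, rejection, {})

    assert result["type"] == "mcp_approval_response"
    assert result["approval_request_id"] == "approval_002"
    assert result["approve"] is False
```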

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

| File | Description |
|------|-------------|
| python/packages/core/tests/openai/test_openai_responses_client.py | Adds tests for FunctionApprovalResponseContent, ErrorContent, UsageContent, HostedVectorStoreContent, HostedFileContent, MCP server tool handling, response format validation, and conversation ID handling |
| python/packages/core/tests/openai/test_openai_chat_client.py | Adds tests for TextReasoningContent parsing/preparation, FunctionApprovalContent skipping, UsageContent in streaming, refusal handling, and various edge cases for options preparation |
| python/packages/core/tests/openai/test_openai_assistants_client.py | Adds tests for parsing code interpreter and MCP server tool calls from assistant run steps |
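
To exercise just the three touched modules locally, one minimal option is pytest's Python API (the file paths are taken from the table above; running from the repository root is an assumption):

```python
# Run only the three test modules this PR touches, from the repo root.
import sys

import pytest

sys.exit(pytest.main([
    "python/packages/core/tests/openai/test_openai_responses_client.py",
    "python/packages/core/tests/openai/test_openai_chat_client.py",
    "python/packages/core/tests/openai/test_openai_assistants_client.py",
    "-v",
]))
```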
Comments suppressed due to low confidence (1)

python/packages/core/tests/openai/test_openai_responses_client.py:683

  • Python tests do not follow the C# test guideline of Arrange, Act, Assert comments. Since the custom coding guidelines apply specifically to C# unit tests and this is Python code, this is a minor observation rather than a requirement, but consider adding the comments for consistency and readability in complex tests.
```python
# Imports added so the excerpt is self-contained; the module paths are
# assumed from the agent_framework package layout.
from agent_framework import FunctionApprovalResponseContent, FunctionCallContent, Role
from agent_framework.openai import OpenAIResponsesClient


def test_prepare_content_for_openai_function_approval_response() -> None:
    """Test _prepare_content_for_openai with FunctionApprovalResponseContent."""
    client = OpenAIResponsesClient(model_id="test-model", api_key="test-key")

    # Test approved response
    function_call = FunctionCallContent(
        call_id="call_123",
        name="send_email",
        arguments='{"to": "user@example.com"}',
    )
    approval_response = FunctionApprovalResponseContent(
        approved=True,
        id="approval_001",
        function_call=function_call,
    )

    result = client._prepare_content_for_openai(Role.ASSISTANT, approval_response, {})

    assert result["type"] == "mcp_approval_response"
    assert result["approval_request_id"] == "approval_001"
    assert result["approve"] is True
```

