
Conversation


@konard konard commented Oct 30, 2025

🎯 Summary

This PR implements comprehensive benchmarks to evaluate the performance impact of different optimization strategies in C# that simulate C++ [[likely]] and [[unlikely]] branch prediction attributes.

📋 Issue Reference

Fixes #96

🔍 Background

Issue #96 requested benchmarks comparing performance of code with and without C++ [[likely]]/[[unlikely]] attributes. Since this is a C# codebase, I've implemented equivalent optimization strategies available in C#/.NET.

💡 Implementation Details

C# Equivalents to [[likely]]/[[unlikely]]

Added seven new factorial implementation alternatives, covered by six new benchmark methods, to test different optimization approaches (see the code sketch after the list):

  1. `AggressiveInlining`: Uses `[MethodImpl(MethodImplOptions.AggressiveInlining)]` to force method inlining for hot paths
     • Simulates C++'s `inline` behavior combined with branch prediction hints
  2. `AggressiveOptimization`: Uses `[MethodImpl(MethodImplOptions.AggressiveOptimization)]` to enable aggressive optimizations
     • Allows the JIT compiler more freedom to optimize the hot path
  3. `BothOptimizations`: Combines both `AggressiveInlining` and `AggressiveOptimization`
     • Maximum optimization hints for the likely path
  4. `DoesNotReturn` + `NoInlining`: Separates exception throwing into a helper method marked with:
     • `[DoesNotReturn]` - helps the optimizer understand that this path never returns
     • `[MethodImpl(MethodImplOptions.NoInlining)]` - keeps the exception path out of the hot path
     • Simulates C++'s `[[unlikely]]` by isolating the cold path
  5. `InlineException`: Exception handling within the same method with optimizations
     • Tests whether separating the throw into a helper method provides benefits
  6. `UnlikelyFirst`: Anti-pattern with the exception check before the happy path
     • Demonstrates the performance impact of poor branch ordering
  7. `GenericOptimized`: Generic version using modern .NET generic math (`IUnsignedNumber<T>`)
     • Tests optimization effectiveness with generic constraints
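
To make approaches 1 and 4 concrete, here is a minimal sketch of a hot-path factorial with the throw isolated in a cold, non-returning helper. The class and method names (`FactorialAlternatives`, `ThrowOverflow`) are illustrative only and are not necessarily the names used in this PR:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;
using System.Runtime.CompilerServices;

public static class FactorialAlternatives
{
    // Hot path first, forced inlining (the closest C# analogue to a [[likely]] hint).
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static ulong FactorialAggressiveInlining(ulong n)
    {
        if (n <= 20) // 20! is the largest factorial that fits in UInt64
        {
            ulong result = 1;
            for (ulong i = 2; i <= n; i++)
                result *= i;
            return result;
        }

        ThrowOverflow(n); // cold path, kept out of the hot body
        return 0;         // unreachable; satisfies definite-return analysis
    }

    // Cold path isolated in a non-inlined, non-returning helper
    // (simulates [[unlikely]] by moving the throw out of the hot method body).
    [DoesNotReturn]
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void ThrowOverflow(ulong n) =>
        throw new ArgumentOutOfRangeException(nameof(n), n, "Factorial would overflow UInt64.");
}
```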

Key Insights

The benchmarks test the Factorial function with different optimization strategies, focusing on:

  • Branch prediction optimization
  • Exception path isolation (cold path)
  • Code organization (hot path first vs exception path first)
  • Inlining decisions
  • Generic vs non-generic performance
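
For the generic angle (scenario 7 above), one possible shape of a generic-math factorial is sketched below. The `IComparisonOperators` constraint is an assumption added here so the loop comparison compiles; the constraints actually used in the PR may differ, and overflow checking is omitted for brevity:

```csharp
using System.Numerics;
using System.Runtime.CompilerServices;

public static class GenericFactorial
{
    // Generic math variant: the JIT specializes the constrained calls
    // for each concrete T (e.g. ulong), so the hot loop can stay tight.
    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static T Factorial<T>(T n)
        where T : IUnsignedNumber<T>, IComparisonOperators<T, T, bool>
    {
        var result = T.One;
        for (var i = T.One + T.One; i <= n; i++)
            result *= i;
        return result; // range/overflow checking omitted in this sketch
    }
}
```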

Test Scenario

All benchmarks use FactorialNumber = 19 (within valid range) to ensure we're measuring the hot path performance rather than exception handling.
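
A hypothetical BenchmarkDotNet layout for this scenario, reusing the sketched methods above; the class name, benchmark names, and the `FactorialNumber` field are illustrative rather than the exact ones in the PR:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class LikelyUnlikelyBenchmarks
{
    // 19! = 121,645,100,408,832,000 still fits in UInt64,
    // so the exception (cold) path is never taken during measurement.
    private readonly ulong FactorialNumber = 19;

    [Benchmark(Baseline = true)]
    public ulong AggressiveInlining()
        => FactorialAlternatives.FactorialAggressiveInlining(FactorialNumber);

    [Benchmark]
    public ulong GenericOptimized()
        => GenericFactorial.Factorial(FactorialNumber);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LikelyUnlikelyBenchmarks>();
}
```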

🧪 Running the Benchmarks

```bash
cd csharp/Platform.Numbers.Benchmarks
dotnet run -c Release
```

📊 Expected Results

The benchmarks will show:

  • Impact of different MethodImpl attributes on hot path performance
  • Benefits of separating exception paths with DoesNotReturn
  • Performance differences between code organization patterns
  • Generic vs non-generic implementation overhead

✅ Changes Made

  • Added 7 new factorial implementation alternatives
  • Added 6 new benchmark methods
  • All implementations handle the same logic with different optimization hints
  • Code compiles successfully and maintains existing behavior

🔗 Related Work


🤖 Generated with Claude Code

Adding CLAUDE.md with task information for AI processing.
This file will be removed when the task is complete.

Issue: undefined
@konard konard self-assigned this Oct 30, 2025
Implements comprehensive benchmarks comparing different optimization strategies
in C# that simulate C++ [[likely]]/[[unlikely]] branch prediction hints:

- AggressiveInlining: Forces method inlining for hot paths
- AggressiveOptimization: Enables aggressive optimizations
- DoesNotReturn attribute: Helps optimizer understand exception paths
- Code organization: Hot path first vs exception path first
- Generic version: Tests modern .NET generic math optimizations

These benchmarks help evaluate performance impact of different approaches
to branch prediction optimization in C#, addressing issue #96.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@konard konard changed the title [WIP] Create benchmark for [[likely]] performance Create benchmark for [[likely]] performance Oct 30, 2025
@konard konard marked this pull request as ready for review October 30, 2025 04:12

konard commented Oct 30, 2025

🤖 Solution Draft Log

This log file contains the complete execution trace of the AI solution draft process.

📎 Log file uploaded as GitHub Gist (205KB)
🔗 View complete solution draft log


The working session has now ended; feel free to review and add any feedback on the solution draft.

