
# TensorRT-LLM: Enable Jetson Thor (sm_110) Support & Build Fixes #11357

Draft
cjac wants to merge 2 commits into NVIDIA:main from LLC-Technologies-Collier:gemma3-27b-thor-202602

Conversation

cjac commented Feb 6, 2026

## Overview

This patch set enables building TensorRT-LLM on the NVIDIA Jetson Thor platform (Blackwell architecture, sm_110) using a minimal configuration. It resolves multiple build-system failures related to missing architecture definitions, incompatible PyTorch APIs, and broken dependency handling for CUTLASS. It also selectively disables Mixture-of-Experts (MoE) components to bypass compilation errors caused by missing FP4 type support in the available PyTorch 2.4 environment.

## Key Changes

### 1. Enable Blackwell (sm_110) Architecture
*   **CMake Configuration:** Updated `cpp/cmake/modules/cuda_configuration.cmake` to explicitly recognize and support `110` as a target CUDA architecture.
*   **Kernel Support:** Modified `cpp/tensorrt_llm/kernels/multiHeadAttentionCommon.h` and `cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/fmhaRunner.cpp` to include `kSM_110` in the supported-architecture checks, allowing the FMHA kernels to compile for Thor; a quick runtime check follows below.
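
A quick way to confirm the device actually reports sm_110 — a minimal sketch, not part of the patch, assuming a CUDA-enabled PyTorch build is already installed on the device:

```python
import torch

# Jetson Thor should report compute capability (11, 0), i.e. sm_110.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Detected sm_{major}{minor} on {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device visible to PyTorch")
```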

### 2. Fix CUTLASS Dependency Build
*   **Issue:** The build failed when attempting to run `setup_library.py` for CUTLASS because the source directory lacked a valid `setup.py`.
*   **Fix:** Patched `cpp/tensorrt_llm/kernels/cutlass_kernels/CMakeLists.txt` to skip the failing Python installation step. Instead, `PYTHONPATH` is explicitly set to the `cutlass-src/python` directory when invoking the kernel-generation scripts, ensuring they can still import the necessary modules (see the sketch below).
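
The same idea, sketched in Python rather than CMake for brevity; the path and the script name are hypothetical placeholders, not the actual files the patch touches:

```python
import os
import subprocess
import sys

# Skip the pip-install step entirely; instead, prepend the CUTLASS
# checkout's python/ directory to PYTHONPATH so the kernel-generation
# scripts can import the cutlass modules directly.
cutlass_python = "/path/to/cutlass-src/python"  # hypothetical path
env = dict(os.environ)
env["PYTHONPATH"] = cutlass_python + os.pathsep + env.get("PYTHONPATH", "")

subprocess.run([sys.executable, "generate_kernels.py"], env=env, check=True)
```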

### 3. PyTorch API Compatibility Fixes
*   **Issue:** The build failed in `customCasters.h` because `c10::getStringToDtypeMap()` is not available in the installed PyTorch 2.4.1.
*   **Fix:** Implemented a local, hardcoded fallback map in `cpp/tensorrt_llm/nanobind/common/customCasters.h` that parses the standard dtype strings (`float`, `half`, `int`, etc.) directly, as sketched below.
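
The fallback itself lives in C++ inside `customCasters.h`; a Python rendering of the kind of table it hardcodes (entries beyond `float`, `half`, and `int` are assumptions):

```python
import torch

# String -> dtype fallback table, standing in for c10::getStringToDtypeMap().
DTYPE_FALLBACK = {
    "float": torch.float32,
    "half": torch.float16,
    "int": torch.int32,
    "bool": torch.bool,          # assumed entry
    "bfloat16": torch.bfloat16,  # assumed entry
}

def parse_dtype(name: str) -> torch.dtype:
    try:
        return DTYPE_FALLBACK[name]
    except KeyError:
        raise ValueError(f"unsupported dtype string: {name!r}") from None
```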

### 4. Disable MoE & Bleeding-Edge Ops
*   **Issue:** The `thop` (Torch Ops) library failed to compile `cuteDslMoeUtilsOp.cpp` and related files due to:
    *   the missing `torch::kFloat4_e2m1fn_x2` type definition in PyTorch, and
    *   C++ compilation errors involving `std::optional` stream operators in `mxFp4BlockScaleMoe.cpp`.
*   **Fix:** Removed the MoE-related source files (e.g., `cuteDslMoeUtilsOp.cpp`, `moeOp.cpp`, `mxFp4BlockScaleMoe.cpp`) from `cpp/tensorrt_llm/thop/CMakeLists.txt`. This is a safe reduction because the target model (Gemma 3) is dense and does not require these operators.

### 5. Build Script Safety
*   **Issue:** `scripts/build_wheel.py` attempted to aggressively upgrade/downgrade Python packages via pip, risking environment corruption.
*   **Fix:** Patched `scripts/build_wheel.py` to disable the automatic `pip install` commands, relying instead on the user's pre-configured environment; the gating pattern is sketched below.
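
A minimal sketch of that gating pattern, not the literal patch; the `TRTLLM_SKIP_PIP` variable and the helper name are hypothetical:

```python
import os
import subprocess
import sys

# Hypothetical opt-out switch: when set (the default here), the build
# script leaves the Python environment untouched instead of running pip.
SKIP_PIP = os.environ.get("TRTLLM_SKIP_PIP", "1") == "1"

def maybe_pip_install(requirements: str) -> None:
    """Install requirements only if the user has not opted out."""
    if SKIP_PIP:
        print(f"Skipping 'pip install -r {requirements}'; "
              "using the pre-configured environment.")
        return
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", requirements],
        check=True,
    )
```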

## Validation

These changes allow the build to proceed past the previous blockers (CUDA architecture rejection, missing symbols, compilation errors) and are necessary for generating the TensorRT-LLM wheel on Jetson Thor.


## PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

## GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

cjac (Author) commented Feb 6, 2026

@nvliyuan, can I bother you for a review or a triage to the correct reviewer, please?

cjac (Author) commented Feb 6, 2026

Coding Guidelines Compliance Disclosure

The changes implemented to enable TensorRT-LLM on Jetson Thor (sm_110) have been reviewed against the project's CODING_GUIDELINES.md.

Compliance Summary

The patches generally adhere to the project's standards, prioritizing consistency with the immediate local context where the codebase itself deviates from the formal guidelines.

Adherences

  • Constants: Used constexpr int32_t kSM_110 instead of macros, following the kUppercaseSnakeCase convention.
  • Indentation: Strictly applied 4-space indentation across all C++, Python, and CMake modifications.
  • Language Standards: Maintained C++17 compatibility, except where specific kernel requirements override it to C++20.
  • Casts: Avoided forceful C-style casts in favor of existing API patterns.

Disclosed Deviations

  • Naming Conventions: In customCasters.h, the variable dtype_map uses snake_case instead of the mandated camelCase. This was done intentionally to match the style of the existing code in that specific file.
  • Brace Style: The lambda initialization in customCasters.h uses a compact brace style (= []() { ... };) rather than strict Allman notation to maintain readability for static initialization.
  • Copyright Headers: The copyright years in modified files were not updated to the current year (2026). This remains an administrative task for the final PR submission.
  • Namespace Comments: Closure of namespaces in new code blocks (e.g., customCasters.h) followed existing local patterns rather than adding new explicit closing comments where they weren't already present.

Conclusion

The changes are idiomatically consistent with the surrounding source code and represent a "minimal impact" approach to resolving the platform blockers.

svc-trtllm-gh-bot added the Community want to contribute label (PRs initiated from Community) on Feb 6, 2026
…ckwell)

This patch implements a series of critical workarounds and structural changes required to stabilize the TensorRT-LLM environment on the Jetson Thor platform. The changes specifically address incompatibilities between the Blackwell GPU architecture and the project's dependency on PyTorch 2.4.1.

## 1. Environment Lockdown (`requirements.txt`)
- **Changes**: Commented out core heavy dependencies including `tensorrt`, `torch`, `torchvision`, `nvidia-modelopt`, and `nvidia-nccl-cu13`.
- **Rationale**: This prevents `pip` from inadvertently overwriting the custom source-built, CUDA-enabled binaries with generic CPU-only or incompatible wheels from PyPI. It ensures the environment remains pinned to the validated local build artifacts.

## 2. Custom Op Schema Compatibility (`torch_custom_ops.py`)
- **Changes**: Refactored function signatures for multiple custom operations (e.g., `nvfp4_gemm`, `fp8_batched_gemm_trtllmgen`, `fp8_swap_ab_gemm`). Default values for `dtype` and `allowed_backends` were moved from the function definition into the function body using `None` as a placeholder.
- **Rationale**: PyTorch 2.4.1 (the required baseline) lacks support for non-primitive default values (like `torch.float16` or `torch.bfloat16`) in its C++ schema registration logic. Moving these defaults into the function body satisfies the strict primitive-only requirements of the older dispatcher, as sketched below.
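
A minimal sketch of that pattern; the op name `trtllm::example_gemm` and the function body are illustrative, not one of the actual ops touched:

```python
from typing import Optional

import torch

# PyTorch 2.4.1's schema parser accepts only primitive default values, so
# the dtype default moves out of the signature behind a None placeholder.
@torch.library.custom_op("trtllm::example_gemm", mutates_args=())
def example_gemm(
    a: torch.Tensor,
    b: torch.Tensor,
    dtype: Optional[torch.dtype] = None,  # was: dtype=torch.bfloat16
) -> torch.Tensor:
    if dtype is None:
        dtype = torch.bfloat16  # default applied inside the body instead
    return (a @ b).to(dtype)
```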

## 3. Robust TorchVision Masking (Modeling & Input Utilities)
- **Changes**: Wrapped `torchvision` and related transform imports (`Normalize`, `Resize`, `ToTensor`) in broad `try...except` blocks across Mistral, Phi-4MM, and multimodal input handlers.
- **Rationale**: Implements a "Universal Masking" strategy to isolate stable text-inference paths from vision-stack failures. Due to ABI mismatches in the `torchvision` C++ backend on AArch64, these imports can trigger `AttributeError` or `RuntimeError`. Masking them (see the sketch below) ensures the server can still serve text models even if the vision components are partially initialized or broken.
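
A minimal sketch of the masking pattern; `VISION_AVAILABLE` and `require_vision` are hypothetical names standing in for whatever the handlers actually use:

```python
# Broad guard: on AArch64 an ABI mismatch in torchvision's C++ backend can
# surface as AttributeError or RuntimeError at import time, not just
# ImportError, so all three are caught.
try:
    from torchvision.transforms import Normalize, Resize, ToTensor
    VISION_AVAILABLE = True
except (ImportError, AttributeError, RuntimeError):
    Normalize = Resize = ToTensor = None  # vision path disabled
    VISION_AVAILABLE = False

def require_vision() -> None:
    """Fail loudly on vision code paths while text-only serving keeps working."""
    if not VISION_AVAILABLE:
        raise RuntimeError("torchvision unavailable; vision inputs are disabled")
```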

## 4. MoE and NVFP4 "Type Wall" Workarounds (`cuteDslMoeUtilsOp.cpp`)
- **Changes**:
    - Surgically disabled NVFP4-specific logic in `moe_permute` and `moe_swiglu_nvfp4_quantize` by replacing type checks with `if (false)`.
    - Replaced experimental `kFloat4_e2m1fn_x2` return types with standard `kHalf`.
    - Commented out problematic `DISPATCH_MOE_ACTIVATION` calls and the `moe_swiglu_nvfp4_quantize` implementation registry.
- **Rationale**: Addresses a fundamental limitation where the PyTorch 2.4.1 C++ frontend does not recognize Blackwell-specific data types. By gutting the problematic function bodies while retaining the expected symbol signatures, the patch allows the shared library (`libth_common.so`) to link successfully, enabling the runtime to function for dense models while bypassing unsupported Mixture-of-Experts (MoE) kernels.

## 5. Library Linkage Finalization (`CMakeLists.txt`)
- **Changes**: Explicitly added 12 MoE and quantization source files (e.g., `moeCommOp.cpp`, `fp4BlockScaleMoe.cpp`, `mxFp8Quantize.cpp`) to the `th_common` library.
- **Rationale**: Resolves "Missing Symbol" errors observed during runtime initialization. This ensures that all operators expected by the high-level Python bindings are physically present in the compiled shared object.
cjac force-pushed the gemma3-27b-thor-202602 branch from 0ca5c25 to d6fb958 on February 11, 2026 at 01:15