# TensorRT-LLM: Enable Jetson Thor (sm_110) Support & Build Fixes #11357

Draft · cjac wants to merge 2 commits into NVIDIA:main from …
## Overview
This patch set enables building TensorRT-LLM on the NVIDIA Jetson Thor platform (Blackwell architecture, sm_110) using a minimal configuration. It resolves multiple build system failures related to missing architecture definitions, incompatible PyTorch APIs, and broken dependency handling for CUTLASS. It also selectively disables Mixture-of-Experts (MoE) components to bypass compilation errors caused by missing FP4 type support in the available PyTorch 2.4 environment.
## Key Changes
### 1. Enable Blackwell (sm_110) Architecture
* **CMake Configuration:** Updated `cpp/cmake/modules/cuda_configuration.cmake` to explicitly recognize and support `110` as a target CUDA architecture (a sketch follows this list).
* **Kernel Support:** Modified `cpp/tensorrt_llm/kernels/multiHeadAttentionCommon.h` and `cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/fmhaRunner.cpp` to include `kSM_110` in supported architecture checks, allowing FMHA kernels to compile for Thor.
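A rough sketch of the CMake side of this change (`SUPPORTED_CUDA_ARCHITECTURES` is an assumed name for illustration; the actual logic in `cuda_configuration.cmake` differs):

```cmake
# Illustrative only: the variable name is an assumption, not the one
# actually used in cuda_configuration.cmake.
set(SUPPORTED_CUDA_ARCHITECTURES 80 86 89 90 100 120)
list(APPEND SUPPORTED_CUDA_ARCHITECTURES 110) # Jetson Thor (Blackwell, sm_110)
```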
### 2. Fix CUTLASS Dependency Build
* **Issue:** The build failed when attempting to run `setup_library.py` for CUTLASS because the source directory lacked a valid `setup.py`.
* **Fix:** Patched `cpp/tensorrt_llm/kernels/cutlass_kernels/CMakeLists.txt` to skip the failing Python installation step. Instead, the `PYTHONPATH` is explicitly set to the `cutlass-src/python` directory when invoking kernel generation scripts, ensuring they can still import necessary modules.
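In CMake terms, something along these lines (a sketch; `CUTLASS_SOURCE_DIR`, `KERNEL_GEN_SCRIPT`, and `GENERATED_KERNEL_SOURCES` are placeholder names, not those in the actual file):

```cmake
# Skip the broken "pip install" of CUTLASS; instead expose its Python
# package to the kernel generation script via PYTHONPATH.
add_custom_command(
  OUTPUT ${GENERATED_KERNEL_SOURCES}
  COMMAND
    ${CMAKE_COMMAND} -E env
    "PYTHONPATH=${CUTLASS_SOURCE_DIR}/python:$ENV{PYTHONPATH}"
    ${Python3_EXECUTABLE} ${KERNEL_GEN_SCRIPT}
  DEPENDS ${KERNEL_GEN_SCRIPT})
```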
### 3. PyTorch API Compatibility Fixes
* **Issue:** The build failed in `customCasters.h` because `c10::getStringToDtypeMap()` is not available in the installed PyTorch 2.4.1 version.
* **Fix:** Implemented a local, hardcoded fallback map in `cpp/tensorrt_llm/nanobind/common/customCasters.h` to parse standard dtype strings (float, half, int, etc.) directly.
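A minimal sketch of such a fallback (the exact set of names covered by the patch may differ):

```cpp
#include <c10/core/ScalarType.h>
#include <string>
#include <unordered_map>

// Local replacement for c10::getStringToDtypeMap(), which does not exist
// in PyTorch 2.4.1. Throws std::out_of_range for unknown names.
static c10::ScalarType dtypeFromString(std::string const& name)
{
    static std::unordered_map<std::string, c10::ScalarType> const kMap = {
        {"float32", c10::kFloat}, {"float", c10::kFloat},
        {"float16", c10::kHalf},  {"half", c10::kHalf},
        {"bfloat16", c10::kBFloat16},
        {"int32", c10::kInt},     {"int64", c10::kLong},
        {"bool", c10::kBool},
    };
    return kMap.at(name);
}
```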
### 4. Disable MoE & Bleeding-Edge Ops
* **Issue:** The `thop` (Torch Ops) library failed to compile `cuteDslMoeUtilsOp.cpp` and related files due to:
* Missing `torch::kFloat4_e2m1fn_x2` type definition in PyTorch.
* C++ compilation errors involving `std::optional` stream operators in `mxFp4BlockScaleMoe.cpp`.
* **Fix:** Removed MoE-related source files (e.g., `cuteDslMoeUtilsOp.cpp`, `moeOp.cpp`, `mxFp4BlockScaleMoe.cpp`) from `cpp/tensorrt_llm/thop/CMakeLists.txt`. This is a safe reduction as the target model (Gemma 3) is dense and does not require these operators.
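In `CMakeLists.txt` terms the edit looks roughly like this (only the commented-out names come from this PR; the surrounding list entries are stand-ins):

```cmake
set(TH_COMMON_SRCS
    # cuteDslMoeUtilsOp.cpp   # needs torch::kFloat4_e2m1fn_x2, absent in 2.4.1
    # moeOp.cpp               # MoE unused by the dense Gemma 3 target
    # mxFp4BlockScaleMoe.cpp  # std::optional stream-operator build errors
    exampleOtherOp.cpp)       # stand-in for the remaining thop sources
add_library(th_common SHARED ${TH_COMMON_SRCS})
```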
### 5. Build Script Safety
* **Issue:** `scripts/build_wheel.py` attempted to aggressively upgrade/downgrade Python packages via pip, risking environment corruption.
* **Fix:** Patched `scripts/build_wheel.py` to disable automatic `pip install` commands, relying on the user's pre-configured environment.
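A hedged sketch of the guard (names here are invented for illustration; the real script's functions differ):

```python
import shlex
import subprocess
import sys

SKIP_PIP_INSTALL = True  # rely on the user's pre-configured environment

def maybe_pip_install(args: str) -> None:
    """Run `pip install <args>` only when explicitly allowed."""
    if SKIP_PIP_INSTALL:
        print(f"[build_wheel] skipping 'pip install {args}'")
        return
    subprocess.check_call([sys.executable, "-m", "pip", "install", *shlex.split(args)])
```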
## Validation
These changes allow the build process to proceed past previous blockers (CUDA architecture rejection, missing symbols, compilation errors) and are necessary for generating the TensorRT-LLM wheel on Jetson Thor.
**cjac (Author):** @nvliyuan can I bother you for a review or a triage to the correct reviewer, please?
**cjac (Author):**

**Coding Guidelines Compliance Disclosure**

The changes implemented to enable TensorRT-LLM on Jetson Thor (sm_110) have been reviewed against the project's coding guidelines.

**Compliance Summary:** The patches generally adhere to the project's standards, prioritizing consistency with the immediate local context where the codebase itself deviates from the formal guidelines.

**Adherences**

**Disclosed Deviations**

**Conclusion:** The changes are idiomatically consistent with the surrounding source code and represent a "minimal impact" approach to resolving the platform blockers.
…ckwell)
This patch implements a series of critical workarounds and structural changes required to stabilize the TensorRT-LLM environment on the Jetson Thor platform. The changes specifically address incompatibilities between the Blackwell GPU architecture and the project's dependency on PyTorch 2.4.1.
## 1. Environment Lockdown (`requirements.txt`)
- **Changes**: Commented out core heavy dependencies including `tensorrt`, `torch`, `torchvision`, `nvidia-modelopt`, and `nvidia-nccl-cu13`.
- **Rationale**: This prevents `pip` from inadvertently overwriting the custom source-built, CUDA-enabled binaries with generic CPU-only or incompatible wheels from PyPI. It ensures the environment remains pinned to the validated local build artifacts.
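Illustrative excerpt (exact version pins in the repository differ):

```text
# Source-built on-device; do not let pip replace these:
# tensorrt
# torch
# torchvision
# nvidia-modelopt
# nvidia-nccl-cu13
```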
## 2. Custom Op Schema Compatibility (`torch_custom_ops.py`)
- **Changes**: Refactored function signatures for multiple custom operations (e.g., `nvfp4_gemm`, `fp8_batched_gemm_trtllmgen`, `fp8_swap_ab_gemm`). Default values for `dtype` and `allowed_backends` were moved from the function definition into the function body using `None` as a placeholder.
- **Rationale**: PyTorch 2.4.1 (the required baseline) lacks support for non-primitive default values (like `torch.float16` or `torch.bfloat16`) in its C++ schema registration logic. Moving these defaults into the function body satisfies the strict primitive-only requirements of the older dispatcher.
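A minimal sketch of the pattern (the operator name and body are invented for illustration; only the `None`-placeholder technique is taken from the patch):

```python
from typing import Optional

import torch

@torch.library.custom_op("trtllm_example::gemm", mutates_args=())
def example_gemm(a: torch.Tensor, b: torch.Tensor,
                 dtype: Optional[torch.dtype] = None) -> torch.Tensor:
    # PyTorch 2.4.1 rejects `dtype=torch.float16` in the schema, so the
    # default is resolved here in the body instead.
    dtype = torch.float16 if dtype is None else dtype
    return (a @ b).to(dtype)
```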
## 3. Robust TorchVision Masking (Modeling & Input Utilities)
- **Changes**: Wrapped `torchvision` and related transform imports (`Normalize`, `Resize`, `ToTensor`) in broad `try...except` blocks across Mistral, Phi-4MM, and multimodal input handlers.
- **Rationale**: Implements a "Universal Masking" strategy to isolate stable text-inference paths from vision stack failures. Due to ABI mismatches in the `torchvision` C++ backend on AArch64, these imports can trigger `AttributeError` or `RuntimeError`. Masking them ensures the server can still serve text models even if the vision components are partially initialized or broken.
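The masking pattern, sketched (module-level names here are assumptions, not the actual ones in the handlers):

```python
try:
    from torchvision.transforms import Normalize, Resize, ToTensor
    _VISION_OK = True
except (ImportError, AttributeError, RuntimeError) as err:
    # Broken torchvision C++ extensions on AArch64 can fail at import time;
    # record the failure and keep the text path alive.
    Normalize = Resize = ToTensor = None
    _VISION_OK = False
    _VISION_ERR = err

def _require_vision() -> None:
    if not _VISION_OK:
        raise RuntimeError(f"vision stack unavailable: {_VISION_ERR}")
```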
## 4. MoE and NVFP4 "Type Wall" Workarounds (`cuteDslMoeUtilsOp.cpp`)
- **Changes**:
- Surgically disabled NVFP4-specific logic in `moe_permute` and `moe_swiglu_nvfp4_quantize` by replacing type checks with `if (false)`.
- Replaced experimental `kFloat4_e2m1fn_x2` return types with standard `kHalf`.
- Commented out problematic `DISPATCH_MOE_ACTIVATION` calls and the `moe_swiglu_nvfp4_quantize` implementation registry.
- **Rationale**: Addresses a fundamental limitation where the PyTorch 2.4.1 C++ frontend does not recognize Blackwell-specific data types. By gutting the problematic function bodies while retaining the expected symbol signatures, the patch allows the shared library (`libth_common.so`) to link successfully, enabling the runtime to function for dense models while bypassing unsupported Mixture-of-Experts (MoE) kernels.
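A hedged C++ sketch of the pattern (the real functions take more arguments; this body is illustrative):

```cpp
#include <torch/torch.h>

// The signature is kept so libth_common.so still exports the symbol,
// but the NVFP4 branch is unreachable under PyTorch 2.4.1.
torch::Tensor moe_swiglu_nvfp4_quantize(torch::Tensor const& input)
{
    if (false) // was a check against torch::kFloat4_e2m1fn_x2 (absent in 2.4.1)
    {
        // NVFP4 quantization path, disabled behind the "type wall".
    }
    // Return a standard dtype (kHalf) in place of kFloat4_e2m1fn_x2.
    return input.to(torch::kHalf);
}
```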
## 5. Library Linkage Finalization (`CMakeLists.txt`)
- **Changes**: Explicitly added 12 MoE and quantization source files (e.g., `moeCommOp.cpp`, `fp4BlockScaleMoe.cpp`, `mxFp8Quantize.cpp`) to the `th_common` library.
- **Rationale**: Resolves "Missing Symbol" errors observed during runtime initialization. This ensures that all operators expected by the high-level Python bindings are physically present in the compiled shared object.
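Sketched as a CMake fragment (three of the 12 files shown; `th_common` is the target named in the PR):

```cmake
# With the NVFP4 bodies stubbed out above, these sources can rejoin the
# link line so the Python bindings find every expected operator symbol.
target_sources(th_common PRIVATE
    moeCommOp.cpp
    fp4BlockScaleMoe.cpp
    mxFp8Quantize.cpp)
```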
cjac force-pushed the branch from 0ca5c25 to d6fb958.
## Description

## Test Coverage

## PR Checklist

Please review the following before submitting your PR:

- [ ] PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- [ ] PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- [ ] Test cases are provided for new code paths (see test instructions).
- [ ] Any new dependencies have been scanned for license and vulnerabilities.
- [ ] CODEOWNERS updated if ownership changes.
- [ ] Documentation updated as needed.
- [ ] Update tava architecture diagram if there is a significant design change in PR.
- [ ] The reviewers assigned automatically/manually are appropriate for the PR.
- [ ] Please check this after reviewing the above items as appropriate for this PR.
## GitHub Bot Help

`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.

### run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

### kill

`kill`

Kill all running builds associated with pull request.

### skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

### reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.