[JAX] Integrate BF16 Grouped GEMM with on-device group sizes #2680
Status: Open
jberchtold-nvidia wants to merge 16 commits into NVIDIA:main from jberchtold-nvidia:gmm
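For context, here is a minimal reference sketch of what a grouped GEMM with on-device group sizes computes: the rows of A are partitioned into G contiguous groups, and each group of rows is multiplied by its own weight matrix B[g]. This is an illustrative kernel only, not the PR's cuBLAS-backed implementation; it uses float instead of BF16 for brevity, and every name in it is hypothetical.

```cuda
#include <cstdint>

// Hypothetical reference kernel: each output row belongs to some group g,
// and C[row] = A[row] @ B[g]. group_sizes lives on the device, so group
// membership is resolved inside the kernel rather than on the host.
__global__ void grouped_gemm_ref(const float* A,             // (M_total, K), row-major
                                 const float* B,             // (G, K, N), row-major
                                 float* C,                   // (M_total, N), row-major
                                 const int32_t* group_sizes, // (G,), on device
                                 int M_total, int K, int N, int G) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= M_total || col >= N) return;

    // Walk the on-device group sizes to find which group this row falls in.
    int g = 0, start = 0;
    while (g < G && row >= start + group_sizes[g]) start += group_sizes[g++];

    const float* Bg = B + (size_t)g * K * N; // weight matrix for this row's group
    float acc = 0.0f;
    for (int k = 0; k < K; ++k)
        acc += A[(size_t)row * K + k] * Bg[(size_t)k * N + col];
    C[(size_t)row * N + col] = acc;
}
```

Keeping group_sizes on the device avoids a host-side read of the sizes before launch, which, judging from the commit messages below, is what separates the new path from the previous non-cuda-graphable grouped GEMM FFI.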
Commits (16):
- 7c46453 Grouped GEMM (jberchtold-nvidia)
- 5a96845 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 49b45fa disable cuda-graph for GMM (jberchtold-nvidia)
- 593a790 proper workspace size (jberchtold-nvidia)
- ae34461 remove duplicate workspace size logic in Python gemm.py (jberchtold-nvidia)
- 7e99c64 use group_sizes as int32 and handle int64 and offsets inside FFI to a… (jberchtold-nvidia)
- a661e9e restore previous non-cuda-graphable grouped GEMM FFI and move new ver… (jberchtold-nvidia)
- 6fd7f16 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 0d5837d cleanup and lint fixes (jberchtold-nvidia)
- d3ee0fc re-add cublas alignment checks (jberchtold-nvidia)
- 6440648 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 661a829 fix symbol export when building with older cublas (jberchtold-nvidia)
- 7d15c4c Merge branch 'main' into gmm (jberchtold-nvidia)
- bd5e6fb Fix backend selection depending on whether TE was compiled with the (jberchtold-nvidia)
- 60d5c42 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 70d8f78 Merge branch 'main' into gmm (jberchtold-nvidia)
Review comment:
O(n²) complexity in parallel kernel. Each of the n threads calls this function with a different idx, and for case 2 (per-tensor dims without explicit offsets), thread idx performs a sequential loop from 0 to idx-1. This creates O(1 + 2 + ... + n) = O(n²) total work across all threads. For large numbers of groups, consider either precomputing the offsets once before they are consumed or replacing the per-thread loops with a parallel prefix sum (scan).
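A minimal sketch of the scan option, assuming int32 group sizes and a group count that fits in a single thread block; a production version would more likely call cub::DeviceScan::ExclusiveSum. The naive kernel reproduces the quadratic pattern described above for comparison, and all names here are illustrative, not the PR's actual code.

```cuda
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Quadratic pattern: thread idx re-sums group_sizes[0..idx-1] on its own,
// so total work across n threads is O(1 + 2 + ... + n) = O(n^2).
__global__ void offsets_naive(const int32_t* group_sizes, int32_t* offsets, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    int32_t off = 0;
    for (int i = 0; i < idx; ++i) off += group_sizes[i];
    offsets[idx] = off;
}

// Alternative: compute all offsets once with a block-level exclusive scan
// (Hillis-Steele), O(n log n) total work. Assumes n <= blockDim.x.
__global__ void offsets_scan(const int32_t* group_sizes, int32_t* offsets, int n) {
    extern __shared__ int32_t buf[];
    int idx = threadIdx.x;
    // Shift inputs right by one so the scan is exclusive: offsets[0] == 0.
    buf[idx] = (idx > 0 && idx < n) ? group_sizes[idx - 1] : 0;
    __syncthreads();
    for (int stride = 1; stride < n; stride *= 2) {
        int32_t v = (idx >= stride) ? buf[idx - stride] : 0;
        __syncthreads();
        buf[idx] += v;
        __syncthreads();
    }
    if (idx < n) offsets[idx] = buf[idx];
}

int main() {
    const int n = 8;
    const int32_t h_sizes[n] = {3, 1, 4, 1, 5, 9, 2, 6};
    int32_t *d_sizes, *d_offsets, h_offsets[n];
    cudaMalloc(&d_sizes, n * sizeof(int32_t));
    cudaMalloc(&d_offsets, n * sizeof(int32_t));
    cudaMemcpy(d_sizes, h_sizes, n * sizeof(int32_t), cudaMemcpyHostToDevice);
    offsets_scan<<<1, n, n * sizeof(int32_t)>>>(d_sizes, d_offsets, n);
    cudaMemcpy(h_offsets, d_offsets, n * sizeof(int32_t), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("group %d starts at row %d\n", i, h_offsets[i]); // 0 3 4 8 9 14 23 25
    cudaFree(d_sizes);
    cudaFree(d_offsets);
    return 0;
}
```

The scan keeps the offsets computation on device (so it stays CUDA-graph-friendly) while reducing the per-thread work from O(n) to O(log n).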