Fix test_transformers_tp for torch 2.10 env #915
Conversation
Signed-off-by: Keval Morabia <28916987+kevalmorabia97@users.noreply.github.com>
CodeRabbit Walkthrough: In the awq_lite calibration path, outputs are converted to local tensors when available before computing the MSE loss. This adds a normalization step without altering control flow or overall calibration behavior.
Estimated code review effort: 1 (Trivial), ~2 minutes.
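The walkthrough above describes converting distributed outputs to local tensors before the MSE loss is computed. A minimal sketch of that pattern is below; it is illustrative only, not the actual Model-Optimizer code, and uses a tiny stand-in class in place of a real `torch.distributed` DTensor so the example is self-contained. Real DTensors expose a `to_local()` method that returns the shard held by the current rank.

```python
class FakeDTensor:
    """Stand-in for a distributed tensor that wraps a local shard."""

    def __init__(self, local_values):
        self._local_values = local_values

    def to_local(self):
        # Real DTensor.to_local() returns the rank-local torch.Tensor.
        return self._local_values


def to_local_if_needed(t):
    # Normalize: convert to a local tensor when the conversion is available,
    # otherwise pass plain tensors through unchanged.
    return t.to_local() if hasattr(t, "to_local") else t


def mse(a, b):
    # Compute MSE on the normalized (local) values.
    a, b = to_local_if_needed(a), to_local_if_needed(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

The key point of the fix is that the normalization happens once, just before the loss computation, so the rest of the calibration path is untouched.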
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main     #915   +/-  ##
======================================
  Coverage   73.10%   73.10%
======================================
  Files         205      205
  Lines       22281    22283      +2
======================================
+ Hits        16288    16290      +2
  Misses       5993     5993
```

☔ View full report in Codecov by Sentry.
After bumping the CICD dev containers to latest (with torch 2.10), test_transformers_tp.py is failing (it was skipped in PR-merge CICD because it requires 2 GPUs).
Failing test: https://github.com/NVIDIA/Model-Optimizer/actions/runs/22258743173/job/64393623736#step:7:617
Passing test after this fix: https://github.com/NVIDIA/Model-Optimizer/actions/runs/22259791793/job/64396179609