Conversation
Signed-off-by: mchochowski <mchochowski@nvidia.com>
modelopt/torch/puzzletron/anymodel/models/gpt_oss_20b/gpt_oss_pruned_to_mxfp4.py
```yaml
- _self_

puzzle_dir: ???
descriptor: llama

# FFN intermediate sizes to search over (heterogeneous architecture)
# teacher_intermediate_size is 8192, so we use proportionally smaller values
pruning:
  intermediate_size_list: [2048, 4096, 6144]
```
Is it needed if we prune for num_of_experts?
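As context for the question about pruning the number of experts: a sketch of how such a search space might be declared in the same YAML style as the `intermediate_size_list` above. The `num_experts_list` key is hypothetical, not the actual Puzzletron schema.

```yaml
# Hypothetical sketch -- key name is an assumption, not the real config schema.
pruning:
  # Search over expert counts instead of (or in addition to) FFN widths.
  num_experts_list: [8, 16, 24]
```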
examples/puzzletron/README.md
```markdown
Modify `llama-3_1-8B_pruneffn_memory.yaml` file for advanced compression scenarios.

## GptOss - 20b
```
Let's not put it at the same level as ## Advanced Usage; I would put it into a separate MD file (in the model descriptor dir) and link to it nicely from the main tutorial. Consult also with @LianaMikael on how best to do it. We want the tutorial to have a great user experience.
Let's also check English style/grammar: e.g. there should be no ',' after 'that', and likely a comma after 'n the prunning steps '.
modelopt/torch/puzzletron/anymodel/models/gpt_oss_20b/gpt_oss_pruned_to_mxfp4.py
Signed-off-by: mchochowski <mchochowski@nvidia.com>
…uator (NVIDIA#894)

This PR adds Nemo Evaluator support to the AnyModel branch. It includes documentation and a deployment script that allow for evaluation of AnyModel Puzzletron checkpoints with Nemo Evaluator. We assume development on a GPU node, following the current tutorial style, so we don't rely on Slurm-based deployment/evaluation, but instead use direct evaluation via `eval-factory run_eval`.

Signed-off-by: jrausch <jrausch@nvidia.com>
## What does this PR do?

**Overview:**
- Update the AnyModel Puzzletron tutorial to use lm-eval. We add a script that monkey-patches lm-eval to use the patched AnyModel model loading.
- No need for running Ray deployments or replacing the NeMo Export-Deploy deployment script with a patched version.
- Moved instructions for using NeMo Evaluator to an alternative README file.

Signed-off-by: jrausch <jrausch@nvidia.com>
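The monkey-patching approach mentioned above can be sketched generically: rebind a library's loader attribute to a wrapper function before any caller resolves it. The module and function names below are stand-ins for illustration, not the actual lm-eval or AnyModel API.

```python
import types

# Stand-in for a third-party module whose model loader we want to override.
# (In the real script this would be an imported lm-eval module; names here are made up.)
fake_lib = types.ModuleType("fake_lib")

def original_load(name):
    """The library's stock loader (stand-in)."""
    return f"vanilla:{name}"

fake_lib.load_model = original_load

def patched_load(name):
    """Wrapper with custom loading logic (e.g. special checkpoint handling)."""
    return f"patched:{name}"

# The monkey patch: rebind the attribute before any caller grabs a reference to it.
fake_lib.load_model = patched_load

print(fake_lib.load_model("gpt-oss-20b"))  # patched:gpt-oss-20b
```

The key design point is ordering: the patch must be applied before the library caches or calls the original attribute, which is why such scripts patch at import time.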
## What does this PR do?

**Overview:** Updated the license of `examples/puzzletron/evaluation/lm_eval_anymodel.py` to match that of the reference `examples/llm_eval/lm_eval_hf.py`.

Signed-off-by: jrausch <jrausch@nvidia.com>
Signed-off-by: mchochowski <mchochowski@nvidia.com>
…ml config Signed-off-by: mchochowski <mchochowski@nvidia.com>
```python
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
This goes after the EleutherAI license
## What does this PR do?

Adds gpt-oss-20b support for puzzle any-model pruning.
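To illustrate what expert removal does to a checkpoint, here is a minimal sketch in plain Python: keep only the selected experts and reindex their parameter keys contiguously. The `experts.<i>.<param>` key layout and the `prune_experts` helper are assumptions for illustration, not the actual converter API.

```python
def prune_experts(state, keep):
    """Drop experts not listed in `keep` and reindex the survivors contiguously.

    `state` maps parameter names like 'experts.<i>.<param>' to tensor-like values.
    """
    new_index = {old: new for new, old in enumerate(sorted(keep))}
    pruned = {}
    for key, value in state.items():
        parts = key.split(".")
        if parts[0] == "experts":
            old = int(parts[1])
            if old not in new_index:
                continue  # removed expert: skip its weights entirely
            parts[1] = str(new_index[old])  # reindex surviving expert
            key = ".".join(parts)
        pruned[key] = value  # non-expert weights pass through unchanged
    return pruned

ckpt = {"experts.0.w": "a", "experts.1.w": "b", "experts.2.w": "c", "norm.w": "n"}
print(prune_experts(ckpt, keep=[0, 2]))
# {'experts.0.w': 'a', 'experts.1.w': 'c', 'norm.w': 'n'}
```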
**Type of change:** new feature

**Overview:** Adds descriptor, converter, and YAML configuration files for expert removal. Introduces slight changes to conversion to account for the MXFP4-quantized checkpoint of gpt-oss.
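Since the conversion changes target an MXFP4 checkpoint, an illustrative sketch of the microscaling-FP4 idea may help orient readers: small blocks of FP4 (E2M1) values share one power-of-two (E8M0) scale. This is a generic sketch of the MX format concept, not ModelOpt's actual conversion code; the function names are made up.

```python
# Positive magnitudes representable by a 4-bit E2M1 code (sign is the high bit).
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def decode_fp4(nibble: int) -> float:
    """Decode one 4-bit E2M1 code: bit 3 is the sign, bits 0-2 index the magnitude."""
    sign = -1.0 if nibble & 0b1000 else 1.0
    return sign * E2M1_VALUES[nibble & 0b0111]

def dequantize_block(nibbles: list[int], scale_exp: int) -> list[float]:
    """Dequantize one block: each element times the shared scale 2**(scale_exp - 127)."""
    scale = 2.0 ** (scale_exp - 127)
    return [decode_fp4(n) * scale for n in nibbles]

# With scale_exp=127 the shared scale is 1.0, so codes map to their raw FP4 values.
print(dequantize_block([0b0001, 0b1001, 0b0111], 127))  # [0.5, -0.5, 6.0]
```

A pruning converter working on such a checkpoint has to slice weights in their packed form (nibbles plus per-block scales) or dequantize, prune, and requantize, which is why the MXFP4 layout needs special handling.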
## Usage

# Add a code snippet demonstrating how to use this

## Testing
## Before your PR is "Ready for review"

## Additional Information