Simple and efficient training framework for long-context models
Updated Jan 12, 2026 - Python
A PyTorch framework for object detection covering six architectures, including RetinaNet, Faster R-CNN, SSD, and FCOS. Takes care of the optimization setup and training quirks of each model.
A comprehensive framework for fine-tuning OpenAI models with streamlined data preparation, training, and evaluation workflows
A comprehensive framework for developing YOLO-family models: streamlined training, validation, testing, and deployment workflows driven by easy-to-use config files, with flexible customization for a range of object detection tasks.
ML training orchestration for the Crucible ecosystem. Distributed training, hyperparameter optimization, checkpointing, model versioning, metrics collection, early stopping, LR scheduling, gradient accumulation, and mixed precision training with Nx/Scholar integration.
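Gradient accumulation, one of the techniques listed above, is language-agnostic: sum the gradients of several micro-batches, then take one optimizer step, simulating a large batch within a small memory budget. A minimal Python sketch of the general idea (not the Crucible/Nx API):

```python
# Gradient accumulation: sum gradients over micro-batches, then take a
# single optimizer step -- equivalent to one step on the full batch.
# Pure-Python sketch of the general technique, not this repo's API.

def grad(w: float, batch: list[tuple[float, float]]) -> float:
    """d/dw of sum((w*x - y)^2) over the batch."""
    return sum(2 * (w * x - y) * x for x, y in batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
micro_batches = [data[:2], data[2:]]  # split the big batch into two slices
lr = 0.01

# One step with accumulation over micro-batches...
w_accum, acc = 1.0, 0.0
for mb in micro_batches:
    acc += grad(w_accum, mb)   # accumulate instead of stepping
w_accum -= lr * acc            # single step with the summed gradient

# ...matches one step on the full batch at once.
w_full = 1.0
w_full -= lr * grad(w_full, data)

print(w_accum == w_full)  # same update, smaller peak memory per slice
```

Because the loss gradient is a sum over examples, accumulating per-slice sums before stepping reproduces the full-batch update exactly.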
Fault-tolerant distributed training framework with async checkpointing for LLMs
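The core idea behind async checkpointing can be sketched in plain Python: snapshot the training state synchronously, then hand serialization and disk I/O to a background thread so the training loop is not blocked. The `save_checkpoint_async` helper and file layout below are illustrative assumptions, not this repository's API:

```python
import copy
import pickle
import tempfile
import threading
from pathlib import Path

def save_checkpoint_async(state: dict, path: Path) -> threading.Thread:
    """Snapshot `state` now, then write it to disk on a background thread.

    The deep copy happens on the caller's thread, so training may keep
    mutating the live state while the copy is serialized.
    (Hypothetical helper for illustration only.)
    """
    snapshot = copy.deepcopy(state)  # freeze state before training moves on

    def _write():
        # Write to a temp file, then rename: the rename is atomic, so a
        # crash mid-write never leaves a corrupt "latest" checkpoint.
        tmp = path.with_suffix(".tmp")
        with open(tmp, "wb") as f:
            pickle.dump(snapshot, f)
        tmp.replace(path)

    t = threading.Thread(target=_write, daemon=True)
    t.start()
    return t  # caller can join() before the next checkpoint or at exit

# Usage: checkpoint without stalling the (simulated) training loop.
state = {"step": 0, "weights": [0.0, 0.0]}
ckpt = Path(tempfile.mkdtemp()) / "ckpt.pkl"
for _ in range(3):
    state["step"] += 1
    state["weights"] = [w + 0.1 for w in state["weights"]]
writer = save_checkpoint_async(state, ckpt)
state["step"] += 1  # training continues while the writer flushes to disk
writer.join()
with open(ckpt, "rb") as f:
    restored = pickle.load(f)
print(restored["step"])  # the snapshot taken at handoff, not the live state
```

The snapshot-then-write split is what makes the scheme safe: the checkpoint reflects a consistent state at handoff time even though training raced ahead of the disk write.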
GEKO (Gradient-Efficient Knowledge Optimization): a plug-and-play training framework that makes LLM training 30-50% more efficient. Just as LoRA revolutionized fine-tuning, GEKO streamlines training through intelligent sample selection with Mountain Curriculum and Q-Value Learning.
A PyTorch framework for image classification covering 11 CNN architectures (ResNet, EfficientNet, MobileNet, etc.). Handles the optimization setup and training specifics for each model.
Library for config-based neural network training
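Config-based training frameworks like the entries above generally map a declarative config onto the pieces of a training loop: hyperparameters live in data, the loop interprets them. A toy sketch under assumed config keys (`lr`, `epochs`, `init_weight` are hypothetical, not any of these libraries' schemas):

```python
# A toy config-driven trainer: the config declares the run, code interprets it.
# The keys ("lr", "epochs", "init_weight") are a hypothetical schema.
CONFIG = {
    "lr": 0.1,
    "epochs": 50,
    "init_weight": 0.0,
}

# Tiny dataset for y = 2x, so the learned weight should approach 2.0.
DATA = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def train(config: dict, data: list[tuple[float, float]]) -> float:
    """Fit y = w*x by gradient descent on mean squared error."""
    w = config["init_weight"]
    for _ in range(config["epochs"]):
        # dL/dw for L = mean((w*x - y)^2) is mean(2*(w*x - y)*x)
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= config["lr"] * grad
    return w

weight = train(CONFIG, DATA)
print(round(weight, 3))  # converges to 2.0
```

In a real framework the config would typically also select the model, optimizer, and data pipeline by name; the benefit is the same: a new experiment is a new config file, not new code.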
🚀 Accelerate ML training on the BEAM with CrucibleTrain's unified infrastructure for diverse model types and workflows.