Evolutionary search for the best per-tensor quantization types to build optimal GGUF variants of an AI model.
Updated Dec 10, 2025
Advanced Data Analysis with Causality and Reinforcement Learning
Convert and quantize LLMs.
Convert Hugging Face models to GGUF with xet support.
Auto GGUF converter for Hugging Face Hub models with multiple quantization options.
Quantize LLMs automatically.
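The projects above all revolve around GGUF quantization, which stores weights in fixed-size blocks with one floating-point scale per block. A minimal sketch of that idea, in the spirit of GGUF's Q8_0 format (the function names and block size constant here are illustrative assumptions, not the actual `gguf` library API):

```python
# Minimal sketch of block-wise 8-bit quantization, the core idea behind
# GGUF's Q8_0 format. Illustrative only -- not the real llama.cpp code.
BLOCK = 32  # GGUF Q8_0 quantizes weights in blocks of 32 values


def quantize_q8(weights):
    """Quantize floats to int8 codes, one fp32 scale per block."""
    blocks = []
    for i in range(0, len(weights), BLOCK):
        chunk = weights[i:i + BLOCK]
        amax = max(abs(x) for x in chunk) or 1.0  # guard all-zero blocks
        scale = amax / 127.0                      # map amax to int8 max
        codes = [round(x / scale) for x in chunk]
        blocks.append((scale, codes))
    return blocks


def dequantize_q8(blocks):
    """Recover approximate floats from (scale, codes) blocks."""
    return [code * scale for scale, codes in blocks for code in codes]
```

Round-tripping a tensor through these two functions bounds the per-weight error by half a quantization step (`scale / 2`), which is why larger weights in a block dominate the scale and hurt the precision of smaller ones.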