
LLMSQL

A patched and improved version of WikiSQL, the original large crowd-sourced dataset for developing natural language interfaces for relational databases.

Our datasets are available for different scenarios on our HuggingFace page.

Overview

Install

pip3 install llmsql

This repository provides the LLMSQL Benchmark: a modernized, cleaned, and extended version of WikiSQL, designed for evaluating large language models (LLMs) on Text-to-SQL tasks.

Note

The package does not ship with the dataset; it is hosted on our HuggingFace page.
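For instance, the benchmark questions can be pulled with the datasets library. A minimal sketch, where the dataset id and split are illustrative placeholders (check our HuggingFace page for the exact names):

from datasets import load_dataset

# Hypothetical dataset id and split; see the LLMSQL HuggingFace page for the actual names.
benchmark = load_dataset("llmsql/llmsql-benchmark", split="test")
print(benchmark[0])  # inspect one question record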

This package contains

  • Support for modern LLMs.
  • Tools for inference and evaluation.
  • Support for Hugging Face models out-of-the-box.
  • Structured for reproducibility and benchmarking.


Usage Recommendations

Modern LLMs are already strong at producing SQL queries without finetuning. We therefore recommend that most users:

  1. Run inference directly on the full benchmark:

    • Use llmsql.inference_transformers to generate SQL predictions with your model via transformers; for vllm-based inference, use llmsql.inference_vllm instead. inference_transformers works with either an HF model id, e.g. Qwen/Qwen2.5-1.5B-Instruct, or a model instance passed directly, e.g. inference_transformers(model_or_model_name_or_path=model, ...).
    • Evaluate results against the benchmark with the llmsql.evaluate function.
  2. Optional finetuning:

    • For research or domain adaptation, we provide a finetuning version for HF models. Use the Finetune Ready dataset from HuggingFace.

Tip

You can find additional manuals in the README files of each folder (Inference README, Evaluation README).

Tip

vllm-based inference requires the vllm optional dependency group: pip install llmsql[vllm]

Repository Structure


llmsql/
β”œβ”€β”€ evaluation/          # Scripts for downloading DB + evaluating predictions
└── inference/           # Generate SQL queries with your LLM

Quickstart

For the full tutorial, check out the Colab notebook: Open in Colab

Install

Make sure you have the package installed (we used Python 3.11):

pip3 install llmsql

1. Run Inference

Transformers inference

from llmsql import inference_transformers

# Run generation directly with transformers
results = inference_transformers(
    model_or_model_name_or_path="Qwen/Qwen2.5-1.5B-Instruct",
    output_file="path_to_your_outputs.jsonl",
    num_fewshots=5,
    batch_size=8,
    max_new_tokens=256,
    do_sample=False,
    model_kwargs={
        "torch_dtype": "bfloat16",
    }
)
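As mentioned above, inference_transformers also accepts a model instance instead of an HF model id. A minimal sketch, assuming standard transformers loading and that the remaining keyword arguments mirror the id-based call:

import torch
from transformers import AutoModelForCausalLM

from llmsql import inference_transformers

# Load the model yourself, then pass the instance directly.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    torch_dtype=torch.bfloat16,
)

results = inference_transformers(
    model_or_model_name_or_path=model,  # instance instead of an HF id
    output_file="path_to_your_outputs.jsonl",
    do_sample=False,
)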

vLLM inference (recommended)

To speed up inference, we recommend vllm-based inference, available via the optional llmsql[vllm] dependency group:

pip install llmsql[vllm]

After that, run the following for fast inference:

from llmsql import inference_vllm

# Generate predictions with the vLLM backend
results = inference_vllm(
    "Qwen/Qwen2.5-1.5B-Instruct",
    "test_results.jsonl",
    do_sample=False,
    batch_size=20000
)


2. Evaluate Results

from llmsql import evaluate

report = evaluate(outputs="path_to_your_outputs.jsonl")
print(report)

Or with the results from inference:

from llmsql import evaluate

# results = inference_transformers(...) or inference_vllm(...)

report = evaluate(outputs=results)
print(report)
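Either way, the predictions end up in a .jsonl file with one JSON object per line, so you can inspect a few records by hand (the exact field names follow the LLMSQL output schema and are not shown here):

import json

# Peek at the first three prediction records.
with open("path_to_your_outputs.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        print(json.loads(line))  # one prediction record per line
        if i == 2:
            break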

Prompt Template

The prompt defines explicit constraints on the generated output. The model is instructed to output only a valid SQL SELECT query, to use the fixed table name "Table" (replaced with the actual table name during evaluation), to quote all table and column names, and to restrict generation to the specified SQL functions, condition operators, and keywords. The full prompt specification is provided in the prompt template.

Below is an example of the 5-shot prompt template used during inference.

Your task: Given a question and a table schema, output ONLY a valid SQL SELECT query.
⚠️ STRICT RULES:
 - Output ONLY SQL (no explanations, no markdown, no ``` fences)
 - Use table name "Table"
 - Allowed functions: ['MAX', 'MIN', 'COUNT', 'SUM', 'AVG']
 - Allowed condition operators: ['=', '>', '<', '!=']
 - Allowed SQL keywords: ['SELECT', 'WHERE', 'AND']
 - Always use "" with all column names and table name, even one word: "Price", "General column", "Something #"

### EXAMPLE 1:
Question: What is the price of the Samsung Galaxy S23?
Columns: ['Brand', 'Model', 'Price', 'Storage', 'Color']
Types: ['text', 'text', 'real', 'text', 'text']
Sample row: ['Apple', 'iPhone 14', 899.99, '128GB', 'White']
SQL: SELECT "Price" FROM "Table" WHERE "Brand" = "Samsung" AND "Model" = "Galaxy S23";

### EXAMPLE 2:
Question: How many books did Maya Chen publish?
Columns: ['Author', 'Books Published', 'Genre', 'Country', 'Years Active']
Types: ['text', 'real', 'text', 'text', 'text']
Sample row: ['John Smith', 3, 'Non-fiction', 'Canada', '2005–2015']
SQL: SELECT "Books Published" FROM "Table" WHERE "Author" = "Maya Chen";

### EXAMPLE 3:
Question: What is the total population of cities in California?
Columns: ['City', 'State', 'Population', 'Area', 'Founded']
Types: ['text', 'text', 'real', 'real', 'text']
Sample row: ['Houston', 'Texas', 2304580, 1651.1, '1837']
SQL: SELECT SUM("Population") FROM "Table" WHERE "State" = "California";

### EXAMPLE 4:
Question: How many restaurants serve Italian cuisine?
Columns: ['Restaurant', 'Cuisine', 'Rating', 'City', 'Price Range']
Types: ['text', 'text', 'real', 'text', 'text']
Sample row: ['Golden Dragon', 'Chinese', 4.2, 'Boston', '$$']
SQL: SELECT COUNT(*) FROM "Table" WHERE "Cuisine" = "Italian";

### EXAMPLE 5:
Question: What is the average salary for Software Engineers?
Columns: ['Job Title', 'Salary', 'Experience', 'Location', 'Company Size']
Types: ['text', 'real', 'text', 'text', 'text']
Sample row: ['Data Analyst', 70000, 'Junior', 'Chicago', '200–500']
SQL: SELECT AVG("Salary") FROM "Table" WHERE "Job Title" = "Software Engineer";

### NOW ANSWER:
Question: {question}
Columns: {headers}
Types: {types}
Sample row: {sample_row}
SQL:

Implementations of 0-shot, 1-shot, and 5-shot prompt templates are available here: 👉 link-to-file
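To make the placeholders concrete, here is a minimal sketch of filling the final block of the template for one question. PROMPT_TEMPLATE stands in for the full 5-shot text above, and str.format is an assumption about how the fields are substituted:

# PROMPT_TEMPLATE abbreviates the full 5-shot prompt shown above.
PROMPT_TEMPLATE = (
    "### NOW ANSWER:\n"
    "Question: {question}\n"
    "Columns: {headers}\n"
    "Types: {types}\n"
    "Sample row: {sample_row}\n"
    "SQL:"
)

prompt = PROMPT_TEMPLATE.format(
    question="What is the price of the Samsung Galaxy S23?",
    headers=['Brand', 'Model', 'Price', 'Storage', 'Color'],
    types=['text', 'text', 'real', 'text', 'text'],
    sample_row=['Apple', 'iPhone 14', 899.99, '128GB', 'White'],
)
print(prompt)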

Suggested Workflow

  • Primary: Run inference on all questions with vllm or transformers → Evaluate with evaluate(), as condensed in the sketch below.
  • Secondary (optional): Fine-tune on train/val → Test on test_questions.jsonl. The datasets are available at HF Finetune Ready.
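Condensed into code, the primary path is the two Quickstart calls chained together (model id and output file reused from the examples above):

from llmsql import evaluate, inference_vllm

# Generate predictions for every benchmark question, then score them.
results = inference_vllm(
    "Qwen/Qwen2.5-1.5B-Instruct",
    "test_results.jsonl",
    do_sample=False,
)
report = evaluate(outputs=results)
print(report)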

Contributing

Check out our open issues, fork this repo and feel free to submit pull requests!

We also encourage you to submit new issues!

To get started with development, first fork the repository and install the base dependencies together with the dev dependencies.

For more information on contributing, check CONTRIBUTING.md and our documentation page.

License & Citation

Please cite LLMSQL if you use it in your work:

@inproceedings{llmsql_bench,
  title={LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL},
  author={Pihulski, Dzmitry and Charchut, Karol and Novogrodskaia, Viktoria and Koco{\'n}, Jan},
  booktitle={2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
  year={2025},
  organization={IEEE}
}
