Task Description
You are given a Qlib-based repository. Your task is to automatically optimize an existing algorithm (model, training pipeline, or strategy) to achieve better performance than the current baseline.
Objectives
- Improve key metrics (e.g., IC, Sharpe Ratio, Annualized Return, Max Drawdown).
- Maintain compatibility with Qlib’s full pipeline (data → training → backtest → evaluation).
- Avoid data leakage and overfitting.
- Ensure a fair comparison by using the same dataset and evaluation settings as the baseline (see the sketch after this list).
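For orientation, here is a minimal sketch of the data → training → evaluation loop that `qrun` drives for the LightGBM / Alpha158 benchmark. The provider path, date ranges, and hyperparameter values below are illustrative assumptions rather than the exact benchmark settings; the point is that the non-overlapping train/valid/test segments stay fixed, which is what keeps the comparison fair and leakage-free.

```python
# Minimal sketch (assumed paths/dates/params) of the Qlib pipeline behind qrun:
# data -> training -> signal recording -> IC analysis.
import qlib
from qlib.constant import REG_CN
from qlib.utils import init_instance_by_config
from qlib.workflow import R
from qlib.workflow.record_temp import SignalRecord, SigAnaRecord

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

dataset = init_instance_by_config({
    "class": "DatasetH",
    "module_path": "qlib.data.dataset",
    "kwargs": {
        "handler": {
            "class": "Alpha158",
            "module_path": "qlib.contrib.data.handler",
            "kwargs": {
                "start_time": "2008-01-01",
                "end_time": "2020-08-01",
                "fit_start_time": "2008-01-01",
                "fit_end_time": "2014-12-31",
                "instruments": "csi300",
            },
        },
        # Fixed, non-overlapping segments: keep these identical for the
        # baseline and the optimized run to avoid leakage and unfair comparison.
        "segments": {
            "train": ("2008-01-01", "2014-12-31"),
            "valid": ("2015-01-01", "2016-12-31"),
            "test": ("2017-01-01", "2020-08-01"),
        },
    },
})

model = init_instance_by_config({
    "class": "LGBModel",
    "module_path": "qlib.contrib.model.gbdt",
    # Illustrative values only; take the real ones from the benchmark YAML.
    "kwargs": {"loss": "mse", "num_leaves": 210, "learning_rate": 0.05},
})

with R.start(experiment_name="lgbm_alpha158_baseline"):
    model.fit(dataset)
    recorder = R.get_recorder()
    SignalRecord(model, dataset, recorder).generate()  # saves predictions and labels
    SigAnaRecord(recorder).generate()                  # logs IC / ICIR / Rank IC / Rank ICIR
```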
Optimization Scope
You may improve:
- Model architecture
- Hyperparameters (a minimal-diff example follows this list)
- Training strategy
- Feature engineering
- Portfolio construction or risk modeling
- RL reward design (if applicable)
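A low-friction way to keep diffs minimal is to derive a tuned config from the benchmark YAML rather than editing it in place. The sketch below assumes PyYAML is available and that the config uses the usual `task.model.kwargs` layout; the overridden values are placeholders, not tuning recommendations.

```python
# Sketch: produce a minimal-diff "tuned" config by overriding a few LightGBM
# kwargs in the benchmark YAML. Values below are placeholders.
import yaml

SRC = "benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml"
DST = "benchmarks/LightGBM/workflow_config_lightgbm_Alpha158_tuned.yaml"

with open(SRC) as f:
    cfg = yaml.safe_load(f)

cfg["task"]["model"]["kwargs"].update({
    "num_leaves": 128,       # placeholder
    "learning_rate": 0.02,   # placeholder
    "lambda_l2": 100.0,      # placeholder
})

with open(DST, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)

# Then reproduce with: qrun benchmarks/LightGBM/workflow_config_lightgbm_Alpha158_tuned.yaml
```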
Expected Output
- Summary of changes and rationale
- Modified code/config (minimal diffs preferred)
- Baseline vs. optimized results
- Clear performance comparison (see the comparison sketch below)
Goal: Achieve robust and reproducible improvement over the baseline within Qlib’s framework.
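The baseline-vs-optimized table can be assembled from the metrics each run logs. The sketch below assumes both runs were tracked with Qlib's experiment manager (as in the pipeline sketch above); the experiment names and metric keys are assumptions, so check `recorder.list_metrics()` for the exact keys your Qlib version produces.

```python
# Sketch: tabulate logged signal metrics for a baseline run and a tuned run.
# Assumes qlib.init(...) has already been called and both experiments exist.
import pandas as pd
from qlib.workflow import R

rows = {}
for name in ("lgbm_alpha158_baseline", "lgbm_alpha158_tuned"):  # assumed names
    exp = R.get_exp(experiment_name=name, create=False)
    recorder = list(exp.list_recorders().values())[-1]  # pick one finished run
    metrics = recorder.list_metrics()
    rows[name] = {k: metrics[k] for k in ("IC", "ICIR", "Rank IC", "Rank ICIR") if k in metrics}

print(pd.DataFrame(rows).T)  # one row per run, one column per metric
```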
Baseline Repository Link (Must be Public)
https://github.com/microsoft/qlib/tree/main/examples/benchmarks
Baseline reproduction (minimal)
```bash
cd examples  # avoid running the program from the directory that contains the qlib source
qrun benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml
```
Dataset (Must be Public)
https://github.com/chenditc/investment_data/releases
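After downloading a release archive and unpacking it into your Qlib data directory, it is worth sanity-checking that the data loads through Qlib before training anything. The provider path and instrument pool below are assumptions; point them at wherever you extracted the data.

```python
# Sketch: verify the public dataset is readable through Qlib's data API.
import qlib
from qlib.constant import REG_CN
from qlib.data import D

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)  # assumed extract path

df = D.features(
    D.instruments("csi300"),   # assumed instrument pool
    ["$close", "$volume"],
    start_time="2020-01-01",
    end_time="2020-12-31",
)
print(df.head())
```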
Results and Evaluation Metrics
- Rank IC (Information Coefficient), or
- Sharpe Ratio
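For clarity, here is a hedged sketch of how both metrics can be computed with pandas. The column names and index layout are assumptions about the prediction frame, and the daily excess-return series is a placeholder input; in the standard pipeline, the equivalent figures come from `SigAnaRecord` and the portfolio-analysis records.

```python
# Sketch of the two reported metrics. Assumes `pred_df` has a
# (datetime, instrument) MultiIndex with "score" (model output) and "label"
# (next-period return) columns, and `daily_excess_return` is a pandas Series.
import numpy as np
import pandas as pd

def rank_ic(pred_df: pd.DataFrame) -> float:
    """Mean cross-sectional Spearman correlation between score and label."""
    per_day = pred_df.groupby(level="datetime").apply(
        lambda x: x["score"].corr(x["label"], method="spearman")
    )
    return float(per_day.mean())

def sharpe_ratio(daily_excess_return: pd.Series, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio of a daily (excess) return series."""
    mu = daily_excess_return.mean() * periods_per_year
    sigma = daily_excess_return.std() * np.sqrt(periods_per_year)
    return float(mu / sigma)
```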
Preconditions (required)
- The baseline I provide comes from public code hosted on an open-source platform such as GitHub.
- The task I use/optimize is based on a public dataset.
- If my request involves private code/data, I will contact interndiscovery@pjlab.org.cn by email instead.