A cross-platform telemetry dashboard built for the Sunseeker Solar Car team. The app ingests live CAN/serial traffic from the vehicle, converts it into human-friendly metrics, and visualises everything through an extensible PyQt interface. It bundles a machine-learning layer for energy predictions, supports portable data exports, and now includes a simulation mode so engineers can rehearse race-day scenarios without the car.
- Download the latest release from GitHub (the `.exe` file plus the two battery configuration `.txt` files).
- Place the executable and the two config files in the same directory, keeping the config files under a folder named `config_files`.
- Run the executable. On first launch the app creates two CSVs and one rotating log file inside `application_data/`. Logs rotate up to five times at 20 MB each so you always have fresh diagnostics.
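The rotation policy above (20 MB per file, five backups) can be reproduced with Python's standard `logging.handlers`; the logger name and format string below are illustrative, not the app's actual identifiers.

```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

def make_rotating_logger(log_dir: str = "application_data") -> logging.Logger:
    """Create a logger that rotates at 20 MB, keeping five backups (*.1 ... *.5)."""
    Path(log_dir).mkdir(parents=True, exist_ok=True)
    handler = RotatingFileHandler(
        Path(log_dir) / "telemetry_application.log",
        maxBytes=20 * 1024 * 1024,  # rotate once the file reaches 20 MB
        backupCount=5,              # keep telemetry_application.log.1 .. .5
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("telemetry")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```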
- Clone or download the repository.
- Open a terminal in the project root.
- Create and activate a Python environment (`python3 -m venv .venv && source .venv/bin/activate`).
- Install requirements (`pip install -r requirements.txt`).
- Launch with `python src/main_app.py`.
Real-time telemetry values grouped by subsystem (motor controllers, battery packs, solar data, etc.). Predictions such as remaining time and break-even speed are annotated with uncertainty bands, timestamps, and quality flags.
PyQtGraph-powered plots for motor controllers, battery packs, remaining capacity, and the “Insights” panel (efficiency, power, imbalance metrics). Colours can be customised and persisted through the settings tab.
Battery and array image tabs let you overlay probe points on reference drawings. Points and images persist between sessions (src/application_data/user_images/ and config.json). Use Clear/Undo to refine placements.
Quick access to current CSV locations, shortcuts to export copies, and controls to change the capture directory. From this tab you can now create “Telemetry Bundles” (zip archives with data, notes, metadata, and logs) and import previous runs for offline analysis.
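A Telemetry Bundle, as described above, is a zip archive of data, notes, and metadata. A minimal export sketch using only the standard library follows; the archive layout (`data/`, `notes.txt`, `metadata.json`) and function name are assumptions, not the app's real bundle format.

```python
import json
import time
import zipfile
from pathlib import Path

def export_bundle(bundle_path: str, csv_paths: list, notes: str = "") -> str:
    """Pack telemetry CSVs, free-form notes, and run metadata into one zip.

    Hypothetical layout: data/ for CSVs, notes.txt, and metadata.json.
    """
    metadata = {
        "created": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "files": [Path(p).name for p in csv_paths],
    }
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in csv_paths:
            zf.write(p, arcname=f"data/{Path(p).name}")  # telemetry data
        zf.writestr("notes.txt", notes)                  # engineer notes
        zf.writestr("metadata.json", json.dumps(metadata, indent=2))
    return bundle_path
```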
Replay recorded telemetry CSVs or generate synthetic scenarios (Nominal Cruise, High Load, Charging Spike). Simulation data streams through the UI but is intentionally not written back to CSVs or sent to the telemetry server, so your historical data stays clean.
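A synthetic scenario generator of the kind described above might look like the following sketch. The baseline power figures, jitter levels, and drift shape are illustrative assumptions, not the app's calibrated profiles.

```python
import math
import random

# Per-scenario baselines; these numbers are illustrative only.
SCENARIOS = {
    "Nominal Cruise": {"power_w": 1200.0, "jitter": 0.02},
    "High Load":      {"power_w": 3500.0, "jitter": 0.10},
    "Charging Spike": {"power_w": -900.0, "jitter": 0.25},
}

def synth_samples(scenario: str, n: int, seed: int = 42):
    """Yield n synthetic power readings around the scenario's baseline."""
    cfg = SCENARIOS[scenario]
    rng = random.Random(seed)  # seeded so dry runs are reproducible
    for i in range(n):
        drift = 1.0 + 0.05 * math.sin(i / 30.0)  # slow sinusoidal drift
        noise = rng.gauss(0.0, cfg["jitter"])    # per-sample Gaussian noise
        yield cfg["power_w"] * drift * (1.0 + noise)
```

Because the generator is seeded, a scenario replays identically, which helps when rehearsing the same race-day edge case repeatedly.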
Configure COM port, baud rate, log level, Solcast credentials, unit system, and colour themes. Includes machine-learning retrain controls, Solcast key management, and an integrated updater to install tagged releases.
To add new battery pack presets to the configuration dialog:
- Create a `.txt` file with the following lines and numeric values:
  - Battery cell capacity amps hours
  - Battery cell nominal voltage
  - Amount of battery cells
  - Number of battery series
- Save the file inside the `config_files` folder next to the executable (or the project root when running from source).
- The entry appears in the configuration dropdown the next time the app starts.
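A preset file might look like the sketch below. The label/value layout and the numbers are assumptions for illustration; match the exact format of the two bundled configuration files.

```text
Battery cell capacity amps hours, 3.5
Battery cell nominal voltage, 3.6
Amount of battery cells, 120
Number of battery series, 30
```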
- `main_app.py` – entry point; wires logging and launches `TelemetryApplication`.
- `telemetry_application.py` – central coordinator: GUI creation, serial reader lifecycle, buffering, ML predictions, telemetry server upload, and simulation handling.
- `buffer_data.py` / `csv_handler.py` – ingest buffering, CSV/training persistence, and bundle import/export helpers.
- `learning_datasets/` – machine-learning pipelines and quality diagnostics.
- `gui_files/` – modular PyQt tabs (data display, graphs, images, simulation, settings, CSV management).
- `simulation.py` – worker threads for replaying recorded telemetry and generating synthetic scenarios.
- Run unit/system smoke tests locally (e.g., simulation replay, bundle export/import).
- Update
Version.pyand changelog. - Tag the release (
git tag vX.Y.Z) and push; the GitHub Actions workflow builds Windows/macOS/Linux artifacts via PyInstaller, runsscripts/build_tuf_repo.py, and publishes signed assets plus TUF metadata. - Verify telemetry bundles and updater metadata before announcing.
The application uses TUF for signed updates. When publishing new artifacts, ensure you upload the freshly generated `release/metadata/*.json` and `release/targets/*.tar.gz`. Timestamp validity defaults to 60 days; rerun the pipeline for each release.
Simulated data never touches the on-disk CSVs or training corpus and is not forwarded to the remote ingestion endpoint. The core buffer pipeline still runs so derived metrics and predictions remain accurate during dry runs. When the simulation finishes, the app automatically resumes the serial reader if it was running before.
- Logs live under `application_data/telemetry_application.log`, rotating to `*.1`…`*.5` at 20 MB each.
- Use the Simulation tab to reproduce edge cases without hardware.
- Enable higher log levels (`DEBUG`) via the settings tab for verbose packet tracing.
- The machine-learning layer relies on `training_data.csv`; ensure it has valid numeric entries before retraining.
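A quick pre-retrain check for the numeric-entries requirement above can be sketched with the standard library. The function name is hypothetical, and no column names are assumed: every field in every data row must parse as a number.

```python
import csv

def validate_training_csv(path: str) -> list:
    """Return a list of problems found; an empty list means the file looks usable."""
    problems = []
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    if len(rows) < 2:
        problems.append("no data rows beneath the header")
        return problems
    for line_no, row in enumerate(rows[1:], start=2):
        for col, cell in zip(rows[0], row):
            try:
                float(cell)  # every data cell must be numeric
            except ValueError:
                problems.append(
                    f"line {line_no}, column {col!r}: {cell!r} is not numeric"
                )
    return problems
```

Running this before triggering a retrain surfaces bad rows with their line numbers instead of failing deep inside the ML pipeline.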
- Fork the repository and create a feature branch.
- Keep changes modular (GUI tabs, CSV handler, ML pipelines, simulation).
- Update the README or docs if you modify user-facing flows.
- Run `python -m compileall src` (already part of CI) and any project-specific tests.
- Submit a pull request with a concise summary plus validation steps.
Thanks for keeping the telemetry stack thriving for future Sunseeker crews!