
feat(app): automated Maestro E2E functional tests (#3857) #4733

Closed

Iansabia wants to merge 26 commits into BasedHardware:main from Iansabia:feat/maestro-functional-tests-v2

Conversation

@Iansabia

Summary

  • Adds 10 Maestro E2E test flows covering all core app functionality: onboarding/sign-in, conversations (list, detail, CRUD), memories, chat, apps/plugins, settings, device connection, and recording
  • Flows are tagged core (runs on the simulator) vs. device_required (needs physical Omi hardware); see the tag-filter sketch after this list
  • Includes runner scripts (run_all.sh, run_device.sh) with pass/fail reporting and optional HTML output
  • Integrates into existing test.sh via --e2e flag
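For reference, this tag split maps directly onto Maestro's built-in tag filters. A minimal sketch of the two invocations, assuming the flows live under app/.maestro (the exact flow directory layout inside it is not shown in this PR):

```bash
# Simulator-safe flows only (01-08).
maestro test --include-tags=core app/.maestro/

# Hardware flows (09-10), with an Omi device connected.
maestro test --include-tags=device_required app/.maestro/
```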

How It Works

  1. Install Maestro: brew install maestro
  2. Build and install the app on a simulator or device
  3. Run core tests: bash app/.maestro/scripts/run_all.sh
  4. Run device tests (with Omi connected): bash app/.maestro/scripts/run_device.sh
  5. Or use bash app/test.sh --e2e to run unit + widget + E2E tests together

After ~1 hour you get a full report covering sign-in, conversation recording/transcription, CRUD operations, chat, and app management.

Test Flows

| Flow | What It Tests | Device Required |
| --- | --- | --- |
| 01_onboarding | Sign-in, name entry, language, permissions | No |
| 02_conversations_list | List rendering, scrolling | No |
| 03_conversation_detail | Opening conversation, transcript view | No |
| 04_conversation_crud | Create, update, delete conversations | No |
| 05_memories | Memory list, creation, interaction | No |
| 06_chat | Chat input, AI responses | No |
| 07_apps | App store, plugin install/manage | No |
| 08_settings | Settings navigation, preferences | No |
| 09_device_connection | BLE scan, pair, connect status | Yes |
| 10_recording | Record, transcribe, verify conversation | Yes |
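For context, each flow is a Maestro YAML file: a config header declaring the appId and tags, then a list of UI steps. A hypothetical sketch of a core-tagged flow (the appId and selectors below are placeholders, not taken from this PR):

```yaml
# Hypothetical sketch of a core-tagged flow; appId and selectors are placeholders.
appId: com.example.omi
tags:
  - core
---
- launchApp
- assertVisible: "Conversations"   # home screen reached
- tapOn: "Conversations"
- scroll                           # list renders and keeps scrolling
```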

Test plan

  • Install Maestro CLI
  • Build app with flutter build ios --flavor dev --simulator
  • Run bash app/.maestro/scripts/run_all.sh on simulator
  • Verify all 8 core flows pass
  • Run bash app/.maestro/scripts/run_device.sh with Omi connected
  • Verify recording + device flows pass

Closes #3857

Commit summaries

  • Configures Maestro for automated functional testing, with core flows and device-required flows separated via tags.
  • 01_onboarding: app launch, sign-in, name entry, language selection, permissions, speech profile skip, and home screen landing.
  • 02_conversations_list: conversation list rendering, scrolling, and list item visibility.
  • 03_conversation_detail: opening a conversation, viewing the transcript, and detail screen elements.
  • 04_conversation_crud: creating, updating, and deleting conversations through the UI.
  • 05_memories: memory list display, creation, and interaction.
  • 06_chat: chat input, message sending, and AI response display.
  • 07_apps: app store browsing, plugin installation, and management.
  • 08_settings: settings screen navigation, preference toggles, and profile access.
  • 09_device_connection: BLE device scanning, pairing, and connection status. Requires a physical Omi device (tagged device_required).
  • 10_recording: recording start, the transcription indicator, and conversation creation from captured audio. Requires a physical Omi device.
  • run_all.sh: runs all core E2E flows sequentially with a pass/fail summary and optional HTML report generation (see the sketch after this list).
  • run_device.sh: runs the Maestro flows that require a physical Omi device connected.
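A minimal sketch of what a runner like run_all.sh could look like; the flows/ path and file glob are assumptions, not the actual script from this PR:

```bash
#!/usr/bin/env bash
# Hypothetical runner sketch: execute the eight core flows, tally pass/fail.
set -u
FLOWS_DIR="app/.maestro/flows"   # assumed location of the flow YAML files
pass=0; fail=0; failed=()

for flow in "$FLOWS_DIR"/0{1..8}_*.yaml; do
  if maestro test "$flow"; then
    pass=$((pass + 1))
  else
    fail=$((fail + 1)); failed+=("$flow")
  fi
done

echo "Passed: $pass  Failed: $fail"
((fail > 0)) && printf 'FAILED: %s\n' "${failed[@]}"
exit "$fail"
```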
Finally, test.sh gains an --e2e flag that runs the Maestro functional tests alongside the existing unit and widget tests; a sketch of that wiring follows.
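A sketch of how the --e2e hook in app/test.sh might look; the structure of the existing script is an assumption:

```bash
# Hypothetical sketch of the --e2e flag handling in app/test.sh.
RUN_E2E=false
for arg in "$@"; do
  [ "$arg" = "--e2e" ] && RUN_E2E=true
done

flutter test   # existing unit + widget tests

if [ "$RUN_E2E" = true ]; then
  bash app/.maestro/scripts/run_all.sh   # Maestro E2E flows
fi
```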
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive suite of Maestro E2E tests, which is a great addition for ensuring app quality. The tests cover core functionality and are well-structured with tags for simulator vs. device-specific flows. My review focuses on improving the maintainability and robustness of the new test scripts. I've identified a few areas with code duplication in both the YAML flow definitions and the shell runner scripts. Addressing these will make the test suite more resilient and easier to manage as the app evolves.
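On the YAML duplication point: one common way to factor out repeated steps is Maestro's runFlow command, which lets each flow reuse a shared subflow. A sketch, assuming a hypothetical shared file such as app/.maestro/common/sign_in.yaml (the path and steps are placeholders):

```yaml
# Hypothetical shared subflow: app/.maestro/common/sign_in.yaml
appId: com.example.omi
---
- launchApp
- tapOn: "Sign in"
# ...remaining shared sign-in steps...
```

Each flow could then start with a single reference instead of repeating the steps:

```yaml
- runFlow: ../common/sign_in.yaml
```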

@beastoin
Collaborator

Hey @Iansabia, closing this for now — thanks for putting it together.

The code and write-up look solid, but there's no real evidence that any of this was actually run and tested end-to-end. The test plan checkboxes are unchecked, and there are no screenshots, terminal output, videos, or logs showing it working on a real device or environment.

This matters more than ever now that AI makes writing code easy — the code itself isn't the hard part anymore. What's valuable is proving it actually works: real test output, real screenshots, real demo. That's what gives reviewers confidence to merge.

Feel free to reopen once you have real end-to-end evidence — run the tests, paste the output, show it working. We'd love to merge it then.

@beastoin closed this Feb 17, 2026
@github-actions
Contributor

Hey @Iansabia 👋

Thank you so much for taking the time to contribute to Omi! We truly appreciate you putting in the effort to submit this pull request.

After careful review, we've decided not to merge this particular PR. Please don't take this personally — we genuinely try to merge as many contributions as possible, but sometimes we have to make tough calls based on:

  • Project standards — Ensuring consistency across the codebase
  • User needs — Making sure changes align with what our users need
  • Code best practices — Maintaining code quality and maintainability
  • Project direction — Keeping aligned with our roadmap and vision

Your contribution is still valuable to us, and we'd love to see you contribute again in the future! If you'd like feedback on how to improve this PR or want to discuss alternative approaches, please don't hesitate to reach out.

Thank you for being part of the Omi community! 💜
