diff --git a/.azad/.locked-paths b/.azad/.locked-paths index dd5befe..8499e6a 100644 --- a/.azad/.locked-paths +++ b/.azad/.locked-paths @@ -2,5 +2,6 @@ "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/current/search-results.html", "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/current/create-account/email-verification.html", "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/current/create-account/email-welcome.html", - "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/current/product-detail.html" + "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/current/product-detail.html", + "/Users/davidevans/Documents/Projects/procurement-prototype/app/views/design-histories/v3.html" ] \ No newline at end of file diff --git a/app/views/design-histories/design-histories.html b/app/views/design-histories/design-histories.html index 037df4d..b6518c1 100644 --- a/app/views/design-histories/design-histories.html +++ b/app/views/design-histories/design-histories.html @@ -58,7 +58,7 @@

2. Peer to peer engagement + href="v1">2. Team workshop

diff --git a/app/views/design-histories/v0.html b/app/views/design-histories/v0.html index 5992cec..b5d3ec7 100644 --- a/app/views/design-histories/v0.html +++ b/app/views/design-histories/v0.html @@ -14,9 +14,20 @@

Initial assumptions from discovery


-

Situation

-

The platform was conceived as a central digital repository to enable NHS trusts to make faster, more consistent, and evidence-based procurement decisions for medical technologies.

+

DHSC conducted a comprehensive discovery phase in March 2025, researching directly with NHS users and producing personas, journey maps and system maps.

+ +

The recommendations from the discovery were:

+
    +
  1. Develop a product value information service to address unmet information needs
  2. +
  3. Improve sharing of data between Trusts, currently via informal information sharing networks
  4. +
  5. Improve standards of procurement commercial literacy and ensure teams have correct skills
  6. +
  7. Create standards for evidence provided by suppliers, and improve data collection from Trusts
  8. +
+ +

This Alpha phase was kicked off to address recommendation 1; we pivoted to include recommendation 2 as well. Recommendations 3 and 4 have been spun out into separate teams.

+ +
Information: @@ -48,39 +59,12 @@

Four core problems identified

-
- -

Scope for the initial alpha:

+ -

Less risky assumptions

- + -

Riskier assumptions

- + -

Possibly out of scope for now

-
@@ -121,10 +105,7 @@

Landscape analysis

-
- Information: -

Strategic position: Compass would be the missing "product intelligence layer" – complementary to existing platforms, not competing with them.

-
+
@@ -166,13 +147,13 @@

Value-based procurement framework


-

Initial prototype features

+

Initial prototype

The first prototype (built in sprint 1) demonstrated how we would test some of our initial assumptions:

1. Product comparison interface

-

What we removed

- -

We removed the star rating review section entirely. Research showed users want conversations and context, not simplified ratings. The nuance of "it worked for us because..." can't be captured in stars.

+

How it tested

diff --git a/app/views/design-histories/v2.html b/app/views/design-histories/v2.html index 299343b..5101d9c 100644 --- a/app/views/design-histories/v2.html +++ b/app/views/design-histories/v2.html @@ -32,7 +32,7 @@

Onboarding flow and homepage iterations

-

We explored how we could accommodate submitting evaluations to the platform.

+

We explored how we could accommodate submitting evaluations to the platform in user-friendly formats.

Date: January 2026 | Phase: Alpha Sprint 3

@@ -41,7 +41,7 @@

Onboarding flow and homepage iterations

The problem

-

We explored a simple flow which would enable procurement professionals to submit their own evaluations. This would potentially save time +

We initially explored a simple flow which would enable procurement professionals to submit their own evaluations. This would potentially save time for trusts looking to share their experiences with products, and help populate the platform with more peer-to-peer intelligence.

@@ -51,7 +51,7 @@

The problem

-

Quick document upload

+

Document upload

Purpose: Share existing evaluations quickly with minimal data entry

User journey:

    @@ -72,12 +72,12 @@

    Quick document upload

    Key design decisions

    -

    Regardless of which pathway trusts choose, all product intelligence feeds into a single shared repository. Route A users benefit from Route B's structured data, and Route B users can access Route A's shared documents.

    +

Information feeds into a single shared repository, capturing essential metadata without burden.

    -

    Route A captures essential metadata without burden

    -

    Rather than asking trusts to re-enter information already in their documents, we ask four focused questions:

    + +

    Rather than asking trusts to re-enter information already in their documents, we would ask:

-

What we removed

-

Star rating reviews section: User research showed users prefer talking to people over reading reviews. The evaluation cards with contacts serve this need better.

- +
-

Iterating based on user feedback

-

Wound care product page iteration

- -

User feedback

- - -

Design changes

- + + -

Retained from previous design

+

What we kept

Supplier contact card in sidebar, technical specs in accordion, cost details in accordion, contact details in cards with discussion topics.


@@ -112,247 +102,10 @@

User research findings

Key insights from procurement research

-

What users told us

- - -

Implications for design

- - -
- -

Final iteration gaps identified

- -

Gaps to address

- - -
- - -

Testing strategy

- -

Critical assumptions to validate

- -

Assumption 1: Users will actually contact peers

-
-
-
Why critical
-
Core value proposition depends on peer contact happening. If users won't contact, the service fails.
-
-
-
Success criteria
-
75% must say they would use peer contact feature
-
-
-
If wrong
-
Major pivot needed — service may not be viable
-
-
- -

Assumption 2: Trust adoption visibility builds confidence

-
-
-
Why critical
-
Showing '8 trusts using this' should create confidence for shortlisting decisions.
-
-
-
Success criteria
-
Users cite trust adoption as confidence factor in testing
-
-
-
If wrong
-
Rethink value proposition — may need different confidence signals
-
-
- -

Assumption 3: Users accept evaluation variety

-
-
-
Why critical
-
Evidence ranges from clinical trials to quick desk reviews. If users demand standardisation, the passporting model won't work.
-
-
-
Success criteria
-
Users find variety acceptable if context is clear
-
-
-
If wrong
-
Route B (structured assessment) becomes mandatory
-
-
- -

Assumption 4: 'How they evaluated' metadata helps

-
-
-
Why critical
-
Users wanted to understand how trusts conducted evaluations.
-
-
-
Success criteria
-
Users reference process information when explaining relevance
-
-
- -

Assumption 5: Discussion topics increase contact likelihood

-
-
-
Why critical
-
We ask trusts to specify what they'll discuss in Route A.
-
-
-
Success criteria
-
75% say topics make them more likely to contact
-
-
- -
- - -

Service design

- -

Dual pathway approach for evaluation submission

- -

The problem

-

We identified a tension in how NHS trusts approach evaluations. Some have well-established processes and produce comprehensive documents; they want to share existing work without duplicating effort. Others want guidance and structure. A one-size-fits-all approach would be too burdensome for mature trusts or too unstructured for others.

- -

Design solution

-

We designed a dual pathway approach:

- -
-
-
Route A: Quick document upload
-
    -
  • Upload existing evaluation document
  • -
  • Add basic metadata (product name, supplier, evaluation date)
  • -
  • Answer 4 quick questions about the evaluation
  • -
  • Optionally provide contact details for peer discussions
  • -
-

Time to complete: 5–10 minutes

-
-
- -
-
-
Route B: Structured assessment
-
    -
  • Select product and begin structured evaluation
  • -
  • Complete assessment sections (clinical value, safety, integration, sustainability)
  • -
  • Add supporting evidence and documentation
  • -
-

Time to complete: 30–45 minutes

-
-
- -

Key design decisions

- - -

How it tested

- - -
- - -

Visual design

- -

Badge system for evaluation metadata

- -

Purpose

-

Badges provide at-a-glance context for evaluations without requiring users to open documents. They help users quickly assess relevance and quality.

- -

Badge categories

- -

Procurement outcomes:

- - -

Evaluation types:

- - -

Trust types:

- - -

Trusted sources:

- - -
- -

Icon design for landing page

- -

Icons created

-

Custom SVG icons designed for three key value proposition cards:

- - -

Design specifications

- - -
- - -

Summary of design principles

- -
    -
  1. Lead with peer intelligence, not supplier marketing: Trust adoption and peer contacts should be the first things users see
  2. -
  3. Enable conversations, not just documents: Users prefer talking to people over reading; facilitate that
  4. -
  5. Embrace evaluation variety: Don't force standardisation; show context so users can judge relevance
  6. -
  7. Support multiple information-seeking modes: Known-item, exploratory, learning, and re-finding
  8. -
  9. Meet users where they are: Dual pathways accommodate different trust maturity levels
  10. -
  11. Show honest information: Including excluded products builds platform credibility
  12. -
- -
+

Findings here

+ -

Document prepared: January 2026

+