From 08b8f18961263f1a097cf2f16d6a7fd2e5879241 Mon Sep 17 00:00:00 2001 From: David Evans Date: Wed, 4 Feb 2026 08:19:52 +0000 Subject: [PATCH] design histories --- .../design-histories/design-histories.html | 16 +- app/views/design-histories/v0.html | 55 ++--- app/views/design-histories/v1.html | 188 ++++++++++-------- app/views/design-histories/v2.html | 114 +++-------- app/views/design-histories/v3.html | 6 +- 5 files changed, 158 insertions(+), 221 deletions(-) diff --git a/app/views/design-histories/design-histories.html b/app/views/design-histories/design-histories.html index d07761b..037df4d 100644 --- a/app/views/design-histories/design-histories.html +++ b/app/views/design-histories/design-histories.html @@ -43,10 +43,10 @@

Posts

First iteration: inital assumptions from discovery + href="v0">1. Initial assumptions from discovery

+ datetime="">Nov 2025

@@ -58,10 +58,10 @@

Second iteration: leading with trust adoption visibility on product pages + href="v1">2. Peer-to-peer engagement

+ datetime="">Dec 2025

@@ -73,10 +73,10 @@

Third iteration: designing the dual pathway approach for trust evaluation submission + href="v2">3. Onboarding flow and evaluation upload

+ datetime=""> Jan 2026

@@ -88,10 +88,10 @@

Fourth iteration - information architecture and search + href="v3">4. Information-seeking types

+ datetime="">Feb 2026

diff --git a/app/views/design-histories/v0.html b/app/views/design-histories/v0.html index a460bcc..5992cec 100644 --- a/app/views/design-histories/v0.html +++ b/app/views/design-histories/v0.html @@ -1,16 +1,16 @@ {% extends 'layout-hero-histories.html' %} -{% set pageTitle = "Design History: Ground Zero" %} +{% set pageTitle = "Design History 1" %} {% block content %}
-

Design History: Ground Zero

+

Initial assumptions from discovery

-

The starting context, assumptions, and materials that informed our initial Compass prototype pages – before any user testing.

+

The starting context, assumptions, and materials that informed our initial prototype before any user testing.

-

Date: November 2025 | Phase: Alpha Week 1

+

Date: November 2025 | Phase: Alpha Sprint 1


@@ -20,7 +20,7 @@

Situation

Information: -

The working analogy: "A Which? Best Buyers Guide for medtech devices" - empowering procurement professionals and clinicians with information to buy products based on holistic value, not just price.

+

The working analogy: "A Which? Best Buyers Guide for medtech devices, or Trustpilot for procurement teams".

Problem statement

@@ -168,28 +168,31 @@

Value-based procurement framework

Initial prototype features

-

The first prototype (built Week 1) demonstrated Must Have scope with two core interfaces:

+

The first prototype (built in Sprint 1) demonstrated how we would test some of our initial assumptions:

1. Product comparison interface

  • Search and browse MHRA-registered products
  • Product cards showing regulatory status (NICE, ODEP, DTAC)
  • Select up to 3 products for side-by-side comparison
  • -
  • Comparison grid showing VBP domain scores
  • Links to PIM for technical specifications
  • Trust evaluation summaries
-

2. Supplier submission portal

+ + +

2. Trust evaluation sharing flow

  • Multi-step form for product registration
  • -
  • MHRA registration capture (required)
  • +
  • Regulatory evidence sections (NICE, ODEP, DTAC)
  • VBP domain evidence upload
  • Trust evaluation submission
  • Draft save functionality
+ +

Initial design decisions

@@ -226,24 +229,13 @@

Initial design decisions

-
- -

Proposed success metrics

- -

At ground zero, we defined these metrics to validate our assumptions:

- -
    -
  • 5+ suppliers submit standardised data for one device category
  • -
  • 8/10 procurement professionals find comparison tool more useful than current methods
  • -
  • Users can find and understand evidence in under 5 minutes
  • -
  • Prototype reduces perceived time-to-decision by 20%+ in testing
  • -
+

What we didn't know yet

-

These questions remained open at ground zero, to be answered through user research:

+

These questions remained open, to be answered through user research:

  • How do procurement teams actually build confidence in product decisions?
  • @@ -264,24 +256,7 @@

    We had not yet validated whether users wanted to read documents or talk to peers. This would prove to be a critical finding that reshaped the entire service direction.

-
- -

Next steps from ground zero

- -
    -
  1. User research: Interview procurement teams and clinicians to validate assumptions
  2. -
  3. Supplier engagement: Test submission portal with 3-5 medtech suppliers
  4. -
  5. Prototype testing: Usability sessions with comparison interface
  6. -
  7. Assumption validation: Prioritise testing riskiest assumptions first
  8. -
  9. Iteration: Update prototype based on findings before Week 4
  10. -
- -
- -
- Information: -

Document purpose: This design history captures the starting context for Compass alpha. It serves as a baseline to compare against later iterations, demonstrating how user research and testing changed our understanding and approach.

-
+ diff --git a/app/views/design-histories/v1.html b/app/views/design-histories/v1.html index e7be603..2e26c1e 100644 --- a/app/views/design-histories/v1.html +++ b/app/views/design-histories/v1.html @@ -17,7 +17,7 @@ service name is needed. You can also reuse this variable within the

, where they are the same. --> -{% set pageName = "Home" %} +{% set pageName = "Design history 2" %} @@ -31,11 +31,11 @@
-

Leading with trust adoption visibility on product pages

+

Peer-to-peer engagement

-

We structured product pages to prioritise peer to peer information over supplier marketing content, making trust adoption the most critical aspect.

+

We structured product pages to prioritise peer-to-peer information, making trust adoption the most critical aspect.

-

Last updated: January 2026

+

Date: December 2025 | Phase: Alpha Sprint 2


@@ -45,66 +45,115 @@

The problem

Our initial product page design followed a conventional pattern: supplier branding and marketing content at the top, technical specifications prominent, and trust evaluations buried lower in the page as supporting content. This didn't reflect how users actually make procurement decisions.

-

Key research findings that drove this change:

- -
    -
  • Nearly all users expressed that knowing what other trusts are procuring is crucial information for their shortlisting decisions
  • -
  • Users want to build confidence in their procurement by learning what has worked for other trusts—not just product performance, but how the trust evaluated it, made a business case, and what factors they considered important
  • -
  • Some users said they would always prefer to talk to someone at another trust than rely on written information alone
  • -
  • The value isn't necessarily about saving time or avoiding trials, but about building confidence that a product is a viable option for their trust
  • -
+ -

Design solution

+ + + +

Key insights

+ +

Peer intelligence is valued but currently ad hoc

+
    +
  • Nearly all users want to know what other Trusts are procuring and using
  • +
  • Current mechanisms are ad hoc and based on informal personal networks
  • +
+ +

Users want context beyond product performance

+
    +
  • Users want to understand not just how well a product performed, but how the evaluation was conducted
  • +
  • Interest in how other Trusts built business cases and what factors they considered important
  • +
+ +

Value is about confidence, not time-saving

+
    +
  • The primary value isn't avoiding trials or saving time
  • +
  • It's about building confidence that a product is a viable option for their Trust
  • +
  • Most feedback relates to the shortlisting stage where users consider which products and suppliers to explore further
  • +
+ +

Willingness to share

+
    +
  • Almost no users saw any issue with sharing their procurement evaluations with other Trusts
  • +
+ +

Preference for conversation

+
    +
  • Some users said they would always prefer talking to someone at another Trust over relying on written information alone
  • +
+ +
+ +

User needs

+ +

1. Building procurement confidence through peer learning

+

Users need to learn what has worked for other Trusts. The information required varies by Trust, product category and individual, but broadly includes:

+
    +
  • How the product has worked for the Trust, and what issues they found
  • +
  • How the supplier has worked for the Trust, and what issues they found
  • +
+ +

2. Knowing who to contact

+

Users need to identify which Trusts to talk to and find contact details for people in those Trusts:

+
    +
  • Whether another Trust has procured or is using a product is key information for choosing who to contact
  • +
  • Users will also judge how helpful a Trust might be based on their own knowledge
  • +
+ +
+ +

Service vision

+ +

Core functionality

+
    +
  • Show which Trusts are using and procuring which medtech
  • +
  • Enable users to contact people in those Trusts
  • +
+ +

Evaluation visibility

+
    +
  • Show a broad range of evaluation types from Trusts, associated with products
  • +
  • This might include large clinical trials over several months, or a simple one-pager from a clinician trying something out
  • +
  • There will not be consistency (initially) across documents users make available
  • +
  • We could explore showing key information alongside evaluations (e.g. did the Trust end up procuring or excluding this product)
  • +
  • This contextual information could be gathered by asking users a few questions when they share a document
  • +
+ +

Trusted sources beyond Trusts

+

Some evaluations may come from other trusted bodies our users rely on:

+
    +
  • NHS Supply Chain
  • +
  • NHS-led initiatives like GIRFT
  • +
  • External NHS-adjacent partners like ODEP
  • +
  • Any body our users trust that uses clinicians to evaluate devices
  • +
+ +

Informal feedback (potential)

+
    +
  • Potentially show user feedback not captured in formal documents (e.g. "I had supply chain issues with this supplier")
  • +
  • Star ratings may not be the right design choice – this needs further exploration
  • +
+ + + + +

Design solution

We fundamentally structured the product page hierarchy to lead with peer information.

-
-
-

Previous structure

-
    -
  1. Product name and supplier branding
  2. -
  3. Product overview and marketing content
  4. -
  5. Technical specifications (expanded)
  6. -
  7. Clinical outcomes
  8. -
  9. Trust evaluations (buried at position 5+)
  10. -
-
-
+
-
-
-

Revised structure

-
    -
  1. Product name and essential identifiers
  2. - -
  3. NHS trusts using this product (new hero section)
  4. -
  5. Evaluation cards with peer contact options
  6. -
  7. Technical specifications (collapsed)
  8. -
  9. Clinical outcomes (collapsed)
  10. -
-
-
+
-

The trust adoption section

- -

The new hero section prominently displays:

- -
    -
  • Total trust count: "XX NHS trusts have evaluated this product"
  • -
  • Procurement outcomes breakdown: "9 procured | 2 under review | 1 excluded"
  • -
  • Expandable list of all trusts organised by outcome
  • -
  • Most recent evaluation date to indicate currency
  • -
+

Featured contact cards

-

Each trust evaluation now includes a contact card showing:

- +
  • Named contact willing to discuss the evaluation
  • Their role and trust
  • @@ -114,7 +163,7 @@

    Featured contact cards

    Embracing evaluation variety

    -

    Rather than forcing standardisation, we embraced the reality that evaluations range from formal clinical trials to quick desk reviews. Each evaluation card now displays:

    +

    Each evaluation card now displays:

    • Evaluation type badge: Clinical trial Pilot study Usage report Quick review
    • @@ -129,42 +178,11 @@

      What we removed

      How it tested

      -

      We tested the restructured product pages with NHS procurement professionals and clinical leads:

      +

      User research here

      -
        -
      • All participants immediately noticed and engaged with the trust adoption section—it was the first thing they looked at after the product name
      • -
      • Users described feeling more confident about shortlisting when they could see "8 trusts are using this"
      • -
      • The evaluation type badges helped users quickly identify which evaluations were most relevant to their situation ("I'd want to talk to another teaching hospital")
      • -
      • Featured contact cards generated strong positive reactions; users said they would "definitely" use the contact option
      • -
      • The "how they evaluated" section addressed a previously unmet need—users wanted to understand not just outcomes but methodology
      • -
      • Including excluded evaluations with context was praised for transparency: "This is honest—I trust it more because you're showing me the ones that didn't work too"
      • -
      - -
      - Information: -

      Concerns raised during testing:

      -
        -
      • Some users questioned whether contacts would actually respond (to be tested in live service)
      • -
      • A few users wanted more filtering options to find similar trusts more quickly
      • -
      -
      - -

      Next steps

      - -

      Based on testing, we will:

      - -
        -
      1. Proceed with trust adoption visibility as the primary value proposition for product pages
      2. -
      3. Add filtering by trust type and evaluation type to help users find relevant peers faster
      4. -
      5. Test the contact facilitation flow in the next round to validate that users will actually initiate peer conversations when given the option
      6. -
      7. Explore whether trusts would prefer direct contact or a facilitated introduction through Compass
      8. -
      - -
      + -

      - Tags: User research, Product design, Peer intelligence, NHS trusts, Alpha testing -

      +

diff --git a/app/views/design-histories/v2.html b/app/views/design-histories/v2.html index ccf5a44..299343b 100644 --- a/app/views/design-histories/v2.html +++ b/app/views/design-histories/v2.html @@ -17,7 +17,7 @@ service name is needed. You can also reuse this variable within the

, where they are the same. --> -{% set pageName = "Home" %} +{% set pageName = "Design history 3" %} @@ -30,40 +30,28 @@
-

Designing the dual pathway approach for trust evaluation submission

+

Onboarding flow and homepage iterations

-

We explored how to accommodate different trust maturity levels through two distinct routes for submitting evaluations to the platform.

+

We explored how users could submit their own evaluations to the platform.

-

Last updated: 14 Jan 2026

+

Date: January 2026 | Phase: Alpha Sprint 3


The problem

-

During early discovery, we identified a tension in how NHS trusts approach procurement evaluations:

-

Some trusts have well-established evaluation processes and produce comprehensive assessment documents as part of their normal workflow. These trusts wanted a quick way to share existing work without duplicating effort.

- -

Other trusts wanted guidance and structure—either because they were less mature in their procurement processes or because they wanted to ensure their evaluations were comprehensive and comparable.

- -

A one-size-fits-all approach would either be too burdensome for trusts with existing evaluations (requiring them to re-enter data they've already documented) or too unstructured for trusts wanting guidance (leaving them without a framework).

- -
- Information: -

"So there will not be any consistency (initially) across the documents that users make available to us. This might be a large clinical trial over some months, or just a one-pager showing the result of one clinician sitting down to try something out."

-
- -

We needed a model that embraced this variety rather than forcing artificial standardisation that would discourage participation.

- -

Design solution

+

We explored a simple flow which would enable procurement professionals to submit their own evaluations. This would potentially save time for trusts looking to share their experiences with products, and help populate the platform with more peer-to-peer intelligence.

+

-

We designed a dual pathway approach with two distinct routes:

+
-
-
-
-

Route A: Quick document upload

+
+ + +

Quick document upload

Purpose: Share existing evaluations quickly with minimal data entry

User journey:

    @@ -72,33 +60,18 @@

    Route A: Quick document upload

  1. Answer 4 quick questions about the evaluation
  2. Optionally provide contact details for peer discussions
-

Time to complete: 5-10 minutes

-

Best for: Trusts with existing evaluation processes who want to share findings and access others' work without restructuring their current approach

-
-
-
-
-
-
-

Route B: Structured assessment

-

Purpose: Comprehensive product evaluation using a standardised framework

-

User journey:

-
    -
  1. Select product and begin structured evaluation
  2. -
  3. Complete assessment sections (clinical value, safety, integration, sustainability)
  4. -
  5. Add supporting evidence and documentation
  6. -
  7. Review and submit
  8. -
-

Time to complete: 30-45 minutes

-

Best for: Trusts wanting to conduct thorough evaluations using a standardised framework, or those seeking detailed product comparisons

+ +

Potential needs met: Trusts with existing evaluation processes who want to share findings and access others' work without restructuring their current approach

+ +
-
-
+ +

Key design decisions

-

Both routes feed the same repository

+

Regardless of which pathway trusts choose, all product intelligence feeds into a single shared repository. Route A users benefit from Route B's structured data, and Route B users can access Route A's shared documents.

Route A captures essential metadata without burden

@@ -118,55 +91,26 @@

Multi-product evaluations supported

Visual design

-

The pathway selection screen uses clear iconography and time estimates to help users self-select:

- -
    -
  • Route A shown with a document upload icon and "5-10 minutes"
  • -
  • Route B shown with a structured form icon and "30-45 minutes"
  • -
  • Guidance text explains who each route is best suited for
  • -
  • Users can switch between routes if they change their mind
  • -
+

How it tested

-

We tested the dual pathway concept with procurement professionals across different trust sizes and maturity levels:

+

Summary:

    -
  • Users intuitively understood the difference between routes and could self-select appropriately
  • -
  • Trusts with existing evaluation documents appreciated Route A's minimal burden: "I've already done the work—I just want to share it"
  • -
  • Users liked that Route A still captured key metadata: "This asks the right questions without being onerous"
  • -
  • The discussion topics feature was well-received: "This means people won't just contact me about anything—they'll know what I can help with"
  • -
  • Some users said they might start with Route A for speed, then return to complete Route B for important products
  • -
  • The convergence into a single repository was important: "I don't want to have to check two different places"
  • +
  • Key finding
  • +
  • Key finding
  • +
  • Key finding
  • +
  • Key finding
-
- Information: -

Questions and concerns raised:

-
    -
  • How do we ensure Route A evaluations are high enough quality to be useful? (We decided trust-level curation was preferable to platform-level gatekeeping)
  • -
  • Will Route B completion rates suffer if Route A is available? (To be monitored in beta)
  • -
  • Should we prompt Route A users to consider Route B for products they care most about? (Potentially, but not initially)
  • -
-
- -

Next steps

+ -

Based on testing, we will:

+ -
    -
  1. Proceed with the dual pathway model for alpha
  2. -
  3. Monitor usage patterns to understand the split between routes
  4. -
  5. Iterate on Route A's metadata questions based on what proves most useful for peer matching
  6. -
  7. Develop guidance materials for Route B to support trusts new to structured evaluation
  8. -
  9. Test whether users successfully navigate between routes and whether the "meeting users where they are" principle translates to actual adoption
  10. -
+ -
- -

- Tags: User research, Service design, NHS trusts, Evaluation submission, Dual pathway, Alpha testing -

+
{% endblock %} \ No newline at end of file diff --git a/app/views/design-histories/v3.html b/app/views/design-histories/v3.html index 79ba490..80370d5 100644 --- a/app/views/design-histories/v3.html +++ b/app/views/design-histories/v3.html @@ -1,7 +1,7 @@ {% extends 'layout-hero-histories.html' %} -{% set pageName = "Design history" %} +{% set pageName = "Design history 4" %} {% block beforeContent %} {% endblock %} @@ -10,8 +10,8 @@
-

A record of design decisions and iterations on the Compass alpha prototype.

-

Last updated: January 2026

+

Information-seeking types

+

Date: February 2026 | Phase: Alpha Sprint 5