-
Peer to peer engagement
+
Team workshop
-
We structured product pages to prioritise peer-to-peer information, making trust adoption the most critical aspect.
+
We ran an internal workshop to get the team 'unstuck' and decide what to test first
Date: December 2025 | Phase: Alpha Sprint 2
-
The problem
+
The problem
-
User research revealed that NHS procurement professionals rely heavily on informal personal networks to discover what other trusts are procuring and using. However, this mechanism is ad-hoc and limited to who they happen to know.
+
After an initial proof of concept, we needed team alignment on what we would actually test with users
-
Our initial product page design followed a conventional pattern: supplier branding and marketing content at the top, technical specifications prominent, and trust evaluations buried lower in the page as supporting content. This didn't reflect how users actually make procurement decisions.
+
Our initial designs had some good ideas, namely:
+
+
+ - Data sources and data types
+ - Product comparison interface
+ - VBP scoring system
+ - NHS Design System styling
+
-
Key Insights
+
Things we changed
-
Peer intelligence is valued but currently ad-hoc
+
- Nearly all users want to know what other Trusts are procuring and using
- Current mechanisms are ad-hoc and based on informal personal networks
+
Things we kept or tweaked
+
Users want context beyond product performance
- Users want to understand not just how well a product was evaluated, but how the evaluation was conducted
- Interest in how other Trusts built business cases and what factors they considered important
-
Value is about confidence, not time-saving
-
- - The primary value isn't avoiding trials or saving time
- - It's about building confidence that a product is a viable option for their Trust
- - Most feedback relates to the shortlisting stage where users consider which products and suppliers to explore further
-
-
-
Willingness to share
-
- - Nearly no users saw any issue with sharing their procurement evaluations with other Trusts
-
-
-
Preference for conversation
-
- - Some users said they would always prefer talking to someone at another Trust over relying on written information alone
-
-
-
-
User Needs
-
-
1. Building procurement confidence through peer learning
-
Users need to learn what has worked for other Trusts. The information required varies by Trust, product category and individual, but broadly includes:
-
- - How has the product worked for the Trust, and what issues did they find
- - How has the supplier worked for the Trust, and what issues did they find
-
-
-
2. Knowing who to contact
-
Users need to identify which Trusts to talk to and find contact details for people in those Trusts:
-
- - Whether another Trust has procured or is using a product is key information for choosing who to contact
- - Users will also judge how helpful a Trust might be based on their own knowledge
-
-
-
-
Service Vision
-
-
Core functionality
-
- - Show which Trusts are using and procuring which medtech
- - Enable users to contact people in those Trusts
-
-
-
Evaluation visibility
-
- - Show a broad range of evaluation types from Trusts, associated with products
- - This might include large clinical trials over several months, or a simple one-pager from a clinician trying something out
- - There will not be consistency (initially) across documents users make available
- - We could explore showing key information alongside evaluations (e.g. did the Trust end up procuring or excluding this product)
- - This contextual information could be gathered by asking users a few questions when they share a document
-
-
-
Trusted sources beyond Trusts
-
Some evaluations may come from other trusted bodies our users rely on:
-
- - NHS Supply Chain
- - NHS-led initiatives like GIRFT
- - External NHS-adjacent partners like ODEP
- - Any body using clinicians to evaluate devices that our users trust
-
-
-
Informal feedback (potential)
-
- - Potentially show user feedback not captured in formal documents (e.g. "I had supply chain issues with this supplier")
- - Star ratings may not be the right design choice – this needs further exploration
-
+
-
Design solution
-
-
We fundamentally restructured the product page hierarchy to lead with peer information.
-
-
-
-

-
-
-

-
-
@@ -172,9 +104,7 @@
Embracing evaluation variety
"How they evaluated" section describing process, business case approach, and key decision factors
-
What we removed
-
-
We removed the star rating review section entirely. Research showed users want conversations and context, not simplified ratings. The nuance of "it worked for us because..." can't be captured in stars.
+
How it tested
diff --git a/app/views/design-histories/v2.html b/app/views/design-histories/v2.html
index 299343b..5101d9c 100644
--- a/app/views/design-histories/v2.html
+++ b/app/views/design-histories/v2.html
@@ -32,7 +32,7 @@
Onboarding flow and homepage iterations
-
We explored how we could accommodate submitting evaluations to the platform.
+
We explored how we could accommodate submitting evaluations to the platform in user-friendly formats
Date: January 2026 | Phase: Alpha Sprint 3
@@ -41,7 +41,7 @@
Onboarding flow and homepage iterations
The problem
-
We explored a simple flow which would enable procurement professionals to submit their own evaluations. This would potentially save time
+
We initially explored a simple flow which would enable procurement professionals to submit their own evaluations. This would potentially save time
for trusts looking to share their experiences with products, and help populate the platform with more peer-to-peer intelligence.
@@ -51,7 +51,7 @@
The problem
-
Quick document upload
+
Document upload
Purpose: Share existing evaluations quickly with minimal data entry
User journey:
@@ -72,12 +72,12 @@ Quick document upload
Key design decisions
- Regardless of which pathway trusts choose, all product intelligence feeds into a single shared repository. Route A users benefit from Route B's structured data, and Route B users can access Route A's shared documents.
+ Information feeds into a single shared repository, capturing essential metadata without burden.
- Route A captures essential metadata without burden
- Rather than asking trusts to re-enter information already in their documents, we ask four focused questions:
+
+ Rather than asking trusts to re-enter information already in their documents, we would ask:
- - What type of evaluation was this? (clinical trial, pilot, usage report, quick review)
+ - What type of evaluation was this?
- Did you procure this product? (yes, no, under review)
- Brief description of your evaluation process (optional, 300 characters)
- Are you willing to be contacted by other trusts?
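The Route A questions above could be captured with a short form. A hypothetical sketch using NHS.UK frontend form classes (the field names, IDs, and action URL are illustrative assumptions, not the prototype's actual markup):

```html
<!-- Hypothetical sketch of the Route A metadata questions.
     Class names follow NHS.UK frontend conventions; field names,
     IDs, and the action URL are illustrative, not from the prototype. -->
<form method="post" action="/evaluations/upload">
  <fieldset class="nhsuk-fieldset">
    <legend class="nhsuk-fieldset__legend">What type of evaluation was this?</legend>
    <div class="nhsuk-radios">
      <div class="nhsuk-radios__item">
        <input class="nhsuk-radios__input" id="type-trial" name="evaluation-type"
               type="radio" value="clinical-trial">
        <label class="nhsuk-label nhsuk-radios__label" for="type-trial">Clinical trial</label>
      </div>
      <!-- pilot, usage report, and quick review items follow the same pattern -->
    </div>
  </fieldset>

  <fieldset class="nhsuk-fieldset">
    <legend class="nhsuk-fieldset__legend">Did you procure this product?</legend>
    <!-- radios: yes / no / under review -->
  </fieldset>

  <label class="nhsuk-label" for="process">
    Brief description of your evaluation process (optional)
  </label>
  <textarea class="nhsuk-textarea" id="process" name="process"
            maxlength="300" rows="3"></textarea>

  <fieldset class="nhsuk-fieldset">
    <legend class="nhsuk-fieldset__legend">Are you willing to be contacted by other trusts?</legend>
    <!-- radios: yes / no -->
  </fieldset>
</form>
```

The 300-character limit on the free-text field mirrors the stated constraint, keeping Route A's burden deliberately low.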
@@ -89,7 +89,7 @@ Discussion topics help peer matching
Multi-product evaluations supported
Recognising that trusts often evaluate multiple products in comparative assessments, we designed Route A to handle one document linked to multiple product evaluations—avoiding the need to upload the same PDF repeatedly.
- Visual design
+
diff --git a/app/views/design-histories/v3.html b/app/views/design-histories/v3.html
index 80370d5..410e669 100644
--- a/app/views/design-histories/v3.html
+++ b/app/views/design-histories/v3.html
@@ -22,7 +22,7 @@ Designing search for four modes of information seeki
Based on Donna Spencer's framework
The problem
- Our original search page only had a data sources card and failed to support the different ways NHS procurement users seek information. Some users know exactly what they want, others are exploring options, some don't know what questions to ask, and others are trying to find something they've seen before.
+ Our original search page failed to support the different ways NHS procurement users seek information. Some users know exactly what they want, others are exploring options, some don't know what questions to ask, and others are trying to find something they've seen before.
Design solution
We restructured the search page to support all four information-seeking modes:
@@ -76,33 +76,23 @@ Design solution
- Technical details collapsed: Specifications and cost analysis in expandable accordions
-
What we removed
-
Star rating reviews section: User research showed users prefer talking to people over reading reviews. The evaluation cards with contacts serve this need better.
-
+
-
Iterating based on user feedback
-
Wound care product page iteration
-
-
User feedback
-
- - 'The first thing I would see is a chart across all these responses and then click on it if I want more information'
- - 'Charts summarising the information based on the areas and people I know'
- - 'It would be useful to click on these Trusts'
- - 'Maybe a breakdown of the total cost of ownership would be helpful'
- - 'Trial is better than evaluation'
-
-
-
Design changes
-
- - Charts first: Three clickable summary charts immediately after product header (trial outcomes donut, regional bar chart, contacts ring)
+What else we changed
+
+
+
+ - Charts first: Three clickable summary charts immediately after product header (trial outcomes donut, regional bar chart, contacts ring)
- Charts by region: Regional breakdown with clickable cards for North West, London, Midlands, South East
- Clickable trusts: Every trust card is now a full clickable link to a trust detail page
- Total cost of ownership breakdown: New section with cost table and projected annual savings by trust size
- 'Trial' terminology: Changed 'evaluation' to 'trial' throughout (e.g. '18 NHS trusts have trialled this product')
-
+
+
+
-
Retained from previous design
+
What we kept
Supplier contact card in sidebar, technical specs in accordion, cost details in accordion, contact details in cards with discussion topics.
@@ -112,247 +102,10 @@
User research findings
Key insights from procurement research
-
What users told us
-
- - Nearly all users want to know what other Trusts are procuring and using, but their current mechanism is ad-hoc and based on informal personal networks
- - Users want to know not just about the product itself, but how another Trust did the evaluation, made a business case, and what factors they considered important
- - The value is not primarily about saving time/avoiding trials, but about building confidence that a product is a viable option
- - Most of what we heard was about the 'shortlisting' stage of procurement
- - Users would always prefer to talk to someone in another Trust than to rely on written information alone
- - Nearly no users thought there would be any issue with sharing their evaluations
-
-
-
Implications for design
-
- - Trust adoption as hero: Make 'which trusts are using this' the first thing users see
- - Peer contact facilitation: Enable and encourage trust-to-trust contact
- - 'Lessons learned' framing: Frame contact around lessons learned rather than technical information
- - Show discussion topics: Increase likelihood of peer contact by showing what each person will discuss
-
-
-
-
-
Final iteration gaps identified
-
-
Gaps to address
-
- - Value for money dimensions: Procurement measures value against social value, total product life, clinical results, patient outcomes — not just price
- - Safety prominence for clinicians: Clinical staff prioritise safety and co-production more than procurement staff
- - Support for interpreting information: Teams are often unable to understand what data means without explanation
- - Trust capacity context: Show 500 vs 200 beds to help users assess relevance to their situation
-
-
-
-
-
-
Testing strategy
-
-
Critical assumptions to validate
-
-
Assumption 1: Users will actually contact peers
-
-
-
- Why critical
- - Core value proposition depends on peer contact happening. If users won't contact, the service fails.
-
-
-
- Success criteria
 - 75% must say they would use the peer contact feature
-
-
-
- If wrong
- - Major pivot needed — service may not be viable
-
-
-
-
Assumption 2: Trust adoption visibility builds confidence
-
-
-
- Why critical
- - Showing '8 trusts using this' should create confidence for shortlisting decisions.
-
-
-
- Success criteria
- - Users cite trust adoption as confidence factor in testing
-
-
-
- If wrong
- - Rethink value proposition — may need different confidence signals
-
-
-
-
Assumption 3: Users accept evaluation variety
-
-
-
- Why critical
- - Evidence ranges from clinical trials to quick desk reviews. If users demand standardisation, the passporting model won't work.
-
-
-
- Success criteria
- - Users find variety acceptable if context is clear
-
-
-
- If wrong
- - Route B (structured assessment) becomes mandatory
-
-
-
-
Assumption 4: 'How they evaluated' metadata helps
-
-
-
- Why critical
- - Users wanted to understand how trusts conducted evaluations.
-
-
-
- Success criteria
- - Users reference process information when explaining relevance
-
-
-
-
Assumption 5: Discussion topics increase contact likelihood
-
-
-
- Why critical
- - We ask trusts to specify what they'll discuss in Route A.
-
-
-
- Success criteria
- - 75% say topics make them more likely to contact
-
-
-
-
-
-
-
Service design
-
-
Dual pathway approach for evaluation submission
-
-
The problem
-
We identified a tension in how NHS trusts approach evaluations. Some have well-established processes and produce comprehensive documents; they want to share existing work without duplicating effort. Others want guidance and structure. A one-size-fits-all approach would be too burdensome for mature trusts or too unstructured for others.
-
-
Design solution
-
We designed a dual pathway approach:
-
-
-
-
Route A: Quick document upload
-
- - Upload existing evaluation document
- - Add basic metadata (product name, supplier, evaluation date)
- - Answer 4 quick questions about the evaluation
- - Optionally provide contact details for peer discussions
-
-
Time to complete: 5–10 minutes
-
-
-
-
-
-
Route B: Structured assessment
-
- - Select product and begin structured evaluation
- - Complete assessment sections (clinical value, safety, integration, sustainability)
- - Add supporting evidence and documentation
-
-
Time to complete: 30–45 minutes
-
-
-
-
Key design decisions
-
- - Both routes feed the same repository: Route A users benefit from Route B's structured data, and vice versa
- - Route A captures essential metadata without burden: Evaluation type, procurement outcome, brief process description, contact willingness
- - Discussion topics help peer matching: If users consent to contact, we ask what topics they're comfortable discussing
- - Multi-product evaluations supported: One document can link to multiple product evaluations
-
-
-
How it tested
-
- - Users intuitively understood the difference between routes and could self-select appropriately
- - Trusts with existing documents appreciated Route A's minimal burden
- - The discussion topics feature was well-received: 'People won't just contact me about anything — they'll know what I can help with'
- - Some users said they might start with Route A for speed, then return to complete Route B for important products
-
-
-
-
-
-
Visual design
-
-
Badge system for evaluation metadata
-
-
Purpose
-
Badges provide at-a-glance context for evaluations without requiring users to open documents. They help users quickly assess relevance and quality.
-
-
Badge categories
-
-
Procurement outcomes:
-
- - Procured — Positive outcome
- - Under review — Pending decision
- - Excluded — Not proceeding (with reason)
-
-
-
Evaluation types:
-
- - Clinical trial — Rigorous formal trial
- - Pilot study — Focused testing
- - Usage report — Real-world tracking
- - Quick review — Desk review/supplier meeting
-
-
-
Trust types:
-
- - Teaching hospital
- - Specialist trust
- - District hospital
-
-
-
Trusted sources:
-
- - NHS Supply Chain
- - GIRFT
- - ODEP
-
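The badge categories above map naturally onto the NHS.UK frontend tag component. A hypothetical sketch — the colour modifier classes are standard NHS.UK tag variants, but the pairing of colours to meanings is our assumption, not a confirmed design decision:

```html
<!-- Hypothetical badge markup using the NHS.UK frontend tag component.
     Colour-to-meaning pairings are illustrative assumptions. -->
<!-- Procurement outcomes -->
<strong class="nhsuk-tag nhsuk-tag--green">Procured</strong>
<strong class="nhsuk-tag nhsuk-tag--yellow">Under review</strong>
<strong class="nhsuk-tag nhsuk-tag--red">Excluded</strong>

<!-- Evaluation types -->
<strong class="nhsuk-tag nhsuk-tag--blue">Clinical trial</strong>
<strong class="nhsuk-tag nhsuk-tag--grey">Quick review</strong>

<!-- Trust types -->
<strong class="nhsuk-tag nhsuk-tag--white">Teaching hospital</strong>
```

Using one component family keeps the at-a-glance scanning consistent across all four badge categories.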
-
-
-
-
Icon design for landing page
-
-
Icons created
-
Custom SVG icons designed for three key value proposition cards:
-
- - Centralised evidence: Document with connecting nodes
- - Real world evaluations: Clipboard with checkmark and hospital cross
- - Peer contact facilitation: Two people with connecting speech elements
-
-
-
Design specifications
-
- - Colour: NHS Blue (#005eb8)
- - Stroke weight: 2px
- - Size: 48×48 pixels
- - Style: Consistent with NHS Design System aesthetics
-
-
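A minimal sketch of what one such icon (the "centralised evidence" document-with-nodes concept) could look like as inline SVG, following the specifications above — the shapes are placeholders, not the custom-drawn artwork:

```html
<!-- Illustrative 48x48 icon skeleton matching the stated specs:
     NHS Blue (#005eb8), 2px stroke. Shapes are placeholders. -->
<svg xmlns="http://www.w3.org/2000/svg" width="48" height="48"
     viewBox="0 0 48 48" fill="none" stroke="#005eb8" stroke-width="2">
  <rect x="10" y="6" width="22" height="32" rx="2"/>  <!-- document body -->
  <circle cx="38" cy="14" r="4"/>                     <!-- connecting node -->
  <circle cx="38" cy="34" r="4"/>                     <!-- connecting node -->
  <line x1="32" y1="16" x2="34" y2="15"/>             <!-- connector -->
  <line x1="32" y1="30" x2="34" y2="32"/>             <!-- connector -->
</svg>
```

Setting `fill="none"` and the stroke attributes once on the root `<svg>` keeps every shape on the 2px NHS Blue outline style without repeating attributes per element.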
-
-
-
-
Summary of design principles
-
-
- - Lead with peer intelligence, not supplier marketing: Trust adoption and peer contacts should be the first things users see
- - Enable conversations, not just documents: Users prefer talking to people over reading; facilitate that
- - Embrace evaluation variety: Don't force standardisation; show context so users can judge relevance
- - Support multiple information-seeking modes: Known-item, exploratory, learning, and re-finding
- - Meet users where they are: Dual pathways accommodate different trust maturity levels
- - Show honest information: Including excluded products builds platform credibility
-
-
-
+
Findings here
+
-
Document prepared: January 2026
+