A record of design decisions and iterations on the Compass alpha prototype.

Last updated: January 2026


Information architecture


Designing search for four modes of information seeking

Based on Donna Spencer's framework


The problem

Our original search page offered only a data sources card and failed to support the different ways NHS procurement users seek information. Some users know exactly what they want, others are exploring options, some don't know what questions to ask, and others are trying to find something they've seen before.


Design solution

We restructured the search page to support all four information-seeking modes:

- Known-item seeking: Prominent search box with manufacturer/product/GMDN support, plus 'Recently searched across NHS' quick links
- Exploratory seeking: Six category cards showing product counts AND trust evaluation counts as social proof
- Don't know what you need to know: Educational cards ('What makes good evidence?', 'Questions to ask other trusts', 'Value-based procurement explained')
- Re-finding: 'Your recent activity' section showing recently viewed products and saved comparisons when signed in
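As a sketch, the one-page-serves-four-modes idea can be pictured as a simple lookup from seeking mode to page sections. The mode keys and section labels below are illustrative assumptions, not the production component names:

```typescript
// Hypothetical sketch of the search page serving all four information-seeking
// modes (after Donna Spencer's framework). Keys and labels are assumptions.
type SeekingMode = 'known-item' | 'exploratory' | 'dont-know' | 're-finding';

const searchPageSections: Record<SeekingMode, string[]> = {
  // Users who know exactly what they want
  'known-item': ['Search box (manufacturer/product/GMDN)', 'Recently searched across NHS'],
  // Users exploring options
  'exploratory': ['Category cards with product and evaluation counts'],
  // Users who don't know what questions to ask
  'dont-know': ['Educational cards on evidence and value-based procurement'],
  // Users re-finding something seen before
  're-finding': ['Your recent activity (recently viewed, saved comparisons)'],
};
```

The point of the lookup is that every mode has a visible entry point on one page, so users never have to self-identify before they can start.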


Key insight

Procurement users often think they're doing known-item search ('I need product X') but actually need exploratory support ('what are trusts finding works?') or guidance ('what evidence should I even care about?'). The page now supports all paths without forcing users to self-identify.


Category results page for exploratory users


The problem

When users selected a category to explore (e.g. 'Wound care'), we needed a results page that supported discovery rather than just listing products.


Design solution

We designed the category results page with:

- Category-level stats: '183 products from 47 suppliers · 56 NHS trust evaluations · 23 trusts available to contact'
- Filters that teach: Product types, evidence types, and peer availability help users learn the domain as they narrow down
- Default sort by 'Most trusts using': Social proof first for discovery-oriented users
- Peer contact indicator: '5 trusts willing to discuss lessons learned' on each product card
- Comparison checkboxes: Sticky selection bar appears when products are selected
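A minimal sketch of the default 'Most trusts using' ordering, assuming a hypothetical ProductCard shape (the field names are illustrative, not the real data model):

```typescript
// Hypothetical product card shape for category results; field names are
// assumptions for illustration only.
interface ProductCard {
  name: string;
  trustsUsing: number;        // trusts with a recorded evaluation
  contactableTrusts: number;  // trusts willing to discuss lessons learned
}

// Sort descending by adoption so social proof surfaces first; ties are
// broken by how many peer contacts are available.
function sortByMostTrustsUsing(products: ProductCard[]): ProductCard[] {
  return [...products].sort(
    (a, b) => b.trustsUsing - a.trustsUsing || b.contactableTrusts - a.contactableTrusts,
  );
}
```

Sorting a copy rather than in place keeps the unfiltered list intact when users switch back to other sort options.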


Rationale

Exploratory users' needs change as they discover. The page supports the journey from 'what's out there?' through filtering and scanning trust adoption to selecting products for comparison.


Product pages


Leading with trust adoption visibility


The problem

User research revealed that NHS procurement professionals rely heavily on informal personal networks to discover what other trusts are procuring. Our initial product page design followed a conventional pattern: supplier branding and marketing content at the top, with trust evaluations buried lower as supporting content. This didn't reflect how users actually make decisions.


Design solution

We restructured product pages to prioritise peer intelligence:

- Trust adoption section moved to hero position: '12 NHS trusts have evaluated this product' with outcomes (9 procured, 2 under review, 1 excluded)
- Featured contact cards: 'Sarah is happy to discuss: implementation, clinical outcomes, supplier experience'
- 'How they evaluated' sections: Shows process, business case approach, and decision factors
- Evaluation type badges: Clinical trial, pilot study, usage report, quick review
- Technical details collapsed: Specifications and cost analysis in expandable accordions


What we removed

Star rating reviews section: user research showed users prefer talking to people over reading reviews. The evaluation cards with contacts serve this need better.


Iterating based on user feedback

Wound care product page iteration


User feedback

- 'The first thing I would see is a chart across all these responses and then click on it if I want more information'
- 'Charts summarising the information based on the areas and people I know'
- 'It would be useful to click on these Trusts'
- 'Maybe a breakdown of the total cost of ownership would be helpful'
- 'Trial is better than evaluation'


Design changes

- Charts first: Three clickable summary charts immediately after the product header (trial outcomes donut, regional bar chart, contacts ring)
- Charts by region: Regional breakdown with clickable cards for North West, London, Midlands, South East
- Clickable trusts: Every trust card is now a full clickable link to a trust detail page
- Total cost of ownership breakdown: New section with cost table and projected annual savings by trust size
- 'Trial' terminology: Changed 'evaluation' to 'trial' throughout (e.g. '18 NHS trusts have trialled this product')


Retained from previous design

Supplier contact card in sidebar, technical specs in accordion, cost details in accordion, contact details in cards with discussion topics.


User research findings

Key insights from procurement research


What users told us

- Nearly all users want to know what other trusts are procuring and using, but their current mechanism is ad hoc and based on informal personal networks
- Users want to know not just about the product itself, but how another trust did the evaluation, made a business case, and what factors they considered important
- The value is not primarily about saving time or avoiding trials, but about building confidence that a product is a viable option
- Most of what we heard was about the 'shortlisting' stage of procurement
- Users would always prefer to talk to someone in another trust than to rely on written information alone
- Nearly no users thought there would be any issue with sharing their evaluations


Implications for design

- Trust adoption as hero: Make 'which trusts are using this' the first thing users see
- Peer contact facilitation: Enable and encourage trust-to-trust contact
- 'Lessons learned' framing: Frame contact around lessons learned rather than technical information
- Show discussion topics: Increase the likelihood of peer contact by showing what each person will discuss


Final iteration gaps identified


Gaps to address

- Value for money dimensions: Procurement measures value against social value, total product life, clinical results, patient outcomes — not just price
- Safety prominence for clinicians: Clinical staff prioritise safety and co-production more than procurement staff
- Support for interpreting information: Teams are often unable to understand what data means without explanation
- Trust capacity context: Show 500 vs 200 beds to help users assess relevance to their situation


Testing strategy


Critical assumptions to validate


Assumption 1: Users will actually contact peers

- Why critical: The core value proposition depends on peer contact happening. If users won't make contact, the service fails.
- Success criteria: 75% of users say they would use the peer contact feature
- If wrong: Major pivot needed — the service may not be viable


Assumption 2: Trust adoption visibility builds confidence

- Why critical: Showing '8 trusts using this' should create confidence for shortlisting decisions.
- Success criteria: Users cite trust adoption as a confidence factor in testing
- If wrong: Rethink the value proposition — we may need different confidence signals


Assumption 3: Users accept evaluation variety

- Why critical: Evidence ranges from clinical trials to quick desk reviews. If users demand standardisation, the passporting model won't work.
- Success criteria: Users find variety acceptable if context is clear
- If wrong: Route B (structured assessment) becomes mandatory


Assumption 4: 'How they evaluated' metadata helps

- Why critical: Users wanted to understand how trusts conducted their evaluations.
- Success criteria: Users reference process information when explaining relevance


Assumption 5: Discussion topics increase contact likelihood

- Why critical: We ask trusts to specify what they'll discuss in Route A.
- Success criteria: 75% say topics make them more likely to contact




Service design


Dual pathway approach for evaluation submission


The problem

We identified a tension in how NHS trusts approach evaluations. Some have well-established processes and produce comprehensive documents; they want to share existing work without duplicating effort. Others want guidance and structure. A one-size-fits-all approach would be too burdensome for mature trusts or too unstructured for the rest.


Design solution

We designed a dual pathway approach:


Route A: Quick document upload

- Upload an existing evaluation document
- Add basic metadata (product name, supplier, evaluation date)
- Answer 4 quick questions about the evaluation
- Optionally provide contact details for peer discussions

Time to complete: 5–10 minutes


Route B: Structured assessment

- Select a product and begin a structured evaluation
- Complete assessment sections (clinical value, safety, integration, sustainability)
- Add supporting evidence and documentation

Time to complete: 30–45 minutes


Key design decisions

- Both routes feed the same repository: Route A users benefit from Route B's structured data, and vice versa
- Route A captures essential metadata without burden: Evaluation type, procurement outcome, brief process description, contact willingness
- Discussion topics help peer matching: If users consent to contact, we ask what topics they're comfortable discussing
- Multi-product evaluations supported: One document can link to multiple product evaluations
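One way to picture the two routes converging on a single repository is as a discriminated union over one record type. Every name below is an illustrative assumption, not the actual schema:

```typescript
// Hypothetical sketch: both submission routes feed one repository entry type.
// All field and type names are assumptions for illustration.
type EvaluationType = 'clinical-trial' | 'pilot-study' | 'usage-report' | 'quick-review';
type Outcome = 'procured' | 'not-procured' | 'under-review';

interface RouteASubmission {
  route: 'A';                   // quick document upload
  products: string[];           // one document may cover several products
  evaluationType: EvaluationType;
  outcome: Outcome;
  processDescription?: string;  // optional brief description, 300-character limit
  discussionTopics?: string[];  // only present if contact consent is given
}

interface RouteBSubmission {
  route: 'B';                   // structured assessment
  product: string;
  sections: Record<'clinicalValue' | 'safety' | 'integration' | 'sustainability', string>;
}

type RepositoryEntry = RouteASubmission | RouteBSubmission;

// Enforce the brief-description limit on Route A submissions.
function withinDescriptionLimit(entry: RepositoryEntry): boolean {
  return entry.route !== 'A' || (entry.processDescription ?? '').length <= 300;
}
```

Modelling the routes as one union type is a direct expression of the 'both routes feed the same repository' decision: consumers query one collection and the route tag only affects how much structure each entry carries.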


How it tested

- Users intuitively understood the difference between the routes and could self-select appropriately
- Trusts with existing documents appreciated Route A's minimal burden
- The discussion topics feature was well received: 'People won't just contact me about anything — they'll know what I can help with'
- Some users said they might start with Route A for speed, then return to complete Route B for important products


Visual design


Badge system for evaluation metadata


Purpose

Badges provide at-a-glance context for evaluations without requiring users to open documents. They help users quickly assess relevance and quality.


Badge categories

Procurement outcomes:

- Procured — Positive outcome
- Under review — Pending decision
- Excluded — Not proceeding (with reason)

Evaluation types:

- Clinical trial — Rigorous formal trial
- Pilot study — Focused testing
- Usage report — Real-world tracking
- Quick review — Desk review/supplier meeting

Trust types:

- Teaching hospital
- Specialist trust
- District hospital

Trusted sources:

- NHS Supply Chain
- GIRFT
- ODEP
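A small sketch of how the procurement outcome badges could be driven from data. The label text mirrors the badge categories above; the tone values and function names are assumptions, not actual NHS Design System tokens:

```typescript
// Hypothetical mapping from procurement outcomes to badge labels and tones.
// Tone values are illustrative assumptions, not design system tokens.
type ProcurementOutcome = 'procured' | 'under-review' | 'excluded';

const outcomeBadge: Record<ProcurementOutcome, { label: string; tone: string }> = {
  'procured':     { label: 'Procured',     tone: 'green' }, // positive outcome
  'under-review': { label: 'Under review', tone: 'blue' },  // pending decision
  'excluded':     { label: 'Excluded',     tone: 'red' },   // not proceeding
};

// Excluded badges carry a reason, so honest negative signals stay useful.
function badgeText(outcome: ProcurementOutcome, reason?: string): string {
  const { label } = outcomeBadge[outcome];
  return outcome === 'excluded' && reason ? `${label} (${reason})` : label;
}
```

Attaching the reason only to 'Excluded' keeps the badge row compact while still supporting the 'not proceeding (with reason)' behaviour described above.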


Icon design for landing page


Icons created

Custom SVG icons designed for three key value proposition cards:

- Centralised evidence: Document with connecting nodes
- Real world evaluations: Clipboard with checkmark and hospital cross
- Peer contact facilitation: Two people with connecting speech elements


Design specifications

- Colour: NHS Blue (#005eb8)
- Stroke weight: 2px
- Size: 48×48 pixels
- Style: Consistent with NHS Design System aesthetics
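The shared specifications can be centralised so every icon stays consistent. This is a minimal sketch assuming the icons are emitted as SVG strings; the constant and function names are hypothetical:

```typescript
// Hypothetical sketch: one source of truth for the icon specifications above.
// ICON_SPEC and renderIcon are illustrative names, not real project code.
const ICON_SPEC = {
  size: 48,           // 48×48 pixel canvas
  stroke: '#005eb8',  // NHS Blue
  strokeWidth: 2,     // 2px stroke weight
} as const;

// Wrap an icon's path data in a consistently specified <svg> element.
function renderIcon(pathData: string): string {
  const { size, stroke, strokeWidth } = ICON_SPEC;
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}" ` +
    `viewBox="0 0 ${size} ${size}" fill="none" stroke="${stroke}" ` +
    `stroke-width="${strokeWidth}">${pathData}</svg>`;
}
```

Centralising the spec means a future palette or sizing change touches one constant rather than three hand-edited SVG files.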


Summary of design principles

- Lead with peer intelligence, not supplier marketing: Trust adoption and peer contacts should be the first things users see
- Enable conversations, not just documents: Users prefer talking to people over reading; facilitate that
- Embrace evaluation variety: Don't force standardisation; show context so users can judge relevance
- Support multiple information-seeking modes: Known-item, exploratory, learning, and re-finding
- Meet users where they are: Dual pathways accommodate different trust maturity levels
- Show honest information: Including excluded products builds platform credibility


Document prepared: January 2026