AI decision support inside the software doctors already use.
Built for healthcare software teams in NL/EU, embedding explainable, privacy-preserving AI into existing clinical workflows without betting your roadmap on fragile prototypes.
- Workflow-first integration: AI surfaces the right signal at the right moment.
- Trust stack: evaluation, explainability, monitoring, human-in-the-loop.
- Security-aware GenAI: treat agents as hypotheses; reduce injection/poisoning risk.
EU-based. You leave with a clear next step, not a pitch. Built for the EU operating reality (traceability, oversight, cybersecurity).
Workflow Embed Architecture
Where software teams get stuck (even with strong engineers + agents)
Five failure modes, each with a distinct cause. They compound each other if left unaddressed.
"AI works in demos, not in the clinician's day."
Looks like
- Separate dashboards clinicians ignore
- Context-switching tax kills adoption
- Alert fatigue; AI becomes background noise
Fix: embed at the moment of decision, not as a second screen.
"No one can defend outputs."
Looks like
- Inconsistent performance across patient subgroups
- Unclear failure modes: no eval harness
- GenAI confabulation with no safeguards
Fix: build evaluation from day one, defining what counts as working and what breaks.
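What "evaluation from day one" can look like, as a minimal sketch (all names are illustrative, not a real harness): report performance overall and per patient subgroup, so subgroup inconsistency surfaces before a pilot, not during one.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One labelled example: input text plus the expected signal."""
    note: str
    expected_flag: bool

def evaluate(model, cases, subgroups):
    """Report accuracy overall and per patient subgroup, so inconsistent
    performance across subgroups is visible early, not post-deployment."""
    results = {}
    for name, subset in subgroups.items():
        hits = sum(model(c.note) == c.expected_flag for c in subset)
        results[name] = hits / len(subset)
    overall = sum(model(c.note) == c.expected_flag for c in cases) / len(cases)
    return {"overall": overall, **results}
```

The point is not the metric; it is that "what counts as working" is executable before the model exists, so any candidate (rule-based baseline or LLM) is scored the same way.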
"GenAI expands the attack surface."
Looks like
- Prompt injection risk in clinical agent pipelines
- Data poisoning vectors via uncontrolled tool use
- Protected health data leaving secure boundaries
Fix: treat every GenAI integration as a new attack surface, and design for it upfront.
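Two of those boundaries can be sketched in a few lines. This is illustrative only (marker list, tool names, and field names are hypothetical, and a keyword filter is one layer of a defence, never the whole of it): screen untrusted text before it enters the prompt, and make tool use deny-by-default.

```python
ALLOWED_TOOLS = {"lookup_guideline", "summarise_note"}  # explicit allowlist

INJECTION_MARKERS = ("ignore previous", "system prompt", "disregard instructions")

def screen_untrusted_text(text: str) -> str:
    """Heuristic pre-filter for retrieved or clinical text entering a prompt.
    Not a complete defence; suspicious input routes to human review."""
    lowered = text.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection; route to human review")
    return text

def authorise_tool_call(tool: str, args: dict) -> None:
    """Deny-by-default tool boundary: agents call only allowlisted tools,
    and raw patient identifiers never cross the boundary."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} not allowlisted")
    if any(key in args for key in ("bsn", "patient_id")):
        raise PermissionError("identifiers must not cross the tool boundary")
```

The design choice that matters is the default: anything not explicitly allowed is blocked, which also narrows the data-poisoning surface that uncontrolled tool use opens.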
"POC never becomes a maintained system."
Looks like
- No monitoring: model drift goes undetected
- No change control: updates break things silently
- No audit trail: can't reconstruct a decision
Fix: build for the organisation's lifecycle, not just the demo.
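The lifecycle pieces above are small to start. A minimal sketch with illustrative names: an append-only audit record with enough fields to reconstruct a decision later, plus a Population Stability Index as a cheap drift signal (the ~0.2 alert threshold is a common rule of thumb, not a standard).

```python
import json
import math
import time

def audit_record(model_version, input_hash, output, reviewer=None):
    """One append-only audit line: enough to reconstruct a decision later."""
    return json.dumps({
        "ts": time.time(),
        "model": model_version,
        "input_sha256": input_hash,
        "output": output,
        "reviewer": reviewer,
    })

def psi(expected, actual):
    """Population Stability Index over binned score distributions:
    sum((a - e) * ln(a / e)). Values above ~0.2 usually warrant review."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )
```

Neither piece needs a platform to exist on day one; what matters is that drift and decisions are observable before the first update ships.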
"Leadership wants speed; reality demands oversight."
Looks like
- Missing documentation for high-risk AI systems
- Unclear human oversight design
- Weak robustness and cybersecurity story
Fix: high-risk AI Act requirements take effect in August 2026; design with oversight from day one.
Capabilities
What gets built: decision-support systems, not prototypes.
Three interlocking capabilities. Each one necessary; together, they produce something that survives clinical reality.
Insight extraction
Incl. unstructured clinical text
- Signal extraction from notes and narrative
- Feature definitions + validation
- Clinical interpretation aligned with workflow
Why it matters: most value is locked in free text. Getting it out reliably is a prerequisite for everything else.
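As an illustration of what extraction with explainability hooks means in practice (patterns and names are hypothetical; real feature definitions need clinical validation), each extracted signal keeps the character span that triggered it, so "why this fired" is always answerable.

```python
import re
from typing import NamedTuple

class Signal(NamedTuple):
    name: str
    span: tuple  # character offsets, kept so "why this fired" stays answerable

# Illustrative patterns only; real feature definitions need clinical validation.
PATTERNS = {
    "fall_mentioned": re.compile(r"\b(fell|fall|gevallen)\b", re.I),
    "weight_loss": re.compile(r"\b(weight loss|afgevallen)\b", re.I),
}

def extract_signals(note: str):
    """Turn narrative clinical text into named, traceable signals."""
    found = []
    for name, pattern in PATTERNS.items():
        match = pattern.search(note)
        if match:
            found.append(Signal(name, match.span()))
    return found
```

A rule-based pass like this also doubles as the baseline the evaluation harness scores an LLM extractor against.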
Trust stack
Evaluation + explainability
- Evaluation harness + failure modes
- Explainability surfaces: "why this matters now"
- Human-in-the-loop design
Why it matters: clinicians and procurement teams need to defend every AI-assisted decision.
Production path
LLMOps / MLOps
- Pipelines, monitoring, change control
- Audit-ready logs + traceability
- Security-aware integration patterns
Why it matters: EU high-risk AI requirements (logging, traceability, robustness, cybersecurity) kick in at deployment.
Trust stack sequence
Each layer is designed in, not bolted on after demo day.
Fixed scope · Structured delivery
How pilots run (and why they survive)
A four-phase path with explicit risk gates. Each gate is a decision point, not a formality.
Discovery
1–2 weeks · Pick one decision interface + constraints. Define what done looks like.
Prototype
2–4 weeks · Fast iteration against real evaluation. Trust stack built in from the start.
Pilot
4–10 weeks · Embed into workflow. Measure adoption + outcomes. Real clinician exposure.
Rollout
Ongoing · Monitoring, change control, governance artefacts. Team handover documentation.
R&D Triage · 30 min · free
One session to scope your first AI pilot.
30 min. No prep needed. You leave with a defined scope, a stack recommendation, and a week-one action plan.
Selected work
What it looks like in practice
Short and technical. The details that matter.
Privacy-aware explainable decision support
Named client: GeriMedica, a geriatric EHR platform (NL)
Signal extraction from clinical notes + workflow integration. Built for clinical trust: explainability, data minimisation, oversight hooks.
- Explainable decision support (pilot in progress)
- Privacy-aware handling: data minimisation by design
- Workflow integration focus: clinician stays in EHR
Curalytics / C2F: clinical data structuring
Work in progress / pilot-led
Structuring clinical data for downstream decision support. Workflow-integrated, privacy-preserving. Built through project work.
Embedded ML/LLM pipeline + monitoring + eval harness
Anonymised on request
Production pipeline with full observability: monitoring, change control, audit trail, and evaluation harness. Reliability milestone delivered.
Details available under NDA during triage conversation.
Consultancy is primary
Products (built through projects)
Both products emerged from real implementation work, not greenfield product development.
C2F.ai
WIP · What it does
Clinical data structuring: turning unstructured clinical information into downstream-ready signals.
Who it's for
Healthcare software teams that need structured clinical data without building the extraction layer themselves.
Status
Pilot-led development. Active conversations.
Curalytics
Pilot-led · What it does
Workflow-integrated decision support: AI insights surfaced at the right moment inside existing clinical tools.
Who it's for
EHR/EPD vendors and healthcare software teams integrating AI decision support into existing products.
Status
Active pilot work. Built through GeriMedica engagement.
EU AI Act · NIST · ISPE · business.gov.nl
Built for the EU/NL operating reality.
Not a compliance checklist. Builder-focused: these constraints are designed around from day one.
EU-ready delivery checklist (built in, not patched on)
Traceability from day one
Every decision has a traceable data lineage, not one reconstructed later.
Evaluation baseline defined upfront
What counts as working is agreed before building starts.
Human oversight hooks designed in
Override and review mechanisms are architecture, not afterthought.
Security-conscious integration
GenAI treated as an attack surface: injection, poisoning, tool-use boundaries.
Monitoring + change control
Deployment isn't the end: drift, updates, and rollbacks are planned for.
Documentation + audit artefacts
EU AI Act high-risk documentation produced as part of delivery.
Sources: digital-strategy.ec.europa.eu; business.gov.nl; ispe.org; nist.gov/AI RMF GenAI.
R&D Triage · 30 minutes
Book the 30-min R&D Triage. You leave with a plan.
No demo, no deck, no pitch. A structured conversation about your specific situation, and a clear framework for moving forward.
- Which workflow integration approach fits your constraints
- What the trust stack needs to look like for your use case
- Common traps to avoid (specific to healthcare software)
Book your R&D Triage
Takes 2 minutes. You'll hear back within one business day.