Healthcare Software · NL/EU

AI decision support inside the software doctors already use.

Built for healthcare software teams in NL/EU: explainable, privacy-preserving AI embedded into existing clinical workflows, without betting your roadmap on fragile prototypes.

  • Workflow-first integration: AI surfaces the right signal at the right moment.
  • Trust stack: evaluation, explainability, monitoring, human-in-the-loop.
  • Security-aware GenAI: treat agents as hypotheses; reduce injection/poisoning risk.

EU-based. You leave with a clear next step, not a pitch. Built for the EU operating reality (traceability, oversight, cybersecurity).

Workflow Embed Architecture

[Diagram: EHR/EPD (existing clinical software) → AI insight layer (signal extraction, feature evaluation, risk scoring, explainability + evaluation) → decision support outputs ("Risk flag: fall risk elevated", "Attention: medication conflict", "Recommend: review discharge", each with a "why" explanation; clinician reviews + confirms) → audit trail + monitoring (traceability, change control, logging, human oversight hooks, cybersecurity posture).]
GeriMedica: NLP shipped to production · EU-aware by design

Where software teams get stuck (even with strong engineers + agents)

Five failure modes, each with a distinct cause. They compound each other if left unaddressed.

A: Workflow adoption

"AI works in demos, not in the clinician's day."

Looks like

  • Separate dashboards clinicians ignore
  • Context-switching tax kills adoption
  • Alert fatigue; AI becomes background noise

Fix: embed at the moment of decision, not as a second screen.

B: Trust & evaluation

"No one can defend outputs."

Looks like

  • Inconsistent performance across patient subgroups
  • Unclear failure modes; no evaluation harness
  • GenAI confabulation with no safeguards

Fix: build evaluation from day one; define what counts as working, and what breaks.
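An evaluation harness can start very small. A minimal sketch, assuming a hypothetical risk-flag model and a handful of hand-labelled cases; all names, cases, and the toy model are illustrative, not a prescribed implementation:

```python
# Minimal evaluation harness sketch: agree on labelled cases and a pass
# bar before building, then rerun the same check on every model change.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    note: str          # input, e.g. a clinical note snippet
    expected: bool     # hand-labelled ground truth, e.g. "risk present"
    subgroup: str      # track performance per patient subgroup

def evaluate(model: Callable[[str], bool], cases: list[Case]) -> dict[str, float]:
    """Accuracy overall and per subgroup, so subgroup regressions are visible."""
    by_group: dict[str, list[bool]] = {}
    for c in cases:
        by_group.setdefault(c.subgroup, []).append(model(c.note) == c.expected)
    scores = {g: sum(r) / len(r) for g, r in by_group.items()}
    scores["overall"] = sum(sum(r) for r in by_group.values()) / len(cases)
    return scores

# Toy model and cases, purely illustrative:
cases = [
    Case("fall last week", True, "geriatric"),
    Case("no incidents", False, "geriatric"),
    Case("dizzy on standing", True, "cardio"),
]
model = lambda note: "fall" in note or "dizzy" in note
print(evaluate(model, cases))
```

The point is not the toy model but the contract: subgroup-level scores make "inconsistent performance across patient subgroups" visible instead of anecdotal.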

C: Privacy & security

"GenAI expands the attack surface."

Looks like

  • Prompt injection risk in clinical agent pipelines
  • Data poisoning vectors via uncontrolled tool use
  • Protected health data leaving secure boundaries

Fix: treat every GenAI integration as a new attack surface; design for it upfront.
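One concrete upfront design choice is to treat model output as untrusted input: an agent may only invoke explicitly allowlisted tools, with validated arguments. A minimal sketch under stated assumptions (the tool names and constraints are hypothetical):

```python
# Sketch: constrain agent tool use. A prompt-injected instruction that
# names a tool outside the allowlist is simply never executed.
ALLOWED_TOOLS = {
    "lookup_medication": {"max_args": 1},
    "fetch_guideline": {"max_args": 1},
}

def guard_tool_call(tool: str, args: list[str]) -> bool:
    """Reject any call outside the allowlist or violating its constraints."""
    spec = ALLOWED_TOOLS.get(tool)
    if spec is None:
        return False                      # unknown tool: never execute
    if len(args) > spec["max_args"]:
        return False                      # unexpected shape: refuse
    return all(a.isprintable() and len(a) < 200 for a in args)

# An injected "export all records" call is refused; a legitimate one passes:
assert guard_tool_call("export_all_records", ["*"]) is False
assert guard_tool_call("lookup_medication", ["metoprolol"]) is True
```

Allowlisting is deny-by-default: new tools have to be deliberately added, which is exactly where the security review belongs.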

D: Operationalization

"POC never becomes a maintained system."

Looks like

  • No monitoring: model drift goes undetected
  • No change control: updates break things silently
  • No audit trail: can't reconstruct a decision

Fix: build for the organisation's lifecycle, not just the demo.

E: EU readiness

"Leadership wants speed; reality demands oversight."

Looks like

  • Missing documentation for high-risk AI systems
  • Unclear human oversight design
  • Weak robustness and cybersecurity story

Fix: high-risk AI Act requirements take effect in August 2026; design with oversight from day one.

Capabilities

What gets built: decision-support systems, not prototypes.

Three interlocking capabilities. Each is necessary on its own; together, they produce something that survives clinical reality.

Insight extraction

Incl. unstructured clinical text

  • Signal extraction from notes and narrative
  • Feature definitions + validation
  • Clinical interpretation aligned with workflow

Why it matters: most value is locked in free text. Getting it out reliably is a prerequisite for everything else.

Trust stack

Evaluation + explainability

  • Evaluation harness + failure modes
  • Explainability surfaces: "why this matters now"
  • Human-in-the-loop design

Why it matters: clinicians and procurement teams need to defend every AI-assisted decision.

Production path

LLMOps / MLOps

  • Pipelines, monitoring, change control
  • Audit-ready logs + traceability
  • Security-aware integration patterns

Why it matters: EU high-risk AI requirements (logging, traceability, robustness, cybersecurity) kick in at deployment.

Trust stack sequence

Data lineage → Evaluation harness → Explainability surfaces → Monitoring & alerting → Human oversight → Documentation & audit

Each layer is designed in, not bolted on after demo day.

Fixed scope · Structured delivery

How pilots run (and why they survive)

A four-phase path with explicit risk gates. Each gate is a decision point, not a formality.

Phase 1 · Discovery · 1–2 weeks

Pick one decision interface + constraints. Define what done looks like.

Gate: evaluation baseline agreed.

Phase 2 · Prototype · 2–4 weeks

Fast iteration against real evaluation. Trust stack built in from the start.

Gate: workflow integration test.

Phase 3 · Pilot · 4–10 weeks

Embed into workflow. Measure adoption + outcomes. Real clinician exposure.

Gate: monitoring + audit trail ready.

Phase 4 · Rollout · ongoing

Monitoring, change control, governance artefacts. Team handover documentation.

Output: EU-ready compliance artefacts.

R&D Triage · 30 min · free

One session to scope your first AI pilot.

30 min. No prep needed. You leave with a defined scope, a stack recommendation, and a week-one action plan.

Consultancy is primary

Products (built through projects)

Both products emerged from real implementation work, not greenfield product development.

C2F.ai

WIP

What it does

Clinical data structuring: turning unstructured clinical information into downstream-ready signals.

Who it's for

Healthcare software teams that need structured clinical data without building the extraction layer themselves.

Status

Pilot-led development. Active conversations.

Curalytics

Pilot-led

What it does

Workflow-integrated decision support: AI insights surfaced at the right moment inside existing clinical tools.

Who it's for

EHR/EPD vendors and healthcare software teams integrating AI decision support into existing products.

Status

Active pilot work. Built through GeriMedica engagement.

EU AI Act · NIST · ISPE · business.gov.nl

Built for the EU/NL operating reality.

Not a compliance checklist. Builder-focused: constraints designed around from day one.

  • High-risk AI: logging, traceability, documentation, oversight
  • Security-aware: prompt injection + data poisoning risks
  • NL phased enforcement + AI literacy expectations (NEN)
  • GDPR-aware: data minimisation, pseudonymisation by design

EU-ready delivery checklist (built in, not patched on)

Traceability from day one

Every decision has a traceable data lineage, not one reconstructed later.

Evaluation baseline defined upfront

What counts as working is agreed before building starts.

Human oversight hooks designed in

Override and review mechanisms are architecture, not an afterthought.

Security-conscious integration

GenAI treated as an attack surface: injection, poisoning, tool-use boundaries.

Monitoring + change control

Deployment isn't the end: drift, updates, and rollbacks are planned for.

Documentation + audit artefacts

EU AI Act high-risk documentation produced as part of delivery.
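The "traceability from day one" item above can start as little more than an append-only decision log, captured at decision time rather than reconstructed later. A minimal sketch; the field names and identifiers are illustrative assumptions, not a prescribed schema:

```python
# Sketch: capture lineage at decision time so an auditor can answer
# "which data, which model version, who reviewed it" without forensics.
import datetime
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    model_version: str      # exact model/prompt version that produced output
    input_refs: list[str]   # pointers to source records, not copies of PHI
    output: str             # what was shown to the clinician
    reviewed_by: str        # human-in-the-loop sign-off
    timestamp: str

record = DecisionRecord(
    decision_id="d-001",
    model_version="risk-flag-v0.3",
    input_refs=["note:8812", "meds:4410"],
    output="Risk flag: fall risk elevated",
    reviewed_by="clinician:77",
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))   # one append-only audit-log line
```

Storing references to source records instead of the records themselves keeps protected health data inside its secure boundary while still making every decision reconstructible.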

Sources: digital-strategy.ec.europa.eu; business.gov.nl; ispe.org; nist.gov (AI RMF Generative AI Profile).

R&D Triage · 30 minutes

Book the 30-min R&D Triage.

You leave with a plan.

No demo, no deck, no pitch. A structured conversation about your specific situation, and a clear framework for moving forward.

Which workflow integration approach fits your constraints

What the trust stack needs to look like for your use case

Common traps to avoid (specific to healthcare software)

30 min, free, no pitch
Amsterdam · NL / EU
Useful regardless of next step

Book your R&D Triage

Takes 2 minutes. You'll hear back within one business day.

No agenda. No pitch. Just a useful 30 minutes.