ClinAssist processes structured EHR data, clinical notes, and imaging in real time — then tells the clinician exactly why it reached its conclusion.
Built to operate within the triage window — fast, multimodal, and fully transparent.
Structured EHR data — vitals, labs, demographics, comorbidities — are pulled automatically at point of registration. Clinical notes are parsed with NLP. Imaging is ingested where available.
A transformer-based NLP model processes free-text notes, a gradient-boosted model handles structured EHR data, and a CNN/ViT pipeline analyses diagnostic imaging where available. All three feed into a unified risk score.
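The source does not specify how the three sub-models are combined, so here is a minimal late-fusion sketch under one common assumption: each modality emits a risk probability, and the unified score is a weighted average, with the imaging weight redistributed when no imaging is available. The function name and weights are illustrative, not ClinAssist's actual fusion method.

```python
def fuse_risk_scores(notes_p, ehr_p, imaging_p=None,
                     weights=(0.3, 0.5, 0.2)):
    """Weighted average of per-modality risk probabilities in [0, 1].

    imaging_p may be None ("where available"); its weight is then
    redistributed across the remaining modalities by renormalising.
    """
    parts = [(notes_p, weights[0]), (ehr_p, weights[1])]
    if imaging_p is not None:
        parts.append((imaging_p, weights[2]))
    total_w = sum(w for _, w in parts)
    return sum(p * w for p, w in parts) / total_w
```

Late fusion keeps each sub-model independently trainable and auditable, which matters when one modality (imaging) is frequently absent at triage.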
Every prediction is explained in real time with layered attributions: SHAP feature attribution for tabular data, LIME local approximations, and attention visualisation for NLP outputs. The clinician sees exactly which features drove the score, and by how much, so they can interrogate, challenge, and override — supported by clear reasoning, not blind outputs.
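A toy illustration of the additive property that makes SHAP-style attributions interrogable: contributions sum exactly to the gap between this patient's prediction and a baseline. For a linear model the SHAP value of feature i reduces to w_i * (x_i - baseline_i); real deployments would use the shap library against the actual gradient-boosted model, and the weights and feature values below are invented for the example.

```python
def linear_attributions(weights, x, baseline):
    """Per-feature contribution of x relative to a baseline patient,
    for a linear risk model f(x) = sum_i w_i * x_i."""
    return {name: w * (x[name] - baseline[name])
            for name, w in weights.items()}

weights  = {"lactate": 0.9, "heart_rate": 0.02, "age": 0.01}
patient  = {"lactate": 4.0, "heart_rate": 118, "age": 67}
baseline = {"lactate": 1.2, "heart_rate": 80, "age": 50}

attr = linear_attributions(weights, patient, baseline)
# attr sums to f(patient) - f(baseline): the "completeness" property
# that lets a clinician ask which features drove the score, and by how much.
```

Completeness is what distinguishes these attributions from a ranked feature list: the numbers are commitments that must add up, so an implausible contribution is visibly challengeable.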
Transformer-based NLP reads triage and clinical notes in real time, extracting structured insight from unstructured language including medical shorthand and abbreviations.
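Shorthand handling usually starts with a normalisation pass before the transformer sees the text. The lexicon and regex approach below are a hypothetical sketch of that preprocessing step only, not ClinAssist's NLP model.

```python
import re

# Illustrative lexicon; a real one would be far larger and
# context-sensitive (e.g. "hx" vs. laboratory codes).
SHORTHAND = {
    "c/o": "complains of",
    "sob": "shortness of breath",
    "htn": "hypertension",
    "hx": "history",
}

_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in SHORTHAND) + r")\b",
    re.IGNORECASE)

def expand_shorthand(note):
    """Replace known abbreviations with their expansions, case-insensitively."""
    return _PATTERN.sub(lambda m: SHORTHAND[m.group(0).lower()], note)
```

A dictionary pass like this cannot disambiguate overloaded abbreviations; that is exactly the gap the transformer model is there to close.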
Risk scores update dynamically as new data arrives — vitals changes, new labs, updated notes — keeping the clinician's picture current throughout the encounter.
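Dynamic rescoring can be sketched as an event-driven loop: each new observation updates the feature snapshot and triggers a fresh score. The class and scoring function below are stand-ins; the source says only that scores update as new data arrives, not how.

```python
class EncounterState:
    """Holds the current feature snapshot for one patient encounter
    and rescores whenever a new observation lands."""

    def __init__(self, score_fn):
        self.features = {}
        self.score_fn = score_fn
        self.history = []  # (feature_name, score_after_update) audit trail

    def observe(self, name, value):
        self.features[name] = value
        score = self.score_fn(self.features)
        self.history.append((name, score))
        return score

# Hypothetical usage with a toy scoring function:
state = EncounterState(lambda f: min(1.0, sum(f.values()) / 10))
state.observe("lactate_flag", 3)
state.observe("tachycardia", 2)
```

Keeping a per-update history also gives the clinician a timeline of why the score moved, which pairs naturally with the per-prediction explanations above.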
Designed around the 3–5 minute triage window. ClinAssist surfaces the right information at the right moment — without adding cognitive burden to high-pressure environments.
CNN and Vision Transformer (ViT) architectures analyse imaging inputs — such as chest X-rays — as part of the multimodal pipeline, extending ClinAssist beyond structured data into visual diagnostics.
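The step that lets a ViT treat an image as a token sequence is patch embedding: split the pixel grid into fixed-size patches and flatten each one before linear projection. A toy stdlib version of that tokenisation step, assuming the image dimensions divide evenly by the patch size; real pipelines use a framework's tensor ops.

```python
def patchify(image, patch):
    """Split an H x W grid (list of rows) into patch x patch tiles,
    each flattened row-major — the ViT tokenisation step before
    linear projection. Assumes H and W are multiples of `patch`."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            patches.append([image[top + i][left + j]
                            for i in range(patch)
                            for j in range(patch)])
    return patches
```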
Clinician trust and decision accuracy are primary outcome measures, not just model AUC. We measure whether ClinAssist actually improves decisions, not only predictions.