ClinAssist is an explainable AI clinical decision support system designed for emergency care — combining real-time risk stratification with transparent, SHAP-powered reasoning that clinicians can interrogate, trust, and act on.
Clinicians routinely override or ignore AI recommendations they cannot interrogate — not out of stubbornness, but out of rational caution.
Most clinical AI models are benchmarked on static datasets in isolation; they were never designed around a three-to-five-minute triage interaction.
EHR data — vitals, labs, history — and free-text clinical notes are rarely fused. ClinAssist brings them together.
ClinAssist processes both structured EHR data and unstructured clinical notes in real time — then tells the clinician exactly why it reached its conclusion.
Structured EHR data — vitals, labs, demographics, comorbidities — are pulled automatically at point of registration. Free-text clinical notes are analyzed via NLP as they are entered.
A transformer-based NLP model processes free-text notes while a gradient-boosted model handles structured data. Both feed into a unified risk score.
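How the two model outputs might be combined can be sketched as a simple late fusion. This is an illustrative assumption, not the actual ClinAssist architecture: the `fuse_risk_scores` function, the fixed weights, and the sigmoid calibration are all hypothetical stand-ins for a learned fusion layer.

```python
import math

def fuse_risk_scores(nlp_logit: float, tabular_logit: float,
                     w_nlp: float = 0.4, w_tab: float = 0.6) -> float:
    """Late fusion sketch: weighted average of the two models' logits,
    squashed to a 0-1 risk score with a sigmoid. The weights here are
    invented; in practice they would be learned on a validation set."""
    fused_logit = w_nlp * nlp_logit + w_tab * tabular_logit
    return 1.0 / (1.0 + math.exp(-fused_logit))

# Example: note model mildly alarmed, structured-data model strongly so.
risk = fuse_risk_scores(nlp_logit=0.8, tabular_logit=2.1)
```

A learned fusion (e.g. a small logistic layer over both logits plus interaction terms) would replace the fixed weights, but the shape of the computation stays the same.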
Every prediction is explained in real time. The clinician sees exactly which features drove the score — and by how much. No black boxes, ever.
Every prediction comes with a feature-level explanation. Clinicians can interrogate, challenge, and override — supported by clear reasoning, not blind outputs.
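SHAP attributions are Shapley values from cooperative game theory: each feature's average marginal contribution to the prediction across all coalitions of the other features. For a toy model they can be computed exactly by brute force, which makes the idea concrete. The two-feature risk model and its weights below are invented for illustration; a production pipeline would use the `shap` library's optimized explainers rather than this exponential-time sketch.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction. Features absent from a
    coalition are replaced with their baseline value; each feature's
    attribution is its coalition-weighted average marginal contribution."""
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(set(subset) | {i}) - value(set(subset)))
        phi.append(total)
    return phi

# Toy linear risk model over (heart_rate, lactate) -- weights invented.
predict = lambda z: 0.01 * z[0] + 0.3 * z[1]
phi = shapley_values(predict, x=[120, 4.0], baseline=[80, 1.0])
# For a linear model, phi[i] equals w[i] * (x[i] - baseline[i]),
# and the attributions sum to predict(x) - predict(baseline).
```

The additivity property in the last comment is what lets the interface say "lactate contributed +0.9 to this score" and have the contributions reconcile exactly with the prediction.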
Transformer-based NLP reads triage and clinical notes in real time, extracting structured insight from unstructured language including medical shorthand.
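One small piece of handling medical shorthand can be sketched as a normalization pass before text reaches the transformer. The dictionary below is a toy assumption — a real system would draw on a clinical terminology resource, and this regex substitution is a preprocessing aid, not the transformer itself.

```python
import re

# Hypothetical abbreviation table -- illustrative only; a real system
# would use a curated clinical terminology resource.
SHORTHAND = {
    "sob": "shortness of breath",
    "c/o": "complains of",
    "hx": "history",
    "bp": "blood pressure",
}

def expand_shorthand(note: str) -> str:
    """Expand common medical shorthand so abbreviations don't fragment
    the transformer's token stream. (No word-boundary handling here:
    a toy sketch, so 'sob' would also match inside 'sober'.)"""
    pattern = re.compile(
        "|".join(re.escape(k) for k in SHORTHAND), re.IGNORECASE)
    return pattern.sub(lambda m: SHORTHAND[m.group(0).lower()], note)

expanded = expand_shorthand("Pt c/o SOB, hx of HTN, BP 160/95")
```

In practice a subword tokenizer can absorb much of this, but normalizing high-frequency shorthand keeps the note model's inputs closer to the language its pretraining corpus saw.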
Risk scores update dynamically as new data arrives — vitals changes, new labs, updated notes — keeping the clinician's picture current throughout the encounter.
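The dynamic-update behavior amounts to recomputing the score whenever a new observation lands on the encounter. A minimal sketch, assuming a pluggable scoring function — the threshold rule below is a placeholder, where ClinAssist would call its fused model:

```python
class EncounterRiskTracker:
    """Keep the latest value of every feature seen during an encounter
    and recompute the risk score on each new observation, so the
    clinician's picture stays current."""

    def __init__(self, score_fn):
        self.score_fn = score_fn
        self.features = {}   # newest value per feature wins
        self.history = []    # score trajectory over the encounter

    def update(self, **observations):
        self.features.update(observations)
        score = self.score_fn(self.features)
        self.history.append(score)
        return score

# Placeholder rule: escalate on tachycardia or elevated lactate.
score_fn = lambda f: min(1.0, 0.2
                         + 0.4 * (f.get("heart_rate", 80) > 110)
                         + 0.4 * (f.get("lactate", 1.0) > 2.0))

tracker = EncounterRiskTracker(score_fn)
tracker.update(heart_rate=95)   # triage vitals: baseline risk
tracker.update(lactate=3.4)     # new lab arrives: risk rises
```

Keeping the full score trajectory (rather than only the latest value) also lets the interface show the clinician *when* and *why* the risk changed during the encounter.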
Designed around the 3–5 minute triage window. ClinAssist surfaces the right information at the right moment — without adding cognitive burden.
Initial models are trained and validated on MIMIC-IV — one of the world's largest critical care EHR datasets — before progressing to partner institution pilots.
Clinician trust and decision accuracy are primary outcome measures — not just model AUC. We measure whether ClinAssist actually improves decisions, not just predictions.
Lead Researcher & Developer
CS (ML track) · Deep learning, NLP, SHAP explainability · LLM fine-tuning & prompt engineering
ClinAssist proposal developed. Research direction defined. Initial supervisor outreach underway.
Systematic review of clinical AI explainability literature. Research question refinement. Dataset access confirmed.
XGBoost + SHAP pipeline on structured EHR data. Transformer NLP on clinical notes. Unified risk scoring interface prototype.
Human-factors evaluation with ED clinicians. Does SHAP-based explainability improve decision accuracy and clinician trust vs. no-explanation baseline?
MRes thesis submission. Target publication in JAMIA, npj Digital Medicine, or similar clinical AI venues.
Whether you're a clinician, researcher, or health institution interested in collaborating — we'd love to hear from you.