Clinical Evidence

Performance data you can present at grand rounds.

NexusAI's validation program produces rigorous, site-specific performance evidence — prospective pilot data, multi-reader AUC studies, and real-world operational benchmarks — to support institutional procurement and clinical governance decisions.

97.7%
Sensitivity — ICH detection (multi-reader study)
44%
Reduction in time-to-diagnosis for stroke workflows
8.4×
Faster critical finding notification vs. baseline
38%
Reduction in documentation time (ambient AI cohort)
> 0.94
AUC across all validated imaging models

All metrics from internal pilot and validation studies. Independent multi-site studies in progress. Results may vary by institution, workflow, and deployment configuration.

Regulatory Status & Evidence Transparency

NexusAI Health is actively pursuing 510(k) clearance for its imaging AI models. All clinical AI capabilities are currently deployed under the Clinical Decision Support (CDS) framework, with results reviewed and interpreted by licensed clinicians. Performance data presented on this page reflects internal validation studies and pilot deployments. NexusAI does not claim independent diagnostic accuracy equivalent to a cleared medical device. Independent peer-reviewed publications and multi-site prospective studies are underway. We publish our methodology, reader cohorts, and limitation statements alongside every performance claim.

What the data shows

NexusAI's evidence program covers imaging AI performance, documentation AI quality, and agentic workflow outcomes — evaluated through multiple independent validation frameworks.

Neurovascular Prospective Pilot · 6 Sites

Real-Time ICH Detection and Radiologist Alert Optimization

A prospective multi-site pilot across six emergency departments evaluated NexusAI's intracranial hemorrhage detection model against unassisted radiologist reads. The primary endpoint was time from image acquisition to verified radiologist review of critical findings.

Key Findings
Model Sensitivity (ICH): 97.7%
Model Specificity: 96.1%
AUC: 0.981
Median time to alert (AI-assisted): 4.2 min
Median time to alert (baseline): 35.6 min
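For readers evaluating these figures, it may help to recall how sensitivity and specificity fall out of an adjudicated confusion matrix. The sketch below is illustrative only; the counts are hypothetical and are not the pilot's actual data.

```python
# Illustrative only: how sensitivity and specificity are derived from
# confusion-matrix counts in a reader study. The counts below are
# hypothetical and do not come from the NexusAI pilot.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of adjudicated positives the model flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of adjudicated negatives the model clears."""
    return tn / (tn + fp)

# Hypothetical adjudicated cohort: 300 ICH-positive, 500 ICH-negative scans.
tp, fn = 293, 7    # model flagged 293 of 300 true bleeds
tn, fp = 480, 20   # model cleared 480 of 500 bleed-free scans

print(f"Sensitivity: {sensitivity(tp, fn):.1%}")  # 97.7%
print(f"Specificity: {specificity(tn, fp):.1%}")  # 96.0%
```

AUC is a separate, threshold-independent measure: it summarizes the sensitivity/specificity trade-off across all operating points, which is why it is reported alongside the single-threshold figures above.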
Neurovascular Retrospective Validation · 1,200 Cases

LVO Detection and Stroke Pathway Activation Time

A retrospective validation cohort of 1,200 CT angiography studies evaluated the NexusAI large vessel occlusion detection model against ground-truth adjudication by two independent neuroradiologists. Workflow impact measured against historical baseline from the same institution.

Key Findings
LVO Detection Sensitivity: 94.8%
Specificity: 97.3%
Time to neurovascular team alert (AI): < 5 min
Stroke pathway activation time: −44% vs. historical baseline
Pulmonary Retrospective Validation · 850 Cases

Pulmonary Embolism Detection: AI-Augmented CTPA Workflow

Retrospective evaluation of the NexusAI PE detection model across a cohort of 850 CTPA studies, including high-acuity presentations (saddle PE, right heart strain) and incidental subsegmental findings. Reader study with four radiologists in AI-on vs. AI-off conditions.

Key Findings
Central PE Sensitivity: 98.1%
Subsegmental PE Sensitivity: 88.4%
Radiologist read time (AI-assisted): −28%
Missed PE rate (AI-assisted reads): −61% vs. unaided
Radiology Operations Prospective Pilot · 3 Health Systems

AI-Driven Worklist Prioritization and Radiologist Throughput

A three-site prospective pilot evaluated NexusAI's worklist prioritization engine — which reorders radiology reads by AI-detected urgency — on radiologist throughput, critical finding catch rates, and time to report finalization for urgent studies.

Key Findings
Time to critical finding report finalization: 8.4× faster
Critical studies reviewed within 15 min: 91% (vs. 34% baseline)
Overall radiologist read throughput: +18%
Documentation AI Mixed-Methods Study · 120 Physicians

Ambient AI Documentation: Time Savings and Note Quality

A 120-physician cohort study across primary care and specialty settings evaluated NexusAI Ambient against physician-authored notes for structured completeness, billable element capture, and physician-rated accuracy. Time-in-EHR tracked via metadata analysis.

Key Findings
Documentation time: −38%
Physician accuracy rating (≥4/5): 91% of notes
Billable element capture rate: +14% vs. manual
Physician-reported burnout: 62% reported improvement
Agentic Workflows Operational Benchmark · 2 IDNs

Prior Authorization Agent: Turnaround Time and Denial Reduction

An operational benchmark across two integrated delivery networks measured the impact of NexusAI's Prior Authorization Agent on PA submission turnaround time, initial denial rates, and staff time allocation. Compared against the same institutions' pre-deployment baseline over a 90-day period.

Key Findings
PA submission turnaround: 4.1 h average (was 3.2 days)
Initial denial rate: −31%
Staff time per PA request: −68%
PA-related care delay incidents: −44%

How we generate evidence

We hold ourselves to the standards expected in peer-reviewed clinical AI research — including pre-specified endpoints, independent adjudication, and prospective validation where possible.

Pre-Specified Endpoints

All validation studies define primary and secondary endpoints before data collection. We do not reverse-engineer metrics to favorable conclusions or report only positive outcomes.

Independent Ground Truth

Imaging AI studies use independent multi-reader adjudication panels. Documentation AI accuracy assessed by blinded clinical reviewers, not the generating physician.

Prospective Where Possible

We prioritize prospective study designs in real clinical environments. Where retrospective validation is used, we document its limitations explicitly in all materials.

Statistical Rigor

All performance metrics reported with 95% confidence intervals. Sample sizes powered for primary endpoint detection. Statistical analysis conducted by independent biostatisticians.
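As a concrete illustration of what a 95% confidence interval on a proportion looks like, the sketch below computes a Wilson score interval, a standard choice for sensitivity-style metrics near 100%. The sample counts are hypothetical, and this is not NexusAI's actual analysis pipeline.

```python
import math

# Sketch of a Wilson score 95% confidence interval for a binomial
# proportion, the kind of interval commonly reported with sensitivity
# claims. The counts used below are hypothetical.

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 → 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 293 detections out of 300 adjudicated positives.
lo, hi = wilson_ci(293, 300)
print(f"Sensitivity 97.7% (95% CI {lo:.1%} to {hi:.1%})")
```

Note that the interval is asymmetric around the point estimate; unlike the simpler normal approximation, the Wilson interval stays well-behaved for proportions close to 0 or 1 and for small samples.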

Diversity & Generalizability

Validation cohorts selected to represent diverse patient populations, scanner types, and clinical settings — including community hospitals, academic medical centers, and safety-net facilities.

Transparent Reporting

We publish limitation statements alongside all performance claims. Subgroup analyses — including performance variation by scanner, patient age, and institution type — available on request.

Where we are on the regulatory pathway

We are transparent about what is cleared, what is pending, and how we are deployed in each context. We do not overstate our regulatory status.

510(k) Pending

Imaging AI Models — FDA 510(k) Pathway

NexusAI is pursuing 510(k) premarket notification for its intracranial hemorrhage, pulmonary embolism, and large vessel occlusion detection models. Submissions are in preparation. Until clearance is obtained, these models are deployed as Clinical Decision Support (CDS) tools under applicable FDA guidance — not as cleared medical devices.

Currently Active

Clinical Decision Support Deployment

All NexusAI imaging AI and documentation AI capabilities are deployed under the FDA's Clinical Decision Support Software guidance framework. Outputs are advisory in nature, intended to support — not replace — clinical judgment by licensed practitioners. Clinicians retain full interpretive and diagnostic authority.

Compliant

HIPAA & Security Compliance

All NexusAI deployments operate under executed Business Associate Agreements. Platform infrastructure is HIPAA-compliant, SOC 2 Type II certified, and supports all relevant PHI handling, audit logging, and data residency requirements.

In Progress

International Regulatory Program

NexusAI is evaluating CE marking pathways under the EU Medical Device Regulation (MDR) and Health Canada's Software as a Medical Device (SaMD) classification. International regulatory filings are expected to follow U.S. 510(k) clearance.

What clinical teams say

Feedback from physicians, radiologists, and health system leaders participating in NexusAI's early-access and pilot programs.

"

The worklist reprioritization changed how our nights run. I'm reading the most critical studies first, automatically. I stopped manually hunting for the urgent cases buried in the queue.

Emergency Radiologist Academic Medical Center · Pilot Participant
"

We've been waiting for an AI platform that connects the clinical finding to the actual workflow action. NexusAI does that. A positive PE detection kicks off the prior auth and the care coordination in the same breath.

Chief Medical Information Officer Integrated Health System · Advisory Board
"

I was skeptical of ambient documentation — I've tried others. NexusAI's note structure is actually what I would write. I edited maybe 15% of the first draft on most encounters. That's the standard I was waiting for.

Internal Medicine Hospitalist Regional Health System · Pilot Participant

Request the full evidence package

Clinical, quality, and IT leadership can receive NexusAI's complete validation methodology documentation, subgroup analyses, and site-specific benchmarking data.

Request Evidence Package → View Solutions