Although “unperfected”, AI has the potential to accelerate and synchronise stroke care

A recently published systematic review reveals that artificial intelligence (AI) is rapidly being used by major medical centres to identify large vessel occlusions (LVO) and diagnose stroke. While AI has the potential to expedite treatment and address a critical time delay for many patients, the authors, Nick Murray (Stanford University, USA) and colleagues, acknowledge that the software is complex, and there is a wide variation in the performance of different AI products. Noting a “paucity” of randomised controlled trials comparing AI software, the team say that, in order for the field to reap the full benefits of AI algorithms, future studies should standardise methods for both validation and comparison.

Nick Murray

The team from Stanford University and Johns Hopkins (Baltimore, USA) describe the fundamentals of AI and machine learning (ML) as applied to stroke: “ML is an area of AI-related research that provides tools to discover and develop decision-making rules from data. In the case of stroke, the primary goal of any AI or ML algorithm is to reliably identify the presence or absence of a feature, such as LVO, from 3D tomographic images.”
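As a loose illustration of that goal, the sketch below is not taken from the review, and its layer sizes, input dimensions and synthetic volume are purely illustrative assumptions: it shows how a small 3D convolutional network in PyTorch could map a volumetric scan to a single probability that a feature such as an LVO is present.

```python
# Minimal sketch (not any of the reviewed products): a toy 3D convolutional
# classifier that outputs the probability that a feature such as an LVO is
# present in a volumetric scan. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ToyLVOClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # single-channel CT volume in
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # collapse to one value per channel
        )
        self.classifier = nn.Linear(16, 1)               # one logit: feature present or absent

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(x))         # probability that the feature is present

# Example: one synthetic 64x64x64 single-channel volume
volume = torch.randn(1, 1, 64, 64, 64)
probability = ToyLVOClassifier()(volume)
print(float(probability))  # e.g. ~0.5 for an untrained network
```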

Speaking to NeuroNews, Murray posits that the benefits of stroke diagnostic software using AI are “innumerable”. These include: “Faster stroke diagnosis, faster stroke treatment [and] improved decisions of when and how the stroke should be treated.” Overall, Murray says, “This reduces brain cells lost, which may be the difference between returning to independent functionality and work, and becoming paralysed.”

In their manuscript, published in the Journal of Neurointerventional Surgery, the authors support the consensus that AI tools can alleviate the “unmet” need for immediate and standardised time-sensitive stroke detection and triage. They outline how the clinician can use AI in real time to improve interpretation of images. “This understanding is necessary for informed interpretation of the scans of individual stroke patients that use AI software,” they write.

The team reviewed the current landscape of AI in ischaemic stroke diagnostics by characterising the current literature and emerging ML diagnostic technology that has only recently been made available to clinicians. In accordance with PRISMA guidelines, they searched PubMed, Medline and Embase, extracting a total of 20 studies that met their inclusion criteria.

The team found that AI use in acute LVO stroke diagnostics and triage falls under three categories: automatic quantification of stroke core and penumbra size and mismatch, detection of the vascular thrombi or occlusions that cause stroke, and prediction of acute complications. Specifically, Murray and colleagues write that ML algorithms deriving the Alberta stroke program early CT score (ASPECTS) most commonly employ a random forest learning (RFL) classifier, while LVO detection typically uses convolutional neural networks (CNNs). They found that average image feature detection was more sensitive with CNNs than with RFL (85% vs 68%), with some software performing with significantly higher sensitivity.

Nonetheless, across the 10 reviewed studies that used RFL, the authors report: “The AI often outperforms single radiologist ASPECTS and is non-inferior or even better than consensus ASPECTS.” They also acknowledge that core and perfusion studies from iSchemaView’s RAPID CT and MR have the highest metrics for AI accuracy, with some datasets showing 100% sensitivity to predict favourable perfusion mismatch.
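To make the random forest half of that comparison concrete, here is a minimal, hypothetical scikit-learn sketch, which is not any of the reviewed products: it trains a random forest on synthetic per-region features and labels, in the way such a classifier might be used to flag early ischaemic change in ASPECTS regions. The feature set and labels are entirely made up for illustration.

```python
# Illustrative sketch only (not the reviewed software): a random forest of the
# kind the review associates with ASPECTS scoring, trained on hypothetical
# per-region image features with synthetic binary infarct labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # 500 ASPECTS regions, 12 hypothetical features each
y = rng.integers(0, 2, size=500)      # 1 = early ischaemic change, 0 = normal (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(forest.predict_proba(X_test)[:3])  # per-region probabilities of early ischaemic change
```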

An advantage of using AI algorithms for acute prognosis prediction, according to the team, is that they facilitate immediate treatment planning, such as whether to offer IV tPA or endovascular therapy, as well as near-term outcome prediction, such as intracranial haemorrhage. The authors note that Viz.ai's software is currently the only one that automatically detects blood vessel blockages and then alerts the stroke team via an app. The physicians can then arrange helicopter transport to bring the patient directly to a hospital capable of performing thrombectomy. According to Murray et al, Viz.ai's software has been validated in multiple hospitals to date.

The authors also explain that some versions of AI have good predictive power to assist in interpretation of traditional stroke imaging outcomes. Accordingly, AI may reduce false-negative human errors in image interpretation, in turn increasing the efficiency of stroke triage and minimising morbidity and mortality.

Yet, Murray and colleagues maintain that diagnosis of acute stroke by AI “has not been perfected”, and errors still exist. They point to peer-reviewed literature that has reported a mean sensitivity of 68% for AI algorithms, which “indicates that AI algorithms may miss up to one in three findings on imaging output”. The investigators propose reasons for the failure of AI in this context: “Radiologic scan abnormalities from pre-existing central nervous system injury, inadequate contrast boluses, inability to correct for patient motion, and tortuous vessels that preclude evaluation of the contrast column.” Use of software with the highest performance and automation capability is thus essential, says Murray.

Moreover, the team write that “critical review of the radiological images by a physician remains important”, due to lower specificities reported by some AI algorithms. The authors also argue that while LVO detection methods aim for higher positive detection rates, the “burden on the physician to navigate an increased number of scans containing false positives will be noticed”.

To accommodate these AI advances, validation studies for LVO diagnostics are rapidly developing. Nonetheless, the investigators maintain that there is a “critical need for a clear definition of ‘ground truth’ against which algorithms are evaluated consistently.” They emphasise: “Having a consistent set of metrics is essential to improve AI in acute stroke care.”

Further speaking to NeuroNews on what the future holds, Murray speculates: “Patient CT scan imaging needs to be matched with gold standard data from MRI scans or conventional angiographic images, to detect true final stroke volumes and the presence of LVO. Ultimately, the same stroke image dataset should be evaluated by each AI software for a genuine comparison and reporting of accuracy, sensitivity, specificity and dice coefficient; however, this is limited by patient privacy and sharing of proprietary software.”
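Those four metrics are straightforward to compute once a common gold-standard labelling exists. The short Python sketch below, using made-up binary labels rather than any data from the review, shows one way to derive accuracy, sensitivity, specificity and the Dice coefficient from a prediction and a ground-truth mask.

```python
# Minimal sketch of the comparison metrics Murray lists, computed for a single
# binary detection/segmentation output against a gold-standard mask.
# The example arrays are illustrative; in practice these would be voxel-level
# or per-scan labels drawn from the shared evaluation dataset he describes.
import numpy as np

def evaluate(prediction, truth):
    prediction, truth = np.asarray(prediction, bool), np.asarray(truth, bool)
    tp = np.sum(prediction & truth)      # true positives
    tn = np.sum(~prediction & ~truth)    # true negatives
    fp = np.sum(prediction & ~truth)     # false positives
    fn = np.sum(~prediction & truth)     # false negatives
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

print(evaluate([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```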

AI software

Among the array of AI software platforms currently used in healthcare institutions, Murray and colleagues closely compared iSchemaView RAPID, Viz (Viz.ai) LVO and Viz CTP, and Brainomix (see Figure 1 for a full software comparison).

Across these platforms, the authors note that AI for LVO detection is at different stages of validation, and that certain features differ significantly from one product to another. Specifically, they highlight that the iSchemaView and Brainomix (indirect form) AI do not directly detect LVOs, but instead infer their presence from asymmetry in collateral blood vessel density. Brainomix (direct form) and Viz.ai, by contrast, offer direct LVO detection, although “only Viz.ai has reported validation metrics”, write the team. Moreover, AI for both direct LVO detection and automatic emergency LVO treatment system activation is available only through the Viz.ai platform.

“Viz.ai has been focused on combining the power of AI, mobile automation, and multidisciplinary team communication to synchronise stroke care, with the aim being democratisation of the quality of stroke care across rural and urban areas,” Manoj Ramachandran, co-founder of Viz.ai, tells NeuroNews. He adds: “We will be publishing more peer-reviewed journal articles, building on our previous data that in 95.5% of cases, Viz.ai saved time to stroke treatment compared to the standard of care, saving an average of 51.4 minutes and reducing the standard deviation from ±41.14 to ±5.95, meaning care becomes faster and more standardised.”

“We have placed security at the centre of our platform, investing heavily in testing our systems for weakness and ensuring maximum protection from possible patient data breaches. Having signed a major distribution deal with Medtronic, Viz.ai is now installed in 300+ hospitals in the USA and will enter the global stroke market outside the USA in the near future, having successfully closed our series B funding round recently.”

According to Ramachandran, AI and the modernisation of technology will aid the automation of processes that have historically been manual. “In order to truly synchronise care”, he says, “the focus needs to be on all aspects of the clinical workflow, allowing a rapid activation of geographically distributed, multidisciplinary teams [which will] lead to faster treatment and better outcomes”.

Figure 1: Comparison of acute stroke diagnostic and triage-capable software that incorporates AI for acute stroke imaging and emergency treatment system activation.
