RapidAI for Stroke Detection

Project Status:
Active
Project Line:
Health Technology Review
Project Sub Line:
Optimal Use
Project Number:
OP0556-000
Expected finish date:

Digital health technologies (DHTs), including artificial intelligence (AI)–enabled medical devices, are advancing rapidly and generating considerable hope and interest. While DHTs promise to improve various outcomes, they pose new and unique challenges to evaluation and implementation, with AI potentially posing all of the challenges characteristic of other classes of DHTs and more. Comprehensive AI assessment within health technology assessment is therefore essential to ensure that DHTs balance benefits and harms and are interoperable and equitably accessible for people living in Canada. This project aims to review a specific AI technology, RapidAI, for stroke detection, as well as implementation considerations associated with AI-enabled medical devices.

Key Message

What Is the Issue?

  • Stroke is a sudden loss of neurologic function caused by poor or interrupted blood flow within the brain. It is 1 of the leading causes of death and a major cause of disability in Canada. For patients with suspected stroke, prompt evaluation using CT imaging and other tests can help to determine the type of stroke, to assess the severity of damage, and to guide treatment decisions.
  • RapidAI is an artificial intelligence (AI)–enabled software platform that facilitates the viewing, processing, and analysis of CT images to aid clinicians in assessing patients with suspected stroke. Understanding the potential benefits and harms of using RapidAI is important to clarify its role in stroke detection.

What Did We Do?

  • We sought to identify, synthesize, and critically appraise literature evaluating the effectiveness, accuracy, and cost-effectiveness of RapidAI for detecting large-vessel occlusion (LVO) (i.e., ischemic stroke) and intracranial hemorrhage (ICH) (i.e., hemorrhagic stroke).
  • We searched key resources, including journal citation databases, and conducted a focused internet search for relevant evidence published up to July 22, 2024. We screened citations for inclusion based on predefined criteria, critically appraised the included studies, narratively summarized the findings, and assessed the certainty of evidence. Our methods were guided by the Scottish Health Technologies Group’s health technology assessment (HTA) framework.
  • We highlighted and reflected on the ethical and equity implications of using RapidAI for stroke detection identified in the clinical literature, integrating these considerations throughout the review.
  • We engaged a patient contributor who had experienced a hemorrhagic stroke, to learn about her experience, perspectives, and priorities. Additionally, we incorporated feedback from clinical and ethics experts, the manufacturer, and other interested parties.

What Did We Find?

  • We found 2 cohort studies and 11 diagnostic accuracy studies that assessed the effectiveness and accuracy of RapidAI for detecting stroke. Among these, 3 studies evaluated RapidAI as it is intended to be used in clinical practice (i.e., to complement clinician interpretation of CT images), while the remaining 10 studies assessed RapidAI as a standalone intervention.
  • The patient contributor identified important outcomes for stroke care, including improving the speed and accuracy of diagnosis, minimizing the damaging effects of stroke, and reducing mortality rates. She also highlighted ethical considerations regarding the use of AI in health care, such as protecting data privacy, ensuring equitable access, and informing patients about the use of AI technologies in the care pathway.
  • Low-certainty evidence suggests that evaluation of CT angiography images by Rapid LVO combined with clinician interpretation, compared to clinician interpretation alone, may result in clinically important reductions in radiology-report turnaround time in patients with suspected stroke. For detecting ICH, low-certainty evidence suggests that Rapid ICH combined with clinician interpretation, using clinician interpretation as a reference standard, has a sensitivity of 92% (95% confidence interval [CI], 78% to 98%) and a specificity of 100% (95% CI, 98% to 100%). However, estimates of sensitivity and specificity for detecting LVO varied, based on studies using different modules of RapidAI as a standalone intervention, providing only indirect accuracy data.
  • The effects of RapidAI on other time-to-intervention metrics, measures of physical and cognitive function, and response to therapy (e.g., reperfusion rates) were very uncertain. We did not identify any evidence on the effects of RapidAI on many important clinical outcomes, including patient harms, mortality, health-related quality of life, length of hospital stay, or health care resource implications.
  • We did not find any studies on the cost-effectiveness of RapidAI for detecting stroke that met our selection criteria for this review.
  • Ethical and equity considerations related to patient autonomy, privacy, transparency, access, and algorithmic bias have implications across the technology life cycle when using RapidAI for detecting stroke.

What Does This Mean?

  • RapidAI has the potential to improve acute stroke care by creating efficiencies in the diagnostic process. However, the impact of RapidAI on many outcomes, including those that are important to patients, is uncertain due to limitations of the available evidence.
  • To improve the certainty of findings, there is a need for evidence from robustly conducted studies at lower risk of bias that enrol diverse patient populations, measure outcomes that are important to patients, and report findings more completely.
  • The cost-effectiveness of RapidAI for stroke detection is currently unknown.
  • In addition to the evidence on the effectiveness and accuracy of RapidAI for detecting stroke, decision-makers may wish to reflect on the ethical and equity considerations that arise during the deployment of AI-enabled technologies, such as the autonomy, privacy, transparency, and explainability of machine-learning models, and the need to address equity and access in their design, development, and deployment.