Regulating Radiology AI
Ed Butler, VP of Corporate Development
In April 2019, the U.S. Food and Drug Administration published a discussion paper on a Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). This came out just as we were publishing the CuraCloud white paper on Collaborative Medical AI Development, which also addresses the implications of advanced machine learning development processes. Next month, on February 25 and 26, the FDA will hold a public workshop in Washington, D.C. on the Evolving Role of Artificial Intelligence in Radiological Imaging. As we consider updating our white paper, we are monitoring regulatory developments around the world along with improvements in development processes.
It’s too late now to register to attend in person (I tried), but it is not too late to register to view the webcast. I am looking forward to hearing the industry stakeholders and FDA officials discuss the risks and benefits of AI applications in radiology. These are issues that will eventually govern the quality, speed, and effectiveness of AI-assisted solutions available to patients, clinicians, and those who pay for healthcare.
Here are some of the issues I will be watching carefully during the FDA workshop. Each requires a balancing act, with difficult tradeoffs in which there will be winners and losers. In this article I am not recommending policy, but I do believe that good policy can only come from asking the right questions. These are some of mine.
Policies around medical data ownership and access
I have heard policy debates around the ownership of and access to medical data since the early 1990s. This is an area where the public holds conflicting goals. The first is the need to make rapid progress in medicine to save lives and reduce suffering; the second is the idea that personal data should be private and controlled by each individual. Medical scientists and software developers need access to high-quality clinical data, especially now that machine learning models can be built with this data to accomplish tasks not previously possible. The emergence of big data resources resulting from digitizing most commerce, including healthcare, has led to concerns about how that data is used and compensated. Not least is individuals' concern about their rights to privacy (including, in the EU, the "right to be forgotten"). Access to data should not be confused with ownership of data. Patients in the U.S. have the right (via HIPAA and the 21st Century Cures Act) to obtain copies of their medical records. It is also understandable that healthcare delivery systems need clinical documentation not only to serve patients but also to get paid, to defend themselves, and to improve their operations. These are not simple issues, and solutions go well beyond the FDA's remit, but the future of radiology AI will be gated by the policy response.
Public fear of AI
Not all arguments are rational. One of my mentors, Patricia Godbout, was fond of saying "it's not about reason and logic." A second issue fueling increased interest in SaMD regulatory policy is the public fear generated by the entertainment industry's creative use of science fiction to paint a picture of a dystopian future ruled by machines and other malevolent forces using AI. From the Terminator series to the Matrix trilogy, the fear of "AI" has become a marketable plot device. With rapid advances in areas such as autonomous vehicles, facial recognition, and voice response systems, what was once science fiction has become the new everyday reality sooner than many thought possible. This fear of AI inevitably bleeds over into public policy. Science fiction notions such as the "singularity," at which humans and machines merge into a new species, are as real to some people as demons and angels are to others. Sales professionals often rely on tapping into their customers' motivating emotions of fear, uncertainty, and doubt (FUD). AI in radiology has disruptive potential that may impact multiple stakeholders differently: one person's revenue is another person's cost. It will be fascinating to watch for the FUD factor in the upcoming workshop.
Support for regulatory reform
A third dynamic that is evident is the recognition—even within the FDA—that the regulatory process inherited from prior decades needs to be reformed. The FDA's "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)" sympathizes with critiques of existing regulatory processes even as it acknowledges that making dramatic changes may require new legislation.
In an attempt to improve the current process, the FDA has proposed the Digital Health Software Precertification (Pre-Cert) Program. The Pre-Cert program intends to create a more streamlined regulatory process where manufacturers who have demonstrated a culture of quality can commit to monitoring the real-world performance of their products in the U.S. market.
It is noteworthy that the FDA and other regulatory agencies around the world recognize that the public will benefit from modernizing the process. However, as much as the industry may want self-certification, the recent tragedies associated with Boeing's 737 MAX 8 should also be considered. Commercial aircraft are regulated by an entirely different federal agency, the Federal Aviation Administration (FAA), but a comparison can be made to the FAA's program allowing certain aircraft manufacturers to self-certify. The FDA Pre-Cert program would allow AI developers to deploy new SaMDs that have passed internal review within their own Quality Systems, deferring agency attention to post-market surveillance for pre-certified entities. During the workshop, it will be informative to hear whether anyone questions the parallels between these self-certification programs. In Boeing's case, recently disclosed emails from Boeing employees reveal the disturbing consequences of perceived business imperatives overshadowing quality concerns. Even though this example comes from a completely different domain—aircraft design, components, and pilot training—the case for regulatory reform of AI in radiology SaMDs can be informed by knowledge of the unintended consequences of the FAA's program. Murphy's Law, "that which can go wrong will go wrong," has not been repealed.
The FDA is inviting comments on this workshop until mid-March 2020. A robust discussion from multiple perspectives will help the agency balance these difficult questions. To learn more about the FDA comment process, visit the FDA Public Workshop – Evolving Role of Artificial Intelligence in Radiological Imaging page.