Cybersecurity and AI in Medical Image Analysis

Jan 5, 2019 | AI Development

By Christian Severt and Ed Butler

Medical image analysis has become a very active area for the development of artificial intelligence (AI) applications. Rapid advances in the speed and accuracy of image classification, segmentation, object detection, and 3D rendering are being applied to a wide variety of scenarios in diagnostic and interventional radiology. Many thousands of developers around the world are already taking advantage of the widespread availability of free machine learning software, such as TensorFlow, the Microsoft Cognitive Toolkit, Torch, and others. Combined with affordable access to GPU hardware, building powerful software medical devices is increasingly commonplace. This is very promising for the future of medicine.

With the digitization of medicine come the inevitable risks associated with bad actors in the age of the internet. Cybersecurity data breaches are on the rise: there were about 688 breaches from 2005 to 2018, compromising over 22 million records. The average global probability of a breach in the next 24 months is 27.9%, with the average total cost of a data breach coming in at an astounding $3.86 million (sources: Identity Theft Resource Center report and 2018 Ponemon Report).

In the first three quarters of 2018, 8.7 million patient records were reported as breached, and the fourth-quarter 2018 total is expected to grow substantially. In November 2018, AccuDoc Solutions reported a breach of 2.65 million records belonging to Atrium Health, a 44-hospital network in North Carolina, South Carolina, and Georgia.

 

Cybersecurity Risks Inherent in “Deep Learning” Application Development

While there are many risks that can and should be explored for any software development group, here we discuss three areas that developers of medical image analysis AI must consider.

  • Protected Health Information

The phrase “data is the new source code” became very popular in 2018 to describe the crucial role training data sets play in machine learning development. In medical image applications, the management of “protected health information” (PHI) is a central cybersecurity concern because of the possibility of punitive fines in the USA and other countries when PHI is breached.

To mitigate the risk of a breach, most AI developers work with de-identified data. However, most of this data starts out as identifiable patient information, so the chain of custody from the data source, such as the PACS, through the de-identification process must be managed. While many PACS and Enterprise Imaging products offer automated de-identification, these tools may fail to catch all of the PHI, and sometimes go too far by removing demographic details necessary for clinical performance assessment. HIPAA privacy and security rules require the encryption of PHI in motion and at rest, and accountability for who has accessed each patient’s record. It is in the interest of AI development organizations to have a clear segregation of duties regarding access to PHI, to limit risk.
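Where automated tools are used, it can help to make the de-identification step explicit and reviewable in code. Below is a minimal sketch, assuming the pydicom library; the tag list is illustrative only and does not cover the full DICOM de-identification profile or burned-in pixel annotations, so it should not be taken as a complete solution.

```python
# Minimal de-identification sketch (assumes pydicom is installed).
# The tag list below is illustrative only -- it does NOT cover the full
# DICOM de-identification profile, and burned-in pixel PHI is not handled.
import pydicom

PHI_TAGS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
    "AccessionNumber",
]

def deidentify(in_path: str, out_path: str) -> None:
    """Blank common PHI tags and strip private tags before the file
    leaves the controlled environment."""
    ds = pydicom.dcmread(in_path)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""
    ds.remove_private_tags()  # private tags often carry identifiers
    ds.save_as(out_path)

if __name__ == "__main__":
    deidentify("study/slice_001.dcm", "deid/slice_001.dcm")
```

Even with a script like this in place, the output should be spot-checked by someone outside the development team before it is treated as de-identified.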

  • Protecting Validation Data

A common risk associated with the development of machine learning models for medical image analysis is the misuse of validation data. Here, the vulnerability is not outside malicious actors but well-intentioned algorithm developers who want to improve the clinical performance of their models through repeated use of validation data sets, or by including validation data in the training data sets. Cross-validation methods, such as 10-fold cross-validation, are very popular in model development: data from a larger data set is divided into subsets, with certain data held out from training in each fold. The FDA’s guidance on this has been in place for 10 years. It recommends, among other steps, that organizations tightly control access to validation data “to ensure that nobody outside of the regulatory assessment team, especially anyone associated with algorithm development, has access to the data”, and that validation data access is logged “to track each time the data is accessed including a record of who accessed the data, the test conditions and the summary performance results.”

https://www.fda.gov/RegulatoryInformation/Guidances/ucm187249.htm
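To make the separation concrete, here is a minimal sketch, assuming scikit-learn and placeholder data, of the pattern the guidance implies: the held-out validation set is split off once and never enters the cross-validation loop or any training run.

```python
# Sketch: keep a sequestered validation set out of all model development.
# Assumes scikit-learn; the classifier and data set are placeholders.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split once: the held-out set is locked away for the assessment team only.
X_dev, X_holdout, y_dev, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)

# All tuning and 10-fold cross-validation happen on the development split only.
cv_scores = cross_val_score(model, X_dev, y_dev, cv=10)
print("10-fold CV accuracy on development data:", cv_scores.mean())

# The held-out set is scored once, after the model is frozen.
model.fit(X_dev, y_dev)
print("Locked-model accuracy on held-out data:", model.score(X_holdout, y_holdout))
```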

The consequences of misusing validation data sets can be severe for organizations that do not control these practices. It can result in overfit models that appear to perform well but falter when exposed to other real-world data. Further, if validation data is not controlled, FDA audits can result in revocation of expensive and hard-fought approvals because the validation process was “adulterated”. Because of the severity of these risks, we recommend that AI development organizations treat validation data management as an internal cybersecurity risk and use appropriate technical and administrative controls to mitigate it.
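One example of such a technical control, sketched below with hypothetical file names and log fields, is to wrap every read of the validation data in an audit record capturing who accessed it, when, and under what test conditions, echoing the logging the FDA guidance describes.

```python
# Hypothetical audit-log wrapper for validation data access.
# File paths and field names are illustrative, not a standard.
import csv
import getpass
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "validation_access_log.csv"

def load_validation_data(path: str, test_conditions: str,
                         summary_results: str = "") -> bytes:
    """Read the validation file and append an audit record:
    who, when, which file (by hash), test conditions, and summary results."""
    with open(path, "rb") as f:
        data = f.read()
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "file": path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "test_conditions": test_conditions,
        "summary_results": summary_results,
    }
    with open(AUDIT_LOG, "a", newline="") as log:
        writer = csv.DictWriter(log, fieldnames=list(record.keys()))
        if log.tell() == 0:  # write a header only for a new log file
            writer.writeheader()
        writer.writerow(record)
    return data

if __name__ == "__main__":
    payload = load_validation_data(
        "validation/holdout_set.npz",
        test_conditions="frozen model v1.2, pre-submission run",
    )
```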

  • API Risks

Many of today’s medical image analysis applications are designed as microservices rather than complete, stand-alone systems. They depend on being activated via an application programming interface (API) by an external system, such as a PACS, Vendor Neutral Archive, or workflow-driving system. In 2018 we began to see the emergence of AI engines that allow healthcare practices to mix and match machine learning algorithms from multiple sources, deployed at run time into a calling PACS or other system, often from a cloud services provider. This takes place in seconds, from a live clinical system that, of course, uses PHI. It necessitates a risk analysis of how the data gets to and from the AI module, ensuring encryption and the necessary logging. Containers managed by technologies such as Docker share a command-and-control environment with other containers on the same host, and that control layer is an avenue through which cybersecurity attacks can enter the organization and compromise more than a specific AI model. Developers of medical image analysis microservices must anticipate and test for the vulnerabilities of different deployment platforms; healthcare delivery organizations running their own technology stacks can focus testing on their specific configurations.
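As an illustration of the service side of such an API, here is a minimal sketch assuming FastAPI; the endpoint path, size limit, and log fields are hypothetical, and TLS plus authentication are assumed to be enforced by the deployment platform rather than shown here.

```python
# Hypothetical inference microservice sketch (assumes FastAPI).
# TLS and authentication are expected to be enforced by the deployment
# platform (reverse proxy / service mesh); this sketch only shows input
# validation and PHI-free request logging.
import hashlib
import logging

from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()
log = logging.getLogger("inference-audit")
logging.basicConfig(level=logging.INFO)

MAX_BYTES = 50 * 1024 * 1024  # reject oversized payloads

@app.post("/v1/analyze")
async def analyze(image: UploadFile = File(...)):
    payload = await image.read()
    if len(payload) > MAX_BYTES:
        raise HTTPException(status_code=413, detail="payload too large")
    if not payload.startswith(b"DICM", 128):
        # DICOM files carry the "DICM" magic bytes at offset 128
        raise HTTPException(status_code=415, detail="not a DICOM file")

    # Log a content hash, never patient identifiers, so requests are
    # traceable without writing PHI into application logs.
    log.info("analyze request sha256=%s bytes=%d",
             hashlib.sha256(payload).hexdigest(), len(payload))

    # Placeholder for the actual model call.
    return {"finding": "placeholder", "confidence": 0.0}
```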

 

Conclusion

To guard against these risks, it may be helpful for AI development organizations, whether vendors or healthcare delivery organizations, to segregate duties between development and test teams. Practices such as the following can help:

  • “Fuzzing” to discover hidden weaknesses in code, e.g., entering an SQL command into a form (a simple sketch follows this list)
  • Attacking the code via denial of service, and exercising data input and output validation
  • Filtering data inputs and outputs just enough to get past edit checks, yet still enough to break things downstream
  • Vulnerability scanning by both credentialed and non-credentialed teams, such as a Red team/Blue team exercise in which one side attacks and the other defends
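As a simple illustration of the first item, the sketch below sends injection-style and malformed inputs to a hypothetical staging endpoint and flags unexpected server errors; the URL, form field, and payloads are illustrative only.

```python
# Toy fuzzing sketch: throw malformed and injection-style inputs at a
# hypothetical form-handling endpoint and flag unexpected server errors.
# The URL, field name, and payload list are illustrative only.
import requests

TARGET = "https://staging.example.internal/api/patient-search"

FUZZ_INPUTS = [
    "'; DROP TABLE patients; --",   # classic SQL injection string
    "<script>alert(1)</script>",    # script injection attempt
    "A" * 100_000,                  # oversized input
    "\x00\xff\xfe",                 # non-printable bytes
    "../../etc/passwd",             # path traversal attempt
]

def fuzz():
    for payload in FUZZ_INPUTS:
        try:
            resp = requests.post(TARGET, data={"name": payload}, timeout=5)
        except requests.RequestException as exc:
            print(f"connection-level failure for payload {payload[:20]!r}: {exc}")
            continue
        # A 5xx response suggests the input reached code that did not
        # validate it; record it for the development team to investigate.
        if resp.status_code >= 500:
            print(f"server error {resp.status_code} for payload {payload[:20]!r}")

if __name__ == "__main__":
    fuzz()
```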

Sometimes a culture change is required to realize the innovation possible with AI development within a structure that manages cybersecurity risks. Knowledgeable consultants who assess current practices and recommend changes can help manage this transition.

 

Christian Severt is the Information Security Leader at CuraCloud. He has extensive enterprise IT security experience from working with clients in major tech firms, government, health insurers, and startups.

Ed Butler has over 30 years in enterprise IT management including leadership and consulting roles in government, health tech vendors, medical practices, health insurers, and hospital systems.