Technological advances in healthcare continue to benefit humanity: from the advent of X-rays in the 19th century, to dialysis, CT scans, MRIs and other devices in the 20th, to a new range of digital tools today. Perhaps the most promising of these is artificial intelligence (AI), with its wide-ranging applications in predictive analytics, drug development, personalized medicine, and robot-assisted surgery.
Although the integration of AI into healthcare diagnosis and treatment holds enormous potential to revolutionize the field – improving patient outcomes, reducing costs and increasing overall efficiency – this exhilarating promise is not without peril. The deeper AI becomes integrated into healthcare, the greater the cybersecurity risk it creates. Indeed, AI is already transforming the threat landscape across the medical profession.
AI Risk Assessment
Although artificial intelligence is considered a disruptive force with unknown consequences, the International Association of Privacy Professionals estimates that more than half of AI governance approaches simply build on existing privacy programs, and that only 20% of established organizations have begun to formalize AI practices and guidelines. While the fundamental controls for the underlying IT systems that power these AI models remain entirely relevant and necessary, we must also recognize the new risks AI introduces, risks that potentially endanger the privacy and health of patients and the safety and reputation of medical institutions. The advent of AI requires us to develop new approaches to cybersecurity policies, strategies and tactics that build on our already well-established foundations. The status quo is important, but not sufficient.
With the technology still nascent, healthcare professionals must remain continually aware of AI-related behavioral risks, such as hallucinated outputs that could lead to incorrect diagnoses. AI systems are only as good as the quality and volume of their training data. To promote transparency of AI models and thorough testing, President Biden recently issued an executive order on safe, secure and trustworthy artificial intelligence. In addition to directing the Department of Health and Human Services to address unsafe healthcare practices and real-world harms involving AI, the order aims to establish national standards for rigorous red-team testing to ensure that AI systems are safe before their public release and use.
Traditional security measures are well positioned to handle AI-related threats from cybercriminals. Hospitals, for example, are increasingly the target of malware and ransomware attacks. Last August, Prospect Medical Holdings took its main computer network offline after an incident that affected 16 hospitals and more than 100 other medical facilities in the United States for nearly six weeks, an attack that exposed the private information of more than 24,000 workers. AI-assisted security models should provide a counterbalance to attackers' use of the technology to design better social engineering attacks, probe weaknesses in computer systems more effectively, and create malware that escapes detection mechanisms.
Many healthcare organizations rely on third-party vendors for their AI solutions. These vendors can unintentionally introduce vulnerabilities like those just described into healthcare systems, with far-reaching consequences. This third-party dynamic, which means less control for internal security teams, is nothing new: third parties have been the main source of vulnerabilities in the healthcare ecosystem for several years. But the added complexity of how vendors use AI, where the data goes, and what controls are in place over it makes an already complex problem even harder.
Implementation of Security Measures
Healthcare organizations, practiced at preventing and repelling attacks on the human body, must also recognize the need to harden their own systems by placing cybersecurity at the forefront of their overall AI integration strategies. These measures, designed to harness the benefits of AI while protecting data and patient safety, include:
- Multi-point defense: Guided by the need for redundancy, institutions should create and implement a cybersecurity strategy that considers incorporating defensive AI capabilities and includes several elements, such as firewalls, intrusion detection systems and advanced threat detection: a multi-pronged approach capable of detecting and mitigating threats at different levels.
- Data encryption and access control: Protecting sensitive data and restricting access to authorized personnel starts with strong encryption protocols. Robust access control mechanisms must be implemented to prevent inappropriate access to AI systems, underlying training models and infrastructure, and private patient records (a minimal sketch of both measures follows this list).
- Third-party vendor assessment: Due diligence is necessary to thoroughly review third-party vendors and their cybersecurity practices. At this stage of maturity in AI risk management, it is probably enough to know whether your third parties deploy AI models in their solutions and how your company’s data is used in those models. Deeper control requirements will come as standards bodies such as HITRUST and NIST build AI-specific control frameworks.
- Incident response plans: AI systems should be an essential part of any organization’s incident response plans, both to identify the unknowns that AI technologies could introduce into your standard DR/IR operations and to minimize downtime and data loss in the event of a cyberattack that uses AI capabilities or targets an AI system.
- Continuous security audits and updates: Perform periodic security audits of AI systems and the overall healthcare infrastructure to ensure your standard security controls are functioning and that updates are applied promptly (a simple audit check is sketched after this list).
- Employee training and awareness: Implement mandatory AI cybersecurity training for all healthcare staff, raising awareness of the privacy and data loss risks of “out-of-the-box” AI technologies and of advances in phishing techniques, deepfakes, and other deceptive practices used by AI-augmented cyberattackers.
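To make the encryption and access control item concrete, here is a minimal sketch in Python using the third-party `cryptography` package. Everything in it (the `PatientRecordStore` class, the role names, the record contents) is illustrative rather than a prescribed implementation; a production system would keep keys in a KMS or HSM and enforce roles through an identity provider.

```python
# Minimal sketch (illustrative names): encrypt patient records at rest and
# gate decryption behind a simple role check. Requires the third-party
# `cryptography` package: pip install cryptography
from cryptography.fernet import Fernet

AUTHORIZED_ROLES = {"physician", "nurse"}  # hypothetical role list


class PatientRecordStore:
    def __init__(self) -> None:
        # In production the key would come from a KMS/HSM, never be
        # generated and held in application memory like this.
        self._fernet = Fernet(Fernet.generate_key())
        self._records: dict[str, bytes] = {}

    def save(self, patient_id: str, record: str) -> None:
        # Encrypt at rest so a database leak alone does not expose PHI.
        self._records[patient_id] = self._fernet.encrypt(record.encode())

    def read(self, patient_id: str, role: str) -> str:
        # Access control gate: only authorized roles may decrypt.
        if role not in AUTHORIZED_ROLES:
            raise PermissionError(f"role {role!r} may not view records")
        return self._fernet.decrypt(self._records[patient_id]).decode()


store = PatientRecordStore()
store.save("mrn-001", "dx: hypertension; meds: lisinopril")
print(store.read("mrn-001", role="physician"))  # permitted
```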
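For the periodic audit item, a single automated check might look like the following standard-library Python sketch, which flags TLS certificates close to expiry. The host name is hypothetical, and a real audit would cover far more controls (patch levels, account reviews, AI model access logs) than this one check.

```python
# Minimal sketch (standard library only): one check a periodic audit might
# run, flagging TLS certificates that are close to expiry. The host below
# is hypothetical.
import socket
import ssl
import time

HOSTS = ["ehr.example-hospital.org"]  # hypothetical endpoints to audit
WARN_SECONDS = 30 * 24 * 3600  # flag certificates expiring within 30 days


def check_cert_expiry(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string like 'Jun  1 12:00:00 2025 GMT'.
    remaining = ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()
    status = "OK" if remaining > WARN_SECONDS else "RENEW SOON"
    print(f"{host}: {status} ({remaining / 86400:.0f} days left)")


for host in HOSTS:
    check_cert_expiry(host)
```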
AI can be a friend or foe to the healthcare sector, with the power to improve lives or to deepen the breach problems of an already struggling industry. By implementing robust security measures, educating staff and working with trusted vendors, the industry can move forward with both confidence and caution.
About Morgan Hague
Morgan Hague is the Head of IT Risk Management at Meditology Services, a leading provider of information risk management, cybersecurity, privacy and regulatory compliance consulting services exclusively for healthcare organizations.
About Britton Burton
Britton Burton is the Senior Director of TPRM Strategy at its sister company, CORL Technologies, a provider of tech-enabled managed services for vendor risk management and compliance.