In late October, President Biden signed a new Executive Order – titled “Safe, Secure and Trustworthy Artificial Intelligence” – which promises to introduce new national AI regulations focused on safety and accountability in the use of this revolutionary technology. On the heels of this high-profile EO, the Biden administration has already begun the process of writing real standards for the safe use of generative AI. In late December, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced a February 2, 2024 deadline for public comments on federal guidelines for testing and protecting AI systems.
This represents considerable interest in AI on the part of the federal government. The question is: what does this mean for healthcare?
New standards, for starters. By explicitly including health care in this EO, the Biden administration is signaling very clearly that health systems should expect new safety, security, and equity standards for AI very soon.
More broadly, Biden’s EO reflects a transformative moment for the entire healthcare industry. The use of AI has brought us to a before-and-after moment. We have now entered the era of “ethical AI” – a period in health technology where our use of AI must be matched by our commitment to patient care and by compliance with new standards set by government. Our ability to merge these influences will determine the extent to which the healthcare sector benefits from the revolutionary potential of AI.
Privacy and cybersecurity are two areas of concern cited by healthcare executives when discussing AI. Is there an approach consistent with the EO’s mission to improve privacy while enabling clinical effectiveness? Can we protect patients’ most valuable information – such as electronic personal health information (ePHI), which is a prime target for cybercriminals – while still providing superior care?
Let’s take a closer look to find out.
The EO and NIST
To fully understand the origins of President Biden’s EO, it is important to understand the NIST AI Risk Management Framework (AI RMF), published as NIST AI 100-1. As stipulated in the National Artificial Intelligence Initiative Act of 2020 (PL 116-283), the RMF is intended to serve as a resource for organizations deploying AI systems, helping them manage the many risks of AI and promoting reliable and responsible development and use of AI systems.
For healthcare, the EO and the RMF together provide a useful dual framework for improving privacy when using AI. The standards outlined in NIST AI 100-1 provide guidance for achieving visibility into the lifecycle of ePHI. These steps include meticulously tracking where it resides, understanding how it is transmitted, and keeping a detailed log of access details. Both the EO and NIST encourage state-of-the-art encryption methods and secure data transmission to protect patient information. This means that healthcare leaders should put privacy at the forefront of our AI solutions.
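To make the access-logging step above concrete, here is a minimal sketch of an ePHI audit-log entry in Python. This is an illustrative assumption, not a prescribed NIST or EO implementation: the field names, the `SECRET_KEY`, and the helper functions are all hypothetical, and a real deployment would use managed key storage and an append-only store.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative key only; in practice this would come from a secrets manager
# and be rotated on a schedule.
SECRET_KEY = b"rotate-me-in-a-real-deployment"

def pseudonymize(patient_id: str) -> str:
    """Keyed hash so raw patient identifiers never appear in the log."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def log_ephi_access(log: list, actor: str, patient_id: str,
                    action: str, location: str) -> dict:
    """Append one structured entry recording who touched which record, where."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "patient": pseudonymize(patient_id),
        "action": action,      # e.g. "read", "transmit"
        "location": location,  # where the ePHI resides or was sent
    }
    log.append(entry)
    return entry

audit_log: list = []
entry = log_ephi_access(audit_log, "dr_smith", "MRN-12345", "read", "ehr-primary")
print(json.dumps(entry, indent=2))
```

The point of the sketch is the structure, not the mechanics: each access is tied to an actor, an action, and a location, which is the visibility the NIST guidance asks for, while the keyed hash keeps the raw identifier out of the log itself.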
A model for executive assurance
To confidently navigate the complexities of AI integration, healthcare leaders can use the EO and NIST frameworks to review AI tools, helping them ensure compliance while delivering reliable performance. Here are the features to look for in AI solutions designed to promote privacy and cyber protection:
- Valid and reliable: AI solutions should ensure the validity and reliability of healthcare information, providing leaders with a foundation of confidence in decision-making.
- Safe: By prioritizing patient safety, AI applications must minimize risks and ensure that safety protocols evolve with the dynamic healthcare landscape.
- Secure and resilient: Robust security measures safeguard health data, ensuring its integrity and confidentiality. AI systems must adapt to emerging threats, building resilience and securing data transmission within health systems.
- Accountable and transparent: AI models should be designed for executive understanding, providing detailed logs of ePHI access for transparency and accountability.
- Explainable and interpretable: Recognizing the need for interpretability, AI models must be deliberately designed to be explainable, allowing leaders to confidently interpret AI-generated insights.
- Privacy-enhanced: The EO’s commitment to privacy requires visibility into ePHI, actively managed access, and transmission in accordance with NIST AI 100-1 standards.
- Fair: By actively combating bias, AI models should promote fair treatment among diverse patient populations, thereby promoting fairness and inclusiveness in healthcare.
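The seven criteria above can be sketched as a simple review checklist a governance team might apply to a candidate AI tool. This is a hypothetical illustration: the field names and the all-or-nothing approval policy are assumptions of this sketch, not a scoring method defined by NIST or the EO.

```python
from dataclasses import dataclass, fields

@dataclass
class AIToolReview:
    """One row of a hypothetical vendor-review checklist, with a boolean
    per trustworthiness criterion from the list above."""
    valid_and_reliable: bool = False
    safe: bool = False
    secure_and_resilient: bool = False
    accountable_and_transparent: bool = False
    explainable_and_interpretable: bool = False
    privacy_enhanced: bool = False
    fair: bool = False

    def gaps(self) -> list:
        """Criteria the candidate tool has not yet demonstrated."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def approved(self) -> bool:
        """A conservative policy: every criterion must be satisfied."""
        return not self.gaps()

review = AIToolReview(valid_and_reliable=True, safe=True, privacy_enhanced=True)
print(review.approved())  # a tool missing criteria is not approved
print(review.gaps())      # the criteria still to be demonstrated
```

Recording the gaps, rather than a single pass/fail flag, gives leadership a concrete remediation list to bring back to the vendor.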
Improving executive confidence with ethical AI
Health systems are creating strategic advantage by deploying AI tools built for adaptability and compliance. One important area where AI can be leveraged for adaptability is privacy-enhancing solutions, where AI can adapt to evolving regulations and to changes in how patient data is used. This puts healthcare leaders in a position where compliance becomes not just a box to check, but an ongoing commitment to adapting to regulatory changes and to new ways of using their most critical healthcare data.
To address Biden’s EO and the need for privacy, healthcare leaders must navigate the intersection of healthcare and AI head-on. By deploying the right AI systems, these organizations can gain unparalleled visibility into ePHI, secure data transmission, and adhere to NIST AI 100-1 standards. This positions them to lead with confidence in a digital era where technology aligns with the principles of patient-centered care and regulatory excellence.
About David Ting
David Ting, Tausight Founder and CTO, was the co-founder and former CTO of Imprivata and former member of the U.S. Department of Health and Human Services’ Healthcare Industry Cybersecurity Task Force. David has over twenty years of experience developing identity and security solutions for government and enterprise environments. David holds twenty-two U.S. patents, with more pending.
At Imprivata, Ting developed the technology behind the OneSign solution widely used in the healthcare industry. He oversaw Imprivata’s evolution from a venture-backed startup to a public company and its subsequent private acquisition in 2016. In 2016, he was appointed by the U.S. Department of Health and Human Services to the Health Care Industry Cybersecurity Task Force, authorized under the Cybersecurity Information Sharing Act of 2015. Ting helped write the recommendations for securing healthcare in the Task Force report submitted to Congress in 2017.