CHAI is a coalition of health systems, startups, governments, and patient advocates working to set standards for AI in healthcare.
Ethics in artificial intelligence matters in every field: AI involves large amounts of data that must be protected, raises issues of bias, and is prone to what many call “hallucinations,” when the AI simply makes things up. That makes AI ethics particularly important in healthcare, where the decisions made can literally be a matter of life and death.
The Coalition for Health AI (CHAI) is a private sector coalition committed to developing industry best practices and frameworks to address the need for independent validation to ensure quality, representativeness, and ethical practice in health AI. It brings together leaders and experts representing health systems, startups, government, and patient advocates. CHAI has created working groups focused on the privacy and security, fairness, transparency, usefulness, and safety of AI algorithms.
CHAI now has a new member: Nabla, an ambient AI assistant for clinicians, has announced that it is joining the organization.
Nabla’s product, Copilot, generates clinical notes in seconds: instead of having to enter notes during consultations, clinicians can rely on Nabla Copilot to transcribe the entire encounter and generate an accurate clinical note integrated directly into the EHR. This lets providers focus entirely on patient care and saves an average of two hours per day otherwise spent on documentation. Benefits include reduced cognitive load from administrative tasks and less stress and burnout. Copilot also automates the generation of patient instructions and referral letters.
In March, the company partnered with Children’s Hospital Los Angeles, and since then it has expanded to serve more than 85 provider organizations and support more than 45,000 clinicians across the United States. It has also signed new health system contracts with University of Iowa Health Care and Carle Health, and tripled its adoption rate, going from 3 million annual visits at the start of 2024 to 9 million visits per year.
Alex Lebrun, co-founder and CEO of Nabla, spoke with VatorNews about what’s missing when it comes to ethics in healthcare AI, his vision for what that should look like, and how CHAI will help achieve it.
VatorNews: Ethics in AI is a hot topic, especially in healthcare, where data must be secure and bias can literally be a matter of life and death. What do you think are the most important areas for ethics in healthcare AI?
Alex Lebrun: I think there are many ethical aspects to consider in AI in healthcare, and our healthcare partners have spoken directly to us about the importance of addressing these issues. Their feedback has been crucial in shaping our approach at Nabla, particularly in key areas such as bias, reliability and transparency, each of which has a direct impact on patient care and clinician trust.
Bias: Bias in AI can lead to inconsistent predictions that impact patient outcomes, potentially widening disparities in care. For example, to reduce linguistic bias, we trained our proprietary speech-to-text models on over 50 different accents, minimizing the impact of voice characteristics on documentation accuracy.
Reliability: Clinicians need reliable AI tools they can trust to keep patients safe. Our systems are designed to operate within clear, defined boundaries to reduce risk and maintain consistency. We’ve also implemented a proprietary framework that cross-references documentation with transcripts and patient context, ensuring every fact is fully supported and verifiable.
Transparency: Transparency is essential to foster trust. At Nabla, we openly share our governance practices and collaborate with renowned institutions (CHAI, AMIA, CHIME, and more) on industry standards to help build trust in our ambient AI assistant.
VN: Has enough been done so far to ensure that AI in healthcare is deployed responsibly? If not, what do you think can and should be done?
AL: Although AI in healthcare holds immense potential, there is still a long way to go to ensure its responsible deployment. Many healthcare organizations are proactively establishing their own AI governance standards, but only a small fraction have fully developed strategies addressing critical ethical issues such as bias and safety, and without a universal governance framework, many healthcare AI tools lack comprehensive ethical review. A 2024 McKinsey survey found that while 70% of healthcare organizations believe they are ready to integrate AI, only 30% have fully developed responsible AI strategies that address key ethical considerations. To close this gap, we believe healthcare organizations, developers, and policymakers must prioritize collaboration to establish clear, transparent, and standardized guidelines that can evolve alongside the technology.
VN: What is your vision for an ethical framework for AI? How do you think we can ensure accuracy and privacy?
AL: At Nabla, our vision for an ethical AI framework centers on transparency and trust with our community of clinicians. Our approach rests on three fundamental pillars: privacy, reliability, and security, reinforced by a blend of real-time model monitoring, clinician feedback, and security features built directly into our product.
To ensure documentation accuracy, we developed a proprietary framework in which each generated note is broken down into atomic facts, each of which is verified via an LLM query against the transcript and patient context. Only facts for which we find definitive evidence are considered valid. Additionally, each new version of the model undergoes rigorous review by professional medical scribes to confirm that the documentation is complete and meets industry standards.
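Nabla has not published its implementation, but the idea Lebrun describes maps onto a simple verification loop. Here is a minimal sketch in Python, in which the fact splitter, the prompt wording, and the `query_llm` client are all hypothetical names invented for illustration:

```python
# Illustrative sketch only: Nabla's real pipeline is proprietary, and
# split_into_atomic_facts, verify_note, and query_llm are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FactCheck:
    fact: str
    supported: bool

def split_into_atomic_facts(note: str) -> list[str]:
    # Placeholder splitter: a production system would use an LLM or a
    # clinical NLP parser; here each sentence naively counts as one fact.
    return [s.strip() for s in note.split(".") if s.strip()]

def verify_note(note: str, transcript: str, patient_context: str,
                query_llm: Callable[[str], str]) -> list[FactCheck]:
    """Check each atomic fact in the note against the transcript and
    patient context; only facts with definitive support pass."""
    checks = []
    for fact in split_into_atomic_facts(note):
        prompt = (
            "Answer YES only if the statement is definitively supported "
            "by the transcript or patient context; otherwise answer NO.\n"
            f"Statement: {fact}\n"
            f"Transcript: {transcript}\n"
            f"Patient context: {patient_context}"
        )
        verdict = query_llm(prompt)  # any injected chat-completion client
        checks.append(FactCheck(fact, verdict.strip().upper().startswith("YES")))
    return checks
```

In such a scheme, unsupported facts would be surfaced to the clinician for review rather than exported silently, which matches the safeguards Lebrun describes next.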
Additional safeguards include prompts for clinicians to review notes before exporting them and an intuitive feedback tool to flag any issues. We continuously monitor these edits and feedback, gaining actionable insights that improve the model’s accuracy and reliability.
Privacy is paramount at Nabla. We take a flexible, customer-focused approach to data storage, allowing health systems to define their own retention policies. The standard retention period is 14 days, customizable down to just seconds or extended if necessary. We never store encounter audio, and customer data is not used for model training by default. Feedback is anonymized in accordance with HIPAA standards, and health systems can provide specific data for model improvement, with full control over their information.
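One way to picture that policy surface (entirely hypothetical; Nabla’s configuration API is not public) is as a small per-customer settings object whose defaults mirror the guarantees above:

```python
# Hypothetical retention-policy object mirroring the defaults described
# above; the class and field names are invented for illustration.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RetentionPolicy:
    note_retention: timedelta = timedelta(days=14)  # standard default
    use_data_for_training: bool = False  # opt-in only, off by default
    anonymize_feedback: bool = True      # de-identified per HIPAA
    # Encounter audio is never persisted, so it has no knob here.

# A health system could tighten retention to near-ephemeral:
ephemeral = RetentionPolicy(note_retention=timedelta(seconds=30))
```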
VN: How did you first become involved in CHAI? What made you want to join the organization?
AL: We became involved in the Coalition for Health AI because we recognized the positive impact it has on the governance of AI in healthcare. We have been particularly impressed by the valuable work CHAI has already accomplished, such as developing the Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare and publishing the draft Responsible Health AI Framework.
At Nabla, transparency has always been at the heart of our approach. Drawing on our experience as machine learning researchers, we appreciate the complexity of these questions around governance and feel it is our responsibility to be part of the conversation. CHAI’s strong commitment to transparency and building trust, as well as its creation of quality assurance labs, closely aligns with our mission. Joining CHAI allows us to actively contribute to these essential discussions and work toward harmonizing AI standards across the industry.
VN: What do you hope to accomplish by being part of CHAI? What will you consider a victory?
AL: At Nabla, we have always put our community of clinicians at the center of our work, building our product around their feedback and needs. By joining CHAI, we hope to establish another valuable channel for staying connected to our ecosystem, allowing us to listen carefully to clinicians’ expectations, answer their questions, and foster greater transparency and trust in healthcare AI. Currently, AI governance in healthcare is fragmented, with many organizations developing their own frameworks. A win for the entire ecosystem would be a more unified set of standards across the industry, making it easier for clinicians to understand, evaluate, and choose AI solutions that prioritize clinician and patient safety.
VN: Is there anything else I should know?
AL: Nabla is poised to become a real-time, proactive AI assistant that helps doctors make decisions on the spot. With strong partnerships and user trust, we are ready to take this next big step. We are currently working on Active CDI (clinical documentation improvement) to give clinicians instant feedback during consultations, ensuring their documentation meets coding standards and reducing claim denials.
(Image source: chai.org)