Lifting the veil on “one of the most important challenges”
Earlier this year, the Australian government published what it called an “interim response” to the challenges of artificial intelligence (AI). The response includes eight AI ethics principles designed to keep the technology safe, secure and trustworthy while stricter regulation is considered.
The insurance industry is a major investor in AI. By some estimates, the value of AI in the insurance industry will reach over $45 billion globally by 2031.
However, there are serious concerns about the use of AI. “AI governance is one of the most important challenges of our time,” says UNESCO, the United Nations’ educational, scientific and cultural agency, on its website.
AI is one of the key topics at the Women in Insurance Summit Australia in Sydney. A roundtable will explore successful use cases of AI across insurance disciplines and provide insight into the ethical issues involved.
AI in legal practice and insurance
“The legal and insurance sectors are two of the top five sectors investing in AI in Australia, and this is largely driven by the recognition of the incredible potential that AI brings to both professions,” said Jehan Mata, partner at Sparke Helmore Lawyers. She specialises in professional indemnity and personal injury claims and leads her firm’s cyber insurance practice.
Sparke Helmore is a Gold Sponsor of the Summit.
Mata said generative AI has the potential to make claims and case processing much more efficient.
“Indeed, it is in this efficiency that some see the threat – although at Sparke Helmore we see it more as an opportunity for improvement,” she said.
From an ethical standpoint, Mata said an AI tool ingesting client data should be held to the same ethical standards as a lawyer on issues such as confidentiality and client privilege.
“This requires significant security measures to ensure data is properly compartmentalised and secured,” she said. “More broadly, with the prevalence of cyberattacks in Australia, many AI tools are simply too risky to use with sensitive data.”
Keep AI tools secure and in-house
Mata said it was “absolutely imperative” that confidential information never be shared with public tools such as ChatGPT or Bard. All AI tools, she said, must be installed on secure, on-premises servers, and all staff must be fully trained in their use.
“It’s a big ask, for sure, but if it means overcoming the threats posed by AI and opening up opportunities for our customers, we think it’s definitely worth it,” Mata said.
The lawyer said her firm is exploring “targeted” use cases for AI in certain high-volume industries. Mata said these pilot programs “absolutely ensure data security and integrity” and are also seen as a way to instill a culture of innovation.
She said it is important to “do the homework right” before rolling out these platforms publicly in the highly regulated legal and insurance sectors.
Reversing the insurance model
Suzi Leung, sales director at Hollard Insurance and chair of the Summit, is optimistic about the potential of AI to help insurers’ customers.
Insurance CEOs support tighter regulation
Insurance CEOs see the ethical issues around AI and the current lack of regulation as their biggest challenges, a KPMG report reveals.

The eight AI ethics principles
1. Human, societal and environmental well-being
AI systems should benefit individuals, society and the environment.
2. Human-centred values
AI systems must respect human rights and diversity.
3. Fairness
AI systems must be inclusive and accessible and must not discriminate against individuals or groups.
4. Privacy protection and security
AI systems must respect and uphold privacy rights and data protection.
5. Reliability and safety
AI systems must operate reliably, in accordance with their intended purpose.
6. Transparency and explainability
There should be transparency and responsible disclosure so people can understand when they are significantly impacted by AI.
7. Contestability
When an AI system has a significant impact on a person, group or environment, there must be a timely process for people to challenge its use.
8. Accountability
The people responsible for the different phases of the AI system life cycle must be identifiable and accountable.