Artificial intelligence holds vast potential for wealth management and the broader financial sector, but the technology also carries ethical risks for financial advisors and other professionals, an expert has said.
AI raises questions about transparent disclosure of its use, the competence and accuracy of a nascent technology that is still prone to errors, the privacy of customers’ personal data and the problem of racial bias, according to a presentation by Azish Filabi, an associate professor of business ethics who is the Cary M. Maguire executive director at the American College of Financial Services.
Filabi spoke before
“When it comes to AI and human interaction, you’re always in charge, so relying on the authority of a machine could lead to problems in the long run. I think it’s important to keep that in mind, even if it seems under control and even as AI becomes smarter, faster and more precise,” Filabi said. “The best way to address the ethical risks of AI is, in my opinion, to actually engage with AI. There is a lot of fear around the technology, but I think the more we engage with it, the more we understand it.”
Government agencies are starting – slowly but surely – to release new proposals and guidelines around AI, Filabi noted. Examples so far include
Aside from compliance issues, the technology has a direct business impact on wealth management and other areas of finance. Filabi shared a chart showing the extent to which respondents expressed trust in financial companies compared with other sources such as social media posts or consumer advocacy groups.
“Individuals who have less trust are more likely to turn to these informal sources of information,” she said. “My takeaway from this information is that consumer confidence will shape how competition is structured in the industry in the years to come.”
As the wealth management industry and other financial companies begin to use AI tools in their businesses while working to earn that trust, the ethical concerns associated with those tools are also mounting, according to Filabi. For example, data sources that carry historical biases favoring one race or group over another – such as
In another case, the widespread use of software that calculates the probability of a range of outcomes, known as “
“The challenge here is that the smoother, more precise and faster technologies become, the more likely we are to defer to them,” she said. “When we think about the information we get from software that sometimes seems like magic, how can we actually interpret that information and put it in the context of the normal day-to-day work we do? And finally, as we started to think, ‘Well, what would regulatory accountability look like?’ That’s really our challenge, because we have so many different touchpoints in a system like this. You might have data inputs based on faulty assumptions or simple errors. You might have software, an algorithm, that doesn’t work as expected. The broker-dealer, who is responsible for the process, and the advisor may be working from a set of assumptions that are not necessarily in the best interest of the clients.”
To try to help industry professionals address these challenges, Filabi has put together a list of questions they might want to consider when implementing AI. Advisors and wealth management firms should ask themselves what their specific policies say about the use of the technology, whether they will tell clients they are using it in one area or another, how much due diligence they should perform, and how to know whether the tools display forms of bias.
As these topics relate to the use of ChatGPT, the industry should, at the very least, refrain from entering private client data into publicly available tools, Filabi noted.
“I think it’s important to remember that these tools are still works in progress, so you can’t fully rely on their accuracy or on the quality of the information you’re getting from them,” she said. “Many of you are leaders within your company, so think about how you want to implement guidelines as a best practice for consistent use of ChatGPT. I think now is the time to start thinking about this.”