Generative AI generates excitement and concern. Businesses are grappling with the impact of AI and seeking clearer, risk-mitigating regulation.
The big picture
In November 2022, a chatbot called ChatGPT was launched globally. In just over a year, its use has exploded.
To call ChatGPT simply a chatbot is to significantly underestimate its capabilities. It is, in fact, the best-known example of what is called generative artificial intelligence (AI). At a user’s request, generative AI can produce content such as text and images and can analyze complex data, often in just seconds. It is a technology that generates a great deal of both excitement and fear. Generative AI can also create synthetic data: information that appears “real” but is actually made up.
The other type of AI, called predictive AI, discovers patterns and trends that allow users to make predictions and design strategic actions. Both types of AI will certainly benefit, and even transform, many industries and professions in various ways. But for many businesses, AI’s potential impact might look more like disruption. Aware of the potential dangers, governments are working to regulate AI. The goal: to prevent AI from causing the kind of disruption that harms businesses, compromises data privacy, or undermines government programs. Every industry will need to stay aware of these technological advances, even if that means using AI to protect its own business.
Why this issue matters
It makes sense for virtually every business to prepare for everything AI can do, the challenges it might create, and how regulating AI might address those challenges.
For many AI developers, AI’s risks are not significant enough to require rigorous oversight. They argue that AI regulation could stifle innovation and produce vague or overly complex rules that fail to achieve their intended purpose, especially given the rapid pace of change.
Those who argue for AI regulation counter that, if left unmanaged, the impact of AI could be profoundly damaging. One risk is that AI could spread misinformation online by creating fake but seemingly real images and videos, as happened with the AI-generated images of Taylor Swift that circulated this month. Business and personal data could also be stolen more easily through convincing AI-generated emails and phone messages that exploit human error and evade monitoring. (Some content creators have filed lawsuits against AI developers, claiming that generative AI uses their words and images without credit or compensation.) Because of these risks, many companies likely to adopt the technology in their operations have been slow to embrace it fully. These companies are seeking clarity in AI regulation so they are not held liable for any misuse.
Efforts are underway to manage the societal impact of AI through regulation. In early December, the European Union reached agreement on its AI Act, which aims to ensure that governments manage the risks linked to AI. Two months earlier, President Biden issued an executive order intended to promote the development of AI while establishing guidelines for federal agencies to follow when designing, acquiring, deploying, and supervising AI systems. Among other goals, the order aims to establish testing standards to minimize AI risks to infrastructure and cybersecurity.
The White House is not the only federal entity exploring AI regulation. In July 2023, the United States Securities and Exchange Commission (SEC) proposed rules on the use of data analytics, including AI, by investment advisors. These rules would require investment firms to ensure that AI tools do not put the interests of the firm ahead of those of a client. SEC Chairman Gary Gensler has also expressed concern that the industry’s possible overreliance on a small number of AI providers could disrupt the U.S. financial system.
At a more local level, New York and other states have established, or are considering, their own AI regulations. That said, efforts to regulate AI are really just getting started, which makes sense given the uncertainty of its effects. Government regulators are seeking to protect the public and the economy and to build trust in AI without stifling innovative applications of an emerging technology.
Understanding the impact of AI
Of course, the investment advisory industry is by no means the only profession that will be affected by AI and how it is (or is not) regulated. To get an idea of how this technology could benefit and change businesses of all kinds, here are some notable examples.
The impact of AI on the tax profession
One of the most crucial roles tax professionals play is providing high-quality advice to their clients, including advice that protects them from cybercrime during tax season. AI can automate tedious tasks like data collection and analysis, allowing these professionals to focus on higher-level consulting work. It can also help them build new skills and deliver more value to their customers, whether internal or external. AI could transform many other aspects of the profession, including talent development and service capabilities, and could create opportunities for productivity gains and internal efficiencies, such as better and faster communications and customer service.
The impact of AI on the legal profession
For legal professionals, AI could open up new growth and service opportunities. Case in point: it could free up the time and capacity needed to identify new markets. Legal departments could be relieved of many repetitive tasks and devote more time and effort to supporting their company’s growth strategy. Indeed, many firms and departments are already putting AI to work.
President Biden’s October 2023 executive order could also create additional opportunities for legal professionals, who will be called on to advise and represent clients on AI-related legal issues, including compliance, liability, and intellectual property. Legal professionals will need to become familiar with AI not only as a legal issue but also as a drafting and research tool, and they will need to learn how to use it ethically and effectively.
The source of these ideas is the Future of Professionals report, published in August 2023 by Thomson Reuters, which surveyed more than 1,200 people around the world who work in legal, tax, global business, risk management, and compliance roles in professional firms, corporate in-house departments, and government agencies. The survey showed that 67% of respondents believe AI will have a significant impact on their profession over the next five years, and 66% believe that AI will create new career paths.
All the more reason for this powerful technology to be properly regulated, so that its many benefits outweigh its risks.