Tony Lee, Chief Technology Officer at Hyperscience, looks at the crucial task of legislating AI, highlighting the need for a nuanced approach to fostering innovation while ensuring responsible use.
Nearly a year has passed since the public debut of OpenAI's ChatGPT, many people's first introduction to large-scale generative AI. The technology has generated enormous interest from citizens, organizations, and the federal government, especially when it comes to the regulation of AI. With 82% of Americans indicating they don't trust technology leaders to self-regulate AI, pressure has only increased for the federal government to get regulation right the first time.
Even though technology leaders such as Sam Altman and Elon Musk have traveled to Washington, DC, to discuss the responsible use of the technology at gatherings such as the AI Insight Forum, consensus on how to advance it has yet to be found. While AI at this scale is a relatively new concept, the legislative model the government should follow is not, and we must move toward regulation with existing approaches in mind.
Regulate the use case, not the technology
When discussing the overall approach to regulation, federal leaders should advocate for regulating AI based on use cases rather than implementing blanket, overarching legislation that could discourage its further development. Legislating the development of the technology itself would affect professional practitioners, members of academia, and citizens at large alike, delaying new research projects and potential technological improvements through added administrative red tape. AI development should not come at the expense of security, but striking that balance requires an approach that enables innovation while mitigating risk.
For example, a lab studying how healthcare professionals can leverage the technology to facilitate daily procedures and tasks warrants stricter regulation, as its work can cause significant harm if not developed ethically. Consumer marketers exploring ways to fold AI into their marketing campaigns, on the other hand, should not face the same scrutiny as use cases in the healthcare industry. Holding both parties to the same development standards is counterproductive: requiring consumer marketers to meet healthcare-grade requirements would strip away the time savings AI can bring to lower-risk industries.
See more: Upcoming Privacy Legislation: Can Big Tech Win Back
Align with a set of standards
When regulating by use case, the federal government must have a set of standard ethical principles to which use cases must adhere before further development. This starts with developing a mandatory regulatory framework specific to the responsible use of AI.
The most critical standard to establish involves monitoring AI systems: ensuring there is always a human knowledge worker overseeing the system and its outcomes. To build public trust in AI practices, citizens must have confidence that a human is always verifying the validity and ethics of machine-generated results. Additionally, the framework should cover other aspects of AI use, such as data privacy, bias mitigation, algorithmic transparency, and security standards. Knowing that citizens care about topics such as bias mitigation, and with 41% of AI experts at U.S. universities expressing greater concern about AI-related discrimination and bias than about issues like mass unemployment (22%), the government must ensure this essential public concern is sufficiently taken into account. By spelling out these areas of use and standards such as bias mitigation, the government can reassure end users that the solutions they rely on meet regulatory standards.
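To make the oversight principle concrete, here is a minimal sketch in Python of how a human-in-the-loop gate might sit between a model's output and any downstream action. The names here (`generate_draft`, `request_human_review`) are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass


@dataclass
class ReviewDecision:
    approved: bool
    reviewer: str
    notes: str = ""


def generate_draft(prompt: str) -> str:
    """Stand-in for a call to any generative AI system (hypothetical)."""
    return f"[model output for: {prompt}]"


def request_human_review(output: str, reviewer: str) -> ReviewDecision:
    """A human knowledge worker inspects the output before it is used.

    In a real system this would route to a review queue or UI rather
    than a console prompt.
    """
    print(f"Review requested from {reviewer}:\n{output}")
    verdict = input("Approve this output? [y/N] ").strip().lower()
    return ReviewDecision(approved=(verdict == "y"), reviewer=reviewer)


def run_with_oversight(prompt: str, reviewer: str) -> str | None:
    draft = generate_draft(prompt)
    decision = request_human_review(draft, reviewer)
    if not decision.approved:
        # Nothing leaves the system without explicit human sign-off.
        return None
    return draft
```

The point of the sketch is the control flow, not the specifics: no machine-generated result reaches an end user without a recorded human decision attached to it.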
The framework should also address legal requirements and the potential consequences organizations would face should they choose not to comply. This is where use-case regulation comes in, as some industries, like healthcare and finance, may need to adhere to stricter compliance rules than others, like entertainment. While ensuring responsible use is crucial, the government must strike the right balance and not veer toward overly burdensome rules.
See more: Is responsible AI a technological issue or a business issue?
In the footsteps of FedRAMP
Today, 53% of Americans believe that AI does more harm than good when it comes to keeping personal information private, making it crucial for the government to address security and privacy concerns effectively. The federal government should consider implementing a certification process like FedRAMP, which requires a standardized approach to security and risk assessment for cloud services across the federal government, thereby reassuring end users that the platforms they use are secure.
Like FedRAMP, a standardized approach to AI security would include distinct certifications for specific standards. These new processes would cover areas such as data privacy and training data, and would require annual re-evaluation of each audit. By requiring certifications across different verticals, the federal government can ensure that organizations approach the use of AI ethically, regardless of their industry.
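As an illustration only, the vertical-specific certification idea could be expressed as data. The sketch below uses hypothetical certification areas and industry names, not anything drawn from FedRAMP or a proposed rule; it pairs each vertical with required certification areas and flags audits that have lapsed past their annual evaluation:

```python
from datetime import date, timedelta

# Hypothetical certification areas per vertical; stricter
# industries carry longer checklists.
REQUIRED_CERTIFICATIONS = {
    "healthcare": ["data_privacy", "bias_mitigation", "training_data", "security"],
    "finance": ["data_privacy", "algorithmic_transparency", "security"],
    "entertainment": ["data_privacy"],
}

ANNUAL = timedelta(days=365)


def overdue_audits(vertical: str, last_audited: dict[str, date],
                   today: date | None = None) -> list[str]:
    """Return certification areas that are missing or past their annual evaluation."""
    today = today or date.today()
    overdue = []
    for area in REQUIRED_CERTIFICATIONS.get(vertical, []):
        audited_on = last_audited.get(area)
        if audited_on is None or today - audited_on > ANNUAL:
            overdue.append(area)
    return overdue


# Example: a healthcare deployment whose bias-mitigation audit has lapsed.
print(overdue_audits("healthcare", {
    "data_privacy": date(2023, 9, 1),
    "bias_mitigation": date(2022, 5, 1),
    "training_data": date(2023, 8, 15),
    "security": date(2023, 7, 1),
}))
```

A table like this is trivial for a regulator to publish and for an organization to audit against, which is part of the appeal of per-vertical certification over one-size-fits-all rules.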
Ethical AI in the absence of legislation
As investments and developments in AI continue to flow, there is no guarantee that legislation will keep pace with progress. Creating meaningful regulation takes time and thought, and organizations must take it upon themselves to self-regulate and promote safe use in the meantime.
Creating an AI ethics committee is a way for organizations to ensure the technology is used safely, protecting the business and end users. Until deliberate, thoughtful legislation is passed, industry leaders must continue to advocate for ethical AI and work with the federal government, creating a clear path to responsible use.
How can AI regulation promote innovation? Why is alignment with ethical standards vital? Share your thoughts with us on Facebook, X, and LinkedIn. We would love to hear from you!
Image source: Shutterstock