Integrating artificial intelligence into their products and services can help technology companies create tailored solutions with enhanced capabilities. However, AI can have serious downsides, including bias and user privacy concerns, if teams don’t follow responsible development practices.
It is crucial for technology companies to prioritize user well-being and ethical considerations when creating or operating AI systems. Below, 20 members of Forbes Technology Council share strategies for creating AI solutions that empower users while respecting their privacy and values.
1. Adopt a “responsible AI” framework
In the rapidly evolving landscape of AI-based products and services, one practical strategy that is paramount is the adoption of a “responsible AI” framework. This approach emphasizes prioritizing user well-being and ethical considerations from the outset, ensuring that these critical aspects are not afterthoughts but fundamental elements of the design and development process. – Josh Scriven, Neudesic
2. Consider using specialized LLM models
Although large language models are extremely capable and have now almost become synonymous with AI, the fact that they are trained on huge amounts of data makes their behavior less predictable. Depending on the product or service in question, it may make more sense to use specialized models trained on much smaller datasets and with more predictable behavior. – Avi Shua, Orca Safety
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Am I eligible?
3. Leverage AI code generation tools to review developers’ code
Application security is currently heading into a head-on collision with the rapid rise of AI, particularly generative AI. By collaborating with LLMs like ChatGPT, we enable developers to securely leverage AI code generation tools to review their own generated code. This proactive approach helps identify potential vulnerabilities, especially in code from open source materials. – Sandeep Johri, Checkmarx
4. Start with user-centered design and XAI principles
Some important thoughts: First, incorporate user-centered design principles and ethical considerations from the very beginning of the development process, not after the fact. Second, when developing products, use explainable AI (aka XAI) techniques to provide users with a basic understanding of how the AI system makes decisions. This builds trust and helps users understand the reasoning behind AI. – Erum Manzour, Citi Group
5. Have a human review all AI decisions
We don’t have general AI and self-supervision is not at a level where we can let machines manage themselves. A human must review all decisions of an AI system and use the data to iteratively improve the models, keeping user well-being and ethical considerations in mind. A further step is to create mechanisms that identify areas where AI is likely to display harmful biases and involve humans in these processes. – Kaarel Kotkas, Check
6. Keep transparency and explainability at the heart of the design
Transparency and explainability should be at the heart of product design for any AI-based solution. Transparency builds trust among stakeholders, and explainability improves understanding of the reasoning behind AI recommendations or actions. Transparent systems are better equipped to detect and mitigate bias and are able to easily support any compliance audits required by your industry. – Shailaja Shankar, Cisco
7. Establish clear data management and training processes
A strong commitment to privacy and transparency is essential when building AI-based products. Clear processes for managing user data, collecting feedback, and training your generative model, as well as transparently disclosing how your AI works, are key to building trust with your users. And this trust is essential to drive adoption of your products and services. – Oz Alon, Book of honey
8. Consider these five key factors
Five key ethical factors should be considered when designing and implementing AI products and services. These include responsible sourcing of unbiased data sets, accountability under human oversight, mitigating bias in AI models, transparency about how systems are used, and collective inclusion of all these principles during the product life cycle and beyond. – Alan O’Herlihy, Already seen
9. Operationalize responsibility from design to post-sales stages
Operationalize responsible AI development from the design phase through post-sales and/or customer success stages. This way, most risks can be mitigated as part of the regular product development process, and even if things go wrong after launch, escalations are easier to manage. – Didem Un Atès, Goldman Sachs
10. Establish a governing body
Don’t leave these discussions to developers: ethical AI is the responsibility of leaders. My advice is to create a governing body to manage user welfare and ethical issues. They should develop decision frameworks that developers could apply. – Glen Robinson, Platform 1
11. Integrate user well-being and ethics into design sprints
When building AI products, integrate user well-being and ethics into design sprints. Consider potential risks and mitigation strategies as well as basic functionality. Prioritize solutions that benefit both users and society. Regular reviews and user feedback loops help maintain ethical standards throughout development. – Sergei Mashchenko, Light IT Global Limited
12. Make sure the tool has a real use case
Does this AI tool actually solve a user’s challenge, problem, or requirement? When offering a service to a user, AI can appear cold and somewhat clinical in its approach. Is there a clear path to a use case that will provide a satisfying outcome for the user? Consider and account for as many user stories as possible. – Arran Stewart, Job.com
13. Have testers try to “break the system”
We test and evaluate just about every new AI model or model change at Integrail.ai, and we’ve found that testing is absolutely essential. You should have a number of predefined cases of people trying to “break the system” and you should run them every time you make a change to your AI multi-agent. – Anton Antich, Integral
14. Implement user feedback loops
A key AI strategy is implementing robust feedback loops for users. By incorporating user feedback throughout the design and development process, technical teams can ensure their AI-powered products align with users’ values and prioritize their well-being. Additionally, creating multidisciplinary teams including ethicists and social scientists can help organizations identify and address potential ethical considerations early on. – Ankur Pal, Aplazo
15. Prioritize peer review and interdepartmental checks and balances
Technical teams must establish clear standards and commit to continually monitoring the AI models that drive their products. They should prioritize peer review within the team and interdepartmental checks and balances, such as change control committees. Additionally, they should provide regular release notes to communicate feature developments and changes to internal and external recipients. – Kempton-Presley, JoinHealth
16. Implement data anonymization
To prioritize user well-being in AI design, it is essential to implement data de-identification techniques. Removing personal identifiers through methods such as pseudonymization and anonymization protects privacy, ensures compliance with data protection laws and builds trust. Regular updates of these methods are crucial to continually adapt to technological advances. – Hashim Hayat, Walturne
17. Take advantage of these three strategies
There are three practical ways to prioritize user well-being and ethical considerations when designing AI tools. 1. Prioritize a diverse team of people who provide feedback on the rewards training model. 2. Draw clear boundaries between which questions the AI should not answer and which it should answer, and default questions to humans if the AI is unsure. 3. Create continuous feedback loops to allow users to provide feedback on the tool’s results. – Pranav Kashyap, Central
18. Use the “FIRST” frame
I would recommend using the “FIRST” framework for AI. This includes feedback mechanisms (F) for user issues; integrity (I) in ethics training, with the inclusion of diverse data; regular ethical reviews (R); the inclusion of stakeholder(S) from the start; and transparency (T) on data use and compliance. – Viplav Valluri, Nuronics Corp..
19. Maintain and regularly review a results log
Maintain a log of AI tool results and review it periodically. Following the saying that “failure is the path to success,” a post-mortem analysis of the AI tool’s results will reveal areas that need to be corrected or readjusted. Completing these reviews as a team is even better and will highlight the importance of “doing it right.” – Henri Isenberg, RevueInc
20. Allow users to control their data
Provide users with transparency and choice over exactly how their data is stored and collected. This should include the ability to opt out of certain features or data sharing requirements. – JJ Tang, Rooting