Co-founder of Tuta, a secure messaging service. We are leaders in innovation in encrypted communication and collaboration.
As artificial intelligence (AI) systems continue to be hyped in 2024, the risks posed to data privacy can no longer be ignored. I believe AI should be considered a surveillance technology because of its ability to collect, analyze and interpret large amounts of data. It is time to examine not only the possibilities of AI, but also its risks, particularly with regard to everyone’s right to privacy.
Rapid developments in the training and use of AI have raised concerns about user consent, the ethical use of personal data, and the right to privacy in general. Let’s explore how AI is trained, what some of the lesser-known risks are, and what steps can be taken to ensure the benefits outweigh them.
Understanding AI training and its gaps
AI training involves feeding large volumes of data into machine learning algorithms so that they learn patterns and can make predictions or decisions. Several nuances of this training process must be considered and addressed by AI developers.
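The core idea can be sketched in a few lines — a hypothetical, deliberately tiny example in which a one-parameter-per-feature linear model is fitted by gradient descent. The data and all names here are illustrative, not drawn from any real AI system:

```python
# Minimal sketch of "training": repeatedly adjust model parameters so
# predictions better match the examples in a dataset (illustrative data).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, target)

w, b = 0.0, 0.0   # model parameters, learned from the data
lr = 0.01         # learning rate: how big each adjustment step is

for _ in range(2000):            # repeated passes over the training data
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y    # prediction error on one example
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w             # nudge parameters to reduce the error
    b -= lr * grad_b

print(w, b)  # the learned slope reflects whatever pattern the data contains
```

The point of the sketch is that the model ends up encoding whatever is in its training data — which is exactly why biased or improperly sourced data becomes a problem downstream.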
First, it is now well known that AI models can inherit overlooked biases present in training data. If the data is not representative or contains biases, the model can perpetuate or even amplify them. Understanding and addressing these biases are ongoing challenges in AI development.
As AI technology continues to evolve, it is essential that stakeholders actively participate in discussions about ethical considerations, transparency and responsible deployment of AI. Particular attention should be paid here because AI-based systems can process and analyze huge volumes of data from various sources. Not all available data should be usable for training purposes: training sets can include data scraped from the web and social media, as well as non-public data such as user actions on technology platforms, user profiles or even footage from security cameras.
AI systems are frequently trained by merging personal data from external and internal sources. Most of the AI algorithms used are considered proprietary, making them difficult to review. The lack of transparency in how these algorithms operate and make decisions raises concerns about liability and the risk of bias that could disproportionately impact certain groups, particularly minorities. Information about how the data is used, what consent is required or how its use is regulated must be clear.
As a result, AI systems can draw on virtually all available data. Currently, it is unclear how personal data is used to train AI systems, where that data is stored, and whether it is secured by encryption.
The harms of AI tools on privacy
It is widely accepted that AI tools enable users to create content: texts, images, videos and much more can be created quickly with AI. But these tools can also be used to track and profile individuals. AI enables more detailed profiling and tracking of individuals’ activities, movements and behaviors than ever before. AI-based monitoring technology can, for example, be used for marketing and targeted advertising purposes.
This global surveillance made possible by AI can lead to breaches of privacy. Individuals may feel monitored, and their data may be used in this way without their knowledge or consent.
AI also facilitates the implementation of facial recognition technology, which can identify and track individuals based on their facial features, even in the real world. This technology is already used in public spaces, such as train stations or airports, as well as by law enforcement. The widespread use of facial recognition raises concerns about the constant surveillance of individuals.
Predicting the future
AI algorithms can analyze patterns of behavior, both in the real world and in online spaces. This includes monitoring social media activities, online searches and communication patterns.
Because of the massive use of personal data to train AI systems, this technology is akin to a surveillance system capable of “knowing” what people are thinking and predicting what they will like, what they won’t like or what they might do in a given context.
Websites or online searches already tend to show users only information that matches their past online behavior (called filter bubbles) instead of creating an environment for pluralistic, equally accessible and inclusive public debate. If left unchecked, artificial intelligence could make these filter bubbles even worse, potentially predicting what users would like to see and applying filters accordingly.
Overcoming Challenges
We already live in a world of big data, and the expansion of computing power through AI could radically change the way privacy is protected.
AI is simply the latest technology that presents new challenges to consumers, businesses and regulators. It is neither the first nor the last of its kind. Thus, businesses must implement best practices in data privacy compliance to build user trust. For example, companies must ensure that the requirements set out in the EU’s General Data Protection Regulation (GDPR) are respected whenever they use the personal data of European citizens.
Additionally, organizations can leverage their internal resources and adopt specific strategies to improve privacy and build user trust. This means fostering a culture of transparency and communication and investing in user training programs that give individuals the knowledge to protect their own privacy. Companies should provide resources and guidelines on best practices for secure online behavior and promote the use of encryption. Finally, every business should emphasize ethical data practices and build privacy by design: Prioritize collecting only necessary data, and ensure that data is handled responsibly and in accordance with privacy regulations.
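What "collecting only necessary data" can look like in practice is easy to illustrate. The following is a minimal sketch, not a production recipe: the field names, the salt handling and the analytics task are all assumptions made for the example:

```python
import hashlib

# Hypothetical "privacy by design" intake step: keep only the fields an
# analytics task actually needs, and replace the raw user identifier with
# a salted, one-way pseudonym before the record is ever stored.
NEEDED_FIELDS = {"country", "plan"}  # illustrative allow-list

def minimize(record: dict, salt: bytes) -> dict:
    """Drop unneeded fields and pseudonymize the user ID."""
    pseudonym = hashlib.sha256(salt + record["user_id"].encode()).hexdigest()
    return {"user_ref": pseudonym,
            **{k: v for k, v in record.items() if k in NEEDED_FIELDS}}

raw = {"user_id": "alice@example.com", "country": "DE",
       "plan": "premium", "ip_address": "203.0.113.7"}  # over-collected input
stored = minimize(raw, salt=b"keep-secret-and-rotate")
print(stored)  # no email address, no IP - only what the task requires
```

Note that a salted hash is pseudonymization, not anonymization: under the GDPR the pseudonymized record is still personal data, which is precisely why minimizing what is collected in the first place matters.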
The future of data processing by AI software must be closely monitored, and we must ensure that people’s right to privacy is not harmed by this new technology. Our quest for technological progress must not come at the expense of privacy.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.