In an interview with Kathimerini, David Evan Harris, senior fellow at CIGI, Chancellor’s Public Scholar at the University of California (UC), Berkeley, and faculty member at the Haas School of Business, where he teaches the ethics of artificial intelligence (AI) to leaders, suggests identifying at-risk jobs and investing in education when asked what he would advise the Greek government to do to prevent the country from becoming a “pariah” in this area.
Harris is dedicated to the responsible use of the powerful tool that is AI, which he compares to nuclear technology: valuable but extremely dangerous. He worked at Meta (formerly the Facebook company), writes a column in the Guardian, and follows European efforts to legislate in this area with some apprehension. (Editor’s note: This interview took place before representatives of EU member states voted unanimously on February 2 in favor of the Artificial Intelligence Act, opening the path to a revolutionary set of rules that will influence how AI is governed in the region and around the world.) Our conversation begins with concerns about the class divide that the spread of AI could create.
Do you think that economic and social inequalities will be exacerbated by AI?
We don’t yet know the answer to this question. We do not yet know whether AI will serve the public interest or whether it will be controlled by a few companies. The outcome will depend on whether there is public investment in its development to make it accessible to people and therefore safe. If, on the contrary, all investment comes from the private sector, that mission will be undermined. In other words, the problem is not only the control and regulation of the AI landscape, but also the channeling of public resources, because this is a very expensive activity. For universities and research centers to develop AI in the public interest, significant investment will be required, either from the EU or from national governments. Two conditions, then, must be met to keep AI from increasing inequality: control and public investment.
Do you think AI is a blessing or a curse for a small country like Greece?
The key question we face is whether AI will change the entire job market. Initially, we thought that the first jobs affected would be those of truck drivers or other workers who would be replaced by robots. But we have come to see that today’s AI systems mainly affect jobs like yours and mine: professors, journalists, writers. We didn’t expect these systems to become excellent writers before becoming excellent truck drivers!
If you were an AI advisor to the Greek government, what would you suggest?
I would say that countries that have started to think about the future and plan for how the labor market will evolve will have the opportunity to benefit from AI. My first priority for Greece would be to identify the jobs most vulnerable to the introduction of AI and help the people who hold them transition to other work. The second would be to invest in education.
I have heard that Silicon Valley companies are recruiting engineers in Greece as part of their outsourcing, and that Greece is an attractive country for technology companies opening offices. If the right infrastructure is in place in the education system, your country could benefit from the economic opportunities that are opening up. But if what I describe is not done in time, the tide could well reverse: the jobs Greece needs will migrate to other countries, and the country will find itself ill-prepared for the retraining challenge created by advances in AI.
“We do not yet know whether AI will serve the public interest or whether it will be controlled by a few companies”
You have written that the legislation under discussion in the EU constitutes the most serious attempt in the world to regulate the AI landscape. Is there anything that worries you?
My main concern is that this effort will be torpedoed by European tech companies. Two companies, one French, Mistral AI, and the other German, Aleph Alpha, are lobbying to narrow the legislation so that it applies only to the types of AI that existed before ChatGPT. In other words, they want to make the rules obsolete before they are even implemented. If they succeed, European legislation will be outdated from the start, and that will enable many more abuses of AI in the future.
Is the erosion of democracy one of your fears?
This is already happening without us realizing it. Big social media companies determine what we see on their platforms, promoting extreme political views and increasing the polarization of public opinion. But with generative AI, which can create new content, it will now be possible to fabricate false national narratives very quickly. I fear these systems will be particularly persuasive in manipulating citizens, encouraging them either to vote a certain way or not to vote at all. I also fear that such content will be distributed through encrypted applications such as WhatsApp, which I believe is also very popular in Greece. Remember that the company that owns that platform, Meta, does not want to get involved in moderating content.
When Meta founder Mark Zuckerberg promises to build a powerful AI system and provide free access to the public, it sounds very democratic. What is behind this commitment?
When a private company promises this, it must also answer for the misuse of the tool. When Mark Zuckerberg made Facebook free for users, at first we loved it and thought it would help spread democracy; we lived through the Arab Spring, the #MeToo movement and Black Lives Matter. We thought it was a compelling platform for democracy. But then we discovered that it was a much more powerful tool in the hands of authoritarian leaders who decided to exploit these technologies. I fear this will happen again if AI falls into the hands of people who want to harm our societies.
Many people have compared AI to nuclear technology…
Yes, because nuclear technology involves material and knowledge that, used correctly, let you operate a power plant and produce energy, something valuable for society. But you can also use it to make nuclear weapons. AI is just like that: you can use it for the common good or for military purposes. If you make it freely available to anyone who wants it, it can fall into the hands of bad actors. We must not leave it to a private company to make such an important decision, one with such enormous implications for all of humanity.
And you say this even though you worked at Meta. Do you regret it?
I worked at Meta for five years, but in teams that had a good purpose: fighting platform abuse, election interference and misinformation, and building corporate accountability. Unfortunately, the company decided either to eliminate these departments or to significantly reduce their workforce. This is what we need to keep in mind when we expect private companies to self-regulate. When there is money and the economy is doing well, these companies are very profitable. They have the luxury of hiring people for the departments I worked in and making sure their products benefit the world. But when a crisis hits and they need to reduce operating costs, they cut the departments that bring the least profit to the company.
And then people like you are the first to be laid off.
Exactly! And to be fair to Mark Zuckerberg, it’s even worse at Elon Musk’s X, where he fired over 80% of the company’s employees, including an entire team dedicated to “AI ethics” whose mission was to make Twitter a safe environment for democracy. Ironically, the company that invests the most in this area is TikTok, even though the Chinese go to the other extreme and censor a great deal of content.