In a recent article (“Intelligence, as we do not know it”, IE, March 2), the author discusses developments in the field of artificial intelligence. However, like many others, the article suffers from an over-reliance on simplistic assumptions about the technology’s revolutionary potential.
The term “artificial intelligence” must be analyzed by critiquing the assumptions commonly made when engaging in this area of study. “Intelligence” can mean many things, since there are many definitions of the word – from Howard Gardner’s theory of multiple intelligences, to the three-stratum theory of cognitive ability, to the Stanford-Binet intelligence scale. Simply describing a machine as “intelligent” therefore tells us almost nothing. Meanwhile, nothing about AI is “artificial”, because the very workings of the technology are rooted in real-world political and environmental processes. So, as critical technology scholar Kate Crawford explains, AI is neither artificial nor intelligent. Relying on definitions of AI from the 1950s in the 2020s would be a mistake.
Recently, the Center on Privacy and Technology at Georgetown University Law Center in Washington, DC, announced that it would stop using the terms “artificial intelligence,” “AI,” and “machine learning” because words matter: vague terminology hides more than it reveals. Adopting specific terminology such as “large language models” provides greater clarity. Clarity of terminology is particularly valuable for technology law and policy, which should not be formulated on the basis of assumptions of “intelligence”, whether in the present or the future. The same goes for anthropomorphizing the technology. Developing laws and policies on the assumption of machine “intelligence”, without critical analysis of the underlying technology involved, whether self-driving cars or facial recognition systems, risks serious consequences for the communities most likely to be affected by these technologies.
False assumptions about what “AI” is and what it is not only reinforce narratives that “AI is too complex” for lawmakers. This benefits the few global companies that control the massive infrastructure used for “AI”, which can then argue for “self-regulation” of “AI” (which in practice means no regulation). The popular debate about “AI” continues to focus on these companies rather than on those who bear the brunt of its real impacts on daily life – from the poorest sections of society, who are denied food and welfare by algorithms, as reportedly happened in Telangana, to the exploited workers responsible for “cleaning” the data sets. The article focuses only on the “profit-generating journey” of these companies, their stock prices, their market capitalization, and the clothes their CEOs wear, instead of educating readers about the damage these same companies cause to the planet in the name of “AI”. That everything is seen and understood in terms of the “market” and corporate incentives is one of the many unfortunate trends of this neoliberal era. More telling is the assertion that legal regulation should not hinder “innovation” – at no point is it asked whether it is worth it for legislators to even allow these “innovations” to continue, given the magnitude of the harm already caused: deepfakes, misinformation, the devaluation of art, the degradation of the Internet and much more.
Meanwhile, it is not as if no one has called for a halt to these “innovations”. In 2020, some AI researchers at Google questioned the ever-increasing scale of natural language processing systems, pointing out that building larger and larger AI models would not be sustainable in the long term, given the resources required and the potential for entrenching bias and systemic injustice. More importantly, they suggested that future research would be more useful if it pursued more specific and achievable goals, instead of aiming to be the first to build a hypothetical “general” AI. They were later fired by Google. The Gemini question, raised but not explored in depth in the article, is important because of what it actually reveals: that companies will refuse to acknowledge that bias is inherent to the way these systems work; that this problem will never be fully solved; and that these refusals will be met with half-baked technological “solutions” that cause even more problems.
Unfortunately, even the arguments against “AI” have been co-opted by the industry itself, pushing legitimate voices to the margins. Anupam Guha, AI researcher and academic, explains (“AI for the People,” IE, December 29, 2023) how companies have weaponized genuine concerns about AI (as in the case of the open “letter” that made headlines last year) to divert attention from concrete interventions.
Effective policy must confront the uncomfortable, unspoken facts about AI: first, that more data does not equal better “AI”; second, that AI and socio-political realities are closely intertwined; third, that there is no such thing as “ethical AI”; fourth, that most “AI” today is simply a method of flattening and erasing the complexity of the real world into statistical data; and finally, that “AI” should not, and does not need to, be deployed everywhere.
Meanwhile, generative AI remains largely unprofitable for AI startups, which sell “the future” while depending for financing on big tech companies like Microsoft and Amazon, which are investing billions in data centers around the world. OpenAI CEO Sam Altman’s calls for billions of dollars, and his hopes for an energy breakthrough, tell us much about the environmental costs of AI. The AI industry also has a huge water footprint, which often goes unmentioned. A Cornell study predicts that global demand for AI could be responsible for 4.2 to 6.6 billion cubic meters of water withdrawals in 2027. Such excessive water consumption is particularly concerning in regions already struggling with water scarcity. These companies are often reluctant to reveal the amount of natural resources consumed by their data centers. While many have committed to being “water positive” and “carbon negative” by 2030, it could be too late by then. Rather than unfounded hype, misplaced enthusiasm and passive acceptance of the latest “innovations” and “revolutions”, we need popular literature on technology that promotes skepticism towards corporate technology campaigns, awareness and critical inquiry, to help build a fairer and more equitable future.
The author is a doctoral student at NALSAR University of Law, Hyderabad