Key takeaways
- Large technology companies like Microsoft and Google are at the forefront of AI development, benefiting greatly from the technology and investing heavily in partnerships and internal development.
- Generative AI, such as ChatGPT and DALL-E, has been in development for many years, but the required hardware and massive amounts of training data have only recently become available.
- The development of artificial general intelligence (AGI), capable of reasoning and answering complex questions, is the next step in AI. However, being at the mercy of big tech companies and their misuse of our data has already harmed society, raising concerns about the future of AI development.
AI is undeniably the hottest technology today. It’s gotten to the point where I can’t go more than a few hours without hearing about it, and chances are you can’t either. All the big tech companies seem desperate to push AI into new and old products alike, and AI fatigue is already starting to show. Microsoft may already be causing Copilot burnout with the breakneck pace at which it’s integrating the assistant into its systems before it’s fully developed, but that’s how capitalism works: you want to be first, and you want to stay in first place forever.
But we have never played this game with a technology so capable of disrupting so many areas of human life. The potential of AI seems limitless, and nothing seems able to slow down the investment bubble which, if it continues, will surely eclipse almost every other tech bubble we have witnessed. It makes perfect sense that big tech companies like Microsoft, Google, and Meta are leading the way in partnerships and internal development; they have a lot of money to spend and will definitely benefit greatly from the technology.
I’m not saying that AI shouldn’t exist or that its development shouldn’t continue. On the contrary, it’s one of the most exciting technologies I’ve ever seen, and it’s very much part of the utopian future many of us have envisioned. However, seeing the power of AI become centralized by giant corporations does not bode well for the future of the general public. Where money can be made, ethics and morals are often put aside.
Generative AI didn’t appear overnight
Even though it feels like it did
Generative AI is quickly becoming mainstream, and it might seem like it came out of nowhere. This is not the case. The application of generative AI, at least in its current form, was anticipated by Canadian philosopher Marshall McLuhan as early as the 1960s. McLuhan, who focused his career on media theory, foresaw a time when individuals could request large swaths of information and have it delivered quickly in a neat package. In an interview with Robert Fulford on This Hour Has Seven Days, McLuhan said, “Products are becoming more and more services” in an attempt to explain where we were headed all those years ago. Of course, McLuhan did not foresee the Internet and its ability to deliver this information so rapidly, and he certainly did not predict the enormous stores of data on which AI could be trained. He lived in the age of libraries and Xerox machines.
Generative AI in the form of popular systems like DALL-E and Google Bard did not appear overnight. Not only did the hardware required for such heavy computation not exist all those years ago, but neither did the enormous stores of data on which generative AI has now been trained. You know, the data we didn’t know could be so powerful and profitable until it was too late to take power back from the companies that collect it.
When ethics and morals get in the way of increasing profits, history shows a less than perfect record.
Geoffrey Hinton, a computer scientist and cognitive psychologist, played a leading role in the 1980s in developing the deep learning techniques used today. To put it simply, he helped shape the idea that instead of teaching a computer what to think, we could show it enough data to allow it to draw its own conclusions, largely through pattern recognition. He joined Google in 2013 to work on Google Brain, but left the company after 10 years to begin warning the public about the dangers of AI.
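To make that idea concrete, here is a minimal toy sketch (my own illustration, not Hinton’s actual work) contrasting a hand-written rule with a single artificial neuron that learns roughly the same decision from labeled examples via gradient descent:

```python
import numpy as np

# Toy task: label a pair of sensor readings as "high" (1) or "low" (0).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]])
y = np.array([0.0, 0.0, 1.0, 1.0])

# Rule-based approach: a human encodes the decision logic by hand.
def rule_based(x):
    return int(x.mean() > 0.5)

# Learning approach: a single neuron adjusts its weights to fit the
# labeled examples (gradient descent on logistic loss), finding the
# pattern on its own instead of being told what to look for.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                        # prediction error
    w -= 0.5 * (X.T @ grad) / len(y)    # nudge weights against the error
    b -= 0.5 * grad.mean()              # nudge bias against the error

sample = np.array([0.85, 0.8])
print(rule_based(sample))               # 1, because the rule says so
print(int((sample @ w + b) > 0))        # 1, because the data said so
```

Scaled up to billions of weights and oceans of training data, that same learn-from-examples loop is the heart of the deep learning systems Hinton helped pioneer.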
Not only would AI begin to replace humans, Hinton argued, but it would also be a perfect tool for manipulating humans without their knowledge. Sounds familiar, doesn’t it? And while the huge corporations in control have pledged to stick to their self-imposed ethics and morality around AI, the bottom line in a capitalist system is to stay competitive and demonstrate growth year after year. When ethics and morals get in the way of increasing profits, history shows a less than perfect record.
The next stage of AI development
AGI is somewhere in our future
Generative AI is a step on the path to true artificial intelligence, also known as artificial general intelligence (AGI). AGI differs from traditional AI primarily in its ability to reason. While ChatGPT (which powers Microsoft’s Copilot) and Google Bard rely on recognizing patterns in the datasets they’re trained on, AGI would be able to understand complex concepts and, essentially, answer questions and apply reasoning on its own. My colleague Adam Conway wrote an excellent article explaining OpenAI’s Q* algorithm and its potential to “threaten humanity”, a quote from former OpenAI chief scientist Ilya Sutskever.
This is the version of AI that many people have always envisioned: something so intelligent that we can only distinguish it from human intelligence because it far exceeds our organic capabilities. It’s unclear what really goes on behind closed doors at OpenAI. Still, there is at least some evidence — like the fact that OpenAI CEO Sam Altman was ousted only to quickly return — that the company is starting to exploit something we don’t know how to control. I understand that development has to happen somewhere, but I don’t know if it’s currently in the right place.
Being at the mercy of big tech companies when it comes to just our data has already damaged society.
Being at the mercy of big tech companies when it comes to just our data has already damaged society. I don’t think it’s a big secret that our data is being used against us to cultivate engagement (anger and outrage are also very hot right now), influence political and societal decisions, and sell us things we don’t really need. What’s even scarier is that it’s used to predict how we will act on an individual and societal scale; the way this information is often applied leads to authoritarianism and predictive programming, despite being presented as necessary for a better future.
The data we created simply by using the Internet has now been used to build arguably the most powerful tool of the 21st century, one poised to disrupt our world on a scale similar to that of the Industrial Revolution once it truly gets going. Massive industries employing countless people fear being completely replaced by AI, educational institutions are nearly powerless in the face of widespread abuse, and we haven’t even begun to scratch the surface of ethical dilemmas like deepfakes and scams. Just look at the current hubbub surrounding explicit AI-generated images of Taylor Swift and the reactionary rush to regulate these tools one way or another. These issues are not new, but they are being reframed on a scale never seen before.
What will AI look like in 10 years?
There’s a lot to think about
We’re at the point where a lot of money is being made just by putting “AI” in the name of a new product, but that will quickly dry up as AI fatigue sets in among consumers. Case in point: Microsoft still has no idea what it’s doing with Copilot, and that’s just the start, with relatively basic tools (at least compared to what’s coming in the future).
We’re just dipping our toes into the vast ocean of AI potential, and already we’re faced with subscription costs, paywalls, and competitive uncertainty. In many industries, AI-based tools are already too good to ignore if you want to compete, which will undoubtedly widen the already yawning wealth and knowledge gap in 2024. AI PCs – those with a built-in neural processing unit (NPU) to handle AI-related tasks – are in their infancy, but they are expected to raise system requirements (and therefore costs) as they mature.
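For a rough sense of how this plays out in software, here is a hedged sketch using ONNX Runtime, one common way to dispatch models to whatever accelerator a machine has. The provider names below are illustrative examples (they vary by vendor and installed packages), and on a PC without an NPU the code simply falls back to the CPU:

```python
import onnxruntime as ort

# Ask the runtime which execution providers this machine offers.
available = ort.get_available_providers()

# Example NPU-backed providers (vendor-specific; names are illustrative):
# QNN targets Qualcomm NPUs, DirectML targets Windows GPU/NPU hardware.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider"]

# Pick the first accelerated provider present, else fall back to CPU.
chosen = next((p for p in preferred if p in available), "CPUExecutionProvider")

print("Available providers:", available)
print("AI workloads would run on:", chosen)
```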
Looking for answers to AI
Not as easy as a simple prompt
I don’t know who or what should be in charge of AI, or whether controlling it will even be possible once artificial intelligence reaches a certain point. Even more frightening is the realization that no person or institution (including world governments) has any idea what will happen tomorrow, next week, or next year, let alone how to protect the rest of us from unforeseen dilemmas. Catching up and only reacting to problems with something that evolves as quickly as AI will not work well.
Catching up and only reacting to problems with something that evolves as quickly as AI will not work well.
We got off to a bad start, but not a surprising one. If the current dynamic continues, I foresee a market in which those with the most powerful AI (available today only at very high cost) will be years or decades ahead of those without AI assistance, further centralizing power in the hands of a very small number of unelected and largely unsupervised groups. This structure sounds like a capitalist’s dream, but it’s a sleepless night for all of us.
AI is here to stay. I hope we don’t get sidelined as this grows.