Victor Hugo, the French poet and novelist, is credited with the phrase “Nothing is more powerful than an idea whose time has come.” Ideas whose time has come are unstoppable.
While we deny and reject innovations, the world will continue to move forward, driven by innovation and historical necessity, and no one will pay attention as we lag behind. We can shout from the rooftops that new innovations like Starlink pose threats to national security, but it is the creators and innovators who will not only continue to rule the world, but also eat the lunch of those sleeping on the job.
As we embark on the path to an artificial intelligence (AI)-infused future, new innovations like Starlink must first be understood, then adopted, and ultimately used for our own national development.
Philosophical foundations of AI
The field of AI, although born from the brilliance of modern computing, has its roots in timeless philosophical contemplations.
René Descartes, in his Meditations, addressed the nature of cognition, implicitly hinting at an area where machines could potentially mimic human thought processes. As early as 1651, Thomas Hobbes, in Leviathan, proclaimed that “…reasoning is only calculation”, reinforcing the idea that logical calculation, even mechanized, is at the heart of intelligence.
I first discovered AI 38 years ago, when I entered the second year of my Master of Business Administration and took a course called Advanced Forecasting Techniques; it was fascinating, to say the least. Even in 1986, it was inconceivable that AI would one day have as many applications as it does today.
The evolution of AI
Alan Turing, the visionary behind modern computing, asked the fundamental question, “Can machines think?”, in his seminal work, Computing Machinery and Intelligence (Turing, 1950).
His hypothesis, known as the Turing Test, set a benchmark for artificial intelligence: if a human evaluator cannot distinguish between a machine and a human based solely on their answers to questions, then the machine can be said to “think”.
Since Turing’s era, AI has come a remarkable way: from rule-based systems in the 1960s and 1970s, to expert systems in the 1980s, to the emergence of neural networks in the 1990s.
The 21st century marked the advent of deep learning, where algorithms, inspired by the complex neural architecture of the human brain, demonstrated astonishing prowess in tasks such as image and speech recognition (LeCun, Bengio and Hinton, 2015).
The ethical dilemmas of AI
Rapid advances in AI capabilities have raised deep ethical concerns. Issues surrounding bias in AI, the right to explanation, and potential job displacement require thoughtful deliberation (Bostrom, 2014).
For example, should a self-driving car prioritize the safety of its passengers or pedestrians?
Additionally, AI and machine learning models, despite their remarkable accuracy, can sometimes function as enigmatic “black boxes.”
This lack of transparency can hinder understanding and troubleshooting, highlighting the need for transparent and interpretable models (Doshi-Velez & Kim, 2017).
Socio-economic ramifications of AI
Beyond the realms of technology and ethics, AI brings unprecedented socio-economic implications. Brynjolfsson and McAfee (2014), in their book The Second Machine Age, highlight the transformative impact of AI on industries, economies and the nature of work.
Although AI offers the potential to improve efficiency, concerns about the phenomenon of “technological unemployment,” in which machines could replace human jobs, persist. This is already happening in many sectors around the world.
The quest for AGI
The quest for artificial general intelligence (AGI) embodies a breed of AI that possesses the remarkable ability to undertake any intellectual activity that human beings can effortlessly accomplish (Russell and Norvig, 2009).
While our current models demonstrate remarkable competence in limited domains, achieving AGI requires the creation of systems with the ability to learn, reason, and apply knowledge across multiple domains, thus marking the pinnacle of artificial cognitive ability.
Traversing the complex field of artificial intelligence reveals a tapestry woven with threads of revolutionary innovations, complex ethical dilemmas and profound socio-economic metamorphoses.
As humanity stands on the precipice of unprecedented technological advances, a symbiotic alliance with sentient machines is required, calling for both circumspection and bold exploration.
Interaction between dream and reality
For countless centuries, humanity has been fascinated by the concept of creating artificial entities that reflect the complexity of the human intellect.
This timeless aspiration, evident in ancient myths featuring automatons like Talos and the Jewish Golem, has now materialized in the realm of algorithms and machines that emulate cognitive functions similar to those of humans (McCorduck, 2004).
First explorations of automata
The roots of artificial intelligence run deep in the annals of history, where pioneers like Al-Jazari meticulously designed complex automata in the 12th century. Although these devices lacked true cognition, they laid the foundation for the eventual emergence of mechanized intelligence (Hill, 1998).
The transition
The odyssey from mechanical contraptions to conceptualizing intelligence began with visionaries like Ada Lovelace, who envisioned a future in which Charles Babbage’s analytical engine could produce art and music, foreshadowing the multifaceted capabilities of modern AI (Lovelace, 1843).
Formalization
Alan Turing’s theoretical foundations, exemplified by the Turing test and the universal Turing machine, became a cornerstone of artificial intelligence (Turing, 1950).
His work sparked a quest to determine whether logical calculations could usher in a new era of intelligent machines.
Emergence of machine learning
The introduction of the perceptron by Rosenblatt in 1958 marked the dawn of machine learning (Rosenblatt, 1958). The ability to train machines using data rather than hard-coded rules was revolutionary.
This heralded the birth of neural networks, a concept inspired by biological neuronal structures.
However, it was not until the development of the back-propagation algorithm in the 1980s that deep learning took its first steps (Rumelhart, Hinton & Williams, 1986).
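To make the idea concrete, here is a minimal sketch of the perceptron learning rule in Python. It learns the logical AND function from examples; the data, learning rate, and epoch count are illustrative choices for this demonstration, not a reconstruction of Rosenblatt’s original implementation.

```python
# A minimal sketch of the perceptron learning rule, shown on the
# logical AND function. All values here are illustrative choices.

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]   # training examples
targets = [0, 0, 0, 1]                      # AND truth table

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for (x1, x2), target in zip(inputs, targets):
        # Weighted sum passed through a step activation
        activation = weights[0] * x1 + weights[1] * x2 + bias
        prediction = 1 if activation > 0 else 0
        # The update is driven by the error on the data,
        # not by any hard-coded rule for AND
        error = target - prediction
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# After training, the learned weights reproduce the AND function
for (x1, x2), target in zip(inputs, targets):
    activation = weights[0] * x1 + weights[1] * x2 + bias
    print((x1, x2), "->", 1 if activation > 0 else 0, "expected", target)
```

The point of the sketch is the update step: the machine is never told the rule for AND; it adjusts its weights in response to the errors it makes on the data, which is precisely what made Rosenblatt’s idea revolutionary.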
The renaissance of AI
The confluence of vast data sets, powerful computing capabilities, and refined algorithms in the 21st century has sparked a resurgence in AI research. Deep learning models, powered by layers of artificial neurons, have demonstrated prowess in areas such as natural language processing, image recognition, and even creative endeavors like art generation (Goodfellow, Bengio & Courville, 2016).
GPT, AlphaGo and beyond: The achievements of modern AI models like OpenAI’s GPT and DeepMind’s AlphaGo demonstrate not only computational finesse, but also a capacity for nuanced, human-like learning and strategic thinking (Silver et al., 2017; Brown et al., 2020).
Epilogue: The Future Awaits: Looking to the horizon, the convergence of AI with quantum computing and the pursuit of AGI promise a frontier where machines could match, or even exceed, the scope and depth of human cognitive abilities (Sutor, 2019).
Navigating the maze of AI lexicon: AI, in its vast expanse, is littered with complex terminologies, each concealing a wealth of knowledge. For academics and practitioners alike, it is crucial to navigate these terms for a comprehensive understanding of the field (Russell & Norvig, 2009).
- Ndoro-Mukombachoto is a former academic and banker. She has conducted numerous consultations on strategy, entrepreneurship and private sector development for organizations in Zimbabwe, the sub-region and overseas. As a writer and entrepreneur with interests in real estate, hospitality and manufacturing, she continues in strategy consulting, also sharing through her podcast @HeartfeltwithGloria. — +263 772 236 341.