In October 1950, British mathematician Alan Turing proposed what he called the imitation game.
Turing’s proposal, first published in Mind, a quarterly journal of psychology and philosophy, would later become known as the “Turing test” – a widely popularized test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Since then, the concept of an artificial intelligence (AI) system whose intelligence exceeds our own has captured the public imagination. And now tech companies like OpenAI, Anthropic, Alphabet, Microsoft and others have publicly declared they are trying to build such a system.
OpenAI even included the objective of developing a machine capable of artificial general intelligence (AGI) in its founding charter.
But anthropomorphizing AI systems, or assigning human characteristics to them, can present several dangers – and for many commercial use cases of the innovative technology, it can be a fatal distraction from the very real utility that AI can offer.
After all, AI is far from being as mysterious as people think. AI models are computerized systems that deploy sophisticated probabilistic algorithms at lightning speed to solve complex problems. They are trained to imitate and built to generate. They do not think, do not believe and do not react.
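To make that point concrete, here is a minimal toy sketch in Python of how a generative model produces output: it samples the next word according to probabilities rather than by “thinking.” The word lists and numbers below are purely illustrative stand-ins for the billions of learned weights in a real model.

```python
import random

# Toy stand-in for a trained language model: a hand-written table of
# next-word probabilities. A real LLM learns such probabilities over
# tens of thousands of tokens from massive amounts of training data.
NEXT_WORD_PROBS = {
    "invoice": {"number": 0.5, "total": 0.3, "date": 0.2},
    "payment": {"due": 0.6, "received": 0.3, "failed": 0.1},
}

def sample_next_word(word: str) -> str:
    """Pick a plausible next word at random, weighted by probability."""
    candidates = NEXT_WORD_PROBS[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# The model is not "deciding" anything; it is sampling from a distribution.
print("invoice ->", sample_next_word("invoice"))
```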
And companies that assume AI models and products possess human-like understanding, emotion or reasoning abilities – and seek to exploit the technology beyond what it is currently capable of – may fail.
Education and communication about the nature of AI systems can help manage expectations and ensure responsible use. In an enterprise environment, deploying AI systems with a clear-eyed approach to quantifiable goals and the expected return on investment (ROI) is the key to success.
Learn more: Demystifying AI: the probability theory behind LLMs like OpenAI’s ChatGPT
AI is a tool, not a creature: it will perform tasks, not jobs
Scientific researchers and various government agencies have been working on forms of AI since the 1940s and 1950s, but the availability of big data to train AI models and advances in hardware such as AI chips and high-performance computing have led to major advances in the field in recent years.
The emergence of generative AI has led to conversational AI interfaces that use billions of data points and advanced probabilistic algorithms to mimic human writing and communication styles.
While chatbots like OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and others can each conduct conversations and even write code or generate images as if they were human, this is not the case – and their capabilities are far more limited.
“Imagination and new creation are not something AI is capable of… it only imitates what it has learned,” Ofir Krakowski, CEO and co-founder of Deepdub, told PYMNTS earlier this month.
But that doesn’t mean AI isn’t useful – just that it has never been more important to fully understand the limitations, decision-making processes and potential biases inherent to AI in order to effectively deploy and integrate the intelligent software.
For businesses to truly get the most out of AI, they need to understand how it works and be clear about the desired outcome – and this holds true in all areas where AI is applied.
For many tasks, especially those that are repetitive or involve analyzing large amounts of data, AI can be a much more economically viable solution than relying on human labor.
AI is already being deployed in areas such as materials science and drug discovery to augment and extend the capabilities of human researchers.
AI-based solutions can be particularly useful within finance and accounting offices, helping employees with tasks such as processing invoices, generating computer code, creating financial forecasts and preliminary budgets, completing audits, streamlining business correspondence, brainstorming and even researching tax and compliance guidelines.
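As a rough illustration of what one such back-office workflow might look like, the sketch below uses OpenAI’s Python SDK to ask a model to categorize an invoice. The invoice text, model name and expense categories are hypothetical, and any real deployment would validate the model’s answer before it touches the books.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Hypothetical invoice text; a real workflow would pull this from an
# invoicing or ERP system rather than a hard-coded string.
invoice_text = "Acme Corp - cloud hosting, March - $1,250.00 - Net 30"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute your own
    messages=[
        {
            "role": "system",
            "content": "Classify this invoice into one expense category "
                       "and extract the amount due. Answer in JSON.",
        },
        {"role": "user", "content": invoice_text},
    ],
)

# The model's answer still needs human review before posting to the ledger.
print(response.choices[0].message.content)
```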
See also: Demystifying AI capabilities for use in payments
2024 will be about increasing the accuracy of AI results
Viewing AI as human can lead to overestimating its capabilities – and underestimating its weaknesses.
“Technology can be scary in the abstract, but what if I told you that a robot will start ordering you around all day – that’s Google Maps – or the robot will tell you where to eat – that’s OpenTable – and the robot will even tell you who to hook up with and date – that’s Tinder,” Adrian Aoun, CEO of Forward, told PYMNTS in December. “When robots show up, they are at your service. AI is not at the service of its own mission; it is at the service of your mission.”
The next phase of AI for business must ensure that models can be audited by humans and that the decision process used is clear and can be refined.
“The use of AI by businesses must be precise and relevant – and it must be goal-oriented. Consumers can have fun with AI, but in a professional conversation or as part of a business workflow, the numbers need to be right and the answer needs to be correct,” Beerud Sheth, CEO of conversational AI platform Gupshup, told PYMNTS in November.
As PYMNTS has reported, the generative AI industry is expected to reach $1.3 trillion by 2032. But rather than a single omniscient super-AI that is better at everything humans can do, market growth will likely be driven and accelerated by a variety of different AIs with different strengths, each refined for various applications.
“There is a long way to go before there is a futuristic version of AI where machines think and make decisions. … Humans will be here for quite a while,” Tony Wimmer, head of data and analytics at JP Morgan Payments, told PYMNTS in March.