By Wayne Toms, CEO of GhostDraft
Hardly a day goes by without someone, somewhere speculating about artificial intelligence (AI) and wondering whether we, as businesses and society as a whole, can and should trust the technology.
Although much of the fear-mongering is based on hearsay and pure speculation, there are certainly legitimate concerns.
However, businesses and consumers have more than enough reason to believe – and trust – in the ability of AI to add significant value to their processes and their lives.
Customer communication management (CCM), infused with complementary AI, has shifted the goalposts of what businesses can achieve through personalized, accurate, and compliant customer communication at scale.
On the other hand, screaming headlines whip people into a state of alarm. Rumors surfaced that Sam Altman’s departure from OpenAI, before his return amid the furore, was linked to concerns among OpenAI board members that significant breakthroughs in artificial general intelligence (AGI) had been made and that Altman was rushing them into production without sufficient safeguards for society as a whole. The stuff of horror films.
AGI refers to an advanced form of AI capable of performing many activities as well as, or better than, humans. Many AI enthusiasts believe that AGI is closer than we think.
Let’s be clear: the real reason for Altman’s departure is not known and may not be related to AGI. It’s also important to point out that AI experts are divided on whether true AGI is imminent, or even likely.
I would argue that even if AI’s output comes close to that of the human mind, as when Deep Blue stunned the world by defeating a reigning world chess champion, there is a crucial difference between the approaches used by AI and those used by humans.
Sophisticated AI algorithms such as large language models (LLMs) do not understand the meaning of concepts the way humans do.
Computer algorithms turn words into numbers from the outset, which means they do not attach meaning to words in the way humans do.
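To make that concrete, here is a minimal, purely illustrative Python sketch (the vocabulary, token IDs, and vectors are invented for the example and do not come from any real model): an LLM first converts text into token IDs and then into vectors of numbers, and everything it does from that point on operates on those numbers rather than on words as humans read them.

```python
# Toy illustration (not any real tokenizer or model): words become integer
# token IDs, then vectors of numbers that the model actually computes with.
vocab = {"the": 0, "policy": 1, "covers": 2, "flood": 3, "damage": 4}

def tokenize(sentence: str) -> list[int]:
    """Map each word of a sentence to its token ID in the toy vocabulary."""
    return [vocab[word] for word in sentence.lower().split()]

# Invented 3-dimensional "embeddings": one numeric vector per token ID.
embeddings = {
    0: [0.1, 0.0, 0.2],
    1: [0.7, 0.3, 0.1],
    2: [0.2, 0.9, 0.4],
    3: [0.5, 0.5, 0.8],
    4: [0.6, 0.2, 0.9],
}

ids = tokenize("The policy covers flood damage")
vectors = [embeddings[i] for i in ids]
print(ids)      # [0, 1, 2, 3, 4]
print(vectors)  # the numbers the model operates on, not the words themselves
```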
More broadly, making AGI a reality will require significant advances in AI’s ability to understand the meaning and context of patterns in data, not just detect them.
The question then arises: what does all this mean for businesses that want to leverage the power of AI to improve their business processes and customer service?
As a starting point, it is interesting to read the view of Kevin Scott, CTO of Microsoft, on Microsoft’s AI tool for its Office suite, in an article in The New Yorker.
“Office Copilots seem both impressive and mundane. They make mundane tasks easier, but they are far from replacing human workers. They seem far from what science fiction novels predicted, but they also feel like something people could use every day,” he reportedly said.
The article goes on to say that if Scott, Microsoft CEO Satya Nadella, and OpenAI CTO Mira Murati get what they want, then “AI will continue to gradually infiltrate our lives, at a pace gradual enough to accommodate the precautions required by short-term pessimism, and only as quickly as humans are able to assimilate how this technology should be used.”
There remains the possibility that things will spiral out of control and that the gradual rise of AI will prevent us from realizing these dangers until it is too late. But for now, Scott and Murati are confident they can balance advancement and security.
It’s a good approach. What about the rest of us? Companies have a responsibility to serve their shareholders, but also their customers and society as a whole.
Taking a thoughtful line on commercializing AI capabilities requires regularly reviewing the potential benefits and risks.
Business leaders must uphold the notion of good corporate citizenship in the way they serve customers and develop products. Regulation will play a major role.
It appears that European lawmakers have concluded marathon discussions to put in place a regulatory framework for AI.
The framework will likely maintain a list of all AI models considered to pose systemic risk, and general-purpose AI providers will be required to publish summaries of their algorithms and the content used to train them.
The EU is leading the global regulatory response to AI and could become the model that other governments follow.
Returning to CCM, GhostDraft’s use of AI focuses specifically on supporting document generation to quickly and accurately capture contractual agreements between businesses and their customers.
CCM is an absolutely crucial element of modern customer communication. Gone are the days when document automation was enough.
The intelligent, calibrated use of AI is a prime example of how technology is transforming businesses’ ability to create personalized, compliant, well-designed, dynamic communication at speed and at scale.
If a business cannot communicate clearly, quickly, and accurately with its customers, it will lose them. GhostDraft uses AI in the design and development of communication templates, and generative AI to analyze sample data, informing the structure and production of future documents. This improves the readability and completeness of key customer documents and forms so that customers can feel comfortable in their business interactions.
AI can improve customer service and interaction and make more relevant recommendations to customers, ensuring they receive products and services tailored to their needs.
Additionally, the use of AI can save businesses money, which can then translate into more accessible and affordable services for customers. These benefits breed trust.
On the other hand, we already know that AI doesn’t understand context the way humans do. It doesn’t understand nuance, hope, or fear.
If we allow AI unfettered control over business operations, it will miss things, and that is where customer trust will be damaged. The term “AI hallucination” refers to cases where AI tools generate outputs or identify patterns in data that are nonexistent or nonsensical.
There are practical ways to address this problem beyond regulation. Businesses can limit the scope of a chatbot to simpler questions and route complex or nuanced business functions to humans, as sketched below.
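As a rough illustration of that kind of scope-limiting (the intent names and confidence threshold below are assumptions made up for the example, not a description of any particular product), a simple escalation rule might look like this:

```python
# Illustrative sketch only: a simple escalation rule for a customer-facing chatbot.
# Simple, high-confidence requests are answered automatically; anything complex,
# nuanced, or uncertain is handed to a human agent.
SIMPLE_INTENTS = {"opening_hours", "document_status", "password_reset"}
CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off for this example

def route(intent: str, confidence: float) -> str:
    """Decide whether the bot answers or a human takes over."""
    if intent in SIMPLE_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return "bot"
    return "human"

print(route("document_status", 0.93))  # bot
print(route("policy_dispute", 0.97))   # human: outside the bot's narrow scope
print(route("password_reset", 0.40))   # human: the model is not confident enough
```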
Giving users more control is also important. Businesses can achieve this by letting users decide how their data is used and to what extent AI is involved in their interactions, and by allowing them to easily enable or disable AI-based features.
When AI practitioners build and train models, they would do well to selectively record and audit the AI’s recommendations, and use those records to refine their models.
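What “selectively record and audit” could look like in practice is sketched below; this is only an assumption-laden illustration (the field names, JSON-lines log, and 10% sampling rate are invented for the example), showing one way to sample recommendations into a log that humans can review and later use to refine a model.

```python
# Illustrative sketch only: selectively recording AI recommendations for later
# human review. The field names and sampling rate are assumptions for the example.
import json
import random
from datetime import datetime, timezone

AUDIT_SAMPLE_RATE = 0.10  # record roughly 10% of recommendations

def maybe_audit(user_id: str, prompt: str, recommendation: str,
                log_path: str = "ai_audit.jsonl") -> None:
    """Append a sampled recommendation to a JSON-lines audit log for human review."""
    if random.random() > AUDIT_SAMPLE_RATE:
        return
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "recommendation": recommendation,
        "reviewed": False,  # flipped to True once a human has checked the entry
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

maybe_audit("cust-042", "Summarize my policy renewal options", "Option A: ...")
```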
Responsible companies, for their part, must clearly communicate to customers how AI is used in their products or services, in a way that lay users can understand.