At GyanDhan, a non-banking financial company that offers education loans, content writers found that their output tripled when they used the artificial intelligence (AI) tool ChatGPT. Co-founder and CEO Ankit Mehra is a big fan of these AI tools – known as large language models (LLMs) – that generate text in response to a question or prompt.
However, what is stopping him from deploying it more widely, he says, are concerns ranging from the prohibitive price, to the tool’s propensity to generate erroneous content, to gray areas such as potential copyright violations, ethics, and privacy in the use of proprietary data.
LLMs are built on machine learning, a branch of AI that allows computers to learn from training data. Besides ChatGPT, from US-based AI research firm OpenAI, commercially available enterprise LLMs include Google’s Gemini and Microsoft’s Copilot.
“Bias is a major concern; the datasets used to train these AI models are limited and can produce errors,” explains Mehra. He cites the recent example of Google’s Gemini calling Indian Prime Minister Narendra Modi’s policies fascist while refraining from giving similarly definitive answers about other leaders.
“At this point, we are looking at using LLMs as basic workplace hygiene,” says Mehra.
Around 15% of GyanDhan’s workforce, which includes developers, marketing staff, and content writers, currently uses an enterprise LLM.
Treading with caution
Privacy is a major concern when using commercial AI tools. “As an NBFC, we cannot analyze proprietary data using commercial LLMs. We develop our own LLMs using open-source models,” says Mehra.
At first glance, fintechs – financial services companies powered by digital technology – may seem like the ideal candidates to adopt generative AI as quickly as possible.
However, the reality is far more complicated. Many companies interviewed for this report, including a payments bank and a telecommunications company, reported limited deployment of generative AI. “The hype around AI tools is overblown,” says one executive. “There is no fire in the smoke.”
In a recent IBM study, 74% of Indian IT professionals surveyed said their company was exploring some form of AI deployment, but most projects were stuck at the pilot stage.
More importantly, even as the biggest AI players begin working on enterprise tools built on their generative AI technology, the tech companies surveyed are primarily investing in R&D, training, and development of proprietary LLMs. According to the report, bias, lack of expertise, ethical concerns, and lack of data provenance continue to be barriers to enterprise adoption of LLMs.
IT or non-IT use
IBM’s report largely agrees with industry experts, who place India at the very beginning of the so-called hockey-stick growth curve promised by AI.
The deployment of AI in businesses is still in its nascent stage, says Sachin Arora, partner and head of Lighthouse-Data, AI and Emerging Technologies, KPMG India.
“Indian businesses are using AI to make existing workflows more efficient through automation or chatbots; the second-level paradigm shift for IT and CRM (customer relationship management) businesses is still far away, when entire workflows will be redesigned to accommodate the deployment of AI-based enterprise applications worldwide. We will see some of these changes in the next two or three years; but, for now, big IT companies are content with small proofs of concept, learning, and training their staff in AI, even as Silicon Valley works to create truly revolutionary products for businesses,” he says.
Arora believes that India’s engagement with AI will mirror previous technology cycles, with the IT sector being its largest user and non-tech companies deploying it in a limited manner.
Sanchit Vir Gogia, chief analyst at Greyhound Research, says 2023 has been a year of pilot projects in India.
“But as with any technology, it will take two to three years for AI to become widespread and adopted by different teams within an organization. This is largely because the companies offering AI tools are themselves still maturing. Additionally, companies are trying to understand the implications of India’s evolving data protection laws on AI deployments, which require large amounts of data,” he explains.
Value proposition
Companies like Microsoft are trying to integrate AI into boardroom conversations. “I have yet to meet a board, CEO or management team that is not curious and excited about the potential of AI for their customers, their own business and their employees. We did a study that found that for every dollar spent on AI, customers get back 3.8 dollars,” Puneet Chandok, president, Microsoft India and South Asia, said in a recent interview.
Users will wait to see the value proposition that AI can deliver before they are ready to adopt it, says Abhigyan Modi, senior vice president of Document Cloud at Adobe. While the technology is seen as cool, there is also the question of how productive it can actually be, he says. “If our end users can see the value of the technology and it allows them to realize real savings or generate value, I see no barrier to its adoption,” he says.