S&P 1200 companies that view data and analytics as strategic outperform their peers 80% of the time, Mark Beyer, research vice president and distinguished analyst at Gartner, said during the keynote address at the Gartner Data and Analytics Summit in Sydney.
Rita Sallam, vice president and distinguished analyst at Gartner, added that there is a 30% improvement in financial performance when comparing organizations in the 75th percentile in terms of Data and Analytics Maturity with those in the 25th percentile.
While senior management generally understands the importance of culture and strategy for data and analytics adoption, they tend to underestimate governance and management of the data and analytics function, according to Beyer.
Part of the problem is that business people tend to think of governance as a control issue and a drag on progress. But as Sallam explains, it’s about improving execution, in part by enabling the use of data to improve business performance.
One way to do this is to implement and use data products. Key characteristics of data products are that they are easy to find, ready to use, up-to-date and governed for appropriate use, Beyer said.
Data and analytics practice leaders must be prepared to shout their successes – including defensible estimates of their value – from the rooftops, Sallam said.
But it’s important to use the right value metrics, she cautioned. Rather than looking at return on investment, it’s better to look at the business value projects generate: “Connect technical outcomes to business outcomes in terms the business can understand,” such as the effect on executive bonuses, she suggested.
GenAI Opportunities
When it comes to generative AI (GenAI), Beyer said Gartner sees opportunities as falling into four categories: front office, back office, products and services, and core capabilities. GenAI can be used to deliver incremental improvements at one extreme, or to be a game changer at the other. The latter approach is high risk, he warned, but it also offers big potential rewards.
Another way to categorize opportunities is “defend, extend, or upend.” Luke Ellery, vice president and analyst at Gartner, explains that “defend” projects are characterized by incrementalism, marginal gains, and micro-innovations. These projects typically cost between $20,000 and $100,000.
“Extend” projects aim to increase market size, reach, revenue, or profitability. Finally, an “upend” initiative aims to change the market players or the basis of competition – but at $5 million to $100 million, “it’s very expensive and the risk is very high,” Ellery said.
Actual costs can come in 500% or even 1,000% above estimates, Ellery warns. Uncertainties include the possibility that no further data centers will be permitted in a given geographic area, the cost of data preparation, the true cost of using GenAI application programming interfaces (APIs), and user habits, such as iterating toward a final answer, which can drive costs exponentially higher.
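The compounding effect of iterative use can be sketched with a simple cost model. All prices and usage figures below are illustrative assumptions for demonstration, not Gartner or vendor data:

```python
# Illustrative GenAI API cost model. The per-token price and iteration
# pattern are invented assumptions, not real vendor pricing.

PRICE_PER_1K_TOKENS = 0.03  # assumed blended input/output price (USD)

def query_cost(tokens_per_call: int, calls: int) -> float:
    """Cost of a batch of API calls at the assumed token price."""
    return calls * tokens_per_call / 1000 * PRICE_PER_1K_TOKENS

# A user who accepts the first answer: one call of ~2,000 tokens.
single_shot = query_cost(2000, 1)

# A user who iterates five times, re-sending the growing conversation
# each round, roughly doubling the context per iteration.
iterative = sum(query_cost(2000 * (2 ** i), 1) for i in range(5))

print(f"single shot: ${single_shot:.2f}")
print(f"iterative:   ${iterative:.2f} ({iterative / single_shot:.0f}x)")
```

Under these assumptions, five doubling iterations cost roughly 30 times a single-shot query, which is the kind of usage-driven blowout Ellery describes.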
But the risk isn’t just about cost. Ellery discussed data quality and ownership, the need to monitor and adjust a model’s performance, and the lack of standard contractual clauses for the provision and use of GenAI models.
Fortunately, there are tools available to help monitor AI risks and costs. ActiveFence, Arthur, TruEra, and Fiddler are among the tools Ellery mentioned. Gartner clients can also use the company’s AI and GenAI Cost Calculator, which takes into account more than 100 detailed cost elements and compares available options.
In addition, a formal, centralized FinOps (financial operations) practice may help flatten the cost curve associated with GenAI, Ellery suggested.
The flip side of cost is often thought of as benefit. But Ellery points out that benefit does not automatically equate to value. For example, global law firm Ashurst compared the performance of human lawyers with that of AI-assisted lawyers. While the AI-assisted lawyers were faster, the cost of the AI meant there was no net productivity gain.
The human factor
Successful AI adoption requires “purpose-driven humans,” Sallam said, adding that this can be achieved by reorganizing the operating model, extending data literacy to AI literacy, and establishing new leadership paradigms.
Den Hamer called on companies to ensure their employees are ready to use AI, which means investing in AI literacy for business and technical employees, from management to the rank and file. And that should include teaching them how to avoid AI-related mistakes.
Peter Krensky, a senior analyst at Gartner, pointed out two things about AI that everyone should know: “GIGO (garbage in, garbage out) still applies, and large language models sometimes ‘hallucinate’, so you have to spot what’s happening and deal with the resulting situation.”
Beyond that, different jobs require different skills and proficiency levels when it comes to AI. For example, an executive doesn’t need to know anything about AI engineering but should have a good understanding of how to benefit from AI.
Organizations will therefore need to identify these different personas and provide targeted training to each group, though Krensky believes many organizations will “massively underperform” on this front.
Highlighting the synergy that can be unlocked by combining human intelligence and insights with AI, den Hamer recommends investing in data management tools so that data can be made AI-ready, supporting business analysts with pattern discovery and communication capabilities, adopting self-learning AI systems for broader decision automation, and investing in capabilities such as natural language processing to expand the analytics user base.
Trust is another important consideration. Employees need to be able to trust the information provided by GenAI systems, know that privacy issues are being handled properly, and that they are being told the truth about the implications of this information for their jobs.
Processes also need to be rethought and role expectations should reflect human-AI collaboration, not AI substitution, den Hamer said. End users should be empowered to make decisions supported by analytics, but continuous monitoring is needed to avoid self-serving analytics.
Innovation remains important, so employees should have time to focus on side projects that have potentially high impact, but those projects must be aligned with business outcomes, he added.
AI fatigue
Krensky predicts that AI fatigue will be the biggest issue in 2025, noting that GenAI isn’t good for everything. Citing Gartner’s hype cycle for GenAI, he said the technology is still at the peak of inflated expectations and has yet to reach the trough of disillusionment, much less the plateau of productivity. “Vendors tend to overhype,” he warned.
“We will eventually stop talking about AI all the time, but it remains to be seen whether that is because we take it for granted, because it doesn’t live up to the hype, or because ‘intelligence’ is being redefined as ‘augmented intelligence,’” he said.
Organizations will also face challenges in scaling AI, as it requires a combination of technology, data, operations, organization, skills, governance and risk management, he noted.
Near-term milestones on the AI journey include the development of AI agents that use various technologies and can be integrated into a more sophisticated system.
For example, an expert agent could be built on a combination of a large language model, a large-scale action model, and a causal model. The expert agent can rely on a planning agent that combines predictive machine learning with optimization algorithms, while being kept in check by a rules-based agent that provides guarantees.
The goal is to use AI technologies tailored to particular aspects of the overall task at hand, in part to make AI more autonomous, adaptive and reliable.
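A rough sketch of this layered pattern might look as follows. Every class name, the example goal, and the allowlist rule are invented for illustration; real agent frameworks and the models Gartner describes are far more sophisticated:

```python
# Minimal sketch of composing specialized agents: a planning agent
# proposes a plan, and a rules-based agent provides hard guarantees by
# vetoing anything off an allowlist. All names and rules are invented.

from dataclasses import dataclass

class PlanningAgent:
    """Stand-in for predictive ML plus optimization: among candidate
    plans predicted to meet the goal, pick the shortest."""
    def plan(self, goal: str) -> list[str]:
        candidates = {
            "fulfil order": [
                ["check stock", "ship"],
                ["check stock", "backorder", "ship"],
            ],
        }
        return min(candidates.get(goal, [["noop"]]), key=len)

@dataclass
class RulesAgent:
    """Provides guarantees: rejects any plan step not on an allowlist."""
    allowed: frozenset
    def approve(self, steps: list[str]) -> bool:
        return all(step in self.allowed for step in steps)

def expert_agent(goal: str) -> list[str]:
    steps = PlanningAgent().plan(goal)
    guard = RulesAgent(allowed=frozenset({"check stock", "ship", "backorder", "noop"}))
    if not guard.approve(steps):
        raise ValueError("plan rejected by rules-based guard")
    return steps

print(expert_agent("fulfil order"))  # shortest plan the guard approves
```

The point of the composition is the division of labor: the planner is free to optimize, while the rules-based layer is the only component trusted to enforce hard constraints.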
Another step is the use of simulations and other types of synthetic data to train models. This has at least three advantages: there is not always enough real data for a particular project, real data can be biased, and synthetic data carries no privacy implications.
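A toy version of the simulation approach can make the idea concrete. The “sensor” model, its parameters, and the training method below are all invented for illustration:

```python
# Train a trivial classifier entirely on simulated data: no real
# records are used, so there is no privacy exposure, and the classes
# can be generated in balance -- something real failure logs rarely
# provide. The sensor model and thresholds are invented.

import random

random.seed(42)  # make the simulation reproducible

def simulate_reading(faulty: bool) -> float:
    """Synthetic sensor reading: faulty units run hotter on average."""
    base = 80.0 if faulty else 60.0
    return random.gauss(base, 5.0)

# Generate a perfectly balanced synthetic training set.
data = [(simulate_reading(f), f) for f in (True, False) for _ in range(500)]

# "Train": place the decision threshold midway between the class means.
faulty_mean = sum(x for x, f in data if f) / 500
ok_mean = sum(x for x, f in data if not f) / 500
threshold = (faulty_mean + ok_mean) / 2

accuracy = sum((x > threshold) == f for x, f in data) / len(data)
print(f"threshold={threshold:.1f}, accuracy={accuracy:.1%}")
```

Because the simulator controls the class balance and contains no personal data, it sidesteps both the scarcity and the privacy problems at once, at the cost of only being as realistic as the simulation itself.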
Finally, AI regulation will increase. The European Union’s AI Act went into effect during the week of the conference and will pave the way for other jurisdictions, Krensky predicted, so it’s time to start thinking about risk categories. The law identifies four: unacceptable (prohibited), high (regulated), limited (transparency requirements), and minimal (no requirements).
Other related actions suggested by Gartner include putting appropriate safeguards, tools, and training in place; establishing a cross-functional AI council within the organization; and promoting an understanding that responsible AI is a philosophy, not a checklist.