Lowest common denominator (LCD) data science is the unthinking variety of data science that neither questions the prevailing wisdom nor attempts to counter it. The sad reality is that LCD data science is much more common and triggers far more damaging side effects than the alternatives.
Consider some symptoms of a society suffering from the current dominance of LCD data science:
The wow factor of chatbots and the willingness to be fooled by the allure of generative AI
At this year’s South by Southwest conference, John Maeda, Microsoft’s vice president of AI and design, observed that chatbots have been fooling humans since the 1960s. Their conversation is often cryptic, leading humans to fill in the gaps with assumptions that don’t reflect what the AI actually does or why. As a result, the bots can appear smarter than they actually are.
Maeda said chatbots have for decades been able to extend conversations simply by picking out keywords in what a human says and returning those keywords as questions phrased to imply that the bot is genuinely curious.
It’s not difficult for robots to borrow the therapist’s approach to getting a patient to talk about their problems. For example, the robot hears the human mention “mother”. The question in response becomes: “Tell me about your mother.”
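The keyword-reflection trick described above, made famous by 1960s-era chatbots such as Joseph Weizenbaum’s ELIZA, can be sketched in a few lines. The keyword templates below are hypothetical illustrations, not Maeda’s examples:

```python
import re

# ELIZA-style keyword reflection: spot a keyword in the human's
# utterance and echo it back as a question that implies curiosity.
# These templates are illustrative, not from any real chatbot.
REFLECTIONS = {
    "mother": "Tell me about your mother.",
    "father": "Tell me about your father.",
    "work": "What about your work concerns you?",
}

DEFAULT = "Please, go on."  # fallback that keeps the conversation moving

def reply(utterance: str) -> str:
    """Return a canned question for the first known keyword found."""
    for word in re.findall(r"[a-z']+", utterance.lower()):
        if word in REFLECTIONS:
            return REFLECTIONS[word]
    return DEFAULT

print(reply("I had an argument with my mother yesterday"))
# -> Tell me about your mother.
```

No model of meaning is involved: the bot never represents what a “mother” is, yet the echoed question can feel attentive, which is exactly the illusion Maeda describes.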
Lately, even some credentialed scientists, impressed by the recent success of generative AI, have claimed that these systems seem to be “sentient.” Skeptics, meanwhile, counter that the bots are really just doing an elaborate form of autocomplete-style guesswork, and that they still hallucinate.
Just because chatbots provide helpful answers to questions doesn’t mean they understand what the answers they provide mean…or how they relate to the nuances behind the question.
How AI-powered automation can reduce overall business performance
In January 2024, the International Monetary Fund (IMF) published a discussion note titled “Gen-AI: Artificial Intelligence and the Future of Work.” One of the observations offered by the authors was this:
In advanced economies, around 60% of jobs are exposed to AI…. Of these, around half could be negatively affected by AI, while the rest could benefit from increased productivity through AI integration.
One way to read this type of claim with a critical eye is to think about current automation-based practices and how the quality of these processes has further declined now that AI-driven software is the norm.
Consider the worst hiring trends typical of an HR department and how they are amplified by AI. At a time when popular business books like David Epstein’s Range: Why Generalists Triumph in a Specialized World have proclaimed the value of generalists, the vast majority of online job postings are designed to filter for a list of a dozen or more specialties. Generalists can be helpful, but how likely is their application to reach the hiring manager for review?
It is much more likely that applications from abstract-thinking generalists will be filtered out by AI, precisely because these generalists may not have X years of experience in specialization Y using software package Z. A more thoughtful AI, on the other hand, would avoid reducing hiring to a simple exercise of matching the text of a CV against a list of requirements.
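The crude text-matching screen criticized above can be sketched as follows. The required keywords, CV text, and threshold are all hypothetical:

```python
import re

# A naive resume screen: count how many required keywords appear
# verbatim in the CV text. Keywords and threshold are hypothetical.
REQUIRED = {"python", "kubernetes", "spark", "airflow", "terraform"}

def passes_screen(cv_text: str, required: set, min_hits: int = 4) -> bool:
    """True if the CV mentions at least min_hits of the required keywords."""
    words = set(re.findall(r"[a-z]+", cv_text.lower()))
    return len(required & words) >= min_hits

generalist_cv = "systems thinking, architecture, python, cross-domain synthesis"
# The generalist matches only one keyword and is filtered out,
# regardless of actual ability.
print(passes_screen(generalist_cv, REQUIRED))  # -> False
```

Nothing in this filter can recognize transferable skill or abstract-thinking ability; it rewards only verbatim overlap with the posting, which is the failure mode the paragraph above describes.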
Repeating the lie that a difficult problem is solved doesn’t make it so
Timnit Gebru, founder and executive director of the Distributed AI Research Institute, recently shared a video clip from a 1984 episode of Computer Chronicles, a Silicon Valley PBS affiliate television program, as an example of the kind of hype that has surrounded AI for forty years or more. During the program, one of the interviewed consultants proudly announced: “We have reached a turning point, where it is no longer very expensive or very difficult for people without technical training to build (AI) systems and to apply them usefully.”
The truth is that systems thinking is difficult, and most companies fail to support a forward-looking architectural vision. Systems thinking should be a salaried discipline in its own right, requiring generalists who can abstract, synthesize, and lead the way to the data-centric architecture that business-outcome-driven process improvement demands. To tackle AI, business leaders must fund and maintain 20 different roles, not just four, several of which involve architects at different levels, and these roles must represent a full range of intellectual diversity, encompassing thinkers of different styles.
Improving AI requires a radically different approach to data and knowledge management
Most of the skills required by these roles already exist within larger companies. The problem is that people with these skills are siloed into departments dedicated to data management, content management, and knowledge management.
Instead, people from these three departments could unite around a single, unified approach to managing structured and unstructured data, now achievable through a knowledge graph-based data architecture. Leaders must break down the silos in their organizations and empower visionary architects to implement such a unified approach. To fund the effort, executives can reallocate budgets from underutilized and siloed application suites.
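The core idea of a knowledge-graph-based architecture is that facts from structured systems and facts extracted from unstructured documents land in one shared graph and become queryable together. A minimal sketch, with entirely hypothetical entities and predicates:

```python
# Toy triple store illustrating a knowledge-graph approach: facts from
# a structured HR system and from an unstructured policy document share
# one graph. All entity and predicate names are hypothetical.
triples = set()

def add(subject: str, predicate: str, obj: str) -> None:
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the non-None fields (basic pattern match)."""
    return sorted(
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    )

# From a structured HR database:
add("alice", "worksIn", "DataManagement")
# Extracted from an unstructured policy document:
add("DataManagement", "ownsProcess", "MasterDataGovernance")

# One query now spans both sources:
print(query(subject="alice"))
```

In practice, a standards-based stack (RDF triples queried with SPARQL) would replace this toy set, but the unifying principle is the same: one graph for data that today sits in three separate departments.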