Statistics Canada recently published a detailed report estimating which professions are likely to be affected by artificial intelligence in the coming years.
It ends with an optimistic message for education and healthcare professionals, suggesting that not only should they keep their jobs, but their productivity will be improved thanks to advances in AI. However, the outlook is bleaker for those in the finance, insurance, information and culture sectors, who are expected to see their careers derailed by AI.
Should doctors and teachers now breathe easy, while accountants and writers panic? Maybe, but not because of the data in this report.
What Statistics Canada is proposing here is a relatively meaningless exercise. It assumes that the determining factor is technology itself and how it complements human effort, rather than economic models designed to undermine our common humanity. In making this mistake, the report becomes another victim of corporate optimism embraced at the expense of uglier business realities.
High exposure to AI hype
Companies touting innovations or products that tap into our greatest hopes and fears are nothing new. What might be new is the scale of Big Tech's ambitions for AI, whose impact seems to touch every sector.
It is therefore not surprising that fears are widespread about which industries and sectors will be replaced by AI. It is also not surprising that Statistics Canada is seeking to allay some of these fears.
The study groups jobs into three categories:
- those with high exposure to AI and low complementarity, meaning humans may compete directly with machines for these roles;
- those with high exposure to AI and high complementarity, where automation could improve the productivity of workers who remain essential to the job;
- and those with little exposure to AI, for whom replacement does not yet appear to be a threat.
The report’s authors say their approach – examining the relationship between exposure and complementarity – is superior to older methods that examined manual versus cognitive, or repetitive versus non-repetitive, tasks when analyzing the impact of automation on workplaces.
However, by focusing on these categories, the study continues to buy into the corporate hype. These categories of analysis were developed in 2021. Over the past few years, new windows have opened, giving us a clearer view of how Big Tech is racing to deploy AI. The recently revealed unethical tactics render the predictive categories of exposure and complementarity meaningless.
AI is often driven by humans
Recent developments have shown that even jobs with high exposure to AI and low complementarity with AI still rely on humans behind the scenes to perform essential work. Take Cruise, the self-driving car company acquired by General Motors in 2016 for more than a billion dollars. Driving a taxi is a job with high exposure to AI and low complementarity with AI: we assume that a taxi is either controlled by a human driver or, if driverless, by AI.
It turns out that Cruise’s “self-driving” taxis in California were not, in fact, driverless. There was remote human intervention every few kilometers.
If we were to analyze this work precisely, three categories would need to be considered: on-board human drivers, remote human drivers, and autonomous vehicles controlled by AI. The second category makes complementarity quite high. But the fact that Cruise, and likely other tech companies, tried to keep this a secret raises a whole new set of questions.
A similar situation occurred at Presto Automation, a company that specializes in AI-powered drive-thru orders for chains like Checkers and Del Taco. The company describes itself as one of the “largest work automation technology suppliers” in the industry, but it has been revealed that much of its “automation” is driven by human workers based in the Philippines.
Software company Zendesk presents another example. It used to charge customers based on how often the software was used to try to resolve customer problems. Now Zendesk only charges when its proprietary AI completes a task without human intervention.
Technically, this scenario could be described as high exposure and high complementarity. But do we want to support a business model in which the customer’s first point of contact is likely to be frustrating and ineffective? Especially knowing that companies will roll the dice on this model because they won’t be charged for those failed interactions?
Examine business models
As things stand, AI presents more of a business challenge than a technological one. Government institutions like Statistics Canada must be careful not to amplify the media hype surrounding it. To create effective policies, policymakers should base their decisions on critical analysis of how businesses actually integrate AI, rather than on exaggerated predictions and agendas that may never fully come to fruition.
The role of technology should be to support human well-being, not simply to reduce labor costs for businesses. Historically, every wave of technological innovation has raised concerns about job displacement. The fact that future innovations may replace human labor is neither new nor daunting; rather, it should prompt us to think critically about how these technologies are used and who benefits from them.
Policy decisions must therefore be based on precise and transparent data. Statistics Canada, as the country’s main data provider, has a vital role to play in this regard. It must offer a clear and impartial view of the situation, ensuring that policymakers have the right information to make informed decisions.