An arms race and a wrecking ball
Competing companies like OpenAI have worked on equivalent tools but have not yet made them public. This is something of an arms race, as these tools are expected to generate substantial revenue within a few years if they progress as hoped.
It is believed that these tools could eventually automate many menial tasks in office jobs. They could also be useful to developers, since they could “automate repetitive tasks” and streamline laborious QA and optimization work.
It’s long been part of Anthropic’s pitch to investors: its AI tools could handle much office work more efficiently and affordably than humans can. Public testing of the Computer Use feature is a step toward that goal.
We are of course familiar with the ongoing debate around these types of tools, between “it’s just a tool that will make people’s jobs easier” and “it will put people out of work in every sector like a wrecking ball” – both of these things could happen to some extent. The question is what the ratio will be, and it may vary depending on the situation or sector.
There are, however, many legitimate concerns regarding the widespread deployment of this technology. Admittedly, Anthropic tried to anticipate some of these issues by putting safeguards in place from the start. The company gave some examples in its blog post:
Our teams have developed classifiers and other methods to detect and mitigate this type of abuse. Given the upcoming US elections, we are on alert for attempted abuses that could be perceived as undermining public confidence in electoral processes. Although computer use is not sufficiently advanced, nor capable of operating at a scale, that would present heightened risks compared to existing capabilities, we have put measures in place to monitor when Claude is asked to engage in election-related activities, as well as systems to steer Claude away from activities such as generating and posting content on social media, registering web domains, or interacting with government websites.
These safeguards are unlikely to be perfect: there may be creative ways around them, and other unintended consequences or abuses remain to be discovered.
For now, Anthropic is publicly testing Computer Use to see what issues arise and to work with developers to improve its capabilities and find beneficial uses.