Claude 3.5’s most interesting update is that it pushes AI from “can chat” to “can operate.” In the public beta, Claude 3.5 can view what’s on your screen, move the cursor, click buttons, and type into input fields to complete multi-step tasks. Below is an editor’s breakdown of what’s changed, so you can judge whether it’s worth jumping in right away.
What Claude 3.5’s new “Computer Use” can do
“Computer use” means you give Claude 3.5 a goal and it works toward it the way a person would, by following the on-screen UI: it takes a screenshot, decides where to click and what to type, acts, and then looks at the screen again. It’s well suited to multi-step work, such as filling in items one by one in a web admin panel, or carrying information from Page A into a form on Page B. Anthropic is also clear that this is still an experimental capability—Claude 3.5 may occasionally misclick or skip a step, so you’ll need to supervise it and correct it as it goes.
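That look-decide-act cycle can be sketched as a simple loop. Everything below is illustrative, not Anthropic’s actual API: `FakeDesktop` and the scripted “model” are hypothetical stand-ins for a real screen/input driver and for Claude’s tool-use responses.

```python
# A minimal sketch of the observe-decide-act loop behind "computer use".
# FakeDesktop and the scripted "model" are stand-ins, not Anthropic code:
# in the real beta, the model picks the next action from a screenshot.

class FakeDesktop:
    """Stand-in for a real screen/mouse/keyboard driver."""
    def __init__(self):
        self.log = []      # record of actions taken, for inspection
        self.fields = {}   # simulated form state

    def screenshot(self):
        # What the model would "see" on each iteration of the loop.
        return {"fields": dict(self.fields)}

    def click(self, target):
        self.log.append(("click", target))

    def type_text(self, field, text):
        self.fields[field] = text
        self.log.append(("type", field, text))

def run_agent(desktop, decide, max_steps=10):
    """Loop: look at the screen, ask the model for an action, apply it."""
    for _ in range(max_steps):
        action = decide(desktop.screenshot())  # model decides from the screenshot
        if action["kind"] == "done":
            break
        elif action["kind"] == "click":
            desktop.click(action["target"])
        elif action["kind"] == "type":
            desktop.type_text(action["field"], action["text"])
    return desktop

# Scripted "model": fill one form field, click submit, then stop.
script = iter([
    {"kind": "type", "field": "name", "text": "Ada"},
    {"kind": "click", "target": "submit"},
    {"kind": "done"},
])
desktop = run_agent(FakeDesktop(), lambda screen: next(script))
print(desktop.fields)  # {'name': 'Ada'}
```

The step cap (`max_steps`) matters in practice: because the model can misclick or skip a step, a real harness bounds the loop and lets a human review the action log.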
Availability: API access with multi-platform support
Right now, computer use ships as a public beta on the Anthropic API, so developers can build with it directly. The same Claude 3.5 model is also available on Amazon Bedrock and Google Cloud’s Vertex AI, which makes it easier for companies to slot into existing cloud setups. For teams, that means Claude 3.5 isn’t just a demo—it’s the kind of capability that can be wired into workflow systems for automation.
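In API terms, computer use is exposed as a beta tool attached to a normal messages request. A hedged sketch of the payload follows—the tool type `computer_20241022` and the beta flag `computer-use-2024-10-22` are the names from the October 2024 public beta, so check the current docs before relying on them; no network call is made here.

```python
# Sketch of a computer-use request payload for the Anthropic API beta.
# Names ("computer_20241022", the beta header value) are from the
# October 2024 public beta and may change; verify against current docs.
# This only builds the payload dict -- nothing is sent.

computer_tool = {
    "type": "computer_20241022",   # beta tool exposing screen/mouse/keyboard
    "name": "computer",
    "display_width_px": 1280,      # resolution of the screen you capture
    "display_height_px": 800,
}

request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [computer_tool],
    "messages": [
        {"role": "user",
         "content": "Open the admin panel and add the items from my list."},
    ],
    # The beta also requires the header: anthropic-beta: computer-use-2024-10-22
}

print(sorted(request["tools"][0]))
```

With the Python SDK this payload maps onto `client.beta.messages.create(...)` plus the beta flag; on Bedrock and Vertex AI the same tool schema is passed through each cloud’s Anthropic model endpoint, which is what makes the multi-platform integration story straightforward.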

