Researchers from Anthropic said they recently observed the "first reported AI-orchestrated cyber espionage campaign" after detecting China-state hackers using the company's Claude AI tool in attacks on dozens of targets. Outside researchers are much more measured in describing the significance of the discovery.

Anthropic published the reports on Thursday here and here. In September, the reports said, Anthropic discovered a "highly sophisticated espionage campaign," carried out by a Chinese state-sponsored group, that used Claude Code to automate up to 90 percent of the work. Human intervention was required "only sporadically (perhaps 4-6 critical decision points per hacking campaign)." Anthropic said the hackers had employed AI agentic capabilities to an "unprecedented" extent.

"This campaign has substantial implications for cybersecurity in the age of AI 'agents' (systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention)," Anthropic said. "Agents are valuable for everyday work and productivity, but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks."

"Ass-kissing, stonewalling, and acid trips"

Outside researchers weren't convinced the discovery was the watershed moment the Anthropic posts made it out to be. They questioned why these sorts of advances are often attributed to malicious hackers when white-hat hackers and developers of legitimate software keep reporting only incremental gains from their use of AI.

"I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can," Dan Tentler, executive founder of Phobos Group and a researcher with expertise in complex security breaches, told Ars. "Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?"