A new assessment from Anthropic has raised concerns about the accelerating pace at which AI systems can autonomously compromise smart contracts, underscoring how emerging models are reshaping blockchain security. The firm evaluated advanced agents in simulated environments and found that they successfully exploited seventeen of thirty-four recently deployed contracts, extracting millions of dollars in test funds. Broader testing across more than four hundred contracts issued between 2020 and 2025 showed that AI models replicated two hundred seven exploits, generating over five hundred million dollars in simulated returns. More alarmingly, Anthropic reported that its agents detected two previously unknown vulnerabilities in nearly three thousand newly deployed contracts, indicating that autonomous systems can identify weaknesses ahead of human auditors. The vulnerabilities ranged from authorization flaws to unsafe read-only functions and incomplete validation checks in fee logic. Researchers said these findings demonstrate how AI tools can rapidly scale blockchain exploitation, especially as computational costs decline and incentives for automated reconnaissance expand across high-value token environments.
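To make the flaw classes above concrete, the following is a hypothetical, simplified sketch in Python (not an actual contract, and not code from Anthropic's report) showing how a missing owner check and an unbounded fee parameter can combine to drain pooled funds. All names and values here are illustrative assumptions.

```python
# Hypothetical sketch: a simplified fee vault modeling two of the flaw
# classes described above -- an authorization flaw and incomplete
# validation in fee logic. Illustrative only; not real contract code.

class FeeVault:
    def __init__(self, balance, fee_bps):
        self.balance = balance   # pooled user funds
        self.fee_bps = fee_bps   # fee in basis points (1/10,000)
        self.owner = "deployer"

    def set_fee(self, caller, new_fee_bps):
        # Authorization flaw: any caller may change the fee because the
        # owner check was omitted. A safe version would first require:
        #   if caller != self.owner: raise PermissionError
        self.fee_bps = new_fee_bps

    def collect_fee(self, amount_in):
        # Incomplete validation: fee_bps is never bounded to <= 10_000,
        # so an attacker-set fee can exceed 100% and drain the pool.
        fee = amount_in * self.fee_bps // 10_000
        self.balance -= fee
        return fee

vault = FeeVault(balance=1_000_000, fee_bps=30)        # 0.30% fee
vault.set_fee(caller="attacker", new_fee_bps=500_000)  # a 5,000% "fee"
drained = vault.collect_fee(amount_in=10_000)
print(drained)  # 500000 units extracted on a 10,000-unit trade
```

Both bugs are individually small, which is why automated agents that probe many contracts cheaply can find such combinations faster than manual review.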
Anthropic’s analysis suggests that more than half of the blockchain exploits executed in 2025 could have been performed by current-generation AI agents without human involvement. The report noted that simulated exploit revenue has doubled approximately every six weeks as models improve and attackers harness them for increasingly sophisticated scans. Analysts view this trend as a fundamental shift in operational risk for decentralized finance markets, where code vulnerabilities can trigger irreversible loss events. The research highlights how vulnerabilities buried within older libraries, authentication components or auxiliary services become easier targets when AI agents can probe large code ecosystems continuously and at low cost. While this raises immediate security concerns, researchers also emphasized that similar AI systems could be deployed defensively to identify weaknesses before they are exploited. Anthropic plans to release its new benchmark dataset to help developers evaluate their systems against automated adversaries and accelerate patching cycles in production environments.
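The six-week doubling figure compounds quickly. As a back-of-envelope calculation (the doubling cadence is from the report; the one-year horizon is an illustrative assumption):

```python
# Back-of-envelope: growth implied by a six-week doubling time in
# simulated exploit revenue. The one-year horizon is an assumption
# chosen for illustration, not a figure from the report.
DOUBLING_WEEKS = 6
WEEKS_PER_YEAR = 52

doublings_per_year = WEEKS_PER_YEAR / DOUBLING_WEEKS  # ~8.67 doublings
growth_factor = 2 ** doublings_per_year               # ~406x in a year

print(f"{doublings_per_year:.2f} doublings -> ~{growth_factor:.0f}x growth")
```

Sustained even briefly, that rate would turn a marginal capability into a dominant attack channel, which is the shift in operational risk analysts describe.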
The findings add to broader discussions about how AI is reshaping cybersecurity across tokenized assets, automated settlement layers and smart contract-based financial infrastructure. As institutional participation in digital asset markets increases, operational resilience and code-level integrity become core priorities for risk management. Observers note that decentralized systems, unlike traditional financial networks, lack centralized backstops, making prevention the primary means of protection against rapid exploit propagation. The report has prompted renewed calls for integrating automated scanning tools into audits, improving continuous monitoring and updating developer frameworks to account for adversarial AI behavior. With the sector moving toward more complex applications of smart contracts, including real-world asset (RWA) tokenization and multi-chain settlement processes, analysts expect AI-related attack surfaces to grow unless defensive capabilities evolve at the same pace. Anthropic’s assessment serves as a signal that autonomous exploitation is no longer a theoretical threat but an operational reality that developers and institutions must address proactively.
