
US financial regulators have raised fresh concerns about advanced artificial intelligence and its potential impact on banking cybersecurity, holding an urgent meeting with top Wall Street executives to discuss risks linked to a new frontier AI system developed by Anthropic, according to a new report.
Specifically, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a closed-door session with the chief executives of major US banks, warning them to strengthen their cyber defenses against emerging AI-driven threats.
The meeting, as reported by Bloomberg News, focused on concerns that Anthropic's latest advanced model could significantly lower the barrier for sophisticated cyberattacks, particularly by helping attackers identify vulnerabilities in widely used software systems.
Bank executives were reportedly urged to reassess their cybersecurity frameworks and prepare for scenarios where AI systems could be used to automate or scale intrusion attempts against financial infrastructure.
Channel News Asia added that the discussions were triggered by assessments that Anthropic's newest model demonstrated unusually strong capability in identifying software weaknesses, raising fears that such tools could be misused if they fall into the wrong hands.
While details of the model's internal capabilities have not been publicly disclosed, the concern among regulators is that frontier AI systems are rapidly improving in areas such as code analysis, vulnerability detection and automated reasoning, capabilities that could be weaponized in cyber warfare or financial crime.
The urgency of the briefing reflects a broader shift in how US authorities view artificial intelligence risk. Rather than treating it solely as a technology sector issue, regulators are increasingly framing advanced AI as a potential systemic financial stability risk, similar to major shocks in cybersecurity or market infrastructure.
The Straits Times also noted that officials are pushing banks to coordinate more closely with regulators and AI developers to ensure safeguards are built in before such systems are widely deployed in sensitive environments.
The concerns come amid a growing global debate over how to regulate rapidly advancing AI systems. The meeting highlighted fears that frontier models are becoming more capable of autonomous planning, coding assistance and vulnerability discovery, raising questions about oversight, containment and responsible release practices.