By Phoebe Seers
LONDON, April 28 (Reuters) - The ability of central banks and financial regulators to monitor and combat the risks posed by powerful artificial intelligence models such as Anthropic’s Mythos has been called into question after a survey found authorities significantly lag financial firms in AI adoption and lack data on emerging harms.
Financial institutions are adopting AI at more than twice the rate of their supervisors, with just two in ten regulators reporting "advanced AI adoption", research published on Tuesday by the Cambridge Centre for Alternative Finance showed. Only 24% of authorities surveyed collect data on industry AI adoption, while 43% have no plans to start within the next two years, the study found.
“This empirical blind spot may undermine the prevailing optimism [on AI]. Authorities cannot successfully harness or oversee AI if they are navigating its adoption and risks without hard data,” the study said.
The research, prepared alongside the Bank for International Settlements, the International Monetary Fund and other multilateral institutions, involved surveying 350 traditional financial institutions and fintechs, more than 140 AI vendors, and 130 central banks and financial authorities spanning 151 countries.
Regulators and global standard-setting bodies have stepped up warnings about the risks posed by the rollout of AI across the financial sector. Earlier in April, Anthropic released Mythos, viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems.
Regulators across the globe have engaged with banks over how prepared their legacy systems are for emerging frontier AI models.
The study highlights Mythos as an example of next-generation systems that could soon be capable of exploiting software vulnerabilities at scale, potentially limiting the effectiveness of existing human governance and oversight mechanisms.
“Regulators generally support the principle that financial firms should remain accountable for harms, including cyberattacks, whether AI is built in-house or supplied by third parties, but that assumption becomes harder to apply in the context of more autonomous systems that are provided and managed by third-party vendors,” the authors wrote.
Moreover, traditional approaches to oversight by regulators may no longer be sufficient. The study says regulators must themselves adopt agentic AI capabilities, capable of taking actions without human oversight, to match the systems they oversee.
(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes/Keith Weir)
