AI has become invaluable to the financial services industry and is increasingly adopted by all types of financial institutions, including not only banks and credit unions, but also lenders, trading firms, payment processors and, of course, fintechs.
It’s no surprise why – AI brings copious benefits that include everything from expanded products and improved customer service to lower costs and greater efficiency, all with enhanced security and privacy. AI is key in several areas of finance, including financial planning, risk scoring, credit approvals, data processing and portfolio management. It has also been crucial in fraud detection and ensuring AML (Anti-Money Laundering) compliance.
However, as fruitful as these technological advancements have been, they have also opened the door to increased cyber threats, leaving many financial institutions wondering how to fight back.
Key Risks of AI in Fintech
Just as AI has been used for good, it has also been used for plenty of bad. The financial industry is an especially enticing target, and hackers have learned to weaponize AI, creating new ways to access and manipulate financial systems and data.
“The barrier to entry for a high-level cyberattack has vanished,” says Joshua Crumbaugh, CEO of PhishFirewall and a self-proclaimed ethical hacker. “In seconds, a hacker can now create a near-perfect deepfake or voice clone to bypass KYC (Know Your Customer) protocols or trick a fintech employee into authorizing a massive wire transfer.”
As AI continues to advance, the financial industry must determine how to safeguard its data and operations from compromised data privacy and third-party vulnerabilities amid a cloud-dependent environment.
These are some of the top security risks from AI in the financial industry today.
1. More Sophisticated Cyberattacks
The advancement of AI has led to an increase in cyberattacks on the financial industry, as hackers use it to engineer increasingly sophisticated attacks against institutional safeguards.
Online banking systems, ATMs, payment processors and fintech APIs are all potential targets for continuous, automated scanning. With AI advancements, hackers are able to get around built-in safeguards like multi-factor authentication and access system data. They can then trigger system outages and introduce hardware failures and software bugs.
Cyberattacks can happen much faster with AI, leaving institutions less time to react. Attackers use rapid data scanning and analysis to identify and exploit vulnerabilities in financial systems, then manipulate those systems to their advantage through malware, ransomware and distributed denial-of-service (DDoS) attacks. By the time institutions catch on, the damage is already done.
“In order to reduce these threats, I constantly suggest that finance organizations allocate their budget for the creation of security frameworks with multiple layers,” shares Jacob Kalvo, Co-Founder & CEO at Live Proxies. “Such a framework may include, among other things, providing continuous monitoring, implementing strong identity verification and threat detection systems that are specifically designed for AI environments.”
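To make the layered approach concrete, here is a minimal Python sketch of how identity verification, anomaly scoring and transaction-size checks might combine before a transfer is approved. The thresholds, field names and `evaluate_transfer` function are illustrative assumptions, not a reference to any particular vendor's system.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    account_id: str
    amount: float
    mfa_verified: bool        # did the user complete multi-factor authentication?
    device_trusted: bool      # is this a previously seen, trusted device?
    anomaly_score: float      # 0.0 (normal) to 1.0 (highly unusual), from a monitoring model

# Hypothetical policy thresholds -- real values would come from the institution's risk team.
LARGE_TRANSFER = 10_000
ANOMALY_REVIEW_THRESHOLD = 0.7

def evaluate_transfer(req: TransferRequest) -> str:
    """Apply layered controls: each check can independently stop or escalate the request."""
    if not req.mfa_verified:
        return "DENY: multi-factor authentication not completed"
    if req.anomaly_score >= ANOMALY_REVIEW_THRESHOLD:
        return "HOLD: route to human fraud analyst for review"
    if req.amount >= LARGE_TRANSFER and not req.device_trusted:
        return "HOLD: large transfer from unrecognized device"
    return "ALLOW"

print(evaluate_transfer(TransferRequest("acct-123", 25_000, True, False, 0.4)))
# -> HOLD: large transfer from unrecognized device
```

The point of the layering is that no single control has to catch everything; any one check can stop or escalate a request on its own.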
The industry has already seen several such incidents, as evidenced by the 2023 attack on the ION Cleared Derivatives platform. The division of ION Markets fell victim to the Russian cybercriminal group LockBit, which seized control of the company’s data and demanded a ransom.
Although ION eventually resolved the issue, the attack had an enormous impact on both U.S. and European markets; the incident, which ION described as a “cybersecurity event,” affected 42 of its clients.
Because of attacks like these, companies will have to take greater accountability, says Benson Varghese, a Board-Certified Criminal Lawyer and founder and Managing Partner of Varghese Summersett Law Firm.
“There will be increasing pressure for built-in ‘kill switches,’ fallback modes, and human-in-the-loop controls for high-impact use cases like lending, fraud detection, and AML,” he predicts. “Regulators and stakeholders will expect to see proof of testing against bias, adversarial attack, and edge cases.”
“Ultimately, the relevant strategic question will not be ‘Can we build this model?’ but rather, ‘Can we defend this model to regulators, auditors, and courts five years from now?’” says Varghese. “As a consequence, teams will need to design for accountability from the ground up.”
“As they do so, auditability, explainability and traceability will become not just ‘good hygiene’ but compliance requirements,” he explains.
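As a rough illustration of what a kill switch, fallback mode and human-in-the-loop control might look like in code, the Python sketch below wraps a hypothetical automated lending decision so it can be globally disabled or escalated to a human reviewer. The `MODEL_ENABLED` flag, confidence threshold and stub model are assumptions made for illustration, not a prescribed design.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Global "kill switch": when False, the model is bypassed and all cases go to humans.
MODEL_ENABLED = True
CONFIDENCE_FLOOR = 0.85   # hypothetical threshold below which a human must decide

def model_score(application: dict) -> float:
    """Placeholder for a credit model; returns an approval confidence between 0 and 1."""
    return 0.62  # stub value for illustration

def decide_loan(application: dict) -> str:
    if not MODEL_ENABLED:
        logging.info("Kill switch active: routing application to manual underwriting")
        return "HUMAN_REVIEW"
    confidence = model_score(application)
    logging.info("Model confidence: %.2f", confidence)
    if confidence < CONFIDENCE_FLOOR:
        # Fallback mode: low-confidence or edge cases are never decided automatically.
        return "HUMAN_REVIEW"
    return "AUTO_APPROVE"

print(decide_loan({"applicant_id": "A-001", "amount": 15_000}))
```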
Crumbaugh makes an observation based on his experience. “Too often, regulatory frameworks are treated as the end goal. They aren't,” he says. “They are the bare minimum requirements to be compliant, but they won't stop a sophisticated AI-driven attack.”
“It’s up to individual institutions to build security that goes beyond the checkbox.”
2. Corrupted Data
Machine learning (ML) has introduced many vulnerabilities into financial systems, making it all too easy for hackers to penetrate them.
Once they gain access to company systems, cyberattackers can tweak, skew and alter data to compromise systems and operations. They can then distort outcomes, introduce bias and jeopardize overall security.
“The biggest risk isn't just data being ‘corrupted’—it’s data poisoning,” says Crumbaugh. “We’ve already seen machine learning models in antivirus detection systems manipulated to ignore malware from specific nation-states.”
“In fintech, if the data used to train an AI is poisoned, the resulting ‘bias’ can lead to catastrophic financial decisions or security blind spots that go unnoticed for years.”
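One basic line of defense against poisoning is to validate incoming training data against a trusted baseline before retraining. The Python sketch below flags a batch whose label distribution drifts sharply from a known-good reference; the threshold and the simple distribution check are illustrative assumptions, and production systems would use far more sophisticated tests.

```python
from collections import Counter

def label_distribution(labels: list[str]) -> dict[str, float]:
    """Return the fraction of each label in a batch."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def drift_exceeds(baseline: dict[str, float], new: dict[str, float], tolerance: float = 0.10) -> bool:
    """Flag the batch if any label's share moved more than `tolerance` from the baseline."""
    labels = set(baseline) | set(new)
    return any(abs(baseline.get(l, 0.0) - new.get(l, 0.0)) > tolerance for l in labels)

# Known-good reference: roughly 2% of historical transactions were labeled fraud.
baseline = {"fraud": 0.02, "legit": 0.98}

# A suspicious incoming batch in which the share of "fraud" labels has jumped tenfold.
incoming = ["legit"] * 800 + ["fraud"] * 200

if drift_exceeds(baseline, label_distribution(incoming)):
    print("Quarantine batch for review before retraining")
else:
    print("Batch passes basic distribution check")
```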
Poisoned or manipulated data undermines both the integrity and impartiality of an institution, compromising trust across the financial industry as a whole.
“Data that is corrupt or manipulated is a big risk, since AI systems are so dependent on the correct input for their decision-making,” says Kalvo. “Even minor mistakes or bad tampering can lead to huge financial losses.”
In 2024, attackers breached customer environments hosted by data cloud company Snowflake, exposing the private data of millions of consumers. Over 150 of Snowflake’s clients were affected, including such big names as AT&T, Santander Bank, Ticketmaster, LendingTree, Truist Bank and Neiman Marcus. A lack of multi-factor authentication on the compromised accounts was found to have significantly contributed to the breach.
The incident reinforces the need for stronger infrastructure, more stringent security protocols and additional safeguards built in to account for future risks.
“Future AI regulation in financial services will tighten around technical accountability and require chief technology officers to prove not only that systems work but that they remain controllable, observable and resilient,” explains Alex Kugell, Chief Technology Officer at Trio.
“Regulators will require far more stringent proof of data provenance, versioned model artifacts and end-to-end audit trails,” he continues. “For example, they’ll need to be able to show how a particular input propagated through data pipelines, models and business logic to generate an output at a certain time.”
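A minimal sketch of what such an audit record might contain is shown below in Python: each prediction is logged with a hash of its input, the model version that produced it and a timestamp, so a specific output can later be traced back to its inputs and model artifact. The field names and `audit_record` helper are hypothetical, not a regulatory format.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, features: dict, output: dict) -> dict:
    """Build an audit entry tying an output to its exact input and model version."""
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,              # e.g. a git tag or model registry ID
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "input_snapshot": features,                  # or a pointer to versioned feature storage
        "output": output,
    }

record = audit_record(
    model_version="credit-risk-2025.03.1",
    features={"income": 72_000, "utilization": 0.31, "delinquencies": 0},
    output={"decision": "approve", "score": 0.91},
)
print(json.dumps(record, indent=2))
```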
3. Deepfakes
Deepfakes have already flooded the Internet, bombarding social media and creating not only confusion but also passionate debate about what is real and what is AI-generated.
In fintech, AI-driven automation poses a significant risk. Automated bots can continuously scan company systems for vulnerabilities and then attack, stealing massive amounts of both company and consumer data in a single pass.
With this information, hackers can manipulate and steal consumer finances and identities. These bots can test consumer data on a massive scale and replicate human behavior to fool fraud-detection systems. Once they gain entry, they have full control over your most sensitive information.
Deepfakes and related AI-generated fraud can come in many forms:
- Phishing. In addition to hacking systems, cyberattackers can produce and distribute phishing messages, including voice-based “vishing” calls, that appear to come from official sources such as company executives, regulators or clients. These messages, whether audio, video or text, can trick employees and clients into revealing sensitive data.
- AI-generated photos. Hackers can use AI to generate fake photos, instantly helping to legitimize false messages and fake identification. These photos can also sometimes be used to bypass security protocols, such as facial recognition, allowing cyberthieves instant access to your accounts.
- Voice cloning. Just as AI can generate fake photos, it can also mimic voices. This enables hackers to create convincing false messages and thereby gain access to private accounts.
- Fake identities. Hackers can use customer data to create fake identification, such as driver’s licenses and government IDs. They can even generate false bills and credit histories that look like the real thing, enabling them to open new accounts and run up crippling debt in someone else’s name.
Deloitte reports that generative AI and the use of deepfakes could drive fraud losses from $12.3 billion in 2023 to $40 billion by 2027. It is clear that as technology continues to evolve, so will this problem.
4. Third-Party Vendors
Sometimes the company itself is not the risk, but rather its partners. As seen in the case of Snowflake, it was not the affected companies themselves that were compromised, but the vendor they had chosen for cloud data services.
This can also happen with similar IT services, such as payment gateways, which are enticing targets given the immediate financial gain they offer. Service disruptions and outages are another issue, causing financial losses while operations remain shut down until a ransom is paid or another solution is found.
Supply chain attacks are common in fintech, as well. Newer companies may not have the resources to develop more advanced systems to ward off attacks, leaving their partners’ data compromised as well as their own.
“Today, AI tools make the job I used to do manually look like child's play,” shares Crumbaugh of PhishFirewall. “Many fintechs are still using outdated vendor security questionnaires. If your Third-Party Risk Management (TPRM) process doesn’t explicitly ask how a vendor trains their generative AI or how they store the data you feed it, you have a massive hole in your defenses.”
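As a simple illustration of the kind of AI-specific questions Crumbaugh describes, the Python sketch below encodes a few hypothetical TPRM questionnaire items and flags any vendor response that leaves them unanswered. The question list and field names are assumptions, not an industry standard.

```python
# Hypothetical AI-specific additions to a third-party risk questionnaire.
AI_RISK_QUESTIONS = {
    "genai_training_data": "Does the vendor train generative AI models on customer-supplied data?",
    "data_retention": "How long is data we send the vendor retained, and where is it stored?",
    "model_provenance": "Can the vendor identify which model version processed our data?",
    "subprocessors": "Which AI subprocessors or foundation-model providers does the vendor use?",
}

def unanswered_items(vendor_responses: dict[str, str]) -> list[str]:
    """Return the questions a vendor has not answered, for follow-up before onboarding."""
    return [q for q in AI_RISK_QUESTIONS if not vendor_responses.get(q, "").strip()]

responses = {
    "genai_training_data": "No customer data is used for model training.",
    "data_retention": "",  # missing answer
}

for item in unanswered_items(responses):
    print(f"Follow up with vendor: {AI_RISK_QUESTIONS[item]}")
```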
Just because one business has strong safety protocols does not mean its partners do as well, so it is critical that business owners vet their vendors carefully and ask about the security measures they have in place.
“From a lawyer’s perspective,” says David Gammill, a trial attorney and Founder of Gammill Law Accident & Injury Lawyers, “the emphasis on documentation and transparency obligations is key, because if firms rely on third-party models or infrastructure, they must document those dependencies.”
What Financial Institutions Can Do

AI holds significant promise for the future, with potential for improved threat detection, more efficient and streamlined operations, and more personalized service that fosters an unparalleled consumer experience.
However, history has taught us that as advantageous as AI can be, it also opens the door to significant risks that can jeopardize consumer trust and financial health.
Still, all is not lost. The New York State Department of Financial Services (NYDFS) recently issued updated guidance on the growing issue, identifying key risks and calling for increased risk management controls to effectively mitigate evolving AI-borne threats. In its letter, the NYDFS offered several strategies to help mitigate AI security risks for financial firms.
Therefore, it is critical that institutions take a multi-pronged approach to ensure security and privacy through added protocols like encryption and ongoing oversight.
How Financial Institutions Can Protect Against AI Threats
- Refine defense systems.
- Manage relationships.
- Be proactive.
Of course, financial institutions do have some legal responsibility. Under SEC Regulation S-P and NASD Notice to Members 05-49, broker-dealers must create and maintain written policies and procedures to safeguard consumer data. They are also bound by SEC Regulation S-ID, which requires identity theft prevention programs. Additional international, federal and state regulations may apply.
Varghese of Varghese Summersett Law Firm discusses the expected implications. “If a synthetic identity sails through onboarding or a model systematically disadvantages a protected group,” he says, “the key question will not be whether the A.I. was ‘innovative,’ but whether the firm appreciated it as a foreseeable risk and put in place controls that prosecutors and regulators will judge reasonable and proportionate.”
Further regulation is expected from federal agencies such as the SEC, FDIC and OCC, as well as the Federal Reserve.
Looking to the Future
AI also raises compliance questions for the industry, as FINRA Rule 2010 demands “high standards of commercial honor and just and equitable principles of trade,” a standard that extends to firms’ use of AI.
Just one cyberattack can undermine all of that, which is why it is imperative that financial institutions do everything they can to protect themselves – and their consumers – from the risks AI has introduced.
“Regulators are likely to converge on requiring incident reporting for material AI failures, and extending existing cyber, outsourcing, and consumer protection regulations to explicitly refer to AI,” predicts Gammill of Gammill Law Accident & Injury Lawyers. “Rather than a single ‘AI law,’ they are expected to weave together an overlapping net of obligations.”
Sami Andreani, a finance expert and Chief Financial Officer at Oppizi, adds an audit consideration. “These new rules will require firms to design their A.I. architectures and documentation to make them auditable,” he says. “The question for a C.P.A. will be whether the system can be tested, evidenced and signed off on with the same degree of confidence that a traditional financial process would.”
What the future holds in store is anyone’s guess, which is why it is critical that financial institutions do not become complacent and instead take an active role in AI fraud mitigation.
Varghese of Varghese Summersett sees more specific regulation in the future. “Financial regulators and criminal lawmakers will move more in sync to clarify A.I.-relevant offenses, expand corporate criminal liability and recognize that where firms take prevention, documentation and remediation seriously, they should remain deserving of leniency.”
By adopting proactive, forward-thinking measures, financial institutions can better safeguard their data and shield themselves from the dark side of AI.
“History shows that regulation is always years behind the technology,” observes Crumbaugh of PhishFirewall. “While more rules are coming, fintech leaders need to remember that compliance is the starting line, not the finish line.”
