Eftsure Reveals Cybercrime Potential of Generative AI
The new 'Cybersecurity Guide for CFOs' draws on the latest insights from AI experts to outline how cybercriminals leverage AI tools like ChatGPT and how businesses can protect themselves.
SYDNEY, January 25, 2024 (Newswire.com) - Well-known generative AI tools like ChatGPT are helping cybercriminals refine their tactics at scale and create convincing imitations of anyone’s writing, image, or voice. Eftsure, creator of leading end-to-end B2B payment protection software, offers corporate finance officials a free resource to help them defend against this class of AI-based cyberattacks.
The 2024 edition of Eftsure’s “Cybersecurity Guide for CFOs” is now available on Eftsure’s website. The guide draws on expert insights to reveal the AI-based infiltration and attack strategies cybercriminals use. It also provides practical guidance on how organizations can develop and drive a unified, AI-aware cybercrime prevention strategy.
"There’s a lot we don’t yet know about generative AI, but what we do know is that its pace of evolution is already outstripping laws, regulations, and societal norms. Finance leaders need to be prepared for a post-trust future,” said Mark Chazan, CEO of Eftsure.
The Eftsure guide’s central theme is that AI is shifting the paradigm of cybercrime, unleashing a new array of unanticipated risks at a time when businesses are already experiencing huge losses.
Global cybercrime costs are rising 15% annually and are projected to reach $10.5 trillion USD by 2025. In Australia, where Eftsure is based, businesses lost $224 million AUD in 2022 to payment redirection scams.
One of the most common forms of cybercrime is a business email compromise (BEC) attack, in which perpetrators send fraudulent emails to a business claiming to be one of its suppliers. The email requests that the business change the supplier’s payment details, redirecting money to a new account that the criminal controls.
Sometimes cybercriminals compromise the supplier and gain access to legitimate users’ email accounts. In other cases, they craft a convincing fake email that uses malware or a deceptive web link to gain access to payment systems.
The Eftsure guide explains how AI makes it easy for cybercriminals to create more convincing BEC attacks at scale using a subset of generative AI known as large language models (LLMs).
Trained on vast amounts of text – everything from academic journals and computer code hubs to Wikipedia and social media – LLMs are adept at predicting and generating human language to the point where they can mimic the writing and speaking style of specific people.
LLMs are already helping cybercriminals hone their attacks. Traditionally, phishing emails could often be spotted by poor grammar, spelling errors, or an unprofessional tone. Now, easily accessible AI tools like ChatGPT can help attackers create more effective messages at scale. While mainstream LLMs generally incorporate guardrails to dissuade malicious uses, other LLMs like WormGPT or FraudGPT lack this sort of moderation – they're specifically designed to aid illicit activities, such as the creation of phishing messages.
Malicious actors can use both mainstream LLMs and these “black-market” tools to analyze large datasets, find vulnerabilities, develop malicious code, and form new attack strategies. Data breaches have released vast amounts of personal data onto the so-called “dark web,” and AI helps attackers identify patterns in that data and zero in on the most vulnerable targets.
Cybercrime arsenals now also include synthetic media such as synthesized images and voices, created by generative AI. Synthetic videos are sometimes called “deepfakes,” and Eftsure consulted with a deepfake researcher who warned that scammers are already leveraging this type of synthetic media to defraud businesses and consumers alike. It can even help criminals bypass authentication methods that were previously considered foolproof.
In light of these growing threats, Eftsure has distilled insights from AI researchers, CFOs, finance experts, and futurists from SXSW Sydney into the latest edition of “Cybersecurity Guide for CFOs.”
Because they oversee revenues and expenditures, finance teams are a leading target for cybercriminals. The Eftsure guide explains finance teams' biggest vulnerabilities and how CFOs and finance leaders can address these weaknesses and protect their organizations in structured and strategic ways.
"This guide examines generative AI and its impacts on the current threat landscape for AP and finance teams. It offers practical steps for understanding your vulnerabilities and strengthening your payment security,” Chazan said. “To be prepared, CFOs need to think creatively, stay informed, and implement technology-driven processes. Our guide is a first step in that process.”
For more information, please visit https://eftsure.com, and to download a copy of 2024’s “Cybersecurity Guide for CFOs,” visit https://home.eftsure.com.au/cybersecurity-guide-for-cfos-2024.
About Eftsure
Eftsure is a market leader in payment fraud prevention. Specifically designed for businesses, our end-to-end solution safeguards more than $216B in B2B payments per year. Our mission is to build a safer business community. With a large and continuously growing database of verified supplier details (the only one of its kind), we use multi-factor verification to give businesses greater knowledge and control over onboarding suppliers, receiving invoices, and making payments. In short, we ensure our customers never pay the wrong people.
Source: Eftsure