Is Your Business Inviting an AI-based Attack?


In the rapidly evolving landscape of cybersecurity, businesses are constantly seeking innovative ways to protect their digital assets and sensitive data from the ever-present threat of cyberattacks. As technology continues to advance, so do the capabilities of malicious actors seeking unauthorized access, and among the latest additions to their arsenal is the deployment of artificial intelligence (AI) for nefarious purposes.

While AI holds immense promise for enhancing various aspects of modern commerce, its dual-use nature means that it can also be exploited by cybercriminals. This introduces a new and daunting challenge for organizations around the world: the rise of AI-based cyberattacks.

In this era of unprecedented connectivity and data-driven decision-making, the allure of AI for businesses is undeniable. From streamlining operations and improving customer experiences to identifying market trends and optimizing resource allocation, AI technologies offer a plethora of benefits.

However, this very reliance on AI also opens up a Pandora’s box of vulnerabilities that cyber adversaries are eager to exploit. In this article, we will shed light on the ways in which businesses, albeit unwittingly, are making themselves inviting targets for AI-based cyberattacks.

To comprehend the gravity of this emerging threat, it is crucial to dissect the various avenues through which cybercriminals are harnessing AI to compromise corporate networks, steal sensitive information, and disrupt critical operations.

As we dive into this complicated landscape, we will explore the stealthy tactics and cutting-edge techniques employed by hackers, the vulnerabilities that AI introduces, and the imperative for businesses to strike a balance between innovation and cybersecurity. By understanding these dynamics, organizations can take proactive measures to fortify their defenses and mitigate the risks posed by AI-based cyber threats.

AI can financially destroy your business

In excerpts from an article by the Guardian, they wrote, “Unfortunately, there’s AI that’s being used right now which is already starting to have a big impact – even financially destroy – businesses and individuals. So much so that the US Federal Trade Commission (FTC) felt the need to issue a warning about an AI scam which, according to this NPR report, “sounds like a plot from a science fiction story.”

But this is not science fiction. Using deepfake AI technology, scammers last year stole approximately $11m from unsuspecting consumers by fabricating the voices of loved ones, doctors, and attorneys requesting money from their relatives and friends.

“All [the scammer] needs is a short audio clip of your family member’s voice – which he could get from content posted online – and a voice-cloning program,” the FTC says. “When the scammer calls you, he’ll sound just like your loved one.”

And these incidents aren’t limited to just consumers. Businesses of all sizes are quickly falling victim to this new type of fraud.

That’s what happened to a bank manager in Hong Kong, who received deepfaked calls purportedly from a bank director requesting a transfer. The impersonation was so convincing that he eventually transferred $35m, and never saw it again.

A similar incident occurred at a UK-based energy firm, where an unwitting employee transferred approximately $250,000 to criminals after a deepfaked call convinced him that the caller was the CEO of the firm’s parent company. The FBI is now warning businesses that criminals are using deepfakes to create “employees” online for remote-work positions in order to gain access to corporate information.

But it’s the potential impact on the many unsuspecting small business owners I know that worries me the most. Many of us have appeared in publicly accessed videos, be it on YouTube, Facebook, or LinkedIn. But even those who haven’t appeared in videos can have their voices “stolen” by fraudsters copying outgoing voicemail messages or even by making pretend calls to engage a target in a conversation with the only objective of recording their voice.

This is worse than malware or ransomware. If used effectively, it can turn into significant, immediate losses. So what do you do? You implement controls. And you enforce them.

This means that no financial manager in your business should be allowed to undertake any financial transaction, such as a cash transfer, based solely on an incoming phone call. Every request requires a callback to verify the source, even one that appears to come from the CEO of the company.

And just as importantly, no transaction over a certain predetermined amount should be authorized without the prior written approval of multiple executives in the company. Of course, there must also be written documentation – a signed request or contract – that underlies the transaction request.
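As a minimal sketch, the two controls above could be encoded as a simple pre-transfer check. Everything here, including the threshold, field names, and the `may_execute` function, is hypothetical and for illustration only; real controls belong in written policy and in your payment systems, not in a standalone script.

```python
from dataclasses import dataclass

# Hypothetical threshold; each business sets its own in written policy.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    callback_verified: bool   # source confirmed via independent callback
    written_approvals: int    # executives who approved in writing

def may_execute(req: TransferRequest) -> bool:
    """Apply the two controls: a callback on every request, and
    multi-executive written approval above the threshold."""
    if not req.callback_verified:
        return False  # never transfer on the strength of an incoming call alone
    if req.amount > APPROVAL_THRESHOLD and req.written_approvals < 2:
        return False  # large transfers need at least two written sign-offs
    return True
```

The point of writing the rule down this explicitly is that it applies to everyone, including the owner; a management override is simply a request that fails the check.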

These types of controls are easier to implement in a larger company that has more structure. But accountants at smaller businesses often find themselves victims of management override, which can best be explained by “I don’t care what the rules are, this is my business, so transfer the cash now, dammit!” If you’re a business owner reading this, then please establish rules and follow them. It’s for your own good.

So, yes, AI technology like ChatGPT presents some terrifying future risks for humanity. But that’s the future. Deepfake technology that imitates executives and spoofs employees is here right now and will only increase in frequency.”

AI-fueled search gives more power to the bad guys

In excerpts from an excellent article by CSO, they wrote, “Concerns about the reach of ChatGPT and how much easier it may become for bad actors to find sensitive information have increased following Microsoft’s announcement of the integration of ChatGPT into Bing and the latest update of the technology, GPT-4. Within a month of the integration, Bing had crossed the 100 million daily user threshold. Meanwhile, GPT-4 improved the AI, which now has better reasoning skills, is more accurate, and has the ability to see images.

When ChatGPT was released in November 2022, hackers quickly jumped on the technology to help them write more convincing phishing emails and exploit code, but that was the old ChatGPT. According to OpenAI, the new AI’s bar exam score rose from the 10th percentile to the 90th, its medical knowledge score went from 53% to 75%, its quantitative GRE score rose from the 25th percentile to the 80th, and the list goes on.

In other words, it’s already better than humans at most tests of knowledge and reasoning — and it can do it in the blink of an eye, for free, for anyone on the planet, in all the major languages.

“Generative AI takes regular humans and turns them into superhumans,” Dion Hinchcliffe, VP and principal analyst at Constellation Research, tells CSO. When combined with real-time search, it will give the bad guys a very powerful weapon. “It can connect the dots and find the patterns,” he says.

That’s something that traditional web search engines can’t do. Enterprises had tools that could do this, he says, for things like threat intelligence, but they cost millions of dollars and are certainly out of reach for small-time cyber criminals.

Open-source intelligence

If an attacker wants to know what technologies a company is using, they can search job listings and resumes of former employees and manually correlate the data, then use it to create convincing lures. This is already done by attackers. With the new AI tools, however, the process can become much faster and more efficient, the lures more convincing. “ChatGPT just makes it more accessible,” Jeetu Patel, EVP and GM of security and collaboration at Cisco’s Webex, tells CSO.

In fact, there’s no shortage of publicly available information on the internet, says Etay Maor, senior director of security strategy at Cato Networks. For example, the OSINT Framework page lists hundreds of free public sources of information about people, companies, IP addresses, and much more.

“Finding material about a target is not the challenge faced by attackers today,” Maor tells CSO. The challenge is weaving this information into something useful — for example, turning publicly available information into a convincing phishing email, in an appropriate writing style. “The fact that responses by ChatGPT are so human-like definitely makes it easier to create a believable conversation with a target,” he says. “This makes social engineering much easier.”

Attackers will also be able to use AI to bring the public data together faster or look at areas that a human might not think to pursue, Mike Parkin, a cyber engineer at Vulcan Cyber, tells CSO. How good it is will probably depend on the person using it. “It’s unlikely AI is going to offer up correlations between obscure data points without prompting,” he says.

In addition, public-facing tools like ChatGPT, Bing Search, or Google’s Bard will have guardrails in place to try to limit the most malicious applications. But there will probably soon be commercial subscription services that will enable fully customized and automated phishing with just a bit of creative coding on the attacker’s part, Parkin says.

As this type of attack becomes easier and quicker, malicious actors will begin to target companies and organizations that previously might not have been worth their time. “Once automated, actors will no longer limit themselves to high-value targets but will leverage their investment to get as much return as possible by casting a wider net,” Pascal Geenens, director of threat intelligence at Radware, tells CSO.

Natural language search risks

Today, some of the most interesting information — interesting from a malicious actor’s point of view — requires some technical skill to get at it. With AI, however, English is the new programming language, says Yale Fox, founder and CEO at Applied Science Group.

In the past, for example, an attacker wanting to scan the perimeter of a company using NMAP would type out a query. Now they could say something like, “Scan and look for open ports, then identify what applications and versions they are running and check to see if there is a known exploit or vulnerability — and do this in a passive way so that it is harder to detect,” Fox tells CSO.
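Whatever the interface, a scan like the one described still bottoms out in ordinary network operations. Purely as illustration, here is a minimal Python sketch of the most basic building block, a TCP connect check; a real tool like NMAP layers service and version detection, timing controls, and evasion options on top of this.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, host unreachable, etc.
        return False
```

The defensive takeaway is that this kind of probe is trivial to automate; assume your exposed ports are already known and minimize what is reachable from the internet.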

Risks of threat actors from enterprise use of AI chatbots

If an attacker is able to get into enterprise systems and get access to AI-powered enterprise search, the dangers are even higher. “They could search the entire data set, both structured and unstructured, quickly and find valuable information, which was not possible before, especially in unstructured data,” says Andy Thurai, VP and principal analyst at Constellation Research.

There’s also a possibility that tools like ChatGPT might give attackers access to sensitive company data because the company’s own employees shared it with the AI. In fact, many firms — including JPMorgan, Amazon, Verizon, Accenture, and Walmart — have reportedly prohibited their staff from using ChatGPT.

According to a recent survey by Fishbowl, a work-oriented social network, 43% of professionals use ChatGPT or similar tools at work, up from 27% a month prior. Of those using it, 70% don’t tell their bosses. Employees might give the AI tool access to internal company information for different reasons, including to help create code, draft communications, or do any of the other useful things that these generative AI tools can do.

The reason that OpenAI has made ChatGPT free is so that the AI can learn from its interactions with users. “There have already been examples that indicate that ChatGPT has had access to some internal company communications,” Robert Blumofe, EVP and CTO at Akamai, tells CSO. He suggests that enterprises use a standalone version of the chatbot, operating in isolation within the company’s walls, to keep information from leaking out.

Using the same technology attackers use is also a way to defend a company’s systems. “What will happen is a series of products will emerge that will consume that intelligence in real-time, ahead of the hackers, and apply protections across the enterprise,” says Constellation Research’s Hinchcliffe. OpenAI has already announced a developer API for ChatGPT and dropped the price to a tenth of what its previous GPT models cost.

AI-powered search engines need regulation

AI-powered search engines aren’t just faster and better than traditional ones; they also understand the context of both the question and the information they provide, which makes them potentially very dangerous in the wrong hands. “We need to lobby our elected representatives to set up some sort of oversight at the state and federal levels for responsible use and deployment of AI-based technology,” Baber Amin, COO at Veridium, tells CSO. “Self-governance should not be an option as we have already seen things go awry with Microsoft’s chatbot.”

Unfortunately, when cyber adversaries are foreign governments, regulations in the US or Europe might not be particularly effective. In fact, according to a recent Blackberry survey of IT decision-makers, 71% believe that nation-states may already be leveraging ChatGPT for malicious purposes.

In addition, once the technology is available in open-source form, it will become widely distributed quickly to attackers at all levels of sophistication.”

Top 3 Cybersecurity Threats Caused by Generative AI

In excerpts from an article by Abnormal, they wrote, “New technologies invite a spectrum of reactions. On the extreme end are the people who, perhaps naively, think that novel tech will solve all humanity’s problems or lead us to our collective doom. But the reality is always more nuanced.

Take generative AI, for instance. Tools like ChatGPT and Bard are popular with individuals, businesses, and even bad actors to quickly create content. While some of these use cases are encouraging, there’s some anxiety about generative AI. In fact, 53% of IT professionals believe ChatGPT will be used this year to help hackers craft more believable and legitimate-sounding phishing emails. And the introduction of WormGPT has made this even more prevalent.

Unfortunately, email attacks powered by generative AI are already happening. And it’s easy to understand why. Generative AI tools enable cybercriminals to automate and scale their attacks, taking a task that used to require five minutes and turning it into five seconds. Making matters worse, ChatGPT and other tools make it nearly impossible to differentiate between real emails and malicious ones, as they eliminate the telltale signs of an attack like grammar mistakes and spelling errors.

As generative AI moves forward, it’s crucial to recognize the risks posed by malicious uses of it and how robust cybersecurity can make a difference. Here are a few ways that we’re already seeing generative AI be used in cyber attacks.

1. Credential Phishing

Phishing attacks come in all shapes and sizes, from generic, mass-blast emails to more focused spear phishing scams. The objective is to steal sign-in credentials and sensitive information. Unfortunately, generative AI makes phishing and similar email-based attacks much worse. Here’s how:

Ease of access: Tools like ChatGPT (and the malicious WormGPT) are free and simple to use. With a couple of prompts, these tools craft highly convincing and error-free scams.

Unlimited volume: Attackers can craft messages rapidly. And since many cyberattacks are a numbers game, generative AI gives attackers more opportunities to succeed.

Increased sophistication: ChatGPT upends how employees spot common red flags like typos, grammatical errors, and inappropriate tone. Additionally, ChatGPT handles multiple languages, including Spanish, Russian, Arabic, German, and Japanese, making it possible to create an attack in one language and send it out to employees in multiple countries.

All in all, generative AI enhances the realism and scale of phishing emails and corresponding landing pages. This increases the chances of tricking users into revealing sensitive information.

2. Business Email Compromise and Social Engineering

Business email compromise (BEC) is already the most financially devastating cybercrime for businesses worldwide, resulting in more than $51 billion in losses since 2013. That’s the bad news. The worse news is that generative AI will only make BEC more effective and harder to detect.

Generative AI doesn’t just spit out generic email copy. With the right prompts and data, it can produce super convincing text copy. For example, a hacker can input specific information about their target, such as conversation history, to mimic the tone of a coworker or executive. This makes malicious messages more engaging and realistic, helping the attacker build trust with the target. By compromising the right account, a threat actor can take invoice information and conversation history for dozens (or even hundreds) of companies and craft extremely realistic BEC emails that convince the target to pay a fake invoice or update their billing account details to a bank account owned by the attacker.

3. Malware Creation and Endpoint Exploitation

Coding is a skill set that not everyone has. One of the promising aspects of generative AI is its ability to produce code that works with simple prompts—no expertise required. This is a boon for noncoders everywhere. Regrettably, it’s also a resource for the bad guys. This is why some cybersecurity experts call generative AI the next generation of script kiddies.

Generative AI can generate new malware variants, making it tough for traditional email security platforms to detect and block malicious software hidden in emails. This includes the possibility of self-mutating or polymorphic malware that evolves its code or behavior, allowing it to evade detection and persist once it strikes its target.

Similarly, cyberattackers can use generative AI to find and exploit endpoint vulnerabilities. If a hacker discovers a software vulnerability, they can use generative AI to create commands for automated attack payloads. Once the attacker has infiltrated an endpoint, they can easily move to email and connected systems to steal data and other confidential information. With this in mind, it’s easy to see how generative AI could accelerate the security arms race between developers working to patch issues and attackers hoping to exploit them.

Stopping Email Attacks Generated by AI

To counter these high-volume and highly sophisticated email attacks, organizations must deploy a sophisticated email security platform, one that can handle the bad emails before they hit your mailboxes. If these AI-generated attack emails are well-adapted to trick employees, then adopting a proactive AI-powered defense is the perfect solution to this evolving risk.

A next-generation platform built on good AI to fight bad AI should include:

Behavioral Data Science Approach: Go beyond rules-based security with behavioral data science and AI to profile and baseline good behavior and detect anomalies. By using identity modeling, behavioral and relationship graphs, and deep content analysis, a next-generation email security platform can identify and stop suspicious emails—whether they’re created by AI or a human.

API Architecture and Integrations: Since Microsoft 365 and Google Workspace are popular cloud-based, workplace applications, you’ll want an API to provide access to the signals and data necessary for detecting suspicious activity. This includes unusual geolocations, dangerous IP addresses, changes in mail filter rules, unusual device logins, and more. More advanced solutions can also connect to other applications, including Slack, Okta, Zoom, and CrowdStrike, to understand identity and detect multi-channel attacks.

Organizational and Supply Chain Insights: Tracking vendor relationships is vital for your advanced email security platform. BEC attacks often leverage the goodwill between vendors and partners throughout the supply chain. Adopting a platform that understands cross-organizational relationships can impede these attacks and stop malicious activity from compromised accounts outside of your organization.

With these capabilities, sophisticated email security platforms detect anomalous behavior so that attacks powered by generative AI stop before they reach your mailboxes.
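To make the behavioral baselining idea above concrete, here is a deliberately simplified sketch. The `LoginBaseline` class and its fields are hypothetical, and production platforms model far richer signals (relationship graphs, content analysis, device fingerprints), but the core pattern of learning normal behavior and flagging deviations looks like this:

```python
from collections import defaultdict

class LoginBaseline:
    """Toy per-user baseline: remembers (country, device) pairs seen
    during normal activity and flags logins that match none of them."""

    def __init__(self):
        self._seen = defaultdict(set)

    def observe(self, user: str, country: str, device: str) -> None:
        """Record a known-good login during the learning period."""
        self._seen[user].add((country, device))

    def is_anomalous(self, user: str, country: str, device: str) -> bool:
        """Flag a login that deviates from the user's established baseline;
        users with no baseline yet are not flagged."""
        baseline = self._seen[user]
        return bool(baseline) and (country, device) not in baseline
```

The design choice worth noting is that anomaly detection keys on behavior rather than message content, which is exactly why it still works when the content itself is flawless AI-generated text.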

In the end, generative AI isn’t a miracle technology, nor is it a Terminator-type threat. It’s a tool with a lot of use cases—some legitimate and some nefarious.”

In Conclusion

The rapid advancement of artificial intelligence (AI) has ushered in a new era of both promise and peril for businesses worldwide. While AI presents numerous opportunities for enhancing operations, customer experiences, and decision-making, it also introduces a range of vulnerabilities that cyber adversaries are eager to exploit. In this evolving landscape, where businesses are increasingly reliant on AI, the rise of AI-based cyberattacks is a formidable challenge that cannot be ignored.

As illustrated by recent examples, AI-based cyberattacks are not confined to the realm of science fiction; they are very much a present and growing threat. Deepfake AI technology, for instance, has been leveraged by cybercriminals to steal millions of dollars by mimicking the voices of loved ones, colleagues, or superiors in fraudulent phone calls. Such attacks not only target individuals but also extend to businesses, where employees are tricked into transferring substantial sums of money based on the authenticity of these AI-generated impersonations.

Also, the integration of AI into search engines and cybersecurity tools amplifies the capabilities of malicious actors. This enables them to efficiently gather information about targets, craft highly convincing phishing emails, and exploit vulnerabilities that were previously harder to access. The automation and sophistication of these attacks pose a considerable risk to organizations of all sizes.

To counter this growing threat, businesses must prioritize cybersecurity measures that account for the AI-driven evolution of cyberattacks. Implementing stringent controls, such as requiring callbacks for financial transactions and multi-executive approvals, can mitigate the risk of falling victim to AI-based financial fraud. Furthermore, adopting advanced email security platforms powered by AI can help detect and block malicious emails, including those generated by AI, before they reach employees’ inboxes.

In a world where AI technology can empower both defenders and attackers, the need for proactive cybersecurity practices has never been more crucial. It is incumbent upon businesses to strike a delicate balance between leveraging AI’s potential and safeguarding against AI-driven threats. To navigate this complex landscape successfully, organizations must remain vigilant, adaptive, and committed to staying one step ahead of those who seek to exploit the dual-edged sword of AI.

At Adaptive Office Solutions, cybersecurity is our specialty. We keep cybercrimes at bay by using analysis, forensics, and reverse engineering to prevent malware attempts and patch vulnerability issues. By making an investment in multilayered cybersecurity, you can leverage our expertise to boost your defenses, mitigate risks, and protect your data with next-gen IT security solutions.

Every single device that connects to the internet poses a cybersecurity threat, including that innocent-looking smartwatch you’re wearing. Adaptive’s wide range of experience and certifications fills the gaps in your business’s IT infrastructure and dramatically increases the effectiveness of your cybersecurity posture.

Using our proactive cybersecurity management, cutting-edge network security tools, and comprehensive business IT solutions, you can lower your costs through systems that are running at their prime, creating greater efficiency and preventing data loss and costly downtime. With Adaptive Office Solutions by your side, we’ll help you navigate the complexities of cybersecurity so you can achieve business success without worrying about online threats.

To schedule a Cyber Security Risk Review, call the Adaptive Office Solutions’ hotline at 506-624-9480 or email us at