How ChatGPT and AI are Changing Cybersecurity

Heated discussion about artificial intelligence (AI) has been a feature of the business media since ChatGPT was first released in late 2022.

While most of the media discussion revolves around which jobs ChatGPT is going to eliminate, there’s been a quieter but equally important discussion in the cybersecurity community about how AI is going to affect business security.

We’ve written this article to give businesses in Tampa everything they need to know about the coming generation of AI, how it will make lasting cybersecurity harder to achieve, and what they can do about it.

Hackers Leverage AI to Generate a New Breed of Malware Attack

One of the most discussed use cases for ChatGPT is computer programming. Even at this relatively early stage, ChatGPT is already quite good at generating code snippets for simple functions, freeing programmers to focus on more complex aspects of software development.

This means a more efficient development process and lower costs for companies. However, there are also nefarious uses for automatically generated code that business owners must familiarize themselves with.

Bypassing the Security Features of AI Models
ChatGPT and specialist AI models like AlphaCode are designed not to generate code that could be used for malicious purposes. However, in just the few months since the software was released to the public, hackers have devised multiple ways to bypass those protections.

One hacking group has used OpenAI’s application programming interface (API), in particular a model called text-davinci-003 that is designed for chatbot applications. It turns out that the API doesn’t enforce the same restrictions on malicious content as the web version, meaning that hackers can bypass ChatGPT’s protections and generate whatever code they want.

The price of this service? A mere $5.50 for every 100 malicious queries.

AI models that can generate malware code at essentially no cost will likely lead to a rapid expansion in the number of threats, just as the commodification of ransomware on the dark web did in 2019 and 2020.

“Polymorphic” Malware Threats
Hackers have already started to use the power of AI to create new and intelligent forms of malware as well, by embedding specialized forms of “polymorphic,” or mutating, code into their viruses. By changing its composition or “signature,” smart malware avoids endpoint detection and response (EDR) systems and is thus much harder for businesses to detect and isolate.

Polymorphic malware has existed for decades, but the new strains powered by ChatGPT are more dangerous and harder to detect.
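To see why signature scanning struggles here, consider a minimal, benign illustration in Python (a toy sketch, not related to any real malware): two scripts that do exactly the same thing produce completely different file hashes, the “signatures” that traditional antivirus tools match against.

import hashlib

# Two functionally identical snippets: the second just renames variables and
# adds a comment -- the kind of trivial "mutation" that polymorphic code automates.
variant_a = b"total = 0\nfor n in range(10):\n    total += n\nprint(total)\n"
variant_b = b"# harmless comment\nacc = 0\nfor i in range(10):\n    acc += i\nprint(acc)\n"

print(hashlib.sha256(variant_a).hexdigest())  # one signature
print(hashlib.sha256(variant_b).hexdigest())  # a completely different signature

# A scanner that blocklists the first hash never matches the second, which is
# why defenders pair signature matching with behavior-based detection.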

In addition to malware that abuses the API described above, a recent proof of concept from Jeff Sims, principal security engineer at threat detection company HYAS InfoSec, demonstrates another possible approach. His software, called BlackMamba, logs keystrokes on a host computer and changes its shape every time it runs to avoid detection. According to the HYAS blog:

“BlackMamba utilizes a benign executable that reaches out to a high-reputation API (OpenAI) at runtime, so it can return synthesized, malicious code needed to steal an infected user’s keystrokes… Every time BlackMamba executes, it re-synthesizes its keylogging capability, making the malicious component of this malware truly polymorphic. BlackMamba was tested against an industry leading EDR which will remain nameless, many times, resulting in zero alerts or detections.”

Criminals Upgrade Phishing Attacks with the Power of AI

According to the FBI’s 2022 Internet Crime Report, email-based attacks like phishing are the most commonly reported cyber threat in America.

People already fall for today’s email phishing scams, which are notorious for poor grammar and misspellings. As hackers adopt ChatGPT and other large language models (LLMs), criminals in Russia, India, and other countries will be able to generate error-free, convincing emails on demand, making scams far harder to spot.

When that improvement is multiplied across the millions of phishing attempts sent each day, we can expect a significant jump in both the volume and the success rate of phishing attacks.

But AI doesn’t just help with email writing; hackers have also started using ChatGPT and other AI models to develop new phishing strategies, scan attack surfaces, and adjust their attacks in real time in response to your phishing defenses.

Businesses in Tampa must be ready to adjust their security to compensate.

What Can Tampa Businesses Do About It?

The good news is that generative AI has as many applications for cyber defenders as it does for attackers.

Arm Yourself with the Right Tools
IT services firms like LNS Solutions are using tools with built-in machine learning and artificial intelligence to find network vulnerabilities and proactively address the threat of malicious AI.

To reap the benefits of those tools, it’s important to work with an IT services firm with a track record of cybersecurity success. If you’re not partnered with a cybersecurity firm, then it’s critical that you keep your security software up to date. The cybersecurity arms race is always intensifying, and we’re facing a situation in which ChatGPT and other AI models will create malware that only other AI systems can detect.

Use AI to Extend Your Cybersecurity Team
There’s a well-documented shortage of cybersecurity talent in the U.S.; the country is estimated to be short roughly 1 million cybersecurity workers, putting countless companies at risk. By arming themselves with AI tools, businesses can extend the capabilities of their human cybersecurity staff and enhance the efficiency and sophistication of their defenses.

For example, the cybersecurity company Sophos found that spam filters built on ChatGPT-style language models were more accurate than filters built on other machine learning models, allowing them to catch far more threats. Integrating next-generation spam filters with other AI-powered detection capabilities could help your business not only mitigate the rise in AI-powered attacks but also gain a competitive edge by reducing its overall exposure.
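As a rough illustration of the underlying idea (a toy sketch using scikit-learn, not Sophos’s system or a production filter), a machine-learning spam filter learns from labeled examples and then assigns each incoming message a probability of being malicious:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a real filter learns from millions of labeled emails.
emails = [
    "Your invoice is attached, please review before Friday's meeting",
    "Congratulations, you have won a prize, click here to claim it now",
    "Reminder: the Tampa office closes early for the holiday",
    "Urgent: verify your account password immediately or it will be suspended",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = phishing/spam

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message; anything above a chosen threshold is quarantined for review.
incoming = "Please verify your password now to avoid account suspension"
spam_probability = model.predict_proba([incoming])[0][1]
print(f"Spam probability: {spam_probability:.2f}")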

Similarly, AI is now being used by a variety of LNS Solutions’ cybersecurity vendors to reduce false positives, speed up security forensics, and eliminate labor-intensive security tasks.
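To illustrate one way those tools cut down the noise (a hypothetical sketch, not any vendor’s actual product), an anomaly-detection model can rank events by how unusual they look so analysts only review the outliers:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour of day, failed attempts, MB downloaded]
events = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10], [16, 1, 9],
    [3, 7, 900],  # 3 a.m. login, many failures, huge download -- suspicious
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
scores = model.decision_function(events)  # lower score = more anomalous

# Surface only the two most anomalous events for human review.
for event, score in sorted(zip(events.tolist(), scores), key=lambda pair: pair[1])[:2]:
    print(f"review: {event} (anomaly score {score:.3f})")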

Improve Your Cybersecurity Awareness Training
The largest source of cybersecurity vulnerability is an unprepared staff. Now is the time to double down on your cybersecurity awareness training and bring your entire team, from cleaning staff and the front desk to executives and the board of directors, up to speed on the changing AI landscape.

Facing the threat of AI head-on is the best way to build a solid foundation for what’s sure to be a turbulent future of fast-evolving, AI-powered attacks.

Florida’s Cybersecurity Team

For over 30 years, the LNS Solutions team has been helping companies in Tampa defend themselves against cyber criminals and malware. If your business is struggling to achieve the resiliency and confidence you need, contact our helpful team any time at (813) 393-1626 or info@LNSSolutions.com. We look forward to speaking with you!

 
