RSA Conference 2023: The cybersecurity impact of AI tools like ChatGPT


The 2023 RSA Conference in San Francisco covered a wide range of cybersecurity subjects, both on the information technology (IT) and operational technology (OT) sides. But every year at the event, there are a few topics that seem to capture the imagination of attendees and speakers alike. From the first day of the event and the opening keynotes, it was clear that ChatGPT was one of those topics.

When OpenAI released ChatGPT to the public in November 2022, it effectively democratized artificial intelligence (AI). While the masses had all sorts of fun asking the AI-driven chatbot for movie recommendations, financial tips or dating advice, the cybersecurity world braced for impact. Practitioners worried that attackers could use ChatGPT to create cleaner, more effective and more credible social engineering attacks and to write malicious code.

The RSA Conference offered several sessions covering ChatGPT’s impact on the industry, including Not Just for Writing Malware – How Defenders Can Use ChatGPT by Etay Maor of Cato Networks and ChatGPT: A New Generation of Dynamic Machine Based Attacks? by Greg Day of Cybereason and Paul Vann of the University of Virginia. Both sessions required reserved seats and filled up quickly, generating waitlists of their own.

The rising phishing threat of ChatGPT

According to Zscaler’s recent 2023 ThreatLabz Phishing Report, phishing attacks around the globe rose by nearly 50% in 2022, and they continue to be a top attack vector.

“Threat actors are leveraging phishing kits and AI tools to launch highly effective email, smishing and vishing campaigns at scale,” said Deepen Desai, global CISO and head of security at Zscaler.

In his opening keynote entitled The Looming Identity Crisis, Rohit Ghai, CEO of RSA, spoke at length about “Good AI” and “Bad AI.” His anthropomorphized “Good” GPT chatbot posited to attendees that the best way defenders can keep attackers at bay is by utilizing new AI tools like ChatGPT to streamline processes. But the “Bad” GPT could be used by threat actors to create refined and elegant phishing attacks that lack the clunkiness of old-school Nigerian Prince-style emails.

Ghai showed an example of a possible phishing lure using ChatGPT. It was a simple email a victim would receive from a trusted friend saying that it was nice to see the victim at a recent event for their children. The fake “friend” then added a link to a photo-sharing site, where the victim could supposedly see photos of their children at the event. The link would actually redirect to a legitimate-looking photo site, where the malware would be nested. As Ghai said, “Who wouldn’t click on that link?”

Moty Kanias, vice president of cyber strategy and alliances at Nanolock, also talked to Industrial Cybersecurity Pulse about the ChatGPT threat. He mentioned how easy it is for AI-powered tools to trick victims with social engineering tactics because people freely and willingly provide so much information about themselves. The more data a threat actor has about a potential victim, the easier it is to craft an effective message.

ChatGPT’s opinion of itself

When ChatGPT was asked by ICS Pulse about the biggest cybersecurity risks it poses to the world, it demurred at first, saying that as an AI language model, it poses no direct cybersecurity threat. But when pressed, it did give a list of possible malicious uses. The following is ChatGPT’s assessment of itself.

Some of the biggest cybersecurity risks that could arise from the use of ChatGPT include:

  1. Malicious use of ChatGPT: If ChatGPT falls into the wrong hands, it could be used for malicious purposes such as creating realistic-looking phishing messages or generating convincing fake news.
  2. Data privacy concerns: ChatGPT requires large amounts of data to be trained, which may include personal information or sensitive data. There is a risk that this data could be accessed or stolen by malicious actors.
  3. Bias and fairness issues: ChatGPT can pick up biases from the data it is trained on, which could result in unfair or discriminatory outputs. This could have negative consequences, such as reinforcing existing societal biases.
  4. Adversarial attacks: Adversarial attacks involve deliberately manipulating inputs to AI models to produce unexpected or malicious outputs. This could be a risk for ChatGPT, particularly in applications where it is used to make critical decisions, such as in finance or health care.
  5. Security vulnerabilities: Like any software, ChatGPT may have security vulnerabilities that could be exploited by attackers to gain unauthorized access or cause harm.

How to prevent ChatGPT cyberattacks

Because these AI tools are still relatively new, the threat is actively evolving. Attackers are a resourceful bunch and will find new ways to use AI to power their operations, just as cybersecurity practitioners will leverage AI in their defense. For now, there are some ways to protect yourself and your team from AI-enabled threats.

  • Never share confidential information, such as financial data, logins/passwords or even your name and address.
  • Set up multifactor authentication.
  • Train your team to spot social engineering attacks before they happen.
  • Be wary of unsolicited attachments and links. If it looks suspicious, don’t click it. (One simple automated link check is sketched after this list.)
  • Keep your antivirus software and operating systems up to date.
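The following is a minimal sketch, not from the conference sessions or any vendor’s product, of one way a defender might automate the “suspicious link” check above: comparing the domain shown in a link’s visible text with the domain the link actually points to, a mismatch common in phishing lures like the one Ghai described. It assumes Python and only the standard library; the class name LinkMismatchChecker and the domains in the usage example are hypothetical.

    # Minimal sketch: flag HTML email links whose visible text shows one
    # domain while the underlying href points somewhere else. Real email
    # security tools layer many such heuristics; this is only an illustration.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkMismatchChecker(HTMLParser):
        """Collects <a> tags and compares the displayed text with the real href."""

        def __init__(self):
            super().__init__()
            self._current_href = None
            self._current_text = []
            self.suspicious = []  # (shown_text, actual_href) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._current_href = dict(attrs).get("href", "")
                self._current_text = []

        def handle_data(self, data):
            if self._current_href is not None:
                self._current_text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._current_href is not None:
                shown = "".join(self._current_text).strip()
                href_domain = urlparse(self._current_href).netloc.lower()
                # If the visible text looks like a URL, compare its domain
                # with the domain the link actually resolves to.
                if shown.startswith(("http://", "https://", "www.")):
                    shown_url = shown if "://" in shown else "https://" + shown
                    shown_domain = urlparse(shown_url).netloc.lower()
                    if shown_domain and shown_domain != href_domain:
                        self.suspicious.append((shown, self._current_href))
                self._current_href = None

    # Usage: a hypothetical lure like the photo-sharing email described above.
    email_html = ('<p>Great seeing you! Photos here: '
                  '<a href="https://photo-share.example-malware.net/album">'
                  'https://www.familyphotos.com/album</a></p>')

    checker = LinkMismatchChecker()
    checker.feed(email_html)
    for shown, actual in checker.suspicious:
        print(f"Suspicious link: text shows {shown!r} but points to {actual!r}")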

While there are still several days left in the RSA Conference, it’s clear ChatGPT will continue to be a popular topic among attendees. The goal from here is to make sure the negatives of the “Bad AI” don’t overwhelm the benefits of the “Good AI.”
