
Managing the Real Security Risks of ChatGPT

Introduction

Most new and compelling technologies outpace security teams’ abilities to model and mitigate the risks associated with them. ChatGPT doesn’t appear to be an exception, but what is different is the pace. Consider, for example, that it took over three and a half years for Apple to sell 100 million iPhones or that it took Instagram just under two and a half years to reach 100 million monthly active users, while ChatGPT—driven by casual interest, actual utility, and a low barrier to entry—hit that milestone in a little over two months, leaving many security professionals scrambling.

As with any technology, the risks of generative AI can be divided into two broad categories: those that accrue through use and those that result from abuse by malign actors. But are the risks new, or are they variations on old ones? And—regardless—how should an organization mitigate those risks? It can be challenging to avoid “shiny object syndrome”—especially with things that capture our collective attention on the scale that AI has—but, as Benedict Evans recently tweeted, “you should start by understanding all of the old ways that people screwed up software and then think about how much you’re talking about new problems or 30-year-old problems.”

Absence of evidence ≠ evidence of absence

Blocking access to specific tools outright may be an appropriate choice for some organizations, but in most cases the more likely outcome is that users will seek alternative methods to reach unsanctioned services (or that IT and security teams will end up managing increasingly unwieldy exception lists). If an organization can’t tell which tools employees are using, it is virtually impossible to implement effective controls. Of course, this problem isn’t confined to AI: the phrase “shadow IT” dates to at least the 2010s, though the phenomenon is undoubtedly much older. The solution, as in most cases, is the combination of a clear policy that articulates what constitutes acceptable use and a set of technical controls that can monitor and enforce use consistent with that policy. Insofar as most AI tools are accessed via the Internet, the necessary technical controls are both mature and readily available (though the flexibility and granularity of control may vary).
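To make that visibility concrete, here is a minimal sketch (in Python) of how one might surface generative-AI usage from web proxy logs. The log format, column names, and domain watch list are illustrative assumptions for this post, not a description of any particular product:

```python
# Minimal sketch: flag generative-AI traffic in web proxy logs so actual usage
# can be compared against an acceptable-use policy. The CSV format, column
# names, and domain list below are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical watch list of generative-AI service domains.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "bard.google.com", "claude.ai"}

def summarize_genai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watch list.

    Assumes a CSV proxy log with 'user' and 'host' columns.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    # "proxy.log.csv" is a placeholder path for whatever log export is available.
    for (user, host), count in summarize_genai_usage("proxy.log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a rough report like this is often enough to show whether a written acceptable-use policy matches what employees are actually doing.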

When the doors are open and/or the lights are out

Assuming that employees are allowed to use AI, the next obvious risk to consider is what happens when (not if) there is a data breach, as ChatGPT experienced in March 2023 when a bug in an open-source library may have exposed the chat details and payment information of some users. In response, OpenAI temporarily took ChatGPT offline.

This scenario raises two important questions. The first is “what is our response when our data may have been (or has been) exposed by a third party?” The second is “how do we maintain business continuity when a critical system operated by a third party is unavailable?” Neither question should be one an organization asks itself for the first time in the context of threat modeling for ChatGPT.

Governance, Risk, and Compliance (GRC) or Third-Party Risk Management (TPRM) processes will help manage the fallout, but meaningfully answering the first question—in a way that informs a course of action—demands forensic capabilities to determine which data a third party has received and which of it may have been exposed. Such capabilities are most robust when they leverage visibility from the monitoring and governance infrastructure described above.

The answer to the second question is probably less critical in the context of AI, since AI hasn’t reached “mission critical” status in most organizations. Still, the availability of important third-party services should already be a factor in organizations’ business continuity strategies, and the resulting plans will help inform how to handle AI tools if and when they become core to business functions.

Compromised accounts and where to find them

ChatGPT also made headlines more recently when credentials for over 100,000 accounts were discovered for sale in dark web marketplaces. While that number may seem large, it represents a tiny fraction of ChatGPT’s more than 100 million accounts. It also pales in comparison to the 3.2 billion unique email/password combinations in the so-called Compilation Of Many Breaches (COMB) list or the 12.5 billion “pwned accounts” cataloged at haveibeenpwned.com.

Leaked credentials pose the same risks as any other case of compromised accounts or password reuse: they provide adversaries with inputs for techniques like credential spraying and credential stuffing, which help them obtain unauthorized access to services and data. Multi-Factor Authentication (MFA) is a relatively simple and effective method of protecting against compromised passwords, but organizations must also consider situations where a corporate identity (e.g., a company email address) has been used to establish an account at a service that isn’t (or can’t be) federated into the organization’s Identity Provider (IdP) or enterprise directory, as is the case with ChatGPT. Organizations need tooling to identify and prevent password reuse, as well as the capability to determine whether their users’ credentials have been compromised or leaked so that they can orchestrate appropriate action such as forcing password resets or disabling accounts.
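One readily available building block for that capability is Have I Been Pwned’s Pwned Passwords range API, which lets you test a password against known breach corpora without sending the full password hash off the machine. The sketch below is a minimal example of that check in Python; a production workflow would add error handling, rate limiting, and integration with the password-reset process.

```python
# Minimal sketch: check a candidate password against Have I Been Pwned's
# Pwned Passwords range API using its k-anonymity model (only the first five
# characters of the SHA-1 hash are sent). Useful for blocking known-compromised
# passwords at registration or reset time.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate_suffix, _, count = line.partition(":")
            if candidate_suffix == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # Example with a frequently breached password; expect a large count.
    print(pwned_count("P@ssw0rd"))
```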

Reining in data

Alongside credential leakage, data leakage is another area of concern. Many early-stage applications—especially those that aren’t necessarily built for enterprises—provide very few controls. ChatGPT offers two: whether to save chats and allow them to be used for training models, and the management of links to shared chats. Both are entirely at the discretion of the user. Because ChatGPT accepts arbitrary text as input, it represents a data loss vector both through the potential inclusion of submitted information in future responses and through users’ ability to share their chats. The issue is complicated further because employees may create accounts using personal credentials, resulting in corporate data leaking into a personal account.

Because OpenAI processes and stores the data supplied to ChatGPT, it also raises concerns about ownership, retention, and residency, all of which apply to any SaaS application or third-party information processing service. At the macro level, however, there is little to distinguish these issues from the ones presented by file sharing services such as Dropbox, Google Drive, and Microsoft OneDrive, or by other cloud-based services. If an organization has robust DLP controls for the web, mitigating the risks from ChatGPT is a simple matter of deciding whether existing policies are sufficient or whether they require modification.
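To make the DLP piece concrete, here is a minimal, pattern-based sketch of the kind of check a web gateway or secure browser policy might run on a prompt before it leaves the organization. The patterns below are illustrative assumptions; real DLP policies typically combine regexes with validation (for example, Luhn checks on card numbers), dictionaries, and document fingerprinting.

```python
# Minimal sketch: a pattern-based check on outbound prompt text. The detectors
# below are illustrative only and will produce false positives/negatives;
# production DLP adds validation, context, and content fingerprinting.
import re

# Hypothetical detectors for a few common sensitive-data types.
DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def dlp_findings(prompt: str) -> list:
    """Return the names of any sensitive-data patterns present in the prompt."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    sample = "Summarize this: customer SSN 123-45-6789, card 4111 1111 1111 1111"
    print(dlp_findings(sample))  # -> ['credit_card', 'us_ssn']
```

A check like this can block, redact, or simply log and alert, depending on how strict the existing web DLP policy already is.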

You can’t believe everything you read on the Internet

One criticism of Large Language Models (LLMs) is that they may produce inaccurate or unexpected results. Security researchers have theorized that this could lead to supply chain attacks in which ChatGPT “hallucinates” the existence of a software library, attackers subsequently create it, and ChatGPT then recommends it to developers. This may seem like a convoluted way to deliver malicious code but, as a pattern, it is not altogether different from the many ways developers might incorporate code of questionable provenance: bad advice on Stack Overflow, SEO poisoning, and bogus packages delivered via popular sites like GitHub or registries like npm and PyPI. The reality is that software today is more assembled than built from scratch, and any organization that forgoes vetting third-party code does so at its peril. Here again, there are a variety of mature frameworks and tools that aid such vetting and secure software development.
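As an illustration of one small piece of that vetting, the sketch below checks whether a suggested package actually exists on PyPI and pulls a few basic signals (latest version, release count, first upload date) from PyPI’s public JSON API before anyone runs pip install. The specific heuristics are assumptions for illustration; this is a starting point, not a substitute for software composition analysis, allow-lists, or an internal registry.

```python
# Minimal sketch: sanity-check a package name suggested by an LLM (or a forum
# post) against PyPI before installing it. A 404 suggests a hallucinated or
# typo-squat-prone name; a very recent first upload or tiny release history
# warrants closer review.
import json
import urllib.error
import urllib.request

def pypi_package_summary(name: str):
    """Return basic metadata for a PyPI package, or None if it doesn't exist."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no such package on PyPI
        raise
    releases = data.get("releases", {})
    upload_times = [
        f["upload_time"] for files in releases.values() for f in files if "upload_time" in f
    ]
    return {
        "name": data["info"]["name"],
        "latest_version": data["info"]["version"],
        "release_count": len(releases),
        "first_upload": min(upload_times) if upload_times else None,
    }

if __name__ == "__main__":
    print(pypi_package_summary("requests"))                  # long-established package
    print(pypi_package_summary("totally-made-up-pkg-xyz"))   # most likely None
```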

Meet the new bot, same as the old bot

There are some unique cybersecurity risks with AI, but the majority are not on the near-term horizon. To the extent that ChatGPT increases risk, it is primarily by making existing techniques more convincing or harder to spot. The rest are risks that organizations must address if they’re using any form of cloud computing. As we consider how to address those risks, it is worth remembering [an adaptation of] Frank H. Easterbrook’s central conclusion in Cyberspace and the Law of the Horse: develop a sound set of policies and technical controls for information processing and then apply those to ChatGPT and AI. Using that approach ensures a solid foundation that better positions any organization to confront existing risks, as well as those from emerging technologies.

The Seraphic Security platform provides a wide range of capabilities to address your web security and DLP needs, including all the tools necessary to safely enable your employees to use ChatGPT and other generative AI. For more information visit our Use-cases page or schedule a demo.
