Skip to main content Skip to footer

Blog

5 steps to make sure generative AI is secure AI

Both business leaders and employees are excited to leverage generative AI. Understanding the risk landscape and proactively preparing can help organizations reap value securely.

5-MINUTE READ

June 12, 2023

ChatGPT’s meteoric rise has captured the business world’s attention at a pace and on a scale rarely seen before. But this represents only one part of a broader generative AI revolution. Thanks to advances in large language models (LLMs) and other foundation models, we’re witnessing a step change in AI capability across multiple domains—images, audio, video and text.

This is allowing machines, for the first time, to be computationally creative, generating meaningful and valuable content on demand. And the power, broad downstream adaptability and easy accessibility of this technology means every company, in every industry, will be impacted.

Business leaders are right to be excited about the opportunities. As are employees, many of whom are eager to get started with the technology. However, both groups need to be highly attuned to the business and security risks they may be incurring—and how those risks can be minimized.

Some critical areas of focus when considering cybersecurity for generative AI include:

  • Data & IP Leakage & Theft
  • Malicious content, High Speed Contextual Targeted Attacks
  • Orchestration of generative technologies for misuse
  • Misinformation at scale
  • Copyright infringement / plagiarism
  • Amplification of existing biases and discrimination

The truth is, generative AI projects and products come with a heightened risk of compromise—that requires a well planned and executed security strategy at the start. So, what practical actions should C-suite leaders take now to securely leverage generative AI across their business?

I recently sat down with Tom Patterson, Managing Director – Emerging Technology and Security, to discuss this question that is of great importance to our clients. Together, we’ve come up with a list of our top-five security recommendations for using generative AI in an enterprise context.

Security implications across the layers of generative AI
Security implications across the layers of generative AI

Security implications across the layers of generative AI.

1. Create a trusted environment and minimize risk of data loss

A top business concern about providing access to applications like ChatGPT is the risk of intellectual property or other protected data, leaking out of the organization.

The risk is real. Employees looking to save time, ask questions, gain insights or simply experiment with the technology can easily transmit confidential data—whether they mean to or not—through the prompts given to generative AI applications.

The good news is that, with a bit of upfront technical effort, this risk can be minimized by thinking through specific use cases for enabling access to generative AI applications while looking at the risk based on where data flows. Tom offered us some examples:

For example: The risks of data leakage lie primarily at the application layer, rather than the chat LLM layer, OpenAI. So, companies can build a custom front-end that replaces the ChatGPT interface to leverage the chat LLM API (OpenAI) directly. Voila…we’ve bypassed the ChatGPT application and mitigated the risk.

Another example is creating a sandbox where data is isolated. The sandbox is the gateway for consumption of LLM services, and other filters can be added to safeguard data and reduce bias.”

Further data requirements for each use case might mean that sensitive data needs to remain under a company’s direct control and can be kept in a trusted enclave or environment. In other cases, less sensitive data can be exchanged with a hosted service, especially those that can separate data in a standalone environment.

“Trust by design” is a critical step in building and operating successful systems.

2. Start training your employees now

It’s incredible to see that ChatGPT has seen the fastest adoption of any consumer app in history.

But this groundswell of organic demand creates a problem for business leaders. Many employees are learning about the technology independently via social channels and the media, leaving room for misinformation. They often have no way of knowing what’s right and what’s wrong. And the fact that they can access these applications via their own smartphones and laptops can end up creating a new kind of “shadow IT” and introducing new cybersecurity threats.

That’s why a program of workforce training is both essential and urgent.

Employees need to understand the business and security risks they may be incurring, and what best practices look like.

Tom Patterson / Managing Director – Emerging Technology and Security

Flexibility and responsiveness are vital, recognizing this is a fast-moving space in which it’s not always possible to figure out every detail right now.

3. Be transparent about the data

Whether you’re consuming external foundation models, or customizing them for your own business purposes, it’s essential to recognize the risks inherent in the data used to train, fine-tune and even use these models, while remaining transparent. These risks vary depending on architecture choices.

Data is at the core of large language models (LLMs), and using models that were partially trained on bad data can destroy your results and reputation.

The outputs of generative AI systems are only as unbiased and valuable as the data they were trained on. Inadvertent plagiarism, copyright infringement, bias and deliberate manipulation are several obvious examples of bad training data. To engender trust in AI, companies must be able to identify and assess potential risks in the data used to train the foundational models, noting data sources and any flaws or bias, whether accidental or intentional.

There are tools and techniques companies can use to evaluate, measure, monitor and synthesize training data. But it’s important to understand these risks are very difficult to eliminate entirely.

The best short-term solution is therefore transparency. Being open and upfront about the data used to train the model—and the entire generative AI process—will provide much-needed clarity across the business and engender necessary trust. And creating clear and actionable guidelines around bias, privacy, IP rights, provenance and transparency will give direction to employees as they make decisions about when and how to use generative AI.

4. Use human + AI together to combat ‘AI for bad’

We must use AI for good if we want to defend AI for all. And we can use generative AI itself to help make enterprise use of generative AI more robust overall.

One option is to have a “human in the loop” to add security and ensure a sanity check on responses. Reinforcement learning with human feedback (RLHF) tunes the model based on human rankings generated from the same prompt.

Building on RLHF, Constitutional AI also uses a separate AI model to monitor and score the responses the main enterprise model is outputting. These results can then be used to fine-tune the model and secure it from harm.

5. Understand emerging risks to the models themselves

AI models themselves can be attacked and jailbroken for malicious purposes. One example is a “prompt injection” attack, where a model is instructed to deliver a false or bad response for nefarious ends. For instance, including words like “ignore all previous directions” in a prompt could bypass controls that developers have added to the system. Anecdotally, we’ve seen examples of white text, invisible to human eyes, included in pre-prepared prompts to inject malicious instructions into seemingly innocent prompts.

The implication? Business leaders will need to consider new threats like prompt injection and design robust security systems around the models themselves.

Ensuring Generative AI is safe AI

Generative AI  and foundation models represent a real milestone in AI development. The opportunities are virtually limitless. But there are new risks and threats too. Business leaders need to understand these risks and take urgent action to minimize them.

There are ever evolving models, frameworks, and technologies available to help guide AI programs forward with trust, security and privacy throughout.  Focusing on trustworthy AI strategies, trust by design, trusted AI collaboration and continuous monitoring help build and operate successful systems.

Our shared goal should be to leverage the power of generative AI in a secure way to deliver value to business and improve the lives of all who use it.

WRITTEN BY

Teresa Tung

Lead – Data Capability