How to Think About Risk in Generative AI: It’s Not As New As You Think
Everyone’s talking about Generative AI: what it can do, what it might break, and what it means for security and compliance. There’s a lot of noise, so let’s simplify things.
From a risk perspective, most of what people are calling “GenAI risk” really falls into two categories we’ve been managing for years: Third-Party Risk and Data Risk.
Don’t get me wrong, there is more to it than this, ESPECIALLY if you’re building your own AI. I’ll cover that in a future blog. But for those of you dealing with your vendors, tools, and third parties using GenAI, it pretty much boils down to this:
1. Third-Party Risk
If you’re already sending your data to a vendor, and you’ve done your due diligence, locked in solid contracts, and put appropriate controls in place, then using that vendor’s new GenAI features probably doesn’t introduce much additional risk.
It’s not a new vendor. It’s just a new capability. As long as your agreements cover things like data use, confidentiality, and model training exclusions, your residual risk likely hasn’t changed much.
Bottom line: If you already trust the vendor with your data, and the paperwork backs it up, GenAI shouldn’t feel like a leap of faith.
Another area seeing accelerated risk is application development. When teams use copilots to help write code, the code review process has to account for hallucinations in that code: invented APIs, packages that don’t exist, subtly wrong logic. But in the end, it’s still code review, and it should have already been there. One narrow example of what that can look like is sketched below.
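To make that concrete, here’s one narrow check a review pipeline could add: flagging dependencies that don’t actually resolve on PyPI, since invented package names are a common copilot hallucination (and a supply-chain risk if someone later registers the name). This is a minimal Python sketch under my own assumptions; the requirements.txt path and the “exists on PyPI” check are illustrative, not anyone’s official tooling.

```python
# Minimal sketch: flag requirements that don't resolve on PyPI,
# a common symptom of a hallucinated (or typo'd) dependency.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on PyPI's simple index."""
    # Rough normalization; PEP 503 normalization is stricter, but this is a sketch.
    url = f"https://pypi.org/simple/{name.strip().lower()}/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


def check_requirements(path: str = "requirements.txt") -> list[str]:
    """Return requirement names that could not be found on PyPI."""
    missing = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep just the package name, dropping version specifiers and markers.
            name = line.split(";")[0].split("==")[0].split(">=")[0].split("<=")[0].strip()
            if name and not package_exists_on_pypi(name):
                missing.append(name)
    return missing


if __name__ == "__main__":
    for name in check_requirements():
        print(f"WARNING: '{name}' not found on PyPI -- possible hallucinated dependency")
```

The same idea extends to other ecosystems; the point is that a hallucination check is just one more gate in a review process that should already exist.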
2. Data Risk
This is where things can get more nuanced.
If you’ve already been sharing sensitive or regulated data with a vendor, and your contracts cover usage and protections, then enabling GenAI features may not shift your risk significantly.
But if you haven’t been sending that kind of data, and GenAI creates new ways to input it, you need to pause and think:
Do you have internal controls or user guidance to prevent sensitive data from being shared?
Are there technical safeguards (like DLP or prompt filtering) in place? A minimal sketch of what that can look like follows this list.
If not, do you need to rework your contracts to address this?
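On the technical safeguards question above, here’s roughly what a lightweight prompt filter can look like. It’s a minimal Python sketch; the regex patterns and the placeholder format are my own illustrative assumptions, and a real deployment would lean on your existing DLP and classification tooling rather than a handful of regexes.

```python
# Minimal sketch of prompt filtering before text is sent to a GenAI API.
import re

# Hypothetical patterns for a few common sensitive-data shapes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label.upper()}]", prompt)
    return prompt, findings


if __name__ == "__main__":
    clean, found = redact_prompt("Customer 123-45-6789 emailed jane@example.com")
    print(clean)   # Customer [REDACTED:SSN] emailed [REDACTED:EMAIL]
    print(found)   # ['ssn', 'email']
```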
This really comes down to two things: data discipline and clear boundaries. Without them, GenAI doesn’t just increase risk, it increases uncertainty.
Updated: One area of data risk worth exploring a little further is access management. I’ve talked to a number of people using various copilots, and what they’re finding is that their access controls aren’t as tight as they assumed. The result is that data already granted to users inadvertently becomes much easier to reach, because the copilot surfaces it on request. A very important part of using internal copilots is ensuring they aren’t exacerbating an access control problem that already existed but simply hadn’t been noticed. A rough first-pass audit is sketched below.
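For that first-pass audit, one low-tech approach is to export existing grants and flag anything shared with overly broad groups before a copilot is allowed to index it. The sketch below assumes a hypothetical CSV export with resource, principal, and permission columns; the file name, column names, and the list of “broad” principals are all illustrative, not tied to any specific platform.

```python
# Minimal sketch of an access review before turning a copilot loose on internal data.
import csv

# Hypothetical names for overly broad grants; adjust to your own directory groups.
BROAD_PRINCIPALS = {"everyone", "all employees", "authenticated users"}


def find_overshared(export_path: str = "access_export.csv") -> list[dict]:
    """Return grants made to overly broad groups from a permissions export."""
    flagged = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["principal"].strip().lower() in BROAD_PRINCIPALS:
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    for grant in find_overshared():
        print(f"Overshared: {grant['resource']} -> {grant['principal']} ({grant['permission']})")
```

None of this is sophisticated, and that’s the point: the copilot isn’t creating the exposure, it’s revealing it.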
What Can You Do Right Now?
Start with visibility. Know where GenAI is showing up in tools you already use.
Classify before you prompt. Be clear on what kinds of data are okay to input, and what kinds aren’t.
Ask vendors the right questions. Is prompt data stored? Used for training? Who has access?
Tighten contracts if needed. Make sure GenAI use is covered under your existing data terms.
Ensure you are comfortable with the access controls on the data that you grant to GenAI technology; it’s going to pass along what it sees.
Final Thought
There’s also a business lens to this conversation that often gets skipped: Will using GenAI save money or generate revenue?
If the answer is no, or you can’t measure it yet, then now may not be the time to overhaul your controls or rush into implementation. Not every new tool deserves your attention right away. But it’s also true that if you don’t keep up with the market, you will be left behind.
At the end of the day, GenAI doesn’t introduce some brand-new category of risk. It just puts a new spotlight on third-party risk and data risk, things you should already be managing.
If your foundation is solid, GenAI doesn’t need to be disruptive. It just needs to be deliberate.