MANAGING THE RISK OF AI: PART IV - SEE SOMETHING, SAY SOMETHING (FOR BUSINESS)


Rebecca Kipp

Lead Consultant

Like it or not, AI is changing the world. It is especially changing the workplace.

From chatbots that answer your wildest questions almost instantly and automatically generate the minutes, themes, and takeaways of a meeting, to software that consumes raw data and delivers detailed analysis with predictive analytics, business automation is transforming the workplace.

But is it all puppies and walks on the beach?  

All That Glitters Is Not Gold

The world is investing in AI functionality, and as someone who can directly benefit from its usage, you may be understandably enthralled. We understand the excitement and the argument that those not using AI could be left behind. For a lot of business automation, this is likely true.

We just need one thing to be perfectly clear: Use of AI can mean risk for your company.

Data is the precious resource of every organization, and AI can’t function without that data. What we’re seeing, without getting too far into the weeds or the legalese, are contract terms that protect the supplier and explicitly strip away the protections that companies would generally have in other technology agreements.

Ultimately, it’s Legal’s job to know and address these risks, but companies could be in for a world of hurt if their businesspeople aren’t educated and alert to the risks that come with AI, or aren’t reporting potential AI use through the proper channels. This is especially true given how AI is being introduced.

Slipping One Past the Goalie

It was easier to track risk in technology contracts before AI. We covered this in more detail in Parts I and II of Managing the Risk of AI, but the main reason is simple: you (the businesspeople) are likely the only ones who (1) know whether you’re using AI, and (2) know whether contracts have been added or updated.

Suppliers are creating contracts that will slip past the necessary risk reviews (the “goalie”) unless businesspeople make sure the contracts get reviewed. AI is being added to existing software or introduced through bite-sized, inexpensive (or even free) functionality, but a small price tag doesn’t mean the data going in carries any less risk.

The legal world is the tortoise to technology’s hare, and by the time courts catch up with what’s going on and rule on how suppliers should approach contracts for their AI solutions, AI will be as ubiquitous as touchscreen phones. In the meantime, it is the duty of each member of the organization to speak up when AI is involved.

See Something, Say Something

“See Something, Say Something” means that if you see something unusual and potentially concerning, say something to the nearest authority. AI, while seemingly mainstream already, is still in its infancy, and there are very real concerns, the full magnitude of which remains undetermined.

Many organizations have put together a committee or group to review potential AI usage and the related contracts. If you don’t know who to contact in that group – or whether your company even has such a group – contacting Sourcing or Legal is always a good start.

How best to protect your company is a work in progress – and management leaders will need to put in place a more robust set of Risk Management protocols and processes – but in the meantime, make sure you’re communicating your use of AI to someone, preferably before you start using it. It is up to every member of an organization to be aware that the use of AI carries risk and to assist with the management of that risk.

As Shakespeare put it in The Merchant of Venice, “All that glisters is not gold,” and while AI is incredibly exciting, it comes with risk, too.

Please join us next week as the 2025 Seprio Summer Series dives into the risks associated with multi-tenant cloud environments and concludes its review of the Risk Management section of Supplier Governance.


Please let us know in the form below what you think about this blog post, other content on this website, or ask any other questions you might have. Don’t be shy.
