Navigating AI and risk management in business

The launch of generative artificial intelligence (AI) systems such as ChatGPT, Copilot and DALL-E 3 has dramatically increased conversations around AI. From wondering how employees can use the technology to worrying about privacy and accuracy, business leaders are scrambling to get a handle on how their organizations should approach this new technology and what risks it poses to their operations.

But here’s the secret: this technology isn’t as new as we think, and neither are the risks. Businesses have been using machine learning models and chatbots for several decades. What is new is the extent to which AI systems have started to infiltrate everyday business operations. And with more use comes greater exposure to AI-related risks, which are frequently connected to data concerns. Several areas deserve particular attention:

  • Data hygiene: Just as issues around data use are under a microscope, so too is a business’s actual data. For organizations using AI, the phrase “garbage in, garbage out” is applicable. AI accentuates everything ― the good, the bad and the ugly. Therefore, it is vital to be intentional about the quality of data being used within these systems, making sure it is clean, accurate and as free from bias as possible.
     
  • Regulation and governance: Development of AI-specific regulations has been slow, and businesses can’t wait until everything is ironed out. Beyond staying current on what legislators are considering and being active in organizations that influence public policy, businesses can look to existing regulations and governance frameworks for insight into how to approach AI. Since AI can’t exist without data, state and federal data governance policy is a good place to start. Asking, “Would our customers feel we’re being good stewards of their data if we proceed with our desired course of action?” also provides a good gut check.
     
  • Contracts and third-party systems: Organizations are increasingly contracting out for AI services, from chatbots to analytical software to cybersecurity protections. Since data is at the heart of everything AI does, it is important to understand how these third-party vendors will use your data. Will it be walled off and used only to train your instance of the system? Will the data be kept private, or used for continued training and system learning? Know what the contract says for any system you are considering, and have a plan in place in case of a breach.
     
  • Business case: When it comes to AI, there is no one-size-fits-all approach. From deciding whether to use AI at all to identifying which AI products are the best fit, organizations need to think through their appetite for AI and which tasks or functions it would be most adept at. AI isn’t great at everything. For example, it can analyze data, but can it interpret the results in a meaningful way, or do you need the judgment, expertise and background of employees for such insight? Similarly, employees could use AI to quickly pull up policy and procedure information while interacting with a customer, but it isn’t advanced enough to provide quality customer interactions on its own. Ultimately, how an organization incorporates AI into its business will depend on its goals, culture and business model.

Whatever direction a business takes with AI, managing the related risks doesn’t need to be a scary endeavor. The risks are familiar ones, even if they come wrapped in different packaging. By being diligent in understanding the organization’s approach to AI and the data concerns that come with it, risk managers can address and mitigate these risks with greater confidence.

This blog post is based on information shared during a Risk Series panel discussion featuring Jordan Adams, CPA, CIDA, CQF, AVP with Nationwide; Shannon Terry, chief advanced analytics officer with Nationwide; and Angie Westover-Muñoz, program manager with Ohio State Moritz College of Law, to encourage discussion around developing an AI risk management program. It is not meant to provide guidance on AI laws, standards or regulations.