A Framework for AI Governance

Jean-Francois Gagné

New Power Means New Responsibility

It used to be that people would study a process, its inputs and its outputs, to write code that could automate that process. Building such software captures intellectual property in digital form, and until now it has been a cognitive task mostly driven by humans. Today AI is writing its own software, extracting signal from noise and figuring out the rules by itself; it is taking on the cognitive task of digitally codifying the world. It is revolutionizing what can be automated and the scale at which it can be deployed. With that new reach come new responsibilities to make sure AI is serving the right objectives.

Modern AI can figure out patterns, classify objects, make decisions and evaluate the results. It can learn and adapt to new situations using feedback loops. It’s awesome software. You don’t need to pay someone to make an analysis, then pay someone to create a piece of software, then pay someone to validate the outcome, then realize it’s not doing what you expected and go back to tweak the software. That whole cycle can take years in large organizations, yet a properly designed process powered by artificial intelligence can do it in a matter of days. It can recode itself, roll the changes out, and verify whether it’s actually moving the needle in the direction you want, all at accuracy and speed beyond human ability. Quite powerful. Quite threatening.

This new approach is introducing new risk into organizations at a scale not seen before. The way you code such a system is by showing it examples of data, not by writing or editing the code yourself. At this point, there is very little monitoring of the consequences, because the current field of data governance concerns only the quality, integrity, and security of the data itself. AI governance, on the other hand, looks more broadly at the signals in the data and the outcomes they drive.

In contrast to data supporting human-driven processes, AI will be operating processes sometimes 10x better or faster than before, leading to scenarios you have never seen. Simply from a value standpoint, this can be problematic: if you become super efficient at selling to a specific consumer segment, you might lock yourself in and forget about the rest of the market. Is your model seeing the entire problem?

Those responsible for managing these new information systems have three main focuses:

  1. Accurately defining the problem and objectives the agent is solving for and the outcomes it should be seeking. This includes the right performance metrics, as well as what IP (insights, models of the world, etc.) should be extracted and the gradients of ownership between user and vendor.
  2. Orchestrating the feedback loops that drive the learning and improvement of the model, from raw collection to the interpretation of results and connection with other intelligent systems’ insights.
  3. Assessing the risk: all the points where your agent system can go wrong. How will the model self-assess at every point in the process? How are you monitoring the automated system to make sure it is doing the right things?
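To make these three focuses concrete, here is a minimal sketch, in Python, of what a governance specification for a single agent might capture. Everything here (the GovernanceSpec class, its field names, the sample check) is my own illustration, not an established API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of a governance spec for one AI agent, covering
# the three focuses above. All names are illustrative assumptions,
# not part of any published framework.

@dataclass
class GovernanceSpec:
    # 1. Problem definition: objective, performance metrics, IP ownership
    objective: str
    metrics: List[str]
    ip_ownership: str                     # e.g. "vendor", "user", "shared"
    # 2. Feedback loops: where results are collected and fed back
    feedback_sources: List[str] = field(default_factory=list)
    # 3. Risk: self-assessment checks run against every observed outcome
    risk_checks: List[Callable[[dict], bool]] = field(default_factory=list)

    def assess(self, outcome: dict) -> List[str]:
        """Return the names of the risk checks this outcome fails."""
        return [c.__name__ for c in self.risk_checks if not c(outcome)]

# Example check echoing the earlier question "is your model seeing the
# entire problem?": flag outcomes that serve too few consumer segments.
def segment_coverage_ok(outcome: dict) -> bool:
    return outcome.get("segments_served", 0) >= 3

spec = GovernanceSpec(
    objective="grow revenue without narrowing market coverage",
    metrics=["conversion_rate", "segment_diversity"],
    ip_ownership="shared",
    feedback_sources=["weekly sales results"],
    risk_checks=[segment_coverage_ok],
)

print(spec.assess({"segments_served": 1}))  # ['segment_coverage_ok']
```

The point of a structure like this is that the objective, the feedback plumbing, and the risk checks are declared together, so each agent's governance can be reviewed and audited as one artifact.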

It’s quite simple to get AI to learn and perform the basic functions of a car. The challenge is whether it can do so in all the different possible contexts, such as changing road conditions, stormy weather, pedestrians, and so on. Coding with data examples can drastically reduce the cost, but it still requires a good deal of human ingenuity to consider how it is applied, and the job of managing AI systems will become much more about considering whether the AI is dealing with the whole picture. Managing the value creation and the risks is what governance frameworks are for.

So, here’s a framework of what I think governance should look like for AI to be trusted both inside and outside the organization*:

Each area of consideration will depend on the amount of autonomy you are building an agent for.
Levels of Autonomy
0. Disconnected

There is activity in your organization that AI has no clue about. In your risk assessment, you need to consider how even disconnected activities may eventually become connected. Example: Handwritten notes made 10 years prior or old store video footage may be analyzed by an AI system for relevant information.

1. Watching

Watching is one of the basic implementations of AI: collecting and classifying information that will drive other processes. What is it watching? What is it paying attention to, and what is it ignoring? What processes does its information collection connect to? Example: An AI agent that watches a hockey game and automatically records the stats, including ones a human can’t see, such as the force of a check.

2. Coaching

Coaching is about making suggestions without taking action. AI coaches can still be powerful, thanks in part to what we know about nudging human behavior. Example: Say I’m presenting and a camera is watching the audience, analyzing their body language to tell if they’re bored. It can tell what they like and dislike, and when it’s time for me to try another joke.

3. Collaborating

This is where a machine can’t yet fully automate a scenario but can still drive most of the show. Example: In insurance claims processing, you may get 60% or 70% automation. That means most of the tasks are performed by an AI agent, while a human is very much in the loop to process and analyze the remaining 30% or 40%.

4. Autonomous

When an agent is fully autonomous, things are going to happen so quickly that the human cannot be in the loop. The interaction will be through adjusting the system, monitoring the results, and providing feedback. Example: This is what much of cybersecurity and high-frequency robo-trading looks like today: machines making huge numbers of decisions on their own, at speeds beyond any detailed human oversight.

How these levels of autonomy get classified will vary across organizations, but this captures the broad scope of it.
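If you keep an inventory of agents, the levels lend themselves to a direct encoding, so governance rules can be compared against a threshold. A minimal sketch in Python, with an enum name and helper rule of my own choosing:

```python
from enum import IntEnum

# Sketch: the five autonomy levels above as an ordered enum. The enum
# and the rule below are illustrative, not a standard.
class AutonomyLevel(IntEnum):
    DISCONNECTED = 0   # activity the AI has no clue about (yet)
    WATCHING = 1       # collects and classifies information
    COACHING = 2       # suggests actions but never takes them
    COLLABORATING = 3  # automates most tasks, human firmly in the loop
    AUTONOMOUS = 4     # decides at speeds beyond detailed human oversight

def requires_human_in_loop(level: AutonomyLevel) -> bool:
    """Illustrative rule: anything short of full autonomy keeps a human
    in the loop; fully autonomous agents get monitoring instead."""
    return level < AutonomyLevel.AUTONOMOUS

print(requires_human_in_loop(AutonomyLevel.COLLABORATING))  # True
print(requires_human_in_loop(AutonomyLevel.AUTONOMOUS))     # False
```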

An AI Governance Framework for Industry

What follows explains what I mean by each category. Every organization will need to think through its own general principles for each of these sections, but also apply them individually to each of its agents to make specific rules for a given situation. For the individual considerations, I would add the role of the agent, requirements for deployment, risks to watch for, parameters for adversarial governance models, and how it all connects with broader, existing corporate governance, especially around data and ethics.
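As one hypothetical way to apply those general principles to an individual agent, the specific rules could live in a simple structured record. Every key and value below is an assumption of mine, mirroring the considerations just listed and the claims-processing example from the Collaborating level:

```python
# Sketch: a per-agent governance record. All keys and values are
# illustrative, not taken from any real policy.
claims_agent_policy = {
    "role": "insurance claims triage",
    "autonomy_level": 3,  # collaborating: ~60-70% automated
    "deployment_requirements": [
        "bias review on historical claims data",
        "sign-off from the data governance board",
    ],
    "risks_to_watch": [
        "drift when the mix of incoming claims changes",
        "over-automation of edge cases",
    ],
    "adversarial_governance": {
        "challenger_model": True,       # a second model audits decisions
        "disagreement_threshold": 0.05,
    },
    "links_to_corporate_governance": ["data policy", "ethics charter"],
}
```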

Read the full story on jfgagne.ai
