
‘Trustworthy AI’ is a Framework to Help Manage Unique Risk

Irfan Saif, Beena Ammanath, MIT Technology Review
25.3.2020

With the continued rapid expansion of artificial intelligence, businesses are facing challenges that are more human than machine.

Artificial intelligence (AI) technology continues to advance by leaps and bounds and is quickly becoming a potential disrupter and essential enabler for nearly every company in every industry. At this stage, one of the barriers to widespread AI deployment is no longer the technology itself; rather, it’s a set of challenges that ironically are far more human: ethics, governance, and human values.

As AI expands into almost every aspect of modern life, the risks of misbehaving AI increase exponentially—to a point where those risks can literally become a matter of life and death. Real-world examples of AI gone awry include systems that discriminate against people based on their race, age, or gender, and social media systems that inadvertently spread rumors and disinformation.

Even worse, these examples are just the tip of the iceberg. As AI is deployed on a larger scale, the associated risks will likely only increase—potentially having serious consequences for society at large, and even greater consequences for the companies responsible. From a business perspective, these potential consequences include everything from lawsuits, regulatory fines, and angry customers to embarrassment, reputation damage, and destruction of shareholder value.

Yet with AI now becoming a required business capability—not just a “nice to have”—companies no longer have the option to avoid AI’s unique risks simply by avoiding AI altogether. Instead, they must learn how to identify and manage AI risks effectively. In order to achieve the potential of human and machine collaboration, organizations need to communicate a plan for AI that is adopted and understood from the mailroom to the boardroom. By having an ethical framework in place, organizations create a common language by which to articulate trust and help ensure the integrity of data among all of their internal and external stakeholders. A common framework and lens for governing and managing AI-related risks consistently across the enterprise can enable faster and more consistent adoption of AI.

The Trustworthy AI Framework

To better address the challenges related to AI ethics and governance, it helps to leverage a framework. Deloitte’s Trustworthy AI framework introduces six key dimensions that, when considered collectively in the design, development, deployment, and operational phases of AI system implementation, can help safeguard ethics and build a trustworthy AI strategy.

The Trustworthy AI framework is designed to help companies identify and mitigate potential risks related to AI ethics at every stage of the AI lifecycle. Here’s a closer look at each of the framework’s six dimensions.

[Figure: The Trustworthy AI framework]

1. Fair, Not Biased

Trustworthy AI must be designed and trained to follow a fair, consistent process and make fair decisions. It must also include internal and external checks to reduce discriminatory bias.

Bias is an ongoing challenge for humans and society, not just AI. However, the challenge is even greater for AI because it lacks a nuanced understanding of social standards—not to mention the extraordinary general intelligence required to achieve “common sense”—potentially leading to decisions that are technically correct but socially unacceptable. AI learns from the data sets used to train it, and if those data sets contain real-world bias, AI systems can learn, amplify, and propagate that bias at digital speed and scale.

For example, an AI system that decides on-the-fly where to place online job ads might unfairly target ads for higher-paying jobs at a website’s male visitors because the real-world data shows men typically earn more than women. Similarly, a financial services company that uses AI to screen mortgage applications might find its algorithm is unfairly discriminating against people based on factors that are not socially acceptable, such as race, gender, or age. In both cases, the company responsible for the AI could face significant consequences, including regulatory fines and reputation damage.

To avoid problems related to fairness and bias, companies first need to determine what constitutes “fair.” This can be much harder than it sounds since for any given issue there is generally no single definition of “fair” upon which all people agree. Companies also need to actively look for bias within their algorithms and data, making the necessary adjustments and implementing controls to help ensure additional bias does not pop up unexpectedly. When bias is detected, it needs to be understood and then mitigated through established processes for resolving the problem and rebuilding customer trust.
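To make the idea of “actively looking for bias” concrete, the following sketch computes a simple demographic-parity gap on a model’s approval decisions, grouped by a protected attribute. The records, group labels, and tolerance threshold are hypothetical, and real fairness audits typically use several metrics and a domain-specific definition of “fair”; this is only a minimal illustration.

```python
# Minimal sketch of a bias check: compare approval rates across groups.
# The records below are hypothetical; in practice they would be a model's
# actual decisions joined with a protected attribute such as gender or age band.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Return the fraction of approved decisions per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Demographic-parity gap: difference between the highest and lowest group rates.
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")

TOLERANCE = 0.2  # hypothetical threshold agreed with risk/compliance stakeholders
if gap > TOLERANCE:
    print("Potential disparate impact - review features and training data.")
```

A check like this would typically run as part of model validation and ongoing monitoring, so that new bias introduced by retraining or data drift is caught before decisions reach customers.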

2. Transparent and Explainable

For AI to be trustworthy, all participants have a right to understand how their data is being used and how the AI is making decisions. The AI’s algorithms, attributes, and correlations must be open to inspection, and its decisions must be fully explainable.

As decisions and processes that rely on AI increase both in number and importance, AI can no longer be treated as a “black box” that receives input and generates output without a clear understanding of what is going on inside.

For example, online retailers that use AI to make product recommendations to customers are under pressure to explain their algorithms and how recommendation decisions are made. Similarly, the US court system faces ongoing controversy over the use of opaque AI systems to inform criminal sentencing decisions.

Important issues to consider in this area include identifying the AI use cases for which transparency and explainability are particularly important, and then understanding what data is being used and how decisions are being made for those use cases. Also, with regard to transparency, there is growing pressure to explicitly inform people when they are interacting with AI, instead of having the AI masquerade as a real person.
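One common starting point for explainability, though not one prescribed by the framework itself, is to measure how much each input feature drives a model’s predictions. The sketch below uses scikit-learn’s permutation importance on a synthetic dataset; the feature names and data are invented for illustration only.

```python
# Minimal sketch: surface which inputs most influence a model's decisions,
# a first step toward explaining them. Data and feature names are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "utilization", "late_payments"]  # hypothetical

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much accuracy drops when a feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```

Rankings like this do not make a model fully transparent, but they give stakeholders a concrete, inspectable account of what the system is actually paying attention to.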

3. Responsible and Accountable

Trustworthy AI systems need to include policies that clearly establish who is responsible and accountable for their output. Blaming the technology itself for poor decisions and miscalculations just isn’t good enough: not for the people who are harmed, and certainly not for government regulators. This is a key issue that will likely only become more important as AI is used for an expanding range of increasingly critical applications such as disease diagnosis, wealth management, and autonomous driving.

For example, if a driverless vehicle causes a collision, who is responsible and accountable for the damage? The driver? The vehicle owner? The manufacturer? The AI programmers? The CEO?

Similarly, consider the example of an investment firm that uses an automated platform powered by AI to trade on behalf of its clients. If a client invests her life savings through the firm and then loses everything due to poor algorithms, there should be a mechanism in place to identify who is accountable for the problem, and who is responsible for making things right.

Key factors to consider include which laws and regulations might determine legal liability and whether AI systems are auditable and covered by existing whistleblower laws. Also, how will problems be communicated to the public and regulators, and what consequences will the responsible parties face?
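One practical building block behind the auditability question above is an immutable record of each automated decision: which model version produced it, from what inputs, and when. The sketch below is a minimal, hypothetical audit log; the field names and hashing scheme are assumptions, and a production system would add tamper-evident storage and governed retention.

```python
# Minimal sketch of an audit trail for automated decisions.
# Field names and the hashing scheme are illustrative, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output, log_path="decisions.log"):
    """Append one decision record, with a hash of the inputs for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a hypothetical automated trade decision.
log_decision("trading-model-1.4.2", {"ticker": "XYZ", "signal": 0.82}, "BUY")
```

A trail like this gives investigators and regulators something concrete to reconstruct when they need to determine who, or what, was responsible for a harmful outcome.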

4. Robust and Reliable

In order for AI to achieve widespread adoption, it must be at least as robust and reliable as the traditional systems, processes, and people it is augmenting or replacing.

For AI to be considered trustworthy, it must be available when it’s supposed to be available and must generate consistent and reliable outputs—performing tasks properly in less than ideal conditions and when encountering unexpected situations and data. Trustworthy AI must scale up well, remaining robust and reliable as its impact expands and grows. And if it fails, it must fail in a predictable, expected manner.

Consider the example of a health-care company that uses AI to identify abnormalities in brain scans and prescribe appropriate treatment. To be trustworthy, it is absolutely essential for the AI algorithms to produce consistent and reliable results since lives could be on the line.

To achieve AI that is robust and reliable, companies need to ensure their AI algorithms produce the right results for each new data set. They also need established processes for handling issues and inconsistencies if and when they arise. The human factor is a critical element here: understanding how human input affects reliability; determining who the right people are to provide input; and ensuring those people are properly equipped and trained—particularly with regard to bias and ethics.
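One simple way to operationalize “the right results for each new data set” is a pre-deployment gate that compares the model’s behavior on incoming data against a validated baseline. The sketch below flags a shift in the positive-prediction rate; the drift measure, threshold, and data are assumptions for illustration, not part of the framework.

```python
# Minimal sketch of a reliability gate: compare prediction rates on new data
# against a baseline established during validation. Thresholds are hypothetical.
import numpy as np

def positive_rate(predictions: np.ndarray) -> float:
    """Fraction of positive (1) predictions."""
    return float(np.mean(predictions))

def reliability_check(baseline_preds, new_preds, max_shift=0.10):
    """Fail if the positive-prediction rate moves more than max_shift."""
    shift = abs(positive_rate(new_preds) - positive_rate(baseline_preds))
    return {"shift": shift, "ok": shift <= max_shift}

rng = np.random.default_rng(0)
baseline = rng.binomial(1, 0.30, size=1000)   # behavior accepted at validation
incoming = rng.binomial(1, 0.45, size=1000)   # new data set to screen

result = reliability_check(baseline, incoming)
print(result)
if not result["ok"]:
    print("Output distribution shifted - route to human review before acting.")
```

Gates like this also give the humans in the loop a defined trigger for when to intervene, which ties reliability back to the training and escalation processes described above.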

Read the full story on MIT Technology Review
