
Explainability in Deep Learning

Deep learning algorithms are indispensable tools for achieving human-like performance in artificial intelligence. While the algorithms themselves are difficult to explain, they essentially process inputs and combine them with existing data and learned probabilities to produce numerical outputs, called scores. These scores are then used to make data-driven decisions.

A primary challenge with deep learning algorithms is the ability of businesses to trust and defend the scores provided by the AI. While deep learning enables businesses to make data-driven decisions, those decisions need to be defensible. The process by which the AI developed the score also needs to be explainable. This is where explainable artificial intelligence comes into play.

What Is Explainable AI?

To dive into the details of explainability, we first need to define terms. There are two terms often used to discuss the accessibility of AI — explainability and interpretability. While explainable and interpretable machine learning are often used interchangeably, the two terms are slightly different. Simply put, explainability is the extent to which the machine learning model and its algorithms can be explained in human terms. Interpretability, on the other hand, describes the extent to which observers can predict the outcome of the AI model through cause-and-effect analysis. While both concepts are important to machine learning models, we’ll specifically be covering explainability here.

Explaining deep learning models is difficult due to two key features of the technology:

  • Non-linear pattern recognition: Deep learning AI tends toward non-linear pattern recognition to a much larger degree than traditional machine learning. These non-linear patterns are difficult for modelers to identify or plan for, which can reduce their control over the model’s outcome scores.
  • Feature detection: Deep learning AI handles a great deal of feature detection. As such, modelers often don’t know what features the AI has spotted in the data.

Both of these factors contribute to the complicated nature of deep learning AI. This means it is difficult for modelers to know what the AI “sees” in the data, leaving them with less control over the outcome scores and reducing their ability to explain the outcomes. Without explainability, there is no visibility into the process, making it difficult to trust and defend the outcomes produced by the AI.

However, if a modeler can master explainability for an AI deep learning model, they can maximize visibility and adjust the deep learning model to create defensible outputs.

Why Is Explainability Needed?

Explainable artificial intelligence is a significant challenge in machine learning today. AI models are used in a wide range of applications across industries, running everything from security cameras and smartphones to legal and financial decisions. Instances of AI developing poor or discriminatory outcomes have also gained widespread attention. Given the rising use of and increased skepticism toward AI, explainability is more important than ever.

There are four key reasons for understanding how deep learning models create their outputs:

  • Transparency: Transparency is required to understand and exploit the basic mechanisms of deep learning models. Knowledge of predictors or features enables data scientists to adjust their values and observe the effects on scores. For instance, in a model predicting which members of a population will buy a car within six months, if data scientists know that family size is one of the attributes, they can increase that value to see how it affects the output.
  • Verifying intuition: Models don’t understand human intuition — they simply view variables in mathematical terms. When determining which customers should receive a loan, for instance, people would assume a positive correlation between income and loan approval. A model, however, might learn a negative correlation between income and loans. Explainability is necessary to ensure models are drawing conclusions consistent with human understanding.
  • New patterns: Whereas human reasoning struggles beyond a few dimensions, models can apply many dimensions to business problems. Explainability for non-linear patterns in fraud detection, for example, enables humans to learn new patterns from deep learning and incorporate them into business processes.
  • Regulatory compliance: Regulatory compliance demands explainability in several industries. For example, financial companies must provide reasons for declining a customer on request. Those explanations can’t be raw numerical model outputs; they must be expressed in terms of financial factors that customers and businesses understand.
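The transparency point above can be sketched in code: hold every feature fixed except one and watch how the score responds. The scoring function below is a hypothetical stand-in for a trained deep learning model, and the feature names and weights are illustrative assumptions, not part of any real system.

```python
import math

def score(features):
    # Stand-in "model": a logistic score over a weighted sum of features.
    # A real deep learning model would replace this function entirely.
    weights = {"income": 0.00002, "family_size": 0.4, "age": 0.01}
    z = sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Probe the model: vary only family_size, keeping the other features fixed.
base = {"income": 55000, "family_size": 2, "age": 35}
for family_size in range(1, 6):
    probe = dict(base, family_size=family_size)
    print(family_size, round(score(probe), 3))
```

Plotting or tabulating the probed scores makes the model's sensitivity to that single attribute visible, which is exactly the kind of visibility transparency demands.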

How Do We Improve Explainability in Deep Learning Models?

There are many ways to increase the explainability of deep learning models. Some of the less mathematically rigorous techniques include:

  • Surrogate modeling: Data scientists can create surrogate models from deep learning models to explain their results. To do so, they train a simple, interpretable model, such as a decision tree or linear regression, on the original inputs and the predictions of the complex model (for example, predictions of which customers may default on a loan). Data scientists assume the interactions, coefficients, trends, and variable importance shown in the surrogate model reflect the complex model’s internal mechanisms. The surrogate model’s results are easier to understand and help explain the results of the initial neural network.
  • Leave one covariate out (LOCO): This trial-and-error method is effective for seeing which variables impact models most. It involves removing variables from a deep learning model one at a time, deploying the reduced model, and checking whether the results change dramatically due to any specific missing variable. When results do change drastically, data modelers note the difference in the model’s score and compare it with their expectations for that variable’s influence.
  • Maximum activation analysis: This technique is effective for explaining patterns users are unfamiliar with, such as those contributing to fraud detection. When users detect unknown patterns contributing to high fraud scores, they can assign those patterns higher weights than others when optimizing the model. The higher weights activate the patterns each time they occur, helping data scientists reach a conclusion about the nature of each pattern.
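The surrogate-modeling idea can be sketched in a few lines: fit a simple linear model to the *predictions* of a complex model, then read the surrogate's coefficient as an indication of how the complex model uses the input. Here `black_box` is a hypothetical stand-in for a trained neural network scoring default risk; in practice you would query the real model instead.

```python
import math
import random

def black_box(income):
    # Stand-in "complex" model: default risk falls smoothly with income.
    return 1.0 / (1.0 + math.exp((income - 50000) / 20000))

# Sample inputs and collect the complex model's predictions on them.
random.seed(0)
incomes = [random.uniform(20000, 100000) for _ in range(200)]
preds = [black_box(x) for x in incomes]

# Fit a one-variable linear surrogate by ordinary least squares.
n = len(incomes)
mx = sum(incomes) / n
my = sum(preds) / n
slope = sum((x - mx) * (y - my) for x, y in zip(incomes, preds)) / \
        sum((x - mx) ** 2 for x in incomes)
intercept = my - slope * mx
print(f"surrogate: risk = {intercept:.3f} + ({slope:.2e}) * income")
```

The surrogate's negative slope summarizes, in human-readable terms, that the complex model treats higher income as lower default risk. In practice a decision tree often replaces the linear model so that interactions and thresholds also become visible.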
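LOCO can be sketched similarly. Full LOCO retrains the model without each covariate; the simplified version below just zeroes one feature at a time and measures how far the score moves, which conveys the same ranking idea. The `model` function, feature names, and weights are all illustrative assumptions.

```python
def model(features):
    # Stand-in for a deployed deep learning model: a weighted sum.
    weights = {"income": 0.5, "debt": -0.8, "zip_code": 0.01}
    return sum(weights[k] * v for k, v in features.items())

example = {"income": 1.2, "debt": 0.9, "zip_code": 3.0}
full_score = model(example)

# Drop each covariate in turn (here by zeroing it) and record the
# absolute change in the score.
impact = {}
for name in example:
    reduced = dict(example, **{name: 0.0})
    impact[name] = abs(full_score - model(reduced))

# Rank features by how much removing them moves the score.
for name, delta in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(name, round(delta, 3))
```

Features whose removal barely moves the score (like `zip_code` here) are candidates for pruning; large movements flag the variables the model leans on most, which modelers can then sanity-check against their expectations.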

Explainability Translates to Better and More Credible Models

The true value of deep learning is only achieved when we associate it with explainability. Explanations for model results can affect business outcomes, workflows, and even organizational objectives. Most of all, they indicate how best to adjust models to achieve goals for deploying predictive analytics.

Without explainability, users blindly follow models like they once blindly followed intuition. Explainability gives them the understanding to best apply accurate predictions.

Leverage AI With RazorThink

If your company is interested in leveraging AI and achieving explainability with your models, RazorThink has the operating system for you.

RazorThink is an AI system company dedicated to simplifying the creation, deployment, and management of AI systems. Our artificial intelligence operating system provides an intuitive AI development environment that allows users to explore data, build models, troubleshoot engines, and run experiments and analytics. By leveraging pre-built and pre-tested code blocks, the RZT aiOS allows users without advanced software engineering skills to build powerful, explainable AI applications within days.

Learn more about RazorThink and the RZT aiOS today by reaching out to speak with a representative.

