Blackboxes and glassboxes

What’s in The Box? Explainability versus Accuracy in AI Models

© Shutterstock / Dima Moroz

It is a challenge to make AI-driven models transparent. They are a blackbox and can cause serious issues. The aim of a glassbox is to provide greater transparency in how a model is operating and how its outputs have been reached.

The complexity of today’s problems means that modern businesses are increasingly reliant on AI. Data scientists are applying AI-driven models to real-world problems in everything from crime prevention to healthcare. Yet, given the complexity of building models that can test millions of hypotheses a minute, making these models transparent is a challenge. They are a blackbox.

As a result, when non-data scientists access these models they have a severely limited understanding of how outputs have been reached, which can result in those outputs being treated as gospel. For a variety of real-world reasons, outputs may in fact be more devilish. This has serious ramifications, as individuals fail to account for the unintended biases and inaccuracies of such models.

SEE ALSO: How enterprise companies are changing recruitment with AI

Moreover, given that data science is a predominantly technical field, even those with extensive training are not equipped to weigh the moral and ethical questions of how the outputs from a blackbox should be used. This is where the ‘glassbox’ comes in.

The aim of a glassbox is to provide greater transparency into how a model is operating and how its outputs have been reached. If we look at the simplest of models, such as a linear regression, we have complete visibility over the variables, and how the outputs are reached is apparent. This is a glassbox.
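To make this concrete, here is a minimal sketch (not from the original article) of a one-variable linear regression fitted with ordinary least squares. Every parameter is directly inspectable, which is precisely what makes it a glassbox: anyone can read off what the model has learned.

```python
# Minimal glassbox sketch: a one-variable linear regression fitted with
# ordinary least squares. The fitted slope and intercept are fully visible,
# so the model's behaviour can be stated as a single readable equation.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]                 # data generated exactly by y = 2x + 1
slope, intercept = fit_linear(xs, ys)
print(f"prediction = {slope:.1f} * x + {intercept:.1f}")  # prediction = 2.0 * x + 1.0
```

A deep neural network offers no equivalent one-line summary of its behaviour, which is the transparency gap the next paragraphs describe.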

However, preserving this visibility becomes harder as the complexity of the models grows. If we look at a more complex example, such as a deep neural network, it can be near impossible to pull apart how the data produced the output.

In simple terms, the fewer variables in a model, the easier it is to explain. However, this comes at a cost: simpler models can’t match the accuracy of more complex ones, which can lead to less useful outputs. While this may sound like a binary choice, in reality it is a sliding scale. Data scientists need to decide where to draw the line between glassbox and blackbox, and determine which is more appropriate for their individual use case.

Unpacking the glassbox

The main benefit of the glassbox, and where it is most appropriate, is in giving data scientists greater accountability for where their conclusions come from.

The second is in allowing a wider range of people, with little to no experience of building models, to understand them and to trust them appropriately. These less technical individuals need the explainability and interpretability of the glassbox to make sense of a model before they can use its outputs with the right level of assurance.

Moreover, when the decisions made from these models affect real people, such as in medical diagnoses or in planning police patrol routes, there needs to be a tangible understanding of where those conclusions came from.

With this in mind, how can data scientists decide whether a glassbox is the most appropriate solution for their model?

Going to the root of the problem

One good indicator of whether data scientists should prioritise explainability or accuracy is the need for root cause analysis. For instance, if you are building a credit risk model, you may sacrifice accuracy for explainability.

On the other hand, if you are building something less critical, like face detection, you may prioritise accuracy.

In other words, if the conclusions reached by the model need justification, a glassbox better allows data scientists to reverse engineer the dynamics in the data and understand the mechanism of action. This enables the decision makers using these models to reason about the phenomena they are observing in the data.

In short, if you need to uncover the ‘why’ in your data model, then focus on explainability.
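The credit-risk case above is a good illustration of what ‘justifiable’ looks like in practice. The following sketch, with purely illustrative weights and feature names, shows an additive scorecard: because the score is a weighted sum of named factors, every decision comes with its own reason codes.

```python
# Hypothetical explainable credit-risk scorecard: the score is a weighted
# sum of named features, so each decision can be justified by listing the
# factors that pulled the score down. All weights are illustrative only.
WEIGHTS = {
    "missed_payments": -35,   # each missed payment lowers the score
    "years_of_history": 4,    # a longer credit history raises it
    "utilisation_pct": -1,    # high credit utilisation lowers it
}
BASE_SCORE = 650

def score_with_reasons(applicant):
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    # Reason codes: factors ordered by how much they lowered the score.
    reasons = sorted(contributions, key=contributions.get)
    return score, reasons

applicant = {"missed_payments": 2, "years_of_history": 5, "utilisation_pct": 40}
score, reasons = score_with_reasons(applicant)
print(score)       # 650 - 70 + 20 - 40 = 560
print(reasons[0])  # "missed_payments" -- the biggest negative factor
```

A more accurate blackbox model might score the same applicant better, but it could not produce this ‘why’ so directly.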

Prioritising privacy

Another consideration when striking the balance between explainability and accuracy is the ethics of privacy, which largely comes down to the trade-off between privacy and signal. To perform deep analysis on a model and dissect it at a granular level, researchers typically need complete access to all the data.

However, giving researchers access to this data may introduce privacy and security risks when it contains personally identifiable information (PII). These risks can be mitigated quite simply through the obfuscation, aggregation and anonymisation of data, as well as the removal of certain data fields.
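A minimal sketch of those mitigations, assuming invented field names rather than any standard schema, might look like this: drop direct identifiers, pseudonymise the record ID with a salted hash, and coarsen quasi-identifiers such as age into bands.

```python
import hashlib

# Illustrative PII mitigation before sharing records with researchers.
# Field names and the salt are assumptions for the sketch, not a standard.
SALT = "rotate-this-secret"   # hypothetical per-dataset salt

def anonymise(record):
    return {
        # Pseudonymised ID: stable within the dataset, not reversible.
        "id": hashlib.sha256((SALT + record["id"]).encode()).hexdigest()[:12],
        # Age coarsened into a ten-year band (a quasi-identifier).
        "age_band": f"{record['age'] // 10 * 10}-{record['age'] // 10 * 10 + 9}",
        # The outcome field is the signal we keep for analysis.
        "outcome": record["outcome"],
        # Name and exact address are removed entirely.
    }

raw = {"id": "C-1029", "name": "J. Smith", "age": 47,
       "address": "1 High St", "outcome": "default"}
print(anonymise(raw))
```

Each of these steps discards detail, which is exactly the cost the next paragraph describes.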

In doing so, however, the performance of AI systems is hampered, as potentially meaningful data is hidden from them.

As such, a decision needs to be made on how to responsibly handle data that is being shared with human researchers (within or outside the company) without unnecessarily compromising performance. One solution to this is ‘blindfolded analytics’ which co-locates an autonomous AI-powered research engine next to the data in a secure environment.

This engine is capable of asking questions and exploring creative directions of research. The autonomous capabilities of the engine enable running research without any human researcher being exposed to the raw, unaggregated data – thereby eliminating the trade-off between privacy, security and performance of AI systems.

This does require an AI capable of asking many questions of the data, so that analysts or scientists don’t need to look at the raw data and formulate those questions themselves.
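The core mechanism can be reduced to a toy sketch (the class and threshold below are invented for illustration, not SparkBeyond’s actual design): the raw rows live behind an interface that only ever answers aggregate questions, so no human caller is exposed to unaggregated records.

```python
# Toy sketch of the 'blindfolded analytics' idea: raw data stays behind an
# aggregate-only interface. Names and the minimum group size are assumptions.
class BlindfoldedStore:
    MIN_GROUP = 3   # refuse aggregates over groups small enough to identify

    def __init__(self, rows):
        self._rows = rows   # raw records never leave this object

    def mean(self, field, where=None):
        hits = [r[field] for r in self._rows if where is None or where(r)]
        if len(hits) < self.MIN_GROUP:
            raise ValueError("group too small to release safely")
        return sum(hits) / len(hits)

store = BlindfoldedStore([
    {"region": "north", "spend": 120},
    {"region": "north", "spend": 80},
    {"region": "north", "spend": 100},
    {"region": "south", "spend": 300},
])
print(store.mean("spend", where=lambda r: r["region"] == "north"))  # 100.0
```

Asking for the southern region’s mean would raise an error here, since a single-row aggregate would reveal an individual record.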

While the solutions to privacy, bias and security issues may seem obvious, knowing when to implement them may be the biggest obstacle. Given that many of these solutions involve some trade-off in model accuracy, the decision to implement a glassbox approach is not always aligned with business objectives. To illustrate this, let us look at an example:

Say we have created a model for an oil pipeline to determine where corrosion may have occurred and where parts need replacing. If the model is inaccurate in one direction (misidentifying a perfectly functional section as corroded), this involves some expense but is not too problematic. However, inaccuracy in the opposite direction (failing to identify corroded sections) could cause catastrophic environmental and economic damage.
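This asymmetry is what the business must communicate to its data scientists, and it can be made concrete with a cost-sensitive operating point. In the sketch below, all costs and model probabilities are invented for illustration; the point is only that with asymmetric costs, the threshold that minimises expected cost sits well away from a naive 0.5.

```python
# Illustrative asymmetric-cost example: missing corrosion is vastly more
# expensive than a false alarm, so a cautious threshold is cheaper overall.
COST_FALSE_ALARM = 10_000           # replacing a healthy pipe section
COST_MISSED_CORROSION = 5_000_000   # environmental and economic damage

def expected_cost(threshold, predictions):
    """predictions: list of (model probability of corrosion, actually corroded)."""
    cost = 0
    for p, corroded in predictions:
        flagged = p >= threshold
        if flagged and not corroded:
            cost += COST_FALSE_ALARM        # needless replacement
        elif not flagged and corroded:
            cost += COST_MISSED_CORROSION   # the catastrophic miss
    return cost

preds = [(0.9, True), (0.4, True), (0.3, False), (0.6, False)]
print(expected_cost(0.35, preds))  # 10_000: one false alarm, no misses
print(expected_cost(0.5, preds))   # 5_010_000: one corroded section missed
```

Choosing that threshold is a business decision about consequences, not a purely statistical one, which is exactly why the dialogue described below matters.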

SEE ALSO: Decoding the 3 potential advantages of big data

Data scientists may need clear direction from the business on how to prioritise such outcomes, and on the fact that explainability here should not come before accuracy, given the importance of the model.

At the same time, data scientists need to use their understanding of how the model works to explain its potential drawbacks to the business, including that even in a best-case scenario the model is only probabilistic. As such, a dialogue between the business and its data scientists will ensure the right balance between explainability and accuracy is struck.

Author
Sagie Davidovich
Sagie is an entrepreneur, technological visionary and machine intelligence enthusiast, who continually strives to bridge the gap between human and machine reasoning and interaction. He’s passionate about computational knowledge representation, acquisition, storage, reasoning, and processing. Sagie has served in a range of executive technological positions in disruptive startup companies. Prior to co-founding SparkBeyond, Sagie served as GM and SVP of R&D for NewBrandAnalytics, a social business intelligence pioneer. He’s also served as VP R&D of SemantiNet, a semantic reasoning engine, and co-founded Delver, a social search engine that was acquired by Sears, where he served as CTO. Prior to founding Delver, Sagie was the architect of a large-scale award-winning predictive maintenance system.

Jo McLenaghan
Jo is a Principal Data Scientist at SparkBeyond. She is skilled in machine learning, R and Python coding, and big data solutions. Recent projects include building machine learning models for predictive maintenance, store location optimisation and digital media effectiveness. She holds a first-class Master’s degree in physics from the University of Oxford and a PhD in physics from the University of St. Andrews.
