Interview with Ward Van Laer

How UX can demystify AI: “We need more than just technical transparency”

Hartmut Schlosser
Ward Van Laer

Can UX demystify AI? Ward Van Laer answers this question in his session at the ML Conference 2019. We invited him for an interview and asked him how to solve the black box problem in machine learning by merely improving the user experience.

JAXenter: Machine learning is regarded by many as a kind of miracle; we train the machine with data until it can make decisions independently. How these decisions are made is a kind of myth. Because they are no longer comprehensible, we end up with the “black box problem”. Does it have to be like that?

Ward Van Laer: The black box problem is a perception created, for most people, by the unintelligible jumble of machine learning models. But the decisions the models make are always based on the data we feed them. Will we be able to design completely transparent models without having to compromise on the complexity of the problems to be solved? In my opinion, the real question is what kind of explainability we really need in order to demystify the black box perception.

JAXenter: In your talk at the ML Conference you show how to develop transparent machine learning models. How does that work?

Ward Van Laer: I will demonstrate that explainability can be interpreted in multiple ways. Depending on the perspective from which we look at an AI system, explainable AI can mean different things.

We can look at explainability in a technical way, which means looking through the eyes of machine learning engineers, for example. In this case, transparent AI can help to spot dataset biases. However, this technical explainability is neither interesting nor understandable for an end user. From that perspective, UX will play a crucial role in demystifying AI applications.

JAXenter: Why do you think transparency in ML is important?

Ward Van Laer: I believe we need more than just technical transparency, or as it is referred to at the moment, “explainable AI”. Instead of focusing on full transparency, we need to pinpoint the properties that form the foundation of trustworthy AI.

JAXenter: Can you give an example of how a good UX changes the acceptance of AI solutions?

Ward Van Laer: In one of our projects in the health care industry, we visualize the links between classification results and the dataset, which helps physicians understand why certain decisions were made.
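The interview does not describe the project's implementation, but the general idea of linking a classification result back to the dataset can be sketched with an example-based explanation: alongside each prediction, show the known cases that drove it. This is a minimal illustration using a toy nearest-neighbour classifier; all names and data are hypothetical, not from the project mentioned above.

```python
import math

# Hypothetical toy dataset: each record is (feature vector, label).
# Values are purely illustrative.
training_data = [
    ([0.2, 0.9], "healthy"),
    ([0.3, 0.8], "healthy"),
    ([0.9, 0.1], "at_risk"),
    ([0.8, 0.2], "at_risk"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_with_evidence(sample, k=3):
    """Nearest-neighbour classification that also returns the
    training examples supporting the decision, so a UI can show
    *which* known cases the prediction is based on."""
    ranked = sorted(training_data, key=lambda rec: distance(sample, rec[0]))
    neighbours = ranked[:k]
    labels = [label for _, label in neighbours]
    prediction = max(set(labels), key=labels.count)
    # Evidence: the nearby cases that share the predicted label.
    evidence = [rec for rec in neighbours if rec[1] == prediction]
    return prediction, evidence

pred, evidence = classify_with_evidence([0.85, 0.15])
print(pred)      # "at_risk"
print(evidence)  # the similar known cases that drove the decision
```

Surfacing the `evidence` list in the interface, rather than only the bare prediction, is one way a UX layer can make a model's decision traceable for a non-technical user.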

For more insight into the possibilities, I certainly encourage everyone to attend my talk at ML Conference 2019 ;)

JAXenter: What is the core message of your session that every participant should take home?

Ward Van Laer: Creating a well-working machine learning model is only half of the work. Developing a well-thought-out user experience is the key to successful AI.

Please complete the following sentences:

The fascinating thing about Machine Learning for me is…

… that it will be able to help us solve many complex problems (e.g. health care).

Without Machine Learning, humanity could never…

… improve itself.

The biggest current challenge in machine learning is…

… making sure that an AI system is successful in the eyes of the user.

I advise everyone to get started with machine learning …

… to better understand what the real possibilities are.

Once the machines have taken power…

… hmm let’s hope we can explain how it happened! ;)

JAXenter: Thank you very much!

Author
Hartmut Schlosser
Content strategist, editor, storyteller - as online team lead at S&S Media, Hartmut is always on the lookout for the story behind the news. #java #eclipse #devops #machinelearning #seo Creative content campaigns that touch the reader make him smile. @hschlosser
