ML at its best: How communication between humans and machines works
Machine learning enables customized conversations between humans and machines that can lead to buying decisions. We asked Tina Nord and Kathleen Jaedtke how this can be achieved with dialogue-oriented technologies. Let’s take a look at how communication between humans and machines works.
JAXenter: Customer contact via a machine sounds exciting. What exactly do you mean by “artificial intelligence” in connection with marketing strategies? Are we talking about chatbots?
Tina Nord & Kathleen Jaedtke: It’s about much more than chatbots. A simple chatbot is not necessarily based on Machine Learning (ML). A simple dialogue between human and machine can be programmed in a fairly straightforward way. ML only becomes relevant when a bot or intelligent assistant has to process complex speech or text input from its human counterpart. And even that is only one of many use cases. Anyone who wants to engage with Artificial Intelligence (AI) in marketing should first look into the processing and generation of natural language (NLP & NLG) as well as machine vision.
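To illustrate the point that a simple dialogue needs no ML at all, a basic chatbot can be little more than a keyword-to-response lookup. The rules and responses below are purely illustrative, not taken from any real product:

```python
# Minimal rule-based chatbot: a keyword-to-response lookup, no ML involved.
# All keywords and responses are made up for illustration.
RULES = {
    "hello": "Hi there! How can I help you?",
    "price": "Our plans start at $10/month.",
    "hours": "We are open Monday to Friday, 9am to 5pm.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

print(reply("Hello, anyone there?"))  # -> "Hi there! How can I help you?"
```

ML only enters the picture once the bot must cope with input that such hand-written rules cannot anticipate, such as paraphrases, typos, or free-form speech.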
Subforms of machine learning, such as the processing and generation of natural language, have the potential to speed up or simplify work processes.
These subforms of machine learning influence, for example, users’ search behavior and expectations, or enable new, intuitive, and faster types of dialogue. They also have the potential to speed up or simplify work processes, for example through the automated creation and translation of texts, or by providing automatically pre-sorted images that match a certain corporate identity. In this way, AI can contribute to overarching strategic goals such as cost leadership or differentiation.
JAXenter: What does a source of inspiration look like?
Tina Nord & Kathleen Jaedtke: Our users are a source of inspiration for us. Their feedback is essential when it comes to adopting innovative technologies. Intelligent assistants and robots are a great thing, but they only add real value if they simplify users’ lives. A simple example is text search: typing endless strings of words into search engines is usually time-consuming and yields an endless series of search results rather than quick, easy access to the desired information. Only through the use of machine learning do search results become personally relevant. Voice search goes further: we no longer have to type the search term at all, and spelling becomes irrelevant when the query is spoken.
Visual search even reduces finding visual inspiration to a click of the camera shutter. All three examples are based on ML, and all make searching faster and simpler. New technology makes it easier for us to search the Internet and replaces tedious text search. Our conclusion: machine learning only has a right to exist if it offers true added value for the user.
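The idea of personally relevant results can be sketched in a few lines: results that plain keyword search would treat as equally good matches get re-ranked against a learned user profile. Everything here, including the data, the tags, and the scoring, is a hypothetical toy, not how any real search engine works:

```python
# Toy personalized ranking: two results match the query "running shoes"
# equally well, but a user's (hypothetical, ML-learned) interest profile
# breaks the tie. All data below is invented for illustration.
results = [
    {"title": "Running shoes for city fashion", "tags": {"fashion", "lifestyle"}},
    {"title": "Running shoes for trail running", "tags": {"sports", "outdoor"}},
]

user_interests = {"outdoor", "sports"}  # stand-in for a learned profile

def personal_score(result: dict) -> int:
    # Count how many of the result's tags overlap with the user's interests.
    return len(result["tags"] & user_interests)

ranked = sorted(results, key=personal_score, reverse=True)
print(ranked[0]["title"])  # the trail-running result ranks first for this user
```

In a real system the profile and the scoring function would both be learned from behavior rather than hand-coded, which is exactly where ML comes in.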
JAXenter: What do dialogue-oriented technologies look like in practice and what’s under their hood?
Tina Nord & Kathleen Jaedtke: The best-known example of a dialogue-oriented technology is probably Google Duplex, a Google Assistant feature that makes appointments in a human-sounding voice or books a table at a favorite restaurant. However, such advanced features are usually not yet available in practice. More common is the use of so-called Google Actions or Alexa Skills, which often do not go beyond functions such as weather or news queries.
The development of such conversational interfaces is (still) complex, and what is under the hood is better explained by a software engineer. However, numerous companies are working to change this. In the future, everyone will be able to create new Skills or Actions and make them available to users with just a few clicks.
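At their core, Skills and Actions of the simple kind mentioned above usually boil down to intent routing: the platform converts speech into a named intent plus parameters ("slots"), and the skill code maps that intent to a spoken response. The sketch below shows only this general pattern; the request and response shapes are simplified illustrations, not the actual Alexa or Google schemas:

```python
# Simplified intent router, the core pattern behind weather/news-style skills.
# The dict shapes below are illustrative, not a real platform's request schema.
def handle_request(request: dict) -> dict:
    intent = request.get("intent")
    slots = request.get("slots", {})
    if intent == "GetWeather":
        city = slots.get("city", "your location")
        # A real skill would call a weather API here before answering.
        return {"speech": f"Here is the weather for {city}."}
    if intent == "GetNews":
        return {"speech": "Here are today's top headlines."}
    # Fallback for intents the skill does not implement.
    return {"speech": "Sorry, I can't do that yet."}

print(handle_request({"intent": "GetWeather", "slots": {"city": "Berlin"}}))
```

The hard, ML-heavy part, turning raw audio into a reliable intent and slots, is handled by the platform, which is why simple skills can stay this small.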
JAXenter: Where is the journey headed? Let’s look at this from the customer’s perspective. Will we be communicating with machines via voice or text input in the future?
Tina Nord & Kathleen Jaedtke: For us, as for many other experts, it is clear that the near future is “voice first”. Very soon we will be talking not only to our smartphones, but also to the fridge or the washing machine. Our environment will become our dialogue partner, whether we are at home or, for example, at the train station.
We will touch fewer things and navigate with gestures or speech instead. Language barriers will disappear thanks to real-time translation. In addition, we can already observe that machine vision is increasingly being combined with language functions; the relaunch of Google Glass and voice-activated selfie filters in Snapchat Lenses, for example, speak for themselves. So the future is voice and visual first.