Interview with Christoph Henkelmann

“BERT is a system that can be tuned to do practically all tasks in NLP”

Maika Möbus

We interviewed ML Conference speaker Christoph Henkelmann in Berlin. The natural language processing expert shared some insights on Google’s model BERT, OpenAI’s recently fully released model GPT-2, and what the future may hold for NLP.

What sets Google’s natural language processing (NLP) model BERT apart from other language models? How can a custom version be implemented, and what is the so-called ImageNet moment?

ML Conference speaker Christoph Henkelmann (DIVISIO) answered our questions regarding these topics and shared his opinion on OpenAI’s controversial choice to initially withhold the full-parameter model of GPT-2.


BERT is a system that can be tuned to do practically all tasks in NLP. It’s very versatile but also really powerful.

Christoph Henkelmann holds a degree in Computer Science from the University of Bonn. He is currently working at DIVISIO, an AI company from Cologne, where he is CTO and co-founder. At DIVISIO, he combines practical knowledge from two decades of server and mobile development with proven AI and ML technology. In his spare time he grows cacti, practices the piano, and plays video games.
Maika Möbus
Maika Möbus has been an editor for Software & Support Media since January 2019. She studied Sociology at Goethe University Frankfurt and Johannes Gutenberg University Mainz.
