Siri, compile my code: Programming with voice control and the future of coding
It’s been one wild ride. The history of programming has come so far, and continues to make enormous leaps. As we look at how modern technology can help people code by voice, we wonder: What’s next?
The future of coding is more than just new programming languages, frameworks, and libraries. It’s also how we code. When we look back at some of the history of coding, it’s incredible how far programming has come. We’ve gone from rooms as large as warehouses storing computers, to chunky CRT monitors, to sleek laptops that fit in our messenger bags. Beyond that, we’ve also seen the rise of open source, the age of the Internet allowing for information to spread globally, and programming becoming more and more accessible for people to learn.
What new advances are currently happening and what’s on the horizon?
The now: Coding with voice control
Voice control has gone from a quirky feature in its rough early stages to a full-fledged tool that ships on many devices. From Siri, to our cars, to the Amazon Echo, many of us use voice commands without a second thought. (Even birds are using voice control to make shopping lists. Alexa: order strawberries and a Raspberry Pi.) Most of us have had sore wrists after long days at the computer and tried to combat the pain with stretching exercises and ergonomic keyboards. For people with disabilities, chronic pain, or injuries, voice control can be life-changing, offering better accessibility and ease of use.
But what about coding? How do you code with voice control?
Vocola is a voice command language with a simple syntax for dictating code. According to its FAQ, many programmers who are paralyzed from the neck down have learned to program using only their voices. If you’re wondering how programmers pronounce unpronounceable commands, they do it by defining new vocabulary or simply spelling the commands out. They can also copy visible symbols and repeat them, or choose from a list.
Aenea is an open-source project that allows a Python-based voice macro system called Dragonfly to send commands to another computer. It runs on Linux and lets the user dictate prose and commands via emulated keystrokes using a keystroke capture client.
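To give a feel for the approach, at its core a voice macro system is a table mapping recognized phrases to the keystrokes they should emulate. The sketch below is purely illustrative, not Aenea’s or Dragonfly’s actual API; the command names and the `send_keys` stand-in are hypothetical:

```python
# Illustrative sketch of a Dragonfly-style voice macro table: each
# recognized phrase maps to the keystrokes it should emulate.
# All command names and helpers here are hypothetical.

COMMANDS = {
    "save file": ["ctrl+s"],
    "new line": ["enter"],
    "type def": list("def "),  # dictate literal characters
}

def send_keys(keys, sink):
    """Stand-in for a keystroke-capture client: just records key presses."""
    sink.extend(keys)

def handle_utterance(phrase, sink):
    """Look up a recognized phrase and emit its bound keystrokes."""
    if phrase not in COMMANDS:
        raise KeyError(f"no command bound to {phrase!r}")
    send_keys(COMMANDS[phrase], sink)

typed = []
handle_utterance("type def", typed)
handle_utterance("save file", typed)
print(typed)  # ['d', 'e', 'f', ' ', 'ctrl+s']
```

In a real setup the dictionary would be a grammar registered with a speech engine, and `send_keys` would inject events into the target machine’s input stream.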
VoiceCode was born after programmer Ben Meyer injured his hands through constant keyboard use; he says he could barely hold a fork, let alone program. The tool allows coding in any programming language, using chained and nested commands to build complex actions.
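As a rough sketch of what chained commands might look like (the command names below are invented for illustration and are not VoiceCode’s actual vocabulary), one utterance can pick a formatter and feed it the dictated words that follow:

```python
# Hypothetical sketch of chained voice commands: tiny formatter
# functions compose with dictated words so a single utterance
# produces a complex edit. Not VoiceCode's real command set.

def camel(words):
    """e.g. ['parse', 'json'] -> 'parseJson'"""
    return words[0] + "".join(w.capitalize() for w in words[1:])

def snake(words):
    """e.g. ['max', 'retries'] -> 'max_retries'"""
    return "_".join(words)

FORMATTERS = {"camel": camel, "snake": snake}

def run_utterance(utterance):
    """The first token selects the formatter; the rest are dictated words."""
    head, *words = utterance.split()
    return FORMATTERS[head](words)

print(run_utterance("camel parse json response"))  # parseJsonResponse
print(run_utterance("snake max retry count"))      # max_retry_count
```

Nesting works the same way in principle: the output of one command becomes the input of the next, so a single breath can produce a fully formed statement.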
A recent article titled “Speaking in code: how to program by voice” discusses how Harold Pimentel learned to code via voice control. The article delves into some of the technology’s current shortcomings, such as how poorly voice recognition handles accents and, in particular, women’s voices, and how difficult it is to speak for extended periods without tiring the throat. As the technology matures, so will solutions to these problems.
The future: Coding with machine learning
Let’s take a moment to get hypothetical and look even further into the future. Machine learning has recently done impressive things that not long ago we would have thought impossible. Between searching for exoplanets, facial recognition, retail recommendations, self-driving cars, and data security, there isn’t a single sector that hasn’t been disrupted by its advances. As always, we ask: how will machine learning affect the future of programming?
Last December, we spoke to Jay Jay Billings, a research scientist at Oak Ridge National Laboratory, about a paper titled “Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing?”
The basic concept is that in the not-so-distant future, the majority of coding won’t be done by humans at all. Billings predicts that while humans will still handle complex code, “everyday” code will be machine-generated: machine learning will auto-generate it without human input. The paper argues that present trends in artificial intelligence and programming point to machine-generated code (MGC) becoming commonplace by 2040.
How theoretical is this idea? And is it more of a utopian Star Trek concept or an eerie Black Mirror future?