Catching up with the Keynoters
Road to JAX London - A chat with JClarity's Martijn Verburg and Kirk Pepperdine - Part 2
JAX: Let's move on to JClarity. How did that come about and the goals behind setting that up?
MV: It's a kind of a convoluted story. There are three of us - Kirk, myself and Ben Evans. We've known each other for quite some time from the conference circuit and mailing lists, talking about the JVM and performance and also fun tech stuff. A couple of years ago, Ben and I were working on a book together, and we decided that was going quite well and we wanted to start a company together.
We kind of bandied around some ideas but nothing was sticking - nothing that we felt would be an industry-changing or industry-significant idea. At that time, Kirk had kindly invited us to teach his performance tuning course. After teaching that a couple of times, a lightbulb went off in our heads, and we thought: Kirk has actually done this for so long and has got this brilliant methodology which, despite being slightly experimental, kind of...
KP: Kind of works?
MV: You can turn it into software. We approached Kirk straight away and said "Hey, want to turn this into software?" and he turned around and said "I've been waiting to do this for 10 years, why didn't you guys ask me earlier?" Does that about sum it up, Kirk?
KP: That about sums it up.
JAX: What is the raison d'être of JClarity and the company moving forward?
KP: Really, in my mind, it's about disrupting the performance space. It's not a black art, or magic. It's not unpredictable - there's a lot of predictability here. There's a lot we can do to further, for lack of a better word, the science of performance tuning, so that we can tool a lot of this stuff and basically "professionalise" the space.
MV: Turn it into an empirical science - we believe that it is, and we've proven that it is. Kirk has spent the past 15 years teaching and putting this stuff into practice, and Ben and I have been doing it for quite some time too. There's actually a small, loose community of us who have always had this feeling but never really banded together. Now it's starting to happen.
JAX: Do you think a lot of the work is to dispel myths for Java developers - are some developers wary of wading into those waters for fear of making too many mistakes?
MV: Absolutely. I was just teaching this week in London and I had six very bright DevOps engineers and developers - all tech leads and senior developers. Lightbulbs were going off all the time. They'd say "But I read this blog post that said to do this, or my admin manual told me to do that, or my old mentor told me to do this." They had never actually sat down and just measured things, like any normal scientific process. Once they started doing that, with the experience and intelligence they naturally had, they were up and flying in a couple of days. And they were all really excited to go back and apply it in their day jobs.
KP: We don't really teach anybody anything new - it's about refocusing how they think about problems. That's the most useful thing they come out of the course with. Once you get the right measurement, it gives you visibility into what's going on; if you don't have that visibility, you don't have the right measurement. You just need to learn how to get it.
JAX: I'm guessing some developers don't have that vision already available to them?
KP: I've had conversations with a number of departments at universities that were asking "How do we teach this? Because no one teaches it." Performance tuning is something that isn't taught. It's just expected, because we're in this industry where everything is blazing fast. We're supposed to know it naturally, and what I find is that it's not a skill developers come out of school with, so when they actually have to do it on the job, they muddle through.
Some people will be naturally good at it and other people will be less good at a particular aspect. They'll run into roadblocks. People come up with their own ways of doing things. What we're trying to do is get past these naive approaches and say: here are some best practices, filtered over time to figure out which ones work. We toss out the things that don't, and even our process has to adapt to changes in hardware and the technology stack. Because when things do change, we're going to find there are modifications we might have to make to the process in order to keep up.
JAX: There seems to be a lot about the cloud on your website, right?
MV: The reason we're aiming for the cloud - actually, it's a multitude of reasons. One of the big ones is that we've seen the trend for a while now that Java and the JVM are going to dominate on the cloud going forward. It's really the only platform that can handle the scalability requirements. However, there's still this element of Java performance tuning being a black box, and what we really hope to do is give illumination, clarity and light to people who have to administer hundreds or thousands of VMs. That'll hopefully make the entire cloud story a bit more stable and better for everyone, really. We want to become the heart and soul of these cloud platforms.
KP: If I could be so brazen as to say, on the scalability issue - there isn't a product out there that will scale into the cloud the way we expect cloud deployments to scale.
That's the first problem. The second is that all the tools require an extraordinary amount of knowledge of all the different bits of the system. If you combine those two things, it really means you've got a lot of problems with the current toolset. We think it can be done better. We certainly want to be scalable ourselves, so we understand the problems that cause things not to scale, and we're trying to avoid them up front. People shouldn't have to be rocket scientists to work the tooling.
JAX: With low-impact?
MV: Absolutely. We've sat on the other side of the fence, working at enterprises throughout our careers. We've had vendors come in with these performance tools making claims like "this will have a tiny footprint on your application", and every single time, the performance tool becomes the performance bottleneck. It's really frustrating, so we've flipped it on its head and said this is our first requirement. Not getting at the data, not looking at pretty graphs - our first requirement is to make sure that whatever we build genuinely has very little to no impact when it runs.
KP: I like to say that we're a guest on our customer's hardware, and we want to behave that way. Low footprint, low impact. A lot of tools claim to have it, but when you actually look at what they're doing, you find they achieve moderate scalability in a lot of cases, and some scale relatively well. But when you get into larger deployments, they start falling over.