The Promise of the Cloud
Judah Johns, Chief Evangelist at Jelastic, discusses the evolution of cloud computing and why, after years of hype, he believes 2012 is the year of the PaaS
Over the last few years, the term Cloud has become so prevalent that one would assume everyone knows what it means and why they are using it. The truth is far from that. If anything, we are still struggling over the definition of the Cloud and what it means for the end-user, the SMB and the enterprise. We hear constant talk about how the Cloud is saving money for this company or that government – in fact, just last week, Forbes magazine ran an article claiming that Cloud computing could save the U.S. Government $12 billion annually! The truth is that most companies aren’t saving any money with the Cloud: they don’t know how to.
In order to see how the Cloud could actually save a company or an individual money, we need to understand a little about how Cloud computing has evolved. There are three major steps in the evolution of computing that we must mention, at least in passing, if we are going to talk about Cloud computing: grid/utility computing, application service providers (ASPs) and Software-as-a-Service (SaaS).
Grid, or as it is sometimes referred to, utility computing, came along with the commercial Internet – right around the time the World Wide Web (WWW) was made available to the public. Grid computing came into being to handle the explosion in usage that accompanied the WWW. Basically, the idea was to make computing as easy to use as plugging into a power grid, allowing people to work together towards common goals and projects. In essence, utility computing allowed people to “rent” computing resources, such as Internet access and console access, and in doing so reduced the cost and increased the availability of computing.
The first wave of internet-enabled, or web-centric, applications arrived in the late 1990s. It was a response to the rising cost of using software. An ASP would license commercial software applications to multiple customers, helping reduce cost by allowing companies to outsource some of their IT needs, such as software and servers. It was much like renting a server with an application installed on it – in other words, pretty limited.
Though SaaS is often lumped in with ASP, the truth is that it was the next step. The main difference between SaaS and ASP is that ASP is typically a single-instance, single-customer legacy software application. One of the main advantages of SaaS over ASP is its built-in multi-tenancy and scalability. Though both models are hosted, the first is flawed: it has no built-in model for scalability, is far too customized, is generally stuck with a single revenue model, and has no aggregators for data. In essence, ASP is legacy, while SaaS is built for growth, scalability, multi-revenue models and advanced metering and billing.
Dedicated Hosting: To be more specific about Cloud computing, we need to look at the evolution of the hosting industry, because that is where the Cloud has blossomed and where the idea of decentralizing data – moving it off hardware that you own to an outside location – took hold. If we generalize a bit, we can say that hosting originally started with owning your own server and making it available for others to access. From there, that server was moved out of the home/office, and you rented one from a web hosting provider instead. In 1991, when the World Wide Web was released to the public, you either owned your own server or you rented one from someone like GeoCities or Angelfire. That was called dedicated hosting: you owned or rented a server and it was yours to use. From there, managed hosting came into play. Managed hosting took dedicated hosting to another level by offering managed services – system administration, updates, etc.
Shared Hosting: Not long after, shared hosting came into the picture. It had a number of benefits. It saved the hoster money because he could put more users on the same box, reducing hardware costs. It also saved the end-user money because he paid for a much smaller portion of the server than before, when he had to rent the whole box. It did have some downsides. If the server crashed, so did the sites of every user on it. It also limited the resources each user had, making it slow and prone to crashing. And it reduced the level of service, partly because the client knew he was paying for a budget product.
VPS: From shared hosting came the Virtual Private Server (VPS). It tried to strike a fine balance between shared hosting and dedicated server hosting. A VPS is one of many virtual machines into which a single hardware node, or server, is split. This is made possible by virtualization technology, like Parallels Virtuozzo or Xen. These technologies allow a single server to be divided into many smaller dedicated machines, each with its own resources but all sharing the same core hardware. This was great because it allowed one to have a virtual dedicated machine while keeping costs in the shared hosting range. There were a few issues, as with shared hosting: if the server went down, so did all the VPSs running on it. But for anyone who doesn’t need super high-end performance, the cost and performance benefits still greatly outweighed those of shared hosting and dedicated hosting.
Cloud Hosting: Building upon the VPS, Cloud hosting took the idea of shared, yet dedicated, resources and added higher performance, redundancy and scalability. The higher performance came from advances in hardware designed specifically for Cloud hosting. Two of the most highly touted benefits of Cloud hosting are its scalability and, maybe even more importantly, its redundancy. The ability to scale, or dynamically add resources to a VPS, was one of the first “Cloud-like” things that end-users of hosting were able to see. If their website needed to scale up in resources to meet traffic demand, it could – automatically. This feature has allowed companies to grow with the demand for their services without having to stop their servers and add more boxes. The other great benefit of Cloud hosting is its redundancy, or data safety. In a typical Cloud hosting setup, a Storage Area Network (SAN) provides storage. A good SAN typically has a number of drives that are replicated in real time, putting your data in two different locations at once. Should one side of the array fail, the other is still running and your data remains accessible.
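The real-time mirroring a SAN performs can be pictured with a toy sketch. To be clear, this is an illustrative model only, not any real SAN’s interface; the class and method names are made up for the example. The point is simply that every write lands in two replicas at once, so a read still succeeds when one side of the array is down.

```python
# Toy model of mirrored storage (illustrative only, not a real SAN API):
# every write is replicated to two independent stores in real time.

class MirroredStore:
    """Keeps every block of data in two replicas at once."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}

    def write(self, key, value):
        # Replicate in real time: both copies are updated together.
        self.primary[key] = value
        self.secondary[key] = value

    def read(self, key, primary_up=True):
        # If the primary side fails, serve the data from the replica.
        side = self.primary if primary_up else self.secondary
        return side[key]
```

After `store.write("site.html", data)`, a call to `store.read("site.html", primary_up=False)` still returns the data, which is the essence of the redundancy described above.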
Geo Hosting: One of the latest steps in hosting is Geo hosting. It takes performance and redundancy to a whole new level. By making your site available in multiple locations, not unlike a CDN, it serves your site to each visitor from the location nearest them. Because distance matters when serving sites, putting your site’s physical location closer to your user makes it many times faster. Redundancy is also greatly improved by the fact that your site is served from more than one location at the same time: if a location goes down, your site is still accessible.
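The routing decision behind Geo hosting can be sketched in a few lines. Everything here is hypothetical: the location table and the use of straight-line distance as a stand-in for network proximity are assumptions made for illustration, not how any particular provider routes traffic.

```python
# Hypothetical sketch of Geo hosting's core decision: serve each visitor
# from the nearest live location. Straight-line distance stands in for
# real network distance here, purely for illustration.

def nearest_location(visitor, locations):
    """Pick the serving location closest to the visitor.

    visitor is a (latitude, longitude) pair; locations maps a location
    name to its coordinates, or to None if that location is down.
    """
    def dist2(item):
        lat, lon = item[1]
        return (lat - visitor[0]) ** 2 + (lon - visitor[1]) ** 2

    # Redundancy in action: a downed location is simply skipped,
    # and the site is served from the next nearest one.
    live = [item for item in locations.items() if item[1] is not None]
    return min(live, key=dist2)[0]
```

For a visitor in Paris, with locations in Virginia and Dublin, this picks Dublin; if Dublin were marked down (`None`), the same visitor would transparently fall back to Virginia.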
Though the Cloud still lacks a settled definition, the features continually added to it are shaping one. Those that get used gain resources and users; those that don’t die off. So, in the end, the definition of the Cloud is more along the lines of performance, scalability, redundancy and ease of use. I add ease of use because the truth is that people want to worry less and less about how things run and just want them to work. If you ask a young kid today what the Cloud is, he will probably tell you what it provides or what he can do with it rather than what it actually is. And, in the end, that is the real definition.
The natural end of the Cloud evolution
Most of the computing evolution I described above, with the exception of SaaS, would be classified under Infrastructure-as-a-Service (IaaS), where infrastructure is rented to the end-user. The next big step in the evolution of Cloud computing falls right in line with what I said a kid would say the Cloud is: it just works.
If the Cloud lets you be hardware, system and backend agnostic, then it would seem that Platform-as-a-Service (PaaS) is the fulfilment of the promise. You use it and you don’t have to worry about how it happens – it just does. Probably the best everyday example of this spirit is Gmail: you don’t really know how it does what it does, but it makes it easy to send and receive email. That is the promise of the PaaS.
If you listened to vendors and research firms, you would think that last year, 2011, was the year of the PaaS. Once Gartner proclaimed “2011: The year of the PaaS,” everyone from Forbes Magazine to ZDNet was saying the same thing; a quick internet search proves it. The truth, though, is that 2011 was just the start. If anything, 2012 is quickly showing itself to be the year that the PaaS comes into its own. From Heroku to Hadoop and from Google Apps to Jelastic, 2012 has been, and is increasingly becoming, the PaaS banner year.
What is the PaaS?
The PaaS blurs the line between hardware and software. According to Forrester Research, the PaaS represents the on-ramp to the Cloud in that it provides the linkage between application platforms and the underlying infrastructure. More simply, it lets you use your program, platform or service without having to worry about setting up the backend or customizing the platform itself. It just works. But how does that translate when we talk about Java?
In the last few years, and especially the last twelve months, we have seen a number of PaaS offerings emerge to meet the needs of developers who need to deploy Java applications to the Cloud. Google App Engine (GAE) was the first on the scene, and as such, it enjoyed huge dominance; but somehow, Google let the platform stagnate. They then followed up with a few major changes that really hurt their users, like increasing the cost of the service without any warning. This was probably one of their biggest mistakes, because GAE has a custom application server that requires applications to be recoded before they can be deployed on it – lock-in that made the price hike all the more painful.
Not long after, Amazon Web Services released its Elastic Beanstalk (AWS-EB) PaaS. This, too, was great initially, but it did not deliver on the promise of the PaaS. Both GAE and AWS-EB had rather long learning curves and required commitment to the platform due to non-standard software stacks. Amazon did have a number of benefits, like access to the AWS catalogue of services, but again, having to be a system administrator to set up servers and instances doesn’t fall in line with what the PaaS is supposed to do: make it easier to deploy your applications, not create new roadblocks.
Other big players, like Microsoft’s Azure and Salesforce’s Heroku, have also come on the scene to try and meet the huge demand for platforms that facilitate the deployment of Java applications to the Cloud. But none of these platforms has delivered on the promise of the PaaS: taking advantage of what the Cloud has to offer without creating more work for the end-user.
Delivering on the promise of the PaaS
Traditionally, if a developer wanted to deploy his application, he had to rent servers. After that, he had to set up database servers and application servers. This meant that after he had already gone through the trouble of developing an application, he then had to sys-admin it into existence by setting up the backend and tuning it until it worked. This was hugely time-consuming and, if anything, disheartening.
When the first wave of PaaS for Java hosting came along, they did make things somewhat easier. They auto-provisioned VMs and came with images of certain setups preinstalled, but they were still severely limiting. Developers still had to be sys admins and still had to learn the providers’ custom software stacks in order to upload their applications – to say nothing of the extra work it took to deploy them.
Upload. Deploy. And enjoy.
Last year, we released the Jelastic PaaS for Java hosting in beta. We had run into a lot of the same issues that other developers had. We were developing a rather large project, and when we got to the point of deploying it, we realized that none of the available choices would work for us. They just didn’t meet our needs, and the idea of recoding our whole project to work with one platform or another didn’t appeal to us – much less recoding it and then realizing that we needed to use another platform, requiring yet more recoding.
That’s when we pivoted. We went from working on our project to deciding to build the world’s first PaaS for Java hosting that would actually facilitate deploying applications to the Cloud. Our platform would have true auto-scaling, not just horizontal but vertical as well. It would also use standard, open-source software stacks. No more recoding. No lock-in. And most importantly, it would make Java hosting easy.
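The difference between the two scaling modes can be sketched as a tiny decision loop. To be clear, this is not Jelastic’s actual algorithm; the thresholds, node capacities and the grow-vertically-first policy are assumptions invented for the example. Vertical scaling resizes an existing node; horizontal scaling adds or removes nodes.

```python
# Illustrative autoscaler (not any vendor's real algorithm): grow a node
# vertically until it hits a ceiling, then scale horizontally by adding
# another node; shrink back in when load is low.

MAX_NODE_CAPACITY = 8  # assumed ceiling on one node's resource units

def autoscale(nodes, load):
    """Return new node sizes for the given average load (0.0 to 1.0).

    nodes is a list of node sizes in resource units.
    """
    nodes = list(nodes)
    if load > 0.8:                        # overloaded
        if nodes[-1] < MAX_NODE_CAPACITY:
            nodes[-1] *= 2                # vertical: double the node's size
        else:
            nodes.append(1)               # horizontal: add a fresh node
    elif load < 0.2 and len(nodes) > 1:
        nodes.pop()                       # scale back in when idle
    return nodes
```

Under this toy policy, a busy two-unit node first grows to four units, then to eight; only once it is maxed out does a second node appear, which is the "not just horizontal, but vertical as well" idea in miniature.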
2012: the year of the PaaS
As more and more developers become aware of the benefits of the PaaS, the move away from the traditional model towards the new, web-centric, browser-based model will accelerate. That trend has already started: developers want to be able to focus on what’s important to them – the functionality of their applications – and not have to worry about the underlying infrastructure. Web hosting providers can take advantage of this by focusing on what they are good at – providing and managing computing resources, infrastructure, servers, hosting and support – and then offering a PaaS that users want.
2012 is the year of the PaaS. The Cloud has allowed it to happen. As more and more end users and service providers jump on the bandwagon, the opportunity to take advantage of the developers’ need for the PaaS will only grow.