Software: "What else can I eat?"

Why everything is code

Coman Hamilton

Ops, hardware, infrastructure – software is still eating its way into various worlds that were once out of its reach. Meanwhile DevOps is eating up the developer’s workplace. Perforce’s Mark Warren speaks to us about the impact of everything becoming code and how DevOps is changing responsibility, security and collaboration.

“Everything is code.”
– Kohsuke Kawaguchi, founder of Jenkins

JAXenter: What do you make of Kohsuke Kawaguchi's recent claim that everything is becoming code?

Mark Warren, Perforce: I think that's spot on. And I think we're seeing that more and more. Infrastructure as code is one subset of that. I like the idea of everything as code, because more and more there's this virtualisation of what's going on in hardware. Kohsuke gave a good example of programmable chips, FPGAs – these generic chips that basically have their functionality burned into them by way of software. And there are a lot of tools for simulating in software what could become hardware, and validating that behaviour. There are even tools these days that try to work out what's most cost-effective: should you burn silicon or put it into applications?

That's all being done in software before it gets anywhere near burning a chip, because that's a pretty expensive process. If you look at network configurations: routers are being configured through software, and more and more networking is being virtualised. Even servers are being virtualised through VMware or VirtualBox. And we have tools like Docker and Vagrant that are defining these containers for your application, effectively in text files.

So he's absolutely right. I think we will see much more of that virtualisation. It brings a number of opportunities. Source files and text files are reasonably easy to manage compared to very large binary files. You can iterate and change things and wind them back. You can even branch and merge source files relatively easily.

The virtualisation of hardware

But where it’s going to get relatively tricky for some tools is that these products are a collection of components. A router contains a motherboard with a number of different chips on there. And each of those chips’ behaviours may be controlled in software, but they’re all coming from different vendors. And they’ll all have different variants for different customers. So this is getting to be a more complicated management task for these code-defined components.

Kohsuke was making a good point about aggregating steps of the workflow and just saying "there's a library out there that's going to do… well, don't worry about how that works. Just include that library or piece of scripting into your workflow and it'll all be ok." The trick, though, is spotting that there's actually a dependency. And when that sub-assembly changes, if you're a consumer of that assembly (or that 'workflow snippet'), does that affect you?
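A minimal way to spot that kind of hidden dependency is to record a fingerprint of the shared snippet you consume and fail fast when it changes upstream. The sketch below is only an illustration of that idea – the file names and layout are invented, not anything Jenkins or Perforce prescribe:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative only: detect that a shared "workflow snippet" we depend on has changed.
// Paths and file names are hypothetical placeholders.
public class SnippetWatch {
    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(Files.readAllBytes(file)));
    }

    public static void main(String[] args) throws Exception {
        Path snippet = Path.of("pipeline/deploy-snippet.sh");     // the sub-assembly we include
        Path recorded = Path.of("pipeline/deploy-snippet.sha256"); // fingerprint from the last build

        String current = sha256(snippet);
        if (Files.exists(recorded) && !current.equals(Files.readString(recorded).trim())) {
            System.err.println("Shared snippet changed since last build - review before consuming it.");
            System.exit(1);
        }
        Files.writeString(recorded, current); // accept the new version once it has been reviewed
    }
}
```

A real pipeline would more likely lean on proper dependency pinning, but the principle is the same: make the dependency explicit so a change in the sub-assembly can't slip past unnoticed.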

So there's a configuration management problem there. Well actually, as software developers, we've been managing that kind of configuration problem for a long time. So maybe those same skills will be applicable to everything being code and these software-defined worlds in the future. So there are a lot of lessons to learn.

Can you tell us what’s changing for security in a DevOps setting?

I think there's a really interesting trend coming: people are starting to understand that there is a role for security in the DevOps pipeline, and DevOps teams have started to be asked what they're doing about security. There are two groups coming together. On one side are the DevOps people who have been optimising their CI and CD pipelines and getting Dev and Ops teams together, but who don't know very much about security. The only thing they fear is that "the security guys are going to slow us all down, because they're going to go away and do penetration testing for three weeks, or something, before we can roll out." And that's going to break the CD pipeline.

On the security side, they're starting to understand the value of what software teams have been building, and realising that it needs to be protected. But they don't necessarily understand what a DevOps pipeline looks like. So we're starting to see DevOps teams and security teams coming together. We're starting to hear terms like DevSec, which is rather painful. Or sometimes 'Rugged DevOps', which is a little bit broader: it talks about the security of a system as well as its resilience – have you got high availability built in? And that's involving a change to the development process. We've got to get those security stories into the backlog for each sprint planning session and give them the right prioritisation against all the user stories that marketing or customers ask for. So that's an interesting dynamic that some teams are starting to face up to now.

And in terms of CD and CI, is security changing in that area as well?

I think security has started to come in there. We've got some great tools, like Jenkins, that are automating the workflow. Then we've got tools like Docker and Vagrant that help package up these applications, and Puppet and Chef to roll them out. Now there are tools that are doing more complete automated testing as part of that – tools that are deliberately trying to break the app. They're doing SQL injection, they're trying to malform the URLs on a website.
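As a rough sketch of that kind of deliberately hostile automated check – the endpoint, payload and failure heuristics below are invented for illustration, and dedicated scanners go far deeper – a CI step might fire an injection-style input at the application and fail the build if the response looks unsafe:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of a "deliberately try to break it" check in a CI pipeline.
// The endpoint and payload are hypothetical; real security scanners go much deeper.
public class InjectionProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String payload = "' OR '1'='1";  // classic SQL-injection-style input
        String url = "https://test.example.com/search?q="
                + URLEncoder.encode(payload, StandardCharsets.UTF_8);

        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // A 5xx response, or a raw database error leaking into the body,
        // suggests the hostile input was not handled safely.
        boolean suspicious = response.statusCode() >= 500
                || response.body().toLowerCase().contains("sql syntax");
        if (suspicious) {
            System.err.println("Potentially unsafe handling of hostile input - fail the build.");
            System.exit(1);
        }
        System.out.println("Probe passed: hostile input handled without obvious errors.");
    }
}
```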

SEE ALSO: DevOps is changing the meaning of the word ‘release’

Some of those still need manual intervention. But the automated tools are getting better, and they can be plugged into that pipeline as part of the CI testing. There's always that balance of depth of test versus performance. You know, if you find that your testing takes 26 hours and you're trying to do a CI build on every check-in, which you're doing every five minutes, those gears are not running at the same speed – something's going to break. That's where skill comes in: having the right level of testing for the amount of risk that you're trying to manage.
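One common way of striking that balance – offered here only as an illustration, not something Warren prescribes – is to tag tests by depth, so the per-check-in build runs the fast ones and the deep, slow suite runs nightly or before a release. With JUnit 5 that might look roughly like this:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Illustrative: tag tests by depth so each CI stage can choose how much risk coverage it buys.
class CheckoutTests {

    @Tag("fast")   // runs on every check-in
    @Test
    void totalsAreComputedCorrectly() {
        assertEquals(5, new Basket().add(2, 3).total());
    }

    @Tag("deep")   // runs nightly, or before a release
    @Test
    void survivesMalformedInput() {
        // longer-running fuzzing or penetration-style checks would live here
        assertEquals(0, new Basket().total());
    }

    // Hypothetical helper, included only to keep the example self-contained.
    static class Basket {
        private int total;
        Basket add(int a, int b) { total += a + b; return this; }
        int total() { return total; }
    }
}
```

The build tool can then filter on those tags per pipeline stage, for example by including only the 'fast' group on check-in builds and the full set overnight.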

One of the words we're hearing quite a lot in the DevOps context is 'collaboration'. So what is changing at the moment in terms of how developers communicate with each other and, beyond that, with the rest of the company?

You see lots of things happening that seem to make things better: lots of instant messaging and chat apps, and people are really excited about Slack, which looks a lot like IRC or something, if you can remember that. On that level things are getting better, except there are now many more channels of collaboration, whether it's Twitter or Slack or whatever.

The other thing that's been making collaboration harder is that products are getting more complicated. So imagine a modern application, for instance a banking application. Banking applications are no longer greenscreen apps connected to a mainframe. Those applications are running in a web browser, they're running on a mobile device… so these applications include not just source code (which could be Objective-C, C# or Java, or whatever). They've probably got graphics, JPGs and GIFs, to give a nice user-friendly experience. They may have video in there for tutorials. They'll have documentation and guides. There may be compliance documentation.

Unifying different content and different contributors

So you've got all of those different kinds of content that make up an application. And you've also got all these different kinds of contributors. You haven't just got the programmers, the coders cutting this beautiful Java or JS. You've got artists doing wonderful creative things in Photoshop. You've got product designers using AutoCAD and 3D modelling tools. You've got writers using Word or text documents. And they're all using different repositories to store their work.

The code is maybe in a version management tool like Git or Perforce. Artists may be using Dropbox, and documenters may be using memory sticks. As somebody who's responsible for building that product, you have to make sure the right bits from all the right contributors actually join up to make a release, and then put them into the automatic build pipeline through Jenkins or something. It's really, really hard, if not impossible, to find the right version of all those different resources. Oh, and by the way, if you're responsible for security for all those things, how do you get your head around that?
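As a purely hypothetical illustration of what "joining up the right bits" can mean in practice, a release step might read a manifest that pins every contribution – code, art, video, docs – to an explicit source and revision, and refuse to build if anything is unpinned. The manifest format and file names here are invented for the example:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative only: fail the build unless every asset type is pinned to an explicit revision.
// The manifest format ("name=source@revision") and file name are hypothetical.
public class ReleaseManifestCheck {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("release-manifest.txt"));
        boolean ok = true;
        for (String line : lines) {
            if (line.isBlank() || line.startsWith("#")) continue;
            // e.g. "mobile-ui=repo/main@4123", "tutorial-video=assets/video@87"
            if (!line.matches("[\\w-]+=\\S+@\\w+")) {
                System.err.println("Unpinned or malformed entry: " + line);
                ok = false;
            }
        }
        if (!ok) System.exit(1);
        System.out.println("All contributions pinned - safe to hand to the build pipeline.");
    }
}
```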

So the answer to that challenge is that you really need a single repository that everybody can use. And this is where Perforce fits in, because we can hold any kind of digital asset. We don't have the limitations of tools like Git or Dropbox, which don't understand how to manage the revisions of all those kinds of files over a period of time. You can do all of that inside Perforce. You've got that single repository, and you can have your tools of choice around it, whether it's a Photoshop or an Eclipse plugin. So contributors can carry on using the tools and workflows they like. But you have this one place where you can all see my revision of the file going along with your revision of the file, put them together, and you've got a build that should work through the pipeline.

Collaboration tools are getting more complicated and fragmented, even though there are some really cool ones out there. But at the heart of it, you've got to have all of your work products together. Collaborating across just two rooms in the same building is hard enough – if you have multiple teams around the globe, you also need a single repository that's accessible with high performance everywhere. So that's another thing that can be tricky with other tools out there.

Another issue is that auditors don't always see exactly what developers are doing. Is that something that DevOps is helping to solve?

I think we're starting to get there. In the conversations I've been having with security officers… they don't really know how important the software is in the first place. The software or the chip or the product design could be the most valuable IP in the business. It might be on a par with, or ahead of, the people that they have. And if it is compromised, or it gets out of the building, or it gets lost, that could be a significant business risk.

So security auditors are starting to look at what is going on in the development teams. And they are starting to understand that some of the tools and processes that developers have been using aren’t as secure as they need to be going forward. So I think we’re at the start of that, and we’re starting to ask those questions.

So it would be a good idea for development managers, QA managers, compliance managers – whoever's involved in that pipeline – to be working with the security experts in their team to work out whether they have the right tools, and what they need to do to be compliant both with the company's own audit requirements and with external audits for legislative compliance.

Another thing we're noticing with the DevOps trend is more and more responsibility being passed on to the individual developer. At the same time, Perforce is also offering monitoring software to observe what programmers are doing. Is this a response to that shift in responsibility with DevOps?

Well, I think this idea of moving responsibility left in the pipeline, giving developers more responsibility, is a trend that's been going on for some time, since CI and CD. Developers now have to consider security as well, and that is a big burden. That is a difficult one. And you've got to think about security in multiple ways. One is: is the code secure? Are they building code that doesn't have obvious flaws that can be exploited, like the famous SQL injection bugs, that kind of thing. There are tools out there that can validate that – static analysis tools that can be built into the pipeline to do that kind of validation.
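To make those "obvious flaws" concrete: the classic injection bug comes from splicing user input straight into a query string, and the standard fix is to parameterise it. A minimal Java illustration – the schema and method names are invented – of the pattern static analysis tools typically flag, alongside the safer form:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative contrast between an injectable query and a parameterised one.
// The schema (users/name) is invented for the example.
public class UserLookup {

    // Vulnerable: user input is spliced straight into the SQL text.
    ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safer: the driver treats the input strictly as data, not as SQL.
    ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```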

Something that individual developers maybe won't worry about, but their managers – and certainly people outside the development team, like security officers and risk managers – will: they'll be worried about whether the right people are able to access the right files at the right time, and that nobody is doing anything very dangerous. That's where new monitoring tools like Helix threat detection come along: they learn what normal behaviour looks like for a set of developers and then flag it when abnormal things start happening, like files being taken from projects that people don't normally work on, or people taking more files than they really need. That kind of thing opens up risks such as somebody leaving a laptop behind somewhere with all their files on it, or somebody's credentials being compromised.
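As a toy sketch of that idea – this is not how Helix threat detection actually works, and the thresholds and data below are invented – behaviour-based monitoring can start as simply as comparing each developer's activity against their own baseline and flagging large deviations:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of behaviour-based monitoring: flag developers whose file-access volume
// jumps far above their own historical baseline. Thresholds and data are invented.
public class AccessAnomalyCheck {

    // Historical average of files synced per day, keyed by user (hypothetical data source).
    private final Map<String, Double> baseline = new HashMap<>();

    public AccessAnomalyCheck(Map<String, Double> baseline) {
        this.baseline.putAll(baseline);
    }

    /** Returns users whose access count today exceeds, say, 5x their usual volume. */
    public List<String> flagAnomalies(Map<String, Integer> accessesToday) {
        return accessesToday.entrySet().stream()
                .filter(e -> e.getValue() > 5 * baseline.getOrDefault(e.getKey(), 50.0))
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        AccessAnomalyCheck check = new AccessAnomalyCheck(Map.of("alice", 40.0, "bob", 60.0));
        // bob suddenly pulls 900 files, perhaps from projects he never touches
        System.out.println(check.flagAnomalies(Map.of("alice", 35, "bob", 900))); // prints [bob]
    }
}
```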

Now the individual developer may not care too much about that, as long as it doesn’t get in their way. And that’s absolutely fine. It’s important that such monitoring tools don’t get in the way. But you’ll find that managers or the rest of the business will be more worried about that kind of security around the development process.

Author
Coman Hamilton
Coman was Editor of JAXenter.com at S&S Media Group. He has a master's degree in cultural studies and has written and edited content for numerous news, tech and culture websites and magazines, as well as several ad agencies.
