Fact from fiction

Containers and security – What are the five biggest myths?

Colin Fernandes

Gartner predicts that by 2023, 70 percent of companies will be running more than two containerised applications. Don't go into containers without knowing what is true and what is just a myth. Read about these five big misconceptions regarding container security and overcome the challenges of planning for a strong security model.

For site reliability engineers and developers, containers have proven to be a useful way to quickly package and deploy application components at scale. As these IT teams rely more and more on the portability and pervasiveness of containers to support their microservices-based architectures, they need to compose and orchestrate all of these components quickly. Kubernetes has made it easier to orchestrate the infrastructure for those applications, automating management tasks that would otherwise have to be done manually, with stand-alone scripts, or through a complex mesh of siloed tools.

Gartner has predicted that 70 percent of companies will have more than two containerised applications in place by 2023, compared with less than 20 percent today. Our own research has shown that containers are being adopted more widely, and that Kubernetes is growing in popularity, particularly for multi-cloud deployments.

However, containers do not come with a strong security model as standard. For companies that now rely on containers to run their applications, keeping those applications secure over time should be an essential part of their planning. So what are the common misconceptions here, and how can you overcome them?

Misconception 1: Containers are secure by default

This is not the case: there are no security provisions built in from the start. Containers were developed to solve an application deployment problem, and they do that job very well. However, the approach was shaped by what developers considered most important rather than by the best practices an IT security team would have put in place. If a cloud instance is insecure, then the containers running in it will be exposed as well.

As more companies adopt containers and move these apps into production, security will naturally become a more important factor for the whole IT team across the complete build, run, and operations lifecycle. Adding security to DevOps processes – otherwise known as DevSecOps – will help ensure that the whole development pipeline is more secure, from testing for initial security issues in new code through to securing the continuous integration and continuous deployment process too.

DevSecOps puts the focus on building infrastructure and applications that can securely scale over time, eclipsing older and more reactive security models. Whereas previously, developers designed a system first, then probed it for vulnerabilities and corrected them as they surfaced, DevSecOps uses data and automated processes to take a more proactive and predictive approach to security with customer experience in mind. By making security every stakeholder's responsibility, applications and processes are built to be as close to invulnerable as possible.

Misconception 2: Container images are less risky because they only exist for a short period

The Internet is big. Really big. To misquote Douglas Adams, you may think that your cloud deployment is big, but that’s just peanuts to the Internet. This sense of scale can breed a false sense of security. When each container itself is compared to the huge volume of servers, machine images and software deployments that make up the Internet, how likely is it that one container would be found and exploited?

However, these images can be discovered, and they are sometimes not as transitory as we expect. For larger applications with high transaction volumes, sustained demand means that new images are continually created and old ones are not always removed, so images can exist long enough to be found.

SEE ALSO: The four myths of shift left testing

In fact, containers may be at greater risk of being targeted over time. Bad actors use specialist search engines to detect vulnerable machines, even ones that exist only for short periods. Because attackers actively look for easier targets like this, short-lived assets such as containers deserve the same security treatment as more permanent IT infrastructure assets.

Misconception 3: Containers are being put together based on secure assets

A container is created from an image that pulls together a set of components at runtime. These images are kept in a library to be used as needed, either internally or from a public repository.

These container images should be checked for potential issues over time, as both internal and public images may not contain the latest, most secure system libraries. Open source security company Snyk found that the most popular Docker images contained at least 30 vulnerabilities on average, which can lead to security problems over time. Containers in private repositories may have vulnerabilities too if they are not updated with the latest secure components.
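Scan results are only useful if they feed back into the pipeline. As a minimal sketch, the snippet below tallies findings by severity and gates a deployment on them; the report layout assumed here (Results, Vulnerabilities, Severity) follows Trivy's JSON output, but check your own scanner's schema, and note the sample report is hand-written for illustration.

```python
from collections import Counter

def count_by_severity(report: dict) -> Counter:
    """Tally vulnerabilities by severity in a scanner's JSON report.

    The layout assumed here (Results -> Vulnerabilities -> Severity)
    follows Trivy's JSON output; check your own scanner's schema.
    """
    counts = Counter()
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return counts

def should_block_deploy(report: dict, threshold: int = 0) -> bool:
    """Gate a CI/CD stage: fail when critical/high findings exceed a threshold."""
    counts = count_by_severity(report)
    return counts["CRITICAL"] + counts["HIGH"] > threshold

# Hand-written report standing in for real scanner output.
sample = {
    "Results": [
        {"Vulnerabilities": [
            {"VulnerabilityID": "CVE-2023-0001", "Severity": "HIGH"},
            {"VulnerabilityID": "CVE-2023-0002", "Severity": "LOW"},
        ]}
    ]
}
blocked = should_block_deploy(sample)  # True: one HIGH finding
```

A check like this runs naturally as a CI step, so an image with serious findings never reaches the registry in the first place.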

Alongside this, container images can 'drift' over time from their base states. Software can be added to a container after its initial creation so that it meets a particular goal today rather than what the original image was designed for. Instead, the base images should be updated, so that any new use case is met while keeping security in mind.
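One way to make drift visible is to compare what is installed in a running container against its base image. The sketch below assumes you have already extracted package name/version maps with your package manager of choice; the package names and versions shown are illustrative.

```python
def detect_drift(base_packages: dict, running_packages: dict) -> dict:
    """Compare package versions in a running container against its base image.

    Both arguments map package name -> version. In practice you would
    gather these inside the image and the container (e.g. from your
    package manager's listing); the data here is illustrative.
    """
    added = {p: v for p, v in running_packages.items() if p not in base_packages}
    changed = {p: (base_packages[p], v)
               for p, v in running_packages.items()
               if p in base_packages and base_packages[p] != v}
    removed = {p: v for p, v in base_packages.items() if p not in running_packages}
    return {"added": added, "changed": changed, "removed": removed}

base = {"openssl": "3.0.2", "curl": "7.81.0"}
running = {"openssl": "3.0.2", "curl": "7.88.1", "netcat": "1.218"}
drift = detect_drift(base, running)  # netcat added, curl upgraded in place
```

Anything in the "added" or "changed" buckets is a candidate for folding back into the base image rather than leaving it patched into a running container.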

Misconception 4: Monitoring containers is the same as for other IT assets

Observing application logs, metrics, and other data is an essential task for performance monitoring and for diagnosing issues when faults occur. This observability is equally necessary for container-based applications, but the ways to gather this data are fundamentally different compared to more traditional IT and application designs.

Taking a cloud-native approach to designing observability can help, while it's also worth understanding how to get data out of those containers in the first place. Typically, an application creates log data by writing to a specific file in its file system that can then be read, either by a monitoring application or by a person. However, files within a container are not available outside that image and therefore can't be read as easily. This means either bind mounting those files or running a logging application inside each container.

While these approaches are possible, the sheer number of containers being created makes it much more difficult to scale up effective monitoring. Bind mounting several hundred container images from a host is not a practical approach. Similarly, applications will normally run their containers across multiple host machines too. This further complicates the data gathering process, as those container images could effectively be spawned on any host within the cluster.
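A more scalable pattern is to have applications log to stdout and let the runtime capture it; Docker's json-file log driver, for example, stores each entry as a JSON object that a host-side agent can read. Below is a minimal sketch of turning one such entry into a flat, attributable record; the container id and host name are illustrative.

```python
import json

def parse_docker_log_line(line: str, container_id: str, host: str) -> dict:
    """Flatten one entry from Docker's json-file log driver.

    The json-file driver stores each log entry as a JSON object with
    'log', 'stream' and 'time' fields; container id and host are added
    here so the record stays attributable after it leaves the machine.
    """
    entry = json.loads(line)
    return {
        "host": host,
        "container": container_id,
        "time": entry["time"],
        "stream": entry["stream"],
        "message": entry["log"].rstrip("\n"),
    }

# Illustrative log line, container id and host name.
raw = '{"log":"GET /health 200\\n","stream":"stdout","time":"2023-05-01T12:00:00Z"}'
record = parse_docker_log_line(raw, container_id="9f3b2c", host="node-1")
```

Because the agent runs once per host rather than once per container, this pattern scales with the number of machines, not with the number of containers spawned on them.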

From a security perspective, this data is essential to keeping up with how applications are running over time. This involves looking at the end-to-end processes around containers as well as integrating any security tools that should have insight into those applications. Getting this data therefore involves putting in a full data gathering process that can collect data from the containers themselves, normalise it, and make it understandable to those who need it. Once that data has been collected using tools like Prometheus, it can be understood in context, shared, and used for higher-value collaboration across development, operations, and security.
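To give a flavour of what that collected data looks like on the wire, the sketch below renders per-container gauge samples in Prometheus's text exposition format. The metric name and values are illustrative, and a real exporter would use a client library rather than hand-building strings.

```python
def to_prometheus_lines(metric: str, help_text: str, samples: dict) -> str:
    """Render per-container gauge samples in Prometheus's text exposition format.

    `samples` maps container name -> numeric value; a real exporter would
    read these from the runtime and use a Prometheus client library
    rather than formatting lines by hand.
    """
    lines = [f"# HELP {metric} {help_text}", f"# TYPE {metric} gauge"]
    for container, value in sorted(samples.items()):
        lines.append(f'{metric}{{container="{container}"}} {value}')
    return "\n".join(lines)

# Illustrative metric name and values.
output = to_prometheus_lines(
    "container_memory_usage_bytes",
    "Resident memory per container.",
    {"web": 52428800, "worker": 104857600},
)
```

The "container" label is what keeps each sample attributable once hundreds of short-lived containers are reporting into the same time series database.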

Misconception 5: Companies can already look at all their asset data in the same way

Linked into the last point is how companies think about their assets and infrastructure over time. Bill Baker of Microsoft originally coined the idea of treating servers like pets or like cattle – for traditional IT assets, any server would be treated like a pet and nursed back to health, while modern IT approaches treat server images like cattle: disposable. For cloud services and containers, this disposability is a big selling point. Each individual image provides a service that makes up part of the whole application, but any individual issue is fixed with a simple reboot or by spawning a new image.

However, we cannot take the same approach with data. Each container will carry out work and create data about that work. All this data has to be gathered, understood and put into context. Alongside this data from transient containers should be data from more traditional application infrastructure deployments. Often, this information sits in different tools, leaving developers and operations teams to make sense of it all separately. Instead, each team should be able to consolidate all their data and understand it in context.
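Consolidation in practice means mapping events from different sources onto one shared schema so they can sit on a single timeline. A minimal sketch, assuming container events carry an ISO-8601 time field while traditional syslog-style events carry an epoch timestamp (both sample events are illustrative):

```python
from datetime import datetime, timezone

def normalise(record: dict, source: str) -> dict:
    """Reduce events from different tools to one minimal shared schema.

    Assumes container events carry an ISO-8601 'time' field and
    syslog-style events carry an epoch 'timestamp'; adjust the mapping
    to whatever your own tools actually emit.
    """
    if source == "container":
        ts = datetime.fromisoformat(record["time"].replace("Z", "+00:00"))
        msg = record["message"]
    else:  # syslog-style record from traditional infrastructure
        ts = datetime.fromtimestamp(record["timestamp"], tz=timezone.utc)
        msg = record["msg"]
    return {"source": source, "time": ts, "message": msg}

# Illustrative events: one from a container, one from a traditional host.
events = [
    normalise({"time": "2023-05-01T12:00:01Z", "message": "container restart"}, "container"),
    normalise({"timestamp": 1682942400, "msg": "disk usage at 91%"}, "syslog"),
]
timeline = sorted(events, key=lambda e: e["time"])
```

With everything on one normalised timeline, a team can correlate a container restart with a host-level event instead of chasing each signal through a separate tool.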

SEE ALSO: How AI assists in threat analytics and ensures better cybersecurity

Without this work to get all the right data together – and to make it understandable for different teams – it's much harder to see where potential threats might exist or where intelligence gaps have opened up in your security posture. Equally, not all IT assets are created equal, so it's important to see the most critical issues and prioritise where appropriate.

Without the right data from your container applications, these new apps will be second-class citizens when it comes to security profiling and planning. Because container images can be created or changed continuously in response to the continuous integration / continuous delivery pipeline, monitoring those containers has to take place continuously too. Putting continuous intelligence in place can therefore help profile and manage these applications over time.

For companies adopting containers as part of their DevOps and DevSecOps strategies, getting good data on container deployments should be an obvious next step. It's critical that observability and monitoring keep up with the potential that containers can deliver, so that the business benefits of these new applications are realised and managed effectively.

Colin Fernandes
Colin Fernandes is product marketing director, EMEA at Sumo Logic.
