Avoiding a meltdown

How intent breakdown caused Meltdown and Spectre

Patrick Londa

Meltdown and Spectre have been causing scares lately, as tech companies scramble to shore up their cybersecurity and fix these vulnerabilities. In this article, Patrick Londa discusses the role that intent breakdown played.

The cybersecurity saga around Meltdown and Spectre continues. Last month, a new variant of a Spectre-type attack was disclosed, forcing Intel and other computer processor manufacturers to develop new mitigations in the form of patches and hardware changes. Despite these vulnerabilities being present in most of the world’s processors for two decades now, it is just becoming clear that speculative execution in its current form poses a real risk.

Speculative execution is a prediction technique that allows processors to execute instructions out of order to achieve faster performance. Intel adopted it in the mid-1990s, and it became common practice across the industry. Now it is a clear cybersecurity vulnerability, giving attackers ways to read memory they have no permission to access.

The intention of the initial design was that if a speculatively executed instruction accessed memory the program did not have permission to read, the returned data would be discarded. In practice, the discard was incomplete: the speculatively fetched data still landed in the processor cache. While that privileged data sits in the cache, its presence can be inferred through timing, leaving it susceptible to Meltdown- and Spectre-type attacks.
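The gap between the design's intent and its implementation can be sketched as a toy Python model (illustrative only; `ToyCPU` and its fields are made up, and real caches and timing channels are far more complex than a set of cached addresses):

```python
# Toy model (not real hardware) of the flaw described above: the
# permission check discards the architectural *result* of a speculative
# load, but the load still leaves a footprint in the cache.

class ToyCPU:
    def __init__(self, memory, permissions):
        self.memory = memory            # address -> byte value
        self.permissions = permissions  # address -> True if user-accessible
        self.cache = set()              # addresses currently held in cache

    def speculative_load(self, addr):
        value = self.memory[addr]       # fetched regardless of permissions
        self.cache.add(addr)            # side effect: the line is now cached
        if not self.permissions[addr]:
            return None                 # the result is discarded, as intended...
        return value                    # ...but the cache footprint remains

secret_addr = 0x1000
cpu = ToyCPU(memory={secret_addr: 42}, permissions={secret_addr: False})

assert cpu.speculative_load(secret_addr) is None  # access correctly denied
assert secret_addr in cpu.cache                   # yet the line got cached
```

In a real attack, the attacker cannot read the cache directly; instead, timing how long a later access takes reveals whether a line is cached, which is what the model's `cache` membership test stands in for.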

If you are curious, there are white papers and other resources that go into much more detail on this. You can also watch our full webinar on this topic here. Almost all major tech companies have put forth mitigations for these vulnerabilities, but as new variants surface, the risk has to be assessed again. While it would not have solved every vulnerability identified in this case, true cross-department collaboration might have caught this fundamental error before it was embedded in computers and devices around the world.

SEE ALSO: How to prevent and react to cybersecurity threats

Intent breakdown in a digital workplace 

“Intent breakdown” occurs when the gap between what someone intends and how that intent is communicated leads to adverse action. The issue is becoming more prevalent as our communication increasingly takes a digital, text-based form.

For the processor design that led to Meltdown and Spectre, the intent was to enable both memory isolation and speculative execution. Memory isolation means preventing a process from accessing memory addresses outside its permissions. The intent breakdown occurred when no one considered the danger of dumping the results of invalid memory requests into an accessible cache.
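The Spectre side of the problem shows the same breakdown from a different angle: a bounds check that is architecturally correct can still be "speculated past." A toy sketch (illustrative only; the names `victim`, `backing`, and `probe_cached` are invented, and real exploits encode the secret through cache timing rather than a Python set):

```python
# Toy sketch of a Spectre-v1-style gadget: the bounds check below is
# modeled as having no effect, mimicking a CPU that speculatively runs
# the load before the check retires. The out-of-bounds byte then selects
# a "probe line," encoding the secret into cache state.

array = [1, 2, 3, 4]            # the 4 in-bounds elements
secret = 7                      # lives just past the array in memory
backing = array + [secret]      # flat memory: an OOB read hits the secret
probe_cached = set()            # model of which probe lines are cached

def victim(index):
    if index < len(array):      # the check the CPU speculates past
        pass
    # speculative window: the load happens before the check takes effect
    value = backing[index]      # out-of-bounds read of the secret
    probe_cached.add(value)     # secret-dependent cache side effect

victim(4)                       # index 4 fails the bounds check
# the attacker "times" each probe line; the cached one reveals the secret
recovered = next(iter(probe_cached))
assert recovered == 7
```

The point of the model is the intent breakdown itself: every individual piece (the bounds check, the speculative load, the cache) behaves as designed, yet their combination leaks data no one intended to expose.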

For software development teams, this notion of intent breakdown is most applicable to the writing and implementation of requirements and other artifacts. Does a new feature or design change put forth for one purpose also have unintended consequences? While it seems straightforward to ask this, it requires a collaborative mindset.

The shortfalls of traditional quality analysis 

What does quality mean? The traditional approach to quality is rooted in repeated, comprehensive testing, whether the product is car tires or a mobile application. Software alone has penetration testing, static analysis, functional UI testing, load testing, and more. To be completely transparent, my company (SmartBear) offers an extensive portfolio of testing tools. Testing is a critical part of software and product development, but it has its limits: no amount of traditional testing would have caught the design flaw behind Meltdown and Spectre.

In order to capture unintended consequences, teams need to have all functional stakeholders on the same page and abreast of meaningful changes. If the design team makes a change that will directly affect the testing team, the testing team should have access to the conversation around that change, and crucially, the intent behind that change.

Intent verification through a standardized peer review process

The notion of “Intent Verification” sounds pretty self-explanatory, but I’m not sure the practice actually takes place in many organizations. When a change is made or new code is written, where does the conversation exist to connect the dots between the submitted change, the requirement, and the ultimate intent behind the requirement?

For organizations without a standardized peer review process, these changes might only be discussed in meetings, email threads, and quick chats in the hallway. Especially now as teams are becoming more agile and task-oriented, establishing a standardized peer review process that verifies the intent behind changes is critical to preventing major defects.

Most teams are already conducting some sort of peer reviews on code and documents across their software development. By standardizing your team’s process, you create one source of truth for all changes. Tools like Collaborator enable you to discuss and review requirements documents, design documents, user stories, and source code all in one platform.

For high-impact changes, your organization could require top stakeholders from design, development, testing, and security to participate in the peer review. This standardization ensures that these changes are exposed to different perspectives. If unintended consequences can be flagged and discussed prior to the change being implemented, then the overall quality of inputs into your product will be much higher.

SEE ALSO: DevOps thinking means service-centric security

Setting an appropriate review scope

It might sound like I am advocating for more cooks in the kitchen, proverbially opening new cans of worms. It doesn't have to be that way. When done right, cross-functional peer reviews actually speed up development: by reducing rework and fending off unnecessary technical debt, your team spends more time productively building.

Your team can combat review scope creep by using checklists. Instead of requesting general feedback, set clear asks for each reviewer. This restraint allows for the kind of collaboration that catches defects without adding friction to your development.

Meltdown and Spectre are unusual for a number of reasons. The vulnerabilities went undetected for decades; when they finally surfaced, they were discovered independently by multiple teams within a few months of each other; the attacks even have their own logos. What is not unusual is how this design flaw made it into the end product. Waterfall product development invites unintended consequences because the people who might spot potential issues are not included in the conversation.

By establishing a standardized peer review process, your organization embeds intent verification into its development lifecycle. If you don't, the result might be a meltdown of your own, one that haunts your organization for years.


Patrick Londa

Patrick Londa is the Digital Marketing Manager for Collaborator at SmartBear Software. With a background growing startups in the clean tech and digital health space, Patrick is now focused on software quality, process traceability, and peer review systems for companies in highly-regulated, high-impact sectors.
