The Ongoing Struggle for Security by Design

When it comes to safety problems with new, complex products, society’s response is typically consistent: first, blame the user; only later hold the manufacturer accountable for inherent design flaws in its products. We saw this with cars, and we are seeing it again with computers. But just as attitudes towards the automobile eventually changed, attitudes in the IT industry are now evolving.

The first cars went on sale in the US in the late 1890s. After that came a slew of state safety laws. Connecticut introduced the first speed limit in 1901. Then came the first traffic lights. New York passed the first drink-driving law in 1910. Eventually (but slowly), states began licensing drivers and even occasionally testing them.

Punish The User, Spare The Vendor

These measures to govern driver behaviour were all important, but no one held the automobile vendors accountable for designing safety into their products in the first place. It wasn’t until 1965, when Ralph Nader published Unsafe at Any Speed, his exposé on vehicular safety, that consumers began demanding safer cars. A year later, Congress passed the National Traffic and Motor Vehicle Safety Act, establishing federal vehicle safety standards and eventually forcing auto vendors to fit seat belts in vehicles.

Congress passed that seat belt law nearly 60 years after the first Ford Model T rolled off the production line. It’s perhaps unsurprising, then, that 42 years after IBM launched the PC, there are almost no laws holding technology product vendors similarly accountable for the safety of their products.

The only real laws governing computer safety today are there to police the users. The Computer Fraud and Abuse Act (CFAA), designed to stop cybersecurity intrusions, was passed almost 40 years ago and hasn’t been significantly updated since. The Digital Millennium Copyright Act (DMCA) focuses on preventing people from circumventing digital copyright controls.

Waking Up To Security By Design

Now, there’s a dedicated effort to get manufacturers to do the right thing and build security into their products at the design phase rather than as an after-market add-on. In April 2023, the Cybersecurity and Infrastructure Security Agency (CISA) and its international partners published joint guidance on secure product design: Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default.

Security by design builds security in from the design phase onward rather than as an afterthought or after-market add-on. Security by default ensures that security is switched on to protect users out of the box without additional charges.
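
To make the distinction concrete, here is a minimal sketch of the secure-by-default idea in Go (our illustration, not CISA’s): the safe settings are the starting point, and weakening them requires a deliberate act by the user.

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    // NewClientConfig returns a TLS configuration that is secure by default:
    // certificate verification is on and legacy protocol versions are refused,
    // with no extra configuration required from the user.
    func NewClientConfig() *tls.Config {
        return &tls.Config{
            MinVersion:         tls.VersionTLS12, // refuse TLS 1.0/1.1
            InsecureSkipVerify: false,            // verify certificates out of the box
        }
    }

    func main() {
        cfg := NewClientConfig()
        fmt.Printf("minimum TLS version: %x, verification on: %v\n",
            cfg.MinVersion, !cfg.InsecureSkipVerify)
    }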

The joint advisory’s principles include some apparent no-brainers. For example, it warns that the burden of security should not fall solely on the customer. Software vendors should “take ownership of the security outcomes of their customer’s purchase.” This was also a strategic objective in the White House’s March 2023 National Cybersecurity Strategy.

Another is “radical transparency”: software vendors should pride themselves on creating secure products and demonstrate how they do it.

All this relies on the third principle: building a leadership structure that supports these goals. Senior executives must be willing to gather customer feedback on product security and then dedicate internal resources to addressing those issues. That organisational structure could mean appointing a specific person to be responsible for software security, the document adds.

The Problem With Vendor Liability

Security by design sounds like a simple proposition, but advocates face several harsh challenges, many of which are monetary.

Firstly, accepting responsibility for software security is a considerable risk for vendors, whose customers suffer huge losses thanks to flaws in their software every day. Only in very extreme cases do these vendors pay financially. For example, SolarWinds and its insurers agreed to pay $26m to settle a shareholder lawsuit after its compromised software reached around 18,000 organisations in 2020.

For every technology vendor that strives to secure their products from the ground up, there will be plenty that don’t. The White House has committed to working with Congress to develop legislation establishing vendor liability for technology product security, but as we enter an election year and Congress can barely agree on enough to keep the government running, the chances of this seem slim.

For the time being, forcing vendors to change might be the customer’s job. CISA recommends that companies vote with their wallets, assessing their suppliers’ efforts to secure products by design and default. The White House is helping. In July 2023, it announced the US Cyber Trust Mark, a cybersecurity labelling scheme to help consumers evaluate connected devices.

There are other challenges to vendor liability. While some security missteps might be the vendor’s fault, there will be many where the vendor could blame the customer for misusing or misconfiguring the software.

One tool to help prevent such customer misuse is the software authorisation profile. This built-in security tactic, highlighted by CISA in its guidance, defines how users in particular roles may use the software, outlining the access privileges granted to each role. This stops the mail room supervisor from accessing the same functions in the enterprise resource planning system as the head of sales, for example. Savvy software vendors can alert users if they attempt to deviate from the profile.
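
CISA’s guidance describes the profile as a concept rather than prescribing an implementation, but a rough Go sketch shows how one might be enforced; the roles, actions, and alerting here are purely illustrative.

    package main

    import "fmt"

    // Role models the user types named in an authorisation profile.
    type Role string

    // permissions maps each role to the functions it may invoke.
    // A real profile would be far richer; this is deliberately minimal.
    var permissions = map[Role]map[string]bool{
        "mailroom_supervisor": {"track_parcels": true},
        "head_of_sales":       {"track_parcels": true, "view_forecasts": true},
    }

    // authorise checks a request against the profile and reports deviations,
    // mirroring the alerting behaviour described above.
    func authorise(r Role, action string) error {
        if permissions[r][action] {
            return nil
        }
        return fmt.Errorf("role %s attempted %q outside its profile", r, action)
    }

    func main() {
        if err := authorise("mailroom_supervisor", "view_forecasts"); err != nil {
            fmt.Println("alert:", err) // a savvy vendor would log and alert here
        }
    }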

Cost And Complexity

Embedded security is complex and expensive. As the joint advisory points out: “Secure-by-Design development requires the investment of significant resources by software manufacturers at each layer of the product design and development process that cannot be ‘bolted on’ later.”

This is especially problematic when dealing with legacy software and hardware products. Many software vendors are working with monolithic legacy code, developed over many years, that is brittle and far more difficult to update than modular software built from independent, loosely coupled components.

Companies can pay down this technical debt gradually across multiple product iterations, but it takes significant resources to peel that software back to the foundations and restructure security from the ground up.

Secure Design: A Thankless Task

That brings us to the next problem: visibility, or the lack thereof.

Produce a shiny, highly visible new feature like generative AI, and you’ll tempt customers to either buy the next version of your product or maintain their subscription. Conversely, adjusting code under the hood to be more secure and well-organised is laudable but thankless; it isn’t much of a selling point for many customers. A website that says “Now secure from the inside out” will likely prompt the reply: “What, you mean it wasn’t that secure in the first place?”

Security has always been a little like property or life insurance: you have to do it, but it’s difficult to sell. Making your own non-security products more secure doesn’t generate direct revenue. However, selling after-market security products like anti-malware software and firewalls is lucrative.

Tactics For Security By Design

With all this said, the challenges shouldn’t deter us from pursuing security by design. Organisations can adopt some tactics that will help to encourage software security from the beginning. One of these, highlighted in the CISA guidance, is the use of memory-safe languages.

Some traditional low-level programming languages, notably C and C++, allow programmers to manipulate areas of memory that they shouldn’t. Code can read memory that might contain sensitive information, or overwrite it, changing how other programs run or putting them into a confused state that leaves them vulnerable to attack.

Operating system vendors have introduced memory protection measures, but CISA says that these are inadequate on their own. Instead, it recommends using programming languages with built-in memory safeguards, like C#, Go, or Rust.
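
The practical difference is easy to demonstrate. In this illustrative Go sketch (ours, not CISA’s), the kind of out-of-bounds read that could silently leak adjacent memory in C instead fails loudly at runtime, where it can be caught and handled.

    package main

    import "fmt"

    func main() {
        secrets := []byte("s3cr3t")
        buf := []byte("hello")

        // In C, reading past the end of buf could silently spill into
        // adjacent memory, such as secrets. Go bounds-checks every slice
        // access, so the same mistake fails loudly instead.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("caught out-of-bounds access:", r)
            }
        }()

        i := len(buf) + 2       // an index past the end of buf
        fmt.Println(buf[i])     // panics: index out of range
        fmt.Println(secrets[0]) // never reached
    }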

Dealing with this problem from the beginning could yield significant security improvements. In 2019, Microsoft engineers said that roughly seven in ten of all vulnerabilities in Microsoft products were down to memory safety problems.

Who Is Leading In Security By Design?

Several government and industry groups already have secure design principles and frameworks focusing on various levels of the technology stack. On the software side, these include NIST’s Secure Software Development Framework and an industry-wide initiative for secure software development called SAFECode. There are also some efforts to build security into specific areas, such as web application design, through OWASP’s secure design principles.

On the hardware side, companies have worked together for years on the Trusted Platform Module (TPM), which physically stores secrets in tamper-resistant silicon on the motherboard. At this point, you can’t install Windows 11 without a version 2.0 TPM.

A Race To The Bottom (Of The Stack)

Microsoft’s insistence on TPM hardware is an example of how some vendors are doing their best to tackle security by design, collaborating with each other to create chains of security that begin in the silicon and extend into the operating system.

One example is Secure Boot, a security feature in which the firmware holds manufacturer-approved keys and signatures that prove various components on the system, such as boot loaders and the operating system, are legitimate before they are allowed to run. It builds on the Unified Extensible Firmware Interface (UEFI), the modern successor to the BIOS – the firmware that bootstraps the rest of the computer when it turns on – and works alongside the TPM.
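
To capture the chain-of-trust idea in miniature, here is a toy Go sketch; real Secure Boot verifies cryptographic signatures through UEFI firmware rather than comparing hashes against an allow-list, so treat this as an analogy only.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // approved holds digests of manufacturer-approved boot components,
    // standing in for the signature databases real firmware keeps.
    var approved = map[[32]byte]bool{}

    func enrol(image []byte) { approved[sha256.Sum256(image)] = true }

    // bootChain verifies each stage before handing control to the next,
    // halting the moment anything unapproved appears.
    func bootChain(order []string, stages map[string][]byte) error {
        for _, name := range order {
            if !approved[sha256.Sum256(stages[name])] {
                return fmt.Errorf("%s failed verification; halting boot", name)
            }
            fmt.Println(name, "verified, handing off")
        }
        return nil
    }

    func main() {
        firmware := []byte("vendor firmware v1")
        bootloader := []byte("tampered bootloader") // never enrolled
        enrol(firmware)

        err := bootChain(
            []string{"firmware", "bootloader"},
            map[string][]byte{"firmware": firmware, "bootloader": bootloader},
        )
        fmt.Println(err) // bootloader failed verification; halting boot
    }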

By verifying and protecting code at the lower system levels, operating system vendors and original equipment manufacturers aim to ensure control over everything that relies on that code. However, these protections are subject to their own security flaws, just like everything else. In Secure Boot’s case, a vulnerability codenamed Baton Drop allowed attackers to install a UEFI bootkit called BlackLotus that circumvented these protections, giving attackers control of the system.

Attacks like these don’t mean we shouldn’t pursue security by design and default. Driving more security into the system from the beginning nudges the needle in the defenders’ favour and makes attacks more difficult. But attacks like BlackLotus show that even security imposed during the design phase can be circumvented. The answer is to design multiple layers and facets of protection into systems, minimising the attack surface and providing multiple hurdles for attackers to overcome.

Regulations

Governments are getting serious about security by design, with several legislative measures either here or in the works. In the US, California and Oregon have passed IoT security laws. These require individual connected devices to either have unique pre-programmed passwords or force users to generate a new means of authentication before they can access the device for the first time.

In the UK, the Product Security and Telecommunications Infrastructure (PSTI) Act will mandate baseline security requirements for connected products out of the box. These include unique passwords, information on how to report security issues with a product, and a declared minimum support period for security updates.
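
As a small illustration of the no-default-passwords requirement (our sketch, not text from either law), a manufacturer might provision every unit with its own randomly generated credential at the factory instead of a shared default.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // uniqueFactoryPassword generates a device-unique credential, the kind
    // a manufacturer would provision per unit and print on the device label.
    func uniqueFactoryPassword() (string, error) {
        b := make([]byte, 8)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return hex.EncodeToString(b), nil
    }

    func main() {
        pw, err := uniqueFactoryPassword()
        if err != nil {
            panic(err)
        }
        fmt.Println("device-unique password:", pw) // different for every device
    }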

This is a beginning, but it still misses some opportunities to enforce robust security by design across essential products. For example, desktop and laptop computers and tablets are excluded from the UK law, as are medical devices, smart meters, and smartphones. At least your connected kettles and web security cameras are covered.

The problem with such laws is finding the balance between efficacy and complexity. Laws that micromanage the application of security principles are difficult to police and update. Nevertheless, mandating the application and documentation of a secure software development life cycle would help to secure many products.

The EU also hopes to address the built-in security issue at a bloc level. In September 2022, it published draft legislation, far stricter than the UK law, that would tighten cybersecurity rules to enforce better product security. The Cyber Resilience Act would force manufacturers to improve the security of products throughout the entire product life cycle.

Learning From History

The PC industry’s approach to security by design is currently where the automotive industry’s was in the mid-sixties. Cybersecurity has become a widespread public concern, and some organisations have been exploring approaches to built-in security on a voluntary basis to differentiate themselves and protect their users.

Now, governments are gradually pressing the issue with legislation. There is a long way to go, partly because the complexity of IT solutions and the digital supply chains that support them is an order of magnitude greater than that for the pre-digitisation automotive sector.

Some things remain the same, notably consumer lack of awareness or ambivalence. When the US mandated the inclusion of seat belts in cars, their use was voluntary. When the first states began requiring the use of seat belts almost two decades later, fewer than one in five people were using them. It will be up to governments and vendors to enforce better security in technology products and ensure that they are switched on for users by default.
