EU Cyber Resilience Act part two: Updates & Impracticalities

This is a living document - I’d normally spend a few days polishing everything, but since CRA talks are ongoing right now, there’s simply no time for that. Check back frequently for updates! Also, please let me know urgently at bert@hubertnet.nl if you think I’m reading things incorrectly!

As a follow-up to my earlier post on the EU Cyber Resilience Act, here I’d like to address some practicalities: how it would actually work.

As with the previous article, I want to thank the many people who spent serious time explaining the CRA and its intentions to me; this is most appreciated.

Also please know I’m not against regulation of important hardware and software. As noted in the previous post: “[It] is a sign of maturity that regulation is starting to apply to networked devices. It is somewhat odd that my kitchen appliances are heavily regulated for safety, but I can buy a network connected camera with a default password that on its own can take out a whole hospital’s communication systems, and that this is all legal (both on the side of the camera and the hospital!)”.

I am however worried that the CRA might not end up being workable, or that it might stifle innovation. Please see my earlier post for more details and nuance.

This post is up to date with the 10th of March version of the CRA from the Council of the European Union.

Third party components

A key issue is how the CRA deals with third party components that form part of your product or software. Many modern devices and programs consist mostly of third party components, with some new functionality layered on top. As an extreme example, you could build a credible firewall product that, by source code weight, is 99% Linux kernel and 1% user interface.

Similarly, a security camera will also typically run Linux, and then use existing vision libraries and tools to encode and distribute the video.

Now, most programmers would assume that as a vendor you’d be on the hook for everything your device or program does, and large parts of the CRA read that way too.

The latest CRA compromise draft however states:

(18a) When integrating components sourced from third parties in products with digital elements, manufacturers should exercise due diligence. The appropriate level of due diligence measures should be informed by the nature and level of the cybersecurity risk associated with the component and, for this purpose, take into account specific factors, such as the way in which the component contributes to the functionality of the product and the extent to which it has access to data processed by the product with digital elements.

This makes it clear that integrated components fall under a different regime than code you authored yourself: they only require “due diligence”. The amount of due diligence required can also vary a lot - if a component has no security role, for example, it barely requires attention.

The draft recital 18a then goes on:

Depending on the risk, due diligence measures could include: verifying that the manufacturer of a component has demonstrated conformity with this Regulation, verifying that a component is free from vulnerabilities registered in publicly accessible vulnerability databases, or verifying that a component receives regular security updates.

So if a component really is essential for your security, you need a declaration of conformity before you can integrate it. Check that CE-mark. If something is not exposed to the outside world at all, due diligence could be very limited.
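As an illustration of what the lighter end of such due diligence could look like in practice, here is a minimal sketch (my own, not anything the CRA prescribes) that checks a single component version against the public OSV vulnerability database at osv.dev. The component name and version are only examples:

```python
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask the public OSV database (osv.dev) which vulnerabilities are
    registered against a specific component version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as response:
        return json.load(response).get("vulns", [])

# Example: an intentionally outdated component version with known issues.
for vuln in known_vulns("jinja2", "2.4.1"):
    print(vuln["id"], vuln.get("summary", ""))
```

Whether a check like this would count as sufficient due diligence for a given component is exactly the kind of question the current text leaves open.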

So in essence, recital 18a is a supremely important knob in the CRA - if interpreted leniently, you could argue as a vendor or manufacturer that you only need to check your own code, since the 99% of the code you ship as components is not important.

Another reading however could be that all your security relies on Linux (or more generally, the operating system) doing the right thing with its exposed network interfaces, and that all of the OS is therefore something that should comply with the CRA. The same goes for the CPU or the WiFi chipset of your device - either we believe these are harmless components, or we recognize that for example the CPU (through things like Spectre & Meltdown or Rowhammer) could leak information, and therefore needs full CRA compliance before we can use it.

Article 10.2 adds a little bit of context:

For the purposes of complying with the obligation laid down in paragraph 1, manufacturers shall exercise due diligence when integrating components sourced from third parties in products with digital elements in a manner that such components do not compromise the security of the product with digital elements.

How this will be interpreted determines the bulk of the impact that the CRA will have on software and hardware development. Currently, all we have to go on are these two paragraphs which say “it could be a lot of work, or not”.

I would urge everyone involved to provide a lot more clarity here – it is the kind of article that is wonderful from a political and negotiation perspective since you can read/imagine into it what you want. But for us producers, it is a giant mystery what we’ll have to do.

Should the Linux kernel we use have a CRA CE-mark for use in a security camera? In a digital picture frame? In a VPN appliance? We need to have guidance that tells us what to expect beyond “you figure it out”.

Essential Cybersecurity Requirements & the standard

The CRA lists 12 essential cybersecurity requirements, which are discussed at length in the previous post. In the latest draft compromise there has been a serious regression: the requirements now state that products should be shipped with ‘no known vulnerabilities’, which is impossible.

The previous version wisely spoke of ‘no known exploitable vulnerabilities’. By dropping ‘exploitable’, the act suddenly requires even unreachable and never-used code to be secure. The costs of dropping ‘exploitable’ will run into the billions of euros, and it will become extremely dangerous to ship any code or product to Europe. I’d recommend against doing so if this requirement stays as it is.
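To make the distinction concrete, consider a hypothetical product that bundles a third-party routine with a registered vulnerability in a code path the product never invokes. All names below are invented for illustration:

```python
# Hypothetical product code, invented purely for illustration.

def decode_tiff(data: bytes) -> bytes:
    # Imagine this bundled third-party routine has a vulnerability
    # registered in a public database. It ships with the product,
    # but nothing in the product ever calls it.
    raise NotImplementedError("stand-in for a vulnerable code path")

def decode_jpeg(data: bytes) -> bytes:
    # The only decoder the product actually exposes.
    return data  # stand-in for real decoding

def handle_upload(data: bytes) -> bytes:
    # Every product interface goes through here; decode_tiff() is
    # dead code and cannot be reached by an attacker.
    return decode_jpeg(data)

# Under ‘no known exploitable vulnerabilities’ this product is arguably
# fine: the registered vulnerability cannot be triggered through it.
# Under ‘no known vulnerabilities’ it is not, because the vulnerable
# code ships as part of the product, reachable or not.
```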

Now, there is an interpretation possible that the EU thinks a vulnerability that happens not to be exploitable in your product is not a vulnerability. For one, this is not an interpretation anyone skilled in the art of security would agree with. Secondly, when legislation is drafted, no one should be relying on a charitable reading of “they probably mean what I hope they mean”.

(It may be good to know that my experience in this field comes from having been a member of the Dutch Intelligence and Security Agency regulatory commission, where I had to interpret what complex and ambiguously worded laws actually meant.)

One reader also wondered about the ‘known’ requirement: known to whom? The producer, the public, the component manufacturer, anyone?

Another reader with a lot of relevant experience raised the question of whether it will now be illegal to distribute older versions of software that still contain known vulnerabilities. Many environments cannot upgrade instantly and rely on the availability of older versions for their continuity.

Clarity is required here.

Reading on, as noted in the previous post, the essential requirements are a mixed bag. Some of them are unambiguously good, others could be terrible. Not adhering to the requirements could lead to fines running into millions or even billions of euros, and the brief description of the requirements in Annex I is not precise enough for anyone to know for sure if they are in compliance.

The goal is therefore that CEN-CENELEC drafts a standard that explains what these requirements mean exactly. As discussed earlier, it is very uncertain if the standardisation and software communities will be able to come to an agreement on what these requirements mean precisely, and on the specific rules you’d need to adhere to.

It is therefore likely that we’ll spend half a decade or more without an agreed standard.

The CRA does not explicitly require anyone to use a standard, but it does come close.

Recital 38 says that your product/software will have the ‘presumption of conformity’ if you adhere to a standard that implements the essential cybersecurity requirements.

Recital 45 states:

The manufacturer should apply harmonised standards, common specifications or cybersecurity certification schemes under Regulation (EU) 2019/881 which have been identified by the Commission in an implementing act, if it wants to carry out the conformity assessment under its own responsibility (module A). If the manufacturer does not apply such harmonised standards, common specifications or cybersecurity certification schemes, the manufacturer should undergo conformity assessment involving a third party.

This means that if you use a standard, you could be ok. But if there is no standard, you have to rely on a third party to tell you what is ok. And not just any third party: this would need to be one of the official EU notified bodies. None of the currently known notified bodies appears to be good at cybersecurity assessments.

If I read it correctly, this means that until there is a standard, everyone shipping anything remotely important will have to rely on a non-existent party to determine the conformance of their products.

23rd of March update: A member state government has confirmed (in writing) that this reading is correct. 24 months after the CRA enters into force (perhaps in 2025), the conformity requirements become mandatory (in 2027). This means that by then we should either have harmonised standards, or there should be ample auditing capacity by notified bodies. Right now, it appears plausible we’ll have neither at that time. This essentially means it is entirely possible that new products with digital elements can no longer be introduced into the EU single market at that point.

It appears that governments are assuming a lot of notified bodies will spring into existence in the coming years to make the CRA possible. This is a very questionable assumption, especially because that market would suddenly shrink again once a standard becomes available.

Alternatively, governments could be assuming a suitable standard will appear in a few years. More cynically, they may be assuming that industry will hurry up with a standard because being held up by a lack of auditors would be terrible for business.

These are pretty bold assumptions, especially since getting it wrong means no new products and software in the EU single market.

This would appear to be bad.

To be continued

This is a living document, so please check back frequently for updates and fixes.