CSAM Regulation Update: Dutch Intelligence agency weighs in

A quick update - The European Commission and EU member states have been pondering for years now whether they should force WhatsApp/Apple/Signal/Telegram to scan all our private messages for suspected child sexual abuse material (CSAM). For various reasons it is a horrendous idea to break end-to-end encryption in this likely highly ineffective way. Variations of the proposal have also included a mandate to perform such scanning of images using AI, and even to read our text messages to check that we aren't "grooming" children.

The current proposal is to fuzzily scan for "known" bad material today, and to add the AI bits in three years' time.
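For illustration, "fuzzy" matching of known material is typically built on perceptual hashing: an image is reduced to a short fingerprint that survives small edits such as recompression or brightness changes. The deployed systems (such as Microsoft's PhotoDNA) are proprietary, so the average-hash toy below is only an assumed stand-in that shows the principle, not the actual algorithm the regulation would mandate:

```python
# Toy perceptual hash ("average hash"): each bit records whether a pixel
# is brighter than the image's mean. Minor edits barely change the bits,
# so "known" images can be matched approximately via Hamming distance.

def average_hash(pixels):
    """Hash a small grayscale image given as a list of rows of 0-255 ints."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A tiny 4x4 "image" and a uniformly brightened copy of it.
img = [[10, 200, 30, 220],
       [15, 210, 25, 230],
       [12, 205, 35, 225],
       [18, 215, 28, 235]]
tweaked = [[p + 5 for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(tweaked)
# Brightening every pixel shifts the mean by the same amount, so every
# bit comparison is unchanged and the hashes match exactly.
print(hamming_distance(h1, h2))  # → 0
```

A scanner would declare a match when the distance to a hash on a blocklist falls under some threshold; that fuzziness is exactly what makes false positives, and deliberate hash collisions, a concern.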

After a serious struggle, the Dutch government has now consulted with their civilian Intelligence and Security Service AIVD (where I used to work in the mid-2000s, and of which I later became a regulator until 2022).

This is their translated conclusion:

“Introducing a scanning application on every mobile phone, with its associated infrastructure and management solutions, leads to an extensive and very complex system. Such a complex system grants access to a large number of mobile devices & the personal data thereon. The resulting situation is regarded by AIVD as too large a risk for our digital resilience. (…) Applying detection orders to providers of end-to-end encrypted communications entails too large a security risk for our digital resilience”.

For background, any intelligence agency will be highly aware of how the various spyware apps have been breaking into chat applications and phones for years now.

Inserting a whole new scanning layer into our chat traffic would expand the attack surface significantly. Image parsing has proven very hard to secure, as Apple and Google, for example, know all too well. Hackers, nation-state or otherwise, might enter our phones via the CSAM filter.

And now imagine Facebook/Meta/etc having to install, at their own cost, an image scanner & associated infrastructure. It makes no money; they aren't getting paid for it. It might well not end up being a very good product, especially from a security perspective. And that would likely worry any intelligence agency.

Interestingly, drafts of the CSAM regulation contain an explicit opt-out for intelligence services (and military and law and order people). Apparently there are some privacy risks.

The original AIVD statement can be found in these two Dutch documents: