I don't want you to have to visit LinkedIn, so here is a copy of the commentary.
New research with my amazing COSIC team members Maxime Deryck and Diane Leblanc-Albarel reveals critical vulnerabilities in Microsoft’s PhotoDNA, the technology widely used to detect Child Sexual Abuse Material (CSAM).
PhotoDNA has been a cornerstone of online CSAM detection since 2009, used by Microsoft, Google, Instagram, TikTok, NCMEC (National Center for Missing and Exploited Children), and many others. For the first time, we have produced a full mathematical description of the algorithm—uncovering structural weaknesses with serious security and policy implications.
Key Findings
• Hash value reversal: Visual information can be partially reconstructed from a PhotoDNA hash value.
• Detection evasion: Illicit images can be subtly modified to avoid detection.
• False positives: Benign images can be manipulated to resemble known CSAM hash values.
• Collisions: Two different high‑quality images can be engineered to produce the same hash value.
These attacks run in seconds or minutes on a standard laptop and succeed with near‑perfect reliability.
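PhotoDNA's internals are proprietary and are not reproduced here. As a generic illustration of why threshold‑based perceptual hashes are fragile, the following toy "average hash" sketch (my own example, not PhotoDNA or the paper's attack) shows how pixels sitting near the decision threshold can be nudged by an imperceptible amount to change the hash, i.e. the evasion idea in miniature:

```python
# Toy "average hash": 1 bit per pixel, set when the pixel exceeds the mean.
# This is a deliberately simplified stand-in for a perceptual hash; PhotoDNA
# itself is far more elaborate, but shares the threshold-based structure.

def average_hash(pixels):
    """Hash a flat list of grayscale values as a tuple of bits."""
    mean = sum(pixels) / len(pixels)
    return tuple(int(p > mean) for p in pixels)

# A tiny "image" as 16 grayscale values; several sit just above/below the mean.
image = [100, 101, 99, 150, 30, 102, 98, 160,
         40, 101, 99, 155, 35, 100, 97, 158]
original = average_hash(image)

# Detection evasion: push only near-threshold pixels across the boundary by
# at most 3 grey levels -- visually imperceptible, but enough to flip bits.
evaded = list(image)
for i, p in enumerate(evaded):
    mean = sum(evaded) / len(evaded)
    if abs(p - mean) < 3:                       # pixel near decision boundary
        evaded[i] = p + (3 if p <= mean else -3)

assert average_hash(evaded) != original                      # hash changed...
assert max(abs(a - b) for a, b in zip(image, evaded)) <= 3   # ...image barely did
```

The same brittleness cuts both ways: just as small edits can move an illicit image's hash away from the database, they can move a benign image's hash toward a target value, which is the false‑positive attack in the list above.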
Why this matters
As policymakers consider large‑scale client‑side scanning (including versions of the EU “#chatcontrol” proposal), these weaknesses highlight the risks of deploying fragile detection systems on billions of devices. The findings point to potential information leakage, undetected CSAM, and even wrongful accusations.
The researchers emphasize that the goal of the work is to strengthen protection for CSAM victims by encouraging more robust, transparent, and targeted detection technologies.
A coordinated vulnerability disclosure process was followed with Microsoft.
Here is a link to the actual publication.
The author often warns about the dangers of chatcontrol.