Apple’s child sex-abuse scanning tool is too clever for its own good: Tae Kim
Apple Inc.’s plan to scan iPhones for child-abuse images is past the point of repair. A complete overhaul is in order.
The company said Friday that it will delay implementation of the software following a backlash from privacy groups, security experts and many customers. Apple will spend a few months taking in additional feedback to make improvements “before releasing these critically important” features, the tech giant said.
The statement seems to suggest that Apple will make minor adjustments and then roll out something similar to its current proposal. At this point, though, Apple should recognize that anything designed to examine the personal contents of people’s phones is a lost cause.
Let’s look at the details. After facing pressure from governments to do more to battle child pornography and exploitation, Apple unveiled its plan last month to offer three new tools. They included the ability for parents to be notified when their children receive or send explicit photos over Apple Messages, the option to report child abuse using the Siri voice assistant and a new system to detect Child Sexual Abuse Material (CSAM) stored in users’ iCloud photo libraries.
The last measure received the harshest criticism from privacy advocates. Rather than scanning photos after they are uploaded to cloud services, Apple created an elaborate new system that looks at images on a customer’s device. For any iCloud Photos user, the software would compare what is essentially the digital fingerprint of each image against databases of known illegal photos on the iPhone itself. Once a threshold number of matches is met, Apple would review each incident manually and then, if valid, report it to the National Center for Missing and Exploited Children, an organization that works with U.S. law enforcement agencies.
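To make the threshold idea concrete, here is a deliberately simplified sketch of fingerprint matching against a database of known images. It is illustrative only: Apple’s actual system uses a perceptual hash (NeuralHash) combined with cryptographic protocols so that matches stay hidden below the threshold, whereas this sketch uses a plain cryptographic hash and an assumed, hypothetical threshold value.

```python
import hashlib

# Hypothetical threshold for illustration only; not Apple's real value.
MATCH_THRESHOLD = 3

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. A real perceptual hash tolerates
    resizing and recompression; SHA-256, used here for simplicity, does not."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(images, known_fingerprints) -> int:
    """Count how many images match the database of known fingerprints."""
    return sum(fingerprint(img) in known_fingerprints for img in images)

def exceeds_threshold(images, known_fingerprints,
                      threshold=MATCH_THRESHOLD) -> bool:
    """Only once the match count crosses the threshold would anything
    be surfaced for manual review."""
    return count_matches(images, known_fingerprints) >= threshold
```

The key design point the column describes is that nothing is reported on a single match; only an accumulation of matches past the threshold triggers human review.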
While it may be technically accurate that Apple’s on-device matching technique is more secure and privacy-conscious, convincing people this is the case has proved extremely difficult. I consider myself to be technically proficient, but I had to read Apple’s 12-page documentation multiple times to get a sense of how it all works. The average person isn’t going to make the effort to understand the nuances.
After the recent public debate, the concept of scanning someone’s personal device — no matter how ingenious the method — has become repellent. Fight for the Future, a digital-rights advocacy group, is organizing protests outside of Apple Stores next week to call for the permanent cancellation of the program, citing privacy concerns.
Then there is the slippery slope argument. Privacy groups are also worried that once the technology for fingerprinting CSAM photos is set up, authoritarian governments may ask for surveillance of other types of content on personal devices. These concerns are legitimate. While Apple has explicitly said it would refuse such requests, what happens when there is a court order or legislation that requires it? Once the system is implemented, it opens the door for misuse.
That’s why Apple should instead copy the practices of its main technology rivals. Facebook Inc., Alphabet Inc.’s Google and Microsoft Corp. scan for CSAM photos after they’re uploaded. It’s not a perfect solution: Apple would need to look through far more photos, rather than only a small flagged subset. But it is easier for users to accept the idea that images sent for storage on the internet may get examined for illegal content.
Sometimes companies can be too clever for their own good. The sooner Apple realizes this public relations battle is unwinnable, the better. Otherwise, fear of corporate surveillance may dominate the conversation surrounding iPhones for a long time.
Tae Kim is a Bloomberg Opinion columnist covering technology. He previously covered technology for Barron's, following an earlier career as an equity analyst.