Over the past couple of weeks, I’ve had a few people ask for my thoughts on Apple’s plans to detect images of child sexual abuse.

At the bottom of this post, I have a list of links on the topic. MacRumors has a good collection of articles about how this system is supposed to work, but the first link, to Bruce Schneier’s thoughts (which in turn link to still more commentary), is the most important of them, in my opinion.

Apple has two upcoming systems: one that I don’t think is going to be a big problem, and another about which I share the concerns voiced in the links below. Announcing both of these systems at the same time has caused a tremendous amount of confusion, because people are conflating the two.

The first system is supposed to use on-device machine learning to detect whether someone has sent pornographic material to a device that has been configured for use by a child. Apple products have, for years, used on-device machine learning to identify the contents of photos for categorization and tagging. The term “on-device” is important here because the device itself decides what the image contains, without having to talk to a computer in an Apple data center. I think this could be a valuable tool for parents. However, as I’ve said in previous posts, technological tools meant to protect children can give a false sense of security to parents who trade building a good relationship with their children for a corporation’s claim that it will do the job for them. As an assist for a parent, this could be terrific, but it shouldn’t be understood as a perfect shield against people sending unwanted pornographic images to their children.
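
To make the “on-device” distinction concrete, here is a minimal, hypothetical sketch of the idea. None of this is Apple’s actual API; `local_model_predict` is just a stand-in for a model bundled with the operating system, and the point is only that the decision is made by code running on the device, with nothing sent to a server.

```python
# Hypothetical sketch: the classification decision happens entirely on the device.
# `local_model_predict` stands in for a bundled, locally evaluated ML model
# (on Apple platforms this would be something along the lines of a Core ML model);
# it is not a real API.

def local_model_predict(image_bytes: bytes) -> float:
    """Stand-in for an on-device model returning a probability that the image is explicit."""
    return 0.0  # dummy value; a real model would be evaluated here, still locally

def should_warn_about_incoming_image(image_bytes: bytes, threshold: float = 0.9) -> bool:
    # The image never leaves the device; only this boolean is acted on by the messaging UI.
    return local_model_predict(image_bytes) >= threshold
```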

The larger issue is the second tool, which looks inside Apple’s iCloud Photos library for “Child Sexual Abuse Material” (CSAM). The documentation says this works by the National Center for Missing and Exploited Children (NCMEC) providing hashes (strings of characters generated by an algorithm from known images of abuse cataloged by the NCMEC), which are then copied to every Apple device; each device in turn hashes every photo it is about to send to a user’s iCloud Photos account and compares those hashes against the list. From the documentation:

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
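
As a way of keeping that description straight, here is a heavily simplified, hypothetical sketch of the flow it describes. The real system reportedly relies on a perceptual hash plus the private set intersection and threshold secret sharing cryptography mentioned above; this sketch collapses all of that into plain set membership and a counter, purely to show the control flow (hash on the device, match against the provided list, reveal nothing until the threshold is crossed).

```python
# Heavily simplified, hypothetical sketch of the matching flow described above.
# The real design hides individual match results cryptographically; here that
# secrecy is only imitated by withholding the return value below the threshold.

from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class SafetyVoucher:
    photo_id: str
    matched: bool  # in the real design, this is not visible to Apple below the threshold

def scan_before_upload(
    photos: Iterable[tuple[str, bytes]],      # (photo_id, image_bytes) queued for iCloud Photos
    known_hashes: set[str],                   # hashes supplied by NCMEC and other organizations
    perceptual_hash: Callable[[bytes], str],  # stand-in for Apple's on-device hashing function
    threshold: int,
) -> Optional[list[SafetyVoucher]]:
    """Return the matching vouchers only if the number of matches exceeds the threshold."""
    vouchers = [
        SafetyVoucher(photo_id, perceptual_hash(data) in known_hashes)
        for photo_id, data in photos
    ]
    matches = [v for v in vouchers if v.matched]
    # Below the threshold, nothing is revealed; above it, the matches become reviewable.
    return matches if len(matches) > threshold else None
```

The important difference from the real thing is that, in Apple’s description, neither the device nor Apple can see which individual vouchers matched until the threshold is crossed; in this sketch that property is only faked by withholding the return value.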

Three takeaways that I have here are:

  1. If someone is generating new images of child abuse, this system will do nothing. If the NCMEC does not already know about an image, this system is blind to it.
  2. If someone saves images of child abuse to iCloud Drive instead of iCloud Photos, this system also appears to do nothing.
  3. Apparently, someone can have some number of known images of child abuse in their iCloud Photos account, and the system does nothing until there are enough of them to cross the threshold, at which point someone at Apple reviews the situation and reports the account to the NCMEC.

The flaws in this implementation are not my largest complaints. My concerns center on transparency, privacy, consent, and misuse.

  1. Apple’s own documentation explains that the list of hashes stored on your device will not be available for you, or anyone else, to audit. You cannot know what Apple’s system is looking for with any specificity beyond what it tells you. I hope that the NCMEC and “other child safety organizations” have perfectly curated this horrible data so that all of the hashes are accurate and true, but are they?
  2. In law, I have heard discussion of the concept of the “perfect search”. If I remember right, the first time I heard this was a lawyer arguing that a piece of software that reports on a person to a governmental entity violates the 3rd Amendment (which prohibits the non-consensual quartering of soldiers in your home). I felt that was a bit of a stretch, but since the amendment was ratified 230 years ago, we work with what we have. The 4th Amendment, with its prohibition against unreasonable searches and seizures, feels more relevant here. Can a search occur in which only the thing being searched for is ever reported and nothing else?
  3. A partial problem with item two is that the Bill of Rights is a prohibition against governmental interference with a person’s home, papers, and effects, and Apple is not the government. And here we have the issue of consent. This system will go into effect without giving me a choice about having my data searched on a service that I may have used for years. Just as one does not have the “right” to say whatever they want on social media, one does not have the “right” to be secure in their “papers and effects” with iCloud Photos, because Apple is not the government.
  4. Misuse is, in my opinion, by far the largest issue here. The links below to other people’s thoughts go into much more detail, but this system could be expanded to pry into anything stored on iCloud servers and, I would presume, anything on Apple’s devices, regardless of whether it is stored on iCloud servers or not. The scope of what could be scanned for could be expanded, and who it reports to could be expanded even further. Even if the NCMEC’s data is perfect, and Apple’s system works precisely as designed, and it never returns a false positive, and it does disrupt some of the most awful crimes humanity can commit against itself, that is good for today. But what about tomorrow, when this system is wielded against vulnerable people instead of for vulnerable children?

My conclusion is that I hope Apple hears enough negative feedback that it shuts down this program and tells law enforcement that it’s not going to sacrifice all of its customers’ privacy to make law enforcement’s job slightly easier at finding the few horrible people using its services to store awful material. If that doesn’t happen, I hope this system works well and people face consequences for their crimes. But I’m not an optimist on this. I’ve already disabled the iCloud Photos portion of my iCloud account, and I will be considering what else I want to store with Apple’s cloud services.