The other notable detail is that Apple will rely on not one but at least two child safety organizations to determine which known child sexual abuse material to look out for, and those organizations will operate in separate jurisdictions belonging to different governments.

“If and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images, and at that point, only knows about those images, not about any of your other images,” Apple SVP Craig Federighi said in a Friday interview with The Wall Street Journal.

Even then, Apple won’t automatically report the account to the National Center for Missing and Exploited Children, which works with law enforcement to stop child predators. The company’s team of human reviewers will first examine the flagged pictures firsthand to confirm the imagery is child sexual abuse material.
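To make that sequence concrete, here is a minimal sketch of the gating logic as described, written in Python. It is not Apple's implementation: the real matching happens cryptographically on the device so that Apple learns nothing below the threshold, and the function names, hash lists, and the assumption that only hashes vouched for by both organizations count are illustrative readings of the design described above, not confirmed details.

```python
# Minimal sketch of the described gating logic -- not Apple's implementation.
# Assumes (for illustration) that only image hashes appearing in BOTH child
# safety organizations' databases count, and uses the ~30-match threshold
# Apple has cited. All names and values here are hypothetical.

MATCH_THRESHOLD = 30  # "on the order of 30 known child pornographic images"

def needs_human_review(uploaded_hashes, org_a_hashes, org_b_hashes):
    """Return True only once an account's uploads match enough known-CSAM
    hashes; below the threshold the account is never surfaced at all."""
    # Count only hashes present in both databases, reflecting the reliance
    # on organizations in separate jurisdictions under different governments.
    shared_database = set(org_a_hashes) & set(org_b_hashes)
    matches = sum(1 for h in uploaded_hashes if h in shared_database)
    return matches >= MATCH_THRESHOLD
```

Even a True result here would only hand the matching images to Apple's human reviewers; a report to the National Center for Missing and Exploited Children would come only after they confirm the material.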
It’s also important to note what happens if the system mistakenly flags an innocent iCloud Photos account as full of child porn.
The threshold is important, since there’s always a possibility Apple’s CSAM system will mistakenly flag an innocuous image uploaded to iCloud as child porn. The company settled on 30 images, describing the figure as a “drastic safety margin reflecting a worst-case assumption about real-world performance,” and says it may change the number after the detection system deploys with iOS 15 later this year. “But the match threshold will never be lower than what is required to produce a one-in-one trillion false positive rate for any given account,” the company writes in the document.
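To see why a threshold of 30 produces such a drastic safety margin, consider a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, that every innocuous photo has some small independent chance of falsely matching a known hash; the per-image rate, the library size, and the independence assumption are invented for this example and are not Apple's published figures.

```python
from math import exp, lgamma, log, log1p

def log_binom_pmf(n, k, p):
    """log P(X = k) for X ~ Binomial(n, p), computed in log space so the
    very small probabilities involved don't cause overflow errors."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def prob_account_falsely_flagged(n_photos, p_false_match, threshold):
    """Chance that an account of entirely innocuous photos accumulates at
    least `threshold` false matches, assuming each photo independently
    false-matches a known hash with probability p_false_match."""
    return sum(exp(log_binom_pmf(n_photos, k, p_false_match))
               for k in range(threshold, n_photos + 1))

# Hypothetical inputs: a 20,000-photo library and a one-in-a-million
# per-image false match rate.
print(prob_account_falsely_flagged(20_000, 1e-6, 1))   # roughly 0.02
print(prob_account_falsely_flagged(20_000, 1e-6, 30))  # astronomically small
```

Under those made-up numbers, a threshold of a single match would falsely flag roughly 2 percent of such accounts, while a threshold of 30 pushes the chance far below the one-in-one-trillion target, which is the intuition behind Apple's "worst-case assumption" framing.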