Stanford researchers: Twitter FAILED to prevent dozens of child abuse images from spreading online
By ramontomeydw // 2023-06-07
 
Researchers from Stanford University found that Twitter failed to prevent dozens of known images featuring child sexual abuse (CSA) from being posted, indicating a lapse in basic enforcement. Experts at the Stanford Internet Observatory (IO), who were investigating child safety issues across several platforms, documented their findings in a report.

According to the Wall Street Journal (WSJ), the IO forwarded its findings to Twitter. In response, the tech firm said it had improved some aspects of its detection system and asked the IO researchers to alert the company if a spike in CSA images is detected in the future.

IO Chief Technologist David Thiel, who co-authored the report, said he and his colleagues used a data set of roughly 100,000 tweets for their assessment. He added that from March 12 to May 20 – a span of more than two months – their system detected more than 40 CSA images.

According to Thiel and his team, the appearance of the images on Twitter was striking because they had been previously flagged for CSA. Moreover, those images were part of a database companies can use to screen content posted to their platforms. "This is one of the most basic things you can do to prevent [the spread of CSA material] online, and it did not seem to be working," said Thiel. "It's a surprise to get any PhotoDNA hits at all on a small Twitter dataset."

Thiel and his colleagues used PhotoDNA technology, which creates a unique digital signature, or hash, of a picture that can be checked against hashes of pictures posted by other users. In this case, they compared images against CSA-related hashes from databases maintained by organizations that combat child abuse. After using an automated program to obtain a stream of data from Twitter using certain keywords, links to the images were automatically passed through PhotoDNA for checking. The WSJ said PhotoDNA has "been used by organizations including tech companies and law-enforcement agencies."
Even Twitter has used the technology to remove child exploitation material alongside other tools.
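The hash-and-match workflow described above can be sketched in a few lines of Python. PhotoDNA itself is proprietary and uses perceptual hashing (which matches visually similar images even after resizing or recompression); as a rough illustration only, this sketch substitutes an ordinary cryptographic hash, which matches only byte-identical files. The hash database here is a hypothetical stand-in for the databases maintained by child-safety organizations.

```python
import hashlib

# Hypothetical set of known-flagged hashes, standing in for the databases
# maintained by organizations that combat child abuse. For this demo, the
# single entry is the SHA-256 digest of the bytes b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def hash_image(data: bytes) -> str:
    """Compute a hex digest for an image's raw bytes.

    Real systems like PhotoDNA use a perceptual hash instead, so that
    near-duplicate images still produce matching signatures.
    """
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Return True if the image's hash appears in the known-flagged set."""
    return hash_image(data) in KNOWN_HASHES

print(is_flagged(b"foo"))   # True - digest is in the set
print(is_flagged(b"safe"))  # False - no match
```

This is the same screening pattern the researchers relied on: hash each incoming image and look the result up in a set of previously flagged signatures, which is why getting any hits at all on a small sample was notable.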

Twitter silent on IO researchers' findings

Twitter did not respond to an email from the WSJ requesting comment on the researchers' report. Similarly, the platform's owner Elon Musk did not respond to a request for comment. The Tesla and SpaceX CEO has denounced the IO, calling it a "propaganda machine" in a March 2023 tweet.

Musk emphasized Twitter's commitment to removing CSA material on the platform after he acquired it in October 2022 for $44 billion. He remarked in several tweets that removing child abuse material is "priority No. 1" and "will forever be [Twitter's] top priority." But the challenge of blocking such content is not limited to Twitter, as other platforms face the same issue.

According to the Stanford researchers, Twitter told them it had detected false positives in some CSA image databases, which it manually filters out. As a result, the social media site warned that researchers might see false positives in the future. (Related: Yoel Roth, 'gay data,' and child exploitation on Twitter.)

While Twitter bans material that features or promotes CSA, the IO team noted that one challenge facing the platform is its rule allowing adult nudity. This rule adds complexity to identifying and removing violating content, they added.

Visit DorseyWatch.com for more stories about Twitter.

Watch this video that explains why Twitter allows the promotion of child torture, child sexual exploitation and other nefarious practices. This video is from the Henrik Wallin - All knowledge channel on Brighteon.com.

More related stories:

Pedo-enabler Jack Dorsey condones child rape and pedophilia via Twitter policies while banning those who try to protect children.

Marjorie Taylor Greene blasts Twitter for permanently suspending her account while allowing child pornography.

Reddit joins Twitter in protecting pedophiles who post exploitative depictions of young children.

Sources include:

WSJ.com

Brighteon.com