Twitter’s attempt to monetize porn is halted because of child safety warnings

Twitter is reportedly considering monetizing adult content amid internal shake-ups and mounting pressure from investors to increase profits.

The move has been put on hold, however, because of child safety warnings. The Verge reported that Twitter aimed to compete with OnlyFans by allowing adult creators to sell subscriptions on the platform.

Already, some adult creators rely on Twitter to advertise their OnlyFans accounts, since it is one of the few major platforms where posting porn doesn’t violate the rules.

Twitter put the project on hold after an 84-employee “Red Team,” assembled to test the product for safety issues, found that Twitter cannot reliably detect child sexual abuse material (CSAM) and non-consensual nudity at scale.

The platform also lacks tools to verify whether consumers and creators of adult content are over 18. Notably, Twitter’s Health team has been warning higher-ups about the company’s CSAM problem since February 2021.

To detect such content, the platform relies on PhotoDNA, a hash-matching system developed by Microsoft that helps social media platforms identify and remove known CSAM. However, new or digitally altered images can evade detection if they have never been added to that database.
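That is the core limitation of hash matching: it can only flag content that has already been catalogued. As a rough illustration of the general idea, here is a minimal sketch of perceptual-hash matching in Python; it is not PhotoDNA’s actual algorithm, and the hash function, sample hashes, and distance threshold below are hypothetical.

```python
# Minimal sketch of hash-based image matching, for illustration only.
# PhotoDNA's actual algorithm and database are proprietary; the hash
# function, sample hashes, and threshold here are hypothetical.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple perceptual (average) hash of an image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the average.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

# Hypothetical database of hashes of previously catalogued prohibited images.
KNOWN_HASHES = {0x8F3C21A055D0FF01, 0x00FF00FF00FF00FF}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Flag an image if its hash is within a small Hamming distance
    of any hash in the known-image database."""
    h = average_hash(path)
    return any(bin(h ^ known).count("1") <= max_distance for known in KNOWN_HASHES)
```

Because matching is a lookup against previously catalogued hashes, an image that was never added to the database, or one altered enough to push its hash past the threshold, simply produces no match.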

Matthew Green, an associate professor at the Johns Hopkins Information Security Institute, says, “You see people saying Twitter is doing a bad job. And then it turns out that Twitter uses the same PhotoDNA scanning technology that almost everybody uses.”

Twitter’s annual revenue, around $5 billion in 2021, leaves it with far less to spend on building more robust CSAM-detection technology, and even machine learning–powered mechanisms aren’t foolproof. As Green puts it, “This new kind of experimental technology isn’t the industry standard.”

Green also added, “These tools are powerful in that they can find new stuff, but they’re also error-prone. Machine learning doesn’t know the difference between sending something to your doctor and actual child sexual abuse.”

Until February 2022, Twitter had no dedicated way for users to flag content containing CSAM, meaning some of the platform’s most harmful content could remain online for long stretches even after being reported.

Two users sued Twitter over sex-trafficking videos

In 2021, two users sued Twitter for allegedly profiting from videos that depicted them as teenage victims of sex trafficking. The case is headed to the US Court of Appeals for the Ninth Circuit.

The plaintiffs claimed that Twitter failed to remove the videos after being notified about them; the videos went on to garner over 167,000 views.

Ultimately, Twitter is so large that detecting every piece of CSAM is nearly impossible, and unlike Google, it doesn’t earn enough to invest in more advanced and robust safeguards.

Photo from Pexels by greenwish _