Instagram: Meta takes action with AI to shield children from “sextortion”

On Thursday, Meta announced that it was creating additional resources to shield adolescent Instagram users from “sextortion” schemes, which have been linked by US lawmakers to negative effects on young people’s mental health.

Gangs behind such schemes coerce victims into providing sexual photos of themselves, then threaten to make the images public unless they are paid.

According to Meta, the app’s messaging system will detect and blur any photographs containing nudity that are sent to minors. This AI-driven “nudity protection” feature is currently in development.

According to Capucine Tuffier, who oversees child protection at Meta France, “the recipient is not exposed to unwanted intimate content and has the choice to see the image or not.”

According to the US corporation, anyone sending or receiving such communications will also receive guidance and safety tips.

Authorities in the United States estimate that around 3,000 young people were targeted by sextortion schemes in 2022.

Separately, Meta has faced a lawsuit brought by more than 40 US states since October, which claims that the company “profited from children’s pain.”

The lawsuit accuses Meta of exploiting minors by building a business model that maximizes their time on the platform while endangering their health.

– ‘On-device machine learning’ –

To better safeguard users under the age of eighteen, Meta announced in January that it would expand the availability of parental supervision tools and tighten content restrictions.

The company stated on Thursday that the new resources strengthen “our long-standing work to help protect young people from unwanted or potentially harmful contact.”

According to the firm, “We’re testing new features to help protect young people from intimate image abuse and sextortion, as well as to make it more difficult for potential scammers and criminals to find and interact with teens.”

It further stated that the “nudity protection” feature analyzes photos using “on-device machine learning,” a form of artificial intelligence that runs on the user’s phone rather than on Meta’s servers.

Meta, which is frequently accused of violating its users’ data privacy, emphasized that it would not have access to the photos unless users reported them.

Meta also announced that it would use AI tools to detect accounts sending abusive content and severely restrict their ability to interact with younger users on the platform.

In 2021, former Facebook engineer and whistleblower Frances Haugen made public internal research by Meta, then known as Facebook, showing the company had long been aware of the risks its platforms posed to young people’s mental health.

AFP
