Facebook uses artificial intelligence to help prevent suicides

The social network has rolled out new tools in tandem with partners

Facebook is using a combination of pattern recognition, live chat support from crisis support organizations and other tools to prevent suicide, with a focus on its Live service.

There is one death by suicide every 40 seconds and over 800,000 people kill themselves every year, according to the World Health Organization. “Facebook is in a unique position — through friendships on the site — to help connect a person in distress with people who can support them,” the company said Wednesday.

The move appears aimed at preventing the live-streaming of suicides on the Live platform, which launched in April last year and allows people, public figures and pages to share live videos with friends and followers. In January, a 14-year-old girl in Miami live-streamed her suicide on the platform.

The company said that its suicide prevention tools for Facebook posts will now be integrated into Live, giving people watching a live video the option to reach out to the person directly and to report the video to the company.

A person sharing a live video will also see a set of resources on their screen and can choose to reach out to a friend, contact a help line or see tips. “We will also provide resources to the person reporting the live video to assist them in helping their friend,” Facebook said.

People will also have the option to connect to crisis support partners such as Crisis Text Line, the National Eating Disorder Association and the National Suicide Prevention Lifeline over Messenger. They will see the option to message with someone in real time directly from the organizations' page or through Facebook’s suicide prevention tools.

Facebook has also started testing pattern recognition technology to identify posts that are very likely to include thoughts of suicide. Even if no one on Facebook has reported such a post, the platform's Community Operations team will review it and, if appropriate, provide resources to the person who posted the content, the company said.

The new measures add to those the company has had in place for over 10 years, it said. It already has teams around the world working around the clock to review incoming reports and prioritize the most serious ones, such as those involving suicide.
