by D&E Staff

March 23, 2017

Live-streaming apps such as Facebook Live and Periscope let users (and your brand) interact with viewers in real time through a form of content we already know they prefer – video. Social platforms are investing heavily in live video so that anyone with a mobile phone, an internet connection and a social media account can broadcast content to anyone in the world.

As exciting as live-streaming is for companies and individuals sharing significant life moments and behind-the-scenes footage of major events, it is unfortunately becoming increasingly common for violent crimes, such as rape, torture, abuse or even murder, to be live-streamed on social media, sometimes by the perpetrators themselves. Earlier this week, footage of a teenage girl being sexually assaulted was streamed on Facebook Live; at least 40 people tuned in at one point, and nobody called the police. In the past year alone, at least 40 instances of sensitive, violent or criminal footage have been broadcast on live video. What responsibility do social platforms have to censor or remove such footage? And does removing offensive content violate user privacy or censor potentially newsworthy information?

Facebook takes a stand against videos thought to glorify violence, saying such subject matter violates its content guidelines. In practice, however, the platform largely relies on users to flag obscene content. More recently, the company announced it is in the “research stage” of using artificial intelligence to detect violence (and fake news) in Facebook Live videos. Periscope likewise relies on users to flag graphic and violent content. And both platforms allow violent content to remain on their sites if it is considered newsworthy.

As tech companies have begun to draw blowback for launching live video before working out all the potential kinks, they are taking a more proactive approach. Facebook, for example, is testing a procedure in which it automatically reviews publicly shared live broadcasts once they reach a certain number of views or go viral – even if no one has complained. And Periscope is working on a tool to automatically monitor live video for offensive or graphic content.
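To make the mechanics concrete, here is a minimal sketch in Python of what such a view-threshold trigger could look like. Everything in it is an assumption for illustration: the threshold values, the LiveBroadcast class and the maybe_enqueue_for_review helper are hypothetical, not Facebook's actual system.

```python
from dataclasses import dataclass, field
from collections import deque
import time

# Hypothetical thresholds -- platforms do not publish their real values.
VIEW_THRESHOLD = 1_000     # absolute viewer count that triggers review
VIRAL_GROWTH_RATE = 50.0   # new viewers per second over the sample window
WINDOW_SECONDS = 60        # sliding window used to estimate growth

@dataclass
class LiveBroadcast:
    stream_id: str
    samples: deque = field(default_factory=deque)  # (timestamp, viewer_count)
    queued_for_review: bool = False

    def record_viewers(self, count: int, now: float | None = None) -> None:
        """Record a viewer-count sample and drop samples outside the window."""
        now = now if now is not None else time.time()
        self.samples.append((now, count))
        while self.samples and now - self.samples[0][0] > WINDOW_SECONDS:
            self.samples.popleft()

    def growth_rate(self) -> float:
        """Estimate viewers gained per second across the sliding window."""
        if len(self.samples) < 2:
            return 0.0
        (t0, v0), (t1, v1) = self.samples[0], self.samples[-1]
        return (v1 - v0) / max(t1 - t0, 1e-9)

def maybe_enqueue_for_review(stream: LiveBroadcast, review_queue: list) -> None:
    """Queue a stream for human review once it crosses a view or growth
    threshold, independent of whether any user has complained."""
    if stream.queued_for_review or not stream.samples:
        return
    current_viewers = stream.samples[-1][1]
    if current_viewers >= VIEW_THRESHOLD or stream.growth_rate() >= VIRAL_GROWTH_RATE:
        stream.queued_for_review = True
        review_queue.append(stream.stream_id)
```

In this sketch, a periodic job would call record_viewers with fresh counts and then maybe_enqueue_for_review; a real system would presumably combine a trigger like this with user flags and classifier scores rather than rely on view counts alone.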

In a world where a growing number of users are “digitally native,” live-streaming is perhaps a logical progression for generations for whom sharing opinions and life moments online is a central part of communication, self-expression and personal identity – no matter how disturbing that content may be.

Do you think social platforms are doing enough to monitor live-streaming video content? How much control should app providers have over the content posted by users? Who should decide what’s acceptable? And is there a risk that monitoring tools might become too aggressive and take down innocent content? Feel free to share your comments below or tweet me.