Apr 16, 2019

Facebook's Attempt To End The Spread Of Harmful Content Continues

With 2.3 billion active users, Facebook has grown to be one of the widest-reaching and most influential social media platforms available.

  • Facebook's social dominance continues to grow, and users increasingly treat it as a source of news, yet Facebook is not ready to accept this role, as it still wants to be seen as a friendly, non-threatening platform.
  • Facebook is now taking new measures to act against the spread of harmful content, as outlined below.

Facebook is one of the biggest platforms for human interaction. Although the likes of Instagram and YouTube are also seeing active users and engagement soar, they do not match the level of personal communication and connection that Facebook generates.

As a social media platform, Facebook excels because people trust information and opinions shared by their friends and family more than almost any other source. The same behaviour appears in consumers before they buy a product or service: research shows that 91% of 18-34 year-olds trust online reviews as much as personal recommendations. That trust between users carries considerable influence, and it is precisely this influence that Facebook is trying to manage.

Remove, Reduce, Inform

Facebook’s Head of News Feed Integrity states that some measures have already been put in place to manage the influence Facebook has over content distribution, including:

“...removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share. This strategy applies not only during critical times like elections, but year-round.”

Along with this, Facebook has indicated that it will use both technology and people to combat the increase in photo-, text-, and video-based misinformation shared across the platform.

Group Measures:

Over the past two years, Facebook has encouraged the use of groups, which allow conversations to shift from the public News Feed to more private spaces. As users become more aware of the permanency of content they post to the News Feed, they are increasingly moving conversations to parts of Facebook they believe are more private, such as groups or Messenger. However, pushing conversations into these private spaces makes it harder for Facebook to prevent the distribution of harmful content: the content is still shared and still influences users, but because it is less visible, it appears to be less of a problem. In response, Facebook has built detection methods that can find harmful content in groups before it is even reported, regardless of whether the group is secret, closed, or public.


As part of its ‘Safe Communities Initiative’, Facebook intends in the coming month to hold Group admins more accountable for Community Standards violations, which will inform decisions on whether a group should be taken down. When making that decision, Facebook will look at both the admins and the content violations within the group. It also means Facebook will penalise groups that contribute to the spread of ‘fake news’, whether misinformation or questionable links, as determined by third-party fact-checking partners, by downgrading the reach of their content even when it does not violate Facebook’s standards.

In addition, Facebook will be adding a new ‘Group Quality’ feature that will provide Group admins with a view of content that was flagged or removed, including an area for false news. Facebook hopes that by increasing admins’ awareness of ‘Community Standards’, this will filter through to the group communities they run so that more users are conscious of material that is not acceptable on the platform.

‘Click-Gap’

Like Instagram and other social media platforms, Facebook uses an algorithm to organise its News Feed. Because that algorithm rewards engagement, it encourages content creators to write titles and captions designed to spark debate and reactions, both positive and negative, so that their posts rank higher in the feed and reach more people, as the simplified sketch below illustrates.
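To make that incentive concrete, here is a minimal, hypothetical sketch of an engagement-weighted ranking. The post fields, weights, and scoring formula are assumptions chosen purely for illustration; they are not Facebook's actual ranking model.

```python
# Toy illustration of engagement-weighted feed ranking, not Facebook's
# real algorithm. The fields and weights below are made up for demonstration.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    reactions: int   # likes, loves, angry reactions, etc.
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares weighted more heavily than reactions, so posts
    # that provoke debate (positive or negative) rank higher.
    return post.reactions + 4 * post.comments + 8 * post.shares

feed = [
    Post("Measured policy analysis", reactions=120, comments=10, shares=5),
    Post("You won't BELIEVE what happened next!", reactions=80, comments=90, shares=60),
]

# Higher score first: the provocative post wins the ranking despite having
# fewer reactions, which is the incentive described above.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.title}")
```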

To discourage publishers from distorting their titles and captions for engagement, Facebook is adding a new News Feed signal called ‘Click-Gap.’

Facebook describes ‘Click-Gap’ as relying on:

“the web graph, a conceptual “map” of the internet in which domains with a lot of inbound and outbound links are at the centre of the graph and domains with fewer inbound and outbound links are at the edges. Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content."


‘Click-Gap’ has the potential to significantly reduce the distribution of harmful content because it is built into the Facebook algorithm, meaning it will not require additional human involvement once implemented. The approach mirrors the way Google uses backlinks and page authority to gauge the trust and reputation of a page, so Facebook hopes its equivalent signal will prove similarly effective.
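As a rough illustration of the idea behind ‘Click-Gap’ (and not Facebook's actual implementation), the sketch below compares a domain's share of outbound Facebook clicks with a crude proxy for its footprint in the web graph. Every field name, threshold, and formula here is an assumption made up for demonstration.

```python
# Illustrative sketch only: a toy "click-gap" style score, not Facebook's
# actual algorithm. The inputs and scoring formula are assumptions.

def click_gap_score(facebook_clicks: int,
                    total_facebook_clicks: int,
                    inbound_links: int,
                    outbound_links: int,
                    total_web_links: int) -> float:
    """Compare a domain's share of Facebook click traffic with its
    share of links in a simplified 'web graph'."""
    # Share of all outbound Facebook clicks that go to this domain.
    click_share = facebook_clicks / max(total_facebook_clicks, 1)

    # Crude proxy for the domain's place in the web graph: its share
    # of inbound + outbound links across the crawled web.
    graph_share = (inbound_links + outbound_links) / max(total_web_links, 1)

    # A large ratio means the domain gets far more Facebook traffic than
    # its web-graph footprint suggests, which the quote above describes
    # as a possible sign of low-quality content succeeding only on News Feed.
    return click_share / max(graph_share, 1e-9)


# Hypothetical usage: a domain with little web-graph presence but a lot
# of Facebook clicks gets a high score and could be down-ranked.
if __name__ == "__main__":
    score = click_gap_score(facebook_clicks=50_000,
                            total_facebook_clicks=10_000_000,
                            inbound_links=40,
                            outbound_links=60,
                            total_web_links=5_000_000)
    print(f"click-gap score: {score:.1f}")
```

A score well above 1 would flag a domain drawing far more Facebook traffic than its web-graph presence suggests, which is exactly the kind of mismatch Facebook's description points to.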

Next steps:

In related news, the DCMS and the Home Office have published an ‘Online Harms White Paper’ which intends to regulate social media, including a proposal to ensure websites accept more responsibility for their users’ online safety.

“Facebook now employs 30,000 moderators worldwide and has, to its credit, been open about how AI is unlikely to be the only solution for screening out harmful content.”

writes Omar Oakes, Global Technology Editor at Campaign.

The white paper does not, however, go into the logistics of the new regulations, such as how many moderators each platform would need. With 2.3 billion active users worldwide, are 30,000 moderators enough for Facebook?

Nevertheless, it at least highlights the need for humans to remain part of the process. That need was particularly evident in 2016, when Facebook removed a very well-known Vietnam War photo showing a naked nine-year-old girl fleeing a napalm attack.

Facebook has effectively taken over the role newspaper editors once played in deciding which photos and headlines were appropriate and informative. Yet this is a role Facebook is not entirely willing to assume, as it sees itself primarily as a platform where users share photos and updates with friends and family. It is not driven by the same journalistic principles as newspapers and, as a result, will automatically remove disturbing images that do not comply with its ‘community standards.’

The majority of these measures should not affect digital marketers; however, it is important to understand that these new tools could affect your engagement levels and potential reach on Facebook while they first come into effect.

On the whole, the impact of these changes should be positive, as the site's desire to maintain a friendly and non-threatening environment will take precedence and, hopefully, in time put an end to the distribution of harmful content and misinformation on Facebook.
