Fixing Social Media: Hard Questions & Tough Answers

Last week, Facebook’s Vice President of Public Policy and Communications published a blog post titled ‘Hard Questions’. It lists a series of introspective questions like ‘Is social media good for democracy?’, ‘How can we use data for everyone’s benefit, without undermining people’s trust?’ and ‘How should young internet users be introduced to new ways to express themselves in a safe environment?’, among others.

Putting fundamental questions about the platform and the medium up for public scrutiny is an unprecedented gesture. Some critics are calling it ‘too little, too late’. But if nothing else, it is a refreshing move away from the rote-sounding corporate mantra of ‘openness-transparency-connectivity-empowerment’ towards a more grounded and critical perspective.

Forward-thinking vision and the concept of disruption have driven the accelerated technological boom we are currently in the midst of. Disruption leaves wreckage behind: in jobs, in spaces, in culture. Even when these changes are acknowledged, they are written off, categorized mostly as collateral damage in the course of humanity’s inevitable evolution towards a technological utopia. But fairly recently, the debate has been slowly shifting to take into account the aftershocks, both material and ideological, of this disruption.

Facebook’s questioning of itself has not appeared in a vacuum. It is a service used by close to 2 billion people and rising; that is around 30% of the world’s population. By their own admission, they have grown unprecedentedly big in an unprecedentedly short period of time. But inevitably, along with the growth, Facebook’s timeline has been marked by cyclic spells of controversy followed by apology.

This includes several unfortunate incidents: the banning of the ‘Napalm Girl’ photo (a decision that was later reversed), the targeting of emotionally insecure children with ads (which they have since unconditionally apologized for), the conducting of psychological experiments on unsuspecting users (which they have since promised will never be repeated), the livestreaming of deaths and suicides, and the recent revelation of Facebook’s content moderation guidelines, which specify what content is acceptable and what is unacceptable on the platform.

Despite the fact that all of the above incidents have direct consequences for users, including causing them distress and anxiety, the platform refused to come clean until disclosure was inevitable or the information was leaked. There is an opaque black box at the center of Facebook’s critical processes and data relating to safety, protection and complaint redressal, and it keeps cropping up as a matter of grave concern.

One realizes that for an entity as big and dynamic as Facebook, it is not easy to strike a reasonable balance between safety, freedom of expression and commerce. But what comes across with each controversy is that safety is getting short shrift, and there may be several reasons why.

The moderation guidelines leak provides a rare window into this black box. And in examining it closely, we can hope to spotlight some of the larger issues at stake.

Into the Black Box

Why Were The Guidelines Hidden?

Consider this scenario: Suppose you find some content online that, according to you, is offensive or illegal. Let’s say it is a picture of non-sexual abuse of a child, which you believe hurts the dignity and the rights of the child concerned. You flag it and report it to Facebook. You have sent the complaint into the black box and have no idea how it is being processed. Later, you receive a notification saying that the content you reported meets ‘existing community guidelines’. A link leads you to the community guidelines page, which is not really of any help. If you are persistent you re-report, or you may give up. What of the child you saw on your timeline? What can you do? The thought keeps gnawing at you and frustrating you. You feel helpless.

The above scenario has been abstracted from our own experiences and from the experiences of many users shared with us via email, calls or at conferences. It is a scenario that crops up because none of us were aware that content depicting non-sexual child abuse would not be taken down unless it was shared with what the leaked moderation policies term ‘celebration’ or ‘sadism’. We spent all that time, losing our minds… for nothing.

Considering that Facebook puts the responsibility for finding and reporting offensive or criminal content on its users, it is a shame that the guidelines had to be leaked by whistle-blowers in order to be brought to light. They should already have been available in the open. Keeping these guidelines in the dark misinforms and disempowers users and takes away their ability to make the right decisions.

What Facebook Can Do: If you are indeed relying on users to report, you need to empower them to do so. Figure out a way to make critical processes and data concerning safety transparent. This will enable users to confidently access redressal in times of need, reducing the anxiety and trauma associated with coming across offensive or criminal content online.

Lack of Standard Definitions

Consider this: Non-sexual child abuse is primarily defined by the guidelines as a violent act committed on a child by an adult. In light of the many cases where children are bullied and harassed by their peers, this definition is limiting.

Despite existing and accepted terminology, the Facebook guidelines prefer to frame their own definitions as convenient. What may clearly be accepted as abuse in a country’s legal system may not meet Facebook’s definition of abuse. This application of a conveniently partial meaning to commonly used terminology creates a gap between the user’s expectation of a safe online experience and Facebook’s conception of the same.

What Facebook Can Do: Employ standard terminology in its guidelines and policies, as currently agreed upon and accepted across the world.

Problematic Responses to Problematic Content

Consider this: As per the guidelines, evidence of child abuse may be shared in the hope of identifying and rescuing the victim. Another policy permits videos of violent deaths on the grounds that they can create awareness.

The idea of strangers being allowed to share photos of someone’s extreme distress on social media is a problematic one. Even for organizations that work with child victims of abuse, one of the biggest areas of concern is maintaining the confidentiality of the child. This is done to safeguard the child from the trauma of repeated questioning and social taboo. It enables the child to move on and restores to the child the all-important agency to decide whether and when to disclose that they are a survivor.

Facebook’s policy of crowd-sourcing rescue can jeopardize the child’s long-term recovery and healing.

What Facebook Can Do: Work with experts in the field and frame policies that are victim-centric, i.e. policies that put the best interests of the victim first.

The Issue of Context

In the guidelines, the term ‘Non-Consensual Intimate Imagery’ is used to describe content that can be used for sextortion or revenge porn. While the policies around NCII do seem encouragingly stringent, there remain practical concerns as to how redressal will take place.

The word ‘intimate’ is broad and culturally non-specific. In certain situations and cultural settings, perfectly non-intimate photos can also be used to harass victims. In many cases, the level of detail and nuance required for a third party to judge a post may simply not be practical.

Similarly, as stated above, Facebook only removes representations of child abuse when they are shared with sadism or celebration. Sadism is defined as explicit enjoyment of the pain and humiliation a living being is feeling. So it is essentially the caption, and the moderator’s understanding of its tone and intent, that determines whether child abuse content is taken down. Unless the caption goes full Mogambo (cartoonishly evil), chances are the picture or video stays. That does not sound reassuring.

What Facebook Can Do: Context can be explained and resolved only when there is a human response at Facebook’s end. Automated messages with links to anodyne pages are in no way reassuring to distressed users. When one is in a crisis, the least one wants is the reassurance of something vaguely human at the other end that one can at least hope to negotiate with.

The Moderators

Facebook claims that every complaint is individually looked at by a human moderator, who may be working for Facebook or for a company to which moderation has been outsourced. One of the company’s responses to recent controversies was to highlight the fact that it is looking to add thousands more moderators to the system. It also promises that these new moderators will come from specific cultural contexts that will help them understand complaints better. This is essential, as reports indicate that the volume of complaints received from across the world is gigantic and that existing moderators currently have less than a minute to dedicate to each complaint.

However, if every moderator is trained under the current guidelines, very little change can be expected. For example, even if you are a moderator with a background in child rights, you still cannot take down a video of bullying or non-sexual child abuse, because the guidelines say so.

The excellent video above (#MustWatch), which chronicles a firm in India to which moderation is outsourced, shows how queasy a space this is, both ethically and legally. The moderators in training are shown watching ‘child pornography’, and pornography is clearly being transmitted. They seem blissfully unaware that, though they may be acting in good faith, they are engaging in a criminal act.

Also, ensuring the well-being of moderators who look at explicit content for every hour of their working day poses several challenges that need to be addressed.

What Facebook Can Do: Understand that moderators are only as good as the guidelines they are trained under. The commitment to adding more moderators is laudable, but without progress on the guidelines and other aspects, it means nothing.

In setting up moderation centers, they must ensure that there is no contravention of the law of the land.

Rigorous training needs to be imparted to the moderators. Their rights and well-being as front-line workers must be acknowledged and safeguarded.

Online Hierarchies and Power

Consider this: If you are well connected, everything from securing a gas connection to a college admission seems smooth, while the rest of the lumpen proletariat struggle to do the same. That’s pretty much how things work online as well.

People whom Facebook deems influential have extra protection. Most of them are rich and powerful in real life as well. So while social media may have disrupted communication, distribution and so on, it can also be a system that more or less reinforces existing power structures. What is problematic is its tendency to claim to do exclusively otherwise. The guidelines clearly indicate one set of rules for a group of ‘haves’ and another for a group of ‘have-nots’.

Also, standards of safety are non-uniform across the internet. This is understandable. But even within Facebook, the application of safety and protection guidelines is not standard. For example, denying the Holocaust is an offensive act only if you belong to a certain geographic region.

This raises an uncomfortable question: if a community is a minority in a region, will the guidelines simply not care about its particular sensibilities?

What Facebook Can Do: The guidelines do already identify and mention vulnerable communities. That is a great first step. The next step can be to frame minimum standards of care and protection for these groups.

Instead of platforms deciding which users to protect from which content, perhaps greater agency over what content they see can be given to users themselves. This would empower users and should result in fewer complaints about being offended by content they came across.

With ‘Hard Questions’ and the articulation and admission of its vulnerabilities, Facebook seems to be at the start of something new that has the potential to be a laudable initiative. Provided, of course, that there is committed follow-through.

The answers aren’t easy, but it all starts with the right questions. It is time for introspection.

It is time to air out the black box and let some light in.

 
