
They’re Looking Out for the Internet. But Who’s Looking Out for Them?

Jacki Silbermann

Jacki Silbermann is a student at Harvard Law School and a member of the Labor and Employment Lab.

Ensuring that internet searches and social networking feeds remain free of graphic violence and explicit sexual content is no easy task. Companies like Facebook, Google and YouTube rely on thousands of content moderators in the U.S. and abroad to remove illegal or otherwise unacceptable content from their platforms every day. While much of the content that moderators handle is relatively benign (such as spam on ad platforms, intellectual property infringements and fake listings on Google Maps), plenty reaches a far more disturbing level, including child pornography, graphic sexual abuse and horrific acts of violence. For workers reviewing the more disturbing videos and images, content moderation can take a serious psychological toll, one that has led to anxiety, panic disorders, depression and post-traumatic stress disorder. Yet workplace protections and mechanisms for preventing such mental health hazards are sorely lacking.

Background

Content moderators are not your typical Silicon Valley tech workers. The vast majority are employed by contractors and work out of the contractors' own facilities, separate from Facebook, Google or YouTube employees. A typical content moderator in the U.S. earns only a few dollars above minimum wage, far less than the median salary at the tech giants requiring their services. Many are immigrants with limited job prospects in the U.S. who depend on content moderation for their basic livelihood.

Moderators have little to no say over the videos that show up on their screens. They simply sit and watch the stream of content that an algorithm drops into their queues, more than 100 items a day, deciding whether each one must be removed. While some workers receive a random collection of videos and images to review each day, others are assigned to specialize in one type of content, such as pornography or violent extremism.

Over the past few years, employees at YouTube's and Facebook's U.S. moderation sites have reported deteriorating mental health as a result of their constant exposure to disturbing content. Content moderators have collapsed at work, overwhelmed by the stress induced by videos they viewed on the job. They have been hospitalized for acute anxiety and depression, and they can be seen crying in the bathrooms and stairwells of content moderation sites. Many have had to leave their jobs because of the damage the work did to their mental health.

Workplace Mental Health Initiatives

The contractors and the tech companies that hire them have provided some resources to help content moderators cope with the psychological difficulties of their work. Facebook and YouTube content moderators are encouraged to seek assistance from on-site counselors (known as "wellness coaches," who are not licensed therapists). Facebook reportedly plans to implement an enhanced psychological wellness program for its employees and contractors, though the details of what that would entail remain vague. And Accenture, a contractor for Facebook, has prompted moderators to seek mental health services both within and beyond the company's wellness program to cope with the disturbing content they encounter on the job.

The Need for Prevention Mechanisms

It is certainly true that workers with mental illnesses and other mental health conditions should receive support from their employers. But the psychological effects of content moderation should not be addressed only after they have become a problem. As it does for other jobs and workplaces that pose health and safety risks, the Occupational Safety and Health Administration (OSHA) should enforce a safety and health regime that can prevent the mental health problems that arise from content moderation work. This would be a significant and much-needed improvement on industry attempts, even the best of them, to offer ex-post solutions to workers who have already been psychologically harmed by their work, sometimes irreparably.

Preventing content moderation hazards through OSHA standards is difficult given the present lack of standards that speak directly to this type of harm. OSHA's existing rules and guidance documents are largely focused on physical workplace hazards, and to date OSHA has not promulgated regulations that address the mental health hazards discussed above. While the Occupational Safety and Health Act's general duty clause (section 5(a)(1)) requires employers to maintain a workplace that is "free from recognized hazards that are causing or are likely to cause death or serious physical harm to [their] employees," this is an exceedingly broad obligation and does not necessarily encompass the specific psychological harm that content moderators face. And although the Act allows individual states to adopt workplace safety standards above the federal floor set by OSHA, none of the twenty-eight states and territories that have adopted OSHA-approved state plans has provisions for protecting against mental health hazards at work.

More recently, OSHA has touched on the importance of protecting mental health in the workplace, publishing a Critical Incident Stress Guide that outlines steps employers can take to mitigate the psychological and physical stresses workers might experience after witnessing a traumatic event at work. The guide, however, is not a regulation or enforceable standard and imposes no additional duty on employers. Moreover, its subject matter is limited, primarily addressing the psychological effects of witnessing a traumatic event in the physical workplace, such as a gunman in the office or emergency health professionals being called to the scene of a horrific accident.

Safety standards for content moderators might include, for example, limiting the amount of time per day a moderator is exposed to graphic video or images of violence or sexual content, or allowing moderators to opt out of viewing certain images and videos. There is talk at Facebook and Google of technological measures that could mitigate the psychological effects of viewing disturbing content, such as options to blur faces in videos, view videos in grayscale and mute the audio. These measures, along with industry standards drafted (though never implemented) by big tech companies, could also provide a useful benchmark for OSHA regulations.

Applicable OSHA standards are imperative, but they will not necessarily solve the problem on their own. Too few inspectors and insufficient resources weaken the enforcement apparatus set out in the Act. Additionally, the Act does not create a private right of action that would allow employees to bring injury claims arising from violations of OSHA standards. Absent concrete rules and guidance on the types of work-induced mental health hazards to which content moderators are susceptible, and given these enforcement and claim difficulties, OSHA at present offers such workers almost no recourse for addressing workplace mental health hazards before they become a problem.

Union-bargained protections may offer a good solution to the present gap in federal and state OSHA workplace safety standards for content moderators. Research has shown that unions promote workplace safety not only by encouraging enforcement of existing OSHA regulations, but also by bargaining or organizing for more extensive safety protections, implementing training programs, and establishing worker committees dedicated to improving workplace safety. If present OSHA regulations and the tech companies themselves cannot adequately protect content moderation workers, then workers' best hope may be their own ability to join together and demand the healthy work environment they deserve.
