Putting the onus on women is a PR stunt–the platforms are the problem

July 1, 2021

Re: Online gender-based violence–the platforms are the problem

Dear Mark Zuckerberg, Sheryl Sandberg, Sundar Pichai, Susan Wojcicki, Jack Dorsey, and Vanessa Pappas,

Today’s announcement by Facebook, Instagram, Twitter, TikTok, and Google in response to the Generation Equality Forum’s efforts to surface the harms that women experience on your platforms is little more than a feel-good publicity stunt. It is clearly meant to distract from the very epidemic of online gender-based violence that the forum is attempting to highlight. The truth is that your platforms are the problem; these false solutions amount to victim-blaming, telling women to “cover up” online to prevent their own harassment.

The recommendations from the Web Foundation’s tech policy work demonstrate an understanding of the depth, scale, and urgency of the problem. Unfortunately, the proposed solutions fail to address its full scope and fall short of what is truly needed to address gender-based violence on your platforms — as well as offline violence that is planned on or inspired by content on your platforms. They shift the burden of preventing online violence and misogynist attacks to the very people most likely to suffer from them. This is merely a cosmetic change that pushes hate and disinformation into the shadows–while ignoring the root causes of this violence: your platforms and the way that they encourage the spread of hate and disinformation and facilitate harassment.

Existing reporting systems consistently fail to deter bad actors. Hate speech and harassment restrictions are often unenforced and rely on a “notice and take down” model that fails to address the systemic nature of online abuse and puts the onus on the victim to stop the harassment they’re experiencing. Furthermore, a model that relies on taking down individual pieces of content rather than deplatforming will be ineffective, given the speed at which content spreads on your platforms. Many of our organizations have direct experience with flagging problematic and violent content directly to your staff, without any action ever being taken to address it.

By failing to address the root algorithmic causes of these attacks, you are demonstrating that your commitments to combating online violence against women are performative at best. That is extremely dangerous when online misogyny is proven to fuel gender-based violence, mass shootings, and hate crimes–including attacks in Christchurch, New Zealand, and Paris, France; at the U.S. Capitol in Washington, D.C.; and in Atlanta, Ga., and Charlottesville, Va.

The power to determine what content people see, whose voices are amplified or excluded, and what will and will not remain on your platforms gives you the ability to influence the worldview of billions of users. If your platforms aren’t safe places for women, BIPOC, LGBTQ folks, people with disabilities, religious minorities, and other marginalized groups to share their voices without risking harassment and violence, then it is clear that your platforms are the problem. We need solutions that center the most impacted and most attacked: women, BIPOC, and LGBTQ people.

Black women, Indigenous women, other women of color, and transgender women especially face an onslaught of racist and misogynist attacks, while you profit off of these harms. Pew Research Center found that three-quarters of Black and Latinx people and two-thirds of women say that online harassment is a major problem.

Your platforms–Facebook, Instagram, Twitter, TikTok, and Google–directly benefit from gendered disinformation and the engagement associated with coordinated racist and misogynist attacks. Hate, conspiracies, and disinformation keep people on your platforms and put billions into your pockets, as you profit off of the same hate and lies used by extremist groups like the Proud Boys and the Three Percenters to recruit and radicalize.

And this isn’t a coincidence–it is intentionally built into the business model of your product. The algorithms and policies on all of the major platforms have been created, in large part, by privileged men who recreate their own biases in the algorithms.

As social media companies, you have a responsibility to stop the spread of hate and violence on your platforms and to protect users without placing the onus on the victims. But today’s announcement does little to address the ways in which your platforms spread hate and disinformation and encourage harassment.

If your platforms were serious about combating online violence and hate toward women, you would update and actually enforce your hate speech rules and prioritize the safety of women, BIPOC, and LGBTQ people.

A/B Partners
Abortion Access Front
Center for Countering Digital Hate
Color Of Change
Courage California
Decode Democracy
Equality Labs
Faithful America
Free Press
Friends of the Earth
Global Project Against Hate and Extremism
Higher Heights For America
Institute for Research on Male Supremacism
Jewish Women International (JWI)
Lake Oconee Community Church
Make the Road Nevada
Media Matters for America
NARAL Pro-Choice North Carolina
National Equality Action Team (NEAT)
ProgressNow New Mexico
Religious Coalition for Reproductive Choice
Restaurant Opportunities Centers United
Stop Online Violence Against Women Inc
Tech Transparency Project
The League
The Sparrow Project
Women’s March