SURVEY: Here’s how UltraViolet members feel about their online experiences


The internet is not a safe place for Black women, Indigenous women, other women of color, and LGBTQ people, who face an onslaught of racist and misogynist attacks.

We asked 600 UltraViolet members about their experiences online. Here’s what we found.

60% experienced harassment online
96% say social platforms are not doing enough to stop white supremacy
92% say they’re not doing enough to protect women and girls
96% say they’re not doing enough to stop disinformation
62% have had relationships with family or friends damaged because of social media
When asked how safe they feel having their kids or grandkids use social media, 49% gave a rating of 1 on a scale of 1 to 5; the average rating across all respondents was 1.4.


UltraViolet joins justice groups in demanding TikTok address hate speech


Date: August 30, 2021

Dear Shou Zi Chew and Vanessa Pappas:

In a relatively short amount of time, your platform has grown exponentially and amassed a base of more than 1 billion users worldwide. It is because of this unprecedented reach, particularly with teens and pre-teens, that we are writing with grave concerns about the problems of hate speech and violence on TikTok uncovered by a new report from the Institute for Strategic Dialogue (“Hatescape: An In-Depth Analysis of Extremism and Hate Speech on TikTok”). The report clearly demonstrates that TikTok has an extremely serious content moderation problem: your terms of service are not being sufficiently enforced, and they are not preventing extremist and even terrorist-related footage from being easily discovered and spread to the masses. We urge you to act immediately to remedy the dangers that ISD has identified.

Most concerning in ISD’s report is the overwhelming breadth of hate spreading rapidly across TikTok on a daily basis. Whether it is content that praises and celebrates the actions of terrorists and extremists, denies the existence of historically documented atrocities such as genocides, or directs hate and violence at Muslims, Jews, Asians, Black people, refugees, women, and members of the LGBTQ+ community, ISD’s report demonstrates that this is not a problem that is narrow in scope or limited to isolated incidents. That is why we are urging you to take substantive action now, before it is too late.

ISD has suggested several policy changes based on their findings. These range from overarching calls for improved content moderation, specifically with a call for better enforcement of existing policies as they relate to removing terrorist content from TikTok, to addressing language gaps that exist in current content moderation programs and developing a better understanding of how bad actors exploit loopholes to spread hate on TikTok. That includes evolving moderation programs that are currently limited in scope as they relate to hashtag bans. These, along with broader calls for transparency and data access that will bolster accountability, are all of critical importance for TikTok to begin curbing the dangerous content circulating on its platform. We urge you to quickly act on each of these recommendations.

We’ve already seen countless examples of online hate translating to offline violence, both in the United States and around the globe, often with deadly consequences. It’s because of this dangerous reality that TikTok has a responsibility to act to address gaps in your content moderation policies and boost transparency more widely for the public and researchers. For some populations around the world, this is a matter of life and death.

As the ISD report clearly demonstrates, there is a sizable gap between TikTok’s content moderation policies and the platform’s actual enforcement. Policies are only as good as their actual application and enforcement, and to that end, TikTok is failing on too many fronts. That failure has led to bad actors actively leveraging your platform to spread hate and violence with the potential for grave consequences.

What ISD has identified is a platform-wide problem that requires comprehensive solutions, implemented immediately. The potential for grave consequences is far too high, especially for younger users of your platform, who are particularly vulnerable to being targeted by hate, influenced by hate speech, and recruited by hate groups. Failure to act is not an option. We hope your platform fully grasps the depth of the problems ISD has identified and acts accordingly in the public’s best interest, and we appeal to the conscience of every individual in a position of leadership at TikTok to take meaningful action now to make the platform safe for all of its users.

Sincerely,

Accountable Tech
ADL
American Jewish Congress
Broward for Progress
Decode Democracy
Friends of the Earth
GLAAD
Global Project Against Hate and Extremism
Indivisible Northern Nevada
Media Matters for America
MediaJustice
Muslim Advocates
NARAL Pro-Choice America
National Equality Action Team (NEAT)
National Hispanic Media Coalition
ParentsTogether
Southern Poverty Law Center
Stop Online Violence Against Women
UltraViolet
Women’s March

New York Airports Reject Ads Reading: “WELCOME TO NEW YORK. WHERE OUR GOVERNOR, ANDREW CUOMO, IS A SEXUAL ABUSER”



New York’s largest airports – John F. Kennedy International Airport and LaGuardia Airport – have both rejected new ads from UltraViolet aimed at calling attention to Governor Cuomo’s documented abuse of women and highlighting the need for his immediate removal from office. UltraViolet initially called for Governor Cuomo to resign or be removed from office in March 2021.


Facebook, Instagram, and Twitter: Deplatform COVID-19 disinformation disseminators


Sign-on letter demanding Facebook, Instagram, and Twitter deplatform COVID-19 disinformation disseminators

July 19, 2021

Dear Mr. Zuckerberg, Ms. Sandberg, and Mr. Dorsey,

As the United States reopens and “normal” life resumes, COVID-19 infection rates continue to increase in areas where vaccination rates are low.1 This is happening as coronavirus variants emerge and spread across the country, and as disinformation about the pandemic, the vaccine, and the variants spreads across social media platforms via bad actors. People are quite literally dying because disinformation about the COVID-19 pandemic, the vaccine, and public health leaders is spreading.2

According to the Center for Countering Digital Hate and the Anti-Vax Watch alliance, COVID-19 disinformation is spreading across social media platforms, including Facebook, Instagram, and Twitter. The primary perpetrators are the “Disinfo Dozen:” 12 content creators who share lies about COVID-19 and the vaccine, specifically targeting women, girls, and Black, Indigenous, and people of color.3 The accounts are owned by Joseph Mercola, Robert F. Kennedy Jr., Christiane Northrup, Erin Elizabeth, Sayer Ji, Charlene & Ty Bollinger, Sherri Tenpenny, Ben Tapper, Kelly Brogan, Rizza Islam, Rashid Buttar, and Kevin Jenkins.4 As of July 15, 62 of the 97 accounts controlled by these individuals are accessible on social media platforms.5

The spread of medically inaccurate disinformation about COVID-19 and the vaccine is sowing further suspicion, mistrust, and concern among already vulnerable populations. Lies about the vaccine causing infertility, or even death, are exacerbating the divide between the vaccinated and the unvaccinated.6 While such fears about health and the ongoing pandemic are valid, perpetuating false information with malicious intent is extremely dangerous.

The COVID-19 pandemic has caused profound loss across the country and the world. In the U.S. alone, COVID-19 has killed more than 600,000 people.7 Among the most severely impacted are low-income women, BIPOC, and LGBTQ+ communities. The pandemic has further exposed societal inequities such as systemic racism and lack of access to public healthcare and accurate medical information.8,9

Facebook, Instagram, and Twitter have allowed disinformation and quack science to thrive, letting the Disinfo Dozen share inaccurate content that goes viral because your platforms’ biased algorithms prioritize clicks over truth. Anti-vaccine “activists,” including the Disinfo Dozen, reach more than 62 million followers on Facebook, Instagram, and Twitter, bringing in upwards of $1.1 billion in annual revenue for the social media giants, while the anti-vaccine industry brings in at least $36 million in annual revenue of its own.10,11,12

While Facebook, Instagram, and Twitter have committed to curbing the spread of disinformation on their platforms, they have stopped short of deplatforming the Disinfo Dozen, who are responsible for more than two-thirds of anti-vaccine content. In a sample of 812,000 Facebook and Twitter posts and shares, 65% of anti-vaccine content was directly linked to these anti-vaxxers, their affiliated organizations, or their co-conspirators.13

Disinformation campaigns directed at women, girls, and Black and Brown people are instilling fear while increasing preventable deaths and illness among already vulnerable communities.14,15,16 This places a particular burden on parents, who are forced to sift through thousands of posts, tweets, and stories to find factual information. The refusal of Facebook, Instagram, and Twitter to speedily remove false content and deplatform bad actors is fueling misogynist, racist, transphobic, and homophobic targeting of women, BIPOC, and LGBTQ communities online, exploiting people’s deep-seated fears, and clogging social media feeds so that accurate information is harder to find.17

Twelve state attorneys general, Senators Amy Klobuchar and Ben Ray Lujan, and members of the House Committee on Energy and Commerce have sent letters to social media platform CEOs, including Mr. Zuckerberg and Mr. Dorsey, demanding the removal of disinformation disseminators from their respective sites, citing reports by the Center for Countering Digital Hate.18,19,20 They called on Facebook and Twitter to comply with the companies’ own community standards and remove fraudulent information–including from the Facebook-owned platforms Instagram and WhatsApp.

Disinformation currently thrives on the internet, and it has real-world consequences. Disinformation on social media platforms led to the insurrectionist attack on our democracy at the U.S. Capitol on January 6, 2021, and it is now putting lives in danger by discouraging people from getting the COVID-19 vaccine. Disinformation is not a partisan issue, and public health should not be a political issue at all.

Facebook, Instagram, and Twitter have served as sites for the spread of hateful speech and dangerous disinformation. Now, as the country reopens, it is critical that Facebook, Instagram, and Twitter take immediate action to protect their users by deplatforming the Disinfo Dozen and taking meaningful steps to stop the spread of COVID-19 disinformation.

Signed,

UltraViolet
Center for Countering Digital Hate
#ThisIsOurShot
ACRONYM
AI for the People
Alianza for Progress
Center for Civic Policy
Creators of The Social Dilemma
Decode Democracy
Dr. Jay Bhatt, LLC
Earthseed
Florida Immigrant Coalition
Free Press
Friends of the Earth
GLAAD
Kairos
Media Matters for America
MediaJustice
NARAL Pro-Choice America
ParentsTogether
Peninsula 360 Press
Planned Parenthood Federation of America
ProgressNow New Mexico
Reality Team
Reproaction
SumOfUs
The Sparrow Project
Western States Center
Women’s March

Sources:

1. Coronavirus infections dropping where people are vaccinated, rising where they are not, Post analysis finds, The Washington Post, June 14, 2021

2. Ibid.

3. Pandemic profiteers: The business of anti-vaxx, Center for Countering Digital Hate, June 1, 2021

4. Disinformation dozen: The sequel, Center for Countering Digital Hate, April 28, 2021

5. Center for Countering Digital Hate, July 15, 2021

6. Facebook Built the Perfect Platform for Covid Vaccine Conspiracies, Bloomberg Businessweek, April 1, 2021

7. Coronavirus deaths: U.S. map shows number of fatalities compared to confirmed cases, NBC News, March 23, 2020 (Updated June 21, 2021)

8. Racial Disparities in COVID-19, Science in the News, October 24, 2020

9. Fauci says pandemic exposed ‘undeniable effects of racism’, ABC News, May 16, 2021

10. Pandemic profiteers: The business of anti-vaxx, Center for Countering Digital Hate, June 1, 2021

11. Ibid.

12. Ibid.

13. Disinformation dozen: The sequel, Center for Countering Digital Hate, April 28, 2021

14. 45 years ago, the nation learned about the Tuskegee Syphilis Study. Its repercussions are still felt today., USA Today, July 25, 2017

15. Genetic privacy: We must learn from the story of Henrietta Lacks, New Scientist, August 1, 2020

16. ‘We are going to have to save ourselves,’ Black community fights deadly COVID vaccine conspiracy theories, USA Today, March 10, 2021

17. Instagram Suggested Posts To Users. It Served Up COVID-19 Falsehoods, Study Finds, NPR, March 9, 2021

18. Re: Vaccine Disinformation, Office of the Attorney General, Connecticut, March 24, 2021

19. Letter to Jack Dorsey and Mark Zuckerberg, Senators Amy Klobuchar and Ben Ray Lujan, April 16, 2021

20. Letter to Jack Dorsey, Committee on Energy and Commerce, May 27, 2021


Putting the onus on women is a PR stunt–the platforms are the problem


July 1, 2021

Re: Online gender-based violence–the platforms are the problem

Dear Mark Zuckerberg, Sheryl Sandberg, Sundar Pichai, Susan Wojcicki, Jack Dorsey, and Vanessa Pappas,

Today’s announcement by Facebook, Instagram, Twitter, TikTok, and Google in response to the Generation Equality Forum’s efforts to surface the harms that women experience on your platforms is little more than a feel-good publicity stunt. It is clearly meant to distract from the very epidemic of online gender-based violence that the forum is attempting to highlight. The truth is that your platforms are the problem; these false solutions amount to victim-blaming, telling women to “cover up” online to prevent their own harassment.

The Web Foundation’s tech policy recommendations demonstrate an understanding of the depth, scale, and urgency of the problem. Unfortunately, the proposed solutions fail to address its full scope and fall short of what is truly needed to address gender-based violence on your platforms — as well as offline violence that is planned on or inspired by content on your platforms. They shift the burden of preventing online violence and misogynist attacks onto the very people most likely to suffer from them. This is merely a cosmetic change that pushes hate and disinformation into the shadows while ignoring the root causes of this violence: your platforms and the way they encourage the spread of hate and disinformation and facilitate harassment.

Existing reporting systems consistently fail to deter bad actors. Hate speech and harassment restrictions are often unenforced and rely on a “notice and take down” model that fails to address the systemic nature of online abuse and puts the onus on victims to stop the harassment they are experiencing. Furthermore, a model that relies on taking down individual pieces of content rather than deplatforming will be ineffective, given the speed at which content spreads on your platforms. Many of our organizations have flagged problematic and violent content directly to your staff without ever seeing action to address it.

By failing to address the root algorithmic causes of these attacks, your platforms are demonstrating that their commitments to combating online violence against women are performative at best. That is extremely dangerous when online misogyny is proven to fuel gender-based violence, mass shootings, and hate crimes–including attacks in Christchurch, New Zealand, and Paris, France; at the U.S. Capitol in Washington, D.C.; and in Atlanta, Ga., and Charlottesville, Va.

The power to determine what content people see, whose voices are amplified or excluded, and what will and will not remain on your platforms gives you the ability to influence the worldview of billions of users. If your platforms aren’t safe places for women, BIPOC, LGBTQ folks, people with disabilities, religious minorities, and other marginalized groups to share their voices without risking harassment and violence, then it is clear that your platforms are the problem. We need solutions that center the most impacted and most attacked: women, BIPOC, and LGBTQ people.

Black women, Indigenous women, women of color, and transgender women especially face an onslaught of racist and misogynist attacks, while you profit off of these harms. Pew Research Center found that three-quarters of Black and Latinx people and two-thirds of women say that online harassment is a major problem.

Your platforms–Facebook, Instagram, Twitter, TikTok, and Google–directly benefit from gendered disinformation and the engagement associated with coordinated racist and misogynist attacks. Hate, conspiracies, and disinformation keep people on your platforms and put billions into your pockets, as you profit off of the same hate and lies used by extremist groups like the Proud Boys and the Three Percenters to recruit and radicalize.

And this isn’t a coincidence–it is intentionally built into the business model of your product. The algorithms and policies on all of the major platforms have been created, in large part, by privileged men who recreate their own biases in the algorithms.

Social media companies have a responsibility to stop the spread of hate and violence on their platforms and protect users without placing the onus on victims. But today’s announcement does little to address the ways in which your platforms spread hate and disinformation and encourage harassment.

If your platforms were serious about combating online violence and hate toward women, you would update and actually enforce your hate speech rules and prioritize the safety of women, BIPOC, and LGBTQ people.

Signed,
UltraViolet
#ShePersisted
#VOTEPROCHOICE
A/B Partners
Abortion Access Front
Center for Countering Digital Hate
Color Of Change
Courage California
Decode Democracy
Equality Labs
Faithful America
Free Press
Friends of the Earth
GLAAD
Global Project Against Hate and Extremism
Higher Heights For America
Indivisible
Institute for Research on Male Supremacism
Jewish Women International (JWI)
Kairos
Lake Oconee Community Church
Make the Road Nevada
Media Matters for America
MediaJustice
NARAL Pro-Choice North Carolina
National Equality Action Team (NEAT)
ParentsTogether
ProgressNow New Mexico
Religious Coalition for Reproductive Choice
Restaurant Opportunities Centers United
Stop Online Violence Against Women Inc
SumOfUs
Supermajority
Tech Transparency Project
The League
The Sparrow Project
Women’s March


UltraViolet Condemns Debate Topic “Race and Violence in our Cities”


FOR IMMEDIATE RELEASE: Wed. September 23, 2020
CONTACT: Anya Silverman-Stoloff | anya@unebndablemedia.com

UltraViolet Condemns Debate Topic “Race and Violence in our Cities”

Says This Framework Perpetuates Racist Narrative in the Midst of Police Violence, Needs to be Changed Immediately

Yesterday, Fox News’ Chris Wallace, moderator for the first presidential debate on Tuesday, September 29, publicized the list of debate topics, which included “race and violence in our cities.”

In reaction, Bridget Todd, Communications Director for UltraViolet, a leading national women’s advocacy group, explained:

“You know the pervasiveness of white supremacy runs deep if in 2020, at a time when the majority of Americans support the demands of the Black Lives Matter movement, the Commission on Presidential Debates can’t even put out a list of topics without advancing anti-Black messaging and dog whistles that play right into Trump’s racist narrative.

“This false framework, not surprising from a Fox News host, perpetuates racist right-wing disinformation, equates Blackness with violence, and ignores the realities of police violence and white supremacist terrorism.

“This is unacceptable. We cannot let anti-Blackness be the framework for our national debates.

“The Commission on Presidential Debates should never have approved this frame and must apologize and take immediate action to change it.

“This highlights how important it is to have the voices of women, Black people, and people of color involved in our national conversations. In response to demands by gender equity and justice organizations including UltraViolet, TV networks and the DNC agreed to have at least one woman of color moderate each Democratic primary debate. The Commission on Presidential Debates needs to make that same commitment.”

Last month, UltraViolet, in conjunction with ACRONYM, Color Of Change PAC, Disinfo Defense League, EMILY’s List WOMEN VOTE!, NARAL Pro-Choice America, Planned Parenthood Votes, SumOfUs, Women’s March, Strategic Victory Fund, GQR Digital, and #ShePersisted, launched “Reporting in an Era of Disinformation: Fairness Guide for Covering Women and People of Color in Politics,” a new guide for reporting on the 2020 general election.

The guide makes specific recommendations designed to help journalists and platforms identify and avoid unintentional sexist and racist bias or disinformation when interviewing, writing, or moderating content about race and gender in politics.

SEE THE GUIDE HERE: https://weareultraviolet.org/fairness-guide-2020/

# # #
