Rep. Doris Matsui, D-California, left, and Sen. Ed Markey, D-Mass.
Bill Clark | CQ Roll Call | Getty Images
A new federal bill seeks to demystify how social media platforms determine which posts users view, without touching a law that has become a lightning rod in Congress.
The Online Platform Transparency and Algorithmic Justice Act of 2021, announced Thursday by Sen. Ed Markey, D-Mass., and Rep. Doris Matsui, D-Calif., seeks to expose and address social injustices that are compounded by algorithmic amplification online.
In this context, "algorithms" are the parts of software that sites like Facebook, Twitter and Google use to determine which content and ads to show each user.
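As a rough illustration of the kind of opaque ranking logic the bill targets, here is a minimal, entirely hypothetical sketch of an engagement-based feed ranker. The field names and weights are invented for illustration and do not reflect any platform's actual system.

```python
# Hypothetical sketch: a toy feed-ranking "algorithm" of the sort the bill
# would require platforms to explain. Weights and fields are invented.
def rank_posts(posts):
    """Order posts by a simple engagement score (likes + 2 * shares),
    the kind of relevance heuristic users normally never see."""
    def score(post):
        return post["likes"] + 2 * post["shares"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "a", "likes": 10, "shares": 1},   # score 12
    {"id": "b", "likes": 3,  "shares": 8},   # score 19
]
ranked = rank_posts(posts)  # post "b" outranks "a" despite fewer likes
```

Even in this toy version, the ranking outcome depends entirely on which signals are weighted and how — which is why the bill's sponsors argue users should be told what inputs such systems use.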
The bill would prohibit platforms from using algorithms that discriminate based on protected characteristics such as race and gender, empower the Federal Trade Commission to review platforms' algorithmic processes, and create a new interagency task force to investigate algorithmic discrimination.
Platforms would also have to explain to users how their algorithms work and what information they rely on.
“It is time to open the hood on Big Tech, enact strict bans on harmful algorithms, and prioritize justice for communities that have long been discriminated against as we work toward platform accountability,” Markey said in a statement.
However, an industry group backed by companies such as Amazon, Facebook, Google and Twitter warned that exposing the platforms’ processes could be risky.
"Nobody wants technology to exacerbate racial inequality or deprive people of opportunities," Chamber of Progress founder and CEO Adam Kovacevich said in a statement. "One approach would be to expand our existing civil rights and discrimination laws in housing, employment and credit. There is a danger that fully lifting the hood on technological algorithms could provide a roadmap for hackers, Russian trolls and conspiracy theorists."
Researchers and government agencies have accused the platforms of employing discriminatory algorithms in the past. In 2019, for example, the Department of Housing and Urban Development accused Facebook of violating housing discrimination laws with its targeted advertising. Shortly afterward, researchers from Northeastern University, the University of Southern California, and the nonprofit group Upturn found that Facebook's ad-serving algorithm could discriminate based on race and gender, even when that was not what advertisers intended.
Facebook said at the time that it opposes “discrimination in any form” and pointed to changes it made to its ad targeting tools to address some of the concerns.
Don’t touch Section 230
The new bill is a notable approach to tech reform in part because of what it doesn’t do: touch the much-debated legal shield that protects companies from liability for what users post online.
Section 230 of the Communications Decency Act is a law from the 1990s that says online platforms are not responsible for their users’ speech and empowers platforms to moderate their services essentially as they see fit. In recent years, both Democrats and Republicans have criticized the shield for being too broad.
But modifying Section 230 is no easy task. Democrats and Republicans disagree about what the law's problems are and how to solve them. Progressives advocate removing liability protection from platforms that fail to moderate certain types of content, fearing the proliferation of hate speech. Conservatives say the law should limit what platforms can moderate, claiming that platforms suppress posts expressing conservative views (the companies have denied this).
Many legal scholars have warned of the unintended harms that could come from curtailing Section 230. Platforms might, for example, be incentivized to restrict speech far more than lawmakers intended.
Progressive digital rights group Fight for the Future sees the new bill as a responsible way to address harm from big tech companies “without poking holes in Section 230,” according to a statement.
While it was introduced by two Democrats, the bill touches on a key principle put forward by Republicans earlier this year about how they seek to handle technology reform. In an April memo, Republican staff on the House Committee on Energy and Commerce urged an emphasis on transparency in content moderation practices. The Markey and Matsui bill would require online platforms to publish annual reports to the public on their content moderation practices.