For many of us, the internet is our public square, marketplace, employment agency, bank, travel agency, library and theater.
That’s why the way tech companies collect, use and secure our personal information — their most valuable commodity — has become a national priority. These companies have used our data to enable and sometimes even participate in discrimination against people of color, women, members of the LGBTQ community, religious minorities, people with disabilities, immigrants and other marginalized communities.
Generations of activists marched, sat in, organized and voted to outlaw discrimination in our brick-and-mortar economy. Now is the time to outlaw discrimination in the data economy.
We must ensure that powerful interests don’t use our data in ways that violate our rights and silence our voices. We must have control over how our personal information is used, and prohibit its use to build systems that oppress, discriminate, disenfranchise and exacerbate segregation.
That’s why Free Press Action and the Lawyers’ Committee for Civil Rights Under Law have drafted this legislation calling on Congress to protect civil rights and privacy online. We believe that privacy rights are civil rights. A new bargain must be struck between ordinary people and the powerful companies that act as gatekeepers to participation in 21st-century life.
That means giving people real rights and control over their personal information and the means to assert those rights. It means ensuring that companies are transparent about the collection and use of our information. It means preventing our information from being used to discriminate against people or disenfranchise voters.
And it means protecting nondiscriminatory access to places of digital commerce for everyone, regardless of their identities and demographics — just as we have protected such access to lunch counters, buses, schools, shops, libraries and theaters since the Civil Rights Movement desegregated them.
Everything we do online generates data and every bit of that data can be tracked, no matter how innocuous it may appear in isolation.
Data feeds powerful algorithms to deliver personalized ads and other services. There are many beneficial and harmless uses of these mechanisms, but algorithms that profile users and target content to them can also facilitate age, racial and sex discrimination in employment, housing, lending, e-commerce and voting.
Here are just a few examples.
In employment and housing:
A ProPublica investigation found that employers like Uber were advertising jobs exclusively to men via Facebook in likely violation of civil-rights laws.
A study by the nonprofit Upturn showed that predictive-hiring algorithms “reveal and reproduce patterns of inequity,” especially when based on past hiring decisions and past employment evaluations.
The Department of Housing and Urban Development is currently investigating Facebook for enabling discriminatory advertisements that allow advertisers to exclude housing applicants based on protected characteristics like race, gender and sexuality — in likely violation of the Fair Housing Act.
In retail and lending:
A UC Berkeley study found that online mortgage lenders were systematically charging Black and Latinx borrowers more for loans. This exploitative behavior earned the lenders “11 percent to 17 percent higher profits on purchase loans to minorities.”
Companies like Home Depot, Rosetta Stone and Staples have charged people higher prices for the same products in different locations, changing their online prices based on where the buyer lives.
In voting:
The Senate Intelligence Committee found that foreign actors used commercial advertising tools to suppress the African American vote during the 2016 presidential election.
The Brennan Center has reported that this behavior continued into the 2018 midterm election with voter-suppression campaigns on Twitter. “Voter suppression,” the Brennan Center noted, “has gone digital.”
In public accommodations:
According to Amnesty International, platforms that are generally open to the public like Twitter are “toxic” places for women. Those platforms are failing in their responsibilities to investigate abuse and violence in a transparent manner.
Data-driven and algorithmic bias also exacerbates racism by perpetuating harmful stereotypes about people of color. Google search results for Black teenagers pull up mug shots while search results for White teenagers offer youthful, innocent images. Bias and discrimination in algorithms have been documented again and again and again, and they are arguably building a “new infrastructure of racism.”
However, we have the tools to address these inequities.
Beginning slowly in the wake of the Civil War and then with deliberate speed during and after the Civil Rights Movement, laws were passed to prevent discrimination in public accommodations, education, employment, voting rights, housing, lending and insurance.
These laws, notably the Civil Rights Act of 1964, prevent discrimination on the basis of a person’s race, sex, religion or national origin. The Voting Rights Act of 1965 prevents discrimination in the exercise of the right to vote; and the Fair Housing Act, part of the Civil Rights Act of 1968, prohibits discrimination in housing because of race, sex, religion and other protected categories.
These laws have helped the United States make limited but important strides in creating a more fair, just and equitable society. However, they were built and written for a pre-digital age.
For example, Title II of the Civil Rights Act of 1964 ended formal segregation by barring discrimination in public accommodations. But the courts haven’t consistently applied this law to online businesses. The Voting Rights Act prohibits voter suppression via violence or intimidation, but it doesn’t prohibit voter suppression by deception — the primary way of disenfranchising voters on the internet.
And no law requires companies to be sufficiently transparent about how they use our personal information. Without that transparency, it’s almost impossible to figure out which specific practices are causing unlawful discrimination.
We believe the best strategy is an extension of these time-tested anti-discrimination and equal-opportunity principles to the processing of our personal information.
Brick-and-mortar businesses have respected these rights for more than 50 years, and thrived as more people gained opportunities to enter the market. Internet companies’ interest in “moving fast and breaking things” cannot be allowed to jeopardize the gains of the Civil Rights Movement. Online businesses and services, like their physical cousins, must also respect the rights of marginalized and vulnerable communities to protection against discrimination based on their personal information and characteristics.
Ordinary people must have the ability to assert these rights too. Marginalized communities have often had to fight for their rights directly in court, without the help of a federal or state government agency interested in protecting them. That’s why most civil-rights laws, and our model privacy bill, contain a private right of action that would allow people to sue and vindicate their rights.
On uses of personal information
Our model approach to privacy also centers individual dignity, and our right to use online commerce and services without companies taking advantage of us.
We believe that people should be able to make an understandable bargain with internet companies when they hand over their personal information for a specific service. That means apps should use your data to provide the service you signed up for, but shouldn’t use it to surreptitiously track you across the web. Information you hand over for one reason, like providing a phone number for security purposes, shouldn’t be used to help deliver advertisements.
You give your information to a company for a reason: Unless you tell them otherwise, the company should use your information only for that reason. And unfair or deceptive practices should be prohibited outright, such as requiring users to waive their privacy or other rights just to obtain a service when their data isn’t needed to deliver the promised goods.
On enforcement and preemption
The United States has no real privacy watchdog that can issue rules for how our personal information ought to be used. That situation is untenable.
Without clear rules, ordinary people won’t know their rights and businesses will take advantage accordingly. And businesses also need to know what is and isn’t allowed if they are to invest in new technologies.
Congress must give the Federal Trade Commission the power to conduct rulemakings on data usage, and should give the agency adequate resources to protect the public too. We cannot anticipate companies’ future uses of data, and we should enable the FTC to respond to future violations.
State laws, state consumer-protection commissions, and state attorneys general also have a role to play in enforcing our rights. A federal law shouldn’t automatically preempt the work states are doing to build their own consumer-protection or privacy regimes and enforce their residents’ rights. State attorneys general are often out front addressing real-world problems long before the federal government takes action.
Preemption is a major danger to civil-rights protections nationwide. Housing, employment and other forms of discrimination almost always involve the use of personal information. There are many excellent state civil-rights laws — and some are better than federal law.
Many state consumer-protection laws are used to protect marginalized communities. A federal data-privacy law that broadly preempts and weakens these state laws would jeopardize civil rights.
On transparency and individual rights to control your data
We also believe people should have rights to access, correct, delete or download their own personal information and take it with them when they leave an online service. Making data portable by law would let people free themselves from a corporate walled garden and easily use other services.
People need to know what kinds of information companies and data brokers are collecting about them. Companies need to disclose not just what information they collect, but where they get the information; who shares data with them, and with whom they share data; how they analyze data to profile you; how they use your information; how they make decisions about what content, goods or services to offer you; and how they secure your data.
Companies need to conduct routine audits for bias and privacy risks. And all of this information needs to be conveyed in two different ways: in an easy-to-understand format for users, and in an exhaustively detailed format for regulators and privacy advocates.
The government also has a role to play in increasing transparency. Federal agencies that protect the public with specialized expertise — such as the Consumer Financial Protection Bureau, Department of Education, Department of Labor and Department of Veterans Affairs, among others — should study how personal information is used in their fields, identify disparities and risks for discrimination, and make public reports to Congress on a regular basis.
As Congress debates online privacy in this session and in the years to come, we hope that our draft bill will serve as a model for how both to protect the individual right to privacy and to ensure that the information companies collect is never used to discriminate. The online economy holds great promise in its convenience, both for commerce and communication. We also need the tools to make sure those conveniences liberate rather than oppress.
We’ve addressed these challenges before. We can build on the generations of civil-rights advocates that came before us. And we can continue to move toward justice.
Gaurav Laroia is a policy counsel at Free Press Action and David Brody is a counsel and senior fellow for privacy and technology at the Lawyers’ Committee for Civil Rights Under Law.