
A new bill would force companies to check their algorithms for bias


steven36


US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias. The Algorithmic Accountability Act is sponsored by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), with a House equivalent sponsored by Rep. Yvette Clarke (D-NY). If passed, it would ask the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.

 


 

 

The Algorithmic Accountability Act is aimed at major companies with access to large amounts of information. It would apply to companies that make over $50 million per year, hold information on at least 1 million people or devices, or primarily act as data brokers that buy and sell consumer data.

 

These companies would have to evaluate a broad range of algorithms — including anything that affects consumers’ legal rights, attempts to predict and analyze their behavior, involves large amounts of sensitive data, or “systematically monitors a large, publicly accessible physical place.” That would theoretically cover a huge swath of the tech economy, and if a report turns up major risks of discrimination, privacy problems, or other issues, the company is supposed to address them in a timely manner.

 

The bill is being introduced just a few weeks after Facebook was sued by the Department of Housing and Urban Development, which alleges its ad targeting system unfairly limits who sees housing ads. The sponsors mention this lawsuit in a press release, as well as an alleged Amazon AI recruiting tool that discriminated against women.

 

And the bill seems designed to cover countless other controversial AI tools — as well as the training data that can produce biased outcomes in the first place. A facial recognition algorithm trained mostly on white subjects, for example, can misidentify people of other races. (Another group of senators introduced regulation specifically for facial recognition last month.)

 

In a statement, Wyden noted that “computers are increasingly involved in the most important decisions affecting Americans’ lives — whether or not someone can buy a home, get a job or even go to jail. But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.”

 

A couple of local governments have made their own attempts at regulating automated decision-making. The New York City Council became the first US legislature to pass an algorithmic transparency bill in 2017, and Washington state held hearings for a similar measure in February.

 

Source

 

 

 


This is definitely a step in the right direction.

After the privacy bill to counter monitoring of citizens by the big five was defeated, I certainly did not expect this development.


