AI Falsely Accused Thousands Of Fraud?

Civil rights attorney Jennifer L. Lord represents clients in wrongful termination cases. In 2013, she noticed a common story emerging:

“We were experiencing this rush of people calling us and saying they were told they committed fraud,” Lord said. “We spoke to some of our colleagues who also practice civil rights and employment law and everyone was experiencing this.”

Lord and her team then discovered that in 2013 the Michigan Unemployment Insurance Agency had purchased MiDAS, an algorithmic decision-making system, and that when the agency did so, it also laid off its fraud detection unit.

“The computer was left to operate by itself,” Lord said. “The Michigan auditor general issued two different reports a year later and found the computer was wrong 93 percent of the time.”

The government accused about 40,000 people of fraud and seized close to $100 million by garnishing their wages or seizing tax refunds. This led Lord and her team to file the class action lawsuit Bauserman v. Unemployment Insurance Agency (subscription required), and the Michigan Supreme Court recently ruled the case may proceed.

According to Lord, it took some digging just to learn that the decision had been made by an algorithm. Recognizing algorithmic, as opposed to human, decision-making has become an increasingly important skill, one of Yuval Noah Harari's 21 Lessons for the 21st Century.

This also raises the question of whether machines can and should provide due process. In this case, fraud requires proving intent, which involves the subtle and inferential determination of what people were thinking under the circumstances. Are algorithms currently up to that task? How soon before they are?

It also raises the question of how algorithms should be used to make law enforcement decisions more generally. With facial recognition and surveillance data increasingly available, using algorithms to make decisions becomes more attractive. Just ask Chinese authorities.

Finally, this reminds me of the situation in Baltimore, where a cash-strapped municipality found itself overwhelmed by technological change. For Baltimore, it was the inability to fight back against malware. Here, the state may have been tempted by what seemed like a good technological substitute for an expensive government function.

For now at least, significant government decisions still need to be made by humans. As algorithms are incorporated into government decisions, the process must be disclosed and made as transparent as possible. And critically, there must be a way to appeal the decision to a human.