How we can make AI less biased against disabled people

Disability bias is rife in trained AI models, according to recent research from Penn State. Here’s what we can do about it.

AI continues to pervade our work lives. According to recent research by the Society for Human Resource Management, one in four employers uses AI in human resources functions. Meanwhile, the technology is becoming an increasingly common presence in everything from education and healthcare to criminal justice and law.

Yet we largely aren’t addressing bias in any meaningful way, and for anyone with a disability, that can be a real problem.

Indeed, a Pennsylvania State University study published last year found that trained AI models exhibit significant disability bias. “Models that fail to account for the contextual nuances of disability-related language can lead to unfair censorship and harmful misrepresentations of a marginalized population,” the researchers warned, “exacerbating existing social inequalities.”

In practical terms, an automated résumé screener may deem candidates unsuitable for a position if they have unexplained gaps in their education or employment history, effectively discriminating against people with disabilities who may need time off for their health.
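
The mechanics are easy to picture. Below is a minimal, hypothetical sketch of the kind of rule such a screener might encode; the threshold, function names, and data are invented for illustration and don't reflect any real vendor's system. The point is that a hard cutoff on history gaps has no way to distinguish a health-related absence from anything else.

```python
from datetime import date

# Hypothetical gap rule of the kind a naive resume screener might apply:
# flag any candidate whose history contains a gap longer than a fixed
# threshold. Purely illustrative; not any real vendor's code.

MAX_GAP_MONTHS = 6  # arbitrary cutoff chosen by the system's designers


def months_between(end: date, start: date) -> int:
    """Whole months between the end of one entry and the start of the next."""
    return (start.year - end.year) * 12 + (start.month - end.month)


def has_unexplained_gap(history: list[tuple[date, date]]) -> bool:
    """history: (start, end) ranges of schooling or jobs, sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(history, history[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return True
    return False


# A candidate who took a year off for medical treatment is scored exactly
# like the "unreliable" candidate the designers had in mind; the rule cannot
# tell the difference, which is how disability bias gets baked in.
history = [(date(2016, 9, 1), date(2019, 5, 31)),   # first stretch of law school
           (date(2020, 6, 1), date(2023, 8, 31))]   # returned after a year off sick
print(has_unexplained_gap(history))  # True -> candidate screened out
```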

“People may be engaging with algorithmic systems and have no idea that that is what they’re interacting with,” says Ariana Aboulafia, Policy Counsel for Disability Rights in Technology Policy at the Center for Democracy and Technology (CDT). Aboulafia has multiple disabilities, including superior mesenteric artery syndrome, a rare condition whose symptoms can include severe malnutrition.

“When I was diagnosed with superior mesenteric artery syndrome, I took a year off of law school because I was very sick,” Aboulafia says. “Is it possible that I have applied to a job where a résumé screener screened out my résumé on the basis of having an unexplained year? That is absolutely possible.”

Sen. Ron Wyden of Oregon alluded to the risk of bias during a Senate Finance Committee hearing on the “promise and pitfalls” of AI in healthcare in early February. Wyden, who chairs the committee, noted that while the technology is improving efficiency in the healthcare system by helping doctors with tasks such as pre-populating clinical notes, “these big data systems are riddled with bias that discriminates against patients based on race, gender, sexual orientation, and disability.” Government programs like Medicare and Medicaid, for example, use AI to determine the level of care a patient receives, but the practice is leading to “worse patient outcomes,” he said.

In 2020, CDT released a report detailing several examples of these worse outcomes. It analyzed lawsuits filed over the previous decade concerning algorithms used to assess people’s eligibility for government benefits. In multiple cases, algorithms significantly cut home- and community-based services (HCBS), to the recipients’ detriment. In 2011, for example, Idaho began using an algorithm to set recipients’ budgets for HCBS under Medicaid; a court later found that the tool, which had been developed with a small and limited data set, was unconstitutional, as CDT recounts in its report. In 2017, a similar case arose in Arkansas, where the state’s Department of Human Services had introduced an algorithm that cut several Medicaid recipients’ HCBS care.

Some legislators have proposed measures to address these technological biases. Wyden promoted his Algorithmic Accountability Act during the hearing, a bill he said could increase transparency around AI systems and “empower consumers to make informed choices.” (The bill is currently awaiting review by the Committee on Commerce, Science, and Transportation.) And in late October, President Joe Biden issued an executive order on AI that explicitly mentioned disabled people and addressed broad issues such as safety, privacy, and civil rights.

Aboulafia says the executive order was a powerful first step toward making AI systems less ableist. “Inclusion of disability in these conversations about technology [and] recognition of how technology can impact disabled people” is key, she says. But there’s more to do.

Aboulafia believes that algorithmic auditing—assessing whether an AI system displays bias—could also be an effective measure.

But some experts disagree, saying algorithmic auditing, if done improperly or incompletely, could legitimize AI systems that are inherently ableist. In other words, it matters who performs the audit—the auditor must be truly independent—and what the audit is designed to assess. An auditor should be empowered to question all of the assumptions the system’s developers have made, not merely the algorithm’s efficacy as those developers define it.
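
One common building block of such an audit is a selection-rate comparison across groups. The sketch below is a rough, hypothetical illustration; the data, function names, and the 80% benchmark (the “four-fifths” rule of thumb used in U.S. employment-discrimination analysis) are all assumptions made for the example. It also shows the limit the critics point to: a single aggregate ratio says nothing about the assumptions baked into the system’s features.

```python
# Rough, illustrative sketch of one audit measurement: comparing how often a
# screening system advances candidates who disclosed a disability versus a
# reference group. Data and threshold are invented for the example.

def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of candidates in a group the system advanced."""
    return sum(outcomes) / len(outcomes)


def adverse_impact_ratio(audited: list[bool], reference: list[bool]) -> float:
    """Audited group's selection rate divided by the reference group's."""
    return selection_rate(audited) / selection_rate(reference)


# Hypothetical outcomes (True = advanced to interview).
disclosed_disability = [True, False, False, False, True, False, False, False]
reference_group = [True, True, False, True, True, False, True, False]

ratio = adverse_impact_ratio(disclosed_disability, reference_group)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.40, well under the 0.80 benchmark

# Even a "passing" ratio would not answer the deeper question raised above:
# whether the features the system relies on (such as unexplained gaps)
# encode disability in the first place.
```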

Elham Tabassi, a scientist at the National Institute of Standards and Technology and the Associate Director for Emerging Technologies in the Information Technology Laboratory, suggests working with the communities affected to study the impact of AI systems on real people, as opposed to solely analyzing these algorithms in a laboratory. “We have to make sure that the evaluation is holistic, it has the right test data, it has the right metrics, the test environment,” she says. “So, like everything else, it becomes . . . about the quality of the work and how good a job has been done.”
