Algorithmic And AI Evaluation Instruments — A New Frontier In Disability Discrimination

The use of software algorithms to support organizational decision-making, and their possible negative impact on minorities, will become an increasingly important issue as we embrace our AI future.

These critical issues were brought into even sharper focus earlier this month with the release of a new report by the Center for Democracy and Technology entitled “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”.

A special panel discussion at the Sight Tech Global conference last week looked at other important areas in which people with disabilities are affected by algorithmic decisions, such as the administration of social benefits, education and the criminal justice system.

The key messages emerging from both the panel discussion and the report amount to a unanimous warning.

Disability rights run the risk of being undermined if they become entangled in society’s drive to be more efficient by automating processes that previously required careful human reflection.

This is dangerous for disabled people as there is an inevitable tension between the way algorithmic tools work and the lived experience of many people with disabilities.

Algorithms are, by their nature, built on large amounts of data that model the normative, standardized behavior of majority groups.

The lived experience of disabled people, by contrast, sits at the edges of “big data”. Because disabilities are highly individual, and because of prevailing socio-economic factors, it remains difficult to capture the experiences of disabled people through modeling at the population level.
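
To make this concrete, here is a minimal, hypothetical sketch (the numbers and features are invented, not drawn from any real vendor’s model) of what happens when a screening score is built around how closely a candidate matches the statistical norm of past data: a capable candidate whose profile simply differs from the majority is pushed to the bottom of the ranking.

```python
# Hypothetical illustration only: a "fit" score defined as closeness to the
# statistical norm of historical data. All numbers and features are invented.
import numpy as np

rng = np.random.default_rng(0)

# Simulated history: 1,000 past candidates, 3 behavioral features (e.g.
# response speed, gaze duration, speech pace). The majority cluster dominates;
# disabled candidates appear only at the edges of the distribution.
majority = rng.normal(loc=[1.0, 1.0, 1.0], scale=0.1, size=(980, 3))
outliers = rng.normal(loc=[0.4, 1.6, 0.7], scale=0.1, size=(20, 3))
history = np.vstack([majority, outliers])

mean, std = history.mean(axis=0), history.std(axis=0)

def typicality_score(candidate):
    """Higher means closer to the population norm (negated z-score distance)."""
    z = (candidate - mean) / std
    return -np.linalg.norm(z)

# A candidate who is highly capable but, say, uses a screen reader and so shows
# slower response times and a different gaze pattern scores far below one who
# simply matches the majority profile.
typical = np.array([1.0, 1.0, 1.0])
atypical = np.array([0.4, 1.6, 0.7])
print(typicality_score(typical))   # close to 0: ranked highly
print(typicality_score(atypical))  # strongly negative: screened out
```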

Jutta Treviranus, director of the Inclusive Design Research Center, contributed to a panel discussion at Sight Tech Global entitled “AI, Fairness, and Bias: What technologists and advocates must do to make sure AI helps people with disabilities rather than harming them.”

“Artificial intelligence amplifies, automates and accelerates everything that happened before,” said Treviranus at the virtual conference.

“It uses data from the past to optimize what was optimal in the past. The terrible flaw with artificial intelligence is that it doesn’t deal very well with diversity or the complexities of the unexpected,” she continued.

“Disability is a perfect challenge for artificial intelligence because when you live with a disability your entire life is much more complex, much more involved and your experiences are always diverse.”

Algorithm-driven hiring tools

The use of algorithmic assessment tools in recruitment is a particularly delicate problem for the disabled community. It is estimated that the employment rate for people with disabilities in the US is around 37%, compared to 79% for the general population.

Algorithmic hiring tools can include a variety of exercises and components. These can include asking candidates to record videos so that facial and vocal cues can be assessed, resume-screening software that identifies red flags such as long gaps between jobs, and gamified tests that assess reaction speed and learning style.
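
As a hedged illustration of just one of those components, the sketch below shows how a simple rule-based resume screen that treats long employment gaps as red flags might behave; the threshold and dates are invented, but the effect on candidates whose gaps stem from disability-related leave follows directly from the rule itself.

```python
# Invented rule for illustration, not any vendor's actual logic: a resume
# screen that rejects candidates with a single long employment gap gives no
# opportunity to explain a disability-related leave or treatment period.
from datetime import date

def employment_gaps_in_months(jobs):
    """jobs: list of (start_date, end_date) tuples, sorted by start date."""
    gaps = []
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        gaps.append((next_start.year - prev_end.year) * 12
                    + (next_start.month - prev_end.month))
    return gaps

def passes_screen(jobs, max_gap_months=6):
    # One gap above the threshold rejects the resume before any human sees it.
    return all(gap <= max_gap_months for gap in employment_gaps_in_months(jobs))

candidate = [(date(2015, 1, 1), date(2018, 3, 1)),
             (date(2019, 6, 1), date(2021, 1, 1))]   # a 15-month gap
print(passes_screen(candidate))  # False: screened out automatically
```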

Algorithmic software is also marketed to identify less tangible, but potentially desirable, traits in candidates such as optimism, enthusiasm, personal stability, sociability, and assertiveness.

Of course, the most immediate problem that comes to mind when considering how disabled candidates interact with these platforms is their outright inaccessibility.

It is entirely fair to wonder how a candidate with a visual impairment can access a gamified test built around graphics and images, how a candidate with motor disabilities can move a mouse to answer multiple-choice questions, or how a person on the autism spectrum can respond to an exercise in reading facial expressions from static photos.

In fact, the Americans with Disabilities Act specifically forbids screening out candidates with disabilities through hiring processes that are inaccessible or that do not measure attributes directly related to the job in question.

Employers may themselves believe that they are helping disabled candidates by eliminating traditional human bias and outsourcing evaluation to a seemingly “neutral” AI.

However, this ignores the fact that the tools were most likely developed primarily by non-disabled white men.

In addition, selection criteria are often modeled on the established positive characteristics of a company’s currently successful employees.

When that workforce lacks diversity, the same lack of diversity is simply reflected in the algorithmic assessment tool.
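
A minimal sketch of that feedback loop, using invented data: if the “successful employee” label in the training set happens to co-occur with a proxy attribute correlated with disability, a standard classifier learns to score down candidates who differ only on that proxy, even though the proxy says nothing about the job.

```python
# Invented example of modeling hiring criteria on a homogeneous workforce.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data drawn from the current workforce: the "high performer" label
# happens to track feature 2, a proxy attribute correlated with disability
# (for example, employment gaps or typing speed).
X_train = rng.normal(size=(500, 3))
proxy = X_train[:, 2]
y_train = (proxy > -0.2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A new candidate identical to a "successful" profile except on the proxy
# feature is scored down, even though the proxy is irrelevant to the job.
similar = np.array([[0.5, 0.5, 0.5]])
differs_on_proxy = np.array([[0.5, 0.5, -1.5]])
print(model.predict_proba(similar)[0, 1])          # high "hire" probability
print(model.predict_proba(differs_on_proxy)[0, 1]) # much lower
```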

By becoming overly dependent on these tools without understanding their pitfalls, employers run the risk of sleepwalking into discriminatory practices on an industrial scale.

To address this point specifically, the report’s authors note: “Ultimately, the individualized analysis to which candidates are legally entitled under the ADA may be profoundly at odds with the mass recruitment approach found in many algorithmic tools.”

“Employers need to think seriously not only about the legal risks they may be exposed to by using such a tool, but also about the ethical, moral and reputational risk that the use of poorly designed recruitment tools can exacerbate exclusion in the workforce and in society as a whole.”

During the Sight Tech Global panel discussion, Lydia X. Z. Brown, a policy advisor for the Center for Democracy and Technology’s Privacy and Data Project, was asked whether algorithmic assessment tools really represent a distinctly modern form of disability discrimination.

“Algorithmic discrimination highlights existing ableism, sharpens and accelerates it, and simply shows different ways in which existing ableism can manifest itself,” replied Brown.

She later continued, “When we talk about ableism in this way, it helps us understand that algorithmic discrimination does not create anything new, but builds on ableism and other forms of oppression that already exist throughout society.”

However, it is the scale and speed at which automation can promote and embed discrimination that must be of paramount concern.

Building a more inclusive AI future

The CDT report makes some recommendations to create more accessible recruitment practices.

The most important step for organizations is to first develop an understanding of the inherent limitations of these tools for assessing individuals with diverse and complex disabilities.

Once this reality check has taken place at the executive level, companies can proactively put policies in place to mitigate the issues.

This can start with a deep dive into what these tests actually measure. Are positive but vague characteristics such as “optimism” and “high self-esteem”, as inferred from a snapshot test, really essential for the advertised position?

By understanding and properly exercising their legal responsibilities, employers should seek to educate and inform all applicants of the specific details of the algorithmic tests.

Only once these details have been provided can candidates make an informed decision about accessibility.

For candidates who proceed with the test, organizations should be vigorous in collecting data on accessibility issues.

For candidates who fear that an algorithm will unfairly weed them out, a range of alternative test models should be provided with no implicit stigma.

Ultimately, it should be up to software vendors to keep accessibility a priority in the initial design process.

This can be further supported by stricter regulation in this area. However, the most useful action vendors could take right now is to work with disabled people on design and to take their feedback into account.

The simple truth is that AI is not just the future. It is already here, and its presence is extending exponentially into every facet of human existence.

The destination may already be set, but there is still time to change the journey: to use best practices to take more direct routes to inclusion, rather than learning from mistakes that risk leaving people behind.
