Testimony of Heather Tinsley-Fix

Chair Burrows and distinguished Commissioners, on behalf of our 38 million members and all older Americans nationwide, thank you for the opportunity to speak to you today regarding the intersection of AI-enabled employment decisions and the potential for age discrimination. I am honored to be here. My name is Heather Tinsley-Fix, and I am Senior Advisor, Employer Engagement at AARP. AARP believes that any type of discrimination in the workplace is unacceptable. Too often, discussions of discrimination leave out age, even though ageism remains a widespread problem.

My remarks today will focus on (a) ways in which the current use of AI in hiring and other workforce decisions can affect older workers and (b) what employers and the EEOC can do to mitigate the risks of unintended age discrimination.

Current Use of AI in Hiring and Other HR Technologies

Advances in technology over the past two decades have drastically changed the way companies recruit, hire, and manage talent. The prevalence and reach of platforms designed to connect job seekers to the right jobs mean that companies can no longer manually process the flood of resumes they may receive for any one job opening. In addition, competition for skilled workers, coupled with historically low unemployment rates, has intensified the demand for automated solutions that help organizations find, hire, train, and promote the best candidates for the job. Furthermore, the tantalizing promise of outsourcing the analysis and selection of job candidates to a bloodless algorithm that will curb or even eliminate human bias is difficult to resist.

Before I dive into the ways in which AI and automation can discourage or discriminate against older candidates, I want to make two points. The first is that, across the AI-enabled hiring process, the inputs used to define and then train the algorithms build iteratively on one another toward the ultimate goal of predicting which candidate should be hired. This makes untangling the source of bias extremely challenging, because the algorithm not only spots patterns based on what it has been told to look for, it also learns from the decisions that human actors introduce into the process. Throughout the creation and implementation of such systems, human definitions, decisions, and inputs mingle with the data stream to the extent that the “A” in “AI” is more an augmentation of existing human intelligence than an artificial replacement of it. The second is that not all companies use AI across all aspects of the hiring process – some may simply use an applicant tracking system that scans resumes, while others might leverage matching and ranking functionality, chatbots, or online games – which makes analysis of what works and what doesn’t challenging.

In terms of age bias and discrimination, the potential pitfalls associated with the use of AI in hiring and workforce-management platforms are, at root, the same for older candidates as for other protected classes: namely, the quality and relevance of the data used to train algorithms, and the normative judgments baked into the process about what “good” looks like. However, the way those pitfalls affect older workers can look a little different or come from unexpected places. Here are some examples:

  • Type and amount of data collected – to the extent that algorithms scrape and use data from social media posts and activity, professional digital profiles, internet browsing history, mobile device use, and the like to power their predictive rankings, older adults may be left out of the consideration set, either because their digital footprints lack those types of data or because fewer older candidates are included when “ideal candidate” profiles are built. Furthermore, any data point that explicitly reveals or serves as a proxy for age – such as date of birth, years of experience, or date of graduation – can be picked up by the algorithm as part of a pattern it associates with undesirable candidates, prompting it to lower those candidates’ rankings or screen them out entirely.
  • Cultural norms – a host of unconscious assumptions baked into our culture associate age with slowing down, cognitive decline, an inability to learn new things, and resistance to change. These norms inform the way job descriptions are worded, target variables are defined, interviews are conducted, and assessments are designed and scored. For example, if reaction time is a variable on which candidates are scored, older workers may be at a disadvantage: research shows that older brains exhibit slower processing speeds but greater contextual knowledge.[1] If skills assessments or the analysis of interview footage are optimized toward younger brains by the data scientists working on them, older workers could receive arbitrarily lower scores. Additionally, older workers can be excluded at the start of the hiring process because they never see the job ads to begin with. In 2017, ProPublica revealed that Facebook was allowing organizations to age-target their employment ads, in some cases excluding workers over the age of 35, and in most cases excluding workers over 50. Exclusion can also happen through the wording of job descriptions: phrases like “recent college grad” and “digital native” are explicitly ageist, but even subtle references such as “fast-paced,” “high-energy,” and “super fun” have been shown to deter older workers from applying.[2]
  • The feedback loop of decisions made by recruiters or hiring managers – AARP research on the experiences of older workers shows that age discrimination remains stubbornly with us; our most recent survey reveals that 64% of workers aged 40+ have faced age discrimination at work. To the extent that algorithms learn from preferences and decisions that went against older candidates during the recruiting process, they will spot the patterns in the data that indicate an older candidate and subsequently advance those candidates less frequently, and less far, through the automated process. For example, if an older candidate makes it past the resume screening but gets confused by or interacts poorly with the chatbot and is ultimately rejected by the recruiter, that data could teach the algorithm that candidates with similar graduation dates who hesitate when chatting with a bot should be ranked lower. This applies to performance data as well: research shows that performance reviews tend to level out or even decline with age despite weak to no correlation between increased age and a drop in productivity. To the extent that performance evaluations, or indeed a wider host of employment-related decisions such as who is selected for training, innovative projects, or high-performing teams, are influenced by ageism, and that data is fed into ranking algorithms as proof points, older workers could be disadvantaged. (A minimal sketch of this feedback dynamic appears after this list.)
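
To make the proxy and feedback-loop concerns above concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it is drawn from any real vendor’s system; the feature names and synthetic data are assumptions for the sake of the example. It shows how a screening model trained on historical decisions that disfavored older candidates learns a negative weight on a seemingly neutral proxy for age (years since graduation), even when that proxy carries no job-relevant information.

```python
# Illustrative only: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

years_since_grad = rng.integers(0, 40, size=n)  # proxy for age
skill_score = rng.normal(50, 10, size=n)        # the only job-relevant signal

# Simulate biased historical outcomes: recruiters advanced candidates
# based on skill, but also systematically disfavored older candidates.
logit = 0.15 * (skill_score - 50) - 0.08 * years_since_grad
advanced = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a screening model on those historical decisions.
X = np.column_stack([skill_score, years_since_grad])
model = LogisticRegression().fit(X, advanced)

# The model reproduces the bias: a negative weight on the age proxy.
print(dict(zip(["skill_score", "years_since_grad"], model.coef_[0])))
```

Running the sketch prints a clearly negative coefficient on years_since_grad: the model has encoded the historical bias as a “pattern,” which is precisely the dynamic described above.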

What Employers and the EEOC Can Do to Mitigate the Risks of AI-Enabled Age Discrimination

Employers use AI-enabled hiring technologies and platforms to shorten the time it takes to fill open positions, to find the best match between available job seekers and available jobs, and ideally to continue the quest to remove as much human bias from the process as possible. The task at hand is not to convince them not to use such technologies, but to provide them with the best information and guidance to make informed decisions, and to shore up that awareness with regulatory guardrails.

In the realm of practical guidance, the non-profit organization Upturn, which seeks to advance equity and justice in the design, governance, and use of technology, has a comprehensive set of recommendations that serves as a template and starting point for using such technologies wisely.[3] In addition, there are many steps employers can take to specifically address the risk of unintended age discrimination and bias. They are as follows:

  1. Stop asking for age-related data, such as dates of birth or graduation, in applications unless there is a proven business reason to do so. If employers must know a candidate’s age, they should not limit the ages a candidate can report, for example by capping the years listed in a drop-down menu. Alternatively, platforms could simply verify that candidates are at least 18 years old if that is a business requirement.
  2. Pay close attention to the words used in job descriptions, and don’t cap the years of experience required. Replacing “2 – 5 years’ experience” with “at least 2 years’ experience” signals that candidates of all ages are welcome to apply. AARP has a guide to age-inclusive job posting language, which can be found at aarp.org/employers.
  3. Don’t age-target employment ads on platforms that allow such targeting, even if that includes filters that approximate age such as job seniority or years of experience.
  4. Look for vendors who work with certified industrial/organizational (I/O) psychologists, who are trained in the development and evaluation of tests, assessments, and other selection procedures. In particular, any use of non-employment-related data should be vigorously scrutinized for its potential to rely on correlation rather than causation. The Society for Industrial and Organizational Psychology (SIOP) has recently published guidelines for evaluating AI-enabled selection technologies, which can be used when evaluating vendors.[4]
  5. Request (or conduct) regular and independent audits of algorithmic performance to see whether adverse impact is occurring, and in what part of the hiring funnel; a minimal sketch of one such check follows this list.
  6. Include age as an element of diversity, equity, and inclusion initiatives. Driving awareness of the value of age diversity at work will help shift a culture of unconscious ageism.
  7. And finally, empower recruiters to challenge hiring managers who exhibit conscious or unconscious preferences for candidates based on age. There is a strong business case for the inclusion of older workers as part of an age-diverse workforce. Visit aarp.org/employers to learn more.
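
As one concrete illustration of recommendation 5, a common first-pass heuristic for adverse impact is the four-fifths rule from the EEOC’s Uniform Guidelines: if a group’s selection rate falls below 80% of the most-favored group’s rate, adverse impact is generally suspected. The sketch below, in Python, is a simplified, hypothetical example; the age bands and counts are invented, and a real audit would examine each stage of the hiring funnel and apply appropriate statistical tests.

```python
# Illustrative four-fifths-rule check by age band; counts are invented.
from typing import Dict, Tuple

def four_fifths_check(counts: Dict[str, Tuple[int, int]]) -> None:
    """counts maps age band -> (applicants, selected) at one funnel stage."""
    rates = {band: selected / applicants
             for band, (applicants, selected) in counts.items()}
    best = max(rates.values())  # selection rate of the most-favored group
    for band, rate in rates.items():
        ratio = rate / best
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{band:>6}: selection rate {rate:.1%}, "
              f"impact ratio {ratio:.2f} ({flag})")

four_fifths_check({"<40": (1000, 220), "40-54": (600, 105), "55+": (400, 48)})
```

In this invented example, the 40–54 and 55+ bands both fall below the 0.8 threshold, which would prompt a closer look at the screening stage that produced those numbers.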

On the legislative front, AARP supports federal and state legislative initiatives to ban age-related questions during the application process, which have the effect of screening out and deterring older applicants. Such information is simply irrelevant to a candidate’s qualifications and skills. Connecticut and Delaware have both enacted such bans, and AARP is working with legislators in New York and Oregon to do the same.

At the federal level, AARP supports the Protect Older Job Applicants Act (POJA), which would clarify that job applicants may bring disparate impact claims under the federal Age Discrimination in Employment Act (ADEA). Two appellate cases, Villarreal v. R.J. Reynolds Tobacco Co. (11th Cir. 2016) and Kleber v. CareFusion Corp. (7th Cir. 2019), interpreted the law to mean that only current employees, not applicants, may bring disparate impact claims under Section 4(a)(2) of the ADEA. POJA closes this inadvertent gap to ensure that the legal rights of job applicants are protected as well.

AARP continues to advocate for passage of the bipartisan Protecting Older Workers Against Discrimination Act (POWADA), which would overturn Gross v. FBL Financial Services, Inc. and amend the ADEA, Title VII’s retaliation provision, the Americans with Disabilities Act, and the Rehabilitation Act of 1973 to clarify that the same standards of proof apply to claims under all of these laws.

Again, thank you for providing AARP the opportunity to testify today. I look forward to answering any questions.

[1] https://news.brown.edu/articles/2014/11/age

[2] https://www.nber.org/papers/w30287

[3] https://www.upturn.org/work/help-wanted/

[4] https://www.siop.org/Portals/84/docs/SIOP%20Statement%20on%20the%20Use%20of%20Artificial%20Intelligence.pdf?ver=mSGVRY-z_wR5iIuE2NWQPQ%3d%3d