
Recruiters Beware: Algorithms Tend to Learn Discrimination

Are you using social media or other online platforms for recruiting new people to your ad agency or marketing firm? Maybe you should try something else…

Discrimination Is in the Code

Algorithms incorporating machine learning can look at the kinds of people who respond to an ad, then target more people just like that first group. That sounds great in theory. However, the tech-industry non-profit Upturn, which studies how discriminatory code becomes embedded in artificial intelligence, tested Facebook ads by setting the exact same targeting parameters for several job categories. Facebook’s algorithms took those non-discriminatory settings and altered them over time, so that ads for certain jobs disproportionately reached narrow groups.
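
To make that feedback loop concrete, here is a minimal sketch in Python (with entirely hypothetical groups and response rates, not Facebook’s actual system) of how an optimizer that re-weights delivery toward whoever responded last round can drift far from an advertiser’s neutral settings:

```python
# Minimal sketch (hypothetical, not Facebook's system): "optimize toward people
# who already responded" narrows an audience over time, even when the
# advertiser's own targeting settings are neutral.
import random

random.seed(0)

GROUPS = ["group_a", "group_b"]

def response_rate(group):
    # Hypothetical skew: group_a happens to click slightly more often.
    return 0.06 if group == "group_a" else 0.05

def run_campaign(rounds=10, impressions_per_round=10_000):
    # Start with the neutral 50/50 delivery the advertiser asked for.
    delivery = {g: 0.5 for g in GROUPS}
    for _ in range(rounds):
        clicks = {}
        for g in GROUPS:
            shown = int(impressions_per_round * delivery[g])
            clicks[g] = sum(random.random() < response_rate(g) for _ in range(shown))
        total = sum(clicks.values()) or 1
        # The optimizer re-weights next round's delivery toward whoever clicked.
        delivery = {g: clicks[g] / total for g in GROUPS}
    return delivery

print(run_campaign())  # delivery drifts well past 50/50 toward group_a
```

A tiny difference in early response rates compounds each round, which is exactly why “neutral” settings are no guarantee of neutral delivery.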

Facebook has been challenged for years on how its systems amplify discrimination, has made lots of empty promises through many “listening sessions”… and has done absolutely nothing. It took the exposure and bad publicity of the Cambridge Analytica scandal, plus the threat of government regulation, to force Facebook to admit its algorithms were promoting the opposite of diversity and enabling the spread of misinformation and hate-group influence. It’s hardly surprising that the Department of Housing and Urban Development (HUD) recently charged Facebook with violating the Fair Housing Act; its ad-serving algorithms were found to be delivering housing ads based on race, gender, age, familial status and other “micro-targeting” characteristics protected under the statute.

Algorithms Learn from Human Behavior

Facebook isn’t alone in having algorithms that discriminate. Amazon used its own AI-based recruiting tool until it was discovered that the tool selectively eliminated female job candidates by zeroing in on implicitly gendered words, i.e., words more likely to be used by men. Because the original data fed into the program came from resumes already on file (and the tech industry is notoriously dominated by men), the program learned to screen out female candidates. Garbage in, garbage out, as early computer programmers used to say. Amazon tried reprogramming, but the algorithm developed other problems, recommending unqualified candidates for all kinds of positions, so the company finally shut down the screening engine.
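
The “garbage in, garbage out” dynamic is easy to reproduce. Below is a minimal Python sketch (with made-up toy resumes, not Amazon’s actual tool) showing how scoring words by their frequency among past hires bakes the historical skew straight into the model:

```python
# Minimal sketch (hypothetical data, not Amazon's tool): a model that scores
# resume words by how often they appeared among past hires inherits whatever
# skew the historical hiring data contains.
from collections import Counter

# Toy historical data: past hires skew heavily male, so words like "women's"
# barely appear among the "hired" examples.
hired = [
    "captain men's chess club java c++",
    "men's rugby team lead python",
    "java backend systems c++",
]
rejected = [
    "women's chess club captain java c++",
    "women's coding society python backend",
]

def word_scores(hired, rejected):
    hired_counts = Counter(w for text in hired for w in text.split())
    rejected_counts = Counter(w for text in rejected for w in text.split())
    vocab = set(hired_counts) | set(rejected_counts)
    # Score = share of each word's occurrences that came from hired resumes.
    return {w: hired_counts[w] / (hired_counts[w] + rejected_counts[w]) for w in vocab}

scores = word_scores(hired, rejected)
print(scores["women's"], scores["men's"])  # 0.0 vs. 1.0 -- the skew is now "learned"
```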

Attempts at legislation aimed at forcing tech companies to pay greater attention to inherently discriminatory coding may not go far enough… or even make sense, given that machine-learning algorithms sometimes take detours humans can’t predict. And the problem of starting with faulty data remains unaddressed. The American Civil Liberties Union (ACLU) Racial Justice Program has expressed concern about law enforcement and the judicial system developing AI-based tools for sentencing and other actions that could be built on data embedded with historic, systemic inequities. San Francisco recently banned all governmental use of facial recognition technology; facial recognition programs employed for security scans by police, retailers, airports and schools have a hard time identifying people of color, women and various ethnic groups.

The ACLU tested Amazon’s Rekognition program and found the system had difficulty identifying people of color, especially darker-skinned people; the program erroneously matched 28 members of Congress to criminal mugshots. Elsewhere, Google Photos tagged images of two Black people as gorillas. A Chinese woman claimed a friend’s face was able to bypass her iPhone’s facial recognition lock. IBM tried to help by compiling and releasing a data set of 1 million human faces gathered from the photo site Flickr… without getting users’ permission to include their images. (Sigh.)

Amazon claimed the ACLU used “the wrong settings” in its test. Tech experts (and a later MIT study) suggest it is more likely that the data sets used to train the algorithms were insufficiently diverse. While the programs are slowly improving, other problems keep cropping up. PewInternet.org canvassed a panel of 1,302 technology experts, scholars, corporate practitioners and government leaders on algorithms and their potential impact on society and individuals; it also compiled a list of algorithmic issues that have emerged just since 2016. The list is daunting, and growing.

Computer scientist David Gelernter, writing at Peachpit, said we need to begin to increase “topsight,” to “see and explore [technological solutions’] consequences before we build business models, companies and markets on their strengths, and especially on their limitations.”

Recruiting Requires Human Oversight

A new technology divide is arising between a small percentage of users who understand, and step back from, algorithms’ increasing control over all aspects of human life, and the rest of us, who are a bit too willing to trust algorithms more than we trust people. The tendency of algorithms to worsen institutionalized or systemic racism, sexism and other forms of discrimination and bias should raise greater concern among the general public, and especially among HR people seeking to grow more diverse ad agencies and marketing firms.

Because of how these algorithms learn, recruiters have to carefully review candidates delivered through online platforms with an eye to diversity in the results. When your targeting parameters are not specific to gender, race or ability, a heavy skew toward a narrow group is a red flag that the platform’s programs are not helping to deliver a more diverse candidate pool (a simple skew check is sketched below).
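
As a rough illustration, here is a minimal Python sketch (with hypothetical field names and a made-up 70 percent threshold, not any platform’s built-in feature) of the kind of skew check a recruiter could run on a platform-delivered candidate pool:

```python
# Minimal sketch (hypothetical fields and threshold): flag a delivered candidate
# pool where any single group dominates, despite neutral targeting settings.
from collections import Counter

def skew_report(candidates, field, threshold=0.70):
    """Report group shares and flag any group exceeding `threshold` of the pool."""
    counts = Counter(c.get(field, "unknown") for c in candidates)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    flags = [group for group, share in shares.items() if share > threshold]
    return shares, flags

# Example: a pool the platform delivered for an ad with no demographic targeting.
pool = [{"gender": "male"}] * 41 + [{"gender": "female"}] * 9
shares, flags = skew_report(pool, "gender")
print(shares)  # {'male': 0.82, 'female': 0.18}
print(flags)   # ['male'] -> red flag: don't trust the platform's defaults
```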

To combat the ad industry’s notorious tendency to hire or contract people who are “just like us” (young, white and male), Forsman & Bodenfors New York set up an open-source database, Grow Your Circle, to help U.S. ad agencies and producers search for and find underrepresented talent in digital, film/video and experiential production. The idea emerged from the agency’s own difficulty in hiring an all-female production crew for a 2018 project, but the database offers broader resources than #FreeTheBid, the 2016-founded effort aimed at encouraging more women directors and producers in TV, video and film work.

The Dots, a British creative networking and jobs platform, is trying to improve diversity recruiting and reduce hiring biases by blocking candidates’ headshots and identifying information, including names, education and employment history. The goal is to get potential employers to focus on individuals’ portfolios first. “Blind screening” has been tried in other fields, notably in orchestras, where blind auditions are increasingly standard practice after studies found that orchestras hired disproportionately more male musicians when evaluators could see the person playing. After hiding names and gender and focusing auditions on musicians’ ability, women’s share of seats in the top five U.S. orchestras grew from 6 percent to 21 percent by 1993.

Currently, The Dots’ “bias blocker” is optional (the company is considering making it the default search setting). Feedback has raised the concern that people who previously benefited from biased hiring would likely have fatter portfolios of work and still hold the advantage. Founder Pip Jamieson believes viewing only candidates’ work allows talent with less experience and less impressive credentials to be ranked more competitively; it lets the quality of the work shine. The larger issue is that there are simply more white men in the field, which automatically skews the algorithms. Still, it’s a start. Jamieson advises that HR people need to consciously select for a more diverse candidate pool to ensure diversity hiring improves, rather than relying solely on technology to serve up those candidates. Call it affirmative action if you will, but the industry’s notorious lack of diversity can’t be fixed without some changes in recruiting and hiring practices.
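
For readers curious what a “bias blocker” amounts to in practice, here is a minimal Python sketch (with hypothetical profile fields, not The Dots’ actual implementation) of redacting identifying details so only the work remains for review:

```python
# Minimal sketch (hypothetical fields, not The Dots' implementation): strip
# identifying details from a candidate profile before reviewers see it.
REDACTED_FIELDS = {"name", "headshot_url", "education", "employment_history", "age"}

def blind_profile(profile):
    """Return a copy of the profile with identifying fields removed."""
    return {key: value for key, value in profile.items() if key not in REDACTED_FIELDS}

candidate = {
    "name": "A. Example",
    "headshot_url": "https://example.com/photo.jpg",
    "education": "Example University",
    "employment_history": ["Agency X", "Studio Y"],
    "portfolio": ["campaign_a.pdf", "short_film_b.mp4"],
    "skills": ["art direction", "editing"],
}

print(blind_profile(candidate))  # only 'portfolio' and 'skills' remain to be judged
```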

A growing set of diversity recruiting resources should help the ad industry expand diversity, and hopefully help us deliver advertising and marketing more in tune with globally diverse target audiences. But awareness of the problem is key.

Your takeaway is that over-reliance on algorithm-driven recruiting could actually reduce your diversity success. It’s just one more reminder that we need to keep humans in human resources.
