Review: “The Algorithm”
In her book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” Hilke Schellmann offers a detailed, critical analysis of the use of artificial intelligence (AI) in human resources. Schellmann explores how AI is progressively replacing humans in critical processes such as hiring, promotions, and layoffs. Although the use of AI is widespread (99% of Fortune 500 companies use it in their hiring processes), the author highlights a fundamental issue: the lack of transparency in these tools. This opacity raises significant ethical and social concerns and points to an urgent need for greater awareness and for action against the inequalities and injustices these technologies can create.
Highly qualified individuals are excluded from later stages of selection processes (such as interviews conducted by humans) because of irrelevant factors: first name, type of sport played (individual or team), zip code (which in the US can be a proxy for socio-economic or ethnic status), or reaction speed in specific games (difficult for people with disabilities). These flawed selection criteria not only lead to unfair and discriminatory decisions but also create a significant barrier for candidates who want to contest the outcome. The opaque nature of the algorithms makes it extremely difficult to understand how a score was calculated and, consequently, to challenge a negative evaluation.
The book clearly shows how algorithms absorb many typical human biases, with the aggravating factor that they can amplify and repeat them indefinitely. Given the volume of resumes these systems review (IBM and Google each receive about 3 million applications a year, Unilever about 1.8 million, and Goldman Sachs about 230,000 applications for internship positions alone), the problem is enormously relevant. And this concerns only candidates for a new job: what about those waiting for a promotion, or facing dismissal because of an algorithm?
According to some CEOs and many software vendors, AI-based tools not only analyze the present but can supposedly predict future employee behavior. These analyses may include interpreting tone of voice, evaluating visited websites (even on company devices used for personal purposes), and analyzing facial expressions and response times in order to estimate the risk of burnout or depression, or the likelihood of resignation.
With this book, Hilke Schellmann also raises a further alarm: the growing interest of CEOs in AI-based surveillance tools, sales of which exploded during and after the COVID period. With an increasing number of employees working remotely, many employers have relied on algorithms to assess the commitment and productivity of their staff. Real-time scanning of emails, the duration and frequency of meetings, mouse movements, and periods of inactivity have become determinants of job security and career advancement. The book also reports cases at Amazon in which managers were forced to call out employees at the software’s request, without a clear understanding of the underlying reasons.
There is, however, also a problem that can disadvantage companies looking for qualified personnel: these systems tend to standardize judgment criteria, ignoring the factors that make an individual unique. They therefore do not promote diversity, which can be a key element in professional success: thinking outside the box, taking individual approaches, standing out for creativity, and so on.
But why do companies rely on these imperfect tools?
One of the main reasons is their ability to handle a high volume of applications, reducing costs and time. Moreover, persuasive software vendors convince many companies that the solution to their HR problems lies in their product. Add the blind trust many CEOs place in technology, and the damage is done.
Conclusion
Drawing on university professors, researchers, whistleblowers, software houses, and people affected by the problems she highlights, Hilke Schellmann reveals the limits of AI-based decision systems. The book aims to show the risks of the indiscriminate use of AI in human resources, and it argues that HR managers should thoroughly understand the algorithms they rely on, which currently does not happen.
The author does not oppose technology or algorithms per se, but highlights the need for independent evaluation before widespread use, similar to what happens today with drugs. These tools should be authorized only once they are fully understood and shown to be non-discriminatory.
The book, published by Hachette Books in early 2024, is highly recommended to anyone wishing to explore the limits of the tools used in the HR world today and to refocus attention on the importance of the human element in evaluations.
Biography of Hilke Schellmann
(taken from https://www.hilkeschellmann.com/about-hilke-schellmann)
As a contributor to The Wall Street Journal and The Guardian, Schellmann writes about making artificial intelligence (AI) accountable.
Her four-part investigative series on AI and hiring for the MIT Technology Review was a finalist for a Webby Award.
Her documentary “Outlawed in Pakistan,” screened at Sundance and broadcast on PBS FRONTLINE, won an Emmy Award, an Overseas Press Club Award, and a Cinema for Peace Award. In her investigation of student loans for VICE on HBO, she discovered how the easy flow of money from the federal government is driving up the cost of higher education in the United States, threatening the country’s international competitiveness. The documentary was a finalist for the 2017 Peabody Awards.
Former Director of Video at Columbia University’s Graduate School of Journalism, Schellmann also led video coverage as a multimedia reporter for The Wall Street Journal’s New York section. Her work has appeared in several publications including The New York Times, VICE, HBO, PBS, TIME, ARD, ZDF, WNYC, National Geographic, The Guardian, Glamour, and The Atlantic.
Schellmann’s work has been generously supported by the Patrick J. McGovern Foundation, the MIT Knight Science Fellowship, the Pulitzer Center AI Accountability Network, and the NYU Journalism Venture Capital Fund.