Silver Taube: The dangers of AI in the workplace
SAG-AFTRA members picket at Netflix in Los Gatos. The strike is fueled by fears of AI-related job loss. Photo courtesy of South Bay Labor Council.

Artificial intelligence has the potential to transform our lives. It is now used to read X-rays with greater precision than radiologists and to spot cancerous growths no human doctor can detect. In one clinical trial, AI helped detect 20% more cases of breast cancer than radiologists did.

But AI also has risks, particularly in employment where a company’s quest for profits and the absence of regulation can adversely affect workers.

Widening socioeconomic inequality triggered by AI-driven job loss is a cause for concern. The SAG-AFTRA strike is fueled by fears of AI-driven job loss. By 2030, tasks that account for up to 30% of hours currently worked in the U.S. economy could be performed by AI, with Black and brown employees left especially vulnerable, according to McKinsey.

Goldman Sachs estimates that 300 million full-time jobs could be lost to AI, and Bloomberg reports that, according to an IBM survey, more than 120 million workers globally will need retraining in the next three years due to AI's impact on jobs. Workers who lack the skills needed for the new jobs AI creates could be left behind.

The use of artificial intelligence for hiring is widespread. In fact, 88% of companies globally use some form of AI in human resources, according to a Mercer report. In some cases, candidates are not only pre-selected, but also interviewed by an intelligent machine before a real person decides, based on a detailed machine-produced report.

Inherent biases are baked into AI algorithms. "A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," said Olga Russakovsky, a computer science professor at Princeton.

Researchers from the University of Southern California found that up to 38.6% of the "facts" used by AI systems in their study were biased. According to the study, women were associated with the "B" word, Muslims with words like terrorism, and Mexicans with poverty.

Researchers have also found that 76% of companies with more than 100 employees use personality tests, and that algorithms administer and analyze those tests. Resume mining tools consider extracurricular activities and work experience, but an applicant's disability may have excluded them from those activities or experiences. The candidate's resume may describe how they developed the same skills in different ways, yet the tool may screen them out without ever considering those skills.

A single vendor's facial recognition tool is used by more than 100 employers and has analyzed more than 1 million job applicants. One test asks applicants to match images of faces to emotions. Autistic people generally do not perform as well as non-autistic people on this test. If the ability to identify specific emotions is not necessary for the job, the test would violate anti-discrimination laws.

Built-in stereotypes are likely to prompt discriminatory managerial decisions when immigrants who speak with an accent are interviewed by a machine.

If the system notices that recruiters interact more frequently with white men, it may find proxies for those characteristics and replicate the pattern. After auditing its algorithm, one resume screening company found that the two factors its algorithm treated as most indicative of superior job performance were being named Jared and playing high school lacrosse. A recent study from Northeastern University and USC found that targeted Facebook ads for supermarket cashier positions were shown to an audience that was 85% women, while ads for jobs with taxi companies went to an audience that was approximately 75% Black.

The U.S. Equal Employment Opportunity Commission settled its first-ever lawsuit over AI discrimination in hiring with iTutorGroup Inc., a company that allegedly programmed its recruitment software to automatically reject older applicants. The company will pay $365,000 to a group of rejected job seekers age 40 and over, according to a consent decree filed in the U.S. District Court for the Eastern District of New York.

AI can also make personnel decisions that adversely affect workers. Between 2011 and 2015, teachers in Houston had their job performance evaluated by a data-driven appraisal algorithm called the Educational Value-Added Assessment System. The algorithm allowed the board of education to automate decisions about which teachers were awarded bonuses, disciplined for poor scores or fired. The teachers were unable to challenge the decisions or receive an explanation of them because the source code and other information underlying the algorithm were proprietary trade secrets owned by SAS, a third-party vendor.

In mid-2017, a federal judge ruled that the use of the secret algorithm to evaluate worker performance without proper explanation denied the teachers their constitutional rights.

AI has also led to labor exploitation overseas. According to a recent Washington Post report, an army of overseas workers in digital sweatshops is behind the AI boom.

Nearly half of online freelance work is performed in India and the Philippines. Remotasks, owned by the $7 billion San Francisco startup Scale AI, does work for firms like Meta and Microsoft and for generative AI companies like OpenAI, the creator of ChatGPT. Remotasks operates in Cagayan de Oro on Mindanao in the Philippines, a hub for AI data annotation. Remotasks taskers have had payments delayed, reduced or canceled after completing tasks, and they often earn far below the local minimum wage.

The ACLU and two dozen partner organizations are calling on the Biden administration to take concrete steps to center civil rights and equity in AI and to actively work to address its systemic harms, because the tech industry lacks people who understand those harms and can work to address them. There have also been calls to nationalize AI by creating a governing body for the technology, akin to the United States Atomic Energy Commission, that would wield the powers of the government to mitigate possible harms.

We must take steps to ensure this technology does not adversely affect workers in the United States and around the world.

San José Spotlight columnist Ruth Silver Taube is supervising attorney of the Workers' Rights Clinic at the Katharine & George Alexander Community Law Center, supervising attorney of Santa Clara County's Office of Labor Standards Enforcement Legal Advice Line and a member of Santa Clara County's Fair Workplace Collaborative. Her columns appear every second Thursday of the month. Contact her at [email protected].
