
Amazon’s AI managers targeted in California legislation meant to empower warehouse workers

A new California law designed to prevent the warehousing industry from overworking employees doesn’t name a specific company. But the legislation’s target is clear: Amazon.com Inc, which has given machines unparalleled control over workers and is accused of using the technology to impose unreasonable demands on them.

Authored by Assemblywoman Lorena Gonzalez, AB 701 prohibits the use of monitoring systems that thwart basic worker protections such as rest periods, bathroom breaks and workplace safety. The legislation will help determine whether governments can regulate human-resources software that’s expected to play an increasing role in deciding who gets hired and fired, how much workers are paid and how hard they work.

“This is just the beginning of our work to regulate Amazon & its algorithms that put profits over workers’ safety,” Gonzalez, a San Diego Democrat, tweeted earlier this year. The legislation, signed by California Governor Gavin Newsom in September, comes into force on January 1.

Regulators are constantly playing catch-up, especially with the tech industry. Computer science experts are doubtful laws can effectively regulate machines that are ultimately expected to be smarter than people and even capable of tricking them. Designing artificial intelligence systems that meet their intended purpose remains a feat in itself. Ensuring they don’t cause unintended consequences is even harder.

Amazon has outsourced many of the roles traditionally played by human managers to machines. At giant fulfilment centers, software determines how many items a facility can handle, where each product is supposed to go, how many people are required for a given shift and which truck is best positioned to speed an order to a customer on time. Algorithms and cameras constantly monitor delivery drivers, ensuring they drop off a certain number of packages per shift, place them correctly and obey traffic laws.

The company argues that automation is required to manage its sprawling operations and says the technology mostly works as intended. But no algorithm is perfect, and even a small margin of error at a company of Amazon’s size can inflict lasting collateral damage.

Over the past year, a Bloomberg investigation into algorithmic management chronicled the experience of a gig delivery driver mistakenly fired by a machine, an aspiring doctor paralysed after a harried Amazon delivery driver ploughed into his car and warehouse workers who said they felt like disposable cogs in a machine. Time and again, workers characterised the algorithms as merciless taskmasters. They described an unforgiving workplace where people often don’t stay long, injury rates are higher than the industry average, and employees are expected to meet unreasonable productivity quotas.

In the coming years, companies are widely expected to adopt aspects of the management automation pioneered by Amazon. Machines already routinely sift through job applications, determine work schedules and even figure out which employees are planning to quit.

So far, Amazon hasn’t sold its worker-monitoring software to other companies – and some industry watchers believe it never will. But Amazon’s cloud division in recent years started selling tools designed to automate real-world tasks, including some developed for its e-commerce and logistics operation. Last year, Amazon Web Services announced a suite of products to monitor equipment and factory lines, supplementing or even replacing human workers. Companies as varied as Capital One, Labcorp and GE Appliances use Amazon Connect call-centre software, which deploys artificial intelligence tools to make human agents more productive and automate some customer interactions entirely.

Algorithms’ growing ubiquity has prompted calls for legislation that would force companies to be more forthcoming about how such software affects people. Last December, Senator Chris Coons, a Democrat from Delaware, introduced the Algorithmic Fairness Act. It would require the Federal Trade Commission to create rules that ensure algorithms are being used equitably and that those affected by their decisions are informed and have the opportunity to reverse mistakes.

“Artificial intelligence brings real benefits to society and opens exciting possibilities. However, it also comes with risks,” Coons said, after unveiling the proposed legislation. “Companies are increasingly using algorithms to make decisions about who gets a job or a promotion, who gets into a certain school or who gets a loan. If these decisions are being made by artificial intelligence that is using unfair, biased or incorrect data, it has an outsized impact on people’s lives.”

So far, the proposal has stalled in a divided Washington grappling with more immediate concerns, from the pandemic to voting rights.

The California law has a narrower focus. It requires warehouses with at least 100 employees to disclose performance quotas to workers and prohibits workloads that prevent them from taking legally mandated meal and rest breaks. The law seeks to give workers some redress if quotas violate safety regulations, and gives the state labour commissioner the power to review workers’ compensation records at facilities with elevated injury rates and issue citations if the injuries are attributable to excessive workloads.

An Amazon spokesman said the company will update managers at its 32 California fulfilment centers on the productivity tracking process but declined to say what changes will be made to comply with the law. He also said Amazon doesn’t have workload quotas but uses productivity metrics to identify employees who need help meeting expectations.

Computer scientists have long debated whether algorithms and artificial intelligence can be regulated effectively. Roman Yampolskiy, a professor at the University of Louisville who studies AI safety, says the technology is such a black box that it’s difficult to know if it’s creating potentially dangerous situations. He’s sceptical that the California law will have its intended effect.

“We don’t always know why machines make certain decisions because it’s a complex web of neural networks,” Yampolskiy said. “Any time you put in writing what you are trying to accomplish, smart people will find a loophole to work around it. Legislation will be gamed by algorithms and their owners.”