How does AI-surveillance at the workplace affect workers?


An online discussion on workplace surveillance, organised by the Internet Freedom Foundation, emphasises why the State should acknowledge the need for democratic oversight of artificial intelligence.

---

ON April 9, the Internet Freedom Foundation [IFF] hosted Prof. Anupam Guha, Assistant Professor at the Centre for Policy Studies, IIT Bombay, for a discussion on his new academic paper, The Automated Workplace: Digital Taylorism in India, as part of its conversation series. Prof. Guha began by explaining how workplace surveillance is built upon the concept of 'digital Taylorism', which promises scientific management of the workplace and, with it, greater productivity. He said, "This idea of 'digital Taylorism' was rediscovered in the 2000s; since then, the use of digital technology became common, especially in the BPO sector. There, attempts were made to observe employees' behaviour or measure their activities, even when they step out of their cubicles or when they sit down."

Prof. Guha, an Artificial Intelligence [AI] researcher, stated: "The argument that I am trying to make in this paper is that the central vanity of all of this – that digital Taylorism is a scientific way of running a workplace and that it will increase productivity – is nonsense. This has nothing to do with productivity. It has nothing to do with scientific management, which was the central claim of Taylorism and now of digital Taylorism. Rather, it is an arbitrary terror mechanism to keep employees constantly jumpy, to make them aware that all their actions are being monitored."

Moreover, on the question of why digital surveillance at the workplace appears to increase productivity, Prof. Guha answered, "The reason they are witnessing higher productivity or profit is that the workers are more strained and exploited, and they are more wary of speaking out against such managerial practices…now you have all kinds of tools to run a workplace, especially post-pandemic, when people were more okay with their liberties being taken away because there was an 'emergency'."

Replying to a question about what surveillance technologies are being used at the workplace, and how they are being enabled specifically at Indian workplaces, he said, "Taylorism became famous in the factories of Henry Ford, the famous car manufacturer: the idea that in order to get the maximum out of the workers, you have to measure everything they do and pay them accordingly. Before Taylorism came into the picture, there was a lot of self-management by workers in factories. They could decide how to run the place…but when Taylorism came into the picture, all that flew out of the window. You had professional managers on the factory floor because it was like an army barracks…workers were not against it because there was a lot of arbitrariness in capitalism, and they thought that if everything could be measured, they could measure the salary they would demand."

"…When India became independent, we even introduced this practice…but then came machine learning, which is the dominant area of AI and means finding patterns in given data…the AI part of it began in a sterile and peaceful manner, and nobody thought the end point would be invasive monitoring…Slowly, from the periphery of the company, we see monitoring and behaviour modification coming inside the company. We see facial recognition, apart from the monitoring that happens through cameras. Then we get to biometrics and extremely invasive stuff. The thing with India is that it does not have a law against facial recognition. We do not have a law on what level of surveillance can or cannot be done in the workplace, and what its consequences are."

"…The final level, when it comes, is completely and utterly false, pseudoscientific and hogwash. There are companies that claim their machine learning can tell you which employees have what emotions, which are irritable or capricious, or which are going to harass their fellow workers. This is all fraud. No AI can tell what your emotions are. Emotion detection is fraud."

Prof. Guha was then asked by the host, Anushka Jain of IFF, how the use of this surveillance technology leads to a suppression of workers' rights and perpetuates structural power asymmetries. He said, "India does not have a law to regulate these technologies…There is an absolute policy vacuum. There are attempts to make a data protection law, but the Bill is extremely controversial. Its final draft is diluted from what the Justice B.N. Srikrishna Committee suggested….Can we use some variant of the labour laws to protect workers' rights against these kinds of things? Not really. Labour laws have recently been diluted to a significant degree….the logic through which these technologies are coming in is market logic, which can only be defeated through political logic, through regulation and laws."

Lastly, when asked whether all AI is bad, he answered, "I would not have become an AI scientist if I believed that AI is bad…It depends on how you use it. If you use it for policing, for judicial functions or for policy making, that is something you should not do. You should not have arbitrary processes in those things. If you are using it at the workplace to essentially, and I would stress this, unscientifically harass workers through random and arbitrary standards of performance, which is really what digital Taylorism is, that is also bad."

Public awareness, education and sustained political pressure on these issues, Prof. Guha remarked, would create the political will needed to bring about significant change on this increasingly important subject.

The IFF's entire conversation with Prof. Guha is available on its website.

The Leaflet
theleaflet.in