ITEM: One of the hot new trends in enterprise software is so-called ‘bossware’ – employee monitoring software that enables employers to track the productivity of their employees by, essentially, spying on them.
Employee monitoring software is nothing new – it’s been around for quite some time, enabling enterprises to harvest the data generated by employee computers and devices in order to boost productivity and maximize efficiency – or at least to see if employees are slacking, leaking company secrets or spending company time on Pornhub or whatever.
Perhaps unsurprisingly, the rise of remote working during the COVID-19 pandemic has driven an increase in the use of employee monitoring software. And employers are spoiled for choice when it comes to solutions, reports The Guardian:
The number and array of tools now on offer to continuously monitor employees’ digital activity and provide feedback to managers is remarkable. Tracking technology can also log keystrokes, take screenshots, record mouse movements, activate webcams and microphones, or periodically snap pictures without employees knowing. And a growing subset incorporates artificial intelligence (AI) and complex algorithms to make sense of the data being collected.
AI is taking employee monitoring software to new levels of capability and potential intrusiveness. One product offers WFH security by using real-time facial recognition and object detection technology to determine if anyone other than the worker is looking at the screen (and if the worker is eating or drinking while on duty). Another calculates a “risk score” to determine whether an employee poses a security threat to the company.
Apart from the (hopefully) obvious ethical and legal issues here regarding privacy, there’s also the question of how accurate employee monitoring software really is. Put another way, can an algorithm really quantify human behavior in a work environment in a meaningfully objective way? From the report:
Productivity scores give the impression that they are objective and impartial and can be trusted because they are technologically derived – but are they? Many use activity as a proxy for productivity, but more emails or phone calls don’t necessarily translate to being more productive or performing better. And how the proprietary systems arrive at their scores is often as unclear to managers as it is to workers …
And then there’s this:
AI models, often trained on databases of previous subjects’ behaviour, can also be inaccurate and bake in bias. Problems with gender and racial bias have been well documented in facial recognition technology.
And finally of course there’s the question of how employees feel about it:
Being monitored lowers your sense of perceived autonomy, explains Nathanael Fast, an associate professor of management at the University of Southern California who co-directs its Psychology of Technology Institute. And that can increase stress and anxiety. Research on workers in the call centre industry – which has been a pioneer of electronic monitoring – highlights the direct relationship between extensive monitoring and stress.
As enterprises worldwide shift to hybrid workforces, with more staff continuing to work from home, this is shaping up to be a major issue over the next couple of years. A few European court rulings have held that employee monitoring software is legal, but that it can’t be used at the expense of employees’ privacy rights. The danger is that employers lose sight of this, dazzled by the magic of digital surveillance that may not be giving them an accurate picture of the situation and could actually be making things worse by stressing out their employees.
Full article here.