Data privacy

Chicken Little, Big Brother and workplace tech

By Michael Moran 

A version of this article originally appeared on Forbes. What follows is an expanded version of the original piece.

With the world taking another crack at a “return to work” this autumn, a spate of stories has appeared warning of a subtle but very real byproduct of the global pandemic: A reset of societal norms when it comes to privacy expectations in the workplace. In recent weeks, everyone from The Wall Street Journal to The New York Times to The New York Review of Books has weighed in. There’s even been some action in the US Senate, where Sen. Bob Casey (D-PA) is asking the Labor Department to take a fresh look at what has become known as “surveillance tech.”

The best of these pieces carefully parse the advantages and disadvantages of the technological innovations and new protocols born of the pandemic emergency. But the general tone has been disappointingly alarmist, giving far too much attention to bad actors and dark scenarios, and far too little to the transformation of the worker-employer relationship that is also a byproduct of the pandemic, one that promises to make most workplaces more humane, not less.

Now, I’m not naive about this. Journalists (as the former journalist in me knows all too well) have a hard time letting good news get in the way of a good story, especially when the specter of “Big Brother” can be summoned. The idea that our employer is tracking our every keystroke, timing our bathroom breaks and using facial recognition or other biometric technology to make judgments on our economic value is indeed alarming.

Indeed, there are instances of companies that have used the pandemic as an excuse to push into clearly unethical (if not illegal) frontiers of surveillance. The Electronic Frontier Foundation, an NGO that monitors data privacy matters, breaks these so-called “bossware” products into two broad categories: those that employees know about and can possibly control, and those that secretly snoop on employees, in the office or remotely, with no consent. Some of this stuff is merely dastardly; some of it is outright dangerous and evil. The New Yorker’s Ronan Farrow documented the story of NSO Group, an Israeli software firm whose Pegasus surveillance software was sold to governments and used to track dissidents.

What you can’t see can hurt you

It’s one thing to insist that people turn on their cameras for Teams or Zoom meetings, or that they clock in when they arrive at work. It’s quite another to deploy software that secretly photographs or videotapes a remote employee, or that logs keystrokes and assumes that a lack of such activity equals goofing off.

Many people assume such activities are confined to authoritarian states like Russia, China and Saudi Arabia, where the use of the term “Big Brother” is more than justified. For instance, China’s Ministry of Public Security runs a nationwide network of cameras, web-crawling spiders, algorithms and other methods to keep tabs on citizens, ultimately assigning them a loyalty grade based on an algorithmic assessment of their thoughts, deeds and potential behavior. Arrest for a poor grade is entirely possible.

Most western democracies are a long way from that level of abuse. And yet, in the US, only a handful of states (New York, Delaware and Connecticut) have laws that require an employer to reveal whether such surveillance is taking place. By and large, this leaves the question of worker surveillance in the US up to the judgment of the employer and the company’s general counsel. In Europe, the General Data Protection Regulation (GDPR) affords some protection and generally requires employers to notify workers of any collection of their personal data. It may not be watertight, however: the law does not specifically address the parsing of web searches or downloads on an employer-owned laptop, for instance. But EU workers are protected, and the fines for violating GDPR can be steep.

None of this, counter to the current media narrative, is particularly new. For decades, courts have upheld the right of employers to read your email and log the websites you visit, so long as the activity takes place on a corporate-provided domain, phone or computer. This was true in the EU right up until 2018, when GDPR took effect. Meanwhile, the layers of cybersecurity software being piled onto these domains extend this capability beyond corporate-provided hardware, as companies try to navigate the tricky balance between giving people with their own devices access to the corporate domain and protecting that domain from cyberattack. Not for nothing were the world’s governments terrified about what rogue cybersecurity billionaire John McAfee might know about them.

Chicken v. Little

In fact, there is far more than the pandemic driving these developments, and it is important to keep them in perspective. There are some in society who will always feel threatened by technological innovation. For my part, I sometimes miss the hours I could spend walking around town without being bothered by someone texting, calling or emailing me. But this is the price of far greater access to information, communications and entertainment, a trade-off that on balance I think has been beneficial. I have a 16-year-old daughter. Call me old school, but I like to know where she is.

In the same vein, the benefits of the technological innovations that were spawned by the epidemiological emergency of the global pandemic will far outweigh the downsides.

Across the world, the need to monitor the health of indoor spaces and the wellness of those within them has spawned a new push to outfit buildings with Internet of Things (IoT) sensors measuring everything from CO2 and particulate matter in the air to the density and usage of conference rooms, labs and washrooms. Decades of studies from scientific institutions and government watchdogs have documented the dangers of poor air quality and sanitation in a building, but until the pandemic the idea of taking empirical, real-time measurements of such things was considered beyond the responsibility of a landlord, school administrator or employer.

The pandemic did not necessarily create these realizations; it simply acted as an accelerant. Already, well before anyone had heard of COVID-19, other trends were raising awareness of the need for new ways to measure the world we live in. First, the rise of Environmental, Social and Governance (ESG) investment strategies and the more general concern over climate change created new demand for previously non-existent metrics on things like a building’s energy efficiency and carbon emissions, a company’s employee engagement and the diversity of its leadership.

The “Built World” – ESG parlance for the world’s hard infrastructure, factories, construction activity, homes, retail outlets and office buildings – accounts for about 40 percent of all carbon emissions. Technologies developed over the past several years now allow building owners, regulators and tenants to measure, with real-time granularity, a building’s use of energy and water, its generation of waste, and the comfort and safety of the people within it. The great majority of these technologies are completely anonymous: they do not capture personally identifiable information (PII), and they do not use cameras or other forms of biometric tracking.

Another important development that preceded the pandemic was a demographic handoff in the workforces of the world’s advanced economies. Over the past five years, Millennial and Gen Z workers have become the majority of the workforce in most of these economies, and very soon they will dominate investment, too. The impact of this development is blunted in the authoritarian world, where dead wood will remain in control for years, even decades, to come. But in the democracies of the G7, these younger workers’ focus on issues like climate change and gender equality, along with a general reluctance to fall into the Mad Men culture of Baby Boomer office life, is changing society. Combined with tight labor markets, this has already spawned new demand for transparency and data in the workplace, greater flexibility in schedules and culture, and innovative “workplace experience” technologies that remove risk and friction from simple tasks like catering a meeting or finding a conference room with clean air and empty trash bins.

Under the ‘guise’ of safety

As a byproduct of some of the new technologies discussed above, the owner of a building, an employer or even a parent may have more information available about what’s going on in a given space than was possible just a few years ago. Depending on the capability in question, these technologies either collect no PII that could compromise someone’s privacy, or they log identity information with full consent because – say, in booking a conference room – who is doing the booking is highly relevant.

Wireless IoT sensors, for instance, can be installed on doors and windows to indicate whether they have been left open, in refrigeration units to ensure prescription drugs or food are being preserved properly, or across an open-plan office to determine how many desks are vacant at any given moment. Which of these is snooping? None of these technologies collects PII.

Take that last example a step further. What if, rather than simply showing which desks are available, the system includes a smartphone app that lets workers reserve one? In this case, yes, there is PII involved. But that information is shared willingly.
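To make the distinction concrete, here is a minimal sketch of the two kinds of records described above: an anonymous occupancy reading versus a consent-based desk reservation. The field names and structures are illustrative assumptions for this article, not Microshare’s actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OccupancyReading:
    """Anonymous sensor payload: identifies a desk, never a person."""
    sensor_id: str        # e.g. "desk-4F-112" -- a location, not an identity
    occupied: bool
    recorded_at: datetime

@dataclass
class DeskReservation:
    """Consent-based booking record: identity is supplied knowingly by the employee."""
    desk_id: str
    employee_email: str   # PII, shared willingly at the moment of booking
    starts_at: datetime
    ends_at: datetime

# The occupancy reading can answer "how many desks are free right now?"
# without knowing who sits where; the reservation record carries identity
# only because knowing who booked the desk is the point of the feature.
```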

The situation becomes more complex when risk mitigation and safety are the goal. Microshare, my company, developed a wearable contact-tracing technology during the early months of the pandemic, based on an earlier product, Asset Zoning, which hospitals had used to track wheelchairs and other mobile equipment with a tendency to get lost in distant corners. While many of us decamped to “remote working,” many of our clients’ employees, including workers at GlaxoSmithKline’s global pharmaceutical plants, could not be productive from their bedrooms. That meant, by definition, that density would persist and could pose a risk of COVID-19 infection and outbreaks at the company’s two dozen global production sites.

At the time, April 2020, Google, Apple and a host of others were developing smartphone-based contact-tracing apps. We immediately saw big problems with this idea. Tracing apps by definition collected PII. They also had no geographic limit: they traced workers at work, at home and at play. As a result, uptake for these apps was fairly miserable considering the enormous public and private investment that went into them. What’s more, smartphones, widely available in, say, Mountain View, California, aren’t ubiquitous in the developing world, and they are often a safety hazard in and of themselves on a busy production floor or in a warehouse.

Our solution, based on Bluetooth wristbands, was active only on worksites and had no ability to store PII. The data remained anonymous until someone reported a symptom, at which point GSK ran a reverse database query and advised anyone who had been within six feet of the person in question over the previous several days to get tested for COVID-19. Throughout the pandemic – and right down to this writing, as GSK has renewed its contract in response to the pandemic’s persistent variants – GSK has experienced no large outbreaks or shutdowns.
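For readers who want a sense of how a reverse query like this can work without storing PII, here is a rough sketch in Python. The wristband identifiers, the five-day lookback and the assumption that proximity events are already filtered to roughly six feet are all illustrative; this is not GSK’s or Microshare’s actual implementation.

```python
from datetime import datetime, timedelta

# Proximity events keyed only by anonymous wristband IDs: (band_a, band_b, when).
# Each event is assumed to be logged only when two bands come within roughly
# six feet of each other. No names or phone numbers appear here; the mapping
# from band ID to person lives in a separate, access-controlled system.
proximity_log = [
    ("band-017", "band-204", datetime(2020, 5, 4, 9, 15)),
    ("band-017", "band-331", datetime(2020, 5, 6, 14, 2)),
    ("band-204", "band-331", datetime(2020, 5, 7, 11, 40)),
]

def close_contacts(reporting_band: str, report_time: datetime,
                   lookback_days: int = 5) -> set:
    """Reverse query: which wristbands were near the reporting band recently?"""
    cutoff = report_time - timedelta(days=lookback_days)
    contacts = set()
    for band_a, band_b, when in proximity_log:
        if when < cutoff:
            continue
        if band_a == reporting_band:
            contacts.add(band_b)
        elif band_b == reporting_band:
            contacts.add(band_a)
    return contacts

# Only after a symptom report would an authorised health or HR contact translate
# the returned band IDs back into people and advise them to get tested.
print(close_contacts("band-017", datetime(2020, 5, 8, 8, 0)))
# -> {'band-204', 'band-331'}
```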

But what about privacy? Back in 2020, a BBC journalist asked me, “Isn’t this just a way for employers to track their employees?” The BBC’s report faithfully noted the efforts Microshare had made to avoid collecting PII. “While firms like Microshare aim to protect privacy, the truth is, we may have to accept some level of incursion, says Mr. Moran, as we accepted enhanced security measures following the 9/11 terrorist attacks, as a trade-off that is necessary to protect us,” its report said. But the headline writers couldn’t resist: “Coronavirus: How Much Does Your Boss Need to Know About You?”

As with all technologies, bad actors will misuse them or twist them into something they were never designed to be. Consider the automobile or the aircraft. Perhaps the CIA should have imagined that someone would turn Wilbur and Orville Wright’s invention into a devastating suicide weapon. The intelligence community’s failure of imagination was tragic, but it does not mean we should stop flying.

Similarly, I once worked closely with Dr. Nouriel Roubini, the “Dr. Doom” of CNBC fame, who earned that moniker by predicting the 2008 financial crisis during the boom years that preceded it. I was in a New York taxi with Nouriel one day when we heard the news that Google was developing a “self-driving car.” Nouriel, with a wry smile, turned to me and said, “Self-driving car bomb.” He was right, of course. We humans can make a weapon out of anything – a stone, a butter knife, love or an iPhone. But I’m still inclined to think our ingenuity, better natures and instinct for survival will prevail.

Michael Moran | CMO, Chief Risk & Sustainability Officer | MMoran@microshare.io
