Whose health? (That of the Fourth Industrial Revolution!)

Digital technologies, human behavior data, and algorithmic decision-making are set to play an increasingly critical role in addressing future crises. As we increasingly pin our hopes for solving fundamental problems on big data, the most important question we face is not what we can achieve with it, but what we want to achieve.

Just a few weeks after the first cases of COVID-19 appeared in China, South Korea implemented a system for broadcasting the exact profiles and movements of individuals who had tested positive for the disease. Other Asian and European countries followed by developing their own “tracking and monitoring” systems, with varying degrees of success but also growing concern about the ethical issues they raise.

This strong impulse was understandable: if systems that are already available can save thousands of lives, why shouldn’t states use them? In their rush to fight the pandemic, however, societies paid little attention to how such systems could be implemented almost overnight, or to what the consequences would be.

It is true that the South Korean tracking and monitoring system had already sparked considerable debate. The system crossed ethical boundaries by publicly disclosing, in writing, the exact movements of people who had tested positive for COVID-19 to others in their area, revealing, for example, their visits to karaoke bars, love motels, and gay clubs.

But the South Korean system goes further, correlating mobile phone location data with records of each person’s travel, health data, footage from police surveillance cameras, and data from dozens of credit card companies. This information is pooled through a system originally developed for South Korea’s smart cities. By removing the friction of administrative bureaucracy, the system cut contact-tracing time from a full day to just ten minutes.

Legal experts on digital privacy and data security have warned for years about the consequences of interconnecting personal and public databases. But the pandemic showed for the first time how ready these data flows are for concentration and interconnection, not only in South Korea but around the world.

The bitter truth is that we have been building the infrastructure for collecting deeply personal behavioral data on everyone for some time now. The author Shoshana Zuboff traces the birth of this “surveillance capitalism” to the expansion of the state security apparatus following the September 11 terrorist attacks in the US.

Business models based on data processing have driven the spread of this infrastructure’s core elements: smartphones, sensors, cameras, digital money, biometrics, and machine learning. Their ease of use and effectiveness, the promise that they can do more in less time and at lower cost, has won over entrepreneurs and individual users alike. But the rapid, enthusiastic adoption of digital technologies left little time, and few reasons, to consider the consequences of connecting all these dots.

Although the media often describe pandemic-related technologies as “cutting-edge,” very little about them is actually new, except, perhaps, that more people are learning about them now. The tracking of human movements, whether at the individual or the global scale, lies at the heart of many well-known companies. Google’s “COVID-19 Community Mobility Reports,” for example, present a dizzying volume of user data, from the city level to the country level, showing who is staying home, who is going to work, and how these daily trajectories have changed under lockdown.

The same applies to data about what we buy and how we behave as individuals and groups. Tracking private behavior at scale is so central to automation that the pandemic lockdowns affecting more than four billion people disrupted artificial intelligence and machine learning models: fraud-detection algorithms and supply chain management systems, for example, became misaligned because the patterns of behavior they had learned from no longer matched reality.

The sudden social visibility of behavioral data collection could trigger a public awakening. Edward Snowden’s revelations, after all, drew public attention to the fact that people’s Skype calls and emails were being monitored in the name of counterterrorism. And the Cambridge Analytica scandal in the United Kingdom exposed the sale and use of personal data for political micro-targeting.

More specifically, the COVID-19 crisis could have shown how behavioral data reveal what we do at any given moment, and why what we do matters. Instead, we accepted these technologies because we regarded them, at least during the current crisis, as serving a broader social good (while overlooking the question of their effectiveness).

But as the boundaries between private and public health blur more permanently, we may begin to view the concessions we are asked to make differently. We may show less tolerance for behavior tracking if individual lifestyle choices are constantly monitored for the greater good. Technologies meant to help us manage the post-pandemic future, such as workplace monitoring tools and permanent digital health passports, will severely test our value systems. And this may lead to strong cultural and political disagreements about which technologies should and should not be promoted.

It would be easy to reduce this confrontation to issues of surveillance and privacy. But these are not the only important stakes. The continuous accumulation of behavioral data at scale not only empowers large companies; it also enables the creation of predictive models, early-warning systems, and national and global systems of policing and control. Moreover, the future will be shaped by crises, from natural disasters to famines and pandemics. And digital technologies, data on human behavior, and algorithmic decision-making will play an ever-growing role in predicting, mitigating, and managing them.

Consequently, societies will face difficult questions that go beyond political freedoms and the harmful biases, discrimination, and inequalities associated with data-collection technologies. We will need to decide who owns this knowledge of our behavior and how it is used for the public good. And we will have to recognize that who gets to decide what on the basis of these data, and with what political agenda, will create new forms of power with long-term consequences for our lives.

As we increasingly rely on big data to solve major problems, the most important question we face is not what we can do with it, but what we want to do. If we do not answer this question ourselves, others will answer it on our behalf.

Stephanie Hankey
Project Syndicate, June 29, 2020
Original title: The Behavioral Data Debate We Need
Translation / adaptation: Ziggy Stardust