Will democracies stand up to big brother?

Jun 14, 2023 - Last updated at Jun 14, 2023

By Simon Johnson, Daron Acemoglu, and Sylvia Barmack

CAMBRIDGE — Fiction writers have long imagined scenarios in which every human action is monitored by some malign centralised authority. But now, despite their warnings, we find ourselves careening towards a dystopian future worthy of George Orwell’s 1984. The task of assessing how to protect our rights, as consumers, workers and citizens, has never been more urgent.

One sensible proposal is to limit patents on surveillance technologies to discourage their development and overuse. All else being equal, this could tilt the development of AI-related technologies away from surveillance applications — at least in the United States and other advanced economies, where patent protections matter, and where venture capitalists will be reluctant to back companies lacking strong intellectual-property rights. But even if such sensible measures are adopted, the world will remain divided between countries with effective safeguards on surveillance and those without them. We therefore also need to consider the legitimate basis for trade between these emergent blocs.

AI capabilities have leapt forward over the past 18 months, and the pace of further development is unlikely to slow. The public release of ChatGPT in November 2022 was the generative-AI shot heard round the world. But just as important has been the equally rapid increase in governments’ and corporations’ surveillance capabilities. Since generative AI excels at pattern matching, it has made facial recognition remarkably accurate (though not without some major flaws). And the same general approach can be used to distinguish between “good” and problematic behaviour, based simply on how people move or comport themselves.

Such surveillance technically leads to “higher productivity”, in the sense that it augments an authority’s ability to compel people to do what they are supposed to be doing. For a company, this means performing jobs at what management considers to be the highest productivity level. For a government, it means enforcing the law or otherwise ensuring compliance with those in power.

Unfortunately, a millennium of experience has established that increased productivity does not necessarily lead to improvements in shared prosperity. Today’s AI-powered surveillance allows overbearing managers and authoritarian political leaders to enforce their rules more effectively. But while productivity may increase, most people will not benefit.

This is not just speculation. Corporations are already using AI-enhanced surveillance methods to monitor their employees’ every move. Amazon, for example, requires delivery workers to download an app (Mentor) that scores their driving, supposedly in the name of safety. Some drivers report being tracked even when they are not working.

More broadly, the consultancy Gartner estimates that the share of large employers using digital tools to track their workers has doubled since the start of the COVID-19 pandemic, to 60 per cent, and it is expected to reach 70 per cent within the next three years. Although the available evidence suggests that more surveillance is correlated with lower job satisfaction, even many employers who agree that monitoring their employees raises “ethical concerns” still do it.

True, surveillance technology is not inherently anti-human. On the contrary, it could improve safety (such as by monitoring for active shooters) or convenience. But we must find the right balance between these benefits and privacy, and we must do everything we can to ensure that AI technologies are not biased (such as on the basis of skin colour or sex).

Tackling these issues will require new international norms and cooperation. Any AI used to track or punish workers should be disclosed, with full transparency about how it makes recommendations. If you are fired because an AI deemed your behaviour problematic, you should be able to contest that decision. Yet, because many of the new AIs are “black boxes” that even their developers do not understand, they automatically limit the scope of due process.

Even in a country as polarised as the US, people are likely to unite in favour of restrictions on surveillance. Everyone from left to right shares a basic concern about being constantly watched, even if their specific fears differ. The same is true across the world’s democracies.

In this bifurcated world, one camp will probably develop robust standards to govern when and how surveillance may be used. The topic will remain controversial, but the technology will be substantially under democratic control. In the other camp, autocratic leaders will use extensive surveillance to keep their populations under control. There will be cameras everywhere, facilitating as much repression as the regime sees fit to use.

A big economic choice looms for the world’s democracies. Should we continue to buy goods from countries where workers are subject to surveillance technologies that we would not countenance at home? Doing so would encourage more surveillance and more repression by regimes that are increasingly seeking to undermine our own democracies. It would be much better for shared prosperity if we advocated for less surveillance technology, such as by stipulating that only products fully compliant with surveillance safeguards will be allowed into our markets.

In the 1990s and early 2000s, the US and Europe granted China much greater access to their markets on the assumption that exports from low-wage countries would benefit domestic consumers and contribute to democratisation at the source. Instead, China’s export-fuelled growth has bolstered its regime.

We should no longer have any illusions about the consequences of allowing unfettered market access for countries that keep tight control over their workers. Will AI technologies be used to help workers, or to rob them of their dignity? Our trade and patent policies must not be blind to such questions.

 

Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at MIT’s Sloan School of Management and a co-author (with Daron Acemoglu) of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” (PublicAffairs, 2023). Daron Acemoglu, professor of Economics at MIT, is a co-author (with Simon Johnson) of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” (PublicAffairs, 2023). Sylvia Barmack is a senior pursuing a major in data science at the University of Michigan. Copyright: Project Syndicate, 2023. www.project-syndicate.org