Fever screening cameras. Disinfecting UV robots. Social-distance-sensing smartwatches. These are just a few of the wave of new technologies being used to fight the COVID-19 pandemic. London Heathrow Airport has been testing passenger fever screening technology, and 33 countries are now employing robots in more “hazardous” spaces, such as hospitals, restaurants, and public transportation. Bi-weekly testing and online symptom reporting are the norm at some universities, and the NBA is coupling contact tracing apps with video technology to ensure players maintain a COVID-free “bubble”.
At the same time, the pandemic has deepened the existing digital divide, worsened racial inequality, and accelerated job loss. An NPR poll found that at least half of city dwellers in four American cities have experienced wage cuts or job loss since the onset of the pandemic.
At their core, these technology-centric responses to COVID-19 provide many benefits. They remove workers from hazardous tasks and increase the efficiency of critical services and everyday activities. They allow companies to return to physical workplaces, which can stimulate productivity and satisfaction. They are also prompting important decisions to improve the health outcomes of the built environment and, in many cases, encouraging higher standards for safety and accessibility through engineering and policy.
Given their convenience and the flexibility they offer, these solutions do have a place post-pandemic and are likely to stay. But we may be moving too quickly through the problems COVID-19 poses. What are the pitfalls of these technology-driven, safety-enhancing products and tools? There is reason for concern about their potential impacts on at least two critical issues: privacy and equity.
The pandemic has demanded immense physical and social sacrifices, and one of its lesser-discussed impacts is the threat to data privacy. Many of these devices invite comparisons to Orwellian “Big Brother” surveillance: sensor technologies that track individual movement and facial features make it difficult to remain truly anonymous. The A.C.L.U. has called out organizations using fever screening technologies over their potential for surveillance.
The most critical concern is that such technologies would not directly benefit many of those most vulnerable to COVID-19. Outside of hospitals, these innovations have reached only a small fraction of society. Many service jobs are easily replaced or difficult to do remotely, which spells trouble for low-wage workers. Such losses are even more pronounced in countries without widespread residential computer access. Younger students are also dealing with a significant reduction in educational quality. While new devices and programs have improved virtual learning, they cannot recreate the libraries, cafeterias, and other support services that low-income students rely on, assuming students can even afford such devices in the first place. It can be difficult to stay optimistic about innovations like indoor and outdoor air quality sensors when farm workers in California are picking fruit without masks in wildfire smoke at AQI levels above 200.
Health is deeply connected to the environment, and technology can be used to address both. However, without forethought about smart technologies, and about the physical and social protocols that should precede them, we can inadvertently exacerbate long-standing health and environmental justice issues.
As an Arc intern who cares about tangible sustainability commitments, I am very interested in the potential for data-driven systems to create new opportunities to conserve resources. I am equally, if not more, aware that innovation without critical thought will harm communities, many of whom have managed to live sustainably without such “advanced” tools and social support. Where physical and social policies are just as effective, if not more so, technological solutions should not be the default, given the known concerns about equity and the unknowns about data privacy. More broadly, shifting capital toward every service in society, not just traditional and often lucrative professional spaces, is imperative to combat the massive inequality COVID-19 has exposed. With these points in mind, I offer the following questions:
- Who is most affected by the pandemic, and what solutions and support services exist to alleviate their concerns? How are we prioritizing those issues?
- Which COVID-19 innovations pose potential threats to privacy (facial recognition, automation, location-based data, etc.), and what are the long-term consequences for privacy? Are there physical or social analogs to these products that offer other benefits?
- How can smart building and infrastructure products better support human rights, health, and equity?
Amidst the extreme difficulties of the pandemic, conversations about the purpose of workspaces, accessibility in health and education, and the connection between infrastructure and public health are both necessary and exciting. In the past six months, our societies have shut down, brainstormed, and, in optimistic cases, found old and new innovations to kickstart daily activity. However, these innovations must arrive at a pace that allows every citizen to play a role in evaluating their long-term consequences. As we embed these tools and products into our daily lives, a healthy dose of skepticism and questioning will favor equitable improvements in the long run.