Maximize Impact, But Do No Harm

Ethical Technology Innovation for COVID-19

Posted April 15, 2020 by Rakesh Bharania and Evan Paul

“History is full of massive harm done by people with great power who are utterly convinced that because they believe themselves to have good intentions, they cannot do harm or cause grave harm.” – Zeynep Tufekci

The 2020 COVID-19 pandemic has created an urgent global health emergency. It is natural, when an emergency of this kind occurs, for technologists and innovators across healthcare, humanitarian, and public safety organizations and their private sector partners to ask themselves, “How can we help?” In some cases, these organizations have a mandate or duty to respond to the crisis. In others, simple empathy and the desire to alleviate suffering move individuals and organizations to action.

Dear Technologists: Move Fast, But Do No Harm

While the urge to help is natural and should be celebrated, all technology efforts related to the COVID-19 response must abide by two fundamental principles: focus on the most critical problems, and “do no harm” (i.e., the precautionary principle). As a community of innovators, we must collectively guard against spending limited resources (time, expertise, funds) on efforts that are poorly scoped or lack an evidence base showing they will contribute meaningfully to the response. In the rush to “do something,” we can slip into “solutionizing,” a form of magical thinking that assumes that simply deploying technology will have a meaningful impact on the problem, without really considering the underlying non-technical realities. Any effort not grounded in a full appreciation of the problem and its complexity risks producing technology that lacks the potential for impact and is not fit for purpose, or worse, causes downstream harm to individuals and communities. Ensuring a focused technical response that minimizes negative impacts and harms must be a core ethical pillar of any technology solution intended for acute crisis response.

For mission-driven organizations and private sector innovators, we believe a quick assessment of potential technology projects related to the COVID-19 response is necessary to ensure that limited resources are not wasted on low-impact efforts, and that those efforts are structured to minimize the risk of harm from unintended consequences. There is an obvious sense of great urgency in meeting the immediate needs of a pandemic escalating at an exponential rate, in order to limit human suffering. That urgency, however, must be tempered by the understanding that any crisis or emergency can pressure technologists to abandon principles of good design, security, and privacy at the very moment those principles are needed most.

First: Choose an Important Problem to Solve

When a crisis hits, many of us feel the need to do something, anything. While that impulse comes from a good and empathetic place, doing “anything” is not inherently valuable. From the perspective of human health and societal safety, some problems are more urgent and important than others, so when an organization has several potential efforts it could act on, it must prioritize them. Triage, a concept from emergency medicine, is used to sort patients by need when resources are limited. In our context, innovators may have multiple ideas on how to help, but they will need to sort out which efforts deserve the most attention and resources.

Look at your possible engagement opportunities and sort them in priority order. Efforts that focus on the immediate challenges of human health and safety or the operation of essential lifelines and services should take precedence over other kinds of projects.
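As a minimal sketch of this triage step (the tier labels and candidate efforts below are our own illustrative assumptions, not part of any official framework), sorting opportunities by priority might look like:

```python
# Illustrative triage sketch: sort candidate efforts so that work on
# human health/safety and essential lifelines comes first.
# Tier names and example efforts are hypothetical.

PRIORITY_TIERS = {
    "human_health_and_safety": 0,  # highest priority
    "essential_lifelines": 1,      # power, water, food, communications
    "other": 2,
}

def triage(efforts):
    """Return efforts sorted from most to least urgent."""
    return sorted(efforts, key=lambda e: PRIORITY_TIERS[e["category"]])

candidates = [
    {"name": "volunteer matching portal", "category": "other"},
    {"name": "PPE supply-chain tracker", "category": "human_health_and_safety"},
    {"name": "utility outage dashboard", "category": "essential_lifelines"},
]

for effort in triage(candidates):
    print(effort["name"])
```

The point of the sketch is simply that prioritization should be explicit and repeatable, rather than decided ad hoc under pressure.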

We suggest answering the questions in the remainder of this article in order, not moving to the next question until the preceding one is answered.

  1. Have you scoped the problem you are trying to address? It is important that technology efforts for such a large and complicated challenge as the pandemic do not try to “boil the ocean.” Make sure your organization has strictly defined the problem this effort intends to address. If the scope is too broad, work to reduce the scope to something reasonable and achievable.

  2. Is this effort duplicative? It’s possible the problem has already been solved elsewhere. Given the urgency of the crisis and the need to conserve resources, it may be better to deploy an existing, proven solution than to create something new. Make sure you’re clear on the value-add of any new effort compared to similar efforts that may already exist.

  3. Do you have the necessary capacities and competencies? Look at your organization’s core competencies and capacities critically. Given the well-scoped problem from the earlier question, assess whether you have the right expertise, buy-in and resources to actually execute on this effort. What else do you need?
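The “answer in order” rule above can be sketched as a gated checklist: evaluate the questions sequentially and stop at the first one not yet satisfied. The question wording paraphrases this article; the data structure and function names are our own assumptions.

```python
# Illustrative sketch of sequential gating: do not move to the next
# assessment question until the preceding one is answered.

SCOPING_QUESTIONS = [
    "Have you scoped the problem you are trying to address?",
    "Is this effort non-duplicative (or a clear value-add)?",
    "Do you have the necessary capacities and competencies?",
]

def first_open_question(answers):
    """Return the first unanswered question, or None if all are satisfied.

    `answers` maps question text to True once the team has answered it.
    """
    for question in SCOPING_QUESTIONS:
        if not answers.get(question, False):
            return question
    return None
```

For example, a team that has scoped the problem but gone no further would be directed to the duplication question next, rather than skipping ahead to staffing.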

Next: Maximize Impact, Minimize Harm in Your #COVID19 Technology Innovation Efforts

Now that you’ve evaluated your possible projects in terms of priority, scope and capacities, the next step is to determine how to move forward in a way that maximizes the benefit of the effort, and minimizes harms or unintended negative consequences.

  1. Is this effort informed by expert opinion and guidance? Any technology effort for the COVID-19 crisis must be informed by expert opinion in the area being addressed, rather than being developed solely by technologists in a vacuum. If the technology effort is meant to enable better sanitation in a refugee camp to minimize COVID-19 transmission, is the underlying logic informed by Water, Sanitation and Hygiene (WASH) experts? Is the healthcare management platform informed by hospitals and health officials who can describe what the solution actually needs to do? Use experts and evidence to inform your logic.

  2. Will the solution explicitly control for risks to security, privacy, civil liberties and human rights? Emergency situations often become an excuse to collect and use data in new ways that may present long-term privacy problems, or to create digital surveillance systems that can be repurposed to harm civil liberties and civil rights in communities. Even previously available data may cause harm when applied and combined in a crisis response context. While it is impossible to avoid every possible harm from a project, we have an obligation to control for the harms we can foresee. Good frameworks for evaluating these risks in acute, high-consequence crisis environments include the Harvard Humanitarian Initiative Signal Code and the ICRC Handbook on Data Protection in Humanitarian Action. When possible, have qualified people beyond the project team review these risks, since outsiders may spot potential harms more easily than those working on the project.

  3. Will the proposed solution be sustainable for the intended audience? How long will the end users of this effort need this solution? Can the organization sustain the solution (operating costs, licenses, staffing and support expertise) once it is deployed? Is the solution designed to be rapidly deployable, or does it require a lot of effort to bring into production? We suggest designing for long-term sustainability and rapid initial deployment from the outset.

  4. Does the proposed effort take into account the user’s context? Consider user interface design: does the solution need to assume the user is under significant stress, tired and overworked, uncomfortable from wearing Personal Protective Equipment (PPE), or affected by other environmental factors? There should be safeguards around high-consequence actions to ensure they cannot be performed by mistake or in confusion. If the solution is intended for a low-resource or remote environment, is it designed for low-bandwidth connectivity, older devices, and offline use, and is it accessible to people with disabilities? Does the interface work in the language(s) of the intended users? Does it use terminology familiar to those users rather than technology jargon? Lastly, is there a feedback mechanism in the design so that end users can report problems, raise concerns, or dispute outcomes of the solution to support staff who can then investigate those issues?

Because of the urgent nature of the COVID-19 pandemic, it may not be possible for technologists to follow traditional development models to identify and mitigate risks. To minimize the delay to effective, impactful technology action while still resisting the urge to “solutionize” and minimizing harm, we propose these questions to our community of innovators and Trailblazers. They should help facilitate rapid assessment of any proposed technology effort related to the COVID-19 crisis, or to any other urgent humanitarian crisis for that matter.

About the Authors

Rakesh Bharania is Director of Humanitarian Impact Data at Salesforce.org. He has spent more than 25 years in the humanitarian sector, focusing on the intersection of emerging technologies and international humanitarian crisis response and development. Rakesh has also engaged across the board with policy-makers, senior government officials, academia, first responders, NGOs/IGOs, volunteer organizations and industry leaders.

Evan Paul is Director of Global Impact Data at Salesforce.org. He has worked in the nonprofit sector for over 20 years focused on technology for large-scale collaboration and social change. He has led product and project teams focused on a range of issues, including climate and energy policy, sustainable forestry, disaster recovery, marine spatial planning, and strategic philanthropy.
