Ethics & Algorithms Toolkit

A risk management framework for governments (and other people too!)

The question isn’t whether you should, but when you will start.



Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.

GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them.

We developed this because:

  • We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
  • We wanted an approach which governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations.

Our goals for the toolkit are to:

  • Elicit conversation.
  • Encourage risk evaluation as a team.
  • Catalyze proactive mitigation strategy planning.

We assumed:

  • Algorithm use in government is inevitable.
  • Data collection is typically a separate effort with different intentions from the analysis and use of it.
  • All data has bias.
  • All algorithms have bias.
  • All people have bias. (Thanks #D4GX!)

David Anderson

Advisory Board Member, Data Community DC

Joy Bonaguro

Former Chief Data Officer, City and County of San Francisco

Miriam McKinney

Analyst, Center for Government Excellence @ Johns Hopkins University

Andrew Nicklin

Futurist At Large, Centers for Civic Impact @ Johns Hopkins University

Jane Wiseman

Senior Fellow, Ash Center for Democratic Governance and Innovation @ Harvard

The Toolkit

To use this toolkit, we assume you:

  • Have some knowledge of data science concepts or experience with algorithms
  • Largely understand your data

The toolkit comprises five documents. Access them individually below, or download them all in a zip file:

Overview and Introduction

The overview section provides level-setting background information that will be useful when traversing the subsequent sections of the toolkit. We have outlined a few real-life scenarios where the toolkit might be applied, provided definitions, and more. For example, while we briefly touched upon machine learning in the previous module, the toolkit overview helps you understand more about the various types that exist, such as supervised learning, unsupervised learning, and so on.

Part 1: Assess Algorithm Risk

In Part 1 of the toolkit, there are six major steps (or questions) to help you and your stakeholders characterize an algorithm. Many of these steps have multiple components, and each includes clear instructions on how to summarize those components in order to complete the step.

Since this document can be difficult to navigate, we have developed a worksheet for Part 1, designed to help you track your responses to the individual steps and how they combine into overall risk values. It’s worth noting that although answering a series of questions seems simple, you will almost certainly need additional people to help, whether they are stakeholders, data analysts, information technology professionals, or representatives from a vendor you are working with. Don’t expect to complete this part of the toolkit in just a few hours. Some of the steps will evoke considerable discussion.
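If you track worksheet responses digitally, the combination step can be sketched in a few lines. This is an illustrative sketch only: the step names below are hypothetical, and the convention of taking the worst per-step rating as the overall risk is an assumption for demonstration, not the toolkit’s own scoring rule (see the Part 1 worksheet for that).

```python
# Hypothetical sketch: combine per-step risk ratings into one overall level.
# Assumes each step is rated "low", "medium", or "high"; the overall risk is
# taken as the worst (highest) rating -- a conservative convention chosen for
# illustration, not the toolkit's actual method.

RATINGS = {"low": 1, "medium": 2, "high": 3}

def overall_risk(step_ratings):
    """Return the worst rating among all step responses."""
    return max(step_ratings.values(), key=RATINGS.get)

# Example step names are invented for illustration.
ratings = {
    "impact": "high",
    "data_fitness": "medium",
    "accountability": "low",
}
print(overall_risk(ratings))  # -> high
```

A team might maintain a shared spreadsheet instead; the point is simply that each step’s rating should be recorded explicitly before any overall judgment is made.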

Part 2: Manage Algorithm Risk

Knowing how concerned you should be about various aspects of your algorithm is really only half the battle. Although there may be a few cases where the risks are too severe to proceed, there are often ways to mitigate them. Using Part 2 of the toolkit, you can identify specific techniques to address the considerations you identified in Part 1.

The results of Part 2 will be highly customized and specific to the factors you evaluated in Part 1. Some of the recommendations can introduce significant burdens that are more appropriately addressed within large-scale programs, such as those that support the social safety net. It is not unusual to need executive and political support to be successful.


Appendices

Although the appendices aren’t required reading in order to use the toolkit, they provide plenty of additional context and depth. The first appendix contains a list of in-depth questions to help you understand your data in more detail. The second provides additional background on bias and how easily it can arise.


We are grateful for the media stories covering the toolkit. If you’d like to write an article or otherwise help spread the word, please contact the Center for Government Excellence.

The contents of this site and the Ethics & Algorithms Toolkit are licensed under a Creative Commons Attribution 4.0 International License.

The site code is licensed under an MIT license.