The Use of Artificial Intelligence in Predictive Policing

The National Association for the Advancement of Colored People (NAACP) calls on state legislators to evaluate and regulate the use of predictive policing and Artificial Intelligence (AI) within law enforcement agencies. While these tools are promoted as a way to allocate law enforcement resources efficiently and objectively, mounting evidence and growing concern indicate that they can amplify racial bias, producing disparate outcomes such as disproportionate surveillance and policing of Black communities. This policy brief outlines the risks of unregulated AI in predictive policing and offers recommendations for ensuring these technologies are used ethically, transparently, and in a manner that promotes justice and equity.

Background

Predictive policing is the use of AI software that applies algorithms to data in order to forecast criminal activity, with the goal of deploying law enforcement resources efficiently. Jurisdictions that use these tools argue that they enhance public safety, but there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement. The predictions these systems produce come from compiling and analyzing historical crime data and records of police activity. Relying on historical criminal data to make policing decisions is inherently biased: because of targeted over-policing and discriminatory criminal laws, Black communities are disproportionately represented in that data, so algorithms trained on it direct still more enforcement toward the same communities, reinforcing the original disparity.
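To illustrate this feedback loop, the toy simulation below is a minimal sketch in Python; every figure and parameter in it is hypothetical, chosen only to show the dynamic, not drawn from any real system or dataset. Two areas have identical underlying crime rates, but one starts with a larger historical record, patrols are allocated in proportion to recorded incidents, and incidents are more likely to be recorded where more patrols are present.

```python
# Toy simulation (illustrative only, all numbers hypothetical):
# two areas with identical true crime rates, but patrols are
# allocated by *recorded* incidents, and recording probability
# rises with patrol presence -- so an initial disparity compounds.
import random

random.seed(0)

TRUE_RATE = 50       # actual incidents per period, same in both areas
TOTAL_PATROLS = 10   # patrol units allocated each period

# Hypothetical starting point: area A has more *recorded* history
# (e.g., from past over-policing), not more actual crime.
recorded = {"A": 30, "B": 10}

for period in range(20):
    total = recorded["A"] + recorded["B"]
    for area in ("A", "B"):
        # Allocate patrols in proportion to each area's record share.
        patrols = TOTAL_PATROLS * recorded[area] / total
        # Chance an incident is observed/recorded grows with patrols.
        detect_prob = min(1.0, 0.05 + 0.08 * patrols)
        new_records = sum(
            1 for _ in range(TRUE_RATE) if random.random() < detect_prob
        )
        recorded[area] += new_records

print(recorded)  # area A accumulates far more records despite equal crime
```

Because the allocation rule only ever sees its own records, the gap widens on its own; feeding the system more data of the same kind does not correct it.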

Numerous cities across the United States have already adopted predictive policing tools, and as noted in a letter from US Senators to the Department of Justice (DOJ), "mounting evidence indicates that predictive policing technologies do not reduce crime… Instead, they worsen the unequal treatment of Americans of color by law enforcement." The Senators call on the DOJ to halt funding for predictive policing systems until further information is provided and audits and due process protections are in place.

Challenges

Communities of color, and the Black community in particular, are disproportionately affected by law enforcement. We face higher rates of surveillance, stops, and arrests, and biased algorithmic predictions will only deepen these disparities.

  • Bias and Discrimination: AI models can inherit biases from historical crime data, leading to discriminatory policing practices.
  • Lack of Transparency: The proprietary nature of predictive policing algorithms prevents the public from understanding, or providing input on, how decisions about policing and resources are made.
  • Erosion of Public Trust: Over-policing has already done tremendous damage, marginalizing entire Black communities. Law enforcement decisions based on flawed AI predictions can further erode trust in law enforcement agencies.

Recommendations

  • Implement Rigorous Oversight: Establish independent oversight bodies to review and monitor the use of AI in policing, ensuring algorithms are fair, accurate, and non-discriminatory.
  • Mandate Transparency and Accountability: Require law enforcement agencies to disclose the use of predictive policing tools, including the data sources, methodologies, and impact assessments.
  • Promote Community Engagement: Involve community members in the decision-making process regarding the use of AI in law enforcement to build trust and accountability.
  • Ban the Use of Biased Data: Prohibit the use of historical crime data and other sources known to contain racial biases in predictive policing algorithms.
  • Establish Legal Frameworks: Enact legislation to regulate the development, deployment, and evaluation of AI in policing, with strict penalties for violations of civil liberties.