State law enforcement agency awards contract for AI tech

Earlier this month, the state’s Department of Law Enforcement announced that it has awarded a contract to Microsoft to incorporate the company’s Azure artificial intelligence technologies into the agency’s operations.

“DLE is leading the way in leveraging the latest Azure AI technologies to enhance public safety and improve the efficient operation of the department,” said Ernie Fernandez, vice president, state and local governments at Microsoft Corp. “This collaboration demonstrates how industry and public safety agencies can work together to responsibly deliver technology innovations to improve communities across the U.S.”

Modernization. Streamlining. Enhanced public safety. Improved response times.

Big promises.

But a department news release announcing the AI contract contained no details about what will actually be done, what oversight will be established, or how problems, including “hallucinations,” in which AI systems fabricate information, will be identified and corrected.

What could possibly go wrong? Apparently quite a lot, if AI tools aren’t incorporated carefully, transparently, and ethically, and with proper oversight and public buy-in.

Brooks Baehr, DLE public information officer, said the contract is valued at $3.14 million.

I’ve requested a copy of the contract and related information, and have been assured that it is coming soon.

In the meantime, I asked Google’s Gemini AI how artificial intelligence is already being used in law enforcement. Yes, I know that this raises its own issues for journalism. But Gemini did provide a useful summary. This was its answer.

Artificial intelligence (AI) is being used in law enforcement in a variety of ways, including:

Predictive policing: AI algorithms can analyze historical crime data to identify patterns and predict where and when crimes are likely to occur. This information can be used to allocate resources more effectively and prevent crime.

Facial recognition: AI-powered facial recognition software can be used to identify suspects in surveillance footage or to match faces to mugshots.

License plate readers: AI can be used to analyze license plate data to identify stolen vehicles or track the movements of individuals.

Chatbots: AI-powered chatbots can be used to answer questions from the public and provide information about crime prevention and safety tips.

Automated speech recognition: AI can be used to transcribe audio recordings of interviews and interrogations, making it easier for investigators to review evidence.

It then continued by noting several areas of concern.

While AI has the potential to be a powerful tool for law enforcement, there are also a number of concerns about its use. These include:

Bias: AI algorithms are trained on data that may be biased, which can lead to biased results. For example, facial recognition software has been shown to be less accurate at identifying people of color.

Privacy: The use of AI in law enforcement can raise privacy concerns, as it may involve the collection and analysis of personal data.

Overreliance: There is a risk that law enforcement agencies may become overly reliant on AI, which could lead to errors and miscarriages of justice.

Lack of transparency: AI algorithms can be complex and difficult to understand, which can make it difficult to hold law enforcement agencies accountable for their use of AI.

A quick online search turned up numerous examples of reports and recommendations from a variety of sources. While recognizing the current and potential uses of AI technology, they also stress the need for agency transparency, oversight, and accountability going forward. These are, unfortunately, areas where Hawaii’s law enforcement community has not performed well in the past.

“Artificial Intelligence in the Criminal Justice System: Demystifying artificial intelligence, its applications, and potential risks,” National Institute of Justice, August 2020.

Technology might enable certain capabilities; however, that does not mean that the use of AI is always ethically appropriate in the criminal justice context. Major ethical issues—such as privacy, fairness, transparency, security, and accountability—are regularly being discussed in the AI community (see Figure 5). Recently, the Department of Defense issued five ethical principles related to AI applications; these principles mention that AI applications should be responsible, equitable, traceable, reliable, and governable. Defining ethical use cases for AI will require ongoing conversations between criminal justice leaders and stakeholders.

“Artificial Intelligence Applications in Law Enforcement: An Overview of Artificial Intelligence Applications and Considerations for State and Local Law Enforcement,” Office of Justice Programs, U.S. Dept. of Justice, August 2020.

Law enforcement’s use of AI is prompting important but difficult conversations that communities, states, and even nations will likely answer in different ways. There are important ethical considerations to consider when deploying AI solutions—including issues of fairness, accountability, transparency, security, and privacy. Some communities may decide that certain technologies are not appropriate for use in law enforcement due to privacy or other ethical concerns. For example, in 2019, California became the first state to ban the use of FRTs by law enforcement.

“Artificial Intelligence and Policing: Year in Review 2023,” Electronic Frontier Foundation, December 2023.

…the possibility of plugging a vast amount of information into a system and getting automated responses or directives is also rapidly becoming a major problem for innocent people hoping to go un-harassed and un-surveilled by police. So much has been written in the last few years about how predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and just plain-old don’t work. One investigation from the Markup and WIRED found, “Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.”

“Police use of AI: A force for good or a public threat?” Eviden.com, September 2023.

Key public concerns

One of the public’s key concerns is that the use of AI may only serve to justify existing discrimination or bias — and may even amplify it. However, AI lacks human emotions and, as a result, may have great potential to be free of discrimination and bias. Nevertheless, if the data used by AI systems contains a bias, if those who built the algorithm are themselves discriminatory, or if the effects of biased data simply haven’t been considered, there is a real risk that the AI tool will amplify discrimination and inequality within society. Consider this example: if crime data collected from a largely minority ethnic neighborhood is used by an algorithm to predict crime in areas where there is an ethnic minority, then without recognizing and correcting for this bias, the AI outcomes could lead to police interventions that are disproportionately focused on these communities.

Using the right data in the right way

Steering clear of biases and discrimination caused by misinformation or misappropriated data requires a concerted effort on the part of the police. To ensure this doesn’t happen, they must collect and use the right data in an appropriate, unbiased manner. This means developing and deploying AI capabilities with a diverse team of people that together can ensure the goal of eliminating bias is achieved. Once the capability is deployed, police forces must communicate clearly, effectively and transparently about its purpose, how it is used and how they have mitigated the risk of bias.

Another key public concern is that data will be misused in a way that infringes upon peoples’ rights and freedoms, or that it will not be safeguarded. Complying with current legislation, standards and other relevant regulations, such as management of police information (MoPI), must be a given — but the public will expect police to go further.
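The feedback loop Eviden describes is worth spelling out, because it requires no bad intent to operate. Here is a toy simulation, entirely my own illustration and not anything drawn from DLE’s contract or from Microsoft’s products: two neighborhoods with identical underlying crime rates, where one starts out with more recorded crime simply because it was patrolled more heavily. If future patrols are allocated in proportion to past recorded crime, the original disparity never washes out.

import random

random.seed(1)

# Two neighborhoods with the same underlying rate of offenses per patrol-hour.
true_rate = {"A": 0.05, "B": 0.05}

# Historical records: A was patrolled more heavily, so more crime was recorded there.
recorded = {"A": 120, "B": 40}

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" allocation: assign 100 patrol units in proportion to past recorded crime.
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    # More patrols mean more of the same underlying activity gets observed and recorded.
    for n in recorded:
        observed = sum(random.random() < true_rate[n] for _ in range(patrols[n] * 10))
        recorded[n] += observed
    print(f"Year {year}: patrols {patrols}, cumulative recorded crime {recorded}")

The point of the sketch is not that any real system works this way, only that “the data” a predictive tool learns from is itself a product of past enforcement decisions.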

“AI in Law Enforcement: Old and New Challenges,” Lexipol, January 2024.

During the recent International Association of Chiefs of Police (IACP) conference in San Diego, many chiefs expressed concern that officers have started using AI for writing police reports and probable cause statements. Upon returning home, I decided to give ChatGPT a try to see what it could come up with….

Reports, court documents, and testimony pertaining to any area of the law, whether it be search and seizure, legal aspects of confessions, or use of force, require specific articulable facts that arose from that unique incident. It does not matter if similar facts and observations repeat themselves (driving while intoxicated cases come to mind); the facts must be what you observed in that case, period. There are always nuances because everyone and every situation is different. Being able to properly and accurately articulate facts is a critical skill for officers. Find a way to cut corners and officers will never get better, while compromising their integrity at the same time….

The technology may be new, but the root of the problem is not: Using AI to generate reports is at heart an ethical lapse.

“Artificial Intelligence in Predictive Policing Issue Brief,” NAACP.

Policy Recommendations

• Implement Rigorous Oversight: Establish independent oversight bodies to review and monitor the use of AI in policing, ensuring algorithms are fair, accurate, and non-discriminatory.

• Mandate Transparency and Accountability: Require law enforcement agencies to disclose the use of predictive policing tools, including the data sources, methodologies, and impact assessments.

• Promote Community Engagement: Involve community members in the decision-making process regarding the use of AI in law enforcement to build trust and accountability.

• Ban the Use of Biased Data: Prohibit the use of historical crime data and other sources known to contain racial biases in predictive policing algorithms.

• Establish Legal Frameworks: Enact legislation to regulate the development, deployment, and evaluation of AI in policing, with strict penalties for violations of civil liberties.

I’ll let you know when I receive a copy of the contract itself.


5 thoughts on “State law enforcement agency awards contract for AI tech”

  1. JKS

    The dumber a person or group of people are, the more amenable they are to taking orders from AI.
    After a lifetime of not applying themselves, they can’t/won’t think past an instruction to question its validity, so they readily submit their free will to the AI authority.
    By doing so, they get better results than they could ever have hoped to achieve on their own, so they advance in life in a way which has previously been denied to them.
    Soon our leaders are mostly AI-dependent dolts.
    Dunning-Kruger on steroids.

    1. Lynn

      Yikes! I don’t pretend to have the intellectual capacity to fully grasp what AI means for us humans. But what you say is something that never occurred to me, and I intend to think much on what you’ve said. Especially in light of a recent conversation with a civil servant who told me that they have been warned to expect more AI in their future government careers. This is because the government simply cannot hire enough qualified warm bodies to fill current and future vacancies, so it is looking to AI to fill the void.

  2. Bill Gates

    On a serious note, I hope this includes workflow software and customer relationship management features to help with follow-up from callers, emailers, and other community contact. It’s dismaying how many local government agencies rely on email to track projects.
