Artificial Intelligence (AI) is a growing trend and concern in policing. If you’re looking to lead people in policing over the next 10 years, there’s no way you can ignore this! You’re going to be in the thick of it and on the cutting edge; being a bystander is not an option.

I was inspired to write this blog following a recently published interview with Temporary Chief Constable Alex Murray, the AI lead for the National Police Chiefs' Council (NPCC). I also recorded a recent deep-dive AI police leadership podcast for my premium subscribers.

In this comprehensive blog, I aim to inform and signpost you to the top issues surrounding police use of AI and the leadership considerations entailed. I'll set out some of the key challenges while identifying the benefits and opportunities for policing in using this technology, along with the risks of policing remaining behind the curve.


Police Leadership of AI and Key Goals

Who is now leading on policing's approach to AI? Shortly due to take up his new role as Director of the National Crime Agency (NCA), Alex Murray, the Temporary Chief Constable of West Mercia Police, is the first NPCC lead for developing and progressing AI technology. He is committed to overseeing its responsible use to improve policing.

“There are huge benefits to using AI across the wider criminal justice system, not just in policing, we should not shy away from it.” –  T/CC Murray

The role has three key objectives for utilising AI, which mainly focus on the opportunity AI affords in saving time in the workflow of common tasks and data management:

  • Improving productivity and efficiency
  • Making policing more effective in cutting crime
  • Tackling the criminal use of AI

In his recent video interview on PolicingTV, T/CC Murray is keen to highlight the innovative work and ambitious pilots already underway across police forces. Some examples include:

  • Supporting control-room call handlers in managing demand and focusing on those most at risk.
  • Developing and improving redaction tools, transcription and translation services.
  • Using tools to search, refine and distil huge amounts of data to identify where potential child exploitation is occurring.
  • Countering the rise of ‘deepfakes’ frustrating crime investigations.

But before we delve further, it’s probably a good idea to define what this AI trend is all about, seeing as most people don’t understand it…


Understanding AI in Simple Terms


What is AI? A good starting point! Below is the best definition I can give you of AI, encompassing its mechanical nature and its attempt to do intelligent things:

“AI is software developed so computers can perform complex tasks that typically otherwise require human intelligence (e.g. pattern recognition, problem solving, decision making). A key element is AI’s ability to learn by itself to improve its own algorithms.”

And if you’re interested, here’s how AI defines itself when asking ChatGPT for a definition:

“Artificial Intelligence (AI) is a branch of computer science focused on creating systems and machines capable of performing tasks that typically require human intelligence. These tasks include learning from experience, recognizing patterns, understanding natural language, solving complex problems, and making decisions. AI systems leverage algorithms, data, and computing power to simulate cognitive functions such as perception, reasoning, and problem-solving.” – ChatGPT

Imagine you could create a super-smart robot that can learn and do things only humans can normally do. For example, understanding language, recognising faces, or playing games. This is what AI is about, it’s like making a machine or a computer program that can think and learn on its own, almost like a human brain, but made of code and circuits, not neurons and synapses.

And as it stands in 2024, AI is mostly about doing complex or detailed tasks quicker. It’s like a more practical and intelligent ‘Ask Jeeves’. AI is not (yet) very successful at trying to emulate true human creativity or innovation.

On this limitation, you’ll have seen for example how AI image generation still looks weird (especially when depicting humans). And there’s a reason most people just type “human please!” to an annoying chatbot on a website as companies try to save money in their call centres. It’s also pretty obvious when a news article or blog has been written by AI, with no added value from the publisher. If you want to be sure, there are even AI plagiarism-detection tools (themselves driven by AI!) which check passages of text for you.

Alex Murray seems to recognise this current situation and limitations of AI when he states:

“The public can be assured AI is not replacing officers. Police will remain at the heart of everything we do because violent disorder, domestic abuse, child sexual exploitation for example, will always need a trained human officer to interact, offer support and make the final decisions and that will never change.” – T/CC Murray


How Does AI Work and Why is it Important?


To work, AI requires lots of information (data) to learn from. Just like how humans learn from experience or books, AI uses algorithms, i.e. sets of rules or steps, to find patterns in data and learn from them. This process is called machine learning. AI can then use that learning to make decisions or predictions.
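To make that concrete, here's a deliberately tiny, hypothetical sketch of 'learning from data': nobody tells the program which words signal a high-risk call, it derives that from labelled examples. (The data, labels and risk categories below are invented purely for illustration, not taken from any real police system.)

```python
from collections import Counter

# Invented training examples: short call summaries with labels.
training_data = [
    ("threats made weapon seen", "high_risk"),
    ("weapon reported threats shouted", "high_risk"),
    ("noise complaint loud music", "low_risk"),
    ("parking dispute noise", "low_risk"),
]

def train(examples):
    """Learn word counts per label from the labelled examples."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(model, text):
    """Pick the label whose training words best overlap the new text."""
    words = text.split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

model = train(training_data)
print(classify(model, "caller reports a weapon and threats"))  # high_risk
print(classify(model, "complaint about loud music"))           # low_risk
```

Real machine learning uses far more sophisticated statistics, but the principle is the same: the behaviour comes from the data, not from hand-written rules.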

AI is already everywhere and part of our lives. Alexa and Siri are well-known AI programs that can understand human speech (most of the time!) and respond accordingly. If you ever wondered how Netflix suggests films for you, or how Amazon knows which books you might like, that’s AI ‘learning’ from your interests and preferences. There are even claims that tech giants use passive ‘listening’ AI technology in your smartphone to send you related ads.

Why is AI Important?  In essence, AI is about doing things quicker and handling more data by making machines smarter.

“AI offers huge gains in productivity – there is always more demand for policing than it can supply and AI helps release officer time so they can concentrate on those who need them most.” – T/CC Murray

AI can enhance efficiency, for example, by performing repetitive tasks incredibly fast and without human error. This includes tasks such as sorting through millions of documents in seconds to find key words, phrases or search terms. That could massively reduce the time spent searching for evidence in complex investigations, an approach already being used in arenas like serious fraud.
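As a flavour of how the core of such a document-triage tool works, here's a minimal, hypothetical sketch of keyword search across a set of documents (the file names, contents and search terms are all invented for illustration):

```python
import re

# Invented documents standing in for a large evidence set.
documents = {
    "doc_001.txt": "Invoice approved by J. Smith for offshore transfer.",
    "doc_002.txt": "Meeting notes: quarterly review, nothing unusual.",
    "doc_003.txt": "Urgent: second offshore transfer flagged by bank.",
}
search_terms = ["offshore", "transfer"]

def find_matches(docs, terms):
    """Return, per document, which search terms it contains (case-insensitive)."""
    hits = {}
    for name, text in docs.items():
        found = [t for t in terms if re.search(re.escape(t), text, re.IGNORECASE)]
        if found:
            hits[name] = found
    return hits

print(find_matches(documents, search_terms))
```

Scaled to millions of files with indexing and smarter matching, this is the kind of grunt work a machine does in seconds that would take investigators weeks by hand.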

AI is also important for innovation. It helps to create new technologies, write code and solve complex problems. Real-world examples of pattern recognition include weather prediction and disease detection. Microsoft’s Seeing AI even helps blind people navigate the world by describing scenery, reading text, recognising currency, and even identifying friends.

The future of AI is expected to change many aspects of our lives even further, from how we work, live, move or even think. Driverless cars and other vehicles are just one application coming down the road. Might this reduce road collisions? And how should police respond to collisions involving such technology; who’s to blame?

Elsewhere, AI has already cracked the human brain, successfully turning thoughts into words; and that’s just using the tech available to the public. Intrusive? You might say! But it’s been done. It won’t be too long before technology is used to encode those words back into brainwaves, basically enabling situations almost indistinguishable from telepathy. Now was it just me, or did I just receive a targeted advert on Facebook about something I merely thought about earlier?…

There are already challenges for policing to navigate, with solutions being explored now. For example, Police Scotland is already being urged to abandon its facial recognition ambitions over questions of ethics and incoherence with the Peelian Principles of UK policing. South of the border, the NPCC is pushing ahead with such schemes regardless, under its Facial Recognition Board.

Other than the ability to mass-process faces or even communicate without talking, what could be some of the other opportunities and challenges for policing to overcome?


Ongoing Challenges of AI in Policing

What are some of the problems AI presents for policing? Policing cannot afford to remain behind the curve as technology develops, so it’s important to resolve the issues sooner rather than later, to exploit the potential AI offers.

  • Human Oversight: Human decision-makers will need to be involved at critical stages to validate AI-derived evidence, ensuring AI recommendations are sanity-checked and its decisions tested.
  • Training and Awareness: Legal and technical training will ensure investigators and prosecutors are versed not just in the law but also in the basics of AI, understanding its capabilities and limitations.
  • Transparency of AI Systems: AI systems must provide clear audit trails that detail specifically how they processed data, especially in policing. Where possible, using AI systems with open-source algorithms or those that can be independently reviewed for bias or manipulation.
  • Chain of Custody, Evidence Handling: Proving the integrity of evidence beyond reasonable doubt may pose significant challenges for UK police investigators and Crown Prosecutors especially in cases involving criminal misuse of AI. Areas of challenge will need bespoke new working practices. These will include ensuring meticulous documentation of how AI is used in evidence collection, including logs of AI interactions, modifications, or outputs.
  • Peer Review and Validation: AI outputs should be validated by independent experts or through peer review processes, which could include simulations or tests under controlled conditions to validate the AI findings. AI experts may need to be called upon to give expert testimony in cases and meet the required burden of proof.
  • Legal Standards and Disclosure: Disclosure requirements will need to ensure all AI-related evidence, including errors or biases, is disclosed to the defence, following the principle of providing material that might undermine the prosecution’s case or assist the accused. Legislation and policy frameworks will need to ensure compliance with existing or new laws specifically aimed at AI in legal proceedings, which might set standards for admissible AI evidence.
  • Ethical AI Use: AI poses threats to fundamental human rights. Therefore, adherence to ethical standards in the use of AI for law enforcement is essential. Mapping any proposed scheme’s adherence to the Code of Ethics throughout would be a good starting point.
  • Public Trust & Court Precedents: In relation to building trust, courts might develop a body of precedent around AI evidence, setting standards for what constitutes reasonable doubt in AI-influenced cases, which would guide future prosecutions.

By addressing the above issues with practical solutions, UK policing can utilise the benefits of AI throughout the criminal justice process. However, this also highlights the need for continuous adaptation as AI technologies evolve, ensuring its use remains fair and effective in the digital age.


Artificial SWOT for Policing…


For those aspiring to more strategic ranks of Inspector and beyond, it’s helpful CPD to think more strategically on the matter. With this in mind, here’s an example SWOT analysis to organise the various factors involved; in this instance, this SWOT has largely been formulated by prompting AI itself to weigh up the differing issues.

Strengths

  • Data Processing and Analysis Capabilities: AI can process vast amounts of data quickly, identifying patterns and predicting crime with higher accuracy than traditional methods.
  • Resource Optimisation: By predicting where/when/what crimes are likely to occur, police forces can allocate resources more effectively, potentially reducing response times and increasing the effectiveness of patrols. Think shift patterns!
  • Public Safety Enhancements: Technologies such as facial recognition, in-vehicle cameras, and drones increase surveillance and response capabilities, enhancing perceptions of safety for many stakeholders.
  • Cost Efficiency: Over time, AI systems could lead to cost savings through automation of administrative tasks (i.e. salary costs), thereby reducing staffing needs in non-policing activities.

Weaknesses

  • Initial Implementation Cost: Significant time and cost investment is required for AI infrastructure, training, and integration in the set-up, which could strain budgets initially.
  • Technological Dependence: Over-reliance on AI itself might weaken traditional policing skills and human judgment, which are crucial in complex social situations. Further, a reliance on big tech companies is risky business in itself.
  • Privacy Concerns: The use of AI, especially in surveillance, raises serious privacy issues, potentially eroding public trust if not managed transparently.
  • Lacks Sophistication: Most AI solutions currently remain focused on and limited to speeding up existing processes and dealing with data. AI cannot be left alone to properly ‘innovate’ like people can; it requires detailed human interaction to guide things in the right direction.
  • Knowledge is Lacking: Few people in policing know what on earth AI is all about, what it can do or how it could change things. Change is being driven by just a small cohort of what might be considered ‘techie enthusiasts’, while most others remain in the dark.

Opportunities

  • Innovation in Policing: AI can foster new methods of crime prevention, investigation, and community engagement, possibly reducing crime rates through more predictive analytics.
  • Partner Collaboration: Sharing AI-driven insights with international law enforcement could enhance global security measures.
  • Public Service Improvement: Beyond policing, AI could streamline interactions with the public, improving overall efficiency and allowing officers and staff to focus on more value-added tasks.

Threats

  • Bias and Errors: AI systems are prone to biases and errors. The initial programming of AI ultimately comes down to a human feeding it algorithms and assumptions, so like any computer program it’s susceptible to GIGO (garbage in, garbage out).
  • Cybersecurity Risks: Increased reliance on digital systems exposes law enforcement to more cyber threats, which could compromise operations and have bigger impacts should cyber attacks or system failures occur.
  • Public Backlash: AI deployment and associated privacy invasion concerns could lead to public outcry and legal challenges affecting the legitimacy of policing methods. Police Scotland is facing increasing public pressure to abandon its plans as it seeks to implement facial recognition.
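The GIGO point above can be shown in miniature. In this deliberately toy, invented example, the "model" is nothing more than frequency counts; because the historical records over-sampled one area, the system confidently predicts that area as the hotspot, reflecting how the data was collected rather than where crime actually occurs:

```python
from collections import Counter

# Fabricated, deliberately skewed history: area_a was simply patrolled
# and recorded far more often, not necessarily where more crime happened.
historical_reports = ["area_a"] * 90 + ["area_b"] * 10

# The "model" just learns the frequencies it was fed.
model = Counter(historical_reports)
predicted_hotspot = model.most_common(1)[0][0]

print(predicted_hotspot)  # area_a -- an echo of collection bias, not reality
```

Real predictive systems are vastly more complex, but the failure mode is the same: biased inputs produce confidently biased outputs.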

What might you add to this SWOT? How could these factors influence the direction your force or policing more generally takes with AI? Does it prompt any ideas to capitalise on the strengths and opportunities, while mitigating the weaknesses and threats?

Here are just a few practical ideas on the opportunity of boosting public service with AI. What concerns and ethical considerations would they present?

  • Automatically notifying victims of the status of their crime report, any key upcoming dates, and whether they need to do anything.
  • Matching resources to demand, by plugging in an array of demand metrics, then getting AI to recognise the best shift pattern once and for all for different roles to meet that demand.
  • For the intelligence world, AI could highlight complex trends, MO and patterns to identify potential suspects in line with NIM methods.
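The first idea above could start life as something very simple. Here's a hypothetical sketch of auto-drafting a victim update from a case record; the field names, statuses and wording are entirely invented, not any real force's system:

```python
from datetime import date

# Invented status codes mapped to plain-language messages.
STATUS_MESSAGES = {
    "under_investigation": "Your case is being actively investigated.",
    "suspect_charged": "A suspect has been charged in your case.",
    "court_date_set": "A court hearing has been scheduled for your case.",
}

def draft_update(case):
    """Build a plain-language notification from a case record."""
    lines = [f"Case {case['reference']}: {STATUS_MESSAGES[case['status']]}"]
    if case.get("next_date"):
        lines.append(f"Next key date: {case['next_date'].isoformat()}.")
    if case.get("action_needed"):
        lines.append(f"Action needed from you: {case['action_needed']}")
    return "\n".join(lines)

case = {
    "reference": "CR-2024-00123",
    "status": "court_date_set",
    "next_date": date(2024, 11, 5),
    "action_needed": "Confirm you can attend as a witness.",
}
print(draft_update(case))
```

Even this rule-based version raises the ethical questions posed above: what data feeds it, who checks the messages, and what happens when it gets a status wrong?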

In California, police departments are already trailblazing AI for drafting their reports to save time (albeit with mixed success!).

The College of Policing’s Future Operating Environment 2040 considers some of these opportunities and threats of AI in the future world of policing in England and Wales. Take a look, for example, at Trend 6: Harnessing Artificial Intelligence. The report makes the following key point on where the main human contribution may lie and how best to prepare people for this future, artificial intelligence-riddled world:

“Building a workforce with the intellectual and psychological aptitude necessary to work in an increasingly automated environment will be an important part of preparing policing for the future.”

If you’re particularly interested in the wider gamut of technology for policing over the next decade, you may want to review the NPCC’s National Policing Digital Strategy 2030, which summarises how AI and machine learning sit alongside other technologies, grouped by theme.


Preparing Policing for a Future with AI

When considering the future of AI in policing, decisions are required now by various bodies (e.g. Government, NPCC, forces and the College) to lay the groundwork.

“Policing needs to assess the risks associated with AI implementation, decide corporately what its risk appetite is and plan how consolidated procurement can begin the process of countrywide, AI-led digital transformation.” – Tamara Polajnar

How long will it take for UK policing to exploit the benefits and potential AI offers? By 2030, with strategic planning and phased implementation, we could see significant improvements in policing efficiency. But some critical decisions must be addressed soon.

These decisions include funding and resource allocation. How much should be invested in AI development and infrastructure? Would that be better than simply funding more cops? The government and NPCC will have to weigh these costs against the potential long-term benefits in operational efficiency and crime reduction.

“We need to mobilise now and equip our workforce for the future. If we don’t, we risk falling behind criminals who are embracing and exploiting these tools.” –  T/CC Murray

Training and skill development is another critical decision area, ensuring forces are adequately trained not just in using AI but in understanding its limitations and ethical implications. Then there’s the more external engagement: keeping the public and communities informed and involved in the decision-making process regarding AI use in policing.

Ethical frameworks will be the trickiest decision area, establishing clear guidelines on AI use to address privacy, bias and transparency issues, e.g. through legislation or regulatory bodies. Organisations and big tech have already given AI a bad name by flouting ethics to date (e.g. Facebook were doing this 10 years ago). Policing cannot jump on the bandwagon of malpractice. But fear not, below is a good starter-for-10 ethical framework to prevent public trust disasters…


An Ethical Framework: Police Legitimacy

How will an ethical framework for the use of AI in UK policing uphold 200-year-old expectations of legitimacy? Well in principle, “the only way is ethics”, i.e. it should align with the existing Code of Ethics in policing. However, while the existing Code is somewhat relevant, it’s geared more towards the behaviour of people than the implementation of new technologies.

The framework outlined below aims to balance the operational benefits of AI with ethical considerations, to maintain the moral integrity and public trust underpinning ‘policing by consent’…

  • Transparency and Explainability: There will need to be transparent policy for AI systems about their operations, data sources, and decision-making processes including provision of clear, accessible documentation on how AI tools function, what data they use, and how decisions are reached. For legitimacy, the public must be able to understand why an AI system made a particular decision or prediction, thereby fostering trust and accountability.
  • Accountability and Oversight: Establish an independent mechanism (e.g. an Ethics Committee) to oversee and approve AI applications in policing. This committee would review AI proposals, monitor deployments, and ensure ongoing ethical standards are met. Specialist legal advice should be sought on the continuing legal compliance of proposals and compatibility with fundamental human rights. For legitimacy, clear lines of accountability will ensure misuse or biases in AI can be traced back to responsible entities, thereby enhancing public confidence in policing practices.
  • Fairness and Bias Mitigation: Implement regular audits for bias in AI systems. This will involve checking for demographic biases and training AI on diverse datasets to avoid discriminatory outcomes and ensure compatibility with fundamental human rights. AI tools that inadvertently reinforce systemic biases may undermine public trust and legal justice.
  • Privacy and Data Protection Standards: Compliance with GDPR and other data protection laws is non-negotiable. AI systems must handle data with strict privacy protocols, ensuring minimal intrusion necessary for law enforcement. For legitimacy, citizens expect their privacy to be respected with transparency in the data handling practices.
  • Human Oversight: AI decisions in critical areas like arrests or significant surveillance must always have a human review protocol to mitigate errors. AI should support, not replace, human judgement in decision-making processes. For legitimacy, people expect human oversight in decisions affecting their freedom or rights, ensuring empathy and contextual understanding are not being lost to machine efficiency.
  • Community Engagement and Feedback Loops: Strategic oversight should incorporate regular community consultation and involvement to discuss AI’s role in policing. Feedback mechanisms (including in-person forums) should allow citizens to voice concerns or suggest improvements, ensuring AI tools serve community needs.
  • Education and Continuous Improvement. Comprehensive training for officers should include not just how to use AI tools, but their ethical use and human rights implications. AI systems themselves should be subject to regular reviews and improvements, adapting to new standards and learning from errors.
  • Documentation and Reporting Requirements: As with any police system, detailed audit trails should be available of AI interactions, decisions made by AI, and their outcomes. This documentation should be accessible for audits or public inquiries and summarised into performance monitoring regimes (e.g. ‘error rates’). Transparency in operations and outcomes allows for scrutiny, vital for public trust and accountability.
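The audit-trail requirement above is perhaps the easiest to picture in practice. Here's a minimal, hypothetical sketch of logging each AI interaction as an append-only record; the field names, tool names and file format are illustrative assumptions, not any real policing standard:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(log_file, tool, input_ref, output, human_reviewer):
    """Append one auditable record of an AI interaction as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # which AI system was used
        "input_ref": input_ref,            # reference to the material processed
        "output": output,                  # what the AI produced or recommended
        "human_reviewer": human_reviewer,  # who validated the output
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_ai_decision(
    "ai_audit.log",
    "redaction_tool_v2",
    "case/CR-2024-00123/statement_04",
    "redacted 3 names, 1 address",
    "PC 1234 Smith",
)
print(entry["tool"])
```

One JSON line per decision is trivially machine-readable, which is exactly what makes the summarised 'error rate' reporting mentioned above feasible.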

The Responsible Use of AI in Policing

“Responsible use of AI is paramount if we are to deliver a service that is trusted by communities. Our only motivation is to improve what we do and to better achieve our mission of making people safer.” – T/CC Murray

HerEthicalAI is an AI consultancy specialising in guidance and practical machine learning support for police and third-sector organisations in the responsible use of AI. CEO Tamara Polajnar provides a useful perspective on the future challenge of AI and how it should be treated in policing:

“Understanding where AI could be the most effective solution to improve productivity and ensuring these solutions are properly funded and evaluated is going to be the hardest initial challenge facing policing… AI is not a panacea to solve all of policing’s demand problems (at least not yet); it is not always the best solution, as some problems are better solved by people or already existing technologies. But it is a fundamental tool which will enhance policing’s ability to mediate demand sufficiently to improve services quickly, efficiently and cost effectively, now.”

Below I share a little-known but fascinating and topical event from earlier this year, if you’d like to hone your CPD on this a little further. There are presentations and videos available on YouTube, covering the following topics:

  • What is Machine Learning?
  • Algorithms in Policing: Problems, solutions, and more problems!
  • How to Conceptualise and Quantify Problems in Policing
  • Demystifying AI
  • AI in Policing

There’s also a brief write-up of this Middlesex University-hosted event. I’m sure other events like this will keep popping up as AI is progressed and developed for policing.

It seems currently that there are a few enthusiasts trying to push policing to make use of the opportunity AI offers, while the majority either don’t understand what it’s about and/or are uninterested. For now, here is a recent video featuring Alex Murray talking about his role, with some highlighted takeaways.

Whatever you think of AI, if you’re an aspiring police leader looking to develop your leadership skills into the future of policing, this subject offers a fantastic opportunity. Now is the start of a learning curve towards a future that most people can’t even imagine right now.

Kind Regards, Steve



Seeking police promotion? Want to get a massive head start right now? Hit the ground running with your personal digital promotion toolkit, and/or my market-leading Police Promotion Masterclass. There’s nothing else like it to effectively prepare you for success in your leadership aspirations. You can also contact me to arrange more personal coaching support. Or try my podcast for your ongoing police leadership CPD covering a range of fascinating subjects.