Groundbreaking research from Northumbria University has revealed how West Midlands Police’s use of an independent Data Ethics Committee has transformed understanding of how to achieve responsible artificial intelligence (AI) in policing, providing a blueprint for independent oversight of police AI nationally.
For the past five years, the West Midlands Police and Crime Commissioner and West Midlands Police (WMP) have maintained an innovative Data Ethics Committee. This interdisciplinary body, comprising independent experts in law, computing, ethics, social impact and victims’ rights, advises on the design, development and deployment of advanced AI and data analytics tools in policing. The research highlights how this independent consultative model has transformed police officers’ understanding of the ethical and technical implications of AI, while ensuring that human rights remain at the forefront of thinking around new technology initiatives.
The research was led by Professor Marion Oswald MBE of Northumbria Law School, in collaboration with colleagues at Northumbria, Northampton, Glasgow and Aberdeen Universities, and in partnership with the Office of the West Midlands Police and Crime Commissioner and West Midlands Police. Funded by the Arts and Humanities Research Council (AHRC) through its Bridging Responsible AI Divides (BRAID) programme, the research concludes that this type of independent advice can help bridge the gap between ethical thinking, scientific rigour and human rights considerations in law enforcement.
The study found that the Committee’s work did not hinder police operations, but rather supported them, leading to more responsible and ethical use of AI. It concluded that such an independent review could serve as a model for responsible use of AI nationally.
Commenting on the significance of the review, West Midlands Police Deputy Chief Constable Matt Welsted said: “West Midlands Police is a highly innovative force dedicated to delivering the best possible service to the public in the most effective and efficient way. This often involves cutting-edge technology and sophisticated data analytics which, whilst exciting, comes with significant responsibilities. This review and the recommendations it makes will be invaluable in helping us, and other forces, strike the right balance and ensure that the decisions we make and the tools we use to police our communities are ethical and legitimate.”
Main conclusions:
- Police officers gained valuable insights into the technical, operational, legal and ethical aspects of AI through their work with the Committee. One officer said: “When I went to the ethics committee, my eyes were opened… There were some considerations that I didn’t understand at first, but as I got more involved in the project, I was able to see how relevant they were.”
- The Committee’s advice has directly influenced the development of AI and data tools, with adjustments made based on its recommendations.
- Committee members, in turn, gained a deeper understanding of the operational pressures facing police officers and the potential of AI to improve policing, while protecting community interests and victims’ rights.
The study, which included interviews with Committee members, police officers, data scientists and community representatives, as well as a review of Committee documents and observations of the technology in action, calls for greater community involvement in the data ethics process. The research team stresses that community trust in AI policing tools can only be achieved if the voices of community representatives are respected and visibly influential. The study also concluded that greater attention should be paid to a broader range of human rights responsibilities, and that sufficient time should be allocated in Committee meetings to understanding the technical details of each tool, how its outputs will be used in policing operations, and how AI could support the police’s public safety responsibilities.
Professor Marion Oswald MBE, Chair of the Committee, led the research project, with interviews and analysis carried out by other members of the research team. Professor Oswald said: “A cross-disciplinary data ethics committee, such as that maintained by West Midlands Police, is essential to ensuring the responsible use of AI in policing. By integrating diverse perspectives and expertise, the committee strengthens the validity and ethical foundation of AI tools, fosters transparent dialogue and addresses critical issues of privacy and public safety. Its success is based on clear roles, community representation and strong support, providing a model for national strategy and guidance to others in the ethical deployment of advanced data technologies.”
She added: “Our research highlights the importance of balancing technological advances with ethical oversight, advocating for a structured, transparent and inclusive approach to AI in policing, with the West Midlands Data Ethics Committee serving as a leading example.”
Northumbria University has a global reputation for AI research and teaching. Professor Oswald is the Principal Investigator on Responsible AI UK’s Keystone project ‘PROBabLE Futures: Probabilistic AI Systems in Law Enforcement Futures’. The University recently received £9 million from UK Research and Innovation to establish a PhD training centre in AI. Known as Citizen-Centred AI (CCAI), it focuses on including citizens in the design and evaluation of AI, helping to ensure that the rapidly evolving technology is useful to ordinary citizens.