AiLECS seeks to be a leading translational AI research hub whose outputs contribute to safeguarding communities from serious criminal threats.

We envision AiLECS as a crucial clearing house for applied research that facilitates the use of AI techniques in law enforcement and broader community safety. Our founding partners are the Australian Federal Police and Monash University. We conceive the lab and its operations as a model and platform for AI research related to law enforcement on an international scale. Such a global orientation is necessary given the cross-jurisdictional nature of the problem domain and complexity of the research issues.

Law-enforcement agencies around the world have investigated, and to varying degrees implemented, AI-related technologies. Areas of application include facial recognition, optimised resource allocation, crime prediction, traffic policing, and text/social media analysis. Much of this work has been conducted in conjunction with commercial vendors, while direct collaborations between law enforcement and universities have been somewhat ad hoc. We contend that tighter research collaboration between law enforcement and universities leads to a deeper cross-pollination of expertise. Not only does this assist agencies in adapting to technological change, but it also promotes broad understanding in the research and higher education sector of the issues faced by police. This in turn strengthens the community partnership on which effective law enforcement is based. Moreover, universities are typically research oriented and have highly developed infrastructure for data management, research student training and supervision, and appropriate ethical oversight.

Academic researchers outside of law-enforcement agencies do not typically have legal access to data held by police. The use of evidence seized in real-world law-enforcement investigations – for example, as training data for machine learning algorithms – must be carefully considered from legal, ethical, and technical perspectives. The transnational nature of technology-facilitated crime poses challenges for the interchange of potentially sensitive material between countries. This challenge stems not just from legal export restrictions, but also from security and logistical data-management concerns. Infrastructure initiatives such as AiLECS are necessary to scale up research in this area, particularly since international collaboration will be vital to addressing the large-scale technical challenges inherent in combating criminal network activity.

The aim of the AiLECS lab is to pursue the research and development of artificial intelligence technologies that aid law enforcement agencies and enhance community safety.

This is a pivotal time for AI research, with research efforts and increases in computational resources driving the technology forward apace. Accordingly, there is a manifest need to harness these advances in the pursuit of law enforcement, justice, and community safety – particularly in the context of technology-facilitated crime (for example, the distribution and consumption of offensive material). At the same time, the AI sector is beset by issues of bias, transparency, and the tension between privacy and the ever-increasing volumes and granularity of personal data that drive such advances.

We contend that the successful application of AI for law-enforcement will be based on three attributes:

Ethical

While a number of countries have developed or are developing ethics frameworks for AI, there is a danger of such frameworks becoming checklists of overly broad statements such as “do no harm” or “be fair”. In the law-enforcement context, the more pressing challenge is establishing ethical principles and protocols that provide concrete guidelines for AI-supported investigative and judicial operations while also serving the interests of communities and individuals in the broader public. Building such an ethical understanding that is useful in practice is best achieved by law-enforcement practitioners working closely with ethicists and researchers in the context of actual case studies.

Transparent

To engender and maintain the trust of the community, the judiciary, and law-enforcement officers themselves, the entire pipeline of AI application – from data collection, curation, labelling, storage, cleaning, and training through to model construction, operation, and prediction – needs to be as transparent as possible. It is crucial to develop frameworks for such transparency, in addition to investigating how explainable AI (XAI) techniques can be applied and improved.

Effective

The application of AI technology must meet mission objectives and demonstrably enhance capability, ideally in a transformative manner. We must map candidate applications of AI technology against operational requirements in real environments.