We produce a wide range of material, including academic publications, technical notes, reports, white papers, videos, and code.
Briefing Note
BN24/01: A Primer on the Australia and New Zealand Police AI Principles

A briefing note providing background and context, for a general policing audience, on the AI principles developed by the Australia New Zealand Policing Advisory Agency (ANZPAA).

Briefing Note
BN22/02: Metior Telum – Measure the Weapon

This briefing note provides a broad introduction to the Metior Telum project for a general audience.

Report
TR22/03: The Data Airlock: infrastructure for restricted data informatics

Access to operational data from outside an organisation may be prohibited for a variety of reasons. There are significant challenges when performing collaborative data science work against such restricted data.

This report describes a range of causes and risks associated with restricted data, along with the social, environmental, data, and cryptographic measures that may be used to mitigate such issues; these measures are generally inadequate for restricted data contexts. We introduce the ‘Data Airlock’, secure infrastructure that facilitates eyes-off data science workloads. After describing our use case, we detail the architecture and implementation of a first, single-organisation version of this infrastructure. We conclude with learnings from this implementation and outline requirements for a second, federated version.

Tech note
TN22/03: Law Enforcement Data Interoperability (Student thesis paper)

In law enforcement (LE), interoperability, i.e., the ability to exchange information between databases and systems, enhances the ability of agencies to detect and investigate crime. A fundamental way of improving interoperability is data integration, but integrating LE databases is often difficult due to heterogeneity of database types and the semantics of the data. In this study, an ontology-based and Linked Data approach for integrating heterogeneous LE databases is proposed.

The approach is evaluated for use in an operational setting by LE data domain experts. The evaluation feedback indicates that the approach has the potential to address some of the common challenges faced when integrating heterogeneous LE databases, and could provide benefit if used in an LE agency’s operational systems.
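The general idea of ontology-based integration can be illustrated with a short sketch: records from heterogeneous sources are mapped onto a shared ontology as RDF triples, after which a single query spans both sources. This is a minimal illustration using the rdflib Python library; the le: ontology, identifiers, and records are hypothetical and are not the schema developed in the thesis.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

LE = Namespace("http://example.org/le-ontology#")  # hypothetical shared ontology

g = Graph()
g.bind("le", LE)

person = URIRef("http://example.org/person/123")

# Record drawn from, say, a case-management database, mapped to the ontology
g.add((person, RDF.type, LE.PersonOfInterest))
g.add((person, FOAF.name, Literal("J. Smith")))

# Record from a separate intelligence holding, mapped to the same ontology
g.add((person, LE.linkedToCase, URIRef("http://example.org/case/77")))

# One SPARQL query now spans data that originated in different systems
results = g.query("""
    PREFIX le: <http://example.org/le-ontology#>
    SELECT ?p ?c WHERE { ?p a le:PersonOfInterest ; le:linkedToCase ?c . }
""")
for p, c in results:
    print(p, c)
```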

Tech note
TN22/05: Geolocation of Images (Student thesis paper)

In this paper, I propose and explore a method for image location classification. Most existing work concentrates on outdoor scenes, where scenery or an iconic landmark makes the location easier to identify; few researchers have addressed indoor scenes. Although indoor images make geolocation more difficult, addressing this gap is necessary because many crimes happen indoors.

 

To address this problem, I propose a method for indoor image location classification based on segmenting patterns from objects extracted from images. Specifically, I first detect objects in each image. Then, based on the confidence of the bounding boxes for specific kinds of objects, I crop only those objects from the original image. I segment patterns from the extracted objects using thresholding techniques, and classify the images from these segmented patterns using convolutional neural networks. Experimental results on a dataset of hotel rooms from across the globe show promising accuracy, indicating that the method can help identify the hotel chain to which an image belongs.
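As a rough illustration of this kind of pipeline (not the thesis implementation), the sketch below uses off-the-shelf torchvision models: an object detector proposes boxes, only confident detections of a chosen class are cropped, a simple intensity threshold stands in for pattern segmentation, and a CNN classifies the result. The class id, score threshold, and ten-class hotel-chain head are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 10)  # e.g. 10 hotel chains (hypothetical)
classifier.eval()

TARGET_LABEL = 65      # hypothetical COCO class id for the object type of interest
SCORE_THRESHOLD = 0.8  # keep only confident detections, as described in the abstract

def classify_indoor_image(path: str) -> list[int]:
    image = Image.open(path).convert("RGB")
    tensor = to_tensor(image)
    with torch.no_grad():
        detections = detector([tensor])[0]

    predictions = []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if label.item() != TARGET_LABEL or score.item() < SCORE_THRESHOLD:
            continue
        x0, y0, x1, y1 = box.int().tolist()
        crop = tensor[:, y0:y1, x0:x1]
        # crude "pattern segmentation": keep pixels above the crop's mean intensity
        mask = (crop.mean(dim=0, keepdim=True) > crop.mean()).float()
        patterns = crop * mask
        patch = torch.nn.functional.interpolate(patterns.unsqueeze(0), size=(224, 224))
        with torch.no_grad():
            predictions.append(classifier(patch).argmax(dim=1).item())
    return predictions
```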

Tech note
TN22/04: Cyber Threat Intelligence (Student thesis paper)

Cyber Threat Intelligence (CTI) sharing is a way for security professionals and threat analysts to freely access and share information to tackle emerging cyber threats. CTI can be found in various textual sources such as threat reports, blog posts, and online forums; however, attention is increasingly turning to automatic extraction and retrieval of CTI knowledge. In this study, we evaluate existing ontologies that have been used for automatic CTI extraction, then investigate the mechanisms used to extract CTI information automatically. Our contribution is a pipeline for building a training dataset from disparate data sources that can be used to predict tactics and techniques from the MITRE ATT&CK framework.
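A minimal sketch of such a pipeline follows, assuming a simple TF-IDF representation and a one-vs-rest classifier from scikit-learn rather than the approach developed in the thesis; the example sentences and ATT&CK technique labels are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Sentences harvested from threat reports, each tagged with ATT&CK technique ids
texts = [
    "The malware achieves persistence via a scheduled task.",
    "Credentials were harvested by dumping LSASS memory.",
]
labels = [["T1053"], ["T1003"]]  # hypothetical annotations

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)

# TF-IDF features feeding a one-vs-rest multi-label classifier
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(texts, y)

predicted = model.predict(["A new scheduled task was registered to run the payload at logon."])
print(binarizer.inverse_transform(predicted))
```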

Academic publications
Effective, Explainable and Ethical: AI for Law Enforcement and Community Safety

Wilson, C., Dalins, J., & Rolan, G. (2020). In 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G 2020) (pp. 186-191). Piscataway, NJ: IEEE.

 

We describe the Artificial Intelligence for Law Enforcement and Community Safety (AiLECS) research laboratory, a collaboration between the Australian Federal Police and Monash University. The laboratory was initially motivated by work towards countering online child exploitation material; it now offers a platform for further AI research and development to benefit policing and, more broadly, the mitigation of threats to community wellbeing. We outline the work the laboratory has undertaken and its results to date, and discuss our agenda for scaling up its work into the future.

Video
GovHack 2020: Conversations with Infosys and the AiLECS LAB

Together with our partner Infosys, we sponsored a community safety problem topic at the 2020 Australian GovHack competition.  This video discusses the project and our particular takes on the application of AI for social good.

 

Video
AiLECS Lab Launch

Short video describing the motivation and rationale for the lab.

Academic publications
PDQ & TMK + PDQF – A Test Drive of Facebook’s Perceptual Hashing Algorithms

Dalins, Janis, Campbell Wilson, and Douglas Boudry. “PDQ & TMK + PDQF – A Test Drive of Facebook’s Perceptual Hashing Algorithms.” arXiv preprint arXiv:1912.07745 (2019).

 

Efficient and reliable automated detection of modified image and multimedia files has long been a challenge for law enforcement, compounded by the harm caused by repeated exposure to psychologically harmful materials. In August 2019 Facebook open-sourced their PDQ and TMK + PDQF algorithms for image and video similarity measurement, respectively. In this report, we review the algorithms’ performance on detecting commonly encountered transformations on real-world case data, sourced from contemporary investigations. We also provide a reference implementation to demonstrate the potential application and integration of such algorithms within existing law enforcement systems.

 

https://arxiv.org/pdf/1912.07745.pdf
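For readers who want to experiment, the sketch below shows how a pair of images might be compared with PDQ, assuming the open-source pdqhash Python bindings and OpenCV; it is not the reference implementation described in the paper, and the file names are placeholders.

```python
import cv2
import numpy as np
import pdqhash  # assumed: community Python bindings for Facebook's PDQ (pip install pdqhash)

def pdq_bits(path: str) -> np.ndarray:
    # PDQ expects an RGB image; OpenCV loads BGR, so convert first
    image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    bits, quality = pdqhash.compute(image)  # 256-bit hash plus a quality score
    return np.asarray(bits)

# Hamming distance between hashes: small distances indicate near-duplicate images
distance = int(np.count_nonzero(pdq_bits("original.jpg") != pdq_bits("resized_copy.jpg")))
print(f"PDQ Hamming distance: {distance} / 256")
```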

Academic publications
Laying foundations for effective machine learning in law enforcement. Majura – a labelling schema for child exploitation materials.

Dalins, J., Tyshetskiy, Y., Wilson, C., Carman, M. J., & Boudry, D. (2018). Laying foundations for effective machine learning in law enforcement. Majura – a labelling schema for child exploitation materials. Digital Investigation, 26, 40-54. https://doi.org/10.1016/j.diin.2018.05.004

 

The health impacts of repeated exposure to distressing concepts such as child exploitation materials (CEM, aka ‘child pornography’) have become a major concern to law enforcement agencies and associated entities. Existing methods for ‘flagging’ materials largely rely upon prior knowledge, whilst predictive methods are unreliable, particularly when compared with equivalent tools used for detecting ‘lawful’ pornography. In this paper we detail the design and implementation of a deep-learning based CEM classifier, leveraging existing pornography detection methods to overcome infrastructure and corpora limitations in this field. Specifically, we further existing research through direct access to numerous contemporary, real-world, annotated cases taken from Australian Federal Police holdings, demonstrating the dangers of overfitting due to the influence of individual users’ proclivities. We quantify the performance of skin tone analysis in CEM cases, showing it to be of limited use. We assess the performance of our classifier and show it to be sufficient for use in forensic triage and ‘early warning’ of CEM, but of limited efficacy for categorising against existing scales for measuring child abuse severity. We identify limitations currently faced by researchers and practitioners in this field, whose restricted access to training material is exacerbated by inconsistent and unsuitable annotation schemas.

Whilst adequate for their intended use, we show existing schemas to be unsuitable for training machine learning (ML) models, and introduce a new, flexible, objective, and tested annotation schema specifically designed for cross-jurisdictional collaborative use. This work, combined with a world-first ‘illicit data airlock’ project currently under construction, has the potential to bring a ‘ground truth’ dataset and processing facilities to researchers worldwide without compromising quality, safety, ethics and legality.
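The underlying transfer-learning pattern the paper builds on (reusing a network pre-trained on a related task and fine-tuning a new classification head) can be sketched generically as follows; this is not the paper’s classifier, and it assumes a labelled DataLoader supplied by the caller.

```python
import torch
import torchvision

# Reuse a pre-trained backbone; only the new classification head is trained
model = torchvision.models.resnet50(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False                      # freeze pre-trained features
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new binary classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def fine_tune(loader: torch.utils.data.DataLoader, epochs: int = 3) -> None:
    model.train()
    for _ in range(epochs):
        for images, targets in loader:               # loader yields labelled image batches
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
```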

Academic publications
Criminal motivation on the dark web: A categorisation model for law enforcement

Dalins, Janis, Campbell Wilson, and Mark Carman. “Criminal motivation on the dark web: A categorisation model for law enforcement.” Digital Investigation 24 (2018): 62-71.

 

Research into the nature and structure of ‘Dark Webs’ such as Tor has largely focused upon manually labelling a series of crawled sites against a series of categories, sometimes using these labels as a training corpus for subsequent automated crawls. Such an approach is adequate for establishing broad taxonomies, but is of limited value for specialised tasks within the field of law enforcement. Contrastingly, existing research into illicit behaviour online has tended to focus upon particular crime types such as terrorism. A gap exists between taxonomies capable of holistic representation and those capable of detailing criminal behaviour. The absence of such a taxonomy limits interoperability between agencies, curtailing development of standardised classification tools.

 

We introduce the Tor-use Motivation Model (TMM), a two-dimensional classification methodology specifically designed for use within a law enforcement context. The TMM achieves greater levels of granularity by explicitly distinguishing site content from motivation, providing a richer labelling schema without introducing inefficient complexity or reliance upon overly broad categories of relevance. We demonstrate this flexibility and robustness through direct examples, showing the TMM’s ability to distinguish a range of unethical and illegal behaviour without bloating the model with unnecessary detail.
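Conceptually, a two-dimensional label of this kind can be represented as a pair of independent axes, as in the sketch below; the category names are placeholders for illustration, not the TMM’s actual labels.

```python
from dataclasses import dataclass
from enum import Enum

class Content(Enum):          # what the page is (placeholder categories)
    MARKETPLACE = "marketplace"
    FORUM = "forum"
    HOSTING = "hosting"

class Motivation(Enum):       # why it exists (placeholder categories)
    ILLICIT_COMMERCE = "illicit commerce"
    IDEOLOGICAL = "ideological"
    BENIGN = "benign"

@dataclass(frozen=True)
class TorPageLabel:
    url: str
    content: Content
    motivation: Motivation

# The same content type can carry different motivations, which is what a
# two-dimensional scheme is designed to capture.
labels = [
    TorPageLabel("http://example.onion/market", Content.MARKETPLACE, Motivation.ILLICIT_COMMERCE),
    TorPageLabel("http://example.onion/books", Content.MARKETPLACE, Motivation.BENIGN),
]
```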

 

The authors of this paper received permission from the Australian government to conduct an unrestricted crawl of Tor for research purposes, including the gathering and analysis of illegal materials such as child pornography. The crawl gathered 232,792 pages from 7651 Tor virtual domains, resulting in the collation of a wide spectrum of materials, from illicit to downright banal. Existing conceptual models and their labelling schemas were tested against a small sample of gathered data, and were observed to be either overly prescriptive or vague for law enforcement purposes – particularly when used for prioritising sites of interest for further investigation.

 

In this paper we deploy the TMM by manually labelling a corpus of over 4000 unique Tor pages. We found a network impacted (but not dominated) by illicit commerce and money laundering, but almost completely devoid of violence and extremism. In short, criminality on this ‘dark web’ is based more upon greed and desire, rather than any particular political motivations.

Academic publications
Monte-Carlo Filesystem Search – A crawl strategy for digital forensics

Dalins, Janis, Campbell Wilson, and Mark Carman. “Monte-Carlo Filesystem Search – A crawl strategy for digital forensics.” Digital Investigation 13 (2015): 58-71.

 

Criminal investigations invariably involve the triage or cursory examination of relevant electronic media for evidentiary value. Legislative restrictions and operational considerations can result in investigators having minimal time and resources to establish such relevance, particularly in situations where a person is in custody and awaiting interview. Traditional uninformed search methods can be slow, and informed search techniques are very sensitive to the search heuristic’s quality. This research introduces Monte-Carlo Filesystem Search, an efficient crawl strategy designed to assist investigators by identifying known materials of interest in minimum time, particularly in bandwidth constrained environments. This is achieved by leveraging random selection with non-binary scoring to ensure robustness. The algorithm is then expanded with the integration of domain knowledge. A rigorous and extensive training and testing regime conducted using electronic media seized during investigations into online child exploitation proves the efficacy of this approach.
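The core idea (random selection guided by non-binary scores that are propagated back up the directory tree) can be sketched roughly as follows; this is an illustrative simplification, not the published algorithm, and the relevance function is a placeholder.

```python
import os
import random
from collections import defaultdict

scores = defaultdict(float)   # accumulated score per directory
visits = defaultdict(int)     # visit count per directory

def relevance(path: str) -> float:
    """Placeholder non-binary score, e.g. similarity to known material of interest."""
    return random.random()

def crawl(root: str, iterations: int = 1000) -> None:
    for _ in range(iterations):
        path, visited_dirs, reward = root, [root], 0.0
        # Selection: walk down the tree, favouring subdirectories that scored well before
        while True:
            try:
                entries = list(os.scandir(path))
            except OSError:
                break
            files = [e.path for e in entries if e.is_file(follow_symlinks=False)]
            subdirs = [e.path for e in entries if e.is_dir(follow_symlinks=False)]
            # Simulation: sample one file here and score it (non-binary relevance)
            if files:
                reward += relevance(random.choice(files))
            if not subdirs:
                break
            weights = [1.0 + scores[d] / (1 + visits[d]) for d in subdirs]
            path = random.choices(subdirs, weights=weights, k=1)[0]
            visited_dirs.append(path)
        # Backpropagation: credit every directory on the sampled path
        for d in visited_dirs:
            scores[d] += reward
            visits[d] += 1
```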