The EXPLAIN Project

Status: Ongoing

Although there has been a great deal of work on the use of AI technologies across the law enforcement sector, the legal position of AI and data-driven techniques as part of an evidence chain is currently indeterminate.

For example, the legal admissibility of automated image and video classification, social network graph analysis, audio participant identification, NLP conversation analysis, and so on is largely untested (as was DNA evidence decades ago).

There is thus a pressing need to establish and address the requirements for explainability of AI and related technologies in legal contexts – requirements that transcend the typical use case for explainable AI (for example, explanations of automated transactional decision-making as mandated by the GDPR).

In this case, we are focusing specifically on the criminal justice system which, as an additional complexity, has the highest standard of proof and the strongest protections for the accused. The need to explain AI to participants in the criminal justice system can arise at a number of stages, and be subject to differing legal standards.

In moving from investigative support tools to a more prominent role in a brief of evidence, AI capabilities need to align with the legal and epistemological frameworks within which laws are enforced and judgements made. At trial, where evidence produced by AI is challenged, questions of authentication arise: it may be necessary to prove that the AI produced what it purports to produce, and that its output can be relied upon for the purposes of a criminal trial. Such questions may involve assessment by the trial judge, but also by a lay jury asked to assess the reliability and credibility of the evidence. This in turn will typically require expert evidence to explain to the jury how the AI produces its outcome, which will also involve questions about the validity of the underlying science.

This project, in collaboration with the Faculty of Law at Monash University, is researching how the explainability of AI can best be pursued from an evidential perspective, as well as mechanisms for its incorporation into law enforcement and juridical workflows.