Writing the future
When I was little and learning how to write, I asked my mother why the words had to go from left to right on the page. She told me that language scripts can run right to left, left to right, vertically … but all of them have rules, so that we can understand each other more easily. Fixated on my immediate concern, I suggested I might prefer to write from right to left. She responded that I absolutely could do this, but it would make it harder for others to follow what I was trying to say. (My mum adds that I was not fully convinced; the classroom reward system of elephant stamps won out.)
I now work in a research and engagement role in a lab where my colleagues have the power to cipher computational meaning in codes I don’t understand. And I have a better appreciation of what my mother was trying to teach me about communication.
How we are represented is always curated: by the cultures and paradigms we live in; by ourselves (what we choose to share and how we choose to share it); and by others exercising their capacity to interpret and disclose things about us. “Controlling the narrative” is recognised as a core aspect of public relations, but the significance of storytelling has deeper roots in the exercise of power.
Stories shape how I am seen and valued; how I relate to others; my expectations of society and understanding of the rules by which it functions; and my capacity to refuse, accept, or enact change. Stories are one of humanity’s most powerful and universal ways of connecting knowledge and connecting with each other.
So, when I sat down to write a blog post introducing myself as a member of the AiLECS team, what came to mind were three quotes from the collective imagination of science fiction:
- Any sufficiently advanced technology is indistinguishable from magic.
– Arthur C Clarke
- The future is already here, it’s just not evenly distributed.
– William Gibson
- All that you touch you change. All that you change changes you.
– Octavia E Butler
The significance of these quotes to me – and their relevance for communicating the work of the AiLECS Lab – is in how they prompt us to think about where the design and outcomes of AI-informed technology might align with, or diverge from, human concepts and values.
The words of Clarke, Gibson, and Butler are cues. They push us to reflect on the social context of technology and the faith people put in it; to scrutinise the possibilities we are trying to activate with these tools (and how these are actualised); and to make choices based on recognition and respect for the consequences of our actions … and those of the machines we train.
I believe that disrupting cultures of data entitlement is a critical imperative for achieving better data futures. This is a large part of what motivated me to join the AiLECS Lab. I am heartened that the lab has a focus on developing approaches that pay attention to the ethics of the data supply chain behind machine learning technologies, as well as the impacts of implementation.
The work of the AiLECS Lab is geared toward meaningful outcomes: developing innovative capabilities for community protection. One such research area focuses on improving technologies for countering online child exploitation. This is undeniably important work, directed against some of the most distressing examples of criminal activity. Countering CSAM (child sexual abuse material) is also an area where the grey area of ‘by any means necessary’ is never far away. Being clear about the basis on which technological tools and interventions for law enforcement are premised, designed, and implemented is critical: for evidentiary integrity and procedural fairness; for trauma-informed practice that mitigates systemic harms to victim-survivors and to investigators of CSAM; and for building and maintaining citizen trust in law enforcement.
In a research lab devoted to AI for Law Enforcement and Community Safety, imagining technical solutions in isolation is not enough. We must also be cognisant of the underlying values, rationale, and lived effects of the laws that are being enforced by technology. We must engage with the policy decisions, legislation, and regulation that govern the extent to which people can choose or challenge how they interact with machine learning as it is embedded in daily life and everyday magic. And we must be alert and responsive to which communities are having their safety prioritised, enhanced, compromised, or overlooked along the way.
Author: Nina Lewis
Nina is a research fellow with the AiLECS Lab. She is project lead for the VALID project (Veracity, Agency, Longevity, and Integrity in Datasets), and the #MyPicturesMatter crowdsourcing initiative (launching in June).