Mileva Security Labs Research

Likelihood Analysis


This project seeks to fill a crucial gap in AI security research by quantifying the likelihood of AI incidents. Likelihood is a key parameter in risk assessment best practices across other security disciplines, including cybersecurity and national security.

Risk = severity × likelihood
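As a minimal illustration of this formula, the sketch below scores a risk on a standard 5×5 matrix. The 1–5 scales, band thresholds, and function names are illustrative assumptions, not part of this project's methodology:

```python
# Illustrative sketch of Risk = severity x likelihood on a 5x5 matrix.
# Scales, thresholds, and names are hypothetical examples.

def risk_score(severity: int, likelihood: int) -> int:
    """Combine severity and likelihood ratings (each 1-5) into a 1-25 score."""
    return severity * likelihood

def risk_band(score: int) -> str:
    """Map a 1-25 score onto coarse bands, as in a conventional 5x5 risk matrix."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a severe (5) but unlikely (1) incident scores 5, landing in "low" --
# which is exactly why the likelihood dimension matters as much as severity.
print(risk_band(risk_score(severity=5, likelihood=1)))
```

The example also shows why likelihood estimates change decisions: the same severity-5 incident moves from "low" to "high" as its likelihood rating rises.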


In AI security practice there is a growing need to model risk; however, while severity has been researched extensively, likelihood has received very little attention. Drawing from established risk assessment methodologies in cybersecurity, the study aims to construct a robust framework for evaluating and mitigating AI security risks, thus shedding light on this elusive dimension of risk associated with AI vulnerabilities.

This project is conducted in collaboration with MINT Lab and UNSW Canberra’s Innovation Lab for Cyber Security and Machine Learning. It is generously supported by the Foresight Institute.

Risk Modelling

This project provides methods to quantify and manage AI security risk based on an organisation’s unique risk profile.


By employing cutting-edge techniques and frameworks, we aim to enhance the resilience and security posture of AI systems in various industries. Our approach involves a comprehensive assessment of potential vulnerabilities and threats specific to each organisation's operational environment, followed by the implementation of robust risk mitigation strategies. It is generously supported by the ACT Government through the Canberra Innovation Network.

Tool

COMING SOON

Our software enables teams to understand, monitor, and manage their own AI risk.

Enter your details to receive updates and get access to the Beta version.

Learn More About Our Progress

A Crash Course into Attacking AI [2/4]: What can AI attacks do?

In the second installment of our series on attacking AI, we delve into the goals of AI attacks, categorized using the 3D Model: Deceive, Disrupt, and Disclose. Deceive techniques manipulate AI systems...

AI Security

August 04, 2024 · 6 min read

A Crash Course into Attacking AI [1/4]: What is AI security?

"A Crash Course into Attacking AI [1/4]: What is AI security?" explores AI security, emphasizing the need to protect AI systems from hacking, distinct from using AI for cybersecurity. It describes AI ...

AI Security

August 03, 2024 · 6 min read