Security Brief, 22 November 2025

The Connectivity Innovation Network was featured in Security Brief following its Final Presentation of AI for Network Security on 21 November 2025.

AI-powered assistant CASPER promises smarter cyber threat detection

A new cybersecurity virtual assistant using artificial intelligence has been developed to help detect threats and guide users and organisations through the steps needed to respond to attacks. Australian researchers at the University of Technology Sydney (UTS) led the project, described as a new type of tool capable of integrating information from multiple sources and providing context-aware recommendations.

AI for cyber defence

The system is called CASPER AI, which stands for Context-Aware Security Policy Enforcement and Response. It brings together organisational information ranging from identity management platforms and HR records to travel and expense data and network logs. By correlating data from these sources, CASPER AI aims to identify anomalies that may indicate the presence of insider threats, compromised user accounts, or external breaches.
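CASPER AI's internal design is not public, but the kind of cross-source correlation described above can be illustrated with a minimal sketch: comparing login events against HR and travel records to flag sign-ins from countries where an employee should not be. All record shapes and field names here are assumptions for illustration, not CASPER AI's actual data model.

```python
# Illustrative cross-source correlation: flag logins from countries
# that conflict with an employee's home country and approved travel.
# Field names ("user", "country", etc.) are hypothetical.

def flag_location_mismatches(logins, travel_approvals):
    """Return login events whose country is neither the employee's
    home country nor a country they have approved travel to."""
    approved = {}
    for rec in travel_approvals:
        approved.setdefault(rec["user"], set()).add(rec["country"])

    anomalies = []
    for event in logins:
        allowed = {event["home_country"]} | approved.get(event["user"], set())
        if event["country"] not in allowed:
            anomalies.append(event)
    return anomalies

logins = [
    {"user": "alice", "home_country": "AU", "country": "AU"},
    {"user": "alice", "home_country": "AU", "country": "RU"},
    {"user": "bob", "home_country": "AU", "country": "SG"},
]
travel = [{"user": "bob", "country": "SG"}]

flagged = flag_location_mismatches(logins, travel)
# alice's login from RU is flagged; bob's SG login matches a travel record
```

A production system would of course weigh many more signals; the point of the sketch is that the anomaly only emerges when two otherwise unrelated data sources (network logs and travel records) are joined.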

The assistant relies on large language models and multimodal reasoning, which allow it to analyse diverse datasets and determine whether cyber security threats are present. The developers say CASPER AI is able to identify abnormal operations, such as the creation of fake wireless access points in public areas or suspicious behaviour in temporary mobile environments, and can guide users to take appropriate, compliant security actions.
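One common form of the fake access point threat mentioned above is the "evil twin": a rogue radio advertising a trusted network name (SSID) from an unrecognised hardware address (BSSID). As a rough sketch of how such a check might work, the allow-list and scan format below are assumptions, not CASPER AI's actual detection logic.

```python
# Illustrative "evil twin" check: flag beacons whose SSID matches a
# trusted network but whose BSSID is not on that network's allow-list.
# KNOWN_APS and the scan record format are hypothetical.

KNOWN_APS = {
    "CorpWiFi": {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"},
}

def find_rogue_aps(scan_results):
    """Return beacons that impersonate a trusted SSID from an
    unrecognised BSSID; untracked SSIDs are ignored."""
    rogues = []
    for beacon in scan_results:
        trusted = KNOWN_APS.get(beacon["ssid"])
        if trusted is not None and beacon["bssid"] not in trusted:
            rogues.append(beacon)
    return rogues

scan = [
    {"ssid": "CorpWiFi", "bssid": "aa:bb:cc:00:00:01"},   # legitimate AP
    {"ssid": "CorpWiFi", "bssid": "de:ad:be:ef:00:01"},   # possible evil twin
    {"ssid": "CafeGuest", "bssid": "11:22:33:44:55:66"},  # not tracked
]
rogues = find_rogue_aps(scan)
```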

“In the demonstration, we showed how CASPER AI detects abnormal activity within business operations, identifies fake wireless access points in public spaces or temporary mobile deployments, and helps teams take fast, policy-compliant security actions,” said Professor Ren Ping Liu, Project Lead, University of Technology Sydney.

Assisting non-experts

The development team says one of the main advantages of CASPER AI is its contextual understanding. Traditional rule-based cyber security tools are reactive and often miss nuanced threats that bridge multiple sources of information. CASPER AI aims to present clear guidance to staff, reducing reliance on users to interpret technical security policies in high-pressure situations.

The AI assistant has been built to function across a range of environments and can reduce manual workloads for security teams. Its accessible interface and plain-language outputs are also designed to support people with limited digital experience.

“The system reduces the need for staff to interpret complex policies under pressure by presenting the recommended actions in clear, accessible language,” said Liu.

Detecting scams and fraud

CASPER AI is intended for deployment within organisations, but the project leads also highlight its potential benefits for individual citizens, particularly for those at risk from online scams or with lower digital literacy. The system is capable of detecting unusual activity connected to personal accounts and can provide recommendations in formats that non-expert users can understand. It is designed to help individuals validate whether emails, texts or calls are genuine and to assist in detecting identity misuse.

“CASPER AI as a virtual assistant can help people with lower levels of digital literacy and tech experience avoid being caught up in cybercrime, such as phishing and ‘smishing’ (SMS-based) scams,” said Liu.

The assistant can explain in simple terms whether a suspicious message might be a scam and advise users if their login credentials or personal details may have been compromised. For government or other sensitive accounts, it can flag sign-in attempts from new or unauthorised locations.
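Flagging sign-in attempts from new locations, as described above, can be reduced to a very small core: keep a per-user history of previously seen locations and flag anything unseen. The history store and event fields below are assumed for illustration only; they are not CASPER AI's implementation.

```python
# Minimal sketch of "sign-in from a new location" flagging for a
# sensitive account. The location is recorded after its first sighting
# so the same place is only flagged once.

def is_known_location(history, user, location):
    """Return True if this user has signed in from `location` before;
    otherwise record the location and return False (flag for review)."""
    seen = history.setdefault(user, set())
    if location in seen:
        return True
    seen.add(location)
    return False

history = {"alice": {"Sydney"}}
known = is_known_location(history, "alice", "Sydney")        # familiar
flag_new = not is_known_location(history, "alice", "Lagos")  # new -> flag
```

Real systems typically combine this with factors such as device fingerprints and sign-in velocity, but the flag-on-first-sight pattern is the common starting point.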

Network initiative

CASPER AI is part of a suite of projects funded through the Connectivity Innovation Network. This initiative, supported by the New South Wales Government, is led by UTS and the University of Sydney. Its goal is to address challenges in digital connectivity and cyber resilience, as the volume and complexity of online threats grow alongside the increasing reliance of communities, citizens and government agencies on digital services.

“It’s designed to keep government accounts safe by detecting unusual login attempts or activity, protect people’s identities by connecting information across services to flag potential misuse and stop scams by giving people a simple way to check if a text, email or call is genuine before they respond,” said Liu.

By Sam Mitchell for Security Brief