From Signatures to Sentience: AI's Challenge to Traditional Security

Chris Denbigh-White - Cyber Security Leader | Public Speaker | Start Up Advisor

1/29/2025 · 3 min read

This week has been awash with new, exciting and (let's face it) somewhat surprising news on the subject of AI. (Looking at you, #DeepSeek!) It got me thinking, and inspired me to write down some of my thoughts on AI and what it might mean for those of us charged with being stewards and defenders of our company's data.

One thing is for sure: the democratisation of artificial intelligence is upon us.

AI tools, once the preserve of tech giants, are now readily available, promising a revolution in fields from medicine to materials science. But this accessibility has a darker side: it is dramatically lowering the barrier to entry for cybercriminals, creating a new and potent threat landscape.

While AI offers immense potential for bolstering cybersecurity, the traditional defences of email security, reliant on signatures and rules, are increasingly inadequate in the face of AI-generated attacks. The question is not whether AI will be used for nefarious purposes, but how effectively we can harness it for our own protection – and how responsibly we deploy it.

For years, cybercrime required a certain level of technical expertise. Crafting convincing phishing emails, developing polymorphic malware, and automating social engineering attacks demanded skills that acted as a barrier to entry for many. No longer. Cheap, readily available AI can now generate these tools with ease, putting sophisticated attack capabilities into the hands of less skilled, but no less malicious, actors. The result is a likely surge in the volume of attacks and, potentially, a dramatic increase in their sophistication.

Traditional email security, built on signature-based detection, protocols like DMARC, DKIM, and SPF, and reinforced by awareness training, may soon be struggling to keep pace (if it isn't already). These systems rely on identifying known patterns and signatures. AI, however, can generate emails that bypass these checks. Phishing emails, once easily spotted by their clumsy grammar or generic greetings, can now be crafted by AI to mimic legitimate correspondence, personalised to the recipient with uncanny accuracy. Even the most vigilant employee will find it harder to distinguish the genuine article from a cleverly forged imitation.
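To see why these controls have limits, consider what an SPF or DMARC check actually does: it looks up TXT records the sending domain has published in DNS and validates the sending infrastructure, not the message itself. Here is a minimal sketch, assuming the dnspython package is available (the domain is purely an example), of the records a receiving server would consult:

```python
# A minimal sketch (assuming the dnspython package) of how SPF and DMARC
# checks work: they look up TXT records the sending domain has published.
# They validate sending infrastructure, not message content, which is why
# a well-written AI-generated email from a legitimate-looking domain can
# pass them untouched.
import dns.resolver

def fetch_txt_records(name: str) -> list[str]:
    """Return all TXT record strings published for a DNS name."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_sender_domain(domain: str) -> dict:
    """Collect the SPF and DMARC policies a receiving server would consult."""
    spf = [r for r in fetch_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in fetch_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    return {"spf": spf, "dmarc": dmarc}

# Example (hypothetical domain):
# check_sender_domain("example.com")
# -> {'spf': ['v=spf1 -all'], 'dmarc': ['v=DMARC1; p=reject; ...']}
```

The point is that these checks answer "is this server allowed to send for this domain?", not "is this message genuine?". That is exactly the gap that fluent, personalised, AI-generated phishing slips through.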

This new reality demands a fundamental rethink of cybersecurity strategy. Adversarial AI, where AI is used to probe and test defences, becomes both a threat and a vital tool. Cybercriminals are already using it to refine their attacks, creating an evolutionary arms race. But defenders can also leverage adversarial AI to stress-test their systems, identifying vulnerabilities before they are exploited.

The key lies in shifting from a reactive, signature-based approach to a proactive, AI-driven model. The concept of "understanding what normal looks like" (Thank you, SANS Digital Forensics and Incident Response, for teaching me this years ago in your SEC504 course!) is paramount.

Instead of focusing on known badness, AI can analyse vast datasets – network traffic, user behaviour, email content – to establish a baseline of normality for an organisation. Any deviation from this baseline, however subtle, can be flagged as suspicious, even if it doesn’t match a known attack pattern.
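As a rough illustration of that idea, the sketch below (assuming scikit-learn; the features and numbers are invented for the example) builds a baseline from a user's historical sending behaviour and flags a new event that falls outside it:

```python
# A minimal sketch of "understanding what normal looks like", assuming
# scikit-learn is available. The feature set (send hour, recipient count,
# attachment size) is purely illustrative; a real deployment would derive
# far richer features from network, email and identity telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline: historical behaviour for one user / mailbox
# columns: [hour_of_day, recipients_per_message, attachment_kb]
baseline = np.array([
    [9, 2, 120], [10, 1, 0], [11, 3, 450], [14, 2, 80],
    [15, 1, 0], [16, 4, 300], [9, 2, 60], [13, 1, 0],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# New activity: a routine message, and a 3 a.m. message to 40 recipients
# with a very large attachment
new_events = np.array([[10, 2, 100], [3, 40, 9000]])
scores = model.predict(new_events)   # 1 = looks normal, -1 = anomalous
print(scores)                        # e.g. [ 1 -1 ]
```

The value here is not the particular model but the posture: nothing about the flagged event matches a known signature; it is simply unlike anything this user has done before.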

This approach offers several advantages. AI-driven anomaly detection can identify novel attacks, including those generated by AI itself, that would slip past traditional defences. Behavioural analysis can spot unusual user activity, potentially indicating a compromised account. And sophisticated content analysis can detect the subtle linguistic cues and contextual anomalies in emails that betray a phishing attempt, even if the message appears superficially legitimate.
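For the content-analysis piece, one simple (and deliberately crude) way to express "this doesn't sound like them" is to compare a new message against prior legitimate mail from the same sender. The sketch below assumes scikit-learn and uses invented messages; a real system would combine many such signals rather than rely on one:

```python
# A rough illustration of content-level anomaly scoring, assuming
# scikit-learn. We compare a new message against a baseline of prior
# legitimate mail from the same sender; a low similarity score is one
# (weak) signal that the "voice" of the message has changed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_good = [
    "Hi team, minutes from today's standup are attached.",
    "Reminder: expense reports are due Friday.",
    "Can we move the supplier review to Thursday afternoon?",
]
new_message = "URGENT: wire the attached invoice today, do not discuss with finance."

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(known_good + [new_message])

# Similarity of the new message to each known-good message
similarity = cosine_similarity(matrix[-1], matrix[:-1])
print(similarity.round(2))  # low values suggest a departure from the usual tone
```

On its own a low similarity score proves nothing, but layered with behavioural and infrastructure signals it is precisely the kind of contextual anomaly that signature-based tools have no way to express.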

But this new paradigm also brings new responsibilities. The use of AI in security, while essential, must be grounded in ethical considerations. This includes a deep understanding of the terms and conditions governing the use of AI tools, especially regarding data handling. Data flows must be transparent and carefully managed to ensure compliance with privacy regulations and respect data sovereignty. Security cannot be achieved at the expense of individual rights. Responsible AI use means ensuring that the very tools designed to protect us do not themselves become vectors for data breaches or privacy violations. This requires rigorous due diligence in selecting AI providers, clear data governance policies, and ongoing monitoring of AI systems.

The rise of readily available AI presents a formidable challenge. But by embracing AI for defence, and shifting our focus from reaction to prediction – while simultaneously adhering to principles of responsible AI use – we can begin to turn the tide in this new era of cyber warfare. The AI-powered heist is upon us, but it is not an inevitable triumph. With vigilance, innovation, a healthy dose of scepticism, and an unwavering commitment to ethical data handling, we can protect ourselves.