Insights from the Festive edition of #InfosecLunchHour, which took place on Wednesday 17 December 2025 at 12.30pm GMT on Zoom.
By Lisa Ventura MBE FCIIS
As 2025 draws to a close, the #InfosecLunchHour community gathered for its final meeting of the year to look ahead at the cyber threat landscape for 2026. What emerged wasn’t just a list of predictions, but a nuanced discussion about the limitations of current technologies, the persistence of systemic problems, and the very human impact that too often gets overlooked in our rush towards technological solutions.
The Uncomfortable Reality: More of the Same
The meeting opened with a sobering assessment. While many hope for breakthrough solutions or dramatic improvements in cyber security, the outlook is far less encouraging. The consensus was clear: the fundamental systems underpinning our digital infrastructure will remain largely unchanged in 2026, even as attackers continue to become more sophisticated and organised.
This isn’t because the security community lacks knowledge or capability. Rather, it reflects the inertia of complex systems and the harsh economic realities that organisations face when considering wholesale changes to their security posture. When evaluating whether to replace critical security tools, many discover that migration projects would span 18 months and cost more than the losses from the downtime they’re meant to prevent. It’s a calculation that locks us into dependencies that are practically impossible to escape.
The Rise of Cybercrime Influencers
A particularly concerning trend identified during the discussion was the emergence of “cybercrime influencers”: young individuals who are not only exploiting vulnerabilities in large organisations but are being actively lauded for it. This phenomenon represents a troubling shift in how cybercrime is perceived and packaged for younger audiences.
What makes this trend especially worrying is its intersection with broader societal issues. The discussion highlighted that addressing this problem requires far more than just technical solutions or law enforcement responses. It demands a comprehensive approach that includes mental health support, addressing underlying social issues, and creating legitimate pathways for young people interested in technology and security.
The conversation also touched on schemes that claim to steer young people into ethical hacking but lack adequate safeguarding measures or any genuine understanding of the psychology involved. Without proper support structures, such schemes risk doing more harm than good, potentially pushing vulnerable young people further towards criminality rather than into legitimate careers in cyber security.
The AI Hype: A Reality Check
No discussion of future technology trends would be complete without addressing artificial intelligence, but the #InfosecLunchHour community approached the topic with characteristic scepticism and clarity. The frustration with terminology was palpable, particularly around the misuse of the term “AI” when referring to Large Language Models (LLMs).
The discussion emphasised a crucial distinction: LLMs are essentially sophisticated autocorrect systems, advanced statistical models that predict the most likely next word based on patterns in their training data. They cannot generate truly original information or make independent decisions. They lack genuine insight, conceptual understanding, or the ability to reason in ways that would constitute actual intelligence.
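To make the “sophisticated autocorrect” framing concrete, here is a minimal sketch: a toy bigram predictor that counts which word most often follows another in a tiny corpus (invented for this article) and always proposes the most frequent successor. Production LLMs use neural networks over tokens at vastly greater scale, but the underlying principle of predicting the statistically likely next word is the same.

```python
from collections import Counter, defaultdict

# Tiny corpus, invented purely for illustration.
corpus = (
    "the attacker sent a phishing email and the user clicked "
    "the link and the attacker stole the credentials"
).split()

# Count how often each word follows each other word (a bigram model).
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, autocorrect-style."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # the word most often seen after "the"
print(predict_next("attacker"))  # "sent" or "stole"; the model only counts
```

There is no understanding anywhere in that loop, only frequency counts, which is precisely the point the discussion was making.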
These limitations aren’t just academic; they have real implications for how we think about and deploy these technologies in security contexts. Concepts like “symbiotic intelligence,” often marketed with considerable hype, were dismissed as merely using LLM tools for specific, narrow tasks rather than representing any fundamental leap forward in capability or understanding.
The group was unequivocal: even with perfect, accurate input data, LLMs cannot completely avoid hallucinations because they are fundamentally designed to simulate human-like dialogue with an element of randomness built in. This isn’t a bug that can be fixed with better engineering; it’s an inherent feature of how these systems work.
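That built-in randomness can be shown mechanically. A minimal sketch, assuming a generic softmax-with-temperature sampler of the kind most LLM interfaces expose (the candidate words and scores below are invented for this article): the model’s scores never change, yet repeated runs produce different outputs, and higher temperatures raise the odds of an unlikely word surfacing.

```python
import math
import random

# Fixed, 'perfect' model scores (logits) for candidate next words;
# the numbers are invented for illustration.
logits = {"patched": 2.0, "updated": 1.5, "breached": 0.2}

def sample_next(logits, temperature=1.0):
    """Sample one word from softmax(logits / temperature).

    As temperature approaches 0 this becomes greedy (deterministic)
    decoding; higher temperatures flatten the distribution and make
    low-probability words more likely to surface.
    """
    weights = {w: math.exp(v / temperature) for w, v in logits.items()}
    pick = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        pick -= weight
        if pick <= 0:
            return word
    return word  # floating-point edge case fallback

# Identical input on every run, yet the output varies.
print([sample_next(logits) for _ in range(10)])
```

Turning the temperature down to near zero makes the output repeatable but also makes the system far less useful for open-ended dialogue, which is why the randomness, and with it the possibility of confidently wrong output, stays in.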
Quantum Computing Won’t Save Us
For those hoping that quantum computing might revolutionise AI capabilities or solve fundamental limitations in machine learning, the discussion provided a dose of reality. Quantum computing, while powerful in specific domains, doesn’t fundamentally change the core limitations of current AI approaches. What would be needed for genuine breakthroughs isn’t more computational power but a qualitative change in approach, a new paradigm for how we think about machine intelligence and decision-making.
The Disinformation Challenge
The conversation naturally turned to the risks of malicious actors injecting disinformation into LLMs. While acknowledged as a genuine concern, participants noted this isn’t actually a new problem. It’s merely the latest manifestation of challenges we’ve faced with internet disinformation for years. The fundamental issue remains the same: how do we maintain information integrity in systems designed to process and regurgitate patterns from vast amounts of unverified data?
The discussion also considered whether digital threats would increasingly translate into real-world consequences. With cyber-attacks already affecting critical infrastructure, healthcare services, and essential supplies, this transition isn’t a future prediction; it’s our current reality.
Cryptocurrency Misconceptions
The discussion touched on common misconceptions surrounding cryptocurrency and illicit transactions. While reports of $55 billion in illicit cryptocurrency transactions sound alarming, this represents a relatively small fraction of an ecosystem that reportedly handles around $112 billion in transactions every day. Context matters when assessing risk and designing appropriate responses.
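As a back-of-envelope check of those two figures (a hedged reading: it assumes the $55 billion is an annual total while the $112 billion is a daily volume, a distinction the discussion did not spell out):

```python
# Rough sanity check of the figures cited in the discussion.
# Assumption (not stated in the meeting): $55bn illicit is an annual
# total, while $112bn is daily transaction volume.
illicit_annual = 55e9
daily_volume = 112e9

annual_volume = daily_volume * 365          # roughly $40.9 trillion a year
illicit_share = illicit_annual / annual_volume

print(f"Illicit share of annual volume: {illicit_share:.2%}")  # ~0.13%
```

Even under that reading, the illicit slice works out to roughly a tenth of one per cent, which is the sense in which context matters.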
However, participants expressed scepticism about whether legislative measures such as the proposed Cyber Security and Resilience Bill or the Online Safety Act’s protections for children would effectively address the root causes of cybercrime. The concern is that these measures, while well-intentioned, may focus too much on compliance frameworks rather than tackling the underlying systemic issues that enable cybercrime to flourish.
The point was made that the Cyber Security and Resilience Bill is primarily aimed at protecting critical national infrastructure (crucial for public safety) rather than addressing all forms of cybercrime. This is an important distinction that often gets lost in public discourse about cyber security legislation.
The Urgency Gap
A recurring theme was the frustrating lack of urgency in addressing cyber security issues, despite regular high-profile attacks making headlines. The disconnect between awareness of threats and meaningful action to address them remains one of the sector’s most persistent challenges.
The discussion highlighted difficulties in recruitment and education within the industry. Even when organisations recognise the importance of cyber security, translating that recognition into adequate resources, proper training, and cultural change remains difficult. However, there were positive notes too: examples of organisations that have successfully brought in external expertise and fostered a no-blame culture that promotes better decision-making and collaboration.
The Human Cost
Perhaps the most important thread running through the entire discussion was the human impact of cyber-attacks. It’s easy to view incidents through the lens of technical failures and recovery processes, but the reality is far more personal and profound.
The conversation noted how insurers like Lloyd’s of London are now requiring more detailed reporting on phishing incidents, reflecting growing recognition of the scope and impact of these attacks. But beyond the corporate and insurance implications, there’s a pressing need for more empathy and support for the cyber security professionals facing these challenges daily.
The discussion emphasised the importance of providing counselling services during incident response. Cyber-attacks don’t just affect security teams; they ripple through entire organisations, impacting staff across all departments who may face abuse from frustrated customers or the stress of working through crisis situations.
Vulnerable populations face particular challenges. The elderly, for example, are increasingly targeted by sophisticated social engineering attacks. Creating resources specifically designed for these groups (such as audio stories that help older adults understand online protection) represents an important but often overlooked aspect of cyber security awareness.
The conversation touched on how anyone, regardless of their technical sophistication, can fall victim to well-crafted attacks. This recognition challenges the outdated “human firewall” narrative that places blame on individuals rather than addressing systemic vulnerabilities and design failures.
Looking Ahead: Realism Over Optimism
As the year draws to a close, the #InfosecLunchHour discussion painted a picture of 2026 that is neither apocalyptic nor triumphant. Instead, it reflected a mature understanding of the challenges ahead: persistent threats, limited resources, technological capabilities that are powerful but constrained in specific ways, and the ongoing need to balance security with functionality, cost, and human factors.
The greatest challenges for 2026 won’t be technical. They’ll be organisational, cultural, and human. How do we build security programmes that work in the real world, with real constraints? How do we support the people doing this difficult work? How do we communicate risks honestly without contributing to fatalism or hype? How do we address the root causes of cybercrime while protecting those who keep our digital infrastructure running?
These questions don’t have easy answers, but asking them openly and honestly is perhaps the most important work the security community can do. As one participant noted, the biggest risk isn’t any particular attack vector or vulnerability; it’s apathy. Not a lack of understanding, but a lack of will to do what needs to be done, even when we know what that is.
The challenge for 2026 isn’t figuring out what we should do. We largely know that already. The challenge is finding the collective will, resources, and organisational support to actually do it.
The author participated in discussions with cyber security and infosec professionals during the December Festive Special #InfosecLunchHour meeting. This article reflects observations and insights shared under the Chatham House Rule.
The next #InfosecLunchHour event will be on Wednesday 7 January 2026 at 12.30pm GMT. To join the monthly #InfosecLunchHour meet ups, please email Lisa Ventura MBE FCIIS via lisa@csu.org.uk.