#InfosecLunchHour Discussion, 4 February 2026
Today’s #InfosecLunchHour session brought together cyber security professionals from across sectors to tackle a pressing question facing the industry: Is AI security becoming a dedicated specialism, or is it a capability that all security professionals should integrate into their existing skillsets?
The Central Question
The discussion centred on whether organisations are building dedicated AI Security teams or expecting existing security professionals to adapt. With AI moving from emerging concern to operational reality, the profession faces a critical juncture in how it approaches this evolving domain.
Participants debated the fundamental challenge of defining specialist roles in an area that is still developing. One attendee articulated the core issue: how can specialists be defined for an aspect of security that is itself being developed? This uncertainty creates challenges not only for individuals seeking to position themselves in this space, but also for organisations attempting to hire for these roles.
The Generalist Argument
A strong thread in the discussion advocated for generalist approaches. Several participants argued that whilst AI technology may be novel, the underlying threat vectors remain fundamentally unchanged, albeit operating at different speeds and scales. This perspective suggests that solid security fundamentals, particularly the ability to apply security by design principles, can be adapted to AI contexts without requiring entirely new specialist roles.
One attendee emphasised the importance of being able to walk through technical concepts, such as how multidimensional vector spaces relate to large language models, but noted that deep AI expertise isn’t necessarily required to secure AI systems effectively. The skill lies in applying existing security knowledge to new contexts.
The discussion drew parallels with the evolution of other security domains. Participants noted that learning to be a generalist often involves developing the ability to learn in different environments, a skill that specialism doesn’t always provide. One participant recommended David Epstein’s book ‘Range’, which argues that breadth of knowledge offers distinct advantages.
The Specialist Perspective
Counter to the generalist view, others highlighted unique challenges that suggest AI security may indeed require dedicated expertise. The conversation revealed that many organisations are seeing genuine demand for AI security skillsets, but prefer to develop these capabilities by embedding professionals into projects rather than hiring pre-formed specialists.
One participant shared their experience establishing an AI Centre of Excellence, noting the need for both deep technical expertise and broader risk management skills. This dual requirement suggests that AI security may require a hybrid approach, combining specialist technical knowledge with generalist security awareness.
The challenge of self-identification emerged as a significant concern. Attendees noted that anyone can claim to be an AI security specialist, but meaningful expertise depends on practical experience working with AI and large language model projects. This creates difficulties for both professionals seeking to demonstrate credibility and organisations attempting to assess candidates.
Practical Frameworks and Tools
The discussion touched on several practical resources gaining traction in the field. The MITRE ATLAS framework was highlighted as a useful reference point, providing a structured approach to understanding AI-specific threats and controls.
Participants were introduced to aidefend.net, a newly released online tool comparable to MITRE ATT&CK but focused specifically on threats to AI models. The tool maps attack types and paths through AI systems, along with the corresponding defensive controls. The recommendation came with an important clarification: when it comes to AI defences, organisations should look to web application security rather than cloud security.
Organisational Challenges
A recurring theme centred on the pressures organisations face in implementing AI security. Financial incentives are driving rapid AI adoption, often outpacing the development of security frameworks and expertise. This creates a difficult environment where security professionals are expected to upskill quickly whilst simultaneously managing multiple existing responsibilities.
The conversation touched on the challenge of organisations without specialist knowledge attempting to assess the skills of candidates for security roles. As one attendee noted, the answer is often ‘badly’. This problem is particularly acute in AI security, where even defining the required skillset remains contested.
Internal policy emerged as a critical element. One participant stressed the importance of having published policies that staff must attest to understanding, particularly around issues such as restricting use to internal co-pilot systems and preventing the sharing of company data in public AI tools.
Burnout and Professional Wellbeing
A significant portion of the discussion addressed the human cost of the current environment. Participants spoke candidly about the pressure on individuals to upskill continuously whilst managing existing workloads, with limited organisational support.
One attendee challenged the phrase ‘do more with less’, describing it as a recipe for burnout. The reality, they argued, is simple: we can only do less with less. The onus falls on security professionals to be honest and frank, both with themselves and with leadership, about what is actually achievable.
Whilst acknowledging that organisations need to improve their support structures, participants recognised the challenging market conditions that make it difficult for professionals to simply move to better employers. The discussion balanced idealism about what organisations should do with realism about the current state of the job market.
Role Definition and Professional Standards
The conversation explored broader questions about how security roles are defined and recognised. Reference was made to the UK Cyber Security Council’s work on role definitions, noting that whilst the first four roles landed cleanly because they were well established, subsequent definitions have proven more challenging.
Participants observed that many professionals see a defined box and want to fit into it for job role alignment. However, the discussion suggested that generalist approaches may become increasingly valuable, even as organisations continue to seek clearly defined specialist roles.
The evolution of professionalism in the sector was highlighted, with observations about a potential ‘gold rush’ to crystallise roles before patterns have stabilised. This tension between the need for defined career paths and the reality of a rapidly evolving field remains unresolved.
Legislative and Regulatory Context
The UK Cyber Security and Resilience Bill 2025 was discussed as part of the broader regulatory landscape affecting the profession. Analysis shared during the meeting highlighted particular implications for small and medium enterprises, noting that organisations critical to someone else’s operations may find themselves regulated whether they planned for it or not.
Participants drew parallels with health and safety legislation, questioning whether appropriate director-level accountability would be established or whether responsibility would fall on Chief Information Security Officers, potentially creating scapegoat situations.
From an international perspective, participants were directed to developments in Ireland regarding AI legislation, suggesting that regulatory approaches are evolving across different jurisdictions.
Community and Networking
The session demonstrated the value of peer networking and knowledge sharing. Attendees represented diverse sectors including digital forensics, financial services, risk management, and international organisations. The West Midlands Cyber Hub was highlighted as a successful community initiative, effectively engaging practitioners through events and placements.
The Cyber Marathon was noted as another successful community building effort, having attracted registrations from 38 countries, demonstrating the global interest in cybersecurity professional development.
Conclusions and Ongoing Questions
The discussion concluded without resolving the central question of whether AI security is fundamentally a specialism or a generalist expectation. This perhaps reflects the current state of the field itself, still in formation and resisting simple categorisation.
What emerged clearly is that the answer may be ‘both’. Organisations appear to need professionals who can combine solid security fundamentals with the ability to learn and adapt to AI-specific contexts. Whether this is labelled as generalist expertise or specialist knowledge may matter less than the practical capability to secure AI systems effectively.
The conversation highlighted the need for:
- Better frameworks for defining and assessing AI security skills
- More organisational support for professionals developing expertise
- Realistic expectations about what individuals and teams can achieve
- Continued development of practical tools and methodologies
- Honest discussions about workload, burnout, and professional sustainability
As AI continues to embed itself into organisational systems and processes, the security profession will need to continue evolving its approach. The question is not whether security professionals need to engage with AI security, but how best to develop and recognise the necessary capabilities whilst maintaining professional wellbeing.
_________________________________________________________________________________________
#InfosecLunchHour takes place on the first Wednesday of every month on Zoom. Founded and hosted by Lisa Ventura MBE FCIIS, it aims to bring together infosec and cyber security professionals to provide a valuable forum for discussions on key topics and trends in the industry. It operates under the Chatham House Rule and offers security professionals a space to share experiences, debate approaches, and build the community connections that help navigate an evolving field.
To join in with #InfosecLunchHour please email lisa@csu.org.uk.




