April’s #InfosecLunchHour brought together cyber security professionals, digital forensics experts, legal specialists, consultants, and academics for one of the most thought-provoking discussions in the meetup’s history. The topic, proposed by one of our regular contributors, examined the intersection of neurodivergence, criminal law, and the assumptions embedded in our legal and security systems. Under Chatham House Rules, the conversation that followed was raw, nuanced, and deeply important.
What began as a case study quickly evolved into a wider discussion about intent, perception, the limitations of AI guardrails, the ethics of undercover operations, jurisdictional contradictions, and the very real human cost of being misunderstood.
This is a long article, but I feel it is a necessary one and to take anything out of it wouldn’t do this very important topic justice.
A Case That Raised More Questions Than Answers
The discussion opened with a forensic expert sharing a recent case they had worked on as a defence expert witness. The case involved a neurodivergent individual with a strong background in mathematics and cryptography who had been conducting what were essentially Turing tests in online chat rooms, attempting to distinguish bots from real humans.
The individual, who was described as being heavily on the autistic spectrum, had set up a structured research project. They had a home server running complex mathematical models, and while waiting for those models to complete, they began experimenting on their mobile phone. They entered chat rooms, initiated conversations, and used a range of conversational techniques (including foul language and provocative statements) to test whether the entity at the other end was a bot or a human. They had handwritten notes, scoring matrices, and a body of academic research to support their methodology.
During this process, the individual encountered an account that identified itself as a 14-year-old girl. The account was, in fact, an undercover police officer. The conversation was described as inappropriate but not sexual, with no evidence of grooming behaviour or any attempt to arrange a meeting. Nevertheless, the individual was prosecuted.
The forensic expert explained that they were tasked with examining the individual’s devices to determine whether there was evidence of pre-existing research activity that supported the Turing test explanation, rather than material gathered after the fact as a cover story. What they found was extensive: hundreds of pages of research, academic papers, and structured notes that predated the alleged offence.
A forensic psychiatrist’s report confirmed the severity of the individual’s neurodivergence, describing conditions that made it extremely difficult for them to understand social nuances or anticipate the consequences of their actions. Ultimately, the charges were dropped for procedural reasons relating to the undercover operation, though the defence team believed they had a strong case had it proceeded to trial.
The individual’s life, however, had already been profoundly affected. Years of distress, reputational damage, and the toll on their family were consequences that no acquittal could undo.
Intent, Recklessness, and the “Guilty Mind”
The case prompted a rich discussion about mens rea, the legal concept of a “guilty mind,” and how it operates in practice. One participant offered an important clarification: mens rea does not simply mean malicious intent. It also encompasses recklessness (where the defendant did not foresee the risk of harm) and negligence (where a reasonable person would have foreseen it). There are also strict liability offences, where intent is entirely irrelevant; if harm has been caused, a penalty follows.
This distinction matters enormously when considering neurodivergent individuals, who may process risk, social context, and consequence in fundamentally different ways to neurotypical people. As one participant put it, the question is not whether someone intended harm, but whether they failed to properly consider the risk. Another noted the uncomfortable flipside of this argument: without some standard of recklessness or negligence, it would be impossible to hold anyone accountable who simply claims they did not understand the consequences.
A contributor who was unable to attend submitted written insights that were read to the group. They argued that this is not simply a matter of intent being misunderstood. It is about systems assuming a shared model of interpretation that does not actually exist. Much of law and security practice relies on implicit expectations: what someone “should have known,” what is “obviously inappropriate,” what constitutes suspicious behaviour. Those assumptions function when people process social context in broadly similar ways. They begin to break down when they do not.
This raises what was described as a “slightly uncomfortable question”: are we assessing intent, or alignment with unspoken norms?
The Internet Jury: Reputation and the Court of Public Opinion
Several participants highlighted a dimension of the case that sits outside the legal process entirely. One participant was particularly direct: it does not matter what the legal outcome is once the accusation has been made public. The “internet jury” delivers its verdict through social media, and the consequences for reputation, relationships, and livelihood can be devastating and permanent.
The individual in the case study had a spouse who, while not neurodivergent in the same way, was deeply affected by the social fallout. The group reflected on how the reputational harm from being charged with an offence involving a minor is qualitatively different from other types of accusation. Even when charges are dropped or an individual is acquitted, the shadow of the allegation can persist indefinitely.
This raised broader questions about the responsibility of institutions, the media, and online communities in how they handle cases where the facts are not yet established, and about whether current systems offer any meaningful route to rehabilitation of reputation once the damage is done.
Undercover Operations: Protection or Entrapment?
The use of law enforcement officers posing as minors in online chat rooms drew considerable debate. One participant observed that in such operations, there is no actual minor involved and therefore no direct crime against a child. The prosecution rests entirely on inferred intent. When that intent is academic or experimental rather than predatory, the situation becomes, as one participant described it, “a weird grey area.”
Others pushed back, noting that the primary purpose of such operations is to identify genuine predators, and that inevitably some individuals who are not predators will be caught up in the process. The legal system’s role is then to distinguish between the two, which is precisely why defence experts, forensic psychiatrists, and the concept of mens rea exist.
The forensic expert added important context: individuals caught through such operations who are not engaged in grooming or sexual communication are likely to face lesser charges and lighter sentences than those who are. The system does make distinctions, even if the process of getting to that point is deeply damaging to those who are ultimately exonerated.
AI Guardrails: The Illusion of Safety
The conversation expanded into the limitations of AI safety mechanisms. One participant described how current large language model guardrails are easily bypassed through reframing. They recounted asking an AI model how to make a pipe bomb and being refused, only to start a fresh conversation framing the same question as a thermodynamics problem. The AI then provided detailed formulas, chemical suggestions, and design considerations, all of which amounted to the same information it had previously declined to share.
The point was clear: AI cannot reliably identify intent. If a direct request is refused, a reframed version of the same request will often succeed. This has implications for how we think about both the regulation of AI and the prosecution of individuals whose interactions with AI systems produce harmful content. If an AI tool itself cannot determine whether a user’s intent is academic, creative, or malicious, how can we expect legal systems to make that determination based solely on the outputs?
Another participant raised the question of AI-generated illegal content, specifically pseudo-images. Under existing law in parts of the UK, AI-generated depictions of child sexual abuse carry the same legal weight as real images. The group discussed the fact that many people are unaware of this legal reality, and that the gap between public understanding and the law creates significant risk, particularly for neurodivergent individuals who may explore topics without fully understanding the legal boundaries.
Jurisdictional Contradictions and Legal Gaps
One of the most striking contributions came from a participant who highlighted the contradictions between legal jurisdictions within the UK itself. They described a scenario in which a defence expert can lawfully obtain forensic evidence in England and travel with it, but the moment they cross the border into Scotland, they are technically in possession of illegal material. Scottish law interprets the handling of such evidence differently, and the mechanisms that protect defence experts in England do not apply in the same way north of the border.
Similarly, material classified as extreme pornography in Scotland may be legally available in other jurisdictions, yet Scottish law treats certain categories of such material with the same severity as child sexual abuse imagery.
The broader point was that legal frameworks have not kept pace with technology, the internet, or the realities of cross-border digital evidence. For practitioners working across jurisdictions, this creates genuine professional risk. For individuals who may not understand these distinctions, particularly neurodivergent individuals, the risk is even greater.
A separate contribution noted that certain publications banned in the UK as terrorist material are freely hosted on the US Department of Justice website. Simply downloading such a document in the UK could constitute an offence, regardless of the reader’s intent.
Security Research and the Computer Misuse Act
The discussion drew natural parallels with the cyber security industry itself. Participants noted that penetration testers and bug bounty hunters regularly operate in legal grey areas, and that the question of intent is central to how their activities are judged.
One participant urged the group to read the sentencing guidelines accompanying the Computer Misuse Act, noting that these guidelines are far more nuanced than the legislation itself. They include detailed direction to judges on considering intent, whether harm was meant, and whether the individual acted responsibly when accidental harm occurred. The participant expressed frustration that very few people in the industry have actually read these guidelines, despite their direct relevance to the work they do.
The same participant noted that there have been lobbying efforts to create licensed categories of penetration testers who would be exempt from the Act, driven by larger companies seeking competitive advantage. The existing system, they argued, already accounts for the complexities of security research, if people take the time to understand it.
Monitoring AI Use: Where Does Oversight End?
In the closing minutes, the discussion turned to the challenge of monitoring employee use of AI tools. A participant from the financial sector described how their organisation had detected employees using personal devices to prompt AI systems with work-related queries, then sending the results into their corporate environment. While no proprietary information was leaving the organisation, the behaviour raised concerns about quality assurance, compliance, and the provenance of work product.
The forensic expert responded that monitoring personal devices is a fundamentally different proposition from monitoring corporate-issued equipment. Company devices come with clear terms of use and monitoring expectations. Personal devices, however, fall under an entirely different legal and ethical framework. Obtaining permission to examine a personal device requires judicial authority, a commissioner, and specific parameters for what can be searched. Without knowing precisely what you are looking for, the process is expensive, intrusive, and frequently produces no actionable outcome.
The group acknowledged that this tension between organisational oversight and individual privacy is likely to intensify as AI tools become more embedded in daily workflows, and that current frameworks are not well equipped to handle the nuances involved.
Key Takeaways
The April #InfosecLunchHour surfaced a number of important themes that deserve continued attention across the cyber security and legal communities:
- Our legal systems are built on assumptions of shared interpretation that do not hold true for everyone. When someone processes social context differently due to neurodivergence, the gap between their intent and how their behaviour is perceived can have life-altering consequences.
- The concept of mens rea is more nuanced than many people realise. Intent is not limited to malice; it includes recklessness and negligence, and the thresholds for each vary by offence. Understanding these distinctions is essential for anyone working in security, forensics, or compliance.
- AI guardrails provide an illusion of safety. Current large language models cannot reliably determine user intent, and reframing techniques can bypass most restrictions. This has implications for both regulation and prosecution.
- The reputational damage from accusation can be permanent, regardless of the legal outcome. The “internet jury” does not wait for evidence, and there are currently few effective mechanisms for restoring a damaged reputation.
- Legal frameworks have not kept pace with technology. Jurisdictional contradictions, gaps in cross-border evidence handling, and outdated legislation create genuine risk for practitioners and individuals alike.
- The cyber security industry itself operates in many of the same grey areas. Penetration testing, vulnerability disclosure, and bug bounty hunting all involve activities that could be interpreted as criminal, depending on how intent is assessed. The sentencing guidelines around the Computer Misuse Act are more considered than many practitioners realise, and reading them is strongly recommended.
- Monitoring AI use raises profound questions about privacy, oversight, and proportionality. As AI tools become more prevalent, organisations will need to develop clearer policies that balance legitimate security concerns with individual rights.
Looking Ahead
This was a session that left participants with more questions than answers, and that is perhaps its greatest value. The intersection of neurodivergence, criminal law, and technology is a space where simplistic thinking is not just unhelpful but actively dangerous. Assumptions that feel obvious or common-sense to one person may be invisible or incomprehensible to another. Systems designed around a neurotypical model of understanding will continue to produce unjust outcomes until that model is examined, challenged, and revised.
The discussion also served as a reminder that these are not abstract legal questions. They affect real people, real families, and real careers. The individual at the centre of the case study lost years of their life to a prosecution that was ultimately dropped. The distress, the damage to relationships, and the professional consequences were real and lasting, regardless of the outcome.
For the cyber security community, these questions are not peripheral. They sit at the heart of what we do: assessing risk, determining intent, and making judgements about behaviour in digital spaces. If we cannot get this right in our own professional practice, we cannot reasonably expect the legal system to do so either.
This article comprises of discussions with cyber security and information security professionals during the April #InfosecLunchHour meeting and reflects real observations and insights shared under Chatham House Rules.
The next #InfosecLunchHour event will be on Wednesday 6 May 2026. To join the monthly #InfosecLunchHour meetups, please email Lisa Ventura MBE FCIIS via lisa@csu.org.uk.




