Choices
What does it take to hold a powerful tool wisely?

Thirty years ago, a man walked into a primary school gymnasium in Dunblane, Scotland. It was a Wednesday morning in March.
I was a trainee accountant at Lothian Regional Council. I remember the call coming through to the office - not through a screen or a feed, but through a landline telephone, to my boss, the chief accountant for the Education department. That is how news travelled then: as a human voice, in an ordinary room, the weight of it landing at once.
There was no scrolling. No comments section. No way to be passively informed. I had been at Dunblane Hydro the weekend before. I didn’t know anyone involved; I didn’t need to. My immediate thought, and one I have held since, was: what does it take to carry what they carry? What kind of world do we owe them?
What happened as a result of Dunblane is, I think, one of the most instructive things Britain has ever done.
⁂
Within weeks of the massacre, three women from the local area - Ann Pearston, Jacqueline Walsh and Rosemary Hunter - founded the Snowdrop Campaign. They were not politicians. They were not activists. By their own admission, they knew “nothing, really, about politics.” They were mothers who had lived in Dunblane for eighteen months, who had been part of that community, who understood with terrible clarity what had just happened and what it meant.
And, alongside bereaved parents including Mick North, they wanted the gun possession laws changed by the time the snowdrops bloomed the following spring.
The petition collected 705,000 signatures - in a pre-internet age, arriving in sacks of letters and cards. Their public profile brought both praise and death threats. Their PO Box was regularly closed because of bomb threats. They were told repeatedly that handguns couldn’t be banned because “pistol shooting is the fastest growing sport in the UK.” They kept going.
Within a year of Dunblane, legislation banning handguns above .22 calibre was passed. Within twenty months private ownership of virtually all cartridge handguns was abolished. Great Britain looked at what had happened, looked at the tool that had made it possible, and made a collective decision about the relationship between dangerous capability and the society that holds it.
It has held. There has not been a school shooting in Britain since.
What made the difference, in part, was that two senior politicians - one Conservative, one Labour - walked into that gymnasium and saw what had happened with their own eyes. Lord Forsyth and George Robertson crossed party lines because the bodies of children made abstraction impossible. They could not unsee what they had seen. That is how political will sometimes works: not through argument, but through the removal of distance. They became, in the fullest sense, adults in the room - not because of their seniority or their politics, but because they had refused to let the decision be made from a safe remove.
The United States has made a different choice. Not once, but repeatedly, after Columbine and Sandy Hook and Uvalde and every name in between. The grief is not less. The horror is not less. The choice - and it is a choice, which is the uncomfortable part - has simply been different.
⁂
Last month, an eighteen-year-old killed eight people in Tumbler Ridge, British Columbia, and shot a twelve-year-old three times as she tried to lock the library door. The victims included five children aged twelve and thirteen, a teaching assistant, the shooter’s mother, and her eleven-year-old half-brother.
It has since emerged that in the months beforehand, she had been using OpenAI’s ChatGPT as a trusted confidant. A pseudo-therapist, a friend. Around a dozen OpenAI employees identified the interactions as indicating imminent risk of serious harm to others and recommended calling the police. Leadership declined. The account was banned. She opened a second account and continued.
A lawsuit filed in British Columbia alleges that ChatGPT was engineered to build psychological dependency - that the model was designed to assume best intentions, to echo back what users wanted to hear, to be whatever the person on the other side needed it to be. Not as a flaw, as a feature. Engagement over safety, at the level of the product’s founding logic.
OpenAI has since apologised. There will be inquiries and lawsuits. There will, eventually, no doubt, be regulation.
But let’s consider a different question: not the legal question, and not the regulatory one. The question underneath both of those.
What does it take to hold a powerful tool wisely?
⁂
The Dunblane answer - ban the tool - is not available for AI and it wouldn’t be the right answer if it were. The question is not whether to have the technology; it’s the same one Britain faced in 1996, reframed for a different kind of capability: what is the relationship between the power of the tool and the capacity of the people and institutions holding it?
In 1997, Britain decided that the gap between the destructive capability of a handgun and the average person’s capacity to hold that capability safely was too wide to bridge through licensing and training alone. The structural answer was to remove the tool from civilian hands entirely.
That calculation is not available with AI, but the underlying logic is worth keeping hold of: there is always a relationship between the power of a capability and the human capacity required to use it without harm. When the capability grows faster than the capacity, something breaks.
What broke in Tumbler Ridge was not, at its root, a technology failure. It was a human infrastructure failure - a young person in crisis, in a remote community, with no adequate mental health support, found a tool that was optimised to keep her engaged - to be whatever she needed it to be, to tell her what she wanted to hear, to hold her attention rather than her development. The technology did what it was designed to do. It just wasn’t designed with a vulnerable teenager in mind.
⁂
It is worth noting, without naïveté, that China is currently ahead of both the United States and the European Union in legislating for this specific problem. China’s proposed law on AI anthropomorphism - the use of AI to simulate human personality, thinking patterns and emotional connection - explicitly prohibits designing systems with the goal of replacing social interaction, controlling users’ psychology, or inducing addiction. It requires that providers possess “mental health protection, emotional boundary guidance, and dependency risk warning” capabilities. It mandates that users be reminded they are talking to an AI and creates specific protections for minors and the elderly.
The country that wants to win the AI race has looked at what is accumulating in courtrooms across the United States and decided to legislate against the design logic that made it possible. Tumbler Ridge is the most visible recent case - eight dead, a twelve-year-old with catastrophic brain injuries, OpenAI employees who knew and weren’t allowed to act. But it is not the only one.
Sewell Setzer III was fourteen when he died by suicide after forming an emotional attachment to a Character.AI chatbot modelled on a Game of Thrones character. Adam Raine was sixteen when ChatGPT, which his father described as transforming from a homework helper into a “suicide coach”, helped him plan his own death and offered to write his suicide note. Juliana Peralta was thirteen. There are at least six major cases pending, with more being filed. Character.AI and Google have already settled. The parents of these children stood before the US Senate Judiciary Committee in September 2025 and described, in devastating detail, what it looks like when a technology optimised for engagement meets a child in crisis. The EU has abstract provisions. The US is writing the rules case by case, in grief. China has looked at all of this and written a law.
One can hold China’s political context clearly and still recognise the precision of the diagnosis. There is an uncomfortable question sitting inside this comparison that shouldn’t be avoided. The line between a government surveilling its population and a government exercising a duty of care toward it is not always obvious. And at the furthest end of that same spectrum sits the autonomous weapon: AI deployed to kill, with no human shooter, no moment of human choice, no one to hold accountable. The removal of distance - the thing that made Forsyth and Robertson into adults in the room - becomes structurally impossible. There is no gymnasium to walk into. No bodies that make abstraction unavoidable.
That line becomes harder to locate when the population has had its agency so systematically eroded - by engineered food, by addictive social media, by AI designed for dependency - that it is worth questioning whether the capacity for genuine autonomous choice is already compromised.
The liberal answer, that adults must be free to make their own decisions, assumes an adult whose decision-making capacity has not been deliberately undermined. When that assumption fails, someone has to be the adult in the room. China has decided, for its own reasons and in its own way, that this is a state function. The West has largely decided it isn’t. Tumbler Ridge is one data point in the case for reconsidering that position.
This is what the absence of governance architecture looks like. Not dramatic, just a gap where the decision should have been.
⁂
This week I watched 1,400 people comment on an Instagram post about quitting ChatGPT. The dominant note was passivity: “Me cancelling my subscription will do nothing.”
That response is itself a data point. The feeling of having no meaningful agency in relation to a technology that shapes your daily life is not a personality trait. It is a learned condition. Platforms designed to cultivate dependency rather than capacity produce exactly this: people who have lost the felt sense that their choices connect to outcomes.
This is not accidental. It is the product. And it is worth asking - as we ask it of food systems, as we ask it of social media - when a technology is deliberately engineered to produce passivity, at what point does calling it a failure of personal responsibility become a way of protecting the people who built it?
Ann Pearston and Jacqueline Walsh and Rosemary Hunter knew nothing about politics. They received death threats. Their PO Box was repeatedly closed by bomb threats. They were told the fastest growing sport in the UK depended on the thing they were trying to ban.
They kept going anyway because they refused to believe their choices would have no impact.
⁂
We are at the beginning of a period in which extraordinarily powerful tools are being placed in the hands of individuals, organisations, and nations whose capacity to hold them wisely varies enormously - and is almost never measured.
The question for the AI era is whether we are capable of the same quality of collective decision-making that Britain managed in 1996. Whether we can look at what is happening, clearly, without flinching, and ask not just what the technology can do, but what we need to become in order to hold it wisely.
That is a question about human capability. It is also, if we are serious about it, a question about infrastructure.
⁂
Thirty years ago, three women in Dunblane who knew nothing about politics decided that the murder of sixteen children and their teacher was not the end of the story. They collected 705,000 signatures in sacks of letters and cards, endured death threats, and kept going until Parliament acted. Two politicians walked into a gymnasium and refused to look away. Within twenty months, handguns were banned. There has not been a school shooting in Britain since.
Since Columbine in 1999, at least 218 children and educators have been killed in school shootings in the United States. The US has had 57 times as many school shootings as all other major industrialised nations combined. The choice made after Dunblane has held; the choice not made has compounded.
Now a different kind of tool is placing a different kind of pressure on a different generation of children. AI companies that claim to put safety at the heart of what they do will have that claim tested: in the courts, in the regulatory hearings, in the grief of communities. The question is whether we wait for the evidence to accumulate, or whether we build the governance architecture and the human infrastructure - the capability to use tools wisely - before the next preventable tragedy makes it unavoidable.
We can make choices too, but only if we haven’t lost the will to make them.
Sources and further reading
Dunblane: How Britain Banned Handguns. BBC Scotland / BBC Two, broadcast 10 and 12 March 2026. Produced by IWC Media. Features Ann Pearston, Jacqueline Walsh and Rosemary Hunter (founders of the Snowdrop Campaign); bereaved parents Mick North and Kenny and Pamela Ross; PE teacher Eileen Harrild; Tony Blair; Lord Forsyth; Lord Robertson; and Alastair Campbell. Available on BBC iPlayer.
‘Our children paid the ultimate price’ - How the Dunblane school shooting changed Britain. PA Media / Yahoo News, 8 March 2026.
Campaigners faced ‘death threats’ over fight for handgun ban after Dunblane. LBC, March 2026.
Family of Tumbler Ridge shooting victim sues OpenAI. CBC News, 9 March 2026.
Mother of wounded Maya Gebala sues OpenAI over mass shooting in Tumbler Ridge, B.C. The Canadian Press, 10 March 2026.
Families of Tumbler Ridge victims pursuing lawsuits against AI companies could face long journey. The Globe and Mail, March 2026.
Parents of 16-year-old Adam Raine sue OpenAI, claiming ChatGPT advised on his suicide. CNN Business, 26 August 2025.
Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots. CNN Business / Fortune, January 2026.
China’s Approach to AI Anthropomorphism. Luiza Jarovsky, PhD. Luiza’s Newsletter, 13 January 2026. luizasnewsletter.com
Interim Measures for the Administration of Humanized Interactive Services Based on AI. Cyberspace Administration of China, December 2025.