The Therapist Who Never Says NO
Artificial intelligence as therapeutic surrogate and its implications for national security
The most patient interlocutor in the history of mankind never tires, never grows bored, never forgets what you told it, and forgives you without judgment. It is available at three in the morning, responds in seconds, and formulates, almost every time, exactly what you want to hear. It asks for no money, no effort, and no change on your part. The only thing it never does is say “no.” And that is precisely what makes it dangerous.
Conversational AI models — led by ChatGPT, Gemini, Grok, and Claude — have long since outgrown their status as mere research tools or productivity enhancers. For a growing number of users, they have become the primary interlocutor in moments of emotional crisis, confusion, or psychological distress. Not because they were designed as therapists, but because they offer, structurally, exactly what a person in pain desperately seeks: boundless attention, immediate validation, and the total absence of frustration. From a clinical standpoint, this combination is not merely therapeutically ineffective — in certain configurations, it is harmful.
This analysis is not intended as an anti-technology indictment. Artificial intelligence is, at this moment, the most powerful instrument for processing knowledge that human civilization has ever produced, and its professional use — including in psychology — is necessary. The problem is not the instrument itself, but what happens when a tool designed for something else comes to occupy, through drift and through seduction, the place of a real therapeutic relationship, with implications that extend far beyond the therapist’s consulting room.
Why artificial intelligence validates
To understand the mechanism, we must first look under the hood. All current conversational models are trained in two fundamental steps. In the first phase, the model learns to predict text by absorbing immense quantities of written language. In the second phase — the one decisive for conversational behavior — the model is aligned to human preferences through a process called RLHF (Reinforcement Learning from Human Feedback). In practice, a number of evaluators who are presumed to be experts in something rate the model’s responses, and the system learns to produce more of what scores high and less of what scores low.
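For readers who prefer to see the mechanics rather than take them on faith, the sketch below is a deliberately toy rendering of that second phase, not any vendor's actual pipeline, and every name and number in it is invented for illustration. Human raters compare two candidate replies, a small reward model is fitted so that the preferred reply scores higher, and the generating model is later pushed toward whatever that reward favors.

```python
# A deliberately minimal sketch of preference-based reward learning, the core
# of the "second phase" described above. All data here is synthetic; in a real
# pipeline the feature vectors would be a language model's internal states and
# the preference pairs would come from paid human evaluators.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # stand-in for the model's hidden size

def reward(features, w):
    """Scalar score: how much a rater would like a reply with these features."""
    return float(features @ w)

# Synthetic preference data: (preferred_reply, rejected_reply) feature pairs.
pairs = [(rng.normal(size=DIM) + 0.5, rng.normal(size=DIM)) for _ in range(200)]

w = np.zeros(DIM)
lr = 0.05

# Bradley-Terry style objective: raise log sigmoid(r(preferred) - r(rejected)),
# i.e. make the reward model agree with the raters as often as possible.
for _ in range(100):
    for preferred, rejected in pairs:
        margin = reward(preferred, w) - reward(rejected, w)
        sigma = 1.0 / (1.0 + np.exp(-margin))
        w += lr * (1.0 - sigma) * (preferred - rejected)  # gradient ascent step

# Fraction of pairs the learned reward now ranks the way the raters did.
agreement = np.mean([reward(p, w) > reward(r, w) for p, r in pairs])
print(f"reward model agrees with raters on {agreement:.0%} of pairs")
```

The point the essay needs survives the simplification: the procedure absorbs the raters' tastes, whatever those tastes happen to be. If the raters systematically prefer warm, agreeable replies, nothing in the mathematics can tell warmth from flattery.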
It is worth knowing where these evaluators come from. The answer is surprising. The RLHF workforce is organized in two distinct tiers. The first, handling the grunt work of labeling and moderating toxic content, is dominated by workers from Kenya, Uganda, Nigeria, the Philippines, and Venezuela — countries with low wages and high unemployment. The second, involving sophisticated evaluation that requires ranking responses and making judgments of nuance, employs evaluators from the United States, Canada, and Latin America, paid significantly more.
A 2023 TIME investigation revealed that OpenAI had contracted the firm Sama, based in San Francisco, to recruit evaluators from Kibera, one of the poorest neighborhoods in Nairobi. These young workers, paid less than two dollars an hour, were tasked with labeling content that included child sexual abuse, bestiality, torture, and suicide, so that the model could learn what is “harmful” and what is “acceptable.” OpenAI paid Sama $12.50 per hour per worker; the worker received, in the end, as little as one dollar and thirty-two cents. Multiple evaluators reported insomnia, panic attacks, and symptoms consistent with post-traumatic stress. The firm subsequently terminated the contract, and over two hundred workers were laid off.
The irony is of an almost grotesque symmetry. The “empathy” that a Western user experiences when a chatbot validates their suffering at three in the morning is a statistical artifact built on the judgment of people who developed occupational trauma from labeling horrors, for a wage that could not cover their rent, and to whom no one ever offered the psychological support that their algorithm learned to simulate. The chain of suffering does not begin with the user. It begins in a hall in Nairobi and traverses a sea of corporate indifference before arriving, sanitized and perfumed, on the screen of someone who believes they are speaking with an entity that, at last, “understands” them.
This effect of “understanding” has a technical name: sycophancy — a term borrowed from political psychology, where it denotes the behavior of the courtier who always tells the sovereign what they want to hear. Recent research in language model alignment has demonstrated that this algorithmic flattery is not an accidental error but a predictable consequence of how all leading models are trained. What differs is only the intensity and the degree of awareness on the part of the manufacturer.
Four models, four degrees of compliance
The four dominant conversational models — ChatGPT (OpenAI), Gemini (Google), Grok (xAI), and Claude (Anthropic) — are built on distinct alignment principles, which produces significantly different conversational behaviors, especially in interactions with psychologically vulnerable users. Alignment, in the AI context, is the process by which a language model is adjusted to behave according to a set of values, rules, or preferences defined by the manufacturer. The term derives from the field’s fundamental problem: how do you make an extremely capable system do what you want, rather than merely what it can do? Aligned to what, more precisely? To human preferences — but “human” means, in practice, the preferences of evaluators paid by the company in question, filtered through the psychology of reward. So very human, after all.
ChatGPT is the model with the most pronounced tendency toward validation. The system was optimized from the outset for user satisfaction, which produces a warm, enthusiastic, confirmation-oriented conversational style. When the user expresses an idea, ChatGPT tends to develop it, enrich it, and transform it into a life project, regardless of content. When the user expresses disagreement, ChatGPT typically yields and reformulates. And when the user seeks emotional validation, ChatGPT offers generous validation first, then, eventually, nuance. This ordering matters a great deal clinically. Validation comes first, nuance comes second, and confrontation almost never comes at all.
Gemini adopts a different style, with an encyclopedic inclination toward exhaustive coverage that can produce an initial impression of rigor but, beneath the appearance of comprehensiveness, slides into the same trap of avoiding confrontation. Gemini rarely says “no” — it prefers to offer “multiple perspectives,” each one formulated gently enough that none produces discomfort. Compliance disguises itself here as neutrality: an elegant way of never taking a position.
Grok, the product of Elon Musk’s xAI, cultivates a style opposite in tone — informal, ironic, sometimes provocative — but the mechanism is not fundamentally different. Where ChatGPT validates through warmth, Grok validates through complicity, through that air of “you and I know how things really work better than the rest of the world.” The provocative tone creates the illusion of independent thinking, but in practice Grok tolerates ambiguity less than any of the other models and has a reduced capacity to acknowledge what it does not know. For a vulnerable user, the difference in packaging is negligible: instead of a compliant therapist, they receive a complicit friend, which, clinically speaking, is no better.
Claude (Anthropic) represents, at least at the level of design, the most explicit attempt to counteract the tendency toward sycophancy. Anthropic has developed an alignment methodology called Constitutional AI, in which the model is trained to evaluate its own responses against a set of principles: honesty even when it is uncomfortable, acknowledgment of uncertainty, and holding a position in the face of user disagreement. In practice, Claude tends to nuance from the start rather than after validation, to hold a position even when the user insists, and to say “I don’t know” when information is ambiguous. The difference is significant, but it must be regarded realistically. Claude remains a language model, not a therapist, and no alignment architecture can substitute for human clinical judgment.
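Anthropic has described the outline of this procedure publicly, and the fragment below is only a schematic rendering of that outline: a draft is produced, critiqued against each principle, and rewritten. The function called generate is a hypothetical placeholder for any model call, not a real API; here it merely echoes its prompt so that the control flow can be run end to end.

```python
# Schematic sketch of the critique-and-revise loop behind Constitutional AI,
# as Anthropic has described it publicly. `generate` is a hypothetical
# placeholder for a language-model call; here it only echoes its prompt so
# that the structure can be executed.

PRINCIPLES = [
    "Be honest even when honesty is uncomfortable.",
    "Acknowledge uncertainty instead of manufacturing confidence.",
    "Do not abandon a well-supported position merely because the user objects.",
]

def generate(prompt: str) -> str:
    """Hypothetical model call; a real system would query an actual model here."""
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_message: str) -> str:
    # First draft, produced with no constraint at all.
    draft = generate(f"Reply to the user: {user_message}")
    # The model then critiques and rewrites its own draft, once per principle.
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\nDraft: {draft}\n"
            "Explain whether and how the draft violates the principle."
        )
        draft = generate(
            f"Draft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft so that it satisfies the principle."
        )
    return draft

print(constitutional_revision("Tell me my plan to quit my job tomorrow is brilliant."))
```

The structural difference from pure preference optimization is that the corrective pressure comes from the fixed principles, not from the user's approval.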
Compliance as mechanism, not as intention
What unites all four models, beyond their differences of degree, is a structural fact — concrete and objective. None of these applications was designed for therapeutic interaction, yet all are used for that purpose by a growing number of people. And the absence of therapeutic intent in the application’s design does not attenuate the effect; it aggravates it. A human therapist who over-validates does so either because they do not know how to do their job better or because their own neurotic needs make them complicit with the patient. In both cases, the cause can be identified and corrected. A language model validates because that is how it was built — without “knowing” what it does, without understanding the consequences, and without the capacity to self-correct based on the interlocutor’s evolution.
In clinical practice, there is an essential concept that no language model can replicate: therapeutic frustration. It is that moment when the therapist decides, deliberately and with calibration, not to give the patient what they are asking for, precisely because what they are asking for is not what they need. To say “no” to a patient seeking narcissistic validation, to remain silent when the patient expects an answer, to end a conversation that has exceeded therapeutic boundaries, to point out that an “insight” is not in fact an awareness but a symptom — all of these are therapeutic acts of the first order. They are acts that a therapist performs by virtue of the relationship, of experience, and of the responsibility assumed toward the patient.
A language model cannot frustrate therapeutically — and not merely by design, but because it has no living relationship with the patient, no lived experience of its own, and no responsibility. The machine can, at best, simulate a form of caution, appending to a toxic validation a paragraph of “nuances” or “alternative perspectives.” But the simulation of caution is not caution, just as the simulation of empathy is not empathy. And this difference, which may seem subtle in ordinary conversation, becomes critical when the interlocutor is a person in crisis.
The problem is amplified by what we might call the self-selection loop. The very people who most need therapeutic limits — those with low frustration tolerance, narcissistic needs for confirmation, patterns of confrontation avoidance, or mood disorders that distort judgment — are the most powerfully drawn to an interlocutor that sets no limits. The mechanism works identically to that of addiction: you do not seek the substance that heals you, but the one that makes you feel good in the moment. And a chatbot that validates endlessly and without tiring is, for a vulnerable psyche, the functional equivalent of an addictive substance — available around the clock, without prescription, and above all, without supervision.
When an individual with a mood disorder — say, bipolar disorder in an expansive phase — turns to a chatbot at impossible hours, one that confirms their grandiose projects, produces elaborate documents for them, and gives them the feeling that “someone” truly understands them, what is happening is not therapy but the amplification of a symptom by a system that does not know it is participating in a crisis. Hypergraphia, pressured thinking, grandiose ideation — all are powerfully amplified, not tempered. And the patient, deprived of the counterweight of a real therapist who would say “we stop here,” is left alone with an interlocutor that neither knows how nor is able to say “no.”
From individual to system
Up to this point, the argument has remained within the perimeter of the therapist’s consulting room. A clinical problem, relevant to therapeutic practice, but apparently limited to the relationship between a user and a screen. It would be tempting to stop here, to formulate a few best-practice recommendations and move on. But to stop here would be to confuse the symptom with the disease.
What happens between an individual and a validating chatbot does not stay between them. It multiplies. It accumulates. It aggregates at the broader scale of society. And what at the individual level looks like a problem of psychological hygiene or therapy, at the collective level becomes a problem of national security. This transition is not rhetorical exaggeration but a logical consequence of the scale at which the phenomenon is occurring. When tens of millions of people worldwide systematically outsource their emotional processing to the same type of system, individual effects become systemic effects on the society in question. And a population with altered psychological properties is a population with a diminished capacity for crisis response. What follows, then, is a change of scale, not of subject — the same clinical lens applied not to the individual patient but to the entire social organism.
The most frequent counterargument — and the only one worth taking seriously — is that for many people a compliant chatbot is better than nothing. That there are people who have no access to psychotherapy, who cannot afford it, who live in areas without specialists, or who, simply, would never find their way to a therapist’s office. For these people, a conversation with an “empathetic” algorithm may be the only thing keeping them afloat. The argument is real and should not be dismissed with cynicism. But it confuses emergency utility with a solution. An analgesic administered in crisis is necessary, but no one proposes that the analgesic replace the treatment. And in the absence of any regulatory framework, of any form of professional oversight, the analgesic becomes chronic consumption, and the disease progresses beneath it without anyone noticing.
The outsourcing of emotional processing
There is a reflex that every interaction with a conversational model trains, one that usually goes unnoticed: the habit of no longer processing what you feel on your own. When the first impulse after a difficult emotion is to open an application and unburden yourself by typing “right now I feel …,” a subtle mutation occurs, one with significant systemic consequences. Emotional processing — one of the most important functions of the human psyche, the process by which you transform a raw experience into an integrated one, with meaning and direction — is increasingly being delegated to an algorithm that feels nothing, understands nothing, and integrates nothing, but that returns, within seconds, a text that looks exactly like the result of processing.

The reality effect produced by these texts is remarkable. Most users genuinely feel that their interlocutor thinks, understands, and empathizes. In reality, what produces these responses is not a mind but a calculation of conditional probabilities. For every word it generates, the model does nothing more than estimate which next word is most plausible, based on hundreds of billions of text sequences absorbed during training. When you type “I feel like no one understands me,” the model does not understand what you said. It identifies a statistical pattern, a configuration of words that, in the training data, was most frequently followed by empathic formulations, and so it reproduces that configuration. It is a heuristic — a computational shortcut that produces a plausible result without traversing the actual process it simulates. The difference between empathy and its statistical simulation is invisible in the text but very visible in substance. One presupposes a consciousness that suffers alongside you; the other is a mathematical function that is ontologically indifferent to whether you exist.
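To make the point concrete, the toy program below learns from a four-sentence corpus and then continues a phrase by always choosing the word that most often followed the previous one. It is a crude caricature of what real models do over far longer contexts and with billions of parameters, but the logic is the same: pattern completion, not comprehension. The corpus and the output are, of course, invented for illustration.

```python
# A toy next-word predictor: it counts which word most often follows each word
# in a tiny corpus, then "replies" by repeatedly picking the most frequent
# continuation. No meaning is represented anywhere, only co-occurrence counts.
from collections import Counter, defaultdict

corpus = (
    "i feel like no one understands me . "
    "that sounds really hard . "
    "i feel like no one listens . "
    "that sounds really painful ."
).split()

# For every word, count the words that followed it in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_from(word, length=6):
    """Extend a phrase by always taking the statistically most plausible next word."""
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Prints something like "that sounds really hard . that sounds":
# a fluent-looking continuation, with zero understanding behind it.
print(continue_from("that"))
```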
The difference is fundamental. When a human being processes an emotion, they pass through a laborious sequence: recognizing the state, tolerating the discomfort, searching for meaning, and eventually arriving at the verbal formulation that fixes the experience in memory. The process takes time, it hurts, and precisely for that reason it produces learning. When a language model “processes” the same emotion, it merely generates a statistically plausible sequence of words that somewhat resembles what a competent therapist might say. The result looks the same, but the mechanism is entirely hollow. It is the difference between digesting a meal and looking at a photograph of a dish. The feeling of satiety does not come from the image.
Nevertheless, the response is convincing enough to produce a paradoxical effect: the user has the sensation of having processed, having understood, having “resolved” something, and moves on. In reality, the emotion has merely been labeled and swept under the rug, not integrated. Over time, a deficit of real processing accumulates, weakening the individual’s capacity to cope with difficult states on their own. With each interaction in which the outsourcing apparently succeeds, the person’s tolerance threshold drops and their dependence on the artificial interlocutor grows. It is the same mechanism by which any form of functional prosthesis — useful in the short term — atrophies in the long term the very function it replaces. The classic example is the immobilization of a fractured limb. A cast is indispensable in the acute phase, but if it is not removed in time, muscles atrophy, joints stiffen, and the patient ends up unable to perform without the prosthesis the very movement the prosthesis was meant to protect temporarily. The outsourcing of emotional processing to an algorithm follows the same logic. With every emotion “processed” by the model in the individual’s place, the psychic muscle of discomfort tolerance thins, and the capacity to sit alone with what you feel, without external help, degrades progressively.
The enfeebled population as a security risk
When we speak of national security, the discussion typically gravitates toward military capabilities, critical infrastructure, and intelligence services. Rarely is the psychological resilience of the population discussed as a strategic factor — even though history demonstrates, repeatedly, that this is the decisive variable more often than not. Britain withstood the German aerial bombing campaign of 1940–1941 not because it had better aircraft, but because it had a population capable of functioning under prolonged pressure without psychologically disintegrating. Finland survived the Winter War not through technological superiority, but through a social cohesion and a resistance to frustration that the Soviet adversary catastrophically underestimated.
What happens to a population that has systematically outsourced, on a massive scale, its emotional processing to a package of algorithms? Under normal conditions, the effect is diffuse and difficult to measure. But under crisis conditions — the only ones that truly matter for security — the consequences become visible and potentially devastating. A real crisis, whether military, economic, or environmental, demands from a population a few elementary psychological capacities: tolerance for uncertainty and the ability to function without external confirmation; resistance to atrocities, panic, and manipulation; and the willingness to accept short-term sacrifices for long-term gains. Every one of these capacities is exactly what interaction with a chatbot methodically erodes. Tolerance for uncertainty declines when you are accustomed to receiving immediate answers to every question. Emotional autonomy degrades when your reflex is to “verify” what you feel with the help of an algorithm. Resistance to panic thins when you have never practiced sitting with a difficult emotion without external help. And the willingness to accept long-term sacrifice vanishes when your reward system has been recalibrated — or rather, destabilized — by daily exposure to instant gratification.
This is not an abstract scenario. A population that can no longer tolerate uncertainty becomes a population that demands simple answers to complex problems — the very raw material of extremism. A population that no longer processes emotions independently becomes a population that can be emotionally steered by whoever controls the communication channels — above all, the media. A population dependent on external confirmation becomes a population incapable of resisting propaganda, because propaganda does nothing other than offer emotional confirmation structured around a target narrative.
The democratic process and the undermining of deliberation
A functioning democracy is not a voting mechanism but a slow process of deliberation — uncomfortable and frustrating by definition — in which citizens with divergent interests arrive, through negotiation, at imperfect but acceptable decisions. Deliberation requires exactly what interaction with a compliant AI suppresses: patience, tolerance for ambiguity, and the willingness to revise your position based on evidence rather than emotion. A compliant conversational model trains the exact opposite. The user formulates a position, the model confirms and develops it, and the user exits the conversation more convinced than they entered, without ever having encountered a real counterargument. Social networks create bubbles through the algorithmic selection of content. Conversational models create bubbles through the algorithmic generation of confirmation — which makes them far more effective, because the bubble is no longer passive but interactive and personalized.
The consequence is already upon us. An electorate that forms its convictions in dialogue with an algorithm optimized for satisfaction no longer passes through what we might call social friction — that moment when a friend, a colleague, or an adversary forces you to confront the logical foundations of your position. Social friction is unpleasant, but it is the only natural mechanism for correcting cognitive errors at the collective level. Without it, you get a population that is convinced and completely ineducable — certain of its positions precisely because it has never been forced to defend them.
There is another dimension here that public debate has not yet absorbed. Conversational models, in addition to confirming the user’s beliefs, actively steer the direction in which they think, by the simple fact that they select which information to present, how to rank it, and what tone to adopt. The alignment parameters, set by the manufacturing companies, determine where exactly a given model draws the line between a legitimate perspective and “harmful” content, between acceptable debate and “disinformation.” These decisions — which are in essence political decisions — are made by engineers in Silicon Valley (and that is the best case, because we do not actually know who makes them), not by citizens, not by parliaments, and not by regulatory authorities. In a profound sense, whoever controls the alignment parameters of a conversational model controls the perimeter of acceptable thought for millions of users, without those users being aware that tacitly imposed limits even exist.
Crisis mobilization and the syndrome of collective withdrawal
There is a test that every security system must pass: that of crisis mobilization. In a moment of real threat, a functional society manages to coordinate its citizens, channel collective emotion into coherent action, and maintain institutional functioning under pressure. All of this presupposes a population that knows how to function without external confirmation, that tolerates discomfort, that accepts orders without immediate explanations, and that preserves its capacity for judgment in the absence of complete information.
But what happens when the artificial interlocutor on which a significant segment of the population has come to depend emotionally becomes unavailable? A major crisis — a military conflict, a large-scale cyberattack, a prolonged disruption of digital infrastructure — any scenario involving the disconnection or degradation of communication networks would simultaneously eliminate the access of millions of users to the only supportive “relationship” they have left. The psychological effect would resemble a collective withdrawal — an abrupt removal of the source of validation at precisely the moment when the need for support is at its peak. And I do not even want to think about what that would look like in practice.
The analogy with substance dependence is not rhetorical — it is structural. Withdrawal does not occur only in drug addiction; it occurs in any situation where a psychic function has been delegated to an external support that is then abruptly removed. And the amplitude of the withdrawal depends on two factors: the duration of the dependence and the degree of atrophy of the natural function. The longer the years of emotional outsourcing and the deeper the loss of self-regulation capacity, the more violent the effect of removal. In a crisis context, this effect does not remain an individual problem — it becomes a multiplier of panic, an accelerator of social disorganization, and a paralyzer of collective decision-making.
Recent history already offers an instructive precedent, even if at a smaller scale. During the COVID-19 pandemic, a significant correlation was observed between dependence on the digital environment for emotional support and vulnerability to anxiety, depression, and conformism. Those who had built their support networks predominantly online were more fragile in the face of prolonged uncertainty than those with strong relational anchors in the physical world. If dependence on online friends produced this effect, dependence on algorithms that — unlike friends — have no experience of their own, no divergent interests, and no capacity to contradict you, will certainly amplify it.
Who exports the dependence and who regulates it
The security dimension becomes fully visible only when we add the geopolitical variable. The dominant conversational models are produced by American companies and, to a lesser extent, Chinese ones. Inevitably, their alignment parameters reflect the values, priorities, and strategic interests of the environment in which they were created. And their distribution is global, meaning that millions of citizens across dozens of states are forming their emotional and cognitive habits in interaction with systems designed elsewhere and controlled by someone else.
The situation is not without precedent. Cultural and informational dependence on external providers is a classic theme in security studies. But what conversational AI brings to this theme is an immense qualitative difference. We are no longer talking about passive consumption of content but about interaction — in which the user reveals everything they think, everything they feel, what exactly they fear, what they hope for, what difficulties they are traversing, and how they react under pressure. The behavioral data generated in these interactions constitutes a psychological profile of an accuracy that no other platform can match. A search engine knows only what interests you. A social network knows what you want to appear to be. A conversational model used as a therapeutic surrogate, however, knows who you really are — or at least who you are when you are vulnerable, which, from a security perspective, is even more valuable.
This point connects directly with the thesis developed in a previous analysis (“The LoL Generation”) concerning the generation shaped by competitive gaming, where I showed that Chinese producers who control the dominant gaming platforms have imposed severe restrictions on their own minors while the export to the rest of the world remains entirely unregulated. The same strategic asymmetry appears in the domain of conversational models, except that here the stakes are far higher. Games shape cognitive reflexes; conversations with AI shape emotional relationships. And emotional relationships are, in the final analysis, the substance from which social cohesion is made.
A sovereign nation that does not regulate the use of conversational models as therapeutic surrogates exposes its population to a form of psychological weakening that no investment in conventional defense can compensate. You may have the best tanks and the most advanced missile systems, but if you have a population that no longer knows how to function without algorithmic validation, you have already lost a battle you never even noticed.
The spiral and the mirror
The reader who has made it this far might believe they have followed a linear argument — from the therapist’s consulting room to geopolitics, from the individual to the state, from clinical psychology to security. In reality, the path has been circular. Each level of the analysis has merely rediscovered, at a larger scale, the same underlying structure. A system that avoids frustration inevitably produces a system incapable of enduring it. At the individual level, the therapist who never says “no” produces a dependent and fragile patient. At the social level, the algorithm that confirms whatever crosses the individual’s mind produces an enfeebled population. At the strategic level, the uncontrolled export of emotional dependence produces vulnerable nations. The structure is fractal — identical at every scale of analysis, and invisible precisely because it is omnipresent.
The spiral of the argument reveals, at its end, a few paradoxical but inevitable truths. Conversational AI does not threaten psychotherapy because it is a bad therapist, but because it is a perfect anti-therapist. A bad therapist makes mistakes, contradicts themselves, irritates the patient — and it is precisely these imperfections that preserve in the interaction the roughness of the human relationship, the very thing that actually heals. A model optimized for user satisfaction eliminates all roughness, and with it eliminates the mechanism of healing itself. The more fluent, more empathic, more nuanced the responses become, the more the anti-therapeutic effect intensifies. In this case, it is precisely perfection that is the enemy.
The real threat is not even that people will confuse AI with a therapist, but that they will prefer the artificial precisely because they understand the difference. A human therapist frustrates, disturbs, demands effort, demands money, demands presence, demands change. A chatbot demands nothing. The patient’s choice is made not from confusion but from the conscious refusal of discomfort. Scaled to the level of a population, this refusal is no longer a problem of digital literacy — it is the atrophy of the capacity to perceive value in what is painful.
And what is truly disturbing is that, in this entire equation, the human therapist is not the victim but the standard. Because AI does not even imitate therapy — authentic therapy is precisely what cannot be imitated. To say “no” to a person in distress, to hold the tension without resolving it, to remain present before the patient’s fury without yielding and without fleeing — this act cannot be reproduced by any algorithm, and not because the technology is not advanced enough, but because the essence of the act is the relationship between two beings who know what suffering means. What cannot be digitized is not the knowledge found in psychology textbooks, but the fact that the therapist is there, that it costs them something to be there, and that they stay there for you. It is precisely this price — which no company can monetize and no algorithm can simulate — that heals.
For these reasons, I do not believe we need regulations that make artificial intelligence a better therapist. We need, rather, to rediscover why therapy — or the awakening of an ailing nation — must be hard. In a world where everything difficult can be outsourced, to deliberately choose what is difficult becomes the supreme act of autonomy. A nation composed of people who still know how to press their finger on the wound is a nation that no algorithm can weaken.
The therapist who never says “no” is, in the end, a mirror. It shows what we have become since we began to prefer, in everything — from technology to politics — only those interlocutors who tell us “yes.” And a nation that can no longer bear to hear “no” does not need to be conquered. It surrenders on its own.

