Humanism After Optimization

The real danger of AI is not that machines will imitate us too well. It is that institutions built around optimization will teach people to become more legible, more reactive, and less capable of governing themselves together. A humanistic technology should leave behind stronger relationships, more accountable institutions, and more room for public judgment after it is used.

By Audrey Tang

Whenever people ask whether AI threatens humanism, I think the question arrives a little late. The deeper threat came earlier, when digital systems learned to sort attention at industrial scale. Long before generative models could write essays, compose songs, or simulate conversation, many people had already been trained to behave like components inside ranking systems: always visible, always reactive, always measurable. By the time synthetic fluency arrived, a quieter transformation was already underway. We were becoming easier to score.

That is why I do not think the central question of this century is whether machines will become more human. The more urgent question is whether humans will still be allowed to remain more than what machines can easily evaluate.

I have seen both possibilities. On one hand, a language model can help me enter a conversation I otherwise could not have had. Before meeting a Japanese thinker whose newest work I could not read in the original, I used AI to build a working vocabulary across our different philosophical traditions. The system did not replace the encounter. It made the encounter possible. Once the conversation began, the model's importance faded. It had done its work well precisely because it no longer needed to be at the center.

On the other hand, we have all experienced systems that do the opposite. They do not deepen understanding. They train people to live at the tempo of the feed, to compress themselves into whatever is most legible to the platform, and to mistake constant reaction for participation. The danger is not that AI will talk like us. The danger is that institutions will reward us for talking like machines.

What is scarce is not content. It is mutual comprehension.

A humane technological future will not be secured by sentimentality, nor by a nostalgic defense of every task that software can now accelerate. It will depend on whether we can design tools, institutions, and norms that protect what is distinctively human in public life: the capacity to interpret one another across difference, to make commitments that bind us over time, to revise ourselves without humiliation, to hold power answerable, and to care for consequences that no benchmark can fully capture.

Not everything worth saving is manual

One common reaction to AI is to defend the human being by defending every human task. If a model can draft a letter, then perhaps dignity lies in writing each sentence unaided. If a model can summarize a book, perhaps authenticity demands that every page be processed directly. If a model can compose music, perhaps the only honorable response is artisanal purity.

I understand this instinct. It comes from a legitimate fear that automation may hollow out meaning. But I do not believe humanism can be reduced to the preservation of manual effort. Many human tasks are not sacred in themselves. They are scaffolding around something more important.

The crucial line is not between assisted and unassisted production. It is between expression and obligation. A model may help me find words, but it cannot stand behind the promise those words contain. A system may help me prepare for a conversation, but it cannot inherit responsibility for what I decide to say. A tool may render my thoughts more clearly across cultural traditions, yet the intention, the commitment, and the accountability must remain human.

This distinction matters because it allows us to welcome technologies that enlarge our senses without surrendering the moral work of presence. Glasses, subtitles, maps, hearing aids, translation tools, and search engines all extend our capacities. They make the world more reachable. We do not become less human because we use them. We become more able to understand and be understood.

The same can be true of AI systems. The best uses of them, in my experience, are not those that perform a life on my behalf, but those that make me more able to meet another community well. A good system can help me listen where I would otherwise hear only noise, discover a concept in a language I do not yet know, or reframe a difficult exchange in ways that preserve dignity on both sides. In those cases, the tool is acting less like a replacement self and more like a temporary prosthesis for mutual intelligibility.

That is a very different ambition from the one implied by much current rhetoric around agentic AI. A proxy that books a room or sorts my calendar may be useful. A proxy that gradually becomes my social presence, my public voice, or my substitute conscience is something else entirely. Once tools stop extending our participation and begin replacing it, they also begin reshaping the standards by which participation itself is judged.

At that point, people adapt to the proxy. They learn to write in machine-friendly ways, to structure work for model visibility, to optimize emotional expression for algorithmic uptake. Human beings do not merely use the system. They start arranging themselves around what the system can process. That is the true danger of dehumanization: human conformity to machine legibility.

From the autonomous self to the relational person

For several centuries, much of humanist thought emphasized the autonomy of the individual. This tradition gave us invaluable protections: rights, liberties, conscience, freedom of expression, the dignity of the person against arbitrary power. We should not give these up.

But the age of AI is revealing, with unusual clarity, something that industrial modernity often obscured. We are not only autonomous individuals. We are also relational beings. We become who we are through webs of care, language, trust, conflict, and recognition. A person is not simply a container of preferences. A person is also a pattern of responsiveness.

This matters because many activities once mistaken for the essence of intelligence are now being automated. Rule following, surface fluency, stylistic imitation, pattern completion, and test performance are increasingly available as utilities. When that happens, the most important human capacities become easier to see, not harder.

A student's future should not depend on outperforming a calculator at calculation or a model at standardized phrasing. A worker's dignity should not depend on typing each administrative sentence by hand. A citizen's value should not be measured by how closely behavior matches the assumptions of a recommendation system. What remains distinctly human is not raw output. It is the style with which we direct attention, enter collaboration, exercise judgment under uncertainty, and care about a shared world.

I often think of these capacities not as private virtues but as relational ones. Curiosity is not merely an internal desire to know. It is a way of approaching another person or problem without reducing it too quickly. Collaboration is not just teamwork in the corporate sense. It is the practiced ability to alter one's own plan in response to other intelligences. Civic care for the public good is not abstract altruism. It is the willingness to expand the circle of consequence beyond immediate gain.

None of these qualities are well described by the language of optimization. In fact, they are often damaged by it. A person obsessed with maximizing a metric becomes less curious, because curiosity requires wandering beyond what the metric rewards. They become less collaborative, because collaboration changes pace and redistributes credit. They become less civic, because public goods rarely yield the cleanest short-term score.

This is why the educational question after AI is not, “What can humans still do better than machines?” That question ages badly. The better question is, “What ways of becoming human should education cultivate when many forms of performance can be automated?” My answer is that education must move away from ranking compliant output and toward cultivating interpretive courage, collaborative range, ethical imagination, and the capacity to participate in revision without losing face.

Democracy, at its best, already teaches some of this. It is the social technology by which a society resists the temptation to solve itself once and for all. It gives people procedures for changing course without civil rupture. In that sense, democracy is not opposed to technology. It is itself one of humanity's most important technologies: a method for keeping collective life correctable.

The problem was never intelligence. It was the feed

When people worry today about synthetic intimacy, bot swarms, and persuasion at scale, they often describe these as if they were entirely new pathologies. They are not. Generative AI intensifies a crisis whose architecture was built earlier.

The central pathology of the last decade online was not the abundance of speech. It was the extraction of attention through ranking. The feed rewarded whatever was most combustible, most performative, most likely to trigger rapid emotional contagion. Anger turned out to be a cheap source of energy. Nuance was too slow. Context traveled badly. Correction almost always arrived after affiliation had hardened.

In such an environment, information becomes less important than tempo. People do not adopt beliefs only because the evidence is compelling. They adopt them because the social machinery around those beliefs moves faster than reflection can follow. By the time a rumor is challenged, a group identity may already have formed around it.

This is where AI could make things much worse. It can generate persuasive content at volume, tailor emotional cues, simulate consensus, and flood every channel with synthetic certainty. If deployed inside the old logic of the feed, it will turn existing polarization into a perpetual engine for schismogenesis.

But AI can also be used to counter that logic. It can annotate before outrage calcifies. It can translate between communities whose moral languages differ even when their practical hopes overlap. It can help surface common questions before conflict is misdescribed as civilizational incompatibility. It can summarize a thread before people perform their identities upon it. It can help a person respond to hostility without having to metabolize every insult first.

I have found this last use surprisingly important. In public life, one often receives messages that are full of contempt, projection, or anxiety. Reading them directly can consume a great deal of emotional energy. A language model, used carefully, can sometimes identify the few phrases within that hostility that still contain a real concern, a misunderstanding worth clarifying, or a grief disguised as attack. Then a human being can answer that smaller, more workable part.

The machine does not reconcile. The machine does not forgive. It does not create friendship. But it can reduce the amount of poison one must ingest before deciding whether a relationship is still salvageable. In that sense, AI can sometimes serve not as an amplifier of conflict but as a membrane that lets meaning through while lowering toxicity.

If we take humanism seriously, this is the direction in which public-interest AI should develop. We need infrastructures of listening, not only infrastructures of generation. We need civic systems that increase interpretive bandwidth, not just expressive throughput. The test should not be how much content a model can produce in a minute. The test should be whether it leaves a public better able to understand disagreement without turning it into enmity.

Truth after the photograph

One reason these questions feel so urgent is that the old relationship between images and truth is breaking down. For more than a century, many societies treated the photograph as a privileged form of evidence. A picture was not infallible, but it was granted a special epistemic status. It seemed to capture reality directly.

Digital culture already weakened that assumption. Generative models now dissolve it. Images, voices, and videos can all be synthesized convincingly. The surface alone is no longer enough.

Many people take this to mean that truth itself is entering a terminal crisis. I do not think so. What is collapsing is not truth, but a specific shortcut to truth. We are being forced to rediscover something older and, in the long run, more robust: that public truth has always depended not only on representation, but also on provenance, procedure, corroboration, and the ability to contest.

Even before deepfakes, a good court did not decide a case by looking at a picture in isolation. It asked who produced it, under what chain of custody, in relation to what testimony, and subject to what challenge. Journalism at its best works similarly. Science certainly does. Democratic accountability does too. The issue is not whether a surface looks real. The issue is whether there is an accountable process through which claims can be examined and revised.

This suggests a more mature digital epistemology. In the coming years, the most important distinction will not be between “natural” and “synthetic” artifacts, as if untouched reality and generated media occupied separate worlds. The important distinction will be between accountable and unaccountable mediation.

Who stands behind a claim? What community, institution, or signer takes responsibility for it? How can others inspect it, contest it, or attach context to it? Can the system register dissent without converting dissent into invisibility?

Truth in a democratic society has never meant a single voice speaking from nowhere. It means that different communities can compare notes, challenge each other, and still sustain procedures that keep correction possible. AI can help with this by tracing sources, identifying inconsistencies, and translating among vocabularies. But it must not be positioned as an oracle above society. The point is not to replace public judgment. The point is to make public judgment more capable.

Humane systems must be interruptible

Another mistake in current AI discourse is the assumption that the most general model, fed the most data and offered as a universal layer, is necessarily the highest form of intelligence. This is a technological version of imperial thinking. It confuses scale with legitimacy.

In practice, many of the decisions that most affect human life depend on context, scope, and situated responsibility. A classroom assistant need not also govern welfare eligibility. A medical triage system should not also decide labor scheduling. A translation aid should not quietly become an instrument for political persuasion. A humane system needs a mandate that people can understand.

This is one reason I am skeptical of visions in which a single assistant becomes the operating layer for work, communication, education, and civic life all at once. Such systems may appear convenient, but they combine too many domains under too little contestability. When a tool is everywhere, refusal becomes difficult. When it mediates too many functions, mistakes become structural. When its authority is diffuse, accountability also becomes diffuse.

Humanistic design begins with boundedness. A system should know where it stops. It should have a named custodian. It should generate records that people can inspect. It should offer a path of appeal. It should be possible to pause, override, or retire it without social collapse.

Interruptibility is not a flaw in democratic technology. It is one of its constitutional virtues.

This principle also has practical consequences for infrastructure. Not every use of AI requires the largest model, the broadest data collection, or continuous dependence on a distant cloud. Some of the most humane systems I use are relatively modest tools tuned to specific needs and deployed close to the people who understand those needs best. Their virtue lies not in grandeur but in fit.

Fit matters ethically, but it also matters ecologically and politically. A narrower model trained for a clear purpose may require less energy, less indiscriminate data accumulation, and less institutional dependency. It may be easier for the affected community to understand, easier to audit, easier to revise, and easier to replace. Restraint in scope is not backwardness. It is often a precondition for agency.

The alternative to universal AI is not fragmentation into isolated silos. It is federation: many systems with shared standards, interoperable protocols, and local accountability. That, after all, is how the internet itself became resilient. Its genius was not that every network dissolved into one giant machine. Its genius was that different networks learned how to communicate without surrendering all local structure.

A humanism for the age of AI should aspire to something similar. We do not need one machine to think for everyone. We need tools that help many different communities think with one another while remaining answerable to the people closest to the consequences.

The right not to be optimized

There is another element of humanism that often goes missing in technical debates: the defense of unoptimized life.

Markets, platforms, and productivity cultures all tend to treat the reduction of friction as an unquestioned good. Faster replies, smoother interfaces, higher engagement, fewer pauses, more personalization. But human flourishing does not always reside on the shortest path. Trust grows at speeds that efficiency metrics often misread as waste. Creativity often begins in boredom or drift. Friendship is not a throughput problem. Sleep is not downtime for a production system.

A society that prizes only optimization will eventually pathologize everything that protects reflective life. It will treat silence as underutilization, ritual as inefficiency, privacy as opacity, and delay as failure. Yet these are precisely the spaces where judgment matures.

For that reason, humanistic technology must defend not only freedom of expression, but freedom from constant excitation. We need interfaces that do not always outshine the world around us. We need norms that do not require total availability. We need systems that allow for rest, for opacity, for stepping away before reaction hardens into identity.

Sometimes the most advanced intervention is not another feature, but less stimulation. Dimmer colors. Slower defaults. Fewer triggers. More room for ordinary reality to recover its texture.

Humor matters here too. So does grace. A brittle public sphere is one in which every mistake must be magnified, every disagreement moralized, every utterance scored for allegiance. Human beings cannot live well in such an atmosphere. We need technologies and institutions that help us lower the emotional voltage, recover proportion, and find forms of response that reopen relation rather than close it.

To be human is not only to reason. It is to improvise, to laugh, to metabolize tension without always escalating it. These are not ornamental qualities. They are civic capacities.

A civic humanism for synthetic minds

The question, then, is not whether AI can be aligned with humanity in the abstract. Humanity is too vague a noun for that to be politically useful. The real question is whether our tools can be aligned with the kinds of relationships, institutions, and public practices within which human beings remain capable of governing themselves together.

If a system makes me more fluent but less accountable, it diminishes me. If it makes an institution faster but harder to question, it diminishes democracy. If it gives me more information while reducing my ability to meet another person except through suspicion, it diminishes public life.

The future will not be saved by proving that a model has feelings, nor by pretending that feeling is the only basis of moral consideration. What matters is whether the sociotechnical environments we build leave room for promise, contest, repair, and shared judgment. Humanism, in this sense, is not the defense of a species essence. It is the design of conditions under which people can continue becoming human with and through one another.

That is why I do not dream of a single perfect intelligence that will resolve our conflicts. Conflict is not a bug that history forgot to fix. Much of what is creative in civilization comes from people who do not begin with the same worldview learning to act together anyway. The task is not to eliminate difference. It is to build forms of mediation that keep difference from hardening into domination.

AI can help with that. It can help us translate, annotate, compare, scaffold, and deliberate. It can help us carry more complexity without demanding premature simplification. It can help us notice that two communities may tell different stories while still wanting compatible futures. It can help us answer the workable question inside an unworkable rant. It can help us make institutions more legible to the people who inhabit them.

But for any of that to happen, we must stop asking only what AI can do, and ask instead what social capacities it should leave stronger after it is used.

A humane tool should leave people more able to understand one another once the tool steps back. It should leave institutions more correctable, not more inscrutable. It should leave communities more capable of making decisions without panic, more able to verify claims without submitting to a single authority, and more willing to preserve the dignity of those with whom they disagree.

After all the benchmarks, all the demos, and all the synthetic performances, the decisive question will remain wonderfully old-fashioned:

Are we becoming better at living together?