AI in Language Services | A Skeptic’s View

Trade publications in the language services industry rarely miss an opportunity to announce the latest AI this or that. Consultants confidently declare that the old ways of doing things are finished and warn that companies unwilling to “transform immediately” face inevitable irrelevance—preferably after engaging their services.

This narrative deserves skepticism.

Predictions about AI’s impact on language services overwhelmingly come from those building and selling the technology. That alone should give industry participants pause. History shows that technological forecasts often reveal more about incentives and biases than about how markets actually evolve.


The Developer’s Incentive Bias

AI developers have strong incentives to predict dramatic disruption, especially job displacement. For a CEO like Sam Altman of OpenAI, proclaiming that AI will soon replace entire professions functions as marketing. The more powerful and existential the claim, the more valuable the product appears.

No venture-backed company attracts capital by saying, “Our technology will modestly improve productivity in specific use cases.” Valuations are built on visions of transformation, not incrementalism. Against that backdrop, it is worth noting that OpenAI reported an estimated $13.5 billion net loss in the first half of 2025—evidence of just how much capital is riding on future promises rather than present economics.

Alongside incentive bias sits overconfidence bias. Developers see systems perform impressively in controlled environments and assume rapid, universal adoption. They routinely underestimate the real-world friction of regulation, legacy systems, organizational inertia, and cultural resistance. In practice, meaningful technology adoption takes years, often decades.


What the Real World Usually Does

History offers repeated examples of confident automation forecasts failing to materialize.

When automated teller machines were introduced in the 1970s, experts predicted the end of the bank teller. Instead, ATMs lowered the cost of operating branches, allowing banks to open more of them. Between 1980 and 2010, the number of bank tellers in the United States actually increased.

Electronic spreadsheets were expected to eliminate accountants and financial analysts. Instead, they eliminated low-level manual tasks while dramatically expanding the profession. Bookkeeping roles declined, but higher-skill accounting and financial analysis roles multiplied. We didn’t get less accounting—we got more of it, and at higher levels of competence.

The consistent pattern is not replacement, but expansion followed by specialization.


AI, Translation, and the Race to the Bottom

This brings us to a crucial insight articulated succinctly by author and marketer Seth Godin: “The problem with the race to the bottom is that you might win.”

AI undeniably accelerates and cheapens certain translation tasks. But competing solely on speed and price—trying to “beat” AI at being cheaper or faster—is precisely the kind of race Godin warns against. Winning that race produces a hollow victory: commoditized services, eroded margins, and clients trained to value cost over outcome.

AI under controlled conditions can produce high-volume, low-risk, “sometimes good enough” translation. That does not describe the full language services market. Translation is not a binary activity where accuracy alone determines value. It is deeply contextual, asymmetric in risk, and often consequential.

An error in an internal memo is trivial. An error in a legal contract, medical instruction, regulatory filing, or brand-defining marketing message can be catastrophic. AI systems optimize probability, not responsibility. They cannot assess risk, defend intent, or be held accountable.

As AI lowers the cost barrier to translation, organizations translate more content, into more languages, for more markets. That expansion increases—not decreases—exposure to legal, regulatory, reputational, and cultural risk. And risk is where human judgment becomes more valuable, not less.


Market Expansion, Not Market Collapse

The core flaw in AI displacement narratives is the assumption that translation demand is fixed. It isn’t. Historically, demand has been constrained by cost and speed; as AI eases those constraints, it is likely to unleash demand rather than merely redistribute it.

More content creates more touchpoints. More touchpoints create more risk. More risk increases the need for human oversight, domain expertise, and accountability.

This is where language services divide. Low-end, commoditized work becomes automated—and should. That work was already economically fragile. Meanwhile, higher-value human services move up the stack: validation, transcreation, subject-matter expertise, linguistic risk management, and client accountability.

Industries don’t disappear when automation arrives. They polarize.


Conclusion

The human mind is remarkably good at imagining what technology will eliminate and consistently poor at imagining what it will enable. The loudest predictions about the demise of human translation usually come from those focused on technical capability, not market behavior.

AI will not destroy language services. It will expose which parts were already racing to the bottom—and which were never meant to.

Efficiency expands markets. Expanded markets increase complexity. Complexity favors humans.

The future of human translation services is not about competing with AI on price or speed. That is a race worth losing. The real opportunity lies in judgment, accountability, and trust—areas where automation doesn’t replace humans, it makes their value impossible to ignore.
