
By Patrick Barry
Journalist Ira Glass, who hosts the NPR show “This American Life,” is not a computer scientist. He doesn’t work at Google, Apple or Nvidia. But he does have a great ear for useful phrases, and in 2024 he organized an entire episode around one that might resonate with anyone who feels blindsided by the pace of AI development: “Unprepared for what has already happened.”
Coined by science journalist Alex Steffen, the phrase captures the unsettling feeling that “the experience and expertise you’ve built up” may now be obsolete – or, at least, a lot less valuable than it once was.
Whenever I lead workshops in law firms, government agencies or nonprofit organizations, I hear that same concern. Highly educated, accomplished professionals worry whether there will be a place for them in an economy where generative AI can quickly – and relatively cheaply – complete a growing list of tasks that huge numbers of people currently get paid to do.
Seeing a future that doesn’t include you
In technology reporter Cade Metz’s 2021 book, “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World,” he describes the panic that washed over Chris Brockett, a veteran researcher at Microsoft, when Brockett first encountered an artificial intelligence program that could perform essentially everything he’d spent decades learning to master.
Overcome by the thought that a piece of software had made his entire skill set and knowledge base irrelevant, Brockett was rushed to the hospital because he thought he was having a heart attack.
“My 52-year-old body had one of those moments when I saw a future where I wasn’t involved,” he later told Metz.
In his 2018 book, “Life 3.0: Being Human in the Age of Artificial Intelligence,” MIT physicist Max Tegmark expresses a similar anxiety.
“As technology keeps improving, will the rise of AI eventually eclipse those abilities that provide my current sense of self-worth and value on the job market?”
The answer to that question, unnervingly, can often feel outside of our individual control.
“We’re seeing more AI-related products and advancements in a single day than we saw in a single year a decade ago,” a Silicon Valley product manager told a reporter for Vanity Fair back in 2023. Things have only accelerated since then.
Even Dario Amodei – the co-founder and CEO of Anthropic, the company that created the popular chatbot Claude – has been shaken by the increasing power of AI tools. “I think of all the times when I wrote code,” he said in an interview on the tech podcast “Hard Fork.” “It’s like a part of my identity that I’m good at this. And then I’m like, oh, my god, there’s going to be these (AI) systems that [can perform a lot better than I can].”

The irony that these fears live inside the brain of someone who leads one of the most important AI companies in the world is not lost on Amodei.
“Even as the one who’s building these systems,” he added, “even as one of the ones who benefits most from (them), there’s still something a bit threatening about (them).”
Autor and agency
Yet as the labor economist David Autor has argued, we all have more agency over the future than we might think.
In 2024, Autor was interviewed by Bloomberg News soon after publishing a research paper titled “Applying AI to Rebuild Middle-Class Jobs.” The paper explores the idea that AI, if managed well, might be able to help a larger set of people perform the kind of higher-value – and higher-paying – “decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators.”
This shift, Autor suggests, “would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and – akin to what the Industrial Revolution did for consumer goods – lower the cost of key services such as healthcare, education and legal expertise.”
It’s an interesting, hopeful argument, and Autor, who has spent decades studying the effects of automation and computerization on the workforce, has the intellectual heft to explain it without coming across as Pollyannish.
But what I found most heartening about the interview was Autor’s response to a question about a strain of “AI doomerism” which holds that widespread economic displacement is inevitable and there’s nothing we can do to stop it.
“The future should not be treated as a forecasting or prediction exercise,” he said. “It should be treated as a design problem – because the future is not (something) where we just wait and see what happens. … We have enormous control over the future in which we live, and [the quality of that future] depends on the investments and structures that we create today.”
At the starting line
I try to emphasize Autor’s point about the future being more of a “design problem” than a “prediction exercise” in all the AI courses and workshops I teach to law students and lawyers, many of whom fret over their own job prospects.
The nice thing about the current AI moment, I tell them, is that there is still time for deliberate action. Although the first scientific paper on neural networks was published all the way back in 1943, we’re still very much in the early stages of so-called “generative AI.”
No student or employee is hopelessly behind. Nor is anyone commandingly ahead.
Instead, each of us is in an enviable spot: right at the starting line.
Patrick Barry is Clinical Assistant Professor of Law and Director of Digital Academic Initiatives at the University of Michigan.
JimboXYZ says
I think we’ve reached that long before AI came along. Look at Congress ? I think we can replace a lot of those legislators with AI & common sense ? I think from the articles that FlaglerLive presents alone, just from the comments for differing viewpoints, AI would be a big improvement over the overpaid legislators at all levels of Government that would be better served being replaced with AI & common sense.
As a society, are those that rely that heavily on AI going to be held accountable for the flaws & failures of AI ? Take self driving vehicles for instance ? Manufacturers aren’t responsible for accidents or even DUI/DWI’s, yet individual motorists are still held accountable & responsible when they are merely nothing more than passengers in a self driving vehicle. AI won’t be able to resolve that imperfect world of when something goes sideways/wrong ?
JW says
It is all a question of education or rather a lack thereof (particularly that boring history)
People have never learned to THINK critically. It is all about our “feel good” society.
Many of us have become lazy and now we are punished for it!
It started well before AI: the smart phone and the internet were precursors but we did not THINK.
Did we not see with our own eyes that the inventors were just interested in their own profit and NOT in what it could do to society? Do we really NOT understand (unregulated) CAPITALISM?
Religions have talked about it (the Apocalypse).
Nobody thought about it as being self-inflicted.
Now what?
BillC says
All the focus seems to be on economic considerations and outcomes. Consider this:
Anthropic has refused a Pentagon ultimatum demanding unrestricted access to its AI model, Claude, by the Friday, February 27, 2026, 5:01 p.m. ET deadline. The company stated it “cannot in good conscience” comply with the demand to remove safeguards on its technology.
Pentagon demands included allowing the AI to be used in classified settings without constraints, despite Anthropic’s requests for assurances that Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The company rejected the final contract language, calling it a compromise undermined by legalese that could let the military bypass safeguards.