The Godfather’s Prediction
In 2016, famed AI researcher Geoffrey Hinton made a bold prediction: "People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists," he said.
Since then, his words have taken on a life of their own. They’ve been quoted endlessly in articles, memes, and conference slides as a kind of punchline. And who could resist? It is now 2025, and radiologists have not only failed to vanish but are more in demand than ever.
By that measure, Hinton was wrong. There’s no polite way to phrase it. The Works in Progress article was right to point this out. That said, we shouldn’t let the past decade’s misses blind us to what comes next. The last generation of AI systems may not have been capable of replacing radiologists outright, but what about the new generation built atop large language models?
In this op-ed, I’ll attempt to show you that while Hinton’s prediction fell flat in the past decade, it may yet prove prophetic in the decade to come.
Comforting Narratives
The recent essay in Works in Progress argued that the endurance of human radiologists proves the limits of AI in medicine. It claimed that after years of hype, the technology had failed to deliver, and that radiologists remained as indispensable as ever. It concluded that as technology improves, the workload of radiologists only grows — suggesting that the profession is more secure than ever.
It’s a comforting story, one that flatters human expertise, but it misreads the moment. The last generation of AI models may have struggled to replace radiologists, but this current generation isn’t playing the same game (and, for that matter, neither will the one that follows).
A Gargantuan Opportunity
Before we talk about where radiology is going, it’s worth asking why this is even a problem worth solving.
In 2016 alone, more than 691 million radiologic imaging exams were performed in the United States. Those included:
- 74 million CT procedures
- 275 million conventional radiology procedures
- 8.1 million interventional radiologic procedures
- 13.5 million nuclear medicine procedures
- 320 million dental radiographic examinations
That number has only grown since then. Meanwhile, U.S. Bureau of Labor Statistics data show that there are only roughly 32,000 practicing radiologists in the United States. If you do the math, that averages out to about 11.6 thousand exams analyzed per year, per radiologist (the 320 million dental imaging exams are excluded from the calculation, given that radiologists are not usually required to analyze those). That’s a lot of examinations analyzed annually per radiologist! No wonder they are so well paid (the average annual income for U.S. radiologists is around $526k).
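The per-radiologist figure is just the division spelled out. A quick sketch, using the article's own 2016 numbers:

```python
# Napkin math: imaging exams per U.S. radiologist per year.
# All figures are the article's 2016 numbers; dental exams are excluded
# because radiologists are not usually required to read them.
exams_2016 = {
    "CT": 74e6,
    "conventional radiology": 275e6,
    "interventional": 8.1e6,
    "nuclear medicine": 13.5e6,
    "dental": 320e6,
}

total_exams = sum(exams_2016.values())           # ~690.6 million
non_dental = total_exams - exams_2016["dental"]  # ~370.6 million
radiologists = 32_000                            # BLS estimate cited above

per_radiologist = non_dental / radiologists
print(f"{per_radiologist:,.0f} exams per radiologist per year")  # ~11,581
```

Roughly 11.6 thousand reads a year, or well over 40 per working day.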
On the other side of the equation sit the teleradiology centers — third-party providers that hospitals contract to read and report imaging studies. Their typical pricing can be seen below:

[Figure: Prices charged per imaging study by a typical U.S. teleradiology center]
Even at these rates, demand still far exceeds supply. Many teleradiology providers report turning down large hospital contracts simply because they can’t find enough radiologists to handle the volume. Imagine that: hospitals want to pay for more reads, patients need more scans interpreted, yet the system is capped by human bottlenecks. If the United States suddenly doubled its number of trained radiologists, latent demand would likely absorb them all.
Let’s do some napkin math. Suppose you charged just $10 per analysis—a rock-bottom rate across all study types. The U.S. market alone would generate about $3.7 billion in annual revenue. Add the rest of the world, which performs roughly 2.6 billion additional imaging exams per year (excluding dental imaging), and even at a discounted $3 per read, that’s another $7.8 billion in potential revenue.
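The revenue estimate follows directly from those assumptions (the prices are the article's hypothetical rates, not real quotes):

```python
# Revenue napkin math under the article's assumed prices.
us_reads = 371e6         # non-dental U.S. exams per year
us_price = 10            # dollars per read, the "rock-bottom" rate
world_reads = 2.6e9      # rest-of-world exams per year, excluding dental
world_price = 3          # the discounted international rate

us_revenue = us_reads * us_price            # ~$3.7 billion
world_revenue = world_reads * world_price   # ~$7.8 billion
print(f"US: ${us_revenue / 1e9:.1f}B, rest of world: ${world_revenue / 1e9:.1f}B")
```

Over $10 billion a year before accounting for any induced demand.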
And remember, these estimates assume that demand stays fixed. In reality, if high-quality radiologic analysis became dramatically cheaper and more accessible, utilization would almost certainly explode. More scans would be ordered, more hospitals would run imaging programs, and entire regions currently under-served by radiology would finally enter the market. The ceiling on this opportunity isn’t $10 billion in annual revenue—it’s wherever human health data saturates the planet.
So, the opportunity is big. Now what? Why hasn’t anyone solved it already?
False Start
If the market opportunity is so enormous, why didn’t AI crack radiology years ago? Dozens of startups tried. Companies like Viz.ai, RadAI, and Aidoc raised millions to build deep learning systems that could detect everything from lung nodules to brain bleeds. Hospitals ran pilots, researchers published papers, and for a brief moment it seemed inevitable that radiologists would soon be automated out of existence. Then… nothing happened.
In 2016, at the height of the ImageNet era, Geoffrey Hinton made his now-infamous prediction by extrapolating from the rapid progress of image-recognition models. If neural networks could outperform humans on everyday images, he reasoned, surely radiologists—whose work also revolves around pattern recognition—would be the first to go. And indeed, the early models were impressive: some could spot specific diseases more accurately than trained clinicians. But Hinton overlooked something critical. Identifying a lesion is not the same as explaining it. Someone still had to interpret the finding, weigh it against the patient’s history, and translate it into a coherent clinical report.
By 2017, AI could highlight an anomaly on a scan, but integrating that visual insight with patient context, prior notes, and lab data (most of which live in natural language) was far harder. In the end, radiologists kept their jobs because the models couldn’t reason; they could only recognize.
The unicorns of that first wave all shared the same DNA. Their systems either triaged cases—surfacing urgent scans for faster review—or acted as plug-ins inside the existing viewing software, flagging possible findings for human confirmation. None could write full reports. None could explain uncertainty. Each was trained narrowly, excelling on one condition or body part. One company would have the best lung-cancer detection model, and another would have the best model for pulmonary embolisms. It was a Cambrian explosion of specialized tools, not a general-purpose revolution.
What doomed Hinton’s prediction was that he used past data to extrapolate into the future. Ironically, I think the Works in Progress article makes the exact same mistake. It attributes AI’s failure in radiology to three factors: models that don’t generalize beyond their training data, regulators who block autonomy, and systems that only automate a fraction of the radiologist’s work. All of this was true in 2017, and almost all of it misses what has changed since then; only the point about regulators still holds.
Those limitations weren’t immutable laws of nature; they were artifacts of a primitive technological paradigm. The early systems were narrow, single-task models trained to classify static images, divorced from language, reasoning, and context. They could see, but they couldn’t think. The models emerging now operate on an entirely different plane. They’re multimodal, capable of reading text, interpreting scans, comparing prior results, and generating coherent diagnostic reasoning in natural language.
Second Wave
The Real Job of a Radiologist
The Works in Progress article argues that radiologists will adapt rather than disappear, that even if AI masters image interpretation, doctors will simply shift their time toward other tasks. That may hold true for a small number of academic radiologists who teach, research, or manage departments, but it is not the reality for most of the field.
Most radiologists today work in teleradiology centers, whose sole purpose is to interpret scans and generate reports. These centers are effectively data centers filled with biological GPUs. Hospitals transmit imaging studies and patient context over secure connections; radiologists analyze them and send back completed reports. It would not be unreasonable to assume that the vast majority of imaging exams could be processed this way.
This is the heart of the opportunity and of the inevitability. When a profession’s core workflow is already structured like an API, it is only a matter of time before it becomes one. In such a scenario, one might be tempted to think that the future of radiology is not one of full autonomy, but one where humans are “augmented” by AI, and simply use it to complete their work 10x faster. However, this is the biggest misconception of all.
The Myth of the Human-in-the-Loop
Radiologists are already extremely fast. When you interpret more than ten thousand studies a year, you develop muscle memory for it. They also use templates and macros to generate reports in seconds. Introducing AI as a partial assistant doesn’t accelerate this workflow meaningfully. In fact, it may even slow it down.
When a human writes a report, they rarely second-guess their own output. When an AI writes it, the radiologist must review every sentence carefully. Even if the AI wrote it perfectly, the radiologist would still feel compelled to check the report, because checking it is their job. The same pattern has been observed in software development: when AI generates code, engineers often spend more time validating it than if they had written it themselves. A 2025 METR study found that experienced developers using AI assistance were slowed by roughly twenty percent for exactly this reason. Partial automation breeds cognitive friction.
Beyond that, my friend Adib and his MedRAX project have already made great strides toward showing that today’s AI can produce reports more accurately than human practitioners. The latest results have yet to be published, but we are quite confident that our systems built on top of DeepMind’s Gemini 2.5 Pro, or OpenAI’s GPT-5, will be vastly more accurate than the best human radiologists. Humans in the loop are only needed when the AI is less accurate than they are. Why keep them once that is no longer the case?
The Velocity Limit
This is why radiology will not truly accelerate until it becomes fully autonomous. To borrow Keith Rabois’ analogy of barrels and bullets: there is a theoretical maximum speed at which a human radiologist (a barrel) can produce reports (bullets), and we are already approaching that ceiling. Even if adding AI to a radiologist’s workflow somehow sped it up, it would only lubricate the barrel; it would not multiply the barrels.
To break the limit, we need more barrels, not faster bullets. Fully autonomous radiology systems offer precisely that: infinite barrels, bottlenecked only by compute and inference speed. At that point, radiology stops being a labor market and starts being an infrastructure layer.
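A toy throughput model makes the barrels-versus-bullets point concrete. The per-radiologist figure comes from the earlier napkin math; the 2x assisted speedup is a deliberately generous assumption:

```python
# Toy model: faster bullets vs. more barrels.
radiologists = 32_000
reads_per_radiologist = 11_600            # from the earlier napkin math

human_ceiling = radiologists * reads_per_radiologist  # ~371M reads/year
assisted_ceiling = human_ceiling * 2      # assume a generous 2x AI speedup
# Either way, throughput is still a linear function of headcount.

def autonomous_capacity(servers: int, reads_per_server: float) -> float:
    """Autonomous systems scale with compute: add servers, not residencies."""
    return servers * reads_per_server
```

Assisted radiologists raise the ceiling by a constant factor; autonomous systems remove the headcount term entirely.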
Radiology as Infrastructure
The real promise of AI in radiology is infinite, instantaneous analysis. Imagine a nation’s worth of radiologists condensed into a single data center, a population of algorithms generating millions of reports per hour. Radiology collapses into an API call.
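What might "radiology as an API call" look like? A purely hypothetical sketch; every name here (RadiologyStudy, RadiologyReport, analyze) is invented for illustration, and no real service or schema is implied:

```python
from dataclasses import dataclass, field

@dataclass
class RadiologyStudy:
    """A hypothetical request payload: what a hospital would transmit."""
    study_id: str
    modality: str             # e.g. "CT", "X-ray"
    body_part: str
    patient_history: str      # free-text clinical context
    image_refs: list = field(default_factory=list)  # pointers to image data

@dataclass
class RadiologyReport:
    """A hypothetical response payload: the completed read."""
    study_id: str
    findings: str
    impression: str
    confidence: float         # the model's self-reported certainty

def analyze(study: RadiologyStudy) -> RadiologyReport:
    # Placeholder for a multimodal model call (images + history in,
    # structured report out). Here it just returns a stub.
    return RadiologyReport(
        study_id=study.study_id,
        findings="(model-generated findings would go here)",
        impression="(model-generated impression would go here)",
        confidence=0.0,
    )

report = analyze(RadiologyStudy("s-001", "CT", "chest", "persistent cough"))
```

The point is the shape of the interface: a study goes in, a report comes out, and nothing in between requires a human schedule.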
This is not the “AI-augmented” vision that so many pundits cling to — the “Cursor for radiology” model where humans remain in the loop for comfort’s sake. Cursor for radiology won’t add the barrels we need. It is something more fundamental: the conversion of expert judgment into computable infrastructure. Software engineering may resist this fate a little longer, given the complexity of large codebases, but radiology is simpler. Each “app” in this field is a single report, structured and bounded. It is the perfect substrate for full automation, and perhaps that is why Hinton made his prediction about radiology so many years ago instead of another domain.
The First True AI Radiology Company
The one thing the Works in Progress article got right was the part about regulatory blockades. That alone is the real reason we have not yet seen an AI-native radiology company built in this new era. Existing frameworks were written for a world of human oversight and still assume that a human must sign every report, even when the machine is more accurate. My hope is that this will change, even at the snail’s pace at which regulatory innovation moves. Once clear pathways exist for fully autonomous medical systems, a new generation of companies will appear almost overnight.
And the first of these may not arise in the United States. It could emerge in places such as Turkey or Singapore, where regulation moves faster and healthcare infrastructure scales more easily. My own speculation is that the initial breakthroughs will first begin with automated second opinions (a much lower-stakes way of testing the algorithms), following the model of early platforms like DocPanel. From there, full automation is only an engineering problem away.
The Prophecy Reconsidered
Nine years after Hinton’s prediction, radiologists are still here, and by most measures, thriving. On the surface, it looks like history has made a fool of him. But zoom out, and the story changes. The systems that failed in 2017 were built on narrow models and limited data; they could recognize patterns but not context. The ones emerging today are different. They can reason, recall, and explain.
The irony is that Hinton was wrong in the short term precisely because he was right in the long one. He saw the direction before the tools existed to make it real. Radiology, more than any other specialty, sits on the edge of that realization. Its work is digital, structured, high-volume, and the perfect substrate for full automation.
It won’t happen overnight, but once regulatory frameworks allow it, hospitals may start routing their overflow to the first fully autonomous systems, and insurers will recognize the cost advantage. The machines will simply absorb the work, one report at a time, until the distinction no longer matters.
If that sounds radical, it is only because the future often does. What looks like science fiction today will, in hindsight, seem inevitable. And when that moment comes, perhaps Hinton’s words will finally sound less like a mistake and more like a delayed prophecy.
If you are building this company, please reach out! My email is jesse[at]futurefiles.zip