ChatGPT Goes to Med School: How AI is Diagnosing Brain Tumors Alongside Radiologists

In a head-to-head with radiologists, ChatGPT diagnosed brain tumors with 73% accuracy, narrowly outperforming human experts. The AI excelled when fed high-quality reports, showing its potential as a valuable second opinion. While not replacing doctors yet, this study hints at a future where AI could reshape medical diagnostics.


Top Medical News

In a development that feels a little too close to science fiction for comfort, artificial intelligence—specifically, large language models like GPT-4—has started competing with highly trained human experts in the medical field. The latest contestant in this "man versus machine" saga is the realm of radiology, where a correct diagnosis can quite literally mean the difference between life and death. So, in the grand tradition of scientific inquiry (and possibly low-key existential dread), researchers at Osaka Metropolitan University’s Graduate School of Medicine pitted GPT-4-based ChatGPT against a group of actual radiologists to see how it would perform diagnosing brain tumors from MRI reports.

Spoiler alert: ChatGPT held its own.


The AI Challenge: Brain Tumor Diagnosis by ChatGPT vs. Radiologists

The research, led by graduate student Yasuhito Mitsuyama and Associate Professor Daiju Ueda, set out to compare diagnostic accuracy between ChatGPT and a mix of both neuroradiologists (the specialists) and general radiologists (the all-rounders) using 150 preoperative brain tumor MRI reports. These reports, written in Japanese, represent the kind of clinical notes that radiologists deal with every day—rife with jargon, abbreviations, and those terrifyingly cryptic phrases that make patients wish they hadn’t peeked at their charts.

ChatGPT, alongside two board-certified neuroradiologists and three general radiologists, was tasked with providing differential diagnoses and a final diagnosis for each case. The twist? Every case's true diagnosis had been confirmed after surgery, so this wasn't some vague exercise in speculation: it was a real-world test of just how sharp AI has become.


The Results: AI Wins, But Just Barely

Here’s where things get interesting (and possibly a bit unnerving, depending on how you feel about AI eventually taking over the world). ChatGPT’s accuracy rate stood at 73%, narrowly beating out both the neuroradiologists, who averaged 72%, and the general radiologists, who clocked in at 68%. The numbers aren’t exactly landslide territory, but they matter, especially considering the stakes. What’s more, ChatGPT performed notably better when it had neuroradiologist reports to work from, achieving an 80% accuracy rate, compared to just 60% when analyzing reports from general radiologists.
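As a back-of-the-envelope check, these percentages are easy to reproduce. The sketch below uses illustrative correct-diagnosis counts over the 150 cases; the study reports only the percentages, so the raw tallies here are assumptions chosen to match them:

```python
# Hypothetical reconstruction of the accuracy comparison.
# The counts below are illustrative, not the study's actual tallies.
TOTAL_CASES = 150

correct_counts = {
    "ChatGPT (all reports)": 110,   # ~73%
    "Neuroradiologists": 108,       # 72%
    "General radiologists": 102,    # 68%
}

def accuracy(n_correct: int, n_total: int = TOTAL_CASES) -> float:
    """Fraction of final diagnoses matching the surgically confirmed one."""
    return n_correct / n_total

for reader, n in correct_counts.items():
    print(f"{reader}: {accuracy(n):.0%}")
```

The same arithmetic covers the report-quality split: 120 of 150 correct would give the 80% rate on neuroradiologist reports, and 90 of 150 the 60% rate on general radiologists' reports.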

This matters because it suggests something quite remarkable: when AI has a higher-quality starting point (in this case, more detailed and expert-written reports), it can outperform human experts. When the quality drops, though, so does ChatGPT’s ability to diagnose correctly—a reminder that AI is only as good as the data it’s fed.


What This Means for the Future: AI as a Second Opinion?

Before anyone rushes to replace their doctor with a chatbot, the researchers are quick to emphasize that AI isn’t going to take over the radiology department just yet. But these results show clear potential for AI to assist, whether by offering a second opinion or reducing the diagnostic burden on physicians who are often overworked and overwhelmed. Mitsuyama sees AI playing an even bigger role in the future, saying the team aims to extend these studies to other diagnostic fields to further improve accuracy and ease the load on medical professionals.

So, what we’re looking at here isn’t some dystopian future where AI is deciding life-or-death matters on its own, but rather a glimpse of how it might collaborate with human experts. The best part? This collaboration might also provide an unexpected benefit in medical education, allowing AI to help train the next generation of radiologists by offering real-time feedback and insights that even experienced doctors might overlook.


The Bottom Line: Man and Machine, Together in Medicine

If there’s one thing this study makes clear, it’s that AI has the potential to fundamentally change how we diagnose and treat diseases. ChatGPT, as strange as it might sound, is already performing at a level comparable to, and occasionally better than, trained medical professionals. But it’s not about AI replacing radiologists; it’s about creating a system where human expertise and machine precision can work hand-in-hand to improve patient outcomes.

And if that means fewer errors, faster diagnoses, and a little less stress for doctors (and patients), then maybe the future isn’t so dystopian after all.
