
A few days ago, I listened to Russ Roberts’s EconTalk interview of cardiologist Eric Topol on the health issues involved with aging. For some reason, I’m getting increasingly interested in that issue.
An interesting issue comes up at about the 36:00 point. Topol states:
We should be using better nanoparticles and keeping that mRNA from ever having untoward side effects. But, we haven’t. The companies that make these are stuck in the original version. But, we got about a billion people exposed to them.
Later, Russ follows up with this:
When you say companies are stuck with their original versions, is that because of the intellectual property protection that they’re relying on and that it’s expensive therefore for them to start from scratch, and therefore they just don’t have an incentive to innovate? Or is there something else going on?
Russ is onto something: the expense of starting from scratch.
Topol responds:
No, I think part of it is the intellectual property. Part of it is they have now had mass production of hundreds of millions of vaccines and to go to a new process–the point being, is: we’ve known that the nanoparticles can be optimized so they even have better penetration. We have these things called self-amplified vaccines where you give much tinier amounts of mRNA. And that’s approved in Japan. But there’s not even a bit of effort to get that going in the United States. That would help reduce the mRNA side effects.
So, these companies, they did very well during the pandemic and they got things going quickly. That’s great; but they’re not keeping up with the field. And we’re seeing in other parts of the world the innovations that we need.
What Topol doesn’t get into is why there’s progress in Japan that is not being replicated in the United States.
The answer is the Food and Drug Administration. Ever since the 1962 change in law, drug companies that want to introduce a drug into the lucrative U.S. market must show not only safety but also efficacy. The requirement for showing efficacy has added almost a decade to the drug development process.
So the issue is not intellectual property per se. It’s that the process of getting approval is daunting.
That’s why it makes sense, as Dan Klein has argued, for the FDA to automatically approve drugs that have been approved by the FDA’s counterpart in even one country on, say, a list of 15 relatively wealthy countries.
You might argue that that’s too risky. But if you don’t like the risk, wait until the FDA approves it. Other people can take their chances. That’s what’s so great about freedom. We need more of it.

READER COMMENTS
steve
Aug 21 2025 at 10:27am
While I am inclined to agree with it, some context is missing here. The US approves far more drugs (and faster) than other first-world countries. Since you mentioned Japan, the link goes to the disparity between our two countries. The results follow.
“Between 2005 and 2022, 711 drugs were approved in the US or Japan. Among 711 drugs, 633 were approved in the US, of which 280 (44.2%) were not approved in Japan (Figure). Conversely, 78 drugs (18.1%) among 431 Japan-approved drugs were not approved in the US.”
Since we approve more drugs, and faster, than essentially everyone else, it does make me wonder why those relatively few were not approved.
As an aside, how would things work if we didn’t have efficacy studies? How would I, as a physician, know if the drug actually worked? How would I even know what doses to use, or whether there were issues with specific populations?
https://pmc.ncbi.nlm.nih.gov/articles/PMC12152698/
Steve
David Henderson
Aug 21 2025 at 2:00pm
You ask:
“As an aside, how would things work if we didn’t have efficacy studies? How would I, as a physician, know if the drug actually worked?”
Are you saying that you never prescribe a drug for condition A if it hasn’t gone through the FDA-required efficacy tests for that condition? If so, you’re a fairly unusual doctor. A huge percentage of uses of cancer drugs, for example, are off-label. That is, they’ve been approved for condition B but not condition A, yet they’re also used to treat condition A.
steve
Aug 21 2025 at 8:52pm
I used a lot of stuff off label, but almost all of those drugs have had efficacy tests done. They may or may not (usually not) have had an FDA-approved efficacy test, but they have essentially always had published efficacy tests, almost always multiple studies. So if the drug has been approved for condition B but not A, but there are trustworthy studies published for treating A, then I go ahead. A couple of times I have used a brand-new drug in a condition-A-type situation when I considered A to be close to condition B. As an example, when esmolol came out, I used it to treat a patient with a pheochromocytoma whose BP was still a bit hard to control after using the usual drugs. Esmolol was approved for treating HTN but not specifically HTN due to a pheo. I already had a lot of experience with beta blockers, and esmolol is a drug that you titrate to effect, so knowing a dose per se wasn’t so much of an issue, though I had already used it other times, so I had a good idea where to start. It was also rapidly metabolized, so in the unlikely event I wasn’t happy with it, I knew it would go away quickly.
Just a reminder of what happens after a drug is approved…
“Year 1: Median sales reach only 11% of the drug’s peak, with significant variability in early adoption.
Year 2: Sales accelerate, reaching 31% of the peak.
Year 3: Median sales climb to 58%, as the drug establishes its presence in the market.”
Once a drug is approved by the FDA, it doesn’t get heavily used right away, with some exceptions. Most docs are waiting for feedback from the early users, for coverage in the major journals (including editorials), and for the drug to get added to their formulary or approved for payment by the insurance companies, who really want to know if there is evidence the drug works.
Steve
john hare
Aug 22 2025 at 3:51am
This reads to me as though you trust your own skills and judgment over those of the FDA, and that most doctors do also. It seems that FDA approval is more of a legal shield than a true indication of usefulness. Are lawsuits over alleged malpractice and insurance company attitudes the elephant in the room regarding FDA approvals?
Not suggesting that all doctors are qualified to make the judgments, just that the FDA is mostly a legal shield.
steve
Aug 22 2025 at 9:55am
Not really. It’s more that the FDA efficacy tests are just the beginning. We think the drug is safe and works well on problem B (to continue the example above). We know that problem B is very similar to problem A, so the drug is likely to work, and we have a good idea what doses are likely to work. So people try it and publish their results. So for any given drug you have the one or two basic FDA efficacy tests laying the necessary groundwork, then dozens of follow-up studies, often focusing on optimizing dosing regimens and comparing the new drug to other drugs. I just don’t know how to get around not having those initial tests. We just guess? (Most of the work of publishing results on new uses is done by academic centers.)
So it occurs to me that while I want efficacy tests, I would note that it wouldn’t necessarily need to be the FDA; just some group with the same skill sets could do the initial efficacy studies. After that, the medical profession takes over and goes through a bunch of steps to figure out how best to use the new drug and what to use it for. Of note, we often find the new drug is mostly a bust. I would also remember to include the insurance/payer aspect here and the legal aspects. Insurance companies won’t want to pay unless they think the drug works, and nobody wants to use drugs that don’t have the sort of minimal safe-harbor protection of FDA approval.
Steve
David Henderson
Aug 22 2025 at 9:41am
You write:
“They may or may not (usually not) have had an FDA-approved efficacy test, but they have essentially always had published efficacy tests, almost always multiple studies.”
Exactly. That’s the point of my post. The post is about the FDA requirement for showing efficacy. You’re admitting that you don’t need that requirement in order to be willing to prescribe the drug. You’re admitting that you look at other indicators of efficacy.
steve
Aug 22 2025 at 10:08am
I guess I am not being clear. We need the initial FDA tests, or the equivalent, to give us an idea of dosing regimens, how well and how soon the drug will work, and what kinds of side effects occur at what doses. We then expand out to use the drug for similar problems. (We aren’t really taking a new drug for treating, say, diabetes and just randomly giving it to cancer patients.) And it’s not really so much that individual docs everywhere do that (it happens); it’s more that some academic groups will do a study on problem A, then publish their results. Then everyone knows it’s safe and effective to go ahead and use the drug for problem A. As I said above, I just don’t see how you get around having the initial efficacy tests. Would drug companies even want to try to release their drugs without those?
Also, as I noted above, uptake of new drugs isn’t usually very fast. Part of that is because docs are waiting for more studies that refine drug usage and provide comparisons to existing drugs, part because the insurance companies or your formulary need to approve the drug, part because word of mouth is important, part because people don’t keep up with the literature as well as they should, and part because people are often skeptical of new drugs.
Steve
David Henderson
Aug 22 2025 at 11:41am
Response to steve.
He writes above:
“I just don’t know how to get around not having those initial tests. We just guess?”
And:
“As I said above, I just don’t see how you get around having the initial efficacy tests.”
We went 24 years, from the 1938 law that required drug companies to show safety until the 1962 Kefauver-Harris amendments, without the FDA requiring efficacy tests.
So if steve were practicing medicine in, say, 1961, would he have refused to prescribe any drugs?
steve
Aug 22 2025 at 1:13pm
Good question. Prior to Kefauver-Harris, we relied upon a combination of efficacy tests done by the drug company that developed the drug, basically the same as we do now but without the FDA review, and efficacy studies in the medical literature. I can’t find good literature on the rate of uptake of new drugs from the pre-1960 era, but I have sat around having beers and talking it over with older docs. At least among the ones I have known, the sense was that the drug company studies were nice but people really relied upon the follow-up ones done by the academic centers. It was the era that generated the saying that a wise doctor tried new drugs on his patients first, friends second, and family last.
So if your goal is just to get drugs approved faster, then it was probably faster back then. You had drugs released with very minimal animal testing that sort of counted as their efficacy tests, and you had drugs released where the drug companies did a thorough job before release. Some of those drugs killed people, or simply failed once we found out they didn’t work, before they were taken off the market, but most were OK. However, if your goal is to have the drugs get out on the market and actually get used quickly, then I would posit that actual drug uptake and usage will be faster if the FDA or some equivalent group does the needed efficacy tests.
So, in short, we always had efficacy tests; they just weren’t done by the FDA. The initial ones were done by the drug companies, which had their own self-interest, and the follow-up ones by providers. Regardless, we need those baseline efficacy tests. All of which goes back to the claim that the delays in release, and the patients not treated, outweigh the gains of testing. What I have never seen, or don’t remember, is anyone looking at what would happen absent good efficacy testing: time wasted treating people with drugs that are inferior to current treatments because we lack that information.
Lastly (I am writing too much), you ignore the legal atmosphere. Using a drug on someone absent some kind of efficacy study, and having a bad outcome, means you automatically lose a malpractice suit. Malpractice suits were almost nonexistent in the 50s.
Steve
David Henderson
Aug 22 2025 at 1:43pm
Reply to steve.
You write:
“Prior to Kefauver-Harris, we relied upon a combination of efficacy tests done by the drug company that developed the drug, basically the same as we do now but without the FDA review, and efficacy studies in the medical literature.”
That’s my sense too. I’m advocating returning to that.
You write:
“I would posit that actual drug uptake and usage will be faster if the FDA or some equivalent group does the needed efficacy tests.”
Within the United States, there’s not an “equivalent group” to the FDA, because the FDA has life-and-death power over whether drugs are allowed. But if your point is that we should have efficacy tests, I agree, and never disagreed. Those are often reported in medical journals, as you know.
You write:
“So, in short, we always had efficacy tests; they just weren’t done by the FDA.”
And they aren’t done by the FDA now. They’re required by the FDA, which is very different.
You write:
“Using a drug on someone absent some kind of efficacy study, and having a bad outcome, means you automatically lose a malpractice suit.”
Good point. But again, that speaks to the issue of efficacy tests, not to the issue of this post, which is an FDA requirement.
steve
Aug 22 2025 at 5:51pm
The initial tests were always done and reviewed by the drug company that created the drug. There is an obvious conflict of interest, so I don’t want to go back to the 50s model. I would prefer that some independent group evaluate the studies. It wouldn’t have to be the FDA. This is not so much about concerns for safety as about efficacy itself. If you look at drugs from the 50s, lots got approved but few lasted long on the market. We shouldn’t waste time and patient health on drugs that really don’t work well, or any better than existing drugs. We also need to be conscious of costs, a neglected part of this discussion.
As an aside, just as a reminder, we really didn’t start using RCTs until the 50s, and they weren’t the norm until the 60s. Statistical analysis was also pretty lacking, as it was pretty much just p-values or Neyman-Pearson, which I think most of us didn’t understand (I didn’t). Confidence intervals weren’t used much until the 80s, and Bayesian methods not until the late 90s. Drugs were easier, faster, and less costly to test if you didn’t need to do blinded RCTs, and they were more likely to get approved with just p-values to determine significance.
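To make that last point concrete, here is a minimal sketch with made-up numbers (a hypothetical trial of 100 patients per arm, not from any study discussed in this thread). With these counts, a bare p-value clears the conventional p < 0.05 bar, while the 95% confidence interval for the same data shows that the estimated benefit could plausibly be anywhere from trivial to large.

```python
# Illustrative only: hypothetical counts, not from any cited study.
# A p-value-only readout vs. a confidence interval for the same two-arm result.
from math import sqrt
from statistics import NormalDist

# Hypothetical trial: 32/100 responders on the new drug vs. 18/100 on the comparator.
n_new, x_new = 100, 32
n_old, x_old = 100, 18

p_new, p_old = x_new / n_new, x_old / n_old
diff = p_new - p_old  # observed difference in response rates

# Two-proportion z-test with a pooled standard error: the "just a p-value" analysis.
p_pool = (x_new + x_old) / (n_new + n_old)
se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_old))
z = diff / se_pooled
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Wald 95% confidence interval for the difference in response rates.
se_unpooled = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
ci_low, ci_high = diff - 1.96 * se_unpooled, diff + 1.96 * se_unpooled

print(f"difference in response rates: {diff:.2f}")
print(f"p-value: {p_value:.3f}")                # about 0.02: "significant" by convention
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")  # about (0.02, 0.26): trivial to large
```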
Steve
Mark Brophy
Aug 23 2025 at 10:09am
In the 1950s, medical malpractice suits were relatively uncommon compared to today. Doctors were often viewed with immense trust and authority, and patients were less likely to question their judgment or pursue legal action, even when outcomes were poor. The concept of suing a physician was seen as unconventional, partly because the medical profession was held in high regard and partly because the legal framework for malpractice was less developed. Courts typically required clear evidence of gross negligence, and the standards for proving a doctor’s error were stringent. Additionally, fewer people had access to legal resources or the financial means to pursue costly litigation. While cases did exist—often involving egregious errors like surgical mistakes or misdiagnoses—they were sporadic and rarely made headlines. The rise of malpractice suits began to accelerate in the decades that followed, particularly in the 1960s and 1970s, as patient advocacy grew, legal precedents expanded, and societal attitudes shifted toward holding professionals more accountable.