An article by former NYT science reporter Donald McNeil suggests that the answer is likely yes:

Despite constantly rising biosafety levels, viruses we already know to be lethal, from smallpox to SARS, have repeatedly broken loose by accident.

Most leaks infect or kill just a few people before they are stopped by isolation and/or vaccination. But not all: scientists now believe that the H1N1 seasonal flu that killed thousands every year from 1977 to 2009 was influenza research gone feral. The strain first appeared in eastern Russia in 1977 and its genes were initially identical to a 1950 strain; that could have happened only if it had been in a freezer for 27 years. It also initially behaved as if it had been deliberately attenuated, or weakened. So scientists suspect it was a Russian effort to make a vaccine against a possible return of the 1918 flu. And then, they theorize, the vaccine virus, insufficiently weakened, began spreading.

The paper McNeil links to suggests that the discovery of this accident contributed to the moratorium on gain-of-function research instituted in 2014.  The ban was then lifted in December 2017.  Will recent claims of a Chinese lab leak origin for Covid-19 lead to a new ban on gain-of-function research?

Harvard epidemiologist Marc Lipsitch has warned about the dangers of gain-of-function research for many years.  He recently signed a letter suggesting that the Covid-19 virus might have come from either a lab leak or a natural source.  We don’t know.  But even if there is only a one-in-three chance it came from a lab leak, the expected value of lab-leak deaths over the past year would be more than a million (and probably much more).  So this is a risk that needs to be taken seriously.
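The expected-value point can be made concrete with a quick back-of-the-envelope calculation.  The worldwide death figure below is my own rough assumption for illustration, not a number from the post:

```python
# Back-of-the-envelope expected-value sketch.
# Assumption: roughly 3 million worldwide Covid-19 deaths in the
# pandemic's first year (an illustrative figure, not from the post).
covid_deaths_first_year = 3_000_000  # assumed worldwide toll
p_lab_leak = 1 / 3                   # the hypothetical 1-in-3 probability

expected_lab_leak_deaths = p_lab_leak * covid_deaths_first_year
print(f"{expected_lab_leak_deaths:,.0f}")  # about 1,000,000
```

If the true death toll were higher, the expected value scales linearly, which is why "probably much more" follows.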

It seems to me that scientists who disagree with Lipsitch need to describe their views in very precise terms:

1. Are they saying that gain-of-function research is not in fact dangerous, even if there were a lab leak?  But in that case, why build BSL-4 level labs?

2.  Are they saying that such research is potentially dangerous, but not dangerous in practice because it is done in highly secure labs where accidents will not occur?  But in that case, how do you explain the 1977-2009 H1N1 epidemic?

3.  Are they saying this research is dangerous, but the benefits outweigh the costs?  So why does Lipsitch say the benefits are small?

But Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health in Boston, Massachusetts, says that gain-of-function studies “have done almost nothing to improve our preparedness for pandemics — yet they risked creating an accidental pandemic”. He argues that such experiments should not happen at all. 

I’m genuinely confused on this issue.  Obviously I’m not a scientist, and am not qualified to comment on purely scientific questions.  But I am very interested in public health risks, and would like the scientific community to more clearly explain why they think gain-of-function research does not expose the world to the risk of a pandemic that could kill hundreds of millions of people.  Why are Marc Lipsitch’s fears wrong?

BTW, after my previous post I was taken to task for hyperbolic language about nuclear bomb experiments and Frankenstein monsters.  Thus this part of McNeil’s essay caught my eye:

Like nuclear bomb testing, the need for “gain of function” research is hotly contested.

Proponents argue that it is the only way to stay ahead of epidemics: in a world full of emerging diseases, if you can figure out which pathogens are only a few amino-acid tweaks shy of disaster, you can develop and stockpile vaccines and antibodies against them.

Opponents say that, noble as that goal may be, it is inherently too dangerous to pursue by building Frankensteins and poking them to see how strong they are.

Maybe my analogies were not so far-fetched.