
Radiation Experiment Coverage Was Sensationalist

TO THE EDITORS

The Crimson's strident coverage of the "radiation" experiments performed in the '50s and '60s by investigators here and elsewhere strikes me as carping, judgmental and in places sensationalist. I would say the same for the reporting in The New York Times and The Boston Globe.

I write from the vantage point of long experience both as a hematologist--who, while doing routine clinical work, often administers radioactive isotopes to patients for diagnostic purposes--and as a biochemist, who has been using radioactive tracers for in vitro experiments since 1950, when I began as a young investigator in the regional Atomic Energy Commission (AEC) laboratory at the new UCLA Medical School.

The director of that unit, known at UCLA as the Atomic Energy Project, was UCLA's new Dean of Medicine, Stafford L. Warren, a former radiology professor at the University of Rochester--and before that at Harvard. He had also been chief medical officer of the Manhattan Project at Oak Ridge National Laboratory and was deeply involved with medical aspects of the first atomic bomb tests at Los Alamos and Eniwetok Atoll.

Staff Warren, by the way, should not be confused with Harvard's late and distinguished Shields Warren, so prominently mentioned in recent Crimson accounts.

The Project had two missions during the five years I worked there. One was basic radiobiological research into the nature of the effects of radiation on living tissue.

The other concerned operations: project personnel (including me) were active participants in the famous series of atomic bomb tests held at the Nevada Test Site at Camp Mercury over several years in the 1950s.

Those were the years in which most of the experiments we have been reading about were performed here and elsewhere. I would like here to make several points that in my opinion badly need to be made.

The first is simply this: no one in those years really knew very much about the hazards of radioactivity, especially when given in low doses. That contentious subject still preoccupies researchers.

It was in the early 1950s that our Atomic Bomb Casualty Commission, working in Hiroshima, observed an increased incidence of leukemia among individuals exposed to the atomic bomb--a puzzling observation because these cases did not emerge for several years after the detonations. There was also almost no understanding of the phenomenon called fallout.

Indeed, it was the early bomb tests that brought fallout to scientific attention (Project Nutmeg, 1949). Later it was the Camp Mercury tests and their melancholy consequences that brought it to public attention.

So, point one: we were really quite uninformed on the risks of radiation in those years.

Second: no one today would defend the administration of radioactive material to other human beings, sick or healthy, without their knowledge and consent. But the work in question was done 40 years ago--in another era in which the ethics of medical experimentation were perceived rather differently.

At the least, journalists reporting this story today should balance the picture by recalling the occasional horrors of medical experimentation--many far worse--that had nothing to do with "radiation" (that word again).

A litany of the acts we now deem offensive was long ago detailed in books and articles, beginning in the late '50s with the insistent writings of our late Harvard colleague, the Dorr Professor of Research in Anesthesia, Henry K. Beecher.

Statements like his raised our consciousness, albeit tardily, and led almost immediately to the establishment in medical research institutions of the human studies committees that now regulate ethical aspects of human experimentation.

Today all funding agencies--certainly including the NIH, which led the way--insist on the detailed documented findings of such committees before they will even read a grant application in the field of clinical investigation.

I might note, incidentally, that Beecher's 1970 book, Research and the Individual (Little, Brown), hardly mentioned radiation experiments. He had worse fish to fry. Clearly, this is another case in which the ethical sensibilities of an earlier time have been made to look bad in the context of today's standards.

One needn't look far in the pages of The Crimson for other examples of the phenomenon. (See the recent piece headed "How [President A. Lawrence] Lowell enforced the Jewish quota.") I know many of the investigators mentioned in recent Crimson reporting. The ones I know are good people.

When I see regrettable attempts to sully their reputations without setting the context, I think to myself, "There, but for the grace of God, etc."

Third, it was unfortunately true that research ethics in those days allowed, even encouraged, investigators to use unfortunate members of society as guinea pigs. A great deal of medical research was done on the inmates of penitentiaries. Sometimes prisoners had their sentences reduced as a reward.

There was also, I regret to say, a widespread willingness to exploit seriously ill patients, feeble-minded children and all manner of unfortunates without their consent for purposes the investigators deemed worthy--even if you deduct for the inevitable impulse toward self-aggrandizement that accompanies most creative work. No one, in fact, had yet heard of the idea of informed consent, and as far as I can tell everyone thought he was doing something of value in these studies.

Fourth, I have no factual details on what happened to the individuals given "radiation" without their knowledge--actions, I wish to repeat, that I would now deplore. However, on the basis of long experience, I suspect that no harm whatever befell them.

I'm also quite certain that if any of these people later checked in with this or that disability, the traditional dilemma of proving cause and effect would be all but insurmountable in these cases.

As a hematologist, I face that problem every day when a patient turns up with some blood abnormality--say, a low white count--and then is found to have taken drug A or drug B, though nowadays many are down to P or Q--and I am not speaking of recreational drugs.

How does one prove that one drug or the other was the culprit? This is not the place to delve into that knotty problem. I will simply say that there are only two approaches--and one may be unethical. That one is to stop the drug, look for improvement, then start the drug again to see if the badness returns. That, of course, requires the effect to be reversible (it often isn't).

Because a second exposure could be lethal, we do not use the second part of that approach--but without the second part, the first part argues weakly.

The other approach is epidemiological and is much harder to come by, i.e., one seeks evidence that a drug regularly causes a certain effect. I feel reasonably sure that neither of those evidentiary elements is available in the cases at issue, just as they are unavailable in other cases we read about every day--the Agent Orange case, the high-tension electric wire case, the artificial sweetener case, etc. In all, we are left in limbo as we seek to reach firm conclusions--and in the end the final judges of what took place are lawyers, journalists and the putative victims themselves.

None of this in any way justifies what was done long ago. It is nonetheless an aspect worth mentioning. All new drugs have to be tested on human beings, as do all new surgical procedures and other technologies.

We have become sophisticated in protecting the experimental subjects in these studies; more often than not, the loudest criticism we hear today comes from those who say, "Ethics be damned. Skip the testing and give the drug."

My final point is opinion and speculation. I have long felt that one of the factors goading people into bold, even macho, actions with respect to nuclear energy and its scientific investigation was the political climate of the era.

I vividly recall the campaign mounted in Southern California in the '50s by Staff Warren--and his ally, Willard Libby, a new member of the UCLA faculty--stressing the importance of building bomb-shelters.

Believe it or not, this situation had its comic moments. Libby's house was a block from mine in a Bel Air canyon. When the flood rains came, the mudslides simultaneously filled my house and his well-stocked bomb-shelter (food and booze) from floor to ceiling.

Many of us felt that if we didn't have a nuclear war, Staff Warren might feel that his life had been wasted.

People were filled with fear--of nuclear war, of aggressive Communism, of spies, of enemy hordes in Korea, of litigation (there are incredible stories about how Warren and others in the AEC kept secrets in part for this reason), and of the raging politics of McCarthyism.

I am simply suggesting that when solemn wise men like Admiral Lewis Strauss, Staff Warren and General Leslie Groves made decisions in those years, their last concern was the interests of downwinders. In my view that set the tone. There was a deep reluctance to do anything that might look timid or weak.

Something like that, I suspect, may have led the best of our scientists to an unwonted boldness they would later regret.

I am glad that a Faculty committee will be investigating what happened at Harvard in those distant days. I hope it will give appropriate thought to the ethical standards prevailing in those years.

William S. Beck, M.D.
Professor of Medicine
Tutor in Biochemical Sciences
Fellow of Quincy House
