We must be science’s masters, not at its mercy

A commentary by John Harris, appearing in The Guardian, 9 February 2012

Recent advances in neuroscience, such as memory manipulation, create compelling ethical dilemmas

This week it was reported that soldiers could potentially, in the near future, have their minds plugged directly into weapons systems, and have their learning boosted by neural stimulation. The Royal Society’s Brain Waves project on new directions in neuroscience gives us much to reflect on and worry about. And it follows the news last week that scientists are developing a “mind-reading” technique to capture thoughts.

Research in all this is in its infancy but, though new understandings of how the brain works generate new treatments for disease and brain damage, they also expose us to many new dangers. The challenge is always to use judgment and, if necessary, force to maximise good and minimise evil. We should be clear, however, that there is no cost-free precautionary approach; therapy delayed is rescue denied. As in all other areas of human activity, choice is not an option but a destiny. How should we choose?

The Royal Society report spoke of brain-machine interfaces (BMIs) to connect people’s brains directly to machinery. These interfaces are already being used to control artificial limbs for amputees, but they would also be efficient in improving speed and accuracy in delivering weapons systems. Rod Flower, chair of the report’s working group, rightly asks: “If you are controlling a drone and you shoot the wrong target or bomb a wedding party, who is responsible for that action? Is it you or the BMI?”

While this is a nice puzzle, the alternative without BMIs might be a greater likelihood that the wrong target will be chosen or hit. If we ban military BMIs, who is responsible for that?

The bigger question, though, is how to reduce the incidence of events where people suffer and others need to be called to account. Think of smart drugs that improve thought. Modafinil, a drug that keeps pilots alert, can indeed aid military pilots – but it also protects civilian passengers. The same drug also enhances other cognitive functioning, including exam performance.

We humans need to be smarter in order to combat a monstrous regiment of dangers that include climate change, meteorite strikes, diseases such as Aids and CJD, and an over-precautionary approach to innovation which may increase, rather than reduce, our vulnerability to these and other dangers. The dilemma is: whither caution? The ability to choose between caution and adventure assumes we can predict accurately – something we humans have been lamentably bad at.

In future, we’re also likely to face an ethical dilemma over memory manipulation. This is now a distinct possibility because drugs are available that can wipe, or certainly dampen, our recollection of events. Why should we tamper with our access to history? Well, one good reason is that memories can be traumatic. The victim of, for example, a brutal rape might well wish to wipe the memory. But what if so doing removes the capacity to identify the perpetrator, and leaves him free to ruin others’ lives?

The neurotransmitter serotonin and the molecule oxytocin are hailed as agents which, by increasing reluctance to cause suffering on the one hand and trust on the other, can bring about an improvement in morals. Adjusting the levels of these chemicals in the body will effect changes which bypass decision-making and make certain behaviour, for all practical purposes, automatic. Why should we worry about bypassing morally defective decision-making? One reason is that it takes away our freedom.

Without the ability to reason about our decisions and to act on the basis of judgment – rather than prompted by impulse, or by chemical, biological or technological stimulus – we not only lack liberty, the ability to choose; we lack the ability to choose wisely and well, to choose the best “all things considered”.

If we can read minds we might be able literally to see what someone has done and whether they did it on purpose. This would make solving crimes in principle simple and reliable. The problem here will be whether the science can reliably distinguish thoughts that describe fantasies or imaginings from those that describe deeds actually done.

The idea that neuroscience might enable thoughts to be read and intentions revealed is perhaps the most threatening of all to civil liberties. If we know someone intends to commit a murder or a robbery, why not monitor their thoughts and act pre-emptively? Apart from the obvious difference in quality between a wish or intention and an actual attempt, the reason might be that most of us form intentions that we abandon and wishes we never fulfil.

The price of liberty may be eternal vigilance but we need science, not least because it is our most obvious source of the sort of innovation that saves lives and produces welfare. Our vigilance must be as much to ensure we don’t stifle science as it is to be sure science remains our servant not our master.


Doctors Refusing to See Vaccine-Refusing Patients (U.S.A.)


This article in today’s Bioedge email caught my attention. It raises interesting issues about the role of doctors and their relationship with their patients. In this case patients who refuse to have their children vaccinated are asked (made) to leave a practice.

Should doctors be able to do this, or should they provide their service anyway? Some raised concerns about unvaccinated children infecting children who are waiting to receive vaccinations, i.e. those under 2 years old. Some seem to believe that they will be unable to establish an effective relationship with the patient if they cannot agree on something as ‘basic’ as vaccination.

Are these the ‘real’ reasons? Is this a case of doctor knows best, or of scientific evidence overruling individual viewpoints? There could be an argument that even if parents are opposed to vaccination their children should be vaccinated regardless, because partial vaccination programs do not eliminate the specific disease or infection, and their refusal would undermine the program and put others at risk.

Should we accept the refusal of some as a legitimate exercise of choice? How does this affect the use of scarce resources, given that prevention allows those resources to be distributed more effectively?

Your thoughts on this matter?

Sam Walker (PhD Student)

We have an organ donation crisis, so pay people to give

A commentary by John Harris, appearing in The Times, 31 January 2012

‘Typically, each day three people who are waiting for a donor organ die … each potential organ donor overlooked by the system represents a personal tragedy,’ The Times reported yesterday. The British response to what amounts to a pandemic of lives lost for want of donor organs is shamefully inadequate.

Given our failure to solve this problem, we must think much more radically. We know that live donations are much more successful than transplants from the dead, but how do we increase the supply of healthy adults willing to donate a kidney? The obvious solution is to give donors an incentive — we should pay them.

There is a lot of hypocrisy about the ethics of buying and selling organs. We all believe in altruism — but that is a luxury when relying on self-sacrifice costs lives. And what altruism usually means is that everyone is paid but the donor; the surgeons and medical team are paid for their work, and the recipient receives an important benefit in kind. Only the heroic donor is supposed to put up with the insult of no reward.

Here is how a strictly regulated and ethical market in live donor organs and tissue might work. It would be confined to the UK; only citizens resident here could sell into the system and only citizens would be eligible to receive organs. This would stop any exploitation of desperate people from poor countries. There would be only one purchaser, such as the NHS, that would buy all live donated organs and distribute them according to medical priority. Direct sales or purchases of organs would remain banned.

Those who sold a kidney, for example, would benefit in three ways. They would know that they saved a life or liberated someone from dialysis and the fear of death; they would benefit themselves and others by helping to remove, or substantially reduce, the risk of death from organ failure; and they would be rewarded financially.

Prices would have to be high enough to attract sellers into the market, but dialysis and other alternative care do not come cheap. There is no doubt that a price could be fixed that would save both lives and the NHS money.

Of course, by bringing cash into the equation, people might be doing something that they might not do if it weren’t for the money. But that is not coercion — and almost all of us who work for a living do that every day.

Giving up an organ is a big thing to do, but good people often wish to do big things for others. The choice must be free, but those who did choose to sell an organ would be doing something truly wonderful.

By John Harris, Professor of Bioethics and Director of the Institute for Science, Ethics and Innovation at The University of Manchester

Ethics and Engagement

Today, all sorts of people are supposed to know about science. An article on the BBC this morning, ‘Children’s science questions “stump many parents”’, documented how many parents were “embarrassed” by their “failure” to answer questions like ‘how much does the earth weigh?’. Why, exactly, parents should know this rather esoteric fact is not, however, discussed.

So, do people need to know about science? Certainly, that seems to be the view of some kinds of public engagement programmes, which are often explicitly orientated towards educating ‘the public’. Social scientists have picked holes in this kind of enterprise, showing for a start that there are various kinds of publics, and that many of these groups are extremely knowledgeable about particular realms of science – especially those which have direct import for their lives.

These critiques are important, and are a useful reminder that just because someone isn’t a scientist doesn’t mean they’re ignorant! At the same time, though, we need to be careful not to throw the baby out with the bathwater. In the same way that not everybody is familiar with the minutiae of the worlds of finance, plumbing, law, or social care, we can’t expect all people who have a stake in science – and that really is all people – to be knowledgeable about every aspect of it, and public engagement programmes can be a useful way of heightening awareness and encouraging dialogue and debate.

In particular, we might want to think about the potential for public engagement with research, rather than science per se. By that, I mean the processes by which facts are generated – as opposed to just the facts themselves (such as the weight of the earth). Why bother doing this? Simply put, if people don’t have much awareness of how research is actually undertaken, and what the point of it is, they can’t consent properly to taking part in it. Now, there are lots of problems with the idea that ‘informed consent’ is the be all and end all of research ethics. Still, though, it’s an important concept that structures research governance in all kinds of ways.

What are some of the issues with people unfamiliar with the processes of research taking part in it? For a start, there’s what bioethicists call the ‘therapeutic misconception’ – the belief of research participants that the investigation they’re involved in will be of direct therapeutic benefit to them (or to others). Linked to this is a belief that research may alert participants to health problems that aren’t clinically obvious yet – and that the study investigators will be able to provide (or point them towards) the care they need. Conversely, there are also concerns around trust: some people don’t take part in research because they are worried that information about them will be made available to third parties.

There are, then, ethical issues attached to public (mis)understandings about research; as such, there is an onus on bioethicists, scientific researchers and educators to enhance everyone’s knowledge of how research is actually undertaken, what its realistic implications are, and what its limitations might be. In so doing, participants will be better able to judge whether they would like to participate in research, and will be more informed about the studies they’re involved in. This might reduce the burden on scientists who have to deal with tricky questions about what to do about people under the therapeutic misconception once they’ve been participating in a study for a while, and also may in fact increase public trust in science and involvement in research.




By Dr Martyn Pickersgill, University of Edinburgh (iSEI Visitor)