Morality

 

Morality through Time

Is there such a thing as moral progress?

We asked the experts this question at our event Agency, Morals and the Mind on 27 September 2016.

Self-Control

Richard Holton is Professor of Philosophy at the University of Cambridge, working mainly on moral psychology, ethics, philosophy of law and philosophy of language.

Richard Holton started by examining the idea of agency and free will, and the notion that the experience of free will has to do with thinking about what drives us to action. He then focused on willpower – our control over our actions and our ability to constrain ourselves. Throughout the presentation, Holton underlined the importance of balancing resolution and flexibility in ethics.


Mechanisms of dissolving resolutions

Holton suggested that we constantly adapt our preferences to what is currently available. Counterintuitively, thinking more about future rewards increases the rate of adaptation, making us less likely to wait for them: bringing future rewards to mind allows a process of re-evaluation. It is when we do not think about the future option, and instead focus on the act of abstinence, that we are less likely to succumb.

Holton suggested that we need a resolution to follow through, and that resolutions should be strong enough to keep us going while allowing responsiveness in the face of changes in the environment. When President Obama declared “we do not torture”, he was making a normative and not a descriptive claim: he was establishing a rule for the future, not describing a matter of fact. He made a resolution. Once the reasons behind a resolution are reopened for deliberation, the resolution loses its force. When one is on a diet, rehearsing the reasons why a specific food must be avoided leads to finding counter-reasons why that food is actually fine.

 

Resolution and flexibility in ethics

Finally, Holton underlined the importance of balancing resolution and flexibility in ethics. The only way to constrain armies, for example, is through rules. These should be made outside the context of military action, where they can be evaluated and shaped; but in the context of action, these rules are non-negotiable. This final argument bears on the long-standing question of whether reason or sentiment is the source of moral motivation. In the history of philosophy there has been a great debate over whether it comes from reason (Kant) or from sentiments (Hume and the Scottish sentimentalists). Holton claimed the debate is somewhat misguided, arguing for a pluralist view on which both reason and sentiment serve as sources of morality.

Source: Agency, Morals and the Mind Report

Human Sociality

Molly Crockett is Associate Professor of Experimental Psychology at the University of Oxford. Her interests are in the study of human decision-making.

Crockett opened her presentation by citing Adam Smith in ‘The Theory of Moral Sentiments’, claiming that there is something immoral about profiting from others’ harm.[1] She then proceeded to ask how much people value profits resulting from an immoral action, and how we might examine such a question in the lab.


Harming others vs. Harming oneself

In a study from 2014, Crockett and her team examined the willingness of participants to give others painful shocks in exchange for monetary reward.[2] In a series of trials, participants had to choose between two options: more money and more shocks, or less money and fewer shocks. The money was always allocated to the participant, but in two different experimental conditions the shocks could be delivered either to the participant (harm to self) or to another anonymous participant (harm to other). The anonymity of the other participant was crucial in order to exclude effects such as reputation maintenance, reciprocity, or retaliation.

The researchers examined the choices participants made. They hypothesised that choice behaviour was driven by how much pain each option entailed and how much money could be gained from it. Using model fitting, they estimated how much weight each participant placed on these two outcomes, pain and money, when making a choice. A low weight on shocks meant that the participant tended to opt for the option with the higher monetary reward; a high weight on shocks meant that the participant tended to opt for the option with fewer shocks.

The researchers found that the weight assigned to shocks in the ‘harm to other’ condition was higher than in the ‘harm to self’ condition: participants were more willing to hurt themselves for money than to profit from others’ distress. This finding is in line with Adam Smith’s conclusions, and it is not trivial from a rational, economic point of view, which predicts that people will try to maximise their own monetary gain, even at the expense of others.
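The kind of weighting described above can be made concrete with a small computational sketch. The snippet below is an illustration only, not the study’s actual analysis code: it assumes a single harm-aversion parameter (here called kappa) that trades extra money off against extra shocks, a logistic choice rule with a temperature parameter, and made-up trial values, and it fits the parameters to one participant’s choices by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def p_choose_harmful(delta_money, delta_shocks, kappa, temperature):
    """Probability of picking the more-harmful option (more money, more shocks).

    The subjective value difference weights extra money against extra shocks;
    a larger kappa means shocks matter more, so the harmful option is chosen less.
    """
    delta_value = (1 - kappa) * delta_money - kappa * delta_shocks
    return 1.0 / (1.0 + np.exp(-delta_value / temperature))

def neg_log_likelihood(params, delta_money, delta_shocks, chose_harmful):
    kappa, temperature = params
    p = p_choose_harmful(delta_money, delta_shocks, kappa, temperature)
    p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
    return -np.sum(chose_harmful * np.log(p) + (1 - chose_harmful) * np.log(1 - p))

def fit_harm_aversion(delta_money, delta_shocks, chose_harmful):
    """Estimate kappa and temperature for one participant by maximum likelihood."""
    result = minimize(
        neg_log_likelihood,
        x0=[0.5, 1.0],
        args=(delta_money, delta_shocks, chose_harmful),
        bounds=[(0.0, 1.0), (1e-3, 10.0)],
    )
    return result.x

# Illustrative (hypothetical) data: the extra money and extra shocks offered by
# the more-harmful option on each trial, plus the choices a participant made.
delta_money = np.array([2.0, 5.0, 1.0, 4.0, 3.0])
delta_shocks = np.array([4.0, 2.0, 6.0, 3.0, 5.0])
chose_harmful = np.array([0, 1, 0, 1, 0])
kappa, temperature = fit_harm_aversion(delta_money, delta_shocks, chose_harmful)
print(f"estimated harm aversion (kappa) = {kappa:.2f}")
```

Comparing the fitted harm-aversion weight between the ‘harm to self’ and ‘harm to other’ conditions is the kind of comparison the study reports, although the published analysis is more sophisticated than this sketch.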

 

Neural correlates of the value of immoral profits

In a follow-up experiment, Crockett and her team used functional magnetic resonance imaging (fMRI) to examine participants’ brain activity while they performed the task. They showed that immoral profit – gaining monetary rewards while harming others – elicited less activity in the ventral striatum, a brain structure associated with processing reward, than profiting whilst harming oneself. Another brain system associated with pain and conflict processing, including the insula and the anterior cingulate cortex (ACC), was more responsive to the prospect of pain caused to others than to the prospect of pain caused to oneself. These neural mechanisms may mediate the observed behavioural tendency to refrain from profiting from others’ misfortune.

[1] Inbar, Y., Pizarro, D. A., & Cushman, F. (2012). Benefiting from misfortune: When harmless actions are judged to be morally blameworthy. Personality and Social Psychology Bulletin, 38:52–62.

[2] Crockett, M. J., Kurth-Nelson, Z., Siegel, J. Z., Dayan, P., & Dolan, R. J. (2014). Harm to others outweighs harm to self in moral decision making. Proceedings of the National Academy of Sciences, 111:17320–17325.

 

Source: Agency, Morals and the Mind Report

The Heart of Human Sociality

Keith Jensen is Lecturer in Psychology at the University of Manchester. He is interested in the evolution and psychological underpinnings of sociality, specifically regarding other-related concerns in social behaviour.

In his presentation, Keith Jensen examined the complexities of human sociality, with humans displaying “ultra-cooperative” behaviour on the one hand and “hyper-competitive” behaviour on the other. He focused on pro-sociality,[1] the phenomenon of helping others in order to increase their well-being. Does pro-social behaviour truly exist? And what are its evolutionary and developmental origins? Jensen first suggested that empathy – the capacity to understand or feel what another person is experiencing – is not enough for pro-social behaviour.


Development of pro-social behaviour in humans

Jensen went on to examine how pro-social behaviour develops in children. In a number of studies, he showed that children learn to share with others, and later on learn about fairness.[2] They start off by preferring an equal split of the loot, and later learn to differentiate between more and less fair unequal splits, preferring a 30–70 split, for example, over a 10–90 split. Jensen also showed that young toddlers (less than 24 months old) display helping behaviour in a variety of scenarios, and pay more attention to people and agents that display helping behaviour than to non-helpers. Children were also willing to punish someone for misbehaving – for example, for snatching food from someone else.

 

Evidence of pro-social behaviour in primates

Jensen also examined pro-social behaviour in primates, specifically in chimpanzees. While chimpanzees display social behaviour such as grooming, it is not clear whether they display helping behaviour or a preference for fairness in the way young human children do. Jensen surveyed a number of studies that found little evidence for pro-social behaviour in chimpanzees. They were less likely to help others, and in only one case helped an experimenter. They did not display a preference for fair splits, and were willing to accept all splits of the loot in an ultimatum game. They were not willing to punish another chimpanzee for stealing unless they could obtain the stolen goods themselves.

Pro-social behaviour, and human morality more broadly, therefore seem to have emerged late in evolution and to follow a developmental process shaped by society and culture.
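For readers unfamiliar with the ultimatum game mentioned above, the short sketch below illustrates its basic logic. The responder strategies and numbers are hypothetical and are not taken from the studies Jensen discussed; they simply contrast a fairness-sensitive responder with one that, like the chimpanzees described above, accepts any non-zero offer.

```python
def ultimatum_round(total, offer, responder_accepts):
    """One round of the ultimatum game: a proposer offers `offer` out of `total`;
    if the responder accepts, both keep their shares, otherwise both get nothing."""
    if responder_accepts(offer, total):
        return total - offer, offer  # (proposer's share, responder's share)
    return 0, 0

# Hypothetical responder strategies, for illustration only:
# a fairness-sensitive responder rejects offers below 30% of the total,
# while an "accept anything" responder takes any non-zero offer.
fairness_sensitive = lambda offer, total: offer >= 0.3 * total
accept_anything = lambda offer, total: offer > 0

print(ultimatum_round(10, 1, fairness_sensitive))  # (0, 0): unfair offer rejected
print(ultimatum_round(10, 1, accept_anything))     # (9, 1): any split accepted
```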

 

[1] Jensen, K., Vaish, A., & Schmidt, M. F. (2014). The emergence of human prosociality: aligning with others through feelings, concerns, and norms. Frontiers in Psychology, 5:822.

[2] Fehr, E., Bernhard, H., & Rockenbach, B. (2008). Egalitarianism in young children. Nature, 454:1079–1083; Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450:557–559; Warneken, F., & Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science, 311(5765):1301–1303; Wittig, M., Jensen, K., & Tomasello, M. (2013). Five-year-olds understand fair as equal in a mini-ultimatum game. Journal of Experimental Child Psychology, 116(2):324–337.

 

Source: Agency, Morals and the Mind Report

Mind, Society & Control

Steve Fuller has an interest in the history and philosophy of science, and currently holds the Auguste Comte Chair in Social Epistemology at the University of Warwick.

Steve Fuller opened with a video featuring Yale Professor of Physiology Jose Delgado demonstrating the remote control of a bull’s behaviour by stimulating its brain. Starting in the sixties, this line of research caused a stir in public opinion, as it echoed the notion of “brainwashing” introduced during the Cold War. Fuller’s lecture looked at the ‘psychocivilized society’ in the context of current neuroscientifically based claims about the prospects for ‘moral enhancement’ and other adventurous proposals for improving society on a large scale.


Un-constraining the human experience

However, Fuller suggested that the motivation behind the research of Delgado and others is crucial to scientific progress: they wanted to find means to un-constrain the human experience. At the end of his life, Delgado said that we could make considerable progress in enhancing human capacities if it were possible to conduct unrestricted experiments on humans. In Delgado’s early years, others also shared this more liberal attitude towards the brain. They defended the idea that the brain is an open terrain, and were willing to explore its potential using different means, including LSD and other psychoactive drugs. The psychoanalyst Lawrence Kubie[1] claimed that normal life restrains us from expressing much of what we could otherwise express. Michael Polanyi[2] argued that we know more than we can say, but are unable to access this knowledge – are we fully using our potential? T. H. Huxley[3] said that human beings stand apart as a species because they resist natural selection and desire transcendence. These thinkers and others supported the idea that the mind has to be opened up.

 

As these endeavours fell out of favour, today’s neuroscience tries only to understand how the mind works, avoiding grander schemes of experimental exploration into mind enhancement.

Fuller suggested that we should re-evaluate the ways in which research programs are developed, and the scientific questions we allow ourselves to ask. We should not avoid exploring new paths for the human mind, beyond studying the biological mechanisms underlying established behaviour and observations.

[1] Kubie, L. S. (1958). Neurotic Distortion of the Creative Process. University of Kansas Press.

[2] Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.

[3] Huxley, T. H. (1863). Evidence as to Man's Place in Nature. Williams and Norgate.

 

Source: Agency, Morals and the Mind Report

External Resources

A collection of resources on the subject of morality gathered from across the internet.

Agency, Morals and the Mind Report (September 2016)

The sense of agency – the feeling that we are in control of our thoughts and actions – is a central feature of the human mind. How can we define the relation between agency, moral responsibility and the brain? Can cognitive explanations shed light on the subjectivity and voluntariness of action? How can the science of evolution help us understand the nature of ethical constructs, and address the possibility of moral progress? What turns the mere control of bodily movements into conscious acts of morality or immorality?

Questions

Is there such a thing as moral progress?

Morality and Biology?