Moral Tribes (Joshua Greene)


Joshua Greene and the Tragedy of Common Sense Morality

“Follow your heart” is a phrase one often hears when seeking advice in a difficult situation, but what exactly does it mean? One translation of the metaphor would be “do what your emotions tell you to do”, which amounts to favouring feelings over rational thought when making a decision. These opposing ways of thinking can be modelled as two different decision-making systems in the human brain: emotional, automatic thinking is represented by System 1, while deliberate, more logical “manual” thinking is represented by System 2. The two systems also differ in speed, with System 1 being considerably faster than its counterpart [1]. This so-called dual-process theory is the subject of Daniel Kahneman’s book “Thinking, Fast and Slow”, which Joshua Greene often uses as a reference in his own book. So in which situations should you use System 1, i.e. “follow your heart”, instead of “using your mind” and deciding with System 2? And why does the human brain work in two different modes at all? The answer to the first question depends on the situation: if swift action is required, one is generally better off making a quick, intuitive decision rather than calculating different possibilities and thereby missing the window of opportunity altogether. That sounds reasonable, but does it settle the question of when to choose the emotional response over the logical one? Not in the slightest: while the quick solution provided by System 1 may be the best option in situations constrained by time, it remains unclear how to decide when time is not a constraint.

As noted in the first paragraph, there is no universally better mode of thinking; which one to use depends on the context. One important context is how an individual should make decisions as part of a group, a question that matters to every human as a social being. This situation is known as the “Tragedy of the Commons” (ToC) [2] or, as Greene coins it in his book, the “Me vs. Us” problem. Here each member of a group is required to put their own individual interests behind the collective ones in order to reap the larger benefits of cooperation. But putting one’s own interests behind those of the group is not as easy as it sounds, especially when larger individual benefits are available as an alternative, as countless thought experiments such as the Prisoner’s Dilemma demonstrate [3]. This tension between individualism and collectivism has been, and still is, a great source of conflict in modern history. Greene, however, describes a solution to this problem in his book: morality, which he defines as “a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation” [p. 53]. These adaptations, such as tribalism, empathy, and loyalty, have proven advantageous in Darwinian competition because they solve the ToC and therefore increase survivability. Morality greatly influences decision making because it manifests itself as intuitions and emotions, which, as we have seen, is how System 1 makes decisions: if someone had to choose between helping a friend and helping a stranger, they would feel inclined to help their friend.

There is, however, one important caveat. While morality fixes the ToC, it creates a new and arguably even bigger problem: the “Tragedy of Common Sense Morality” (ToCM), as Greene calls it, arises because morality is only fit to facilitate cooperation within a group, and by doing so it undermines cooperation between groups.
Tribalism, for example, one of these moral tools, enables cooperation between the members of a group and thereby increases the group’s survivability. But if two tribes with different, incompatible moral values encounter each other, this previously helpful tribalism makes it impossible for them to cooperate. This problem of “moral inflexibility” [p. 172] is what the modern world is wrestling with; it is the reason why countries with different moral ideals fail to cooperate.
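To make the structure of the Prisoner’s Dilemma [3] mentioned above concrete, here is a minimal sketch; the payoff numbers are illustrative assumptions chosen to satisfy the standard ordering (temptation > reward > punishment > sucker), not values taken from any of the cited sources:

```python
# Illustrative Prisoner's Dilemma payoffs (benefit to player 1, player 2).
# The numbers are assumptions satisfying the standard ordering T > R > P > S.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def best_response(opponent_move):
    """Return the move maximising one's own payoff against a fixed opponent."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the best response to either opponent move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation leaves both players better off than mutual defection.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

The dilemma is visible in the two assertions: each individual does better by defecting no matter what the other does, yet both would prefer the outcome where neither defects, which is exactly the “Me vs. Us” tension morality evolved to resolve.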

Greene suggests that fixing this unwanted consequence requires a so-called meta-morality: a framework implemented by every tribe, which then provides a common ground and common currency for cooperation. How should it be designed? Greene’s argument goes as follows:

  1. Moral truths either don’t exist or can’t be accessed [p. 188]

  2. Therefore, the meta-morality shouldn’t be based on moral truths

  3. The meta-morality must provide a common ground [p. 189]

  4. Therefore, it should be based on shared values

  5. System 1 is over-/under-sensitive at tracking moral truths, thus unreliable [p. 212]

  6. Therefore, the meta-morality should be based on manual mode thinking

  7. Everyone understands Utilitarianism [p. 194]

  8. System 2 is predisposed towards utilitarian thinking [p. 198]

  9. Therefore, the meta-morality should be based on manual, utilitarian thinking

Greene calls this meta-morality deep pragmatism [p. 16], which is essentially Utilitarianism in a new dress. The main objective of Utilitarianism is to maximise happiness and minimise suffering. But it does not come without flaws, one of which was pointed out by H. J. McCloskey (1957): a person stands trial for a crime they did not commit. An angry crowd would riot, and thereby greatly reduce happiness, unless someone is punished; a judge deciding in utilitarian fashion should therefore convict the innocent person, because that outcome produces the least total suffering. Greene suggests two strategies in response to cases like this, where our moral intuition tells us the utilitarian decision is wrong: accommodation and reform. Accommodation means showing that “maximising happiness does not, in fact, have the apparently absurd implications that it seems to have” [p. 211]. In the case of the innocent man standing trial, however, the injustice is hard to hide. Reform, the second strategy, holds that the utilitarian decision is actually right and our moral intuition is flawed, so the intuition should be reformed to match it. Yet neither strategy fixes the potential injustice; the second even suggests discarding our moral intuition of justice altogether. The utilitarian John Stuart Mill does not try to hide this repellent implication; in fact, he endorses it: “though particular cases may occur in which some other social duty is so important, as to overrule any one of the general maxims of justice.” According to him, it is not just allowable but a duty to break the rules of justice in cases where the stakes are high enough [4]. But should a meta-morality, shared by every human on earth, really be unjust?

Furthermore, Utilitarianism is not just unjust; it is also slow and imprecise. Calculating the amount of happiness that results from an action is a challenge in itself. One attempt at solving this problem was made by Jeremy Bentham [6], who formulated the felicific calculus, an algorithm for estimating the happiness or pain produced by an action. But no matter how sophisticated such an algorithm becomes, it will never be fully accurate, because the future is always uncertain. Speed may not sound as hard a problem as predicting the future, but it is a very real one. Imagine the following: deep pragmatism has been accepted by every country as the meta-morality. One day, however, Russia’s nuclear early-warning system detects multiple missiles heading towards the country. An officer must decide what to do next, but because the missiles are almost there, he only has a few seconds to do so. The point is that there will be situations between tribes in which a decision has to be made very quickly, and calculating the resulting amounts of happiness for each possible action will not be possible.
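Bentham’s felicific calculus scores an action along dimensions such as the intensity, duration, and certainty of the pleasures and pains it produces [6]. The following toy sketch is my own simplification of that idea, applied to the innocent-man case from above with entirely hypothetical numbers; it shows both how such a calculation works and why its inputs are guesses:

```python
# Toy felicific-calculus-style estimate, reducing Bentham's dimensions [6]
# to (intensity, duration, certainty) per affected person. The weighting
# (a simple product) and all numbers below are illustrative assumptions,
# not Bentham's exact procedure or figures from the book.
def hedonic_value(intensity, duration, certainty):
    """Expected pleasure (positive) or pain (negative) for one person."""
    return intensity * duration * certainty

def estimate_action(affected):
    """Total expected hedonic value over everyone the action affects."""
    return sum(hedonic_value(*person) for person in affected)

# Convicting the innocent man: his intense, long, certain suffering,
# plus mild relief for each of 100 onlookers.
convict = [(-10, 20, 1.0)] + [(1, 1, 0.8)] * 100
# Acquitting him: a likely riot harming each of the 100 onlookers.
acquit = [(-2, 2, 0.9) for _ in range(100)]

# On these hypothetical numbers the calculus favours conviction, which is
# exactly McCloskey's worry; note that every certainty value above is a
# guess about an uncertain future.
assert estimate_action(convict) > estimate_action(acquit)
```

Both flaws discussed above are visible here: the result hinges entirely on invented certainty estimates (imprecision), and enumerating everyone affected by an action is not something an officer with seconds to decide could ever do (speed).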

Earlier in the book, when discussing whether science can deliver moral truth, Greene argued that just because something evolved to do something, that does not imply it is right [p. 186]. Yet, in claim 8, he argues that Utilitarianism should be the moral philosophy of System 2 because System 2 is predisposed towards it. These two arguments are incoherent: either evolution implies that something is right, or it does not. Without claim 8, his case for Utilitarianism as the best morality for manual thinking rests solely on claim 7, the fact that everyone understands it, which is a shaky foundation: there may well be people who do not “get” Utilitarianism, and there are other ethical theories that many people can understand just as well. Debates about the best ethical theory for cross-group cooperation are not new, and I do not think Utilitarianism is as easy a choice as Greene makes it out to be.

Finally, there is one more inconsistency, which may well open the door to ways of fixing the ToCM other than the System 2 approach. As Wielenberg points out in his review [5], Greene initially claims that moral truths either do not exist or cannot be accessed (claim 1). Later in the book, however, he describes our automatic moral intuitions as under- or over-sensitive at tracking moral truths (claim 5), and concludes that manual mode thinking, being more reliable, should be the basis of the much-needed meta-morality. The question that arises is how our intuitions can be unreliable at tracking something that cannot be accessed in the first place. How could one determine under- or over-sensitivity if the object to be tracked is inaccessible? In other words, System 1 cannot fail to detect moral truths if moral truths cannot be identified at all. What implications does this have for Greene’s argument? One option would be to accept that moral truths can be tracked (dropping claim 1), which would open the door to a whole host of approaches based on moral truth, for example the religious, mathematical, and scientific ones he mentions himself [chapter 7]. The other option would be to insist on moral relativism, thus making auto mode thinking a possible alternative to the manual mode approach (dropping claims 5 and 6). Basing the meta-morality on System 1 thinking would also satisfy Greene’s requirement of providing common ground; one could argue it does so even better than deep pragmatism.

In his book, Greene compares the two ways of thinking to a dual-mode camera, which also has an automatic and a manual mode: the automatic mode’s strengths are efficiency and speed, while the manual mode is more flexible and can therefore produce better results when time is not a constraint. Depending on the situation, the photographer should switch between modes to get the best possible result [p. 133]. But what if someone modified this dual-mode camera and combined both modes into one, which is automatic by default but can also be tweaked like the manual mode? Instead of having to choose between two modes, the camera would provide the best of both in a single one. Combining the modes would be like using both hands to do something precise: one hand gently guides the other, providing stability and precision, while the guided hand retains complete freedom and can move as quickly as it wants. Metaphors aside, what I am trying to illustrate is that using System 1 to make a general judgement of a situation and then adjusting it with System 2 allows for both flexibility and efficiency. This would also fix the problem, mentioned earlier, of Utilitarianism being slow, because most of the “camera settings” would already be close to their optimal levels thanks to the auto mode. Furthermore, the imprecision of utilitarian judgements would be minimised, because the manual adjustments would only be incremental. In the camera metaphor, it would be like fine-tuning a knob instead of trying to turn it to the right level on the first attempt. This, however, is merely a suggestion, and further thinking will have to go into it.
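The “automatic first, then fine-tune” idea can be sketched as a warm-started search: a fast heuristic supplies a starting setting, and a few small deliberate corrections refine it. Everything here (the exposure heuristic, the error function, the step size) is a hypothetical illustration of the scheme, not anything from Greene’s book:

```python
def auto_mode(scene_brightness):
    """Fast, coarse heuristic (System 1): a fixed rule of thumb."""
    return 1.0 / max(scene_brightness, 0.1)

def fine_tune(setting, error_of, steps=5, step_size=0.1):
    """Slow, deliberate adjustment (System 2): nudge the auto-mode
    setting toward lower error instead of recomputing from scratch."""
    for _ in range(steps):
        if error_of(setting + step_size) < error_of(setting):
            setting += step_size
        elif error_of(setting - step_size) < error_of(setting):
            setting -= step_size
    return setting

# The heuristic lands near the ideal exposure, and a handful of
# incremental corrections close the remaining gap.
ideal = 0.8
error = lambda s: (s - ideal) ** 2
start = auto_mode(2)            # heuristic guess: 0.5
tuned = fine_tune(start, error)
assert abs(tuned - ideal) < 0.05
```

The division of labour mirrors the proposal above: the heuristic is cheap and usually close, so the deliberate phase only needs a few incremental steps rather than an exhaustive calculation from scratch.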


[1] Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. ISBN 9780374275631.

[2] Hardin, G. (1968). The tragedy of the commons. Science 162: 1243–1248.

[3] Kuhn, S. (2019). “Prisoner’s Dilemma”. The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), Edward N. Zalta (ed.).

[4] Mill, J. S. (2002). The Basic Writings of John Stuart Mill: On Liberty, The Subjection of Women, and Utilitarianism.

[5] Wielenberg, E. J. (2014). Review of Joshua Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Ethics 124(4): 910–916.

[6] Bentham, J., & Lafleur, L. J. (1948). An introduction to the principles of morals and legislation. New York, N.Y: Hafner Pub. Co.