Lesson eight, Part seven: Moral Tribes, part two, Currency Exchanges.

Let's start by correcting one mistake from the previous video and adding one thought concerning this slide we saw in the last video. First things first: Joshua Greene is at Harvard, not Yale. I said Yale. That's wrong. Moving right along. We see four different tribes clashing, not just over who gets to claim the new pasture, but over how it is to be governed. How will goods be distributed in cases in which self-interest and cooperation are imperfectly aligned? Wouldn't it be nice if we had a meta-morality? That is, a morality that tells us which tribal morality is better to adopt. Wouldn't that just solve our little problem? Meta-morality, and here I'll just quote Joshua Greene himself: "a global moral philosophy that can adjudicate among competing tribal moralities, just as a tribe's morality adjudicates among the competing interests of its members." Wouldn't that be nice? Yes, and it would be nice to have a pony too. If wishes were horses then beggars would ride, and a smooth ride it would be. And philosophers are just a metaphysically jumped-up kind of beggar, aren't they?

To repeat, our problem is competing versions of moral common sense, and that appears to be a very realistic problem. Just imagining it away, poof, courtesy of a hypothetical meta-morality, seems like a classic case of philosophers seeing what they like in the clouds. Hm. I think that's a fair way to put the challenge. Joshua Greene doesn't wear his scientific credentials like a heavy crown: stand back, everyone, the scientists are here to sort it all out. But he does think of himself as a scientist, and he does think the proper empirical lab work he has done, and that other people have done, investigating moral psychology is a substantial, if partial, foundation for what he's proposing in his book. Moral psychology is supposed to allow us to see ourselves more clearly, morally, as we really are.

So, when and where does this meta-morality stuff come in? Is it scientific, or solid? And is it maybe a bit on the cloudy side? Not that there's anything wrong with that. If philosophy ends up being a bit cloudy, well, welcome to the Olympian pantheon of great thinkers. Maybe. Cloudy, Joshua Greene might reply? Maybe it's cloudy with a chance of brain. No, that's a bit too cute. But there's a point. There's always a point. One of the advantages of focusing on the brain is that we currently don't know how it works. So if you learn something about how it works, you really learn something. If that something has to do with moral thinking, you'll learn something new about morality. That's not so easy to do after all these centuries and millennia of wise philosophers holding forth on the subject, saying kind of the same things over and over again in slightly different ways. On the other hand, one of the problems with focusing on the brain is that the more you see it, the harder it is to see yourself. You divide, and turn out to be all kinds of things you didn't quite expect. So the more you derive some kind of meta-morality from the way you think the brain works, the harder it may be to see yourself in that meta-moral portrait. It can start to seem a bit inhuman: either too god-like, or too animal, or something in between. Here are a couple of quotes from Greene. Let me just read them to you; they're from fairly early in his book.
"The moral emotions are gut-level instincts that enable cooperation within personal relationships and small groups." That's our evolutionary history. Another quote: "If morality is a set of adaptations for cooperation, we today are moral beings only because our morally minded ancestors outcompeted their less morally minded neighbors. And thus, insofar as morality is a biological adaptation, it evolved not only as a device for putting Us ahead of Me, but as a device for putting Us ahead of Them."

Okay. Taking our brain, also our gut brain, out of its native environment, evolutionarily speaking, and sort of examining it in an abstract vat, the whole thing can start to seem functionally weird and a bit out of place. Perhaps it's obvious why the things said in those quotes could be true, from what I said in the last video and just thinking in a general evolutionary sense. Let me add, relatedly: as Haidt says, and as Greene basically says, morality binds and blinds. That's a Haidt phrase, but Greene would buy it. It binds us into an Us and keeps us from seeing Them for who they really are. Morality binds us into tribes; tribalism keeps egoism in check. But tribalism is problematic, because ethnocentrism and other sorts of in-group/out-group bias are problematic. There's way more to our moral emotions than just that, but some of our moral emotions bend us in that direction. And it's, to repeat, problematic.

This is all, to put it mildly, a big claim. Well, really, it's a bundle of big claims. It's very controversial, all in all. It isn't only controversial because some people won't like biology in their ethics. Some biologists won't like it, maybe because they don't like group selection in their evolution. Greene says he doesn't even need that for what he wants. But again, that's controversial. Anyway, even clarifying exactly what Greene means by all this is a very serious and fiddly issue, let alone verifying whether it's all true or not. For purposes of this video, for this lesson, let's just suppose he's basically right, empirically. I know, I know. You rightly object that you don't even know what you're accepting when you say that. But you get the gist. I want you to take that gist and use it to get the gist of his overall argument. If you don't like it in the end, you can return the whole package for a full refund. Except for your time. You won't get that back. But you can get your own beliefs back, when it's all done, if you want them back.

Now that I've clarified my generous epistemic returns policy, let me give you something else. You can return it later if you don't like it. Moral reasoning is dual-process, so argues Greene. Here we are getting into system one and system two stuff, like we talked about before regarding Haidt. Our moral system one is an emotional dog, but we also have a moral system two, which is, envelope please, basically a utilitarian at heart, or at head. So argues Greene. This is where Greene distances himself from Haidt, at least a bit, but not really fundamentally. We have a moral rider and a moral elephant. It's more a matter of emphasis. Haidt says the normal case, the mean, the average, the modal case of moral judgment, that's the emotional dog. Sometimes, yes, sure, the rational tail can wag the whole dog, but that's rare. It's the exception. Greene's point goes more like this: that exception is very important, even if it's rare. One thing we do want to do is distance ourselves from my little graphic.
I don't mean to imply that all our emotional, moral thinking comes from our gut brain, literally. That would just be totally wrong. The truth is, you don't have a moral brain at all. You just have a brain. It thinks about trees, and snakes, and cabbages, and kings, and morality. Greene's point is that moral thinking is dual-process. System two isn't just for doing your math homework; it's also for thinking about morality, which turns out to be a bit like doing your math homework. Five is more than one. We'll get to that.

For now, let's explain why our moral system two is a utilitarian at heart, or at head. Like I said in the previous slide, we don't seriously have two moral brains. We've only got one brain, even though we've also got a gut brain. Good to remember. Our one brain can be thought of as instantiating two systems, system one and system two. Here it's good to recall that they cannot be clearly located in particular parts of the brain. Kahneman calls them fictions. I'm not sure it's a good idea to go that far, but let's not argue about it. Just understand that my split-level brain in a vat is kind of a metaphor for a dual-process view. System two, you recall, is the conscious, rational, deliberative, attentional part of us. Joshua Greene talks a lot about automatic and manual mode. That's a camera metaphor, and it maps onto system one and two: system one is automatic, system two is manual. System two does our math homework, but it's not a math system, it's a problem-solving system. What does that mean? Take it away, Joshua Greene; I'm finally going to just read the slide. "But what exactly is problem solving? In the lingo of artificial intelligence, solving a behavioral problem is about realizing a goal state. Problem solving systems vary widely, but at the most abstract level, they all share certain properties. First, they all deal with consequences. A goal state is a consequence, one that may be actual or merely desired. All problem solvers perform actions, where actions are selected based on their causal relationships to desired and undesired consequences."

So why is the human manual mode utilitarian? Still reading the quote here: "I don't think that it's inherently utilitarian. Rather, I think that utilitarianism is the philosophy that the human manual mode is predisposed to adopt, once it's shopping for a moral philosophy." So let's rephrase. Why is the manual mode predisposed towards utilitarianism? The manual mode's job is, once again, to realize goal states, to produce desired consequences. Putting it very briefly: if your only tool is a hammer, every problem starts to look like a nail. If your manual-mode moral tool is a general capacity for realizing goal states, every moral problem starts to look like a candidate for utilitarian cost-benefit analysis. That is, your brain translates the problem into the sort of thing it can solve. It hears "what's the right thing to do?" and translates it as "what is the best goal state I can realize?", something like that. Sort of. Get it? We're obviously moving really fast here, and I don't just mean that I'm omitting the empirical evidence that any of this is true. These statements are very approximate, but that's okay. It's enough for now to give you the gist of the overall argument. Regarding that meta-morality we were wishing for at the start: it's utilitarianism or nothing. That's something, anyway.
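If it helps to see that hammer-and-nail point in a more concrete form, here is a little toy sketch of manual mode as a goal-state chooser. To be clear, this is only an illustration of the general idea, not anything from Greene's book: the actions, probabilities, and payoff numbers are all made up for the example.

```python
# Toy sketch: "manual mode" as a consequence-weighing problem solver.
# Everything here is invented for illustration; the options and numbers
# are not from Greene -- they just make the goal-state idea concrete.

def expected_value(action, outcomes):
    """Weigh an action by the probability-weighted value of its consequences."""
    return sum(prob * value for prob, value in outcomes[action])

def choose(actions, outcomes):
    """Pick whichever action realizes the best goal state (highest expected value).
    This is the hammer-and-nail move: 'What's the right thing to do?' gets
    translated into 'Which realizable consequence scores best?'"""
    return max(actions, key=lambda a: expected_value(a, outcomes))

# Hypothetical pasture dispute between herder tribes.
actions = ["graze_it_all_ourselves", "split_the_pasture"]
outcomes = {
    "graze_it_all_ourselves": [(0.5, 10), (0.5, -20)],  # big win, or a costly feud
    "split_the_pasture":      [(1.0, 6)],                # modest but reliable payoff
}
print(choose(actions, outcomes))  # -> split_the_pasture
```

The only point of the sketch is that once a question has been represented this way, all that is left to decide is which consequence comes out best, which is why, on Greene's view, the manual mode is predisposed toward utilitarian-style cost-benefit thinking once it goes shopping for a moral philosophy.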
Wait. Why does it have to be utilitarianism or nothing? Glad you asked that. Let's zoom in on one of the minds of those little fluffy sheepies, that is, those four hypothetical herder tribes, you recall. Each tribe is really a dual-process tribe, morally, insofar as it's made up of dual-process tribespeople, morally, or tribe-sheep, morally. Let's zoom in even closer and see what we can see. One tribe will do to make the point for all tribes. Each tribe has its moral automatic mode and its moral manual mode. Automatic mode isn't just "tribe one is number one," if you're a member of tribe one, but that's a big part of it. Automatic mode tends to be tribalistic, and that's a problem for purposes of selling this morality to tribes two through four, which won't want to buy anything that is, even in part, "tribe number one is number one." If we zoom back out, while keeping this picture in mind, we see why it's utilitarianism or nothing. The reason why this cloudy beast is the only candidate for meta-morality around is that we all have a little utilitarian in us.

Let me build toward the conclusion of this video by shifting to a different metaphor Greene also likes, a money metaphor. Quote: "A meta-morality's job is to make trade-offs among competing tribal values, and making trade-offs requires a common currency, a unified system for weighing values. Modern herders need a common currency, a universal metric for weighing the values of different tribes." The currency on the left will be accepted anywhere. The one on the right, in a sense, is accepted everywhere; but in a more accurate sense, not. Every tribe mints its own of these coins, and they aren't traded outside that tribe. Since trade is good, we have a reason to carry as many of the more exchangeable coins as we can, in our minds.

Let's get back to Jonathan Haidt. Like I said, his late confession of utilitarianism kind of comes out of nowhere. Joshua Greene's is clearly better, in a couple of senses. Most notably, the argument for utilitarianism as meta-morality that I just offered doesn't assume utilitarianism is true. Did you notice that? If not, rewind the tape and watch again. At no point did I, that is, Joshua Greene, assume utilitarianism is true, much less that it's the one true moral theory. On the other hand, although this makes it very parsimonious in a certain sense, it may make the argument too weak. Not to get all Thrasymachus about it, justice is the advantage of the stronger, but if there is any way in which one of these common-sense moralities is going to end up winning, realistically, it's probably by somehow managing to kill off the others in one of those traditional maneuvers so beloved of our species down the millennia. A metabolic process of absorption does seem more likely than a meta-moral process of argumentation. I know, I know, sheep don't eat sheep. But you get the point. It needn't even be so violent or cannibalistic as my cartoons make it seem. History is full of tribes absorbing other tribes, and it really doesn't usually go by way of convincing one tribe to dissolve itself on Benthamite utilitarian grounds. But insofar as we agree violence is to be avoided, if possible, what about the currency option? Can we turn goodness into money?

The first thing I would note, and this is just an observation, not really a criticism or objection, is how inevitable money metaphors seem to be in these discussions. Euthyphro turns holiness into a kind of service industry, remember that? Meno is obviously money-minded. Cephalus wants justice to be a thing you can exchange, like money.
Polemarchus is all about payback, in more of an eye-for-an-eye sense. Sometimes it seems that the only coherent alternative to Thrasymachean cynicism is some kind of money metaphor, some kind of market: justice as a thing that you weigh on the scales, like at a market. What specific problems does Greene's money metaphor have? They all tend to have problems, in my experience. Here's a way for yellow to come out on top without ever even having any tribal base. But then again, maybe not. In each tribe, everybody has got some money on their moral minds, some moral currency, two kinds of currency. Some of their money can be spent only within the tribe, that is, only other members of the tribe will accept it as moral currency. But everyone also has some currency that will travel abroad, as it were. This is Greene's picture that I'm trying to give you here. Suppose you could trade your local currency for the universal stuff. You would, right? Everything else being equal, it's better to have money that's good everywhere rather than just locally. But, unfortunately, other things are not necessarily equal. If you trade all your tribal stuff for utilitarian stuff, some of the tribal stuff is going to become, I'm sorry to tell you, worthless. The exchange rates look kind of lousy. All your ancient tribal wisdom about how to dress, and what not to eat, and who's disgusting, a lot of that may go away. So, in the end, whether you want to trade may kind of depend on the exchange rate you're going to get, and how much you really want to trade, morally, with folks from elsewhere.

Of course, if you just add another premise, the argument becomes a lot stronger. Maybe utilitarianism is just true. That would sure be a good reason to accept it. But it's kind of a big thumb to press down on the scales, as it were. Then again, aren't we willing to press down on the scales to get the right answer when push comes to shove? Biology and psychology can only take us so far. I think that's right. Let's read Joshua Greene here: "As moral beings, we may have values that are opposed to the forces that gave rise to morality." I think that denial of this claim is actually absurd. Why? Well, it's very simple, really. Evolutionarily speaking, we're all gene machines, to be really simple about it. If that's right, then "maximize the spread of your genes" would be the only moral imperative possibly strictly derivable from biology. But literally no one thinks that is true.

We've got three videos to go. In the next one, I take kind of a deep metaphysical dive. Then I rise up a bit, to the level of ethical theory. Then I end on a very practical note. I hope we all don't get the bends. Only one way to find out.