WEBVTT 00:00.000 --> 00:13.080 Okay, hello. 00:13.080 --> 00:17.600 We had a bit of technical difficulty, but now we are ready to go. 00:17.600 --> 00:18.600 Welcome to today's lecture. 00:18.600 --> 00:21.280 I'm Professor Franco Trevino. 00:21.280 --> 00:23.920 Today we'll be talking about deontology. 00:23.920 --> 00:30.920 This is the second of the three normative ethical theories we'll be talking about in Exfil. 00:30.920 --> 00:35.660 The readings for today are Kant's Groundwork for the Metaphysics of Morals, this letter 00:35.660 --> 00:40.280 exchange between Kant and Maria von Herbert, and then David Velleman's article, A Right 00:40.280 --> 00:41.280 of Self-Termination? 00:41.280 --> 00:50.520 So, today's program. 00:50.520 --> 00:54.280 First we'll be looking at the core concepts of deontology, saying a little bit in general 00:54.280 --> 00:59.080 about what deontology as an approach to ethical theory looks like. 00:59.080 --> 01:02.200 Then we'll start with the first part of Kant's argument. 01:02.200 --> 01:07.200 This is where Kant spends a lot of time talking about the good will and the value of the good will 01:07.200 --> 01:11.860 and the nature of duties, a lot of conceptual analysis that goes on here where 01:11.860 --> 01:16.720 he builds out the foundations of his theory. 01:16.720 --> 01:20.160 And then we get to the practical end of things, the categorical imperative. 01:20.160 --> 01:26.920 This is the decision procedure that Kant recommends for trying to figure out, in every single possible 01:26.920 --> 01:33.160 case you face, whether what you're doing is morally permissible or not. 01:33.160 --> 01:38.860 And then we'll look at Maria von Herbert's letters and see whether and to what extent 01:38.860 --> 01:43.100 there's a criticism of Kant and what that criticism is, and we can talk a little bit about 01:43.100 --> 01:47.860 whether Kant has a good enough reply or not. 
01:47.860 --> 01:50.000 And then Velleman on assisted suicide. 01:50.000 --> 01:54.280 The reason we have Velleman here is because Velleman is doing something interesting for 01:54.280 --> 02:00.400 us, that is, he is showing us deontology, and Kantian-inspired deontology in particular, 02:00.400 --> 02:01.400 in action. 02:01.400 --> 02:07.920 So we're seeing how to use Kantian deontological notions on a particular case. 02:07.920 --> 02:13.240 So just like Philippa Foot was showing us how you can use virtue ethical notions to 02:13.240 --> 02:19.140 talk about euthanasia, so too is Velleman showing how we can use deontological notions 02:19.140 --> 02:20.960 to think about assisted suicide. 02:20.960 --> 02:25.240 And they're dealing with slightly different sorts of cases, but 02:25.240 --> 02:29.400 they're very similar sorts of cases, and they come to different sorts of conclusions. 02:29.400 --> 02:34.520 So it would be interesting then to compare Velleman with Foot, because they're 02:34.520 --> 02:40.240 coming at a very similar issue from different ethical perspectives. 02:40.240 --> 02:43.260 Okay. 02:43.260 --> 02:47.200 So what are the core concepts of deontology? 02:47.200 --> 02:52.000 Well, deontology is a theory that is focused on the nature of the action. 02:52.000 --> 02:55.480 In the last lecture I gave you an example of an action. 02:55.480 --> 02:58.720 You had the agent, the action, and the consequences. 02:58.720 --> 03:01.360 Virtue ethics was the theory that focused on the agent. 03:01.360 --> 03:05.040 Now we're at the theory that's focused on the action. 03:05.040 --> 03:11.320 And the way that deontology approaches the action is through this notion of rules. 03:11.320 --> 03:18.720 So it is a theory that's going to have a strong focus on rules for behavior, rules which tell 03:18.720 --> 03:25.300 you what's permissible and what's impermissible, what's right and what's wrong, and on rationality. 
03:25.300 --> 03:30.200 And so again, remember, rationality was also important for Aristotle's conception of ethics, 03:30.200 --> 03:33.920 but here rationality plays an importantly different role. 03:33.920 --> 03:41.280 We'll see that rationality is not, as in Aristotle, connected to happiness, but connected instead 03:41.280 --> 03:47.440 to morality, where morality on the Kantian view and happiness are two very distinct things, 03:47.440 --> 03:51.160 whereas they came together, as you recall, in virtue ethics. 03:51.160 --> 03:56.960 Okay, so just generally speaking, some features of deontology, just to give you a little 03:56.960 --> 04:01.320 taste for it before we go into the details of Kant. 04:01.320 --> 04:04.200 So we have rationality and we have these rules. 04:04.200 --> 04:10.280 Well, they're connected in a particular way, and that is that the rules are discoverable 04:10.280 --> 04:11.560 by rationality. 04:11.560 --> 04:16.280 That is, if we use reason, we can figure out what the rules are. 04:16.280 --> 04:20.400 And that's in fact what the categorical imperative is: it's a rational procedure for figuring 04:20.400 --> 04:25.360 out what the rules of morality are. 04:25.360 --> 04:32.320 Another feature: because we have this rational capacity, according to Kant and deontology in general, 04:32.320 --> 04:34.760 we have a special sort of value. 04:34.760 --> 04:40.520 So rational agents and rational agency have a particular kind of value, unconditional 04:40.520 --> 04:48.480 value, and this unconditional value has important consequences for what kind of theory it is, 04:48.480 --> 04:53.000 and we'll see this throughout. 04:53.000 --> 04:58.360 In contrast to consequentialism, remember, this is the theory that assesses actions and 04:58.360 --> 05:01.760 sometimes agents on the basis of consequences. 
05:01.760 --> 05:07.960 For deontology in general, the consequences are not going to be a decisive factor for 05:07.960 --> 05:12.600 whether an act is good or not, whether an act is permissible or forbidden. 05:12.600 --> 05:16.480 In fact, some acts are going to be forbidden regardless of the consequences. 05:16.480 --> 05:21.720 So we don't even look to the consequences, even in, as we'll see later on, some very 05:21.720 --> 05:25.320 difficult cases where you might say, hey, wait a minute, shouldn't we have an exception 05:25.320 --> 05:29.600 there? 05:29.600 --> 05:32.320 The agent's motive is going to be important for assessing the action. 05:32.320 --> 05:37.520 We saw a similar kind of move also in Aristotle, where, if you remember, Aristotle says something 05:37.520 --> 05:43.680 along the lines of, you know, for an action to be virtuous you have to choose it for its 05:43.680 --> 05:46.640 own sake and you can't choose it for some extraneous reason. 05:46.640 --> 05:49.200 You have a similar move in Kant. 05:49.200 --> 05:55.520 It matters very much why you're following the appropriate rule, if you're following it. 05:55.520 --> 05:57.400 It's not enough just to follow the rule. 05:57.400 --> 06:02.840 You have to follow the rule for the right sort of reason. 06:02.840 --> 06:13.680 Okay, so last time I went through some advantages of virtue ethics and some drawbacks. 06:13.680 --> 06:17.120 I'll do the same again here, even though we've only sketched the theory. 06:17.120 --> 06:22.160 I think you are already in a position to see some ways in which deontology has some positive 06:22.160 --> 06:25.480 features that you might like and want from a moral theory. 06:25.480 --> 06:30.560 First, the dignity of rational agents is a key feature of the theory. 06:30.560 --> 06:32.100 This seems intuitive. 
06:32.100 --> 06:36.860 We have a value as rational agents, as human beings, and this is something that's 06:36.860 --> 06:40.040 front and center in deontology. 06:40.040 --> 06:44.120 And on the practical end, this means that there are strong protections for individual 06:44.120 --> 06:45.120 rights. 06:45.120 --> 06:52.700 In fact, the language of rights really is something that is most at home in deontology, 06:52.700 --> 06:58.940 because deontology is thinking about what rights I have against other agents just in 06:58.940 --> 07:04.680 virtue of the fact that I have dignity as a rational agent. 07:04.680 --> 07:07.160 There are also strong protections for minorities. 07:07.160 --> 07:12.680 Again, we'll see this as this is a problem in the end for consequentialism. 07:12.680 --> 07:16.440 If you're looking at the consequences all the time, it looks like sometimes you'll be 07:16.440 --> 07:22.280 able to sacrifice minorities for the benefit of majorities if all you're doing is calculating 07:22.280 --> 07:24.240 what's the best consequences. 07:24.240 --> 07:27.320 This is the kind of move that's just blocked in deontology. 07:27.320 --> 07:32.680 Again, because individuals are protected, all members of minorities, qua individuals, 07:32.680 --> 07:37.120 are also protected, so you can't go around exploiting people for the benefit of the majority, 07:37.120 --> 07:41.600 even if the benefit is amazing. 07:41.600 --> 07:48.460 And then the other thing is that it's a kind of intuitive theory, in that the rules of action 07:48.460 --> 07:53.400 that it generates are generally ones that we have already heard. 07:53.400 --> 07:57.120 So the rules that you're going to get are things like don't lie, don't cheat, don't 07:57.120 --> 08:04.380 kill, all the things that you already think are intuitively plausible moral rules; these 08:04.380 --> 08:08.880 are the ones that are going to be in general generated by deontology. 
08:08.880 --> 08:13.400 So it coheres with widespread moral intuitions about what is morally permissible and what 08:13.400 --> 08:14.400 is not. 08:14.400 --> 08:18.920 Okay, so those are some advantages. 08:18.920 --> 08:27.160 Nope, that is not what I wanted to do. 08:27.160 --> 08:31.080 Here are some drawbacks. 08:31.080 --> 08:36.720 So this is the flip side, right; the drawbacks are always the flip side of the advantages. 08:36.720 --> 08:42.420 This idea of having rules seems very intuitive and plausible, and that we hold to the rules 08:42.420 --> 08:48.620 even when the consequences might indicate otherwise seems like a really good idea, but 08:48.620 --> 08:52.080 it does seem like it becomes overly rigid in certain situations. 08:52.080 --> 08:59.360 So if there's a situation, for example, where telling a lie might save a life, you might 08:59.360 --> 09:03.560 think, well hey, okay, in that situation you tell the lie. 09:03.560 --> 09:10.000 Kant, on the other hand, has a whole article, called On a Supposed Right to Lie from Philanthropy, 09:10.000 --> 09:15.540 where he argues that no, in fact, in that situation you're not permitted to lie even 09:15.540 --> 09:18.760 when someone else's life is at stake. 09:18.760 --> 09:22.800 And that seems really counterintuitive. So the flip side of holding on to these 09:22.800 --> 09:28.240 intuitive rules is that when you hold on to them in a principled and rigid way, you get 09:28.240 --> 09:34.340 cases where it looks obvious to a lot of people that you're supposed to lie, but it turns 09:34.340 --> 09:40.840 out that according to the moral theory, no, in fact, you have to tell the truth. 09:40.840 --> 09:45.480 Another drawback, right: again, you have all these rules, that sounds great, it makes it 09:45.480 --> 09:50.920 easy, but a lot of times you have rules that conflict with one another, and it's not clear 09:50.920 --> 09:55.580 what deontology has to say about when the rules conflict. 
09:55.580 --> 10:04.320 So Kant will have something to say about when two different types of rules conflict, but 10:04.320 --> 10:10.360 in general, it doesn't seem like there's any guidance for us when two rules seem to pull 10:10.360 --> 10:12.600 us in two different directions. 10:12.600 --> 10:14.440 What do we do in those situations? 10:14.440 --> 10:16.840 Deontology doesn't seem to tell us. 10:16.840 --> 10:23.020 Okay, so that's my general sketch of what deontology is, or what it looks 10:23.020 --> 10:30.480 like as a moral theory, and what are some of its advantages and some of its drawbacks. 10:30.480 --> 10:33.720 So let's now jump into Kant. 10:33.720 --> 10:41.200 Now I'll say at the beginning, and I say this to students every year: Kant is really hard. 10:41.200 --> 10:46.840 Reading Kant is really hard, and trying to explain Kant is really hard. 10:46.840 --> 10:53.120 All of these things are difficult, so if you've read the text and you thought, oh man, I'm 10:53.120 --> 10:56.340 just lost, that's fine. 10:56.340 --> 11:02.320 Some very renowned scholars are also lost with Kant, so don't feel bad about that. 11:02.320 --> 11:07.240 What I'm going to try and do is give a lecture that will then help you when you go back and 11:07.240 --> 11:12.200 read it again, because you will have to read it again if you want to understand it; show you 11:12.200 --> 11:19.680 how Kant's argument is built up and put together; and give you a better sense of how he wants 11:19.680 --> 11:25.280 us to think about things like the categorical imperative and the different cases that he 11:25.280 --> 11:27.740 brings up. 11:27.740 --> 11:35.160 So this one lecture will not solve all of your problems; in other words, it's not going 11:35.160 --> 11:39.420 to make you understand absolutely everything, but hopefully it will give you some guidelines 11:39.420 --> 11:45.240 for helping you understand the text better when you look at it the next time. 
11:45.240 --> 11:46.960 So what's Kant trying to do? 11:46.960 --> 11:54.040 Well, he tells us he wants to ground morality objectively in reason. 11:54.040 --> 11:59.640 Again, I mentioned this last time: Kant, like Aristotle and like Mill, who's coming later, 11:59.640 --> 12:07.200 thinks that ethics is an objective science, so it's not a cultural projection, it's not 12:07.200 --> 12:11.800 something that we choose as individuals, even though there are all these individual likes. 12:11.800 --> 12:19.740 No, he's going to ground morality objectively in reason, such that all rational creatures, 12:19.740 --> 12:26.060 insofar as they're rational, will be able to figure out what to do just by using reason 12:26.060 --> 12:27.060 correctly. 12:27.060 --> 12:31.320 It matters here again: using reason correctly, not just using any kind of reason. 12:31.320 --> 12:35.600 So you can only ground morality objectively if everyone is reasoning the same, and everyone's 12:35.600 --> 12:39.320 doing that when they're reasoning in the right way. 12:39.320 --> 12:44.360 So this kind of background I think is important for understanding Kant's project, because he 12:44.360 --> 12:51.400 says some things that don't seem to make sense or might be puzzling, and he talks a lot about 12:51.400 --> 12:53.960 inclination; what is inclination? 12:53.960 --> 12:57.960 So I'm going to try and explain some of these things, but one way of getting it on the table: 12:57.960 --> 13:01.880 I think it's always helpful to have a picture. 13:01.880 --> 13:07.480 So that is what I call the dual nature of the human being. 13:07.480 --> 13:13.760 So that's us, the stick figure in the middle; that's a human being. 13:13.760 --> 13:20.400 And we are, as it were, part of two different realms, you might call it. 13:20.400 --> 13:21.640 We have a dual nature. 13:21.640 --> 13:27.040 We're both physical beings and, insofar as we're physical beings, we're subject to physical 13:27.040 --> 13:29.440 laws. 
13:29.440 --> 13:30.440 Physics is a science. 13:30.440 --> 13:35.000 So if you throw someone out a window, they will fall at a predictable rate, with a predictable 13:35.000 --> 13:40.800 speed and a predictable acceleration, because we are subject to physical laws. 13:40.800 --> 13:45.920 And to the extent that we're in this physical world, we're also in a deterministic world, 13:45.920 --> 13:46.920 right? 13:46.920 --> 13:51.440 It's cause and effect, and there's no freedom in the physical world. 13:51.440 --> 13:55.600 You can't choose not to follow the laws of physics. 13:55.600 --> 13:58.520 We're also rational beings. 13:58.520 --> 13:59.520 We're rational agents. 13:59.520 --> 14:05.440 This gives us a different kind of status and subjects us to different kinds of laws. 14:05.440 --> 14:12.120 And so according to Kant, insofar as we're rational, we're subject to the moral laws, 14:12.120 --> 14:13.120 right? 14:13.120 --> 14:17.320 And the science that studies the moral laws is ethics, just like the science that studies 14:17.320 --> 14:20.400 the physical laws is physics. 14:20.400 --> 14:25.140 And insofar as we're in the realm of ethics and moral laws, and insofar as we're rational, 14:25.140 --> 14:29.160 we have free will. 14:29.160 --> 14:35.600 So, I mean, this is a view on which both determinism and free will are true, but 14:35.600 --> 14:39.280 they're true in their respective domains. 14:39.280 --> 14:41.040 Okay. 14:41.040 --> 14:48.800 So we have, as it were, both this rational side and this physical side. 14:48.800 --> 14:59.880 Now, the parts of us that are pulling us towards our physical nature are what Kant 14:59.880 --> 15:02.720 calls, in general, inclination. 15:02.720 --> 15:08.160 So our inclinations are like our desires, our emotions, our self-interest, all those 15:08.160 --> 15:12.120 things, you know; we want money and jobs and all sorts of things. 15:12.120 --> 15:17.200 All those things are on the side of inclination. 
15:17.200 --> 15:23.560 And insofar as we let ourselves be moved by those things, we are letting ourselves be 15:23.560 --> 15:26.400 moved by our physical nature. 15:26.400 --> 15:30.080 Now, this word "will" here: what is the will? 15:30.080 --> 15:34.360 All right, so the will is the part of us that decides. 15:34.360 --> 15:40.040 All right, that's the part of our nature, of our minds, that makes a decision about what 15:40.040 --> 15:42.200 to do. 15:42.200 --> 15:49.640 And insofar as we're physical, we are pulled by inclination in one direction, but insofar 15:49.640 --> 15:57.480 as we're rational, our will is pulled also towards reason. 15:57.480 --> 16:06.400 And so he thinks that the task of morality is to allow ourselves only and always to be 16:06.400 --> 16:10.840 pulled by reason when reason is pulling. 16:10.840 --> 16:17.280 So what I mean by that is, he's not saying never follow your inclination; he is saying 16:17.280 --> 16:21.840 never follow your inclination when it conflicts with reason. 16:21.840 --> 16:27.540 Reason should always take precedence over inclination, and to let your will be guided 16:27.540 --> 16:37.480 by reason means for Kant to let it be guided by the moral law. 16:37.480 --> 16:45.720 And this here also indicates a deep disagreement that Kant has with Aristotle, because for Kant, 16:45.720 --> 16:50.480 happiness for us is on the side of inclination. 16:50.480 --> 16:56.120 So remember, for Aristotle it was our rational nature which told us how to achieve happiness. 16:56.120 --> 16:58.360 This is very alien to Kant. 16:58.360 --> 17:04.520 Kant thinks that reason's purpose is not happiness, because that would pull us this way, but rather 17:04.520 --> 17:05.520 morality. 17:05.520 --> 17:08.600 And that, he thinks, is a more noble purpose. 17:08.600 --> 17:14.880 He also thinks, as we mentioned last time, that there's no good reason to think that being 17:14.880 --> 17:16.520 a good person is going to make you happy. 
17:16.520 --> 17:22.800 And that this seemed overly optimistic was one of the criticisms of virtue ethics. 17:22.800 --> 17:24.580 So we have this dual nature. 17:24.580 --> 17:30.060 Our will is pulled in different directions by our being physical and having these inclinations, 17:30.060 --> 17:33.240 and our being rational and having this faculty of reason. 17:33.240 --> 17:39.840 And what we want to do is use the science of ethics to discover the moral laws, so that 17:39.840 --> 17:46.840 Kant will be able to help us use our reason so that we can be morally good. 17:46.840 --> 17:52.680 And to be morally good means, for Kant, to have a good will. 17:52.680 --> 18:05.000 So a good will is a will, the faculty of volition, that is guided by reason. 18:05.000 --> 18:06.720 So the will is the faculty of volition. 18:06.720 --> 18:11.400 The good will is the will that makes decisions with the right kinds of motives. 18:11.400 --> 18:14.920 The motive is the reason behind your action. 18:14.920 --> 18:19.440 And generally speaking, there are two kinds of motives here. 18:19.440 --> 18:23.920 One is reason, rationality, and morality, and the other is inclination, self-interest, 18:23.920 --> 18:24.920 happiness. 18:24.920 --> 18:31.880 And those are different kinds of motives that you might have. 18:31.880 --> 18:36.800 Now, Kant says the good will is good in itself, or good without qualification. 18:36.800 --> 18:38.960 It's good no matter what. 18:38.960 --> 18:43.000 And it's not good because of its consequences. 18:43.000 --> 18:51.920 And here it might help to think about someone who does the right thing, has the right motives, 18:51.920 --> 18:55.480 but the thing they do doesn't turn out the way that they want it to. 18:55.480 --> 19:00.380 He still thinks in that case, if you've done the right thing, the will is good, and it's 19:00.380 --> 19:06.640 not tainted by the fact that you didn't achieve the thing you wanted to achieve. 
19:06.640 --> 19:13.280 So here we need to make a distinction, a very important distinction for understanding 19:13.280 --> 19:20.560 Kant. Right, so the motive is the reason or principle behind the action. 19:20.560 --> 19:25.420 That is to say, what is standing behind the action in your mind as an agent. 19:25.420 --> 19:30.520 That's the motive of the action. 19:30.520 --> 19:37.760 Contrast that with the purpose of the action: the purpose of the action is the way 19:37.760 --> 19:42.000 you want the world to be after the end of your action. 19:42.000 --> 19:48.840 So the motive is the principle standing behind the action, and the purpose is the consequences 19:48.840 --> 19:52.760 you want to obtain after your action is done. 19:52.760 --> 19:58.280 So the first one, the motive, that's crucial for assessing the morality of the action. 19:58.280 --> 20:02.920 The consequences are not relevant to the goodness of an action. 20:02.920 --> 20:08.360 So this is again a little schema, right: the motive stands behind the action, and the action 20:08.360 --> 20:13.760 has a purpose or, if you like, an end, as in Aristotle: a goal. 20:13.760 --> 20:17.520 But we do not decide whether an action is good by its purpose or goal. 20:17.520 --> 20:22.820 We look rather at the motive, right: what is moving the will? 20:22.820 --> 20:28.480 Is it inclination or is it morality? 20:28.480 --> 20:36.120 Okay, so here comes in this notion of duty. 20:36.120 --> 20:44.080 Kant says that the will is good when it acts from, or for the sake of, duty. 20:44.080 --> 20:52.920 Now, our duties on this picture are what the moral law tells us we ought to do. 20:52.920 --> 20:58.360 Right, so these are moral duties that Kant is talking about, not duties in the military 20:58.360 --> 21:01.120 or civic duties or something like that. 21:01.120 --> 21:03.140 These are moral duties. 21:03.140 --> 21:11.440 So we have a good will, right, when we act for the sake of, or from, duty. 
21:11.440 --> 21:19.620 And here again is a very important distinction for Kant, the distinction between acting 21:19.620 --> 21:27.800 merely in accord with duty and acting from duty, or for the sake of duty. 21:27.800 --> 21:32.000 So here's, I think, the easiest way to understand the difference. 21:32.000 --> 21:42.120 So sometimes you do what duty tells you to do, but for the wrong reason. 21:42.120 --> 21:47.840 All right, so that's acting in accord with duty but not from duty. 21:47.840 --> 21:53.880 When you act from duty, you do the thing that duty tells you. 21:53.880 --> 21:55.600 Why? 21:55.600 --> 22:00.320 Because duty told you to do it, right, because it's your duty. 22:00.320 --> 22:08.400 So when you act from duty, or for the sake of duty, you're acting in a way that allows 22:08.400 --> 22:13.560 your will to be moved by reason and reason alone. 22:13.560 --> 22:23.080 You act because what you're doing is your duty, and for no other reason. 22:23.080 --> 22:32.520 But again, think of someone who tells the truth because they're afraid they'll get caught 22:32.520 --> 22:37.720 in a lie, as opposed to someone who tells the truth because truth telling is a duty. 22:37.720 --> 22:39.000 Those are two different cases. 22:39.000 --> 22:44.440 Both of them tell the truth, so they both follow the rule, but one of them does it for 22:44.440 --> 22:48.000 an extraneous reason and one of them does it for an internal reason, 22:48.000 --> 22:53.160 that is, because it's a duty to tell the truth. 22:53.160 --> 23:01.320 And this illustrates the further point that actions that are merely in accord with duty 23:01.320 --> 23:03.320 could come from some inclination. 23:03.320 --> 23:08.080 Again, inclination being that grab bag of motives that have to do with self-interest 23:08.080 --> 23:13.760 and happiness that stand in opposition to reason, right? 
23:13.760 --> 23:18.300 So the action is what duty prescribes, just as in the case of someone who tells the truth, but the 23:18.300 --> 23:22.480 motive is improper. 23:22.480 --> 23:26.760 You didn't lie because you were afraid of getting caught, not because it's your duty to 23:26.760 --> 23:28.760 tell the truth. 23:28.760 --> 23:33.280 Okay, so I want to say a little bit now about inclination. 23:33.280 --> 23:37.680 It is, as I said, a kind of grab bag notion, in a way, because so many things fall 23:37.680 --> 23:45.120 under inclination, but Kant at least distinguishes two kinds of inclinations that you might have. 23:45.120 --> 23:49.040 One is what Kant calls mediate inclination. 23:49.040 --> 23:54.000 A mediate inclination is a sort of calculation of self-interest, a long-term calculation 23:54.000 --> 23:55.000 of self-interest. 23:55.000 --> 24:00.980 So in the example I just gave, someone doesn't lie because they're afraid of the consequences 24:00.980 --> 24:04.460 that might obtain if they get caught. 24:04.460 --> 24:09.920 That's thinking in terms of what's in my interest if this or that happens, and that's acting 24:09.920 --> 24:10.920 from mediate inclination. 24:10.920 --> 24:14.720 It's like, oh, if I get caught, that'll be uncomfortable or I'll get in trouble or something 24:14.720 --> 24:18.220 along those lines. 24:18.220 --> 24:20.800 So, a calculation of self-interest. 24:20.800 --> 24:28.100 Immediate inclination is more like an emotional impetus. 24:28.100 --> 24:36.840 So you tell the truth to someone because you like them, not because it's your duty to tell 24:36.840 --> 24:38.380 the truth. 24:38.380 --> 24:45.880 So there, your liking the person isn't the right kind of motive for telling the truth. 24:45.880 --> 24:51.560 Again, the right kind of motive on this picture, the only motive that has moral worth, is acting 24:51.560 --> 24:53.580 for the sake of duty. 
24:53.580 --> 24:58.860 So if you act for the sake of duty, you're not acting because you like someone or because 24:58.860 --> 25:09.000 you're afraid of getting caught; you're acting because this is what your duty is. 25:09.000 --> 25:12.840 So again, I mean, you can just think about this. 25:12.840 --> 25:13.840 You didn't cheat. 25:13.840 --> 25:19.160 I'm going to go optimistic here and say you didn't cheat on your last exam. 25:19.160 --> 25:21.860 And now I want to know, why didn't you cheat on your last exam? 25:21.860 --> 25:23.480 Why did you not cheat? 25:23.480 --> 25:27.200 Why did you abstain from cheating on your last exam? 25:27.200 --> 25:29.280 And you know, we don't have time here. 25:29.280 --> 25:32.240 We started a bit late, so we can't actually get answers from you. 25:32.240 --> 25:36.680 But when I've done this before, I get all sorts of good answers like, well, I didn't 25:36.680 --> 25:41.640 need to cheat because I knew I was going to do well enough, or I was afraid of getting 25:41.640 --> 25:48.000 caught, or I just, you know, wouldn't dare to cheat. 25:48.000 --> 25:55.120 So all sorts of reasons that you didn't cheat, and no one has ever, in my experience, said: 25:55.120 --> 25:58.960 I didn't cheat because it's wrong to cheat. 25:58.960 --> 26:00.320 But that's the one, right? 26:00.320 --> 26:05.840 So those of you who didn't cheat on your exam for all those other reasons, Kant thinks you 26:05.840 --> 26:08.520 acted in accord with duty, and that's fine. 26:08.520 --> 26:09.520 That's good. 26:09.520 --> 26:10.520 Right? 26:10.520 --> 26:11.520 Keep doing that. 26:11.520 --> 26:14.840 But if you want your action to have moral worth, you should not cheat because cheating 26:14.840 --> 26:22.080 is wrong, right, because it's your duty not to cheat. 26:22.080 --> 26:32.660 So Kant goes through four kinds of cases where duty and inclination stand in different relationships. 26:32.660 --> 26:35.900 So sometimes, right, you can do something. 
26:35.900 --> 26:37.760 So the first case is easy. 26:37.760 --> 26:41.900 The act is contrary to duty, but it's useful for some end, right? 26:41.900 --> 26:47.560 So cheating on your exam is contrary to duty, but it'll help you get a better grade if you 26:47.560 --> 26:48.560 don't get caught, right? 26:48.560 --> 26:50.480 So that's the case we set aside. 26:50.480 --> 26:52.340 That's not the interesting case. 26:52.340 --> 26:58.820 The interesting cases here are two, three, and four, because they're showing something 26:58.820 --> 27:04.120 important about the relationship between inclination and duty, because in a lot of cases, it's going 27:04.120 --> 27:11.760 to turn out that both inclination and duty tell you to do the same thing. 27:11.760 --> 27:16.120 Like in the case of not cheating on your exam because you don't want to get caught: 27:16.120 --> 27:19.540 inclination tells you to do what duty tells you to do. 27:19.540 --> 27:21.240 And it's difficult in those cases. 27:21.240 --> 27:30.040 In fact, Kant thinks it's impossible to know exactly, even about yourself, why you acted. 27:30.040 --> 27:37.280 So you have cases two and three, right, where duty tells you to do 27:37.280 --> 27:44.640 one thing, and inclination also tells you to do the same thing. 27:44.640 --> 27:49.280 Now if you step back after you've done the action and ask, why did I act? 27:49.280 --> 27:51.320 Well, it's never clear, he thinks. 27:51.320 --> 27:53.720 You can't just think, oh yeah, I acted from inclination. 27:53.720 --> 27:56.280 Oh yeah, I definitely acted from duty. 27:56.280 --> 28:00.840 He doesn't think we're good enough at introspection to discover those sorts of things. 
28:00.840 --> 28:08.880 That's why case four is especially important for him, because in those cases, when all of 28:08.880 --> 28:15.160 your inclinations are telling you to do the opposite of what duty prescribes, but you 28:15.160 --> 28:24.960 do what duty prescribes anyway, in that case, you isolate and can be sure that you're acting 28:24.960 --> 28:27.720 from duty. 28:27.720 --> 28:33.520 So he gives all these cases; I'll just mention the first one. 28:33.520 --> 28:39.620 So there's a shopkeeper who doesn't overcharge their customers, right. So one reason they 28:39.620 --> 28:43.500 might not overcharge their customers is that they think it's bad business. 28:43.500 --> 28:46.560 It's just bad business to try and overcharge your customers. 28:46.560 --> 28:51.000 Eventually, you know, word is going to get around, people will stop coming to your shop. 28:51.000 --> 28:53.600 It's just not a good idea. 28:53.600 --> 28:56.760 So that's case two, right. 28:56.760 --> 29:00.760 You do a calculation of self-interest and you don't overcharge your customers, which 29:00.760 --> 29:05.240 is what duty prescribes, but you have this mediate inclination, this calculation 29:05.240 --> 29:07.260 of self-interest, right. 29:07.260 --> 29:10.160 It's not good for business. 29:10.160 --> 29:14.360 And then case three: let's imagine someone who doesn't overcharge their customers because 29:14.360 --> 29:16.260 they just love people. 29:16.260 --> 29:17.260 They love their customers. 29:17.260 --> 29:18.260 Why would I overcharge my customers? 29:18.260 --> 29:19.260 I love them. 29:19.260 --> 29:20.320 These are my friends, my neighbors. 29:20.320 --> 29:21.760 They come and they talk to me. 29:21.760 --> 29:25.280 I would never overcharge them. 29:25.280 --> 29:29.120 That's someone who has an immediate inclination to do what duty prescribes. 29:29.120 --> 29:35.720 And so in those cases, it's unclear, right, why people are acting. 
29:35.720 --> 29:39.480 Even though someone might say, oh, but the reason I don't overcharge the customers, right. 29:39.480 --> 29:43.360 Imagine the person who loves everyone says, well, I love my customers, but really the 29:43.360 --> 29:46.520 reason I don't overcharge them is because it's my duty. 29:46.520 --> 29:52.980 Think, hmm, do you really know, can you really separate out the inclination from the acting 29:52.980 --> 29:54.480 from duty in that way? 29:54.480 --> 29:57.440 I mean, if you ask people and they say the reason I don't overcharge them is because 29:57.440 --> 30:00.200 I love them, then they've told you what their motive is, right. 30:00.200 --> 30:02.760 Their motive is then clear. 30:02.760 --> 30:06.860 But if they have this motive and then they try and claim, well, I'm acting from duty, 30:06.860 --> 30:11.680 it's not always clear, right, whether it's really true that they're acting from duty. 30:11.680 --> 30:15.240 The fourth kind of case, right, so imagine a shopkeeper who thinks, you know what, I 30:15.240 --> 30:18.760 can make a ton of money if I overcharge some customer. 30:18.760 --> 30:20.080 I can just overcharge the children. 30:20.080 --> 30:21.880 They'll never notice the difference. 30:21.880 --> 30:24.280 And then they think, and you know what, I really hate people. 30:24.280 --> 30:26.600 I wish I could overcharge more people. 30:26.600 --> 30:32.560 And then that person thinks, but, but it's my duty to charge a fair price. 30:32.560 --> 30:37.720 That person, right, we have isolated, right, that they are acting from duty. 30:37.720 --> 30:46.640 Now it's very important in this, in understanding what Kant's doing here to avoid thinking the 30:46.640 --> 30:48.360 following thought. 30:48.360 --> 30:55.240 Kant is not saying that we should always want to do the opposite of duty so that we can 30:55.240 --> 30:58.560 do our duty and have our actions be morally worthy, right. 
30:58.560 --> 31:03.520 He thinks it's better for us that our inclinations tell us to do what duty tells us to do. 31:03.520 --> 31:05.840 But what he's trying to do is important. 31:05.840 --> 31:12.540 He's trying to isolate the right kind of motive and show us what that motive looks like when 31:12.540 --> 31:16.200 it contrasts with what inclination is saying. 31:16.200 --> 31:22.440 So the point is not that we should hate our customers and think we can get away with things 31:22.440 --> 31:25.000 so that we isolate duty. 31:25.000 --> 31:31.800 It's a philosophical point showing what acting from duty looks like when it's all by itself 31:31.800 --> 31:37.360 and not muddled by inclination pushing in the same direction. 31:37.360 --> 31:39.880 Okay. 31:39.880 --> 31:46.120 So just to sum up this part, a good will is a will that acts for the sake of duty, 31:46.120 --> 31:47.120 acts for duty. 31:47.120 --> 31:52.480 It does what duty prescribes because it's what duty prescribes. 31:52.480 --> 31:55.840 Only actions done from duty have moral worth, right. 31:55.840 --> 31:59.760 So if you tell the truth for the wrong kind of reason, right, it's still good that you 31:59.760 --> 32:02.240 told the truth, but that's not a morally worthy action. 32:02.240 --> 32:06.760 You shouldn't be praised as a good person for it because your motive was not appropriate. 32:06.760 --> 32:11.400 The action was right, but the motive was wrong. 32:11.400 --> 32:17.800 Inclination is this part of us that pulls us towards acting in self-interest and on the 32:17.800 --> 32:21.320 basis of emotions and feelings. 32:21.320 --> 32:24.240 And this is the thing that we need to fight against. 32:24.240 --> 32:30.240 We need to let reason tell us what our duties are and only act on the basis of what reason 32:30.240 --> 32:33.000 tells us. 32:33.000 --> 32:36.440 Okay.
32:36.440 --> 32:40.680 So reason is going to tell us what our duties are, but what are these duties? 32:40.680 --> 32:44.200 Right, so I've been talking about duties, talking very broadly about, okay, don't lie, 32:44.200 --> 32:51.240 don't cheat, these kinds of things, but Kant isn't satisfied with just taking over the 32:51.240 --> 32:53.980 general rules of ordinary morality. 32:53.980 --> 32:58.620 He wants to show us that we can ground these, remember, objectively in reason. 32:58.620 --> 33:04.400 And the categorical imperative is his tool for grounding the rules of morality objectively 33:04.400 --> 33:07.440 in reason. 33:07.440 --> 33:12.080 So for him, moral reasoning is a fully rational procedure. 33:12.080 --> 33:16.120 Remember, reason was in the driver's seat also in Aristotle, but he allowed sort of 33:16.120 --> 33:20.640 feelings and emotions to be part of what constituted these virtues. 33:20.640 --> 33:23.560 For Kant, we are leaving emotions to the side. 33:23.560 --> 33:26.440 That's all inclination stuff. 33:26.440 --> 33:30.600 Reason is supposed to command the will from a sense of duty. 33:30.600 --> 33:35.400 So it's not that reason says, hey, maybe you should do this. 33:35.400 --> 33:41.360 Reason says, you must do this, right, because this is what your duty is. 33:41.360 --> 33:46.720 And our task, right, the will must follow reason and ignore extraneous considerations 33:46.720 --> 33:48.800 that come at us from inclination, right. 33:48.800 --> 33:51.520 That's the larger picture. 33:51.520 --> 33:57.260 And the way in which it's a rational procedure, and here's where it starts to get a bit tricky 33:57.260 --> 34:02.960 and difficult, is this idea of universalizing our possible actions. 34:02.960 --> 34:07.880 In a way, you can think of this as the reverse of the Aristotelian move. 
34:07.880 --> 34:13.680 Remember, Aristotle was talking about, oh, we have to do things, each action has to be 34:13.680 --> 34:16.920 in the right way, with the right people, at the right time, for the right purpose. 34:16.920 --> 34:20.480 You have to get all these moral particulars right. 34:20.480 --> 34:22.200 Kant's idea is in a way the opposite. 34:22.200 --> 34:24.800 It's, look, you're in a situation. 34:24.800 --> 34:30.720 You have to universalize that, to generalize the situation away from the particular so 34:30.720 --> 34:35.480 that you can get up to the level of moral principle. 34:35.480 --> 34:42.840 So you want to move always away from the individual and the particular upwards to the more general, 34:42.840 --> 34:43.840 right. 34:43.840 --> 34:46.400 That's Kant's strategy. 34:46.400 --> 34:55.360 And so when we universalize, we rationalize by leaving behind particular features of the 34:55.360 --> 34:58.420 situation. 34:58.420 --> 35:05.640 And this is done, in Kant's language, through two steps. 35:05.640 --> 35:11.120 The first step is this idea that actions can be formulated as maxims. 35:11.120 --> 35:13.400 All right, maxims, what's a maxim? 35:13.400 --> 35:17.660 A maxim is just a rule I make for myself. 35:17.660 --> 35:22.160 So if you think, oh, I'm going to do an action here, I'm going to do an action here, well, 35:22.160 --> 35:27.600 I have it as a maxim for myself, then in these kinds of situations, this is how I act. 35:27.600 --> 35:33.360 Right, you can think that someone, you know, here's a very mundane example, not really 35:33.360 --> 35:36.200 a moral example, but maybe illustrative. 35:36.200 --> 35:38.800 Someone says, hey, do you want another beer? 35:38.800 --> 35:44.600 And you say, no, no, no, you know, on weekdays, I don't have more than three beers. 35:44.600 --> 35:47.880 That's just a rule for myself because the next day I'll be destroyed and won't be able 35:47.880 --> 35:48.880 to work.
35:48.880 --> 35:52.960 That's just a rule you have for yourself that you follow and then you apply in particular 35:52.960 --> 35:53.960 situations. 35:53.960 --> 35:56.700 That's a maxim. 35:56.700 --> 36:02.360 But these maxims then can be generalized even further into laws. 36:02.360 --> 36:05.320 Right, and here you have to be careful. 36:05.320 --> 36:07.960 We don't mean sort of legal objects. 36:07.960 --> 36:12.080 We're not talking about, you know, Norway is going to institute a ban 36:12.080 --> 36:17.560 on more than three beers on weekdays or something like that, but a rule that applies to everybody. 36:17.560 --> 36:21.740 Right, so a rule that applies to everybody, if you want to take your maxim and make it 36:21.740 --> 36:25.960 a law, you say, well, no one should have more than three beers on weekdays. 36:25.960 --> 36:30.760 And now you've gone from, no, I'm not going to have 36:30.760 --> 36:36.640 another beer in this situation to a maxim, which is more general, to a law or rule, which 36:36.640 --> 36:37.800 is even more general. 36:37.800 --> 36:38.800 Right. 36:38.800 --> 36:43.080 This is from the individual action to the set of actions. 36:43.080 --> 36:50.560 This is from the set of actions for you to the set of actions for everybody. 36:50.560 --> 36:54.640 So just to sort of again illustrate it. 36:54.640 --> 36:59.120 Right, so I'm in a situation and I consider doing something, right. 36:59.120 --> 37:15.080 Doing Y to get Z. Yes, you have a question. 37:15.080 --> 37:16.080 Yeah. 37:16.080 --> 37:20.720 So, I mean, in this case, I don't want to get too hung up on the details of the beer 37:20.720 --> 37:21.920 case. 37:21.920 --> 37:27.560 But one thing I will say, and I'll come back to this a bit later, is how you formulate the 37:27.560 --> 37:33.520 maxim will be important for how you formulate the law.
37:33.520 --> 37:38.560 So you're onto something interesting here, that it seems like in any different case, 37:38.560 --> 37:43.820 there seem to be different ways of formulating the action and then different ways of formulating 37:43.820 --> 37:44.820 the law. 37:44.820 --> 37:47.320 And that actually winds up being an important, indeed critical, point we'll come to 37:47.320 --> 37:49.740 in a while. 37:49.740 --> 37:54.700 But schematically anyway, you start with an action: in a situation X, I consider doing 37:54.700 --> 37:56.700 Y to get Z. 37:56.700 --> 37:59.480 And then you have these two possible principles. 37:59.480 --> 38:03.800 The subjective principle, the principle for me; the objective principle is the principle 38:03.800 --> 38:04.800 for everybody. 38:04.800 --> 38:15.240 So the maxim is, if it's a weekday, I never have a fourth beer when I'm offered. 38:15.240 --> 38:20.920 The rule: whenever anyone is in the situation where they're offered a beer on a weekday, 38:20.920 --> 38:22.080 they never have a fourth, right? 38:22.080 --> 38:29.960 So this is just a sort of schematic way of showing how you would go about in any individual 38:29.960 --> 38:37.880 instance thinking from an action you're considering doing to a maxim, which would be a general 38:37.880 --> 38:41.280 rule in that situation, to, again, a law. 38:41.280 --> 38:47.160 And you move up in generality, you're universalizing again from an individual action to a set of 38:47.160 --> 38:54.280 actions for yourself, from a set of actions for yourself to the set of actions for everyone. 38:54.280 --> 38:56.240 Okay. 38:56.240 --> 39:06.400 So we're looking for the moral law, Kant says, and he thinks it has a certain form.
39:06.400 --> 39:10.680 Now this takes us slightly to the side of what we were just talking about, 39:10.680 --> 39:16.600 but it's important because we're trying to figure out what the moral laws are and we're 39:16.600 --> 39:21.960 going to use this rationalizing, universalizing procedure, but we need to know what exactly 39:21.960 --> 39:23.440 we're looking for, right? 39:23.440 --> 39:26.840 What's the kind of thing that we're looking for when we're looking for the moral law? 39:26.840 --> 39:29.600 What is it going to look like when we get to the end? 39:29.600 --> 39:34.440 And this is his analysis, his conceptual analysis of the concept of law. 39:34.440 --> 39:41.000 And he says, a law by its nature commands, right? 39:41.000 --> 39:43.200 So a law tells us what to do. 39:43.200 --> 39:44.200 That's just what a law is. 39:44.200 --> 39:47.160 It tells us what to do and what not to do. 39:47.160 --> 39:53.080 And the form, the grammatical form we use when telling people what to do and not to 39:53.080 --> 39:55.880 do is the imperative, right? 39:55.880 --> 39:58.300 So the imperative form, do this, don't do that, right? 39:58.300 --> 40:00.800 Those are imperatives grammatically. 40:00.800 --> 40:06.960 So we're looking then, he thinks, for an imperative form for the moral law, but he thinks that there 40:06.960 --> 40:10.560 are two different kinds of imperatives. 40:10.560 --> 40:15.560 There's what he calls a hypothetical imperative and a categorical imperative. 40:15.560 --> 40:22.600 A hypothetical imperative is formulated as sort of if such and such is the case, then 40:22.600 --> 40:25.080 do X or don't do X. 40:25.080 --> 40:30.200 A categorical imperative is just do X, don't do X. 40:30.200 --> 40:35.720 And what he's trying to consider here then is which of these two formats is the moral 40:35.720 --> 40:36.720 law going to look like? 40:36.720 --> 40:41.620 Is it going to be an if-then or is it going to be categorical?
40:41.620 --> 40:45.320 Now the hypothetical, there are two kinds of hypothetical imperatives. 40:45.320 --> 40:49.680 I'm only showing this because I think it helps understand the categorical imperative, also 40:49.680 --> 40:54.260 helps see another difference with Aristotle. 40:54.260 --> 40:58.400 So there are hypothetical imperatives of skill. 40:58.400 --> 41:03.820 If you want to be good at tennis, you should practice three times a week. 41:03.820 --> 41:10.900 So practice three times a week is an imperative, but it's hypothetical, right? 41:10.900 --> 41:15.580 If you want to be good at tennis, now probably in this room, there are not many people who 41:15.580 --> 41:18.400 want to be good at tennis, maybe a couple. 41:18.400 --> 41:21.540 And so for them, the imperative works. 41:21.540 --> 41:25.140 That is to say, for those of you who want to be good at tennis, you really ought to 41:25.140 --> 41:27.140 practice three times a week. 41:27.140 --> 41:30.880 But for those of you who don't, then it doesn't matter. 41:30.880 --> 41:36.820 So it's irrelevant for you because you're not interested in the skill. 41:36.820 --> 41:43.220 Imperatives of prudence are important for everyone because imperatives of prudence have 41:43.220 --> 41:44.580 the following form. 41:44.580 --> 41:47.560 If you want happiness, and everyone does, right? 41:47.560 --> 41:49.480 Kant agrees with that, everyone wants to be happy. 41:49.480 --> 41:52.960 This is like the sum of our inclinations for Kant. 41:52.960 --> 41:56.000 Then you should do these sorts of things. 41:56.000 --> 42:03.420 And so there's, if you want to be happy, find someone you can laugh with. 42:03.420 --> 42:05.100 If you want to be happy, find a good job. 42:05.100 --> 42:07.660 You get all sorts of advice you can give. 42:07.660 --> 42:11.340 Aristotle said, if you want to be happy, you have to develop the virtues. 42:11.340 --> 42:14.500 For Kant, that amounts to: if you want to be happy, develop the virtues.
42:14.500 --> 42:21.360 This idea that virtue is necessary for happiness, that's an imperative of prudence. 42:21.360 --> 42:23.300 That's one of these things. 42:23.300 --> 42:31.600 That's not a real moral law because Kant thinks that the only thing that the moral law says 42:31.600 --> 42:34.540 is do this, don't do that. 42:34.540 --> 42:36.700 So there are no conditions. 42:36.700 --> 42:43.860 There's no connection to happiness, and there's no connection to any desires we might have. 42:43.860 --> 42:51.740 Any desires we might have, that could tell us to do all sorts of things or not, but it 42:51.740 --> 42:53.860 has nothing to do with morality. 42:53.860 --> 42:55.740 We want to be happy. 42:55.740 --> 42:59.180 There are certain things we can do or not do to make us happy. 42:59.180 --> 43:01.420 That doesn't have anything to do with morality. 43:01.420 --> 43:04.960 Morality is just telling us what to do and what not to do. 43:04.960 --> 43:10.420 So its form, what it will look like, is an imperative and a categorical imperative, not 43:10.420 --> 43:11.980 a hypothetical imperative. 43:11.980 --> 43:18.700 So I'm just going to say a little bit more about the categorical imperative itself. 43:18.700 --> 43:22.580 That is the actual procedure, and then we'll take a break. 43:22.580 --> 43:26.980 So the categorical imperative is Kant's decision procedure. 43:26.980 --> 43:33.500 It is the method whereby he tells us how to figure out what the rules are and what's permitted 43:33.500 --> 43:35.420 and what's not permitted. 43:35.420 --> 43:36.420 There are three formulas. 43:36.420 --> 43:39.340 We're going to be talking about two in this lecture. 43:39.340 --> 43:42.620 The formula of universal law, this is the most difficult one. 43:42.620 --> 43:47.700 This is the one where you turn maxims into laws and then you test them. 43:47.700 --> 43:50.660 Then there's the formula of humanity. 
43:50.660 --> 43:55.500 And this is the one that takes into account front and center the inherent dignity of all 43:55.500 --> 43:59.900 rational agents and uses that as a way to test actions. 43:59.900 --> 44:05.180 And then the formula of the kingdom of ends, which is not part of our reading, which we 44:05.180 --> 44:10.220 won't talk about today, but which asks us to imagine living in a world in which everyone 44:10.220 --> 44:12.260 else is also acting perfectly morally. 44:12.260 --> 44:16.500 So it's a different kind of method. 44:16.500 --> 44:24.340 The formula of universal law, which we will focus on right after the break, is this formulation. 44:24.340 --> 44:31.340 Act only in accordance with that maxim, so the rule for you, through which you can at 44:31.340 --> 44:36.000 the same time will that it could be a universal law. 44:36.000 --> 44:44.380 So the test then is to take the maxim on the basis of your action and see whether you would 44:44.380 --> 44:46.320 will it as a universal law. 44:46.320 --> 44:53.180 And this whether you would will it as a universal law bit, this idea is an imaginative procedure. 44:53.180 --> 44:59.280 You're meant to imagine a world in which everyone acts on the basis of your maxim. 44:59.280 --> 45:02.840 And then you're going to test it, but you're not going to test it for whether you like 45:02.840 --> 45:06.080 it or whether you think it seems cool or interesting. 45:06.080 --> 45:11.780 You're going to test it for whether it contains a contradiction, a logical contradiction. 45:11.780 --> 45:17.100 And this is where it gets very tricky and difficult. 45:17.100 --> 45:18.720 But that I'm not going to start now. 45:18.720 --> 45:29.780 Let's just take our break now and start up again at a quarter after. 45:29.780 --> 45:54.160 Okay, so we ended with the formula of universal law, act in accordance with that maxim through 45:54.160 --> 45:58.700 which you can at the same time will that it become a universal law. 
45:58.700 --> 46:01.520 So how do we do this? 46:01.520 --> 46:05.880 So we consider an action just as we saw, right, when I'm in this situation, I'm going to do 46:05.880 --> 46:08.560 this action in order to achieve that. 46:08.560 --> 46:13.520 You formulate a maxim, you formulate a universal law. 46:13.520 --> 46:17.460 And then you imagine a world in which everyone always follows the law. 46:17.460 --> 46:23.620 This is the sort of the difficult part in a way of the whole procedure. 46:23.620 --> 46:28.080 I mean, you can do the schematic thing where you create the maxim and you create the law. 46:28.080 --> 46:33.080 Now we're asked to sort of imagine a world in which everyone always, as a matter of fact, 46:33.080 --> 46:35.220 acts on this maxim. 46:35.220 --> 46:39.740 And then you have to ask, is there a contradiction in conception? 46:39.740 --> 46:44.280 And there are two ways of thinking about how to understand 46:44.280 --> 46:45.540 this question. 46:45.540 --> 46:52.840 One way is: does it make any sense anymore to act in the way that the maxim I'm considering 46:52.840 --> 46:54.800 proposes? 46:54.800 --> 46:55.800 That's the contradiction. 46:55.800 --> 46:58.960 Does it make any sense? 46:58.960 --> 47:04.420 Another way of asking this question is, is there any purpose or point to acting on the 47:04.420 --> 47:10.360 basis of my maxim in the world in which everyone is already acting in this way? 47:10.360 --> 47:16.440 And I'll come to a little bit more detail about how exactly you do that with 47:16.440 --> 47:18.480 an example. 47:18.480 --> 47:24.520 But the point there is that if there's a contradiction in conception, if the world is unimaginable, 47:24.520 --> 47:31.240 or if it makes no sense to act in this way in that world, then, and so here's 47:31.240 --> 47:36.600 the trick, the opposite of what you're considering is a perfect duty.
47:36.600 --> 47:41.760 And a perfect duty is a duty sort of not to do something. 47:41.760 --> 47:48.380 We'll talk, again, I'll talk a little bit about how that works in a second. 47:48.380 --> 47:54.680 So if it turns out that you can't imagine the world, or the world somehow undermines 47:54.680 --> 48:03.920 itself, the world, the imagined world based on your maxim, then the action you're considering, 48:03.920 --> 48:06.760 you have a duty not to do. 48:06.760 --> 48:14.120 Okay, but if there is no contradiction in conception, you have to ask then, is there 48:14.120 --> 48:17.040 a contradiction in the will? 48:17.040 --> 48:23.120 This is a much trickier notion than the already very tricky contradiction in conception. 48:23.120 --> 48:29.120 And here the idea is, does the will restrict its own purposes? 48:29.120 --> 48:30.360 That's a contradiction in the will. 48:30.360 --> 48:35.280 A contradiction in the will is where the will is somehow working against itself. 48:35.280 --> 48:39.700 And again, I'll try and explain this with an example. 48:39.700 --> 48:45.880 But if there's a contradiction in the will, then again, the opposite of what you're considering 48:45.880 --> 48:49.440 is an imperfect duty. 48:49.440 --> 48:55.200 And again, I'll come in a second to the difference between a perfect and an imperfect duty. 48:55.200 --> 49:02.280 But if the answer is no, and this is also very, another sort of tricky point where students 49:02.280 --> 49:05.040 have historically gone awry. 49:05.040 --> 49:11.040 So if you consider an action, and there is no contradiction in conception, there is no 49:11.040 --> 49:16.480 contradiction in the will, does it mean that your action is moral? 49:16.480 --> 49:17.480 No. 49:17.480 --> 49:21.120 It just means that your action is permissible. 
49:21.120 --> 49:25.680 It means that you can do the action, that there's nothing, morality does not speak against 49:25.680 --> 49:31.560 the action you're considering, because the action you're considering does not violate 49:31.560 --> 49:34.760 any duties that you have. 49:34.760 --> 49:43.200 All right, if you're considering watching Game of Thrones instead of watching The Wire 49:43.200 --> 49:48.760 this evening, just to give a completely random example, I can guarantee you that it doesn't 49:48.760 --> 49:52.120 generate any contradiction in conception. 49:52.120 --> 49:54.620 It doesn't generate a contradiction in the will. 49:54.620 --> 49:57.580 So it will turn out that it's permissible for you to do that. 49:57.580 --> 50:01.880 That doesn't mean that it's moral for you to do it, or that it's right, or that Kant 50:01.880 --> 50:02.880 approves of it. 50:02.880 --> 50:06.240 It only means that the action is permissible, and again, that only means that it doesn't 50:06.240 --> 50:09.280 violate any duties that you have. 50:09.280 --> 50:15.760 So lots of actions will just sail through and then end up being morally permissible, 50:15.760 --> 50:20.960 and again, all that means is that morality doesn't speak against it. 50:20.960 --> 50:25.360 So if you have some inclination and morality doesn't speak against it, you can pursue that 50:25.360 --> 50:29.920 inclination, like for example, watching Game of Thrones. 50:29.920 --> 50:35.120 Okay, so let's say a bit more about, and I'll come back to that schema with an example, 50:35.120 --> 50:40.320 so if you're still a little bit unsure about how that works. 50:40.320 --> 50:43.800 So contradiction in conception versus a contradiction in the will. 50:43.800 --> 50:47.640 All right, remember, we're looking for a rational procedure according to Kant, so it's important 50:47.640 --> 50:50.120 that this is a rational process.
50:50.120 --> 50:57.600 So contradiction in conception, you can't imagine the world with the maxim as law, and 50:57.600 --> 51:04.640 again, how that works out in an individual case, I want to just hold off on for a second, 51:04.640 --> 51:12.160 but the idea there then is if you can't imagine, right, if the purpose of the maxim is undermined 51:12.160 --> 51:19.320 in such a world, then you have this thing called a perfect duty, and the contradiction 51:19.320 --> 51:26.360 in the will, the will contradicts itself, and it does so by restricting its own possible 51:26.360 --> 51:27.360 purposes. 51:27.360 --> 51:36.760 So I'll just give another, just a brief example to get some of this going a little bit more 51:36.760 --> 51:37.760 clearly. 51:37.760 --> 51:42.360 So a contradiction in conception, so we've already seen that certain kinds of things 51:42.360 --> 51:52.000 are not going to be allowed, and one way to think about a contradiction in conception 51:52.000 --> 52:00.760 is if your maxim is trying to make an exception of yourself, right, when you universalize 52:00.760 --> 52:07.600 that, it's never going to work, right, if you want to gain some advantage over people, 52:07.600 --> 52:13.440 exploit them, use them in certain ways, right, if once you make that into a law that everyone 52:13.440 --> 52:18.800 does this, your strategy for exploiting people will no longer function. 52:18.800 --> 52:24.440 So that's just one way to think about the kind of thing that the universalizing move 52:24.440 --> 52:30.320 does is it blocks cases where we're trying to act on a maxim which takes advantage of 52:30.320 --> 52:32.680 other people. 
52:32.680 --> 52:36.840 The contradiction in the will, maybe I'll just give a concrete example for this, right, 52:36.840 --> 52:43.040 so he says imagine someone who's considering not developing their talents, so imagine, 52:43.040 --> 52:51.640 you know, Nadal at 18 thinking, ah, tennis is a lot of hard work, maybe I'll just hang 52:51.640 --> 52:57.280 out in Majorca, it's beautiful here, I'll just get a job somewhere around here. 52:57.280 --> 53:02.280 All right, so he's got this magnificent talent, he's imagining not developing the talents, 53:02.280 --> 53:08.400 and this doesn't generate a contradiction in conception because we can imagine a world 53:08.400 --> 53:14.200 in which people fail to develop their talents, the maxim's purpose isn't undermined 53:14.200 --> 53:19.800 or anything like that, but the will restricts itself, he thinks, in that in the world in which 53:19.800 --> 53:25.320 we don't develop our talents, what we do is we restrict the 53:25.320 --> 53:30.180 kinds of things that we can do, right, so it's the will deciding to restrict the will, 53:30.180 --> 53:34.160 that's the contradiction in the will, so if you don't develop your talents, what you're 53:34.160 --> 53:38.840 doing is you're restricting the things that you're able to do and that's what a contradiction 53:38.840 --> 53:40.840 in the will is. 53:40.840 --> 53:45.800 Okay, so now what are these perfect and imperfect duties?
53:45.800 --> 53:50.460 So the perfect duty is what you get from a contradiction in conception and those are 53:50.460 --> 53:55.400 always negative duties, right, and remember, all of 53:55.400 --> 53:59.920 our duties are going to be imperatives and they're all going to be categorical imperatives, 53:59.920 --> 54:07.080 and a negative duty is a duty not to do something, so don't do X, right, don't kill, right, don't 54:07.080 --> 54:12.960 murder is a perfect duty and it's perfect, here's why it's perfect, because it can be 54:12.960 --> 54:20.180 satisfied at all times, right, and what that means is that you can go around literally 54:20.180 --> 54:25.520 always not murdering people, you can go your whole life not murdering people, right, when 54:25.520 --> 54:30.000 you're sleeping you're not murdering people, you know, 54:30.000 --> 54:35.440 all of you, I hope, in fact perfectly satisfy the duty not to murder 54:35.440 --> 54:41.200 anyone, right, and negative duties are of this sort, right, a negative duty you can 54:41.200 --> 54:47.600 always abstain from, so you can satisfy them perfectly, right, 54:47.600 --> 54:51.520 and in these cases there's no discretion, you can't say, oh well, you 54:51.520 --> 54:56.060 know, maybe this one case, no, no, no, it's a perfect duty, you can satisfy it at all 54:56.060 --> 55:01.440 times and then you also should satisfy it at all times, imperfect duties, these are 55:01.440 --> 55:07.160 the ones where the will restricts itself, right, and these generate positive duties, 55:07.160 --> 55:13.920 and notice the example of not developing your talents, right, well the opposite of not developing 55:13.920 --> 55:19.960 your talents is developing your talents, right, and he has the same kind of case, imagine you're considering 55:19.960 --> 55:25.000 just not helping other people, right,
if you imagine the world in which everyone 55:25.000 --> 55:28.280 doesn't help other people, you restrict your purposes because you're going to need help 55:28.280 --> 55:34.200 sometimes too, so that's the will contradicting itself, so we have an imperfect duty to help 55:34.200 --> 55:40.600 others, it's imperfect not because it's less of a duty or a lesser duty in some 55:40.600 --> 55:46.760 way, it's imperfect because it can't be satisfied at all times, so again, these two examples, 55:46.760 --> 55:52.400 helping others and developing your talents, they're both imperfect duties, maybe if you're 55:52.400 --> 55:56.840 lucky you can satisfy them simultaneously somehow, but you're not going to be able to 55:56.840 --> 56:01.440 satisfy them when you're asleep, for example, or when you're doing other things, right, 56:01.440 --> 56:07.280 so an imperfect duty is a positive duty that can't be satisfied at all times, that's why 56:07.280 --> 56:12.480 it's imperfect, again, not because you have a choice about whether to satisfy the 56:12.480 --> 56:18.600 duty; you do, however, have some discretion in how you satisfy the duty, so if you have 56:18.600 --> 56:25.080 a duty to help others, that doesn't yet tell you which others to help or how exactly to 56:25.080 --> 56:32.480 help them, right, so you have discretion in what you're going to do in order to help others 56:32.480 --> 56:36.980 and which others you're going to help, right, the categorical imperative doesn't tell you 56:36.980 --> 56:51.120 that you have to help Syrian refugees as opposed to refugees from some other country, or many, 56:51.120 --> 56:55.600 many different groups of people that one might help, or animals, right, it doesn't tell you 56:55.600 --> 56:59.640 which of these to choose, it tells you that you have to help others, well, I guess in 56:59.640 --> 57:08.440 this case animals are out because the duty was helping other humans, okay, so this process then, 57:08.440 -->
57:14.120 right, this universalizing, imagining and then looking for contradictions is going to 57:14.120 --> 57:24.080 generate then moral laws, both of these kinds of moral laws, now I want to give you an example 57:24.080 --> 57:30.240 just to show you how this works in action, so should I lie in order to get out of some 57:30.240 --> 57:34.140 difficulty, it doesn't matter what the nature of the difficulty is, just imagine you're 57:34.140 --> 57:38.000 in a situation where you think, well, if I just lie, I will get out of it, it won't be 57:38.000 --> 57:44.480 embarrassing, it'll be fine, and imagine, should I lie, so you create the maxim, whenever 57:44.480 --> 57:47.960 I'm in a difficulty, I lie in order to get out of it, so this is the, you know, this 57:47.960 --> 57:54.400 is like the no four beers on a weekday, you just have a maxim for yourself, I lie to get 57:54.400 --> 57:59.740 out of difficulties, this is just how I do it, that's the action, right, is a result 57:59.740 --> 58:04.480 of having this maxim, and now we formulate the law, right, this is the law, whenever 58:04.480 --> 58:09.000 anyone's in difficulties, they lie in order to get out of it, right, so I went from my 58:09.000 --> 58:14.960 action to the generalized action for me, and then the generalized action for everybody, 58:14.960 --> 58:20.500 now we're at the law, now we're supposed to imagine a world in which everyone lies in 58:20.500 --> 58:24.840 order to get out of difficulties, right, as a matter of fact, everyone is always doing 58:24.840 --> 58:31.620 this, this is just how people deal with getting out of difficulties in the world that I imagine 58:31.620 --> 58:39.220 on the basis of my maxim, and so the question is, does it make sense in the world in which 58:39.220 --> 58:47.420 everyone lies to get out of difficulties to lie in order to get out of difficulties? 
58:47.420 --> 58:53.040 And the answer, Kant thinks, is no, and the reason it doesn't make sense is because in 58:53.040 --> 58:56.760 the world in which everyone is always lying to get out of difficulties, no one is going 58:56.760 --> 59:01.400 to believe you, and since no one is going to believe you, you will never get out of 59:01.400 --> 59:08.120 difficulties via the method of lying, right, in the world in which everyone lies in order 59:08.120 --> 59:17.040 to get out of difficulties, no one actually gets out of difficulties by lying, because 59:17.040 --> 59:23.780 again, what I said earlier, getting out of difficulties in this way depends on you making 59:23.780 --> 59:31.880 an exception for yourself, right, everyone else has to assume you're telling the truth 59:31.880 --> 59:38.100 in order for a lie to work, but if everyone is a liar, and everyone knows that everyone 59:38.100 --> 59:46.560 is a liar, no one is going to believe what anyone says, so we have now generated a perfect 59:46.560 --> 59:54.080 duty never to lie. 
59:54.080 --> 01:00:04.680 Now if I say, should I lie in order to save another person's life, whenever I can save 01:00:04.680 --> 01:00:08.620 another person's life, I lie in order to do it, whenever anyone can save another person's 01:00:08.620 --> 01:00:12.980 life, they lie in order to do it, and in the world in which everyone is always lying in order 01:00:12.980 --> 01:00:21.600 to save other people's lives, lying to save other people's lives doesn't work or make 01:00:21.600 --> 01:00:31.080 sense, right, so in the world in which everyone is always lying in some particular situation, 01:00:31.080 --> 01:00:36.360 lying in that situation is undermined, because everyone knows that this is the way things 01:00:36.360 --> 01:00:42.960 are. So notice that this perfect duty never to lie is unrestricted, it doesn't say we have a 01:00:42.960 --> 01:00:47.260 perfect duty never to lie in order to get out of some difficulty, it says we have a 01:00:47.260 --> 01:00:52.220 perfect duty never to lie, including in cases where we might want to save another person's 01:00:52.220 --> 01:01:03.440 life. So this is the way, in one example, that the categorical imperative goes about generating 01:01:03.440 --> 01:01:10.600 a contradiction in conception. I don't have time to go through a contradiction in the 01:01:10.600 --> 01:01:18.160 will example, but you can take this as something to do at home: try the 01:01:18.160 --> 01:01:23.960 maxim of not helping anyone else and see how you get to a contradiction in the will. Just 01:01:23.960 --> 01:01:28.980 to tell you in advance, you would get through the conception stage unscathed, right, you wouldn't get a contradiction 01:01:28.980 --> 01:01:31.240 in conception according to Kant.
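Since the lecture presents the categorical imperative as a decision procedure, here is a small sketch, purely as an illustrative aside, of that procedure's structure in Python. Everything in it (the function names, the maxims, and the hand-supplied verdicts about whether an action still works once universalized) is a hypothetical stand-in; the philosophical work of judging universalizability obviously cannot be automated.

```python
# Toy model of the *shape* of the universalization test:
# maxim -> universal law -> check for a contradiction in conception.

def contradiction_in_conception(works_when_universal: bool) -> bool:
    """A maxim yields a contradiction in conception if, in the imagined
    world where everyone acts on it, the action can no longer achieve
    its own purpose."""
    return not works_when_universal

def classify_maxim(maxim: str, works_when_universal: bool) -> str:
    """Classify a maxim by the first (conception) stage of the test."""
    if contradiction_in_conception(works_when_universal):
        # Failing here generates a perfect (negative) duty: never act on it.
        return f"perfect duty: never {maxim}"
    # Passing here only defers to the second stage, the contradiction in the will.
    return f"passes conception stage (still to check the will): {maxim}"

# Lying to get out of difficulties: once everyone lies, no one is believed,
# so lying no longer gets anyone out of difficulties.
print(classify_maxim("lie to get out of difficulties", works_when_universal=False))
# Never helping anyone passes this first stage (per the lecture) but would
# then fail at the contradiction-in-the-will stage, which isn't modeled here.
print(classify_maxim("never help anyone else", works_when_universal=True))
```

The two-stage structure is the point of the sketch: a failure at the conception stage yields a perfect duty, while a maxim that survives it must still face the contradiction-in-the-will test.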
01:01:31.240 --> 01:01:37.000 Okay, but let's move over to the formula of humanity, the formula of humanity is mercifully 01:01:37.000 --> 01:01:44.120 much easier to use than the formula of universal law, it doesn't involve any fancy universalization 01:01:44.120 --> 01:01:50.620 moves and it doesn't involve too many complicated concepts, though it does involve a couple 01:01:50.620 --> 01:01:55.140 that I want to get on the table, right, so the distinction between having absolute 01:01:55.140 --> 01:02:01.080 value and having relative value, right, so rational agency has absolute value, other 01:02:01.080 --> 01:02:07.240 things have value because we value them, right, their value is relative to us, money, why 01:02:07.240 --> 01:02:08.460 does money have value? 01:02:08.460 --> 01:02:11.200 Because humans value it, gold, why does gold have value? 01:02:11.200 --> 01:02:14.200 Because humans value it, iPhones, why do iPhones have value? 01:02:14.200 --> 01:02:20.280 Because humans value them, everything other than us has relative value, we have absolute 01:02:20.280 --> 01:02:25.840 value, and this maps onto the distinction between a person, right, that's a rational 01:02:25.840 --> 01:02:32.300 agent, and a thing, right, things only have relative value, persons have 01:02:32.300 --> 01:02:39.360 absolute value, and this is what Kant means when he says that a human being, a rational 01:02:39.360 --> 01:02:50.160 agent, is an end in himself, that means that they are something that possesses value in 01:02:50.160 --> 01:02:57.840 themselves, right, a human being possesses value in him or herself, everything else can 01:02:57.840 --> 01:03:06.800 be used as a means, but you can't use human beings as mere means, because that is a failure to recognize 01:03:06.800 --> 01:03:13.320 that each individual is an end in him or herself, and just to connect this even 01:03:13.320 --> 01:03:17.800 more to rationality, the reason you're an end in
yourself as a human being is because as 01:03:17.800 --> 01:03:25.920 a rational agent what you can do for yourself is set ends or goals, so you are a source 01:03:25.920 --> 01:03:34.300 of value, you are a source of goals, you can't be used as a mere means, and again this maps 01:03:34.300 --> 01:03:38.640 onto another distinction I started with, this notion of dignity: human beings, 01:03:38.640 --> 01:03:43.860 rational agents, have dignity, they cannot be bought and sold, everything else can be 01:03:43.860 --> 01:03:50.880 bought and sold, and it's a mistake, right, to buy and sell human beings, and this 01:03:50.880 --> 01:03:56.080 maps onto another distinction, the distinction between intrinsic worth, having value because 01:03:56.080 --> 01:04:01.200 of the kind of thing you are, and extrinsic worth, which means having value because of 01:04:01.200 --> 01:04:07.440 your relationship to some other thing, okay, so what is the formula of humanity? 01:04:07.440 --> 01:04:13.840 Act in such a way that you use humanity, whether in your own person or that of another, always 01:04:13.840 --> 01:04:21.020 at the same time as an end and never merely as a means, so several important things to 01:04:21.020 --> 01:04:29.240 notice here, first this applies not just to how you treat other people but to how you 01:04:29.240 --> 01:04:35.120 treat yourself, right, so use humanity whether in your own person or that of another, so 01:04:35.120 --> 01:04:42.600 you have to treat yourself in a way that's not as a mere means. Always at the same time 01:04:42.600 --> 01:04:48.800 as an end: to treat someone else as an end is to respect their rational agency, right, 01:04:48.800 --> 01:04:57.320 respect their dignity as a person. And never merely as a means: now one tricky thing here 01:04:57.320 --> 01:05:04.460 is that of course there are all these situations in which we seem to use people as a means, 01:05:04.460 --> 01:05:11.180 so for example I can ask you for
example what time it is, what time is it? 01:05:11.180 --> 01:05:16.700 It's 11.44, thank you very much, I have just used you as a means but I have not used you 01:05:16.700 --> 01:05:23.440 as a mere means, because I respected your rational agency by asking you a question, right, I 01:05:23.440 --> 01:05:27.800 asked you a question and then you could have just not responded, you could have told me 01:05:27.800 --> 01:05:33.640 I don't know, you could have said leave me alone, I'm taking notes, but you didn't, you 01:05:33.640 --> 01:05:39.720 answered my question, I thanked you for it, that is me treating you as an end while I'm 01:05:39.720 --> 01:05:46.480 also using you as a means, right, so that's okay, what's not okay is if I walked over 01:05:46.480 --> 01:05:50.360 to you, grabbed your computer, looked at it and gave it back, right, that would be treating 01:05:50.360 --> 01:05:58.240 you as a mere means and that would be a different sort of thing. And this formula I think 01:05:58.240 --> 01:06:05.560 is easier to apply in a lot of cases, so slavery is a very clear 01:06:05.560 --> 01:06:11.760 case: owning other human beings violates the formula of humanity, it is by definition using 01:06:11.760 --> 01:06:19.320 someone as a mere means. Working for pay seems more like the example I just gave you, we 01:06:19.320 --> 01:06:25.920 enter into a sort of contract where I, you know, allow myself to be told what to do in 01:06:25.920 --> 01:06:31.640 exchange for money, right, and if I don't like this anymore I can quit, so working for 01:06:31.640 --> 01:06:42.420 pay seems permissible. Lying again doesn't seem permissible, because if I lie to you then 01:06:42.420 --> 01:06:49.440 I'm using you as a means, I am trying to propagate false information, I'm trying to fool you, 01:06:49.440 --> 01:06:57.480 I'm trying to get you to do something, right, so lying is the kind of thing that is in fact 01:06:57.480 --> 01:07:03.080 a way of
using others as a means, and one reason I take the lying case here is because it turns 01:07:03.080 --> 01:07:10.840 out that no matter what formula you use, you come to the same set of rules, so all of the 01:07:10.840 --> 01:07:18.120 rules of morality can be generated by any of these formulas. The formula of humanity 01:07:18.120 --> 01:07:26.320 foregrounds the inherent value and dignity of rational agency, whereas the formula of universal 01:07:26.320 --> 01:07:35.160 law foregrounds reason's ability to universalize towards general principles, 01:07:35.160 --> 01:07:39.960 so they're both foregrounding something about reason, but something slightly different about 01:07:39.960 --> 01:07:53.480 reason. So I think I'm going to leave that because I want to say a few words about Maria 01:07:53.480 --> 01:08:02.800 von Herbert before we move on to the Velleman article. If you're still sort of wondering 01:08:02.800 --> 01:08:08.480 about some things about Kant, and I'm sure that some of you are, again I recommend that 01:08:08.480 --> 01:08:14.580 you read the text over, maybe listen to this lecture one more time, and you can of course send 01:08:14.580 --> 01:08:21.400 me, or the several experts we have on Kant, some questions if you have them as well. Maria 01:08:21.400 --> 01:08:28.000 von Herbert is an interesting case, this is a true story, she was someone who was alive 01:08:28.000 --> 01:08:35.280 in Kant's time, who had a famous letter exchange with him, she was in despair and looked to 01:08:35.280 --> 01:08:41.680 Kant and his theory for a certain kind of solace but she didn't find it there, she has read 01:08:41.680 --> 01:08:51.280 the metaphysics of morals including the categorical imperative and it doesn't help a bit. The 01:08:51.280 --> 01:08:59.160 reason she says it doesn't help a bit is because she finds it too easy, which is an interesting perspective, 01:08:59.160 --> 01:09:06.480 right, so she thinks the commandments of morality are too
trifling and too easy to satisfy. And 01:09:06.480 --> 01:09:14.160 you know, on Kant's theory there should be some tension in us because of our dual nature, 01:09:14.160 --> 01:09:20.000 we should be pulled by inclination, but it doesn't seem like Maria von Herbert 01:09:20.000 --> 01:09:25.160 has countervailing inclinations that she's supposed to fight 01:09:25.160 --> 01:09:32.240 against, at least that's how she presents herself, and that should put her in an ideal ethical 01:09:32.240 --> 01:09:36.080 position, right, you might think, oh well this is great, this is perfect, you have someone 01:09:36.080 --> 01:09:43.520 who's just able to follow the dictates of rationality and is never in a position where they 01:09:43.520 --> 01:09:50.120 will not follow what morality says, but instead 01:09:50.120 --> 01:09:54.920 it puts her in a position of meaninglessness, she thinks, I live according to this theory, 01:09:54.920 --> 01:10:02.240 I'm in fact in some ways ideal with respect to this theory, but I feel totally empty. So 01:10:02.240 --> 01:10:10.040 the question then is: is it a weakness of Kant's theory that it does not, maybe cannot, provide 01:10:10.040 --> 01:10:16.760 anything like solace or meaning to its adherents? 01:10:16.760 --> 01:10:22.280 Is a Kantian moral saint like Herbert seems to be doomed to a life of empty rule following?
01:10:22.280 --> 01:10:29.560 This is the kind of challenge to Kant's theory that we see, and the question is, well, is it 01:10:29.560 --> 01:10:35.300 a good challenge to the theory? Some might say yes, it shows a weakness, something lacking 01:10:35.300 --> 01:10:41.300 in the sort of human aspect of Kant's theory, that it doesn't do anything in the direction 01:10:41.300 --> 01:10:49.160 of meaning, while others can say something like, well no, this isn't a theory about meaning, 01:10:49.160 --> 01:10:53.720 this isn't like Aristotle telling us how to live our lives and be happy, this is just 01:10:53.720 --> 01:10:58.920 telling us what we're allowed to do and what we're demanded to do by morality, you know, 01:10:58.920 --> 01:11:02.760 happiness, meaning, that's all on the inclination side, that's 01:11:02.760 --> 01:11:05.540 not what my theory is doing. 01:11:05.540 --> 01:11:11.020 So that's the kind of debate about, you know, what is the scope and power of Kant's 01:11:11.020 --> 01:11:18.020 theory and is it meant to do the kind of thing that Maria von Herbert sees it failing to 01:11:18.020 --> 01:11:23.720 do, and that's a discussion that one can have. 01:11:23.720 --> 01:11:29.200 Okay, so what is going on in Velleman's article? 01:11:29.200 --> 01:11:36.880 So Velleman's article is an application of Kantian ethical principles to the case of 01:11:36.880 --> 01:11:44.480 assisted suicide, and what he's trying to do is use Kantian deontological notions to 01:11:44.480 --> 01:11:49.580 make an argument about the permissibility of assisted suicide that's going to take up 01:11:49.580 --> 01:11:54.140 a number of the notions that we just talked about, in particular the formula 01:11:54.140 --> 01:11:55.920 of humanity.
01:11:55.920 --> 01:12:05.720 So the question that he asks: is there a right for a person to, quote, live and die in light 01:12:05.720 --> 01:12:13.320 of his own convictions about why his life is valuable and where its value lies? 01:12:13.320 --> 01:12:18.760 Right, so there's one question you might ask about whether someone has a right to live 01:12:18.760 --> 01:12:21.960 in light of his own convictions about why his life is valuable and where its value lies, 01:12:21.960 --> 01:12:27.000 and maybe another question about whether someone has a right to die in light of his own convictions, 01:12:27.000 --> 01:12:31.360 and the assisted suicide question is obviously going to be focusing 01:12:31.360 --> 01:12:34.160 on the die part of it. 01:12:34.160 --> 01:12:41.880 So he says that 01:12:41.880 --> 01:12:50.960 you can generate a broad moral principle that affirms this right on the basis of two prior principles. 01:12:50.960 --> 01:12:57.720 So this is principle number one he considers: a person has the right to make her own life 01:12:57.720 --> 01:13:02.080 shorter in order to make it better. 01:13:02.080 --> 01:13:10.200 So someone who is looking at assisted suicide is trying to make their overall life better 01:13:10.200 --> 01:13:16.760 by avoiding some unpleasant bit of it that might happen in the future, so you cut it short 01:13:16.760 --> 01:13:20.840 in order to make it overall better. 01:13:20.840 --> 01:13:26.800 So the principle here is a person has that right, that is, to make life shorter if doing 01:13:26.800 --> 01:13:32.880 so is a necessary means or consequence of making it a better life on the whole for her.
01:13:32.880 --> 01:13:37.280 And again you just have to think of someone who is looking ahead to some part of their life 01:13:37.280 --> 01:13:41.240 in the future that's going to be horrible, they decide they want to make it shorter and 01:13:41.240 --> 01:13:48.040 avoid having that as part of their life, and then their life overall is better. 01:13:48.040 --> 01:13:53.880 The second principle: there is a presumption in favor of deferring to a person's judgment 01:13:53.880 --> 01:13:57.880 on the subject of her own good. 01:13:57.880 --> 01:14:02.620 So when people say it's good for me to do this, it's good for me to do that, we don't 01:14:02.620 --> 01:14:07.640 have to always agree with them, but there's a presumption in favor of deferring to them 01:14:07.640 --> 01:14:09.860 on their own judgments. 01:14:09.860 --> 01:14:14.520 So someone who says, oh it's good to smoke, and you say, no it's not really good to smoke, 01:14:14.520 --> 01:14:19.800 you don't thereby attain the right to stop them from smoking, you can argue with 01:14:19.800 --> 01:14:24.520 them but you have to defer to their judgment, you have to allow them to do what they want 01:14:24.520 --> 01:14:29.840 to do and live their life. 01:14:29.840 --> 01:14:35.000 But if you take these two principles together, they imply that a person has the right to 01:14:35.000 --> 01:14:39.000 live and die by her own convictions about whether continued life would be better for 01:14:39.000 --> 01:14:40.000 her. 01:14:40.000 --> 01:14:48.560 So this question that's being asked, Velleman thinks, is answered in the affirmative if 01:14:48.560 --> 01:14:54.080 you assert these two prior principles, that you have the right to make your own life shorter 01:14:54.080 --> 01:14:57.920 and that there's a presumption of deferring to a person's judgment on the subject of their 01:14:57.920 --> 01:14:58.920 own good.
01:14:58.920 --> 01:15:04.680 Just to make it even clearer, when you say a person has the right to make 01:15:04.680 --> 01:15:09.200 her own life shorter in order to make it better, that's a judgment about the goodness of the 01:15:09.200 --> 01:15:13.720 life, right, the subject of her own good: when 01:15:13.720 --> 01:15:19.280 you're making it better, you're making a judgment about its goodness or badness, right. 01:15:19.280 --> 01:15:25.240 That's how those two principles connect to one another, and it's the first principle that 01:15:25.240 --> 01:15:28.780 Velleman rejects, right, this principle that a person has the right to make her own life 01:15:28.780 --> 01:15:36.600 shorter, that's the one that he rejects, and the reason he rejects it is that he thinks 01:15:36.600 --> 01:15:41.640 that the right to die conflicts with values that are non-relative. 01:15:41.640 --> 01:15:47.760 Now remember, when we were talking about the formula of humanity, I made a distinction 01:15:47.760 --> 01:15:54.840 between relative and absolute values: something that has relative value gets its value from something 01:15:54.840 --> 01:16:01.240 else, something that has absolute value just has value in virtue of itself and not as a 01:16:01.240 --> 01:16:05.800 result of a relationship to something else, right, so things 01:16:05.800 --> 01:16:10.640 of relative value have their value in relation to something that has absolute value, right, so this is 01:16:10.640 --> 01:16:17.280 the Kantian point, right, the right to die conflicts with values which are non-relative, 01:16:17.280 --> 01:16:22.000 that is, not related to personal interests, so what's a personal interest?
01:16:22.000 --> 01:16:26.860 A personal interest is like the things I think about my own good, about my own life, right, 01:16:26.860 --> 01:16:34.480 so I like, you know, playing chess, that's something that's interesting to me, that has 01:16:34.480 --> 01:16:41.480 relative value, but it gets its value in this case because I have absolute value and it's 01:16:41.480 --> 01:16:44.040 something that's my personal interest. 01:16:44.040 --> 01:16:51.720 Now the values connected to the right to die are ones that concern the inherent dignity 01:16:51.720 --> 01:16:59.240 of persons, so again this is back to the formula of humanity, right, we have absolute value 01:16:59.240 --> 01:17:12.380 insofar as we're rational agents, we have this dignity, and 01:17:12.380 --> 01:17:19.520 your dignity as a rational agent is not something that you own in the way that you own your 01:17:19.520 --> 01:17:27.960 car or your apartment or your watch or your phone, it's not just yours, because 01:17:27.960 --> 01:17:37.000 your value as a person is not relative to you as an individual. 01:17:37.000 --> 01:17:47.160 So Kant, sorry, Velleman doesn't just assert this, he gives an argument for it which 01:17:47.160 --> 01:17:52.640 starts from the question of what's good for a person, and so what we're going to do now 01:17:52.640 --> 01:18:00.320 is look at this notion of what's good for a person and show what it entails or implies, 01:18:00.320 --> 01:18:08.480 and Velleman is going to argue that it entails or implies a value that is non-relative, 01:18:08.480 --> 01:18:13.080 and if it's non-relative it's not something that we can just make a choice about, not 01:18:13.080 --> 01:18:17.500 even for myself.
01:18:17.500 --> 01:18:25.640 So what's good for a person is what's rational to want for his own sake, right, so 01:18:25.640 --> 01:18:31.160 you know, you want what's good for your children not because you think that this is going to 01:18:31.160 --> 01:18:36.720 be good for you but because it's going to be good for them, you want what's best for them. 01:18:36.720 --> 01:18:44.840 Now why do we want things for the sake of other people? We want goods 01:18:44.840 --> 01:18:52.040 for the sake of other people because we're concerned with the other person. 01:18:52.040 --> 01:18:59.440 So what he's trying to do here is work out a kind of priority relationship between 01:18:59.440 --> 01:19:07.340 the good of a person and the person himself. 01:19:07.340 --> 01:19:16.800 So what do you want for your children? You want them to be 01:19:16.800 --> 01:19:27.440 intelligent and caring, you want them to be employed in jobs that they enjoy, in a relationship 01:19:27.440 --> 01:19:35.940 with someone that loves them, you have all these desires connected to the good of some 01:19:35.940 --> 01:19:41.480 other individual, and that's because you care deeply about that individual themselves, right, 01:19:41.480 --> 01:19:44.700 the other person. 01:19:44.700 --> 01:19:51.200 So that's to argue that a person's good, right, all these things I talked about, the 01:19:51.200 --> 01:20:00.280 good for the person, that is a relative value, and it's relative to the value of the person 01:20:00.280 --> 01:20:02.180 herself. 01:20:02.180 --> 01:20:15.600 So think back to the formula of humanity: to treat someone as an end in themselves, right, 01:20:15.600 --> 01:20:23.560 is to treat them as a rational agent and to acknowledge in a way their dignity and absolute 01:20:23.560 --> 01:20:25.780 value.
01:20:25.780 --> 01:20:34.440 And remember, the formula of humanity applies not just to how I treat other people 01:20:34.440 --> 01:20:38.280 but also to how I treat myself. 01:20:38.280 --> 01:20:42.160 So right, the formula of humanity: act in such a way that you use humanity, whether in your 01:20:42.160 --> 01:20:47.800 own person or that of another, always at the same time as an end and never merely as a 01:20:47.800 --> 01:20:49.720 means. 01:20:49.720 --> 01:20:57.720 So if you elevate the good of a person, that is to say what's good for them or what's in 01:20:57.720 --> 01:21:04.160 their self-interest or what will make their life go well, over the person themselves, then 01:21:04.160 --> 01:21:08.180 you've gotten the priority relationship wrong. 01:21:08.180 --> 01:21:19.200 If you, in terms of the formula of humanity, use an individual to promote their own good, 01:21:19.200 --> 01:21:27.280 if you use yourself as a means to promote your own good, it seems like you're violating 01:21:27.280 --> 01:21:28.800 the formula of humanity. 01:21:28.800 --> 01:21:37.960 I just want to jump back here a second to look at principle one again, the one 01:21:37.960 --> 01:21:42.180 that he's rejecting, right, a person has the right to make her own life shorter in order 01:21:42.180 --> 01:21:44.800 to make it better, right. 01:21:44.800 --> 01:21:52.720 So you sacrifice the person for the sake of the person's good, and it's then formulated: 01:21:52.720 --> 01:22:01.520 to make it shorter if doing so is a necessary means or consequence of making it better, 01:22:01.520 --> 01:22:03.120 making the life better on the whole. 01:22:03.120 --> 01:22:11.440 So again, that's using the person, the individual, as a means, again, to that same individual's 01:22:11.440 --> 01:22:12.440 good. 01:22:12.440 --> 01:22:21.440 And to agree to principle one would be to think that that's okay.
01:22:21.440 --> 01:22:32.540 And for Kantian reasons that we've seen, that's a mistake, right. 01:22:32.540 --> 01:22:38.800 It's a violation of a moral norm to think that you can use 01:22:38.800 --> 01:22:49.220 a human being as a means even to promote their own good, right. 01:22:49.220 --> 01:22:54.560 As I have it here, right, we assume that a person's good matters because we assume that 01:22:54.560 --> 01:22:56.640 people matter. 01:22:56.640 --> 01:23:05.120 And it's the mattering of people, right, that is the value that Kant is talking about in 01:23:05.120 --> 01:23:10.260 the formula of humanity and when he's talking about our dignity as rational agents. 01:23:10.260 --> 01:23:17.000 And that's the same thing that Velleman is talking about when he is arguing 01:23:17.000 --> 01:23:20.960 against the first principle, right. 01:23:20.960 --> 01:23:26.200 So remember the argument here: there are two principles which generate this idea that you 01:23:26.200 --> 01:23:29.200 can make your life shorter to make it better, right. 01:23:29.200 --> 01:23:34.980 That first principle he thinks is the one to reject, and that's precisely the one which 01:23:34.980 --> 01:23:39.520 says that you can use yourself as a means to make your life better. 01:23:39.520 --> 01:23:43.800 The second principle, that you defer to people on what they think about their own good, that's 01:23:43.800 --> 01:23:48.360 a principle that he accepts. 01:23:48.360 --> 01:23:57.480 Now does this mean that Velleman thinks that assisted suicide is always impermissible? 01:23:57.480 --> 01:24:01.160 No, right.
01:24:01.160 --> 01:24:05.280 So those of you who are maybe starting to think about, oh, well how does this contrast 01:24:05.280 --> 01:24:10.580 with what Foote was saying: Foote is also talking about the good of a person, she's 01:24:10.580 --> 01:24:15.780 also talking about rights, but she seems to have them doing different 01:24:15.780 --> 01:24:19.540 things and functioning in different ways, right. 01:24:19.540 --> 01:24:25.520 Notice one thing, and I'll come back to Foote in a second: Velleman thinks that 01:24:25.520 --> 01:24:33.320 it's the dignity of the human being that blocks this idea that you can just sacrifice the 01:24:33.320 --> 01:24:40.480 life to make it better, but the dignity of the human being can actually disappear, right. 01:24:40.480 --> 01:24:48.320 So if someone is in so much excruciating pain that they can never exercise their rational 01:24:48.320 --> 01:24:56.520 agency, in that case you're not sacrificing the dignity of the person, because it's again 01:24:56.520 --> 01:25:00.520 the dignity of the person qua rational agent, right. 01:25:00.520 --> 01:25:05.760 So if someone is in a brain-dead condition, for example, it doesn't seem like it would 01:25:05.760 --> 01:25:13.380 sacrifice their dignity to, you know, take them off of life support, because in that 01:25:13.380 --> 01:25:18.280 case the dignity doesn't seem to be present anymore. 01:25:18.280 --> 01:25:26.480 So he leaves room for cases of assisted suicide, but not the sort of case where you have someone 01:25:26.480 --> 01:25:34.200 in their full rational faculties who could continue life and pursue other goals, but 01:25:34.200 --> 01:25:35.920 doesn't want to.
01:25:35.920 --> 01:25:42.600 So his view is restrictive, right, we can't just choose, right, so there are, you know, 01:25:42.600 --> 01:25:46.640 certain countries now where you can just kind of order a package and then someone will come 01:25:46.640 --> 01:25:53.280 and you'll get an injection or a pill or whatever, right, and that kind of thing seems to be ruled 01:25:53.280 --> 01:25:55.560 out by Velleman's considerations. 01:25:55.560 --> 01:25:56.560 Yes. 01:25:56.560 --> 01:26:05.560 Would you say it's correct to describe Velleman's theory here as a hypothetical imperative, 01:26:05.560 --> 01:26:11.320 because you can have the right to end your own life if these conditions are in place? 01:26:11.320 --> 01:26:18.400 So good, so the question was: is what Velleman is doing hypothetical, because it seems 01:26:18.400 --> 01:26:21.440 like he allows for certain conditions? 01:26:21.440 --> 01:26:22.600 That's a good question. 01:26:22.600 --> 01:26:30.400 I mean, I think that he would say that the answer is it's not a hypothetical imperative, 01:26:30.400 --> 01:26:37.840 because the idea is that the presence of dignity and absolute value, right, generates the categorical 01:26:37.840 --> 01:26:45.640 imperative, but once that value is gone, right, then you have a choice, then it's no longer 01:26:45.640 --> 01:26:51.760 relevant for the same kind of question, right, so in other words the categorical imperative 01:26:51.760 --> 01:27:00.760 is generated by the presence of this objectively valuable rational agency, right, and that's 01:27:00.760 --> 01:27:06.280 not a hypothetical thing in the same way that whether I want to play tennis is 01:27:06.280 --> 01:27:10.720 a hypothetical thing, or in the same way that whether I want to be happy is a hypothetical 01:27:10.720 --> 01:27:11.880 thing.
01:27:11.880 --> 01:27:19.640 I mean, unlike Kant, though, I think we might say Velleman is more open to this idea that 01:27:19.640 --> 01:27:24.360 our absolute value can be undermined by certain conditions that we have. 01:27:24.360 --> 01:27:30.840 I think Velleman is more modern in the sense that he recognizes that the kind of rational 01:27:30.840 --> 01:27:36.880 agency that we have can be undermined by sickness and illness and pain, and that's something 01:27:36.880 --> 01:27:40.560 I think that departs from Kant to a certain extent. 01:27:40.560 --> 01:27:47.680 So for Kant, our rational agency has this almost magical status, whereas it's a much 01:27:47.680 --> 01:27:53.580 more mundane thing for Velleman, but I don't think that he would then say it's a hypothetical 01:27:53.580 --> 01:28:01.360 imperative, because I still think you get from this a kind of moral restriction. But on the 01:28:01.360 --> 01:28:07.680 other hand I guess I should say that modern deontologists are much more willing to break 01:28:07.680 --> 01:28:11.800 the rules than Kant was, to put it that way. So there are tons of deontologists who go out 01:28:11.800 --> 01:28:15.360 and say Kant was wrong to say that you shouldn't lie to save a life; you should lie because 01:28:15.360 --> 01:28:19.960 it's a supreme emergency, and so on. 01:28:19.960 --> 01:28:25.600 So many of those who are currently inspired by Kant don't hold to the letter of Kant on 01:28:25.600 --> 01:28:29.840 a number of issues, and this might be one place where I think Velleman could be seen as departing 01:28:29.840 --> 01:28:32.640 from Kant to a certain degree. 01:28:32.640 --> 01:28:35.920 So good, good question.
01:28:35.920 --> 01:28:43.720 Now, we have a couple of minutes left, and I promised a comment 01:28:43.720 --> 01:28:52.880 on the comparison between Foote and Velleman, so here's an 01:28:52.880 --> 01:28:56.760 interesting example that Foote uses that might help you see the difference between 01:28:56.760 --> 01:28:59.120 Foote and Velleman. 01:28:59.120 --> 01:29:05.480 So Foote says about voluntary euthanasia, which is the kind of thing assisted suicide really 01:29:05.480 --> 01:29:13.200 is, a sort of form of voluntary euthanasia, she says about this that, look, if someone wants 01:29:13.200 --> 01:29:24.520 to die, no rights are violated in helping them do this. And she says, by analogy, if you say, 01:29:24.520 --> 01:29:31.600 well, you know, if anyone destroys my house it doesn't matter, and then someone 01:29:31.600 --> 01:29:36.600 comes and burns down your house, well, you've given away the right, and having given 01:29:36.600 --> 01:29:43.440 away the right, no rights are violated. 01:29:43.440 --> 01:29:50.200 So if you voluntarily allow someone to destroy your property, none of your rights 01:29:50.200 --> 01:29:55.940 have been violated; so too, if you allow someone to help you die and that's what you want, none 01:29:55.940 --> 01:29:58.300 of your rights have been violated. 01:29:58.300 --> 01:30:07.320 For Velleman this is a deep mistake, because we don't own property in the same way that 01:30:07.320 --> 01:30:12.600 we relate to ourselves, because he thinks that property is one of these things that have 01:30:12.600 --> 01:30:18.640 relative value, and our rational agency is something that has absolute value.
01:30:18.640 --> 01:30:24.840 So I mentioned this before: the dignity that we possess as rational agents isn't 01:30:24.840 --> 01:30:32.120 ours personally, it's a shared objective value, so we don't own it in the same way that we 01:30:32.120 --> 01:30:43.140 own a house, and it's a mistake to then assimilate things like property to the way we own ourselves. 01:30:43.140 --> 01:30:52.200 So remember, for Foote, she said that, generally, where justice doesn't oppose 01:30:52.200 --> 01:30:57.840 euthanasia, charity will generally be in favor of it, because euthanasia 01:30:57.840 --> 01:31:03.160 is defined as promoting the good of the person dying. However, in 01:31:03.160 --> 01:31:08.200 certain cases it's going to turn out that charity will suggest that we should try and 01:31:08.200 --> 01:31:15.400 talk people out of euthanasia for their 01:31:15.400 --> 01:31:19.320 own good, in exactly the way we would talk them out of suicide for their own good: make 01:31:19.320 --> 01:31:22.140 them see that there's a better life out there. 01:31:22.140 --> 01:31:28.120 Now that strategy, again, is what Velleman would say is focusing on the wrong thing: you're 01:31:28.120 --> 01:31:35.600 focusing on the good of the person and not on the value of the person as an individual, 01:31:35.600 --> 01:31:40.880 and that's what you should really be focusing on when you're talking about assisted suicide 01:31:40.880 --> 01:31:43.280 and euthanasia and such things. 01:31:43.280 --> 01:31:57.560 Okay, thank you very much.