Wednesday, March 23, 2005

Solving the Prisoner's Dilemma

A Prisoner's Dilemma arises when two (or more) people can each benefit themselves at a greater cost to the other(s). So if both are self-interested, each ends up worse off than they would have been if both were more altruistically inclined. This unfortunate result cannot be avoided in one-off dilemmas between rational agents if no communication is allowed. If they can meet beforehand, it would be best for both if each agent could somehow force the other to co-operate. They (rationally) should agree to incapacitate themselves, if this would guarantee that the other would do likewise; but in practice such guarantees are difficult to secure. Fortunately, the problem is alleviated somewhat in repeated versions of the dilemma. A Tit-For-Tat strategy tends to do best in those cases, since each agent has reason to entice the ongoing co-operation of the other.
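Since the one-off and repeated cases come apart, a toy simulation may help. The following is a minimal Python sketch, using the standard textbook payoff numbers (3 each for mutual co-operation, 1 each for mutual defection, 5/0 when one defects on a co-operator); these values are assumptions for illustration, not anything from the post itself.

```python
# Iterated Prisoner's Dilemma sketch. Payoff numbers are the usual
# textbook values (assumed here, purely illustrative).
PAYOFFS = {  # (my move, your move) -> (my payoff, your payoff)
    ("C", "C"): (3, 3),  # mutual co-operation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit you
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Co-operate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The purely 'self-interested' strategy in a one-off game."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs when two strategies play repeated rounds."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the other's past moves
        b = strategy_b(moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(always_defect, always_defect))  # (10, 10): both 'rational', both worse off
print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained co-operation
print(play(tit_for_tat, always_defect))    # (9, 14): TFT is exploited once, then retaliates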

Those 'selfish' dilemmas are interesting enough, but there is an even more fascinating sort which I want to consider here: moral dilemmas. According to Common-Sense Morality (M), we have special obligations to help certain people (e.g. family) over others. So my moral reasons will differ from yours. But this means we can construct a dilemma whereby each of us can fulfill our own M-given aims at a greater cost to the M-given aims of the other. Consider Parfit's "Parent's Dilemma" (Reasons and Persons, p.97):
We cannot communicate. But I could either (1) enable myself to give my child some benefit or (2) enable you to benefit yours somewhat more. You have the same alternatives with respect to me.

(Note that many-person versions of this dilemma are extremely common in real-life, e.g. the production of public goods.)

According to M, we should both do (1). But our children would be better off if we both did (2) instead. If everyone successfully followed M in such cases, it would in fact be damaging to the M-given aims of each. So M is a collectively self-defeating theory. (It is true that, of the options open to him, each does what best achieves his M-given aims. So M is not individually self-defeating. But morality is surely collective by its very nature. As Parfit says (p.103): "If there is any assumption on which it is clearest that a moral theory should not be self-defeating, it is the assumption that it is universally followed"!)
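To make that structure concrete, here is a toy numerical check in Python. The benefit sizes are hypothetical (2 units for helping one's own child, 3 for enabling the other parent), chosen only to respect Parfit's "somewhat more"; any values with the second exceeding the first give the same pattern.

```python
# Toy check of the Parent's Dilemma. OWN and OTHER are hypothetical
# benefit sizes; any values with OTHER > OWN yield the same structure.
OWN, OTHER = 2, 3

def child_benefits(my_choice, your_choice):
    """Benefits to (my child, your child), given each parent's option 1 or 2."""
    mine = (OWN if my_choice == 1 else 0) + (OTHER if your_choice == 2 else 0)
    yours = (OWN if your_choice == 1 else 0) + (OTHER if my_choice == 2 else 0)
    return mine, yours

print(child_benefits(1, 1))  # (2, 2): what M recommends to each of us
print(child_benefits(2, 2))  # (3, 3): better for each child
print(child_benefits(1, 2))  # (5, 0): for me alone, option 1 still dominates
```

Whatever you choose, I add more to my own child's benefit by picking option (1); yet if we both follow that reasoning, each child ends up with 2 rather than 3. That is exactly the collectively (but not individually) self-defeating pattern described above.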

M is therefore indefensible, and must be revised to form a new theory (R) which includes the following claim:
(R1) When M is self-defeating, we should all ideally do what will cause the M-given aims of each to be better achieved.

This claim is about the 'ideal' case where everyone follows the moral theory successfully. In practice, of course, things are not so simple. We also need to know what to do when others fail to act morally. Presumably we should continue to act impartially so long as enough others do likewise. But how much is enough? Parfit answers (p.101):
There must be some smallest number k which is such that, if k or more parents contribute [to some public good], this would be better for each contributor's children than if none contribute... The number k has two special features: (1) If k or more contribute, each contributor is joining a scheme whose net effect is to benefit his own children. The children of each contributor will be benefited more than they would have been if no one had contributed. (2) If less than k contribute, any contributor's children will be benefited less than they would have been if no one had contributed. (1) and (2) make k a plausible moral threshold above which each parent ought to contribute. We can claim

(R2) In such cases, each ought to contribute if he believes that there will be at least k contributors.
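For concreteness, here is a minimal sketch of the threshold in Python, under an assumed additive model of the public good: each contribution adds some fixed benefit to every child, at a fixed cost to the contributor's own child. The model and the numbers are illustrative assumptions, not Parfit's.

```python
# Hypothetical public-goods model: each contribution adds BENEFIT to
# every child and costs the contributor's own child COST. Numbers are
# illustrative only.
BENEFIT, COST = 2, 9

def contributor_net(m):
    """Net gain to a contributor's children when m parents contribute,
    relative to the case where no one contributes."""
    return m * BENEFIT - COST

# Parfit's k: the smallest number of contributors at which joining the
# scheme leaves each contributor's own children better off.
k = next(m for m in range(1, 10_000) if contributor_net(m) > 0)

def ought_to_contribute(expected_contributors):
    """(R2): contribute iff you expect at least k contributors."""
    return expected_contributors >= k

print(k)                   # 5, with these numbers
print(contributor_net(4))  # -1: below k, contributors' children lose out
print(contributor_net(5))  #  1: at k, the scheme benefits each contributor's children
print(ought_to_contribute(3), ought_to_contribute(7))  # False True
```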

So, at least when M is self-defeating, we ought to be impartial so long as we believe that enough others will do likewise. When collectively followed (as any good morality ought to be), this will better achieve the M-given aims of each person, without exception. That is, it will better fulfill even your special obligations.

Parfit also mentions a more sweeping revision (N): that we should always impartially fulfill everyone's M-given aims (so long as enough others follow suit), even when M is not self-defeating. But although this would maximize the fulfillment of everyone's M-given aims, it would not necessarily best achieve those of each person. There would be exceptions. So we cannot compel the M theorist, in their own terms, to accept N. But they do at least have to accept R. And that alone is a very significant result!

7 comments:

  1. I would have thought you "should" be impartial no matter what. Morality by your scale might favour your taking care of your own family, but that is not an absolute scale and is meaningless to other participants. "Should" is external to you (or universal); desire, or something like that, can be internal.

  2. Yeah, I generally favour impartial moral theories (like utilitarianism). The point is that many other people believe M. But they shouldn't, because it is self-defeating. According to the values inherent in M (which they accept), they should instead accept the revised theory R.

    For many people, this would be a significant change (and improvement) in their moral thinking.

  3. I think there is a certain degree to which an argument is reinforced by the nature of the debate.

    So if I am talking to my kids, M will tend to be reinforced. If I am in a public debate, R will tend to be reinforced, because it doesn’t help me to keep making M-level arguments.

    I think this naturally converts M-people to R-people insofar as they find themselves using those arguments (i.e. your arguments influence your beliefs), although the effect may at first be superficial.

  4. It might be annoying to comment on such an old post, but...

    It seems to me that, empirically, you will not find yourself in Prisoner's Dilemmas such as this one. I know better what will improve my life, and you know better what will improve yours, and it will seldom be better for both of us to meddle in each other's affairs. I don't think arguing abstractly about counterfactuals changes this.

  5. Hi Thom, that's true enough as a practical point, but these counterfactuals are intended to illuminate our fundamental moral theory (which should apply in all possible situations). In particular, it counts against the idea that our commonsense moral rules are really fundamental, and in favour of the 'indirect utilitarian' idea that some more impartial moral theory is fundamental, and just so happens to yield agent-relative rules for us to follow in our contingent circumstances. (For an even more radical argument along these lines, see 'The Contingent Right to Life'.)

  6. Richard,

    Thanks for the links! Your position is much clearer to me now (I think). I wholly endorse the view of the Romantic commenter in 'The Contingent Right to Life'. It seems your thinking fundamentally concerns the ol' utilitarianism vs. deontology debate, so we're probably talking way past each other.

  7. Found this blog yesterday. Great work :), loving it. Also, this seems to be relevant to the article: http://en.wikipedia.org/wiki/Superrationality

