Friday, April 02, 2004

Ideal Agent Theories

Jonathan Ichikawa wrote:
This is from Peter Railton, discussed yesterday in Jamie Dreier's metaethics class. Railton is looking to define a person's best interests. He suggests something like this: "It is in A's interest to do x in situation S just in case it would be in A+'s subjective interest to do x in situation S, where A+ is what A would be if he had unlimited cognitive and imaginative abilities."
...
So we attempt to patch up the account: "It is in A's interest to do x in situation S just in case it would be in A+'s subjective interest for A to do x in situation S." I'm not sure how to make sense of this formulation. The following looks like it might be a counterexample:
Bubba's IQ is 95. Bubba is confronted with the choice of whether or not to press the big red button. The big red button, if pressed, would have the following result: God will empty the bank account of every person whose IQ is higher than 100, and give the money to those whose IQ is lower than 100.
...
Jamie's response to this kind of suggestion was to push the idea that Bubba+ is the same person as Bubba. Well, ok... but he's still different in an important way. And it still seems like it would be in Bubba's interest to press the button, but not in Bubba+'s. So that looks like a problem. But I recognize that I'm pretty confused about the argument.

I've just been reading a bit of Railton myself, and it seems that Jonathan (or Jamie) has crucially misinterpreted what Railton was saying.

Railton emphasises that we are not interested in A+'s direct subjective interests (for precisely the reasons that Jonathan identifies) - that is, we do not ask what A+ would do in situation S, nor what action of A's is in A+'s interest. Both of these possibilities make the mistake of envisaging A+ as continuing to be a distinct individual from A, even after the decision is made. Instead, we ask A+ what he would want his non-idealised self A to seek were he (A+) to find himself in the actual condition and circumstances of A.
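
To make the contrast explicit, here is a rough schematic rendering (my own notation, not a quotation from Railton):

\[
\textbf{(Rejected reading)}\quad x \text{ is in } A\text{'s interest in } S \;\Longleftrightarrow\; x \text{ is in } A^{+}\text{'s own subjective interest in } S
\]
\[
\textbf{(Railton's reading)}\quad x \text{ is in } A\text{'s interest in } S \;\Longleftrightarrow\; A^{+}\text{, placed in } A\text{'s actual condition and circumstances, would want } A \text{ to do } x \text{ in } S
\]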

One way to think of this would be to consider A as temporarily gaining full cognitive powers (i.e. turning into A+), and being frozen in a moment of time until he makes a decision, whilst knowing that the moment the decision is made, he will be turned back into A. This ensures that A+ has motivation to seek what is in A's genuine interest, even in those cases where the apparent interests of A and A+ would otherwise diverge. This way of conceptualising the situation may also help to make sense of Jamie's suggestion that we think of A and A+ as being the same person.

Since A+ knows all of A's desires, and knows how to best realise them, and furthermore knows that he himself will soon share those desires and lose his own divergent ones (because he is about to turn back into A), it seems that this conception of the 'ideal agent' overcomes the problems Jonathan raises.

So to apply this to Jonathan's counterexamples:
  • A should still talk to other philosophers; the fact that A+ would gain nothing from such conversations is irrelevant, for in making the decision, A+ knows that he will afterwards lose his omniscience, and so learning from others is still worthwhile.

  • Bubba should press the button. Bubba+ would choose this, because by the time the button's effects take hold, Bubba+ will have lost his omniscience (by turning back into Bubba), and so will reap the rewards of having an IQ below 100 (see the toy sketch just below).
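
As a purely illustrative aid, here is a minimal toy model of the Bubba case. Everything in it (the payoff numbers, the stand-in IQ for Bubba+, the names) is my own invented scaffolding, not anything from Railton; it just shows why it matters whether the ideal adviser evaluates outcomes for himself qua A+ or for the reverted, non-ideal self:

```python
# Toy model of the Bubba case. All numbers and names here are invented
# for illustration; nothing below is drawn from Railton's own text.

def payoff(iq_at_payoff_time: int, pressed: bool) -> int:
    """Monetary payoff once the button's effect takes place (stakes invented)."""
    if not pressed:
        return 0
    if iq_at_payoff_time > 100:
        return -100  # God empties this person's bank account
    if iq_at_payoff_time < 100:
        return 100   # this person receives a share of the money
    return 0         # an IQ of exactly 100 is unaffected either way

BUBBA_IQ = 95          # Bubba's actual IQ
BUBBA_PLUS_IQ = 1000   # crude stand-in for 'unlimited cognitive abilities'

# Mistaken reading: Bubba+ evaluates the outcome for himself qua Bubba+,
# as if he remained a distinct high-IQ individual after the decision.
naive_presses = payoff(BUBBA_PLUS_IQ, pressed=True) > payoff(BUBBA_PLUS_IQ, pressed=False)

# Railton's reading (as glossed above): Bubba+ knows he reverts to Bubba
# the moment the decision is made, so he evaluates the outcome at Bubba's
# actual IQ.
reverted_presses = payoff(BUBBA_IQ, pressed=True) > payoff(BUBBA_IQ, pressed=False)

print(naive_presses)     # False: a 'detached' Bubba+ would refuse to press
print(reverted_presses)  # True: the reverting Bubba+ presses, matching intuition
```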


I should probably clarify that I don't particularly like Ideal Agent theories. They can be useful as a sort of intuitive heuristic, but ultimately the reason for doing X is not merely that "A+ would choose X". Rather, surely, the reasons are those that lie behind A+'s choice.

I think Railton actually agrees with me on this point, too. For he goes on to say that the "objectified subjective interest" of an agent (as identified through the Ideal Agent heuristic) can be 'reduced' to purely descriptive facts (e.g. about the agent, his underlying desires, and how best to realise them). It is then these facts, this 'reduced form', that constitute what Railton identifies as an agent's "objective interest".
