Wednesday, August 31, 2005

The Surprise Examination Paradox

I really like this one. Here's how it goes: your teacher tells you (i) she's going to give the class a surprise exam next week, and (ii) you won't be able to work out beforehand on which day it will be. Using this information, you work out that it can't be on Friday (the last day), or else you'd be able to know this as soon as class ended the day before, contrary to the second condition. With Friday excluded from consideration, Thursday is now the last possible day, so you can exclude it by the same reasoning. Similarly for Wednesday, Tuesday, and finally Monday. So you conclude that there cannot be any such exam. This chain of reasoning guarantees that when the teacher finally gives the exam (say, on Wednesday), you're all surprised, just like she said you'd be.
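The backward induction can be made vivid with a toy simulation. A minimal Python sketch of the students' elimination argument (the day names and messages are just for illustration):

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
possible = list(days)

while possible:
    last = possible[-1]
    # An exam on the last day still in play could be predicted the night
    # before (given that all later days are already excluded), so the
    # teacher's second condition rules it out too.
    print(f"Ruling out {last}: an exam then would be no surprise.")
    possible.pop()

print("Days remaining:", possible)  # [] -- the students conclude no exam can occur

The paradox, of course, is that this apparently sound loop terminates with every day eliminated, and yet the Wednesday exam surprises everyone.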

I originally thought that this paradox would only arise when the speaker was known to be fallible. The reason you can't know beforehand what day the exam will be is that, in running through the reasoning, you come to suspect that what the teacher said is false. But what if the teacher was known to be infallibly truthful? Imagine it is God that makes the announcement (and suppose we know that God cannot speak falsehoods). My thought was that the announcement would then simply be false, i.e. the exclusionary argument proves that this statement could never be made by a being known to be infallible.

But I've changed my mind about this. It seems that if God were to make such a statement, we would be thrown into confusion by it, and would be unable to work out on what day the surprise exam would be. So the statement would turn out true, even though we know the speaker is truthful and so cannot (unlike before) suspect that there will be no exam. (But then what's wrong with the exclusionary reasoning? Most puzzling!)

A commenter over at Opinatrety distills the core of the problem:
I have a coin in my hand. But you will never know if I really have a coin in my hand before I open it.

So you do not know if I have a coin in my hand or not. Then, I open it. Yes, there is a coin. My statements are right.

As he notes, the trick lies in saying 'you will never know'. By saying that, you throw the listener into a state of epistemic confusion, thereby ensuring the truth of that very statement. Even the known-infallible God could do this, by saying to you: "X is true, but you cannot know it". After all, you'd then reason, "If God says I cannot know X, and he's never wrong, then there must be something dodgy about it... I'd better suspend belief" -- so then you don't know X any more! Even though you were told it from an infallible source! Tricky.

Tuesday, August 30, 2005

All of None

Some might deny that "all ravens are black" means the same thing as "there are no non-black ravens". The latter claim is true even if there are no ravens at all. But it seems odd to say that the former would be true in such a case. After all, it would be just as true to say "all ravens are entirely non-black", since there are no black ravens to contradict it. From these we can infer that "all ravens are both black and entirely non-black", which seems very odd indeed. Logicians mutter that it's just another way of saying that no ravens exist at all. But it clashes sorely with common linguistic usage. We might instead hold that universal statements have existential import: to say "all ravens are black" implies that there are some ravens.

This would provide a simple resolution of the raven paradox - for while randomly sampled red herrings might confirm that there are no coloured ravens, they do nothing at all to confirm the existence of black ravens, so if the universal claim is committed to such existence, then the red herrings do not confirm that "all ravens are black" after all. Our intuitions are saved! (Not really: read on.)

It's worth noting just what the randomly sampled red herrings really show. They don't specifically confirm that there are no coloured ravens; that implication is indirect -- red herrings don't say anything direct about ravens. Rather, the fact that our random sample turned up a red herring indicates that red herrings are probably quite common. (If you pick an object at random, chances are you're going to pick an object of a common rather than rare type, for obvious reasons.) We might say that it directly confirms the claim "everything is a red herring". This in turn implicitly confirms the logical entailments, "there are no ravens", and in particular, "there are no non-black ravens", which the modern logician takes to be equivalent to "all ravens are black".
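To see the 'commonness' point in Bayesian terms, here is a toy Python illustration (the two world-hypotheses and all the numbers are invented for the example):

# Two rival world-hypotheses: in world A, red herrings make up 50% of all
# objects; in world B, only 1%. We start undecided between them.
prior = {"A (herrings common)": 0.5, "B (herrings rare)": 0.5}
likelihood = {"A (herrings common)": 0.50, "B (herrings rare)": 0.01}

# Observation: one object sampled at random turns out to be a red herring.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)
# A rises to ~0.98: the random sample strongly favours the world where
# red herrings are common, just as described above.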

But the red herring doesn't just confirm that all ravens are black (without existential import); it equally confirms that all ravens are green, and that no ravens exist at all. It is only from our background knowledge that some black ravens exist that we rule out the other equally "confirmed" hypotheses, and say that the observation supports the specific claim that all ravens are black (rather than, e.g., that no ravens exist at all).

I think this helps explain why the herring/raven confirmation seems paradoxical to people. The observation in itself says nothing specific about ravens, and that's why we suppose that it cannot confirm claims about ravens (e.g. that all ravens are black). But it can confirm that there are no (or relatively few) ravens. And while this alone cannot support "all ravens are black" over "all ravens are green", the latter can be ruled out by our background knowledge.

Through most of this I've been assuming, with modern logicians, that universal statements have no existential import. "All X are Y" is true if no X's exist, no matter what Y might be (it can even be a contradiction, as we saw above). But in fact the herring confirmation of "all ravens are black" goes through regardless once we factor in background knowledge. Given that some black ravens exist, the existential condition is already satisfied, so all we need to confirm in addition is that there are no non-black ravens, which the randomly sampled red herring does help to confirm.

But enough about ravens already. I'm wondering: do universal statements have existential import?

Let's take an example: (M) All mermaids are green.

Is M true? False? Neither? I don't know what to make of it. The problem with stipulating that empty universals are vacuously true was mentioned above: it implies the truth of apparent contradictions such as "all mermaids are not mermaids". But existential import fares no better, for that implies that empty universals are all vacuously false, which is implausible for tautologies like "all mermaids are mermaids" (which is surely true even if no mermaids exist).
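Incidentally, the modern logician's convention is mirrored by how quantification over an empty domain behaves in code. A minimal Python sketch (the colour list is just for illustration):

mermaid_colours = []  # no mermaids exist, so no mermaid colours to list

# Universal claims over the empty domain come out vacuously true...
print(all(c == "green" for c in mermaid_colours))  # True
print(all(c != "green" for c in mermaid_colours))  # True as well
# ...while the corresponding existential claim fails:
print(any(c == "green" for c in mermaid_colours))  # False

Both universals hold at once, just like the black and entirely non-black ravens above, and existential import is lost.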

Perhaps what we need is for the quantifier to range over non-existent possibilia in such cases. We go to some appropriate possible world where mermaids exist, and assess the truth of the universal in that world.

Are all unicorns white? I think so. Are they all black? Nope! Is it possible to get a coloured unicorn? Maybe in some distant possible world, but that need not contradict our judgment that all unicorns are white in the appropriate world.

Hempel actually discusses this issue in his original raven paradox paper ('Studies in the Logic of Confirmation', pp.16-17). One reason he offers, to think that universal statements don't have existential import, is that some just obviously don't. An astronomer's universal hypothesis about what happens in extreme conditions "need not imply that such extreme conditions ever were or will be realized".

But in fact this supports my view, not his. We want to say the astronomer's hypothesis is true only if the predicted outcome would occur were the extreme conditions to arise. In other words, the hypothesis must hold true in the nearest possible worlds where the extreme conditions obtain. Otherwise we would hold it to be false. Logicians, by contrast, would hold the universal statement to be vacuously true if the conditions never actually obtained. They would say the opposite prediction would be just as (vacuously) true. But this is clearly wrong. The astronomer's prediction is not a vacuous one, and it can be either true or false, no matter whether the conditions actually obtain.

(One way to take my point here is to say that such claims are not properly analyzed in terms of universal quantifiers and material conditionals. They are better understood as counterfactual claims, if they refer to nothing actual.)

Monday, August 29, 2005

Epistemology without Evidence

There are two ways to challenge evidentialism. One is to argue (as in my previous post) that there can be 'practical' or non-epistemic reasons for belief. The other, which I want to explore presently, is to argue that there can be non-evidential epistemic reasons. There can be theoretical reasons for you to believe that p, which are not at the same time reasons for the truth of p.

How could this be? Well, the normative potency of evidence derives from taking truth as the normative standard against which to assess beliefs. We say "belief aims at truth", and then assess beliefs according to whether they achieve this goal. But there are other normative frameworks we might adopt instead. There is, for example, the standard of coherence, which assesses a set of beliefs for its internal coherence, rather than applying an external standard such as truth. Despite making no reference to truth, the standard of internal coherence is nevertheless clearly an epistemic standard, rather than a practical one.

So, I propose, we have epistemic reason to believe that p if doing so would yield a more coherent belief set. (There are shades of Michael Smith in this suggestion.)

Suppose that you believe that p, and believe that if p then q, and so on this basis you rationally form the belief that q. The two former beliefs provide you with a reason to adopt the latter belief. In doing so, you make your belief-set more coherent. This is epistemically rational.

So there we have a reason for believing. What about reasons for the thing believed, i.e. the proposition that q? If asked, you would likely point to the propositions that p, and that if p then q, as being your reasons for taking q to be true. However, let us suppose that your former beliefs are false: p is false, and if p then q is false. So while you took these to be reasons for the truth of q, in fact they are not. You were mistaken about whether there are reasons for the truth of q. There are no such reasons in this case.

Thus we find that there can be (even epistemic) reasons for believing, which are not reasons for the truth of the thing believed. Musgrave's distinction is vindicated, even if his particular application of it is not.

Belief Without Evidence

There's no denying that there's a very tight connection between belief and conscious assessment of evidence or truth. Shah sees this in terms of the phenomenon of transparency, whereby "the deliberative question whether to believe that p inevitably gives way to the factual question whether p". But I previously argued that this only holds if we artificially restrict what is to count as "doxastic deliberation". To avoid this problem, I think a better way to look at the issue is to build on the neutral description offered in my first sentence. In particular, the phenomenon in need of explanation is what I will call the subjective evidence principle:

(SEP) When one attends to one's belief that p, one must believe that one has adequate evidence that p is true.

Or, as Adler puts it ('The Ethics of Belief: Off the Wrong Track', p.273):
Necessarily, if one regards one's evidence or reasons as adequate to the truth of p then one believes that p; and if one attends to one's believing that p, then one regards one's evidence or reasons as adequate to the truth of p.

Note that this principle is sufficient to explain transparency in all and only those contexts where it actually occurs. In particular, it explains why "one cannot arrive at a belief that p just by deliberating on whether it would be beneficial to believe that p." So long as one holds the evidence for p to be inadequate, one is incapable of consciously believing p. But it doesn't, for example, prejudge the question of whether one can take account of practical reasons when deliberating about whether one ought (all things considered) to believe that p.

So, how are we to explain SEP? One option (and Adler's own, I should add) is to agree with Shah that the normative hegemony of truth is built into the very concept of belief. An alternative, that I prefer, is to appeal to the internal logic of belief. To quote Foley (The Theory of Epistemic Rationality, p.215):
Just as it may be impossible for a person S to believe p and also to believe its contradictory not-p, so too it may be impossible for him to have "near contradictory" beliefs, such that he believes p while also believing that his evidence indicates that p is likely to be false.

What's the difference? Well, this latter standard of coherence is entirely internal to the person's belief set. In contrast, evidentialists like Shah and Adler want to hold that it's part of the concept of belief that belief aims at truth -- a standard external to the person's set of beliefs.

A problem for these evidentialists is that their account over-reaches. If the normative hegemony of truth were intrinsic to the concept of belief, then "I ought to have a false belief" would be a necessary falsehood. But it obviously isn't. If a demon will torture your family unless you believe the Earth is flat, then by damn, you ought to believe it! Reflecting on your situation, you may think "p is false but I ought to believe it", and there is nothing incoherent about this thought. You can think "the welfare of my family is a reason for me to believe that the Earth is flat", and this is not just coherent, but true!

My coherence account has no problem with these cases. While you cannot simultaneously believe that p and believe that the evidence is against p, there are two ways to remedy this. You can change your belief about p, or you can change your belief about the evidence. You are not rationally compelled to follow the evidence, contrary to the evidentialist's suggestion. You can instead conclude "I ought to have a false belief", and thereby resolve to manipulate your evidential situation (perhaps by taking a magic pill) in such a way as to enable you to form this false belief.

Thus the pragmatist can provide an account of SEP and transparency to rival the evidentialist's.

Inaccessible Reasons

Shah (following Williams) holds that:
1) R is a reason for X to Φ only if R is capable of being a reason for which X Φs.

Suppose that God will reward people who act from virtuous motives. This is clearly a reason for them to be virtuous. But it's not a reason that they can recognize, or act upon, because in doing so they would be acting from self-interest rather than virtue. So they wouldn't get the divine reward. Thus it would be self-defeating to act for this reason. The continued existence of the reason is dependent upon its not being recognized. Once it is recognized, the reason will disappear. Nevertheless, it seems clear that, so long as nobody is aware of it, the divine reward is a reason for them to act virtuously. So (1) is false. Just as there can be unknowable truths, so there can be inaccessible reasons, i.e. considerations that count in favour of an action even though they cannot possibly "guide" it.

Sunday, August 28, 2005

Carnivals

The 18th Philosophers' Carnival is now up at Language Games and Miscellaneous Arbitrary Marks. It was actually not a bad turnout in the end, with 11 submissions (not counting del.icio.us nominations, even). Though I still noticed absences from a couple of my favourite philosophy blogs - tsk tsk. Hopefully you'll send in submissions next time, eh? ;)

We need more hosts again, so if you're interested, check out the guidelines then let me know.

Also, Tim is keen to give the Kiwi Carnival another shot, so he'll be hosting it next Sunday. His submission instructions are here - any NZ bloggers should be sure to send something in by the end of the week!

Dishonesty in Politics

Politicians are notorious for never giving a direct answer to anything. Why do we let them get away with this? Presumably they only do it because it works for them -- perhaps they can't risk providing their opponents with a juicy quote to exploit, so instead they merely waffle. But this can't be good for democracy. We should be looking for ways to change our political culture so that sincerity is rewarded and disingenuousness discouraged. We need to get the incentives right.

This means that journalists need to be willing to ask the hard questions. And if the politicians start to give misleading or waffly answers, they need to be called on this, interrupted, and told to state their position clearly or else shut up. If politicians are obviously being dishonest, then analysts shouldn't hesitate to point this out, loud and clear, shaming the liars and forcing them to own their words. (Blogs are generally good for this much, if nothing else, as demonstrated by recent posts on Frogblog and No Right Turn, for example.)

Just as important, we should be willing to accept politicians who own up to making a mistake. When Labour makes a u-turn, they shouldn't try to hide the fact. They should stand tall and say, "Yes, our past policy was stupid, for reasons x, y and z. That's why we've changed it to this new policy, which avoids these problems because of p, q and r. We're here to promote what's best for the country, and we're not so proud as to let our past mistakes stand in the way of that." (Of course, if they can't offer any such explanation -- say, because the new policy is actually no better, but is simply an opportunistic vote-grabber -- then they shouldn't be proposing the policy in the first place.)

My opinion of a politician would actually improve were they to make such a frank admission. But I guess I mustn't be the average voter. Besides, no doubt partisans would jump all over such an admission, and use it to paint the politician as "incompetent" or a "flip-flopper" or some other nonsense. To any bloggers reading this who would be tempted to behave in such a fashion, I ask you simply: please don't. You're damaging the quality of political discourse. Just don't do it.

Perhaps late-night blogging lends itself to misplaced idealism, but I simply cannot comprehend why we tolerate such unclarity in our political discourse. Surely our current political culture is not by necessity so muddled. We could make it better. So why don't we?

Politicians should be exposed not just for dishonesty, but also illogic. An example that springs to mind is Rodney Hide telling National supporters to vote ACT instead because National "can't govern alone" and so will need ACT there as support. But of course what matters isn't the number of parties, but the number of seats obtained by the left vs right blocs. If National loses seats to ACT, that will do nothing at all to make a National government more likely. Rather, what National needs is for the center-right to get as many seats as possible, however they are distributed. So Hide's argument is simply illogical. (There are complications regarding the 5% threshold which I won't get into here. Suffice it to say that while there are possible situations where it would benefit National to sacrifice some of its votes to ACT, that is an entirely different argument from the idiotic one that Hide was presenting -- which quite literally consisted of nothing more than the premise "National cannot govern alone", from which he tried to derive "so National supporters should vote for ACT instead". And he gets away with it too. Pitiful.)

There are some issues that are genuinely difficult, and we can't expect any easy answers to them. But for others, it really isn't that hard to come to the truth if one is willing to think critically. Politicians and partisans defend obvious falsehoods all the time. It shouldn't happen. They ought to be exposed as either stupid or dishonest. Once we're all agreed on the easy questions, then we can concentrate on disputing the hard ones. And if we continue to hold each other up to the high standards of reasoned discourse, then perhaps some real progress might be made. So why aren't we doing this?

For Shame

In a recent TV interview, a disabled guy was asked what he wanted to do to people who poach disabled parking spots. His answer: attach a hard-to-remove bumper sticker to the car, announcing what they had done (e.g. "I stole a disabled park", or whatever). It's not a bad idea, really. I mean, these people know what they're doing is wrong -- when stopped and challenged by the disabled guy, they seemed quite embarrassed, and grasped at flimsy excuses ("oh, I'm in a hurry, and I was only going to be a minute..."). But they refuse to face up to this fact, hiding their selfishness from themselves and from the world. It would be fitting for such behaviour to be made public, and hidden no longer. More generally, I want to argue that society ought to make better use of shame to promote ethical behaviour.

Now, shame has a bad name amongst liberals, and I'll grant it's been misused in the past. The problem is that shame serves as a method of general norm enforcement, but of course not all norms are worth enforcing. Indeed, some quite explicitly ought not to be! Still, despite its shady history, I think it's time to bring shame back, and put it to use for good instead of evil. As citizens, we should shame each other into behaving with more community-mindedness and less selfishness. (Feel free to discuss in comments any specific examples which you think would or would not be appropriate for "shame treatment".)

Why should we use shame as a punishment? Well, as a utilitarian I think there's only ever one justification for punishment, and that is that it works (i.e. has good consequences). Humans are social animals; we care a great deal about our social status, so any threat to it is going to serve as a powerful deterrent. And with the advent of the internet, publicizing misdeeds has become that much easier. Just consider the case of dog poop girl: when a woman refused to clean up after her dog pooped on the subway, someone took a photo and posted it on the net, where it spread like wildfire and made the poor girl infamous. Now, one can feel sympathy for her due to the disproportionate response. But if such public shaming were more widespread, no one event would get such disproportionate attention. Rather, one hopes, jerks would get shamed just the amount they deserve.

So far I've mainly been thinking about social transgressions, but shaming might also be appropriate for some criminal behaviour. Convicted drunk drivers might have to attach bumper stickers to their cars announcing the fact. You can probably think of other examples. Remembering how terribly inefficient prisons are, we should be looking for possible alternative punishments. But some claim that institutional shaming would prove counterproductive -- and if they're right, then we shouldn't do it. Some people advocate public shaming for purely retributive purposes, but I don't agree with that. If it doesn't work, we shouldn't do it. But I do think we should be looking into whether or not it would work. Because maybe, just maybe, it would. It certainly seems plausible that it would serve as a powerful deterrent.

Others claim that even if it works, we still shouldn't do it, because it's "dehumanizing". But that depends on how it is done. Forcing criminals to march around town naked might be inappropriately degrading. (Then again, I don't see why such degradation is so much worse in kind than depriving them of their freedom.) But the sort of appropriate shaming I have discussed is nothing like this. It merely involves forcing people to take responsibility for their actions, rather than hiding behind the urbanite's cloak of anonymity.

There is nothing even remotely dehumanizing about getting people to own their actions in such a way. Quite the opposite, in fact. When the jerk hides behind the cloak of anonymity, he hides the full import of his actions not just from others, but also from himself. This is evident from the embarrassment the carpark-poachers felt when challenged by the disabled guy. Deep down they knew they were behaving wrongly, but they didn't want to face it. To force them to face it is thus to help them become more authentic individuals, with a more accurate appreciation of their own moral nature. The sort of shaming I have in mind actually promotes, rather than degrades, the recipient's full humanity.

Saturday, August 27, 2005

The Argument from Exclusion

To continue on from my previous raven post, I want to further discuss what I will call "the argument from exclusion" for Hempel's R1 account of confirmation.

As previously explained, I hold that one's sampling method is of crucial importance to determining whether an observed instance of a G-ish F is evidence that all Fs are G. To borrow Blar's example, suppose we're wondering whether there are any extraterrestrials. If God provided me with a random sample of living creatures, and all of them were from Earth, then that would provide me with evidence that all living creatures are from Earth. But I get no such evidence from a sample that is restricted to Earthly creatures to begin with. If I see my neighbour's dog, it is both alive and on Earth, but - contrary to Hempel's R1 principle - this does not provide any evidence whatsoever that all life is on Earth (i.e. there are no aliens).

The argument from exclusion denies this. It begins by noting that there are various rival hypotheses about extraterrestrial life. On one, let's call it H(0), there are no aliens: all living creatures are on Earth. At the other extreme we have H(1), the claim that everything is an alien: nothing but aliens exists. There are infinitely many such hypotheses H(p/q): that the proportion of aliens to objects in the universe is p/q. [We will see later that this strict definition fails to yield the rough description of H(0) given above.]

We can begin by assigning each of these hypotheses a non-zero probability. Now, when we observe my neighbour's dog (or indeed anything at all that isn't an alien), that allows us to rule out H(1). Consequently, the non-zero probability previously attached to H(1) gets reassigned among all the other H's, including H(0). Each will receive some minuscule, but non-zero, boost. Hence, the argument goes, observing my neighbour's dog does (just slightly) confirm that there are no aliens.
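In Bayesian terms this is just conditionalization. On the simplifying assumption that the evidence E bears on the hypotheses only by falsifying H(1) (a sketch, not the full story), the surviving hypotheses get uniformly renormalized:

$$P(H_i \mid E) \;=\; \frac{P(H_i)}{1 - P(H(1))} \;>\; P(H_i) \quad \text{for each surviving hypothesis } H_i,$$

so every survivor, H(0) included, receives a small proportional boost.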

I want to show that this argument fails. I previously showed that it yields a paradox of interpretation, because what conclusions we draw are radically sensitive to how we choose to carve up the space of hypotheses. Suppose that, instead of appealing to the proportion-based 'H' hypotheses, we instead carved up the problem space in terms of cardinality. That is, we consider the range of hypotheses C(n): that there are exactly n aliens in existence.

Now, each partition method is exhaustive. It must be the case that exactly one of the H(p/q) hypotheses is correct, and exactly one of the C(n) hypotheses is correct. Each partitions logical space in such a way as to exhaust the possibilities. There is no possible state of affairs that is not covered by exactly one H hypothesis and exactly one C hypothesis. Okay. Now, it seems that H(0) and C(0) are the same hypothesis. The proportion of aliens to other existing objects is zero iff there are zero aliens (but see below). But while observing my neighbour's dog allows us to rule out H(1), and thus slightly confirm H(0), the very same observation does not falsify any of the C(n) hypotheses. For any n, it is possible for there to exist n aliens plus my neighbour's dog. So the C hypotheses are unaffected by the observation. In particular, C(0) is not confirmed.

Thus we have the following inconsistent triad:
1) C(0) = H(0)
2) H(0) is confirmed by evidence E
3) C(0) is not confirmed by evidence E

Perhaps we should reject (1). If it's possible for there to be infinitely many objects, then H(0) could be true when C(0) is false. The proportion of aliens to other objects could be zero even if there is some finite, non-zero number of aliens. This suggests to me that H's focus on proportions or ratios is misplaced. We're not interested in the relational question of what proportion of objects are aliens. Rather, we're interested in the absolute number of aliens (and, particularly, whether it is non-zero).

This is further highlighted by considering repeated applications of Hempel's R1 principle. It claims that each observed co-instantiation of F-properties with G-properties confirms that all F's are G. So if, after observing the dog, I also observe a rabbit, the rabbit provides further evidence that all living creatures are Earthly. How can the argument from exclusion deal with this? Well, it must say that we rule out the hypothesis E(-1) that everything is an alien except for one thing.

There are two things to note here. Firstly, E(-1) is not part of our previous partitions of the problem space. It does not correspond to any C or H hypothesis. Granted, an exhaustive partition of logical space is provided by the hypotheses E(-n): that everything is an alien except for n things. But this is a bizarre and unhelpful partition. None of the E hypotheses correspond to the desired hypothesis C(0) that there are no aliens. C(0) will be true iff, for some n, E(-n) is true and there are exactly n objects. Depending on how many objects there are, this could potentially end up being any of the E hypotheses. None of them are inconsistent with C(0). So, because they are not rival hypotheses, ruling out E(-1), say, does nothing to confirm C(0).
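The dependence just described can be put formally (a sketch):

$$C(0) \;\leftrightarrow\; \exists n\,\bigl(E(-n) \wedge \text{there exist exactly } n \text{ objects}\bigr),$$

so which E hypothesis coincides with C(0) floats free of the E partition itself, depending on the unknown total number of objects.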

Are there any other exhaustive partitions of logical space possible that would support the argument from exclusion? I can't think of any, but it isn't obvious how to prove this. I guess I need to show that the C partition is the appropriate one to make, such that any other partition that yields contradictory results (e.g. the H partition) must be mistaken.

How can I show this? I'm not too sure. It seems fairly self-evident though, really. If we're wondering whether there are no aliens, this corresponds to the claim that the number of aliens is zero, and this contrasts with all the rival C(n) claims that the number of aliens is n. This partition appeals only to the absolute number of aliens -- it is in this sense the most basic and fundamental partition we could make. The H partition appeals to the relative proportion of aliens to other objects, and so is more complex, depending upon both the absolute number of aliens and the absolute number of everything else. Plus it misbehaves terribly if one allows for the possibility of infinitely many objects. Similarly, the E partition appeals to both 'everything' and the absolute number of aliens. So both H and E can be redefined in terms of C plus some extra variables. It seems fairly clear that there is no simpler possibility than C. Any other such partition is going to build on it in much the fashion that H and E do.

But that's a fairly rough intuitive account, so if anyone can see how to turn this into a rigorous proof, please let me know!

P.S. Put another way, I need to show that there is no possible partition of logical space such that "there are no aliens" [i.e. C(0)] and "everything is an alien" [i.e. E(0), and perhaps H(1)] are rival hypotheses within this partition. But perhaps this follows simply from the fact that C is an exhaustive partition which contains C(0) but does not contain anything corresponding to E(0)? I don't think it is quite this simple though -- one might always respond that it simply requires a finer-grained partition. Perhaps the better answer is to simply point out that E(0) and C(0) are not mutually exclusive: if no objects existed at all, then it would be true both that there are no aliens and that everything that exists is an alien! Perhaps this suffices for my proof?

Friday, August 26, 2005

Inquiry and Deliberation

Returning to the question of whether truth governs belief, I've been thinking about the position of philosophers like Adler, Shah, and now Velleman too, who claim that evidentialism is contained within the very concept of belief. A major motivation for this position (let's call it "conceptualism") is the phenomenology of first-person doxastic deliberation: when deliberating about what to believe, the question of what to believe seems to collapse into the question of what is true. If I offer you a million dollars to believe the world is flat, this practical incentive doesn't carry any weight in your deliberation -- the only thing to influence the immediate outcome of your deliberation is evidence concerning the truth of the belief in question. That's the story. I want to argue that it rests on a question-begging definition of 'deliberation'.

Here's the simple counterexample: Suppose you have a pill that, if swallowed, will cause you to believe the Earth is flat. Now, given my generous offer, you ask yourself what you ought to believe. After reflecting on the costs and benefits of having a false belief in this case, you conclude that you (all things considered, rationally) ought to believe that the Earth is flat. So you take the pill, and receive the million dollars.

Surely this is a case of deliberating about what you ought to believe. But it's a case where you took practical incentives, and not merely truth-indicative evidence, to be relevant reasons influencing your decision. So "transparency" does not hold in all cases of doxastic deliberation after all. There is a special type of deliberation, which we might call rational inquiry, which is (by definition) exclusively concerned with the pursuit of truth and knowledge. But not all deliberation about what to believe need be so constrained.

Now, Shah responds to this line of argument by redefining 'doxastic deliberation':
In the sense I have in mind, deliberating whether to believe that p entails intending to arrive at a belief as to whether p. If my answering a question is going to count as deliberating whether to believe that p, then I must intend to arrive at a belief as to whether p just by answering that question. I can arrive at such a belief just by answering the question whether p; however, I can't arrive at such a belief just by answering the question whether it is in my interest to hold it.

In the practical counterexample, as a result of your deliberation you do end up intending to believe that the earth is flat, it's just that to realize this goal you must take the extra step of swallowing the pill. You can't achieve the goal through deliberation alone. But why should that matter? In any sort of practical deliberation, you end up with an intention to perform some further action. If I decide that I ought to give to charity, I cannot achieve this goal through deliberation alone - I need to actually go out and do it! That doesn't mean I wasn't deliberating about whether to give to charity. So why should it mean that I wasn't "truly" deliberating about whether to believe that p? It seems arbitrary and ad hoc to restrict doxastic deliberation in such a way.

No doubt the conceptualist will want to reject my analogy with practical reasoning -- after all, their whole point is that theoretical reasoning is of an entirely distinct nature. They grant that one can deliberate practically about what to do regarding one's beliefs (e.g. whether to take the flat-earth pill), but they want to distinguish this from theoretical deliberation over what to believe.

I think this separation is artificial, however. In the flat-earth case, the only reason you need to take external action (i.e. the pill) is because it's psychologically impossible to believe at will for non-evidential reasons. If you could change your beliefs by sheer force of will, you presumably would do so. It's just that you lack the capacity. But that fact doesn't seem to be of any great normative significance. Suppose you've already taken another pill, which temporarily gives you precisely this capacity. Then you could come to believe that the earth is flat by deliberation alone. As soon as you conclude "I ought to believe that p", you will thereby find yourself believing p, no further action required.

So doesn't that serve as a counterexample to transparency? Further, I don't see why the stipulations about 'further action' should really matter in principle. It should be enough that one can deliberate and come to the conclusion that one ought to believe that p, for non-evidential reasons. This is surely the crucial step. How one intends to acquire this belief, or even whether it is psychologically possible to do so, are further questions that have no obvious relevance here.

Now, there are deliberative contexts where transparency holds without fail: namely, those I described above under the moniker of "rational inquiry". But this holds trivially. If we define 'inquiry' as the single-minded pursuit of truth, then it should come as no great surprise that we accept only truth-indicative reasons when deliberating in a context of inquiry. Transparency is here built into the very definition of this deliberative context.

So here's the problem for conceptualists: If they define theoretical reason broadly enough to capture any deliberation over what one ought to believe, then it will include practical reasons, as in the case of the flat-earth pill. Alternatively, if they restrict theoretical reason to the specific practice of inquiry, then they have simply built transparency into the definition, and can derive no interesting conclusions from this tautology. Neither extreme can support their claims. And attempts to stake out a middle ground by appealing to our psychological capabilities just seem arbitrary. Why should the nature of a deliberation be affected by how we intend to implement our conclusions?

Upcoming Carnival

We've just passed the first birthday of the Philosophers' Carnival! To celebrate, be sure to submit a post of your own to the upcoming carnival, and maybe nominate a few others too, before the Saturday deadline.

[update: moved to front]

Inconsequential Intuition Test

An exchange I had with Vera recently struck me as making very stark the contrast between consequentialism and deontology. Vera wrote:
Suppose you are approached by credible aliens from constellation Sadists-R-Us, and they tell you that they are planning to destroy planet Earth. However, they say, you can save it if you yourself (and they will provide you with the tools to do it): 1) round up all the children under 10 and torture them to death under excruciating conditions, or 2) infect all humans with a lethal strain of plague (but you, your friends and close relations will have immunity), or 3) roast all your close relatives or friends on the spit alive, and then eat them, or 4) insert the most hideous "unthinkable" evil you yourself can think of. Is there any point along the continuum of evil where you would take a stand and say, no, THAT I will NOT do?

To which I replied:
Let me flip your question around: consider a horrible action that you would be (understandably) reluctant to perform -- say, torturing a baby. Are there any possible consequences that would make you reconsider, that you would be willing to sacrifice your "moral purity" for? Say the aliens would torture every baby if you didn't do this one, and then they'd blow up the world as an added bonus. Is there any point along the continuum of evil where you would take a stand and say, no, THAT I will NOT allow?

Chances are, you'll feel a lot more sympathy for one or other of these two lines of attack. If the first, you exhibit symptoms of deontology, and should consult a health professional immediately for psychiatric evaluation. If the second, you have broadly consequentialist intuitions, and should not be allowed near sharp implements, babies, or political power.

Judeo-Christianity

I've noticed that Americans seem to talk a lot about "the Judeo-Christian faith". While it's nice to see mereological fusions getting a foothold in ordinary language, they do seem to have confused themselves along the way. Judaism is a religion. Christianity is another religion. Together, they make two religions. The fusion of the two religions is one object that is not a religion. Just like the fusion of two people is some third object that is not a person. So I don't think it makes much sense to speak of "the Judeo-Christian faith", unless we're also going to start talking about "the mother-father parent", "the French-US president", and so forth.

Thursday, August 25, 2005

Assorted Comments

My second raven paradox post has now slipped off the main page. But I'd very much welcome any further comments or suggestions regarding how to deal with the "ruling out rival hypotheses" argument (follow link for full details). Same goes for Does Truth Govern Belief? On the one hand, it seems obvious that if I offer you a million dollars to believe the earth is flat, then that provides you with a reason to believe it. On the other hand, it's impossible for you to have the belief whilst recognizing that you have no evidence for it -- practical incentives don't seem, within the context of first-person doxastic deliberation, to count as justifying reasons for belief. So any suggestions on how to sort that out would be most welcome. (I think I want to defend practical reasons here. I'm just not quite sure how, yet.)

Also, you might want to check out the ongoing discussion in the comments to Vera's guest post on the immorality of moral justification. I need to write a followup to that at some point. (Also: if anyone else would like to publish a guest post here, email it to me, and I will consider posting it. No guarantees, of course.)

I've had some interesting discussions on various other blogs. You can find them all via my del.icio.us links, if you're interested. Some recent favourites would be No Right Turn on democracy vs. liberalism; Siris on "the burden of proof" as an obligation of discourse; and Ian Olasov on the Geach sentence.

The latter problem concerns whether one can translate the sentence "Some critics admire only one another" into first-order logic. It's supposed to be impossible. But if Ian's "Link" relation were legitimate, then I suggested we could translate it as follows:

∃x(Critic(x) ∧ ∀y(Link(x,y) → Critic(y)))

In English: there is some critic for whom anyone he is linked to in admiration is also a critic. To be "linked to in admiration" is for either Admires(x,y) to be true, or else for there to be some chain Admires(x,i1) & Admires(i1, i2) & ... & Admires(iN, y), for any number N of intermediate i's. But I'm suspicious as to whether this is a well-defined relation (at least in first-order logic). I'd be curious to hear what others think.
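For concreteness (a sketch of the worry): spelled out, Link is the transitive closure of Admires, i.e. the infinite disjunction

$$\mathrm{Link}(x,y) \;\equiv\; \bigvee_{n \ge 1} \exists i_1 \cdots \exists i_{n-1}\,\bigl(\mathrm{Admires}(x,i_1) \wedge \mathrm{Admires}(i_1,i_2) \wedge \cdots \wedge \mathrm{Admires}(i_{n-1},y)\bigr),$$

and no single first-order formula is equivalent to this in general: the transitive closure of a relation is not first-order definable (a standard compactness argument). So the suspicion looks well-founded.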

Wednesday, August 24, 2005

New Comment Forms

I just installed this hack. The 'x comments' link still brings up the usual Blogger popup window. But now you also have the option to leave a comment using a form right on the post page itself. Let me know if there are any problems with it.

What Behaviour is About

What Behaviour is About: ascribing intentionality to animals

Richard Chappell

Animal behaviour can appear strikingly intelligent and purposive at times, but what lies beneath this appearance? What are the reasons behind an animal’s actions? We commonly explain human behaviour by way of intentional[1] mental states, which represent the world as being a certain way. Humans act so as to achieve their goals. Can the same be said of non-human animals? Do they conceive of the world and act purposefully to achieve their goals, or are they mindless stimulus-response automatons? How can we tell? After clarifying what intentionality involves, I will examine experimental methods that allow us to make inferences about animal minds on the basis of their behaviour.

I will take means-ends reasoning to be the cornerstone of practical rationality. Thus the hallmark of agency is recognizing the link between some goal G and the behaviour B apt to achieve it. Behaviour can be goal-achieving even in the absence of such recognition. For example, ants remove decaying corpses from their nest, which achieves the biological goal of good hygiene. But the ants themselves have no conception of this goal. They merely respond to oleic acid, a chemical produced in the decay process. Bits of paper or even live nest-mates are similarly disposed of, if daubed with the chemical.[2] In this case, the ant is not the agent – it does not recognize the purpose of its own behaviour. Nevertheless, there is a genuine sense in which the ant behaviour is about removing dead nest-mates. After all, it’s no coincidence that ants remove corpses from their nests. Evolution selected for a behaviour to fill this functional role. The ‘agent’ here is natural selection, which ‘chose’ the behaviour precisely because it fulfils adaptive goals.[3]

Intentional explanations, like other causal explanations, can be illuminated via their counterfactual implications.[4] Intentional content can be fixed upon by determining what properties are ‘tracked’ by the organism’s behaviour through relevant possible worlds. Whereas an ‘actual-sequence’ explanation details how a particular event actually came about, a ‘robust-process’ explanation situates events in modal space.[5] It tells us what is common to all the close possible worlds in which the event occurs – explaining the occurrence of this act type rather than token. Taking a broad view, we may note that if oleic acid had not been a reliable corpse-indicator, ants would likely have evolved sensitivity to some other appropriate cue instead. Reliable detection and removal of decaying corpses is what remains constant across the various evolutionary scenarios we might imagine. So it isn’t merely metaphorical to say that this biological goal is the reason for the ant behaviour – it really does offer a robust explanation, at the phylogenetic level.[6] We can thus use it to fix intentional content (of a sort): when ants detect oleic acid, the function of this registration is to detect corpses; that is what it is about.[7] But this ‘biosemantic’ theory offers only a very weak form of intentionality, which doesn’t even require the organism to have a mind. So let’s move on now to considering intentional mental states.

First we must indicate existence conditions for mental states. I will adopt Sterelny’s suggestion that we think of cognition in terms of “flow of control”.[8] Mindless organisms exhibit a “straight-through” flow, with particular stimuli provoking invariant responses, unmediated by internal processing or feedback. Mental ascriptions are only warranted when organisms exhibit behavioural plasticity. Applying this rule to an organism’s registration of information, we may conclude that single-cued tracking does not involve representing the world.[9] Genuine possession of a concept X requires that one “abstracts away from the perceptual features that enable one to identify X’s.”[10] Single cued tracking leaves no room for identification of error – the organism has no other basis on which to identify the X-property. No matter how the acid-coated live ant might struggle as it is dragged from the nest, the other ants have no possible basis on which to revise their conception of the situation, because their ‘removal behaviour’ is triggered solely by the oleic acid cue. Indeed, given the inflexibility of their behaviour – the tight connection between stimulus and response – we have no reason to attribute mentality here at all.
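The contrast between 'straight-through' flow and minimally mediated control can be caricatured in a few lines of Python (a toy sketch; the cue names and weights are invented for illustration, not drawn from the ant literature):

def ant_removal(stimulus):
    # Single-cued and invariant: oleic acid in, removal out. There is no
    # internal mediation, so no room for error-correction.
    return "remove" if stimulus.get("oleic_acid") else "ignore"

def plastic_tracker(stimulus, weights):
    # Multiple weighted cues; the weights could in principle be revised
    # in the light of experience.
    score = sum(w for cue, w in weights.items() if stimulus.get(cue))
    return "remove" if score > 0.5 else "ignore"

live_but_daubed = {"oleic_acid": True, "struggling": True}
print(ant_removal(live_but_daubed))  # "remove" -- the struggling counts for nothing

weights = {"oleic_acid": 0.7, "struggling": -1.0}  # struggling tells against 'corpse'
print(plastic_tracker(live_but_daubed, weights))  # "ignore" -- a second cue can override

Only something like the second control structure leaves room for the organism to get things wrong by its own lights, and hence for genuine representation.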

So, to represent a property X, the organism must be capable of robust, multiple-cue tracking of this property. Otherwise it is incapable of identifying an X as such, distinct from the perceptual cue by which it detects it. This guideline may be fleshed out by appeal to the notion of a unified explanation. If we find that a particular behaviour occurs in a range of situations, we should want to unify these events under a single explanation. To achieve this, we look for something that all the situations have in common. In the ant case we have oleic acid. But other times there may be “no one sensory kind of stimulus”[11] common to all the situations. As such, we may need to look for higher-level explanations. Perhaps warning cries are elicited in situations across which the ‘lowest common denominator’ is that they contain evidence that a predator is nearby. If we can find no simpler way to unify these disparate environments, then this is probably how the cases are united for the animal itself.[12]

So far we have concentrated on animals’ discriminatory powers. Robust, multiple-cue tracking gives rise to representations, the content of which can be inferred from what is common to all the eliciting contexts. But this leaves open the crucial question of how the registered information contributes to the animal’s behaviour.

Genuinely purposive behaviour requires separation of the ‘indicative’ and ‘imperative’ functions of the intentional system.[13] This binary distinction lies at the core of folk psychology. Beliefs represent how the world is; desires, how we want it to be. Either alone is impotent: beliefs are motivationally inert; desires, blind. Our actions arise through utilizing information in pursuit of our goals. But many animals have no such distinction, instead showing a strict connection between information-registration and action. The ant does not distinguish between the indicative ‘here is a corpse’ and the imperative ‘get rid of it!’. This leaves no room for practical reasoning – which, recall, we are taking to be the hallmark of agency. An animal cannot think about how to achieve its goals unless it first represents these goals as distinct from the means by which it may achieve them. Fully-fledged beliefs are ‘decoupled’ from particular actions, providing a general-purpose “fuel for success” that can potentially influence a wide range of different behaviours.[14]

We have now developed our theory to the point where it can be put into practice. Two upshots from the above discussion bear highlighting. First, beliefs should be sensitive to a range of evidence, unlike brittle single-cued tracking systems. Thus, in addition to the ‘unified explanation’ strategy described earlier, we might also conduct experiments based on the idea that intelligent animals might learn to give less weight to cues they’ve recently found to be unreliable.[15] If, in robustly tracking some property X, an animal can pick up on some of its own discrimination errors and thereby learn to better discriminate Xs, then we can be fairly confident in ascribing the concept of X to the animal.[16] Suppose the ants had become suspicious of oleic acid after experimenters started splashing it over their live nest-mates. That is, the next time the ants came across a corpse, they instead checked for other signs of death before engaging in ‘removal behaviour’. Such flexibility would warrant attributing to the ants the concept of death.[17]
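Allen's self-monitoring proposal suggests a simple schematic implementation. Here is a minimal sketch (the update rule and numbers are hypothetical, not Allen's own):

# Cue reliabilities start high; a detected discrimination error (the
# 'corpse' turned out to be a struggling live nest-mate) down-weights
# the cue that produced the error.
weights = {"oleic_acid": 0.9, "immobility": 0.9}

def record_error(cue, weights, rate=0.5):
    weights[cue] *= (1 - rate)

record_error("oleic_acid", weights)  # experimenters daubed a live nest-mate
print(weights)
# {'oleic_acid': 0.45, 'immobility': 0.9} -- future verdicts now lean on
# other signs of death, as imagined above

An animal whose behaviour fit this profile would be treating oleic acid as fallible evidence of death, rather than as a bare trigger.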

The second point to note is that beliefs should be utilized in pursuit of a wide range of goals, rather than being tightly coupled to some particular action only. So we might test whether animals can make use of previously acquired information in novel situations. Thus Allen and Hauser suggest that, if an animal A has previously recognized animal B as dead, then when A is later presented with a stimulus that would normally elicit a response directed at B, A should modify its response in light of its knowledge.[18] More generally, if an animal’s behaviour is sufficiently flexible and responsive to past experience, this constitutes evidence of representation.

The above provides a groundwork for ascribing beliefs. But desires too have their interpretative difficulties. Does the monkey give its warning call because it wants to inform the other monkeys of the leopard, or because it wants them to run into the trees, or from sheer instinct? Again, behavioural plasticity is key – first to establishing the voluntary status of a behaviour,[19] and then its intentional content. We can determine the content by setting up situations in which the various hypothesized goals are in conflict, and seeing which one the animal most reliably produces. For example, if we found that a monkey failed to give a warning call when it noticed that the only endangered animal was a personal rival,[20] then this would help illuminate the animal’s motivation in producing warning calls. It is also worth noting that the earlier suggestions regarding concept-fixation can similarly inform our goal-ascriptions, since one cannot represent a goal about X as such unless one has the concept of X.

As a general rule, cognition is reflected in behavioural plasticity. We can test this by varying the stimuli presented to an animal, and observing its subsequent responses. If the range of behaviour can be simply accounted for in terms of responses to sensory kinds of stimuli, then higher-level intentional explanations are unnecessary. On the other hand, if the animal engages in robust tracking which (1) can most simply be unified under an intentional explanation; and (2) is ‘decoupled’ and used to inform a wide range of behaviours; then we have grounds to consider it an intentional agent. Finally, we can test rival ascriptions by putting them into competition, and seeing which goal the animal seeks to realize. Thus a plausible methodology has been outlined for what might otherwise have seemed an impossible task: namely, making warranted inferences about animal minds based solely on their behaviour.


Bibliography

Allen, C. (1999) ‘Animal Concepts Revisited: The Use of Self Monitoring as an Empirical Approach’ http://grimpeur.tamu.edu/~colin/Papers/erk.html

Allen, C. & Hauser, M. (1991) ‘Concept attribution in non-human animals’ Philosophy of Science 58.

Bennett, J. (1983) ‘Cognitive ethology: Theory or poetry?’ Behavioral and Brain Sciences 6.

Bennett, J. (1991) ‘How to Read Minds in Behaviour’ in A. Whiten (ed.), Natural Theories of Mind. Oxford: B. Blackwell.

Dennett, D. (1987) The Intentional Stance. Cambridge, Mass.: MIT Press.

Heyes, C. & Dickinson, A. (1990) ‘The intentionality of animal action’ Mind & Language 5.

Millikan, R. (1989) ‘Biosemantics’ Journal of Philosophy 86.

Roitblat, H. (1983) ‘Intentions and adaptations’ Behavioral and Brain Sciences 6.

Sterelny, K. (2001) ‘Basic Minds’ The Evolution of Agency and Other Essays. Cambridge: Cambridge University Press.

Sterelny, K. (2003) Thought in a Hostile World. Malden, MA: Blackwell.

Terrace, H. (1983) ‘Nonhuman intentional systems’ Behavioral and Brain Sciences 6.



[1] Throughout the essay I use the term ‘intentionality’ in its technical sense of ‘aboutness’. Intentional states need not be voluntary – comprehending this sentence will cause you to think about elephants, whether you want to or not. I will use the term ‘purposive’ in place of the common sense of ‘intentional’.

[2] Allen & Hauser, p.229.

[3] Dennett, pp.259, 299.

[4] Heyes & Dickinson, p.88.

[5] Sterelny, ‘Basic Minds’, p.207.

[6] The scope of the counterfactual is crucial here. Of course, on the individual level (and given their actual evolutionary history) ant ‘removal behaviour’ merely tracks oleic acid, not corpses. That is why we cannot attribute such rich intentional content to the individual ant, as we will see in the next paragraph.

[7] Cf. Millikan, p.290. I’m speaking rather loosely here. It might be more appropriate to say that what we are fixing here is the intentional content of the ant’s genetic information. But this provides an intentional explanation of the resulting behaviour in much the same way as mental explanations do, if on a somewhat different scale – cf. note 6 above.

[8] Sterelny, ‘Basic Minds’, p.206. I don’t mean to take this as a strict analysis of mentality. All we need for present purposes is a rough indication.

[9] Ibid., pp.210-211. Note that ‘single-cued tracking’ is when an organism detects an object or property (e.g. corpses) via a single cue only (e.g. oleic acid).

[10] Allen & Hauser, p.227.

[11] Bennett, ‘How to Read Minds in Behaviour’, p.102.

[12] Ibid. Instrumentalists about intentional explanation need not commit themselves to such a strong claim, however. In ‘Cognitive ethology: Theory or poetry?’, p.356, Bennett suggests that “if a teleological generalization does work for us – giving us classifications, comparisons, contrasts, patterns of prediction that mechanism does not easily provide – then that justifies us in employing it.”

[13] Millikan, pp.295-296.

[14] Sterelny, Thought in a Hostile World, pp.29-31.

[15] Allen & Hauser, p.232.

[16] Allen, ‘Animal Concepts Revisited’. But see note 17 below.

[17] Or some related concept, at least. Fixing the details is not so easy – perhaps they are instead looking for decay, or hygiene risks, or immobile ants, etc. Nevertheless, the problem should be empirically tractable using the methodologies previously outlined. We can rule out various proposals for X by noting that the organism must be capable of robustly discriminating Xs from non-Xs, and should ideally demonstrate some capacity for learning to improve its discriminatory skills. Experimenting with enough diverse stimuli should (eventually) indicate what environmental properties the organism’s perceptual mechanisms are latching on to (i.e. ‘tracking’).

[18] Allen & Hauser, p.232. They go on to apply this template to the specific case of infant distress calls in vervet monkeys; see p.233.

[19] Cf. Terrace, p.379: “To show that an organism wants to do X, it is necessary to show that there are comparable circumstances in which it elects not to do X.” See also Roitblat, p.375, who distinguishes “positive optionality” – that is, performing an action in the absence of the normal eliciting stimuli – from “negative optionality” – neglecting to perform the action in face of the normal stimuli.

[20] Terrace, p.379.

Sunday, August 21, 2005

Implicit beliefs

Sometimes we gain information (of a sort) without having conscious access to it. This is especially common with procedural knowledge, or "know how". You know how to tie your shoelaces, or ride a bike, but it's very difficult to explicate in words how you do it. Further, psychologists have shown that our conscious beliefs about how we do it are often downright mistaken! Still, there is a sense in which the knowledge of how to do it must be stored inside us somehow -- since we are able to act successfully. What is the intentional status of this 'tacit' or 'implicit' knowledge? Is it representational? Does it exhibit 'aboutness'?

There seems a sense in which my implicit knowledge of how to ride a bike is clearly about riding a bike. That's how I obtained the skill, and also what I use it for. The knowledge has a particular function, which we might consider to be a kind of intentionality.

But perhaps it's a category error to hold skills to be repositories of information. They're not genuine representations -- "know how" is distinct from "knowledge that". Procedural knowledge lacks the 'aspectuality' of intentional states. That is, the skill doesn't involve seeing something as a bike. The skill is purely extensional -- it will apply to anything that has the relevant (bike-like) properties, no matter how I conceive of it. Moreover, the skill might generalize, enabling me to balance better in a wide range of situations, let us suppose. Would that mean that the skill is really 'about' balancing?

These problems suggest that procedural knowledge is not properly intentional. But this seems to clash with instrumentalist conceptions of intentionality. Instrumentalists conceive of mental states as behavioural dispositions: I believe that p if I generally act in ways that would achieve my goals if p were true. But we are disposed to act on our procedural knowledge. To say that I know how to ride a bike is just to say that, when attempting to ride a bike, I will tend to behave in ways that are apt to be successful (e.g. shifting my weight appropriately to retain balance, and so forth).

Suppose the following two facts are true:
1) Cyclists retain balance by shifting their weight.
2) When asked how they retain balance, most cyclists reply, mistakenly, that they twist the handlebars.

Now, what should we say cyclists know about retaining balance? Do they know how to retain balance, or not? In the implicit, behaviour-dispositional sense, sure they do. But they lack explicit awareness of this knowledge/skill, and indeed have false beliefs about it on the conscious level. It seems odd to say that the cyclists believe that shifting their weight will retain balance, since they avowedly deny having any such belief. But I guess one could always respond that we don't have perfect knowledge of our own minds. The cyclists have a false belief about what they believe. Their false belief is a second-order one (it's about another belief); they don't actually have a false belief about how to retain balance.

I guess that makes sense. Though I'm still worried about the apparent lack of aspectuality, as mentioned above. That seems a problem for dispositionalist accounts of belief.

On the other hand, acquired automatic skills do seem to exhibit a sort of purposive intelligence. Does the wicketkeeper intend to make his reflex catch? (Note that common sense 'intentions' involve philosophical 'intentionality', but not necessarily vice versa.) I'm not sure if this question has any clear answer. He's surely responsible for it, and warrants praise, etc. But perhaps he deserves this because of his earlier (voluntary and purposeful) training, through which he developed good reflexes, rather than for the reflex itself. It's not as if he engaged in practical reasoning about how to catch the ball. He just did it. So his present mind lacks intentionality here, because it wasn't involved in connecting the behaviour to its goal. The intentionality is instead found in the mind of his past-self, who undertook the training.

We can now distinguish three levels of intentionality in goal-achieving behaviour (a toy sketch follows the list):

1) The behaviour is purely instinctive. The animal is unaware of the goal (possibly set by natural selection) that its behaviour is directed towards.

2) The behaviour is automatic/reflexive ('instinctive' only in the sense that its proximal cause is thoughtless), but was acquired through past learning which was itself goal-directed.

3) The behaviour is goal-directed: the animal has an internal representation of the goal G, and voluntarily (i.e. non-automatically) produces behaviour B as a means to achieve that goal.
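
For concreteness, here is that taxonomy restated as a toy Python classifier. The boolean flags are hypothetical placeholders standing in for whatever empirical tests would settle them; nothing here is meant as an operational definition.

# Toy encoding of the three levels of intentionality distinguished above.
# The inputs are hypothetical placeholders, not operational tests.
def intentionality_level(represents_goal, learned_via_goal_directed_training):
    if represents_goal:
        return 3  # goal-directed: the animal represents G and chooses B as a means
    if learned_via_goal_directed_training:
        return 2  # automatic now, but installed by earlier purposeful training
    return 1      # purely instinctive: the goal belongs to natural selection alone

# The wicketkeeper's reflex catch, on the analysis above:
print(intentionality_level(False, True))  # -> 2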

Saturday, August 20, 2005

Drives and Preferences

Let's distinguish two kinds of motivating states. On the one hand are non-representational drives, such as lust or hunger. They're pure 'affect'/emotion/sensation. Then there are preferences, which have representational content: they represent some 'goal' state of affairs that the animal deliberately aims to bring about.

Why is this important? Sterelny (TiaHW, pp.92-95) argues that, like decoupled beliefs, preferences increase an organism's adaptive plasticity, improving its ability to make apt decisions when faced with a broad range of possible responses.

For one thing, some biological goals are too complex to be captured by a purely sensory/affective drive (p.94). Consider social status in primates. There is no simple pattern of sensory stimuli common to all status-raising goal states for a biological drive to latch on to. Instead, the animal needs to be sensitive to a wide range of social relations, unified by functional properties that can only be tracked by way of abstract representation (cf. the Bennett quote).

Further, Sterelny suggests that drive structures are "winner-take[s]-all control systems" (p.93) -- "the strongest drive determines the action chosen, and at that point the other drives are epiphenomenal." Balancing different motivators appropriately requires representation. (I'm not entirely sure why -- there seems an air of stipulation about this.) When acting on your strongest preference, you can alter your behaviour in light of your other (lesser) preferences.
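
Here's a rough sketch of the contrast as I understand it, in illustrative Python. All the numbers are invented; the point is just that winner-takes-all control ignores every drive but the strongest, whereas preference-based control lets lesser goals modulate the final choice.

# Winner-takes-all drive control: the strongest drive dictates the action,
# and the remaining drives play no role at the moment of choice.
drives = {'eat': 0.9, 'flee': 0.7, 'groom': 0.2}  # hypothetical strengths
action = max(drives, key=drives.get)              # -> 'eat', full stop

# Preference-based control: each option is scored against *all* the
# agent's preferences, so lesser preferences still shape the outcome.
preferences = {'sated': 0.9, 'safe': 0.7}         # hypothetical utilities
outcomes = {                                      # how well each option serves each goal
    'eat_here':  {'sated': 1.0, 'safe': 0.2},
    'eat_later': {'sated': 0.6, 'safe': 0.9},
}

def score(option):
    return sum(preferences[g] * outcomes[option][g] for g in preferences)

choice = max(outcomes, key=score)                 # -> 'eat_later': safety tempers hunger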

This would make sense if pure drives are defined as leading to involuntary/automatic action. Sterelny writes, "We digest, breathe, and vary our heartbeat rate without any cognitive representation of the metabolic needs these activities service." ('Situated Agency and the Descent of Desire', p.248.) But our eating behaviour is more flexibly controlled, which I guess means that we act on a representation of our internal hunger drive, rather than the drive itself? We can certainly modify our eating behaviour in light of our other goals, so the behaviour isn't the result of any simple competitive drive hierarchy, at least.

I take it the idea behind this is that to engage in practical reasoning about a goal (e.g. sating one's hunger), one must represent that goal to oneself. Otherwise one couldn't reason about it, relating one's beliefs about some possible state of affairs to one's preferential ranking of that possibility. Drives are brute action-drivers, whereas preferences can combine with beliefs to produce more informed action.

Can beliefs combine with drives to produce intentional action? Driven behaviour might be guided by 'implicit beliefs' or information within the organism. But 'driven' action is not, I take it, a product of reflection or rational processes taking place within anything resembling 'central cognition'. (Otherwise I'm really confused about what "drives" are supposed to be.) Any such deliberative action must instead be motivated by preference-representations of the drives, rather than the drives themselves.

Does that sound right?

Nature's Mind

Animal behaviour is geared towards achieving various goals. But whose goals are they? Sometimes the driving force is not the animal itself, but rather, natural selection. Returning to my favourite example, ant behaviour achieves the goal of ensuring nest hygiene by removing the corpses of dead nestmates. But this is not a psychological goal of the ants; they merely react to a chemical cue, and have no concept of death (or hygiene). So who recognized that this behaviour would be a good means to achieving the biological goal? Dennett explains:
Who else but Mother Nature herself? That is to say: nobody. Evolution by natural selection "chose" this design for this "reason."

Is it unwise to speak this way? I call this the problem of free-floating rationales. We start, sometimes, with the hypothesis that we can assign a certain rationale to (the "mind" of) some individual creature, and then we learn better; the creature is too stupid to harbor it. We do not necessarily discard the rationale; if it is no coincidence that the "smart" behaviour occurred, we pass the rationale from the individual to the evolving genotype. [The Intentional Stance, p.259.]

The benefit of doing so is that adopting the intentional stance towards "Mother Nature" offers great predictive power, and a robust-process explanation of why animal genotypes contain the innate information that they do. It's no coincidence that ants remove corpses from their nests. Indeed, if the decay process had released some other chemical than oleic acid, we may be fairly confident that the ant "funeral march" behaviour would have evolved to occur in response to that other chemical instead. This then means we can attribute a sort of intentionality to Nature. The ant behaviour has a function, to fulfill a particular goal (i.e. removing dead nestmates) -- but it is a goal of Nature, rather than of the creature itself. If practical intelligence is the ability to produce goal-achieving behaviour, then the intelligence behind the ant's behaviour is not the ant, but evolution.

Now, we can ascribe a weak sort of intentionality on the basis of biological functions. There is a genuine sense in which the ant behaviour is about removing dead nestmates. Evolution selected for a behaviour to fill this functional role. But we are also interested in a stronger sense of intentionality. Sometimes we want to attribute intentionality to the animal's mind, rather than Nature's. When are such attributions warranted? When it is the animal that recognizes the link between the behaviour B and the goal G; when it is the animal that engages in practical reasoning.

This is where decoupled representation becomes vital. Without a separation between indicative and imperative representations, between stimulus and response, there is no room left for practical reasoning to occur. Thus decoupled representation is a necessary precondition for fully-fledged intentionality in animal minds and behaviour.

It's worth noting that intentionality can survive a separation between nature's goals and the animal's own. For consider sex. The biological function is obviously reproduction -- having sex just happens to be a good way to make babies. But human sexual behaviour can be intentional even if the person doesn't recognize the biological goal. Rather, they might take sex itself as their goal, and engage in practical reasoning about how best to achieve it. This will still require decoupled representations, of course. My point is merely to highlight that one cannot infer non-intentionality merely from ignorance of biological function. The behaviour might be intentional under some other goal description.

Decoupled Representation

I previously discussed Sterelny's distinction between single-cued and robust (multiple-cue) tracking. Robust tracking will be more accurate, flexible, and intelligent. But it might still be tied to particular behaviours. That's where "decoupled representations" come in. Genuine belief-like representations are 'decoupled' from particular actions, becoming much more general purpose. They can potentially influence a wide range of different behaviours.

Millikan, in 'Biosemantics', points out that human minds separate the 'indicative' and 'imperative' functions of their representational system. (Beliefs represent how the world is; desires, how we want it to be. Either by itself is impotent: beliefs are motivationally inert; desires, blind.) Many animals have no such separation. When ants detect the oleic acid of a decaying corpse, they inevitably carry it out of their nest. There is a strict connection between stimulus and response. We cannot separate the indicative representation (of the corpse) from the imperative (to get rid of it). These are not two distinct states within the ant's cognitive system.

Without decoupled representation, there is no separation between identification and action, stimulus and response. This in turn leaves no room for practical reasoning, i.e. cognition which relates means to desired ends, by way of conditional beliefs ("If I perform behaviour B, then I can achieve goal G"). The importance of this will become apparent in my next post.
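
A toy sketch might make the contrast vivid (illustrative Python; the behaviours and goals are all invented). The decoupled case keeps a general-purpose store of conditional beliefs that can be consulted against a separately represented goal; the coupled case is just a lookup from stimulus to response.

# Decoupled: indicative representations (conditional beliefs) stored
# independently of any particular behaviour. (Contents all made up.)
conditional_beliefs = {
    'climb_tree': 'escape_predator',   # "If I climb a tree, I can escape"
    'dig_burrow': 'escape_predator',
    'crack_nut':  'obtain_food',
}

# Imperative side: a separately represented goal.
goal = 'obtain_food'

# Practical reasoning relates the two: find a behaviour B such that
# "If I perform behaviour B, then I can achieve goal G".
options = [b for b, g in conditional_beliefs.items() if g == goal]
action = options[0] if options else None   # -> 'crack_nut'

# Coupled (ant-like): stimulus simply triggers response, with no distinct
# belief or goal states for reasoning to operate on.
reflexes = {'oleic_acid': 'remove_from_nest'}
reflex = reflexes['oleic_acid']            # no room for practical reasoning here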

Sterelny writes (Thought in a Hostile World, p.31):
True beliefs are a "fuel for success": they form an information store about the world that advantages the animal in many different actions, but they are not tied to specific behaviors.

Decoupled representation aids behavioural plasticity, but how does it tie in with the two types of flexibility previously mentioned (i.e. robust tracking and response breadth)?

I think decoupled representation is conceptually, but not practically, independent of robust tracking. Sterelny grants that single-cued tracking could in principle produce decoupled representations. However, he considers it unlikely in practice for such a combination to evolve, as single-cued tracking is relatively unreliable, which decoupled beliefs cannot afford to be.

The relation with response breadth is more interesting. Sterelny argues that "decoupled representation evolves as response breadth increases. It is nothing but very broad-banded response." (p.34) It isn't clear to me that they are the same thing. An animal might choose between a broad range of options on the basis of immediate, functionally-specific stimuli (thus demonstrating response breadth without decoupled representation). Conversely, an animal might utilize long-term memory and decoupled representation to make a binary decision (i.e. a relatively narrow response breadth). So again, I think we should hold decoupled representation to be conceptually distinct from response breadth, even though they may be linked in more practical ways. Though I'm not certain I've understood him correctly, I think Sterelny's main point here is that decoupled representations will help an animal to make successful choices when faced with a broad range of possible responses. So they'll tend to go together -- but this is an empirical rather than conceptual truth.

Friday, August 19, 2005

The Purpose of Blogging

At risk of oversimplification, let's say there are two broad approaches to blogging: you can write for yourself, or you can write for other people. Put another way, one might view blogging as a solipsistic activity, or else as an ongoing dialectic within a community of inquirers.

I mostly write for myself. Putting my thoughts into words can help clarify them, and my archives serve as a wonderful external "memory bank" of past ideas and arguments. Of course, it's certainly nice when other people respond positively, and discussions in comment threads can be both valuable and fun. But I started this with no readers, and no expectations for more, so that certainly isn't my primary purpose.

So, in practice, I'm mostly a blogging solipsist. But I do (perhaps not so secretly) harbour some dialogistic ideals. I write occasional responses to others' posts, offering counterarguments and so forth -- which is at least the first step towards dialogue. And one of the original motivations behind the Kiwi Carnival was to promote reasoned dialogue between blogs and even across party lines. The hope was that bloggers would respond to challenging arguments they found in the carnival. But, sadly, this never eventuated. For example, I've tried to submit carefully reasoned posts on relevant political issues, but they don't get any response. More generally, there's scant evidence to suggest that the NZ political blogosphere in general has any interest in reasoned debate, as opposed to "point and express outrage"-type posts reacting along predictable partisan lines to the latest political scandal.

That's not to say there are no well-reasoned posts out there. There are. But they tend to get lost in the noise. So we don't get the sort of ongoing dialogue that I think could be really valuable. Instead, we merely have isolated bubbles of reflection. Involuntary solipsism, perhaps. Ideally, the Kiwi Carnival would provide 'scaffolding' to encourage reasoned political debate in the NZ blogging community. It could highlight those bubbles of reason, bring them into contact with each other, and stimulate further discussion. That's the ideal, anyway. Of course, it hasn't happened yet. Perhaps it's an unrealistic goal, I'm not sure.

Or perhaps carnivals just don't provide the right sort of structure for this purpose. Maybe we instead need a more specifically targeted form of 'scaffolding'. Say, a page to keep track of particular debates, with links to specific arguments - and counterarguments - under each heading. That way, anyone interested in, say, the S59 "anti-smacking" bill, or the recent blog debates over substantive freedom, etc. etc., could find all the relevant blog posts and arguments from across the blogosphere, at a glance.

Say, I really like this idea! I'm not quite sure how to implement it though. Any suggestions are most welcome. Perhaps some form of 'wiki', so anyone can edit it and add their own arguments to the list, with a minimum of hassle. Though we'd need to protect against vandalism somehow. Any other ideas? (And does anyone else like this one?)
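
To make the suggestion a little more concrete, here's a minimal sketch of the sort of structure such a page (wiki or otherwise) might maintain. The debate names, claims, and URLs below are entirely hypothetical, just to fix the shape of the thing.

# Hypothetical data a debate-tracking page would maintain: each debate
# maps to a question, with arguments and nested counterarguments.
debates = {
    's59-anti-smacking-bill': {
        'question': 'Should section 59 be repealed?',
        'arguments': [
            {'claim': 'Repeal protects children',
             'url': 'http://example.com/post1',
             'counterarguments': ['http://example.com/reply1']},
        ],
    },
    'substantive-freedom': {
        'question': 'Does freedom require real opportunities?',
        'arguments': [],
    },
}

# Rendering is then trivial: one heading per debate, one sub-heading per
# argument, counterarguments nested beneath -- the dialectic at a glance.
for slug, debate in debates.items():
    print(slug, '-', debate['question'])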

Carnivals

The fifth Kiwi Carnival is now up at Random Contributionz, and quite entertainingly presented. I'm not sure whether there will be any more. Half the entries in this last carnival had to be "force"-picked by the host due to lack of submissions. If it's received well enough by the other kiwi blogs, and someone volunteers to host the next one, maybe we'll go ahead anyway. But otherwise, that will be the last Kiwi Carnival.

On a more positive note, the next Philosophers' Carnival is coming up in just over a week -- so start thinking about making a submission!

Flexible Content

In this post I want to explore the relationship between plasticity and intentional content. Intentional explanations are a form of causal explanation (we're claiming that those mental states caused that behaviour), and as such commit us to certain modal claims (different mental states would have led to different behaviour). We can thus use counterfactuals to illuminate our ascriptions of intentionality.

The distinction between robust-process and actual-sequence explanations is especially helpful here. To quote Kim Sterelny ('Basic Minds', pp.207-208):
Distinct explanations of the same event can both be important, for they can convey distinct breeds of modal information. In earlier work I have used the origins of World War I to illustrate this idea. This time, I illustrate it through a more significant conflict, Australia's victory over England in the 1974-5 cricket tests. One explanation of this victory would walk us through a play-by-play description of the tests, detailing each dismissal, run by run and out by out. An alternative would appeal to the strengths and weaknesses of the opposing sides: in particular, Australia's strengths in fast bowling and fielding. These two explanations do not conflict, and each is of value. The play-by-play explanation is an actual-sequence explanation, for it identifies the particular possible world that we inhabit. But if it is true that Australia in that series was much the stronger side, we could know the precise sequence of plays without knowing something very important. Namely, had Australia not won that way, they would have won in another and similar way, with the fast bowlers taking most of the wickets, often to spectacular behind the wicket catches. A robust-process explanation compares our world to others. It identifies an important class of counter-factuals. It identifies the feature characteristic of the worlds in which Australia triumphed in the series.

Sterelny continues: "When representations are causally relevant, they will be relevant as part of robust-process explanations of behaviour." Thus he thinks the key question, when attempting intentional explanations of animal behaviour, is to ask: what tracks the possible worlds in which this behaviour occurs? With basic "reflex-like organisms", the answer will be some simple sensory cues. For example, ants identify corpses to drag out of their nest by the oleic acid produced by the decaying process. Apply this chemical to a live ant, and nestmates will carry it away, despite its lively struggles! The behaviour is dominated by the stimulus, rather than mediated by a more flexible internal representation (which we may assume would be open to contrary evidence, recognizing that real corpses don't struggle, etc.).

So Sterelny insists that "robust tracking", i.e. using multiple cues to identify a property, is a necessary condition for genuine representation. Ants don't have any mental representations of death or corpses; they merely react to the stimulus of oleic acid in their environment. Allen and Hauser, in their 'Concept attribution in non-human animals', similarly hold that to have a concept of X, "[o]ne must have a representation of X that abstracts away from the perceptual features that enable one to identify X's." Because the chemical signal is generally reliable, ants are able to identify dead nestmates. But because of the inflexibility of this single-cue tracking, we should say that the ants do not recognize the corpses as dead. They have no concept of death -- no "internal representation of death that is distinct from the perceptual information that is used as evidence for death."

Building on this foundation, Allen and Hauser go on to propose two general empirical tests to detect animal concepts. First, the animal's behaviour must be sufficiently flexible, using multiple cues to form decoupled representations which in turn impact upon a wide variety of behaviours. Second, animals should be able to revise "what they take as evidence for an instance of that concept" -- giving less weight to cues that they've recently found to be unreliable. If ants had a concept of death, they should have become suspicious of oleic acid after experimenters started splashing it over their live nestmates. The next time they detected it, they should have checked for some other signs of death before starting the funeral march. That's what creatures with concepts would do.
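
Here's a toy model of that second test (illustrative Python; the cues, weights, and threshold are all invented). A concept-wielding creature should down-weight a cue once it proves unreliable, and thereafter demand corroborating evidence before acting:

# Evidence-weighting model of the Allen & Hauser revision test.
# Cues, weights, and threshold are invented for illustration.
cue_weights = {'oleic_acid': 1.0, 'immobility': 0.8, 'no_respiration': 0.8}
THRESHOLD = 1.0   # total evidence needed before the 'funeral march' begins

def judged_dead(cues_present):
    return sum(cue_weights[c] for c in cues_present) >= THRESHOLD

print(judged_dead({'oleic_acid'}))                # True: one cue suffices, at first

# Experimenters splash oleic acid on live nestmates; the cue misfires,
# so a concept-user revises what it takes as evidence for death:
cue_weights['oleic_acid'] *= 0.5                  # down-weighted after proving unreliable

print(judged_dead({'oleic_acid'}))                # False: acid alone no longer convinces
print(judged_dead({'oleic_acid', 'immobility'}))  # True: acid plus a corroborating sign

Real ants, of course, fail this test: no amount of splashing makes them check for other signs of death.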

I'm not entirely sure what to make of all this. It seems possible to respond to any inflexibility by saying that ants have brittle detection mechanisms which mean we can manipulate them into having false "beliefs". Just spray some oleic acid, and ants will think the target is a corpse! It will always be logically possible to save an intentional explanation by insisting that the creature's representations are simply false, unreliable, or impervious to contrary evidence. So it seems we will need to appeal to some sort of simplicity principle, and refuse to accept ad hoc intentional ascriptions unless the evidence clearly warrants them.

J. Bennett too, in 'How to Read Minds in Behaviour', appeals to a form of plasticity. If we find an animal behaving similarly (e.g. giving warning cries) in response to a wide variety of diverse stimuli, we should want to unify this behaviour under a single explanation. He writes:
Even if no one sensory kind of stimulus is shared by all the episodes... they may have something in common that lets us generalize across them, namely the fact that each of them provides evidence to the animal that there is a predator nearby. If they share that, and there is no more economical way of bringing them under a single generalization, that gives us evidence that the episodes are united in that way for the animal itself. That is tantamount to saying that in each episode the animal thinks there is a predator nearby... We get at belief content through what is perceived as common to all the environments in which the behaviour occurs. [bold added]

Then again, even if we grant that the ants do not represent death, why not say that they represent the presence of oleic acid? After all, that is what's "common to all the environments in which the behaviour occurs". I guess it's the simplicity argument again: postulating a mental representation here is unnecessary, and adds nothing to our explanations. Further, there is the principled objection that the stimulus exhibits 'tyranny' over behaviour, which indicates a lack of internal cognitive processing.

Heyes and Dickinson utilize many of the ideas discussed above in 'The intentionality of animal action'. If an action is caused by a belief-desire pair, then we may suppose that the action would not have occurred had either of those mental states been lacking. This generates two criteria for intentional ascriptions. The belief criterion requires that "[t]he behaviour must be sensitive to whether or not environmental contingencies will support a belief with the appropriate causal content." The kicking and screaming of the ants did not support the belief that they were corpses, so we shouldn't attribute any such belief to the manipulated nestmates. (Heyes and Dickinson concede that creatures aren't always perfectly rational, but we need to assume something like rationality for practical reasons. Otherwise we'd have no basis at all on which to ascribe mental content. We need there to be a reliable connection between mental content and behaviour.) Their desire criterion is that "the performance of the action adjusts appropriately to manipulations designed to alter the desire for the outcome." But they grant that this is especially difficult to manipulate in any controlled way (i.e. without causing further, unintended consequences).

I actually don't think the Heyes and Dickinson criteria are especially helpful. Suppose a creature tracked a functional property (say, food) via multiple cues, which were generally quite reliable, but there was some further cue that they were incapable of detecting. We might trick them using a dummy which presented all the former cues and lacked only the latter -- which (suppose) strikes us as a very crucial and obvious cue, the lack of which would "clearly" mean the belief is unsupported. Here it seems a mistake to say the animal lacked beliefs about the functional property. The fact that it was robustly tracked by multiple cues seems sufficient -- the belief doesn't have to be absolutely foolproof. Judgments of whether some situation "supports" the belief are surely relative to the discriminatory abilities of the creatures involved. What beliefs are reasonable for a blind man might not be so reasonable for a sighted person to hold in that same environment.

But then, what if the ants are simply "blind" to the struggles of the pseudo-corpse? The movements impact upon their senses, sure, but if their perceptual systems don't pick up on this as relevant, and so never pass the information along to "central cognition" (if such exists in ants), then isn't this equivalent in principle to simple blindness? If the presence of oleic acid is the only relevant environmental datum that the ant is capable of fully perceiving, can we still hold that a belief about the "corpse" is "unsupported" by the environmental contingencies? Here I find Allen's ideas in 'Mental Content' helpful: he points out that it's fine for individuals to make the odd mistake, or to occasionally fail to distinguish between two properties (e.g. being dead, and being marked with oleic acid) -- humans do it all the time (e.g. gold vs. "fool's gold"). But if one were entirely incapable of making the distinction, in any possible circumstances, then one does not have the concepts. Ants cannot possibly identify corpses independently of oleic acid; therefore they have no concept of death as distinct from the perceptual information they use as evidence for it.