Wednesday, March 30, 2005

Supervenience vs. Hume's Law

Hume's Law: you cannot derive 'ought' from 'is'; in other words, factual premises cannot entail an evaluative conclusion.

The supervenience constraint: the moral facts are, in a sense, 'fixed' by the natural facts. There is no possible world exactly like our own in all factual respects, but in which it would have been 'morally right' of Hitler to carry out the Holocaust.

It seems to me that there is some tension between these two principles. If the natural facts 'fix' the moral facts, then wouldn't that set up an entailment relation between them? If you had an accurate theoretical account of just how this 'fixing' was determined, couldn't you reason straight from the natural facts to a moral conclusion? (And don't we attempt to do this all the time - albeit with imperfect natural and theoretical knowledge?)

So why does anyone believe Hume's Law?

Tuesday, March 29, 2005

Evaluative Meaning (Again)

I've discussed this issue before, but am trying to clarify my thoughts on it. The following is a brief excerpt (minus footnotes) from an essay draft I'm working on. Any criticisms or advice would be most welcome!

Descriptive and Evaluative Meaning

Non-cognitivist defenders of the fact/value gap might seek to ground it instead on how we use language. When one describes a ‘good wine’, there appears to be an evaluative component to the expression which is independent of its descriptive meaning. Two people might agree on all the (descriptive) qualities of the wine – call these ‘X’ – and yet disagree as to whether these qualities made it good or not. What meaning ‘good’ has over and above ‘X’ is the evaluative meaning, that is, the commendation of the speaker.

I think this picture is rather misleading, in that it conflates evaluation with conation. Evaluations need not be intrinsically action-guiding at all; ‘dispassionate evaluation’ is not a contradiction. One might evaluate an object against criteria one cares little for, as does the reluctant judge of some local competition. Note that we may happily grant a fact/attitude gap, and locate values in the former realm. In doing so, however, we will need to provide some explanation of why it is that evaluations are often – if not always – action-guiding.

The best explanation for this is found not in positing some special sort of ‘evaluative’ meaning, but by noting the variety of uses to which language may be put. We may use language not only to describe, but also to warn, recommend, express conative attitudes, and so forth. As McNaughton writes:
In telling my fellow-picnickers that there is a bull in the field I may not only be making a statement but also, in that context, warning them and advising them to take evasive action. This does nothing to show that the sentence ‘There is a bull in the field’ has a special kind of meaning.

So even if sentences are purely descriptive in meaning, they may still be employed for various other communicative purposes. Philippa Foot points out that we often employ the word ‘dangerous’ for the speech-act of warning, much like ‘good’ is often used for commending. But of course we are not thereby tempted to incorporate the ‘warning function’ of ‘dangerous’ into the very meaning of the word, or suggest that anyone may, without mistake, assess as ‘dangerous’ whatever they have a fearful attitude towards. That such language often happens to be action-guiding is more plausibly due to the nature of good or dangerous things than the mere force of the words.

Contrary to the non-cognitivist’s suggestion, description and evaluation are not always entirely independent of each other. For example, the primary evaluative criteria for functional concepts, such as ‘knife’ or ‘pen’, are internal to the concept. Knowing that a good pen must write legibly is part and parcel of knowing what a pen is. Pre-established internal criteria also arise in the assessment of roles, such as ‘father’, ‘farmer’, or ‘patriot’. One could not appeal to just any old fact as evidence that someone is a good father. Descriptive elements put constraints on what might legitimately feature in evaluations.

The entanglement of fact and value becomes even clearer in the case of thick ethical concepts such as ‘cruel’ and ‘brave’. Dichotomists must argue that such concepts can be ‘factored’ into strictly descriptive and evaluative components. It is not obvious how to go about this, however. Indeed, Hilary Putnam argues that it is quite impossible to give the ‘descriptive meaning’ of ‘cruel’ without using the word itself or some synonym. Mastery of the concept, even in neutral (‘descriptive’) contexts, requires sensitivity to the evaluative point of view. This exemplifies a fundamental intertwining of the descriptive with the evaluative.

Sunday, March 27, 2005

Philosophy Quiz

What philosophy do you follow? (via The Hanged Man) My results:
Existentialism
95%
Utilitarianism
70%
Hedonism
50%
Justice (Fairness)
40%
Kantianism
40%
Strong Egoism
35%
Apathy
10%
Nihilism
5%
Divine Command
0%
I wouldn't have thought I'm that much of an existentialist. I fully agreed that we have to find our own purpose in life rather than just follow that provided by received tradition and whatnot. But surely existentialism involves other substantive claims besides that?

Book Meme

I was hoping to avoid this latest meme that's been taking over the blogosphere, but alas I've been targeted twice: first by Scottish Nous, because he hates me, and then by Jason (for undisclosed - but no doubt similarly malicious - reasons).

1) You're stuck inside Fahrenheit 451 [explanation here], which book do you want to be?
The Sparrow by Mary Doria Russell - it's definitely one of my all time favourites.

2) Have you ever had a crush on a fictional character?
Ha, I can't beat Dr Pretorius' answer: "Yes, though only the fictional characters of others, not on characters in literature." Actually, it wouldn't surprise me if I'd also had some literary ones at some point, but I can't specifically recall any.

3) The last book you bought is:
Moral Vision by David McNaughton (text for my metaethics course).

4) The last book you finished is?
The Dark Tower VI: Song of Susannah by Stephen King.

5) What are you currently reading?
A dozen-odd metaethics books for an essay due next week. In my spare time I've been slowly working my way through Derek Parfit's Reasons and Persons (as should be evident from some of my past posts).

6) Five Books you would take to a deserted island:
I generally don't like to re-read books, so I'll just have to guess at what I might enjoy (and I'm excluding the philosophers who I've already mentioned are coming with me to the island).
  • The Dark Tower VII by Stephen King - It's the last one, and I want to finish the series.

  • Anna Karenina by Tolstoy - it's been recommended to me, but I don't know anything much about it.

  • The Conscious Mind by David Chalmers - I'm going to need an antidote to Dennett, after all.

  • Where Mathematics Comes From by George Lakoff - I would dearly love to know the answer to that question. (Whether Lakoff really has it is another matter, I suppose.)

  • The Wheel of Time (Series) by Robert Jordan - Okay, this is a dozen or more books and so seriously cheating. But I read the first eight or nine in early high school, and found them absolutely engrossing. Assuming my tastes haven't changed too much, I really ought to read them again some time, along with any new ones that have been published since.

7) Who are you going to pass this stick to (3 persons) and why?
1- Spanblather, so as to further infiltrate the New Zealand blogosphere with this virulent meme. [Update: Just to be sure, I'll also nominate Greg Stephens here.]
2- Clark at Mormon Metaphysics, because I used to think that Mormonism was just a crazy cult (like the Raelians or Protestants) but now I've learnt that even crazy cultists can have cool blogs.
3- Lindsay Beyerstein, for the twin crimes of leaving me off her philosophy blogroll and neglecting philosophy in favour of politics in her recent blog posts. Tsk tsk.

Saturday, March 26, 2005

Facts, Values, Apples

It's commonly thought that there is a fundamental divide between fact and value, 'is' and 'ought', such that no amount of knowledge about the former could entail any conclusions involving the latter. But I wonder if the same could be said of facts and apples, 'is' and 'eat'. No matter how many non-apple facts you list, you're not going to end up with a ripe juicy apple. You can't even derive any claims about apples. Does it thereby follow that apples are non-natural and fundamentally separate from the rest of reality?

I'm probably missing something. But let's look at the naturalistic fallacy. The 'naturalistic fallacy' is the fallacy of disagreeing with G.E. Moore about whether some property can be given a reductive definition. One proves that an opponent is guilty of this fallacy by professing one's own ignorance; this is known as the 'Open Question argument'. For example, if a chemist proposes that salt can be reduced to sodium chloride, one can expose their fallacy by asking: "I know this shaker contains salt, but does it contain sodium chloride?" The fact that this is an 'open question', as you do not know the answer, demonstrates that salt and sodium chloride cannot be identical after all. For if they were, it would be equivalent to asking "I know this shaker contains salt, but does it contain salt?" and even Cambridge philosophers know the answer to that one!

[Retraction: this post is badly confused.  See here and especially here.]

Friday, March 25, 2005

An Odd Request

The other day I received the following email:
My name is Theresa, i'm a student of The College of Staten Island, New York City. I am a philosophy student, in my first year. I got your mail address from one of the philosophy websites. I just want to ask you for a favour. I have this assignment that is bothering me and i don't know how to go about it . it is
(1) Do you think we have free-will. Give an arguement for or against it'
(2) Is the global skeptisism correct? Give an arguement for or against skeptisism.

I'll appreciate it if you can get this in as soon as possible before tuesday
Thank you

It sounds like she's requesting that I do her assignment for her. Bizarre. Still, I gave her the benefit of the doubt and replied:
Theresa,

What exactly is the "favour" you want from me? The assignment questions seem clear enough. I suggest the best way to "go about" doing them is: (1) read the assigned readings; (2) think about them; (3) discuss the readings with your tutor or lecturer, to iron out any misunderstandings you might have; (4) think some more; (5) write up a reasoned argument to support your position on the topic.

Best,
Richard.

I haven't heard from her since. Can anyone else imagine what she might have meant?

Possibly Necessary

DTWW discusses using modal logic to prove that God exists. In the modal system S5, if something is possibly necessary then it logically follows that it is actual. Think about it: X is necessary if it is true in all possible worlds. X is possible if it is true in some possible world. So if Nec(X) is possible, that means there is some possible world for which X is true in all possible worlds. That could only happen if X really was true in all possible worlds (including the actual one).
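The S5 reasoning above can be laid out as a short derivation (this is just the standard textbook sketch; ◇ abbreviates 'possibly', □ 'necessarily'):

```latex
% Key S5 facts: accessibility is an equivalence relation, so
% iterated modalities collapse; in particular $\Diamond\Box p \rightarrow \Box p$
% is an S5 theorem, and axiom T gives $\Box p \rightarrow p$.
\begin{align*}
1.\quad & \Diamond\Box X && \text{premise: $X$ is possibly necessary}\\
2.\quad & \Diamond\Box X \rightarrow \Box X && \text{S5 theorem}\\
3.\quad & \Box X && \text{from 1, 2 by modus ponens}\\
4.\quad & \Box X \rightarrow X && \text{axiom T}\\
5.\quad & X && \text{from 3, 4 by modus ponens}
\end{align*}
```

So in S5, 'possibly necessary' really does entail 'actual'; the trouble lies in the premises, not the logic.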

Dan then justifies the premises:
Surely it's analytic that God is a necessary being (if God exists at all). And it's equally obvious that God is possible; after all, God is conceivable. Therefore, God exists.

Now, this is a terrible argument for God's existence. We could just as well apply it to any conceivably necessary entity. For example: it seems possible that there could be a necessary blob, therefore the blob is actual.

This is patently ridiculous. I think the problem is due to an equivocation on the modal terms. People don't realise what they're saying when they call something 'possible' or 'necessary' (let alone 'possibly necessary'!) within the S5 system.

Let me use the term "quasi-necessary" to refer to any entity that is necessary if it exists at all; but without prejudice to the question of whether it actually does so exist.

It seems we can conceive of such quasi-necessary entities, be they Gods or blobs. But it would be a mistake to call them 'possible' on these grounds. To make a claim about possible necessity is not just to make a claim about some possible world, but rather the entire collection of all possible worlds. To say that Nec(X) is true at some possible world (i.e. "possibly true") just is to say that Nec(X) is true, simpliciter. (See here.)

Now, we can of course turn the argument around and point out that if it is possible that a quasi-necessary entity not exist, it actually must not exist at all. After all, to say it's possible that not-X, is just to say that X is not necessary. Since it is patently obvious that it's possible for God to not exist -- atheism is not inconsistent -- it follows (via the same logic as before) that God actually does not exist.

Theists will claim this argument is question-begging -- and I agree -- but it is no different from the earlier argument for God's existence. My point is that it is question-begging to make any claims about the possibility of quasi-necessary entities. Claiming it's possible is just the same as claiming it is actual; and claiming it's possibly false is the same as saying that it's actually false. No progress can be made here.

Dinner Table Donts highlights what I think is the core confusion here:
I find it hard to think that God is not at least possible in our given context. Are the intuitions of millions of people, over millions of years suddenly just wrong about a Higher Power? I know the evolutionists will eat me alive for this, but doesn't the intricate design of our world and our universe at least allow for the possibility of a creator?

This confuses epistemic with metaphysical possibility. Given our evidence, it is possible that God exists, and also possible that he doesn't. Either is consistent with the evidence. (Of course, that isn't to say that both positions are equally justified.) But, as already noted, any quasi-necessary claim cannot be both (metaphysically) possibly true and possibly false, for that would make it both actually true and actually false, which is a contradiction.

Going back to my quasi-necessary blob, it seems possible to us that it might exist. At least, it is epistemically possible. (I certainly can't rule out the possibility of a necessary blob.) But, assuming that it does not actually exist, it follows that, due to its property of quasi-necessity, it isn't even (metaphysically) possible. The same goes for God, if understood as a quasi-necessary being. His actuality as a quasi-necessary being would rule out even possible non-existence, and similarly his actual non-existence would rule out his possible existence.

This can be overcome if the blob is not quasi-necessary. Your average blob is metaphysically possible, even if it isn't actual. The same would go for God if we stopped conceiving of him as quasi-necessary. A quasi-contingent God, though actually non-existent, could possibly exist.

But a quasi-necessary God (or blob)? Sorry, not a chance.

Thursday, March 24, 2005

The Ethics of Spam

I assume everyone agrees that spamming is wrong. But what are the ethical obligations of those who receive spam? I want to make the further claim that it is immoral to respond positively to spam (e.g. through clicking a link, or - worse still - actually buying a spammed product). To quote BBC News (via Dr. Pretorius):
The fact that one in ten e-mail users are buying things advertised in spam continues to make it an attractive business, especially given that sending out huge amounts of spam costs very little.

An old PC World article suggests that "even a 0.0001 percent positive response rate constitutes a break-even point". So if you respond to spam, you've just created an incentive for criminals to spam a million other people. I say that makes you seriously morally culpable. Don't do it.
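For what it's worth, the arithmetic is easy to sketch. Only the 0.0001 percent response rate comes from the article; the cost and profit figures below are made-up illustrative assumptions:

```python
# Back-of-the-envelope spam economics. Only the "0.0001 percent
# break-even response rate" comes from the PC World article; the
# cost and profit figures are made-up illustrative assumptions.
emails_sent = 1_000_000
responses = emails_sent // 1_000_000    # 0.0001% = one buyer per million
cost_per_thousand_emails = 0.01         # assumed: a cent per thousand sent
profit_per_response = 10.0              # assumed profit on one sale

total_cost = (emails_sent / 1000) * cost_per_thousand_emails
total_profit = responses * profit_per_response
print(responses)                        # 1 -- a single buyer
print(total_profit >= total_cost)       # True -- one buyer pays for the run
```

On assumptions anywhere near these, a single response funds the spamming of a million inboxes, which is the point.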

The only way spam is going to stop is if we remove the market for it. Each time someone responds to spam, they guarantee the rest of us will suffer more of it. That's just wrong. Given the huge costs imposed on society by even a few responsive individuals, this might justify going so far as to make it illegal to respond to spammed offers. Even if the law couldn't be enforced, it might provide people some extra reason to not respond to spam. At the very least the measure should help highlight the societal costs of spam-responding, and so discourage those who do so out of simple ignorance. What do you think?

Wednesday, March 23, 2005

Solving the Prisoner's Dilemma

A Prisoner's Dilemma arises when two (or more) people can each benefit themselves at a greater cost to the other(s). So if they are both self-interested, they each end up worse off than they would have been if both were more altruistically inclined. This unfortunate result cannot be avoided in one-off dilemmas between rational agents if no communication is allowed. If they can meet beforehand, it would be best for both if each agent could somehow force the other to co-operate. They (rationally) should agree to incapacitate themselves if this would guarantee that the other would do likewise. But in practice this is difficult to guarantee. Fortunately, the problem is alleviated somewhat in repeated versions of the dilemma. A Tit-For-Tat strategy works out best in those cases, as each agent should try to entice the co-operation of the other.
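To make the repeated case concrete, here is a minimal sketch of an iterated Prisoner's Dilemma; the payoff numbers are the conventional illustrative values, not anything from the post:

```python
# Iterated Prisoner's Dilemma with the standard illustrative payoffs.
PAYOFF = {  # (my move, your move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        # each strategy sees only the *opponent's* past moves
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Mutual Tit-For-Tat settles into co-operation (3 points a round)...
assert play(tit_for_tat, tit_for_tat) == (30, 30)
# ...while mutual defectors each do much worse (1 point a round).
assert play(always_defect, always_defect) == (10, 10)
```

The asymmetric match-up is also instructive: Tit-For-Tat loses a little to a pure defector (9 vs 14 here) but never gets exploited for long.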

Those 'selfish' dilemmas are interesting enough, but there is an even more fascinating sort which I want to consider here: moral dilemmas. According to Common-Sense Morality (M), we have special obligations to help certain people (e.g. family) over others. So my moral reasons will differ from yours. But this means we can construct a dilemma whereby each of us can fulfill our own M-given aims at a greater cost to the M-given aims of the other. Consider Parfit's "Parent's Dilemma" (Reasons and Persons, p.97):
We cannot communicate. But I could either (1) enable myself to give my child some benefit or (2) enable you to benefit yours somewhat more. You have the same alternatives with respect to me.

(Note that many-person versions of this dilemma are extremely common in real-life, e.g. the production of public goods.)

According to M, we should both do (1). But our children would be better off if we both did (2) instead. If everyone successfully followed M in such cases, it would in fact be damaging to the M-given aims of each. So M is a collectively self-defeating theory. (It is true that, of the options open to him, each does what best achieves his M-given aims. So M is not individually self-defeating. But morality is surely collective by its very nature. As Parfit says (p.103): "If there is any assumption on which it is clearest that a moral theory should not be self-defeating, it is the assumption that it is universally followed"!)
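The structure of the Parent's Dilemma can be checked with a toy model. The benefit sizes below are my own assumptions; Parfit says only that option (2) benefits the other parent's child "somewhat more":

```python
# Parfit's Parent's Dilemma with illustrative numbers (assumed, not
# Parfit's): option (1) gives my own child a benefit of 2; option (2)
# gives the *other* parent's child a larger benefit of 3.
OWN_BENEFIT = 2
OTHER_BENEFIT = 3

def child_outcome(my_choice, your_choice):
    """Total benefit received by MY child, given both parents' choices."""
    benefit = 0
    if my_choice == 1:
        benefit += OWN_BENEFIT      # I helped my own child directly
    if your_choice == 2:
        benefit += OTHER_BENEFIT    # you helped my child
    return benefit

# If we both follow M and pick (1), each child gets 2;
# if we had both picked (2), each child would get 3.
assert child_outcome(1, 1) == 2
assert child_outcome(2, 2) == 3
# Yet unilaterally switching to (2) leaves my child with nothing --
# the classic Prisoner's Dilemma structure:
assert child_outcome(2, 1) == 0
```

So M is collectively self-defeating here even though, for each parent taken singly, (1) remains the best pursuit of their M-given aims.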

M is therefore indefensible, and must be revised to form a new theory (R) which includes the following claim:
(R1) When M is self-defeating, we should all ideally do what will cause the M-given aims of each to be better achieved.

This claim is about the 'ideal' case where everyone follows the moral theory successfully. In practice, of course, things are not so simple. We also need to know what to do when others fail to act morally. Presumably we should continue to act impartially so long as enough others do likewise. But how much is enough? Parfit answers (p.101):
There must be some smallest number k which is such that, if k or more parents contribute [to some public good], this would be better for each contributor's children than if none contribute... The number k has two special features: (1) If k or more contribute, each contributor is joining a scheme whose net effect is to benefit his own children. The children of each contributor will be benefited more than they would have been if no one had contributed. (2) If less than k contribute, any contributor's children will be benefited less than they would have been if no one had contributed. (1) and (2) make k a plausible moral threshold above which each parent ought to contribute. We can claim

(R2) In such cases, each ought to contribute if he believes that there will be at least k contributors.
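Parfit's threshold k can be illustrated with a toy public-goods model. All the numbers here are assumptions for the sake of the example: each contribution costs the contributor's child a fixed amount and produces a benefit shared equally among all the children:

```python
# Toy public-goods model of Parfit's threshold k (all numbers assumed):
# N parents; each contribution costs the contributor's child `cost`
# and produces `benefit` shared equally among all N children.
N, cost, benefit = 10, 5.0, 20.0

def contributor_child_net(m):
    """Net gain to a contributor's child when m parents contribute,
    relative to the case where no one contributes."""
    return m * benefit / N - cost

# k = smallest number of contributors at which each contributor's
# child comes out ahead of the no-one-contributes baseline.
k = next(m for m in range(1, N + 1) if contributor_child_net(m) > 0)
print(k)  # 3: with three or more contributors, each contributor's child gains
assert contributor_child_net(k) > 0 and contributor_child_net(k - 1) <= 0
```

With these numbers the net gain is 2m - 5, so k = 3: below three contributors a contributor's child is worse off than under universal non-contribution, above it better off, exactly as in Parfit's features (1) and (2).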

So, at least when M is self-defeating, we ought to be impartial so long as we believe that enough others will do likewise. When collectively followed (as any good morality ought to be), this will better achieve the M-given aims of each person, without exception. That is, it will better fulfill even your special obligations.

Parfit also mentions a more sweeping revision (N): that we should always impartially fulfill everyone's M-given aims (so long as enough others follow suit), even when M is not self-defeating. But although this would maximize the fulfillment of everyone's M-given aims, it would not necessarily best achieve those of each person. There would be exceptions. So we cannot compel the M theorist, in their own terms, to accept N. But they do at least have to accept R. And that alone is a very significant result!

Tuesday, March 22, 2005

Rational Irrationality

Sometimes the most rational thing to do is to cause yourself to become irrational, paradoxical though this sounds. Towards the start of Reasons and Persons, Parfit describes a scenario where a robber threatens to kill your children unless you hand over the gold in your safe. But if you could take a drug which would render you entirely irrational, the robber's threats would be made impotent:
Reeling about the room, I say to the man: 'Go ahead. I love my children. So please kill them.' The man tries to get the gold by torturing me. I cry out: 'This is agony. So please go on.'

Given the state that I am in, the man is now powerless. He can do nothing that would force me to open the safe. Threats and torture cannot force concessions from someone who is so irrational. The man can only flee, hoping to escape the police. (p.13)

Threats of all sorts depend on exploiting another person's rationality. We can thus neutralize threats by making ourselves irrational. Since we have good reason to want to neutralize threats, it can be rational to make ourselves irrational!

This can work just as well for offence as defence. Parfit later describes a society of perfectly rational individuals who are also transparent (others can tell whether they are being honest). Suppose one of these individuals could turn themselves into a threat-fulfiller - someone who always carries out their threats, no matter the cost to themselves. Would it be rational to make yourself irrational in such a way? In this case it would. Since you are transparent, whenever you make a threat others would know that you would carry it out if not appeased. And the others are all perfectly rational, so they would always placate you if the threat was serious enough. So if you strapped a bomb to yourself, you could get others to do whatever you wanted simply by threatening to blow everyone up if they didn't do as you say!

Everyone else should then (rationally speaking) impair their own rationality by becoming threat-ignorers and advertising this fact. You would no longer threaten them if you knew that they would ignore it, since then you would have to blow yourself up and of course you would rather not have to do that!

It's been a couple of years since I read Steven Pinker's How the Mind Works, but I think he explained the evolution of overpowering emotions in terms of the advantages such irrationality affords us. The best way to win a game of 'chicken' is to conspicuously remove your steering wheel and throw it out the window. When others can no longer rely on your rationality to compel you to compromise, their own rationality then forces them to surrender. The craziest man is the most dangerous, and the most dangerous man wins. Overpowering emotions such as jealous rage thus make one very powerful. Pinker argues that the jealous man's wife wouldn't dare have an affair if she knew that he'd kill her if he found out.
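The 'chicken' logic can be made precise with a small payoff table; the numbers are the usual illustrative ones, not anything Pinker himself commits to:

```python
# The game of 'chicken' with standard illustrative payoffs (assumed).
# "swerve" = back down; "straight" = hold course.
PAYOFF = {                        # (my move, your move) -> (me, you)
    ("swerve", "swerve"):     (0, 0),
    ("swerve", "straight"):   (-1, 1),     # I lose face
    ("straight", "swerve"):   (1, -1),     # I win
    ("straight", "straight"): (-10, -10),  # crash
}

def best_reply(your_move):
    """My rational best reply, given that your move is fixed and known."""
    return max(("swerve", "straight"),
               key=lambda mine: PAYOFF[(mine, your_move)][0])

# Once you've visibly thrown away your steering wheel you are committed
# to "straight" -- and my rational best reply is to swerve:
assert best_reply("straight") == "swerve"
# Against an opponent known to swerve, holding course pays:
assert best_reply("swerve") == "straight"
```

The visible commitment is doing all the work: by destroying your own ability to respond to my choice, you turn my rationality against me.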

I'm not entirely convinced of the evolutionary tale, but the game-theoretical issue is certainly an interesting one. And it doesn't only apply to rationality, but indeed to any goal-directed activity which may sometimes be best achieved through indirect means.

Thus utilitarians could argue that morality requires us to form our character in such ways that we are disposed to sometimes act wrongly (e.g. saving your own child at the cost of many others' lives). Such favouritism is still wrong, just like ignoring the threat-fulfiller's threat is still irrational. But it is justified in an indirect sense, since it results from dispositions that are dictated by the goal in question. It is a case of what Parfit calls blameless wrongdoing.

Good To & Good For

Regular readers will know that my theory of welfare is that our individual good consists in objective desire fulfillment. What matters is whether the world really is the way we desire it to be, not whether we merely believe it (and so are happy). The linked post supports this claim by appeal to thought experiments where we would prefer to be mistakenly sad rather than mistakenly happy.

Over on the NZPhilosophy blog, Patrick argues that these thought experiments are irrelevant:
There are things I value more than my happiness. Perhaps, however, it would make sense to say that even though I prefer [mistaken sadness], I would be much better off [if mistakenly happy].

The basic claim is that we may simply care more about things other than our own wellbeing. What we value -- what is good to us -- may not be what is good for us. This sounds plausible. If it is true, then the fact that we care more about things other than our happiness does nothing to refute welfare hedonism. It could still be true that wellbeing simply consists in happiness, even if we value other things more.

This could explain an oddity I wrote about last year: my intuitions about welfare vary depending on whether I judge a case from the first person or the third person. In the third person I am much more inclined towards hedonism than I otherwise would be, at least in cases of other-regarding desires, e.g. about the health of loved ones.

But I think other-regarding desires complicate things. Plausibly, what is good to us is the fulfillment of all our desires, but other-regarding ones may not contribute to our welfare (what is good for us). This could explain my intuitions without recourse to hedonism. So let's consider a case involving self-regarding desires that may resolve this question:
Molly the Mathematician
Suppose Molly spent her whole life trying to prove a fiendishly difficult theorem. She finally thinks she's achieved it, and has some other mathematicians check her proof. Molly receives their answer, believes it wholeheartedly, then dies the next day. It is later discovered that she was told the wrong answer.

Which of the following scenarios is better for Molly?
1) The mathematicians (mistakenly) tell her the proof is flawed, when in fact it is correct.

2) The mathematicians (mistakenly) tell her the proof is correct, when in fact it is flawed.

Now, it seems to me that (1) is better for Molly, even from the third-person perspective. (There is little question that Molly herself would prefer (1) to be the case, as would I if I was in her position.) I'd be interested to hear what others think. If my intuition is accurate, it follows that welfare hedonism is false. We should instead adopt some (perhaps restricted) version of the desire fulfillment theory. Needless to say, I would be happy with that result. (Question for hedonists: if I'm wrong about this, should you inform me?)

Monday, March 21, 2005

Philosophers' Carnival #11

The 11th Philosophers' Carnival is now up at Clayton Littlejohn's blog.

There are a lot of interesting entries on offer -- mostly because Clayton did such a great job "scour[ing] the blogosphere" for extra entries to include. Next time be sure to submit a post yourself to help ease the load of our overworked hosts!

Update: My favourite posts this time around were probably Tucker on metaphysics, Clark on maths, Hugo on scientific realism, and Chris Ragg on degrees of belief (which ties in nicely with a previous post of mine).

Sideblog Syndication

My sideblog now has an RSS feed:
http://www.sideblog.com/prem/rss/pixnaps.xml

Very convenient. Simply click here to subscribe with Bloglines.

Sunday, March 20, 2005

Branching Timelines

Enwe has an interesting post on time travel...
In the year 2007, after the horrible war on Iran, a young man travels back to the year 2005 - he wants to convince Bush not to attack Iran... Let us assume that Bush believes that the time-traveler told him the truth. In the best of all possible worlds he will then immediately call C. Rice so that she stops all intended attacks. But then it is unclear why the time-traveler is still standing in Bush's office - at least there is no longer any motivation to travel back from 2007 to 2005. Assuming that there still must be some motivation, Bush also has to believe that the war on Iran will happen - whether or not he now calls Rice. But then it seems rather doubtful why Bush should believe what the time-traveler says and why he should act accordingly.

Note that this reasoning assumes that there is only one time-line, such that events either occur or do not occur. (Probably a reasonable assumption, but it's worth noting nonetheless.)

I think in the above scenario, the astute time-traveler would also contact his past self to notify him that he must (in future) travel back in the past to warn Bush in order to prevent the war. In such a case, the war might never have actually happened at all. Perhaps the only reason Traveler goes back in time is because his future self (whom he trusts) had earlier told him to! Such 'loops' are troublesome, however, since it isn't clear how they could start in the first place. (Where is the ultimate cause? It's all so circular.)

[I'm reminded of an old post from DTWW which summarizes a Heinlein story involving a fascinatingly devious causal loop.]

But what if we allow for multiple timelines? Perhaps in one timeline there really is a war, but when Traveler goes back in time the universe 'branches' in some sense, so that he successfully brings about a new world where the war does not occur. Is that possible? Does it properly count as time travel, if he's not really traveling back into his own past, but rather creating a whole new one?

The attraction of this view is that it frees time-travelers from the strict demands of their own timeline's past. I want to say that once events in a particular timeline are determined, each moment is forever fixed and cannot be changed. If something occurs, you cannot make it not occur. So it's impossible to actually change the past through time travel. All you can do is fulfill your 'destiny', carrying out those actions that your future self has (in a sense) already done. That is, all you can do is make your causal contribution to bringing about those past events that already occurred.

So it's impossible for Traveler to prevent a war that has already happened. Either it never occurred in the first place (as in the loop described earlier, where he only ever went back in time because his future self told him to), or he didn't really change his past, but rather created some altogether new one.

Does that sound plausible? I'm fairly short of sleep, so I can't be entirely sure that I'm making sense. And the topic does lend itself to insensibility. For a moment I wondered whether branching timelines could explain (i.e. originate) causal loops. But on second thought, that really doesn't make sense at all.

Thursday, March 17, 2005

General Freedom

I previously advanced an expansive understanding of freedom, such that any constraint whatsoever counts as an impingement upon it. The laws of nature are such that I am not free to jump over the moon. But one might question the importance of this conception. At least, it needs to be supplemented by some account of our 'on balance' freedom, or freedom simpliciter. Although I lack the particular freedom to jump over the moon, it would be most implausible to conclude that I am thereby unfree in any general sense. Some particular freedoms are more important than others. In this post, I want to explore the question of how we might weigh particular constraints as being more or less important, which may then serve as an account of general freedom.

The whole point of freedom, by my earlier conception, is that it enables us to achieve our goals or desires. So this strikes me as the logical place to look for a scale against which to weigh constraints. The simplest view which springs to mind might be called the 'actual desire' account:

(AD) Constraints are to be weighed according to the strength of the desires that they thwart. I thus have 'general freedom' to the extent that I can get what I want.

One objection appeals to the happy slave scenario, where we feel the happy slave is unfree despite the fact that he can still achieve his strongest desires. If one is unfree, one cannot change this simply by ceasing to care about it. The facts are unchanged - one remains unfree whether one cares or not.

To some extent I agree with this - the happy slave has particular unfreedoms insofar as he must work for his master and lacks other options. But 'general freedom' is a rather different concept. It is not just a matter of the ways in which you are constrained, but of which constraints actually matter. Insofar as the happy slave is able to achieve everything he desires through his servitude, he suffers no significant constraints.

Alternatively, we may feel that (AD) is too narrow. After all, if the slave had changed his mind, he would not be able to act on his new desires. The fact that he lacks this ability might be deemed significant, whether or not his desires are actually thwarted. If you feel swayed by such considerations, you might prefer a broader 'possible desires' account of general freedom:

(PD) Constraints are to be weighed according to the strength of a possible desire they might thwart, and the likelihood of your having such a desire. I thus have 'general freedom' to the extent that I can get what I want in this and all close possible worlds.

This view has some nice results. It suggests that the happy slave is unfree, because he could very easily have had different desires and thereby felt the force of the constraints upon him. But (PD) also suggests that I am not made unfree simply because I cannot jump over the moon. There is no nearby possible world where I would have a genuine desire (as opposed to a mere whim) to jump over the moon. So this constraint is not a significant one.

One might object that my desire-based approach effectively conflates 'general freedom' with wellbeing - either actual wellbeing, as in (AD), or our wellbeing in close possible worlds, as in (PD). I'm not sure whether this is really a problem, though. My fundamental 'freedom' concept is that of particular freedom, which remains appropriately distinct. It is only when we ask which freedoms really matter that our answer collapses into wellbeing. But since wellbeing is what matters, it should come as no surprise that the forms of freedom that matter are those that enable our wellbeing! It's almost tautological. But perhaps I've been discussing the wrong sort of significance. Perhaps general freedom is not about those freedoms that are significant qua good life, but rather, those freedoms that are significant qua freedom.

This notion may be illuminated if we consider the flipside of freedom: coercion. On G.A. Cohen's analysis, one is 'forced' to do something when there are no reasonable or acceptable alternatives (e.g. "your money or your life!"). This suggests a 'reasonable alternative' account of general freedom:

(RA) S is free in doing A iff S has a reasonable alternative to A that she could have chosen instead.

This is fairly vague, and it isn't immediately clear how to cash out this notion of a 'reasonable alternative'. One option would be to appeal to a probabilistic or statistical notion of what the average person would consider reasonable in that situation. But I think a better option is to judge this based on the specific individual in question. We could do this through a possible worlds analysis, i.e. something like:

(PW) B is an acceptable alternative to A iff it is chosen by the agent in some close possible worlds (and the relevant details of A and B remain fixed across these worlds).

(This bears some rough similarities to an account I've suggested before, though in a somewhat different context.)

Cohen suggests an expected utility analysis of reasonable alternatives:

(EU) B is an acceptable alternative to A iff: either B is not worse than A, or B (though worse than A) is not thoroughly bad.

This then leaves the notion of 'thoroughly bad' unspecified, but it could be spelled out as some absolute threshold of wellbeing.
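For concreteness, the (EU) condition and the (RA) verdicts it generates can be modeled in a few lines. This is purely my illustrative toy, not anything Cohen offers: the function names, utility numbers, and the numeric 'thoroughly bad' threshold are all hypothetical stand-ins.

```python
# Toy model of Cohen-style 'acceptable alternatives' (EU) and the
# reasonable-alternative account of freedom (RA). Utilities and the
# 'thoroughly bad' threshold are invented for illustration.

THOROUGHLY_BAD = 2  # assumed absolute wellbeing threshold

def acceptable_alternative(u_b: float, u_a: float) -> bool:
    """(EU): B is acceptable relative to A iff B is not worse than A,
    or B, though worse, is not thoroughly bad."""
    return u_b >= u_a or u_b > THOROUGHLY_BAD

def free_in_doing(u_a: float, alternatives: list[float]) -> bool:
    """(RA): S is free in doing A iff some alternative is acceptable."""
    return any(acceptable_alternative(u_b, u_a) for u_b in alternatives)

# The magic-button case: the chosen option is superb, the (never
# considered) alternatives thoroughly bad, so (RA) counts me 'unfree'.
print(free_in_doing(100, [1, 0]))   # False

# Uniform misery: every option equally bad, so each alternative is
# 'not worse' than the one chosen, and (RA) counts me 'free'.
print(free_in_doing(1, [1, 1]))     # True
```

The two printed cases are just the counterintuitive consequences discussed below, run through the definition mechanically.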

Now, these reasonable alternative views of freedom end up being quite radically divorced from our notion of wellbeing. If I can push a button to get everything I desire, then I'm obviously never going to accept any alternative. The option is much better than any other, so much so that I would choose it in all close possible worlds. If we further suppose that the alternatives (which I never seriously consider anyway) are "thoroughly bad", then, according to (RA), I am thus 'unfree'! Conversely, if all my options are equally miserable, then I am apparently 'free', because whichever I end up choosing, there would have been an alternative just as good that I could have chosen instead.

Given such consequences, I'm not really sure that (RA) is a very interesting or important concept. I actually prefer the concepts of general freedom that are tied more closely to wellbeing. They make it more obvious just why freedom is worthwhile. If freedom is not tied to wellbeing in any way, why should we care about it?

New Template

I've made some significant changes to the site template, so... what do you think?

Note that the semi-transparent backgrounds will only display properly on decent browsers like Firefox. Internet Explorer renders them opaque, and the wrong colour to boot. So if you think it looks ugly, first check that this isn't the reason. (And if it is, take advantage of the big 'Get Firefox' button I've added to the sidebar!)

Otherwise, let me know of any bugs, problems, or just general suggestions you care to offer. (I'm thinking of cutting the blogroll back down to about half its present size. Any objections?)

Wednesday, March 16, 2005

Elsewhere...

I've a post up at Splitting Atoms on Jay Wallace's "The Deontic Structure of Morality", which may be of interest to some readers.

P.S. Please do comment on my recent post on moral naturalism. I have to write an essay soon, on a related topic, so any criticisms would be most helpful (by forcing me to strengthen my position). I especially can't believe you're all letting me get away with my claim that ethical naturalism is "easy" - I'm still waiting for the obvious rejoinder that it's too easy, and doesn't really establish as much as we might have hoped for...

Tuesday, March 15, 2005

Upcoming Carnival

The eleventh Philosophers' Carnival will be held next Monday 21 March, hosted by Clayton Littlejohn.

Start sending those submissions in...

Monday, March 14, 2005

Natural Foundations

Full-blown moral nihilism strikes me as unnecessary. A meaningful notion of 'the good' can be grounded naturalistically, so we needn't worry that it commits us to metaphysical flights of fancy.

I should begin by conceding that the universe doesn't care what we do or how we fare. Value doesn't come pre-built into the fabric of the cosmos. If something matters, it's because it matters to us. And, indeed, things do matter to us. We have desires, and their fulfillment is of value to us.

So I think the existence of agent-relative non-moral value is particularly difficult to deny. Surely everyone would agree that some things can be better or worse for me. (I argue for the more specific claim that what is good for me is the fulfillment of my desires, but the specifics are not important here.) But once we grant the existence of non-moral value, we can easily construct an agent-objective aggregate, which could plausibly be equated with moral value.

There's a particularly nice argument to this effect in Stephen Darwall's Philosophical Ethics (p.125), inspired by J.S. Mill:

1) Morality, by its very nature, is concerned with what is good from the perspective of the moral community.
2) What is good from the perspective of the moral community is the greatest amount of what is good to the individuals comprising it.
3) What is good to any individual is that person's pleasure or happiness.

The details of (2) and (3) could be revised without damaging the overall force of this argument for naturalism. For instance, one might replace the maximization principle of (2) with a more Rawlsian 'maximin' principle, or something along those lines. The result would no longer be utilitarianism, but it would be no less naturalistic. Similarly, you can plug your favourite account of wellbeing (or non-moral value) into premise (3). The overall structure is flexible enough to handle it. Really all we need is the eminently plausible premise (1), in conjunction with any naturalistically specifiable account of "what is good from the perspective of the moral community" (which shouldn't be too difficult to provide). 

When ethical naturalism is this easy, why would you ever resort to nihilism?

Big Force "Oughts"

I started writing a reply to Jim Ryan's comment to my previous post, but my comment became long enough to warrant a post of its own. So here it is...

Note that the psychopath (devoid of altruistic desires) can do wrong even though he has no reason to avoid doing so.

Yes, I very much agree with that. What I wasn't so sure about is what it means to say he "ought" not act wrongly. If the prescription is redundant, adding nothing new to the general value claim, then that's unproblematic. But if it's supposed to express some further fact, then I'm not sure what it is.

Our psychology is such that we are motivated to fulfill our (own) desires. This is how I understand prudential prescriptions as having "force". Motivational force is a perfectly naturalistic and comprehensible phenomenon. But the force of morality is less clear, since we are not intrinsically motivated to fulfill other people's desires.

If morality lacks intrinsic motivational force, what other "force" is left? The idea of some supernatural "Big Force" behind morality makes no sense to me (literally - I have no idea what is meant by such claims - does anyone else?). But I don't think it really matters if morality doesn't have prescriptive force in this sense. We have other reasons to care about and promote it, as I argue here.

So, relating this back to my rejection of analysis (1), I can understand (1) only if we interpret 'ought' in the weak sense of 'morally ought' that effectively reduces to the descriptive statement "X is morally right". But of course it would be empty/circular to then use this as an analysis of morality! I prefer to take (3) as fundamental, and interpret (redefine?) normativity in terms of this.

Jim makes another interesting point which I wanted to address:
The key is that if the man on the street means by normative force what the Big Force people do, you're sunk and naturalism fails. You would then have to choose full-blown nihilism about morality or accept Big Force non-naturalistic objective realism. I maintain that the man on the street agrees with me that motivation not to do wrong to the old lady boils down to your not desiring her to suffer harm. ("I would hate to see her crying over her lost wallet later on.") He doesn't think of Big Forces. ("I don't care about the old lady, I just don't like to do anything Wrong," is something he doesn't say.) Thus naturalism goes through and the Big Force objective realist is left out in the cold implausibly maintaining that you need Big Force to avoid full-blown nihilism about morality.

That sketch of the "man on the street" seems fairly plausible to me. But I'm not so sure it matters either way. Why should we care what ordinary language refers to? Reality remains the same no matter how we describe it. Suppose we stipulate that "morality" refers to the Big Force nonsense. I'm then a nihilist about morality. But that's fine, because I don't think that's what's important. Let's use the word "schmorality" to refer to what I mean by 'morality', i.e. analysis (3) discussed previously. I'm a realist about schmorality, and can give a naturalistic account of it. Everyone (collectively) has reason to care about and promote schmorality. This is the concept that matters. If ordinary language is talking about something else, then so much the worse for ordinary language.

Saturday, March 12, 2005

What is Morality?

I've been wondering: what is morality about? I mean, what is the most fundamental analysis or 'essence' of the concept? Here are the possible answers I've thought of so far...

1) Morality is fundamentally concerned with prescriptive force - what you really ought, all things considered, to do.

The problem with this view is that I really haven't the faintest clue what 'prescriptive force' is, in any categorical sense, or whether it even exists. Same goes for the idea of what you 'really ought' to do (well, it's the very same idea, I think). I just cannot imagine what those words are supposed to mean. So long as I don't think about it too much, I can use the words competently; I know how to use prescriptive language. I just don't get what it means - what sort of claims I'm making in doing so. (I'm tempted to go non-cognitivist on this one. But not on morality; see below.)

2) Morality is about acting on reasons, or doing what you have most reason to do.

Perhaps (1) just means the same as (2)? I can at least understand this one, I think. Though if reasons are anything like I suggested in the linked post, then this would lead to ethical egoism, which is very different from how morality is usually understood. Perhaps that simply shows that my previous post was wrong? But if reasons are something else entirely, then I'm not sure I understand them after all.

It seems obvious to me that a selfish rogue could easily have reason to act immorally. For some people, given their aims/desires, it would be downright irrational to do the right thing. So I think it's a mistake to conflate (individual) rationality and morality in this way. I much prefer the next option...

3) Morality is about what's good for everyone, socially rational, or justified from an impartial point of view. (I take those three to be equivalent.)

One might complain that any given individual won't necessarily have reason to act altruistically. But given my earlier remarks, it should be clear that I consider this a point in favour of this analysis!

[Updated to add:]
4) Norm expressivism. Moral language is for expressing the mental state of accepting a norm.

I think this is far and away the most plausible form of moral non-cognitivism. But that's not saying much. I still think it's silly to deny that moral beliefs are, well, beliefs, i.e. have cognitive content and so can be true or false.

Can you think of any other options? Which analysis do you prefer, and why?

Reasons

Here's a very simple view about reasons. It makes just two claims:

(O) You have (objective) reason to do that which would best fulfill your desires in actual fact.

(S) You have subjective reason to do that which is most likely to fulfill your desires, given your beliefs.

This makes it clear why true beliefs are so worthwhile. They help us to close the gap between subjective and objective reasons, allowing us to notice (and act on) the best reasons. But this benefit would accrue to any theory which made a similar distinction between objective reasons and belief-relative 'subjective' ones. So the real issue here is with the substance of (O) - that we only have reason to fulfill our desires. (See also my related post on the desire-fulfillment theory of wellbeing.)
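The (O)/(S) gap can be made vivid with a toy numerical model. Everything here is my own illustrative invention (the option names, the fulfillment values, the agent's credences): objective reasons rank options by actual desire-fulfillment, subjective reasons by expected fulfillment given the agent's possibly mistaken beliefs.

```python
# Toy model of the objective/subjective reasons distinction.
# Scenario (invented): the agent believes rain is 90% likely,
# but in actual fact it will not rain.

actual_fulfillment = {"take_umbrella": 5, "leave_it": 8}

# Belief-relative outcomes: mapping of fulfillment value -> credence.
believed = {"take_umbrella": {5: 1.0},
            "leave_it": {0: 0.9, 8: 0.1}}

def objectively_best(options):
    """(O): the option that would best fulfill desires in actual fact."""
    return max(options, key=lambda o: actual_fulfillment[o])

def subjectively_best(options):
    """(S): the option most likely to fulfill desires, given beliefs."""
    return max(options, key=lambda o: sum(v * p for v, p in believed[o].items()))

opts = ["take_umbrella", "leave_it"]
print(objectively_best(opts))    # leave_it (it won't rain in fact)
print(subjectively_best(opts))   # take_umbrella (expected 5 vs 0.8)
```

True beliefs close the gap: if the agent's credences matched the facts, the two functions would pick out the same option.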

I guess the core assumption behind this view is that a reason is necessarily something that would guide the action of an informed agent. But according to the belief-desire theory of action, all action aims at desire-fulfillment. So the agent would treat as action-guiding only information that was relevant to the fulfillment of his desires. If you presented him with a 'reason' which appealed to some end that he did not share (i.e. desire), then he would not consider it a reason to act at all.

Any objections? Could there be reasons that informed agents would not care about?

Friday, March 11, 2005

The Speed of Thought

I've been thinking about that inner voice you 'hear' as you think. Is that just what you're thinking - are you 'hearing' your thoughts? Or is it something different - perhaps only a subset of thoughts, those that are conscious? Can you have subconscious thoughts? (Of course most of our cognitive processing occurs below the conscious level, but can they properly qualify as thoughts?)

How fast can you consciously think? I seem to be limited by the speed of my tongue, which seems rather arbitrary and bizarre. Even though I don't actually use my mouth to 'talk' to myself, it still constrains me in some sense. It's as if I must employ imaginary correlates of the vocal organs in order to produce the imagined words in my head - and these imagined correlates are just as slow as the real things! I can't think a stream of words any faster than what I would be able to vocalise. Why is that? (And do others find the same thing?)

Do we really think in our spoken language, or is it just the end result? I vaguely recall reading a Heinlein novel where people were taught a much compressed language which then allowed them to think several times faster than usual. Could that really happen?

It seems pretty implausible to me. I guess it would speed up conscious thought, at least, by loosening the "vocal" constraints I described above. But I doubt it would greatly influence our underlying thought processes. I don't know much cognitive science yet, so I'm stuck with the notoriously unreliable method of introspection as my only source of evidence. But it seems to me that thoughts just kind of pop into my head. That is, most of the hard work seems to be done by my subconscious; my conscious mind just takes the credit!

I'm not even sure that all conscious thoughts occur in our spoken language. One can also think in images, for example. Further, it sometimes seems like I can have a thought that is purely semantic; a 'meaning' without any 'vehicle' (e.g. words or images) to carry it. They're not very clear, is the problem. I might have this odd 'feeling', and then nod to myself and think: "hmm, yeah, that seems right... now how do I spell that out?" So, unless I'm delusional (which I wouldn't rule out!), it looks like sometimes the thought may come first, to be 'translated' into language later.

A better example might be of that common experience where you are grasping for a word that is just out of mental 'reach'. You know the meaning of it, but you can't put it into English. You can't remember what word encapsulates the meaning you have (literally) in mind.

One last thing: have you ever noticed that, when shadowing another speaker, you hear your internal voice as being simultaneous with the actual speaker? I noticed this in my psyc lectures last week; I would imagine my inner voice 'talking' along with the lecturer, and I'd hear them both at the very same time! But it must take time for my mind to process what I hear and then reproduce it, so I think this must just be a nifty example of the difference between objective and subjective time that I've discussed before. (In short: my brain represents a later event as happening simultaneously with the earlier one. It then seems simultaneous to me, because all I have access to is the representation, not the underlying neural 'vehicle' that may have quite different temporal properties.)

If I tried to speak out loud, the illusion of simultaneity would be dispelled, of course. But why does the internal voice behave so differently? One might argue that a delay is involved in physically vocalising one's thoughts and then hearing them again. Or perhaps the brain can more easily manipulate the temporal properties of our representations of internal events? (That is, although the thoughts actually occur after hearing the speaker, we can more easily represent them as occurring simultaneously, which is why it seems to us as though they are simultaneous, when really they're not.)

Well, that was some fun speculation. Hopefully someone who actually knows something about this stuff can set things straight...

Update: Mixing Memory responds.

Wednesday, March 09, 2005

NZ Philosophy Blog

Anyone who's snuck a look at my Blogger profile page in the last couple of months might have noticed Splitting Atoms (since renamed Prior Knowledge) - a group blog for New Zealanders that I've been working on putting together. It's taken a while, but the blog is now officially 'open' for public viewing, so head on over and have a read. Feel free to spread the word and add the site to your blogroll.

Most of the contributors are students, but we've also had a couple of noteworthy professors sign up - Graham Macdonald and Douglas Bridges - so I'm especially looking forward to hearing from them...

Update: click here to subscribe to the site feed with Bloglines.

Tuesday, March 08, 2005

Freedom and Autonomy

In my previous post I discussed how freedom might be understood in relation to constraints. This is the basis of 'negative liberty', though I argued that the typical analysis was impoverished because it was concerned only with positive external constraints. A richer understanding of freedom results if we also recognize the importance of negative and internal constraints.

But there is a different way to approach the concept of 'freedom', and that is in terms of autonomy (sometimes called 'positive freedom'). We call an ex-colony "free" when it rules itself, attaining formal independence from other powers. This idea of freedom as self-mastery can be extended to people. The core idea is that the free individual is sovereign over himself, slave to no other, nor to his inner passions.

There is something very right about this idea. Self-directed agency is an important part of freedom. It's no use removing constraints if there is no agent able to choose and engage in actions. If I merely do another's bidding, rather than deciding matters for myself, then I am a puppet, not a free man. It's also important to recognize the possibility of internal constraints to freedom, as I argued in the previous post. Mental illness or 'phobias', for example, can seriously constrain our freedom. It is a great insight of 'positive freedom' to recognize that our freedom may be constrained from within, as well as without.

We can take these new insights and integrate them into the argument of my previous post, to yield a unified concept of 'freedom' as a 3-place predicate which relates an agent (S), an overcome constraint (X), and a goal (Y). That is: S is free, from X, to Y. The apparently different concepts of freedom simply arise from focusing on different aspects of this more fundamental underlying concept. Autonomy focuses on the agent (S), whereas negative freedom highlights particular forms of constraint (X). I maintain that the enabled choice (Y) is most important, but the above formula shows how this ties in naturally with the other sub-concepts.

(As a brief aside, I note that 'positive liberty' is sometimes identified as 'freedom to' rather than 'autonomy'. Indeed, the two are often conflated, even by Berlin. But it should be clear that this is a mistake. As my previous post showed, 'freedom to' is intimately connected with 'freedom from', unless you artificially restrict what may count as a 'constraint'. Autonomy is less obviously connected to the other two, though I aim in this post to bring all three into unification. Anyway, discussion of 'positive liberty' has traditionally tended to focus on autonomy rather than 'freedom to', so I will continue to use it in the former sense only.)

So far so good. But the concept of positive liberty is often pushed further - quite inappropriately, in my opinion. Rather than considering the person as a whole, some theorists posit a "true self", or rational ego, imprisoned within our vulgar minds. This "true self" is then contrasted with the empirical (vulgar) self, so as to discredit the claims of the individual. Thus the worst of tyrannies may be excused in the name of "liberation", and any objections dismissed as "inauthentic".

But it should be noted that such abuses are not intrinsic to the concept itself. Even supposing that I can be mistaken about my real desires/interests, it does not follow that anyone else knows better. And even if they did, it does not follow that benevolent paternalism is somehow making me more free. Paternalism might make me more well off, but this is at the cost of autonomy! It is thus the very opposite of the positive liberty ideal.

Isaiah Berlin characterizes negative liberty as wanting to curb (external) authority, whereas positive liberty seeks to invest (internal) authority. There is something to this - especially when we add the parenthetical clarifications! Rousseau, perhaps the most notorious proponent of positive liberty, suggests that "the impulsion of mere appetite is slavery, and obedience to the law one prescribes to oneself is freedom." It thus appears that he sees freedom in terms of imposing constraints on ourselves, in stark opposition to my earlier analysis. However, I think this idea is instead best understood as freedom from internal constraints (weakness of the will, and so forth) to act as we choose. If I'm right about this, then the unified analysis can be plausibly maintained.

(By the way, I offer a sympathetic exploration of Rousseau's philosophy of freedom in this essay. I find Rousseau a fascinating thinker, though I would not want his ideas to actually be implemented!)

Monday, March 07, 2005

Freedom and Constraint

I'm a big fan of freedom. But I think the most common analyses of 'freedom' are rather poor, in that they point to something much less important than what I have in mind. The most common conception is probably 'negative freedom' (or "freedom from"), the laissez faire approach, which sees freedom simply as the absence of externally imposed constraints. On this view, you would be perfectly free if everyone else would just leave you alone.

This impoverished view fails to recognise that the whole point of being free from some constraint is to enable us to achieve some goal. What matters is that options be open to us; if removing constraints will enable more options, then we can indeed be made more free by their removal. But it is the 'enabling' that matters, not the 'removing'. If I am stuck in the desert with no water in sight, then I am not free to drink, even if no-one else is around to obstruct me. Natural conditions can obstruct the fulfillment of my desires just as badly as humanly-imposed constraints.

Joel Feinberg ('The Concept of Freedom', chp 1 of Social Philosophy) identifies two distinctions between types of constraints - positive vs. negative, and internal vs. external. Positive constraints are impositions of the usual sort. Negative constraints could be characterized as a lack of something needed. Our goals might be thwarted due to a lack of money, ability, or knowledge, for instance. The other dimension assesses the source of the constraint, as either within ourselves (e.g. mental illness) or out in the external world.

Combining these two dimensions, we find a total of four classes of constraint. The problem with the traditional conception of 'negative liberty' is that it concerns itself only with external positive constraints (such as being bound up in chains), ignoring the other three-quarters of the problem. What we need is a more expansive understanding of freedom, which recognizes all of these constraints. Only then can "freedom from" constraint guarantee us "freedom to" realize our goals. (As Feinberg notes, these are not two distinct concepts, but simply two elliptical forms of a single underlying concept: 'Freedom, from X, to do Y'.)

Next up: the problem with so-called 'positive liberty'.

Saturday, March 05, 2005

Blog Review: Melbourne Philosopher

Many readers will recognise Tennessee of MelbournePhilosopher as a regular commenter to this site.

Aside from his own blog (linked above), he also hosts a more general purpose philosophy website, Melbourne Philosophy, that includes a 'wiki' section to which visitors are invited to contribute, and a recently added Aggregated Philosophy News section, which offers links to the most recent posts from various philosophy blogs around the net. (As yet it's mostly just tracking the various group blogs, so it complements enwe's meta-blog quite nicely, without too much overlap.) If you want your blog to appear on MP's auto-updated list, just leave a comment here.

To quickly highlight some other cool posts from his blog: MP's Parable of the Mugs raises questions about collective responsibility - something which also came up in my recent Charity post, where I discussed Parfit's case of the "thousand torturers".

More recently, MP originated the top-5 philosophers meme I posted last Friday. I also enjoyed his post on motivation, where he interpreted my own view as being...
against the idea of all motivation essentially "trickling down" from any single motive force, Grand Narrative, or Categorical Imperative. I more or less buy into [RC's] ideas about human motivation being more of a web of motivations rather than like a tree.

I'm not sure whether I've ever described it quite like that before, but I do like the web metaphor, and think it captures my position quite nicely.

MP goes on to ask how a programmer should structure the motivations of an Artificial General Intelligence. In particular, should we allow feedback from sub-goals to modify the more fundamental 'super-goals', or should we prefer a more uni-directional (foundationalist) approach? I prefer the flexibility of the former approach, and that seems to be the one that applies to humans. Feedback from the pursuit of our minor interests can influence our broader goals too - and potentially cause a revolutionary upheaval and reconsideration of our fundamental values.
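The tree-vs-web contrast here can be sketched in miniature. This is my own hypothetical toy, not anything MP proposes: the class, weights, and update rule are all invented, just to show what 'feedback from sub-goals modifying super-goals' amounts to structurally.

```python
# Minimal sketch contrasting uni-directional (tree) and bidirectional
# (web) goal architectures. All names and numbers are illustrative.

class Goal:
    def __init__(self, name: str, weight: float):
        self.name = name
        self.weight = weight      # how much the agent cares about it
        self.subgoals = []

def pursue(super_goal: Goal, outcome_delta: float, feedback: bool = True):
    """Pursuing a sub-goal yields an outcome. In a web architecture
    (feedback=True) this revises the super-goal's weight; in a strict
    tree, super-goals are insulated from what happens below them."""
    if feedback:
        super_goal.weight += outcome_delta

career = Goal("career", weight=1.0)
painting = Goal("painting", weight=0.2)   # a minor sub-interest
career.subgoals.append(painting)

# The minor pursuit proves unexpectedly absorbing; in the web model,
# the fundamental goal's priority is revised downward as a result.
pursue(career, outcome_delta=-0.5)
print(career.weight)  # 0.5
```

In the tree (foundationalist) version, the same call with `feedback=False` would leave the super-goal untouched, whatever happened among the sub-goals.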

Evaluative Meaning

There seems to be a distinction between 'descriptive' and 'evaluative' meanings of sentences. Descriptive meanings are cognitive (have truth values) and express propositions. It might be suggested that evaluative meanings are non-cognitive, expressing attitudes instead. Some statements might be purely descriptive, such as "Pandas are animals." Other utterances seem purely evaluative, e.g. "Hooray for philosophy!" Consider the phrase: "These are good scissors," which most clearly expresses a positive evaluation of the scissors. The phrase also seems to involve a descriptive component, i.e. that the scissors are sharp and able to cut well, or something along those lines. ("Good" is rather generic - the descriptive meaning is more obvious in the case of more specific evaluative terms such as "intelligent", "courageous", etc.)

What is the relationship between descriptive and evaluative meaning? Non-cognitivists will want to say that the two are entirely distinct, mirroring the fact-value distinction. Moral realists will suggest that there is only one type of meaning (descriptive meaning), though of course language can be employed for a variety of purposes. As a reductionist, I will argue that evaluative meanings are contained within the broader category of descriptive meanings.

I think a desire-based analysis of value (where we take "good" to mean "is such as to fulfill the relevant desires") offers a neat explanation of the link between descriptive and evaluative meanings. "These scissors are good" (value) just means "these scissors will fulfill the relevant desires" (fact). To fill in the details: we all recognise that people usually want scissors for cutting stuff. We know that to best fulfill these desires, the scissors should be sharp. So, in the appropriate contexts, we can infer "these scissors are sharp" from "these scissors are good". The descriptive meaning follows from the evaluative meaning, for they are really just one and the same.

It's not clear to me that the non-cognitivist can so readily explain this connection. They hold facts and values to be entirely separate spheres, such that no descriptive facts can entail an evaluative conclusion. So you should, in principle, be able to couple a positive evaluation with any descriptive claim whatsoever. This disallows the sort of reasoning (from one 'type' of meaning to another) demonstrated in the previous paragraph. For example, the non-cognitivist might agree that the scissors are sharp and so would best fulfill our 'cutting' desires, yet disagree that the scissors are in any way "good". This strikes me as inconsistent.

Now, it's true that someone might admit the scissors to be good for our purposes, yet still hold a negative attitude towards them himself. This suggests to me that we need to distinguish between evaluations and attitudes. Evaluations are made objectively, relative to some set of standards. Attitudes are purely subjective - expressions of personal whim rather than real value. So my understanding of 'evaluative meaning' is actually quite different from that initially described in this post. Utterances like "philosophy rules!" express positive attitudes, but have no real evaluative meaning, since it isn't made clear just what the value of philosophy is. (Compare this to "philosophy develops the mind", which makes a descriptive claim about what good it is that philosophy does, and thus could be said to have genuine evaluative meaning.)

So, all meaning is descriptive meaning. Evaluative meaning is just a particular kind of descriptive meaning, namely, the kind concerned with facts about desire fulfillment (or whatever the natural 'reduction basis' for value turns out to be). Value claims are about desires, and thus are cognitive. Expressions of attitude are something else entirely, distinct from the value claims involved in typical evaluative terms like 'good', 'intelligent', etc. The non-cognitivist is mistaken to conflate these.

But we might now instead consider the "meaning/attitude distinction". This looks similar to how the non-cognitivist originally conceived the descriptive/evaluative distinction, except that value is now located on the 'meaning' side of the divide. We can safely agree with the non-cognitivist that this is a strict distinction, for it is also a fairly unimportant one. It no longer allows the non-cognitivist to insist that descriptive premises cannot yield evaluative conclusions, for instance. The analogous claim that meanings cannot yield attitudes seems far less threatening.

Friday, March 04, 2005

Constructing the Largest Prime

After poking some fun at a journalist who claimed the "largest prime number" had been discovered, Maverick Philosopher asks:
It occurred to me this morning that my criticism of the journalism may rest on a platonist assumption. What if the journalist were some sort of intuitionist (Brouwer, et al.) who held that the existence of a number is identifiable with its being constructed or generated?

That's an interesting question. I'm not completely sure, but I suspect even intuitionists would agree that there is no largest prime number. The result does not rely on proof by contradiction or other dubious platonic methods. We have a constructive algorithm that, given any prime number n, will generate a yet larger prime. Consider m := n! + 1. Either m is itself prime, or it can be factorised into prime factors, each larger than n. (No prime up to n can divide m, since each such prime divides n! and so leaves a remainder of 1.)

I assume that constructivists only require an algorithm we know to be computable in principle. (I hope the above is indeed computable in principle. Expanding a factorial surely is - that's just repeated multiplication. But how about checking the prime factors? I suppose it should be, as a computer could just do an exhaustive series of divisions by x [for all x: 1 < x ≤ n] to check that m/x never yields a whole number - confirming that every prime factor of m is larger than n.) The mere fact that we cannot yet actually compute it, e.g. due to hardware limitations, might not be a problem. Does anyone know for sure: do intuitionists (constructivists) require only that we demonstrate something to be constructable, or must it actually be constructed?
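The construction really is computable, and can be sketched in a few lines of Python (the function name is mine, just for illustration): given a prime n, the smallest factor of n! + 1 that is greater than 1 must itself be prime, and - since no prime up to n divides n! + 1 - it must be larger than n.

```python
import math

def next_larger_prime(n):
    """Given a prime n, constructively produce a prime larger than n.

    Euclid-style construction: m = n! + 1 has no prime factor <= n
    (any such prime divides n!, so dividing m leaves remainder 1).
    Hence the smallest divisor of m greater than 1 is a prime > n.
    """
    m = math.factorial(n) + 1
    # Trial division: the smallest d > 1 dividing m is necessarily prime.
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m has no divisor up to its square root, so m itself is prime

for p in (2, 3, 5, 7):
    print(p, "->", next_larger_prime(p))  # 2 -> 3, 3 -> 7, 5 -> 11, 7 -> 71
```

Note that the prime produced needn't be the very next prime after n (for n = 5 we get 11, skipping 7), but the construction only needs to yield *some* larger prime.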

Top 5 Philosophers

Melbourne Philosopher asks: "who are your all time, desert-island, top 5 philosophers?" I haven't read very widely yet, so my selection pool is rather limited, but here goes...

1. J. S. Mill
2. Dennett
3. Hume
4. Parfit
5. Douglas Adams [hey, if MP could choose Heinlein... ;)]

To develop the meme, I'll add my top 5 philosophers I feel guilty about not having read:

1. Kant
2. Wittgenstein
3. Aristotle
4. David Lewis
5. Sartre

What do your lists look like?

Tuesday, March 01, 2005

Contextual Impossibility

I want to reconsider something from my Adolf desire paradox:
Adolf desires that more desires be thwarted than fulfilled. [A 'desire that P' is 'fulfilled' if P is in fact true, 'thwarted' if P is false.] Further suppose that - apart from this desire of Adolf's - there are in total an equal number of fulfilled and thwarted desires in the world. It follows that Adolf's desire is fulfilled if and only if it is not fulfilled.

The problem with Adolf is that he was put in a context where his desire became self-referentially paradoxical. This situation could potentially arise whenever you have a desire that refers to 'all desires'. But it's logically impossible that a desire could have its own thwarting as a fulfillment condition - that's equivalent to a statement having its own falsity as its truth condition! So I think we must hold all such desires to be impossible. (It would seem odd to say they are possible some times but not others, depending on the external context. Why should the existence of an internal desire depend on the external world?)

Patrick suggested to me that the analogous liar sentence is not always impossible. Imagine a sheet of paper headed with the sentence: "Most statements on this page are false." Now suppose the rest of the page contains an equal number of true and false statements. The original statement would then be true iff it is false. In other words, the context converts it into the liar sentence. But contradictions are impossible; since no statement can truly assert a contradiction, we must conclude that the apparent statement is in fact meaningless (it expresses no proposition) in this context.

But we do not want to say the original sentence is always meaningless - that would seem clearly mistaken. For example, we can imagine the rest of the page is instead filled with false statements, in which case the original statement will be plainly true. It is not plausible to claim the statement is meaningless in such a harmless context. It is not really a liar sentence. It is merely a potential liar sentence. Whether the paradox is realised or not depends on the context.
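The context-dependence can be made vivid with a small sketch (in Python; the function name is my own): we check which truth-values the headline sentence can consistently take, given the truth-values of the other statements on the page. An empty result signals a paradoxical context; a non-empty one, a harmless context where the sentence has a stable truth-value.

```python
def consistent_values(rest):
    """Truth-values the headline sentence can consistently take.

    The headline is S: "Most statements on this page are false."
    `rest` lists the truth-values of the other statements on the page.
    A value v for S is consistent just in case, counting S itself,
    "most statements are false" comes out exactly v.
    """
    consistent = []
    for v in (True, False):
        page = [v] + rest          # the whole page, S included
        most_false = sum(not t for t in page) > len(page) / 2
        if most_false == v:
            consistent.append(v)
    return consistent

print(consistent_values([True, False]))   # balanced context: [] (paradox)
print(consistent_values([False, False]))  # all-false context: [True]
```

In the balanced context neither assignment is stable, mirroring the liar; surround the same sentence with falsehoods and it settles happily into truth. Adolf's desire has exactly this structure, with desire-fulfillment playing the role of truth.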

This suggests that Adolf's desire should similarly be possible in appropriate contexts. It is unproblematic to desire that more desires be thwarted than fulfilled, so long as the sum of other desires is not exactly balanced between the two.

As I noted before, it does seem odd to say a desire is possible in some contexts but not others. But given the quasi-indexical nature of this desire, its context-dependence is perhaps not so surprising after all. In a sense, the meaning of "most statements on this page are false" will vary depending on which page it refers to. Similarly, the content of a desire that "most desires be thwarted" will depend on the background set of all desires that it refers to. In most contexts, the desire will be unproblematic (contrary to my earlier suggestion). Only in a few specific contexts will a paradox arise. So whether such desires are possible will depend on their external context.

Hopefully that doesn't sound too ad hoc. I guess an alternative solution to the liar paradoxes would simply be to accept true contradictions and adopt a paraconsistent logic, but I'm a little reluctant to do that.