“The Human Prejudice,” by Bernard Williams (2006)

When I was still teaching, the last unit of my introductory-level “Ethics and Contemporary Issues” course was devoted to the question of moral concern for non-human animals. We would begin with excerpts from Peter Singer’s Practical Ethics, then move on to Cora Diamond’s “Eating Meat and Eating People,” and finish with Bernard Williams’ “The Human Prejudice.” It is a fitting way to end the course, for not only is Williams’ essay easily the most interesting and thought-provoking piece we read over the semester, but Williams himself was one of the most interesting and thought-provoking philosophers to have worked since the Second World War.

Peter Singer employs the word ‘speciesist’ in order to compare our tendency to give greater moral consideration to one another than to non-human animals with racism, sexism, and anti-Semitism. It is this comparison and the conclusion Singer draws from it that Williams rejects. But beyond merely defending our inclination to give greater consideration to human beings than to non-human animals, Williams maintains that what moral consideration we do extend to non-human animals is dependent on the very human prejudice that Singer and his sympathizers want to stamp out. Indeed, for Williams, moral consideration of any sort, directed towards anyone or anything, is only intelligible in light of our reserving our greatest moral consideration for human beings and human concerns.  

Early in the essay, Williams observes that historically – at least in the West – the human prejudice has been grounded in the idea, common to the Abrahamic religions, that we are made in the image of God and have been given dominion over nature, something that remains true whether one’s version of the story is optimistic, in the manner of Renaissance humanists like Pico, or pessimistic, in the manner of Lutheran Protestantism.

Whether the views were positive and celebratory, or more sceptical or pessimistic, there was one characteristic that almost all the views shared with one another. For a start, almost everyone believed that human beings were literally at the centre of the universe.  Besides that purely topographical belief, however, there was a more basic assumption, that in cosmic terms, human beings had a definite measure of importance. In most of these outlooks, the assumption was that the measure was high, that humans were particularly important in relation to the scheme of things. This is most obviously true of the more celebratory versions of humanism, according to which human beings are the most perfect beings in creation. But it is also present in outlooks that assign human beings a wretched and imperfect condition – Luther’s vision, for instance, in which man is hideously fallen and can do nothing about it simply by his own efforts. The assumption is still there – indeed, it is hardly an assumption, but a central belief in the structure – that that fact is of absolute importance… The human condition is a central concern to God, so central, in fact, that it led to the Incarnation, which in the Reformation context too plays its traditional role as signaling man’s special role in the scheme of things. 

What is left of the human prejudice, if we no longer accept the idea that it is grounded in God’s overriding concern with us and our affairs? Williams rightly observes that the notion that human beings just are more important than any other creature, as if this were a matter of objective fact, or that we are of the greatest significance “to the universe” or “mother nature” or some other such thing, is hardly credible, but he also maintains (again, rightly) that no such cosmic or objective importance is required. “We do not have to be saying anything of that sort at all,” Williams explains. “These actions and attitudes need express no more than the fact that human beings are more important to us, a fact which is hardly surprising.”

Of course this is precisely what Singer deplores and is why he associates the human prejudice with prejudice against black people, women, and Jews. And it is because he thinks that such prejudices have no epistemically respectable ground, resting on nothing but a naked, unreasoned preference for one’s own, that he believes them to be pernicious. 

Only, it isn’t true that sexism, and racism, and anti-Semitism are based in nothing more than a naked preference for white, male gentiles. If one asks sexists or racists or anti-Semites why they privilege white, male gentiles above others, they don’t simply say, “because they are white, male gentiles.” Rather they give reasons, like “Women are emotionally unstable” or “Jews are devious and clannish.” Of course, these reasons are terrible, because they are untrue, and it is precisely because they are grounded in the slander of their relevant targets that racism, sexism, and anti-Semitism are malign.

In the case of the human prejudice, however, substantive reasons typically are not offered when one is queried about it. If you ask someone why he so prioritizes a stranger’s life that he is about to run into a burning building to save him, it is very likely that the only answer you’ll get is that the stranger is a human being. “What, are you kidding?” he’ll say, “There’s someone in there!” And it’s worth noting that it makes no difference who the person is, whether a Nobel laureate, a modest laborer, or a child with Down’s syndrome. Indeed, there would be something quite weird, bordering on the grotesque, if one were to give reasons that went beyond the humanity of the individual trapped inside. Imagine if the person said in response to your query, “Well, he is a Nobel laureate after all.” You would wonder about him and with good reason.

The question returns, then, to whether the human prejudice is bad and why. For Singer, it would appear that a naked preference for one’s own is as bad as one based in a slanderous view of the other. Hence his view that we should turn to moral philosophy – specifically, to Utilitarianism – to determine to whom (or to what) we should extend moral concern. Such a vantage point is supposed to give us an impartial, rational, objective account of moral obligation free from pernicious prejudices.  

Williams thinks this is a delusion on Singer’s part, one that is shared by many – probably most – moral philosophers. Consider the quality that the Utilitarian selects as the ground upon which the extension of moral concern should be based: the capacity for suffering. Why should this be the relevant quality? Consider that in various parts of the tree of life, we find creatures that have no capacity for suffering – microbes; insects; worms; arachnids; etc. For the Utilitarian such beings are not deserving of our moral concern. But why should this be? What is so special about the capacity for suffering? Why shouldn’t the possession of a certain kind of carapace, or of a certain kind of eye, or whatever, be what determines moral concern? The moral philosopher will, of course, answer that such qualities are morally irrelevant. But why? Is what’s morally relevant something that can be determined objectively; neutrally; logically?

The answer, lurking just beneath the surface, is that the Utilitarian’s chosen quality, suffering, is a major human concern. As are autonomy and dignity, which, of course, are at the heart of Kantian moral philosophy. And so, the grounds on which moral philosophers instruct us to give moral consideration to non-human animals turn out to be an expression of the human prejudice, not some neutral, objective standard that hovers somewhere above it, in logical space.  (It is worth noting, here, as Williams does, that the word we use to describe the kind treatment of animals is ‘humane’.) To deny this is simply to return to the archaic and unsustainable notion addressed (and dismissed) earlier – that suffering or dignity or whatever just matter objectively, from no perspective. 

Beyond moral concern for animals, then, it is Williams’ view that ethical concern of every type and variety can only be understood in light of the human prejudice and ultimately, as an expression of it. It is people who have ethical concerns, and we have them as a result of the things that we care about.

We can act intelligibly from these concerns only if we see them as aspects of human life.  It is not an accident or a limitation or a prejudice that we cannot care equally about all the suffering in the world: it is a condition of our existence and our sanity. Equally, it is not that the demands of the moral consciousness require us to leave human life altogether and then come back to regulate the distribution of concerns, including our own, by criteria derived from nowhere. We are surrounded by a world which we can regard with a very large range of reactions: wonder, joy, sympathy, disgust, horror. We can, being as we are, reflect on these reactions and modify them to some extent… But it is a total illusion to think that this enterprise can be licensed in some respects and condemned in others by credentials that come from another source, a source that is not already involved in the peculiarities of the human enterprise.

Liberalism and Kitsch

On a number of occasions, I have defended what I’ve taken to calling “procedural liberalism” on the grounds that in large pluralistic societies (a) one cannot expect one’s fellow citizens to share a common, substantive conception of the good, and (b) one cannot expect that one’s “community,” in the sense of the word that implies a shared set of values, will always maintain a hold on the levers of state power. It is in everyone’s interest, then, to uphold a “procedural” liberalism, according to which (c) we allow one another significant latitude in the pursuit of our private lives, constrained only by the harm principle, and (d) we rigorously maintain state neutrality with regard to such pursuits. Such an arrangement permits people to engage with what they find significant and meaningful in life, among their family and friends, and in the broader civil society. It also makes it possible for them to trust that they will be treated fairly within “political society,” by which I mean those sectors of society that are governed by the formal institutions and powers of the state, such as the courts, the federal, state, and local bureaucracies, and the like. 

If we assume (as I think we should) that our ability to engage with what is significant and meaningful to us is a precondition for a satisfying life, then this procedural liberalism presupposes that a person has access to family, friends, and to an open and free civil society, meaning one in which one’s capacity to associate with people of one’s choosing is largely unrestricted. The diminishment of any significant number of these fosters feelings of emptiness and futility, except in those rare souls whose capacity to find satisfaction in life is not undermined by solitude.

It is a common refrain in the developed world today that liberalism is either in trouble or already in the process of dying, and while the reasons commonly given vary widely in terms of their plausibility, the claim – or as in my case, the worry – is on the mark. Not because the arguments for liberalism are any weaker today than they were yesterday (if anything, they are stronger) and not because anyone has thought up a better arrangement (they haven’t), but because of certain developments distinctive of modern industrial and post-industrial societies and especially, Western ones.

For one thing, the presupposition I just discussed – that in our search for meaning, we can count on there being and our having access to a network of family, friends, and acquaintances, with whom we can freely engage within the largely unconstrained space of civil society – can no longer be assumed. Indeed, the evidence, be it anecdotal or social-scientific, suggests that these crucial personal and civil associations are diminished and diminishing. This source of liberalism’s troubles has been much remarked upon and is relatively well understood (which is not the same as having a clue how to reverse or otherwise address it), and it is at the heart of much of the last century’s discussion of the crisis of the modern individual, who, with the great urban migrations effected by the Industrial Revolution, lost the psychic moorings that previously had been provided by extended family-networks, a shared culture, and near-ubiquitous religiosity. As Carl Jung put it in Modern Man in Search of a Soul (1933):

The modern man has lost all the metaphysical certainties of his medieval brother, and set up in their place the ideals of material security, general welfare, and humaneness.  But it takes more than an ordinary dose of optimism to make it appear as if those ideals are unshaken. [F]or the modern man sees that every step in material progress adds just so much force to the threat of a more stupendous catastrophe.

Jung’s reference to the modern ideals of “material security, general welfare, and humaneness” suggests a second reason for liberalism’s plight, one that is less frequently remarked upon but equally significant. That is the overwhelming tendency of liberal societies in the more advanced stages of capitalism to commodify our relationships and pursuits, our identities, and even happiness itself. The result is that they have become “kitsch” and we have become consumers of kitsch, which means that they no longer have the power to satisfy us, and we no longer have the capacity to be satisfied.

Kitsch is that mimic of things of depth and substance that is commercially produced so as to allow people, whether out of indolence or incapacity, to purchase spiritual depth without the need for substantial investment, struggle, or sacrifice; a cheap simulacrum futilely engaged so as to sate a jaded sensibility and a shallow character. Clement Greenberg, in his landmark essay, “Avant-Garde and Kitsch” (1939), restricted his analysis of this socio-cultural development to art and the art-buying public, but as Roger Scruton pointed out in “Kitsch and the Modern Predicament” (1999), under late capitalism virtually every dimension of life can be – and is being – kitschified, for kitsch indicates a “spiritual,” rather than an aesthetic, deficiency:

Kitsch reflects our failure not merely to value the human spirit but to perform those sacrificial acts that create it. It is a vivid reminder that the human spirit cannot be taken for granted, that it does not exist in all social conditions, but is an achievement that must be constantly renewed through the demands that we make on others and on ourselves.

One example of this “kitschification” and its effects, not just on art but on whole forms of life, is contemporary mainline religion, where the rigorous, highly particular demands once imposed on lifestyle and belief have been abandoned, so that the religious and spiritual life might become easier and more congruent with popular mores and tastes. As a result, mainline religion has become generic to the point that one church is largely indistinguishable from the next. (I used to serve on my synagogue’s Beit Din (Jewish court) and would ask prospective converts why they wanted to be Jewish. The answers I got inevitably involved a benign mishmash of progressive platitudes, so I would always offer the same follow-up: “That’s a great reason to become an Episcopalian. What I asked was why you want to be Jewish,” to which I never received anything better than a baffled look.) The predictable result has been the collapse of the mainline churches and an upsurge in fundamentalist religion, the crude and atavistic harshness of which at least makes it possible for people to feel something in the conduct of religious life.

Another example of kitsch that lies well beyond the artworld is our society’s treatment of old age and retirement – one’s “golden years” in kitsch-speak – the commodification of which was described by Nathanael West in The Day of the Locust (1939), as a prelude to what remains one of the most terrifying depictions of mob violence in American literature:

All their lives they had slaved at some kind of dull, heavy labor, behind desks and counters, in the fields and at tedious machines of all sorts, saving their pennies and dreaming of the leisure that would be theirs when they had enough. Finally that day came… Where else should they go but California, the land of sunshine and oranges?

Once there they discover that sunshine isn’t enough. They get tired of oranges, even of avocado pears and passion fruit… They don’t know what to do with their time. They haven’t the mental equipment for leisure… They watch the waves come in at Venice, [but] after you’ve seen one wave, you’ve seen them all.

Their boredom becomes more and more terrible. They realize that they’ve been tricked and burn with resentment. Every day of their lives they read the newspapers and went to the movies. Both fed them on lynchings, murder, sex crimes, explosions, wrecks, love nests, fires, miracles, revolutions, wars… Oranges can’t titillate their jaded palates. Nothing can ever be violent enough to make taut their slack minds and bodies. They have been cheated and betrayed. They have slaved and saved for nothing.

Today, in the age of social media and advanced communications, it is our relationships and identities that have been most intensely commodified and which are being sold to us by Facebook, Twitter, Instagram, and the like as simulacra of what they once were: “friends” for the friendless; “followers” for those with no real influence; and “likes” for those whose statements fail to carry any genuine weight or whose posted images are bereft of any actual interest or appeal. This virtual world of ersatz interactions and relationships is inhabited by equally unreal people, encouraging everyone, as it does, to misrepresent themselves, so as always to appear in the most positive and interesting light. It is small wonder then that those who are most dependent upon these cyberspaces in their pursuit of meaningful lives – those for whom social media has essentially replaced civil society – are also the most obsessed with their identities and with the validation of those identities by others, thereby demonstrating a devastating level of personal insecurity. 

It is this combination of social atomization and the kitschification of every aspect of life that I am suggesting poses the greatest threat to the liberal consensus, for they undercut the capacity to pursue a meaningful life in the private and civil spheres, which is a fundamental precondition for liberal society. With that precondition no longer met, our need to feel that our lives are significant remains unsatisfied, so we seek fulfillment publicly, politically and by way of the law. The person who has no real friends enlists the powers of the state to compel others to act as if they were his friends. The person who finds himself unfulfilled by the identities he has taken on appeals to the law to force everyone to genuflect before them. The person who is frustrated by the impotency and ineffectualness that follows from a lack of any investment in real people or causes will bolster himself by joining in the professional ruination, public ostracizing, and other mobbish behavior that currently falls under the banner of “canceling.”

A successful liberal society consists of people whose lives and relationships and pursuits are substantial and for the most part satisfying, for this is what sustains the live-and-let-live ethos on which liberalism is predicated and which ultimately protects us all. But in a society of shallow, anxious, disconnected, inchoately yearning recluses, the generosity of spirit necessary to sustain the liberal consensus is absent, and the rational self-interest presupposed by liberal political philosophy can no longer be credibly ascribed.

Hedonism

[Modern] Hedonism is the view that pleasure is the sole intrinsic good and that all other goods are either constitutive of pleasure or servants to it. It should not be underestimated. For one thing, Hedonism has ancient roots, going as far back as the Cyrenaic and Epicurean schools of ancient Greece. For another, it is highly intuitive and benefits from the fact that it puts on a pedestal something that every normal person likes.

One cannot discuss Hedonism in modern ethics without mentioning modern biology, according to which the pursuit of pleasure and the avoidance of pain are among the most fundamental human imperatives, alongside survival and reproduction. As Jeremy Bentham described it in the opening sentences of his 1789 book, An Introduction to the Principles of Morals and Legislation: “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do.”

When looked at through the lens of political thought, hedonism represents a kind of realism with respect to human affairs, in contrast with the idealism implicit in pre-modern notions of human semi-divinity of the sort propagated by the Abrahamic religions and classical Greek philosophy and the caste-based political systems that followed from it. What the best government looks like is very different when one is talking about sophisticated, hedonically motivated mammals, as opposed to embodied semi-gods.

It is worth noting that Machiavelli and Hobbes (and to a lesser extent, John Locke) drew egoistic conclusions from the hedonic premise: they reckoned that a creature driven by the desire for pleasure will also be selfish and cannot be trusted to respect the desires of others, in the absence of external controls. This inference from Hedonism to Egoism is not one that following generations of hedonists – and especially the Utilitarians – would make. On their view, a person has a duty to promote his neighbor’s happiness as well as his own, and John Stuart Mill, the greatest of the Utilitarian philosophers, believed that this duty has an “inner sanction”; that we are motivated to maximize everyone’s happiness and not just our own, because we feel pleasure when we do so and pain when we do not, an emotional arrangement that Mill calls “the essence of conscience.”

___

The 17th-century revolution in physics was followed by generations of thinkers, intent upon accounting for human nature and behavior in mechanistic terms, something that would only accelerate with the subsequent revolution in biology of the 19th century. These developments were revolutionary, because they overturned the Aristotelian framework of essences and purposes within which all explanation – scientific or otherwise – had proceeded, from late antiquity through the Middle Ages. They also established a dichotomy that would plague philosophy thereafter: between “naturalists,” who found themselves hard-pressed to retain robust, longstanding conceptions of human volition and responsibility, now that human beings had been brought into the “scientific image,” and dualists of one kind or another, who were unable to make sense of the relationship between human thought and will and our embodiment. It wouldn’t be until Wilfrid Sellars’ exploration of the relationship between the scientific and manifest images (in his landmark 1962 paper, “Philosophy and the Scientific Image of Man”) that a potential way out appeared – one that I have explored at length in my Prolegomena for a Pluralist Metaphysics – but the subtlety of Sellars’ analysis, combined with prevailing and profound disagreement among philosophers as to how it should be interpreted, has prevented it from being the widely accepted solution it might have been.

Though hedonism may have satisfied the modern desire for realism, scientific credibility, and progress, it also has been widely perceived as robbing human beings of their dignity. In its claim that human and animal behaviors have essentially the same etiology, hedonism has invited criticism that it provides an unflattering picture of humanity. Of course, this tension was inherent in the very aims articulated by the doctrine’s promoters: the desire for greater realism with respect to human nature and behavior suggests that one wants to deflate what one thinks is a fictional and inflated sense of human dignity.

Now, I can imagine any number of contemporary philosophers dismissing this concern as irrelevant. “Truth and falsity are what count,” I can hear them say, “and the attractiveness or unattractiveness of a theory tells us nothing about whether it is true or false.” But this simplistic separation of the true and the false, the attractive and the unattractive is hard to swallow when extended to a moral philosophy, the sole value of which lies in its adoption and whose guiding impulse is aspirational.

Mill was a practical man and was very concerned that his theory seemed to – as he put it in Utilitarianism – “excite in many minds…inveterate dislike” and to lead people to conclude that Utilitarianism is “a doctrine worthy only of swine.” His response depended upon a clever mining of the ancient Epicurean tradition. Mill acknowledged that both human beings and animals are ultimately motivated by the desire for pleasure, but there are enormous differences in the types of pleasure that they seek. Indeed, there are varieties of pleasure that are sought by human beings exclusively, and these are of a higher order than the pleasures that are the product of “mere sensation” and which constitute the whole of the happiness of animals. I am speaking, here, of the pleasures associated with the intellect and with intellectual activity, as well as those belonging to what I will call the “higher sentiments,” those states of mind that are born of the collaboration of the affective sensibility and the contemplative mind. “[T]here is no Epicurean theory of life which does not assign to the pleasures of the intellect, of the feelings and imagination, and of the moral sentiments a much higher value as pleasures than to those of mere sensation,” Mill wrote. “Human beings have faculties more elevated than the animal appetites and, when once made conscious of them, do not regard anything as happiness which does not include their gratification.” Animals lust, but human beings love; animals satiate their hunger and thirst, but human beings enjoy the delights of gastronomy; animals roll in the grass and mud, but human beings read poetry and listen to violin concertos; and so on and so forth.

So much for realism, you might think. After all, to say that human beings will choose the higher pleasures over the lower ones, “once made conscious of them,” doesn’t really ring true, especially when one reflects upon one’s daily intercourse with the common mass of humanity or on contemporary popular culture, entertainment, and food. But Mill has a response to this: Every human being has the capacity for enjoying the fruits of intellection and the higher sentiments, but like anything that requires cultivation it can be neglected or worse, actively undermined by a debased and debasing culture. “Capacity for the nobler feelings is in most natures a very tender plant, easily killed,” Mill observed, “not only by hostile influences, but by mere want of sustenance; and in the majority of young persons it speedily dies away if the occupations to which their position in life has devoted them, and the society into which it has thrown them, are not favorable to keeping that higher capacity in exercise.”

So, while Mill admitted that many if not most people do not prefer higher pleasures over lower ones, he thought that they would, were it not for certain obstacles to the cultivation and pursuit of nobler feelings and good taste. As with all counterfactuals, this is a guess and one that we might legitimately be quite skeptical about. Nonetheless, Mill’s account provides an important reminder that the quality of our cultural diet is as grave a matter as that of the food we eat, the water we drink, and the air we breathe. 

___

Critics are correct in thinking that one of Hedonism’s main faults lies in the diminished human being that follows from it, but they are wrong as to the nature of that diminishment. Hedonism does not diminish us because it promotes pleasure or even because it elevates it to the status of an intrinsic good. G.K. Chesterton wrote of the materialist account of nature that “if the cosmos of the materialist is the real cosmos, it is not much of a cosmos. The thing has shrunk,” and I would argue that if the hedonist’s human being is the real, complete human being, then it isn’t much of one. When we embrace hedonism, it is humanity rather than the cosmos that shrinks. More precisely, humanity recedes from the world and into the recesses of the individual mind. Hedonism does not diminish us by reducing the nobility of our pursuits, but by transforming their objects: from achievements in the world to the mere experience of such achievements, whatever that experience’s source.

Modern hedonism is only secondarily about pleasure. At its core, it is an experientialist philosophy, by which I mean that it treats the value of a thing or activity as lying solely in the experience that is engendered by it. What is valuable about playing tennis is not the playing, but the experience one has in doing so. What is valuable about charitable activity is not the activity, but the experience that one has in engaging in it and that others have in being the object of it. What is valuable about rising to the top of one’s profession is not that one has done so, but the experience that having done so effects in oneself and in others. It so happens that the hedonist believes that it is pleasurable experience that makes these things valuable, but this characteristic of the doctrine is conceptually separable from the experientialism itself.  

As Robert Nozick observed in his “experience machine” thought-experiment (in his 1974 book, Anarchy, State, and Utopia), such a view renders it impossible to explain why actually doing a thing is preferable to simulating doing it. If simulating playing tennis, in a Star Trek style holodeck, gives rise to as pleasurable an experience as really playing tennis, why think the latter is a more valuable activity than the former?  

Imagine a machine that could give you any experience… When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things, how they feel “from the inside.” You can program your experiences for tomorrow, or this week, or this year, or even for the rest of your life. If your imagination is impoverished, you can use the library of suggestions extracted from biographies and enhanced by novelists and psychologists. You can live your fondest dreams “from the inside.”

The question of whether to plug in to this experience machine is a question of value… The question is not whether plugging in is preferable to extremely dire alternatives — lives of torture, for instance — but whether plugging in would constitute the very best life, or tie for being best, because all that matters about a life is how it feels from the inside.

I think that most of us think that it matters that we actually do things, not just have the experience of doing them, pleasurable as the experience might be. We care about whether we actually are good tennis players, charitable givers, and successful professionals, and that’s because we think that flourishing, in the eudaimonic sense, is valuable. But, to flourish is actually to succeed in one’s endeavors, not just to feel that one has done so, and in the experience machine one can only feel that one has succeeded, as one is not actually engaged in the activity in question.

The point is not that hedonism renders flourishing impossible, but that it cannot explain why it matters. To embrace Hedonism is to adopt an attitude that is ambivalent about real activity and real success and which, when held consistently and taken to its logical conclusion, pines for the day in which we can replace more and more of our real lives with simulated ones, where pleasurable experience can be better ensured. It is a view that takes the endpoint of all activity as lying in our own heads. If philosophers in the past were unconcerned about this consequence of the theory, it could only have been because they couldn’t have imagined that one eventually would be able to live entirely in one’s own head, by way of virtual and holographic technologies.

Lives and Principles

This essay comes in the wake of an exchange I had some time ago on social media. (Well, perhaps not really an exchange, which suggests people actually talking to one another, something that never happened in this case.) Nicholas Christakis, of Yale-Halloween-Costume-Student-Meltdown fame, posted the following:

The older I get, the more I appreciate the importance of having principles and of living a life of integrity and honor. I recognize this quality in my friends, too, and it’s admirable. People who lack these qualities are sad, and likely are surrounded by others without scruples.

I thought it interesting for its somewhat unusual combination of naivety, ordinariness, and contempt. (Indeed, if not for the contempt, I likely would have ignored it.) It’s the sort of thing that people say all the time, yet it is entirely wrong. It is the kind of thing people express when they think they are channeling their virtue but which, in fact, reflects a lack of understanding of it. And it is the sort of thing one says after being thrust into the public eye, having survived some well-publicized ordeal, only to discover that people suddenly expect you to have some special moral and political insight as a result. The same thing happened to Bret Weinstein, an unassuming biology professor, who was made famous after having been the target of a rabble of students who staged a bizarre coup at Evergreen State College several years ago. Weinstein now fancies himself a political Wise Man and is responsible for “Unity 2020,” which he thought would have an effect on American politics. (It did not.)

I tried to engage with Christakis and wrote this as a first move:

I appreciate this sentiment, but I don’t share it. What my aging has impressed upon me is the emptiness of principles and abstractions; the self-deception involved in thinking one is living by them; and the messy, muddy, ultimately prudential character of most of what we do.

I then received the following from Christakis: “I understand what you say. But I find it too cynical to accept. And I see many people with integrity, who I’d just as soon resemble.” And though I replied several times more, in an effort to further explain my position, I received no additional response from him. 

Christakis notwithstanding, the idea that a good person and a good life (the “life of integrity and honor”) are characterized by living and acting according to “principles” is quite common and worth commenting on. While Christakis never explains what these principles are – whether by characterization or enumeration – it seems safe to infer that he means general, abstract moral principles of the sort that one finds in religion and in philosophy: “Do no harm”; “Be honest in what you say and do”; “Respect the dignity of others”; that sort of thing.

The trouble is that it is difficult to identify any such principles that are actually true, without slipping into tautology. If by ‘honesty’ you mean ‘always rightfully telling the truth’, then, of course, “It is right to rightfully tell the truth” is true, but only trivially and therefore uselessly. What good is it to know it, after all, if one doesn’t know what rightfully telling the truth consists of? If by ‘honesty’ you mean ‘always telling the truth’, then “It is always right to tell the truth,” though substantive and potentially useful, is obviously false, as one easily can imagine any number of scenarios in which not only would it be wrong to tell the truth, but it would be obligatory not to. And if by ‘honesty’ you mean “telling the truth some of the time,” then it’s unclear whether one is acting on principle at all, as you’ll need to know which times, and that is going to depend.

Aristotle explained in the Nicomachean Ethics why we shouldn’t put much stock in principles. For him, virtue is associated with moderateness of temperament and action, so as far as principles go, we may be able to say truthfully that one should never act excessively or deficiently, but rather to the right extent or degree. Yet, what counts as any of these can only be determined by a judgment that is entirely dependent on the circumstances, as the very same action may represent excess on one occasion, deficiency on another, and “the right amount” on yet a third. This is why virtue requires practical wisdom, which would not be the case if the good life could be achieved through fidelity to a set of principles that one simply memorized and followed.

To suggest, then, as Christakis does, that virtues like “honesty” or “integrity” are the result of following principles depends on a misunderstanding. Principles can never tell us how we should act on any particular occasion, because it will depend on the situation, the actors, and any number of other variables. Virtue is not “top down,” but rather, “bottom up.” It involves not fidelity to principles, but a well-developed sensitivity to what circumstances require of us, which can only be born of substantial experience, perceptiveness, and sound judgment.

___

What struck me the most upon reading Christakis’s remarks was that they are supposed to reflect his thinking as he has gotten older. Well into middle age myself, I have been noticing more and more the impact of aging on my own thinking, and it is very much the opposite of what Christakis describes. If I were to try to summarize the essential elements of this thinking, they would include:

—The realization that few if any of my aspirational narratives are going to play out as I had hoped.

—An understanding that one can do all the right things (and follow all the “best principles”) and things can go horribly wrong, nonetheless.

—The related understanding that on many occasions, all of the possible choices will be bad.

—The acknowledgment that neither I nor most anybody else is nearly as good (or as bad) as we would like to think.

—The recognition that in light of all this (and more), happiness and fulfillment are best sought in the ephemeral joys that arise in the course of an ordinary day in the ordinary life of an ordinary person, rather than in the fulfillment of grand plans regarding the world or – and this is important, in light of what we are talking about – oneself.

These realizations have been slow and hard in coming and are, to a great degree, unwelcome, as is the aging from which they follow and which involves, prominently: the decline and eventual death of one’s parents; the departure of one’s grown children from the home, as they embark upon their adult lives; one’s own physical and mental deterioration and that of one’s spouse; and the accumulation of a lifetime’s worth of compromises, disappointments, betrayals, failures and losses.

It is only in one’s later years, when one’s life is sufficiently rich in these sorts of experiences, that the full complexity and radical contingency of things comes into clear focus and can serve as a foundation for wisdom. That wisdom is a matter of having outgrown black-and-white thinking; idolization and the heroification of people; breezy, blanket condemnations of those whom one does not know and of whose lives one is ignorant; and simplistic and self-important proclamations regarding one’s virtue, whether current or as part of some “life-plan.” 

Let me say something about such “life-plans,” in light of Christakis’s stated desire to “live a life of integrity and honor.” I find it a strange ambition. If someone asked me what I would like for my life, my answer would include: satisfying relationships, successful endeavors, and memorable experiences. If you’d asked me when I was young how I thought I would do, my answer would have been that I would succeed across the board and superlatively so. If you ask me now how I’ve done, my answer is “so, so.” And if you ask me what I think is the best anyone can do, my answer is also “so, so.”

Perhaps we should take these kinds of appeals to a “life of honor and integrity” as part of an effort to console oneself in the face of disappointment, failure, and sadness. After all, “But, I was a good person” and “At least I stuck to my principles” are the sorts of things one says to oneself in the wake of a broken marriage, an unsuccessful business venture, or an unsatisfying experience. Alas, in my own life, I have found little consolation along these lines. For one thing, as mentioned earlier, neither I nor anyone else is nearly as good as we’d like to think, nor will we ever be, and for another, having “done the right thing” provides no solace for any significant failure or loss, unless one is deceiving oneself. During my late father’s extensive and terrifying decline, I made all the right decisions and followed all the “best principles,” and what wound up happening was horrific, not in spite of my choices but because of them. The thought that I had acted throughout with “integrity and honor” or that “I am a good person” provided no comfort whatsoever, as I watched my father rant and rave and thrash and struggle week after interminable week, nor do I think it should have. The consolatory conception of the life of virtue seems to involve what I would argue is a refusal not just to acknowledge, but to digest and finally, accept the tragic dimension of life. Existential wisdom lies not in feeling good in the face of disappointment, failure, and loss, because their significance is trumped by one’s own virtue, but in the ability to think well of one’s life and feel good for all the little successes and joys one manages to accomplish in between.

What is striking about the view articulated by Christakis isn’t that someone thinks that way, but rather that an older and supposedly wiser person does. We should become less certain as we age, not more; disinclined to criticize and condemn, rather than inclined (I’m still working on that one); realistic and practical, rather than utopian; more hesitant, rather than less; and retiring, rather than brazen. Why? Because if we don’t, it means we haven’t learned a damned thing.

Self-Sufficiency and Human Flourishing

It is no secret that Massimo Pigliucci is a practicing modern Stoic. Indeed, working within and promoting the modern Stoic form of life has become his chief project, including a very well-received and popular book, How to Be a Stoic, published by Basic Books. Massimo and I have done two video dialogues on the subject.

It is also no secret that I have a number of problems with Stoicism as a philosophy of life, many of which I have raised with Massimo. But there is one in particular that stands out for me and that is whether human flourishing is self-sufficient. Massimo thinks it is. I don’t. But this is not just a disagreement between the two of us: it represents a fundamental difference between the Stoic understanding of flourishing and the Aristotelian one.

First, let’s be clear on what is meant by ‘human flourishing’. The Greek term, for which this is our best translation, is ‘Eudaimonia’. It is often translated as ‘happiness’, an unfortunate concession to readability that may have permanently compromised our understanding of the concept, even among those who are perfectly aware of the more literal translation, as Massimo certainly is. 

‘Happiness’ in modern English indicates pleasure or good feeling and is subjective: what makes one person happy may not do so for another. Happiness in this sense is self-sufficient, insofar as I can always find the sunny side of something and thereby make myself feel good about it and about myself more generally. If I have a mediocre tennis career or fail to succeed as a scholar, I may nonetheless be happy, if I am able to convince myself that “it’s the effort that counts, and I did my best” or “everything happens for a good reason” or some other such thing. 

But Eudaimonia is not like this. To have lived a eudaimonic life is to have actually flourished; to have lived a life that is rightly characterized as excellent; as having been worth living, as Socrates would put it. Eudaimonia is a normative concept, which means that unlike the modern concept of happiness, it is not supposed to be subjective. And given that the success and failure it imagines occur in the world and among other people and thus, depend on things other than oneself and one’s efforts, the eudaimonic life cannot be self-sufficient.  

Being an excellent tennis player depends in part on the quality of my opponents and making it in my career depended on the judgments of journal editorial boards concerning my work, the financial health of my home institution, the interests and desires of the students who pursue degrees at Missouri State (my former employer), and more. To have flourished in one’s life would seem, at a minimum, to involve success in these sorts of activities and relationships – it is the flourishing of a human life, after all – and clearly, such success depends on variables beyond one’s own efforts, which include a certain level of material well-being, positive native endowments – whether in physical appearance, intelligence, or the like – and more generally, good luck. 

Stoics reject this. In particular, they reject the idea that success in the actual world, among actual people, is necessary for human flourishing. Instead, they define ‘Eudaimonia’ so as only to indicate virtue, which is characterized in terms of wisdom, courage, justice, and temperance. The question of whether one has flourished, then, is entirely a matter of whether one has been wise, courageous, temperate, and just in the conduct of one’s life, and not whether in doing so, one has actually accomplished the things one set out to do. For the Stoic, all those actual accomplishments, and the external variables upon which they depend, are deemed “preferred indifferents,” meaning that while one might legitimately hope for them, they are irrelevant to one’s flourishing, and failing to have or accomplish them should be treated with “equanimity.” The virtues themselves, which non-Stoics would say are valuable in part precisely because they make success in one’s endeavors and relationships more likely, thereby become fetishized, in that they are torn from the context of their employment and treated as ends in themselves.

This is why Massimo can cite approvingly Cicero’s claim that for an archer, shooting at a target, “the actual hitting of the mark [is] to be chosen but not to be desired,” or suggest that in dieting, I should be focused not on whether I’ve actually lost a certain amount of weight, which obviously depends on various factors that I do not control, but on whether I’ve done my best, which does not.

It is worth noting that in ordinary discourse, such statements are commonly offered as consolation in the face of failure. “You did your best” is what one says to a kid when his Little League team loses a game, or to a friend who wasn’t chosen for a role in a play for which she auditioned. The aim of such talk is to help the other person feel better after having failed. It is not to suggest that they’ve actually won, when they’ve lost, or that succeeding just is trying your best. Such claims would make no sense, given the nature of the endeavors involved: one engages in archery to hit targets, not to try to hit them; one diets in order to lose weight, not to try to lose it; one plays baseball games in order to win them, not to try to win them; and one auditions for roles in plays to get those roles, not to try to get them. Consequently, success in these endeavors cannot consist of trying to accomplish them, but only in the actual accomplishment of them.

And yet, this is precisely what the Stoic is telling us that a flourishing life is like: one in which a person has tried to do various things, not in which he has actually accomplished the things he has tried to do. It is a very odd conception of excellence, and in truth, I don’t think it really is one. Rather, what Stoicism describes is an effective way to be happy with one’s life (in the modern sense of the term), regardless of whether it is an excellent one or not, and in that sense it is a useful discipline that no one should dismiss. But it is not an account of flourishing in any meaningful sense, for flourishing, in both its technical and ordinary senses, clearly indicates actual success in some endeavor, not merely the earnest and diligent pursuit of it.  

It is telling that Massimo’s main objection to Aristotelianism – according to which flourishing means that one actually has succeeded, as opposed to merely trying to – is that it is elitist and unfair, for it confirms my suspicion that rather than an account of Eudaimonia, Stoicism really offers consolation in the face of the vicissitudes of fortune. After all, plenty of things that are elitist and unfair are real and true, which means that the point of saddling Aristotle with such labels isn’t to indicate that he is wrong or that the Stoics, whose philosophy is neither elitist nor unfair, are right. Rather, it is to make the point that the Stoic philosophy is more agreeable, in that embracing it will make one feel better about oneself than if one follows the path set by Aristotle. But though this may very well be true, it has nothing whatsoever to do with whether or not one has, in fact, flourished in the living of one’s life.

Those who study ancient civilization distinguish the Classical period of ancient Greece from the Hellenistic, the latter of which, both in its arts and its philosophy, reflects a society in turmoil and decline. The Hellenistic philosophies are “philosophies-under-siege,” and I would argue that the chief indication of this is precisely their retreat into the self, of which the doctrine of “indifferents” is a very clear expression. The idea that flourishing is entirely self-sufficient reflects a siege mentality, broadly construed, its purest expression, of course, being the monastic, cloistered life that emerged in the early Christian Middle Ages.

Such an outlook makes sense in times of social disintegration, in which it functions as a kind of existential self-defense program. There was an appropriateness, then, to the development and adoption of these philosophies in the Hellenistic period or in the later days of Rome or in the Dark Ages. In the contemporary world, such an outlook would make sense if one were living in Rwanda or Sierra Leone or Syria. But it seems inappropriate – indeed, it seems rather weird – to adopt such a view of one’s life in a time and place where there is unprecedented material prosperity, longevity, and overwhelming safety, as there is in the modern, industrialized world. One can see the reason for refusing to invest oneself and one’s emotions too much in “externals” in a world in which a person is stalked by tragedy and has every reason to doubt whether he will ever enjoy tangible success in his life. One reasonably adopts such a posture to protect oneself, in short, when one is in extremis. But it is difficult to see why such a philosophy would be reasonable for modern, bourgeois Westerners, who decidedly are not in any such condition.

Virtue at Others’ Expense

During some Thanksgiving or other, while I was preparing a big feast for over a dozen guests, the following item from Vox caught my eye.

How to host Thanksgiving dinner when everyone has a dietary restriction

by Julia Belluz

On holidays like Thanksgiving, we bring our weight loss diets, health issues, aversions, religious beliefs, and world-changing agendas to the dinner table with us. This isn’t a bad thing; it’s evidence of a growing awareness about where our food comes from and what it can do to our bodies. But it does mean hosts are left panicking over how to accommodate everyone’s needs and preferences.

This challenging new reality was all too familiar to many of you who wrote in. Elie Challinta described a dinner in which one person had celiac disease, another was allergic to garlic, a third was pescatarian, and a fourth couldn’t eat anything spicy.

“I hadn’t done that much research since getting my masters,” Challinta wrote.

Now I’ve had just about enough of everyone’s “growing awareness,” and the discovery that there is such a thing as a “pescatarian” threatens to bring on a sudden rage, but it’s the overall gist of the thing that’s got me wondering how things could have gone so horribly wrong. Certainly, there are duties involved in one’s role as a host, but there are equally many that apply when one is a guest (including not being a burden to your host and the other guests), so statements like “hosts are left panicking over how to accommodate everyone’s needs and preferences” suggest that at least this writer from Vox and her reader, “Elie Challinta,” are confused about certain elements of human social interaction.

I recall, as a young child, refusing to eat some vegetables that were on my plate. My mother told me to eat them, explaining that they were “part of the meal,” and I did. The point was not one of nutrition – no one thought about that in the early 1970’s – but of respect. That my parents had spent their hard-earned money and that my mother had put in a substantial amount of work in shopping, prepping, and cooking the meal were of far greater significance than my personal tastes – there was nothing wrong with the vegetables, I simply didn’t like them – and thus called for deference on my part.

This was the beginning of an education in manners. My parents were of a generation for whom virtue is self-effacing and never involves imposing oneself on others, a view that not only no longer holds sway, but is threatening to disappear altogether, in favor of its opposite, as people today are inclined not just to pursue their virtue at others’ expense, but their (often nebulously defined) “well-being” too. Sometimes I find myself wondering why, at a time of unprecedented freedom, prosperity and long life, so many people in the developed world are so pissed off so much of the time, and perhaps it’s because of stuff like this. After all, who among us hasn’t found him or herself fuming with aggravation, in an endless snarl of traffic, only to discover that it’s all because of a lone jogger or cyclist, who thought it a good idea to make scores upon scores of people late to wherever they might be going, just for the sake of his workout?  

A story related by a close friend of mine also comes to mind, regarding something that happened to him when he was in rural China. He was at a dinner, hosted by a family in their village home, and was dismayed when his hosts passed him a plate of roasted cicadas. The prospect of eating them was beyond nauseating, and he considered refusing or disposing of his portion somehow, while his hosts weren’t looking. But he could see how much it meant to them that they were able to share with a guest what they obviously considered a delicacy, and he ate the cicadas, which, as he expected, were the vilest things he’d ever consumed.

Would he have been justified in refusing, had he been an ethical vegan? I don’t think so. Indeed, in my view, it would be worse to do so for that reason than out of disgust. There is a kind of rugged honesty to the rudeness involved in rejecting something your host offers you at a dinner party, on the grounds that you don’t like it, but to do so because your virtue demands it represents an entirely different level of dickishness.  

For one thing, what was supposed to be about everyone is suddenly all about you. The host now has to figure out how to accommodate your food preferences. Does he prepare you a separate meal? If you bring your own food, will it need to be heated up, and will he have the oven or stovetop space to do so? The host may have twenty guests coming, but now he’s spending more time on you than on all of them put together.

For another, you’ve just signaled to your host and fellow guests that you think they are bad people. After all, the reason why you have refused your host’s hospitality and insist on munching on kale while everyone else tucks into their kibbeh and kebabs is because you think that eating meat is a serious moral offense. So what does that mean you think of your host and of your fellow guests? Indeed, why are you even sitting with them at the table at all, given that in another context, you might be throwing containers of blood at them or brandishing a sign and screaming across a barricade?

Finally, however the tension is resolved, the result is the segregation of oneself from one’s host and one’s fellow guests, at what is supposed to be a social event. Even the most considerate person, who brings his own meal, requiring no heating or other preparation, and displays no contempt for his host or fellow guests, is not fully participating in the party. He is not accepting his host’s hospitality; not experiencing the culinary traditions of the host’s family and culture; and not sharing the dining experience with the other guests.  

And all for what? Eating one meal out of the thousand or more you will consume in a year will have absolutely zero effect on your weight-loss program, your health, or the welfare of a single animal, but it will honor your hosts and make you a part of an important social, human experience that has for millennia signified the bond of friendship that might exist between all of us. When we break bread together, we become close in a way that we were not before.

It’s hard not to conclude that the person who refuses someone’s hospitality, whether for reasons of weight loss or general considerations of health or morality, beyond simply being boorish, is engaged in a kind of posturing and showing off. And it’s only in today’s weird cultural climate that anyone could confuse such a performance with virtue.  

The public’s interest in your virtue and well-being has always been grounded in the fact that good character and health are supposed to make you less of an asshole to and a burden on everyone else. Today, unfortunately, they are just as likely to make you more of both. 

Loved, not Owed

Philosophers sacralize moral obligation and maintain that moral considerations always override all others, but ordinary people (as well as philosophers in their ordinary lives) hold actions done from earnest desire in higher esteem than those done from duty. “Don’t just do it out of a sense of duty” and “Do it because you want to” are commonly expressed sentiments, and who, upon hearing that someone is doing something for him out of duty, hasn’t at least thought that it would be a lot nicer if the person were doing it because he or she wanted to? Or even that, under the circumstances, one would rather the person not do it at all? 

Of course, moral philosophy can offer an analysis in terms of hypothetical imperatives, for example: people should sometimes do things out of affection and kindness, as it is a good way of currying favor, or some such thing. But I would maintain that this (or anything similar) doesn’t provide a good reconstruction of our common attitudes in this area, and regardless, categorical imperatives – which are in no way conditioned upon anyone’s sentiments – are supposed to override them. Nor are these common attitudes well construed via Kant’s conception of “imperfect duties,” which simply are obligations that apply under practical constraints (and which also are overridden by perfect ones).

The idea, then, is that it is less rather than more admirable to be someone who is always moved by duty rather than sentiment. Indeed, it is a wisdom so common that – as already mentioned – we have a set of stock expressions and aphorisms articulating it that we use in raising our children. The reasons why it’s a bad idea to operate this way are many and cut across a number of categories, in a manner that is resistant to the abstractions, simplifications, and small ‘r’ rationalism required by most modern philosophy. 

I discussed this once with my former colleague, Elizabeth Foreman, who said that this is a good part of why she dislikes act-centered moral theories – a category to which virtually every major modern moral philosophy belongs – and prefers agent-centered ones. One might imagine, then, that a virtue ethic has an easier time making sense of these matters. But how much better is it really?

Aristotle famously thought that while flourishing is not identical with pleasure, it is nonetheless inseparable from it: virtue is marked, partly, by the actor’s enjoyment in doing what is right. So it would seem that in his ethics, the idea of the begrudging or unenthused obligation-follower is at odds with that of eudaimonia. But Aristotle also believed that pleasure in itself is inherently value-neutral; it has no particular axiological valence. Pleasure experienced in the doing of ill-deeds does not make them better, and pleasure experienced in the doing of things that are neither prohibited nor obligatory doesn’t make them better either, at least not in relation to eudaimonia.

So, I’m not sure that going the virtue-ethics route really helps us. The highest esteem attaches to eudaimonia, of which virtue is constitutive: yes, the highest good – and highest esteem – attach to those who enjoy doing what’s right. But it is still acting rightly that is lionized, rather than affection for the object of one’s beneficence, and I maintain that this is at odds with our common sentiments. We would rather that people care about us than feel obligated to us, and we hold acts done on our behalf out of affection or love in higher esteem than those done out of duty. Of course, the two may go together – nothing prevents someone from both feeling obligated and caring – but when they come apart, we prefer to be cared about than to be the object of another person’s sense of duty. And even when they do go together, I would suggest that there is a tension between the two. Obligation is intrinsically coercive, while genuine love and affection can only be freely given, and where there is love and affection and the voluntary doing of kindnesses, obligation is unnecessary. I don’t need to be ordered to do something I want to do anyway – by other people, by God, or by moral philosophy – and I’d much rather people do something for me because they want to than because they have been ordered to, regardless of where the ordering is coming from. 

Even on the most self-legislating version of Kantian ethics, regard for the other is the product of intellect and ratiocination – of whether one’s principles for acting are consistent or contradictory – rather than affection or love, and the same is true of the Utilitarian, for whom the rationally-determined Utility Calculus overrides any consideration of feeling or care. After all, from a “rational” perspective, those Bengali refugees (of Peter Singer fame) matter more than your kids, wife, siblings, friends, neighbors, etc. And while moral philosophers may adore reason to the point that they can hold this sort of impersonal regard in the highest esteem, the rest of the human race would rather be liked or loved than be the object of rational regard or of obligation. By a mile. 

“Modern Moral Philosophy,” by G.E.M. Anscombe (1958)

Not only do I think that “Modern Moral Philosophy” (MMP) is G.E.M. Anscombe’s greatest philosophical accomplishment – beyond her translation, editing, and publishing of Wittgenstein – I think it is perhaps the most important essay on ethics published since the Second World War. It posed a serious challenge to the moral philosophy following in the tradition of Kant, Bentham, and Mill, one to which, in my view, philosophers working in these and other modern moral philosophical traditions have not adequately responded. And it almost single-handedly created the contemporary revival of interest in virtue ethics, though, as we will see, this development may be more at odds with what Anscombe suggests in MMP than in keeping with it. Alasdair MacIntyre, the second greatest influence on the contemporary revival of virtue ethics, recognized this, which is why, in After Virtue (1981), he says that his work on this front, while “deeply indebted to” MMP, is nonetheless “rather different” from it.

Anscombe does several things in MMP, but I am only going to focus on the most important aspect of it, which is described in the first two statements that she makes at the essay’s outset:

The first is that it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology, in which we are conspicuously lacking. The second is that the concepts of obligation and duty – moral obligation and moral duty, that is to say – and of what is morally right and wrong, and of the moral sense of ‘ought’, ought to be jettisoned…; because they are survivals or derivatives of survivals, from an earlier conception of ethics which no longer generally survives…

Back when I was still teaching, I would explain Anscombe’s critique to my students by way of a case drawn from MMP, with my own embellishments added. I described the case in two “versions”: that of an Aristotelian and that of a Humean.

Version One, as told by the Aristotelian:

[1]  I supplied John with a bushel of apples.

[2]  John promised to pay me £5.

[3]  John owes me £5.

[4]  If John doesn’t pay me the £5, he is a deadbeat.   

[5]  If he does pay me the £5, then he is an honorable man.

 Version Two, as told by the Humean:

[1’]  I handed over a bushel of apples to John.

[2’]  John uttered the words “I promise to pay you £5.”

[3’]  John not paying me would consist of John failing to hand over £5.

[4’]  John paying me would consist of John handing over £5.

The striking difference between the two versions is that the Aristotelian describes the situation with terms that are axiologically thick – meaning that they have embedded evaluative connotations. To “supply” someone, in the context of commerce, is to do something that incurs a debt and to “promise” to pay is to acknowledge this and accept it, entailing, as a result, that the person in question “owes” something to the supplier. To fail to pay someone, when one “owes” him is what it means to be a “deadbeat,” and paying one’s debts is part of what it means to be an “honorable man.”

Why is the story as told by the Humean so different? The short answer is “the Scientific Revolution happened.” The longer, more substantial answer is that by the time one gets to Hume, philosophers in the West had changed their minds as to what should be accepted as constituting a “fact.”

For Aristotle, everything in nature – indeed, everything that exists – has a purpose and thus a distinctive good (and bad). Value is therefore objective for Aristotle – a part of the basic furniture of the universe – and this means that human beings also have a purpose and thus a distinctive good, which Aristotle called “eudaimonia” – “human excellence” or “human flourishing.” And because human beings are complex, their flourishing takes complex forms: we can flourish intellectually – hence, the “intellectual virtues” (both practical and theoretical); we can flourish as builders and makers and artists – hence, the “virtues of craft”; and we can flourish in terms of our non-technical, social and civic activities – hence, the “moral” virtues. And notice, this is all that ‘moral’ means for Aristotle, something that will become crucial in a moment.

To describe an action as “supplying” or “promising,” then, is, for Aristotle, a straightforward statement of fact, as is to describe a person as “honorable” or as a “deadbeat.” Indeed, every one of the statements [1] – [5] would be considered straightforwardly factual by an Aristotelian.

This all changes after the Scientific Revolution of the 17th century. Nature and the things in it are no longer conceived of as purposeful and the only objective characterizations of things are in terms of mathematically quantifiable magnitudes, which means that their qualitative characteristics have come to be understood as subjective; as impressions in our minds. One cannot characterize motor movements as “supplying” or uttered noises as “promising” or speak of people as “honorable” or “deadbeats” and speak objectively, in the modern framework. And so the Humean, in encountering such terms, could only conclude, as Anscombe describes it, that “there was a special sentiment expressed by [them] which alone gave [the words] their sense,” which, of course, is just the idea of a “moral sentiment.”

But there is a further problem, for as we have already seen, ‘moral’ doesn’t mean for Aristotle what it means for Hume. In the modern context, ‘moral’ and the moral ‘ought’ indicate that which is obligatory; i.e. that which is either required or forbidden. The Aristotelian sense of ‘moral’ does not mean this. Human flourishing may include moral and intellectual virtues, as well as virtues of craft, but there is no sense in which the cultivation of such virtues is obligatory or required. For those who wish to flourish – and perhaps, to attain a certain esteem within their society – this is how it’s done. Put another way, all that Aristotle gets you is a bunch of hypothetical imperatives: If you want X, then you ought to Y. And the only thing that makes some of them “moral” is that they have to do with moral subject-matter; that is, with our social and civic activity.

Anscombe maintains that the obligatory sense of the moral ‘ought’ arises from the combination of the Greek ethics of virtue with the law tradition one finds in Judaism and pre-Protestant Christianity. It’s what you get, when you take the Greek moral virtues and say that they are required by divine command.

The ordinary … terms ‘should’, ‘needs’, ‘ought’, ‘must’ acquired this special [moral] sense by being equated in the relevant contexts with ‘is obliged’, or ‘is bound’, or ‘is required to’, in the sense in which one can be obliged or bound by law, or something that can be required by law.

How did this come about?  The answer is history: between Aristotle and us came Christianity, with its law conception of ethics. For Christianity derived its ethical notions from the Torah…

In consequence of the dominance of Christianity for many centuries, the concepts of being bound, permitted, or excused became deeply embedded in our thought…The blanket term ‘illicit’, ‘unlawful’, meaning much the same as our blanket term ‘wrong’ explains itself.  It is interesting that Aristotle did not have such a blanket term…He has terms like ‘disgraceful’, ‘impious’; and specific terms signifying defect of the relevant virtue…

To have a law conception of ethics is to hold that what is needed for conformity with the virtues failure in which is the mark of being bad qua man (and not merely, say, qua craftsman or logician) – that what is needed for this, is required by divine law.

So, modern moral philosophy and its contemporary progeny suffer from two problems which, when combined, leave a subjectivist sentimentalism as the only real option in metaethics: (a) they deny that there are any genuine – objective/mind-independent – axiological facts; and (b) they reject the idea of divine legislation. The results are values that reflect only what people subjectively prefer/abhor and requirements/prohibitions that have no objective grounding and hence, enjoy nothing but, as Anscombe described it, “mesmeric force.” “It is as if the notion ‘criminal’ were to remain,” she remarks, “when criminal law and criminal courts had been abolished and forgotten.”

Of course, there are those who have tried to provide grounding for the requirement/prohibition side of morality other than divine command, the most famous being Kant, who reconceived the categorical force of moral imperatives as deriving from a kind of self-legislation, but Anscombe dismisses this as absurd on its face:

Kant introduces the idea of “legislating for oneself,” which is as absurd as if in these days, when majority votes command great respect, one were to call each reflective decision a man made a vote resulting in a majority, which as a matter of proportion is overwhelming, for it is always 1-0. The concept of legislation requires superior power in the legislator.

The most significant reaction to Anscombe, as mentioned, is to be found in the extraordinary revival of interest in classically inspired virtue ethics over the past several decades. But I seriously question whether this is the proper lesson to take from “Modern Moral Philosophy.” For just as we no longer accept divine command ethics, we no longer accept a teleological picture of nature. And just as it is hard to see what could replace divine commands, it is hard to see what could replace Aristotelian teleology, something about which Anscombe is quite explicit:

[P]hilosophically there is a huge gap, at present unfillable as far as we are concerned, which needs to be filled by an account of human nature, human action, the type of characteristic a virtue is, and above all of human “flourishing.” And it is the last concept that appears the most doubtful.

So, I see no refuge for contemporary ethicists in a revived Aristotelianism, and I see no hope for contemporary Utilitarians and Kantians, given that there has been nothing by way of an adequate response to Anscombe’s critique. That so many are content to continue on with this sort of work is not surprising, given the purely professional imperatives of the discipline – philosophers have felt free largely to ignore Wittgenstein’s equally devastating critique of philosophy as a whole, despite the fact that there has been nothing by way of an adequate response to it either – but it does lend something of an air of twiddling and fiddling to the enterprise, at least for those of us who are familiar with and have fully digested Anscombe’s essay.

Morality Everywhere?

Over the last few years, I’ve been having a certain kind of argument, over and over again. It always starts with my feeling compelled to defend something that I really like – and that I would have thought it quite normal to like – against emphatic moral criticisms from various quarters. Eating food, for instance (well, food that includes meat or dairy), going on nice vacations, having nice cars (we don’t have particularly nice cars, but we do go on nice vacations), and even something as seemingly benevolent as supporting museums. Common things. The sorts of things that millions upon millions of people around the world do and enjoy, many of them multiple times a day.  

In her essay, “On Morality,” Joan Didion worried about excessive moral appeals, back in the morally heady 1960’s: 

The most disturbing aspect of ‘morality’ seems to me to be the frequency with which the word now appears; in the press, on television, in the most perfunctory kinds of conversation… There is something quite facile going on, some self-indulgence at work.

Little did she know what was coming. Intense, aggressive moral scrutiny directed at the minutiae of everyday life – what one eats and drinks and wears and watches and listens to; what sort of car one drives to work (or even that one drives to work); what sort of job one has; how one spends one’s spending money; what sort of apartment or house or neighborhood one lives in; whether one uses gendered pronouns or words like ‘brother’, ‘sister’, ‘uncle’, and the like in one’s ordinary conversations; even what one thinks to oneself, entirely separate from one’s behavior. The sorts of things that would never have received even a moment’s moral notice just several decades ago are now at the front and center of much of our public moral conversation and increasingly, our philosophical one as well.  

I have found myself sometimes wondering if this development in the broader culture represents the vulgarization of a trend in philosophy, beginning in the early 1970’s with Peter Singer’s “Famine, Affluence, and Morality,” which, if not the first, certainly was the first to have the effect of rendering the more humdrum dimensions of our lives morally charged in the way I’ve described, something that would only increase with his subsequent work, especially Animal Liberation (1975) and Practical Ethics (1979). (In “Famine, Affluence, and Morality,” Singer argues that we are morally obligated to give away our money to the point that we are living just above the standard of a Bengali refugee, which means that spending money on and possessing nice things is to be morally condemned.) But Singer is hardly the only one. In Living High and Letting Die (1996), Peter Unger extended this sort of hyper-critical attention to those who send their kids to private schools or take jobs that they actually enjoy (as opposed to those that pay the most, so that one can give more to charity), things, again, that it would be almost impossible to imagine someone morally criticizing not very long ago. Indeed, it would seem that there is no element of our daily lives, no matter how mundane or common or routine, that is not receiving intense moral scrutiny. And all that one need do is stick one’s head outside for a few moments to see that our public discourse, especially online, is awash in this sort of thing – “callout” culture, online-mobs, hashtag campaigns – all devoted to expressions of moral outrage, assignments of moral blame, calls for people to be fired from their jobs, social ostracism, and worse.

Of course, this is socially and culturally corrosive and quite dangerous. As a practical matter, I’m not sure what should be done about it or even how one goes about resisting it. After all, an important, serious question is buried underneath all the annoying stuff, namely: What determines whether something is an appropriate object of moral scrutiny? Clearly, in principle, anything could be subjected to moral scrutiny, but that is a far cry from saying that everything – or anything – should be examined through a moral lens on any particular occasion.

Perhaps the only things I’m sure of at this point are the following: [a] that this moral hyper-vigilance cannot possibly be a good or sensible way of approaching social and cultural life and [b] that we aren’t going to be able to come up with some theory or principle that will tell a person when something should be morally scrutinized or not. Beyond that, I can think of two boundary conditions or limits that I would apply to the question of the proper scope of morality.

First, it can’t be the case that one ought to be moral all the time or even that one should try to be. Put another way, moral considerations cannot and should not always be overriding. My reason is essentially the same as that given by Susan Wolf, in her landmark essay, “Moral Saints”: to act solely on moral considerations means that one will fail to cultivate any number of other virtues. Not only is this undesirable at the individual level – people whose only virtues are moral tend to be pedants and scolds and unpleasant bores – but at the societal level as well. A society that included nothing but moral saints would be worse than one in which people have cultivated different types of virtues. Consequently, we should resist the following, depressingly common sort of line: “You shouldn’t be doing (non-moral) X, because you could be doing (moral) Y instead,” which applies directly to Singerisms like “You shouldn’t be spending your money on (non-moral) X, because you could be spending it on (moral) Y instead” and can also be extended to the ethical vegan’s “You shouldn’t eat (immoral) X, because you could be eating (moral) Y instead.”

Second, it can’t be the case that too many people are too immoral too often. Put another way, if your moral outlook entails that too many people are bad, too much of the time, then something is wrong. This is, of course, vaguely put – how many are too many? – but deliberately so.  

At some level, immorality – badness – has to be understood as the deviation from a norm and thus is something that should be somewhat rare. I am inclined to say the same thing about moral virtue. It strikes me as unlikely that most of our mundane, daily business should have any moral valence whatsoever, even when it involves the little kindnesses and slight cruelties with which our ordinary lives are filled. Our moral meters should not be such sensitive instruments, and moral praise and blame should be infrequent; saved, as it were, for “special occasions.” For one thing, there’s something absurd, along the lines of an informal reductio, about suggesting that every kid’s bologna sandwich or family barbecue or utterance of “Thank you, sir” constitutes a moral offense. For another, there is a real futility in targeting basic, common activities for moral condemnation: the point of moral praise and blame is to get people to behave better, and not only are most people not going to stop fishing or eating bologna sandwiches or saying “Excuse me, ma’am,” regardless of how often they are condemned for doing so, but one runs the real risk of effecting a general moral exhaustion amongst the public that will make it more difficult to persuade people when the important issues come along. Put another way, “choose your battles” seems like generally good advice.  

Finally, to treat so much of what we do as morally obligatory or prohibited risks a kind of concept creep that threatens to undermine the relevant concepts themselves. Just as it is a mistake to suggest that what is supererogatory is obligatory, it is a mistake to suggest that what has no real moral significance falls under either a positive or negative obligation. If everything one does is morally charged then what does the claim that something is a “moral issue” really mean? If everything is moral, how special can the moral be? The irony, then, is that those who have done the most to moralize every activity and every minute of every day, may have only succeeded in making everyone else think that the moral isn’t anything of which to take much note. 

A Foolish Impartiality is the Hobgoblin of Moral Philosophy

Philosophy professors like to think that ours is a clarifying business, so some of you may be surprised to discover that we can be confused about things that most ordinary people are not. One of these things is partiality and impartiality, and how they bear on ethical questions.

Certainly, the average person thinks impartiality is a condition of ethical and professional conduct, within particular spheres. Judges presiding over courts should be impartial in the judgments they make, for example, as should umpires officiating athletic competitions. 

The average person does not think, however, that impartiality is some kind of general ethical requirement; that not only should a judge or umpire be impartial in their decisions, but that we all should be impartial as a condition of being moral, generally speaking; that when considering where to allocate our personal resources or labors or energies or affections, we should not favor or advantage those to whom we are intimately connected over strangers. 

Hence, the weird spectacle of a Peter Singer suggesting that we (all of us) are obligated to reduce ourselves “to very near the material circumstances of a Bengali refugee,” as he did in “Famine, Affluence, and Morality” and of a Peter Unger maintaining that it is immoral to spend money on private schooling for one’s children or to work in a job one loves (because one is not maximizing income and giving the most possible to charity), as he does in Living High and Letting Die (1996).

Meanwhile, the popular fashion of rejecting people’s personal prerogatives in favor of a ruthlessly policed requirement to “care about and cater to” every stranger’s every imagined need and fancy is simply the common, vulgar application of this idea in the public discourse: in minds even less subtle than Singer’s and Unger’s, Don’t treat anyone differently! becomes Treat everyone as if he or she were your best friend or closest relative!

Now, this is some silly stuff to be sure, but understanding why is important and reveals how philosophy can go terribly wrong, with the effects of its bad ideas downstream on public manners and mores being even worse.

___

One might think impartiality is a general condition of morality, but deny that morality is always overriding or should always be our chief concern. Call this the “Moral Saints” approach (after the Susan Wolf article of the same name). Relatedly, one might accept that regardless of context, impartiality is a condition of moral obligation, but think that in order to render the ethical life accessible to the weak-willed among us, morality should be “enforced” (whatever that means) in a somewhat less rigorous fashion. (Singer suggests something like this in his discussion of the more “moderate” version of his “you are obligated to live just slightly better than a Bengali refugee” thesis.)

I prefer simply to deny that impartiality is any sort of general moral requirement. After all, we can provide straightforward, easy-to-understand reasons why impartiality should be an ethical and professional requirement for judges and referees and umpires and the like, but when philosophers have tried to explain why impartiality should be a general moral requirement, the arguments have been neither plentiful nor credible. One defends oneself against a strong position; one merely exposes a weak one. No one actually practices this ethos, after all – the examples of the Singers of the world demonstrating as much in their personal conduct are many and well documented – and since it seems a universal human inclination to extend concern according to distance along a number of vectors, as Hume famously observed, it is those telling us that we should act like umpires all the time who have to do the bulk of the work, not those who think we should act like umpires when we are umpires, and otherwise when engaged with the other parts of our lives. 

___

There are any number of contexts in which impartiality is apt, and for which the reasons why are easy to see. The point of athletic competition is to see which athlete or team is the best in some sport or other – tennis, basketball, baseball, etc. – and this can only be assessed if the playing field is level, which includes consistency and impartiality in the application of the game’s rules by referees and umpires. Liberal democracies depend on widespread, voluntary compliance with the law, which means that citizens must believe that they will receive a fair shake in the courts, and this depends on the rules – in this case laws – being consistently applied and the judges being appropriately objective. 

But there are any number of other contexts in which impartiality would seem inapt, and for which the reasons, again, are easy to see. Growing up, my daughter became interested in vocal performance, and found herself in various competitive situations as she pursued it: spots in the choir; opportunities to sing in the Vatican, as part of a selective group; statewide competitions; etc. We had the means to invest quite a bit of money into her musical education, hiring private voice teachers, enrolling her in an intense voice camp at NYU, etc. Undoubtedly, this advantaged her relative to her competitors in a way that substantially affected the outcome. Equally undoubtedly, those who lost to her in competition were unhappy about it. So, why did we do it? Because she is our kid, and we care about her more than we do about other people’s kids. 

In philosophy, those banging the General Impartiality (GI) drum the loudest are the utilitarians. The requirement to maximize the general happiness is in itself an expression of GI: it tells us not to maximize our own happiness or the happiness of those we care about the most, but happiness generally. The reason why we should think this is never well-explained, alas, or even explained at all. Mill, famously, in his “proof” of the general happiness principle, simply says this:

[T]he sole evidence it is possible to produce that anything is desirable, is that people do actually desire it. If the end which the utilitarian doctrine proposes to itself were not, in theory and in practice, acknowledged to be an end, nothing could ever convince any person that it was so. No reason can be given why the general happiness is desirable, except that each person, so far as he believes it to be attainable, desires his own happiness. This, however, being a fact, we have not only all the proof which the case admits of, but all which it is possible to require, that happiness is a good: that each person’s happiness is a good to that person, and the general happiness, therefore, a good to the aggregate of all persons. Happiness has made out its title as one of the ends of conduct, and consequently one of the criteria of morality.

It’s a mess for sure. The good, here, is defined subjectively – that someone desires x shows it is possible to desire x and that therefore, x is desirable and consequently, good – but the conclusion is delivered with an air of false objectivity. ‘Desire’ after all, is a two-place predicate – “So and so desires such and such” – so what can it mean to say that “the general happiness is a good to the aggregate of all persons”? If to be good is to be desirable, then to whom is the general happiness desirable? The aggregate of all persons is not itself a person who can desire something, and the very question we are entertaining is why anyone should think as a general matter that people should care as much about the happiness of total strangers as they do about that of their intimates. Simply to declare, “because the general happiness is desirable,” which is what Mill’s “proof” comes down to, obviously begs the question, as the opponent’s view boils down to: No, it isn’t. At least not to everyone in every context.
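The gap can be put schematically (the regimentation here is mine, not Mill’s). Write “D(s, x)” for the two-place relation “s desires x”:

[M1]  For every person P, D(P, P’s own happiness).

[M2]  Therefore, D(the aggregate of all persons, the general happiness).

[M2] does not follow from [M1], for the aggregate is not itself a desiring subject – not an admissible value of “P” – and on Mill’s own subjective account of the good, nothing is desirable except to some desirer.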

In Kantian and Kant-inspired ethics, arguments for GI proceed from the strong idea of equality that Kant develops in his theorizing on personhood, expressed most vividly in his treatment of the so-called Kingdom of Ends, in which we are all equally valuable, because we are equally creators of value. But Kantian ethics also sits somewhat uncomfortably with GI, given Kant’s conception of imperfect duties and the flexibility it offers us in choosing the occasions on which to pursue them. It will be hard to derive even a weak GI from Kant and impossible to obtain one as strong as the sort we’ve been conceiving throughout and which is my concern here. The point, after all, is not to criticize those who may feel the pull of obligation more often or more strongly than others, but rather to reject the idea that GI represents some sort of universal principle that sits in authority over our every action, no matter the sphere or place in life in which we find ourselves.

In popular culture, GI is manifested mostly in the vulgar form already identified. One is expected not only to tolerate strangers’ ideations, self-conceptions, and predilections, or to treat them as on a par with one’s own, but to celebrate and even genuflect before them, regardless of how bizarre or objectionable they may be. The popular version of GI is thus more extreme than the philosophical one, for while the latter tells us, Don’t advantage your intimates over strangers in any meaningful way – thereby demanding we impose a crude equality where there normally is no corresponding equality of sentiment – the former requires us to prioritize and “center” strangers over our intimates in order to balance some imagined social and cultural scales; to care even more about strangers than about our families and friends. While the philosophically inspired GI focuses mostly on money and our behavior in the economy, the popular version is more concerned with social standing and status. On this view, the special consideration, attention, affection, toleration, support, and care that I might give to my daughter or mother or wife is something I’m supposed to offer generally, in all my dealings with anyone and everyone.  

The reasons given for this version of GI in the popular discourse – when reasons are given at all, which they mostly are not – tend to cluster around barely considered notions of kindness, fairness, and the like. The idea is that we should “be kind” and “be fair” – with these terms being treated as simply co-extensive with the vulgar version of GI – and that’s it. Of course, this provides no argument or positive rationale for anyone not already on board, as rejecting GI just is to reject the idea that we must always be equally generous or caring or kind or fair or attentive or tolerant. And where no reason is given for something beyond the invocation of a buzzword or slogan or mindless repetition of an already rejected proposition, after a polite “no thanks” I have little more to offer than a less polite “fuck off.”

Just as “a foolish consistency is the hobgoblin of little minds,” a foolish impartiality or disinterestedness is a hobgoblin of moral philosophers, and is antithetical to real (as opposed to imaginary) ethical life. There are times when we should treat strangers just as we treat our intimates, and there are times when it is perfectly appropriate not to. Knowing which is which is the better part of wisdom. Being incapable of telling the difference, or unwilling to, is not.

Caring and Catering

As I was nearing retirement, I found myself in an exchange on social media with several philosophers on the subject of conversations between ethical vegans and meat eaters about the rightness or wrongness of eating meat. The following remarks by one of the participants caught my attention:

“…it definitely seems gauche to defend eating meat while eating meat (it is asking enough to ask vegans to watch non-vegans eat meat).”

“If you’re having a lunch with a vegan philosopher you don’t know well, and as you bite into a hamburger launch into why it’s OK, that strikes me as similar to arguing that professor/student romances are OK at a party where you are openly flirting with a student and another professor thinks it’s seriously wrong. Which I think is bad not just because the position you hold would be problematic, but also because one shouldn’t flaunt one’s own scandalizing of other people’s consciences.”

“I do think that catering to other people’s sensibilities, within reason, is basic decency.”

I asked whether those of my interlocutors who agreed with this kind of thinking were inclined to apply it generally. The similarity between the sorts of media employed by animal rights activists and pro-lifers is striking (pictures of animals in filthy, over-crowded conditions and posters of bloody, torn up fetuses) and suggests that a number of vegan activists are as eager to “scandalize people’s consciences” as their pro-life counterparts. 

Regardless, I’m disinclined to accept the idea that catering to other people’s sensibilities is required for “basic decency” in the first place. (The “within reason” caveat is large enough to drive a truck through, so I am ignoring it.) Indeed, once we have moved beyond our family, friends, working partners and others within our circle of intimates and relative intimates, I think our obligations to one another consist of quite minimal, low-bar kinds of stuff and that this gentleman and some of the others in the debate are confusing the supererogatory with the obligatory: what it would be generous to do with what “basic decency” requires. Of course, I am speaking only of our general, moral and civic obligations to one another, as people who have to live with one another. There are indefinitely many other stronger and more specific obligations that we may have by virtue of the various roles we play, whether professionally, institutionally or what have you.

Like everyone else, my concern extends outwards in diminishing circles. Unlike everyone else, I’m happy to admit it. I don’t care about complete strangers as much as I do about intimates, I don’t care about animals as much as I do about people, and I don’t care about strange animals as much as I did about our late pet Bichon Frise. So, unless you and I are involved in some relevant way, I not only don’t care what you think of me or what I do, I’m also not interested in what you think or do, including the ways in which you may choose to “identify.” That you are x, y, or z may be something I have no choice but to consider, depending on the circumstances – if you are a policeman and I am involved in a traffic stop, for example – but the various things you might think you are need not mean anything to me or anyone else not personally involved with you. 

These sorts of feelings and concerns are fundamental to what I and everyone else find obligatory, permissible, unacceptable, and the like, and to what we are inclined and disinclined to do in our interactions with others. In my case, I don’t think I have any general obligation to affirm the opinions, values, or self-declared identities of strangers or otherwise to “cater to their sensibilities.” I do feel obligated not to gratuitously insult or offend most of the people I deal with, in most contexts, but don’t accept the notion that eating a hamburger in front of an ethical vegan or posting a photo of my favorite grilled octopus recipe in reply to someone who believes that eating octopus is monstrous are rightly characterized as such. (The exchange over the octopus actually happened in a spinoff conversation from the initial one and ended amicably.) At least, not if we are talking about mentally sound adults.

So it is generosity, not obligation, that is at issue, and while generosity is appropriately wished for by everyone, it is neither required nor something to be expected from those with whom we do not have any kind of substantial involvement. To demand it or claim it is somehow “obligatory” is to misunderstand what the thing is.

I’ll also note that in the current climate, the tendency on the part of so many to engage in emotional blackmailing and other forms of manipulative behavior substantially diminishes my inclination to be generous in my engagements with strangers or near-strangers. Once one has been told for the umpteenth time that people are so distraught from hearing or seeing or reading something they disagree with or dislike that it is comparable to being seriously injured or made an invalid, or that they are going to be driven to self-harm and suicide if anyone fails to affirm their self-identifications and beliefs, the idea of being generous as some sort of blanket policy seems like a sucker’s game. 

Increasingly and with alarming speed, we seem to be embracing an ethos according to which we not only are right to impose ourselves on others, but others are obligated to accept and even embrace our impositions. This strikes me as perverse on its own, but it becomes even more so, when the imposition is in the service of some conception of virtue that one has and is at the expense of someone who doesn’t share it. 

Indeed, I don’t really see how to take the demand for generosity – as opposed to the hope for it – as anything other than a mixture of excessive self-confidence, self-righteousness, and self-importance. Non-vegans must be generous with regard to the vegan’s sensibilities, because “we” are sure that vegans are right about the morality of meat eating and non-vegans are wrong. Gay people, meanwhile, need not be generous regarding the sensibilities of orthodox Christians, because “we” are certain that the latter are wrong about homosexuality and the former are right, and so on and so forth. Of course, those representing the losing side of these generosity-demanding-equations might just as well think the same thing, but in the opposite direction, and they often do.

The “solution,” of course (scare-quoted, because the alleged problem is a product of wooly thinking), lies in not expecting too much of people whom you don’t know and getting over yourself. These may seem obvious and simple things, but they are by no means easy, especially at a time when we are not only encouraging everyone to lionize themselves and share far too much with everyone, but when technology has made doing so far too easy and rewarding.

“Does Moral Philosophy Rest on a Mistake?” by H.A. Prichard (1912)

In this essay from 1912, H.A. Prichard argues that the aims of traditional, modern moral philosophy are similar to those of modern epistemology, in that they are an attempt to overcome skeptical doubts. Modern epistemology, which begins with Descartes, is a response to the fact that we can doubt many of the things that we think we know to be true, and the theorizing that follows is an effort to find a procedure by which we can demonstrate that we really do know what we think we know. And Prichard thinks that similarly, modern moral philosophy’s primary aim is to find a way by which to demonstrate that what we think is our duty, really is obligatory.

Just as the recognition that the doing of our duty often vitally interferes with the satisfaction of our inclinations leads us to wonder whether we really ought to do what we usually call our duty, so the recognition that we and others are liable to mistakes in knowledge generally leads us, as it did Descartes, to wonder whether hitherto we may not have been always mistaken. And just as we try to find a proof, based on the general consideration of action and of human life, that we ought to act in the ways usually called moral, so we, like Descartes, propose by a process of reflection on our thinking to find a test of knowledge, i.e. a principle by applying which we can show that a certain condition of mind was really knowledge, a condition which ex hypothesi existed independently of the process of reflection.

The question, then, is whether or not modern moral philosophy has succeeded in identifying such a procedure. Prichard says that there have been two broad strategies that correspond roughly to the consequentialist and the deontological approaches (though the latter can be articulated in non-deontological ways as well): (a) to demonstrate that something is a duty from the fact that it produces some good; (b) to demonstrate that something is a duty because it is the product of good motives. Both strategies fail, but the consequentialist’s failure is the easiest to see, so let’s examine it first.

Does the fact that some state of affairs is good demonstrate that I ought to bring it about? It would seem obviously not, for the following is not a valid inference:

(1)  x is G (good)

(2)  Therefore, x is O (obligatory)

In order for (1) to justify (2), one would have to presuppose that what is good is obligatory. With that added as a premise, the inference is valid:

(1)  What is G is O

(2)  x is G

(3)  Therefore, x is O

The trouble, of course, is that I’ve grounded an obligation in what is good, only by invoking another obligation, for which I have no ground, which means that the consequentialist effort to ground obligation ultimately fails.
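The shape of that failure can be displayed schematically (my gloss on Prichard’s point, using the notation above):

[P1]  What is G is O.   [the added premise]

[P2]  Any ground offered for [P1] must itself be either an obligation-claim – in which case the question of its ground simply recurs – or a goodness-claim, in which case we are back at the original invalid inference from G to O.

Either way, obligation has been “grounded” only in further obligation.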

The deontological strategy is just as problematic, though its difficulty is harder to articulate, in part because the circle is tighter and in part because for Kant it is really willing that falls under the concept of obligation, rather than acts. (Kant maintains that one can do what duty requires, but if one does it for the wrong kind of reason, the action does not “count,” morally speaking.)  

What makes an action moral for Kant is that it be willed because it is obligatory – this is just his idea of acting from duty. And the idea is supposed to be that willing for this reason is inherently good and that this is what explains the obligatoriness of certain acts of will. But one should see that this is just as circular as the consequentialist’s account. One appeals to the goodness of a certain kind of motivation to explain why certain acts of will are obligatory, but what makes those motives good is that they are motives from obligation. Once again, then, rather than explaining what is obligatory in terms of what is good, all that one has done is explain what is obligatory in terms of…what is obligatory.
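The circle can be put in the same schematic form (again, the regimentation is mine):

[K1]  An act of will is obligatory because its motive is good.

[K2]  Its motive is good because it is the motive of willing from obligation.

[K3]  So, an act of will is obligatory because it is willed from obligation.

[K3] merely restates what was to be explained.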

Prichard maintains that our feelings of obligation are basic and immediate – prima facie, to borrow an expression from W.D. Ross – and for anyone who has ever felt morally obligated, this seems pretty hard to deny. But it is important to understand what he is not saying. Prichard is not suggesting that nothing can get us to feel an obligation – seeing, hearing, or learning about something can certainly do so. What he denies is that any description of such facts, no matter how complete, entails or otherwise implies any particular obligation. In this sense, our feelings of obligation are like our experience of aesthetic properties. Getting you to attend to the pale colors, curved contours, and long, thin neck of a vase may help you to see that it is delicate, but it does not follow from the fact that something is pale colored, has curved contours, and is possessed of a long, thin neck that it is delicate.

What is even more interesting, however, is what Prichard says this all means with regard to our uncertainties. How do we know whether we really are obligated? How do we render those pre-theoretical, intuitive moral reactions reflectively – and thus rationally – respectable? The short answer is that we don’t and we can’t, but the details are fascinating and spread well beyond our feelings of moral obligation, into the realm of belief and of epistemology proper.

Prichard observes that there was always something quite odd about the modern epistemological enterprise, and it is something that Descartes arguably saw himself: namely, that it is folly to search for procedures that will guarantee or even substantially increase certainty. Suppose that I am taking a math test, and I believe that the answer to ‘3+5’ is ‘8’. Surely, I can doubt this, because I have made arithmetic errors in the past and more fundamentally, because I know that I can reason incorrectly.  

But wouldn’t it be awfully strange to reach for a proof procedure, by which to ensure that my arithmetic is correct on any given occasion? For if recognizing that I can and sometimes do reason incorrectly is what led me to doubt whether 3+5=8, wouldn’t it also cause me to doubt whatever result I got from working through the proof procedure? More so, in fact, given how much more complex the proof procedure is than the initial sum? And isn’t it obvious that this will be true of any proof or procedure I might come up with to “check” any belief, whether it’s that 3+5=8 or that I am currently sitting here typing this essay? 

What do we actually do, if we doubt whether we added a series of numbers correctly? We do the sum again. And what do we do if we are not sure whether we really saw something? We look again. And what do we do, Prichard asks, if we doubt whether we really are obligated to do something? We put ourselves back in the situation – either really, or mentally – and see if we feel the obligation again. That’s all that we can do. And the level of confidence that arises as a result is all that we ever legitimately will have.

On Our Use of the Moral Idiom

An unpopular teen – call her “D” – is in her high school cafeteria, eating alone. Several other girls taunt and humiliate her, to the point that she bursts into tears and begs them to cease their torments, crying, “You’re hurting my feelings, please stop!”

What would be different if she had told them, instead, that they ought to stop what they are doing, because it is morally wrong? Why might she say this, rather than what she actually said?

A common feature of our moral thinking is that statements and requests like those made by D are not enough; that the language of joy and sorrow, love and hatred, sympathy and callousness is inadequate for the purpose of addressing the dramas that characterize so much of human life; and that we need the language of morality in order to do so.

But why isn’t it enough to say that something is horrible and that one hates it and wants it to stop? Or that something is wonderful and that one loves it and wants it to continue? What does the moral language add that is missing from the language of emotion and feeling?

One potential answer lies in Kant’s philosophy: I make a moral appeal rather than an emotional one, because in the absence of a moral reason, what I say or do will fail to be moral. But, while this explains why we will make moral appeals, if we want to be moral, it doesn’t say anything about why we would want to be moral, a question for which, notoriously, Kant has no compelling answer. For Kant, moral obedience is constitutive of being a person, so to fail to be moral means suffering a kind of “diminished personhood,” but this simply raises the question of why anyone should care about that.

By “Why do you want to be moral?” I do not mean “Why would you want to be moral, rather than immoral?” which is the way that philosophers typically frame the question. My question is “Why would you want to be moral, as opposed to (merely) being nice, generous, sympathetic, etc.?” 

Now, a Utilitarian might say: “Being moral just is being nice, generous, sympathetic, etc. It is not some extra quality.” But while I can imagine a Utilitarian saying this — and I don’t need to imagine it; one self-described Utilitarian has said something very much like this to me — I wonder whether she can really mean it.

Certainly, what she says is true to some extent. When we act under the impetus of positive feelings, we sometimes do things for the object of our affections that the Utilitarian would deem good, and when we act under the pressure of negative feelings, we sometimes do things to the object of our antipathy that the Utilitarian would deem bad. In short, acting out of sympathy sometimes serves the cause of utility, while acting on the basis of antipathy sometimes undermines it. But neither is any sort of guarantee. One can act out of sympathy and cause harm nonetheless, and one can act out of antipathy and maximize utility, notwithstanding. So the aims of being sympathetic and maximizing utility are not only conceptually separable, they will often come apart, as a matter of fact.

An admittedly cynical reason for using moral language rather than the language of positive and negative sentiment is that we don’t believe that people care enough about one another to be moved by the mere voicing of a desire. By characterizing what we want as an obligation, we imbue it with an air of necessity that will make it more likely that the person will comply with our wishes. Those who think this way will say that D should recast her wish as a moral obligation, if she really wants to ensure that the girls will cease their taunting. After all, if the girls don’t care about the feelings of the girl whom they are making miserable, then why would they care that D wants them to stop?

I can imagine someone protesting that there is no need to be so suspicious about morality; that our use of the moral vocabulary is simply a matter of being truthful. One might point to the common admonition that we should not “trivialize” the things that have happened to others, as in, “How dare you trivialize what those bullies did to that poor girl in the cafeteria!” and suggest that the offense of trivialization is an offense against the truth: the misdeed of having failed to adequately characterize a situation or event; specifically, of having underestimated its significance. The thought is that engaging with the moral framework of concepts is necessary, if we are to sufficiently respect the truth, in the sense of honoring the full significance of something that has happened to someone.

Why are we offended by what we perceive as a failure to adequately represent this particular reality? Certainly, the offense seems odd, at least as described thus far. The mere fact of misrepresentation, taken separately from its tangible effects, is a purely aesthetic offense, and while there undoubtedly are those who are gripped by the idea of truthfulness for truthfulness’s sake, they are a rare and obsessive bunch.  

I would like to suggest that the offense of trivialization is not an offense against the truth but is rather one of insufficient sympathy. When I am upset by what I perceive as your trivializing description of something that has happened to me, the reason that I care so much isn’t because this particular misrepresentation offends the truth, but because it offends me, for it suggests that you don’t care about me enough. I am thinking, “If you really cared about me, you would have characterized this situation in such a way as to engender maximal sympathy,” which means, in a way that maximized the significance of what has happened.

Lest we think that this brand of offense is egoistic, note that people can clearly get angry over what they perceive as trivializations, even when they don’t know the trivializer, and even when the thing allegedly being trivialized happened to someone else – indeed, to someone they don’t know. For example, a common refrain one hears when someone tries to compare the Holocaust to some other mass murder or genocide is that such comparisons trivialize it, and the people saying this need not have survived the Holocaust themselves or even personally know anyone who did. They are saying, in effect, “How can you care so little about those people?!”

Regardless, our turn to the moral framework is born of an impression of inadequate human sympathy. What we care about is people caring about each other, and when we invoke morality and employ the moral conceptual framework, it is because we think that people don’t care about each other enough. That we invoke it so frequently and in so many different contexts and have done so for such a long time suggests that this perception of a failure of human sympathy is both general and longstanding. 

At a minimum, these considerations suggest the following:  

First, we would be better served by attending to the cultivation of human sympathy than by the proliferation of moral philosophies and moral discourse. With respect to the formal curriculum, this might suggest a diminished place for philosophy, in comparison with subjects like literature and the fine arts – the latter of which directly engage us at the level of the affective sensibility – or like cultural anthropology, which confronts us with the actual practices and sensibilities of those who belong to cultures other than our own. 

Second, we must face the fact that the “moral image” that philosophers have been peddling since antiquity is based on a fundamental misconception – or, if you read the history of ideas the way that Nietzsche does, a fundamental dishonesty. Philosophers, including Plato, Aristotle, Locke, and Kant, have taken our moral frameworks and practices as demonstrative of our essential nobility; as something that speaks highly of us. They take it as constitutive of our personhood; as that which distinguishes us from the beasts; as the thing about us wherein our dignity lies.

But, if the points raised here have been correct, exactly the opposite is the case. Our moral framework and language, rather than demonstrating our elevation, point towards our debasement. We invoke and engage in moral performances not because we believe in our fellow men and women, but because we do not; specifically, because we don’t think that they possess, on average, any kind of inherent or instinctive charity.

The Special Standing of Moral Skepticism

I want to describe a kind of moral skepticism that I believe enjoys special standing. It is skeptical, insofar as it denies that there are good reasons for believing that moral properties are real (in the realist’s sense of the term). It has special standing, because unlike general skepticism – by which I mean skepticism about the external world – the doubts it describes are real, not hypothetical, and derive not from the exploitation of formal gaps in the logic of justification or the mere contemplation of human fallibility, but from reflection upon the substance of actual moral practice. Certainly, there are moral experiences and moral performances (or at least, there are experiences and performances that we call “moral”) that represent the basis for any theorizing that we care to do on the subject. But whether there is anything beyond those experiences and performances – some independent “moral reality” – is something we have good reason to doubt.

Consider the following two statements, uttered or written on a particular occasion:

(a)  I want you to do such and such.  

(b)  You ought to do such and such.

When confronted with a person who has just said or written (b), what reason do we have to believe that he isn’t really thinking (a); that is, that his real motivation is desire, rather than the belief that a moral obligation is at stake?

I can think of two good reasons for wondering about this.  

First, a person can more readily obtain what he or she wants from another person by attributing some sort of necessity to it, and to suggest that something is a moral obligation conveys just such a sense of necessity. The advantage that comes from telling me that I ought to do such and such, rather than simply saying that you’d like me to do such and such, is similar to the advantage that comes from telling someone that you need something rather than saying that you want it.

The necessity conveyed by the language of obligation, like that conveyed by the language of need, lends our writing and speech an aura of urgency that the language of mere wanting lacks. It accomplishes this in two steps: first, by suggesting (sometimes explicitly, sometimes obliquely) that the reason one “ought” or “needs” to supply the desideratum is in order to satisfy some way that things are supposed to be; second, by trading on the common instinct that opposition to this carries with it some sanction, whether divine punishment, physical or psychological penalties (including the pangs of conscience that we experience), or social ostracism of one kind or another.

Philosophers have tended to paint these sanctions in more abstract strokes, whether in terms of a kind of diminished personhood — thinkers ranging from Aristotle, to Locke, to Kant have maintained that moral obedience is constitutive of the very idea of being a person — or even less tangibly, in terms of suffering “a less valuable existence,” as Robert Nozick described it in Philosophical Explanations (1981), because, as he put it, “it is better and lovelier to be moral.” But regardless of whether you think that the punishment for violating the Way Things are Supposed to Be involves damnation, mental or physical distress, being shunned by others, or some sense of personal diminishment, an implicit (and sometimes explicit) threat of punishment always accompanies invocations and intimations of moral necessity, and this largely explains the sense of urgency that they convey. Even where there is little or no explicit belief in the reality of punishment, there remains a kind of racial memory, from the days when everyone believed in the reality of some sanction or other, and this alone is sufficient to imbue the language of moral obligation with the desired force. So we have a very clear reason for wondering whether a person, in uttering (b), really means it: it is to this person’s advantage to pretend that what he wants me to do is a matter of moral obligation, since if he can convince me of this, he is more likely to get me to do it.

A second reason for wondering whether a person who utters (b) really means it is that it is common for people to misrepresent their motives, not out of a conscious intention to mislead others, but as a result of having already successfully lied to themselves. One’s fraudulent moral posture, in such circumstances, is not a product of scheming, but of self-deception.

It may have been Nietzsche who first made us aware of the degree to which our ostensible morality — the words and actions that we offer on behalf of benevolence, charity, selflessness, etc. — may be motivated, in fact, by a contradictory set of deeper feelings and desires, though in truth, it was probably St. Paul who first introduced this idea into the Western consciousness. But the weird class- and race-based conspiracies that Nietzsche posited as being responsible for this moral/psychic dissonance render his account unsuitable as a basis for sound moral skepticism. It is Freud and his theory of the unconscious and its relationship to our conscious thoughts and behaviors that provides us with a powerful and comprehensive framework with which to make sense of the idea of moral self-deception.

One can also usefully employ Marx along these lines, and it was Paul Ricoeur who, in his 1970 book, Freud and Philosophy: An Essay on Interpretation, first identified a “school of suspicion,” led by the triumvirate of Nietzsche, Marx, and Freud, each of whom was devoted, in his own way, to exposing the “illusions and lies of consciousness.” And though Ricoeur rightly points out that the sort of unmasking promoted by these “masters of suspicion” was not ultimately intended as a basis for skepticism — since each also provided a hermeneutics, by way of which we might decode our misleading utterances and behaviors — it certainly lends support to a moral skepticism the roots of which lie in an analysis of common human behavior, and which focuses on the reasons for doubting the sincerity of people’s professed moral motives, postures, and other moral performances, including, importantly, our own.

Freud maintained that we adopt a moral posture as an act of repression, specifically, of our core desires — desires that are largely unconscious — because acting on them is alleged to be incompatible with social and civic life. The pressure to be morally obedient, then, is ultimately social: In Freudian parlance, the Ego confronts the Id with the demands of social and civic life (the so-called “Reality Principle”), while the Superego confronts it with the moral ideals of the local culture, as manifested to children in adult authority figures. In a very literal sense, then, when a person professes a moral reason for an action or a request for one, he is always misrepresenting his underlying motives and is largely unaware of doing so. Significantly, Freud depicts this repression as strenuous and difficult (in his 1917 Introductory Lectures on Psychoanalysis, he describes it as an “unstable arrangement”), one that is often, even routinely, prone to failure. Not only, then, does the profession of a moral motive never reliably indicate that a person is really thusly motivated, it in no way ensures that he will act in accordance with it, a fact that is amply demonstrated in everyday life, by everything from the ubiquitous, low-level moral inconsistency to which we are all prone to the most stunning displays of moral hypocrisy, in which it seems that the stronger and more insistent the moral postures and performances, the greater the fraud lurking beneath.

Some may want to resist these conclusions, on the grounds that Freud and psychoanalysis have been discredited, but one need not buy into any of the specific details of the Freudian story — the existence of faculties named ‘Ego’, ‘Superego’, or ‘Id’, for example — in order to sustain the substantive points, which have become a part of our common wisdom: that we are mysteries to ourselves as well as to others; that we are complex and conflicted; that we are much more selfish than we let on or are even aware of ourselves; and that we pose and posture for our own as well as popular consumption. Consequently, our public behavior and speech, as well as our internal conversations with our own alter-egos and consciences, are as likely to mislead as they are to provide a real picture of our actual motives.

At this point, I can imagine a moral philosopher protesting that nothing that I have said provides the slightest argument for moral skepticism. That we don’t always mean what we say doesn’t mean that we never do, and even if we never tell the truth in reporting moral motives, it doesn’t follow that moral properties do not exist. 

This is a standard move in philosophy and stems from our subject’s deep and longstanding commitment to abstraction, generality, and rationality, as well as its discomfort with particularity and contextuality and dislike for impulse and inconsistency. Someone who wouldn’t go back to a car dealership after being cheated just once by one of its salesmen, even though it “doesn’t follow” that all the other salesmen are crooks, will insist, without irony, that there are moral truths and that people rationally apprehend and act on them, even while being bombarded with examples of people who clearly are doing no such thing and in the face of modern psychology, which undermines the sort of naïve treatment of human behavior that the typical moral philosophy requires.

This attitudinal lurch is unsurprising once one realizes that for most moral philosophers, the fascination has always been with the theorizing itself and not with the practices that the theorizing is alleged to be about. For these people, Ethics is about the existence and nature of rightness and wrongness, goodness and badness, idealized actors, beneficiaries, and victims, justification, and other such abstractions, not actual people, their actual behavior, or their actual reasons for acting, so when that behavior and those reasons are revealed for what they are, as modern psychology tries to do, it is easy to simply ignore them or to point out that “nothing follows” with respect to ethical theory.

This is why it is essential that we understand that ethical theorizing is itself a human practice — a form of human behavior — with its own “genealogy,” by which I mean its own historically and psychologically grounded raisons d’être. Here, again, Nietzsche is crucial (though once again, the specific account he offers is bizarre and histrionic), for he made the point that our reasons for engaging in ethical theorizing may be as conflicted and contradictory as our reasons for making first-order moral claims and engaging in ostensibly moral behavior. Once we adopt a psychological model in accounting for this dissonance, as we have already done in our analysis of moral acts and utterances, we see immediately that there can be no refuge for the theorist from the disconfirming practical realities of daily human life in the tidy, comfortable back-rooms of ethical theory, since his own theorizing is as much a part of those practical realities as the moral speeches and behaviors, the implications of which he is trying to escape.

Nevertheless, philosophers have continued to insist on drawing this distinction between ethical theorizing and moral practice — Mary Mothersill did it in a particularly interesting way decades ago, in her essay “Moral Knowledge” (1959) — and it is worth taking the distinction at face value, if only as an exercise, because we will discover that our skeptic is still able to maintain his doubts about moral properties. Moreover, in seeing why, we will be afforded an important opportunity to discuss the ways in which moral skepticism differs from general skepticism.

___

The moral skepticism that I am describing proceeds on the basis of two points: First, as just discussed, we have tangible, realistic reasons to doubt our own and others’ sincerity when we profess moral motives, adopt moral postures, and engage in moral performances, and this includes when we engage in moral theorizing; Second — and the point to which I turn now — these professions, postures, and performances are the only reasons for thinking that there are any moral properties in the first place. Put another way, the belief that there are moral properties is exclusively the product of our collective behaviors and testimonies, which are convoluted, conflicted, and ultimately unreliable, for all the reasons we have discussed.

This second point is obvious in the kind of way that often causes things to escape people’s notice, so consider: Moral properties are not discovered via the sensory modalities or the affective sensibility — certainly, I can feel disgusted or amorous, find something appealing, moving, indifferent, etc., but these natural affective states will arise irrespective of whether we ever adopt moral frameworks or ways of speaking — and there has yet to be any convincing account of morals in which moral truths are deduced in the manner that we deduce truths in mathematics or logic. The only reason that we believe that there are moral truths and that moral properties are real is because people talk and act as if there are. As a result, our ability to believe in their — and our own — sincerity is crucial to the case for there being moral properties, and this ability is precisely what has been called into question, on grounds that derive both from our common, daily experience and from modern psychology.

When we consider general skepticism, by which I mean doubts about the existence of the external world and the various iterations of those doubts, we find something entirely different. Unlike the specific, substantial suspicions that lead to moral skepticism, the sources from which general skepticism derives are entirely abstract. The general skeptic notes that both our senses and our powers of reasoning could be mistaken and are, therefore, a potential source of error, and that every justificatory inference, in taking something as a premise, begs the question as to why we should believe that premise is true. Any belief that we might justify, then, can be doubted, not just because the grounds on which the belief is based might be mistaken, but because the logic by which we justify one thing, A, by appealing to another thing, B, has a structural regress built into it.

General skepticism is general, then, because of the abstractness, generality, and ubiquity of these sources of doubt, but these qualities also render it incapable of sustaining any real grip on our consciousness. As a result, general skepticism is suitable for little more than epistemological calisthenics – Thomas Reid famously pointed out that while there are many generally skeptical hypotheses, there are no actual general skeptics – since in the face of all the vivid experiences of everyday life, no one will ever believe that the world doesn’t exist on the basis of nothing more than a logically possible story according to which one is a solitary consciousness living in a perpetual dream. And it is important to remember that with respect to general skepticism, these sorts of contrivances of the imagination are the sole reason for the relevant skeptical doubts. I may, on a particular occasion, have tangible reasons to believe that my eyes or other senses are tricking me — say that I am intoxicated or have had my eyes dilated after a visit to the optometrist — but such cases raise no general suspicions about the reliability of the modalities by which we acquire beliefs about the external world. The doubts that characterize general skepticism are exclusively the products of abstract speculation into various logical possibilities and recognition of the inherently question-begging structure of the logic of inference.

It is for this reason that the best solution to general skepticism has always been the philosophy of common sense, the proponents of which understand that the search for justification follows — because it presumes — belief and action, and does not precede them. This is why, as Hume tells us in the Enquiries, we must be people before we can be philosophers, a stance that precludes commitment to any sort of generally skeptical position, because a condition of being people is that we have the kind of trust in our faculties and the basic things that they tell us that is required in order for it to be possible to believe and to act. As Wittgenstein explained in On Certainty, in his own version of a common-sense philosophical response to general skepticism, our beliefs in these basic things are not themselves tested in any way but, rather, comprise the frame of reference within which testing and, thus, believing and acting take place.

In defending the reality of moral properties from skepticism, some would like to claim that there is a moral common sense that plays a similar role with respect to our moral beliefs and behaviors to that which our common sense about the external world plays with respect to our beliefs and behaviors more generally. Mothersill would have us believe, for example, that “the robust practical wisdom which restores us to the world of backgammon and sociability is as impatient with moral skepticism as with any other philosophical doubts”; that “the mood in which we know for sure that chairs and tables are real and that the sun will rise tomorrow is the mood in which we know for sure that cruelty and deceit are wrong and that we must fulfill our obligations”; and that “one can’t claim the advantages of common sense when it comes to scientific knowledge and at the same time claim the advantages of a superior insight when it comes to moral knowledge.”

Alas, these points are wrong in every detail, for in the case of moral skepticism there simply is no analogue to the story I have just told about general skepticism. As we’ve seen, moral skepticism is not the result of speculating about abstract possibilities or recognizing structural gaps in the logic of inference, but is the product of tangible, realistic, psychologically grounded suspicions regarding people’s professed motives when they speak and act. It is not the sort of skepticism, then, for which a philosophy of common sense is suitable, since its main point is that our basic moral beliefs — our moral common sense, if you like — cannot be trusted. Even worse, we have every reason to be dubious of the very practices for which these beliefs are supposed to provide the frame.

One of the consequences of the psychological story we have been telling is that our moral practices cannot be taken at face value, as phenomena to be explained straightforwardly by way of a moral theory. The phenomena themselves – the moral speeches, postures, and performances – must first be decoded, before they can be explained, which means that we must have a moral hermeneutics before we can even think about having a moral theory. And it is only so much the worse for the prospects of moral theorizing, when our hermeneutics tell us — as the Nietzschean, Freudian, and Marxian hermeneutics all do — that our moral practices are essentially rhetorical and are designed to misrepresent our motives, both to others and to ourselves.

Mothersill’s examples, then, obscure, rather than reveal, the true status of moral knowledge. Of course there is no trouble in knowing trivial analytic truths like “cruelty is wrong” and “deceit is wrong,” wrongness being semantically contained within the concepts of cruelty and deceit, as Mothersill is using these terms. The trouble is with knowing what makes something — or anything — count as cruel or deceitful. My point has been that our only reason for thinking that something is cruel or deceitful is because people (including ourselves) call it cruel or deceitful and act as if it is. So, there is no epistemic resemblance whatsoever between the claim that someone is sitting on a chair and the claim that some action is cruel, deceitful, etc. Barring some specific evidence to the contrary, I have no reason to think that either the chair-sitter or my eyes are telling anything other than the truth, but, as we have seen, there are any number of good, concrete reasons for thinking that the person who says that doing such and such is cruel, deceitful, etc., is misrepresenting both himself and the relevant facts.

In the case of the belief in the existence of the external world, the principle of induction, the existence of other minds, etc., the beliefs in question enjoy an evidence-base that remains essentially untainted, save for a small number of highly abstract, purely theoretical doubts, and the result is that only a hypothetical skepticism, suitable for epistemological theorizing, is sustainable. By contrast, with respect to our belief in moral properties, the only evidence to which anyone can appeal is decidedly tainted, which means that a substantive, real skepticism regarding the existence of moral properties is warranted. Hence, moral skepticism’s “special standing.” 

Intuition and Morals

I want to say a few things about morality and intuition and the relationship between the two. One is that some variety of what is commonly called “moral intuitionism” must be true, if there are such things as moral obligations and duties, however conceived. If there are – and if theorizing about them is possible – then our moral intuitions – the feelings of obligation and duty that we experience over the course of our daily lives – ultimately provide the measure against which the accounts we give must hold up. In this sense, they play a role, relative to our moral theories, that is analogous to that played by observations, in the context of scientific theorizing. Just as a scientific theory – say, a theory of motion – must stand or fall on how well it accounts for what is observed, so a moral theory must stand or fall on how well it accounts for our feelings of obligation and duty.

A few points, before continuing on this line of thought:

[1] It is irrelevant to the role I am assigning moral intuitions that we may be wrong about a felt obligation or duty, in the sense that it might turn out that we are not obligated to do something we previously felt obligated to do. Analogously, it is irrelevant to the role played by observations in science that we sometimes misperceive things. That an obligation is felt simply means that it is a prima facie duty (to employ a Rossian phrase), but whether it is an actual duty will depend on a full consideration of the circumstances and whether or not other prima facie duties arise that may be overriding, a process that is, of course, fallible.

[2] There need be no “faculty” of intuition and one need assign no enigmatic qualities to intuitions to speak of them, in the manner that I am. There is nothing mysterious about feeling obligated to do or not to do such and such, and feeling that way requires no mental apparatus above and beyond that which we already recognize.

[3] What is distinctive about moral intuitions is that they are unmediated by any process of theorizing or inference. As H.A. Prichard described it, “[t]he sense of obligation to do, or of the rightness of, an action of a particular kind is absolutely underivative or immediate.” This is not to deny that some process of non-moral thinking or acquisition of non-moral knowledge may engender a sense of obligation in a person – learning of the hyper-incarceration of black people in the United States, for example, may arouse a feeling of obligation on my part to engage in certain sorts of political activity – but simply to say that the sense of obligation that arises is not the result of inferring it from the knowledge acquired, as traditional moral theories suggest.  

[4] That moral intuitions provide the ultimate measure against which the adequacy of a moral theory is determined is but a specific instance of a more general truth, namely that all theoretical inquiry ultimately is arbitrated against the backdrop of intuition, common sense, and ordinary language and experience. Even logic looks to intuitions for its basic conditions of adequacy, as there can be no proof for the logical axioms; no justification beyond their intuitive plausibility. As Stanley Rosen once described the matter in Metaphysics in Ordinary Language (1999):

My thesis is not simply that there is an ordinary language, reflective of the common strata of human nature … I also claim that this ordinary language is retained as the basis…of all technical dialects.  It is this basis that serves as the fundamental paradigm for the plausibility or implausibility of theoretical discourse and in particular of philosophical doctrines.  It is not satisfactory to evaluate philosophical doctrines on purely technical or formal grounds because these grounds cannot establish their own validity or authority.

Philosophy is extraordinary speech, but extraordinary speech derives its first and most important level of significance from ordinary experience or everyday life.

Though purely theoretical concerns may cause us to reject a moral theory, it is much more commonly the result of a clash between the theory and our pre-theoretical moral intuitions, and it is never the case that a moral theory passes muster, solely on the basis of theoretical considerations. 

We reject Kant’s brand of deontology, for example, because his view that moral duties are indefeasible and outcomes are morally irrelevant yields perverse results, such as requiring us to tell the truth, even if doing so yields no benefit and harms an innocent person. (That such results are perverse is an intuition.) We reject consequentialist moral theories like Utilitarianism, in part, because they deny the moral significance of intentions, which routinely causes them to yield counterintuitive results, such as treating cynical, selfish actions as morally equivalent to selfless, altruistic ones, so long as they produce the same outcome.

To the extent that we are inclined to accept theories like these – either in whole or in part – it is due to the fact that they provide a satisfactory reconstruction of our moral intuitions. It is because Utilitarianism can make sense of the intuition that moral rightness and wrongness admit of degrees, for example, that it is preferable to Kantian deontology, which cannot. And it is because Ross’s moral philosophy provides a framework through which to understand our intuition that there is value in our relationships with others, beyond that of benefactor to beneficiary, that his theory is preferable to Utilitarianism.

It can only be with a mixture of puzzlement and amusement, then, that we read the following from Peter Singer, in an essay devoted to denying any role for moral intuitions in ethical deliberation. (“Should we Trust our Moral Intuitions?” (2007)) He has just finished describing a study by Joshua Greene, the purpose of which is to explain why we react in disparate ways to Trolley-style cases – for example, why we are willing to pull a switch to divert a train, so that it will kill one person, rather than five, but are unwilling to shove a man in front of a train, to prevent five from dying – when he says the following:

Greene’s work helps us understand where our moral intuitions come from. But the fact that our moral intuitions are universal and part of our human nature does not mean that they are right. On the contrary, these findings should make us more skeptical about relying on our intuitions.

There is, after all, no ethical significance in the fact that one method of harming others has existed for most of our evolutionary history, and the other is relatively new. Blowing up people with bombs is no better than clubbing them to death. And surely the death of one person is a lesser tragedy than the death of five, no matter how that death is brought about. So we should think for ourselves, not just listen to our intuitions.

Notice how much of the work ‘surely’ is doing here. Singer’s blanket assertion that the death of one person is a lesser tragedy than the death of five could be a naked appeal to our intuitions – it certainly looks like one – but in his case, it is almost certainly a product of his Utilitarianism. The trouble, of course, is that the adequacy of Singer’s Utilitarianism is itself a matter of how good a rational reconstruction the theory offers of our already existing moral intuitions. Far from “not listening” to our intuitions, then, Singer’s entire statement is an exercise in appealing to them – whether directly or indirectly.

It is quite understandable that Singer would like to banish intuitions from respectable ethical thinking and discourse, as it would seem to be his mission to convince us of any number of very counterintuitive things – that infanticide is morally acceptable and even obligatory, in some cases; that every family barbecue or fishing trip is a moral catastrophe; that sex with dogs, pigs, or cows is perfectly ethical, so long as it consists of “mutually satisfying activities” (this last gem comes from his 2001 essay, “Heavy Petting”) – as part of what looks like an effort to effect a kind of wholesale moral revisionism. Singer’s brand of Utilitarianism is what provides the sanction for radical judgments like these, but it is precisely such results that most of us will deem perverse and which will thereby render Utilitarianism unacceptable to us as a moral theory.

Certainly, moral attitudes and intuitions change over time, given that we change as people, so we should expect that what seems plausible to us, moral theory-wise, may also change over time. But our theories do not and cannot change our moral attitudes and intuitions on their own. For one thing, such changes are not simply a matter of our switching beliefs, but of our changing as people, and our engagement with moral theories is simply at too intellectual – and thus, too superficial – a level for them to have such an effect. For another, as already indicated, moral theories cannot confirm themselves and must appeal to our moral attitudes and intuitions for their justification.

Chapter Two of The Right and the Good, by W.D. Ross (1930)

People who are familiar with my work know that I am an admirer of W.D. Ross’s The Right and the Good and think it is one of the best works of moral philosophy of the last century (the other contenders would be Elizabeth Anscombe’s essay “Modern Moral Philosophy,” Bernard Williams’ Ethics and the Limits of Philosophy, and Alasdair MacIntyre’s After Virtue), so it will come as no surprise that I enjoyed teaching this material, when I was still a working professor.

Ross’s take on what moral theorizing consists of reveals something that both the Kantian and the Utilitarian have in common: they both see moral theorizing as fundamentally prescriptive in nature. That is, they take moral theory as being prior to and determining moral practice: one first identifies the general character of moral obligatoriness and prohibitedness, by way of philosophical theorizing, and then, using it as a criterion, identifies what one’s duties and obligations are, irrespective of what one might have thought prior to having theorized. Felt obligated to keep a promise to a friend? Well, you were wrong, the Utilitarian says, because your duty is to maximize happiness, and there are things you could do, other than keeping your promise, that would accomplish this more effectively. Thought you should lie, when the SS officer asked if you were hiding Jews in your attic? That would be a mistake, the Kantian says, because you are obligated to act only on principles that you can rationally universalize, and one cannot rationally will a principle of universal dishonesty.

Ross takes the opposite approach. For him, that we feel morally obligated in various ways is a fundamental, basic fact about us that arises from our activities and relationships with other people. Someone does me a kindness, and I feel obligated to demonstrate my gratitude in some way. I promise someone that I will do something, and I feel obligated to follow through on it. I see someone suffering, and I feel obligated to help. These are basic phenomena of normal, day-to-day moral life, and not only are they not the products of ethical theories, they are what motivate and provide the grounds for them. Were feelings of obligation not fundamental elements of human interaction and social life, it would never occur to us to engage in ethical theorizing in the first place, and once we do, those feelings become the subject-matter of that theorizing, the purpose of which is to help us better understand them. Our feelings of obligation are to moral theorizing as our sensory experiences and observations are to scientific theorizing. In each case there is a set of phenomena in which we take a considerable interest, and the aim of the relevant theorizing is to give some rational account of it; to make sense of it in a distinctly philosophical way. As Ross put it, “We have no more direct way of access to the facts about rightness and goodness and about what things are right or good, than by thinking about them; the moral convictions of thoughtful and well-educated people are the data of ethics just as sense-perceptions are the data of a natural science.”

Some, like Peter Singer, who are committed to the prescriptive model of ethical theorizing, maintain that this picture misses the extent to which theory may – and should – be used to correct our pre-theoretical feelings of obligation, which, after all, may be wrong – a point that is supposed to demonstrate that the analogy with sense experience and science is a bad one. But Ross thinks that this represents a misunderstanding. For one thing, our observations may also be mistaken, but that is not something that can be determined by way of a theory, only by further observations, and it entails nothing different about the relationship between observations and theories in science. Correspondingly, our feelings of obligation may be overridden, but only by other feelings of obligation that may arise upon further consideration of the relevant situation. “Just as some of the latter [sense experiences] have to be rejected as illusory, so have some of the former [feelings of obligation],” Ross explains. “But as the latter are rejected only when they are in conflict with other more accurate sense-perceptions, the former are rejected only when they are in conflict with other convictions which stand better the test of reflection.” To think otherwise – to think that a theory could override the feeling that I am obligated in one way or another – would be akin to thinking that an aesthetic theory could override my experience of something as beautiful, because according to the “correct” theory of beauty, it shouldn’t be experienced as such, which is absurd.

Ross describes the feelings of obligation that arise from our activities and relationships with other people as prima facie duties; as duties on their face and presumed obligatory until demonstrated otherwise. There may be as many of these as there are dimensions of our relationships and activities, and Ross outlines just a small number of them in The Right and the Good:

1. Some duties rest on previous acts of my own. These duties seem to include two kinds, (a) those resting on a promise or what may fairly be called an implicit promise, such as the implicit undertaking not to tell lies which seems to be implied in the act of entering into conversation (at any rate by civilized men), or of writing books that purport to be history and not fiction. These may be called the duties of fidelity, (b) Those resting on a previous wrongful act. These may be called the duties of reparation. 2. Some rest on previous acts of other men, i.e., services done by them to me. These may be loosely described as the duties of gratitude. 3. Some rest on the fact or possibility of a distribution of pleasure or happiness (or of the means thereto) which is not in accordance with the merit of the persons concerned; in such cases there arises a duty to upset or prevent such a distribution. These are the duties of justice. 4. Some rest on the mere fact that there are other beings in the world whose condition we can make better in respect of virtue, or of intelligence, or of pleasure. These are the duties of beneficence. 5. Some rest on the fact that we can improve our own condition in respect of virtue or of intelligence. These are the duties of self-improvement.

But what determines our actual duty, on a given occasion? For it seems quite clear that in a particular situation, more than one prima facie duty may be in play, and they may even conflict. A promise I’ve made to meet you for lunch is a duty of fidelity and requires me to honor it, but if I pass a burning car wreck on the road, with a bleeding person lying next to it, a duty of beneficence also arises, namely that of helping to save the person’s life. In this case, I cannot act on both, so beyond the question of what my prima facie duties are, the further question of what is – and what determines – my actual duty now becomes pressing.

Ross’s answer here seems to me exactly the right one, but it inevitably frustrates those who are committed to ethical theory in its traditional role. In determining our actual duty on any given occasion, we must consider all of our prima facie duties, as well as the specific details of the current situation, and then make a judgment as to which prima facie duty is the most significant one, given that situation. Once we have done that, the other competing duties are not so much overridden as temporarily defeated, the evidence for which is that they continue to exert influence over our actions, both present and future. If, in the case I’ve described, I’ve deemed my duty to help save the person’s life more significant than my duty to keep my promise to meet you for lunch, I may apologize to you for standing you up, suggest that we meet for lunch another time, and even offer to pay for your lunch, as recompense for breaking my promise to you, none of which I would likely do, if the duty to keep my promise truly had been overridden.

To the extent that this picture is grounded in perception and judgment, it is characterized by fallibility. I could get things wrong. It also shows that what I ought to do – my actual duty – cannot be determined in advance, by way of any general principle or line of reasoning that a theory might provide me with. Each determination of an actual duty is the result of a carefully considered judgment, based on a number of feelings of obligation and both a perception and assessment of the present situation. Those who are wedded to ethical theorizing in its traditional mode take this as a reason for rejecting Ross’s approach, but I would maintain that it is a strength, rather than a weakness. After all, ethical life looks exactly as you’d think it would if Ross were correct. We experience profound, wrenching moral dilemmas, in which duties conflict; we have to make judgments in order to get through those dilemmas; and we can and often do get things wrong when we make those judgments. Meanwhile, ethical life looks nothing at all like it should if Kant or the Utilitarians are correct. On their view, there should be no moral dilemmas or need for judgment. Moral decision-making should proceed as per a theoretical calculus, with the only difficulty being essentially akratic; i.e., an inability to make ourselves do what we know is the right thing.

I would maintain that if anyone reflects earnestly on his or her own experience of moral life, it will quickly become evident that it is nothing like this. 

Thinking about the “Impermissible”

If something is permitted, it means that you are allowed to do it. If something is not permitted, it means that you are not. If swimming in the community pool is permitted only on the weekends, it means that if you show up on a weekend, you will be allowed into the pool. If you show up on a weekday, it will be locked and you won’t. If you break in and swim anyway and are discovered, you will be forcibly removed by staff or by the police.

All of this is perfectly straightforward and easy to comprehend. 

Philosophers, however, concern themselves with what is permissible and impermissible, neither of which is straightforward or easy to comprehend.

To say that something is impermissible is to say that it is rightfully sanctioned or punished and to say that something is permissible is to say that it is not. If eating a shrimp salad sandwich is impermissible, then one is rightfully sanctioned for doing it. If it is permissible, then one is not.

What is the significance of claiming something is or is not “rightfully sanctioned”?

Imagine the following: Wanting to swim on a Tuesday, George breaks into the community pool and is arrested. In court, he maintains that swimming on a Tuesday is permissible; that he was wrongfully sanctioned. The judge counters that it is impermissible – that he was rightfully sanctioned – because there is a law prohibiting use of the pool on a weekday.

George, however, is tenacious and armed with a little philosophy and tells the judge that the law is wrong. The judge replies that the law was properly enacted – i.e. by way of the procedures and mechanisms by which laws come into being – to which George counters that regardless of the procedures and mechanisms by which it was enacted, there is no good reason to prohibit swimming on a weekday and “one should be able to swim on a Tuesday if it doesn’t hurt anyone.” The judge observes that the law was passed by local legislators who were voted into office by a majority of the population and that this is the basis of the law’s legitimacy and is why swimming on Tuesday is rightfully sanctioned. George points out that majorities can be wrong and repeats that “there is no good reason” for the law. The judge says that George can think whatever he likes but cannot break the law and sentences him to ten days in jail.

When George says, “there is no good reason,” what does he mean? That there are none that he finds persuasive? Undoubtedly, but so what? Others thought differently and voted accordingly. That others shouldn’t find them persuasive? Again, undoubtedly, but so what? George’s reason for swimming on Tuesday – that whatever does not hurt others should be permitted – is controversial, so others could say the same thing to him. 

Now imagine that rather than a little philosophy, George has quite a lot of it; enough, indeed, for him to go metaethical. Outraged over the court’s decision – and being locked up in jail – he writes a letter to the judge:

Whether something is in fact permissible or impermissible is independent of anyone’s opinion of it, as is whether or not something constitutes a good or bad reason. There is, as a matter of independent fact, no good reason for prohibiting swimming on a weekday and consequently, my swimming in the pool last Tuesday was permissible, regardless of the misguided laws that misguided people have enacted. Thus, I was wrongfully sanctioned.

Now, it is hard to imagine any judge changing his or her ruling on the basis of something like this, and this judge is no exception, so while George remains convinced that he is right, he also remains in jail to serve out his ten days.

After getting out of jail, George, who also happens to be a friend, agrees to meet you at a local park for lunch. You arrive late and see that he has already started on a kale and bean sprout salad. You join him and unpack your lunch, which consists of a shrimp salad sandwich, potato chips, and a pickle. 

George, who describes himself as an “ethical vegan,” tells you that you shouldn’t eat your sandwich, but that the pickle and chips are ok. You disagree and prepare to tuck in, whereupon he tells you that eating shrimp salad sandwiches – or anything involving meat, fish or dairy – is impermissible.

You point out that there is no law prohibiting the eating of shrimp salad sandwiches. George replies that there should be such a law but that regardless, eating your sandwich is impermissible, because it violates the shrimps’ interests. You remark that you don’t care about shrimps’ interests and observe that more than 95% of the population agrees with you and that unless this changes radically, there never will be a law banning the eating of shrimps, to which George replies that majorities are often wrong and that your lunch is impermissible, nonetheless. You ask him what sanction you can expect for eating the sandwich, to which he replies that there is none, but that if there were, it would be rightful. Puzzled as to why you should care about any of this and annoyed at having your lunch spoiled by George’s badgering, you leave to eat with one of your non-vegan friends, whom you spot at a bench nearby.

The question of sanction is worth exploring for a moment, so let’s slightly amend our story. When you ask George what sanction you can expect, he tells you that while there is no official sanction – no jail time or anything like that – he will shun you for eating the sandwich. Now, though George is a friend, he isn’t a close one, and as he’s become quite annoying of late, you are relieved at the prospect of never having to listen to his hectoring again and tell him so. He departs in a huff, and you are left to finish your lunch in peace.

That one has been sanctioned would seem to be a matter of fact, but what this fact consists of requires some examination. Clearly, George was sanctioned, insofar as he suffered ten days in jail, but have you been sanctioned, simply because George shunned you? This seems less clear, insofar as you are happy no longer to have to endure any association with him, and a sanction that inflicts neither pain nor cost doesn’t seem like much of one. Does this mean that if George didn’t care about his incarceration, he would not have been sanctioned? Perhaps, but given that there are social, financial, and other costs associated with having been convicted of a criminal offense and locked up (including physical injury), it seems likely that George’s conviction and incarceration will hurt or cost him in some way or other. Let’s say, then, that the philosophical concept of sanction has an ineliminable element of subjectivity to it: if something doesn’t hurt or cost in some way that one cares about, though it may be called a “sanction,” effectively it isn’t one. [1] And this is why when sanctions need to matter – as they do in the law – they typically consist of things that hurt or exact a cost that the overwhelming majority of people do care about: loss of freedom; uncomfortable and even dangerous incarceration; financial loss; and destruction of reputation.

___

If one has no capacity to sanction someone or to induce others to do so, permitting or not permitting something is obviously out of the question. What good is it for George to tell you that you are not permitted to eat your sandwich, when he has no means of stopping or sanctioning you? What I am wondering about here, however, is what purpose is served in telling you instead that eating the sandwich is impermissible; that were you sanctioned, it would be rightful. You’re eating the sandwich. You haven’t been sanctioned. You aren’t going to be sanctioned. So, why should you care about George’s modal proposition?

Perhaps the claim that you would be rightfully sanctioned is supposed to awaken your conscience; to get you to reconsider and to cease and desist in your shrimp eating. If so, it is a strange expectation. George has already stated his reasons for not eating shrimps – doing so violates their interests – and you were unmoved. What good is it to insist that if you’d been sanctioned, it would have been rightful? Thinking that a sanction is rightful depends on accepting the reasons for it, and in this case, you don’t. And notice that if you did, you wouldn’t be eating the sandwich in the first place, and the claim that doing so would be rightfully punished would be unnecessary. So, it’s unclear what the point of permissibility claims is, even where the parties agree on whatever is at issue.

George is inordinately frustrated that people find his reasons unpersuasive, and I think this is true of many academic philosophers. I say ‘inordinately’ for two reasons: First, because ordinary people seem to cope more easily with this common variety of disappointment than those whom we credential and pay to engage in (what is supposed to be) deep thinking; and second, because philosophers’ reasons are almost always controversial – within the framework of academic philosophy as well as outside of it – the expectation of agreement is unreasonable to start with. That shrimps have interests, or that those interests should matter to us or override all other considerations, is controversial. That utility or right intention or “principle” or the “categorical imperative” is morally overriding is equally controversial. So, not only shouldn’t we get worked up when our appeals to such reasons fail to move others, we should expect it and react gracefully when it happens.

Assertions of impermissibility on the basis of reasons that are controversial and with regard to things for which there is no sanction, either presently or forthcoming, are entirely rhetorical; a combination of foot-stamping and clumsy efforts at manipulation. ‘Impermissible’ sounds close enough to ‘not permitted’ to obscure the fact that the philosopher is powerless to do anything about whatever it is he or she is inveighing against. Leaving out ‘as far as I can see’ and ‘to me’ when saying “there are no good reasons” or “the reasons are compelling” for x, y, or z lends an air of objectivity, determinacy, and normativity where there is none. And metaethical moves like George’s appeal to moral Realism (all that stuff about “matters of independent fact” and the like) give the impression that there is some force out in the ether that will compel you, when there isn’t. Indeed, of all the moves George makes, this last one is the stupidest; a classic example of the philosophical conceit that there is such a thing as purely discursive force. If there is an “independent fact of the matter” with regard to permissibility, but no one accepts it, it has no capacity to compel action, and if enough people agree on something and are willing to do something about it, they can compel action, even if the “independent fact of the matter” contradicts it.

Notes

[1] Less “deep” than philosophy, the law considers a person sanctioned, so long as some such penalty has been imposed by way of the proper mechanisms and procedures, whether he or she cares about it or not. 

Standards

Both religion and philosophy endeavor to provide us with objective standards for our moral and aesthetic and other judgments, as well as our actions. Religion does it by stipulating a supreme authority in the person of God, while philosophy does it by appeal to reason and rationality, the authority of which is expressed by justifications or “warrants.” Some examples:

[A] A Christian tells you that he condemns same-sex romantic relationships. You say you’re fine with them. He says you are objectively wrong. On asking why, he tells you that God forbids same-sex romantic relationships and that God is the ultimate standard of right and wrong. He goes on to provide the appropriate scriptural and creedal evidence in support of his judgment.

[B] You are about to enjoy your shrimp scampi, and a philosopher tells you that it is wrong to eat it and that you should become a vegan. You tell her that you disagree, but as you prepare once again to tuck in, she tells you that you are objectively wrong. On asking why, she tells you that our ultimate obligation is to maximize happiness and minimize suffering, and your shrimp scampi is in violation of this. Unconvinced, you ask why you should accept this understanding of our ultimate obligation, and the philosopher tells you that rationality compels it and goes on to rehearse the reasoning step by step. (This example can be reconfigured to include any moral philosophy one wishes.)

Now, both of these examples are explicitly about values, but this is incidental. After all, we could add:

[C] You’ve expressed sympathy with metaphysical anti-realism. A philosopher tells you that this is a mistake and that you should be a metaphysical realist instead. When you ask why, he explains that truth entails realism, and since you think your anti-realism is true, you are really a realist. Unconvinced, you ask why you should accept this alleged relationship between truth and realism – or the account of truth presupposed in it – and the philosopher tells you that rationality compels it – that “it’s impossible to formulate any version of Anti-Realism that doesn’t immediately collapse into Realism” or some such thing – and goes on to take you through his reasoning.

In all of these cases something is invoked (God in one instance, rationality and reasons in another) that is supposed to provide an objective standard; something that commands you, rather than something you command. And the language is indicative of this ambition: ‘warranted’; ‘permissible’; ‘legitimate’; ‘justified’; all such talk connotes a scenario in which a person is authorized by some external authority to think or do something.

This all strikes me as a mistake. 

The issue is not that God’s existence and attributes are unverifiable and can only be stipulated (though this is a problem). Nor is it that different, equally qualified people may or may not find the same reasoning and appeals to rationality compelling, with no way to adjudicate amongst them other than by further reasoning and appeals to rationality, though this also is a problem. It also is not a matter of my examples involving disagreeing interlocutors trying to persuade one another, as I could easily develop examples that involve just one person considering competing positions that would have identical implications and consequences.

I’ve already written about the force that discursive imperatives are alleged to have, and what I am going to say here is meant to supplement that analysis, not amend it. The short of that essay’s argument was that the only imperatives that have any real force are the non-discursive ones: the imperative to go to the bathroom; the imperative to breathe; the imperative to digest food one has swallowed; and the like. But spoken and written imperatives? They only have force if the person to whom they are directed accepts them, and this remains true even if these spoken or written imperatives are delivered under threat. One must care about the thing being threatened – or not care more about something that overrides it – for the threat to do the threatener any good in compelling the threatened.

This does not change when we talk about standards that serve as a basis for imperatives. One may like to think that God or Rationality represents a standard on the basis of which people are compelled to think or do something or other, but unless we accept the standard, it does no such thing. The thing to notice, with regard to our current subject, is that if we do accept it, that just means that it satisfies our own standards.

A person cannot be compelled merely by way of written or uttered imperatives. If, after listening to the metaphysical realist, I agree that he is right and I should be a realist too, it’s because the standard he’s appealed to – in this case, “rationality” and whatever reasons he’s followed up with – is one that I accept, which means it satisfies my own standards. If, after listening to the Christian, I come to agree with him, it is because the standard he has presented – God and his commandments – is one that I embrace, again, according to my own standard.  

I can make – in the causal sense – your body move by shoving you, but I cannot make – in the same sense – you think something or act in a certain way by nothing more than an utterance or scribble. That can only occur once you’ve interpreted and accepted the utterance or scribble, so some of our confusions in this area are mixed up with our more general tendency to conflate actions and events.

Believing and acting are among the chief modalities of agents, and the notion that one could be commanded or compelled by the mere contemplation of an utterance or bit of writing without having first accepted it (on whatever basis one accepts and rejects things) is a strange one. Indeed, our common practices in this regard suggest that we already know this, which is why we go to such lengths to come up with sanctions and penalties and rewards and reasons that people will care enough about or find suitably convincing (again, in light of whatever standards they operate under) that they will think or do the things we want them to.

That a standard cannot compel without first having been accepted according to the standards of the compelled renders standards, both effectively and per se, fundamentally subjective and (informally) paradoxical, because that which is accepted cannot rightly be characterized as compelled. And I’m not sure the concept of a standard can be sustained without some element of compulsion in it. Standards stripped of compulsion belong to that class of things we call “recommendations.”

Value and Objectivity

Exchanges with Robert Gressis (of Cal State Northridge) and others have led me to think quite a lot about obligation and objectivity. Up until now, the discussion has focused entirely on morality, but I want to shift gears to aesthetics, for two reasons. First, regarding the two types of value – aesthetic and moral – people are more inclined to be objectivists about the second than about the first. Second, the reasons why objectivity adds nothing of significance to our understanding of aesthetic values strike me as being transferable to the case of ethical values.

But first, I want to say something about conceiving of something as being objective versus being a Realist about it. If one is a Realist about X, then one believes X is objective, but one might reasonably think that someone could believe X is objective, and yet not be a Realist about it. If by “Realist/Realism,” one means something along the lines of “mind independent” or “independent of one’s conceptual schemes or frames of reference,” then one might observe that the rules of chess or baseball or tennis are objective, while not being “real” in the philosophical sense. Indeed, many aspects of social reality will be objective in this way, without being philosophically real. What something’s being objective comes down to, then, is its not being variable by way of personal inclination, perception, or opinion.

Massimo Pigliucci and I have discussed this question of the real versus the objective over the course of multiple dialogues, and his inclination is to take values and obligations as being objective, though not real. On those occasions, I expressed uncertainty as to what I thought – of course, I haven’t been a moral realist since I was in my early twenties, but I was open to the idea of moral objectivism, partly out of my love for Aristotle – but am now pretty committed not just to anti-realism with respect to values but to subjectivism as well. And my aim here, in part, is to show why, and I deliberately say “show” rather than “explain,” for reasons that I hope will be evident.

My early work in aesthetics and the philosophy of art was devoted to the question of artistic value and whether there was a way one could construe it as objective. The dominant view in aesthetics, greatly influenced by the work of Hume and Kant, was that artistic values are and could only be subjective. The question that remained, then – to which both Hume’s and Kant’s work in aesthetics is largely devoted – is whether we can retain or recover any sense of the apparent normativity of evaluative judgments pertaining to the arts, despite that subjectivity. I was convinced that one could not, and as I thought at the time that at least some critical evaluations were normative, I set myself the task of finding some sense in which artistic value was objective.

What I settled upon were judgments pertaining to the fulfillment (or lack thereof) of reasonably expected artistic purposes or functions. That people find The Producers (1967) funny entails that it is an artistic success, given that the purpose of comedies is to offer audiences humorous experiences, and it would be quite strange for someone to suggest that a comedy that audiences found hilarious was not a good comedy. Of course, it may not succeed with respect to other artistic aims, but that does not alter the point that qua comedy, The Producers is objectively good.

What I didn’t realize at the time is that artistic value in this sense doesn’t matter. For one thing, whether something is funny or not remains entirely subjective (in my sense of “variable by way of personal inclination, perception, or opinion”), so the objective fact “a funny comedy is good” is itself dependent upon the subjective fact “this comedy is funny.” For another, even if a comedy is funny, in that large numbers of people find it so, and is thus objectively good, what does this mean if I nonetheless dislike it? The overwhelming majority of viewers over the years have found The Producers hilarious, myself included, but suppose someone didn’t. Imagine that a person finds no humor in it whatsoever. In that case, what difference would its objective goodness make, as far as this person is concerned? And would there be any point in telling him that he “ought” to like it, because it is objectively good? If unanimity on the subject mattered enough societally, people might decide to silence him, shun him, or prevent him from participating in discussions of the film, but the fact that The Producers’ goodness is objective would have no significance either way. On its own, knowing that The Producers is objectively good wouldn’t make him find it funny, and in the case that a sufficient number of people deemed it important enough to take some kind of action against him, it wouldn’t matter whether The Producers’ goodness was objective or not.

I would maintain that the logic I have been describing with respect to artistic values and value-judgments is equally applicable in the ethical context. Take any moral prohibition, around which there is a sufficiently wide consensus such that the claims “X is wrong” and “You ought not to do X” are credibly deemed objectively true. Now, imagine a person who is not part of this consensus; who simply doesn’t feel or believe in the wrongness and prohibitedness of X. Does the fact that this wrongness and prohibitedness are objective rather than subjective make any difference to how this person feels or what he believes? Would pointing that objectivity out make any difference to what he felt or believed on the matter? And suppose unanimity was of sufficient importance to a sufficient number of us that we collectively decided to remove this person from our society. Would it matter to this decision whether the wrongness or prohibitedness of X was objective or not? I don’t see how it would or could.

Prescription, Reason and Force

Philosophy is largely a normative discipline, which means that philosophers expend a good amount of energy telling you what you ought to believe, say, and do. Just look at the concepts with which philosophy is most preoccupied: ‘truth’ – something you should believe – ‘justification’ – a reason you should accept – ‘good’ – something you should value – ‘right’ – something you should do – ‘justice’ – something you should receive – ‘authority’ – someone you should obey. Even the business of defining terms and concepts, philosophers’ favorite pastime, has a prescriptive mode: provide necessary and sufficient conditions for the application of a term and you’ve also determined all the things to which it does not apply. Define ‘X’ a certain way and you can be certain that some philosopher, somewhere, will tell you that something that doesn’t fulfill the definition shouldn’t be called “X.” (See, for example, the “What is art?” debate, as it unfolded over the 20th century.)

What is the actual force of these sorts of prescriptions? Why should anyone be moved by them?

Prescriptions ascribe imperatives, so it’s useful to distinguish the different kinds of imperatives. There are what I will call the “have-to’s,” by which I mean those things that you will do involuntarily, if you fail to do them voluntarily: urinating; defecating; breathing; that sort of thing. Then there are the “must-do’s,” by which I mean those things that, if you refuse to do them, will result in consequences, the unpleasantness of which is intended to convince you to do them after all: arriving at school before the bell rings; driving within the speed limit; paying your taxes; keeping at least somewhat healthy; taking certain safety precautions. These are all things one must do, though it should be noted that if a person is willing to bear the consequences, must-do’s lack force. Philippa Foot once observed that people think moral imperatives have some sort of intrinsic force, but it would seem obvious that once one gets beyond the “have-to’s,” such force only exists if a person feels him or herself compelled.

Then there are the “should-do’s,” which include the list of things we started with, and here, characterization is much more difficult. There are many moral shoulds, the violation of which will earn a person no effective penalty whatsoever, and the same goes for authorities and one’s refusal to obey them. And when we consider the rest of the list – calling things by their right names, accepting justifications, believing truths, valuing goods – it seems clear that refusing or failing to follow the relevant prescription carries no sanction whatsoever.

___

One thing that would seem to connect all of the should-do’s is that refusing to do them is in some way “contrary to reason” and will earn a person a certain label, which, generically, I will brand as that of being “unreasonable,” and which, depending on the should-do in question, may become more specific: the person who refuses to accept evidence and logical argumentation will earn the label “irrational”; the person who rejects what is right will earn the label “immoral”; etc.

What is the significance of such labels? Suppose that I reject all the scientific evidence and insist that my local fundamentalist pastor is correct and the earth is only six thousand years old, for which I am deemed irrational by the geological community. Or I ignore the counsel of Peter Singer and refuse to give everything I can to starving children in Africa or wherever, for which I am deemed immoral. So what? There isn’t sufficient consensus or passion to coerce me in any way, and unless I want to be a geologist or gain access to the Effective Altruist club, all that’s happened is that I’ve been called something. If this is all the force that such prescriptions have, then they have no force at all and thus fail to be prescriptive in any meaningful sense of the word. After all, what is a prescription lacking force, but an expression of some person’s or people’s desire(s), which may or may not be fulfilled? With respect to logic and evidence, one may want to say that it is not people’s desires that are being frustrated by a person’s refusal to buy in, but rather, reason or truth itself, but any such talk is entirely metaphorical. People can be frustrated, insofar as they possess a nervous system. Abstractions cannot and do not. Moral imperatives, when stripped of divine or other such threat for noncompliance, have nothing more than what G.E.M. Anscombe called “mesmeric force.” “It is as if the word ‘criminal’ were to remain,” she wrote, “when criminal law and criminal courts have been abandoned.”

___

In ethics, this type of question falls under the “why be moral?” line of inquiry, and surprisingly, in surveying the great works of moral philosophy on the subject, one discovers that it is rarely explored, and when it is, it is done poorly. Kant tells us that to be immoral is to cease to be a rational person, but unless someone cares about this – and how many immoral people do? – it fails to provide moral shoulds with any real force. Mill maintains that the ultimate sanction of moral obligation is that we suffer when we do wrong, and it is this potential pain that gives moral shoulds their force, which renders them must-do’s in our normative taxonomy. Undoubtedly this is true of some people – those with strong and sensitive consciences, for example – but equally undoubtedly it is not true of many, so this too fails to provide moral shoulds with any consistent, substantial force. And we must always remember that we are not only talking about morals, but about should-do’s across the conceptual landscape – shoulds pertaining to truth; justification; (non-moral) value; authority – and with respect to these, the accounts of force that we find in ethics, sparse as they already are, will not do. So, even were we to accept the accounts of moral force given by Kant or Mill, we still would be left without an account of the force of all the other should-do’s that we have been talking about.

That people find it worrisome to have no account of the force of an imperative that lacks either external or internal coercive power is evinced by the fact that when people offer prescriptions, they tend to use language whose aim is to imply the stronger forms of imperative, where the relevant force is less in question. How many times have you been told that you “Have to ….” or that something “has to be …,” when what comes after the “have to” clearly, obviously, manifestly does not fall into the first category of imperatives? What I always do in such situations (provided I fail to find the imperative compelling) is simply repeat the phrase “Have to,” with increasing emphasis and decreasing speed, until the person finally relents and switches to “must do” language, with the idea of leaning on the force of social sanction. More often than not, this is a bluff, as there is no consensus that would produce such social sanction, and the best response is simply to inquire what precisely will happen if you refuse, after which one’s interlocutor will tend to retreat to “should do” ways of speaking; i.e., the expression of a wish.

Which is precisely where anyone would start, were we not all aware that a prescription without force is nothing more than the voicing of a desire. Alas, people don’t want to ask nicely and hope for the best, understanding that you may not accede to their demands. They want to make you think that you have no choice.

___

Among Kant’s many insights, one that really stands above the others is that in a modern, secular framework, we should understand morals as a kind of self-legislation (though Anscombe thought little of the idea). Now, I think that Kant made a mistake in simply assuming that rational personhood was somehow compelling – and thus forceful – in itself, but the core idea seems sound and provides the ground for an account of what the force of at least some shoulds consists of.

Prescription at the level of the should-do’s is an invitation to self-governance, broadly understood as the opportunity to regulate one’s own behavior within the bounds of what is reasonable, and as a result, it must be seen as a precious gift, because a should-do can easily turn into a must-do if the matter at hand is one of sufficient social consensus and concern. Put plainly, you can be reasonable on your own or we can make you be reasonable (and yes, what is reasonable or not is determined by the “we,” not by any individual’s ratiocinations or thought experiments, no matter how clever or special one might think one is). And if you are stubborn enough to be willing to bear whatever coercive force we apply and persist in being unreasonable, then we will remove you from our midst permanently in one way or another. The day may even come when we can “reprogram” those who refuse to be reasonable, Clockwork Orange style, in which case the should-do’s, which have turned into must-do’s, will become have-to’s.

The force of should-do’s like these, then, lies in the value one places on one’s own freedom and welfare. There are already a number of things that we have made legal must-do’s and not left to self-regulation – either because people have demonstrated in sufficient numbers that they will not govern themselves or because feelings are so strong on the matter that we are not willing to take the risk – so the question is how much more of our autonomy we’d like to give up, something we should take very seriously every time we are considering behaving in a mule-headed fashion.

But these are a relatively small number of cases among the should-do’s described at the beginning of the essay, which means that the rest still lack force and fail to be prescriptive. And as Anscombe believed with regard to moral oughts generally, I think it would be to all of our benefit to drop all the should-do’s that lack force. As irritating as the unreasonable person is, equally irritating is the person who proclaims what you should think and do with regard to things for which there is insufficient public agreement or concern to provide his or her imperatives with the force we’ve described. But beyond irritation, there is a real, substantial risk to overdoing the “should-do’s.” Just as stupid, petty, and unenforceable laws cause the public to lose respect for law generally, and just as the wild overuse of ‘moral’ and ‘immoral’ and their cognates has eroded respect for morality, the proliferation of weak, self-serving “shoulds,” whether of the moral variety or otherwise, may cause people to lose respect for reasonableness itself. The language of coercion and implicit (and sometimes explicit) insult – which is what the language of prescription involves – when applied with no force behind it, is more likely to cause a person to hunker down; to become more defiant, extreme, and unreasonable.

The language of wishing, however, when combined with politeness and respect for the other, may bring people closer to your position and, if it fails to, will at least not push them farther away.

Ethics and Criteria

When speaking of morals, the majority view among philosophers is that they are criterial. Something – some action or a state of affairs – is right or wrong, obligatory or prohibited, because it satisfies certain criteria that come from a moral theory. Thus, for a Utilitarian, actions are right or wrong, obligatory or prohibited, because they either meet or fail to meet the criterion of promoting the general welfare. And for a Kantian, actions are right or wrong, obligatory or prohibited, because they meet or fail to meet the criterion of rational universalizability. Indeed, the very idea of a moral theory is defined, in part, in terms of identifying the general characteristics of moral rightness and wrongness, obligation and prohibition, and this is tantamount to providing criteria for the application of the words ‘right’, ‘wrong’, ‘obligation’, and ‘prohibition’.

That morals are criterial also seems embedded in the way we speak about them; specifically, in the giving of reasons for our ethical judgments and actions. People ask us why they ought to do something or why we have a certain moral attitude towards a given state of affairs, and in answering, we offer reasons that mention various features of the action or of the state of affairs in question and effectively appeal to criteria.

That this view is so prevalent among professional philosophers is interesting, in part because the criterial view of terms and their application has faced a number of tough challenges over the course of the last century, to the point that it is largely in tatters.

Wittgenstein showed that the substantial and open-ended heterogeneity that one finds in the extensions of common terms like ‘game’ and ‘art’ renders it impossible for them to be applied by way of criteria. Keith Donnellan and Hilary Putnam argued that natural kind terms like ‘mammal’ or ‘lemon’ cannot be criterial, in light of atypical and borderline cases. Saul Kripke maintained that proper names cannot be criterial, because the person to whom the name refers might not have had the characteristics associated with him, and in speaking of that possibility, we would still be speaking about him and not someone else. And philosophers like Frank Sibley have pointed out that aesthetic terms like ‘delicate’ and ‘vibrant’ cannot be applied by appeal to non-aesthetic criteria, because the very same characteristics may be found in something to which these terms do not apply or, even worse, to which the opposite terms may apply. The very same non-aesthetic characteristics that make one thing delicate, for example, may make another insipid.

One might wonder, then, about the prevalence of the criterial view in moral philosophy. After all, if scientific terms like ‘mammal’ and common words like ‘game’ cannot be applied by way of criteria, what chance is there that terms like ‘moral’ or ‘immoral’ or ‘obligatory’ can? This seems a fair question to ask, before we even get to the further question of the merits or faults of any particular moral theory. And when we do turn to moral theories, what we find is a mess of different views, many of them mutually exclusive, invoked and applied in a haphazard, inconsistent manner, often by the same person, in a single day. An encounter in the morning may provoke me to think and act with regard to the general welfare, and another, in the afternoon, may elicit concern for another person’s dignity and rights. Of course it is possible that our words, ‘right’, ‘wrong’, ‘obligatory’ and the like have different meanings on each of these occasions, so that when I say, “this is the right thing to do” in the morning and “making you do that wouldn’t be right” in the afternoon, I mean different things by ‘right’, but the simpler and, in my view, much more plausible way of interpreting these facts is that our applications of moral terms are not criterial.

Of course, not everyone who worked in moral philosophy over the last century embraced the criterial view. The often ignored and underrated Intuitionists – notably, H.A. Prichard and W.D. Ross – rejected the idea that our moral judgments and actions follow from the application of criteria. “The sense of obligation to do, or of the rightness of, an action of a particular kind is absolutely underivative or immediate,” H.A. Prichard wrote. “[W]e do not come to appreciate an obligation by an argument, i.e. by a process of non-moral thinking…,” which means that we do not determine whether the word ‘right’ or ‘wrong’ applies to an action, by seeing if the action meets a set of moral-making criteria. We know this, because whenever we try to derive an obligation, say from the claim that some action is intrinsically good or has a good result, we wind up with an infinite regress (for such an inference to go through, we must presume that what’s good ought to be the case, which is simply to invoke another obligation). And when we consider what we actually do when we wonder whether we were right in thinking something is a duty, we don’t consult a theory or look to criteria, but rather place ourselves back in the situation and see whether the sense of obligation arises again. 

It is important to make clear that Intuitionists like Prichard are not suggesting that perceiving some fact about an action or a state of affairs cannot get a person to the point that they feel obligated to act in a certain way or find some action or state of affairs to be wrong. Seeing a person suffering from cancer may lead to my feeling obligated to donate to cancer research, and when I remember that my neighbor has done me a kindness, I might feel that I ought to reciprocate. But this is not the criterial view of morals, according to which it follows from the fact that an action or state of affairs has certain characteristics that the word ‘right’ or ‘wrong’ applies to it and therefore, that I ought to do or accept it or not.

Another non-criterial approach to obligation, associated with the Wittgensteinian philosopher Cora Diamond, sees obligations as arising from the application of concepts that are, themselves, morally “thick,” meaning that their moral content is implicit, rather than inferred. We are all familiar with morally thick concepts, as they are what one typically finds in moral codes like the Ten Commandments. ‘Stealing’, ‘Murder’, and the like are morally thick, insofar as they have both descriptive and moral content. To steal is to take someone’s property wrongfully, and to murder is to kill someone wrongfully, and thus, the statements “Stealing is wrong” and “Murder is wrong” are analytic and necessarily true, whereas their morally neutral, “thin” counterparts, “Taking someone’s property is wrong” and “Killing someone is wrong”, are synthetic and only contingently true (or false).

Diamond maintains that concepts like ‘person’, ‘neighbor’, ‘friend’, ‘pet’, and others are also morally thick in this way and that looking at things from this perspective far better explains our behavior than the criterial approach does. In her 1978 paper, “Eating Meat and Eating People,” Diamond dismantles the criterial approach to our treatment of both people and non-human animals, specifically with regard to so-called “ethical eating” practices like vegetarianism and veganism.

Ethical vegans want to say that we shouldn’t eat non-human animals for the same reason that we shouldn’t eat people: animals and people satisfy the same moral criteria. The reason why we shouldn’t eat people – so the story goes – is because they have a certain morally relevant characteristic, namely the capacity for suffering, and since non-human animals also have this characteristic, we shouldn’t eat them either.  

As Diamond points out, however, if this is the reason why we shouldn’t eat people, then there is no reason not to eat our dead, so long as the person in question was not unjustly killed and so long as the meat is good to eat. Likewise, there is no reason why we shouldn’t eat amputated limbs, so long as the same caveats hold. And yet, most of us don’t think that we should do either of these things, other than in the direst of circumstances (like if we find ourselves in a Donner Party-style scenario), which means that whatever the basis of the prohibition against eating people, it does not lie in the fact that they meet certain criteria. Diamond maintains, instead, that it is because the concept ‘people’ is morally thick and as a result, people are not things to eat.

The same is true on the non-human animal side of things. If the reason why we shouldn’t eat beef or chicken is that they are capable of suffering, then there is no reason why a vegan shouldn’t eat an animal that has died of natural causes or been struck by lightning or lost a limb in an accident. That vegans eschew all meat-eating shows, therefore, that whatever the reason, it is not because cows or chickens have certain characteristics and thereby meet certain criteria. And consider the non-vegan or non-vegetarian person. I might be more than willing to eat Gaegogi (dog) in a restaurant in Seoul, while at the same time being appalled at even the suggestion that I might eat my pet Bichon Frise. Is this because the dog used to make my Bosintang (dog stew) lacked some morally relevant characteristic that my dog possesses? Or is it like this: I think of my Bichon Frise as a pet; ‘pet’ is a morally thick concept; and pets are not things to eat.

One might wonder how certain things manage to fall under morally thick concepts while others don’t. Certainly, there will be a story, in each case – one that explains, for example, why this Bichon Frise and not some other dog became my pet – but there may be as many such stories as there are cases, and it seems highly unlikely that there will be some general principle, unifying them all, which could serve as some kind of meta-criterion, by which the criterial view can be saved.

The criterial view of ethics is part of a broader rationalism in philosophy, about which I have been quite critical in my work. It renders ethics procedural and thus transparent and scrutable, and this is both comforting and flattering to us. In contrast, the anti-criterial approaches to ethics that I have described treat ethics as a matter of perception, feeling, conception, and naming, which are not procedural and which render morals opaque, ultimately unreasoned, and thus, somewhat inscrutable. This is far less comforting and flattering, but them’s the breaks.

On a Metaphor in Aristotle’s Nicomachean Ethics

I want to talk about a certain metaphor in Aristotle’s Nicomachean Ethics. It appears in a lone sentence in Book III, Ch. 3, and while mentioned only briefly, it is significant, not just in understanding the Ethics, but in grasping a crucial point about the limits of reason and deliberation in ethics more generally.

[T]he end cannot be a subject of deliberation, but only the means; nor indeed can the particular facts be a subject of it, as whether this is bread or has been baked as it should; for these are matters of perception.

My interest here is not with the first part of the sentence, regarding ends and means, but with the second part regarding the bread. In the first part of that second part, Aristotle seems to anticipate Wittgenstein’s idea of family resemblances: we cannot deduce that something is bread, by holding it up to a list of criteria, but must see that it is bread by perceiving it as being suitably similar to already established instances of bread. Aristotle then goes on to say that the same applies to whether the bread is sufficiently baked – that is, whether or not it is good bread – and this too is a matter of perception, not deliberation. The reason is fascinating and sheds a great deal of light on the relationship between the general and the particular in ethics and thus, the appropriate role of reason in social and civic activity.

Aristotle’s ethics is probably most associated with the famous “doctrine of the mean,” according to which the virtuous temperament is the moderate or “reasonable” one, and the right thing to do in any given situation, consequently, is whatever lies between the extremes of excess and deficiency. He arrives at this view after an investigation into human nature, concluding that human beings are essentially rational in both the contemplative and deliberative senses of the word and that moral well-being, like physical and psychological health, is served by moderation and undermined by its opposite.

One of the notable things about Aristotle’s view is that it means that no action is intrinsically good or bad, because any action could lie anywhere on the relative spectrum of deficiency/moderation/excessiveness, given suitable circumstances. Also notable is that as far as action goes, the doctrine of the mean doesn’t help us very much. To be told not to do too much or too little of something but rather the right amount isn’t particularly helpful in determining how I should act on any given occasion, and neither is being told that “the reasonable/virtuous thing to do is what the reasonable/virtuous person would choose.”

Of course, this is all by design, as early in the Nicomachean Ethics, Aristotle tells us that “it is the mark of an educated man to look for precision in each class of things just so far as the subject admits.” Ethics and political science are subjects that do not admit of much precision, in this sense, and thus, Aristotle warns us that “fine and just actions, which political science investigates, admit of much variety and fluctuation of opinion. We must be content, then, in speaking of such subjects and with such premises to indicate the truth roughly and in outline.” 

Aristotle’s account here of why ethics and political science do not admit of precision ultimately boils down to the question of the general versus the particular. Geometry, to take an example, is concerned with the general: squares are definable in terms of a set of essential characteristics, and thus, what applies to one square will always apply to another. But people and their lives and behavior are not like this. One person is very different from the next, as are the circumstances in which people find themselves, and this means that what applies to one person, in one set of circumstances, may not apply to another person in another set of circumstances. Unlike geometry, then, ethics and political science are concerned with the particular much more than the general, about which we can only make the vaguest of claims. In the case of moral and civic virtue, the devil is pretty much all in the details, as the very same action could be the best thing to do for one person on one occasion and absolutely the worst for another on a different one.

In The Closing of the American Mind, Allan Bloom wrote that “the particular as particular escapes the grasp of reason, the form of which is the general or universal,” and we can see why. One square is as good as another, so I can reason my way to a complete understanding of squares, on the basis of nothing but their essential characteristics and a number of intuitively grasped axioms. In the case of particulars like people and circumstances, however, one is not as good as another, so in order to gain a full understanding of them, one must, as Wittgenstein put it, look and see, as deducing will only get us so far – in this case, as far as “don’t do too much or too little, but rather the right amount.”

Returning to the metaphor of the bread: across the street from my old university is a Panera Bread store, which, oddly, allows you to enter from the back, through the kitchens. One thing that always caught my eye was a chart that is supposed to instruct bakers on when a bagel is properly baked. On it are pictures of three bagels: one is undercooked, one is overcooked, and one is cooked just the right amount. Of course, the chart is entirely useless to someone who is not already an experienced baker, just as models of excessive, deficient, and moderate behavior would be useless to someone who is not already well experienced in social and civic goings-on. Resemblances – and certainly sufficient resemblances – can only be seen, not reasoned to. No bagel that you cook is going to be identical with any of the pictures in the chart, so whether the bagel in your hand is cooked the right amount is a matter of seeing whether it sufficiently resembles the bagel in the picture, and this requires an experienced eye. Similarly, no action-situation with which one is confronted is ever going to be identical with the models of excessive, deficient, and moderate behavior you have been shown, which means that determining whether this is the right thing to do in these circumstances will be a matter of seeing whether the thing and the circumstances are sufficiently similar to others, in which it was the right thing to do. This also requires an experienced eye, which, with respect to moral and civic matters, is what Aristotle calls “practical wisdom.”

Ethics, then, is the discipline in which ratiocination and rational investigation do us the least good. A practically wise person – the one with an experienced eye for moral and civic affairs – who has never been told the doctrine of the mean or given any other moral principles or rules can be counted on to do the right thing on a consistent basis, whatever situations may come. But the person educated in principles, rules, and doctrines, if not practically wise, will get no farther than the platitudinous generalities that are all we can get by way of reason in ethics, and as a result, can be counted on to get things wrong, morally and civically speaking, on many occasions.

Some Questions about Obligation

[1] You are with a friend in a restaurant, and he tells you that you ought not to (in the moral sense) order the linguine con vongole that you are considering. When you ask him why, he tells you that eating clams violates their interests. You reply that you don’t care about clams’ interests, and he says that you ought to care. When you ask why, again, he tells you that as a general matter, you ought to care about others’ interests. Does this make the initial claim more compelling?  

[2] Would it make any difference if your friend also told you that moral obligations are “real,” in the sense of being “mind-independent” or some such thing?

[3] Imagine that you deny that moral obligations are “real” in this sense, and say, “Hey, this business about the linguine con vongole is just your opinion, man.” What would settle the dispute between the two of you?

[4] On an earlier occasion, you argued with a different friend about the moon’s gravity. She said that it is lower than the earth’s gravity, while you claimed it is higher. She pointed out that people have visited the moon and showed you footage of astronauts jumping up and down in a way only possible in lower gravity. Is there anything comparable that your friend today might do to demonstrate that eating linguine con vongole is wrong? Or that violating interests is wrong?

[5] Suppose that there are three of you at lunch and that the second friend says that the linguine con vongole is fine but that you shouldn’t order the lamb chops, which you also have been considering. When you ask why, she explains that while the clams have no claim to personhood, the lamb does and that it is wrong to treat another person as a means rather than as an end. So, your friends agree that some of the things you are thinking about eating are wrong to eat, but they disagree as to which one(s). What would settle the dispute between them?

[6] Would it make any difference to their dispute if moral obligations are “real” in the sense mentioned in [2]?

[7] Since the second friend is disposed to eat some things that the first friend thinks would be wrong to eat, are the second friend’s gustatory inclinations (according to the first friend): (a) as morally abhorrent as yours? (b) less morally abhorrent than yours, though still morally abhorrent? (c) not morally abhorrent?

[8] Imagine that both friends agree that any and all meat/fish/seafood/poultry consumption is immoral, but one of them is a moral “realist,” in the sense identified in [2], while the other is a moral subjectivist in the Humean sense. Does this affect the force of their otherwise identical normative judgments regarding your lunch? Is there a reason to take one’s judgments more or less seriously than the other’s?

[9] Assume that your friends think that both the avoidance of suffering and personal sovereignty/prerogative are valuable, but they disagree as to which is the greater or more overriding value. What would settle their dispute?

[10] Suppose that you choose the lamb, drawing the moral ire of both your friends, and they say they will shun you for it. Now imagine that a third friend joins the lunch party and also objects to your dining choice. While he doesn’t say it is morally wrong to eat lamb, he talks a lot about your character and indicates that eating animals is a vice and thus undermines your flourishing. Is there an obligation to flourish or to be a certain sort of person? If there is, is it different from the obligation to do – or not do – certain things?

[11] A fourth friend arrives late to the lunch party. He has no moral theory and eschews moral discourse, advocating and opposing things entirely on the basis of what matters to him and what he cares about, and he also says that he will shun you for eating the lamb. Should you take the first three instances of shunning more seriously than the fourth? And what if you neither accept the first three friends’ moral/axiological reasoning nor care about the fourth friend’s dietary eccentricities? Is there anything substantive beyond all of this that differentiates the cases of shunning?

[12] Is there a reason to care more about violating someone’s moral convictions than his or her personal preferences (or vice versa)?

[13] What meaningful difference is there between being shunned or otherwise sanctioned for being immoral and being comparably penalized for being disliked? 

[14] Why are “don’t just do it out of a sense of duty” and “do it because you want to” such commonly used phrases?

[15] Suppose your friends argue amongst themselves about your shunning. The first three are critical of the fourth, saying, “Well, in our case, we rightly shunned him – the shunning is legitimate – while in your case, you did not and it is not.” Does this make any difference to being shunned by the first three friends versus being shunned by the fourth? 

[16] Imagine two societies: In the first one, people are punished for violating moral principles, derived from a moral theory that a sufficient number of people think is true; in the second, people are punished for doing things a sufficient number of people dislike. What substantive difference is there between the two societies? 

[17] Suppose that there also is a “moral reality” to which the consensus of one or the other societies in [16] “corresponds.” Does this change anything?   

[18] After some consideration, you realize that you don’t care about keeping these friends and consequently, their threats of shunning have no bite. Does it matter that you have been labeled “immoral” nonetheless?

[19] What is the significance of a negative moral judgment for which there is no sanction of any kind that matters to the accused?

[20] Does the existence of a “moral reality” in any way affect the answer to [19]?

[21] Imagine that there is, in fact, a “moral reality” of the sort that we have been discussing, but that the moral judgment of every person in the world contradicts it. Imagine further that everyone agrees with regard to what is right and wrong. What would the claim that something is morally “permissible” or “impermissible” mean in these circumstances?

[22] Imagine two worlds, one in which there is a “moral reality” and one in which there is not. In both worlds, there is significant disagreement over what is right and what is wrong. What would be the substantive difference between the two worlds?