A popular form of monomania

by Theodore Dalrymple

“We hope by this time next year to have from a hundred and fifty to two hundred healthy families cultivating coffee and educating the natives of Borrioboola-Gha, on the left bank of the Niger.”
—Mrs. Jellyby, Bleak House, by Charles Dickens (1852)

Conceptions of morality change, never more so than in the lifetime of The New Criterion. If a person who died at the time of the first issue were to return to life, he would find himself in an angrier and more charged moral atmosphere than the one he had left—and one in which the principles that undergird Western civilization scarcely seem to be in evidence. Indeed, he would think that the world was a moral hornets’ nest that had been poked with a stick, so furious is the buzzing.

Some years ago, I shared a public platform with a person considerably more eminent than I. The subject of discussion was what it took to be good, which can be argued over for an eternity without compelling a universally accepted answer. This does not make the question meaningless, however, as the Logical Positivists might once have claimed; indeed, there are few questions more important.

The person more eminent than I (whom I shall not name, the avoidance of pointless personal denigration being a part, albeit a small one, of what it takes to be good) said that it took intelligence to be good. Since the eminent person almost certainly considered only the upper 1 or 2 percent of the population to be intelligent, and since he claimed that intelligence was a necessary but not sufficient condition of goodness, it was clear that he did not think much of the moral qualities of the great majority of his fellow creatures. His answer was perfectly compatible with the claim that only one in a thousand people is good.

I said that I thought that what he said was appalling, true neither philosophically nor empirically. But just because a viewpoint is philosophically bad, empirically without foundation, and horrible in its implications does not mean that it cannot be held. History would no doubt have been rather different if this had been impossible.

It is always a good idea to try to elucidate what can be said in favor of an opinion with which one disagrees: there is probably no better way of getting one’s own thoughts clear. What, then, can be said in favor of the idea that one must be clever to be good?

There must surely be some cognitive element to goodness; one does not speak of a morally good bird or lizard, for example. And if to be good means a disposition to be good and not merely to perform an occasional good act, as if almost by chance, then cognition clearly has a part to play in being good. Cognitive ability—intelligence—should therefore be at least an advantage in, if not actually a precondition of, being good.

This is all the more so since moral decisions are often complex rather than simple and straightforward. The world is not so constituted that it only provides us with easy moral questions to answer, which is why laying down invariable moral rules is so difficult. Circumstances really do alter cases; if they did not, Kant’s view that we should tell the truth even to a murderer on his way to cut someone’s throat, simply because we should always tell the truth, would not strike us as so absurd, believable only for a man with little ordinary intercourse with the world.

When to speak the truth, when to hold one’s tongue, when to indulge in euphemistic evasion, when to tell little white lies for the good of others—these are subtle questions often requiring a swift appreciation of the many possible reasons or circumstances that must go toward sound judgment. Surely high intelligence, in the sense my co-panelist meant, helps here?

But whether or not it does so cannot be answered in a priori fashion. Even if some intelligence were required to be good, the relationship, if any, between its level and goodness would still have to be established. It might well be, for example, that there is an adequate level of intelligence above which any surplus contributes nothing as far as moral character is concerned. I think this is ordinarily the case: we do not expect better behavior of someone with an IQ of 140 than of someone with an IQ of 110. And if the man with the higher intelligence quotient is bad, he is probably better at being bad.

Understanding the moral argument for a certain course of action is not the same as performing that action: indeed, most of humanity has found the former rather easier than the latter. I once worked in a prison as a doctor, and though the prisoners were, on the whole, poorly educated, and though it is generally held that prisoners are of a lower average or median intelligence than that of the general population, I found that very few of them were moral imbeciles in the sense that they could not understand or grasp a moral argument. Clearly, many of them had difficulty in applying the conclusion of that argument in practice, but their problem was not with intelligence.

It is in any case a matter of common experience that the best people one knows are not necessarily the most intelligent—though I do not want to imply, either, that high intelligence is incompatible with goodness. In fact, in my experience there is little or no relation between goodness and intelligence.

Why, then, did my co-panelist, himself highly intelligent, assert something so patently false? I think it was because there has been a progressive distancing of the locus of moral concern, at least among the educated, from personal conduct to wider, impersonal questions.

This is not an entirely new phenomenon. Lenin, for example, wrote that “our [Bolshevik] morality is entirely subordinated to the interests of the class struggle of the proletariat.” All other morality, according to Lenin, was deception and dupery, designed to throw dust in the proletariat’s eyes. He, at any rate, found his theory of morality extremely liberating: since it was he who decided what was in the interests of the class struggle of the proletariat, it meant that he could do what he liked. And what he liked, often enough, was to have priests, poets, and sundry other enemies murdered.

In similar fashion, Islamists think that whatever supposedly defends or advances their brand of Islam is both morally justified and obligatory, making irrelevant all other considerations that are normally thought to be moral. This, too, liberates them from restraint, though not necessarily to the direct personal advantage of those who act in accord with the doctrine—unless, that is, they really are rewarded in heaven.

Leninism and violent Islamism are, of course, extreme examples of what happens when a narrow ideology is made the philosophical basis of moral action. Such ideologies are not founded and propagated by men lacking in intelligence, though their rank-and-file followers may well be of lesser intelligence than they. Lenin, surely one of the most unattractive figures in world history though for decades treated in the Soviet Union as a moral exemplar, the Muhammad of Communism as it were, was certainly not of deficient intelligence. These totalizing ideologies elevate highly doubtful intellectual constructions and abstractions into realities in the mind of the believer, who then views the whole world in their light. The abstractions become more real to believers than the everyday reality in which most of us live most of the time. By this means, viciousness is transmuted into duty and cruelty into an act of cleansing. Sadism is part of the makeup of many people, if not quite of all, and totalizing ideologies appeal to the most sadistic among us, with their justification for, indeed requirement of, conduct that would otherwise be deemed beyond the pale.

Thus intellectualization of a certain kind can promote the dulling of normal moral sensibilities. Even more than political language (as analyzed in Orwell’s essay “Politics and the English Language”), ideology is designed to make lies sound truthful and murder respectable, indeed to make lies the proclaimed truth from which all dissent is impermissible, and murder a duty.

It would be an exaggeration to claim that the populations of Western liberal democracies have succumbed entirely to the siren song of totalizing ideologies, but nevertheless the signs are not altogether encouraging. Two factors promote the advance of ideology in our societies: first, the death of God, and second, the spread of tertiary education. The two may well, of course, be related.

Whatever else the death of God may have done for us, it has not lessened our desire for an overall or transcendent meaning or purpose in our lives, especially in conditions in which our basic material needs—those which, when not met, lead to a struggle for existence in the most literal sense—are almost guaranteed to be supplied. This is especially true of those who have been most stimulated to experience the desire for transcendent meaning or purpose, that is to say the educated, by whom I mean those who have passed the first quarter of their lives (at least) in supposedly educational institutions.

In the absence of religious belief—and I take it as axiomatic that religious belief, in any form other than an etiolated, semi-pagan claim to spirituality, is unlikely now to revive with sufficient vigor or unity fundamentally to alter the moral atmosphere, especially in that part of the population that experiences most acutely a need for transcendence—there are relatively few possible sources of meaning greater than that of the flux of everyday life. Chief among these is political ideology, the most obvious being fascism and Marxism. Both have lost their salience for most people, however, fascism because of the appalling catastrophe of Nazism, and Marxism because of its prolonged and brutal failure in Russia. (However much Marxists proclaimed their distance from the Soviet Union, its ignominious collapse profoundly damaged the intellectual prestige of Marxism.)

But just as the death of God did not destroy the human need for transcendence, so the disasters of Nazism and Communism did not halt the search for transcendence by means of ideology. Instead of disappearing, therefore, as one might have expected or at least hoped, ideology, in the sense of an overarching system of thought that simultaneously explains the woes of the world and suggests a means to eliminate them, thereby endowing a person’s existence with profound meaning and giving him a cause that appears larger than himself, has on the contrary flourished pari passu with the expansion of tertiary education. This time, ideology appears in a balkanized form, with a rich variety of monomanias, ranging from the “liberation” of sexual proclivities to the salvation of the biosphere. But, as if to prove the theory of intersectionality, various monomanias have formed tactical alliances with one another, such that if you know a person’s monomania, you also know what his attitude to many seemingly unrelated questions will be. There has developed a popular front, so to speak, of monomaniacs.

It is not surprising, then, that one’s opinion on matters social and political has become for a considerable part of the population the measure of virtue. If you have the right opinions you are good; if you have the wrong ones you are bad. Nuance itself becomes suspect, as it is in a tabloid newspaper, for doubt is treachery and nuance is the means by which bad opinions make their comeback. In this atmosphere, people of differing opinions find it difficult to tolerate each other’s presence in a room: the only way to avoid open conflict is either to avoid certain persons or certain subjects. Where opinion is virtue, disagreement amounts to accusation of vice.

Since there is no new thing under the sun, at least where human error and foolishness are concerned, this is far from the first time in history that people have taken opinion as a metonym for virtue. And where there is political divergence, there is the possibility, or likelihood, of polarization, for Man is a dichotomizing animal. Neutrality in a polarized situation is deemed cowardice or betrayal; it is impermissible to have no opinion on an issue, no matter how ill-informed on, or indifferent to, it one might be. The right of non-participation on one side of a debate is abrogated: as some members of the Black Lives Matter movement put it, “silence is violence.”

But while polarization of opinion is nothing new, the number of people who concern themselves with political and social affairs in both a theoretical and practical way has increased enormously, and with it the sensation of living amid conflict even in times of peace. Family and generational estrangements on the grounds of opinion, while not unknown previously, have become more common, almost expected. I know of several parents who have to hold their tongues if they wish to maintain relations with their children. No lyricist would now write, as did W. S. Gilbert in 1882,

I often think it’s comical
How Nature always does contrive
That every boy and every gal
That’s born into the world alive
Is either a little Liberal
Or else a little Conservative!

Such lightheartedness is not possible where opinion is the main, or sole, test of decency, the litmus of vice and virtue. Every difference of opinion is a difference not on the single point at issue alone, but of an entire Weltanschauung, and a few words are often sufficient to demonstrate which camp a person is in, that of vice or that of virtue.

The extreme importance now given to opinion (by contrast with conduct) in the estimation of a person’s character has certain consequences. This is not to say that in the past a person’s opinions played no part in such an assessment, and no doubt there are some opinions so extreme or vicious, for example that some whole population should be mercilessly wiped out, that in any day and age one would hesitate to associate with someone who held them. But formerly, even when someone held an opinion that we considered very bad, we still assessed the degree of seriousness with which he held it, the degree to which it was purely theoretical, the part it played in his overall mental life. The holding of such an opinion would not redound to his credit, but if lightly held and with no likely effect on his actual behavior, it would detract only slightly from our view of him. He might still be a good man, albeit one with a quirk, a mental blind spot.

If we take as an example the question of capital punishment, it should be possible for people to disagree without concluding that those who take a different view from their own are morally deficient or defective. I am against the penalty on the grounds that even in the most scrupulous jurisdictions mistakes are made, and that for the state wrongly to execute one of its citizens is a heinous thing, moreover one which will bring the whole criminal justice system into disrepute. If it is argued that the state’s unwillingness to risk the occasional wrongful execution will lead to more wrongful deaths by murder than would otherwise have occurred, I would reply that this is a price that must be paid (though I concede that there might in theory be a price too high to be paid, I think this unlikely in practice ever to eventuate). A person who declared himself in favor of summary execution without trial of all those with whom he disagreed or whom he otherwise reprehended would probably be regarded as mad rather than bad, at least outside the purlieus of the Taliban.

If someone were either for or against the penalty on more deontological grounds, that it was either just or barbaric in itself, I would recognize these as sensible arguments without necessarily subscribing to either, and without giving them the weight that their adherents do. Thus, it should be possible for us to have a discussion on the question, disagreeing but without casting aspersions on each other’s character. It seems to me, however (though I have no strict scientific evidence to prove it), that such reasonable discussions have become less and less frequent, precisely because opinion and not conduct has become the touchstone of virtue.

This naturally raises the temperature of any discussion and inclines participants to bad temper. But there are other consequences too.

For one thing, the elevation of the moral importance of opinion changes the locus of a person’s moral concern from that over which he has most control, namely how he behaves himself, to that over which he has almost no personal control. He becomes a Mrs. Jellyby who, it will be remembered, was extremely concerned about the fate of children thousands of miles away in Africa but completely neglected her own children right under her own eyes, in her house in London.

This is not to say that huge abstract questions, such as the best economic or foreign policy to follow, have no moral content. Of course they do, but generally in a rather vague, diluted, and distant way, for example that one should choose the policy that does the most good or, more realistically, the least harm. Which policy that is will likely remain extremely difficult to determine, subject as it is to so many variables. Again, reasonable people are likely to disagree, all the more so as the desiderata of human existence are plural: there is, in most situations other than total war, no one goal that morally trumps all others. If it could be shown that the creation of wealth is favored by, or even requires, a degree of unemployment, but it were also accepted that wealth is a good and unemployment a harm, then decent people might well disagree about how much wealth creation should be forgone in order to reduce unemployment, in part because goods and harms are often not commensurable.

The question of what conduces to economic prosperity is large and important, but people may disagree as to the worth—the human worth, that is—of prosperity itself, at least within quite wide limits. But even if they agree as to its worth, and assuming that there is a single indubitable measure of such prosperity, they may still disagree as to how it is to be produced or encouraged. If someone says that personal freedom is a precondition of prosperity, someone else might point to the case of China, whose rise to prosperity has probably been the most remarkable, at least in terms of timescale, in the whole history of the world. China, of course, is a highly authoritarian country. But the first might reply that China in certain respects allows more freedom than Western countries, for the weight of the state in China, so much heavier in some respects than elsewhere, is so much lighter in others. There is no social security there to speak of, so that people must fend for themselves when necessary, and this in turn means that the state is spared the need to raise taxes to pay for social security. Since taxes are generally coerced even in democracies (where one group coerces them from another, and few people pay them if avoidance is possible), there is less coercion in this respect in China than in almost any Western country. Thus is the argument saved that personal freedom is a precondition of prosperity.

It is evident that discussions of this nature can be, and often are, endless: even the debate over free trade versus protection is not decided once and for all. People engaged in such debates will rely on what I hesitate to call alternative facts. If there is a correct answer to be had, it cannot be decided by the moral virtues of the disputants. The better person may have the worse arguments, and the worse person the better. The difference between laissez-faire and dirigisme cannot be settled by reference to the moral qualities of those who advocate for either, at least not in a world in which rational argument counts for something.

The overemphasis on opinion as the main or only determinant of a person’s moral character thus has the effect of promoting irrationalism, and all argument becomes in effect ad hominem. If a person holds one opinion, he is good; if another, he is bad. Everything is decided in advance by means of moral dichotomy. Nuance disappears. If a person approves of abortion, even in restricted circumstances, he is, for opponents, virtually a child-murderer, and therefore beyond the moral pale. If he does not, he is, for opponents, a misogynist who would condemn girls of twelve to bear the children of their rapists, and therefore beyond the moral pale. One does not willingly talk to or otherwise consort with people beyond the moral pale.

There is a positive-feedback mechanism built into opinion as the measure of virtue, for if it is virtuous to espouse a particular opinion, it is even more virtuous to espouse a more extreme or generalized version of it. It then becomes morally impermissible for a person to hold the relatively moderate opinion; he is denounced with the peculiar venom that the orthodox reserve for heretics. When J. K. Rowling, a feminist once in good odor with the morally self-anointed, delivered herself of an opinion couched in moderate terms stating something so obvious that it will one day (I hope) astonish future social or cultural historians that it needed saying at all, namely that a transsexual woman is not a woman simpliciter, she was turned upon viciously, including by those who owed their great fortunes to her—or at least to her work. She had committed the cardinal sin in a world of opinion as the criterion of virtue of not having realized that the moral caravan had moved on. How easily sheep become goats!

Taking opinion as the hallmark of virtue has other effects besides provoking dichotomization, bad temper, and the exertion of a ratchet effect in the direction of ever more extreme and absurd ideas. It tends to limit the imagination, moral and otherwise. For example, once something tangible is declared to be a human right, which no decent person can thenceforth question or deny on pain of excommunication by the virtuous, the good procured by the exercise of that right ceases to be a good for any other reason than that it is a right. The recipient has no reason to feel grateful for what he receives, because it was his right to receive it, though he may, of course, feel rightfully aggrieved if he does not receive it. A United Nations rapporteur recently condemned New Zealand for its breach of human rights because it did not provide decent housing for all its citizens (and other inhabitants); rents were expensive and there was overcrowding as well as some homelessness. The New Zealand government, which had committed itself to the view that there was a human right to decent housing, meekly promised to try to do better. It had not promised to treat housing as if it were a human right, but to treat it as a right itself; it was therefore skewered by its own supposed virtue.

If anyone were to deny that decent housing was a right, it is almost certain that he would soon be attacked as a landlords’ apologist, as someone indifferent to the plight of the homeless—as if there could be no other reason why people should be decently housed and not homeless other than that they had a human right to decent housing.

The potential consequences of a right to decent housing, if taken seriously, are pretty obvious: the commandeering of private property, for example, which proved such a triumphant success in the Soviet Union that people were living in communal apartments two-thirds of a century later. And since “decency” in housing is not a natural quality but varies according to circumstance (what is verging on the indecent in Auckland, Christchurch, or Wellington would be palatial in Lagos or Dhaka), compliance with such a right is an invitation to, or would require and justify, ceaseless bureaucratic interference. Furthermore, declaring decent housing to be a right, irrespective of the conduct of the person exercising it, would not exactly be a spur to personal effort, especially in the lower reaches of society. And if decent housing is a right, why bother to seek the true economic or social reasons for high rents and homelessness? A right is a right, independent of economics; all that is required is that it be complied with.

These rather obvious objections to the concept of decent housing as a human right are seldom aired, at least in public discussions, because those who make them are so easily portrayed as Gradgrinds or Scrooges, heartless and cruel, lacking in the imagination necessary to understand or sympathize with poverty. The supposed moral quality of the objector trumps the possible validity of his objections, which therefore do not have to be considered. Far from the objector lacking imagination, however, it is the proponent of the human right who lacks it: he fails even to try to imagine what the consequences of what he advocates might be. Words are the money of fools, no doubt, but also of people who desire unlimited powers of interference in the lives of others.

The importance accorded to opinion—correct opinion, of course—as the criterion of virtue has another strange effect, besides increasing intolerance and limiting imagination, for it conduces both to a new dictatorial puritanism and a new libertinism whose equilibrium is forever unstable.

The puritanism manifests itself in language. In the modern moral climate, for example, it is essential to be a feminist. A person who declared himself indifferent, let alone openly hostile, to feminism would be considered by many to be, ex officio, a morally depraved person, perhaps even a potential Bluebeard. Purity is imposed on language itself: in Britain, a person who subscribes to “correct” opinion would now not dream of using the word “actress,” as it is allegedly demeaning to females who act, suggesting inferiority to (rather than mere sexual difference from) males who act. In France the same type of person would not now dream of using the word “écrivain” for a female who writes, instead employing the feminine neologism “écrivaine,” because not to do so would imply that writers, at least ones worthy of notice, are solely or predominantly male.

It is obvious from these examples, which demand the defeminization of language on the one hand and its feminization on the other, that the purpose of this language reform is the exercise of power to impose virtue, rather than the solution to any real problem, since neither “actress” nor “écrivain” as applied to a woman has intrinsic derogatory connotations.

In similar fashion, American academic books now routinely use the impersonal “she” rather than “he,” and sub-editors impose this usage on their authors, even on those women to whom it would not come naturally because of their age. Sometimes such absurdities as alternating the impersonal “he” and “she” are employed, to ensure (or rather imply a deep commitment to) sexual equality; the phrase “she and he” is also employed rather than “he and she,” considerations of euphony being disregarded in favor of an ideological commitment to righting past wrongs and protecting women from domestic violence by means of sub-editorial vigilance. No doubt the guardians of “correct” usage imagine that they are doing what would once have been called “God’s work,” and therefore brook no demurral from their edicts. The mills of the new morality grind very fine indeed.

This new morality contrives to be both liberal and illiberal at the same time. It takes as its basic or founding text the famous words of John Stuart Mill in his essay On Liberty (1859):

[T]he sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. . . . [T]he only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. . . . The only part of the conduct of any one, for which he is amenable to society, is that which concerns others.

Whether or not people have actually read On Liberty, Mill has probably had a more profound effect on common modes of political thought than any philosopher other than Marx. Mill’s “one very simple principle,” as he calls it, is as omnipresent in our way of thinking about morality as the Sermon on the Mount once was.

But the one very simple principle turns out not to be so very simple after all. What seems initially to be permissive, and indeed is used to justify permissiveness, can also be highly restrictive, even totalitarian, in its implications. In the first place, it must meet what might be called the “no-man-is-an-island” objection: human beings are social and political by nature, so that it is difficult to think of any conduct which does not concern others. Even an anchorite who lives in a cave must have cut himself off from people who once knew him, who might be severely affected emotionally by his withdrawal from society. Most of us are very far from being anchorites.

What is less frequently remarked in objection to the very simple principle is that the notion of harm to others is indefinitely expansible. When Mill wrote, life for most people was extremely tough; there was no treatment for most illnesses, and existence therefore hung by a thread, even for the privileged (the life expectancy of members of the British royal family in the middle of the nineteenth century was about forty-five). The slightest injury could, through septicemia, lead quickly to death. In these circumstances, people were likely to take less seriously claims of harm done to themselves by minor inconveniences or verbal infelicities. Real victims were too numerous for claims to victimhood by the objectively fortunate to be entertained with the reverence they now frequently receive. Even in my childhood, we used frequently to recite the old proverb in response to an intended insult, “Sticks and stones may break my bones, but words will never hurt me.” No more: in a world in which opinion is the measure of Man, words are poison, dagger, Kalashnikov, hand grenade, and atomic bomb, and no one now gains a reputation for moral uprightness who does not sift the words of others for the wickedness they may contain.

No one could possibly deny the great importance of words and opinions in human life, of course, or their power to give offense and even to provoke violence. Words and opinions may inspire people either to the best or the worst acts, but we do not usually absolve people of their responsibility, or fail to praise or blame them, on the grounds that they were inspired or influenced by the words of others. If I translate your words into deeds (excepting situations of duress or some other extenuating or excusing condition), the responsibility is mine. In advocating the most complete freedom of speech—at least for members of a civilized community who are capable of exercising it, another somewhat fluid and contestable limiting condition—Mill assumed that there was a great gulf fixed between words and opinions on the one hand and deeds on the other.

But now, for the first time, we have found a way to reconcile the most illiberal impulses (and we ought to remember that it does not come naturally to people to grant liberty to others) with Mill’s one very simple principle, which has achieved almost sacred status and cannot be directly opposed or contradicted head-on. Mill tells us that we must not forbid ourselves anything except that which harms others, but (unlike him) today we have expanded the possible harms to include almost everything that we can say, since offense is harm done to the person who takes it.
