Thursday, 15 November 2018

REVIEW: I Think, Therefore I Eat (2018)

Liebig Meat Extract collectible card. The philosopher Nietzsche experimented with many different diets, including one unwisely based on drinking this then ‘very modern’ beef broth.
I Think, Therefore I Eat

 Review article by Keith Tidman
I Think, Therefore I Eat: The World’s Greatest Minds Tackle the Food Question
By Martin Cohen
Turner Publishing Company, 2018
http://www.turnerpublishing.com/books/detail/i-think-therefore-i-eat
$19.99 (paperback)
ISBN 9781684421985 (paperback)
You could be excused if it hadn’t already dawned on you to make the connection between history’s deep thinkers of philosophy and the subject of food and nutrition. Yet that is precisely what philosopher and writer Martin Cohen* does—in ways unforeseen and strikingly effective—in this book, I Think, Therefore I Eat: The World’s Greatest Minds Tackle the Food Question. The main title is, of course, a play on René Descartes’s best-known aphorism, ‘I think, therefore I am’—if any maxim is familiar and handily summoned from philosophy’s archives, it’s typically this one. Cohen draws on his own vast knowledge of philosophy and food—itself an uncommon combination of specialty areas—to explain the nexus between philosophers’ ruminations about food (from the Ancients to the Moderns), what scientists say, and what the rest of us unassuming mortals presume to know about food and nutrition.

To be sure, however, just because there’s philosophy aplenty, this is not some impenetrable tome. To the contrary, the style is highly approachable throughout, and even breezy on occasion—and, I should underscore, an enjoyable read. Readers therefore need no formal prior knowledge of either philosophy or the science of nutrition. Rather, Cohen spins a tight and engaging narrative that tells us what we need to know, in plain English—at the right time, and in just the right way. And when a topic proves a little knottier than others, he deftly untangles it for us. It’s a book that is highly informative while maintaining a conversational, entertaining tone—at times engaging in insightful yarns to make key points all the more palpable. He often provides practical tips (like ‘Know Your Yogurt’) that one might fold into daily routines, seeds the book with ponderable observations, such as ‘nowadays it is acknowledged that microscopic organisms are the hidden puppet masters of human health’, and suggests the importance of Taoist-like balance and harmony in all things—including, of course, in eating.

The author suggests that ‘you can read I Think, Therefore I Eat in one sitting’. However, he’s being modest; my own suggestion would be to read the book less hurriedly, perhaps over a few days at your leisure, to properly savor what he dishes up—contextualised by his ‘holistic’ view of human health. That level of leisure is in the spirit of that giant of philosophy Immanuel Kant—perhaps inspired by ‘suffer[ing] from poor digestion’—whose ‘most useful bit of food-related advice’, as Cohen describes it, ‘is the recommendation—no matter how busy you are!—to have a proper lunch, ideally in company, and to eat it very slowly’. Consuming this book ‘slowly’ might likewise yield more useful takeaways.

Martin Cohen shares with us insights into the indulgences—and not uncommonly head-scratching idiosyncrasies—of the philosophers, from two millennia ago to the present, using a storytelling literary device. If you remember to keep the philosophers’ indulgences squarely in the context of history, they’ll make all the more sense—and, if I may opine, appear all the more forgivable. Both the philosophers’ eye-catching eating practices and off-the-cuff beliefs about food come to light. A case in point:
‘Friedrich Nietzsche was always obsessed with meat—his charcuterie—and drew inspiration and strength from an assortment of hams and sausages. The infamous philosophical architect of the ‘Superman,’ or Übermensch . . . also dabbled in vegetarianism but decided pleasure should come before health in such things.’
As Nietzsche himself professed, as an undisguised sideswipe, ‘anyone who is ripe for vegetarianism is generally also ripe for “socialist stew”.’

And another:
‘In a letter to a friend in 1769, [Hume] jokes of his “great talent for cookery, the science to which I intend to addict the remaining years of my life”. . . . But judging from his famous girth, he had left this interest in cooking science rather late. “Ye ken I’m no epicure, only a glutton”, he once admitted. . . . Hume seems not to have known what was good for him, since his forays into the kitchen produced mainly . . . “sheep’s-head broth”.’
A mouth-watering dish, I’m sure, to share on any occasion with family and friends. Meanwhile, Martin Cohen doesn’t let the Ancients off the hook:
‘“Oh, my fellow man!” exclaimed [the lacto-vegetarian] Pythagoras, a philosopher so ancient that he is even older than Plato and Socrates. . . . “The earth affords you a lavish supply of riches, of innocent foods, and offers you banquets that involve no bloodshed or slaughter”.’
Depending on how the scene is pictured, the last part—the bit about ‘bloodshed’ and ‘slaughter’ that is—might strike one as an appetite suppressor, no?

As Martin Cohen makes clear, however, Pythagoras has today, some 2,600 years later, plenty of company when it comes to food choices and awareness of nutrition: the growing numbers of vegetarians and vegans in all corners of the world who avoid what Pythagoras rather judgmentally blasted as ‘sinful foods’. To the larger point about food choices, the author devotes a chapter to what he dubs the ‘ethics of the dinner plate’. There, he delves into such sensitive—and perhaps for some people, stomach-turning—food choices as elephant trunks, horsemeat, grasshoppers, snakes, and dog burgers. And the unsavory list goes on—though ‘unsavory’ is a decidedly culturally subjective matter, fraught with preconceptions based on one’s upbringing and custom.

Such food choices should not surprise anyone, given archeological findings revealing that the ancient Romans harvested and ate snails a couple of thousand years ago. Seemingly inspired by such consumption of sentient beings, Martin Cohen turns to a laconic aphorism by the German playwright Bertolt Brecht: first comes the food, then comes the moralizing (a line from The Threepenny Opera, Cohen reminds us). However, in light of many people’s deep conviction in the rightness of, say, meals based on halal, low-calorie, vegetarian, organic, sacred-cow, fasting, kosher, and other moral (religious and secular) principles, the author fittingly turns Brecht’s words around: ‘first come the morals, then comes the eating’. In many cases, that seems about right. There are exceptions, of course, as illustrated by the anecdote about how money shortages in Karl Marx’s family led to a stark diet of just potatoes and bread—poverty trumping the luxury of moral considerations.

The catalyst for Cohen’s reflections on philosophy’s intersections with food is his three guiding principles: detail matters; everything connects; and don’t mess with the ‘crystal vase’. He weaves these into the discussion, rendering each more concrete and comprehensible through example after example; you quickly get his point, and it makes sense. The first two principles seem less in need of explanation here, so I’ll leave it to the book’s readers to discover them in further detail on their own. However, ‘don’t mess with the crystal vase’ surely calls for brief explanation—besides, the author declares this particular guideline the ‘most important’ of the three, so let’s try to clarify it.

As Cohen explains regarding the ‘crystal’, ‘The point is that the human body is a very delicate arrangement of intricate parts’, and that it therefore ‘defies logic that people—not least experts—seek to reduce to simple rules and linear “cause and effect” explanations’. This is a central, guiding theme in this book: acknowledge complexity, and don’t ‘take a hammer to the crystal . . . by, for example, drastically restricting your diet . . . or, conversely, by indulging in just one or two favorite (or convenient) foods’.

A notable instance of ‘messing with the crystal vase’ by consuming one favorite food is Ludwig Wittgenstein, who for dinner—every dinner!—feasted on just one thing: a pork pie. That is, until he discovered and switched to dining on just rye bread and Swiss cheese—with who knows what health consequences. I’m not sure what the father of modern medicine, Hippocrates, would have said to the likes of Wittgenstein about this kind of single-mindedness, Hippocrates having equated food and health much as we often do today, averring, deceptively simply, let ‘your medications be your food and let your food be your medicine’. A nice idea, but ‘detail matters’, so how, exactly, can food be our medicine, and what’s the proof?

Martin Cohen has a preternaturally sharp eye for ferreting out myths—bogus or unsupported, questionable (even absurd) ideas—advanced by researchers of food and nutrition, as well as by government regulators and (not to be too jaundiced) biased influencers within the food industry itself. Such scrutiny matters when claims and the science don’t pass the smell test—important in order not to be misled, either intentionally or accidentally, as ideas take on a life of their own in the public sphere, in some cases circulating (enter social media, stage left) as modern-day memes. Such tailored messaging convinces even some well-intentioned, perfectly scrupulous physicians, who may nonetheless rely on unreliable tests for safety or other guidance such as efficacy. As a result, the public has long since grown tired of the whipsawing effects of researchers’ advice about food and nutrition being put on the table one moment, only to be yanked off later—the public’s not-infrequent suspicion being that the science is ill-informed and unhelpfully in constant flux, made worse by food-industry influencers sometimes with their thumbs on the scale.

Cohen does a credible job of deconstructing the research claims and pointing out what’s spurious and what’s credible—and what’s simply not yet known, which seems to be vast. As he encapsulates the stakes: ‘the profound implications for both how knowledge is created and defined and how it is disseminated’. Of note, the author is not at all preachy or polemical in doing so, as he concedes that no single approach to food choices, nutritional needs, and dieting fits everyone’s requirements or desires or proclivities. Many of his cautionary notes regarding myths—and the not-uncommon misdirection by researchers, industry representatives, nutritionists, agriculturists, internet sites, and regulators—relate to overindulgence or under-indulgence, depending on the food in question: fats, sugar, water, carbohydrates, supplements, salt, calories, protein, milk, soy, and on and on. (Not a spoiler: the answers aren’t obvious.) And, of course, often muddying the water are the diets galore that come in and out of favor, often guilty of confusing correlation with causation, and pushed in pursuit of profit.

It seems common wisdom can be heavy on the common and lighter on the wisdom. Take fibre (‘fiber’ in the US), for example: seemingly wise souls have long touted it as highly desirable—almost an elixir—leading to the false assumption that there can never be too much in one’s daily meals. But, as Cohen tells us, ‘stealth fiber’ (in the form of inulin) is put into many types of processed foods, generally unknown to consumers, who have no idea how much (often too much) they’re actually ingesting. The author presses forward: ‘We have to suspect the entire edifice of nutritional advice’—advice that appears to have come down to us through the ages from the most sage of philosophers.

Martin Cohen assigns several agreeable chapters to the history and benefits—and pleasures!—of chocolate: from chocolate ganache to hot chocolate drink and ‘Plato’s noble cakes’. (With an appendix on how to make a chocolate cake and another on chocolate’s health benefits—giving all the more meaning to the expression ‘death by chocolate’.) Cohen effuses as eloquently about the merits of bread as he does about chocolate—evoking Locke and Rousseau’s passion for bread (well, for authentic bread, anyway) as a gentle excuse and entrée to reveal his own passion. All the while, the author nimbly steers clear of excessive faith in the questionable—sometimes ‘irrational’—examples set by philosophers. That said, Cohen gives credit where credit is due. For instance, he is seemingly partial toward the back-to-nature, vegetarian lifestyle of someone like Henry David Thoreau—‘an anarchist who eked out a living by making pencils while living in a shed by a pond’—who presaged what the author admiringly describes as the ‘ecological renaissance that today’s philosophers (and diet gurus) have only just begun to talk about’.

We’re told, for example, that Jean-Paul Sartre, who himself wrote of dreading becoming ‘a bald little fatty’, nonetheless favored indulging in spicy red pork sausages and sauerkraut, accompanied by quaffs of beer, convinced that ‘processing food was good—by making it more truly a man-made product, which for him meant therefore better’. Sartre carried this intriguing assumption over to canned fruits and vegetables, inexplicably believing that fresh ones were somehow too natural. John Locke also didn’t shy away from playing the role of food adviser, going so far as to offer three rules for eating fruit—rules surely based on little more than intuition and hunches. And Jean-Jacques Rousseau, while forever admiring milk’s nutritional value, would (in arguably a bit of philosophical overreach) refer to milk’s ‘psychological properties and its ability to reconnect people with nature’. Would that it were that simple.

In sum, I Think, Therefore I Eat cleverly navigates between nourishing the mind through philosophy and nourishing the body through food—and, importantly, describes their many interesting junctures. Not an easy feat, given both subjects’ vastness, but one the author accomplishes admirably: he does the hard work for us, teasing out what matters—the wheat from the chaff. Indeed, detail matters, everything connects, and don’t mess with the crystal vase! Although Cohen (correctly) professes that the book is ‘a course in critical thinking and skeptical science’—and, yes, ‘a bird’s-eye view rather than a narrow, partisan recommendation for this or that approach’—it is also much more than that.

Above all, in the vintage style of writing Martin Cohen is noted for, the book skirts what might otherwise have been off-putting abstractions—instead he offers focus and clarity, and provides concrete insights that readers can choose to act on in their daily lives, should they wish to do so. Also, the story of philosophy and the story of food don’t just run parallel to one another, requiring readers to toggle back and forth between them. Rather, the two threads cross over each other, back and forth, like a braid, each crossover multiplying the significance, effectiveness, and meaning of the other. I Think, Therefore I Eat would thus fill a likely gap in the personal library of anyone interested in social philosophy (in its broadest application), the ‘mindful eating’ of food, and the science of nutrition—a book one is likely to reach for time and again.


*And indeed, an editor of this Journal...



The Philosopher's verdict: The threads cross over each other, back and forth, like a braid, each crossover multiplying the significance, effectiveness, and meaning of the other!

Monday, 5 November 2018

Distilling Philosophy’s Essence (2018)

From The Philosopher, Volume CVI No. 2 Autumn 2018


 Distilling Philosophy’s Essence in a Quest for Clarity

By Keith Tidman



‘What is the meaning of these words:
“The first cause does not necessarily inflow anything into the second, by force of the essential subordination of the second causes, by which it may help it to work?” When men write whole volumes of such stuff, are they not mad, or intend to make others so?’
That’s Thomas Hobbes, quoting and chiding the Scholastics of the Middle Ages. His challenge steers me back to a term in computing: ‘lossless’. It refers to reducing digital file size for ease of handling while discarding no information, so that the original can be reconstructed exactly, with no loss of quality. The process is analogous to what Martin Cohen and Robert Arp have done in philosophy with their recent book Philosophy Hacks — that is, to ‘distill the essence’ of one hundred of the big ideas they selected from thousands of years of philosophy — but, importantly, to do so without compromising the quality of the original theories. To this point, might Hobbes have been spared the hazard of going ‘mad’ if the words of the Scholastics had been parsed, reduced, and clarified through similar distillation?

So, what is the path the authors take to arrive at philosophy’s ‘nuggets of insights’? The idea of a ‘lossless’ approach to compressing philosophy’s archetypal theories down to what Cohen and Arp refer to as ‘their barest of bare essentials’ is itself worth exploring. Since philosophy entails contemplating foundational ideas about life and our world writ large — a way of reflecting on, testing, framing, and sharing a wide expanse of issues — the argument in favour of distilling philosophy has merit. First, distillation reaches a wider audience by breaking down barriers; second, the situations in life in which the ‘big ideas’ (original iconic thoughts) might apply become more evident; and third, the ideas’ place, relevance, application, and vividness in contemporary thought (informed by snapshots of historical context) are brought to life.

And their approach seems to work. But so that I don’t inadvertently compromise quality by compressing their clever three-part method, let me quote the authors: ‘Helicopter view: This offers an overview of the philosophical idea, and usually its creator too, as well as a brief sketch of the context within which the insight was created’. ‘Shortcut: This strips the idea down in order to expose and explain the core elements of the theory’. ‘Hacks: Short and to the point, this part offers a shortcut to making sense of the idea — and, crucially, remembering it’. Yet, nothing was shortchanged: the book’s scope is ambitious, starting with the Ancients (Eastern and Western), proceeding to Medieval and Renaissance philosophy, then early and late modern philosophy, and finally twentieth-century philosophy.

As philosophy has historically explored the fundamental nature of the world, of knowledge, of human conduct, of reasoning, of reality, of existence, of cognition, of values, of proof, and of truth, there has, perhaps too often, been a tendency by some of the great thinkers to default to opaque abstraction that shrouds meaning. This tendency has mattered even as philosophers have talked to philosophers, leading to interminable debates about what was meant. Interpretations abound. More to the point, dense abstraction has proved off-putting to a larger audience, disinviting many otherwise intelligent people from the philosophical table. Yet, the fog of abstraction is not always necessary — and has handicapped the democratising of philosophy by rendering it inaccessible. That’s where Philosophy Hacks, Cohen and Arp’s book, self-described as a ‘word map with 100 firmly located landmarks (iconic ideas)’, comes upon the scene.

As an example of unfortunate obfuscation, I would point to this passage — section 5.3 of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus. Truly, the heavily clouded passage might excusably discourage some readers who would like to learn from history’s otherwise deepest thinkers:
‘All propositions are results of truth-operations on elementary propositions. A truth-operation is the way in which a truth-function is produced out of elementary propositions. It is of the essence of truth-operations that, just as elementary propositions yield a truth-function of themselves, so too in the same way truth-functions yield a further truth-function. When a truth-operation is applied to truth-functions of elementary propositions, it always generates another truth-function of elementary propositions, another proposition. When a truth-operation is applied to the results of truth-functions on elementary propositions, there is always a single operation on elementary propositions that has the same result. Every proposition is the result of truth-operations on elementary propositions’.
To cut otherwise complex philosophical ideas to the core and, in doing so, to make sense of them implies favoring simplicity of expression — all the while still attempting to organise the rich dimensions of human experience and thought, and to penetrate often-elusive reality. However, making ideas simpler — engaging in philosophy’s equivalent of ‘lossless’ file management — should not be equated with reducing the ideas to pointlessness or meaninglessness. Indeed, very much the opposite. The process of simplification in philosophy is to deconstruct deep theories, temporarily set aside what’s nonessential or merely misdirecting, and then faithfully reconstruct the kernel of the ideas’ meaning — at the crux, what matters about the theory for those edifyingly shiny eureka insights.

Besides, there’s a natural appeal to theories with fewer moving parts, whose relationships and contributions to the so-called ‘helicopter view’ of a big philosophical idea are unpretentious, transparent, and obvious. Larding a theory with myriad assumptions, parameters, postulates, and what-ifs branching in head-spinning fashion in all directions risks consigning otherwise noble ideas to dusty shelves. Immanuel Kant (for example, The Critique of Pure Reason), Georg Wilhelm Friedrich Hegel (for example, Phenomenology of Spirit), Martin Heidegger (for example, Being and Time), and the writings of Michel Foucault and Jacques Derrida are just a few among the many whose philosophy has been criticised, fairly or otherwise, for sometimes being obscure and even bordering on impenetrability. Let me offer an excerpt from Nietzsche’s Thus Spoke Zarathustra to further illustrate the point:
‘But the worst enemy you can meet will always be yourself; you lie in wait for yourself in caverns and forests. Lonely one, you are going the way to yourself! And your way goes past yourself, and past your seven devils! You will be a heretic to yourself and witch and soothsayer and fool and doubter and unholy one and villain. You must be ready to burn yourself in your own flame: how could you become new, if you had not first become ashes?’
In fact, I’ve never heard a specialist, in any field, beg colleagues for more convolution and abstraction and incoherence. Indeed, this is the very antithesis of Cohen and Arp’s book, which holds the excusable, if mildly radical, notion that philosophy’s big ideas should not only be crystal clear but also be alive, sharable, digestible, and actionable. Ready, possibly, to be set against competitive theories.

This discussion is as much about communication as it is about philosophy. Even the most competently distilled essence of philosophy’s big ideas needs to be comfortably couched in concise, clear language. Some specialists, from the humanities to the sciences, are more instinctively concerned with and skilled at that than are others. Meanwhile, what others regard as the ‘imperfections’ of the world’s thousands of natural languages — like their imprecision and uniquely different vocabularies and syntax — add to the challenge of clear communication. Clumsy language, language that is intensely abstract, incoherent, and indecipherable, can fundamentally undo the good accomplished by even the artful distillation of big ideas. So, ‘messaging’ matters, something the most effective specialists — philosophers or others — are eminently aware of as they reach out to express their ideas plainly. And, as many observers have said, clear, critical thinking and clear, lucid writing often go hand-in-hand. Since, however, some (perhaps too many) philosophers lost sight of this simple axiom of communication, there’s all the more need for books like Philosophy Hacks — to unravel philosophy’s mysteries and shed light on them through the trifecta of a ‘helicopter view’, a ‘shortcut’, and that ‘hack’.

Unfortunately, for many (including well-educated) readers, too often ‘philosophy’s mysteries’ have doggedly stayed mysteries — not because they present recondite ideas, which might be excused, but because of the ideas’ laboured presentation. A.J. Ayer’s Language, Truth and Logic (1936) includes another example of a passage that risks unnecessarily marginalising prospective readers, one that calls out for some kind of clarifying shortcut:
‘For, roughly speaking, all that we are saying when we say that the mental state of a person A at a time t is a state of awareness of a material thing X, is that the sense-experience which is the element of A occurring at a time t contains a sense-content which is an element of X, and also certain images which define A’s expectation of the occurrence in suitable circumstances of certain further elements of X, and that this expectation is correct: and what we are saying when we assert that a mental object M and a physical object X are causally connected is that, in certain conditions, the occurrence of a certain sort of sense-content, which is an element of M, is a reliable sign of the occurrence of a certain sort of sense-content, which is an element of X, or vice versa, and the question whether any propositions of these kinds are true or not is clearly an empirical question’.
At the same time, it is worth remembering that philosophy has broad shoulders. By that I mean that, following thousands of years, and despite necessarily increasing specialisation, philosophy still manages to crisscross with issues of sociology, psychology, politics, literature, theology, history, anthropology, physics, cosmology, biology, mathematics, artificial intelligence, and technology, among other fields of study. The passage of time and the evolution of human thought may have prompted change in some philosophical theorising, but much other theory has endured largely intact — continuing to underlie humankind’s exhilarating forward leaps in intellectual endeavour. Philosophy Hacks reflects that observation, when and where the intersections across fields are important in order to clarify and advance the story about philosophy’s touchstone ideas.

One confounding factor in this discussion is that the available methods and metrics for determining comprehensibility in philosophy are not neatly and uniformly laid out along different dimensions, handy for anyone to pick up and wield according to a formal set of rules and criteria. Conclusions are usually not consecrated by fine-tuned granularity in comprehensibility or by reassuring consensuses. Clear-cut definitions in this process of evaluation are few. Besides, conclusions regarding comprehensibility may be hampered by subjectivity and consequential vagueness. This is the case whether we are evaluating a single philosopher or contrasting styles across multiple philosophers — philosophers being heavily influenced by immersion in different periods in history, by the natural and irresistible evolution of language usage itself, and by their own individual approach to articulating profound theories. Indeed, one might be forgiven for concluding, based on ample instances, that the bias has been toward complexity of expression, even at the discouraging expense of shackling comprehensibility of otherwise laudably big ideas.

All this said, throughout history there have been plenty of philosophers who have been able to sum up their ideas presentably, without gumming up basic concepts with lots of extraneously branching thoughts and without elaboration that requires meandering clauses piled upon meandering clauses. Here are three Ancients who have done such a nice job. Confucius is an example of clarity and brevity: ‘Perfect is the virtue which is according to the Mean’. Indeed, in his Analects, Confucius states the Golden Rule: ‘Never impose on others what you would not impose on yourself’. And Lao Tzu, the sage credited with the original concept of yin and yang, the two elements both simultaneously opposite and yet the same, saw the virtue of simplicity, with aphorisms or axioms such as:
‘Human beings are born soft and flexible; yet when they die are stiff and hard ... thus the hard and stiff are disciples of death, the soft and flexible are disciples of life.’ 
And there is Thales of Miletus, often credited with having kick-started Western philosophy and science, who explained that the single material substance underlying everything was water. Their meritorious styles are pithy, visual, clear, evocative, accessible, and to the point.

In going from thousands of pages of the original classic works of the philosophers to an approachable, accessible — and ‘lossless’ — distillation of some of the more iconic ideas, multiple potential audiences are served: there are those who may be satisfied in treating the curated shortcuts as endpoints, the latter offering enough philosophical grist to ponder further. And there are those who may be inspired by the shortcuts to venture deeper into the waters by either picking up more expansive descriptions of select topics or, even, seeking out some of the original, sometimes-rarefied works to laudably tackle head-on. Either way, I think that books like these serve a worthwhile purpose on philosophy’s behalf, parsing, illuminating, and bringing concreteness and contemporaneousness to some of history’s memorable, hallmark ideas about life and the world.



*Philosophy Hacks: Shortcuts to 100 ideas
By Robert Arp and Martin Cohen
(Cassell 2018)


Monday, 14 May 2018

REVIEW: The Character Gap (2018)

Who is more evil? The Stanley Milgram experiment

Beyond Good and Evil

 Review article by Thomas Scarborough
The Character Gap: How Good Are We?
By Christian B. Miller
Oxford University Press, 2018
$21.95
ISBN 978-0-19-026422-2


It is an ambitious project, to chart the character of the human race. Its scope is vast, and the results are yet patchy—like a crude map which has great holes in it—a few towns perhaps being drawn in greater detail, and a few favourite walks in the woods. “These are early days,” writes Christian Miller — and here we find both the greatest strength and the greatest weakness of the book. For one man to have drawn this map (with frequent acknowledgement of his colleagues) is a major achievement. It is no mean author who can combine such scope with such detail. To have had the courage to do so at all has to be admired.

There surely was no other way to do it—yet at the same time, the book is filled with admissions of the limitations, both of the author and of the research. Much of the evidence is “merely correlational”, writes Miller. “I can only register my own personal opinion.” “I wish I knew the answer.” “We need to gather a lot more data.” His greatest regret is that longitudinal studies are “almost non-existent” in psychology today—longitudinal studies being observational research in which data are gathered for the same subjects repeatedly over a period of time. Call it “diachronic” research, which may stretch over years or decades. It is not enough merely to have snapshots of how people behave.

The Character Gap (subtitle: “How Good Are We?”) represents the first major popularisation of research which, until now, has been largely scattered about in academic papers. While some of these papers have come to the public’s attention—the Milgram experiment that revealed an alarming preparedness of people to do evil under orders, for instance—the wider implications of such research have been far from clear. The Character Gap represents the cumulative findings to date—backed by millions of dollars of funding through the John Templeton Foundation, a philanthropic organisation with a spiritual inclination that funds inter-disciplinary research on human purpose and ultimate reality.

Most of us, says Miller, tend to think of ourselves, our friends, and our families as good people, but “such a picture of our character is badly mistaken.” Relentlessly, then, he pulls this picture apart—drawing on more than a hundred academic papers to prove it. We are capable of good; we are capable of evil. This we all know. Yet now we know that we generally choose neither, and little prevents us—by far the most of us—from behaving deplorably. And we do.

What, then, is “character”? How does one begin to explore it? These are difficult questions, to which there are no definite answers. In fact, is it possible at all to quantify “how good” we are? Miller seems guided by much horse sense—yet it would be difficult to imagine how he could do otherwise, in a nascent field of psychology. He decides, “There is broad agreement today about most of the virtues and vices,” and with that he rests content. This is not to say that the book is simplistic, or ill-considered. Miller is aware of his lack of firm ground—but to accept this, it seems, is a necessary evil for the purpose of getting on.

“What does our character actually look like today?” In answering this question, Miller selects just four dominant character traits—and their opposites—to develop his theme. These represent the core of the book, in four chapters:
• Helping
• Harming
• Lying, and
• Cheating
There is more to it than this, however. One cannot consider such character traits in isolation. For example, an isolated act of helping may not reveal one’s motives—or whether one’s helping is consistent, constant, or appropriate—not to speak of what one does in other areas of one’s character. One would view a person’s helping very differently if, for instance, it were done for selfish reasons, or to alleviate guilt, even to deceive—or if one helped a few and harmed many—as we are told most of us would do, given half the chance. Here follows a selection of just one statistic from each of the above chapters:
• Where participants heard a woman screaming in pain in an adjacent room, 93% ignored her.
• Where subjects were ordered to shock a man (as best they knew) to death, 72% obeyed (the Milgram experiment).
• Where respondents were asked to report on their behaviour over seven days, 91% stated that they lied.
• Where students (without supervision) were asked to put down their pens at the end of an exam, 71% cheated—and not just “a little bit”.
These few examples are accompanied by a great weight of evidence, all of which shows us much the same thing. Not only that. Miller gets personal. This is not merely about statistics, he writes, or the distant subjects of experiments. It is about the person working across from you in the office, or driving your taxi, teaching your class, or “sleeping in your bed”. He writes, “We are seriously mistaken about many of the people in our lives.”

However, the book is not all bad news. There is at the same time a different side to the story. Take the first example above, of the woman who screams in pain. While 93% ignored her screams where they were in the company of an apathetic confederate, given the chance to respond alone, 70% helped her. Take the fourth example, of students who cheated on an exam. While 71% cheated in a typical classroom situation, 93% were perfectly honest when seated in front of a mirror. This introduces another major aspect of the book—the factors which may influence our virtues and vices.

Here, too, we may throw away past assumptions. Above all, we may discard the notion that there typically are sound reasons—at least explicable ones—for what we do, whether good or evil. While in many cases there are, our behaviour may often change wantonly, capriciously, for the flimsiest of reasons—if we should know the reasons at all. For instance, the smell of cookies and cinnamon rolls made a group of men 105% more likely to do a good turn—and women even more so. Or having the choice to opt out, rather than in, increased participation in a retirement plan by 51%. We are greatly affected by “trivial influences”, writes Miller. These include temperature, noise, even ions in the air.

All in all, he draws out seven “lessons” from the research. Here are, in my view, the most significant three—and given our assumptions, it might be expedient to draw attention to the word “most”:
• There are many situations in life where most people will demonstrate the finest forms of moral behaviour.
• There are many other situations where most people will exhibit the worst forms of ethical behaviour.
• Our changing moral behaviour is extremely sensitive to features of our environment, and often we do not even realise what those features are.
We have “an impressive capacity” for good, writes Miller, and “a frightening capacity” for evil. Freely quoting the Christian Scriptures, he observes, “The picture of our character outlined by the New Testament seems to fit quite comfortably with the research findings … There is no one righteous, not even one. ... When I want to do good, evil is right there with me.”

What, then, to do? How should we respond? But first, are these the kind of answers an empirical study should be expected to deliver? Would we not be crossing the line, from empirical research to prescriptive morality or self-help? Miller decides to cross that line—as best he can, with observational assistance—and to this he dedicates the last three chapters of the book. There is a raft of promising strategies, he writes, for improving our behaviour. Above all, he selects four as holding “more promise” than the rest:
• getting the word out as to who we really are
• selecting moral role models
• limiting the situations in which we trust ourselves, and
• turning to divine assistance (that is, religion).
One particular strategy seems to illustrate as much about the author as it does about the strategy. The research shows that, if we believe we are virtuous, we shall be more virtuous—but only as long as we believe it. For instance, when one experiment put a “charitable label” on people, they were 71% more likely to donate to charity. We could, therefore, lead people to believe that they are virtuous. Yet in order to do so, we would have to deceive. And what if we were found out? The book is filled with such nuances which, while they do not always provide answers, give interesting insight and useful pointers for future research.

There were a few things I missed myself in the book, which could have been profitably explored. Above all, there was little if any consideration of government policy, or the influence of legislation on the virtues and vices—yet Miller explored many issues which would intersect with the same: penalties, rewards, conditioning, nudging, and freedom of religion, among other things. Given our new understanding of human character, how may government influence behaviour which is beneficial to all? On the other hand, what is it that happens when disorder and vice are unleashed in society? And how would a government itself be virtuous—if this should ever be possible?

I missed something, too, in Miller’s treatment of religion. While he readily confesses that he may have Western biases—his research being largely confined to North America and Europe—there was a big theological gap which would seem to have a material bearing on the book. One believes in a cosmic God; one believes in a personal God. But there is, in the West, a weakness of belief in an interventionist God—much weaker than one finds in the (Abrahamic) Scriptures. It has been described as “the flaw of the excluded middle”. If God should intervene in our reality, what bearing would this have on our moral actions? If one sees that outcomes are determined not so much by ourselves as by God, this should have a profound effect on our behaviour.

Lastly, Miller gives considerable thought to whether the research might lead to what I shall call a strange new legalism. Our old-style legalism is simple: one starts with a moral code—say, the Ten Commandments—then seeks to obey them. But in view of the recent research, it seems imperative that we should additionally understand the influences on our behaviour. These influences, however, are legion. This clearly troubles Miller, who devotes many pages to the debate. If we try to understand what motivates us, he writes, for the purpose of behaving better, “I foresee major problems with information overload.” “What if we were always monitoring ourselves and our situations to make sure we do not fall prey to the negative influence of our unconscious desires? ... I do not have an answer to these questions at the current time.”

There is, in my view, a simpler way. We know today that our visceral (“gut”) feelings are generated by novelty, discrepancy, and interruption—in each case, by a world which is not so arranged as we had anticipated. Even a dog, when faced with food it does not expect to see in its bowl, is visibly affected. The twentieth-century British psychologist Richard Gregory describes it as encountering the “unexpected”—or, as the American philosopher Willard Quine has it, “the expected which fails to happen”. In short, the arrangement of the world in our minds, when we hold this up against the world, has everything to do with emotion and motivation.

Now consider that many people’s arrangement of the world in their minds is parochial, self-interested, or short-sighted. Miller shows us that a narrow focus on “techniques and devices” morally diminishes us, while “perspective-taking” profoundly influences our behaviour for the good. In fact, in each of the four areas of character which he surveys, our behaviour worsens where our perspective narrows. With this in mind, it is a partial or fragmented view of our world which causes us to act in strange ways. The remedy then is clear: rather than “monitoring ourselves constantly”, we may both develop and teach a more holistic view of the world. Virtue would, we suppose, be a by-product of the same.

The Character Gap marks a great shift in thinking—perhaps on a par with the discovery, more than a century ago, that a large part of our thinking was unconscious or unawares. To put it too simply, this implied not only that our very own thoughts might lie beyond our control, but that we might not be fully in command of our moral judgement. Contradicting everything we thought about ourselves, it came as a great shock at the time, and had a major influence on the century which followed.

It is impossible to say whether the research presented in this book will have the same kind of impact on coming generations as the issue of the unconscious did—but if it does, it would be no surprise to me. Miller himself commends the work as “extremely important” and actually, by the end of his account, I am persuaded that it is.

Once the genie is out of the bottle, it cannot be put back. While Miller himself seems optimistic as to the good that this research will do, and fairly confident about the direction things will take, I think there is no telling. Not least, given the fact that our moral behaviour deteriorates where we fail to think of ourselves as virtuous, it seems possible to me that the book might have the opposite effect.




The Philosopher's verdict: Glimmers of hope in the search for goodness.

Thursday, 15 March 2018

On Humour (2018)

From The Philosopher, Volume CVI No. 1 Spring 2018


René Magritte, La Clef des Songes, 1935

So the Essence of Humour is Self-Deception?
Who Are You Kidding?

By Christopher Gontar


Every theory of humour must in some way acknowledge what actually appears in humour, but one sense of such theory should also be concerned with something only inferred. That deduced thing we may call humour’s essence, and though never perceived, it affects us once we see its associated signs. In this essay, I will seek to show that the theory accounting for this object is the theory of what humour is. But as to any theory whose content is all appearance (we will call this ‘appearance-focused’), it is a set of various ideas that are perceived in humour or used in its creation. That practical aspect could be called ‘theory’, but it cannot be a theory of humour’s essence because it omits this very aspect. Instead, it consists of appearances, disregarding their meaning and implication. Such a theory is not informatively descriptive, but is at best prescriptive. Of course, a prescriptive theory of humour describes this or that, but what makes it prescriptive is that humour creation is its sole possible function.

The so-called incongruity theory presents only appearances, and only in one broad area of humour are these useful for composition. Consequently, though the incongruity theory has always been universal in intention, it is not a theory of the essence, regardless of how well it unites all classes. The first, and now forgotten, version of a theory bearing the name incongruity was much more uniform than the one known today. The earlier model, that of James Beattie in On Laughter and Ludicrous Composition (1778), treated every kind of humour not as incongruity between one term and another, but as this contrast: that things fit within a scheme while also not fitting. Presently, this context-based approach—which describes a part of humour’s appearance but misses the point of all of it—is mistakenly considered a full explanation of linguistic humour and called appropriate incongruity (see, for example, Elliott Oring’s 1992 book, Jokes and Their Relations). Yet exactly the same theory was applied by Beattie to both linguistic and non-linguistic humour, so that, formally, his view looked more universal than incongruity theory as usually known.

Another contemporary author, Tomáš Kulka, proposes a universal theory of incongruity resolution (2007), but apart from Beattie himself there is no known universal ‘appropriate incongruity’ theory. ‘Incongruity resolution’ can obviously be seen as a variant of appropriate incongruity, and yet both of these distort the experience of humour by confining it to the cognition of what appears. The problem with this kind of position, moreover, is not that it is about the stimulus and neglects the response—for the response can be known only after one has described what is most important about the stimulus. But to theorise about what appears in humour is not to describe either the full stimulus or the response, so that these theories become strictly prescriptive in use. Now, with these appearance-focused methods can be classed ambivalence theories, such as those of Hugh LaFollette and Niall Shanks, of Thomas Veatch, and of Peter McGraw—the last known as benign violation. Indeed, an ambivalence theory is hardly more than a response-side extension of the old theory of fitting and not fitting. Though it has prescriptive value, benign violation fails as a descriptive theory because it is very often a theory about the offence of humour rather than humour itself. There is, however, one sense in which benign violation itself is humorous, namely that in any mean or mixture of violence with non-violence, self-deception, the essence of humour, is always evoked. Self-deception is there implied by the pretension to violence, or because acts which cause revulsion and embarrassment always tend to signify immodesty, a sense of self-deception. The linguistic transgression in puns and the like cannot be constitutive of the humour, since many kinds of linguistic error are not humorous. Schopenhauer’s remarks in The World as Will and Representation, finally, conform with the theory of James Beattie, but are applied to linguistic examples, rather than universally.


The term incongruity is particularly useful only as a name for juxtaposed things that may be called ludicrous, and of no use at all, either creative or explanatory, in linguistic humour itself.


Though his was a theory of laughter, Beattie distinguished this as the laughter of humour, and he, not Kant or others, to this day remains the prime exemplar of an incongruity theory of humour. Non-linguistic incongruity is now commonly placed under the former, simpler of those two senses, incongruity between terms rather than the fact that two terms relate differently to a context or frame. The former is at least a much better way to characterise the appearance of non-linguistic humour, and will here be assumed. My first effort will be to examine the incongruity theory where it actually applies, then explain why it has no other notable application and is not universal. It has no value at all. It is not a significant theory of anything, but an egregious misconception that has hindered theory and criticism for centuries and ought to cease to be studied immediately. No current authoritative text credits the alternative so-called relief, or release theory, or the superiority theory, as describing humour in itself, as they are based upon extraneous aspects.

The term incongruity is useful only as a name for juxtaposed things that may be called ludicrous, and of no use at all, either creative or explanatory, in linguistic humour itself. First, consider where it actually applies. It describes the outward appearance of a few closely related examples of humour that occur frequently. As the ludicrous, incongruity is the juxtaposition of the serious with the trivial, or the unfashionable or ill-made thing, typically alone. As the ridiculous, it is, first, the interaction, not mere juxtaposition, of the mind with bodily appetites, and, second, any falling or failure. But since the ridiculous is so elementary, the word ‘incongruity’ there serves no creative role. In all these forms, however, the first weakness of incongruity is that its sense of ‘violation’ of patterns does not convey relations of better and worse, and it certainly does not explain the significance of these. Incongruity treats such juxtapositions as inert, leaving us wondering why they have any effect. The last attempt to save the incongruity theory was to assert that humour is either play or pleasure in relation to incongruity, and both attempts clearly fail.

Now the unfashionable in humour need not be explicitly contrasted with its better counterpart. Dress is humorous whenever it can be seen as though it intends and fails to meet the standards of the viewer, or as work-clothes. The reason in the latter case is that a class difference is evoked, implying self-deception in an interloper. But the unfashionable would only be explicitly juxtaposed if actually set next to something better, surely possible.

Two clashing garments could be used to mean either a failed attempt at a combo, or what is hardly wearable ‘presuming’ that it can ‘hang’ with what is passable or very good. Already we find, prior to fully explaining, that self-deception is the essence of humour, and incongruity only humour’s appearance. The trivial next to the serious makes one of those halves the self-deceived, as we will clarify. As to physical appetite and the mind, the former overpowers the latter in all people, and renders the human self a constant hypocrite. This theme becomes more intense in that a more intellectual person is the more troubled by sensuality, as is the comic hero, Ignatius J. Reilly, in A Confederacy of Dunces (1980). Such a character is particularly driven to deceive themselves. Few things are as instinctively seen, and reproached, as hypocrisy. Yet how it is humorous, by way of self-deception, is only being explained now. The evocation of self-deception by falling or failure is self-evident. As to personal ugliness, the unkind fact is that its humour consists in the same sense as the unfashionable.

A practical joke may surprise by use of incongruous things. But this is not mainly incongruity by violating our expectations in the manner of the unfashionable, though it could be that. The idea of a practical joke is that the victim ultimately represents a self-deceived person. It is true that horrible or fantastic things fit the incongruity theory, and these override humorous feelings and introduce others. But this is because where something could be merely pretentious or offensive it may be more, and what rules this out of the class of humour is a negative condition. Now if tragedy might be invoked by a pictorial juxtaposition between humanity and what is beyond its reach (of course the simplicity of a picture makes it unserious), the kinds of things juxtaposed are not those of humour and comedy. Incongruity, then, cannot be challenged as a description of how a lot of humour looks. But it is no more than that.

Incongruity of the above kinds, however, has no role at all in linguistic humour in itself, but only where it appears in combination with linguistic humour. A theory of ‘appropriate incongruity’, as introduced by Elliott Oring, for example, describes the process of experiencing linguistic humour, but such an explanation is entirely prescriptive in meaning. As mentioned above, this kind of theory closely parallels Beattie’s universal theory, because the relation of partial fit within an assemblage is highly comparable to appropriate incongruity. Here is a typical example of what the theory of appropriate incongruity would explain, or presume to create. Woody Allen said in a monologue:
‘Most of the time I don’t have very much fun. The rest of the time I don’t have any fun at all.’
This is not only linguistic (it is also bathetic), though it is because of language that the punchline is puzzling. But this puzzling quality, where it does obtain, does not have at all the same role as ‘incongruity’ in things or people. Furthermore, what if it did? But then it would be of no creative use since the joke-writer could not look for it. Even if he actually hunted for ambiguities, he would not be on the lookout for puzzlement. And though he knows humour when it comes to him, he does not search for it. Finally, we could only unite the linguistic and thing-based incongruity, if we saw them both as did James Beattie, as partial incongruity. But there is no reason to do that.

As the ‘appropriate incongruity theory’ suggests, the punchline does refer us back to the previous line, which by its ambiguity gives a place to the punchline. In other words, the idiomatic meaning of ‘not having very much fun’ is indeed having no fun, whereas ‘some fun’ is a robotic, literal interpretation. The linguistic part in itself has some humour, not because the absurd meaning is ridiculed, but just because ambiguity is not only an instrument of deception, but a punisher of presumption, a kind of self-deception.

That is important, as it is the real reason puns and the like are inherently humorous. The pun is intrinsically humorous and therefore must allude to something, but that cannot be itself, or jokes. For that possibility takes us in a circle. Linguistic ambiguity must, necessarily, get its intrinsic humour from imagined situations. In the first place, to make linguistic humour is not generally to ridicule language. George Carlin, while not to everyone’s taste, had a monologue about clichés in which he included a few puns. He succeeded in mashing these together, although puns and clichés are completely different in humorous force. “‘Legally drunk.’ Well, if it’s legal, what’s the f***in’ problem!? ‘Hey! Leave my friend alone, officer, he’s legally drunk!’”

But the self-deception associated with linguistic ambiguity needs to be unpacked: it always signifies a context, even if not present, of social exclusion, and therefore self-deception as the presumption to fit in. For instance, in Vietnamese, “cảm ơn”, “thank you”, may be interpreted as “shut up!” if the pitch falls where it should rise. The divide, in such cases, of in-group from out-group might result in anger. But the humour that is necessarily implied here, even if drowned out by negative emotion, derives from this particular social relation. That condition, always implied by ambiguity, does not obscure the meaning of the pun. Rather, it definitively reveals what ambiguity contributes to any humour. Put another way, the sense in which ambiguity signifies self-deception is like the wet floor and gaping hole in the street signifying a fall, the upturned rake a blow to the head, or a beautiful woman—a rebuff. Though there are jokes where this may be more convincing than in others, the theory has a strong case because there is no plausible alternative.

There is another vast category of linguistic humour, namely irony, but this too derives its humour from association with self-deception. All humorous irony not only reserves self-deception as its implicit target, but irony’s essential obliqueness, even if not sarcastic, is humorous only because it points to the lack of awareness of the self-deceived.

The solution to the appearance-focused theory of humour, then, and its lack of insight, is this. The stimulating power of humour extends beyond what appears immediately to the senses. This is not the tautology that we cannot see what we do not see, or that everything has an inside and outside. Rather, humour exerts its effect because it always presents either the image of a person in which we infer self-deception, or else other things or ideas that lead to that association, of a person being self-deceived.

Though this point might elicit further discussion, there is a simple reason why humour’s essence is self-deception rather than passive deception. It is not only because deception of others is but one area of humour; rather, mere gullibility or stupidity are capable of being blameless, and derive their humour from association with self-deception, whereas the converse is never so. Now self-deception could be blameless either out of folly or out of justifiable discomfort. We should, then, define self-deception as lying to oneself to escape unpleasant truth, but where it is blameless for any reason, it cannot be the essence of humour. Phenomenally, however, self-deception consists in a restriction of perception, which is why we associate it so strongly with intoxication. In humour, we infer and imagine self-deception as restricted perception. Self-deception thus belongs to a peculiar class: in nature, only mental states, and the infinitesimally small or the excessively large, cannot in principle be seen. Essences are normally understood to have appearance, but self-deception cannot appear at all. This premise, that the essence of humour lacks appearance, has been the likely cause of most uncertainty about it. One can believe self-deception to be present in others only by a sign or combination of signs associated with it.

Now while the stimulus in humour outwardly signifies self-deception, the response to humour copies this inferred object, mentally. But we take a moment to think of self-deception when we see something humorous. It is not as though we see humour full stop, and are immediately amused. And there are, moreover, three ways this can occur, with respect to what is real and imaginary. In humour that depicts only words or things, self-deception must be imaginary, whereas it could be real when inferred in a person. But it is plainly often insinuated with exaggeration even in an actual person, as is so typical of mockery.

The response to humour does not simply register what outwardly signifies self-deception, but mentally copies the self-deceived mind; the accompanying theory of laughter, the external response, could not be fitted into this essay. There can also be a sympathy in the response, but that is separate. The response to humour copies self-deception, once that has been derived from things we actually perceive. For this theory there is overwhelming evidence, and it carries no confirmation bias, thus meeting the demand of falsifiability. This is what the response to humour feels like if one considers it closely; all humour represents or alludes to self-deception, and no other explanation is as solid.

In humour, what we will call ridiculous are persons who represent self-deception more directly. But ludicrous things signify it. Linguistic ambiguity, for example, signifies its own power to dupe, even to alienate someone. In one kind of linguistic humour, there is accompanying reference to physical desires and thus to self-deception. The other sort of ludicrousness consists of things. For instance, if a dagger is used as a kitchen knife, the two basic humorous meanings yield much the same result, ultimately. There are two ways of reading this, as:
(1) war in a kitchen, or 
(2) as a kitchen in war.
In either case, violence is used as a sign of self-deception. In the first case, the chef cooking with a weapon gives the impression of an unstable person or a failed warrior, or by appropriating war he unmans it and mocks it. But in the second case, war is shown to be human, and thus again unmanned. We could call ludicrous things ridiculous, but as inanimate things it is best to distinguish them as ludicrous, and consistently.

Now self-deception causes eccentricity, as well as suffering. Clark Griswold’s erotic fantasy on the vacation highway, and Ron Burgundy’s courtship of Miss Corningstone in Anchorman, a 2004 American comedy film and a tongue-in-cheek take on the culture of the 1970s, make us laugh and cringe. While these acts of boldness signify self-deception, their shame gives us empathetic pain (in this context the benign violation theory is put to rest). We should expect that seeing self-deception gives us empathetic pain, indeed whether it is tripped up, or even if it strolls along in deluded bliss. But we do not naturally expect that an empathy is the essential response. But self-deception, which we see as the cause of misfortunes and eccentricity (while the reverse is also true), overtakes us in an imitative form.

They are correct who note that humour relaxes, but they fail to add that the relevant tension comes from fear of judgement, and actual self-deception is the other way of dealing with tension, a potentially ridiculous way. Humour consumption, then, taking in the toxin of self-deception, is almost a homeopathy, because it seems to kill the real self-deception in us. But in truth it does not do that. And neither are our modesty and sociability highly dependent on the threat of ridicule. Rather, humour appreciation, by taking in the small dose of the toxin, signals and celebrates self-honesty, which finally explains humour in terms of socio-biology.

In other words, humour is to actual, full self-deception what a single glass of wine is to, say, a whole bottle. Humour is, then, deeply schizoid, since in reacting to it we merge with the self-deception we regard as shameful. In intoxication by substances, a certain base effect is the same as humour, restriction of external awareness, along with other reactions such as visions or affect. But in humour the source of perception-dimming is less obvious, since it is a mind with self-imposed unawareness. Actual humour presents this image more indirectly as the ludicrous, or more concretely as the ridiculous. There are three steps, a stimulus, an inferred mind, an empathy. With the ludicrous, the second is reached more indirectly. In the ridiculous, which is strictly speaking always a person, these three steps are unmistakable.

About the Author: 
Christopher teaches philosophy as an adjunct professor in Chicago and performs jazz piano. 

Address for correspondence:
Email: <cdgontar@gmail.com>


Thursday, 8 March 2018

Arendt on Public 'Truth' and Virtue (2018)

From The Philosopher, Volume CVI No. 1 Spring 2018


Le Serment du Jeu de Paume à Versailles (The Tennis Court Oath at Versailles)

HANNAH ARENDT
and the link between Public ‘Truth’ and Virtue

By Will Denayer



Hannah Arendt, the German-born, but America-based political theorist, is not well-known for her work on truth and she had no interest in epistemology. She once approvingly noted (misleadingly, as it turns out), that in Kant’s Critique of Judgment ‘the word “truth” does not occur’. Nonetheless, in her writings, she makes a fascinating and fundamental contribution to what she called 'public truth' and I think that her attempt to present and reinstate the ethical dimension to public policy is extremely valuable and relevant even today.

To give an example of the sort of truths we are dealing with: is there a ‘true’ answer to the question of whether or not it should be acceptable to see people sleeping on the streets? Should the answer depend on my view or on yours, on that of the liberals or the conservatives, on the view of the majority, on the analyses of economists, on the competence of experts in ethics and morals, or on religion? Or is there another way to solve this conundrum?

In The Human Condition (published in 1958), which Arendt calls her study of the ‘active life’, she distinguishes between three distinct human activities: labour, work and action. While labour corresponds to the metabolic process with nature, taking care of the bare necessities of life, work creates a distinctively human world because it leaves tangible results behind. It is the activity of the craftsman, the artist and the scholar. Arendt’s vigorous distinctions have led to the widespread conviction (if not consensus) that she divorces politics from all strategic interactions and all instrumentality. On this reading, action cannot have any regulative function, because this would imply the development of relations of means and ends – relations which are characteristic of work, but foreign to action. Unsurprisingly, this misinterpretation gave rise to charges of irrelevance: a fairy-tale of ‘once upon a time in Greece’ that leads, by means of ingenious etymological explanation, to a bizarre concept of a self-referential meta-discourse. Arendt purifies politics of all ‘vulgar’ or more ‘prosaic’ social and economic issues, her critics affirm. The result for them is an empty concept, which they then ridicule.

The reading of a self-contained politics points to a problem that is easy to formulate and impossible to solve. Even if politics corresponded to some bizarre, rite-of-spring-like self-referential ritual (and it does not), it would still have to have some content: what are these great speeches of the orators supposed to be about? This discussion is essential for what I am trying to explain. Leaving the self-referential thesis behind, we can see that the articulation of interests is coupled to the formulation of principles. This, in turn, leads to the questions of where these principles originate and how they enter the public world.

Arendt explains that political action comprises the freedom to bring the unpredictable into the world. The interactions in the ‘web of human relations’ can never be fully anticipated. Action therefore cannot involve clearly describable causalities. But this does not mean that action is the realm of human caprice. Action is bound to the articulation of a principle (Aristotle’s ‘first cause’ of something that appears). Principles inspire action, but they are not motives. They are much too general to prescribe specific goals, although any concrete action can be judged according to the principle that inspired it. If this is not the case, there simply is no action. Without this in-between and the disclosure of the actor, ‘… action loses its specific characteristics and becomes …achievement. It is then indeed no less a means to an end than making is a means to produce …and this achievement cannot disclose the “who”, the unique and distinct identity of the agent’, Arendt writes in The Human Condition.

In the discussion of the ‘web of human relations’, Arendt explicitly construes what her critics (and most of her admirers) deny, namely a straightforward and convincing relation between action and the development of relations of means and ends. She also writes in The Human Condition that:
‘Action and speech …retain their agent-revealing capacity even if their content is exclusively “objective”, concerned with the matters of the world of things in which men move …and out of which arise their specific, objective, worldly interests. These interests constitute, in the word’s most literal significance, something which inter-est, which lies between people and therefore binds them together. Most action and speech is concerned with this in-between …so that most words are about some worldly objective reality in addition to being a disclosure of the acting and speaking agent.’ (Emphasis added).
Action, like work, creates tangible outcomes, but, unlike work, it brings into the world a ‘second, subjective in-between …for all its intangibility, this …is no less real than the world of things we visibly have in common. We call this reality “the web of human relations”’. This ‘web’ is the public world, with its unique characteristics: power generation, meaning and happiness, the unfolding of human plurality, and so on. The remaining questions are then where the principles originate, how they find their way into the ‘web’ of human affairs, and how they become validated, i.e. ‘true’.

Arendt set out to explain this in The Life of the Mind, her final work published in 1978. The book was meant to consist of three parts: thinking, willing and judging, but she died before she could write the third part. However, another, slightly later, edited work, entitled Lectures on Kant's Political Philosophy, provides a good indication of what she had in mind for the part on judging.

Arendt first discussed thinking, however. Thinking deals with abstractions and generalities, such as justice, fairness and goodness. The faculty of thinking does not stand in a factual relation with reality – abstractions are not phenomenal. Judging, on the other hand, does deal with particulars. It is inherent to judging that we search for approval from others for our judgements. While thinking is solitary – the Socratic inner dialogue with myself – judging is social and can become paradigmatic for the public sphere.

Arendt explained how in the Lectures on Kant. Since the faculty of judgement is autonomous, the particular that has to be judged has to be compared with something that is also a particular. However, the particular with which we compare has to somehow contain a generalisation, otherwise judging is impossible. She located the particular that contains in itself a generality in the exemplary example of the representative figures. Thus: ‘Achilles is an exemplary example of courage’. Judgement, then, has exemplary validity to the extent that the example is rightly chosen.

But what assures that the example will be rightly chosen? The figure, or deed, that is to be recognised by a community of peers really has to be courageous. Arendt explained that, in the process of solitary contemplation, we anticipate the possible objections of others. This representative thinking is not empathy, as if I try to feel like someone else. The imaginative process has to remain disinterested, so as to assure the relative impartiality of the final conclusion. If the wooing of the consent of others in the public sphere proves successful, the example becomes a tertium comparationis, the common element that two things share: ‘A is courageous, but not as courageous as Achilles’. Arendt emphasised that this mediation between solitary thinking and social judging, between abstract and particular, constitutes the only way in which an ‘ethical principle’ can become binding without corrupting action. As she wrote in Between Past and Future (1961):
‘… this teaching by example is …the only form of “persuasion” that philosophical truth is capable of without …distortion; by the same token, philosophical truth can become “practical” and inspire action without violating the rules of the political realm only when it manages to become manifest in the guise of an example… this is the only chance for an ethical principle to be verified as well as validated.’
Or consider her view presented in On Revolution (1963):
‘Only to the extent that we understand by law a commandment to which men owe obedience regardless of their consent and mutual agreements, does the law require a transcendent source of authority for its validity, that is, an origin which must be beyond human control.’
It is blatantly clear that none of this has much to do with how politics works today. Arendt’s prudent public discourse simply does not exist. In politics, the strongest lobbies win, and usually any method of achieving their goals is good enough. However, it is equally clear that ‘the life of the mind’ is essential wherever people try to deal with certain aspects of life in an authentically political way, that is, a way in which the public sphere has the function of generating virtues, consideration, shared definitions and regulations.

The insights drawn from The Life of the Mind and the Lectures on Kant's Political Philosophy are consistent with my reading and they are inconsistent with the self-referential thesis. Once action is interpreted as an activity without relevancy to anything outside itself, the essential connection between politics and the activities of the faculties of the mind becomes invisible, as the link between political interests and principles disappears. Action is free and innovative, but its inherent freedom and unpredictability play within the ‘confines’ of a sensus communis that is created through political action itself in the web of human relations. Only in this way, Arendt asserted, is it possible for the members of a political community to remain free as well as equal.

The Life of the Mind is an exploration of the mental operations that are necessary requirements for acting politically in the world. Instead of seeing the Lectures on Kant's Political Philosophy and The Life of the Mind as divorced and unrelated to Arendt’s earlier concerns, we should see these works as completing her investigations into the nature of action. It is therefore my contention that this article makes a contribution to understanding why Arendt asserted that ‘the principles by which we act and the criteria by which we judge and conduct our lives depend ultimately on the life of the mind’.

To close, let me attempt to make this a bit more concrete. Imagine that we no longer allowed self-interest to reign over society: how then could we manage it instead? Arendt would surely answer that everything which can give rise to a principled discussion is fit to enter the public realm. It can be, for example, because the dispute has exemplary relevancy for a community, for part of one, or for all of us – for example, when justice, basic human rights, elementary welfare or future generations are at stake. But I do not think that Arendt excelled in getting her point across. Asked to clarify, she commented that:
‘… everything which can really be figured out in the sphere Engels called 'the administration of things' are social things… That they should …be subject to debate seems to me phoney and a plague. But (for example) the question of whether …adequate housing means integration (in city planning) or not is certainly a political question. With every one of these questions there is a double face… There shouldn't be any debate about the question that everybody should have decent housing.’ 
(The Recovery of the Public World, 1979).
To which we might say, ‘certainly’, but I do not think that it is so easy to distinguish between the ‘administration of things’ and principled discussion. The position that everybody should have decent housing is not generally accepted – in the United Kingdom, the Conservative Party unashamedly voted against it. To many people, this is not a matter of principle, although it should be. How can we make it one?

Arendt added some further distinctions in the obviously hastily written article Public Rights and Private Interests. She argues that our public interests (as citizens) differ from our private interests (as ‘selves’). Public interests do not derive in a direct way from private interests; they are not ‘collective private interests’, they do not constitute the highest common denominator of private interests, and they are not enlightened private interests. Public interests differ from private interests in their very nature. She concludes the passage by citing the slogan ‘Near is my shirt, but nearer is my skin’ in order to elucidate the self’s inherently private mentality:
‘That (“near is my shirt” …) may not be particularly reasonable, but it is quite realistic; it is the not very noble but adequate response to the time discrepancy between men's private lives and the altogether different life expectancy of the public world. To expect people, who have not the slightest notion of what the res publica …is, to behave non-violently and argue rationally in matters of interest is neither realistic nor reasonable.’
As is the case with Adam Smith, Arendt has been claimed by forces she had no affinity with. However, Arendt was very consistent. The following point is almost never made in the literature, although it is obvious. The upshot of everything she wrote on rights and interests leads to the egalitarian position that freedom from want is a necessary condition for rational political debate. Otherwise, instinctive egoism will always prevail. Public Rights and Private Interests ends with a consideration that leaves little room for doubt about her position:
‘To ask sacrifices of individuals who are not yet citizens is to ask them for an idealism which they …cannot have in view of the urgency of the life process. Before we ask the poor for idealism, we must first make them citizens, and this involves so changing the circumstances of their private lives that they become capable of enjoying the “public”.’
In fact, Hannah Arendt, despite often being read as a conservative thinker, went much further. In On Revolution (1963) she went as far as to advocate for the creation of council-states which 'would permit every member of the modern egalitarian society to become a participator in public affairs'. This would mean ‘a new form of government rather than mere reform or mere supplement to the existing institutions’.

The relation between the loss of a private place and the rise of modernity as an era bereft of genuine action is a major theme in Arendt’s work. In The Human Condition, she wrote that ‘the eclipse of a common public world, so crucial to the formation of the lonely mass man and so dangerous in the formation of the worldless mentality of modern ideological mass movements, began with the much more tangible loss of a privately owned share in the world’. This, I think, is a thought to keep in mind because we now again live in an era of dispossession. Assuredly, as misery grows, so too does the political influence of the radical right. 



About the author

Dr. Will Denayer is a political theorist and macroeconomist. He is head of research of Flassbeck-economics, a German-based think tank.

Address for correspondence: willdenayer@yahoo.ie


Thursday, 1 March 2018

Linguistic Culturism (2018)

From The Philosopher, Volume CVI No. 1 Spring 2018



LINGUISTIC CULTURISM
An argument for Common World-views based on a Shared Linguistic Heritage

By Lina Ufimtseva



Many have attempted to explain the relationship between language and thought, or even philosophised on the topic of the building blocks of language (morphemes) and the conveying of meaning. Here, I will investigate the specific question of whether meaning is created from within or without us.

If we were to take words out of a language and thereby simplify it (think, for example, of the ‘Newspeak’ of George Orwell’s dystopian novel 1984), our ability to express thoughts would surely diminish, and thus the breadth of our world-view would also start to perish. This is how leaders – dictators – have often used language as propaganda to control masses of people.

It would make sense to say that, yes, language does create thought, and that meaning is created linguistically, based on external factors. After all, if you don’t have the words, you can’t describe it. Yet when we come across a word that describes a very specific situation, there is a click of recognition, an a-ha! moment that is universal, wherever you may hail from. For example, many have experienced what the Scots call a tartle – the panic-like hesitation just before you have to introduce someone whose name you can't quite remember. Would this not indicate that meaning is actually created cognitively, from within, and that we simply assign references to help us construct meaning? If we don’t have the words, we can’t describe something, but we can still experience it.

Then again, there's the idea that one word can signify different concepts. Here lies the middle ground between linguistic and cognitive references. Let us take the concept of time. Compare how the English in Britain speak about time – how precise and punctual they are about it – with the casualness about the same things in South African English. Distinctions embodied in terms like ‘now now’ and ‘just now’ seem nonsensical to the typical Westerner, yet they are much more than simply a label for ‘coming another twenty, perhaps twenty-five, minutes later’. Such dialects embody an entirely different mind-set, and thus also a different world-view.




The Germans have a special word, Weltansicht, which refers to ‘the general attitude towards life and reality that an individual or character demonstrates’. The Weltansicht is so closely allied with the words which we speak that it is difficult to escape their pull. Thought is not objective, and some chance difference in which language or linguistic dialect you were raised in can indeed shape the way you think and perceive things. For composing the same thought in different languages may yet yield different meanings.

And so I propose that we understand language as being ultimately not only culture bound but as encompassing an entire cultural identity. This is why even the same language spoken in different parts of the world evokes modified meanings.

However, let us take a step back to look at how language and perception have been classified in the past. Theories regarding language and thought range between two logical extremes. Followers of the American socio-linguist Benjamin Lee Whorf hold that each language both represents and leads to its own modestly different Weltansicht, or world-view. This is linguistic relativism.

At the other extreme is linguistic universalism, which holds that every language is a manifestation of the same human cognitive system, and therefore obeys the same principles. But, as argued above, language is determined by the current culture it is spoken in. I specify ‘current’, because culture is certainly no static entity. If anything, it is evolving – or regressing, depending on how cynical one is! Language is thus culture-bound and is part of a cultural identity, and so the nuances of translated words are not universal. If world-view is culture-bound – ethno-linguistically bound – and not language-bound, then we could speak of linguistic culturism.

In terms of the first position, that of the linguistic relativists or Whorfians, some languages may be virtually untranslatable, while in terms of the second, true translation can be done, even if it cannot be done word for word. This may not satisfy relativists, but the point is clear. An apparent problem with the hypothesis of linguistic relativism is that concepts are very much translatable. They just require more words to describe them. Think of the various forms of love in ancient Greek: eros, philia, storge, pragma, philautia, agape. Non-Greek speakers understand what they mean, even if it takes a full sentence to explain a word – for instance, that agape is selfless, unconditional love, comparable to the love described by most religions. If linguistic relativism were entirely correct, a concept in one language could not be understood in another. And so, only in certain cases of poetry, humour and other creative forms of language are ideas truly ‘lost in translation’.

Yet the problems with linguistic universalism seem equally obvious. There are considerable differences between languages, even between different dialects of the same language, and the language which I speak is deeply affected by the linguistic society in which I live and the culture it has developed in. Wilhelm von Humboldt, the diplomat and pioneer of the theory of language, correctly observed that the diversity of languages ‘is a diversity of the world-views (Weltansichten) themselves'. Without accepting the entire edifice of Humboldt’s theory of language, one may grant him this: language has everything to do with culture.

It may be helpful to survey the matter from a descriptive point of view. Chinese characters include a range of meanings, so related concepts tend to be compounded into one collective word. The aggregation of concepts into one word leaves a great deal of ambiguity and room for interpretation. Everyday Chinese speech is riddled, so to speak, with figurative speech. Many examples exist: ‘boiled water’ is called ‘rolling water’, revenge is called ‘snowed hatred’, a ‘stranger’ is a ‘raw man’, and a ‘friend’ is a ‘cooked man’.

And so to this question: how do cultural influences on language affect everyday thinking? Cultural influences on modern thinking can be discovered especially in old scriptures. The Heart Sutra, or ‘The Heart of the Perfection of Wisdom’, is the best known Mahāyāna Buddhist scripture. In it, it is famously stated that ‘Form is empty’ – in Chinese: se bu yi kong, or 色不异空. At first glance, this seems to make little sense. However, the first character 色 originally meant ‘sex’. Semantic change saw the meaning evolve into ‘sexiness’, then ‘beautiful features’, before lastly coming to mean ‘colour’ or ‘form’ today. Taking that into consideration, the intended meaning of the phrase ‘Form is empty’ could be ‘Lust is empty’. This is how, when Chinese speakers talk about the appearance or shape of something, they automatically link it to the concept of transience – something short-lived and ephemeral.

In Russia, babushkas – old women or grandmothers – endure long hours of standing in church because there are rarely benches provided to sit on. The thinking behind this is that people come to church to worship God, not to be ‘comfortable’: people should exhibit a kind of suffering, as the Christian deity suffered for his people. This shared cultural knowledge, or awareness, infiltrates the language, and is present in many ways in everyday life.

Another example. If a Chinese person asks an English speaker to verify something, a perfectly acceptable response is ‘You could say that,’ or ‘That's okay.’ But China has a collectivist culture which values the group above the individual. Hence, the response from a native Chinese speaker would be ‘We don't say this.’

Traditionally, Chinese speakers have a greater inclination to see themselves as part of a national identity than most Westerners. The factors which play into this mentality include Confucianism, as well as the long history of dynasties and communism. It may even be as simple as people’s close proximity to one another.

Russia, too, values the group above the individual, and ‘fitting in’ means more than just ‘not standing out’. Not being accepted into a group typically means that there is something ‘wrong’ with you. The Japanese have a proverb for this: ‘The nail that sticks out will get hammered down.’ Russians would say, ‘We, with our friends, are going to dinner’, not ‘My friends and I are going to dinner.’ If a Russian were to use the latter phrasing, the underlying meaning would convey that he or she does not want to associate with the group of friends, and dislikes the group. Saying ‘We, with our friends (family/class)’ reinforces the idea that one shares the same values, and finds identity in that group.

The communist era saw seven decades of Soviet-ordained gender equality for women in the workplace. Even today, Russia is a global leader in gender equality in the professional arena. However, the Russian language has always had a subtle way of expressing this equality. In recent years, gender bias in language has stirred some hot debate, and we are seeing English terms such as ‘businessman’ or ‘businesswoman’ being dropped for the gender-neutral ‘business executive’. Russian has long escaped such issues. Words for terms that in English have a strong masculine connotation are far more descriptive of action than of responsibility (that is, of whether the man or the woman was traditionally responsible for something). Or Russian simply omits gender altogether. For instance, the word ‘manpower’ is translated as ‘labour force’ (Рабочая сила). No-man’s land is translated as no one’s (ничейная) territory. In Russian, pronouns can be omitted. In English, it would be grammatically correct to say ‘Somebody forgot his or her cellphone.’ Instead, many people drop the feminine pronoun and simply say ‘Somebody forgot his cellphone.’ Russian grammar avoids the issue altogether by omission: ‘Someone forgot cellphone (кто-то забыл мобильный).’

Let’s look at a last instance in which language-culture can fundamentally shape a person’s world-view. It affects even the more practical parts of everyday life, such as how one counts. Take the number 95. The Japanese would say ‘nine tens plus five’ (九十五). The French are known for their curious way of counting: 95 is said as ‘four twenties plus fifteen’ (quatre-vingt-quinze). But nothing quite takes the prize like the Danish way of counting from 50 upwards: 95 is said as ‘5 and 4½ times 20’ (femoghalvfems).
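Though the phrasings differ radically, each decomposition names the same quantity. A tiny sketch in Python, offered purely as an illustration, checks the arithmetic behind the three ways of saying 95 above:

```python
# Each language decomposes 95 differently, but the arithmetic agrees.
japanese = 9 * 10 + 5        # 'nine tens plus five' (九十五)
french = 4 * 20 + 15         # 'quatre-vingt-quinze': four twenties plus fifteen
danish = 5 + int(4.5 * 20)   # 'femoghalvfems': five and 4½ times twenty
assert japanese == french == danish == 95
```

The base-20 remnants in French and Danish, against base-10 in Japanese, are exactly the kind of culturally inherited difference the argument here is concerned with.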

Another most interesting case is the well-known linguistic curiosity of two Amazonian tribes. In 2008, researchers tested the mathematical skills of two tribes in the heart of the Amazon basin, the Pirahã and the Mundurukú. It was discovered that their concepts of numbers did not follow our usual counting system. For starters, the Mundurukú have no words for numbers greater than five. It would be hard for us even to play hide and seek with them, as we generally start counting down from ten. How do the Pirahã and the Mundurukú count if they don’t have numbers? Counting without precise numbers is simpler than one may expect: one, two, many. The researchers were at first perplexed that the tribal people struggled to give the exact number of items that the researchers showed them: ‘In that experiment, the tribe members used the word previously thought to mean “two” when as many as five or six objects were present, and they used the word for “one” for any quantity between one and four. The researchers concluded that there was no need to express large numbers, as the tribes simply did not think of using a monetary system. This indicates that these aren't counting numbers at all; they're signifying relative quantities.’ Numerical language was primarily intended for comparative measures such as ‘some’ and ‘more’.

This spurred further research into children’s cognitive development and inherent human aptitudes. How do children learn numbers? Small children first memorise numbers as a list – but ask them for three apples, and they are just as likely to bring you five or seven. On the question of what people can and cannot do from birth, Edward Gibson, professor of brain and cognitive sciences at the Massachusetts Institute of Technology, gave the following insight:
‘It is often assumed that counting is an innate part of human cognition, but here is a group that does not count. They could learn, but it's not useful in their culture, so they've never picked it up.’

I began by speaking about the difficulty of articulating one’s thoughts clearly, regardless of which language one speaks. I would love to wrap up with a perfectly formed conclusion, but might well fall prey to what seems to me the interminable habit of imposing theories on our language. In name, at least, we largely abandoned prescriptive approaches to language a long time ago. I instead propose an added description of the relationship between language and thought.

In a paper, ‘Science and Linguistics’, published in 1940, Benjamin Lee Whorf emphasised the importance of common linguistic backgrounds for truly understanding one another: ‘we are thus introduced to a new principle of relativity, which holds that all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.’ I would like to add a crucial element. I am led to believe that not only our linguistic background but, moreover, our cultural usage of language is what articulates our common perception of reality. Culture is bound to language, and language is bound to culture.

I think that, in order to understand the workings of language, we need to see it through the prism of what I call linguistic relatedness, in which human languages exist in harmony with their environment. Language expresses culture, and culture is manifested in language. It is as simple as that. This is not to say that language creates thoughts, but that the context, or cultural frame, in which a language is used is like a lens through which the articulation of our thoughts elicits various frames of reference. This builds upon the lesson of Benjamin Lee Whorf's investigations, too, just under a century ago.




About the author: Ms. Lina Ufimtseva is a writer on language and semantics based in Cape Town. Her article The Modern Stoic was published in 2017 at our sister site Philosophical-Investigations.org

Address for correspondence:
She can be contacted via linaufim95@gmail.com