Monday, 6 May 2013

One Hundred Years of Philosophy (2013)


Centenary Special 1913-2012

From Volume 101 No. 1 Spring 2013


One Hundred Years of Philosophy

By George MacDonald Ross



It is a great honour to be invited to address The Philosophical Society of England on the occasion of its centenary. By coincidence, it is exactly half a century since I myself started studying philosophy at university. It is also just over half a century since the publication of John Passmore's lengthy and magisterial A Hundred Years of Philosophy, in 1957. Now, I shall not attempt to emulate Passmore's achievement by covering a hundred years of philosophical development in detail, but nevertheless it is possible to make a number of broad generalisations about major changes in the philosophical scene since 1912.

A useful starting point is a famous little book first published in the year the society was founded. This is Bertrand Russell's The Problems of Philosophy. The work has a number of significant features. One is that, along with most of the many other books published by him and other leading philosophers at the time, it was written in a way that would be comprehensible and interesting to the intelligent lay public. Works directed solely towards fellow academics, such as Whitehead and Russell's Principia Mathematica, were the exception rather than the rule. Recall that in 1912, the number of professional philosophers was minute in comparison with today, and most of the readers of philosophical books were outside the university system.

Nowadays, we are used to the idea that, from the Renaissance onwards, most philosophical debate and innovative thinking took place outside the academy. Most of us would be hard-pressed to name any significant European philosopher before Kant who held a university post. It was really only towards the end of the 19th century that reformed universities took over from the coffee houses and voluntary clubs as the centres of gravity of philosophical and scientific activity. The evolution was very gradual, and even after the beginning of the twentieth century, at least outside Oxford and Cambridge, there were still flourishing clubs and societies which eclipsed what was happening in the struggling new universities.

Consider the example of the Philosophy Department at the University of Leeds. It celebrated its centenary in 1991, while I was Head of Department, and I did some research into how it had changed over the past century. In its early years, it had a single professor, later joined by an assistant, and most of its few students were destined to become either school teachers or vicars. The syllabus was narrow, consisting mainly of some history of philosophy, logic and scientific method, and psychology (which had not yet established itself as a separate discipline). It seems that over the years, the Leeds Department did not provide a fertile environment for philosophical innovation, and I was unable to unearth any significant product of philosophical research by any member of staff in the first half-century of its existence. The earliest figure of any note was Stephen Toulmin in the 1950s.

By contrast, the City of Leeds had a flourishing subscription library, with a more extensive collection of books than the university. It had a Philosophical and Literary Society with its own premises (though we should bear in mind that in those days the term 'philosophical' embraced science as much as philosophy as we now know it). And most significantly, there was an organisation called the Leeds Arts Club, which was as much concerned with philosophy as with the arts. One of its leading lights, Alfred Orage, was primarily responsible for bringing Nietzsche's philosophy to the attention of the British public at the beginning of the twentieth century. This was a far greater achievement than anything produced by the academic philosophers.

During the twentieth century, the academy grew and grew, and virtually monopolised intellectual life. Philosophical clubs were increasingly marginalised, and non-academics were effectively barred from publishing in academic journals or presses. My own father was a product of the free intellectual world of the early twentieth century. Although he was a civil servant by profession, he had the mind-set of an academic both in classics (which he studied at Oxford) and in theology. In his later years he encountered many obstacles to participating in scholarly debate, simply because he was not a card-carrying academic. He was eventually prevented from attending international theological conferences which were for academics only, and he had great difficulty publishing a well-argued article on the myth of Atlantis (eventually accepted, to its credit, by a journal run from the University of Durham). It is, of course, perfectly reasonable for journals to have procedures to protect themselves from the effusions of lunatic amateurs; but peer reviewing already does this. It is a shame that insistence on university affiliation reinforces an intellectual apartheid between academics and the scholarly public.

However, this intellectual apartheid does not mean that the universities constitute a complete philosophical monopoly; but rather that there are two separate philosophical cultures, with relatively little interaction. The Philosophical Society of England is a shining example of an organisation dedicated to promoting the study of philosophy among the general public, without letting itself become dominated by academic philosophers (unlike the Royal Institute of Philosophy, for instance).

One of the differences between modern British culture and that of much of Europe is that philosophy is not a compulsory school subject. Cynics may say that making a subject compulsory at school is the surest way of putting people off it for life. Yet for many it does have a lasting positive influence, and it makes for a critical mass of people wishing to continue reading and discussing philosophy. The nearest philosophy ever came to being compulsory in England was during the lifetime of the Higher School Certificate (from 1918 to 1951), usually taken at 18, which included a compulsory exam in logic, broadly construed. In Scotland, by comparison, philosophy was a compulsory subject in all the universities, if not in schools, until the latter part of the twentieth century. In the same spirit as the Higher School Certificate, the International Baccalaureate includes a compulsory course on Theory of Knowledge, as well as having an option of Philosophy as one of the main subjects of study.

It is true that for a number of decades now it has been possible to study philosophy at A level (examinations usually taken between the ages of 16 and 18); but the number of students has always been too small to have a significant cultural impact. The syllabus has tended to be very similar to that of first-year university courses, which may explain why many academic philosophers have been hostile to it — indeed at some universities admissions tutors even refused to accept it as a valid qualification at all. Interestingly, there was a lively debate about philosophy teaching in the press at the time of the World Congress of Philosophy at Brighton in 1988. Some philosophers, such as Roger Scruton, argued that 16 was too young an age to start studying philosophy — even though this was perhaps the age at which Theaetetus started discussing philosophy with Socrates, and far below the age of fifty at which, on Plato's extreme view, one should begin. In my opinion, you are never too young to be encouraged to think philosophically, and it is arguable that you shouldn't start later than the age of entry to secondary school, when students begin to become less creative and more inflexible in their thinking.

Doubtless this is part of the thinking behind a growing movement called Philosophy for Children. It was originally set up in the 1970s in the USA by Matthew Lipman (1922-2010), and it has inspired similar organisations in other countries, including the UK. Its approach could hardly be more different from that of the A level. The A level is quite didactic, with approved texts specifying a defined range of possible answers to an equally defined range of traditional philosophical issues. It is a back-handed compliment to the authors of such texts that it is perfectly possible for candidates to get high grades in the exams simply by memorising their contents. That said, I hasten to add that there are innumerable examples of excellent philosophy teaching in sixth forms, despite the structural rigidities of the A level syllabus and marking schemes.

By contrast, Philosophy for Children is dialogic, open-ended, and non-assessed, encouraging students to think independently and imaginatively, and to discuss issues co-operatively. Lipman's method involves the students reading novelettes about philosophical issues targeted at different age groups from early primary upwards. The students themselves raise questions about what they have read, and there are strict rules as to how they conduct philosophical debate — such as always taking on board what the previous speaker has said, giving reasons in support of any assertion, being courteous and not interrupting, and so on. I once observed a discussion among young schoolchildren at a comprehensive school in a deprived area which was far more civilised, co-operative, reasoned, and imaginative than the typical discussion among university students of philosophy with high A level scores. Proponents of philosophy for children do not always use Lipman's materials as such, but all agree that the aim is to stimulate philosophical enquiry and debate by one means or another, rather than lecturing students about what professional philosophers have achieved.

In recent times, the methods of philosophy for children have been extended to adults through the community philosophy initiative, which holds that many of society's problems can be solved, or at least mitigated, if adults causing these problems can be helped to conceptualise their situation philosophically, and to engage in rational debate with others. This is closely related to cognitive behaviour therapy, which is a modern version of the ancient belief in philosophy as a method for helping people to overcome psychological problems and achieve a good life.

If we add into the picture the recent growth of cafés philosophiques and pub philosophy, and of philosophical magazines directed towards the general public, I believe that lay philosophy is in a far healthier state than it was a few decades ago. Nevertheless, there is still a serious divide between the philosophy of the people and the philosophy of the academics, with very little crossover between the two.

To return to Russell's Problems of Philosophy, there is another characteristic feature that distinguishes the practice of philosophy in the early twentieth century from current practice. This is that Russell confronts significant philosophical issues directly, and refers to major figures of the past only indirectly when relevant. Along with other British philosophers of the early and middle twentieth century, such as A. J. Ayer, Ludwig Wittgenstein, and Gilbert Ryle, he writes without footnotes or a bibliography. A hundred years later, this would be quite unacceptable in academic writing, and we need to consider why.

Comparing an early twentieth-century philosophical text with an early twenty-first century one, the most obvious difference is that the latter is full of references to what other writers have written about a problem, at the expense of addressing the problem itself. Every claim has to be supported by a reference, even if it is totally unclear what the reader is supposed to do with it. Reading all the works referred to in a single article could take years, and the further references in these works a whole lifetime. But if you are not expected to read the references, what are they there for? Their sole function seems to be as an authority for the statement made.

When I was a student, I was taught that the argument from authority was one of the great fallacies in human reasoning. To appeal to an authority was an abandonment of empirical experience and logic as the criteria of truth. Of course we have to trust others a lot of the time; but when we are doing philosophy, we need to rationally evaluate what others have said on the matter in hand.

There is a widely held myth that scholastic philosophers were especially prone to the argument from authority — that they clinched every argument with 'The philosopher saith . . .', i.e. an appeal to the authority of Aristotle. But this is quite misleading. If you look at a classic work of scholastic philosophy, such as Thomas Aquinas's Summa Theologiae, you will see that his method is first to define a problem, then to list the solutions to it proposed by different authors, and finally, and most importantly, to give reasons for preferring one solution to the others, or opting for some compromise position. This seems to me an excellent model for how references should be used.

Nonetheless, perhaps it was a fear of appearing scholastic that led modern philosophical culture, from Descartes and Hobbes through to mid-twentieth-century authors such as Russell, Wittgenstein and Ryle, to err on the side of giving too little acknowledgment to the views of others. So why was there a marked shift from giving too few references to giving too many? One possible factor is that, until quite late in the century, it was common for people to be appointed to academic posts in philosophy without having produced a doctoral (PhD) thesis. When I was a student, only a minority of my teachers had a PhD, and most of those were from abroad. I was firmly told that an Oxbridge BA was a licentia docendi, or a licence to teach, and that the reason why Oxford introduced the postgraduate BPhil (and not, note, the DPhil) as a teaching qualification was because philosophy was only a small proportion of what was studied there at undergraduate level (whereas the Cambridge Moral Sciences Tripos was a single-honours programme).

This is relevant because the PhD thesis is now an academic's first and most formative experience of extensive philosophical writing. The requirement that the thesis must be original means that the author must give evidence that no one else has published the views contained in the thesis. The only way of providing this evidence is to trawl all the relevant literature, and to show that nothing is the same as the substance of the thesis. Journal editors and publishers have a similar interest in originality and avoidance of plagiarism, so it is in effect a necessary condition of publication that books and articles have copious bibliographies and footnotes. The requirements for a PhD thus become a lifetime habit.

Another reason for the change is the pressure to publish. Until the early twentieth century, the idea of a university was of an institution of which the primary purpose was to teach undergraduates. For example, one of the central themes of Cardinal Newman's much mentioned but little read The Idea of a University is that teaching and research are distinct and incompatible activities, and that they should be carried out in separate institutions. In the first two decades of the twentieth century, at the University of Leeds, for example, even the scientists and engineers amongst the academics had to obtain special permission to undertake research projects, in case they detracted from the teaching for which they were paid. Research and writing for publication were regarded as spare-time activities or even hobbies, and philosophy teachers published only if they felt they had something important to say. It was not really until after the First World War that research began to be seen as part of the duties of an academic, and not until much later that it began to be monitored.

The really big change in the UK took place in 1986, with the first Research Assessment Exercise (RAE), now rebranded as the Research Excellence Framework (REF). This came about because the then polytechnics complained that they received less funding per student than the universities. The universities responded by saying that part of their mission was to conduct blue-skies research (i.e. not just particular projects funded by sponsors), and that the difference in income was accounted for by the time academics spent on research rather than just teaching. The Treasury then called their bluff by insisting that they should account for the research activity of every member of academic staff — and hence the RAE came about. It initially measured the quantity of publications, but later became more qualitative. Certainly, the amount of money involved put immense pressure on universities to ensure that everyone produced as much as possible and within a short timescale.

Since the exercise covered all academic disciplines, considerations of equity meant that, as far as possible, the same general criteria would apply. Significantly, works intended for the general public or students simply did not count, and in some departments staff were explicitly forbidden to write such works. This is one reason why so many textbooks are written by American rather than British academics. Of course, there is a grey area where a book might be considered to be both for the lay and for the academic market, as was usually the case in earlier times. But the tendency is to play safe, and make sure that anything published is unmistakably an academic research publication. This means that, in philosophy, the style of writing has become more like that of other disciplines, such as sociology, which is notorious for its excessive and uncritical reliance on references.

When I was Director of the Subject Centre for Philosophical and Religious Studies of the Higher Education Academy, I was involved in discussions as to what would count as a research publication on teaching methods for RAE purposes, and whether it would be assessed by the Education Panel or by the relevant subject panel. The main criterion that emerged was that a work should be 'embedded in the literature' to count as research — i.e. that it should refer to lots of works on the same topic. Personal experience and reasoned arguments didn't count. This concerned me, because, until the Subject Centre started publishing its own journal, there was almost no published literature on the teaching of philosophy to refer to. No-one wanted to listen when I pointed out that anyone doing research in an entirely new field would be excluded from the RAE because of lack of references. To put it another way, publications that are heavily derivative from the work of others count as research, whereas entirely original research does not count at all!

It has been observed that academics in general have a tendency to discuss each other's writings rather than directly confronting issues themselves. Academics live off the oxygen of publicity, and like nothing better than being referred to, whether in agreement or not. So to outsiders academia looks like a closed club for mutual appreciation. Or perhaps I should say a collection of clubs, because the past century has seen a vast increase in the degree of specialisation as well as in the quantity of publications. So much is published that no-one can keep up with the literature in more than one sub-discipline. This is as true of philosophy as of any other subject, and philosophers tend to define themselves as metaphysicians, logicians, ethicists and so on, rather than just as philosophers.

Indeed, there is now also a host of 'philosophy ofs', such as philosophy of education, philosophy of science, and so on, which have relatively little commonality with mainstream philosophy. Noel Malcolm, a distinguished Hobbes scholar, left academia for a while to write a weekly political column for the Spectator (he told me that it gave him more time for research than being an academic). He once wrote a very amusing piece on the oversupply of academic writing, saying that he no longer had time to read all that was written about Hobbes, let alone the history of philosophy or philosophy itself in general. He concluded that instead of encouraging academics to produce more and more, the powers that be should take a leaf out of the book of the European Union, and pay academics set-aside money for books they refrained from writing.

Another factor which makes philosophy different from most other disciplines is that the very concept of research doesn't seem appropriate for what philosophers do. Research involves the discovery of something unknown, whereas what philosophers do is to reconceptualise and assess the significance of things that are already known. This may lie behind the slowness of philosophers to adopt the PhD as an essential qualification, or to come to see publication as part of the job, and not just a hobby. Now that the concept of philosophical research has been accepted, there is a tendency for it to become more like research in other disciplines. In particular, there is the scientistic expectation that effective research will lead to positive and objective results. Transferred to philosophy, this implies that there is objective philosophical truth which can be arrived at by research. In other words, it implies what a sceptic would call dogmatic philosophy.

In the history of philosophy, scepticism has come in and out of fashion. It was alive and well in the ancient world, with Socrates (arguably), Pyrrho, and Sextus Empiricus. It lay low in the middle ages, perhaps because of church control over the universities. After its revival in the Renaissance, especially under Montaigne, it was well represented in the modern period, with strong tendencies in Locke and Berkeley, and the overt scepticism of Hume and Kant. During the past hundred years, the logical positivists, while deferential to physical science, were dogmatically sceptical about metaphysics, ethics, aesthetics, and theology.

Similarly, linguistic philosophers rejected anything not expressible in ordinary language, and, in his later writings, Wittgenstein treated philosophy as a disease to be cured. But since then, scepticism seems to have gone out of fashion, and philosophy is largely conducted dogmatically, in the sense that practitioners of the sub-disciplines believe that they have arrived at objective truths, which they can then share with fellow-academics and students. One of the indications of this shift is the revival of sub-disciplines, in particular metaphysics, which were once considered taboo, but are now hailed as peaks of excellence in some departments. For some earlier twentieth-century philosophers, this would be the equivalent of a reversion to alchemy in a chemistry department, or to astrology in an astronomy department.

There has also been a change in attitude towards the applicability of philosophy. At the beginning of the last century, people like Russell and Joad believed passionately in the usefulness of philosophy for ordinary people, both as an approach to thinking, and in its applicability to morality and philosophy of life. Russell in particular wrote many books and made broadcasts on topics such as marriage, happiness, idleness, and politics; and I am sure he believed he could write them only by virtue of being a philosopher. It is thanks to Russell and his like that prominent philosophers had a much higher status and respect in British society than they have today.

In some respects, philosophy has tended to withdraw into itself, and some philosophers boast that their discipline is studied for its own sake without any practical application. At the World Congress in Brighton in 1988, which I mentioned earlier, Allen Phillips Griffiths was quoted in the press as saying that 'Philosophy butters no parsnips', by which he meant that it had no practical application. Shamelessly, the undergraduate handbook of the Philosophy Department of the University of Leeds once had a statement right at the beginning that studying philosophy was entirely irrelevant to a career. A number of university administrations are exasperated by the difficulty of getting philosophers to articulate the useful intellectual skills philosophy students develop in the course of their studies. In fact, when I did this for my own department, my colleagues complained that fostering these skills was not the aim of a philosophical education — which of course is true, but that doesn't mean that they aren't a useful by-product of studying philosophy.

The big exception to the denial of the usefulness of philosophy is applied ethics. Half a century ago, the general assumption was that ethicists studied the logic of moral discourse (what is now called 'meta-ethics'), but qua philosophers they had nothing to say about what is or is not moral. That was dubbed 'casuistry', or the resolving of particular moral issues, and it was the job of priests and politicians. Philosophy might help people think more clearly, but it was ultimately up to the individual to decide what was right or wrong.

In recent years, the sub-discipline of applied ethics has undergone spectacular growth. It has been helped in particular by the requirement of various professions, especially the medical profession, that trainees should study ethics. Non-philosophers probably expect the training to consist in learning and applying a code of conduct. In fact what trainees get is help in thinking more effectively about issues in the practice of their profession, in the light of traditional theories about the nature of morality. Consistently with the assumptions of the previous century, applied ethicists do not see themselves as having a licence to preach to their students what their moral values should be.

Having devoted much of my professional career to trying to bring about improvements in the teaching of philosophy at UK universities, I would have liked to end my talk with an account of the ways in which the teaching of philosophy has got better. Unfortunately, apart from a few shining examples of improved practice, methods of teaching and assessment have remained largely unchanged over the past century. This is surprising, because one would expect teaching methods to reflect the prevailing fashion in views as to the nature of philosophy. In particular, a scepticism about the possibility of peculiarly philosophical knowledge might be sympathetic to a student-centred and dialogic approach, as practised by Socrates. On the other hand, a belief in the objectivity of metaphysics might favour the didactic lecture followed by an unseen sat examination testing the student's memory. But the latter method has largely prevailed over the past century and more, despite changes in belief about the nature of philosophy.

In all this I am reminded of the remark by an American university principal that scientists who, in their research, are meticulous about doing nothing without the backing of experimental evidence completely ignore the empirical evidence about what is or is not effective when it comes to their teaching. I fear that much the same is true of university philosophers. You would have thought that philosophers of all people would be reflective about what they do, but I have seen no philosophical defence of the practice of teaching philosophy through lectures and assessing it through sat exams.

So my conclusion is that congratulations must be offered to organisations such as the Philosophical Society of England, but more generally, the world of academic philosophy needs to do much more to put its house in order.

Contact details: George MacDonald Ross, School of Philosophy, Religion and the History of Science, University of Leeds, UK
email <G.M.Ross@leeds.ac.uk>




This is a revised version of the paper presented to the Malmesbury Conference of 2012, as part of the Philosophical Society of England's Centenary Celebrations.

Wednesday, 1 May 2013

Philosophy in the Twentieth Century (2013)


Centenary Special 1913-2012

From Volume 101 No. 1 Spring 2013


Philosophy in the Twentieth Century

How the Wider World Impinged

By Richard Norman



It is now forty years since the first issue of the journal Radical Philosophy was published in January 1972. It contained a statement of intent which began:

'Contemporary British philosophy is at a dead end.  Its academic practitioners have all but abandoned the attempt to understand the world, let alone to change it.  They have made philosophy into a narrow and specialised academic subject of little relevance or interest to anyone outside the small circle of Professional Philosophers…'

A year earlier I had joined with two colleagues at the University of Kent and with others in Sussex, Oxford, London and elsewhere, to form the Radical Philosophy Group.  There were three strands in the radical philosophy movement, which were sometimes in tension with one another:

1. An emphasis on the need for the practice of philosophy to break out of the confines of academic institutions.

2. An alliance between political radicalism and a more engaged form of philosophical activity.

3. A willingness to draw on the resources of alternative philosophical traditions.  In the founding statement we said: 'There are other traditions which may inform our work (e.g. phenomenology and existentialism, Hegelian thought and Marxism).'

I'll return to the first strand shortly, but before doing so, I'll make some comments on the other two.  About the relation between philosophical and political radicalism there was always a certain ambiguity.  Radical philosophy did not explicitly define itself in relation to any specific political position, but in practice the people involved were on the political left, and there was a temptation to take left-wing politics for granted and to talk only to others who shared the same political commitment.

Mary Warnock, an Oxford philosopher who (ironically) would later be known primarily for applying philosophy to practical issues in bioethics, attacked radical philosophy for wanting to replace traditional philosophy with political action.  In an article published in New Society in 1972 she wrote:

'The present critics of traditional philosophy, the radicals, wish above all to ensure that philosophy shall have practical effects… This seems to mean that there is no part of philosophy that is non-political.  The point of the philosopher's work is to change the world – and to change it he must change the consciousness of the working classes.'

She attributed to us the view that philosophy must be left-wing and Marxist and 'must start from the working classes', and she saw this as turning philosophy into an irrational activity and abandoning the commitment to rational argument.  Her description of the radical philosophy project was a caricature, but perhaps one which we were in danger of inviting by addressing ourselves largely to people on the left and taking left-wing political commitment as a given.  Warnock concluded:

'It is not that I wish philosophy necessarily to be uncommitted.  But commitment should come, if at all, by way of arguments.  And it has always been the pride of philosophy to try to follow the argument, as Plato said, wherever it leads.  To have it laid down in advance, in the book of rules, that there is one and only one correct way to go, seems to me to be contrary to what ought to be the free and sceptical spirit of the subject.'

The need to defend political commitment with rational argument was not something that radical philosophy ever denied.  Of course political positions have to be supported with rational arguments which do not already take a particular commitment for granted.  In practice, however, the supporters of radical philosophy did not do enough of that kind of work in political philosophy.  The early 1970s did see a revival of normative political philosophy, setting out views about the nature of a good and just society, but it came from a different direction.  It was initiated especially by John Rawls's A Theory of Justice, published in 1971, which gave rise to a wealth of philosophical work defending other substantive political positions.  Philosophers who defended more radical positions, for example the political philosopher G A ('Jerry') Cohen, who was loosely associated with radical philosophy in its early days, saw it as more important to publish in mainstream journals where they were not preaching to the converted, and to engage with the arguments of philosophers such as Rawls and Robert Nozick.  With hindsight I think that they were right.

The third of the three strands in radical philosophy – an openness to other philosophical traditions – was not a commitment to any particular tradition or style.  There was a lot of interest in Marxist ideas in the early issues of Radical Philosophy, but this was not an exclusive preoccupation.  Since then, the range of traditions which are drawn on in philosophical work in this country has become much broader.  This has included a renewed interest in Marxism (including what came to be called 'analytical Marxism') and in the philosophy of Hegel, who was a focus of interest in some of the early issues of the journal.  I think it fair to say, however, that these developments have happened independently of radical philosophy, although the group was one symptom of the larger change.

The journal Radical Philosophy itself came to be increasingly dominated by a particular style of philosophy, heavily influenced by post-structuralist discourse theory and by post-modernism.  As it veered in this direction, I myself increasingly lost touch with it.  That style of philosophy can be just as narrowly academic and esoteric as the style of analytical philosophy which we had started off by criticising.  It exhibits the same tendency to be inward-looking, to employ a language intelligible only to the initiated, to focus on texts, often obscure and arcane, and to derive its content from what other academics have written rather than from what exercises people in their non-philosophical lives.

This brings me back to the first of the three strands in radical philosophy, and the one which I consider the most important – the attempt to take philosophy out of the academy into the wider world.  The crucial contrast here is between, on the one hand, philosophy which takes its problems from outside the academic profession, from the questions and dilemmas which exercise people in their everyday lives and their thinking about their lives, and on the other hand philosophy which takes its problems from other philosophers, which interests itself only in philosophical texts and articles in academic journals, and which addresses itself only to other academic philosophers.

This strand in radical philosophy was, again, part of a wider movement.  It was broadly in line with the emergence of what came to be called 'applied philosophy'.  The US-based journal Philosophy and Public Affairs started publication in 1971 and was followed a decade or so later, in Britain, by the founding of the Society for Applied Philosophy and the first issue of the Journal of Applied Philosophy.  The early 1970s had also seen the pioneering work of people like Jonathan Glover and Peter Singer who, rather than confining themselves to questions such as 'What do we mean by the word 'ought'?', began addressing questions such as 'Ought we to legalise voluntary euthanasia?' or 'Ought we to stop killing animals for food?' or 'How much of our money ought we to give to famine relief?'.

That broad development, exemplified both by the radical philosophy movement and by the applied philosophy movement, was, I believe, welcome and necessary.  It was a successful opening up of philosophy to the wider world and a revival of its true vocation.  Sadly, over the past twenty years there has been a falling back.  British philosophy has again retreated into a confined academic world.  Much of the blame lies with the national monitoring of philosophical research and publications through the Research Assessment Exercise (RAE), now renamed the Research Excellence Framework.  This has pushed academic philosophers back into writing primarily for professional fellow-academics.  It has encouraged work written in an esoteric vocabulary which excludes outsiders, bolstered with references to other academic publications, and addressing questions raised not by people's lives but by other academic philosophers responding in their turn to their academic predecessors.  It has encouraged publication for its own sake, rather than writing motivated by having something to say and a desire to communicate it to the wider world.

I believe that these have been retrograde developments.  We badly need a revival of what was best in radical philosophy, and what was best in applied philosophy.  I want to make a bold and sweeping claim: that all good philosophy is 'radical' in the sense of addressing fundamental human concerns – dealing not just with trivia and academic puzzles but with questions which go to the heart of people's struggles to understand their lives and the world around them.  In that sense, too, all good philosophy is 'applied' philosophy.  This has been true of all the great philosophers of the western tradition. 

Plato, the founder of the first academy, was not an academic philosopher.  His philosophical thought was motivated by his experience of the excesses of Athenian democracy, the even worse excesses of the oligarchic revolutions, and the act of the restored democrats who put Socrates to death.  The questions which exercised Plato were: How can we educate political rulers to rule justly?  What kind of knowledge do they need?  And, speaking today in Malmesbury, it is appropriate for me to point out that Thomas Hobbes's most famous philosophical work was driven by the political convulsions of the seventeenth century, and by the question of whether it is ever right to depose the sovereign.  This led him to explore the grounds and limits of political authority, just as it led Locke to do so later in the century.  Hobbes's philosophy was also a response to the rise of the new science, exploring its implications for our understanding of the universe, and whether it can be reconciled with traditional religious belief.  Those same questions were at the heart of the philosophical preoccupations of Descartes and Spinoza, Locke and Berkeley and Hume.

As these historical examples illustrate, by 'applied philosophy' I do not mean only applied ethics.  I include also philosophy which addresses the great metaphysical questions which everyone asks, such as:  Is there a god?  Does our conscious experience end with our physical death?  What makes life meaningful?  Is this all there is?  With that qualification, then, why do I say that all good philosophy is radical philosophy, and is applied philosophy?  Shouldn't it be a matter for choice what kind of philosophy its practitioners choose to engage in?  I shall now offer a brief defence of my claim.

I take it to be agreed that philosophy is not an empirical science.  It is primarily a conceptual discipline, using a priori reasoning which does not rely on the evidence of experience.  (I shall qualify this point in a moment, but let it stand for now.)  Assessment of a philosophical theory therefore has to put a lot of weight on the test of internal coherence – logical consistency.  However, any candidate theory can be made internally consistent if we are prepared to jettison enough of our beliefs and intuitions which don't fit.  Someone could defend even an outlandish theory such as solipsism if he or she were prepared to abandon enough of our ordinary ways of speaking.  If all competing theories, then, can be made internally consistent, how do we decide on the best theory?  My answer is that a good philosophical theory has to be not only consistent but also as comprehensive as possible.  The best theory is the one which makes the best sense of as much of our shared experience as possible.

That is the qualification I want to add to my previous statement that philosophy is a conceptual rather than an empirical discipline.  It is not an experimental science, but its starting point is the facts of experience, both our everyday personal experience and the best currently available results of the specialist sciences.  The task of philosophy is not to add new empirical data, but to bring together all those products of experience, to look for ways of thinking about and understanding them as a whole, and to articulate an overall conceptual framework which provides the best fit with them.  It is the fact that the philosophical conclusions must be answerable to our experience as a whole that ensures that good philosophical theories are rooted in reality rather than being free-floating products of pure thought.

That is a rather abstract argument for why good philosophy has to be 'applied' philosophy in the sense of addressing itself to our pre-philosophical experience and concerns.  Here is a concrete example to illustrate that claim.  One of the liveliest philosophical debates of recent years has been the renewed controversy about religious belief and atheism.  Though the issues have been essentially philosophical ones, the debate has for the most part been conducted not by philosophers but by scientists and theologians.  Consider one prominent element in the debate, the so-called 'fine-tuning argument' – a refurbished version of the argument from design for the existence of a deity. 

According to this argument, scientific explanations for the existence of human beings and other living things, the origin of the earth, and the ultimate origins of the physical universe in the so-called 'Big Bang' 13.7 billion years ago, all depend upon basic facts about the physical universe, the fundamental scientific laws and the basic physical constants such as the force of gravitational attraction, the speed of light in a vacuum, and Planck's constant.  If the values of these physical constants had been even slightly different, the Big Bang would not have led to the emergence of life and of human beings, and perhaps not to an ordered universe at all.  The universe, it is said, is 'fine-tuned for life'.

What the argument then says is that this cannot be a matter of mere chance.  The fact that the mathematical values of the basic constants are just right for producing life, including human life, must have been intended.  The only plausible explanation for it is a purposive explanation, a personal explanation.  The fundamental features of the universe are as they are because they were established by an intelligent personal being for the purpose of eventually creating human life.

The response of atheist critics of the argument (such as Richard Dawkins) is that any acceptable explanation of the values of the basic physical constants would itself have to be a scientific explanation.  Perhaps the basic constants will turn out to be all interconnected at a deeper level.  Or perhaps the best explanation may be some kind of 'multiverse' theory – our universe is one of many actual or possible universes, and ours just happens to be the one that is fine-tuned for life.

It seems clear to me that the disagreement between the defenders of the fine-tuning argument and its critics is not itself a scientific one.  It is a philosophical disagreement.  For the critics the explanation has to be a scientific explanation, because that's the only kind of explanation there can be.  For the defenders of the argument, the ultimate explanation cannot possibly be one more scientific explanation.  So the disagreement is about what counts as a good explanation, and about the relation between scientific explanations and purposive explanations.

More fundamentally it is a conflict between two metaphysical perspectives, naturalism and anti-naturalism.  Defenders of the fine-tuning argument must be committed to the claim that a creator possessing intelligence and knowledge and will existed independently of and prior to the existence of the natural universe, without any physical embodiment or spatio-temporal location. 

Naturalists will say that this is philosophically untenable.  It is impossible that there could be a disembodied consciousness with no physical basis.  We can, they would say, make no sense of talk of the content of consciousness detached from any kind of sensory apparatus as a source of external data.  And we can make no sense of any talk of agency other than that of embodied beings acting on a physical world.  They will then have to acknowledge that naturalism in turn has its problems.  In particular, in the context of the present debate, naturalists of a scientific bent will want to give an evolutionary account of the origins of human consciousness, and it is not clear what they can say about how things like beliefs and intentions as we consciously experience them could emerge from a purely physical evolutionary process.

These are familiar philosophical disputes.  There is a range of philosophical theories offering competing accounts of the relation between the physical and the mental.  These theories all persist because – picking up my earlier point about philosophical method – any one of them can be made logically consistent, provided that its defenders are prepared to make the necessary linguistic adjustments and to acknowledge unfilled gaps.  For example, any dualist will have to say at some point that we cannot give any further account of how physical and mental events interact – they just do.  And any materialist will have to say at some point that we cannot give any further account of how certain kinds of physical processes can be at the same time conscious mental experiences – they just are.

My point is, then, that we cannot resolve the conflicts between these competing theories, and between naturalism and anti-naturalism more generally, simply by the piecemeal examination of individual concepts.  The best theory will have to meet not only the test of coherence but also that of comprehensiveness.  It will be the one which can best account for our experience as a whole.  And that is why philosophers have to engage with the questions and beliefs and experiences which occupy people in 'the God debate'.  Can we live by science alone?  Are there other kinds of knowledge and understanding, distinct from science, which we can draw on and which we need?  If so, what are they?  How can we best account for what people think of as 'spiritual' needs, and 'spiritual' or 'religious' experiences?  Any satisfactory philosophical theory about the nature of reality will have implications for the possibility of personal survival after death, and will therefore have to consider how best to account for the kinds of experiences which people have regarded as encounters with the dead or as evidence of resurrection or reincarnation.  I think myself that naturalistic answers can be given to these questions, but the questions are necessary and answers are needed.

And my point is, too, that serious philosophical debate, such as the debate about naturalism and anti-naturalism, has to tackle these questions, and they are not just philosophical questions.  They are questions which exercise any thinking person, and about which many people have strong and deeply held beliefs.  They are the sorts of question which academic philosophers are inclined to dismiss as exhibiting a naïve misunderstanding of the nature of philosophy.  How should we live?  What is the purpose of life?  Why are we here?  What's it all about?  Good philosophy cannot ignore these concerns, which are fundamental to people's lives.  It is in this sense that all good philosophy is 'radical' philosophy, and all good philosophy is 'applied' philosophy.  The attempt in the 1970s and 1980s to drag philosophy out of the academy badly needs to be renewed.



Contact details: Dr Richard Norman, School of European Culture and Languages, University of Kent, Canterbury, UK
email <R.J.Norman@kent.ac.uk>


Revisiting Aristotle’s Noun

By Thomas Scarborough




What is a noun?  This has been the subject of intense study and debate since the ancient Greeks. In a sense, the answer is simple. A noun, it is said, is a word that names a person, place, or thing - a king, for instance, or a town, or an amulet. But then, what should one do with nouns that signify events or ideas - a dance, for instance, or an ideology?  The question becomes increasingly complex - and so it is said that, perhaps, rather, a noun is something that a sentence, in a special way, cannot do without - which is to say, one focuses on syntax or morphology.

Yet there is a problem with these kinds of answer. Such approaches look at the noun's place in various classifications, its role in various structures - and though they may sometimes do this in great detail, still the essence of the noun would seem to remain largely opaque.  One might say, metaphorically, that one has examined how the atom (the noun) binds to form molecules, yet one has not much peered inside the atom.

Far from being a trivial consideration, the question as to what a noun is may hold within it many secrets of our common life today - to the extent of defining the social construction of modern life.  The answer to the question 'What is a noun?' may include within it the key to understanding semantic change, the variability of grammars, the problem of meaning, the fact-value gap. In fact, considerably more. 

But let us begin at the beginning - with the noun.

In modern times, one has sought to understand the noun in static terms - in atomic, mechanical, structural terms.  The textbooks typically speak of components, categories, features, elements, constituents, properties, units.  The various metaphors, too, which have been applied to the noun in itself, have tended to be static: a capsule, a package, a chess piece, a unit of currency - items which in themselves are invariant.  Yet this modern conception of the noun is not the same as the ancient one.  The ancient Greeks viewed the noun quite differently.  They viewed it as something dynamic - organic, synthetic, relational.

It was Socrates who first suggested that the noun may hide important secrets within.  In Plato's Cratylus, a noun (example: anthropos, or 'human being') may represent a sentence.  While Socrates' train of thought seems whimsical, the seed is planted:  It was once a sentence, and is now a noun.  Here we find the tantalising suggestion that a noun may serve as a kind of wrapper for all that a sentence contains.  Further, in Plato, a noun is seen to be something which 'distinguishes things according to their natures'.  Yet what are their natures?  Such fleeting suggestions foreshadow Aristotle, who, in his Metaphysics, further explores these notions.

Aristotle viewed the noun as a sentence.  In his Metaphysics, he says that the noun is 'a sign of the definition', and that 'definition is one discursus or sentence'.  In other words, a noun points to the sentence which defines it.  While it is possible that Aristotle was merely stating the obvious - that a noun is defined by a sentence - it would seem likely that he was saying far more and that all the relations which exist within a discursus or sentence are wrapped up within a noun.  In other words, a noun is not something to be chopped and diced, as it is so often today.  Rather, it is a bundle of relations.  A noun may enfold spatial and temporal relations - causal, social, logical, conceptual relations.

This becomes clearer as we trace the development of Aristotle's ideas further.  In considering the definition of the word 'house', Aristotle sees that its composite substance - the features which are common to all houses - comprises stones, bricks, and rafters.  While, by modern standards, Aristotle's thinking is deficient (his are not necessary and sufficient features of a house), this does not matter for our purposes here.

Crucially, Aristotle does not merely think in terms of features.  He thinks in terms of how such features are related.  He is careful to specify repeatedly that such features are 'disposed in a certain way'.  Of course - if stones, bricks, and rafters are merely piled in a heap, they do not constitute a house, although they might be constituents of a house.  Thus Aristotle notes: 'The whole is not, as it were, a heap.'  That is to say, the features of the word house need to exist in a certain relation to one another if they are truly to represent its features.  For Aristotle, a noun cannot be defined without a statement of the relations in which its features are involved: 'If these be not manifest, neither will be manifest the definition of the thing'.

This explanation requires some expansion.  Naturally, the definition of a noun itself contains nouns - not to speak of other parts of speech - with the result that a definition represents an infinite regress of relations.  And so the interior life of a noun is infinitely more subtle than at first meets the eye.  Not only this, but it might be pointed out that a house, to be a house, must be involved in more relations than its composite substance suggests.  If it were buried inaccessibly under the ground, it would not be a house - or not, at least, in the sense that Aristotle conceived of a house, as a place where someone lives.  Nor would it be a house if it were suspended upside-down, or reconstructed in outer space.  That is, the features of a house involve external relations as well as internal ones.

Let us pause to consider where this has brought us so far.  Aristotle does not treat a noun as a thing which may be slotted into categories or cut into parts.  He draws into relation its composite substance - or rather, he views a noun as the web of its relationships.  One might say that a definition, in Aristotle's thinking, has to do not only with the features of a noun, but with the relations it suggests.  The definition of a noun, rather than enumerating its necessary and sufficient features, draws together its priority relations - not to speak of non-priority relations, or connotative meanings determined by the particular context.  As simple as this insight may seem, if it is correct we may now be in possession of solutions to some long-standing problems. 

Firstly, we may have in hand an explanation for semantic change, or meaning drift - indeed, for wholesale language change, which includes the form and meaning of words, as well as syntax and morphology. Straying for a moment beyond the bounds of the noun alone - to include the idea of a word in general - a standard definition of what we mean by a 'word' is 'the union of an invariant form with an invariant meaning'.  In this view, there is an obvious problem in seeking to understand semantic change.  Yet if one understands the word as a bundle of relations, a solution seems clear.  Relations may grow stronger over time, or weaken.  They may vary from one situation to the next.  They may appear suddenly, or suddenly dissolve.  Relations are not static, they are dynamic.  So, too, in this view, words are dynamic, and live and move in time.

If one understands the noun in terms of relations, one may, by and large, understand all of grammar in such terms.  And so one may distinguish words which trace spatial relations (nouns) from those which trace temporal relations (verbs).  While a house represents a structure in space, to build it is a process in time.  Further, some relations are oft repeated.  So for instance, nouns are frequently associated with ownership (the genitive case), while verbs frequently have to do with past events (the past tense).  For the sake of convenience, such recurrent relations are compressed into tables (although not always consistently) called inflectional classes - among them declensions and conjugations.  In different cultures, different relations predominate - therefore different cultures use different grammars. 

Straying for a moment beyond the bounds of the word alone, there are wider, epistemological implications.  If words are bundles of relations, either singly or in combination, then the best they can offer is to foreground small regions of relations in the midst of a vast expanse.  Therefore words cannot ground meaning - for the reason that they cannot encompass all relations.  Since it is possible only to trace finite arrangements of relations - let us call them microcosms of relations - language is always going to be incapable of solving the problem of meaning in its widest sense.  Similarly, all language must needs be simplistic, for the same reason: words are always limited in the relations that they encompass.

The nature of words further suggests the basis for a reconciliation of the natural and the human sciences.  Modernism elevated the natural sciences above the human, on the basis that the natural sciences were exact sciences, a dichotomy which is visible on nearly every university campus today.  Yet if all sciences trace relations - since words trace relations - this may suggest a levelling of the academic disciplines.  Both the natural and the human sciences trace relations - physics, medicine, geology, politics, poetry, theology all trace relations.  In fact, since the natural sciences deliberately proceed by isolating individual mechanisms, in the process excluding wider relations, these may be suspected of being 'lower' sciences. 

Finally, if words trace relations, this may at last suggest a reconciliation of facts and values.  Words may trace existing relations, or they may trace anticipated relations.  Existing relations, then, are facts, while anticipated relations are values.  Rephrasing this a little differently, one may judge that the relations one encounters in the world are as they should be, or one may judge that they are not.  When one judges that they are as they should be, one speaks the language of fact: 'The dog is in the garden'.  When one judges that they are not as they should be, one speaks the language of value: 'The dog should be in the garden'.

So what is in a noun? What lies behind its simple form? While Aristotle could not foresee how his answer to this deceptively simple question might apply to all the various issues of our own day, his views, if correctly interpreted here, do have profound implications for life in modern society.

Contact details: The Rev. Thomas Oliver Scarborough
6 St. Patrick's Road, Fresnaye. Cape Town 8005
Republic of South Africa
Email: <scarboro@iafrica.com>