All posts by Robert M Ellis

About Robert M Ellis

Robert M Ellis is the founder of the Middle Way Society, and author of a number of books on Middle Way Philosophy, including the introductory 'Migglism' and the new Middle Way Philosophy series published by Equinox. A former teacher, he now runs a retreat centre in Wales, Tirylan House, and is in the process of creating a forest garden there.

Learning the craft

I’ve never been much of a craftsman. By that I don’t just mean that I haven’t developed much skill with my hands. I’m thinking more of the way that a craft requires its practitioners to adopt and work within a particular set of socially sanctioned standards. To learn how to work wood, make pots, or do almost anything else requiring skill, you start off by largely subordinating yourself to the standards you are taught. Ideas about ‘good’ carpentry or pottery, and how to do it well, have been developed over time and passed down to form a tradition with attendant standards. Creativity in such a craft can only come after you’ve accepted those standards and worked within them. Only when you’ve fully internalised them, and allowed them to shape your very understanding of quality itself, are you able to stretch them. This is what Alasdair MacIntyre, the moral philosopher, described as goods internal to a practice – the realistic basis of moral virtue. We can only develop goodness deeply rooted in individual and social experience, he thought, through internalising the standards offered by one or more ‘practices’. These could be crafts in the usual sense, or sports, or academic disciplines, or professional requirements, or arts, or anything with a social dimension in which there are shared standards – a ‘craft’ at least in a metaphorical sense.

Recently I have been reflecting on my own difficulties with this process. My personal problem, I think, has never been the discipline of learning a ‘craft’ in this broad sense in itself, but rather the requirement to accept a particular set of constraining rules in order to do so. Hence the history of my varied academic studies, my attempts to learn foreign languages, my engagement with different subjects when teaching, my engagement with different religious groups, and my relationship to philosophy: all betray what one could unkindly call dilettantism, or more kindly a determined free-spiritedness – an inability to settle into one set of constraints and make the best of them. Perhaps the furthest I’ve got with any ‘craft’, with the support of a teacher in relatively recent times, has been with classical piano playing. I have at least pursued this for most of my life and got a great deal out of it: but I scraped through my grade 8 piano exam, and to this day am pretty hopeless at any kind of musical theory, scales or even basic key recognition. I’ve got myself through the threat implied by classical music’s standards by scorning many of them.

This tendency has both a positive and a negative aspect. The positive side is that it’s been a key condition of my development of Middle Way Philosophy. What’s distinctive about that approach is its synthetic nature – the way it brings different kinds of ideas and standards from different sources together for practical ends. I would never have created so much synthetic material if I hadn’t been so impatient with the constraints of any one craft. The drawback, though, can be a limit on the depth of my engagement with any given area of experience. Sometimes I yearn to be the master of a craft, with the capacity that this implies to learn more fully from others.

What’s particularly made me think about this once more is that I’m currently trying once again to engage with a craft – this time the craft of EFL (English as a foreign language) teaching. I did a basic certificate in this a long time ago, and also have lots of experience of teaching five other subjects (there’s the dilettantism again!), but I’m currently enrolled on a Diploma Course to learn how to do it properly – and preferably also to make myself more employable. Like the other teacher training courses that I’ve done (and scraped through) in the past, it’s a challenge – not intellectually, but because I have to take someone else’s set of apparently unreasonable, arbitrary standards, accept them, and work within them.

I started off in my first observed lesson with Henry V. I had been watching Shakespeare’s Henry V, and thought that the story of Henry’s invasion of France and the Battle of Agincourt might interest the students. There was lots of vocabulary about combat that might have been of some use to them, because it is used metaphorically in everyday life. So I showed them a video about Henry V, and helped them to draw some combat vocabulary out of it. But was this directed sufficiently towards the needs of the learners? No. Were its aims and objectives clearly focused on their needs? No. Did I teach a manageable and helpful amount of vocabulary, properly contextualised in the way the students could use it? No. In the terms of the diploma, this was a disastrous lesson. Despite a great deal of more general teaching experience, I had nowhere near internalised the standards needed for the performance of the craft.

The key to meeting this challenge seems to be provisionality. I have to remind myself that these standards are a means to an end in a particular context, and that others understand the practical workings of that context far better than I do. The temptation for me is to reject them because they are too constraining – but that would be to repeat previous mistakes. I hope, and believe, that although I’m now in my fifties, I’m not too old to learn this. We’ll have to see how I get on with the rest of the course.

Of course, I do still think that learning one craft is not enough, if the effect is that one then gets stuck in the limiting assumptions of that craft. I still see philosophy as a pursuit that can only gain a helpful identity by being seen as beyond any craft – drawing on many crafts but not being subject to any of them. When I was studying for my Ph.D. in Philosophy, I met another student with a radically different attitude to mine. He described his Philosophy thesis as his “apprenticeship piece”, and was only too willing to embrace the arbitrary constraints of the particular sort of philosophy he was being supervised in. I wondered why on earth he was studying philosophy. Why not be a woodworker? Or at least a teacher?

But it’s likely that we can only get beyond craft, and into philosophy – or perhaps art – by growing up into a particular practice at least to some degree, before we learn to understand different practices in relation to each other. That’s what developmental psychologist Robert Kegan described as stage 4 thinking, where most educated and/or professional adults are to be found. The challenge seems to be not just to aspire to stage 5 thinking, beyond the craft, but also to understand when to embrace stage 4 in a provisional but still practically committed fashion.


Distinctive Qualities for the Middle Way

This morning, I woke up thinking about what it is that is distinctive about the Middle Way approach that sets it apart from other ways of judging things. To put it more crudely, in marketing terms, what is its ‘USP’ or unique selling point? I find that in whatever form I try to convey what the Middle Way is about, many people like to appropriate it into the terms of some tradition or type of thinking that is more familiar to them: for example, Buddhists think of it in Buddhist terms, scientists in scientific terms, and so on. I usually think that they are partially right, but that they are still missing an understanding of what is most distinctive, because synthesis (seeing different ideas from different sources in relation to each other) is so central to it. So however I try to convey the unique ‘selling point’ of the Middle Way, it will have to be based on a synthesis of different qualities coming together. Those qualities may be found separately in lots of places, but the Middle Way asks one to see them together and in systemic relationship to each other. It starts to arise more fully when they are all brought together.

I worked through a list of many different viewpoints, along with what I felt they shared and didn’t share with the Middle Way, and by this means managed in the end to distil a list of five qualities. These qualities, when combined and held together, seem jointly to create a distinctive Middle Way approach, whereas in every approach that seemed to get near to the Middle Way but not quite hit it, I could identify one of these qualities missing. In approaches that are even further from the Middle Way, of course, more than one of them will be missing. Focusing on these qualities is thus a different (but hopefully complementary) angle from which to understand the Middle Way than the Five Principles that I have been using for some years now. Let’s call them the Five Qualities: synthesis, criticality, gestalt meaning, even-handedness and practice orientation. The diagram below conveys their interdependence.

Firstly, synthesis is the ability to bring ideas together from different places. Without that ability, provisionality, in which we are open to alternative possibilities, is impossible. Synthesis is blocked by domain dependence, where our thinking is stuck in the one context in which we are used to applying it: for instance, we don’t apply what we learnt at work when we get home. Fixed and essentialised categories can also block synthesis, by making us think in only one way that’s dictated by the framing of the language we’re using: for instance, believing that ‘religion’ must only ever be one kind of thing. The blocking of synthesis is also, in my view, a major problem in academia, where it results in over-specialisation, over-reliance on analysis alone, and relativism about values. Those academic ways of framing things also influence the rest of society.

Secondly, criticality is the ability to question the current set of assumptions that we are making or are presented with. Even if we are theoretically aware of alternatives, if it doesn’t occur to us to consider the possibility that what we believe might not be true, we can be slaves to confirmation bias, locked into an unhelpful set of assumptions. For instance, people who are mystically inclined may have a highly meaningful, practical and synthetic approach to things, but they also often assume that this view offers ‘ultimate truth’ of some kind. Their failure to apply any criticality to this assumption can again trap them in unhelpful views in practice.

Thirdly, gestalt meaning refers to the recognition that symbols are meaningful because of our embodied experience, channelled through the right hemisphere. This meaning is gestalt because it comes to us all at once in an intuition, rather than being conveyed piecemeal. However, when we assemble these gestalt meaning experiences into language through the use of schemas and metaphors, we can use them to express our beliefs, and at that point they become subject to criticality. So putting criticality together with the recognition of gestalt meaning results in the distinction between meaning and belief, and the recognition that we need to treat them in slightly different ways: to appreciate and celebrate meaning, but maintain critical awareness about our beliefs. Many people find this difficult, or are not even aware that it is possible: thus there are many spiritual and artistic people with a strong sense of gestalt meaning but little criticality, and many scientifically or philosophically educated people who are inclined to dismiss anything to do with gestalt meaning as “woo”, because they wrongly assume that it must be a kind of belief that threatens their justified and critical scientific beliefs. An insufficient openness to gestalt meaning can impoverish our emotional and imaginative lives, and tends to lead us into representational and instrumentalist attitudes in which, for instance, we don’t really respond to others as people like ourselves.

Fourthly, even-handedness is a further quality that we need to apply when we are engaging in synthesis, criticality and appreciation of meaning. There are always many different possibilities jostling for our attention, and many different possible beliefs we could adopt. Even-handedness is the capacity to apply a model of balance in our judgements about these, not simply immersing ourselves in or committing ourselves to one kind of meaning or belief while completely neglecting another. This is especially important when it comes to dealing with absolute or metaphysical beliefs, as it is so easy to reject one because of its dogmatism without recognising that you are running headlong into the arms of its opposite (like people ‘on the rebound’ from a relationship breakup). Even-handedness requires an emotional awareness that the hatred likely to accompany your rejection of one view does not have to create a wholesale desire for only one alternative to it.

Finally, practice orientation is the commitment to making your judgements practical and putting them into practice. That will probably mean that you are ‘working on yourself’ through some kind of integrative practice such as meditation, the arts, and/or study, and probably also ‘working on the world’ through some kind of communicative, social or political activity. With practice orientation you are always likely to be asking ‘does this really make a difference in practice?’, and thus to have a critical perspective on purely theoretical accounts that take abstract completion as an end in itself. For instance, if someone presents you with an argument about the nature of the historical Jesus, you can ask what difference this makes: is it going to change the way that Jesus functions in people’s lives, as a source of advice, inspiration, or archetypal meaning? Many academic approaches seem to lack this kind of practice orientation, because they have turned scholarly or scientific investigation within a particular field into an end in itself.

Of course, this list may not be complete, and may still be improved upon. But perhaps it can provide another way into the Middle Way, and especially into the question of what is distinctive about it. If you’re not sure how well a particular approach fits the Middle Way, you might like to start by asking whether all five of these qualities are present, at least to some degree.


Retreats update

We’re now booking for two weekend retreats in 2019, both led by Robert M. Ellis. Click the retreat titles for more information.

Depolarising Politics: 22nd-24th March 2019 in Yorkshire, UK

The Buddha’s Middle Way: 25th-27th October 2019, in Worcestershire, UK

We’re also expecting to run another retreat led by Nina Davies in early 2020, followed by a convention on 18th and 19th April, 2020 – look out for more details of these in future.

Also don’t forget our ongoing webinar programme!


Critical Thinking 22: The Slippery Slope Fallacy

I’m moved to return to this blog series on Critical Thinking by the appearance of a particular fallacious argument in current political discourse in the UK (in the form of the “best of three” argument about a possible second referendum on Brexit). This is an example of the slippery slope fallacy, which I’ve not yet covered in this series. This fallacy doesn’t seem to be as widely understood as it should be. I regularly see people online using “slippery slope” as though it were a justification rather than a fallacy, and even highly educated BBC journalists seem either unaware of it or unwilling to challenge politicians who use it.

The slippery slope fallacy, like any other bias or fallacy, involves an absolutised assumption that is usually unrecognised. In a Middle Way analysis there is always a negative counterpart to an absolutised assumption (assuming the opposite), and that’s also the case here. The slippery slope fallacy itself involves the assumption that if one acts in a particular way, showing a tendency in a particular direction, this will necessarily result in negative effects that include further movement in the same direction, with further negative effects. The absolutisation here lies in the “necessarily”. Those who think in this way do not consult evidence about what is actually likely to happen following that course of action, or justify their position on the basis of such evidence. Rather, they just apply a general abstract principle about what they think must always happen in such cases. Such general abstract principles are usually motivated by dogmatic ideology of some kind.

Some classic examples of the slippery slope fallacy involve arguments against voluntary euthanasia or the legalisation of recreational cannabis. The argument against legalising voluntary euthanasia goes along the lines of “If you allow voluntary euthanasia, then there’s bound to be a creeping moral acceptance of killing. Respect for human life will be undermined. Before you know it we’ll be exterminating the disabled like the Nazis did.” The argument against legalising recreational cannabis follows the lines of “If you let people smoke cannabis, they’ll soon be on to harder stuff. It’s a gateway drug. We’ll soon have the streets full of heroin addicts.” In both of these arguments, there is no particular interest in whether there is any evidence that the lesser effect would in fact lead on to the greater one, just the imposition of a dogmatically-held principle that proclaims what would always happen. The absurdity of this assumption becomes clearer if you think about how easily we could use such slippery slope arguments against currently accepted practices: “If you allow euthanasia for dogs, you undermine respect for life and before you know it, it will be applied to humans”, or “If you allow people to smoke tobacco, they’ll soon be smoking heroin. It’s a gateway drug.” In practice, we draw boundaries all the time, and in law we enforce them. There is no obvious reason why new boundaries should be harder to enforce than previously accepted ones.

So now we come to the current use of the slippery slope fallacy in UK political discourse: its deployment by Brexiteers opposed to the idea of a second referendum – which, at the time of writing, looks increasingly like the only viable option to release the UK parliament from deadlock over Brexit. Their argument goes along the lines of “If we have a second referendum, what’s to stop us having a third one or a fourth one? We’ll never resolve the issue.” Here’s one example of many uses of this argument in the media. As in the euthanasia and drug legalisation arguments, the objection appears simply to involve the dogmatic application of an implicit principle – in this case, that “politicians can call as many referendums as they like until they get the result they desire”. As in those arguments, too, there is no positive evidence that this would actually be the effect, nor that it is actually part of anyone’s motives. In practice, it seems much more likely that public resistance would grow the more referendums were called. In its imposition of an abstract dogmatic principle on the situation, this argument completely misses the point that the call for a second referendum is a pragmatic response to a particular situation of deadlock, not an invocation of a general political principle.

As with other biases and fallacies, there is also a negative counterpart to the positive slippery slope fallacy. This is the failure to acknowledge actual evidence that a “slippery slope” might happen, due to an absolute reaction against the slippery slope fallacy. There are some instances where there is positive evidence that a particular course of action can initiate a gradual deterioration – for instance, being unemployed is often correlated with poverty and depression. This is not to say that everyone who is unemployed will necessarily suffer in these ways, but your chances of becoming poor and depressed demonstrably increase once you are unemployed. The danger of further negative effects from unemployment is probably something you should take into account before resigning from your job with no alternative lined up: but taking it into account does not necessarily mean that it should determine your response.

So, the slippery slope fallacy is just another common instance of dogmatic assumptions applied in unconscious everyday thinking. It doesn’t imply that there are no “slippery slopes”, only that you need to look carefully at the slopes before you set off down them to see how slippery they really are. You might well be able to keep your footing better than you expect.

Link to index of other blogs in the Critical Thinking series

Picture: ‘Slippery Slope’ by S. Rae (Wikimedia Commons) CCSA 2.0

Believing in Santa Claus

If we are told about Santa Claus, will we “automatically” believe in Santa Claus?

I’ve recently been reading a big tome – ‘Belief’ by the professor of psychology James E. Alcock. In many ways this book can be recommended as a helpful and readable summary of a great deal of varied psychological evidence about belief, including all the ways that beliefs based on perception and memory are unreliable, and all the biases that can interfere with the justifiability of our beliefs. However, I’m also finding it a bit scientistic, particularly in its reliance on crude dichotomies: for instance, between ‘natural’ and ‘supernatural’ beliefs. It seems like a good indicator of the mainstream of academic psychological opinion, with both its strengths and its limitations. (I haven’t got to the end of the book yet, so all of these judgements will have to remain fairly provisional.)

One particular point has interested me, which for some reason I had not come across before. This is Alcock’s claim that accepting what we are told as ‘truth’ is “the brain’s default bias”.

There is abundant and converging evidence from different research domains that we automatically believe new information before we assess it in terms of its credibility or assess its consistency with beliefs we already hold. Acceptance is the brain’s default bias, an immediate and automatic reaction that occurs before we have any time to think about it. Only at the second stage is truth evaluated, resulting in confirmation or rejection. (p.152)

One of Alcock’s examples of this (seasonally enough) is the child’s belief in Santa Claus. If people tell the child that Santa Claus exists, he or she will ‘automatically’ believe exactly that. Now, it’s one thing to claim that this is quite likely, but quite another to claim that it is ‘automatic’.

If this is correct, it seems to be a significant challenge to the things I have been writing and saying in the last few years in the context of Middle Way Philosophy. If we automatically believe what we are told, then it seems that there is no scope for provisionality in the way we initially believe it, and we are left only with ‘reason’ – i.e. a second-phase reflection on what we’ve come to believe – to rescue us from delusion. The distinction that I like to stress between meaning and belief would also be under threat, because we could not merely encounter meaningful ideas about possible situations without immediately believing them. So I was sceptical when I encountered this claim. But since it came from a professor of psychology, I certainly needed to look into it and check my own confirmation biases before rejecting it. Was the claim actually well evidenced, or had dubious assumptions been made in the interpretation of that evidence?

Alcock references a 2007 review paper that he wrote in collaboration with Ricki Ladowsky-Brooks: “Semantic-episodic interactions in the neuropsychology of disbelief”. This paper does summarise a wide range of evidence from different sources, but on reading it, it quickly became apparent that this evidence has been interpreted in terms of highly questionable assumptions. The most important of these involves the imposition of a false dichotomy: namely, that the only options in our initial ‘acceptance’ of a meaningful idea about how things might be are acceptance of it as ‘truth’ or rejection of it as ‘falsehood’. If one instead approaches this whole issue with an attempt to think incrementally, then we can understand our potential responses in terms of a spectrum of degrees of acceptance – running from certainty of ‘truth’ or ‘falsehood’ at each extreme, via provisional beliefs tending either way, to an agnostic suspension of judgement in the middle. The introduction to Alcock and Ladowsky-Brooks’ paper makes it clear that this dichotomy is being imposed when it says that

The term “belief” will refer to information that has been accepted as “true”, regardless of its external validity or the level of conviction with which it is endorsed.

If we start off by assuming that all degrees of conviction are to be categorised as an acceptance of “truth”, then we will doubtless discover exactly what our categorisations have dictated – that we accept things as ‘true’ by default. This will be done in a way that rules out the very possibility of separating meaning from belief from the start. But since the separation of meaning from belief enables us to approach issues like religion and the status of artistic symbols in a far more helpful way, surely we need at least to try out other kinds of assumptions when we judge these issues? Alcock’s use of “true” as a supposed default in the claimed “truth effect” is so broad that it effectively includes merely finding a claim meaningful, or merely considering it. This seems to involve an unnecessary privileging of the left hemisphere’s dichotomising operations over the more open contributions of the right, when both are involved in virtually every mental action.

The alleged two-stage process that then allows us to reconsider our initial assumption that a presented belief is ‘true’, and decide instead that it is ‘false’, also turns out not necessarily to consist of two distinct stages. On some occasions, we do immediately assume that a statement is false, because it conflicts so much with our other beliefs. However, Alcock identifies “additional encoding” in the brain when this is occurring, implying that both stages are taking place simultaneously. Yet if both stages can take place simultaneously, with the second nullifying the effects of the first, how can the first stage be judged “automatic”?

So, in some ways Alcock obviously has a good point to make. Very often we do jump to conclusions by immediately turning the information presented to us into a ‘truth’, and very often it then requires further effortful thinking to reconsider that ‘default’ truth setting. But the assumptions with which he has interpreted his research have also unnecessarily cut off the possibility of change, not just through ‘reason’, but through the habitual ways in which we interpret our experience. There is no discussion of the possibility of weakening this ‘truth effect’ – yet it is fairly obvious that it is much stronger in some people at some times than in others at other times. He seems not even to have considered the possibility that sometimes, perhaps with the help of training, our responses may be agnostic or provisional, whether this is achieved through the actual transformation of our initial assumptions, or through the development of wider awareness made so habitual that the two phases he identifies are no longer distinct.

This issue might not be of so much concern if it did not seem to be so often linked to negative absolutes being imposed on rich archetypal symbols that we need to appreciate in their own right. If I consult my own childhood memories of Santa Claus talk, I really can’t identify a time when I “believed” in Santa Claus. However, that may be due to defective memory, and it may well be the case that many young children do “believe” in Santa Claus, as opposed to merely appreciating the meaning of Santa Claus as a symbol of jollity and generosity. At any rate, though, surely we need to acknowledge our own culpability if we influence children to be obsessed with what they “believe”, and accept that it might be possible to help them be agnostic about the “existence” of Santa Claus? To do this, of course, we need to start by rethinking the whole way in which we approach the issue. “Belief” is simply not relevant to the appreciation of Santa Claus. It’s quite possible, for instance, for children to recognise that gifts come from their parents at the same time as recognising that Santa Claus is a potent symbol for the spirit in which those gifts are given. We don’t have to impose that dichotomy by going straight from Santa Claus being “true” to him being “false”, when children may not even have conceived things in that way before we started applying this frame. If we get into more helpful habits as children, perhaps it may become less of a big deal to treat God or other major religious symbols in the same way.

Apart from finding that even professors of psychology can make highly dubious assumptions, though, I also found some interesting evidence in Alcock’s paper for that positive possibility of separating meaning from belief. Alcock rightly stresses the importance of memory for the formation of our beliefs: everything we judge is basically dependent on our memory of the past, even if it is only the very recent past. However, memory is of two kinds that can be generally distinguished: semantic and episodic. Those with brain damage may have one kind of memory affected but not the other, for instance forgetting their identity and past experience but still being able to speak. Semantic memory, broadly speaking, is memory of meaning, but episodic memory is memory of events.

Part of what looks like a big problem in the assumptions that both philosophers and psychologists have often made is that they talk about “truth” judgements in relation to both these types of memory. Some of the studies drawn on by Alcock involve assertions of “truth” that are entirely semantic – i.e. concerned with the a priori definition of a word, such as “a monishna is a star”. This is all associated with the long rationalist tradition in philosophy, in which it is assumed that there can be such things as ‘truths’ by definition. However, this whole tradition seems to have a mistaken view of how language is meaningful to us (it depends on associations with our bodily experience and metaphorical extensions of those associations), and to be especially confused in the way it attributes ‘truth’ to conventions or stipulations of meaning used in communication. No, our judgements of ‘truth’, even if agnostic or provisional, cannot be semantic, but need to rely on our episodic memory, and thus be related to events in some way. If we make this distinction clearly and decisively enough (and it goes back to Hume) it can save us all sorts of trouble, as well as helping us make much better sense of religion. Meaning can be semantic and conventional, whilst belief needs to be justified through episodic memory.

Of course, this line of enquiry is by no means over. Yes, I do dare to question the conclusions of a professor of psychology when his thinking seems to depend on questionable philosophical assumptions. But I can only do so on the basis of a provisional grasp of the evidence he presents. I’d be very interested if anyone can point me to any further evidence that might make a difference to the question of the “truth effect”. For the moment, though, I remain highly dubious about it. We may often jump to conclusions, but there is nothing “automatic” about our doing so. Meanwhile, Santa Claus can still fly his sleigh to and from the North Pole, archetypally bestowing endless presents on improbable numbers of children, regardless.

Santa pictures from Wikimedia Commons, by Shawn Lea and Jacob Windham respectively (both CC BY 2.0)