In which we try another way to understand skill, and once more find that the control we crave continues to elude us
I walk down to the centre of town, and watch a street busker juggling. One, two, three; one, two, three, four… five balls in the air. That’s hard. And yet he makes it look so easy.
There’s just too much going on at once. No matter how much I watch him, no matter how much I analyse his movements, I still can’t grasp how he does it. I know that I can’t do it — not yet, at any rate. But I can’t see why I can’t do it when he can: there’s no visible difference between us — in fact there probably aren’t any significant physical differences between us. But clearly something’s different — otherwise I’d be able to do exactly the same juggling without any trouble at all.
The reason I can’t see what’s wrong is that the difference is in the subjective side of the skill: not in what we are, but who we are, and what we allow ourselves to do. To make sense of that, we need to go away from analysis for a while, and go another way in the labyrinth: looking inward rather than outward, at teaching from within — or, literally, ‘in-tuition’.
Thoughts and feelings
Thinking takes time: and there’s no time to spare in juggling. We can analyse someone else’s performance, watching to see whether each ball follows the ideal trajectory, and whether the movements of their hands are smooth and even: but we can’t do it to our own performance — not when we’re actually juggling. You can analyse or juggle, watch others or do it yourself — take your choice. If we want to take a risk and learn to juggle, we have to stop analysing and step back for a while — and listen to a different kind of teaching, coming from within ourselves rather than as rules from outside.
When we learn some physical skill, such as juggling or riding a bicycle, we have to co-ordinate the movements of many different areas of the body. If we tried to do it all from central control, we’d have to poll each one of those areas in turn to find out what’s going on and what has to be done next, and there simply isn’t enough time. We can only think so fast, but gravity won’t wait: we drop the ball, or fall off the bicycle. As we all discover the hard way, we can’t control everything directly.
What we can do instead is control the process indirectly. Imagine that we have within us many separate little minds, scattered all over our body: one for each arm, one for each hand, one for each finger — in effect, one for each joint, and plenty more besides. (To an extent these imaginary minds do have a physical counterpart in our bodies: nerve-ganglia in the spine and elsewhere, and certain ‘hard-wired’ portions of the brain, that handle specific reflexes.) If we teach each appropriate ‘area-mind’ its own task, and then trust that it can run the task on its own, all we’d have to do to co-ordinate the tasks is to keep them synchronised: passing each area a start-up instruction rather than trying to take total control. With each of the sub-minds taking care of the local details, we’re then free to concentrate on the overall pattern of what’s happening.
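The division of labour just described — a central co-ordinator that sends only start-up cues, while each ‘area-mind’ runs its own practised routine — can be sketched in code. The sketch below is purely illustrative: all the names and ‘routines’ are invented for the analogy, not a model of real neurology.

```python
# Illustrative sketch of the 'sub-minds' analogy: central control sends
# only start-up cues; each local controller handles its own details.

class SubMind:
    """A local controller that runs its own learned routine when cued."""
    def __init__(self, name, routine):
        self.name = name
        self.routine = routine          # the task learned through practice

    def start(self):
        # The co-ordinator never micro-manages: it just sends the cue,
        # and the sub-mind takes care of the local details itself.
        return f"{self.name}: {self.routine()}"

left_hand  = SubMind("left hand",  lambda: "toss ball in a shallow arc")
right_hand = SubMind("right hand", lambda: "close on the falling ball")

# Central control only keeps the sub-minds synchronised -- a rhythm of
# start-up cues, not a stream of detailed instructions:
for sub in (left_hand, right_hand, left_hand):
    print(sub.start())
```

The point of the structure is in the last loop: the co-ordinator’s job shrinks to timing, which is what leaves it free to attend to the overall pattern.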
This is, in fact, what we do in learning manual skills, although a more common name for that teaching process is ‘practice’:
Practice makes perfect
We practice and practice the subtle balance that we need to ride a bicycle; we practice and practice at those throwing movements that are at the heart of juggling. We teach those sub-minds their tasks through repeated demonstration, allowing them to get closer and closer to the intended pattern of movements. We do of course analyse what’s happening — or not happening — but we also watch in a different way, watching our feelings about what’s happening. It’s through feelings that we get information back from those sub-minds. Then, quite suddenly, all the co-ordination — hand-eye, hand-mind, mind-body — comes together: and we never forget that part of the skill again (though we may soon forget the pain that we went through to do so). It’s become part of the body’s inner knowledge — by analogy, a new addition to the programming of the robot-like sub-minds of the body. In effect, that’s what a skill is: a new combination of programs for the totality that is us.
We create new skills by an awareness of what’s going on inside us — through experience built up from repeated practice. We can take advice from others, but others cannot do it for us: in that sense it is subjective. And yet it’s also very real.
From scientism, however, we’ve gained a very confusing idea of what is real and what is not. At one level, it takes the view that anything we can label as subjective or imaginary does not exist: only things which are physical, tangible can be considered as real — ‘object-ive’ in the literal sense. But at the same time it insists on the ultimate reality of ideal forms independent of any observer — what we call the ‘laws’ of science — which can only be identified through analysis and logic. This is an extreme version of Plato’s argument:
Reality cannot be invented, but only discovered through pure reason
If we stop to think for a moment, something is clearly wrong with that statement. Plato was writing well over two thousand years ago: and the world — the real, tangible, objective world — is now very different from that which he knew then. The real world has changed: a new version of reality has developed. But it hasn’t arisen from nowhere, like magic: it’s been invented. And we also know from Gooch’s Paradox — “things not only have to be seen to be believed, but also often have to be believed to be seen” — that the very act of observing changes what is observed. So a surprising amount of what we would consider to be real, in the sense of fixed or constant — such as our ability to juggle (or lack of it) — depends on what we believe to be real.
The only one of the sciences that does depend on reason alone is mathematics: and for many mathematicians the whole point of their art, and to them one of its great joys, is its lack of connection with reality. For century after century, mathematics has stood alone and aloof in its own private world of reason and logic, theorem and proof, imaginary numbers and imaginary dimensions. In fact for most of this century, mathematics that is actually useful for real-world situations has had to be developed outside the formal reasoning of pure mathematics.
For practical purposes, reason alone can only describe an idealised — hence imaginary — abstraction of the complexities of the real world: condemned always to look backwards, accurate only in hindsight, it’s quite incapable of describing the whole of reality. And every time we ‘dis-cover’ something, it changes our perception of reality — and thus reality as we experience it. So in practice we need to turn around the whole of Plato’s statement:
Reality cannot be discovered through pure reason — but it can be invented
Scientism’s equating of objective with ‘real’, and subjective with ‘imaginary’, ‘non-real’, is artificial and misleading. It may be appropriate when talking about machines, which — by definition — have no mind. But for skills we can’t ignore subjective choices at all — they’re part of us, and so a necessary and inescapable part of the total equation.
Real and imaginary
We invent reality: not just our own view of reality, but also the ‘objective’ reality that we share with everyone else. It’s through our various skills that we ‘real-ise’ what we want in the world. For example, ask yourself:
Is your current project real or imaginary?
The only true answer is ‘Yes’ — both. It’s the word ‘or’ which misleads, comparing two different classes as if they were opposites — rather like asking “Is it chalk or Wednesday?” Every project is, if you think about it, both real and imaginary: with any project, you’re always in the process of changing something which is imaginary — the idea, the design, the plan — into something which is real in the objective or tangible sense. The catch, to use David Pye’s phrase, is that ‘design proposes, workmanship disposes’: no matter how good the idea may be, its real-isation depends to a great extent on the quality of our ‘workmanship’. No design can make good work out of bad workmanship; whilst excellent workmanship can help correct a poor design — or, in the case of most technology, tidy up all the deviations from the expectations of ‘laws’ that aren’t laws. As we’ve seen with juggling, that depends in turn on subjective differences, our subjective choices — and these are at the same time both imaginary and very real.
We can say that your ideas about your current project are entirely real — but only in an imaginary sense. They exist, they’re real: but in a different kind of reality. Even so, they can still connect directly, through you, to the physical world. For example, imagine an orange. A big, juicy orange. Imagine holding it in your hand. Imagine stripping the peel away — you can imagine the subtle scent of the zest, imagine the feel of the texture of the peel as you strip it away. Imagine, too, holding a segment of the orange; placing it on the tip of your tongue. And — imagine — biting on the segment, tasting the orange…
We could measure all of that, physically, in terms of its effects on you — even though the orange was only imaginary. You can probably still taste it in your mouth, too. So something can be both real and imaginary — at the same time.
And it’s also, quite literally, only coincidence. The sense of tasting the orange — a physically real sensation — coincided with the image of biting on the imaginary orange. But clearly there’s no causal link, in a physical sense, between them: the coincidence is there — and usually repeatably, too — but that is all. The only link between the image and the apparent sensation exists within us — and yet it’s real.
This is important, because scientism teaches us to draw another distinction:
Is there a cause, or is it only coincidence?
The implication is that something can only be real or meaningful if we can show a cause: otherwise it’s ‘only coincidence’, something we can ignore as non-real, as meaningless. Causal events have meaning; coincidences don’t. Once again, as with the question “Is it real or imaginary?”, the ‘or’ is misleading: though here it’s for a slightly different reason. Cause is not different from coincidence: it’s a type of coincidence, or rather a way of interpreting what are, literally, ‘co-incidences’. What scientism refers to as cause and effect, or chains of cause and effect, are more accurately a particular pattern of coincidences that repeats in time.
Cause and coincidence
Cause-and-effect is a useful way to interpret many coincidences, especially in the objective world: but it’s not the only way — and certainly not the only one we use. For example, ask yourself the question:
What is the cause of your reading this book?
You might answer at first that a friend suggested you read it; but what was the cause of your friend making that suggestion? Why did you follow up that suggestion? Further back, what was the cause of your becoming friends? There’s no end to the chain: you could easily argue that the whole of your life has been a stream of interlinked events that have caused you to be reading this word — and will, of course, move on to be part of the causes of seeing other words and other events, other coincidences.
It’s not that viewing the question causally is wrong, it’s more that it’s just not useful here to talk in terms of cause and effect. Cause is not different from coincidence: it’s just one of many sub-classes of coincidence, one of many ways of looking at incidents. And once we look at it that way, the question “Is there a cause, or is it only coincidence?” becomes nonsensical:
Is there a coincidence, or is it only a coincidence?
To which the only possible answer is:
It’s all coincidence!
It’s useful to go back to first principles, and recognise that everything we perceive is, at first, only coincidence — literally, ‘co-incidence’. Reality is also Coincidence Department, in which anything and everything can — and does — happen. It’s important to realise that coincidences, in themselves, are neither meaningful nor meaningless: coincidence and meaning are quite separate. But if everything’s coincidence, we have to have some way of separating out ‘signal’ information from the chaotic mess of ‘noise’. In other words, we choose what we consider to be meaningful, and what we shall ignore. In that sense, it’s obvious that we have no choice but to invent some view of reality — if we didn’t, we’d go insane in the confusion. Most of the time, we can borrow other people’s definitions of what is real and what is not: but there are times when we have to make our own, subjective, choices — in “the judgemental, inspirational and even accidental processes that constitute much of engineering” and much else besides.
The meaning of a coincidence comes not so much from the content — what we see as going on — but more from the context, that which we see going on around the coincidence. If we view a particular coincidence as part of a chain of cause and effect, we’re looking mostly at the pattern in time: this coincidence always (or at least usually) comes after this one, but before that. We can also recognise other patterns in time, which are clearly not causal as such: a time of 1pm on the office clock does not cause me to feel hungry, but nonetheless there’s a common coincidence between them — and a useful one at that.
That last kind of coincidence is synchronistic, literally ‘at the same time’: we recognise that there’s a pattern that happens at the same time, but without a causal link. But it’s quite wrong to think of synchronicity as something opposed to causality — “It wasn’t a coincidence, it was a synchronicity” — as many people do, especially in the ‘New Age’ arena. It’s simply another class of coincidence — or interpretation of coincidence, rather. Kammerer’s ‘seriality’ is another, the clustering of a stream of similar numbers, or similar names, or similar events: if you buy a new car, suddenly you see dozens of that type on the road, where they’d seemed to be a rarity before. (Gooch’s Paradox plays an important part in that last example, of course: now that you ‘believe’ them — or, in this case, they’ve effectively become significant to you — you see them, because they stand out from the background ‘noise’ of all the other coincidences.)
The only type of coincidence that Plato’s ‘pure reason’ can handle is causality. If it can’t explain something in terms of cause and effect, it’s stuck — which is precisely why we get stuck so often, in treating technology solely as applied science. To get unstuck, we have to look more closely at what Coincidence Department has given us, and look at it in a different way. For a while, at least, we have to let go of the concept of cause, and play with some other kinds of interpretation that are beyond cause — though certainly not without effect!
In science and technology, the first move when we meet up with something we don’t understand is to make up some kind of model — a simplified copy of something else that we do understand. (An alternate term for ‘model’ is ‘hypothesis’ — perhaps more commonly used in science than in technology.) We know that what we’re looking at doesn’t behave in exactly the same way: we don’t have an explanation. But with a good model — or a good set of models — we can have enough of a handle on events to be able, not to predict exactly or to explain, but to describe what’s going on.
A model is not ‘true’, in the sense that a scientific law or a system of cause and effect is considered to be true. But that isn’t its purpose: the whole point is that the model should be useful. Once it ceases to be useful, we discard it, and use something else. If we’re dealing with some of the more confusing paradoxes in Reality Department, our playing with models can become something of a mental juggling act, tossing one model after another into the air. For example, look at all the different models we need to describe the not-so-simple workings of an electric light-bulb:
First, to generate electricity, we wave a wire around in a magnetic field, converting mechanical energy to electrical energy. The amount of electricity generated is, we say, proportional to the lines of magnetic flux cut per unit time — except there aren’t any actual lines to cut. Ignoring that problem, we then describe the electricity in terms of particles — electrons — in order to understand resistance; but we also describe it as a wave, because nothing actually moves down the wire. Neither model helps us understand the ‘skin effect’ at high frequencies: for that we treat electricity as if it’s a fast-moving liquid. We then bring the electricity as a wave of particles that isn’t either a wave or particles to a dimmer-switch: a four-layer PNPN device — a thyristor — in which, we say, electrons go one way, and positive ‘holes’ go the other — the end-result of which is that the electricity comes out as a chopped-up sine-wave. From there, on up the wire again, to the bulb filament, where our chopped wave of electrons — which somehow never left the power station, yet are here at the same time — is treated as a stream of particles once more. By another neat trick, particles of electricity change themselves into particles of light — photons — which ‘boil’ off the wire of the light filament. But particles of light can’t get through glass: so to help them on their way we call them waves again…
In reality, we don’t have a clue what electricity or magnetism or light actually are. We may say, as we saw John Taylor do earlier, that electromagnetism is one of four ultimate forces: but that’s just a name, not an explanation. All we have are models, describing electromagnetic effects like light and electricity in terms of something else that we do understand, like waves or particles. We can describe it in terms of quantum effects, of course — but that’s just another model that happens to accept the paradoxes inherent in the idea of a ‘wave of one particle’. All and none of these models are true — light is like waves or particles, but ultimately is neither — yet all of them can be useful, in the right context.
By describing light in terms of waves, we can understand refraction and reflection — a model we use to great effect in designing lenses. But a wave-based model of light is no help at all in understanding photo-electricity: for that we have to describe light in terms of particles. Each model allows us to do some things, but prevents us from understanding others. We can swap from one model to another, and we need to: but we have to know what model to change to, and when.
Or we could take another example, that of geometry. Standard school geometry, Euclidean geometry, depends on certain assumptions: that the shortest distance between two points is a straight line, for example. That’s certainly useful for working on simple plane surfaces — it’s intuitive in the sense of ‘obvious’, for most things in our everyday environment. But it’s positively dangerous if you try to use that geometry for long-distance navigation in the real world: in 1707 a British admiral by the unlikely name of Sir Cloudesley Shovell lost much of his fleet — and, incidentally, his life — on the Scilly Isles by depending too much on Euclid. To see his problem, ask yourself the textbook question:
What is the sum of the angles in a triangle?
You’d probably answer straight away with the textbook answer of 180 degrees — two right-angles. But a more precise answer would be “180 degrees — usually”. What school-level geometry fails to emphasise is that the textbook answer applies only to a special case: that of flat surfaces. If you try to use it on a globe, like Sir Cloudesley’s navigator, it no longer works: you’ll find, for example, that you can have a right-angle at the pole linking to two on a line at the equator — a total of 270 degrees. In fact the sum of the angles in a triangle on a sphere can be anything between 180 and 540 degrees, depending on how and where you draw the triangle; and as any airline map will show you, the shortest distance between two points — a ‘great circle’ route — may well appear to be a curve rather than a straight line. In a different world — a non-flat world — the rules have to change: they aren’t rules as such, but developments from assumptions that we choose to use.
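The pole-and-equator triangle above can be checked with a few lines of arithmetic, and Girard’s theorem then links the ‘excess’ over 180 degrees to the triangle’s area. The code is simply a worked check of those figures, not part of the original argument.

```python
import math

# The triangle described above: one vertex at the North Pole, two on the
# equator, 90 degrees of longitude apart.  Each meridian meets the equator
# at a right angle, and the angle at the pole equals the difference in
# longitude -- so all three angles are right angles.
angles = [90, 90, 90]
print(sum(angles))            # 270 degrees, not the 'textbook' 180

# Girard's theorem: on a sphere of radius R, the excess of the angle-sum
# over 180 degrees (converted to radians) equals area / R**2.
R = 1.0
excess = math.radians(sum(angles) - 180)
area = excess * R**2
# Our triangle covers exactly one-eighth of the sphere's surface:
assert math.isclose(area, (4 * math.pi * R**2) / 8)
```

The excess vanishes as the triangle shrinks relative to the sphere, which is why the flat-surface answer works well enough for a tabletop but fails for an ocean.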
So we don’t have ‘geometry’, but geometries: spherical geometry is more appropriate than Euclidean for the task of navigation, whilst the bizarre Riemannian geometry, with its curved description of space, provides a more useful map for the concepts of relativity theory. All and none of them are true: it’s up to us to choose the right one.
We have some way to test the models we choose. Sometimes the choice is not trivial at all — as Sir Cloudesley Shovell found to his cost. But by using a model to say “it’s like so-and-so…” — electricity is like waves, like particles, the inner sense of time can be used like an alarm clock — we then have something that analysis can test. We accept that it isn’t the same as that ‘so-and-so’, but for our purposes, we assume that it’s a close enough analogy to be able to use the same sort of analysis as we would for the so-and-so on which we’re basing the model. A model is not so much true as useful; not so much correct or incorrect, as appropriate or inappropriate.
The analysis we develop from a model — looking backwards as always — depends on the validity of the model’s assumptions: analysis can test the results, and thus test the assumptions. But you cannot ‘prove’ a model in the way that you can, say, with a mathematical theorem: all you can do is use it, or not use it. And its predictions are proven by their usefulness, not necessarily by their conformity with anything else — as you can see from the paradoxically confusing mess of models in that description of electricity above.
Analysis can test the model, but cannot really choose it. To illustrate this point, let’s go back to the robot tea-maker:
Command: Get up.
Command: What’s wrong?
Robot: I don’t know what ‘up’ is.
Command: This way (demonstrates). Now get up.
Robot: (moves right arm around in the air) Bzzz.
Command: What’s wrong now?
Robot: I don’t know how to get an ‘up’.
Command: All right, stand up.
A real robot wouldn’t be able to give hints like that: it would just sit there, twiddling its electronic digits. It continually surprises us with its inability to understand what we see as ‘obvious’: we forget that the robot can’t think for itself, it can only act logically. But in doing so it gives us an instant test for any flaws in our thinking. ‘Bzzz!’ — instant feedback (which is more than we’d usually get in the real world!). We have to do the robot’s thinking for it: we have to work out how to connect the robot’s imaginary world of logic — its ‘world-model’ — to the one we live in.
You’ll notice that there’s no logical connection between the robot’s complaint of “I don’t know how to get an ‘up'” and the Command’s reply of “All right, stand up”. More to the point, Command is making the connection on the robot’s behalf, adding a new piece to the robot’s model (more commonly referred to as its program). And in doing so, Command has to see the limitations of the robot’s logic — responding to its entirely logical but inappropriate action — and look out into a wider context to find a more precise instruction. But that’s not a process of analysis: exactly the opposite, in fact, since it’s looking wider, feeling wider, beyond the robot’s limitations, whereas analysis can only work within narrower and narrower limits.
So how do we work out what to tell the robot to do next? The simple answer is: we guess.
We watch the feedback, and we guess what to do next. The same is true looking inward: in skills like juggling, or even the inner clock, we watch the feedback, and we guess what to do next. Watch the feedback, guess what to do next; watch the feedback, guess what to do next — an iterative process whose length depends mostly on our ability to feel out appropriate moves at each stage.
Basically, it’s not just a guess, but an educated guess. And to help us do it well — to make an appropriate guess at each stage — we need to educate that ability to feel, to sense, to be aware.
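That iterative cycle — watch the feedback, make an educated guess, watch again — can be sketched as a simple loop. Everything here is invented for illustration: a real skill offers no tidy numeric ‘feel’ score, and the target value is arbitrary.

```python
import random

random.seed(1)   # reproducible, for this illustration only

def learn(feel, first_guess, tries=500):
    """Watch the feedback, guess what to do next -- repeatedly.

    'feel' reports how wrong the last attempt felt (lower is better);
    the learner keeps any small variation that feels like an improvement.
    """
    guess, best = first_guess, feel(first_guess)
    for _ in range(tries):
        trial = guess + random.uniform(-1.0, 1.0)   # an educated variation
        score = feel(trial)                          # the feedback
        if score < best:                             # felt better? keep it
            guess, best = trial, score
    return guess

# An invented stand-in for a skill: 'feeling' how far each attempt is
# from the (unseen) right movement at 7.0.
feel = lambda attempt: abs(attempt - 7.0)
result = learn(feel, first_guess=0.0)
assert abs(result - 7.0) < 0.5     # the guesses have homed in
```

Note that the loop never sees the target directly — only the feeling of better or worse — which is exactly the point of the passage: the length of the process depends on how well each next guess is felt out.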
Unfortunately, that’s exactly what our education does not encourage. Somehow, we have to learn it for ourselves.
Most of what we suffered in the name of education was not education at all, in the proper sense of the term. That style of teaching consists of requests for proper responses to predictable, known conditions: training us to follow fixed trains of thought, layer within layer of rules and exceptions to rules. And each lesson has a pre-set pattern, a pre-set structure with which to be ‘in-structed’, a sequence with a pre-set item of information to be ‘in-formed’ into our minds and a predefined (and testable) goal.
This isn’t education: it’s training — teaching for robots, not people. The difference is subtle but simple:
To be prepared against surprise is to be trained. To be prepared for surprise is to be educated.
The whole aim of conventional training is to prevent surprises. Industry wants a known end-product; universities want students of a known standard; parents and politicians want provable and predictable standards; and so forth. But the only thing that’s no surprise about training is boredom. Beyond that, it simply doesn’t work, and can’t work, for a very simple reason:
Mother Nature loves to throw a surprise party
There always will be surprises: we cannot plan against every one. In one recent example of a nasty surprise — the crash-landing of a crippled airliner at Sioux City, Iowa, in 1989 — all three independent hydraulic systems were cut by a single engine failure, leaving the aircraft with barely more than half-power and no hydraulics — hence no landing gear, no vertical control, and able only to turn, at best, to the right. It says a great deal for the skill of the aircrew that more than half the passengers survived the subsequent crash-landing: because there was nothing in ‘the book’ — or any book — that could tell them how to handle the situation. With three independent systems, total hydraulic failure had been thought to be impossible, or near enough impossible to ignore as far as emergency-procedures training was concerned. And yet Coincidence Department provided some help, in a rather different kind of surprise: the airline’s chief training officer just happened to be riding with the aircrew that day.
Training is also too slow for the real world. Based on analysis, it can only look backwards — yet all the changes, all the surprises, come up as we move forwards in a never-quite-repeating reality. Most of what we learn in school is out of date by the time we come to apply it in the world ‘outside’; university researchers and other professionals spend a large part of their time just trying to keep up with the deluge of new information they must absorb in their work. And in the computer industry, the whole issue of training is close to breakdown: as soon as operators complete a training course, a new version of the program — or of the system, or even the whole technology — is released, and they have to start all over again.
Somehow we have to be prepared for change, be prepared for surprise rather than against it. But if what passes for education is only training that doesn’t work — in fact can’t work — then what would real education look like? Where do we begin?
Conventional education has no real suggestions other than those in the word itself: ‘education’, literally ‘out-leading’. Instead of trying to cram everything in, we lead out from within an awareness and an ability to learn — an ability to ‘self-train’, using the ‘in-teaching’ of intuition. This aspect of intuition is better known as ‘common sense’ — but there seems to be an unfortunate problem:
Common sense isn’t
Robert Pirsig, in his classic Zen and the Art of Motorcycle Maintenance, used the word ‘gumption’ as an alternate term for common sense. And as he pointed out:
In traditional maintenance gumption is considered something you’re born with or have acquired as a result of a good upbringing. It’s a fixed commodity. From the lack of information about how one acquires this gumption one might assume that a person without any gumption is a hopeless case.
It’s clear, from painful experience, that gumption can’t be taught: each generation makes the same old mistakes, time after time after time. We can’t teach people common sense; we can’t force people to think with a wider awareness. So as far as conventional education is concerned, there’s nothing we can do: either you have it, or you don’t.
At the surface, that seems to be true. But if we look at our own common sense more closely, it becomes clear that, whatever it is, it varies. Sometimes we can fix things — such as telling the robot what to do — with no trouble at all; but at other times, often very similar situations, we just get stuck. We’re not stuck because of a shortage of information — it’s more likely, in fact, that we have too much information — but it’s something more to do with the subjective side of the problem: in other words, us. We’re part of the problem: so we can also be part of the solution. As Pirsig continues:
Gumption isn’t a fixed commodity. It’s variable, a reservoir of good spirits that can be added to or subtracted from… Gumption is the psychic gasoline that keeps the whole thing going.
Without gumption nothing can get fixed; with a good supply of it, there’s no way that things can avoid being fixed. But at this stage of the labyrinth it’s extremely hard to see how we add to or subtract from that reservoir of ‘psychic gasoline’.
In a way it seems to be not so much a commodity as a skill — certainly keeping up the supply would seem to be a skill in its own right. All we can do is to make a few observations, though they’re somewhat subjective. We can see, for example, that in situations where theory has priority, where the job is ‘de-skilled’ and supposedly ‘any idiot can do it’, gumption disappears completely — to be replaced, in most cases, by a chaotic mess for which no-one takes responsibility. But where skills are emphasised — such as in traditional-style apprenticeship — gumption seems to grow with experience, in the process of converting theory into personal practice.
That we can see: but beyond that it’s not easy to see how we can educate gumption. Almost all our approaches to education are based on analysis, and that leaves us, stuck, with the same problem as before: analysis we can analyse precisely, but intuition we can only intuit.
The answer, then, would seem to do just that: to learn how to intuit intuition — to sense it, to feel it, to invite it out from within.
One thing we do know about intuition is that it’s shy, even timid: look too closely at it, and it disappears, just like magic. By analogy, it’s like trying to look at a dim star at night: we know it’s there, but we can only see it by looking away from it.
On the surface, some of it seems to be little more than chance: there are “judgemental, inspirational processes”, but also “accidental” ones, a matter of luck, of being ‘in the right place at the right time’. But like gumption, luck is something other than a mere random ‘commodity’ — as Louis Pasteur’s famous dictum reminds us:
In the field of observation, chance favours only the prepared mind
Much research depends on what is almost a technology of luck. To make use of those accidents, we need to know when something useful has happened: we have to be prepared to be surprised, prepared for surprise. Preparing against surprise prevents accidents — but here we need something accidental to let us see what our current fixed viewpoint won’t let us see. Letting go, for a while, of normal views of cause and effect, of what things ought to be like, allows another kind of pattern-matching, an intuitive kind of pattern-matching, to come into play.
But it’s like a small shy child: it has to be coaxed into play. Sometimes it will only come out in the dark, or at least in the darker spaces of the mind. One of the best-recorded examples of this was Kekulé’s discovery of a workable model for the structure of benzene — one of the key discoveries in the history of organic chemistry. At the time, all hydrocarbons were assumed to have chain-structured molecules, yet benzene makes no sense as a chain. Whichever way he looked at it, the analysis simply didn’t work. In other words, like all of us at times, he was stuck:
But it did not go well; my spirit was with other things. I turned the chair to the fireplace and sank into a half sleep. The atoms flitted before my eyes. Long rows, variously, more closely, united; all in movement wriggling and turning like snakes. And see, what was that? One of the snakes had seized his own tail and the image whirled scornfully before my eyes. As though from a flash of lightning I awoke; I occupied the rest of the night in working out the consequences of the hypothesis… Let us learn to dream, gentlemen.
[[RESERVE 3 inches]]
[[CAPTION “One of the snakes seized its own tail”: the structure of benzene]]
Whilst few scientists applied that last comment to their own work, many in the present-day New-Age movement have done so all too literally. In a strange mixture of wishful thinking and laziness, many have made grandiose claims for the all-encompassing power of dreams. But dreaming alone is rarely, if ever, enough: once again, Pasteur’s comment was that chance favours only the prepared mind. We do have to do the work: solutions don’t arrive on their own by magic.
And yet they do come on their own — if we let them — but in a way that is even stranger, even crazier, than mere dreams. Intuitive solutions arrive, on their own — if only we can see them. Computer consultant Gerald Weinberg says:
I see the answer in the first five minutes but it can take me hours or days or weeks
to see what it was I saw in those first five minutes
Here we’re trying, in an intuitive way, to re-create beginner’s luck — but to do that we need an awareness of how we got there. The apparent principle behind any method for doing so is simple enough: do a great deal of preparation, of hard analytic ‘99% perspiration’ — and then, quite deliberately and consciously, set out to forget it. Whatever you’re stuck on, analyse and analyse and analyse it: then let go. We go back to the state of the Fool in beginner’s luck: ‘the Fool succeeds not in spite of knowing nothing, but because of knowing nothing’. Under the right conditions — about which it seems we can never quite be certain — information simply arrives from nowhere:
To remember something that you never knew, first set out deliberately to forget it
Once we understand this and accept it for what it is, we can use it as a method for encouraging intuitions — a working example of a technology of luck. Using it, quite suddenly something becomes ‘obvious’ to us, even though we couldn’t see it at all before — the inverse of the old adage about ‘You can’t see the wood for the trees’. But whilst this is a kind of method, the exact conditions aren’t clear, and certainly aren’t ‘objective’: as a result, we each have our own little rituals that seem to encourage this to happen for us.
One problem, though, is that few of these methods, on the surface at least, will seem to be entirely sane. They’re certainly odd: analogy, allegory, ‘brain-storming’, even what one writer described as ‘a whack on the side of the head’ and ‘a kick in the seat of the pants’. One technique I’ve used myself in the past was called ‘invoking Edison’: I would imagine, to myself, that the great engineer Edison would magically come to my aid, and show me, in the next coincidence, some information to tell me what to do next. The ‘next’ coincidence isn’t necessarily instantaneous: in a recent instance, I’d been stuck for at least a couple of weeks in my juggling, unable to get past a single ‘switch’, until a visit to a friend. ‘Edison’ here was her husband who, it turned out, was a keen juggler, and able to show me the information I’d missed. (I was throwing the return ball too early, before rather than after the crossing ball reached its peak.) In other words, I’d accepted that I was stuck — then allowed Coincidence Department to provide me with clues. Just sometimes, part of the accepting is accepting that I may have to wait!
When you’re stuck, ask Edison what to do — even if Edison turns out to be the office cat
Evidently there’s no causal connection between the coincidence — the blunderings of the office cat, or whatever — and what we make of it, and it’s hardly predictable, in the conventional sense: but we do find that it usually works, for us if not necessarily for anyone else. In that sense, it’s exactly the same as any other technology: we don’t really know how it works, but it does usually work. It’s a personal technology, a subjective technology. The inner alarm clock, of course, relies entirely on this kind of technique. For another example, one programmer colleague, when he’s stuck, goes to an all-night movie with a pad and pencil in his pocket, and relies on Coincidence Department — in the form of the movie script — to provide some kind of nudge. Ideas come to me when I’m practising my juggling during a break in my writing; and more prosaically, another colleague finds that ideas come to him when he goes to the different environment of the bathroom for a rather different kind of break. Somehow or other, we have to let go — of conscious control, at least. It all seems a little crazy, but:
There’s a method in the madness
There’s a certain amount of trust involved. To make it work — or let it work — we have to let go of our usual way of working and let something else take over: a ‘something’ that seems to be almost outside of us. It can be more than a little frightening: a hint of madness, a loss of control, certainly a loss of our normal sense of certainty.  And to work this ‘way of the Fool’ we have to accept being ‘Fool-ish’ — which is hardly comfortable when the world requires us to be utterly competent. We also tend to be wary of looking crazy to others — which asking the office cat for advice would certainly seem to be. Whatever we do here:
There’s a method in the madness
but there’s madness in the method
It seems to be out of our control: Mozart was all but besieged by music, Gauss ‘seized’ by mathematics, and Poincaré found himself presented with one of his most famous solutions whilst stepping up into a bus. It’s clear that this kind of intuition is more than a mere ‘fast analysis’: analysis can only work within the known, whereas this kind of information arrives from beyond the known, from somewhere outside of the normal sense of self. For many artists, of course, this abandonment of self is sought deliberately, as a way of life. Ron Weldon, for example, said that for him the process of painting was:
trying to get in touch with the intuition and reach that unconscious level where ‘it’ takes over.
That ‘it’ sounds almost like George Lucas’ description of ‘the Force’ in the Star Wars saga: something vague and undefinable that is in each of us, yet pervades everyone and everything. And yet, surprisingly, it’s clearly there: we can’t quite analyse it, yet we can intuit it, feel or sense its presence — or absence. And it seems to be as unpredictable — and uncontrollable — as gumption and common sense: it comes and goes in waves. I can feel it come and go in waves, as I attempt to juggle: there are quite unmistakeable moments when I know that everything will work — followed immediately by moments when I think it will work, with the result that it doesn’t. Equally, there are long stretches when I’m fairly certain that nothing much is going to happen, but I keep practising anyway: I have to do the work. I keep trying, but I also let go: with the result that, every now and then, I allow a pleasant surprise — a decent stretch of juggling — to come through.
The juggling becomes a form of meditation, just watching the balls pass from side to side — and drop, of course. But it seems that it works best when I let it happen: if I do nothing, nothing happens, but if I try too hard everything happens. Just somewhere in between is a space, a kind of ‘doing no-thing’, where the balls seem to move themselves, placing themselves in each hand in exactly the right rhythm. And yet the moment I notice that it’s working, it all comes apart — it vanishes, just like trying to look straight at a dim star at night.
It is, to say the least, frustrating. But there’s an important clue there: that my hands, as long as I leave them alone, without trying to control everything, seem to know what to do. Things are moving so fast that I don’t even have time to see what they’re doing, let alone control what they’re doing: and yet it all works. It’s almost as if my hands have eyes of their own. So if I can just direct the process and leave everything else alone, it seems, my body’s own knowledge can come through.
The human body, we could say, is a creature of habit: it certainly does take time for it to learn new ones. Each new skill is a collection of new habits, new sequences of movements: and we have to go through them over and over and over again until each part of the pattern is learnt.
It’s all ‘in-tuition’, in the sense of both ‘teaching within’ and ‘teaching from within’. To get that intended knowledge to become body-knowledge, we have to show, repeatedly, what we want to have happen — and then watch what we get.
At this stage in learning to juggle, for example, I can just about manage a dozen passes — a dozen moves of a ball from one hand to another without dropping one. But that’s only because of all the practice: throw the balls until they stop or drop, watch what’s happened, throw again, re-assess — hundreds and hundreds and hundreds of times. It comes and goes in waves, sometimes eight or nine passes, sometimes only one before I drop the lot — overall, it’s better than it was, but it still doesn’t work well. The best move I can make at the moment is just keep going:
Practice doesn’t necessarily make perfect but at least it’s more perfect than nothing
At each stage I watch: the return throw is nicely timed now, but my left hand tends to throw less high with each pass, and my right hand has a habit of either forgetting to let go of a ball, or throwing it far too low and to the left. I can’t control them. But what I can do is redirect them, draw attention to their — for these purposes — ‘bad habits’. So I talk to my hands, so to speak: ask them to correct these habits. Sometimes they listen, more usually they don’t: but eventually, slowly, over minutes or even hours of practice, they get the idea. Then there’s another misunderstanding I have to draw my hands’ attention to, and another, and another: a slow iterative process of body-learning.
At one level this seems crazy — literally talking to myself. But it’s not ‘out there’ that I’m teaching: it’s part of me. And one way of reaching a specific aspect of me — my hands, in this case — is to imagine that each aspect is run by a separate sub-mind. It’s an intuitive technique — an analogy, in fact — for handling an ‘in-tuition’ problem. I imagine, then, that each of these not-quite-imaginary sub-minds is like a cantankerous child: I can’t control it directly, but can at least give it instructions and advice, and point out the errors of its ways. I also have to listen, though, to what it tells me. It’s a two-way process — a feedback loop of learning, of teaching within and teaching from within.
One problem is that this ‘kinesthetic’ intuition, a merging of the senses into one greater whole, develops out of feelings, out of information from all the senses — whereas we depend far too much on vision alone. Almost all our instruments present their information for our eyes, all but excluding the other senses. So for most purposes, we trust our eyes, and our thoughts — and very little else. But seeing alone is not enough:
If seeing is believing, what then is feeling?
I don’t have time in juggling to watch every movement of my hands and of the balls. If I try to watch everything, to control everything, either nothing happens, or everything but what I want happens. Instead, I have to rely on a more overall feeling, an overall sensing of movement — the literal meaning of ‘kinesthesia’ — using my vision more to oversee the movements than to try to control them. That works — somewhat. Certainly better than trying to control everything, which doesn’t work at all. But still, frustratingly, not well enough — not at this stage, at any rate.
To develop that overall awareness further requires a deliberate effort to withdraw attention from the eyes and concentrate on what things feel like: their textures, their strengths and weaknesses and all their other attributes — especially from the point of view, by analogy, of what it would feel like to be the materials. The engineers Laithwaite and Thring, in their work on the education of invention, coined the term “thinking with the hands” to describe this process of intuitive knowing, allowing our hands to tell us what needs to be done at any given point. There’s no easy way to describe it, though: it’s strictly subjective, a feeling, or a set of feelings — and as such cannot be defined in an objective way.
Our training’s obsession with objectivity makes it much harder for us to reach that level of understanding. But by analogy at least, ‘things’ like a set of juggling balls are not outside of us: we manipulate them by making them an extension of us — we ‘know’ them by imagining them to be part of us. The conventional subject/object division may be useful for some purposes, but for the practice of skills it’s an active hindrance.
The same is true of what we call ‘mechanic’s feel’, which Pirsig describes as a ‘deep inner kinesthetic feeling for the elasticity of materials’. It’s not objective at all, but a personally-learned inner understanding of materials that leads us to know when a bolt is finger-tight, or snug, or over-tight. In principle, we could use a torque-wrench for the purpose, to measure the exact tension as we tighten the bolt: in fact conventional training of mechanics insists on its use, precisely because the ‘tightness’ readings a torque-wrench gives seem to be objective — hence presumed to be predictable, reliable, certain — whilst subjective feelings obviously are not. Specifying that a particular bolt shall be tightened to ‘exactly 10 ft-lb’ has a nice ring of certainty about it that ‘finger-tight’ cannot match.
But the torque-wrench, for all its objectivity, is not as certain as it seems. Every material — every different metal, plastic, glass, ceramic — has a different elasticity, which itself depends on size and shape and, in the example of a bolt, on what other material it’s being screwed into. Natural materials such as wood aren’t consistent, so there’s no way we can put an absolute value on the necessary tension; even with manufactured materials the appropriate tension varies with wear, with metal fatigue and so on. The result is that in real-world mechanics — as opposed to the comfortable idealised world of theory — we have an infinite variety of tension-values that could apply, always depending on context.
Even if we could memorise every possible combination of tension-values, the torque-wrench could still mislead us into thinking a bolt is tight when it’s simply stuck on a piece of grit. To get round that possibility, we need to know the feel of what’s happening to that bolt as we screw it in. So what we need, somehow, is to balance an increasing trust in intuitive ‘feel’ with proper use of analytic tools such as the torque-wrench. In other words, to quote the old Arab proverb:
Trust to Allah, but tie the camel first
We can do the same with the inner clock: with practice, the body knows and can tell us — itself — when to wake up. We can learn to trust it. But in the meantime, while we’re learning, we ‘tie the camel first’: in other words, we also set the mechanical alarm — knowing that that can fail too. But between the two of them — the analytic tool, and the intuitive one — we’ll probably be able to wake up on time.
Even then, we don’t quite trust intuition — with good reason, because it’s never absolutely reliable at the best of times. We want — crave for — absolute control, absolute certainty. But we often don’t even trust intuition when we can trust it, because we don’t know when we can — we don’t know how to know when we can. And that’s where things start to come apart.
Intuition isn’t obvious
What stops us from trusting intuition, much of the time, is that it just isn’t obvious: it doesn’t make immediate sense. That’s a problem, because we also tend to use the word ‘intuitive’ as a synonym for ‘obvious’. In computing, for example, we often use the term ‘intuitive’ to mean something, some aspect of a program, that is easy to use. It matches our habitual ways of working: it does make ‘immediate sense’. The converse, which, if we’re honest, describes most computer programs, is ‘counter-intuitive’: something that is not easy to use, something that encourages us to make mistakes and mis-assumptions. In that sense, intuition itself seems ‘counter-intuitive’: we can’t see it, we can’t grasp it, we can’t make it fit within a comfortable framework. If intuition isn’t intuitive in that sense, it seems, then it surely can’t exist — or if it does, we’re best to just forget about it.
That may be true: but then exactly the same is true of most of our current concepts of science and technology. They’re ‘counter-intuitive’ too. We can’t see galaxies, we can’t see molecules or magnetic fields. We can infer or imply that they exist, and make use of those concepts in models, even to the extent of making up laws that describe their apparent behaviour: but their existence is no more tangible — ‘real’ in the everyday intuitive sense — than intuition itself. As Pirsig commented, they’re ghosts: figments of the imagination. 
Science deals with abstractions and idealisations in a way that used to be counter-intuitive, but now somehow we’ve become convinced that the abstractions are more real than the real thing. We calculate the motion of a juggling ball, and say that, according to Newton’s laws of motion and gravity, it ‘ought to’ travel in a smooth parabolic arc. We expect consistency; we expect predictability. But the common-sense experience is that if we throw a juggling ball, its path is not a smooth arc — in fact it looks more like one of those early gunnery-range diagrams, where the projectile climbs smoothly and then just drops. That’s reality; that’s our actual common sense of what happens.
[[RESERVE 3 inches]]
[[CAPTION “The arc of a projectile”: prediction and reality]]
We tend to think that that kind of diagram is quaint: pre-scientific, hence just plain wrong. In fact, it’s true technology: a precise series of observations on what actually happens to a real object under real conditions of drag and friction and the rest — a more accurate description in practice than the neat and tidy parabola. Common sense in technology, our common experience, is that no law is a law: it’s only a guideline, a useful model. But for a variety of reasons we choose to believe that the over-simplified abstractions of friction-free, reality-free ‘laws of motion’ are true, a more accurate picture of reality — and ignore our own experience. What we think of as scientific common sense — obvious, intuitive — is no more than a collection of ideas, in reality neither common nor sensed. Common sense isn’t: the ghosts have taken over.
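The point about the ‘quaint’ gunnery diagram can be sketched numerically. The following is a minimal illustration only, not ballistics: the drag constant `k` is an arbitrary made-up value, the launch velocity is invented for the example, and the integration is crude Euler stepping. It shows that the ideal drag-free arc peaks at mid-range, while a trajectory with air drag is asymmetric: it covers more ground on the way up than on the way down, and so drops more steeply at the end, much as the old diagrams drew it.

```python
# Compare an ideal drag-free projectile arc with one under quadratic
# air drag. All numbers here are illustrative, not measured values.

def trajectory(vx, vy, k=0.0, dt=0.001, g=9.81):
    """Integrate a projectile's path by Euler stepping.

    k is a drag constant (per metre); k = 0 gives the ideal parabola.
    Returns the list of (x, y) points until the projectile lands.
    """
    x, y = 0.0, 0.0
    points = [(x, y)]
    while y >= 0.0:
        speed = (vx * vx + vy * vy) ** 0.5
        # Quadratic drag opposes the velocity; gravity acts downward.
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
        points.append((x, y))
    return points

def peak_fraction(points):
    """Horizontal position of the highest point, as a fraction of range."""
    peak_x = max(points, key=lambda p: p[1])[0]
    return peak_x / points[-1][0]

ideal = trajectory(10.0, 10.0, k=0.0)     # smooth symmetric parabola
dragged = trajectory(10.0, 10.0, k=0.05)  # asymmetric real-world-ish arc

print(peak_fraction(ideal))    # close to 0.5: peak at mid-range
print(peak_fraction(dragged))  # above 0.5: steeper, shorter descent
print(dragged[-1][0] < ideal[-1][0])  # drag also shortens the range
```

The asymmetry is the recoverable ‘common sense’ in those old diagrams: the descent really is steeper than the climb once drag is in play, however tidy the textbook parabola looks.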
If that’s so, it’s hardly surprising that we get stuck.
One point on which we trap ourselves is that we expect sameness. We think it’s obvious that events repeat themselves in regular patterns; and we like the comforting sense of predictability that science’s concept of law gives us in dealing with them. But it’s an illusion — a ghost. And we defend that illusion with all our might and our education, claiming that it’s ‘common sense’, but betraying our irrationality with logical absurdities like the common phrase:
The exception proves the rule
Anything that doesn’t fit, we dismiss as ‘random noise’ or ‘experimental error’. But in the real world, that’s dangerous nonsense: as Jim Williams commented, “in my field, [the design of electronic] linear circuits, just about everything is an exception”. In the real practice of technology, we quickly discover that everything interacts with everything else; nothing is quite certain any more. Gone is that nice closed system of interlinked equations; gone are the tidy laws that seemed to work so well under laboratory conditions. Reality is no illusion: rules are. So to match reality, that phrase needs a slight change:
The exception proves that the rule isn’t
Events never quite repeat: they’re often similar, of course, but never exactly the same. There are no absolute rules, except for the absolute rule that there aren’t any absolute rules. That’s obvious; that is common sense. It’s just that we don’t like to admit it: because then we’d have to admit that we’re never actually in total control of events — or in control of anything.
But it is important to recognise that the rules are useful — as guidelines, but not as laws. Many things, many sequences of events, do occur with a high degree of repeatability: most things do work mostly the way that we expect. Where we need our intuition, and the sensing, the ‘thinking with the hands’ that creates common sense and gumption, is to work out when they don’t repeat. The more probable it is that a pattern will repeat, the more we have to be on our guard for the improbable — but inevitable — occasions when it doesn’t repeat. We use intuition to show us what is not obvious, what is not probable — but still important in the context:
Analysis depends on the theory of probability
Intuition copes with the practice of improbability
Where analysis is always looking for sameness, intuitive processes such as perception are always looking for not-sameness, for change or difference. If they can’t perceive any change, they simply shut down. We become so ‘habituated’ to a background noise like the ticking of a clock that we only perceive it when there’s a change: someone points it out to us, or, more confusingly, when the clock stops — in other words when there’s nothing there. The actual sound has disappeared, leaving only a ghost, a memory of what ‘ought’ to be there:
As I was walking down the stair
I saw a man who wasn’t there.
He wasn’t there again today —
Oh how I wish he’d go away!
More often we wish it had not gone away. One of the hardest tasks in any technology is to work out what’s missing — especially as the effects of the absence of even the most trivial item can cascade upwards into major proportions. It’s all too easy to delete by accident a single line from a computer program: but a sheer nightmare to deduce, from the suddenly bizarre performance of the program, exactly what’s happened, which line (out of thousands, or tens of thousands) has gone missing, what to do to replace it, and so on. It’s easy to forget a single screw in reassembling an engine; but if we do, the whole lot has to come apart again — or else the engine may come apart of its own accord, at a time of its choosing rather than ours.
Analysis can only work in hindsight: so the standard analytic technique for this problem is the check-list — something from which to reason back to reality, something with which to compare. But it doesn’t always work — as we saw with the aircraft crash earlier, no check-list can ever be complete. It’s useful, but never quite reliable — especially if you rely on it as the sole indicator of absolute truth. And since academic examinations are based on the same idea — a check-list of what the student is supposed to know — they’re not exactly reliable as a test of knowledge, or of the ability to act appropriately in the real world.
We have to have some way to get an overall feel, a flavour, a synesthetic sense, of what the pattern is and what it should be. That’s what intuition provides: we allow information to arise from an inner common sense, itself built up from open observation and awareness in experience. Almost magically — ‘out of the corner of my eye’, we might say — we get a warning that something’s wrong. But then we can’t see what’s wrong: “I get the answer in the first five minutes, but it then takes me hours or days to see what I saw in the first five minutes”.
It’s frustrating, confusing, unnerving. We all know the feeling: the theory says everything’s working fine, everything fits the checklist, so in principle everything must be fine. But we have a feeling of doubt, just a nagging doubt, a tiny warning bell, a sense of ‘something not quite right’. Yet that’s all it is. Indefinite. There’s no way to say what it is, what it’s about. Unhelpful; irritating. As a result, there’s a strong temptation to just ignore those intuitions, in the hope that they’ll just go away. Life seems much easier if we ignore them and just stick to theory: it’s much more certain.
Obligingly, the intuitions do indeed go away if we ignore them. In effect, we learn not to see them in technology and elsewhere. But because they’re not there any more, we don’t see that they’ve usually been trying to tell us something. And we don’t see any link between their absence from technology, and the chaotic not-quite-control that’s typical of ‘applied science’. The connection’s not obvious, so there surely can’t be any connection.
In that sense, it seems ‘intuitive’ to say that we don’t need — don’t want — intuition. Unfortunately, that particular intuition, that particular piece of ‘common sense’, happens to be wrong. But the same is true of many, if not most, of our other intuitions. Which is, of course, one reason why we don’t trust them.
You can’t trust intuition
Another difficulty with intuitions is that they come packaged complete with a feeling of ‘rightness’ (or, occasionally, ‘wrongness’) attached to the information. We’re certain that it’s true, simply because it feels true. It’s obvious. To us, at any rate.
But simply seeming to be true does not mean that it is true. I may feel quite certain that my next series of throws of the juggling balls will (or won’t) run well; as a result I have definite expectations, and may get quite upset if things don’t work the way I expect — even if it’s better than I expect, in fact. Working better than I expected can feel quite wrong; and failure to perform as well as I expected obviously feels wrong.
Edward de Bono, in Practical Thinking, calls this state ‘emotional rightness’ — and it’s almost impossible to separate the information from the emotion that’s attached to it. Despite what we may feel, most of these intuitions are just plain wrong — as Beveridge points out in The Art of Scientific Investigation, and as we find out in our experience with the juggling balls, for example. The danger is that when it feels so right — ‘received truth’ — we tend to stop thinking.
It’s not a good idea. We need to be able to feel; we also need to be able to think.
In our usual non-education, analysis wipes out intuition: here, intuition wipes out analysis.