Cognitive Dissonance in Programming

A programmer who truly sees his program as an extension of his own ego is not going to be trying to find all the errors in that program. On the contrary, he is going to be trying to prove that the program is correct – even if this means the oversight of errors which are monstrous to another eye.

In the field of psychology, cognitive dissonance is the mental discomfort (psychological stress) experienced by a person who simultaneously holds two or more contradictory beliefs, ideas, or values. The occurrence of cognitive dissonance is a consequence of a person performing an action that contradicts personal beliefs, ideals, and values; and also occurs when confronted with new information that contradicts said beliefs, ideals, and values.

In Aesop’s fable of The Fox and the Grapes, on failing to reach the desired bunch of grapes, the fox decides he does not truly want the fruit because it is sour. The fox’s act of rationalization (justification) reduced his anxiety over the cognitive dissonance that arose from a desire he could not realize.

Programming – perhaps more than any other profession – is an individual activity, depending on the abilities of the programmer himself, and not upon others. What difference can it make how many other programmers you run into during the day? If asked, most programmers would probably say they preferred to work alone in a place where they wouldn’t be disturbed by other people.

The ideas expressed in the preceding paragraph are possibly the most formidable barrier to improved programming that we shall encounter. First of all, if this is indeed the image generally held of the programming profession, then people will be attracted to, or repelled from, entering the profession according to their preference for working alone or working with others. Social psychologists tell us that there are different personality types – something we all knew, but which is nice to have stamped with authority. Among the general personality traits is one which is measured along three “dimensions” – whether a person is compliant, aggressive, or detached. The compliant type is characterized by the attitude of liking to work with people and be helpful. The aggressive type wants to earn money and prestige, and the detached type wants to be left alone to be creative.

Now, every person contains a mixture of these attitudes, but most people lean more heavily in one direction than the others. There is no doubt that the majority of people in programming today lean in the “detached” direction, both by personal choice and because hiring policies for programmers are often directed toward finding such people. And to a great extent, this is a good choice, because a great deal of programming work is solitary and creative.

Like most good things, however, the detachment of programmers is often overdeveloped. Although they are detached from people, they are attached to their programs. Indeed, their programs often become extensions of themselves – a fact which is verified in the abominable practice of attaching one’s name to the program itself. But even when the program is not officially blessed with the name of its creator, programmers know whose program it is.

Well, what is wrong with owning programs? Artists own paintings; authors own books; architects own buildings. Don’t these attributions lead to admiration and emulation of good workers by lesser ones? Isn’t it useful to have an author’s name on a book so we have a better idea of what to expect when we read it? And wouldn’t the same apply to programs? Perhaps it would – if people read programs, but we know they do not. Thus, the admiration of individual programmers cannot lead to an emulation of their work, but only to an affectation of their mannerisms. This is the same phenomenon we see in art colonies, where everyone knows how to look like an artist, but few, if any, know how to paint like one.

The real difficulty with property-oriented programming arises from another source. When we think a painting or a novel or a building is inferior, that is a matter of taste. When we think a program is inferior – in spite of the difficulties we know lurk behind the question of good programming – that is a matter at least potentially susceptible to objective proof or disproof. At the very least, we can put the program on the machine and see what comes out.

An artist can dismiss the opinions of a critic if they do not please him, but can a programmer dismiss the judgment of the computer?

On the surface, it would seem that the judgment of the computer is indisputable, and if this were truly so, the attachment of a programmer to his programs would have serious consequences for his self-image. When the computer revealed a bug in his program, the programmer would have to reason something like this:

This program is defective. This program is part of me, an extension of myself, even carrying my name. I am defective.

But the very harshness of this self-judgment means that it is seldom carried out.

Starting with the work of the social psychologist Leon Festinger, a number of interesting experiments have been performed to establish the reality of a psychological phenomenon called “cognitive dissonance”. A classical experiment in cognitive dissonance goes something like this:

Writing an essay

Two groups of subjects are asked to write an essay arguing in favor of some point with which they feel strong disagreement. One group is paid $1 apiece to write this argument against their own opinions; the other is paid $20 apiece. At the end of the experiment, the subjects are re-tested on their opinions of the matter. Common sense would say that the $20 subjects, having been paid more to change their minds, would be more likely to change their opinions. Cognitive dissonance theory predicts that it will be the other group which will change the most. Dozens of experiments have confirmed the predictions of the theory.

The argument behind cognitive dissonance theory is quite simple. In the experiment just outlined, both groups of subjects have had to perform an act – writing an essay against their own opinions – which they would not like to do under normal circumstances.

Arguing for what one does not believe is classed as insincerity or hypocrisy, neither of which is highly valued in our society. Therefore, a dissonance situation is created.

The subject’s self-image as a sincere person is challenged by the objective fact of his having written the essay.

Dissonance, according to the theory, is an uncomfortable and unstable state for human beings, and must therefore be quickly resolved in one way or another. To resolve a dissonance, one factor or another contributing to it must be made to yield. Which factor yields depends on the situation, but, generally speaking, it will not be the person’s self-image – that manages to be preserved through the most miraculous arguments.

Now, in the experiments cited, the $20 subjects have an easy resolution of their dissonance:

Of course, I didn’t really believe those arguments. I just did it for the money.

Although taking money to make such arguments is not altogether the most admirable trait, it is much better than actually holding the beliefs in question.

But look at the difficult situation of the $1 group. Even for poor college students – and subjects in psychological experiments are almost always poor college students – one dollar is not a significant amount of money. Thus, the argument of the other group does not carry the ring of conviction for them, and the dissonance must be resolved elsewhere. For many, at least, the easiest resolution is to admit that there is really something to the other side of the argument after all, so that writing the essay was not hypocrisy, but simply an exercise in developing a fair and honest mind – one capable of seeing both sides of a question.

Another application of the theory of cognitive dissonance predicts what will happen when people have made some large commitment, such as the purchase of a car. If a man who has just purchased a Ford is given a bunch of auto advertisements to read, he spends the majority of his time reading about Fords. If it was a Chevrolet he purchased, then the Chevrolet ads capture his attention. This is an example of anticipating the possibility of dissonance and avoiding information that might create it. For if he has just purchased a Ford, he doesn’t want to find out that Chevrolet is the better car, and the best way to avoid that is to avoid reading the Chevrolet ads. In the Ford ads, he is not likely to find anything that will convince him that he is anything but the wisest of consumers.

Now, what cognitive dissonance has to do with our programming conflict should be vividly clear. A programmer who truly sees his program as an extension of his own ego is not going to be trying to find all the errors in that program. On the contrary, he is going to be trying to prove that the program is correct – even if this means the oversight of errors which are monstrous to another eye. All programmers are familiar with the symptoms of this dissonance resolution – in others, of course. The programmer comes down the hall with his output listing and it is very thin. If he is unable to conceal the failure of his run, he makes some remark such as:

It must be a hardware problem.


There must be something strange in your data.


I haven’t touched that code in weeks.

There are thousands of variations on these objections; if you are interested in finding more, check out devexcuses or programmingexcuses. But the one thing we never seem to hear is a simple:

I goofed again

Of course, where the error is more subtle than a complete failure to get output – which can hardly be ignored – the resolution of the dissonance can be made even simpler by merely failing to see that there is an error. And let there be no mistake about it: the human eye has an almost infinite capacity for not seeing what it does not want to see. People who have specialized in debugging other people’s programs can verify this assertion with literally thousands of cases.

The human eye has an almost infinite capacity for not seeing what it does not want to see.

Programmers, if left to their own devices, will ignore the most glaring errors in their output – errors that anyone else can see in an instant. Thus, if we are going to attack the problem of making good programs, and if we are going to start at the fundamental level of meeting specifications, we are going to have to do something about the perfectly normal human tendency to believe that one’s “own” program is correct in the face of hard physical evidence to the contrary.

Image Credits:

Photo by Ián Tormo on Unsplash