This essay is one in a series. Others are The Simplest Machine Implementation of Pleasure and Pain (complete with a javascript demo!), Robots with a sense of Right and Wrong, and Violet the Color-bot.
Stripping the "magic" out of definitions
By seeking non-circular, physical definitions of a number of everyday words, we can more readily move such concepts as happiness, purpose, and morality out of the realm of intuition, mystery, and magic and into the world of science and hard logic.

Prior to Charles Darwin's Origin of Species, it was difficult to have a worldview based purely on the physical. Without an explanation for the sophistication of living things, it was only reasonable to assume that there must be a supernatural creator. When Darwin proposed such a naturalistic explanation, he replaced a great deal of magic and mystery with science and logic. While Darwin didn't have much knowledge of the specific implementation behind it (that is, Mendelian genetics, molecular biology, gene expression, developmental biology, etc.), nonetheless he did quite well at seeing and presenting the big picture of natural selection.

However, despite his later work on emotions, Darwin barely made a dent in public understanding of such concepts as pleasure and suffering, good and evil, and right and wrong. Today, even those who claim to view the world as free of the supernatural often use terminology that betrays an assumption that mental processes are fundamentally different from physical ones -- that is, they show a (subconscious?) belief in Cartesian dualism: the Ghost in the Machine. While we all know these concepts of emotion and morality intuitively, most of us have trouble fully grasping them without considering them to be, at least in part, "beyond physics." Often without really realizing we are doing it, we let this dualism sneak into our descriptions and understanding of such things, even in scientific contexts. And we have difficulty coming up with hard explanations and definitions that don't form a tight little circle: Why should we be good to others? Because it is the right thing to do. But what do you mean by "right"? That which you should do. "Should"? "Should" refers to making a good choice. Ok, then what do you mean by "good"? Ummm...something that makes people happy...? And what is happiness? A good feeling!

Even esteemed scientists such as Francis Collins, former leader of the Human Genome Project, show a surprising degree of confusion over this problem. Collins defers to the supernatural to explain the observation that humans have an absolute concept of right and wrong.[1] Others skirt the issue by suggesting that only with massive advances in our understanding of neuroscience, artificial intelligence or even quantum physics will we have any understanding of what these things actually are. The "Brights," a group specifically committed to a naturalistic worldview, have initiated a project to explain morality in scientific terms -- yet they use logic exactly as circular as that shown above.[2] Meanwhile philosophers often just muddle things up by trading in everyday words for fancy new ones, seemingly in the hope that if their circle of logic passes through enough points, we'll miss the fact that it is indeed still a circle.

Without rooting our terminology in something physical, rather than just in our intuitive understanding of the concepts, it seems that all of these terms -- should, right, wrong, good, bad, happiness, suffering, etc. -- are meaningless in the absence of humans or at least animals, presumably because only such entities possess a "mind" (whatever that is!). In the end, we are left with three choices: 1) avoid the concepts in any scientific discussion, 2) admit that we subscribe to Cartesian dualism, or 3) define the words in purely physical terms. I choose the third option.

Otherwise, in the absence of such hard, naturalistic definitions -- ones that don't loop us in a circle, appeal only to our intuition, or carry the needless requirement that they apply only to humans (and possibly animals) -- we have immense areas of knowledge that are effectively off-limits to science and logic. The intention of this essay, and especially of the "dictionary" that follows it, is to help patch over these gaping holes in the domain in which logic and scientific inquiry can do their thing.

How needlessly narrow definitions inhibit understanding

At one time the word flight referred to something birds, bats and insects do, since only such creatures could fly. Only with the invention of balloons, airplanes, helicopters and the like did it become common to extend the word to apply to non-living objects. But the word never really changed in meaning at all, and no one today would consider it "metaphorical," or a "separate definition," when someone says that an airplane flies. Had someone a few centuries ago said "man-made machines will never fly" and justified the statement by noting that the dictionary defined flight as pertaining to animals only...obviously, he wouldn't have contributed anything insightful or meaningful to the discussion.

But when Richard Dawkins published "The Selfish Gene" in 1976, he was famously attacked for speculating "about the emotional nature of genes." Philosopher Mary Midgley declared that "genes cannot be selfish or unselfish, any more than atoms can be jealous, elephants abstract or biscuits teleological" [3]. Most of those defending the book said that Dawkins was using selfish as a metaphor, and that Midgley had egregiously misunderstood or misrepresented him.

Yet I never saw the word selfish in that context as a metaphor, nor did Dawkins. My understanding of the term selfish never implied consciousness, free will, or even biology (note that the term -- albeit in quotes -- has been applied to the behaviour of network routers [4]). There is no logical necessity to define the term so narrowly, any more than there is to restrict the word flight to animals. I would suggest that thinking of the term as a metaphor, or even as a "biology-specific" definition of selfish which does not cover the everyday use of the term (as Dawkins himself implies in his rebuttal [5]), only serves to obscure Dawkins' insight.

As one more example, it was once thought that a decision could only be made by a human, or possibly by an animal. To many, the word implied free will or consciousness. Today, we regularly refer to computer programs as making decisions, and rarely would a person -- except maybe someone very new to computers -- consider this to be a metaphor. It would certainly be difficult to learn computer programming without acknowledging that a program can make decisions. Stopping to think "this is just a metaphor" every time the word is used would stand in the way of understanding the task at hand; it is more useful to just consider that the meaning of decision -- selecting a course of action -- is inclusive of both human decisions and computer decisions.
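The point is easy to make concrete in a few lines of code. This is a sketch of my own (the function name and inputs are invented for illustration): the programmer decides the rules, but the selection of a course of action happens at run time, on input the programmer never saw.

```javascript
// A trivial program "decision": the branch is evaluated only when the
// input arrives, so the selection of a course of action is the
// program's, not the programmer's.
function chooseGreeting(hourOfDay) {
  if (hourOfDay < 12) {
    return "Good morning";
  }
  return "Good afternoon";
}

console.log(chooseGreeting(9));  // "Good morning"
console.log(chooseGreeting(15)); // "Good afternoon"
```

Calling it a metaphor here would obscure rather than clarify: "selecting a course of action" describes exactly what the code does.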

Note that an even bigger barrier to understanding might be to consider the decision of a computer program to actually be that of the programmer or of the user. There are decisions made by the programmer and the user, but these are more usefully considered separate decisions -- just as teaching your child to say "bless you" when someone sneezes is a separate decision from the child's actually saying it at the appropriate time. (Note also that at no point did a programmer at Google decide what response would be appropriate when a user types in "manic purple albatross"...the program makes that decision at the time such a query is received.)

The point is, our ability to understand and our capacity for insight are only inhibited by definitions that imply a restriction to living things, or to things that we consider to have a "mind". It is an artificial distinction, and it can contribute to incorrect -- or at best worthless -- conclusions.

A naturalistic "dictionary"

At the right is a list of words that are each generally associated with the nebulous concepts of "consciousness" and "mind" -- especially the words toward the bottom of the list. By thinking of these words this way, we find ourselves unable to explain a great number of things in naturalistic terms, because consciousness and mind have never been explained in such terms, at least not to the satisfaction of most people.

I suggest that we address this issue by creating a sort of "naturalistic dictionary," which attempts to more inclusively define words such as these, free from the arbitrary requirement of being associated with a biological entity that we might consider to have a "mind." In order to define the words such that there is no question that "no magic is required," we should make sure to provide examples of the words being applied to non-living, well-understood things, always trying to find the simplest (and therefore non-biological) example for which the word can reasonably apply. These more inclusive definitions must leave the words' everyday meanings unchanged for all practical purposes -- just as extending the word fly to include airplanes didn't change the way we describe or think about bird flight.

If you don't agree that these broader definitions are the "true" definitions of the words, I ask that you at least consider that they are useful and meaningful definitions nonetheless, and therefore accept them as viable alternatives, which can be used to explain concepts that would otherwise be too tedious to explain.

Importantly, our little dictionary should have a requirement of being rigorously non-circular: no definition should directly or indirectly refer back to itself. We will make sure that all definitions use only words higher in the list, and that any other words used in the definitions are indisputably naturalistic.

Our root word: "goal"

Given this commitment to non-circularity, we need to start off with a "root" word that is already in common use in non-biological contexts, and define it clearly as such and provide explanation and examples ranging from simple and non-biological, to the everyday uses which apply to humans. With the word sufficiently defined, we can use it in the definitions of other terms in the dictionary, gradually moving to more difficult terms: those toward the bottom of the list. We'll also try to reuse (and extend) the same examples that we use for the root word, wherever appropriate. I have chosen the word goal as the root word, hence its position at the top of the list.


Goal

a state that an entity tends toward, due to characteristics specific to that entity (as opposed to external factors which might tend to prevent that state from being reached). Also known as: that which an entity has an inclination toward.

Below I'll give 4 examples of entities and their goals: a magnet, a thermostat, a gene, and a hungry dog.

Example 1: a magnet

A magnet is possibly one of the simplest things to which the term goal can be said to apply. A magnet's goal is to come closer to iron or another magnet. Why is this considered the magnet's goal? Because the magnet's characteristics (rather than something external, such as someone pushing it) are creating the tendency for this to happen. External factors may be countering these tendencies; for instance, the magnet may be attached to a string, which holds it back from reaching the metal.

I chose a magnet as the first example, rather than a more sophisticated man-made object, for a reason: with the latter, it is too easy to say that the goal is actually the goal of the maker or user, rather than of the object itself. And of course the problem with biological organisms is that we still don't fully understand even the simplest of them. A possibly bigger problem is that our own perspective is from inside the decision making mechanism of a biological organism, which distorts our ability to see such things with objectivity.

Example 2: a thermostat

Let's move on to an example of something more sophisticated than a magnet that can also be said to have a goal: a thermostat. It comes closer to fully meeting our idea of goal than does a magnet, but as mentioned, it has a bit more potential to trip people up into thinking that the goal is that of the maker or the user of the object. (see sidebar at left)

The goal of a thermostat, of course, is to have the room's temperature be that which the thermostat is set to. In other words, it has a tendency toward a certain state, which is one in which the temperature of the air matches the temperature set on its dial. Note that it may not be able to achieve this state, such as if the window is open and cold air is blowing in. Nonetheless, it still approaches the state, either achieving the goal or coming closer to the goal more often than would likely happen in the absence of its specific characteristics.
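A toy simulation makes the "tendency toward a state" concrete. This is my own sketch, not the author's demo; the function, step size, and heat-loss figure are all invented for illustration. The thermostat's characteristics (the comparison against the setpoint) create the tendency toward the goal state, while an external factor (heat leaking out) pushes against it.

```javascript
// One time-step of a minimal thermostat. Its "goal" is the state in
// which room temperature matches the setpoint; heatLoss is the
// external factor working against that state.
function thermostatStep(roomTemp, setpoint, heatLoss) {
  const heaterOn = roomTemp < setpoint;  // the thermostat's own characteristic
  const heating = heaterOn ? 1.0 : 0.0;  // heater warms the room when on
  return roomTemp + heating - heatLoss;  // net effect of both tendencies
}

let temp = 15;
const setpoint = 20;
for (let i = 0; i < 30; i++) {
  temp = thermostatStep(temp, setpoint, 0.5); // 0.5 degrees lost per step
}
// With moderate heat loss the room ends up hovering near the setpoint.
console.log(temp);
```

If the heat loss is made large enough (an open window), the goal state is approached but never reached: the tendency remains, even when the goal is not achieved.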

Example 3: a gene

Now we're ready to move on to a biological entity: the one we mentioned above, a gene. A gene's goal (as explained in Dawkins' The Selfish Gene) is to make copies of itself. Again, external factors such as competition or a harsh environment may prevent the gene from copying itself, but the gene's characteristics cause a tendency for copies to be made. Of course, there are other equally valid frames of reference: in many contexts, it makes more sense to think of a cheetah as having a goal of catching the antelope, than of the cheetah's genes as having a goal of copying themselves. Regardless, if the frame of reference is the gene, its overriding goal is to copy itself.

Example 4: a hungry dog

For a more complex biological example, a dog might have the goal of eating a tasty piece of meat. Note that a dog on a leash, trying to reach a piece of meat but being held back by the leash, might be seen as similar in many ways to the magnet described above: drawn toward metal but held back by a string. I would suggest that this similarity is not superficial -- the only significant difference is one of complexity. It is the dog's characteristics, exceptionally complex though they may be, that create a tendency for the dog to be drawn to the food.


Since the word goal is regularly applied to humans, I don't really need to provide examples. But note that my definition is inclusive of any goal a human could have, whether it be to cause the basketball to go through the hoop, to find some water to prevent dying of thirst, or to get an attractive spouse, a couple kids, and a nice house with a two car garage. Or, for that matter, to get a big-ass Harley. In the absence of things preventing a person from achieving these things, he will do what he can to cause them to happen. While we might trace these goals backwards to the goals of the person's genes, or further still to the goals of natural selection, doing so does not mean that such things are not also the goal of the human. Nor is it a requirement for a goal to somehow be associated -- directly or indirectly -- with a biological imperative such as nutrition or reproduction, although many are.

The rest of the words

Now that goal is defined naturalistically, let's attempt to define the rest of the list of words, anchoring them all to goal. I'm willing to accept that some might be a bit controversial (pleasure comes to mind), but I expect that once the easier-to-pin-down terms are defined, the more difficult concepts will become less mysterious.

Attraction (opposite: aversion/repulsion)

to have a goal of getting closer to something. If a magnet's goal is to get closer to metal, it can be said to have an attraction to metal. If a person has a goal of getting as close as possible to someone (especially in an intimate/sexual sense), that person is attracted to the other person.

Satisfy (opposite: frustrate)

to achieve a goal is to satisfy that goal (and in turn, to satisfy the entity having the goal). A thermostat's goal is satisfied when the room's temperature matches the one it is set to. A dog's goal of eating food is satisfied when the dog eats the food (and likewise the well-fed dog can be said to be satisfied, at least relative to that particular goal). Satisfaction does not depend on whether the entity's own actions or an external factor actually caused the goal to be achieved.


Try

to act in a way that increases the probability of achieving a goal. A thermostat "tries" to reach its goal by turning on or turning off the heater or air conditioner. A dog tries to eat food by pulling on the leash that is holding it back. The term is generally most meaningful when applied to something where there are alternative actions that can be considered, so saying a magnet "tries" to reach the metal is rather a stretch of the meaning, since the way it tries is by simply being magnetic.

Success (opposite: failure)

achieving a goal, typically due to an entity's own actions, is known as success. The dog is successful if it pulls on the leash hard enough to get to and eat the food. The thermostat is successful if, by turning on the heater, the set temperature is reached.


Fortunate / lucky (opposite: unfortunate/unlucky)

when an entity's goal is achieved due to external events (perhaps improbable ones), especially if the goal would be unlikely to be achieved otherwise. The magnet could be considered "fortunate" if a chance event results in it coming into contact with metal, without its own magnetism causing that to happen. The thermostat is fortunate if someone closes the window. The dog is fortunate if his owner drops some food on the floor.


Want

to have a preference or goal. That is, if something has a goal, it wants to achieve said goal. The thermostat wants the temperature to match that set on its dial; the dog wants to eat the food.

Good (opposite: bad)

good indicates that a goal has been achieved, whether due to the entity's own actions or due to external causes. If the temperature of the room matches that which is set on the dial of a thermostat, that is good (for the thermostat), even if the thermostat did not have to turn on the heater for it to happen.

Benefit (opposite: cost)

to make more likely for an entity's goal to be achieved, often due to some external factor. If the window is closed on a cold windy day, that benefits the thermostat.

Interest / self interest

to have an interest in something is to have that affect the achievement of one's goals. It is in the thermostat's interest to have the window closed (on a cold windy day), it is in the dog's interest to have food available.


Welfare

status of goal achievement. Having the window closed increases the welfare of the thermostat; having food available increases the welfare of the dog.


Prefer

roughly synonymous with "goal". If an entity has a goal, it can be said to have a preference to achieve that goal. Often used with reference to multiple alternatives, indicating which alternative most benefits the entity. The thermostat prefers that the window be closed; the dog prefers that food be available.

Pleasure (opposite: pain)

this is the trickiest one to define, but possibly the most important, as the polarity of pleasure vs. pain is probably more central to the way humans view the world than anything else. It plays the central role in allowing animals (including humans) to learn, so as to increase the likelihood of their achieving their goals.

Pleasure is the reinforcement of previously used decision paths; pain is the suppression of such paths. For instance, if a hungry dog barks at her sleeping owner, and the owner wakes up and feeds her, eating the food causes "pleasure," i.e. reinforcement of previously used (typically, recently used) decision paths, including those of the decision to bark. Thus the dog is more likely to respond similarly in the future, especially in response to similar stimuli (i.e. detecting hunger and seeing the sleeping owner). If the owner scolds the dog or swats her with a newspaper (causing pain / displeasure: suppression of recent decision paths), the tendency to bark in such situations will therefore be reduced.
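The reinforce/suppress cycle can be sketched in a few lines. This is my own bare-bones illustration, not the author's demo; the action names and weight values are invented. Each decision path gets a weight; "pleasure" strengthens the path just used, "pain" weakens it, and future decisions follow whichever path is currently strongest.

```javascript
// Weights for two competing decision paths.
const weights = { bark: 1.0, stayQuiet: 1.0 };

// Pick the action whose path is currently strongest.
function choose() {
  return weights.bark >= weights.stayQuiet ? "bark" : "stayQuiet";
}

function pleasure(action) { weights[action] += 0.5; } // reinforce the path
function pain(action)     { weights[action] -= 0.5; } // suppress the path

// Barking gets the dog fed: the bark path is reinforced...
pleasure("bark");
// ...but repeated scoldings suppress it, eventually flipping the decision.
pain("bark");
pain("bark");
console.log(choose()); // "stayQuiet"
```

A magnet or a thermostat has goals but nothing corresponding to these adjustable weights, which is why (as noted below) pleasure and pain don't apply to them even though "satisfied" and "frustrated" do.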

My javascript demo of pleasure and pain illustrates how this reinforcement/suppression process can work to allow an entity to learn. While the demo is designed to be as simple as possible while still illustrating the concept, the mechanism is nonetheless complex enough that we do not commonly see such a process in non-biological entities (for instance, I would not say that pleasure and pain exist in a magnet, a thermostat, or a gene, even though those entities have goals and can be said to be "satisfied" or "frustrated").

Happy (opposite: unhappy/sad)

typically used for a combination of goal satisfaction as well as the resulting pleasure. Sometimes used to represent simple goal satisfaction when the entity doesn't have such a reinforcement mechanism ("the thermostat is happy when the temperature is 70 degrees"), and sometimes used to indicate only the pleasure without really satisfying any previous goals (such as if a third party injects a person with morphine, effectively short circuiting the reinforcement mechanism). These two latter cases might be referred to as "metaphoric happiness" and "artificial happiness", respectively, as they may lie outside the more common usage of "happiness", which requires both goal satisfaction and the subsequent reinforcement.

Like (opposite: dislike)

if entity or situation A causes entity B to be satisfied or happy, entity B can be said to like entity A. Note that the word seems less appropriate in an entity that has no ability to reinforce, that is, to have pleasure as defined above. Saying the thermostat "likes" having the window closed might be a stretch. But certainly the dog likes eating the food, and can also be said to like the food itself.

Love (opposite: hate)

a stronger word for like. In the case of romantic love, it tends to be associated with reproductive goals and the byproducts thereof (for instance, the goal of intimate contact with someone, and of being a life partner with someone in a way typical of those that reproduce together, are all tied into the concept of love).

Hope (opposite: fear)

when an entity's decision making mechanism (and its simulator) models a state of achieving a goal (and often external factors which may play a part in achieving the goal), it can be said to "hope" that said goal is achieved. Hope is part of the process of determining the course of action most likely to achieve goals, since the possibility of external events contributing to the achievement of goals is an important part of the calculation. Fear involves modeling the opposite situation, that of not achieving goals. This one is hard to express for simple objects: for instance, the thermostat can hardly be said to "hope" that the window is closed, unless it actually has logic that simulates the environment and determines that it is possible for some external factor to cause the window to be closed. Note that "want" has overlapping meaning, but "hope" more strongly implies internal simulation of the possibility of a goal being achieved.


another word for goal.


Purpose

the role of one entity in achieving another entity's goal. While the goal of the thermostat is simply to make the temperature match the setting on its dial, the purpose of the thermostat is typically to make the room comfortable for its occupants.

Should (opposite: shouldn't)

indicates the course of action by one entity, which is most beneficial to one or more entities (i.e. the beneficiary), which may or may not be itself. The entity that is the beneficiary must be stated or implied for this term to have meaning. The thermostat "should" turn on the heater if the temperature of the room is below that on its dial (beneficiary: itself, and, in some but not all contexts, its user). The dog owner "should" ease up on the leash so the dog can get to the food (beneficiary: dog).


an explanation as to the purpose of something.

Altruism (opposite: selfishness)

when one entity has as a goal that other entities' goals are reached. To describe an act as altruistic indicates that the act increases the likelihood of one or more other entities' goals being achieved.

Moral good / right (opposite: wrong/evil)

altruistic behaviour. I illustrate this in more detail in this article.

footnotes and links

© 2008 Rob Brown