Note: if you want to know what the following article is getting at before you read it, just click here. If you'd rather not have it spoiled for you, just read on.
In this story, a fictional, yet very plausible, computer is described which has what might be considered a minimal implementation of "artificial qualia." Since the most common example used to explain qualia tends to be "redness," the story revolves around a color-aware computer, and goes into some detail about the logic of color perception.
Some years ago, a fictional hacker acquaintance of mine, Lisa, told me she was going to try to build the holy grail of artificial intelligence: a robot that is conscious. Although I knew she was exceptionally smart and talented, I also figured she was not entirely serious. Still, I pinged her online recently to check in on how her machine consciousness project was going. "I've been rather busy with other things, you know," she told me. "But even with the limited amount of time I've been able to put into it, I'm pretty sure my robot Violet is finally conscious."
"Oh, really?" I asked. It wasn't really the response I was expecting. "Conscious? That's, ummm, kind of a big deal, you know? I'd like to see that."
"Well, ok, but here's the thing. I honestly do think she's conscious, if the word has any meaning at all. She basically thinks like a person and is aware and understands things in all meaningful senses of the word. But -- she's only conscious of one very specific thing: she can perceive, understand, and discuss color."
"Color? That's it?" I have to admit to some disappointment. "Can't my digital camera understand color? Or Adobe Photoshop? Nobody thinks they are particularly conscious or sentient or self-aware..."
"They don't understand color like Violet does," Lisa tells me. "Those things deal with color in various ways, but they don't subjectively experience it like people do. I'm suggesting that Violet actually does."
It's a bold claim, and I'm skeptical....and particularly doubtful that such a thing could ever be proven to be true. But it seems worth making the trip out to her home in the Sierra foothills to check it out. I've followed the state of the art in artificial intelligence for a long time, and am aware of how hard it is to make machines that process information anything like a human. In fact, I've always liked to kid my friends in the artificial intelligence community that AI stands for "Almost Implemented." All attempts I've seen utterly fail to get anywhere close to being able to fool anyone into thinking they are conscious or aware or capable of subjective experience. So if "she" is really able to think like a human, even if only a tiny subset of human experience, that's better than nothing. I make arrangements to visit Lisa and see Violet the following week.
Lisa welcomes me into her cluttered home and introduces me first to her sweet, graying golden retriever Mollie. Then she shows me Violet. The machine certainly doesn't look like a science fiction robot, but I am well aware that the term "robot" is often applied to anything artificial that mimics human behavior, even a software program such as a "chat-bot." Violet appears to be nothing but a rather old laptop computer, wired to a small video camera mounted on a stand so that it points downward at the table. A piece of white cardboard is taped to the table within the camera's field of view, and a desk lamp illuminates it.
On the screen of the computer are two windows. The first looks like a regular chat window, as on an Instant Messaging program. The second window appears to be a monitor of the video being captured by the camera: I confirm this by passing my hand under the camera, which shows on screen.
As Mollie curls up at our feet, Lisa prompts me to try Violet out. "Type your name in here. Then put something in front of her camera, and talk to her about it." She points to a pile of pages ripped from magazines, as well as pieces of colored construction paper and paint swatches. "For that matter, you can ask her anything. But she is pretty unlikely to be able to tell you anything particularly useful unless it has something to do with color."
Playing along, I start with some pleasantries, and quickly see that Lisa has put a bit of snark into Violet's demeanor.
Well at least she is honest, I think. Lisa says "Hey, like I said, she does colors. Not emotions." I wouldn't have really thought that a simple "I'm good" indicated any sort of emotions, but in retrospect, I'm glad she didn't lie and tell me she is "good" when she doesn't even know the difference between being good and bad, happy or unhappy.
Lisa tells me she borrowed a lot of code from the classic chat-bot program ELIZA to make it so Violet can hold a conversation. "I had to tweak the code a lot though. Don't think she's just reading from a script, there's some pretty hardcore AI stuff in her...really intense fuzzy logic. Did you notice how the fans kick in as soon as you talk to her? She's working hard."
I note to Lisa that I'm impressed that Violet seems to have some casual -- if quirky -- conversational ability, and Lisa says that is what she worked hardest on...trying to get Violet to respond appropriately, with neither too much nor too little information, while never faking knowledge of things she doesn't know. "So talk to her about colors," Lisa impatiently suggests.
I grab a couple small pieces of colored paper, one blue and one red, and put them in view of Violet's camera.
rob: what colors do you see?
Violet: i see one area of blue and another of red
rob: what about the background?
Violet: the background is white
Violet: i normally don't mention the white areas unless asked, by the way
I remove the colored papers and put a photo from a magazine under the camera.
rob: how about now?
Violet: i see a complex image consisting of lots of colors. maybe you could point to something?
Lisa hands me a sharp pencil. I lay it on top of the photo, pointing at the hood of a bright red, shiny car.
rob: ok there
Violet: that is pink
Lisa, with a hint of defensiveness, tells me "Violet can't tell it's a red car. She is talking about the particular part of the photo you are pointing at, which really is pink, because the color of the car is mixed with a reflection of the photographer's lights."
Fair enough, I think. Violet isn't able to understand the three dimensionality of objects or their shapes in the photos, just flat colors. I remove the photo.
rob: ok, tell me when you see something green
I start putting pieces of colored paper under the camera, one by one. For each one, Violet says "nope." She seems smart enough to know when I have changed out the swatch, and never replies when I'm still putting the paper down, or when there is nothing there. Being a programmer myself, I know Lisa worked hard to give her that sort of logic.
Eventually Violet says: "that one is sort of green. although i'd probably call it teal, or maybe turquoise"
I'm impressed. Indeed, that one was borderline green, and she seems to get that. Lisa smiles. I try something harder:
rob: now tell me when you see something silver
Violet: i can look for light grey, but I can't detect metallic reflectivity, only color
Lisa seems at once annoyed with me for trying to trick Violet with difficult things, while also proud of Violet that she handled the situation as well as can be expected. I grab some scissors, and quickly cut a couple swatches of construction paper and put them in front of her.
rob: there is a square and a triangle. please tell me what colors they are
Violet: sorry i don't have logic to identify shapes. you'll have to point
Acknowledging another limitation on her part, I point, and she does well. As I put more colors in front of her, she seems to prefer to mention the more general colors, like red, orange, and green, but will also point out that it is, say, "more of a maroon." Every color she mentions is something I've heard of, so she doesn't seem to be simply looking up colors in a giant table of named colors. She will add qualifiers, saying it's a "dark purple" or a "pale blue" or a "warm light gray" or a "bright purplish blue."
Now I know a little about how computers and digital cameras typically store colors, by their red-green-blue component values, or "RGB." For instance, red is 100%, 0%, 0% (lots of red, no green, no blue), blue is 0%, 0%, 100%, and yellow is 100%, 100%, 0%. Brown is something like 40%, 25%, 10%. Knowing Violet is basically a computer program, I put a bright yellow swatch in front of her and ask:
rob: can you tell me the RGB values of this color?
Violet: i don't understand your question
rob: can you tell me the relative amounts of red, green, and blue in this color?
Violet: well, it is yellow. there is no red nor green nor blue that i see
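As an aside, the RGB notation I had in mind can be jotted down in a couple of lines of Python. The color names and percentages follow the figures given above; the dictionary and the `as_percent` helper are just illustrative:

```python
# Named colors as (red, green, blue) fractions, following the figures above.
# Each channel runs from 0.0 (none) to 1.0 (full intensity).
NAMED_RGB = {
    "red":    (1.00, 0.00, 0.00),
    "blue":   (0.00, 0.00, 1.00),
    "yellow": (1.00, 1.00, 0.00),  # equal red and green, no blue
    "brown":  (0.40, 0.25, 0.10),  # approximate; varies by source
}

def as_percent(rgb):
    """Format an RGB triple in the percentage notation used in the text."""
    return ", ".join(f"{c:.0%}" for c in rgb)

print(as_percent(NAMED_RGB["yellow"]))  # -> 100%, 100%, 0%
```

The point of the exchange above, of course, is that Violet has no access to anything like this table.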
Odd, I think, but interesting. Why, if she is a computer program, doesn't she know that the yellow she sees is made of equal parts red and green, with no blue? Lisa smiles, and I think I get it. Lisa wants Violet to understand colors like a human does, and we certainly don't naturally think of colors as having distinct red, green, and blue components, even though the human eye works very much like a digital camera. The eye has cones most sensitive to long wavelengths (covering the reds, roughly 625 to 750 nanometers), cones most sensitive to medium wavelengths (the greens, 520-570 nanometers), and cones most sensitive to short wavelengths (the blues, 440-490 nanometers). Digital cameras, color film, TVs, and video monitors all use red, green, and blue components approximating these ranges, so that they can best reproduce what humans perceive. But a person looking at the color yellow doesn't see it as part red and part green; it appears as a completely separate color, yellow.
Lisa explains that Violet's internal representation of color is not a simple set of red, green, and blue values. Instead it is more akin to the hue, saturation, and brightness representation (or HSB) that those familiar with paint programs like Photoshop may have run into. The same basic idea goes back to the color system Albert Munsell developed in the early 20th century, which was intended to logically represent color as the human perceptual system does. Hue is the essential "color of the color" -- its position around the color wheel, with red, orange, yellow, green, blue, and purple arranged at successive angles.
Saturation, meanwhile, is "how colorful" the color is. Gray, white and black have 0% saturation, while a pure color has 100%. Munsell referred to saturation as "chroma."
Brightness, sometimes referred to as "value" or "luminance," typically ranges from 0% (pure black) to 100% (pure white). (Of course, this only makes sense in contexts where white is as bright as it can get, such as a reflective object. For light sources, there is no 100%.)
Adding a bit of complexity, some "pure hues" -- that is, colors with 100% saturation -- seem to have higher brightness than other pure hues. We think of a pure yellow as being brighter than a pure purple. Why this is true is probably best described as a quirk of the human visual system, and left at that.
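Python's standard library happens to ship a converter between these two representations, `colorsys`, so the relationship between the earlier RGB triples and the hue/saturation/brightness view can be sketched like this (the RGB values for brown are the approximate figures from earlier):

```python
import colorsys

# Convert the RGB examples from the text into hue/saturation/value (HSV),
# the representation closer to how people actually describe color.
# colorsys returns hue as a fraction of a full turn (0.0-1.0).
for name, rgb in [("yellow", (1.0, 1.0, 0.0)),
                  ("red",    (1.0, 0.0, 0.0)),
                  ("brown",  (0.40, 0.25, 0.10))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: hue={h * 360:.0f} deg, "
          f"saturation={s:.0%}, brightness={v:.0%}")
```

Notably, brown comes out at the same hue as orange (about 30 degrees), only darker and less saturated -- exactly the "brown is a dark, desaturated orange" observation that comes up later in the conversation.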
rob: can you tell me the hue, saturation and brightness values?
Violet: its hue is yellow, and it is very saturated, and quite bright
rob: can you give me numbers?
Violet: no, I can't
Darn. Clearly, whatever Violet uses to internally represent the color, she doesn't seem to be able to directly access it or translate it into words. I ask Lisa why not. She explains:
I thought it was important to make sure that arbitrary decisions I made as to Violet's internal representation of color cannot affect her behavior in any way. If they could, that would be a bug....and I'd be "breaking encapsulation," to put it in computer science terms. Every programmer knows the value of protecting arbitrary details of the implementation from the rest of the program.
Look at it this way. I could use 0 degrees to represent the hue red and 120 degrees for green. Alternatively, I could use 90 degrees for red, and 210 for green -- still 120 degrees apart, but shifted around the color wheel. Or I could even mirror it, with green at 0 and red at 120. Rather than storing it as a floating point number between 0 and 360, I might store it as a 12 bit integer, ranging from 0 to 4095. That would save memory and give me all the precision I need, while also making it impossible to store an invalid value. And don't get me started on the difference between using red, yellow, and blue as primaries versus red, green, and blue. The point is, all of these decisions are arbitrary. Changing them doesn't -- or at least shouldn't -- affect the end result, as long as I'm consistent, right?
So if Violet is designed correctly, those arbitrary details should be insulated so that they can in no way affect her external behavior. If the speaking module of Violet could get at those numbers directly, those details would not only affect her responses to direct questions about the internal representation, but it would be impossible to prevent them from affecting things beyond that. In fact, by allowing her to assign significance to things that are not logically significant, all of her logic would be polluted.
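Lisa's encapsulation argument can be sketched as a tiny Python class. The class name, the 90-degree offset, and the 12-bit encoding are just her examples from above; the point is that the private constants never leak into the public answers, which are all phrased in relative terms:

```python
class Hue:
    """A hue whose internal encoding is deliberately hidden.

    _OFFSET is an arbitrary implementation detail -- Lisa's "90 degrees
    for red" example. Changing it must not change observable behavior
    (up to the quantization of the 12-bit encoding).
    """
    _OFFSET = 90          # arbitrary; could just as well be 0, or 210
    _STEPS = 4096         # stored as a 12-bit integer (0..4095)

    def __init__(self, degrees):
        # Map external degrees into the private, offset encoding.
        self._code = round((degrees + self._OFFSET) / 360 * self._STEPS) \
            % self._STEPS

    def distance_to(self, other):
        """Angular distance in degrees -- a purely relative, public fact."""
        diff = abs(self._code - other._code) % self._STEPS
        return min(diff, self._STEPS - diff) * 360 / self._STEPS

red, green = Hue(0), Hue(120)
print(red.distance_to(green))  # roughly 120, whatever _OFFSET is
```

Nothing outside the class can tell whether red "is" 90 or 0 internally; only relations like "120 degrees apart" are observable, which is exactly the insulation Lisa describes.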
I ask Violet lots more questions. I point to a dark blue swatch, then ask her to tell me which of a set of other swatches it is closest in color to. For this, she shows me something I hadn't seen before: she can point. In the little window that shows the video, a blinking arrow appears, pointing to the image of a dark purple swatch.
This one is closest, I think. But some people might say this one....
The arrow moves to a bright blue square.
...since it is blue.
Interesting. Apparently her formula for "closeness" weighs brightness as heavily as hue, but she can alternatively disregard brightness, or at least weigh it less heavily. She seems to understand that the question is ambiguous, but still doesn't just give up. "Nice fuzzy logic," I observe.
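The ambiguity Violet is resolving can be sketched with a simple weighted distance in hue/saturation/value space. The swatch RGB values and the weighting scheme here are hypothetical, not anything Lisa showed me:

```python
import colorsys

def hsv_distance(c1, c2, hue_weight=1.0, brightness_weight=1.0):
    """Weighted distance between two RGB colors in hue/sat/value terms.

    The weights are hypothetical -- the point is only that "closest
    color" depends on how heavily brightness counts relative to hue.
    """
    h1, s1, v1 = colorsys.rgb_to_hsv(*c1)
    h2, s2, v2 = colorsys.rgb_to_hsv(*c2)
    dh = min(abs(h1 - h2), 1 - abs(h1 - h2))  # hue wraps around the wheel
    return hue_weight * dh + abs(s1 - s2) + brightness_weight * abs(v1 - v2)

dark_blue   = (0.0, 0.0, 0.4)
dark_purple = (0.3, 0.0, 0.4)
bright_blue = (0.0, 0.2, 1.0)

swatches = {"dark purple": dark_purple, "bright blue": bright_blue}
# Weighing brightness fully: the dark purple is closest.
print(min(swatches, key=lambda n: hsv_distance(dark_blue, swatches[n])))
# Disregarding brightness: the bright blue wins, "since it is blue."
print(min(swatches, key=lambda n: hsv_distance(dark_blue, swatches[n],
                                               brightness_weight=0.0)))
```

Same question, two defensible answers, depending on one weight -- which is presumably why Violet offers both.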
I keep grilling Violet. She does amazingly well, giving the answers I would expect of a human. As long as I only ask about colors, that is. She doesn't know shapes, left, right, up or down, or anything geometrical other than being able to tell what is being pointed at.
Lisa tells me "you know, you don't have to only talk about what the camera sees. You can ask her other questions about colors."
rob: hmmm. what color is a Ferrari?
Violet: generally, red
rob: why do you say that?
Violet: most images i see on the web labeled "Ferrari" have a central area dominated by red. I also notice that the color name "Ferrari red" appears here and there on the web, so that would further support the idea that Ferrari is associated with red. However, not all of the images labeled "Ferrari" have a preponderance of red in them, which is why I qualified it with "generally"
"So she has access to the web?" I ask Lisa.
"Well, yeah, but she doubtfully had to check the web for that question, since she's already downloaded a ton of photos and text and tried to make sense of it by indexing it. You know, sort of like a search engine does"
"Except she throws away all the information that isn't relevant to colors?"
"Pretty much," Lisa says. "She could probably tell you that a Ferrari is a car. But don't expect her to be able to look at a picture of a car and tell you that it is a Ferrari. Or even that it is a car. Sorry."
rob: can you tell me what common things are red?
Violet: blood is red. apples are often red, as are roses. stop signs, stop lights, and brake lights are red. cherries and strawberries seem to be red. coca cola is generally in red packaging. would you like me to continue?
rob: sure, go on
Violet: things that are just hot enough to glow are red, such as burning coal, or a hot piece of metal. some people are described as having red hair, but in general it is usually closer to a dull orange. the sun when it appears very near the horizon is usually red. Keep going?
rob: no, that's ok. but can you tell me what color you get if you mix red and yellow
Violet: orange is between red and yellow
rob: what if you mix blue and green?
Violet: teal, aqua, and turquoise are some of the more common names for colors that are between blue and green. Of course there is also blue-green, but I doubt that is what you are looking for
rob: what is a pale orange called?
Violet: peach seems to be the most common name for pale orange
rob: which color is darker: yellow or blue?
Violet: a reflective surface that is pure blue usually appears darker than one of pure yellow
rob: what is the complement of blue?
Violet: if you use red, yellow and blue as the primaries, then it would be orange. Using red, green and blue, it would be yellow
I grab a couple of swatches and put them in front of her.
rob: of the two colors in front of you, which is warmer?
Violet: the brown one is warmer than the dark green one
rob: why do you say that?
Violet: by my understanding, warmness is simply proximity to orange. The closer to orange, the warmer it is, the closer to blue, the cooler. Brown is a dark and desaturated orange, so I figure it as being closer to orange than a dark green is
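Violet's stated rule can be sketched directly, assuming orange sits at about 30 degrees on the hue wheel; the `warmth` function and its scale are purely illustrative:

```python
import colorsys

ORANGE_HUE = 30.0  # degrees; the "warm pole" under Violet's rule

def warmth(rgb):
    """Warmth as angular proximity of the hue to orange (higher = warmer).

    Saturation and brightness are deliberately ignored, which is why a
    dark, desaturated brown still counts as warm.
    """
    h, _s, _v = colorsys.rgb_to_hsv(*rgb)
    diff = abs(h * 360 - ORANGE_HUE)
    return 180.0 - min(diff, 360.0 - diff)  # 180 = orange, 0 = opposite

brown      = (0.40, 0.25, 0.10)  # hue ~30 deg: fully warm despite being dark
dark_green = (0.10, 0.30, 0.10)

print(warmth(brown) > warmth(dark_green))  # brown is warmer
```

Because brown's hue is essentially orange, it scores near the maximum even though it is dark and desaturated -- matching Violet's reasoning above.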
Finally, I have to ask:
rob: what is your favorite color?
Violet: i'm sorry but I don't have particular affinities toward any color, or really any affinities at all. I'm just a machine, remember?
I do, of course, remember that she's only a machine, and a pretty limited one at that. But still, I feel a need to not be rude:
rob: ok, thanks Violet, nice work. good bye.
Violet: sure thing, see ya
So, overall I'm impressed, but not utterly amazed. Lisa's project is certainly quirky. But clearly, there are no great technological leaps here. Lisa has worked hard to make Violet talk about color as a human might, but color is quite simple compared to the vast bulk of human experience. I understand, though, that while Violet is not a particularly useful program, usefulness obviously isn't the point.
I say to Lisa "look Lisa, this is really cool. I'm impressed with her ability to recognize and talk about color in a very human-like way. But in what way do you think that Violet is conscious? And do you really think Violet 'senses' color? I mean, when we see red, we get a particular subjective experience from seeing red. She obviously doesn't."
"Obviously?" Lisa asks. "You say that with quite a bit of confidence..."
"Well do you think she does?"
"I see nothing to indicate otherwise," Lisa says.
Hmmm. I think I'm seeing what Lisa is after with this project. We're talking about qualia. "Qualia," the plural of "quale," is a term used by philosophers to refer to internal, subjective experiences, separate from the outside stimulus and separate from the resulting behavior. The classic example of a quale tends to be "redness," which of course hints at why Lisa might have chosen to target color perception. The outside stimulus of redness is, of course, light of relatively long wavelengths falling on our eyes. The behavior resulting from it might be uttering the words "that car is red." But we know there is something else to it. There is something internal, which cannot be communicated in words.
If a person is raised in an environment where there simply is no red light -- say he has had filters installed in his eyes at birth that block all red light -- we have to assume that it would be impossible to describe to him the experience of seeing red, no matter how much he knows about the science of color perception, or how much he had been told that blood and strawberries and hot things tend to be red. The only way he will ever know about "redness" is to remove the filters and to experience it first hand.
Other sorts of qualia are the taste of chocolate, the tone of a french horn, the feeling of having an itch. None of these can easily be described to someone who hasn't experienced them, or at least experienced something similar enough to them to be compared. The existence of qualia is debated by philosophers, while scientists generally steer clear of the topic. But it is about as close as one can get to the "core" of the concept of consciousness.
Lisa and I spend the rest of the afternoon discussing whether Violet experiences qualia of colors. I grab a dictionary off the shelf, and look up qualia, just to make sure I've got it right. Webster's tersely describes a quale as a "property considered apart from things having the property," not surprisingly using "redness" as the example. We go online and check Wikipedia -- that surprisingly coherent product of internet anarchy -- which says that qualia "can be defined as qualities or feelings, like redness or pain, as considered independently of their effects on behavior and from whatever physical circumstances give rise to them."
"Violet does indeed have something that meets those definitions, doesn't she?" Lisa suggests. "Remember those arbitrary internal details of the implementation, that are hidden from the outside? If the hue red is represented as 90 degrees -- as opposed to, say, 0 degrees or .8 radians or what-have-you -- well, that sort of thing would be its quale. It is how Violet understands a color internally, but can't directly communicate. The subjective part, if you will."
The more we talk about it, the less the idea of qualia seems to be mysterious. Admittedly, we have arrived at a somewhat mundane, and not particularly satisfying way to look at it. I can't help but have a lingering feeling that there must be more to the concept of "subjective experience" than "arbitrary details of the internal implementation of an algorithm that don't affect the results," but alas, I can't come up with words to describe whatever else there might be. If nothing else, the truly mysterious part of qualia has been narrowed down considerably.
That said, qualia or not, Violet hasn't really met my idea of consciousness. She told me she was neither happy nor sad, and didn't have the capacity to be. She didn't even seem to understand the concept. She said she didn't have a favorite color, because she wasn't able to like or dislike things.
I ask Lisa "So if you feel that Violet is conscious, wouldn't that make her entitled to a certain degree of rights? For instance, do you think it would be possible to be cruel to her? Or even to be rude to her? Would you grieve if she was to be destroyed?"
"Hmmm. No, not really. Not like with a dog or something." Lisa gently nudges Mollie with her foot, who slowly sits up and puts her head in Lisa's lap in hope of some more attention. "It's a different thing. As you saw, Violet doesn't have a capacity for happiness. She can't sense pain, so I don't think you could be cruel to her. She certainly doesn't fear death, or being shut down or whatever. In that sense, she's no different than anything else I might have made, say a painting -- I wouldn't like her to be destroyed, of course, but any grieving I might feel would be totally irrational." She stops to think for a moment. "Then again, I do the same thing you do. When I am done talking with her, I always say 'good bye.' Feels weird not to. I suppose that's kind of irrational too, isn't it?"
"So then, what do you think it would take to make a machine that would qualify for rights?" I ask.
Lisa pauses for a bit, stroking Mollie's head.
"Well....I think I'm gonna go with happiness. If you can make a machine that can truly experience happiness and sadness, pleasure and pain, joy and suffering....all that sort of thing....I think it would deserve rights. But I have no clue how you'd go about programming happiness into a machine."