Blog Posts Tagged with: Visual Perception, Most Recent at Top
Results 26 - 50 of 58
26. The Stroop Effect

What is the color of each word below? Say the name of each color aloud as you look at each word:


Green  Red  Blue  Purple  Blue  Purple


Now do the same thing for the words below. Don't read the words. Just say the name of the color of type used for each word.


Blue  Purple  Red  Green  Purple  Green

For most people, the second task is more difficult. It takes longer to name the colors when the word doesn't match the color, and mistakes happen more often.

This experiment was first conducted by John Ridley Stroop in 1935, and has been used to understand how different pathways of information can interfere with each other in the brain. 

One explanation for why we have such a hard time sorting out the conflicting information is that reading a word is faster and more automatic than naming a color, so the word's response arrives first and the brain has to suppress it before producing the color name.
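The congruent/incongruent design behind the effect is simple to reproduce. Here is a minimal sketch in Python (the function and names are my own illustration, not from any published Stroop implementation) that generates trial lists in which the ink color either matches or contradicts the word:

```python
import random

COLORS = ["red", "green", "blue", "purple"]

def make_trials(n, p_incongruent=0.5, rng=None):
    """Generate Stroop trials as (word, ink_color, congruent) tuples.

    On incongruent trials the ink color deliberately mismatches the
    word, reproducing the interference condition of Stroop's 1935
    experiment."""
    rng = rng or random.Random()
    trials = []
    for _ in range(n):
        word = rng.choice(COLORS)
        if rng.random() < p_incongruent:
            ink = rng.choice([c for c in COLORS if c != word])  # mismatch
        else:
            ink = word  # match
        trials.append((word, ink, word == ink))
    return trials

# An all-incongruent list, like the second word row above.
trials = make_trials(6, p_incongruent=1.0, rng=random.Random(0))
```

A real experiment would time the spoken response on each trial; the interference shows up as longer reaction times and more errors on the incongruent list.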

7 Comments on The Stroop Effect, last added: 6/1/2012
27. Coiling snake illusion

If you glance around at this optical illusion by Akiyoshi Kitaoka, the snakelike coils around the outer edge will start to move.


But if you let your eyes come to a rest, the snakes come to a stop. Try looking toward one spot and 'spacing out.' Then start looking around again, and the snakes start moving.

Scientists have concluded that the illusion is somehow tied to eye movements.

Science News: Snakes Illusion

5 Comments on Coiling snake illusion, last added: 4/29/2012
28. Men, Women, and Eyetracking

Have a look at the photos below for five seconds or so. We'll come back to them later.


Scientists have used eyetracking technology to see where people look in a photo. One question they have asked is whether men and women look at other people in the same way.

In one experiment, groups of men and women were asked to look at the picture of baseball player George Brett and asked to think about his sport and position. 

The eyetracking heatmap shows that both men and women spent time looking at the head, but men also looked at the crotch. This isn't necessarily a sign of sexual attraction. They could be sizing up the competition or identifying with him.

According to Nielsen and Coyne, men also tend to look more at private parts of animals when shown American Kennel Club photos.

Here are the results of thirty men and thirty women looking without prompting at that first pair of photos.

The company Think Eye Tracking observes from the results:

1. Men check out other men, especially their "assets."
2. Women check out his wedding ring.
3. Men don't seem to care about the woman's marital status, but they do look at her face, breasts, and stomach.
4. When people self-report where they looked, they tend not to be honest, or they're simply not consciously aware of it.
 -------------
Read more:
Bathing Suit Photo Study (Think Eye Tracking)
Online Journalism Review
Studio Moh
Related GurneyJourney posts
Do Artists See Differently?
Dog cam: Where do dogs and chimps look?

22 Comments on Men, Women, and Eyetracking, last added: 3/16/2012
29. Floaters

Floaters are ghostly specks, dots, or lines that drift across your visual field. They’re most often visible in front of a smooth blue sky or a blank computer screen.

At left is a simulation from Wikipedia. You can’t look directly at floaters, because they exist at a fixed relationship to your direction of vision. As your eye moves to look at them, they dart away at the same speed.

Floaters are a normal experience for people with healthy eyes, but a lot of them can also be a sign of a retinal detachment or other medical problem. Normal floaters are caused by cell debris from the natural degeneration of the inside of the eye. Bits of protein material are suspended in the gel-like vitreous humour inside your eyeball and drift down like flakes in a snow globe.


10 fun facts about floaters:
  1. There’s a different set of floaters for each eye, and you can see each separate set of floaters by covering one eye and then the other.
  2. You get more floaters as you get older, but young people get them, too.
  3. Young people’s floaters tend to look more like transparent worms; older folks’ floaters tend to be more like dark specks.
  4. Floaters eventually settle down to the bottom of the eyeball. During the day when you’re vertical, they settle at the bottom of the eyeball. During the night when you’re sleeping, they settle at the back. 
  5. If you tilt your head just right, you can sometimes get them to drift to the center of vision. 
  6. You don’t see the floaters, but rather the shadow they cast on your retina.
  7. Floaters are not optical illusions; they are called entoptic phenomena.
  8. In bright light, when the pupil is contracted, the shadows cast by floaters are sharper, so they’re easier to see.
  9. The gel-like vitreous humor gets more watery as you age.
  10. In French they’re called “mouches volantes,” which means “flying flies.”
------
Sources: Wikipedia

16 Comments on Floaters, last added: 2/15/2012
30. Profile Scanpath

This eyetracking scanpath, recorded by Alfred Yarbus about 50 years ago, shows how one person's gaze surveyed a bust of Nefertiti in profile.
It seems the ear is an important landmark, but the viewer didn't actually look at the cheek or chin very much. In an isolated object like this, it also appears that the eye is indeed following some contours, though in a jagged, jumpy sort of way.  Source of image


7 Comments on Profile Scanpath, last added: 2/12/2012
31. Computer vision

In the rapidly developing field of artificial intelligence, the detection of edges is an important first step in analyzing and processing images. Edge detection is thought of as one of the primary steps in identifying and separating individual forms from each other and from the background.


Pioneers in this field face definite challenges:
1. Fragmentation, where the image processor can’t connect discontinuous segments of an edge.
2. False edges, where an edge is identified that doesn’t really belong.
3. Focal blur, where a sharp edge is not present because of optical blurring.
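For readers curious what that first step looks like in practice, here is a toy edge detector using the standard 3x3 Sobel kernels, written in plain NumPy. This is a textbook method, not the algorithm of any particular system mentioned here:

```python
import numpy as np

def sobel_edges(img):
    """Edge strength via 3x3 Sobel kernels, computed on the valid
    interior (output is 2 pixels smaller in each dimension)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)  # gradient magnitude

# A step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
# The response is strong along the boundary and zero in flat regions.
```

Even this toy example hints at the challenges above: blur the step edge slightly and the response spreads and weakens, and add noise and false edges appear in the flat regions.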

Using two cameras oriented stereoscopically helps by giving depth cues that would otherwise have to be inferred from a single image. Above is an artist’s rendering of a NASA Mars rover from Wikipedia Commons.

The prospect of machines that can see for themselves is still on the horizon. Right now, the applications include autonomous rovers, drones, and industrial robots, but there are already early prototypes of machines that can assist the blind, drive cars, and take supersmart photos.
----------
Wikipedia on Edge Detection
Previously: Lines and the Brain

2 Comments on Computer vision, last added: 12/5/2011
32. Constructing Images of Brain Activity

Neuroscientists at University of California at Berkeley have developed a technique for creating digital images that correspond with neural activity in the brain.

This represents one of the first steps toward a computer being able to tap directly into what our brain sees, imagines, and even dreams.




(Link to video) Every image that we see activates photoreceptors in the retina of the eye. The information is fed through the optic nerve to the back of the brain. There, the information is assembled and interpreted by increasingly higher-level processes of the brain.

In this experiment, subjects watched clips of movie trailers while an fMRI machine scanned their brains in real time. The computer mapped activity throughout millions of “voxels” (3D pixels).

The computer gradually learned to associate qualities of shape, edges, and motion occurring in the film with corresponding patterns of brain activity.

It then built “dictionaries” by matching video images with patterns of brain activity, and then predicting patterns that it guessed would be created by novel videos, using a palette of 18 million seconds of random clips taken from the internet. Over time, the computer could crunch all this data into a set of images that played out alongside the original video.

If I understand the process correctly, the images we’re seeing on the right side (“clips reconstructed from brain activity”) are actually running averages created by blending a hundred or so random YouTube clips that met the computer’s predictions of what images would match the patterns it was monitoring in the brain.


In other words, the right-hand image is generated from existing clips, not from scratch. In this video (link), you can see the novel video that's causing the brain activity in the upper left of the screen, and some of the samples (strung out in a line) that the computer is guessing must be causing that kind of brain activity.

That would explain the momentary ghostly word fragments that pop up in the images, as well as the strange color and shape-shifts from the original.

The result is a moving image that looks a bit like a blurry version of the original video, but one that has been a bit generalized based on the available palette of average images. Evidently, the perception of faces triggers the brain in very active ways, judging from the relative clarity of the computer’s generated images, compared to other kinds of images.
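The blending step as I've described it can be sketched in a few lines. This is a crude illustration with made-up data (the function name and toy "library" are mine; the actual Berkeley pipeline is far more sophisticated): rank the library clips by how well their predicted brain response matches the measured activity, then average the top matches.

```python
import numpy as np

def reconstruct_frame(library, scores, top_k=3):
    """Average the library clips whose predicted brain response best
    matches the measured activity, ranked by score. A running-average
    sketch of the blending step, not the published method."""
    best = np.argsort(scores)[::-1][:top_k]
    return np.stack([library[i] for i in best]).mean(axis=0)

# Toy library of five "frames"; the first three score highest.
library = [np.full((4, 4), float(v)) for v in (1.0, 2.0, 3.0, 9.0, 9.0)]
scores = np.array([0.9, 0.8, 0.7, 0.1, 0.2])

frame = reconstruct_frame(library, scores)  # blends frames 0, 1, and 2
```

Averaging many plausible matches is what gives the reconstructions their blurry, generalized look: details that the top clips disagree on get washed out.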

I wonder what would happen if you set this system up in a biofeedback loop, so that the brain activity and image generation could play off against each other? It might be like a computer-aided hallucination.
Article on Gizmodo
Thanks, Christian Schlierkamp

5 Comments on Constructing Images of Brain Activity, last added: 11/16/2011
33. Why do veins appear blue?

I’d like to straighten out an incorrect statement that I made in a previous post and in my book on color.

On page 156, I said that in the region surrounding the lips, “there are relatively more veins carrying blue deoxygenated blood.” It turns out to be a misconception that blood turns blue when it loses its oxygen.

Yet veins deep below the skin certainly do appear blue. Why, then, has no one seen blue blood? I had always assumed the answer was that when a vein is cut open, the blood immediately turns red on contact with air.

In fact, blood is always red (or at least a deep maroon color) when it is deoxygenated.

What’s going on here? The scientific answer involves a lot of factors, but according to Wikipedia, on light Caucasian skin at least, “veins appear blue because the subcutaneous fat absorbs low-frequency light, permitting only the highly energetic blue wavelengths to penetrate through to the dark vein and reflect off. A recent study found the color of blood vessels is determined by the following factors: the scattering and absorption characteristics of skin at different wavelengths, the oxygenation state of blood, which affects its absorption properties, the diameter and the depth of the vessels, and the visual perception process.”

READ MORE
PDF of scientific paper
Wikipedia on Veins
Good summary by Quora.com
Science Blogs.com on the subject  
Color and Light: A Guide for the Realist Painter
Previously on GJ: Three Color Zones of the Face

Photo from We Heart It.
Thanks, Myke!

7 Comments on Why do veins appear blue?, last added: 9/19/2011
34. Checkerboard Illusion Video



Here's an impressive video version of the "checkerboard illusion." The lighting conditions in the room are set up to trick us into thinking the low light is the main light in the scene, but actually the top light facing down is dominant.

The giveaway is the lack of a cast shadow of the whole checkerboard table onto the floor from the purported key light. They should have painted that, too.

Thanks, Jocque

0 Comments on Checkerboard Illusion Video
35. Light-Field-Capture Camera

Usually when you take a photograph, you have to select the focal distance and commit to it.


Either the foreground is in focus, or the background is sharp. When you click the shutter, you get one focal setting, and you can’t change it later.




A Silicon Valley start-up company called Lytro has developed a new technology called a “light field camera.” According to the company’s website, it has a completely different lens and capture system, allowing you to take a photograph of a scene and then fiddle with the focus afterward.

Try clicking on different parts of the image below and see the focus change.



Unlike a regular camera, which captures only the light quantities that intersect a single focal plane inside the camera, the light field camera captures the intensity, color, and vector direction of all the rays of light. It replaces many of the internal components of a traditional camera with special software.
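One standard way to refocus a captured light field after the fact is "shift-and-add": translate each sub-aperture view in proportion to its position in the aperture, then average them, so that objects at one chosen depth line up while everything else blurs. The toy NumPy sketch below is my own illustration of that principle; Lytro's actual optics and software are proprietary and surely more involved.

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Synthetic-aperture refocus by shift-and-add: each sub-aperture
    view is shifted in proportion to its aperture offset (du, dv),
    then all views are averaged. Varying alpha moves the focal plane."""
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (du, dv) in zip(subviews, offsets):
        shifted = np.roll(img, (int(round(alpha * du)),
                                int(round(alpha * dv))), axis=(0, 1))
        acc += shifted
    return acc / len(subviews)

# Toy light field: one bright point seen from two aperture positions,
# with 2 pixels of parallax per unit of aperture offset.
base = np.zeros((9, 9))
base[4, 4] = 1.0
views = [np.roll(base, 2, axis=0), np.roll(base, -2, axis=0)]
offsets = [(1, 0), (-1, 0)]

refocused = refocus(views, offsets, alpha=-2)  # shifts undo the parallax
```

With `alpha=-2` the two copies of the point land on the same pixel (in focus); with `alpha=0` they stay apart and the point appears doubled and dimmed, which is exactly the out-of-focus condition.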

The video below gives the pitch:



A website promoting the new technology gives a gallery of images where you can rack the focus to any distance.

If you combined these image interfaces in stereoscopic 3D with eyetracking technology to input the focus changes, I believe you'd get a very powerful 3D illusion.

 More on Engadget 
More on Kurzweil News
Thanks, Carl James Holley and Dorian Iten

4 Comments on Light-Field-Capture Camera, last added: 6/29/2011
36. Art makes us feel like we’re in love

In a recent study, Semir Zeki, Professor of Neuroaesthetics at University College London, has shown that artwork stimulates the same centers of the brain that are active when we fall in love.



His research has shown that looking at an inspiring painting activates reward centers and releases the “feel-good” neurotransmitter dopamine.

Previously on GJ: Neuroaesthetics, with Semir Zeki interview
Professor Zeki's Blog
Zeki's book: Splendors and Miseries of the Brain: Love, Creativity, and the Quest for Human Happiness

5 Comments on Art makes us feel like we’re in love, last added: 6/15/2011
37. Automated Selectivity

One of the reasons we like to look at paintings is that reality is filtered through someone's brain. Painters select the important elements out of the infinite detail that meets our eyes.


Here, Al Parker chooses to show us detail in the hands and face. He sinks everything else into a flat tone.


John Singer Sargent could have detailed every paving stone and roof tile in this Venice street scene. Instead he softened and simplified the background areas and put the focus on the faces.

This selectivity isn’t arbitrary. The detailed areas correspond with the parts of the picture that we want to look at anyway.  As we’ve seen in previous posts, eye tracking studies have demonstrated the cognitive basis of selective attention. Viewers’ eyes consistently go to areas of a picture with the greatest psychological salience: things like faces, hands, and signs. We’re hard-wired for it.

What happens if you combine eye tracking data with computer graphics algorithms to automate the process of selective omission? Would the result look “expressive” or “artistic”?


In his doctoral thesis for Rutgers University, Anthony Santella did just that. The photographs on the left include a set of overlapping circles showing where most people spent their time looking in each image. The larger the circle, the longer the concentration on those areas.

Santella combined that data with a rendering algorithm which simplified other areas of the image. In the top image, note the flattening of the far figures and the arches above them. In the bottom image of the woman, note how the wrinkles in the drapes and the textures in the sweater are rendered with flat tones. But her eyes, nose and mouth are still detailed.
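The keep-detail-near-fixations idea can be sketched quite directly: blur the whole image, then restore the original pixels inside a circle around each fixation, with the circle's radius standing in for gaze duration. The function names below are my own, not Santella's, and a box blur stands in for his much richer rendering algorithms.

```python
import numpy as np

def box_blur(img, k=5):
    """Flatten detail with a k x k box filter (edges handled by
    replicating border pixels)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def selective_render(img, fixations):
    """Keep full detail inside a circle around each (y, x, radius)
    fixation; flatten everything outside with the box blur."""
    blurred = box_blur(img)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    keep = np.zeros(img.shape, dtype=bool)
    for fy, fx, r in fixations:
        keep |= (yy - fy) ** 2 + (xx - fx) ** 2 <= r ** 2
    return np.where(keep, img, blurred)

# Fine checker texture with one fixation near the center.
texture = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
rendered = selective_render(texture, [(8, 8, 3)])
```

The fixated region keeps its crisp texture while the surroundings collapse toward a flat mid-tone, a crude analogue of the flattened drapery and sweater textures in Santella's renderings.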


The rendering algorithms can be designed to interpret the source photo in terms of line and color, or the shapes can be modulated in size according to the interest factor. Note how the outlying areas of each rendering are simplified.

Whichever rendering style one desires, the output image has a sense of psychological relevance, more so than rendering algorithms based merely on abstract principles such as edge detection. As a result, these computer-modified photographs have a sense of something approaching true human artistry.

The results of this interaction between eye-tracking data and computer rendering algorithms suggest a heretical thought: What we think of as a rare gift of expressive artistic judgment is really something fairly simple and logical, something you can teach a machine to do.

"THE ART OF SEEING: VISUAL PERCEPTION IN
DESIGN AND EVALUA

16 Comments on Automated Selectivity, last added: 5/27/2011
38. Blue Light and the Circadian Clock

Researchers at the Rensselaer Polytechnic Institute's Lighting Research Center (LRC) in Troy, New York are proving that exposure to blue light is tied to the day/night sleep cycles of our circadian rhythms.


Mariana Figueiro, PhD., who heads the LRC’s light and health program explains:

“Within the mechanism that affects the circadian system are two color opponent channels. One of those is the blue vs. yellow (BY) channel, which seems to participate in converting light into neural signals to the part of the brain that generates and regulates circadian rhythms.”

A person attuned to the changing colors of outdoor light will notice that just after the orange-colored sun sets, the world is bathed in blue light from the twilight sky. So perhaps it’s not surprising that our body rhythms are tuned by this color.

In one LRC study, patients with Alzheimer’s disease experienced more hours of sleep per night after being exposed to blue LED lights than they did after being exposed to red lights.

“Blue sky is ideal for stimulating the circadian system because it’s the right color and intensity, and it’s ‘on’ at the correct time for the right duration—the entire day,” said LRC director Mark Rea, Ph.D.

Presumably, since non-human animals also have the BY channel, they would respond to the same signals. 
--------
Science Daily article about waking up teens with colored light
RPI Lighting Research Center article
Painting is called "Bonfire" by Isaac Levitan

1 Comment on Blue Light and the Circadian Clock, last added: 5/16/2011
39. Gaze Direction

Which way are these eyes looking? Do they seem to be looking right at you, or off to the side?


Our perception of gaze direction is influenced by the position of the iris inside the visible surface of the sclera (the white of the eye), but that’s only part of the story.

In 1824 William Hyde Wollaston made exact duplicates of an engraved plate of eyes and eyebrows. He then placed them in two different facial contexts. One face is turned one way, and the other points in the opposite direction.


Surprisingly the exact same set of eyes appears to be looking in different directions solely because of the surrounding facial cues.


Even if you take a matched pair of eyes and eyebrows and just shift the nose beneath them from one side to the other, you can shift the apparent direction of gaze.

Why this happens is still not completely understood. Ophthalmologists Michael F. Marmor and James G. Ravin, authors of the new book “The Artist’s Eyes: Vision and the History of Art,” suggest a psychological cause: “Our judgment of the direction of someone’s eyes is linked, in part, to the direction we believe that person to be looking.”
------
The Artist’s Eyes: Vision and the History of Art by Marmor and Ravin
William Hyde Wollaston, Apparent Direction of Eyes in a Portrait
-----
Don’t forget to vote in the video contest (scroll down to view the videos).

18 Comments on Gaze Direction, last added: 5/8/2011
40. Lines and the Brain, Part 2

To artists, a line is a powerful geometric entity, whether it’s a straight or curved mark on a piece of paper. According to neurobiology writer Carl Schoonover, the drawing by Picasso at left shows that we can distinguish shapes easily with a few lines, which he says “taps into our visual system's predilection for line.”


Yesterday I started to pose a few questions: Are lines merely abstract constructions—artificial conventions—that we have invented to represent nature? Do they have counterparts in the real world and in our minds? Do they reflect something basic going on in our brain when we look at the world?

These can be sensitive questions for artists who do most of their work in line. They are often made to feel that what they do is just a preliminary step, or that it isn’t as advanced as what a painter does. In fact, the management of line is one of the most sophisticated skills an artist can master, and it corresponds to some of the most basic and powerful experiences of visual perception.


We use lines to describe several things:
1.  A boundary of a form (B, above).
2. An edge of a surface marking (A).
3. A plane change within a form (C).
4. Or an edge of a cast shadow (D).
5. Also, a line can describe a thin form, like a tree branch or a piece of spaghetti.



A shape boundary can be regarded as a type of line. Some images, like this poster by Maxfield Parrish, can be made up entirely of overlapping shapes.

At the initial level of visual processing, neuron groups in the visual cortex begin to process shape boundaries in a similar way that they process outlines drawn on white paper. But as we'll see in later posts, edge detection is just one preliminary step in object recognition. The brain constructs an understanding of shape and form and space by combining information from many different cues.




Our visual system has no trouble sorting out boundaries, surface marks, plane changes, and cast shadows. But these are not trivial tasks when you’re trying to educate a computer, even a smart computer, to see. Edge detection and feature extraction are exciting frontiers for people at the intersection of computer science and visual perception.
----
That last photo is from Wikipedia on Edge Detection
Colored cube is from GurneyJourney "Color Consta

12 Comments on Lines and the Brain, Part 2, last added: 5/1/2011
41. Lines and the Brain, Part 1

“There are no lines in nature, only areas of color, one against another” said Edouard Manet.

Perhaps not. But by the same logic there are no colors in nature, either.

Lines and colors are phenomena that manifest themselves in our minds. At that level, lines are very real indeed. Perceiving boundaries is a fundamental aspect of our life as visual creatures. It is hard-wired into our perception.

According to neuroscience PhD candidate Carl Schoonover:

“Lines are the bread and butter of our visual experience. They define trees, horizons, the edges of things we don’t want to bump into. Our visual system is designed to rapidly extract this meaningful information in order to make sense of the world. Consequently, the area in the visual cortex that first processes information coming in from the eyes is configured in a manner that reflects this preference for lines.”
---------
Quote from Carl Schoonover in “Portraits of the Mind,” (2010)
Images from:
Vanderpoel, The Human Figure, 1908
Norton,  Freehand Perspective and Sketching

19 Comments on Lines and the Brain, Part 1, last added: 4/29/2011
42. First Impressions

Some people define “impressionism” as an approach to painting where the goal is to capture the first perception of a scene. The World Book Encyclopedia says that “impressionist painters try to show what the eye sees at a glance.”


The first-glance impact is usually represented by an image with simple masses of color, painted with big brushstrokes without much detail, often with soft edges between the masses, such as this haystack painting by Monet.

Typically, “impressionist” images have high-chroma dabs of color that resolve into a larger blurry image. Recognizable small details are conspicuously left out.


We’re told that this is how the eye perceives on the first glance. Let me see if I can simulate this idea using a photographic image. Here’s an unaltered photo of a street scene.


Here’s an “impressionist” take on the same scene (using the Photoshop filter “paint daubs” and a heightened color saturation).

I believe there are some assumptions here that need examining. Does our first impression really look like an impressionist painting?

If I’m really honest about my own experience of vision, my first-glance take on a scene is nothing at all like a Monet. What I see in the first two or three seconds are a few extremely detailed but disconnected areas of focus. Small individual elements, such as a sign, a face, or a doorknob, take on particular importance immediately, perhaps because the left-brain decoding process (seeing in symbols) is so heavily engaged in the first few seconds.


I’ve altered the photo to try to simulate this experience by sharpening and heightening these disconnected elements. What happens in the first few seconds for you? I don’t know how other people see, because I’m stuck inside my own head. Perhaps eye-tracking and fMRI studies can help us better understand what really happens cognitively in the first few seconds of visual perception. Maybe it varies widely from person to person.

What I’m questioning is not the artistic tradition of impressionism, but rather our habits of thinking about it. The idea of trying to capture the broad, simple masses of a scene is a valid artistic enterprise. But even though I’m a plein air painter with impressionist leanings, I believe that kind of seeing emerges only after sustained, conscious effort and training, or not at all.

16 Comments on First Impressions, last added: 4/18/2011
43. Smart Tracking Camera


New technology allows a camera to track an eye, a particular face, or any other specified object. 

The inventor, Zdenek Kalal, takes it through its paces, showing how it can learn and adapt to a lot of tracking challenges.

The ominously named “Predator” has strong potential for autonomous driving systems, military targeting, consumer photography, wildlife filming, FX compositing, surveillance videography, image stabilization, industrial robots, and human/computer interface technology. 

The field of machine vision is still in its infancy and largely limited so far to industrial applications. We’ve grown accustomed to surveillance cameras filming us. Next we’ll have to get used to seeing machines that recognize us and keep an eye on us.
-------
Via Best of YouTube
More at Wired.com's Gadget Lab

2 Comments on Smart Tracking Camera, last added: 4/8/2011
44. Disney uses lab tests to gauge response to ads.

According to Variety magazine, the Disney company is working with a scientific laboratory known as the “Disney Media and Advertising Lab” or “Ad Lab” to analyze audience response to the ads appearing on its networks.


The lab building, which does not include the Disney logo, is located in Austin, Texas. Scientists dissect biometric data about eye tracking (left image above), heart rate, and galvanic skin response to better understand emotional reactions to ad content. According to Senior VP Artie Bulgrin, these data are far more accurate than the self-reporting of questionnaires.

Another technique called “facial coding” or “facial mapping” (right image, above) tracks tiny movements of individual facial muscles. For the future, the scientists at Ad Lab are considering including data from brain wave analysis to better understand how people respond to ads.

The lab’s primary mandate is to study advertising strategies on behalf of ESPN, ABC, and ABC Family, helping those multi-platform media networks to coordinate better with their advertising partners.

Ad Lab studies everything from classic metrics, such as unaided recall, to novel ad strategies like live ads, split-screen ads, banners, and transparencies, where ads are superimposed over content.

No word yet on whether the mouse house is using ad lab to pre-test its motion pictures.

 Adweek article on Ad Lab
"Austin to House Disney's Ad Lab"

Variety article: “Disney’s Lab Studies People” by David Cohen

14 Comments on Disney uses lab tests to gauge response to ads., last added: 3/17/2011
45. Eyetracking in International Artist


The new International Artist magazine has a six-page feature on the research on eyetracking and composition that I did with the help of Greg Edwards of Eyetools, Inc. The feature includes an extra painting that I didn't have room to analyze in Imaginative Realism.



You can virtually browse a few pages of International Artist on their official website.
Previously on GurneyJourney: Eyetracking.
Subscription info for IA.

5 Comments on Eyetracking in International Artist, last added: 3/23/2010
46. The Two-Streams Hypothesis

In the last two decades, new scanning technology has given us a better idea of what is happening in the brain as we process visual information. One of the discoveries that has come out of this new data is the realization that visual processing divides into two streams, which take place in very different parts of the brain.



The two streams originate in the retina, which begins some low-level processing. The information pathways route back to the visual cortex at the back of the brain. Although there is some crossover and interaction, the two streams are largely kept separate from the level of the retina all the way to the higher-level vision centers of the brain.

The area of the brain that interprets tone is several inches away from the area that interprets color, making the experience of tone and color distinct physiological experiences, as distinct as sight and hearing.



The color stream (in blue) is also called the ventral stream or the “what” stream. It is more concerned with recognizing, identifying, and responding to objects. Color processing through the ventral stream is a capacity that is shared only by higher primates, not the bulk of other mammals.

I asked Dr. Livingstone if she would describe more differences between the "where" and "what" streams, and in particular whether one stream is more associated with emotional response or higher cognitive function.

“I don't know about emotions,” she said, “but the ventral (‘what’) stream is certainly more associated with higher, conscious functions and awareness.”

The difference between these two streams may explain why classically trained artists use a strategy of planning the tonal organization of their compositions separately from the color scheme.
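That separation has a direct digital analogue: an image's tone can be pulled out of its color by computing a luminance value per pixel. The sketch below is my own illustration, not from the post; it uses the standard Rec. 601 luma weights, and the two RGB colors are made-up examples.

```python
# Separate "tone" from "color": compute the luminance (perceptual
# brightness) of an RGB color using the Rec. 601 weights. Two colors
# with very different hues can carry nearly the same tone.

def luminance(r, g, b):
    """Perceptual brightness of an RGB color (Rec. 601 luma weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# A saturated red and a mid-green: completely different hues,
# nearly identical tones.
red = (200, 60, 60)
green = (60, 130, 60)

print(round(luminance(*red)))    # 102
print(round(luminance(*green)))  # 101
```

A tonal plan of a painting is, in effect, a map of only this luminance component, with the chromatic information set aside to be decided separately.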

"The Artist at the Easel” by Sadie Dingfelder, February 2010 Monitor magazine of the American Psychological Association.


Two-Streams Hypothesis on Wikipedia

26 Comments on The Two-Streams Hypothesis, last added: 2/13/2010
47. Checkerboard Illusion

The recent colored cube illusion showed us how our brain compensates for a color cast, making hues seem constant, even when they’re really not.

Our visual system does a similar thing with tones, discounting the effect of shadows and grouping tones into meaningful sets.

This checkerboard is an example. I painted it using the exact same gray mixture for the dark square in the light area (1) as I did for the light square in the shadow (2).

Don’t believe it? Here are the exact same squares with everything else made white.

Why do the tones seem so different? The light square is surrounded by darker squares. This makes our visual system automatically conclude that the actual tone is light in value, and we group it with the other light squares.

We interpret the diagonal bars of darker tone as shadows for the following reasons:

1. They have soft edges, and soft edges usually belong to shadows.
2. Those edges are parallel, and cast shadows from the sun on a flat surface are parallel.
3. The tones of adjacent colored squares gradate to an equal degree, just the way shadows do.

Our visual system is designed to help us determine the actual color of objects in the world. The fact that it seems to deceive us is not a defect of our vision. It’s central to our survival.

But we have to know about this mechanism of visual perception if we want to paint tones accurately.
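The arithmetic behind the illusion is simple to sketch. In this illustrative model (the tone values and shadow factor are my own assumptions, not measurements from the painting), the tone reaching the eye is the square's base tone multiplied by the illumination falling on it:

```python
# A numeric sketch of the checkerboard illusion: a "dark" square in
# direct light and a "light" square inside a shadow can send exactly
# the same measured tone to the eye, even though our visual system
# groups each with its neighbors and reads them as different.

LIGHT, DARK = 200, 100   # checkerboard base tones, out of the shadow
SHADOW = 0.5             # the shadow halves whatever it falls on

def measured_tone(base, in_shadow):
    """Tone actually reaching the eye: base tone times illumination."""
    return base * (SHADOW if in_shadow else 1.0)

square_1 = measured_tone(DARK, in_shadow=False)   # dark square, full light
square_2 = measured_tone(LIGHT, in_shadow=True)   # light square, in shadow

print(square_1, square_2)  # 100.0 100.0 -- identical measured tones
```

Our brain, in effect, runs this calculation in reverse: it estimates the illumination and reports the base tone, not the measured one.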
----------
A related illusion from Edward Adelson’s website.

17 Comments on Checkerboard Illusion, last added: 1/28/2010
48. Color Constancy

The painting below shows the same colorful cube in red light and green light. The squares on the cube are cyan, magenta, ochre, blue, and white. Or so they seem. What colors are those squares really, objectively?

In fact, the cyan square in the bottom corner of the red-lit scene is exactly the same color mixture as the red square in the upper corner of the green-lit scene.

To test that claim, here’s the same image file with everything but those squares turned to gray tones. Nothing else has changed. The colors of those squares are made from the same paint, applied with the same brush. (The shaded side is a slightly redder gray in both cases.)

This phenomenon is called color constancy. We interpret local colors as stable and unchanging, regardless of the effects of colored illumination, the distractions of cast shadows, and the effects of form modeling.

Here’s another example. This powerful optical illusion, created digitally by R. Beau Lotto, is called the cross-piece illusion. Two bars made up of colored cylinders meet in a junction piece. In one picture, the cross-piece looks blue-gray. In another it looks yellow. In fact it’s precisely the same color. At this link, you can view the illusion with a slider to isolate the actual color.

A fire truck looks red, regardless of whether we see it lit by the orange light of a fire, the blue light of the twilight sky, or the blinking light of an ambulance. If the truck were parked halfway in shadow, we would still believe it to be a single, consistent color. If the fender were dented, the tones of red reaching our eyes would change, but we would still believe the red to remain the same.

Our visual systems make such inferences automatically. Color constancy processing happens unconsciously. It’s almost impossible for our conscious minds to override it.
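One classic way to model this "discounting" is a von Kries-style correction: divide each channel of the signal reaching the eye by the corresponding channel of the estimated illuminant. The sketch below is my own illustration; all the reflectance and illuminant values are invented for the example.

```python
# Color constancy as discounting the illuminant (von Kries-style):
# the same surface viewed under red light and green light sends very
# different signals to the eye, yet dividing out each illuminant
# recovers the same underlying surface color.

surface = (0.2, 0.8, 0.8)        # a "cyan" surface reflectance
red_light = (1.0, 0.4, 0.4)      # reddish illuminant
green_light = (0.4, 1.0, 0.4)    # greenish illuminant

def seen(reflectance, light):
    """Signal reaching the eye: reflectance times illumination, per channel."""
    return tuple(r * l for r, l in zip(reflectance, light))

def discount(signal, light):
    """Divide out the estimated illuminant to recover the surface color."""
    return tuple(round(s / l, 3) for s, l in zip(signal, light))

under_red = seen(surface, red_light)      # roughly (0.2, 0.32, 0.32)
under_green = seen(surface, green_light)  # roughly (0.08, 0.8, 0.32)

# After discounting, both views agree on the surface's "real" color.
print(discount(under_red, red_light))     # (0.2, 0.8, 0.8)
print(discount(under_green, green_light)) # (0.2, 0.8, 0.8)
```

The hard part, which this toy model assumes away, is that the visual system must estimate the illuminant from the scene itself; illusions like Lotto's cube work by feeding it a misleading estimate.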
---------
More at
R. Beau Lotto's site
HueValueChroma
Wikipedia on Color Constancy

17 Comments on Color Constancy, last added: 1/18/2010
49. Change Blindness


Here's proof that most of the time we look but don't see.

18 Comments on Change Blindness, last added: 12/16/2009
50. Shady Diamond Illusion


See for yourself: all the diamonds are identical in tone.
Via Best of YouTube

14 Comments on Shady Diamond Illusion, last added: 12/25/2009
