The Daily Wildcat

    Teaching computers how to see

    Julie Huynh

    A view of buildings on the UA campus on Wednesday. Computers would have difficulty in differentiating between these overlapping buildings because it is hard for them to distinguish between borders.

    Researchers at the UA have been awarded a grant from the Office of Naval Research to study human vision in an attempt to improve computer vision.

    Mary Peterson, a psychology professor and director of the Cognitive Science Program, is part of a larger nationwide team spanning a variety of disciplines that received the grant. Her role in the project is to decipher the mechanisms underlying human vision — specifically, the ability to differentiate objects from a background.

    “We want to understand human vision so that we can inform machine vision,” said Elizabeth Salvagio, a graduate student in the Visual Perception Lab. “Many, many years ago, maybe in the ’70s, computer scientists said they’d be able to build a machine that could see like a human in 10 years. Here we are, 45 years later, and they still can’t do it.”

    Ultimately, what researchers seek to understand is how people differentiate objects from their background, Salvagio said. They want to know how we perceive one region as a shaped figure and the other region as a shapeless background.

    “To be able to navigate in this world, we have to know where the objects are to avoid them, but you also have to know where the empty spaces are so you can move between them,” Salvagio said.

    According to Peterson, one of the areas in which computers struggle to identify images is crowded scenes, such as when objects are occluded — completely or partially covered — by other objects. What computers are good at is identifying faces and distinctive textures, like the coats of zebras, tigers and leopards.

    “What that means is they’re capable of mistakes that a human wouldn’t make,” Peterson said. “For example, a tiger coat may be interpreted to be an actual tiger.” 

    One way human vision differs from computer vision is that the human brain uses past experience to help decode visual input. Peterson said she hypothesizes that vision is a two-way street, in which what we see is affected by our experiences, attitudes and intentions.

    “My work has long shown that our past experience exerts an influence very early in the course of [visual] processing, that it helps us to parse the world into objects in the first place,” Peterson said. “People are resistant to this idea. They’re worried that because you and I have different past experiences, out of sheer necessity since we’re different people, that we may see different worlds.” 

    From a physiological standpoint, the wiring for this kind of two-way communication appears to be present.

    “Physiologists have found that, in the brain, whenever there is a connection [from] a higher level to a lower level, there are also connections that go in the opposite direction, allowing one to affect the other in either direction,” Peterson said. “The physiology seems to favor this feedback mechanism, this two-way street explanation.”

    According to Peterson, the grant is meant to help figure out when this feedback connection is used and why it is important. Scientists studying these questions can then take the data to their computer science colleagues, who can build models and computer vision programs that more closely approximate human vision.

    More knowledge of human vision, and the better computer vision it enables, could have a variety of uses. One potential application is the military’s training of fighter pilots, which is one reason the Navy is funding this research.

    “When [pilots] are taught to fly, they learn using flight simulators,” Salvagio said. “They’ll be in a cockpit that has motion, but they’re looking at a screen that is projecting an image. … That’s not the real world. But how can we make that experience more life-like? By understanding how vision is accomplished in the first place, we can do that.”

    _______________

    Follow Laeth George on Twitter.
