Peeking between memory and perception
Study zeroes in on how humans interpret visual environment
The human field of vision is only about 180 degrees, so if you’re reading this at your desk, you should have a good view of the stuff that’s right in front of you — your computer and phone, maybe some pictures of your family.
Despite that limited view, your brain is able to stitch together a coherent 360-degree panorama of the world around you, and now researchers are beginning to understand how.
Harvard scientists have pinpointed two regions in the brain’s so-called scene network — the retrosplenial cortex (RSC) and the occipital place area (OPA) — and demonstrated that they share nearly identical patterns of neural activation when people are shown images of what is in front of and behind them. The finding suggests that these regions play a key role in helping humans understand their visual environment. The study is described in an Aug. 26 paper in Current Biology.
“We have a limited field of view — we can only see what’s immediately in front of us,” said lead author Caroline Robertson, a junior fellow of the Harvard Society of Fellows. “And yet you have a very detailed visual memory, particularly in a familiar place like your office.
“We know there are cells in the brain — like head direction cells — that maintain a representation of your spatial position in the environment around you,” she added. “Yet with your visual system, all we know is how it responds to what’s in your current field of view. What we wanted to get at is the intersection between those two, between memory and perception.”
Though scientists have long understood that certain brain regions are involved in processing scenes as opposed to faces or bodies, pinpointing the regions that merge the images we see moment to moment into a coherent view of the world demanded some creative thinking — and some gaming hardware.
In a series of tests, Robertson and colleagues used virtual-reality goggles to let volunteers explore panoramic images of Boston’s Beacon Hill neighborhood. The first test was largely a proof of concept: volunteers donned the goggles and were shown a series of panoramic images. Some saw a single continuous image, while others saw images that contained a gap.
When participants were later shown pairs of snapshots, the researchers found that those who had seen the continuous panorama were better able to recognize snapshots taken from opposite sides of the same street as belonging together.
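The logic of that comparison is simple to sketch in code. The toy analysis below is a minimal sketch, not the study’s actual analysis: it assumes one accuracy score per participant on the snapshot-pairing task and asks whether the continuous-panorama group outperformed the gapped group. The group sizes and accuracy values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant accuracy on the snapshot-pairing task.
# All values and group sizes are invented for illustration; they are
# not data from the study.
continuous_group = np.clip(rng.normal(loc=0.78, scale=0.08, size=20), 0, 1)
gapped_group = np.clip(rng.normal(loc=0.65, scale=0.08, size=20), 0, 1)

# Independent-samples t-test: did seeing the unbroken panorama improve
# recognition of snapshot pairs from the same street?
t, p = stats.ttest_ind(continuous_group, gapped_group)
print(f"continuous mean={continuous_group.mean():.2f}, "
      f"gapped mean={gapped_group.mean():.2f}, t={t:.2f}, p={p:.4f}")
```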
Next, participants were placed in an MRI scanner and asked whether images came from the left or right side of the street. Over the course of 90 minutes, the researchers collected dozens of measurements of brain activity, and later analyzed the resulting activation patterns for similarities between views drawn from the same panorama.
“What we were looking for in our analysis was whether neural activity for images that were across the street from each other looked similar,” Robertson said.
Armed with that MRI data, Robertson and colleagues found that while one part of the brain’s “scene network” — the parahippocampal place area (PPA) — responded in the same way regardless of whether images had been experienced as parts of one panorama, the RSC and OPA showed similar activation patterns for images that were connected, suggesting that those two regions play a role in constructing panoramic representations.
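In spirit, that analysis resembles a standard pattern-similarity test: correlate the multivoxel response to one view with the response to the view from the opposite side of the street, and ask whether that correlation is higher than it would be for unrelated views. The sketch below illustrates the idea on simulated data; the array shapes, noise levels, and the helper name pattern_overlap are assumptions for illustration, not the paper’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def pattern_overlap(patterns_a, patterns_b):
    """Mean Pearson correlation between paired activation patterns.

    patterns_a, patterns_b: (n_images, n_voxels) arrays in which row i
    of each array holds the response to one of the two opposing views
    of panorama i.
    """
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in zip(patterns_a, patterns_b)]
    return float(np.mean(corrs))

# Simulated multivoxel patterns: 30 panoramas x 100 voxels.
# In a region that links the two views (RSC/OPA in the study's account),
# the "behind" pattern is modeled as a noisy copy of the "front" pattern;
# in a region that does not (PPA-like), the two are unrelated.
front = rng.normal(size=(30, 100))
behind_linked = front + rng.normal(scale=1.0, size=front.shape)
behind_unlinked = rng.normal(size=(30, 100))

print("linked ROI overlap:  ", round(pattern_overlap(front, behind_linked), 2))
print("unlinked ROI overlap:", round(pattern_overlap(front, behind_unlinked), 2))
```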
The final test, Robertson said, was nearly identical, but before asking participants whether an image came from the left or right, researchers briefly flashed a “prime” image — a view down the street — with the expectation that it would trigger people’s memories of the full panorama.
“Once you form that association in the brain, and you have an overlap between images from either side of the street, I expect that if I show you image one, I’m implicitly triggering image two,” Robertson explained. “So we found people were faster and more accurate if they saw the priming image versus seeing a totally different panorama or no prime at all.”
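The behavioral signature that prediction implies is easy to state: if the prime reactivates the stored panorama, responses on primed trials should be faster (and more accurate) than on unprimed trials. The sketch below checks the speed half of that prediction on simulated reaction times; every value here is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated reaction times (seconds) for the left/right judgment.
# Primed trials are drawn slightly faster; every number here is
# invented for illustration, not data from the study.
rt_primed = rng.normal(loc=0.62, scale=0.10, size=200)
rt_unprimed = rng.normal(loc=0.70, scale=0.10, size=200)

# A paired, within-subject comparison would be more faithful to the
# design; an independent-samples test keeps this toy example simple.
t, p = stats.ttest_ind(rt_primed, rt_unprimed)
print(f"primed mean RT={rt_primed.mean():.3f}s, "
      f"unprimed mean RT={rt_unprimed.mean():.3f}s, t={t:.2f}, p={p:.3g}")
```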
The study offers new insight into how vision and memory work together to inform our understanding of the world around us, she added.
“Even though we only get these very discrete snapshots of our world, and even though that snapshot is interrupted when we blink, it doesn’t feel as though our mental image of our environment is constantly going on- and off-line,” Robertson said. “We feel a smooth, consistent representation of the world that we are interacting in, and there’s evidence that, for that to happen, there needs to be a hub, somewhere in the brain, where your current field of view interacts with your memory of what’s around you, and that’s what we’re putting together in these regions of the brain.”