Recent quotes:

Antabuse may help revive vision in people with progressive blinding disorders: Test of drug could prove role of hyperactive retinal cells in blindness, potentially leading to better therapies -- ScienceDaily

A group of scientists led by Richard Kramer, UC Berkeley professor of molecular and cell biology, had previously shown that a chemical -- retinoic acid -- is produced when light-sensing cells in the retina, called rods and cones, gradually die off. This chemical causes hyperactivity in retinal ganglion cells, which ordinarily send visual information to the brain. The hyperactivity interferes with their encoding and transfer of information, obscuring vision. He realized, however, that the drug disulfiram -- also called Antabuse -- inhibits not only enzymes involved in the body's ability to degrade alcohol, but also enzymes that make retinoic acid. In new experiments, Kramer and collaborator Michael Goard, who directs a lab at UC Santa Barbara (UCSB), discovered that treatment with disulfiram decreased the production of retinoic acid and made nearly blind mice much better at detecting images displayed on a computer screen.

Encoding sights

This capability of “visual working memory” feels effortless, but a new MIT study shows that the brain works hard to keep up. Whenever a key object shifts across our field of view — either because it moved or our eyes did — the brain immediately transfers a memory of it by re-encoding it among neurons in the opposite brain hemisphere.

Octopuses can ‘see’ with their skin | Science News

So far, though, there’s no evidence in squids and cuttlefishes that light striking skin is enough to make chromatophores blush. Cronin isn’t completely ruling out the idea yet and speculates about more subtle roles. “Maybe they don’t respond directly, but they may alter a signal sent from the central nervous system,” he says. Research on light detection beyond eyes and brains “has been neglected for some time,” says developmental biologist Florian Raible of the University of Vienna. Yet non-eye light-sensing structures or compounds show up in the tube feet of sea urchins and the body walls of fruit fly larvae. And in Raible’s lab, a polychaete worm flees light — even after beheading.

How Brain Cells Filter Information in Groups - Neuroscience News

For decades, scientists studying the visual system thought that individual brain cells, called neurons, operate as filters. Some neurons would prefer coarse details of the visual scene and ignore fine details, while others would do the opposite. Every neuron was thought to do its own filtering. A new study led by Salk Institute researchers challenges this view. The study revealed that the same neurons that prefer coarse details could change to prefer finer details under different conditions. The work, which appeared in the journal Neuron on December 31, 2018, could help to better understand neural mechanisms that shape our perceptions of the world. “We were trying to look beneath the hood and figure out how these filters work,” says Professor Thomas Albright, director of Salk’s Center for Neurobiology of Vision and a senior author of the study.

A tilt of the head facilitates social engagement: New findings of potential value for people with autism -- ScienceDaily

Scientists have known for decades that when we look at a face, we tend to focus on the left side of the face we're viewing, from the viewer's perspective. Called the "left-gaze bias," this phenomenon is thought to be rooted in the brain, the right hemisphere of which dominates the face-processing task. Researchers also know that we have a terrible time "reading" a face that's upside down. It's as if our neural circuits become scrambled, and we are challenged to grasp the most basic information. Much less is known about the middle ground: how we take in faces that are rotated or slightly tilted.

"We take in faces holistically, all at once -- not feature by feature," said Davidenko. "But no one had studied where we look on rotated faces." Davidenko used eye-tracking technology to get the answers, and what he found surprised him: the left-gaze bias completely vanished and an "upper eye bias" emerged, even with a tilt as minor as 11 degrees off center. "People tend to look first at whichever eye is higher," he said. "A slight tilt kills the left-gaze bias that has been known for so long. That's what's so interesting. I was surprised how strong it was."

Perhaps more importantly for people with autism, Davidenko found that the tilt leads people to look more at the eyes, perhaps because it makes them more approachable and less threatening. "Across species, direct eye contact can be threatening," he said. "When the head is tilted, we look at the upper eye more than either or both eyes when the head is upright. I think this finding could be used therapeutically."

Davidenko is eager to explore two aspects of these findings: whether people with autism are more comfortable engaging with images of rotated faces, and whether tilts help facilitate comprehension during conversation. The findings may also be of value for people with amblyopia, or "lazy eye," which can be disconcerting to others. "In conversation, they may want to tilt their head so their dominant eye is up," he said. "That taps into our natural tendency to fix our gaze on that eye."

The effect is strongest when the rotation is 45 degrees. The upper-eye bias is much weaker at a 90-degree rotation. "Ninety degrees is too weird," said Davidenko. "People don't know where to look, and it changes their behavior totally."

Davidenko's findings appear in the latest edition of the journal Perception, in an article titled "The Upper Eye Bias: Rotated Faces Draw Fixations to the Upper Eye." His coauthors are Hema Kopalle, a graduate student in the Department of Neurosciences at UC San Diego who was an undergraduate researcher on the project, and the late Bruce Bridgeman, professor emeritus of psychology at UCSC.

How the brain reacts to loss of vision: Going blind affects all senses, and disrupts memory ability -- ScienceDaily

Before any changes had developed in the sensory cortices, the researchers observed that loss of vision was first followed by changes in the density of neurotransmitter receptors and impairments of synaptic plasticity in the hippocampus. In subsequent months, hippocampal plasticity became more impaired and spatial memory was affected. During this time the density of neurotransmitter receptors also changed in the visual cortex, as well as in other cortical areas that process other sensory information. "After blindness occurs, the brain tries to compensate for the loss by ramping up its sensitivity to the missing visual signals," explains Denise Manahan-Vaughan, who led the study. When this fails to work, the other sensory modalities begin to adapt and increase their acuities. "Our study shows that this process of reorganisation is supported by extensive changes in the expression and function of key neurotransmitter receptors in the brain. This is a major undertaking, during which time the hippocampus' ability to store spatial experiences is hampered," says Manahan-Vaughan.

I see therefore I think

For example, in the seven or so seconds, on average, that it took students in the study to solve a logical reasoning problem, researchers recorded at least 23 eye movements. Among other clues, ocular activity indicated which data students absorbed, or disregarded, to arrive at their conclusions. The study builds on previous findings by Bunge and fellow researchers that track cognitive changes in students during mentally challenging learning tasks. For example, a 2012 study found that a three-month LSAT course strengthened the circuitry in the brain's frontoparietal network and boosted the reasoning skills of two dozen young adults, compared to pre-law students who did not complete the course.

Time-traveling illusion tricks the brain: How the brain retroactively makes sense of rapid auditory and visual sensory stimulation -- ScienceDaily

The first illusion is called the Illusory Rabbit. To produce the illusion, first a short beep and a quick flash are presented nearly simultaneously on a computer, with the flash appearing at the left side of the screen. Next, 58 milliseconds after the first beep, a lone beep is played. Finally, 58 milliseconds after the second beep, a second nearly simultaneous beep-flash pair occurs, but with the flash appearing on the right side of the screen. The beep location is always central and does not move. Though only two flashes are shown, most people viewing the illusion perceive three flashes, with an illusory flash coinciding with the second beep and appearing to be located in the center of the screen. The fact that the illusory flash is perceived in between the left and right flashes is the key evidence that the brain is using postdictive processing. "When the final beep-flash pair is later presented, the brain assumes that it must have missed the flash associated with the unpaired beep and quite literally makes up the fact that there must have been a second flash that it missed," explains Stiles. "This already implies a postdictive mechanism at work. But even more importantly, the only way that you could perceive the shifted illusory flash would be if the information that comes later in time -- the final beep-flash combination -- is being used to reconstruct the most likely location of the illusory flash as well."
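The schedule is compact enough to sketch in code. Below is a minimal Python rendering of the stimulus timeline, assuming only the 58-millisecond gaps and flash positions described above; the Event structure and names are illustrative scaffolding, not the researchers' actual stimulus code:

```python
# Stimulus schedule for the Illusory Rabbit, as described in the article.
# The Event class and printout are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    t_ms: int             # onset time in milliseconds
    beep: bool            # beeps are always played from a central location
    flash: Optional[str]  # flash position on the screen, or None

ISI_MS = 58  # gap between successive beeps, per the article

schedule = [
    Event(t_ms=0,          beep=True, flash="left"),   # beep-flash pair 1
    Event(t_ms=ISI_MS,     beep=True, flash=None),     # lone beep, no flash
    Event(t_ms=2 * ISI_MS, beep=True, flash="right"),  # beep-flash pair 2
]

for e in schedule:
    parts = ["beep"] + ([f"flash@{e.flash}"] if e.flash else [])
    print(f"{e.t_ms:3d} ms: " + " + ".join(parts))

# Observers typically report a third, illusory flash in the center,
# time-locked to the lone beep -- the postdictive signature described above.
```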

Eye movements take edge off traumatic memories: Human study investigates neurobiology of widely used yet controversial psychotherapy technique -- ScienceDaily

Investigating the neurobiological mechanisms underlying EMDR in healthy men and women, Lycia de Voogd and colleagues found that both side-to-side eye movement and a working memory task independently deactivated the amygdala -- a brain region critical for fear learning. The researchers show in a second experiment that this deactivation enhanced extinction learning -- a cognitive behavioral technique that reduces the association between a stimulus and a fear response. The reduced amygdala activity is thought to reflect a drain on limited cognitive resources, which are instead dedicated to making the eye movements.

Past experiences shape what we see more than what we are looking at now -- ScienceDaily

Most past vision research, however, has been based on experiments wherein clear images were shown to subjects in perfect lighting, says He. The current study instead analyzed visual perception as subjects looked at black-and-white images degraded until they were difficult to recognize. Nineteen subjects were shown 33 such obscured "Mooney images" -- 17 of animals and 16 of human-made objects -- in a particular order. They viewed each obscured image six times, then a corresponding clear version once to achieve recognition, and then the obscured image six more times. Following the presentation of each obscured image, subjects were asked if they could name the object shown.

As the subjects sought to recognize the images, the researchers "took pictures" of their brains every two seconds using functional magnetic resonance imaging (fMRI). The technology lights up with increased blood flow, which is known to happen as brain cells are turned on during a specific task. The team's 7 Tesla scanner offered a more than three-fold improvement in resolution over past studies using standard 3 Tesla scanners, allowing extremely precise fMRI-based measurement of vision-related nerve-circuit activity patterns.

After seeing the clear version of each image, the study subjects were more than twice as likely to recognize the obscured version as they had been before seeing the clear version. They had been "forced" to use stored representations of clear images, called priors, to better recognize related, blurred versions, says He.
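The presentation protocol lends itself to a short sketch. The counts below (33 images, six obscured viewings before and after a single clear viewing) come from the text; the function and identifiers are a hypothetical reconstruction, not the authors' code:

```python
# Trial structure for one Mooney image, reconstructed from the description.
def trial_block(image_id):
    """Yield (image_id, version) tuples for one image's presentation block."""
    for _ in range(6):               # six obscured viewings before disambiguation
        yield (image_id, "obscured")
    yield (image_id, "clear")        # one clear viewing to achieve recognition
    for _ in range(6):               # six obscured viewings after disambiguation
        yield (image_id, "obscured")

N_IMAGES = 33  # 17 animals + 16 human-made objects, shown in a fixed order

for trial in trial_block(image_id=1):
    print(trial)  # after each obscured trial, subjects report if they can name it
```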

Humans rely more on 'inferred' visual objects than 'real' ones -- ScienceDaily

To make sense of the world, humans and animals need to combine information from multiple sources. This is usually done according to how reliable each piece of information is. For example, to know when to cross the street, we usually rely more on what we see than what we hear -- but this can change on a foggy day.

"In such situations with the blind spot, the brain 'fills in' the missing information from its surroundings, resulting in no apparent difference in what we see," says senior author Professor Peter König, from the University of Osnabrück's Institute of Cognitive Science. "While this fill-in is normally accurate enough, it is mostly unreliable because no actual information from the real world ever reaches the brain. We wanted to find out if we typically handle this filled-in information differently to real, direct sensory information, or whether we treat it as equal."

To do this, König and his team asked study participants to choose between two striped visual images, both of which were displayed to them using shutter glasses. Each image was displayed either partially inside or completely outside the visual blind spot. Both were perceived as identical and 'continuous' due to the filling-in effect, and participants were asked to select the image they thought represented the real, continuous stimulus.

"We thought people would either make their choice without preference, or with a preference towards the real stimulus, but exactly the opposite happened -- there was in fact a strong bias towards the filled-in stimulus inside the blind spot," says first author Benedikt Ehinger, researcher at the University of Osnabrück. "Additionally, in an explorative analysis of how long the participants took to make their choice, we saw that they were slightly quicker to choose this stimulus than the one outside the blind spot."

So why are subjects so keen on the blind-spot information when it is essentially the least reliable? The team's interpretation is that subjects compare the internal representation (or 'template') of a continuous stimulus against the incoming sensory input, resulting in an error signal that represents the mismatch. In the absence of real information, no deviation occurs, and therefore no error signal, or only a smaller one -- ultimately lending the filled-in stimulus higher credibility at the decision-making stage. This indicates that perceptual decision-making can rely more on inferred than on real information, even when the brain has some knowledge of the inferred image's reduced reliability. "In other words, the implicit knowledge that a filled-in stimulus is less reliable than an external one does not seem to be taken into account for perceptual decision-making," Ehinger explains.
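The team's error-signal interpretation can be illustrated with a toy model. In the sketch below, an assumption for illustration rather than the study's analysis, a percept is compared against an internal template, and the filled-in region contributes no mismatch because it is generated from the template itself:

```python
# Toy model of the template-mismatch account (an illustrative assumption,
# not the study's code). Smaller error -> higher credibility at decision time.
import numpy as np

rng = np.random.default_rng(0)
template = np.ones(100)  # internal template of a continuous striped stimulus

# Real stimulus outside the blind spot: every sample carries sensory noise.
outside = template + rng.normal(0.0, 0.1, size=100)

# Stimulus partially inside the blind spot: the occluded span is filled in
# from the template itself, so it matches the template exactly.
inside = template + rng.normal(0.0, 0.1, size=100)
inside[40:60] = template[40:60]

def error_signal(percept, template):
    """Summed squared mismatch between percept and internal template."""
    return float(np.sum((percept - template) ** 2))

print("error, outside blind spot:", error_signal(outside, template))
print("error, inside blind spot: ", error_signal(inside, template))
# The filled-in percept yields the smaller error signal, so it wins the
# comparison -- mirroring the observed bias toward the blind-spot stimulus.
```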

Brain 'rewires' itself to enhance other senses in blind people -- ScienceDaily

"Our results demonstrate that the structural and functional neuroplastic brain changes occurring as a result of early ocular blindness may be more widespread than initially thought," said lead author Corinna M. Bauer, Ph.D., a scientist at Schepens Eye Research Institute of Mass. Eye and Ear and an instructor of ophthalmology at Harvard Medical School. "We observed significant changes not only in the occipital cortex (where vision is processed), but also areas implicated in memory, language processing, and sensory motor functions." The researchers used MRI multimodal brain imaging techniques (specifically, diffusion-based and resting state imaging) to reveal these changes in a group of 12 subjects with early blindness (those born with or who have acquired profound blindness prior to the age of three), and they compared the scans to a group of 16 normally sighted subjects (all subjects were of the same age range). On the scans of those with early blindness, the team observed structural and functional connectivity changes, including evidence of enhanced connections, sending information back and forth between areas of the brain that they did not observe in the normally sighted group. These connections that appear to be unique in those with profound blindness suggest that the brain "rewires" itself in the absence of visual information to boost other senses. This is possible through the process of neuroplasticity, or the ability of our brains to naturally adapt to our experiences.

Blue-eyed humans have a single, common ancestor -- ScienceDaily

"Originally, we all had brown eyes," said Professor Hans Eiberg from the Department of Cellular and Molecular Medicine. "But a genetic mutation affecting the OCA2 gene in our chromosomes resulted in the creation of a "switch," which literally "turned off" the ability to produce brown eyes." The OCA2 gene codes for the so-called P protein, which is involved in the production of melanin, the pigment that gives colour to our hair, eyes and skin. The "switch," which is located in the gene adjacent to OCA2 does not, however, turn off the gene entirely, but rather limits its action to reducing the production of melanin in the iris -- effectively "diluting" brown eyes to blue. The switch's effect on OCA2 is very specific therefore. If the OCA2 gene had been completely destroyed or turned off, human beings would be without melanin in their hair, eyes or skin colour -- a condition known as albinism.

Vision, not limbs, led fish onto land 385 million years ago -- ScienceDaily

Neuroscientist and engineer Malcolm A. MacIver of Northwestern and evolutionary biologist and paleontologist Lars Schmitz of Claremont McKenna, Scripps and Pitzer colleges studied the fossil record and discovered that eyes nearly tripled in size before -- not after -- the water-to-land transition. The tripling coincided with a shift in location of the eyes from the side of the head to the top. The expanded visual range of seeing through air may have eventually led to larger brains in early terrestrial vertebrates and the ability to plan and not merely react, as fish do. "Why did we come up onto land 385 million years ago? We are the first to think that vision might have something to do with it," said MacIver, professor of biomedical engineering and of mechanical engineering in the McCormick School of Engineering. "We found a huge increase in visual capability in vertebrates just before the transition from water to land. Our hypothesis is that maybe it was seeing an unexploited cornucopia of food on land -- millipedes, centipedes, spiders and more -- that drove evolution to come up with limbs from fins," MacIver said. (Invertebrates came onto land 50 million years before our vertebrate ancestors made that transition.)

Fractal edges shown to be key to imagery seen in Rorschach inkblots -- ScienceDaily

"As you increase the D value, which makes for more visual complexity, the number of visual perceptions fall off," he said. "People see a lot more patterns in the simple ones." Inkblots with D values of 1.1 generate the highest numbers of perceived images, the team found. The team then put their findings to a human test, generating computerized fractal patterns with varying D values. When seen for 10 seconds by psychology undergraduate psychology students at the University of New South Wales in Australia, the same trend between D values and imagery surfaced. Fractal patterns are also found in the artwork of Jackson Pollock, whose abstract expressionist paintings captured Taylor's lifelong interest in childhood. Pollock's paintings from 1943 to 1952, Taylor has found, are composed of fractals with D values that increased from 1.1 to 1.7. That change was deliberate, Taylor said, as Pollock sought ways to reduce imagery figures seen in his earlier work.

Fractal edges shown to be key to imagery seen in Rorschach inkblots -- ScienceDaily

"These optical illusions seen in inkblots and sometimes in art are important for understanding the human visual system," said Taylor, who is director of the UO Materials Science Institute. "You learn important things from when our eyes get fooled. Fractal patterns in the inkblots are confusing the visual system. Why do you detect a bat or a butterfly when they were never there?"

Why the lights don't dim when we blink: Blinking prompts eye muscles to keep our vision in line -- ScienceDaily

"Our eye muscles are quite sluggish and imprecise, so the brain needs to constantly adapt its motor signals to make sure our eyes are pointing where they're supposed to," Maus said. "Our findings suggest that the brain gauges the difference in what we see before and after a blink, and commands the eye muscles to make the needed corrections." From a big-picture perspective, if we didn't possess this powerful oculomotor mechanism, particularly when blinking, our surroundings would appear shadowy, erratic and jittery, researchers said. "We perceive coherence and not transient blindness because the brain connects the dots for us," said study co-author David Whitney, a psychology professor at UC Berkeley. "Our brains do a lot of prediction to compensate for how we move around in the world," said co-author Patrick Cavanagh, a professor of psychological and brain sciences at Dartmouth College. "It's like a steadicam of the mind." A dozen healthy young adults participated in what Maus jokingly called "the most boring experiment ever." Study participants sat in a dark room for long periods staring at a dot on a screen while infrared cameras tracked their eye movements and eye blinks in real time. Every time they blinked, the dot was moved one centimeter to the right. While participants failed to notice the subtle shift, the brain's oculomotor system registered the movement and learned to reposition the line of vision squarely on the dot. After 30 or so blink-synchronized dot movements, participants' eyes adjusted during each blink and shifted automatically to the spot where they predicted the dot to be. "Even though participants did not consciously register that the dot had moved, their brains did, and adjusted with the corrective eye movement," Maus said. "These findings add to our understanding of how the brain constantly adapts to changes, commanding our muscles to correct for errors in our bodies' own hardware."

For Better Vision, Let the Sunshine In - The New York Times

Strong correlations were found between current eyesight and volunteers’ lifetime exposure to sunlight, above all UVB radiation (which is responsible for burning). Those who had gotten the most sun, particularly between the ages of 14 and 19, were about 25 percent less likely to have developed myopia by middle age. Exposure to sunlight up to the age of 30 also conferred a protective benefit.

Self-generated vision inputs suppressed

That's because the brain can tell if visual motion is self-generated, canceling out information that would otherwise make us feel -- and act -- as if the world was whirling around us. It's an astonishing bit of neural computation -- one that Maimon and his team are attempting to decode in fruit flies. And the results of their most recent investigations, published in Cell on January 5, provide fresh insights into how the brain processes visual information to control behavior. Each time you shift your gaze (and you do so several times a second), the brain sends a command to the eyes to move. But a copy of that command is issued internally to the brain's own visual system, as well. This allows the brain to predict that it is about to receive a flood of visual information resulting from the body's own movement -- and to compensate for it by suppressing or enhancing the activity of particular neurons.
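At its core, the efference-copy idea reduces to a subtraction. Below is a minimal sketch with made-up numbers and an assumed unity gain, not the fly-circuit model from the paper:

```python
# Efference-copy cancellation: the visual system subtracts the image motion
# it predicts from its own gaze command, so only external motion survives.
# Illustrative sketch; signs use rightward-positive degrees per second.
def perceived_motion(retinal_slip, eye_velocity, gain=1.0):
    """Image motion after cancelling self-generated retinal slip."""
    predicted_slip = -gain * eye_velocity  # moving right slips the image left
    return retinal_slip - predicted_slip

# Eyes sweep 10 deg/s rightward across a static world: the resulting -10 deg/s
# retinal slip is fully cancelled, and the world appears stable.
print(perceived_motion(retinal_slip=-10.0, eye_velocity=10.0))  # 0.0

# Same eye movement while the world itself drifts 3 deg/s rightward: only the
# external component survives the subtraction.
print(perceived_motion(retinal_slip=-7.0, eye_velocity=10.0))   # 3.0
```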

Brain on LSD revealed: First scans show how the drug affects the brain

Under normal conditions, information from our eyes is processed in a part of the brain at the back of the head called the visual cortex. However, when the volunteers took LSD, many additional brain areas -- not just the visual cortex -- contributed to visual processing. Dr Robin Carhart-Harris, from the Department of Medicine at Imperial, who led the research, explained: "We observed brain changes under LSD that suggested our volunteers were 'seeing with their eyes shut' -- albeit they were seeing things from their imagination rather than from the outside world. We saw that many more areas of the brain than normal were contributing to visual processing under LSD -- even though the volunteers' eyes were closed. Furthermore, the size of this effect correlated with volunteers' ratings of complex, dreamlike visions."

The eyes have it: Mutual gaze potentially a vital component in social interactions: Eye contact may be vital in establishing successful human connections -- ScienceDaily

Indeed, the researchers detected synchronization of eye-blinks, together with enhanced inter-brain synchronization in the IFG, in the pairs when eye contact was established. Compared with findings from previous studies, these outcomes show that synchronization of eye-blinks is not attributable to a common activity, but rather to mutual gaze. This indicates that mutual eye contact might be a crucial component for human face-to-face social interactions, given its potential to bind two individuals into a singular connected system.

Tufte: seeing is thinking

In some ways, seeing is thinking. The light comes in through the lens and is focused on the retina. And the retina is doing -- is pretty much working like brain cells. It's processing. And then the two optic nerves are sending what we now know are 20 megabits a second of information back to the brain. That's sure a lot better than my Wi-Fi at home. And so the seeing right then is being transformed into information, into thinking, right at that step from the retina to the brain. And the brain is really busy, and it likes to economize. And so it's quick to be active and jump to conclusions. So if you're told what to look for, you can't see anything else. So one thing is to see, in a way, without words. That avoids the confirmation bias, where, you know, once you have a point of view, all history will back you up. And that's the eye and brain busy economizing on those 20 megabits a second that are coming in.