
Title: PSYC102 - Professor Aron
Author: Ashley Min
Course: Sensory Neuroscience
Institution: University of California San Diego


Lecture 3.1 // January 24 (5)
● Focusing Images on the Retina
  ○ The cornea
    ■ Fixed; accounts for 80% of focusing
  ○ The lens
    ■ Adjusts shape for object distance; accounts for 20% of focusing
  ○ Accommodation: ciliary muscles cause the lens to change thickness
● Basic Optics (worked example below)
  ○ Fixed lens
    ■ Focal point of the lens depends on the distance of the light source/object
  ○ Shape of the lens affects the distance between the lens and the focal point
    ■ Wider, more curved lens → focal point closer to the lens (not toward the back)
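A worked illustration using the standard thin-lens relation (the equation itself is not in the notes; it is added here to show why accommodation is needed). For focal length f, object distance d_o, and image distance d_i:

    1/f = 1/d_o + 1/d_i

With f fixed, bringing an object closer (smaller d_o) pushes the image distance d_i farther back, i.e., behind the retina. Accommodation shortens f (a thicker, more curved lens), pulling the focal point forward onto the retina again.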







● Correction via Convex & Concave Lenses (see the worked illustration after this section)
  ○ Nearsightedness
    ■ Rays focus in front of the retina (focal length too short for the eye)
    ■ A CONCAVE lens )( corrects the myopic error
  ○ Farsightedness
    ■ Rays focus behind the retina (focal length too long for the eye)
    ■ A CONVEX lens () corrects the hypermetropic error
● Retinal Processing / Specialization
  ○ Distribution on the retina
    ■ The fovea consists solely of cones
    ■ The peripheral retina has both rods and cones
    ■ More rods than cones in the periphery
  ○ 120 million rods
    ■ Function in low light; even one photon can activate a rod
    ■ Support low-acuity vision in low light
    ■ Relatively sensitive to movement
  ○ 6 million cones
    ■ Concentrated in the fovea
    ■ Less sensitive to light (need 10s to 100s of photons for activation), but give good visual acuity
    ■ Daytime & high-precision vision
● Transduction of Light into Nerve Impulses
  ○ Receptors have outer segments, which contain:
    ■ Visual pigment molecules, which have two components
      ● 1. OPSIN: a large protein
      ● 2. RETINAL: a light-sensitive molecule
  ○ Visual transduction occurs when the retinal absorbs light
    ■ Retinal changes its shape: ISOMERIZATION
● Light Intensities
  ○ Range across 9 orders of magnitude
    ■ But in a given lighting condition, light ranges over only about 2 orders of magnitude
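A hedged illustration of the lens-correction bullets above, using the rule that thin-lens powers (in diopters) approximately add; the numerical value is made up for illustration:

    P_combined = P_eye + P_corrective_lens

A myopic eye focuses too strongly for its length, so rays converge in front of the retina; a concave lens has negative power (e.g., P = -2 D), which lowers P_combined and moves the focus back onto the retina. A hypermetropic eye focuses too weakly, so a convex lens with positive power moves the focus forward onto the retina.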



● Dynamic Range Adjustment
  ○ 3 mechanisms for light/dark adaptation
    ■ 1. Pupil: ranges in diameter → factor of 4 in diameter → factor of 16 in area → about one order of magnitude (see the arithmetic just below)
    ■ 2. Rod & cone systems (LECTURE 5, PG 23)
    ■ 3. Adaptation
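Checking the pupil arithmetic above (area scales with the square of the diameter):

    A_max / A_min = (d_max / d_min)^2 = 4^2 = 16 ≈ 10^1.2

So the pupil alone buys only about one order of magnitude, a small part of the roughly nine orders of magnitude of light levels we encounter; the rod/cone systems and adaptation cover the rest.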

Lecture 4.1 // January 31 (6) V1 and Object Perception
● Retina to Visual Cortex
  ○ Multiple visual pathways
    ■ Rods → M ganglion cells → Magno LGN → V1
    ■ Cones → P ganglion cells → Parvo LGN → V1
  ○ Distinct pathways project to distinct destinations in the LGN
    ■ 6 layers
      ● First two: RODS (magnocellular)
      ● Last four: CONES (parvocellular)
    ■ Ocular dominance layers
      ● Any given cell gets info from either rods or cones (magno or parvo) and from either the left eye or the right eye
      ● Almost perfect segregation
    ■ Retinotopic map in the LGN
      ● Light coming from adjacent parts of the world projects onto adjacent parts of the retina
      ● The image on the back of the retina is upside down
        ○ That map is then projected onto the LGN
      ● Each layer of the LGN preserves the spatial structure of the image you're looking at
  ○ Retinotopic Map on the Cortex
    ■ The cortex shows a retinotopic map too
      ● Electrode recordings from a cat's visual cortex show that receptive fields that overlap on the retina also overlap in the cortex
    ■ Receptive fields tile the visual field
    ■ Massive overrepresentation of the information coming from the fovea
      ● More cortical tissue is dedicated to representing the middle part of the image
      ● Cones are weighted most heavily (only two LGN layers serve the rod/periphery pathway)
      ● Visual acuity is determined not just by the lens, but also by the amount of tissue in visual cortex
    ■ Cortical magnification
      ● The fovea has more cortical space than expected
        ○ The fovea accounts for 1% of the retina
        ○ Signals from the fovea account for 8-10% of visual cortex
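A quick reading of those percentages (my arithmetic, using the figures above):

    (8-10% of cortex) / (1% of retina) ≈ 8-10x overrepresentation

That is, a patch of fovea is given roughly an order of magnitude more cortical tissue than an equally sized patch of retina would get under an even mapping, which is what "cortical magnification" refers to.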







        ○ Provides extra processing for high-acuity tasks
● Functional Properties of the Cells in V1 (visual cortex)
  ○ Ocular dominance columns
    ■ Neurons in primary visual cortex initially respond best to one eye
    ■ Neurons with the same eye preference are organized into columns
  ○ Center-surround RFs (retina/LGN) synapse onto the simple cells of primary visual cortex (see the sketch after this section)
    ■ The spatial arrangement of the specific ganglion cells that feed into a simple cell creates the receptive field of that simple cell
    ■ Orientation columns
● Beyond V1: Dorsal and Ventral Pathways
  ○ Dorsal → parietal lobe → "where" (landmark distinction)
  ○ Ventral → temporal lobe → "what" (object recognition)
  ○ Later, Milner & Goodale argued it's not "where" but "HOW"
    ■ Patient DF: damage to the ventral pathway
      ● Can't tell you the orientation of a slot, but can post a letter into it
    ■ So ventral stream = what
    ■ And dorsal stream = how
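A minimal sketch (my own illustration, not from the lecture; all sizes and sigmas are arbitrary) of the feedforward idea above: an oriented simple-cell receptive field built by summing center-surround (difference-of-Gaussians) subunits whose centers line up along one axis:

    import numpy as np

    def dog_subunit(size, cx, cy, sigma_c=1.0, sigma_s=2.0):
        """Center-surround (difference-of-Gaussians) receptive field centered at (cx, cy)."""
        y, x = np.mgrid[0:size, 0:size]
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        center = np.exp(-d2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
        surround = np.exp(-d2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
        return center - surround

    # Model simple-cell RF: center-surround subunits whose centers lie on a vertical line,
    # giving an elongated, orientation-selective receptive field.
    size = 21
    simple_cell_rf = sum(dog_subunit(size, cx=10, cy=cy) for cy in range(6, 15, 2))

    def simple_cell_response(image):
        """Rectified linear response of the model simple cell to an image patch."""
        return max(0.0, float(np.sum(simple_cell_rf * image)))

    # A vertical bar drives the cell much more than a horizontal bar.
    vertical_bar = np.zeros((size, size)); vertical_bar[:, 9:12] = 1.0
    horizontal_bar = np.zeros((size, size)); horizontal_bar[9:12, :] = 1.0
    print(simple_cell_response(vertical_bar), simple_cell_response(horizontal_bar))

Because the excitatory centers line up vertically, a vertical bar drives the model cell strongly while a horizontal bar does not, which is the orientation selectivity that distinguishes simple cells from their center-surround inputs.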

Lecture 4.2 // February 2 Object Perception
● Difficult to design a perceiving machine
  ○ The stimulus on the receptors is ambiguous
    ■ Inverse projection problem: an image on the retina can be caused by an infinite number of objects
      ● The retina is a 2-D surface, so how do we recover the 3-D world?
  ○ Objects can be hidden or blurred
    ■ Occlusion, segmentation
  ○ Different viewpoints produce very different retinal images
● Heuristics
  ○ Gestalt laws of perceptual organization
  ○ Figure-ground segregation
  ○ Building object perceptions from parts
● The Structuralist Approach
  ○ Wundt
    ■ States that perceptions are created by combining sensations
    ■ You experience only what you see
    ■ BUT couldn't explain illusions like "completion"
● The Gestalt Approach
  ○ The whole is different than the sum of its parts
    ■ Configuration or pattern
  ○ Organizing principles (NOT laws):
    ■ Good continuation
    ■ Proximity / similarity
    ■ Common fate
    ■ Common region
    ■ Uniform connectedness
    ■ Synchrony: things happening together are grouped together
● Figure-Ground Segregation
  ○ Determining what part of the environment is the figure so that it stands out from the background
    ■ The figure is more thinglike
    ■ Seen in front of the ground
    ■ The ground is more uniform
    ■ Symmetric things are more figurelike
  ○ Bigger firing rate for figure than for ground
● RBC (Recognition by Components)
  ○ Geons
    ■ Each geon is uniquely identifiable from most viewpoints
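A toy sketch (my own, not Biederman's actual algorithm; the object names and geon labels are invented) of the RBC idea: objects are stored as sets of geons plus relations, and recognition matches that structural description rather than the raw image, which is why it tolerates changes in viewpoint:

    # Each object is a "structural description": its geons and the relations between them.
    OBJECTS = {
        "mug":      {("cylinder", "body"), ("curved_tube", "attached_to_side")},
        "suitcase": {("brick", "body"), ("curved_tube", "attached_to_top")},
        "lamp":     {("cylinder", "body"), ("cone", "attached_to_top")},
    }

    def recognize(recovered_description):
        """Match a geon description recovered from the image against stored descriptions."""
        def overlap(name):
            stored = OBJECTS[name]
            return len(stored & recovered_description) / len(stored)
        return max(OBJECTS, key=overlap)

    # The same geons are recovered from most viewpoints, so the match succeeds
    # even though the retinal image differs across views.
    print(recognize({("cylinder", "body"), ("curved_tube", "attached_to_side")}))  # mug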

Lecture 5.1 // February 7
● How does the brain process information about objects?
  ○ Temporal lobe: IT cortex, ventral stream
    ■ Viewpoint invariance: a neuron responds to the same object despite changes in size/viewpoint
    ■ Not one-neuron-to-one-object, but an aggregate pattern / population response (see the sketch after this section)
      ● Encodes object shape
    ■ IT provides object structure
      ● V4 & MT provide details (color, motion)
● Organization of the ventral pathway
  ○ Groups of cells tend to respond to similar features… so they are not equally responsive to all features/objects
    ■ Specialized chunks of cortex that perform distinct perceptual functions = MODULES
      ● FFA: fusiform face area
        ○ The AM face patch is much more responsive to faces
        ○ Many different face patches
      ● PPA (parahippocampal place area): spatial navigation tasks
      ● EBA (extrastriate body area): full bodies and body parts
● Nature vs. Nurture? Face areas
  ○ Isabel Gauthier: functional expertise rather than a specific object type; we are experts at recognizing faces, so we see activity in these areas
    ■ Experiment with greebles: before greeble training, people have a much higher FFA response to faces; after training, the FFA response to greebles is almost equal to that for faces
    ■ More expertise → more activity in FFA
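A minimal sketch (my own illustration; the firing-rate vectors are invented) of the population-code point above: no single neuron labels the object, and identity is read out from the whole pattern of responses, here by picking the stored pattern most correlated with the observed one:

    import numpy as np

    # Hypothetical stored population response patterns (one firing-rate vector per object).
    stored_patterns = {
        "face":  np.array([0.9, 0.1, 0.7, 0.2, 0.8]),
        "house": np.array([0.2, 0.8, 0.1, 0.9, 0.3]),
        "chair": np.array([0.5, 0.4, 0.6, 0.3, 0.2]),
    }

    def decode_object(population_response):
        """Return the object whose stored pattern best correlates with the observed response."""
        best, best_r = None, -np.inf
        for name, pattern in stored_patterns.items():
            r = np.corrcoef(population_response, pattern)[0, 1]
            if r > best_r:
                best, best_r = name, r
        return best, best_r

    # A noisy version of the "face" pattern is still decoded as "face",
    # because the overall pattern, not any single neuron, carries the identity.
    noisy = np.array([0.85, 0.2, 0.65, 0.25, 0.75])
    print(decode_object(noisy))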

● How related is brain activity in IT to perceptual experience?







  ○ Logothetis & Sheinberg
    ■ Used binocular rivalry
    ■ A monkey was trained to pull one of two levers (one for a sunburst, one for a butterfly); both images are presented, but the monkey is aware of only one of them at a time
    ■ A neuron in IT cortex that responded only to the butterfly was monitored
      ● Its firing tracked which image the monkey reported perceiving
● Summary: Object Recognition
  ○ The brain uses basic Gestalt grouping principles to organize input
  ○ Computational theories like RBC can help explain how we build complex objects from simpler parts
  ○ Neural activity in ventral visual cortex tracks subjective perceptual experience
● Attentional Control & the Dorsal Visual Pathway
  ○ Attention is selective; it implies withdrawal of attention from other things (competitive in nature)
    ■ Can influence how fast you process information and how you search for relevant information
    ■ In V4, attention increases the firing rate of cells that respond to the attended location
      ● When you pay attention to something, the firing rate of the cells that encode that object increases
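A minimal sketch (illustrative only; the gain value of 1.5 is an arbitrary assumption) of attention acting as a multiplicative gain on V4 responses, as described above: the same stimulus drives a higher firing rate when its location is attended:

    def v4_response(stimulus_drive: float, attended: bool, gain: float = 1.5) -> float:
        """Toy gain model: attention multiplies the response to an otherwise identical stimulus.

        stimulus_drive: feedforward drive from the stimulus in the cell's receptive field
        attended: whether attention is directed to the location covered by this cell
        gain: attentional gain factor (1.5 is an arbitrary illustrative choice)
        """
        return stimulus_drive * (gain if attended else 1.0)

    print(v4_response(10.0, attended=False))  # 10.0 (baseline response)
    print(v4_response(10.0, attended=True))   # 15.0 (same stimulus, attended location)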

Lecture 5.2 // February 9
● Some sources of attentional control
  ○ FEF: frontal eye fields → cells that project down to the brainstem and control eye movements, controlling spatial orienting
  ○ Parietal cortex
    ■ SPL: superior parietal lobule and precuneus → role in the act of shifting attention
    ■ IPS: intraparietal sulcus → runs from the occipital lobe into the parietal lobe; contains a series of retinotopic maps, important for spatial representations (dorsal stream)





● Visual Neglect
  ○ Disorder in which individuals are unaware of events in the space opposite to their lesion
    ■ Typically right parietal cortex (left-sided neglect)
  ○ Cannot be attributed to the sensory systems themselves
  ○ Generally regarded as a selective lack of attention
● Input Neglect & Output Neglect
  ○ Input neglect: unable to perceive things in the left side of the visual field
  ○ Output neglect: failure to act toward or respond to the neglected side
  ○ Heavy head trauma → mostly results in some sort of neglect
● Extinction
  ○ Patients often recover from neglect after the acute injury
    ■ But may be left with a long-lasting, milder version
      ● "Extinction": failure to detect a stimulus contralateral to the lesion during simultaneous bilateral stimulation
  ○ Competition
    ■ Individuals will respond to a single event on the contralateral side
    ■ But they fail when a stimulus is presented simultaneously on the "good" side
● Simultanagnosia (Bálint's syndrome)
  ○ Bilateral damage to the parietal lobes
  ○ Complete inability to attend to more than one object/feature, regardless of the side of presentation
    ■ Even in the fovea
  ○ When presented with two spatially conjoint objects, will report only one
  ○ Failure of attention due to COMPETITION, NOT a failure of vision
● Conclusions:
  ○ Attention modulates the response of neurons in visual cortex (increases their response)
  ○ Subregions of parietal cortex play a key role in attentional control (different aspects of control)
  ○ Failures of attention can be dissociated from failures of perception

Motion Perception
● When motion perception fails (akinetopsia)
  ○ Can't drive or walk across the street
  ○ Hard time understanding people (lip-reading + social cues that depend on movement perception)
  ○ Anxiety attacks: people suddenly appear and disappear
  ○ Can't pour coffee :O
● Physiological basis of motion perception
  ○ Hubel and Wiesel discovered direction-selective cells in cat V1
    ■ Almost all neurons in area MT of the monkey are also direction selective
● Psychophysical evidence for motion detectors
  ○ Motion aftereffect
    ■ Motion aftereffects provide evidence for direction- and rotation-selective neurons
    ■ Exposure to the moving stimulus causes the most responsive neurons to adapt and stop responding after a while
● Motion detectors
  ○ Not simply a static receptive field, but a spatio-temporal receptive field
  ○ Respond to a particular direction of motion (preferred direction) and much less to other directions (and least to the NULL direction)
● Local to global motion perception
  ○ V1 receptive fields are very small; each cell sees motion only through its small window (local motion)
    ■ This gives rise to a fundamental problem in perception
      ● A single V1 cell cannot see the global movement of the object
    ■ There has to be something beyond V1 cells to help disambiguate the information
      ● → MT cells are also selective for direction, and inherit information from several V1 cells: vector averaging over a whole population of V1 cells (sketched below)
        ○ Global motion
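A toy sketch (my own illustration; the noise level and vectors are made up) of the vector-averaging idea above: each V1 cell reports only the local motion visible through its small window, and an MT-like stage estimates the global direction by averaging many of those local vectors:

    import numpy as np

    def mt_global_motion(local_vectors):
        """Estimate global (object) motion as the vector average of many local V1 estimates.

        local_vectors: array of shape (n, 2) of (dx, dy) measured in each V1 cell's small window.
        Returns the averaged (dx, dy) and its direction in degrees.
        """
        v = np.asarray(local_vectors, dtype=float)
        avg = v.mean(axis=0)
        direction_deg = np.degrees(np.arctan2(avg[1], avg[0]))
        return avg, direction_deg

    # Local measurements through small apertures are noisy and ambiguous,
    # but their average recovers the object's rightward-and-slightly-up motion.
    rng = np.random.default_rng(0)
    true_motion = np.array([1.0, 0.2])
    local = true_motion + rng.normal(scale=0.5, size=(200, 2))
    print(mt_global_motion(local))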

EXAM 3

Week 6.2 // February 16
● Area MT
  ○ Humans also have a motion-selective area called MT (sometimes MT+)
● Retinal vs. Real-World Motion: a Big Computational Challenge
  ○ If motion is perceived when an image moves on the retina, then why don't we see motion when we move our eyes from A to B?
    ■ Somehow, the visual system subtracts the motion caused by the eye movement itself from the perceived motion
      ● The brain takes into account that you are moving your eyes
● Corollary Discharge Theory
  ○ Movement perception depends on three signals:
    ■ 1. Image movement signal (IMS): movement of the image across the receptors on the retina
    ■ 2. Motor signal (MS): signal sent to the eye muscles to move the eyes (e.g., to track an object)
    ■ 3. Corollary discharge signal (CDS): a copy of the motor signal
  ○ When you send a command to move your eyes, a copy of the motor signal is sent to a "comparator"
  ○ The IMS is subtracted from the CDS to calculate the amount of "real" motion in the world (if CDS − IMS = 0, then the world is stationary and only your eyes are moving)
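A minimal sketch of the comparator logic just described, treating each signal as a signed number (rightward positive); the sign convention and function name are my own assumptions for illustration, following the notes' CDS − IMS formulation:

    def perceived_world_motion(ims: float, cds: float) -> float:
        """Toy comparator from corollary discharge theory.

        ims: image movement signal (motion of the image across the retina)
        cds: corollary discharge signal (copy of the motor command sent to the eyes)
        Returns the estimated "real" motion in the world; 0 means the world looks stationary.
        Sign convention (an assumption): rightward = positive for both signals.
        """
        return cds - ims

    # Scanning a stationary scene: eye command and retinal image motion cancel -> no perceived motion
    print(perceived_world_motion(ims=5.0, cds=5.0))   # 0.0
    # Tracking a moving object: eyes move (CDS) but the image stays put on the retina (IMS = 0)
    print(perceived_world_motion(ims=0.0, cds=5.0))   # 5.0 -> motion attributed to the object
    # Image moves with no motor command (e.g., passive displacement) -> illusory motion
    print(perceived_world_motion(ims=3.0, cds=0.0))   # -3.0 -> motion perceived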



  ○ When you TRACK an object that's actually moving rightward:
    ■ 1. A motor signal (MS) is initiated, which moves the eyes rightward
    ■ 2. A copy of this rightward signal is sent to the comparator (CDS)
    ■ 3. No image movement (IMS) occurs on the retina, because you're tracking the object, so it is STATIONARY ON THE RETINA… therefore IMS = 0
    ■ 4. The comparator subtracts the IMS (0) from the CDS; the mismatch tells the sensory system that there is movement
  ○ There are 4 ways to experience the effects of corollary discharge
    ■ Just by generating the CDS (the corollary discharge copy of the eye-movement command) you can cause the perception of motion even though there is absolutely no real motion in the world
    ■ Evidence for this comparator model
● Other kinds of motion stimuli
  ○ Point-light walker (biological motion): actions depicted solely by the kinematics of light points; body perception stripped down to its most bare and essential components
    ■ Enables careful control over the defining characteristics of biologically plausible vs. implausible motion
  ○ fMRI shows biological motion is processed in the superior temporal sulcus (STS), which is right next to MT (MT handles non-biological motion)
    ■ STS contains specific types of mechanisms for social information
      ● Receives projections from the DORSAL AND VENTRAL streams
● Physiological Evidence for Corollary Discharge Theory
  ○ Damage to the medial superior temporal area (MST) in humans leads to perception of movement of a stationary environment whenever the eyes move
  ○ True "movement"-selective neurons have been found in monkeys (area MST) that respond only when a stimulus moves and do not respond when the eyes move







  ○ MST is said to record "true movement": it reflects the stage of processing at or after the comparator, registering only things that are actually moving in the environment, NOT just movement across the retina
● Perception and Action
  ○ The ecological approach to perception
    ■ Approach developed by J.J. Gibson
      ● Lab studies are too artificial: observers were not allowed to move their heads
      ● Movement through the environment provides important cues that are not present when you remain still
      ● The environment provides rich information
      ● Gibson came up with the optic array: the structure created by the surfaces, textures, and contours in the environment
  ○ Optic flow: the apparent movement of objects as the observer moves past them (self-produced information)
    ■ Gradient of flow: the difference in flow as a function of distance from the observer
    ■ Focus of expansion: the point in the distance where there is no flow (see the sketch after this section)
    ■ The focus of expansion is always centered wherever you are heading and provides invariant information that remains constant while the observer is moving
  ○ To understand perception, you have to actually go out of the lab and do experiments there
● The impact of self-produced optic flow is strong
  ○ Lee & Aronson placed 13-16-month-old children in a "swinging room"
    ■ RESULTS: the children swayed back and forth in response to the flow patterns created by the room
● The Physiology of Navigation
  ○ MST neurons, like MT neurons, are sensitive to different types of motion
    ■ But in MT we see simple motions
    ■ In MST, we get more complex receptive fields
  ○ Causal role of MST in direction judgments
    ■ As monkeys did a heading-judgment task, microstimulation was used to stimulate MST neurons tuned to leftward flow
      ● Judgments were shifted in the direction preferred by the stimulated neurons
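A small sketch (my own, idealized illustration; the coordinates and depth term are assumptions) of why the focus of expansion marks your heading: for forward self-motion, flow radiates outward from the heading point and is zero exactly there, and it is weaker for more distant surfaces (the gradient of flow):

    import numpy as np

    def optic_flow(points_xy, heading_xy, speed=1.0, depth=1.0):
        """Idealized radial flow field for forward self-motion toward heading_xy.

        Flow points away from the focus of expansion (the heading point), is zero there,
        and is weaker for more distant surfaces (larger depth): the gradient of flow.
        """
        p = np.asarray(points_xy, dtype=float)
        return (speed / depth) * (p - np.asarray(heading_xy, dtype=float))

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
    print(optic_flow(pts, heading_xy=(0.0, 0.0)))
    # The point at the heading (0, 0) has zero flow: the focus of expansion.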

Week 7.1 // February 23
● Affordances: what are objects for?
  ○ Gibson believed affordances of objects are made up of information that indicates what an object is used for
    ■ They indicate a "potential for action" as part of our perception
  ○ People with certain types of brain damage show that even though they may not be able to name objects, they can still describe how the objects are used and can pick them up and use them
  ○ Experiment by di Pellegrino
    ■ Tested a woman with damage to the parietal lobe who showed extinction (neglect): the inability to direct attention to more than one thing at a time
    ■ Two cups were presented, one to the right and one to the left
      ● The left cup was not detected unless a handle was added
    ■ RESULTS: adding a "motor affordance" for grasping may activate different brain circuits that are specialized for action planning
      ● A specific feature that makes it more possible to interact with the object
  ○ Physiology of Reaching & Grasping (motor affordances)
    ■ Neurons in the parietal lobe that are silent when a monkey is not behaving respond when the monkey reaches to press a button to receive food
      ● This response only happened when the animal was reaching to achieve a goal
      ● Calton identified goal-directed neurons in the parietal reach region (PRR)
    ■ In the PRR
      ● Responses of neurons in the parietal lobe:
        ○ Visual-dominant neuron: responds best when a monkey looks at a button or pushes it in the light
        ○ Motor-dominant neuron: responds best when pushing the button in both light and dark
          ■ Does not respond to just looking at the button
      ● Neurons in the PRR respond BEFORE monkeys grasp an object and thus signal the intention to grasp
      ● Neurons from this region send signals directly to the premotor areas immediately anterior
    ■ Practical applications: neural prosthetics
      ● The goal is for people with spinal cord injuries to be able to move a computer mouse directly with their thoughts
● Size/Depth Perception
  ○ Cues for depth perception
    ■ Oculomotor: cues based on sensing the position of the eyes and muscle tension
    ■ Monocular: cues that rely on only one eye (e.g., pictorial cues)
    ■ Binocular: cues that rely on both eyes
  ○ Oculomotor cues (*** only really work well for closer distances)
    ■ 1. Convergence: sensing the inward movement of the eyes when we focus on nearby objects
    ■ 2. Accommodation: feedback from changing the focus of the lens
  ○ Monocular cues
    ■ 1. Pictorial cues: sources of depth information that come from 2-D images such as pictures

      ● Occlusion: when one object partially covers another, the covering object is closer
      ● Relative height: objects that are higher in the field of vision are more distant
      ● Relative size: when objects are equal in size, the closer one takes up more of your visual field
      ● Familiar size: distance information based on our knowledge of object size (we know how big things generally are)
      ● Perspective convergence: parallel lines appear to come together in the distance
      ● Atmospheric perspective: distant objects have lower contrast and blend into the background
      ● Texture gradient: equally spaced elements appear more closely packed as distance increases
      ● Shadows: can help indicate distance
    ■ 2. Movement-produced cues: caused by your movement through the environment
      ● Motion parallax: close objects in the direction of movement glide rapidly past, while objects in the distance appear to move slowly
      ● Deletion and accretion: objects are covered or uncovered as we move relative to them
        ○ Also called occlusion-in-motion
  ○ Binocular cues
    ■ We make vergence movements to keep an object fixated on the fovea of both eyes
      ● Either CONVERGE or DIVERGE
    ■ Binocular disparity: the difference between the images in the two eyes
      ● Each eye takes in light from a slightly different angle
    ■ The horopter: an imaginary circle that passes through the point of fixation; objects on the horopter fall on corresponding points in the two eyes (zero disparity)
      ● Aka the LINE OF ZERO DISPARITY



● Once you’re fixating, the relative positions of other locations on the two retinas can serve as a cue to depth ● For objects straight in front of you ○ If it’s in front of fixation: crossed disparity ○ Behind fixation: uncrossed disparity ● Inside...

