WK2 Readings Ch. 6 Learning & Ch. 8 Thinking

Course: Psychology Fundamentals
Institution: University of California Irvine



Description

Ch. 6, pp. 221-238, 239-263; Ch. 8, pp. 309-321

● Learning is a relatively enduring change in behavior, resulting from experience
  ○ Occurs when an animal benefits from experience so that its behavior is better adapted to the environment
● John Locke
  ○ Believed infants have a tabula rasa (blank slate); they are born knowing nothing → infants acquire all of their knowledge through sensory experiences
● John B. Watson stated that the environment and its associated effects on animals were the sole determinants of learning
  ○ Founder of behaviorism, which focuses on observable aspects of learning
● Three types of learning
  ○ Nonassociative
  ○ Associative
  ○ Observational

● 1. Nonassociative learning: responding after repeated exposure to a single stimulus or event
  ○ Ie, you hear a fire alarm go off and look toward where the alarm is

  ○ Habituation: a decrease in behavioral response after repeated exposure to a stimulus
    ■ If something is neither rewarding nor harmful, habituation leads us to ignore it

    ■ Ie, being habituated to the sound of a clock or computer fan
    ■ If the background noise suddenly stops, you are likely to immediately notice the change; the increase in a response because of a change in something familiar is dishabituation
    ■ Ex. birds stop chirping → they detect a predator → warns other animals of potential danger
  ○ Sensitization is an increase in behavioral response after exposure to a stimulus
    ■ Most likely threatening or painful stimuli
    ■ Ex. smelling something burning
    ■ Leads to heightened responsiveness to other stimuli
  ○ Alterations in the functioning of the synapse lead to habituation and sensitization
    ■ Reduction in neurotransmitter release → habituation
    ■ Increase in neurotransmitter release → sensitization
● 2. Associative learning: the linking of two events that, in general, take place one right after the other
  ○ Develops through conditioning, a process in which environmental stimuli and behavioral responses become connected
  ○ Ex. associate working with getting paid

  ○ Classical conditioning (Pavlovian conditioning): a type of associative learning in which a neutral stimulus comes to elicit a response when it is associated with a stimulus that already produces that response
    ■ Aka, you learn that one event predicts another
    ■ Pavlov's contribution came from studying the salivary reflex in dogs and neutral stimuli (ie, the bowl or the person delivering the food)
      ● A neutral stimulus unrelated to the salivary reflex, such as the clicking of a metronome, is presented along with a stimulus that reliably produces the reflex, such as food
        ○ The neutral stimulus can be anything the animal can see or hear, as long as it is not something usually associated with being fed
        ○ This conditioning trial is repeated a number of times → test trials
        ○ Test trial: the metronome is presented alone → salivation is produced
    ■ Classical conditioning terms
      ● Unconditioned response (UR): a response that does not have to be learned, such as a reflex
        ○ Ex. salivation after seeing food
      ● Unconditioned stimulus (US): a stimulus that elicits a response, such as a reflex, without any prior learning
        ○ Ex. food → salivation
      ● Conditioned stimulus (CS): a stimulus that elicits a response only after learning has taken place
        ○ Ex. clicking of metronome
      ● Conditioned response (CR): a response to a conditioned stimulus; a response that has been learned
        ○ Ex. increased salivation after hearing the clicking of a metronome

    ■ Conclusion: the dog was conditioned to associate the metronome with food
    ■ Pavlov concluded that the critical element in the acquisition of a learned association is that the stimuli occur together in time (contiguity)
    ■ The strongest conditioning occurs when there is a brief delay between the conditioned stimulus and the unconditioned stimulus
    ■ Once the CS has value, other stimuli may become associated with the CS only and can produce CRs
      ● Second-order conditioning: the CRs can be learned even without the learner ever associating the CS with the original US
    ■ Extinction: a process in which the conditioned response is weakened when the conditioned stimulus is repeated without the unconditioned stimulus
      ● Ie, if the metronome is presented many times and food does not arrive, the animal learns that the metronome is no longer a good predictor of food
      ● Extinction overrides the associative bond, but it does not eliminate that bond
    ■ Spontaneous recovery: a process in which a previously extinguished conditioned response reemerges after the presentation of the conditioned stimulus
      ● Temporary unless the CS is again paired with the US

    ■ Stimulus generalization: learning that occurs when stimuli that are similar but not identical to the conditioned stimulus produce the conditioned response
      ● Slight differences in variables, such as background noise or temperature, lead to slightly different perceptions of the CS → varied responses to the CS
    ■ Stimulus discrimination: a differentiation between two similar stimuli when only one of them is consistently associated with the unconditioned stimulus
      ● Ie, different shades of gray
    ■ Problem: Pavlov's account did not explain why the dog never salivated at the mere sight of the presenter, even when he was not delivering meat
      ● Contiguity (together in time) was not sufficient to create CS-US associations
    ■ Conditioned taste aversions (ie, when a particular food caused nausea) are easy to produce with food, but they are difficult to produce with light or sound
    ■ Martin Seligman argued that animals are genetically programmed to fear specific objects (biological preparedness)
      ● Explains why animals tend to fear potentially dangerous things (ie, heights) rather than objects that pose little threat (ie, flowers)
    ■ Robert Rescorla argued that for learning to take place, the conditioned stimulus must come before the unconditioned stimulus, thereby setting up an expectation for it
      ● Rescorla-Wagner model: a cognitive model of classical conditioning; it holds that the strength of the CS-US association is determined by the extent to which the unconditioned stimulus is unexpected (a small numeric sketch appears at the end of this classical-conditioning section)
        ○ The difference between the expected and actual outcomes is called prediction error
          ■ Something better than expected happens → positive (presence) prediction error, which strengthens the association between the CS and the US
          ■ Expected event doesn't happen → negative (absence) prediction error, which weakens the CS-US relationship
        ○ An animal will more easily associate a US with a novel stimulus (ie, metronome) than with a familiar stimulus (ie, whistle)
        ○ Once a conditioned stimulus is learned, it can prevent the acquisition of a new conditioned stimulus (blocking effect)
          ■ Occurs when a second CS is added to a conditioning trial with a previously learned CS
        ○ Schultz examined how dopamine neurons respond during conditioning
          ■ Monkeys received fruit juice (US) → positive prediction error, dopamine activity
          ■ A light or tone (CS) was paired with the juice
          ■ Once the association occurred, there was dopamine activity for the CS but none for the US
          ■ Monkeys learned that the light or tone predicted the juice, so the juice was no longer a surprise (less prediction error = less dopamine activity)
          ■ Findings support the idea that prediction error signals alert us to important events in the environment
    ■ Phobia: an acquired fear that is out of proportion to the real threat of an object or of a situation
      ● Ex. fear of heights, insects
      ● Fear conditioning: classically conditioning animals to fear neutral objects
      ● Watson taught Albert B. ("Little Albert") to fear neutral objects
        ○ As they presented the white rat and Albert reached for it, Watson banged a hammer into an iron bar, producing a loud clanging sound → Albert got scared by the sound
        ○ US (smashing sound) → UR (fear response)
        ○ CS (rat) + US (smashing sound) → UR (fear response)
        ○ After conditioning, the CS (rat) alone → CR (fear response)
        ○ Fear was generalized to other white objects (ie, a white Santa beard)
      ● Counterconditioning: when a person is suffering from a phobia, a clinician might expose the patient to small doses of the feared stimulus while having the client engage in an enjoyable task
    ■ When former heroin addicts are exposed to environmental cues associated with their drug use, such as people and places, they often experience cravings
      ● Tolerance: a process in which addicts need more and more of a drug to experience the same effects
        ○ Tolerance is greatest when the drug is taken in the same physical location as previous drug use occurred in
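Note (not from the readings): the Rescorla-Wagner idea above can be made concrete with a tiny numeric sketch. The chapter describes the model only qualitatively; the standard update rule used below (associative strength changes by a learning rate times the prediction error) and the parameter values are illustrative assumptions.

```python
# A minimal sketch (not from the readings) of the Rescorla-Wagner update:
# the associative strength V changes by alpha * beta * (lambda - V),
# where (lambda - V) is the prediction error described above.

def rescorla_wagner(trials, alpha=0.3, beta=1.0):
    """Return the CS-US associative strength V after each trial.

    trials: a list where 1.0 means the US occurred (food delivered)
            and 0.0 means the US was omitted.
    alpha, beta: illustrative learning-rate parameters (assumed values).
    """
    v = 0.0
    history = []
    for lam in trials:
        prediction_error = lam - v   # positive -> strengthen, negative -> weaken
        v += alpha * beta * prediction_error
        history.append(round(v, 3))
    return history

# Acquisition (10 CS-US pairings) followed by extinction (10 CS-alone trials):
# V climbs toward 1.0, then decays as negative prediction errors accumulate.
print(rescorla_wagner([1.0] * 10 + [0.0] * 10))
```

The blocking effect fits the same picture: once V for the first CS is near 1.0, the prediction error on compound trials is near zero, so little associative strength accrues to the added CS.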

  ○ Operant conditioning (instrumental conditioning): a learning process in which the consequences of an action determine the likelihood that it will be performed in the future
    ■ Thorndike and his Puzzle Box
      ● Used cats
      ● Law of effect: any behavior that leads to a "satisfying state of affairs" is likely to occur again
    ■ B.F. Skinner chose the word operant to express the idea that animals operate on their environments to produce effects
      ● Believed he could dramatically change an animal's behavior by providing incentives to the animal for performing particular acts
      ● Reinforcer: a stimulus that follows a response and increases the likelihood that the response will be repeated
      ● Believed that behavior occurs because it has been reinforced
      ● The Skinner Box
        ○ An animal, usually a rat or pigeon, is placed in the chamber or cage
        ○ The animal learns to press one lever or key to receive food, and the other lever or key to receive water
        ○ An operant chamber (Skinner box) allowed Skinner to expose rats or pigeons to repeated conditioning trials without having to do anything but observe
    ■ Shaping: a process of operant conditioning; it involves reinforcing behaviors that are increasingly similar to the desired behavior
      ● Ex. trying to teach a dog to roll over: reward the dog for any behavior that resembles rolling over, such as lying down; once this behavior is established, reinforce behaviors more selectively; reinforcing successive approximations eventually produces the desired behavior
    ■ Primary reinforcers: reinforcers that are necessary for survival, such as food or water, that satisfy biological needs
    ■ Secondary reinforcers: events or objects that serve as reinforcers but do not satisfy biological needs
      ● These reinforcers are established through classical conditioning, such as money (CS) paired with rewards such as food (US)
    ■ Premack theorized that a reinforcer's value depends on the amount of time a person, when free to do anything, willingly engages in a specific behavior associated with the reinforcer
      ● Ie, children prefer ice cream over spinach → ice cream is a more potent reinforcer than spinach
      ● A reinforcer's value can vary with context (ie, if you're full, the value of ice cream will drop)
      ● Premack principle: a more-valued activity can be used to reinforce the performance of a less-valued activity
        ○ Ie, eat spinach and then you'll get dessert
    ■ Positive reinforcement: the administration of a stimulus to increase the probability of a behavior's being repeated
      ● Positive = something added
      ● Ex. get chocolate for doing dishes
    ■ Negative reinforcement: the removal of an unpleasant stimulus to increase the probability of a behavior's being repeated
      ● Negative = removal of something
      ● Ex. take a pill to get rid of a headache
    ■ Continuous reinforcement: a type of learning in which behavior is reinforced each time it occurs
      ● Fast learning, but such learning does not last
      ● Seldom occurs in the real world
    ■ Partial reinforcement: a type of learning in which behavior is reinforced intermittently (a small illustrative sketch of the four schedules follows this section)
      ● Partial reinforcement's effect on conditioning depends on the reinforcement schedule
      ● A ratio schedule is based on the number of times the behavior occurs, as when a behavior is reinforced on every third or tenth occurrence
      ● An interval schedule is based on a specific unit of time
      ● Schedules can also be fixed or variable
      ● Fixed interval schedule (FI): reinforcement is provided after a certain amount of time has passed
        ○ Produces a scalloping pattern of responding
      ● Variable interval schedule (VI): reinforcement is provided after the passage of time, but the time is not regular
        ○ Cannot predict when it will happen
        ○ Ex. pop quizzes
      ● Fixed ratio schedule (FR): reinforcement is provided after a certain number of responses have been made
        ○ Ex. punch cards -- free pizza after you buy 10
      ● Variable ratio schedule (VR): reinforcement is provided after an unpredictable number of responses
        ○ Ex. games of chance at a casino
    ■ Partial-reinforcement extinction effect: the greater persistence of behavior under partial reinforcement than under continuous reinforcement
      ● When the behavior is reinforced only some of the time, the learner needs to repeat the behavior comparatively more times to detect the absence of reinforcement
      ● The less reinforcement during training, the greater the resistance to extinction
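Note (not from the readings): the four partial-reinforcement schedules above can be summarized as simple decision rules. The sketch below is illustrative only; the specific numbers (every 10th response, roughly one-minute intervals) are assumptions, not values from the chapter.

```python
import random

# Illustrative decision rules for the four schedules described above.
# The ratios and intervals are made-up example values.

def fixed_ratio(response_count, ratio=10):
    """FR: reinforce on every `ratio`-th response (ex. a punch card)."""
    return response_count > 0 and response_count % ratio == 0

def variable_ratio(ratio=10):
    """VR: reinforce after an unpredictable number of responses
    (ex. a casino game), on average once per `ratio` responses."""
    return random.random() < 1.0 / ratio

def fixed_interval(seconds_since_reward, interval=60.0):
    """FI: the first response made after `interval` seconds is reinforced."""
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward, mean_interval=60.0):
    """VI: reinforcement becomes available after an unpredictable delay
    (ex. a pop quiz); here the chance of reinforcement grows with elapsed time."""
    return random.random() < seconds_since_reward / (2.0 * mean_interval)

# Example: lever presses under a fixed-ratio 10 schedule are reinforced
# on presses 10, 20, and 30.
for press in range(1, 31):
    if fixed_ratio(press):
        print(f"press {press}: reinforced")
```

The partial-reinforcement extinction effect falls out of rules like these: under a variable schedule the learner cannot quickly tell that reinforcement has stopped, so the behavior persists longer than under continuous reinforcement.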

    ■ Punishment reduces the probability that a behavior will recur
      ● For punishment to be effective, it must be reasonable, unpleasant, and applied immediately so that the relationship between the unwanted behavior and the punishment is clear
      ● Can lead to negative emotions, such as fear and anxiety
        ○ Through classical conditioning, these emotions may become associated with the person who administers the punishment
        ○ If a child learns to fear a parent or teacher, the long-term relationship between child and adult may be damaged
      ● Punishment often fails to offset the reinforcing aspects of the undesired behavior → reinforcement should be used instead (ie, compliment a student for doing well rather than punishing them for doing poorly)
      ● Physical punishments, such as spanking → negative outcomes, including poor parent/child relationships, weaker moral values, mental health problems…
    ■ Positive punishment: the administration of a stimulus to decrease the probability of a behavior's recurring
      ● Usually an unpleasant stimulus
      ● Ex. getting a speeding ticket for speeding
    ■ Negative punishment: the removal of a stimulus to decrease the probability of a behavior's recurring
      ● Usually a pleasant stimulus
      ● Ex. losing driving privileges for speeding

    ■ Behavior modification: the use of operant-conditioning techniques to eliminate unwanted behaviors and replace them with desirable ones
      ● Rationale: most unwanted behaviors are learned and therefore can be unlearned
      ● Ex. token economies: people learn to perform tasks in exchange for tokens, which they can later trade for desirable objects or privileges
  ○ Conditioning is most effective when the association between the response and the reinforcement is similar to the animal's built-in predisposition
    ■ Animals have a hard time learning behaviors that run counter to their evolutionary adaptation (ie, a raccoon had a hard time putting a coin in a piggy bank)
  ○ Tolman argued that reinforcement has more impact on performance than on learning
    ■ Cognitive map: a visual/spatial mental representation of an environment
    ■ Rats were divided into three groups and ran mazes
      ● One group of rats was put through trials running a maze with a goal box that never had any food reward as reinforcement
      ● A second group of rats was put through trials in a maze with a goal box that always had food reinforcement
      ● A third group of rats was put through trials in a maze that had food reinforcement only after the first 10 trials
      ● Results: rats that were regularly reinforced for correctly running through the maze showed improved performance over time compared with rats that did not receive reinforcement
      ● Rats may learn a path through a maze but not reveal their learning
      ● They do not reveal the learning because the maze-running behavior has not been reinforced
      ● Learning of this kind is called latent learning (learning that takes place in the absence of reinforcement)
    ■ Insight learning: a solution suddenly emerges after either a period of inaction or contemplation of the problem
  ○ In operant conditioning, dopamine release sets the value of a reinforcer
    ■ Drugs that block dopamine's effects disrupt operant conditioning
● 3. Observational learning (social learning): acquiring or changing a behavior after exposure to another individual performing that behavior

  ○ Ex. learning how to dance by watching a YouTube video
  ○ Bandura's observational studies
    ■ Two groups of preschool children were shown a film of an adult playing with a large inflatable doll called Bobo
    ■ One group saw the adult play quietly with the doll
    ■ The other group saw the adult attack the doll
    ■ When the children were later allowed to play with a number of toys, including the Bobo doll, those who had seen the more aggressive display were more than twice as likely to act aggressively toward the doll
    ■ Results suggest that exposing children to violence may encourage them to act aggressively
  ○ Modeling: the imitation of observed behavior

    ■ Generally, we are more likely to imitate the actions of models who are attractive, have high status, and are somewhat similar to ourselves
    ■ Modeling is effective only if the observer is physically capable of imitating the behavior
    ■ Is the model reinforced for performing the behavior?
      ● In a similar study done by Bandura, children who saw the model being punished were less likely to be aggressive than those in the control group
      ● Vicarious learning: learning the consequences of an action by watching others being rewarded or punished for performing the action
  ○ Media violence has been found to increase the likelihood of short-term and long-term aggressive and violent behavior
  ○ Fear can be learned through observation
    ■ Mineka's study with monkeys
      ● Two groups of monkeys
      ● One group was reared in the lab, and one group was reared in the wild (feared snakes)
      ● To obtain food, the monkeys had to reach beyond a clear box that contained either a snake or a neutral object
      ● When a snake was in the box, the wild-reared monkeys did not touch the food and showed signs of distress
      ● The laboratory-raised monkeys reached past the box even if it contained a snake and showed no sign of fear
      ● The researchers then showed the laboratory-raised monkeys the wild monkeys' fearful response → the lab-reared monkeys quickly developed a fear of snakes
    ■ The social learning of fear likely relies on the amygdala
  ○ Mirror neurons: neurons in the brain that are activated when one observes another individual engage in an action and when one performs a similar action
    ■ Ie, activated when you see someone reach for a glass of water or when you reach for one yourself
    ■ May help us learn what other people are thinking
    ■ Possible neural basis for empathy

Ch. 8

● Cognition: the mental activity that includes thinking and the understandings that result from thinking
● Cognitive psychology was originally based on two ideas:
  ○ 1) Knowledge about the world is stored in the brain in representations

  ○ 2) Thinking: the mental manipulation of representations of knowledge about the world
● Types of mental representations
  ○ Analogical representations: mental representations that have some of the physical characteristics of objects; they are analogous to the objects
    ■ Usually images
    ■ Ex. maps are analogical representations of geographical layouts
    ■ "This is to that as that is to…"
    ■ Ex. family trees depict relationships between relatives
  ○ Symbolic representations: abstract mental representations that do not correspond to the physical features of objects or ideas
    ■ Usually words, numbers, or ideas
    ■ Ex. the word violin stands for a musical instrument
  ○ Mental maps rely on both analogical and symbolic representations
● Grouping things based on shared properties is called categorization
  ○ Reduces the amount of knowledge we must hold in memory and is therefore an efficient way of thinking
  ○ Concept: a category, or class, of related items; it consists of mental representations of those items
    ■ Ex. musical instruments
  ○ Prototype model: a way of thinking about concepts; within each category, there is a best example (a prototype) for that category
    ■ You average all members of a particular category to arrive at the prototype
  ○ Exemplar model: a way of thinking about concepts; all members of a category are examples (exemplars); together they form the concept and determine category membership (a small sketch contrasting the two models follows this list)
    ■ Ex. your ...
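Note (not from the readings): the contrast between the prototype and exemplar models above can be shown with a toy categorization sketch. The feature values and the "bird" examples below are invented for illustration.

```python
# Tiny sketch contrasting the prototype and exemplar models of categorization.
# Items are feature vectors (size, flies: 1 = yes, 0 = no); the numbers are
# invented for illustration only.

def distance(a, b):
    """Euclidean distance between two feature vectors (smaller = more similar)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

birds = {"robin": (0.2, 1.0), "sparrow": (0.1, 1.0), "penguin": (0.6, 0.0)}

# Prototype model: average all members of the category into one "best example".
prototype = tuple(sum(values) / len(birds) for values in zip(*birds.values()))

new_item = (0.55, 0.0)  # a large, flightless candidate item

# Prototype rule: compare the new item to the single averaged representation.
print("distance to prototype:", round(distance(new_item, prototype), 3))

# Exemplar rule: compare the new item to every stored member; the closest
# exemplar (here the penguin) decides how bird-like the item seems.
closest = min(birds, key=lambda name: distance(new_item, birds[name]))
print("closest exemplar:", closest)
```

Because the exemplar rule keeps every stored member, an atypical item such as a penguin still counts toward category membership, whereas the single averaged prototype treats it as a poor fit; this is one reason the exemplar model handles atypical category members more naturally.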

