343 Exam 3 - study guide from unit 3
Course: Conditioning & Learning
Institution: Southeastern Louisiana University
Test 3 Review (11.2.20): Stimulus Control & Motivating Operations

● Define discriminative stimuli, in general. Explain how they are established and come to exert stimulus control over behavior.
    ○ In general, discriminative stimuli tell us when and when not to respond.
    ○ 1) They are stimuli differentially correlated with some kind of consequence (reinforcement, punishment, or extinction), and 2) because of that correlation, they exert an effect, i.e., stimulus control, over behavior.
    ○ How are they established? Discriminative stimuli are established by becoming correlated with the availability of consequences (the stimulus is presented ONLY when reinforcement will be produced). The stimulus should be correlated not with food itself, but with the availability of food.
        ■ A rat lever-presses when a red light turns on, because the red light signals the availability of food.
        ■ Discriminative stimuli alter the likelihood of behavior when they are presented.

● Differentiate between SDs for reinforcement, SDs for punishment, and S-deltas, and be able to provide an example of each.
    ○ SD for reinforcement = 1) a stimulus differentially correlated with the availability of reinforcement; 2) as a result, the behavior that has produced the reinforcer becomes more likely to occur.
        ■ EX: Krispy Kreme has a neon sign that turns on when there are hot donuts. The sign is an SD for reinforcement because 1) it signals the availability of hot donuts, and 2) you engage in the behavior and buy a donut.
        ■ EX: Every time a red light comes on, the pigeon pecks, because the light signals that food is coming.
    ○ SD for punishment = a stimulus differentially correlated with the likelihood of punishment; as a result, when the stimulus is presented, we do not engage in the behavior.
        ■ EX: Key pecking when the red light is on is reinforced, because food is presented each time the red light is on. When the green light is on, a shock is presented. Pecking then becomes less likely, and the pigeon stops pecking in the presence of the green light.
    ○ SD for extinction / S-delta = a stimulus differentially correlated with extinction. The response is NOT going to produce reinforcement, and as a result, it stops.
        ■ EX: Pecking the red key produces reinforcement, but when the green stimulus turns on, nothing happens, so extinction occurs. Over time the pigeon learns NOT to respond when the green light is on (only the red).
        ■ EX: The Krispy Kreme "open" sign turns off, meaning the store is closed and stopping will not produce reinforcement (donuts), so you do not stop by.

● Describe the role of SDs in behavior chains.
    ○ Each step in the chain is an SD for reinforcement with respect to the terminal consequence: completing a step brings us closer to our goal, and the reinforcer is closer in time (each SD gets us closer to the terminal reinforcer/end result).
        ■ EX: Putting on a jacket. Putting one arm in serves as the SD for the next step.

● Describe stimulus discrimination, generalization, and peak shift, and provide examples related to concept learning.
    ○ Stimulus discrimination = when stimuli are different enough from the stimulus that has been established as an SD, they do not set the occasion for responding. If you present something other than the true SD, the response does NOT occur.
        ■ EX: A kid learns to say "tree" when a picture of a tree is presented. Later you present a picture of a chair, and the kid does NOT say "tree," because they can discriminate between the two.
    ○ Stimulus generalization = a stimulus similar to the SD for reinforcement has a similar effect on behavior.
        ■ EX: You show the kid one picture of an oak tree and another of a pine tree, and the response to both is still "tree."
    ○ Peak shift = if you establish an S-delta on only one side of the SD, the peak of the generalization gradient shifts away from the S-delta.
        ■ EX: I get a kid to say "red" repeatedly when presenting a red card. Then we assess generalization with yellow, orange, purple, and blue cards; as the cards get more and more different from red, the response "red" becomes less likely.
        ■ EX: Presenting the red card always produces reinforcement. Now you establish orange as an S-delta (every time orange is presented, nothing happens; no reinforcement). As a result, the peak shifts away from orange, so the peak of the generalization gradient has shifted.
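The peak-shift idea above can be sketched numerically. This is a minimal illustration with made-up response counts (the gradient shapes, not the particular numbers, are the point): after discrimination training with an S-delta on one side of the SD, the peak of responding moves away from the S-delta side.

```python
# Hypothetical generalization gradients (response counts, arbitrary units)
# along a wavelength dimension, illustrating peak shift.
wavelengths = [540, 550, 560, 570, 580, 590, 600]  # SD trained at 570

# Training with the SD alone: peak responding sits at the trained value.
sd_only = [5, 20, 60, 100, 60, 20, 5]

# Discrimination training with SD = 570 and S-delta = 590: the peak
# shifts to 560, away from the S-delta side of the dimension.
sd_and_sdelta = [10, 55, 100, 80, 20, 2, 0]

peak_alone = wavelengths[sd_only.index(max(sd_only))]
peak_shifted = wavelengths[sd_and_sdelta.index(max(sd_and_sdelta))]
print(peak_alone, peak_shifted)  # 570 560
```

Note that responding at the trained SD (570) is still high after discrimination training; the maximum has simply moved away from the S-delta.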

● What is behavioral contrast?
    ○ A change in responding to one stimulus that occurs after a change in the reinforcement schedule correlated with another stimulus (the two schedules are presented in alternation).
        ■ EX: The same level of responding is occurring in both components; then extinction is implemented for the green light, and responding to the red light increases even though its schedule is unchanged.
    ○ There is positive and negative contrast; both describe what happens in the unchanged component.
        ■ Positive contrast = responding is reduced (e.g., through extinction) in one component, and responding in the unchanged component increases.
        ■ Negative contrast = responding in the unchanged component decreases.
    ○ Behavioral contrast is only possible through stimulus control:
        ■ Two stimuli are correlated with two different schedules.
        ■ The two stimuli are presented in alternation.
            ● EX: Red vs. green light. Change the green light from reinforcement to extinction: responding to green goes down, but responding to red goes up.
    ○ Changes in one context can affect behavior in another.
        ■ Two contexts, green and red: a change in one of them affects how behavior looks in the other.



● Describe the predictions of absolute and relative stimulus control and their significance.
    ○ Absolute stimulus control = only the stimulus trained/established with a reinforcer serves as an SD (the animal learns individual stimuli). The closer a test stimulus is to the trained SD, the more responding.
    ○ Relative stimulus control = the animal responds to the relationship between two stimuli; it is a comparison between the two.
        ■ Picking the bigger one, the lighter one.
        ■ EX: A chicken learns to peck a medium grey square (SD+) and NOT a dark grey square (SΔ). Now present a medium grey square and a light grey square:
            ● Absolute prediction: it will peck the medium grey square (it is closer to the trained SD+).
            ● Relative prediction: it will peck the lighter grey square (it was taught to peck the lighter of the two squares).



● Differentiate between simple and conditional discriminations.
    ○ Simple discrimination = the primer.
        ■ One response is always reinforced; the other is not.
            ● EX: A red square and a green square move around a computer screen. All you are instructed to do is press the red square, and that's it; only that one response is reinforced.
    ○ Conditional discrimination (the most used):
        ■ The response that is reinforced depends on which SD is present.
        ■ Multiple responses are available: one stimulus is discriminative for one response, and another stimulus is discriminative for another response.
            ● EX: The same computer screen with a green square and a red square, but now a word at the top tells you which color to click (when the word says "red," touching red is reinforced; same with "green"). The response that is reinforced depends on some stimulus in the environment, i.e., which SD is present.
            ● EX: The ability to identify things: "What is her name vs. your name?" If you tell me the wrong one, I'll say "you're wrong."

● Define motivating operations, differentiate between the two types, and provide an example of each.
    ○ MO = stimulus changes that alter the reinforcing value (efficacy) of a particular reinforcer and, as a result, alter the occurrence of behaviors that have produced that consequence in the past.
    ○ What do motivating operations do? 1) They make consequences/reinforcers more or less valuable; 2) as a result, they make behaviors that have produced those reinforcers more or less likely to occur; 3) additionally, they alter the salience of discriminative stimuli (SDs) for those consequences.
    ○ Two types of MO:
        ■ Establishing operations = make a stimulus more reinforcing, make behavior that has produced that reinforcer more likely, and make SDs correlated with that reinforcer more salient.
            ● Behavior becomes more likely.
            ● EX: The longer I go without water, the more reinforcing water becomes.
        ■ Abolishing operations = make a stimulus less reinforcing, make behavior that has produced that reinforcer less likely, and make SDs correlated with that reinforcer less salient.
            ● EX: If I chugged a gallon of water before class, water is less reinforcing, and I am less likely to pick up my bottle during class.

43:31 Imitation, Motor Learning, and Verbal Behavior

● We discussed 5 points that are important to remember when providing an operant account of complex human behavior; describe each of them.
    ○ 1) Reinforcement affects operant classes.
        ■ Reinforcement makes all members of the class that have the same function more likely.
    ○ 2) Reinforcement can take many forms other than arbitrary/contrived consequences.
        ■ Contrived consequences = like getting a gold star for rolling over successfully for the first time (placed by someone else).
        ■ Reinforcers can be naturally occurring too.
    ○ 3) You can't account for (or dismiss) reinforcement based on observation of an already-learned skill; you have to observe how the learning came about.
        ■ How learned skills (e.g., perfectly tuned musical performances) developed.
        ■ EX: Some critics of operant conditioning say that chains of motor responses, like pressing piano keys or typing letters on a keyboard, happen too quickly to be explained by operant conditioning. What characteristic of operant conditioning are they most clearly failing to consider?
            ● You can't dismiss the effects of reinforcement based on current performance; you need to observe the development of that performance.
    ○ 4) Stimuli similar to discriminative stimuli exert similar effects on behavior.
        ■ Stimulus generalization.
    ○ 5) Verbal behavior is no different from any other type of behavior. It can establish stimuli as reinforcing or discriminative.
        ■ How people talk to themselves; what people say. No different from any other type of behavior.
        ■ EX: A new board game says you need yellow tokens to win, so the tokens become reinforcers even though you have never won the game; you know you need them to win.
        ■ Verbal behavior is very powerful in psychology.
    ○ Also consider the history of reinforcement (how the behavior got there).
    ○ Consider naturally occurring reinforcers, things that aren't arbitrary like "good job."
    ○ Consider intermittent reinforcement: just because reinforcement doesn't follow every response doesn't mean it isn't at work.
    ○ Rules can alter the function of things in our environment (SD+, SD-), establishing them as reinforcers or punishers.
    ○ Remember stimulus generalization.

● Compare and contrast the three theories of imitation that we discussed.
    ○ Innate theory = humans just imitate, and that's it.
        ■ Limited, because it doesn't tell us what makes imitation more likely.
    ○ Social learning / cognitive theory of imitation:
        ■ Encoding, rehearsal, and storage of the observed responses occur, and then we develop expectations about the consequences of those responses.
    ○ Operant account = the model's behavior serves as an SD for your behavior of doing something similar, and the consequences can be contrived ("good job") or more natural (you saw your brother get to the cookie jar and now you can too).

● 46:30 Provide an operant account of imitation in a contrived-learning context and in a natural-contingency context.
    ○ Operant account of imitation = imitative behavior that is controlled by its consequences.
        ■ Contrived learning context = getting an A on your test and a "good job" from the teacher, and now you will work hard to get another A (the consequence is placed by someone).
        ■ Natural-contingency context = being able to throw a ball farther by watching your older brother do it (the consequences occur naturally); there is an MO in place.
            ● Naturally occurring example: the SD+ is a sister turning on the bathtub faucet; through stimulus generalization, her brother can now also turn on the sink faucet. His hands are sticky, which is aversive, and whatever removes the stickiness (washing his hands in the sink) is reinforcing.
            ● Hands being sticky is aversive; washing the hands is the reinforcer.

● How can operant conditioning account for imitation of novel behaviors? Describe how you could demonstrate this.
    ○ True/complex imitation = consider stimulus generalization and how you can manipulate the history of consequences.
    ○ One skill is taught and reinforced with "great job, you did it," and then similar behaviors that were never directly reinforced start to occur because of imitation.
        ■ EX: You learn to do a basic puzzle and get reinforced with a "good job"; then, through generalization, you imitate what you did on more complex puzzles (3D ones or ones with more pieces).
    ○ Demonstration: program in imitation of many different model behaviors; after a certain point, a more generalized ability to imitate emerges, extending even to completely novel behaviors.



● Provide an operant account of simple motor learning in an infant.
    ○ Learning to reach for objects: an infant is learning to turn over. The baby is in a playpen with lots of toys and, out of the corner of their eye, sees a little toy flashing and making noise but can't really see it. Those flashes of light in the corner of their eye are an SD for reinforcement for turning over and reaching for the toy. The closer the toy, the more reinforcing it is.
    ○ Reaching for a book, grabbing it, and bringing it closer is reinforcing, and the sight of the book out of reach is an SD for reinforcement.

● 1) What effect did knowledge of results have in experiments using Thorndike's line-drawing task? 2) How may these effects be explained from an operant perspective?
    ○ How does performance differ when we tell people "right" or "wrong" versus giving more quantitative/specific feedback, like "that was off by +5," or how short or long the line was relative to the target? Quantitative feedback is better.
    ○ Knowledge of results = knowing whether or not you did something correctly.
        ■ You know the basketball went through the hoop.
    ○ 1) In some tasks, knowledge of results is important for learning; giving feedback allows more learning, more quickly. Quantitative feedback helps more.
    ○ 2) Operant perspective: consider qualitative feedback (being told right or wrong) to be equivalent to reinforcement and punishment, and quantitative feedback to be something different/distinct.
        ■ If you are told "good job" (positive reinforcement), you will keep doing the task or perfect it.
        ■ If you are told "you are wrong" (positive punishment), you will stop doing the task or change the way you do it.
        ■ Given more quantitative feedback (+1, -2), people learn more quickly. Quantitative feedback can be established as discriminative (you have to know what the person means by "+1") through a history of reinforcement, which makes that kind of feedback more effective than simply being told you performed well.
        ■ Quantitative feedback can act like a rule: it sets the criterion for reinforcement (here is the response that will be reinforced, and here is how far your response was off from it).

● Provide a definition of verbal behavior and give an example illustrating interlocking speaker-listener contingencies.
    ○ Verbal behavior = changing my environment by changing someone else's behavior.
        ■ EX: My request "Does anybody have gum?" is an SD for someone else's behavior of saying "Yeah, I have some gum" and tossing it over to me. My behavior is reinforced by receiving the gum, and I say "thank you," which reinforces their act of giving me the gum.
        ■ EX: You are driving by Krispy Kreme, see the sign, and tell the driver, "Hey, do you mind pulling over?" The reinforcement might be that they do pull over; for the listener, "Hey, can you pull over?" is an SD.

● How do rules affect behavior? Illustrate using an example.
    ○ Rules can bring about behavior change without the listener ever having engaged in the response.
    ○ Rules are contingency-specifying stimuli: they state the antecedent, the behavior, and the consequence it will produce.
        ■ EX: A class is outside and the teacher says, "If you see our guest arrive and everyone lines up quickly and quietly, I will extend recess." This statement establishes the person arriving as a signal for the availability of reinforcement for the response (lining up quickly and quietly), not as the actual extension of recess.
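The three-term structure a rule specifies can be made explicit in a tiny sketch. This is just a hypothetical representation of the recess example, not anything from the lecture, to show how a rule names antecedent, behavior, and consequence:

```python
# A rule as a contingency-specifying stimulus: it names an antecedent,
# a behavior, and the consequence that behavior will produce.
rule = {
    "antecedent": "the guest arrives (signals reinforcement is available)",
    "behavior": "everyone lines up quickly and quietly",
    "consequence": "recess is extended",
}

statement = (f"If {rule['antecedent']}, and {rule['behavior']}, "
             f"then {rule['consequence']}.")
print(statement)
```

The rule can change behavior the first time the antecedent occurs, before the listener has ever contacted the consequence directly.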

Choice

● Describe an operant experiment on choice and how this may "map on" to complex human behavior.
    ○ The experiment is an analog for complex human behavior.
    ○ Choosing one lever over another:
        ■ Have two concurrently available operants and manipulate the consequences for at least one of those responses (how likely reinforcement is).
            ● When pressing Lever A, the rat gets more food than on the other lever, so the rat presses Lever A more because it is more reinforcing (we manipulated the consequences for the response).
    ○ Second part of the question (mapping onto human behavior):
        ■ Noticing how many people nod or seem to understand one example over another across time: you will use the example that brings more reinforcement (more nodding).
        ■ Two servers in a cafeteria (two concurrently available operants): you know one will serve more food on a plate, so you go to that server.

● Describe the basic/strict matching equation, then describe deviations from that prediction and how the generalized matching equation accounts for them (how do they affect choice?).
    ○ Strict matching equation = the proportion of behavior allocated to an alternative is equal to the proportion of reinforcement obtained on that alternative; the relative likelihood of behavior equals the relative likelihood of reinforcement (behavior changes perfectly with the rate of reinforcement, every time).
        ■ Matching: the relative rate of reinforcement equals the relative rate of behavior.
    ○ Deviations:
        ■ Bias = something about one alternative makes responding to it more likely.
            ● What that something is can't be identified from the schedules; it just is (e.g., the left side is preferred over the right side for some reason).
        ■ Sensitivity = how much a given change in the rate of reinforcement affects behavior, i.e., how much behavior changes with the rate of reinforcement.
            ● Undermatching = behavior changes less than the rate of reinforcement.
            ● Overmatching = behavior changes more than the rate of reinforcement (hypersensitivity to changes).
            ● Strict/basic matching = for every change in reinforcement, you get an exactly equal change in behavior.
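The strict and generalized matching equations above can be written out and toyed with. In ratio form the generalized matching law is B1/B2 = b · (R1/R2)^s, where B is response rate, R is reinforcement rate, s is sensitivity, and b is bias; s = b = 1 reduces it to strict matching. A minimal sketch (the reinforcement rates here are made up for illustration):

```python
def behavior_ratio(r1, r2, sensitivity=1.0, bias=1.0):
    """Predicted response ratio B1/B2 given reinforcement rates r1, r2.

    Generalized matching law: B1/B2 = bias * (r1/r2) ** sensitivity.
    sensitivity = bias = 1 gives strict matching.
    """
    return bias * (r1 / r2) ** sensitivity

# Strict matching: 60 vs 20 reinforcers/hr -> respond 3x as much on alt 1.
print(behavior_ratio(60, 20))                   # 3.0
# Undermatching (s < 1): behavior changes less than reinforcement.
print(behavior_ratio(60, 20, sensitivity=0.8))  # ~2.41
# Overmatching (s > 1): behavior changes more than reinforcement.
print(behavior_ratio(60, 20, sensitivity=1.2))  # ~3.74
# Bias: equal reinforcement, but alternative 1 is preferred anyway.
print(behavior_ratio(20, 20, bias=1.5))         # 1.5
```

Undermatching and overmatching are both deviations in sensitivity (predicted ratios closer to or farther from the reinforcement ratio), while bias shifts preference even when the two reinforcement rates are identical.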



● Describe two steady-state/molar theories of choice and experiments testing their predictions.
    ○ Matching theory = all behavior is choice, governed by the matching law.
        ■ Natural selection has prepared us to match our relative rates of behavior to relative rates of reinforcement.
    ○ Optimization theory = organisms match to whatever will produce the most reinforcement.
        ■ Organisms match because matching maximizes reinforcement.

● Describe the dynamic/molecular theory of choice.
    ○ Whatever will pro...


Similar Free PDFs