Chance 7e TB Ch05 - Chapter 5 Test bank PDF

Title Chance 7e TB Ch05 - Chapter 5 Test bank
Course Introduction To Learning And Behavioral Analysis
Institution Queens College CUNY
Pages 9
File Size 167.9 KB
File Type PDF

Summary

Chapter 5 Test bank...


Description

Chapter 5: Reinforcement

Multiple Choice

1. E. L. Thorndike's studies of learning started as an attempt to understand _______.
a. operant conditioning
b. the psychic reflex
c. animal intelligence
d. maze learning
Ans: C  Ref: 127

2. Thorndike complained that _______ evidence provided a "supernormal psychology of animals."
a. anecdotal
b. case study
c. informal experimental
d. intuitive
Ans: A  Ref: 127

3. In one of Thorndike's puzzle boxes, a door would fall open when a cat stepped on a treadle, thus allowing the cat to reach food outside the box. Eventually the cat would step on the treadle as soon as it was put into the box. Thorndike concluded that ________.
a. the reasoning ability of cats is quite remarkable
b. treadle stepping increased because it had a "satisfying effect"
c. the treadle is a CS for stepping
d. learning meant connecting the treadle with freedom and food
Ans: B  Ref: 128

4. Thorndike plotted the results of his puzzle box experiments as graphs. The resulting curves show a _____ with succeeding trials.
a. decrease in time
b. decrease in errors
c. change in topography
d. increase in the rate of behavior
Ans: A  Ref: 128

5. The law of effect says that _______.
a. satisfying consequences are more powerful than annoying consequences
b. behavior is a function of its consequences
c. how an organism perceives events is more important than the events themselves
d. effective behavior drives out ineffective behavior
Ans: B  Ref: 130

6. Thorndike made important contributions to all of the following fields except _____.
a. educational psychology
b. animal learning
c. social psychology
d. psychological testing
Ans: C  Ref: 130


7. Thorndike emphasized that we learn mainly from _______.
a. errors
b. repeated trials
c. success
d. social experiences
Ans: C  Ref: 130; 164

8. Operant learning is sometimes called ________ learning.
a. free
b. higher-order
c. instrumental
d. reward
Ans: C  Ref: 131

9. ________ gave Skinner's experimental chamber the name "Skinner box."
a. Fred Keller
b. E. L. Thorndike
c. John Watson
d. Clark Hull
Ans: D  Ref: 131

10. Operant learning may also be referred to as _______.
a. trial-and-error learning
b. effects learning
c. non-Pavlovian conditioning
d. instrumental learning
Ans: D  Ref: 131

11. Mary's grandmother, Pearl, is from the Old Country. Although she knows some English, she continues to speak her native tongue. Pearl can't go anywhere without a member of the family because she can't communicate with people about prices, directions, bus routes, etc. Pearl's resistance to learning English is most likely the result of ______.
a. a lack of intelligence
b. age. Studies show that after the age of 60 learning a second language is nearly impossible.
c. the length of time she has spent speaking her native language
d. the benefits she receives for not speaking English
Ans: D  Ref: inferred

12. Mary decides to try to modify Pearl's behavior (see above item). She and the rest of the family refuse to respond to any comment or request by Pearl that they know she is capable of expressing in English. For example, if during dinner she says, "Pass the potatoes" in English, she gets potatoes; if she says it in her native language she gets ignored. The procedure being used to change Pearl's behavior is ______.
a. positive reinforcement
b. negative reinforcement
c. adventitious reinforcement
d. punishment
Ans: A  Ref: 133



13. Charles Catania identified three characteristics that define reinforcement. These include all of the following except _______.
a. a behavior must have a consequence
b. the consequence of the behavior must be positive
c. a behavior must increase in strength
d. the increase in strength must be the result of the behavior's consequence
Ans: B  Ref: 133

14. The one thing that all reinforcers have in common is that they _______.
a. strengthen behavior
b. are positive
c. feel good
d. provide feedback
Ans: A  Ref: 133

15. The number of operant procedures indicated in the contingency square is ______.
a. two
b. four
c. six
d. nine
Ans: B  Ref: 133

16. Positive reinforcement is sometimes called _______.
a. escape training
b. positive training
c. satisfier training
d. reward learning
Ans: D  Ref: 133

17. Negative reinforcement is also called _______.
a. punishment
b. aversive training
c. escape-avoidance training
d. reward training
Ans: C  Ref: 134

18. Alan Neuringer demonstrated that with reinforcement, _____ could learn to behave randomly.
a. preschoolers
b. cats
c. rats
d. pigeons
Ans: D  Ref: 135f

19. Skinner describes some of his most important research in _______.
a. Verbal Behavior
b. The Behavior of Organisms
c. Particulars of My Life
d. Animal Intelligence
Ans: B  Ref: 136



20. The author of your text calls Skinner the ______.
a. Newton of psychology
b. Thorndike of free operant work
c. discoverer of reinforcement
d. Darwin of behavior science
Ans: D  Ref: 137

21. The opposite of a conditioned reinforcer is a ______.
a. tertiary reinforcer
b. secondary reinforcer
c. primary reinforcer
d. generalized reinforcer
Ans: C  Ref: 138

22. All of the following are recognized kinds of reinforcers except ______.
a. primary
b. contrived
c. secondary
d. classical
Ans: D  Ref: 138-141

23. Donald Zimmerman found that a buzzer became a positive reinforcer after it was repeatedly paired with ______.
a. food
b. water
c. escape from shock
d. morphine
Ans: B  Ref: 139

24. The level of deprivation is less important when the reinforcer used is a(n) _________ reinforcer.
a. primary
b. secondary
c. unexpected
d. intrinsic
Ans: B  Ref: 139

25. Secondary reinforcers are also called _______ reinforcers.
a. transient
b. conditioned
c. second-order
d. acquired
Ans: B  Ref: 139

26. Money is a good example of a _______ reinforcer.
a. primary
b. tertiary
c. generalized
d. transient
Ans: C  Ref: 140



27. The Watson and Rayner experiment with Little Albert may have involved operant as well as Pavlovian learning because the loud noise ______.
a. occurred as Albert reached for the rat
b. occurred while Albert was eating
c. did not bother Albert initially
d. was aversive
Ans: A  Ref: 142

28. Studies of delayed reinforcement document the importance of ______.
a. contiguity
b. contingency
c. inter-trial interval
d. deprivation level
Ans: A  Ref: 144

29. Schlinger and Blakely found that the reinforcing power of a delayed reinforcer could be increased by ________.
a. increasing the size of the reinforcer
b. preceding the reinforcer with a stimulus
c. providing a different kind of reinforcer
d. following the reinforcer with a stimulus
Ans: B  Ref: 145

30. An action that improves the effectiveness of a reinforcer is called a ______.
a. motivating operation
b. reward booster
c. contrived reinforcer
d. activator
Ans: A  Ref: 148

31. ________ demonstrated that electrical stimulation of the brain could be reinforcing.
a. Olds and Milner
b. Skinner
c. Barnes and Noble
d. Hull
Ans: A  Ref: 150

32. _____ is a neurotransmitter that seems to be important in reinforcement.
a. Dopamine
b. Stupamine
c. Intelamine
d. Actomine
Ans: A  Ref: 151

33. Clark Hull's explanation of reinforcement assumes that reinforcers _____.
a. stimulate the brain
b. reduce a drive
c. activate neurotransmitters
d. leave a neural trace
Ans: B  Ref: 154


34. The best title for the figure below is ______. (Figure not reproduced.)
a. Motivation and Line Drawing
b. The Effect of Practice without Reinforcement
c. Trial and Error Learning
d. Improvement in Line Drawing with Practice
Ans: B  Ref: 154

35. Sylvia believes that the reinforcement properties of an event depend on the extent to which it provides access to high-probability behavior. Sylvia is most likely an advocate of _______ theory.
a. drive-reduction
b. relative value
c. response deprivation
d. random guess
Ans: B  Ref: 155

36. Premack's name is most logically associated with _______.
a. drive reduction theory
b. relative value theory
c. response deprivation theory
d. equilibrium theory
Ans: B  Ref: 155

37. The Premack principle says that reinforcement involves _______.
a. a reduction in drive
b. an increase in the potency of a behavior
c. a relation between behaviors
d. a satisfying state of affairs
Ans: C  Ref: 156



38. According to ___________ theory, schoolchildren are eager to go to recess because they have been deprived of the opportunity to exercise.
a. drive-reduction
b. relative value
c. response deprivation
d. stimulus substitution
Ans: C  Ref: 157

39. The distinctive characteristic of the Sidman avoidance procedure is that _______.
a. the aversive event is signaled
b. the aversive event is not signaled
c. the aversive event is signaled twice
d. there is no aversive event
Ans: B  Ref: 162

40. Douglas Anger proposed that there is a signal in the Sidman avoidance procedure. The signal is ________.
a. reinforcement
b. the aversive event
c. fatigue
d. time
Ans: D  Ref: 162

41. According to the one-process theory of avoidance, the avoidance response is reinforced by _______.
a. escape from the CS
b. a reduction in the number of aversive events
c. positive reinforcers that follow aversive events
d. non-contingent aversives
Ans: B  Ref: 163

True/False

42. Another term for operant is instrumental. T (131)
43. Positive reinforcement increases the strength of a behavior. T (133)
44. According to Skinner, people are rewarded, but behavior is reinforced. T (133)
45. Reprimands, restraint, captivity, and electrical shocks can be reinforcers. T (133)
46. Negative reinforcement increases the strength of a behavior. T (134)
47. A general assumption of behavioral research is that any feature of a behavior may be strengthened by reinforcement, so long as reinforcement can be made contingent on that feature. T (135)
48. Negative reinforcement and punishment are synonyms. F (135)
49. People can learn to behave randomly provided that reinforcers are made contingent on random acts. T (135)
50. Reinforcement is often said to increase the frequency of a behavior, but research suggests that any feature of a behavior (e.g., intensity, duration, form, etc.) can be strengthened if a reinforcer can be made contingent on that feature. T (136)
51. Operant learning probably always involves Pavlovian conditioning as well. T (142)
52. In operant learning, the word contingency usually refers to the degree of correlation between a behavior and a consequence. T (142)


53. Vomiting is ordinarily an involuntary response, but sometimes it can be modified by operant procedures. T (142)
54. Pavlovian and operant learning often occur together. T (142)
55. The more you increase the size of a reinforcer, the less benefit you get from the increase. T (146)
56. Studies demonstrate that operant learning is as effective with involuntary behavior, such as the salivary reflex, as it is with voluntary behavior. F (147)
57. With reinforcement, it is easy for a person to lower his blood pressure. F (148)
58. Using ESB as a reinforcer, Talwar and his colleagues got such effective control over the behavior of rats that journalists called the animals robo-rats. T (151)
59. Unexpected reinforcers produce more dopamine than expected reinforcers. T (152)

Completion

60. The experimental chamber developed by Skinner is often called a _________. Ans: Skinner box (131)
61. Positive reinforcement is sometimes called ______ learning. Ans: reward (133)
62. Negative reinforcement is sometimes called ______ learning. Ans: escape-avoidance (134)
63. Reinforcers such as praise, positive feedback, and smiles are called ______ reinforcers. Ans: secondary/conditioned (139)
64. _______ reinforcers are those that have been arranged by someone. Ans: Contrived (141)
65. The area of the brain that seems to be associated with reinforcement is called the reward _______. Ans: pathway/circuit (151)
66. Clark Hull's name is associated with the _______ theory of reinforcement. Ans: drive-reduction (154)
67. The _______ principle states that high-probability behavior reinforces low-probability behavior. Ans: Premack (155)
68. _______ theory assumes that a behavior becomes reinforcing when we are prevented from performing it as often as we normally would. Ans: Response deprivation (157)
69. In _______, a response is followed by the withdrawal of, or reduction in the intensity of, an aversive stimulus. Ans: negative reinforcement (159)
70. Operant learning is often described as trial-and-error learning, but Thorndike argued that behavior was selected by ______. Ans: success (164)

Short Essay

71. Your text says that the law of effect implies that our environment is constantly "talking" to us. What does this mean? (130)
The idea is that our surroundings provide feedback about what we do. In a sense, the environment tells us, "Yes, that was good. Do that again," or, "No, that was a mistake. Don't do it again."

72. What are the differences between classical conditioning and operant learning? (141)
The essential elements students should mention are that classical conditioning involves stimulus-contingent stimuli and reflexive behavior, whereas operant learning involves response-contingent stimuli and non-reflexive behavior.

73. Compare Premack's relative value theory with the response deprivation theory of Timberlake and Allison. (155-158)
Students should note that both define reinforcers as behavior, but that Premack says reinforcers are defined by their probability relative to other responses, while Timberlake and Allison define reinforcers by their level of deprivation relative to their baseline rate.

74. What is the chief problem with the two-process theory of avoidance? (160)
Answers should probably focus on the fact that the avoidance response continues even when the CS is no longer aversive.

75. How does the Sidman avoidance procedure differ from other avoidance procedures? (162)
Answers should note the absence of any stimulus preceding the shock or other aversive event.



