What makes a behavioral response less likely

Consider your parents for a minute. To stop some undesirable behavior of yours in the past, surely they took away some privilege. I bet the bad behavior ended, too. I know my grandmother used to do just that. But what happened to that bad behavior after it had disappeared? Did it start again without your parents ever figuring out why? Someday your parents will get you back and do the same thing with your kids.

When extinction first occurs, the person or animal is not sure what is going on and begins to make the response more often (frequency), for longer (duration), and more intensely (intensity). This is called an extinction burst. We might even see novel behaviors such as aggression. I mean, who likes having their privileges taken away? That will likely create frustration, which can lead to aggression. One final point about extinction is important: you must know what the reinforcer is and be able to eliminate it.

Say your child bullies other kids at school. Since you cannot be there to stop the behavior, and most likely the teacher cannot be either if the bullying happens on the playground at recess, the behavior will continue.

Your child will continue bullying because it makes them feel better about themselves (a PR). With all this in mind, you may have wondered whether extinction is the same as punishment. Both reduce unwanted behavior, yes, but that is the only similarity they share. Punishment reduces unwanted behavior by either giving something bad or taking away something good. Extinction is simply when you take away the reinforcer for the behavior. This could be seen as taking away something good, but the good in punishment is not usually what is reinforcing the bad behavior.

If a child misbehaves (the bad behavior) for attention (the PR), then with extinction you would not give the PR, meaning nothing happens, while with punishment you might slap their behind (a PP) or take away TV time (an NP). You might have wondered if the person or animal will try to make the response again in the future even though it stopped being reinforced in the past. The answer is yes, and one of two outcomes is possible. First, the response is made and nothing happens.

In this case, extinction continues. Second, the response is made and a reinforcer is delivered. The response re-emerges. Consider a rat that has been trained to push a lever to receive a food pellet. If we stop delivering the food pellets, in time the rat will stop pushing the lever. The rat will push the lever again sometime in the future, and if food is delivered, the behavior spontaneously recovers.
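
If it helps to see these dynamics play out, here is a minimal Python sketch of the lever-pressing example; the burst multiplier, decay rate, and recovery level are made-up illustrative values, not figures from any experiment.

```python
import random

def simulate_extinction(sessions=10, baseline_rate=10.0):
    """Toy model of lever pressing after food delivery stops.

    The burst multiplier, decay rate, and recovery level below are
    illustrative assumptions, not values from any study.
    """
    rate = baseline_rate
    for session in range(1, sessions + 1):
        if session == 1:
            rate = baseline_rate * 1.5   # extinction burst: responding spikes
        else:
            rate *= 0.7                  # no reinforcer, so responding decays
        presses = max(0, round(random.gauss(rate, 1.0)))
        print(f"session {session:2d}: ~{presses} presses")

    # After a rest period the response re-emerges at partial strength
    # (spontaneous recovery); if food is delivered now, the behavior rebuilds,
    # and if nothing happens, extinction simply continues.
    print(f"after rest: ~{baseline_rate * 0.4:.0f} presses (spontaneous recovery)")

simulate_extinction()
```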

You have likely heard about Pavlov and his dogs, but what you may not know is that the discovery was made accidentally. Ivan Petrovich Pavlov (1849-1936), a Russian physiologist, was interested in studying digestive processes in dogs in response to being fed meat powder. What he discovered was that the dogs would salivate even before the meat powder was presented.

They would salivate at the sound of a bell, footsteps in the hall, a tuning fork, or the presence of a lab assistant. Pavlov realized there were some stimuli that automatically elicited responses (such as salivating to meat powder) and others that had to be paired with these automatic associations before the animal or person would respond to them (such as salivating to a bell).

Armed with this stunning revelation, Pavlov spent the rest of his career investigating the learning phenomenon. The important thing to understand is that not all behaviors occur due to reinforcement and punishment as operant conditioning says. In the case of respondent conditioning, antecedent stimuli exert complete and automatic control over some behaviors.

We see this in the case of reflexes. When a doctor strikes your knee with that little hammer, your leg extends out automatically. You do not have to do anything but watch. Likewise, if a nipple is placed in a newborn's mouth, the baby will automatically suck, as per the sucking reflex. Humans have several of these reflexes, though not as many as other animals, due to our more complicated nervous system.

Respondent conditioning occurs when we link a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus. In respondent conditioning, learning occurs in three phases: preconditioning, conditioning, and postconditioning. See Figure 6. Notice that preconditioning has both an A and a B panel.

Really, all this stage of learning signifies is that some learning is already present. There is no need to learn it again as in the case of primary reinforcers and punishers in operant conditioning. In Panel A, food makes a dog salivate.

This does not need to be learned and is the relationship of an unconditioned stimulus (UCS) yielding an unconditioned response (UCR). Unconditioned means unlearned. In Panel B, we see that a neutral stimulus (NS) yields nothing.

Dogs do not enter the world knowing to respond to the ringing of a bell. Conditioning is when learning occurs. Through the pairing of a neutral stimulus and an unconditioned stimulus (bell and food, respectively), the dog will learn that the bell ringing (NS) signals food coming (UCS) and will salivate (UCR).

The pairing must occur more than once so that needless pairings are not learned, such as someone farting right before your food comes out, leaving you salivating whenever someone farts … at least for a while. Eventually, the fact that no food comes would extinguish this reaction, but still, it would be weird for a bit.

Postconditioning, or after learning has occurred, establishes a new and not naturally occurring relationship of a conditioned stimulus (CS; previously the NS) and a conditioned response (CR; the same response). So the dog now reliably salivates at the sound of the bell because he expects that food will follow, and it does. Comprehension check: a lot of terms were thrown at you in the preceding three paragraphs, so a quick check will make sure you understand them.

First, we talk about stimuli and responses being unconditioned or conditioned. The term conditioned means learned and if it is unconditioned then it is unlearned. A response is a behavior that you make due to one of these stimuli. Finally, pre means before and post means after, so preconditioning comes before learning occurs, conditioning is when learning is occurring, and postconditioning is what happens after learning has occurred.
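
To tie the terms together, here is a minimal Python sketch of the three phases; the stimuli and the three-pairing threshold are illustrative assumptions, not a claim about how many pairings real conditioning takes.

```python
# Toy walk-through of respondent conditioning. The stimuli and the
# three-pairing threshold are illustrative assumptions.
PAIRINGS_NEEDED = 3
learned = set()              # stimuli that have become conditioned stimuli

def respond(stimulus):
    if stimulus == "food":
        return "salivation"  # UCS -> UCR (unlearned)
    if stimulus in learned:
        return "salivation"  # CS -> CR (learned)
    return "nothing"         # NS -> no response

# Preconditioning: food (UCS) elicits salivation (UCR); the bell (NS) does not.
print(respond("food"))       # salivation
print(respond("bell"))       # nothing

# Conditioning: ring the bell, then present food, over repeated trials.
for _ in range(PAIRINGS_NEEDED):
    respond("bell")          # bell rings ...
    respond("food")          # ... and food follows
learned.add("bell")          # after enough pairings, the association is learned

# Postconditioning: the bell is now a CS and elicits salivation (a CR).
print(respond("bell"))       # salivation
```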

Be sure to keep these terms straight; this explanation is an easy way to do so. One of the most famous studies in psychology was conducted by Watson and Rayner (1920). In Panel A of Figure 6., a white rat (NS) produced no fear in the child, while a loud noise (UCS) naturally elicited fear (UCR). In Panel B, the rat was repeatedly presented together with the loud noise. After several conditioning trials, the child responded with fear to the mere presence of the white rat (Panel C).

As fears can be learned, so too can they be unlearned. Considered the follow-up to Watson and Rayner (1920), Jones (1924; Figure 6.) examined whether a child's learned fear, in this case a young boy's fear of rabbits, could be eliminated.

Simply, she placed the child at one end of a room and then brought in the rabbit. The rabbit was far enough away so as not to cause distress. Then Jones gave the child some pleasant food (i.e., something sweet such as crackers and milk). The procedure in Panel C continued with the rabbit being brought a bit closer each time until, eventually, the child did not respond with distress to the rabbit (Panel D). This process is called counterconditioning, or the reversal of previous learning. Another respondent-conditioning way to unlearn a fear is called flooding: exposing the person to the maximum level of the feared stimulus so that, as nothing aversive occurs, the link between the CS and the UCS producing the CR of fear breaks, leaving the person unafraid.

That is the idea, at least. If you were afraid of clowns, you would be thrown into a room full of clowns. Though you may be nervous, and likely terrified, at first, when nothing bad happens over time you will eventually calm down and no longer feel fear (the CR) in the presence of clowns.

It should be noted that for this fear to have developed, there was likely an event earlier in life that caused it. The functional assessment should help in identifying this event. In operant conditioning we talked about generalization, discrimination, extinction, and spontaneous recovery.

These terms apply equally well to respondent conditioning.

There are times when we learn by simply watching others. This is called observational learning and is contrasted with enactive learning, which is learning by doing.

In observational learning, the learner has no firsthand experience. Just as you can learn desirable behaviors, such as watching how your father bags groceries at the grocery store (I did this and still bag the same way today), you can learn undesirable ones too. If your parents resort to alcohol consumption to deal with the stressors life presents, then you too might do the same.

What is critical is what happens to the model in all of these cases. If my father seems genuinely happy and pleased with himself after bagging groceries his way, then I will be more likely to adopt this behavior. If my mother or father consumes alcohol to feel better when things are tough, and it works, then I might do the same.

On the other hand, if we see a sibling constantly getting in trouble with the law, then we may not model this behavior due to the negative consequences. Albert Bandura conducted the pivotal research on observational learning, and you likely already know all about it. Check out Figure 6.

From a young age, we learn which actions are beneficial and which are detrimental through a trial-and-error process.

For example, a young child is playing with her friend on the playground and playfully pushes her friend off the swingset. Her friend falls to the ground, begins to cry, and then refuses to play with her for the rest of the day. The child learns that pushing has an unpleasant consequence and becomes less likely to do it again; this is Thorndike's law of effect, the principle that responses followed by satisfying consequences are strengthened while those followed by discomfort are weakened. The law of effect has been expanded to various forms of behavior modification. Because the law of effect is a key component of behaviorism, it does not include any reference to unobservable or internal states; instead, it relies solely on what can be observed in human behavior.

While this theory does not account for the entirety of human behavior, it has been applied to nearly every sector of human life, particularly education and psychology. B. F. Skinner was a behavioral psychologist who expanded the field by defining and elaborating on operant conditioning.

Research regarding this principle of learning was first conducted by Edward L. Thorndike in the late 1800s and then brought to popularity by B. F. Skinner in the mid-1900s. Much of this research informs current practices in human behavior and interaction. Skinner theorized that if a behavior is followed by reinforcement, it is more likely to be repeated, but if it is followed by some sort of aversive stimulus or punishment, it is less likely to be repeated. He also believed that this learned association could end, or become extinct, if the reinforcement or punishment was removed.

Skinner: Skinner was responsible for defining the segment of behaviorism known as operant conditioning—a process by which an organism learns from its physical environment. In his first work with rats, Skinner would place the rats in a Skinner box with a lever attached to a feeding tube. Whenever a rat pressed the lever, food would be released.

After multiple trials, the rats learned the association between the lever and food and began to spend more of their time in the box procuring food than performing any other action. It was through this early work that Skinner started to understand the effects of behavioral contingencies on actions. He discovered that the rate of response—as well as changes in response features—depended on what occurred after the behavior was performed, not before.

Skinner named these actions operant behaviors because they operated on the environment to produce an outcome. The process by which one could arrange the contingencies of reinforcement responsible for producing a certain behavior then came to be called operant conditioning. Skinner also famously observed what he called superstitious behavior: a pigeon given food at regular intervals, regardless of what it was doing, would come to repeat whatever action it had happened to be performing when the food arrived. In this way, he discerned that the pigeon had fabricated a causal relationship between its actions and the presentation of the reward. In his operant conditioning experiments, Skinner often used an approach called shaping.

Instead of rewarding only the target, or desired, behavior, the process of shaping involves the reinforcement of successive approximations of the target behavior. Behavioral approximations are behaviors that, over time, grow increasingly closer to the actual desired response.

Skinner believed that all behavior is predetermined by past and present events in the objective world. He did not include room in his research for ideas such as free will or individual choice; instead, he posited that all behavior could be explained using learned, physical aspects of the world, including life history and evolution.

His work remains extremely influential in the fields of psychology, behaviorism, and education.

Shaping is a method of operant conditioning by which successive approximations of a target behavior are reinforced. The method requires that the subject perform behaviors that at first merely resemble the target behavior; through reinforcement, these behaviors are gradually changed, or shaped, to encourage the performance of the target behavior itself.

Shaping is useful because it is often unlikely that an organism will spontaneously display anything but the simplest of behaviors. It is a very useful tool for training animals, such as dogs, to perform difficult tasks. Dog show: Dog training often uses the shaping method of operant conditioning. In shaping, behaviors are broken down into many small, achievable steps. To test this method, B. F. Skinner performed shaping experiments on rats, which he placed in an apparatus known as a Skinner box that monitored their behaviors.

The target behavior for the rat was to press a lever that would release food. Initially, rewards are given for even crude approximations of the target behavior—in other words, even taking a step in the right direction. Then, the trainer rewards a behavior that is one step closer, or one successive approximation nearer, to the target behavior. For example, Skinner would reward the rat for taking a step toward the lever, for standing on its hind legs, and for touching the lever—all of which were successive approximations toward the target behavior of pressing the lever.

As the subject moves through each behavior trial, rewards for old, less approximate behaviors are discontinued in order to encourage progress toward the desired behavior. For example, once the rat had touched the lever, Skinner might stop rewarding it for simply taking a step toward the lever. In this way, shaping uses operant-conditioning principles to train a subject by rewarding proper behavior and discouraging improper behavior.
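
Written out as a loop, the procedure might look like the Python sketch below; the ladder of approximations follows the rat example above, while the training logic itself is a simplified illustration, not a real training protocol.

```python
# Toy sketch of shaping: reward successive approximations of the target
# behavior, then discontinue rewards for older, cruder approximations.
approximations = [
    "takes a step toward the lever",
    "stands on hind legs",
    "touches the lever",
    "presses the lever",       # the target behavior
]

def shape(ladder):
    for step, behavior in enumerate(ladder):
        print(f"rewarding: {behavior}")
        # Once this approximation is reliable, stop rewarding cruder ones
        # so that only progress toward the target is reinforced.
        for old in ladder[:step]:
            print(f"  no longer rewarding: {old}")

shape(approximations)
```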

This process has been replicated with other animals—including humans—and is now common practice in many training and teaching methods. It is commonly used to train dogs to follow verbal commands or become house-broken: while puppies can rarely perform the target behavior automatically, they can be shaped toward this behavior by successively rewarding behaviors that come close.

Shaping is also a useful technique in human learning. For example, if a father wants his daughter to learn to clean her room, he can use shaping to help her master steps toward the goal. First, she cleans up one toy and is rewarded. Second, she cleans up five toys; then chooses whether to pick up ten toys or put her books and clothes away; then cleans up everything except two toys.

Through a series of rewards, she finally learns to clean her entire room.

Reinforcement and punishment are principles of operant conditioning that increase or decrease the likelihood of a behavior. Reinforcement means you are increasing a behavior: it is any consequence or outcome that increases the likelihood of a particular behavioral response and that therefore reinforces the behavior.

Because you have become habituated to the conditioned stimulus, you are more likely to ignore it and it's less likely to elicit a response, eventually leading to the extinction of the conditioned behavior. Personality factors might also play a role in extinction. One study found that children who were more anxious were slower to habituate to a sound.

As a result, their fear response to the sound was slower to become extinct than that of non-anxious children.
