
6. Fixed Interval Schedules



To give you an example of the experimental analysis of behaviour in operation, we will take a closer look at one of the 4 basic schedules, the FI schedule. Fig. 6.4 shows an example of baseline performance on a FI 30-sec schedule, with rats as subjects and food as the reinforcer. At the beginning of each 30-sec interval there is a pause in responding called the post-reinforcement pause (PRP), after which responses increase in frequency as the end of the interval approaches.

This is the ‘finger-print’ of the FI schedule, so to speak. But how do you explain this pattern? When I first addressed this question as a student, my thinking went like this. Firstly, let’s see what else we know about the performance. Well, when you look at the overall pattern you see that the PRP is a fairly fixed proportion of the interval. That is, if you increase the size of the interval you increase the size of the average pause. Regarding the number of responses, we know that only 1 response is required to produce the reinforcer, so perhaps the large number of responses indicates that the animal has a very poor sense of time which makes it respond too early. Ideally it should wait until the end of the interval and then make one response. But how do you account for the increasing rate of responding throughout the interval? Well, it could be, I thought, that despite the poor sense of timing, the animal gets excited as it approaches the time for food delivery and that excitement is reflected in the increasing number of responses.

Seems like a simple enough explanation. But in truth my explanation at the time was riddled with mentalism, because all the focus is on what’s happening inside the organism. Remember, what’s happening inside an organism at the moment of observation is properly understood by looking at the contingencies in operation at that time. So, let’s take a closer look at what is happening in a FI schedule.

Fig. 6.5 is an overview of how the FI schedule is constructed. Having just obtained a reinforcer, a period of 30 secs must elapse before a response can produce a reinforcer; responses during the interval have no effect. Once the 30 secs have elapsed, the first response could occur anywhere in the shaded area, which continues indefinitely. However, once it occurs, the reinforcer is delivered and we are back to the start of the interval. Sequencing the Fixed Time (FT) period and the FR 1 contingency like this means that the FI schedule can also be called a Tandem FT FR1.
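
If it helps to see the contingency written out as a procedure, the short Python sketch below expresses the same tandem FT FR1 sequence. This is my own illustration rather than any actual laboratory program, and the subject is modelled crudely as responding at random with an assumed fixed probability in each 0.1-sec tick; the point is only to show how the two components are chained together.

import random

TICK = 0.1          # simulation resolution in seconds
FT_SECONDS = 30.0   # Fixed Time component of the tandem
P_RESPONSE = 0.05   # assumed probability of a response in any one tick (illustration only)

def simulate_fi(n_reinforcers: int = 5, seed: int = 0) -> None:
    rng = random.Random(seed)
    for k in range(n_reinforcers):
        t = 0.0
        ineffective = 0
        # FT period: responses during these 30 secs have no programmed effect.
        while t < FT_SECONDS:
            if rng.random() < P_RESPONSE:
                ineffective += 1
            t += TICK
        # FR 1 component: the first response after the interval produces food.
        while rng.random() >= P_RESPONSE:
            t += TICK
        t += TICK
        print(f"Reinforcer {k + 1}: delivered at {t:.1f} s; "
              f"{ineffective} responses during the interval had no effect")

if __name__ == "__main__":
    simulate_fi()

Running it prints the time of each reinforcer and the number of responses that had no programmed effect during the interval. Notice that, by construction, every reinforcer is immediately preceded by a response, a point we return to below.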


Figure 6.4

Figure 6.5



Now within this arrangement it is important to note that we have a guarantee of what is called response-reinforcer contiguity (Fig. 6.6). That is, each food delivery is immediately preceded by a response. We know that learning proceeds best when the delay to reinforcement is short.

In summary, then, a FI schedule is actually more complicated than it first appears. You may read in introductory textbooks that a schedule simply arranges a relation between a response and a reinforcer. However, we now know that to be a limited description of a schedule. In this instance, for example, we see that we have a complex system made up of a number of different elements all working in concert (Fig. 6.7). The formal structure of the FI schedule determines how these elements combine. And let’s not forget that we also have a biological system embedded in this simple environmental system. In other words, then, we have a biological system interacting with an environmental system.

The value of identifying the components of the environmental system is that we can now experiment with them to see what happens. Note, I don’t have to test a theory in the usual sense of the word. Instead, I can tinker with the variables that define the system and see what behaviour pops out. Let’s do this and see what happens.


Figure 6.6

Figure 6.7



One thing we could do is rearrange some of the elements of the system as shown in Fig. 6.8. Here we have what is called a Recycling Conjunctive FT FR1 schedule. It delivers food at fixed intervals, just like the FI schedule, only this time the FR1 component can be completed anywhere inside the interval. Thus, if a single response was made after 5 secs, for example, then the reinforcer would be delivered at the end of the 30-sec interval and the next interval would begin. If a response is not made inside the interval, then the interval ends without the delivery of a reinforcer and the next interval begins immediately.
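
Written out in the same sketch form as before, the recycling conjunctive FT FR1 contingency looks like this (again my own illustrative Python, with an assumed random response model rather than any real subject or laboratory code):

import random

TICK = 0.1
FT_SECONDS = 30.0
P_RESPONSE = 0.01   # assumed per-tick response probability, for illustration only

def simulate_recycling_conjunctive(n_intervals: int = 5, seed: int = 0) -> None:
    rng = random.Random(seed)
    for k in range(n_intervals):
        responded = False
        t = 0.0
        # The interval always runs its full course.
        while t < FT_SECONDS:
            if rng.random() < P_RESPONSE:
                responded = True          # FR 1 can be satisfied anywhere in the interval
            t += TICK
        if responded:
            print(f"Interval {k + 1}: reinforcer delivered at {FT_SECONDS:.0f} s")
        else:
            print(f"Interval {k + 1}: no response, interval recycles without food")

if __name__ == "__main__":
    simulate_recycling_conjunctive()

The structural difference from the tandem arrangement is that the FR1 requirement can be met at any point in the interval, so the interval always lasts the full 30 secs and the reinforcer, when it comes, is no longer guaranteed to follow a response immediately.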

Fig. 6.9 shows the baseline performance on this schedule (see also Movie 6.5). This is radically different from the performance on the FI schedule. There are fewer responses during the intervals and responding is bunched in the middle of each interval, with both a post-reinforcement and a pre-reinforcement pause.

Now let’s go back to my thinking as a student. If the regularity of food presentations was essential for some sort of temporal discrimination on the part of the animal, and if poor temporal discrimination produced the pattern of responses when only one response was required, then I should see the same thing here. But I don’t. So much for my theory! It would seem, then, that mother nature can get on quite well without my theory, or any other theory, for that matter, that doesn’t make reference to how the environmental system is designed.

OK, let’s try something else. Remember we noted that contiguity was important in learning. With the current set-up we see that there are delays between responding and the delivery of the reinforcer. When you look at a whole session consisting of 100 reinforcers, you’ll find that on only 3 or 4 occasions is there an accidental occurrence of response-reinforcer contiguity. An obvious question to ask is whether you could increase the incidence of contiguity without changing much else. That is, can we increase the likelihood of contiguity without changing either the FR1 component or the FT component?

This next arrangement does just that (Fig. 6.10). We keep the same FR1 component, but if a response occurs in the final 2 secs of the interval, say at 29 secs, then the reinforcer is delivered immediately, that interval ends, and the next one begins. This increases the likelihood of obtaining more incidences of response-reinforcer contiguity, but the effect on the overall duration of the interval is minimal.
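
In sketch form, the only change to the previous illustrative program is a check on where in the interval a response falls. The 2-sec window is as described above; the random response model remains my assumption purely for demonstration.

import random

TICK = 0.1
FT_SECONDS = 30.0
WINDOW = 2.0        # final portion of the interval in which contiguity is arranged
P_RESPONSE = 0.01   # assumed per-tick response probability, for illustration only

def simulate_contiguity_window(n_intervals: int = 5, seed: int = 0) -> None:
    rng = random.Random(seed)
    for k in range(n_intervals):
        responded = False
        t = 0.0
        while t < FT_SECONDS:
            if rng.random() < P_RESPONSE:
                responded = True
                if t >= FT_SECONDS - WINDOW:
                    # Response in the final 2 secs: immediate reinforcer, interval ends.
                    print(f"Interval {k + 1}: contiguous reinforcer at {t:.1f} s")
                    break
            t += TICK
        else:
            # Interval ran its full course; apply the ordinary conjunctive rule.
            if responded:
                print(f"Interval {k + 1}: reinforcer delivered at {FT_SECONDS:.0f} s")
            else:
                print(f"Interval {k + 1}: no response, interval recycles without food")

if __name__ == "__main__":
    simulate_contiguity_window()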


Figure 6.8

Figure 6.9




Fig. 6.11 shows what this arrangement produced in some rats: FI-like patterning appeared. Rate of responding increased for all rats, and the overall results have implications for understanding adaptation to temporal regularities of significant events in the environment.


Figure 6.10

Figure 6.11


In conclusion, we see the power of an experimental analysis of behaviour. Key independent variables can be identified that help to produce patterning of behaviour in time. The structure of the prevailing contingencies defines the components of an environmental system within which a biological system adapts. That is a huge advance over the simple mentalistic analysis that I engaged in as a student.

References

Keenan, M., & Leslie, J. C. (1986). Varying the incidence of response-reinforcer contiguity in a recycling conjunctive schedule. Journal of the Experimental Analysis of Behavior, 45, 317-332.

Keenan, M. (1999). Periodic response reinforcer contiguity: Temporal control but not as we know it! The Psychological Record, 49, 273-297.