So far, we have talked about the importance of understanding absolute levels of rewards and what rewards people value. We also discussed the importance of understanding relative rewards and the notion of equity, where we socially compare ourselves to our peers. I would like to expand our discussion by talking about schedules of reinforcement, which is when and how a given reward is distributed. Consider the following problem. You're trying to motivate your daughter to do her homework. Which schedule of reinforcement would most effectively motivate your child to do her homework and keep doing her homework in the future? Option A, you give your daughter a piece of her favorite candy after every 20 minutes of studying. Option B, you give your daughter three pieces of her favorite candy after every sixty minutes of studying. Option C, you give your daughter two pieces of candy after the first thirty minutes of studying, no candy after the next thirty minutes, and then one piece of candy after every fifteen minutes of studying. What would you choose? Now let's assume that your daughter is willing to study for about two hours, so the absolute level of reward distributed is exactly the same. What varies is when and how it's given out. We'll come back to this question shortly, but I would first like to say that the most prevalent schedules of reinforcement in contemporary organizations are fixed interval and fixed ratio. By fixed interval I mean you receive a reward after a fixed time interval, such as receiving a paycheck at the end of the month or biweekly. A fixed ratio is when you receive a reward after a fixed number of responses, such as getting paid after every five cars sold. And even though these are the most prevalent schedules of reinforcement, they're not necessarily the most effective. In the words of B.F. Skinner, no one works on Monday morning because he is reinforced by a paycheck on Friday afternoon. 
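To see why the absolute reward works out the same across the three options, here is a minimal arithmetic sketch, assuming the two-hour (120-minute) study session described above:

```python
session = 120  # minutes of studying, per the two-hour assumption

# Option A: one piece of candy after every 20 minutes (fixed interval)
option_a = (session // 20) * 1

# Option B: three pieces of candy after every 60 minutes (fixed interval)
option_b = (session // 60) * 3

# Option C: two pieces after the first 30 minutes, none for the next 30,
# then one piece after every 15 minutes (variable interval)
option_c = 2 + 0 + ((session - 60) // 15) * 1

print(option_a, option_b, option_c)  # 6 6 6 — identical totals
```

All three options pay out six pieces of candy over the two hours; only the timing differs.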
I'm going to show you a video that will give us insight into a vastly different schedule of reinforcement. Take a look. One of the reasons slot machines are so addictive is that they're based on a variable schedule of reinforcement, specifically a variable ratio schedule of reinforcement. What I mean by this is that the probability of winning is constant, but the number of lever presses needed to win is variable. So again, under a variable ratio schedule of reinforcement, the number of units produced to receive a reward varies. An example would be a lottery for your employees, introducing probabilistic rewards; I'll give an example of those shortly. There's also the variable interval schedule of reinforcement, where you receive a reward after time intervals of different lengths. An example would be receiving praise only now and then, a surprise inspection, a pop quiz. So what we know from research is the following. First, ratio reinforcement schedules typically outperform interval reinforcement schedules. And second, variable reinforcement schedules typically outperform their fixed counterparts. So a variable interval schedule outperforms a fixed interval schedule, and a variable ratio schedule outperforms a fixed ratio schedule. When I say outperforms, I mean it leads to higher levels of motivation, engagement, and performance. Let me give you an example of a study that directly compares the performance effects of a fixed ratio reinforcement schedule relative to a variable ratio schedule. In this study, participants were asked to do a simple task, which was grading exams. For the first week, all they received was fixed compensation, $1.50 per hour. In the second week, and that's where it gets interesting, bonuses were introduced. For the first group the bonus was as follows: $0.50 for each exam sheet graded if you correctly guess a coin flip. So you grade an exam sheet, you submit it, and you have to guess a coin flip correctly. If you guess it correctly, you get $0.50. 
If you don't guess it, you receive nothing. In the second condition, for the second group, you get $0.25 guaranteed for every exam sheet graded, no coin flip involved. So in the first group, you can see this is a probabilistic reward. Now the first thing to recognize here is that the absolute levels of reward, of compensation, are very much comparable across the two groups, assuming that you can guess coin flips with about 50% probability. And that's exactly what happened here: most people guessed coin flips with about a 50% probability. The only thing that differs across the two bonuses is that we have a variable ratio reinforcement schedule in the first condition and a fixed ratio in the second condition. I call the first condition a variable ratio reinforcement schedule because I can guess the coin flip correctly on the first exam sheet graded, then miss it on the second and third, then guess it correctly again on the fourth, miss it again on the fifth, and guess it correctly on the sixth. So the rewards came after different numbers of units graded. Let's look at the performance results. What this graph shows on the y-axis is the increase in the number of exams graded per day following the introduction of bonuses. As you can see, for the fixed ratio reinforcement schedule, where people had $0.25 per exam sheet guaranteed, no probabilities involved, no coin flip, no uncertainty, productivity went up by about 36.5%. And for the coin-flip group, which had the variable ratio reinforcement schedule, productivity went up by 44.8%. So you can see that the schedule of reinforcement matters. A variable ratio reinforcement schedule can increase productivity and engagement. So keep that in mind. Let me give you another example: New York Life Insurance. What they started was a program in which employees with perfect attendance are entered into a lottery. 
The odds of winning are extremely low, and people compete for prizes such as small cash awards or extra days of vacation. But what they found is a stunning effect: absenteeism throughout the entire organization dropped by 21%. [SOUND] How many of you just looked at your phones? Our phones are a great example of a variable reinforcement schedule. And just like with gambling, we sometimes get addicted to our phones and smartwatches, and hear phantom buzzes and rings. Now, coming back to the question I posed for you at the beginning of the session. I'm sure you recognize by now that options A and B are fixed interval reinforcement schedules, and option C is a variable interval reinforcement schedule. So out of these three choices, you might consider giving option C a shot, because variable interval schedules on average tend to outperform fixed interval reinforcement schedules. Now, if you want to reward your daughter for the work actually done and not for the time spent studying, think of the folly of rewarding one thing while hoping for another that we just discussed. You may consider a variable ratio reinforcement schedule. That would look like giving her a piece of candy for the first problem solved, another piece of candy for the second problem solved, no candy for the third, and maybe four pieces of candy for the fourth. So, to reflect on the key insights from this discussion so far: I think the most important thing to recognize is that in addition to attending to absolute levels of rewards and what rewards people value, and to relative levels of rewards, as we discussed in the equity conversation, it's really important to understand that the schedule of reinforcement, when and how a given reward is distributed, matters greatly. What we know from research is that ratio schedules are typically more effective than interval schedules. 
But what's even more insightful is that even though fixed interval and fixed ratio schedules are the most prevalent in organizations, that doesn't mean they're the most effective. Consider using a variable interval over a fixed interval reinforcement schedule, and a variable ratio over a fixed ratio reinforcement schedule. And also think about the fact that incentives are pervasive. Think about how you can use incentives to improve not just your work life, but also your life outside of work.
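To make the grading-study comparison concrete, here is a minimal simulation sketch of the two bonus schemes. It assumes a fair coin and the per-sheet payouts described above; the number of sheets and the random seed are purely illustrative:

```python
import random

random.seed(0)   # illustrative seed, for reproducibility only
sheets = 10_000  # hypothetical number of exam sheets graded

# Variable ratio bonus: $0.50 per sheet, paid only when the coin flip
# is guessed correctly (a 50% chance per sheet)
variable_pay = sum(0.50 for _ in range(sheets) if random.random() < 0.5)

# Fixed ratio bonus: $0.25 guaranteed per sheet, no uncertainty
fixed_pay = 0.25 * sheets

# The expected payouts match: 0.50 * 0.5 == 0.25 per sheet, so only the
# schedule of reinforcement differs, not the absolute level of reward
print(fixed_pay, variable_pay)
```

With enough sheets the two totals converge, which mirrors the study's setup: comparable absolute compensation, but a variable ratio schedule in one condition and a fixed ratio schedule in the other.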