
Pavlov + Skinner = Premack

2014, International Journal of Comparative Psychology

https://doi.org/10.46867/IJCP.2014.27.04.04

Abstract

Behavior is a sequence of actions. Premackian conditioning occurs when one of those actions permits an animal to engage in more biologically potent positive responses—reinforcement—or forces it to engage in less positive (or negative) responses—punishment. Signals of the transition from one class of actions to another focus the instrumental responses in the first class and inform the contingent responses in the second class. The signals may be innate (USs) or learned (sign-learning); excitatory (leading to more positive actions) or inhibitory (leading to less positive actions). The potency of an action has phylogenetic origins, but may be conditioned by proximity to more potent responses, such as consummation of a reinforcer. With practice, instrumental responses may take on increased strength, and in some cases become motivationally autonomous—become habits. Stimuli or responses that signal the availability of more positive actions may become incentive motivators that animals wil...
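
The relational logic in the abstract, where the same activity can serve as reinforcer or punisher depending on how a schedule relates it to its baseline level, lends itself to a brief worked example. The Python sketch below is not code from the paper; it is a minimal toy that applies the response-deprivation inequality of Timberlake and Allison (1974), a generalization of the Premack principle, to classify a hypothetical schedule as reinforcing, punishing, or neutral. The function name, parameter names, and baseline numbers are all assumptions made for illustration.

```python
# Illustrative sketch only (not from the paper): classify a hypothetical
# instrumental contingency by comparing the schedule's terms with the
# baseline (free-access) allocation of the two responses, in the spirit of
# the Premack principle and the response-deprivation hypothesis.

def classify_contingency(o_instrumental: float,
                         o_contingent: float,
                         i_required: float,
                         c_yoked: float) -> str:
    """Toy model with assumed parameters.

    o_instrumental, o_contingent: baseline amounts of the instrumental and
        contingent responses when both are freely available.
    i_required, c_yoked: the schedule yokes c_yoked units of the contingent
        response to every i_required units of the instrumental response
        (access in the reinforcement case, forced engagement in the
        punishment case).
    """
    baseline_ratio = o_instrumental / o_contingent
    schedule_ratio = i_required / c_yoked
    if schedule_ratio > baseline_ratio:
        # Keeping instrumental responding at baseline would leave the animal
        # below its baseline level of the contingent response, so access to
        # the contingent response should drive instrumental responding up.
        return "reinforcing"
    if schedule_ratio < baseline_ratio:
        # The schedule ties more of the contingent response to each
        # instrumental unit than the animal would choose at baseline; when
        # that engagement is obligatory, instrumental responding should fall.
        return "punishing"
    return "neutral"


# Made-up baselines: the contingent activity (e.g., running) is far more
# probable than the instrumental one (e.g., lever pressing).
baselines = dict(o_instrumental=10, o_contingent=60)
print(classify_contingency(**baselines, i_required=5, c_yoked=5))   # reinforcing
print(classify_contingency(**baselines, i_required=1, c_yoked=30))  # punishing
```

Under this reading, the first schedule restricts the more probable activity below its baseline and so strengthens the instrumental response, while the second forces it above baseline and so suppresses it.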

Financial Support: The author received support from his wife.

Conflict of Interest: The author declares no conflict of interest.

Submitted: July 14th, 2014; Resubmitted: October 11th, 2014; Accepted: October 30th, 2014