In two-alternative discrimination tasks, researchers typically randomize the placement of the rewarded stimulus so that systematic responses to irrelevant cues (e.g., always choosing one side) yield only chance performance, and learning curves therefore reflect genuine discrimination learning.
One way to accomplish this is to draw random numbers from a discrete binomial distribution to construct a ‘full random training schedule’ (FRS). However, an FRS may inadvertently contain sporadic but extended runs of trials that reward the same location, known as ‘input biases,’ which can lead to the emergence of biased responses (‘output biases’).
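To see how input biases arise, the short sketch below (variable names are illustrative, not from the package) draws an FRS by sampling each trial independently and then inspects the run lengths; an occasional long run of the same rewarded location is an input bias:

```r
# Full random training schedule (FRS): each trial is an independent draw.
set.seed(1)  # illustrative seed, for reproducibility only
frs <- rbinom(20, size = 1, prob = 0.5)
frs
rle(frs)$lengths  # lengths of consecutive same-location runs; long runs are input biases
```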
As an alternative, a ‘Gellerman-like training schedule’ (GLS) can be used. A GLS mitigates most input biases by preventing the reward from appearing in the same location on more than three consecutive trials. This matters because the history of past rewards obtained by choosing a particular discriminative stimulus influences the likelihood of choosing that stimulus on subsequent trials.
This function implements a Gellerman-like series based on Herrera & Treviño (2015). The algorithm samples from a binomial distribution and imposes two restrictions: the reward appears the same number of times on each alternative, and it never appears in the same location on more than three consecutive trials.
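A minimal sketch of such a generator, using rejection sampling, could look as follows; the function name `gellerman_series` and its internals are assumptions for illustration, not necessarily the package's actual implementation:

```r
gellerman_series <- function(n) {
  stopifnot(n %% 2 == 0)  # an even n is required for equal trial counts
  repeat {
    s <- rbinom(n, size = 1, prob = 0.5)       # sample each trial from a binomial
    equal_counts <- sum(s) == n / 2            # restriction 1: same number of trials per alternative
    short_runs <- max(rle(s)$lengths) <= 3     # restriction 2: at most 3 consecutive repeats
    if (equal_counts && short_runs) return(s)  # keep drawing until both restrictions hold
  }
}

gellerman_series(16)
```

A call like the one above produces a series such as: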
## [1] 1 1 0 0 0 1 0 0 1 1 1 0 0 0 1 1
Note: it is important that `n` be an even number so that there is the same number of trials for each alternative.
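As a quick sanity check, reusing the sketched generator above, both restrictions can be verified on a generated series:

```r
s <- gellerman_series(16)
table(s)             # n/2 = 8 trials for each alternative
max(rle(s)$lengths)  # longest same-location run; never exceeds 3
```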