👁

Modeling gaze biases

Mar 10, 2020, 3-5 min read

PAPER:

Thomas, A. W., Molter, F., Heekeren, H. R., & Mohr, P. N. (2019). GLAMbox: A Python toolbox for investigating the association between gaze allocation and decision behaviour. PLoS ONE, 14(12). doi.org/10.1371/journal.pone.0226428

IN BRIEF:

Imagine deciding whether to eat an apple or an orange from a fruit basket on the table in front of you. While forming that decision, you look back and forth between both alternatives to compare them. Recent research in decision neuroscience has established a link between looking behaviour and these types of simple value-based choices, such that alternatives that have been looked at longer are generally more likely to be chosen.

On a computational level, this link (the gaze bias) has been formalized by the attentional Drift-Diffusion Model (aDDM). The aDDM assumes that while forming such a decision, the brain accumulates evidence in favor of each available alternative, and that a choice is made once enough evidence for one alternative has been accumulated over the others. Importantly, the aDDM assumes that the rate of evidence accumulation depends on the distribution of visual gaze, with generally higher accumulation rates for the momentarily looked-at alternative (leading to generally higher choice probabilities for this alternative):

[Figure: gaze-dependent evidence accumulation in the aDDM]
💡

In the aDDM, choices are determined by an evidence accumulation process, in which noisy evidence E is accumulated in favor of each available choice alternative (upper panel; here, exemplified with three choice alternatives, each represented by a different color). Importantly, the accumulation process is dependent on the allocation of visual gaze, with lower accumulation rates for alternatives that are momentarily not looked at (the color of the blocks along the bottom of the upper panel indicates which alternative is momentarily looked at). The evidence signals build the basis for the relative decision signals (RDV; lower panel), which drive the decision process and are computed as the difference between the evidence E of an alternative and the maximum evidence of all other available alternatives. Once any RDV reaches the common decision boundary (indicated by the dashed black line), a choice is made for the respective item.
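Written out, the gaze-dependent update sketched in the figure takes roughly the following form (a schematic rendering in the caption's notation rather than any one paper's exact parameterisation; here r_i is the liking rating of alternative i, d a drift scaling, θ ∈ [0, 1] the gaze discount, and ε_t Gaussian noise):

$$ E_i(t) = E_i(t-1) + d \, r_i \big(\theta + (1-\theta)\,\mathbb{1}[i \text{ is looked at at } t]\big) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2) $$

$$ \mathrm{RDV}_i(t) = E_i(t) - \max_{j \neq i} E_j(t) $$

A choice for alternative i is made at the first time point at which its RDV reaches the decision boundary.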

Yet, the aDDM has so far only been tested on group-level data. It is therefore unclear whether the established link between looking behaviour and choice also exists at the level of the individual.

While it is in theory possible to apply the aDDM to individual-level data, doing so is very complicated in practice, as fitting the aDDM relies on extensive model simulations and a generative model of each individual's fixation process (for a more detailed discussion, see Thomas, Molter, Krajbich, Heekeren & Mohr, Nature Human Behaviour, 2019).

In this work, we propose the Gaze-weighted Linear Accumulator Model (GLAM), which is inspired by the aDDM but does not rely on model simulations and is therefore easily applicable to individual-level data. The GLAM provides an analytical solution for the first-passage time distribution of each option (describing the likelihood that each of the available options is chosen at each point in time), given an individual's observed distribution of visual gaze (defined as the fraction of trial time the individual spent looking at each alternative) and the liking ratings of the available choice alternatives:

[Figure: overview of the GLAM]
💡

The GLAM describes the influence of visual gaze allocation (how long each item was looked at) on the decision-making process in the form of a linear stochastic race: While a person looks at the available choice options (a), an absolute evidence signal A for each option i in the choice set is computed (b). The magnitude of this signal depends on the allocation of visual gaze, with lower magnitudes for options that are momentarily not looked at. These absolute evidence signals are then transformed into relative decision signals (indicating the individual's relative preference for the available items) by computing the average absolute evidence signal for each item over the trial (indicated by the broken lines in (b)) and taking the difference between each of these averages and the maximum of the remaining averages. The GLAM further assumes an adaptive representation of these relative decision signals that is maximally sensitive to small differences between them; to this end, a logistic transform is applied (c). The resulting scaled relative evidence signals determine the drift terms R of the relative evidence accumulators E in the stochastic race (d). A choice for an option is made as soon as its accumulated relative evidence E reaches a common choice threshold. The stochastic race also provides first-passage time distributions p for each choice option, describing the likelihood of each item being chosen at each time point.
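Spelled out in code, the transformations in panels (b) and (c) can be sketched as follows. This is a minimal illustration of the computations described above with illustrative variable names (values, gaze, gamma, tau); it is not the toolbox's internal implementation.

```python
import numpy as np

def glam_drifts(values, gaze, gamma, tau):
    """Rough sketch of the gaze-weighted drift computation described above.

    values : liking ratings of the items in a trial
    gaze   : fraction of trial time spent looking at each item
    gamma  : gaze-bias parameter (evidence for unattended items is scaled by gamma)
    tau    : scaling of the logistic transform
    """
    values = np.asarray(values, dtype=float)
    gaze = np.asarray(gaze, dtype=float)

    # (b) average absolute evidence per item: full value while looked at,
    #     gamma-scaled value otherwise, weighted by gaze time
    A = gaze * values + (1.0 - gaze) * gamma * values

    # relative decision signals: each item's average evidence minus the
    # maximum of the remaining items' averages
    R_star = np.array([A[i] - np.max(np.delete(A, i)) for i in range(len(A))])

    # (c) logistic transform, maximally sensitive to small differences around zero
    R = 1.0 / (1.0 + np.exp(-tau * R_star))
    return R

# example: three items, the second rated highest and looked at most
print(glam_drifts(values=[2.0, 4.0, 3.0], gaze=[0.2, 0.5, 0.3], gamma=0.3, tau=1.0))
```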

Due to the availability of an analytical solution for the first-passage time distribution, the GLAM can be easily embedded in a (hierarchical) Bayesian framework.
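Concretely, the analytical solution follows the usual logic of independent race models: option i is chosen at time t if its accumulator crosses the boundary at t while all other accumulators have not yet crossed. Writing p_j and F_j for the first-passage density and cumulative distribution function of accumulator j (in the GLAM, each accumulator drifts linearly towards a fixed boundary), the likelihood of choosing option i at time t is

$$ f_i(t) = p_i(t) \prod_{j \neq i} \big(1 - F_j(t)\big). $$

Having this likelihood in closed form is what allows the model to be fitted without simulations.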

To make the GLAM more accessible to a wider audience of researchers, we have also created a simple Python toolbox, GLAMbox, which makes it easy to do the following (a brief usage sketch follows the list):

  • Apply the GLAM to data in a Bayesian framework, using individual or hierarchical (group-level) parameter estimation
  • Simulate and predict data
  • Perform likelihood-based model comparisons
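For a sense of what this looks like in practice, a basic fitting workflow with GLAMbox roughly follows the pattern below. This is a sketch paraphrased from the examples in the paper; the class and method names (gb.GLAM, make_model, fit, predict) appear there, but treat the exact arguments and the data format as assumptions and consult the toolbox documentation for the definitive interface.

```python
import pandas as pd
import glambox as gb

# Behavioural data with one row per trial: choice, response time, item
# ratings and gaze proportions for each alternative. The file name and
# column layout here are placeholders -- see the GLAMbox docs for the
# expected format.
data = pd.read_csv('example_data.csv')

# Build and fit an individual-level GLAM
# (argument values are illustrative, not a verified signature).
glam = gb.GLAM(data=data)
glam.make_model(kind='individual')
glam.fit()

# Simulate choices and response times from the fitted parameters,
# e.g. to compare model predictions against the observed data.
glam.predict()
```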