# adopy.tasks.cra

The choice under risk and ambiguity task (CRA; Levy et al., 2010) involves preferential choice decisions in which the participant is asked to indicate their preference between two options:

1. A fixed (or reference) option of either winning a fixed amount of reward ($$R_F$$, r_fix) with a fixed probability of 0.5 or winning none otherwise; and

2. A variable option of either winning a varying amount of reward ($$R_V$$, r_var) with a varying probability ($$p_V$$, p_var) and a varying level of ambiguity ($$A_V$$, a_var) or winning none otherwise.

Further, the variable option comes in two types:

1. risky type in which the winning probabilities are fully known to the participant; and

2. ambiguous type in which the winning probabilities are only partially known to the participant.

The level of ambiguity ($$A_V$$) is varied between 0 (no ambiguity and thus fully known) and 1 (total ambiguity and thus fully unknown).

References

Levy, I., Snell, J., Nelson, A. J., Rustichini, A., & Glimcher, P. W. (2010). Neural Representation of Subjective Value Under Risk and Ambiguity. Journal of Neurophysiology, 103 (2), 1036-1047.

class adopy.tasks.cra.TaskCRA

Bases: adopy.base._task.Task

The Task class for the choice under risk and ambiguity task (Levy et al., 2010).

Design variables
• p_var ($$p_V$$) - probability to win of a variable option

• a_var ($$A_V$$) - level of ambiguity of a variable option

• r_var ($$R_V$$) - amount of reward of a variable option

• r_fix ($$R_F$$) - amount of reward of a fixed option

Responses
• choice - 0 (choosing a fixed option) or 1 (choosing a variable option)

Examples

>>> from adopy.tasks.cra import TaskCRA
>>> task = TaskCRA()
>>> task.designs
['p_var', 'a_var', 'r_var', 'r_fix']
>>> task.responses
['choice']


## Model

class adopy.tasks.cra.ModelLinear

Bases: adopy.base._model.Model

The linear model for the CRA task (Levy et al., 2010).

\begin{split}\begin{align} U_F &= 0.5 \cdot (R_F)^\alpha \\ U_V &= \left[ p_V - \beta \cdot \frac{A_V}{2} \right] \cdot (R_V)^\alpha \\ P(V \,\text{over}\, F) &= \frac{1}{1 + \exp [-\gamma (U_V - U_F)]} \end{align}\end{split}
Model parameters
• alpha ($$\alpha$$) - risk attitude parameter ($$\alpha > 0$$)

• beta ($$\beta$$) - ambiguity attitude parameter

• gamma ($$\gamma$$) - inverse temperature ($$\gamma > 0$$)
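Given these equations, the linear model's choice probability can be sketched in plain Python. The function name `p_choose_variable` is illustrative and not part of the ADOpy API:

```python
import math

def p_choose_variable(p_var, a_var, r_var, r_fix, alpha, beta, gamma):
    """Linear-model probability of choosing the variable option."""
    u_fix = 0.5 * r_fix ** alpha                             # U_F = 0.5 * R_F^alpha
    u_var = (p_var - beta * a_var / 2) * r_var ** alpha      # U_V = [p_V - beta * A_V / 2] * R_V^alpha
    return 1.0 / (1.0 + math.exp(-gamma * (u_var - u_fix)))  # logistic choice rule
```

For instance, when the two utilities are equal the probability is exactly 0.5, and a positive beta makes the ambiguous option less attractive.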

References

Levy, I., Snell, J., Nelson, A. J., Rustichini, A., & Glimcher, P. W. (2010). Neural Representation of Subjective Value Under Risk and Ambiguity. Journal of Neurophysiology, 103 (2), 1036-1047.

Examples

>>> from adopy.tasks.cra import ModelLinear
>>> model = ModelLinear()
>>> model.task
Task('CRA', designs=['p_var', 'a_var', 'r_var', 'r_fix'], responses=[0, 1])
>>> model.params
['alpha', 'beta', 'gamma']

compute(choice, p_var, a_var, r_var, r_fix, alpha, beta, gamma)

Compute the log likelihood of obtaining the responses with the given designs and model parameters. The function provides the same result as the func argument given at initialization. If no likelihood function is given for the model, it returns the log probability of random noise.

Warning

Since version 0.4.0, the compute() function computes the log likelihood instead of the probability of a binary response variable, and it takes the response variables as arguments. These changes may break existing code written for previous versions of ADOpy.

Changed in version 0.4.0: Provide the log likelihood instead of the probability of a binary response.

class adopy.tasks.cra.ModelExp

Bases: adopy.base._model.Model

The exponential model for the CRA task (Hsu et al., 2005).

\begin{split}\begin{align} U_F &= 0.5 \cdot (R_F)^\alpha \\ U_V &= (p_V)^{1 + \beta \cdot A_V} \cdot (R_V)^\alpha \\ P(V \,\text{over}\, F) &= \frac{1}{1 + \exp [-\gamma (U_V - U_F)]} \end{align}\end{split}
Model parameters
• alpha ($$\alpha$$) - risk attitude parameter ($$\alpha > 0$$)

• beta ($$\beta$$) - ambiguity attitude parameter

• gamma ($$\gamma$$) - inverse temperature ($$\gamma > 0$$)
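As with the linear model, the exponential model's choice probability can be sketched in plain Python. The function name `p_choose_variable_exp` is illustrative and not part of the ADOpy API:

```python
import math

def p_choose_variable_exp(p_var, a_var, r_var, r_fix, alpha, beta, gamma):
    """Exponential-model probability of choosing the variable option."""
    u_fix = 0.5 * r_fix ** alpha                             # U_F = 0.5 * R_F^alpha
    u_var = p_var ** (1 + beta * a_var) * r_var ** alpha     # U_V = p_V^(1 + beta * A_V) * R_V^alpha
    return 1.0 / (1.0 + math.exp(-gamma * (u_var - u_fix)))  # logistic choice rule
```

Note that when a_var = 0 the exponent reduces to 1, so for unambiguous options this model coincides with the linear model.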

References

Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., & Camerer, C. F. (2005). Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making. Science, 310 (5754), 1680-1683.

Examples

>>> from adopy.tasks.cra import ModelExp
>>> model = ModelExp()
>>> model.task
Task('CRA', designs=['p_var', 'a_var', 'r_var', 'r_fix'], responses=[0, 1])
>>> model.params
['alpha', 'beta', 'gamma']

compute(choice, p_var, a_var, r_var, r_fix, alpha, beta, gamma)

Compute the log likelihood of obtaining the responses with the given designs and model parameters. The function provides the same result as the func argument given at initialization. If no likelihood function is given for the model, it returns the log probability of random noise.

Warning

Since version 0.4.0, the compute() function computes the log likelihood instead of the probability of a binary response variable, and it takes the response variables as arguments. These changes may break existing code written for previous versions of ADOpy.

Changed in version 0.4.0: Provide the log likelihood instead of the probability of a binary response.

## Engine

class adopy.tasks.cra.EngineCRA(model, grid_design, grid_param, **kwargs)

Bases: adopy.base._engine.Engine

The Engine class for the CRA task. It can only be used with TaskCRA.
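On each trial, the engine selects the design expected to be most informative about the model parameters and then updates the parameter posterior from the observed response. The stdlib-only sketch below illustrates that loop for the linear model; it is a simplified stand-in for ADOpy's implementation, not the library itself, and every grid value is an arbitrary illustration:

```python
import math
from itertools import product

# Illustrative design and parameter grids (all values are arbitrary choices)
designs = [dict(zip(('p_var', 'a_var', 'r_var', 'r_fix'), d))
           for d in product((0.25, 0.5, 0.75), (0.0, 0.5, 1.0), (10, 20, 40), (10,))]
params = [dict(zip(('alpha', 'beta', 'gamma'), p))
          for p in product((0.5, 1.0, 1.5), (-0.5, 0.0, 0.5), (0.5, 1.0, 2.0))]
log_post = [0.0] * len(params)  # log posterior over the parameter grid (uniform prior)

def p_var_choice(d, t):
    """Linear-model probability of choosing the variable option."""
    u_fix = 0.5 * d['r_fix'] ** t['alpha']
    u_var = (d['p_var'] - t['beta'] * d['a_var'] / 2) * d['r_var'] ** t['alpha']
    return 1.0 / (1.0 + math.exp(-t['gamma'] * (u_var - u_fix)))

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def get_design():
    """Pick the design maximizing mutual information between response and parameters."""
    weights = [math.exp(lp) for lp in log_post]
    z = sum(weights)
    weights = [w / z for w in weights]
    best, best_mi = None, -1.0
    for d in designs:
        ps = [p_var_choice(d, t) for t in params]
        marginal = sum(w * p for w, p in zip(weights, ps))
        mi = binary_entropy(marginal) - sum(w * binary_entropy(p)
                                            for w, p in zip(weights, ps))
        if mi > best_mi:
            best, best_mi = d, mi
    return best

def update(d, choice):
    """Bayesian update of the log posterior given an observed response (0 or 1)."""
    for i, t in enumerate(params):
        p = min(max(p_var_choice(d, t), 1e-12), 1.0 - 1e-12)  # clamp for log safety
        log_post[i] += math.log(p if choice == 1 else 1.0 - p)
```

With ADOpy itself, the corresponding loop uses the documented Engine methods: construct `EngineCRA(model, grid_design, grid_param)`, call `engine.get_design()` for the next trial, and call `engine.update(design, response)` after each observation.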