API endpoint for journal articles.

GET /api/articles/29703/?format=api
HTTP 200 OK
Allow: GET
Content-Type: application/json
Vary: Accept

{
    "pk": 29703,
    "title": "Devaluation of Unchosen Options: A Bayesian Account of the Provenance andMaintenance of Overly Optimistic Expectations",
    "subtitle": null,
    "abstract": "Humans frequently overestimate the likelihood of desirableevents while underestimating the likelihood of undesirableones: a phenomenon known as unrealistic optimism. Previ-ously, it was suggested that unrealistic optimism arises fromasymmetric belief updating, with a relatively reduced codingof undesirable information. Prior studies have shown that areinforcement learning (RL) model with asymmetric learningrates (greater for a positive prediction error than a negativeprediction error) could account for unrealistic optimism in abandit task, in particular the tendency of human subjects topersistently choosing a single option when there are multi-ple equally good options. Here, we propose an alternativeexplanation of such persistent behavior, by modeling humanbehavior using a Bayesian hidden Markov model, the Dy-namic Belief Model (DBM). We find that DBM captures hu-man choice behavior better than the previously proposed asym-metric RL model. Whereas asymmetric RL attains a measureof optimism by giving better-than-expected outcomes higherlearning weights compared to worse-than-expected outcomes,DBM does so by progressively devaluing the unchosen op-tions, thus placing a greater emphasis on choice history inde-pendent of reward outcome (e.g. an oft-chosen option mightcontinue to be preferred even if it has not been particularly re-warding), which has broadly been shown to underlie sequentialeffects in a variety of behavioral settings. Moreover, previouswork showed that the devaluation of unchosen options in DBMhelps to compensate for a default assumption of environmentalnon-stationarity, thus allowing the decision-maker to both bemore adaptive in changing environments and still obtain near-optimal performance in stationary environments. Thus, thecurrent work suggests both a novel rationale and mechanismfor persistent behavior in bandit tasks.",
    "language": "eng",
    "license": {
        "name": "",
        "short_name": "",
        "text": null,
        "url": ""
    },
    "keywords": [
        {
            "word": "unrealistic optimism; decision making; multi-armed bandit; reinforcement learning; Bayesian modeling"
        }
    ],
    "section": "Poster Session 1",
    "is_remote": true,
    "remote_url": "https://escholarship.org/uc/item/4jj2g5w1",
    "frozenauthors": [
        {
            "first_name": "Corey",
            "middle_name": "Yishan",
            "last_name": "Zhou",
            "name_suffix": "",
            "institution": "University of California, San Diego",
            "department": ""
        },
        {
            "first_name": "Dalin",
            "middle_name": "",
            "last_name": "Guo",
            "name_suffix": "",
            "institution": "University of California, San Diego",
            "department": ""
        },
        {
            "first_name": "Angela",
            "middle_name": "J.",
            "last_name": "Yu",
            "name_suffix": "",
            "institution": "University of California, San Diego",
            "department": ""
        }
    ],
    "date_submitted": null,
    "date_accepted": null,
    "date_published": "2020-01-01T18:00:00Z",
    "render_galley": null,
    "galleys": [
        {
            "label": "PDF",
            "type": "pdf",
            "path": "https://journalpub.escholarship.org/cognitivesciencesociety/article/29703/galley/19560/download/"
        }
    ]
}
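A minimal sketch of consuming this payload client-side: pulling a readable author list from `frozenauthors` and the PDF link from `galleys`. The dict below is a trimmed copy of the response above (abstract and other fields omitted for brevity); the field names match the JSON exactly, but the helper `format_author` is illustrative, not part of the API.

```python
# Trimmed copy of the article payload shown above.
article = {
    "pk": 29703,
    "date_published": "2020-01-01T18:00:00Z",
    "frozenauthors": [
        {"first_name": "Corey", "middle_name": "Yishan", "last_name": "Zhou", "name_suffix": ""},
        {"first_name": "Dalin", "middle_name": "", "last_name": "Guo", "name_suffix": ""},
        {"first_name": "Angela", "middle_name": "J.", "last_name": "Yu", "name_suffix": ""},
    ],
    "galleys": [
        {
            "label": "PDF",
            "type": "pdf",
            "path": "https://journalpub.escholarship.org/cognitivesciencesociety/article/29703/galley/19560/download/",
        },
    ],
}

def format_author(a):
    # Join the non-empty name parts in their natural order;
    # middle_name and name_suffix may be empty strings in this API.
    parts = [a["first_name"], a["middle_name"], a["last_name"], a["name_suffix"]]
    return " ".join(p for p in parts if p)

authors = ", ".join(format_author(a) for a in article["frozenauthors"])
pdf_url = next(g["path"] for g in article["galleys"] if g["type"] == "pdf")

print(authors)   # Corey Yishan Zhou, Dalin Guo, Angela J. Yu
print(pdf_url)
```

Note that `galleys` is a list, so filtering on `type == "pdf"` (rather than taking index 0) keeps the lookup correct if other render formats are ever added.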