Assessing Three Sources of Misresponse to Reversed Likert Items

Material type: Article | Language: English | Series: XLV | Publication details: Feb 2008 | Edition: 1 | Description: pp. 116-131 | DDC classification: Swa
Summary: Data collected through multi-item Likert scales that contain reversed items often exhibit problems, such as unexpected factor structures and diminished scale reliabilities. These problems arise when respondents select responses on the same side of the scale neutral point for both reversed and nonreversed items, a phenomenon the authors call misresponse. Across four experiments and an exploratory study using published data, the authors find that misresponse to reversed Likert items averaged approximately 20%, twice the level identified as problematic in previous simulation studies. Counter to prevailing thought, the patterns of misresponse and response latency across manipulated items could not be attributed to respondent inattention or acquiescence. Instead, the pattern supports an item verification difficulty explanation, which holds that task complexity, and thus misresponse and response latency, increases with the number of cognitive operations required for a respondent to compare a scale item with his or her belief. The observed results are well explained by the constituent comparison model.
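The summary's operational definition of misresponse (answering on the same side of the scale's neutral point for both a reversed and a nonreversed item) can be sketched in code. This is an illustrative example, not from the article; the 7-point scale, neutral point of 4, and all data are assumptions.

```python
# Illustrative sketch: flag "misresponse" as defined in the summary above --
# a respondent answering on the same side of the neutral point for both a
# reversed and a nonreversed item. Assumes a 7-point scale with neutral = 4.

NEUTRAL = 4

def same_side(a: int, b: int, neutral: int = NEUTRAL) -> bool:
    """True if both responses fall strictly on the same side of neutral."""
    return (a - neutral) * (b - neutral) > 0

def misresponse_rate(pairs) -> float:
    """Fraction of (nonreversed, reversed) response pairs flagged as misresponse."""
    flagged = sum(same_side(nonrev, rev) for nonrev, rev in pairs)
    return flagged / len(pairs)

# Hypothetical data: (nonreversed, reversed) responses from five respondents.
pairs = [(6, 2), (7, 6), (5, 1), (2, 6), (6, 5)]
print(misresponse_rate(pairs))  # (7, 6) and (6, 5) are flagged -> 0.4
```

A consistent respondent who agrees with a nonreversed item should disagree with its reversal, so same-side pairs are the anomaly the article studies.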

