Scale analysis (statistics)




In statistics, scale analysis is a set of methods for analyzing survey data, in which responses to questions are combined to measure a latent variable. These items can be dichotomous (e.g. yes/no, agree/disagree, correct/incorrect) or polytomous (e.g. disagree strongly/disagree/neutral/agree/agree strongly). Any measurement for such data should be reliable, valid, and homogeneous, with comparable results across different studies.




Contents






  • 1 Constructing scales


  • 2 Measurement models


    • 2.1 Traditional models


    • 2.2 Modern models based on item response theory


    • 2.3 Other models




  • 3 References





Constructing scales


The item-total correlation approach is a way of identifying a group of questions whose responses can be combined into a single measure or scale. It is a simple approach that works by ensuring that, across the whole population, responses to the questions in the group tend to vary together and, in particular, that no individual question's responses are poorly related to the average calculated from the others.
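As an illustration of the approach above, a corrected item-total correlation compares each item with the sum of the remaining items; items with a low correlation are candidates for removal. This is a minimal sketch, not code prescribed by the article, and the function names are illustrative:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(responses):
    """responses: one list of item scores per respondent.
    Returns, for each item, its correlation with the sum of the
    *other* items (the 'corrected' item-total correlation)."""
    n_items = len(responses[0])
    result = []
    for i in range(n_items):
        item = [r[i] for r in responses]
        rest = [sum(r) - r[i] for r in responses]  # exclude item i
        result.append(pearson(item, rest))
    return result
```

For example, with three items where the first two vary together and the third does not, the third item's corrected item-total correlation will be near zero, flagging it as poorly related to the rest of the scale.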



Measurement models


Measurement is the assignment of numbers to objects in such a way that the relations between the objects are represented by the relations between the numbers (Michell, 1990).



Traditional models



  • Likert scale

  • Semantic differential (Osgood) scale


  • Reliability analysis, see also Classical test theory and Cronbach's alpha

  • Factor analysis
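Cronbach's alpha, listed above as part of reliability analysis, estimates internal consistency from the item variances and the variance of the total score: α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ) for k items. A minimal sketch (illustrative, not code from the article):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: one list of k item scores per respondent.
    Returns Cronbach's alpha using population variances."""
    k = len(responses[0])
    item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
    total_var = pvariance([sum(r) for r in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Values closer to 1 indicate that the items vary together and thus plausibly measure the same latent variable.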



Modern models based on item response theory



  • Guttman scale

  • Mokken scale

  • Rasch model

  • (Circular) Unfolding analysis

  • Circumplex model
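For the dichotomous Rasch model listed above, the probability of a correct response depends only on the difference between a person's ability θ and the item's difficulty b: P(X=1) = exp(θ−b) / (1 + exp(θ−b)). A minimal sketch (the function name is illustrative):

```python
from math import exp

def rasch_prob(theta, b):
    """Probability that a person with ability theta answers an item
    of difficulty b correctly, under the dichotomous Rasch model."""
    return 1.0 / (1.0 + exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5, and it rises toward 1 as ability exceeds difficulty.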



Other models



  • Latent class analysis

  • Multidimensional scaling

  • NOMINATE (scaling method)



References


  • Michell, J. (1990). An Introduction to the Logic of Psychological Measurement. Hillsdale, NJ: Lawrence Erlbaum Associates.


