Evaluation of computerized tests applying classical test theory and item response theory

Authors

  • César Higinio Menacho Chiok Facultad de Economía y Planificación, Universidad Nacional Agraria La Molina, 15024, Lima, Perú. https://orcid.org/0000-0003-1310-2551
  • Jesús María Cano Alva Trinidad Facultad de Economía y Planificación, Universidad Nacional Agraria La Molina, 15024, Lima, Perú.

DOI:

https://doi.org/10.21704/ac.v81i2.1638

Keywords:

Computerized tests, classical test theory, item response theory, binary logistic models, test calibration.

Abstract

The objective of this study was to evaluate the reliability and validity of web-based computerized tests by measuring their psychometric and statistical properties under Classical Test Theory (CTT) and Item Response Theory (IRT). The CTT methodology was applied to assess the difficulty and discrimination of the test and its items, and the data were fitted to the one-, two- and three-parameter binary logistic IRT models. A computerized test of 30 questions was administered to 775 students enrolled in the Basic Statistics course in the 2016-II semester. The results indicated good test reliability, with a Cronbach's alpha of 0.833, corroborated by a correlation of 0.815. Under CTT, the difficulty index identified three very easy questions (V7, V8 and V12), and the discrimination index flagged no question for removal. The unidimensionality assumption was tested with factor analysis, the first factor explaining 24.7% of the variance. The three-parameter binary logistic IRT model (3PL) fitted the data best. In the calibration process with the 3PL model, question V28 was withdrawn (discrimination index greater than 0.65), as were V8, V12, V16 and V18 (chance index greater than 0.4); none were withdrawn on the difficulty index.
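The two quantities the abstract leans on, Cronbach's alpha for reliability and the 3PL item characteristic curve with discrimination a, difficulty b and chance (guessing) c, can be sketched in a few lines of Python. This is an illustrative sketch only, not the software the authors used, and the function names are ours:

```python
import math

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix of 0/1 item scores.

    scores: list of rows, one row of item scores per examinee.
    """
    k = len(scores[0])          # number of items
    def var(xs):                # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def p_3pl(theta, a, b, c):
    """3PL model: probability of a correct answer at ability theta,
    with discrimination a, difficulty b and guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))
```

At theta = b the 3PL curve passes through c + (1 - c)/2, which is why items with a chance index c above 0.4 (as in the calibration described above) are poor: even low-ability examinees answer them correctly too often.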


References

• Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716-723.

• Averaño, B.L. (2003). Teoría de Respuesta al Ítem. Otra alternativa para la Medición y Evaluación. Suma Psicológica, 10(2), 235-245.

• Bulut, O. (2015). Applying Item Response Theory Models to Entrance Examination for Graduate Studies: Practical Issues and Insights. Journal of Measurement and Evaluation in Education and Psychology, 6(2), 313-330.

• Carmines, E.G., & Zeller, R.A. (1979). Reliability and Validity Assessment. https://dx.doi.org/10.4135/9781412985642

• Davey, T. (2005). Computer-based testing. Encyclopedia of statistics in behavioral science. ISBN: 978-0-470-86080-9

• Fan, X. (1998). Item response theory and classical test theory: an empirical comparison of their item/person statistics. Educational and Psychological Measurement, 58(3), 357-382.

• George, D., & Mallery, P. (1995). SPSS/PC+ step by step: A simple guide and reference. 11.0 update (4th ed.). Boston: Allyn & Bacon.

• Gonzáles, J., Cabrera, E., Montenegro, E., Nettle, A., & Guevara, M. (2010). Condicionamiento del modelo logístico para la evaluación informatizada de competencias matemáticas. Ciencia, Docencia y Tecnología, 41, 173-191.

• Lord, F.M. (1952). A theory of test scores. Psychometric Monographs, No. 7.

• Martínez, D. (1990). Psicometría: Teoría de los Tests Psicológicos y Educativos. Ed. Pirámide.

• Muñiz, J. (2010). Las Teorías de los Tests: Teoría Clásica y Teoría de Respuesta a los Ítems. Papeles del Psicólogo, 31(1), 57-66.

• Muñiz, J., & Hambleton, R.K. (1992). Medio siglo de teoría de respuesta a los ítems. Anuario de Psicología, 52(1), 41-66.

• Navas, M.S. (1994). Teoría Clásica de los Test versus Teoría de Respuesta al ítem. Psicología 15. UNED, Madrid, pp. 175-208.

• Olea, J., Ponsoda, V., & Prieto, G. (1999). Tests Informatizados: Fundamentos y Aplicaciones. Colección Psicología. Madrid, España: Ed. Pirámide.

• Omobola, O. A., & Adedoyin, J. A. (2013). Assessing the comparability between classical test theory (CTT) and item response theory (IRT) models in estimating test item parameters. Herald Journal of Education and General Studies, 2 (3), 107-114.

• Progar, S., Socan, G., & Pec, M. (2008). An empirical comparison of Item Response Theory and Classical Test Theory. Psihološka obzorja / Horizons of Psychology, 17(3), 5-24.

• Reckase, M.D. (1979). Unifactor latent trait models applied to multifactor tests: Results and implications. Journal of Educational Statistics, 4(3), 207-230.

• Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461-464.

• Spearman, C. (1913). Correlations of sums and differences. British Journal of Psychology, 5, 417-426.

• Sudol, L., & Studer, C. (2010). Analyzing test items: Using item response theory to validate assessments. ACM, pp. 436-440.

• Zanon, C., Hutz, C., Yoo, H., & Hambleton, R. (2016). An application of item response theory to psychological test development. 18-29.

• Zwick, R. (1987). Assessing the dimensionality of NAEP reading data. Journal of Educational Measurement, 293-308.

Published

2020-12-30

Issue

Section

Original articles / Business, Management and Accounting

How to Cite

Menacho Chiok, C. H., & Alva Trinidad, J. M. C. (2020). Evaluation of computerized tests applying classical test theory and item response theory. Anales Científicos, 81(2), 278-288. https://doi.org/10.21704/ac.v81i2.1638