Open Access
SHS Web Conf.
Volume 149, 2022
International Conference on Social Science 2022 “Integrating Social Science Innovations on Post Pandemic Through Society 5.0” (ICSS 2022)
Article Number 01033
Number of page(s) 5
Section Education and Digital Learning
Published online 18 November 2022
  1. Azisah, Sapti Wahyuningsih, Penggunaan Model Rasch untuk Analisis Instrumen Tes pada Mata Kuliah Matematika Aktuaria, vol. 3, (2020), pp. 45–50.
  2. I Natsir, A Munfarikhatin, AR Taufik, Development of Student Worksheet Based on Blended Learning Oriented to Multiple Intelligences in Algebra, International Joint Conference on Social Science, vol. 473, (2021), pp. 438–442.
  3. Ratna Wulan, E. & Rasdiana, A., Evaluasi Pembelajaran, Pustaka Setia, (2015).
  4. Dunn, L., Morgan, C., O’Reilly, M., & Parry, S., The student assessment handbook: New directions in traditional and online assessment, Routledge, (2003).
  5. Thissen, D., Nelson, L., Rosa, K., et al., Item Response Theory for Items Scored in More than Two Categories, in D. Thissen & H. Wainer (Eds.), Test Scoring, New Jersey: Lawrence Erlbaum Associates Publishers, (2001), pp. 141–184.
  6. Rasch, G., Probabilistic models for some intelligence and attainment tests, Chicago, IL: University of Chicago Press, (1980).
  7. Rozeha, A. R., Azami, Z. & Mohd Saidfudin, M., Application of Rasch Measurement in Evaluation of Learning Outcomes: A Case Study in Electrical Engineering, Regional Conference on Engineering Mathematics, Mechanics, Manufacturing & Architecture, (2007).
  8. Ashraf, Z.A., & Jaseem, K., Classical and modern methods in item analysis of test tools, International Journal of Research and Review, vol. 7, (2020), pp. 397–403.
  9. Meyer, J.P., & Zhu, Shi, Fair and equitable measurement of student learning in MOOCs: an introduction to item response theory, scale linking, and score equating, Journal of Research and Practice in Assessment, vol. 8, (2013), pp. 26–39.
  10. Yilmaz, H.B., A comparison of IRT model combinations for assessing fit in a mixed format elementary school science test, International Electronic Journal of Elementary Education, vol. 11, (2019), pp. 539–545.
  11. Fan, X., Item response theory and classical test theory: An empirical comparison of their item/person statistics, Educational and Psychological Measurement, vol. 58, (1998), pp. 357–381.
  12. Magno, C., Demonstrating the difference between classical test theory and item response theory using derived test data, The International Journal of Educational and Psychological Assessment, vol. 1, (2009), pp. 1–11.
  13. Bichi, A.A., Classical test theory: An introduction to linear modelling approach to test and item analysis, International Journal for Social Studies, vol. 2, pp. 27–33.
  14. Rezaee, R., Shafiayan, M., Jafari, P., & Zarifsanaiey, N., Invariance of item difficulty parameter estimates based on classical test theory and item response theory, Journal of Advanced Pharmacy Education and Research, vol. 8, pp. 156–161.
  15. Anastasi, A. & Urbina, S., Psychological testing, New York: Prentice Hall, (2002).
  16. Aziz, R., Aplikasi model Rasch dalam pengujian alat ukur kesehatan mental di tempat kerja, Psikoislamika: Jurnal Psikologi dan Psikologi Islam, vol. 12, pp. 29–39.
  17. Hambleton, R.K., Swaminathan, H., & Rogers, H.J., Fundamentals of item response theory, London: Sage Publications, (1991).
  18. Sumintono, B., & Widhiarso, W., Aplikasi pemodelan Rasch pada assessment pendidikan, Cimahi: Trim Komunikata, (2015).
  19. Linacre, J.M., A user’s guide to WINSTEPS, Chicago, IL: Winsteps, (2009).
  20. Boone, W.J., Staver, J.R., & Yale, M.S., Rasch analysis in the human sciences, Dordrecht: Springer, (2014).
  21. Meijer, R.R., Person-fit research: an introduction, Applied Measurement in Education, vol. 9, (1996), pp. 3–8.
  22. Karabatsos, G., Comparing the aberrant response detection performance of thirty-six person-fit statistics, Applied Measurement in Education, vol. 16, (2003), pp. 277–298.
  23. Meijer, R.R., & Sijtsma, K., Person-fit statistics: What is their purpose?, Rasch Measurement Transactions, vol. 15, (2001).