Examining Accessibility, Credibility, and Accountability in Digital Assessment: A Systematic Literature Review

Authors

  • Richard Mulenga ZCAS University, Dedan Kimathi Road, Box 35243, Lusaka, Zambia https://orcid.org/0009-0003-0065-7432
  • Allan Mwenya Examinations Council of Zambia (ECZ), Box 504432, Lusaka, Zambia

DOI:

https://doi.org/10.54536/ajiri.v3i3.2908

Keywords:

Digital Assessments, Assistive Technologies, Algorithmic Bias, Post-COVID-19 Era

Abstract

This paper examines the accessibility, credibility, and accountability of digital assessments through a systematic literature review. Although the extant literature is replete with studies on educational digital assessments, no systematic review to date appears to have re-examined the accessibility, credibility, and accountability of digital assessments across the pre- and post-COVID-19 periods. The growing ubiquity of digital assessments in academic and professional contexts, especially in the post-COVID-19 era, makes such a review necessary. The main finding of this study is that, despite their growing ubiquity after the COVID-19 crisis, digital assessments seem deficient in employing assistive technologies. Additionally, the sudden migration to digital assessments poses the challenge of maintaining standards of validity and reliability commensurate with traditional in-person assessments. Going forward, therefore, we recommend extensive integration of assistive technologies in academic and professional digital assessments to enhance accessibility. Digital assessing authorities should also establish mechanisms for detecting cheating and plagiarism in digital assessments. This study underscores the overarching need for holistic approaches that balance technological innovation with ethical imperatives in digital assessments, and it contributes to a 21st-century understanding of the complex, dynamic digital assessment landscape.


References

Anselimus, S. M. (2023). Assistive technologies and participation of students with visual impairments in extra-curricular activities - What does the literature say? American Journal of Interdisciplinary Research and Innovation, 2(4), 67–73. https://doi.org/10.54536/ajiri.v2i4.2155

Avgerou, C. (2003). The link between ICT and economic growth in the discourse of development. In S. D. D. Ciborra (Ed.), From control to drift: The dynamics of corporate information infrastructures (pp. 90–107). Oxford University Press.

Bachman, B., & Farrand, L. (2010). Against algorithmic discrimination. Annals of the New York Academy of Sciences, 1366(1), 165-180.

Barbour, M. K., LaBonte, R., Hodges, C., Moore, S., Lockee, B. B., Trust, T., ... & Kelly, K. (2020). Understanding pandemic pedagogy: Differences between emergency remote, remote, and online teaching. Journal of Educational Technology Systems, 49(1), 9-36.

Barnard, L., & Lan, W. Y. (2015). Digital assessment literacy: Supporting students’ ability to use digital tools effectively. The Journal of General Education, 64(4), 218–234.

Bennett, R. E. (2011). Accountability in education: Meeting the challenge of measuring complex learning. Information Age Publishing.

Brown, A., Smith, B., & Johnson, C. (2018). Enhancing accessibility in digital assessments: Best practices for inclusive design. Journal of Educational Technology, 25(3), 321–335.

Brown, S. (2019). Enhancing accessibility in digital assessments. Journal of Educational Technology Systems, 48(2), 232–245.

Burgstahler, S. (2020). Universal design in assessment. In L. M. West & L. A. Lewis (Eds.), Handbook of research on multidisciplinary approaches to disability and equity in digital environments (pp. 198-213). IGI Global.

CAST. (2018). Universal design for learning guidelines version 2.2. Retrieved from http://udlguidelines.cast.org

Cascio, W. F., & Aguinis, H. (2018). Content validity in employment selection: Review, recommendations, and future research. Research in Organizational Behavior, 38(1), 209-241.

Chapman, A. (2018). Accessibility in online assessments: A guide for educators. University of Southampton.

Cook, A. M., & Polgar, J. M. (2015). Assistive Technologies: Principles and Practice (4th ed.). Mosby.

Dempere, J., Modugu, K., Hesham, A., & Ramasamy, L. K. (2023). The impact of ChatGPT on higher education. Frontiers in Education, 8, Article 1206936. https://doi.org/10.3389/feduc.2023.1206936

Eaton, J. S. (2019). Rethinking assessment in a digital world: The ethics of validity, bias, and fairness. Educational Researcher, 48(8), 566–577.

Edyburn, D. L., Higgins, S., & Judson, E. (2010). Universal design for learning: Theory and practice. Association for Supervision and Curriculum Development.

Edyburn, D. L. (2010). Assistive Technology and Mild Disabilities. In J. Sanchez (Ed.), Technology-Supported Environments for Personalized Learning: Methods and Case Studies (pp. 21–45). IGI Global.

Educational Testing Service. (2023). Protecting the integrity of our tests. https://www.ets.org/

Elish, M. C., & Elish, K. C. (2018). Rethinking algorithmic fairness for criminal justice. Law and Contemporary Problems, 81(2), 309–342.

Epstein, D., Caliskan, A., & Bryson, J. (2019). Fairness in machine learning. arXiv preprint arXiv:1908.09532.

Ercikan, K., & Jin, Y. (2020). Challenges and opportunities of multilingual assessment. Educational Measurement: Issues and Practice, 39(1), 53-61.

Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: A randomized controlled trial. JMIR Mental Health, 5(4), e64. https://doi.org/10.2196/mental.9782

García, M., & López, R. (2020). Ensuring accountability in digital assessments: Strategies for educational institutions. International Journal of Educational Administration, 12(2), 145-158.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).

Gregson, J. (2017). Blockchain technology for education: Unpacking the hype. EDUCAUSE Review, 52(2), 14–20.

Haleem, Y., & Ditsa, E. M. G. (2024). Assessing the relationship between technological factors and the implementation of human resource information system: A survey in the municipal, metropolitan, and district assemblies in the Upper West Region of Ghana. American Journal of Interdisciplinary Research and Innovation, 3(2), 7–29. https://doi.org/10.54536/ajiri.v3i2.2594

Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The Difference Between Emergency Remote Teaching and Online Learning. Educause Review. Retrieved from https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning.

International Association of Privacy Professionals. (2023). Data privacy essentials. https://iapp.org/

Johnson, D., & Wang, L. (2019). Evaluating the credibility of digital assessments: A review of current practices. Assessment and Evaluation in Higher Education, 44(4), 567-582.

Kirkup, G. (2019). Technology and inequality: Concentrating on digital divisions. Information, Communication & Society, 22(2), 161–175.

Koehler, M. J., & Mishra, P. (2008). Introducing TPCK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TPCK) for educators (pp. 3–29). Routledge.

Lang, J. M., & Etzkowitz, H. (2018). The impact of online testing on higher education: A review of the literature. Educational Assessment, 23(3), 182-210.

Lang, J. M., Wheatcroft, J., & Wheeler, A. R. (2021). Examining the psychometric properties of online professional assessments during a pandemic. International Journal of Testing, 11(2), 1-16. https://doi.org/10.1080/1530508X.2021.1879223

Lane, S., & Buckingham, J. (2021). Algorithmic fairness in educational assessment: Challenges and recommendations. Educational Researcher, 50(2), 138–146. https://doi.org/10.3102/0013189X21989342

Lee, S., & Kim, J. (2021). Validity and reliability of digital assessments: A meta-analysis. Educational Research Review, 36, 1-15.

Li, C., & Lalani, F. (2020). The COVID-19 pandemic has changed education forever. This is how. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/

Liu, M., & Zhao, Y. (2018). Language-related differential item functioning on a computerized adaptive test for non-native English speakers. Educational and Psychological Measurement, 78(3), 418–436.

Martínez, P., & Pérez, J. (2022). Addressing cheating and plagiarism in digital assessments: Strategies and considerations. Journal of Academic Integrity, 10(1), 78-92.

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Moss, C., Shah, A., & Brookhart, S. M. (2019). Advancing formative assessment practices in the digital age. Educational Assessment, 24(2), 132–152.

National Center for Fair & Open Testing (Fair Test). (2023). Bias in testing. https://fairtest.org/racebias/

Norris, D. F., & Lopez, K. A. (2016). The digital divide: Fact and fiction. Annual Review of Sociology, 42(1), 743-767.

Pascu, M., Mavroeidi, E., & Oliveira (2019). Assessment essentials: Understanding student learning. Pearson Education Limited.

Pickard, M., Schuetzler, R., Valacich, J., & Wood, D. (2017). Next-generation accounting interviewing: A comparison of human and embodied conversational agents (ECAs) as interviewers. SSRN Electronic Journal, April, 1–21. https://doi.org/10.2139/ssrn.2959693

Popham, W. J. (2009). Modern educational assessment: Principles, practices, and applications. Pearson Education.

Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Association for Supervision and Curriculum Development.

Sherlock Center. (2009). Rhode Island modified UDL educator checklist—Version 1.2. Providence: Rhode Island College. http://www.ric.edu/sherlockcenter/udl/udleducatorchecklist.pdf

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.

Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.

Slepankova, M. (2021). Possibilities of artificial intelligence in education: An assessment of the role of AI Chatbots as a communication medium in higher education (dissertation). Available at: http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-108427

Smith, A., Lang, C., & Balser, M. (2020). A review of accessibility considerations in digital assessments.

Smith, C. (2023). GPT-5 might make ChatGPT indistinguishable from a human. BGR. Available at: https://bgr.com/tech/gpt-5-might-make-chatgpt-indistinguishable-from-a-human/

Society for Information Technology & Teacher Education. (2023). ISTE standards for students, educators, and administrators. https://iste.org/standards/students

Stufflebeam, D. L. (2000). The seven evaluation continents: A comprehensive guide to educational and program evaluation. Kluwer Academic Publishers.

Thach, L., & Woodman, R. (1994). Organizational change and information technology: Managing on the edge of cyberspace. Organizational Dynamics, 23, 30-46.

Thompson, A., & Mishra, P. (2007–2008). Breaking news: TPCK becomes TPACK! Journal of Computing in Teacher Education, 24(2), 38–64.

Thompson, C. (2020). Algorithmic bias and the digital divide in education: A call for critical research and design. Learning, Media and Technology, 45(3),425–440. https://doi.org/10.1007/s40593-021-00285-9

Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.

Universal Design for Learning Center. (2023). CAST. https://udlguidelines.cast.org/

UNESCO. (2021). Global partnership strategy for early childhood, 2021-2030 [Report].

Usher, F. (2018). Insights into students’ experiences with e-assessment. Computers & Education, 116, 190–202.

Voss, E., Cushing, S. T., Ockey, G. J., & Yan, X. (2023). The use of assistive technologies including generative AI by test takers in language assessment: A debate of theory and practice. Language Assessment Quarterly, 20(4-5), 520-532.

Warschauer, M. (2019). Technology and social inclusion: Rethinking the digital divide. MIT Press.

Web Accessibility Initiative, WAI. (2018). Web content accessibility guidelines (WCAG) overview. World Wide Web Consortium. https://www.w3.org/WAI/

Wilson, M. (2009). The design of assessment instruments in education and training. SAGE Publications Ltd.

Xiao, Z., Zhou, M. X., Chen, W., Yang, H., & Chi, C. (2020). If I hear you correctly: Building and evaluating interview Chatbots with active listening skills. In CHI ‘20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–14).

Yadav, A., & Shavelson, R. J. (2017). Digital technology and student cognitive development: The neuroscience of the university classroom. In R. J. Shavelson (Ed.), Measuring cognitive development in the digital age (pp. 43–63).

Published

2024-08-08

How to Cite

Examining Accessibility, Credibility, and Accountability in Digital Assessment: A Systematic Literature Review. (2024). American Journal of Interdisciplinary Research and Innovation, 3(3), 11-20. https://doi.org/10.54536/ajiri.v3i3.2908
