International Steering Committee

Dr. Kunle Ayanwale (Vice President ACATA); University of Johannesburg, South Africa (Southern Africa)

Dr Mohammad Ramdan; Fayoum University, Egypt (North Africa)

Dr Bachir Adda; University of Mustapha Stambouli, Algeria (North Africa)

Victor Peter Nkungula; Malawi Adventist University, Malawi (East/Central Africa)

Dr Eric Asare; West African Examinations Council, Ghana (West Africa)

Dr Andrews Cobbinah; University of Cape Coast, Cape Coast, Ghana (West Africa)

National Steering Committee

Dr Maxwell E. Uduafemhe; National Examinations Council (NECO), Nigeria

Dr Esther Bamidele; National Examinations Council (NECO), Nigeria

Dr Yusuf Olayinka Shogbesan; Al-Hikmah University, Kwara State, Nigeria

Dr Simeon Ariyo; University of Ibadan, Oyo State, Nigeria

Local Organising Committee

Dr. Mayowa O. Ogunjimi (LOC Chairperson), Faculty of Education, University of Ilorin, Nigeria. Email:

Dr Dorcas S. Daramola, Faculty of Education, University of Ilorin, Nigeria. Email:

Dr Mohammed I. Jimoh, Faculty of Education, University of Ilorin, Nigeria. Email:

Dr Jumoke I. Oladele (Conference LOC Secretary), Faculty of Education, University of Ilorin, Nigeria. Phone: +2348060226110


Arrival/General Information/Venues

This is a hybrid conference with members from West, East, Southern, and North Africa. The venue for the physical gathering is the University of Ilorin Main Auditorium, University of Ilorin, Nigeria.



DAY TWO (Opening Ceremony)

Registration of Participants: 8.00-9.00 a.m.
Full Papers Sessions: 9.00-10.00 a.m.

Vice-Chancellor's Speech: Professor Wahab O. Egbewole, Chief Host


Goodwill message: Dean, Faculty of Education, Prof. L.A. Yahaya, Host

It is with great pleasure that I take this opportunity to extend my warmest goodwill to all attendees of this august occasion, the Conference of the Association of Computer Adaptive Testing in Africa, themed “Computer Adaptive Testing (CAT) for Standardised Assessment and Implications for the Fourth Industrial Revolution in Africa” and hosted by the Faculty of Education, the mother of all Faculties of this great citadel of learning!

In a fast-changing world where knowledge is a constant guiding light, our collective pursuit of wisdom and truth has never been more crucial. As scholars and educators, we are bound by our passion for learning, our commitment to discovery, and our dedication to the dissemination of knowledge. It is this shared pursuit that unites us in our academic endeavours and this conference.

Despite the challenges and uncertainties that often accompany the academic journey, the resilience, dedication, and unwavering commitment displayed by our community continue to inspire. Whether you are engaged in groundbreaking research, nurturing the minds of future generations, or contributing to the advancement of your field, your efforts are not only commendable but also pivotal in shaping the future.

The Fourth Industrial Revolution (4IR) is a time of technology-driven change and transition, and the academic community stands as a beacon of hope, fostering innovation, promoting diversity of thought, and nurturing the intellectual curiosity that propels society forward. Together, we must embrace new technologies such as Computer Adaptive Testing while championing the values of academic freedom and open discourse. I am particularly interested in educational assessment for accurate psychological testing, and I hope that discussions at this conference will shed some light in this direction.

In the spirit of goodwill, let us remember the strength of our collective knowledge and the profound impact that we can achieve when we collaborate and support one another, and continue to raise the bar of excellence in our academic pursuits.

In closing, I want to express my sincere appreciation for the incredible work of the local organising committee, made up of the Measurement and Evaluation Unit in the Department of Social Sciences Education of the Faculty. I also want to appreciate all other officials of this conference and the conference participants attending both physically and virtually. May your endeavours be met with success, your intellectual journeys be rewarding, and your contributions continue to shape a brighter future for us all.

Do accept my warmest wishes for your continued success in all your academic pursuits and have a productive conference.

With my heartfelt goodwill.

Welcome Message 2 - Prof. I.O.O. Amali, HOD, SSE DEPT. (ACATA SECRETARIAT)

Ladies and gentlemen, esteemed scholars, researchers, and distinguished guests, it gives me the greatest pleasure to give this speech in my capacity as Head of the Department of Social Sciences Education, Faculty of Education, University of Ilorin, where the Secretariat of the Association of Computer Adaptive Testing in Africa (ACATA) is presently housed and managed by the Educational Research, Measurement, and Evaluation Unit of the Department.

It is in this spirit that I welcome you to the First Biennial ACATA Conference – a gathering of brilliant minds and a celebration of academic excellence. As we convene here today, we embark on a journey of discovery, innovation, and collaboration that will shape the future of educational assessment, which is a major responsibility in the educational terrain and across various disciplines.

This conference is a testament to the power of knowledge, the enduring spirit of inquiry, and the boundless potential of human intellect. We have gathered from every corner of the globe, representing diverse cultures, backgrounds, and disciplines. Together, we form a mosaic of intellectual curiosity and creativity, united by our shared commitment to advancing the frontiers of knowledge in digital educational assessments.

Over the next few days, the conference will serve as a crucible of ideas, where paradigms will be challenged and new perspectives forged. As academics, our collective mission is to push the boundaries of human knowledge and to seek solutions to some of the most pressing challenges of our time.

The theme of this conference, “Computer Adaptive Testing (CAT) for Standardised Assessment and Implications for the Fourth Industrial Revolution in Africa”, has been carefully chosen to reflect the urgency and complexity of the issues we face with digital educational assessment and the fast-changing realities of the Fourth Industrial Revolution. It is a call to action, a summons to engage with the most profound questions of our era. It reminds us that the pursuit of knowledge is not a solitary endeavour but a collaborative effort that transcends individual disciplines and perspectives.

As we embark on this intellectual odyssey, let us remember the great minds that have come before us, those who have left an indelible mark on the annals of academia. Their dedication, perseverance, and insatiable curiosity have paved the way for our own inquiries and discoveries. It is our responsibility to build upon their legacy and to inspire the generations that will follow in our footsteps.

I would like to express my heartfelt gratitude to the conference organizing committee, whose tireless efforts have made this conference possible. I would also like to thank the Management of the University, ably led by the Vice-Chancellor, Prof. Wahab O. Egbewole, for approving the hosting of this conference at the University and for his continuous support for academic excellence. And, of course, I extend my warmest appreciation to each and every one of you, the participants, for your invaluable contributions and dedication to the pursuit of knowledge.

In keeping with the given expectations of collaboration and exchange, I encourage you to engage in fruitful discussions, forge new connections, and challenge preconceived notions. Let us embrace the diversity of thought and approach that defines assessment advancements that can improve learning.

As we embark on this intellectual journey, let us keep in mind the words of Albert Einstein, who once said, “The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.” Let us embrace the mysteries, as new trajectories for advancing technology-led assessment practices are being unraveled in this conference.

Thank you for your presence and your commitment to the pursuit of knowledge. Together, we will make this conference a resounding success, and our collective contributions will illuminate the path forward for generations to come.

Welcome to the ACATA Conference, and may this conference be a beacon of inspiration, innovation, and enlightenment.

Thank you.




Welcome Message 3 - Dr M.O. Ogunjimi, ACATA President/Conference Chair

It gives me great pleasure to welcome you all to this historic gathering, marking the first ever international conference of our association, in my capacity as the President of the Association of Computer Adaptive Testing in Africa (ACATA). This conference is being hosted by measurement and evaluation experts at the University of Ilorin, the 'Better by Far' University, who constitute the nucleus of ACATA.

I wish to remark that we are all here as witnesses to ACATA’s humble beginnings. The group was established on October 28, 2020, and officially registered as a non-profit organisation (NPO) in line with Section 14 of the Companies Act, 2008, and Regulation 14 of the Companies Regulations, 2011. The vision of ACATA is to enhance the technological capability of assessment experts and members for exploring and expanding CAT in Africa through research and the exchange of knowledge. Though CAT is an emerging specialisation worldwide, and in Africa in particular, it is expected that ACATA will, through its activities, contribute to improvement and innovation in large-scale assessment on the continent. Decision-making in the field of education based on assessment will benefit from the more accurate, efficient, and secure procedures fostered by Computer Adaptive Testing (CAT).

To achieve this vision, ACATA seeks to steer a paradigm shift in educational assessment in Africa by moving the continent to the next generation of Computer-Based Testing through the promotion of scholarship and best practices in Computer Adaptive Testing, encouraging the use and management of assessment technologies geared towards precision across a variety of educational settings.

Mr. VC and Chief Host, leaders of the educational assessment industry in Africa, Ladies and Gentlemen: being in its formative stage, ACATA has been centrally coordinated from West Africa, with Nigeria currently hosting the secretariat; a few members are in the southern and northern parts of Africa. Efforts to reach out to other African countries are being intensified through academics in higher institutions of learning, while the drive towards examining bodies is just gathering momentum.

This opening ceremony is one major way of achieving ACATA's main goal, with members registered to participate, share CAT knowledge and experiences, and sharpen each other's expertise by taking on the theme: Focusing Africa on Computer Adaptive Testing (CAT) for Assessment in the Fourth Industrial Revolution.

Though the pace of digitization in the rest of the world is fast and vast, Africa must double, if not triple, its pace to not only embrace technology but also implement best practices in large-scale educational assessment. The continent is making a good effort with computer-based tests, but many countries face serious challenges in adopting CAT. We hope to keep evolving in this direction, as change, they say, is the only constant.

Distinguished Ladies and Gentlemen, ACATA is grateful to the Vice Chancellor, Guest Speaker, Keynote Speaker, Lead Paper Presenter, and Workshop Lecturers, all our invited guests, and all the participants present physically and virtually. We are sure to learn from your wealth of experience with Computer Adaptive Testing. We also appreciate the Executive Director of this great association in the person of Prof. Henry Owolabi, our teacher and all-round mentor. We will continue to look forward to your exemplary mentorship.

Ladies and Gentlemen, we are happy to have seasoned experts in academia and industry (Dr Nathan Thompson from Assessment Systems and Prof. Oloyede, CON, the JAMB Registrar) join us for the conference.

On behalf of the Executive Committee and the entire membership of ACATA, I say welcome to you all and thank you for making out time to attend and participate in this international conference.

Guest Speaker: Prof. I. Oloyede, CON, JAMB REGISTRAR


Keynote Speaker: Dr Nathan Thompson, CEO ASSESSMENT SYSTEMS, USA

Nathan Thompson, PhD, is the cofounder and CEO of Assessment Systems, a leading provider of technology and psychometric solutions to the testing and assessment industry. His interest is in utilizing modern technology – especially AI, automation, and a quality user interface – to improve the development, delivery, and analysis of assessments, thereby improving the millions of decisions made every day from test scores.

Full Papers Sessions: 4.00-6.00 p.m.


Courtesy of the Vice Chancellor, University of Ilorin, Nigeria


(Workshop Sessions)

Workshop 1 IRT and CAT Algorithms Implementation for Educational Assessment

Dr Kunle Ayanwale, South Africa ORCID

ABSTRACT: Introducing Item Response Theory (IRT) and Computerized Adaptive Testing (CAT) algorithms for educational assessments in Africa can significantly enhance evaluation processes. IRT models students’ performance on test items, providing nuanced insights into their abilities. This is particularly advantageous in Africa’s diverse educational landscape, where learners have varied backgrounds and experiences. It ensures precise calibration of test items to match individual examinees’ abilities, thus fostering fair and unbiased assessments. Meanwhile, CAT, which is built upon IRT principles, is ideally suited for resource-constrained settings like many parts of Africa, reducing the number of test items needed and saving time and resources. However, successful implementation requires investments in technology infrastructure, culturally tailored item banks, and educator/administrator training. Incorporating IRT and CAT into African educational assessments can enhance precision, equity, and efficiency, ultimately improving educational outcomes and expanding opportunities for learners across the continent.
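The IRT machinery the workshop introduces can be sketched in a few lines. Below is a minimal Python illustration (not from the workshop materials; the parameter values are hypothetical) of the three-parameter logistic (3PL) response function and its Fisher information, the quantities that drive CAT item selection:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta
    (a: discrimination, b: difficulty, c: pseudo-guessing)."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return a**2 * (q / p) * ((p - c) / (1 - c))**2

# An item is most informative near its difficulty b, which is why CAT
# administers items matched to the examinee's current ability estimate:
a, b, c = 1.2, 0.0, 0.2
print(item_information(0.0, a, b, c) > item_information(2.0, a, b, c))  # True
```

A calibrated bank stores (a, b, c) per item; the adaptive algorithm repeatedly picks the unadministered item with the highest information at the current ability estimate.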

TEA BREAK: 11.00-11.30 a.m.

Workshop 2 Computerized Adaptive Testing Implementation Projects 

Dr Jumoke Oladele, Nigeria ORCID

ABSTRACT: Computerized Adaptive Testing (CAT) is a dynamic assessment approach that tailors test questions to an individual’s ability level, enhancing measurement precision while reducing test length. CAT has gained prominence in educational and employment assessment due to its efficiency and accuracy. Implementing CAT involves several crucial steps, including item bank development, item calibration, and algorithm design. Successful CAT projects require meticulous planning, resource allocation, and collaboration among experts in psychometrics, test development, and technology. CAT’s adaptive algorithms, which are premised on Item Response Theory (IRT), play a pivotal role in selecting questions that match a test-taker’s ability, optimizing measurement precision. This workshop summarizes key aspects of CAT implementation projects.
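The pieces the abstract names (a calibrated item bank, adaptive item selection, and ongoing ability estimation) can be combined into a minimal CAT loop. The following Python sketch is illustrative only: it uses a randomly generated, hypothetical 2PL item bank and a simulated examinee, not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical, pre-calibrated 2PL item bank: discrimination (a), difficulty (b)
a = rng.uniform(0.8, 2.0, 200)
b = rng.uniform(-2.5, 2.5, 200)

def p2pl(theta, aa, bb):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-aa * (theta - bb)))

def eap(responses, items, grid=np.linspace(-4, 4, 161)):
    """Expected a posteriori ability estimate under a N(0, 1) prior."""
    post = np.exp(-0.5 * grid**2)                 # prior weights on the grid
    for u, j in zip(responses, items):
        p = p2pl(grid, a[j], b[j])
        post *= p**u * (1 - p)**(1 - u)           # likelihood of each response
    return float(np.sum(grid * post) / np.sum(post))

def simulate_cat(true_theta, test_length=20):
    administered, responses = [], []
    theta = 0.0                                   # start at the prior mean
    for _ in range(test_length):
        p = p2pl(theta, a, b)
        info = a**2 * p * (1 - p)                 # 2PL item information
        info[administered] = -np.inf              # never repeat an item
        j = int(np.argmax(info))                  # maximum-information selection
        u = int(rng.random() < p2pl(true_theta, a[j], b[j]))  # simulated answer
        administered.append(j)
        responses.append(u)
        theta = eap(responses, administered)      # re-estimate after each item
    return theta, administered

est, items = simulate_cat(true_theta=1.0)
```

With only 20 of the 200 items administered, the EAP estimate typically lands close to the simulated ability; real projects add item exposure control and content balancing on top of this skeleton.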

Full Papers Sessions: 1.00-2.00 p.m.

Workshop 3 Simulations and CAT

Dr Fernando Austria, Mexico ORCID

ABSTRACT: Computerized Adaptive Testing (CAT) represents a dynamic and innovative approach in the field of psychometrics. This presentation will delve into the foundational principles of CAT, emphasizing its ability to tailor tests to individual examinees for optimal efficiency and precision. Through the use of simulated data, attendees will gain insights into how CAT algorithms select items based on an examinee’s ability, leading to shorter tests without compromising validity. We’ll also explore the challenges and benefits associated with CAT simulations, including issues of item pool development, security, and fairness. By the end of the session, participants will have a comprehensive understanding of the transformative potential of CAT simulations in modern psychometric assessment.

Lunch Break: 4.00-5.00 p.m.
Paper Presentation Sessions: 5.00-6.00 p.m.




(Members only; new members are welcome to sign up.)

(Paper Presentation Mop-up Sessions)




Thank you for attending the first biennial Conference of the Association of Computer Adaptive Testing in Africa at the University of Ilorin, the 'Better by Far' University. We hope you are leaving with deepened knowledge of computer adaptive testing, and we wish you all safe journeys.

From the LOC, it is not goodbye, but see you again!

Book of Abstracts


O. D.



Prof. E. R. I



The inclusion of information and communication technology in education for cognitive assessment leads to the need to reconsider, modify, and/or change traditional examination methods at the primary, secondary, and tertiary levels of education. Over time, literature evidence abounds on the psychometric characteristics of different types of conventionally tested items in terms of their mean scores, with little or no attention paid to the criteria of adequate reliability and validity, as well as the difficulty and discrimination indices of assessment items. However, information on how these compare with items tested electronically is sparse in the literature. This paper presents estimates of the psychometric properties of items tested under the two modes of testing, to ascertain their psychometric equivalence or otherwise.

Key words: Psychometric properties, E-Testing, C-Testing
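For readers who want the conventional indices discussed above made concrete, here is a small self-contained Python sketch (illustrative only, with made-up response data) computing the classical difficulty index and a corrected item-total discrimination index:

```python
import numpy as np

def item_stats(scores):
    """Classical item statistics for a 0/1 response matrix (examinees x items).
    Returns the difficulty index p (proportion correct) and the corrected
    item-total correlation as a discrimination index."""
    p = scores.mean(axis=0)
    total = scores.sum(axis=1)
    disc = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]     # exclude the item from its own total
        disc[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p, disc

# Made-up responses: 5 examinees x 3 items
scores = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0],
                   [1, 1, 1]])
p, disc = item_stats(scores)
print(p)  # difficulty indices: 0.8, 0.6, 0.4
```

The same indices can be computed on electronically administered forms, which is the comparison the paper addresses.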

Dr. Yusuf

The calibration of item parameter (difficulty, discrimination, and guessing) estimates may consider only the true abilities, and not the cheating abilities, of examinees. In an effort to detect test cheating due to the compromise of multiple-choice items, the Deterministic Gated Item Response Theory Model was developed to provide information about the cheating effectiveness of examinees, measure the extent of item fit for compromised items, and assess the sensitivity and specificity of detecting cheating due to item compromise. Hence, the model provides information on the extent to which item response theory psychometric estimates are sensitive to item compromise when cheating occurs in large-scale examinations. This paper examines the concepts of cheating and item compromise and provides a brief overview of the Deterministic Gated Item Response Theory Model (DGIRT). It was recommended, among others, that psychometricians consider the validation of the Deterministic Gated IRT model and other new IRT models that account for the cheating ability of examinees, unlike the “normal” IRT model, which produces the probability of an item response for varying values of θ (ability).

Key words: Test Compromise, Score Inflation, Cheating, Deterministic Gated Item Response Theory Model (DGIRT)

Dr. , University of Ilorin, Ilorin, Nigeria
Abass Adegoke, barsskord90@gmail.com, University of Ilorin, Ilorin, Nigeria

Computerised Adaptive Testing (CAT) is a superior option for testing with a focus on the examinee’s ability, arising from the need for a paradigm shift in computer-based testing for large-scale testing. This study assessed CATSim using UTME items for the determination of candidates’ placement into higher institutions in Nigeria, with the specific objectives of determining the ability estimation of CATSim for candidates’ placement; examining the ability of CATSim to reduce UTME items; and estimating the precision of CATSim using UTME items for the determination of candidates’ placement. A quasi-experimental, post-test-only design of a 2x1x2 factorial was employed, alongside a post-hoc simulation method for generating data using CATSim. For this study, 100,000 simulees were the selected sample, with 500 multiple-choice items. Data were analysed using descriptive statistics of mean and standard deviation, and inferential statistics of correlation. The results revealed that CAT has the estimation ability, using UTME items, for placement of candidates into higher institutions in Nigeria (SEE = 0.2826); CAT reduced the number of items used for the UTME by 40%; and CAT has estimation precision for placement of candidates into higher institutions in Nigeria (r = 0.696). It is therefore concluded that CAT holds significant importance and offers a more efficient and streamlined evaluation process. It is recommended that CAT be adopted by JAMB for placement of candidates into higher institutions in Nigeria through the UTME, and that educational authorities and institutions explore the implementation of CAT to enhance the overall efficiency of the candidates’ placement process.

Key words: CAT, CATSim, Ability Estimation, Reduction Ability, Estimate Precision
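As background to the SEE figure reported above: under IRT, the standard error of estimation is the inverse square root of the test information at the examinee's ability, which is why an adaptive test reaches a target precision with fewer items than a fixed form. A small illustrative calculation follows; the per-item information value is hypothetical, not taken from the study:

```python
import numpy as np

def see_from_information(total_info):
    """Standard error of estimation: SEE = 1 / sqrt(test information)."""
    return 1.0 / np.sqrt(total_info)

target_see = 0.2826                    # a precision level like the one reported
required_info = 1.0 / target_see**2    # test information needed to reach it

# Well-targeted (adaptive) items each contribute more information than
# off-target ones; 0.63 per item is a hypothetical average contribution.
info_per_targeted_item = 0.63
items_needed = int(np.ceil(required_info / info_per_targeted_item))
print(items_needed)  # -> 20 targeted items reach SEE <= 0.2826
```

Off-target items contribute far less information each, so a fixed form needs many more of them to reach the same SEE, which is the mechanism behind the item reduction the study reports.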


Julius K.



Prof. Henry O.



Cognitive Diagnostic Assessment (CDA) is one of the emerging educational assessment practices that is designed to measure specific knowledge structures and processing skills in learners so as to provide information about their cognitive strengths and weaknesses. This assessment approach, which is an extension of the IRT framework, aims at providing formative diagnostic feedback through fine-grained reporting of learners’ mastery of skill profiles. In order to extract discrete attribute and skill profiles for test-takers, a set of statistical and psychometric models called Cognitive Diagnostic Models (CDMs) has been proposed. CDMs are psychometric models which determine the examinee’s mastery of a set of predefined skills or attributes based on their responses to a set of test items. This review sheds light on the meaning of CDA, CDMs, types of CDMs, and some terminologies used in CDA. It also deals with the procedures involved in CDA, similarities and differences between CDMs and conventional statistical methods, and software packages that can be used for CDMs. This article employed a systematic literature review method, critically synthesizing research studies and findings on Cognitive Diagnostic Models. It was concluded that CDA has immense practical implications for classroom instruction and learning by diagnosing the mastery status of learners in a targeted domain, providing effective assessment of students’ learning and progress, improving assessment reports, and providing better remedial feedback.

Key words: Assessment, CDA, CDMs, Attributes or Skills, Q-Matrix
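To make the review's description of CDMs concrete, here is a short Python sketch (illustrative, with hypothetical Q-matrix and parameters) of the DINA model, one of the most widely used CDMs: an item is answered correctly with probability 1 - slip when the examinee masters all the attributes the Q-matrix requires, and with the guessing probability otherwise:

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """P(correct) per item under the DINA model for one examinee.
    alpha: examinee's attribute-mastery vector (0/1)
    q:     Q-matrix (one row per item; which attributes the item requires)
    slip, guess: per-item slip and guessing parameters"""
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = np.all(alpha >= q, axis=1).astype(float)
    return ((1 - slip) ** eta) * (guess ** (1 - eta))

# Hypothetical 3-attribute example with two items
q = np.array([[1, 1, 0],    # item 1 requires attributes 1 and 2
              [0, 0, 1]])   # item 2 requires attribute 3
slip = np.array([0.1, 0.2])
guess = np.array([0.25, 0.2])
alpha = np.array([1, 1, 0])  # masters attributes 1 and 2 only
print(dina_prob(alpha, q, slip, guess))  # 0.9 for item 1, 0.2 for item 2
```

Estimation software fits slip and guess from data and infers each examinee's alpha, which is the skill-profile report CDA delivers.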


Julius K.



Dr M. I.



Afolabi O.









As the global society transitions from the Third Industrial Revolution (3IR) to the Fourth Industrial Revolution (4IR), the existing Education 3.0 era is being supplanted by the Education 4.0 era. Education 3.0 is based on the behaviourist paradigm and pedagogical approach to learning, where the curriculum is delivered top-down in formal learning contexts. However, the emerging Education 4.0 era fosters a more learner-centred approach to education where learners use emerging technologies to generate an immersive, meaningful, life-long, and holistic learning experience. This shift in educational paradigms is being triggered by the emergence of 4IR technologies such as artificial intelligence, the Internet of Things (IoT), the fusion of technologies, teaching factories, learning factories, and so on. This study therefore examined the impact of the Fourth Industrial Revolution on education, and on educational assessment in particular. This paper provided insight into the antecedents and trajectory of the Fourth Industrial Revolution in education. The study sheds light on the opportunities the Fourth Industrial Revolution presents for educational assessment and the challenges stemming from such changes in education. It was concluded that the education sector cannot be isolated from 4IR technologies, which are predicted to have a significant effect on learning opportunities, educational policies, instructional procedures, and assessment procedures. The study recommended that emphasis be placed on new sets of skills, creativity, innovation, curriculum changes, etc. that will help to achieve the educational Sustainable Development Goals (SDGs) in the 4IR.

Key words: Fourth Industrial Revolution, Emerging 4IR Technologies, Artificial Intelligence, Learning Factories.


Dr Mayowa












Olamide Sukurat




Bello Olaitan


An improved knowledge of the logic, design, execution, analysis, and interpretation involved in the use of Computer Adaptive Testing (CAT) can be achieved through simulation studies. The processes of validating item banks, algorithm testing, parameter estimation, test assembly, fairness and bias analysis, realistic scenarios, and data production are all covered in this paper as ways to assist CAT operations. The paper describes CAT in terms of the use of cumulative learning activities based on Bloom’s Taxonomy and also highlights simulation termination rules to support the operation of CAT. Additionally, the paper seeks to compile some of the available research methodologies into a general framework for the development of any CAT assessment. Item selection in CAT is also discussed, along with its three key elements: item selection criteria, item exposure management, and content balancing. The paper further covers the development of three simulation types for CAT utilizing both dichotomous and polytomous item response theory (IRT) models, namely post-hoc (actual data) simulations, hybrid simulations, and Monte Carlo simulations. It was concluded that conducting simulation studies is a critically important component of evaluating the design of CAT programs and their implementation. Based on this assertion, it was recommended that practitioners perform CAT simulation studies at each stage of CAT development.

Key words: Simulations, Computer Adaptive Testing, Artificial Intelligence, Item Bank Validation.
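The termination rules the paper highlights are typically a combination of a precision target and length bounds, which simulation studies are used to tune. A minimal sketch (the thresholds and the SE trace are illustrative values, not from the paper):

```python
def should_stop(se, n_administered, se_target=0.30, min_items=5, max_items=30):
    """Common CAT termination rules combined: precision plus length bounds."""
    if n_administered < min_items:
        return False            # enforce a minimum test length
    if n_administered >= max_items:
        return True             # hard cap regardless of precision
    return se <= se_target      # stop once the SE of theta is small enough

# Hypothetical trace of the standard error shrinking as items accumulate
ses = [0.9, 0.62, 0.48, 0.40, 0.34, 0.29]
stop_at = next(n for n, se in enumerate(ses, start=1) if should_stop(se, n))
print(stop_at)  # -> 6: the test stops when SE first drops to 0.29
```

A simulation study would run such a rule over thousands of simulees to check average test length, score precision, and item exposure before deployment.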


This study examined the level of perception of the use of artificial intelligence for curbing examination malpractice in Nigeria. Three research questions were formulated to guide the study. The study adopted qualitative research of the exploratory type. The population of this study comprised six staff members of examination bodies (JAMB and NECO) in Ilorin, Kwara State. The research instrument used for this study was a designed interview guide titled “Artificial Intelligence in Curbing Examination Malpractice during Standardized Examinations”. The research questions were answered using a thematic approach. Findings from this study revealed that JAMB makes use of proctoring software while NECO makes use of human proctors. Moreover, findings show that both JAMB and NECO have positive attitudes towards the use of Artificial Intelligence in curbing examination malpractice, because it carries out the activities expected of it, such as detecting impersonation and analysing data accurately. Artificial Intelligence thus significantly contributes to curbing examination malpractice. It is hereby recommended, among others, that Artificial Intelligence proctoring software be adopted in curbing examination malpractice; Artificial Intelligence will improve the process of curbing examination malpractice, and there should be little or no human interference in its use, so as to maintain its credibility.

Key words: Artificial Intelligence, Examination Malpractice, Industrial Revolution.

Dr Musa Adekunle Ayanwale, ma.ayanwale@nul.ls, Lesotho

Implementing Item Response Theory (IRT) and Computerized Adaptive Testing (CAT) algorithms for educational assessment in Africa holds great potential for improving the quality and efficiency of assessments in the region. IRT allows for a more nuanced understanding of students’ abilities by modeling their performance on test items. In diverse African educational contexts, where students come from varied backgrounds and experiences, IRT can provide fair and accurate assessments that are less biased than traditional methods. It ensures that test items are appropriately calibrated to the abilities of the test-takers, making assessments more equitable. CAT, which is built upon IRT principles, is particularly well-suited for resource-constrained settings like many parts of Africa. CAT’s adaptability reduces the number of test items required, saving time and resources. This is especially valuable in regions with limited access to quality education and assessment resources. Implementing these algorithms would require investment in technology infrastructure and item banks tailored to African curricula and cultural contexts. Additionally, training educators and administrators to understand and use these systems effectively is crucial. By incorporating IRT and CAT into educational assessment practices in Africa, it is possible to enhance assessments’ precision, fairness, and efficiency, ultimately contributing to improved education outcomes and greater opportunities for the continent’s learners.

Key words: Computer adaptive testing, Item response theory, educational assessment, 4IR.


Dr Jumoke















Assessment is the hallmark of teaching and learning processes, characterizing the individual knowledge, skills, abilities, interests, and values of students. A technology-driven form of assessment known as the Computer Adaptive Test (CAT), which utilizes Item Response Theory (IRT) and captures the examinees’ level of skills and knowledge sequentially, was discussed. CAT can be deployed for performance evaluation, job analysis and synthesis, continuous learning progress pathways, validity-centered design and documentation, and logical measurement opportunities in performance tasks and simulations. CAT’s goals are to maximize test efficiency by selecting the most appropriate items for each examinee, ensure that tests measure the same traits for each examinee by controlling the non-statistical nature of test items, and protect the security of the item bank by controlling the rates at which items are administered. CAT usually employs software algorithms to select items from the pool according to the examinee’s ability. The CAT item bank has items that collectively provide information across the full range of the trait (theta) of the examinees. The item development process includes the recruitment of subject matter personnel who are responsible for raising items based on a table of specification. They write, review, and edit the test items, after which the item bank (scored dichotomously or polytomously) is pilot tested and subjected to psychometric analysis. CAT encompasses several key aspects, which include flexibility in managing tests, the provision of instant feedback to examinees, and the potential to enhance examinee motivation.

Key words: Computer Adaptive Test (CAT), CAT Application, CAT Implementation, Item Response Theory (IRT).

Damilola Daniel Olaoye, olaoyedamilola2020@gmail.com, Nigeria
Najat Omolola Abdulhameed, angel45y2k2@gmail.com, Nigeria
Saka Kawu Muritala, kawumuritala@gmail.com, Nigeria


This article delves into the realm of machine learning and its applications in the field of educational assessment. The term “machine learning” is traced back to its inception in 1959 by Arthur Samuel, an IBM employee renowned for his work in computer gaming and artificial intelligence. It is noted that early experiments, like Raytheon Company’s Cybertron, utilized rudimentary reinforcement learning techniques, indicating the early adoption of machine learning for data analysis. Supervised learning and unsupervised learning are explained as core paradigms within machine learning, with a focus on how supervised learning enables machines to acquire general rules by using sample inputs and their corresponding desired outputs. This study aims at leveraging the supervised AIG technique for item bank generation. Worthy of note is the fact that AIG is no longer limited to large testing companies but is now accessible through platforms like ASC’s next-gen assessment platform. This study adopts the action research design. In deploying this study, an AIG platform was used to implement Type 1 AIG using a developed and validated 20-item cognitive test template. This process was implemented using dynamic fields through which the item bank will be enlarged and then exported for review by subject experts for face and content validation. The exported file will be subjected to subject-expert re-validation to determine the final item bank, after which the item bank will be re-subjected to psychometric analysis using the Xcalibre software, based on which the conclusions for this study and related recommendations will be made.

Key words: Computer Adaptive Testing, AIG, Machine Learning Algorithm, Item Bank Enhancement.
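Type 1 AIG, as described in the abstract, expands a validated parent item into many cloned items by substituting values into dynamic fields. The sketch below is purely illustrative: the template text, field values, and answer rule are hypothetical examples invented here, not the study's actual 20-item template.

```python
import itertools

# A hypothetical Type 1 AIG parent item: a fixed stem with dynamic fields.
TEMPLATE = "If a trader buys {n} oranges at {price} naira each, what is the total cost?"

def generate_items(template, fields):
    """Expand every combination of dynamic-field values into a cloned item."""
    keys = list(fields)
    items = []
    for combo in itertools.product(*(fields[k] for k in keys)):
        values = dict(zip(keys, combo))
        stem = template.format(**values)
        answer = values["n"] * values["price"]  # answer rule for this template
        items.append({"stem": stem, "key": answer})
    return items

bank = generate_items(TEMPLATE, {"n": [5, 12, 20], "price": [30, 50]})
# 3 values of n x 2 values of price = 6 cloned items from one parent
```

As the abstract notes, such machine-generated clones still require expert face/content review and psychometric calibration before entering an operational item bank.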

Dr Saka Babatunde, sakababatunde2020@gmail.com, Nigeria

The effectiveness of cognitive restructuring on examophobia among public Colleges of Education students in Kwara State, Nigeria is the main focus of this study. The study also looked into the effect of the independent variable on the dependent variable based on gender. A quasi-experimental research design with a pre-test, post-test, non-randomized and non-equivalent control group of 2×2 factorial design was adopted. The target population for the study comprised 462 Agricultural Science students in the selected public Colleges of Education in Kwara State. The sample for the study comprised 129 NCE 1 students. The Examination Phobia Scale and a Cognitive Restructuring Treatment Package were used to collect data for the study. The data collected were analysed using ANCOVA at the 0.05 level of significance. The results revealed a significant effect of cognitive restructuring on examophobia among public Colleges of Education students (F = 20.600, p = .000), and showed no significant difference in the effectiveness of cognitive restructuring on examophobia on the basis of gender (F = 0.290, p = .592). The study concluded that the cognitive restructuring technique was effective in treating examophobia among public college students. The implication is that students experiencing examination phobia can access therapeutic assistance in order to achieve their set academic goals. It was therefore recommended that counselling and human development centres, staffed with personnel well trained in the application of psychotherapy to resolve psychological issues that hinder students’ effectiveness and optimal performance, should be established in the public Colleges of Education.

Key words: Examophobia, Cognitive Restructuring, College Students.

Dr Jumoke Oladele, oladeleajumokeedu19@gmail.com, Nigeria

Mental health is conceptualized as a state of well-being in which individuals realize their abilities, can cope with the normal stresses of life, can work productively and fruitfully, and can contribute to their community. The purpose of the research was to calibrate an optimal item bank for a Computerised Adaptive Mental-well-being Scale (CAMS) for use by university undergraduates in Nigeria, which informed the study sample. The instrument for the study was a Likert-scaled questionnaire with items based on indices of mental well-being. The scale was face and content validated by medical, sociology, and educational psychology experts, while a trial test was carried out to determine the reliability of the test items. The scale parameters were analysed using Samejima’s Graded Response Model (GRM) for polytomously scored items and deployed through the Xcalibre 4.2 programme. This study yielded a calibrated item bank of 175 items with an alpha value of 0.8469, a mean (standard deviation) of 0.0344 (0.8658), and model fit ascertained through chi-square/p-value statistics with a value of 6067.7362 (1). The calibration yielded the Eta, Alpha w/o, Max Info, Theta at Max, a, and boundary (b1 to b3) values needed for the CAMS deployment. The calibrated item bank was scaled at a theta range of -4 to +4, a maximum information of 25.478, and a CSEM of 0.198. This study is germane to the attainment of SDG Goal 3 of ensuring healthy lives and promoting well-being for all.

Key words: Mental health, Well-being, Item bank Calibration, Computer Adaptive Testing, Samejima’s Graded Response Model
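The boundary parameters (b1 to b3) the abstract reports come from Samejima's Graded Response Model, in which cumulative boundary curves are differenced to give the probability of each response category. A minimal sketch of those category probabilities is shown below; the discrimination and boundary values are illustrative assumptions, not the study's calibrated parameters.

```python
import math

def grm_category_probs(theta, a, boundaries):
    """Category response probabilities under Samejima's Graded Response Model.
    K ordered boundaries (b1..bK) define K+1 ordered response categories."""
    # Cumulative probability of responding in category k or higher
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in boundaries] + [0.0]
    # Adjacent differences give the probability of each single category
    return [cum[k] - cum[k + 1] for k in range(len(boundaries) + 1)]

# Illustrative item: a = 1.2 with three boundaries, i.e. four Likert categories
probs = grm_category_probs(0.0, 1.2, [-1.0, 0.0, 1.0])
```

Software such as Xcalibre estimates the a and b1..b3 values per item from response data; the function above only shows how those parameters translate into category probabilities at a given theta.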



Assessment of undergraduates’ mental well-being in Sub-Saharan Africa is a critical endeavour, given the unique socio-cultural and economic challenges faced by this demographic. Two distinct approaches to item development would be explored in this study. Traditional scale development, involving expert input, ensures that assessment items are culturally tailored to the diverse and nuanced context of Sub-Saharan Africa. These experts, often with deep knowledge of local mental health issues, can craft items that resonate with the experiences and challenges of undergraduates in the region. However, this method is time-intensive, may not keep pace with rapidly evolving mental health concerns, and is limited in scalability and adaptability. In contrast, automated item generation, driven by data and algorithms, leverages the power of artificial intelligence and machine learning to rapidly produce a large pool of test items. This approach is efficient and scalable. However, it relies heavily on historical data and may inadvertently perpetuate biases and lack cultural sensitivity. A hybrid approach, in which a foundation of culturally relevant items from traditional item development is augmented and adapted through automated item generation techniques, is desired. Combining the expertise of traditional methods with the efficiency and adaptability of automation will strike a balance between cultural relevance and scalability. Ultimately, the choice between traditional and automated item development should be informed by the specific goals, resources, and contextual nuances of the assessment, ensuring that the mental well-being of undergraduate students in Sub-Saharan Africa is comprehensively and effectively evaluated.


Key words: Traditional Item Development, Automated Item Generation, Artificial Intelligence, Algorithms.

Siano Zakari, Nigeria
Abdulrahman Ahmed, Nigeria

High-stakes tests in Nigeria and many other African countries suffered a serious setback due to the Covid-19 pandemic that ravaged the entire world. CAT dynamically adapts to the examinee’s performance level, varying the difficulty of the items presented according to the examinee’s previous answers, which in some way depict their ability. Technological advancements continue to evolve, enhancing and innovating CAT in the areas of Artificial Intelligence and Machine Learning, Natural Language Processing, Remote Proctoring and Biometrics, Mobile Accessibility, Data Analytics and Predictive Modelling, Item Banking and Calibration, Adaptive Feedback and Reporting, Virtual Reality and Augmented Reality, Blockchain for Credential Verification, Multimodal Assessments, Personalized Learning Paths, Gamification, and Accessibility Features. This study investigated these innovations and how they affect testing time, test result accuracy, and reliability. Analyses were made in terms of saved time, reliability, user-friendliness, and reduced overhead costs.

Key words: Analysis of time, Computer Adaptive Testing, Cost, Technological innovations, User adaptability


Computer Adaptive Testing (CAT) is an assessment delivery method that uses algorithms to personalize test delivery to every examinee. This makes the test shorter and more accurate, secure, and fair with Item Response Theory (IRT) based algorithms. IRT item selection algorithms and scoring procedures are helpful for estimating examinee abilities and their connections with the answers, based on one-, two- or three-parameter models. The most widespread approach is the 1PLM, followed by the 2PLM and 3PLM. Open-source IRT software, publicly accessible to be seen, used, modified, and distributed, includes, but is not limited to, RSCAT, Concerto, IRT-Computerized Adaptive Testing, OSCATS, OpenCAT, WebCT, Simulcat, SIETTE, and MISTRAL. Some examples of commercial IRT software are FastTest, PearsonVUE, Prometric, McCann, LeaderAmp, and LIVECAT. Advantages of open-source software include lower hardware and software costs, access to high-quality hardware and software, no vendor lock-in, integrated management, simple license management, and abundant support. This study was designed to compare the psychometric properties of the one-, two- and three-parameter models across three selected software packages to identify uniformities and their sources. Specific research questions addressed which of the selected software packages should be used in different situations and under which of the three parameter models, and recommendations were made accordingly.

Key words: Item Response Theory, One-, Two- and Three-Parameter Models, Open-Source IRT Software.
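The one-, two- and three-parameter models this abstract compares are nested: the 3PL adds a guessing floor c to the 2PL, which in turn adds a discrimination parameter a to the 1PL. A minimal sketch of this nesting, with illustrative parameter values of the author's choosing, is:

```python
import math

def p_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL IRT model.
    With c = 0 this reduces to the 2PL; with a = 1 and c = 0, to the 1PL (Rasch)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# The three nested models evaluated at theta = 1 for an item with b = 0:
p1 = p_3pl(1.0)                   # 1PL: a = 1, c = 0
p2 = p_3pl(1.0, a=1.7)            # 2PL: adds discrimination a
p3 = p_3pl(1.0, a=1.7, c=0.2)     # 3PL: adds a guessing floor c
```

Because the models are nested in this way, software packages that calibrate the 3PL can usually fit the simpler models by fixing c (and a) at constant values, which is one source of the cross-software uniformities the study investigates.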