Frequently Asked Questions About Licensing Exams

Small group testing

CLEAR Exam Review (Winter 1997)
Norman Hertz

Question: Our examination program is relatively small. We test only a hundred candidates a year. We use the Angoff method to establish the passing score for each examination administered. We hire a consultant to assist us in developing the examination and in establishing the passing score, yet the passing score varies and the percentage of candidates who pass fluctuates across administrations. Our consultant tells us that item analysis results should not be over-interpreted, because so few candidates sit for the examination that the statistics are not stable. What can we do to ensure that our licensing examination program is valid and in compliance with testing standards?
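For readers unfamiliar with the Angoff method named in the question: each judge estimates, for every item, the probability that a minimally competent candidate would answer it correctly; the passing score is the sum, across items, of the judges' average estimates. A minimal sketch in Python, with purely illustrative judge names and ratings (none of these figures come from the article):

```python
# Hypothetical Angoff ratings: for each item, each judge's estimate of the
# probability that a minimally competent candidate answers it correctly.
ratings = {
    "judge_1": [0.70, 0.55, 0.80, 0.60],
    "judge_2": [0.65, 0.60, 0.75, 0.70],
    "judge_3": [0.75, 0.50, 0.85, 0.65],
}

def angoff_cut_score(ratings):
    """Average the judges' ratings per item, then sum across items."""
    per_item = zip(*ratings.values())  # regroup ratings item by item
    return sum(sum(item) / len(item) for item in per_item)

cut = angoff_cut_score(ratings)
print(round(cut, 2))  # expected number correct out of 4 items: 2.7
```

Because the cut score depends on which judges are convened and how they read the items, some variation in the passing score from one administration to the next is expected.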

Answer: Your concerns are shared by many small examination programs. There are a number of steps you can take to ensure that the program is valid and that pass/fail licensing decisions are correct.

The first step is to ensure that the licensing examination is job related. Job-relatedness is established by a study of the job to identify the work actually performed by practitioners. The best methodology is a formal occupational analysis study. If a formal study is not feasible, your consultant could assist you in convening a focus group of knowledgeable job incumbents, supervisors, and educators whose responsibility would be to describe the job in a manner that facilitates development of a job-related licensing examination. The product of a well-designed focus group study should be sufficient to establish the job-relatedness of the examination program, which is the first step in demonstrating the validity of an examination. The next step is to write the examination based on the results of the study. If these steps are followed, it is likely that the job-relatedness requirement will be satisfied.

In instances where examination programs are small, the statistical data are less important than the procedures. With a small number of candidates, you may find that the item statistics vary considerably for identical questions when they are used in consecutive administrations. In these cases, it is best to ensure that sound procedures were followed in writing the questions, and worry less about the statistics.
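The instability your consultant describes can be put in rough numbers: the standard error of an item difficulty (the proportion of candidates answering the item correctly) shrinks only with the square root of the candidate count. A minimal sketch, using the standard binomial formula and illustrative figures of the questioner's scale (about 100 candidates):

```python
import math

def item_difficulty_se(p, n):
    """Standard error of an observed item difficulty p based on n candidates."""
    return math.sqrt(p * (1 - p) / n)

# An item answered correctly by 60% of 100 candidates:
se = item_difficulty_se(0.60, 100)
print(round(se, 3))         # ~0.049
print(round(1.96 * se, 3))  # ~0.096: a 95% margin of nearly 10 points
```

With margins that wide, an item's difficulty can easily appear to shift by ten percentage points between administrations purely by sampling chance, which is why the procedures behind the items deserve more weight than the statistics.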

Passing percentages may vary greatly from one administration to the next, and the variation may have causes other than the examination itself. A more accurate representation of the true passing percentage can be obtained by averaging pass rates over several administrations, or by calculating the pass rate from first-time candidates only.
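One way to carry out the averaging just described is to pool candidates across administrations rather than average the per-administration percentages, since a simple average weights a small sitting as heavily as a large one. A minimal sketch, with hypothetical counts:

```python
# Hypothetical per-administration results: (passers, total candidates).
administrations = [(18, 30), (14, 20), (31, 50)]

def pooled_pass_rate(admins):
    """Overall pass rate from candidates pooled across administrations."""
    passed = sum(p for p, _ in admins)
    total = sum(n for _, n in admins)
    return passed / total

print(round(pooled_pass_rate(administrations), 3))  # 0.63
```

The same function can be applied to first-time candidates only, by supplying counts restricted to that group.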

In summary, for small examination programs, validity is better assessed by evaluating the procedures used to develop and implement the program than by overemphasizing the statistical results.


2002 Council on Licensure, Enforcement and Regulation