CLEAR Exam Review Archives

Back issues of the journal are available for $15 each, plus a $3.75 charge for shipping and handling and $1 for each additional item. Members can view issues online as PDFs (2006–present) by logging in.

Winter 16-17, Volume 26, No. 2
Summer 16, Volume 26, No. 1
Winter 15, Volume 25, No. 2
Spring 15, Volume 25, No. 1
Fall 14, Volume 24, No. 2
Spring 14, Volume 24, No. 1
Fall 12, Volume 23, No. 2
Spring 12, Volume 23, No. 1
Fall 11, Volume 22, No. 2
Spring 11, Volume 22, No. 1
Fall 10, Volume 21, No. 2
Winter 10, Volume 21, No. 1
Summer 09, Volume 20, No. 2
Winter 09, Volume 20, No. 1
Summer 08, Volume 19, No. 2
Spring 08, Volume 19, No. 1
Summer 07, Volume 18, No. 2
Winter 07, Volume 18, No. 1
Summer 06, Volume 17, No. 2
Winter 06, Volume 17, No. 1
Summer 05, Volume 16, No. 2
Winter 05, Volume 16, No. 1
Summer 04, Volume 15, No. 2
Winter 04, Volume 15, No. 1
Summer 03, Volume 14, No. 2
Winter 03, Volume 14, No. 1
Summer 02, Volume 13, No. 2
Winter 02, Volume 13, No. 1
Summer 01, Volume 12, No. 2
Winter 01, Volume 12, No. 1
Summer 00, Volume 11, No. 2
Winter 00, Volume 11, No. 1
Summer 99, Volume 10, No. 2
Winter 99, Volume 10, No. 1
Summer 98, Volume 9, No. 2
Winter 98, Volume 9, No. 1
Summer 97, Volume 8, No. 2
Winter 97, Volume 8, No. 1
Summer 96, Volume 7, No. 2
Winter 96, Volume 7, No. 1
Summer 95, Volume 6, No. 2
Winter 95, Volume 6, No. 1
Summer 94, Volume 5, No. 2
Winter 94, Volume 5, No. 1
Summer 93, Volume 4, No. 2
Winter 93, Volume 4, No. 1
Summer 92, Volume 3, No. 2
Winter 92, Volume 3, No. 1
Summer 91, Volume 2, No. 1
Winter 90, Volume 1, No. 2
Summer 90, Volume 1, No. 1

CLEAR Exam Review, Winter 2016-17, Volume 26, No. 2

  • Abstracts and Updates, by George Gray- George Gray focuses first on job analysis methodologies, reviewing recently published reports on job analyses in assorted professions. Next, test validity is spotlighted in a summary of articles presenting dueling conceptualizations of validity. Dr. Gray also synopsizes recent publications that address issues related to the use of multiple measures, equating and linking, public disclosure of test items, and the inclusion of performance assessments in a testing program. Finally, a correlational study of variables predicting success on the National Council Licensure Examination for Registered Nurses (NCLEX-RN) is reviewed.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s Legal Beat column reviews the purpose and application of the Americans with Disabilities Act (ADA) in the context of licensure and certification testing and summarizes an ADA-related lawsuit initiated by a medical student against the National Board of Medical Examiners (NBME) and the Federation of State Medical Boards (FSMB). The column focuses mainly on issues related to discovery and disclosure of documentation the Plaintiff and Defendants requested of one another. The Court’s decisions help to clarify the types of information that might be reasonably requested in such a case and what factors affect judgments related to disclosure.
  • Perspectives on Testing, coordinated by Chuck Friedman- The column addresses questions raised by CLEAR members and conference attendees. Experts in psychometrics, exam development, and other aspects of testing provide responses. In this issue, experts answer questions regarding the reporting of sub-scores, the inclusion of easy items on an exam, and cost/benefit analysis of using innovative item types.
  • Item Writing: Methods for Gaining a Greater Return on Investment, by Belinda Brunner - Belinda Brunner presents alternative techniques for the generation of new test items. Test security is a priority in licensure and certification, particularly for high-stakes examinations. Building and maintaining a large bank of viable test questions is an important line of defense, yet item generation can be quite expensive. This article describes tools and techniques for the efficient generation of effective test items.
  • Clarifying the Blurred Lines Between Credentialing and Education Exams, by Chad Buckendahl - This article describes similarities and differences between testing in an educational setting and testing in a credentialing context. Three aspects of testing, in particular, warrant attention to these distinctive contexts: the relationship between assessment and curriculum/training; the focus of measurement (e.g., minimum competence versus a broader range of performance); and issues related to disability accommodations. Readers may find this article useful in helping regulators and lawmakers to understand relevant issues in the context of licensure and certification testing.

CLEAR Exam Review, Summer 2016, Volume 26, No. 1

  • Abstracts and Updates, by George Gray- George Gray summarizes recent research related to Angoff standard setting, the meaning and use of coefficient alpha as a measure of reliability, and computer based testing (CBT). CBT issues addressed include alignment with content specifications, item position effects, and response time as a flag for advance knowledge of test content. Dr. Gray goes on to describe recent publications in health care assessment and certification. The column concludes with summaries of articles related to the consistency of the pass/fail decision of a credentialing examination and establishing the equivalence of cut scores across exam forms.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s Legal Beat column describes a lawsuit initiated by a candidate who felt he was given insufficient time to complete an examination and sued for “breach of contract, discrimination, failure to accommodate, and retaliation . . . .” The court ruled in favor of the testing organization. The case not only demonstrates the court’s understanding that candidates who expect testing accommodations must clearly request them in advance but also illustrates an inherent hazard of testing related to policies and procedures around the granting of accommodations. When the stakes of testing are high, a failing candidate may resort to suing the testing organization even when the case is apparently without merit. Testing organizations that develop and implement strong, standardized policies and procedures are much more likely to prevail if sued.
  • Technology in Testing, by Brian Bontempo- Brian Bontempo returns with the third in a series of articles on data visualization. This issue introduces readers to the concept and use of digital dashboards. Dr. Bontempo describes how these under-utilized tools can be of value to credentialing organizations for monitoring and evaluating their programs using multiple sources of data.
  • A Study of Potential Methods to Increase Response Rates on Task Inventory Surveys, by Adam E. Wyse, Carol Eckerly, Ben Babcock, and Dan Anderson - A solid job analysis, or practice analysis, is the foundation of validity evidence in professional licensure and certification programs. A common problem in conducting a job analysis survey is the difficulty of achieving a response rate high enough to provide dependable results. Researchers from the American Registry of Radiologic Technologists and the University of Wisconsin investigated various methods designed to improve response rates on task analysis surveys. Their findings are presented in this issue and include a discussion of age and gender differences.
  • Evaluating Item-Banking Systems: What to Consider and Why, by Adrienne W. Cadle- Next is an informational paper that will be of interest to organizations with a need to improve the development and management of their test items. Developing a proprietary item-banking system can be expensive, and making a selection from existing systems can be a daunting task. Whether establishing a new item-banking system or upgrading to a system that better meets a testing program’s current needs, there are many issues to be addressed and questions that should be asked up front to ensure that the final product is well suited to the program’s unique needs. Adrienne Cadle provides a discussion of issues and a checklist of questions that will be instructive to agencies considering adopting a new item-banking system.
  • Options for Establishing the Passing Score on a New Test Form for Small Volume Programs, by Andrew C. Dwyer- Finally, we include an article that will be of special interest to small volume certification programs seeking NCCA accreditation. The recently updated NCCA Standards require programs to establish the equivalence of different test forms in content and difficulty, specifically mentioning statistical equating procedures. Andrew Dwyer discusses options for small programs to demonstrate form equivalency when seeking accreditation. Dr. Dwyer is also the author of the final article reviewed in Abstracts and Updates, which addresses this topic from a more academic perspective.

CLEAR Exam Review, Winter 2015, Volume 25, No. 2

  • Abstracts and Updates, by George Gray- The Abstracts and Updates column offers something for everyone. George Gray begins with a review of recent validity-related research, including three articles from Psicothema focusing on various sources of validity evidence. Validity issues are also addressed in articles centered on job analysis, item writing, sensibility assessment, item disclosure, and test development. Two papers on computer adaptive testing are reviewed, along with articles on remote proctoring and automated item scoring, and two recent NCME instructional modules on item response theory are described.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s Legal Beat delves into the nuances of legal proceedings and reasoning in Gulino v. Board of Education, 2015, a dispute regarding alleged discrimination and disparate impact in a state teacher licensing exam. In this case, the court was forced to analyze the test development process and grapple with the concept of validity in the context of a job-related exam. Related litigation has not been fully settled as yet. Dale’s review of the “significant practical and legal gyrations” of this case serves to highlight the importance of clearly and thoroughly documenting evidence of validity in any job-related examination.
  • Perspectives on Testing: Responses to your Questions, by Chuck Friedman and contributors- Chuck Friedman presents the responses of experts to questions from CLEAR members and conference attendees. This issue addresses 1) the pros and cons of administering an exam in one or more set windows versus continuous, year-round (on-demand) testing and 2) general guidelines for providing testing accommodations for candidates with disabilities. Readers are encouraged to submit questions for future columns and conferences to cer@clearhq.org. Please enter “CER Perspectives on Testing” in the subject line.
  • Technology and Testing, by Brian Bontempo- This column will return in the next issue of the CLEAR Exam Review, continuing the series on data visualization.
  • ISO/IEC 17024 Conformity Assessment: General requirements for bodies operating the certification of persons, by Cynthia Woodley- Continuing our recent emphasis on standards in testing, Cynthia Woodley describes the international standard ISO/IEC 17024: Conformity Assessment – General requirements for bodies operating the certification of persons. This standard must be met by certification bodies seeking national/international accreditation from organizations such as ANSI. Regulatory agencies across many professions and jurisdictions rely on certification as evidence of competence to practice and a requirement for licensure. ISO/IEC 17024 serves the regulatory community by affirming the ability of accredited certifying bodies to assure competence.
  • Evaluating the Use of Custom Simulation Items: The Good, the Bad, and Reality, by Susan Davis-Becker and Jared Zurn- This issue also includes an article by Susan Davis-Becker and Jared Zurn describing the experiences of a regulatory board in developing and administering computer-based simulation assessments. The benefits of these custom simulation items are discussed, as are the challenges encountered. This paper will be of interest to anyone considering adding custom simulation or other innovative item types to their assessment program.

CLEAR Exam Review, Spring 2015, Volume 25, No. 1

Spring is a time of renewal, and this issue features several articles related to the long-awaited renewal and revision of various testing standards. The Standards for Educational and Psychological Testing (Standards), which serves as a guide for sound practice and fairness in high-stakes testing, was recently updated to reflect current technologies and other advances in assessment. In addition, the Standards for the Accreditation of Certification Programs (NCCA Standards) published by the National Commission for Certifying Agencies (NCCA) has also been revised, with the new NCCA Standards to take effect in 2016. The NCCA Standards are used primarily in accrediting certification programs, but the principles they express are relevant in licensure as well.
  • User’s Guide to the 2014 Joint Testing Standards for Credentialing Programs, by Ron Rodgers- This guide incorporates input from key members of the Joint Committee of the AERA, APA, and NCME charged with the task of updating the Standards. This article reviews the standards that are most relevant in the context of credentialing and highlights differences between the 2014 and 1999 editions of the Standards. Readers will undoubtedly find this article valuable in navigating the revised Standards.
  • A Review of the Newly Adopted NCCA Standards, by Brian Bontempo- Brian Bontempo, who served on the Revisions Steering Committee for the NCCA Standards, describes the changes to the NCCA Standards, explains the relevance of this document to licensure programs, describes the revision process, and highlights changes from the earlier edition. As the NCCA Standards reflects current thinking regarding sound practice in credentialing programs, we believe that readers will find this article useful in identifying what the professional community expects of both licensure and certification programs.
  • Subscore reporting: An updated reminder from the revised Standards for Educational and Psychological Testing, by Corina M. Owens and Vincent Lima- Returning to the AERA, APA, and NCME Standards, this article describes the chapter on workplace testing and credentialing and then focuses on the application of a revised standard related to subscore reporting. Reliability issues are discussed, and several methods of providing feedback to failing candidates are illustrated.
  • Abstracts and Updates, by George Gray- In this issue, George Gray includes a review of the recent issue of Educational Measurement: Issues and Practice that was devoted to the revised Standards. George also reviews a variety of recent publications, including a textbook on item development and articles covering a range of topics: a comparison of IRT and classical test theory; two illustrations of exam development and validation via job analysis; two articles addressing standard setting; a study of score gains among repeat examinees; two articles addressing the validity of a medical certification exam; a study of the relationship of certification and safety knowledge in the food industry; an investigation of the predictive validity of a medical in-training examination; an evaluation of the effects of interrupting a computer-delivered test; an assessment of the effects of candidate strategies on automated scoring; and two related articles on the reliability and value of subscores.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s Legal Beat describes a case in which one exam preparation organization sued another over the harvesting of copyrighted items. The outcome of this case may have implications for licensure and certification testing programs as well.
  • Perspectives on Testing: Responses to your Questions, by Chuck Friedman and contributors- This new column provides expert responses to questions posed by CLEAR members. Questions addressed in this issue focus on continuing competence and feedback to candidates. Readers are encouraged to submit questions for future columns and conferences to clear@clearhq.org. Please enter “CER Perspectives on Testing” in the subject line.

CLEAR Exam Review, Fall 2014, Volume 24, No. 2

  • Abstracts and Updates, by George Gray- In this issue, George Gray reviews a multitude of publications in the Abstracts and Updates column, including articles focusing on objective structured clinical examinations; barriers to professional mobility for internationally educated professionals; a role delineation study in orthopaedic nursing; a wide variety of psychometric issues; test format and administration issues; and guides to useful credentialing-related software. There is something for everyone in this extensive review of recently published material.
  • Technology and Testing, by Brian Bontempo- Brian Bontempo continues his series on data visualization. This article, the second in the series, presents a step-by-step process for designing informative data visualizations, along with Dr. Bontempo’s insights and guidance on successfully executing each step.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s Legal Beat describes a recent case in which a licensure candidate filed suit against a state regulatory board over issues related to disability accommodations. Claims were also filed against the national association of which the state board is a member. The case illustrates the complexity of issues that are addressed by the courts in evaluating such litigation.
  • Perspectives on Testing: Responses to your Questions, by Chuck Friedman and contributors- A new column, Perspectives on Testing: Responses to Your Questions, makes its debut with this issue. Chuck Friedman coordinates this column, which spotlights questions posed by CLEAR members at the annual conference. Answers are provided by experts in measurement and licensure/certification testing. Readers are encouraged to submit questions for future columns and conferences to clear@clearhq.org. Please enter “CER Perspectives on Testing” in the subject line.
  • 2014 Standards for Educational and Psychological Testing Released, by Ron Rodgers- Readers will be pleased to learn that the long-awaited updated edition of the Standards for Educational and Psychological Testing, jointly published by the American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education, is now available. Ron Rodgers presents a brief overview of the updated Standards, with a more detailed review to come in future CLEAR publications.
  • Incorporating Continuing Competency into Certification Maintenance with Attention to Certificant Concerns, by Fran Byrd- This issue also includes an article by Fran Byrd of the National Certification Corporation (NCC) describing the NCC’s recent modification of its approach to maintenance of certification, in which online specialty assessments are used to determine the areas in which continuing education is needed for individual certificants. This paper describes the issues the NCC felt were important to address in making this change and illustrates an approach to certification maintenance that departs from the one-size-fits-all continuing education requirements that were common in the past, focusing instead on personalized, ongoing professional development.

CLEAR Exam Review, Spring 2014, Volume 24, No. 1

  • Abstracts and Updates, by George Gray- In this issue, George Gray’s Abstracts and Updates column begins by examining perspectives on validity as presented in a recent volume of the Journal of Educational Measurement, then reviews a book on automated item generation. He also summarizes five articles on a variety of topics: an assessment of the added value of subscores, a comparison of first-time and repeat test taker ratings in a clinical skills assessment, the effect of item position in tests administered via computer, the effect of changing answers on test takers’ exam scores, and methodology for equating across raters and occasions in a constructed examination.
  • Technology and Testing, by Brian Bontempo- Brian Bontempo presents the first in a series of articles on data visualization. Data visualization involves presenting data by means of visual aids such as charts, graphs, and tables. This article introduces the reader to the theory and concept of data visualization and its application in the context of testing. Additional columns will focus on the design of effective data visualization for reporting and other purposes.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column describes a true instance of item harvesting in a high-stakes examination, the legal action taken by the credentialing organization, the progression of the case through the courts, and the final outcome. This column will especially be of interest to regulators who suspect that item theft has occurred or who suspect that their exams may be vulnerable to this type of cheating.
  • Using the Delphi Method to Determine Test Specifications from a Job Analysis, by Lynn C. Webb and Kirk Becker- This article discusses the use of the Delphi technique to translate job analysis data into test specifications. Webb and Becker have had some success in using this methodology in lieu of face-to-face meetings, particularly when the number of stakeholders is of necessity quite large (for example, when all 50 states must be represented).
  • Comparison of English and Spanish Translations of a National Certification Examination, by Hong Qian, Xiao Luo, Ada Woo, Philip Dickison and Doyoung Kim- This article describes an investigation into the comparability of a Spanish translation of an examination and the original English version. This article is more technical in focus than most of our selections. The authors employ and combine three different methods for detecting differential item functioning (DIF) between English- and Spanish-speaking examinees. The reader should be cautious about generalizing the results to other translated examinations. As noted by the CER review board, the sample sizes for the Spanish items are quite small and the statistical methodologies used are best applied with much larger samples. Reviewers also indicated that this type of evaluation would be strengthened by having the qualitative bias review performed by a panel of experts rather than a single individual. Nevertheless, the article is illustrative of one approach to examining the comparability of translated exam items, using the data and resources available. And it clearly brings home the point that it is important to seek empirical evidence that translated items are indeed comparable to the original items, rather than assuming that both versions of the examination are measuring the intended construct equally.
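
For readers unfamiliar with DIF screening, the following is a minimal sketch of one common approach, a Mantel-Haenszel comparison of a reference group and a focal group stratified by total score. It is offered only as an illustration of the general idea; the data are fabricated, and the three specific methods combined by Qian and colleagues are not reproduced here.

```python
# Minimal Mantel-Haenszel DIF sketch for a single item (illustrative only).
from collections import defaultdict

def mantel_haenszel_odds_ratio(records):
    """records: iterable of (total_score, group, correct) tuples, where
    group is 'ref' or 'focal' and correct is 1/0. Examinees are stratified
    by total score; a common odds ratio near 1.0 suggests little DIF."""
    strata = defaultdict(lambda: {"ref": [0, 0], "focal": [0, 0]})
    for score, group, correct in records:
        strata[score][group][0 if correct else 1] += 1  # [right, wrong]
    num = den = 0.0
    for cells in strata.values():
        a, b = cells["ref"]    # reference group: right, wrong
        c, d = cells["focal"]  # focal group: right, wrong
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den if den else float("nan")

# Fabricated responses: (total score, group, answered this item correctly).
data = [
    (10, "ref", 1), (10, "ref", 1), (10, "ref", 0), (10, "focal", 1), (10, "focal", 0),
    (20, "ref", 1), (20, "ref", 1), (20, "focal", 1), (20, "focal", 0), (20, "focal", 0),
]
print(mantel_haenszel_odds_ratio(data))  # well above 1.0: item favors the reference group
```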

CLEAR Exam Review, Fall 2012, Volume 23, No. 2

  • To the Reader- letter from CLEAR President Michelle Pedersen
  • Abstracts and Updates, by George Gray - In this issue, George Gray’s Abstracts and Updates column reviews a book on designing a certification program in sufficient detail to whet the appetite of anyone interested in undertaking such an activity. He also summarizes articles on different aspects of validity and describes their application to credentialing examination issues. Other articles reviewed in this column cover simulations; ethics in testing and licensure; access issues for examinees whose first language is not English; credential test preparation; multistage testing; best practices for score reporting; and a scope-of-practice issue and its potential relationship to job analysis.
  • Technology and Testing, by Robert Shaw - Robert Shaw discusses innovative item types and, from his viewpoint, the issues readers may want to consider when incorporating innovative items into an existing credentialing examination program.
  • Legal Beat, by Dale Atkinson - Dale Atkinson’s legal column explores a recent case in which a licensed professional from one state was not permitted to practice in a second state because of differences between the two states’ testing requirements. The applicant sued the second state to have the rules overturned. This article makes for interesting reading for licensure board members and testing vendors alike.
  • What’s in a Score? Principles and Properties of Scoring, by Jerry Gorham, Ada Woo and Karen Sutherland- This article summarizes different aspects of both item scores and test scores. The authors describe different types of scores and review the desirable properties of different scoring systems.
  • Setting Valid Performance Standards on Educational Tests, by Stephen Sireci, Jennifer Randall and April Zenisky- This article describes a methodology, which readers should find compelling, for building validity into standard-setting processes. Although this article is written with education in mind, the tenets of building validity into your standard-setting process and the methods it focuses attention on are worth your thoughtful consideration in the credentialing arena.

CLEAR Exam Review, Spring 2012, Volume 23, No. 1

  • Abstracts and Updates, by George Gray - George Gray’s Abstracts and Updates column reviews a book on psychometric theory, an internet paper on item response theory and Rasch modeling, a book about developing a quality certification program, and two instructional modules- one on subscores and a second on linking and equating. He also summarizes four articles covering equating using a graphical approach, changing item responses, adjusting human scores for rater effects, and certification in pediatric nursing.
  • Technology and Testing, by Robert Shaw- Robert Shaw talks about item banking standards. His intention with this article is to describe a current set of concepts that could affect some item banks today and in the future. This concept of item-banking embraces interoperability among multiple systems, which is in contrast to systems that were built in the past to serve the parochial needs of a single user.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column explores a recent case in which a licensure candidate had his scores cancelled and his license revoked for suspected impersonation. Given the outcome of the case, the column will be of interest to many test sponsors and test vendors and deserves your attention.
  • Effect on Pass Rate When Innovative Items Added to the National Certification Examination for Nurse Anesthetists, by Mary Ann Krogh- This article addresses the effect that introducing innovative item types has on pass rates in a certification test. The article has practical implications for credentialing programs seeking to introduce new item types and serves as an introduction to an area that is sure to become increasingly important in the future.
  • Challenges in Developing High Quality Test Items, by Greg Applegate- This article summarizes the factors that lead to quality tests as well as the steps needed to create quality items.
  • A Multistate Approach to Setting Standards: An Application to Teacher Licensure Tests, by Richard Tannenbaum- This article discusses the subject of standard setting for multistate licensing programs. It describes a multistate standard setting approach designed to address the issues presented by the more traditional state-by-state approach for recommending passing scores on teacher licensure tests. Although the multistate approach has been used with teacher licensure tests, it certainly may be applied to other licensure contexts that involve multiple jurisdictions or multiple agencies.

CLEAR Exam Review, Fall 2011, Volume 22, No. 2

  • Abstracts and Updates, by George Gray- George Gray’s Abstracts and Updates column reviews an article on ability estimation within computer adaptive tests, two articles on equating subscores and three on equating.  This column also covers topics that include identifying “good” multiple-choice distractors, automated test assembly, reliability studies, combining scores on tests with mixed item formats and assessing the fairness of tests administered to speakers of foreign languages.
  • Technology and Testing, by Robert Shaw- Robert Shaw discusses item and option scrambling in test delivery in the second of a two-part series dealing with procedures designed to help increase security for a licensure examination. He also demonstrates an innovative use of differential item functioning (DIF) analysis that merits your attention.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column explores a recent case in which a licensure candidate took a mixed-model licensure examination three times, each time failing the essay portion of the examination. The candidate filed a lawsuit first in federal court, then a similar and expanded complaint in state court. The candidate’s complaint involved issues of adverse impact, negligent test design, and failure to meet quality criteria. The column describes the complications involved in re-litigating the same case and previously determined issues in different courts.
  • The Role of Security in Today’s Testing Programs, by Marie Garcia and Ada Woo- This article addresses the role of security in licensure testing programs. The authors describe the testing format, the candidate identity verification process, incident reporting, and data forensics in the service of test security for the nurse licensure program. Some of their practices may be of use in your programs.
  • Automated Scoring of Constructed Response Items for Large-Scale High-Stakes Licensure Examinations, by Chaitanya Ramineni and John Mattar- This article describes the implementation and evaluation of a computer program used to score the essay portion of a licensure examination.  The article describes the scientific rigor and care by which an automated essay scoring model is developed and continually evaluated.

CLEAR Exam Review, Spring 2011, Volume 22, No. 1

  • Abstracts and Updates, by George Gray- George Gray’s Abstracts and Updates column reviews a book on item response theory; an educational module and two articles on scaling and equating; and articles about testing accommodations, automated scoring of essays, continuing competency, and issues in computer adaptive testing.
  • Technology and Testing, by Robert Shaw- Robert Shaw discusses item and option scrambling in test delivery in the first of a two-part series.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column explores a recent case where a licensure candidate sought her own preferred accommodation rather than the traditional reasonable accommodation favored by the test sponsor. Given the outcome of the case, the column will be of interest to many test sponsors and test administrators and deserves your attention.
  • The Importance of Data Forensic Applications in Today’s Testing Programs, by Jerry Gorham and Ada Woo- This article addresses data forensics and its various applications to testing programs. The article covers a variety of applications and serves as an introduction to an area that is sure to become increasingly more important in the future.

CLEAR Exam Review, Fall 2010, Volume 21, No. 2

  • Abstracts and Updates, by George Gray- George Gray discusses a number of publications dealing with (1) applications of Rasch measurement, (2) validity, (3) standard setting and cut scores, (4) subscore information, (5) the comparison and contrast of multiple choice and constructed-response tests, and (6) research publications in the context of specific certification and licensure programs (i.e., osteopathic medicine, anesthesiology, physical therapy, and psychology). This column provides an excellent thumbnail sketch of each of the articles reviewed.
  • Technology and Testing, by Cameron Clyne and Robert Shaw- Cameron Clyne and Robert Shaw continue their exploration on the topic of using computer-delivered and/or paper-based surveys for job analyses. They describe similarities and differences associated with offering the survey in paper-based format versus on computer. They report some interesting positive as well as negative aspects of computerized surveying. If you have ever thought about surveying via computer or you are currently conducting your surveys on computer, this column is worth your time.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a recent appellate case involving the Medical Board of California’s passive adoption of a passing score used in a national examination and the repercussions that action had on the limitation of the number of times a candidate was allowed to take the licensure examination. The lesson learned from this case is that Boards should review their statutory mandates for formal adoption of passing scores in national examination programs and make certain that formal adoption rather than passive acceptance of national passing scores is followed.
  • Evaluating Content Validity in Multistage-Adaptive Testing, by Leah T. Kaira and Stephen G. Sireci- This article addresses issues related to evaluating content coverage in computerized examinations, an element integral to score validity. The authors describe an investigation of content coverage using a method of testing called multistage adaptive testing that makes for interesting reading. We think this article has important validity implications for testing programs currently using or considering computer-based testing.

CLEAR Exam Review, Winter 2010, Volume 21, No. 1

  • Abstracts and Updates, by George Gray- George Gray discusses a number of articles dealing with (1) standard setting issues, (2) scoring performance tests over a period of time, (3) an introduction to constructed-response questions, (4) item response theory issues related to person-fit, (5) CBT issues, and (6) equating issues. This column provides an excellent thumbnail sketch of each of the articles reviewed. 
  • Technology and Testing, by Robert Shaw- Robert Shaw explores the topic of using both a computer-delivered and a paper-based survey for a job analysis. He describes some issues, as well as logistic problems, associated with offering the survey in paper-based format as well as on computer. He has some interesting results, and this column is worth your time.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses the recent Supreme Court case involving the New Haven, Connecticut, firefighter promotion examinations. As you may recall from recent news stories, the City of New Haven, out of fear of litigation, discarded the results of the tests. The story itself- given that it has to do with an examination- makes for interesting reading.
  • Understanding the Impact of Enemy Items on Test Validity and Measurement Precision, by Ada Woo and Jerry Gorham- This article addresses issues related to “enemy” items and their impact on measurement and score validity. It describes the different forms these items can take and discusses a number of methods for managing them.
  • Exploring the Optimal Number of Options in Multiple Choice Testing, by Kelly Piasentin- This article describes a research study that explored the optimal number of options in a multiple-choice item, using live candidate data from an operational credentialing examination. The article is interesting in that the methodology used real candidate data and seems to confirm other published research.
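
As a rough companion to the Piasentin study above, here is a minimal sketch of the kind of option-level tally such research relies on: counting how often each distractor is chosen and flagging those that attract almost no examinees. The response data and the 5% threshold are hypothetical illustrations, not values taken from the article.

```python
# Minimal distractor-analysis sketch (illustrative only).
from collections import Counter

def nonfunctioning_distractors(responses, key, threshold=0.05):
    """responses: list of chosen option letters for one item.
    key: the correct option. Returns distractors chosen by fewer than
    `threshold` of examinees (candidates for removal); options chosen
    by no one simply do not appear in the tally."""
    counts = Counter(responses)
    n = len(responses)
    return [opt for opt, c in counts.items()
            if opt != key and c / n < threshold]

responses = ["A"] * 70 + ["B"] * 20 + ["C"] * 8 + ["D"] * 2   # fabricated data
print(nonfunctioning_distractors(responses, key="A"))          # ['D']
```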

CLEAR Exam Review, Summer 2009, Volume 20, No. 2

  • Abstracts and Updates, by George Gray- George Gray discusses the new certification handbook from the National Organization for Competency Assurance (NOCA) as well as a number of articles dealing with (1) the effects of retesting candidates on credentialing examinations, (2) fit of the Rasch model to item response data when item discrimination indices vary, (3) item response theory, (4) practice analysis issues related to survey design, sampling, and respondents’ background information, and (5) consequential validity. This column provides an excellent thumbnail sketch of each of the chapters and articles reviewed.
  • Technology and Testing, by Karen Flint and Robert Shaw- Karen Flint and Robert Shaw explore the topic of using video images in tests. They describe some development issues and logistic problems associated with continuing to offer the test in paper-based format as well as on computer.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a case involving a situation in which a licensure candidate was discovered to have video recording equipment sewn into a garment worn during the administration of a licensure examination. The examinee was convicted of criminal charges and the examination provider prevailed in a civil suit against the candidate. The story itself makes for interesting reading.
  • Developing Models That Impact Item Development, by Anne Wendt, Shu-chuan Kao, Jerry Gorham, and Ada Woo- This article addresses test development issues related to developing items targeted to specific content and item difficulty, focusing on the creation of item variants, or clones, of well-performing items. The authors suggest that use of this methodology results in greater quality control over the test development process.
  • Virtual Standard Setting, by Irvin R. Katz, Richard J. Tannenbaum and Priya Kannan- This article describes a methodology by which a web-based standard setting study was conducted using readily available technology. The article adds to the literature on standard setting by showing little difference between face-to-face meetings and virtual, Internet-based meeting protocols.

CLEAR Exam Review, Winter 2009, Volume 20, No. 1

  • Abstracts and Updates, by George Gray- George Gray discusses a number of articles dealing with (1) concepts of reliability and measurement error, (2) methods for detecting differential item functioning, (3) item response theory studies, (4) multiple language and translation issues, and (5) concepts and competence in credentialing. This column provides an excellent thumbnail sketch of each of the articles reviewed.
  • Technology and Testing, by Robert Shaw- Robert Shaw explores the topic of stand-alone computer-based testing systems. He describes features and applications of five such computer-based test delivery systems.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a case involving a situation in which a licensed person was required to take a computerized test after having been brought up on charges of incompetence by the regulatory Board. The licensee failed the test and argued that the failure was attributed to unfamiliarity with computers. Following a hearing, the Board revoked the license. The applicant appealed the license revocation and the resulting legal issues and outcomes will be of interest to any board dealing with evaluating the competence of current license holders according to Board rules.
  • Memorability of Innovative Items, by J. Christine Harmes and Anne Wendt- This article addresses the memorability of innovative items, comparing how well examinees remember such items versus standard multiple-choice questions in a controlled experimental setting.
  • Readability of Licensure Examinations, by Ada Woo, Anne Wendt, and WeiWei Liu- This article deals with the readability of multiple-choice test questions. The authors suggest readability analyses as a way to ensure items are written at the appropriate reading level.
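
As a small illustration of the sort of readability analysis Woo, Wendt, and Liu recommend, the sketch below computes a Flesch-Kincaid grade level for a fabricated item stem. The syllable counter is a crude heuristic and the stem is invented; an operational program would rely on a vetted readability tool rather than this sketch.

```python
# Rough Flesch-Kincaid grade-level check for an item stem (illustrative only).
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels; at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

stem = ("A client reports dizziness after taking a new medication. "
        "Which action should the nurse take first?")
print(round(flesch_kincaid_grade(stem), 1))  # approximate grade level of the stem
```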

CLEAR Exam Review, Summer 2008, Volume 19, No. 2

  • Abstracts and Updates, by George Gray- George Gray’s column describes a new book “A Rasch Primer: The Measurement Theory of Georg Rasch” focusing on item response theory (IRT). He also discusses a number of articles dealing with the following topics: constructed-response items, standard setting, and validity.
  • Technology and Testing, by Robert Shaw- Robert Shaw continues with the topic of biometric security systems. He describes weaknesses of biometric systems, biometric system costs, and privacy issues. This is the second of a two-part article dealing with biometric security systems.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a case involving a situation in which a licensure applicant admitted on the license application to having a psychological impairment. The licensing board requested the applicant undergo and pay for a psychological evaluation. The applicant brought suit and the resulting legal issues and outcomes will be of interest to any board dealing with evaluating license applications.
  • Addressing Nonresponse in Surveys, by Anne Wendt- This article addresses the issue of nonresponse in survey work, specifically in relation to practice analyses.
  • The Design of Innovative Item Types: Targeting Constructs, Selecting Innovations, and Refining Prototypes, by Cynthia Parshall and J. Christine Harmes- This article deals with the importance of matching the intent and purpose of measuring examinees with the selection and use of innovative item types. The authors also discuss various ways to refine the design and format of these kinds of items. The article provides a listing of suggested steps for evaluation and implementation if you are considering the use of some of these item types.
  • Evidence-Centered Design: A Lens Through Which the Process of Job Analysis May Be Focused to Guide the Development of Knowledge-Based Test Content Specifications, by Richard Tannenbaum, Stacy L. Robustelli and Patricia A. Baron- This article discusses the incorporation of evidence-centered design to help guide the job analysis process and the production of test content specifications. It takes up the often asked, but rarely answered, question of how to move from job analysis to test specifications and gives some suggestions for accomplishing that goal in a practical and efficient manner.

CLEAR Exam Review, Spring 2008, Volume 19, No. 1

  • Abstracts and Updates, by George Gray- George Gray’s column cites a number of chapters in a new book, “Improving Testing: Applying Process Tools and Techniques to Assure Quality,” and discusses a number of articles dealing with the following topics: practice analysis, standard setting, retest effects on same and parallel test forms, testing with dual languages and audio, accommodations for testing, and equating.
  • Technology and Testing, by Robert Shaw- Robert Shaw deals with the topic of biometric security systems. He describes common authentication methods, typical biometric configurations, characteristics of biometrics, and biometric system errors. This is the first of a two-part article dealing with biometric security systems.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a case involving a situation in which a licensing board produced a practical examination in a haphazard manner. The court ruled against the licensing board. This case clearly demonstrates that part of the defensibility of an examination relies on the fundamental process used to develop the assessment instrument.
  • Identifying Item Parameter Drift in Multistage Adaptive Tests, by Craig S. Wells, Stephen G. Sireci, and Kyung T. Han- This article deals with the importance of being able to identify item parameter drift in multistage adaptive tests. The authors also discuss various techniques for determining whether item parameter drift has occurred; a minimal illustration of the basic idea appears after this issue’s listing.
  • Investigation of the Item Characteristics of Innovative Item Formats, by Anne Wendt- This article discusses the item characteristics of several different innovative item formats.
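
The minimal illustration promised in the Wells, Sireci, and Han entry above: flagging items whose estimated difficulty has shifted between two administrations. The item difficulties and the 0.5-logit flag threshold are hypothetical, and the article’s actual detection techniques for multistage adaptive tests are considerably more sophisticated than this sketch.

```python
# Minimal item-parameter-drift flag (illustrative only).
def flag_drift(baseline_b, current_b, threshold=0.5):
    """baseline_b, current_b: dicts mapping item id -> Rasch difficulty
    (in logits) estimated at two points in time. Returns items whose
    absolute change in difficulty exceeds the threshold."""
    return {item: current_b[item] - baseline_b[item]
            for item in baseline_b
            if item in current_b
            and abs(current_b[item] - baseline_b[item]) > threshold}

baseline = {"item01": -0.20, "item02": 0.75, "item03": 1.10}  # fabricated values
current  = {"item01": -0.15, "item02": 0.10, "item03": 1.95}  # fabricated values
print(flag_drift(baseline, current))   # item02 and item03 flagged for review
```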

CLEAR Exam Review, Summer 2007, Volume 18, No. 2

  • Abstracts and Updates, by George Gray- George Gray’s column features the fourth edition of Educational Measurement. He discusses chapters dealing with the state of the field in educational measurement, questions for future standard setting research, quality control measures in testing, evidence-centered design and testing, and multistage computer-based testing.
  • Technology and Testing, by Robert Shaw and Robert Clark- Robert Shaw and Robert Clark discuss the use of digital images as test item stimuli. They note the possible advantage of using true-to-life stimuli in certification examinations as a way to improve the relevance and validity of the examination. They discuss the challenges associated with the use of visual images, some possible solutions to those challenges, and provide a set of recommendations for producing acceptable images on screen as well as in a paper format.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column notes that regulatory boards are created and empowered to regulate professions and in doing so may consider prerequisites such as education, experience, and examinations. He further notes that in some professions and jurisdictions additional criteria such as good moral character and other personal history variables are considered. Dale discusses a case involving the number of times a candidate is allowed to take an examination and how that requirement changed over time. This case presents many issues for consideration related to the important public protection mission of regulatory boards. These include the legal authority to limit the number of examination attempts allowed, the order in which licensure requirements are assessed, as well as other relevant issues.
  • An Analysis of Post Entry-Level Registered Nurse Practice, by Anne Wendt and Casey Marks- This article describes a study designed to focus on the activities performed by post entry-level Registered Nurses. The procedures used and results of this study may be useful to others considering methods for assessing or assuring the continued competence of credentialed professionals.

CLEAR Exam Review, Winter 2007, Volume 18, No. 1

  • Abstracts and Updates, by George Gray- George Gray cites articles dealing with the following topics: validity in court, point counter-point regarding task-analysis methodology, point counter-point regarding standard-setting methodology, additional standard setting articles, item response theory, and differential item functioning.
  • Technology and Testing, by Robert Shaw- Robert Shaw deals with the topic of web conferencing.  He describes the primary function of a web conference, the features available, costs, security, and human factor issues.  The column provides a good deal of useful information for those of you considering web conferencing.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses a case involving a test preparation company and the procedures they used to make their practice examinations look like the actual licensing examination.  These procedures included having members of the test preparation company take the licensing examination multiple times and write down questions on “scratch paper.” The court ruled that the test preparation company had violated copyright laws and ordered that they pay a substantial fine.
  • Making the Test Development Process More Efficient Using Web-Based Virtual Meeting, by Deborah L. Schnipke and Kirk A. Becker- This article deals with using web-based virtual meetings to make the test development process more efficient and cost-effective.  It discusses the types and requirements for virtual meetings as well as their advantages and disadvantages.
  • Assessing Critical Thinking Using a Talk-Aloud Protocol, by Anne Wendt, Lorraine E. Kenny, and Casey Marks- This article describes the use of talk-aloud protocols to explore the kinds of cognitive functioning examinees (recently licensed people) use to answer both multiple-choice and alternate item types. The authors provide some interesting results and conclude that talk-aloud studies such as those described are feasible and provide further evidence of what certain items are measuring.


CLEAR Exam Review, Summer 2006, Volume 17, No. 2

  • Abstracts and Updates, by George Gray- George Gray discusses several of the chapters in the new Handbook of Test Development, including some discrepancies among the chapter authors. He also reviews an article on computer adaptive testing that provides an overview of the test delivery methodology. Finally, he discusses an article describing a literature review of the Bookmark Standard Setting Method.
  • Technology and Testing, by Robert Shaw- Robert Shaw deals with the topic of self-service systems in general and specifically their use in credentialing programs. An example of a self-service system is an on-line registration system.
  • Comparability of Practice Analysis Survey Results Across Modes of Administration, by Thomas R. O’Neill, Reed Castle, and Casey Marks- This article describes a study comparing paper-based practice analysis surveys delivered through the mail and an Internet-delivered survey. The results are interesting and deserve your attention.
  • Variable-Length Computerized Classification Testing with Item Response Theory, by Nathan Thompson- This article provides an overview of the potential of computerized test delivery for making classification decisions. The article describes how a computerized classification test is developed and its potential utility and limitations.

CLEAR Exam Review, Winter 2006, Volume 17, No. 1

  • Abstracts and Updates, by George Gray- George Gray cites a number of articles, measurement books, and presentations dealing with the topics of score interpretations, a survey of measurement, equating issues, changing answers on multiple-choice examinations, the case for three-option multiple-choice questions, and a discussion of a variety of issues related to computer adaptive testing.
  • Technology and Testing, by Lee Schroeder and Reed Castle- Lee Schroeder and Reed Castle deal with the topic of reporting scores.  They provide a survey of comments on the different ways credentialing programs report scores in light of validity considerations.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses an interesting issue related to state licensing rights versus due process rights. The regulated profession discussed is retail florists, which raises some interesting perspectives regarding protection of the public. The column describes the review and conclusion of the court.
  • Assessing the Impact of English as a Second Language Status on Licensure Examinations, by Thomas R. O’Neill, Casey Marks and Weiwei Liu- This article describes the use of Differential Item Functioning (DIF) analysis to examine the fairness of knowledge-based licensure examinations administered to nursing candidates who use English as a Second Language (ESL). Their approach to using DIF analyses to answer questions of test fairness related to primary language, along with their results, makes this article worth your attention.
  • Professional Exam Specification Development: A Web-based Survey Experience, by David S. Chapman- This article describes the use of an Internet-based survey methodology to efficiently conduct a job analysis for a widely dispersed group of professionals, in this case Naval Architects and Marine Engineers. His perspective and commentary on how the process worked for his profession is interesting and, we think, provocative in some respects.

CLEAR Exam Review, Summer 2005, Volume 16, No. 2

  • Abstracts and Updates, by George Gray- George Gray cites a number of articles dealing with topics of interest: using skill, knowledge, and ability statements to develop a weighted content outline; flagging scores from accommodated test administrations; setting performance standards; formula scoring and key balancing; rating a subset of items for an Angoff study; reporting examination subscores; weighting examination components; and constructing measures using item response theory (IRT).
  • Technology and Testing, by Lee Schroeder and Reed Castle- Lee Schroeder and Reed Castle discuss the delivery of an examination from a CBT network server to the candidate workstation. Two general types of CBT delivery modes are discussed: Local Area Network-based administration and Internet administration. The benefits and limitations of each type are discussed.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses an interesting case involving a decision to invalidate test scores. He discusses the issue of who makes the ultimate determination as to score invalidation and the consequences of such a decision. In the case cited, the court recognized the importance of deference to testing experts and the need to allow for group invalidation of scores in order to maintain the integrity of the entire examination program.
  • The Impact of Internet Sites on Item Exposure and Item Parameter Drift, by Russell W. Smith- This article deals with the emergence of Internet sites in which candidates post items from live credentialing examinations.  The article describes an experiment designed to evaluate the impact of posted items on the difficulty and validity of a certification examination over time.
  • Equating 21st Century Licensure and Certification Tests, by Lisa A. Keller and Stephen G. Sireci- This article discusses the equating of licensing and certification examinations. The article describes the important factors credentialing agencies must consider when choosing and implementing an equating process. Recommendations for dealing with these and other issues are provided.
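
To make the idea of equating concrete, here is a minimal sketch of mean equating under a random-groups design, one of the simplest members of the family of methods Keller and Sireci survey. The scores are fabricated, and operational programs would generally use larger samples and more sophisticated (linear or IRT-based) procedures.

```python
# Minimal mean-equating sketch under a random-groups design (illustrative only).
from statistics import mean

def mean_equate(new_form_scores, old_form_scores):
    """Return a function mapping a new-form raw score onto the old form's
    scale by adjusting for the difference in group means."""
    shift = mean(old_form_scores) - mean(new_form_scores)
    return lambda raw: raw + shift

old = [72, 75, 78, 80, 81, 84]   # hypothetical old-form scores
new = [70, 73, 75, 77, 79, 82]   # hypothetical new-form scores (slightly harder form)
to_old_scale = mean_equate(new, old)
print(to_old_scale(75))           # a new-form 75 expressed on the old form's scale (~77.3)
```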

CLEAR Exam Review, Winter 2005, Volume 16, No. 1

  • Testing Across the Nation, by Sandy Greenberg- Sandy Greenberg deals with the challenge of testing the repeating licensure candidate.  She discusses the issue from a policy and legal point of view.
  • Abstracts and Updates, by George Gray- George Gray cites a number of articles dealing with the topics of multiple choice item development, item response theory (IRT) item analysis, IRT model fit and application to standard setting, adapting tests to foreign languages, computer-based testing issues, technical issues in score equating, fairness review guidelines, and IT certification.
  • Technology and Testing, by Lee Schroeder and Reed Castle- Lee Schroeder and Reed Castle deal with the topic of Multi-Stage Adaptive Testing.  They provide insights on the purposes and value of this computer-based administration format in that it may be useful to programs with smaller item pools and describe how it might work in practice.
  • Legal Beat, by Dale Atkinson- Dale Atkinson’s legal column discusses three cases, the first regarding a request to rescore an essay test after the scores for all candidates had been distributed. This may be a useful case for boards that are considering candidate appeals of essay grading after the fact. The second case describes the revocation of an individual’s license based upon evidence that an imposter took the licensure examination for the candidate. The third case describes a court decision involving the denial of a candidate’s ADA request for double time on a licensure test administration.
  • Administering the NCLEX Examinations Internationally, by Casey Marks- This article describes the process and decisions taken by the National Council of State Boards of Nursing to administer the registered nurse and practical nurse licensure examinations outside the United States. It makes for interesting reading as it describes the issues and considerations they weighed and provides some lessons learned for programs considering administering their examinations internationally.

CLEAR Exam Review, Summer 2004, Volume 15, No. 2

  • Testing Across the Nation, by Sandy Greenberg & Kenneth Doucet- Sandy Greenberg & Kenneth Doucet use an interview format with William G. Harris, Executive Director of the Association of Test Publishers (ATP) to discuss ATP’s Guidelines for Computer-Based Testing. These guidelines were formulated to provide direction on principles and best practices for developing and administering computer-based examinations.
  • Abstracts and Updates, by George Gray- George Gray cites a number of articles dealing with the topics of automated essay scoring, research on the Angoff standard-setting method, the effects of extra time on test performance, the weighting of constructed-response items, and computer-based testing.
  • Technology and Testing, by Lee Schroeder and Reed Castle- Lee Schroeder and Reed Castle discuss the use of Internet-based surveys. Their column mentions some of the benefits and limitations associated with this technique and offers guidance on how to use it more effectively.
  • Legal Beat, by Dale Atkinson- This legal column discusses an interesting case involving test security and the ability of owners to protect examination items and other proprietary information. The case is interesting because it involves test preparation materials developed in India and marketed in the United States. Dale discusses the complex issues involved in deciding which law applies, which court should hear a particular matter, and whether alleged infringers from foreign countries must appear in the United States to defend themselves.
  • Evidence-Centered Design for Certification and Licensure, by David M. Williamson, Robert J. Mislevy, and Russell G. Almond- This article discusses a new approach to examination development called Evidence-Centered Design (ECD). The authors explain the elements of ECD and how they believe it can result in more valid credentialing examinations.
  • Proctored and Secure Examinations Administered Over the Internet, by Scott E. Arbet, Carol Morrison, and Roberta Griffin- This article discusses many of the practical issues surrounding testing on the Internet. It describes how Internet testing can be conducted in a secure and cost-effective manner.

CLEAR Exam Review, Winter 2004, Volume 15, No. 1

  • Testing Across the Nation, by Sandy Greenberg- Sandy Greenberg deals with threats related to candidate cheating and test security.  She does so by presenting three different perspectives: from the professions, the schools, and testing vendors.
  • Abstracts and Updates, by George Gray- George Gray cites a number of articles dealing with the topics of item-writing and multiple-choice formats, criteria for evaluating the quality of examinations, computer-based testing, standard setting, significance testing, job analysis, and studies related to the development of the Uniform CPA Examination.
  • Technology and Testing, by Lee Schroeder and Reed Castle- Lee Schroeder and Reed Castle deal with the topic of performance testing. They provide insights on the purposes and value of performance testing and emphasize that the focus and intent of a performance test is not on the cognitive knowledge component, but rather on the integration of cognitive knowledge and psychomotor behavior, or on the psychomotor behavior alone.
  • Legal Beat, by Dale Atkinson- This column discusses a case in the District Court of Appeals of Florida that emphasizes the need for licensing boards to have both a working knowledge of examination development and the procedural aspects of administrative proceedings.
  • The NBME Medical School Resource Site: A Multi-purpose Application for Communicating with Medical Schools, by Carol Morrison Featherman, Melanie Nelson, Ellen Landau, Ann Simms, and Aggie Butler- This article from the National Board of Medical Examiners (NBME) notes that Steps 1, 2, and 3 of the United States Medical Licensing Examination transitioned in 1999 from paper-and-pencil examinations that were administered twice a year to computer-based examinations that are administered continuously.  This change required NBME to develop new methods of communicating with medical schools for verification of enrollment and graduation status and for score reporting.  This article describes the web-based technology that was developed to meet these needs and should be informative to other agencies transitioning from paper-and-pencil testing to computer-based testing.
  • Setting Passing Scores on Licensure Examinations Using Direct Consensus, by Stephen G. Sireci, Ronald K. Hambleton, and Mary J. Pitoniak- This article notes that in many situations there is a need to set valid and defensible passing scores quickly.  A new method, the direct consensus method for setting passing scores, is described, along with the results obtained from two applications of this method.

CLEAR Exam Review, Summer 2003, Volume 14, No. 2

  • Abstracts and Updates, by George Gray- Gray cites a number of articles dealing with issues related to practice analysis, standard setting, language translation, and reliability.
  • Software Review, by Lee Schroeder and Reed Castle- Schroeder and Castle discuss how Item Response Theory equating can be used to help testing programs stay in compliance with specific mandates related to the scoring of examinations and still maintain a scale that requires the same level of candidate proficiency to pass each examination form.
  • Legal Beat, by Dale Atkinson- This column discusses the situation in which a licensing board may consider a certification examination as part of the licensing process. Dale emphasizes that while private sector certification serves a specific purpose, boards must be careful when relying on such an examination to ensure that it focuses on general practice and covers the broad range of knowledge and skills required for competent performance.
  • Are Your Ducks in a Row? Time to Check Your ADA Compliance, by Shelby Keiser- This article deals with the Americans with Disabilities Act (ADA). Shelby notes that CLEAR is in the process of revising and updating its guide to the ADA entitled “Americans with Disabilities Act: Information and Recommendations for Credentialing Examinations.” Her article highlights some key areas that credentialing organizations should review when evaluating their existing policies and procedures.
  • Beyond Multiple Choice: Innovations in Professional Testing, by Betty Bergstrom and Andria Cline- This article discusses the increasing array of new item types and functionalities available in computer-based testing. The authors note that innovative item types can be expensive and will not necessarily lead to better measurement. They caution that sound psychometric criteria should be used to evaluate these item types to ensure that they enhance the validity and reliability of your examination.

CLEAR Exam Review, Winter 2003, Volume 14, No. 1

  • Testing Across the Nation, by Sandra Greenberg- Greenberg deals with the topic of practice assessments. She describes a regulator’s view of these measures and also provides the rationale and procedures used by three different testing organizations.
  • Abstracts and Updates, by George Gray- Gray cites several articles and books dealing with validity issues and studies, as well as with reliability. One point of interest is the way some authors refer to the reliability of a test while others refer to the reliability of test scores.
  • Software Review, by Lee Schroeder and Reed Castle- Schroeder and Castle note again that while technology has played an important role in how we develop and administer examinations, it also provides the opportunity for theft and loss through Web-based breaches. This column provides a brief discussion of firewalls, access control, encryption, and virus protection and provides Web sites where more information about these topics can be found.
  • Legal Beat, by Dale Atkinson- This column discusses the Freedom of Information Laws that exist in many jurisdictions and how they may impact regulatory activities. The application of these laws can create interesting legal issues especially in the areas related to testing.
  • Psychometric Matters, by Leon Gross- Leon Gross has agreed to write a new column for the CER. In his first column, he discusses cheating and focuses on the systematic, organized memorization of test items by groups of test takers for the specific purpose of providing that information for sale or as a free aid for future candidates.
  • CBT for High-Stakes Certification and Licensure Examinations: Impact of Examinee Volume on Test Design and Program Operation, by David B. Swanson, Susan K. Jacovino, Kathleen Z. Holtzman, Douglas R. Ripkey, Scott Arbet and Raja Subhiyah- This article discusses computer-based testing and the impact of examinee volume on test design and program operation. Three hypothetical testing programs with different numbers of candidates are described. The authors discuss how increases in the number of candidates taking a high-stakes licensure and certification examination can affect the complexity of the testing program.

CLEAR Exam Review, Summer 2002, Volume 13, No. 2

  • Testing Across the Nation, by Sandra Greenberg- Greenberg deals with the issue of disaster recovery. Given the current climate, planning for disaster recovery seems to be an important concern for credentialing agencies. This column poses questions for you and your board to consider, describes the various phases of disaster planning, and provides resources for use in developing a disaster recovery plan.
  • Abstracts and Updates, by George Gray- Gray cites several articles and books discussing standard setting, computer-based testing, and issues related to reliability.
  • Software Review, by Lee Schroeder and Reed Castle- Schroeder and Castle discuss the use of Microsoft® 2000 Terminal Services. This product allows users to connect to the “base” server and work remotely. One of the most useful applications for our readers may be its use as a secure mode for accessing a central item bank.
  • Legal Beat, by Dale Atkinson- This column discusses why the limitations imposed by the Americans with Disabilities Act must be considered by regulatory boards when framing questions to be included in the background information forms they require candidates to complete.
  • The CFP® Certification Examination: An Illustration of the Modified Angoff Method, by J. David Ashby and Terrye A. Todd- This article describes an application of the modified Angoff process and the Beuk adjustment to establish a passing score for the certification examination (a generic illustration of these computations follows this list).
  • An Empirical Evaluation of Selected Multiple-Choice Item Writing Guidelines, by Stephen G. Sireci, Andrew Wiley and Lisa A. Keller- This article deals with an empirical evaluation of some guidelines that are frequently used for developing quality multiple-choice test questions.
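For readers who want a concrete picture of the computations Ashby and Todd describe, the following is a minimal, generic sketch of a modified Angoff aggregation and a Beuk compromise adjustment. It assumes hypothetical inputs (judge ratings, judges' acceptable pass rates, and candidate total scores); the names and data are illustrative and are not taken from the article.

    import numpy as np

    def angoff_cut_score(ratings):
        # ratings: judges x items array of estimated probabilities that a
        # minimally competent candidate answers each item correctly.
        # Each judge's implied raw cut score is the sum of his or her ratings;
        # the recommended cut score is the mean across judges.
        per_judge = np.asarray(ratings, dtype=float).sum(axis=1)
        return per_judge.mean(), per_judge.std(ddof=1)

    def beuk_adjusted_cut(cut_mean, cut_sd, acceptable_pass_rates, candidate_scores):
        # Beuk compromise: start at the point (mean recommended cut score,
        # mean acceptable pass rate) and move along a line whose slope is
        # -(SD of pass-rate judgments) / (SD of cut-score judgments) until it
        # meets the pass-rate curve implied by the observed score distribution.
        scores = np.asarray(candidate_scores, dtype=float)
        y_mean = float(np.mean(acceptable_pass_rates))
        y_sd = float(np.std(acceptable_pass_rates, ddof=1))
        slope = y_sd / cut_sd
        grid = np.arange(np.floor(scores.min()), np.ceil(scores.max()) + 1)
        empirical = np.array([(scores >= c).mean() for c in grid])  # observed pass rate at each cut
        predicted = y_mean - slope * (grid - cut_mean)              # pass rate along the judges' line
        return grid[np.argmin(np.abs(empirical - predicted))]       # cut where the two curves meet

With, say, ten judges rating a 100-item form and a file of candidate total scores, angoff_cut_score supplies the mean and standard deviation of the judges' implied cut scores, and beuk_adjusted_cut returns the compromise passing score.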

CLEAR Exam Review, Winter 2002, Volume 13, No. 1

  • Testing Across the Nation, by Sandra Greenberg- Greenberg discusses two major topics: converting the Uniform CPA Examination to a computerized administration and using automated test assembly to construct multiple test forms for online administration.
  • Abstracts and Updates, by William Lohss- Lohss discusses articles and papers dealing with standard setting, job analysis/test specifications, performance assessment, applications of web/internet technology, item response theory, and test fairness.
  • Software Review, by Lee Schroeder- Schroeder expresses concern about the expense associated with the item writing and review process. He notes that many of us at CLEAR have begun to look at the Internet as a tool for reducing these costs. Lee provides his thoughts on what an online item-writing system should be able to do.
  • Legal Beat, by Dale J. Atkinson- Atkinson describes a recent opinion from the United States Circuit Court of Appeals for the 9th Circuit that deals with the applicability of the Civil Rights Act of 1964 to the examination process. The test in question was used as part of the credentialing process for teachers.
  • Developing High-Quality Items Quickly, Cheaply, Consistently- Pick Two, by Kathy Holtzman, Susan M. Case and Douglas Ripkey- The authors describe three item development models and discuss their pros and cons, including costs and productivity.
  • Actions to Help Ensure the Fairness of Licensing Tests, by Michael Zieky- This article deals with test fairness. It provides several definitions of fairness and also describes a number of steps that test developers can take to help make their examinations as fair as possible.
  • Conducting a Practice Analysis to Achieve Multiple Organizational Goals, by Linda E. Montgomery and Anne Sax Hone- This article discusses issues to consider when conducting a practice analysis designed to achieve multiple organizational goals.

CLEAR Exam Review, Summer 2001, Volume 12, No. 2

  • Abstracts and Updates, by Charles S. Kunce and Mary M. Sandifer- Kunce and Sandifer discuss a number of papers presented at the annual meetings of the American Educational Research Association and the National Council on Measurement in Education that deal with developments in performance testing and standard setting and classification.
  • Software Review, by Lee Schroeder- Lee Schroeder describes a suite of survey-related products that he finds useful in the conduct of role delineation or job analysis studies. He discusses particularly the three primary components of the software: Survey Development; Data Entry Program and Data Base Design; and Data Analysis and Presentation.
  • Legal Beat, by Dale J. Atkinson- Atkinson discusses a case involving statutory restrictions on the ability of practitioners to advertise credentials or specialty certifications. The case, which should be of particular interest to regulatory board members, was decided on freedom of speech grounds.
  • Oral Examinations: Psychometric and Practical Considerations, by Norman R. Hertz and Roberta N. Chinn- The authors discuss a number of difficulties, and offer some solutions, in the use of oral examinations as part of the credentialing process.
  • Setting and Validating Standards on Professional Licensure and Certification Exams: A Survey of Current Practices, by Kevin C. Meara, Ronald K. Hambleton and Stephen G. Sireci- This article discusses the results obtained from a survey of credentialing agencies about their standard-setting procedures. The authors also provide suggestions about how credentialing organizations can improve the gathering and documentation of standard-setting validity evidence.
  • Candidate Review Policies: Considerations for Certification and Licensure Examinations, by David M. Williamson- Williamson outlines considerations in establishing a policy on candidate review of examinations for certification and licensing programs. He cites professional testing standards and reviews legal precedents that may be useful to agencies establishing and/or reviewing their candidate examination review policy.

CLEAR Exam Review, Winter 2001, Volume 12, No. 1

  • Testing Across the Nation, by Sandra Greenberg- Sandra Greenberg’s column continues a thematic approach that focuses on the innovative uses of technology in credentialing programs. She explores the use of technology in credential data-base management, item and examination development, and application and licensing processing and evaluation.
  • Abstracts and Updates, by Charles S. Kunce and Mary M. Sandifer- The authors discuss articles and papers dealing with job analysis, translated examinations, standard setting, reviewing answers in CAT, and high-stakes assessment.
  • Legal Beat, by Dale J. Atkinson- Atkinson discusses a case that involves the forfeiture of a medical license in a Federal criminal case. The column focuses on the license as personal property that can be forfeited in federal sentencing without regard to state rules regarding revocation.
  • Detecting and Preventing Cheating on Credentialing Examinations, by Gregory J. Cizek- Cizek’s article sets forth the kinds of testing situations in which cheating can occur, and then presents a summary of some methods for detecting cheating and suggestions for how cheating can be prevented.
  • A Checklist for Evaluating Standard-Setting Documentation, by Kevin C. Meara- This article discusses the necessity to improve the documentation of credentialing examinations’ standard-setting studies for validity purposes and offers useful information for organizations faced with the challenge of setting and validating standards on their examinations.
  • Effects of Mode of Item Presentation on Standard Setting, by Jane Faggen, Donald Powers and Gerald Melican- The authors describe a study they conducted to determine any differences in recommended passing scores that may result from the mode (paper vs. computer) in which test items are presented to standard-setting panelists.

CLEAR Exam Review, Summer 2000, Volume 11, No. 2

  • Testing Across the Nation, by Sandra Greenberg and Karen Cullen- Greenberg and Cullen focus on innovative uses of technology in credentialing programs. They discuss the use of electronic data in a practice analysis study, Web-based self-assessment examinations, online application processing, Web access to credential information, and online license renewals. 
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss recent articles, abstracts and presentations on differential item functioning (DIF), performance assessment, and computers and testing.
  • Software Review, by Lee Schroeder- Schroeder reviews IMSI’s HiJaak Pro Version 5, a software product that allows the user to convert, capture and organize graphic files. This should mean that graphics can be stored and used more easily in test questions.
  • Legal Beat, by Dale Atkinson- Atkinson discusses a recent case that focuses on the appropriateness of testing agencies and organizations disclosing to recipients of test scores that the examination was administered under nonstandard conditions. The opinion in the case analyzed Title III of the Americans with Disabilities Act (ADA) as it relates to private entities providing examinations to the public.
  • Testing and Measurement Issues, by Thomas Henzel and Kristina Golden- Henzel and Golden focus on the structural complexity of test items for computer-based testing. Their column identifies a new classification code that describes a test item in terms of its structural complexity. They suggest this new code may be useful in the reproduction of multiple examination forms that are balanced for content and difficulty, as well as complexity of item construction.
  • Defining the Scope of a New Job and its Knowledge Base: The Application of a Job Analysis Study to a Decision-Making Process, by Michael Rosenfeld, Sarah Slater and Sharon Goldsmith- The authors describe how job analysis techniques may be used to define the scope of a new job, assist in defining the curriculum necessary to perform the tasks and acquire the skills necessary for competent entry-level performance, and serve as a basis for standards to be used to credential approved training programs.
  • Condensed Job Analysis: Capturing a Moving Target in the Information Technology Field, by James Adair- Adair discusses the development of an alternative job analysis method. It focuses particularly on the adaptation of existing job analysis techniques and processes to meet the needs of the Information Technology industry for assessing rapidly changing job and product-related content.

CLEAR Exam Review, Winter 2000, Volume 11, No. 1

  • Testing Across the Nation, by Jim Zukowski- Zukowski discusses an audit program recently approved in the state of California that will impact upon credentialing examination programs related to professional licensure. He also reviews recent U.S. Supreme Court decisions that will affect the application of ADA with regards to testing accommodation requests, provides an update on a job analysis for social workers, and describes a new certification program for information technology specialists.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss recent articles, abstracts and presentations on test adaptation, computer-based testing, and the impact of new technology on examinations. Other issues include gender differences, validation and performance testing and the accuracy of item ratings of subject matter experts.
  • Software Review, by Lee Schroeder- Schroeder reviews the Statsoft Electronic Statistics Textbook, a book of reference information that can be accessed for free on the Internet. He recommends this particularly for those who find themselves in need of information on statistics when out of the office.
  • Legal Beat, by Dale Atkinson- Atkinson discusses a case of a dentist with a limited license from the state of Arizona. After the legislature repealed the section of legislation providing for limited licenses, the dentist filed a complaint for violation of his due process and equal protection rights.  
  • Testing and Measurement Issues, by Gerard Dillon and William Walsh- Dillon and Walsh focus on using performance data to set standards, its practical impact and the perception of judges. They also summarize the results from a research study that they recently conducted.
  • On the Documentation of Credentialing Examination Procedures and Policies, by James Fidler- The author considers issues pertinent to the preparation of procedures and policies manuals for credentialing examination programs. The issues discussed by Fidler include: the importance and uses of documentation; contents and format of a manual; appropriate author and audience; and document storage.
  • A Brief Discussion of Three Basic Computer-Based Testing Models, by Jerry Gorham and Jian Zhang- Gorham and Zhang provide an introduction to computer-based testing and discuss the different models available.


CLEAR Exam Review, Summer 1999, Volume 10, No. 2

  • Testing Across the Nation, by Jim Zukowski- Zukowski discusses a new partnership between the Chauncey Group and the NorthWest Center for Emerging Technology (NWCET) concerned with the Information Technology industry. Also mentioned are the revised Standards for Educational and Psychological Testing, an update on the practice analysis in orthotics and prosthetics, computer-based podiatry testing, the development of a set of guidelines for computer-based testing, and new standards for food protection managers.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss recent articles, abstracts and presentations on educational standards, computer-based testing (item exposure, automated test assembly, and user interface), ADA accommodations, performance assessment, equating and scaling, and job analysis. 
  • Software Review, by Lee Schroeder- Schroeder’s article focuses on Item Response Theory (IRT) and provides a review of the theory before considering a software product (XCALIBRE) that can be used to calculate IRT item parameters (a common IRT model is stated for reference after this list).
  • Legal Beat, by Dale Atkinson- Atkinson discusses a recent Missouri case that focused on the reliance of regulatory boards on licensure examinations and whether limitations to this authority exist. 
  • Testing and Measurement Issues, by Brian Klauser and Ronald Nungester- Klauser and Nungester focus on the factors that need to be considered when identifying cut scores in certification and licensure decisions.
  • Adapting Credentialing Exams for Use in Multiple Languages, by Ronald Hambleton, Stephen Sireci, and Frederic Robin- The authors consider reasons for adapting examinations for use in other languages and steps that can be taken to adapt the examinations. Eight steps are recommended: checking exam content and format equivalence; deciding the desirability of a translation; choosing translators; translating and adapting the exam; reviewing the adapted version of the exam; conducting a small tryout of the adapted exam; carrying out a more ambitious study of the adapted exam; and documenting the process.
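As background to the IRT review in Schroeder's column above (this is general reference material, not a summary of the column), the three-parameter logistic model that packages such as XCALIBRE are typically used to fit gives the probability that an examinee of ability theta answers item i correctly as

    P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-D a_i (\theta - b_i)}}

where a_i is the item's discrimination, b_i its difficulty, c_i its lower asymptote (pseudo-guessing) parameter, and D is a scaling constant, commonly set to 1.7.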

CLEAR Exam Review, Winter 1999, Volume 10, No. 1

  • Testing Across the Nation, by Jim Zukowski- Zukowski discusses a number of topics that are currently being considered by several state and national boards. His column discusses such issues as the transition to computer-based testing, examination ownership, and multi-state job analysis.
  • Testing, Testing ... 1, 2, 3, by Grady Barnhill and Lynn Webb- Barnhill and Webb talk about credentialing agency responsibilities to the failing candidate. They discuss the pros and cons of providing diagnostic feedback to candidates.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss the status of both the revised Standards for Educational and Psychological Testing and the “Rights and Responsibilities of Test Takers”. Additionally they review articles dealing with the detection of cheating, consequential validity, item relevance, learning disabilities, standard setting, and essay and performance test scoring.
  • Software Review, by Lee Schroeder- Schroeder discusses concerns about the security of documents that are sent using e-mail or through the Internet. Lee describes software that uses public key cryptography to help solve this problem.
  • Legal Beat, by Dale Atkinson- Atkinson discusses how managed care can make issues of defining scope of practice and determining jurisdiction quite challenging to regulatory boards. 
  • Testing and Measurement Issues, by Shelby Keiser- Keiser discusses factors licensing boards should consider when responding to individuals requesting test accommodations. She emphasizes the fact that licensing agencies are not educational institutions. Their mission is public protection.
  • Security for CBT High Stakes Licensure Exams, by Barbara Halsey- Halsey discusses the way in which the National Council of State Boards of Nursing attempts to maintain the security of its computer administered licensing examinations. These include such aspects as data transmission and back-up, software security, the training and certification of administrators/proctors, identification/check-in procedures, confidentiality agreements, and the testing environment.
  • Guidelines for Selecting a Standard Setting Panel for Licensure Testing, by David M. Williamson- Williamson discusses factors to consider when selecting participants in a standard setting panel. He emphasizes that these factors are important in order to ensure the integrity of the standard setting process.

CLEAR Exam Review, Summer 1998, Volume 9, No. 2

  • Testing Across the Nation, by Jim Zukowski- Zukowski discusses methods related to standard setting for performance assessments and describes a new method developed by the Educational Commission for Foreign Medical Graduates.
  • Testing, Testing ... 1, 2, 3, by Grady Barnhill and Lynn Webb- Barnhill and Webb talk about all the things you need to consider when facilitating meetings. They discuss how to encourage subject matter experts to attend meetings, the ideal group size, and the role of the facilitator.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss articles dealing with political and regulatory issues, performance testing, and standard setting methods.
  • Software Review, by Lee Schroeder- Schroeder talks about a problem of concern to many of us- the security of our computers. Lee discusses software that is available to protect the security of your files in the event your laptop is lost or stolen.
  • Legal Beat, by Dale Atkinson- Atkinson discusses one of the major functions of legislatively created regulatory boards- the conduct of disciplinary proceedings against licensees accused of wrongdoing. He discusses the standards of proof necessary for a board to establish wrongdoing of an accused licensee.
  • Testing and Measurement Issues, by Richard Luecht- Luecht discusses how moving from paper-and-pencil testing to computerized testing can require fundamental changes in the way tests are produced. He describes an automated test assembly approach that helps meet increased item development needs.
  • Job Analysis for High-stakes Credentialing Examinations, by Vicki L. Flaherty and James B. Hogan- Flaherty and Hogan discuss the role job analysis plays in test development and validity in credentialing examinations. They discuss a number of practical issues concerning how to plan, conduct, analyze, and report on the results of job analysis studies.
  • The National Board of Podiatric Medical Examiners New Testing Methodology, by Charles Gibley, Jr.- Gibley describes how the testing program conducted by the National Board of Podiatric Medical Examiners evolved over several years from a paper-and-pencil program to one that is based on computerized mastery testing (CMT). The article describes the steps that were taken to move from paper-and-pencil tests to linear computer-delivered tests, and then on to CMTs.

CLEAR Exam Review, Winter 1998, Volume 9, No. 1

  • Testing Across the Nation, by Jim Zukowski- Zukowski discusses the “Guidelines for Documentation of a Learning Disability in Adolescents and Adults,” developed by the Association on Higher Education and Disability (AHEAD). In addition to their use in higher education, these guidelines have important implications for candidates who are applying for test accommodations for licensing and certification examinations. Zukowski also describes the procedures used by the National Institute for Hearing Instrument Studies to develop their international licensing examination for hearing instrument dispenser licensing boards.
  • Testing, Testing ... 1, 2, 3, by Grady Barnhill and Lynn Webb- Barnhill and Webb talk about how the delivery modes for assessing professional competency, ranging from oral exams to virtual reality testing, have changed over time. Their timeline extends from 2357 B.C. to the present. The conversation makes clear that some delivery modes are better geared to the achievement of particular assessment objectives than others.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer discuss articles dealing with different language and cross-cultural testing, bias and differential item functioning, significance testing, consequential validity, innovative uses of computers in testing, and performance testing.
  • Software Review, by Lee Schroeder- Schroeder reviews a relatively inexpensive statistical package that he found to be very comprehensive and useful.
  • Legal Beat, by Dale Atkinson- Atkinson looks at two opposing judicial opinions on the same issue to remind board members that judicial opinions may differ on the same issue, not only from state to state, but also from district court to district court, or from circuit court to circuit court. These differing opinions make a board member’s job difficult and challenging.
  • Testing and Measurement Issues, by Gerard Dillon- Dillon describes the use of survey data in a testing program. He discusses how asking test takers focused questions about their testing experience can provide useful information for modifying and improving various aspects of a testing program.
  • Advances in the Use of Item Response Theory in Licensure and Certification Assessment, by Betty A. Bergstrom and Richard C. Gershon- Bergstrom and Gershon outline the differences between classical test theory and item response theory (IRT) and describe how applications of IRT are relevant to regulatory and credentialing organizations. They discuss the use of IRT for item banking, automated test construction, online test construction, computerized adaptive testing, computer-based simulations, performance assessment and surveys.
  • Mental Model Comparison of Automated and Human Holistic Scoring of Architectural Design Simulations, by David M. Williamson, Isaac I. Bejar and Anne Hone- The authors compare automated scoring with human scoring. Using data gathered from the Architect Registration Exam (ARE), they compare the computer-based scoring of open-ended architectural problems with the results obtained from human scorers.

CLEAR Exam Review, Summer 1997, Volume 8, No. 2

  • Testing Across the Nation, by Jim Zukowski- Zukowski describes the new Multistate Pharmacy Jurisprudence Examination and a take-home recertification examination developed by the American Board of Internal Medicine. He also examines the Internet as a way to provide information about licensing to candidates and the public.
  • Testing, Testing ... 1, 2, 3, by Grady Barnhill and Lynn Webb- Barnhill and Webb provide useful information about assessing continuing competence in a lively, conversational manner.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer review a new book published by CLEAR, Demystifying Occupational and Professional Regulation: Answers to Questions You May Have Been Afraid to Ask, by Kara Schmitt and Ben Shimberg. Next, they cover a special credentialing issue of the journal Applied Measurement in Education, followed by a list of sessions at the 1997 meetings of the American Educational Research Association and the National Council on Measurement in Education that dealt with issues important to credentialing agencies.
  • Software Review, by Lee Schroeder and Pansy Houghton- Schroeder and Houghton discuss a product that allows for in-house design and printing of double-sided, scannable forms.
  • Legal Beat, by Dale Atkinson- Atkinson discusses two cases involving a board’s authority to investigate the activities of licensees. In these two cases, the board’s authority was upheld- but these processes may not be protected in all jurisdictions.
  • Testing and Measurement Issues, by Anthony LaDuca- LaDuca describes the approach the National Board of Medical Examiners is taking to evaluate practicing physicians’ continued competence, a topic of interest for many professions.
  • New Testing Methodologies for the Architect Registration Examination, by Jeffrey F. Kenney- Kenney explains new testing methodologies being used by the National Council of Architectural Registration Boards. In particular, the article describes the development of procedures that enable a computer to score and administer small, focused design problems, known as “vignettes,” which simulate an architect’s design work.
  • Is the Angoff Method Really Fundamentally Flawed?, by Michael Zieky- Zieky asks whether or not the Angoff method used to set passing scores is fundamentally flawed, as recently alleged. This article is particularly useful for those who use this standard-setting procedure.

CLEAR Exam Review, Winter 1997, Volume 8, No. 1

  • Testing Across the Nation, by Barbara Showers- Showers makes a plea for the provision of diagnostic information to candidates who fail credentialing examinations. She discusses the psychometric considerations as well as some creative solutions that can be used to provide candidates some of the information they seek.
  • The Answer Key, by Norman Hertz- Hertz responds to four common examination questions. The first deals with the ideal number and type of references upon which to base an examination. The second concerns possible problems associated with a state adopting an examination developed and administered by an association of state boards. The third responds to a concern raised by a small licensing program that wants to ensure its assessments comply with national standards. The fourth deals with the grounds on which appeals of examination results should be considered.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer review a new book on computer-based testing, six references dealing with general certification issues, three references concerned with the Americans with Disabilities Act, as well as six articles devoted to issues and viewpoints pertaining to performance assessment.
  • Software Review, by Lee Schroeder and Pansy Houghton- Schroeder and Houghton discuss software that can be used to manage banks of examination questions. Their focus is on software that can be used to ensure that duplicate or very similar items are not unknowingly included in the item bank.
  • Legal Beat, by Dale Atkinson- Atkinson discusses whether or not the exam scores of licensees should be made available to the general public.
  • Testing and Measurement Issues, by Janice Scheuneman- Scheuneman provides an objective and balanced discussion of the advantages and disadvantages of computer-based testing.
  • Using Nominal Group Technique (NGT) to Identify Factors Potentially Influencing a Consistently Low Passing Rate, by Linda M. Dean- Dean describes the use of the Nominal Group Technique to identify factors potentially influencing a consistently low passing rate. It incorporates a process borrowed from the research on group dynamics to solve a testing problem encountered in licensure.

CLEAR Exam Review, Summer 1996, Volume 7, No. 2

  • Testing Across the Nation, by Barbara Showers- Showers explores whether or not credential test providers are performing reliability analyses at the cut score to determine which, if any, of the analyses have become “state of the art” among providers. The data for this column were collected via a survey sent to credential test providers.
  • The Answer Key, by Norman Hertz- Hertz answers four common examination questions. The first relates to the advisability of asking item writers to prepare questions in advance of the item writing workshop. The second deals with how licensing and certification tests differ from academic (achievement) and employment examinations. The third concerns whether the completion of an occupational analysis and the setting of test specifications are sufficient to meet validity requirements for an examination used in licensing and certification. The fourth deals with how to explain passing score variation from one administration to another to candidates when using a criterion-referenced methodology.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer describe a new reference book on licensure testing, five articles discussing the use of computers to grade performance tests, an article on rater accuracy in performance testing, three articles about guidelines for translating exams, and two articles about candidate reactions to new licensure tests.
  • Software Review, by Lee Schroeder and Pansy Houghton- Schroeder and Houghton discuss the strengths and weaknesses of introducing technology to reduce or even substitute for the need to have face-to-face meetings to review test questions. The column describes the use of a modem to conduct a conference over a single telephone line that shares voice and computer data simultaneously.
  • Legal Beat, by Dale Atkinson- Atkinson describes the legal concept of collateral estoppel and its implications for administrative proceedings against licensees in regulated professions. He also discusses a case in United States District Court, Northern District, involving the applicability of Title VII of the Civil Rights Act to voluntary certification agencies, a first of its kind.
  • The Expectations of Standard Setting Judges, by Gerard F. Dillon- Dillon discusses how the results of standard setting can sometimes be viewed as unacceptable by relevant stakeholders. He describes a technique he believes can be used to reduce this problem that can also be helpful in training judges participating in standard setting exercises.
  • Item Harvesting: An Efficient Method for Generating Test Questions, by John Norcini, Paul Poniatowski, Susan Day, and Elizabeth Callahan- The authors describe a procedure, called “item harvesting,” for producing test questions. The procedure solicits the active participation of large numbers of practitioners, is carried out by mail and does not require a large-scale, face-to-face test committee meeting to review all of the questions.
  • Test Validity Evidence: What About Face Validity?, by Steven M. Downing- Downing discusses the concept of face validity and its misuse, particularly in performance testing. This article emphasizes that more evidence is needed to demonstrate the validity of an examination than just its apparent fidelity to the job.

CLEAR Exam Review, Winter 1996, Volume 7, No. 1

  • Testing Across the Nation, by Barbara Showers- Showers indicates that the need for examination security seems to be greater than ever. Her column describes some of the sophisticated devices now available to potential cheaters as well as steps credentialing agencies can take to improve test security in the face of these threats.
  • The Answer Key, by Norman Hertz- Hertz’s column raises and answers four questions. The first responds to a question concerning how to evaluate the quality of a criterion-referenced passing score study. The second concerns the implications for testing when the results of an occupational analysis indicate that activities not mentioned in the regulations are being performed by practitioners. The third question deals with the issues involved when a board considers using a performance test as part of its licensing process. The last question deals with concerns regarding test standards and whether they are consistent from state to state.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer identify and summarize a number of papers, articles, and reports reflecting current thinking on performance assessment, situations encountered in adapting performance assessments to licensing/certification applications, and historical perspectives of performance testing.
  • Software Review, by Lee Schroeder and Pansy Dubose Houghton- Schroeder and Houghton discuss how the use of computers in examination development processing, analysis of test results, and the storage of confidential information requires the design and implementation of tough security procedures in order to restrict access to appropriate personnel.
  • The Death of PMPs and the Lesson for Performance Assessment, by Lynn C. Webb- Webb’s column discusses the factors that caused the death of PMPs. She describes the psychometric difficulties as well as the financial and timing issues that resulted in the demise of this technique.
  • Legal Perspectives of Examination Security, by Dale Atkinson- Atkinson’s article deals with the legal aspects of test security. The author discusses the ways in which the security of examinations can be breached. He then describes the procedures and legal actions boards and examination contractors can take to protect themselves and the public, as well as to discourage individuals and organizations from attempting to breach test security.
  • Licensing and Certification Test Construction: A Balancing Act, by Wade Gibson and John Weiner- This article describes the procedures that can be used to ensure that alternate forms of licensing and certification examinations meet both content and psychometric standards. The approach used is based on classical test theory and can be used with computerized tests to enable users to administer a unique alternate test form to each candidate.

CLEAR Exam Review, Summer 1995, Volume 6, No. 2

  • Testing Across the Nation, by Barbara Showers- Showers covers four themes: the Americans With Disabilities Act and its focus on describing the “essential functions” of jobs; the National Board of Examiners’ decision to stop using the latent-image gathering format in the Patient Management section of its certifying exam; the security concerns regarding computer-based tests; and the State Post-secondary Review Entity, which may require regulators to provide information on passing rates for institutions receiving Title IV funds that have educational programs leading to licensure.
  • The Answer Key, by Norman Hertz- Hertz’s column raises and answers five questions. The first concerns how examinations are affected when interpreters are used to translate an examination into a language other than the one in which it was written. The second asks what can be done to ensure that more items survive an item writing workshop. The third discusses the options a board has other than accepting passing scores that have been established by a national association. The fourth asks how to evaluate the quality of test questions. The last one is concerned with the meaning of validity in a licensing examination.
  • The Americans with Disabilities Act Versus Your Board’s Duty to Investigate Applicants: The ADA is Winning!, by Susan E. Dorn, Kim A. Zeitlin, and Margaret L. Bloom- This article discusses the Americans With Disabilities Act and licensing boards’ duty to investigate applicants. It is clear that boards need to be very careful about the questions they ask. Useful advice is provided on the kinds of questions that are appropriate and can be defended.
  • Abstracts and Updates, by Charles Kunce and Mary Sandifer- Kunce and Sandifer review: guidelines for computerized adaptive tests, an article describing procedures designed specifically for estimating the reading level of test questions, an article and paper describing the effects of changing answers on multiple-choice examinations, as well as two articles dealing with setting passing scores.
  • Software Review, by Lee Schroeder and Pansy Dubose Houghton- Schroeder and Houghton’s column notes that bringing subject matter experts together to write and review examinations is one of the major expenses associated with an ongoing testing program. Their column describes approaches for reviewing test questions by computer and by telephone.
  • America’s School-to-Work Initiative: What Role Should Credentialing Agencies Play?, by Meredith Mullins and Donald Ross Green- Mullins and Green discuss the current emphasis on improving the transition from school to work. They point out what they believe are some of the unique capabilities and experiences that credentialing agencies could contribute to the solution of this national problem. And they raise questions about the role of credentialing agencies in this area.
  • Development of Performance Assessments for Use in Professional Certification and Licensing, by Janice Dowd Scheuneman- Scheuneman’s article explores some of the difficulties that threaten the validity of performance measures. These include the lack of generalizability of scores and construct irrelevant variance present in scores.
  • Psychometric Evaluation of Performance Assessments: Critical Issues and Innovative Strategies, by Richard M. Jaeger- Jaeger’s article describes the development of performance assessments by the National Board for Professional Teaching Standards (NBPTS). The NBPTS is developing performance assessments in a number of content areas that will be used to certify accomplished teachers. This article discusses the approaches used to develop the certification for Early Adolescence Generalists. It describes a variety of critical psychometric issues that arose in the development of these measures as well as the strategies used to address them.

CLEAR Exam Review, Winter 1995, Volume 6, No. 1

  • Testing Across the Nation, by Barbara Showers- Showers discusses the use of computerized testing in licensing and certification. Barbara describes the various computer-based testing options, administration of computerized tests, impacts of computerized test delivery on the credentialing program, justifying the change, justifying the increased cost, availability of providers, components of cost, and the planning necessary for implementing computer-based testing.
  • The Answer Key, by Norman Hertz- Hertz’s column raises and answers five questions. The first deals with how to maintain the same level of scientific rigor in constructing the examination plan as was used in conducting and analyzing the job analysis. The second asks about the guidelines that can be used to define the entry-level practice when conducting a passing score study. The third deals with the factors to consider when developing procedures for scoring performance, problem, essay, or oral examinations. The fourth involves the role of board members in the examination development and scoring process; the fifth considers the role of educators in the examination development process.
  • Unconstitutional Advertising Restrictions, by Susan E. Dorn, Kim A. Zeitlin, and Margaret L. Bloom- This article discusses the type of advertising restrictions that licensing agencies have adopted that have been upheld by the courts as well as the type of restrictions that may be unconstitutional.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column discusses new publications in three major content categories. The first category is concerned with item writing. The articles in this section cover topics such as estimating the optimum number of options and the effects of altering the position of options in multiple-choice examinations as well as use of taxonomies in test construction. The second deals with procedures for use in estimating the reliability of performance tests. The third category includes articles on job analysis, testing policy, and computerized adaptive testing.
  • Software Review, by Lee Schroeder and Pansy Dubose Houghton- Schroeder and Houghton’s column describes two software products they believe can be very useful to regulatory agencies. One is Microsoft Access version 2.0, which is a database management tool. The other is DesignExpert, which enables an agency to produce its own scannable documents.
  • Accommodating Different Ethnic and Cultural Groups in Credentialing Examinations, by Meredith Mullins and Donald Ross Green- Mullins and Green discuss the importance of global awareness and sensitivity to cultural diversity in the design and administration of today’s credentialing examinations.
  • Evaluating Licensure and Certification Examination Programs, by Steven M. Downing and Thomas M. Haladyna- Downing and Haladyna describe how high-stakes licensure and certification programs can benefit from external review. They discuss the advantages of such reviews, the aspects of the credentialing program that should be reviewed, the standards that should be considered for evaluating the program, as well as criteria to use when selecting an outside evaluator.
  • Expanding a Professional Assessment Program: Issues for Cross-Cultural Testing and Test Translation, by Lynn C. Webb- Webb’s article notes that we have entered the age of the global community and that many certification and licensure programs are investigating the possibility of expansion. Lynn discusses issues and concerns related to cross-cultural testing and test translation. She emphasizes the need for planning, the use of appropriate testing standards, translation strategies, and special areas of inquiry.

CLEAR Exam Review, Summer 1994, Volume 5, No. 2

  • Testing Across the Nation, by Barbara Showers- Showers discusses concerns about sole-source providers of credentialing examinations, including contract clauses, policy issues, and options for states. Barbara also discusses the progress being made on revising the Standards for Educational and Psychological Testing promulgated by the American Psychological Association, American Educational Research Association and the National Council on Measurement in Education. Barbara also provides a list of organizations and publications that can be used as resources for issues related to performance-based assessment.
  • The Answer Key, by Norman Hertz- Hertz’s column raises and answers five questions. The first deals with whether licensing examination programs should provide diagnostic feedback to candidates who do not pass examinations. The second asks if a passing-score workshop is necessary to establish a criterion-referenced passing score for every examination--even if a substantial percentage of test questions are reused from administration to administration. The third concerns factors to consider when selecting subject-matter experts so as to ensure they represent practice. The fourth concerns issues related to the use of oral examinations; and the fifth deals with the appropriateness of subject-matter experts writing test questions in advance of item-development workshops.
  • Licensure Denials: Know Your Authority, by Susan E. Dorn, Kim A. Zeitlin, and Margaret L. Bloom- This column discusses the importance of a board knowing its authority under its state’s administrative procedures act when imposing disciplinary procedures.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column discusses six articles concerned with the validation of licensing and certification examinations; six articles dealing with item-writing and test assembly; five articles concerned with alternative assessment; and three articles related to legal and administrative issues.
  • Software Review, by Lee Schroeder and Pansy Dubose Houghton- Lee Schroeder’s column applies his criteria for evaluating an item-banking system (published in a previous CER column) to two item-banking systems that were sent to him for review. One of Lee’s associates, Pansy Houghton, reviews LXR-Test 5.0 and The Question Bank.
  • Gender Bias in Licensure Testing: Is It a Problem?, by Meredith Mullins and Donald Ross Green- Mullins and Green discuss the possibility of gender bias in licensure testing.
  • Evaluating Test Fairness in Licensure Testing: The Sensitivity Review Process, by Stephen G. Sireci and Laura A. Mullane- Sireci and Mullane discuss the sensitivity review process as one aspect of evaluating test fairness in licensure testing. They describe the purpose of sensitivity review and the sensitivity review process, and also provide examples of problematic items as well as a sample rating sheet for documenting the results of a sensitivity review.
  • Providing Interpreters for Deaf and Hard of Hearing Candidates for Credentialing Examinations, by Catherine Nelson- Nelson’s article discusses issues related to providing interpreters for deaf and hard of hearing candidates for credentialing examinations. She outlines an interpreter’s responsibilities, as well as some of the decisions a board needs to make when using interpreters. Catherine also identifies a source boards can use to identify qualified interpreters.

CLEAR Exam Review, Winter 1994, Volume 5, No. 1

  • Testing Across the Nation, by Barbara Showers- Showers discusses five initiatives being undertaken to improve national standards, guidelines, and policy guidance for the developers and users of tests. Barbara also describes the results obtained from the National Adult Literacy Survey as well as business news about contractors working in the licensing and certification area.
  • The Answer Key, by Norman Hertz- Hertz’s column raises and answers four questions. The first concerns the amount of testing time to allow for candidates with learning disabilities. The second deals with how to report the scores for candidates with disabilities who have received testing accommodations, and the third discusses how to evaluate the quality of an occupational analysis that is conducted as a basis for test development. The fourth describes the issues that should be considered when exploring the option of changing from a paper-and-pencil test to a computer-administered version of the examination.
  • Protecting Yourself from Liability as a Board Member, by Susan E. Dorn, Kim A. Zeitlin, and Margaret L. Bloom- This column discusses the use of a “hold harmless clause” as one precaution that may reduce your board’s potential exposure to a lawsuit.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column discusses articles and books dealing with performance testing, job analysis and test specifications, and empirical validity in licensing and certification. Mark also mentions articles which describe techniques that can be used to set passing scores.
  • Software Review, by Lee Schroeder- Lee Schroeder reviews the item banking system developed by Computer Adaptive Technologies, Inc.
  • In Search of Truth and the Perfect Standard-Setting Method: Is the Angoff Procedure the Best Available for Credentialing?, by Meredith Mullins and Donald Ross Green- Mullins and Green discuss whether the Angoff Procedure is the best standard-setting method available for use in the licensing and certification context.
  • Legal Bases for Licensure Testing, by William A. Mehrens- Mehrens writes about the legal basis for licensure testing and discusses the related legal setting, professional standards, and state and federal court decisions.
  • Computerized Adaptive Testing for Licensure and Certification, by Betty A. Bergstrom and Richard C. Gershon- Bergstrom and Gershon discuss the application of computerized adaptive testing to licensure and certification examinations.

CLEAR Exam Review, Summer 1993, Volume 4, No. 2

  • Testing Across the Nation, by Barbara Showers- Showers discusses the use of uniform national passing scores by veterinarians, psychologists, and physical therapists. She also describes a workshop on test disclosure that was cosponsored by CLEAR and NOCA, as well as two new publications distributed by CLEAR entitled “Development, administration, scoring, and reporting of credentialing examinations: Recommendations for board members” and “Principles of fairness: An examining guide for credentialing boards.”
  • The Answer Key, by Eric Werner- Werner’s column raises and answers two questions. The first deals with the pros and cons of constructed response items and the second with how to go about setting a passing score for a performance examination.
  • Reviewing Your Exam Review Procedures: How Well Do You Score?, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss several cases involving challenges to exam review procedures. They provide some useful information concerning review procedures for multiple-choice and essay type examinations.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column reviews two new books and seven articles concerning computerized adaptive testing and item response theory, setting and maintaining standards over different forms of an examination, the legal defensibility of high-stakes tests, and measurement problems.
  • Software Review, by Lee Schroeder- Lee Schroeder describes the important features he believes an item banking system should possess and the criteria he feels should be used to evaluate such systems.
  • The Beuk Compromise Adjustment: Possible Rx for Troubled Cut Score Study Results, by F. Jay Breyer- Breyer discusses a method for adjusting passing scores that may be helpful to credentialing boards that find themselves faced with recommendations for passing scores that appear to be either too strict or too lenient. The article describes how the Beuk Compromise Adjustment can be used with the Angoff technique to improve decision-making about where to set the passing score (a brief illustrative sketch of the compromise idea follows this issue’s listing).
  • Content Validity Procedures, by I. Leon Smith and Sandra Greenberg- Smith and Greenberg describe some approaches that can be used to strengthen the documentation and demonstration of content validity. The authors describe several methods for evaluating the relationship of items to test specifications. These procedures, when implemented in conjunction with those used to develop the examination, can strengthen the documentation of content validity.
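
The Beuk approach mentioned above can be made concrete with a small sketch. In this hypothetical Python illustration (not code from Breyer’s article), each panelist supplies both a recommended cut score and an acceptable pass rate; the compromise is taken where a line through the panel means, with slope given by the ratio of the two standard deviations, meets the empirical relationship between cut score and pass rate. The function name, the grid search, and all data are assumptions made for the example.

    # Illustrative Beuk-style compromise; an assumed sketch, not the article's code.
    import numpy as np

    def beuk_compromise(cut_judgments, rate_judgments, candidate_scores):
        # cut_judgments:    each panelist's recommended cut score (e.g., Angoff sums)
        # rate_judgments:   each panelist's acceptable pass rate, as a proportion 0-1
        # candidate_scores: observed total scores for the candidate population
        cuts = np.asarray(cut_judgments, dtype=float)
        rates = np.asarray(rate_judgments, dtype=float)
        scores = np.asarray(candidate_scores, dtype=float)

        mean_cut, sd_cut = cuts.mean(), cuts.std(ddof=1)
        mean_rate, sd_rate = rates.mean(), rates.std(ddof=1)

        # Empirical pass rate if the cut were set at c.
        def pass_rate(c):
            return float(np.mean(scores >= c))

        # A line through (mean_cut, mean_rate) with slope sd_rate/sd_cut reflects the
        # panel's relative agreement about the two judgments; the compromise cut is
        # (approximately) where that line meets the empirical cut-score/pass-rate curve.
        grid = np.linspace(scores.min(), scores.max(), 501)
        line = mean_rate + (sd_rate / sd_cut) * (grid - mean_cut)
        gaps = np.abs(np.array([pass_rate(c) for c in grid]) - line)
        return float(grid[np.argmin(gaps)])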

CLEAR Exam Review, Winter 1993, Volume 4, No. 1

  • Testing Across the Nation, by Barbara Showers- Showers describes the changes taking place in several key national licensing examinations. These examinations are for accountants, architects, medical doctors, and nurses. The changes include: permitting and controlling the use of calculators, assessing writing skills, and using computers to administer and score examinations.
  • The Answer Key, by Eric Werner- Werner’s column raises and answers three questions. The first is about the need for, and desirability of, translating licensing examinations into languages other than English; the next concerns how a board can decide whether or not to change its closed-book examination to an open-book assessment; and the third concerns how much information about a test a licensing board should provide to candidates.
  • Test Score Reporting: What Is Your Discretion?, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss several cases dealing with a testing agency which refused to report a candidate’s score because of doubts about the validity of that score. Despite numerous cases approving this policy, this course of action is now under attack.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column reviews three sets of articles. One deals with a variety of multiple-choice formats. He then discusses articles on other types of testing formats, such as oral tests and constructed response tests. This is followed by a presentation of general articles on validity.
  • Software Review, by Lee Schroeder- Lee Schroeder describes and reviews an item analysis program, ITEMSTAT, which was developed by the Colorado Department of Regulatory Agencies. Lee discusses how this program can be used to identify questionable items.
  • Assessing Clinical Skills in Optometry: A National Standardized Performance Test, by Leon J. Gross- Gross describes the performance test used as part of the licensing process for optometrists. The article describes the examination; the training given the examiners; the Candidate Guide; and the scoring procedures. This article provides a good overview of a number of complex technical and administrative issues that must be resolved when conducting performance tests in a licensing context.
  • Standard Setting in Compensatory Versus Noncompensatory Licensure Testing Programs, by Neal Kingston- Kingston discusses standard-setting in compensatory and noncompensatory licensure testing programs and its impact on passing rates (a short illustrative example of the two decision rules follows this issue’s listing).
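
As a quick illustration of the distinction in Kingston’s title, the hypothetical sketch below contrasts a compensatory rule, under which only the total score must clear the cut, with a noncompensatory (conjunctive) rule, under which every section must clear its own cut. Section names, scores, and cut scores are invented for the example and are not drawn from the article.

    # Hypothetical candidate; section names and cuts are invented for illustration.
    section_scores = {"pharmacology": 72, "clinical": 58, "law_and_ethics": 81}
    section_cuts = {"pharmacology": 65, "clinical": 65, "law_and_ethics": 65}
    total_cut = 195

    # Compensatory: only the total matters, so strength in one section can
    # offset weakness in another.
    compensatory_pass = sum(section_scores.values()) >= total_cut  # True: 211 >= 195

    # Noncompensatory (conjunctive): every section must meet its own cut,
    # so the same candidate fails despite a strong total.
    noncompensatory_pass = all(
        section_scores[s] >= section_cuts[s] for s in section_scores
    )  # False: clinical 58 < 65

    print(compensatory_pass, noncompensatory_pass)

The same candidate passes under one rule and fails under the other, which is one reason the choice between the two approaches can visibly affect passing rates.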

CLEAR Exam Review, Summer 1992, Volume 3, No. 2

  • Testing Across the Nation, by Kara Schmitt- Schmitt presents the second of her two-part series describing the key issues that licensing and certification professionals believe will most affect licensure and certification in the next three to five years. Kara discusses the following topics: the need for better procedures and documentation to ensure that the content of examinations is job-related and that passing scores truly reflect appropriate standards; the need to provide evidence that licensing and certification really do play a role in protecting the public; continuing education; and the possible involvement of the Federal government in licensure.
  • The Answer Key, by Eric Werner- Werner’s column raises and answers three questions. The first is about the need for, or advisability of, providing a grade-level readability index for multiple-choice tests used in licensure; the next considers whether having English as a second, nonprimary language constitutes a disability; and the third concerns ways in which a licensing or certification agency can develop valid and fair oral examinations.
  • State Licensing Board Alert: Keeping Communications Confidential, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss whether a state licensing board’s investigatory files are discoverable. They also provide a checklist which may be helpful to a board in dealing with such matters.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column alerts us to four new books: one deals with continuing education, two with computer-adaptive testing, and a fourth with generalizability theory as a tool for evaluating the reliability of scores for virtually all types of performance tests. A number of articles are also presented that cover topics such as computer-adaptive testing, an independent auditing mechanism for testing, item banking, and the validity of job analysis data.
  • Software Review, by Lee Schroeder- Lee Schroeder describes and reviews a software package used by one of our readers. The product is the Test Development and Analysis System (TDAS) published by Applied Psychometric Services, Inc. of Naperville, Illinois. Lee discusses installing TDAS as well as its capacity to process tests and to provide reports.
  • The Validity of Licensing and Certification Exams, by Benjamin Shimberg- Shimberg’s column deals with the validity of licensing and certification examinations. Ben discusses professional standards, how licensing and certification tests differ from those used for selection, content validity, and the pros and cons of construct validity.
  • Psychometric Issues in the Use of Simulations and Work Samples as Examinations, by Ellen R. Julian and Nancy A. Orr- This article should be useful to readers who are considering or currently using performance tests as part of their licensing or certification process. Ellen and Nancy discuss some of the difficult psychometric issues that still need to be resolved when using measures of this type.
  • Evaluating Items for Fairness, by Michael Zieky- Zieky’s article describes a multi-state study that was conducted to evaluate the fairness of multiple-choice questions. The author presents the definition of unfairness that was used, the reactions of the reviewers, and aspects of questions unrelated to fairness that reviewers tended to confuse with fairness issues.

CLEAR Exam Review, Winter 1992, Volume 3, No. 1

  • Testing Across the Nation, by Kara Schmitt- Schmitt focuses on the results of a survey she conducted recently of professionals working in the area of licensing and certification testing. Kara asked these individuals to identify the key issues they felt would most affect licensure and/or certification in the next three to five years. Her column is the first of a two-part series describing her findings. She discusses the impact of computerization, the use of practical examinations, privatization, and the nontypical licensure candidate.
  • The Answer Key, by Eric Werner- Werner’s column raises and answers questions about four interesting issues. The first is a policy question about the appropriateness of allowing candidates an unlimited number of test retakes; the next concerns which reference group to use when examining the passing rate of a test; the third considers how a board might obtain assistance to improve the quality of its testing services and to negotiate better with its test service provider; and the fourth concerns the appropriateness of a board providing candidates’ test scores to employers for use in making personnel decisions.
  • Legal Issues, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss two developments which may help us understand how the courts may apply the new Americans with Disabilities Act to licensing and certification. There is also a brief discussion of a very recent case dealing with a Florida title statute.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column identifies two sets of articles. The first set discusses the latest trends in standard setting. The second set presents a debate over the limitations of multiple-choice questions as well as some of the new “authentic” assessment methods.
  • Software Review, by Lee Schroeder- Lee Schroeder reviews his experience with Paradox, a relational data base, and Bubble Publishing. As you may recall from Lee’s previous column, a relational data base is a collection of linkable files. Bubble Publishing allows the users to design two types of optically readable answer sheets.
  • Psychometric Issues, by Benjamin Shimberg- Shimberg’s column contains an article by Ronald Hambleton that provides background on item response theory and why it is expected to play a greater role in licensing and certification in the 1990s.
  • Measurement Practices in Licensing Examination Programs: A Survey, by Stephen J. Sireci and Bruce H. Biskin- This article reports on a survey of measurement practices in national licensing programs. The article describes the approaches being used to assess competency, set passing scores, and document evidence of reliability and validity.
  • Computerized Testing- Mind Your Marketing, by David Vale- Vale’s article describes some of the advantages and disadvantages of using computerized testing. It provides some useful hints for determining whether or not computerized testing would be appropriate for your assessment program.

CLEAR Exam Review, Summer 1991, Volume 2, No. 1

  • Testing Across the Nation, by Kara Schmitt- Schmitt discusses questions raised by the Americans with Disabilities Act (ADA), legal challenges affecting examinations, computer-administered examinations in Wisconsin, a patient consultation examination, and self-assessment.
  • The Answer Key, by Eric Werner- Werner’s column raises and answers questions about statistical procedures for use in identifying whether one candidate taking a multiple-choice examination might have copied from another and how item analysis might differ for a multiple-choice and a performance test.
  • Legal Issues, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss the accommodations required for disabled candidates by the Americans with Disabilities Act that are relevant for licensing and certification agencies. These accommodations include access to exam administration facilities as well as certain types of assistance in taking the exam.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column covers legal and technical issues related to validity, item bias, and alternative testing methods.
  • Software Review, by Lee Schroeder- Schroeder discusses the value of laptop computing, the use of a relational data base system, and the establishment of a CLEAR software clearinghouse.
  • Psychometric Issues, by Benjamin Shimberg- Shimberg’s column explains what equating does and discusses how it contributes to test fairness (a brief illustrative sketch of one common equating approach follows this issue’s listing).
  • On the Role of Criterion-Related Validity Evidence for Licensure Examinations, by Michael T. Kane- Kane’s article examines the use of criterion-related validity evidence for licensing examinations.
  • Continuing Competency, by Joan E. Knapp- Knapp discusses the issue of continuing competency. While the article does not solve the problems associated with continuing competency, it will give you a better understanding of the nature of the difficulties involved.
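
Shimberg’s equating column above is descriptive rather than computational, but a minimal sketch of one common approach, linear (mean-sigma) equating under a random-groups design, may help make the idea concrete. The function, data, and form names below are assumptions for illustration and are not drawn from the column.

    # Hypothetical sketch of linear (mean-sigma) equating; not from the column.
    import statistics

    def linear_equate(score_new, new_form_scores, ref_form_scores):
        # Map a raw score on the new form onto the reference form's scale so that
        # it occupies the same standardized position in its distribution.
        mu_new, sd_new = statistics.mean(new_form_scores), statistics.stdev(new_form_scores)
        mu_ref, sd_ref = statistics.mean(ref_form_scores), statistics.stdev(ref_form_scores)
        return mu_ref + (sd_ref / sd_new) * (score_new - mu_new)

    # If the new form ran harder (lower scores overall), a raw 70 equates to a
    # higher score on the reference scale, so candidates are not penalized for
    # drawing the harder form; that is the kind of fairness point equating supports.
    new_form = [55, 60, 62, 65, 68, 70, 72, 75, 78, 80]
    ref_form = [60, 64, 66, 69, 72, 74, 76, 79, 82, 84]
    print(round(linear_equate(70, new_form, ref_form), 1))  # roughly 74.1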

CLEAR Exam Review, Winter 1990, Volume 1, No. 2

  • Testing Across the Nation, by Kara Schmitt- Schmitt discusses a new commission reporting on testing, a new appraiser credentialing program, and new challenges for special test accommodations.
  • The Answer Key, by Eric Werner- Werner’s column answers a specific question on periodic updating of job analyses.
  • Legal Issues, by Susan E. Dorn and Kim A. Zeitlin- Dorn and Zeitlin discuss a recent Supreme Court decision regarding a state bar challenge to an attorney for noting on stationery letterhead that he held specialty certification. This decision has substantial implications for professionals and regulators.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column provides a comprehensive update on recent publications on job analysis.
  • Software Review, by Steve Nettles- Nettles reviews some software utilities to enhance computer productivity.
  • Psychometric Issues, by Benjamin Shimberg and Michael Rosenfeld- Shimberg and Rosenfeld discuss job analysis methodology. They discuss how a job analysis is conducted, how a committee’s judgments are verified, and how the results are used.
  • Adapting the Code of Fair Testing Practices in Education to Licensure Testing, by Donald Ross Green- This article is a recommendation to extend the Code of Fair Testing Practices in Education to licensure and certification tests. What a great idea! Although credentialing examinations are given in the public interest, these are also “high stakes” tests for the candidates. Therefore, providing assurances of fairness to test takers is appropriate and desirable.
  • Norm- vs. Criterion-Referenced Passing Scores: Considerations for Passing Rates, by John Mirone- Mirone discusses the important distinctions between norm- and criterion-referenced standard setting procedures. This article is neither the first nor the last to be written on this subject. However, the author provides a somewhat different perspective on the issues, one which will influence our thinking.

CLEAR Exam Review, Summer 1990, Volume 1, No. 1

  • Testing Across the Nation, by Kara Schmitt- Schmitt discusses a significant legal decision, a realignment of major exam programs, and the effects of exam preparation.
  • The Answer Key, by Eric Werner- Werner’s column provides insightful answers to questions that are frequently asked. Topics include multiple pass-fail cutoff scores, pretesting new items, professional standards, and test equating.
  • Legal Issues, by Kim A. Zeitlin- Zeitlin discusses a recent setback to truth-in-testing laws resulting from a federal district judge’s decision. The article includes observations about copyright law, which the ruling held to take precedence over the state disclosure requirements.
  • Abstracts and Updates, by Mark Raymond- Raymond’s column provides some excellent summaries of recent publications on an important topic: test item writing.
  • Software Review, by Steve Nettles- Nettles reviews an inexpensive statistical package.
  • Psychometric Issues, by Benjamin Shimberg- Shimberg’s column delves into important psychometric issues and statistical indices. In this issue, he discusses practical examinations, or performance tests. The ideas raised address some commonly held concerns.
  • Testing Candidates with Disabilities, by Catherine Nelson and Marjorie Ragosta- This article discusses linguistic issues from the perspective of candidates with disabilities. For whom should the test administration format be adjusted? What should be the extent of the adjustment?
  • Comparability of Translated Tests in Occupational Testing, by Nancy Thomas Ahluwalia- Ahluwalia discusses issues involved in translating examinations. When and how should exams be translated, and what problems may be expected?
  • Meeting the Needs of Candidates With Limited Literacy, by Charles B. Friedman and Barbara Halsey- Friedman and Halsey discuss linguistic issues from a very different perspective. Their concern is with licensure exams for which the literacy level required to take the test far exceeds the level required on the job. How should this be handled?