CLEAR Exam Review Spring 2019, Volume 29, No. 1

  • Abstracts and Updates, by George Gray. Dr. Gray summarizes an array of recent publications in Abstracts and Updates, beginning with the updated CLEAR publication, Questions a Legislator Should Ask (2018). Next reviewed is a full-length book on score reporting; Dr. Gray summarizes several chapters most relevant to assessment practices in professional licensure and certification. Five articles describing certification programs are reviewed, covering assessment in clinical data management, occupational safety and health, business analysis, and the medical subspecialty of clinical informatics. Next highlighted is a publication illustrating the use of generalizability theory as an alternative to classical reliability estimation. Two additional articles are described that focus on evaluating the relevance of test items to evidence-based medicine in the context of a medical specialty.
  • Legal Beat, by Dale Atkinson. This installment of Legal Beat describes a recent case in which a failing candidate challenged the grading methodology used to score the bar exam, contending that he would have passed if the score reporting procedure employed by the Law School Admission Council (LSAC) for the Law School Admission Test (LSAT) had been used. The LSAT is not a licensure examination; it is designed to serve as a factor in predicting success in law school. It measures proficiency along a continuum without establishing a passing level, and the LSAC reports score bands determined by the standard error of measurement. Ultimately, the case failed “on its merits,” but issues related to jurisdiction, defamation, and constitutional rights were also addressed. Testing organizations will note the Court’s finding regarding federal court jurisdiction regardless of the state in which the organization is located.
  • Perspectives on Testing, by Grady Barnhill, Fae Mellichamp, Cyrus Mirza, Chuck Friedman, and Lawrence J. Fabrey. This column presents brief responses of several psychometricians and other assessment experts to examination-related questions posed by CLEAR members and conference attendees. Nine such questions are addressed. Several are related to scoring and score reporting: best practices in score reporting, responding to errors in scoring or score reporting (including subscores), and whether and how to report scores to training programs. Additional topics covered include the use of innovative exam items, issues related to remote proctoring, addressing variations in pass rates, responding to reports of cheating, and ensuring that examinations truly serve to protect the public. For some questions, those attending the panel discussion at the CLEAR Annual Educational Conference were surveyed on the spot, and their impromptu responses are reported.
  • 2018 CLEAR Quick Poll Results, by Carla Caro. Results of CLEAR Quick Poll surveys from 2017 were summarized in the Winter 2017-18 issue of CER. Beginning with the current issue, we present a recurring column by Carla Caro reporting the results of recent Quick Polls. In 2018, CLEAR members responded to questions regarding language accommodations in testing, use of varying item types, registration of test content with the U.S. Copyright Office, training provided to new licensees or registrants, experience with exam breaches (cheating), and examination appeals. Graphics and narrative descriptions summarize the responses to the 2018 Quick Polls.
  • Are You Kidding Me? Keep Your Exams Legally Defensible, by Sarah Wennik. In our first feature article, Sarah Wennik has compiled the advice of testing experts on ensuring the legal defensibility of credentialing examinations. This article is similar to Perspectives on Testing in that it reprises the content of a popular session at the recent Annual Educational Conference. The focus here, however, is on the single topic of exam defensibility. Best practices are recommended in the administrative phases of testing (communication with candidates, eligibility criteria, score reporting), exam development (selecting experts for an exam review committee, editing and updating exam questions), and psychometrics (developing test blueprints, establishing an appropriate passing standard, reviewing item performance, and equating test forms). Readers are cautioned to avoid a “set it and forget it” mindset and instead to periodically review their exam programs to maintain defensibility. The article also points to helpful resources available through the CLEAR website.
  • The Future of Medical Continuing Certification Assessment: Relevant, Dynamic, and Frequent, by Sarah Schnabel. Finally, we offer an article that some might find controversial. Sarah Schnabel discusses assessment practices in the context of Maintenance of Certification (MOC) programs among the member boards of the American Board of Medical Specialties (ABMS). ABMS member boards have wrestled with designing and implementing MOC programs for nearly 20 years, with each board developing its own program to address a four-part framework: (I) Professionalism and Professional Standing; (II) Lifelong Learning and Self-Assessment; (III) Assessment of Knowledge, Judgment, and Skills; and (IV) Improvement in Medical Practice. Dr. Schnabel focuses on Part III and supports the use of frequent, summative assessments that follow dynamic blueprints and include assessment of specialty core knowledge. Other medical specialties (as well as non-medical regulated fields) may take a different perspective regarding the assessment of continuing competence or certification maintenance. CER invites responses from readers, whether confirmatory or divergent, in the form of an article submission or simply a brief note to cer@clearhq.org.

Price: $15