Post-Exam Surveys Give Candidates a Chance to Help Us Improve
by Brent Wagner, MD, MBA, ABR Executive Director
2021;14(5):3
We conduct surveys after each exam administration for both our computer-based (“written”) and oral exams. Our interest in feedback predates our current remote platforms, although the 2021 surveys have included questions targeted at the new technology.
On the surface, it would seem reasonable to assume that we do surveys to inform the ABR staff, the question writers, and the governing board about how candidates perceive the various components of the process. Within the ABR, however, our discussions frequently involve a logical extension of that premise: Why do we ask for examinees’ perceptions? Simply stated, it’s because we want to do it better.
The most important question, though, pushes these ideas even further: Why do we want to do it better? This is the nexus of the process and the board’s mission, and it starts with two broad requirements of the testing instruments. First, the process should not introduce undue burden, anxiety, or cost. Second, the exam should include relevant, balanced content that reasonably covers the domain of the specialty. The second requirement matters because it allows the public to rely on the credential as an indicator that the radiologic professional is competent to engage in independent practice. The first (the process) should not become a distraction that undermines the validity of the exam and, ultimately, its value to the patient.
Not surprisingly, we learn the most from those who encountered problems with remote exams. The ABR staff and I value comments such as “The instructions were confusing” or “The test platform was not intuitive” because they show us where we need to improve. If a candidate didn’t understand, it’s not his or her fault – we have an obligation to look at ways to provide more clarity. If the tool isn’t intuitive, we need to accept responsibility for that and attempt to make it better.
There are three categories of survey responses I want to specifically address:
- “The exam was too difficult.” While we receive a number of such comments from candidates after each administration, I hear from our question writers (most of whom are faculty members in training programs) that the test is “just right.” The good news for candidates is that the “difficulty rating” of each item is established independently by a large group of individuals who are not the same people who write the content (the Angoff method). Taken in aggregate, these ratings adjust for more difficult questions and, by extension, adjust the number of correct answers needed to reach the passing threshold; a general sketch of how such ratings translate into a passing score follows this list.
- “The remote exam platform is not reliable.” Our remote administrations have been successful for the vast majority of candidates. We appreciate the complimentary descriptions of the ease of use of the system and our overall responsiveness to problems when they arise. The surveys have consistently returned results in the 90% range in the “top two” response categories for the performance and ease of use of the remote platforms. Nearly all our candidates are grateful to avoid the stress and costs of travel, especially in view of the ongoing pandemic. However, there are substantial tradeoffs, especially in a high-stakes testing environment such as medical board certification. Perhaps the most significant is the ABR’s inability to anticipate and diagnose problems with the examinee’s local network or hardware; these issues cannot always be corrected in real time and, in a small number of cases, unavoidably interrupt the exam. Fortunately, we have had to invalidate very few (fewer than 1%) of the exams administered in 2021 because of these technical failures. We realize that being required to sit for a second (future) administration can significantly disrupt other life events for the affected candidates, and we’re working to reduce those outliers through simplification and other refinements of the platform.
- “The content is not relevant.” Years ago, when I was a candidate for diagnostic radiology certification, my assessment of “relevance” (whether in a departmental case conference, the in-service exam, or one of the board exams) was based on a personal bias: whether or not I knew the answer. We have approximately 1,000 volunteers who take part in content development and delivery; many of them participate at several professional levels, including research, publication, and national society meetings. The domain covered by the exams is defined by these individuals, not the ABR staff. As we have described previously, the iterative steps of question development and test assembly are the product of committee discussions in which members are conscious of the need to eliminate esoteric and niche topics.
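For readers curious about the mechanics mentioned above, the short sketch below illustrates, in general terms, how Angoff-style ratings can be aggregated into a passing threshold. The number of judges, the items, and the probability estimates are entirely hypothetical and are not drawn from ABR data or ABR’s actual procedures.

```python
# A minimal sketch of Angoff-style standard setting (illustrative only).
# Each judge estimates the probability that a minimally competent candidate
# would answer each item correctly; per-item means are summed to produce a
# recommended passing (cut) score for that exam form. Harder forms yield
# lower ratings and therefore a lower passing threshold.

# Hypothetical ratings: rows = judges, columns = exam items.
ratings = [
    [0.80, 0.55, 0.70, 0.40, 0.90],  # judge 1
    [0.75, 0.60, 0.65, 0.45, 0.85],  # judge 2
    [0.85, 0.50, 0.75, 0.35, 0.95],  # judge 3
]

num_items = len(ratings[0])

# Average the judges' estimates for each item.
item_means = [
    sum(judge[i] for judge in ratings) / len(ratings)
    for i in range(num_items)
]

# Expected score of a borderline candidate = recommended passing score.
cut_score = sum(item_means)

print("Item-level estimates:", [round(m, 2) for m in item_means])
print(f"Passing threshold: {cut_score:.1f} of {num_items} items "
      f"({100 * cut_score / num_items:.0f}%)")
```

In this toy example the threshold works out to about 3.4 of 5 items (67%); a form with harder questions would produce lower judge estimates and, correspondingly, require fewer correct answers to pass.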
We are sincerely grateful for the feedback provided by our candidates. Acknowledging that we must balance conflicting constraints, including costs, the ABR staff and governing board are working to make our exams, including the experience for examinees, better.