From the Board of Governors: ABR Plans to Use AI – Carefully
By ABR Board of Trustees Chair Matthew B. Podgorsak, PhD, and ABR Executive Director Brent Wagner, MD, MBA
February 2026;19(1):3

Use of artificial intelligence (AI), accelerated by the introduction of widely available generative tools, has been a leading topic of discussion over the past several years, with myriad implications for medical practice and the medical profession. Most of us are already using AI tools in our personal and professional lives to varying degrees, and such use is probably even more prevalent in the organizations where we work. The ABR recognizes that AI will greatly affect our processes as well, and we have been proactively and carefully evaluating the various ways in which this might happen.
The ABR will be using AI (and associated automation) tools to increase the efficiency of internal business functions encountered in any organization of our size and scope. These tasks might include scheduling, data analysis, projections, gap analysis, and project management. For example, we are actively looking at ways to more effectively analyze comments from diplomates regarding Online Longitudinal Assessment (OLA) content and make corresponding improvements, either to individual questions or to the systematic distribution of content across the knowledge domain.
Specific to the ABR as a certifying body is the creation of assessment content, including exams and OLA. AI might seem to be a useful tool for this purpose, but it turns out to be very problematic. When used to propose test questions, many popular generative AI tools rely directly or indirectly on external sources that, although publicly available, may be protected by copyright. At the same time, questions and other material generated by AI are generally not themselves protected by copyright. Thus, if generative AI were used to create content (text or images) for ABR assessments, the ABR would risk not only violating an unknown original author’s copyright but also producing exam material that is not protected as an intellectual product. For these reasons, ABR volunteers must agree not to use generative AI when creating OLA or exam content. However, volunteer item writers may use literature searches or other knowledge-based tools to research or confirm the concepts being tested.
The AI landscape as it might relate to ABR certification models, specifically exam structure, is still evolving. In ongoing discussions involving internal and external stakeholders as well as third-party consultants, we are looking at opportunities to use AI to increase standardization and fairness. In addition, as AI clinical tools become an integral part of practice (particularly in diagnostic radiology [DR]), the ABR may find it appropriate to assess competency in the portion of the domain relating to “if, when, and how” candidates use clinically available AI tools in practice.
As with all ABR strategies and priorities, we attempt to balance our consideration of opportunities for innovation with established professional standards. Our parallel efforts to replicate clinical practice in the assessment and testing process, together with our commitment to fairness, mean that we will continue to communicate all substantive changes to our exam platforms well in advance of implementation.
P.S. None of this was generated by AI.
