
In-training exams (often called ITEs) are a fixture of medical education. In both residency and fellowship programs, these exams help assess learners' progress, identify knowledge gaps, and inform curriculum decisions well before learners sit for their board exams.
Most medical educators understand what in-training exams consist of. However, many educators do not have a clear view of how ITEs can be delivered, analyzed, and benchmarked effectively, especially across participating institutions.
In this guide, we’ll break down what ITEs are, how their results are used, where programs commonly struggle with them, and how benchmarking can make them more valuable.
ITEs are assessments given to residents or fellows during their training. They are typically delivered annually and are designed to measure specialty-specific knowledge at various stages of a learner’s development.
Unlike board certification exams, ITEs are generally formative rather than summative. Their primary purpose is to support learning and guide improvement, not to certify competence.
ITE scores are often reported using percentiles or scaled scores, allowing learners and educators to interpret performance relative to a broader cohort.
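To make the reporting idea concrete, here is a minimal sketch of how a raw score might be converted into a percentile rank and a scaled score relative to a cohort. The cohort scores and the scaling constants (mean 500, SD 100) are invented for illustration; real ITEs use their own psychometric scaling.

```python
# Hypothetical illustration of percentile and scaled-score reporting.
# Cohort data and scaling constants are invented for demonstration only.

def percentile_rank(score, cohort):
    """Percent of cohort scores at or below the given score."""
    return 100 * sum(1 for s in cohort if s <= score) / len(cohort)

def scaled_score(raw, cohort, mean=500, sd=100):
    """Map a raw score onto a scale with a fixed mean and standard deviation."""
    n = len(cohort)
    cohort_mean = sum(cohort) / n
    cohort_sd = (sum((s - cohort_mean) ** 2 for s in cohort) / n) ** 0.5
    return mean + sd * (raw - cohort_mean) / cohort_sd

cohort = [52, 61, 58, 70, 64, 55, 68, 73, 60, 66]  # raw scores out of 100

print(percentile_rank(64, cohort))        # learner's standing within the cohort
print(round(scaled_score(64, cohort)))    # same score on the 500/100 scale
```

Reporting both numbers lets a learner see the same performance two ways: how they rank against peers, and where they fall on a stable scale that can be tracked year over year.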
ITE results serve different but complementary purposes depending on the audience.
For learners, in-training exams offer a regular, lower-stakes gauge of progress.
Rather than being a single high-stakes moment, ITEs provide ongoing feedback that supports continuous improvement.
For program directors and faculty, ITEs offer insight into both individual learners and cohort-level performance.
When used well, ITEs become a program-level diagnostic tool, not just an individual assessment.
Although ITEs are used across many residency and fellowship programs, programs still commonly struggle to deliver and analyze them effectively.
Common challenges include:
Off-the-shelf exam platforms often force programs into rigid workflows that don’t align with specialty needs, institutional policies, or existing educational models.
In some cases, exams live outside the primary learning platform, creating disconnected experiences for learners and added administrative overhead for staff.
Basic score reports may show averages or percentiles but fail to provide deeper insights—such as trends over time, subgroup comparisons, or program-level benchmarking.
Perhaps most limiting: many systems do not allow programs to compare performance across universities or training sites, even when that data could significantly improve educational outcomes through program-manager coordination, shared best practices, and even gamified competition between programs.
Modern medical education increasingly relies on data-informed decision-making. That means ITEs shouldn’t just be delivered; they should be intentionally designed, deeply analyzed, and meaningfully compared.
This is where purpose-built platforms make a real difference.
As a healthcare-focused LMS, OasisLMS works with major medical associations and colleges that administer high-stakes, high-impact assessments, including in-training exams. Rather than offering a one-size-fits-all exam tool, OasisLMS supports highly customized ITE implementations.
Programs can tailor their exams to their own requirements.
This level of customization is especially important in medical education, where standards vary by specialty and institution.
In-training exams can be hosted directly within OasisLMS, allowing learners to complete assessments in the same environment they use for coursework, content, and other evaluations.
This creates a smoother learner experience and simplifies administration for staff.
One of the most powerful capabilities OasisLMS supports is comparative performance analysis.
In some implementations, program managers at universities or medical institutions can view their learners’ results alongside aggregate results from other participating programs.
This cross-university visibility allows educators to move beyond isolated results and see how their program truly stacks up, something most traditional exam systems don’t support.
Benchmarking transforms in-training exams from simple assessments into strategic tools.
With comparative data, programs can benchmark themselves against peer institutions and act on the differences they find.
For medical education leaders, this level of insight is invaluable.
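The benchmarking idea above can be sketched in a few lines: compare each program's mean score against an aggregate benchmark built from all participating programs. The program names and scores below are invented; a real implementation would pull de-identified results from the exam platform's reporting tools.

```python
# Hypothetical sketch of program-level ITE benchmarking across institutions.
# All names and scores are invented for demonstration.
from statistics import mean

ite_scores = {
    "Program A": [61, 66, 70, 58, 63],
    "Program B": [72, 69, 75, 68, 71],
    "Program C": [55, 60, 57, 62, 59],
}

# Benchmark: mean score across every learner in every participating program.
all_scores = [s for scores in ite_scores.values() for s in scores]
benchmark = mean(all_scores)

for program, scores in sorted(ite_scores.items()):
    delta = mean(scores) - benchmark
    print(f"{program}: mean {mean(scores):.1f} ({delta:+.1f} vs. benchmark {benchmark:.1f})")
```

Even this simple comparison surfaces the kind of signal isolated score reports miss: a program several points below the multi-institution benchmark knows where to look, while one above it has practices worth sharing.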
Whether you’re administering ITEs now or planning improvements, deliberate practices around delivery, analysis, and follow-up help maximize their impact.
The more intentionally ITE data is used, the more value it delivers.
In-training exams remain one of the most important assessment tools in medical education, but their value depends on how they’re delivered, analyzed, and applied.
Programs that rely on rigid platforms and surface-level reporting often miss opportunities to improve outcomes. Those that invest in customized delivery, integrated systems, and meaningful benchmarking are better positioned to support learners and strengthen their programs.
OasisLMS was built to support exactly this kind of complexity, helping medical education providers move beyond basic exam delivery toward smarter, more impactful use of in-training exams.
Want to Learn More?
If your organization administers in-training exams and needs a platform that supports custom workflows, advanced reporting, and cross-institution insights, you can explore how OasisLMS supports medical education programs.
Whether managing CME for physicians or supporting member growth, OasisLMS helps deliver high-impact education efficiently and at scale.
