Best practices for interviewing applicants for medical school admissions: a systematic review


Associated Data

ESM 1: Search strategy
ESM 2: Characteristics of included studies and interventions
ESM 3: Intervention and measurement methods of included studies

Abstract

Introduction

Interviews are commonly used to select applicants for medical school, residency, and fellowship. However, interview techniques vary in acceptability, feasibility, reliability, and validity. This systematic review investigated the effectiveness of different interview methods in selecting the best qualified applicants for admission to medical school and developed a logic model to implement best practices for interviewing.

Methods

Five electronic literature databases were searched for comparative studies related to interviewing in medical schools from inception through February 1, 2021. Inclusion criteria included publications in English that compared different methods of conducting a selection interview in medical schools with a controlled trial design. General study characteristics, measurement methodologies, and outcomes were reviewed. Quality appraisal was performed using the Medical Education Research Study Quality Instrument (MERSQI) and the Oxford Risk of Bias Scale. Based on these findings, a logic model was constructed using content analysis.

Results

Thirteen studies were included. The multiple mini-interview (MMI) was reliable, unbiased, and predicted clinical and academic performance; the virtual MMI increased reliability and lowered costs. For unstructured interviews, blinding interviewers to academic scores reduced bias towards higher scorers; student and faculty interviewers rated applicants similarly. Applicants preferred structured over unstructured interviews. Study quality was above average per the MERSQI, risk of bias was high per the Oxford scale, and between-study heterogeneity was substantial.

Discussion

There were few high-quality studies on interviewing applicants for admission to medical school; the MMI appears to offer a reliable method of interviewing. A logic model can provide a conceptual framework for conducting evidence-based admissions interviews.

Supplementary Information

The online version of this article (10.1007/s40037-022-00726-8) contains supplementary material, which is available to authorized users.

Keywords: Interview, Communication skills, Admission, Medical schools

Disclaimer The views expressed here are those of the authors and do not necessarily reflect the position or policy of the US Department of Veterans Affairs or the US government.

Introduction

Interviews are an important process in medical education [1]. In the United States (US), there are 94,243 students in medical school [2], most of whom go through an interview as part of the selection process. The selection interview allows admissions officers to verify and supplement other parts of an applicant’s portfolio (grades, standardized exam scores, essays) by gathering information about their personal qualities and capabilities, including proficiency in oral communication, social skills, and motivations that may be predictive of academic success in medical school and future clinical performance. Additionally, interviews give institutions an opportunity to showcase their environments and encourage candidates to matriculate at their schools and training programs.

Interviews in medical education can be problematic, however. They are less reliable and predictive of medical trainee academic and clinical performance than grade point averages (GPA) and national examination scores [3–8]. Moreover, many aspects of the interviewing process may discriminate against applicants based on certain characteristics such as race, gender, and other demographic factors [9]. In addition, in-person interviews are time consuming and can pose a significant financial burden for prospective trainees. These counterbalancing features underscore the importance of determining the most effective ways of conducting fair and reliable interviews that can identify the best qualified applicants for training in medicine.

Evaluating evidence-based approaches to interviewing will help medical education programs reduce bias and admit more qualified applicants. However, prior systematic reviews on best approaches to interviewing in medical education have largely focused on the reliability and validity of the multiple mini-interview (MMI), which is used to evaluate different aspects of an applicant’s character with short interview stations [10, 11]. The primary purpose of this systematic review was to compare the effectiveness of different interview methods in selecting the best qualified applicants for medical school. The secondary purpose was to use the findings of the systematic review to devise a logic model to guide the implementation of evidence-based interview practices [12, 13].

Methods

A health sciences librarian was consulted to formulate search strategies in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search was performed for randomized controlled trials (RCTs) and comparative observational studies using CINAHL, Embase, ERIC, PsycINFO, and PubMed from inception to February 1, 2021 (see Electronic Supplementary Material 1 [ESM 1] for the full search strategies).

Two investigators (JL, AL) independently screened the titles and abstracts of each study in Covidence (https://www.covidence.org/) to select studies for full-text review. After conflicts were resolved, JL and AL independently conducted a full-text analysis for eligibility. JL and AL then independently extracted the following information from the articles: publication year, country, study design, demographic information, sample size, setting, interview techniques, and primary outcomes. Study countries were categorized based on their Global Burden of Disease (GBD) super-region. All disagreements were resolved by the senior investigators (CEM, PG). To identify additional papers for consideration, we used forward reference searching on Google Scholar to find articles that cited relevant literature reviews [10, 11, 14–17].

We included studies that compared different methods of conducting a selection interview in medical schools using a controlled trial design, including RCTs and comparative observational studies. A comparative observational study was defined as a study that tested different interview techniques in two or more distinct comparison groups with minimal differences between them, such as a parallel group study [18].

Our exclusion criteria were: (1) non-English publications, (2) publications that reported on the selection interview in non-medical school settings, (3) non-comparative observational studies, (4) studies that tested different interview techniques on the same sample, and (5) reviews, editorials, case studies, and reports. Crossover studies were excluded because their designs may be inherently prone to a higher risk of bias due to carry-over effects [19, 20]. For example, if applicants were interviewed twice using different formats by the same interviewer, their first performance would affect their score in the second interview; even if the interviewer were different, the applicants may have gained additional experience and familiarity with interview questions from the first interview.

JL and AL independently evaluated the quality and risk of bias of the studies using the Medical Education Research Study Quality Instrument (MERSQI) and the Oxford Risk of Bias Scale, respectively. MERSQI scores range from 0 to 18 based on study design (0–3), sampling (0–3), type of data (0–3), validity evidence (0–3), data analysis (0–3), and outcome reporting (0–3) [21]. The Oxford scale, also known as the Jadad scale, is the most frequently used metric for risk of bias in the medical literature and provides scores from 0 to 5 based on randomization (0–2), blinding (0–2), and attrition (0–1) [22]. Disagreements were resolved by the senior authors.

Logic models are designed to visually map the relationship between interventions and their short-, intermediate-, and long-term outcomes [12, 13]. A logic model was developed by synthesizing the findings of the included studies using content analysis. Categories of interview methods were created to classify interview content and planning for the logic model. Next, methodologies of included studies were reviewed, approaches to developing interview processes were extracted, and activities necessary to conduct interviews were incorporated into the model. Results were listed and divided into targeted short-term outputs for individuals and long-term outcomes for systems based on the principles of logic model development for systematic reviews [13].

Results

After removing duplicates, 1793 potential studies were identified (Fig. 1). After screening titles and abstracts, the full texts of 109 studies were comprehensively reviewed. In total, 96 were excluded due to incompatible study design (n = 60), incompatible study population (n = 24), or incorrect intervention (n = 12). Forward reference searching did not identify any additional eligible studies. Thirteen studies were thus included in the review [9, 23–33].