Final Report of the Testing Task Force

Foreword by the Testing Task Force Chair

It is with great pride that the Testing Task Force presents this Final Report, marking the conclusion of an intensive three-year research study undertaken to identify the legal knowledge and skills entry-level attorneys are expected to have or learn within the first three years of practice, and to determine whether, how, and when those identified competencies should be assessed on a bar examination.

The Testing Task Force undertook this substantial research project beginning in January 2018, consistent with the National Conference of Bar Examiners’ commitment to providing high quality, valid, reliable licensure exam materials to jurisdictions that require passage of a bar examination for bar admission. Licensure exam requirements are certainly not unique to the legal profession. All such requirements possess as a common thread the recognition that public protection and confidence in a profession warrant coupling satisfactory performance on a licensure exam with relevant education as conditions of licensure. High-stakes licensure exams are thus an integral part of a professional licensure system that recognizes the important and varied roles played by professional education, post-education assessment, and post-licensure training and continuing education in producing competent licensed professionals to practice in their profession consistent with public protection.

This report marks both an ending and a beginning. The report represents the end of the TTF’s work—a substantial research project that produced scientifically supported recommendations for the content, timing, scoring, format, and delivery mode of the bar examination of the future. The TTF’s recommendations are exciting and transformative. Most importantly, they are responsive to input gathered through listening sessions, focus groups, scientific surveys, and intensive deliberations. This report provides the reader with a high-level summary of three years of exhaustive work and should be read together with the far more detailed reports published by the TTF along the way (and referenced below).

But this report also marks the beginning of the next chapter: implementation. The TTF’s recommendations have been approved by NCBE’s Board of Trustees, and over the next four to five years, NCBE will be working diligently to develop the next generation of the bar exam—the NextGen Bar Exam. Implementation of the TTF’s recommendations will employ the same transparent, unencumbered, collaborative, empirical methodology that was the hallmark of the TTF’s study. It will require an enormous amount of work, as is summarized in the closing portions of this report. You can be assured that NCBE’s demonstrated dedication to providing products and services of unparalleled quality to jurisdictions will continue through the implementation phase of this important project.

It is no casual undertaking, producing licensing exam products that validly and reliably measure whether an entry-level lawyer, who will be afforded a general license to practice, possesses the foundational knowledge and skills required to help ensure public protection. But NCBE has demonstrated time and again its commitment to that objective in the services and products it has provided to jurisdictions. The work of the TTF, and the important work that will be required over the next few years to implement the TTF’s recommendations, add to the long history of proactive efforts undertaken by NCBE to capably and professionally serve bar admitting authorities, mindful of the fundamental fairness to which applicants are entitled.

Stay tuned.

Cynthia L. Martin
Chair, Testing Task Force

Introduction

The National Conference of Bar Examiners (NCBE) created the Testing Task Force (TTF) to undertake a comprehensive three-year study to ensure that the bar examination continues to test the knowledge, skills, and abilities needed for competent entry-level legal practice in a changing profession. The primary goal of this research was to identify the foundational knowledge and skills that should be included on the next generation of the bar exam and to determine how and when they should be assessed. However, the TTF expected that its research could also potentially be useful to others involved in educating, training, and mentoring law students and newly licensed lawyers.

The TTF’s work was conducted in three phases, starting at the beginning of 2018 and concluding at the end of 2020. It was approached systematically, transparently, and collaboratively—unconstrained by the current bar exam’s content and design—with qualitative and quantitative research conducted by external expert consultants.

During Phase 1, the TTF held a series of listening sessions across the country where more than 400 stakeholders from bar admission agencies, the legal academy, and the legal profession provided their views about the current bar exam and ideas for how it could be changed. Phase 2 consisted of a nationwide practice analysis survey completed by nearly 15,000 lawyers that provided a rich set of data on the work performed by newly licensed lawyers (NLLs) and the knowledge and skills they need to perform that work. In Phase 3, the TTF convened two committees composed of bar admission representatives, legal educators, and practitioners, who applied their professional experience and judgment to the data produced in Phases 1 and 2 to provide input on what content should be tested on the bar exam and when and how that content should be assessed. Input from stakeholders was gathered at each step. The results from Phases 1, 2, and 3 of our study are summarized at a high level in this report, which should be read in conjunction with the more detailed descriptions provided in the three individual reports available at https://NextGenBarExam.ncbex.org/reports/.

Based on this extensive research, the TTF arrived at high-level decisions about the content and the design for the next generation of the bar examination. Those decisions were founded on the principle that the purpose of the bar exam is to protect the public by helping to ensure that those who are newly licensed possess the minimum knowledge and skills to perform activities typically required of an entry-level lawyer. Our decisions reflect the fact that NLLs receive a general license to practice law, suggesting that the licensure exam should not attempt to assess knowledge and skills unique to discrete practice areas, but should instead assess knowledge and skills that are of foundational importance to numerous practice areas.

Additionally, the TTF’s decisions were guided by the prevailing views expressed by stakeholders that

  • the bar exam should test fewer subjects and should test less broadly and deeply within the subjects covered;
  • greater emphasis should be placed on assessment of lawyering skills to better reflect real-world practice and the types of activities NLLs perform;
  • the exam should remain affordable;
  • fairness and accessibility for all candidates must continue to be ensured; and
  • the feature of score portability provided by the Uniform Bar Exam (UBE) should be maintained.

The TTF’s decisions were also based on what will best ensure that the exam’s content and design achieve the purpose of the bar exam described above and meet the criteria for sound testing practices applicable to high-stakes licensure exams as set forth by the Standards for Educational and Psychological Testing (AERA, APA, NCME, 2014).

At the beginning of January 2021, the TTF published its Overview of Preliminary Recommendations for the Next Generation of the Bar Examination and held webinars to collect stakeholder reactions and answer questions. Overall, the response from stakeholders was positive. The TTF then submitted the recommendations to the NCBE Board of Trustees, which approved the recommendations without change on January 28, 2021.

This report summarizes information gathered during the three years of our study, sets out the TTF’s final recommendations and the rationale for its decisions, and highlights the major steps NCBE will undertake to implement the next generation of the bar examination.

Summary of Phase 1 Report: Listening Sessions

The purpose of the listening sessions held during Phase 1 was to listen to stakeholders’ concerns, thoughts, and ideas related to the bar exam of the future. At each session, a member of the TTF or NCBE staff gave a welcome and introduction describing the TTF’s research plan and then stayed in the session as an observer. One of the TTF’s external consultants facilitated the sessions.

Following the welcome and introduction, the facilitator explained that the participants would be asked to provide input on the content, format, timing, and delivery method of the bar exam and the MPRE. Participants were invited to candidly provide their opinions and were informed that no comments would be attributable to specific participants in any written reports or materials.

The following questions served as the guiding framework for the listening sessions:

  • What aspects of the current bar exam and MPRE do you think should be kept? Why?
  • What aspects of the current bar exam and MPRE do you think should be dropped or modified? Why?
  • What do you think the next generation of the bar exam and MPRE should be?
  • What cautions do you want to share regarding any potential changes to the bar exam and MPRE?
  • What else would you like to discuss about the bar exam and MPRE?

Key Points

Because each listening session included different stakeholders, the discussions reflected the interests of each respective group. The diversity of stakeholders and perspectives provided ample opportunities for rich discussion about each of the major topics. The key points that emerged from participants’ input across all listening sessions are summarized below.

Very few, if any, opinions were universally shared by stakeholders. Additionally, while the sessions were intended to focus on potential changes, most sessions included comments supporting various aspects of the current exam program, along with suggestions for continued evolution and improvement.

Content
  • The MPT was widely viewed as the component that is most representative of the skills needed for NLLs at the point of entry to practice.
  • The subject areas measured on the MBE were generally viewed as representative of subjects that would be applicable to all NLLs. However, the target level for items on the MBE was viewed by many as going beyond the point of entry-level competency by testing nuanced issues and “exceptions to exceptions to rules.”
  • Content that focuses on skills such as issue spotting, critical thinking, legal analysis, written and oral communication, and reasoning was considered more applicable to all NLLs. In contrast, content that focuses on subject-matter knowledge was viewed by some as requiring memorization of legal rules that lawyers can look up in practice.
Format
  • The constructed-response format of the MEE and the MPT was viewed as more representative of what NLLs do in practice (i.e., written analysis of legal and factual issues) than the multiple-choice format of the MBE.
  • The MPRE content could be assessed using essays or MPT-like questions as opposed to, or in addition to, the current multiple-choice format.
  • While multiple-choice items were viewed by some stakeholders as not reflective of the way law is practiced, many stakeholders recognized the benefits that the MBE contributes: objective scoring, reliability of scores, and scaled scores that have consistent meaning over time and across jurisdictions because the exam is equated.
  • While using simulations was suggested by stakeholders to provide more realistic assessment of skills, the associated downsides of greater subjectivity in grading, the potential for bias, and increased costs were also noted.
Timing
  • While the idea of “step testing” (used for physician licensing through the United States Medical Licensing Examination) was frequently suggested by stakeholders, the downsides of step testing were also raised.
  • More frequent administration of the bar examination could permit candidates to sit for the exam when they are ready, permit failing candidates to retake the exam sooner, and reduce the time to employment after graduation, which would help graduates with student debt. It was also acknowledged that more frequent administrations of the exam could require jurisdictions to use more staff and other resources, which could increase costs.
  • Reducing the time required to grade the constructed-response components (essays and performance tests) could allow passing candidates to begin practicing sooner.
Delivery and Administration
  • There was varied support for paper-based testing, computer-based testing, or some combination of these delivery modes. The delivery method for the exam should align with law school, training, and practice environments.
Other Comments/Topics
  • The consistency in subjects tested and the portability of scores are positive features of the UBE and should be maintained. Increased consistency in grading of the MEE and MPT across UBE jurisdictions could be accomplished through different activities ranging from increased guidance by NCBE on grading practices to centralized grading for constructed-response/essay questions.
  • There was support for greater consistency in passing score requirements to communicate a common standard for entry-level competency, particularly for the UBE, but support was also voiced for maintaining each jurisdiction’s autonomy in setting its passing score.

Summary of Phase 2 Report: Practice Analysis

Phase 2 of the TTF’s study consisted of a national practice analysis to provide empirical data on the job activities of NLLs, with NLLs defined as lawyers who have been licensed for three years or less. The practice analysis survey asked respondents to rate the job tasks typically performed by NLLs, as well as the knowledge, skills, abilities, and other characteristics required to effectively perform those tasks. To paint a comprehensive picture of legal practice, the survey also included a technology section that listed work-related software applications that lawyers use to perform their work. The quantitative data collected through the practice analysis were intended to complement the more qualitative data gleaned from the Phase 1 listening sessions, from focus groups and interviews with NLLs conducted in prior studies done by NCBE, and from the environmental scan conducted as part of the current study and described briefly below.

The practice analysis survey was developed between October 2018 and July 2019. First, an environmental scan was completed to research information relevant to the legal profession that could support the development of an organized taxonomy of the work responsibilities of NLLs. Draft lists of tasks; knowledge areas; skills, abilities, other characteristics; and technology items were compiled through the environmental scan. Three focus groups were then conducted with lawyers from a variety of practice areas, settings, and backgrounds to refine the lists. Next, the TTF revised the draft lists resulting from the work of the focus groups to improve consistency in wording and eliminate redundancy, and the lists were subsequently organized for use in the survey. To evaluate the content and structure of the draft survey, pilot testing was completed by 82 lawyers who volunteered to provide input on the clarity of the survey instructions, the completeness of the lists, the usability of the rating scales, and the amount of time required to complete the survey. The survey was revised and finalized based on the results of the pilot test.

Given the purpose of the practice analysis—to identify fundamental work activities across the practice areas and settings in which NLLs work to determine appropriate content for a general licensure exam—the TTF organized the tasks according to the following four broad categories: (1) General tasks, (2) Trial/Dispute Resolution tasks, (3) Transactional/Corporate/Contracts tasks, and (4) Regulatory/Compliance tasks. The lists of knowledge areas; skills, abilities, and other characteristics (SAOs); and technology items were shorter than the list of tasks and did not require organizational frameworks. The survey also included a demographics section to obtain a description of respondents’ backgrounds and work environments for use in analyzing the results.

Table 1. Practice Analysis Survey Sections and Rating Scales

Tasks (179 items)
  Sample survey items:
  • Establish and maintain client trust account.
  • Determine proper or best forum to initiate legal proceeding.
  • Determine lawfulness or enforceability of contract or legal document.
  • Secure required governmental or regulatory approvals or authorizations.
  Rating scales: 5-point frequency scale ranging from 0 (not applicable) to 4 (weekly); 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Knowledge Areas (77 items)
  Sample survey items:
  • Bankruptcy Law
  • Civil Procedure
  • Criminal Law
  • Rules of Evidence
  Rating scale: 4-point importance scale ranging from 0 (not applicable) to 3 (essential)

Skills, Abilities, and Other Characteristics (SAOs) (36 items)
  Sample survey items:
  • Critical/Analytical Thinking – Able to use analytical skills, logic, and reasoning to solve problems and to formulate advice.
  • Conscientiousness – Approaches work carefully and thoughtfully, driven by what is right and principled.
  • Interviewing/Questioning – Able to obtain needed information from others to pursue an issue or matter.
  • Leadership – Able to delegate, inspire, and make thoughtful decisions or plans to further goals and objectives.
  Rating scale: 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Technology (24 items)
  Sample survey items:
  • Research Software or Platforms – Software, programs, or databases that permit the user to conduct electronic legal research.
  • Data Analytics Software – Software used to find anomalies, patterns, and correlations within data.
  • Video-Conferencing Software – Software that permits audio or video meetings with participants in different locations.
  Rating scale: 4-point proficiency scale ranging from 0 (not applicable) to 3 (expert)

Demographics (10 items)
  Sample survey items:
  • Which of the following best describes your practice setting?
  • How many lawyers are in your organization?
  • With which of the following races do you identify?
  • In which of the following areas of practice do you spend at least 5% of your time?
  Rating scale: Response options were tailored to each question

The survey was lengthy by necessity to adequately cover the work of NLLs. To prevent survey fatigue and encourage a high rate of response, matrix sampling was used to assign survey respondents to different sections of the survey. Respondents were randomly assigned to one of four versions of the survey. Random assignment ensured that each version of the survey was seen by comparable numbers of respondents and reduced the selection bias that can occur when survey recipients are provided with the option to choose the category of questions to which they respond.
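To make the matrix sampling design concrete, the following sketch shows how random assignment to four survey versions might be implemented in Python. The number of versions and the overall respondent count come from this report; the function name, seed, and other details are illustrative only.

```python
import random

SURVEY_VERSIONS = 4  # the survey was split into four versions

def assign_versions(respondent_ids, seed=2019):
    """Randomly assign each respondent to one of the survey versions.

    Random assignment keeps version group sizes comparable and avoids the
    selection bias that can occur when respondents choose their own sections.
    """
    rng = random.Random(seed)
    return {rid: rng.randrange(SURVEY_VERSIONS) for rid in respondent_ids}

assignments = assign_versions(range(14846))  # one entry per respondent
counts = [sum(1 for v in assignments.values() if v == k) for k in range(SURVEY_VERSIONS)]
print(counts)  # four roughly equal groups of about 3,700 respondents each
```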

The survey was open from August 1, 2019, through October 2, 2019. Given that there is no centralized registry of all practicing lawyers in the United States, a random sampling approach to survey distribution was not possible. The TTF instead took a census approach in which any eligible respondent could answer the survey. NCBE obtained cooperation from 54 jurisdictions to assist with promoting the survey. NCBE also promoted the survey via multiple email campaigns, through frequent posts on the TTF’s and NCBE’s social media channels, and in NCBE’s quarterly publication, the Bar Examiner.

Both NLLs and more experienced lawyers (non-NLLs) who have or had direct experience working with or supervising NLLs were invited to complete the survey to ensure a breadth of perspectives on the work performed by NLLs. Respondents were asked at the beginning of the survey how many years they had been licensed, which was used to determine whether they fell into the category of NLL or non-NLL. Non-NLLs were disqualified from taking the survey if they indicated that they had not ever had direct experience working with or supervising NLLs.

The survey required slightly different sets of instructions for NLLs and non-NLLs. NLLs were asked to rate survey items in terms of their own personal practice (e.g., “How frequently do YOU perform this task in YOUR practice areas and setting?”). Non-NLLs were asked to rate survey items based on the practice of NLLs with whom they have or had direct experience (e.g., “How frequently do newly licensed lawyers with whom you have or had direct experience perform this task in THEIR practice areas and setting, regardless of what other NLLs with whom you do not have direct experience may do?”).

Results

Demographics and Practice Areas

The total effective sample size was 14,846 respondents. The respondents consisted of 3,153 NLLs (21%) and 11,693 non-NLLs (79%). Because the survey did not require a response to every question, the number of respondents to any particular question varied.

Respondents represented a total of 56 jurisdictions and included a broad range of entry-level and experienced lawyers working in a variety of practice settings. Survey respondent data were compared to data for the US legal profession published by the American Bar Association in the ABA Profile of the Legal Profession 2019 (ABA Profile). For most jurisdictions, the percentage of survey respondents in the jurisdiction and the number of lawyers in that jurisdiction as a percentage of the US lawyer population were reasonably consistent, with the following exceptions: Minnesota, Ohio, and Pennsylvania were slightly overrepresented on the survey, while Florida and Illinois were slightly underrepresented.

These demographic comparisons indicate that the practice analysis survey respondents generally were representative of the population of US lawyers as described in the ABA Profile. This, in combination with the large number of respondents, suggests that the survey results should generalize from the sample of respondents to the eligible population of NLLs and non-NLLs in the United States.

Respondents were presented with 35 practice areas and asked to indicate the areas in which they spend at least 5% of their time. They were then asked to estimate the percentage of their time spent in each selected area. The most and least frequently selected practice areas are shown in Table 2.

Table 2. Most Common and Least Common Practice Areas

Most common practice areas:
  • Contracts
  • Business Law
  • Commercial Law
  • Administrative Law
  • Real Estate
  • Criminal Law
  • Appellate
  • Employment Law and Labor Relations
  • Torts
  • Other

Least common practice areas:
  • Securities
  • Immigration Law
  • Disability Rights
  • Employee Benefits
  • Workers’ Compensation
  • International Law
  • Environmental Law
  • Education Law
  • Energy Law
  • Indian Law

The data show that 82% of survey respondents work in multiple practice areas, in varying numbers and with different degrees of emphasis in each area. To better understand how respondents allocate their time across different practice areas, the data were subjected to cluster analysis to identify groups of respondents with similar practice profiles. A desirable feature of cluster analysis is that each survey respondent is assigned to only one cluster and is counted just once for purposes of data analyses. The results suggested that the practice profiles could be condensed into 25 practice clusters. The task and knowledge area ratings were then analyzed within each practice cluster to identify the tasks and knowledge areas that span multiple practice clusters.
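The report does not specify which clustering algorithm was used; the sketch below illustrates the general approach with k-means, a common choice, applied to toy data in which each row is one respondent’s percentage allocation of time across the 35 practice areas.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy stand-in for the survey data: 1,000 respondents, 35 practice areas,
# each row is a set of time percentages summing to 100.
profiles = rng.dirichlet(np.ones(35), size=1000) * 100

kmeans = KMeans(n_clusters=25, n_init=10, random_state=0).fit(profiles)
labels = kmeans.labels_  # each respondent is assigned to exactly one cluster
print(np.bincount(labels))  # number of respondents in each practice cluster
```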

Tasks

The Tasks section of the survey asked respondents to rate tasks on the frequency of performance and criticality for practice. The mean ratings of task frequency and criticality by NLLs correlated highly with the ratings by non-NLLs. Therefore, the groups were combined for most analyses.
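As a minimal sketch of that combine-or-split check, the snippet below computes the correlation between the two groups’ mean task ratings; the rating values are invented for illustration.

```python
import numpy as np

# Invented mean frequency ratings for five tasks, one array per rater group.
nll_means = np.array([3.1, 2.4, 1.8, 3.5, 0.9])
non_nll_means = np.array([3.0, 2.6, 1.7, 3.4, 1.1])

r = np.corrcoef(nll_means, non_nll_means)[0, 1]
print(round(r, 3))  # a correlation near 1.0 supports pooling the two groups
```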

The most frequently performed tasks were performed by more than 90% of NLLs, had mean frequency ratings approaching weekly, and had criticality ratings approaching “high importance” (essential). Of note is that three of these tasks have “research” as the primary verb. Themes other than legal research that were common to the highly rated tasks include ethics, written and spoken communications, legal analysis/evaluation, and diligence. The most and least commonly performed tasks are set out in Table 3.

Table 3. Most Commonly and Least Commonly Performed Tasks

Most commonly performed tasks:
  • Identify issues in client matter, including legal, factual, or evidentiary issues.
  • Research case law.
  • Interpret laws, rulings, and regulations for client.
  • Research statutory and constitutional authority.
  • Evaluate strengths and weaknesses of client matter.
  • Evaluate how legal document could be construed.
  • Develop specific goals and plans to prioritize, organize, and accomplish work activities.
  • Conduct factual investigation to obtain information related to client matter.
  • Research secondary authorities.
  • Consult with colleagues or third parties regarding client matters.

Least commonly performed tasks:
  • Draft and file documents to secure or maintain intellectual property protection.
  • Draft legislation or regulations.
  • Negotiate with or on behalf of land use regulatory authorities.
  • Draft prenuptial or antenuptial agreements.
  • Prepare or review local, state, or federal tax returns and filings.
  • Establish and maintain client trust account.
  • Participate in initiative or proposition process to change statute or constitution.
  • Represent client in post-conviction relief or habeas corpus proceedings.
  • Represent client in eminent domain or condemnation proceeding.
  • Draft constitutional amendments.

Because the tasks lawyers perform might depend on characteristics such as practice setting, geographic region, and so on, criticality and frequency ratings were analyzed by subgroups of respondents based on the following demographic factors: recency of experience with NLLs, practice setting, number of lawyers in the organization, gender, race/ethnicity, and geographic region. The large number of task statements, multiple rating scales, and variety of demographic factors produced thousands of comparisons. A limitation of these analyses was that they concerned only main effects for a single demographic variable at a time and did not consider joint effects of multiple variables. Another limitation was that sample sizes for some subgroups were quite small. More complex analyses were required to disentangle the effects of one demographic variable from another and to better understand the differences; the results of these additional complex analyses were considered during Phase 3, when the content to be assessed in the next generation of the bar exam was evaluated by a diverse panel of legal subject matter experts (SMEs).

In determining which of the 179 tasks included in the survey should potentially be addressed as part of the content assessed on the bar exam, the TTF applied a 50% rule as a general guideline: for a task to be eligible for consideration in the test blueprint development process, it must be performed by at least 50% of entry-level practitioners. However, the decision to keep or drop a task for potential inclusion was also based on the extent to which it was rated as relevant to multiple practice areas. Additional factors considered included results based on demographic subgroups (e.g., solo practitioners, women) and on practice clusters, as well as the personal experience of the SMEs who participated in Phase 3 of the study. Ultimately, 136 tasks were considered during Phase 3, as discussed later in this report.
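A minimal sketch of the 50% screening rule appears below; the task names and percentages are invented, and, as noted above, tasks near the threshold could still be kept or dropped based on the additional factors considered.

```python
# Invented data: proportion of entry-level practitioners performing each task.
performed_pct = {
    "Research case law": 0.94,
    "Identify issues in client matter": 0.93,
    "Draft constitutional amendments": 0.03,
}

eligible = [task for task, pct in performed_pct.items() if pct >= 0.50]
print(eligible)  # tasks surviving the 50% screen, before other factors are applied
```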

Knowledge Areas

The 77 knowledge areas were rated in terms of their importance to the practice of all NLLs. The overall means for all knowledge areas as rated by NLLs and non-NLLs were nearly identical, and the correlation between the two sets of ratings was very high; thus, data for the two groups were combined for most analyses.

The knowledge areas with the highest and lowest mean importance ratings are set out in Table 4.

Table 4. Knowledge Areas with Highest and Lowest Mean Importance Ratings

Highest mean importance ratings:
  • Rules of Professional Responsibility and Ethical Obligations
  • Civil Procedure
  • Contract Law
  • Rules of Evidence
  • Legal Research Methodology
  • Statutes of Limitations
  • Local Court Rules
  • Statutory Interpretation Principles
  • Sources of Law (Decisional, Statutory, Code, Regulatory, Rules)
  • Tort Law

Lowest mean importance ratings:
  • Transportation Law
  • Bioethics
  • Indian Law
  • Foreign Trade Law
  • Public Utility Law
  • Military Justice Law
  • Animal Rights Law
  • Sports and Entertainment Law
  • Air and Space Law
  • Admiralty Law

Various methods and indices were considered to guide decisions about which knowledge areas should be considered during Phase 3 as potential content to be assessed on the bar exam. The TTF decided to include a knowledge area if at least 50% of either the NLLs or the non-NLLs who rated it viewed it as being of moderate or high importance. As with the tasks, however, additional factors were also taken into consideration, such as differences in ratings across demographic subgroups and evaluation of the extent to which a knowledge area is relevant to multiple practice areas. Knowledge area importance ratings were remarkably consistent across demographic groups; that is, mean ratings did not vary much based on the demographic backgrounds of respondents such as race, gender, or geographic region. However, mean knowledge area ratings did vary by practice area. Therefore, the results were further analyzed by practice clusters to evaluate the extent to which a knowledge area was relevant to multiple practice areas. As a result of these analyses by practice clusters, 25 knowledge areas were included for consideration during Phase 3, as discussed later in this report.

Skills, Abilities, and Other Characteristics (SAOs)

The survey included 36 SAOs, which NLLs were instructed to rate in terms of criticality to their own practice; non-NLLs were instructed to rate the SAOs based on the practice of NLLs with whom they have or had direct experience. Again, the overall mean ratings from NLLs and non-NLLs were highly correlated and were therefore combined for analysis.

Most SAOs tended to receive high ratings, with the vast majority judged either moderately or highly critical. The SAOs with the highest and lowest mean criticality ratings are set out in Table 5.

Table 5. SAOs with Highest and Lowest Mean Criticality Ratings

Highest ratings:
  • Written/Reading Comprehension – Able to read and understand information presented in writing.
  • Critical/Analytical Thinking – Able to use analytical skills, logic, and reasoning to solve problems and to formulate advice.
  • Written Expression – Able to effectively communicate information and ideas in writing.
  • Identifying Issues – Able to spot salient legal concerns presented by a set of circumstances.
  • Integrity/Honesty – Demonstrates core values and belief system.

Lowest ratings:
  • Strategic Planning – Plans and strategizes to anticipate and address present and future issues and objectives.
  • Leadership – Able to delegate, inspire, and make thoughtful decisions or plans to further goals and objectives.
  • Social Consciousness/Community Involvement – Demonstrates desire to improve society by contributing skills to the community.
  • Networking and Business Development – Able to develop meaningful business relationships and to market skills to develop client relationships.
  • Instructing/Mentoring – Able to manage, train, and instruct to assist others in realizing their full potential.

Results for the SAOs section confirmed previous research on the cognitive and affective skills required of practicing lawyers. Specifically, the list of SAOs included nearly all of the 26 lawyering skills identified through the work of Shultz and Zedeck (2011).[1] The fact that nearly all SAOs were judged to be either moderately or highly critical can be regarded as confirmation of that earlier work.

Given the uniformly high criticality ratings for SAOs, responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

There is little doubt that these SAOs are important for competent entry-level legal practice. Indeed, due to their broad nature, most of the SAOs are critical to working in a variety of jobs or professions. However, some of these skills are difficult to teach (e.g., Integrity and Time Sharing) and even more challenging to assess in a manner that produces reliable and valid test scores. SAOs that are relatively specific to the legal profession (e.g., Fact Gathering), as well as those that can be applied and assessed narrowly within a legal context (e.g., Critical/Analytical Thinking), were considered during Phase 3 when recommendations for the content and design of the next generation of the exam were developed.

Beyond identifying potential content for assessment on the bar exam, the SAO results may be useful to the licensing process by empirically identifying the personal characteristics that are important for competent practice. Thus, those involved in legal education, mentoring of NLLs, continuing legal education, and the character investigation part of the admissions process may find the results useful to their work.

Technology

The 24 technology items on the survey were rated by NLLs in terms of the level of proficiency required in their own practice, while non-NLLs based their ratings on the practice of NLLs with whom they have or had direct experience. The mean ratings for NLLs and non-NLLs were highly correlated, so the groups were combined for analysis.

The technology items with the highest and lowest mean proficiency ratings are set out in Table 6.

Table 6. Technology with Highest and Lowest Mean Proficiency Ratings[a]

Highest mean proficiency ratings:
  • Word Processing Software
  • Research Software or Platforms
  • Electronic Communication Software
  • Desktop Publishing Software
  • Document Storage Software, Including Cloud Storage

Lowest mean proficiency ratings:
  • Web Content Management Software
  • Data Analytics Software
  • Language Translation Software
  • Financial Planning Software
  • Tax Preparation Software

[a] Complete definitions for each technology item were provided in the survey.

Responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

The next generation of the bar exam will not directly assess knowledge and skills related to use of the technology items. However, knowing which technologies NLLs should be proficient in using in practice provides information about the types of testing platforms that examinees might be expected to use (with reasonable accommodations provided for examinees with disabilities). For example, the survey results provide support for the appropriateness of having examinees interact with electronic research software as part of completing a performance test.

Credibility and Generalizability of Findings

Best practices in practice analyses include validating survey responses. To do this, four sources of evidence were evaluated: sample representation, sample size and sampling error, consistency with expectations, and consistency with independent research.

Sample Representation

The survey respondents represented nearly all jurisdictions, and the proportion of survey respondents from each jurisdiction approximated the proportion of practicing lawyers in each jurisdiction based on the ABA Profile. Thus, the breadth of the sample contributes to the generalizability of findings. Furthermore, comparisons of responses to the Tasks and Knowledge Areas sections by respondents from different regions of the country indicated that there was little regional variation in ratings across tasks and almost no regional variation across knowledge areas. This limited regional variation in responses suggests that results are not overly dependent on one or more specific regions.

Sample Size and Sampling Error

A representative sample is of limited value if it is not sufficiently large. Adequate sample sizes are important to ensure the stability of the statistics reported in the findings. The margin of error, or standard error, is the most common index for documenting the precision associated with any statistic. Hundreds of standard errors were computed as part of the statistical analyses for this report. The margins of error were not large, meaning that if this study were replicated with new samples of NLLs and non-NLLs, mean values for the new study would be expected to be very similar to the values observed in the 2019 study. This suggests that readers can be confident in the stability of the statistical indices.
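For readers unfamiliar with the index, the sketch below shows how the standard error of a mean rating is computed and why a large sample keeps it small; the ratings are toy values, not study data.

```python
import math

def standard_error(ratings):
    """Standard error of the mean: sample standard deviation divided by sqrt(n)."""
    n = len(ratings)
    mean = sum(ratings) / n
    variance = sum((x - mean) ** 2 for x in ratings) / (n - 1)
    return math.sqrt(variance / n)

ratings = [3, 2, 3, 1, 2, 3, 0, 2] * 1000  # toy ratings on the 0-4 scale; n = 8,000
print(round(standard_error(ratings), 4))  # small relative to the scale, so the mean is stable
```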

Consistency with Expectations

Another strategy for examining the validity of practice analysis data involves evaluating the extent to which the responses are consistent with informed expectations. The differences in ratings of tasks and knowledge areas by practice clusters were in line with what most readers would expect. For example, the task of “Draft or negotiate business agreements” was performed by 92% of respondents from the Real Estate Law practice cluster but by only 11% of respondents from the Appellate Law: Criminal practice cluster, which is in line with expectations. The survey results suggest that respondents generally were attentive and provided thoughtful responses as they completed the survey.

Consistency with Independent Research

NCBE commissioned a practice analysis in 2011/2012, which was completed by a different research consultant from the one that conducted the present 2019 study. In addition, the State Bar of California completed a practice analysis in 2019 specific to practice in California. Those two studies provide external criteria to which the present study was compared. Although none of the studies was intended to replicate another, they all had the goal of identifying the responsibilities and the knowledge, skills, and abilities (KSAs) required of NLLs.

The 2012 and 2019 NCBE studies both included sections for tasks, knowledge areas, and SAOs. Direct comparison of findings is hindered for various reasons (e.g., the lists were not identical across studies, a task from 2019 might have been classified as a skill in 2012, and there were differences in rating scales). Nonetheless, there is enough overlap to draw some parallels. Overall, the tasks viewed as important in 2012 were also viewed as critical in 2019, even though data were collected from different samples using different instruments and in different contexts. Similarly, in general, knowledge areas judged to be important by 2019 respondents were also viewed as important by 2012 respondents.

The California Practice Analysis (CAPA) survey included 23 tasks that were similar or very similar to tasks appearing on the 2019 NCBE practice analysis survey. Although the rating scales for the two studies were not identical, it was possible to use a linear transformation to rescale the NCBE ratings to approximate what those ratings would be on the CAPA rating scales.[2]
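The sketch below illustrates the kind of linear transformation involved; the scale endpoints are hypothetical and do not reproduce the actual NCBE or CAPA scales.

```python
def rescale(x, src_min, src_max, dst_min, dst_max):
    """Map x linearly from [src_min, src_max] onto [dst_min, dst_max]."""
    return dst_min + (x - src_min) * (dst_max - dst_min) / (src_max - src_min)

# e.g., map a mean rating of 2.8 on a hypothetical 0-4 scale onto a 1-5 scale
print(rescale(2.8, 0, 4, 1, 5))  # -> 3.8
```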

Overall frequency ratings were found to be very similar for the two studies, but there were some notable differences in criticality ratings. A comparison of a sample of tasks from the two surveys indicated striking similarity across all the frequency ratings and most of the criticality ratings.

The CAPA survey also included a list of knowledge areas (topics) that were rated in terms of frequency and criticality. Whereas the 2019 NCBE practice analysis survey listed 77 knowledge areas, the California survey included two levels of topics where 121 specific topics were nested under 21 broad knowledge areas (e.g., Offer and Acceptance nested under Contracts).

Of the 10 most important knowledge areas on the NCBE survey, five also appeared in the top 10 on the CAPA survey. The knowledge areas of Criminal Law and Constitutional Law were included among the top 10 on the CAPA survey, while in the NCBE survey results those two areas were ranked fifteenth and thirteenth, respectively. Those two areas would have been in the top 10 of the NCBE survey had it not included the following as knowledge areas: Legal Research Methodology, Statutes of Limitations, Local Court Rules, Statutory Interpretation Principles, and Sources of Law.

Based on the systematic process of developing a practice analysis survey, and of gathering information from a representative sampling of lawyers, stakeholders should have confidence that the 2019 NCBE practice analysis results provided meaningful guidance for the TTF’s comprehensive study.

Summary of Phase 3 Report: Test Content and Design

For Phase 3 of the Task Force’s work, two committees were convened for the purpose of discussing test content and design issues, working from the qualitative and quantitative data that were compiled in Phase 1 (stakeholder listening sessions) and Phase 2 (nationwide practice analysis). The charge of the Blueprint Development Committee (BDC) was to help determine what content should be tested on the bar exam, while the role of the Test Design Committee (TDC) was to recommend how that content should be assessed. The BDC consisted of newly licensed and experienced practitioners who applied their professional judgment and experience to recommend what content should be tested based upon the Phase 2 results. The TDC was composed of legal educators and bar admission representatives who provided input on an effective design for the exam. The TDC’s work was guided by the Phase 1 study results and by the professional judgment and experience of committee members in educating law school students and admitting NLLs to the bar.

Blueprint Development Committee Meeting

The TTF recruited 17 practicing lawyers to participate as panelists on the BDC; 14 of the panelists were female and 10 were people of color. In total, the panelists practiced in 13 jurisdictions and across a range of 22 practice areas and various practice settings (private law firm, government, nonprofit organization, legal services/public interest, judicial law clerk, and in-house counsel). None had ties to NCBE and none were current or former bar examiners.

The BDC met by videoconference from June 29 to July 1, 2020, for five hours each day. Prior to the meeting, each panelist was provided a binder of materials that served as advance readings for the meeting and additional materials for reference during the meeting.

One of the TTF’s external research consultants facilitated the meeting, and staff from NCBE and the chair of the TTF attended the meeting to observe. The meeting began with an orientation that included an overview of the TTF study, the purpose and function of a test blueprint, a review of the meeting materials, and an explanation of how to interpret the results of the Phase 2 practice analysis.

The general discussion began after the orientation with a review of the job tasks from the practice analysis survey. Specifically, the full list of 179 tasks had been reduced to those 136 tasks that were rated as being performed Frequently or Moderately by 50%[3] or more of the survey respondents. The tasks identified for review were organized by the TTF under these seven skill domains:

  • Legal Research
  • Legal Writing and Drafting
  • Client Counseling and Advising
  • Issue Spotting and Analysis
  • Investigation and Evaluation
  • Negotiation and Dispute Resolution
  • Client Relationship and Management

The BDC reviewed each task and discussed its relevance to practice by NLLs based on the ratings collected during the practice analysis, including (1) the overall frequency ratings, (2) the frequency ratings by Practice Cluster, and (3) the frequency ratings by those survey respondents identified as NLLs versus those who were not NLLs. The result of each task-level discussion was a recommendation as to whether the task should be included within that skill domain as being representative of the activities required of NLLs. The BDC also recommended consolidation of some tasks to eliminate overlap or redundancy.

After reviewing all 136 tasks in this manner, the BDC was asked to consider how much emphasis or weight should be given to the seven skill domains on the bar exam, including models of (1) equal weighting for each skill domain, (2) natural weighting, meaning the weight is determined by the number of tasks under each skill domain, or (3) weighting based on the judgments of the BDC. The BDC panelists opted for the third model and applied their judgment to reach consensus on recommended weights for each skill domain. This activity was concluded at the end of the second day.
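To clarify how the first two weighting models differ, the sketch below computes equal and natural weights from the per-domain task counts shown in Table 7 later in this report; the third model, which the BDC adopted, replaces these formulas with panelist consensus.

```python
# Task counts per skill domain, taken from Table 7 below (103 tasks in total).
task_counts = {
    "Legal Research": 5,
    "Legal Writing and Drafting": 24,
    "Client Counseling and Advising": 14,
    "Issue Spotting and Analysis": 7,
    "Investigation and Evaluation": 17,
    "Negotiation and Dispute Resolution": 23,
    "Client Relationship and Management": 13,
}

equal = {d: round(100 / len(task_counts), 1) for d in task_counts}        # model 1
total = sum(task_counts.values())
natural = {d: round(100 * n / total, 1) for d, n in task_counts.items()}  # model 2
print(equal["Legal Research"], natural["Legal Research"])  # 14.3 vs 4.9
```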

The third day of the meeting was focused on reviewing the knowledge areas from the practice analysis. The full list of 77 knowledge areas from the practice analysis survey had been reduced to 25 by prioritizing those areas that were rated as Important by 50%[4] or more of the survey respondents. The BDC reviewed each knowledge area and discussed its relevance to practice by NLLs based on the overall importance ratings, the importance ratings by Practice Cluster, and the importance ratings by those respondents identified as NLLs versus those who were not NLLs. The result of each knowledge area discussion was a recommendation as to whether the area should be included on the bar exam.

After making decisions about what knowledge areas to recommend for inclusion, the BDC considered how much emphasis or weight should be given to each knowledge area on the bar exam. The BDC also considered generally whether knowledge areas should be measured in a content-dependent context (necessary legal resources are not provided) or in a content-independent context (necessary legal resources are provided).

Results: Skills

In total, the BDC identified 103 tasks as representative of the seven skill domains identified for assessment on the bar exam: 9 of the original 136 tasks considered by the BDC were consolidated to eliminate redundancy, and 24 tasks were recommended for exclusion, with most of those excluded because the BDC concluded that the tasks were generally outside the scope of an NLL’s practice. A list of the 136 tasks, with the BDC recommendations indicated, is provided in Appendix A of the Phase 3 report.

Table 7 shows for each skill domain the number of tasks, a general description of the domain, and the recommended weighting. The weighting is shown as the average of the weights recommended by the BDC panelists; a range of roughly 3% around that average is shown in parentheses.

Table 7. Skills Recommendations by BDC

  • Legal Research (5 tasks). Description: Researching the Law, Written/Reading Comprehension, Critical/Analytical Thinking. Weighting: 17.5% (15–20)
  • Legal Writing and Drafting (24 tasks). Description: Written Expression, Critical/Analytical Thinking. Weighting: 14.5% (12–17)
  • Client Counseling and Advising (14 tasks). Description: Oral Expression, Oral Comprehension, Cultural Competence, Advocacy, Critical/Analytical Thinking, Problem Solving, Practical Judgment. Weighting: 11.9% (10–15)
  • Issue Spotting and Analysis (7 tasks). Description: Identifying Issues, Observant, Critical/Analytical Thinking. Weighting: 17.5% (15–20)
  • Investigation and Evaluation (17 tasks). Description: Interviewing/Questioning, Fact Gathering, Cultural Competence, Problem Solving. Weighting: 17.5% (15–20)
  • Negotiation and Dispute Resolution (23 tasks). Description: Negotiation Skills/Conflict Resolution, Creativity/Innovation, Expressing Disagreement, Written Expression, Oral Expression, Oral Comprehension, Advocacy, Practical Judgment. Weighting: 11.9% (10–15)
  • Client Relationship and Management (13 tasks). Description: Networking and Business Development, Resource Management/Prioritization, Organization, Strategic Planning, Managing Projects, Achievement/Goal Orientation, Practical Judgment, Decisiveness, Cultural Competence. Weighting: 9.2% (7–12)

Results: Knowledge Areas

The BDC endorsed including 11 (of 25) knowledge areas. A list of the 25 knowledge areas considered by the BDC, with the BDC’s recommendations noted, is provided in Appendix B of the Phase 3 report. The BDC further recommended that the following six knowledge areas should be excluded as stand-alone topics and coverage of these areas should be subsumed under other knowledge areas and skills:

  • Statutory Interpretation Principles → subsumed under Skills and Constitutional Law
  • Uniform Commercial Code → subsumed under Business Organizations or Contract Law
  • Remedies → subsumed under all knowledge areas
  • Civil Rights → subsumed under Constitutional Law
  • Landlord-Tenant Law → subsumed under Real Property and/or Contract Law
  • Debtor-Creditor Law → subsumed under Business Organizations and/or Contract Law

For each of the 11 knowledge areas, Table 8 below shows the BDC recommendations for weighting (average of BDC panelists’ judgments along with a range of ±3%) and measurement approach (reflecting the consensus of at least two-thirds of the panelists). With respect to the measurement approach for each knowledge area, the BDC was asked to recommend either testing knowledge of legal doctrine in a content-dependent manner, where legal resources are not provided as part of the test materials, or applying skills in the knowledge area in a content-independent manner, where appropriate legal resources are provided. Though Table 8 reflects the BDC’s ultimate suggestions in this regard, the BDC’s deliberations about whether and how knowledge and skills could or should be assessed in a content-dependent or content-independent manner were formative in introducing the concept of integrated assessment, discussed later in this report, which recognizes the interdependency of the assessment of knowledge and skills.

Table 8. Knowledge Area Recommendations by BDC

  • Business Organizations. Weighting: 7% (4–10). Measurement approach: Knowledge (content-dependent)
  • Professional Responsibility, Ethics. Weighting: 7% (4–10). Measurement approach: Knowledge (content-dependent)
  • Legal Research Sources & Methods. Weighting: 8% (5–11). Measurement approach: Applying skills (content-independent)
  • Constitutional Law. Weighting: 9% (6–12). Measurement approach: Knowledge (content-dependent)
  • Dispute Resolution*. Weighting: 9% (6–12). Measurement approach: Applying skills (content-independent)
  • Real Property. Weighting: 9% (6–12). Measurement approach: Knowledge (content-dependent)
  • Torts. Weighting: 9% (6–12). Measurement approach: Knowledge (content-dependent)
  • Evidence. Weighting: 10% (7–13). Measurement approach: Knowledge (content-dependent)
  • Criminal Law & Procedure. Weighting: 10% (7–13). Measurement approach: Knowledge (content-dependent)
  • Contract Law. Weighting: 10% (7–13). Measurement approach: Knowledge (content-dependent)
  • Civil Procedure. Weighting: 11% (8–14). Measurement approach: Knowledge (content-dependent)

* This knowledge area represents the combination of Alternative Dispute Resolution and Trial Advocacy and Practice.

Test Design Committee Meeting

The TTF invited each jurisdiction to nominate a bar admission representative (bar administrator, bar examiner, or justice) to serve on the TDC. The TTF selected from the nominees to achieve a mix of roles, jurisdiction sizes, and other demographic variables. The TTF also invited individual deans and faculty members from a variety of law schools to serve. The panel of 28 was composed of 11 educators, 9 bar examiners, 6 bar administrators, and 2 justices; 10 of the panelists were female and 7 were people of color. Each panelist had experience educating law students, administering the bar exam, serving as a bar examiner, or, in the case of the justices, serving as liaison between a state’s highest court and the state’s board of bar examiners.

The TDC completed its work through two meetings conducted via videoconference for five hours per day over three days (Meeting 1 on July 16 and 17, 2020, and Meeting 2 on August 4, 2020), with an offline review of written materials before Meeting 1 and between meetings. The August 4 meeting was added after the format was changed from in-person to videoconference, and, unfortunately, seven of the TDC panelists were not available on that date. Therefore, 28 panelists were present for Meeting 1 and 21 were present for Meeting 2.[5] Those who could not attend Meeting 2 were given the opportunity to provide written input before and after the meeting.

The TTF’s external research consultant facilitated the meetings. Staff from NCBE and the chair of the TTF attended the meetings to observe. The first meeting began with an orientation that included an overview of the TTF study, the purpose and function of a test design, and a review of the meeting materials with an explanation of how each document related to the TDC’s work.

After the orientation, the panel was split into two groups and a facilitator guided each group through a discussion of specific test design topics and issues. The TDC did not discuss the issue of test delivery mode because the TTF had already decided that the next generation of the bar exam would be a computer-based test, administered either at computer testing centers managed by a suitable vendor or on candidates’ laptops at jurisdiction-managed test sites.

The TDC panelists recognized the interconnectedness of the design topics and spent the meeting time sharing their opinions and discussing advantages and challenges associated with various options. The TDC was largely split on whether the design should use compensatory scoring (with scores on each component combined to produce one overall pass/fail decision for licensure) or conjunctive scoring (with scores on each component treated as separate pass/fail decisions and a requirement that candidates pass each component to be licensed). Under a compensatory design, candidates may compensate for a weak performance on one component with a strong performance on another. Under a conjunctive design, candidates must demonstrate the required level of proficiency on each component. The other design feature on which there was a diversity of opinions was whether to use a single-event administration model (one exam administration taken after completion of law school) or a multi-event model (exam administered as separate components with the option to take the first component during law school).[6] Therefore, three draft design models were created after Meeting 1 using those decision points as the key differentiators.

Each of the draft design models assumed that the bar exam would include two components—Application of Core Doctrinal Law and Application of Lawyering Skills—and would be administered using a range of assessment methods/formats. Another common feature of each draft design model was a test of knowledge of the rules of Professional Responsibility that would be administered separately from the bar exam and could be taken during law school or after graduation.

Though the three models presumed separate assessment of knowledge and skills, TDC panelists discussed the fact that assessments of knowledge and skills are inherently interconnected. That is, lawyering skills such as issue spotting and analysis cannot be separated from demonstrating knowledge of foundational legal doctrine. Conversely, some knowledge of legal doctrine is generally required to demonstrate foundational lawyering skills. As was the case with the BDC’s rich discussions, the TDC’s discussions around the notion of interconnected assessment of foundational knowledge and skills were formative in leading to consideration of integrated assessment.

Results

The prevailing views of the TDC members are summarized below. TDC members also commented on the content identified by the BDC for inclusion on the bar examination.

Structure: The TDC generally supported the structure of two components (Application of Core Doctrinal Law and Application of Lawyering Skills) for the bar exam and a separate exam on knowledge of Professional Responsibility. Pass/fail decisions for the bar exam would be based on a compensatory score for the exam but with minimum score requirements for each component. The compensatory score would be a weighted combination of the scores on the two components, and the TDC suggested either a 50/50 weighting (equal weight between the two components) or a 60/40 weighting with the higher weight allocated to the Application of Lawyering Skills component. These suggestions, however, were inherently limited by the fact that all the test design models presented to the TDC presumed independent assessment of foundational knowledge and skills. The TDC continued to express reservations about whether foundational knowledge and skills can be assessed independently of one another.
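A minimal sketch of such a decision rule follows. The 60/40 weighting favoring the skills component is one of the two options the TDC suggested; the component minimums and overall passing score are invented placeholders, since each jurisdiction sets its own passing standard.

```python
def bar_decision(doctrine, skills, w_doctrine=0.4, w_skills=0.6,
                 component_min=50.0, passing_score=70.0):
    """Compensatory scoring with per-component minimums.

    A strong performance on one component can offset a weaker one,
    but each component must still clear its own minimum score.
    """
    if doctrine < component_min or skills < component_min:
        return False  # conjunctive element: a floor on each component
    overall = w_doctrine * doctrine + w_skills * skills
    return overall >= passing_score

print(bar_decision(62, 81))  # True: 0.4*62 + 0.6*81 = 73.4, both components above the floor
```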

Application of Core Doctrinal Law component: For this component, the TDC did not unanimously agree with the appropriateness of assessing some of the knowledge areas recommended for inclusion by the BDC; both the BDC and the TDC agreed, however, that the depth and breadth of coverage in the knowledge areas tested should be limited to the core legal principles that NLLs need to know without “looking it up” (i.e., they should be able to issue spot and know the basic rules but should not be expected to know “the exceptions to the exceptions”).

Application of Lawyering Skills component: The TDC showed unanimous support for measuring skills such as Legal Writing, Legal Research, Issue Spotting and Analysis, and Investigation and Evaluation. For Professional Responsibility and Ethics, the TDC acknowledged the importance of the subject matter but did not want to see it tested as a core knowledge area on the bar exam because that would duplicate content tested on the Multistate Professional Responsibility Exam (MPRE). The TDC suggested that Professional Responsibility could instead serve as the context for questions in the Application of Lawyering Skills component to assess skills such as Issue Spotting and Analysis, with the Model Rules of Professional Conduct provided as a resource to use during testing. Some members of the TDC expressed strong concerns that the skills of Client Counseling and Advising, Client Relationship and Management, and Negotiation and Dispute Resolution could not be measured objectively and without bias; the importance of those concerns was noted. In terms of methods for assessing skills, the TDC generally supported the idea of case studies (e.g., written fact scenarios or video simulations) using multiple item types (e.g., short answer, selected response, extended response) with a library of legal resources provided.

Administration: A slight majority of the TDC panelists supported allowing candidates the option to take one of the two components of the bar exam during law school, but a few panelists were adamantly opposed, voicing concerns about the impact on law school curricula and law students. Additionally, those who supported the option did not agree on which component would be more appropriate for testing during law school.

Overall, the TDC members’ views reflected the interconnectedness and complexity of test design issues. Where the panelists were not of one mind, their discussions provided valuable insight into the benefits and challenges of the various approaches.

Testing Task Force Recommendations

The TTF formulated a set of recommendations for the content and design of the new exam after taking into consideration the views of stakeholders, the data collected during the study, the work of the BDC and TDC, the opinions of assessment experts and psychometricians, and relevant practical and logistical administrative issues. The recommendations are consistent with the purpose of the exam to protect the public and the intended use of exam scores to determine whether candidates possess the minimum knowledge and skills to perform activities typically required of an entry-level lawyer. The recommendations are also consistent with the fact that a newly licensed lawyer secures a general license to practice law, suggesting that the bar exam should assess foundational knowledge and skills that are common to numerous practice areas. The recommendations are discussed in detail below.

Structure and Format

The TTF recommended the use of an integrated exam structure to assess both legal knowledge and skills holistically in a single, practice-related examination. Although each of the draft design models presented to the TDC was based on the assumption that the bar exam would include two separate components, with one component testing legal knowledge and the other testing legal skills, the discussion of those models often highlighted the interconnectedness of knowledge and skills. Thus, while neither the BDC nor the TDC directly suggested an integrated exam, the combined discussions of the BDC and TDC sparked the idea. The concept of an integrated assessment model was further supported by NCBE’s Technical Advisory Panel (TAP), a group of external psychometric experts. Members of the TAP were given the opportunity to review and comment on the BDC and TDC recommendations and recognized a recurring theme pointing to the concept of integrated assessment design. Members of the TAP observed that integrated assessment is not a novel concept and is already employed in academia and in high-stakes licensure assessments used in other professions.

An integrated exam permits use of scenarios that are representative of the real-world types of legal problems that NLLs encounter in practice. Realistic scenarios are used in the current exam, but in discrete components composed of stand-alone items, whereas an integrated exam includes item sets and a combination of item formats (e.g., selected-response, short-answer, and extended constructed-response items) within the same component. An item set is a collection of test questions based on a single scenario or stimulus such that the questions pertaining to that scenario are developed and presented as a unit. Item sets can be assembled so that all items within a set are either of the same format or of different formats. Stand-alone questions will still be used; the exam will not consist of item sets exclusively. NCBE aims to have prototypes of integrated exam questions available later this year to share with stakeholders.
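Conceptually, an item set pairs one stimulus with several questions of possibly mixed formats. The following sketch uses hypothetical names, fields, and question text solely to illustrate that structure; NCBE has not published a data model for the new exam.

    # Minimal illustrative structure for an item set; all names and
    # fields are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Item:
        prompt: str
        fmt: str  # e.g., "selected-response", "short-answer", "extended-response"

    @dataclass
    class ItemSet:
        """Questions developed and presented as a unit around one shared
        scenario or stimulus; formats may be mixed within a set."""
        scenario: str
        items: List[Item] = field(default_factory=list)

    contract_set = ItemSet(
        scenario="A client is sued for breach of a contract for the sale of goods ...",
        items=[
            Item("Which additional fact, if true, would best support a defense?",
                 "selected-response"),
            Item("Identify the governing legal standard.", "short-answer"),
            Item("Draft an analysis of the client's potential liability.",
                 "extended-response"),
        ],
    )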

Scoring

A compensatory scoring model will be used to produce a single combined score for making admission decisions, which is consistent with the use of an integrated exam design and the interconnected nature of the competencies being measured. A combined score allows a candidate’s areas of strength to compensate for areas of weakness and reflects the candidate’s overall proficiency in the competencies being measured.

Multiple-choice items and other item formats that can be machine-scored will be scored by NCBE, while the constructed-response questions will continue to be graded by bar examiners.

Content

To reflect the nature of the content of the new exam, the TTF adopted the terms Foundational Concepts & Principles (FC&P) and Foundational Skills for the competencies to be assessed.

Foundational Concepts & Principles
  • Civil Procedure (including constitutional protections and proceedings before administrative agencies)
  • Contract Law (including Art. 2 of the UCC)
  • Evidence
  • Torts
  • Business Associations (including Agency)
  • Constitutional Law (excluding principles covered under Civil Procedure and Criminal Law)
  • Criminal Law and Constitutional Protections Impacting Criminal Proceedings (excluding coverage of criminal procedure beyond constitutional protections)
  • Real Property
Foundational Skills
  • Legal Research
  • Legal Writing
  • Issue Spotting and Analysis
  • Investigation and Evaluation
  • Client Counseling and Advising
  • Negotiation and Dispute Resolution
  • Client Relationship and Management

The scope of what will be assessed within the eight FC&P and the seven Foundational Skills will be carefully aligned with minimum competence for entry-level practice and set out in the test content specifications that will be developed as one of the first steps in implementing the recommendations. Test content specifications guide the development of test questions and provide notice to candidates of what may be tested and how. While not all features of the new exam’s test content specifications have yet been determined, we plan to include detailed descriptions of the topics and subtopics to be covered within each of the FC&P and Foundational Skills; the weight or emphasis (e.g., percent of test items, amount of testing time) allocated to each FC&P and Foundational Skill; the approximate emphasis to be given to the various item formats; and, when appropriate, the sources of law upon which FC&P content will be based. This list of features is illustrative; additional features may be included. The development of test content specifications will be a collaborative process involving external subject matter experts such as bar examiners, legal educators, and practitioners, including newly licensed lawyers. We expect to publish final test content specifications by the end of 2021.
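As a purely hypothetical illustration of the kind of information a content specification entry might carry (the topics, weights, and format splits below are invented; the actual specifications will be developed collaboratively as described above):

    # Invented example of one content specification entry; actual topics,
    # weights, and formats will be set during implementation.
    content_spec = {
        "Contract Law": {
            "subtopics": ["formation", "performance and breach", "remedies"],
            "weight_pct": 12,  # hypothetical share of total test items
            "item_formats": {"selected-response": 0.6, "constructed-response": 0.4},
            "sources_of_law": ["UCC Article 2", "common law of contracts"],
        },
    }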

The list of Foundational Skills includes some skills that might be thought of as performance skills, such as negotiation. To ensure fairness, those skills that can be objectively measured will be assessed using uniform text- or video-based scenarios that require candidates to construct a written response or select the correct response. We will also determine appropriate assessment methods so that exam materials can be provided in accessible formats, giving candidates with disabilities an equal opportunity to demonstrate their proficiency.

The Foundational Skills may be assessed in the context of the FC&P, in which case candidates will be expected to know the applicable legal concepts and principles, or Foundational Skills may be assessed in other legal contexts, in which case a closed universe of appropriate legal resources (e.g., statutes, cases, rules, regulations) will be provided. The objective is to reduce the amount of legal knowledge candidates must learn for the exam, while emphasizing skills such as interpreting and applying law. The new exam will not be “open book” in the sense of candidates being permitted to bring in or otherwise access materials not made available in the exam materials provided to all candidates. However, the new exam’s emphasis on the application of provided legal resources will yield the practical effect of an open-book exam while maintaining the standardization central to applicant fairness.

The Multistate Professional Responsibility Examination (MPRE) will remain a stand-alone exam administered separately from the bar exam. Stakeholders recognize the importance of professional responsibility and value its separate assessment as central to ensuring public protection and trust in the integrity of the legal profession. Because of its importance, professional responsibility may serve as the context for assessing Foundational Skills (e.g., legal analysis, client counseling and advising) on the new bar exam, but the applicable rules or other legal resources will be provided to candidates.

Timing of Test Administration

The new exam will be given as a single event at or near the point of licensure; jurisdictions may still permit students to test in their final semester of law school, as some currently do. This timing is consistent with the purpose of the bar exam in that it places measurement of minimum competence as close in time to the award of a license as possible. It is also consistent with the use of an integrated exam that assesses knowledge and skills holistically. Additionally, single-event testing allows more options for equating and scaling, which is necessary for fairness and consistency in scoring.
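Equating adjusts scores so that results earned on different forms of an exam are comparable despite small differences in form difficulty. As one standard textbook illustration (the mean–sigma linear method, shown only to convey what equating does, not as a statement of the methods NCBE will adopt):

    # Mean-sigma linear equating: maps a score from Form X onto Form Y's
    # scale so the two forms have comparable means and standard deviations.
    # Shown for illustration only; not NCBE's chosen procedure.
    from statistics import mean, stdev

    def linear_equate(x, form_x_scores, form_y_scores):
        mx, sx = mean(form_x_scores), stdev(form_x_scores)
        my, sy = mean(form_y_scores), stdev(form_y_scores)
        return my + (sy / sx) * (x - mx)

    # Example: Form X scores average 70 (sd 10); Form Y scores average 75
    # (sd 10). A 75 on Form X equates to an 80 on Form Y's scale.
    print(linear_equate(75, [60, 70, 80], [65, 75, 85]))  # 80.0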

A single-event approach avoids concerns expressed by some stakeholders about a multi-event approach, where components of the exam would be administered at separate times. Those concerns included potential negative consequences such as interfering with internships and summer employment opportunities, impacting law school curricula, adding the stress of taking a high-stakes exam during law school, creating multiple “hurdles” for admission, and increasing costs for candidates to prepare for and travel to multiple administrations of the exam.

Among the reasons some stakeholders favored multi-event testing was to permit testing of legal doctrine closer in time to when students learn the content in law school. The TTF concluded that the increased emphasis on assessment of skills and the decreased depth and breadth of doctrinal coverage make this reason less compelling. In addition, some of those who favored multi-event testing wanted to use the bar exam as a diagnostic tool, which is not the purpose for which it is designed. Further, some perceived advantages of multi-event testing, such as letting students decide when to take a component and allowing them to retake individual components they fail, would also bring disadvantages. Among these would be the challenge for law schools of keeping track of where their students are in the bar passage process and the need to tailor bar preparation support to 2Ls, 3Ls, and graduates, all of whom might be at different points in the admission process.

Readers are encouraged to review the comments of TDC members that are provided in Appendix F and Appendix H of the Phase 3 report for a fuller appreciation of the range and complexity of the issues considered around this topic. Some of the most compelling comments were those related to fairness to and equity among candidates. For example, one TDC member commented that multi-event testing could lead to a “two-track path to licensure that splits candidates along lines that appear to be racist or classist.” Others expressed concerns that some first-generation law students and those who are struggling academically would feel pressured to take the first component as early as possible, even though they might not be ready to do so before completing law school. Such students might be discouraged from continuing law school if they are not successful, which could have the unintended consequence of limiting the number of first-generation lawyers entering the profession.

Mode and Frequency of Test Administration

The new bar exam will be delivered by computer, either at computer testing centers managed by appropriate vendors or on examinees’ laptops at jurisdiction-managed testing sites. Although NCBE offered remote administration of the current bar exam as an emergency option during the COVID-19 pandemic, uniform testing conditions and accessibility for all candidates can best be ensured by in-person administration.

The exam will continue to be offered two times each year.

Implementation

It is estimated that it will be four to five years before the new exam is administered for the first time. A website dedicated to implementation of the new exam will be used to help keep stakeholders informed about the process. The major steps of implementation will include

  • developing test content specifications identifying scope of coverage;
  • drafting new types of questions for integrated testing of knowledge and skills;
  • ensuring accessibility for candidates with disabilities;
  • field-testing new item formats and new exam content;
  • conducting analyses and review to ensure fairness for diverse populations of candidates;
  • evaluating options for computer delivery of the exam;
  • reviewing procedures and scoring guidelines for grading constructed-response items (e.g., essays);
  • establishing scoring processes and psychometric methods for equating/scaling scores;
  • developing test administration policies and procedures;
  • assisting and supporting jurisdictions in activities such as establishing passing score requirements and amending rules to align with changes to the exam; and
  • providing study materials and sample test questions to help candidates prepare.

Implementation will be conducted in a systematic, transparent, and collaborative manner, informed by input from and participation by stakeholders, and guided by best practices and the professional standards for high-stakes testing. We will ensure that information is provided to jurisdictions, candidates, and law schools in a timely manner to create a smooth transition to the new exam. 

Footnotes

1 Shultz, M. M., & Zedeck, S. (2011). Predicting lawyer effectiveness: Broadening the basis for law school admissions decisions. Law & Social Inquiry: Journal of the American Bar Foundation, 36(3), 620–661.

2 Although the transformation allows for more direct comparison of results, it may not account for potential ceiling effects; because the NCBE scale had fewer scale points, ratings at the upper end of the NCBE scale may have been somewhat suppressed relative to the CAPA means. Differences in means across the surveys may thus be at least partially attributable to ceiling effects or scale suppression.

3 To account for a margin of error of 3%, the list reviewed by the BDC included tasks rated as being performed Frequently or Moderately by 47% or more of the survey respondents.

4 To account for a margin of error of 3%, the list reviewed by the BDC included knowledge areas rated as Important by 47% or more.

5 The panelists present for Meeting 2 consisted of 10 educators, 7 bar examiners, and 4 bar administrators.

6 Under either administration model, jurisdictions could permit candidates to take components that are to be completed “after law school” prior to graduation, as is the case with the current bar exam.