Phase 2 Report of the Testing Task Force

Executive Summary

Introduction

In 2018, the National Conference of Bar Examiners (NCBE) created a Testing Task Force (TTF) to undertake a comprehensive three-year study to ensure that the bar examination continues to test the knowledge, skills, and abilities needed for competent entry-level legal practice in a changing profession. The TTF’s study consists of three phases. Phase 1 was a series of listening sessions with stakeholders to solicit their impressions about the current bar examination and ideas for the next generation of the bar examination. Phase 2 consisted of a national practice analysis to provide empirical data on the job activities of newly licensed lawyers (NLLs), which are defined in the practice analysis survey as lawyers who have been licensed for three years or less. Phase 3, which will be completed in 2020, will translate the results from Phase 1 and Phase 2 into a recommended test blueprint and design for the next generation of the bar examination. This Executive Summary provides a high-level synthesis of the 2019 Practice Analysis Report.

Survey Development and Administration

Survey Development

The practice analysis survey was developed between October 2018 and July 2019. First, an environmental scan was completed to research information relevant to the legal profession that could support the development of an organized taxonomy of the work responsibilities of NLLs. The scan identified the tasks typically performed by NLLs, as well as the knowledge, skills, abilities, and other characteristics required to effectively perform those tasks. To paint a comprehensive picture of legal practice, the survey also included a technology section that listed work-related software applications that lawyers use to perform their work. The resources used for the environmental scan included, among other materials, the previous practice analysis conducted by NCBE in 2011–2012, various studies by other individuals/entities identifying the competencies needed by NLLs, the US Department of Labor’s Occupational Information Network (O*NET), and articles and reports regarding recent changes and anticipated future changes in the delivery of legal services and the practice of law.

After draft lists of tasks; knowledge areas; skills, abilities, and other characteristics (SAOs); and technology items were compiled through the environmental scan, three focus groups were conducted with lawyers from a variety of practice areas, settings, and backgrounds to refine the lists. Next, the TTF revised the draft lists resulting from the work of the focus groups to improve consistency in wording and eliminate redundancy, and the lists were subsequently organized for use in the survey. The TTF gave attention to the organizational framework of the task list. Given the purpose of the practice analysis—to identify fundamental work activities across the practice areas and settings in which NLLs work to determine appropriate content for a general licensure exam—the TTF organized the tasks according to the following four broad categories: (1) General tasks, (2) Trial/Dispute Resolution tasks, (3) Transactional/Corporate/Contracts tasks, and (4) Regulatory/Compliance tasks. The lists of knowledge areas, SAOs, and technology items were naturally shorter than the list of tasks and did not require organizational frameworks. The survey also included a demographics section to obtain a description of respondents’ backgrounds and work environments for use in analyzing the results.

To evaluate the content and structure of the draft survey, pilot testing was completed by 82 individuals to gather input on the clarity of the survey instructions, the completeness of the lists, the usability of the rating scales, and the amount of time required to complete the survey. The survey was revised based on the results of the pilot test. The following table summarizes the content and respective rating scales of each section of the survey.

Survey Section: Tasks (179 items)
  Sample Survey Items:
  • Establish and maintain client trust account.
  • Determine proper or best forum to initiate legal proceeding.
  • Determine lawfulness or enforceability of contract or legal document.
  • Secure required governmental or regulatory approvals or authorizations.
  Rating Scales:
  • 5-point frequency scale ranging from 0 (not applicable) to 4 (weekly)
  • 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Survey Section: Knowledge Areas (77 items)
  Sample Survey Items:
  • Bankruptcy Law
  • Civil Procedure
  • Criminal Law
  • Rules of Evidence
  Rating Scale:
  • 4-point importance scale ranging from 0 (not applicable) to 3 (essential)

Survey Section: SAOs – Skills, Abilities, and Other Characteristics (36 items)
  Sample Survey Items:
  • Critical/Analytical Thinking – Able to use analytical skills, logic, and reasoning to solve problems and to formulate advice.
  • Conscientiousness – Approaches work carefully and thoughtfully, driven by what is right and principled.
  • Interviewing/Questioning – Able to obtain needed information from others to pursue an issue or matter.
  • Leadership – Able to delegate, inspire, and make thoughtful decisions or plans to further goals and objectives.
  Rating Scale:
  • 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Survey Section: Technology (24 items)
  Sample Survey Items:
  • Research Software or Platforms – Software, programs, or databases that permit the user to conduct electronic legal research.
  • Data Analytics Software – Software used to find anomalies, patterns, and correlations within data.
  • Video-Conferencing Software – Software that permits audio or video meetings with participants in different locations.
  Rating Scale:
  • 4-point proficiency scale ranging from 0 (not applicable) to 3 (expert)

Survey Section: Demographics (10 items)
  Sample Survey Items:
  • Which of the following best describes your practice setting?
  • How many lawyers are in your organization?
  • With which of the following races do you identify?
  • In which of the following areas of practice do you spend at least 5% of your time?
  Rating Scale:
  • Response options were tailored to each question

Survey Design and Administration

The survey was lengthy by necessity to adequately cover the work of NLLs. To prevent survey fatigue and encourage a high rate of response, the TTF determined that matrix sampling should be used to assign survey respondents to different sections of the survey. This method of survey assembly and assignment resulted in four versions of the survey:

• Version A: 49 General tasks, 24 technology items, and 10 demographic questions
• Version B: 74 Trial/Dispute Resolution tasks and 10 demographic questions
• Version C: 41 Transactional/Corporate/Contracts tasks, 36 SAOs, and 10 demographic questions
• Version D: 15 Regulatory/Compliance tasks, 77 knowledge areas, and 10 demographic questions

Respondents were randomly assigned to one of the four versions of the survey. Random assignment ensured that each version of the survey was seen by comparable numbers of respondents and reduced the selection bias that can occur when survey recipients are provided with the option to choose the category of questions to which they respond. Additionally, the survey questions did not require a response. Therefore, the number of respondents to any item could vary.
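The assignment logic can be sketched in a few lines. This is an illustrative sketch only: the respondent IDs, the fixed seed, and the use of Python's random module are assumptions, not details of the actual survey platform.

```python
import random

# Hypothetical sketch of matrix sampling: each respondent is randomly
# assigned to one of the four survey versions, so every version is seen
# by a comparable number of respondents.
VERSIONS = ["A", "B", "C", "D"]

def assign_versions(respondent_ids, seed=0):
    rng = random.Random(seed)  # seeded only to make this sketch reproducible
    return {rid: rng.choice(VERSIONS) for rid in respondent_ids}

assignments = assign_versions(range(10_000))
counts = {v: sum(1 for a in assignments.values() if a == v) for v in VERSIONS}
# Each version receives roughly a quarter of the 10,000 simulated respondents.
```

Because each respondent is assigned rather than self-selected, no version attracts only respondents with a particular interest in its content.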

The survey was open from August 1, 2019, through October 2, 2019. Given that there is no centralized registry of all practicing lawyers in the United States, a random sampling approach to survey distribution was not possible. The TTF instead took a census approach, in which any eligible respondent could answer the survey. NCBE obtained cooperation from 54 jurisdictions to assist with promoting the survey. NCBE also promoted the survey via multiple email campaigns, through frequent posts on the TTF’s and NCBE’s social media channels, and in NCBE’s quarterly publication, the Bar Examiner.

Both NLLs and non-newly licensed lawyers (non-NLLs) who have or had direct experience working with or supervising NLLs were invited to complete the survey to ensure a breadth of perspectives on the work performed by NLLs. Respondents were asked at the beginning of the survey how many years they had been licensed, which was used to determine whether they fell into the category of NLL or non-NLL. Non-NLLs were disqualified from taking the survey if they indicated that they had never had direct experience working with or supervising NLLs.

The survey required slightly different sets of instructions for NLLs and non-NLLs. NLLs were asked to rate survey items in terms of their own personal practice (e.g., “How frequently do YOU perform this task in YOUR practice areas and setting?”). Non-NLLs were asked to rate survey items based on the practice of NLLs with whom they have or had direct experience (e.g., “How frequently do newly licensed lawyers with whom you have or had direct experience perform this task in THEIR practice areas and setting, regardless of what other NLLs with whom you do not have direct experience may do?”).

Results

Demographics and Practice Areas

Of the 30,970 people who accessed the survey, 11,442 did not provide any responses after the initial screening question, and an additional 4,682 were disqualified from the survey due to having no experience working with or supervising NLLs. Thus, the total effective sample size was 14,846 respondents. The respondents consisted of 3,153 NLLs (21%) and 11,693 non-NLLs (79%).

Respondents represented a total of 56 jurisdictions and included a broad range of entry-level and experienced lawyers working in a variety of practice settings. The largest number of respondents had their primary practice in New York (17.5%) and California (14.8%), followed by Pennsylvania (8.9%), Minnesota (5.7%), Ohio (5.6%), and Texas (5.3%). The fewest respondents were from New Hampshire, Rhode Island, South Dakota, and the Pacific and Caribbean islands. Survey respondent data were compared to data for the US legal profession published by the American Bar Association in the ABA Profile of the Legal Profession 2019 (ABA Profile). For most jurisdictions, the percentage of survey respondents in the jurisdiction and the number of lawyers in that jurisdiction as a percentage of the US lawyer population were reasonably consistent, with the following exceptions: Minnesota, Ohio, and Pennsylvania were slightly overrepresented on the survey, while Florida and Illinois were slightly underrepresented.
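A comparison of this kind can be sketched as a simple percentage-point check. In the sketch below, the survey shares echo figures reported above, but the population shares and the flagging threshold are invented placeholders, not figures from the ABA Profile.

```python
# Hypothetical sketch: flag jurisdictions whose share of survey respondents
# differs from their share of the US lawyer population by more than a
# chosen number of percentage points. Population figures are placeholders.
def flag_representation(survey_pct, population_pct, threshold=2.0):
    flags = {}
    for jurisdiction, pct in survey_pct.items():
        diff = pct - population_pct[jurisdiction]
        if abs(diff) > threshold:
            flags[jurisdiction] = "over" if diff > 0 else "under"
    return flags

survey_pct     = {"NY": 17.5, "CA": 14.8, "PA": 8.9, "MN": 5.7, "FL": 2.0}
population_pct = {"NY": 17.0, "CA": 14.5, "PA": 3.7, "MN": 1.9, "FL": 5.9}  # invented
flags = flag_representation(survey_pct, population_pct)
```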

The largest group of survey respondents was White or Caucasian (79.3%), followed by Asian or Asian American (4.8%) and Black or African American (4.4%); 5.3% of respondents were of Hispanic descent. There was a nearly even split between male (52.3%) and female (47.7%) survey respondents. The percentages of respondents by race and ethnicity are in line with the overall US population of lawyers based on the ABA Profile, although the survey had a higher percentage of female respondents than the ABA Profile (36.5%).

It can be seen from these demographic comparisons that the practice analysis survey respondents generally were representative of the population of US lawyers based on the ABA Profile. This, in combination with the large number of respondents, suggests that survey results should generalize from the sample of respondents to the eligible population of NLLs and non-NLLs in the United States.

Respondents were presented with 35 practice areas and asked to indicate the areas in which they spend at least 5% of their time. They were then asked to estimate, as a percentage, the amount of time they spend working in each selected area. About 18% of all respondents selected just one practice area, while approximately two-thirds selected between two and seven practice areas, indicating that most lawyers work in multiple practice areas. The most frequently selected practice areas were Contracts, Business Law, Commercial Law, Administrative Law, Real Estate, Criminal Law, Appellate, Employment Law and Labor Relations, Torts, Family Law, and Wills, Estates, and Trusts. A few of the least selected practice areas included Workers’ Compensation, International Law, Environmental Law, Education Law, Energy Law, and Indian Law. NLLs were generally more likely than non-NLLs to select Criminal Law, Family Law, and Immigration Law. NLLs were generally less likely than non-NLLs to select Appellate, Real Estate, Business Law, Commercial Law, Employment Law and Labor Relations, Insurance Law, Health Care Law, and Data Privacy and Cyberspace.

The 2019 Practice Analysis Report provides results of a cluster analysis, in which groups of respondents with similar practice profiles were identified and the numerous practice profiles were condensed into 25 practice clusters. The report analyzes task and knowledge area ratings within each practice cluster to identify the tasks and knowledge areas that span multiple practice clusters. A desirable feature of cluster analysis is that each survey respondent is assigned to only one cluster and is therefore counted only once, rather than multiple times, in the data analyses.
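The report does not specify the clustering algorithm used, so the sketch below assumes a simple k-means over practice-time profiles with deterministic farthest-point initialization; the profiles are invented for illustration only.

```python
# Toy k-means sketch: each respondent's profile is the percentage of time
# spent in each practice area, and each respondent is assigned to exactly
# one cluster (so each is counted only once in later analyses).
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(profiles, k, iters=10):
    # Deterministic farthest-point initialization keeps the sketch reproducible.
    centers = [profiles[0]]
    while len(centers) < k:
        centers.append(max(profiles, key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(profiles)
    for _ in range(iters):
        # Assignment step: each profile joins its nearest cluster center.
        labels = [min(range(k), key=lambda i: dist2(p, centers[i])) for p in profiles]
        # Update step: each center becomes the mean of its members.
        for i in range(k):
            members = [p for p, l in zip(profiles, labels) if l == i]
            if members:
                centers[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Invented profiles over three practice areas (percent of time):
profiles = [[90, 10, 0], [85, 15, 0], [5, 90, 5], [0, 95, 5], [10, 10, 80], [5, 5, 90]]
labels = kmeans(profiles, k=3)
```

On these toy data, the six respondents fall into three clusters of two, mirroring how respondents with similar practice profiles are grouped together.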

Tasks

The Tasks section of the survey asked respondents to rate tasks on the basis of frequency of performance (0=not applicable; 1=yearly; 2=quarterly; 3=monthly; 4=weekly) and criticality for practice (0=not applicable; 1=low; 2=moderate; 3=high). The mean ratings of task frequency and criticality by NLLs correlated highly with the ratings by non-NLLs. Therefore, the groups were combined for the analyses presented in this report.
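The decision to pool the two groups rests on the correlation between their mean ratings. A minimal sketch of that check follows, using invented mean ratings for five tasks.

```python
# Pearson correlation between NLL and non-NLL mean ratings; a value near
# 1.0 supports combining the two groups. The ratings are invented examples.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented mean frequency ratings (0-4 scale) for five tasks:
nll_means     = [3.8, 3.5, 2.1, 1.0, 0.4]
non_nll_means = [3.7, 3.6, 2.3, 0.9, 0.5]
r = pearson(nll_means, non_nll_means)
# r is close to 1 here, which would justify pooling the groups.
```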

The most commonly performed tasks were performed by more than 90% of NLLs, had mean frequency ratings approaching weekly, and had mean criticality ratings approaching the highest level (essential). Of note is that three of these tasks have “research” as the primary verb. Themes other than legal research that were common to the highly rated tasks include ethics, written and spoken communications, legal analysis/evaluation, and diligence. These tasks were the following: Identify issues in client matter, including legal, factual, or evidentiary issues; Research case law; Interpret laws, rulings, and regulations for client; Research statutory and constitutional authority; and Evaluate strengths and weaknesses of client matter.

Those tasks least likely to be performed tended to involve more specialized and/or advanced areas of practice and included activities such as the following: Establish and maintain client trust account; Participate in initiative or proposition process to change statute or constitution; Represent client in post-conviction relief or habeas corpus proceedings; Represent client in eminent domain or condemnation proceeding; and Draft constitutional amendments.

It is possible that the tasks lawyers perform depend on characteristics such as practice setting, geographic region, and so on. Thus, criticality and frequency ratings were analyzed by subgroups of respondents based on the following demographic factors: recency of experience with NLLs, practice setting, number of lawyers in the organization, gender, race/ethnicity, and geographic region. The large number of task statements, multiple rating scales, and variety of demographic factors produced thousands of comparisons. The results suggested some group differences in task ratings, the meaning and stability of which are not immediately apparent. A limitation of these analyses is that they concern only main effects for a single demographic variable at a time, and do not consider joint effects of multiple variables. Another limitation is that sample sizes for some subgroups were quite small. More complex follow-up analyses will be conducted, and the results will be taken into consideration by the TTF as it conducts Phase 3 of its study.

Results from the Tasks section of the survey can help inform test design in at least two ways. First, the frequency ratings can be useful for identifying core responsibilities. Second, the criticality ratings can be helpful for establishing content weights for the test blueprint. Relatively more weight should be allocated to those tasks that are performed by a large percentage of people, are performed more often, and are more critical. It is common in practice analyses to establish a threshold to determine which tasks should be addressed as part of a licensure exam. One common practice is to apply a 50% rule as a general guideline, such that a task must be performed by at least 50% of entry-level practitioners to be eligible for consideration in the test blueprint development process. Further review based on demographic subgroups (e.g., solo practitioners, gender), results based on practice clusters, data from other reports, and/or the personal experience of the panel of legal subject matter experts (SMEs) participating in the test blueprint development process will also be taken into consideration.
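The 50% guideline amounts to a simple filter over the percentage of practitioners performing each task. In the sketch below, the task names echo items discussed above, but the percentages and the resulting list are illustrative placeholders, not survey results.

```python
# Hypothetical sketch of the 50% rule: a task is eligible for the test
# blueprint only if at least half of entry-level practitioners perform it.
def eligible_tasks(performed_pct, threshold=50.0):
    return [task for task, pct in performed_pct.items() if pct >= threshold]

performed_pct = {  # illustrative percentages, not survey data
    "Research case law": 95.0,
    "Interpret laws, rulings, and regulations for client": 91.0,
    "Establish and maintain client trust account": 32.0,
    "Draft constitutional amendments": 4.0,
}
core = eligible_tasks(performed_pct)
```

Tasks passing the filter would then still undergo the further review by subgroup, practice cluster, and SME judgment described above.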

Knowledge Areas

The 77 knowledge areas were rated in terms of their importance to the practice of all NLLs. The overall means for all knowledge areas as rated by NLLs and non-NLLs were nearly identical, and the correlation between the two sets of ratings was very high; thus, data for the two groups were combined for the analyses in this report.

The knowledge areas with the highest mean importance ratings included the following: Rules of Professional Responsibility and Ethical Obligations, Civil Procedure, Contract Law, Rules of Evidence, and Legal Research Methodology. Among the lowest-rated knowledge areas were topics such as Transportation Law, Bioethics, Public Utility Law, Sports and Entertainment Law, and Admiralty Law.

Knowledge importance ratings were remarkably consistent across demographic groups; that is, mean ratings did not vary much based on the demographic backgrounds of respondents. However, mean knowledge area ratings did vary by practice area. For example, the mean importance rating for Business Organizations Law by those respondents who practice Criminal Law was lower than that from respondents who practice primarily Real Estate Law. Decisions about whether to include a knowledge area in the test blueprint should include evaluation of the extent to which it is relevant to multiple practice areas; therefore, results based on practice clusters will be taken into consideration.

The Knowledge Areas section of the survey has direct implications for the test blueprint because most licensure tests include an assessment of the subject matter knowledge required for competent practice.

Skills, Abilities, and Other Characteristics (SAOs)

The survey included 36 SAOs, which NLLs were instructed to rate in terms of criticality to their own practice; non-NLLs were instructed to rate the SAOs based on the practice of NLLs with whom they have or had direct experience. Again, the overall mean ratings from NLLs and non-NLLs were highly correlated and were therefore combined in the results presented in this report. Most SAOs tended to receive high ratings.

The SAOs with the highest mean criticality ratings included the following: Written/Reading Comprehension, Critical/Analytical Thinking, Written Expression, Identifying Issues, and Integrity/Honesty. The SAOs with the lowest mean criticality ratings were Strategic Planning, Leadership, Social Consciousness/Community Involvement, Networking and Business Development, and Instructing/Mentoring. A few notable differences between the ratings of NLLs and non-NLLs were observed (with notable being defined as a difference of 5% or more). NLLs assigned lower ratings than non-NLLs for eight of the SAOs: Integrity/Honesty, Advocacy, Researching the Law, Collaboration/Teamwork, Achievement/Goal Orientation, Interviewing/Questioning, Resource Management/Prioritization, and Creativity/Innovation. Meanwhile, NLLs provided higher ratings than non-NLLs for Leadership and Instructing/Mentoring. These differences may reflect the more experienced judgment of the non-NLLs.

Results for the SAOs section confirmed previous research on the cognitive and affective skills required of practicing lawyers. Specifically, the list of SAOs included nearly all of the 26 lawyering skills identified through the work of Shultz and Zedeck (2011). The fact that nearly all SAOs were judged to be either moderately or highly critical can be regarded as confirmation of that earlier work.

Given the uniformly high criticality ratings for SAOs, responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

Translating SAOs into meaningful examination content is expected to be a challenge for those who work on blueprint development. There is little doubt that these SAOs are important for competent entry-level legal practice. Indeed, due to their generic nature, most of the SAOs are critical to working in a variety of jobs or professions. However, some of these skills are difficult to teach (e.g., Integrity) and even more challenging to assess in a manner that produces reliable and valid test scores. SAOs that are relatively specific to the legal profession (e.g., Fact Gathering), as well as those that are not specific to the legal profession but can be applied and assessed narrowly within a legal context (e.g., Critical/Analytical Thinking), should be considered for inclusion in the blueprint development process.

Technology

The 24 technology items on the survey were rated by NLLs in terms of the level of proficiency required in their own practice, while non-NLLs based their ratings on the practice of NLLs with whom they have or had direct experience. The mean ratings for NLLs and non-NLLs were highly correlated. Therefore, the groups were combined for the analyses presented here.

The technology items with the highest mean proficiency ratings included the following: Word Processing Software, Research Software or Platforms, Electronic Communication Software, Desktop Publishing Software, and Document Storage Software, including Cloud Storage. The technology items with the lowest mean proficiency ratings included the following: Web Content Management Software, Data Analytics Software, Language Translation Software, Financial Planning Software, and Tax Preparation Software.

Responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

The findings identify the types of technology in which all NLLs might reasonably be expected to demonstrate proficiency and provide information about the types of testing platforms that examinees might be expected to use (with reasonable accommodations provided for examinees with disabilities). For example, the survey results provide support for the appropriateness of having examinees interact with electronic research software as part of completing a performance test.

Next Steps

Given the systematic process used to develop the practice analysis survey and to gather information from a representative sample of lawyers, stakeholders can have confidence that the practice analysis results provide meaningful guidance for the TTF’s comprehensive study of the bar examination. Next, the TTF will appoint an independent panel of subject matter experts (SMEs) to translate the results of the practice analysis survey into a test blueprint and test design. The test blueprint will identify the knowledge and skill domains to be assessed by the bar examination and the emphasis to be allocated to each domain. After that, the TTF will appoint a test design committee composed of external stakeholders, such as bar administrators, bar examiners, justices, and legal educators. The test design committee will focus on methods of assessment (e.g., multiple-choice or essay questions), the timing and sequencing of those assessments, procedures for scoring, and other important features of test delivery. Recommendations regarding the test blueprint and test design will be reviewed by NCBE’s Technical Advisory Panel, and the TTF will seek input more broadly from the stakeholder community before deciding on the blueprint and design recommendations to submit to the NCBE Board of Trustees at the end of 2020.

2019 Practice Analysis Report

Introduction

In 2018, the National Conference of Bar Examiners (NCBE) created a Testing Task Force (TTF) to undertake a comprehensive three-year study to ensure that the bar examination continues to test the knowledge, skills, and abilities needed for competent entry-level legal practice in a changing profession. To support its study, the TTF contracted with two independent research consulting firms with expertise in psychometrics and social science research.

The TTF’s study consists of three phases. Phase 1 was a series of listening sessions with stakeholders to solicit their impressions about the current bar examination and ideas for the next generation of the bar examination.1 Phase 2 consisted of a national practice analysis to provide empirical data on the job activities of newly licensed lawyers (NLLs).2 According to the Standards for Educational and Psychological Testing (AERA, APA, NCME, 2014), practice analysis is an essential part of the examination development process and serves as the primary source of validity evidence for licensure tests (Kane, 1982; Raymond & Luecht, 2013).3 This report summarizes the results of Phase 2. Phase 3, which will be completed in 2020, will translate the results from Phase 1 and Phase 2 into a recommended test blueprint (content to be tested and level of emphasis) and design (how the content is tested, including considerations like item format and test length) for the next generation of the bar examination. During Phase 3, a panel of legal subject matter experts (SMEs) will evaluate the survey results and, with guidance from one of the TTF’s research consultants, develop a draft test blueprint. Then stakeholders and outside testing experts will be consulted to finalize the blueprint and help the TTF develop recommendations on the design of the future bar examination.

Survey Development and Administration

The practice analysis survey was developed in three stages between October 2018 and July 2019. The stages included (1) completing an environmental scan to create a list of job requirements of NLLs, (2) conducting focus groups with NLLs and experienced lawyers to refine the list of job requirements, and (3) pilot testing and creating a final version of the practice analysis survey. Each stage is described below, as are the procedures for administering the survey.

Environmental Scan

The objective of the environmental scan was to research information relevant to the legal profession that could support the development of an organized taxonomy of the work responsibilities of NLLs. Consistent with common practice in job analyses, the taxonomy consisted of the tasks typically performed by NLLs, as well as the knowledge, skills, abilities, and other characteristics required to effectively perform those tasks. To paint a comprehensive picture of legal practice, the survey also included a technology section that listed work-related software applications that lawyers use to perform their work.

The resources used for the environmental scan included (1) the previous practice analysis conducted by NCBE in 2011–2012; (2) a focus group of NLLs and experienced lawyers facilitated by NCBE in March 2018; (3) the US Department of Labor’s Occupational Information Network (O*NET), which identifies the work and worker requirements for all jobs within the US economy; (4) research studies published by other individuals/entities identifying the competencies needed by NLLs; (5) various taxonomies of legal practice areas; (6) well-established behavioral taxonomies from the fields of personnel and educational psychology; (7) articles and reports regarding recent changes and anticipated future changes in the delivery of legal services and the practice of law; and (8) job postings for NLLs on the internet (e.g., the American Bar Association website and job recruitment websites such as Indeed and LinkedIn).

Focus Groups/List Refinement

After draft lists of tasks; knowledge areas; skills, abilities, and other characteristics (SAOs); and technology items were compiled through the environmental scan, three focus groups were conducted with lawyers from a variety of practice areas, settings, and backgrounds to further refine the lists. Focus Group 1 gathered information from experienced lawyers—those practicing law for 10 years or more—regarding changes in the legal profession over the previous five years and anticipated changes over the next five years. Lawyers with five years of practice or less were included in Focus Groups 2 and 3. Groups 2 and 3 reviewed and revised the draft lists of tasks, knowledge areas, SAOs, and technology items stemming from the environmental scan.

The TTF revised the draft lists resulting from the work of the focus groups to improve consistency in wording and eliminate redundancy, and the lists were subsequently organized for use in the survey. The TTF gave attention to the organizational framework of the task list. There were many possible ways to organize this list; for example, tasks could have been nested under practice areas such as Administrative Law, Criminal Law, Family Law, and so on. While such a framework has intuitive appeal, it creates redundancy because many tasks may be performed in several practice areas (e.g., negotiating the resolution of contract or business disputes). In addition, such a framework can artificially limit a practice analysis by including some practice areas to the exclusion of others. Given the purpose of the practice analysis—to identify fundamental work activities across the practice areas and settings in which NLLs work to determine appropriate content for a general licensure exam—the TTF organized the tasks according to the following four broad categories:

  • General (tasks any lawyer might perform regardless of practice area, such as analysis of client matter, research, investigation, communication, and case management)
  • Trial/Dispute Resolution (tasks that involve the representation of clients in contested matters regardless of practice area or forum)
  • Transactional/Corporate/Contracts (tasks that involve assisting clients with business, financial, or commercial transactions, agreements, or planning regardless of practice area)
  • Regulatory/Compliance (tasks that involve drafting, enforcing, determining compliance with, or securing benefits under laws or regulations regardless of practice area)

The lists of knowledge areas, SAOs, and technology items were naturally shorter than the list of tasks and did not require organizational frameworks. The survey also included a demographics section to obtain a description of respondents’ backgrounds and work environments for use in analyzing the results.

The TTF devoted considerable thought to including what is termed “other characteristics” on the survey. “Other characteristics” include personal attributes such as creativity, conscientiousness, diligence, integrity, leadership, and professionalism, to name a few. Most of the other characteristics represent non-cognitive or “soft” skills that have not been formally assessed on the bar examination historically. The TTF chose to include other characteristics on the survey for the following reasons. First, the practice analysis was intended to paint a comprehensive picture of entry-level legal practice and not be limited to only those things likely to be assessed on the bar examination. The results will be useful not only for licensure, but also for legal education, mentoring of NLLs, and continuing legal education.4 Second, bar admission agencies include a character inquiry as part of the licensure process. National survey data on other characteristics could support the character review by empirically identifying personal characteristics that are important for competent practice. Third, previous studies have suggested the importance of certain soft skills to competent legal practice (e.g., Shultz & Zedeck, 2011; Gerkman & Cornett, 2016), and including such skills as part of the present study provides an opportunity to build on that body of research. Fourth, the TTF remains open to the possibility of assessing soft skills as part of the licensure examination, recognizing that professions such as medicine have taken steps in that direction (Kyllonen, 2016).

Various rating scales typically used in practice analyses were considered to elicit and record responses to the different sections of the survey. For example, although it is common to rate work tasks in terms of the task’s frequency or criticality, knowledge areas might better be characterized in terms of their importance, their difficulty, or some other attribute (Kane et al., 1989; Raymond, 2016; Sanchez & Fraser, 1992). The pilot study described below provided an opportunity to evaluate application of the selected rating scales.

Pilot Testing

To evaluate the content and structure of the draft survey, pilot testing was completed to gather input on the clarity of the survey instructions; the completeness of the task, knowledge area, SAO, and technology lists; the usability of the rating scales; and the amount of time required to complete the survey.

During the period of July 11–23, 2019, the pilot survey was completed by 82 individuals, including some members of the TTF, some NCBE staff members, an outside consultant with expertise in practice analyses, and practicing attorneys not associated with NCBE who volunteered to participate. The pilot survey asked participants to rate the items in the draft lists and to evaluate the content, format, and length of each section of the survey, as well as the clarity of the instructions and “fit” of the rating scales to the various sections. The TTF and its research consultant used the results and feedback from pilot participants to fine-tune the survey and prepare it for online administration. Table 1 summarizes the content and respective rating scales of each section of the survey.

Table 1. Overview of Survey Content and Rating Scales

Tasks^b (179 items)
  Sample survey items:
  • Establish and maintain client trust account.
  • Determine proper or best forum to initiate legal proceeding.
  • Determine lawfulness or enforceability of contract or legal document.
  • Secure required governmental or regulatory approvals or authorizations.
  Rating scales^a: 5-point frequency scale ranging from 0 (not applicable) to 4 (weekly); 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Knowledge Areas (77 items)
  Sample survey items:
  • Bankruptcy Law
  • Civil Procedure
  • Criminal Law
  • Rules of Evidence
  Rating scale^a: 4-point importance scale ranging from 0 (not applicable) to 3 (essential)

SAOs – Skills, Abilities, and Other Characteristics (36 items)
  Sample survey items:
  • Critical/Analytical Thinking – Able to use analytical skills, logic, and reasoning to solve problems and to formulate advice.
  • Conscientiousness – Approaches work carefully and thoughtfully, driven by what is right and principled.
  • Interviewing/Questioning – Able to obtain needed information from others to pursue an issue or matter.
  • Leadership – Able to delegate, inspire, and make thoughtful decisions or plans to further goals and objectives.
  Rating scale^a: 4-point criticality scale ranging from 0 (not applicable) to 3 (essential)

Technology (24 items)
  Sample survey items:
  • Research Software or Platforms – Software, programs, or databases that permit the user to conduct electronic legal research.
  • Data Analytics Software – Software used to find anomalies, patterns, and correlations within data.
  • Video-Conferencing Software – Software that permits audio or video meetings with participants in different locations.
  Rating scale^a: 4-point proficiency scale ranging from 0 (not applicable) to 3 (expert)

Demographics (10 items)
  Sample survey items:
  • Which of the following best describes your practice setting?
  • How many lawyers are in your organization?
  • With which of the following races do you identify?
  • In which of the following areas of practice do you spend at least 5% of your time?
  Rating scales^a: Response options were tailored to each question

^a The exact wording and values for each rating scale are provided in the Results sections of this report.
^b The four sample task statements represent each of the four categories: General (49 tasks); Trial/Dispute Resolution (74 tasks); Transactional/Corporate/Contracts (41 tasks); and Regulatory/Compliance (15 tasks).

Survey Design

A noteworthy feature of the survey is that both NLLs and non-newly licensed lawyers (non-NLLs) were invited to answer it to ensure a breadth of perspectives on the work performed by NLLs. Respondents were asked at the beginning of the survey how many years of experience they had, which was used to determine whether they fell into the category of NLL or non-NLL. A qualifying question posed to non-NLLs at the beginning of the survey asked about their experience working with or supervising NLLs, and non-NLLs were disqualified from taking the survey if they indicated that they had never had direct experience working with or supervising NLLs. The survey required slightly different sets of instructions for the two groups of participants for all lists except the knowledge areas list.

  • NLLs were asked to rate survey items in terms of their own personal practice (e.g., “How frequently do YOU perform this task in YOUR practice areas and setting?”).
  • Non-NLLs were asked to rate items based on the practice of NLLs with whom they have or had direct experience (e.g., “How frequently do newly licensed lawyers with whom you have or had direct experience perform this task in THEIR practice areas and setting, regardless of what other NLLs with whom you do not have direct experience may do?”).

The survey instructed both groups to rate knowledge areas based on their importance for ALL newly licensed lawyers. The rating instructions for NLLs and non-NLLs for each survey section are set out in full in the Results sections of this report.

The survey was lengthy by necessity to adequately cover the work of NLLs. To prevent survey fatigue and encourage a high rate of response, the TTF determined that matrix sampling should be used to assign survey respondents to different sections of the survey. This method of survey assembly and assignment resulted in four versions of the survey:

  • Version A: 49 General tasks, 24 technology items, and 10 demographic questions
  • Version B: 74 Trial/Dispute Resolution tasks and 10 demographic questions
  • Version C: 41 Transactional/Corporate/Contracts tasks, 36 SAOs, and 10 demographic questions
  • Version D: 15 Regulatory/Compliance tasks, 77 knowledge areas, and 10 demographic questions

Respondents were randomly assigned to one of the four versions of the survey. Random assignment ensured that each version of the survey was seen by comparable numbers of respondents and reduced the selection bias that can occur when survey recipients are provided with the option to choose the category of questions to which they respond. Additionally, the survey questions did not require a response. Therefore, the number of respondents to any item could vary.
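The matrix-sampling design described above can be sketched in a few lines. This is a hypothetical illustration, not the actual survey platform's code; the version contents come from the report, but the function names are invented.

```python
import random

# Section contents of the four survey versions, as listed in the report.
VERSIONS = {
    "A": ["General tasks (49)", "Technology items (24)", "Demographics (10)"],
    "B": ["Trial/Dispute Resolution tasks (74)", "Demographics (10)"],
    "C": ["Transactional/Corporate/Contracts tasks (41)", "SAOs (36)", "Demographics (10)"],
    "D": ["Regulatory/Compliance tasks (15)", "Knowledge areas (77)", "Demographics (10)"],
}

def assign_form(is_nll, rng=random):
    """Randomly assign a respondent to one of the four versions; crossing
    the version with the two instruction sets (NLL vs. non-NLL) yields the
    eight survey forms mentioned in the report."""
    version = rng.choice(sorted(VERSIONS))
    return version, ("NLL" if is_nll else "non-NLL")
```

Because each respondent is assigned a version at random rather than choosing one, each version is seen by comparable numbers of respondents, which is the selection-bias safeguard the report describes.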

The following flowchart depicts how respondents were routed to the different sections of the survey, reflecting the eight forms of the survey that resulted from crossing four versions with two sets of instructions.

Survey Administration

The survey was open from August 1, 2019, through October 2, 2019. Given that there is no centralized registry of all practicing lawyers in the United States, a random sampling approach to survey distribution was not possible. The TTF instead took a census approach, in which any eligible respondent could answer the survey. A landing page was developed on the TTF website to serve as an informational platform about the practice analysis prior to the launch of the survey and as the “home” site for respondents to access the survey while it was open.

Fifty-four jurisdictions assisted NCBE with promoting the survey. A few jurisdictions provided NCBE with the email addresses of those members of their bar who permitted sharing that information, while other jurisdictions agreed to directly email their members about the survey. Most jurisdictions, however, agreed only to inform members about the survey through bar newsletters and social media channels. NCBE developed a communications toolkit of sample email messages, social media posts, and newsletter posts that the jurisdictions could use to inform members about the survey. NCBE also promoted the survey via multiple email campaigns to the following additional groups: bar admission administrators and bar examiners, TTF website subscribers, attendees at the 2018 and 2019 NCBE Annual Bar Admissions Conferences, ABA law school deans, and individuals with NCBE online accounts who appeared to meet criteria indicating that they were NLLs. NCBE also asked staff of the ABA Diversity and Inclusion Center and the ABA Young Lawyers Division to encourage their members to take the survey.

Additionally, NCBE promoted the survey through frequent posts on the TTF’s and NCBE’s social media channels, including paid/targeted posts on Facebook and LinkedIn. A press release about the survey was issued in August, and the practice analysis was featured in the Summer 2019 issue of the Bar Examiner, NCBE’s quarterly publication.

Results: Demographics and Practice Areas

Respondent Demographics

A total of 30,970 people accessed the survey and answered the first question: “Do you currently hold an active license to practice law in a United States jurisdiction?” Of those, 11,442 abandoned the survey before providing any ratings (i.e., no valid responses after the initial screening questions), and an additional 4,682 were disqualified from the survey due to having no experience working with or supervising NLLs. Thus, the total effective sample size for the survey was 14,846 respondents (30,970 − 11,442 − 4,682 = 14,846). The respondents consisted of 3,153 NLLs (21%) and 11,693 non-NLLs (79%), as shown in Table 2.
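The arithmetic behind the effective sample size can be checked directly (all figures are from the report itself):

```python
# Counts reported in the paragraph above.
accessed = 30_970       # answered the initial licensure question
abandoned = 11_442      # provided no ratings after the screening questions
disqualified = 4_682    # non-NLLs with no direct experience with NLLs

effective = accessed - abandoned - disqualified   # 14,846 respondents

nll, non_nll = 3_153, 11_693
nll_share = round(100 * nll / effective, 1)       # about 21.2% of respondents
```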

Table 2. Survey Respondents Who Provided Ratings

Years Licensed        Number       %
0 to 1 years           1,421     9.6%
2 to 3 years           1,732    11.7%
Total NLL              3,153    21.2%
4 to 6 years           1,428     9.6%
7 to 10 years          1,499    10.1%
11 to 15 years         1,579    10.6%
16 or more years       7,187    48.4%
Total Non-NLL         11,693    78.8%
Total                 14,846

Tables A.1 through A.8 in Appendix A offer a complete description of the demographic background and practice characteristics of survey respondents. Given the challenges of administering a national survey in the absence of a single, national database of licensed lawyers, the survey coverage exceeded expectations. Several tables in Appendix A compare survey respondent data to data for the US legal profession published by the American Bar Association in the ABA Profile of the Legal Profession 2019 (ABA Profile).

Table A.7 compares the percentage of survey respondents whose primary practice is in a given jurisdiction with the percentage of lawyers practicing in that jurisdiction based on the ABA Profile. For the largest jurisdictions, the percentage of survey respondents in the jurisdiction and the number of lawyers in that jurisdiction as a percentage of the US lawyer population were reasonably consistent (California: 14.8% of survey respondents vs. 12.6% of US lawyer population; New York: 17.5% of survey respondents vs. 13.5% of US lawyer population; Texas: 5.3% of survey respondents vs. 6.8% of US lawyer population). For most medium-sized and smaller jurisdictions, the percentage of respondents was consistent with the jurisdiction’s percentage of the US lawyer population, with the following exceptions: Minnesota, Ohio, and Pennsylvania were slightly overrepresented on the survey, while Florida and Illinois were slightly underrepresented.

As shown in Table A.4, there was a nearly even split between male and female survey respondents. The survey has a higher percentage of female respondents (47.7%) than the ABA Profile (36.5%).

Tables A.5 and A.6 show the distribution of survey respondents by race and ethnicity. The largest group of survey respondents was White or Caucasian (79.3%), followed by Asian or Asian American (4.8%) and Black or African American (4.4%); 5.3% of respondents were of Hispanic descent. As indicated in Tables A.5 and A.6, the percentages of respondents by race and ethnicity are in line with the overall US population of lawyers based on the ABA Profile.

It can be seen from these demographic comparisons that the practice analysis survey respondents generally were representative of the population of US lawyers based on the ABA Profile. This, in combination with the large number of respondents, suggests that survey results should generalize from the sample of respondents to the eligible population of NLLs and non-NLLs in the United States.

Practice Areas and Clusters

Respondents were presented with 35 practice areas5 and asked to indicate the areas in which they spend at least 5% of their time. They were then asked to estimate, as a percentage, the amount of time they spend working in each selected area. A total of 13,750 respondents provided usable responses to this set of questions. Both NLLs and non-NLLs selected an average (mean) of 4.6 practice areas. About 18% of all respondents selected just one practice area, with approximately two-thirds selecting between two and seven practice areas, indicating that most lawyers work in multiple practice areas. Table 3 lists the 10 most common and 10 least common practice areas based on the percent of respondents who selected each area.
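The statistics in the paragraph above can be computed from per-respondent selections. The following sketch uses an invented three-respondent mini-sample, not the survey data; each respondent maps the practice areas selected (at least 5% of time) to the percent of time spent there.

```python
# Hypothetical respondents: {practice area: % of time}, summing to 100 each.
respondents = [
    {"Contracts": 90, "Commercial Law": 10},
    {"Family Law": 100},
    {"Criminal Law": 50, "Family Law": 30, "Juvenile Law": 20},
]

counts = [len(r) for r in respondents]
mean_areas = sum(counts) / len(counts)                        # mean areas selected
pct_one_area = 100 * sum(c == 1 for c in counts) / len(counts)  # single-area share
```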

Table 3. Most Common and Least Common Practice Areas (see Table A.8 for all 35 practice areas)
Most Common Least Common
Contracts Securities
Business Law Immigration Law
Commercial Law Disability Rights
Administrative Law Employee Benefits
Real Estate Workers’ Compensation
Criminal Law International Law
Appellate Environmental Law
Employment Law and Labor Relations Education Law
Torts Energy Law
Other Indian Law

Table A.8 shows the percent of respondents who selected each practice area (i.e., spend at least 5% of their time in that area) and the mean percentage of time spent in each practice area based on all respondents. When interpreting these indices, it should be noted that the number of times a respondent is counted varies (e.g., an individual who practices in 10 areas contributes 10 values to the table, while an individual who practices in just two areas contributes only twice).

Table A.8 indicates that the practice area of Contracts was selected by 42% of respondents, Business Law was selected by 32%, and so on. The “mean % of time” values are also informative and, in some cases, temper the interpretation of the “% who selected” values. For example, 23% of respondents selected Commercial Law, but it had a relatively low “mean % of time” value of 3.1%, while Family Law was selected by 15% of respondents but it had a relatively high “mean % of time” value of 5.8%. One interpretation is that while fewer lawyers practice in the area of Family Law compared to Commercial Law, those who do tend to devote a higher percentage of their time to it.

Additional analyses were conducted to compare NLLs to non-NLLs. Results for the two groups generally followed the same patterns, with some exceptions: NLLs were generally more likely than non-NLLs to select Criminal Law, Family Law, and Immigration Law; NLLs were generally less likely than non-NLLs to select Appellate, Real Estate, Business Law, Commercial Law, Employment Law and Labor Relations, Insurance Law, Health Care Law, and Data Privacy and Cyberspace.

The practice area data show that 82% of respondents work in multiple and varying numbers of practice areas and with different degrees of emphasis in each practice area. For two respondents who work in Personal Injury and Commercial Law, for example, one respondent might spend 90% of her time in the former and 10% in the latter area, while the other respondent might spend 20% of her time in Personal Injury and 80% in Commercial Law. Although the two respondents practice in the same areas, their profiles are quite different. Therefore, to better understand how respondents allocate their time across the different practice areas, the data representing “mean % of time” in each practice area were subjected to cluster analysis.

The purpose of the cluster analysis was to identify groups of respondents with similar practice profiles (i.e., similar “mean % of time” in each practice area). Cluster analysis is used in practice analyses to identify families of similar jobs (Fleishman & Quaintance, 1984; Garwood et al., 2006). A desirable feature of cluster analysis is that each respondent is assigned to only one cluster and gets counted just once rather than multiple times for purposes of data analyses.
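The report does not specify the clustering algorithm used, so the sketch below shows only the core idea: each respondent's "% of time" profile is assigned to the single closest cluster, so each respondent is counted exactly once. The centroid values and the two-area profiles are hypothetical, echoing the Personal Injury/Commercial Law example above.

```python
import math

def assign_cluster(profile, centroids):
    """Assign a time-allocation profile to the nearest cluster centroid."""
    return min(centroids, key=lambda name: math.dist(profile, centroids[name]))

# Hypothetical centroids over two areas: (% Personal Injury, % Commercial Law).
centroids = {
    "Personal Injury": (85.0, 15.0),
    "Commercial Law": (15.0, 85.0),
}

# The two respondents from the text: same areas, very different mixes.
r1 = (90.0, 10.0)   # 90% Personal Injury, 10% Commercial Law
r2 = (20.0, 80.0)   # 20% Personal Injury, 80% Commercial Law
```

Despite practicing in the same two areas, the two respondents land in different clusters, which is exactly the distinction the cluster analysis is meant to capture.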

Table 4 summarizes the outcomes of the cluster analysis.6 The 25 cluster labels were determined by studying which of the original 35 practice areas correspond to a given cluster. The labels are subjective but reasonable. For example, the Wills, Estates, and Trusts cluster was given that label because the respondents in that cluster spent approximately 50% of their time in Wills, Estates, and Trusts, 13% in Elder Law, 8% in Tax Law, 5% in Real Estate Law, and 5% in Business Law. It is acknowledged that other interpretations and labels are possible. Later sections of this report analyze task and knowledge area ratings within each practice cluster to identify the tasks and knowledge areas that span multiple practice clusters.

Table 4. Practice Clusters Derived from Combinations of the 35 Practice Areas

Cluster Label   % of Sample   Comments
Criminal Law 10.7% Includes Constitutional Law; Litigation; Juvenile Law
Business Law 9.2% Includes Administrative Law; Commercial Law; Contracts; Cyber Law; Employment Law; Intellectual Property
Personal Injury 7.3% Includes Torts; Insurance Coverage; Contracts; Employment Law; Professional Liability
Family Law 6.6% Includes Litigation; Wills, Estates, and Trusts
Business Litigation  5.8% Includes Contracts; Appellate; Debtor-Creditor Relations; Business Law; Real Estate; Commercial Law; Torts; Litigation
Real Estate Law 5.8% Includes Business Law; Commercial Law; Contracts; Wills, Estates, and Trusts; Land Use and Zoning
Wills, Estates & Trusts 4.4% Includes Elder Law; Tax Law; Real Estate; Business Law
Employment Law 3.4% Includes Litigation; Administrative Law
Administrative Law 3.2% Includes Education Law; Disability Rights; Litigation; Employment Law
Securities 3.2% Includes Business Law; Commercial Law; Contracts
Health Care Law 2.8% Includes Contracts; Administrative Law; Business Law; Cyber Law; Employment Law
Local Government Law 2.5% Includes Contracts; Administrative Law; Tax Law; Employment Law; Insurance Coverage; Constitutional Law
Immigration Law 2.5% Includes Family Law; Administrative Law
Debtor-Creditor Relations 2.4% Includes Real Estate; Business Law; Contracts; Commercial Law
Intellectual Property Law 2.4% Includes Litigation; Contracts
Family-Criminal Law 2.3% Includes Family Law; Criminal Law; Juvenile Law; Litigation
Commercial Law 2.2% Includes Business Law; Contracts
Professional Liability 2.2% Includes Business Law; Contracts; Torts; Insurance Coverage; Commercial Law
Tax Law 2.2% Includes Employee Benefits; Employment Law
Appellate Law: Criminal 2.1% Includes Appellate Law; Criminal Law; Constitutional Law
Workers’ Compensation 1.9% Includes Personal Injury; Administrative Law
Insurance Coverage 1.7% Includes Litigation; Contracts; Torts
Juvenile Law 1.5% Includes Family Law; Appellate Law; Education Law
Environmental Law 1.5% Includes Administrative Law; Energy Law; Land Use and Zoning; Litigation; Contracts; Local Government Law
Energy Law 0.9% Includes Administrative Law; Contracts; Real Estate; Business; Wills, Estates, and Trusts

Results: Tasks

Rating Scales and Sample Sizes

The Tasks section of the survey comprised 179 work activities grouped under four categories: General tasks, Trial/Dispute Resolution tasks, Transactional/Corporate/Contracts tasks, and Regulatory/Compliance tasks. Respondents were instructed to rate the tasks on the basis of criticality and frequency, as explained below. The instructions for NLLs and non-NLLs differed in an important way. NLLs were asked to “Rate the criticality and frequency of individual tasks based on YOUR practice, regardless of what other newly licensed lawyers may do in their practice.” Non-NLL respondents were instructed to “Rate the criticality and frequency of the tasks based on the practice of newly licensed lawyers (licensed for 3 years or less) with whom you have or had direct experience, regardless of what other newly licensed lawyers with whom you do not have direct experience may do in their practice.”

The rating scales for NLLs were as follows:

Criticality Scale

0 = Not applicable – performing this task effectively is not applicable/necessary to YOUR practice, or you have not performed this task yet as a newly licensed lawyer
1 = Low – performing this task effectively is minimally critical to YOUR practice
2 = Moderate – performing this task effectively is important but not essential to YOUR practice
3 = High – performing this task effectively is essential to YOUR practice

Frequency Scale

0 = Not applicable – you have not performed this task yet as a newly licensed lawyer
1 = Yearly – you perform this task about once a year or less frequently (e.g., every 2–3 years)
2 = Quarterly – you perform this task approximately quarterly (about 3–6 times per year)
3 = Monthly – you perform this task approximately monthly
4 = Weekly – you perform this task approximately weekly or more frequently

The rating scales for non-NLLs were as follows:

Criticality Scale

0 = Not applicable – performing this task effectively is not applicable/necessary to their practice, or they have not performed this task yet as newly licensed lawyers
1 = Low – performing this task effectively is minimally critical to their practice
2 = Moderate – performing this task effectively is important but not essential to their practice
3 = High – performing this task effectively is essential to their practice

Frequency Scale

0 = Not applicable – they have not performed this task yet as newly licensed lawyers
1 = Yearly – they perform this task about once a year or less frequently (e.g., every 2–3 years)
2 = Quarterly – they perform this task approximately quarterly (about 3–6 times per year)
3 = Monthly – they perform this task approximately monthly
4 = Weekly – they perform this task approximately weekly or more frequently

For many of the analyses discussed in this report, the frequency rating was converted to a dichotomous scale indicating whether the respondent performed the task. The rating was coded as “0” if the task was rated as not applicable; otherwise, the rating of 1, 2, 3, or 4 was assigned a value of “1.” This dichotomous frequency scale is abbreviated as “%perform” throughout this report.
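The %perform coding described above can be expressed as a simple recode: any frequency rating of 1 ("yearly") through 4 ("weekly") counts as performing the task, while 0 ("not applicable") does not. A minimal sketch:

```python
def performs(frequency_rating):
    """Recode a 0-4 frequency rating to the dichotomous %perform scale."""
    return 1 if frequency_rating in (1, 2, 3, 4) else 0

def pct_perform(frequency_ratings):
    """Percentage of respondents who perform the task at all."""
    return 100 * sum(performs(r) for r in frequency_ratings) / len(frequency_ratings)
```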

The sample sizes of respondents to the Tasks section of the survey ranged from 495 to 753 per task for NLLs, with an average of 568 respondents, while the sample sizes for non-NLLs ranged from 1,797 to 2,423 per task, with an average of 2,027.

Main Findings

Table 5 offers a few results relating to the task ratings. The left column of the table lists the 10 most commonly performed tasks in terms of the %perform values for ratings of the tasks by NLLs and non-NLLs combined. All of these tasks were performed by at least 90% of NLLs, had mean frequency ratings approaching weekly, and had criticality ratings approaching “high importance” (essential). The right column lists the 10 least commonly performed tasks in terms of the %perform values.

Table 5. Most Commonly and Least Commonly Performed Tasks (see Table B.1 for all 179 tasks)

Most Commonly Performed Tasks Least Commonly Performed Tasks
Identify issues in client matter, including legal, factual, or evidentiary issues. Draft and file documents to secure or maintain intellectual property protection.
Research case law. Draft legislation or regulations.
Interpret laws, rulings, and regulations for client. Negotiate with or on behalf of land use regulatory authorities.
Research statutory and constitutional authority. Draft prenuptial or antenuptial agreements.
Evaluate strengths and weaknesses of client matter.  Prepare or review local, state, or federal tax returns and filings.
Evaluate how legal document could be construed. Establish and maintain client trust account.
Develop specific goals and plans to prioritize, organize, and accomplish work activities. Participate in initiative or proposition process to change statute or constitution.
Conduct factual investigation to obtain information related to client matter. Represent client in post-conviction relief or habeas corpus proceedings.
Research secondary authorities.  Represent client in eminent domain or condemnation proceeding.
Consult with colleagues or third parties regarding client matters. Draft constitutional amendments.

Table B.1 in Appendix B summarizes ratings for all 179 tasks separately for NLLs and non-NLLs. The tasks are ordered from high to low based on the percentage of respondents who indicated that they (or, in the case of non-NLLs, the NLLs with whom they have direct experience) performed the task. For purposes of rank-ordering tasks, Table B.1 uses the combined values of %perform for both groups.

The frequency and criticality means set out in Table B.1 are computed from respondents who indicated that the task was applicable (i.e., assigned a rating of 1 to 4 for yearly or more frequent). Inspection of Table B.1 reveals a remarkable consistency between NLLs and non-NLLs in their ratings across most of the 179 tasks. The correlation between frequency ratings by NLLs and non-NLLs was r = 0.97, while the correlation between criticality ratings by NLLs and non-NLLs was r = 0.90.

Table 6 summarizes the range of values presented in Table B.1. The ratings by NLLs had lower values of %perform than the ratings by non-NLLs, with overall means of 52.2% for NLLs and 65.6% for non-NLLs. This difference could represent a positive bias, in that some non-NLLs may perceive that NLLs are performing tasks that they are not. It could also reflect that some NLLs have not yet performed some of the less frequent tasks. However, the values for the two groups are highly correlated, r = 0.95, suggesting a high degree of consistency in responses between NLLs and non-NLLs.
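The consistency checks reported above (r = 0.97, 0.90, and 0.95) are Pearson correlations between per-task values for the two groups. The sketch below computes such a correlation from scratch; the five-task data are invented for illustration, not taken from Table B.1.

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-task %perform values (NLL vs. non-NLL) for five tasks.
nll_vals = [95, 80, 52, 30, 3]
non_nll_vals = [98, 88, 66, 41, 8]
```

Note that a high correlation is compatible with the level difference reported above: non-NLL values can run uniformly higher than NLL values while still rank-ordering the tasks almost identically.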

Table 6. Summary of %Perform, Mean Frequency, and Mean Criticality of Task Ratings from Table B.1

              %Perform                  Mean Frequency            Mean Criticality
              Min     Max     Mean      Min     Max     Mean      Min     Max     Mean
NLL           3%      95%     52.2%     1.4     3.8     2.49      1.5     2.8     2.17
Non-NLL       8%      98%     65.6%     1.3     3.8     2.39      1.3     2.8     2.11

Demographic Subgroup Analyses

It is possible that the tasks lawyers perform depend on characteristics such as practice setting, geographic region, and so on. Thus, criticality and frequency ratings were analyzed by subgroups of respondents based on the following demographic factors: recency of experience with NLLs, practice setting, number of lawyers in the organization, gender, race/ethnicity, and geographic region.

The large number of task statements, multiple rating scales, and variety of demographic factors produced thousands of comparisons. The analyses reported here focus on %perform. Analyses were also completed for mean frequency and mean criticality, but there were no meaningful subgroup differences in those ratings. The results of the subgroup analyses of the %perform data are summarized by converting %perform to a dichotomous variable: a task was assigned a 1 for a subgroup if it was performed by at least 50% of respondents in the subgroup; otherwise, it was assigned a 0. This dichotomous variable served as an indicator of task relevance for the subgroup. For each subgroup, the number of tasks out of 179 that had a status of “relevant” was then counted. Table B.2 in Appendix B provides a summary comparison by subgroup of the number of tasks deemed relevant (i.e., performed by at least 50% of respondents in the subgroup).
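The subgroup relevance count described above reduces to a threshold-and-count operation: a task is "relevant" for a subgroup when at least 50% of respondents in that subgroup perform it. The %perform values below are invented for illustration.

```python
def relevant_task_count(pct_perform_values, threshold=50.0):
    """Count tasks whose %perform meets the relevance threshold for a subgroup."""
    return sum(1 for p in pct_perform_values if p >= threshold)

# Hypothetical %perform values for five tasks in two practice settings.
solo_practitioners = [92.0, 71.0, 55.0, 50.0, 12.0]
judicial_clerks = [88.0, 49.0, 30.0, 22.0, 5.0]
```

Comparing the resulting counts across subgroups is what Table B.2 summarizes for all 179 tasks.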

A limitation of these analyses is that they concern only main effects for a single demographic variable at a time, and do not consider joint effects of multiple variables. More complex analyses would be required to disentangle the effects of one demographic variable from another. Another limitation is that sample sizes for some subgroups were quite small. These limitations notwithstanding, a few differences in task relevance by subgroups are provided as examples:

  • The greater the time lapse since a non-NLL had direct experience with an NLL, the higher the number of tasks that were rated as relevant.
  • Respondents at smaller firms rated more tasks as relevant than respondents in larger firms.
  • Solo practitioners rated more tasks as relevant than those employed in other settings; those employed in legal services/public interest or in judicial clerkships rated fewer tasks as relevant than those employed in other settings.
  • Respondents practicing in the western United States rated fewer tasks as relevant than those practicing in other geographic regions.
  • A larger number of tasks were rated relevant by male respondents than by female respondents.
  • White or Caucasian respondents rated tasks as relevant at higher rates than Asian/Asian American respondents and Black/African American respondents.

Again, these differences do not lend themselves to unambiguous interpretation due to the potential effects of multiple demographic variables. For example, the gender differences could be explained by other factors, such as years of experience. Follow-up analyses will be conducted to better understand these differences, and the results will be taken into consideration by the TTF as it conducts Phase 3 of its study.

Analyses of Practice Clusters

Task frequency ratings for NLLs and non-NLLs were evaluated as a function of the practice clusters previously presented in Table 4. It is common to use a 40% or 50% frequency criterion for evaluating relevance of a task when developing the blueprint for a licensure exam. However, the decision to keep or drop a task should also be based on the extent to which it is relevant to multiple practice areas. Some tasks might exceed the criterion because they are performed by many lawyers in just one or two large practice clusters. Conversely, there may be tasks that do not meet the criterion because they are performed by fewer lawyers, but such tasks might be considered core work activities because they span multiple practice areas.

Table B.3 in Appendix B presents findings for a sample of 15 clusters from the original 25 in Table 4 and for 90 of the 179 tasks.7 Each cell in Table B.3 indicates the percentage of respondents in a cluster who performed the task. The 15 clusters include the 10 largest clusters and a sample of 5 smaller clusters. The 90 tasks consist of the top 10 (in terms of %perform), the bottom 10, and 70 that are near the region of a possible cut point (i.e., total group %perform values ranging from 30% to 60%).

As expected, tasks near the top of Table B.3 have very high values of %perform across all practice areas. Toward the middle of the table, however, results diverge. This variability seems quite natural—values of %perform are generally high where expected and low where expected. Consider the first task in the 30% to 60% group: “Draft or negotiate business agreements (e.g., purchase and sale, lease, licensing, non-disclosure, loan, security).” This task was performed by 60% of all respondents, but the range of values varied widely from 11% (Appellate Law: Criminal) to 92% (Real Estate Law).

The two examples below further illustrate how data in Table B.3 can serve as another source of information for evaluating task relevance—especially those tasks that are near the 50% criterion for %perform. These two example tasks can be identified in the table by referring to the values in the Total Group column and looking for values of 48% and 35%.

  • The task “Draft or respond to demand to compel arbitration” was performed by 48% of all respondents. However, that value was negatively influenced by the fact that only 22% of respondents in the Criminal Law practice cluster performed the activity, and Criminal Law is the largest cluster. It is also noteworthy that this task was performed by many respondents in other common practice clusters such as Business Law (54%), Personal Injury (66%), and Employment Law (76%). These data support an argument for including this task even though the %perform value is below 50%.
  • The task “Draft estate, inheritance, descent, and/or non-probate transfer documents (e.g., wills, trusts, transfer on death)” was performed by only 35% of all respondents. However, a majority of those in Family Law (68%), Real Estate Law (55%), and Wills, Estates, and Trusts (95%), all of which are relatively common practice areas for NLLs, indicated that they performed this activity. These data support an argument for including this task even though the %perform value is below 50%.

Implications for Test Blueprint and Design

Results from the Tasks section of the survey can help inform test design in at least two ways. First, a licensure examination should assess the KSAs required to effectively perform the major work responsibilities for entry-level practice. The mean frequency and %perform values presented in Table B.1 in Appendix B can be useful for identifying those core responsibilities. Second, the criticality ratings can be helpful for establishing content weights for the test blueprint.8 Relatively more weight should be allocated to those tasks that are performed by a large percentage of people, are performed more often, and are more critical (Kane et al., 1989; Raymond, 2016).
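A minimal sketch of this weighting logic in Python follows. The multiplicative combination of the three indices and the task values shown are illustrative assumptions for demonstration, not a formula prescribed by the report or the cited literature.

```python
# Hypothetical sketch: combining %perform, mean frequency, and mean
# criticality into relative blueprint weights. The product rule is an
# assumption for illustration; other composites are possible.

def blueprint_weights(tasks):
    """tasks: iterable of (name, pct_perform, mean_freq, mean_crit).

    pct_perform is on a 0-100 scale, mean_freq on 1-4, mean_crit on 1-3.
    Returns {name: weight} with the weights summing to 1.0.
    """
    raw = {
        name: (pct / 100.0) * (freq / 4.0) * (crit / 3.0)
        for name, pct, freq, crit in tasks
    }
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

# Invented example values for three tasks:
weights = blueprint_weights([
    ("Research secondary authorities", 90, 3.2, 2.5),
    ("Draft business agreements", 60, 2.4, 2.1),
    ("Respond to demand to compel arbitration", 48, 2.0, 1.9),
])
```

Under any such rule, a task that is performed widely, performed often, and rated more critical receives proportionally more blueprint weight.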

The results in Table B.1 indicate that NLLs and non-NLLs were consistent in their ratings, as evidenced by the high correlations and similar frequency and criticality means. This supports using the ratings of both NLLs and non-NLLs when determining which tasks to consider for test blueprint development. The very large sample size for non-NLLs, in combination with their experience and informed perspective, offers further support for including their responses with those of NLLs when making test blueprint and design decisions.

After reviewing the task data from the survey, the TTF adopted the following guiding principles:

  • For a task to be considered in the test blueprint development process, the value of %perform should be at least 50 for either the NLL group or the non-NLL group.
  • Those tasks with a large difference between NLL and non-NLL respondents should be subject to further review to determine whether the tasks should be considered in blueprint development. This review could include data based on demographic subgroups (e.g., solo practitioners, gender), results based on practice clusters, data from other reports, and/or the personal experience of the panel of legal subject matter experts (SMEs) participating in the blueprint development process.
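These two guiding principles can be expressed as a simple screen. In the sketch below, the record layout and the 20-point gap used to flag tasks for further review are hypothetical, since the report does not define a numeric threshold for a "large difference."

```python
# Sketch of the TTF's task-screening principles. The 20-point flag for a
# "large difference" between groups is an assumed value for illustration.

def screen_tasks(tasks, threshold=50, diff_flag=20):
    """tasks: iterable of (name, pct_perform_nll, pct_perform_non_nll).

    Returns (retained, flagged): tasks meeting the 50% criterion for
    either group, and tasks whose NLL/non-NLL gap calls for further review.
    """
    retained = [name for name, nll, non in tasks
                if nll >= threshold or non >= threshold]
    flagged = [name for name, nll, non in tasks
               if abs(nll - non) >= diff_flag]
    return retained, flagged
```

A flagged task is not automatically excluded; it is simply routed to the SME review described above.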

Table 7 summarizes the number of tasks that do and do not meet the 50% criterion for %perform. A majority of General and Trial/Dispute Resolution tasks meet the criterion, whereas fewer than half of the Transactional/Corporate/Contracts tasks and Regulatory/Compliance tasks meet the criterion.

Table 7. Number of Tasks That Meet and Do Not Meet the 50% Criterion for %Perform

Category N Tasks  N ≥ 50% % ≥ 50% N < 50% % < 50%
General 49 42 86% 7 14%
Trial/Dispute Resolution 74 67 91% 7 9%
Transactional/Corporate/Contracts 41 19 46% 22 54%
Regulatory/Compliance 15 5 33% 10 67%
Total 179 133 74% 46 26%

Results: Knowledge Areas

Rating Scale and Sample Sizes

Both NLLs and non-NLLs rated the 77 knowledge areas based on how important they believe the area of knowledge is for ALL newly licensed lawyers.

The rating scale for NLLs and non-NLLs was as follows:

How important is the area of knowledge for a newly licensed lawyer regardless of the newly licensed lawyer’s practice?

0 = Not applicable – this area of knowledge is not applicable/necessary for a newly licensed lawyer

1 = Low – this area of knowledge is minimally important for a newly licensed lawyer

2 = Moderate – this area of knowledge is important but not essential for a newly licensed lawyer

3 = High – this area of knowledge is essential for a newly licensed lawyer

For this section of the survey, the total sample sizes for NLLs and non-NLLs were 940 and 3,321 respondents, respectively.

Main Findings

Table 8 shows the knowledge areas with the highest and lowest mean importance ratings. For the most part, there are no surprises here. The topics with the highest average ratings have surfaced in previous studies as being essential to the practice of NLLs. Many of the highest-rated knowledge areas are subjects presently covered on the bar examination. The least important knowledge areas arguably are not necessary for entry-level practice for most lawyers and do not represent foundational knowledge that applies to numerous areas of practice by most NLLs.9

Table 8. Knowledge Areas with Highest and Lowest Mean Importance Ratings 

Highest Mean Importance Ratings Lowest Mean Importance Ratings
Rules of Professional Responsibility and Ethical Obligations Transportation Law
Civil Procedure Bioethics
Contract Law Indian Law
Rules of Evidence Foreign Trade Law
Legal Research Methodology Public Utility Law
Statutes of Limitations Military Justice Law
Local Court Rules Animal Rights Law
Statutory Interpretation Principles Sports and Entertainment Law
Sources of Law (Decisional, Statutory, Code, Regulatory, Rules) Air and Space Law
Tort Law Admiralty Law

Mean importance ratings for all 77 knowledge areas by NLLs and non-NLLs appear in Table C.1 in Appendix C. Table C.1 is ordered from most important to least important based on the mean ratings combined across both groups. These means are based on only those respondents who judged the knowledge area as applicable to the practice of all NLLs and have a possible range from 1.0 to 3.0. Table 9 summarizes the range of ratings means. The overall means for all knowledge areas as rated by NLLs and non-NLLs were nearly identical (1.69 vs. 1.65), and the correlation between the two sets of ratings was very high, r = 0.99.
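The statistics just described can be sketched as follows: means are computed over applicable ratings only (0 = not applicable is excluded), and NLL/non-NLL agreement is a Pearson correlation over per-area means. All data values below are invented for illustration.

```python
from statistics import mean

def mean_importance(ratings):
    """Mean over applicable ratings only (0 = not applicable is excluded)."""
    applicable = [r for r in ratings if r > 0]
    return mean(applicable)

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-area means for five knowledge areas:
nll     = [2.6, 2.4, 1.9, 1.2, 1.1]
non_nll = [2.8, 2.3, 1.8, 1.1, 1.0]
r = pearson(nll, non_nll)  # close to 1, mirroring the reported r = 0.99
```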

Table 9. Summary of Mean Importance Ratings of Knowledge Areas from Table C.1

NLL or Non-NLL Min Max Mean
NLL 1.1 2.6 1.69
Non-NLL 1.0 2.8 1.65

The high correlation notwithstanding, there were a few minor but interesting differences between NLLs and non-NLLs. Knowledge areas rated slightly higher by NLLs than by non-NLLs were Criminal Procedure, Criminal Law, Landlord-Tenant Law, and Immigration Law. Areas rated slightly lower by NLLs than by non-NLLs included Rules of Professional Responsibility and Ethical Obligations, Statutory Interpretation Principles, Commercial Litigation Law, and Data/Cybersecurity Law. Overall, however, the high correlation and the similarity in mean ratings support combining the two sets of ratings for decision-making purposes.

The TTF discussed various methods and indices to guide decisions about which knowledge areas should be considered in the blueprint development process. Rather than exclusively relying on NLL and non-NLL mean ratings, the TTF opted for a strategy that relies on a direct interpretation of the rating scale. Specifically, if a knowledge area is judged by most lawyers to be of only minimal importance, then it is hard to justify including that knowledge area on the bar examination. Conversely, if most lawyers judge a knowledge area as being moderately or highly important, then there is support for considering that area for inclusion on the bar examination. Following this line of reasoning, the TTF’s guideline for blueprint development is that a knowledge area should be considered for inclusion in the blueprint if at least 50% of either the NLLs or non-NLLs who rated it viewed it as being of moderate or high importance. Knowledge areas that fail to meet this criterion should be considered for exclusion.

The two right-most columns of Table C.1 indicate the percentage of NLLs and non-NLLs who rated each knowledge area as of moderate or high importance. Although the values in these two columns are similar, they are not as similar as the values for mean importance displayed in the two left-most columns. Rigid application of the 50% rule would result in keeping 27 knowledge areas for possible inclusion in the blueprint and dropping 50 knowledge areas. A similar outcome would have resulted from applying a criterion of a mean importance rating of 1.7. As with the tasks, other factors will be taken into consideration when deciding what knowledge areas to include in the test blueprint.

Demographic Subgroup Analyses

Knowledge area ratings were analyzed based on the following demographic variables: recency of experience with NLLs, practice setting, number of lawyers in the organization, gender, race/ethnicity, and geographic region. Results were remarkably consistent across these groups; that is, mean ratings did not vary much based on the demographic backgrounds of respondents. Mean knowledge ratings did vary by practice areas, however, as described immediately below.

Analyses of Practice Clusters

Decisions about whether to include a knowledge area in the test blueprint should include evaluation of the extent to which it is relevant to multiple practice areas. Therefore, Table C.2 in Appendix C was produced to depict knowledge area importance as a function of the 25 practice clusters. Some knowledge areas with low overall mean importance ratings might nevertheless be considered for inclusion in the test blueprint because they are important in numerous practice clusters.

Table C.2 presents findings for the sample of 15 clusters (the same 15 clusters presented earlier in Table B.3 in Appendix B). All 77 knowledge areas are included in Table C.2. Each cell indicates the mean importance rating based on the respondents in that cluster. The comments below illustrate how the data from Table C.2 can be used to determine which knowledge areas should be considered for inclusion on the test blueprint.

  • Knowledge areas toward the top of Table C.2 are generally rated important across all practice clusters, with two exceptions: Local Court Rules and Business Organizations Law had mean importance ratings of 1.6 or less in two of the 15 practice clusters.
  • Table C.2 may be most helpful for informing decisions about those knowledge areas in the middle region of relevance for entry-level practice (e.g., mean between 1.4 and 1.7). As one example, the knowledge areas of Commercial Litigation Law and Employment Law both have a total group mean of 1.7. The observation that they each had low importance ratings by respondents in six or more practice clusters, however, might support an argument that they should not be considered for inclusion on the test blueprint.
  • As a counterexample, the knowledge area of Personal Property Law has a total group mean of 1.5, suggesting that it should be excluded, but it received high ratings from those in the practice clusters of Family Law, Real Estate Law, and Wills, Estates, and Trusts, and these practice areas tend to have a relatively large number of NLLs who work in small or solo practices.
  • Table C.2 is useful for identifying core practice clusters (based upon “% of sample” values in the first row) and core knowledge areas (listed in first column), starting at the top left portion of the table and progressing toward the bottom right until most cell values are below 1.7.

Overall, the values in Tables C.1 and C.2 are consistent with expectations. Values are high in those cells where high values are expected, and low where low values are expected, although some exceptions may be found. That these findings have intuitive meaning and are derived from large sample sizes speaks positively to the validity of the survey results (Colton et al., 1991).

Implications for Test Blueprint and Design

The Knowledge Areas section of the survey has direct implications for the test blueprint because most licensure tests include an assessment of the subject matter knowledge required for competent practice (Kane, 1981; Knapp & Knapp, 2007; Raymond & Luecht, 2013).

It would be reasonable to take the ratings in Tables C.1 and C.2 at face value and allow them to determine which subjects to assess on the bar examination and the amount of emphasis that should be allocated to each subject. Knowledge importance ratings are often used in this manner (Tannenbaum & Wesley, 1993). However, it is acknowledged that knowledge ratings are susceptible to positive bias10 (Morgeson et al., 2004). To mitigate the influence of any such bias, the knowledge areas will be mapped or linked to key work responsibilities from the Tasks section of the survey before the test blueprint is finalized (Hughes & Prien, 1989).

Results: Skills, Abilities, and Other Characteristics (SAOs)

Rating Scales and Sample Sizes

NLLs were instructed to rate each SAO in terms of its criticality for their own practice, while non-NLLs were instructed to rate the SAOs based on the practice of NLLs with whom they have or had direct experience.

The rating scale for NLLs was as follows:

How critical is this SAO in YOUR practice?

0 = Not applicable – this SAO is not applicable/necessary in YOUR practice

1 = Low – this SAO is minimally critical in YOUR practice

2 = Moderate – this SAO is important but not essential for YOUR practice

3 = High – this SAO is essential in YOUR practice


The rating scale for non-NLLs was as follows:

How critical is this SAO in the practice of newly licensed lawyers with whom you have direct experience? 

0 = Not applicable – this SAO is not applicable/necessary in their practice

1 = Low – this SAO is minimally critical in their practice

2 = Moderate – this SAO is important but not essential for their practice

3 = High – this SAO is essential in their practice

Sample sizes for this section of the survey were 785 NLL respondents and 2,930 non-NLL respondents.

Main Findings

The five most critical and five least critical SAOs appear in Table 10. Because most SAOs tended to receive high ratings, contrasting the five highest with the five lowest ratings is of limited value. One might reasonably argue that the SAOs in the right column of Table 10 are important for entry-level practice, just less important than those in the left column. A fair interpretation of the results requires looking at the full range of ratings.

Table 10. SAOs with Highest and Lowest Mean Criticality Ratings

Highest Ratings Lowest Ratings
Written/Reading Comprehension – Able to read and understand information presented in writing. Strategic Planning – Plans and strategizes to anticipate and address present and future issues and objectives.
Critical/Analytical Thinking – Able to use analytical skills, logic, and reasoning to solve problems and to formulate advice. Leadership – Able to delegate, inspire, and make thoughtful decisions or plans to further goals and objectives.
Written Expression – Able to effectively communicate information and ideas in writing. Social Consciousness/Community Involvement – Demonstrates desire to improve society by contributing skills to the community.
Identifying Issues – Able to spot salient legal concerns presented by a set of circumstances. Networking and Business Development – Able to develop meaningful business relationships and to market skills to develop client relationships.
Integrity/Honesty – Demonstrates core values and belief system. Instructing/Mentoring – Able to manage, train, and instruct to assist others in realizing their full potential.

Table D.1 in Appendix D presents the mean criticality ratings for all 36 SAOs. Table 11 summarizes the range of ratings means. The ratings by the two groups were highly correlated, r = 0.96.

Table 11. Summary of Mean SAO Criticality Ratings from Table D.1 

NLL or Non-NLL Min Max Mean
NLL 1.8 2.8 2.49
Non-NLL 1.6 2.8 2.46

The two right-most columns of Table D.1 indicate the percentage of NLLs and non-NLLs who rated the criticality of the SAO as “moderate” or “high.” Keeping the NLLs and non-NLLs separate is useful for evaluating these data because there are some notable differences between the two groups (with notable defined as a difference of 5% or more). NLLs assigned lower ratings than non-NLLs for eight of the SAOs: Integrity/Honesty, Advocacy, Researching the Law, Collaboration/Teamwork, Achievement/Goal Orientation, Interviewing/Questioning, Resource Management/Prioritization, and Creativity/Innovation. Meanwhile, NLLs provided higher ratings than non-NLLs for Leadership and Instructing/Mentoring.

The results in Table D.1 reinforce the outcomes of previous research on the cognitive and affective skills required of practicing lawyers. Specifically, the list of SAOs included nearly all of the 26 lawyering skills identified through the work of Shultz and Zedeck (2011), and the fact that nearly all SAOs were judged to be either moderately or highly critical can be regarded as confirmation of that earlier work. For determining which SAOs should be considered relevant to the licensure process, the TTF decided to apply the same rule used for the knowledge areas: SAOs should be considered relevant to licensure if at least 50% of NLLs or non-NLLs rated the SAO to be of moderate or high criticality. SAOs that do not meet this criterion should be considered for exclusion.

All 36 SAOs would meet this threshold, although the last one in Table D.1 (Instructing/Mentoring) is borderline. Even if all SAOs are retained, some were rated higher than others, and the differences in ratings can be useful in prioritizing the SAOs for licensure purposes.

There is an important distinction between the guideline for SAOs and the one for knowledge areas. Those knowledge areas that meet the 50% threshold should be considered for inclusion on the test blueprint; however, SAOs that meet the threshold should be considered as relevant to the licensure process. In the context of SAOs, the term “licensure process” includes any aspect of preparation for practice (e.g., admission to and graduation from law school, character and fitness evaluation, mentoring, and continuing legal education). However, to be considered for inclusion on the bar examination, SAOs would need to meet other criteria as described below.

Demographic Subgroup Analyses

Given the uniformly high criticality ratings for SAOs, responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

Implications for Test Blueprint and Design

Translating SAOs into meaningful examination content is expected to be a challenge for those who work on blueprint development. There is little doubt that these SAOs are important for competent entry-level legal practice. Indeed, due to their generic nature, most of the SAOs are critical to working in a variety of jobs or professions. However, some of these skills are difficult to teach (e.g., Integrity) and even more challenging to assess in a manner that produces reliable and valid test scores. To determine which, if any, SAOs should be considered for inclusion in the blueprint development process, the TTF adopted the following guidelines:

  • The SAOs that are relatively specific to the legal profession should be considered for inclusion in the test blueprint process. An example of such an SAO would be Fact Gathering.
  • SAOs that are not specific to the legal profession but can be applied and assessed narrowly within a legal context should be considered for inclusion in the test blueprint process. An example would be Critical/Analytical Thinking. Although this SAO could be measured broadly with generic content (as the SAT, ACT, GRE, and LSAT do, for example), it also can be measured within the context of legal scenarios and documents.

SAOs that meet one of the above criteria will be evaluated in terms of their feasibility for assessment. For example, the SAOs of Conscientiousness and Professionalism probably could be applied and assessed within a legal context, and there is modest research supporting the feasibility of assessing them in other occupations (Kyllonen, 2016). In contrast, the SAO of Adapting to Change, Pressure, or Setbacks does not pass the feasibility screen because assessing this SAO would require repeated assessment over the span of days, weeks, or even months.

As mentioned earlier in this report, although some of the SAOs that were rated as important might not be suitable for the bar examination, the survey results could nevertheless be useful for other purposes, such as guiding the types of information collected as part of the character and fitness evaluations conducted by jurisdictions. Others involved in preparing and mentoring NLLs, including legal educators, employers, and bar associations, might also find the results of the SAO section of the survey helpful in their endeavors.

Results: Technology

Rating Scales and Sample Sizes

NLLs were asked to rate each of the 24 items included in the Technology section based on their own practice, while non-NLLs were instructed to rate each one based on the practice of NLLs with whom they have or had direct experience.

The rating scale for NLLs was as follows:

What level of proficiency do YOU need in using the technology in YOUR practice?

0 = Not applicable – ability to use this technology is not applicable/necessary to YOUR practice

1 = Low – limited ability to use common functions/features of this technology is necessary to YOUR practice

2 = Moderate – moderate ability to use the features/functions of this technology is necessary to YOUR practice

3 = High – broad, in-depth ability to use the features/functions of this technology is necessary to YOUR practice


The rating scale for non-NLLs was as follows:

What level of proficiency do newly licensed lawyers with whom you have direct experience need in using the technology in their practice?

0 = Not applicable – ability to use this technology is not applicable/necessary to their practice

1 = Low – limited ability to use common functions/features of this technology is necessary to their practice

2 = Moderate – moderate ability to use the features/functions of this technology is necessary to their practice

3 = High – broad, in-depth ability to use the features/functions of this technology is necessary to their practice

For this section of the survey, the sample sizes were 516 NLLs and 2,256 non-NLLs.

Main Findings

Table 12 shows the technology items with the highest and lowest proficiency ratings. It is quite reasonable for educators, clients, and employers to expect NLLs to be proficient at those applications listed in the left column of Table 12. Notably, the second highest ranked item is Research Software or Platforms, which is consistent with the “research” theme that emerged from the other sections of the survey.

Table 12. Technology with Highest and Lowest Mean Proficiency Ratingsa

Highest Mean Proficiency Ratings Lowest Mean Proficiency Ratings
Word Processing Software Web Content Management Software
Research Software or Platforms Data Analytics Software
Electronic Communication Software Language Translation Software
Desktop Publishing Software Financial Planning Software
Document Storage Software, Including Cloud Storage Tax Preparation Software
a The survey provided complete definitions for each technology item; these definitions appear in Table E.1 in Appendix E.


Mean ratings for all 24 technology items appear in Table E.1 in Appendix E. The range of mean ratings is summarized in Table 13. Mean ratings are based on those who indicated that a technology was applicable (i.e., ratings of 1, 2, or 3). The ratings of the two groups were highly correlated (r = 0.95). The similar means and high correlation support combining the two sets of mean ratings for data interpretation purposes; however, it should be recognized that NLLs provided slightly higher ratings overall.

Table 13. Summary of Technology Mean Proficiency Ratings from Table E.1

NLL or Non-NLL Min Max Mean
NLL 1.5 2.5 1.82
Non-NLL 1.3 2.5 1.71

Also important for interpretation purposes is the percentage of NLLs and non-NLLs who rated an item as requiring moderate or high proficiency, as shown in the last two columns of Table E.1. As with the previous sections of the survey, any survey item that was rated as requiring a moderate or high level of proficiency by at least 50% of respondents in either group should be considered relevant for entry-level practice. In most instances, the percentage values track with the mean proficiency ratings. That is, any item with a mean proficiency rating greater than 1.5 was also endorsed as requiring moderate or high proficiency by more than 50% of NLLs, non-NLLs, or both groups. Items with means in the range of 1.3 to 1.5 have mixed results in terms of the percentage of respondents who rated the item as requiring moderate or high proficiency. A notable feature of these data is that non-NLLs were consistently more likely than NLLs to judge an item as requiring users to have moderate or high proficiency. The largest differences in ratings between NLLs and non-NLLs occurred for Document Review Software (69% by NLLs and 87% by non-NLLs), Voice Recognition Software (29% by NLLs and 50% by non-NLLs), and Data Analytics Software (32% by NLLs and 52% by non-NLLs).

Demographic Subgroup Analyses

Responses to this section of the survey were not subjected to formal analyses comparing demographic subgroups.

Implications for Blueprint and Test Design

It is not expected that the test blueprint would include content that directly assesses knowledge and skills related to use of these technology items. However, knowing which technology items NLLs should be proficient in using in practice provides information about the types of testing platforms that examinees might be expected to use (with reasonable accommodations provided for examinees with disabilities). For example, the survey results provide support for the appropriateness of having examinees interact with electronic research software as part of completing a performance test.

Credibility and Generalizability of Findings

Best practices in practice analyses include validating survey responses. To do this, four sources of evidence were evaluated: sample representation, sample size and sampling error, consistency with expectations, and consistency with independent research.

Sample Representation

The Demographics section of this report summarizes analyses aimed at evaluating the extent to which the sample of survey respondents represented the population of interest (NLLs and those who have had direct experience with NLLs). The tables in Appendix A document that survey respondents represented nearly all jurisdictions, and that the proportion of respondents from each jurisdiction approximated the proportion of practicing lawyers in that jurisdiction based on the ABA Profile. Thus, the breadth of the sample contributes to the generalizability of findings. Comparisons of responses to the Tasks and Knowledge Areas sections by respondents from different regions of the country indicated little regional variation in task ratings, aside from the finding that respondents practicing in the western United States rated fewer tasks as relevant than those practicing in other regions. There was almost no regional variation across knowledge areas. This limited regional variation suggests that results are not overly dependent on any one region.

Sample Size and Sampling Error

A representative sample is of limited value if it is not sufficiently large. Adequate sample sizes are important to ensure the stability of the statistics reported in the findings. The margin of error, or standard error, is the most common index for documenting the precision associated with any statistic such as mean criticality or the percent of respondents who perform a task (%perform). Because the sample sizes for NLLs are smaller than those for non-NLLs, standard errors necessarily will be larger for NLLs than for non-NLLs.

Hundreds of standard errors were computed as part of the statistical analyses for this report. The Tasks section of the survey alone required the computation of 1,611 standard errors. This is because there were three statistics of interest (%perform, mean frequency, and mean criticality) for each of 179 tasks and for three groups of respondents (NLLs, non-NLLs, and both groups combined). Instead of reporting all standard errors, a sampling is documented in Table 14.

Table 14. Summary of Scale Properties and Standard Errors (Margins of Error)

Rating Scale Range of Scalea Range of Typical Mean Values Typical (Mean) Standard Error for NLLs and Non-NLLs
%Perform 0% to 100% 20% to 80% 2.2%, 1.1%
Task Frequency 1 to 4 2.0 to 3.0 0.06, 0.03
Task Criticality 1 to 3 1.8 to 2.4 0.05, 0.02
Knowledge Importance 1 to 3 1.3 to 2.0 0.02, 0.01
SAO Criticality 1 to 3 2.2 to 2.7 0.02, 0.01
Technology Proficiency 1 to 3 1.4 to 1.9 0.04, 0.02
a For most scales, 0 = not applicable (NA). Values of NA were excluded when computing means but were included when computing %perform.


The margins of error reported in Table 14 are not large. For values of %perform, the standard errors are just over ±2% for NLLs and ±1% for non-NLLs. The standard errors of the means for frequency, criticality, importance, and proficiency all are less than one-tenth of a scale point. If this study were to be replicated with new samples of NLLs and non-NLLs, mean values for the new study would be expected to be very similar to the values observed in the present study. This study is consistent with previous research documenting that job analysis ratings can be sufficiently reliable with two to three hundred respondents or fewer (Kane et al., 1995; Dierdorff & Wilson, 2003). In short, readers can be confident in the stability of the statistical indices reported here.
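The arithmetic behind these margins follows the standard formulas for the standard error of a proportion (for %perform) and of a mean (for the rating scales). In the sketch below, the sample sizes and standard deviation are illustrative rather than the study's exact figures.

```python
import math

def se_proportion(p, n):
    """Standard error of a proportion p observed in a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

def se_mean(sd, n):
    """Standard error of a mean, given sample standard deviation sd."""
    return sd / math.sqrt(n)

# A %perform value of 50% from roughly 500 NLL respondents:
se_p = se_proportion(0.50, 500)   # about 0.022, i.e. roughly +/- 2.2 points
# A criticality mean with an assumed sd of 0.7 from roughly 3,000 non-NLLs:
se_m = se_mean(0.7, 3000)         # well under 0.1 scale points
```

These back-of-the-envelope values are consistent in magnitude with the typical standard errors reported in Table 14.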

Consistency with Expectations

Another strategy for examining the validity of practice analysis data involves evaluating the extent to which the responses are consistent with informed expectations (Colton et al., 1991). The practice clusters in the present study provided an opportunity for such an evaluation. Consider a task from Table B.3 in Appendix B, “Draft or negotiate business agreements (e.g., purchase and sale, lease, licensing, non-disclosure, loan, security).” This task was performed by 92% of respondents from the Real Estate Law practice cluster, but by only 11% of respondents from the practice cluster labeled Appellate Law: Criminal. This difference is in line with expectations.

Similar examples can be found in Table C.2 in Appendix C, although the variation across knowledge areas is not as stark as the variation across tasks. As one example, the knowledge area “Administrative Law and Regulatory Practice” had mean importance ratings of only 1.3 from respondents in the Personal Injury cluster and the Family Law cluster; however, the mean importance from those in the Environmental Law cluster was 2.3. Findings like this do not prove the validity of survey responses, and one probably can find examples in the data that contradict expectations. However, these types of results do suggest that respondents generally were attentive and provided thoughtful responses as they completed the survey.

Consistency with Independent Research

NCBE commissioned a practice analysis in 2011–2012, which was completed by a different consulting firm than the one that conducted the present 2019 study. In addition, the State Bar of California completed a practice analysis in 2019 specific to practice in California. These two studies provide external criteria against which the present study can be compared. Although none of the studies was intended to be a replication of another, all had the goal of identifying the responsibilities and KSAOs required of NLLs.

Comparison to 2012 NCBE Study

The 2012 and 2019 NCBE studies both included sections for tasks, knowledge areas, and SAOs. Direct comparison of findings is hindered for various reasons (e.g., the lists were not identical across studies, a task from 2019 might have been classified as a skill in 2012, and there were differences in rating scales). Nonetheless, there is enough overlap to draw some parallels. Table 15 lists tasks from the General tasks on the 2019 survey that seem reasonably similar to tasks that appeared on the 2012 survey. The tasks are ranked from high to low in terms of mean criticality on the 2019 study. For each of these tasks, the mean importance rating from the 2012 study was also high, ranging from 2.70 to 3.49. These data indicate that the tasks viewed as important in 2012 were also viewed as critical in 2019, even though data were collected from different samples using different instruments and in different contexts.

Table 15. Tasks from the 2019 NCBE Practice Analysis That Are Similar to Tasks from the 2012 Practice Analysis

2012 Practice Analysis | 2019 Practice Analysis
Research secondary authorities. | Research secondary authorities.
Establish and maintain calendaring system. | Schedule meetings and other work activities.
Negotiate agreement. | Negotiate or facilitate resolution of client matter.
Research regulations and rules. | Research administrative regulations, rules, and decisional law.
Identify issues in case. | Identify issues in client matter, including legal, factual, or evidentiary issues.
Communications with client. | Inform client about status of client matter.
Communications with supervising attorney. | Consult with colleagues or third parties regarding client matters.
Research statutory authority. | Research statutory and constitutional authority.
Develop strategy for client matter. | Develop strategy for client matter.
Interview client and client representatives. | Interview client, client representatives, or witnesses to obtain information related to client matter.

The Knowledge Areas sections of the two surveys also lent themselves to a macro-level comparison. Table 16 lists the knowledge areas of interest. Of the 10 most highly ranked knowledge areas from the 2019 survey, eight also appeared on the 2012 survey, and seven of those eight were in the top 10 on both lists. Tort Law, which ranked tenth on the 2019 list, ranked eleventh on the 2012 list. Moreover, the top 10 knowledge areas on the 2019 survey all ranked at least within the top 13 in 2012. More extensive comparisons confirmed that, in general, knowledge areas judged to be important by 2019 respondents were also viewed as important by 2012 respondents.

Table 16. Comparison of Highest-Ranked Knowledge Areas Across Surveys

Knowledge Area (2019 Rank) | 2012 Rank
1. Rules of Professional Responsibility and Ethical Obligations | 8
2. Civil Procedure | 1
3. Contract Law | 10
4. Rules of Evidence | 3
5. Legal Research Methodology | 5
6. Statutes of Limitations | 6
7. Local Court Rules | N/A
8. Statutory Interpretation Principles | 7
9. Sources of Law (Decisional, Statutory, Code, Regulatory, Rules) | N/A
10. Tort Law | 11

Comparison to the 2019 California Practice Analysis

The California Practice Analysis (CAPA) survey included 23 tasks that were similar or very similar to tasks appearing on the 2019 NCBE practice analysis survey. Although the rating scales for the two studies were not identical, it was possible to use a linear transformation to rescale the NCBE ratings to approximate what those ratings would have been on the CAPA rating scales.11

Overall, frequency ratings were very similar across the two studies, but there were some notable differences in criticality ratings. Across the 23 similar tasks, the mean absolute difference in frequency ratings was 0.26, while the mean absolute difference in criticality ratings was 0.46. Table 17 compares a sample of tasks from the two surveys; it shows striking similarity across all of the frequency ratings and most of the criticality ratings displayed (the second and third entries on the list show larger differences in criticality across the two surveys). The closer agreement on frequency is consistent with researchers' observations that frequency ratings are more objective than criticality ratings (Morgeson et al., 2004; Raymond, 2016).
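The linear rescaling and mean-absolute-difference comparison described above can be sketched in a few lines of Python. The scale endpoints and ratings below are illustrative assumptions; the report does not reproduce the full NCBE or CAPA rating scales.

```python
def rescale(x, src_min, src_max, dst_min, dst_max):
    """Linearly map a mean rating from the source scale onto the destination scale."""
    return dst_min + (x - src_min) * (dst_max - dst_min) / (src_max - src_min)

def mean_abs_diff(a, b):
    """Mean absolute difference between two paired lists of mean ratings."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical example: NCBE ratings on a 1-4 scale rescaled to a 1-5 CAPA-style scale.
ncbe_raw = [3.25, 2.05, 2.95]
ncbe_rescaled = [rescale(x, 1, 4, 1, 5) for x in ncbe_raw]
capa = [3.5, 2.4, 3.9]

# Summary statistic analogous to the 0.26 / 0.46 values reported above.
gap = mean_abs_diff(ncbe_rescaled, capa)
```

As footnote 11 cautions, a linear map cannot undo ceiling effects: ratings compressed at the top of a shorter scale remain compressed after rescaling.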

Table 17. Approximate Mean Frequency and Criticality Ratings for Similar Tasks that Appeared on Both NCBE and CAPA Surveys

NCBE Task | Rescaled Frequency | Rescaled Criticality | CAPA Task | Frequency | Criticality
Advise client about dispute resolution options. | 3.2 | 3.4 | Evaluate options for alternative dispute resolution. | 3.0 | 3.5
Draft or respond to post-judgment motions. | 2.4 | 3.1 | Prepare post-trial motions. | 2.4 | 4.2
Establish and maintain client trust account. | 3.2 | 3.2 | Manage client trust accounts. | 3.2 | 4.3
Interview client, client representatives, or witnesses to obtain information related to client matter. | 3.5 | 4.0 | Interview the client. | 3.2 | 4.1
Prepare or respond to written discovery or other requests for information. | 3.3 | 4.0 | Develop discovery plan. | 3.3 | 3.9
Research case law. | 3.9 | 4.4 | Research laws and precedents. | 4.0 | 4.3
Research court rules. | 3.6 | 4.0 | Research local rules. | 3.8 | 3.9

The CAPA survey also included a list of knowledge areas (topics) that were rated in terms of frequency and criticality. Whereas the 2019 NCBE practice analysis survey listed 77 knowledge areas, the California survey included two levels of topics where 121 specific topics were nested under 21 broad knowledge areas (e.g., Offer and Acceptance nested under Contracts).

Table 18 lists the top-ranked knowledge areas for the two surveys. The rankings are based on mean importance ratings for the NCBE survey and mean criticality ratings for the CAPA survey. Of the 10 most important knowledge areas on the NCBE survey, five also appeared in the top 10 on the CAPA survey. Criminal Law and Constitutional Law were among the top 10 on the CAPA survey, while on the NCBE survey those two areas ranked fifteenth and thirteenth, respectively; they would have been in the NCBE top 10 had the NCBE survey not included the areas ranked 5 through 9 in Table 18, which did not appear on the CAPA survey.

Table 18. Highest-Ranked Knowledge Areas for NCBE and CAPA Surveys

Knowledge Area (NCBE Rank) | CAPA Rank
1. Rules of Professional Responsibility and Ethical Obligations | 1
2. Civil Procedure | 4
3. Contract Law | 8
4. Rules of Evidence | 3
5. Legal Research Methodology | N/A
6. Statutes of Limitations | N/A
7. Local Court Rules | N/A
8. Statutory Interpretation Principles | N/A
9. Sources of Law (Decisional, Statutory, Code, Regulatory, Rules) | N/A
10. Tort Law | 7

Summary

This section of the report evaluated the quality of the survey data by examining sample size and representativeness, the margins of error associated with key statistics, and the internal and external consistency of key findings. Although a rigorous comparison to external surveys was not possible because of differences in survey instruments and study designs, the consistency of findings across different projects strengthens the validity argument in support of the present findings.

Summary of Findings and Next Steps

Summary of Findings

The demographic results indicate that respondents included a broad range of newly licensed and experienced lawyers who worked in a variety of practice settings and who represented a total of 56 jurisdictions (many were licensed in multiple jurisdictions). The largest numbers of respondents practiced in New York (17.5%), California (14.8%), Pennsylvania (8.9%), Minnesota (5.7%), and Ohio (5.6%). The fewest respondents practiced in New Hampshire, Rhode Island, South Dakota, and the Pacific and Caribbean islands. Although some jurisdictions were overrepresented (e.g., California and New York) while others were underrepresented (e.g., Florida and Illinois), the breadth of the sample supports the generalizability of findings.

Respondents also represented a wide variety of practice areas. The most common practice clusters were Criminal Law, Business Law, and Personal Injury; about one-third of respondents worked in one of these areas. Another one-fifth worked in practice clusters such as Family Law, Business Litigation, Real Estate Law, and Wills, Estates, and Trusts—the types of services likely to be needed by the typical consumer and areas common among lawyers in small or solo practices.

The results of the Tasks section indicate that nearly three-fourths of the 179 job activities were performed by a majority (more than 50%) of respondents, and most job activities applied to multiple areas of practice. Most tasks were judged moderately to highly critical by those who performed them. Respondents at small firms and solo practitioners performed a wider variety of tasks than those employed in other settings. The data supporting these findings are summarized in Table B.1 in Appendix B. Notably, several of the most common and critical tasks involve research, issue identification, and analysis; for example, four of the top 20 tasks included the word “research.” Implications of these highly rated job activities will be considered during Phase 3.

The Knowledge Areas section asked respondents to judge the importance of 77 areas of legal knowledge for practice by all NLLs, regardless of practice area. About two-fifths of the areas were judged moderately to highly important. The knowledge areas toward the top of the list included fundamental knowledge domains such as Rules of Professional Responsibility and Ethical Obligations, Rules of Evidence, and Civil Procedure, as well as common practice areas such as Contract Law, Criminal Law, and Torts. All of these areas are covered on the current bar examination.12 Legal Research Methodology also surfaced as a highly rated knowledge area. Employment Law, as well as Administrative Law and Regulatory Practice, were among the areas rated moderately or highly important by at least half of respondents. The analyses of knowledge areas by practice cluster were consistent with expectations: ratings were high where high values should be seen (e.g., Contracts was rated as highly important by lawyers who worked in Business Law) and low where low values would be expected (e.g., Trial Advocacy was rated low by those who practiced in Securities). The ratings of importance will provide useful guidance in determining topics for inclusion on the test blueprint and the emphasis to give those topics.

The SAOs section consisted of 36 personal attributes ranging from Critical/Analytical Thinking to Collegiality to Interviewing/Questioning skills. Mean criticality ratings were uniformly high for 32 SAOs, and all but one SAO were judged to be moderately or highly critical for entry-level practice by at least 50% of respondents. Some of the more critical SAOs included Reading Comprehension, Critical Thinking, Integrity, and Conscientiousness. Instructing/Mentoring was the only SAO rated lower than moderately critical by at least half of respondents. While those SAOs most directly relevant to legal practice (e.g., Fact Gathering) should be given serious consideration for inclusion on the bar examination, other SAOs might be useful to others involved in preparing and mentoring NLLs, such as legal educators, employers, and bar associations.

Finally, the Technology section asked respondents to indicate the level of proficiency required of NLLs with respect to 24 technology items. The findings for this section do not have direct implications for the test blueprint process. However, the findings do provide information about the types of testing platforms that examinees might be expected to use in the bar examination (e.g., electronic research software). Additionally, the finding that Research Software or Platforms was very highly rated (mean proficiency = 2.4) is consistent with findings from other sections of the survey indicating that research-related tasks are performed frequently, and that those skills are important.

Next Steps

Based on the systematic process used to develop the practice analysis survey and to gather information from a representative sample of lawyers, stakeholders can be confident that the 2019 NCBE practice analysis results provide meaningful guidance for the TTF’s comprehensive study. That guidance informs a critical part of the TTF’s research plan, as it reveals the job requirements of NLLs, including the tasks that are most critical and most frequently performed, as well as the knowledge, skills, abilities, other characteristics, and technology items important to performing those tasks. That is not the end of the TTF’s inquiry, however. The TTF must now take the rich data gathered during Phase 2, coupled with the invaluable input from the Phase 1 stakeholder listening sessions, and determine during Phase 3 what content should be tested on the bar exam and how that content should be tested.

Phase 3 will be undertaken systematically and thoughtfully. An independent research consulting firm will facilitate the work of a blueprint development committee composed of subject matter experts from around the country with a variety of practice and demographic backgrounds. The blueprint development committee will recommend content that should be tested on the bar exam, based on the results of the practice analysis, and guided by the bar exam’s purpose to determine that those who secure a general license to practice law have demonstrated minimum competence with respect to the knowledge and skills most NLLs should possess. The independent research consulting firm will then facilitate the work of a test design committee composed of external stakeholders such as bar administrators, bar examiners, justices, and legal educators. The test design committee will focus on how best to assess the content identified by the blueprint development committee, taking into consideration input received from stakeholders during Phase 1, as well as cost, feasibility, and best practices in testing. The collective recommendations from the blueprint development committee and the test design committee will be validated through a linkage exercise comparing those recommendations to the results of the practice analysis.

Importantly, throughout Phase 3, the TTF will continue to seek input from NCBE’s Technical Advisory Panel and more broadly from the stakeholder community before settling upon blueprint and design recommendations to be submitted to NCBE’s Board of Trustees at the end of 2020. The TTF will also explore opportunities to collaborate with stakeholders whose unique roles in the preparation of NLLs might benefit from the valuable research gathered by the TTF during its stakeholder listening sessions and through the practice analysis.

  1. The TTF’s Phase 1 report, Your Voice: Stakeholder Thoughts About the Bar Exam, is available on the TTF website at https://testingtaskforce.org/wp-content/uploads/2019/12/FINAL-Listening-Session-Executive-Summary-with-Appendices-2.pdf. The Phase 1 report details the rich body of opinions that the TTF heard from more than 400 stakeholders who participated in the listening sessions.
  2. “Newly licensed lawyers” are defined in the practice analysis survey as lawyers who have been licensed for three years or less.
  3. A list of the references cited is provided at the end of the report.
  4. Although this practice analysis was undertaken in support of the bar examination, practice analyses provide the basis for a variety of human resource functions ranging from establishing training requirements to the design of performance evaluation instruments.
  5. The survey included 35 practice areas plus “Other.” Those who selected Other were asked to specify (type in) the practice area. The practice area of Litigation was removed while the survey was live after several respondents commented that they could not enter a percent of time value for Litigation because it is difficult to disentangle it from the underlying practice areas of the matters being litigated.
  6. Although Litigation was removed from the practice area options while the survey was open, it is included here to ensure a complete sample of respondents. Its inclusion in the cluster analysis had minimal influence on the results.
  7. The complete matrix consisting of %perform indices for 179 tasks in 25 practice areas is too large to include in the format of this report.
  8. If criticality ratings are used for informing content weights, then mean criticality values in Table B.1 could be multiplied by %perform from the same table to prevent tasks performed by very few respondents, but nonetheless important for those few respondents, from having excessive weight.
  9. The contents of this table should not be interpreted as implying that the knowledge areas with relatively low importance ratings are not important for any newly licensed lawyer. Any one of these knowledge areas might be important for lawyers working within a particular context, setting, or practice area.
  10. “Positive bias” describes the tendency of self-presentational concerns to affect responses individuals give to surveys, including practice analyses.
  11. Although the transformation allows for more direct comparison of results, it may not account for potential ceiling effects; because the NCBE scale had fewer scale points, it is possible that the ratings at the upper end of the NCBE scale were suppressed a bit relative to the CAPA means. Differences in means across the surveys may be at least partially attributable to ceiling effects or scale suppression.
  12. Knowledge of ethical rules and professional responsibility is currently assessed separately from the bar examination (on the MPRE). Passing the MPRE is a requirement for licensure in most jurisdictions.
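The weighting adjustment suggested in footnote 8, multiplying a task's mean criticality by the proportion of respondents who perform it, can be sketched as follows; the task names and values are hypothetical, not taken from Table B.1.

```python
def content_weight(mean_criticality, pct_perform):
    """Weight a task by criticality times the proportion of respondents performing it,
    so that tasks critical for only a few respondents do not receive excessive weight."""
    return mean_criticality * pct_perform

# Hypothetical tasks: (mean criticality, proportion of respondents who perform the task)
tasks = {
    "Research case law": (4.4, 0.95),
    "Specialized niche task": (4.5, 0.05),  # critical, but performed by few respondents
}
weights = {name: content_weight(c, p) for name, (c, p) in tasks.items()}
```

Under this scheme, a niche task with high criticality but low %perform contributes far less to the content weights than a widely performed task of similar criticality.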

References

American Bar Association (2019). ABA Profile of the Legal Profession 2019. https://www.americanbar.org/content/dam/aba/images/news/2019/08/ProfileOfProfession-total-hi.pdf

American Educational Research Association, American Psychological Association & National Council on Measurement in Education (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

Colton, D.A., Kane, M.T., Kingsbury, C. & Estes, C.A. (1991). A strategy for examining the validity of job analysis data. Journal of Educational Measurement, 28, 283–294.

Dierdorff, E.C. & Wilson, M.A. (2003). A meta-analysis of job analysis reliability. Journal of Applied Psychology, 88, 635–646.

Fleishman, E.A. & Quaintance, M.K. (1984). Taxonomies of Human Performance: The Description of Human Tasks. New York: Academic Press.

Garwood, M.K., Anderson, L.E. & Greengart, B.J. (2006). Determining job groups: Application of hierarchical agglomerative cluster analysis in different job analysis situations. Personnel Psychology, 44(4), 743–762.

Gerkman, A. & Cornett, L. (2016). Foundations for Practice: The Whole Lawyer and the Character Quotient. Institute for the Advancement of the American Legal System. http://iaals.du.edu/foundations/reports/whole-lawyer-and-character-quotient

Hughes, G.L. & Prien, E.P. (1989). Evaluation of task and job skill linkage judgments used to develop test specifications. Personnel Psychology, 42, 283–292.

Kane, M.T. (1982). The validity of licensure examinations. American Psychologist, 37, 911–918.

Kane, M.T., Kingsbury, C., Colton, D. & Estes, C. (1989). Combining data on criticality and frequency in developing plans for licensure and certification examinations. Journal of Educational Measurement, 26, 17–27.

Kane, M.T., Miller, T., Trine, M., Becker, C. & Carson, K. (1995). The precision of practice analysis results in the professions. Evaluation and the Health Professions, 18, 29–50.

Knapp, J. & Knapp, L. (2007). Knapp Certification Industry Scan. Princeton, NJ: Knapp & Associates International.

Kyllonen, P. (2016). Designing tests to measure personal attributes and noncognitive skills. In S. Lane, M. Raymond & T. Haladyna (eds.), Handbook of Test Development, 2nd ed. (pp. 190–211). New York, NY: Routledge.

Morgeson, F.P., Delaney-Klinger, K., Mayfield, M.S., Ferrara, P. & Campion, M.A. (2004). Self-presentation processes in job analysis: a field experiment investigating inflation in abilities, tasks and competencies. Journal of Applied Psychology, 89, 674–686.

Nettles, S. & Hellrung, J. (2012). A Study of the Newly Licensed Lawyer. National Conference of Bar Examiners.

Raymond, M.R. (2016). Job analysis, practice analysis and the content of credentialing tests. In S. Lane, M.R. Raymond & T.M. Haladyna (eds.), Handbook of Test Development, 2nd ed. New York, NY: Routledge.

Raymond, M.R. & Luecht, R.L. (2013). Licensure and certification testing. In K.F. Geisinger (ed.), APA Handbook of Testing and Assessment in Psychology (pp. 391–414). Washington, DC: American Psychological Association.

Sanchez, J.I. & Fraser, S.L. (1992). On the choice of scales for task analysis. Journal of Applied Psychology, 77, 545–553.

Shultz, M.M. & Zedeck, S. (2011). Predicting lawyer effectiveness: Broadening the basis for law school admissions decisions. Law & Social Inquiry, Journal of the American Bar Foundation, 36(3), 620–661.

Tannenbaum, R.J. & Wesley, S. (1993). Agreement between committee-based and field-based job analyses: A study in the context of licensure testing. Journal of Applied Psychology, 78, 975–980.

Appendices