The NALP Foundation has released a new report assessing associate performance evaluation practices at 106 leading law firms. This comprehensive study details key areas including content, process, mechanics, timeframes, training, and external vendor utilization, along with insights on best practices and challenges.
The full report not only analyzes participating firms’ data in aggregate, but also segments it by firm size and geographic location.
Notable findings include:
- Evaluation components: Firms were quite consistent in the information used to compile associate performance evaluations: all firms included supervising attorneys’ qualitative comments, and the vast majority also incorporated quantitative metrics (96%) and associate self-evaluations (93%), while two thirds (66%) used summaries/compilations. Many firms also factored in firm citizenship (93%) and pro bono work (75%).
- Feedback attribution: Nearly half (48%) of firms included qualitative feedback with attribution in evaluations, while 39% provided only anonymous feedback. The remaining 14% of firms varied their approach based on the situation or context.
- Data comparisons: A sizable number of firms (30%) reported they did not compare performance data across any cohorts. Among firms that did, comparisons were most frequently made across associates firm-wide (50%), by practice area/practice group (43%), and by seniority level (junior, mid-level, senior).
- External vs. internal solutions: Approximately three quarters (76%) of firms said they use an external or third-party solution/vendor to collect evaluation feedback; roughly one third (31%) used internally developed electronic surveys or solutions, and 7% used both. However, firms using external vendors reported only moderate overall satisfaction with them.
- Limited use of AI: Notably, the majority (69%) of firms reported they had not yet integrated AI into their performance evaluation process, although many said they were exploring how they might do so in the future. Where AI was used, it was most commonly applied to generating performance summaries (18%) and analyzing written feedback (11%).
The price of the final report is based on membership in The NALP Foundation (a separate entity from NALP), not on participation in the study. Only law firms and law schools that support The NALP Foundation on an annual basis receive the discounted Member Price.
