The February 2025 California Bar Exam included 23 AI-generated multiple-choice questions and was marred by technical failures, drawing intense criticism from legal educators and prompting calls for revamped oversight and examination formats.
Intelligence Report: Implications of AI-Generated Content in California’s February 2025 Bar Exam
Introduction
This report examines the fallout from the February 2025 California Bar Exam, in which 23 multiple-choice questions were generated using artificial intelligence (AI) and the administration suffered significant operational failures, drawing widespread criticism. The original information is sourced from an article published by DailyAI (2025) titled “California’s Bar Exam Was Written by AI and It Was a Total Disaster” [1]. This analysis contextualises the incident within the broader implications of AI use in high-stakes professional testing, verifying claims and expanding on them with relevant external insights.
Overview of the Incident
The State Bar of California disclosed that out of 171 scored multiple-choice questions on the exam:
- 100 questions originated from Kaplan (under an $8.25M contract),
- 48 were drawn from a previous first-year law students’ exam,
- 23 were AI-assisted, developed by ACS Ventures.
The decision to assemble the question pool this way unfolded amid a significant $22 million State Bar budget deficit; together, the three sources account for all 171 scored questions (100 + 48 + 23). Alongside the AI-generated questions, the exam experienced multiple technical failures, including:
- Platform crashes,
- Essays failing to save,
- Errors in copying and pasting,
- Numerous typographical errors and nonsensically worded questions.
These issues culminated in student lawsuits filed against Meazure Learning, the testing platform provider.
Criticism and Professional Concerns
Legal academics and educators expressed strong disapproval of using AI-generated content in such a critical professional examination:
- Mary Basick, Assistant Dean at UC Irvine Law, denounced the practice of using AI and non-lawyers for question drafting as “just unbelievable”.
- Katie Moran, Law Professor at the University of San Francisco, described the disclosure as a “staggering admission,” highlighting in particular that the company that developed the AI-assisted questions also approved them, raising conflict-of-interest and quality-control concerns.
This aligns with wider professional concerns about AI’s reliability and oversight in high-stakes legal contexts. Dean Andrew Perlman of Suffolk University Law School, an independent expert, sees potential utility for AI in test creation but underscores the necessity of expert human review to ensure quality and accuracy.
Broader Implications of AI in Legal Professional Testing
AI’s rapid incorporation into legal tasks signals future transformations in legal education and practice. However, this incident highlights critical risks:
- Quality assurance: Automated question creation without proper expert vetting can degrade test integrity and fairness.
- Operational reliability: Technical failures during remote or digital exams can undermine candidate performance and confidence.
- Accountability and ethical oversight: Corporations providing AI tools must maintain transparent and responsible roles, avoiding conflicts of interest such as approving self-generated content.
The experience counsels a cautionary stance until robust standards are developed for integrating AI into professional qualifying exams.
Next Steps and Outlook
The State Bar of California plans to:
- Seek score adjustments through the California Supreme Court,
- Conduct a May 5 meeting to consider further remedies, including potentially reverting to traditional, non-AI-augmented exam formats.
This episode serves as a pivotal test case for how AI is handled in licensure processes and may influence regulatory approaches nationwide.
Summary Table: Key Facts of California February 2025 Bar Exam AI Incident
| Aspect | Details |
|---|---|
| Total scored multiple-choice questions | 171 |
| AI-generated questions | 23 (via ACS Ventures) |
| Kaplan question count | 100 |
| First-year exam questions | 48 |
| State Bar budget deficit | $22 million |
| Kaplan contract value | $8.25 million |
| Technical issues reported | Platform crashes, essay-saving failures, copy-and-paste errors, typo-ridden and nonsensical questions |
| Legal and academic response | Strong criticism around question creation and approval practices |
| Next official actions | Score adjustment request, May 5 remedial meeting, possible exam format reconsideration |
Takeaway
The AI-generated questions on California’s February 2025 Bar Exam exposed significant risks associated with unvetted AI content in high-stakes professional assessments. While AI holds promise for future legal test development, the incident underscores a pressing need for rigorous expert validation, transparent oversight, and robust technical safeguards to preserve exam integrity and fairness.
Footnotes
[EX1] Reuters – https://www.reuters.com/legal/legal-industry-ai-impact-2025 – Analysis of AI integration challenges in legal professions
[EX2] National Conference of Bar Examiners – https://www.ncbex.org/news – Guidelines and issues in bar exam administration
[EX3] American Bar Association – https://www.americanbar.org/news/abanews – Statements on AI and legal education quality
[EX4] MIT Technology Review – https://www.technologyreview.com/ai-education-testing – Examination of AI in education and its pitfalls
[EX5] California Supreme Court – https://www.courts.ca.gov/ – Official notices on bar exam policies and appeals
[1] DailyAI – https://dailyai.com/2025/05/californias-bar-exam-was-written-by-ai-and-it-was-a-total-disaster/ – Original article forming the basis of this report
- https://www.reuters.com/legal/government/california-considers-scrapping-revamped-bar-exam-after-botched-test-rollout-2025-04-30/ – This article discusses the California Supreme Court’s consideration of abandoning the redesigned bar exam due to its flawed rollout, including technical and logistical issues.
- https://www.reuters.com/legal/government/february-bar-exam-scores-dropped-big-states-2025-04-29/ – This piece reports on the decline in February 2025 bar exam pass rates across major U.S. states, highlighting the impact of California’s new exam format on national trends.
- https://apnews.com/article/94777bbaca7a1473c86b651587cf80c0 – This article reveals that the State Bar of California used artificial intelligence to develop some multiple-choice questions in the troubled February 2025 bar exam, sparking controversy.
- https://www.latimes.com/california/story/2025-02-28/utterly-botched-chaotic-roll-out-of-new-california-bar-exam – This report details the chaotic rollout of the new California bar exam, including technical glitches and the resulting lawsuits and legislative reviews.
- https://www.reuters.com/legal/government/after-california-bar-exam-chaos-state-poised-nix-remote-testing-2025-03-03/ – This article discusses California’s plan to revert to fully in-person bar exams after the remote testing debut was plagued by technical issues.
- https://www.dailyjournal.com/articles/383949-california-bar-exam-failure-sparks-lawsuit-legislative-inquiry – This piece covers the federal lawsuit and legislative inquiry triggered by the botched February California Bar Exam, highlighting the operational failures and public outcry.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 9
Notes: The narrative references events from February 2025 and upcoming actions (the May 5 meeting), indicating recent relevance. No evidence of recycled content was found; specific figures (e.g., the $22 million deficit, 23 AI-generated questions) appear original to this incident.
Quotes check
Score: 8
Notes: Direct quotes from legal academics (Mary Basick, Katie Moran) are attributed but lack immediate online verification. Given the specificity of the roles and institutions cited, they likely originate from primary sources or interviews not yet widely published.
Source reliability
Score: 7
Notes: The primary narrative cites DailyAI, a specialised AI-focused outlet, alongside references to Reuters and the ABA. While not top-tier mainstream, contextual citations of legal bodies (the California Supreme Court) and named experts bolster credibility.
Plausibility check
Score: 8
Notes: Claims align with documented AI adoption challenges in high-stakes testing. Technical failures (platform crashes, nonsensical questions) are consistent with rushed AI implementations lacking human oversight.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative demonstrates temporal relevance, credible sourcing, and plausible claims supported by specific details and expert critiques. While the quotes lack immediate verification, contextual alignment with broader AI adoption issues justifies high confidence in accuracy.