Emma C., et al v. Eastin, et al
Filing
2598
SECOND ORDER RE STATE'S COMPLIANCE AT PHASE 2. Signed by Judge Vince Chhabria on 7/16/2020. (vclc1S, COURT STAFF) (Filed on 7/16/2020)
UNITED STATES DISTRICT COURT
NORTHERN DISTRICT OF CALIFORNIA
EMMA C., et al.,
Case No. 96-cv-04179-VC
Plaintiffs,
SECOND ORDER RE STATE’S
COMPLIANCE AT PHASE 2
v.
TONY THURMOND, et al.,
Defendants.
This case is proceeding in four phases to determine whether the California Department of
Education is in compliance with its monitoring and enforcement obligations under the
Individuals with Disabilities Education Act. Each phase involves evaluation of the legal
adequacy of a different aspect of the state’s operations: (1) data collection; (2) methods for
selecting districts for intervention; (3) monitoring and enforcement activities; and (4) written
policies for implementing the systems developed in the first three phases. See Order Re State’s
Obligations Under Consent Decree, Dkt. No. 2387. The first phase ended in the fall of 2018,
when the Court determined that the state’s process for collecting data on school district
performance was legally adequate on the whole (although a couple of issues remain to be worked
out). See Emma C. v. Torlakson, No. 96-CV-04179-VC, 2018 WL 3956310 (N.D. Cal. Aug. 17,
2018).
Hearings on Phase 2 occurred in the spring of 2019. The Court ruled that the state’s
methods for selecting districts for monitoring were legally adequate in some respects but not in
others. See Emma C. v. Thurmond, 393 F. Supp. 3d 885 (N.D. Cal. 2019). Areas of
noncompliance included the state’s failure to conduct any assessment of the efforts by districts to
mediate disputes with parents, its failure to develop rational or sufficiently rigorous methods for
selecting districts for further monitoring, and its minimal assessment of preschool performance.
Most importantly, the state had no system at all for reviewing small school districts, which
everyone acknowledged was legally unacceptable. Id. at 897–905. The Court was thus unable to
deem the state in compliance with its legal obligations at Phase 2.
The state went back to the drawing board on these areas of noncompliance, and further
hearings were held in June and July 2020. The state has adequately addressed the areas of
noncompliance (with the exception of one, which everyone agrees should be put off for now).
The state now has an adequate process for monitoring districts’ use of mediation. It has
simplified its intervention scheme, such that there are now only two types of monitoring
interventions: intensive monitoring (a form of comprehensive review), and targeted monitoring
(to address particular weaknesses). It has adopted a new, rational method for selecting districts
for intensive monitoring. It has proposed two reasonable models for assessing the performance of
small districts. And finally, it has found a way to include preschools in these plans, making
adjustments where necessary.
The outstanding concern is the various targets the state has set for measuring compliance
by districts. The state had expected to propose a more rigorous set of targets along with its other
submissions this year, which would have coincided with a separate (though largely equivalent)
obligation to submit targets to the federal Department of Education. But last fall, the federal
government postponed the date for submitting new targets by one year. At the state’s option, the
Phase 2 evaluation of targets was also postponed so that the state could submit its targets to the
Court and to the federal government along concurrent timelines. The one exception is the “child
find” target, which is designed to measure how well districts are locating children within their
boundaries who are entitled to special education services. The state has proposed and briefed a new,
legally adequate child find target, and it will be discussed here.
Given all this progress, the case may move to Phase 3. That phase requires evaluation of
how the state actually intervenes with districts that have been identified as potentially
problematic—what the parties refer to as the state’s monitoring and enforcement activities. In
light of the pandemic, it’s unclear whether this evaluation should take place during the 2020–21
academic year or be delayed until enforcement activities can return to normal. A case
management conference is scheduled for 9:00 a.m. on September 2, 2020, to discuss this
question. If the parties believe that Phase 3 should be delayed, they should be ready to identify
other possible ways to make progress in this case in the meantime.
***
This ruling assumes that the reader is familiar with the Phase 2 ruling from the summer of
2019 and picks up where that ruling left off, addressing the areas where the state previously
failed to establish compliance:
Mediation Practices. The IDEA requires districts to establish mediation procedures for
resolving disputes with parents about the provision of special education services. 20 U.S.C.
§ 1415(e). The alternative (if mediation fails or is not attempted) is a formal due-process hearing
before an administrative law judge. Mediation is voluntary for both parents and districts, but
states must nonetheless monitor districts’ mediation practices. 34 C.F.R. §§ 300.506(b)(4),
300.600(d)(2).
The state has now proposed a legally adequate system for monitoring these practices. The
system uses four “filters” to select districts for assistance: the first three identify districts that
have shown a preference for skipping mediation; the fourth operates to exclude some districts
that would otherwise have been selected:
1. Include districts that filed a request for a due process hearing without mediation (and
thus have chosen to skip mediation entirely for at least one dispute).
2. Include districts where a complaint was resolved by an administrative law judge and
the state cannot confirm that the district had shown a willingness to mediate.
3. Include districts where the parents asked for “mediation only” and there was no
mediation because a party declined to participate.
4. Remove districts that themselves had filed any mediation-only requests that year.
The state will provide targeted technical assistance to districts selected by this process.
The assistance is designed to encourage the use of mediation and increase districts’
understanding of options for non-adversarial dispute resolution.
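For illustration only, the four-filter selection process could be sketched in code as follows. The field names and data shapes here are assumptions for the sketch, not drawn from the record:

```python
# Illustrative sketch of the four-filter mediation selection process.
# All dictionary field names are hypothetical; only the filter logic
# itself tracks the order's description.

def select_for_mediation_assistance(districts):
    """Return the set of districts selected for targeted technical assistance."""
    selected = set()
    for d in districts:
        # Filter 1: filed a due process hearing request without mediation
        # (chose to skip mediation entirely for at least one dispute).
        if d["hearing_requests_without_mediation"] > 0:
            selected.add(d["name"])
        # Filter 2: a complaint was resolved by an administrative law judge
        # and the state cannot confirm a willingness to mediate.
        if d["alj_resolutions_unconfirmed_mediation"] > 0:
            selected.add(d["name"])
        # Filter 3: parents asked for "mediation only" but there was no
        # mediation because a party declined to participate.
        if d["mediation_only_declined"] > 0:
            selected.add(d["name"])
    for d in districts:
        # Filter 4: remove districts that themselves filed any
        # mediation-only requests that year.
        if d["district_mediation_only_requests"] > 0:
            selected.discard(d["name"])
    return selected
```

The first three filters are inclusive (any one suffices), while the fourth operates only to remove districts that the first three would otherwise have selected.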
The plaintiffs object to the use of the fourth “filter” to remove any district that has filed
even one mediation-only request that year. The concern is that districts might begin filing one
mediation-only request each year to escape review of their otherwise problematic mediation practices.
But the plaintiffs could not supply any evidence-based reason to believe that districts would
manipulate the process in this way.
The monitor objects that “targeted technical assistance”—the mediation intervention the
state proposes—is not itself “monitoring.” This, according to the monitor, puts the state out of
compliance with the IDEA, which identifies mediation as a priority area for state monitoring. 20
U.S.C. § 1416(a)(3); 34 C.F.R. § 300.600(d)(2). But the filtering system described above is
indeed “monitoring”—it is a system for reviewing information from every school district to
determine which ones need intervention in the areas of mediation. See generally Emma C. v.
Torlakson, No. 96-CV-04179-VC, 2018 WL 3956310 (N.D. Cal. Aug. 17, 2018) (describing the
state’s data collection and analysis as a type of monitoring). On this record—and given where the
issue of mediation ranks in importance compared to issues like academic performance, child
find, and suspension rates—this form of monitoring, and the process for selecting districts for
intervention in this area, is legally adequate.1
Intensive Monitoring. The state has proposed a revised system for identifying which
districts will be selected for intensive monitoring (formerly called “comprehensive review”). The
prior selection method was irrational because it systematically selected better-performing
districts for comprehensive review while missing worse-performing ones.
The state’s new system uses a percentile method to select districts for intensive
monitoring. The state has come up with a set of six performance indicators for K-12 schools and
four for preschools.2 Each district gets a score of 1–10 on each indicator, according to its position
relative to other districts. The scores are added together for each district and divided by the total
number of possible points. The districts are ranked according to this final percentage, and the
bottom 10% are selected for intensive monitoring.
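The scoring arithmetic can be illustrated with a short sketch. The indicator values and helper names below are made up for illustration; only the decile scoring, the division by total possible points, and the bottom-10% cut come from the state's described method:

```python
# Illustrative sketch of the percentile method: each district gets a 1-10
# score per indicator according to its rank relative to other districts,
# scores are summed and divided by the maximum possible points, and the
# bottom 10% of districts are selected for intensive monitoring.

def decile_scores(values_by_district):
    """Assign each district a 1-10 score on one indicator by rank."""
    ordered = sorted(values_by_district, key=values_by_district.get)
    n = len(ordered)
    return {d: 1 + (10 * i) // n for i, d in enumerate(ordered)}

def composite_percentages(indicators):
    """indicators: one {district: value} dict per performance indicator.
    Returns each district's total points divided by maximum points."""
    totals = {}
    for indicator in indicators:
        for district, score in decile_scores(indicator).items():
            totals[district] = totals.get(district, 0) + score
    max_points = 10 * len(indicators)
    return {d: t / max_points for d, t in totals.items()}

def bottom_ten_percent(percentages):
    """Rank districts by composite percentage; lowest 10% are selected."""
    ranked = sorted(percentages, key=percentages.get)
    cutoff = max(1, len(ranked) // 10)
    return ranked[:cutoff]
```

With six K-12 indicators, a district scoring in the lowest decile on every indicator would have a composite of 6/60, placing it at the bottom of the ranking.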
1
Part of the confusion here may stem from the way people in this field tend to use the word
“monitoring.” As has been discussed throughout this case, the state conducts different types of
“monitoring” activities. Under the state’s recently revised system, once a district has been
identified as potentially deficient in a specific area, it qualifies for “targeted monitoring.” If a
district has been identified as potentially deficient across the board (or across a good part of the
board) it qualifies for “intensive monitoring.” It may be more precise to think of this as “targeted
intervention” and “intensive intervention.” Although there is no doubt an element of mere
monitoring to what the state is doing once districts have been selected for these types of
interventions, presumably in many instances the state will be going further and directing the
district to make changes. The fact that the interventions are called “targeted monitoring” and
“intensive monitoring” should not distract from the fact that districts are selected for these
interventions as a result of a type of monitoring—namely, the collection and review of data from
those districts.
2
The K-12 Indicators are: English proficiency, math proficiency, suspension rate, chronic
absenteeism rate, rate of students in regular classes more than 80% of the day, and rate of
students in separate classrooms more than 40% of the day. The preschool indicators are: average
rate of child outcomes (based on assessments of social skills, knowledge, etc.), rates of
suspension and expulsion, rate of children in regular classes most of the time, and rate of
children in separate schools or placements most of the time.
This is a rational way to select districts for intensive monitoring, and one that enables the
state to satisfy its obligations under the federal statute. The plaintiffs and the monitor appear to
be in agreement on this point.3 The only real source of disagreement is the first of the four
preschool indicators: many districts (especially small ones) are missing “outcomes” data for their
preschools. The monitor has proposed an alternative method that excludes outcomes data for
preschools, and this method results in a somewhat different composition of districts selected for
intensive monitoring. There is room for reasonable debate about which method is better, so the
Court cannot deem the state’s proposed method out of compliance with federal law.
Small districts and local educational agencies (LEAs).4 The state is required to monitor
all entities responsible for providing education to students with disabilities, but those with small
numbers of disabled students pose a tough statistical problem. If a particular district or charter
school has five students with disabilities, its performance cannot be reliably gauged by its
percentage of success on any given indicator, as a large district’s can. In its 2019 Phase 2
submission, the state excluded small LEAs from its assessment analysis entirely but
acknowledged the need to monitor them in some form. At the June 2020 hearings, the state put
forth a proposal for assessing performance by small LEAs but the Court, the monitor, and the
plaintiffs all expressed concern that the proposal was irrational. In response to these concerns,
3
Of course, whether the state actually satisfies its obligations depends on whether it performs the
intensive intervention adequately.
4
Under the relevant federal statute and regulations, the subjects of the state’s monitoring
activities are called “local educational agencies.” Because “local educational agencies” are
frequently the same as school districts, the Court has been using “districts” as shorthand instead
of the acronym “LEA.” See Emma C. v. Torlakson, No. 96-CV-04179-VC, 2018 WL 3956310,
at *1 n.1 (N.D. Cal. Aug. 17, 2018). But it is useful to note for the purposes of this section that
not all local educational agencies are districts: an individual charter school, for example, is not a
district in the way we usually think of districts, but is treated as a local educational agency by the
state’s monitoring system. Acknowledging the tradeoff between accessibility and precision, the
Court will use “LEAs” instead of “districts” when discussing small local educational agencies.
the state has now proposed two new options for assessing small-LEA performance and selecting
particular ones for intervention—a primary option and a backup option that could be adopted in
the alternative. Both are legally adequate, so the policymakers have the authority to adopt either
one.
The Primary Option: Randomized Grouping. Under this approach, each year, the state
would randomly divide the small LEAs into groups (with around 700–1000 disabled students in
each group). Then, the groups would be analyzed using the same methods applied to large
districts: for intensive monitoring, the bottom 10% of the groups of small LEAs would be
flagged for further review; for targeted monitoring, groups that fall below specified targets would
also be flagged. The state would proceed to select particular LEAs from within a flagged group
for monitoring. The idea is that this aggregation would help to mitigate the statistical problem of
having to analyze LEAs with very few disabled students. And the randomization would change
the composition of groups year after year: a poorly performing LEA might happen to be grouped
with well-performing LEAs one year and escape review, but it cannot count on the same luck in
future years.
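The randomization step can be sketched briefly. The group-size range of roughly 700–1000 disabled students comes from the state's proposal; everything else (the function name, the closing of a group once it nears a target size) is an assumption for illustration:

```python
import random

def randomize_groups(leas, target_size=850, seed=None):
    """Randomly partition small LEAs into groups of roughly 700-1000
    disabled students each. `leas` maps an LEA name to its count of
    disabled students. A fresh shuffle each year changes the composition
    of the groups, so a poorly performing LEA cannot count on being
    grouped with well-performing ones twice."""
    rng = random.Random(seed)
    names = list(leas)
    rng.shuffle(names)
    groups, current, count = [], [], 0
    for name in names:
        current.append(name)
        count += leas[name]
        if count >= target_size:  # close the group once near the target
            groups.append(current)
            current, count = [], 0
    if current:                   # leftover LEAs form a final group
        groups.append(current)
    return groups
```

Each group, once formed, would be scored and ranked by the same percentile method applied to large districts.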
According to this model, the state would approach intensive monitoring by first
comparing the groups to one another according to the percentile approach described above. The
bottom 10% of groups would undergo further review. The state has devised a set of secondary
standards to determine which LEAs from within flagged groups would be selected for further
monitoring. If any LEA from within a flagged group triggers half or more of these standards, that
LEA would receive intensive monitoring.5
5
The K-12 secondary standards for selecting small LEAs for intensive monitoring are: no
students proficient in English, no students proficient in math, no students in regular classes more
than 80% of the day, any students in separate placements, any suspended students, and any
students chronically absent. The preschool secondary standards are: any suspensions or
expulsions, any children in separate placements, and no children in regular classroom
placements.
For targeted monitoring, the state will first apply the same set of targets that are used for
large districts to flag groups of small LEAs. Then, an LEA from within a flagged group will be
picked for targeted monitoring if it triggers one of some 17 secondary standards that the state has
proposed. See Dkt. No. 2553, at 10.
This is a reasonable starting point for selecting small LEAs for monitoring. The plaintiffs
and the monitor have some concerns about the secondary standards for picking individual LEAs
from flagged groups for targeted monitoring. But there is no alternative set of secondary
standards that is obviously better than the ones the state has proposed, and any flaws that
currently exist will not significantly undermine the effectiveness of the state’s monitoring efforts.
More importantly, the state is only just embarking on the project of assessing small LEAs, so if it
decides to adopt this approach it will no doubt make adjustments along the way.6
The Secondary Option: Cyclical Monitoring. Under this approach, the state would review
each small LEA over the course of a three-year cycle. There are about 1500 small LEAs, and the
state would select about 500 each year. The first year, one-third of the LEAs would be randomly
picked for monitoring. The second year, the state would monitor half of those that remain, and
the rest in the third year. Then the cycle would begin anew. Thus, no LEA would be
6
During the most recent round of hearings, the policymakers flagged a concern about dedicating
too high a percentage of resources to monitoring and enforcement in small LEAs. They noted
that small LEAs tend to perform better for disabled students than large LEAs, and that only a
small percentage of disabled students in California are situated in small LEAs. This concern—
that the state shouldn’t divert too many resources from monitoring and enforcement in larger
LEAs—is valid, and nothing in this ruling should be interpreted as limiting the state’s flexibility
in this area. In other words, this ruling addresses the general methodologies proposed by the state
for monitoring small LEAs, but it does not speak to the particular percentage of small LEAs that
the state should be selecting for further monitoring and enforcement, nor does it speak to how
intensive the intervention should be.
“immunized” from review for more than two years, and each LEA would be subject to some sort
of review over the course of three years.
If this model were adopted, the state would replace the dichotomous scheme of “targeted”
and “intensive” monitoring with a more qualitative, flexible system of intervention with small
LEAs. Each year, the state would first “review student files, performance information, and
policies and procedures” for all LEAs selected. It would then require struggling LEAs to
complete an improvement plan or take specific corrective actions.
The monitor and the plaintiffs were both intrigued by this proposal, which seems at first
glance like a better method, and a more reliable one for regularly reaching every small LEA in
the state. It would seem to allow the state to conserve resources by calibrating its level of
intervention for any given small LEA more precisely. And it has the additional advantage of
sidestepping the concerns about secondary standards in the randomized-grouping model, because
no secondary standards would be necessary here. If the state adopts this model, the details of the
review process will be explored at Phase 3, whose focus is the effectiveness of the state’s
monitoring and enforcement activities. For now, suffice it to say that this alternative model as
described by the state is another legally adequate method for selecting small LEAs for further
monitoring and intervention.
Child Find. The IDEA requires each school district to identify, locate, and evaluate all
disabled children within its boundaries, even those who are homeless or who attend private
schools. 20 U.S.C. § 1412(a)(3)(A). This requirement is known as “child find,” and the child
find indicator is the percentage of students in a district who have been identified as disabled.
The theory is that the true percentage of disabled students should be relatively constant across
districts, so a district with a low percentage
has probably missed more disabled students than a district with a higher percentage.7
California performs slightly worse than the national average: for the 2017–18 school
year, the overall percentage in California was 12.01%, while the national rate was 13.35%. Last
year, the state set the target to be two standard deviations from the California mean, such that a
large district with an identification rate below 3.6% would be selected for child-find monitoring.
The cutoff rate of 3.6% was unreasonably low. This year, the state set its target to be only 1.5
standard deviations from the mean, resulting in a much-improved cutoff rate of 7.23%. This new
cutoff rate is legally adequate as applied to large districts.
Child find as applied to small LEAs presents a particular problem—one that could be
avoided if the state adopts its alternative cyclical approach to monitoring small LEAs. If the state
uses the randomized grouping method, a small LEA will be selected for targeted child-find
monitoring only if: (i) it belongs to a group with an overall child-find rate below 7.23%, and (ii)
its own rate is below 7.23%. This system is somewhat arbitrary: there is not any particular reason
to think that an LEA’s membership in a poorly performing group means that its child-find
problem is any worse than other LEAs with similar rates. But taking into account the state’s
resource constraints, the Court cannot say that this way of approaching the child-find problem in
small LEAs is outside the range of reasonable options.
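The two-condition test can be made concrete with a sketch. The 7.23% cutoff comes from the order; the use of a simple (unweighted) average as the group's overall rate is an illustrative assumption:

```python
CHILD_FIND_CUTOFF = 7.23  # percent; the target stated in the order

def child_find_flags(groups, rates):
    """Under the randomized-grouping method, a small LEA is flagged for
    targeted child-find monitoring only if (i) its group's overall rate
    is below the cutoff AND (ii) its own rate is below the cutoff.
    `groups` is a list of lists of LEA names; `rates` maps each LEA to
    its own identification rate."""
    flagged = []
    for group in groups:
        group_rate = sum(rates[lea] for lea in group) / len(group)
        if group_rate < CHILD_FIND_CUTOFF:
            flagged.extend(
                lea for lea in group if rates[lea] < CHILD_FIND_CUTOFF
            )
    return flagged
```

The sketch also makes the noted arbitrariness visible: an LEA with a low rate escapes review entirely whenever it happens to land in a group whose overall rate clears the cutoff.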
The monitor has raised a further objection to this approach to child find in small LEAs. A
small LEA is defined according to the number of disabled students in it. So on the margins, an
LEA that does a particularly bad job of identifying disabled students can find itself on the
small-LEA side of the line by virtue of its poor child-find practices. And an LEA subject to the small-
7
Of course, it could also be that a particular district has a lower percentage of disabled students
for reasons unrelated to child find.
LEA process is less likely to be selected for monitoring than a large district. (A large district is
selected if its rate is below 7.23%, and a small LEA is selected only if it has a rate below 7.23%
and also belongs to a group whose overall rate is below 7.23%.) The monitor proposes that for
child-find purposes, small LEAs be defined according to their total number of students and not
according to the number of disabled students. But this concern is not central to the state’s ability
to monitor small LEAs in this area. And the state has a strong interest in the clarity and
simplicity of the policies it adopts. Requiring the state to adopt different small-LEA definitions
for different purposes would be too much of an intrusion into the state’s policy choices.
Moreover, the state’s cyclical approach to small-LEA analysis would require that the LEAs be
defined the same way for all purposes, and the Court will not impose a requirement that would
prevent the state from selecting that sensible and well-balanced approach.
***
Accordingly, the state has now successfully passed Phase 2 of this case, subject only to
analysis of the new targets it will adopt in the coming year. At the upcoming case management
conference, the Court and the parties will discuss whether to move to Phase 3 in the upcoming
academic year, or to work productively on other matters (such as the new targets and the couple
of lingering issues from Phase 1) in the hope that the state’s Phase 3 activities will go back to
normal (or something close to it) in the 2021–22 academic year.
IT IS SO ORDERED.
Dated: July 16, 2020
______________________________________
VINCE CHHABRIA
United States District Judge