Department of Health and Human Services
DEPARTMENTAL APPEALS BOARD
Appellate Division
SUBJECT: District of Columbia Department of Human Services
Docket No. 89-229
Decision No. 1228
DATE: February 14, 1991
DECISION
The District of Columbia Department of Human Services (DC) appealed a funding reduction imposed under section 403(h) of the Social Security Act (Act) by the Office of Child Support Enforcement (OCSE). Based on audits of DC's child support enforcement and paternity establishment program, OCSE determined that DC did not comply substantially with the requirements of Title IV-D of the Act. OCSE proposed a one percent reduction of the amount otherwise payable to DC for Aid to Families with Dependent Children (AFDC) during the period July 1, 1988 through June 30, 1989 (approximately a $517,498 reduction).
DC challenged the regulations that governed Title IV-D audits, OCSE's failure to supply DC with requested technical assistance, and the imposition of a funding reduction where substantial improvement was evidenced. DC also challenged the statistical sampling methodology used by OCSE as a basis for its findings (including the auditors' classification of certain cases in the audit sample as requiring a particular type of action), and OCSE's conclusions concerning whether DC took such action.
For the reasons stated below, we uphold OCSE's decision to reduce by one percent DC's AFDC funding for the one-year period beginning July 1, 1988. Specifically, we conclude that--
o OCSE properly applied its interpretation of the statutory term "substantial compliance" to the time periods at issue here;

o OCSE reasonably interpreted the statutory requirement for "substantial compliance" to mean that a state must be taking action to provide basic child support services (required under the Act) in at least 75% of the cases requiring those services;

o OCSE's failure to supply DC with the specific technical assistance DC requested does not render OCSE's decision invalid;

o DC's improving performance does not provide a basis for the Board to overturn OCSE's decision;

o the statistical sampling evidence submitted here reliably shows that DC failed to meet OCSE audit criteria related to locating absent parents and establishing paternity in Title IV-D cases;

o DC's mass submission of names to the Federal Parent Locator Service does not support a finding that DC took action to locate absent parents in 75% of the cases requiring such action;

o DC failed to document that its mass mailing of Christmas cards to absent parents should be counted as an action in sample cases requiring services to establish paternity; and

o while the record indicates that DC did meet an audit criterion related to establishing support obligations, this is not sufficient to show substantial compliance.
Statutory and regulatory provisions
The District of Columbia is considered a state for purposes of Title IV of the Social Security Act. Section 1101(a)(1) of the Act. Each state that operates an AFDC program under Title IV-A of the Act is required to have a child support enforcement and paternity establishment program under Title IV-D of the Act. Section 402(a)(27) of the Act. The Title IV-D program has been in existence since July 1975. OCSE has the responsibility for auditing state Title IV-D programs, pursuant to section 452(a)(4) of the Act, and evaluating whether the actual operation of such programs conforms to statutory and regulatory requirements. Following adoption of Title IV-D, the participating states were given 18 months by Congress -- until December 31, 1976 -- to establish and begin operating their programs before compliance audits actually began. Under the applicable statute, a state was subject to a five percent reduction of its Title IV-A funds if the audit found that the state was not in compliance. Congress, however, continuously extended the initial moratorium on imposition of this funding reduction, so that no reduction was ever imposed during the first eight years of the program's operation, although OCSE did continue its annual audits.
On August 16, 1984, Congress adopted the Child Support Enforcement Amendments of 1984, section 9 of Public Law 98-378 (the 1984 Amendments). As amended, section 403(h)(1) of the Act provides that--

     if a State's program operated under Part D is found as a result of a review conducted under section 452(a)(4) not to have complied substantially with the requirements of such part for any quarter beginning after September 30, 1983, and the Secretary determines that the State's program is not complying substantially with such requirements . . ., the amounts otherwise payable to the State under this part [A] for such quarter and each subsequent quarter, prior to the first quarter throughout which the State program is found to be in substantial compliance with such requirements, shall be reduced . . . .

(Emphasis added.)
The amended section then provides for graduated reductions, starting with a reduction of "not less than one nor more than two percent" and increasing to a maximum of five percent with each consecutive finding that a state is not complying substantially with Title IV-D requirements.
The 1984 Amendments provided for the continuation of compliance audits, which could in appropriate cases be scheduled as infrequently as once every three years. Rather than directing immediate reduction of funding for a state which failed an audit, the Amendments provided that a reduction could be suspended while the state was given an opportunity to bring itself into compliance through a corrective action plan approved by OCSE. Section 403(h)(2)(A)-(C) of the Act, as amended. If a follow-up review of a state's performance showed that the state still did not achieve substantial compliance, a reduction would be imposed. Section 403(h)(2)(B)(iii) of the Act.
Section 9(c) of the 1984 Amendments provides that they "shall be effective on and after October 1, 1983."
OCSE proposed regulations implementing the Amendments on October 5, 1984, 49 Fed. Reg. 39488 (1984), and issued final regulations on October 1, 1985, 50 Fed. Reg. 40120 (1985). (We refer to these regulations as the "1985 regulations.") The 1985 regulations amended parts, but not all, of the audit regulations at 45 C.F.R. Part 305. Section 305.20(a), as amended by the 1985 regulations, provided that, for the fiscal year (FY) 1984 audit period, certain listed audit criteria (related primarily to administrative or fiscal matters) "must be met." This section also provided that the procedures required by nine audit criteria "must be used in 75 percent of the cases reviewed for each criterion . . . ." These criteria relate to performance of basic services provided under a IV-D state plan and are the criteria at issue in this appeal. All the service-related audit criteria are based on sections of 45 C.F.R. Part 305 which (with minor exceptions not relevant here) were originally published in 1976, with minor amendments in 1982. (We refer to these provisions, as amended in 1982, as the "existing regulations" since they were in effect during FY 1984.)
Thus, under the 1985 regulations, substantial compliance for FY 1984 audits was measured by audit criteria from the existing regulations, but a state had to be providing the required services in 75% of the cases requiring them. In follow-up reviews after a corrective action period, OCSE would examine only the audit criteria that the state had previously failed or had complied with only marginally (that is, in 75 to 80% of the cases reviewed for that criterion). 45 C.F.R. 305.10(b) and 305.99, as amended. 1/
Background
OCSE's audit for FY 1984 (October 1, 1983 through September 30, 1984) resulted in a September 4, 1987 notice to DC that it had been found to have failed to comply "substantially with the requirements of Title IV-D of the Act" in the following areas: (1) state parent locator service; (2) establishing paternity; (3) establishing child support obligations; and (4) recovery of direct payments. OCSE Exhibit (Ex.) A. 2/ OCSE found that DC took action in 45 of 290 sample cases requiring location of missing parents (16% of sampled cases); that DC took action in 29 of 67 sample cases requiring establishment of paternity efforts (43% of sampled cases); that DC took action in only 22 of 46 sample cases requiring establishment of a support obligation (48% of sampled cases); and that DC had not developed procedures to implement its determination that the Title IV-A agency recover child support payments received directly by an AFDC recipient. Id. 3/
Rather than appealing OCSE's findings, DC opted to propose a corrective action plan that was accepted by OCSE on December 4, 1987, and the funding reduction was suspended. DC Ex. 2 at 2. The corrective action period was originally scheduled to end on January 3, 1988, but OCSE subsequently approved an extension until September 3, 1988. The follow-up review by OCSE of DC's performance for calendar year 1988 resulted in the September 29, 1989 notice of substantial noncompliance that is the subject of this appeal. OCSE found that DC had achieved substantial compliance with the recovery of direct payments audit criterion, but had failed the three remaining audit criteria cited in the program results audit. Specifically, OCSE found that DC had taken appropriate action in locate cases in only 49 of 109 sample cases (45% of those cases); had taken action in paternity cases in 37 of 87 sample cases (43% of those cases); and had taken action in support cases in only 26 of 42 sample cases (62% of those cases). See 9/29/89 Letter (attached to DC's Notice of Appeal).
Analysis
I. DC's challenges to the 1985 regulations are without merit.
DC challenged the 1985 regulations that OCSE used in concluding that the State was not in substantial compliance. Specifically, DC argued that--
o the regulations are impermissibly retroactive under Bowen v. Georgetown University Hospital, 488 U.S. 204 (1988) (hereafter Georgetown), since OCSE lacked express statutory authorization to apply these regulations retroactively;

o the statutory provision setting an effective date of October 1, 1983, was the result of an "obvious mistake" and thus should not be given effect;

o the regulations have retroactive effect in violation of the APA, which defines a "rule" as having "future effect" (see 5 U.S.C. 551(4) and Georgetown (Scalia, J., concurring));

o the 75% standard in the regulations had no empirical basis and therefore was established in an arbitrary and capricious manner under Maryland v. Mathews, 415 F. Supp. 1206 (D.D.C. 1976); and

o the regulations were invalid because they did not include a definition of "violations of a technical nature," based on section 403(h)(3), as amended.
OCSE disputed DC's position, but also pointed out that the Board is bound by applicable laws and regulations under 45 C.F.R. 16.14. The regulations at issue were "effective" on the date of final publication (October 1, 1985). However, section 305.20(a), which sets out the 75% standard for service-related audit criteria, states that it is to be applied "[f]or the fiscal year 1984 audit period." The preamble to the regulations confirmed that OCSE intended to apply this section to FY 1984 audits, based on the October 1, 1983 effective date of the 1984 Amendments. 50 Fed. Reg. at 40126, 40131-40132, and 40138.
We are, of course, bound by the Department's regulations, even if invalid under a constitutional analysis, if those regulations are applicable. While some of the issues here clearly would be controlled by 45 C.F.R. 16.14, DC's arguments also raise interrelated questions of applicability. We do not need to sort out these issues precisely, however, since we conclude that all of DC's arguments concerning the regulations are completely without merit. 4/ Our reasons are:
o Section 403(h)(1) of the Act, as amended, requires reductions for states not found to be in substantial compliance in audits "for any quarter beginning after September 30, 1983," and Congress explicitly made the 1984 Amendments effective on October 1, 1983. The circumstances here are therefore distinguishable from those in Georgetown, where the agency published cost-limit rules for Medicare providers in 1984 and attempted to apply the rules to 1981 costs, in the absence of any statutory authority to do so. Here, the statute expressly made the change in the standard retroactive.
o In support of its argument that the statutory language setting a 1984 effective date was an "obvious mistake," the State argued that legislative history of the 1984 Amendments shows that Congress intended that OCSE's implementing regulations would have prospective effect only. The legislative history on which the State relied, however, does not refer to OCSE's implementation of the substantial compliance standard; instead, it refers to the expectation by Congress that OCSE would issue new regulations focusing on whether states were effectively attaining program objectives (in addition to meeting the existing state plan requirements). S. REP. No. 387, 98th Cong., 2d Sess. 32-33 (1984).
o The effect of the 1985 regulations here is also significantly different from the effect of the cost-limit rules considered in Georgetown. There, Medicare providers were entitled to a specific level of reimbursement under the regulations in effect in 1981, and the 1984 rules would have retroactively reduced that level. Here, the AFDC funding reduction applies to periods after the 1985 regulations were published.
o The audit criteria at issue here were in the existing regulations, had been in effect without substantial change since 1976, and were based on IV-D state plan requirements. The 75% standard is more lenient than the standard in the existing regulations, which provided that the State must "meet" the criteria. Even if the State is correct that OCSE could not reasonably have implemented this by requiring action in 100% of the cases, the existing regulations clearly contemplated a compliance level greater than 75%. 5/
o More important, the 1985 regulations afforded DC a corrective action period. DC had notice of the 75% standard prior to this period, and over two years to adjust its administrative practices before the follow-up review period.
o The regulations here merely interpret the statutory term "substantial compliance." Obviously, the range of compliance levels OCSE could adopt is limited by this term, particularly when it is read together with section 403(h)(3) of the Act (which permits a finding of substantial compliance only when any noncompliance is of a technical nature). A level lower than 75% would have been subject to challenge as inconsistent with statutory intent.
o Even in the absence of the 1985 regulations, we would reject DC's position that it should be found to meet the substantial compliance standard. The record here supports a finding that DC did not achieve substantial compliance under any reasonable reading of that term. This Department clearly may retroactively adjudicate a state's entitlement to AFDC funds under the applicable statutory standard, without violating the APA (even as interpreted in the concurring opinion in Georgetown).
o Since the 75% standard reasonably interprets the statutory term "substantial compliance," the circumstances here are distinguishable from those considered in Maryland, where the court found that regulations setting "tolerance levels" for AFDC eligibility determination errors were not reasonably related to the purposes of the statute. Moreover, unlike the "tolerance levels" in Maryland, the 75% standard here had an empirical basis in past performance levels measured through OCSE's audits. While audit results from FYs 1980 and 1981 showed that some states were not yet achieving 75% levels, other states were achieving 100% levels at that time (see OCSE Ex. H), and OCSE could reasonably expect all states to be achieving 75% levels by FY 1984. 6/
o Finally, we reject DC's arguments based on section 403(h)(3) of the Act. That section permits OCSE to find substantial compliance only where any noncompliance is "of a technical nature not adversely affecting the performance of the child support program." OCSE implemented this provision through its regulations, determining that failure to meet the critical service-related audit criteria in its regulations is not simply technical since the required activities are essential to an effective program. 50 Fed. Reg. at 40130. We find that interpretation to be reasonable as applied here since DC's failures under a service-related criterion would adversely affect program performance; DC took no action whatsoever to provide basic child support services in a significant number of cases. 7/
Thus, we conclude that application of the 1985 regulations here was clearly proper, and that those regulations are consistent with the 1984 Amendments.
II. DC's technical assistance and improved performance arguments are not grounds for overturning the findings of the follow-up review.
DC raised two additional arguments about OCSE's authority to impose a funding reduction in this case. First, DC maintained that OCSE's failure to give DC requested technical assistance under section 452 of the Act renders the follow-up review (and thus the reduction based upon it) invalid. 8/ Second, DC contended in its final brief in this case that the reduction should not be imposed because DC's performance has greatly improved in the past few years. We reject each of these contentions.
In March 1988, DC solicited technical assistance from OCSE in the form of a request that OCSE provide one of its Philadelphia regional office employees to assist DC's program director in the administration and management of DC's program for three to four days per week for a six-month period. DC agreed to pay per diem and travel expenses; OCSE was expected to pay the detailed employee's salary. See DC Ex. 32. DC characterized OCSE's refusal to oblige it as a failure to fulfill OCSE's duty under section 452(a)(7) of the Act to provide technical assistance. DC maintained, "This assistance, if provided, would have most likely enabled appellant to comply with the standards of the follow-up review." DC App. Br. at 26-27.
OCSE responded that DC held independent responsibility for operating its program and having sufficient staff to administer the functions for which it is responsible under its state plan. OCSE Response Br. at 56-57, citing 45 C.F.R. 302.12, 303.20. OCSE asserted that its refusal to supply an OCSE employee was not a failure to supply technical assistance, but a management decision as to the appropriate disposition of its limited resources. In addition, OCSE noted that its representatives are available to advise state program officials through correspondence and by telephone. Id. at 57-58.
We agree with OCSE that its duty to supply technical assistance does not extend to a duty to loan its employees to provide on-site advice to troubled programs. Guam Dept. of Public Health and Social Services, DAB No. 1050 (1989). DC did not deny that OCSE representatives were available to advise DC by telephone or letter. Moreover, it is questionable whether the loan of a single employee, no matter how able, could have boosted DC's performance to passing on all criteria. The DC program director, whom the detailed employee was to assist, acknowledged at the hearing that when she arrived in February 1988, "at that particular point the program was so bad from everything I had read and heard that I knew it would be years before we'd be able to pass an audit." Tr. at 94. We therefore reject DC's contention that the follow-up review is invalid due to OCSE's refusal to provide DC the precise type of technical assistance which DC requested.
DC's other argument for summarily overturning OCSE's determination was that DC's performance had improved remarkably in the last few years. DC contended that OCSE would probably agree with this assessment. According to DC, "[t]he nature of audits and corrective actions is to correct deficiencies and not to be punitive in nature." DC Post-Hearing Br. at 6. Since this argument was made in DC's post-hearing brief, which was filed simultaneously with OCSE's last brief, OCSE did not respond to this argument.
Even assuming that DC's program has improved in general, the question before us here is whether the program had improved sufficiently by the follow-up review period to bring DC into substantial compliance with program requirements. If DC did not bring its program into substantial compliance, despite notice and an extended corrective action period, the Act requires that a reduction in AFDC funding be imposed. While the goal of the provision is to improve performance, Congress provided a specific incentive in the form of a reduction of AFDC funding, to apply where performance has not been improved sufficiently to bring a state into substantial compliance. This is not punitive, but is well within congressional authority to establish conditions for federal funding. Neither the statute nor the regulations provide any exception based on mere improvement in performance, where, as here, that improvement is not sufficient to bring a state into substantial compliance.
In sum, we conclude that neither OCSE's failure to detail an employee to DC nor DC's improved performance is a sufficient basis for the Board to overturn the funding reduction.
III. DC's statistical sampling arguments do not provide a basis for overturning the funding reduction.
We next turn to DC's arguments about OCSE's statistical sampling methodology. We first provide background about the methodology used. We then explain why we either fully reject DC's arguments or find that they do not raise any material question about OCSE's conclusion that DC failed to achieve substantial compliance.
A. Background
In both the program results audit for FY 1984 and the follow-up review, OCSE used statistical sampling techniques to determine whether DC met the 75% standard for the applicable service-related audit criteria. OCSE drew a systematic random sample of DC's Title IV-D cases that were open during each relevant time period. 9/ OCSE first examined each sample case to determine what action, if any, was required in the case (in other words, what audit criteria applied). For example, if paternity for a child whose absent parent's location was known had not been legally established, the case would be classified as a "paternity" case, requiring review to see if DC took any action, as required by 45 C.F.R. 305.24(c). OCSE then examined the case files and other records to determine whether DC had, in fact, taken any required action during the relevant time period, finding either "action" or "no action" for each sample case reviewed. For example, in the follow-up review, OCSE found that DC took action in only 37 of the 87 cases which required action to establish paternity.
OCSE generally used its sample findings to calculate an "efficiency rate" and an "efficiency range" for each criterion. 10/ The "efficiency rate" is the single most likely estimate of the percentage of cases requiring review under an audit criterion which were "action" cases. The "efficiency range" was to be equivalent to what is called the "95% confidence interval." A confidence interval is a statistician's calculation of the range of values within which the statistician can say with a specified degree of certainty (here, 95%) the true value occurs.
Under OCSE's audit procedures, a criterion was considered "unmet" if the "high end" of the "efficiency range" (also called the "upper limit" of the confidence interval) was less than 75%, and only "marginally met" if the "high end" was 75 to 80%. It is undisputed that, to determine the upper limit of a 95% confidence interval, you first calculate the "standard error" associated with a particular sample. As we discuss below, the parties' experts agreed that the standard error could then be multiplied by 1.96, with the product added and subtracted from the efficiency rate, to produce a "two-sided" confidence interval, or by 1.645, with the product added to the efficiency rate, to produce a "one-sided" interval. See Tr. at 41-42 (DC's expert), 87-88 (OCSE's expert). In any event, by using the "high end" figure, rather than the efficiency rate, OCSE was essentially assuming the risk associated with potential sampling error.
In other words, not only could OCSE say with at least 95% certainty whether DC was meeting each criterion, it could also say that its approach erred on the side of passing DC where a complete review might well have identified a failure.
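To illustrate the arithmetic the experts described, the following sketch computes the efficiency rate and the one-sided upper limit for the follow-up review's paternity results (37 "action" cases of 87 reviewed). It uses the simple binomial standard error and omits any finite-population correction the auditors may have applied, so the exact figures are illustrative only.

```python
import math

def upper_limit(actions, reviewed, z=1.645):
    """One-sided 95% upper confidence limit for an efficiency rate.

    Uses the simple binomial standard error sqrt(p(1-p)/n); any
    finite-population correction the auditors applied is omitted.
    """
    p = actions / reviewed                  # efficiency rate
    se = math.sqrt(p * (1 - p) / reviewed)  # standard error
    return p + z * se                       # "high end" of the range

# Follow-up review paternity figures from the decision: 37 of 87 cases.
rate = 37 / 87
high_end = upper_limit(37, 87)
print(f"efficiency rate = {rate:.0%}, upper limit = {high_end:.0%}")
```

Even with the benefit of the upper limit, the result (roughly 51%) falls well short of the 75% standard, consistent with the criterion being scored "unmet."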
In the program results audit, OCSE examined all audit criteria listed in section 305.20(a) of the 1985 regulations. In the follow-up review, OCSE examined only those four audit criteria which were "unmet" in DC's program results audit, finding that it could say with at least 95% confidence that DC still failed to meet three of those criteria.
DC did not challenge the use of sampling in general, nor the use of systematic sampling under the circumstances here. DC did, however, present affidavits and testimony from a statistical sampling expert raising four questions about the statistical methodology used in the follow-up review: (1) why the auditors did not choose a sampling interval to assure a sample size of 350 cases; (2) why the sample did not include cases from the last quarter of 1988; (3) whether the auditors classified cases according to the proper review category; and (4) whether OCSE correctly calculated the confidence levels. 11/
B. The sample size
In his initial affidavit, DC's expert pointed out that OCSE's guide for follow-up reviews describes the determination of sample size for follow-up reviews. DC Ex. 30 at 3, citing DC Ex. 29 at 12-18. He said that by the OCSE calculation this should have included 350 reviews, but only 346 cases were actually drawn by the auditors. He stated his opinion that, considering cases that were excluded from the sample or "disallowed" by the auditors, a minimum of approximately 600 cases should have been drawn to assure a sample size of 350 reviews. DC Ex. 30 at 4. 12/
OCSE said that its recommended sample size was well within the range suggested by professionals. See OCSE Ex. R. OCSE explained that its auditors had mistakenly used 149 rather than 147 as the sampling interval, which is why they drew only 346 cases rather than 350. Tr. at 77-78. OCSE's sampling expert testified that the difference between 147 and 149 was not statistically significant and that its use favored DC because the smaller sample size increased the standard error. Tr. at 78-79.
At the hearing, DC's expert acknowledged that the smaller sample size was better for DC. Tr. at 17.
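The experts' point can be checked directly: holding the underlying rate fixed, the binomial standard error sqrt(p(1-p)/n) grows as the sample shrinks, so the 346-case sample yields a slightly wider confidence interval than the 350-case sample the guide called for. The sketch below uses an assumed rate of 45% purely for illustration; the direction of the comparison holds for any rate.

```python
import math

def standard_error(p, n):
    """Binomial standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

p = 0.45                               # illustrative rate only
se_drawn = standard_error(p, 346)      # sample actually drawn (interval 149)
se_planned = standard_error(p, 350)    # sample the guide called for (interval 147)

# The smaller sample yields the larger standard error, hence the wider
# interval and the higher upper limit -- an error in DC's favor.
print(se_drawn > se_planned)
```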
We conclude that the auditors' mistake in using a sampling interval which yielded a smaller sample size than the follow-up review guide suggested is immaterial here. It did not prejudice DC, but to the contrary favored DC since the confidence interval associated with the smaller sample is wider than for a larger sample (and OCSE bore the burden of any potential sampling error by using the upper limit of the confidence interval to measure states' performance). Thus, the sample size is not a basis for invalidating the sample results.
C. The lack of cases from the last quarter of 1988
DC's expert also asserted that the sample should have included cases opened during the last quarter of 1988 since OCSE's follow-up review guide specifies that the population of cases from which the sample is to be selected consists of calendar year 1988 cases. DC Ex. 30 at 4, citing DC Ex. 29 at 12. DC's expert stated his opinion that this created a bias against cases which became active in the last 90 days of the follow-up review period.
OCSE responded that it had followed its guide when selecting the sample. The guide provides that the follow-up review sample "will generally be selected from the first month of the first full quarter after the expiration of the corrective action period, . . . ." DC Ex. 29 at 15. OCSE asserted that the last quarter of 1988 was not omitted in a biased fashion since DC was given credit for actions in that quarter. OCSE's audit director explained that cases opened in the last quarter were not included in the sample "because our audit experience shows that for many cases opened during the last quarter of an audit period, sufficient time would not have been available for the cases to have been worked by the District, resulting in either higher exclusion rates or a finding of no action for most of these cases." OCSE Ex. I at 7.
At the hearing, DC's expert said that there was a possibility of a bias from excluding cases from the last quarter, but that any bias introduced would be "really pretty trivial." Tr. at 16-17 and 59. DC presented no evidence, however, that cases opened during the last quarter would be more apt to be "action" cases than cases from earlier quarters. OCSE's expert testified that, even if DC had acted on the excluded cases at a level of practically 100%, including them would not have affected the sample results significantly. Tr. at 72-74.
In light of the evidence presented, we conclude that it is likely that exclusion of the cases opened in the last quarter of 1988 favored DC, but that even if this exclusion introduced a bias against DC, that bias would not be significant here. Accordingly, we conclude that the exclusion of cases opened in the last quarter of 1988 does not provide a basis for invalidating the sample results.
D. Classification of cases
In his first affidavit, DC's statistical sampling expert challenged the auditors' classification of cases according to the audit criteria, asserting that proper assignment of cases into review categories is essential. DC Ex. 30 at 7-8. He asserted that the follow-up review guide was vague on how cases should be classified and described generally several areas where he questioned the classifications. Id.
OCSE responded that its auditors had classified cases by reviewing all relevant case materials to evaluate what services were needed. OCSE pointed out that this was consistent with the preamble to the final rule, which explained generally how cases would be classified. See 50 Fed. Reg. at 40131-40132. OCSE also responded in detail to the questions raised in the affidavit. OCSE Response Br. at 46-49. OCSE noted that throughout the follow-up review the auditors had consulted with DC officials and had provided numerous opportunities to DC for comment on the audit findings, but that DC had not then questioned the classification of cases. Subsequently, OCSE also provided relevant parts of an internal case evaluation guide, which it used to ensure consistency in auditors' classification of cases.
Prior to the hearing, DC submitted case documentation for specific cases which DC alleged should be reclassified into different review categories. OCSE responded with a detailed analysis of the proposed reclassifications, and DC subsequently withdrew some of its allegations, leaving 17 cases in dispute.
At the hearing, DC's sampling expert said the dispute about case classification needed to be resolved, but acknowledged that misclassification was not a wholesale problem in the sample and that the reclassifications would not make much of a difference. Tr. at 18-20. He also acknowledged that he was not an expert in case classification. Id. In presenting his calculations of the relevant confidence intervals for the sample results, he included the upper limit figures for the results both as classified by OCSE and as reclassified by DC. DC Hearing Ex. 1.
Based on our analysis of the parties' arguments and the legible documents in the record, we conclude that OCSE's classifications appear to be correct. Much of DC's reasoning for its proposed reclassifications was based on material in the case records which was not relevant to the time period in question. We do not need to provide a detailed analysis of the classification disputes, however, since we find below that, even if DC's reclassifications were accepted, DC would not meet the paternity and location criteria. 13/
E. Calculation of the confidence intervals
In his initial affidavit, DC's sampling expert noted that three kinds of reviewing were to be done for the follow-up review: locate, paternity, and support. He further noted that DC "does not service its entire caseload in a comparable manner, instead investigating different types of cases in widely varying ways." DC Ex. 30 at 5; see also Tr. at 23-28. He said that the audit specifications agreed with this, since they designated cases according to whether the cases needed to be reviewed for more than one category, for just one, or for no category at all. He then asserted that to deal with this situation, it is necessary to consider the audit sample as being stratified into eight strata of cases aggregated "by the various combinations of investigative categories." DC Ex. 30 at 6. 14/ Finally, he suggested that the variance (standard error squared) for the sample result should be calculated as that for a combined ratio estimator given in a statistical treatise which he cited. He asserted that this would make the confidence intervals for the support and paternity criteria approximately twice those calculated by the auditors, but he provided no supporting calculations. DC Ex. 30 at 7.
In response, OCSE focused on explaining why it had not stratified the sample according to review categories at the time it drew its sample. DC's expert replied in his second affidavit:
     According to accepted principles of survey sampling . . . it is necessary to take into account any great differences among the members of the population which is being sampled. This must be done either in the selection of cases or in the calculation of the resulting estimate. Since OCSE for admittedly good reasons did not stratify the sample at the outset, it must use the proper formula to calculate the estimate of occurrence of the attribute that is being reviewed ("action" vs. "no action"), along with the proper accompanying variance calculation.
DC Reply Br., Att. at 2-3.
At the hearing, DC's expert provided a chart comparing the results of OCSE's method of calculating the efficiency rates and the high end of the efficiency ranges (confidence intervals) for the criteria with the results of his proposed method (which was referred to as ratio estimation with post-stratification). DC Hearing Ex. 1. His chart indicated that DC would not meet the 75% standard for the locate criterion whatever method of calculating the confidence intervals was used and whichever case classifications (OCSE's or DC's) were used. Thus, DC's arguments about the locate criterion were not dependent on the method of calculating the confidence interval, but instead were based on other allegations. As we discuss in section IV below, we reject those allegations and thus find that DC failed to meet the locate criterion.
For the paternity criterion, DC's chart showed that DC would meet the 75% standard only if the Board agreed with DC's allegation that certain cases should be credited with an "action" on the basis of Christmas cards which DC sent to some absent parents in December 1988. (Without the Christmas cards, DC would miss the 75% standard by at least 16 percentage points.) As we discuss in section V below, we find that DC failed to document that it should be given credit for "action" for the Christmas cards in the sample cases at issue here.
For the support criterion, DC's chart showed that DC would meet the 75% standard, even using OCSE's methods, if DC were given the benefit of a two-sided 95% confidence interval. For reasons discussed in section VI below, we conclude that DC should be given the benefit of a two-sided confidence interval and therefore met the support criterion. This finding, however, is not sufficient to bring DC into substantial compliance since DC still failed to meet two criteria.
In view of our findings below, we do not need to decide the merits of DC's proposed use of ratio estimation with post-stratification to calculate the confidence intervals. We note, however, that although OCSE's statistical sampling expert acknowledged at the hearing that ratio estimation with post-stratification might be appropriate (Tr. at 80), OCSE also raised serious questions about the underlying premise on which DC's expert's proposal was based. OCSE Post-hearing Br. at 6-13.
DC's expert's opinion was premised on his view that DC does not service its caseload in a comparable manner but investigates different types of cases in different ways. Yet, DC's program director testified at the hearing that DC expends the same effort on all of its cases. Tr. at 148-149, 191-192. Moreover, even if DC would take different actions for locate than for paternity or support, the stratification DC's expert proposed assumes a difference in treatment between, for example, the locate only and the locate and paternity cases. 15/ At the point that the case is a locate case, though, one would not expect any difference in treatment. The difference between the locate and the locate and paternity categories would be that cases in the second category would be ones where DC either took a locate action which was successful so that then a paternity action was required, or took a paternity action which led to the determination that a locate action was required. We are concerned that constructing a separate stratum for cases requiring action for more than one criterion, such as locate and paternity, could artificially inflate the standard error, particularly since the size of that stratum would be smaller if fewer actions were taken. Because DC's proposal was not entirely clear until the hearing, these issues were not fully addressed. Thus, nothing in our decision suggests that we would accept post-stratification elsewhere. We simply find that it does not make any material difference here since DC would fail the paternity and locate criteria whichever calculation method is used. 16/
IV. DC did not provide locate services in 75% of the cases requiring them.
OCSE's sample for the follow-up review included 109 cases which OCSE identified as requiring services to locate an absent parent under 45 C.F.R. 305.33(g). Of those cases, OCSE credited DC for taking the required action in 49 cases (45% of the 109 cases). During the follow-up review period, DC had submitted 21,000 names of absent parents to the Federal Parent Locator Service (FPLS). This May 1988 submission included approximately 20,000 of DC's active cases selected by a computer program that identified cases in which the absent parent's social security number was known, and about 900 in which it was not known. 17/ Tr. at 179. In addition, DC had entered into several contracts with credit agencies to provide social security numbers for absent parents, so that additional names could be submitted to FPLS.
DC did not argue that case records for any of the 60 "no action" cases revealed any required locate action during the relevant period. Instead, DC contended that it should be held to have met the locate criterion because of DC's mass submission to the FPLS and its other actions geared towards improving its location efforts.
OCSE responded that DC had admitted that its FPLS submission was not limited to cases needing locate services, although it could apparently have been so limited. Tr. at 173, 182. Thus, a DC witness acknowledged that some of the 21,000 cases submitted to FPLS were cases where an address was already indicated. Tr. at 183-184. OCSE concluded that it therefore could not assume that every single case requiring locate services had been submitted to the FPLS. OCSE pointed out that, in fact, not one of the 60 "no action" cases appeared on DC's list of 21,000 names submitted to FPLS. (OCSE had already given credit for 18 "actions" in sample cases solely on the basis of this list.) Consequently, OCSE maintained that its sample-based finding that DC had failed to provide locate services in at least 75% of cases requiring such services should be upheld.
In order to pass the audit and avoid the penalty, "the procedures required . . . must be used in 75 percent of the cases reviewed for each criterion." 45 C.F.R. 305.20(a)(2). The procedures required for the locate criterion are that a state must have established and be utilizing a central state PLS as required by 45 C.F.R. 302.35(a) and be using the names and other identifying information of absent parents, the state and local data sources and the federal PLS, in an attempt to determine the actual whereabouts of the absent parent, or determine that the whereabouts of the absent parent cannot be ascertained. 45 C.F.R. 305.33(g). The key issue here is whether DC provided these services to at least 75% of those cases requiring them.
OCSE's sampling results, even under the analyses presented by DC's expert which most favored DC, yield at most only 57% as the upper limit of the 95% confidence interval (high end of the efficiency range). DC Hearing Ex. 1. 18/ We conclude that this is an accurate representation of DC's performance on this criterion because the evidence indicates that, although DC took steps towards improving its location efforts, these steps were so unfocused that DC evidently provided locate services to cases that did not need them while missing cases that did. To the extent that DC's list showed that DC took action in sample locate cases, OCSE gave DC credit for that action.
The two activities cited by DC as support for its contention that it should be found to meet this criterion, despite the statistically valid sampling evidence to the contrary, are DC's massive FPLS submission and its use of credit companies to locate social security numbers (SSNs) for absent parents whose names could not be submitted to FPLS. We are unable to credit the latter activity. Although DC definitely did enter into contracts for such services (see Exs. 22-25), the record before us indicates that no cases were actually submitted to the contractors until 1989. Tr. at 188-189. 19/
In addition, DC's evidence about the mass FPLS submission does not support a finding that OCSE's sample was not representative, or that locate services were supplied in 75% of cases requiring them. There are several reasons why we conclude that, although the names submitted represented about 40% of the total cases in the universe (51,000), this effort did not provide needed locate services. The chief reason is that the computer program that selected cases for FPLS submission was not designed to select cases requiring locate services. Actually, DC's chief motivation for submitting this list of cases seemed to be a response to OCSE criticism of DC's failure to use FPLS in the past. Tr. at 115-116. (In fact, in 1987 DC had submitted only 28 names to FPLS. Tr. at 116; Exs. 19, 27.) Thus, the DC computer systems analyst testified that he drew up computer specifications to include all cases which had a SSN available, and cases where certain other information that might lead to a SSN was available. Tr. at 178. He acknowledged that he did not limit the list to cases requiring locate services, although he could have done so. Tr. at 182. He also acknowledged that there were a significant number of cases that needed locate services that could not be sent to FPLS because of insufficient information. Tr. at 178. DC's program director also admitted at the hearing that many of the cases included in the submission would have already had an address for the absent parent. Tr. at 117. DC was informed that to receive credit for an action the action must be "related to providing necessary services." 50 Fed. Reg. at 40132.
Although DC estimated that around 26% of its cases required locate services, 20/ DC presented no evidence which shows the percentage of these cases which would have been included in the mass FPLS submission. 21/ There are several factors indicating that many of these cases probably were not on the list. Location of an absent parent is often the first action required before action may be taken to establish paternity and child support obligations, so that, to the extent that many of these cases had been in the system for a long time, that action should have been completed. Moreover, the presence of an SSN for most of the submitted cases indicates that the absent parent's location may have already been known, since this is a level of detail that is less likely to exist in cases requiring locate services. In addition, the May 1988 submission was apparently the only one undertaken that year; cases filed later that year or cases where the SSN was furnished later were not submitted. See, e.g., DC Case No. 152 (fifteenth file attached to DC's 9/24/90 submission) (SSN supplied on May 10, 1988, case referred to locate in June 1988, no further action taken).
OCSE's sample results, however analyzed, showed that DC did not come anywhere near to providing locate services in 75% of the cases requiring them. DC did not show that this conclusion is unsound or unreliable. We therefore uphold OCSE on this criterion.
V. DC did not provide paternity services in 75% of the cases requiring them.
OCSE found in its follow-up review that 87 sample cases required action for establishing paternity under 45 C.F.R. 305.24, but that DC had taken action in only 37 of these cases. DC Ex. 2. DC argued that it should have been given credit for "action" in 56 sample cases because OCSE should have counted as an action the mailing of Christmas cards to absent parents.
In late 1988, DC had developed the idea of sending Christmas cards encouraging putative fathers voluntarily to admit to paternity as a gift to their children. DC's computer systems analyst was given a description of those to whom the Christmas card would be sent -- basically, to absent parents in active cases where there was no support order but where there was a local home or employment address for the absent parent. DC Ex. 9. The analyst then developed computer specifications for printing mailing labels. Id., Att. These labels were due by November 20, 1988, and were used on 10,000 Christmas cards sent out between approximately December 1 and December 21, 1988. Tr. at 98, 174; DC Ex. 9.
DC did not print out a list of the absent parents for whom mailing labels were printed in November 1988, nor did DC record in individual case records that a Christmas card had been sent. Tr. at 122, 126, 174; see also Affidavit of Norris E. Shepard, Att. to DC Supplemental Notice of Filing. During the course of the follow-up review (which was performed from February 13 to March 24, 1989), DC "reconstructed" a mailing list for the auditors. DC Ex. 2 at 7; Tr. at 175. DC later produced another reconstructed list, based on the database in the computerized system as of March 7, 1990. OCSE Ex. J at 3. DC said it used the computer specifications developed for the Christmas card labels to reconstruct the mailing list.
The auditors noted in their workpapers that 18 of the "no action" paternity cases from the sample appeared on the reconstructed list provided to them. OCSE Ex. M. During the course of Board proceedings, DC first identified 17 sample cases as appearing on the list and then added two more. DC did not specify which list it used. DC acknowledged that the list it reconstructed in 1990 had 1,000 names "not previously on the list." OCSE Ex. J at 3.
OCSE said that the Christmas card should not be counted as an "action" for the following reasons:

o The Christmas card was not referred to in DC's Manual of Policies and Procedures, which contained written procedures for establishing paternity;

o The message on the Christmas card was not as specific as a letter developed by DC to inform putative fathers of the case in which they had been named and of potential action DC might take; and

o DC was required to document actions taken and did not do so here.
In response, DC presented testimony about why it had decided the Christmas card could be an effective technique and about the response it had received from the Christmas cards. 22/ DC also presented evidence to show that the Christmas card was developed with DC's "Same Day Paternity Program" (through which the court system cooperated to reduce the time necessary for establishing paternity) and was intended as the notice for that program.
We reject OCSE's first two arguments. While the audit criteria require DC to establish and be using written procedures, nothing in the regulations specifies that such procedures must be contained in a manual. DC did have a written description of the Same Day Paternity Program, identifying the Christmas card as a means of giving notice of that program. DC Ex. 16. While the information on the card was not as specific as the paternity letter, OCSE's audit director testified that if the sending of a Christmas card had been noted in the case records, it would have been counted as an "action." Tr. at 201-203; see also DC Ex. 2, Att. at 3.
DC admitted, however, that its mechanisms for documenting actions it took were deficient. Tr. at 174. Yet, DC had clear notice that it needed to document its actions to receive credit for them. 45 C.F.R. 303.2; 50 Fed. Reg. at 40132. As DC pointed out, OCSE has not restricted states to case file documentation, but has credited states for action based on computer-generated lists, created at the time the action was taken. Where DC did not maintain contemporaneous documentation, however, DC has the burden of establishing that Christmas cards were actually sent to putative fathers in the sample cases. DC's evidence is insufficient to show this because:
o To accept the reconstructed lists as evidence, we must be able to assume that there was no change in the information on DC's computerized system between the time that the mailing labels were printed and the time the reconstructed lists were printed. The record shows, however, that this is not a reasonable assumption. DC's computer systems analyst testified that the computerized records are "dynamic" since they are updated daily. Tr. at 175. Moreover, the auditors found that often information in the documentary case files had not been entered into the computer database. DC Ex. 2 at 5, 6, 9. DC was informed of this deficiency and took steps to remedy it. Id., Att. at 7. Thus, it is very possible that addresses for absent parents which were in the computer database when the reconstructed lists were printed were not in the database when the labels were printed. (Indeed, the discrepancy between the 18 cases the auditors found on the reconstructed list provided to them and the 19 cases DC claimed were on the list indicates that an address for one case was added after the audit.) 23/
o There is affirmative evidence in the record that one of the cases for which DC claimed a paternity action based on the Christmas card would not have fit the computer specifications at the time the mailing labels were printed. DC admitted that this case had been improperly closed in February 1988, and the record shows it was not reopened until January 1989. DC submission of 9/24/90 at 8, and Att. (Case No. N011-491.01). Since this case was not in active status at the time the mailing labels were printed, and mailing labels were printed only for active cases, we affirmatively find that no Christmas card was sent in this case.
o Even if a mailing label had been printed for a sample case, this does not necessarily mean that DC used the label to send a Christmas card. DC sent only 10,000 cards. This was 1,000 fewer cards than names on the reconstructed list. DC's computer systems analyst testified that, as he recalled, the number of labels was "about even" to the number of cards, but he maintained no record of how many labels were printed. Tr. at 174. Moreover, DC's own testimony showed that some addresses would have been removed from the database if Christmas cards were returned as undeliverable. Tr. at 187. Thus, we cannot assume that the 1,000 extra names on the reconstructed list represent only the number of addresses added after the mailing labels were printed.
o As OCSE pointed out, at least one of the cases where DC was claiming a paternity action based on the Christmas card was a case where the absent parent who appeared on the reconstructed list was a mother, not a father. (OCSE had not classified this case as a paternity action, but DC claimed this was a mistake.) We agree with OCSE that a Christmas card sent to a mother cannot be considered a paternity action. DC may be correct that even where a custodian such as a grandparent has taken action to obtain support from a mother, DC might want to try to establish paternity and seek support from the father as well. But we fail to see how a card sent to the mother asking her to acknowledge paternity can be considered related to the goal of having the father acknowledge paternity.
DC's evidence is sufficient to show that it is probable DC did actually send Christmas cards in some of the sample cases at issue. Because DC failed to maintain contemporaneous documentation, however, DC's evidence is not sufficient to show the precise cases or even the number of cases in which both a mailing label would have been printed and a card actually sent. 24/ Thus, we find that DC failed to meet the 75% standard for the paternity criterion.
VI. DC did provide support services in 75% of the cases requiring them.
OCSE found in the follow-up review that 42 sample cases required action for "Establishment of Support Obligations" (support) under 45 C.F.R. 305.25(a) during the follow-up review period, and that DC took action in 26 of these cases. OCSE found that this gave DC an efficiency rate of 62%. OCSE then applied a statistical significance test called the "T-test" and determined that OCSE could say with a 95% degree of confidence that DC failed to meet the 75% standard for the support criterion.
At the hearing, DC's statistical expert testified that, even using OCSE's method for calculating the standard error for the sample, DC would have passed the support criterion if OCSE had used a two-sided 95% confidence interval in calculating the efficiency range, rather than the T-test (which he said is equivalent to a one-sided confidence interval). Tr. at 43-44. DC noted that the two-sided confidence interval had been used in a related case involving Mississippi.
OCSE's expert did not directly dispute this testimony, although he said that he could not understand why the T-test would not give the same result as calculating the two-sided 95% confidence interval. He agreed with DC's expert, however, that to calculate the two-sided 95% confidence interval, a statistician would multiply the standard error by 1.96, and to calculate a one-sided confidence interval, a statistician would multiply by 1.645. Tr. at 87-88.
In its post-hearing brief, OCSE implicitly acknowledged that the T-test was equivalent to a one-sided confidence interval, arguing that the 1.645 factor should be used to get "comparable results." OCSE Post-hearing Br. at 8, n. 6. The issue, however, is not whether we should obtain results comparable to the T-test, but whether we should obtain results comparable to those used for other states. OCSE used the two-sided confidence interval for its program results audits, and for recalculations produced not only in the Mississippi case, but in other related cases as well. See, e.g., Ohio at 10. While OCSE did not need to give the states the benefit of the two-sided interval, OCSE cannot arbitrarily give that benefit to some states and not to others.
OCSE's own exhibit shows that OCSE calculated a standard error of .075 for the support criterion. OCSE Ex. S at 3. Multiplying .075 by 1.96 gives a result of .147. Expressing this as a percentage (14.7%) and adding it to the efficiency rate of 62% gives a figure of 76.7% as the upper limit of the two-sided 95% confidence interval (i.e., the high end of the efficiency range).
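For illustration only, the arithmetic above can be reproduced in a short sketch (the Python code and variable names are our own, not part of the record); it shows why the choice of a two-sided (1.96) versus one-sided (1.645) multiplier is outcome-determinative for the support criterion:

```python
# Sketch of the Board's arithmetic for the support criterion.
efficiency_rate = 0.62   # 26 "action" cases out of 42 sample cases
standard_error = 0.075   # from OCSE Ex. S at 3

# Upper limit of the two-sided 95% confidence interval (z = 1.96)
upper_two_sided = efficiency_rate + 1.96 * standard_error    # 0.767

# Upper limit of the one-sided 95% bound (z = 1.645), comparable to the T-test
upper_one_sided = efficiency_rate + 1.645 * standard_error   # about 0.743

print(round(upper_two_sided * 100, 1))   # 76.7 -- above the 75% standard
print(round(upper_one_sided * 100, 1))   # 74.3 -- below the 75% standard
```

Under the two-sided interval DC clears 75%; under the one-sided bound it does not, which is the entire dispute in this section.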
Thus, we find that DC met the 75% standard for the support criterion in the follow-up review. Since we have also found that DC failed to meet the 75% standard for the locate and paternity criteria, however, our finding on the support criterion does not affect our ultimate conclusion that DC failed to comply substantially with the requirements of the Act.
Conclusion
For the reasons stated above, we uphold OCSE's decision to reduce by one percent DC's AFDC funding for the one-year period beginning July 1, 1988.
_____________________________ Donald F. Garrett

_____________________________ Alexander G. Teitz

_____________________________ Judith A. Ballard, Presiding Board Member

1. The 1985 regulations also provided an expanded list of service-related audit criteria for subsequent audit periods, and added new performance-related indicators for use beginning with the FY 1988 audit period.
2. This was not DC's first notice of problems with its program. OCSE began compliance audits in 1977, and DC did not dispute OCSE's claim that DC "had been repeatedly warned of its program deficiencies and was admonished to take action to improve title IV-D services." OCSE Response Br. at 14. The record before us contains audit reports for FYs 1979 through 1981 that detail DC program failings in the very same areas found deficient in the FY 1984 audit. OCSE Ex. B. No penalty was previously imposed, however, due to congressional moratoria.
3. We note that the auditors examined whether any efforts were made by DC in these cases to furnish these services. The success of these efforts, while noted for statistical purposes, was not determinative as to whether DC was found to be in substantial compliance; DC received credit for "action" so long as it took some action towards these goals. See 50 Fed. Reg. at 40132.
4. Our conclusion here closely parallels our analysis of virtually identical arguments made by the parties in the Board's recent decisions, Ohio Dept. of Human Services, DAB No. 1202 (1990), and New Mexico Human Services Dept., DAB No. 1224 (1991). A copy of Ohio was furnished to the parties in this case for comment on any issues that were applicable. DC contended in a footnote to its post-hearing brief that "the Board's decision is contrary to law and . . . that the Board [is requested to] reconsider its decision." DC Post-hearing Br. at n. 1. The only support DC cited for its position is the legislative history already considered by the Board in Ohio, so no "reconsideration" seems warranted.
5. The existing regulations required the states to have and be utilizing written procedures detailing step-by-step actions to be taken. 45 C.F.R. 305.1, 305.24(a), 305.25(a), 305.33; 45 C.F.R. Part 303 (1983). Although no reduction had actually been imposed based on the existing audit criteria, this was due to the moratoria. The states had no guarantee that Congress would continue to delay imposition of the reductions.
6. We note that the percentages given in OCSE Exhibit H (a draft analysis by OCSE of 1980 and 1981 audit results) are derived simply by dividing the number of complying sample cases by the total number reviewed. If OCSE had instead used the same method for estimating compliance levels it used in the 1984 and 1985 audits for all states (see our discussion below), the compliance percentages shown on Exhibit H for the earlier years would have been higher. Moreover, in Maryland, the Secretary had acknowledged that some errors in making eligibility determinations were unavoidable due to the complex nature of the requirements. Here, DC did not argue that the service-related requirements were complex or that there was any barrier to meeting those requirements which could not be overcome.
7. DC asserted that OCSE erred by failing to consider whether DC's alleged noncompliance was "technical." DC Appeal Br. at 26. However, DC never justified its position by showing how its particular errors were merely technical.
8. Section 452(a)(7) provides that the Secretary of Health and Human Services should establish a separate organizational unit under the direction of a Secretarial designee, who shall report directly to the Secretary and "provide technical assistance to the States to help them establish effective systems for collecting child and spousal support and establishing paternity. . . ."
9. For a systematic random sample, the auditor first selects a case at random and then selects every nth case thereafter to achieve the desired sample size. For example, in a universe of 6,000 cases where a sample size of 100 was desired, the auditor would select every sixtieth case after the first randomly selected one.
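The procedure this footnote describes can be sketched as follows (an illustrative Python sketch under our own assumptions; the function name and the use of a seeded random start are ours, not the auditors'):

```python
import random

def systematic_sample(universe_size, sample_size, seed=None):
    """Systematic random sample: random start, then every nth case."""
    rng = random.Random(seed)
    interval = universe_size // sample_size   # e.g., 6000 // 100 = 60
    start = rng.randrange(interval)           # random case in the first interval
    # Take the start case and every interval-th case thereafter.
    return list(range(start, universe_size, interval))[:sample_size]

cases = systematic_sample(6000, 100, seed=1)
print(len(cases))   # 100 cases, spaced 60 apart
```

Each selected case index differs from the previous one by exactly the sampling interval, matching the "every sixtieth case" description in the footnote.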
10. As discussed in section VI below, OCSE applied a test called a "significance test" rather than calculating an efficiency range for the follow-up review results.
11. In another Title IV-D case involving the use of statistical sampling methodology to determine compliance, the argument was raised that OCSE's sampling methodology was a rule subject to notice and comment rulemaking under the APA. DC did not raise this issue and, in any event, we discussed and rejected that argument in our recent New Mexico decision.
12. DC's expert referred to 31 cases excluded from the sample and 107 cases "disallowed" by the auditors. OCSE explained that the 31 cases were cases where DC could not locate the case records and that this fact made it highly improbable that DC would have taken action in the cases. OCSE Ex. P at 5. OCSE also explained that the 107 cases were not excluded from the sample, but were simply cases in which none of the audit criteria at issue here were applicable. Id. at 8.
13. Another reason for our not definitively resolving these disputes is that some of the materials in the record are illegible or contain abbreviations or codes not fully identified in the record. The Board specifically asked DC to provide legible copies of the documents which were illegible (Tr. at 209-211), but DC failed to do so.
14. Using P for paternity, L for locate, and S for support, DC's expert identified separate strata as P (68 cases), L (85 cases), S (25 cases), P and L (13 cases), P and S (6 cases), L and S (11 cases), and P and L and S (0 cases). He also identified a separate stratum for the 107 cases which did not require review for any of the criteria at issue here.
15. As OCSE's audit director pointed out, DC's expert's stratification proposal fails to take into consideration that each criterion is reviewed separately. See OCSE Ex. I at 6. In other words, OCSE already treats cases requiring review for a particular criterion as a separate universe and therefore is not aggregating cases requiring a different type of action with each other. Moreover, constructing the strata as DC's expert proposed could result in over 4 million strata if all 22 audit criteria were at issue. Tr. at 64.
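The "over 4 million" figure is consistent with stratifying by every combination of criteria requiring review: with k criteria, each case either requires or does not require review for each criterion, giving 2^k possible strata. A one-line check (our own illustration, not part of the record):

```python
# Each of the 22 audit criteria either applies to a case or does not,
# so combination-based stratification yields 2**22 possible strata.
k = 22
strata = 2 ** k
print(strata)   # 4194304, i.e., "over 4 million strata"
```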
16. We also note that DC provided none of the calculations underlying the figures on DC's chart. Moreover, although the Board asked DC to provide copies of the relevant parts of the treatise from which DC's expert said he drew his formulas, DC did not provide the parts of the treatise containing the two formulas he specifically cited in his initial affidavit. Thus, it cannot be verified on the record here whether DC correctly calculated the confidence intervals using the ratio estimation technique with post-stratification.
17. Regulations require that an FPLS request include the absent parent's name and social security number, if known, and that all reasonable efforts must be made to try to get the social security number before making a request. 45 C.F.R. 303.70(c). Cases without the absent parent's social security number could apparently be submitted if other information such as the absent parent's date and place of birth or parents' names were known. Tr. at 178.
18. This number reflects the expert's conclusion if OCSE's
classification of sample cases is used. The percentage drops to 52% if
all of DC's reclassifications are accepted.
19. DC provided as Exhibit 26 a list of names sent to one contractor,
Equifax, but there is no date on it. DC's computer systems analyst
specifically testified that he was unable to assemble the list "for
Equifax" until early 1989, after the end of the follow-up review
period. Id. We also note that this list included many absent parents
for whom addresses were listed.
20. In its comments on the follow-up review, DC estimated that about
13,500 cases required locate services. DC Ex. 2, Att. at 5. In the
program results audit, the auditors found that about 40% of cases
sampled required locate services. In the follow-up review, the sample
figure was 31%.
21. DC's statistical expert hypothesized that we could find that up to
24 additional sample cases were "action" cases if we could assume that
40% of the sample cases for the locate criterion which the auditors
found were "no action" were included in the FPLS submission (since 40%
is the ratio of FPLS submissions to the total universe of cases).
However, DC's statistical expert admitted that this assumption was not
valid, given the audit finding that the 60 "no action" sample cases
were not on the FPLS submission list. Tr. at 66-68. We note that the
expert's hypothesis also assumes that the rate of cases submitted to
FPLS would be the same for locate cases as for other cases in the
total universe. Yet, as discussed below, this is not a reasonable
assumption.
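The expert's 24-case figure follows directly from applying the 40%
ratio to the 60 "no action" locate cases identified by the auditors.
An illustrative check using the figures from this footnote:

```python
no_action_cases = 60   # locate-criterion sample cases the auditors found "no action"
fpls_ratio = 0.40      # FPLS submissions as a share of the total universe of cases
additional = round(no_action_cases * fpls_ratio)
print(additional)  # 24 -- the expert's hypothesized ceiling
```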
22. Much of the testimony at the hearing and the post-hearing briefing
focused on whether DC had properly evaluated the response from the
Christmas card. OCSE questioned whether DC was attributing to the
Christmas card some voluntary acknowledgments of paternity which
resulted from other actions. We do not need to resolve this issue
here, however. DC admitted that it had no documentation for the
individual cases at issue here showing a response from the Christmas
card. Tr. at 101. Moreover, OCSE has not made success a precondition
for counting a state activity as an "action" for purposes of these
audits. 50 Fed. Reg. at 40132.
23. DC's statistical expert said that his calculations were based on
adding 18 Christmas card actions to the 38 action cases found by OCSE.
But OCSE's figures show that it had given credit for only 37 other
actions. Thus, we would not adopt DC's calculations unless we found
that 19 additional actions should be credited to DC based on the
Christmas card.
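The 19-action figure is arithmetic: the expert's total of 56 cases (38
plus 18) can be reached from OCSE's actual base of 37 credited actions
only by adding 19 Christmas card actions, not 18. Using the figures
from this footnote:

```python
expert_total = 38 + 18   # expert's assumed OCSE base plus Christmas card actions
ocse_base = 37           # actions OCSE actually credited
needed = expert_total - ocse_base
print(needed)  # 19 -- Christmas card actions needed to reach the expert's total
```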
24. If there were credible evidence that DC sent a Christmas card to
the absent parent in every case on the DC rolls in which some action
was required for establishing paternity, then we would not need to
deal with sample cases. We could find 100% compliance with this
criterion, whether or not DC did anything further in the cases. There
would also be no problem with reconstructing the list after the
mailing. Even in light of evidence that there were errors by DC in not
properly coding cases or not entering them into the computer database
and that cards were sent to women or to closed cases, we might still
find DC to meet the 75% standard, if we reasonably assumed that the
error rate did not exceed 25%.
Here, however, DC made no attempt to show what percentage of total
cases requiring paternity action were actually sent cards, and the
record provides no basis for us to make such a finding ourselves.
Moreover, we know that some cases which required paternity action
would not fit DC's computer specifications for the mailing labels,
such as cases with no local