
Patient Judgment System (PJS)

Please note that this section is an archive (last updated in June 2006).


Created 2003 January 23

Practical Information

Instrument Name:

Patient Judgment System (PJS)

Instrument Description:

A healthcare organization can use the PJS to examine “long-term trends in quality, recognize areas of excellence, and identify high priority opportunities for improvement.” (Ref: 1) There are 69 items under 11 scales: Admission, Daily care, Information, Nursing care, Physician care, Auxiliary staff, Living arrangements, Discharge, Billing, Total process, and Allegiance.

Price:

Free (available through the developer described below)

Administration Time:

No information found.

Publication Year:

1989

Item Readability:

Flesch-Kincaid Grade Level of 6.2. Most items are written in simple language and contain approximately 15 words or fewer. Some items require comprehension of common medical terms, such as "IV."

Scale Format:

Fixed-response choice format for 66 items (46 of which use a 5-point “excellent-to-poor” response scale), and open-ended format for 3 items.

Administration Technique:

Mail-delivered, self-administered questionnaire.

Scoring and Interpretation:

Item scores within each scale are summed (with some items reverse-scored so that all items run in the same direction) and then linearly transformed to a 0-100 scale; a score of 100 indicates excellence on that subscale.
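The published description does not give the exact transformation formula; a minimal Python sketch of this style of scoring, assuming 5-point fixed-response items and an illustrative (not PJS-specified) set of reverse-scored items, might look like:

```python
def score_scale(responses, reverse_items=(), n_points=5):
    """Score one PJS-style scale: reverse flagged items, sum the ratings,
    then linearly transform the sum to a 0-100 scale.

    responses: dict mapping item id -> rating in 1..n_points
    reverse_items: item ids to reverse-score first
    (item ids and the reversal list are hypothetical, not from the PJS manual)
    """
    adjusted = {
        item: (n_points + 1 - rating) if item in reverse_items else rating
        for item, rating in responses.items()
    }
    total = sum(adjusted.values())
    n = len(adjusted)
    # Linear transform: minimum possible sum (all 1s) maps to 0,
    # maximum possible sum (all n_points) maps to 100.
    return 100 * (total - n) / (n * (n_points - 1))
```

With this transform, a respondent answering "excellent" on every item in a scale scores 100, and one answering "poor" on every item scores 0.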

Forms:

A preceding 108-item version of the PJS exists, called the Patient Judgments of Hospital Quality (PJHQ).

Research Contacts

Instrument Developers:

Eugene C. Nelson, DSc, Ron D. Hays, PhD, Celia D. Larson, PhD, and Paul B. Batalden, MD.

Instrument Development Location:

Hospital Corporation of America
One Park Plaza
P.O. Box 550
Nashville, TN 37202-09551

Latest address for Dr. Nelson:
Dartmouth-Hitchcock
One Medical Center Drive
Lebanon, NH 03756

Instrument Developer Email:

No information found.

Instrument Developer Website:

No information found.

Annotated Bibliography

1. Nelson EC, Hays RD, Larson C, Batalden PB. The patient judgment system: reliability and validity. QRB Qual Rev Bull. 1989 Jun;15(6):185-91. [PMID: 2502749]
Purpose: To provide reliability and validity information on the PJS. (Survey development and pilot processes were described in Ref 2).
Sample: The 68-item PJS was administered to a stratified random sample of 8581 patients discharged from 32 urban and rural hospitals in five states; 5625 (a 66% response rate) returned completed questionnaires. Exclusion criteria included: patients discharged against medical advice; patients with a diagnosis of mental disorder, substance abuse, or brain disorder; and those younger than 1 or older than 80. The mean respondent age was 51.78, 59% were female, and 72% were married. Respondents and nonrespondents differed in age, sex, and marital status at the p < 0.05 level.
Methods: Reliability was estimated as internal consistency at the patient level, and as intraclass (i.e., intra-hospital) correlations, and test-retest at the hospital level. Discriminant, construct, and predictive validity estimates were obtained.
Implications: As related to PJS validity, the ‘total process’ scale had the highest correlation with the ‘global quality’ item, suggesting that total process “fully maps” the construct of hospital quality, whereas ‘billing’ and ‘doctor care’ had low correlations, suggesting that patients may perceive billing and doctor care as less strongly associated with hospital quality than areas such as ‘nursing care’. The PJS can be used to measure “long-term trends” in patient satisfaction, as reflected in patients’ evaluations of the care and services received from the hospital.

2. Meterko M, Nelson EC, Rubin HR (editors). Patient judgments of hospital quality: Report of a pilot study. Med Care. 1990 Sep;28(9 Suppl). [PMID: 2214898]
Purpose: To construct a scale reflecting hospitalized patients’ experiences and concerns. (The scale constructed in this study was the Patient Judgments of Hospital Quality (PJHQ); the PJS is a revised version of the PJHQ, which is sometimes referred to as the 108-item version of the PJS.) Also, to evaluate the impact of mode of administration and incentive structure on response rate.
Sample: N = 2113 patients (the same sample as the pilot test cited in Ref: 1), recently discharged from ten hospitals in three regions of the United States; 1367 responded (with differential response rates by administration mode and incentive structure). The mean age was 46 years, 63% were female, and 92% were white.
Methods: Literature review, content analysis and taxonomy of patient concerns were used to identify scale content. A pilot study was conducted to examine the 108 items, and test the impact of administration mode and incentive structure on response rate. Two modes of administration were utilized: telephone (n = 1055) and mail (n = 1058). Those receiving mail administration were further stratified by incentive type: no incentive (n = 348), a “Susan Anthony dollar” incentive (n = 356), and a “ball-point pen” incentive (n = 354).
Implications: The response rate for the mail method (57%) was lower than for the telephone method (62%), but adding a telephone follow-up to the mail method raised the rate from 57% to 67%. The pen-incentive group also responded at a higher rate (63%) than the other incentive groups (53% and 54%; p < 0.05). No consistent patterns in mode of administration across the hospitals were found. The authors suggested that shortening the lengthy 108-item PJHQ might be practical and useful. Reliability and validity information is presented.

3. Hospital Corporation of America. Your hospital stay: The patient's viewpoint [Questionnaire]. Nashville (TN); 1991-2. [No PMID].
Purpose: A 69-item questionnaire is included.


Factors and Norms

Factor Analysis Work:

An exploratory factor analysis of 50 items from the PJHQ, the preceding version of the PJS, was conducted. Six factors resulted: Nursing and Daily Care, Hospital Environment and Ancillary Staff, Medical Care, Information, Admissions, and Discharge and Billing. (Ref: 2)

Normative Information Availability:

Scale means (SD) for each quality scale (N = 5625 in 32 hospitals) were reported as follows: Admissions = 70.4 (24.7), Daily Care = 74.5 (24.8), Information = 71.0 (25.6), Nursing Care = 72.2 (24.5), Doctor Care = 78.5 (22.8), Auxiliary Staff = 73.6 (20.0), Living Arrangements = 71.9 (19.0), Discharge = 71.5 (24.2), Billing = 60.9 (29.7), Total Process = 71.7 (18.8), and Allegiance = 78.0 (25.1). (Ref: 1)

Reliability Evidence

Test-retest:

The absolute values of the differences between average scale scores (for each of the 11 quality scales) obtained for each hospital in two different quarters ranged from 1.3 to 3.4 on a 100-point scale. No test-retest correlations were reported. (Ref: 1)

Inter-rater:

No information found.

Internal Consistency:

Patient-level consistencies (Cronbach’s alpha) ranged from 0.86 to 0.97 for the 11 quality scales. Hospital-level consistencies (based on the ratio of between-hospital to within-hospital variation) ranged from 0.70 to 0.89. (Ref: 1)

Alternate Forms:

No information found.

Validity Evidence

Construct/ Convergent/ Discriminant:

Item-scale correlations were used to assess convergent and discriminant validity. The average within-scale correlation, used to establish convergent validity, was 0.79; correlations across scales, used to establish discriminant validity, ranged from 0.34 to 0.64. Analysis of variance was also conducted to assess variability across hospitals on each of the quality scales, and significant differences were found at p < 0.05. (Ref: 1) Using a multitrait-multimethod matrix for the PJHQ scales, convergent validity correlations were reported in the range of 0.51 to 0.71 with a median of 0.67, and divergent correlations ranged from 0.16 to 0.76 with a median of 0.38. (Ref: 2)

Criterion-related/ Concurrent/ Predictive:

No information found, although Pearson correlations between patient ratings (using the PJS) and employee ratings (not using the PJS) on 5 quality scales (ranging from 0.52 to 0.82) provide weak evidence of criterion-related validity.

Content:

Literature review, analyses of patients’ qualitative statements, and input from experts were used to maximize content validity. (Ref: 2)

Responsiveness Evidence:

No information found; however, the hospital-level consistencies suggest that the instrument can distinguish among institutions cross-sectionally.

Scale Application in VA Populations:

No information found.

Scale Application in non-VA Populations:

Yes. (Ref: 1-2)

Comments


Advantages: This is a carefully constructed and well-described scale for assessing patient satisfaction with and perceptions of inpatient care. It has demonstrated reliability and stability.

Disadvantages: The validity evidence for the Patient Judgment System (PJS) is still limited, with little evidence presented relating to external validity. Information about the PJS’s responsiveness is also limited, although the intraclass correlations presented (Ref: 1) seem to indicate that the PJS distinguishes well among hospitals.

Recommendations: The PJS is designed to assess inpatient satisfaction at the hospital level (i.e., average satisfaction levels among inpatients at facilities). Its utility as a tracking measure, that is, its responsiveness to change, remains to be determined, so its use in longitudinal analyses should be considered carefully. Furthermore, the exclusion of patients discharged against medical advice potentially leaves out a population that might contribute to our understanding of how patient care should be evaluated. Similarly, using the PJS to evaluate mental health care would likely be inappropriate, primarily because those with a mental disorder or a substance abuse disorder were excluded from the pilot.



Updates

No information found.