
Assessment of visual performance in the evaluation of new medical products

B. Drum*, D. Calogero and E. Rorer

Food and Drug Administration, Center for Devices and Radiological Health, Office of Device Evaluation, Division of Ophthalmic and ENT Devices, 9200 Corporate Boulevard, Rockville, MD 20850, USA


When evaluating how a medical product affects vision, it is important to assess how that product affects the ability to function in real life, not only the ability to read letters on a vision chart. Nevertheless, the measurement of visual acuity with a vision chart remains the primary test of the effects of medical products on vision. Here, we review efforts to identify reliable, cost-effective clinical tests to serve as surrogate measures of functional visual performance.

Introduction

When evaluating the safety and effectiveness of medical products, it is often important to assess their effects on the performance of ‘real-world’ visual tasks. However, tests of real-world visual performance are not standardized and are typically costly and difficult to conduct. No consensus has been reached on the ability of existing clinical vision tests to predict real-world performance.

Night driving is frequently chosen as a representative task in studies of functional visual performance. Driving requires a broad range of visual abilities, is an important task to a large portion of the population, and has major public safety impact. Driving performance studies involve either actual driving, usually on a track or specially designed course, or simulated driving in a controlled environment. Nighttime conditions maximize the visual challenge. However, owing to the expense and burden of conducting these studies, researchers are exploring the possibility that one or more clinical vision tests (e.g. visual acuity, contrast sensitivity, field of view, and glare) can act as acceptable surrogates of driving performance. This paper reviews relative advantages and disadvantages of the alternative clinical tests.

Assessment of visual performance in driving

Closed-course driving

Studies of actual driving have the major advantage of assessing performance under real-life conditions. However, driving courses are difficult to standardize, lighting and viewing conditions depend on the weather on outdoor courses, and there is an element of danger to the subjects, especially subjects with impaired vision. In addition, it is difficult, or in some cases impossible, to eliminate or control extraneous non-visual factors, such as auditory or somatosensory stimuli, that may interfere with the ability to isolate and assess visual performance variables [[1] and [2]].

Simulated night driving

Driving simulators attempt to duplicate the experience of driving in a controlled environment (usually an interactive video image). The advantages and disadvantages of simulated driving studies [[3], [4] and [5]] are largely complementary to those of real driving studies. Environmental variables are more controllable in driving simulators than on real driving courses, but it is extremely difficult to duplicate real-life lighting and viewing conditions in a simulator. The largest, most elaborate driving simulator in the world, the National Advanced Driving Simulator (NADS) (http://www.nads-sc.uiowa.edu/) at the University of Iowa (Fig. 1), still does not have forward spatial resolution equal to 20/20 visual acuity in its normal operating mode, or projectors bright enough to duplicate the full range of luminances in a typical night-time driving environment. The VSRC driving simulator (Vision Science Research Corporation, http://www.contrastsensitivity.net/) provides adequate resolution but does not simulate the motions associated with driving.
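
As a rough back-of-the-envelope illustration (these numbers follow from standard definitions, not from the NADS specifications), the display resolution needed to render 20/20 detail can be estimated directly from the definition of visual acuity:

\[
20/20 \;\Rightarrow\; \mathrm{MAR} = 1\ \mathrm{arcmin} = \tfrac{1}{60}^{\circ} \;\Rightarrow\; f_{\max} = \frac{1}{2\,\mathrm{MAR}} = 30\ \mathrm{cycles/degree} \;\Rightarrow\; \text{sampling} \ge 2\ \text{pixels/cycle} = 60\ \text{pixels/degree}.
\]

A projection system that provides fewer than about 60 pixels per degree of visual angle therefore cannot present targets at the 20/20 limit, regardless of projector brightness.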

Figure 1: The National Advanced Driving Simulator (NADS). The driving dome is 24 ft in diameter and it runs on a 64-ft by 64-ft X–Y track. The dome contains a real car or truck cab surrounded by a 360° by 40° interactive video screen. Operating the car’s steering and foot pedal controls generates real-time simulated changes in speed, vibration, sound, car tilt, and road scene.

Possible surrogate clinical vision test methodologies

Most currently available clinical vision tests were developed as general-purpose diagnostic tests for visual system disorders, not as substitutes for the assessment of driving performance. Specific validation studies are therefore needed to identify individual tests or combinations of tests that accurately and consistently predict visual performance on critical driving parameters.

Visual acuity

Tests of high-contrast visual acuity [6], in which the subject is asked to read the smallest possible black letters in a white surrounding field (Fig. 2a), are the most commonly used clinical tests of vision. In fact, visual acuity is often the only vision test performed in routine ophthalmological examinations. The test is relatively quick and easy to administer, and the results are well correlated with the level of visual system damage in a large number of disorders of the eye. It is also a good predictor of performance for high-resolution tasks like reading, threading a needle, or identifying road signs while driving. Some types of diseases, however, can seriously impair other aspects of visual perception and performance while leaving high-contrast acuity nearly unaffected. For example, retinitis pigmentosa is a degenerative disease of the eye that can destroy peripheral vision almost entirely before it significantly reduces central acuity. Also, some early cataracts can greatly increase scattered light in the eye, giving the patient the perception of looking through fog without significantly affecting acuity. In these cases, visual acuity is expected to be a poor predictor of performance on important real-world tasks like walking through an obstacle course or driving at night or in other poor visibility conditions. Comparisons of driving performance with acuity confirm this expectation for drivers with acuity in the normal range [[2] and [7]]. However, acuity may contribute to the correlation when combined with contrast sensitivity or mesopic tests [2].
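
For readers unfamiliar with acuity notation, the conversion between Snellen fractions and the logMAR scale used on ETDRS-style charts is a standard one (a worked example, not specific to any product evaluation):

\[
\mathrm{logMAR} = \log_{10}(\mathrm{MAR\ in\ arcmin}): \quad 20/20 \Rightarrow \mathrm{MAR} = 1 \Rightarrow \mathrm{logMAR} = 0; \qquad 20/40 \Rightarrow \mathrm{MAR} = 2 \Rightarrow \mathrm{logMAR} \approx 0.30.
\]

Because each line of an ETDRS chart changes letter size by 0.1 logMAR (Fig. 2a), 20/40 lies three lines below 20/20.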

Figure 2. Examples of clinical vision tests: (a) ETDRS visual acuity chart: Sloan letters, >90% contrast, equal 0.1 log size steps between lines. (Image adapted from Colenbrander, ‘Measuring Vision and Vision Loss’, www.ski.org/Colenbrander/Images/Measuring_Vis_Duane01.pdf.) (b) Pelli–Robson letter contrast sensitivity chart: 20/120 letters, equal 0.15 log contrast steps between three-letter groups. (Image provided by Denis Pelli.) (c) FACT grating contrast sensitivity chart: 1.8° diameter gratings, equal log contrast steps between gratings, spatial frequencies 1.5, 3, 6, 12, 24 cycles/degree, three-orientation forced-choice task. (Image provided by VSRC.) (d) Oculus C-Quant Straylight Meter display: a flickering annulus produces intraocular stray light, counterphase flicker is randomly assigned to the right or left half of a bipartite field, the subject chooses which side flickers more in a two-alternative forced-choice procedure, and the stray light level is determined from the resulting psychometric function.

Low-contrast acuity

Visual acuity is almost constant for all contrast levels higher than about 20%. At lower contrasts, acuity becomes strongly dependent on contrast changes. Losses of contrast in the retinal image caused by scattered light within the eye (e.g. from a cataract in the natural crystalline lens or from a multifocal intraocular lens) therefore produce greater acuity losses at low contrast than at high contrast. Contrast levels are low in many low-visibility driving conditions such as snow, rain, or fog. Low-contrast acuity tests are therefore expected to correlate better than high-contrast tests with driving performance under low-visibility conditions [2]. Low-contrast acuity charts [[8] and [9]] are commercially available at contrast levels as low as 1.25% (http://www.precision-vision.com/). Disadvantages in comparison to standard high-contrast acuity are that low-contrast acuity testing is more time-consuming if more than one contrast level is tested, and the results are more variable.
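
For reference, letter contrast on such charts is usually specified as Weber contrast (conventions differ between manufacturers, so the figures below are illustrative):

\[
C_{W} = \frac{L_{\mathrm{background}} - L_{\mathrm{letter}}}{L_{\mathrm{background}}},
\]

so a nominal 1.25% chart has letters only 1.25% darker than the white background ($C_{W} = 0.0125$), compared with $C_{W} > 0.9$ for a standard high-contrast acuity chart.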

Mesopic acuity

At photopic (daytime) light levels, high-contrast visual acuity is affected only slightly by moderate changes in light level or contrast. At mesopic (twilight) light levels, however, visual acuity changes more rapidly with fluctuations in light level or contrast, and therefore may be expected to correlate better with mesopic driving performance. Because of the sensitivity of the results to light level and the difficulty of controlling the light level precisely at such low levels, mesopic acuity data are typically more variable than photopic data. Performing low-contrast acuity tests (see above) at mesopic light levels further increases the sensitivity of the test to functional vision loss, but also further increases the variability of the results and the difficulty of controlling the testing conditions.

Letter contrast sensitivity

Letter contrast sensitivity [[10] and [11]] (Fig. 2b) is similar to low-contrast acuity in that the patient's task is to read as many letters as possible from a chart. In the contrast sensitivity test, however, all the letters are the same size and are large enough to be legible whenever they can be seen at all, but their contrast is progressively reduced from near 100% at the top of the chart to near 0% at the bottom of the chart. The ability to see low contrast letters is important for reading signs and for identifying low-contrast objects that are similar in size to the test letters. However, letter contrast sensitivity test results may not be generalizable to the detection and recognition of objects that are either much larger or much smaller than the chart letters.
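
The score on such a chart is conventionally expressed as log contrast sensitivity, the negative logarithm of the faintest contrast read reliably; a worked example using the 0.15 log steps noted in Fig. 2b (illustrative numbers only):

\[
\log \mathrm{CS} = -\log_{10} C_{\mathrm{threshold}}: \quad C_{\mathrm{threshold}} = 0.05\ (5\%) \Rightarrow \log \mathrm{CS} \approx 1.3; \qquad \text{two further triplets } (+0.30\ \mathrm{log\ units}) \Rightarrow C_{\mathrm{threshold}} \approx 0.025\ (2.5\%).
\]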

Grating contrast sensitivity

Grating contrast sensitivity involves the detection of sinusoidal gratings, which are patterns of parallel light and dark bars for which the transition from dark to light is gradual rather than abrupt. Sensitivity is measured over a range of spatial frequencies, or bar widths. The normal contrast sensitivity function is maximal at a spatial frequency of about six cycles per degree of visual angle and declines at both higher and lower spatial frequencies. The grating contrast sensitivity function has been recognized as an important fundamental measure of visual function since the early 1960s [12], but was first developed as a clinical test by Ginsburg [13] in 1984. Currently available tests include the CSV-1000 marketed by VectorVision (http://www.vectorvision.com/) and the Functional Acuity Contrast Test (FACT) [14] marketed by StereoOptical (http://www.stereooptical.com/) and Vision Science Research Corporation (http://www.contrastsensitivity.net/) (Fig. 2c). Both tests can be conducted at photopic and mesopic light levels.
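
In formal terms (standard psychophysical definitions, included only to make the quantities concrete), the stimulus luminance profile, its contrast, and the measured sensitivity are:

\[
L(x) = L_{\mathrm{mean}}\,[1 + C\cos(2\pi f x)], \qquad C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad S(f) = \frac{1}{C_{\mathrm{threshold}}(f)}.
\]

For example, a threshold contrast of 0.5% at 6 cycles/degree corresponds to a sensitivity of 200 ($\log S \approx 2.3$), in the neighborhood of the normal photopic peak.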

Disability glare

Disability glare refers to the temporary loss of visual function in the presence of a bright adjacent light source. Common sources of disability glare for drivers are the sun during the day and headlights from oncoming cars at night. Susceptibility to such glare sources varies greatly from person to person depending on the amount of light that is scattered onto the retina from the crystalline lens and other structures in the eye. A clinical test that accurately predicted the effects of glare sources and light scattering characteristics on driving performance would be a valuable diagnostic tool for evaluating new medical products that affect intraocular light scatter. Several disability glare tests have been developed for clinical use [[15] and [16]]. In most existing tests, especially those that involve measuring contrast sensitivity or acuity in the presence of a continuous static glare source, the light from the glare source may cause the pupil to constrict enough to affect the results of the glare measurement. Improved test design, e.g., with a dynamic glare source [17], may be needed to assess the types of glare effects encountered in driving.
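
The underlying physics can be summarized by the classical Stiles–Holladay approximation for the veiling luminance produced by a glare source (an empirical rule of thumb rather than a property of any particular clinical test):

\[
L_{\mathrm{veil}} \approx \frac{10\,E_{\mathrm{glare}}}{\theta^{2}} \quad (L_{\mathrm{veil}}\ \text{in cd/m}^{2},\ E_{\mathrm{glare}}\ \text{the illuminance at the eye in lux},\ \theta\ \text{the glare angle in degrees, roughly valid for } 1^{\circ} < \theta < 30^{\circ}).
\]

Because the veiling luminance adds to both target and background, an object of Weber contrast $C$ is seen at a reduced effective contrast $C' \approx C\,L_{\mathrm{background}}/(L_{\mathrm{background}} + L_{\mathrm{veil}})$.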

Intraocular stray light

A different approach to assessing the effects of disability glare on visual function is to obtain a direct measurement of the amount of stray light in the eye produced by a glare source. Oculus (http://www.oculus.de/) has recently marketed the C-Quant Straylight meter developed by van den Berg and IJspeert [[18] and [19]]. The device is currently marketed in the U.S. The operation of the straylight meter is illustrated in Fig. 2d. The temporal variation in the stray light from a flickering glare source is nullified by a superposed light flickering out of phase with the stray light. The amount of added light that just cancels the stray light flicker is a direct measurement of the stray light. The test is fast, easy for the patient, and accurate. However, the relationships between the stray light results from this test and the results of contrast sensitivity-with-glare tests and driving performance tests have not been established [20].
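
The quantity reported by such instruments is commonly the straylight parameter introduced by van den Berg, defined (in its usual form) as:

\[
s = \frac{\theta^{2}\,L_{\mathrm{eq}}}{E_{\mathrm{bl}}}, \qquad \text{reported as } \log(s),
\]

where $L_{\mathrm{eq}}$ is the equivalent veiling luminance (cd/m²), $E_{\mathrm{bl}}$ is the illuminance at the eye from the glare source (lux), and $\theta$ is the glare angle in degrees; larger values of $\log(s)$ indicate more intraocular scatter.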

Visual fields

Most clinical visual field tests are limited to threshold measurements within the central 24°–30° radius. To determine the relationship between visual fields and driving performance, however, it is necessary to measure the peripheral field as well. Johnson and Keltner [21] performed automated peripheral field screening on 10,000 subjects, and related the results to survey results concerning their driving performance and accident rates. They found a doubling of accident and conviction rates for binocular loss of field to within 40° of eccentricity, but little correlation with monocular field loss. Wood and Troutbeck [22] obtained comparable results in a study of simulated field loss in young normal subjects. Considering that visual field tests are more time-consuming and difficult for the patient than most other clinical vision tests, standard automated perimetry would appear to have a low priority as a clinical surrogate for predicting driving performance in the general driving population.

Ball and Owsley [[7] and [23]] have proposed a different approach to peripheral visual function that shows a stronger relation to driving performance than standard visual field measurements. The Useful Field of View is defined as ‘the visual field area over which one can use rapidly presented visual information.’[7] It requires correctly identifying the direction of a peripherally presented target while simultaneously performing a complex central visual task. In a comparative study of accident rates and a battery of clinical vision tests, the Useful Field of View results were the only ones that were significantly related to crash rates. Nevertheless, the Useful Field of View test has yet to have an appreciable impact as a clinical test, because it remains unknown to much of the ophthalmic community.

A comparison of possible clinical surrogate tests and night driving performance

As part of the search for a possible alternative to costly and difficult night driving visual performance testing for the assessment of new medical products, FDA has sponsored a critical path project in collaboration with the University of Iowa to compare night driving measures with clinical vision tests. The objective of the project is to investigate possible surrogate measures for night driving visual performance. Fifty-five subjects from 30 to 60 years old with uncorrected visual acuity of 20/40 or better (the visual acuity usually required for an unrestricted license) were enrolled in an initial prospective clinical trial. We compared clinical vision tests with visual performance during simulated night driving in the NADS. Clinical vision testing included visual acuity (photopic and mesopic) with an Early Treatment Diabetic Retinopathy Study (ETDRS) chart (PrecisionVision) (Fig. 2a), letter contrast sensitivity with the Pelli–Robson chart (Haag–Streit, http://www.haagstreituk.com/) (Fig. 2b), grating contrast sensitivity with the FACT chart (Fig. 2c), and stray light testing with a prototype of the Oculus C-Quant (Fig. 2d). Driving measures included distance to identify road signs and hazards (objects). We found that contrast sensitivity and intraocular stray light measures correlate with object recognition, a night driving measure, in a subset of subjects with visual acuity better than 20/20. More work is needed to identify and fully evaluate the most valuable clinical measures from the initial study. The ultimate goal of this critical path project is to develop better evaluation tools for medical products that affect functional visual performance.
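
To make the nature of such a comparison concrete, the sketch below shows how a correlation between one clinical score and one driving measure might be computed. The data arrays, variable names, and magnitudes are hypothetical illustrations only, not results or analysis code from the FDA/University of Iowa study.

# Illustrative sketch with hypothetical data; not the study's analysis code.
import numpy as np
from scipy import stats

# Hypothetical per-subject scores (assumed values, for illustration only)
log_contrast_sensitivity = np.array([1.95, 1.80, 1.65, 1.95, 1.50, 1.80, 1.65, 1.95])
sign_recognition_distance_m = np.array([88.0, 74.0, 61.0, 92.0, 55.0, 70.0, 66.0, 85.0])

# Pearson r assumes an approximately linear relationship;
# Spearman rho is rank-based and more robust to outliers.
r, p_r = stats.pearsonr(log_contrast_sensitivity, sign_recognition_distance_m)
rho, p_rho = stats.spearmanr(log_contrast_sensitivity, sign_recognition_distance_m)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")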

Conclusions

Assessment of visual performance is often important for evaluating the safety and effectiveness of new drugs and medical devices, but is typically complex, expensive, and burdensome for subjects and investigators. Identification of clinical tests that could serve as acceptable surrogates for visual performance tests in clinical trials would yield major savings of time, effort, and expense in the evaluation of new products.

Driving is a complex, visually intensive task that is frequently used to represent visual performance capability. We have reviewed comparisons of visual performance measures in driving to clinical vision tests, including visual acuity, contrast sensitivity, disability glare, and visual field assessment. Many of these studies have found only weak or insignificant correlations between clinical and performance measures. This may be due in part to the complexity of the driving task, which includes non-visual factors that can obscure or compensate for variations in visual performance. Studies that isolate the visual aspects of driving performance improve the chances of revealing the true correlations with clinical measures of visual function.

Although the available data are not definitive, we can draw limited conclusions regarding the comparison of visual driving performance with specific types of clinical tests (see Table 1).


Table 1. Comparison of representative clinical vision test methodologies

Methodology 1: Photopic visual acuity
  Specific tests (company, web site): ETDRS chart (PrecisionVision, http://www.precision-vision.com/); Bailey–Lovie chart (The National Vision Research Institute of Australia, nvri.optometry.unimelb.edu.au/nvri/)
  Pros: Quick, easy, good predictor of performance for high-resolution tasks under bright conditions
  Cons: Poor predictor of performance under low-contrast conditions
  References: [2,6,7]

Methodology 2: Low-contrast acuity
  Specific tests: Holladay Contrast Acuity Test (Stereo Optical, Inc., http://www.stereooptical.com/); SKILL card (The Smith-Kettlewell Eye Research Institute, http://www.ski.org/)
  Pros: Correlates better than standard acuity tests with driving performance under low-visibility conditions
  Cons: Time-consuming; results are more variable than standard acuity test results
  References: [1,8,9]

Methodology 3: Mesopic visual acuity
  Specific tests: ETDRS chart (PrecisionVision, http://www.precision-vision.com/); Bailey–Lovie chart (The National Vision Research Institute of Australia, nvri.optometry.unimelb.edu.au/nvri/)
  Pros: Quick, easy, good predictor of performance for high-resolution tasks under low-light conditions such as night driving
  Cons: Test conditions are difficult to control and results are more variable than photopic results
  References: [1]

Methodology 4: Letter contrast sensitivity
  Specific tests: Pelli–Robson chart (Haag–Streit, http://www.haagstreituk.com/); Mars Letter Contrast Sensitivity chart (Mars Perceptrix Corp., http://www.marsperceptrix.com/)
  Pros: Assesses performance for reading low-contrast signs
  Cons: May not provide an accurate assessment of performance in detecting and recognizing objects with sizes different from the chart letters
  References: [10,11]

Methodology 5: Grating contrast sensitivity
  Specific tests: FACT chart (Vision Sciences Research Corporation, http://www.contrastsensitivity.net/); Optec 6500P (Stereo Optical, Inc., http://www.stereooptical.com/); CSV-1000E chart (VectorVision, http://www.vectorvision.com/)
  Pros: Assesses the whole contrast sensitivity function from lowest to highest spatial frequencies
  Cons: Time-consuming; results are more variable than standard acuity test results
  References: [12–14]

Methodology 6: Glare testing
  Specific tests: Contrast sensitivity, Optec 6500P (Stereo Optical, Inc., http://www.stereooptical.com/); CSV-1000HGT (VectorVision, www.vectorvision.com); CST-1800 (Vision Sciences Research Corporation, http://www.contrastsensitivity.net/); Contrast acuity, Optec 6500P (Stereo Optical, Inc., http://www.stereooptical.com/)
  Pros: Adding glare testing to vision tests adds information about the effects of intraocular light scatter on visual performance
  Cons: Time-consuming; results are more variable than standard acuity test results
  References: [4,5,15–17]

Methodology 7: Straylight testing
  Specific tests: Oculus C-Quant (Oculus, http://www.oculus.de/)
  Pros: Fast, easy for the patient, and accurate
  Cons: Correlations between straylight results and other vision tests with glare, and driving performance, not yet established
  References: [18–20]

Methodology 8: Visual fields
  Specific tests: Humphrey Field Analyzer (Carl Zeiss Meditec); Vision Attention Analyzer (Vision Resources, Inc., Chicago, IL)
  Pros: Full-field measurements can identify deficits that have been correlated with increased accident rates
  Cons: Time-consuming and difficult for the patient
  References: [7,21–23]

Visual acuity

Classical high-contrast letter acuity is poorly correlated with driving performance down to or even below the legal acuity limit for licensed drivers. For theoretical reasons, acuity measured at low light and/or low contrast levels should show higher correlations. The limited available data appear consistent with this prediction.

Contrast sensitivity

Compared to visual acuity, both letter and grating contrast sensitivity tests show better but still modest correlations with driving performance. Letter contrast sensitivity is usually measured at photopic light levels, whereas grating contrast sensitivity is usually measured at both mesopic and photopic levels. As with acuity, mesopic tests typically show higher correlations with driving performance, but they tend to be more difficult and variable than photopic tests.

Glare and stray light

Clinical glare tests, including contrast sensitivity with glare, typically do not show functional impairment commensurate with that experienced in driving. New test designs with dynamic glare sources may be needed to assess the types of glare effects encountered in driving. A new test that directly measures intraocular stray light shows promise, but its relationships to glare tests and driving performance tests have not been established.

Visual fields

Standard automated visual field tests typically show poor correlations with driving performance except in cases of advanced binocular field loss. The Useful Field of View test, an alternative approach to peripheral vision assessment that measures the time needed to report the position of a peripheral event while attending to a central task, has shown a stronger correlation to driving accident rates than standard visual field measurements.

FDA critical path trial

FDA has sponsored a critical path project to compare night driving measures on the National Advanced Driving Simulator with clinical vision tests to look for surrogate measures for night driving visual performance in clinical trials of medical products. Preliminary results are promising, but more work is needed to determine whether clinical tests can replace driving performance tests in clinical trials.

*Corresponding author: B. Drum (bruce.drum@fda.hhs.gov) URL: http://www.fda.gov

References

1. K.E. Higgins and J.M. Wood, Predicting components of closed road driving performance from vision tests, Optom. Vis. Sci. 82 (2005), pp. 647–656.

The views presented in this article do not necessarily reflect those of the U.S. Food and Drug Administration.
