National Cancer Institute - IMAT

Past Seminar Series

1998



Micro-Arraying Achievements and Challenges

Presenter: Jose Valle, Development Manager, Intelligent Bio-Systems
December 10, 1998


This talk discusses some of the technical aspects that affect the performance of micro-arraying systems and what to be aware of when evaluating micro-array results. We will compare micro-arraying to other techniques and show how to specify a micro-arraying system. The talk is based on users' experience with a production micro-arraying system; several installations are currently producing substrates used in a wide variety of assays.


Cells and Microfluidics - A New Platform for Cell Function and Analysis

Presenter: Bala Manian, Ph.D., Chairman, Biometric Imaging
November 12, 1998


Microvolume fluorimetry is a broad-based technology that enables cell function analysis in a microvolume format. For the first time, this technology has enabled the development of whole-blood cell assays in a cartridge, allowing near real-time monitoring of cellular response to administered therapy. Microvolume fluorimetry is based on the spatial analysis of fluorescence in small samples (capillaries, microwells, and chips). A laser scanning optical system permits the measurement of fluorescence from a precise scan volume. When a fluorophore-labeled reagent binds to a cell or other matrix, it is concentrated and gives rise to a signal over the background that is measured with a laser scanner. These features have allowed, for example, the development of several cell function diagnostic assays that are very useful in managing high-dose chemotherapy patients. Microvolume fluorimetry has also been implemented in a high throughput screening (HTS) system based on intact cell assays, where it allows the use of target cells in monitoring cell expression, cell activation and cytotoxicity. This talk will present specific examples of cell function diagnostics and HTS assays to illustrate the power and versatility of microvolume fluorimetry, emphasizing the features that can change the way drugs are monitored during discovery, clinical validation and the administration of therapy.
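The signal-over-background measurement described above can be illustrated with a small sketch (hypothetical numbers and a simple threshold rule, not Biometric Imaging's actual algorithm): background fluorescence is estimated from the bulk of the scan, and positions where bound, concentrated fluorophore rises well above it are reported as cells.

```python
import statistics

def cell_signals(scan_values, z=3.0):
    """Separate cell-bound fluorescence from unbound background in a
    line scan: estimate the background as the median reading, then
    report scan positions whose signal exceeds the background by
    z times a robust spread estimate (simplified illustration)."""
    background = statistics.median(scan_values)
    spread = statistics.median(abs(v - background) for v in scan_values) or 1.0
    return [i for i, v in enumerate(scan_values)
            if v > background + z * spread]

# Hypothetical scan: mostly background (~10 units) with two labeled cells
scan = [10, 11, 9, 10, 55, 10, 12, 9, 80, 10]
print(cell_signals(scan))  # positions 4 and 8
```

Because the bound fluorophore is concentrated on the cell, its signal stands far above the diffuse background, which is what makes a no-wash, homogeneous readout possible.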


Large-scale Gene Expression Technologies: Closing the Gap Between Sequence and Function

Presenter: Dr. Mark Boguski, Senior Staff Fellow, National Center for Biotechnology Information
October 22, 1998


The wealth and sufficiency of information about sequences of genes provided by genome projects has spawned the new field of functional genomics. I believe that the most exciting frontier is at the interface between computational and "high-throughput" experimental biology. For many years now, there has been an "impedance mismatch" between the rapid output of computational predictions and the ability of traditional experimental methods to test and verify these predictions. Through the development and application of new gene expression technologies, "wet bench" biologists can produce functional information about gene products almost as rapidly as computational biologists can analyze the underlying genomes. There are tremendous challenges and opportunities to be found here, and much new biology to be discovered. Soon it should be possible to examine the spatial and temporal expression of most, if not all, genes in an organism. This approach will lead to exciting insights into the pathways to which specific genes belong, and provide clues to their roles in health and disease.

Large-scale, high-throughput experimental methods require information processing and analysis systems to match. Software and database systems to design arrays, track materials, and collect, analyze, and interpret data from gene expression studies are in their infancy. Among other things, such systems have to catalog the expression behavior of thousands of genes in a single experiment, and subsequently make comparisons across tissues, developmental and pathological states, or cellular perturbations. Very large quantities of data have to be managed both before and after the actual experiment, because direct access is required to all sequences, annotations, and physical DNA resources for the genes of the organism studied. Following hybridization and readout of the relative expression levels observed at the sites on an array, the data must be stored and preserved so that they are available for image processing and for statistical and biological analysis. The latter includes identifying the transcripts that show statistically significant changes in absolute or relative quantity. Once this is done, a number of tasks need to be performed, the most obvious and straightforward of which is to provide information about the structures and functions of the gene products of interest. Interpreting this information is the responsibility of the investigator, who should also be able to interrogate the data sets in other ways: biochemical pathways to which a particular transcript belongs could be identified, or genes with which the transcript is thought to interact could be found. In a time-course experiment, sets of genes with similar temporal expression profiles could be sought. In the long run, the software could, indeed should, be made capable of pre-interpreting the data (using a biochemical knowledge base and a set of heuristics) and presenting the investigator with alternative hypotheses or explanations about its meaning. Only in this way can experiments involving tens of thousands of genes, with hundreds or thousands of these showing changes, be managed.
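Two of the analysis tasks described above, flagging transcripts with significant changes and finding genes with similar temporal expression profiles, can be sketched in a few lines. This is an illustrative toy with made-up data: it uses a simple fold-change rule and Pearson correlation, whereas a real system would add replicate-based statistics.

```python
import numpy as np

def significant_changes(control, treated, fold_threshold=2.0):
    """Flag transcripts whose abundance changes by at least
    `fold_threshold` between two conditions (a simple ratio test)."""
    ratios = treated / control
    changed = (ratios >= fold_threshold) | (ratios <= 1.0 / fold_threshold)
    return np.where(changed)[0]

def similar_profiles(profiles, seed_gene, min_corr=0.9):
    """Find genes whose temporal expression profile correlates with
    that of `seed_gene` (rows = genes, columns = time points)."""
    seed = profiles[seed_gene]
    corrs = [np.corrcoef(seed, row)[0, 1] for row in profiles]
    return [i for i, c in enumerate(corrs) if c >= min_corr and i != seed_gene]

# Hypothetical readout for five transcripts across two conditions
control = np.array([100.0, 50.0, 80.0, 200.0, 10.0])
treated = np.array([310.0, 55.0, 30.0, 190.0, 9.0])
print(significant_changes(control, treated))  # transcripts 0 and 2 change >= 2-fold
```

The same correlation idea extends to the time-course case mentioned above: cluster genes whose profiles track one another across the sampled time points.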

It is exciting to anticipate a time when data from thousands of gene expression experiments will be available for meta-analysis, which has the potential to balance out artifacts from many individual studies, leading to more subtle findings and more robust results. This will require that the data adhere to some uniform structure and format, ideally independent of the particular expression technology used to generate them. The pros and cons of various publication modalities for these large electronic data sets will also be discussed.

Suggested Reading:

The following two articles are due to be published in a special issue of Elsevier's Trends Guide to Bioinformatics.

  • The Bioinformatics Era
  • Functional Genomics: Narrowing the Gap between Sequence and Function

Hieter P, Boguski M. Functional genomics: it's all how you read it. Science. 1997 Oct 24; 278(5338): 601-602.

Ermolaeva O, et al. Data management and analysis for gene expression arrays. Nature Genetics 1998 Sep; 20(1): 19-23.


Software and Microarrays for Cancer Research

Presenter: Dr. Harold "Skip" Garner, UT Southwestern Medical Center
September 24, 1998


Several technologies are being investigated with the ultimate goal of rapidly identifying genes involved in cancer and then analyzing them further with software and hardware. Specifically, we have recently completed and validated a computer code, POMPOUS, that predicts simple sequence repeat polymorphisms and then designs the necessary reagents for their use. We are also working on concepts for a similar code for SNPs. In addition, we have been computing Virtual Expression Arrays, by sequence homology between the CGAP/EST databases and any large collection of genes or sequences, for example the entire yeast genome and the Clontech Atlas Arrays. Anchored to the yeast genome, the computer can identify candidate yeast genes with high homology and possible differential expression as indicated by 'hits' to the CGAP database. These are then used, along with other information, to design custom expression and re-sequencing arrays that would be diagnostic for a given cancer application. We plan to manufacture the custom arrays using Digital Optical Chemistry, an approach and device to produce 'Affymetrix'-style oligonucleotide arrays ('chips'). The prototype device uses a Texas Instruments Digital Light Processing micromirror system coupled to a UV source and a reaction chamber to make custom arrays on slides via a 'Digital Mask' and photolithographic chemistry.
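The Virtual Expression Array idea, tallying homology 'hits' from an EST database against an anchor set of genes, can be sketched as follows. This is a hypothetical simplification: the actual work uses sequence homology searches against the CGAP/EST databases, and the gene and library names below are only illustrative placeholders.

```python
from collections import Counter

def virtual_expression_profile(hits):
    """Given (gene_id, est_library) pairs, e.g. the best-matching anchor
    gene for each EST in a database search, tally hits per gene within
    each library. Relative tallies serve as a rough 'virtual' expression
    level, and differences between libraries suggest differential
    expression worth following up on a physical array."""
    profile = {}
    for gene, library in hits:
        profile.setdefault(library, Counter())[gene] += 1
    return profile

# Hypothetical hits: ESTs from tumor vs. normal libraries matched to genes
hits = [("geneA", "tumor"), ("geneA", "tumor"), ("geneB", "tumor"),
        ("geneA", "normal"), ("geneB", "normal"), ("geneB", "normal")]
profile = virtual_expression_profile(hits)
print(profile["tumor"]["geneA"], profile["normal"]["geneA"])  # 2 1
```

Genes with markedly different tallies between libraries are the candidates that would then be placed on a custom expression or re-sequencing array.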

Other technology in development or operation to support these efforts includes slide spotting robots; hyperspectral imaging readout for expression, re-sequencing and cytogenetics applications; expert systems coupled to supercomputer codes for data mining and culling for analysis of sequences; and automation for oligo production and DNA sequencing.


Micro-Array Based High Throughput Screening

Presenters: Mitch Eggers, Ph.D., President and CEO, and Michael Hogan, Ph.D., Chief Scientific Officer, Genometrix Incorporated
June 25, 1998


A cost-effective, microarray-based technology platform will be presented for performing high throughput mRNA expression analysis and genotyping. The platform consists of a comprehensive robotic workstation that conducts sample preparation (RNA isolation and purification), hybridization (using DNA microarrays), detection (CCD-based imager) and information processing. The microarrays (hundreds of genes represented per microarray) are fabricated in the wells of standard 96-well microtiter plates, thereby accommodating gene expression analysis across hundreds of genes in a robot-compatible format. Methods for efficiently fabricating (1 microarray/second) and testing the quality of the microarrays will be presented.

Several applications utilizing the microarray-based platform will also be presented, including high throughput screening for developing genetic therapy formulations for cancer treatment, as well as genotyping for cancer risk and chemical toxicology analysis.

Finally, a model microarray-based high throughput system for analyzing the RNA expression patterns of the NCI's 60 cell lines against the 60,000-compound library will be introduced.


Profiling Gene Expression Patterns Using High-density Oligonucleotide Arrays

Presenter: Dr. David H. Mack, Vice President Genomics Research, Eos Biotechnology, Inc.
June 11, 1998


We have applied Affymetrix GeneChip technology to generate gene expression profiles from over 20 matched normal and tumor colorectal samples from patients. By correlating massive and highly parallel gene expression patterns with specific cellular programs, one can begin to produce important clues to gene function. These patterns of expression may in themselves begin to identify candidate genes with diagnostic and prognostic value, as well as potential new drug targets. The ability of highly parallel expression monitoring technologies and bioinformatics to create new patient management tools, and to identify new therapeutic targets, will be discussed.


How Do You Target RNA with Small Molecules?

Presenter: Dr. David Ecker, Vice-President and Managing Director, Combinatorial Drug Discovery, Isis Pharmaceuticals, Inc.
May 7, 1998


While functional genomics is largely focused on the proteins encoded by DNA, the RNA intermediate provides a unique, untapped wealth of opportunities for drug binding. Recent advances in structural biology have revealed that RNA has surprisingly complex structure and binds to proteins with high specificity. This structure provides the opportunity for targeted small-molecule drug binding. I will describe our program to exploit RNA as a target for small molecules, including: (1) how to mine the genome for structured RNA targets; (2) the computational design of RNA-binding small molecules; (3) high throughput synthesis of RNA-binding libraries; and (4) strategies for high throughput screening of RNA targets using mass spectrometry.


Image Engine: A Multimedia Information System Supporting Clinical Care, Research & Education in Oncology

Presenter: Henry J. Lowe, M.D., Associate Professor of Medicine, University of Pittsburgh
April 23, 1998


Oncology is increasingly image-intensive, but clinical and research information systems have, until recently, focused largely on text-based data. The central importance of imaging technologies such as computerized tomography and magnetic resonance imaging in the diagnosis, staging and follow-up of cancer patients, combined with the trend to store many 'traditional' clinical images, such as conventional radiographs and microscopic pathology images, in digital format, presents both challenges and opportunities for the designers of oncology information systems.


Chromophore Assisted Laser Inactivation to Address Protein Function in situ

Presenter: Dr. Daniel G. Jay, Dept. of Physiology, Tufts University School of Medicine
April 9, 1998


The advent of complete genomic information will herald a new revolution in molecular biology: developing a mechanistic understanding of how proteins function together in the living cell. One powerful tool to address this is chromophore-assisted laser inactivation (CALI). Laser light is targeted to inactivate proteins of interest via a dye-labeled antibody that by itself does not block function. CALI provides a high degree of temporal and spatial resolution to acutely perturb protein function in situ. This approach is less subject to redundancy and compensation, which can be problematic for more chronic deletion strategies. Several recent applications of CALI that address protein function in cellular processes will be presented, including the proto-oncogene pp60c-src and the ERM protein ezrin. The development of CALI for application to functional genomics will be discussed.


Tumor Cytogenetics Revisited: Comparative Genomic Hybridization and Spectral Karyotyping

Presenter: Dr. Thomas Ried, National Human Genome Research Institute, National Institutes of Health
March 26, 1998


Fluorescence in situ hybridization techniques allow the visualization and localization of DNA target sequences at the chromosomal and cellular level and have evolved into exceedingly valuable tools in basic chromosome research and cytogenetic diagnostics. Recent advances in molecular cytogenetic approaches, namely comparative genomic hybridization and spectral karyotyping, now make it possible to survey tumor genomes for chromosomal aberrations in a single experiment and permit the identification of tumor-specific chromosomal aberrations with unprecedented accuracy. This seminar will review these new molecular cytogenetic concepts, describe applications of comparative genomic hybridization and spectral karyotyping for the visualization of chromosomal aberrations as they relate to human malignancies and animal models thereof, and, finally, provide evidence that fluorescence in situ hybridization has developed into a robust and reliable technique that justifies its translation to cytogenetic diagnostics.


Suspension Arrays: Discrete Microsphere Subsets as Carriers of Nucleic Acid and Protein Reactants

Presenter: Ralph L. McDade, Luminex Corporation
March 12, 1998


A novel technology for performing multiple bioassays simultaneously has been developed. Incorporating defined quantities of two or more fluorescent dyes into the core of polystyrene microspheres imparts a defining signature to each set of microspheres as they pass through a computer-enhanced flow cytometer. Each set of microspheres is capable of carrying the reactants of one bioassay on its surface. Multiplexed assays are performed in minutes and read in seconds, with all data acquisition and analysis presented in real time. Applicable formats include immunoassay, receptor-ligand analysis, enzymatic analysis and nucleic acid hybridization. Benefits include assay speed combined with economy. Results of immunoassays such as TORCH panels, multiple cytokine measurements, and allergy testing will demonstrate the femtomolar sensitivity and wide dynamic ranges inherent in these assays. In addition, the single-base-pair specificity and excellent sensitivity of genetic analyses will be demonstrated by results of tissue typing and viral load analyses. The future of this technology will be presented, with emphasis on a smaller, more efficient, less costly flow analyzer capable of performing up to 512 bioassays simultaneously.
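The dye-signature decoding at the heart of this multiplexing can be sketched as follows (an illustrative toy with assumed dye levels, not Luminex's actual classification scheme): each bead's measured classification-dye intensities are snapped to the nearest defined dye level, which identifies the microsphere set and therefore the bioassay carried on that bead's surface.

```python
def classify_bead(dye1, dye2, levels=(0.1, 0.4, 0.7, 1.0)):
    """Assign a bead to its microsphere set by snapping each measured
    classification-dye intensity (normalized 0..1) to the nearest
    defined dye level. Two dyes at four assumed levels distinguish
    16 sets; more dyes and levels multiply the count."""
    def nearest(value):
        return min(range(len(levels)), key=lambda i: abs(levels[i] - value))
    return nearest(dye1), nearest(dye2)

# A bead measured at (0.38, 0.97) decodes to set (1, 3)
print(classify_bead(0.38, 0.97))
```

A separate reporter fluorophore, read on its own channel, then quantifies the assay result for whichever reactant that set carries.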


From Genes to Drugs Using Ribozymes

Presenter: Dr. Nassim Usman, Vice President of Research, Ribozyme Pharmaceuticals, Inc.
February 26, 1998


The specificity of ribozymes makes them extremely selective modulators of gene expression. This specificity results from several unique properties of ribozymes, including their two-part binding mechanism, built-in catalysis and, in the case of synthetic ribozymes, chemical modifications that do not cause nonspecific effects. Furthermore, ribozymes can be expressed from a variety of viral vectors as an alternative delivery mechanism that may be tuned to give expression lasting from weeks to months. All of these attributes allow the use of ribozymes as a functional genomics tool. Using high throughput synthesis of large libraries of ribozymes, any given mRNA may be scanned for an accessible site. Once found, the target mRNA may be down-regulated to determine its effect on a given cellular phenotype. Once the role of the mRNA has been determined at the cellular level, the same ribozyme may then be used in animal models of disease to determine whether the mRNA is a valid target for therapeutic intervention and whether there are any toxicities associated with the down-regulation of the target mRNA. This final process is called target validation. Complete examples of this process, from gene sequence to a validated therapeutic target, including a lead compound (i.e., the ribozymes used in the target validation experiment), will be presented for two anti-proliferative targets. This seminar is part of the Technology Seminar Series organized by the Strategic Technologies Office (STO) and will be tele-broadcast in Frederick.

