National Institutes of Health

Imaging Interoperability Workshop

MARCH 22 - 23, 1999
MARRIOTT HOTEL, POOKS HILL
Bethesda/Potomac Room
Bethesda, MD

March 22, 1999
1:00 PM - 6:00 PM

AGENDA

1:00 pm Welcome and Introductions

*Review of Agenda and Modifications.

*Discussion of the scope of Interoperability Issues including standards/standardization, nomenclatures, quality control.

*Presentations from participants about ongoing data exchanges of image and other data:

  • Problems

  • Concerns

Brinkley, Cohen, Jacobs, Kennedy, Herskovits, Mazziotta, Reiss, Rottenberg, Wong.

    4:15-4:30 pm Break

    4:30-6:00 pm

    * Summary of Data Exchange experiences.

    * Discussion on where to go from here on data exchange.

    6:00 pm Adjourn

8:00 pm No-host dinner - let me know if you will be joining us.


    MARCH 23, 1999
    MARRIOTT HOTEL, POOKS HILL
    Bethesda, MD
    Bethesda/Potomac Room
    8:00 AM - 3:30 PM

    8:00 AM - 12:00 PM
    List and discuss all newly developed tools for analysis of image data:

    • Purpose of tool
    • How well it accomplishes the task
    • How well the measurement agrees with other approaches
    • How many other groups have used it and found it useful
    • Do we need more of these?
    • Catalogue

Brinkley, Cohen, Jacobs, Kennedy, Herskovits, Mazziotta, Reiss, Rottenberg, Wong.

    12:00 PM - 1:00 PM
    Lunch

    1:00 PM - 3:30 PM
    *Summary and Discussion of new tools
    *Proof of Concept: What collaborations can be initiated to enhance our ability for:

    • exchange of image data
    • validation of tools
    • minimization of duplicate efforts in software and hardware development

    *Archiving
    *Interface between our research systems and general clinical imaging systems.
    *The potential need for representative image libraries on which to test image-processing approaches.

    3:30PM
    Adjourn


    DRAFT MINUTES
    PARTICIPANTS
Name               E-mail                         Phone
Tom Aigner         ta17r@nih.gov                  301-443-6975
Lewis Beach        lsb@jhu.edu                    410-288-3053
Jim Brinkley       brinkley@u.washington.edu      206-543-3958
Larry Clarke       lc148m@nih.gov                 301-496-9362
Jonathan Cohen     jdc@princeton.edu              609-258-2696
John George        jsg@lanl.gov                   505-665-2550
Edward Herskovits  ehh@welchlink.welch.jhu.edu    410-955-2353
Michael Hirsch     mhirsch@helix.nih.gov          301-443-1815
Steven Hyman       shyman@mail.nih.gov            301-443-3673
Russell Jacobs     rjacobs@caltech.edu            626-395-2849
Rex Jakobovits     rex@cs.washington.edu          206-329-6881
Dave Kennedy       dave@cma.mgh.harvard.edu       617-726-5711
Cheryl Kitt        ck82j@nih.gov                  301-496-1431
Stephen Koslow     koz@helix.nih.gov              301-443-1815
Curt Langlotz      langlotz@erols.com             609-722-5666
John Mazziotta     mazz@loni.ucla.edu             310-825-2699
Kathleen Michels   Kathleen_Michels@nih.gov       301-435-6031
David Rottenberg   dar@pet.med.va.gov             612-725-2230
Dan Sullivan       ds274k@nih.gov                 301-496-9531
Stephen Wong       swong@radiology.ucsf.edu       415-476-6542

A brief overview of each participant's HBP-funded research grant project was presented.

    1. Discussion of Interoperability Issues (Data vs. Tools)

    • Lack of conformity of data formats.
    • Poor documentation of software tools.
    • Difficulty in sharing data in the research community.
    • Difficulty in sharing and interoperability of various available programming codes.
    • Lack of adequate documentation for sharing multiple programming languages.
    • Socioeconomic and sociopolitical issues
    • The need to concentrate on APIs and data formats, to improve their compatibility, and establish quality control via oversight committee evaluation.
    • There is a need to establish proof that any one program accomplishes the outcome of its stated design.
    • There is insufficient documentation of quality control.
• Need to standardize multidimensional data and to establish extensible metadata standards.

The recommendation was made for a paradigm shift toward a server-based model for addressing smaller experimental problems. To accomplish this, a mechanism needs to be established for adequately analyzing and validating tools via feasibility studies, and then for making this information and software server-accessible, with appropriate documentation (for server interfacing and for the use of the tools), rather than requiring groups to share code.
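The server-based model can be sketched in miniature. The Python sketch below is purely illustrative: it uses the standard library's XML-RPC modules as a modern stand-in for the proposed mechanism, and the `smooth` "tool" and its parameters are invented. The point is that a remote user invokes the tool through a documented interface, and the implementation never leaves the server.

```python
# Illustrative sketch only: a hypothetical analysis "tool" made
# server-accessible via Python's standard-library XML-RPC modules.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def smooth(values, width=3):
    """Stand-in tool: trailing moving average over a list of numbers."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# Publish the tool on a local server (port 0 = any free port).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(smooth, "smooth")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "remote" user calls the tool through its documented interface.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.smooth([1.0, 2.0, 3.0, 4.0], 2)
server.shutdown()
```

Only the interface documentation, not the source code, needs to be distributed under this model.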

    2. Additional Concerns:

• This server-based model will be limited by the size of a given data set. Internet2 may eventually solve this problem; if not, perhaps a successor network will.
• Major support would be needed for such a server.
• The DICOM standard works well in the clinical radiology environment, but it is not yet appropriate for functional neuroimaging.

    3. Discussion about Formatting Issues:

    • All laboratories are likely to continue to use proprietary formats as well as common ones. Every site should have some basic, similar capability.
    • There is a need to create translators.
    • Format specifications require better documentation, thereby rendering translators easier to create.
    • The need for multiple data sets and multidimensional data formats.
    • The need for meta standards, coupled with standards, archiving, and repositories.
    • How you store data depends upon what you plan to derive with it.
    • CORBA interface needed for research – put a wrapper around the data.
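The translator point above can be illustrated with a toy example: once a format's header layout is documented, writing a translator is mechanical. The field layout below is invented for illustration and is not any real scanner format.

```python
# Illustrative sketch only: the header layout is hypothetical.
import struct

# Documented (invented) header: three 2-byte dimensions, one 4-byte float.
HEADER = struct.Struct("<hhhf")

def read_header(raw: bytes) -> dict:
    """Translate the binary header into a format-neutral dictionary."""
    nx, ny, nz, voxel_mm = HEADER.unpack_from(raw)
    return {"dims": (nx, ny, nz), "voxel_mm": voxel_mm}

def write_header(meta: dict) -> bytes:
    """Write the same metadata back out, e.g. toward a common format."""
    nx, ny, nz = meta["dims"]
    return HEADER.pack(nx, ny, nz, meta["voxel_mm"])

# Round-trip through the documented layout.
meta = read_header(write_header({"dims": (128, 128, 64), "voxel_mm": 1.5}))
```

With the layout specified, the same few lines can target any documented source or destination format.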

4. Stephen Wong gave a brief overview of the capabilities of CORBA.

    • OMG - Object management group - 800 member organizations or vendors.
• CORBA 1.0 introduced IDL (Interface Definition Language), which transcends any single programming language to define attributes, functions, and objects; the client and server sides can be implemented differently.
• CORBA 2.0 added IIOP for Web browsers.
• CORBA 3.0 is now available.
    • Healtheon, for example, is using a CORBA approach for administrative and financial data.
    • The need to have a defined nomenclature for CORBA.

The group felt that there should be a central resource to facilitate the sharing of collaborative software with the research community. This facility should take a package and modify it so that it can be implemented on a number of platforms. Tools should be recompiled to high standards of nomenclature, documentation, and portability.

DICOM H/W and S/W toolkits are now widely available commercially. There are currently about 240 vendors offering products in various aspects of PACS, and most include DICOM capabilities. Public-domain DICOM S/W is also available.

    5. Technical vs. conceptual problems

• Research developments will help commercial developments for clinical radiology, thereby allowing basic and clinical research to advance.
    • Need executable log files for any standard program.
    • Need to be able to control the program and to know what analysis is being done.
• NIH Image is a great example, as are AFNI and AIR.
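The "executable log file" idea in the bullets above can be sketched as follows. The analysis steps are placeholders, not any real package's API: each step records its name and parameters, so the log alone both documents the analysis and can replay it.

```python
# Illustrative sketch only: placeholder analysis steps with a
# machine-readable, replayable log of what was done.
import functools

log = []  # the "executable log": one entry per analysis step

def logged(func):
    @functools.wraps(func)
    def wrapper(data, **params):
        log.append({"step": func.__name__, "params": params})
        return func(data, **params)
    return wrapper

@logged
def threshold(values, cutoff=0):
    return [v for v in values if v >= cutoff]

@logged
def scale(values, factor=1):
    return [v * factor for v in values]

result = scale(threshold([1, 5, 9], cutoff=4), factor=2)

def replay(steps, data):
    """Re-run a logged analysis, possibly on different input data."""
    for entry in steps:
        step = globals()[entry["step"]].__wrapped__  # skip re-logging
        data = step(data, **entry["params"])
    return data
```

A log like this answers exactly the concern above: anyone can see, and verify, what analysis was done.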

    Everyone expressed the concern that a major impediment to the advancement of the research field is the insufficiency of competitive salary support for programmers within the public sector when compared to that in the private commercial sector.

We need to make existing tools more accessible, with a complete list and full documentation, and we need to agree upon and develop standards and protocols.

An information framework is needed for this research, with data models, data-flow sheets, and concurrence about the use of semantic terms.

Tutorials, benchmarks, and the like are also needed.

One suggestion was to adopt a limited number of commonly shared platforms in the field. This would serve as a voluntary mechanism for establishing consistent, standardized data that could be readily shared and cross-analyzed among investigators in the field.

    Data sharing as a resource: Professional scientific journals, under the aegis of their editors, could play a vital synergistic role in this endeavor by serving as potential resources for the provision of centralized data repositories.

    Day 2

    6. Framework for tool sharing

    • Need for concurrence in identifying appropriate databases to allow data sharing.
    • Development of available tools for providing this capability.
• Need to establish a testing capability with appropriate:
  • Standards
  • Pipeline
  • Modular data-flow model:
    • Preprocessing
    • Brain surface extraction
    • Bias field correction
    • Image registration
    • Image segmentation
    • Image analysis
    • Testing
    • Visualization

    This needs to be done so that derived data will be reproducible and generalizable.
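The modular data-flow model listed above can be sketched as a chain of atomic stages. The stage implementations below are toy stand-ins named after items in the list, and the "image" is just a list of numbers.

```python
# Illustrative sketch only: toy stages named after the data-flow list.
def bias_field_correction(img):
    """Toy correction: remove the mean offset from the image."""
    mean = sum(img) / len(img)
    return [v - mean for v in img]

def image_segmentation(img, cutoff=0.0):
    """Toy segmentation: binarize the image at a cutoff."""
    return [1 if v > cutoff else 0 for v in img]

def run_pipeline(img, stages):
    """Pass the image through each modular stage in order."""
    for stage in stages:
        img = stage(img)
    return img

mask = run_pipeline([2.0, 4.0, 6.0],
                    [bias_field_correction, image_segmentation])
```

Because each stage has the same shape (data in, data out), stages can be swapped, reordered, or tested in isolation, which is what makes the derived data reproducible and generalizable.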

    It will be necessary to:

• Define categories of data
• Define a common data format
• Provide standard datasets for testing algorithms
• Establish the level of granularity
• Establish a resource or repository for data maintenance
• Proof of concept: quality control, documentation
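One way to use standard datasets for testing algorithms is a fixed agreement score between a tool's output and a reference. A common choice for comparing segmentations is the Dice overlap coefficient; the masks below are invented toy data.

```python
# Illustrative sketch only: toy binary masks, not real image data.
def dice(a, b):
    """Dice overlap coefficient between two equal-length binary masks."""
    assert len(a) == len(b)
    overlap = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    total = sum(a) + sum(b)
    return 2 * overlap / total if total else 1.0

reference = [0, 1, 1, 1, 0, 0]     # "gold standard" mask from a test set
tool_out  = [0, 1, 1, 0, 0, 0]     # a candidate tool's segmentation
score = dice(reference, tool_out)  # 2*2 / (3+2) = 0.8
```

Scoring every candidate tool against the same reference data is what turns a catalogue of tools into a testing matrix for comparisons.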

    To get there:

    • Oversight board
    • Contract
    • Intramural activity
• Private industrial input and support (i.e., if potential proprietary viability can be demonstrated as a financial incentive)
    • Public domain
    • Goal-oriented pipeline

The suggestion was made that it would be valuable to survey the international community to ascertain their current situation and to determine whether the views expressed at this meeting are representative of the broader scientific community. The consensus was that the Dusseldorf Human Brain Mapping meeting would be an excellent professional conference at which to initiate this effort. The questionnaire should be both informative and open-ended in format and scope.

    We need to know:

    • Data sets
    • Tools used
    • Tools desired
    • Hardware platform
    • Software platform

    The following approach was offered for this issue:

1. Provide standard data sets and benchmarking. Identify the types needed and whether any suitable ones already exist. Develop testing matrices for comparisons.

    2. Model

Meta environment
Modular/data flow: What are the key steps and tools? Identify which ones now exist.

    3. Mechanism/logistics
    server-based
    Portability/cross-platforms

    4. Funding Mechanisms
    Mazziotta suggested the following categorical approach:

    • Preprocessing
    • Normalization
    • Segmentation
    • Statistical analysis
    • Visualization

In addition, the group suggested that we need to know the following data-type attributes:

Data
  Types
  Formats
  Standards

Environment
  Hardware
  Software structure
  Processing models

Capabilities
  Needed
  Available
  Testing/comparison/quality control

Mechanisms
  Social
  Standards

The group consensus was that, in order to move forward on the issues of interoperability, it would first be necessary to determine whether the broader international community agreed with this need. If affirmative, an International Workshop would be hosted by the HBP to define these needs more exactly and to outline detailed alternative approaches to solving the first step in this problem, namely the sharing of software. To move in this direction, it was agreed that:

1. John Mazziotta, as a member of the HBP and a member of the council of the Human Brain Mapping conference, would present this issue to the council members. The request would be both for time to present this problem and approach to the meeting attendees and for permission to distribute a questionnaire to solicit their input on this issue. John and others at this meeting prepared the following statement to be presented to the HBM council:
    2. David Rottenberg and others at this meeting prepared a questionnaire. The questionnaire would be on a Web-based site, either at the NIMH or elsewhere, where the derived responses subsequently would be compiled into a database. The following questionnaire was developed:


The following is a description of the survey initiated by grantees of the HBP at the Annual Human Brain Mapping Meeting in June 1999 in Dusseldorf. This survey is still being conducted online at Stanford University.

    Below you will find a questionnaire that was distributed and discussed at the Human Brain Mapping Meeting last month in Dusseldorf. It is self-explanatory and requests information of use to the field that will be collated by members of the Human Brain Project, a consortium of federal agencies in the United States, whose goal is to build a neuroinformatics base for the field of neuroscience.

    We greatly appreciate your participation and look forward to distributing to you the results of this survey.

    I. INTRODUCTION

    A group of Principal Investigators funded by the US Human Brain Project (HBP) met recently at the National Institutes of Health (NIH) to discuss the status of interoperability (sharing of information resources) in brain mapping research. This group recognized a need for greater dissemination of neuroimaging datasets and software tools as well as a need for the means of comparing and evaluating new and existing software tools across the relevant brain mapping communities.

    To help meet these needs the HBP will convene an international workshop at the NIH to further explore the problems posed by interoperability and to outline a framework for addressing them. This group will solicit input from investigators worldwide whose interests encompass all relevant imaging modalities and scales of analysis.

Toward this end, we wish to enlist the assistance of the Organization of Human Brain Mapping (OHBM) Council to (1) include in the agenda of the upcoming Dusseldorf meeting a brief presentation of the perceived problem and of the approach to addressing it currently being pursued by the HBP, and (2) assist in reaching the general membership of the OHBM in order to ensure their participation and guidance in planning this process. Finally, we propose to poll the OHBM membership by means of a brief questionnaire, a preliminary draft of which appears below.

    II. BRAIN MAPPING INTEROPERABILITY SURVEY

    WHO ARE YOU? (select all that apply)
    1. User of image processing/analysis tools
    2. Developer of image processing/analysis tools
    3. Neither of above

    WHERE DO YOU WORK?
    1. North America
    2. South America
    3. Europe
    4. Asia
5. Australasia
    6. Other

    WHAT KIND OF INSTITUTION DO YOU WORK IN? (select all that apply)
1. University
    2. Research Institute
    3. Government Facility
    4. Hospital
    5. Other

    WHO IS YOUR LABORATORY CHIEF/DEPARTMENT HEAD?
    1. (fill in)
    2. Don't have one

    WHAT KIND OF WORK DO YOU DO? (select all that apply)
    1. Diagnostic Imaging
    2. Clinical Research
    3. Neuroscience Research
    4. Computer Science/Informatics Research
    5. Other

    WHAT TYPE OF SUBJECTS DO YOU STUDY? (select all that apply)
    1. Animals
    2. Normal Human Subjects
    3. Abnormal/Diseased Human Subjects
    4. None of the above

    WHAT KIND OF DATA DO YOU USE? (select all that apply)
    1. CT
    2. Structural MRI
    3. Functional MRI (fMRI)
    4. MR Spectroscopy (MRS/MRSI)
    5. PET
    6. SPECT
    7. EEG/EP
    8. MEG
    9. TMS
    10. Autoradiographic
    11. Optical
    12. Other

    WHAT HARDWARE PLATFORMS DO YOU USE? (select all that apply)
    1. Sun
    2. SGI
    3. DEC
    4. HP
    5. PC
    6. MAC
    7. Other

    WHAT OPERATING SYSTEMS DO YOU USE? (select all that apply)
    1. SunOS/Solaris
    2. HPUX
    3. IRIX
    4. Linux
    5. VMS
    6. Windows
    7. MacOS
    8. Browser-based
    9. Other

    IN WHAT LANGUAGE IS MOST OF YOUR CODE WRITTEN? (select all that apply)
    1. C
    2. C++
    3. Fortran
    4. IDL
    5. Matlab
    6. Mathematica
    7. Java
    8. Other

    WHAT DATA FORMATS DO YOU USE? (select all that apply)
    1. Analyze
    2. DICOM
    3. Manufacturer's Proprietary Format
    4. Internal Format (user defined)
    5. Graphical Image Format (e.g., GIF, JPEG)
    6. Other

    WHAT PROCESSING CAPABILITIES DO YOU HAVE? (select all that apply)
    1. Image Reconstruction and Preprocessing
    2. Brain Surface Extraction (stripping)
    3. Image Registration/Alignment
    4. Bias Field Correction
    5. Image Segmentation
    6. Image Warping
    7. Statistical Analysis
    8. Visualization
    9. Database Management
    10. Other

    WHAT SOFTWARE PACKAGES DO YOU USE? (select all that apply)
    1. AFNI
    2. AIR
    3. Analyze
    4. Lyngby
    5. Medex
    6. NIH Image/Brain Image
    7. SPM
    8. Stimulate
    9. Other

    WHAT STATISTICAL METHODS DO YOU EMPLOY? (select all that apply)
    1. General Linear Model/ANOVA
    2. Correlational Analysis
    3. Principal Component Analysis (PCA)
    4. Scaled Subprofile Model Preprocessing
    5. Independent Component Analysis
    6. Canonical Variables Analysis
    7. Other

    WHAT DATA SPACES DO YOU USE? (select all that apply)
    1. Talairach
    2. Other

    WHAT VISUALIZATION TECHNIQUES DO YOU USE? (select all that apply)
    1. Cross Sectional (2D) Brain Slices
    2. Rendered Surface
    3. Inflated Surface
    4. 3D Volume
    5. Orthogonal Projections
    6. Other

    WHAT SOFTWARE TOOLS WOULD YOU MOST LIKE TO HAVE? (select all that apply)
1. Image Reconstruction and Preprocessing
    2. Brain Surface Extraction (stripping)
    3. Image Registration/Alignment
    4. Bias Field Correction
    5. Image Segmentation
    6. Image Warping
    7. Statistical Analysis
    8. Visualization
    9. Database Management
    10. Other (list)

    DO YOU HAVE SOFTWARE YOU WISH TO DISTRIBUTE?
    1. Yes
    2. No
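The plan above calls for Web-based responses to be compiled into a database. A minimal sketch of that step, using Python's built-in sqlite3, follows; the field names and responses are invented examples, not the survey's actual schema.

```python
# Illustrative sketch only: invented schema and example responses.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE responses (
    respondent TEXT, region TEXT, data_kinds TEXT, languages TEXT)""")

# Each submitted questionnaire becomes one row.
conn.execute("INSERT INTO responses VALUES (?, ?, ?, ?)",
             ("anon-1", "North America", "fMRI;PET", "C;Matlab"))
conn.execute("INSERT INTO responses VALUES (?, ?, ?, ?)",
             ("anon-2", "Europe", "Structural MRI", "C++"))

# Tabulate responses by region for the survey summary.
by_region = dict(conn.execute(
    "SELECT region, COUNT(*) FROM responses GROUP BY region"))
```

Once responses are rows in a table, summaries for the workshop (by region, by platform, by tool) are single queries.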

After the group reviewed and modified both the statement (1) and the questionnaire (2), the meeting was officially adjourned.

    POST MEETING COMMENTS:

    1. One alternative to having everyone share everyone else's code would be to have each tool developer act as a "consulting center" for that tool; people with data would collaborate with that developer, and would send the developer the data. This model would be more appropriate for complex tools, where the effort for analyzing a data set is less than the time it would take to teach others to use the tool (even with adequate documentation).

    2. I would mention the tradeoff between extensibility of coding/data standards and the complexity of writing code to these standards. For example, if we set very flexible/extensible standard data formats, but they require that tool developers acquire complex new skills (i.e., learning a new programming language or complex API), the standards may be seen as exclusionary or impractical by the larger community.

    3. I would support a fine-grained, pipeline approach to software development, in which each tool performs a precisely defined "atomic" task, and data are piped through a series of tools to accomplish more complex tasks. Any loss of efficiency is more than compensated for by gains in flexibility, clarity, ease of validation, and ease of maintenance.
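The fine-grained pipeline in comment 3 can be sketched with generators: each "atomic" tool consumes and yields a data stream, so tools compose like a Unix pipe. The tools below are purely illustrative.

```python
# Illustrative sketch only: each "atomic" tool is a generator stage.
def clip(stream, lo, hi):
    """Atomic tool: bound every value to the range [lo, hi]."""
    for v in stream:
        yield min(max(v, lo), hi)

def diff(stream):
    """Atomic tool: yield successive differences of the stream."""
    prev = None
    for v in stream:
        if prev is not None:
            yield v - prev
        prev = v

data = [1, 5, 12, 3]
piped = list(diff(clip(data, 0, 10)))  # two atomic tools in series
```

Each stage can be validated and maintained on its own, which is exactly the flexibility-for-efficiency trade the comment argues for.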

    4. Along these lines, we should probably distinguish between "mature" tools and those still in development, and not include software in the central repository on the basis of desirability or novelty alone. The tools that are mature should be re-coded to conform to standards, and released to the general user community, whereas tools in development must await validation/QA, may depend on new data types that haven't been standardized yet, and in general will be much more difficult to share.

Perhaps the criterion for making a tool widely available should be agreement by a majority of developers or HBP researchers that the tool is stable. The reasoning here is to prevent problems in the developer community caused by poorly documented, pre-alpha versions of software.