NIH LISTSERV
IMAGEJ archives -- August 2003 (#66)


Content-Type: text/plain; charset=US-ASCII; format=flowed
Message-ID:  <[log in to unmask]>
Date:         Mon, 11 Aug 2003 17:09:34 -0400
Reply-To:     ImageJ Interest Group <[log in to unmask]>
Sender:       ImageJ Interest Group <[log in to unmask]>
From:         Wayne Rasband <[log in to unmask]>
Subject:      Re: Why is 16->8 bit image conversion done this way?
In-Reply-To:  <[log in to unmask]>

> We are doing some quantitative analysis on images (both 8-bit and
> 16-bit) and wanted to set thresholds which were similar in both image
> sets.
>
> I originally thought that the bit depths just scaled linearly. When I
> noticed it didn't, I poked around in the code and saw the Calibration
> class.

ImageJ does linear 16-bit to 8-bit conversion. It scales from min-max to
0-255 when "Scale When Converting" is checked in
Edit>Options>Miscellaneous; otherwise it scales from 0-65535 to 0-255.
Can you provide a simple example that shows otherwise?

> I'm just curious what is the history behind this chunk of code.

The Calibration class, which stores spatial and density calibration
information, is not used when converting from 16-bit to 8-bit.

-wayne
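
For reference, the two scaling modes described above amount to something
like the following minimal Java sketch. This is only an illustration,
not ImageJ's actual conversion code; the class and method names
(SixteenToEight, to8Bit) are made up:

public class SixteenToEight {

    // Convert a 16-bit pixel array to 8-bit.
    // If scaleWhenConverting is true, map [min, max] of the image
    // linearly to [0, 255] (the "Scale When Converting" behavior);
    // otherwise map the full range [0, 65535] to [0, 255].
    static byte[] to8Bit(short[] pixels, boolean scaleWhenConverting) {
        int min = 0, max = 65535;
        if (scaleWhenConverting) {
            min = 65535;
            max = 0;
            for (short p : pixels) {
                int v = p & 0xffff;       // Java shorts are signed
                if (v < min) min = v;
                if (v > max) max = v;
            }
            if (max == min)
                max = min + 1;            // avoid divide by zero on flat images
        }
        double scale = 255.0 / (max - min);
        byte[] out = new byte[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int v = (pixels[i] & 0xffff) - min;
            if (v < 0) v = 0;
            int b = (int) (v * scale + 0.5);
            if (b > 255) b = 255;
            out[i] = (byte) b;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] px = {1000, 4095};
        byte[] fixed  = to8Bit(px, false);  // 0-65535 -> 0-255: gives 4, 16
        byte[] scaled = to8Bit(px, true);   // 1000-4095 -> 0-255: gives 0, 255
        System.out.println((fixed[1] & 0xff) + " vs " + (scaled[1] & 0xff));
    }
}

Note that with "Scale When Converting" checked, the mapping depends on
each image's own min and max, so the same raw 16-bit threshold can land
at different 8-bit values in different images; with it unchecked, the
mapping is the same fixed linear scaling for every image.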



