NIH LISTSERV
IMAGEJ archives -- July 2004 (#141)


User-Agent: KMail/1.6.2
References: <[log in to unmask]>
Content-Disposition: inline
Content-Type: text/plain; charset="iso-8859-1"
Message-ID:  <[log in to unmask]>
Date:         Thu, 22 Jul 2004 12:11:17 +0100
Reply-To:     ImageJ Interest Group <[log in to unmask]>
Sender:       ImageJ Interest Group <[log in to unmask]>
From:         Gabriel Landini <[log in to unmask]>
Organization: The University of Birmingham, UK.
Subject:      Re: analysis of data.
In-Reply-To:  <[log in to unmask]>

On Thursday 22 July 2004 11:39, Mark Smith wrote:
> In a small survey of ~35 people I have asked them to rank 10 print samples
> in order of "visual quality", 1=best 5=worst.
> I have not defined what I mean by quality, so this is entirely at the
> discretion of the tester.
> I have developed a technique to automatically assign a quality measure to
> the same 10 print samples.
>
> I get a correlation of ~0.75 between the average survey rank and my
> automatic technique. Can I interpret this to mean that my definition of
> quality (i.e. the way the automatic technique works) and the average
> human response are significantly similar? (whatever "significantly" means)

I am no statistician and I do not know the details of your analysis, so forgive me if I have misunderstood your problem.

Ranks are not quantities; they are only an ordinal measure. For a single observer, the distance between ranks 1 and 3 is not twice that between ranks 1 and 2 (and so on), so if you computed an R-squared type of "correlation", then the correlation coefficient may not be the right measure to use.

Perhaps you may want to look into a "rank correlation test for agreement in multiple judgements", and perhaps into "reliability measures" (Cohen's kappa, sensitivity and specificity), although these deal with slightly different problems.

I hope it helps.

Gabriel
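[A sketch of the rank-based alternative suggested above: Spearman's rank correlation treats both variables as ordinal, correlating their ranks rather than their raw values. This is one possible choice, not necessarily the test Gabriel has in mind, and the sample data below are invented for illustration.]

```python
def ranks(values):
    """Return the rank (1 = smallest) of each value, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Find the extent of a group of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx = sum(rx) / n
    my = sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: mean survey rank (lower = better) vs. an automatic
# quality score (higher = better) for 10 print samples.
survey_rank = [1.2, 2.5, 3.1, 4.0, 4.8, 2.0, 3.6, 1.8, 4.4, 2.9]
auto_score  = [0.9, 0.7, 0.5, 0.3, 0.2, 0.8, 0.6, 0.85, 0.25, 0.65]
print(spearman_rho(survey_rank, auto_score))
```

[With these invented numbers the coefficient comes out negative, since a better (lower) mean rank pairs with a higher automatic score; a value near -1 would indicate strong monotone agreement between the two orderings.]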
