

American Evaluation Association 2007 Meeting

The American Evaluation Association held its annual meeting in Baltimore this year and, as usual, the workshops were great and the program was packed with useful information aimed at evaluators with varied levels of expertise. Baltimore itself was wonderful; somehow they managed to provide very nice weather even in November! Of the sessions I attended, my very favorites were the ones with Brian Yates from American University, who is a sort of evangelist for “cost-inclusive” evaluation. Apparently, his outlook that “costs are at least as important to measure as outcomes” is not universally accepted among evaluators. His theme is that all outcomes have costs and some outcomes are monetary. One of his presentations was about starting a cost study; the other was titled “Costs Are All That Matters.” I also took some workshops:

Quantitative Methods

I spent two days in this workshop, which was quite fast-paced: the instructor claimed to be providing us with a semester’s worth of information! Participants who had never taken methods courses were scrambling a bit to keep up; I have taken methods courses, but not since the last century, so I experienced the workshop as a challenging review. One of the main things I took from the workshop was the instructor’s delightful phrase, “reasonable people will disagree.” It turns out that statistics can be so complicated that not even the statisticians always agree about when and how to use which approaches. For example, there is controversy about whether hypothesis testing is the best way to advance knowledge. Our instructor suggested that, since hypothesis testing is an established tradition in research, we should combine it with other approaches. Here are some other, probably more useful, points I noted:

  • On a rating scale, one person’s “6” might be another person’s “3”
  • More than ten response choices are hard for a respondent to manage; with fewer choices, results tend to be more reliable
  • Always label the midpoint of a rating scale (and the two endpoints, too)
  • Variance in program outcomes can be very important to stakeholders; evaluators are trying to find out whether a program actually had an effect or whether an observed change was simply due to chance (or to something other than the program); a simple version of this kind of test is sketched just after this list
  • Inferential statistics, which are used to generalize to populations of interest (i.e., should I bring more people into my program?), rest on the assumption that the population in question is normally distributed
  • A number of statistical tests have their origins in agricultural research and were never really intended for use with humans
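
Those points suggest a concrete illustration. Here is a minimal sketch, with entirely made-up data (mine, not from the workshop materials), of testing whether an observed difference in program outcomes could simply be due to chance, paired with an effect size in the spirit of combining hypothesis testing with other approaches:

```python
# Minimal sketch with invented data -- not from the workshop.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical outcome scores for program participants vs. a comparison group.
program = rng.normal(loc=72, scale=10, size=40)
comparison = rng.normal(loc=68, scale=10, size=40)

# Two-sample t-test: could a difference this large arise by chance?
# (Like many inferential tests, it assumes roughly normal populations.)
t_stat, p_value = stats.ttest_ind(program, comparison)

# Cohen's d as a complementary effect-size measure, since a p-value
# alone says nothing about how big the effect is.
pooled_sd = np.sqrt((program.var(ddof=1) + comparison.var(ddof=1)) / 2)
cohens_d = (program.mean() - comparison.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

With real data, of course, you would check the assumptions first and, per the warning below, get expert guidance.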

Our instructor ended the two-day session with this warning: “I’ve taught you enough now that you’re really dangerous. Don’t try to do this without expert guidance.” True enough, but a class like this will help me communicate with those experts.

Online Survey Research

This was a half-day class that the instructors had not taught before. I empathized with them as they ran out of time and realized that it should have been a full-day class. Some of my notes:

  • Survey objectives are not the same as program objectives. Surveys can’t fix problems with programs
  • The instructors cited a 27% response rate as a “very good” one
  • Research has shown that people write longer responses to open-ended questions in online questionnaires than they do on printed ones

SurveyMonkey Tutorial Videos

SurveyMonkey has recently introduced three short tutorial videos that anyone can view, without needing a password or login. They plan to introduce more videos over the coming months. (These videos, by the way, are really nice examples of how Camtasia can be used.)

The first three tutorials each demonstrate a different area of SurveyMonkey.

A lot of what these videos cover will already be familiar to those who have used SurveyMonkey, but they can serve as a good resource for classes or for introducing newbie colleagues to the tool. I did learn some very useful things from the tutorials myself, such as:

  • the difference between the “exactly one” and “at least one” settings for multiple-choice questions
  • how to use rating questions to force respondents to rank their choices
  • the bad news about what happens when respondents leave a SurveyMonkey survey uncompleted and return to it later

(…and that bad news is that the record of a partially completed survey is cookie-dependent: it is stored by the web browser on the respondent’s computer. To go back and complete a survey, the respondent must be using the same browser on the same computer, and must not have cleared the cookies!)
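
To make that concrete, here is a minimal sketch, in Python, of the general cookie mechanism; it is purely illustrative, with an invented cookie name and token format, not SurveyMonkey’s actual implementation:

```python
# Illustrative only -- invented cookie name and token, not SurveyMonkey's code.
from http.cookies import SimpleCookie

# Server side: after the respondent finishes a page, the site can send a
# Set-Cookie header that points back at the partially completed response.
cookie = SimpleCookie()
cookie["survey_progress"] = "resp-abc123-page-2"           # hypothetical resume token
cookie["survey_progress"]["max-age"] = 60 * 60 * 24 * 14   # keep for two weeks
print(cookie.output())  # the Set-Cookie header the browser will store

# On a return visit, the site can resume only if the browser sends the
# cookie back. A different browser, a different computer, or cleared
# cookies means no cookie -- and no way to locate the partial response.
incoming = SimpleCookie("survey_progress=resp-abc123-page-2")
print(incoming["survey_progress"].value)  # -> resp-abc123-page-2
```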