
Musings on what should be published

I just reviewed a paper that wasn’t stupid and that asked an important question. It’s just that it was thin, and a null result. It used 80 participants across 4 cells, and it wasn’t repeated measures. They weakly replicated one finding, but found no effect for what was most likely the thing they were really going for.

I’m getting very sensitive to the file-drawer problem. If we have sensible data, should it languish? Yet there is a problem with cluttering up the journals with short, underpowered studies.

I left it up to the editor (who is my colleague) to reject it.

What I would have wanted to see was, first, better power. Then, follow-up work on the particular question.
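
Just to make “better power” concrete for myself, here is a minimal simulation sketch in Python. The numbers are my own assumptions (four between-subjects cells of 20 people each, with one cell shifted by half a standard deviation), chosen only to roughly mirror an 80-participants-in-4-cells design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_cell = 5_000, 20          # 80 participants in 4 between-subjects cells
cell_means = [0.0, 0.0, 0.0, 0.5]       # hypothetical: one cell differs by d = 0.5

hits = 0
for _ in range(n_sims):
    groups = [rng.normal(m, 1.0, n_per_cell) for m in cell_means]
    if stats.f_oneway(*groups).pvalue < 0.05:   # one-way ANOVA across the 4 cells
        hits += 1

print(f"Estimated power: {hits / n_sims:.2f}")  # lands well below the usual .80 target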

But, this makes me think about publishing policy. I really understand the desire to publish things that “work” (except that the indications of what works are so weak in psychology). It is like wanting to unveil the final sculpture, the polished version of the violin concerto, the bug-free version of the software – not all the sketches and wrong steps and other discards along the way. You want to publish a real Finding – even if (as in all research) it is tentative.

But the sketches, wrong turns, pilots, and honing carry some kind of information. At least sometimes it is really important to know what doesn’t work. And, as was evident from the special issue on replication, there is work out there that people informally know does not work, but that is not in the public record because the failure to replicate has not been published.

We had a brief discussion about this at last year’s “solid science” meeting. Joe Simmons said that there really are loads of pilots of ideas that turned out to be crap, and that these don’t need to be cluttering up cyberspace and our ability to navigate information, whereas Jelte Wicherts thought it is really important to have a data record.

I’m very ambivalent. There is so much data collected – I’m thinking of a lot of final theses that are done – where the research is the equivalent of an arts and crafts project that shows you can do this kind of work, but doesn’t really add to the research record.

Or, all those pilots that you do to tweak your instruments and methods. What to do with those? Meehl, in his theory of science videos, suggested that you collect that info in short communications, just for the record.

I’m thinking of two file-drawers I have. One of them really demonstrates that the phenomenon we were testing doesn’t exist. It is a boundary condition. As such, it might have been important to have it out there (5 studies, 90 people in 3 conditions in each, repeated measures). I have another set of 9 studies looking at threat and attention that are more of the “tweak the paradigm” type. Something happened, but it was terribly messy to interpret, and thus we were working on finding an angle where the results could be clearer and more interpretable. How do you make that distinction?

I have some idea here that it would be nice if one could spend that time with the sketches, replicate once something works, and only publish when one feels fairly certain that there is something there (possibly including links to the sketches). Which, of course, is not how it is done right now, because of the incentive structure.


Social Psychology Replication – Special Issue.

The first time I met Daniël Lakens, he and Brian Nosek were working on a special issue of Social Psychology, calling for registered replications of well-known, highly cited studies.

It is now out! Fifteen articles attempting replications, with, let us say, mixed results.

I’m linking in the PDF as they posted it on the OSF framework, so you get both the text and more exposure to the framework for your future collaboration efforts!

Some people, Science reports, don’t like being replicated, at least when the results are different. I’m thinking that once things are out there in the record, the work really is up for being replicated or questioned. I thought that was the point! Maybe, once this is done more regularly, people will adapt and won’t go all drama. Exposure therapy, I believe, has evidence on its side.

Chris Chambers, who has long been at the forefront of the call for registered reports (and implemented them at Cortex), has a more uniformly positive view of the practice here.

I have done a first skim-through, and clearly, clearly we need to put a lot more effort into replicating results, march more slowly, and be careful about what we accept.



Signs of reliable and unreliable research, reason and persuasion, and an RIP.

I thought I had more cool posts to share, but I got so wrapped up in the Baumol disease that I got discombobulated.

But yes, there are plenty more good posts to share, so I’m sharing them now.

About a week ago, my bud Daniel Lakens reported on this find on his blog. A paper even older than me! Yes, people have been thinking about these issues for a long time.

Sylvia McLain asks if Spotting Bad Science really is as easy as a nice poster giving instructions on how to do it. And, of course, if it really were, there wouldn’t be as much bad science. But as a handy-dandy tool it can be a useful starting point. The creator of the poster answers in the comments, and there is a good conversation.

Speaking of Bad Science, JP de Ruiter linked in a Brain Pickings article highlighting Carl Sagan’s baloney detection kit (got that?).

Tom Stafford linked in his draft of this very lovely article on rational argument. He brings up both Cialdini and argument as a means of persuasion rather than of establishing correctness. As it is draft 2, it may evolve further, but I thought it was just great.

Last month, Keith Laws and others debated whether CBT for psychosis had been oversold. It was all filmed, so you can check it out here (as I watched it at the same time as I was reading about montage and cutting techniques, I found myself wishing for some of those, plus a good sound engineer, but you can’t have everything). A Storify from Alex Langford appears here. I considered it a good example of how a good anecdote trumps good data as far as persuasion goes (which ties in with the Cialdini in Tom Stafford’s piece), but I’m not a clinician. Worth checking out, though.

Last, I was very sad to hear that Seth Roberts died. I’ve followed his blog for a few years now, and I thought him very interesting, innovative and thoughtful (I even posted in his comments once, regarding all the Stapel fraud stuff, as he had been involved in that).


On the Research University, and on Good Science – two articles/blogs you should read.

I’ve been working on teaching myself how to do a meta-analysis, with no really clear results yet (someone needs to help me, I think, though I get the gist of it, enough to worry about messing things up). This means massive blog neglect.
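
For my own notes, this is roughly the level I’m at: a minimal fixed-effect, inverse-variance-weighted meta-analysis sketch in Python, with made-up effect sizes and standard errors, just to have the basic computation written down somewhere:

```python
import numpy as np

# Hypothetical standardized effect sizes (e.g., Cohen's d) and their standard errors
effects = np.array([0.40, 0.15, 0.55, 0.10, 0.30])
ses     = np.array([0.20, 0.25, 0.30, 0.15, 0.22])

# Fixed-effect model: weight each study by the inverse of its variance
weights   = 1.0 / ses**2
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The real work (random-effects models, heterogeneity, publication bias) is where I still need the help.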

But, others have kept up blogging and writing articles. I wanted to share a couple of those.

The first I got via Stephen Hsu: it’s a Chronicle article by Nicholas Lemann called The soul of the research university. I have been thinking about the university, the conflict (and status differential) between research and teaching, and how that historically came about. I think this article answers some of those questions, although I would appreciate any historian piping up and setting me and others straight.

It clearly highlights the perception gap between the research focus, which dominates inside research institutions, and the education focus, which seems to be the perception from outside them.

It also brings up a couple of what I would consider economic questions (economists, feel free to correct me here too): the research that universities engage in is, in many ways, a high-risk endeavor. There is no guarantee of a pay-off, and if there are pay-offs they may very well lie far in the future (when did the internet begin? The early 60s? Yes, I know, defense and such, but also universities, if I recall correctly).

Also, it brings up what is called Baumol’s disease, which I first heard about from an online teacher friend. Person-centered work – like teaching, research, live performance, certain services – cannot be automated effectively (as much as the MOOCs try). But it still has to be paid for, and it cannot be made much cheaper (well, except through adjunctification). I’m not sure I would cast that in “disease” terms – I figure much of the progress we make was originally so that we could live well as humans – but it is a dynamic to consider.

It really is worth a read, and I would like to read more in this area, as I’m very interested in these kinds of policy questions.

The second is from Brent Roberts, on the PIGEE blog. It’s a follow-up to his Deathly Hallows post (read that too), and it focuses on his scary vision of good science: a vision that involves asking good, interesting questions, and damn the direction of the results. As he says, BOO.

Visualisation of statistics.

I’ve met people who want statistical concepts written out as equations, and who swear that graphs and other visual aids don’t help them. Humans vary. I’m very visual, so I like graphs and other visual aids. Like the LA Natural History Museum’s demo of the normal distribution.

Here are some nice visualizations of statistical ideas that I found in my twitter stream. I only remember that the last one, on p-values, came from Chris Said, because that was yesterday. (A small simulation sketch of my own follows after the links.)

What do p-values look like?

Understanding statistical power and significance testing, by Kristoffer Magnusson

Conditional Probability by Victor Powell
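
And since I like the visual route, here is a minimal simulation sketch of my own for the p-value one (assuming two-sample t-tests, n = 20 per group, and a true effect of half a standard deviation, numbers I just picked for illustration): under the null, p-values are spread out uniformly; with a real effect, they pile up near zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group, true_d = 10_000, 20, 0.5

def simulate_p_values(effect_size):
    """Run many two-sample t-tests and return their p-values."""
    p_values = np.empty(n_sims)
    for i in range(n_sims):
        control   = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        p_values[i] = stats.ttest_ind(treatment, control).pvalue
    return p_values

p_null = simulate_p_values(0.0)      # no true effect: p-values are roughly uniform
p_alt  = simulate_p_values(true_d)   # true effect: p-values pile up near zero

print(f"Proportion p < .05 under the null: {np.mean(p_null < 0.05):.3f}")  # about .05
print(f"Proportion p < .05 with d = 0.5:   {np.mean(p_alt < 0.05):.3f}")   # the power
```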


Science problems made Cracked. Will it jump the shark next?

Science made Cracked, and not in a good way.

Six Shocking Studies That Show That Science Is Totally Broken.*

The headlines of the six:

#6. A Shocking Amount of Medical Research Is Complete Bullshit
#5. Many Scientists Still Don’t Understand Math
#4. … And They Don’t Understand Statistics, Either
#3. Scientists Have Nearly Unlimited Room to Manipulate Data
#2. The Science Community Still Won’t Listen to Women
#1. It’s All About the Money
Of course, anybody reading here knows about these problems, and has for a long time. But this just shows how urgent it is not to ignore the damned problems!

I’m saying three words first:

Taxpayer funded.

And then one more:

Trust.


*(Prove. They wrote Prove. I cannot stand that when you are not doing maths. They show, demonstrate, illustrate, raise the confidence, are consistent with, gah. Pass the smelling salts. OK, as you were. I’m sure some smartass will comment that it is just fine to say prove.)


A new round on Social Priming.

PoPS has a section in their new issue containing responses from the pro-priming people. Alas, it is behind a paywall, but at least some here do have access. It is an interesting read, although I don’t agree with some of it. (My position, somewhat vaguely, is that I’m sympathetic to the idea behind behavioral priming – that we are sensitive to our surroundings and respond to them in ways that we are not really aware of – but I suspect that the conceptualization of it is problematic. Don’t ask me to come up with a better one.)

But, I also wanted to link in Daniel Lakens’ blog response to the special issue, which, of course, is open to anybody with access to the net. I thought it was a very nice response.

Reforming Academia

From Dynamic Ecology, thoughts about how to change the funding schemes to ensure an academy focused on research, not prestige. I found the first answer quite interesting. But I have never heard Canada held up as a model before (poor Canadians).

From What’s the PONT comes an intriguing post about the scaling problem. It may not be possible to scale up things that work on a small scale. There is a limit to the economy of scale. At some point in the scaling up, something becomes lost (perhaps it undergoes a kind of bifurcation or critical point). I think this is something to keep in mind when we try to educate more and more with less and less. With the unraveling of the MOOCs, it is clear that it just won’t work. (And people who had looked at this before basically said “I told you so”. Not quite me, I must confess, until someone pointed out that long-distance education is an old gambit, and the problems don’t go away just because we have new fancy tools.) Even Sebastian Thrun has admitted it. There is a snarkier version from Rebecca Schuman in Slate.

Universities have been hoping to make money on patents from their researchers’ work. This is most definitely the hope at Lund, and I did read about it in Paula Stephan’s book. But it is a poor bet. Most of it won’t pay off.

Samuel Arbesman says, first, to bring back the generalists (yay, I say, as I can’t make up my mind whether I’m interested in emotion, modeling, evolutionary psychology, methodology, behavioral economics, chaos theory, philosophy…), but also that innovation and research are no longer in the academy, but among the startups. Going Changizi, as I like to say.

Publishing and open access world links.

And, in this post, I link in things related to publishing and open access.

Randy Schekman won the Nobel Prize, and dissed the glam mags (that is, Nature, Science and Cell). Here is his The Conversation piece on how to break free from Glam. Not everyone took kindly to what he said. Here is Opiniomics suggesting that he may be a hypocrite, given that he has published in the glams. But perhaps that was before they were truly glam. Hypocrite or no, I think it is something that needs to be discussed even more than it is now. But I don’t think it is really the glams’ fault. Glams wouldn’t be glams if there wasn’t a market clamoring for them. Like those deciding on grants and careers by counting glossy covers. Yes, science as Hollywood. Vote for the sexiest research project of the year! The Ronin Institute articulated this well.

Related, here is Stephen Curry on the problem with the glam magazines. It is a commentary on a debate that he links to (confession: I haven’t watched it. Two hours!), but I think his commentary is worth it, sans watching.

Elsevier, the publisher that seems to be everyone’s favorite hate-target, started telling researchers and everybody else to take down the PDFs of their own (Elsevier-published) research. Which, well, they are legally allowed to do, as we regularly sign away our rights. But it has been a sort of tacit custom that you get to keep your PDFs on your home page. Sort of like being allowed to have multiple copies of your records, I guess. I think it is time to consider better ways of publishing.

Here are some thoughts on that: first, Micah Allen’s call for self-publication instead of going via publishers. Then Shauna Gordon-McKeon’s three-part series Chasing Paper from the OSC blog; parts 2 and 3 are linked here. For full disclosure, I’m affiliated with the OSC blogs.

The PeerJ blog has a nice interview with Dorothy Bishop where they discuss open access, and her experience with PeerJ.

A paper from PLOS ONE compared post-publication peer review, impact factor, and number of citations. None of them is a really good measure of, well, impact, it seems. And here is something from Science critiquing the h-index.

More to come.

Linking in the stats

This fall, I squirreled away 400 e-mails to myself with links to interesting papers, blogs, and commentary. Lots of them were things I wanted to stick on my blogs. Now I will try to slowly weed myself down to none again. This will likely result in multiple linking posts, so I declare this to be the first in the series, and it will be all about stats.

First up, Telliamed revisited’s post on the 10 commandments of statistics. Post it prominently on the classroom walls.

This one I have linked to before, but, hey, let’s repeat the good stuff: the p-curve page. It includes the paper, the app, the user’s guide and supplementary materials. Use it on your favorite area of research.
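
To see the intuition the p-curve trades on, here is a minimal simulation sketch (not their app, just my own illustration with made-up parameters): when there is a true effect, the significant p-values are right-skewed, crowding below .01; when there is no effect, they spread roughly evenly across the bins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def significant_p_values(effect_size, n_per_group=30, n_sims=20_000):
    """Collect the p-values below .05 from many simulated two-sample t-tests."""
    ps = np.array([
        stats.ttest_ind(rng.normal(effect_size, 1, n_per_group),
                        rng.normal(0, 1, n_per_group)).pvalue
        for _ in range(n_sims)
    ])
    return ps[ps < 0.05]

bins = np.arange(0, 0.06, 0.01)  # .00-.01, .01-.02, ..., .04-.05
for label, d in [("null (d = 0)", 0.0), ("true effect (d = 0.5)", 0.5)]:
    sig = significant_p_values(d)
    counts, _ = np.histogram(sig, bins=bins)
    print(label, (counts / counts.sum()).round(2))  # flat vs right-skewed p-curve
```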

Speaking of p-curve, here is a paper (PDF) from Gelman and Loken on how multiple comparisons can be a problem even when all practices are non-questionable. (Now, I hope that link will work.)
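
The basic point is easy to simulate. A minimal sketch, under conditions I made up for illustration (five independent outcome measures, no true effect anywhere, and a “finding” declared if any one of them comes out significant): the effective false-positive rate creeps up toward a quarter rather than the nominal five percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n_per_group, n_outcomes = 10_000, 25, 5

false_positives = 0
for _ in range(n_sims):
    # No true effect on any of the five outcome measures
    control   = rng.normal(0, 1, (n_per_group, n_outcomes))
    treatment = rng.normal(0, 1, (n_per_group, n_outcomes))
    p_values = stats.ttest_ind(treatment, control, axis=0).pvalue
    # "Finding" = at least one outcome significant at the usual .05 level
    false_positives += np.any(p_values < 0.05)

print(f"False-positive rate with {n_outcomes} outcomes to choose from: "
      f"{false_positives / n_sims:.2f}")  # roughly 1 - 0.95**5, about 0.23
```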

A path to learning is to get exposed to What Not To Do! And the least painful way to do that is to observe others’ failures, or at least read about them somewhat in the abstract. Statistics Done Wrong is an excellent opportunity to do this. It is, um, amazing to realize how many of those misconceptions one has held…

NeoAcademic is a blog written from an I/O perspective (industrial/organizational psychology, that is), and Richard Landers posted a series of commentaries on a paper comparing null hypothesis testing with effect sizes. I link in the last one (because that is the one I sent myself), but you can easily get to the other installments from his post.

I also think I’ve linked in Felix Schönbrodt’s post before, but it is also worth repeating: at what sample size do correlations stabilize?
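
The idea is easy to watch in a simulation. Here is a minimal sketch, with a true correlation of .3 that I picked purely for illustration, tracking how the sample correlation wanders as observations accumulate:

```python
import numpy as np

rng = np.random.default_rng(3)
true_r, max_n = 0.3, 300

# Draw from a bivariate normal with the chosen true correlation
cov = [[1.0, true_r], [true_r, 1.0]]
x, y = rng.multivariate_normal([0, 0], cov, size=max_n).T

# Correlation of the first n observations, for growing n
for n in (10, 20, 50, 100, 200, 300):
    r = np.corrcoef(x[:n], y[:n])[0, 1]
    print(f"n = {n:3d}: r = {r:+.2f}")
# Small-n estimates tend to bounce around; they settle near the true value only slowly.
```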

The collected works of Tukey. In Google Books.

Well, I’m down to November. There is more to come, but I have to sort through it. Probably a second post of stats links.
