While the most important part of an Altmetric report is the qualitative data, it's also useful to put attention in context and see how research outputs are doing relative to one another.

The Altmetric Attention Score for a research output provides an indicator of the amount of attention that it has received.

The score is a weighted count

The score is derived from an automated algorithm and represents a weighted count of the attention we've picked up for a research output. It is weighted to reflect the relative reach of each type of source: it's easy to imagine that the average newspaper story is more likely to bring attention to a research output than the average tweet. This is reflected in the default weightings:

• News: 8
• Blogs: 5
• Twitter: 1
• Facebook: 0.25
• Sina Weibo: 1
• Wikipedia: 3
• Policy documents (per source): 3
• Q&A: 0.25
• F1000/Publons/Pubpeer: 1
• YouTube: 0.25
• Reddit/Pinterest: 0.25
• LinkedIn: 0.5
• Open Syllabus: 1
• Google+: 1
• Patents (per jurisdiction): 3

The Altmetric Attention Score is always a whole number, which means that mentions contributing less than 1 to the score get rounded up. So, if we picked up one Facebook post for a paper (0.25), the score would increase by 1; if we then picked up three more Facebook posts for that same article (1 in total), the score would still only increase by 1.
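The real algorithm is more involved (tiers, caps, and modifiers are described below), but the core mechanic can be sketched as a weighted sum rounded up to a whole number. This is a minimal illustration, not Altmetric's actual implementation; the function and dictionary names are our own:

```python
import math

# Default weights from the table above (a simplified subset).
WEIGHTS = {
    "news": 8,
    "blog": 5,
    "twitter": 1,
    "facebook": 0.25,
    "wikipedia": 3,
}

def attention_score(mention_counts):
    """Weighted count of mentions, rounded up to a whole number.

    `mention_counts` maps a source type to the number of mentions,
    e.g. {"news": 2, "facebook": 3}. This sketch ignores the per-source
    caps and modifiers described later in this article.
    """
    total = sum(WEIGHTS[source] * count for source, count in mention_counts.items())
    return math.ceil(total)

# One Facebook post contributes 0.25, which is rounded up to 1;
# four posts total exactly 1.0, so the score is still 1.
print(attention_score({"facebook": 1}))  # 1
print(attention_score({"facebook": 4}))  # 1
```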

(LinkedIn and Pinterest have been deprecated as sources, as they started putting more of their content behind login pages, which made it more difficult for us to pick up mentions from them.)

News

News outlets are assigned a tier based on the reach we determine that outlet to have, and the amount a news mention contributes to the score depends on the tier for that news source. This means that a mention from a popular national news outlet such as The New York Times will contribute more to the score than a mention from a smaller, more niche publication such as 2Minute Medicine.
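Altmetric does not publish the tier values, so the tier names and multipliers below are entirely hypothetical; the sketch only shows the shape of the mechanic, with a news mention's contribution scaled by its outlet's tier:

```python
# Hypothetical tier multipliers -- Altmetric does not publish these values.
NEWS_TIER_MULTIPLIER = {
    "national": 1.0,   # large national outlets, e.g. The New York Times
    "regional": 0.5,
    "niche": 0.25,     # smaller specialist publications
}

NEWS_BASE_WEIGHT = 8  # the default news weighting from the table above

def news_contribution(tier):
    """Contribution of a single news mention, scaled by the outlet's tier."""
    return NEWS_BASE_WEIGHT * NEWS_TIER_MULTIPLIER[tier]

print(news_contribution("national"))  # 8.0
print(news_contribution("niche"))     # 2.0
```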

Wikipedia 

The scoring for Wikipedia is static. This means that if a research output is mentioned in one Wikipedia page, the score for that paper will increase by 3; if it is mentioned in more than one Wikipedia page, the contribution remains 3. This is because a reference to a research output on a Wikipedia page that may also cite lots of other outputs in its bibliography is not really comparable, in terms of reach and attention, to a mainstream news story about the findings of one research output. Part of the rationale behind the Wikipedia scoring is also to prevent gaming: we wanted to prevent a situation where researchers could bias their scores by retrospectively adding references to their research outputs across lots of different Wikipedia pages.
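In code terms, Wikipedia behaves like a capped source: the contribution is a flat 3 whether there is one mention or many. (Open Syllabus, described below, works the same way with a cap of 1.) A minimal sketch under that reading:

```python
def capped_contribution(mention_count, weight):
    """Flat contribution: `weight` if there is at least one mention, else 0."""
    return weight if mention_count > 0 else 0

print(capped_contribution(1, 3))   # one Wikipedia mention   -> 3
print(capped_contribution(12, 3))  # many Wikipedia mentions -> still 3
```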

Policy documents 

Mentions in policy documents are scored per source. A mention of a research output in a policy document has a default score contribution of 3. This means that if an output is mentioned in more than one policy document from the same policy source (e.g. gov.uk), the score would only increase by 3. However, if an output is mentioned in two policy documents from two different policy sources (e.g. gov.uk and the International Monetary Fund), the score would increase by 6.

Open Syllabus

The scoring for Open Syllabus attention is static. This means that if a research output is mentioned in one syllabus, the score for that paper will increase by 1. However, if a research output is mentioned in more than one syllabus, the contribution remains 1.

Patent Citations

Mentions in patent citations are scored per jurisdiction. A mention of a research output in a patent has a default score contribution of 3. If the publication is then mentioned in another patent from a different jurisdiction, the score will increase to 6; if it is mentioned in 10 patents from the same jurisdiction, the contribution remains 3.
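Policy documents and patents share the same underlying mechanic: mentions are deduplicated by group (policy source or patent jurisdiction), and each distinct group contributes 3. A sketch, using the examples from the sections above:

```python
def per_group_contribution(groups, weight=3):
    """Each distinct group (policy source or patent jurisdiction) counts once."""
    return weight * len(set(groups))

# Two policy documents from the same source only contribute 3 ...
print(per_group_contribution(["gov.uk", "gov.uk"]))   # 3
# ... but two documents from different sources contribute 6.
print(per_group_contribution(["gov.uk", "IMF"]))      # 6
# Ten patents from the same jurisdiction still contribute 3.
print(per_group_contribution(["US"] * 10))            # 3
```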

Altmetric Attention Score Modifiers


For Twitter and Sina Weibo, retweets and reposts count for 0.85 rather than 1, as they are secondhand attention rather than original attention. The combined total of these retweets or reposts is always rounded up to the nearest whole number.

With Twitter posts, we apply modifiers to the score based on three principles: 

  • reach - how many people are likely to see the tweet? This is based on the number of followers attached to the account. 
  • promiscuity - how often does this person tweet about research outputs?  
  • bias - is this person/account tweeting about lots of papers from the same journal domain, thereby suggesting promotional intent? 

These modifiers mean that a tweet from a publisher's journal account will count for less than a tweet from a researcher who is unconnected to the paper and is sharing it more organically. This can also work the other way: if a hugely influential figure were to tweet about a research output, that tweet could contribute 1.1 to the score, which would then be rounded up to 2.
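Altmetric does not publish the modifier values, so the multipliers below are illustrative only; the sketch just shows how per-tweet modifiers for reach, promiscuity, and bias could scale the base weight of 1 (or 0.85 for a retweet):

```python
import math

def tweet_contribution(is_retweet, reach=1.0, promiscuity=1.0, bias=1.0):
    """Hypothetical per-tweet contribution before rounding.

    The base weight is 1 for an original tweet and 0.85 for a retweet
    (secondhand attention); the three multipliers are illustrative only.
    """
    base = 0.85 if is_retweet else 1.0
    return base * reach * promiscuity * bias

# A tweet from a hugely influential account might contribute 1.1,
# which is then rounded up to 2.
print(math.ceil(tweet_contribution(is_retweet=False, reach=1.1)))  # 2
```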

Lastly...

Some mentions never count towards the score. This applies to Mendeley and CiteULike readers, as we can't display the actual profiles and we want all our data to be fully auditable. Any posts we add to the "misc" tab on an Altmetric details page will also not count towards the score, because we wouldn't have picked up these mentions automatically: either the mention came from a source we aren't tracking, or it didn't include the right content for us to pick it up.

Only the first mention from a source counts towards the score. If a news source publishes multiple stories about an output, only the first one will contribute to the Altmetric Attention Score for that particular output.
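A simple way to picture this first-mention rule is deduplication by source, as in the sketch below (a rough illustration under that assumption, not Altmetric's actual code):

```python
def first_mentions_only(mentions):
    """Keep only the first mention from each source.

    `mentions` is an ordered list of (source, url) pairs; later stories
    from a source that has already mentioned the output are dropped.
    """
    seen, kept = set(), []
    for source, url in mentions:
        if source not in seen:
            seen.add(source)
            kept.append((source, url))
    return kept

stories = [("nytimes.com", "/story-1"), ("nytimes.com", "/story-2")]
print(first_mentions_only(stories))  # [("nytimes.com", "/story-1")]
```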

Remember

The Altmetric Attention Score is useful to rank research outputs based on attention - it can't tell you anything about the quality of the article itself, though reading the linked discussions might. 


It is important to know that the score is based on the kinds of attention that Altmetric tracks (specifically links to or saves of scholarly articles, books and datasets) and to be mindful of potential limitations.


You should also bear in mind that different subject areas usually aren't directly comparable: a "popular" physics paper may have a far lower Altmetric Attention Score than an "average" genetics paper.

Finally, you should keep in mind that in some rare cases the Altmetric Attention Score may fluctuate slightly over time. Fluctuations can happen for various reasons, such as when tweets are deleted by the original tweeter, or when Twitter accounts are deemed to be "biased" according to our modifiers. Furthermore, we sometimes make changes to the algorithm to ensure the score remains an accurate reflection of the reach and legitimacy of the attention attached to a research output.