

Measuring the Impact of Altmetrics

When it comes to ranking academic influence, PageRank is just the start

[Illustration of a bird tweeting out text messages. Illustration: Jesse Lefkowitz]

“There is a growing movement within the scientific establishment to better measure and reward all the different ways that people contribute to the messy and complex process of scientific progress.”


How do you measure the influence of a journal or scientist? Until recently that question was largely settled. For a journal, you could turn to the impact factor (or IF), which gauges the relative importance of a journal within its field by comparing how many times its articles get cited in other journals with the total number of articles it publishes. Refinements of the IF give greater weight to citations coming from high-impact journals, much as Google’s PageRank algorithm gives greater weight to links from prominent pages; one such measure is the Eigenfactor score created by the evolutionary biologist Carl T. Bergstrom. For an individual scientist, you could calculate his or her h-index (in which h of the scientist’s total number of papers have received at least h citations).
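The h-index is simple enough to compute from a list of citation counts. Here’s a minimal sketch in Python (the function name and input shape are my own, not part of any standard library): sort the counts in descending order and find the largest rank h whose paper has at least h citations.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still has enough citations for its rank
        else:
            break  # counts only decrease from here, so h is final
    return h

# Five papers cited 10, 8, 5, 2, and 1 times yield an h-index of 3:
# three papers have at least 3 citations each, but not four with 4.
print(h_index([10, 8, 5, 2, 1]))  # prints 3
```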

Lately, however, scholars have become increasingly disenchanted with these and similar bibliometric indicators that use such values as total number of articles published or total number of citations. They complain that traditional measures of scientific impact are too slow and too narrow to accurately reflect science in the Internet age.


Enter, then, the new field of article-level metrics or, as it is increasingly known, altmetrics. This blend of alternative and metrics refers to tools based on bookmarks, links, blog posts, tweets, and other online measures that presumably indicate ways that readers have been influenced by an article—in short, how much “buzz” the paper is generating online.


Extremely astute readers may recall an earlier column of mine [see “The Coming Data Deluge,” IEEE Spectrum, February 2011] that took note of researchers using “syndromic surveillance” to predict flu outbreaks based on an analysis of Google searches for flu-related terms. This is part of the emerging field of infodemiology (that is, information-based epidemiology), which is part of a broader field called infoveillance, the monitoring of online health information. If Google searches can show us the influence (no pun intended) of a flu virus on a population, why can’t we use similar online data to judge the influence of a researcher or a scientific article?


Much of the altmetric scholarship has focused on Twitter and what Gunther Eysenbach, a researcher at the University of Toronto, has called tweet metrics. Although the prudent neologism collector must be on guard against Twitter-based coinages [see “All A-Twitter,” Spectrum, October 2007] that are just silly (an adjective that can be rightfully applied to the vast majority of them), exceptions sometimes cry out to be made. To wit, I offer you the tweetation, a mash-up of tweet and citation that refers to a Twitter post that links to a scholarly article.


Another of Eysenbach’s creations is the TWn score, which counts the number of tweets within n days of publication; it forms the basis of the twimpact factor.


Then there’s the tweeted half-life (THL), the number of days after publication that it takes for an article to accumulate 50 percent of the tweetations occurring within a defined TWn period, say 30 days. If the article’s TW30 is 100—that is, it generated 100 tweets in its first 30 days—and it drew 35 tweets on day 0 (the publication date), 10 tweets on day 1, and 8 tweets on day 2, then its THL is 2, because day 2 is when its running total first passed 50 tweets.
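The TWn and THL definitions above can be sketched in a few lines of Python. The function names and the daily-count input format are my own assumptions, and the tail of the example distribution beyond day 2 is hypothetical—the column only specifies the first three days and the TW30 total.

```python
def twn(daily_tweets, n):
    """TWn: total tweetations in the article's first n days,
    where daily_tweets[0] is the publication day (day 0)."""
    return sum(daily_tweets[:n])

def tweeted_half_life(daily_tweets, n=30):
    """THL: the first day on which the cumulative tweetation count
    reaches half of the TWn total. Returns None if it never does."""
    half = twn(daily_tweets, n) / 2
    running = 0
    for day, count in enumerate(daily_tweets[:n]):
        running += count
        if running >= half:
            return day
    return None

# The column's example: TW30 = 100, with 35 tweets on day 0,
# 10 on day 1, and 8 on day 2 (the remaining 47 spread over later days
# in a made-up pattern; only the total matters for the half-life).
daily = [35, 10, 8] + [5] * 9 + [2] + [0] * 17
print(twn(daily, 30))            # prints 100
print(tweeted_half_life(daily))  # prints 2
```

The running total hits 53 on day 2 (35 + 10 + 8), crossing the 50-tweet halfway mark, which matches the THL of 2 worked out above.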


This is all part of what researchers are calling scientometrics 2.0, in which data-mining techniques are brought to bear on massive social-media databases and other online storehouses to search for fresh indicators of scholarly impact. Will they replace traditional measures such as the impact factor? Almost certainly not. The goal is merely to drag the concept of scientific influence into a century characterized by the rapid dissemination of information and near-universal social media. Tweet on.


About the Author

Paul McFedries has written IEEE Spectrum’s Technically Speaking column since 2002. This year HP Press published Cloud Computing: Beyond the Hype, the latest of his more than 70 books. His website, Word Spy, tracks emerging words and phrases. His “lexpionage,” as he calls it, extends beyond technology: He recently wrote a post about “shtick lit,” a genre in which an author takes on a stunt-like project in order to write about it.

