It’s Time for Better Project Metrics

I’m involved in lots of projects, based at many institutions, with multiple funders and oodles of people involved. Each of these projects has reporting requirements: metrics meant to prove the project is successful. Here, I want to argue that many of these metrics are arbitrary, and in some cases misleading. I’m not sure what the solution is – but I am eager for a discussion to start about reporting requirements for funders and institutions, metrics for success, and how we measure a project’s impact.

What do projects currently have to do to demonstrate success? The most common requirement is text-based reports – which are reminiscent of junior high book reports. My colleague here at the CDL, John Kunze, has been working for the UC in some capacity for a long time. If anyone is familiar with the bureaucratic frustrations of metrics, it’s John. Recently he brought me a sticky note with an acronym he’s hoping will catch on:

SNωωRF: Stuff nobody wants to write, read, or fund

The two lower-case omegas translate to “w” in the acronym but represent the letter “o” to facilitate pronunciation – i.e., “snorf”. He was prompted to invent this catchy acronym after writing up a report for a collaborative project we work on, based in Europe. After submitting the report, he was told it “needed to be longer by two or three pages”. The necessary content was all there in the short version – it just wasn’t long enough to look thorough. Clearly brevity is not something that’s rewarded in project reporting.

Which orange dot is bigger? Overall impressions differ from what the measurements say. Measuring and comparing projects doesn’t always reflect success. From donomic10.edublogs.org

Outside of text-based reports, there are other metrics that higher-ups like: number of website hits, number of collaborations, number of conferences attended, number of partners/institutions involved, et cetera. A really successful project can look weak by all of these measures. Similarly, a crap project can look quite successful by them. So if there is no clear correlation between the metrics used to gauge project success and actual project success, why do we measure them?

So what’s the alternative? The simplest one – not measuring or reporting metrics at all – is probably not going to fly with funders, institutions, or organizations. In fact, metrics play an important role: they allow for comparisons among projects, provide targets to strive for, and let project members assess progress. Perhaps rather than defaulting to the standard reporting requirements, funders and institutions could take some time to consider what success means for a particular project, and customize the metrics accordingly.

In the space where I operate (data sharing, data management, open science, scholarly publishing, etc.), project success is best assessed by whether the project has (1) generated new conversations, debates, and dialogue, and/or (2) changed the way science is done. Examples of successful projects by this definition: figshare, ImpactStory, PeerJ, IPython Notebook, and basically anything funded by the Alfred P. Sloan Foundation. Many of these would also pass the success test based on more traditional metrics, but not necessarily. To avoid making enemies, I won’t list the projects I deem unsuccessful despite their passing the test based on traditional metrics.

The altmetrics movement is focused on assessing researcher and research impact in new, interesting ways (see my blog posts on the topic here and here). What would an altmetrics movement for projects look like? I’m not sure, but I know that its time has come.


3 thoughts on “It’s Time for Better Project Metrics”

  1. Mark Parsons says:

    Amen, sister. Measuring is good but always inaccurate, and you have to be clear on what you are measuring.

  2. I think it’s a good start to make sure basic stuff is measured. For example, did people stick to the data sharing plan they (may have) submitted? One consequence of the portability of a MozOBI-style standardized, signed ‘badge’ would certainly be to simplify the collection of evidence of sharing (thinking about big funders – NIH/NSF/DoE/UK-RCs/etc. – with diverse portfolios, making thousands of awards each year, and wanting to check for compliance with funding conditions in perhaps tens to hundreds of repositories of varying type).

    But wrt wider impact, given that altmetrics will usually underestimate it (for example, because of the current limit on tracing 2nd-and-higher-order tweets), it would be good to see some benchmarking work done to give some idea of scale (i.e., a well-funded, broad-scope research project to look hard at the ‘real’ impact of a few [quite different] projects, with a comparison to available proxy impact measures). Such work might, for example, include a clear identification of alleged project beneficiaries who are then interviewed in confidence, or the collection of visitor stats for a bellwether web page (somehow made a critical part of a project’s outputs).

    Overall I’m just not sure that (until the world is on Twitter) we can do much more than scale what we have by something as close as we can get to reality, as many times as we can afford, then extrapolate.

  3. […] It’s Time for Better Project Metrics […]
