Tuesday 14 June 2011

Measuring for Success?

Recently I was reading a post over at Fresh + New(er), the digital media blog for the Powerhouse Museum, about the launch of the first of their new iPhone Walking Tours, which is now available in the App Store. Whilst the app looks interesting, what really caught my attention was this statement:

"We aren’t measuring success by the number of downloads but in the number of completed tours. And I strongly believe that a low price (vs free) will lead to more tour completions relative to total downloads."

Clearly I wasn't the only one intrigued by this concept, as Seb Chan posted further in response to a comment asking how they would know if people completed the tour:

"We've got analytics built in so like a website we can see what stops get viewed, time spent, paths etc. Plus more rubbery qualitative feedback like you tell us that you've walked it (after you do)."

This set me thinking about the ways in which libraries are measuring success. We all do it, and whether your library relies on balanced scorecards or benchmarks, statistics or surveys, someone, somewhere in your organisation is probably spending an awful lot of time trying to figure out how to show what we do and why it matters.
But I'm curious to know how widespread electronic analytics, such as those the Powerhouse have employed for their walking tour, have become in libraries, and how we are translating that data into knowledge about our organisations and our user communities. Have we reached a point yet with our electronic resources where we no longer need to ask our users what they are using because we can already see it?
I remember being inspired a while back when Seb Chan from the Powerhouse mentioned that one of the ways they were improving their collection catalogue was by tracking which parts of the pages people were cutting and pasting from. This idea, whilst conceptually so simple, was such a clever way to know not just which pages were opened but what people did when they got there.
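
As an aside, here's a very rough sketch of how that kind of copy-tracking could be wired up on a catalogue page. To be clear, this is not the Powerhouse's actual implementation: the /analytics/copy endpoint and the payload fields are placeholders I've made up for illustration.

```typescript
// Rough sketch only: report what visitors copy from a catalogue record page.
// The /analytics/copy endpoint is a made-up placeholder.
document.addEventListener('copy', () => {
  const selection = window.getSelection()?.toString() ?? '';
  if (!selection.trim()) return; // ignore empty selections

  const payload = {
    page: window.location.pathname,    // which catalogue record was open
    snippet: selection.slice(0, 200),  // first 200 characters that were copied
    copiedAt: new Date().toISOString(),
  };

  // sendBeacon posts the data without holding up the user's copy action
  navigator.sendBeacon('/analytics/copy', JSON.stringify(payload));
});
```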

Currently in my organisation we explore a lot of statistics about our e-resources (when the publishers offer them), just as we create and use a lot of statistics about our services and physical collections. But it is the tricky leap from quantitative to qualitative measures that still causes so many complications. How do you know that 9,000 searches in a year is a sign of a popular database and not a sign of a database that is viewed by our clients as necessary but chronically difficult to use?

I'm afraid I don't have answers here, only questions, as I think our explorations of the ways in which our users are engaging with our electronic collections can only become more exciting.

So I want to know: how does your library measure success?

Kate

Kate Byrne is the Convenor for ALIA Sydney.


4 comments:

  1. Kate,

    We've been working on this in our Library, and while we keep the regular statistics, one of the measurement techniques we're working on is a narrative collection of how people use the library.

    We're not necessarily looking for something that is scientifically valid, but we do want more of an idea of how people view the library.

    This ties in with some education goals the college has (English language learning, creating a knowledge culture in an area with a primarily oral culture).

    My part of this is working on some questions to ask about how people use the electronic resources.

    Our end goal is to make our measures less about butts in seats/circulation, and more about the impact we make on the college community.

  2. This is definitely a topic that I'm really interested in, particularly with regard to the quality, usefulness and usability (ease of use) of a library collection. Interesting that you mention the problems in interpreting quantitative statistics. Hence, I'm planning on using both quantitative (stat counts) and qualitative methods (an online survey) to do a review of serials in my area, in order to get a broader picture of usage. I'm aware that getting a high response rate from a survey is also going to be problematic (busy people, busy lives etc.). Does anyone have any ideas on how to better streamline this process?

    Another interesting area that relates to this issue of measurement is the new models of collection development that are emerging: pay-per-view and demand-driven purchase, where resources are paid for after a certain number of downloads.

    I wonder if resources will be able to be de-selected in the same manner once demand drops off? Wouldn't it be great if books that were no longer being used/required, or even older editions of books, could be returned with a certain percentage of the purchase price going back to the purchase of new books as a bit of goodwill? It would be kind of like the way some secondhand bookshops operate (if you bring back the books you buy, you get credit towards the purchase of other books). This way, publishers get the guarantee of a new book being purchased from them, and libraries are more inclined to purchase from the publisher's collection, as well as maintaining a relevant and up-to-date collection. I'm sure there are glaring flaws in my 'if only' model, so I'm keen to know what you think!
    Crystal

  3. Brett: That's something we've been trying to do at my workplace as well through our Outreach services, and trying to measure and track the building of relationships with our community is really hard, if only because it places so much on the individual's interpretation of the impact of their work.

    How are you measuring this?

    Crystal: We've started using a cost-per-download model, which gives us a comparative guide to resources that are being under-utilised; it's kind of pay-per-view in reverse. If a resource is above a certain threshold, we have a look, see if we can figure out why, and promote it more actively.
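
    (For anyone curious, the arithmetic behind it is nothing fancier than the sketch below; the resource names, costs and the $10-per-download threshold are invented purely for illustration, not our actual figures.)

    ```typescript
    // Back-of-the-envelope cost-per-download check. All numbers are made up.
    interface Resource {
      name: string;
      annualCost: number; // subscription cost for the period, in dollars
      downloads: number;  // full-text downloads over the same period
    }

    const THRESHOLD = 10; // dollars per download; purely illustrative

    const resources: Resource[] = [
      { name: 'Database A', annualCost: 12000, downloads: 9000 }, // ~$1.33 each
      { name: 'Database B', annualCost: 8000, downloads: 450 },   // ~$17.78 each
    ];

    for (const r of resources) {
      const costPerDownload = r.downloads > 0 ? r.annualCost / r.downloads : Infinity;
      if (costPerDownload > THRESHOLD) {
        console.log(`${r.name}: $${costPerDownload.toFixed(2)} per download, flag for review`);
      }
    }
    ```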

    What would you be surveying people on?

    Kate

  4. Kate, that's a good way of describing your model for analysing under-used resources. I'm surveying people as part of a consultation process, as well as to get a better picture of what people are wanting/using. It will also serve as a reminder or promotion of the existing resources that we actually have. If I find a discrepancy between stat counts and what people actually want, then it will be a good way to get a conversation going about how the library can better serve our clients. Perhaps discrepancies may be to do with a non-user-friendly interface? Or maybe it needs to be promoted more on our end?
