S1 - Problem Statements

Cluster I

1. Community structures - a CrossRef for ALM?

What would be the optimal community structure to support coordination and collaboration? Do we need one? If so, when? And how would we tell?

Reasons to cooperate:

  • Share costs
  • Comparability
  • Legitimacy
  • Coordination with other (standards) groups
 

2. Advocacy and promotion:

There is a lack of inspiration and awareness outside this workshop.

  • ALM advocacy is similar to the OA advocacy effort
  • Need a new name to bring people together
 

3. Anti-gaming mechanisms, process interoperability:

  • What is the difference between gaming and legitimate attention?
  • How do we tell?
  • How do we tell the people that care? Messaging is very important – technical explanations will not work.

Cluster II

4. Coverage, what to measure, what matters, to whom?

What is currently scalably measurable, what is not, and how do we make the immeasurable measurable?

  • Create tools for publishing and citing
  • Tools to crowdsource what can't be machine measured
  • Focus on implicit data (footprints)

Cluster III

5. Semantic Analysis:

Current metrics lack context, such as sentiment, source, and intensity. Context is multi-dimensional, but users of metrics need low-dimensional summaries (e.g. good/bad for a method).

  • Need a good context vocabulary
  • Need to map hurdles.
  • Necessity for altmetrics as opposed to traditional citations.
 

6. Evidence, context, trust, and interpretation:

  • How do you contextualise ALMs?
  • What contexts matter to whom?
  • Do we get different information from different metrics, or do they all just tell us the same thing? Is the current lack of context an opportunity to create our own narrative, or do we need to get with the program ASAP and catch up to reality now?
The shorter versions are: What does "14" mean? & Where's my black line?

Cluster IV

7. Personalization, use case targeting:

Altmetrics should enable scholars and institutions to present themselves to funders, peers and the public. Within this problem we considered it relevant and important to:

  • Track the identity of who generates the metric (i.e. who tweets)
  • Reconsider the name of these activities to capture the idea of context-
  • Consider other research outputs (beyond articles)
 

8. Data Interoperability:

In an ever more distributed and chaotic landscape, our challenge is reconciling both the things being counted and the counts themselves.

  • How do we identify and reconcile copies of that which is being counted? (A minimal sketch follows this list.)
  • There is a need to create and foster standards and best practice which allow understanding, comparison, and aggregation across source data providers and altmetric providers.
  • We need to provide further context for the counts through normalization and identification of sources.
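To make the reconciliation question concrete, here is a minimal sketch in Python of one very basic normalization step, assuming DOI- or URL-based identifiers; the rules and the merging behaviour are illustrative assumptions, not a proposed standard. Even agreeing on this much across source data providers and altmetric providers is part of the standards question above.

  from collections import defaultdict
  from urllib.parse import urlparse

  def canonical_id(identifier):
      """Reduce a DOI given in any common form to a lowercase 'doi:10.x/...' key."""
      id_ = identifier.strip().lower()
      if id_.startswith("http"):
          path = urlparse(id_).path.lstrip("/")   # e.g. '10.1371/journal.pone.0055555'
          return "doi:" + path if path.startswith("10.") else "url:" + id_
      if id_.startswith("doi:"):
          return id_
      if id_.startswith("10."):
          return "doi:" + id_
      return "other:" + id_

  def merge_counts(events):
      """events: iterable of (identifier, source, count) tuples from different providers."""
      totals = defaultdict(lambda: defaultdict(int))
      for identifier, source, count in events:
          totals[canonical_id(identifier)][source] += count
      return totals

  # Three records that all refer to the same article end up under one key.
  events = [
      ("https://doi.org/10.1371/journal.pone.0055555", "twitter", 12),
      ("10.1371/journal.pone.0055555", "mendeley", 40),
      ("doi:10.1371/JOURNAL.PONE.0055555", "twitter", 3),
  ]
  for key, per_source in merge_counts(events).items():
      print(key, dict(per_source))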

9. Too many measures, how to present? Single Number?

It is human nature to try to collapse complex data to a single number, and given that oversimplification to one number is potentially dangerous…

How do we:

  • Have transparency of measures and indicators
  • Have at least as many measures as there are interesting things to measure
  • Have an articulation of what the numbers represent
S2 - Possible Solutions
Measurability
  1. Create a measurability map based on difficulty and demand
 
Advocacy and Outreach
  1. Develop new, sexy name
  2. Broaden the network around a charter
  3. Target publishers, researchers and funders as ambassadors
 
Context
  1. Utilize communities of practice to better understand and evaluate altmetrics (e.g., eLife editorial board)
  2. Provide context for the metric on a single item by identifying similar items, using automated classification (see the sketch after this list)
  3. Find a specific directed question that context helps with
  4. Crowdsource binning all science tweets to make an open data set: researcher, journalist, funder, etc.
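As one concrete reading of item 2, here is a minimal sketch in Python, assuming an automated classifier has already returned metric values for the similar items; the numbers are made up. It is also one possible answer to the "What does '14' mean?" question from problem 6.

  from bisect import bisect_left

  def percentile_context(item_value, similar_values):
      """Percentile rank of item_value among metric values of automatically identified similar items."""
      ranked = sorted(similar_values)
      return 100.0 * bisect_left(ranked, item_value) / len(ranked)

  # "14" on its own is the unanswerable question; among similar items it at least
  # becomes "higher than 80% of comparable papers" (values are illustrative).
  print(percentile_context(14, [0, 1, 1, 2, 2, 3, 5, 8, 20, 35]))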
Use Case and Personalization
  1. Create prioritized list of use cases in the following categories:
    • Storytelling informed by metrics
    • Organizational intelligence for strategy
    • Evaluation for a monetary decision
S3 - Action Plans
I. Measurability Map
1. Collect all the metrics that people are using now in ImpactStory, altmetric.com, and PLOS, and let people vote on and add to them. Make sure that things we know about but have not been able to calculate are also included:
  • Ian will get the eLife editorial board to do this
  • Lisa will try to do this with eScholarship journal editorial boards and paper series administrators
  • Juan will try to do this with Latin American publishers and editors
 Scope planning of the tool is critical
2. Quickly build something for people to respond to in a simple manner.
  • There are so many metrics that the tool needs to make them manageable, i.e. grouping metrics so that they can be scanned; groups should be expandable to see details (a sketch of one possible response record follows below)
  • Data input
  • Sources
  • What items
  • How important
  • Other supporting data
  • Way to point to provenance
  • Demographic characteristics of responders
e.g. Here's a measure (Ian's): what is the diffusion of a core idea of a paper?
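A minimal sketch, in Python, of what one response record in this quick-feedback tool might capture, mirroring the fields listed above; all field names and example values are illustrative assumptions, not an agreed schema.

  from dataclasses import dataclass, field

  @dataclass
  class MetricResponse:
      metric_name: str                 # e.g. "diffusion of a core idea of a paper"
      group: str                       # grouping used so the long list can be scanned
      data_input: str                  # how the underlying data is gathered
      sources: list                    # where the signal comes from
      items: list                      # which kinds of items it applies to
      importance: int                  # how important the responder rates it (e.g. 1-5)
      provenance: str                  # pointer to where the metric is defined/collected
      responder: dict = field(default_factory=dict)   # demographic characteristics of the responder
      supporting_data: str = ""        # other supporting data

  example = MetricResponse(
      metric_name="diffusion of a core idea of a paper",
      group="reach",
      data_input="manual + citation mining",
      sources=["citations", "blogs"],
      items=["articles"],
      importance=4,
      provenance="https://example.org/metric-definitions",   # placeholder URL
      responder={"role": "editor", "community": "eLife editorial board"},
  )
  print(example)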
2.5 list of outputs/trackable artifacts/signals
3. Make sure you engage with all the different stakeholders; make sure to reach out to the funding community to see what metrics they are gathering
4. We recognize that in capturing information for the tool, we will find information about things that are currently immeasurable, so we will create a place for people to submit examples of those objects, building a large collection that can be analyzed in order to move those types of objects into the "measurable" category.
5. The idea of a registry of metrics that serves as a central clearinghouse for openly describing all metrics being gathered, and that is extensible so that we can capture the aspects of those metrics that are important to different communities. On top of this, we may be able to create profiles of metrics that are important to different communities. Those profiles should be versionable so that we can see whether different communities' attitudes toward and use of metrics evolve over time. Where this registry should live remains an open question, so we may possibly build this tomorrow and not worry about it.
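A minimal sketch, again in Python and under purely illustrative assumptions (names, dates, and metric identifiers are invented), of how registry entries and versionable community profiles might fit together:

  from dataclasses import dataclass, field
  from datetime import date

  @dataclass
  class RegistryEntry:
      metric_id: str
      description: str
      extra: dict = field(default_factory=dict)   # extensible, community-specific aspects

  @dataclass
  class CommunityProfile:
      community: str        # e.g. "funders", "eLife editorial board"
      version: int
      created: date
      metric_ids: list      # metrics this community says it cares about

  registry = {
      "tweets": RegistryEntry("tweets", "count of tweets linking to the item"),
      "mendeley_readers": RegistryEntry("mendeley_readers", "Mendeley reader count"),
  }

  # Versioned profiles let us see a community's attitude to metrics evolve over time.
  profiles = [
      CommunityProfile("funders", version=1, created=date(2013, 1, 1),
                       metric_ids=["tweets"]),
      CommunityProfile("funders", version=2, created=date(2014, 1, 1),
                       metric_ids=["tweets", "mendeley_readers"]),
  ]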
 
II. Clusters, themes, and categories of User Stories
1. Collect use cases (a wiki - something more open than Google Docs; a home). Seed with samples, offer a basic structure. Create awareness and communication.
2. Determine criteria for prioritization or a voting mechanism; some sort of model for ranking/sorting the collection. 
3. Pass 'solved' use cases to the advocacy/ambassador group for their use in communication and outreach, and apply ourselves further to expanding these other use cases into well-documented, solid user stories.
Aim for presenting at Force 11 in March.   
4. For the hackathon >> challenge: by sunset tomorrow, show us an iteration of use cases 1, 2, 3
Alex - reverse engineer - for whatever they build tomorrow, ask: what use case are you trying to solve?
Example -- Make it easy to find stuff:
    • Add altmetrics for relevance in a discovery tool; how to utilize them in a search tool
    • which search tool, and where would it make sense to add that
    • what is the simplest thing that could be done in one day to demonstrate "I want to find stuff based on altmetrics" (one minimal sketch follows below)
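One hackathon-sized sketch of "find stuff based on altmetrics", in Python, assuming the search tool already returns a relevance score and an altmetric count per item; the blending weight and result format are assumptions, not recommendations. A one-day demo could wire something like this into whichever discovery tool the group picks.

  def rerank(results, weight=0.3):
      """results: list of dicts with 'title', 'relevance' (0-1) and 'altmetric_count'."""
      max_count = max(r["altmetric_count"] for r in results) or 1
      def score(r):
          return (1 - weight) * r["relevance"] + weight * r["altmetric_count"] / max_count
      return sorted(results, key=score, reverse=True)

  results = [
      {"title": "Paper A", "relevance": 0.90, "altmetric_count": 2},
      {"title": "Paper B", "relevance": 0.80, "altmetric_count": 150},
      {"title": "Paper C", "relevance": 0.75, "altmetric_count": 40},
  ]
  for r in rerank(results):
      print(r["title"])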
5. Open challenge to the community: solve a use case by 15th Feb - use cases that are not doable in an afternoon: a grand challenge. Solve it and create a YouTube video to show the result.
 
III. “Crowdsource” binning of all science tweeters into categories and make dataset public
1. Renamed: classifying science-related tweets (and therefore Twitter users):
2. Assemble list of scientists
    • ask Euan (altmetric.com) for his information, data, algorithm, etc.
    • gather pre-existing lists
3. Create a simple web form (you or someone else can update your status)
4. Decide on classification (see below)
5. Make available as a data dump and open API (a record sketch follows the classification list below)
Example classifications...
      • Profession:
        • Academic
          • student (undergrad, grad)
          • faculty (assist, assoc. prof)
          • scientist (sr, jr)
        • Journalist
          • blogger
          • sci writer
        • Other
          • librarian
          • publisher
      • Field (esp. for academics)
      • Interests
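A minimal sketch of what one record in the public data dump / open API might look like, using the example classification above; the JSON shape, field names, and account are illustrative assumptions, not a finished schema.

  import json

  record = {
      "twitter_handle": "@example_scientist",      # hypothetical account
      "profession": {"category": "Academic", "subcategory": "faculty", "detail": "assoc. prof"},
      "field": "ecology",
      "interests": ["open access", "altmetrics"],
      "source": "self-reported via web form",      # vs. a pre-existing list or an algorithm
      "last_updated": "2013-01-15",                # whoever owns the record can update it later
  }
  print(json.dumps(record, indent=2))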
 
IV. Target publishers, researchers and funders as ambassadors
1. Identify “ambassadors” in respective fields.
2. Make, and present them with, use cases specific to their respective fields
3. Capitalize on appropriate and effective relationships [to best evangelize and advocate for altmetrics].
 
V. Broaden network around a charter
While “Broaden network around a charter” was determined to be an action of immediate value to the altmetrics community, the group devoted its remaining time to delivering the four action plans above.