We are Hiring a DMPTool Manager!

Do you love all things data management as much as we do? Then join our team! We are hiring a person to help manage the DMPTool, including development prioritization, promotion, outreach, and education. The position is funded for two years with the potential for an extension pending funding and budgets. You would be based in the amazing city of Oakland, CA, home of the California Digital Library. Read more at jobs.ucop.edu or download the PDF description: Data Management Product Manager (4116).

Job Duties

Product Management (30%): Ensure the DMPTool remains a viable and relevant application. Update funder requirements, maintain the integrity of publicly available DMPs, contact partner institutions to report issues, and review DMPTool guidance and content for currency. Evaluate and present new technologies and industry trends, and recommend those applicable to current products or services and the organization’s long-range strategic plans. Identify, organize, and participate in technical discussions with key advisory groups and other customers/clients. Identify additional opportunities for value-added product/service delivery based on customer/client interaction and feedback.

Marketing and Outreach (20%): Develop and implement strategies for promoting the DMPTool. Create marketing materials, update website content, contact institutions, and present at workshops and/or conferences. Develop and participate in marketing and professional outreach activities and informational campaigns to raise awareness of the product or service, including communicating developments and updates to the community via social media. This includes maintaining the DMPTool blog, Twitter and Facebook accounts, GitHub Issues, and listservs.

Project Management (30%): Develop project plans including goals, deliverables, resources, budget, and timelines for enhancements of the DMPTool. Act as product/service liaison across the organization, external agencies, and customers to ensure effective production, delivery, and operation of the DMPTool.

Strategic Planning (10%): Assist in strategic planning, prioritizing, and guiding future development of the DMPTool. Pursue outside collaborations and funding opportunities for future DMPTool development, including developing an engaged community of DMPTool users (researchers) and software developers to contribute to the codebase. Foster and engage an open-source community for future maintenance and enhancement.

Reporting (10%): Provide periodic progress reports outlining key activities and progress toward achieving overall goals. Develop and report on metrics/key performance indicators and provide corresponding analysis.

To apply, visit jobs.ucop.edu (Requisition No. 20140735)

From Flickr by Brenda Gottsabend

Announcing The Dash Tool: Data Sharing Made Easy

We are pleased to announce the launch of Dash – a new self-service tool from the UC Curation Center (UC3) and partners that allows researchers to describe, upload, and share their research data. Dash helps researchers perform the following tasks:

  • Prepare data for curation by reviewing best practice guidance for the creation or acquisition of digital research data.
  • Select data for curation through local file browse or drag-and-drop operation.
  • Describe data in terms of the DataCite metadata schema (a minimal example follows this list).
  • Identify data with a persistent digital object identifier (DOI) for permanent citation and discovery.
  • Preserve, manage, and share data by uploading to a public Merritt repository collection.
  • Discover and retrieve data through faceted search and browse.
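If you are curious what “describe data in terms of the DataCite metadata schema” amounts to, here is a minimal sketch of the schema’s required properties expressed as a plain Python structure. The values are illustrative only (10.5072 is DataCite’s test prefix), not what Dash stores internally:

```python
# Illustrative only: the required properties of the DataCite metadata
# schema (identifier, creator, title, publisher, publication year,
# resource type) as a plain Python dict. 10.5072 is DataCite's test prefix.
dataset_metadata = {
    "identifier": {"identifier": "10.5072/FK2EXAMPLE", "identifierType": "DOI"},
    "creators": [{"creatorName": "Researcher, Jane"}],
    "titles": [{"title": "Example field measurements, 2014"}],
    "publisher": "UC Curation Center (UC3)",
    "publicationYear": "2014",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}
```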

Who can use Dash?

There are multiple instances of the Dash tool that all have similar functions, look, and feel.  We took this approach because our UC campus partners were interested in their Dash tool having local branding (read more). It also allows us to create new Dash instances for projects or partnerships outside of the UC (e.g., DataONE Dash and our Site Descriptors project).

Researchers at UC Merced, UCLA, UC Irvine, UC Berkeley, or UCOP can use their campus-specific Dash instance.

Other researchers can use DataONE Dash (oneshare.cdlib.org). This instance is available to anyone, free of charge. Use your Google credentials to deposit data.

Note: Data deposited into any Dash instance is visible throughout all of Dash. For example, if you are a UC Merced researcher and use dash.ucmerced.edu to deposit data, your dataset will appear in search results for individuals looking for data via any of the Dash instances, regardless of campus affiliation.

See the Users Guide to get started using Dash.

Stay connected to the Dash project via the Dash Facebook page, the Twitter account (@UC3Dash), and the GitHub page.

Dash Origins

The Dash project began as DataShare, a collaboration among UC3, the University of California San Francisco Library and Center for Knowledge Management, and the UCSF Clinical and Translational Science Institute (CTSI). CTSI is part of the Clinical and Translational Science Award program funded by the National Center for Advancing Translational Sciences at the National Institutes of Health (Grant Number UL1 TR000004).

Sound the horns! Dash is live! “Fontana del Nettuno” by Sorin P. from Flickr.


Data: Do You Care? The DLM Survey

We all know that data is important for research. So how can we quantify that? How can you get credit for the data you produce? What do you want to know about how your data is used?

If you are a researcher or data manager, we want to hear from you. Take this 5-10 minute survey and help us craft data-level metrics:

surveymonkey.com/s/makedatacount

Please share widely! The survey will be open until December 1st.

Read more about the project at mdc.plos.org or check out our previous post. Thanks to John Kratz for creating the survey and jumping through IRB hoops!

What do you think of data metrics? We’re listening.
From gizmodo.com. Click for more pics of dogs + radios.


Dash Project Receives Funding!

We are happy to announce that the Alfred P. Sloan Foundation has funded our project to improve the user interface and functionality of our Dash tool! You can read the full grant text at http://escholarship.org/uc/item/2mw6v93b.

More about Dash

Dash is a University of California project to create a platform that allows researchers to easily describe, deposit and share their research data publicly. Currently the Dash platform is connected to the UC3 Merritt Digital Repository; however, we have plans to make the platform compatible with other repositories using standard protocols (such as SWORD and OAI-PMH) during our Sloan-funded work. The Dash project is open-source; read more on our GitHub site. We encourage community discussion and contribution via GitHub Issues.

Currently there are five instances of the Dash tool available, one for each of our campus partners: UC Merced, UCLA, UC Irvine, UC Berkeley, and UCOP.

We plan to launch the new DataONE Dash instance in two weeks; this tool will replace the existing DataUp tool and allow anyone to deposit data into the DataONE infrastructure via the ONEShare repository using their Google credentials. Along with the release of DataONE Dash, we will release Dash 1.1 for the live sites listed above. There will be improvements to the user interface and experience.

The Newly Funded Sloan Project

Problem Statement

Researchers are not archiving and sharing their data in sustainable ways. Often data sharing involves using commercially owned solutions, posting data on personal websites, or submitting data alongside articles as supplemental material. A better option for data archiving is community repositories, which are owned and operated by trusted organizations (i.e., institutional or disciplinary repositories). Although disciplinary repositories are often known and used by researchers in the relevant field, institutional repositories are less well known as a place to archive and share data.

Why aren’t researchers using institutional repositories?

First, the repositories are often not set up for self-service operation by individual researchers who wish to deposit a single dataset without assistance. Second, many (or perhaps most) institutional repositories were created with publications in mind rather than datasets, which may in part account for their less-than-ideal functionality. Third, user interfaces for the repositories are often poorly designed and do not take into account the user’s experience (or inexperience) and expectations. Because so many of our activities are now conducted on the Internet, we are exposed to many high-quality, commercial-grade user interfaces in the course of a workday. Correspondingly, researchers have come to expect clean, simple interfaces that can be learned quickly, with minimal need for contacting repository administrators.

Our Solution

We propose to address the three issues above with Dash, a well-designed, user-friendly data curation platform that can be layered on top of existing community repositories. Rather than creating a new repository or rebuilding community repositories from the ground up, Dash will provide a way for organizations to allow self-service deposit of datasets via a simple, intuitive interface that is designed with individual researchers in mind. Researchers will be able to document, preserve, and publicly share their own data with minimal support required from repository staff, as well as be able to find, retrieve, and reuse data made available by others.

Three Phases of Work

  1. Requirements gathering: Before the design process begins, we will gather requirements from researchers via interviews and surveys.
  2. Design work: Based on surveys and interviews with researchers (Phase 1), we will develop requirements for a researcher-focused user interface that is visually appealing and easy to use.
  3. Technical work: Dash will be an added-value data sharing platform that integrates with any repository that supports community protocols, e.g., SWORD (Simple Web-service Offering Repository Deposit); a hedged sketch of such a deposit follows this list.
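To make the protocol piece concrete, here is a rough sketch of a SWORD v2 deposit over plain HTTP. The collection URI and credentials are hypothetical placeholders; a real URI would come from the target repository’s SWORD service document:

```python
import requests

# Hypothetical SWORD v2 deposit: POST a zipped dataset package to a
# repository collection URI. The URI and credentials below are
# placeholders, not a real Dash or Merritt endpoint.
COLLECTION_URI = "https://repository.example.edu/sword/collections/datasets"

with open("dataset_package.zip", "rb") as pkg:
    response = requests.post(
        COLLECTION_URI,
        data=pkg,
        auth=("depositor", "secret"),
        headers={
            "Content-Type": "application/zip",
            "Content-Disposition": "attachment; filename=dataset_package.zip",
            # SimpleZip is one of the packaging formats SWORD v2 defines.
            "Packaging": "http://purl.org/net/sword/package/SimpleZip",
        },
    )

# A successful deposit returns 201 Created plus an Atom entry describing
# where the deposited dataset now lives.
print(response.status_code)
```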

The dash is a critical component of any good ASCII art. By reddit user Haleljacob


New Project: Citing Physical Spaces

A few months ago, the UC3 group was contacted by some individuals interested in solving a problem: how should we reference field stations? Rob Plowes from University of Texas/Brackenridge Field Lab emailed us:

I am on a [National Academy of Sciences] panel reviewing aspects of field stations, and we have been discussing a need for data archiving. One idea proposed is for each field station to generate a simple document with a DOI reference to enable use in publications that make reference to the field station. Having this DOI document would enable a standardized citation that could be tracked by an online data aggregator.

We thought this was a great idea and started having a few conversations with other groups (LTER, NEON, etc.) about its feasibility. Fast forward to two weeks ago, when Plowes and Becca Fenwick of UC Merced presented our more fleshed out idea to the OBFS/NAML Joint Meeting in Woods Hole, MA. (OBFS: Organization of Biological Field Stations, and NAML: National Association of Marine Laboratories). The response was overwhelmingly positive, so we are proceeding with the idea in earnest here at the CDL.

The intent of this blog post is to gather feedback from the broader community about our idea, including our proposed metadata fields, our plans for implementation, and whether there are existing initiatives or groups that we should be aware of and/or partner with moving forward.

In a Nutshell

Problem: Tracking publications associated with a field station or site is difficult. There is no clear or standard way to cite field station descriptions.

Proposal: Create individual, citable “publications” with associated persistent identifiers for each field station (more generically called a “site”). Collect these Site Descriptors in the general use DataONE repository, ONEShare. The user interface will be a new instance of the existing UC3 Dash service (under development) with some modifications for Site Descriptors.

What we need from you: feedback on the proposed metadata fields, our plans for implementation, and pointers to any existing initiatives or groups that we should be aware of and/or partner with moving forward.

Moving forward: We plan on gathering community feedback for the next few months, with an eye towards completing a pilot version of the interface by February 2015. We will be ramping up Dash development over the next 12 months thanks to recent funding from the Alfred P. Sloan Foundation, and this development work will include creating a more robust version of the Site Descriptors database.

Project Partners:

  • Rob Plowes, UT Austin/Brackenridge Field Lab
  • Mark Stromberg, UC Berkeley/UC Natural Reserve System
  • Kevin Browne, UC Natural Reserve System Information Manager
  • Becca Fenwick, UC Merced
  • UC3 group
  • DataONE organization

Lovers Point Laboratory (1930), which was later renamed Hopkins Marine Laboratory. From Calisphere, contributed by Monterey County Free Libraries.


The 10 Things Every New Grad Student Should Do

It’s now mid-October, and I’m guessing that first-year graduate students are knee-deep in courses, barely considering their potential thesis projects. But for those who can multitask, I have compiled this list of 10 things that you should undertake in your first year as a grad student. These aren’t just any 10 things… they are 10 steps you can take to make sure you contribute to a culture shift towards open science. Some are big steps, and others are small, but they will all get you (and the rest of your field) one step closer to reproducible, transparent research.

1. Learn to code in some language. Any language.

Here’s the deal: it’s easier to use black-box applications to run your analyses than to create scripts. Everyone knows this. You put in some numbers and out pop your results; you’re ready to write up your paper and get that H-index headed upwards. But this approach will not cut the mustard for much longer in the research world. Researchers need to know how to code. Growing amounts and diversity of data, more interdisciplinary collaborators, and increasing complexity of analyses mean that black-box models, software, and applications can no longer do the job alone. The truth is, if you want your research to be reproducible and transparent, you must code. In the 2013 article “The Big Data Brain Drain: Why Science is in Trouble”, Jake Vanderplas argues that

In short, the new breed of scientist must be a broadly-trained expert in statistics, in computing, in algorithm-building, in software design, and (perhaps as an afterthought) in domain knowledge as well.

I learned MATLAB in graduate school, and experimented with R during a postdoc. I wish I’d delved into this world earlier, and had more skills and knowledge about best practices for scientific software. Basically, I wish I had attended a Software Carpentry bootcamp.

The growing number of Software Carpentry (SWC) bootcamps is further evidence that researchers are increasingly aware of the importance of coding and reproducibility. These bootcamps teach researchers the basics of coding, version control, and similar topics, with the potential for customizing the course’s content to the primary discipline of the audience. I’m a big fan of SWC – read more in my blog post on the organization. Check out SWC founder Greg Wilson’s article on some insights from his years of teaching bootcamps: Software Carpentry: Lessons Learned.

2. Stop using Excel. Or at least stop ONLY using Excel.

Most seasoned researchers know that Microsoft Excel can be potentially problematic for data management: there are loads of ways to manipulate, edit, reorder, and change your data without really knowing exactly what you did. In nerd terms, the trail of dataset changes is known as provenance; generally Excel is terrible at documenting provenance. I wrote about this a few years ago on the blog, and we mentioned a few of the more egregious ways people abuse Excel in our F1000Research publication on the DataUp tool. More recently guest blogger Kara Woo wrote a great post about struggles with dates in Excel.

Of course, everyone uses Excel. In our surveys for the DataUp project, about 88% of the researchers we interviewed used Excel at some point in their research. And we can’t expect folks to stop using it: it’s a great tool! It should, however, be used carefully. For instance, don’t manipulate the sole copy of your raw data in Excel; keep your raw data raw. Use Excel to explore your data, but use other tools to clean and analyze it, such as R, Python, or MATLAB (see #1 above on learning to code). For more help with spreadsheets, see our list of resources and tools: UC3 Spreadsheet Help.
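For example, “keep your raw data raw” can be as simple as a cleaning script that never writes back to the original file. A minimal sketch in Python with pandas (the file and column names here are made up):

```python
import pandas as pd

# Read the raw file, but never write to it: the raw data stay raw.
raw = pd.read_csv("field_samples_raw.csv")

# Do all cleaning on a copy...
clean = raw.dropna(subset=["sample_id"]).copy()        # drop rows missing an ID
clean["collected"] = pd.to_datetime(clean["collected"])  # parse dates properly

# ...and write the result to a NEW file, so every step is repeatable.
clean.to_csv("field_samples_clean.csv", index=False)
```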

3. Learn about how to properly care for your data.

You might know more about your data than anyone else, but you aren’t necessarily an expert when it comes to the stewardship of your data. There are some great guidelines for how best to document, manage, and generally care for your data; I’ve collected some of my favorites here on CiteULike with the tag best_practices. Pick one (or all of them) to read and make sure your data don’t get short shrift.

4. Write a data management plan.

I know, it sounds like the ultimate boring activity for a Friday night. But these three words (data management plan) can make a HUGE difference in the time and energy spent dealing with data during your thesis. Basically, if you spend some time thinking about file organization, sample naming schemes, backup plans, and quality control measures, you can save many hours of heartache later. Creating a data management plan also forces you to better understand best practices related to data (#3 above). Don’t know how to start? Head over to the DMPTool to write a data management plan. It’s free to use, and you can get an idea for the types of things you should consider when embarking on a new project. Most funders require data management plans alongside proposal submissions, so you might as well get the experience now.

5. Read Reinventing Discovery by Michael Nielsen.

Reinventing Discovery: The New Era of Networked Science by Michael Nielsen was published in 2011, and I’ve since heard it referred to as the Bible for Open Science, and the must-read book for anyone interested in engaging in the new era of 4th paradigm research. I’ve only just recently read the book, and wow. I was fist-pumping quite a bit while reading it, which must have made fellow airline passengers wonder what the fuss was about. If they had asked, I would have told them about Nielsen’s stellar explanation of the necessity for and value of openness and transparency in research, the problems with current incentive structures in science, and the steps we should all take towards shifting the culture of research to enable more connectivity and faster progress. Just writing this blog post makes me want to re-read the book.

6. Learn version control.

My blog post, Git/GitHub: a Primer for Researchers, covers much of the importance of version control. Here’s an excerpt:

From git-scm.com, “Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later.” We all deal with version control issues. I would guess that anyone reading this has at least one file on their computer with “v2” in the title. Collaborating on a manuscript is a special kind of version control hell, especially if those writing are in disagreement about systems to use (e.g., LaTeX versus Microsoft Word). And figuring out the differences between two versions of an Excel spreadsheet? Good luck to you. The Wikipedia entry on version control makes a statement that brings versioning into focus:

The need for a logical way to organize and control revisions has existed for almost as long as writing has existed, but revision control became much more important, and complicated, when the era of computing began.

Ah, yes. The era of collaborative research, scripting languages, and big data does make this issue a bit more important and complicated. Version control systems can make this much easier, but they are not necessarily intuitive for the fledgling coder. It might take a little time (plus attending a Software Carpentry Bootcamp) to understand version control, but it will be well worth your time. As an added bonus, your work can be more reproducible and transparent by using version control. Read Karthik Ram’s great article, Git can facilitate greater reproducibility and increased transparency in science.
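If you want to see the core idea in action before committing to a bootcamp, here is a toy sketch that drives git from Python (it assumes git is installed; in real life you would type the same commands in a terminal):

```python
import subprocess

def git(*args):
    """Run a git command and return its output."""
    result = subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

git("init", "analysis-project")  # start recording history for a directory
# After editing analysis-project/model.R you would record a version with:
#   git("-C", "analysis-project", "add", "model.R")
#   git("-C", "analysis-project", "commit", "-m", "first pass at the model")
# Each commit is a recallable version -- no more model_v2_final_FINAL.R.
```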

7. Pick a way to communicate your science to the public. Then do it.

You don’t have to have a black belt in Twitter or run a weekly stellar blog to communicate your work. But you should communicate somehow. I have plenty of researcher friends who feel exasperated by the idea that they need to talk to the public about their work. But the truth is, in the US this communication is critical to our research future. My local NPR station recently ran a great piece called Why Scientists are seen as untrustworthy and why it matters. It points out that many (most?) scientists aren’t keen to spend a lot of time engaging with the broader public about their work. However:

…This head-in-the-sand approach would be a big mistake for lots of reasons. One is that public mistrust may eventually translate into less funding and so less science. But the biggest reason is that a mistrust of scientists and science will have profound effects on our future.

Basically, we are avoiding the public at our own peril. Science funding is on the decline, we are facing increasing scrutiny, and it wouldn’t be hyperbole to say that we are at war without even knowing it. Don’t believe me? Read this recent piece in Science (paywall warning): Battle between NSF and House science committee escalates: How did it get this bad?

So start talking. Participate in public lecture series, write a guest blog post, talk about your research to a crotchety relative at Thanksgiving, or write your congressman about the governmental attack on science.

8. Let everyone watch.

Consider going open. That is, do all of your science out in the public eye, so that others can see what you’re up to. One way to do this is by keeping an open notebook. This concept throws out the idea that you should be a hoarder, not telling others of your results until the Big Reveal in the form of a publication. Instead, you keep your lab notebook (you do have one, right?) out in a public place, for anyone to peruse. Most often an open notebook takes the form of a blog or a wiki, and the researcher updates their notebook daily, weekly, or whatever is most appropriate. There are links to data, code, relevant publications, or other content that helps readers, and the researcher themselves, understand the research workflow. Read more in these two blog posts: Open Up and Open Science: What the Fuss is About.

9. Get your ORCID.

ORCID stands for “Open Researcher & Contributor ID”. The ORCID Organization is an open, non-profit group working to provide a registry of unique researcher identifiers and a transparent method of linking research activities and outputs to these identifiers. The endgame is to support the creation of a permanent, clear and unambiguous record of scholarly communication by enabling reliable attribution of authors and contributors. Basically, researcher identifiers are like social security numbers for scientists. They unambiguously identify you throughout your research life.

Lots of funders, tools, publishers, and universities are buying into the ORCID system. It’s going to make identifying researchers and their outputs much easier. If you have a generic, complicated, compound, or foreign name, you will especially benefit from claiming your ORCID and “stamping” your work with it. It allows you to claim what you’ve done and keep you from getting mixed up with that weird biochemist who does studies on the effects of bubble gum on pet hamsters. Still not convinced? I wrote a blog post a while back that might help.
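As a taste of what a machine-readable researcher ID buys you, here is a sketch that resolves an ORCID iD through ORCID’s public API. The v3.0 endpoint shown reflects the current public API pattern rather than a Dash or UC3 service, and 0000-0002-1825-0097 is Josiah Carberry, the fictional researcher ORCID uses in its own examples:

```python
import requests

# Look up the public record behind an ORCID iD. The iD below belongs to
# Josiah Carberry, ORCID's fictional example researcher.
orcid_id = "0000-0002-1825-0097"

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/record",
    headers={"Accept": "application/json"},  # the default response is XML
)
resp.raise_for_status()
record = resp.json()

# ORCID's JSON uses hyphenated keys such as "given-names".
name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
```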

10. Publish in OA journals, or make your work OA afterward.

A wonderful post by Michael White, Why I don’t care about open access to research: and why you should, captures this issue well:

It’s hard for me to see why I should care about open access…. My university library can pay for access to all of the scientific journals I could wish for, but that’s not true of many corporate R&D departments, municipal governments, and colleges and schools that are less well-endowed than mine. Scientific knowledge is not just for academic scientists at big research universities.

It’s easy to forget that you are (likely) among the privileged academics. Not all researchers have access to publications, and this is even more true for the general public. Why are we locking our work in the Ivory Tower, allowing for-profit publishers to determine who gets to read our hard-won findings? The Open Access movement is going full throttle these days, as evidenced by increasing media coverage (see “Steal this research paper: you already paid for it” from MotherJones, or The Guardian’s blog post “University research: if you believe in openness, stand up for it“). So what can you do?

Consider publishing only in open access journals (see the Directory of Open Access Journals). Does this scare you? Are you tied to a disciplinary favorite journal with a high impact factor? Then make your work open access after publishing in a standard journal. Follow my instructions here: Researchers! Make Your Previous Work #OA.

Openness is one of the pillars of a stellar academic career. From Flickr by David Pilbrow.


DataONE Funded by NSF for Another Round!

This is a week for important funding announcements! I already blogged about our new NSF funding for a Data-Level Metrics project with PLOS and DataONE, and on the heels of that, DataONE has announced their newest round of funding! I’ve borrowed heavily from their press release for this post. If you aren’t familiar with the work DataONE has been doing, go check out their website to learn more.

DataONE awarded $15 million by the NSF as part of an accomplishment-based renewal.

DataONE: the Data Observation Network for Earth (www.dataone.org) is a distributed cyberinfrastructure that meets the needs of science and society for open, persistent, robust, and accessible Earth observational data. DataONE has dramatically increased the discoverability and accessibility of diverse yet interrelated Earth and environmental science data. In doing so, it has enhanced the efficiency of research and enabled scientists, policy makers and others to more easily address complex questions about our environment and our role within it.

DataONE Phase 1

Founded in 2009 by the NSF, DataONE was designed to provide both the tools and infrastructure for organizing and serving up vast amounts of scientific data, in addition to building an engaged community and developing openly available educational resources.

Accomplishments from the last five years include making over 260,000 publicly available data and metadata objects accessible through the DataONE search engine and building a growing network of 22 national and international data repositories. DataONE has published more than 74 papers, reached over 2,000 individuals via direct training events and workshops, and connects with over 60,000 visitors annually via the website.

DataONE has developed an Investigator Toolkit that provides users with tools supporting activities across the full research data life cycle; a dynamic in-person and web-based education program comprising workshops, online best practices, curricula, training modules and other resources; and an engaged community of users via the DataONE Users Group and through collaboration with other national and international initiatives.

Plans for DataONE Phase 2

During the second phase, DataONE will target goals that enable scientific innovation and discovery while massively increasing the scope, interoperability, and accessibility of data. In particular DataONE will:

  • Significantly expand the volume and diversity of data available to researchers for large-scale scientific innovation;

  • Incorporate innovative features to dramatically improve data discovery and further support reproducible and open science; and

  • Establish an openly accessible online education series to support global participation and training in current techniques and perspectives.

DataONE will continue to engage, educate and grow the DataONE community, seek user input to ensure intuitive, user-friendly products and services, and work to ensure the long term sustainability of DataONE services so they continue to evolve and meet needs of researchers and other stakeholders for decades to come.

DataONE Phase 2 might not be quite as influential on graffiti culture as the 80s NY artist Phase 2. Click on the pic to learn more.


UC3, PLOS, and DataONE join forces to build incentives for data sharing

We are excited to announce that UC3, in partnership with PLOS and DataONE, is launching a new project to develop data-level metrics (DLMs). This 12-month project is funded by an Early-concept Grant for Exploratory Research (EAGER) from the National Science Foundation, and will result in a suite of metrics that track and measure data use. The proposal is available via CDL’s eScholarship repository: http://escholarship.org/uc/item/9kf081vf. More information is also available on the NSF Website.

Why DLMs? Sharing data is time-consuming, and researchers need incentives to undertake the extra work. Metrics for data will provide feedback on data usage, views, and impact that will help encourage researchers to share their data. This project will explore and test the metrics needed to capture activity surrounding research data.

The DLM pilot will build on the successful open source Article-Level Metrics community project, Lagotto, originally started by PLOS in 2009. ALMs provide a view into the activity surrounding an article after publication, across a broad spectrum of ways in which research is disseminated and used (e.g., viewed, shared, discussed, cited, and recommended).
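To give a flavor of what ALMs look like in practice, here is a hedged sketch of querying a Lagotto server for one article’s metrics. The host, v5 API path, and response fields are assumptions modeled on PLOS’s ALM application as deployed around this time, not a documented guarantee:

```python
import requests

# Hypothetical Lagotto/ALM query: ask the metrics server what activity it
# has recorded for one article, identified by DOI. Endpoint and response
# shape are assumptions modeled on PLOS's ALM application.
ALM_HOST = "http://alm.plos.org"
doi = "10.1371/journal.pone.0036240"

resp = requests.get(f"{ALM_HOST}/api/v5/articles",
                    params={"ids": doi, "type": "doi"})
resp.raise_for_status()

for article in resp.json().get("data", []):
    # The v5 API rolls individual sources up into broad categories.
    print({k: article.get(k) for k in ("viewed", "saved", "discussed", "cited")})
```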

About the project partners

PLOS (Public Library of Science) is a nonprofit publisher and advocacy organization founded to accelerate progress in science and medicine by leading a transformation in research communication.

Data Observation Network for Earth (DataONE) is an NSF DataNet project which is developing a distributed framework and sustainable cyberinfrastructure that meets the needs of science and society for open, persistent, robust, and secure access to well-described and easily discovered Earth observational data.

The University of California Curation Center (UC3) at the California Digital Library is a creative partnership bringing together the expertise and resources of the University of California. Together with the UC libraries, we provide high quality and cost-effective solutions that enable campus constituencies – museums, libraries, archives, academic departments, research units and individual researchers – to have direct control over the management, curation and preservation of the information resources underpinning their scholarly activities.

The official mascot for our new project: Count von Count. From muppet.wikia.com


The Dash Partners Meeting

This past Thursday, 30 University of California system librarians, developers, and colleagues from nine of the ten campuses assembled at UCLA’s Charles E. Young Library for a discussion of the Dash service. If you weren’t aware, Dash is a University of California project to create a platform that allows researchers to easily describe, deposit and share their research data publicly. The group assembled to talk about the project’s progress and future plans. See the full agenda.

Introductions & Expectations

UC Curation Center (UC3) Director Trisha Cruse kicked off the meeting by asking attendees to introduce themselves and describe what they want to learn during the meeting. Responses had the following themes:

  • Better understanding of what the Dash service is, how it works, and what it offers for researchers.
  • Participation ideas: how the campuses can work together as a group, and what that work looks like.
  • How we will prioritize development and work together as a cohesive group to determine the trajectory of the service.
  • An understanding of how the campuses are implementing Dash: how they plan to reach out to faculty, how the service should be talked about on the campus, what outreach might look like, how this service can fit into the overall research infrastructure, and campus rollout/adoption plans.
  • Future plans for the Dash service.

Overview of the Dash Service

The team then provided an overview of the Dash service, demonstrating how to log in, describe, and upload a dataset to Dash. Four campus instances of Dash went live (beta) on Tuesday 23 September, and campuses were provided with instructions on how to help test the new system. Stephen Abrams covered the technical infrastructure of the Dash service, describing the relationship between the Merritt repository, the EZID identifier service, the DataONE network, and each of the campus Dash instances (slides).

Yours truly followed with a description of DataONE Dash, a unique instance of the service that will replace the existing DataUp Tool (slides). This instance will be available to anyone with a Google login, and all data submitted to DataONE Dash will be in the ONEShare repository (a DataONE Member Node) and therefore discoverable in the DataONE system. Emily Lin of UC Merced pointed out that some UC Dash contributors might also want their datasets discoverable in DataONE; an enhancement was suggested that would allow UC Dash users to check a box, indicating they would like their work indexed by DataONE.

Stephen then discussed the cost model that is pending approval for Dash (slides). This model is based on recovering the cost of storage only; there is no service fee for UC users. The model indicates that UC could provide researchers, staff, and graduate students 10 GB of storage in Dash for a total of $290,000/year for the entire system. Sharon Farb of UCLA suggested that we determine what storage solutions are already in place on the various campuses and coordinate our efforts with those extant solutions. Colleagues from UCSF pointed out that budgets are tight for research labs, and charging for storage may be a significant hurdle for them to participate. They need a concrete answer regarding costs now; options may be for each campus to pay up front, or for the UC Office of the President to pay for the system. Individual researcher charges would be the responsibility of each campus; CDL has no plans to take on that responsibility.

I followed Stephen with an overview of data governance in Dash (slides). Dash will offer only CC-BY for UC researchers; DataONE Dash will offer only CC0. The existing DataShare system at UCSF (on which Dash is based) uses a contract (i.e., a data use agreement); however, this option will not be available moving forward since it inhibits data reuse and complicates Dash implementation. The decision to use CC-BY for Dash is based on conversations with UC General Counsel, which is currently reevaluating the UC Data Policy. The UC Regents technically own data produced by UC researchers, which complicates how licenses can be used in Dash.

Development Contributions

Marisa Strong then described how campuses can get involved in the development process. She identified the different components of the Dash service, which include three code lines (all in GitHub under an MIT license):

  1. dash-xtf, which houses the search and browse functionality;
  2. dash-ingest, the rails client for ingest to Merritt; and
  3. dash-harvester, a Python script for harvesting metadata.

Instructions on how to contribute code are available on the Dash wiki, including how to set up a local test environment.

Matthew McKinley from UC Irvine then described their group’s development efforts in working on the Dash code lines to implement geospatial metadata fields. He described the process for forking the code, implementing the new feature in a local branch, then merging that branch back into the main code line via a pull request.

Plans for Development with Requested Funding

UC3 has submitted a proposal to the Alfred P. Sloan Foundation, requesting funds to continue development of the Dash service. If approved, the grant would fund one year of development focused on the following:

  • Streamlined and improved user interface / user experience
  • Development of embedded widgets for deposit and search functionality in Dash
  • Generalization of Dash protocols so the service can be layered on top of any repository
  • Expanded functions, including parsing spreadsheets for cleaning and best practices (similar to previous DataUp functionality)
  • Support for more metadata schemas, e.g., EML, FGDC

This work would happen in parallel with the existing Dash application, allowing continuous service while development is ongoing. Declan Fleming of UCSD asked whether UC3 efforts would be better spent using existing infrastructure and tools, such as Fedora. The UC3 team said they would like to talk further about possible approaches to the Dash system, and encouraged attendees to share ideas prior to the start of development efforts (if funded).

Dash Enhancements: Identification & Prioritization

The group went through the existing enhancements suggested for Dash, available on GitHub Issues. There were 18 existing enhancements, and the group then suggested an additional 51. Attendees then broke into three groups to prioritize the 69 enhancements for future development. Enhancements that floated to the top included:

  • embargoes (restricted access) for datasets
  • metrics/feedback for data depositors and users (e.g., dataset-level metrics)
  • integration with tools and software such as GitHub, ResearchGate, R, and eScholarship
  • improvements to metadata, including ORCID and Fundref integration

This exercise is only the beginning of the process; the UC3 group plans to tidy up the list and re-share with the group after the meeting for continued discussion. This process will be documented on GitHub and via the Dash listserv. Stay tuned!

Next Steps & Wrap-up

The meeting ended with a discussion about how the campuses would stay informed, what contributions each campus might make to Dash, and how the cross-campus partnership should take shape moving forward. Communication lines will include the Dash Facebook page, Twitter account (@UC3Dash), and the GitHub page. Trisha facilitated a final around-the-room, where attendees could share closing thoughts. Common themes included excitement about the Dash service, appreciation for meeting campus partners, and interest in the development plans moving forward.

The UCLA campus as it appeared in 1929. Enrollment was 6,175. Contributed to Calisphere by UC Berkeley.

DataUp is Merging with Dash!

Exciting news! We are merging the DataUp tool with our new data sharing platform, Dash.

About Dash

Dash is a University of California project to create a platform that allows researchers to easily describe, deposit and share their research data publicly. Currently the Dash platform is connected to the UC3 Merritt Digital Repository; however, we have plans to make the platform compatible with other repositories using protocols such as SWORD and OAI-PMH. The Dash project is open-source and we encourage community discussion and contribution to our GitHub site.

About the Merge

There is significant overlap in functionality for Dash and DataUp (see below), so we will merge these two projects to enable better support for our users. This merge is funded by an NSF grant (available on eScholarship) supplemental to the DataONE project.

The new service will be an instance of our Dash platform (to be available in late September), connected to the DataONE repository ONEShare. Previously the only way to deposit datasets into ONEShare was via the DataUp interface, thereby limiting deposits to spreadsheets. With the Dash platform, this restriction is removed and any dataset type can be deposited. Users will be able to log in with their Google ID (other options being explored). There are no restrictions on who can use the service, and therefore no restrictions on who can deposit datasets into ONEShare, and the service will remain free. The ONEShare repository will continue to be supported by the University of New Mexico in partnership with CDL/UC3. 

The NSF grant will continue to fund a developer to work with the UC3 team on implementing the DataONE-Dash service, including enabling login via Google and other identity providers, ensuring that metadata produced by Dash will meet the conditions of harvest by DataONE, and exploring the potential for implementing spreadsheet-specific functionality that existed in DataUp (e.g., the best practices check). 

Benefits of the Merge

  • We will be leveraging work that UC3 has already completed on Dash, which has fully-implemented functionality similar to DataUp (upload, describe, get identifier, and share data).
  • ONEShare will continue to exist and be a repository for long tail/orphan datasets.
  • Because Dash is an existing UC3 service, the project will move much more quickly than if we were to start from “scratch” on a new version of DataUp in a language that we can support.
  • Datasets will get DataCite digital object identifiers (DOIs) via EZID (a sketch of the minting request follows this list).
  • All data deposited via Dash into ONEShare will be discoverable via DataONE.
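For those curious about the mechanics, minting a DataCite DOI through EZID is a single HTTP request carrying ANVL-formatted metadata. A minimal sketch; the account credentials and target URL are placeholders, and doi:10.5072/FK2 is the DataCite test shoulder rather than a production one:

```python
import requests

# Hypothetical EZID mint: POST ANVL metadata to a DOI "shoulder".
# doi:10.5072/FK2 is the DataCite *test* shoulder; the account and
# target URL below are placeholders.
anvl = "\n".join([
    "_target: https://oneshare.cdlib.org/datasets/example",
    "datacite.creator: Researcher, Jane",
    "datacite.title: Example field measurements, 2014",
    "datacite.publisher: ONEShare",
    "datacite.publicationyear: 2014",
])

resp = requests.post(
    "https://ezid.cdlib.org/shoulder/doi:10.5072/FK2",
    data=anvl.encode("utf-8"),
    auth=("apitest", "apitest"),
    headers={"Content-Type": "text/plain; charset=UTF-8"},
)
print(resp.text)  # e.g. "success: doi:10.5072/FK2..." when the mint works
```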

FAQ about the change

What will happen to DataUp as it currently exists?

The current version of DataUp will continue to exist until November 1, 2014, at which point we will discontinue the service and the dataup.org website will be redirected to the new service. The DataUp codebase will still be available via the project’s GitHub repository.

Why are you no longer supporting the current DataUp tool?

We have limited resources and can’t properly support DataUp as a service due to a lack of local experience with the C#/.NET framework and the Windows Azure platform.  Although DataUp and Dash were originally started as independent projects, over time their functionality converged significantly.  It is more efficient to continue forward with a single platform and we chose to use Dash as a more sustainable basis for this consolidated service.  Dash is implemented in the  Ruby on Rails framework that is used extensively by other CDL/UC3 service offerings.

What happens to data already submitted to ONEShare via DataUp?

All datasets now in ONEShare will be automatically available in the new Dash discovery environment alongside all newly contributed data.  All datasets also continue to be accessible directly via the Merritt interface at https://merritt.cdlib.org/m/oneshare_dataup.

Will the same functionality exist in Dash as in DataUp?

Users will be able to describe their datasets, get an identifier and citation for them, and share them publicly using the Dash tool. The initial implementation of DataONE-Dash will not have capabilities for parsing spreadsheets and reporting on best practices compliance. Also, the user will not be able to describe column-level (i.e., attribute) metadata via the web interface. Our intention, however, is to develop these functions and other enhancements in the future. Stay tuned!

Still want help specifically with spreadsheets?

  • We have pulled together some best practices resources: Spreadsheet Help 
  • Check out the Morpho Tool from the KNB – free, open-source data management software you can download to create/edit/share spreadsheet metadata (both file- and column-level). Bonus – The KNB is part of the DataONE Network.

It’s the dawn of a new day for DataUp! From Flickr by David Yu.
