Embargoing the Term “Embargoes” Indefinitely

I’m two months into a position that lends part of its time to overseeing Dash, a Data Publication platform for the University of California. On my first day, I was told that a big priority for Dash was to build out an embargo feature. Coming to the California Digital Library (CDL) from PLOS, an OA publisher with an OA Data Policy, I couldn’t understand why I would be leading an effort to embargo data rather than open it up, so I met this embargo directive with apprehension.

I began acquainting myself with the campuses, and a couple of weeks ago at UCSF I presented the prototype of this “embargo” feature and asked why researchers wanted to close data on an open data platform. This is where it gets fun.

“Our researchers really just want a feature to keep their data private while their associated paper is under peer review. We see this frequently when people submit to PLOS”.

Yes, I had contributed to my own conflict.

While I laughed about how I had previously been the person at PLOS convincing UC researchers to make their data public, I recognized that this would be an easy issue to clarify. And here we are.

The term “embargo” carries a negative connotation in the open community, and I ask that moving forward we not use it to describe keeping data private until an associated manuscript has been accepted. Let us instead call this “Private for Peer Review” or “Timed Release,” with a “Peer Review URL” available for sharing data during the peer review process, as Dryad does.

  • Embargoes imply that data are being held private for reasons other than the peer review process.
  • Embargoes are not appropriate if you have a funder, publisher, or other mandate to open up your data.
  • Embargoes are not appropriate for sensitive data; such data should not be held in a public repository at all unless access is mediated by a data access committee and the repository has appropriate security controls.
  • Embargoes are not appropriate for open Data Publications.

To embargo your data for longer than the peer review process (or for other reasons) is to shield your data from being used, built upon, or validated. This is contrary to “Open” as a strategy to further scientific findings and scholarly communications.

Dash is implementing features that will allow researchers to choose, in line with what we believe is reasonable for peer review and revisions, a publication date up to six months after submission. If researchers choose to use this feature, they will be given a Peer Review URL that can be shared to download the data until the data are public. It is important to note though that while the data may be private during this time, the DOI for the data and associated metadata will be public and should be used for citation. These features will be for the use of Peer Review; we do not believe that data should be held private for a period of time on an open data publication platform for other reasons.

Opening up data, publishing data, and giving credit to data are all important in emphasizing that data are a credible and necessary piece of scholarly work. Dash and other repositories will allow for data to be private through peer review (with the intent to have data be public and accessible in the near future). However, my hope is that as the data revolution evolves, incentives to open up data sooner will become apparent. The first step is to check our vocabulary and limit the use of the term “embargo” to cases where data are being held private without an open data intention.


California Digital Library Supports the Initiative for Open Citations

California Digital Library (CDL) is proud to announce our formal endorsement for the Initiative for Open Citations (I4OC). CDL has long supported free and reusable scholarly work, as well as organizations and initiatives supporting citations in publication. With a growing database of literature and research data citations, there is a need for an open global network of citation data.

The Initiative for Open Citations will work with Crossref and their Cited-by service to open up all references indexed in Crossref. Many publishers and stakeholders have opted in to participate in opening up their citation data, and we hope that each year this list will grow to encompass all fields of publication. Furthermore, we are looking forward to seeing how research data citations will be a part of this discussion.

CDL is a firm believer in and advocate for data citations and persistent identifiers in scholarly work. However, if research publications are cited and those citations are not freely accessible and searchable, our goal is not accomplished. We are proud to support the Initiative for Open Citations and invite you to get in touch with any questions you may have about the need for open citations or ways to be an advocate for this necessary change.

Below are some frequently asked questions about the need, ways to get involved, and misconceptions regarding citations. The answers are provided by the board and founders of I4OC:

I am a scholarly publisher not enrolled in the Cited-by service. How do I enable it?

If not already a participant in Cited-by, a Crossref member can register for this service free of charge. Having done so, the publisher needs only to give its consent to Crossref to “open” its reference data; participation in Cited-by alone does not automatically make these references available via Crossref’s standard APIs.

I am a scholarly publisher already depositing references to Crossref. How do I publicly release them?

We encourage all publishers to make their reference metadata publicly available. If you are already submitting article metadata to Crossref as a Cited-by participant, opening your references can be achieved in a matter of days. Publishers can do this easily and at no cost:

  • either by contacting Crossref support directly by e-mail, asking them to turn on reference distribution for all of the relevant DOI prefixes;
  • or by themselves setting the <reference_distribution_opt> metadata element to “any” for each DOI deposit for which they want to make references openly available.

How do I access open citation data?

Once made open, the references for individual scholarly publications may be accessed immediately through the Crossref REST API.

Open citations are also available from the OpenCitations Corpus, a database created to house scholarly citations that progressively and systematically harvests citation data from Crossref and other sources. An advantage of accessing citation data from the OpenCitations Corpus is that they are available in a standards-compliant, machine-readable RDF format, and include information about both incoming and outgoing citations of bibliographic resources (published articles and books).
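Once a publisher’s references are open, retrieving them for a given DOI is a single call to the Crossref REST API. The sketch below is a minimal, illustrative example against the `/works/{doi}` endpoint; the helper names are my own, and note that the `reference` field is simply absent from the response when a publisher has not opened its references.

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def work_url(doi):
    """Build the Crossref REST API URL for a single DOI."""
    # DOIs contain slashes, so percent-encode the whole identifier.
    return CROSSREF_API + urllib.parse.quote(doi, safe="")

def extract_references(message):
    """Pull the reference list out of a Crossref 'message' object.

    Returns [] when the publisher has not opened its references."""
    return message.get("reference", [])

def fetch_references(doi):
    """Fetch the open references for a DOI (makes a network call)."""
    with urllib.request.urlopen(work_url(doi)) as resp:
        payload = json.load(resp)
    return extract_references(payload["message"])
```

Calling `fetch_references` with a DOI whose publisher has opted in returns a list of structured reference objects; for closed references the list is empty.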

Does this initiative cover future citations only or also historical data?

Both. All DOIs under a prefix set for open reference distribution will have open references through Crossref, for past, present, and future publications.

Past and present publications that lack DOIs are not dealt with by Crossref, and gaining access to their citation data will require separate initiatives by their publishers or others to extract and openly publish those references.

Under what licensing terms is citation data being made available?

Crossref exposes article and reference metadata without a license, since it regards these as raw facts that cannot be licensed.

The structured citation metadata within the OpenCitations Corpus are published under a Creative Commons CC0 public domain dedication, to make it explicitly clear that these data are open.

My journal is open access. Aren’t its articles’ citations automatically available?

No. Although Open Access articles may be open and freely available to read on the publisher’s website, their references are not separate, and are not necessarily structured or accessible programmatically. Additionally, although their reference metadata may be submitted to Crossref, Crossref historically set the default for references to “closed,” with a manual opt-in being required for public references. Many publisher members have not been aware that they could simply instruct Crossref to make references open, and, as a neutral party, Crossref has not promoted the public reference option. All publishers therefore have to opt in to open distribution of references via Crossref.

Is there a programmatic way to check whether a publisher’s or journal’s citation data is free to reuse?

For Crossref metadata, their REST API reveals how many and which publishers have opened references. Any system or tool (or a JSON viewer) can be pointed at this query: http://api.crossref.org/members?filter=has-public-references:true&rows=1000 to show the count and the list of publishers with "public-references": true.

To query a specific publisher’s status, use, for example:

http://api.crossref.org/members?query=springer, then find the public-references tag. In some cases it will be set to false.
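The queries above can be scripted. This is a small sketch (the function names are my own) that builds `/members` query URLs like the ones quoted and reads the `public-references` flag out of a parsed response; it assumes the documented Crossref members response shape, with `primary-name` and `public-references` fields on each item.

```python
import urllib.parse

MEMBERS_API = "https://api.crossref.org/members"

def members_query_url(query=None, public_only=False, rows=1000):
    """Build a Crossref /members query URL like the ones quoted above."""
    params = {"rows": str(rows)}
    if public_only:
        # Restrict results to members that have opened their references.
        params["filter"] = "has-public-references:true"
    if query:
        params["query"] = query
    return MEMBERS_API + "?" + urllib.parse.urlencode(params)

def public_reference_flags(response):
    """Map each member's primary name to its public-references flag
    (False when the field is absent) from a parsed /members response."""
    items = response.get("message", {}).get("items", [])
    return {m["primary-name"]: m.get("public-references", False) for m in items}
```

Feeding the JSON from a `/members?query=...` request into `public_reference_flags` gives a quick yes/no per publisher.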

Contact

You can contact the founding group by e-mail at: info@i4oc.org.

Describing the Research Process

We at UC3 are constantly developing new tools and resources to help researchers manage their data. However, while working on projects like our RDM guide for researchers, we’ve noticed that researchers, librarians, and people working in the broader digital curation space often talk about the research process in very different ways.

To help bridge this gap, we are conducting an informal survey to understand the terms researchers use when talking about the various stages of a research project.

If you are a researcher and can spare about 5 minutes, we would greatly appreciate it if you would click the link below to participate in our survey.

http://survey.az1.qualtrics.com/jfe/form/SV_a97IJAEMwR7ifRP

Thank you.

Data Publication: Sharing, Crediting, and Re-Using Research Data

In the most basic terms, Data Publishing is the process of making research data publicly available for re-use. But even in this simple statement there are many misconceptions about what Data Publications are and why they are necessary for the future of scholarly communications.

Let’s break down a commonly accepted definition of “research data publishing”. A Data Publication has three core features: (1) data that are publicly accessible and preserved for an indefinite amount of time, (2) descriptive information about the data (metadata), and (3) a citation for the data (giving credit to the data). Why are these elements essential? Because together they make research data reusable and reproducible, the goal of a Data Publication.

Data are publicly accessible and preserved indefinitely

There are many ways for researchers to make their data publicly available, be it within the Supporting Information files of a journal article or within an institutional, field-specific, or general repository. For a true Data Publication, data should be submitted to a stable repository that can ensure the data will be available and stored for an indefinite amount of time. There are over a thousand repositories registered with re3data, and many publishers have repository guides offering field-specific guidance. When data are not suitable for public deposition (e.g., when they contain sensitive information), they should still be stored in a preserved and compliant space. While this restriction makes advocating for data publishing and preservation more difficult, it is important to ensure these data neither violate ethical requirements nor end up locked in a filing cabinet and eventually thrown out. Preservation of data is a necessity for the future.

Data are described (data have metadata)

Data without proper documentation or descriptive metadata are about as useful as research without data. If a Data Publication is a citable piece of scholarly work, it should contain the information that allows it to be a useful and valued piece of scholarly work. Documentation and metadata range from information about the software used for analysis to who funded the work. While these examples serve separate purposes (one for re-use, the other for credit), it is important that all information about the creation of the dataset (who, where, how, related publications) is available.

Data are citable and credible

We’ve established that datasets are an essential research output and an important piece of scholarly work, and they should receive the same benefits. Data need a persistent identifier (a stable link) that can be referenced. While many repositories use a DataCite DOI for this, some field-specific repositories use accession numbers (e.g., NCBI repositories) that can be referenced within a URL. This is one of the reasons data need to be available in a stable repository: it’s a bit difficult to reference and credit data that live on your hard drive!

If it’s so clear- why are there barriers?

Data publishing has become more widely accepted in the last ten years, with new standards from funders and publishers and a growth in stable repositories. However, there’s still work to be done and more questions to be answered before we reach mass adoption. Let’s start that conversation (you can be the questioner and I’ll be the advocate):

Organizing and submitting data are time intensive and in turn, costly

Trying to replicate a data set from scratch takes much more time (and money) than publishing your data (see the robotics example here). Taking the time to search your old computer files or to get in touch with your last institution to retrieve your data is more complicated than publishing your data up front. And having a paper retracted because your data are called into question and you cannot share them would cost more time, money, and reputation than proactively publishing your datasets.

As an important side note: Data Publications do not need to be linked to a journal publication. While it may take extra time to submit a Data Publication in proper form, if used as an intermediate step in the research process you can reduce time later, get credit, and benefit the research community in the meantime.

What’s the incentive?

Credit. Next question?

But beyond credit for a citable piece of work, publishing data as a common practice will shift focus from publications being an end point in the research cycle to a starting point and this shift is crucial for transparency and reproducibility in published works. Incentives will become clear once Data Citations become common practice within the publisher and research community, and resources are available for researchers to know how (and have the time/funds) to submit Data Publications.

Too few resources for understanding Data Publishing

Many great papers have been posted and published in the last ten years about what a Data Publication is; however, fewer resources have been made available to the research community on how to integrate Data Publishing into the research life cycle and how to organize data so that they are even suitable for a Data Publication. Data Management Plans, courses on research data management, and pressure from various funder and publisher policies will help, but there’s a serious need for education on data planning and organization (including metadata and format requirements) as well as awareness of data publishing platforms and their benefits. This is a call to the community to release these materials and engage in the Research Data Management (RDM) community to get as many of these conversations going as possible. The more resources, answers, and guidance that institutions can provide to researchers, the less the “it takes too much time and money” argument will arise, the easier it will be to achieve the incentive, and the further we will push the boundaries of transparency in scholarly communications.

There’s no better time than now to re-evaluate what resources are available for research output. If we strive for re-use and reproducibility of research data within the community, then now is the time to increase awareness and adoption of Data Publication.

For more information about research data organizations, machine actionable Data Management Plans, or Data Publication platforms, please utilize UC3 resources or get in touch at uc3@ucop.edu.

Ensuring access to critical research data

For the last two months, UC3 has been working with the teams at Data.gov, Data Refuge, the Internet Archive, and Code for Science (creators of the Dat Project) to aggregate government data.

Data that spans the globe

There are currently volunteers across the country working to discover and preserve publicly funded research, especially climate data, that is at risk of being deleted or lost from the public record. The largest initiative, Data Refuge, is led by librarians and scientists. They are holding events across the UC campuses and the US that you can attend to help out in person, and they are organizing the library community to band together to curate the data and ensure it is preserved and accessible.

Our initiative builds on this work and aims to assemble a corpus of government data and corresponding metadata. We are focusing on public research data, especially data at risk of disappearing. The initiative was nicknamed “Svalbard” by Max Ogden of the Dat Project, after the Svalbard Global Seed Vault in the Arctic. As of today, our friends at Code for Science have released 38 GB of metadata: over 30 million hashes and URLs of research data files.

The Svalbard Global Seed Vault in the Arctic

To aid in this effort

We have assembled the following metadata as part of the Code for Science’s Svalbard v1:

  • 2.7 million SHA-256 hashes for all downloadable resources linked from Data.gov, representing around 40TB of data
  • 29 million SHA-1 hashes of files archived by the Internet Archive and the Archive Team from federal websites and FTP servers, representing over 120TB of data
  • All metadata from Data.gov, about 2.1 million datasets
  • A list of ~750 .gov and .mil FTP servers

There are additional sources, such as Archivers.Space, EDGI, Climate Mirror, and Azimuth Data Backup, that we are working on adding metadata for in future releases.

Following the principles set forth by the librarians behind Data Refuge, we believe it’s important to establish a clear and trustworthy chain of custody for research datasets so that mirror copies can be trusted. With this project, we are working to curate metadata that includes strong cryptographic hashes of data files in addition to metadata that can be used to reproduce a download procedure from the originating host.
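The chain-of-custody idea above boils down to recomputing a strong hash over each mirrored file and comparing it with the hash recorded in the metadata release. A minimal sketch in Python (the function names are illustrative, not part of the Svalbard tooling):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks
    so arbitrarily large datasets fit in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_mirror_copy(path, published_hash):
    """True when a mirrored file matches the hash published in a
    metadata release (hex digests compared case-insensitively)."""
    return sha256_of_file(path) == published_hash.lower()
```

Running `verify_mirror_copy` over every file in a mirror against the published hash list is enough to confirm that a copy is faithful, whoever hosts it.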

We are hoping the community can use this data in the following ways:

  • To independently verify that the mirroring processes that produced these hashes can be reproduced
  • To aid in developing new forms of redundant dataset distribution (such as peer to peer networks)
  • To seed additional web crawls or scraping efforts with additional dataset source URLs
  • To encourage other archiving efforts to publish their metadata in an easily accessible format
  • To cross reference data across archives, for deduplication or verification purposes

What about the data?

The metadata is great, but the initial release of 30 million hashes and URLs is just part of our project. The actual content (the files from which the hashes were derived) has also been downloaded, and is stored either at the Internet Archive or on our California Digital Library servers.

The Dat Project carried out an HTTP mirror of Data.gov (~40 TB) and uploaded it to our servers at the California Digital Library. We are working with them to access ~160 TB of additional data in the future and have partnered with UC Riverside to offer longer-term storage.

Download

You can download the metadata here using Dat Desktop or the Dat CLI tool. We are using the Dat Protocol for distribution so that we can publish new metadata releases efficiently while still keeping the old versions around. Dat provides a secure cryptographic ledger, similar in concept to a blockchain, that can verify the integrity of updates.

Feedback

If you want to learn more about how CDL and the UC3 team are involved, contact us at uc3@ucop.edu or @UC3CDL. If you have suggestions or questions, you can join the Code for Science Community Chat. And if you are a technical user, you can report issues or get involved at the Svalbard GitHub.

This is crossposted here: https://medium.com/@maxogden/project-svalbard-a-metadata-vault-for-research-data-7088239177ab#.f933mmts8

Government Data At Risk

Government data is at risk, but that is nothing new.  

The existence of Data.gov, the Federal Open Data Policy, and open government data belies the fact that, historically, a vast amount of government data and digital information is at risk of disappearing in the transition between presidential administrations. For example, between 2008 and 2012, over 80 percent of the PDFs hosted on .gov domains disappeared. To track these and other changes, California Digital Library (CDL) joined with the University of North Texas, the Library of Congress, the Internet Archive, and the U.S. Government Publishing Office to create the End of Term (EOT) Archive. After archiving the web presence of federal agencies in 2008 and 2012, the team initiated a new crawl in September 2016.

In light of recent events, tools and infrastructure initially developed for EOT and other projects have been taken up by efforts to backup “at risk” datasets, including those related to the environment, climate change, and social justice. Data Refuge, coordinated by the Penn Program of Environmental Humanities (PPEH), has organized a series of “Data Rescue” events across the country where volunteers nominate webpages for submission to the End of Term Archive and harvest “uncrawlable” data to be bagged and submitted to an open data archive. Efforts such as the Azimuth Climate Data Backup Project and Climate Mirror do not involve submitting data or information directly to the End of Term Archive, but have similar aims and workflows.

These efforts are great for raising awareness and building back-ups of key collections. In the background, CDL and the team behind the Dat Project have worked to backup Data.gov, itself. The goal is not only to preserve the datasets catalogued by Data.gov but also the associated metadata and organization that makes it such a useful location for finding and using government data. As a result of this partnership, for the first time ever, the entire Data.gov metadata catalog of over 2 million datasets will soon be available for bulk download. This will allow the various backup efforts to coordinate and cross reference their data sets with those on Data.gov. To allow for further coordination and cross referencing, the Dat team has also begun acquiring the metadata for all the files acquired by Data Refuge, the Azimuth Climate Data Project, and Climate Mirror.

In an effort to keep track of all these efforts to preserve government data and information, we’re maintaining the following annotated list. As new efforts emerge or existing efforts broaden or change their focus, we’ll make sure the list is updated. Feel free to send additional info on government data projects to: uc3@ucop.edu

Get involved: Ongoing Efforts to Preserve Scientific Data or Support Science

Data.gov – The home of the U.S. Government’s open data, much of which is non-biological and non-environmental. Data.gov has a lightweight system for reporting and tracking datasets that aren’t represented and functions as a single point of discovery for federal data. Newly archived data can and should be reported there. CDL and the Dat team are currently working to backup the data catalogued on Data.gov and also the associated metadata.

End of Term – A collaborative project to capture and save U.S. Government websites at the end of presidential administrations. The initial partners in EOT included CDL, the Internet Archive, the Library of Congress, the University of North Texas, and the U.S. Government Publishing Office. Volunteers at many Data Rescue events use the URL nomination and BagIt/Bagger tools developed as part of the EOT project.

Data Refuge – A collaborative effort that aims to backup research-quality copies of federal climate and environmental data, advocate for environmental literacy, and build a consortium of research libraries to scale their tools and practices to make copies of other kinds of federal data. Find a Data Rescue event near you.

Azimuth Climate Data Backup Project – An urgent project to back up US government climate databases. Initially started by statistician Jan Galkowski and John Baez, a mathematician and science blogger at UC Riverside.

Climate Mirror – A distributed volunteer effort to mirror and back up U.S. Federal Climate Data. This project is currently being led by Data Refuge.

The Environmental Data and Governance Initiative – An international network of academics and non-profits that addresses potential threats to federal environmental and energy policy, and to the scientific research infrastructure built to investigate, inform, and enforce. EDGI has built many of the tools used at Data Rescue events.

March for Science – A celebration of science and a call to support and safeguard the scientific community. The main march in Washington DC and satellite marches around the world are scheduled for April 22nd (Earth Day).

314 Action – A nonprofit that intends to leverage the goals and values of the greater science, technology, engineering, and mathematics community to aggressively advocate for science.


Understanding researcher needs and values related to software

Software is as important as data when it comes to building upon existing scholarship. However, while there has been a small amount of research into how researchers find, adopt, and credit it, there is a comparative lack of empirical data on how researchers use, share, and value their software.

The UC Berkeley Library and the California Digital Library are investigating researchers’ perceptions, values, and behaviors in regards to software generated as part of the research process. If you are a researcher, it would be greatly appreciated if you could spare 10-15 minutes to complete the following survey:

Take the survey now!

The results of this survey will help us better understand researcher needs and values related to software and may also inform the development of library services related to software best practices, code sharing, and the reproducibility of scholarly activity.

If you have questions about our study or any problems accessing the survey, please contact yasminal@berkeley.edu or John.Borghi@ucop.edu.


csv conf is back in 2017!

csv,conf,v3 is happening!

This time the community-run conference will be in Portland, Oregon, USA on the 2nd and 3rd of May 2017. It will feature stories about data sharing and data analysis from science, journalism, government, and open source. We want to bring together data makers/doers/hackers from backgrounds like science, journalism, open government, and the wider software industry to share knowledge and stories.

csv,conf is a non-profit community conference run by people who love data and sharing knowledge. This isn’t just a conference about spreadsheets. CSV Conference is a conference about data sharing and data tools. We are curating content about advancing the art of data collaboration, from putting your data on GitHub to producing meaningful insight by running large scale distributed processing on a cluster.

Submit a Talk!  Talk proposals for csv,conf close Feb 15, so don’t delay, submit today! The deadline is fast approaching and we want to hear from a diverse range of voices from the data community.

Talks are 20 minutes long and can be about any data-related concept that you think is interesting. There are no rules for our talks, we just want you to propose a topic you are passionate about and think a room full of data nerds will also find interesting. You can check out some of the past talks from csv,conf,v1 and csv,conf,v2 to get an idea of what has been pitched before.

If you are passionate about data and the many applications it has in society, then join us in Portland!


Speaker perks:

  • Free pass to the conference
  • Limited number of travel awards available for those unable to pay
  • Did we mention it’s in Portland in the Spring????

Submit a talk proposal today at csvconf.com

Early bird tickets are now on sale here.

If you have colleagues or friends who you think would be a great addition to the conference, please forward this invitation along to them! csv,conf,v3 is committed to bringing a diverse group together to discuss data topics. 

– UC3 and the entire csv,conf,v3 team

For questions, please email csv-conf-coord@googlegroups.com, DM @csvconference or join the csv,conf public slack channel.

This was cross-posted from the Open Knowledge International Blog: http://blog.okfn.org/2017/01/12/csvconf-is-back-in-2017-submit-talk-proposals-on-the-art-of-data-analysis-and-collaboration/

Software Carpentry / Data Carpentry Instructor Training for Librarians

We are pleased to announce that we are partnering with Software Carpentry (http://software-carpentry.org) and Data Carpentry (http://datacarpentry.org) to offer an open instructor training course on May 4-5, 2017 geared specifically for the Library Carpentry movement.  

Open call for Instructor Training

This course will take place in Portland, OR, in conjunction with csv,conf,v3, a community conference for data makers everywhere. It’s open to anyone, but the two-day event will focus on preparing members of the library community as Software and Data Carpentry instructors. The sessions will be led by Library Carpentry community members, Belinda Weaver and Tim Dennis.

If you’d like to participate, please apply by filling in the form at https://amy.software-carpentry.org/forms/request_training/ (applications are now closed).

What is Library Carpentry?

For those who don’t know, Library Carpentry is a global community of library professionals that is customizing Software Carpentry and Data Carpentry modules for training the library community in software and data skills. You can follow us on Twitter @LibCarpentry.

Library Carpentry is actively creating training modules for librarians and holding workshops around the world. It’s a relatively new movement that has already been a huge success. You can learn more by reading the recently published article: Library Carpentry: software skills training for library professionals.

Why should I get certified?

Library Carpentry is a movement tightly coupled with the Software Carpentry and Data Carpentry organizations. Since all are based on a train-the-trainer model, one of our challenges has been how to get more experience as instructors. This issue is handled within Software and Data Carpentry by requiring instructor certification.

Although certification is not a requirement to be involved in Library Carpentry, we know that getting certified will help us refine workshops and teaching modules and grow the movement. Also, by getting certified, you can start hosting your own Library Carpentry, Software Carpentry, or Data Carpentry events on your campus. It’s a great way to engage with your campus and library community!

Prerequisites

Applicants will learn how to teach people the skills and perspectives required to work more effectively with data and software. The focus will be on evidence-based education techniques and hands-on practice; as a condition of taking part, applicants must agree to:

  1. Abide by our code of conduct, which can be found at http://software-carpentry.org/conduct/ and http://datacarpentry.org/code-of-conduct/,
  2. Agree to teach at a Library Carpentry, Software Carpentry, or Data Carpentry workshop within 12 months of the course, and
  3. Complete three short tasks after the course in order to complete the certification. The tasks take a total of approximately 8-10 hours: see http://swcarpentry.github.io/instructor-training/checkout/ for details.

Costs

This course will be held in Portland, OR, in conjunction with csv,conf,v3 and is sponsored by csv,conf,v3 and the California Digital Library. To help offset the costs of this event, we will ask attendees to contribute an optional fee (tiered prices will be recommended based on your or your employer’s ability to pay). No one will be turned down based on inability to pay and a small number of travel awards will be made available (more information coming soon).  

Application

Hope to see you there! To apply for this Software Carpentry / Data Carpentry Instructor Training course, please submit the application by Jan 31, 2017:

  https://amy.software-carpentry.org/forms/request_training/ (applications are now closed)

Under Group Name, use “CSV (joint)” if you wish to attend both the training and the conference, or “CSV (training only)” if you only wish to attend the training course.

More information

If you have any questions about this Instructor Training course, please contact admin@software-carpentry.org. And if you have any questions about the Library Carpentry movement, please contact us via email at uc3@ucop.edu, via Twitter @LibCarpentry, or join the Gitter chatroom.

Dispatches from PIDapalooza

Last month, California Digital Library, ORCID, Crossref, and DataCite brought together the brightest minds in scholarly infrastructure to do the impossible: make a conference on persistent identifiers fun!


Usually, discussions about persistent identifiers (PIDs) and networked research are dry and hard to get through, or we find ourselves discussing the basics and never getting to the meat.

We designed PIDapalooza to attract kindred spirits who are passionate about improving interoperability and the overall quality of our scholarly infrastructure. We knew if we built it, they would come!

The results were fantastic and there was a great showing from the University of California community:

All PIDapalooza presentations are being archived on Figshare: https://pidapalooza.figshare.com

Take a look and make sure you are following @pidapalooza for word on future PID fun!
