The Good & Bad: Web Application versus Add-in

If you missed it, I recently posted about the future direction of the DCXL project.  I boiled it down to the question of Add-in versus web application.  The community has offered feedback, and some major themes have emerged, which I summarize below.  But first, a reminder of the good and the bad of our two possible approaches:

Web application

Good: easier to maintain and update; works on any platform (Mac, Windows, Linux, …); generalizable and extensible; easier for the community to get involved.

Bad: requires learning a new user interface; not integrated into Excel; offline use may be limited.

Excel Add-in

Good: integrated into the workflow; familiar user interface and functionality; smaller shift in practice; available offline.

Bad: Windows only; install and updates required; not as generalizable or extensible; not as easy for the community to get involved in development and improvement.

It seems that there are strong feelings on both sides of this issue.  The majority are excited about the web application, but there are some serious concerns about going whole hog into the web application realm.  Most of this apprehension stems from two major issues: potential problems when offline, and the lack of a visible DCXL presence in the Excel program.

Offline use: Metadata is best collected at the time the data are collected, which means the scientist might not have an internet connection. We should make sure that any features associated with generating metadata are available offline.
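To make the offline requirement concrete, here is a minimal sketch of what offline-first metadata capture could look like: a record is written to local storage at collection time and flagged for a later sync, so no internet connection is needed in the field. Everything here (function name, record fields, the `synced` flag) is hypothetical, not part of any actual DCXL design.

```python
import json
import os
import tempfile

def capture_metadata_offline(dataset_name, fields, store_dir):
    """Save a metadata record locally so it survives without a connection.

    The record is marked unsynced; a later step could upload it to a
    repository once the scientist is back online. All field names here
    are invented for illustration.
    """
    record = {"dataset": dataset_name, "metadata": fields, "synced": False}
    path = os.path.join(store_dir, dataset_name + ".json")
    with open(path, "w") as f:
        json.dump(record, f)
    return path

# Example: a record collected in the field, no internet required.
store = tempfile.mkdtemp()
path = capture_metadata_offline(
    "plot7_temps",
    {"creator": "A. Scientist", "units": "degrees C"},
    store,
)
with open(path) as f:
    saved = json.load(f)
print(saved["synced"])  # False until a later sync step uploads it
```

The same idea would apply whether the tool is an add-in or a web application with local storage; the point is only that metadata generation must not depend on a live connection.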

DCXL presence within Excel: what if we devised a way to connect the Excel user directly to the web application from within Excel? A “Lite” version of the add-in?

Add-in or Web application? Which would Clint (The Good) and Lee (The Bad) go for?

If we assume that we can tackle the two problems above, then the web application might be a great direction to take.  The DCXL project should focus on assisting scientists with metadata generation first, and connection to repositories second.  Both of these tasks may be easier with a web application.  Metadata generation could be aided by connecting to existing metadata schemas and standards, which would be enabled by a generalizable API making connection easier.  More interesting is the possibility of connecting with repositories and institutions; what if there were a repository-specific implementation of the DCXL web application for each interested repository? Or a DCXL web application specifically geared towards the Geology department at UC Riverside?  The possibilities for connecting with existing services become more interesting if web connections are made easy.
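One way to picture the repository-specific idea above is a small sketch in which each repository (or department) plugs its own required metadata fields into a shared checking routine. The repository names and field lists below are entirely invented; they only illustrate how a generalizable API could host many schema profiles.

```python
# Hypothetical schema profiles: each repository declares which metadata
# fields it requires. These names are illustrative, not real schemas.
REPOSITORY_SCHEMAS = {
    "generic": ["title", "creator"],
    "ucr_geology": ["title", "creator", "location", "rock_type"],
}

def missing_fields(record, repository):
    """Return the metadata fields the chosen repository still requires."""
    required = REPOSITORY_SCHEMAS[repository]
    return [field for field in required if field not in record]

record = {"title": "Soil samples 2012", "creator": "A. Scientist"}
print(missing_fields(record, "generic"))      # []
print(missing_fields(record, "ucr_geology"))  # ['location', 'rock_type']
```

The appeal of the web-application route is that adding a new repository profile would mean registering one more schema entry, rather than shipping a new version of an installed add-in.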

Needless to say, we still want feedback from the community.  Decisions will be made soon, so drop me an email or comment on the blog to make your voice heard.


2 thoughts on “The Good & Bad: Web Application versus Add-in”

  1. John Deck says:

    I was struggling a bit with the “Excel Add-in” concept initially. This is because I don’t think a good thing like better data management in academia needs to be branded with a Microsoft name. The “Bad” under the Excel Add-in of being Windows only truly takes the cake for being “Bad”, though.

    Another option is a downloadable plug-in to a browser that can validate Excel, OpenOffice, tab-delimited, or CSV files offline or online. This is technically possible and would address both the offline-use issue and the Windows-only issue.

  2. lynn yarmey says:

    After reading the past few posts, I have more questions than answers: Why would metadata generation be easier through a web app? Why is the add-in Windows only? Would the add-in and the app both offer the same functionality (especially since the add-in seems better suited to metadata generation and the app to the repository connections piece from an outside perspective)? Given the different direction possibilities, what are the chances of ‘finishing’ each project given the scope? How reliable would each of the options be (esp. the Excel->web app connection in the ‘Lite’ version) given that this is a Microsoft product? Is it realistic to support connections with repositories given the scattered state of deposit requirements? You are the one who has had the technical conversations, who knows the project scope and players, who understands how this project maps to your broader goals. I am virtually attaching grains of salt to my comments.

    The reasons I see stated for the web-based direction are technically superior and likely more fun for more programmers. However, 1) I don’t necessarily think this is a completely technical problem (that Excel is still so dominant should be proof enough), and 2) of the technical requirements, I agree with John Cob’s prioritization in his comment on the last post (Excel-based and offline). Other people are working on interoperability, on data citations, on better repository ingest workflows, forms and tools. The (huge, gaping) hole I see in current efforts is in actually and realistically helping scientists generate rich, accurate metadata and machine-readable data formats.

    From a data center perspective, a cleaner, automated ingest is not nearly my biggest problem! An Excel-based tool that addresses the issues of generating rich metadata and machine-readable data from Excel with the *absolute minimum* shift in practices will have me dancing in the streets. 🙂
