Join the 2020 ESIP Winter Meeting Highlights Webinar on Feb. 5th at 3 pm ET for a fast-paced overview of what took place at the meeting.


Tuesday, January 7
 

11:00am EST

FAIR Metadata Recommendations
We will discuss the FAIR metadata recommendations that were introduced at the ESIP Summer Meeting.

How to Prepare for this Session: Review the issues in the git repository.

Links:
Glossary
Git repository: Issues

View Recording: https://youtu.be/5hwZOLQ1p9M

Takeaways
  • NCEAS is continuing to work on pinning down the fundamental characteristics of FAIR data. They have a suite of checks (e.g., is a title present?); 54 are currently implemented, and they are working toward a community-defined 1.0 check suite. This is a good tool for data curators but has the potential to be misunderstood or misused, so a public FAIR metric is needed: one that is high-level, simple, and includes only items that everyone agrees upon.
  • Future plans include community-specific custom FAIR check suites to handle the variability in how metadata is hosted, with continual evaluation of whether checks are helping or hurting data curators. Work is needed on the user interface: how do we ensure that metadata evaluation is a positive experience regardless of the score?
  • Reusability is typically low across data repositories. Accessibility needs greater focus, as it is hindered by broken or missing links. “When you decide what fields are mandatory (vs. optional), you decide what metadata you get.”
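The single-field checks described above are simple predicates over a metadata record. As a hypothetical sketch (not the actual NCEAS/MetaDIG code; the field names and result format are assumptions), an "is a title present" check might look like:

```python
# Hypothetical sketch of one FAIR-style metadata check ("is a title present?").
# Field names and result format are illustrative, not the NCEAS implementation.

def check_title_present(metadata: dict) -> dict:
    """Run a single check and return a status plus a human-readable message."""
    title = (metadata.get("title") or "").strip()
    if not title:
        return {"check": "title_present", "status": "FAILURE",
                "message": "No title found; a descriptive title aids discovery."}
    return {"check": "title_present", "status": "SUCCESS",
            "message": "Title is present (%d characters)." % len(title)}

record = {"title": "Arctic sea-ice extent, 1979-2019"}
print(check_title_present(record)["status"])  # prints SUCCESS
```

A 1.0 check suite would run a list of such functions over each record, with the public-facing metric restricted to the subset everyone agrees on.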


Speakers
Ted Habermann

Chief Game Changer, Metadata Game Changers
I am interested in all facets of metadata needed to discover, access, use, and understand data of any kind. Also evaluation and improvement of metadata collections, translation proofing. Ask me about the Metadata Game.

Matt Jones

Director, DataONE Program, DataONE, UC Santa Barbara
DataONE | Arctic Data Center | Open Science | Provenance and Semantics | Scientific Synthesis


Tuesday January 7, 2020 11:00am - 12:30pm EST
Forest Glen
  Forest Glen, Breakout

11:00am EST

Creating a Data at Risk Commons at DataAtRisk.org
Several professional organizations have become increasingly concerned about the loss of reusable data from primary sources such as individual researchers, projects, and agencies. DataAtRisk.org aims to connect people who have data in need with data expertise, and is a response to the clear need for a community-building application. This “Data at Risk” commons will allow individuals to submit and request help with threatened datasets, and will connect these datasets to experts who can provide resources and skills to help rescue data through a secure, professional mechanism that facilitates self-identification and discovery.

This session will provide an overview of the current status of the DataAtRisk.org project, and aims to expand the network of individuals involved in the development and implementation of DataAtRisk.org.

How to Prepare for this Session: Please check out https://dataatrisk.org/ for some background on the activities.

Presentations: http://bit.ly/303gig7, https://doi.org/10.6084/m9.figshare.11536317.v1
Link to use case / user scenario: https://tinyurl.com/yh4rnk7b

View Recording: https://youtu.be/96NMQwx_EtI

Takeaways
  • Perfection is the enemy of getting stuff done
  • Something is better than nothing
  • Triage will be necessary at several places in the process



Speakers

Denise Hills

Director, Energy Investigations, Geological Survey of Alabama
Long tail data, data preservation, connecting physical samples to digital information, geoscience policy, science communication


Tuesday January 7, 2020 11:00am - 12:30pm EST
Linden Oak
  Linden Oak, Working Session

11:00am EST

Interoperability of geospatial data with STAC
The SpatioTemporal Asset Catalog (STAC) is an emerging specification of a common metadata model for geospatial data, and a way to make data catalogs indexable and searchable. We have already seen STAC adopted for both public and commercial data: catalogs exist for several AWS Public Datasets, Landsat Collection 2 data will be published along with STAC metadata, and communities like Pangeo are using STAC to organize data repositories in a scalable way. Commercial companies like Planet and Digital Globe are starting to publish STAC metadata for some of their catalogs. Session talks may cover overviews of STAC, software projects utilizing STAC, and use cases of STAC in organizations. How to Prepare for this Session: See https://stacspec.org/.
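As background, a STAC catalog is built from Items: GeoJSON Features carrying a small set of common spatiotemporal fields plus links and assets. The sketch below is illustrative only (the id, version string, and URLs are fabricated, not from any real catalog):

```python
# Illustrative STAC Item sketch: a GeoJSON Feature with the common core fields.
# All identifiers and URLs here are fabricated for demonstration.
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",  # version string is illustrative
    "id": "example-scene-20200107",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-77.2, 39.0], [-77.0, 39.0], [-77.0, 39.2],
                         [-77.2, 39.2], [-77.2, 39.0]]],
    },
    "bbox": [-77.2, 39.0, -77.0, 39.2],
    "properties": {"datetime": "2020-01-07T16:00:00Z"},
    "links": [{"rel": "self",
               "href": "https://example.com/items/example-scene-20200107"}],
    "assets": {"data": {"href": "https://example.com/data/scene.tif",
                        "type": "image/tiff; application=geotiff"}},
}

# Catalogs become indexable because every item exposes the same core fields:
required = {"type", "stac_version", "id", "geometry", "bbox",
            "properties", "links", "assets"}
assert required <= stac_item.keys()
```

Because every provider publishes the same structure, a single crawler or search API can index Landsat, Planet, and Pangeo holdings alike.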

View Recording: https://youtu.be/BdZbJLQSNFE

Takeaways


Speakers
Dan Pilone

Chief Technologist, Element 84
Dan Pilone is CEO/CTO of Element 84 and oversees the architecture, design, and development of Element 84's projects including supporting NASA, the USGS, Stanford University School of Medicine, and commercial clients. He has supported NASA's Earth Observing System for nearly 13 years…

Aimee Barciauskas

Data engineer, Development Seed

Matthew Hanson

Element 84
STAC


Tuesday January 7, 2020 11:00am - 12:30pm EST
White Flint
  White Flint, Breakout

2:00pm EST

Making a Good First Impression: Metadata Quality Metrics for Earth Observation Data and Information
Metadata is often the first information that a user interacts with when looking for data. Understanding that there is typically only one chance to make a good impression, data and information repositories have placed an emphasis on metadata quality as a way of increasing the likelihood that a user will have a favorable first impression. This session will explore quality metrics, badging or scoring, and metadata quality assessment approaches within the Earth observation community. Discussion questions include:
● Does your organization implement metadata quality metrics and/or scores?
○ What are the key metrics that the scores are based on?
○ What priorities are driving your metadata quality metrics? For example, different repositories have different priorities, which can include an emphasis on discoverability, accessibility, usability, provenance, etc.
● Does your organization make metadata quality scores publicly viewable? What are the pros and cons of making the scores publicly accessible?
How to Prepare for this Session:

Presentations:
https://doi.org/10.6084/m9.figshare.11553606.v1
https://doi.org/10.6084/m9.figshare.11551182.v1

View Recording: https://youtu.be/lbza3gEHmtQ

Takeaways
  • Visualizations of the metadata quality metrics need to be easily understood or well documented to be effective
  • There are diverse ideas and current metrics that are being rolled out soon (U.S. Global Change Research Program & NCA)
  • Ensuring that metrics interact with existing standards such as FAIR is also important
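As a concrete, hypothetical illustration of the kind of quality score discussed in this session, one could compute the fraction of recommended fields present in a record. The field list and 0-100 scale below are assumptions, not any repository's actual rubric:

```python
# Hypothetical completeness-style metadata quality score (0-100).
# The recommended-field list is illustrative, not a real repository's rubric.
RECOMMENDED = ["title", "abstract", "temporal_extent", "spatial_extent",
               "distribution_url", "license"]

def completeness_score(record: dict) -> float:
    """Percentage of recommended fields that are present and non-empty."""
    present = sum(1 for field in RECOMMENDED if record.get(field))
    return round(100.0 * present / len(RECOMMENDED), 1)

print(completeness_score({"title": "t", "abstract": "a", "license": "CC0"}))  # prints 50.0
```

Whether such a score should be publicly viewable is exactly the trade-off raised in the discussion questions above.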

Speakers

Amrutha Elamparuthy

GCIS Data Manager, U.S. Global Change Research Program


Tuesday January 7, 2020 2:00pm - 3:30pm EST
Forest Glen
  Forest Glen, Breakout

2:00pm EST

ESIP Geoscience Community Ontology Engineering Workshop (GCOEW)
"Brains! Brains! Give us your brains!"
- Friendly neighbourhood machine minds
The collective knowledge in the ESIP community is immense and invaluable. During this session, we'd like to make sure that this knowledge drives the semantic technology (ontologies) being developed to move data with machine-readable knowledge in Earth and planetary science.
What we'll do:

In the first half hour of this session, we'll a) sketch out how and why we build ontologies and b) show you how to request that your knowledge gets added to ontologies (with nanocrediting).
We'll then have a 30-minute crowdsourcing jam session, during which participants can share their geoscience knowledge on the SWEET issue tracker. With a simple post, you can shape how the semantic layer will behave, making sure it does your field justice! Request content and share knowledge here: https://github.com/ESIPFed/sweet/issues
In the last 30 minutes, we'll take one request and demonstrate how we go about "ontologising" it in ENVO and how we link that to SWEET to create interoperable ontologies across the Earth and life sciences.

Come join us and help us shape the future of Geo-semantics!

Stuff you'll need:
A GitHub account available at https://github.com/
An ORCID (for nanocrediting your contributions) available at https://orcid.org

How to Prepare for this Session:

Presentations:

View Recording:
https://youtu.be/tr0coi5ZQvM

Takeaways
  • Working toward a future (5-10 year goal) of making an open Earth & Space Science Foundry (from SWEET) similar to the OBO (Open Biological and Biomedical Ontology) Foundry. “Humans write queries”. Class definitions need to be machine-readable for interoperability, but must remain human-readable for authoring queries, ontology reuse, etc.
  • Please feel free to add phenomena of interest to the SWEET https://github.com/ESIPFed/sweet/issues/ or ENVO https://github.com/EnvironmentOntology/envo/issues/ issue trackers. 
  • At AGU they added a class-level annotation convention for changes to ontologies. Textual definitions for SWEET terms can now be pulled from DBpedia. See https://github.com/ESIPFed/sweet/wiki/SWEET-Class-Annotation-Convention


Speakers
Lewis J. McGibbney

Chair, ESIP Semantic Technologies Committee, NASA, JPL
My name is Lewis John McGibbney, I am currently a Data Scientist at the NASA Jet Propulsion Laboratory in Pasadena, California where I work in Computer Science and Data Intensive Applications. I enjoy floating up and down the tide of technologies @ The Apache Software Foundation having…


Tuesday January 7, 2020 2:00pm - 3:30pm EST
Glen Echo
  Glen Echo, Working Session

4:00pm EST

Bringing Science Data Uncertainty Down to Earth - Sub-orbital, In Situ, and Beyond
In the Fall of 2019, the Information Quality Cluster (IQC) published a white paper entitled “Understanding the Various Perspectives of Earth Science Observational Data Uncertainty”. The intention of this paper is to provide a diversely sampled exposition of both prolific and unique policies and practices toward quantifying, characterizing, communicating, and making use of uncertainty information throughout the diverse, cross-disciplinary Earth science data landscape. To these ends, the IQC addressed uncertainty information from four perspectives: mathematical, programmatic, user, and observational. These perspectives affect policies and practices in a diverse international context, which in turn influence how uncertainty is quantified, characterized, communicated, and utilized.

The IQC is now in a scoping exercise to produce a follow-on paper intended to provide a set of recommendations and best practices regarding uncertainty information. It is our hope that we can consider and examine additional areas of opportunity with regard to the cross-domain and cross-disciplinary aspects of Earth science data. For instance, the existing white paper covers uncertainty information from the perspective of satellite-based remote sensing well, but does not adequately address the in situ or airborne (i.e., sub-orbital) perspective. This session intends to explore such opportunities to expand the scope of the IQC’s awareness of what is being done with regard to uncertainty information, while also providing participants and observers with an opportunity to weigh in on how best to move forward with the follow-on paper.

How to Prepare for this Session:

Agenda:
  1. "IQC Uncertainty White Paper Status Summary and Next Steps" - Presented by: David Moroni (15 minutes)
  2. "Uncertainty quantification for in situ ocean data: The S-MODE sub-orbital campaign" - Presented by: Fred Bingham (15 minutes)
  3. "Uncertainty Quantification for Spatio-Temporal Mapping of Argo Float Data" - Presented by Mikael Kuusela (20 minutes)
  4. Panel Discussion (35 minutes)
  5. Closing Comments (5 minutes)
Notes Page: https://docs.google.com/document/d/1vfYBK_DLTAt535kMZusTPVCBAjDqptvT0AA5D6oWrEc/edit?usp=sharing

Presentations:
https://doi.org/10.6084/m9.figshare.11553681.v1

View Recording: https://youtu.be/vC2O8FRgvck

Takeaways

Speakers
David Moroni

Data Stewardship and User Services Team Lead, Jet Propulsion Laboratory, Physical Oceanography Distributed Active Archive Center
I am a Senior Science Data Systems Engineer at the Jet Propulsion Laboratory and Data Stewardship and User Services Team Lead for the PO.DAAC Project, which provides users with data stewardship services including discovery, access, sub-setting, visualization, extraction, documentation…

Ge Peng

Research Scholar, CISESS/NCEI
Dataset-centric scientific data stewardship, data quality management

Fred Bingham

University of North Carolina at Wilmington

Mikael Kuusela

Carnegie Mellon University


Tuesday January 7, 2020 4:00pm - 5:30pm EST
Forest Glen
 
Wednesday, January 8
 

2:00pm EST

FAIR Laboratory Instrumentation, Analytical Procedures, and Data Quality
Acquisition and analysis of data in the laboratory are pervasive in the Earth, environmental, and planetary sciences. Analytical and experimental laboratory data, often acquired with sophisticated and expensive instrumentation, are fundamental for understanding past, present, and future processes in natural systems, from the interior of the Earth to its surface environments on land, in the oceans, and in the air, to the entire solar system. Despite the importance of provenance information for analytical data including, for example, sample preparation or experimental set up, instrument type and configuration, calibration, data reduction, and analytical uncertainties, there are no consistent community-endorsed best practices and protocols for describing, identifying, and citing laboratory instrumentation and analytical procedures, and documenting data quality. This session is intended as a kick-off working session to engage researchers, data managers, and system engineers, to contribute ideas how to move forward with and accelerate the development of global standard protocols and the promulgation of best practices for analytical laboratory data. How to Prepare for this Session:

Presentations:

View Recording:
https://youtu.be/LOfb_4r7DBA

Takeaways
  • Analytical and experimental data are collected widely in both field and laboratory settings across the Earth, environmental, and planetary sciences, spanning a variety of disciplines. FAIR use of such data is dependent on data provenance.
  • Community exchange of such data is needed, considering that reuse is often broader than the original use within the domain; this raises interoperability concerns, and networks of these data need to be plugged into evolving CI systems. In seismology, a common data standard implemented by early visionaries was a massive boon to the field.
  • Documenting how analytical data were generated is time-consuming for data curators and providers. Standards and protocols for data exchange are urgently required for emerging global data networks. OneGeochemistry is an example use case of an international research group establishing a global network for discoverable geochemical data.


Speakers

Lesley Wyborn

Adjunct Fellow, Australian National University
Kerstin Lehnert

President, IGSN e.V.
Kerstin Lehnert is Senior Research Scientist at the Lamont-Doherty Earth Observatory of Columbia University and Director of EarthChem, the System for Earth Sample Registration, and the Astromaterials Data System. Kerstin holds a Ph.D in Petrology from the University of Freiburg in…


Wednesday January 8, 2020 2:00pm - 3:30pm EST
Forest Glen
  Forest Glen, Working Session

2:00pm EST

Citizen Science Data and Information Quality
The ESIP Information Quality Cluster (IQC) has formally defined information quality as a combination of the following four aspects of quality, spanning the full life cycle of data products: scientific quality, product quality, stewardship quality, and service quality. The focus of the IQC has been the quality of Earth science data captured by scientists/experts. For example, the whitepaper “Understanding the Various Perspectives of Earth Science Observational Data Uncertainty”, published by IQC in the fall of 2019, mainly addresses uncertainty information from the perspective of satellite-based remote sensing. With the advance of mobile computing technologies, including smart phones, Citizen Science (CS) data have become increasingly important sources for Earth science research. CS data have their own unique challenges regarding data quality, compared with data captured through traditional scientific approaches. The purpose of this session is to broaden the scope of IQC efforts, present the community with the state of the art of research on CS data quality, and foster a collaborative interchange of technical information intended to help advance the assessment, improvement, capturing, conveying, and use of quality information associated with CS data. This session will summarize the scope of what we mean by CS data (including examples of platforms/sensors commonly used in collecting CS data) and include presentations from both past and current CS projects focusing on topics such as challenges with CS data quality; strategies to assess, ensure, and improve CS data quality; approaches to capturing CS data quality information and conveying it to users; and use of CS data quality information for scientific discovery.

Agenda (Click titles to view presentations)
  1. Introduction - Yaxing Wei - 5 mins
  2. Citizen Science Data Quality: The GLOBE Program – Helen M. Amos (NASA GSFC) – 18 (15+3) mins.
  3. Can we trust the power of the crowd? A look at citizen science data quality from NOAA case studies - Laura Oremland (NOAA) – 18 (15+3) mins.
  4. Turning Citizen Science into Community Science - Stephen C. Diggs (Scripps Institution of Oceanography / UCSD) and Andrea Thomer (University of Michigan)  – 18 (15+3) mins.
  5. Earth Challenge 2020: Understanding and Designing for Data Quality at Scale - Anne Bowser (Wilson Center) – 18 (15+3) mins.
  6. Discussion and Key Takeaways – All – 13 mins.

View Recording: https://youtu.be/xaTLP4wqwe8

Takeaways

Notes Page:
https://docs.google.com/document/d/1lRp19SF9U727ureKjY38PHOF3EGUgE-BixYDs2KlmII/edit?usp=sharing

Presentation Abstracts

  • Citizen Science Data Quality: The GLOBE Program - Helen M. Amos (NASA GSFC)
The Global Learning and Observations to Benefit the Environment (GLOBE) Program is an international program that provides a way for students and the public to contribute Earth system observations. Currently 122 countries, more than 40,000 schools, and 200,000 citizen scientists are participating in GLOBE. Since 1995, participants have contributed 195 million observations. Modes of data collection and data entry have evolved with technology over the lifetime of the program, including the launch of the GLOBE Observer mobile app in 2016 to broaden access and public participation in data collection. GLOBE must meet the data needs of a diverse range of stakeholders, from elementary school classrooms to scientists across the globe, including NASA scientists. Operational quality assurance measures include participant training, adherence to standardized data collection protocols, range and logic checks, and an approval process for photos submitted with an observation. In this presentation, we will discuss the current state of operational data QA/QC, as well as additional QA/QC processes recently explored and future directions. 
  • Can we trust the power of the crowd? A look at citizen science data quality from NOAA case studies - Laura Oremland (NOAA)
NOAA has a rich history in citizen science dating back hundreds of years.  Today NOAA’s citizen science covers a wide range of topics such as weather, oceans, and fisheries with volunteers contributing over 500,000 hours annually to these projects. The data are used to enhance NOAA’s science and monitoring programs.   But how do we know we can trust these volunteer-based efforts to provide data that reflect the high standards of NOAA’s scientific enterprise? This talk will provide an overview of NOAA’s citizen science, describe the data quality assurance and quality control processes applied to different programs, and summarize common themes and recommendations for collecting high quality citizen science data. 
  • Earth Challenge 2020: Understanding and Designing for Data Quality at Scale - Anne Bowser (Wilson Center)
April 22nd, 2020 marks the 50th anniversary of Earth Day.  In recognition of this milestone, Earth Day Network, the Woodrow Wilson International Center for Scholars, and the U.S. Department of State are launching Earth Challenge 2020 as the world’s largest coordinated citizen science campaign.  For 2020, the project focuses on six priority areas: air quality, water quality, insect populations, plastics pollution, food security, and climate change.  For each of these six areas, one work stream will focus on collaborating with existing citizen science projects to increase the amount of open and findable, accessible, interoperable, and reusable (FAIR) data.  A second work stream will focus on designing tools to support both existing and new citizen science activities, including a mobile application for data collection; an open, API-enabled data integration platform; data visualization tools; and a metadata repository and data journal.
A primary value of Earth Challenge 2020 is recognizing, and elevating, ongoing citizen science activities.  Our approach seeks first to document a range of data quality practices that citizen science projects are already using to help the global research and public policy community understand these practices and assess fitness-for-use.  This information will be captured primarily through the metadata repository and data journal.  In addition, we are leveraging a range of data quality solutions for the Earth Challenge 2020 mobile app, including designing automated data quality checks and leveraging a crowdsourcing platform for expert-based data validation that will help train machine learning (ML) support.  Many of the processes designed for Earth Challenge 2020 app data can also be applied to other citizen science data sets, so maintaining information on processing level, readiness level, and provenance is a critical concern.  The goal of this presentation is to offer an overview of key Earth Challenge 2020 data documentation and data quality practices before inviting the ESIP community to offer concrete feedback and support for future work.

Speakers
David Moroni

Data Stewardship and User Services Team Lead, Jet Propulsion Laboratory, Physical Oceanography Distributed Active Archive Center
I am a Senior Science Data Systems Engineer at the Jet Propulsion Laboratory and Data Stewardship and User Services Team Lead for the PO.DAAC Project, which provides users with data stewardship services including discovery, access, sub-setting, visualization, extraction, documentation…

Ge Peng

Research Scholar, CISESS/NCEI
Dataset-centric scientific data stewardship, data quality management

Yaxing Wei

Scientist, Oak Ridge National Laboratory


Wednesday January 8, 2020 2:00pm - 3:30pm EST
Linden Oak
  Linden Oak, Breakout

2:00pm EST

Advancing Data Integration approaches of the structured data web
Political, economic, social or scientific decision making is often based on integrated data from multiple sources across potentially many disciplines. To be useful, data need to be easy to discover and integrate.
This session will feature presentations highlighting recent breakthroughs and lessons learned from experimentation and implementation of open knowledge graphs, linked data concepts, and Discrete Global Grid Systems. Practicality and adoptability will be the emphasis, focusing on incremental opportunities that enable transformational capabilities using existing technologies. Best practices from the W3C Spatial Data on the Web Working Group, the OGC Environmental Linked Features Interoperability Experiment, and ESIP Science on Schema.org, along with implementation examples from Geoscience Australia, Ocean Leadership Consortium, USGS, and other organisations, will be featured across the entire session.
This session will highlight how existing technologies and best practices can be combined to address important and common use cases that have been difficult if not impossible until recent developments. A follow up session will be used to seed future collaborative development through co-development, github issue creation, and open documentation generation.

How to Prepare for this Session: Review: https://opengeospatial.github.io/ELFIE/, https://github.com/ESIPFed/science-on-schema.org, https://www.w3.org/TR/sdw-bp/, and http://locationindex.org/.

Notes, links, and attendee contact info here.

View Recording: https://youtu.be/-raMt2Y1CdM

Session Agenda:
1.  2.00- 2.10,  Sylvain Grellet, Abdelfettah Feliachi, BRGM, France
'Linked data' the glue within interoperable information systems
“Our Environmental Information Systems have been exposing environmental features, their monitoring systems, and the observations they generate in an interoperable way (technical and semantic) for years. In Europe, there is even a legal obligation for such practices via the INSPIRE directive. However, the practice of inducing data providers to set up services in a "Discovery > View > Download data" pattern hides data behind the services. This hinders data discovery and reuse. Linked Data on the Web best practices turn this stack upside down, and data is now back in the first line. This completely revamps the design and capacities of our Information Systems. We'll highlight the new data frontiers opened by such practices, taking examples from the French National Groundwater Information Network”
View Slides: https://doi.org/10.6084/m9.figshare.11550570.v1

2.  2.10 - 2.20,  Adam Leadbetter, Rob Thomas, Marine Institute, Ireland
Using RDF Data Cubes for data visualization: an Irish pilot study for publishing environmental data to the semantic web
The Irish Wave and Weather Buoy Networks return metocean data at 5-60 minute intervals from 9 locations in the seas around Ireland. Outside of the Earth Sciences an example use case for these data is in supporting Blue Economy development and growth (e.g. renewable energy device development). The Marine Institute, as the operator of the buoy platforms, in partnership with the EU H2020 funded Open Government Intelligence project has published daily summary data from these buoys using the RDF DataCube model[1]. These daily statistics are available as Linked Data via a SPARQL endpoint making these data semantically interoperable and machine readable. This API underpins a pilot dashboard for data exploration and visualization. The dashboard presents the user with the ability to explore the data and derive plots for the historic summary data, while interactively subsetting from the full resolution data behind the statistics. Publishing environmental data with these technologies makes accessing environmental data available to developers outside those with Earth Science involvement and effectively lowers the entry bar for usage to those familiar with Linked Data technologies.
View Slides: https://doi.org/10.6084/m9.figshare.11550570.v1

3. 2.20 - 2.30,  Boyan Brodaric, Eric Boisvert, Geological Survey of Canada, Canada; David Blodgett, USGS, USA
Toward a Linked Water Data Infrastructure for North America
We will describe progress on a pilot project using Linked Data approaches to connect a wide variety of water-related information within Canada and the US, as well as across the shared border
View Slides: https://doi.org/10.6084/m9.figshare.11541984.v1

4.  2.30 - 2.40,  Dalia Varanka, E. Lynn Usery, USGS, USA
The Map as Knowledge Base; Integrating Linked Open Topographic Data from The National Map of the U.S. Geological Survey
This presentation describes the objectives, models, and approaches for a prototype system for cross-thematic topographic data integration based on semantic technology. The system framework offers new perspectives on conceptual, logical, and physical system integration in contrast to widely used geographic information systems (GIS).
View Slides: https://doi.org/10.6084/m9.figshare.11541615.v1

5.  2.40 – 2.50,  Alistair Ritchie, Landcare, New Zealand
ELFIE at Landcare Research, New Zealand
Landcare Research, a New Zealand Government research institute, creates, manages and publishes a large set of observational and modelling data describing New Zealand’s land, soil, terrestrial biodiversity and invasive species. We are planning to use the findings of the ELFIE initiatives to guide the preparation of a default view of the data to help discovery (by Google), use (by web developers) and integration (into the large environmental data commons managed by other agencies). This integration will not only link data about the environment together, but will also expose more advanced data services. Initial work is focused on soil observation data, and the related scientific vocabularies, but we anticipate near universal application across our data holdings.
View Slides: https://doi.org/10.6084/m9.figshare.11550369.v1

6.  2.50 - 3.00,  Irina Bastrakova, Geoscience Australia, Australia
Location Index Project (Loc-I) – integration of data on people, business & the environment
Location Index (Loc-I) is a framework that provides a consistent way to seamlessly integrate data on people, business, and the environment.
Location Index aims to extend the characteristics of the foundation spatial data of taking geospatial data (multiple geographies) which is essential to support public safety and wellbeing, or critical for a national or government decision making that contributes significantly to economic, social and environmental sustainability and linking it with observational data. Through providing the infrastructure to suppo

Speakers
Jonathan Yu

Research data scientist/architect, CSIRO
Jonathan is a data scientist/architect with the Environmental Informatics group in CSIRO. He has expertise in information and web architectures, data integration (particularly Linked Data), data analytics and visualisation. Dr Yu is currently the technical lead for the Loc-I project…

Dalia Varanka

Research Physical Scientist, U.S. Geological Survey
Principle Investigator and Project Lead, The Map as Knowledge Base
Alistair Ritchie

Landcare Research NZ

Adam Leadbetter

Marine Institute

Rob Thomas

Marine Institute

Boyan Brodaric

Natural Resources Canada

Eric Boisvert

Natural Resources Canada
Irina Bastrakova

Director, Spatial Data Architecture, Geoscience Australia
I have been actively involved with international and national geoinformatics communities for more than 19 years. I am the Chair of the Australian and New Zealand Metadata Working Group. My particular interest is in developing and practical application of geoscientific and geospatial…

David Blodgett

U.S. Geological Survey


Wednesday January 8, 2020 2:00pm - 3:30pm EST
White Flint

4:00pm EST

Structured data web and coverages integration working session
This working session will follow on the "Advancing Data Integration approaches of the structured data web” session and the Coverage Analytics sprint as an opportunity for those interested in building linked data information products that integrate spatial features, coverage data, and more. As such, inspiration will be drawn from projects like science on schema.org, the Environmental Linked Features Interoperability Experiment, the Australian Location Index, and those that session attendees take part in. Participants will self-organize into use-case or technology focused groups to discuss and synthesize the outcomes of the sprint and structured data web session. Session outcomes could take a number of forms: linked data and web page mock-ups, ideas and issues for OGC, W3C, or ESIP groups to consider, example data or use cases for relevant software development projects to consider, or work plans and proposals for future ESIP work. The session format is expected to be fluid, with an ideation and group formation exercise followed by structured discussion to explore a set of ideas and then narrow in on a focused, valuable outcome. Participants will be encouraged to work together prior to the meeting to design and plan the session structure. Outcomes of the session will be reported at an Information Technology and Interoperability webinar in early 2020. How to Prepare for this Session: Attend the coverage sprint and the "Advancing Data Integration approaches of the structured data web" session.

Shared document for session here.

Full Notes: https://doi.org/10.6084/m9.figshare.11559087.v1

Presentations:

View Recording: https://youtu.be/u2x3I0cr46A

Takeaways
  • Breakout session on the Information Technology and Interoperability Committee and webinar series. See notes: https://docs.google.com/document/d/1LpcTMwP0mAD4G4Gb8mStI5uSDV61_qWPUkQ9nI1x1cI/edit?usp=sharing
  • Foster cross-project consistency via breakouts, such as the science-on-schema.org issue of links to "in-band" linked (meta)data versus "out-of-band" linked data. Use content negotiation to distinguish the two; use blank nodes with link properties for RDF elements whose URIs point at out-of-band content; identify in-band links with schema.org @id and out-of-band links with schema.org url.
  • Incorporating spatial coverages in knowledge graphs; next steps: explore tessellations further as an intermediate index; carry some of these ideas forward at the EDR SWG; represent some of them to the OGC-API Coverages SWG; mention them to the UFOKN. On the role of 'spatial' knowledge graphs: will spatial data analysis and transformation tools grow to adopt/support RDF as an underlying data structure for spatial information, or will RDF continue to be a 'view' of existing (legacy) spatial data in GI systems?
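The in-band versus out-of-band distinction discussed above can be illustrated with a small JSON-LD sketch. This is a minimal, hypothetical example (the dataset identifier and URLs are invented for illustration), built as a Python dict for readability:

```python
import json

# A schema.org Dataset description (all identifiers/URLs are hypothetical).
dataset = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "Dataset",
    # In-band: the node carries an @id, a URI that dereferences to more
    # linked (meta)data, so consumers can follow it within the graph.
    "@id": "https://example.org/dataset/42",
    "name": "Example coverage dataset",
    "subjectOf": {
        # Out-of-band: a blank node (no @id) whose link property points
        # at content that is not itself linked data (e.g. a PDF manual).
        "@type": "CreativeWork",
        "url": "https://example.org/docs/dataset-42-manual.pdf",
    },
}

print(json.dumps(dataset, indent=2))
```

A consumer crawling the graph would dereference the `@id` for more RDF, while treating the blank node's `url` as an opaque, human-oriented resource.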


Speakers

Adam Shepherd

Technical Director, Co-PI, BCO-DMO
schema.org | Data Containerization | Linked Data | Semantic Web | Knowledge Representation | Ontologies

Irina Bastrakova

Director, Spatial Data Architecture, Geoscience Australia
I have been actively involved with international and national geoinformatics communities for more than 19 years. I am the Chair of the Australian and New Zealand Metadata Working Group. My particular interest is in developing and practical application of geoscientific and geospatial... Read More →

William Francis

Geoscience Australia

Jonathan Yu

Research data scientist/architect, CSIRO
Jonathan is a data scientist/architect with the Environmental Informatics group in CSIRO. He has expertise in information and web architectures, data integration (particularly Linked Data), data analytics and visualisation. Dr Yu is currently the technical lead for the Loc-I project... Read More →

Doug Fils

Consortium for Ocean Leadership

David Blodgett

U.S. Geological Survey


Wednesday January 8, 2020 4:00pm - 5:30pm EST
White Flint
 
Thursday, January 9
 

10:15am EST

Working Group for the Data Stewardship Committee
This session is a working meeting of the Data Stewardship Committee for the 2020-2021 year. We will discuss priorities for the next year, potential collaborative outputs, and review the work in progress from the last year.

Notes Document: https://docs.google.com/document/d/1B_0K5jGnFgH72U3P2-oGr5vEqHOGU8CWU-IkZ6pjXbM/edit?ts=5e174588

Presentations

View Recording: https://youtu.be/am-ZLfHgM4w

Takeaways
  • Wow, the members of the Committee really are active! Practically everyone has their own cluster or two!
  • Six activities proposed for the upcoming year have champions who will lead the effort to define the outputs of their selected activity.


Speakers

Alexis Garretson

Community Fellow, ESIP

Kelsey Breseman

Archiving Program Lead, Environmental Data & Governance Initiative
Governmental accountability around public data & the environment. Decentralized web. Intersection of tech & ethics & civics.


Thursday January 9, 2020 10:15am - 11:45am EST
Forest Glen
  Forest Glen, Business Meeting

10:15am EST

Identifying ESIP
Persistent identifiers (PIDs) make connections across the scholarly community possible. We are familiar with DOIs for data, but how about ORCIDs for people or RORs for organizations? How is the ESIP community using identifiers, and how can we benefit from that usage?

This is the first report from the Identifying ESIP Connections Funding Friday Project that started last summer. The focus so far has been on identifying organizations associated with ESIP using the Research Organization Registry. During this session we will introduce identifiers at four levels: U.S. Federal Agencies and Departments, ESIP Sponsors, ESIP Members, and ESIP Participants. Information on all of these levels is available on the ESIP Wiki.
  1. Maria Gould, the ROR project lead at the California Digital Library, will fill us in on ROR and answer questions about RORs. (Presentation)
  2. Ted Habermann, the PI of Identifying ESIP Connections, will discuss this work and lead a working discussion of RORs.

Click here to participate: http://wiki.esipfed.org/index.php/Category:Identifying_ESIP_Connections
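As a concrete illustration of working with RORs programmatically, here is a minimal sketch assuming the public ROR REST API at api.ror.org (the organization record below is a hypothetical example shaped like a ROR search response, not real registry data):

```python
import urllib.parse

ROR_API = "https://api.ror.org/organizations"

def ror_query_url(name: str) -> str:
    """Build a ROR search URL for an organization name."""
    return ROR_API + "?query=" + urllib.parse.quote(name)

def extract_ids(response: dict) -> list:
    """Pull (ROR id, name) pairs out of a ROR-style search response."""
    return [(item["id"], item["name"]) for item in response.get("items", [])]

# Offline example using a response shaped like the ROR API's:
sample = {
    "number_of_results": 1,
    "items": [{"id": "https://ror.org/example", "name": "Example Survey"}],
}
print(ror_query_url("Geoscience Australia"))
print(extract_ids(sample))
```

Resolving each organization name mentioned on the ESIP Wiki through a query like this is one way to attach a persistent identifier to it.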


Presentations
https://doi.org/10.6084/m9.figshare.11794182.v1

View Recording: https://youtu.be/iUYmTaDdJGQ

Takeaways
  • Generally positive attitude about using identifiers for organizations, but not all organizations in ESIP may end up with RORs...
  • The granularity of RORs is an ongoing challenge and spans many cases: multi-organization projects, and changes as a function of time.
  • How are research organizations defined? Do repositories have RORs? Wiki pages were a good way to share information.



Speakers

Ted Habermann

Chief Game Changer, Metadata Game Changers
I am interested in all facets of metadata needed to discover, access, use, and understand data of any kind. Also evaluation and improvement of metadata collections, translation proofing. Ask me about the Metadata Game.


Thursday January 9, 2020 10:15am - 11:45am EST
Linden Oak
  Linden Oak, Breakout

12:00pm EST

License Up! What license works for you and your downstream repositories?
Many repositories are seeing an increase in the use and diversity of licenses and other intellectual property management (IPM) tools applied to externally-created data submissions and software developed by staff. However, adding a license to data files may have unexpected or unintended consequences in the downstream use or redistribution of those data. Who “owns” the intellectual property rights to data collected by university researchers using Federal and State (i.e., public) funding that must be deposited at a Federal repository? What license is appropriate for those data and what — exactly — does that license allow and disallow? What kind of license or other IPM instrument is appropriate for software written by a team of Federal and Cooperative Institute software engineers? Is there a significant difference between Creative Commons, GNU, and other ‘open source licenses’?

We have invited a panel of legal advisors from Federal and other organizations to discuss the implications of these questions for data stewards and the software teams that work collaboratively with those stewards. We may also discuss the latest information about Federal data licenses as it applies to the OPEN Government Data Act of 2019. How to Prepare for this Session: Consider what, if any, licenses, copyright, or other intellectual property rights management you apply or think applies to your work. Also consider Federal requirements such as the OPEN Government Data Act of 2019, Section 508 of the Rehabilitation Act of 1973.

Speakers:
Dr. Robert J. Hanisch is the Director of the Office of Data and Informatics, Material Measurement Laboratory, at the National Institute of Standards and Technology in Gaithersburg, Maryland. He is responsible for improving data management and analysis practices and helping to assure compliance with national directives on open data access. Prior to coming to NIST in 2014, Dr. Hanisch was a Senior Scientist at the Space Telescope Science Institute, Baltimore, Maryland, and was the Director of the US Virtual Astronomical Observatory. For more than twenty-five years Dr. Hanisch led efforts in the astronomy community to improve the accessibility and interoperability of data archives and catalogs.
Henry Wixon is Chief Counsel for the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce. His office provides programmatic legal guidance to NIST, as well as intellectual property counsel and representation to the Department of Commerce and other Department bureaus. In this role, it interacts with principal developers and users of research, including private and public laboratories, universities, corporations and governments. Responsibilities of Mr. Wixon’s office include review of NIST Cooperative Research and Development Agreements (CRADAs), licenses, Non-Disclosure Agreements (NDAs) and Material Transfer Agreements (MTAs), and the preparation and prosecution of the agency’s patent applications. As Chief Counsel, Mr. Wixon is active in standing Interagency Working Groups on Technology Transfer, on Bayh-Dole, and on Research Misconduct, as well as in the Federal Laboratory Consortium. He is a Certified Licensing Professional and a Past Chair of the Maryland Chapter of the Licensing Executives Society, USA and Canada (LES), and is a member of the Board of Visitors of the College of Computer, Mathematical and Natural Sciences of the University of Maryland, College Park.

Presentations
See attached

View Recording: https://youtu.be/5Ng5FDW1LXk

Takeaways



Speakers

Donald Collins

Oceanographer, NESDIS/NCEI Archive Branch
Send2NCEI, NCEI archival processes, records management


Thursday January 9, 2020 12:00pm - 1:30pm EST
Forest Glen
  Forest Glen, Panel