Join the 2020 ESIP Winter Meeting Highlights Webinar on Feb. 5th at 3 pm ET for a fast-paced overview of what took place at the meeting.


Tuesday, January 7
 

11:00am EST

Analytic Centers for Air Quality
The Analytic Center Framework (ACF) is a concept to support scientific investigations with a harmonized collection of data from a wide range of sources and vantage points, together with tools and computational resources. Four recent NASA AIST competitive awards focus either on ACFs or on components that could feed into air quality (AQ) ACFs. Previous projects have developed tools and improved the accessibility and usability of data for air quality analysis, and have worked to address issues related to inconsistent metadata, uncertainty quantification, interoperability among tools and computing resources, and visualization to aid scientific investigations or applications. The format for this meeting will be a series of brief presentations by invited speakers followed by a discussion, generally following the panel model.

How to Prepare for this Session: A link to a set of pre-read materials will be provided.

View Recording: https://youtu.be/fy4eoOfSbpo

Takeaways
  • Is there enough interest to start an Air Quality cluster? Yes!
  • Technologists and scientists should both be involved in the cluster to ensure usability through stakeholder engagement


Speakers

Mike Little

ESTO, NASA
Computational Technology to support scientific investigations


Tuesday January 7, 2020 11:00am - 12:30pm EST
Glen Echo
  Glen Echo, Working Session

2:00pm EST

ESIP Geoscience Community Ontology Engineering Workshop (GCOEW)
"Brains! Brains! Give us your brains!"
- Friendly neighbourhood machine minds
The collective knowledge in the ESIP community is immense and invaluable. During this session, we'd like to make sure that this knowledge drives the semantic technology (ontologies) being developed to move data with machine-readable knowledge in Earth and planetary science.
What we'll do:

In the first half hour of this session, we'll a) sketch out how and why we build ontologies and b) show you how to request that your knowledge gets added to ontologies (with nanocrediting).
We'll then have a 30-minute crowdsourcing jam session, during which participants can share their geoscience knowledge on the SWEET issue tracker. With a simple post, you can shape how the semantic layer will behave, making sure it does your field justice! Request content and share knowledge here: https://github.com/ESIPFed/sweet/issues
In the last 30 minutes, we'll take one request and demonstrate how we go about "ontologising" it in ENVO and how we link that to SWEET to create interoperable ontologies across the Earth and life sciences.

Come join us and help us shape the future of Geo-semantics!

Stuff you'll need:
A GitHub account, available at https://github.com/
An ORCID (for nanocrediting your contributions), available at https://orcid.org

How to Prepare for this Session:

Presentations:

View Recording:
https://youtu.be/tr0coi5ZQvM

Takeaways
  • Working toward a future (5-10 year goal) of an open Earth & Space Science Foundry (growing from SWEET), similar to the OBO (Open Biological and Biomedical Ontology) Foundry. “Humans write queries”: class definitions need to be machine-readable for interoperability, but must remain human-readable for authoring queries, ontology reuse, etc.
  • Please feel free to add phenomena of interest to the SWEET https://github.com/ESIPFed/sweet/issues/ or ENVO https://github.com/EnvironmentOntology/envo/issues/ issue trackers. 
  • At AGU, a class-level annotation convention for changes to ontologies was introduced. Textual definitions for SWEET terms can now be pulled from DBpedia. See https://github.com/ESIPFed/sweet/wiki/SWEET-Class-Annotation-Convention


Speakers

Lewis J. McGibbney

Chair, ESIP Semantic Technologies Committee, NASA, JPL
My name is Lewis John McGibbney, I am currently a Data Scientist at the NASA Jet Propulsion Laboratory in Pasadena, California where I work in Computer Science and Data Intensive Applications. I enjoy floating up and down the tide of technologies @ The Apache Software Foundation having…


Tuesday January 7, 2020 2:00pm - 3:30pm EST
Glen Echo
  Glen Echo, Working Session

4:00pm EST

Bringing Science Data Uncertainty Down to Earth - Sub-orbital, In Situ, and Beyond
In the Fall of 2019, the Information Quality Cluster (IQC) published a white paper entitled “Understanding the Various Perspectives of Earth Science Observational Data Uncertainty”. The paper provides a broadly sampled exposition of policies and practices, both common and unique, for quantifying, characterizing, communicating, and making use of uncertainty information across the diverse, cross-disciplinary Earth science data landscape. To these ends, the IQC addressed uncertainty information from four perspectives: mathematical, programmatic, user, and observational. These perspectives shape policies and practices in a diverse international context, which in turn influence how uncertainty is quantified, characterized, communicated, and utilized. The IQC is now scoping a follow-on paper intended to provide a set of recommendations and best practices regarding uncertainty information. We hope to consider and examine additional areas of opportunity in the cross-domain and cross-disciplinary aspects of Earth science data. For instance, the existing white paper covers uncertainty information well from the perspective of satellite-based remote sensing, but does not adequately address the in situ or airborne (i.e., sub-orbital) perspective. This session will explore such opportunities to expand the scope of the IQC’s awareness of what is being done with regard to uncertainty information, while giving participants and observers an opportunity to weigh in on how best to move forward with the follow-on paper.

How to Prepare for this Session:

Agenda:
  1. "IQC Uncertainty White Paper Status Summary and Next Steps" - Presented by: David Moroni (15 minutes)
  2. "Uncertainty quantification for in situ ocean data: The S-MODE sub-orbital campaign" - Presented by: Fred Bingham (15 minutes)
  3. "Uncertainty Quantification for Spatio-Temporal Mapping of Argo Float Data" - Presented by Mikael Kuusela (20 minutes)
  4. Panel Discussion (35 minutes)
  5. Closing Comments (5 minutes)
Notes Page: https://docs.google.com/document/d/1vfYBK_DLTAt535kMZusTPVCBAjDqptvT0AA5D6oWrEc/edit?usp=sharing

Presentations:
https://doi.org/10.6084/m9.figshare.11553681.v1

View Recording: https://youtu.be/vC2O8FRgvck

Takeaways

Speakers

David Moroni

Data Stewardship and User Services Team Lead, Jet Propulsion Laboratory, Physical Oceanography Distributed Active Archive Center
I am a Senior Science Data Systems Engineer at the Jet Propulsion Laboratory and Data Stewardship and User Services Team Lead for the PO.DAAC Project, which provides users with data stewardship services including discovery, access, sub-setting, visualization, extraction, documentation…

Ge Peng

Research Scholar, CISESS/NCEI
Dataset-centric scientific data stewardship, data quality management

Fred Bingham

University of North Carolina at Wilmington

Mikael Kuusela

Carnegie Mellon University


Tuesday January 7, 2020 4:00pm - 5:30pm EST
Forest Glen
 
Wednesday, January 8
 

11:00am EST

Software Sustainability, Discovery and Accreditation
It is commonly understood that software is essential to research, in data collection, curation, analysis, and understanding, and it is also a critical element within any research infrastructure. This session will address two related software issues: 1) sustainability, and 2) discovery and accreditation.

Because scientific software is an instance of a software stack containing problem-specific software, discipline-specific tools, general tools and middleware, and infrastructural software, changes within the stack can cause the overall software to collapse and stop working; as time goes on, increasing work is needed to compensate for these problems, which we refer to as sustainability. Issues in which we are interested include incentives that encourage sustainability activities, business models for sustainability (including public-private partnership), software design that can reduce the sustainability burden, and metrics to measure sustainability (perhaps tied to the ongoing process of defining FAIR software).

The second issue, discovery and accreditation, asks how we enable users to discover and access trustworthy, fit-for-purpose software to undertake science processing on the compute infrastructures to which they have access, and how we ensure that publications cite the exact version of software that was used and that the responsible authors are properly credited.

This session will include a number of short talks, and at least two breakouts in parallel, one about the sustainability of software, and a second about discovery of sustainable and viable solutions.

Potential speakers who want to talk about an aspect of software sustainability, discovery, or accreditation should contact the session organizers.

Agenda/slides:
Presentations: See above

View Recording:
https://youtu.be/nsxjOC04JxQ

Key takeaways:

1. Funding agencies spend a large amount of money on software, but don't always know this because it's not something that they track.

Open-source software is growing very quickly:
  • 2001: 208K SourceForge users
  • 2017: 20M GitHub users
  • 2019: 37M GitHub users
Software, like data, is a “first class citizen” in the ecosystem of tools and resources for scientific research, and our community is accelerating its attention to this as it has for FAIR data.


2. Ideas for changing our culture to better support and reward contributions to sustainable software:
  • Citation (ESIP guidelines) and/or software heritage IDs for credit and usage metrics and to meet publisher requirements (e.g. AGU)
  • Prizes
  • Incentives in hiring and promotion
  • Promote FAIR principles and/or Technical Readiness Levels for software
  • Increase use of common software to make science more efficient
  • Publish best practice materials in other languages, e.g. Mandarin, as software comes from a global community


3. A checklist of topics to consider for your community sustained software:
  • Repository with “cookie cutter” templates and sketches for forking
  • Licensing
  • Contributors Guide
  • Code of Conduct and Governance
  • Use of “Self-Documentation” features and standards
  • Easy step for trying out software
  • Continuous Integration builds
  • Unit tests
  • Good set of “known first issues” for new users trying out the software
  • Gitter or Slack Channel for feedback and communication, beyond a simple repo issues queue
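As a toy illustration of the “cookie cutter” template item above, several checklist entries can be scaffolded with a few lines of Python. The file names and contents below are illustrative placeholders only, not an ESIP-endorsed template:

```python
# Sketch: generating a minimal sustainable-software repo skeleton
# covering a few checklist items (licensing, contributor guide,
# code of conduct, CI config, unit tests). Contents are placeholders.
from pathlib import Path
import tempfile

SKELETON = {
    "LICENSE": "Apache-2.0 (placeholder license text)\n",
    "CONTRIBUTING.md": "# Contributor Guide\nHow to file issues and PRs.\n",
    "CODE_OF_CONDUCT.md": "# Code of Conduct\n",
    ".github/workflows/ci.yml": "# Continuous Integration build config\n",
    "tests/test_smoke.py": "def test_import():\n    pass\n",
}

def scaffold(root: Path) -> list:
    """Write the skeleton files under root and return their paths."""
    written = []
    for rel, text in SKELETON.items():
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)
        written.append(path)
    return written

with tempfile.TemporaryDirectory() as tmp:
    files = scaffold(Path(tmp))
    print(len(files))  # → 5
```

A real template would add the remaining checklist items (self-documentation, a "known first issues" list, a chat channel link) and would normally be maintained as a cookiecutter project rather than inline code.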


Detailed notes:
The group then divided into 2 breakout sessions (Sustainability; Discovery and Accreditation), with notes as follows.

Notes from Sustainability breakout (by Daniel S. Katz):

What we think should be done:
  • Build a cookiecutter recipe for new projects, based on Ben’s slides?  What part of ESIP would be interested in this? And would do it, and support it?
  • Define governance as part of this? How do we store governance?
  • What is required, what is optional (maybe with different answers at different tiers)
  • Define types of projects (individual developer, community code, …)
  • Define for different languages – tooling needs to match needs
  • Is this specific to ESIP? Who could it be done with? The Carpentries?  SSI?

Other discussion:
  • What do we mean by sustainability – for how long?  Up to 50 years?  How do we run the system?
  • What’s the purpose of the software (use case) – transparency to see the software, actual reuse?
  • What about research objects that contain both software and data? How do we archive them? How do we cite them?
  • We have some overlap with research object citation cluster


Notes from Discovery and Accreditation breakout (by Shelley Stall):

Use Cases - Discovery
  1. a science question, looking for software to support it
  2. have some data output from a software process; need access to the software to better understand the data.

Example of work happening: Data and Software Preservation - NSF Funded
  • promote linked data to other research products
  • similar project in Australia - want to gain access to the chain of events that resulted in the data and/or software - the scientific drivers that resulted in this product
  • Provenance information is part of this concept.

A deeper look at discovery, once software is found, is to better understand how the software came into being. The undocumented elements of a process that affected the chain of events are important to understand for a particular piece of software.
How do we discover existing packages?
Dependency management helps to discover new elements that support software.
Concern was expressed that packaged solutions for creating an environment, like an AWS AMI, are not recognized as good enough, that an editor requested a d
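The dependency-management point above can be sketched in Python: package metadata already records what software an environment depends on, so it is one concrete place discovery can start. This uses only the standard library's importlib.metadata; the function name installed_software is illustrative.

```python
# Sketch: using package metadata to discover the software behind an
# environment. importlib.metadata reads the metadata of whatever
# distributions are installed, including their declared dependencies.
from importlib import metadata

def installed_software():
    """Return {name: (version, declared dependency strings)} for installed packages."""
    found = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name")
        if name:
            found[name] = (dist.version, dist.requires or [])
    return found

inventory = installed_software()
for name, (version, deps) in sorted(inventory.items())[:5]:
    print(f"{name} {version} -> {len(deps)} declared dependencies")
```

The same idea scales up in registries (PyPI, CRAN, Zenodo): following declared dependencies outward surfaces supporting software that a researcher might otherwise never find.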

Speakers

Daniel S. Katz

Assistant Dir. for Scientific Software & Applications, NCSA; Research Assoc. Prof., CS, ECE, iSchool, University of Illinois

Lesley Wyborn

Adjunct Fellow, Australian National University


Wednesday January 8, 2020 11:00am - 12:30pm EST
Forest Glen
  Forest Glen, Working Session

2:00pm EST

Participatory design and evaluation of a 3D-Printed Automatic Weather Station to explore hardware, software and data needs for community-driven decision making
The development of low-cost, 3D-printed weather stations aims to revolutionize the way communities collect long-term data about local weather phenomena, as well as develop climate-resilience strategies to adapt to the impacts of increasingly uncertain climate trends. This session will engage teachers and scientists in the evaluation and participatory design of the IoTwx 3D-printed weather station, which is designed to be constructed and extended by students in middle and high school. We aim to explore the full spectrum of the station, from construction (from pre-printed parts), to data collection and development of learning activities, to analysis of scientific phenomena within the data. The stations also represent a unique opportunity to develop community-based strategies to extend the capabilities of the platform, and in the session we encourage full discussion of data collection and sensing technologies of specific relevance to communities adopting the stations.

In this working session, we will work directly with teachers on evaluation and development using a participatory design approach to stimulate and encourage relationships between ESIP Education Committee members and teachers.

Preparing for this Session: TBD

Presentations:

View Recording: https://youtu.be/AfvWhZBkQd8

Takeaways
  • Very valuable for the schools and community. It is an opportunity to include multiple departments within the school system (engineering, computer science, maths, earth science, etc.)
  • Need to understand the constraints that school systems may present: security, wifi, processing power, cloud access, only required for part of the year



Speakers

Shelley Olds

Science Education Specialist, UNAVCO
Data visualization tools, Earth science education, human dimensions of natural hazards, disaster risk reduction (DRR), resilience building.

Becky Reid

Science Educator, Learners Without Walls
I discovered ESIP in the summer of 2009 when I was teaching science in Santa Barbara and attended the Summer meeting there. Ever since then, I have been volunteering with the ESIP Education Committee in various capacities, serving as Chair in 2013, 2019, and now, 2020! I currently…


Wednesday January 8, 2020 2:00pm - 3:30pm EST
Brookside A
  Brookside A, Working Session

2:00pm EST

AI for Augmenting Geospatial Information Discovery
Thanks to rapid developments in hardware and computer science, we have seen many exciting breakthroughs in self-driving cars, voice recognition, street view recognition, cancer detection, check deposit, and more. Sooner or later the fire of AI will spread to the Earth sciences. Scientists need high-level automation to discover timely, accurate geospatial information from large volumes of Earth observations, but few existing algorithms can fully solve the sophisticated problems involved. Nevertheless, the transition from manual to automatic is under way, bit by bit. Many early adopters have started to transplant AI theory and algorithms from computer science to GIScience, and a number of promising results have been achieved. In this session, invited speakers will talk about their experiences using AI in geospatial information (GI) discovery. We will discuss all aspects of "AI for GI", such as algorithms, technical frameworks, tools and libraries, and model evaluation in various use-case scenarios.

How to Prepare for this Session: https://esip.figshare.com/articles/Geoweaver_for_Better_Deep_Learning_A_Review_of_Cyberinfrastructure/9037091
https://esip.figshare.com/articles/Some_Basics_of_Deep_Learning_in_Agriculture/7631615

Presentations:
https://doi.org/10.6084/m9.figshare.11626299.v1

View Recording: https://youtu.be/W0q8WiMw9Hs

Takeaways
  • There has been significant uptake of machine learning/artificial intelligence for Earth science applications in the past decade;
  • Challenges for machine learning applications in the Earth science domain include:
    • the quality and availability of training data sets;
    • the need for a team with diverse skill backgrounds to implement an application;
    • the need for better understanding of the underlying mechanisms of ML/AI models
  • There are many promising applications and developments streamlining machine learning applications for different sectors of society (weather monitoring, emergency response, social good)



Speakers

Yuhan (Douglas) Rao

Postdoctoral Research Scholar, CISESS/NCICS/NCSU

Aimee Barciauskas

Data engineer, Development Seed

Annie Burgess

ESIP Lab Director, ESIP

Rahul Ramachandran

Project Manager, Sr. Research Scientist, NASA

Ziheng Sun

Research Assistant Professor, George Mason University
My research interests are mainly on geospatial cyberinfrastructure and agricultural remote sensing.


Wednesday January 8, 2020 2:00pm - 3:30pm EST
Salon A-C
  Salon A-C, Breakout

2:00pm EST

Advancing Data Integration approaches of the structured data web
Political, economic, social or scientific decision making is often based on integrated data from multiple sources across potentially many disciplines. To be useful, data need to be easy to discover and integrate.
This session will feature presentations highlighting recent breakthroughs and lessons learned from experimentation and implementation of open knowledge graphs, linked data concepts, and Discrete Global Grid Systems. Practicality and adoptability will be the emphasis, focusing on incremental opportunities that enable transformational capabilities using existing technologies. Best practices from the W3C Spatial Data on the Web Working Group, the OGC Environmental Linked Features Interoperability Experiment, and ESIP Science on Schema.org, along with implementation examples from Geoscience Australia, the Ocean Leadership Consortium, USGS, and other organisations, will be featured across the entire session.
This session will highlight how existing technologies and best practices can be combined to address important and common use cases that have been difficult if not impossible until recent developments. A follow up session will be used to seed future collaborative development through co-development, github issue creation, and open documentation generation.

How to Prepare for this Session: Review: https://opengeospatial.github.io/ELFIE/, https://github.com/ESIPFed/science-on-schema.org, https://www.w3.org/TR/sdw-bp/, and http://locationindex.org/.

Notes, links, and attendee contact info here.

View Recording: https://youtu.be/-raMt2Y1CdM

Session Agenda:
1.  2.00- 2.10,  Sylvain Grellet, Abdelfettah Feliachi, BRGM, France
'Linked data' the glue within interoperable information systems
“Our environmental information systems have been exposing environmental features, their monitoring systems, and the observations they generate in an interoperable way (technical and semantic) for years. In Europe, there is even a legal obligation for such practices via the INSPIRE directive. However, the practice of inducing data providers to set up services in a "Discovery > View > Download data" pattern hides data behind the services. This hinders data discovery and reuse. Linked Data on the Web Best Practices turn this stack upside down, and data is now back in the first line. This completely revamps the design and capacities of our information systems. We'll highlight the new data frontiers opened by such practices, drawing examples from the French National Groundwater Information Network.”
View Slides: https://doi.org/10.6084/m9.figshare.11550570.v1

2.  2.10 - 2.20,  Adam Leadbetter, Rob Thomas, Marine Institute, Ireland
Using RDF Data Cubes for data visualization: an Irish pilot study for publishing environmental data to the semantic web
The Irish Wave and Weather Buoy Networks return metocean data at 5-60 minute intervals from 9 locations in the seas around Ireland. Outside of the Earth sciences, an example use case for these data is in supporting Blue Economy development and growth (e.g. renewable energy device development). The Marine Institute, as the operator of the buoy platforms and in partnership with the EU H2020-funded Open Government Intelligence project, has published daily summary data from these buoys using the RDF DataCube model[1]. These daily statistics are available as Linked Data via a SPARQL endpoint, making these data semantically interoperable and machine readable. This API underpins a pilot dashboard for data exploration and visualization. The dashboard lets the user explore the data and derive plots for the historic summary data, while interactively subsetting the full-resolution data behind the statistics. Publishing environmental data with these technologies makes environmental data accessible to developers beyond the Earth sciences and effectively lowers the entry bar to those familiar with Linked Data technologies.
View Slides: https://doi.org/10.6084/m9.figshare.11550570.v1
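The Data Cube approach above can be sketched with the kind of SPARQL query such an endpoint serves. The qb: prefix is the real W3C RDF Data Cube vocabulary, but the endpoint URL and dataset URI below are placeholders, not the Marine Institute's actual service; the network call is defined but not executed here.

```python
# Sketch: querying daily summary statistics published as an RDF Data Cube
# over SPARQL. ENDPOINT and the dataset URI are placeholders; qb: is the
# W3C RDF Data Cube vocabulary (http://purl.org/linked-data/cube#).
from urllib import parse, request

ENDPOINT = "https://example.org/sparql"  # placeholder endpoint

QUERY = """
PREFIX qb: <http://purl.org/linked-data/cube#>
SELECT ?obs ?dimension ?value
WHERE {
  ?obs a qb:Observation ;
       qb:dataSet <https://example.org/dataset/buoy-daily-summary> ;
       ?dimension ?value .
}
LIMIT 100
"""

def fetch_observations(endpoint: str = ENDPOINT, query: str = QUERY) -> bytes:
    """POST the query to a SPARQL endpoint and return the raw JSON response."""
    data = parse.urlencode({"query": query}).encode()
    req = request.Request(endpoint, data=data,
                          headers={"Accept": "application/sparql-results+json"})
    with request.urlopen(req) as resp:  # network call; not run in this sketch
        return resp.read()

print(QUERY.strip().splitlines()[0])  # → PREFIX qb: <http://purl.org/linked-data/cube#>
```

Because the results come back as standard SPARQL JSON, a dashboard like the one described needs no bespoke API client, which is part of what lowers the entry bar.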

3. 2.20 - 2.30,  Boyan Brodaric, Eric Boisvert, Geological Survey of Canada, Canada; David Blodgett, USGS, USA
Toward a Linked Water Data Infrastructure for North America
We will describe progress on a pilot project using Linked Data approaches to connect a wide variety of water-related information within Canada and the US, as well as across the shared border.
View Slides: https://doi.org/10.6084/m9.figshare.11541984.v1

4.  2.30 - 2.40,  Dalia Varanka, E. Lynn Usery, USGS, USA
The Map as Knowledge Base; Integrating Linked Open Topographic Data from The National Map of the U.S. Geological Survey
This presentation describes the objectives, models, and approaches for a prototype system for cross-thematic topographic data integration based on semantic technology. The system framework offers new perspectives on conceptual, logical, and physical system integration in contrast to widely used geographic information systems (GIS).
View Slides: https://doi.org/10.6084/m9.figshare.11541615.v1

5.  2.40 – 2.50,  Alistair Ritchie, Landcare, New Zealand
ELFIE at Landcare Research, New Zealand
Landcare Research, a New Zealand Government research institute, creates, manages and publishes a large set of observational and modelling data describing New Zealand’s land, soil, terrestrial biodiversity and invasive species. We are planning to use the findings of the ELFIE initiatives to guide the preparation of a default view of the data to help discovery (by Google), use (by web developers) and integration (into the large environmental data commons managed by other agencies). This integration will not only link data about the environment together, but will also expose more advanced data services. Initial work is focused on soil observation data, and the related scientific vocabularies, but we anticipate near universal application across our data holdings.
View Slides: https://doi.org/10.6084/m9.figshare.11550369.v1

6.  2.50 - 3.00,  Irina Bastrakova, Geoscience Australia, Australia
Location Index Project (Loc-I) – integration of data on people, business & the environment
Location Index (Loc-I) is a framework that provides a consistent way to seamlessly integrate data on people, business, and the environment.
Location Index aims to extend the characteristics of foundation spatial data by taking geospatial data (multiple geographies) that is essential to support public safety and wellbeing, or critical for national or government decision making that contributes significantly to economic, social, and environmental sustainability, and linking it with observational data. Through providing the infrastructure to suppo

Speakers

Jonathan Yu

Research data scientist/architect, CSIRO
Jonathan is a data scientist/architect with the Environmental Informatics group in CSIRO. He has expertise in information and web architectures, data integration (particularly Linked Data), data analytics and visualisation. Dr Yu is currently the technical lead for the Loc-I project…

Dalia Varanka

Research Physical Scientist, U.S. Geological Survey
Principle Investigator and Project Lead, The Map as Knowledge Base

Alistair Ritchie

Landcare Research NZ

Adam Leadbetter

Marine Institute

Rob Thomas

Marine Institute

Boyan Brodaric

Natural Resources Canada

Eric Boisvert

Natural Resources Canada

Irina Bastrakova

Director, Spatial Data Architecture, Geoscience Australia
I have been actively involved with international and national geoinformatics communities for more than 19 years. I am the Chair of the Australian and New Zealand Metadata Working Group. My particular interest is in developing and practical application of geoscientific and geospatial…

David Blodgett

U.S. Geological Survey


Wednesday January 8, 2020 2:00pm - 3:30pm EST
White Flint

4:00pm EST

Citizen Science Data in Earth Science: Challenges and Opportunities
Citizen science is scientific data collection and research performed primarily or in part by non-professional and amateur scientists. Citizen science data has been used in a variety of the physical sciences, including physics, ecology, biology, and water quality. As volunteer-contributed datasets continue to grow, they represent a unique opportunity to collect and analyze earth-science data on spatial and temporal scales impossible to achieve by individual researchers. This session will explore the ways open citizen science data sets can be used in earth science research and some of the associated challenges and opportunities for the ESIP community to use and partner with citizen science organizations.

View Recording: https://youtu.be/jTNgWZI6Cik

Takeaways


How to Prepare for this Session: https://www.nationalgeographic.org/encyclopedia/citizen-science/
http://www.earthsciweek.org/citizen-science

Speakers
avatar for Alexis Garretson

Alexis Garretson

Community Fellow, ESIP

Kelsey Breseman

Archiving Program Lead, Environmental Data & Governance Initiative
Governmental accountability around public data & the environment. Decentralized web. Intersection of tech & ethics & civics.


Wednesday January 8, 2020 4:00pm - 5:30pm EST
Linden Oak
  Linden Oak, Breakout

4:00pm EST

Planning for new Agriculture and Climate Cluster focus area on automated agriculture with AI
The Agriculture and Climate (ACC) Cluster will host a planning session for a new focus area on automated agriculture and AI ("Agro-AI"). Some initial ideas on possible activities in this space were presented at the ACC October 2019 telecon, including those related to the “Data-to-Decisions” ESIP Lab project (https://www.esipfed.org/wp-content/uploads/2018/07/Wee.pdf). Currently, there are many initiatives and funding opportunities for automated agriculture with AI. The National Science Foundation, e.g., recently announced a program aimed at significantly advancing research in AI (https://www.nsf.gov/news/news_summ.jsp?cntn_id=299329&org=NSF&from=news), including, in its initial set of high-priority areas, “AI-Driven Innovation in Agriculture and the Food System.”
Among the topics for discussion in this planning session will be related proposal opportunities and sponsoring an ACC breakout session on agriculture and AI at the ESIP 2020 Summer Meeting.

How to Prepare for this Session: TBD; there will be an intro presentation prior to the group discussion. This presentation may be made available ahead of the meeting on the scheduled session page.

Presentations:

View Recording: https://youtu.be/GhnSINRFNBg

Takeaways
  • Next step 1: Conduct a survey of available dashboards, existing data, ML use cases, existing APIs
  • Next step 2: Decide on an example question for a use case
  • Next step 3: Define and survey potential users



Speakers

Arif Albayrak

Senior Software Engineer, ADNET (GESDISC)

Bill Teng

NASA GES DISC (ADNET)


Wednesday January 8, 2020 4:00pm - 5:30pm EST
Salon A-C
  Salon A-C, Business Meeting

4:00pm EST

Structured data web and coverages integration working session
This working session will follow on the "Advancing Data Integration approaches of the structured data web" session and the Coverage Analytics sprint as an opportunity for those interested in building linked data information products that integrate spatial features, coverage data, and more. Inspiration will be drawn from projects like science on schema.org, the Environmental Linked Features Interoperability Experiment, the Australian Location Index, and those that session attendees take part in. Participants will self-organize into use-case- or technology-focused groups to discuss and synthesize the outcomes of the sprint and the structured data web session. Session outcomes could take a number of forms: linked data and web page mock-ups; ideas and issues for OGC, W3C, or ESIP groups to consider; example data or use cases for relevant software development projects; or work plans and proposals for future ESIP work. The session format is expected to be fluid, with an ideation and group-formation exercise followed by structured discussion to explore a set of ideas, then narrowing to a focused, valuable outcome. Participants will be encouraged to work together prior to the meeting to design and plan the session structure. Outcomes of the session will be reported at an Information Technology and Interoperability webinar in early 2020.

How to Prepare for this Session: Attend the coverage sprint and the "Advancing Data Integration approaches of the structured data web" session.

Shared document for session here.

Full Notes: https://doi.org/10.6084/m9.figshare.11559087.v1

Presentations:

View Recording: https://youtu.be/u2x3I0cr46A

Takeaways
  • Breakout sessions will feed into the Information Interoperability Committee and webinar series. See notes: https://docs.google.com/document/d/1LpcTMwP0mAD4G4Gb8mStI5uSDV61_qWPUkQ9nI1x1cI/edit?usp=sharing
  • Foster cross-project consistency via breakouts, such as the Science on Schema.org issue of distinguishing links to "in-band" linked (meta)data from "out-of-band" linked data. The approach discussed: use content negotiation, with blank nodes carrying link properties for RDF elements whose URIs point at out-of-band content; identify in-band links with schema.org @id and out-of-band links with schema.org URL values.
  • Incorporating spatial coverages in knowledge graphs; next steps: explore tessellations further as an intermediate index; carry these ideas forward to the OGC EDR SWG and the OGC API - Coverages SWG; and mention them to the UFOKN. An open question on the role of 'spatial' knowledge graphs: will spatial data analysis and transformation tools grow to adopt RDF as an underlying data structure for spatial information, or will RDF remain a 'view' of existing (legacy) spatial data in GI systems?
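The in-band vs. out-of-band linking convention discussed above can be sketched in JSON-LD. This is an illustrative example only, not an official Science on Schema.org pattern; the dataset name, identifier, and download URL are invented:

```python
import json

# Illustrative sketch of the convention: nodes that are themselves resolvable
# linked data ("in-band") get an @id; content that is not linked data
# ("out-of-band") is linked as a plain URL value on a blank node.
dataset = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "Dataset",
    # In-band: this node is resolvable linked data, so identify it with @id.
    "@id": "https://example.org/dataset/42",
    "name": "Example coverage dataset",
    "distribution": {
        # Out-of-band: a blank node (no @id) whose contentUrl points at a
        # file that is not itself linked data.
        "@type": "DataDownload",
        "contentUrl": "https://example.org/files/42.nc",
    },
}

print(json.dumps(dataset, indent=2))
```

The distinction lets a crawler know whether dereferencing a link will yield more RDF (follow the @id) or an opaque payload (record the URL).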


Speakers
avatar for Adam Shepherd

Adam Shepherd

Technical Director, Co-PI, BCO-DMO
schema.org | Data Containerization | Linked Data | Semantic Web | Knowledge Representation | Ontologies
avatar for Irina  Bastrakova

Irina Bastrakova

Director, Spatial Data Architecture, Geoscience Australia
I have been actively involved with international and national geoinformatics communities for more than 19 years. I am the Chair of the Australian and New Zealand Metadata Working Group. My particular interest is in developing and practical application of geoscientific and geospatial... Read More →
WF

William Francis

Geoscience Australia
avatar for Jonathan Yu

Jonathan Yu

Research data scientist/architect, CSIRO
Jonathan is a data scientist/architect with the Environmental Informatics group in CSIRO. He has expertise in information and web architectures, data integration (particularly Linked Data), data analytics and visualisation. Dr Yu is currently the technical lead for the Loc-I project... Read More →
DF

Doug Fils

Consortium for Ocean Leadership
avatar for David Blodgett

David Blodgett

U.S. Geological Survey


Wednesday January 8, 2020 4:00pm - 5:30pm EST
White Flint
 
Thursday, January 9
 

10:15am EST

Working Group for the Data Stewardship Committee
This session is a working group for the 2020-2021 year for the Data Stewardship committee. We will discuss priorities for the next year, potential collaborative outputs, and review the work in progress from the last year. 

Notes Document: https://docs.google.com/document/d/1B_0K5jGnFgH72U3P2-oGr5vEqHOGU8CWU-IkZ6pjXbM/edit?ts=5e174588

Presentations

View Recording: https://youtu.be/am-ZLfHgM4w

Takeaways
  • Wow, the members of the Committee really are active! Practically everyone has their own cluster or two!
  • Six activities proposed for the upcoming year have champions who will lead the effort to define the outputs of their selected activity.


Speakers
avatar for Alexis Garretson

Alexis Garretson

Community Fellow, ESIP
avatar for Kelsey Breseman

Kelsey Breseman

Archiving Program Lead, Environmental Data & Governance Initiative
Governmental accountability around public data & the environment. Decentralized web. Intersection of tech & ethics & civics.


Thursday January 9, 2020 10:15am - 11:45am EST
Forest Glen
  Forest Glen, Business Meeting

10:15am EST

Mapping Data & Operational Readiness Levels (ORLs) to Community Lifelines
Approach: The Disaster Lifecycle Cluster has seen great success in its efforts to put Federated arms around "trusted data for decision makers" as a way to accelerate situational awareness and decision-making by identifying trust levels for data. This session will build upon the Summer Meeting and aligns with the overall ESIP theme, Data to Action: Increasing the Use and Value of Earth Science Data and Information.

The ESIP Disaster Lifecycle Cluster has evolved into one of the most operationally active clusters in the Federation with a thirst for applying datasets to decision-making environments while building trust levels that manifest themselves as ORLs. Duke Energy, All Hazards Consortium’s Sensitive Information Sharing environment (SISE), DHS and FEMA are all increasing their interest in ORLs with their sights set on implementing them in the near future. Data is available everywhere and more of it is on the way. Trusted data is available some places and can help decision makers such as utilities make 30-second decisions that can save lives, property and get the lights back on sooner, saving millions of dollars.

This session will provide the venue to discuss emerging projects from NASA’s Applied Sciences Division (A.37), Initiatives at JPL and Federal Agency data portal access that can accelerate decision making today and in the future. We will also discuss drone data and European satellite data that is available for access and use when disasters threaten. Come and join us, the data you have may just save a life.

Agenda:
  1. Greg McShane, DHS CISA - The Critical Nature of the Public-Private Trusted Information Sharing Paradigm (10 min) Presented by Tom Moran, All Hazards Consortium Executive Director
  2. Dave Jones, StormCenter/GeoCollaborate - The status of ORLs, where we are, ESIP Announcement at GEO in Australia, AHC SISE, Next Steps (10 min)
  3. Maggi Glassco, NASA Disasters Program, JPL - New Applied Sciences Disasters Projects, Possible Lifeline Support Information Sources in the Future (10 min)
  4. Bob Chen/Bob Downs, Columbia Univ./SEDAC/CIESIN - Specific Global and Local Population Data for Community Lifeline Decision Making (10 min)
  5. Discussion/Q&A Period (40 min)

Presentations

View Recording: https://youtu.be/gJ93R6SlMkM

Key Takeaways for this Session: 
  1. Through the All Hazards Consortium, a new research institute will begin to help bring candidate research products into operations. An imagery committee, consisting of private-sector and research members under SISE, will identify and evaluate use-case-driven candidate imagery data within the ORL context using GeoCollaborate.
  2. NASA grant opportunities within the Disasters Program require co-funding by end-user partners to guide usage needs and adoption (using ARL success criteria). This should increase adoption of data and services from NASA-funded Applied Sciences Program (ASP) projects. The cluster would like to work with NASA ASP as a testbed for connecting funded projects to additional user communities.
  3. We discussed the need for and value of population data (current, and predictions of affected populations) in preparedness activities and emergency response. We would like to leverage additional data services from SEDAC to test with operational decision makers.


Speakers
avatar for Dave Jones

Dave Jones

StormCenter Communications, StormCenter Communications
Real-time data access, sharing and collaboration across multiple platforms. Collaborative Common Operating Pictures, Decision Making, Situational Awareness, connecting disparate mapping systems to share data, cross-product data sharing and collaboration. SBIR Phase III status with... Read More →
avatar for Karen Moe

Karen Moe

NASA Goddard Emeritus
ESIP Disasters Lifecycle cluster co-chair with Dave Jones/StormCenter IncManaging an air quality monitoring project for my town just outside of Washington DC and looking for free software!! Enjoying citizen science roles in environmental monitoring and sustainable practices in my... Read More →


Thursday January 9, 2020 10:15am - 11:45am EST
Salon A-C
  Salon A-C, Breakout

12:00pm EST

License Up! What license works for you and your downstream repositories?
Many repositories are seeing an increase in the use and diversity of licenses and other intellectual property management (IPM) tools applied to externally-created data submissions and software developed by staff. However, adding a license to data files may have unexpected or unintended consequences in the downstream use or redistribution of those data. Who “owns” the intellectual property rights to data collected by university researchers using Federal and State (i.e., public) funding that must be deposited at a Federal repository? What license is appropriate for those data and what — exactly — does that license allow and disallow? What kind of license or other IPM instrument is appropriate for software written by a team of Federal and Cooperative Institute software engineers? Is there a significant difference between Creative Commons, GNU, and other ‘open source licenses’?

We have invited a panel of legal advisors from Federal and other organizations to discuss the implications of these questions for data stewards and the software teams that work collaboratively with those stewards. We may also discuss the latest information about Federal data licenses as it applies to the OPEN Government Data Act of 2019. How to Prepare for this Session: Consider what, if any, licenses, copyright, or other intellectual property rights management you apply or think applies to your work. Also consider Federal requirements such as the OPEN Government Data Act of 2019, Section 508 of the Rehabilitation Act of 1973.

Speakers:
Dr. Robert J. Hanisch is the Director of the Office of Data and Informatics, Material Measurement Laboratory, at the National Institute of Standards and Technology in Gaithersburg, Maryland. He is responsible for improving data management and analysis practices and helping to assure compliance with national directives on open data access. Prior to coming to NIST in 2014, Dr. Hanisch was a Senior Scientist at the Space Telescope Science Institute, Baltimore, Maryland, and was the Director of the US Virtual Astronomical Observatory. For more than twenty-five years Dr. Hanisch led efforts in the astronomy community to improve the accessibility and interoperability of data archives and catalogs.
Henry Wixon is Chief Counsel for the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce. His office provides programmatic legal guidance to NIST, as well as intellectual property counsel and representation to the Department of Commerce and other Department bureaus. In this role, it interacts with principal developers and users of research, including private and public laboratories, universities, corporations and governments. Responsibilities of Mr. Wixon’s office include review of NIST Cooperative Research and Development Agreements (CRADAs), licenses, Non-Disclosure Agreements (NDAs) and Material Transfer Agreements (MTAs), and the preparation and prosecution of the agency’s patent applications. As Chief Counsel, Mr. Wixon is active in standing Interagency Working Groups on Technology Transfer, on Bayh-Dole, and on Research Misconduct, as well as in the Federal Laboratory Consortium. He is a Certified Licensing Professional and a Past Chair of the Maryland Chapter of the Licensing Executives Society, USA and Canada (LES), and is a member of the Board of Visitors of the College of Computer, Mathematical and Natural Sciences of the University of Maryland, College Park.

Presentations
See attached

View Recording: https://youtu.be/5Ng5FDW1LXk


Speakers
DC

Donald Collins

Oceanographer, NESDIS/NCEI Archive Branch
Send2NCEI, NCEI archival processes, records management


Thursday January 9, 2020 12:00pm - 1:30pm EST
Forest Glen
  Forest Glen, Panel

12:00pm EST

Fire effects on soil morphology across time scales: Data needs for near- and long-term land and hazard management
Fire impacts soil hydrology and biogeochemistry at both near (hours to days) and long (decades to centuries) time scales. Burns, especially in soils with high organic carbon stocks like peatlands, induce a loss of absolute soil carbon stock. Additionally, fire can alter the chemical makeup of the organic matter, potentially making it more resistant to decomposition. On the shorter timescales, fire can also change the water repellent properties or hydrophobicity of the soil, leading to an increased risk of debris flows and floods.

In this session, we will focus on the varying data needs for assessing the effects of burns across time scales, from informing emergency response managers in the immediate post-burn days, to monitoring post-burn recovery, to managing carbon in a landscape decades out.

Speaker abstracts (in order of presentation):

James MacKinnon (NASA GSFC)
Machine learning methods for detecting wildfires 

This talk shows the innovative use of deep neural networks, a type of machine learning, to detect wildfires in MODIS multispectral data. The effort attained very high classification accuracy, showing that neural networks can be useful in a scientific context, especially when dealing with sparse events such as fire anomalies. Furthermore, we laid the groundwork to continue beyond binary fire classification towards detecting the "state," or intensity, of a fire, eventually allowing more accurate fire modeling. With this knowledge, we developed software to enable neural networks to run on even the typically compute-limited spaceflight-rated computers, and tested it by building a drone payload equipped with a flight-computer analog and flying it over controlled burns to prove its efficacy.
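The shape of binary fire classification on multispectral pixels can be sketched with a tiny hand-weighted logistic model. This is illustrative only: the actual work used trained deep neural networks on MODIS data, and the weights and band values below are invented for the example:

```python
import math

def fire_probability(bands, weights, bias):
    """Weighted sum of band radiances passed through a sigmoid."""
    z = sum(w * b for w, b in zip(weights, bands)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify(bands, weights, bias, threshold=0.5):
    """Binary fire / no-fire decision for one pixel."""
    return fire_probability(bands, weights, bias) >= threshold

# Hypothetical weights favoring the mid-infrared band, where active fires
# are bright; a real network would learn these from labeled MODIS scenes.
weights = [0.2, 1.5, -0.3]
bias = -2.0
hot_pixel = [0.9, 2.5, 0.4]   # strong mid-IR response
cool_pixel = [0.8, 0.3, 0.5]

print(classify(hot_pixel, weights, bias))   # True
print(classify(cool_pixel, weights, bias))  # False
```

Extending from this binary decision to fire "state" would mean regressing intensity rather than thresholding a single probability.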

Kathe Todd-Brown (U. FL Gainesville)
An overview of effects of fire on ecosystems

Fire is a defining characteristic of many ecosystems worldwide, and, as the climate warms, both fire frequency and severity are expected to increase. In addition to the effects of smoke on the climate and human health, there are less apparent effects of fire on the terrestrial ecosystem. From alterations in the local soil properties to changes in the carbon budget as organic carbon is combusted into CO2 and pyrogenic carbon, fire is deeply impactful to the local landscape. The long-term climate implication of fire on the terrestrial carbon budget is a tension between carbon lost to the atmosphere as carbon dioxide and sequestered in the soil as recalcitrant pyrogenic carbon. Here we present a new model to simulate the interaction between ecosystem growth, decomposition, and fire on carbon dynamics. We find that the carbon lost to burned carbon dioxide will always be recovered, if there is any recalcitrant pyrogenic carbon generated by the fires. The time scale of this recovery, however, is highly variable and often not relevant to land managers. This model highlights key data gaps at the annual and decadal time scales. Quantifying and predicting the loss of soil, litter, and vegetation carbon in an individual fire event is a key unknown. Relatedly, the amount of pyrogenic carbon generated by fire events is another near-term data needed to better constrain this model. Finally, on the longer time scales, the degree of recalcitrancy of pyrogenic carbon is a critical unknown.
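The tension described above, carbon lost to the atmosphere as CO2 versus carbon sequestered as recalcitrant pyrogenic carbon, can be sketched with a toy two-pool simulation. This is not the model presented in the talk; all parameter values are invented for illustration:

```python
# Toy model: a fast ecosystem carbon pool (growth minus decomposition) and a
# slowly decaying pyrogenic carbon pool. Periodic fires combust a fraction of
# the fast pool; a small share becomes pyrogenic carbon, the rest is CO2.
def simulate(years, growth=1.0, k_fast=0.05, k_pyro=0.001,
             fire_interval=50, burn_frac=0.3, pyro_frac=0.1):
    fast, pyro = 20.0, 0.0  # initial carbon stocks (arbitrary units)
    for year in range(1, years + 1):
        fast += growth - k_fast * fast   # growth minus decomposition
        pyro -= k_pyro * pyro            # pyrogenic carbon decays very slowly
        if year % fire_interval == 0:    # periodic fire event
            burned = burn_frac * fast
            fast -= burned
            pyro += pyro_frac * burned   # remainder is lost as CO2
    return fast, pyro

fast, pyro = simulate(500)
print(round(fast, 2), round(pyro, 2))
```

Even in this sketch, the recovery time scale depends sensitively on k_pyro and pyro_frac, echoing the talk's point that pyrogenic carbon production and recalcitrance are the key data gaps.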

Daniel Fuka (VA Tech)

Rapidly improving the spatial representation of soil properties using topographically derived initialization with a proposed workflow for new data integration
Topography exerts critical controls on many hydrologic, geomorphologic, biophysical, and forest fire processes. However, in modeling these systems, the current use of topographic data neglects opportunities to account for topographic controls on processes such as soil genesis, soil moisture distributions, and hydrological response: all factors that significantly characterize the post-fire effects and potential risks of the new landscape. In this presentation, we demonstrate a workflow that takes advantage of data brokering to combine the most recent topographic data and best available soil maps to increase the resolution and representational accuracy of spatial soil morphologic and hydrologic attributes: texture, depth, saturated conductivity, bulk density, porosity, and the water capacities at field and wilting-point tensions. We show several proofs of concept and initial performance tests of the topographically adjusted soil parameters against those from the NRCS SSURGO (Soil Survey Geographic) database. Finally, we pose the potential for a quickly configurable open-source data brokering system (NSF BALTO) to serve the most recently updated topographic and soil characteristics, so this workflow can rapidly re-characterize and increase the resolution of post-fire landscapes.
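One common way to let topography modulate a coarse soil-map property is the topographic wetness index (TWI). The sketch below is an assumption about the general technique, not the presented workflow; the scaling rule and all values are invented:

```python
import math

def wetness_index(upslope_area_m2, slope_rad):
    """Classic TWI: ln(a / tan(beta)), guarding against flat cells."""
    return math.log(upslope_area_m2 / max(math.tan(slope_rad), 1e-6))

def adjusted_depth(mapped_depth_cm, twi, twi_mean=8.0, sensitivity=0.05):
    """Deepen the mapped soil profile where TWI exceeds the landscape mean
    (valley bottoms), thin it where TWI falls below (ridges)."""
    return mapped_depth_cm * (1.0 + sensitivity * (twi - twi_mean))

# Hypothetical cells: a steep ridge with little upslope area vs. a gentle
# valley bottom collecting a large contributing area.
ridge = wetness_index(100.0, math.radians(25.0))
valley = wetness_index(50000.0, math.radians(2.0))
print(adjusted_depth(80.0, ridge), adjusted_depth(80.0, valley))
```

The same multiplicative adjustment could be applied per attribute (conductivity, porosity, etc.), with sensitivities calibrated against SSURGO pedon data.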

Dalia Kirschbaum (NASA GSFC)
Towards characterization of global post-fire debris flow hazard

Post-fire debris flows commonly occur in the western United States, but the extent of this hazard is little known in other regions. These events occur when rain falls on the ground with little vegetative cover and hydrophobic soils—two common side effects of wildfire. The storms that trigger post-fire debris flows are typically high-intensity, short-duration events. Thus, a first step towards global modeling of this hazard is to evaluate the ability of GPM IMERG and other global precipitation data to detect these storms. The second step is to determine the effectiveness of MCD64 and other globally available predictors in identifying locations susceptible to debris flows. Finally, rainfall and other variables can be combined into a single global model of post-fire debris flow occurrence. This research can show both where post-fire debris flows are currently most probable, as well as where the historical impact has been greatest.

How to Prepare for this Session:

Presentations

View Recording: https://youtu.be/I89om-kBYB0

Takeaways
  • Modeling and detecting fires and fire impacts is changing (e.g. neural networks, carbon modeling) and needs to continue to improve
  • There are many data needs to be able to operationalize post-fire debris flow and soil modeling
  • Fires severely change ecosystems and soils; we do not yet fully understand these changes, and more research is needed in this area


Speakers
KT

Kathe Todd-Brown

University of Florida Gainesville
DF

Dan Fuka

Virginia Tech
avatar for Bill Teng

Bill Teng

NASA GES DISC (ADNET)


Thursday January 9, 2020 12:00pm - 1:30pm EST
Salon A-C
  Salon A-C, Breakout

12:00pm EST

Datacubes for Analysis-Ready Data: Standards & State of the Art
This workshop session will follow up on the OGC Coverage Analytics sprint, focusing specifically on advanced services for spatio-temporal datacubes. In the Earth sciences, datacubes are accepted as an enabling paradigm for offering massive spatio-temporal Earth data analysis-ready; more generally, for easing access, extraction, analysis, and fusion. Datacubes also homogenize APIs across dimensions, allowing unified wrangling of 1-D sensor data, 2-D imagery, 3-D x/y/t image timeseries and x/y/z geophysics voxel data, and 4-D x/y/z/t climate and weather data.
Based on the OGC datacube reference implementation, we introduce datacube concepts, the state of standardization, and real-life 2D, 3D, and 4D examples utilizing services from three continents. Ample time will be available for discussion, and Internet-connected participants will be able to replay and modify many of the examples shown. Further, key datacube activities worldwide, within and beyond the Earth sciences, will be surveyed.
Session outcomes could take a number of forms: ideas and issues for OGC, ISO, or ESIP to consider; example use cases; challenges not yet addressed sufficiently, as well as entirely novel use cases; and work and collaboration plans for future ESIP work. Outcomes of the session will be reported at the next OGC TC meeting's Big Data and Coverage sessions. How to Prepare for this Session: Introductory and advanced material is available from http://myogc.org/go/coveragesDWG

Presentations
https://doi.org/10.6084/m9.figshare.11562552.v1

View Recording: https://youtu.be/82WG7soc5bk

Takeaways
  • The abstract coverage construct defines the base, which is filled in by a coverage implementation schema. This is important because previous implementations were not interoperable across different servers and clients.
  • The coordinate system retrieved from sensors reporting in real time is embedded into the XML schema, so the sensor data can be integrated into the broader system. Data can be delivered not only as GML but also as JSON and RDF, which can be used to link into semantic web technology.
  • The principle is to send an HTTP URL-encoded query to the server and get back results extracted from the datacube, e.g., sourced from many hyperspectral images.
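The URL-encoded query pattern in the last takeaway can be sketched as follows. The endpoint and coverage name are placeholders; the parameters follow the OGC WCS 2.0 GetCoverage style, with repeated subset= parameters trimming named axes:

```python
from urllib.parse import urlencode

# Hypothetical coverage server endpoint and coverage identifier.
endpoint = "https://example.org/rasdaman/ows"
params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "AverageTemperature"),
    ("subset", "Lat(40,50)"),            # spatial trim on the Lat axis
    ("subset", "Long(-80,-70)"),         # spatial trim on the Long axis
    ("subset", 'ansi("2020-01-01")'),    # temporal slice on the time axis
    ("format", "image/tiff"),
]
url = endpoint + "?" + urlencode(params)
print(url)
```

Sending this URL (e.g., with an HTTP GET) would, on a conforming server, return only the requested spatio-temporal slab rather than the whole datacube.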

Speakers

Thursday January 9, 2020 12:00pm - 1:30pm EST
White Flint