SGCI webinars bring together community members across the globe.


Webinar: Jupyter as a Gateway for Scientific Collaboration and Education

July 20, 2017

Jupyter as a Gateway for Scientific Collaboration and Education
Presented by Carol Willing, Cal Poly SLO and Jupyter Steering Council

Project Jupyter, evolved from the IPython environment, provides a platform for interactive computing that is widely used today in research, education, journalism and industry. The core premise of the Jupyter architecture is to design tools around the experience of interactive computing, building an environment, protocol, file format and libraries optimized for the computational process when there is a human in the loop, in a live iteration with ideas and data assisted by the computer.

The Jupyter Notebook, a system that allows users to compose rich documents that combine narrative text and mathematics together with live code and the output of computations in any format compatible with a web browser (plots, animations, audio, video, etc.), provides a foundation for scientific collaboration. The next generation of the Jupyter web interface, JupyterLab, will combine in a single user interface not only the notebook, but multiple other tools to access Jupyter services and remote computational resources and data.  A flexible and responsive UI allows the user to mix Notebooks, terminals, text editors, graphical consoles and more, presenting in a single, unified environment the tools needed to work with a remote environment.  Furthermore, the entire design is extensible and based on plugins that interoperate via open APIs, making it possible to design new plugins tailored to specific types of data or user needs.
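To make the document model concrete, here is the skeleton of a notebook file (nbformat v4) built by hand with only the standard library. The cell contents are invented for illustration, and real tooling would use the `nbformat` library rather than raw JSON:

```python
import json

# A minimal Jupyter notebook (nbformat v4), built by hand to show how
# narrative text and live code coexist in one JSON document.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            # A markdown cell holds narrative text, math, headings, etc.
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# Analysis\n", "Narrative text and math live here."],
        },
        {
            # A code cell holds live code plus its captured outputs.
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "outputs": [],
            "source": ["print(2 + 2)"],
        },
    ],
}

# Serializing this dict yields a valid .ipynb file that any Jupyter
# front end (Notebook, JupyterLab, nbviewer) can open.
ipynb = json.dumps(notebook, indent=1)
print(ipynb[:40])
```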

JupyterHub enables Jupyter Notebook and JupyterLab to be used by groups of users for research collaboration and education. We believe JupyterHub provides a foundation on which to build modern scientific gateways that support a wide range of user scenarios, from interactive data exploration in high-level languages like Python, Julia or R, to the education of researchers and students whose work relies on traditional HPC resources.
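To make the multi-user deployment model concrete, here is a minimal, illustrative `jupyterhub_config.py` sketch. The traitlet names are real JupyterHub configuration options, but the chosen authenticator, spawner, and user names are assumptions for the example, not a recommended production setup:

```python
# jupyterhub_config.py -- illustrative JupyterHub configuration sketch.
# get_config() is supplied by JupyterHub when it loads this file.
c = get_config()

# Authenticate users against the system password database (PAM is the
# JupyterHub default; institutions often swap in OAuth or LDAP here).
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'

# Spawn one single-user notebook server per authenticated user as a
# local process (other spawners target Docker, Kubernetes, or batch HPC).
c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'

# Grant hub administration rights to selected accounts (hypothetical name).
c.Authenticator.admin_users = {'instructor'}
```

The choice of authenticator and spawner is exactly where a gateway tailors JupyterHub to its environment, e.g. pointing the spawner at HPC resources.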

View the slides (Slideshare)

View the slides (Speaker Deck)

There's also a repo with the talk and a file with links.

Watch on YouTube


Questions asked during the webinar (and some answers posted in chat)

If you have further questions, the best way to reach Carol is on Gitter or the Jupyter mailing list.

Q: I understand Jupyter Notebook isn't just for Python, but can you use other languages? Is Perl one of those languages?
A: Yes. Jupyter supports many languages through its kernel system, Perl among them; see the community-maintained list of supported kernels:

Q: Is there a list for people exploring Jupyter in education? I am going to a hackathon on the subject, but I know there have been others. Is everything on GitHub, or is there a better way to find out who is doing what?
A: Here are the mailing lists Carol mentioned:  

Project Jupyter:!forum/jupyter
Teaching with Jupyter:!forum/jupyter-education  
Jupyter in HPC:!forum/jupyter-hpc

Q: How is the Jupyter ecosystem thinking about security, since code is executable in the user's web browser?
A: See the security-related links shared in chat (one of them is somewhat dated).

Q: Jupyter Notebooks are great for educational purposes; can you give an example or two of research/science projects using Jupyter Notebooks?
A: Check out nbviewer and Jupyter's gallery of interesting notebooks.


Here's the link to the JupyterLab + Real Time Collaboration presentation Carol mentioned:

Many thanks to Johnathan Rush for providing many links on the fly during the webinar!

Webinar: Gateway Showcase featuring VectorBase and CitSci.org

June 14, 2017

Gateway Showcase featuring VectorBase and CitSci.org

Watch the YouTube recording


VectorBase: A bioinformatics resource for invertebrate vectors and other organisms related to human diseases
Presented by Gloria I. Giraldo-Calderón, PhD, VectorBase Scientific Liaison/Outreach Manager 
Contact: ggiraldo at nd dot edu

Abstract: VectorBase is a free, web-based bioinformatics resource center (BRC) for invertebrate vectors of human pathogens, funded by NIAID/NIH. This database is the ‘home’ of 40 genomes of arthropod vectors and pests, and it also hosts transcriptomes, proteomes, and population data for an even wider list of species. The population biology data include lab- and field-collected information; in addition to data imported from external databases or submitted directly by users, VectorBase also generates and computes primary data. Over its 13 years of existence, hosted data have been used for basic and translational research, as reflected in numerous scientific publications that draw on data from one or more studies for new or repurposed analyses, descriptions, and hypothesis testing. Raw and processed data can be exported or downloaded in a variety of formats, and visualized, browsed, queried, and analyzed with the site's tools or any external tool. VectorBase data, tools, and resources are updated every two months. The website has extensive documentation resources for new and experienced users, including tutorials, video tutorials, practice exercises, answer keys, and sample files.

Download the slides as PDF

CitSci.org: A platform for engaging citizen scientists through individualized websites
Presented by Greg Newman, Director & Research Scientist, Natural Resource Ecology Laboratory, Colorado State University 
Contact: Gregory.Newman at colostate dot edu

Abstract: Citizen science empowers individuals to pursue their interests in the scientific world. Members of CitSci.org are encouraged to investigate their own scientific questions or jump on board as volunteers for existing projects. In parallel, citizen science programs create their own online projects where trained volunteers and scientists together answer local, regional, and global questions, inform natural resource decisions, advance scientific understanding, and improve environmental education. The CitSci.org platform empowers citizen science gateway creators and their participants to ask questions, select methods, submit data, analyze data, and share results. CitSci.org provides tools for the entire research process and the full spectrum of citizen science program needs: creating new projects, managing project members, building custom data sheets, analyzing collected data, and gathering participant feedback. To date, our volunteer coordinators have started 414 projects that have contributed a total of 697,984 measurements for analysis to answer local, regional, and/or global questions.

Download the slides as PDF

Webinar: Data and Software Carpentry: Using Training to Build a Worldwide Research Community

May 10, 2017

Data and Software Carpentry: Using Training to Build a Worldwide Research Community
Presented by Tracy Teal, co-founder and the Executive Director of Data Carpentry, and Adjunct Assistant Professor with BEACON, Michigan State University

Although petabytes of data are now available, most scientific disciplines are failing to translate this sea of data into scientific advances. The missing step between data collection and research progress is a lack of training for scientists in crucial skills for effectively and reproducibly managing and analyzing large amounts of data. Already faced with a deluge of data, researchers themselves are demanding this training. Short, intensive, hands-on Software and Data Carpentry workshops give researchers the opportunity to engage in deliberate practice as they learn these skills. This model has been shown to be effective, with the vast majority (more than 90%) of learners saying that participating in the workshop was worth their time and led to improvements in their data management and data analysis skills. Data Carpentry events have trained over 20,000 learners since 2014 on 6 continents with over 800 volunteer instructors. The strategies of growing this community could be applied toward growing communities of gateway users, particularly by offering training and demonstrating the value of the skills and tools that will enhance their work.

View the slides (Slideshare)

Watch on YouTube


Questions asked during the webinar (and some answers)

If you have further questions, you are welcome to contact Tracy at tkteal AT datacarpentry DOT org.

Q: What is the relationship between SGCI and her organization? There seems to be some overlap in training, for example.
A: Currently there is no formal relationship between SGCI and Data Carpentry, but we definitely want to look into that option further!

Q: I wonder how additional topics and instructors get added to the set of offerings.

Q: When discussing "active learning", she used an acronym - IBU? IVU? What's that?
A: I, We, You [First the instructor shows it, then we do it together, and then you do it yourself.]

Q: Do you request attendees install software before a workshop (or during)? In the Python ecosystem, do you recommend a particular distribution?
A: Anaconda is the recommended Python distribution.

Example of a lesson:

Q: So what about Jupyter? Do you use it?

Q: Where does the instructor training take place? and is there also a cost for this?

Q: Who pays for the volunteer instructor's travel?

Q: What is the relationship between Data Carpentry and Software Carpentry?

Webinar: Gateway Showcase featuring Ensayo Project's SimEOC and Spatial Portal for Analysis of Climatic Effects on Species (SPACES)

April 12, 2017

Gateway Showcase featuring 

Ensayo Project's SimEOC: A Web-Based Virtual Emergency Operations Center Simulator for Training and Research and 

Spatial Portal for Analysis of Climatic Effects on Species (SPACES) 

Watch the YouTube recording


Ensayo Project's SimEOC: A Web-Based Virtual Emergency Operations Center Simulator for Training and Research

Presented by Greg Madey, University of Notre Dame

Abstract: Training is an integral part of disaster preparedness. Practice in dealing with crises improves one’s ability to manage emergency situations. As an emergency escalates, more and more agencies get involved. These agencies require training to learn how to manage the crisis and to work together across jurisdictional boundaries. Consequently, training requires participation from many individuals, consumes a great deal of resources in vendor cost for support and staff time, and cannot be conducted often. Moreover, in the current crisis management environment, most training is conducted through discussion-based tabletop and paper-based scenario performance exercises. SimEOC was developed under the NSF-funded Ensayo Project. It is a web-based training simulator and research tool. SimEOC is built using MongoDB, Express.js, Angular and Node.js (the MEAN stack). A design overview and demonstration will be provided.

Download a PDF of the Ensayo Project's slides

Questions asked about Ensayo Project (answered in the video)

If you want to try out the gateway or have further questions, email gmadey AT nd DOT edu

  • Can anyone get an account for the SimEOC and do the exercises?
  • Please comment on the development process.
  • Who would configure these exercises? How do you add new facilities related to emergency management?
  • When did the collaboration with CRC begin? How was the broader development team scoped and organized?
  • Has this system been used in a real training situation?
  • Did CRC have enough visibility at ND that you knew to reach out? Otherwise, how did you learn of them?

Spatial Portal for Analysis of Climatic Effects on Species (SPACES)
Presented by Dilkushi de Alwis Pitts, University of Cambridge

Abstract: To deal with escalating environmental shifts caused by climate change and other factors, ecologists are increasingly called upon to make risk assessment decisions about affected natural resources. As a result, there is a rapidly growing need for niche modeling of species projections to guide management decisions and activities related to intervention.

A number of software applications exist for carrying out fundamental niche modeling, but they present several problems for users, including distinct approaches to algorithms, data, and outputs, among others. The openModeller software was created to address these concerns by providing transparent, open-source tools under a common architecture.

SPACES builds on openModeller to shield biologists from the complications of running niche models, including data formatting and the complexities of the modeled systems. SPACES has endeavored to resolve the issues mentioned above by obtaining, handling, and storing the large quantities of data that niche models require, processing the data in a user-controlled way, and presenting the results in convenient formats. Through SPACES, extensive, high-quality spatial data are made available alongside species data and a variety of niche models that can be executed, analyzed, and compared, all through a common web browser interface designed to support a virtual scientific community and share the results of research.

Questions asked about SPACES Project (answered in the video)

If you want to try out the gateway or have further questions, email kad49 AT cam DOT ac DOT uk

  • Does SPACES use HPC to run jobs? If yes, which one(s)?
  • Are all the outputs spatial raster data? If not, how is map algebra done on vector data? Does map algebra happen in the user's browser or on a backend server? If the latter, how is the computation managed?
  • What does it cost to add a new model algorithm? How much automation has been done to speed it up?
  • How do other researchers put their algorithms into SPACES portal?

Webinar: Building a Modern Research Data Portal with Globus - Introduction to the Globus Platform

March 8, 2017

Building a Modern Research Data Portal with Globus - Introduction to the Globus Platform
Presented by Steve Tuecke and Greg Nawrocki, University of Chicago

Abstract: Science DMZ (a portion of the network optimized for high-performance scientific applications) architectures provide frictionless end-to-end network paths, and Globus APIs allow programmers to create powerful research data portals that leverage these paths for data distribution, staging, synchronization, and other useful purposes. In this tutorial, we use real-world examples to show how these new technologies can be applied to realize immediately useful capabilities.

Attendees will develop an understanding of key identity management concepts as they are applied to data management across the research lifecycle, and will be exposed to tools and techniques for implementing these concepts in their own systems.

We will explain how the Globus APIs provide intuitive access to authentication, authorization, sharing, transfer, and synchronization services. Companion IPython/Jupyter notebooks will provide application skeletons that workshop participants can adapt to realize their own research data portals, science gateways, and other web applications that support research data workflows.

Download a PDF of the slides

Watch the YouTube recording

Answers to questions asked during the webinar

The slides have many links to various online resources. If you don't see what you are looking for, feel free to contact Greg directly.

Q: The Globus sample portal is written in which language?
A: Python.

Q: For shared endpoints, one can't see another user's shared endpoint, right?
A: Someone can see the data in an endpoint only if it's been explicitly shared with them. The endpoints themselves are all publicly visible.

Q: Does the Transfer/Share API support downloading from a shared endpoint to a local machine that is not an endpoint?
A: All transfers are to and from endpoints. Globus Connect Personal is a very easy way to set up an endpoint on a local machine:

Q: What if someone doesn't want to set up a personal endpoint? We get resistance from people who don't want to set one up for infrequent downloads.
A: Native “in browser” HTTP transfers are on the roadmap. Transfers themselves are easy; getting them to work within the constraints of our security model requires care. We should have more concrete timelines for delivery at GlobusWorld in April.

Q: When the user logs in to the gateway, the gateway redirects to Globus, the user signs in, and Globus redirects back to the gateway. Is HTTPS required for this process, or is HTTP okay?
A: HTTPS is required; this is standard OAuth.
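The redirect sequence described in this answer is the standard OAuth2 authorization-code flow. Here is a minimal sketch of how a gateway might construct the Globus Auth login URL using only the Python standard library; the auth.globus.org endpoint and the Transfer scope string are real, but the client ID, redirect URI, and state value are placeholder assumptions (a registered gateway would have its own):

```python
from urllib.parse import urlencode

# Real Globus Auth authorization endpoint.
AUTHORIZE_ENDPOINT = "https://auth.globus.org/v2/oauth2/authorize"

# Placeholders -- obtained by registering the gateway with Globus.
CLIENT_ID = "my-gateway-client-id"
REDIRECT_URI = "https://gateway.example.org/oauth/callback"  # must be HTTPS

params = {
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "response_type": "code",  # authorization-code grant
    # Request identity info plus Transfer API access.
    "scope": "openid profile urn:globus:auth:scope:transfer.api.globus.org:all",
    "state": "anti-csrf-token",  # in practice, a random per-request value
}

# The gateway redirects the user's browser to this URL; after sign-in,
# Globus redirects back to REDIRECT_URI with an authorization code.
login_url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(login_url)
```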

Useful links: