SGCI webinars bring together community members across the globe.

Webinar: What's Wrong with Computational Science Software? Absolutely Nothing (nearly)

February 13, 2019

Q: What's Wrong with Computational Science Software? A: Absolutely Nothing (nearly)

Presented by Dr. Timothy Menzies, Professor of Computer Science, NC State University, and IEEE Fellow

Much of the data described in this presentation is drawn from computational science research software, and Tim's research team engaged deeply with the PIs funded by NSF's Software Infrastructure for Sustained Innovation (SI2) program, launched in 2010 to fund software research at multiple scales. This program has included annual PI meetings, where Tim presented and interacted with this developer community. When Tim refers to "you" in this presentation, it is this community of computational science developers/PIs, including but not limited to those in the SI2 program.

While funded with an NSF Eager grant, we have been applying empirical software engineering (SE) methods to software systems built for computational science. What we expected to see, and what we saw, were two very different things.
Initially, we spent time talking to computational scientists from CERN and around America. Most expressed concerns that their software was somehow not as good as it might be. Yet those concerns proved unfounded, at least for the code we could access.
Using feedback from the computational science community, we found and analyzed 40+ packages (some of which were very widely used in computational science). Many of those systems had been maintained by large teams, for many years. For example:
  • LAMMPS has 16,000+ commits from 80 developers (since 2012).
  • Trilinos (a more recent package) has been built through 80,000 commits from over 200 developers.
  • Elasticsearch is over 8 years old and has been built through 40,000+ commits from over 1100 developers.
  • deal.II has been maintained and extended since 1990 via 40,000+ commits from 100 active developers.
Note that some of these projects (e.g., deal.II) are much larger and show greater longevity than many open source projects. When we talked to the developers of these 40+ packages (particularly the post-docs), we found a group that was very well versed in current coding practices (GitHub, Travis CI, etc.).
LESSON 1: Many of those systems were written in modern languages (e.g. Python) or used modern programming tools (e.g. version control)
The reasons for this were economic and sociological: these developers are smart people who know that after their NSF funding ends, they might get well-paid jobs in the software industry. Hence, it was in their interest to know current practices. Accordingly:
LESSON 2: Increasingly, computational software is being written using state-of-the-art software tools and practices.
When we applied standard SE defect predictors to that code, to our surprise they mostly failed since:
LESSON 3: Computational science code has a different (and lower) bug rate than other kinds of software.
Standard empirical SE methods, when applied to computational science code, failed to build useful defect predictors. Such predictors could ultimately be built, but only after significantly extending the standard methods to handle such exemplary software. Hence:
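Defect predictors of the kind referenced above are typically classifiers trained on static code metrics such as size, complexity, and churn. As a rough illustration of the general idea (not of the specific methods used in this research), here is a minimal sketch in Python; the metric values and labels below are invented toy data:

```python
import math

# Toy rows of static code metrics: (lines_of_code, cyclomatic_complexity,
# recent_churn), with a 0/1 label for "a defect was later reported here".
# These numbers are fabricated purely for illustration.
X = [
    (120, 4, 2), (300, 15, 9), (80, 2, 1), (450, 22, 14),
    (60, 3, 0), (510, 30, 20), (200, 8, 3), (390, 18, 11),
]
y = [0, 1, 0, 1, 0, 1, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.01, epochs=2000):
    """Fit logistic-regression weights by plain gradient descent."""
    # Scale each feature by its maximum to keep gradients well behaved.
    maxes = [max(row[j] for row in X) for j in range(len(X[0]))]
    Xs = [[v / m for v, m in zip(row, maxes)] for row in X]
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for row, label in zip(Xs, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b)
            err = p - label
            w = [wi - lr * err * xi for wi, xi in zip(w, row)]
            b -= lr * err
    return w, b, maxes

def predict(model, row):
    """Return the predicted probability that this file is defect-prone."""
    w, b, maxes = model
    row = [v / m for v, m in zip(row, maxes)]
    return sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b)

model = train(X, y)
# A large, complex, frequently changed file should score as riskier
# than a small, simple, stable one.
print(predict(model, (500, 25, 15)) > predict(model, (70, 2, 1)))
```

The presentation's point is that when models like this are trained on conventional software and then applied to computational science code, their accuracy drops, because the underlying defect rates differ.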
LESSON 4: Computational science is an excellent testbed for the rest of the SE community to stress test their tools.
Note that the above suffers from a sampling bias (we could only examine the open source packages). But one thing is clear: the state of the practice in computational science software is much healthier, and more instructive, than is commonly believed.

Watch on YouTube

Webinar: Authorizing Access to Science Gateway Resources

January 9, 2019

Authorizing Access to Science Gateway Resources

Presented by Jim Basney of NCSA & Trusted CI, Marlon Pierce of Indiana University & SGCI, and Tom Barton of the University of Chicago & Internet2

Data use agreements, controlled-access data sets, and restricted-access scientific instruments are just a few examples of authorization challenges faced by science gateways. There are many options for authenticating science gateway users, but fewer options for implementing complex authorization policies after users log on. The three panelists for this webinar will present their perspectives and experiences with authorization solutions applicable to science gateways.

Webinar Slides

Q&A from the webinar:

  • Q: In OAuth, can the user choose which items they allow and which not from the list of access requested by the app?
    A: In general, a gateway should request only the items it needs, so the user accepts or denies them all together.
  • Q: What is a good resource for getting started with Research & Scholarship attributes? I collect these attributes for my gateway with a custom sign-up form.
    A: InCommon's Research & Scholarship info is here:
  • Q: What is the URL for the paper "Federated Identity Management for Research Collaborations"?
  • Q: Is there any sort of federation body that takes into consideration students in K-12?
    A: The Steward program was started to address this audience, but there hasn't been enough perceived need to carry the project forward. As of now, there is no federation body for K-12.
  • Q: Any tips or success paths for gateways that must deal with personally identifiable information (PII)? I know this is not directly related to security, but security plays a huge role in the overall infrastructure plan.
    A: That's a big question! Some things that come to mind: First, I hope that on your campus the CISO's office is viewed as a good enabling resource. If so, they should be able to provide in-depth guidance and assistance. Second, the Trusted CI Open Science Cyber Risk Profile can be useful to help you think through how to suitably protect the PII.
  • Q: How do I learn more about Airavata?
    A: See to subscribe to the Airavata dev list.
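The first answer above reflects how OAuth 2.0 consent usually works: the gateway lists the scopes it needs in the authorization request, and the user approves or declines that list as a whole. As a sketch of the idea using only Python's standard library — the endpoint, client ID, redirect URI, and scope names below are all placeholders, not values from any real identity provider:

```python
from urllib.parse import urlencode

# Hypothetical registration values for illustration; a real gateway
# receives these when it registers with its identity provider.
AUTHORIZE_ENDPOINT = "https://idp.example.org/oauth2/authorize"
CLIENT_ID = "my-gateway"
REDIRECT_URI = "https://gateway.example.org/callback"

def build_authorization_url(scopes):
    """Construct an OAuth 2.0 authorization-code request URL.

    The consent screen shows the user all requested scopes at once,
    and the user approves or denies them together, so a gateway
    should request only the scopes it actually needs.
    """
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),   # space-delimited per RFC 6749
        "state": "abc123",           # anti-CSRF token; generate randomly in practice
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

# Request only the minimal scopes the gateway needs.
url = build_authorization_url(["openid", "profile"])
print(url)
```

Keeping the scope list minimal both simplifies the consent decision for the user and limits what the gateway can do with the resulting access token.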

Watch on YouTube

Webinar: Your Audience Comes First: The Key to Communication and Engagement

November 7, 2018

Your Audience Comes First: The Key to Communication and Engagement

Presented by Elyse Aurbach, Public Engagement Lead, Office of Academic Innovation, University of Michigan

This was a special reprise of the Gateways 2017 Keynote. This presentation is particularly useful for people presenting at conferences or to stakeholder communities.

Connecting people through scientific or engineering gateways requires effective communication. All gateway users – researchers, educators, public audiences – must understand the purpose, utility, and significance of the gateway if it is to be successful. Therefore, establishing effective communication practices to explain the project becomes critical to developing the gateway and ensuring its longevity. Fortunately, communication skills can be learned and practiced by anyone. Developing communication efforts with goals that align with those of the target audience, using storytelling to convey clear messages, and using appropriate language and visual elements are all key skills of a successful communication strategy. Putting these skills into practice can maximize the utility and benefits of your gateway for all parties involved.

Webinar Slides

Resources mentioned during the webinar:

Watch on YouTube

Special Edition Webinar: How SGCI Helped Coastal Emergency Risks Assessment (CERA) Gateway Meet Their Goals

October 24, 2018

Special Edition Webinar: How SGCI Helped Coastal Emergency Risks Assessment (CERA) Gateway Meet Their Goals

Presented by Jason Fleming, ASGS Technology Manager, Seahorse Coastal Consulting, LLC

During a critical event like an impending or active tropical storm, emergency managers, weather forecasters, and GIS specialists seek visualizations and geographic data to evaluate the impact of this storm. The Coastal Emergency Risks Assessment (CERA) science gateway provides real-time decision support through their storm surge mapper and storm surge model. The CERA team participated in the second Science Gateway Bootcamp offered by the SGCI Incubator. Join us to hear about the CERA team's experience with the Bootcamp and how the Bootcamp led them down a path to work with usability consultants from SGCI's Incubator service area. The team is also currently working with SGCI's Extended Developer Support consultants. 

Webinar Slides

Watch on YouTube

Webinar: Amazon Web Services (AWS) Cloud for Gateways

October 10, 2018

Amazon Web Services (AWS) Cloud for Gateways

Presented by Don Schulte, AWS Technical Business Development Manager, HPC

AWS helps researchers process complex workloads by providing the cost-effective, scalable, and secure compute, storage, and database capabilities needed to accelerate time-to-science and produce science gateways. With AWS, scientists can quickly analyze massive data pipelines, store petabytes of data, and share their results with collaborators around the world, focusing on science, not servers.

About the presenter:

Don Schulte
Technical Business Development Manager, HPC

For the last 20 years, Don Schulte has worked in technical, business development, and sales roles for Dell, Cluster File Systems (developer of the Lustre filesystem), Penguin Computing, Cray, and DataDirect Networks (DDN). The common theme through all of these roles was serving university research customers in the high-performance computing (HPC) industry, as well as research data storage management. Don has designed and helped implement some of the largest university research HPC systems and research data repositories in the United States.

Access the AWS Researcher's Handbook for the Research Cloud Program:

Webinar Slides coming soon

Watch on YouTube