Infusing Fundamental Competencies of Computational Science to the General Undergraduate Curriculum
Ana C. González-Ríos
Pub Date: 2021-12-01 | DOI: 10.22369/issn.2153-4136/12/3/3
Student Simulations of Local Wildfires in a Liberal Arts Geography Course
T. Wetherbee, Elizabeth A. Jones
Pub Date: 2021-12-01 | DOI: 10.22369/issn.2153-4136/12/3/1
Performance Evaluation of Monte Carlo Based Ray Tracer
Ayobami Adewale
Pub Date: 2021-01-01 | DOI: 10.22369/ISSN.2153-4136/12/1/4
The main objective of computer graphics is to effectively depict an image of a virtual scene in its realistic form within a reasonable amount of time. This paper discusses two different ray tracing techniques and evaluates the performance of serial and parallel implementations of ray tracing, which in its serial form is known to be computationally intensive and costly on earlier computers. The parallel implementation was achieved using OpenMP with C++, and the maximum speedup was ten times that of the serial implementation. The experiment in this paper can be used to teach high-performance computing students the benefits of multi-threading and parallel programming for computationally intensive algorithms.
Teaching HPC Systems Administrators
Alex Younts, Stephen Lien Harrell
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/16
The ability to grow and teach systems professionals relies on having the capacity to let students interact with supercomputers at levels not given to normal users. In this paper we describe the teaching methods and hardware platforms used by Purdue Research Computing to train undergraduates for HPC systems-facing roles. From Raspberry Pi clusters to the LittleFe project, previous work has focused on providing miniature hardware platforms and developing curricula for teaching. Recently, we have developed and employed a method using virtual machines to reach a wider audience, created best practices, and removed barriers to approaching coursework. This paper outlines the system we have designed, expands on its benefits and drawbacks compared with hardware systems, and discusses the failures and successes we have had teaching HPC systems administrators.
Blue Waters Workforce Development: Delivering National Scale HPC Workforce Development
Jennifer Houchins, Scott A. Lathrop, R. Panoff, Aaron Weeden
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/5
There are numerous reports documenting the critical need for high performance computing infrastructure to advance discovery in all fields of study. The Blue Waters project was funded by the National Science Foundation to address this need and provide leading edge petascale computing resources to advance research and scholarship. There are also numerous reports that identify the lack of an adequate workforce capable of utilizing and advancing petascale class computing infrastructure well into the future. From the outset, the Blue Waters project has responded to this critical need by conducting national scale workforce development activities to prepare a larger and more diverse workforce. This paper describes those activities as exemplars for adoption and replication by the community.
Self-paced Learning in HPC Lab Courses
C. Terboven, Julian Miller, Sandra Wienke, Matthias S. Müller
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/10
In a software lab, groups of students develop parallel code using modern tools, document the results, and present their solutions. The learning objectives include the foundations of High-Performance Computing (HPC), such as an understanding of modern architectures, the development of parallel programming skills, and course-specific topics like accelerator programming or cluster set-up. In order to execute the labs successfully with limited personnel resources and still provide students with access to world-class HPC architectures, we developed a set of concepts to motivate students and to track their progress. These include the learning status survey and the developer diary, which are presented in this work. We also report on our experiences with using innovative teaching concepts, such as competition among the groups, to incentivize students to optimize their codes. Our concepts enable us to track the effectiveness of our labs and to steer them for increasingly large and diverse groups of students. We conclude that software labs are effective in adding practical experience to HPC education. Our approach of handing out open tasks and leaving creative freedom in implementing the solutions enables the students to self-pace their learning process and to vary their investment of effort during the semester. Our effort and progress tracking ensures that the extensive learning objectives are achieved and enables our research on HPC programming productivity.
Introducing Novices to Scientific Parallel Computing
Stephen Lien Harrell, Betsy Hillery, Xiao Zhu
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/14
HPC and scientific computing are integral tools for sustaining the growth of scientific research. Additionally, educating future domain scientists and research-focused IT staff about the use of computation to support research is as important as capital expenditures on new resources. The aim of this paper is to describe the parallel computing portion of Purdue University's HPC seminar series, which is used as a tool to introduce students from many non-traditional disciplines to scientific, parallel, and high-performance computing.
FreeCompilerCamp.org: Training for OpenMP Compiler Development from Cloud
Anjia Wang, Alok Mishra, C. Liao, Yonghong Yan, B. Chapman
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/9
OpenMP is one of the most popular programming models for exploiting node-level parallelism on supercomputers. Many researchers are interested in developing OpenMP compilers or extending the existing standard with new capabilities. However, there is a lack of training resources for researchers who are involved in compiler and language development around OpenMP, making the learning curve in this area steep. In this paper, we introduce an ongoing effort, FreeCompilerCamp.org, a free and open online learning platform aimed at training researchers to quickly develop OpenMP compilers. The platform is built on top of Play-With-Docker, a Docker playground where users conduct experiments in an online terminal sandbox. It provides a live training website, hosted in the cloud, so anyone with internet access and a web browser can take the training. It also enables developers with relevant skills to contribute new tutorials. The entire training system is open source and can be deployed on a private server, workstation, or even a laptop for personal use. We have created some initial tutorials that train users to extend the Clang/LLVM and ROSE compilers to support new OpenMP features. We welcome anyone to try out our system, give us feedback, contribute new training courses, or enhance the training platform to make it an effective learning resource for the HPC community.
Lessons Learned from the NASA-UVA Summer School and Internship Program
K. Holcomb, J. Huband, Tsengdar J. Lee
Pub Date: 2020-01-01 | DOI: 10.22369/issn.2153-4136/11/1/1
From 2013 to 2018, the University of Virginia operated a summer school and internship program in partnership with NASA. The goal was to improve the software skills of students in environmental and earth sciences and to introduce them to high-performance computing. In this paper, we describe the program and discuss its evolution in response to student needs and changes in the high-performance computing landscape. The future direction for the summer school and plans for the materials developed are also discussed.