The 5th ACM SIGPLAN International Workshop on Artificial Intelligence and Empirical Methods for Software Engineering and Parallel Computing Systems

The purpose of this workshop is to provide a stable forum for researchers and practitioners dealing with the compelling challenges of the software development life cycle on modern parallel platforms and HPC systems. The increased complexity of parallel applications on modern parallel platforms (e.g. multicore/manycore and distributed/hybrid systems) demands deeper insight into the engineering of parallel software for the underlying parallel systems. Rapidly emerging artificial intelligence technologies and their application to software engineering and parallel computing systems are promising approaches to these challenges, alongside traditional empirical and experimental methods; the acronym AI-SEPS reflects a change from previous editions, with the emphasis placed on this trend. We aim to advance the state of the art in all aspects of software engineering for parallel computing systems, including requirements engineering and software specification; design and implementation; program analysis; performance analysis, profiling and tuning; and testing and debugging.

Proceedings are available online.

Accepted Papers

  • Deep Learning at Scale (Keynote)
  • Panel discussion
  • PIRA: Performance Instrumentation Refinement Automation
  • PyGA: A Python to FPGA compiler prototype

Call for Papers

Submissions of Position papers (max. 2 pages) or Abstract papers (max. 800 words) remain open until September 28th. Position or Abstract papers can include industrial and practical experiences, tool presentations/demonstrations, early results and novel ideas without a comprehensive evaluation, and preliminary or exploratory work with unconventional approaches or wild and crazy ideas. They provide an opportunity for presentation at the workshop but will not be included in the formal ACM Digital Library proceedings.

The goal of the workshop is to provide a stimulating environment where ideas, experiences and topics relevant to parallel software engineering and software analytics can be shared and exchanged among researchers and practitioners in the fields of systems, programming, languages and software. The intention is to initiate collaborations focused on solving the challenges raised by ongoing research in these areas. Through Q&A sessions, presenters have the opportunity to receive feedback and opinions from other domain experts and to discuss obstacles and promising approaches in current research. Both authors and attendees can discover new ideas and directions for parallel programming research.

Specific topics of interest include, but are not limited to:

  • AI and machine learning for parallel programming and high-performance computing
  • Software analytics for parallel programs
  • Tools and environments for all aspects of engineering parallel software and their enhancement through AI-related technologies
  • High-performance deep learning
  • Design of parallel programs and parallel design patterns
  • Software development process and requirement engineering of parallel software
  • Parallel software architectures
  • Performance modeling techniques on parallel systems
  • Profiling and event trace analysis
  • Refactoring and reengineering
  • Performance analysis and auto-tuning
  • Energy-efficient parallel computing
  • Testing and debugging of parallel applications
  • Case studies and experience reports

The workshop will be a full-day, SIGPLAN-approved event. We welcome original, unpublished regular papers (10 pages) and short papers (4 pages) on current research, including industrial and practical experiences, tool presentations/demonstrations, early results and novel ideas without a comprehensive evaluation, and preliminary or exploratory work with unconventional approaches or wild and crazy ideas. Accepted papers will be published in the proceedings in the ACM Digital Library.

Contact the AI-SEPS 2018 Organizing Committee at ai-seps-2018@googlegroups.com with any questions or concerns.


Tue 6 Nov

Displayed time zone: Guadalajara, Mexico City, Monterrey

08:00 - 10:00: AI-SEPS session at Cabot
Chair(s): Ali Jannesari (Iowa State University), Yukinori Sato (Toyohashi University of Technology)

08:00 (50m) Talk: Deep Learning at Scale (Keynote)
Prabhat (NERSC, Berkeley Lab)

08:50 (25m) Talk: PIRA: Performance Instrumentation Refinement Automation
Jan-Patrick Lehr (Graduate School of Computational Engineering, TU Darmstadt), Alexander Hück (Institute for Scientific Computing, TU Darmstadt), Christian Bischof (Scientific Computing, TU Darmstadt)

09:15 (15m) Talk: PyGA: A Python to FPGA compiler prototype
Yohann Uguen (Univ Lyon, INSA Lyon, Inria, CITI), Eric Petit (Intel, France)

09:30 (30m) Panel discussion
Panelists: Yukinori Sato (Toyohashi University of Technology), Ali Jannesari (Iowa State University), Shigeru Chiba (The University of Tokyo)

Keynote

Title: Deep Learning at Scale

Speaker: Prabhat (Data and Analytics Group Lead, NERSC, Berkeley Lab)


Presentation slides are available online (PDF).


Abstract: This talk will review NERSC’s efforts in scaling Deep Learning on the largest CPU- and GPU-based HPC systems in the DOE complex. Motivated by challenging scientific problems in high-energy physics, cosmology and climate science, we have developed 2D and 3D convolutional architectures to solve a range of pattern classification, regression and segmentation problems. These projects have produced a number of first-time results: scaling Caffe to 9,600 Cori/KNL nodes (SC’17), obtaining 15 PF performance; scaling TensorFlow to 8,192 Cori/KNL nodes, obtaining 3.5 PF performance; and scaling TensorFlow to 4,560 Summit/Volta nodes, obtaining 1 ExaOp performance. The talk will review lessons learnt from these projects and outline future challenges in Deep Learning for Science.
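As background for how such multi-node scaling is typically set up, the sketch below illustrates synchronous data-parallel training with Horovod and tf.keras: every worker holds a replica of the model and gradients are averaged across ranks each step. It is a minimal, hypothetical example for orientation only; the toy model, the MNIST dataset and the choice of Horovod are assumptions of this sketch, not details taken from the keynote.

    # Minimal, illustrative sketch of synchronous data-parallel training with
    # Horovod + tf.keras. The toy model and dataset are placeholders; this is
    # not code from the keynote projects.
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per worker, launched e.g. via mpirun/srun

    # Shard the data so each rank trains on a different subset.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[hvd.rank()::hvd.size()].astype("float32") / 255.0
    y_train = y_train[hvd.rank()::hvd.size()]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Scale the learning rate with the worker count and wrap the optimizer so
    # gradients are averaged (allreduced) across all ranks every step.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt, metrics=["accuracy"])

    callbacks = [
        # Ensure all workers start from identical initial weights.
        hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    ]
    model.fit(x_train, y_train, batch_size=64, epochs=1,
              callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)

Each worker runs the same script (for example, one MPI rank per node or per GPU); the allreduce-based gradient averaging shown here is the general pattern that large-scale synchronous training runs build on.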


Bio: Prabhat leads the Data and Analytics Services Group at NERSC, Berkeley Lab’s supercomputing center. In this role, he is responsible for the data software stack and services for NERSC’s 7,000+ users. Prabhat is a pioneer in the application of Deep Learning, Machine Learning and statistical methods to scientific applications. He has a broad set of interests spanning High Performance Computing, Data Management and Visualization. Prabhat holds degrees in computer science from IIT Delhi and Brown University and is currently pursuing a PhD in Earth and Planetary Sciences at UC Berkeley.

Panel: The Future of AI-Inspired Methods for Software Engineering and Parallel Computing Systems


Panel organizer:
Yukinori Sato (Toyohashi University of Technology)
Panelists:
Shigeru Chiba (The University of Tokyo), Ali Jannesari (Iowa State University)

Rapidly emerging AI-related technologies and their application to software engineering and parallel computing systems are promising approaches to the challenges of the software development life cycle on modern parallel platforms and HPC systems. Together with the traditional empirical and knowledge-based methods developed so far, this trend is forming a strong ecosystem for all aspects of performance-centric parallel software. In this panel, we share current practices for such approaches and explore their future roles.


Schedule

9:30-9:35 Introduction by the panel organizer

9:35-9:50 Position talks (5 minutes each):

  • Ali Jannesari, “DeepRace: Finding Data Race Bugs via Deep Learning”
  • Shigeru Chiba, “Capturing a programming-language grammar”
  • Yukinori Sato, “Synthesizing performance tools by codesign of deep learning and empirical methods”

9:50-10:00 Interactive discussion