EU Regional School - Bockhorst Seminar
Dr. Heinrich Bockhorst - One-Sided Communication, MPI on Threads, Overlap of Communication and Computation. Old MPI topics - New Answers?
Senior Software Engineer at Intel Corporation, Germany
This talk will provide a short overview of some MPI topics that have been discussed for a long time. MPI's one-sided communication has been available for about 20 years, but it was ignored by most programmers because the available implementations showed poor performance. This is a chicken-and-egg dilemma, because the MPI developers did not spend their time on software that was not used. Another reason was that the necessary hardware was missing or too expensive. A final argument against one-sided MPI communication was the cumbersome syntax it required. The recent advances with MPI-3 may help to promote one-sided communication.
MPI on threads is the next topic. The combination of MPI and threads is defined in the MPI standard. Nowadays people are switching to hybrid MPI+threads programs instead of pure MPI. This became necessary because pure MPI does not use CPUs with up to 72 cores efficiently. Arguments against the pure MPI model are memory consumption and network congestion: many collectives do not scale well, so the rank count should be kept low. Most implementations do not support the hybrid model efficiently. Reasons for this will be discussed and a recent solution presented.
Overlap of communication and computation is the third topic. Coding examples will be presented that show how one-sided communication and threads can be used together to achieve this overlap.
The poster shows two next-neighbor exchange patterns, one with overlap and one without.
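The overlap pattern itself can be sketched in a few lines. The following Python fragment is a hypothetical, MPI-free illustration (not one of the speaker's actual examples): a background thread stands in for the one-sided halo exchange while the main thread computes on interior points; `exchange_halo` and the data are placeholders.

```python
import threading
import time

def exchange_halo(halo):
    # Stand-in for a next-neighbor communication call (e.g. one-sided
    # MPI puts); here we only simulate the network latency.
    time.sleep(0.05)
    return [v + 1 for v in halo]

def step_with_overlap(interior, halo):
    result = {}
    # Start the halo exchange in a background thread ...
    t = threading.Thread(target=lambda: result.update(halo=exchange_halo(halo)))
    t.start()
    # ... and overlap it with computation on the interior points.
    interior = [v * 2 for v in interior]
    t.join()  # wait for the exchange before using boundary data
    return interior, result["halo"]

print(step_with_overlap([1, 2, 3], [10, 20]))  # ([2, 4, 6], [11, 21])
```

Without the thread, the same step would pay the full exchange latency before any interior work starts; with it, communication time is hidden behind computation.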
SSD - Peherstorfer Seminar
Prof. Dr. Benjamin Peherstorfer - Learning Context-aware Reduced Models for Multifidelity Computations
Courant Institute of Mathematical Sciences, New York University, USA
Traditional model reduction constructs reduced models with the aim of replacing expensive, high-fidelity models to speed up computations. However, reduced and high-fidelity models are increasingly used together in multifidelity methods, which means that the purpose of reduced models becomes supporting computations with the high-fidelity models rather than approximating and replacing them. In this presentation, we propose context-aware reduced models that are explicitly constructed to be used together with high-fidelity models in multifidelity computations. In the first part of the presentation, we introduce the adaptive multifidelity Monte Carlo (AMFMC) method that constructs reduced models that optimally support the multifidelity estimation of statistics of high-fidelity model outputs. Our analysis shows that our context-aware reduced models optimally reduce the runtime of multifidelity estimation, even though they are less accurate in the sense of traditional model reduction. In the second part, we present a multifidelity approach to dynamically couple reduced models with high-fidelity models, where the reduced models are adapted in a context-aware sense with sparse data from the high-fidelity model. Our numerical examples demonstrate that the dynamic coupling is particularly beneficial in the case of convection-dominated problems, where our context-aware approach achieves significant speedups, whereas traditional reduced models are even more costly to evaluate than the high-fidelity models.
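The basic two-fidelity Monte Carlo estimator underlying such multifidelity methods can be sketched as a control-variate combination of a few expensive high-fidelity samples with many cheap low-fidelity samples. The sketch below is generic and uses hypothetical toy models; it is not the AMFMC method of the talk, which additionally adapts the reduced model and optimizes the sample allocation.

```python
import random
import statistics

def mfmc_estimate(hf, lf, m_hf, m_lf, alpha, rng):
    # Draw inputs; the first m_hf samples are evaluated with both models.
    xs = [rng.random() for _ in range(m_lf)]
    y_hf = [hf(x) for x in xs[:m_hf]]
    y_lf = [lf(x) for x in xs]
    # Control-variate combination: high-fidelity mean plus a correction
    # built from the cheap model evaluated on many more samples.
    return (statistics.mean(y_hf)
            + alpha * (statistics.mean(y_lf)
                       - statistics.mean(y_lf[:m_hf])))

# Hypothetical models: lf is a cheap, biased surrogate of hf.
hf = lambda x: x ** 2
lf = lambda x: x ** 2 + 0.1 * x   # correlated with hf but inexact

est = mfmc_estimate(hf, lf, m_hf=50, m_lf=5000, alpha=1.0,
                    rng=random.Random(0))
print(round(est, 2))  # close to E[x^2] = 1/3 for x ~ U(0, 1)
```

The estimator stays unbiased for the high-fidelity statistic regardless of the low-fidelity bias; the surrogate only reduces variance, which is why a "less accurate" reduced model can still be the runtime-optimal choice.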
CHARLEMAGNE DISTINGUISHED LECTURE SERIES - Ghattas Seminar
Prof. Omar Ghattas, Ph.D. - Large-Scale Bayesian Inversion with Applications to the Flow of the Antarctic Ice Sheet
Institute for Computational Engineering and Sciences (ICES), The University of Texas at Austin, USA
Many physical systems are characterized by complex nonlinear behavior coupling multiple physical processes over a wide range of length and time scales. Mathematical and computational models of these systems often contain numerous uncertain parameters, making high-reliability predictive modeling a challenge. Rapidly expanding volumes of observational data, along with tremendous increases in HPC capability, present opportunities to reduce these uncertainties via solution of large-scale inverse problems. Bayesian inference provides a systematic framework for inferring model parameters with associated uncertainties from (possibly noisy) data and any prior information. However, solution of Bayesian inverse problems via conventional Markov chain Monte Carlo (MCMC) methods remains prohibitive for expensive models and high-dimensional parameterizations, such as those resulting from discretization of infinite-dimensional problems with uncertain fields. Despite the large size of observational datasets, they typically inform only low-dimensional manifolds in parameter space, due to the ill-posedness of the inverse problem. Based on this property, we design scalable Bayesian inversion algorithms that adapt to the structure and geometry of the posterior probability, thereby exploiting an effectively reduced parameter dimension and making Bayesian inference tractable for some large-scale, high-dimensional inverse problems. We discuss an inverse problem for the flow of the Antarctic ice sheet, which has been solved for as many as one million uncertain parameters at a cost (measured in forward ice sheet flow solves) that is independent of both the parameter and data dimensions. This work is joint with Tobin Isaac, Noemi Petra, and Georg Stadler.
SSD - Frison Seminar
Dr. Gianluca Frison - BLASFEO and its Use in Structure-Exploiting Algorithms for Optimal Control
Systems Control and Optimization Laboratory, University of Freiburg
BLASFEO is a newly developed dense linear algebra library that differentiates itself by being optimized for the rather small matrix sizes (up to a couple of hundred) typically encountered in embedded optimization and control. In this talk, I will introduce the main concepts behind the implementation of the library and show the results of benchmarks against state-of-the-art BLAS libraries (e.g. MKL, OpenBLAS, BLIS) and code-generated or specialized linear algebra (e.g. libxsmm, Eigen). Subsequently, I will introduce the embedded optimization framework and show how combining structure-exploiting algorithms with a high-performance dense linear algebra library like BLASFEO yields very fast optimization algorithms, outperforming current state-of-the-art software based on code generation.
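To give a flavor of the structure-exploiting algorithms in question: the backward Riccati recursion for a discrete-time LQR problem reduces each stage of the optimal control problem to a handful of operations on small dense matrices, exactly the workload BLASFEO targets. The sketch below uses numpy in place of BLASFEO-like kernels, with hypothetical system matrices.

```python
import numpy as np

# Discrete-time LQR data (hypothetical small system: nx=4, nu=2),
# representative of the small dense matrices in embedded control.
nx, nu, N = 4, 2, 50
rng = np.random.default_rng(1)
A = 0.9 * np.eye(nx) + 0.05 * rng.standard_normal((nx, nx))
B = rng.standard_normal((nx, nu))
Q, R = np.eye(nx), np.eye(nu)

# Backward Riccati recursion: each stage is a few small matrix
# products plus one small factorization/solve.
P = Q.copy()
for _ in range(N):
    BtP = B.T @ P
    K = np.linalg.solve(R + BtP @ B, BtP @ A)   # feedback gain
    P = Q + A.T @ P @ (A - B @ K)               # cost-to-go update

print(np.allclose(P, P.T, atol=1e-8))  # True: cost-to-go stays symmetric
```

Because the matrices are this small, performance is dominated by call overhead, packing, and cache behavior rather than asymptotic flop rates, which is why a library tuned for small sizes can beat general-purpose BLAS here.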
SSD - Vedula Seminar
Vijay Vedula, Ph.D. - Ventricular Hemodynamics in Disease and Development
Department of Pediatrics, Stanford University, USA
Despite continuous advancements in medical technologies and imaging, cardiovascular disease remains the leading cause of mortality worldwide. Computational modeling provides a low-cost, non-invasive modality that complements animal testing and routine clinical care. Simulation-based diagnosis has demonstrated a growing impact in the clinic, ultimately leading to improved decision-making and patient outcomes. While this translation has been achieved successfully in vascular flow applications, cardiac hemodynamics (blood flow in the heart chambers) has remained distant, partly due to the significant cost and complexity involved in modeling the underlying blood dynamics. The challenges include high Reynolds number flows, moving boundaries and fluid-structure interaction effects, in addition to complex multiphysics interactions and valve dynamics. In this talk, I will present a robust and efficient framework to perform patient-specific modeling of ventricular hemodynamics, with examples from single-ventricle physiology (children born with a single 'functional' ventricle). I will then present the utility of the framework in embryonic cardiac flow modeling to understand shear-regulated mechanotransduction during cardiac morphogenesis. Finally, I will discuss future directions from both computational modeling and clinical translation perspectives.
* funded by the Theodore von Kármán Fellowship