# 2019

## Course 1 - Prof. Torsten Hoefler - MPI Remote Memory Access Programming and Scientific Benchmarking of Parallel Codes

We will provide an overview of advanced MPI programming techniques. Specifically, we will focus on MPI-3's new Remote Memory Access (RMA) programming and an implementation thereof. We will discuss how to utilize MPI-3 RMA in modern applications, as well as issues in large-scale implementation and deployment. The lecture will then continue with a small number of other advanced MPI usage scenarios that every scientific computing researcher should know. Finally, we will discuss how to benchmark parallel applications in a scientifically rigorous way. This turns out to be surprisingly difficult, and the state of the art is suboptimal. We will present twelve simple rules that can be used as guidelines for good scientific practice when it comes to measuring and reporting performance results.
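
As a taste of rigorous performance reporting: run-time measurements are typically right-skewed, so summarizing them by the arithmetic mean can mislead. One common recommendation is to report the median together with a nonparametric confidence interval from order statistics. A minimal sketch with synthetic timing data (the specific rules from the lecture are not reproduced here):

```python
import math
import random
import statistics

random.seed(0)
# Synthetic, right-skewed "run times" standing in for repeated measurements
timings = [1.0 + random.expovariate(5) for _ in range(100)]

med = statistics.median(timings)

# Nonparametric ~95% confidence interval for the median via order statistics
n = len(timings)
s = sorted(timings)
lo = max(0, int(math.floor(n / 2 - 1.96 * math.sqrt(n) / 2)) - 1)
hi = min(n - 1, int(math.ceil(n / 2 + 1.96 * math.sqrt(n) / 2)))
ci = (s[lo], s[hi])
```

The interval makes no normality assumption about the timing distribution, which is exactly why it is preferred for skewed benchmark data.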

# 2018

## Course 9 - Prof. Antonio Huerta, Ph.D. - Low and High-Order Approximations of Parameterized Engineering Problems in Computational Solid and Fluid Mechanics

Finite volume and finite element methods are well-established computational frameworks for simulating complex engineering problems in solid and fluid mechanics. Efficient and reliable implementations of these techniques are available in several commercial and open-source software packages. Nonetheless, parametric studies and real-time simulations still represent a major challenge for today’s industry, demanding the development of fast and accurate techniques to tackle such problems.

In the first part, an overview of recent advances on modern hybrid discretization approaches, namely the face-centered finite volume (FCFV) and the hybridizable discontinuous Galerkin (HDG) methods, is presented. The former is an efficient low-order approach that has been shown to be extremely robust to mesh distortion and stretching, which are usually responsible for the degradation of classical finite volume solutions [R. Sevilla, M. Giacomini, and A. Huerta. “A face-centred finite volume method for second-order elliptic problems” Int. J. Numer. Methods Eng. 115(8), pp. 986-1014 (2018). R. Sevilla, M. Giacomini, and A. Huerta. “A locking-free face-centred finite volume (FCFV) method for linear elasticity” arXiv:1806.07500 (2018)]. The latter is a high-order strategy originally proposed in [B. Cockburn, J. Gopalakrishnan, and R. Lazarov. “Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems" SIAM J. Numer. Anal. 47(2):1319–1365 (2009)]. Recently, an alternative high-order HDG formulation allowing the pointwise fulfillment of the conservation of angular momentum has been proposed. This aspect is crucial in the approximation of problems in computational solid and fluid mechanics in which quantities of engineering interest (e.g. compliance and aeronautical forces) have to be evaluated starting from the stress tensor [R. Sevilla, M. Giacomini, A. Karkoulias, and A. Huerta. “A superconvergent hybridisable discontinuous Galerkin method for linear elasticity” Int. J. Numer. Methods Eng. 116(2), pp. 91-116 (2018). M. Giacomini, A. Karkoulias, R. Sevilla, and A. Huerta. “A superconvergent HDG method for Stokes flow with strongly enforced symmetry of the stress tensor” arXiv:1802.09394 (2018)].

In the second part, the proper generalized decomposition (PGD) is employed to devise efficient separated representations of the solution of parameterized engineering problems. The resulting PGD-based computational vademecums allow the fast evaluation of solutions involving user-supplied data, such as boundary conditions and geometrical configurations of the domain.

## Course 8 - Prof. Dr. Thomas Pock - Variational Methods for Computer Vision: Modeling, Numerical Solution and Learning

Variational methods (also known as energy minimization methods) are among the most flexible methods for solving inverse problems. The idea is to set up an energy functional whose low energy states correspond to physically plausible solutions of the problem. Hence, computing the solution of a problem is formulated as an optimization problem. In this course, you will learn about variational methods for solving classical computer vision problems such as image restoration, image segmentation, stereo and motion estimation. You will learn about both the basic modeling aspects (different regularization terms and data fitting terms) as well as numerical optimization algorithms to solve the models. Moreover, you will learn about functional lifting, which is a technique whose aim is to reformulate a hard problem (usually due to non-convexity) in a higher dimensional space, where the problem becomes convex. Finally, you will also learn about our recent activities to improve variational models by means of machine learning techniques.
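
To make the energy-minimization idea concrete, here is a minimal sketch (not taken from the course material) of 1D total-variation denoising: the energy is a data-fitting term plus a smoothed TV regularizer, minimized by plain gradient descent. The signal, regularization weight, smoothing parameter, and step size are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(n)

# Energy: E(u) = 0.5 * ||u - noisy||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps)
lam, eps, step = 0.3, 1e-2, 0.05
u = noisy.copy()
for _ in range(500):
    du = np.diff(u)
    w = du / np.sqrt(du ** 2 + eps)                  # derivative of smoothed |du|
    tv_grad = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
    u -= step * ((u - noisy) + lam * tv_grad)        # gradient-descent update
```

Low-energy states are "physically plausible" here in the sense that the jump survives while the noise is smoothed away, which is exactly the behavior TV regularization is chosen for.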

## Course 6 - Prof. Dr. Georg Pingen - Introduction to Topology Optimization for Fluids

The short course will provide a practical introduction to topology optimization for fluids. Attendees will be provided with a functional MATLAB based flow topology optimization algorithm using the lattice Boltzmann method and a sequential convex programming (SCP) based optimizer. The focus of the short course will be on fundamental aspects of flow topology optimization such as boundary representations and the adjoint sensitivity analysis. The fundamentals will be presented using the problem of drag reduction for an object placed in a low Reynolds number flow. Attendees will be encouraged to experiment with other problems following the short course. Further, we will consider the mathematical formulation of the adjoint sensitivity analysis and possible alternatives for its solution. To conclude, we will briefly discuss current challenges in flow topology optimization.
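
For orientation, the lattice Boltzmann method mentioned above evolves particle distribution functions on a discrete velocity set through collide-and-stream steps. A minimal D2Q9 BGK sketch on a periodic domain (this is a generic illustration, not the course's MATLAB code; grid size, relaxation time, and initial velocity are illustrative):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    # Second-order equilibrium distribution for each of the 9 directions
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

nx = ny = 16
rho = np.ones((nx, ny))
ux = 0.05 * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

tau = 0.8                                       # BGK relaxation time
for _ in range(10):
    rho = f.sum(axis=0)                         # macroscopic moments
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau  # collision
    for k in range(9):                          # streaming, periodic wrap
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)
```

Total mass is conserved exactly by both the collision and the streaming step, which is a convenient sanity check when such a solver is embedded in an optimization loop.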

## Course 1 - Prof. Nicholas Higham, Ph.D. - Multiprecision Algorithms

- IEEE standard arithmetic and availability in hardware and software. Motivation for low precision from applications, including machine learning. Applications requiring high precision. Simulating low precision for testing purposes. Software for high precision. Challenges of implementing algorithms in low precision.
- Basics of rounding error analysis. Examples of error analyses of algorithms, focusing on issues relating to low precision.
- Solving linear systems using mixed precision: iterative refinement, hybrid direct-iterative methods. Multiprecision algorithms for matrix functions, focusing on the matrix logarithm.

# 2017

## Course 6 - Prof. Dr. Raul F. Tempone - On Monte Carlo and Multilevel Monte Carlo

## Course 10 - Prof. Dr. Surya Kalidindi - Rigorous Quantification of the Hierarchical Material Structure in a Statistical Framework

## Course 3 - Prof. Dr. Hermann Ney - Human Language Technology and Machine Learning: From Bayes Decision Theory to Deep Learning

Spoken and written language and the processing of language are considered to be inherently human capabilities. With the advent of computing machinery, automatic language processing systems became one of the cornerstone goals of artificial intelligence. Typical tasks involve the recognition and understanding of speech, the recognition of text images and the translation between languages. The most successful approaches to building automatic systems to date are based on the idea that a computer learns from examples (possibly very large amounts) and uses plausibility scores rather than externally provided categorical rules. Such approaches are based on statistical decision theory and machine learning. The last 40 years have seen dramatic progress in machine learning for human language technology. This lecture will present a unifying view of the underlying statistical methods, including recent developments in deep learning and artificial neural networks.
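
The statistical decision theory referenced here rests on Bayes' decision rule: choose the class that maximizes the posterior probability, i.e. the product of prior and class-conditional likelihood. A toy sketch with entirely invented numbers (a one-word "spam vs. ham" decision):

```python
priors = {"spam": 0.4, "ham": 0.6}          # p(class), invented
likelihood = {                              # p(word | class), invented toy model
    "spam": {"offer": 0.5, "meeting": 0.1},
    "ham": {"offer": 0.1, "meeting": 0.5},
}

def decide(word):
    # Bayes decision rule: argmax_c p(c) * p(x | c) minimizes error probability
    return max(priors, key=lambda c: priors[c] * likelihood[c][word])
```

Real systems replace the toy tables with learned models (today, deep networks), but the decision rule itself is unchanged.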

## Course 2 - Prof. Dr. Simon R. Phillpot - Classical Interatomic Potentials for Molecular Dynamics Simulations: Recent Advances and Challenges

The remarkable increase in computing power and rapid advances in simulation methodologies over the last three decades have led to computer simulation becoming a third approach, complementary to experiment and theory, to exploring materials systems. Molecular Dynamics (MD) simulation is the dominant method for the simulation of complex microstructures and dynamical effects with atomic-scale resolution.

This presentation offers a brief introduction to MD methods and then focuses on the interatomic potential. The interatomic potential is a mathematical description of the interactions among the atoms and ions in the system and thus defines the material being simulated. To provide a foundation, we review some of the standard interatomic potentials for metals, ceramics and covalently-bonded materials. We then focus on recent developments in the area of reactive potentials that can describe materials systems in which metallic, ionic and covalent bonding coexist. Specifically, we present details of the Charge Optimized Many Body (COMB) potential formalism and its applications to a number of materials systems. Finally, we address the issue of the development of interatomic potentials. Currently, this is a black art: it can take a skilled researcher many months to develop a potential for a single system. We present a new paradigm for the rational design of interatomic potentials that will greatly accelerate their development and offer a number of other advantages over standard approaches.

# 2016

## Course 6 - Prof. Dr. Anupam Saxena - Systematic Synthesis of Large Displacement Compliant Mechanisms: A Structural Optimization Approach

Topology optimization entails determining the optimal material layout of a continuum within a specified design domain for a desired set of objectives. Irrespective of the parameterization used, the material state of a point/sub-region/cell/finite-element should ideally toggle between ‘solid’ and ‘void’ states, eventually leading to a well-defined optimized solution. In other words, material assignment must, ideally, be discrete. Two parameterization schemes will be described – line element and honeycomb tessellation, both in the context of synthesizing large displacement compliant continua. Such continua could be monolithic (single-piece), partially compliant, or some of their members could physically interact via ‘contact.’

With line element parameterization, topology-size decoupling will be emphasized: it helps pose topology, shape and size optimization independently of each other. Such a framework also helps in introducing, say, rigid members and pin joints within a network of flexible frames, leading to the possibility of synthesizing partially compliant continua. As discrete material assignment is strictly adhered to, notwithstanding efficiency, the choice of a stochastic optimization approach also helps in rejecting ‘non-convergent’ (from the perspective of large displacement analysis) intermediate continua, which would otherwise tend to impede a gradient-based optimization algorithm. Co-rotational beam theory to model frames undergoing geometrically large displacements will be briefed, followed by a Fourier Shape Descriptors based objective and a random mutation hill climber algorithm to synthesize ‘path-generating’ continua exemplifying large displacement compliant mechanisms.

In continuum parameterization, traditionally, each sub-region is represented by a single (or a set of) Lagrangian (e.g., triangular/rectangular) type finite element(s). With such parameterization, however, numerous connectivity singularities such as checkerboards, point flexures, layering/islanding, right-angled notches and ‘blurred’ boundaries are observed unless ‘additional’ filtering-type methods are used. The use of honeycomb tessellation will be described. As hexagonal cells provide edge-connectivity between any two contiguous cells, most geometric singularities are eliminated naturally. However, numerous ‘V’ notches persist at continuum boundaries; these are subdued via a winged-edge data structure based boundary resolution scheme. Consequently, many hexagonal cells get morphed into concave cells. Finite element modeling of each cell is therefore accomplished using Mean-Value Coordinate based shape functions that can cater to any generic polygonal shape. Overlaying negative circular masks are used to assign material states to sub-regions: their radii and center coordinates are varied so that material is removed from sub-regions lying beneath the masks, and the remnant, unexposed sub-regions constitute a realizable continuum.

Honeycomb tessellation, boundary smoothing and Mean Value Coordinates based analysis all pave the way for synthesizing Contact-aided Compliant Mechanisms (CCMs), with suitable modifications in the topology optimization formulation which will be highlighted. The augmented Lagrangian method, along with active-set constraints, has been used for contact analysis. Synthesis of large displacement CCMs is exemplified via path generation. Self-contact between continuum sub-regions undergoing large deformation can also occur, a feature such continua could use to perform special tasks, such as attaining negative stiffness and static balancing. Contact analysis is extended to cater to deforming bodies. Numerous examples will be presented to showcase a variety of design features and to highlight the ability of Contact-aided Compliant Mechanisms to accomplish complex kinematic tasks.

## Course 7 - Prof. Dr. Luciano Colombo - Introduction to Nanoscale Thermal Transport

I will review the available theoretical schemes to predict the lattice thermal conductivity in a solid state material through atomistic simulations. Merits and limitations of each method will be critically addressed and discussed with reference to actual systems of current interest in nano-science/-technology.

I will discuss thermal transport in graphene, here selected as the prototypical 2D material of paramount relevance to nanotechnology. In particular, I will address the following issues: (i) diverging vs. finite intrinsic thermal conductivity; (ii) suppression of thermal conductivity by defect engineering; (iii) thermal current rectification properties.

## Course 8.1 - Prof. Dr. Thomas J.R. Hughes - Isogeometric Analysis

Last October marked the tenth anniversary of the appearance of the first paper [1] describing a vision of how to address a major problem in Computer Aided Engineering (CAE). The motivation was as follows: Designs are encapsulated in Computer Aided Design (CAD) systems. Simulation is performed in Finite Element Analysis (FEA) programs. FEA requires the conversion of CAD descriptions to analysis-suitable formats from which finite element meshes can be developed. The conversion process involves many steps, is tedious and labor intensive, and is the major bottleneck in the engineering design-through-analysis process, accounting for more than 80% of overall analysis time; it remains an enormous impediment to the efficiency of the overall engineering product development cycle. The approach taken in [1] was given the name Isogeometric Analysis. Since its inception it has become a focus of research within both the fields of FEA and CAD and is rapidly becoming a mainstream analysis methodology and a new paradigm for geometric design [2]. The key concept utilized in the technical approach is the development of a new foundation for FEA, based on rich geometric descriptions originating in CAD, resulting in a single geometric model that serves as a basis for both design and analysis. In this short course I will introduce Isogeometric Analysis, describe some of the basic tools and methods, identify a few areas of current intense activity, and point out areas where problems remain open, representing opportunities for future research [3].

REFERENCES

[1] T.J.R. Hughes, J.A. Cottrell and Y. Bazilevs, Isogeometric Analysis: CAD, Finite Elements, NURBS, Exact Geometry and Mesh Refinement, Computer Methods in Applied Mechanics and Engineering, 194, (1 October 2005), 4135-4195.

[2] J.A. Cottrell, T.J.R. Hughes and Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, Wiley, Chichester, U.K., 2009.

[3] Isogeometric Analysis Special Issue (eds. T.J.R. Hughes, J.T. Oden and M. Papadrakakis), Computer Methods in Applied Mechanics and Engineering, 284, (1 February 2015), 1-1182.
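
Isogeometric analysis builds the analysis space from the CAD geometry's own basis, typically B-splines or NURBS. A minimal sketch of the Cox-de Boor recursion for B-spline basis functions (the knot vector and degree are illustrative); note the basis functions are non-negative and form a partition of unity, properties the method relies on:

```python
def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree p
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

knots = [0, 0, 0, 0.5, 1, 1, 1]    # open knot vector: degree 2, 4 functions
vals = [bspline_basis(i, 2, 0.3, knots) for i in range(4)]
```

In an isogeometric setting these same functions describe both the geometry and the unknown fields, which is the "single geometric model" idea of the abstract.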

## Course 8.2 - Prof. Dr. Thomas J.R. Hughes - Isogeometric Analysis

Continuation of Course 8.1 (same abstract and references).

## Course 9 - Prof. Dr. Victor Pascual Cid - Data Visualization: More than a Thousand Words

Big Data has been one of the trendiest topics of recent years. It is important because it provides the means to analyse large volumes of data in order to detect patterns and outliers. However, understanding data requires much more than running complex statistics and data mining processes. We need tools that let us explore the data in order to understand its complexity. Data Visualization is the discipline behind the generation of static and interactive representations of abstract data to amplify cognition. In this lecture we will examine the main goals of Data Visualization through examples that show the wide variety of representations that can be produced. We will also discuss the pros and cons of some of the most classic visualizations in order to understand the rationale needed to use them properly.

## Course 10 - Prof. Dr. Ryan McClarren - Polynomial Chaos Expansions for Uncertainty Quantification

In computational science and engineering one often deals with computer simulations where inputs to the calculation are uncertain. A natural question to ask is how uncertain the output of a simulation is given uncertainties in the inputs. In this lecture I will cover the application of orthogonal expansions in probability space (also known as polynomial chaos expansions) to determine the distribution of quantities of interest from a numerical simulation. I will detail how to apply these methods to a variety of input uncertainty distributions, and give concrete examples for simple functions as well as non-trivial applications. The examples will also be an opportunity to point out to students the pitfalls and common mistakes that can be made when applying these techniques. Finally, I will cover more advanced ideas such as sparse quadrature and regularized regression techniques to estimate expansion coefficients.
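
A minimal sketch of the idea for a Gaussian input: expand the model output f(ξ), ξ ~ N(0, 1), in probabilists' Hermite polynomials, compute the coefficients by Gauss-Hermite quadrature, and read off the mean and variance from the coefficients. The model f = exp is an illustrative stand-in for a simulation output:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

f = np.exp                        # toy "simulation output" as a function of xi
x, wq = He.hermegauss(40)         # Gauss-Hermite (probabilists') quadrature
wq = wq / wq.sum()                # normalize so weights integrate the N(0,1) density

K = 8                             # truncation order of the expansion
coeffs = []
for k in range(K + 1):
    Hk = He.hermeval(x, [0] * k + [1])          # He_k at the quadrature nodes
    coeffs.append(np.sum(wq * f(x) * Hk) / factorial(k))   # E[f He_k] / E[He_k^2]

mean = coeffs[0]                                # E[f] is the zeroth coefficient
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, K + 1))
```

For this example the exact values are E[e^ξ] = e^0.5 and Var = e² - e, so the truncated expansion can be checked directly; real applications replace the cheap f with an expensive solver evaluated at the quadrature nodes.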

## Course 1 - Dr. Francesco Bonchi - On Information Propagation, Social Influence, and Communities in Social Networks: an Algorithmic Perspective

With the success of online social networks and microblogging platforms such as Facebook, Tumblr, and Twitter, the phenomenon of influence-driven propagations has recently attracted the interest of computer scientists, sociologists, information technologists, and marketing specialists. In this talk we will take a data mining perspective, discussing what (and how) can be learned from a social network and a database of traces of past propagations over the social network. Starting from one of the key problems in this area, i.e. the identification of influential users, we will provide a brief overview of our recent contributions in this area. We will expose the connection between the phenomenon of information propagation and the existence of communities in social networks, and we will go deeper into this new research topic arising at the overlap of information propagation analysis and community detection.

## Course 2 - Prof. Dr. Lars Grüne - Nonlinear Model Predictive Control

Nonlinear Model Predictive Control (NMPC) is a control method in which a closed loop or feedback control is synthesized from the iterative solution of open loop optimal control problems. As such, the method is applicable to all classes of systems for which optimal control problems can be efficiently solved numerically, including most ODE and DAE models but also many control systems governed by PDEs. Traditionally, the optimal control problem in the NMPC formulation is of tracking type, i.e., it penalizes the distance of the solution to a desired reference. In this context, the important question we will answer in the first part of this short course is whether the closed loop solution generated by NMPC converges to the desired reference, or, more specifically, whether it is asymptotically stable. In the second part of the course we will investigate performance issues. Here we will also consider NMPC problems which are not of tracking type and which have recently attracted a lot of interest in the literature under the name economic NMPC.
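
For the linear-quadratic special case, the receding-horizon idea can be sketched in a few lines: at every sampling instant solve a finite-horizon tracking problem (here via the backward Riccati recursion) and apply only the first input. This is a generic illustration, not material from the course; the scalar system and cost weights are invented:

```python
# Scalar system x_{k+1} = a x_k + b u_k, tracking the reference x = 0 with
# stage cost q x^2 + r u^2 over a horizon of N steps.
def mpc_input(x, a, b, q, r, N):
    # Backward Riccati recursion for the finite-horizon cost-to-go
    P = q                       # terminal cost weight
    gains = []
    for _ in range(N):
        K = (b * P * a) / (r + b * P * b)
        gains.append(K)
        P = q + a * P * (a - b * K)
    return -gains[-1] * x       # first control of the horizon (receding horizon)

a, b, q, r, N = 1.2, 1.0, 1.0, 0.1, 5   # open-loop unstable plant (a > 1)
x = 5.0
for _ in range(30):
    u = mpc_input(x, a, b, q, r, N)     # re-solve at every step
    x = a * x + b * u                   # closed-loop update
```

The closed loop is asymptotically stable here even though the plant is unstable, which is the convergence question the first part of the course studies in the general nonlinear setting.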

## Course 3&4 - Prof. Dr. Heinkenschloss - Model Reduction in PDE-Constrained Optimization

The numerical solution of optimization problems governed by partial differential equations (PDEs) requires the repeated solution of coupled systems of PDEs. Model reduction can substantially lower the computational cost, either by using reduced order models (ROMs) as surrogates for the expensive original objective and constraint functions, or by using ROMs to accelerate subproblem solves in traditional Newton-type methods. In these lectures I will present approaches for the integration of projection-based ROMs into PDE-constrained optimization, discuss their computational costs and convergence properties, and demonstrate their performance on example problems. I will review the generation of projection-based ROMs, as well as Newton-type optimization algorithms. The integration of projection-based ROMs and optimization will first be discussed for relatively simple PDE-constrained optimization problems that allow for precomputation of ROMs and computation of global error bounds, and then for nonlinear PDE-constrained optimization problems where such precomputations and global error bounds are typically impossible.
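
A common ingredient of projection-based ROMs is proper orthogonal decomposition (POD): an SVD of solution snapshots yields a low-dimensional basis onto which the full problem is projected. A self-contained sketch on synthetic snapshot data (the sizes and the low-rank construction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 200, 40, 5
# Synthetic snapshots that lie (almost) in an r-dimensional subspace,
# mimicking PDE solutions at m parameter values
basis = rng.standard_normal((n, r))
S = basis @ rng.standard_normal((r, m)) + 1e-8 * rng.standard_normal((n, m))

U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                      # POD / reduced basis: leading left singular vectors
S_rom = V @ (V.T @ S)             # project the snapshots onto the ROM subspace
rel_err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
```

The decay of the singular values tells you how small the ROM can be for a given accuracy, which is what makes ROM surrogates viable inside an optimization loop.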

# 2015

## Course 6 - Prof. Dr. Henning Struchtrup - Entropy, Equilibrium, Irreversibility: The 2nd Law of Thermodynamics and its Numerous Consequences

Put in simple words, the 2nd Law of Thermodynamics describes the trend of any system left to itself towards a unique and stable equilibrium state. Indeed, the second law can be deduced from rather simple arguments on equilibration processes seen in daily life, including the experience that heat by itself will only go from warm to cold, while there is no restriction on the direction of work transfer. The 2nd Law is a balance equation for entropy, with a non-negative production term, and often is written as an inequality. With this, it differs from most other laws of physics, which typically are conservation laws (e.g., mass, energy, momentum). The non-negativity of entropy generation has far-reaching consequences for technical processes and for the description of nature, which will be explored through a large number of examples. We will look at modifications of technical processes that reduce entropy generation and thus improve efficiency. Then we discuss the approach to equilibrium, and the final equilibrium states, which often are a result of a competition between entropy and energy. We will identify thermodynamic forces in non-equilibrium states that induce thermodynamic fluxes which drive the system towards equilibrium. Along the way, we make an excursion to the Kinetic Theory of Gases, which provides the microscopic interpretation of entropy. The 2nd law is often perceived to be difficult to understand; this lecture aims at making it accessible by clear explanations and a multitude of examples.
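
A one-line worked example of the sign constraint: when heat flows from a hot reservoir to a cold one, the hot side loses entropy Q/T_h but the cold side gains the larger amount Q/T_c, so the total entropy change is positive (the numbers are illustrative):

```python
# Heat Q flows spontaneously from a hot reservoir at T_h to a cold one at T_c
Q, T_h, T_c = 1000.0, 400.0, 300.0   # J, K, K (illustrative values)

dS_hot = -Q / T_h                    # entropy lost by the hot reservoir
dS_cold = Q / T_c                    # entropy gained by the cold reservoir
S_gen = dS_hot + dS_cold             # net entropy production: >= 0 by the 2nd law
```

Reversing the direction of heat flow would flip the sign of S_gen, which is exactly what the second law forbids.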

## Course 7 - Prof. Tan Thanh Bui - Towards Large-Scale Computational Science and Engineering with Quantifiable Uncertainty

We present our recent efforts towards uncertainty quantification for large-scale computational mechanics. The talk has three parts. In the first part, we present a reduce-then-sample approach to efficiently study the probabilistic response of turbomachinery flow due to random geometric variation of bladed disks. We first propose a model-constrained adaptive sampling approach that exploits the physics of the problem under consideration to build reduced-order models. Monte Carlo simulation is then performed using the cost-effective reduced model. We demonstrate the effectiveness of our approach in predicting the work per cycle with quantifiable uncertainty. In the second part of the talk, we consider the shape inverse problem of electromagnetic scattering. We address this large-scale inverse problem in a Bayesian inference framework. Since exploring the Bayesian posterior is intractable for high-dimensional parameter spaces and/or expensive computational models, we propose a Hessian-informed adaptive Gaussian process response surface to approximate the posterior. The Monte Carlo simulation task, which is impossible using the original posterior, is then carried out on the approximate posterior to predict the shape and its associated uncertainty with negligible cost. In the last part of the talk, we address the problem of solving large-scale global seismic inversion governed by the linear elasticity equation. We discuss a mesh-independent uncertainty quantification method for full-waveform seismic inversion exploiting the compactness of the misfit Hessian and low-rank approximation.

## Course 8 - Dr. Timothy Mattson - Parallel Computing: From Novice to “Expert” in Four Hours

Parallel computing for the computational sciences is quite old: the first commercially produced shared-memory computer appeared in 1962 (the 4-CPU Burroughs D825). Parallel computing then underwent a sort of “Cambrian explosion” roughly 30 years ago with “the attack of the killer micros”, resulting in a vast range of parallel architectures: VLIW and explicitly parallel instruction sets, the famous MIMD vs SIMD wars (MIMD won), fights over network topologies in MPP supercomputers (hypercube, 3D torus, grid, rings, etc.), the dream of SMP and the harsh reality of NUMA, easy-to-use vector units (which are often quite hard to use), and more recently heterogeneous computing with the tension between CPUs and GPUs. As if the hardware landscape were not confusing enough, the software side is even worse, with abstract models (e.g. SIMT, SMT, CSP, BSP, and PRAM) and a list of programming models that would easily fill several pages. It’s enough to scare even the most motivated computational scientist away from parallel computing.

Ignoring parallel computing, however, is not an option. As Herb Sutter noted several years ago, “the free lunch is over”. If you want to achieve a reasonable fraction of the performance available from a modern computer, you MUST deal with parallelism. Sorry. Fortunately, we’ve figured out how to make sense of this chaos. By looking back over all the twists and turns of the last 30 years, we can boil things down to a small number of essential abstractions. And while there are countless parallel programming models, if you focus on standard models that let you write programs that run on multiple platforms from multiple vendors, you can reduce the number of programming models you need to consider to a small number.

In fact, if you give me four hours of your time, you can learn: (1) enough to understand parallel computing and intelligently choose which hardware and software technologies to use, and (2) the small number of design patterns used in most parallel applications. That’s not too bad. In fact, parallel computing is much less confusing than the other hot trends kicking around these days (anyone up for a 40-hour summary of machine learning over big data running in the cloud?).
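
One of the recurring design patterns behind most parallel applications is the humble parallel map: independent tasks distributed over a pool of workers. A minimal Python sketch using only the standard library (the task and inputs are invented; a process pool would be the usual choice for CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # An independent, side-effect-free task: sum of squares below n
    return sum(i * i for i in range(n))

inputs = [1000, 2000, 3000, 4000]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, inputs))    # the "map" parallel pattern
```

Because the tasks share no mutable state, the result is identical to a sequential map, which is what makes this the easiest pattern to reason about.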

## Course 9 - Prof. Dr. Pierre Alliez - Mesh Generation and Shape Reconstruction

Meshes play a central role in computational engineering for the simulation and visualization of physical phenomena. They are also commonplace for modeling and animating complex scenes in special effects or multimedia applications. After some motivating applications we will cover basic geometric algorithms and data structures: convex hulls, Delaunay triangulations and Voronoi diagrams. These notions are central to generate, through Delaunay refinement, isotropic triangle and tetrahedron meshes for 2D domains, surfaces and volumes. We will then discuss variational approaches to optimize the quality of the mesh elements.
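
As a flavor of the basic geometric algorithms involved, here is a compact 2D convex-hull routine (Andrew's monotone chain, pure Python; the sample points are illustrative). Convex hulls are one of the primitives from which Delaunay triangulations and Voronoi diagrams are built:

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o): >0 for a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain: sort, then build lower and upper hulls
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # counterclockwise hull, no duplicates

hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.7)])
```

The interior points are discarded and only the four square corners survive; the same left-turn predicate reappears as the orientation test at the heart of Delaunay refinement.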

## Course 10 - Prof. Dr. Pierre Alliez - Mesh Generation and Shape Reconstruction

Shape reconstruction is concerned with recovering a digital representation of physical objects from measurement data. It rose to prominence primarily as a result of the ability to acquire 3D point clouds through, e.g., laser scanning, structured light and multi-view stereo. Without any prior assumption the shape reconstruction problem is inherently ill-posed, as an infinite number of shapes (curves in 2D, surfaces in 3D) pass through or near the data points. The problem must thus be regularized through priors such as sampling density or level of noise. We will first discuss some basic methods devised in the field of Computational Geometry: these approaches, based on Voronoi diagrams and Delaunay triangulations, come with theoretical guarantees but require several assumptions that are rarely met on real-world data. We will then explore more advanced approaches based on variational formulations that are robust to imperfect data such as noise and outliers.

## Course 2 - Prof. Rupert Klein - Multiscale Dynamics of the Atmosphere-Ocean System and Related Computational Challenges

TBA

## Course 3 - Prof. Mark E. Tuckerman - Multiple Time-Stepping in Molecular Dynamics: Challenges, Solutions, and Applications

The inherent separation of time scales that is a ubiquitous feature of a wide variety of complex dynamical systems can be exploited to generate efficient algorithms for numerically integrating the classical equations of motion of such systems. The basic strategy is to assign the force component associated with each time scale its own time step and then to devise multiple time-step solvers based on this division of forces. The underlying assumption of such schemes is that fast forces arise from potentials that are computationally inexpensive to evaluate, while the slower forces contain the major computational bottlenecks. Consequently, if the slower forces can be integrated with a larger time step and evaluated less frequently, the computational cost of the calculation will be significantly reduced. In these talks, I will review this basic strategy and point out that the cost savings are limited by phenomena known as resonances. I will then present several solutions to the resonance problem that allow the full time-scale separation to be exploited by the multiple time-step solver. Finally, I will present some application areas to which multiple time-stepping approaches can be applied, including rare-event sampling techniques for biomolecular and crystal structure prediction, fixed-charge and polarizable models, and coarse-grained approaches such as dissipative-particle dynamics.
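
The force-splitting strategy can be sketched on a toy oscillator with a stiff "fast" spring and a soft "slow" spring: the outer step applies half-kicks from the slow force, while an inner velocity-Verlet loop integrates the fast force with a smaller step. All parameters are illustrative; resonance (the limitation mentioned above) would appear if the outer step approached half the fast period:

```python
def respa_step(x, v, dt, n_inner, f_fast, f_slow, m=1.0):
    # Reference-system propagator (RESPA)-style splitting:
    # half kick from the slow force, evaluated once per outer step
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):              # velocity-Verlet on the fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m         # closing half kick
    return x, v

k_fast, k_slow = 100.0, 1.0               # stiff vs. soft spring (illustrative)
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x

x, v = 1.0, 0.0
dt, n_inner = 0.05, 10                    # slow force evaluated 10x less often
E0 = 0.5 * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
for _ in range(2000):
    x, v = respa_step(x, v, dt, n_inner, f_fast, f_slow)
E = 0.5 * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
```

The scheme is symplectic, so the total energy stays close to its initial value over long runs even though the expensive (here: soft) force is evaluated only once per outer step.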

## Course 4 - Prof. Robert Scheichl - Efficient Use of Model Hierarchies in Uncertainty Quantification

TBA

## Course 5 - Prof. Cory Hauck - Numerical Topics in Collisional Kinetic Equations: Moment Models, Asymptotic Preserving Methods, and Hybrid Approaches

Kinetic equations describe the evolution of a particle system via the evolution of a probability distribution function that is typically defined over a six-dimensional phase space. The mathematical, computational, and physical aspects of these equations are very interesting, but also very complicated. From the numerical point of view, simulations are challenging due to both the large phase space and the existence of structure at multiple scales. In particular, the number of unknowns needed to accurately represent the solution can be quite large. In this talk, I will discuss the basic goals and challenges of these types of simulations and present some of my own attempts at tackling them. In particular, we will discuss the moment-based formalism, numerical approaches for capturing macroscopic behavior with under-resolved meshes, and finally some ideas on hybrid approaches.
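As a reminder of the moment-based formalism mentioned above, the standard scaled model problem can be sketched as follows (generic notation chosen here for illustration):

```latex
% Generic collisional kinetic equation with collision operator C and scaling parameter \varepsilon:
\partial_t f + v \cdot \nabla_x f = \frac{1}{\varepsilon}\, \mathcal{C}(f),
\qquad f = f(x, v, t).
% Taking velocity moments against weights m(v) yields macroscopic balance laws:
\partial_t \langle m f \rangle + \nabla_x \cdot \langle v\, m f \rangle
  = \frac{1}{\varepsilon}\, \langle m\, \mathcal{C}(f) \rangle,
\qquad \langle g \rangle := \int g \, dv .
```

The flux term involves moments of higher order than the evolved quantities, so the system never closes by itself; supplying a closure is the central modeling step in the moment formalism.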

# 2014

## Course 6 - Dr. Dennis M. Kochmann - When Nano Meets Macro – Conceptual and Computational Challenges of Coarse-Graining Atomistics

Models allow us to describe, to understand, and to predict the complex behavior of materials across multiple scales of length and time, from the fundamental particles all the way up to what we observe with the naked eye. For each scale we have powerful models ranging from quantum mechanics and molecular dynamics to the continuum theories. But what if all of those methods fail? Nano-sized structures and devices or nanostructured materials provide excellent examples of great technological and scientific interest. Here, the continuum assumption fails; yet, atomic-level computational techniques become prohibitively expensive. How, then, can we apply atomistic techniques to myriads of particles?

In this short course, we will discuss the challenges of bridging the scales from atomistics to the continuum with a focus on crystalline materials (including, e.g., metals and ceramics). After a review of techniques to understand the behavior of solids from the electronic structure all the way up to the macroscale (and relations between these), we will study ways to coarse-grain the atomistic ensemble in order to extend atomic-level accuracy to much larger scales with a particular focus on the family of quasicontinuum methods. Such techniques require an integration of physics and computational science to provide an optimal balance between accuracy and efficiency. We will highlight both conceptual and computational challenges by various examples and discuss the latest developments.

## Course 8.1 - Prof. Dr. Wolfgang A. Wall - Computational Fluid-Structure Interaction

Fluid-structure interaction (FSI) problems, as well as many other multi-field problems, have received much attention in recent years and their importance is still continuously growing. The main reason for this is that they are of great relevance in all fields of engineering (civil, mechanical, aerospace, bio, etc.) as well as in the applied sciences. Hence, both development and application of respective modeling and simulation approaches have gained great attention over the past decades. This lecture will address two topics that are crucial for FSI, namely the way the overall FSI problem is formulated as well as how the coupled transient system is solved. We will start with a basic introduction to these topics for the non-experts and at the end will also address some recent developments that could also be interesting for FSI experts.

## Course 8.2 - Prof. Dr. Wolfgang A. Wall - Advanced Computational Techniques for General Interface Problems

In this lecture we will present advanced computational techniques that have proven to be very powerful in the treatment of different interface problems. The main building block consists of so-called dual mortar finite element methods. We will first present a simple introduction to these methods and then show how they can be used in different settings in a very favorable way. The types of interface problems that will be covered include contact dynamics and associated physical phenomena such as friction, abrasive wear and lubrication, mesh tying in solid and fluid dynamics, fluid-structure interaction, thermo-mechanics, etc.

## Course 9 - Prof. Dr. Francesco Casella - Object-Oriented Dynamic Modelling: Principles, Applications, and Future Challenges

Object-oriented (O-O), equation-based modelling languages and tools are increasingly used for the system-level modelling of physical and cyber-physical systems. The Modelica language, first introduced in 1997, is becoming the de-facto standard in this area, as a growing number of commercial and open-source tools support it.

The talk will introduce the basic concepts of declarative, O-O modelling languages (with emphasis on Modelica), briefly review the techniques used to obtain efficient simulation code from O-O models, and highlight future challenges and open research questions, in particular in the areas of simulation of large-scale systems and optimal control of O-O models. Applications in different fields of engineering will also be discussed.

## Course 10 - Dr. Stefan Kriebel - Applied Software Engineering

After the appearance of the first programming languages in the late fifties, e.g. Fortran, ALGOL, or COBOL, the software crisis was proclaimed in 1968 at a NATO conference in Garmisch-Partenkirchen, Germany. At this conference a discussion started on how software could be engineered to become less cost-intensive, more reliable, and better matched to its targets. The discussion on software engineering is still going on today... Since the NATO conference, the last five decades of software engineering have been dedicated to standardization, the management of complexity, and the improvement of software quality. In the seventies, structured design and programming approaches were developed, e.g. by Edward Yourdon and Tom DeMarco. In the eighties, new programming paradigms such as object orientation were invented, e.g. by Bjarne Stroustrup. The nineties were the decade of software modeling, software architecture, and processes, e.g. CASE, UML, and CMMI, as well as the start of the industrial exploitation of the internet. After the millennium, software became more and more complex and pervasive through distributed applications and their interdependencies, particularly in the worldwide cloud.

However, all theory is nothing without application. All the software engineering excellence that has been created must also be applied to industrial host domains such as mechanical and electrical engineering. The challenge for the effective transfer of software engineering techniques to host domains is the availability of knowledge in both the computer science domain and the host domain. Compared with spoken languages, this means that knowledge of both the software engineering language and the host language is required.

The objective of my lecture “Applied Software Engineering” is to provide a comprehensive overview of the software engineering techniques to be applied to host domains. In terms of spoken languages, this could be compared to a first culture and translation guide. Therefore, I would like to welcome the native speakers of the host language to join the exciting world of software engineering in order to get the latter effectively translated into the host domain. The lecture covers the software engineering process itself and its relevant key processes, such as requirements engineering, verification and validation, project and risk management, configuration management, as well as process and quality management.

For further reading, please refer to “Software Engineering” by Ian Sommerville (9th edition, 2012).

## Course 1 - Dr. David Champion - Computational Challenges in Pulsar Astronomy

Pulsars are the remnants of massive stars following a supernova explosion. Their extreme density (equivalent to that of the atomic nucleus) and rapid rotation (up to 716 Hz) make them stable clocks that can rival the atomic time standard. By finding and timing these pulsars we can probe many different areas of physics and astrophysics - general relativity, the equation of state, stellar evolution, and the Galactic magnetic field to name but a few. Advances in computing technology have a direct impact at almost every step of this process - from observing at the telescope, to searching the data for pulsars, and then using these pulsars for scientific research. With the design of the Square Kilometre Array telescope underway the importance of computer science to this community will only increase. In these lectures Dr. Champion will present an overview of pulsar research highlighting the computational challenges that face pulsar astronomy.

## Course 2 - Prof. Dr. Hank Childs - Visualization and Analysis of Very Large Data

Scientific simulations run on leading-edge supercomputers produce petabytes of data, with very high spatial resolution meshes, many fields stored on those meshes, and a history of the simulation over time. Visualization and analysis are key technologies for deriving knowledge from these simulations, and thus for realizing their value. The usage of these technologies is varied and benefits scientists in multiple ways: confirming that simulations are running correctly, communicating the phenomena in the simulations to audiences, and exploring the data to discover new science. Visualizing these very large data sets creates two grand challenges. First, how to process the massive scale of the data? How to load a petabyte of data, process it, and render it at interactive frame rates? Second, how to understand the data? How can trillions of data points be effectively represented using only millions of pixels? How can the data be reduced in a way that preserves its integrity and allows scientists to gain insight? This course will be broken into two distinct phases. The first phase will give a general overview of scientific visualization: how it works, how it is done in practice, and descriptions of the most commonly used algorithms, such as isocontouring, slicing, volume rendering, and particle advection. The second phase will focus on techniques for very large data: a survey of processing modalities, parallel approaches, and how to achieve efficiency. By the end of the course, students will be able to understand important concepts in visualization, as well as the visualization of very large data sets.

## Course 3 - Prof. Dr. Suvrit Sra - Introduction to Machine Learning

Machine learning studies the question: how can we build (efficient) machines that learn from examples? Learning refers to the ability to improve performance, according to some measure, as the machine gets to process more data. Examples include email spam filters that learn to block unwanted email and phishing attempts, robots that learn to navigate their environment by exploring it, or, more generally, systems that autonomously adapt to changing environments. This short course is targeted at students with little or no prior exposure to machine learning. We will cover some theoretical basis for the subject, along with some examples of how to apply machine learning ideas to applications. Modern machine learning encompasses a large body of interdisciplinary knowledge, e.g., from data mining, information theory, statistics, functional analysis, computer science, optimization, and several other fields. Therefore, we will include some remarks on several of these connections during the course. Time permitting, we will go through some exercises on a computer. Students with a background in linear algebra, statistics, and optimization will be at an advantage, though these are not hard requirements.

# 2013

## Course 6 - Prof. Dr. Venkat Raman - Turbulence Modeling

This lecture will provide an introduction to turbulence and turbulence modeling. The following main topics will be covered: 1) Turbulence theory (Scales, Kolmogorov theory, turbulence spectrum), 2) Reynolds-averaged Navier Stokes (RANS) approach (statistical modeling, turbulence models, limitations), 3) large eddy simulation (LES) (formulation, filtering, numerical solution, dynamic modeling).
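For orientation, the classical Kolmogorov estimates that underlie the scale discussion can be summarized as follows (standard textbook results, stated here for reference):

```latex
% Kolmogorov dissipation length, from viscosity \nu and dissipation rate \epsilon:
\eta = \left( \frac{\nu^{3}}{\epsilon} \right)^{1/4},
% Inertial-range energy spectrum with Kolmogorov constant C_K:
E(k) = C_K\, \epsilon^{2/3}\, k^{-5/3},
% Scale separation between the integral scale L and \eta:
\frac{L}{\eta} \sim \mathrm{Re}^{3/4}.
```

The Re^{3/4} separation per spatial direction is what makes direct numerical simulation infeasible at engineering Reynolds numbers and motivates the RANS and LES approaches covered in the lecture.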

Further, a brief overview of emerging trends and challenges in modeling will be presented.

## Course 8 - Prof. Dr. Uwe Naumann - The Art of Differentiating Computer Programs

Discrete adjoint versions of numerical simulation programs allow for gradients of certain objectives with respect to N free parameters to be computed with machine accuracy at a computational cost that is independent of N. Similar complexity results hold for second- and higher-order adjoints. The underlying semantic source code transformation technique is known as Algorithmic Differentiation (AD). We have been developing AD software for more than 15 years. This seminar will focus on the discussion of its theoretical foundations and demonstrate its use in the context of practically relevant simulation and optimization problems.
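The cost structure described above, one reverse sweep whose cost is independent of the number N of inputs, can be illustrated with a toy reverse-mode implementation in Python. This is a didactic sketch, not the seminar's AD software; the class and function names are invented for illustration.

```python
import math

class Var:
    """A scalar node in a computational graph for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs (parent node, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

def backward(output):
    """Reverse sweep: propagate each adjoint contribution down the graph.

    Gradient propagation is linear, so pushing contributions separately
    and summing at the inputs yields the correct total derivatives.
    """
    stack = [(output, 1.0)]
    while stack:
        node, upstream = stack.pop()
        node.grad += upstream
        for parent, local in node.parents:
            stack.append((parent, local * upstream))
```

A single call to `backward` populates the gradients of all inputs at once; this all-at-once property is exactly what discrete adjoints exploit.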

## Course 9 - Prof. Dr. Fredi Tröltzsch - Optimal Control of Partial Differential Equations - Some Main Techniques and Applications

The lecture introduces some main ideas for solving optimal control problems for nonlinear partial differential equations. As a foundation for numerical methods, necessary optimality conditions are motivated and explained; in particular, adjoint equations and optimality systems are introduced. Important numerical methods for solving optimal control problems with partial differential equations are discussed. Applications of optimal control to the cooling of steel, semiconductor crystal growth, flow around airfoils, flow measurement by alternating magnetic fields, and the control of wave fronts for reaction-diffusion equations illustrate the theory.
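The adjoint equations and optimality systems mentioned above take the following form for the simplest model problem, linear-quadratic control of the Poisson equation (a standard textbook sketch, not specific to this lecture):

```latex
% Minimize J(y,u) = \tfrac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2
%                 + \tfrac{\alpha}{2}\|u\|_{L^2(\Omega)}^2
% subject to the state equation
-\Delta y = u \ \text{in } \Omega, \qquad y = 0 \ \text{on } \partial\Omega .
% The first-order necessary conditions couple state, adjoint, and gradient equation:
-\Delta p = y - y_d \ \text{in } \Omega, \qquad p = 0 \ \text{on } \partial\Omega ,
\qquad \alpha u + p = 0 \ \text{in } \Omega .
```

For nonlinear state equations the same structure persists, with the adjoint equation built from the linearization of the state operator at the optimal pair.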

## Course 10 - Prof. Dr. Ulisse Stefanelli - Minimization of Configurational Potentials and Crystallization

Despite its obvious scientific and technological relevance, the question of why atoms and molecules arrange in crystalline order is still largely eluding a rigorous mathematical treatment. At very low temperature, atomic interactions are expected to be governed solely by the respective positions of the particles. One is hence concerned with the minimization of an interaction energy depending on particle positions. The basic crystallization problem consists in characterizing the local and global geometry of ground-state configurations of such an interaction energy. In particular, one says that crystallization occurs when ground states are periodic.

I will present an overview of the available crystallization results in one and two dimensions and comment on the challenges of the many open problems, especially in three dimensions. The formation and stability of carbon nanostructures such as nanotubes and fullerenes will also be discussed.

## Course 2 - Part 1 - Prof. Dr. Alfio Quarteroni - Domain Decomposition for Partial Differential Equations

The numerical approximation of partial differential equations can take advantage of domain decomposition (DD) methods. In this presentation we will introduce a general mathematical setting for DD, discuss DD preconditioners, and illustrate their role and efficiency for parallel computing. Finally, we will show the way DD methods can be adapted to solve multiphysics problems.
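As a concrete illustration of the DD idea, here is a minimal overlapping additive Schwarz iteration for a 1D Poisson problem (a toy Python sketch with invented helper names; in practice, additive Schwarz is mostly used as a preconditioner for Krylov methods rather than as a damped stationary iteration as here).

```python
import numpy as np

def poisson_matrix(n, h):
    """1D Laplacian with homogeneous Dirichlet BCs on n interior points."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 / h**2)
    np.fill_diagonal(A[1:], -1.0 / h**2)
    np.fill_diagonal(A[:, 1:], -1.0 / h**2)
    return A

def additive_schwarz(A, b, subdomains, theta=0.5, n_iter=500):
    """Damped additive Schwarz: in each sweep, every subdomain solves its
    local restriction of the residual equation; the local corrections are
    independent of each other and could be computed in parallel."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ x
        correction = np.zeros_like(b)
        for idx in subdomains:
            Ai = A[np.ix_(idx, idx)]                  # local stiffness block
            correction[idx] += np.linalg.solve(Ai, r[idx])
        x += theta * correction                       # damping for stability
    return x

# Two overlapping subdomains on 20 interior points of (0, 1).
n, h = 20, 1.0 / 21
A = poisson_matrix(n, h)
b = np.ones(n)
subdomains = [np.arange(0, 13), np.arange(8, 20)]
x = additive_schwarz(A, b, subdomains)
```

The damping factor `theta` (here 0.5 for two subdomains) keeps the stationary iteration contractive in the overlap, where both local solves contribute a correction.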

## Course 2 - Part 2 - Prof. Dr. Alfio Quarteroni - Numerical Models for Complexity Reduction in Multiphysics Problems

The numerical solution of complex physical problems often requires a great deal of computational resources. Sometimes the numerical problem is so large that a reduction of its complexity becomes mandatory. This can be achieved by a manifold strategy: simplifying the original mathematical model, devising novel numerical approximation methods, and developing efficient parallel algorithms that exploit the dimensional-reduction paradigm.

After introducing some illustrative examples, in this seminar several approaches will be proposed and a few representative applications to blood flow modeling, sports design, and the environment will be addressed.

## Course 3 - Prof. Dr. Wolfgang Bangerth - Using Finite Elements via the deal.II Library

Modern finite element codes are far more complex than what can be written from scratch within any reasonable amount of time: they utilize adaptive meshes for complex geometries, multigrid methods, sophisticated solvers and preconditioners, and need to run on hundreds, thousands, or even more processors. Software of this kind can only be written by reusing libraries that provide this functionality in a generic way. This course will examine the typical structure of finite element codes. It will then demonstrate the use of the open source deal.II library (see http://www.dealii.org), which is widely used as the basis for finite element programs in research, and provide an overview of its features.

## Course 4 - Prof. Dr. Jochen Garcke - Sparse Grids and Applications

It is well known that classical numerical discretization schemes fail in more than three or four dimensions due to the curse of dimensionality. The technique of sparse grids makes it possible to overcome this problem to some extent, under suitable regularity assumptions. This discretization approach is obtained from a multi-scale basis by a tensor product construction and subsequent truncation of the resulting multiresolution series expansion. The underlying ideas can be used for numerical integration and (stochastic) partial differential equations; applications include data analysis, finance, and physics.
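The effect of the truncation can be made tangible simply by counting grid points. The Python sketch below (an illustration using the standard regular sparse grid without boundary points; function names are invented) compares full and sparse grid sizes:

```python
from itertools import product

def full_grid_size(level, dim):
    """Isotropic full grid with 2**level - 1 interior points per direction."""
    return (2 ** level - 1) ** dim

def sparse_grid_size(level, dim):
    """Regular sparse grid (no boundary points): keep the anisotropic
    subgrids whose hierarchical multi-level (l_1, ..., l_d), l_i >= 1,
    satisfies l_1 + ... + l_d <= level + dim - 1; a subgrid of multi-level
    l contributes prod_i 2**(l_i - 1) hierarchical points."""
    total = 0
    for levels in product(range(1, level + 1), repeat=dim):
        if sum(levels) <= level + dim - 1:
            pts = 1
            for l in levels:
                pts *= 2 ** (l - 1)
            total += pts
    return total

# Curse of dimensionality vs. sparse grid truncation:
print(full_grid_size(10, 3))    # roughly 1e9 points
print(sparse_grid_size(10, 3))  # tens of thousands of points
```

For sufficiently smooth functions, the sparse grid attains a comparable accuracy with a point count that grows only like O(2^n n^(d-1)) instead of O(2^(nd)).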

## Course 5 - Dr. Michael R. Bussieck & Dr. Stefan Vigerske - Decomposition Methods for Mathematical Programming Problems

This lecture reviews solution methods for linear, mixed-integer linear, and mixed-integer nonlinear programming problems, in particular the simplex method, cutting-plane methods, and branch-and-bound methods. Further, decomposition algorithms such as Benders decomposition and column generation for mixed-integer linear programs are discussed. The methods are illustrated via examples that use state-of-the-art modelling and solving software.

# 2012

## Course 6 - Prof. Sanjay Mittal, Ph.D. - Aerodynamic Shape Optimization using Adjoint based Methods

A method for aerodynamic shape optimization using adjoint variables is developed and implemented. A stabilized finite element method is used to solve the governing equations. The validation of the formulation and its implementation is carried out via steady flow past an elliptical bump whose eccentricity is used as a design parameter. Results for both optimal design and inverse problems are presented. Using different initial guesses, multiple optimal shapes are obtained. A multi-objective function with additional constraints on the volume and the drag coefficient of the bump is utilized. It is seen that as more constraints are added to the objective function, the design space is constrained and the multiple optimal shapes become progressively similar to each other. Next, the shape of an airfoil is optimized for various objectives and for various values of the Reynolds number. Very interesting shapes are discovered at low Reynolds numbers. The non-monotonic behavior of the objective functions with respect to the design variables is demonstrated. The method is extended to design airfoils for a range of Reynolds numbers and angles of attack. Next, an approach for optimizing shapes associated with unsteady flows is developed. The objective function is typically based on time-averaged aerodynamic coefficients. Interesting shapes are obtained, especially when the objective is to produce high-performance airfoils. The method is utilized to obtain high-performance airfoils for Re=1000 and 10,000 using a relatively large number of design variables. Beyond a certain number of control points, the optimization leads to the spontaneous appearance of corrugations on the upper surface of the airfoil. The corrugations are responsible for the generation of small vortices that add to the suction on the upper surface of the airfoil and lead to enhanced lift. Preliminary results will be presented for the optimization of finite wings.

Figure: Time-averaged pressure field for the optimal airfoils for desired values of the time-averaged lift coefficient. The Reynolds number, based on chord length, is 1000 and the angle of attack is 4 degrees.

## Course 7 - Prof. Dr. Ryan Elliott - Modeling Materials: Continuum, Atomistic, and Multiscale Techniques

This course will present an introduction to the mathematical theory of stability and bifurcation in the context of materials problems. Basic concepts of bifurcation and stability are covered, based primarily on the minimum potential energy criterion for investigating the stability of elastic conservative systems. An introduction to numerical methods for continuation (branch-following) and branch switching will be discussed, and their application to martensitic phase transformations in periodic crystalline alloys will be reviewed.

## Course 8 - Dr. Adrian Muntean - The Homogenization Method and Multiscale Modeling

The lecture starts off with the study of an oscillatory elliptic PDE formulated firstly in fixed and, afterwards, in periodically-perforated domains. We remove the oscillations by means of a (formal) asymptotic homogenization method. The output of this procedure consists of "guessed" averaged model equations and explicit rules (based on cell problems) for computing the effective coefficients. About half of the lecture will be spent on explaining the basic steps of the averaging procedure and the way this can be put in action when other PDE problems need to be tackled. In the second part of the lectures, I introduce the concept of two-scale convergence (and, correspondingly, two-scale compactness) in the sense of Allaire and Nguetseng and derive rigorously the averaged PDE models and coefficients obtained previously. [This step uses the framework of Sobolev and Bochner spaces and relies on basic tools like weak convergence methods, compact embeddings, as well as extension theorems in Sobolev spaces. I will be very brief on these aspects, but sufficiently clear such that a large audience, which is not necessarily a mathematical one, can follow.] I will particularly emphasize the key aspect: the role the choice of microstructures (pores, perforations, subgrids, etc.) plays in performing the overall averaging procedure.
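For readers unfamiliar with the procedure, the standard elliptic model problem for periodic homogenization can be sketched as follows (notation chosen here for illustration):

```latex
% Oscillatory elliptic problem with a Y-periodic coefficient A and small scale \varepsilon:
-\nabla \cdot \Big( A\big(\tfrac{x}{\varepsilon}\big)\, \nabla u_\varepsilon \Big) = f
  \ \text{in } \Omega, \qquad u_\varepsilon = 0 \ \text{on } \partial\Omega .
% Cell problems: for each direction e_j, find a Y-periodic corrector w_j with
-\nabla_y \cdot \big( A(y)\, (e_j + \nabla_y w_j) \big) = 0 \ \text{in } Y .
% The "guessed" averaged equation uses the effective (homogenized) tensor
A^{*} e_j = \frac{1}{|Y|} \int_Y A(y)\, \big( e_j + \nabla_y w_j(y) \big)\, dy ,
\qquad -\nabla \cdot \big( A^{*} \nabla u \big) = f \ \text{in } \Omega .
```

Two-scale convergence, introduced in the second part of the lecture, is precisely the tool that justifies rigorously the passage from u_ε to the averaged solution u as ε tends to zero.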

Basic Ref: A. Muntean, V. Chalupecky, Homogenization Method and Multiscale Modeling, Lecture notes at the Institute for Mathematics and Industry, Kyushu University, 2011.

## Course 9 - Prof. Luigi Preziosi, Ph.D. - Mathematical Models of Tumor Growth

The lectures will give an overview of several mathematical frameworks (individual cell-based models, reaction-diffusion models, continuum mechanics models) used to describe tumour evolution in the different phases of growth, from avascular growth to vascular growth through the formation of vascular networks, ending with the detachment of metastases and the invasion of the surrounding tissues. To go into more detail, we will start from the first models developed to describe avascular growth and then introduce the more recent multiphase models. We will then describe the mechanical aspects related to tumour growth and cell migration. We will conclude with some continuous and discrete models describing the process of formation of vascular networks, through vasculogenesis and angiogenesis.

## Course 10 - Prof. Dr. Markus Böl - Aspects of Active Muscle Modeling

This course will provide insight into active muscle tissue modelling. In more detail, we focus on the numerical modelling of skeletal as well as smooth muscles. The lecture is tripartite: first, the basics of muscle contraction will be presented; in a second step, the associated three-dimensional multi-scale/field models are introduced; the lecture ends with a presentation of advanced experimental methods used to validate the aforementioned modelling concepts.