Events
SSD - Hughes Seminar - CANCELED
Prof. Thomas J.R. Hughes, Ph.D. - The Isogeometric Approach to Analysis
Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, USA
Abstract
The vision of Isogeometric Analysis was first presented in a paper published on October 1, 2005 [1]. Since then, it has become a focus of research within both Finite Element Analysis (FEA) and Computer Aided Design (CAD) and is rapidly becoming a mainstream analysis methodology and a new paradigm for geometric design [2]. The key concept of the approach is a new foundation for FEA, built on the rich geometric descriptions originating in CAD, so that a single geometric model serves as the basis for both design and analysis.
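To make the shared geometric foundation concrete, here is a minimal sketch (illustrative code, not from [1]; the function names and the quarter-circle example are my own) of the B-spline basis functions behind NURBS, evaluated with the Cox-de Boor recursion. In an isogeometric method, these same functions describe the geometry exactly and are reused as the analysis basis.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """i-th B-spline basis function of degree p at x (Cox-de Boor recursion)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        left = (x - knots[i]) / d * bspline_basis(i, p - 1, knots, x)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + p + 1] - x) / d * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

def nurbs_point(ctrl, weights, p, knots, x):
    """Point on a NURBS curve: rational, weighted combination of control points."""
    N = np.array([bspline_basis(i, p, knots, x) for i in range(len(ctrl))])
    return (N * weights) @ np.asarray(ctrl) / (N @ weights)

# Quadratic NURBS quarter circle: the weights make the geometry *exact*,
# which is the property isogeometric analysis inherits from CAD.
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])
print(nurbs_point(ctrl, w, 2, knots, 0.5))  # -> [0.7071 0.7071], on the unit circle
```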
In this overview, I will describe some areas in which progress has been made in developing improved methodologies to efficiently solve problems that have been at the very least difficult, if not impossible, within traditional FEA. I will also describe current areas of intense activity and areas where problems remain open, representing both challenges and opportunities for future research (see, e.g., [3,4]).
References
[1] T.J.R. Hughes, J.A. Cottrell and Y. Bazilevs, Isogeometric Analysis: CAD, Finite Elements, NURBS, Exact Geometry and Mesh Refinement, Computer Methods in Applied Mechanics and Engineering, 194 (2005), 4135-4195.
[2] J.A. Cottrell, T.J.R. Hughes and Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, Wiley, Chichester, U.K., 2009.
[3] Special Issue on Isogeometric Analysis (eds. T.J.R. Hughes, J.T. Oden and M. Papadrakakis), Computer Methods in Applied Mechanics and Engineering, 284 (1 February 2015), 1-1182.
[4] Special Issue on Isogeometric Analysis: Progress and Challenges (eds. T.J.R. Hughes, J.T. Oden and M. Papadrakakis), Computer Methods in Applied Mechanics and Engineering, 316 (1 April 2017), 1-1270.
EU Regional School - Huerta Seminar
Prof. Antonio Huerta, Ph.D. - Low- and High-order Approximations of Parameterized Engineering Problems in Computational Solid and Fluid Mechanics
Department of Applied Mathematics III, Universitat Politècnica de Catalunya, Spain
Abstract
In the first part, an overview of recent advances in modern hybrid discretization approaches, namely the face-centered finite volume (FCFV) and hybridizable discontinuous Galerkin (HDG) methods, is presented. The former is an efficient low-order approach that has been shown to be extremely robust to mesh distortion and stretching, which are usually responsible for the degradation of classical finite volume solutions [R. Sevilla, M. Giacomini, and A. Huerta, "A face-centred finite volume method for second-order elliptic problems", Int. J. Numer. Methods Eng. 115(8), pp. 986-1014 (2018); R. Sevilla, M. Giacomini, and A. Huerta, "A locking-free face-centred finite volume (FCFV) method for linear elasticity", arXiv:1806.07500 (2018)]. The latter is a high-order strategy originally proposed in [B. Cockburn, J. Gopalakrishnan, and R. Lazarov, "Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems", SIAM J. Numer. Anal. 47(2), pp. 1319-1365 (2009)]. Recently, an alternative high-order HDG formulation that allows the pointwise fulfillment of the conservation of angular momentum has been proposed. This aspect is crucial when approximating problems in computational solid and fluid mechanics in which quantities of engineering interest (e.g., compliance and aeronautical forces) have to be evaluated starting from the stress tensor [R. Sevilla, M. Giacomini, A. Karkoulias, and A. Huerta, "A superconvergent hybridisable discontinuous Galerkin method for linear elasticity", Int. J. Numer. Methods Eng. 116(2), pp. 91-116 (2018); M. Giacomini, A. Karkoulias, R. Sevilla, and A. Huerta, "A superconvergent HDG method for Stokes flow with strongly enforced symmetry of the stress tensor", arXiv:1802.09394 (2018)].
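As a hedged sketch of the hybridization idea common to HDG and FCFV (generic textbook notation for the model Poisson problem, not the specific formulations of the papers cited above), write $-\Delta u = f$ in mixed form with a flux $\boldsymbol{q} = -\nabla u$ and introduce an independent trace unknown $\hat{u}_h$ on the mesh faces:

```latex
% Local problem on each element K, given the face trace \hat{u}_h:
\[
(\boldsymbol{q}_h,\boldsymbol{v})_K - (u_h,\nabla\cdot\boldsymbol{v})_K
  + \langle \hat{u}_h,\,\boldsymbol{v}\cdot\boldsymbol{n}\rangle_{\partial K} = 0,
\qquad
-(\boldsymbol{q}_h,\nabla w)_K
  + \langle \hat{\boldsymbol{q}}_h\cdot\boldsymbol{n},\,w\rangle_{\partial K} = (f,w)_K,
\]
% with the stabilized numerical flux
\[
\hat{\boldsymbol{q}}_h\cdot\boldsymbol{n}
  = \boldsymbol{q}_h\cdot\boldsymbol{n} + \tau\,(u_h - \hat{u}_h)
  \quad \text{on } \partial K,
\]
% and a global problem enforcing flux continuity across interior faces:
\[
\sum_K \langle \hat{\boldsymbol{q}}_h\cdot\boldsymbol{n},\,\mu\rangle_{\partial K} = 0
  \quad \text{for all face test functions } \mu.
\]
```

Only the face unknown $\hat{u}_h$ enters the global system; $(\boldsymbol{q}_h, u_h)$ are then recovered element by element. The FCFV method can be viewed as a lowest-order member of this hybrid family, with unknowns located at the face barycenters.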
In the second part, the proper generalized decomposition (PGD) is employed to devise efficient separated representations of the solution of parameterized engineering problems. The resulting PGD-based computational vademecums allow the fast evaluation of solutions involving user-supplied data, such as boundary conditions and geometrical configurations of the domain.
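A hedged sketch of the separated representation behind such a vademecum (generic PGD notation; the symbols are not taken from the talk): for a problem with parameters $\mu_1,\dots,\mu_M$, the PGD seeks an approximation of the form

```latex
\[
u(\boldsymbol{x},\mu_1,\dots,\mu_M) \;\approx\;
  \sum_{i=1}^{n} F_i(\boldsymbol{x}) \prod_{j=1}^{M} G_i^{j}(\mu_j),
\]
```

where the modes $F_i$ and $G_i^{j}$ are computed greedily, one product term at a time, typically by an alternating-directions fixed point over the factors. The multiparametric problem is thus solved once offline; evaluating the vademecum for user-supplied boundary conditions or geometric parameters then reduces to multiplying precomputed low-dimensional factors, which is what makes the fast online evaluation possible.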
SSD - Helander Seminar
Prof. Dr. Per Helander - Fusion Energy, Stellarators, and the Wendelstein 7-X Project
Department of Stellarator Theory, Max Planck Institute for Plasma Physics Greifswald, Germany
Abstract
Do we need new energy sources? Despite all the rhetoric from politicians, the vast majority of all energy still comes from fossil fuels and will continue to do so for the foreseeable future – the main reason being the enormous technical difficulties facing any alternative solution. There are, in fact, only a very few carbon-free options that could even remotely satisfy mankind's present hunger for energy.
Fusion energy is one of these options. Fusion reactions occur in the Sun and other stars, but another reaction, that between deuterium and tritium producing helium, has a much higher cross section and would be easier to realise on Earth. The fuel must, however, be heated to at least 100 million degrees and be thermally insulated from its surroundings. The most promising way to accomplish this is to confine the resulting plasma in a toroidal magnetic field.
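For concreteness, the reaction in question and its well-known energy split:

```latex
\[
\mathrm{D} + \mathrm{T} \;\longrightarrow\;
  {}^{4}\mathrm{He}\;(3.5~\mathrm{MeV}) \;+\; \mathrm{n}\;(14.1~\mathrm{MeV})
\]
```

Each reaction releases 17.6 MeV, roughly four fifths of it carried away by the neutron; the charged helium nucleus remains confined by the magnetic field and can reheat the plasma.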
Two main confinement concepts have emerged along these lines: the tokamak and the stellarator. Both have been explored over decades of research and have made great strides in recent years. A very large tokamak, ITER, is now being built at Cadarache in the south of France, with the aim of demonstrating a positive energy balance from a fusion plasma for the first time. A much more modest stellarator – but still the world's largest experiment of this type – has recently started operation in Greifswald. This device, Wendelstein 7-X, aims to show the feasibility of fusion in stellarators, which offer potential benefits in comparison with tokamaks.
In my talk, I will elaborate on the need for fusion research and on the physical principles of magnetic plasma confinement, and describe the Wendelstein 7-X project. I will also show the latest results from this device, which recently managed to achieve the best plasma confinement ever in a stellarator.
EU Regional School - Pock Seminar
Prof. Dr. Thomas Pock - Variational Methods for Computer Vision: Modeling, Numerical Solution and Learning
Institute for Computer Graphics and Vision, Graz University of Technology, Austria
Abstract
Variational methods (also known as energy minimization methods) are among the most flexible methods for solving inverse problems. The idea is to set up an energy functional whose low-energy states correspond to physically plausible solutions of the problem; computing the solution of a problem is thereby formulated as an optimization problem. In this course, you will learn about variational methods for solving classical computer vision problems such as image restoration, image segmentation, and stereo and motion estimation. You will learn both the basic modeling aspects (different regularization terms and data-fitting terms) and the numerical optimization algorithms used to solve the models. Moreover, you will learn about functional lifting, a technique that reformulates a hard problem (usually hard due to non-convexity) in a higher-dimensional space, where it becomes convex. Finally, you will also learn about our recent activities to improve variational models by means of machine learning techniques.
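As a concrete instance of the energy-minimization idea, here is a minimal sketch (illustrative NumPy code, not course material; the step sizes, parameters, and test image are my assumptions) of total-variation image restoration, the classical ROF model, minimized with a first-order primal-dual scheme:

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=8.0, n_iter=200):
    """Minimize lam/2 * ||u - f||^2 + TV(u) with a primal-dual iteration."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)  # tau * sigma * ||grad||^2 <= 1
    for _ in range(n_iter):
        # Dual ascent on the TV term, then projection onto the unit ball.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm
        # Primal proximal step on the quadratic data term, then over-relaxation.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old
    return u

# Usage: restore a noisy piecewise-constant test image.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The quadratic term fits the noisy data f while the total-variation term suppresses oscillations but preserves sharp edges: one data-fitting term, one regularization term, and one optimization algorithm, the three ingredients named above.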
SSD - Van der Aalst Seminar
Prof. Dr. Wil van der Aalst - Process Mining and Simulation: A Match Made in Heaven
Chair of Process and Data Science, RWTH Aachen University
Abstract
Event data are collected everywhere: in logistics, manufacturing, finance, healthcare, customer relationship management, e-learning, e-government, and many other domains. The events found in these domains typically refer to activities executed by resources at particular times and for particular cases. Process mining provides a novel set of tools to exploit such data. Event data can be used to discover the real processes, to detect deviations from normative processes, and to analyze bottlenecks and waste. However, process mining tends to be backward-looking. Fortunately, simulation can be used to explore different design alternatives and to anticipate performance problems. Through simulation experiments, various "what if" questions can be answered, and redesign alternatives can be compared with respect to key performance indicators. However, making a good simulation model may be very time-consuming, and models may be outdated by the time they are ready. Therefore, process mining and simulation complement each other well.
In his talk, Wil van der Aalst will argue that process mining and simulation form a match made in heaven. He will introduce process mining concepts and show (1) how to discover simulation models, (2) how to view real and simulated event data in a unified manner, and (3) how to make process mining more forward-looking using simulation. He will also explain how his team applied process mining in over 150 organizations, developed the open-source tool ProM, and influenced the 20+ commercial process mining tools available today.
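As a hedged illustration of the raw material process discovery works on (a toy log and minimal code, unrelated to ProM or any commercial tool), the following sketch extracts the directly-follows relation, a basic building block of many discovery algorithms, from an event log of cases, activities, and timestamps:

```python
from collections import Counter, defaultdict

# A toy event log: each event has a case id, an activity, and a timestamp.
event_log = [
    ("case1", "register", 1), ("case1", "check", 2), ("case1", "approve", 3),
    ("case2", "register", 1), ("case2", "check", 2), ("case2", "reject", 4),
    ("case3", "register", 2), ("case3", "check", 3), ("case3", "approve", 5),
]

# Group events into traces: the activity sequence per case, ordered by time.
traces = defaultdict(list)
for case, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case].append(activity)

# Count the directly-follows relation a -> b across all traces.
dfg = Counter()
for seq in traces.values():
    for a, b in zip(seq, seq[1:]):
        dfg[(a, b)] += 1

for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
# The approve/reject split after 'check' is the kind of behavioral structure
# a discovery algorithm turns into a process (or simulation) model.
```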