We study physical systems composed of at least two immiscible fluids occupying different regions of space, the so-called phases. Flows of such multi-phase fluids are frequently encountered in industrial applications, which gives rise to the need for their numerical simulation. In particular, the research conducted herein is motivated by the need to model the float glass forming process. In the present contribution, the systems of interest are described mathematically in the framework of so-called diffuse interface models. The thesis consists of two parts.
In the modelling part, we first derive standard diffuse interface models and their generalized variants based on the concept of a multi-component continuous medium and its careful thermodynamic analysis. We provide a critical assessment of assumptions that lead to different models for a given system. Our newly formulated class of generalized models of Cahn-Hilliard-Navier-Stokes-Fourier (CHNSF) type is applicable in a non-isothermal setting. Each model belonging to that class describes a mixture of separable, heat-conducting Newtonian fluids that are either compressible or incompressible. The models capture capillary and thermal effects in thin interfacial regions where the fluids actually mix.
In the computational part, we focus on the development of an efficient and robust numerical solver for a specific isothermal model describing incompressible fluids. The proposed numerical scheme, which is based on the finite element method, partly decouples the system of governing equations at the level of time discretization. We carefully discuss the advanced design of the preconditioner for the computationally most demanding part of the scheme, given by the system of incompressible Navier-Stokes equations with variable coefficients. The numerical scheme has been implemented using the FEniCS computing platform. The code, capable of running parallel 2D and 3D multi-phase flow simulations, is available in the newly developed FEniCS-based library MUFLON.
Diseases of the cardiovascular system are among the most common causes of death in developed countries. Many research questions remain open, for example concerning a better understanding of the physiology of the heart and the main arteries, or the determination of the factors driving aneurysm or stenosis development in the aorta. Furthermore, on a daily basis, a heart surgeon has to estimate the probability of success for different treatment scenarios as opposed to no intervention. In recent decades, methods of investigation with living probands (in vivo) and artificial experiments (in vitro) have increasingly been complemented by computational methods and simulation (in silico). In particular, numerical simulations have the capability to enhance medical imaging modalities with additional information. To date, however, the biomechanical simulation of aortic blood flow under uncertain data remains a major challenge. So far, mostly deterministic models have been used. Yet the measurement data used to configure a simulation are subject to measurement inaccuracies, and for model parameters that are non-measurable in a living body, often only imprecise information is available. In this work, novel development steps for a numerical framework are presented, aiming at the simulation and evaluation of aortic biomechanics using methods of Uncertainty Quantification (UQ). The work includes the modelling of aortic biomechanics as a fluid-structure interaction (FSI) problem with uncertain parameters. By means of a subject-specific workflow, the simulation of different probands, phantoms and, ultimately, patients is enabled. To solve the resulting complex system of partial differential equations, the equations are discretised with the finite element method (FEM) and a novel, parallel-efficient, problem-specific solver is developed.
To verify the numerical framework implemented in the course of this work, a novel analytically solvable benchmark for UQ-FSI problems is proposed. Furthermore, the numerical framework is validated by means of a prototypical aortic phantom experiment. Finally, the UQ-FSI simulation enables the evaluation of a stress overload probability. This novel parameter is evaluated exemplarily by simulating a human aortic arch. This work thereby provides a new contribution to the development of simulation methods for the investigation of aortic biomechanics.
Profile drawings and tracings are essential elements of a thorough scientific documentation of objects in archaeological pottery studies. Within the study of Greek pottery, unwrappings of painted surfaces have a long tradition and a still well-deserved high significance. They show the depiction without photographic distortion or sectioning, enabling archaeologists to analyse and interpret the image as a whole. This is especially true for Corinthian pottery, where the poor preservation of the painting, which tends to flake off, often results in unclear photographs. Nevertheless, traces of flaked-off painting layers are still visible on the surface under specific illumination. Creating profile drawings and unwrappings manually is time-consuming, and manual acquisition with tactile tools like lead wires, profile combs or tracing paper is often not allowed due to the fragile nature of the surfaces. To facilitate this task for pottery archaeologists, we propose a combination of 3D data derived by photogrammetry (SfM) and computed tomography (CT), where each technique can also be used on its own. The fusion of the different data sources is a new approach. It exploits the specific strengths of these non-contact digitisation technologies: SfM offers high-resolution texture, while CT offers high geometric accuracy as well as the added value of providing inner surfaces of closed vessels. Having both SfM and CT data, we combine them by transferring colour information to the vertices of the CT model. Afterwards, we use the GigaMesh Software Framework to enhance geometric features in the surface data, in our case the fine incisions of the black-figure style. In this way, we are able to create accurate and sufficiently detailed unwrappings aligned to the needs of pottery specialists. Additionally, we can compute profile lines with inner and outer contours, and unwrappings of the inner surface showing significant details of the manufacturing process.
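The colour-transfer step described above can be sketched as a nearest-neighbour lookup between the two vertex sets. The following minimal Python example uses made-up toy vertices and colours; it illustrates the idea only and is not the actual pipeline code:

```python
# Sketch: assign each CT vertex the colour of its nearest SfM vertex.
# All vertex/colour arrays below are hypothetical toy data.
import numpy as np
from scipy.spatial import cKDTree

def transfer_colours(sfm_vertices, sfm_colours, ct_vertices):
    """Nearest-neighbour colour transfer from the SfM mesh to the CT mesh."""
    tree = cKDTree(sfm_vertices)       # spatial index over the SfM vertices
    _, idx = tree.query(ct_vertices)   # nearest SfM vertex for every CT vertex
    return sfm_colours[idx]

# Toy data: two SfM vertices (red and blue), three CT vertices.
sfm_v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
sfm_c = np.array([[255, 0, 0], [0, 0, 255]])
ct_v = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0], [0.4, 0.0, 0.0]])
print(transfer_colours(sfm_v, sfm_c, ct_v))
```

In a real pipeline the two meshes must first be registered into a common coordinate frame; the lookup itself then works as above.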
The development of advanced discretization methods for the radiation transport equation is of fundamental importance, since the numerical effort of modeling increasingly complex multidimensional problems with increasing accuracy is extremely challenging. Different forms of this equation arise in several scientific fields, from nuclear fission and fusion to astrophysics, climatology and combustion.
Mathematically, the radiation intensity is usually a rapidly varying function, causing a considerable loss of accuracy for many discretization methods. Depending on the coefficient ranges, the equation behaves like entirely different equation types, making it very difficult to find a discretization method that is efficient in all regimes. Computationally, the huge number of unknowns involved demands not only extremely powerful computers, but also efficient numerical methods and optimized implementations. Today, solvers that cover all coefficient ranges while remaining robust in the diffusion-dominated case are very scarce.
In the last 20 years, Discontinuous Galerkin (DG) methods have been studied for the monoenergetic problem without success, due to a lack of stability in diffusion-dominated cases. Recently, new mathematical developments have fully explained the instability and provided a remedy: a numerical flux that depends on the scattering cross section and the mesh size. The new formulation has proven to be stable and allows the application of multigrid, matrix-free methods, reducing the memory needed for such a large number of unknowns.
We use these numerical methods to address the solution of an energy-dependent problem with a multigroup approach. We study the diffusion approximation of the transport problem, obtaining convergence proofs for the symmetric scattering case and advances in the nonsymmetric case using field-of-values analysis.
For the full transport case, we discretize by means of an asymptotic-preserving, weakly penalized discontinuous Galerkin method, which we solve with a multigrid-preconditioned GMRES solver using nonoverlapping Schwarz smoothers for the energy- and direction-dependent radiative transfer problem.
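The interplay of GMRES with a preconditioner can be illustrated on a small stand-in problem. The sketch below uses a simple Jacobi preconditioner on a 1D variable-coefficient operator, purely to show the mechanics; it is not the multigrid/Schwarz preconditioner developed in the thesis:

```python
# Sketch of preconditioned GMRES on a made-up 1D operator with a strongly
# varying diagonal (a stand-in for a variable-coefficient transport problem).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
d = 2.0 + np.linspace(0.0, 100.0, n)   # strongly varying coefficient
A = sp.diags([-np.ones(n - 1), d, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner M^{-1} = diag(A)^{-1}, exposed as a LinearOperator;
# the thesis uses multigrid with nonoverlapping Schwarz smoothers instead.
Minv = spla.LinearOperator((n, n), matvec=lambda x: x / d)

counts = []
for M in (None, Minv):
    k = [0]
    x, info = spla.gmres(A, b, M=M,
                         callback=lambda arg: k.__setitem__(0, k[0] + 1),
                         callback_type="legacy")
    counts.append(k[0])
print("GMRES iterations without / with preconditioning:", counts)
```

The preconditioned run needs fewer (or at worst as many) iterations, since Jacobi scaling removes the spread of the diagonal; a multigrid preconditioner targets the remaining mesh-dependent stiffness in the same way.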
To address the local thermodynamic equilibrium (LTE) constraint, we use a nonlinear additive Schwarz method to precondition the Newton solver. By solving full local radiative transfer problems for each grid cell, performed in parallel with a matrix-free implementation, we obtain a method capable of addressing large-scale calculations arising from applications such as astrophysics, atmospheric radiation and nuclear engineering.
To the best of our knowledge, this is the first time this preconditioner combination has been used in LTE radiation transport. In several tests, we show the robustness of the approach for different mesh sizes, cross sections, energy distributions and anisotropy regimes, in both the linear and nonlinear cases.
This thesis aims at the investigation and development of the control of waste heat recovery (WHR) systems for heavy-duty trucks based on the organic Rankine cycle. It is desired to control these systems in real time so that they recover as much energy as possible; this is no trivial task, since their highly nonlinear dynamics are strongly affected by external inputs (disturbances), and nonlinear operational constraints must additionally be satisfied. To address this problem, this thesis formulates a dynamic WHR model based on first principles and on empirical relationships from thermodynamics and heat transfer. The model corresponds to a DAE of index 1. In view of the requirements of the employed numerical methods, it includes a spline-based evaluation method for the thermophysical properties needed to evaluate the model. This ensures continuous differentiability of the state trajectories with respect to controls and states on the whole domain of evaluation. Next, an optimal control problem (OCP) for a fixed time horizon is formulated, from which a nonlinear model predictive control (NMPC) scheme is derived. Since NMPC is a state feedback strategy, a state estimator is also formulated in the form of a moving horizon estimation (MHE) scheme. We make use of efficient numerical methods based on the direct multiple shooting (DMS) method for optimal control, backward differentiation formulae for the solution of initial value problems for DAEs, and the corresponding versions of the real-time iteration (RTI) scheme in order to approximately solve the OCP and implement the MHE and NMPC schemes. The simultaneous implementation of NMPC and MHE schemes based on RTI has already been proven to be stable in the control literature.
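The role of the spline-based property evaluation can be illustrated with a small sketch: tabulated property data (here a made-up density curve) are replaced by a cubic spline, which is twice continuously differentiable, so derivative-based methods such as DMS see smooth trajectories. `CubicSpline` stands in for the tailored method of the thesis:

```python
# Sketch: replace a thermophysical property table with a C^2 cubic spline.
# The temperature grid and the "density" samples below are made up.
import numpy as np
from scipy.interpolate import CubicSpline

T = np.linspace(300.0, 500.0, 21)                        # temperature grid [K]
rho = 1000.0 - 0.5 * (T - 300.0) - 1e-4 * (T - 300.0)**2  # synthetic property data

rho_spline = CubicSpline(T, rho)

# Both the value and the derivative are smooth on the whole interval,
# which is what derivative-based optimizers require.
print(float(rho_spline(350.0)), float(rho_spline(350.0, 1)))
```

A plain table lookup with linear interpolation would have kinks at the grid points, and the resulting non-smooth model derivatives can stall Newton-type optimization.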
Several numerical instances of the DMS method for the proposed OCP, NMPC and MHE schemes are tested on a given real-world operation scenario consisting of truck exhaust gas data recorded during a real trip. These data have been kindly provided by our industry cooperation partner Daimler AG. Additionally, the PI and LQGI control strategies, in widespread use in the WHR control literature, are considered for comparison with the proposed scheme. An important result of this thesis is that, taking the higher energy recovery of these two strategies as a reference for the given operation scenario, the proposed NMPC scheme achieves an additional energy generation of around 3% when the full state vector is assumed to be known. Its computational speed allows it to update the control function in less than the considered sampling time of 100 ms, which makes it a suitable candidate for real-time implementation. In the more realistic scenario in which the state has to be estimated from noisy measurements, the combination of the aforementioned NMPC and MHE schemes yields an additional energy generation of around 2%.
Concretely, this thesis presents novel results and advances in the following areas:
• A first-principles DAE model of the WHR system is presented. The model is derived from energy and mass conservation considerations and empirical heat transfer relationships, and it features a tailored evaluation method for thermophysical properties which renders the model at least continuously differentiable with respect to its controls and states on its whole domain of evaluation.
• A new real-time optimization control strategy for the WHR system is developed. It consists of an NMPC strategy based on efficient simulation, optimization and control tools developed in previous works. The scheme is able to explicitly handle nonlinear constraints on controls and states. In contrast to other NMPC instances for WHR systems found in the literature, our scheme's efficient numerical treatment makes it real-time feasible even when the full nonlinear WHR dynamics are considered.
• To the author's knowledge, this is the first implementation that considers both the NMPC and the MHE approaches used simultaneously in the control of the WHR. The combination of NMPC and MHE produces a closed-loop, model-based implementation that can treat realistic measurements as inputs and calculates the corresponding control functions as outputs.
This video demonstrates the digital creation of a profile line, i.e. a drawing, of a ceramic fragment captured with a 3D scanner. Such fragments, also known as sherds, are among the most common finds at archaeological excavations.
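Conceptually, once a rotationally symmetric fragment is oriented with its rotation axis along z, an outer profile line reduces to the maximal radius per height slice. The following is a minimal sketch with toy data, not the actual algorithm used in the video:

```python
# Sketch: extract an outer profile r(z) from an axis-aligned point set by
# taking the maximal radius sqrt(x^2 + y^2) in each height slice.
import numpy as np

def profile_line(vertices, n_slices=4):
    z = vertices[:, 2]
    r = np.hypot(vertices[:, 0], vertices[:, 1])
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z >= lo) & (z <= hi)
        if mask.any():
            profile.append((0.5 * (lo + hi), r[mask].max()))
    return profile

# Toy "vessel": two rings of radius 2 and 3 at heights 0 and 1.
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
ring1 = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta), np.zeros_like(theta)])
ring2 = np.column_stack([3 * np.cos(theta), 3 * np.sin(theta), np.ones_like(theta)])
for z_mid, r_max in profile_line(np.vstack([ring1, ring2]), n_slices=2):
    print(round(float(z_mid), 2), round(float(r_max), 2))
```

A real profile additionally carries the inner contour and the wall section; this sketch only shows why axis orientation is the crucial preprocessing step.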
The rim sherd shown as an example was part of a bowl. It was found in 2017 during excavations in the Roman vicus of Gleisdorf, Styria, Austria. It belongs to a group of fine ware called "Pannonische Glanztonware" (PGW, Pannonian glazed pottery), which was produced between Flavian times and the beginning of the 3rd century AD. PGW imitated the forms and also the decorations of Samian ware vessels. Usually fired in a reducing atmosphere in the kiln and therefore black or grey, the example shown was fired in an oxidizing atmosphere and is therefore orange in colour.
Related publications for computing drawings of ceramic fragments:
[1] Hubert Mara, Martin Kampel and Robert Sablatnig, "Preprocessing of 3D-Data for Classification of Archaeological Fragments in an Automated System", in: Leberl F., Fraundorfer F. (Eds.), Vision with Non-Traditional Sensors, Proc. of the 26th Workshop of the Austrian Association for Pattern Recognition (OEAGM), Schriftenreihe der OCG, Vol. 160, pp. 257-264, 2002.
[2] Robert Sablatnig and Hubert Mara, "Orientation of Fragments of Rotationally Symmetrical 3D-Shapes for Archaeological Documentation", Proc. of the International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), University of North Carolina, Chapel Hill, USA, 2006, pp. 1064-1071. doi:10.1109/3DPVT.2006.105
[3] Hubert Mara and Julia Portl, "Acquisition and Documentation of Vessels using High-Resolution 3D-Scanners", in: Trinkl E. (Ed.), Corpus Vasorum Antiquorum Österreich, Beiheft 1, pp. 25-40, Verlag der Österreichischen Akademie der Wissenschaften, Vienna, Austria, 2013.
This video demonstrates another umbrella transformation, this time using a sphere to unwrap an archaeological find acquired using Structure from Motion (SfM).
The unwrapping was published by Bastian Rieck, Hubert Mara, and Susanne Krömker: Unwrapping Highly-Detailed 3D Meshes of Rotationally Symmetric Man-Made Objects, in proc. of the XXIV. CIPA 2013 Symposium Recording, Documentation and Cooperation for Cultural Heritage. ISPRS - International Society for Photogrammetry and Remote Sensing 2013, pp. 259-264 (ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci.; II-5/W).
The object shown as an example is the Aryballos KFUG IA Inv. G 26 of the collection of the Institut für Archäologie, Karl-Franzens-Universität Graz, Austria. More information can be found via the permalink: http://gams.uni-graz.at/o:arch.2478
This video demonstrates an umbrella transformation using a cone to unwrap an archaeological find acquired with a 3D scanner based on the structured-light principle.
The unwrapping was published by Bastian Rieck, Hubert Mara, and Susanne Krömker: Unwrapping Highly-Detailed 3D Meshes of Rotationally Symmetric Man-Made Objects, in proc. of the XXIV. CIPA 2013 Symposium Recording, Documentation and Cooperation for Cultural Heritage. ISPRS - International Society for Photogrammetry and Remote Sensing 2013, pp. 259-264 (ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci.; II-5/W).
The rollout of the object was published by Paul Bayer and Susanne Lamm: Mehr als nur Ben Hur – Eine 3D-Abrollung des römischen Silberbechers von Grünau, Steiermark, Forum Archaeologiae 87/VI/2018.
Timestamps within the video for the 3D-mesh processing tasks demonstrated:
00:28 Open Mesh
01:00 Mouse Navigation
01:36 Lighting
02:33 Mesh Inspection
04:02 Cleaning
05:48 Hole Reopening
08:12 Mesh Orientation
10:24 Save Mesh
10:38 Various Tips
Nowadays, an increasing number of numerical modeling techniques, notably the finite element method (FEM), are involved in the industrial design process and play a vital role in biomedical engineering. In particular, computational fluid dynamics (CFD) has become a promising tool for investigating fluid behavior, and over recent decades it has also been used to study cardiovascular hemodynamics and to predict blood flow in the cardiovascular system.
However, simulating a fluid in rotating frames is not trivial, as classical fluid computations assume that the geometry of the fluid domain does not change over time. Moreover, due to the high rotation speed and the complex geometry of the ventricular assist device (VAD), turbulent flow develops inside the pump housing. Since directly resolving the Navier-Stokes equations is not feasible with our available computing resources, additional assumptions and approaches are applied to model the eddy formation and to cope with numerical instabilities.
For many applications, there is still a large gap between experimental data and numerical results. Some of the discrepancies stem in particular from uncertain data used in the physical model; therefore, Uncertainty Quantification (UQ) comes into play. The Galerkin-based polynomial chaos expansion method directly delivers the mean and higher stochastic moments in closed form. Due to the properties of the Galerkin projection, spectral convergence is achieved.
This thesis is dedicated to developing an efficient model to simulate the blood pump under uncertain parametric input sources. In a first step, we develop a shear layer update approach built on the Shear-Slip Mesh Update Method (SSMUM); our approach facilitates the update procedure in parallel computing by forcing the local vector to retain the same structure. In a second step, we employ the Variational Multiscale (VMS) method to handle the numerical instability and to approximate the turbulent behavior of the blood. As a consequence of utilizing the intrusive polynomial chaos formulation, a strongly coupled system needs to be solved efficiently. Accordingly, we use a multilevel preconditioner for the stochastic Galerkin system, in which a mean-based preconditioner serves as the smoother. In addition, the mean block is preconditioned with a Schur complement method, which accelerates the solution process. By developing and combining the proposed solvers and preconditioners, solving a large coupled stochastic fluid problem on a modern computer architecture becomes feasible. Furthermore, based on the stochastic solutions obtained from the previously described system, we obtain valuable information about the blood flow together with a certain level of confidence, which is beneficial for designing a new blood-handling device or improving the current model.
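The mean-based preconditioning idea can be illustrated on a toy stochastic Galerkin system of Kronecker-product form; all matrices below are small, made-up stand-ins for the actual discretized blood pump system:

```python
# Toy stochastic Galerkin system A = G0 (x) K0 + G1 (x) K1, preconditioned
# with the mean block only: M = I (x) K0.  All matrices are synthetic.
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 4, 10                                   # stochastic modes, spatial dofs
K0 = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # "mean" operator
K1 = 0.1 * np.diag(rng.random(n))              # small stochastic perturbation
G0 = np.eye(m)
G1 = np.diag(np.arange(m, dtype=float))        # mode-coupling factor (simplified)

A = np.kron(G0, K0) + np.kron(G1, K1)
b = np.ones(m * n)

# Mean-based preconditioner: apply K0^{-1} blockwise to each stochastic mode.
K0_inv = np.linalg.inv(K0)
Minv = spla.LinearOperator(
    (m * n, m * n),
    matvec=lambda x: (K0_inv @ x.reshape(m, n).T).T.ravel())

x, info = spla.gmres(A, b, M=Minv)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```

Because the stochastic perturbation is small relative to the mean block, inverting only `K0` already clusters the spectrum; in the thesis this role is played inside a multilevel preconditioner, with a Schur complement method for the mean block itself.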
Computers are dumb. Artificial intelligence is talked about constantly, but so far the old golden IT rule still largely applies to the devices we deal with: "shit in = shit out". What already works quite well is machine learning, i.e. software that learns to solve specific tasks better and better. Campus reporter Nils Birschmann reports on how this works and spoke with Prof. Dr. rer. nat. Fred A. Hamprecht.
This piece appeared in the series "Campus-Report", which reports on current topics from research and science at the universities of Heidelberg, Mannheim, Karlsruhe and Freiburg. "Campus-Report" airs Monday to Friday at around 7:10 p.m. on Radio Regenbogen (reception in northern Baden: FM 102.8; in central Baden: 100.4; in southern Baden: 101.1).
In this thesis we develop the Multi-Level Iteration schemes (MLI), a numerical method for Nonlinear Model Predictive Control (NMPC) where the dynamical models are described by ordinary differential equations. The method is based on Direct Multiple Shooting for the discretization of the optimal control problems to be solved in each sample. The arising parametric nonlinear problems are solved approximately by setting up a generalized tangential predictor in a preparation phase. This generalized tangential predictor is given by a quadratic program (QP), which implicitly defines a piecewise affine linear feedback law. The feedback law is then evaluated in a feedback phase by solving the QP for the current state estimate as soon as it becomes known to the controller.
The method developed in this thesis yields significant computational savings by updating the matrix and vector data of the tangential predictor in a hierarchy of four levels. The lowest level performs no updates and just calculates the feedback for a new initial state estimate. The second level updates the QP constraint functions and approximates the QP gradient. The third level updates the QP constraint functions and calculates the exact QP gradient. The fourth level evaluates all matrix and vector data of the QP. Feedback schemes are then assembled by choosing a level for each sample. This yields a successive update of the piecewise affine linear feedback law that is implicitly defined by the generalized tangential predictor.
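In the unconstrained case, the feedback law defined by the tangential predictor can be written in closed form, which shows why the feedback phase is cheap: the QP min_u 0.5 u'Hu + g'u + x0'Fu has the affine solution u*(x0) = -H^{-1}(g + F'x0). The data below are hypothetical, standing in for the matrices prepared in the expensive preparation phase:

```python
# Sketch: evaluating an (unconstrained) tangential predictor is a cheap
# affine map of the current state estimate x0.  H, g, F are hypothetical.
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 4.0]])   # QP Hessian, prepared offline
g = np.array([1.0, 0.0])                 # QP gradient, prepared offline
F = np.array([[1.0, 0.5], [0.0, 1.0]])   # sensitivity w.r.t. the initial state

def feedback(x0):
    """Feedback phase: solve the unconstrained QP for the incoming state."""
    return -np.linalg.solve(H, g + F.T @ x0)

print(feedback(np.zeros(2)))
print(feedback(np.array([1.0, 0.0])))
```

With inequality constraints the solution becomes piecewise affine in x0 (one affine piece per active set), which is exactly the generalized tangential predictor that the MLI levels keep up to date.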
We present and discuss four strategies for data communication between the levels in a scheme and we describe how schemes with fixed level choices can be assembled in practice. We give local convergence theory for each level type holding its own set of primal-dual variables for fixed initial values, and discuss existing convergence theory for the case of a closed-loop process. We outline a modification of the levels that yields additional computational savings.
For the adaptive choice of the levels at runtime, we develop two contraction-based criteria to decide whether the currently used linearization remains valid and use them in an algorithm to decide which level to employ for the next sample. Furthermore, we propose a criterion applicable to online estimation. The criterion provides additional information for the level decision for the next sample. Focusing on the second lowest level, we propose an efficient algorithm for suboptimal NMPC.
For the presented algorithmic approaches, we describe structure exploitation in the form of tailored condensing, outline the Online Active Set Strategy as an efficient way to solve the quadratic subproblems and extend the method to linear least-squares problems. We develop iterative matrix-free methods for one contraction-based criterion, which estimates the spectral radius of the iteration matrix.
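A matrix-free estimate of the spectral radius can be obtained with power iteration, which needs only matrix-vector products; the iteration matrix below is a made-up example, not one arising from the actual MLI criterion:

```python
# Sketch: matrix-free power iteration estimating the spectral radius of an
# iteration matrix; contraction holds iff the spectral radius is below 1.
import numpy as np

def spectral_radius(matvec, n, iters=200, seed=0):
    """Estimate the dominant eigenvalue magnitude using only matvec calls."""
    v = np.random.default_rng(seed).standard_normal(n)
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = np.linalg.norm(w)   # converges to |lambda_max| for generic v
        v = w / lam
    return lam

M = np.array([[0.5, 0.2], [0.0, 0.3]])   # hypothetical iteration matrix
rho = spectral_radius(lambda v: M @ v, 2)
print(rho, "contraction:", rho < 1.0)
```

Only the `matvec` callable is required, so the iteration matrix never needs to be formed explicitly, which is what makes the criterion affordable inside a real-time scheme.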
We describe three application fields where MLI provides significant computational savings compared to state-of-the-art numerical methods for NMPC. For both fixed and adaptive MLI schemes, we carry out extensive numerical tests on challenging nonlinear test problems and compare the performance of MLI to a state-of-the-art numerical method for NMPC. The schemes obtained by adaptive MLI are computationally much cheaper while showing comparable performance. By construction, adaptive MLI allows feedback to be given at a much higher frequency, which significantly improves controller performance for the considered test problems.
To perform the numerical experiments, we have implemented the proposed method in a MATLAB(R)-based software package called MLI. It uses a software package for the automatic generation of first- and higher-order derivatives for the solution of the dynamic model as well as of the objective and constraint functions, performs structure exploitation by condensing, and efficiently solves the parametric quadratic subproblems using a software package that implements the Online Active Set Strategy.
Background: In this retrospective randomized case series, we compared bilateral symmetry between OD and OS eyes, intercorneal differences and Functional Optical Zone (FOZ) of the corneal aberrations.
Methods: Sixty-seven normal subjects (with no ocular pathology) who never had any ocular surgery were bilaterally evaluated at Augenzentrum Recklinghausen (Germany). In all cases, standard examinations and corneal wavefront topography (OPTIKON Scout) were performed. The OD/OS bilateral symmetry was evaluated for corneal wavefront aberrations, and FOZ-values were evaluated from the Root-Mean-Square (RMS) of High-Order Wavefront-Aberration (HOWAb). Moreover, correlations of FOZ, spherical equivalent (SE), astigmatism power, and cardinal and oblique astigmatism for binocular vs. monocular, and binocular vs. intercorneal differences were analyzed.
Results: Mean FOZ was 6.56 ± 1.13 mm monocularly, 6.97 ± 1.34 mm binocularly, and 7.64 ± 1.30 mm for the intercorneal difference, all strongly positively correlated, showing that the diameter of glare-free vision is larger under binocular than under monocular conditions. Mean SE was 0.78 ± 1.30 D, and the mean astigmatism power (magnitude) was 0.46 ± 0.52 D binocularly. The corresponding monocular values for these metrics were 0.78 ± 1.30 D and 0.53 ± 0.53 D, respectively. SE, astigmatism magnitude, the cardinal astigmatism component, and FOZ showed strong correlation and even symmetry, while the oblique astigmatism component showed odd symmetry, indicating enantiomorphism between the left and right eyes.
Conclusions: These results confirm OD-vs.-OS bilateral symmetry (which influences binocular summation) of HOWAb, FOZ, defocus, astigmatism power, and cardinal and oblique astigmatism. Binocular Functional Optical Zone calculated from corneal wavefront aberrations can be used to optimize refractive surgery design.
Background: To retrospectively analyse strategies for adjusting refractive surgery plans with reference to the preoperative manifest refraction.
Methods: We constructed seven nomograms based on the refractive outcomes (sphere, cylinder, axis [SCA]) of 150 consecutive eyes treated with laser in situ keratomileusis for myopic astigmatism. We limited the initial data to the SCA of the manifest refraction. All nomograms were based on the following strategy: if, for x diopters (D) attempted in a metric, y D is achieved, then, reversing this statement, x D shall be planned in order to achieve y D of change in that metric. The effects of the use of plus or minus astigmatism notation, spherical equivalent, sphere, principal meridians notation, cardinal and oblique astigmatism, and astigmatic axis were incorporated.
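The reversal strategy can be sketched for a single metric with a linear fit on synthetic data; real nomograms are built from the SCA outcomes of the 150 treated eyes, and the coefficients below are made up:

```python
# Sketch: fit achieved vs. attempted correction, then invert the fit so that
# a target change y maps back to the plan x that should produce it.
import numpy as np

attempted = np.array([-1.0, -2.0, -3.0, -4.0, -5.0])   # planned sphere [D]
achieved = 0.9 * attempted + 0.1                        # synthetic outcomes

slope, intercept = np.polyfit(attempted, achieved, 1)

def plan_for(target):
    """Attempted correction the fitted nomogram predicts to achieve `target`."""
    return (target - intercept) / slope

print(round(float(plan_for(-3.0)), 3))
```

With an undercorrecting system (slope below 1), the inverted plan is slightly larger in magnitude than the target, which is exactly the adjustment a nomogram encodes.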
Results: All nomograms detected subtle differences in the spherical component (p < 0.0001). Nomograms 5 and 7 (using power vectors) and 6 (considering axis shifts) detected significant astigmatic differences (nomogram 5, p < 0.001; nomogram 6, p < 0.05; nomogram 7, p < 0.005 for cardinal astigmatism, p = 0.1 for oblique astigmatism). We observed mild clinically relevant differences (~ 0.5 D) in sphere or astigmatism among the nomograms; differences of ~ 0.25 D in the proposals for sphere or cylinder were not uncommon. All nomograms suggested minor improvements versus actual observed outcomes, with no clinically relevant differences among them.
Conclusions: All nomograms anticipated minor improvements versus actual observed outcomes without clinically relevant differences among them. The minimal uncertainties in determining the manifest refraction (~ 0.6 D) are the major limitation to improving the accuracy of refractive surgery nomograms.
Everyone is talking about artificial intelligence; it is the hotly debated topic of the future par excellence. Some warn against it, others look forward to an easier life thanks to AI. Yet so far we have not even understood how our own human intelligence actually works. To change that, researchers at Heidelberg University are working on developing a wiring diagram of the brain. Campus reporter Nils Birschmann took a look.
This piece appeared in the series "Campus-Report", which reports on current topics from research and science at the universities of Heidelberg, Mannheim, Karlsruhe and Freiburg. "Campus-Report" airs Monday to Friday at around 7:10 p.m. on Radio Regenbogen (reception in northern Baden: FM 102.8; in central Baden: 100.4; in southern Baden: 101.1).
This thesis treats different aspects of nonlinear optimal control problems under uncertainty in which the uncertain parameters are modeled probabilistically. We apply the polynomial chaos expansion, a well-known method for uncertainty quantification, to obtain deterministic surrogate optimal control problems. Their size and complexity pose a computational challenge for traditional optimal control methods. For nonlinear optimal control, this difficulty is increased because a high polynomial expansion order is necessary to derive meaningful statements about the nonlinear and asymmetric uncertainty propagation. To this end, we develop an adaptive optimization strategy which refines the approximation quality separately for each state variable using suitable error estimates. The benefits are twofold: we obtain additional means for solution verification and reduce the computational effort for finding an approximate solution with increased precision. The algorithmic contribution is complemented by a convergence proof showing that the solutions of the optimal control problem after application of the polynomial chaos method approach the correct solution for increasing expansion orders. To obtain a further speed-up in solution time, we develop a structure-exploiting algorithm for fast derivative generation. The algorithm makes use of the special structure induced by the spectral projection to reuse model derivatives and exploit sparsity information, leading to fast automatic sensitivity generation. This greatly reduces the computational effort of Newton-type methods for the solution of the resulting high-dimensional surrogate problem. Another challenging topic of this thesis is optimal control problems with chance constraints, which provide a probabilistic robustification of the solution that is neither too conservative nor underestimates the risk.
We develop an efficient method based on the polynomial chaos expansion to compute nonlinear propagations of the reachable sets of all uncertain states and show how it can be used to approximate individual and joint chance constraints. The strength of the obtained estimator in guaranteeing a satisfaction level is supported by an a priori error estimate with exponential convergence in the case of sufficiently smooth solutions. All methods developed in this thesis are readily implemented in state-of-the-art direct methods for optimal control. Their performance and suitability for optimal control problems are evaluated in a numerical case study on two nonlinear real-world problems, using Monte Carlo simulations to illustrate the effects of the propagated uncertainty on the optimal control solution. As an industrial application, we solve a challenging optimal control problem modeling an adsorption refrigeration system under uncertainty.
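The basic mechanics of a polynomial chaos expansion can be illustrated for a single Gaussian parameter: expand Y = exp(ξ), ξ ~ N(0,1), in probabilists' Hermite polynomials via Gauss quadrature and read mean and variance off the coefficients (the exact values are e^{1/2} and (e-1)e):

```python
# Sketch: 1D polynomial chaos expansion of Y = exp(xi), xi ~ N(0,1),
# using probabilists' Hermite polynomials He_k and Gauss quadrature.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

order = 10
nodes, weights = hermegauss(order + 1)      # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)      # normalize to the Gaussian density

# Projection coefficients c_k = E[Y * He_k(xi)] / k!   (E[He_k^2] = k!)
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    unit = np.zeros(order + 1)
    unit[k] = 1.0
    Hk = hermeval(nodes, unit)              # He_k evaluated at the nodes
    coeffs[k] = np.sum(weights * np.exp(nodes) * Hk) / factorial(k)

mean = coeffs[0]                            # exact value: e^{1/2}
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(round(float(mean), 4), round(float(var), 4))
```

This is the closed-form access to moments mentioned above: once the spectral coefficients are known, mean and variance require no sampling, and the truncation error decays spectrally for smooth maps.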
In this thesis we propose new methods in the fields of numerical mathematics and stochastics for a model-based optimization approach to control dynamical systems under uncertainty. In model-based control, the model-plant mismatch is often large and unforeseen external influences on the dynamics must be taken into account. We therefore extend the dynamical system by a stochastic component and approximate it by scenario trees. The combination of Nonlinear Model Predictive Control (NMPC) and the scenario tree approach to robustify against the uncertainty is of growing interest. In engineering practice, scenario tree NMPC yields a beneficial balance between the conservatism introduced by the robustification and the controller performance. However, solving the occurring optimization problems requires a high numerical effort, which heavily depends on the design of the scenario tree used to approximate the uncertainty. A big challenge is then to control the system in real time. The contribution of this work to the field of numerical optimization is a structure-exploiting method for the large-scale optimization problems based on dual decomposition that yields smaller subproblems. These can be solved in a massively parallel fashion, and additionally their discretization structure can be exploited numerically. Furthermore, this thesis presents novel methods to generate suitable scenario trees to approximate the uncertainty. Our scenario tree generation based on quadrature rules for sparse grids allows for scenario tree NMPC in high-dimensional uncertainty spaces while inheriting the approximation properties of the quadrature rules. A further novel approach of this thesis generates scenario trees based on the interpretation of the underlying stochastic process as a Markov chain. Under the Markovian assumption we provide guarantees for the scenario tree approximation of the uncertainty. Finally, we present numerical results for scenario tree NMPC.
We consider dynamical systems from the chemical industry and demonstrate that the methods developed in this thesis solve optimization problems with large scenario trees in real-time.
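The Markov-chain view of scenario tree generation can be sketched by enumerating all paths through a transition matrix; the two-state chain below is a hypothetical uncertainty model, not one of the chemical-industry systems considered in the thesis:

```python
# Sketch: build a scenario tree from a Markov chain by enumerating all
# state paths of a given depth and attaching their path probabilities.
import itertools
import numpy as np

P = np.array([[0.7, 0.3],    # transition probabilities between two
              [0.4, 0.6]])   # hypothetical uncertainty states

def scenario_tree(P, root, depth):
    """All state paths of length `depth` starting in `root`, with probabilities."""
    n = P.shape[0]
    scenarios = []
    for path in itertools.product(range(n), repeat=depth):
        prob, state = 1.0, root
        for nxt in path:
            prob *= P[state, nxt]
            state = nxt
        scenarios.append(((root,) + path, prob))
    return scenarios

tree = scenario_tree(P, root=0, depth=2)
print(len(tree), "scenarios, total probability", round(float(sum(p for _, p in tree)), 6))
```

In practice the tree is pruned or branched only over the first few stages to keep the optimization problem tractable; the path probabilities then weight the stage costs of the individual scenarios in the NMPC objective.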