This thesis addresses several challenges associated with introducing autonomous auxiliary vehicles, e.g., drones or robots, into a logistics system from the perspective of Operations Research. To this end, optimization models are formalized that enable the assessment of the potential benefits of integrating such vehicles into last-mile delivery. As the resulting models are computationally challenging, the thesis continuously refines the formulations and develops appropriate algorithms that are capable of producing high-quality solutions with reasonable computational effort. This facilitates effective problem-solving and aids in shaping the design of future delivery fleets, laying the groundwork for future decision support systems. In Operations Research, Mixed-Integer Programming solvers play a pivotal role. As this thesis demonstrates, this concerns both solving Mixed-Integer Linear Programming formulations directly and integrating parts of them in matheuristic frameworks. This raises the fundamental question of whether the performance of such solvers can be enhanced in any meaningful way by adjusting their default algorithmic behavior on a per-instance basis. We trace the roots of this problem to the Algorithm Selection Problem and develop a novel methodology that leverages Machine Learning to formalize a prescriptive optimization problem. Using a tailored Branch & Bound approach, this methodology enables us to effectively compute the (predictably) optimal configuration of a Mixed-Integer Programming solver on a per-instance basis with minimal computational overhead. The potential impact extends beyond specific problem domains, fostering a more comprehensive and synergistic approach to decision-making and optimization.
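The per-instance configuration idea can be viewed through the lens of algorithm selection: a learned performance model per solver configuration predicts a cost from instance features, and the configuration with the best prediction is chosen. The following is a deliberately simplified sketch with hypothetical configuration names and linear stand-in models, not the Branch & Bound methodology developed in the thesis:

```python
# Toy per-instance algorithm selection: a (hypothetical) performance
# model per solver configuration predicts runtime from instance
# features; we select the configuration with the lowest prediction.
# Real systems would use trained ML models over a structured space.

def predict_runtime(weights, features):
    """Linear stand-in for a learned per-configuration runtime model."""
    return sum(w * x for w, x in zip(weights, features))

# hypothetical configurations: name -> model weights over instance features
CONFIGS = {
    "default":          [0.5, 1.0, 0.2],
    "aggressive_cuts":  [0.8, 0.4, 0.3],
    "heavy_heuristics": [0.3, 1.2, 0.1],
}

def select_configuration(features):
    """Pick the configuration with the best predicted runtime."""
    return min(CONFIGS, key=lambda c: predict_runtime(CONFIGS[c], features))

# instance features, e.g. scaled (#variables, #constraints, density)
print(select_configuration([1.0, 0.2, 0.5]))
```

The real methodology additionally has to search an exponentially large configuration space, which is where the tailored Branch & Bound approach comes in.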
The primary focus of this work was on exploring the utility of VR and continuous response tracking in psychological experiments. Continuous tracking elucidates the fine-grained dynamics of decision-making. Distributional methods, such as survival analysis (SA) and my newly developed method, Spatiotemporal Survival Analysis (StSA), served as the primary tools to analyze these responses. Studying the time course of behavior in classical paradigms can deepen our insight into cognitive processes such as working memory and response conflict. This methodology not only sheds new light on classical paradigms but also establishes a groundwork for future research aiming to unravel the complexities of cognitive processes in experimental contexts.
This work also sets out to demonstrate how the legacy of well-established prior experimental paradigms can be preserved in VR. In a series of experiments, VR was used to replicate findings in tasks related to visual perception (chapter 4) and working memory (chapters 5 and 6). These experiments demonstrate not only that classical results can be replicated in VR, but also that this technology enables a more fine-grained understanding of human behavior through continuous tracking of response behavior (demonstrated in chapter 5). Finally, this work concludes with remarks on the future of experiments in VR and the utility of survival analysis using continuous movement trajectory data.
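The distributional view taken by survival analysis can be sketched with a minimal Kaplan-Meier estimator over response times (a generic illustration with made-up numbers, not the StSA method developed in this work):

```python
# Minimal Kaplan-Meier survival estimator over response times.
# Each trial contributes a "time until the response event"; trials
# without a response before timeout are treated as censored.

def kaplan_meier(times, events):
    """times: observation times; events: 1 = response occurred, 0 = censored.
    Returns (time, survival probability) steps of the estimator."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:                        # an observed response at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                 # censored or not, one fewer at risk
    return curve

# hypothetical response times in seconds; one trial censored (timeout)
rts    = [0.42, 0.55, 0.61, 0.70, 0.90]
events = [1,    1,    0,    1,    1   ]
for t, s in kaplan_meier(rts, events):
    print(f"t = {t:.2f} s  S(t) = {s:.2f}")
```

S(t) here is the probability that no response has yet occurred by time t, which is exactly the "time-course of behavior" perspective described above.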
Mycotoxins are toxic secondary metabolites synthesized by several species of filamentous fungi. The occurrence of hazardous mycotoxins has been studied along the whole food production chain, i.e., in crops and in foods for consumption. In order to protect consumers against exposure, strategies aimed at reducing and mitigating the occurrence of mycotoxins at pre- and post-harvest stages have been implemented. Furthermore, maximum limits for mycotoxins in major food/feed commodities are set in legislation, and health-based guidance values are derived by scientific advisory committees as a basis for assessing risks. Although these strategies and regular monitoring allow for a significant reduction of the risks from exposure to mycotoxins, humans continue to be exposed to mycotoxins, in some circumstances at levels exceeding the current health-based guidance values. It can therefore be hypothesized that certain stages in crop production and consumer food choices contribute to mycotoxin exposure but are presently not considered. The work presented in this habilitation developed and evaluated new monitoring strategies to investigate this hypothesis.
Soil is the basis for the cultivation and production of crops but has not been evaluated as a source of mycotoxins until now. This is relevant considering that soil is the habitat for the inoculum of several mycotoxigenic fungal species and that mycotoxins are ubiquitous in the environment, including soil. Moreover, mycotoxins can be mobilized from soil to plant, which may increase the risk of contamination of harvested commodities. Evaluation of soil as a source of mycotoxins should also consider the role of management practices and treatments in modern agriculture and how these affect soil physicochemical and biological properties as well as the occurrence and fate of mycotoxins in soils. In addition, some mycotoxins have antimicrobial properties and may thus influence the soil microbiome, with the consequence of changes in soil biogeochemical processes and functions.
Current strategies for preventing mycotoxin exposure are restricted to the food production chain, namely from cropping/harvest until the retail level. However, food commodities are also stored in households, where mycotoxins can form through late fungal infection, posing a risk of exposure. In this regard, human exposure to mycotoxins is determined by individual food habits, preferences, and lifestyle. It is therefore important to identify the major sources of mycotoxin exposure in different population groups, and also how exposure is driven by food preferences and lifestyle, in particular at the household level.
The overall aim of this habilitation is to extend the knowledge on mycotoxins as environmental pollutants by evaluating steps beyond the food production chain that may contribute to mycotoxin-related risks for humans and for the environment. Concretely, this means monitoring soils as a source of mycotoxins and assessing how alimentary habits and lifestyle may affect exposure at the consumer level. This entails the design of an integrated monitoring strategy concept that includes unexplored sources and risks of mycotoxin exposure. This habilitation covers 23 published papers and is divided into three main chapters:
(i) Plastic mulching and soil quality indices (chapter 2): In this chapter, the impact of mulching systems, namely straw and plastic, on soil physical, chemical, and biological properties as well as on biogeochemical processes was analysed using the example of asparagus and strawberry crops. The starting point was a literature review on the benefits and potential risks related to the use of plastic mulching in agriculture [chapter 2.1]. This was followed by field monitoring experiments with a focus on the effects of plastic mulching on soil (micro)organisms [chapters 2.2 and 2.3] and on modifications of soil biogeochemical processes under short- and long-term application [chapters 2.4 and 2.5]. Due to the increasing awareness of the use of plastics in the environment, plastic mulching was also investigated for its contribution to soil plastic pollution [chapters 2.6 - 2.8].
(ii) Occurrence and fate of mycotoxins in agricultural soils (chapter 3): Mycotoxins do occur in soil, but their biosynthesis in situ as well as their persistence are influenced by soil biogeochemical processes related to the structure and function of the soil microbiome. The occurrence and fate of mycotoxins were investigated, starting with the development of sensitive methods for the analysis of mycotoxins in soils [chapter 3.1], considering that levels in soils may be a factor of one hundred lower than the concentrations observed in food commodities. Then, suitable sampling strategies were developed to account for the heterogeneous distribution of mycotoxins in soils [chapter 3.2]. Sampling and analytical methods were applied to investigate the occurrence of mycotoxins in soils, exemplified by studies on the use of plastic mulching in agriculture [chapters 3.3 and 3.4]. Since mycotoxin levels in soils reflect only concentrations measured at the time of sampling, without considering spatio-temporal dynamics, a further focus of our studies was to evaluate the biosynthesis and stability (fate) of the mycotoxins in the soil matrix depending on the integrity of the soil microbiome [chapters 3.5 - 3.8].
(iii) Human exposure and biomonitoring (chapter 4): Here, it was evaluated how alimentary habits and lifestyle may contribute to the risk of mycotoxin exposure. First, (bio-)monitoring strategies were investigated, including the identification and characterization of suitable biomarkers in biological matrices that are representative of exposure [chapters 4.1 - 4.4], considering also differences in food intake between infants and adults. In the context of sensitive population groups, an in silico approach was also used to model their mycotoxin exposure [chapter 4.5]. Finally, the risk of mycotoxin exposure was evaluated on the basis of alimentary habits and contaminant levels in food commodities, as well as of lifestyle aspects and awareness of the risks from mouldy food [chapters 4.6 and 4.7].
The results of this habilitation provide new insights into previously unexplored sources of mycotoxins with relevance for humans and for the environment. This includes the development and application of suitable sampling and (bio-)monitoring strategies to assess (i) mycotoxin occurrence in agricultural soils and (ii) exposure at the consumer level. This work shows that soil is a source of mycotoxins and that agricultural practices influence the integrity of the soil and, consequently, in situ mycotoxin concentrations. Moreover, alimentary habits, lifestyle, and knowledge about mycotoxins are decisive factors for exposure at the household level. Both aspects are not yet considered in current risk assessment strategies. Therefore, an integrated interdisciplinary model for mycotoxin prevention strategies, starting in the soil and including the consumer level, is suggested.
Nuclear magnetic resonance (NMR) spectroscopy is an excellent tool for reaction and process monitoring. Process monitoring is often carried out online on flowing samples. Benchtop NMR spectrometers are especially well-suited for these applications because they can be installed close to the studied process. However, analyzing a fast-flowing liquid with NMR spectroscopy is challenging because short residence times in the magnetic field of the spectrometer result in inefficient polarization build-up and thus poor signal intensity. This is particularly problematic for benchtop NMR spectrometers because of their compact design. Therefore, different methods to counteract this prepolarization problem in benchtop NMR spectroscopy were studied experimentally in the present work. The established approaches that were studied gave only poor results at high flow velocities. To overcome this, signal enhancement by Overhauser DNP (ODNP) was used, which is based on polarization transfer from unpaired electron spins to nuclear spins and happens on very short time scales, resulting in high signal enhancements even in fast-flowing liquids. A corresponding set-up was developed and used for the studies: the line leading to the 1 Tesla benchtop NMR spectrometer first passes a fixed bed of a radical matrix, which is placed in a Halbach magnet equipped with a microwave cavity to facilitate the polarization transfer. With this ODNP set-up, excellent results were obtained even for the highest studied flow velocities. This shows that ODNP is an enabler for fast-flow benchtop NMR spectroscopy.
ODNP requires the presence of unpaired electrons in the sample, which is usually accomplished by the addition of stable radicals. However, radicals affect the nuclear relaxation times and can hamper NMR detection. This was circumvented by immobilizing the radicals in a fixed bed, allowing for the measurement of radical-free samples when using ex situ DNP techniques (DNP build-up and NMR detection happen at different places) with flow-induced separation of the hyperpolarized liquid from the radicals. This makes the synthesis of robust and chemically inert immobilized radical matrices mandatory. It was accomplished by immobilizing the radical glycidyloxy-tetramethylpiperidinyloxyl (GT) with a polyethyleneimine (PEI) linker on the surface of controlled porous glasses (CPG). Both the porosity of the CPGs and the size of the PEI linker were varied, resulting in a set of distinct radical matrices for continuous-flow ODNP. The study shows that CPGs with PEI linkers provide robust, inert, and efficient ODNP matrices.
Another method to address the prepolarization problem in continuous-flow NMR applications is paramagnetic relaxation enhancement (PRE) using a T1 relaxation agent. In the present work, a PRE agent was developed that was again based on CPGs functionalized with a PEI linker and GT. Here, the interaction of the studied liquid with this PRE agent significantly accelerates the build-up of nuclear polarization prior to NMR detection, which enables quantitative measurements in continuous-flow benchtop NMR applications. The results show that the flow regime for quantitative measurements can be greatly extended by the use of the synthesized PRE agent.
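The prepolarization problem described above follows from mono-exponential T1 build-up: with plug flow at velocity v through a prepolarizing field of active length L, the attainable polarization fraction is 1 - exp(-L/(v*T1)). A small numerical illustration (L, T1, and the velocities are assumed example values, not figures from this work):

```python
import math

def polarization_fraction(L, v, T1):
    """Fraction of thermal-equilibrium polarization reached after a
    residence time t = L / v in the magnet, assuming mono-exponential
    T1 build-up and plug flow (a simplification)."""
    t_res = L / v            # residence time in the prepolarizing field [s]
    return 1.0 - math.exp(-t_res / T1)

# Illustrative numbers: 0.2 m prepolarization path, T1 = 3 s
# (a typical order of magnitude for protons in aqueous liquids).
for v in (0.01, 0.1, 1.0):   # flow velocity in m/s
    print(f"v = {v:5.2f} m/s -> P/P0 = {polarization_fraction(0.2, v, 3.0):.3f}")
```

The fraction collapses as v grows, which is why fast flow starves the benchtop spectrometer of signal and why ODNP, whose polarization transfer is much faster than T1, sidesteps the problem.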
Riparian areas are an important transition zone in freshwater systems, connecting and regulating both aquatic and terrestrial systems. They are characterized by high biodiversity and are important for conservation. However, riparian areas are frequently under stress from human activities. Two of these stressors are agricultural activity and the introduction of invasive plant species. One example of the impact of agriculture is pollution with heavy metals such as copper. Copper is commonly used as a fungicide in agriculture and can be carried by streams into riparian areas during flooding events, reaching areas previously unaffected by pollution. There, at high concentrations, copper is toxic to animals, plants, and microorganisms, damaging DNA, enzymes, cell membranes, and chloroplasts. This leads to a reduction in the growth and reproduction of the affected organisms, potentially disrupting the ecosystems. Invasive alien plants are another major cause of biodiversity loss in animals, plants, and microorganisms. They can negatively affect entire ecosystems both above- and belowground, leading to alterations of resources and ecosystem functions. Soil fungi in particular are important for ecosystem functions, for example by forming symbiotic interactions with plants, which can be disrupted by plant invasion and copper pollution. Two common invasive plant species in riparian areas are Fallopia japonica and Impatiens glandulifera. They frequently invade stands of the native Urtica dioica. The aim of this project was to investigate the impact of these two plant invaders, especially of F. japonica, on native soil communities, further modified by copper pollution.
This was done in two parts: a field study investigating the impact of the invasive plants on soil properties, invertebrates, fungi, and soil activity, and a mesocosm experiment under the influence of copper pollution, comparing the impact of copper on plants, soil invertebrates, microorganisms, and soil activity depending on the presence of a native or invasive plant species. Under field conditions, plant invasion mainly reduced the diversity of fungi directly associated with the plants but not the biomass of fungi. Direct impacts on soil invertebrates were also observed. In the mesocosms, microbial biomass was reduced under the invasive plant and no impact on invertebrates was observed. Similarly, soil activity was not affected in the field but was strongly reduced by the presence of F. japonica in the mesocosms. These results align with the enemy release hypothesis, indicating that these invasive species, especially F. japonica, may be less associated with fungal parasites in their invasive range, allowing them to perform better than native species. These findings also indicate that these invaders have varied and contrasting impacts on belowground systems, making their effects highly context-dependent and site-specific. Copper pollution inhibited growth in both F. japonica and U. dioica. Urtica dioica seemed to be more sensitive to copper pollution than the invasive plant. In the soil, copper pollution further amplified the reduction of soil activity by the invasive plant and had variable effects on invertebrates and microbial biomass. This indicates that F. japonica may gain an advantage over the commonly occurring U. dioica, especially in polluted areas. The negative impact of copper pollution on soil functions could therefore be amplified by facilitating invasion by F. japonica, which also negatively impacts soil functions.
Therefore, disturbances by agricultural activity, one major source of copper pollution, could have an even stronger impact across much wider distances and in previously undisturbed areas.
Product manufacturing is performed in a massively automated and increasingly customized manner.
However, overall production speed is limited by the automation of inspection, since each product has to be checked for the required quality.
A widespread and often-used quality assurance method is visual surface inspection.
Automated surface inspection relies on an inspection plan and defect recognition algorithms.
Both inspection planning and the development of defect recognition algorithms heavily rely on the availability of representative image data containing various product surface textures and imperfections, showing a wide variety of possible surface responses to different viewing and lighting conditions.
Due to advancements in manufacturing, defects in products occur rarely and with varying frequencies of appearance, and their annotation is a subjective and laborious process.
Further, since the surface texture is often not relevant to product performance and thus not controlled, products with different surface textures are not treated as distinct product samples and are thus not provided.
Motivated by the aforementioned problems, this work introduces the following contributions: (1) image synthesis requirements for industrial quality inspection and a novel realistic image synthesis pipeline satisfying those requirements (Chapter 4), (2) texture synthesis requirements for industrial quality inspection and a procedural approach to parameterized surface texture modeling incorporating domain knowledge (Chapter 5), and (3) defect synthesis requirements for industrial quality inspection as well as a procedural approach to parameterized defect modeling (Chapter 6).
The contributions presented in this thesis make it possible to obtain, in a controllable and automated manner, the required amount of image data containing realistic and varying surface textures resembling machined surfaces as well as diversified geometrical defects with automated, pixel-precise annotations (Chapters 7, 8).
The presented contributions enable inspection planning and the development of machine vision algorithms for defect recognition to be performed completely virtually by inspection planning experts, without computer graphics knowledge.
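As a toy illustration of parameterized procedural texture modeling (not the pipeline of Chapter 5; all parameters and their values are hypothetical), a machining-like groove texture can be generated from a handful of interpretable parameters:

```python
import math
import random

def machining_texture(size=64, period=8.0, angle_deg=15.0, noise=0.1, seed=0):
    """Toy parameterized surface texture: parallel sinusoidal grooves
    with a given period and orientation, plus uniform roughness noise.
    Returns a size x size 2D list of gray values in [0, 1]."""
    rng = random.Random(seed)
    a = math.radians(angle_deg)
    dx, dy = math.cos(a), math.sin(a)       # groove direction normal
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            phase = 2.0 * math.pi * (x * dx + y * dy) / period
            v = 0.5 + 0.5 * math.sin(phase)          # groove profile
            v += rng.uniform(-noise, noise)          # surface roughness
            row.append(min(1.0, max(0.0, v)))        # clamp to gray range
        img.append(row)
    return img

tex = machining_texture()
print(len(tex), len(tex[0]))
```

Because every parameter (period, orientation, roughness, seed) is explicit, arbitrarily many distinct texture samples can be produced in a controlled way, which is the core idea behind procedural texture synthesis for inspection planning.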
Machine learning and artificial intelligence are pivotal pillars in the area of computer vision, especially object detection and classification. They support or replace conventional methods such as morphological operators or manual surveillance. These models, tailored and trained for various use cases, typically possess a vast number of trainable parameters to cover a wide range of scenarios. However, their sizes have reached a point where classical computers struggle to train them efficiently, both in terms of time and computational resources. Moreover, the data itself is becoming increasingly detailed and thus larger. In our case, we are dealing with 2D or 3D image data, specifically gray-value images.
One promising avenue to mitigate these computational demands is quantum computing. Owing to superposition, entanglement, and other quantum-mechanical properties, there exists a theoretical advantage over classical methods. In this doctoral thesis, we aim to investigate the practical utility of quantum hardware in several application scenarios.
The first part of our study focuses on encoding classical image data into quantum states. To design quantum algorithms, we must first transform image information, represented as gray values, into quantum states. This step is crucial and a main part of the development of quantum algorithms. Image information is converted into quantum states through methods like basis encoding, amplitude encoding, or phase encoding. We contribute to this field by enhancing a phase encoding method called the Flexible Representation of Quantum Images (FRQI). This contribution is included in our two papers [1, 2] and in Chapter 4 of this thesis. Our approach reduces the number of so-called CX operations and consequently the errors observed on current quantum hardware. We also evaluate the scalability in terms of feasibility and usability on existing hardware.
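For intuition, the FRQI encoding named above can be reproduced classically: each gray value g becomes an angle theta = (pi/2) * g / 255 stored on a color qubit entangled with the pixel-position register. The following sketch builds the ideal state vector and inverts it; it illustrates the representation only, not our circuit construction from Chapter 4:

```python
import math

def frqi_state(gray_values):
    """Classically build the ideal FRQI state vector for 2**n gray values
    in [0, 255]: amplitude (1/sqrt(m)) * cos(theta_i) on |0>|i> and
    (1/sqrt(m)) * sin(theta_i) on |1>|i>, with theta_i = (pi/2) * g/255."""
    m = len(gray_values)                 # number of pixels (a power of two)
    norm = 1.0 / math.sqrt(m)
    thetas = [math.pi / 2 * g / 255 for g in gray_values]
    state = [norm * math.cos(t) for t in thetas]    # color qubit |0> block
    state += [norm * math.sin(t) for t in thetas]   # color qubit |1> block
    return state

def decode(state):
    """Recover the gray values from the ideal (noise-free) state."""
    m = len(state) // 2
    return [round(255 * 2 / math.pi * math.atan2(state[m + i], state[i]))
            for i in range(m)]

img = [0, 64, 128, 255]                  # a 2x2 gray-value image, flattened
s = frqi_state(img)
print(sum(a * a for a in s))             # squared norm: should be ~1.0
print(decode(s))                         # should recover the pixel values
```

On real hardware, of course, the angles are prepared by controlled rotations, and the CX count of that preparation is exactly what our contribution reduces.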
We adapted our research for the following parts of the thesis based on the results of the first part. We cannot encode and retrieve large images on current quantum devices. We must either simulate the quantum hardware, as in the second part of this thesis, reduce the image size, or use hybrid approaches, as in the other parts of this thesis.
In the second part, we concentrate on amplitude encoding, with Quantum Probability Image Encoding (QPIE), and apply the Quantum Fourier Transform (QFT) to the quantum states. With this approach, we can detect the orientation of objects in images by using additional post-processing methods. We compare the results of the QFT with those of the Fast Fourier Transform (FFT) and demonstrate that, at least on the simulator, we obtain the same results as with the classical method (see Chapter 5).
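The agreement with the classical method is expected, because the QFT realizes (up to qubit ordering) the unitary discrete Fourier transform on the amplitude vector. The following sketch applies that DFT to a QPIE-style amplitude-encoded signal with made-up pixel values; it illustrates the mathematics only, not our circuit implementation:

```python
import cmath
import math

def qft(amplitudes):
    """Apply the unitary discrete Fourier transform that the QFT
    implements on an amplitude vector of length N = 2**n."""
    N = len(amplitudes)
    w = cmath.exp(2j * math.pi / N)
    return [sum(amplitudes[k] * w ** (j * k) for k in range(N)) / math.sqrt(N)
            for j in range(N)]

# QPIE amplitude encoding: pixel values normalized to a unit vector
pixels = [1.0, 2.0, 3.0, 4.0]
norm = math.sqrt(sum(p * p for p in pixels))
state = [p / norm for p in pixels]

spectrum = qft(state)
# Parseval: the unitary transform preserves the norm of the state
print(sum(abs(c) ** 2 for c in spectrum))
```

In the thesis, the orientation information is then read off from the magnitude spectrum by classical post-processing.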
The third part of the study is about edge detection of objects in gray-value images. We use the idea of a quantum artificial neuron as the core building block of our algorithm (see [3] and Chapter 6). In this part, our primary focus is on the algorithm's robustness in the face of current hardware limitations. To tailor it further to the current hardware, we developed six variations of the algorithm with the aim of reducing the number of quantum circuits, and we compare the results of these six variations. Our adaptation of the algorithm allows us to examine image sizes that were previously unattainable by quantum algorithms on existing quantum hardware.
In the fourth part, we focus on hybrid algorithms in the form of quantum transfer learning. Drawing on the experiences of the first part regarding the practical usability of current hardware, quantum transfer learning offers a way to circumvent these limitations by keeping some parts of the algorithm classical while executing other parts on quantum hardware. Our algorithm demonstrates its utility in detecting small cracks with a thickness of approximately one or two pixels in concrete samples (see [4] and Chapter 7). We highlight differences between simulators and current quantum computers and demonstrate the capability to detect the cracks in the images with the current quantum hardware.
Tropical dry forests are crucial for climate adaptation, economic development, and poverty alleviation, offering vital ecosystem services. However, this understudied and inadequately protected biome faces severe threats like deforestation and land-use change and is often overlooked in national policies. This neglect poses risks to services like clean water provision, diverse habitats, and climate change mitigation. Changes in land use within these forests alter environmental conditions, causing reduced biodiversity and vegetation restructuring. The regeneration process relies on abiotic factors and natural soil recovery. In this dissertation, I investigated the role of two keystone organism groups, biological soil crusts ('biocrusts') and leaf-cutting ants (LCA), in dry forest regeneration. These ecosystem engineers can enhance topsoil quality, introduce essential nutrients and water, and influence plant germination and growth, thereby potentially affecting dry forest regeneration. My primary objectives were to determine the relevance of biocrusts in the Caatinga dry forest, their interaction with LCA, the essential ecosystem services both provide, and their response to chronic anthropogenic disturbance. I employed various techniques to document biocrust diversity and distribution, along with the alterations of the abiotic environment caused by biocrusts and LCA.
Biocrusts, diverse components of the Caatinga dry forest, were present in various successional stages, including agricultural fields, regenerating areas, and old-growth forests. Dominated by cyanobacteria, their coverage depended on factors like leaf-litter burial, disturbance levels, soil stability, seasonality, and the presence and activity of LCA nests. A balance between vascular plant cover and disturbance pressure was also crucial for biocrust distribution. Both biocrusts and LCA impacted key abiotic factors for dry forest resilience, but with markedly differing ecological consequences and reactions to anthropogenic disturbance. Biocrusts, by reducing water infiltration, promoted runoff, fostering small-scale source-sink patterns that benefit vascular vegetation. They enhanced soil fertility and provided erosion protection, with older biocrusts exhibiting stronger positive effects. Anthropogenic disturbance disrupted biocrust succession, limiting their services and leading to negative feedback loops. LCA nests increased compaction and reduced water infiltration, potentially hindering forest regeneration. These physico-hydrological barriers persisted, especially in disturbed areas, impacting forest dynamics and resilience for years, even after colony death. The adverse effects of LCA on water availability and soil resistance escalated with anthropogenic disturbance, though LCA refuse had the potential to mitigate some negative changes in soil properties.
Both biocrusts and LCA act as edaphic ecosystem engineers in the Caatinga dry forest, impacting vascular plants through their abiotic influence. A greenhouse experiment demonstrated the positive effects of both organisms on plant germination, development, and survival across various functional groups. This dissertation also showed for the first time that LCA can accelerate germination. These facilitative effects are attributed to improved soil conditions, including enhanced water availability and nutrient richness. Given the species-specific responses and the prevalence of LCA nests and biocrust coverage in regenerating areas, their activities likely play a pivotal role in shaping successional trajectories and regeneration dynamics in dry forests. This underscores the significant potential of both ecosystem engineers to influence the regeneration and resilience of tropical dry forests.
In summary, in the human-modified landscapes of the Caatinga, biocrusts and LCA act as ecosystem engineers, influencing vital soil properties. Biocrusts protect degraded soils and facilitate plant establishment, while the impact of LCA depends on the nest structure. These engineers play a crucial role in dry forest regeneration and sustainability. However, climate change and land degradation pose significant threats to both the ecosystems and their engineers, potentially altering their effects in opposite directions. This research enhances understanding of the biome's functioning, regeneration, and resilience, providing insights for sustainable management, restoration, and conservation to support biodiversity and human well-being.
Reactive flows are an important component of many environmental and industrial processes and a subject of research in many fields. One example of such a process is the cleaning of the exhaust gases of a combustion engine in the automotive industry, for which catalytic converters and porous filters are used. In all of these research areas, mathematical modeling and simulation are employed, making it possible to increase the efficiency of the process or product under development and to obtain data that are inaccessible to theoretical or experimental research methods. Numerical algorithms for the simulation of reactive flows have been developed for decades and have proven their capability in solving applied industrial and environmental problems in many areas. The class of reactive flows, however, and in particular that of reactive flows at the pore scale, is very rich, and there is no general algorithm that is efficient for all flows of this kind. The algorithms must be adapted to particular classes of problems. This thesis focuses on the development of efficient algorithms for the pore-scale simulation of processes in catalytic filters. A distinctive feature of these filters is that the filter material contains an inert, impermeable skeleton and nanoporous active (washcoat) particles in which the reactions take place. Mass transport occurs within the pores, and in the washcoat particles the transport can be described by convection (often neglected) and diffusion. The mathematical models are based on a convection-diffusion-reaction equation or on systems of such equations. Among the greatest challenges in solving such problems are the nonlinearities of the reactive terms and the heterogeneity of the flow field, which is caused by the heterogeneity of the porous media.
The latter means that fast and slow zones coexist within one and the same material sample, so that the type of the governing equations changes locally. This implies that algorithms for parabolic or hyperbolic problems cannot simply be selected; the algorithms should either be robust for every flow regime or be able to adapt to the change in the type of the equations. Moreover, one can exploit the fact that the washcoat (in which the reactions take place and which imposes substantial changes on the algorithms) occupies only a limited part of the computational domain. All of this motivates revisiting the class of splitting methods, adapting them to the considered class of problems, and studying their stability and performance numerically. This is the main topic of the dissertation. Increasing computational power and changing computer architectures require a revision of the previous software implementation and of the data structures used so far. A further goal of this work is therefore the development of software solutions that enable the simulation of reactive flows for high-dimensional problems. Various methods have been used for pore-scale reactive flow simulation. This work concentrates on two general classes of methods: splitting algorithms and implicit-explicit schemes; fully implicit algorithms are used for comparison. A set of benchmark geometries and chemical reactions, ranging in complexity from simple 1D and linear cases to CT-scan-based real filter domains and real nonlinear complex chemical reactions, was considered and investigated.
The present work shows that, for all considered cases, using splitting methods for the (weak) transport term and the reaction term yields improvements in memory usage and convergence speed.
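The splitting idea for a convection-diffusion-reaction equation can be sketched in one dimension. The following is a minimal illustration with hypothetical grid parameters and a linear reaction term, not the thesis's actual solver: transport and diffusion are advanced with an explicit stencil, and the reaction is handled in a decoupled substep (here integrated exactly, since it is linear).

```python
import numpy as np

def split_step(u, dx, dt, a=1.0, D=0.01, k=5.0):
    """One first-order Lie splitting step for u_t + a*u_x = D*u_xx - k*u.
    Substep 1: explicit upwind advection + central diffusion (periodic BCs).
    Substep 2: the linear reaction u' = -k*u, integrated exactly."""
    adv = -a * (u - np.roll(u, 1)) / dx                       # upwind, a > 0
    dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (adv + dif)
    return u * np.exp(-k * dt)

n = 200
dx = 1.0 / n
dt = 1e-3                # satisfies a*dt/dx <= 1 and 2*D*dt/dx**2 <= 1
x = np.arange(n) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)                          # initial pulse
for _ in range(100):
    u = split_step(u, dx, dt)
```

For stiff nonlinear reactions the second substep would become a local ODE solve per cell, which is precisely where splitting pays off: the expensive reaction work stays decoupled from the transport stencil and can be restricted to the washcoat cells.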
Comparative analysis of scalar fields in scientific visualization often involves distance functions on topological abstractions. This paper focuses on the merge tree abstraction (representing the nesting of sub- or superlevel sets) and proposes the application of the unconstrained deformation-based edit distance. Previous edit distances on merge trees often suffer from instability: small perturbations in the data can lead to large distances between the abstractions. While some existing methods can handle so-called vertical instability, the unconstrained deformation-based edit distance addresses both vertical and horizontal instabilities, the latter also known as saddle swaps. We establish that computing this distance is NP-complete and provide an integer linear program formulation for its computation. Experimental results on the TOSCA shape matching ensemble provide evidence for the stability of the proposed distance. We thereby showcase the potential of handling saddle swaps for the comparison of scalar fields through merge trees.
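The sublevel-set merging that a merge tree encodes can be computed for a 1D sampled function with a simple union-find sweep. The sketch below is illustrative only (it is not the paper's edit-distance algorithm): it records each finite branch of the merge tree as a (birth, merge) value pair using the elder rule.

```python
def merge_tree_pairs(f):
    """Finite branches of the sublevel-set merge tree of a 1D sampled
    function. Sweep vertices by increasing value with union-find; at each
    saddle the younger component merges into the older one (elder rule)."""
    n = len(f)
    parent = list(range(n))
    birth = {i: f[i] for i in range(n)}     # birth value of each root

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs, active = [], [False] * n
    for i in sorted(range(n), key=lambda v: f[v]):
        active[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < f[i]:     # skip zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old
    return pairs
```

The global minimum's branch never dies and forms the root branch of the tree; an edit distance then compares such branch structures between two fields.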
Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.
Intelligent formal methods
(2022)
Information technology has become an indispensable part of our daily lives, with a significant proportion of our everyday activities relying on the safe and reliable operation of computer systems. One promising approach to ensuring these critical properties is the use of so-called formal methods, a broad range of rigorous, mathematical techniques for specifying, developing, and verifying hardware, software, cyber-physical systems, and artificial intelligence. Unlike traditional quality assurance approaches, such as testing, formal methods offer the unique ability to provide formal proof of the absence of errors, a trait particularly desirable in the context of today's ubiquitous safety-critical systems. However, this advantage comes at a cost: formal methods require extensive training, often assume idealized or limited settings, and typically demand substantial computational resources.
Inspired by the vision of artificial intelligence, this work seeks to automate formal methods and dramatically expand their applicability. To achieve this goal, we develop a novel, innovative type of formal method that combines inductive techniques from machine learning with deductive techniques from logic. We name this new approach "intelligent formal methods" and apply it to three fundamental areas: software verification, hardware and software synthesis, and the generation of formal specifications.
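The combination of an inductive (learning) phase with a deductive (proof) phase can be illustrated on a toy invariant-inference task. Everything below, including the hypothesis space and the bounded exhaustive check standing in for a real prover, is a made-up example rather than the system developed in this work.

```python
def run_loop(n):
    """Program under analysis: x := 0; while x < n: x := x + 2."""
    states, x = [], 0
    while x < n:
        states.append((x, n))
        x += 2
    states.append((x, n))
    return states

# A tiny hypothesis space of candidate invariants over the state (x, n).
candidates = {
    "x_even":        lambda x, n: x % 2 == 0,
    "x_le_n":        lambda x, n: x <= n,
    "x_even_le_n+1": lambda x, n: x % 2 == 0 and x <= n + 1,
}

# Inductive phase: keep every candidate consistent with sampled traces.
traces = [s for n in (0, 4, 6, 10) for s in run_loop(n)]
learned = {name: p for name, p in candidates.items()
           if all(p(x, n) for x, n in traces)}

# Deductive phase: a bounded exhaustive check of initiation and
# consecution, standing in for an SMT-based proof.
def is_inductive(p, bound=50):
    if not p(0, 0):                              # initiation
        return False
    for n in range(bound):
        for x in range(bound):
            if p(x, n) and x < n and not p(x + 2, n):
                return False                     # consecution fails
    return True

proved = {name for name, p in learned.items() if is_inductive(p)}
```

Here "x_le_n" survives the inductive phase (all sampled n happen to be even) but fails the deductive check, which illustrates why learned candidates must still be verified.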
Objective. Gradient-based optimization using algorithmic derivatives can be a useful technique to improve engineering designs with respect to a computer-implemented objective function. Likewise, uncertainty quantification through computer simulations can be carried out by means of derivatives of the computer simulation. However, the effectiveness of these techniques depends on how 'well-linearizable' the software is. In this study, we assess how promising derivative information of a typical proton computed tomography (pCT) scan computer simulation is for the aforementioned applications. Approach. This study is mainly based on numerical experiments, in which we repeatedly evaluate three representative computational steps with perturbed input values. We support our observations with a review of the algorithmic steps and arithmetic operations performed by the software, using debugging techniques. Main results. The model-based iterative reconstruction (MBIR) subprocedure (at the end of the software pipeline) and the Monte Carlo (MC) simulation (at the beginning) were piecewise differentiable. However, the observed high density and magnitude of jumps was likely to preclude most meaningful uses of the derivatives. Jumps in the MBIR function arose from the discrete computation of the set of voxels intersected by a proton path, and could be reduced in magnitude by a 'fuzzy voxels' approach. The investigated jumps in the MC function arose from local changes in the control flow that affected the amount of consumed random numbers. The tracking algorithm solves an inherently non-differentiable problem. Significance. Besides the technical challenges of merely applying algorithmic differentiation (AD) to existing software projects, the MC and MBIR codes must be adapted to compute smoother functions. For the MBIR code, we presented one possible approach for this, while for the MC code, this will be subject to further research. For the tracking subprocedure, further research on surrogate models is necessary.
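The 'fuzzy voxels' idea, that replacing a hard voxel membership with a smooth blend removes derivative jumps, can be demonstrated in one dimension. The linear interpolation used below is one simple smoothing choice for illustration, not necessarily the publication's exact scheme.

```python
import numpy as np

density = np.array([1.0, 3.0, 2.0, 5.0])        # voxel values along a ray
centres = np.arange(len(density)) + 0.5          # voxel centre coordinates

def hard_value(t):
    """Hard membership: the value jumps at every voxel face."""
    return density[min(int(t), len(density) - 1)]

def fuzzy_value(t):
    """One simple 'fuzzy' variant: blend neighbouring voxels linearly."""
    return float(np.interp(t, centres, density))

# Central finite differences across the voxel face at t = 1:
h = 1e-3
d_hard = (hard_value(1 + h) - hard_value(1 - h)) / (2 * h)     # O(1/h) spike
d_fuzzy = (fuzzy_value(1 + h) - fuzzy_value(1 - h)) / (2 * h)  # bounded slope
```

The hard assignment yields a finite-difference "derivative" that blows up as the step size shrinks, whereas the blended version has a bounded, meaningful slope, which is the property gradient-based optimization needs.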
We discuss the dynamics of the formation of a Bose polaron when an impurity is injected into a weakly interacting one-dimensional Bose condensate. While for small impurity-boson couplings this process can be described within the Froehlich model as generation, emission and binding of Bogoliubov phonons, this is no longer adequate if the coupling becomes strong. To treat this regime we consider a mean-field approach beyond the Froehlich model which accounts for the backaction on the condensate, complemented with Truncated Wigner simulations to include quantum fluctuations. For the stationary polaron we find a periodic energy-momentum relation and a non-monotonic relation between impurity velocity and polaron momentum, including regions of negative impurity velocity. Studying the polaron formation after turning on the impurity-boson coupling quasi-adiabatically and in a sudden quench, we find a very rich scenario of dynamical regimes. Due to the build-up of an effective mass, the impurity is slowed down even if its initial velocity is below the Landau critical value. For larger initial velocities we find deceleration and even backscattering caused by the emission of density waves or grey solitons and the subsequent formation of stationary polaron states in different momentum sectors. In order to analyze the effect of quantum fluctuations we consider a trapped condensate to avoid 1D infrared divergences. Using Truncated Wigner simulations in this case we show under what conditions the influence of quantum fluctuations is small.
European crayfish species are considered keystone species in freshwater ecosystems. As such, their conservation is of paramount importance to prevent biodiversity decline and loss of ecosystem function. Unfortunately, today, European crayfish species are among the most threatened crayfish species worldwide. An especially relevant threat is represented by the invasive pathogen Aphanomyces astaci. This oomycete, native to North America, has been one of the main causes of crayfish population declines across Europe since its first introduction 150 years ago, to the point of causing the local extinction of many populations. Over the years, several introductions of A. astaci strains into Europe took place through the translocation of infected North American crayfish, and were followed by mass mortalities across European crayfish populations. However, in the past 20 years, more and more reports have emerged of European crayfish populations surviving A. astaci infections or being latently infected with the pathogen. The survival of infected crayfish can be ascribed both to the increased resistance of some crayfish populations and to the decreased virulence of some A. astaci strains. As the relationship between host and pathogen in Europe is changing, it is imperative to gain insights into what shapes these changes in order to understand the implications for the long-term coexistence of crayfish and A. astaci in Europe. With this thesis, I focused on the virulence of A. astaci, looking for the mechanisms, patterns and determinants underlying the pathogen’s virulence variability. In particular, by characterising the virulence of several A. astaci strains, I identified two possible different mechanisms of loss of virulence. I revealed that A. astaci’s virulence variability is not linked to variation in in vitro growth and sporulation, traits classically associated with a pathogen’s virulence. Based on these results, I suggest that the pathogen’s virulence determinants are likely its “virulence effectors”, in which the A. astaci genome is enriched. Additionally, with the present work I provided transcriptomic evidence of coevolution between A. astaci and European crayfish. I showed that the haplogroups based on the canonical mitochondrial markers, often used to assess A. astaci’s virulence to inform management actions, do not differ in some of their characteristic phenotypic traits, including virulence. Finally, after the experimental characterisation of virulence and the assessment of its likely phenotypic determinants, i.e., sporulation and growth, the next and more comprehensive step in studying the pathogen’s virulence is through genomic approaches. To this aim, I provided key data for future comparative genomic studies, i.e., highly complete genome assemblies based on Nanopore (3) and Illumina reads (11). These data can be exploited in several ways, from building a pangenome of the species to a genome-wide association study (GWAS), which can offer a much deeper understanding of A. astaci’s virulence and adaptability. In particular, the identification of the loci associated with virulence through a GWAS has the potential to be revolutionary for the management of A. astaci, as it can become the basis for a genomic tool to quickly and accurately assess the virulence of newly introduced strains, directing management actions towards the more dangerous strains.
We report the experimental implementation of dynamical decoupling on a small, non-interacting ensemble of up to 25 optically trapped, neutral Cs atoms. The qubit consists of the two magnetic-insensitive Cs clock states \(\vert F = 3, m_F = 0\rangle\) and \(\vert F = 4, m_F = 0\rangle\), which are coupled by microwave radiation. We observe a significant enhancement of the coherence time when employing Carr-Purcell-Meiboom-Gill (CPMG) dynamical decoupling. A CPMG sequence with ten refocusing pulses increases the coherence time from 16.2(9) ms by more than one order of magnitude to 178(2) ms. In addition, we make use of the filter function formalism and utilise the CPMG sequence to measure the background noise floor affecting the qubit coherence, finding a power-law noise spectrum \(1/\omega^\alpha\) with \(\mathit{\alpha} = 0.89(2)\). This finding is in very good agreement with an independent measurement of the noise in the intensity of the trapping laser. Moreover, the measured coherence evolutions also exhibit signatures of low-frequency noise originating at distinct frequencies. Our findings point toward noise spectroscopy of engineered atomic baths through single-atom dynamical decoupling in a system of individual Cs impurities immersed in an ultracold 87Rb bath.
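Why CPMG pulses refocus slow noise can be seen in a toy model with quasi-static detuning noise. The sketch below is a deliberately simplified model (not the paper's filter-function analysis of \(1/\omega^\alpha\) noise): it compares free Ramsey evolution with a CPMG timing pattern, where each pi pulse flips the sign of the accumulated phase.

```python
import numpy as np

def coherence(n_pi, T=1.0, sigma=8.0, shots=4000, seed=0):
    """|<exp(i*phi)>| for a qubit dephased by quasi-static detuning noise
    delta ~ N(0, sigma^2), with n_pi equally spaced pi pulses (CPMG)."""
    delta = np.random.default_rng(seed).normal(0.0, sigma, shots)
    if n_pi == 0:
        phi = delta * T                  # Ramsey: full phase accumulation
    else:
        # CPMG timing: pulses at (k + 1/2) * T / n_pi flip the sign of the
        # phase accumulation; static noise cancels segment by segment.
        edges = np.concatenate(([0.0],
                                (np.arange(n_pi) + 0.5) * T / n_pi,
                                [T]))
        signs = (-1.0) ** np.arange(n_pi + 1)
        phi = delta * np.sum(signs * np.diff(edges))
    return abs(np.exp(1j * phi).mean())
```

For perfectly static noise the cancellation is exact; in the experiment it is only partial because the noise has a finite correlation time, and the filter-function formalism quantifies exactly which spectral components survive.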
Olive mill wastewater (OMW) is a by-product of olive oil extraction, and its disposal on soil has been associated with significant environmental challenges, including toxic effects on soil organisms and on groundwater quality due to its high phenolic content. Recent studies on the dynamics of OMW degradation in soil treat the environmental conditions as the main factors influencing the fate and transport of polyphenols in the soil-water system. The understanding of season-dependent phenol leaching from OMW-treated soil has remained elusive, as field studies are hindered by spatial variability and complex environmental dynamics. Therefore, controlled lysimeter experiments were conducted to investigate the leaching and transport mechanisms of OMW-derived phenolic compounds in soil.
This thesis presents the results of an 18-week lysimeter experiment conducted in a laboratory setting, aimed at monitoring and understanding the distribution and leaching of OMW-derived phenolic compounds in soil after OMW application. The experiment spanned four seasonal simulation phases, comprising two winter phases, one spring phase, and one summer phase, under semi-arid Tunisian climate conditions. The effects of OMW on soil leachate properties, soil water repellency, and soil water retention capacity were assessed.
The soil leachates exhibited varying degrees of recovery across the different simulation phases. However, persistent salinity in the leachates and high water repellency at the top of the OMW-treated soils were recorded. The findings also revealed that OMW application changed the pore size distribution in OMW-treated soils. Most of the OMW-derived phenols were immobilized in the upper 5 cm of the soil. Notably, soluble phenolic compounds promoted the formation of coarser pores at the expense of fine pores, suggesting that OMW-derived organic carbon played a crucial role in controlling the depth-dependent transport mechanisms of OMW within the soil matrix.
In conclusion, this study provides valuable insights into the fate and impact of OMW-derived phenolic compounds in soil. It emphasizes the significance of conducting OMW applications with careful irrigation practices and thorough phenol leaching surveys to minimize the risk of potential groundwater contamination. Additionally, more experiments are warranted to investigate the sorption capacity of the soil during and after OMW application and its influence on the stability of soluble phenolic compounds in soils.
Thermo-optic interaction significantly differs from the usual particle-particle interactions in physics, as it is retarded in time. A prominent platform for realising this kind of interaction are photon Bose–Einstein condensates, which are created in dye-filled microcavities. The dye solution continually absorbs and re-emits the photons, causing the photon gas to thermalize and to form a Bose–Einstein condensate. Because of a non-ideal quantum efficiency, these cycles heat the dye solution, creating a medium that provides an effective thermo-optic photon–photon interaction. So far, only a mean-field description of this process exists. This paper goes beyond that by working out a quantum mechanical description of the effective thermo-optic photon–photon interaction. To this end, a self-consistent treatment of the temperature diffusion builds the backbone of the model. Furthermore, the wide separation of the experimental timescales allows for deriving an approximate Hamiltonian. The resulting quantum theory is applied in the perturbative regime to both a harmonic and a box potential for investigating its prospects for precise measurements of the effective photon–photon interaction strength.
Although photon Bose–Einstein condensates have already been used for studying many interesting effects, the precise role of the photon–photon interaction has not been fully clarified up to now. In view of this, it is advantageous that these systems allow measuring both the intensity of the light leaking out of the cavity and its spectrum at the same time. Therefore, the photon–photon interaction strength can be determined once via analysing the condensate broadening and once via examining the interaction-induced modifications of the cavity modes. As the former method depends crucially on the concrete shape of the trapping potential and the spatial resolution of the camera used, interferometric methods promise more precise measurements. To this end, the present paper works out the impact of the photon–photon interaction upon the cavity modes. A quantum mechanical description of the photon–photon interaction, including the thermal cloud, builds the theoretical backbone of the method. An exact diagonalisation approach introduced here exposes how the effective photon–photon interaction modifies both the spectrum and the width of the photon gas. A comparison with a variational approach based on the Gross–Pitaevskii equation quantifies the contribution of the thermal cloud in the respective applications.
Surface roughness plays a critical role in, e.g., fluid dynamics and contact mechanics. For example, to evaluate fluid behavior for different roughness properties, real-world or numerical experiments are performed. Numerical simulations of rough surfaces can speed up these studies because they help collect more relevant information. However, current methods make it hard to simulate rough surfaces with deterministic or structured components. In this work, we present a novel approach that simulates rough surfaces with a Gaussian process (GP) and a noise model, because GPs can model structured and periodic elements. GPs generalize traditional methods and are not restricted to stationarity, so they can simulate a wider range of rough surfaces. In this paper, we summarize the theoretical similarities of GPs with auto-regressive moving-average processes and introduce a linear process view of GPs. We also show examples of ground and honed surfaces simulated by a predefined model. The proposed method can also be used to fit a model to measurement data of a rough surface. In particular, we demonstrate this by modeling turned profiles and surfaces that are inherently periodic.
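Sampling a rough profile from a GP with a summed stationary-plus-periodic kernel, the mechanism the abstract alludes to, can be sketched as follows; the kernels and parameter values are illustrative choices, not the paper's fitted model.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 300)

def rbf(x, ell=0.5, var=1.0):
    """Stationary squared-exponential kernel: random roughness."""
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def periodic(x, period=2.0, ell=0.7, var=0.5):
    """Periodic kernel: a structured, e.g. feed-mark-like, component."""
    d = np.abs(x[:, None] - x[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell**2)

# Sum kernel plus a small nugget (measurement-noise / jitter term).
K = rbf(x) + periodic(x) + 1e-5 * np.eye(x.size)
L = np.linalg.cholesky(K)
profile = L @ np.random.default_rng(42).standard_normal(x.size)  # one realisation
```

The Cholesky factor also exposes the "linear process" view mentioned in the abstract: the profile is a linear transform of white noise, just as in an auto-regressive moving-average model.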