Zoom link for conference: https://us06web.zoom.us/j/83626784729?pwd=QUpZWVl3c0l4bVpEeTFIZWk0ai9XQT0
The Computation-based Science and Technology Research Centre (CaSToRC) will hold a final conference on the topic of simulations in Multiscale Physical and Biological Systems.
Explanation methods aim to make the underlying decision process of neural networks transparent. These methods are increasingly deployed by practitioners and will be required by law under the European "right to explain" legislation.
In this talk, I will demonstrate that current explanation methods are easily manipulated and therefore cannot serve as reliable proof of a sensible decision-making process. Theoretically, this can be shown using mathematics well known to physicists from General Relativity, such as certain extension theorems for smooth maps on submanifolds. If time permits, I will also discuss mitigation strategies that can be derived from the theoretical analysis.
In this talk, we review and discuss some of the optimizations necessary to achieve high performance on the latest processors, in terms of both memory bandwidth and compute throughput. We present results achieved on both multi- and many-core systems.
Algebraic multigrid methods have become the state of the art for solving discretizations of the Dirac equation in Lattice QCD, in particular when the systems are ill-conditioned. Lattice QCD relies on supercomputers to speed up its simulations, and algebraic multigrid methods have pushed the computational boundaries at large scale, opening the possibility of scalable simulation at the exascale.
Unfortunately, in certain extreme situations, such as runs on many nodes and/or very ill-conditioned linear systems, which occur when the quark masses are small, coarsest-level solves end up representing most of the execution time of the overall multigrid solve. If the coarsest-level solver is e.g. GMRES, with its many dot products, scalability is at risk. We therefore take on the task of understanding and improving the scalability of coarsest-level solves.
To this end, we merge four techniques into a single solver for coarsest-level computations: block Jacobi preconditioning, polynomial preconditioning, Krylov subspace recycling, and pipelining.
All of our implementations and tests are performed within our solver library DD-alphaAMG, tailored in particular to the twisted mass fermion formulation. Recycling interacts well with our solver because the coarse matrices change during the setup phase of the AMG employed by DD-alphaAMG, and the matrix-vector multiplications involve only nearest-neighbour communications, which opens the opportunity for better scalability.
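Two of the four ingredients above, block Jacobi and polynomial preconditioning, can be sketched in a few lines of numpy. This is only an illustrative toy on a small dense, diagonally dominant system, not the DD-alphaAMG implementation: the block size, polynomial degree and the simple Richardson outer iteration are all assumptions made for the example.

```python
import numpy as np

def block_jacobi_inverse(A, block_size):
    """Invert the diagonal blocks of A (block Jacobi preconditioner M)."""
    n = A.shape[0]
    M = np.zeros_like(A)
    for s in range(0, n, block_size):
        e = min(s + block_size, n)
        M[s:e, s:e] = np.linalg.inv(A[s:e, s:e])
    return M

def polynomial_preconditioner(A, M, r, degree):
    """Apply p(M A) M r with p the truncated Neumann series
    sum_k (I - M A)^k, an approximation of (M A)^{-1}."""
    z = M @ r
    t = z.copy()
    for _ in range(degree):
        t = t - M @ (A @ t)   # t <- (I - M A) t
        z = z + t
    return z

# toy diagonally dominant system
rng = np.random.default_rng(0)
n = 32
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

M = block_jacobi_inverse(A, block_size=4)
# Richardson outer iteration, preconditioned by the polynomial
x = np.zeros(n)
for _ in range(20):
    r = b - A @ x
    x = x + polynomial_preconditioner(A, M, r, degree=3)

print(np.linalg.norm(b - A @ x))
```

In the actual solver the polynomial wraps GMRES iterations, trading dot products (global reductions) for extra matrix-vector products, which is where the scalability gain comes from.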
The application of operator splitting methods to ordinary differential equations (ODEs) is well established. However, for differential-algebraic equations (DAEs) it is subject to many restrictions due to the presence of constraints and the index property. In order to perform a smooth transfer of operator splitting from ODEs to DAEs, it is important to have a suitable decoupled structure for the desired DAE system. Coupling a spatially discretized system of Maxwell's equations, which describe the electromagnetic field, to a system of equations describing the circuit yields a system of DAEs. In this work, we present an approach for splitting linear coupled field-circuit DAEs based on a topological decoupling in which we derive the circuit model from loop and cutset equations. Finally, numerical tests are performed to validate the mathematical model and to present convergence results for the proposed DAE operator splitting.
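The classical ODE case that this work builds on can be illustrated with a small numpy sketch: Lie-Trotter (first-order) versus Strang (second-order) splitting of the flow of x' = (A+B)x for two non-commuting toy generators. The matrices and step sizes are illustrative assumptions, and the eigendecomposition-based `expm` is only valid for the small diagonalizable matrices used here.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (adequate for small
    diagonalizable matrices; a toy stand-in for scipy.linalg.expm)."""
    vals, vecs = np.linalg.eig(M)
    return (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

# two non-commuting generators: a rotation and an uneven damping
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.1, 0.0], [0.0, -0.2]])

def lie_step(h):     # first-order Lie-Trotter splitting
    return expm(B * h) @ expm(A * h)

def strang_step(h):  # second-order Strang splitting
    return expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)

def propagate(step, h, T=1.0):
    Phi = np.eye(2)
    for _ in range(int(round(T / h))):
        Phi = step(h) @ Phi
    return Phi

exact = expm(A + B)             # exact flow over T = 1
for h in (0.1, 0.05):
    e_lie = np.linalg.norm(propagate(lie_step, h) - exact)
    e_strang = np.linalg.norm(propagate(strang_step, h) - exact)
    print(f"h={h}: Lie error {e_lie:.2e}, Strang error {e_strang:.2e}")
```

Halving h roughly halves the Lie error but quarters the Strang error; the DAE setting adds the constraint-consistency issues discussed in the abstract.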
Huge systems of linear algebraic equations $Dx = b$ arise in numerous areas of science, and the numerical solution of large, sparse linear systems is a basic ingredient of many scientific applications. This work focuses on these fundamental aspects and develops algorithms for high-performance computing related to evaluating physical observables of the lattice Dirac equation in Quantum Chromodynamics (QCD), improving the estimation of the trace of the inverse of the matrix $D$ using a multilevel Monte Carlo (MLMC) approach.
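The single-level building block of such trace estimators is the Hutchinson estimator, which MLMC then combines across levels to reduce variance. A minimal sketch, with a small well-conditioned matrix standing in for the lattice Dirac operator and a direct solve standing in for the Krylov solver:

```python
import numpy as np

def hutchinson_trace_inverse(A, n_samples, rng):
    """Stochastic trace estimate: tr(A^{-1}) ~ (1/N) sum_k z_k^T A^{-1} z_k
    with Rademacher probe vectors z_k (entries +-1)."""
    n = A.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ np.linalg.solve(A, z)  # stands in for an iterative solve
    return total / n_samples

rng = np.random.default_rng(1)
n = 50
E = rng.standard_normal((n, n))
A = 4.0 * np.eye(n) + 0.05 * (E + E.T)      # toy symmetric, well-conditioned matrix

exact = np.trace(np.linalg.inv(A))
approx = hutchinson_trace_inverse(A, n_samples=1000, rng=rng)
print(f"exact {exact:.4f}  estimate {approx:.4f}")
```

The estimator's variance is governed by the off-diagonal mass of $A^{-1}$; the multilevel approach exploits the fact that differences between levels have much smaller variance than the fine-level trace itself.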
The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm used to find the ground state of a Hamiltonian via the variational method. In particular, this procedure can be used to study lattice gauge theories (LGT) in the Hamiltonian formulation. Bayesian Optimization (BO) based on Gaussian Process Regression (GPR) is a powerful algorithm for finding the global minimum of the energy with a very low number of iterations. This work explores some available methods for BO and GPR, and proposes a setup specifically tailored to performing VQE on quantum computers already available today.
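The BO/GPR machinery (not the VQE circuit itself) can be sketched with numpy alone: a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition, minimizing a toy one-parameter "energy". The objective function, kernel length-scale and iteration counts are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(X1, X2, ls=0.3):
    """Squared-exponential kernel with unit variance."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-5):
    """GPR posterior mean and variance on a test grid."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    var = np.maximum(1.0 - np.sum(Ks * sol, axis=0), 1e-12)
    return mu, var

def expected_improvement(mu, var, best):
    """EI acquisition for minimization."""
    s = np.sqrt(var)
    z = (best - mu) / s
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + s * phi

def energy(theta):
    """Stand-in for the measured VQE energy of a one-parameter circuit."""
    return np.sin(3.0 * theta) + theta ** 2

grid = np.linspace(-1.0, 1.0, 201)
X = np.array([-0.8, 0.0, 0.9])          # initial evaluations
y = energy(X)
for _ in range(20):                     # BO loop: fit GP, maximize EI, evaluate
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, energy(x_next))

print(f"best theta {X[np.argmin(y)]:.2f}, best energy {y.min():.3f}")
```

The appeal for VQE is exactly the low evaluation count: each "energy" call corresponds to many shots on a noisy device, so the surrogate amortizes measurement cost.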
Materials are at the core of our technological advances, and are needed to address many of our societal challenges: from energy to information, from food to medicine. Prof. Marzari will highlight the great strides made in the last few years in the design and discovery of novel materials, where computational simulations can now precede, streamline, or accelerate experiments. This acceleration is driven by the central paradigm of computational science (performance doubling every 14-16 months), by powerful and predictive quantum simulation techniques, and by the convergence of data mining and machine learning with materials simulations.
He will also underscore the IT requirements needed to perform calculations in a reproducible, shareable, high-throughput mode. Case studies will be the computational exfoliation of all known inorganic materials, leading to ~3,500 promising candidates, for which he will discuss highlights for quantum spin Hall insulators and superconductors, and the search for novel solid-state Li-ion electrolytes.
Multiphase flows are ubiquitous in nature and play a major role in countless scientific and technological applications. The Lattice Boltzmann Method has made the study of multiphase flows one of its greatest assets, delivering reliable and accurate predictions in different areas and proving extremely versatile. The main aspects of the Shan-Chen multiphase approach and its recent enhancements will be discussed, with a focus on the technological implications and on scientific perspectives.
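The core of the Shan-Chen approach is a pseudopotential interaction force between neighbouring lattice sites. A minimal D2Q9 sketch follows; the coupling constant G, the common choice psi(rho) = rho0(1 - exp(-rho/rho0)), and the droplet setup are illustrative assumptions, and the streaming/collision steps of a full LBM solver are omitted.

```python
import numpy as np

# D2Q9 lattice directions and weights
ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w  = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def shan_chen_force(rho, G=-5.0, rho0=1.0):
    """Pseudopotential force F(x) = -G psi(x) sum_i w_i psi(x + e_i) e_i."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    Fx = np.zeros_like(rho)
    Fy = np.zeros_like(rho)
    for i in range(9):
        # shifted[x] = psi[x + e_i], with periodic boundaries
        shifted = np.roll(np.roll(psi, -ex[i], axis=0), -ey[i], axis=1)
        Fx += w[i] * shifted * ex[i]
        Fy += w[i] * shifted * ey[i]
    return -G * psi * Fx, -G * psi * Fy

# a droplet of dense fluid in a light background
nx = ny = 64
rho = np.full((nx, ny), 0.1)
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho[(X - 32) ** 2 + (Y - 32) ** 2 < 100] = 2.0

Fx, Fy = shan_chen_force(rho)
```

With attractive G the force is nonzero only where the density varies, i.e. at the droplet interface, which is what produces surface tension and phase separation in the full scheme.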
A Network Science approach for studying galaxy evolution
Michalis Papadopoulos, Vicky Papadopoulou, Andreas Efstathiou, Ioannis Michos
School of Sciences, European University Cyprus, Diogenes Street,
Engomi, 1516 Nicosia, Cyprus
An interesting topic in Astrophysics is the evolution of galaxies and how it can be identified through observations. In this work, we investigate a specific type of galaxy, ultraluminous infrared galaxies (ULIRGs), formed from the collision of two galaxies and emitting enormous luminosities. For ULIRGs, most of the energy is emitted in the infrared region of the spectrum and can be subdivided into three main contributions (Perez-Torres et al. 2021). One contribution comes from the pre-existing stars within the galaxy, along with the emission of the interstellar medium. Another comes from the newly formed stars during the starburst phase of the galaxy. Finally, a contribution is introduced by the Active Galactic Nucleus (AGN).
The emitted energy across different wavelengths constitutes the Spectral Energy Distribution (SED) of a galaxy. The data used in this work come from the Herschel Extragalactic Legacy Project (HELP; Shirley et al. 2021). We focus on the European Large Area Infrared Survey North 1 (ELAIS-N1) field and on galaxies at redshift z∈[1,2]. For every galaxy in the database, various quantities are available (e.g. redshift, right ascension, inclination), most importantly the emitted luminosity at different wavelengths, i.e. the SED.
The total (observed) SED of each galaxy is a superposition of the above-mentioned phenomena. It is important to decompose the SED and distinguish between the different contributions, making it necessary to model the underlying phenomena separately. The phenomenon that dominates the SED is then used to categorize the galaxies (starburst galaxies, active galaxies, etc.). The modeling here is done using the “CYprus models for Galaxies and their Nuclear Spectra” (CYGNUS), a collection of radiative transfer models for AGN tapered discs/tori (Efstathiou et al. 2013), starburst galaxies (Efstathiou et al. 2009) and host galaxies (Efstathiou et al. 2003). Prior to the SED decomposition, the data are fitted using Spectral energy distribution Analysis Through Markov Chains (SATMC; Johnson et al. 2013), a general-purpose SED fitting tool made public in 2013.
In this work, after pre-processing and fitting the data, we use Graph Clustering to identify clusters of galaxies in the same evolutionary state. We create graphs in which each galaxy is represented by a node and the weight between nodes shows the similarity of their SEDs. To do so, we need to define a similarity function, compare the galaxies pairwise and create a Similarity Matrix. The simplest approach is to measure the error-weighted (chi-squared) distance between SEDs, $\chi^2_{i,j}=\sum_{\mathrm{SED}} (L_i-L_j)^2/(\sigma_i^2+\sigma_j^2)$, where $L_i$ is the flux of galaxy $i$ at every point of the spectrum and $\sigma_i$ the error of the measurement. We also use Gaussian kernels, $K(i,j)=e^{-\gamma\,\|L_i-L_j\|^2}$, where $i,j$ are the compared nodes and $\gamma$ determines the width of the kernel.
It would be possible to use the Similarity Matrix directly as an adjacency matrix, but this would result in a complete graph. Since a complete graph is hard to analyze and visualize, some filtering is done. Instead of choosing an arbitrary threshold, we prefer to construct a k-Nearest-Neighbor (kNN) graph, in which only the k strongest connections are maintained for each node, ignoring all other links. Still, despite the existence of some bounds, the choice of k is also arbitrary. In order to decide, we iteratively create graphs for increasing k and use the Louvain algorithm (Blondel et al. 2008) to see how the number of clusters and the modularity (the most popular measure of clustering quality) change.
After deciding which graph to use, we run different clustering algorithms, such as Louvain (Blondel et al. 2008) or Newman's leading-eigenvector method (Newman 2006). We then compare our results with the SEDs to check whether the clusters are consistent, i.e. whether galaxies within a cluster are of the same type. Finally, we are able to see how certain attributes (e.g. AGN fraction, star formation rate) are distributed over the graph and check for correlations between the attributes and the clusters.
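The similarity-matrix and kNN-graph construction described above can be sketched in numpy; the toy "SEDs" (two groups of mutually similar spectra), gamma value and k are assumptions made for illustration, and the community-detection step is left to a dedicated library.

```python
import numpy as np

def chi2_distance(L, sigma):
    """Pairwise chi^2 distance: sum over the SED of (L_i - L_j)^2 / (s_i^2 + s_j^2)."""
    n = len(L)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.sum((L[i] - L[j]) ** 2
                                       / (sigma[i] ** 2 + sigma[j] ** 2))
    return D

def gaussian_similarity(L, gamma):
    """Gaussian kernel K(i, j) = exp(-gamma * ||L_i - L_j||^2)."""
    sq = np.sum((L[:, None, :] - L[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def knn_adjacency(S, k):
    """Keep only each node's k strongest links, then symmetrize."""
    S = S.copy()
    np.fill_diagonal(S, -np.inf)           # exclude self-links
    A = np.zeros_like(S)
    for i in range(len(S)):
        nbrs = np.argsort(S[i])[-k:]
        A[i, nbrs] = S[i, nbrs]
    return np.maximum(A, A.T)

# toy "SEDs": two groups of mutually similar spectra
rng = np.random.default_rng(0)
L = np.vstack([rng.normal(0.0, 0.1, (5, 20)), rng.normal(3.0, 0.1, (5, 20))])
sigma = np.full_like(L, 0.1)
D = chi2_distance(L, sigma)
S = gaussian_similarity(L, gamma=0.5)
A = knn_adjacency(S, k=3)
```

On this toy input the kNN filtering drops all cross-group links, leaving two connected components, which is exactly the structure a community-detection algorithm such as Louvain would then recover.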
References
[Perez-Torres et al. 2021] Miguel Perez-Torres et al. “Star formation and nuclear activity in luminous infrared galaxies: an infrared through radio review”. In: The Astronomy and Astrophysics Review 29.1 (Jan. 2021).
[Shirley et al. 2021] R. Shirley et al. “HELP: the Herschel Extragalactic Legacy Project”. In: Monthly Notices of the Royal Astronomical Society 507.1 (June 2021), pp. 129–155.
[Efstathiou et al. 2013] A. Efstathiou et al. “Active galactic nucleus torus models and the puzzling infrared spectrum of IRAS F10214+4724”. In: Monthly Notices of the Royal Astronomical Society 436.2 (Oct. 2013), pp. 1873–1882.
[Efstathiou et al. 2009] A. Efstathiou and R. Siebenmorgen. “Starburst and cirrus models for submillimeter galaxies”. In: A&A 502.2 (2009), pp. 541–548.
[Efstathiou et al. 2003] A. Efstathiou and M. Rowan-Robinson. “Cirrus models for local and high-z SCUBA galaxies”. In: Monthly Notices of the Royal Astronomical Society 343.1 (July 2003), pp. 322–330.
[Johnson et al. 2013] S. P. Johnson et al. “satmc: Spectral energy distribution Analysis Through Markov Chains”. In: Monthly Notices of the Royal Astronomical Society 436.3 (Oct. 2013), pp. 2535–2549.
[Blondel et al. 2008] Vincent D. Blondel et al. “Fast unfolding of communities in large networks”. In: Journal of Statistical Mechanics: Theory and Experiment 2008.10 (Oct. 2008), P10008.
[Newman 2006] M. E. J. Newman. “Finding community structure in networks using the eigenvectors of matrices”. In: Physical Review E 74.3 (Sept. 2006). ISSN: 1550-2376.
The Reconstructed Image from Simulations Ensemble (RISE) has been demonstrated in emission tomography as an alternative reconstruction method that uses simulation and modeling techniques to reconstruct images from projection data. In this study, we present results from the first application of RISE in Myocardial Perfusion Imaging (MPI) with Single Photon Emission Computed Tomography (SPECT). Extensive phantom experiments using a hardware cardiac phantom were performed to evaluate the efficacy of RISE in reconstructing the activity images of the myocardium with and without defect. The Ordered Subset Expectation Maximization (OSEM) method is used to provide images for comparison. The Contrast for Cold Regions was employed to quantify the detectability of the cold spots (defects) in the obtained RISE and OSEM images. Preliminary results show the capacity of RISE to provide comparable reconstructions to those of OSEM using an entirely different approach for modeling the tomographic problem.
Local ultraluminous infrared galaxies (ULIRGs), with 1-1000$\mu$m luminosities exceeding $10^{12}L_\odot$, have been studied extensively since their discovery by the Infrared Astronomical Satellite (IRAS) in the 1980s. It is now well understood that their infrared emission arises from a combination of star formation and active galactic nucleus (AGN) activity. Star formation dominates the far-infrared emission, whereas AGN can make a significant contribution and even dominate the emission of ULIRGs at near- and mid-infrared wavelengths (Farrah et al. 2003, Efstathiou et al. 2022). Disentangling the contributions of star formation and AGN activity still remains a major challenge. This is mainly due to the presence of dust that obscures the energy sources of these galaxies and reprocesses their optical/ultraviolet emission to infrared radiation. Understanding local ULIRGs is of fundamental importance for interpreting submillimeter galaxies (SMGs) and other galaxy populations with extreme starburst and AGN activity throughout the history of the Universe.
The classical evolution scenario for ULIRGs suggests that they represent an evolutionary stage of a merger of two large spiral galaxies (Pérez-Torres et al. 2021). In this scenario, the end product of the evolution of ULIRGs is another important population of extragalactic objects known as quasars. However, there are studies which challenge this single evolutionary scenario and suggest that ULIRGs may follow multiple different evolutionary paths (Farrah et al. 2009).
The verification of the aforementioned scenarios of ULIRG evolution relies on resolving the main power source behind their infrared emission, and on its relationship with the evolutionary stage of their ongoing interaction. The infrared spectral energy distributions (SEDs) of ULIRGs display many features that can be utilised to discern their nature. In particular, the mid-infrared part of the spectrum has long been known to show a number of spectral features due to interstellar dust with excellent diagnostic power. These features, which are due to polycyclic aromatic hydrocarbon (PAH) molecules and silicate dust grains, provide important information which can be used to classify ULIRGs. PAH features are generally considered to give a strong indication of star formation. The mid-infrared spectrum also contains emission and absorption features due to silicate dust around 9.7 and 18$\mu m$. Such features are predicted by radiative transfer models of the putative torus in AGN, which is the essential element of the unified model (Antonucci 1993). The silicate emission features arise from the emission of hot dust at a temperature of $\sim$300-1000K, which is seen almost completely unobscured when the tori are viewed face-on. The absorption features at 9.7$\mu m$ are more difficult to interpret, as they may be due to the torus viewed edge-on, a buried AGN (Imanishi et al. 2007), or obscured star formation (Rowan-Robinson & Efstathiou 1993).
Various methods have been used to decipher the dominant power source of ULIRGs based on the features of their infrared emission, such as SED fitting (Efstathiou et al. 2014), diagnostic diagrams (Spoon et al. 2007), Graph theory (Farrah et al. 2009), Principal Component Analysis (Hurley et al. 2012) and Non-negative matrix factorization (Hurley et al. 2013).
We propose a new classification diagram for ULIRGs and quasars, based on a nonlinear enhanced version of the well-known Principal Component Analysis (PCA). This method enables us to classify the spectra of the ULIRGs and quasars. In particular, we managed to recover the four known optical classes of the galaxies, namely the H II galaxies, the low-ionization nuclear emission-line regions (LINERs), and the Seyfert 1 and Seyfert 2 galaxies (Yuan et al. 2010). The novelty of our classification scheme lies in the fact that it distributes these classes on a well-defined geometrical shape, whose intrinsic directions correspond to physical characteristics of the galaxies. This geometrical classification supports the ULIRG merger temporal evolutionary scenario and, by considering one more dimension in the space of our diagram, we can also categorize galaxies based on the degree of obscuration of their dominant power source by dust. Our results serve as an extension of previous efforts to classify ULIRGs, i.e. those of Hurley et al. (2012) and Farrah et al. (2009).
In this work, we apply an appropriate dimensionality reduction technique on the SED signals of the galaxies. In particular, we implement the Kernel Principal Component Analysis (Kernel PCA) method, a non-linear method that extends the well-established Principal Component Analysis (PCA) method. This method enables us to classify the galaxies based on their main features, and extract their reduced feature space, whose axes correspond to the principal components of the signals.
We apply Kernel PCA with a Gaussian kernel in order to extract the underlying manifold of the distribution of the galaxies in the feature space. From the analysis we find that the first five principal components encapsulate most of the variation in our data, suggesting that the data effectively live in a 5-dimensional space. For practical and visualization reasons we focus our study on the 3-dimensional projections of this space.
We first focus on the 2-dimensional distribution of the data on the PC1-PC2 plane. In order to classify the galaxies based on their distribution along the ellipse, we implemented the K-means method to cut the curve into distinct segments. Motivated by the fact that there are mainly four optical classes for the galaxies under study, i.e. H II galaxies, low-ionization nuclear emission-line regions (LINERs), Seyfert 1 and Seyfert 2 galaxies (Yuan et al. 2010), we detected four clusters along the curve.
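Kernel PCA with a Gaussian kernel reduces to an eigendecomposition of the double-centered kernel matrix. A self-contained numpy sketch on toy data (a noisy circle, where the gamma value and component count are illustrative assumptions, not the values used on the galaxy SEDs):

```python
import numpy as np

def kernel_pca(X, gamma, n_components):
    """Kernel PCA with a Gaussian (RBF) kernel K_ij = exp(-gamma ||x_i - x_j||^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    One = np.full((n, n), 1.0 / n)
    Kc = K - One @ K - K @ One + One @ K @ One   # double-center the kernel
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0)) # projected coordinates

# toy data: points on a circle plus noise, which linear PCA cannot unfold
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))
Z = kernel_pca(X, gamma=2.0, n_components=3)
```

The centering step is what makes the projected coordinates mean-free, so the leading components capture genuine nonlinear variation rather than an offset.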
References
Antonucci R., 1993, ARAA, 31, 473
Efstathiou A., et al., 2014, MNRAS, 437, L16
Efstathiou A., Farrah D., et al., 2022, submitted.
Farrah D., Afonso J., Efstathiou A., Rowan-Robinson M., Fox M., Clements D., 2003, MNRAS, 343, 585
Farrah D., et al., 2009, ApJ, 700, 395
Farrah D., et al., 2013, ApJ, 776, 38
Hurley P. D., Oliver S., Farrah D., Wang L., Efstathiou A., 2012, MNRAS, 424, 2069
Hurley P. D., Oliver S., Farrah D., Lebouteiller V., Spoon H. W. W., 2013, MNRAS, 437, 241
Imanishi I., Dudley C. C., Maiolino R., Maloney P. R., Nakagawa T., Risaliti G., 2007, ApJS, 171, 29
Kewley L. J., Groves B., Kauffmann G., Heckman T., 2006, MNRAS, 372, 961
Kirkpatrick A., et al., 2020, ApJ, 900, 5
Pérez-Torres M., Mattila S., Alonso-Herrero A., Aalto S., Efstathiou A., 2021, A&ARv, 29, 2
Rowan-Robinson M., Efstathiou A., 1993, MNRAS, 263, 675
Spoon H. W. W., Marshall J. A., Houck J. R., Elitzur M., Hao L., Armus L., Brandl B. R., Charmandaris V., 2007, ApJ, 654, L49
Yuan T.-T., Kewley L. J., Sanders D. B., 2010, ApJ, 709, 884
Recent observational data obtained from redshift surveys of galaxies (Guzzo et al. 2013), combined with high-quality $\Lambda$CDM simulations, open a new era for understanding the topological structure and connectivity of the cosmic web (van de Weygaert & Bond 2008, Wilding et al. 2021).
In this work we have explored spatial clustering algorithms (Miller & Han 2009) for the detection and analysis of the cosmic web. Spatial clustering algorithms take as input a set of points in a space and partition it into high-density groups of closely placed points, called clusters in the Data Mining domain. Clusters detected by a clustering algorithm applied to cosmological data correspond to cluster or super-cluster structures, depending on their size. To avoid confusion between the two meanings of the notion of a cluster, we use the term communities for the clusters obtained as the result of a clustering algorithm.
In this article, we apply various computational methods to simulated data from the IllustrisTNG database (Nelson et al. 2015) in order to extract the cosmic web. We examine three spatial computational methods for detecting and characterizing (various parts of) the cosmic web, such as voids, walls, clusters and super-clusters. The methods are able to reveal the structure of the cosmic web at various resolutions and using a variety of physical properties, such as (internal) density, size, distance and mass.
We introduce a new spatial method, called the Gravity lattice, which allows both detection of the cosmic web and characterization of its various parts as different kinds of cosmological structures, i.e. voids, walls, clusters and super-clusters. In particular, we implement a 3D gravitational lattice and measure the effect of the gravitational forces of the data points on nearby test loads of the lattice. Furthermore, filtering based on the number of galaxies affecting the test loads enables the characterization of the detected parts of the cosmic web as voids, walls, clusters and super-clusters. We assume that the denser (in number of galaxies) a region is, the greater the number of galaxies affecting nearby test loads.
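The counting-and-thresholding idea behind the Gravity lattice can be sketched schematically (this is not the authors' implementation): place test points on a regular 3D lattice, count galaxies within an influence radius of each test point, and threshold the counts into structure classes. The box size, radius, grid resolution and thresholds are all illustrative assumptions.

```python
import numpy as np

def lattice_counts(galaxies, grid_n, box, radius):
    """Number of galaxies within `radius` of each lattice test point."""
    axes = np.linspace(0.0, box, grid_n)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    nodes = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    d2 = np.sum((nodes[:, None, :] - galaxies[None, :, :]) ** 2, axis=-1)
    return np.sum(d2 < radius ** 2, axis=1).reshape(grid_n, grid_n, grid_n)

rng = np.random.default_rng(0)
cluster = rng.normal([5.0, 5.0, 5.0], 0.3, (300, 3))  # a dense "cluster"
field = rng.uniform(0.0, 10.0, (200, 3))              # sparse background
galaxies = np.vstack([cluster, field])
counts = lattice_counts(galaxies, grid_n=11, box=10.0, radius=1.0)

# threshold counts into rough classes: void / wall / cluster / super-cluster
labels = np.digitize(counts, bins=[1, 10, 50])
```

A real implementation would weight each galaxy's contribution by a gravitational 1/r^2 kernel rather than a hard cutoff, but the filtering-by-influence principle is the same.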
Inspired by a spatial clustering algorithm called ABACUS (Chaoji et al. 2011), which clusters a set of spatial points through finding a backbone of the data points, we have detected the backbone of the cosmic web. Furthermore, suitable modifications of ABACUS and appropriate filtering of the detected cosmic structures allow the detection of cosmic structures of various sizes and masses, corresponding to voids, walls, clusters and super-clusters within the detected cosmic web. The extraction of the backbone structure of the cosmic web can be useful for understanding the evolution of the cosmic web, but can also be used as a pre-processing step for other related algorithms from the domain, such as the DTFE algorithm, reducing the size of the data set and resulting in much faster methods.
Finally, we use a modified version of the HDBSCAN method, with suitable fine-tuning of its parameter values, to detect the cosmic web and the highly dense structures within it (communities), and to categorize its various parts as voids, walls, clusters and super-clusters. Additionally, the method allows a hierarchical detection of high-density structures. Varying the value of the parameter also allows the detection of communities at various scales, that is, communities partitioning the whole data set as well as communities within the detected communities.
The methods are compared with a classic method of the domain, i.e. the DTFE method, and are shown to obtain results of similar quality. Their main advantage is that they run much faster, making them suitable for larger and denser galaxy databases. This property allows one of them to be combined with the DTFE method to obtain results of similar quality in an order of magnitude less completion time.
References
V. Chaoji, G. Li, H. Yildirim, and M. J. Zaki. ABACUS: mining arbitrary shaped clusters from large datasets based on backbone identification. In Proceedings of the Eleventh SIAM International Conference on Data Mining, SDM 2011, April 28-30, 2011, Mesa, Arizona, USA, pages 295–306. SIAM / Omnipress, 2011.
L. Guzzo and VIPERS Team. VIPERS: An Unprecedented View of Galaxies and Large-scale Structure Halfway Back in the Life of the Universe. The Messenger, 151:41–46, Mar. 2013.
H. J. Miller and J. Han. Geographic data mining and knowledge discovery. CRC press, 2009.
D. Nelson, A. Pillepich, S. Genel, et al. The Illustris simulation: public data release. Astronomy and Computing, 13, November 2015.
R. van de Weygaert and J. R. Bond. Observations and Morphology of the Cosmic Web, pages 409–468. Springer Netherlands, Dordrecht, 2008. ISBN 978-1-4020-6941-3.
G. Wilding, K. Nevenzeel, R. van de Weygaert, G. Vegter, P. Pranav, B. J. T. Jones, K. Efstathiou, and J. Feldbrugge. Persistent homology of the cosmic web – I. Hierarchical topology in ΛCDM cosmologies. Monthly Notices of the Royal Astronomical Society, 507(2):2968–2990, Aug 2021. ISSN 1365-2966. doi: 10.1093/mnras/stab2326.
We present an application of a deep generative model to simulate the quantum mechanical systems of the harmonic and anharmonic oscillator. In particular, we use the normalizing flow framework to model probability distributions based on the Feynman path integral formulation, which are typically very difficult to sample from. We investigate the performance of such deep learning models in generating configurations for the aforementioned systems as the lattice spacing decreases towards the continuum limit and the lattice volume increases. We compare the results obtained from this approach to the standard Markov Chain Monte Carlo method. Furthermore, we propose a transfer learning technique for transferring the neural network parameters of models trained at larger lattice spacings to smaller ones, and compare the performance of different model architectures. We present preliminary results for both physical systems in question, with particular interest in the challenging case of the anharmonic oscillator as the energy barrier increases.
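The basic building block of such flows is an invertible coupling layer with a tractable Jacobian determinant. A minimal numpy sketch of one affine coupling layer follows; the linear "networks" for scale and shift, the lattice size and the batch size are simplifying assumptions (real models stack many layers with nonlinear networks and train them against the path-integral action), and the training loop is omitted.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer: the first half of the coordinates is kept,
    the second half is scaled and shifted by functions of the first half."""
    def __init__(self, dim, rng):
        h = dim // 2
        self.Ws = 0.1 * rng.standard_normal((h, dim - h))  # log-scale weights
        self.Wt = 0.1 * rng.standard_normal((h, dim - h))  # shift weights

    def forward(self, x):
        h = x.shape[1] // 2
        x1, x2 = x[:, :h], x[:, h:]
        s = np.tanh(x1 @ self.Ws)            # log-scale, kept bounded
        t = x1 @ self.Wt
        y2 = x2 * np.exp(s) + t
        log_det = np.sum(s, axis=1)          # log |det Jacobian| per sample
        return np.concatenate([x1, y2], axis=1), log_det

    def inverse(self, y):
        h = y.shape[1] // 2
        y1, y2 = y[:, :h], y[:, h:]
        s = np.tanh(y1 @ self.Ws)
        t = y1 @ self.Wt
        x2 = (y2 - t) * np.exp(-s)
        return np.concatenate([y1, x2], axis=1)

rng = np.random.default_rng(0)
layer = AffineCoupling(dim=8, rng=rng)   # 8 lattice sites of a discretized path
z = rng.standard_normal((16, 8))         # samples from the base Gaussian
x, log_det = layer.forward(z)
```

Because the inverse and the log-determinant are both cheap and exact, the model density of a generated path is known in closed form, which is what allows reweighting or Metropolis correction against the true path-integral distribution.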
The properties of polymeric nanostructured materials involving a solid phase are typically determined by the presence of polymer/solid interfaces. Polymer chains at interfaces are characterized by a very broad range of characteristic time (from fs up to s) and length (from Å up to several nm) scales. This broad range of scales, together with the complex quantum mechanical interactions between the surface and polybutadiene (PB) molecules, indicates that multiple spatiotemporal scales must be bridged. Here, we study such systems via a new hierarchical multi-stage simulation methodology involving ab-initio calculations and atomistic simulations of PB/alumina interfacial systems. Initially, density functional theory (DFT) calculations of a single butadiene monomer adsorbed on an alumina surface are performed. A detailed scan of the interaction energy between butadiene and alumina is carried out in order to obtain both the equilibrium configuration and the interaction as a function of the butadiene/alumina distance. In the second stage, a detailed (classical) atomistic force field for the butadiene/alumina interaction is obtained by machine learning (ML) algorithms, using proper parametric functional forms to fit the DFT data. The last stage of the proposed hierarchical simulation approach concerns the prediction of the properties of PB/alumina interfaces at the atomic level, using the derived classical atomistic force field. We study the structure, conformations and dynamics of polymer chains as a function of distance from the alumina substrate.
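The second stage, fitting a parametric functional form to a DFT energy scan, can be illustrated on synthetic data. The Lennard-Jones form and all numbers below are stand-ins for the actual butadiene/alumina functional forms and DFT data; conveniently, E(r) = a r^-12 - b r^-6 is linear in (a, b), so an ordinary least-squares fit suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "DFT" adsorption-energy scan (stand-in for the butadiene/alumina data)
r = np.linspace(3.0, 8.0, 25)                   # distance, in Angstrom
a_true, b_true = 4.0e5, 6.0e2
E_dft = a_true * r**-12 - b_true * r**-6 + 0.002 * rng.standard_normal(r.size)

# E(r) = a r^-12 - b r^-6 is linear in (a, b): fit by linear least squares
X = np.column_stack([r**-12.0, -(r**-6.0)])
(a_fit, b_fit), *_ = np.linalg.lstsq(X, E_dft, rcond=None)

E_fit = a_fit * r**-12 - b_fit * r**-6
rms = np.sqrt(np.mean((E_fit - E_dft) ** 2))
print(f"a={a_fit:.3e}  b={b_fit:.3e}  rms residual={rms:.2e}")
```

Richer functional forms (or nonlinear parameters such as exponents) require a nonlinear fitter or the ML regression mentioned in the abstract, but the workflow of scanning, fitting and validating residuals is the same.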
The spectra of galaxies are usually decomposed into a number of components. It is widely acknowledged that radiative transfer models that include the effects of cosmic dust in a realistic geometry are needed for proper interpretation of the data and the self-consistent determination of a number of physical quantities of interest, such as the stellar mass of a galaxy, its current star formation rate and the fraction of its bolometric luminosity that is due to accretion onto a supermassive black hole. A number of radiative transfer models for the components of emission in galaxies, as well as methods of fitting them to data, are currently available. However, as the volume and quality of observational data improves, new challenges arise.
The Aristarchus Research Center (http://arc.euc.ac.cy/) at European University Cyprus has, over the last two decades, developed a niche in radiative transfer models of galaxies, mainly due to the work of its director (AE). Some of these models are currently available publicly through the CYprus models for Galaxies and their Nuclear Spectra (CYGNUS) project.
The goal of our project is to develop a new method for fitting the CYGNUS models to data using a Markov Chain Monte Carlo (MCMC) code and test the method with a large sample of galaxies with excellent photometry and infrared spectrophotometry from the Spitzer Space Telescope. In particular, we utilize three models as input of the MCMC code for fitting the data: these are the starburst (Efstathiou et al. 2000, Efstathiou & Siebenmorgen 2009) and active galactic nucleus (AGN) torus (Efstathiou & Rowan-Robinson 1995) models and a parallelized spheroidal code that we developed within CYGNUS following Efstathiou et al. 2021. For the MCMC code, we utilize the publicly available emcee code which is a pure-Python implementation of Goodman & Weare’s Affine Invariant MCMC Ensemble sampler (Foreman-Mackey et al. 2013). emcee gives us the fitted parameters and realistic estimates of their uncertainties.
We assemble multi-wavelength photometry for ∼100 galaxies of various types and at a range of redshifts with Spitzer/IRS spectroscopy and Herschel photometry (e.g. HERUS by Farrah et al. 2013 at z∼0.2, COSMOS by Fu et al. 2010 at z∼0.7, Sajina et al. 2012, Kirkpatrick et al. 2012). We incorporate the parallelized spheroidal code as well as the starburst and AGN models in emcee and the data are used for testing the models and fitting method developed in this project.
Regarding the parallelized spheroidal code, we run it ‘on the fly’ for each set of parameters in the Markov chains after computing the spectrum of starlight in the galaxy assuming an arbitrary star formation and metallicity history.
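The project uses emcee's affine-invariant ensemble sampler; as a structural illustration only, the sketch below runs a plain Metropolis random walk (not emcee's algorithm) on a toy two-component "SED" model. The log-normal bump templates, priors, step size and chain length are all assumptions made for the example.

```python
import numpy as np

def model(params, lam):
    """Toy two-component 'SED': a starburst and an AGN log-normal bump."""
    a_sb, a_agn = params
    sb  = np.exp(-0.5 * ((np.log(lam) - np.log(100.0)) / 0.6) ** 2)
    agn = np.exp(-0.5 * ((np.log(lam) - np.log(10.0)) / 0.5) ** 2)
    return a_sb * sb + a_agn * agn

def log_prob(params, lam, flux, sigma):
    """Gaussian log-likelihood with a flat positive prior on the amplitudes."""
    if np.any(params < 0):
        return -np.inf
    resid = (flux - model(params, lam)) / sigma
    return -0.5 * np.sum(resid ** 2)

rng = np.random.default_rng(0)
lam = np.logspace(0, 3, 40)                  # wavelength grid, in micron
truth = np.array([3.0, 1.0])
flux = model(truth, lam) + 0.05 * rng.standard_normal(40)
sigma = np.full(40, 0.05)

# Metropolis random walk
theta = np.array([1.0, 1.0])
lp = log_prob(theta, lam, flux, sigma)
chain = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_prob(prop, lam, flux, sigma)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[1000:])               # discard burn-in
```

emcee replaces the hand-tuned proposal with stretch moves over an ensemble of walkers, which removes the step-size tuning and handles correlated parameters far better; the `log_prob(params, ...)` interface carries over unchanged.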
References
Efstathiou, A. and Rowan-Robinson, M., 1995. Dusty discs in active galactic nuclei. Monthly Notices of the Royal Astronomical Society, 273(3), pp.649-661.
Efstathiou, A., Rowan-Robinson, M. and Siebenmorgen, R., 2000. Massive star formation in galaxies: radiative transfer models of the UV to millimetre emission of starburst galaxies. Monthly Notices of the Royal Astronomical Society, 313(4), pp.734-744.
Efstathiou, A. and Siebenmorgen, R., 2009. Starburst and cirrus models for submillimeter galaxies. Astronomy & Astrophysics, 502(2), pp.541-548.
Efstathiou, A., Małek, K., Burgarella, D., Hurley, P., Oliver, S., Buat, V., Shirley, R., Duivenvoorden, S., Lesta, V.P., Farrah, D. and Duncan, K.J., 2021. A hyperluminous obscured quasar at a redshift of z≈ 4.3. Monthly Notices of the Royal Astronomical Society: Letters, 503(1), pp.L11-L16.
Farrah, D., Lebouteiller, V., Spoon, H.W., Bernard-Salas, J., Pearson, C., Rigopoulou, D., Smith, H.A., Gonzalez-Alfonso, E., Clements, D.L., Efstathiou, A. and Cormier, D., 2013. Far-infrared fine-structure line diagnostics of ultraluminous infrared galaxies. The Astrophysical Journal, 776(1), p.38.
Foreman-Mackey, D., Hogg, D.W., Lang, D. and Goodman, J., 2013. emcee: the MCMC hammer. Publications of the Astronomical Society of the Pacific, 125(925), p.306.
Fu, H., Yan, L., Scoville, N.Z., Capak, P., Aussel, H., Le Floc'h, E., Ilbert, O., Salvato, M., Kartaltepe, J.S., Frayer, D.T. and Sanders, D.B., 2010. Decomposing star formation and active galactic nucleus with Spitzer mid-infrared spectra: luminosity functions and co-evolution. The Astrophysical Journal, 722(1), p.653.
Kirkpatrick, A., Pope, A., Alexander, D.M., Charmandaris, V., Daddi, E., Dickinson, M., Elbaz, D., Gabor, J., Hwang, H.S., Ivison, R. and Mullaney, J., 2012. GOODS-Herschel: impact of active galactic nuclei and star formation activity on infrared spectral energy distributions at high redshift. The Astrophysical Journal, 759(2), p.139.
Sajina, A., Yan, L., Fadda, D., Dasyra, K. and Huynh, M., 2012. Spitzer- and Herschel-based spectral energy distributions of 24 μm bright z ∼ 0.3-3.0 starbursts and obscured quasars. The Astrophysical Journal, 757(1), p.13.
Predict the Next Influenza Pandemic using Deep Learning Methodologies
Charalambos Chrysostomou, Floris Alexandrou and Mihalis Nicolaou
Computation-based Science and Technology Research Center, The Cyprus Institute, Nicosia, Cyprus
Pathogens, including viruses, can cause infectious diseases to spread within populations. These pathogens can be transmitted in multiple ways, with high transmission rates in most circumstances. Influenza viruses are part of the Orthomyxoviridae family of viruses that have negative-sense, single-stranded, segmented RNA genomes, with the majority of the virus burden being associated with influenza virus types A and B [1]. Influenza viruses capable of infecting humans were introduced from birds and swine [2]. Their introduction into humans has triggered global pandemics, notably the 1918 “Spanish flu” and 2009 “swine flu” pandemics. Influenza viruses are responsible for more than 500,000 deaths worldwide and affect around 5–15% of the population each year [3]. The evolution of influenza viruses enables them to infect individuals who have previously gained immunity through vaccination or previous infection. As the recent events of the COVID-19 pandemic have shown, a computational tool capable of identifying novel and potentially dangerous viruses in the wild that have acquired the capability to infect human hosts will be crucial for predicting and controlling future outbreaks [4,5].
In this paper, a highly successful predictive model is presented, based on Deep Learning and Convolutional Neural Networks (CNNs), for the characterisation and classification of influenza type A viruses by their ability to infect a specific host (human, avian or swine), using solely the Hemagglutinin (HA) protein sequence. The accuracies on the per-species test sets were 98.74% ± 0.32%, 99.52% ± 0.25% and 97.19% ± 0.75% for human, avian and swine hosts, respectively. The proposed method yields almost 10%, 5% and 2% higher accuracy for avian, human and swine hosts, respectively, than the earlier study [6]. The per-class accuracies presented in our work are also more balanced than those of that study, and the final overall accuracy is as much as 5% higher.
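For illustration only (the paper's actual input encoding is not reproduced here, and the sequence length below is an invented choice), protein-sequence CNNs typically consume a fixed-length one-hot encoding of the residues, along these lines:

```python
# Sketch of one-hot encoding for protein sequences, the typical input
# representation for a sequence CNN. Illustrative assumption, not the
# encoding used in the paper.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(seq, max_len=566):
    """Return a max_len x 20 matrix; sequences are zero-padded or truncated
    to a fixed length so the CNN sees a constant input shape. 566 is a
    plausible HA length, chosen here only for illustration."""
    mat = [[0.0] * len(AMINO_ACIDS) for _ in range(max_len)]
    for pos, aa in enumerate(seq[:max_len]):
        idx = AA_INDEX.get(aa)  # unknown residues (e.g. 'X') stay all-zero
        if idx is not None:
            mat[pos][idx] = 1.0
    return mat

encoded = one_hot_encode("MKAILVVLLYTFATANA")  # start of a toy HA-like sequence
```

Each encoded matrix would then be fed to stacked convolutional layers, with the host label (human/avian/swine) as the classification target.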
References
[1] M. C. Zambon, “Epidemiology and pathogenesis of influenza,” Journal of Antimicrobial Chemotherapy, vol. 44, no. suppl. 2, pp. 3–9, 1999.
[2] C. A. Russell, P. M. Kasson, R. O. Donis, S. Riley, J. Dunbar, A. Rambaut, J. Asher, S. Burke, C. T. Davis, R. J. Garten, et al., “Science forum: improving pandemic influenza risk assessment,” Elife, vol. 3, p. e03883, 2014.
[3] K. Stöhr, “Influenza—who cares,” The Lancet Infectious Diseases, vol. 2, no. 9, p. 517, 2002.
[4] C. Chrysostomou, H. Partaourides, and H. Seker, “Prediction of influenza A virus infections in humans using an artificial neural network learning approach,” in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2017, pp. 1186–1189.
[5] C. Chrysostomou, H. Seker, N. Aydin, and P. I. Haris, “Complex resonant recognition model in analysing influenza A virus subtype protein sequences,” in Information Technology and Applications in Biomedicine (ITAB), 2010 10th IEEE International Conference on. IEEE, 2010, pp. 1–4.
[6] F. Sherif, N. Zayed, and M. Fakhr, “Classification of host origin in influenza A virus by transferring protein sequences into numerical feature vectors,” Int J Biol Biomed Eng, vol. 11, 2017.
1 Introduction
In astrophysics, distributions constructed by energy measurements at different wavelengths, namely Spectral Energy Distributions (SEDs), are important tools for studying the physical properties and evolution of astronomical objects. SEDs can be used, for example, to determine the luminosity of astronomical objects, the rate at which galaxies form new stars, or the rate at which supermassive black holes accrete mass to generate energy in quasars Efstathiou and Rowan-Robinson [1995], Efstathiou et al. [2000], Rowan-Robinson et al. [2005]. However, the measurement process is prone to statistical (random) as well as systematic errors, such as background and foreground noise interference, i.e., atmospheric absorption and distortion, opaque/obscuring dust, etc. Due to these factors, as well as technical limitations, such as camera sensor sensitivity, cooling, resolution, etc., SEDs are collected in scarce, often incomplete datasets. SEDs are compared to physical models in order to find the best-fit model(s), which provides insight into the underlying physical processes and properties of the target. This highlights the importance of expanding the range and improving the accuracy of the available data points.
Computational methods have been widely used in the literature to enhance SEDs and handle the experimental error Walcher et al. [2010]. In recent years, deep learning has proven to be an important tool for enhancement of real data and in general for solving inverse problems, where the goal is to reconstruct or correct a signal given an incomplete and/or noisy version. Specifically for astronomical data, deep learning techniques have been used mainly for astronomical imaging, such as deblending images of galaxies Boucaud et al. [2019] or image enhancement Lanusse et al. [2019]. For SEDs, deep learning has been used in forward problems such as feature extraction Frontera-Pons, J. et al. [2017], but not inverse problems.
2 Methods and Results
In this paper we use well-known deep learning techniques, adjusted appropriately, in order to solve various inverse problems for SEDs. More specifically, we focus on general problems, i.e. inpainting (predicting missing measurements in a continuous window), super-resolution (predicting random missing measurements throughout the extent of the signal) and denoising (correcting random additive noise). The method we apply is data-driven and utilizes Deep Generative Models as learned structural priors. More specifically, models like Variational AutoEncoders (VAEs) Kingma and Welling [2019] and Generative Adversarial Networks (GANs) Goodfellow et al. [2014], trained on large datasets (most frequently of images), are able to extract information about the underlying data distribution and generate realistic samples. These models, once trained, can be used as structural priors for solving inverse problems Bora et al. [2017]. Thus, these methods require training of a high-quality generative network which can model realistic SEDs, with properties such as high frequency, irregularity, etc. In this paper, we use the Generative Latent Optimization (GLO) framework Bojanowski et al. [2017] to train a deep generative network suitable for our needs. The framework allows us to train a high-quality generative network with more flexibility than a VAE and at the same time offers training efficiency, unlike GANs, which are notoriously hard to train. In order to train a generative network, any state-of-the-art method requires a high-quality large dataset. However, for the case of SEDs these prerequisites are unrealistic, since the measurement procedure contains innate error and incompleteness and is particularly expensive. To overcome the issue of erroneous and/or incomplete samples we propose an end-to-end approach: (1) a preprocessing step where we utilize classical computational methods for enhancement, e.g., iterative PCA Vanderplas et al. [2012], Walcher et al. [2010]; (2) the deep learning method described above. Our approach is useful for a variety of inverse problems and it can mitigate the long-term cost of solving such problems for SEDs. Furthermore, it is expected to improve overall performance on these problems even with significant corruption and/or incompleteness, by leveraging the powerful generalization property as well as the robustness of a deep generative network.
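As a hedged sketch of the generative-prior recipe for inpainting, a toy linear "generator" below stands in for a trained GLO network (all sizes, data and the learning rate are invented); the essential step, optimizing a latent code against the observed measurements only, is the same:

```python
import random

random.seed(0)
DIM_Z, DIM_X = 4, 16  # latent and signal dimensions (toy sizes)

# Toy linear "generator" standing in for a trained generative network;
# a real generator is a deep nonlinear net, but the latent-optimization
# recipe is identical.
W = [[random.gauss(0, 1) for _ in range(DIM_Z)] for _ in range(DIM_X)]

def G(z):
    return [sum(W[i][j] * z[j] for j in range(DIM_Z)) for i in range(DIM_X)]

# A "true" SED-like signal from a hidden latent code, observed only on a
# subset of wavelengths (inpainting: a continuous missing window).
z_true = [0.5, -1.0, 2.0, 0.3]
x_true = G(z_true)
observed = list(range(0, 10))  # indices 10..15 form the missing window

def loss_grad(z):
    """Squared error on observed entries and its exact gradient
    (analytic here, since the toy generator is linear)."""
    x = G(z)
    r = [x[i] - x_true[i] for i in observed]
    loss = sum(ri * ri for ri in r)
    grad = [2 * sum(r[k] * W[i][j] for k, i in enumerate(observed))
            for j in range(DIM_Z)]
    return loss, grad

z = [0.0] * DIM_Z
for _ in range(2000):  # plain gradient descent in latent space
    loss, grad = loss_grad(z)
    z = [zj - 0.01 * gj for zj, gj in zip(z, grad)]

x_rec = G(z)  # reconstruction, including the unobserved window
```

Because the reconstruction is constrained to the generator's range, the unobserved window is filled in consistently with the structure the generator has learned.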
We evaluate our approach, both qualitatively and quantitatively, for different inverse problems, by artificially injecting realistic corruption and/or incompleteness to our test data. In qualitative results we can see that the reconstruction we predict closely follows the trajectory of the original signal and in most cases predicts the high-frequency changes and large spikes. In quantitative evaluation we observe that in most experimental configurations the performance on test data is on par with the training data. Given that our generative network was optimized to represent the training data, this shows a considerable generalization capability, which is crucial for the effectiveness of our approach.
References
P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam. Optimizing the latent space of generative networks, 2017.
A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 537–546. JMLR.org, 2017. URL http://dl.acm.org/citation.cfm?id=3305381.3305437.
A. Boucaud, M. Huertas-Company, C. Heneka, E. E. O. Ishida, N. Sedaghat, R. S. de Souza, B. Moews, H. Dole, M. Castellano, E. Merlin, and et al. Photometry of high-redshift blended galaxies using deep learning. Monthly Notices of the Royal Astronomical Society, 491(2):2481–2495, Dec 2019. ISSN 1365-2966. doi: 10.1093/mnras/stz3056. URL http://dx.doi.org/10.1093/mnras/stz3056.
A. Efstathiou and M. Rowan-Robinson. Dusty discs in active galactic nuclei. Monthly Notices of the Royal Astronomical Society, 273(3):649–661, 1995. URL http://adsabs.harvard.edu/full/1995MNRAS.273..649E.
A. Efstathiou, M. Rowan-Robinson, and R. Siebenmorgen. Massive star formation in galaxies: radiative transfer models of the UV to millimetre emission of starburst galaxies. Monthly Notices of the Royal Astronomical Society, 313(4):734–744, 2000. URL https://academic.oup.com/mnras/article/313/4/734/1085752.
Frontera-Pons, J., Sureau, F., Bobin, J., and Le Floc'h, E. Unsupervised feature-learning for galaxy SEDs with denoising autoencoders. A&A, 603:A60, 2017. doi: 10.1051/0004-6361/201630240. URL https://doi.org/10.1051/0004-6361/201630240.
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks, 2014.
D. P. Kingma and M. Welling. An introduction to variational autoencoders. CoRR, abs/1906.02691, 2019. URL http://arxiv.org/abs/1906.02691.
F. Lanusse, P. Melchior, and F. Moolekamp. Hybrid physical-deep learning model for astronomical inverse problems, 2019.
M. Rowan-Robinson et al. Spectral energy distributions and luminosities of galaxies and active galactic nuclei in the Spitzer Wide-Area Infrared Extragalactic (SWIRE) Legacy Survey. The Astronomical Journal, 129:1183–1197, March 2005. URL https://iopscience.iop.org/article/10.1086/428001.
J. Vanderplas, A. Connolly, Ž. Ivezić, and A. Gray. Introduction to astroML: Machine learning for astrophysics. In Conference on Intelligent Data Understanding (CIDU), pages 47–54, Oct 2012. doi: 10.1109/CIDU.2012.6382200.
J. Walcher, B. Groves, T. Budavári, and D. Dale. Fitting the integrated spectral energy distributions of galaxies. Astrophysics and Space Science, 331(1):1–51, Aug 2010. ISSN 1572-946X. doi: 10.1007/s10509-010-0458-z. URL http://dx.doi.org/10.1007/s10509-010-0458-z.
Maria Arnittali1,2,3,*, Anastassia N. Rissanou1,2, and Vagelis Harmandaris1,2,3
1 Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-71110 Heraklion, Greece;
2 Department of Mathematics and Applied Mathematics, University of Crete, GR-71409, Heraklion, Crete, Greece;
3 Computation-based Science and Technology Research Center, The Cyprus Institute, Nicosia 2121, Cyprus.
*Correspondence: m.arnittali@cyi.ac.cy
Keywords: Biomolecules; Rop; RM6; Proteins; α-helix; Molecular Dynamics Simulations; Thermostability
ABSTRACT
The engineering of functional materials at the nanometer scale is an ultimate challenge in the field of nanotechnology. Nature provides peptides and proteins as a major source of inspiration for the engineering of responsive, protein-based nanomaterials for medical and biotechnology applications. Despite the scientific progress, an unsolved puzzle remains: how a protein dictates its functional three-dimensional structure and its physicochemical properties, such as stability. A detailed and comprehensive understanding of these would provide us with the knowledge to develop novel bio-inspired materials with desired functionalities.
Nowadays, mathematical and computational techniques are employed to study proteins in aqueous solutions. Here we provide a detailed study of biomolecular systems via Molecular Dynamics (MD) simulations, which provide direct insights in atomic detail, beyond what could be provided through experiments. A trajectory, generated in MD simulations by numerical integration of the classical equations of motion, contains all the dynamical information necessary for the analysis of the studied systems. Our work concerns the detailed exploration of how a protein mutation can cause major changes in its physical properties, such as its structural stability, via detailed all-atom MD simulations. More specifically, we have studied two proteins: the dimeric RNA-binding ColE1 Repressor of Primer (Rop) protein, a paradigm of a highly regular 4-α-helical bundle, and its loopless mutant (RM6). An extensive investigation of the thermal stability of their native states in aqueous solution is performed at three different temperatures (300 K, 350 K, and 368 K). Key structural and conformational properties are calculated, such as α-helix dimensional properties, Ramachandran plots, and pair correlation functions, which reveal RM6 to be more thermostable than the wtRop protein. Deviations from the native structure are detected mostly in the tail and loop regions, and the most flexible residues are identified. In an alternative approach, we study the protein folding problem through the reversal of the amino acid sequence of the well-characterized wtRop protein, in order to investigate sequence-structure relationships. MD simulations starting from different initial configurations are performed in order to discover the final structure of the reversed protein (rRop) and to explore its similarities with the native state of its parent protein wtRop.
Computer simulations are widely used to imitate real-life problems and, through the years, have become a very powerful and accurate tool for predicting the behavior of very complex systems. Using atomistic molecular dynamics (MD) simulations, we are able to represent the chemical structure of the system, taking into account force field approximations, at different time and length scales, depending on the problem.
Star polymers have been used as model systems for more complex architectures of industrially relevant polymer-based materials, due to their unique properties and well-defined structure. Moreover, both the structural and the dynamical behavior of polymeric materials depend strongly on chemistry and architecture. As a consequence, an atomic model that is as accurate as possible and at the same time computationally feasible is required.
Since we do not live in a world of infinite computational power, it is of high interest to speed up processes and optimize the problems at hand. By using high-performance computing we not only manage to equilibrate our complex systems (~10^6 atoms) but also measure computationally expensive quantities within a short period of real time. Our study aims to contribute to a better understanding of the structure-dynamics relation in materials with branch-like architectures. Thus, we choose to investigate two dissimilar polymers, differing in flexibility and glass transition temperature: poly(ethylene oxide) (PEO) and polystyrene (PS), through structural and dynamical properties [1,2]. Taking advantage of our atomic model and the available resources, we also implement a challenging grid-based algorithm for estimating the free volume of the system. This is a quantity of great importance, since it is directly related to the permeability of the material and can be estimated only from an atomistic representation.
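The grid-based free-volume idea can be sketched as follows. This is an illustrative toy, not the production implementation: the atom coordinates, radii, grid resolution and the single-atom check are all invented for the example.

```python
def free_volume_fraction(atoms, box, n_grid=20, probe=0.0):
    """Grid-based estimate of the free-volume fraction of a cubic periodic box.
    atoms: list of (x, y, z, vdw_radius); box: cubic edge length.
    A voxel centre counts as occupied if it lies within the (probe-inflated)
    van der Waals radius of any atom. Sketch only; production codes use
    neighbour lists and handle periodic images more efficiently.
    """
    h = box / n_grid
    occupied = 0
    for i in range(n_grid):
        for j in range(n_grid):
            for k in range(n_grid):
                cx, cy, cz = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                for (x, y, z, r) in atoms:
                    # minimum-image distance in a cubic periodic box
                    dx = (cx - x) - box * round((cx - x) / box)
                    dy = (cy - y) - box * round((cy - y) / box)
                    dz = (cz - z) - box * round((cz - z) / box)
                    if dx * dx + dy * dy + dz * dz < (r + probe) ** 2:
                        occupied += 1
                        break
                    # voxel stays free if no atom covers its centre
    return 1.0 - occupied / n_grid**3

# Toy check: one atom of radius 1 in a 4x4x4 box occupies roughly
# (4/3)*pi/64, i.e. about 6.5% of the volume.
frac = free_volume_fraction([(2.0, 2.0, 2.0, 1.0)], box=4.0)
```

Refining `n_grid` trades accuracy against the cubic growth in voxel count, which is exactly where the high-performance computing resources mentioned above matter.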
The current work represents, to the best of our knowledge, the first attempt to describe, in atomistic detail, the behavior of star polymer melts as a function of the number of arms as well as the chemistry. Through our systematic work, we provide additional information to the experimental techniques and generic coarse-grained models, which are not chemistry specific, and we aim to bridge the gap between theory and reality.
[1] E. Gkolfi, P. Bačová, and V. Harmandaris, “Size and Shape Characteristics of Polystyrene and Poly(ethylene oxide) Star Polymer Melts Studied By Atomistic Simulations”, Macromolecular Theory and Simulations, 2021, 30, 2000067.
[2] P. Bačová, E. Gkolfi, L. Hawke, and V. Harmandaris, “Dynamical heterogeneities in non-entangled polystyrene and poly(ethylene oxide) star melts”, Physics of Fluids, 2020, 32, 127117.
The NEET proteins constitute a unique class of [2Fe–2S] proteins. The metal ions bind to three cysteines and one histidine. The proteins’ clusters exist in two redox states: the oxidized protein (containing two FeIII ions) can transfer the cluster to apo-acceptor protein(s), while the reduced form (containing one ferrous ion) remains bound to the protein frame. Here, we perform in silico and in vitro studies on human NEET proteins in both reduced and oxidized forms. Quantum chemical calculations on all available human NEET protein structures suggest that reducing the cluster weakens the Fe–NHis and Fe–SCys bonds, similar to what is seen in other Fe–S proteins (e.g., ferredoxin and the Rieske protein). We further show that the extra electron in the [2Fe–2S]+ clusters of one of the NEET proteins (mNT) is localized on the His-bound iron ion, consistent with our previous spectroscopic studies. Kinetic measurements demonstrate that the mNT [2Fe–2S]+ cluster is released only by an increase in temperature. Thus, the reduced state of the human NEET proteins’ [2Fe–2S] cluster is kinetically inert. This previously unrecognized kinetic inertness of the reduced state, along with the reactivity of the oxidized state, is unique across all [2Fe–2S] proteins. Finally, by using a coevolutionary analysis along with molecular dynamics simulations, we provide insight into the observed allostery between the loop L2 and the cluster region. Specifically, we show that W75, R76, K78, K79, F82 and G85 in the latter region share similar allosteric characteristics in both redox states.
Intrinsically disordered proteins (IDPs) lack a unique, ordered, tridimensional structure. They exhibit rather diverse conformational ensembles in solution, retaining a high degree of structural disorder.
Here we have focused on the alpha-synuclein protein, an IDP that plays a key role in the progression of Parkinson's disease. By using a variety of molecular simulation approaches, we interpreted low-resolution data from MS/SMFS spectroscopy and described the impact of point mutations on the protein's conformational ensemble. The approach presented here could be easily transferred to other IDPs.
Elevated levels of mitochondrial iron and reactive oxygen species (ROS) accompany the progression of diabetes, negatively impacting insulin production and secretion from pancreatic cells. In search of a tool to reduce mitochondrial iron and ROS levels, we arrived at a novel molecule (M1) that destabilizes the [2Fe-2S] clusters of NEET proteins. Treatment of db/db diabetic mice with M1 improved hyperglycemia, without the weight gain observed with alternative treatments such as rosiglitazone. The molecular interactions of M1 with the NEET proteins mNT and NAF-1 were determined by X-ray crystallography. The possibility of controlling diabetes with molecules that destabilize the [2Fe–2S] clusters of NEET proteins, thereby reducing iron-mediated oxidative stress, opens a new route for managing metabolic aberrations such as diabetes.
The underlying object of study behind this thesis is the physics and statistics of extreme fluctuations in a class of dynamical models for turbulent energy cascades, called Shell Models, as well as the role that instantons play in such events. Instantons are special solutions which contribute maximally to the action in a path integral formulation of the problem. The applicability and efficacy of numerical methods for importance sampling based on the MSRJD path integral formulation is the second question behind this work. Such methods, originally developed in lattice QCD, have recently begun to be employed in fluid dynamics and have been successfully tested for the Burgers equation. Their application to complex-valued and chaotic systems with an energy spectrum à la Kolmogorov remains to be demonstrated and is the central development in our work. Moreover, we have found an interesting Shell Model with parametric control over the forward/backward energy transfer. Not only is this an interesting result by itself, but it will also allow for the exploration of the method in different settings for the energy transfer mechanisms.
In this talk, the results obtained during the course of the Ph.D. project Algorithms for Relativistic Lattice Boltzmann, one of the projects created in the framework of the European network of Joint Doctorates STIMULATE, will be presented.
The main focus of the project has been the algorithmic refinement and extension of existing lattice kinetic solvers for the simulation of relativistic hydrodynamics and their applications in the fields of astrophysics, condensed matter and nuclear physics.
The first achievement reported is the generalization of the method to a generic number of spatial dimensions, which is particularly instrumental to the correct simulation of condensed matter systems, typically laid out in a bidimensional fashion.
Next, we present benchmarking results obtained with the method for the simulation of the Riemann problem, showing that the numerical results are compatible with both analytic solutions and data from other numerical solvers.
A possible technique for extending the numerical scheme beyond hydrodynamic regimes is then discussed, together with a brief introduction to the development of a new Lattice Boltzmann-inspired kinetic scheme for the simulation of radiative transfer.
The statistical error in lattice QCD for certain quantities is currently reduced to levels such that it is of a similar magnitude to the systematic error due to neglecting QED effects. Pushing forward to increased precision therefore requires the inclusion of QED if accuracy is to be maintained. We present preliminary results for the masses of the pseudo-scalar mesons, and the proton, neutron and $\Omega^{-}$ baryons, obtained from $n_f$=1+2+1 QCD+QED lattice simulations performed using C${}^*$ boundary conditions. Hadron two-point correlators are extracted from full QCD+QED lattice simulations through an extension of the publicly available OpenQ${}^*$D code. In particular, spin-1/2 and spin-3/2 baryon correlators are computed by smearing, at different levels along the spatial directions, both the gauge links and the fermion operators. Baryon spectra are then extracted by solving a Generalised Eigenvalue Problem. These results are part of the ongoing effort of the RC${}^*$ collaboration and have been obtained on a single ensemble.
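For reference, and as the standard textbook formulation rather than the collaboration's specific implementation, the Generalised Eigenvalue Problem for a matrix $C_{ij}(t)$ of correlators built from the smeared interpolators reads

```latex
C(t)\, v_n(t, t_0) = \lambda_n(t, t_0)\, C(t_0)\, v_n(t, t_0),
\qquad
aE_n^{\mathrm{eff}}(t, t_0) = \ln\frac{\lambda_n(t, t_0)}{\lambda_n(t + a, t_0)},
```

where the effective energies $E_n^{\mathrm{eff}}$ plateau at the baryon energies at large $t$.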
Dinner at Pelican Restaurant
https://g.page/PelicanRestaurantPafos
We investigate an idealized prey-predator problem in a low-Reynolds-number hydrodynamic environment using reinforcement learning techniques. The problem is formalized in a game-theoretic framework. Two microswimmers (the agents) — the pursuer (predator) and the evader (prey) — play the following game: the pursuer has to capture the evader in the shortest possible time, while the latter has to stay away from its predator as long as possible. The game terminates either upon capture (pursuer wins) or if the game duration exceeds a given time (evader wins). To accomplish its goal, each agent is equipped with limited steering abilities and is capable of sensing the hydrodynamic disturbances generated by the swimming opponent, which provide only partial information on its position and direction of motion. Such hydrodynamic disturbances also modify the motion of the microswimmers, making the environment dynamically complex. We show that, learning through reinforcement, both agents find nontrivial strategies, co-evolving with the learning process, to accomplish their goals. This work offers a proof of concept for the use of Reinforcement Learning to discover prey-predator strategies in aquatic environments.
REFERENCE: F. Borra, L. Biferale, M. Cencini, and A. Celani, "Reinforcement learning for pursuit and evasion of microswimmers at low Reynolds number," arXiv preprint arXiv:2106.08609 (2021).
We examine the applicability of Artificial Intelligence tools to different open problems in fluid dynamics, from the search for an optimal navigation strategy in complex environments to data reconstruction from partial measurements of turbulent flows. To solve navigation problems we follow a Reinforcement Learning (RL) approach. Here, we will focus on the problem of finding the path that minimizes the navigation time between two given points in a fluid flow. I will show how RL is able to take advantage of the flow properties in order to reach its target, providing solutions that are stable with respect to perturbations of the initial conditions and to the addition of external noise. These results illustrate the potential of RL algorithms to model adaptive behavior in real/complex flows and pave the way towards the engineering of smart unmanned autonomous vehicles. The search for optimal navigation strategies is key in several applications, with a potential breakthrough in the open challenge of Lagrangian data assimilation (DA). In the DA direction, we also explore the capability of Generative Adversarial Networks (GANs) to generate missing data. In this direction, I will present a quantitative investigation of their potential in reconstructing 2d damaged snapshots extracted from a large numerical database of 3d turbulence in the presence of rotation. I will briefly compare GANs with different, well-known data assimilation tools, such as Nudging, an equation-informed protocol, or Gappy POD, developed in the context of image reconstruction. I will discuss how one can use DA tools with a reverse-engineering approach to investigate theoretical questions such as which features of the input flow data are required, or "more important", in order to obtain a better full-field reconstruction.
Spectral densities are central objects in the calculation of hadronic rates and cross sections. In this talk I will discuss a method that allows one to extract spectral densities from Euclidean lattice calculations. The ill-posed inverse problem of determining spectral densities from Euclidean correlation functions is made tractable through the determination of smeared spectral densities, in which the desired density is convolved with a set of known smearing kernels of finite width. After taking the infinite-volume limit, the un-smeared spectral density (when this is sufficiently regular) can be obtained by extrapolating to the limit of zero smearing width. A detailed numerical investigation of this procedure will also be discussed in the context of the two-dimensional non-linear O(3) sigma model. In this case, thanks to the integrability of the model, the non-perturbative numerical results for the spectral densities can be compared with exact analytical results, thus allowing one to assess the reliability of the error estimates provided by the method.
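Schematically (the notation here follows the generic smearing approach and is not necessarily the talk's specific choice), the reconstruction determines coefficients $g_\tau$ acting on the correlator,

```latex
\hat\rho_\sigma(\omega) \;=\; \int_0^\infty \! dE\; K_\sigma(\omega, E)\,\rho(E)
\;\approx\; \sum_{\tau=1}^{\tau_{\max}} g_\tau(\omega, \sigma)\, C(\tau),
\qquad
C(\tau) \;=\; \int_0^\infty \! dE\; \rho(E)\, e^{-E\tau},
```

where the $g_\tau$ are chosen so that $\sum_\tau g_\tau(\omega,\sigma)\, e^{-E\tau}$ approximates the smearing kernel $K_\sigma(\omega, E)$; the physical density is then recovered in the ordered limits $V \to \infty$ first and $\sigma \to 0$ afterwards.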
More than 99% of the mass of the visible matter resides in hadrons, which are bound states of quarks and gluons, collectively called partons. These are the fundamental constituents of Quantum Chromodynamics (QCD), the theory of the strong interaction. While QCD is a very elegant theory, it is highly non-linear and cannot be solved analytically, posing severe limitations on our knowledge of the structure of hadrons. Lattice QCD is a powerful first-principles formulation that enables the study of hadrons numerically, which is done by defining the continuous equations on a discrete Euclidean four-dimensional lattice.
The proton structure is among the frontiers of Nuclear and Particle Physics, both experimentally and theoretically. From the theory side, lattice QCD is a vital component of the physics program of the future $2B Electron-Ion Collider to be built at Brookhaven National Laboratory in the U.S. Parton distribution functions (PDFs) and their generalizations (GPDs, TMDs) have a central role in understanding hadron structure and have been under investigation, both experimentally and theoretically, for several decades. Their direct calculation in lattice QCD poses challenges; information is accessible through their Mellin moments. However, novel approaches to extract their x-dependence using matrix elements of non-local operators have been proposed and extensively investigated in recent years.
In this talk, we will demonstrate the advances in extracting PDFs and GPDs from lattice QCD using novel approaches, in an effort to map the three-dimensional structure of the proton. I will discuss the strengths of lattice calculations, but also identify the challenges associated with the elimination of systematic uncertainties.
We describe the computation of the pion transition form factor $F_{\pi\rightarrow\gamma^*\gamma^*}$ to two photons from first principles using twisted mass lattice QCD. The form factor is relevant for the calculation of the pion-pole contribution to the hadronic light-by-light scattering (HLBL) in the anomalous magnetic moment g-2 of the muon. The pion-pole contribution to HLBL is expected to be dominant at long-distance, and it is therefore important for controlling the systematic error in the HLBL contribution to the muon g-2. In this way, the computation helps to better understand the current tension between theory and experiment for the muon g-2.
We present an ab initio calculation of the individual up, down, and strange quark unpolarized, helicity, and transversity parton distribution functions for the proton. The calculation is performed within the twisted mass clover-improved fermion formulation of lattice QCD. We use a $N_f = 2 + 1 + 1$ gauge ensemble simulated with pion mass $M_\pi = 250$ MeV, $M_\pi L \approx 3.8$ and lattice spacing $a = 0.0938$ fm. Momentum smearing is employed in order to improve the signal-to-noise ratio, allowing for the computation of the matrix elements up to nucleon boost momentum $P_3 = 1.24$ GeV. The lattice matrix elements are non-perturbatively renormalized and the final results are presented in the $\overline{\rm MS}$ scheme at a scale of 2 GeV.
We will present results on the neutron electric dipole moment $\vert \vec{d}_N\vert$ using an ensemble of $N_f=2+1+1$ twisted mass clover-improved fermions with lattice spacing of $a \simeq 0.08 \ {\rm fm}$ and physical pion mass ($m_{\pi} \simeq 139 \ {\rm MeV}$).
The approach followed in this work is to compute the $CP$-odd electromagnetic form factor $F_3(Q^2 \to 0)$ by expanding the action to leading order in $\theta$. This gives rise to correlation functions that involve the topological charge, for which we employ a fermionic definition by means of spectral projectors. We include a comparison between the fermionic and the gluonic definitions, where for the latter we employ the gradient flow. We show that using spectral projectors halves the statistical uncertainty on the evaluation of $F_3(0)$. Using the fermionic definition, we find a value of $\vert \vec{d}_N\vert = 0.0009(24) \ \theta \ e \cdot {\rm fm}$.
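For context (not stated in the abstract), the conventional relation used to convert the extracted $CP$-odd form factor into the neutron electric dipole moment, to leading order in $\theta$, reads

```latex
% Leading-order-in-theta relation between the CP-odd form factor
% F_3 and the neutron electric dipole moment; m_N is the nucleon mass.
\[
  \vert \vec{d}_N \vert \;=\; \theta \, \frac{F_3(Q^2 \to 0)}{2 m_N},
\]
```

so the quoted value of $\vert \vec{d}_N\vert$ follows directly from the extrapolated $F_3(0)$.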
We investigate the consequences of including a topological term in the action of effective bosonic strings. In analogy with the topological θ term, the Gauss-Bonnet topological term is introduced into the effective action with a general complex coupling. Yang-Mills lattice data for the static quark-antiquark potential are then compared to the string potential. The link-integrated Polyakov-loop correlators are averaged over SU(3) gauge configurations generated with a heatbath algorithm at β=6.0. The topologically induced shifts substantially improve the fit behavior of the potential at short distances. Remarkably, the fitted coupling parameter of the Gauss-Bonnet term is found to be proportional to an integer quantum number. This effect is not pronounced when the string's self-interaction term is absent from the action. These findings are in consonance with the results obtained using the axionic-string ansatz for closed strings. The manifest integer quantum number can be interpreted as the winding/self-intersection number of the string around "the worldsheet's axion".
The ab initio study of baryon-meson resonances from lattice QCD is an exciting and timely research opportunity. Numerous baryon resonances observed in experiment can benefit from a first-principles theoretical determination of the resonance pole, and beyond that, resonant matrix elements or even resonance form factors are of critical interest. The L\"uscher method, based on hadron interactions in a finite volume, provides a practical framework for studying meson-baryon resonances with lattice techniques. Combined with the drive towards large lattice volumes and quark masses matching the physical world, even the seemingly simplest case of pion-nucleon scattering becomes a challenge for today's computational power. Based on our work on the $\Delta$ channel, I review how we simulate pion-nucleon scattering on the lattice and find the $\Delta$ resonance pole.
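As a brief illustration (not part of the abstract), the s-wave L\"uscher quantization condition underlying such finite-volume scattering studies can be sketched, in the centre-of-mass frame and neglecting higher partial waves, as

```latex
% Schematic s-wave Luscher condition relating finite-volume energies
% to the infinite-volume scattering phase shift delta_0:
\[
  k \cot \delta_0(k) \;=\; \frac{2}{\sqrt{\pi}\, L}\,
  \mathcal{Z}_{00}\!\left(1; q^2\right),
  \qquad q = \frac{kL}{2\pi},
\]
% where k is fixed by the measured two-particle energy
% E = sqrt(m_pi^2 + k^2) + sqrt(m_N^2 + k^2) and Z_00 is the
% Luscher zeta function.
```

The resonance pole is then located by parametrizing $\delta_0(k)$ across the extracted energy levels.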
Heavy quark systems, containing charm or bottom quarks, have played a critical role in developing our understanding of QCD and in tests of the Standard Model.
The simulation of heavy-flavour hadrons using lattice QCD presents a multiscale problem, but this arena also offers a fertile hunting ground for new strongly interacting exotic matter. I will discuss how lattice QCD can address the new challenges emerging in heavy quark spectroscopy, including through studies at zero and finite temperature. Some recent results will be presented and future prospects considered.
The quantitative understanding of hadron structure holds the key to the interpretation of current and future experiments in particle, hadron and nuclear physics. In this talk I review the status of lattice QCD calculations of structural properties of the nucleon, focussing on the determination of electromagnetic and axial form factors, calculations of the axial, scalar and tensor charges, as well as the determination of sigma-terms. A central issue for precision calculations of these quantities is control over excited-state contributions, which may cause a systematic bias in results.
State-of-the-art simulations of lattice gauge theories are based on Markov chains with local updates in field space, which become notoriously inefficient at very fine lattice spacings due to the separation of topological sectors of the gauge field. In particular, Hybrid Monte Carlo (HMC) algorithms and heatbath/overrelaxation steps, which are very efficient at coarser lattice spacings, suffer from increasing autocorrelation times. This makes simulations of lattice QCD close to the continuum limit infeasible even with exascale computing.
We will discuss the potential of gauge-equivariant flow-based proposals for the 2D Schwinger model, using the deep learning approach introduced by G. Kanwar et al. (arXiv:2003.06413). We will discuss possible ways to speed up training and strategies to reach larger lattice volumes. Moreover, we will present first results on simulating the 2D Schwinger model with $N_f=2$ dynamical Wilson fermions at very fine lattice spacings using scalable global correction steps, and compare the autocorrelation times achieved to those of HMC.
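As a side note on the comparison metric: integrated autocorrelation times such as those compared against HMC above are commonly estimated from a windowed sum of the normalized autocovariance of an observable along the Markov chain. A minimal sketch follows; the function name and the simple truncation rule are illustrative choices, not taken from the talk:

```python
import numpy as np

def integrated_autocorr_time(series, window=None):
    """Estimate the integrated autocorrelation time tau_int of a Markov-chain
    observable via a windowed sum of its normalized autocovariance."""
    x = np.asarray(series, dtype=float)
    n = x.size
    x = x - x.mean()
    max_lag = n // 2
    # Autocovariance at lags 0..max_lag-1 (direct sums; fine for modest n)
    acov = np.array([np.dot(x[: n - t], x[t:]) / (n - t) for t in range(max_lag)])
    rho = acov / acov[0]
    if window is None:
        # Crude automatic window: truncate at the first negative rho
        negative = np.flatnonzero(rho < 0.0)
        window = int(negative[0]) if negative.size else max_lag
    return 0.5 + rho[1:window].sum()

# Uncorrelated samples: tau_int should come out close to 0.5
rng = np.random.default_rng(1)
white = rng.standard_normal(20000)
print(round(integrated_autocorr_time(white), 2))

# Strongly correlated AR(1) chain: tau_int grows with the correlation
chain = np.empty(20000)
chain[0] = 0.0
for i in range(1, chain.size):
    chain[i] = 0.9 * chain[i - 1] + rng.standard_normal()
print(round(integrated_autocorr_time(chain), 2))
```

A chain with well-separated topological sectors behaves like the AR(1) case with correlation close to one, which is why tau_int blows up for local update algorithms at fine lattice spacings.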
We present recent results on quark masses using $N_f=2+1+1$ clover-improved twisted mass fermion gauge ensembles simulated by the Extended Twisted Mass Collaboration. We evaluate the renormalized light, strange and charm quark masses in the continuum limit using both mesonic and baryonic data, which provides a valuable consistency check of our results.