The accurate treatment of outflow boundary conditions remains a critical challenge in computational fluid dynamics when predicting aerodynamic forces and acoustic emissions. This is particularly evident when employing the lattice Boltzmann method (LBM) as the numerical solution technique, which often suffers from inaccuracies induced by artificial reflections from outflow boundaries. This paper investigates the use of neural networks (NNs) to mitigate these adverse boundary effects and enable the use of truncated computational domains. Two distinct NN-based approaches are proposed: (1) direct reconstruction of the unknown particle distribution functions at the outflow boundary; and (2) enhancement of established characteristic boundary conditions (CBCs) by dynamically tuning their parameters. The direct reconstruction model was trained on data generated from 2D flow over a cylindrical obstruction, and the new boundary condition was tested against drag, lift, and Strouhal-number predictions. Across a range of Reynolds numbers and restricted domain sizes, it yielded significantly more accurate predictions than the traditional Zou & He boundary condition. To examine the robustness of the NN-based reconstruction, the same condition was applied to the simulation of a NACA0012 airfoil, again providing accurate aerodynamic performance predictions. The neural-enhanced CBCs were evaluated on a 2D convected-vortex benchmark and showed superior performance in minimizing density errors compared to CBCs with fixed parameters. These findings highlight the potential of NN-integrated boundary conditions to improve the accuracy and reduce the computational expense of aerodynamic and acoustic simulations with the LBM.
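As an illustration of approach (1), here is a minimal sketch of what a reconstruction network at a D2Q9 outlet node could look like: known post-streaming populations and local macroscopic quantities go in, the three unknown incoming populations come out. The feature set, network sizes, and function names are our assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, params):
    """Tiny dense network: hidden layers with tanh, linear output."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# 6 known populations + density + 2 velocity components -> 3 unknowns
sizes = [(9, 32), (32, 32), (32, 3)]
params = [(rng.normal(scale=0.1, size=s), np.zeros(s[1])) for s in sizes]

def outlet_reconstruction(f_known, rho, u):
    """Predict the unknown incoming populations at one outlet node."""
    features = np.concatenate([f_known, [rho], u])
    return mlp_forward(features, params)  # trained weights would be loaded

f_unknown = outlet_reconstruction(np.full(6, 0.1), 1.0, np.array([0.05, 0.0]))
print(f_unknown)
```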
Machine learning techniques offer an effective approach to modeling dynamical systems solely from observed data. However, without explicit structural priors -- built-in assumptions about the underlying dynamics -- these techniques typically struggle to generalize to aspects of the dynamics that are poorly represented in the training data. Here, we demonstrate that reservoir computing -- a simple, efficient, and versatile machine learning framework often used for data-driven modeling of dynamical systems -- can generalize to unexplored regions of state space without explicit structural priors. First, we describe a multiple-trajectory training scheme for reservoir computers (RCs) that supports training across a collection of disjoint time series, enabling effective use of available training data. Then, applying this training scheme to multistable dynamical systems, we show that RCs trained on trajectories from a single basin of attraction can achieve out-of-domain generalization by capturing system behavior in entirely unobserved basins.
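A minimal sketch of the multiple-trajectory training idea: harvest reservoir states separately for each disjoint time series (resetting the reservoir state in between), then fit a single ridge-regression readout on the pooled states. The hyperparameters and stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 200, 3                      # reservoir size, input dimension
W_in = rng.uniform(-0.5, 0.5, (N, d))
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def harvest(inputs):
    """Drive the reservoir with one trajectory, starting from a reset state."""
    r = np.zeros(N)
    states = []
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

def train_readout(trajectories, beta=1e-6):
    """Ridge regression on states pooled across all disjoint trajectories."""
    R = np.vstack([harvest(traj[:-1]) for traj in trajectories])
    Y = np.vstack([traj[1:] for traj in trajectories])
    return np.linalg.solve(R.T @ R + beta * np.eye(N), R.T @ Y)

trajs = [rng.normal(size=(500, d)) for _ in range(4)]  # stand-in time series
W_out = train_readout(trajs)
```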
This study examines the feasibility of carbon dioxide storage in shale rocks and the reliability of reactive transport models in accurately replicating the chemo-mechanical interactions and transport processes occurring in these rocks when exposed to CO2-saturated brine. Owing to the heterogeneity of rocks, experimental testing sufficient for sound conclusions can be an expensive and time-intensive process. This study therefore proposes reactive transport modeling to replicate the pore-scale chemo-mechanical reactions and transport processes occurring in silicate-rich shale rocks in the presence of CO2-saturated brine under high pressure and high temperature. CrunchTope was adopted to simulate a one-dimensional reactive transport model of a Permian rock specimen exposed to the acidic brine at a temperature of 100 °C and a pressure of 12.40 MPa (1800 psi) for periods of 14 and 28 days. The results demonstrated significant dissolution followed by precipitation of quartz-rich phases, precipitation and swelling of clay-rich phases, and dissolution of feldspar-rich phases close to the acidic brine-rock interface. Moreover, the porosity-versus-reaction-depth curves showed only about 1.00% mineral precipitation at 14 and 28 days, which is insufficient to completely fill the pore spaces.
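For context, the porosity evolution reported above is governed, in standard reactive transport formulations of this kind, by the mineral volume-fraction balance; this is the textbook form, not a statement of the paper's specific rate laws:

```latex
\[
  \phi(t) \;=\; 1 - \sum_{m} \phi_m(t),
  \qquad
  \frac{d\phi_m}{dt} \;=\; \bar{V}_m R_m ,
\]
% where $\phi_m$ is the volume fraction of mineral $m$, $\bar{V}_m$ its molar
% volume, and $R_m$ its net precipitation (positive) or dissolution (negative)
% rate per unit bulk volume.
```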
Crystal-structure match (CSM), the atom-to-atom correspondence between two crystalline phases, is used extensively to describe solid-solid phase transition (SSPT) mechanisms. However, existing computational methods cannot account for all possible CSMs. Here, we propose a formalism to classify all CSMs into a tree structure, which is independent of the choice of unit cell and supercell. We rigorously prove that only a finite number of noncongruent CSMs are of practical interest. By representing CSMs as integer matrices, we introduce the crystmatch method to exhaustively enumerate them, definitively solving the CSM optimization problem under any geometric criterion. For most SSPTs, crystmatch can reproduce all known deformation mechanisms and CSMs within 10 CPU minutes, while also revealing thousands of new candidates. The resulting database can be further used for comparison with experimental observations, high-throughput energy-barrier calculations, or machine learning.
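An illustrative sketch of the integer-matrix viewpoint: a sublattice of index n in a 3D lattice corresponds to a Hermite-normal-form matrix with positive diagonal entries whose product is n, and enumerating these is a natural building block for exhaustive CSM enumeration. This is our reading of the approach, not the crystmatch code itself.

```python
import numpy as np
from itertools import product

def hnf_matrices(n):
    """All 3x3 Hermite normal forms with determinant n."""
    mats = []
    for a in range(1, n + 1):
        if n % a:
            continue
        for b in range(1, n // a + 1):
            if (n // a) % b:
                continue
            c = n // (a * b)
            # off-diagonal entries range over the standard HNF residues
            for d, e, f in product(range(b), range(c), range(c)):
                mats.append(np.array([[a, d, e],
                                      [0, b, f],
                                      [0, 0, c]]))
    return mats

print(len(hnf_matrices(4)))  # number of index-4 sublattices of Z^3: 35
```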
Quantum computing offers transformative potential for simulating real-world materials, providing a powerful platform to investigate complex quantum systems across quantum chemistry and condensed matter physics. In this work, we leverage this capability to simulate the Hubbard model on a six-site graphene hexagon using Qiskit, employing the Iterative Quantum Phase Estimation (IQPE) and adiabatic evolution algorithms to determine its ground-state properties. Noiseless simulations yield accurate ground-state energies (GSEs), charge and spin densities, and correlation functions, all in excellent agreement with exact diagonalization, underscoring the precision and reliability of quantum simulation for strongly correlated electron systems. However, deploying IQPE and adiabatic evolution on today's noisy quantum hardware remains highly challenging. To examine these limitations, we utilize the Qiskit Aer simulator with a custom noise model tailored to the characteristics of a real IBM backend. This model includes realistic depolarizing gate errors, thermal relaxation, and readout noise, allowing us to explore how these factors degrade simulation accuracy. Preliminary hardware runs on IBM devices further expose discrepancies between simulated and real-world noise, emphasizing the gap between ideal and practical implementations. Overall, our results highlight the promise of quantum computing for simulating correlated quantum materials, while also revealing the significant challenges posed by hardware noise in achieving accurate and reliable physical predictions using current quantum devices.
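The classical reference against which such simulations are validated is exact diagonalization. A minimal sketch for the two-site Hubbard model at half filling in the S_z = 0 sector (basis: doubly occupied left, doubly occupied right, up-down, down-up); the values of t and U are illustrative.

```python
import numpy as np

t, U = 1.0, 4.0
H = np.array([[  U,   0, -t, -t],
              [  0,   U, -t, -t],
              [ -t,  -t,  0,  0],
              [ -t,  -t,  0,  0]])
E0 = np.linalg.eigvalsh(H)[0]
exact = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))  # known closed form
print(E0, exact)   # both ~ -0.8284 for t=1, U=4
```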
Turbulence poses challenges for numerical simulation due to its chaotic, multiscale nature and high computational cost. Traditional turbulence modeling often struggles with accuracy and long-term stability. Recent scientific machine learning (SciML) models, such as Fourier Neural Operators (FNO), show promise in solving PDEs, but are typically limited to one-step-ahead predictions and often fail over long time horizons, especially in 3D turbulence. This study proposes a framework to assess the reliability of neural operator models in turbulent flows. Using three-dimensional forced homogeneous isotropic turbulence (HIT) as a benchmark, we evaluate models in terms of uncertainty quantification (UQ), error propagation, and sensitivity to initial perturbations. Statistical tools such as error distribution analysis and autocorrelation functions (ACF) are used to assess predictive robustness and temporal coherence. Our proposed model, the factorized-implicit FNO (F-IFNO), improves long-term stability and accuracy by incorporating implicit factorization into the prediction process. It outperforms conventional LES and other FNO-based models in balancing accuracy, stability, and efficiency. The results highlight the importance of prediction constraints, time interval selection, and UQ in developing robust neural operator frameworks for turbulent systems.
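A sketch of one of the diagnostics described above: comparing the autocorrelation function (ACF) of a predicted signal against the reference to judge temporal coherence. The signals here are placeholders, not HIT data.

```python
import numpy as np

def acf(x, max_lag):
    """Normalized autocorrelation of a zero-meaned 1D signal."""
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag)])

rng = np.random.default_rng(2)
tgrid = np.linspace(0, 40, 2000)
reference = np.sin(tgrid) + 0.1 * rng.normal(size=2000)
predicted = np.sin(tgrid + 0.05) + 0.2 * rng.normal(size=2000)

# Maximum ACF discrepancy over the first 200 lags as a coherence score
decorrelation_gap = np.abs(acf(reference, 200) - acf(predicted, 200)).max()
print(decorrelation_gap)
```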
Core-shell nanoparticles, particularly those having a gold core, have emerged as a highly promising class of materials due to their unique optical and thermal properties, which underpin a wide range of applications in photothermal therapy, imaging, and biosensing. Here, we present a comprehensive investigation of the thermal dynamics of gold-core silica-shell nanoparticles immersed in water under pulsed illumination. The plasmonic response of the core-shell nanoparticle is described by incorporating Mie theory with electronic-temperature corrections to the refractive indices of gold, based on a Drude-Lorentz formulation. The thermal response of the core-shell nanoparticles is modeled by coupling the two-temperature model with molecular dynamics simulations, providing an atomistic description of nanoscale heat transfer. We investigate nanoparticles with both dense and porous silica shells (with 50% porosity) under laser pulse durations of 100 fs, 10 ps, and 1 ns, over a range of fluences between 0.05 and 5 mJ/cm^2. We show that nanoparticles with a thin (5 nm) dense silica shell exhibit significantly faster water heating compared to bare gold nanoparticles. This behavior is attributed to enhanced electron-phonon coupling at the gold-silica interface and to the relatively high thermal conductance between silica and water. These findings provide new insights into optimizing nanoparticle design for efficient photothermal applications and establish a robust framework for understanding energy-transfer mechanisms in heterogeneous metal-dielectric nanostructures.
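The two-temperature model invoked above takes its standard form; the laser source term and the coupling to the molecular dynamics region are paper-specific details not captured here:

```latex
\[
  C_e(T_e)\,\frac{\partial T_e}{\partial t}
    = \nabla\!\cdot\!\big(\kappa_e \nabla T_e\big)
      - G\,(T_e - T_l) + S(t),
  \qquad
  C_l\,\frac{\partial T_l}{\partial t}
    = G\,(T_e - T_l),
\]
% with electron and lattice temperatures $T_e$, $T_l$, heat capacities
% $C_e$, $C_l$, electron thermal conductivity $\kappa_e$, and
% electron-phonon coupling constant $G$; $S(t)$ is the absorbed laser power
% density, here set by the Mie absorption cross section.
```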
A method is presented for the fast evaluation of the transient acoustic field generated outside a spherical surface by sources inside it. The method employs Lebedev quadratures, the optimal rules for spatial integration on the sphere, together with Lagrange interpolation and differentiation in an advanced-time algorithm for evaluating the transient field. Numerical testing demonstrates that the approach gives near machine-precision accuracy and a speed-up in evaluation time that depends on the order of the quadrature rule employed, breaking even with direct evaluation when the number of field points is about 1.15 times the number of surface quadrature nodes.
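A sketch of the interpolation ingredient: evaluating a time signal sampled on a uniform grid at a non-grid (retarded or advanced) time via Lagrange interpolation. The stencil width and test signal are illustrative choices, not the paper's settings.

```python
import numpy as np

def lagrange_weights(t, t_nodes):
    """Lagrange interpolation weights for evaluation at time t."""
    w = np.ones(len(t_nodes))
    for j, tj in enumerate(t_nodes):
        for k, tk in enumerate(t_nodes):
            if k != j:
                w[j] *= (t - tk) / (tj - tk)
    return w

dt = 0.1
samples_t = np.arange(0, 1 + dt, dt)
signal = np.sin(2 * np.pi * samples_t)

t_ret = 0.437                        # a retarded time between grid points
i0 = max(int(t_ret / dt) - 1, 0)     # 4-point stencil around t_ret
stencil = slice(i0, i0 + 4)
value = lagrange_weights(t_ret, samples_t[stencil]) @ signal[stencil]
print(value, np.sin(2 * np.pi * t_ret))   # interpolated vs exact
```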
Large-aperture ground-based solar telescopes allow the solar atmosphere to be resolved in unprecedented detail. However, observations are limited by Earth's turbulent atmosphere, requiring post-facto image correction. Current reconstruction methods using short-exposure bursts face challenges with strong turbulence and high computational costs. We introduce a deep learning approach that reconstructs 100 short-exposure images into one high-quality image in real time. Using unpaired image-to-image translation, our model is trained on degraded bursts with speckle reconstructions as references, improving robustness and generalization. Our method shows improved robustness in terms of perceptual quality, especially when speckle reconstructions exhibit artifacts. An evaluation with a varying number of images per burst demonstrates that our method makes efficient use of the combined image information and achieves the best reconstructions when provided with the full burst.
The modeling and simulation of multiphase fluid flow receive significant attention in reservoir engineering. Many time discretization schemes for multiphase flow equations are either explicit or semi-implicit, relying on the decoupling of the saturation equation from the pressure equation. In this study, we develop a fully coupled and fully implicit framework for simulating multiphase flow in heterogeneous porous media, accounting for gravity and capillary effects. We utilize the vertex-centered finite volume method for spatial discretization and propose an efficient implementation of interface conditions for heterogeneous porous media within this scheme. Notably, we introduce the linearly implicit extrapolation method (LIMEX) with an error estimator, adapted for the first time to multiphase flow problems. To solve the resulting linear system, we employ the BiCGSTAB method with a geometric multigrid (GMG) preconditioner. The models and methods are implemented in the open-source software UG4. Results from parallel computations on a supercomputer demonstrate that the proposed framework scales well to thousands of processors and to problems with billions of degrees of freedom (DoF).
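For reference, the fully coupled system has the standard two-phase form (wetting phase w, non-wetting phase n); the notation below follows common usage rather than the paper's:

```latex
\[
  \frac{\partial (\phi \rho_\alpha S_\alpha)}{\partial t}
  + \nabla\!\cdot\!\big(\rho_\alpha \mathbf{v}_\alpha\big) = q_\alpha,
  \qquad
  \mathbf{v}_\alpha = -\frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}
     \big(\nabla p_\alpha - \rho_\alpha \mathbf{g}\big),
  \quad \alpha \in \{w, n\},
\]
\[
  S_w + S_n = 1, \qquad p_n - p_w = p_c(S_w),
\]
% with porosity $\phi$, saturations $S_\alpha$, densities $\rho_\alpha$,
% relative permeabilities $k_{r\alpha}$, viscosities $\mu_\alpha$, absolute
% permeability $\mathbf{K}$, and capillary pressure $p_c$.
```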
We introduce Perturbative Gradient Training (PGT), a novel training paradigm that overcomes a critical limitation of physical reservoir computing: the inability to perform backpropagation due to the black-box nature of physical reservoirs. Drawing inspiration from perturbation theory in physics, PGT uses random perturbations in the network's parameter space to approximate gradient updates using only forward passes. We demonstrate the feasibility of this approach on both simulated neural network architectures, including a dense network and a transformer model with a reservoir layer, and on experimental hardware using a magnonic auto-oscillation ring as the physical reservoir. Our results show that PGT can achieve performance comparable to that of standard backpropagation methods in cases where backpropagation is impractical or impossible. PGT represents a promising step toward integrating physical reservoirs into deeper neural network architectures and achieving significant energy efficiency gains in AI training.
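A minimal sketch of the perturbative idea as we read it: estimate a gradient from forward passes only by probing random directions in parameter space (closely related to SPSA). The loss, dimensions, and learning rate are placeholders, and the exact estimator used by PGT may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(theta):
    return np.sum((theta - 1.0) ** 2)     # stand-in for a forward pass

def perturbative_grad(theta, eps=1e-3, n_probes=32):
    """Average directional-difference estimates over random probes."""
    g = np.zeros_like(theta)
    for _ in range(n_probes):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher probe
        df = loss(theta + eps * delta) - loss(theta - eps * delta)
        g += df / (2 * eps) * delta
    return g / n_probes

theta = rng.normal(size=10)
for _ in range(200):                      # plain SGD with estimated gradients
    theta -= 0.05 * perturbative_grad(theta)
print(loss(theta))                        # approaches 0
```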
We present GollumFit, a framework designed for performing binned-likelihood analyses on neutrino telescope data. GollumFit incorporates model parameters common to any neutrino telescope and also model parameters specific to the IceCube Neutrino Observatory. We provide a high-level overview of its key features and how the code is organized. We then discuss the performance of the fitting in a typical analysis scenario, highlighting the ability to fit over tens of nuisance parameters. We present some examples showing how to use the package for likelihood minimization tasks. This framework uniquely incorporates the particular model parameters necessary for neutrino telescopes, and solves an associated likelihood problem in a time-efficient manner.
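A sketch of the kind of binned Poisson likelihood such frameworks minimize, with one signal normalization and one nuisance parameter (a spectral tilt with a Gaussian prior); the model and priors here are illustrative, not GollumFit's internals.

```python
import numpy as np
from scipy.optimize import minimize

energies = np.linspace(1.0, 2.0, 10)          # bin centers (arbitrary units)
template = np.exp(-energies)                   # nominal expectation per bin
rng = np.random.default_rng(4)
observed = rng.poisson(120 * template)

def nll(params):
    norm, tilt = params
    mu = np.clip(norm * template * energies**tilt, 1e-9, None)
    # Poisson term (up to a constant) plus a Gaussian prior on the nuisance
    return np.sum(mu - observed * np.log(mu)) + 0.5 * (tilt / 0.1) ** 2

fit = minimize(nll, x0=[100.0, 0.0], method="Nelder-Mead")
print(fit.x)   # fitted normalization and tilt
```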
We present a high-accuracy spectral method for solving the unbounded three-dimensional Poisson equation with smooth, compactly supported sources. The approach is based on a super-potential formulation, where the solution is obtained by applying the Laplacian to a convolution with the biharmonic Green's function. A separable Gaussian-sum (GS) approximation enables efficient FFT-based computation with quasi-linear complexity. Owing to the improved regularity of the biharmonic kernel, the GS cutoff error is of order four, eliminating the need for correction terms or Taylor expansions required in standard GS or Ewald-type methods. Numerical benchmarks demonstrate that the method achieves machine-precision accuracy and outperforms existing GS-based schemes in both error and runtime, making it a robust and efficient tool for free-space Poisson problems on uniform grids.
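The super-potential formulation in symbols, using the standard free-space kernels (sign conventions vary; the Gaussian-sum coefficients are method-specific):

```latex
\[
  \Delta u = f, \qquad
  u = \Delta\,(G_2 * f), \qquad
  G_2(\mathbf{r}) = -\frac{|\mathbf{r}|}{8\pi}, \qquad
  \Delta^2 G_2 = \delta ,
\]
% where the kernel is replaced over the bounded support of interest by a
% separable Gaussian sum, $|\mathbf{r}| \approx \sum_k a_k e^{-b_k |\mathbf{r}|^2}$,
% so each 3D convolution factorizes dimension by dimension and is evaluated
% with FFTs.
```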
BridgeNet is a novel hybrid framework that integrates convolutional neural networks (CNNs) with physics-informed neural networks (PINNs) to efficiently solve non-linear, high-dimensional Fokker-Planck equations (FPEs). Traditional PINNs, which typically rely on fully connected architectures, often struggle to capture complex spatial hierarchies and enforce intricate boundary conditions. In contrast, BridgeNet leverages adaptive CNN layers for effective local feature extraction and incorporates a dynamically weighted loss function that rigorously enforces physical constraints. Extensive numerical experiments across various test cases demonstrate that BridgeNet not only achieves significantly lower error metrics and faster convergence than conventional PINN approaches but also maintains robust stability in high-dimensional settings. This work represents a substantial advancement in computational physics, offering a scalable and accurate solution methodology with promising applications in fields ranging from financial mathematics to complex system dynamics.
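A sketch of the dynamic loss weighting idea: rebalance the PDE-residual and boundary terms so neither dominates, for instance via an exponential moving average of inverse loss magnitudes. This update rule is one common heuristic, assumed here rather than taken from BridgeNet.

```python
import numpy as np

w_pde, w_bc, ema = 1.0, 1.0, 0.9

def weighted_loss(res_pde, res_bc):
    """Combine PDE and boundary residuals with the current weights."""
    return w_pde * np.mean(res_pde**2) + w_bc * np.mean(res_bc**2)

def update_weights(res_pde, res_bc):
    """Shift weight toward whichever term has become relatively small."""
    global w_pde, w_bc
    l_pde, l_bc = np.mean(res_pde**2), np.mean(res_bc**2)
    total = l_pde + l_bc
    w_pde = ema * w_pde + (1 - ema) * total / (l_pde + 1e-12)
    w_bc = ema * w_bc + (1 - ema) * total / (l_bc + 1e-12)
```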
In quantum chemistry, self-consistent field (SCF) algorithms define a nonlinear optimization problem with both continuous and discrete components. In this work, we derive Hartree-Fock-inspired SCF algorithms that can be written exactly as a sequence of Quadratic Unconstrained Spin/Binary Optimization problems (QUSO/QUBO). We reformulate the optimization problem as a series of MaxCut graph problems, which can be efficiently solved using semi-definite programming techniques. This procedure provides performance guarantees at each SCF step, irrespective of the complexity of the optimization landscape. We numerically demonstrate the QUBO-SCF and MaxCut-SCF methods by studying the hydroxide anion OH- and molecular nitrogen N2. The largest problem addressed in this study involves a system of 220 qubits (equivalently, spin-orbitals). Our results show that QUBO-SCF and MaxCut-SCF suffer far less from internal instabilities than conventional SCF calculations. Additionally, we show that the new SCF algorithms can enhance single-reference methods, such as configuration interaction. Finally, we explore how quantum algorithms for optimization can be applied to the QUSO problems arising from the Hartree-Fock method. Four distinct hybrid quantum-classical approaches are introduced: GAS-SCF, QAOA-SCF, QA-SCF, and DQI-SCF.
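To make the QUBO/QUSO correspondence concrete: mapping binary variables x in {0,1} to spins s in {-1,+1} via x = (1-s)/2 turns a QUBO into an Ising/MaxCut-type problem. A tiny brute-force check on a toy matrix (not one of the paper's chemistry instances):

```python
import numpy as np
from itertools import product

Q = np.array([[ 1.0, -2.0,  0.5],
              [-2.0,  1.0, -1.0],
              [ 0.5, -1.0,  0.0]])     # toy symmetric QUBO matrix

def qubo_value(x):
    return x @ Q @ x

best = min((qubo_value(np.array(x)), x) for x in product([0, 1], repeat=3))
print(best)   # optimal binary assignment and its objective value
```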
Machine learning potentials (MLPs) have advanced rapidly and show great promise to transform molecular dynamics (MD) simulations. However, most existing software tools are tied to specific MLP architectures, lack integration with standard MD packages, or are not parallelizable across GPUs. To address these challenges, we present chemtrain-deploy, a framework that enables model-agnostic deployment of MLPs in LAMMPS. chemtrain-deploy supports any JAX-defined semi-local potential, allowing users to exploit the functionality of LAMMPS and perform large-scale MLP-based MD simulations on multiple GPUs. It achieves state-of-the-art efficiency and scales to systems containing millions of atoms. We validate its performance and scalability using graph neural network architectures, including MACE, Allegro, and PaiNN, applied to a variety of systems, such as liquid-vapor interfaces, crystalline materials, and solvated peptides. Our results highlight the practical utility of chemtrain-deploy for real-world, high-performance simulations and provide guidance for MLP architecture selection and future design.
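A minimal example of the kind of JAX-defined semi-local potential that could be deployed this way: a differentiable Lennard-Jones energy over a precomputed neighbor list, with forces obtained by automatic differentiation. The function names are ours; consult the chemtrain-deploy documentation for the actual interface.

```python
import jax
import jax.numpy as jnp

def lennard_jones_energy(positions, neighbor_idx, sigma=1.0, epsilon=1.0):
    """Total LJ energy summed over precomputed neighbor pairs (i, j)."""
    r_vec = positions[neighbor_idx[:, 0]] - positions[neighbor_idx[:, 1]]
    r = jnp.linalg.norm(r_vec, axis=1)
    sr6 = (sigma / r) ** 6
    return jnp.sum(4.0 * epsilon * (sr6**2 - sr6))

# Forces follow by automatic differentiation: F = -dE/dR
forces = jax.grad(lambda pos, nbr: -lennard_jones_energy(pos, nbr))

pos = jnp.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(forces(pos, jnp.array([[0, 1]])))
```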
Inertial confinement fusion demands a continual search for materials that improve the efficiency of capsule compression and of laser-to-target energy transfer. Foams could provide a solution to these problems, but they require further experimental and theoretical investigation. New 3D-printing technologies, such as two-photon polymerization, are opening a new era in the production of foams, allowing fine control of the material morphology. Detailed studies of their interaction with high-power lasers in regimes relevant to inertial confinement fusion remain scarce in the literature, and further investigation is needed. In this work we present the results of an experimental campaign performed at the ABC laser facility at ENEA Centro Ricerche Frascati, where 3D-printed micro-structured materials were irradiated at high power. 3D simulations of the laser-target interaction performed with the FLASH code reveal strong scattering when the center of the focal spot lies on the through-hole of the structure. The time required for the laser to completely ablate the structure, as obtained from the simulations, is in good agreement with the experimental measurement. Measurements of the reflected and transmitted laser light indicate that scattering occurred during irradiation, in accordance with the simulations. Two-plasmon decay was also found to be active during irradiation.
In this study, we investigate wall boundary condition schemes for lattice Boltzmann method (LBM) simulations of turbulent flows modeled using Reynolds-averaged Navier-Stokes (RANS) equations with wall functions. Two alternative schemes are formulated and assessed: a hybrid regularized boundary condition and a slip-velocity bounce-back scheme. Their performance is evaluated using two canonical turbulent flow cases, a fully developed channel flow and a zero-pressure-gradient flat-plate boundary layer, selected specifically to isolate and analyze the impact of wall boundary condition treatments on turbulence modeling. The comparative analysis reveals that the slip-velocity bounce-back approach, which has received relatively little attention in the context of LBM-RANS with wall functions, consistently outperforms the regularization-based method in both accuracy and sensitivity to mesh resolution. Moreover, the regularization-based approach is shown to be highly sensitive to the reconstruction of the wall-normal velocity gradient, even in simple geometries such as flat walls where no interpolation is required. This dependency necessitates specialized, ad hoc gradient reconstruction techniques, requirements that are absent from the slip-velocity bounce-back method.
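For reference, the slip-velocity bounce-back rule in its usual moving-wall form (the wall-function coupling that supplies the slip velocity is scheme-specific and not shown):

```latex
\[
  f_{\bar{\imath}}(\mathbf{x}_b,\, t + \Delta t)
  \;=\;
  f_i^{*}(\mathbf{x}_b,\, t)
  \;-\; 2\, w_i\, \rho_w\, \frac{\mathbf{c}_i \cdot \mathbf{u}_w}{c_s^{2}} ,
\]
% where $f_i^{*}$ is the post-collision distribution, $\bar{\imath}$ the
% lattice direction opposite to $i$, $w_i$ the lattice weight, $c_s$ the
% lattice sound speed, and $\mathbf{u}_w$ the slip velocity supplied by the
% wall function.
```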