Marine low clouds play a crucial role in Earth's radiation budget. These clouds efficiently reflect sunlight and drive the magnitude and sign of the global cloud feedback. Despite their relevance, the evolution of shallow cloud decks over the last decades is not well understood. One of the dominant controls of this low cloud cover is the lower tropospheric stability, quantified by the estimated inversion strength (EIS). Here, we quantify how regional EIS depends on local and remote surface temperature, revealing the dynamics controlling the characteristics of shallow clouds. We find that global EIS increases with warming in tropical regions of ascent and decreases with warming in regions of descent, as expected. In addition to the West Pacific Warm Pool, the Atlantic convection regions and the central Pacific are important predictors. In subtropical ocean upwelling regions in different ocean basins, where the low cloud decks reside, EIS increases with a fairly complex pattern of remote warming and decreases with local warming. The spatial relationship between surface temperature and EIS is robust across different climate models and reanalyses, allowing us to constrain the large spread in estimates of historical EIS trends. In the Southeast Pacific, where historical temperature trends are not well understood, we attribute the observed increase in EIS since 1980 entirely to remote warming, indicating that local cooling did not increase stability in this region. Our results call into question the dominance of the West Pacific Warm Pool in controlling low cloud feedbacks in the eastern Pacific and give insights into mechanisms underlying the spatial dependence of radiative feedbacks on surface temperature patterns.
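The abstract does not spell out the EIS formula; for reference, a minimal sketch following the standard Wood & Bretherton (2006) definition, with the 850 hPa moist-adiabatic lapse rate, the 700 hPa height, and the lifting condensation level taken as fixed representative values rather than computed from the sounding:

```python
import numpy as np

KAPPA = 0.286  # R_d / c_p for dry air

def potential_temperature(T, p, p0=1000.0):
    """Potential temperature (K) for temperature T (K) at pressure p (hPa)."""
    return T * (p0 / p) ** KAPPA

def lts(T_sfc, T_700, p_sfc=1000.0):
    """Lower-tropospheric stability: theta(700 hPa) - theta(surface)."""
    return potential_temperature(T_700, 700.0) - potential_temperature(T_sfc, p_sfc)

def eis(T_sfc, T_700, z_700=3000.0, lcl=500.0, gamma_m850=0.0055):
    """Estimated inversion strength (Wood & Bretherton 2006):
    EIS = LTS - Gamma_m(850 hPa) * (z700 - LCL).
    gamma_m850 (K/m), z_700 and lcl (m) are fixed representative values
    here; the full formulation derives them from temperature and humidity."""
    return lts(T_sfc, T_700, p_sfc=1000.0) - gamma_m850 * (z_700 - lcl)
```

With a 290 K surface and 280 K at 700 hPa, this yields an LTS of about 20 K and an EIS of about 6.3 K, a typical subtropical stratocumulus-regime value.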
The Data Assimilation (DA) community has been developing various diagnostics to understand the importance of the observing system in accurately forecasting the weather. They usually rely on the ability to compute the derivatives of the physical model output with respect to its initial condition. For example, the Forecast Sensitivity-based Observation Impact (FSOI) estimates the impact on the forecast error of each observation processed in the DA system. This paper presents how these DA diagnostic tools are transferred to Machine Learning (ML) models, as their derivatives are readily available through automatic differentiation. We specifically explore the interpretability and explainability of the observation-driven GraphDOP model developed at the European Centre for Medium-Range Weather Forecasts (ECMWF). The interpretability study demonstrates the effectiveness of GraphDOP's sliding attention window to learn the meteorological features present in the observation datasets and to learn the spatial relationships between different regions. Making these relationships more transparent confirms that GraphDOP captures real, physically meaningful processes, such as the movement of storm systems. The explainability of GraphDOP is explored by applying the FSOI tool to study the impact of the different observations on the forecast error. This inspection reveals that GraphDOP creates an internal representation of the Earth system by combining the information from conventional and satellite observations.
Deep learning-based surrogate models offer a computationally efficient alternative to high-fidelity computational fluid dynamics (CFD) simulations for predicting urban wind flow. However, conventional approaches usually only yield low-frequency predictions (essentially averaging values from proximate pixels), missing critical high-frequency details such as sharp gradients and peak wind speeds. This study proposes a hierarchical approach for accurately predicting pedestrian-level urban winds, which adopts a two-stage predictor-refiner framework. In the first stage, a U-Net architecture generates a baseline prediction from urban geometry. In the second stage, a conditional Generative Adversarial Network (cGAN) refines this baseline by restoring the missing high-frequency content. The cGAN's generator incorporates a multi-scale architecture with stepwise kernel sizes, enabling simultaneous learning of global flow structures and fine-grained local features. Trained and validated on the UrbanTALES dataset with comprehensive urban configurations, the proposed hierarchical framework significantly outperforms the baseline predictor. With a marked qualitative improvement in resolving high-speed wind jets and complex turbulent wakes as well as wind statistics, the results yield quantitative enhancement in prediction accuracy (e.g., RMSE reduced by 76% for the training set and 60% for the validation set). This work presents an effective and robust methodology for enhancing the prediction fidelity of surrogate models in urban planning, pedestrian comfort assessment, and wind safety analysis. The proposed model will be integrated into an interactive web platform as Feilian Version 2.
Assessing the frequency and intensity of extreme weather events, and understanding how climate change affects them, is crucial for developing effective adaptation and mitigation strategies. However, observational datasets are too short and physics-based global climate models (GCMs) are too computationally expensive to obtain robust statistics for the rarest, yet most impactful, extreme events. AI-based emulators have shown promise for predictions at weather and even climate timescales, but they struggle on extreme events with few or no examples in their training dataset. Rare event sampling (RES) algorithms have previously demonstrated success for some extreme events, but their performance depends critically on a hard-to-identify "score function", which guides efficient sampling by a GCM. Here, we develop a novel algorithm, AI+RES, which uses ensemble forecasts of an AI weather emulator as the score function to guide highly efficient resampling of the GCM and generate robust (physics-based) extreme weather statistics and associated dynamics at 30-300x lower cost. We demonstrate AI+RES on mid-latitude heatwaves, a challenging test case requiring a score function with predictive skill many days in advance. AI+RES, which synergistically integrates AI, RES, and GCMs, offers a powerful, scalable tool for studying extreme events in climate science, as well as other disciplines in science and engineering where rare events and AI emulators are active areas of research.
This study presents FCI-FireDyn, a new algorithm developed to monitor wildfire dynamics using the Flexible Combined Imager (FCI) onboard the Meteosat Third Generation satellite. Leveraging the high temporal resolution of FCI (10-minute full-disk observations), the algorithm derives fire arrival time maps, rate of spread (ROS), and burned area (BA) evolution at sub-kilometer spatial resolution and 2-minute temporal intervals. The method combines threshold-based MWIR detection with spatio-temporal interpolation to reconstruct fire front progression and ROS fields at 175 m resolution. FCI-FireDyn was tested on three major fire events in Southern Europe (Portugal, Greece, and France) from the 2024-2025 seasons. The retrieved BA and fire growth rate show good agreement with reference datasets from EFFIS, Copernicus EMS, and PT-FireSprd, with total final BA deviations below 20%. The algorithm captures distinct propagation phases, including acceleration episodes that precede FRP peaks, highlighting a potential for NRT fire behavior monitoring. Despite limitations due to FCI spatial resolution, results demonstrate that it provides sufficient spatio-temporal coverage to estimate front-scale fire dynamics. FCI-FireDyn thus represents a proof of concept for deriving high-frequency fire behavior metrics from geostationary observations to support operational and modeling applications.
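The abstract does not give the ROS retrieval formula; a minimal sketch of one standard approach derives the rate of spread as the inverse gradient magnitude of the fire arrival-time map (the 175 m grid spacing matches the resolution quoted above, but the algorithm's actual retrieval may differ):

```python
import numpy as np

def rate_of_spread(arrival_time, dx=175.0):
    """Rate of spread (m/s) from a fire arrival-time map (s) on a regular grid.

    ROS = 1 / |grad t|: the front moves fastest where arrival times change
    slowest in space. dx is the grid spacing in metres. Cells where the
    gradient vanishes (e.g. unburned areas) are returned as NaN.
    """
    gy, gx = np.gradient(arrival_time, dx)
    grad_mag = np.hypot(gx, gy)
    with np.errstate(divide="ignore", invalid="ignore"):
        ros = np.where(grad_mag > 0, 1.0 / grad_mag, np.nan)
    return ros
```

For a front advancing uniformly at 1 m/s along x, the arrival time increases by 175 s per 175 m cell, and the retrieved ROS is 1 m/s everywhere.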
Land surface models (LSMs) play a crucial role in characterizing land-atmosphere interactions by providing boundary conditions to regional climate models (RCMs). This is particularly true over the Iberian Peninsula (IP), where a water-limited regime governs much of the territory. We optimize the configuration of the Noah land surface model with multiparameterization options (Noah-MP) for characterizing heat fluxes in the IP when the Weather Research and Forecasting (WRF) model v3.9.1 is used as an RCM. We perform 70 one-year simulations using 35 Noah-MP combinations, for a dry year (2005) and a wet year (2010). Land-surface heat fluxes and soil moisture from WRF/Noah-MP are evaluated against FLUXNET station data and the CERRA-Land reanalysis. In general, WRF/Noah-MP reproduces soil moisture and surface heat fluxes well over the IP, especially under wetter conditions. Clustering identifies an optimal configuration from 10 groups (A to J). The Noah-MP options with greatest impact are canopy stomatal resistance (CRS), surface exchange coefficient for heat (SFC), soil-moisture factor controlling stomatal resistance (BTR), runoff and groundwater (RUN), and surface resistance to evaporation/sublimation (RSF); dynamic vegetation (DVEG) also matters. Several configurations performed reasonably; experiment s27I (Jarvis CRS, Chen97 SFC, CLM-type BTR, BATS RUN, and adjusted Sellers RSF for wet soils) provides a particularly good characterization of heat fluxes over the IP.
The midlatitude climate and weather are shaped by storms, yet the factors governing their predictability remain insufficiently understood. Here, we use a Convolutional Neural Network (CNN) to predict and quantify uncertainty in the intensity growth and trajectory of over 200,000 storms simulated in a 200-year aquaplanet GCM. This idealized framework provides a controlled climate background for isolating factors that govern predictability. Results show that storm intensity is less predictable than trajectory. Strong baroclinicity accelerates storm intensification and reduces its predictability, consistent with theory. Crucially, enhanced jet meanders further degrade forecast skill, revealing a synoptic source of uncertainty. Using sensitivity maps from explainable AI, we find that the error growth rate is nearly doubled by the more meandering structure. These findings highlight the potential of machine learning for advancing understanding of predictability and its governing mechanisms.
The accurate prediction of oceanographic variables is crucial for understanding climate change, managing marine resources, and optimizing maritime activities. Traditional ocean forecasting relies on numerical models; however, these approaches face limitations in terms of computational cost and scalability. In this study, we adapt Aurora, a foundational deep learning model originally designed for atmospheric forecasting, to predict sea surface temperature (SST) in the Canary Upwelling System. By fine-tuning this model with high-resolution oceanographic reanalysis data, we demonstrate its ability to capture complex spatiotemporal patterns while reducing computational demands. Our methodology involves a staged fine-tuning process, incorporating latitude-weighted error metrics and optimizing hyperparameters for efficient learning. The experimental results show that the model achieves a low RMSE of 0.119 K while maintaining high anomaly correlation coefficients (ACC $\approx 0.997$). The model successfully reproduces large-scale SST structures but faces challenges in capturing finer details in coastal regions. This work contributes to the field of data-driven ocean forecasting by demonstrating the feasibility of using deep learning models pre-trained on different domains for oceanic applications. Future improvements include integrating additional oceanographic variables, increasing spatial resolution, and exploring physics-informed neural networks to enhance interpretability and understanding. These advancements can improve climate modeling and ocean prediction accuracy, supporting decision-making in environmental and economic sectors.
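As an illustration of the latitude-weighted error metric mentioned in the fine-tuning methodology, a minimal sketch assuming cosine-of-latitude weights on a regular lat-lon grid (a common convention in global forecast evaluation; the paper's exact weighting may differ):

```python
import numpy as np

def lat_weighted_rmse(pred, truth, lats_deg):
    """RMSE with cos(latitude) weights, so each grid cell contributes in
    proportion to its area on a regular lat-lon grid.

    pred, truth: arrays of shape (nlat, nlon); lats_deg: latitudes (deg).
    """
    w = np.cos(np.deg2rad(lats_deg))
    w = w / w.mean()  # normalize so uniform errors give the unweighted RMSE
    sq_err = (pred - truth) ** 2
    return float(np.sqrt((sq_err * w[:, None]).mean()))
```

Because the weights are normalized to unit mean, a spatially uniform error of 0.5 K returns an RMSE of exactly 0.5 K, while errors near the poles are down-weighted relative to the tropics.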
The global ocean model NEMO is run in a series of stand-alone configurations (2015-2022) to investigate the potential for improving global medium-range storm surge forecasts by including the inverse barometer effect. Here, we compare a control experiment, where the inverse barometer effect was not included, against a run dynamically forced with mean sea level pressure. In the control experiment, the inverse barometer effect was then calculated diagnostically and added to the ocean model sea surface elevation, resulting in a total of three experiments to investigate. We compare against the global GESLA3 water level data set and find that the inclusion of the inverse barometer effect reduces the root-mean-square error by $\sim 1~cm$ on average. When we mask out all data where the observed storm surge is less than $\pm1$ or $\pm2$ standard deviations, including the inverse barometer effect reduces the RMS error by $4-5$ cm. While both methods reduce water level errors, there are regional differences in their performance. The run with dynamical pressure forcing is seen to perform slightly better than diagnostically adding the inverse barometer effect in enclosed basins such as the Baltic Sea. Finally, an ensemble forecast experiment with the Integrated Forecast System of the European Centre for Medium-Range Weather Forecasts demonstrates that when the diagnostic inverse barometer effect is included for a severe storm surge event in the North Sea (Storm Xaver, December 2013), the ensemble spread of water level provides a stronger and earlier indication of the observed maximum surge level than when the effect is excluded.
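The diagnostic inverse barometer correction added to sea surface elevation follows the standard hydrostatic relation; a minimal sketch, where the reference pressure and seawater density are illustrative assumptions rather than the study's exact choices:

```python
RHO_SW = 1025.0   # kg m^-3, representative seawater density (assumption)
G = 9.81          # m s^-2, gravitational acceleration

def inverse_barometer(mslp_pa, p_ref_pa=101325.0):
    """Inverse-barometer sea level adjustment (m) for a given mean sea
    level pressure (Pa): eta_IB = -(p - p_ref) / (rho * g).

    Roughly -1 cm of sea level per +1 hPa of pressure anomaly, which is
    why a deep low during a storm raises the diagnosed water level.
    """
    return -(mslp_pa - p_ref_pa) / (RHO_SW * G)
```

A +1 hPa (+100 Pa) anomaly gives an adjustment of about -1 cm, consistent with the centimetre-scale RMS error reductions reported above.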
Convectively Coupled Equatorial Waves (CCEWs) dominate atmospheric variability on timescales of 2--30 days in the Tropics, bringing episodes of widespread heavy precipitation. This study compares the representation of CCEWs and their connection to upscale energy transfer in two Met Office Unified Model simulations of the full tropical channel with identical km-scale resolution. The principal difference between the simulations is that one parametrizes convection (GAL), while the other (RAL) is convection permitting. This means GAL acts to remove vertical instability without explicitly representing the resolved-scale circulation associated with convective plumes. We present the first quantitative diagnosis of interscale energy transfer and its relation to CCEWs. This diagnosis is important because upscale energy transfer between convection and large-scale waves may influence accurate simulation and predictability of tropical weather systems. The average upper-tropospheric upscale transfer simulated by RAL is approximately 50\% higher than GAL. CCEWs are more coherent in RAL, with an average phase-speed variability 80\% higher than observations, compared with 166\% higher in GAL. RAL also simulates greater upscale energy transfer within waves than GAL, with a stronger correlation between the interscale energy transfer rate and equatorial wave winds. Simulated Kelvin and Rossby waves are associated with upscale energy transfer from scales 2--8 times smaller than the dominant wavelength, related to active deep convection within a particular sector of the wave phase. Our findings show that the explicit representation of convective motions has a significant impact on the simulation of upscale energy transfer, and is therefore very likely to be a significant factor in the faithful simulation of the convective coupling within CCEWs.
Thunderstorm Ground Enhancements (TGEs) are bursts of high-energy particle fluxes detected at Earth's surface, linked to the Relativistic Runaway Electron Avalanche (RREA) mechanism within thunderclouds. Accurate detection of TGEs is vital for advancing atmospheric physics and radiation safety, but event selection methods heavily rely on expert-defined thresholds. In this study, we use an automated supervised classification approach on a newly curated dataset of 2024 events from the Aragats Space Environment Center (ASEC). By combining a Tabular Prior-data Fitted Network (TabPFN) with SHAP-based interpretability, we attain 94.79% classification accuracy with 96% precision for TGEs. The analysis reveals data-driven thresholds for particle flux increases and environmental parameters that closely match the empirically established criteria used over the last 15 years. Our results demonstrate that modest but concurrent increases across multiple particle detectors, along with strong near-surface electric fields, are reliable indicators of TGEs. The framework we propose offers a scalable method for automated, interpretable TGE detection, with potential uses in real-time radiation hazard monitoring and multi-site atmospheric research.
Recent advances in AI-based weather prediction have led to the development of artificial intelligence weather prediction (AIWP) models with competitive forecast skill compared to traditional NWP models, but with substantially reduced computational cost. There is a strong need for appropriate methods to evaluate their ability to predict extreme weather events, particularly when spatial coherence is important, and grid resolutions differ between models. We introduce a verification framework that combines spatial verification methods and proper scoring rules. Specifically, the framework extends the High-Resolution Assessment (HiRA) approach with threshold-weighted scoring rules. It enables user-oriented evaluation consistent with how forecasts may be interpreted by operational meteorologists or used in simple post-processing systems. The method supports targeted evaluation of extreme events by allowing flexible weighting of the relative importance of different decision thresholds. We demonstrate this framework by evaluating 32 months of precipitation forecasts from an AIWP model and a high-resolution NWP model. Our results show that model rankings are sensitive to the choice of neighbourhood size. Increasing the neighbourhood size has a greater impact on scores evaluating extreme-event performance for the high-resolution NWP model than for the AIWP model. At equivalent neighbourhood sizes, the high-resolution NWP model only outperformed the AIWP model in predicting extreme precipitation events at short lead times. We also demonstrate how this approach can be extended to evaluate discrimination ability in predicting heavy precipitation. We find that the high-resolution NWP model had superior discrimination ability at short lead times, while the AIWP model had slightly better discrimination ability from a lead time of 24 hours onwards.
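A threshold-weighted scoring rule of the kind combined with HiRA can be sketched for a single ensemble forecast, using the standard chaining-function form of the threshold-weighted CRPS with weight w(z) = 1{z >= t}; the paper's exact scores and neighbourhood aggregation may differ:

```python
import numpy as np

def tw_crps(ensemble, obs, t):
    """Threshold-weighted CRPS for an ensemble forecast and scalar
    observation, with weight w(z) = 1{z >= t} implemented via the
    chaining function v(z) = max(z, t):

        twCRPS = mean|v(X) - v(y)| - 0.5 * mean|v(X) - v(X')|

    Only forecast behaviour above threshold t contributes, which lets
    the evaluation focus on extreme events.
    """
    x = np.maximum(np.asarray(ensemble, dtype=float), t)
    y = max(float(obs), t)
    term1 = np.mean(np.abs(x - y))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)
```

When both the ensemble and the observation lie entirely below the threshold, the score is exactly zero: such cases carry no weight. Setting t below all data recovers the ordinary ensemble CRPS.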
In this work, we study a three-wave kinetic equation with resonance broadening arising from the theory of stratified ocean flows. Unlike Gamba-Smith-Tran (On the wave turbulence theory for stratified flows in the ocean, Math. Models Methods Appl. Sci. 30 (2020), no. 1, 105--137), we employ a different formulation of the resonance broadening, which makes the present model more suitable for ocean applications. We establish the global existence and uniqueness of strong solutions to the new resonance broadening kinetic equation.
In 1999 the NWS began using the phrase "tornado emergency" to denote tornado warnings for storms with the potential to cause rare, catastrophic damage. After years of informal usage, tornado emergencies were formally introduced to 46 weather forecasting offices in 2014 as part of the impact-based warning (IBW) program, with a nationwide rollout occurring over the following years. In concert with the new tiered warning approach, the Warning Decision Training Division (WDTD) also introduced suggested criteria for when forecasters should consider upgrading a tornado warning to a tornado emergency, which includes thresholds of rotational velocity (VROT) and significant tornado parameter (STP). Although significant research has studied both tornado forecasting and tornado warning dissemination in the decade since, relatively little work has examined the effectiveness of the tornado emergency specifically. Our analysis of all 89 IBW tornado emergencies issued from 2014-2023 found that forecasters do not appear to follow suggested criteria for issuance in the majority of cases, with only two tornado emergencies meeting both VROT and STP thresholds. Regardless, 70% of tornado emergencies were issued for EF-3+ tornadoes and tornado emergencies covered 55% of all EF-4 tornadoes as well as 41% of all tornadoes resulting in 3 or more fatalities. Based on these results, we propose several updates to the current NWS training materials for impact-based tornado warnings.
Hydrogen, helium, silicates, and iron are key building blocks of rocky and gas-rich planets, yet their chemical interactions remain poorly constrained. Using first-principles molecular dynamics and thermodynamic integration, we quantify hydrogen and helium partitioning between molten silicate mantles and metallic cores for Earth-to-Neptune-mass planets. Hydrogen becomes strongly siderophilic above $\sim$25 GPa but weakens beyond $\sim$200 GPa, whereas helium remains lithophilic yet increasingly soluble in metal with pressure. Incorporating these trends into coupled structure-chemistry models suggests that the majority of hydrogen and helium resides in planetary interiors, not atmospheres, with abundances strongly depending on planet mass. Such volatile exchange may influence the redox states of secondary atmospheres, longevity of primordial envelopes, predicted CHNOPS abundances, and emergence of helium-enriched atmospheres, while He 1083 nm and H Lyman-$\alpha$ lines provide potential probes of atmosphere-interior exchange. These findings link atomic-scale interactions to planetary-scale observables, providing new constraints on the origins of Earth-to-Neptune-sized worlds.
Understanding and forecasting precipitation events in Arctic maritime environments, such as Bear Island and Ny-{\AA}lesund, is crucial for assessing climate risk and developing early warning systems in vulnerable marine regions. This study proposes a probabilistic machine learning framework for modeling and predicting the dynamics and severity of precipitation. We begin by analyzing the scale-dependent relationships between precipitation and key atmospheric drivers (e.g., temperature, relative humidity, cloud cover, and air pressure) using wavelet coherence, which captures localized dependencies across time and frequency domains. To assess joint causal influences, we employ Synergistic-Unique-Redundant Decomposition, which quantifies how interaction effects among these variables shape future precipitation dynamics. These insights inform the development of data-driven forecasting models that incorporate both historical precipitation and causal climate drivers. To account for uncertainty, we employ the conformal prediction method, which enables the generation of calibrated non-parametric prediction intervals. Our results underscore the importance of utilizing a comprehensive framework that combines causal analysis with probabilistic forecasting to enhance the reliability and interpretability of precipitation predictions in Arctic marine environments.
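The calibrated non-parametric intervals can be sketched with the split conformal construction; this is a generic textbook form of the method, not necessarily the authors' exact implementation:

```python
import numpy as np

def split_conformal_halfwidth(cal_residuals, alpha=0.1):
    """Split conformal prediction: given absolute residuals |y - yhat| on a
    held-out calibration set, return the half-width q such that intervals
    [yhat - q, yhat + q] cover new points with probability >= 1 - alpha
    (assuming exchangeability). Uses the finite-sample-corrected rank
    k = ceil((n + 1) * (1 - alpha))."""
    r = np.sort(np.asarray(cal_residuals, dtype=float))
    n = len(r)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return float(r[k - 1])

def conformal_interval(yhat, cal_residuals, alpha=0.1):
    """Prediction interval around a point forecast yhat."""
    q = split_conformal_halfwidth(cal_residuals, alpha)
    return yhat - q, yhat + q
```

With 100 calibration residuals and alpha = 0.1, the rank is ceil(101 * 0.9) = 91, so the half-width is the 91st smallest residual, slightly wider than the plain 90th percentile to guarantee finite-sample coverage.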
Two extreme flood-inducing precipitation events in two cities in Mali, on 08 August 2012 in San (127 mm) and on 25 August 2019 in Kenieba (126 mm), are investigated with respect to rainfall structures, dynamical forcings, and the ability of the ICOsahedral Nonhydrostatic (ICON) model to represent their evolution. Two sets of experiments with convective parameterization enabled (PARAM) and disabled (EXPLC), both at 6.5 km grid spacing, are conducted for each case. While the (thermo)dynamical fields of the simulations are compared with ERA5 reanalysis data, the rainfall fields are tested against the satellite-based precipitation dataset IMERG by applying the spatial verification methods Fractions Skill Score (FSS) and the Structure-Amplitude-Location (SAL) score. In addition, a spectral filtering of tropical waves is applied to investigate their impact on the extreme events. The most prominent results are: (1) Both cases were caused by organized convective systems associated with a westward propagating cyclonic vortex, but differ in their environmental setting. Although both cases featured an African easterly wave (AEW), the San case involved convective enhancement along dry Saharan airmasses, whereas the Kenieba case occurred within an unusual widespread wet environment extending deep into the Sahel. (2) Although EXPLC captures the rainfall distribution in the San case better than PARAM, it fails to organize convection in the moisture-laden Kenieba case, which PARAM is capable of simulating. (3) The FSS confirms the case-dependency of the ICON skill. The SAL method hints towards a systematic deficiency of EXPLC to represent the convective organization by producing too many scattered and weak rainfall systems, while PARAM is more effective in converting abundant moisture into excessive rainfall.
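The Fractions Skill Score used for the spatial verification can be sketched as follows; this is a generic implementation of the standard FSS definition with square neighborhoods and zero padding, and the study's exact configuration (thresholds, window sizes) may differ:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def neighborhood_fractions(exceedance, n):
    """Fraction of exceedance pixels in each n x n window (odd n),
    same-size output with zero padding at the domain edges."""
    pad = n // 2
    padded = np.pad(exceedance.astype(float), pad)
    windows = sliding_window_view(padded, (n, n))
    return windows.mean(axis=(-2, -1))

def fss(forecast, observed, threshold, n):
    """Fractions Skill Score for exceedances of `threshold` within n x n
    neighborhoods: FSS = 1 - MSE(Pf, Po) / (mean(Pf^2) + mean(Po^2)).
    1 = perfect match of neighborhood fractions, 0 = no skill."""
    pf = neighborhood_fractions(forecast >= threshold, n)
    po = neighborhood_fractions(observed >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return float(1.0 - mse / mse_ref) if mse_ref > 0 else float("nan")
```

A forecast verified against itself scores exactly 1, while a spatially displaced rain feature scores between 0 and 1, improving as the neighborhood grows large enough to absorb the displacement.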
Climate simulations, at all grid resolutions, rely on approximations that encapsulate the forcing due to unresolved processes on resolved variables, known as parameterizations. Parameterizations often lead to inaccuracies in climate models, with significant biases in the physics of key climate phenomena. Advances in artificial intelligence (AI) are now directly enabling the learning of unresolved processes from data to improve the physics of climate simulations. Here, we introduce a flexible framework for developing and implementing physics- and scale-aware machine learning parameterizations within climate models. We focus on the ocean and sea-ice components of a state-of-the-art climate model by implementing a spectrum of data-driven parameterizations, ranging from complex deep learning models to more interpretable equation-based models. Our results showcase the viability of AI-driven parameterizations in operational models, advancing the capabilities of a new generation of hybrid simulations, and include prototypes of fully coupled atmosphere-ocean-sea-ice hybrid simulations. The tools developed are open source, accessible, and available to all.