The Nancy Grace Roman Space Telescope (``Roman'') is a 2.4 m space telescope scheduled for a 2026 launch. The Coronagraph Instrument (CGI) on Roman is a technology-demonstration instrument with a coronagraph and, for the first time in space, deformable mirrors and active wavefront control. This paper walks through the algorithmic and system-level architecture of the high-order wavefront sensing and control (HOWFSC) implementation for CGI, including the use of ground-in-the-loop (GITL) operations to support computationally expensive operations, and reports on instrument performance measured during thermal vacuum testing in instrument integration and test. CGI achieved better than $5\times10^{-8}$ total raw contrast with two independent coronagraph architectures covering 3-9 and 6-20 $\lambda/D$ between them, each with a $360^{\circ}$ dark hole. The contrast limits appear to be driven by the time available for testing and do not appear to represent a floor in the achievable performance of CGI in flight.
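For background, dark-hole wavefront control of this kind is typically driven by an electric-field-conjugation-style update, sketched below as a regularized least-squares step. This is a generic illustration, not necessarily the exact HOWFSC algorithm implemented for CGI; the field estimate, Jacobian and regularization value are placeholders.

```python
import numpy as np

# Generic electric-field-conjugation-style step (illustration only, not CGI's
# flight algorithm): find the DM command u minimizing |E + G u|^2 + reg |u|^2.
def efc_step(E_field, jacobian, reg=1e-6):
    """E_field: complex focal-plane field estimate in the dark hole, shape (n_pix,)
       jacobian: complex sensitivity d(E_field)/d(DM actuator), shape (n_pix, n_act)"""
    G = np.vstack([jacobian.real, jacobian.imag])   # stack real/imag parts
    e = np.concatenate([E_field.real, E_field.imag])
    lhs = G.T @ G + reg * np.eye(G.shape[1])        # Tikhonov-regularized normal equations
    return -np.linalg.solve(lhs, G.T @ e)           # DM command update
```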
Neural Operators (NOs) are a leading method for surrogate modeling of partial differential equations. Unlike traditional neural networks, which approximate individual functions, NOs learn mappings between function spaces. NOs have been tested predominantly on simplified 1D and 2D problems, and such studies do not address the complexities of more realistic, high-dimensional, high-dynamic-range systems. Moreover, many real-world applications involve incomplete or noisy data, which has not been adequately explored in the current NO literature. In this work, we present a novel application of NOs to astrophysical data, which involves high-dynamic-range projections into an observational space. We train Fourier Neural Operator (FNO) models to predict the evolution of incomplete observational proxies with density variations spanning four orders of magnitude. We demonstrate that FNOs can predict the effects of unobserved dynamical variables. Our work lays the groundwork for future studies that forecast direct astronomical observables.
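To make the FNO building block concrete, here is a minimal sketch of a 1D spectral convolution layer in PyTorch, the core operation inside an FNO; the channel counts and number of retained Fourier modes are illustrative and do not reproduce the models trained in this work.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Minimal 1D Fourier layer: FFT -> learned complex weights on the lowest
    Fourier modes -> inverse FFT (illustrative sketch of an FNO building block)."""
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency modes kept (must be <= grid//2 + 1)
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                       # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                # (batch, channels, grid//2 + 1)
        out_ft = torch.zeros(x.shape[0], self.weights.shape[1], x_ft.shape[-1],
                             dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.shape[-1])
```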
The estimation of the number of point sources in the sky is one of the oldest problems in astronomy, yet an easy and efficient method for estimating the uncertainty on these counts is still an open problem. Probabilistic cataloging solves the general point-source inference problem, but the trans-dimensional nature of the inference method requires a bespoke approach that is difficult to scale. Here it is shown that probabilistic cataloging can be performed in a fixed-dimensional framework called Parametric Cataloging under mild assumptions on some of the priors. The method requires only a simple reparameterization of the flux coordinates, yielding an accessible method that can be implemented in most probabilistic programming environments. As the parameter space is fixed-dimensional, off-the-shelf gradient-based samplers can be employed, which allows the method to scale to tens of thousands of sources.
The balloon-borne hard X-ray polarimetry mission XL-Calibur observed the Black Hole X-ray Binary (BHXRB) Cygnus X-1 (Cyg X-1) during its nearly six-day Long Duration Balloon (LDB) flight from Sweden to Canada in July 2024. The XL-Calibur observations yield the most precise constraints to date on the Polarization Degree (PD) and Polarization Angle (PA) of the hard X-ray emission from a BHXRB. XL-Calibur observed Cyg X-1 in the hard state and measured a $\sim$19-64 keV PD of ($5.0^{+2.7}_{-3.0}$)% at a PA of $-28^{\circ}\pm 17^{\circ}$, with an 8.7% chance probability of obtaining a larger PD than the one observed from an unpolarized signal. The XL-Calibur results are thus comparable to the 2-8 keV PD and PA found by IXPE, with a similar agreement between the hard X-ray PA and the radio-jet direction. We also discuss the implications of our polarization measurements for models describing the origin of the broadband X-ray and $\gamma$-ray emission, on which XL-Calibur provides independent constraints.
Simulation-based inference (SBI) allows fast Bayesian inference for simulators encoding implicit likelihoods. However, some explicit likelihoods cannot be easily reformulated as simulators, hindering their integration into combined analyses within SBI frameworks. One key example in cosmology is given by the Planck CMB likelihoods. We present a simple method to construct an effective simulator for any explicit likelihood using samples from a previously converged Markov Chain Monte Carlo (MCMC) run. This effective simulator can subsequently be combined with any forward simulator. To illustrate this method, we combine the full Planck CMB likelihoods with a 3x2pt simulator (cosmic shear, galaxy clustering and their cross-correlation) for a Stage IV survey like Euclid, and test evolving dark energy parameterized by the $w_0w_a$ equation of state. Assuming the $w_0w_a$CDM cosmology hinted at by the DESI BAO DR2 + Planck 2018 + PantheonPlus SNIa datasets, we find that future 3x2pt data alone could detect evolving dark energy at $5\sigma$, while its combination with current CMB, BAO and SNIa datasets could raise the detection to almost $7\sigma$. Moreover, thanks to simulation reuse enabled by SBI, we show that our joint analysis is in excellent agreement with MCMC while requiring zero Boltzmann solver calls. This result opens up the possibility of performing massive global scans combining explicit and implicit likelihoods in a highly efficient way.
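As an illustration of the general idea of turning a converged chain into something an SBI framework can consume, the sketch below builds an effective simulator under the simplifying assumption that the explicit likelihood is well approximated by a Gaussian in parameter space estimated from the chain; the chain file name is hypothetical and this is not necessarily the construction used in the paper.

```python
import numpy as np

# Hypothetical chain file of shape (n_samples, n_params) from a converged MCMC run.
chain = np.load("planck_chain.npy")
cov = np.cov(chain, rowvar=False)        # parameter covariance estimated from the chain
rng = np.random.default_rng(0)

def effective_simulator(theta):
    """Emit a noisy 'observation' of theta whose implied likelihood is the
    Gaussian estimated from the chain (illustrative effective simulator)."""
    return rng.multivariate_normal(theta, cov)
```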
Optical beamsplitters with similar properties for orthogonal, linear polarisation modes are required for realising polarisation-based speedmeter schemes to reduce back-action noise in gravitational-wave interferometers. In this paper, we investigate two beamsplitter coatings, obtained on a best-effort basis from Laseroptik GmbH and Optoman, that aim for a 50/50 power-splitting ratio and equal overall phase shift for two orthogonal, linear polarisation modes interacting with the optic. We show that while Laseroptik GmbH opted for a coating stack with 22 alternating layers of Ta$_2$O$_5$ and SiO$_2$, Optoman produced a much thinner coating made of 5 SiO$_2$ and SiO$_x$ ($0 < x < 2$) layers. With these strategies, the Laseroptik coating achieves an equal power reflectivity of 51% at a 46 deg angle of incidence and zero phase shift between both polarisations at a 44.25 deg angle of incidence. The Optoman coating achieves power reflectivities of 49% for s-polarisation and 51% for p-polarisation, with a differential phase shift of around 5 deg that is largely independent of the angle of incidence.
The Vera C. Rubin Observatory will soon survey the southern sky, delivering a depth and sky coverage that are unprecedented in time-domain astronomy. As part of commissioning, Data Preview 1 (DP1) has been released. It comprises a ComCam observing campaign carried out between November and December 2024, with multi-band imaging of seven fields covering roughly 0.4 square degrees each, and provides a first glimpse into the data products that will become available once the Legacy Survey of Space and Time begins. In this work, we search three fields for extragalactic transients. We identify six new extragalactic transients, and three known ones, from a sample of 369,644 difference image analysis objects. Photometric classification using \texttt{Superphot+} indicates that this sample likely comprises six type Ia, two type II, two type Ibc and one type IIn supernovae. Our findings are in slight tension with supernova detection rate predictions from the literature of $12\pm3$ SN Ia and $3\pm1$ core-collapse supernovae, likely due to the lack of suitable templates. Nevertheless, this work demonstrates the quality of the data products delivered in DP1 and indicates that the Rubin Observatory Legacy Survey of Space and Time (LSST) is well placed to fulfill its discovery potential in time-domain astronomy.
The Ricochet experiment aims to measure the coherent elastic neutrino-nucleus scattering process from antineutrinos emitted by a research nuclear reactor operated by the Institut Laue-Langevin (Grenoble, France). This article presents a description of the Ricochet experimental installation and the detector performance achieved during its commissioning with a mini-CryoCube module consisting of three 42-gram germanium cryogenic calorimeters. The baseline resolutions and background levels are reported for both reactor-on and reactor-off periods, and as noise mitigation techniques were improved. A baseline resolution of 40 eV electron equivalent was achieved for the ionization channel after setup improvements, and the phonon channel resolutions ranged from 50 to 80 eV of total phonon energy. In the energy region from 2 to 7 keV, a nuclear recoil rate of 15(2) events/(kg day keV) is measured during the reactor-off period when selecting events in coincidence with muon veto signals. This rate is in agreement with the cosmogenic neutron rate calculated from GEANT4 simulations. After the rejection of events in coincidence with signals in the muon veto detectors, a combined 90% C.L. limit on the nuclear recoil background of < 9 events/(kg day keV) is obtained in that energy region during the reactor-on period, which is compatible with our GEANT4 model calculation corresponding to a total rate of 5 events/(kg day keV). The sensitivity of this analysis was, however, found to be limited by surface event contamination, which is currently being addressed by the Ricochet Collaboration with upgraded detectors.
Modern large-scale cosmological hydrodynamic simulations require robust tools capable of analysing their data outputs in a parallel and efficient manner. We introduce SOAP (Spherical Overdensity and Aperture Processor), a Python package designed to compute halo and galaxy properties from SWIFT simulations after they have been post-processed with a subhalo finder. SOAP takes a subhalo catalogue as input and calculates a wide array of properties for each object. SOAP offers parallel processing via mpi4py for efficient handling of large datasets, and allows for consistent property calculation across multiple halo finders. SOAP supports various halo definitions, including spherical overdensities and fixed physical apertures, providing flexibility for diverse observational comparisons. The package is compatible with both dark-matter-only and full hydrodynamic simulations, producing HDF5 catalogues that are integrated with the swiftsimio package for seamless unit handling.
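As an illustration of the kind of spherical-overdensity property such a tool computes (a minimal sketch of the definition, not SOAP's actual implementation or API; the critical density value assumes $h=0.68$), an $M_{200\rm crit}$ calculation for a single halo might look as follows.

```python
import numpy as np

RHO_CRIT = 2.775e11 * 0.68**2   # critical density in Msun / Mpc^3, assuming h = 0.68

def so_mass(centre, positions, masses, overdensity=200.0):
    """Return (R_200crit, M_200crit) for particles around `centre` (Mpc, Msun):
    the outermost radius within which the mean enclosed density exceeds
    `overdensity` times the critical density."""
    r = np.linalg.norm(positions - centre, axis=1)
    order = np.argsort(r)
    r_sorted, m_enclosed = r[order], np.cumsum(masses[order])
    mean_density = m_enclosed / (4.0 / 3.0 * np.pi * r_sorted**3)
    inside = np.where(mean_density >= overdensity * RHO_CRIT)[0]
    if inside.size == 0:
        return 0.0, 0.0
    i = inside[-1]               # outermost radius still above the density threshold
    return r_sorted[i], m_enclosed[i]
```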
The MAGIC telescopes, located at the Observatorio El Roque de los Muchachos (La Palma, Spain), are two Imaging Air Cherenkov Telescopes observing Very High Energy (VHE) gamma rays. They are run by an international collaboration of over 40 institutions from 12 countries. The first telescope was inaugurated in October 2003; the commissioning of the second finished in 2008. The MAGIC telescopes were designed to lower the energy threshold accessible to ground-based telescopes and to be able to point to any direction in the sky in less than 25 seconds. The former required a large reflective surface 17 meters in diameter as well as an effort to optimise the mirror reflectivity and photosensor sensitivity. The latter was achieved by minimising the weight of the full instrument, for instance by using carbon-fibre-reinforced plastic tubes for the mirror frame. The sensitivity of the MAGIC telescopes has been improving over the years thanks to hardware upgrades as well as new analysis techniques, which has allowed the collaboration to maintain a rich scientific program. The discovery of VHE emission from Gamma Ray Bursts and pulsars has called for a revision of the models that explain the production of gamma rays in these sources. Both observations of sources in flaring states and systematic monitoring of sources have provided valuable data to better understand astrophysical sources, both in our Galaxy and outside it. Relevant constraints on fundamental quantities such as the dark matter cross-section, the quantum gravity scale and the density of extragalactic background light have also been extracted from the observations.
Wavelength calibration is a key factor in high-resolution spectroscopic measurements for precision radial velocities. Hollow-cathode lamps (e.g., ThAr), absorption cells (e.g., the iodine cell), dielectric-coated Fabry-P\'erot etalons and laser frequency combs have been implemented over the years for precise wavelength calibration and wavelength-drift measurements. However, due to their various impediments as wavelength calibrators, investigations of alternative methods remain of prime interest. In this paper, we examine the feasibility of a low-cost (~ $1000), commercially available, solid fused-silica etalon with a broadband metallic coating as a calibrator. We studied the behaviour with temperature for two cavity spacings (free spectral ranges of 1 cm$^{-1}$ and 0.5 cm$^{-1}$), using both a theoretical derivation and experimental data. Our setup reached a temperature stability of 0.8 mK for a calibrator system using an off-the-shelf dewar flask with active stabilisation. Our radial velocity drift measurements demonstrate that such a calibration system is capable of providing higher signal-to-noise calibration and better nightly drift measurements than ThAr in the wavelength range between 470 nm and 780 nm. A similar result has previously been found for Fabry-P\'erot etalons, and although the metalon solution lacks the efficiency of an etalon, it does offer a cost-effective broadband solution that should be less prone to aging than complex dielectric mirror coatings. Nonetheless, long-term monitoring is required to understand the metalon behaviour in detail.
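For context, the standard relations for a solid etalon of thickness $L$ and refractive index $n$ (quoted here as textbook background, not from the paper) give the free spectral range in wavenumber and the fractional drift of a transmission peak with temperature:
$$ \Delta\tilde{\nu}_{\rm FSR} = \frac{1}{2 n L}, \qquad \frac{1}{\nu}\frac{d\nu}{dT} = -\left(\alpha + \frac{1}{n}\frac{dn}{dT}\right), $$
where $\alpha$ is the thermal expansion coefficient and $dn/dT$ the thermo-optic coefficient. For fused silica the thermo-optic term (of order $10^{-5}\,{\rm K}^{-1}$) dominates over thermal expansion, which is why millikelvin-level temperature control of the kind described above is central to the achievable drift stability.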
The High-Altitude Water Cherenkov (HAWC) observatory was designed to study gamma-ray sources in the energy range from a few hundred GeV up to a few hundred TeV. It is composed of 300 Water Cherenkov Detectors (WCDs) that cover an area of approximately 22000 m${}^2$ at 4100 m a.s.l. In this study, we use the HAWC WCDs as a very large horizontal particle tracker, searching for seasonal variations in the horizontal muon rate using 1.5 years of HAWC data. We look for a possible correlation between the effective temperature and the horizontal muon rate. To do this, we developed a method to calculate the effective temperature for the horizontal propagation of muons. This is the first time that a search for seasonal variations in the high-altitude horizontal muon rate has been reported.
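For reference, the effective temperature in muon seasonal-variation studies is conventionally defined as a weighted average of the atmospheric temperature profile along the muon path (the specific weight for horizontal trajectories is what is developed in this work):
$$ T_{\rm eff} = \frac{\int_0^\infty dX\, T(X)\, W(X)}{\int_0^\infty dX\, W(X)}, \qquad \frac{\Delta R_\mu}{\langle R_\mu \rangle} = \alpha_T\, \frac{\Delta T_{\rm eff}}{\langle T_{\rm eff} \rangle}, $$
where $X$ is the atmospheric depth along the muon trajectory, $W(X)$ weights each depth by its contribution to muon production, and $\alpha_T$ is the temperature coefficient relating the rate variation to the effective-temperature variation.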
We present SLIDE, a pipeline that enables transient discovery in data from the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), using archival images from the Dark Energy Camera (DECam) as templates for difference imaging. We apply this pipeline to the recently released Data Preview 1 (DP1; the first public release of Rubin commissioning data) and search for transients in the resulting difference images. The image subtraction, photometry extraction, and transient detection are all performed on the Rubin Science Platform. We demonstrate that SLIDE effectively extracts clean photometry by circumventing poor or missing LSST templates. This is especially useful for transient analysis in the early years of LSST, when template coverage will be largely incomplete or when templates may be contaminated by transients present at the time of acquisition. We present multiband light curves for a sample of known transients, along with new transient candidates identified through our search. Finally, we discuss the prospects of applying this pipeline during the main LSST survey.
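As a schematic of what difference imaging involves (an illustrative cross-convolution sketch with Gaussian PSF models, not the subtraction algorithm actually used by SLIDE on the Rubin Science Platform):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(science, template, sci_psf_sigma, tmpl_psf_sigma):
    """Convolve each image with the other's (assumed Gaussian) PSF so both share a
    common effective PSF, scale the template to the science flux level, subtract."""
    sci_matched = gaussian_filter(science, tmpl_psf_sigma)
    tmpl_matched = gaussian_filter(template, sci_psf_sigma)
    scale = np.nansum(sci_matched * tmpl_matched) / np.nansum(tmpl_matched**2)
    return sci_matched - scale * tmpl_matched
```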
We present a unified post-Newtonian framework for relativistic timing and coordinate transformations covering six time scales (TCB, TCG, TT, TDB, TCL, TL) and three reference systems (BCRS, GCRS, LCRS). Extending the IAU conventions, we define a Lunicentric Celestial Reference System (LCRS) metric that retains all contributions above a fractional threshold of $5\times10^{-18}$ and timing terms above 0.1 ps by expanding the lunar gravity field to spherical-harmonic degree $l=9$ with Love number variations and including external tidal and inertial multipoles to the octupole. We derive closed-form mappings among TCB, TCG, TT, TCL and TL, yielding proper-to-coordinate time transformations and two-way time-transfer corrections at sub-picosecond accuracy. We evaluate secular rate constants and periodic perturbations arising from kinematic dilation, the lunar monopole and multipoles, Earth tides and gravitomagnetic effects for clocks on the lunar surface, in low lunar orbits, at the Earth-Moon L1 point and in near-rectilinear halo orbits. Our analysis demonstrates that harmonics through $l=9$ and tides through $l=8$ are required to achieve $5\times10^{-18}$ fractional stability, supporting sub-picosecond clock synchronization and centimeter-level navigation in cislunar space. This framework underpins high-precision time and frequency transfer, relativistic geodesy, quantum communication links and fundamental physics experiments beyond low Earth orbit.
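For orientation, the transformations above extend the familiar leading-order relation between a clock's proper time $\tau$ and a coordinate time $t$ (written here schematically; the LCRS expressions in the paper add lunar multipole, tidal and gravitomagnetic terms to reach the $5\times10^{-18}$ threshold):
$$ \frac{d\tau}{dt} \simeq 1 - \frac{1}{c^{2}}\left[ U(\mathbf{x}) + \frac{v^{2}}{2} \right] + \mathcal{O}(c^{-4}), $$
where $U(\mathbf{x})$ is the Newtonian potential at the clock and $v$ its coordinate velocity.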
The advent of large aperture arrays, such as those currently under construction for the SKA project, allows for observing the Universe in the radio spectrum at unprecedented resolution and sensitivity. To process the enormous amounts of data produced by these telescopes, scalable software pipelines are required. This paper addresses this need by proposing a framework for decentralized radio-interferometric image reconstruction that parallelizes by spatial frequency. This is achieved by creating a pseudo-full-resolution problem for each node, using the local visibilities together with the reconstructed images from the other nodes' previous major cycles. We apply the proposed framework to both multiscale CLEAN and sparsity-regularized convex reconstruction and compare them to their serial counterparts across four data sets of varying properties and two visibility partitions. We find that the parallelization framework significantly reduces reconstruction times for images of similar quality. This was especially the case for our larger datasets, where we achieved close to the optimal $2\times$ speedup.
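The per-node major cycle implied by this description can be sketched as follows (an assumed structure for illustration, not the authors' code; `forward`, `adjoint` and `deconvolve` stand in for the measurement operator restricted to the node's visibilities and for any minor-cycle solver such as multiscale CLEAN):

```python
import numpy as np

def local_major_cycle(local_vis, local_uv_mask, other_models,
                      forward, adjoint, deconvolve):
    """One decentralized major cycle on a single node: build a pseudo-full-resolution
    problem from the local visibilities plus the other nodes' latest model images."""
    combined_model = sum(other_models)                       # models broadcast by other nodes
    residual_vis = (local_vis - forward(combined_model)) * local_uv_mask
    dirty_residual = adjoint(residual_vis)                   # back-project onto the image grid
    return combined_model + deconvolve(dirty_residual)       # minor cycle on the residual
```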
Cosmological hydrodynamical simulations have become an indispensable tool for understanding galaxies. However, computational constraints still severely limit their numerical resolution. This not only restricts the sampling of the stellar component and its direct comparison to detailed observations, but also the precision with which it is evolved. To overcome these problems, we introduce the \emph{Superstars} method, which increases the stellar mass resolution in cosmological galaxy simulations in a computationally inexpensive way, at fixed dark matter and gas resolution, without altering any global properties of the simulated galaxies. We demonstrate the \emph{Superstars} method on a Milky Way-like galaxy of the Auriga project, improving the stellar mass resolution by factors of $8$ and $64$. We show and quantify that this improves the sampling of the stellar population in the disc and halo without changing the properties of the central galaxy or its satellites, unlike simulations that change the resolution of all components (gas, dark matter, stars). Moreover, the better stellar mass resolution reduces numerical heating of the stellar disc in its outskirts and keeps substructures in the stellar disc and inner halo more coherent. It also makes lower-mass and lower-surface-brightness structures in the stellar halo more visible. The \emph{Superstars} method is straightforward to incorporate into any cosmological galaxy simulation that does not resolve individual stars.
While significant advances have been made in photometric classification in preparation for the millions of transient events and hundreds of supernovae (SNe) that the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will discover each night, classifying SNe spectroscopically remains the best way to determine most SN subtypes. Traditional spectrum classification tools use template matching techniques (Blondin & Tonry 2007) and require significant human supervision. Two deep learning spectral classifiers, DASH (Muthukrishna et al. 2019) and SNIascore (Fremling et al. 2021), define the state of the art, but SNIascore is a binary classifier devoted to maximizing the purity of the SN Ia-norm sample, while DASH is no longer maintained and the original work suffers from contamination of multi-epoch spectra in the training and test sets. We have explored several neural network architectures in order to create a new automated method for classifying SN subtypes, settling on an attention-based model we call ABC-SN. We benchmark our results against an updated version of DASH, thus providing the community with an up-to-date general-purpose SN classifier. Our dataset includes ten different SN subtypes, covering subtypes of SN Ia, core-collapse and interacting SNe. We find that ABC-SN outperforms DASH, and we discuss the possibility that modern SN spectra datasets contain label noise which limits the performance of all classifiers.
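For illustration, an attention-based spectrum classifier of the general kind described here can be sketched in a few lines of PyTorch; the patching, layer sizes and pooling below are hypothetical choices, not the published ABC-SN architecture.

```python
import torch
import torch.nn as nn

class AttentionSpectrumClassifier(nn.Module):
    """Illustrative attention-based SN spectrum classifier (hypothetical architecture):
    group flux bins into patches, embed them as a sequence, apply a Transformer
    encoder, average-pool over wavelength, and classify into subtypes."""
    def __init__(self, n_bins=1024, patch=16, d_model=64, n_heads=4,
                 n_layers=2, n_classes=10):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch, d_model)               # embed each patch of flux bins
        self.pos = nn.Parameter(torch.zeros(1, n_bins // patch, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, flux):                                 # flux: (batch, n_bins)
        x = flux.unfold(1, self.patch, self.patch)           # (batch, n_patches, patch)
        x = self.embed(x) + self.pos                         # add learned positions
        x = self.encoder(x)                                  # self-attention over wavelength
        return self.head(x.mean(dim=1))                      # logits over SN subtypes
```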
This paper presents properties and approximations of a random variable based on the zero-order modified Bessel function that results from the compounding of a zero-mean Gaussian with a $\chi^2_1$-distributed variance. This family of distributions is a special case of the McKay family of Bessel distributions and of a family of generalized Laplace distributions. It is found that the Bessel distribution can be approximated with a null-location Laplace distribution, which corresponds to the compounding of a zero-mean Gaussian with a $\chi^2_2$-distributed variance. Other useful properties and representations of the Bessel distribution are discussed, including a closed form for the cumulative distribution function that makes use of the modified Struve functions. Another approximation of the Bessel distribution, based on an empirical power-series approximation, is also presented. The approximations are tested in an application to the typical problem of statistical hypothesis testing. It is found that a Laplace distribution of suitable scale parameter can approximate quantiles of the Bessel distribution to better than 10% accuracy, with the computational advantage associated with the use of simple elementary functions instead of special functions. It is expected that the approximations proposed in this paper will be useful for a variety of data science applications where analytic simplicity and computational efficiency are of paramount importance.
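As a sketch of the compounding referred to above (a standard Gaussian variance-mixture calculation; scale conventions may differ from those adopted in the paper), conditioning a zero-mean Gaussian on a $\chi^2_1$-distributed variance $V$ gives
$$ f_X(x) = \int_0^\infty \frac{e^{-x^{2}/(2v)}}{\sqrt{2\pi v}}\,\frac{v^{-1/2}e^{-v/2}}{\sqrt{2\pi}}\,dv = \frac{1}{\pi}\,K_0(|x|), $$
whereas replacing the $\chi^2_1$ mixing distribution by $\chi^2_2$ (an exponential) yields the standard Laplace density $f_X(x)=\tfrac{1}{2}e^{-|x|}$, which is the basis of the Laplace approximation discussed above.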