We present a simulation of the time-domain catalog for the Nancy Grace Roman Space Telescope's High-Latitude Time-Domain Core Community Survey. This simulation, called the Hourglass simulation, uses the most up-to-date spectral energy distribution models and rate measurements for ten extra-galactic time-domain sources. We simulate these models through the design reference Roman Space Telescope survey: four filters per tier, a five-day cadence, over two years, a wide tier of 19 deg$^2$ and a deep tier of 4.2 deg$^2$, with $\sim$20% of those areas also covered with prism observations. We find that a science-independent Roman time-domain catalog, assuming a S/N at maximum light of >5, would have approximately 21,000 Type Ia supernovae, 40,000 core-collapse supernovae, around 70 superluminous supernovae, $\sim$35 tidal disruption events, 3 kilonovae, and possibly pair-instability supernovae. In total, Hourglass has over 64,000 transient objects, 11 million photometric observations, and 500,000 spectra. Additionally, Hourglass is a useful data set to train machine learning classification algorithms. We show that SCONE is able to photometrically classify Type Ia supernovae with high precision ($\sim$95%) out to z > 2. Finally, we present the first realistic simulations of non-Type Ia supernovae spectral-time series data from Roman's prism.
Binary systems in the Asymptotic Giant Branch (AGB) phase are widely recognized as a leading theoretical framework underpinning the observed asymmetric morphologies of planetary nebulae. However, the detection of binary companions in AGB systems is severely hampered by the overwhelming brightness and variability of the evolved primary star, which dominate the photometric and spectroscopic signatures. Ultraviolet (UV) excess emission has been proposed as a candidate diagnostic for the presence of binary companions in AGB systems. This paper evaluates the Chinese Space Station Telescope's (CSST) ability to detect UV excess emission in AGB stars, leveraging its unprecedented UV sensitivity and wide-field survey capabilities. We employed synthetic spectral libraries of M0-M8 type giants for primary stars and the ATLAS 9 atmospheric model grid for companion stars spanning a temperature range of 6500 K to 12000 K. By convolving these model spectra with the CSST multi-band filter system, we computed color-color diagrams (g-y versus NUV-u) to construct a diagnostic grid. This grid incorporates interstellar extinction corrections and establishes a framework for identifying AGB binary candidates through direct comparison between observed photometry and theoretical predictions. Furthermore, we discuss the physical origins of UV excess in AGB stars. This study pioneers a diagnostic framework leveraging CSST's unique multi-band UV-visible synergy to construct color-color grids for binary candidate identification, overcoming limitations of non-simultaneous multi-instrument observations.
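The diagnostic grid described above rests on synthetic photometry: convolve a model spectrum with each filter curve and read off colors. A minimal sketch of the idea, using blackbody spectra and made-up top-hat stand-ins for the CSST NUV, u, g, and y bandpasses (the actual CSST filter curves, model grids, and extinction corrections differ):

```python
import numpy as np

# Hypothetical top-hat stand-ins for the CSST bands used in the (g-y, NUV-u)
# diagnostic; centres and widths (in Angstrom) are illustrative only.
BANDS = {"NUV": (2550.0, 500.0), "u": (3550.0, 700.0),
         "g": (4750.0, 1400.0), "y": (9600.0, 1000.0)}

def synth_mag(wave, flux, centre, width):
    """Photon-weighted synthetic magnitude through a top-hat filter
    (arbitrary zero point, so only colours are meaningful)."""
    inband = (wave > centre - width / 2.0) & (wave < centre + width / 2.0)
    return -2.5 * np.log10(np.mean(flux[inband] * wave[inband]))

def colours(wave, flux):
    m = {b: synth_mag(wave, flux, c, w) for b, (c, w) in BANDS.items()}
    return m["g"] - m["y"], m["NUV"] - m["u"]

def planck(wave_A, T):
    """Blackbody spectral radiance per unit wavelength (unnormalised)."""
    w = wave_A * 1e-10                         # Angstrom -> metres
    return 1.0 / (w**5 * np.expm1(1.43877688e-2 / (w * T)))

wave = np.linspace(2000.0, 11000.0, 5000)      # Angstrom
giant = planck(wave, 3500.0)                   # cool M-giant-like primary
binary = giant + 1e-3 * planck(wave, 10000.0)  # add a faint hot companion

gy1, nuvu1 = colours(wave, giant)
gy2, nuvu2 = colours(wave, binary)
```

Because the cool giant contributes almost nothing in the NUV, even a faint hot companion moves the point far blueward along the NUV-u axis while leaving g-y nearly fixed, which is what lets a color-color grid of this kind separate binary candidates from single AGB stars.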
Gravitational-wave astronomy has entered a regime where it can extract information about the population properties of the observed binary black holes. The steep increase in the number of detections will offer deeper insights, but it will also significantly raise the computational cost of testing multiple models. To address this challenge, we propose a procedure that first performs a non-parametric (data-driven) reconstruction of the underlying distribution, and then remaps these results onto a posterior for the parameters of a parametric (informed) model. The computational cost is primarily absorbed by the initial non-parametric step, while the remapping procedure is both significantly easier to perform and computationally cheaper. In addition to yielding the posterior distribution of the model parameters, this method also provides a measure of the model's goodness-of-fit, opening the door to new quantitative comparisons across models.
We discuss the requirements, concepts, simulations, implementation, and calibration of two dual Fabry-Perot based imaging spectropolarimeters, CRISP and CHROMIS, at the Swedish 1-meter Solar Telescope, and of CRISP2, which is under construction. These instruments use a combination of a high-resolution and a low-resolution etalon together with an order-sorting prefilter to define the bandpass. The overall design is made robust and stable by tailoring the low-resolution etalon reflectivity to accommodate expected cavity errors from both etalons, and by using a compact optical design that eliminates the need for folding mirrors. By using a telecentric design based on lenses rather than mirrors, image degradation by the FPI system is negligible, as shown in a previous publication, and the throughput of the system is maximised. Initial alignment, and maintaining that alignment over time, is greatly simplified. The telecentric design allows full calibration and/or modelling of essential system parameters to be carried out without interfering with the optical setup. We also discuss briefly the polarimeters developed for CRISP and CHROMIS. The high performance of CRISP and CHROMIS has been demonstrated in an earlier publication through measurements of the granulation contrast and comparisons with similar measurements simultaneously made through broadband continuum filters. Here, we focus on the aspects of the design that are central to enabling high performance and robustness, but also discuss the calibration and processing of the data, and use a few examples of processed data to demonstrate the achievable image and data quality. We put forward a proposal for a similar conceptual design for the European Solar Telescope and conclude by discussing potential problems of the proposed approach to designs of this type. Some aspects of these FPI systems may also be of interest outside the solar community.
Context. Determining the ages of young stellar systems is fundamental to test and validate current star-formation theories. Aims. We aim to develop a Bayesian version of the expansion rate method that incorporates a priori knowledge of the stellar system's age and solves some of the caveats of the traditional frequentist approach. Methods. We upgrade an existing Bayesian hierarchical model with additional parameter hierarchies that include, amongst others, the system's age. For the latter, we propose prior distributions inspired by the literature. Results. We validate our method on a set of extensive simulations mimicking the properties of real stellar systems. In stellar associations between 10 and 40 Myr and up to 150 pc, the errors are <10%. In star-forming regions up to 400 pc, the error can be as large as 80% at 3 Myr but rapidly decreases with increasing age. Conclusions. The Bayesian expansion rate methodology that we present here offers several advantages over the traditional frequentist version. In particular, the Bayesian age estimator is more robust and credible than the commonly used frequentist ones. This new Bayesian expansion rate method is made publicly available as a module of the free and open-source code Kalkayotl.
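At its core, the expansion rate method exploits the fact that members of an unbound, expanding system roughly preserve velocity proportional to position, so the inverse expansion rate is a kinematic age. A toy one-dimensional sketch with invented numbers (the paper's Bayesian hierarchical model additionally handles priors on the age, full 3D astrometry, and observational uncertainties):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expanding association: members move ballistically from a compact
# origin, so position ~ velocity * age. All numbers below are invented.
age_myr = 20.0                        # true age, Myr
pc_per_kms_myr = 1.0227               # 1 km/s sustained for 1 Myr ~ 1.0227 pc
n = 200
v = rng.normal(0.0, 2.0, n)           # 1D peculiar velocities, km/s
x = v * age_myr * pc_per_kms_myr + rng.normal(0.0, 0.5, n)   # positions, pc

# Frequentist expansion-rate age: the slope of position against velocity
# is the age times a unit conversion. The Bayesian hierarchical version
# replaces this point estimate with a full posterior on the age.
slope = np.polyfit(v, x, 1)[0]
age_est = slope / pc_per_kms_myr
```

For a well-resolved expanding system like this toy one, the slope recovers the input age closely; the hard cases the paper targets are young, distant regions where astrometric errors dominate the expansion signal.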
The Euclid mission is designed to understand the dark sector of the universe. Precise redshift measurements are provided by H2RG detectors. We propose an unbiased method of fitting the flux with Poisson-distributed and correlated data, which has an analytic solution and provides a reliable quality factor - fundamental features to ensure the goals of the mission. We compare our method to other techniques of signal estimation and illustrate anomaly detection on flight-like detectors. Although our discussion is focused on the Euclid NISP instrument, much of what is discussed will be of interest to any mission using similar near-infrared sensors.
We present the readout noise reduction methods and the 1/f noise response of a 2Kx2K HgCdTe detector similar to the detectors that will be used in the Near Infrared Spectrometer Photometer - one of the instruments of the future ESA mission Euclid. Various algorithms of common-mode subtraction are defined and compared. We show that the readout noise can be lowered by 60% by properly using the references provided within the array. A predictive model of the 1/f noise with a given frequency power spectrum is defined and compared to data taken in a wide range of sampling frequencies. In view of this model, the definition of ad-hoc readout noises for different samplings can be avoided.
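The common-mode subtraction idea can be illustrated in a few lines: build a toy frame in which a row-wise drift is shared between science and reference pixels, then estimate and remove it using the edge reference columns. The geometry mimics an H2RG-style array with light-insensitive reference pixels on each edge, but the noise figures are invented and the paper's actual algorithms differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy detector frame with 4 reference pixels on each edge (H2RG-style
# layout); drift and noise amplitudes below are invented.
n, ref = 256, 4
common_mode = rng.normal(0.0, 10.0, size=(n, 1))   # drift shared along each row
frame = rng.normal(0.0, 3.0, size=(n, n)) + common_mode

# Common-mode subtraction: estimate the per-row drift from the left and
# right reference columns and subtract it from every pixel in that row.
ref_cols = np.concatenate([frame[:, :ref], frame[:, -ref:]], axis=1)
corrected = frame - ref_cols.mean(axis=1, keepdims=True)

science = slice(ref, n - ref)
before = frame[science, science].std()
after = corrected[science, science].std()
# 'after' is substantially smaller than 'before': the shared drift is gone,
# at the cost of a small noise penalty from averaging 8 reference pixels.
```

The residual penalty scales as the reference-pixel read noise divided by the square root of the number of references averaged, which is why the choice of averaging scheme matters when comparing algorithms.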
A new generation of optical intensity interferometers is emerging, taking advantage of the existing infrastructure of Imaging Atmospheric Cherenkov Telescopes (IACTs). The MAGIC SII (Stellar Intensity Interferometer) in La Palma, Spain, has been operating since its first successful measurements in 2019, and its current design allows it to operate regularly. The current setup is ready to follow up on bright optical transients, as changing from regular gamma-ray observations to SII mode can be done in a matter of minutes. A paper studying the system performance, first measurements, and future upgrades has recently been published. MAGIC SII's first scientific results are the measurement of the angular size of 22 stars, 13 of which had no previous measurements in the B band. More recently, the Large-Sized Telescope prototype from the Cherenkov Telescope Array Observatory (CTAO-LST1) has been upgraded to operate together with MAGIC as a SII, leading to its first correlation measurements at the beginning of 2024. MAGIC+CTAO-LST1 SII will be further upgraded by adding the remaining CTAO-LSTs at the north site to the system (which are foreseen to be built by the end of 2025). MAGIC+CTAO-LST1 SII demonstrates a feasible technical solution to extend SII to the whole CTAO.
The CYGNO experiment is developing a high-resolution gaseous Time Projection Chamber with optical readout for directional dark matter searches. The detector uses a helium-tetrafluoromethane (He:CF$_4$ 60:40) gas mixture at atmospheric pressure and a triple Gas Electron Multiplier amplification stage, coupled with a scientific camera for high-resolution 2D imaging and fast photomultipliers for time-resolved scintillation light detection. This setup enables 3D event reconstruction: photomultiplier signals provide depth information, while the camera delivers high-precision transverse resolution. In this work, we present a Bayesian Network-based algorithm designed to reconstruct the events using only the photomultiplier signals, yielding a full 3D description of the particle trajectories. The algorithm models the light collection process probabilistically and estimates spatial and intensity parameters on the Gas Electron Multiplier plane, where light emission occurs. It is implemented within the Bayesian Analysis Toolkit and uses Markov Chain Monte Carlo sampling for posterior inference. Validation using data from the CYGNO LIME prototype shows accurate reconstruction of localized and extended tracks. Results demonstrate that the Bayesian approach enables robust 3D description and, when combined with camera data, further improves the precision of track reconstruction. This methodology represents a significant step forward in directional dark matter detection, enhancing the identification of nuclear recoil tracks with high spatial resolution.
We derive the full covariance matrix formulae for the proper treatment of correlations in signal fitting procedures, extending the results from previous publications. The straight-line fits performed with these matrices demonstrate that a significantly higher signal-to-noise is obtained when the fluence exceeds 1 e/sec/pix, in particular in long (several hundreds of seconds) spectroscopic exposures. The improvement arising from the covariance matrix is particularly strong for the initial intercept of the fit at t=0, a quantity which provides a useful redundancy to cross-check the signal quality. We demonstrate that the mode that maximizes the signal-to-noise ratio in all ranges of fluxes studied in this paper is the one that uses all the frames sampled during the exposure. While there is no restriction on the organization of frames within groups for fluxes lower than 1 e/sec/pix, for fluxes exceeding this value the co-adding of frames should be avoided.
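A covariance-aware straight-line fit of this kind reduces to generalized least squares: with design matrix A and frame-to-frame covariance C, the estimator is beta = (A^T C^-1 A)^-1 A^T C^-1 y. A self-contained sketch on a toy up-the-ramp sequence; the covariance below, mixing white read noise with an accumulating shot-noise-like term, is illustrative and not the paper's full formulae:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy up-the-ramp sequence: signal(t) = intercept + flux * t with
# correlated noise. min(t_i, t_j) gives the cumulative (shot-noise-like)
# part of the covariance; the identity term is white read noise.
t = np.arange(1.0, 21.0)              # 20 frames, arbitrary time units
flux_true, intercept_true = 2.0, 5.0
read_var, shot_rate = 4.0, 0.5
C = read_var * np.eye(t.size) + shot_rate * np.minimum.outer(t, t)

L = np.linalg.cholesky(C)             # draw one correlated noise realisation
y = intercept_true + flux_true * t + L @ rng.normal(size=t.size)

# Generalized least squares: beta = (A^T C^-1 A)^-1 A^T C^-1 y.
A = np.column_stack([np.ones_like(t), t])
Ci_A = np.linalg.solve(C, A)
beta = np.linalg.solve(A.T @ Ci_A, Ci_A.T @ y)
cov_beta = np.linalg.inv(A.T @ Ci_A)
intercept_hat, flux_hat = beta
# cov_beta[0, 0] is the variance of the t=0 intercept, the cross-check
# quantity highlighted in the abstract.
```

Replacing C with a diagonal matrix recovers the ordinary weighted fit; the gain from the full matrix grows with the flux because the accumulating term then dominates the off-diagonal structure.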
The Chinese Pulsar Timing Array (CPTA) has collected observations from 57 millisecond pulsars using the Five-hundred-meter Aperture Spherical Radio Telescope (FAST) for close to three years, for the purpose of searching for gravitational waves (GWs). To robustly search for ultra-low-frequency GWs, pulsar timing arrays (PTAs) need to use models to describe the noise from the individual pulsars. We report on the results from the single pulsar noise analysis of the CPTA data release I (DR1). Conventionally, power laws in the frequency domain are used to describe pulsar red noise and dispersion measurement (DM) variations over time. Employing Bayesian methods, we found the choice of number and range of frequency bins with the highest evidence for each pulsar individually. A comparison between a dataset using piecewise-measured DM (DMX) values and a power-law Gaussian process to describe the DM variations shows strong Bayesian evidence in favour of the power-law model. Furthermore, we demonstrate that the constraints obtained from four independent software packages are very consistent with each other. The short time span of the CPTA DR1, paired with the high sensitivity of FAST, has proved to be a challenge for the conventional noise model using a power law. This mainly manifests as difficulty in separating different noise terms due to their mutual covariances. Nineteen pulsars are found to display covariances between the short-term white noise and long-term red and DM noise. With future CPTA datasets, we expect that the degeneracy can be broken. Finally, we compared the CPTA DR1 results against the noise properties found by other PTA collaborations. While we can see broad agreement, there is some tension between different PTA datasets for some of the overlapping pulsars. This could be due to the differences in the methods and frequency range compared to the other PTAs.
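The power-law noise model referred to above is evaluated on a set of Fourier frequency bins i/T over the data span T. A sketch in one common PTA convention, with the amplitude referenced to f_yr = 1/yr; the specific amplitude and slope values below are invented, not CPTA results:

```python
import numpy as np

def powerlaw_psd(f_hz, log10_A, gamma):
    """Power-law PSD, P(f) = A^2 / (12 pi^2) * (f / f_yr)^(-gamma) * yr^3,
    a convention widely used for pulsar red noise and DM variations."""
    f_yr = 1.0 / (365.25 * 86400.0)           # 1/yr in Hz
    A = 10.0 ** log10_A
    return A**2 / (12.0 * np.pi**2) * (f_hz / f_yr) ** (-gamma) / f_yr**3

T = 3.0 * 365.25 * 86400.0                    # ~3 yr span, like CPTA DR1
nbins = 30                                    # the bin count itself is chosen by evidence
freqs = np.arange(1, nbins + 1) / T           # Fourier bins i/T
psd = powerlaw_psd(freqs, log10_A=-13.5, gamma=3.0)
# a steep slope (large gamma) concentrates the power in the lowest bins
```

With only a three-year span, the lowest bin sits at roughly 10 nHz, so the long-period red and DM terms lean on very few frequencies, which is one way to see why they covary strongly with the white-noise parameters.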
The Hamamatsu R12699-406-M2 is a $2\times2$ multi-anode 2-inch photomultiplier tube that offers a compact form factor, low intrinsic radioactivity, and high photocathode coverage. These characteristics make it a promising candidate for next-generation xenon-based direct detection dark matter experiments, such as XLZD and PandaX-xT. We present a detailed characterization of this photosensor operated in cold xenon environments, focusing on its single photoelectron response, dark count rate, light emission, and afterpulsing behavior. The device demonstrated a gain exceeding $2\cdot 10^6$ at the nominal voltage of -1.0 kV, along with a low dark count rate of $(0.4\pm0.2)\;\text{Hz/cm}^2$. Due to the compact design, afterpulses exhibited short delay times, resulting in some cases in an overlap with the light-induced signal. To evaluate its applicability in a realistic detector environment, two R12699-406-M2 units were deployed in a small-scale dual-phase xenon time projection chamber. The segmented $2\times2$ anode structure enabled lateral position reconstruction using a single photomultiplier tube, highlighting the potential of the sensor for effective event localization in future detectors.
Large-aperture ground-based solar telescopes allow the solar atmosphere to be resolved in unprecedented detail. However, observations are limited by Earth's turbulent atmosphere, requiring post-facto image corrections. Current reconstruction methods using short-exposure bursts face challenges with strong turbulence and high computational costs. We introduce a deep learning approach that reconstructs 100 short-exposure images into one high-quality image in real time. Using unpaired image-to-image translation, our model is trained on degraded bursts with speckle reconstructions as references, improving robustness and generalization. Our method shows improved robustness in terms of perceptual quality, especially when speckle reconstructions show artifacts. An evaluation with a varying number of images per burst demonstrates that our method makes efficient use of the combined image information and achieves the best reconstructions when provided with the full image burst.
Magnetized exoplanets are expected to emit auroral cyclotron radiation in the radio regime due to the interactions between their magnetospheres, the interplanetary magnetic field, and the stellar wind. Prospective extrasolar auroral emission detections will constrain the magnetic properties of exoplanets, allowing the assessment of the planets' habitability and their protection against atmospheric escape by photoevaporation, enhancing our understanding of exoplanet formation and demographics. We construct a numerical model to update the estimates of radio emission characteristics of the confirmed exoplanets while quantifying the uncertainties of our predictions for each system by implementing a Monte Carlo error propagation method. We identify 16 candidates that have expected emission characteristics that render them potentially detectable from current ground-based telescopes. Among these, the hot Jupiter tau Bootis b is the most favorable target with an expected flux density of $51^{+36}_{-22}$ mJy. Notably, eleven candidates are super-Earths and sub-Neptunes, for which magnetism is key to understanding the associated demographics. Together with the other predictive works in the literature regarding the characteristics and the geometry of the magnetospheric emissions, our predictions are expected to guide observational campaigns in pursuit of discovering magnetism on exoplanets.
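Monte Carlo error propagation of the kind used for these predictions can be sketched generically: draw the uncertain inputs from their distributions, push each draw through the scaling relation, and quote asymmetric percentiles. The power-law scaling, constants, and uncertainties below are placeholders, not the paper's actual radiometric model or the tau Bootis b inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder scaling flux ~ k * B^a / d^2 standing in for a radiometric
# relation; all constants and uncertainty widths are invented.
n = 100_000
B = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)   # field strength, G
d = rng.normal(15.6, 0.2, size=n)                        # distance, pc
k, a = 2.0, 1.5                                          # toy constants

flux_mjy = 1e3 * k * B**a / d**2

# Report the median with 16th/84th percentiles, matching the
# 51^{+36}_{-22} mJy style of quoting asymmetric uncertainties.
lo, med, hi = np.percentile(flux_mjy, [16, 50, 84])
```

Because the inputs enter through powers and ratios, the propagated distribution is skewed, which is why percentile-based error bars are more faithful here than a symmetric Gaussian sigma.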
A core motivation of science is to evaluate which scientific model best explains observed data. Bayesian model comparison provides a principled statistical approach to comparing scientific models and has found widespread application within cosmology and astrophysics. Calculating the Bayesian evidence is computationally challenging, especially as we continue to explore increasingly complex models. The Savage-Dickey density ratio (SDDR) provides a method to calculate the Bayes factor (evidence ratio) between two nested models using only posterior samples from the super model. The SDDR requires the calculation of a normalised marginal distribution over the extra parameters of the super model, which has typically been performed using classical density estimators, such as histograms. Classical density estimators, however, can struggle to scale to high-dimensional settings. We introduce a neural SDDR approach using normalizing flows that can scale to settings where the super model contains a large number of extra parameters. We demonstrate the effectiveness of this neural SDDR methodology applied to both toy and realistic cosmological examples. For a field-level inference setting, we show that Bayes factors computed for a Bayesian hierarchical model (BHM) and simulation-based inference (SBI) approach are consistent, providing further validation that SBI extracts as much cosmological information from the field as the BHM approach. The SDDR estimator with normalizing flows is implemented in the open-source harmonic Python package.
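For a single extra parameter, the SDDR is simply the marginal posterior density at the nested value divided by the prior density there. A sketch with a classical Gaussian KDE standing in for the density estimator (the paper's contribution is precisely to replace this step with a normalizing flow so it scales to many extra parameters); the posterior samples here are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(4)

# Nested models: the super model has one extra parameter theta with prior
# N(0, 2^2); the sub model fixes theta = 0. The 'posterior samples' are
# synthetic stand-ins for MCMC output from the super model.
prior = norm(0.0, 2.0)
posterior_samples = rng.normal(0.3, 0.5, size=20_000)

# Savage-Dickey density ratio: BF(sub/super) = p(theta=0 | data) / p(theta=0).
kde = gaussian_kde(posterior_samples)
bf = kde(0.0)[0] / prior.pdf(0.0)
# bf > 1 here: the posterior concentrates near theta = 0 relative to the
# broad prior, so the data mildly favour the nested (simpler) model.
```

The estimator only needs the density at a single point, but that density must be well normalised, which is where histograms and KDEs degrade in high dimensions and a trained flow can still evaluate an exact normalised density.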
NASA's Nancy Grace Roman Space Telescope (Roman) will provide an opportunity to study dark energy with unprecedented precision and accuracy using several techniques, including measurements of high-$z$ Type Ia Supernovae (SNe Ia, $z \lesssim 3.0$) via the High-Latitude Time Domain Survey (HLTDS). In this work, we perform an initial "benchmark" characterization of the photometric repeatability of stellar fluxes, which must be below $1\%$ when sky noise is subdominant in order to enable a number of calibration requirements. Achieving this level of flux precision requires attention to Roman's highly-structured, spatially-varying, undersampled PSF. In this work, we build a library of effective PSFs (ePSFs) compatible with the OpenUniverse HLTDS simulations. Using our library of ePSFs, we recover fractional flux between $0.6 - 1.2\%$ photometric precision, finding that redder bands perform better by this metric. We also find that flux recovery is improved by up to $20\%$ when a chip (sensor chip assembly; SCA) is divided into 8 sub-SCAs in order to account for the spatial variation of the PSF. With our optimized algorithm, we measure non-linearity due to photometry (magnitude dependence) of $|s_{NL}| < 1.93 \times 10^{-3}$ per dex, which is still larger than stated requirements for detector effects and indicates that further work is necessary. We also measure the dependence of photometric residuals on stellar color, and find the largest possible dependence in R062, implying a color-dependent PSF model may be needed. Finally, we characterize the detection efficiency function of each OpenUniverse Roman filter, which will inform future studies.
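Given an ePSF evaluated at a star's sub-pixel position, the per-star flux fit is linear. A toy sketch with a Gaussian stand-in for a Roman ePSF (stamp size, noise levels, and PSF width are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 9x9 stamp; an undersampled Gaussian stands in for an ePSF model.
yy, xx = np.mgrid[0:9, 0:9]

def psf_model(x0, y0, sigma=1.2):
    p = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma**2))
    return p / p.sum()                        # normalised to unit total flux

flux_true, sky = 5000.0, 20.0
model = psf_model(4.3, 3.8)                   # sub-pixel star position
stamp = rng.poisson(flux_true * model + sky).astype(float) - sky

# With a known PSF and position, the flux fit is linear least squares:
# flux_hat = sum(P * D) / sum(P^2), the matched-filter estimate for
# uniform noise. Errors in the ePSF model or the assumed position
# propagate directly into the photometric repeatability.
flux_hat = np.sum(model * stamp) / np.sum(model**2)
```

This is why the spatial variation of the PSF matters so much for the repeatability budget: using an ePSF built for the wrong part of the chip biases every flux through the weights in this sum, which is the motivation for splitting each SCA into sub-SCAs.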
In early 2024, ESA formally adopted the Laser Interferometer Space Antenna (LISA) space mission with the aim of measuring gravitational waves emitted in the millihertz range. The constellation employs three spacecraft that exchange laser beams to form interferometric measurements over a distance of 2.5 million kilometers. The measurements will then be telemetered down to Earth at a lower sampling frequency. Anti-aliasing filters will be used on board to limit spectral folding of out-of-band laser noise. The dominant noise in these measurements is laser frequency noise, which does not cancel naturally in LISA's unequal-arm heterodyne interferometers. Suppression of this noise requires time-shifting of the data using delay operators to build virtual beam paths that simulate equal-arm interferometers. The non-commutativity of these delay operators and on-board filters manifests as a noise (flexing-filtering) that significantly contributes to the noise budget. This non-commutativity is a consequence of the non-flatness of the filter in-band. Attenuation of this noise requires high-order and computationally expensive filters, putting additional demands on the spacecraft. The following work studies an alternative method to reduce this flexing-filtering noise via the introduction of a modified delay operator that accounts for the non-commutativity with the filter in the delay operation itself. Our approach allows us to reduce the flexing-filtering noise by over six orders of magnitude whilst reducing the dependency on the flatness of the filter. The work is supplemented by numerical simulations of the data processing chain that compare the results with those of the standard approach.
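The flexing-filtering effect can be reproduced in a few lines: apply a time-varying fractional delay and a low-pass filter in both orders and difference the results. Linear interpolation and a moving average below are crude stand-ins for the high-order Lagrange interpolants and on-board anti-aliasing filters actually used; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

fs = 4.0                                   # sampling rate, Hz
t = np.arange(0.0, 2000.0, 1.0 / fs)
x = rng.normal(size=t.size)                # stand-in for raw laser noise

def delay(sig, d):
    """Fractional-delay operator via linear interpolation (a low-order
    stand-in for the Lagrange interpolants used in TDI)."""
    return np.interp(t - d, t, sig)

h = np.ones(8) / 8.0                       # moving-average 'anti-aliasing' filter
def filt(sig):
    return np.convolve(sig, h, mode="same")

d = 8.3 + 1e-4 * t                         # slowly flexing arm delay, seconds

# The commutator of filtering and a time-varying delay does not vanish;
# this residual is the analogue of the flexing-filtering noise.
r = delay(filt(x), d) - filt(delay(x, d))
rms = np.sqrt(np.mean(r[200:-200] ** 2))   # trim interpolation edge effects
```

A modified delay operator of the kind proposed above would absorb part of the filter response into the interpolation itself, driving this residual down without demanding an ever-flatter on-board filter.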
We report the delivery to the Mikulski Archive for Space Telescopes (MAST) of tables containing Root Mean Square (RMS) Combined Differential Photometric Precision (CDPP) values for all TESS 2-min cadence targets with Science Processing Operations Center (SPOC) light curves in Sectors 1-90. Each comma-separated values (CSV) file contains CDPP values for all 2-min light curves in the given sector. The tables include robust RMS CDPP values for the 15 trial transit pulse durations searched in the SPOC 2-min processing pipeline, ranging from 0.5-15.0 hr. For each pulse duration, CDPP is computed in the transit search for a trial transit centered on every cadence. The RMS value of the CDPP time series is a metric that may be employed to estimate signal-to-noise ratio for transits with the given duration and a specified depth. We will continue to deliver the RMS CDPP tables to MAST for each observing sector.
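A typical use of these tables is to convert an RMS CDPP value into an expected transit signal-to-noise. The sketch below fabricates a CDPP time series and applies the usual back-of-the-envelope scaling, S/N ~ (depth / CDPP) * sqrt(N transits), for a transit whose duration matches the searched pulse; the numbers are invented, and the SPOC robust RMS additionally guards against outliers:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical CDPP time series (ppm) for one pulse duration in one
# sector: a CDPP value exists for a trial transit centred on every cadence.
cdpp_ts = rng.normal(120.0, 10.0, size=18_000)   # ~2-min cadences, 1 sector

rms_cdpp = np.sqrt(np.mean(cdpp_ts**2))          # the tabulated metric, ~120 ppm

# Expected S/N for a transit matching the searched pulse duration.
depth_ppm, n_transits = 500.0, 4
snr = depth_ppm / rms_cdpp * np.sqrt(n_transits)
```

Since the tables cover 15 pulse durations per sector, the same arithmetic can be repeated across durations to pick the one closest to a candidate's transit length before estimating its detectability.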