Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-03 13:30 to 2023-11-07 11:30 | Next meeting is Friday Nov 1st, 11:30 am.
Using cosmological hydrodynamical zoom-in simulations, we explore the properties of subhalos in Milky Way analogs that contain a sub-component of Atomic Dark Matter (ADM). ADM differs from Cold Dark Matter (CDM) due to the presence of self-interactions that lead to energy dissipation and bound-state formation, analogous to Standard Model baryons. This model can arise in complex dark sectors that are natural and theoretically motivated extensions to the Standard Model. The simulations used in this work were carried out using GIZMO and utilize the FIRE-2 galaxy formation physics in the Standard Model baryonic sector. For the parameter points we consider, the ADM gas cools efficiently, allowing it to collapse to the center of subhalos. This increases a subhalo's central density and affects its orbit, with more subhalos surviving small pericentric passages. The subset of subhalos that host visible satellite galaxies have cuspier density profiles and smaller stellar half-mass radii relative to CDM. The entire population of dwarf galaxies produced in the ADM simulations is much more compact than those seen in CDM simulations, and cannot reproduce the full diversity of observed dwarf galaxy structures. Additionally, we identify a population of highly compact subhalos that consist almost entirely of ADM and form in the central region of the host, where they can leave distinctive imprints in the baryonic disk. This work presents the first detailed exploration of subhalo properties in a strongly dissipative dark matter scenario, providing intuition for how other regions of ADM parameter space, as well as other dark sector models, would impact galactic-scale observables.
The axion, as a leading dark matter candidate, is the target of many ongoing and proposed experimental searches based on its coupling to photons. Ultralight axions that couple to photons can also cause polarization rotation of light, which can be probed by the cosmic microwave background. In this work, we show that a large axion field inevitably develops around black holes due to the Bose-Einstein condensation of axions, enhancing the induced birefringence effects. We therefore propose measuring the modulation of supermassive black hole imaging polarization angles as a new probe of the axion-photon coupling of axion dark matter. The oscillating axion field around black holes induces polarization rotation on the black hole image, which is detectable and distinguishable from astrophysical effects on the polarization angle, as it exhibits distinctive temporal variability and frequency invariability. We present the range of axion-photon couplings within the axion mass range $10^{-21}-10^{-16}~\text{eV}$ that can be probed by the Event Horizon Telescope. The axion parameter space probed by black hole polarimetry will expand with improvements in the sensitivity of polarization measurements and with additional black hole polarimetry targets with determined black hole masses.
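For orientation, the birefringence effect invoked here is the standard axion-photon rotation of the polarization plane, which depends only on the change of the axion field between emission and observation (a textbook result; sign and factor conventions vary between papers):

```latex
% Polarization rotation accumulated by a photon traversing an axion field a,
% for axion-photon coupling g_{a\gamma}; conventions vary between references.
\Delta\chi = \frac{g_{a\gamma}}{2}
\left[\, a(t_{\rm obs}, \mathbf{x}_{\rm obs}) - a(t_{\rm em}, \mathbf{x}_{\rm em}) \,\right]
```

An axion field oscillating at a frequency set by its mass therefore imprints a periodic modulation on the image polarization angles, which is the temporal signature exploited above.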
Cold-atom analogue experiments are a promising new tool for studying relativistic vacuum decay, allowing us to empirically probe early-Universe theories in the laboratory. However, existing analogue proposals place stringent requirements on the atomic scattering lengths that are challenging to realize experimentally. Here we generalize these proposals and show that any stable mixture between two states of a bosonic isotope can be used as a relativistic analogue. This greatly expands the range of suitable experimental setups, and will thus expedite efforts to study vacuum decay with cold atoms.
In this paper, we analyze parity-violating effects in the propagation of gravitational waves (GWs). For this purpose, we adopt a newly proposed parametrized post-Einsteinian (PPE) formalism, which encodes modified gravity corrections to the phase and amplitude of GW waveforms. In particular, we focus our study on three well-known examples of parity-violating theories, namely Chern-Simons, Symmetric Teleparallel and Ho\v{r}ava-Lifshitz gravity. For each model, we identify the PPE parameters emerging from the inclusion of parity-violating terms in the gravitational Lagrangian. We then use the simulated sensitivities of third-generation GW interferometers, such as the Einstein Telescope and Cosmic Explorer, to obtain numerical bounds on the PPE coefficients and the physical parameters of binary systems. In so doing, we find that deviations from General Relativity cannot be excluded within the given confidence limits. Moreover, our results show an improvement of one order of magnitude in the relative accuracy of the GW parameters compared to the values inferred from the LIGO-Virgo-KAGRA network. In this respect, the present work demonstrates the power of next-generation GW detectors to probe fundamental physics with unprecedented precision.
The hot and dilute intracluster medium (ICM) plays a central role in many key processes that shape galaxy clusters. Nevertheless, the nature of plasma turbulence and particle transport in the ICM remains poorly understood, and quantifying the effect of kinetic plasma instabilities on the macroscopic dynamics represents an outstanding problem. Here we focus on the impact of whistler-wave suppression of the heat flux on the magneto-thermal instability (MTI), which is expected to drive significant turbulent motions in the periphery of galaxy clusters. We perform small-scale Boussinesq simulations with a sub-grid closure for the thermal diffusivity in the regime of whistler-wave suppression. Our model is characterized by a single parameter that quantifies the collisionality of the ICM on the astrophysical scales of interest, which we tune to explore a range appropriate for the periphery of galaxy clusters. We find that the MTI is qualitatively unchanged for weak whistler-suppression. Conversely, with strong suppression the magnetic dynamo is interrupted and MTI turbulence dies out. In the astrophysically relevant limit, however, the MTI is likely to be supplemented by additional sources of turbulence. Investigating this scenario, we show that the inclusion of external forcing has a beneficial impact and revives MTI turbulence even in simulations with strong whistler-suppression. As a result, the plasma remains buoyantly unstable, with important consequences for turbulent mixing in the ICM.
The inference of astrophysical and cosmological properties from the Lyman-$\alpha$ forest conventionally relies on summary statistics of the transmission field that carry useful but limited information. We present a deep learning framework for inference from the Lyman-$\alpha$ forest at the field level. This framework consists of a 1D residual convolutional neural network (ResNet) that extracts spectral features and performs regression on the thermal parameters of the IGM that characterize the power-law temperature-density relation. We train this supervised machinery using a large set of mock absorption spectra from Nyx hydrodynamic simulations at $z=2.2$ with a range of thermal parameter combinations (labels). We employ Bayesian optimization to find an optimal set of hyperparameters for our network, and then employ a committee of ten neural networks for increased statistical robustness of the inference. In addition to the parameter point predictions, our framework also provides a self-consistent estimate of their covariance matrix, with which we construct a pipeline for inferring the posterior distribution of the parameters. We compare the results of our framework with the traditional summary-based approach (PDF and power spectrum of transmission) in terms of the area of the 68% credibility regions as our figure of merit (FoM). In our study of the information content of perfect (noise- and systematics-free) Ly$\alpha$ forest spectral datasets, we find a significant tightening of the posterior constraints -- factors of 5.65 and 1.71 in FoM over the power spectrum alone and jointly with the PDF, respectively -- a consequence of recovering relevant information that is not carried by the classical summary statistics.
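As a rough illustration of the kind of architecture and committee averaging described above, here is a minimal 1D residual-network regressor in PyTorch; the layer widths, depth, 1024-pixel spectrum length, and two-parameter output are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a 1D ResNet regressor for mock Lyman-alpha spectra.
# All sizes below are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual (skip) connection: out = act(x + F(x))
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class LyaResNet(nn.Module):
    def __init__(self, n_params=2, channels=32, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, n_params),   # regress e.g. (T_0, gamma)
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# "Committee" prediction: average the outputs of ten independently trained nets.
nets = [LyaResNet() for _ in range(10)]
spectra = torch.randn(8, 1, 1024)              # batch of mock transmission spectra
with torch.no_grad():
    preds = torch.stack([net(spectra) for net in nets])
committee_mean = preds.mean(dim=0)             # committee point prediction
committee_scatter = preds.std(dim=0)           # spread across committee members
```

In practice each committee member would be trained on the mock spectra with its own Bayesian-optimized hyperparameters before the predictions are combined.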
We extend the formalism for calculating the non-Gaussianity of primordial curvature perturbations produced by preheating in the presence of a light scalar field. The calculation is carried out in the separate-universe approximation using the non-perturbative $\delta N$ formalism and lattice field theory simulations. Initial conditions for the simulations are drawn from a statistical ensemble determined by modes that left the horizon during inflation, with the time-dependence of the Hubble rate during inflation taken into account. Our results show that cosmic variance, i.e., the contribution from modes with wavelength longer than the size of the observable universe today, plays a key role in determining the dominant contribution. We illustrate our formalism by applying it to an observationally viable preheating model motivated by non-minimal coupling to gravity, and study its full parameter dependence.
We study the perturbations in the temperature (and density) distribution for 28 clusters selected from the CHEX-MATE sample to evaluate and characterize the level of inhomogeneities and the related dynamical state of the ICM. We use these spatially resolved 2D distributions to measure the global and radial scatter and to identify the regions that deviate the most from the average distribution. During this process, we introduce three dynamical state estimators and produce clean temperature profiles after removing the most deviant regions. We find that the temperature distribution of most of the clusters is skewed towards high temperatures and is well described by a log-normal function. There is no indication that the number of regions deviating by more than 1$\sigma$ from the azimuthal value is correlated with the dynamical state inferred from morphological estimators. The removal of these regions leads to local temperature variations of up to 10-20% and an average increase of $\sim$5% in the overall cluster temperatures. The measured relative intrinsic scatter within $R_{500}$, $\sigma_{T,int}/T$, has a value of 0.17$^{+0.08}_{-0.05}$ and is almost independent of cluster mass and dynamical state. Comparing the scatter of the temperature and density profiles to hydrodynamic simulations, we constrain the average Mach number regime of the sample to $M_{3D}$=0.36$^{+0.16}_{-0.09}$. We infer the ratio between the energy in turbulence and the thermal energy, and translate this ratio into a predicted hydrostatic mass bias $b$, estimating an average value of $b\sim$0.11 (covering a range between 0 and 0.37) within $R_{500}$. This study provides detailed temperature fluctuation measurements for 28 CHEX-MATE clusters which can be used to study turbulence, derive the mass bias, and make predictions on the scaling relation properties.
The Hubble diagram (HD) plots the luminosity distance modulus as a function of redshift. The distance modulus--redshift relation of the best-known ``standard candles'', the type Ia supernovae (SNe Ia), is a crucial tool in cosmological model testing. In this work, we use the SN Ia data from the Pantheon catalogue to calibrate the Swift long gamma-ray bursts (LGRBs) as ``standard candles'' via the Amati relation. Thus, we expand the HD from supernovae to the domain of the Swift LGRBs, up to $z\sim8$. To improve the quality of the estimation of the parameters and their errors, we implement the Monte-Carlo uncertainty propagation method. We also compare the Amati parameters calibrated by the SNe Ia with those calibrated by the standard $\Lambda$CDM model, and find no statistically significant difference between them. Although the size of our LGRB sample is relatively small and the errors are large, we find this approach to extending the cosmological distance scale promising for future cosmological tests.
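A minimal sketch of the Monte-Carlo uncertainty propagation step for a single burst, with toy Amati coefficients and measurements (all numbers below are assumptions for illustration):

```python
# Monte-Carlo propagation of uncertainties through an Amati-type relation,
# log10(E_iso) = a + b * log10(E_peak). Coefficients and data are toy values.
import numpy as np

rng = np.random.default_rng(42)
n_mc = 100_000

a, sig_a = 52.7, 0.10          # assumed calibrated intercept and its error
b, sig_b = 1.10, 0.05          # assumed calibrated slope and its error
x, sig_x = 2.50, 0.10          # log10(E_peak) for one burst (toy measurement)

# Draw every input from its error distribution and push the samples through:
samples = (rng.normal(a, sig_a, n_mc)
           + rng.normal(b, sig_b, n_mc) * rng.normal(x, sig_x, n_mc))

print(f"log10(E_iso) = {samples.mean():.2f} +/- {samples.std():.2f}")
```

The distance modulus and its error for each LGRB then follow from the propagated $E_{\rm iso}$ samples in the same fashion.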
In this paper, the first in a series of four articles, the scientific goals of the Metron project are highlighted, and the characteristics of the cosmic objects available for study within its framework are provided. The Metron interferometric radio telescope should include arrays of meter-range dipole antennas placed on Earth, in outer space, or on the far side of the Moon (or a combination of these options). Working in the meter range will enable the study of the so-called ``Dark Ages'' cosmological epoch, which is challenging to observe but highly interesting for understanding the origin of the first stars, galaxies, and black holes, as well as for the search for new cosmological objects and processes. One possibility is to search for absorption in the 21 cm line within extended halos around early protogalaxies and supermassive primordial black holes, whose existence is predicted in a number of models. Another goal of Metron may be to clarify the anomalous absorption in the 21 cm line previously detected by the EDGES telescope and to observe radio emission from stellar and exoplanetary magnetospheres. The Metron project aims to achieve unprecedented resolution for the meter range, which is expected to yield new world-class scientific results. Meter-range antennas and receivers are relatively simple and inexpensive, and interferometric arrays built from them can be constructed in a relatively short period of time.
We revisit the dynamics of interacting vector-like dark energy, a theoretical framework proposed to explain the accelerated expansion of the universe. By investigating the interaction between vector-like dark energy and dark matter, we analyze its effects on the cosmic expansion history and the thermodynamics of the accelerating universe. Our results demonstrate that the presence of interaction significantly influences the evolution of vector-like dark energy, leading to distinct features in its equation of state and energy density. We compare our findings with observational data and highlight the importance of considering interactions in future cosmological studies.
A galaxy cluster, the most massive gravitationally bound object in the Universe, is dominated by dark matter, which unfortunately can only be investigated through its interaction with the luminous baryons under simplifying assumptions that introduce unwanted biases. In this work we propose, for the first time, a deep learning method based on the U-Net architecture to directly infer the projected total mass density map from multi-wavelength mock observations of simulated galaxy clusters. The model is trained with a large dataset of idealised mock images from simulated clusters of The Three Hundred Project. Using different metrics to assess the fidelity of the inferred density map, we show that the predicted total mass distribution is in very good agreement with the true simulated cluster. The integrated halo mass is almost unbiased -- around 1 per cent for the best multiview result -- and the scatter is very small, basically within 3 per cent. This result suggests that this ML method provides an alternative, easier way to reconstruct the overall matter distribution in galaxy clusters than the traditional lensing method.
We present an analysis of the quenching of star formation in massive galaxies ($M_* > 10^{9.5} M_\odot$) within the first 0.5 - 3 Gyr of the Universe's history, utilizing JWST-CEERS data. We utilize a combination of advanced statistical methods to accurately constrain the intrinsic dependence of quenching in a multi-dimensional and inter-correlated parameter space. Specifically, we apply Random Forest (RF) classification, area statistics, and a partial correlation analysis to the JWST-CEERS data. First, we identify the key testable predictions from two state-of-the-art cosmological simulations (IllustrisTNG & EAGLE). Both simulations predict that quenching should be regulated by supermassive black hole mass in the early Universe. Furthermore, both simulations identify the stellar potential ($\phi_*$) as the optimal proxy for black hole mass in photometric data. In photometric observations, where we have no direct constraints on black hole masses, we find that the stellar potential is the most predictive parameter of massive galaxy quenching at all epochs from $z = 0 - 8$, exactly as predicted by simulations for this sample. The stellar potential outperforms stellar mass, galaxy size, galaxy density, and S\'ersic index as a predictor of quiescence at all epochs probed in JWST-CEERS. Collectively, these results strongly imply a stable quenching mechanism operating throughout cosmic history, which is closely connected to the central gravitational potential in galaxies. This connection is explained in cosmological models via massive black holes forming and growing in deep potential wells, and subsequently quenching galaxies through a mix of ejective and preventative active galactic nucleus (AGN) feedback.
This study proposes a novel parametrization approach for the dimensionless Hubble parameter, i.e., $E^2(z)=A(z)+\beta (1+\gamma B(z))$, in the context of scalar field dark energy models. The parametrization is characterized by two functions, $A(z)$ and $B(z)$, carefully chosen to capture the behavior of the Hubble parameter at different redshifts. We explore the evolution of cosmological parameters, including the deceleration parameter, density parameter, and equation of state parameter. Observational data from Cosmic Chronometers (CC), Baryonic Acoustic Oscillations (BAO), and the Pantheon+ datasets are analyzed using MCMC methodology to determine the model parameters. The results are compared with the standard $\Lambda$CDM model using the Planck observations. Our approach provides a model-independent exploration of dark energy, contributing to a comprehensive understanding of late-time cosmic acceleration.
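As a worked illustration of how such a parametrization propagates to the deceleration parameter via $q(z) = -1 + (1+z)\,E'(z)/E(z)$, the sketch below adopts the toy choices $A(z)=\Omega_{m0}(1+z)^3$ and $B(z)=1+z$; these forms and the parameter values are assumptions for demonstration, not the paper's.

```python
# Toy illustration of E^2(z) = A(z) + beta*(1 + gamma*B(z)) with assumed
# forms A(z) = Om0*(1+z)^3 and B(z) = 1+z (demonstration only).
import numpy as np

Om0, beta, gamma = 0.3, 0.7, 0.05

def E(z):
    return np.sqrt(Om0 * (1 + z)**3 + beta * (1 + gamma * (1 + z)))

def q(z, dz=1e-5):
    # Deceleration parameter q = -1 + (1+z) E'(z)/E(z),
    # with E'(z) from central finite differences.
    return -1 + (1 + z) * (E(z + dz) - E(z - dz)) / (2 * dz) / E(z)

z = np.linspace(0, 2, 2001)
qz = q(z)
print("q(0) =", qz[0])                                     # negative => accelerating today
print("transition redshift ~", z[np.argmin(np.abs(qz))])   # where q crosses 0
```

With these toy numbers the model accelerates today ($q_0 \approx -0.55$) and decelerates beyond a transition redshift $z_{tr} \approx 0.7$.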
The formation of primordial black holes (PBHs) in early-universe inflationary cosmology has received a lot of attention in recent years. One scenario in which PBH formation is possible is the preheating stage after inflation, which does not require any ad hoc fine-tuning of the scalar field potential. In this paper, we focus on the growth of primordial density perturbations and the consequent possibility of PBH formation in the preheating stage of the Starobinsky model of inflation. The typical mechanism for PBH formation during preheating is based on the collapse of primordial fluctuations that become super-horizon during inflation (type I) and re-enter the particle horizon in the different phases of cosmic expansion. In this work, we show that there exists a certain range of modes that remain sub-horizon (never exiting the horizon) during inflation (type II modes). These can, at a later stage of evolution, lead to large density perturbations above the threshold and can potentially also contribute to PBH formation. We derive in detail the conditions that determine the possible collapse of type I and/or type II modes. Since the preheating stage is an (approximately) matter-dominated phase driven by the inflaton, with equation of state $w\ll 1$, we follow the framework of the critical collapse of fluctuations and compute the mass fraction using the well-known Press-Schechter and Khlopov-Polnarev formalisms, and compare the two. Finally, we comment on the implications of our study for investigations of primordial accretion and the consequent PBH contribution to dark matter.
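For reference, the Press-Schechter mass fraction invoked here is the Gaussian tail probability of the smoothed density contrast exceeding the collapse threshold $\delta_{\rm th}$, with the usual factor of two:

```latex
% Press-Schechter mass fraction for variance sigma^2(M) of the smoothed
% density field; delta_th is the collapse threshold.
\beta(M) \;=\; 2 \int_{\delta_{\rm th}}^{\infty}
\frac{\mathrm{d}\delta}{\sqrt{2\pi}\,\sigma(M)}\,
e^{-\delta^{2}/2\sigma^{2}(M)}
\;=\; \operatorname{erfc}\!\left(\frac{\delta_{\rm th}}{\sqrt{2}\,\sigma(M)}\right)
```

The Khlopov-Polnarev treatment modifies this picture for a (nearly) pressureless medium, where the collapse probability is governed by the sphericity and homogeneity of a fluctuation rather than by a single density threshold.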
The Q&U Bolometric Interferometer for Cosmology (QUBIC) is the first bolometric interferometer designed to measure the primordial B-mode polarization of the Cosmic Microwave Background (CMB). Bolometric interferometry is a novel technique that combines the sensitivity of bolometric detectors with the control of systematic effects that is typical of interferometry, both key features in the quest for the faint signal of the primordial B-modes. A unique feature is the so-called ``spectral imaging'', i.e., the ability to recover the sky signal in several sub-bands within the physical band during data analysis. This feature provides an in-band spectral resolution of $\Delta\nu/\nu \sim 0.04$ that is unattainable by a traditional imager, and is a key tool for controlling Galactic foreground contamination. In this paper, we describe the principles of bolometric interferometry, the current status of the QUBIC experiment and future prospects.
We develop a Python tool to estimate the tail distribution of the number of dark matter halos above a mass threshold and in a given volume in a light-cone. The code is based on the extended Press-Schechter model and is computationally efficient, typically taking a few seconds on a personal laptop for a given set of cosmological parameters. The high efficiency of the code allows a quick estimation of the tension between cosmological models and the candidate massive red galaxies released by the James Webb Space Telescope, as well as scanning the theory space with the Markov Chain Monte Carlo method. As an example application, we use the tool to study the cosmological implications of the candidate galaxies presented in Labb\'e et al. (2023). The standard $\Lambda$ cold dark matter ($\Lambda$CDM) model is fully consistent with the data if the star formation efficiency can reach $\sim 0.3$ at high redshift. For a low star formation efficiency $\epsilon \sim 0.1$, the $\Lambda$CDM model is disfavored at $\sim 2\sigma$-$3\sigma$ confidence level.
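The counting logic behind such a tool can be sketched in a few lines: integrate a Press-Schechter-like mass function above the threshold to get the expected count, then evaluate the Poisson tail probability. The power-law $\sigma(M)$ and all numbers below are toy assumptions, not the tool's calibrated ingredients.

```python
# Expected number of halos above a mass threshold from a Press-Schechter-like
# mass function, and the Poisson probability of observing at least N_obs.
# sigma(M) and all numerical values are toy stand-ins for demonstration.
import numpy as np
from scipy.stats import poisson

delta_c = 1.686                    # spherical-collapse threshold
rho_m = 8.6e10                     # mean matter density [Msun/h / (Mpc/h)^3], approx.

def sigma(M):
    # Toy rms fluctuation; real codes integrate the linear power spectrum.
    return 2.0 * (M / 1e13) ** (-0.25)

def dndlnM(M):
    # Press-Schechter: sqrt(2/pi)*(rho_m/M)*nu*|dln(sigma)/dlnM|*exp(-nu^2/2)
    nu = delta_c / sigma(M)
    return np.sqrt(2/np.pi) * (rho_m/M) * nu * 0.25 * np.exp(-nu**2 / 2)

def expected_count(M_min, volume, M_max=1e17, n_grid=2000):
    lnM = np.linspace(np.log(M_min), np.log(M_max), n_grid)
    return volume * np.trapz(dndlnM(np.exp(lnM)), lnM)

N_bar = expected_count(M_min=1e15, volume=1e6)   # volume in (Mpc/h)^3
N_obs = 3                                        # e.g. number of candidates
p_tail = poisson.sf(N_obs - 1, N_bar)            # P(N >= N_obs)
print(f"expected count {N_bar:.2f}, P(N >= {N_obs}) = {p_tail:.4f}")
```

A small tail probability then quantifies the tension between the assumed cosmology (plus star formation efficiency) and the observed candidate counts.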
This paper is based on a talk in which I discussed how a component of the dynamical affine connection, that is independent of the metric, can drive inflation in agreement with observations. This provides a geometrical origin for the inflaton. I also illustrated how the decays of this field, which has spin 0 and odd parity, into Higgs bosons can reheat the universe to a sufficiently high temperature.
Measurement of the redshifted 21-cm signal of neutral hydrogen from the Cosmic Dawn (CD) and Epoch of Reionisation (EoR) promises to unveil a wealth of information about the astrophysical processes during the first billion years of evolution of the universe. The AARTFAAC Cosmic Explorer (ACE) utilises the AARTFAAC wide-field imager of LOFAR to measure the power spectrum of the intensity fluctuations of the redshifted 21-cm signal from the CD at z~18. RFI from various sources contaminates the observed data, and it is crucial to exclude the RFI-affected data in the analysis for a reliable detection. In this work, we investigate the impact of non-ground-based transient RFI, using cross-power spectra and cross-coherence metrics to assess the correlation of RFI over time and the level of impact of transient RFI on the ACE 21-cm power spectrum estimation. By inspecting filtered images, we detected moving sky-based transient RFI sources that cross the field of view within a few minutes; in the 72-75 MHz band at the location of the LOFAR core, these appear to originate mainly from aeroplane communication beacons. This transient RFI is mostly uncorrelated over time and is only expected to dominate over the thermal noise for an extremely deep integration time of 3000 hours or more with a hypothetical instrument that is sky-temperature dominated at 75 MHz. We find no visible correlation over different k-modes in Fourier space in the presence of noise for realistic thermal noise scenarios. We conclude that sky-based transient RFI from aeroplanes, satellites and meteors does not at present pose a significant concern for the ACE analyses at the current level of sensitivity, even after integrating over the available 500 hours of observed data. However, it is crucial to mitigate or filter such transient RFI for more sensitive experiments aiming for significantly deeper integration.
We investigate a model that modifies general relativity on cosmological scales, specifically by having a 'glitch' in the gravitational constant between the cosmological (super-horizon) and Newtonian (sub-horizon) regimes. This gives a single-parameter extension to the standard $\Lambda$CDM model, which is equivalent to adding a dark energy component, but where the energy density of this component can have either sign. Fitting to data from the Planck satellite, we find that negative contributions are, in fact, preferred. Additionally, we find that roughly one percent weaker superhorizon gravity can significantly ease the Hubble and clustering tensions in a range of cosmological observations, although at the expense of spoiling fits to the baryonic acoustic oscillation scale in galaxy surveys. Therefore, the extra parametric freedom offered by our model deserves further exploration, and we discuss how future observations may elucidate this potential cosmic glitch in gravity, through a four-fold reduction in statistical uncertainties.
The astrophysical engines that power ultra-high-energy cosmic rays (UHECRs) remain to date unknown. Since the propagation horizon of UHECRs is limited to the local, anisotropic Universe, the distribution of UHECR arrival directions should be anisotropic. In this paper we expand the analysis of the potential of the angular, harmonic cross-correlation between UHECRs and galaxies to detect such anisotropies. We do so by studying proton, oxygen and silicon injection models, as well as by extending the analytic treatment of the magnetic deflections. Quantitatively, we find that, while the correlations for each given multipole are generally weak, (1) the total harmonic power summed over multipoles is detectable with signal-to-noise ratios well above~5 for both the auto-correlation and the cross-correlation (once optimal weights are applied) in most cases studied here, with peaks of signal-to-noise ratio between~8 and~10 at the highest energies; (2) if we combine the UHECR auto-correlation and the cross-correlation we are able to reach detection levels of \(3\sigma\) and above for individual multipoles at the largest scales, especially for heavy composition. In particular, we predict that the combined-analysis quadrupole could be detected already with existing data.
Current observational data indicate that dark energy (DE) is consistent with a cosmological constant, although the evidence is not conclusive. Treating $\Lambda$ as dynamical, as a function of time or of the scale factor, we review its effects on gravitational waves. This article is a continuation of the previous work \textit{JHEAp 36 (2022) 48-54}, in which DE was based only on the Hubble parameter and/or its derivatives. For the DE model based on the scale factor ($a^{-m}$), the results show that the parameter $m$ is more tightly restricted, to $2 < m \leqslant 3$, than in the other models, owing to the small DE density in the early universe. Only for $m=3$ does DE affect low-frequency gravitational waves, when their frequency is below $10^{-3}$ Hz in a matter-dominated epoch. The broadest reduction in the amplitude and in the "B-B" polarization multipole coefficients, from maximum to minimum, occurs for the models based on the Hubble parameter. Primary sources of low- and very-low-frequency GWs, such as the coalescence of massive black hole binaries with $M_{\rm bh} > 10^{3}\,M_{\odot}$, could determine the type of DE through mHz space experiments (e.g., LISA) and nHz-range NANOGrav 15-year data.
Massive black holes are key inhabitants of the nuclei of galaxies. Moreover, their astrophysical relevance has gained significant traction in recent years, thanks especially to the amazing results that are being (or will be) delivered by instruments such as the James Webb Space Telescope, Pulsar Timing Array projects and LISA. In this Chapter, we aim to detail a broad set of aspects related to the astrophysical nature of massive black holes embedded in galactic nuclei, with a particular focus on recent and upcoming advances in the field. In particular, we will address questions such as: What shapes the relations connecting the mass of massive black holes with the properties of their host galaxies? How do massive black holes form in the early Universe? What mechanisms keep on feeding them so that they can attain very large masses at z = 0? How do binaries composed of two massive black holes form and coalesce into a single, larger black hole? Here we present these topics from a mainly theoretical viewpoint and discuss how present and upcoming facilities may enhance our understanding of massive black holes in the near future.
In this paper we investigate the impact of lensing magnification on the analysis of Euclid's spectroscopic survey, using the multipoles of the 2-point correlation function for galaxy clustering. We determine the impact of lensing magnification on cosmological constraints, and the expected shift in the best-fit parameters if magnification is ignored. We consider two cosmological analyses: i) a full-shape analysis based on the $\Lambda$CDM model and its extension $w_0w_a$CDM and ii) a model-independent analysis that measures the growth rate of structure in each redshift bin. We adopt two complementary approaches in our forecast: the Fisher matrix formalism and the Markov chain Monte Carlo method. The fiducial values of the local count slope (or magnification bias), which regulates the amplitude of the lensing magnification, have been estimated from the Euclid Flagship simulations. We use linear perturbation theory and model the 2-point correlation function with the public code coffe. For a $\Lambda$CDM model, we find that the estimation of cosmological parameters is biased at the level of 0.4-0.7 standard deviations, while for a $w_0w_a$CDM dynamical dark energy model, lensing magnification has a somewhat smaller impact, with shifts below 0.5 standard deviations. In a model-independent analysis aiming to measure the growth rate of structure, we find that the estimation of the growth rate is biased by up to $1.2$ standard deviations in the highest redshift bin. As a result, lensing magnification cannot be neglected in the spectroscopic survey, especially if we want to determine the growth factor, one of the most promising ways to test general relativity with Euclid. We also find that, by including lensing magnification with a simple template, this shift can be almost entirely eliminated with minimal computational overhead.
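The expected parameter shift quoted above is commonly computed with the linearized Fisher-bias estimator; whether the paper uses exactly this form is an assumption, but it captures the logic of the forecast:

```latex
% Linearized shift of best-fit parameters when a known signal is omitted:
% Delta xi is the difference between the data vector (multipoles) with and
% without magnification, C the data covariance, F the Fisher matrix.
\delta\theta_\alpha \simeq \sum_\beta \left(F^{-1}\right)_{\alpha\beta} B_\beta,
\qquad
B_\beta = \Delta\xi^{\mathsf{T}}\, C^{-1}\,
\frac{\partial \xi}{\partial \theta_\beta},
\qquad
F_{\alpha\beta} = \frac{\partial \xi^{\mathsf{T}}}{\partial \theta_\alpha}\,
C^{-1}\, \frac{\partial \xi}{\partial \theta_\beta}
```

Dividing $\delta\theta_\alpha$ by the forecast error $\sqrt{(F^{-1})_{\alpha\alpha}}$ gives the shifts in units of standard deviations reported above.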
The formation of primordial black holes in the early universe may happen through the collapse of large curvature perturbations generated during a non-attractor phase of inflation or through curvaton-like dynamics after inflation. The fact that such small-scale curvature perturbations are typically non-Gaussian leads to the renormalization of composite operators built from the smoothed density contrast that enter the calculation of the primordial black hole abundance. Such renormalization causes the phenomenon of operator mixing and the appearance of an infinite tower of local, non-local and higher-derivative operators, as well as a sizable shift in the threshold for primordial black hole formation. This hints that the calculation of the primordial black hole abundance is more involved than generally assumed.
The separate-universe approach gives an intuitive way to understand the evolution of cosmological perturbations in the long-wavelength limit. It uses solutions of the spatially-homogeneous equations of motion to model the evolution of the inhomogeneous universe on large scales. We show that the separate-universe approach fails on a finite range of super-Hubble scales at a sudden transition from slow roll to ultra-slow roll during inflation in the very early universe. Such transitions are a feature of inflation models giving a large enhancement in the primordial power spectrum on small scales, necessary to produce primordial black holes after inflation. We show that the separate-universe approach still works in a piecewise fashion, before and after the transition, but spatial gradients on finite scales require a discontinuity in the homogeneous solution at the transition. We discuss the implications for the $\delta N$ formalism and stochastic inflation, which employ the separate-universe approximation.
We present the Einstein-Boltzmann module of the DISCO-DJ (DIfferentiable Simulations for COsmology - Done with JAX) software package. This module implements a fully differentiable solver for the linearised cosmological Einstein-Boltzmann equations in the JAX framework, and allows computing Jacobian matrices of all solver output with respect to all input parameters using automatic differentiation. This implies that along with the solution for a given set of parameters, the tangent hyperplane in parameter space is known as well, which is a key ingredient for cosmological inference and forecasting problems as well as for many other applications. We discuss our implementation and demonstrate that our solver agrees at the per-mille level with the existing non-differentiable solvers CAMB and CLASS, including massive neutrinos and a dark energy fluid with parameterised equation of state. We illustrate the dependence of various summary statistics in large-scale structure cosmology on model parameters using the differentiable solver, and finally demonstrate how it can be easily used for Fisher forecasting. Since the implementation is significantly shorter and more modular than existing solvers, it is easy to extend our solver to include additional physics, such as additional dark energy models, modified gravity, or other non-standard physics.
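A toy version of the differentiable-solver idea (not DISCO-DJ itself): integrate a simplified linear-growth ODE in JAX and obtain the derivative of the whole solution with respect to $\Omega_m$ by forward-mode automatic differentiation. The growth-rate approximation $f \simeq \Omega_m(a)^{0.55}$ and the step count are assumptions for illustration.

```python
# Toy differentiable "solver": dD/dln(a) = Omega_m(a)^0.55 * D, integrated
# with explicit Euler under jax.lax.scan; not DISCO-DJ's actual equations.
import jax
import jax.numpy as jnp

def growth_history(Om, n_steps=1000):
    lna = jnp.linspace(jnp.log(1e-2), 0.0, n_steps)
    dlna = lna[1] - lna[0]

    def step(D, lna_i):
        a = jnp.exp(lna_i)
        Om_a = Om / (Om + (1.0 - Om) * a**3)   # Omega_m(a) in flat LCDM
        D_new = D * (1.0 + Om_a**0.55 * dlna)  # Euler step with f ~ Om(a)^0.55
        return D_new, D_new

    _, D = jax.lax.scan(step, jnp.exp(lna[0]), lna)  # D ~ a deep in matter era
    return D

# Jacobian of the entire output vector w.r.t. Omega_m via forward-mode AD:
jac = jax.jacfwd(growth_history)(0.3)
print(jac.shape, float(jac[-1]))   # includes dD(a=1)/dOmega_m
```

In a full Einstein-Boltzmann solver the same mechanism yields Jacobians of every output (e.g. the matter power spectrum) with respect to every input parameter in a single differentiable pass.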
(abr.) We consider the potential of future microwave spectrometers akin to PIXIE in light of the sky-averaged global signal expected from the total intensity of extragalactic carbon monoxide (CO) and ionized carbon ([CII]) line emission. We start from models originally developed for forecasts of line-intensity mapping (LIM) observations targeting the same line emission at specific redshifts, extrapolating them across all of cosmic time. We then calculate Fisher forecasts for uncertainties on parameters describing relic spectral deviations, the CO/[CII] global signal, and a range of other Galactic and extragalactic foregrounds considered in previous work. We find that the measurement of the CO/[CII] global signal with a future CMB spectrometer presents an exciting opportunity to constrain the evolution of metallicity and molecular gas in galaxies across cosmic time. From PIXIE to its enhanced version, SuperPIXIE, microwave spectrometers would have the fundamental sensitivity to constrain the redshift evolution of average kinetic temperature and cosmic molecular gas density at a level of 10% to 1%, respectively. Taking a spectral distortion-centric perspective, when combined with other foregrounds, sky-averaged CO/[CII] emission can mimic $\mu$- and to a lesser extent $y$-type distortions. Under fiducial parameters, marginalising over the CO/[CII] model parameters increases the error on $\mu$ by $\simeq50$%, and the error on $y$ by $\simeq10$%. Incorporating information from planned CO LIM surveys can recover some of this loss in precision. Future work should deploy a more general treatment of the microwave sky to quantify in more detail the potential synergies between PIXIE-like and CO LIM experiments, which complement each other strongly in breadth versus depth, and ways to optimise both spectrometer and LIM surveys to improve foreground cleaning and maximise the science return for each.
We propose a novel approach to parameterize the equation of state for Scalar Field Dark Energy (SFDE) and use it to derive analytical solutions for various cosmological parameters. Using Markov chain Monte Carlo (MCMC) sampling with Bayesian techniques, we obtain constraints on the model parameters from three observational datasets. We find a quintessence-like behavior for Dark Energy (DE), with positive values for both model parameters $\alpha$ and $\beta$. Our analysis of the CC+BAO+SNe datasets reveals that the transition redshift and the current value of the deceleration parameter are $z_{tr}=0.73_{-0.01}^{+0.03}$ and $q_{0}=-0.44_{-0.02}^{+0.03}$, respectively. We also investigate the accretion flow of SFDE onto a Black Hole (BH) and analyze the nature of the BH's dynamical mass during accretion, taking into account Hawking radiation and BH evaporation. Our proposed model offers insight into the nature of DE in the Universe and the behavior of BHs during accretion.
We present a pedagogical review of the halo model, a flexible framework that can describe the distribution of matter and its tracers on non-linear scales for both conventional and exotic cosmological models. We start with the premise that the complex structure of the cosmic web can be described by the sum of its individual components: dark matter, gas, and galaxies, all distributed within spherical haloes with a range of masses. The halo properties are specified through a series of simulation-calibrated ingredients including the halo mass function, non-linear halo bias and a dark matter density profile that can additionally account for the impact of baryon feedback. By incorporating a model of the galaxy halo occupation distribution, the properties of central and satellite galaxies, their non-linear bias and intrinsic alignment can be predicted. Through analytical calculations of spherical collapse in exotic cosmologies, the halo model also provides predictions for non-linear clustering in beyond-$\Lambda$CDM models. The halo model has been widely used to model observations of a variety of large-scale structure probes, most notably as the primary technique to model the underlying non-linear matter power spectrum. By documenting these varied and often distinct use cases, we seek to further coherent halo model analyses of future multi-tracer observables. This review is accompanied by the release of pyhalomodel: https://github.com/alexander-mead/pyhalomodel , flexible software to conduct a wide range of halo-model calculations.
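The skeleton of the central calculation, $P(k) = P_{\rm 2h}(k) + P_{\rm 1h}(k)$, can be written down with toy ingredients; everything below (the $\sigma(M)$ power law, mass function, bias, and profile) is a stand-in for the simulation-calibrated ingredients a real analysis, e.g. with pyhalomodel, would use.

```python
# Halo-model matter power spectrum skeleton: P(k) = P_2h(k) + P_1h(k).
# All ingredients are toy stand-ins for calibrated ones.
import numpy as np

rho_m = 8.6e10                      # mean matter density [Msun/h / (Mpc/h)^3]
delta_c = 1.686
M = np.logspace(8, 16, 400)         # halo masses [Msun/h]
lnM = np.log(M)

sigma = 2.5 * (M / 1e13) ** (-0.25)            # toy sigma(M)
nu = delta_c / sigma
dndlnM = np.sqrt(2/np.pi) * (rho_m/M) * nu * 0.25 * np.exp(-nu**2/2)  # PS-like
bias = 1 + (nu**2 - 1) / delta_c               # peak-background-split bias

def u_profile(k, M):
    # Toy normalized Fourier profile: -> 1 on large scales, suppressed
    # above the halo scale R ~ R_200m.
    R = (3*M / (4*np.pi*200*rho_m)) ** (1/3)
    return 1.0 / (1.0 + (k*R)**2)

def P_halo_model(k, P_lin):
    u = u_profile(k, M)
    P_1h = np.trapz(dndlnM * (M/rho_m)**2 * u**2, lnM)    # one-halo term
    I_2h = np.trapz(dndlnM * bias * (M/rho_m) * u, lnM)   # effective bias
    return I_2h**2 * P_lin + P_1h                          # two-halo + one-halo

print("P_hm(k=0.5, P_lin=1e3) ~", P_halo_model(0.5, 1e3))
```

Tracer spectra (galaxy-galaxy, galaxy-matter) follow the same structure, with the halo occupation distribution replacing the $M/\rho_m$ weights.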
A variety of supergravity and string models involve hidden sectors that may couple feebly to the visible sector via a variety of portals. While the coupling of the hidden sector to the visible sector is feeble, its coupling to the inflaton is largely unknown. It could couple feebly, or with the same strength as the visible sector, which would result in either a cold or a hot hidden sector at the end of reheating. These two possibilities could lead to significantly different outcomes for observables. We investigate the thermal evolution of the two sectors in a cosmologically consistent hidden sector dark matter model where the hidden sector and the visible sector are thermally coupled. Within this framework we analyze several phenomena to illustrate their dependence on the initial conditions. These include the allowed parameter space of models, the dark matter relic density, the proton-dark matter cross section, the effective number of massless neutrino species at BBN time, the self-interacting dark matter cross section (where self-interaction occurs via the exchange of a dark photon), and Sommerfeld enhancement. Finally, fits to the velocity dependence of dark matter cross sections from galaxy scales to the scale of galaxy clusters are given. The analysis indicates significant effects of the initial conditions on the observables listed above. The analysis is carried out within a framework where dark matter is constituted of dark fermions and the mediation between the visible and the hidden sector occurs via the exchange of dark photons. The techniques discussed here may have applications for a wider class of hidden sector models using different mediations between the visible and the hidden sectors to explore the impact of Big Bang initial conditions on observable physics.
We study signatures of primordial non-Gaussianity (PNG) in the redshift-space halo field on non-linear scales, using a combination of three summary statistics, namely the halo mass function (HMF), power spectrum, and bispectrum. The choice of adding the HMF to our previous joint analysis of power spectrum and bispectrum is driven by a preliminary field-level analysis, in which we train graph neural networks on halo catalogues to infer the PNG $f_\mathrm{NL}$ parameter. The covariance matrix and the responses of our summaries to changes in model parameters are extracted from a suite of halo catalogues constructed from the Quijote-PNG N-body simulations. We consider the three main types of PNG: local, equilateral and orthogonal. Adding the HMF to our previous joint analysis of power spectrum and bispectrum produces two main effects. First, it reduces the equilateral $f_\mathrm{NL}$ predicted errors by roughly a factor $2$, while also producing notable, although smaller, improvements for orthogonal PNG. Second, it helps break the degeneracy between the local PNG amplitude, $f_\mathrm{NL}^\mathrm{local}$, and assembly bias, $b_{\phi}$, without relying on any external prior assumption. Our final forecasts for PNG parameters are $\Delta f_\mathrm{NL}^\mathrm{local} = 40$, $\Delta f_\mathrm{NL}^\mathrm{equil} = 210$, $\Delta f_\mathrm{NL}^\mathrm{ortho} = 91$, on a cubic volume of $1 \left( {\rm Gpc}/{\rm h} \right)^3$, with a halo number density of $\bar{n}\sim 5.1 \times 10^{-5}~h^3\mathrm{Mpc}^{-3}$, at $z = 1$, and considering scales up to $k_\mathrm{max} = 0.5~h\,\mathrm{Mpc}^{-1}$.
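The forecast step described above reduces to a standard Fisher construction from simulation-measured responses and covariance; the sketch below uses random placeholders with illustrative shapes for those simulation products.

```python
# Fisher forecast from a joint data vector (HMF + P(k) + B(k)) whose responses
# and covariance come from simulations; placeholders stand in for both.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_params = 200, 4           # stacked summaries; e.g. (fNL, b1, b_phi, ...)

R = rng.normal(size=(n_data, n_params))       # dS/dtheta from finite differences
C = np.diag(rng.uniform(0.5, 2.0, n_data))    # covariance from mock catalogues

# Hartlap factor: debiases the inverse of a covariance estimated from a
# finite number of mocks (assumes Gaussian-distributed summaries).
n_mocks = 1000
C_inv = (n_mocks - n_data - 2) / (n_mocks - 1) * np.linalg.inv(C)

F = R.T @ C_inv @ R                            # Fisher matrix
errors = np.sqrt(np.diag(np.linalg.inv(F)))    # marginalized 1-sigma errors
print("marginalized errors:", errors)
```

Degeneracy breaking, such as between $f_\mathrm{NL}^\mathrm{local}$ and $b_\phi$, shows up as a reduction of the off-diagonal structure of $F^{-1}$ once the HMF rows are added to the data vector.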
We propose that the universe contains two identical sets of particles and gauge interactions, coupling only through gravitation, which differ by their Higgs potentials. We postulate that because of underlying symmetries, the two sectors when uncoupled have Higgs potentials that lie at the boundary between phases with nonzero and zero Higgs vacuum expectation. Turning on the coupling between the two sectors can break the degeneracy, pushing the Higgs potential in one sector into the domain of nonzero Higgs expectation (giving the visible sector), and pushing the Higgs potential in the other sector into the domain of zero Higgs expectation (giving the dark sector). The least massive baryon in the dark sector will then be a candidate self-interacting dark matter particle.
The non-Gaussian part of the covariance matrix of the galaxy power spectrum involves the connected four-point correlation in Fourier space, i.e. the trispectrum. This paper introduces a fast method to compute the non-Gaussian part of the covariance matrix of the galaxy power spectrum multipoles in redshift space at tree level in standard perturbation theory. For the tree-level galaxy trispectrum, the angular integral between two wavevectors can be evaluated analytically by employing an FFTLog. The new implementation computes the non-Gaussian covariance of the power spectrum monopole, quadrupole, hexadecapole and their cross-covariance in $O(10)$ seconds, for an effectively arbitrary number of instances of cosmological and galaxy bias parameters and redshifts, without any parallelization or acceleration. This is a large advantage over conventional numerical integration. We demonstrate that the computation of the covariance at $k = 0.005 - 0.4\,h\,\mathrm{Mpc}^{-1}$ gives results with $0.1 - 1\%$ accuracy. The efficient computation of the analytic covariance can be useful for future galaxy surveys, especially those utilizing multi-tracer analyses.
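Schematically, the object being computed is the following angular average of the trispectrum over the multipole weights (survey volume $V$, Legendre polynomials $\mathcal{L}_\ell$, line of sight $\hat{n}$); the FFTLog trick evaluates the angular integral between $\hat{k}_1$ and $\hat{k}_2$ analytically. The paper's exact normalization conventions may differ:

```latex
% Non-Gaussian (trispectrum) part of the power spectrum multipole covariance.
\mathrm{Cov}^{\rm NG}_{\ell_1 \ell_2}(k_1, k_2) =
\frac{(2\ell_1+1)(2\ell_2+1)}{V}
\int \frac{\mathrm{d}^2\hat{k}_1}{4\pi}
\int \frac{\mathrm{d}^2\hat{k}_2}{4\pi}\,
\mathcal{L}_{\ell_1}(\hat{k}_1\!\cdot\!\hat{n})\,
\mathcal{L}_{\ell_2}(\hat{k}_2\!\cdot\!\hat{n})\,
T(\mathbf{k}_1, -\mathbf{k}_1, \mathbf{k}_2, -\mathbf{k}_2)
```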
According to the best-fit parameters of the Standard Model, the Higgs field's potential reaches a maximum at a field value $h \sim 10^{10-11}$ GeV and then turns over to negative values. During reheating after inflation, resonance between the inflaton and the Higgs can cause the Higgs to fluctuate past this maximum and run down the dangerous side of the potential if these fields couple too strongly. In this paper, we place constraints on the inflaton-Higgs couplings such that the probability of the Higgs entering the unstable regime during reheating is small. To do so, the equations of motion are first solved approximately and semi-analytically, then solved fully numerically. Next, the growth in variance is used to determine the parameter space for $\kappa$ and $\alpha$, the coupling coefficients for the inflaton-Higgs cubic and quartic interactions, respectively. We find upper bounds of $\kappa < 1.6 \times 10^{-5} m_\phi \sim 2.2 \times 10^8$ GeV and $\alpha < 10^{-8}$ for the Higgs to remain stable in most Hubble patches during reheating, and we also find the full two-parameter joint constraints. We find a corresponding bound on the reheat temperature of $T_\text{reh} \lesssim 9.2 \times 10^9$ GeV. Additionally, de Sitter temperature fluctuations during inflation put a lower bound on the inflaton-Higgs coupling by providing an effective mass for the Higgs, pushing back its hilltop during inflation. These additional constraints provide a lower bound on $\alpha$, while $\kappa$ must also be non-zero for the inflaton to decay efficiently.
Phantom scalar theories are widely considered in cosmology, but rarely at the quantum level, where they give rise to negative-energy ghost particles. These cause decay of the vacuum into gravitons and photons, violating observational gamma-ray limits unless the ghosts are effective degrees of freedom with a cutoff $\Lambda$ at the few-MeV scale. We update the constraints on this scale, finding that $\Lambda \lesssim 19$ MeV. We further explore the possible coupling of ghosts to a light, possibly massless, hidden sector particle, such as a sterile neutrino. Vacuum decays can then cause the dark matter density of the universe to grow at late times. The combined phantom plus dark matter fluid has an effective equation of state $w < -1$, and functions as a new source of dark energy. We derive constraints from cosmological observables on the rate of vacuum decay into such a phantom fluid. We find a mild preference for the ghost model over the standard cosmological one, and a modest amelioration of the Hubble and $S_8$ tensions.
We present a new high-resolution free-form mass model of Abell 2744, combining both weak-lensing (WL) and strong-lensing (SL) datasets from JWST. The SL dataset comprises 286 multiple images, the most extensive SL constraint to date for a single cluster. The WL dataset, employing photo-$z$ selection, yields a source density of ~ 350 arcmin$^{-2}$, the densest WL constraint ever. The combined mass reconstruction enables the highest-resolution mass map of Abell 2744 within the ~ 1.8 Mpc$\times$1.8 Mpc reconstruction region to date, revealing an isosceles triangular structure with two legs of ~ 1 Mpc and a base of ~ 0.6 Mpc. Although our algorithm MAximum-entropy ReconStruction (${\tt MARS}$) is entirely blind to the cluster galaxy distribution, the resulting mass reconstruction traces the brightest cluster galaxies remarkably well, with the five strongest mass peaks coinciding with the five most luminous cluster galaxies within $\lesssim 2''$. We do not detect any unusual mass peaks that are not traced by the cluster galaxies, unlike the findings in previous studies. Our mass model shows the smallest scatters of SL multiple images in both the source (~0".05) and image (~0".1) planes, lower than in previous studies by a factor of ~ 4. Although ${\tt MARS}$ represents the mass field with an extremely large number of ~ 300,000 free parameters, it converges to a solution within a few hours thanks to our use of deep learning techniques. We make our mass and magnification maps publicly available.
Over 60 years after the discovery of the first quasar, more than $275$ such sources are identified in the epoch of reionization at $z>6$. JWST is now exploring higher redshifts ($z\gtrsim 8$) and lower mass ($\lesssim 10^7\,M_\odot$) ranges. The discovery of progressively farther quasars is instrumental to constraining the properties of the first population of black holes (BHs), or BH seeds, formed at $z \sim 20-30$. For the first time, we use Bayesian analysis of the most comprehensive catalog of quasars at $z>6$ to constrain the distribution of BH seeds. We show that the mass distribution of BH seeds can be effectively described by combining a power law and a lognormal function tailored to the mass ranges associated with light and heavy seeds, assuming Eddington-limited growth and early seeding time. Our analysis reveals a power-law slope of $-0.70^{+0.46}_{-0.46}$ and a lognormal mean of $4.44^{+0.30}_{-0.30}$. The inferred values of the Eddington ratio, the duty cycle, and the mean radiative efficiency are $0.82^{+0.10}_{-0.10}$, $0.66^{+0.23}_{-0.23}$, and $0.06^{+0.02}_{-0.02}$, respectively. Models that solely incorporate a power law or a lognormal distribution within the specific mass range corresponding to light and heavy seeds are statistically strongly disfavored, unlike models not restricted to this specific range. Our results suggest that including both components is necessary to comprehensively account for the masses of high-redshift quasars, and that both light and heavy seeds formed in the early Universe and grew to form the population of quasars we observe.
The Internal Linear Combination (ILC) method is commonly employed to extract the cosmic microwave background (CMB) signal from multi-frequency observation maps. However, the performance of the ILC method tends to degrade when the signal-to-noise ratio (SNR) is relatively low, especially when measuring the primordial $B$-modes to detect primordial gravitational waves. To address this issue, an enhanced version of the ILC method on the $B$ map, called constrained ILC, has been suggested in the literature; it is designed to be more suitable for low-SNR situations by incorporating additional prior foreground information. In our study, we have modified the constrained Needlet ILC (NILC) method and successfully enhanced its performance. We illustrate our methods using mock data generated from the combination of WMAP, Planck and a ground-based experiment in the northern hemisphere; the chosen noise level for the ground-based experiment is very conservative and can easily be achieved in the very near future. The results show that the level of foreground residuals can be well controlled. In comparison to the standard NILC method, which introduces a bias to the tensor-to-scalar ratio ($r$) of approximately $0.05$, the constrained NILC method exhibits a significantly reduced bias of only around $5\times10^{-3}$ on $r$, which is much smaller than the statistical error.
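The difference between the standard and constrained ILC weights can be summarized in a few lines; the number of channels, the SEDs, and the toy covariance below are illustrative assumptions.

```python
# Standard vs constrained ILC weights in a single (needlet) band.
# Channel count, SEDs, and the toy covariance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_freq = 5
G = rng.normal(size=(n_freq, n_freq))
C = G @ G.T + np.eye(n_freq)          # toy frequency-frequency covariance (SPD)

a = np.ones(n_freq)                   # CMB response in thermodynamic units
b = np.linspace(2.0, 0.5, n_freq)     # toy foreground SED to be nulled

Ci = np.linalg.inv(C)

# Standard ILC: minimize w^T C w subject to w.a = 1.
w_ilc = Ci @ a / (a @ Ci @ a)

# Constrained ILC: additionally impose w.b = 0 (null the foreground SED).
A = np.column_stack([a, b])
e = np.array([1.0, 0.0])              # unit CMB response, zero foreground response
w_cilc = Ci @ A @ np.linalg.solve(A.T @ Ci @ A, e)

print("ILC :", w_ilc, "w.a =", w_ilc @ a)
print("cILC:", w_cilc, "w.a =", w_cilc @ a, "w.b =", w_cilc @ b)
```

The extra constraint is what trades a small variance penalty for the large reduction in foreground-induced bias on $r$ reported above.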
Ray tracing plays a vital role in black hole imaging, modeling the emission mechanisms of pulsars, and deriving signatures from physics beyond the Standard Model. In this work we focus on one specific application of ray tracing, namely, predicting radio signals generated from the resonant conversion of axion dark matter in the strongly magnetized plasma surrounding neutron stars. The production and propagation of low-energy photons in these environments are sensitive to both the anisotropic response of the background plasma and curved spacetime; here, we employ a fully covariant framework capable of treating both effects. We implement this both via forward and backward ray tracing. In forward ray tracing, photons are sampled at the point of emission and propagated to infinity, whilst in the backward-tracing approach, photons are traced backwards from an image plane to the point of production. We explore various approximations adopted in prior work, quantifying the importance of gravity, plasma anisotropy, the neutron star mass and radius, and imposing the proper kinematic matching of the resonance. Finally, using a more realistic model for the charge distribution of magnetar magnetospheres, we revisit the sensitivity of current and future radio and sub-mm telescopes to spectral lines emanating from the Galactic Center Magnetar, showing such observations may extend sensitivity to axion masses $m_a \sim \mathcal{O}({\rm few}) \times 10^{-3}$ eV, potentially even probing parameter space of the QCD axion.
In this paper, we construct a viable model for GeV-scale self-interacting dark matter (DM), where the DM was thermally produced in the early universe. Here, a new vector-like fermion with a dark charge under the $U(1)_{D}$ gauge symmetry serves as a secluded WIMP DM candidate; it can dominantly annihilate into the light dark gauge boson and a singlet scalar through the dark gauge interaction. The self-interaction of the DM is also induced by the light dark gauge boson via the same gauge interaction. In addition to these particles, we further introduce two Weyl fermions and a doublet scalar, by which the dark gauge boson produced from s-wave DM annihilations can mostly decay into active neutrinos after the dark symmetry breaking, such that the CMB bound on low-mass DM can be evaded. In order to have a common parameter region that explains both the observed relic abundance and the self-interaction of DM, we also study this model in a non-standard cosmological evolution, where the cosmic expansion, driven by a new field species, is faster than in the standard radiation-dominated universe at the time of DM freeze-out. Conversely, one can also use the self-interacting nature of light thermal DM to examine the non-standard cosmological history of the universe.
How protoclusters evolved from sparse galaxy overdensities to mature galaxy clusters is still not well understood. In this context, detecting and characterizing the hot ICM at high redshifts (z~2) is key to understanding how the continuous accretion from, and mergers along, the filamentary large-scale structure impact the first phases of cluster formation. We study the dynamical state and morphology of the z=1.98 galaxy cluster XLSSC 122 with high-resolution observations (~5") of the ICM through the SZ effect. Via Bayesian forward modeling, we map the ICM on scales from the virial radius down to the core of the cluster. To constrain such a broad range of spatial scales, we employ a new technique that jointly forward-models parametric descriptions of the pressure distribution to interferometric ACA and ALMA observations and multi-band imaging data from the 6-m, single-dish Atacama Cosmology Telescope. We detect the SZ effect at $11\sigma$ in the ALMA+ACA observations and find a flattened inner pressure profile that is consistent with a non-cool core classification at a significance of $>3\sigma$. In contrast to previous works, we find better agreement between the SZ effect signal and the X-ray emission as well as the cluster member distribution. Further, XLSSC 122 exhibits an excess of SZ flux in the south of the cluster where no X-ray emission is detected. By reconstructing the interferometric observations and modeling in the uv-plane, we obtain a tentative detection of an infalling group or filamentary-like structure that is believed to boost and heat up the ICM while the density of the gas is low. In addition, we provide an improved SZ mass of $M_{500,\mathrm{c}} = 1.66^{+0.23}_{-0.20} \times 10^{14} \rm M_\odot$. Altogether, the observations indicate that we see XLSSC 122 in a dynamic phase of cluster formation while a large reservoir of gas is already thermalized.
We predict the X-ray background (XRB) expected from the population of quasars detected by the JWST spectroscopic surveys over the redshift range $z \sim 4-7$. We find that the measured UV emissivities, in combination with a best-fitting quasar SED template, imply a $\sim 10$ times higher unresolved X-ray background than constrained by current experiments. We illustrate the difficulty of simultaneously matching the faint-end of the quasar luminosity function and the X-ray background constraints. We discuss possible origins and consequences of this discrepancy.
One of the hottest questions in the cosmology of self-interacting dark matter (SIDM) is whether scatterings can induce detectable core-collapse in halos by the present day. Because gravitational tides can accelerate core-collapse, the most promising targets for observing core-collapse are satellite galaxies and subhalo systems. However, simulating small subhalos is computationally intensive, especially when subhalos start to core-collapse. In this work, we present a hierarchical framework for simulating a population of SIDM subhalos, which reduces the computation time to linear order in the total number of subhalos. With this method, we simulate substructure lensing systems with multiple velocity-dependent SIDM models, and show how subhalo evolution depends on the SIDM model, subhalo mass and orbits. We find that an SIDM cross section of $\gtrsim 200$ cm$^2$/g at velocity scales relevant for subhalos' internal heat transfer is needed for a significant fraction of subhalos to core-collapse in a typical lens system at redshift $z=0.5$, and that core-collapse has unique observable features in lensing. We show quantitatively that core-collapse in subhalos is typically accelerated compared to field halos, except when the SIDM cross section is non-negligible ($\gtrsim \mathcal{O}(1)$ cm$^2$/g) at subhalos' orbital velocities, in which case evaporation by the host can delay core-collapse. This suggests that substructure lensing can be used to probe velocity-dependent SIDM models, especially if line-of-sight structures (field halos) can be distinguished from lens-plane subhalos. Intriguingly, we find that core-collapse in subhalos can explain the recently reported ultra-steep density profiles of substructures found by lensing with the \emph{Hubble Space Telescope}.
Beyond-two-point statistics contain additional information on cosmological as well as astrophysical and observational (systematics) parameters. In this methodology paper we provide an end-to-end simulation-based analysis of a set of Gaussian and non-Gaussian weak lensing statistics using detailed mock catalogues of the Dark Energy Survey. We implement: 1) second and third moments; 2) wavelet phase harmonics (WPH); 3) the scattering transform (ST). Our analysis is fully based on simulations; it spans a space of seven $\nu w$CDM cosmological parameters, and it forward-models the most relevant sources of systematics in the data (masks, noise variations, clustering of the sources, intrinsic alignments, and shear and redshift calibration). We implement a neural network compression of the summary statistics, and we estimate the parameter posteriors using a likelihood-free inference approach. We validate the pipeline extensively, and we find that WPH exhibits the strongest performance when combined with second moments, followed by ST, and then by third moments. The combination of all the different statistics further enhances constraints with respect to second moments alone, by up to 25 per cent, 15 per cent, and 90 per cent for $S_8$, $\Omega_{\rm m}$, and the Figure-of-Merit ${\rm FoM_{S_8,\Omega_{\rm m}}}$, respectively. We further find that non-Gaussian statistics improve constraints on $w$ and on the amplitude of intrinsic alignment with respect to second-moment constraints. The methodological advances presented here are suitable for application to Stage IV surveys from Euclid, Rubin-LSST, and Roman, with additional validation on mock catalogues for each survey. In a companion paper we present an application to DES Year 3 data.
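As a toy illustration of the simplest statistics in this set, the sketch below computes second and third moments of a synthetic convergence map smoothed at several scales; it is a stand-in with mock inputs, not the DES pipeline.

```python
# Second and third moments of a smoothed mock convergence map; the map
# and the smoothing scales are placeholders for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
kappa = rng.normal(0.0, 0.02, size=(512, 512))   # mock convergence map

for sigma_pix in [2, 4, 8, 16]:                  # smoothing scales (pixels)
    k_s = gaussian_filter(kappa, sigma_pix)
    m2 = np.mean(k_s**2)                         # second moment <kappa^2>
    m3 = np.mean(k_s**3)                         # third moment <kappa^3>
    print(f"scale {sigma_pix:2d} px: <k^2>={m2:.3e}  <k^3>={m3:.3e}")
```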
4-Dimensional Einstein-Gauss-Bonnet (4DEGB) gravity has garnered significant attention in the last few years as a phenomenological competitor to general relativity. We consider the theoretical and observational implications of this theory in both the early and late universe, (re-)deriving background and perturbation equations and constraining its characteristic parameters with data from cosmological probes. Our investigation surpasses the scope of previous studies by incorporating non-flat spatial sections. We explore the consequences of 4DEGB for the sound and particle horizons in the very early universe, and demonstrate that 4DEGB can provide an independent solution to the horizon problem for some values of its characteristic parameter $\alpha$. Finally, we constrain a previously unexplored regime of this theory in the limit of small coupling $\alpha$ (empirically supported in the post-Big Bang Nucleosynthesis era by prior constraints). This version of 4DEGB includes a geometric term that resembles dark radiation at the background level, but whose influence on the perturbed equations is qualitatively distinct from that of standard forms of dark radiation. In this limit, only one beyond-$\Lambda$CDM degree of freedom persists, which we denote $\tilde{\alpha}_C$. Our analysis yields the estimate $\tilde{\alpha}_C = (-9 \pm 6) \times 10^{-6}$, thereby providing a new constraint on a previously untested sector of 4DEGB.
Four supernova remnants and four anomalous X-ray pulsars were previously observed with the Parkes telescope in a campaign to detect pulsed radio emission from associated neutron stars. No signals were detected in the original searches of these data. I have reprocessed the data with the more recently developed HEIMDALL and FETCH software packages, which are optimized for single-pulse detection and classification. In this new analysis, no astrophysical pulses were detected with a signal-to-noise ratio above 7 from any of the targets at dispersion measures ranging from 0 to $10^{4}$ pc cm$^{-3}$. I present calculated fluence limits on single radio pulses from these targets.
The densities in the cores of neutron stars (NSs) can reach several times the nuclear saturation density. The exact nature of matter at these densities is still virtually unknown. We consider a number of proposed, phenomenological relativistic mean-field equations of state to construct theoretical models of NSs. We find that the emergence of exotic matter at these high densities restricts the mass of NSs to $\simeq 2.2 M_\odot$. However, the presence of magnetic fields and a model anisotropy significantly increases the star's mass, placing it within the observational mass gap that separates the heaviest NSs from the lightest black holes. Therefore, we propose that gravitational wave observations, like GW190814, and other potential candidates within this mass gap, may actually represent massive, magnetized NSs.
Most massive stars end their lives with core collapse. However, it is not clear which stars explode as core-collapse supernovae (CCSNe), leaving behind a neutron star, and which collapse to a black hole, aborting the explosion. One path to predicting explodability without expensive multi-dimensional simulations is to develop analytic explosion conditions. Such analytic conditions also provide a deeper understanding of the explosion mechanism and offer insight into why some simulations explode while others do not. The analytic force explosion condition (FEC) reproduces the explosion conditions of spherically symmetric CCSN simulations. In this follow-up manuscript, we incorporate the dominant multi-dimensional effect that aids explosion, neutrino-driven convection, into the FEC. This generalized critical condition (FEC+) is suitable for multi-dimensional simulations and has the potential to accurately predict the explosion conditions of two- and three-dimensional CCSN simulations. We show that adding neutrino-driven convection reduces the critical condition by $\sim 30\%$, which is consistent with previous multi-dimensional simulations.
We present the first compelling evidence of shock-heated molecular clouds associated with the supernova remnant (SNR) N49 in the Large Magellanic Cloud (LMC). Using $^{12}$CO($J$ = 2-1, 3-2) and $^{13}$CO($J$ = 2-1) line emission data taken with the Atacama Large Millimeter/Submillimeter Array, we derived the H$_2$ number density and kinetic temperature of eight $^{13}$CO-detected clouds using the large velocity gradient approximation at a resolution of 3.5$''$ (~0.8 pc at the LMC distance). The physical properties of the clouds fall into two categories: three of them near the shock front show the highest temperatures of ~50 K with densities of ~500-700 cm$^{-3}$, while the other clouds, slightly more distant from the SNR, have moderate temperatures of ~20 K with densities of ~800-1300 cm$^{-3}$. The former clouds were heated by supernova shocks, while the latter were dominantly affected by cosmic-ray heating. These findings are consistent with the efficient production of X-ray recombining plasma in N49 due to thermal conduction between the cold clouds and hot plasma. We also find that the gas pressure is roughly constant except for the three shock-engulfed clouds inside or on the SNR shell, suggesting that almost no clouds have evaporated within the short SNR age of ~4800 yr. This result is compatible with a shock-interaction model with dense and clumpy clouds inside a low-density wind bubble.
We report the first extensive optical flux and spectral variability study of the TeV blazar TXS 0506+056 on intra-night to long-term timescales, using BVRI data collected over 220 nights between January 21, 2017 and April 9, 2022 with 8 ground-based optical telescopes. In our search for intraday variability (IDV), we employed two statistical analysis techniques, the nested ANOVA test and the power-enhanced F-test. We found the source to be variable on 8 nights out of 35 in the R-band and on 2 of 14 in the V-band, yielding duty cycles (DC) of 22.8% and 14.3%, respectively. Clear colour variation in V - R was seen on only 1 of 14 observing nights, but no IDV was found in the more limited B, I, and B - I data. During our monitoring period the source showed a 1.18 mag variation in the R-band, and similar variations are clearly seen at all optical wavelengths. We extracted the optical (BVRI) SEDs of the blazar for 44 nights on which observations were carried out in all four of those wavebands. The mean spectral index ($\alpha$) was determined to be $0.897 \pm 0.171$.
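For intuition, a bare-bones variance-ratio (F) test on synthetic differential light curves is sketched below; this is a simplified stand-in for the power-enhanced F-test used in such IDV studies, which additionally combines the variances of several comparison stars.

```python
# Minimal variance-ratio (F) test for intraday variability; the light
# curves here are synthetic placeholders, not real photometry.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
lc_blazar = rng.normal(0.0, 0.03, 60)   # target differential magnitudes
lc_star   = rng.normal(0.0, 0.01, 60)   # comparison-star differential mags

F = np.var(lc_blazar, ddof=1) / np.var(lc_star, ddof=1)
p = f_dist.sf(F, len(lc_blazar) - 1, len(lc_star) - 1)
print(f"F = {F:.2f}, p = {p:.3g}  -> variable at 99% confidence if p < 0.01")
```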
Multi-wavelength (MW) observations of Fast Radio Bursts (FRBs) are a key avenue to uncover the as-yet-unknown origin(s) of these extragalactic signals. In this proceeding, we discuss the need for precise localization to conduct MW studies. We present a number of theoretical predictions of MW counterparts and mention a few examples of ongoing MW campaigns.
We have analyzed the optical light curves of the blazar OJ 287 obtained with the Transiting Exoplanet Survey Satellite (TESS) over about 80 days from 2021 October 13 to December 31, with an unprecedented sampling of 2 minutes. Although significant variability was found throughout the entire period, we detected two exceptional flares, with the flux nearly doubling and then nearly tripling over 2 days in the middle of 2021 November. We analyzed the light curves using the excess variance, generalized Lomb-Scargle periodogram, and Continuous Auto-Regressive Moving Average (CARMA) methods, and estimated the flux halving/doubling timescales. The most probable shortest variability timescale was found to be 0.38 days, in the rising phase of the first flare. We briefly discuss some emission models for variability in radio-loud active galactic nuclei that could be capable of producing such fast flares.
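To illustrate two of the tools mentioned above, here is a minimal sketch: a Lomb-Scargle periodogram (via astropy) of a synthetic, unevenly sampled light curve, plus the standard flux-doubling timescale formula; all numbers are placeholders.

```python
# Lomb-Scargle periodogram of a synthetic light curve, and the standard
# doubling-timescale estimate tau = dt * ln(2) / ln(F2/F1).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 80, 2000))                          # days
flux = 1.0 + 0.1*np.sin(2*np.pi*t/7.0) + rng.normal(0, 0.02, t.size)

freq, power = LombScargle(t, flux).autopower()
print("peak period [d]:", 1.0 / freq[np.argmax(power)])

# Doubling timescale between fluxes F1 (at t1) and F2 (at t2):
t1, t2, F1, F2 = 10.0, 12.0, 1.0, 1.9
tau = (t2 - t1) * np.log(2) / np.log(F2 / F1)
print(f"doubling timescale: {tau:.2f} d")
```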
We study the optical flux and polarization variability of the binary black hole blazar OJ 287 using quasi-simultaneous observations from 2015 to 2023 carried out with telescopes in the USA, Japan, Russia, Crimea, and Bulgaria. This is one of the most extensive quasi-simultaneous optical flux and polarization variability studies of OJ 287. OJ 287 showed large-amplitude, ~3.0 mag flux variability, large changes of ~37% in the degree of polarization, and a large swing of ~215 degrees in the angle of the electric vector of polarization. During the period of observation, several flares in flux were detected. Those flares are correlated with rapid increases in the degree of polarization and swings in the electric vector of polarization angle. A peculiar anticorrelation between flux and polarization degree, accompanied by a nearly constant polarization angle, was detected from JD 2,458,156 to JD 2,458,292. We briefly discuss some explanations for the flux and polarization variations observed in OJ 287.
We present the results of our study of cross-correlations between long-term multi-band observations of the radio variability of the blazar 3C 279. More than a decade (2008-2022) of radio data were collected at seven different frequencies ranging from 2 GHz to 230 GHz. The multi-band radio light curves show variations in flux, with the prominent flare features appearing first in higher-frequency bands and later in lower-frequency bands. This behavior is quantified by cross-correlation analysis, which finds that the emission in lower-frequency bands lags that in higher-frequency bands. Lag versus frequency plots are well fit by straight lines with negative slope, typically ~-30 day/GHz. We discuss these flux variations in conjunction with the evolution of bright moving knots seen in multi-epoch VLBA maps to suggest possible physical changes in the jet that can explain the observational results. Some of the variations are consistent with the predictions of shock models, while others are better explained by a changing Doppler beaming factor as the knot trajectory bends slightly, given a small viewing angle to the jet.
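A schematic of the lag-versus-frequency fit described above is given below; the lag values are made up for illustration, and the real analysis derives them from cross-correlating the seven-band light curves.

```python
# Straight-line fit of lag versus frequency; the lags are placeholder
# numbers standing in for cross-correlation results.
import numpy as np

freq_ghz = np.array([2.0, 5.0, 8.0, 15.0, 37.0, 230.0])
lag_days = np.array([250., 160., 130., 90., 40., 0.])  # relative to 230 GHz

slope, intercept = np.polyfit(freq_ghz, lag_days, 1)
print(f"slope = {slope:.1f} day/GHz")   # negative: lower frequencies lag
```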
The multi-drifting subpulse behavior of PSR J2007+0910 has been studied in detail with high-sensitivity observations from the Five-hundred-meter Aperture Spherical radio Telescope (FAST) at 1250 MHz. We find that at least six different single emission modes are observed in PSR J2007+0910, four of which show significant subpulse drifting (modes A, B, C, and D), while the remaining two (modes $E_1$ and $E_2$) show stationary subpulse structures. The subpulse drifting periods of modes A, B, C, and D are $P_{3, A} = 8.7 \pm 1.6 P$, $P_{3, B} = 15.8 \pm 1.2 P$, $P_{3, C} = 21.6 \pm 1.3 P$, and $P_{3, D} = 32.3 \pm 0.9 P$, respectively, where $P$ represents the pulse period of this pulsar. The subpulse separation is almost the same for all modes, $P_2 = 6.01 \pm 0.18^\circ$. Deeper analysis suggests that the appearance of, and the significant changes in, the drifting period across emission modes may originate from the aliasing effect. The observed non-drifting modes may be caused by spark points that move with a period of $\sim P_2$. Our statistical analysis shows that when the mode changes, this pulsar almost always switches from slower to faster drifting. The interesting subpulse emission phenomena of PSR J2007+0910 provide a unique opportunity to understand the switching mechanism of multi-drifting pulsars.
In high-energy astrophysical processes involving compact objects, such as core-collapse supernovae or binary neutron star mergers, neutrinos play an important role in the synthesis of nuclides. Neutrinos in these environments can experience collective flavor oscillations driven by neutrino-neutrino interactions, including coherent forward scattering and incoherent (collisional) effects. Recently, there has been interest in exploring potential novel behaviors in collective oscillations of neutrinos by going beyond the one-particle effective or "mean-field" treatments. Here, we seek to explore implications of collective neutrino oscillations, in the mean-field treatment and beyond, for the nucleosynthesis yields in supernova environments with different astrophysical conditions and neutrino inputs. We find that collective oscillations can impact the operation of the $\nu p$-process and r-process nucleosynthesis in supernovae. The potential impact is particularly strong in high-entropy, proton-rich conditions, where we find that neutrino interactions can nudge an initial $\nu p$ process neutron rich, resulting in a unique combination of proton-rich low-mass nuclei as well as neutron-rich high-mass nuclei. We describe this neutrino-induced neutron capture process as the "$\nu i$ process". In addition, nontrivial quantum correlations among neutrinos, if present significantly, could lead to different nuclide yields compared to the corresponding mean-field oscillation treatments, by virtue of modifying the evolution of the relevant one-body neutrino observables.
We describe two fitting schemes that aim to represent the high-density part of realistic equations of state for numerical simulations, such as those of neutron star oscillations. The low-density part of the equation of state is represented by an arbitrary polytropic crust, and we propose a generic procedure to stitch any desired crust to the high-density fit, which is performed on the internal energy, pressure, and sound speed of barotropic equations of state describing cold neutron stars in $\beta$-equilibrium. An extension of the fitting schemes to equations of state with an additional compositional parameter is proposed. In particular, we develop a formalism that ensures the existence of a $\beta$-equilibrium at low densities. An additional feature of this low-density model is that it can, in principle, be applied to any parametrization. The performance of the fits is checked on mass, radius, and tidal deformability, as well as on the dynamical radial oscillation frequencies. To that end, we use a pseudospectral isolated neutron star evolution code based on a non-conservative form of the hydrodynamical equations. A comparison to existing parametrizations is made, as far as possible, and to published radial frequency values in the literature. The static and dynamic quantities are well reproduced by the fitting schemes. Our results suggest that, even though the radius is very sensitive to the choice of the crust, this choice has little influence on the oscillation frequencies of a neutron star.
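To make the stitching idea concrete, the sketch below matches a polytropic crust $P = K\rho^\Gamma$ continuously in pressure to a dummy high-density fit at a transition density; the form of the high-density fit and all numbers are placeholders, not the paper's schemes.

```python
# Schematic crust stitching: choose the polytropic constant K so that the
# crust pressure equals the high-density fit at the transition density.
import numpy as np

def p_high(rho):                            # dummy high-density fit P(rho)
    return 2.0 * rho**2.2

def stitch_crust(rho_t, gamma_crust=4.0/3.0):
    K = p_high(rho_t) / rho_t**gamma_crust  # continuity of P at rho_t
    return lambda rho: K * rho**gamma_crust

crust = stitch_crust(rho_t=0.1)
print(crust(0.1), p_high(0.1))              # equal by construction
```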
We study the connection between the variations in the far-ultraviolet (FUV), near-ultraviolet (NUV), and X-ray band emission from NGC 4051 using 4-day-long AstroSat observations performed during 5-9 June 2016. NGC 4051 showed rapid variability in all three bands, with the strongest variability amplitude in the X-ray band ($F_{var} \sim 37\%$) and much weaker variability in the UV bands ($F_{var} \sim 3 - 5\%$). Cross-correlation analysis performed using interpolated cross-correlation functions (ICCF) and discrete cross-correlation functions (DCF) revealed a robust correlation ($\sim 0.75$) between the UV and X-ray light curves. The variations in the X-ray band are found to lead those in the FUV and NUV bands by $\sim 7.4{\rm~ks}$ and $\sim 24.2{\rm~ks}$, respectively. The UV lags favour the thermal disc reprocessing model. The FUV and NUV bands are strongly correlated ($\sim 0.9$), and the variations in the FUV band lead those in the NUV band by $\sim 13{\rm~ks}$. Comparison of the UV lags found using the AstroSat observations with those reported earlier, and with the theoretical model for the thermal reverberation time-lag, suggests a possible change in either the geometry of the accretion disc/corona or the height of the corona.
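A minimal interpolated cross-correlation function, in the spirit of the ICCF used above, can be sketched as follows; the light curves are synthetic and the sign convention is noted in the comments.

```python
# Bare-bones ICCF: correlate y1(t) with y2(t + lag) over a grid of lags.
# A peak at lag > 0 means series 2 lags series 1.
import numpy as np

def iccf(t1, y1, t2, y2, lags):
    r = []
    for lag in lags:
        y2_shift = np.interp(t1 + lag, t2, y2)   # series 2 sampled at t1+lag
        r.append(np.corrcoef(y1, y2_shift)[0, 1])
    return np.array(r)

rng = np.random.default_rng(3)
t = np.linspace(0, 350, 200)                           # time in ks
xray = np.sin(t / 30.0) + rng.normal(0, 0.1, t.size)
uv   = np.sin((t - 24.2) / 30.0) + rng.normal(0, 0.1, t.size)

lags = np.linspace(-60, 60, 121)
r = iccf(t, xray, t, uv, lags)
print("peak lag [ks]:", lags[np.argmax(r)])            # ~ +24: UV lags X-ray
```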
We study the motion of uncharged particles and photons in the background of a magnetically charged Euler-Heisenberg (EH) black hole (BH) with scalar hair. The spacetime can be asymptotically (A)dS or flat. After investigating particle motions around the BH and the behavior of the effective potential of the particle radial motion, we determine the contribution of the BH parameters to the geodesics. Photons follow null geodesics of an effective geometry induced by the corrections of the EH non-linear electrodynamics. Thus, after determining the effective geometry, we calculate the shadow of the BH. Upon comparing the theoretically calculated BH shadow with the images of the shadows of M87* and Sgr A* obtained by the Event Horizon Telescope collaboration, we impose constraints on the BH parameters, namely the scalar hair ($\nu$), the magnetic charge ($Q_{m}$) and the EH parameter ($\alpha$).
Recent measurements of primary and secondary cosmic-ray (CR) spectra, their arrival directions, and our improved knowledge of the magnetic field geometry around the heliosphere allow us to set a bound on the distance beyond which a puzzling 10-TeV "bump" cannot originate. The sharpness of the spectral breaks associated with the bump, the abrupt change of the CR intensity across the local magnetic equator ($90^{\circ}$ pitch angle), and the similarity between the primary and secondary CR spectral patterns point to a local reacceleration of the bump particles out of the background CRs. We argue that a nearby shock may generate such a bump by increasing the rigidity of the preexisting CRs below 50 TV by a mere factor of ~1.5. Reaccelerated particles below ~0.5 TV are convected with the interstellar medium flow and do not reach the Sun, thus creating the bump. This single universal process is responsible for the observed spectra of all CR species in the rigidity range below 100 TV. We propose that one viable candidate is the system of shocks associated with the star Epsilon Eridani, located 3.2 pc from the Sun and well aligned with the direction of the local magnetic field. Other shocks, such as old supernova shells, may produce a similar effect. We provide a simple formula that reproduces the spectra of all CR species with only three parameters uniquely derived from the CR proton data. We show how our formalism predicts helium and carbon spectra and the B/C ratio.
Motivated by explaining dark matter and achieving electroweak baryogenesis via spontaneous CP violation at high temperature, we propose a complex singlet scalar ($S=\frac{\chi+i\eta_s}{\sqrt{2}}$) extension of the two-Higgs-doublet model respecting a discrete dark CP symmetry: $S\to -S^*$. The dark CP symmetry guarantees, on the one hand, that $\chi$ is a dark matter candidate and, on the other, allows $\eta_s$ to mix with the pseudoscalars of the Higgs doublet fields, which play key roles in generating the CP-violation sources needed for electroweak baryogenesis at high temperature. The universe undergoes multi-step phase transitions, including a strongly first-order electroweak phase transition during which the baryon number is produced. At the present temperature, the observed vacuum is realized and the CP symmetry is restored, so that the stringent experimental electric dipole moment bounds are satisfied. Considering the relevant constraints, we study the simple scenario of $m_{\chi}$ around the Higgs resonance region and find that the dark matter relic abundance and the baryon asymmetry can be simultaneously explained. Finally, we briefly discuss the gravitational wave signatures at future space-based detectors and the LHC signatures.
Type-I X-ray bursts are rapid-brightening transient phenomena on the surfaces of accreting neutron stars (NSs). Some X-ray bursts, called {\it clocked bursters}, exhibit regular behavior with similar light curve profiles in their burst sequences. The periodic nature of clocked bursters has the advantage of constraining X-ray binary parameters and physics inside the NS. In the present study, we compute numerical models, based on different equations of state and NS masses, which are compared with the observation of a recently identified clocked burster, 1RXS J180408.9$-$342058. We find that the relation between accretion rate and recurrence time is highly sensitive to the NS mass and radius. We determine, in particular, that 1RXS J180408.9$-$342058 appears to possess a mass less than $1.7M_{\odot}$ and favors a stiffer nuclear equation of state (with an NS radius $\gtrsim12.7{\rm km}$). Consequently, the observations of this new clocked burster may provide additional constraints for probing the structure of NSs.
Gamma-ray bursts (GRBs) are among the most exciting sources, offering valuable opportunities to investigate the evolution of the energy fraction given to magnetic fields and particles, through microphysical parameters, during relativistic shocks. The delayed onset of GeV-TeV radiation from bursts detected by the \textit{Fermi} Large Area Telescope (\textit{Fermi}-LAT) and Cherenkov telescopes provides crucial information in favor of the external-shock model. Derivation of the closure relations (CRs) and the light curves in external shocks requires knowledge of GRB afterglow physics. In this manuscript, we derive the CRs and light curves in a stratified medium with variations of the microphysical parameters for the synchrotron and SSC afterglow model, radiated by an electron distribution with hard and soft spectral indices. Using Markov Chain Monte Carlo simulations, we apply the current model to investigate the evolution of the spectral and temporal indexes of the GRBs reported in the Second Gamma-ray Burst Catalog (2FLGC), which comprises 29 bursts with photon energies above 10 GeV, and of those bursts (GRB 180720B, 190114C, 190829A and 221009A) with energetic photons above 100 GeV, which can hardly be modeled with the CRs of the standard synchrotron scenario. The analysis shows that i) the most likely afterglow model using synchrotron and SSC emission for the 2FLGC corresponds to the constant-density scenario, and ii) variations of the spectral (temporal) index at constant temporal (spectral) index could be associated with the evolution of the microphysical parameters, as exhibited in GRB 190829A and GRB 221009A.
In this article we use the stochastic gravitational-wave background as a unique probe of the creation mechanism of primordial black holes. We consider the cumulative gravitational-wave background, which consists of a primary part coming from the creation mechanism of the primordial black holes and a secondary part coming from the different processes the primordial black holes subsequently undergo. We show that, in the presence of light or ultralight scalar bosons, superradiant instability generates the secondary part of the gravitational-wave background, which is the most detectable. To demonstrate the unique features of the cumulative background, we consider delayed vacuum decay during a first-order phase transition as the origin of primordial black holes. We show how the features of the cumulative background, such as the mass of the relevant light scalars and the peak frequencies, depend on the transition parameters. We also generate the cumulative background for a few benchmark cases to further illustrate our claim.
The Hellings-Downs (HD) curve plays a crucial role in the search for nanohertz gravitational waves (GWs) with pulsar timing arrays. We discuss the angular pattern of correlations for pulsar pairs within a celestial hemisphere. The hemisphere-averaged correlation curve depends upon the sky location of a compact GW source, such as a binary of supermassive black holes. If a single source is dominant, its sky location is the north pole of the hemisphere for which the variation in the hemisphere-averaged angular correlation becomes the largest.
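For reference, the HD correlation for a pulsar pair at angular separation $\zeta$ has a standard closed form, evaluated in the sketch below.

```python
# Hellings-Downs correlation for an isotropic GW background:
# Gamma(zeta) = (3/2) x ln x - x/4 + 1/2, with x = (1 - cos zeta)/2.
import numpy as np

def hellings_downs(zeta_rad):
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    x = np.where(x == 0.0, 1e-15, x)     # avoid log(0); x*log(x) -> 0
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

for deg in (0, 45, 90, 135, 180):
    print(deg, round(float(hellings_downs(np.radians(deg))), 4))
```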
Modeling the multiwavelength spectral energy distributions (SEDs) of blazars provides key insights into the underlying physical processes responsible for the emission. While SED modeling with self-consistent models is computationally demanding, it is essential for a comprehensive understanding of these astrophysical objects. We introduce a novel, efficient method for modeling the SEDs of blazars by means of a convolutional neural network (CNN). In this paper, we trained the CNN on a leptonic model that incorporates synchrotron and inverse-Compton emission, as well as self-consistent electron cooling and pair creation-annihilation processes. The CNN is capable of reproducing the radiative signatures of blazars with high accuracy. This approach significantly reduces computational time, thereby enabling real-time fitting to multi-wavelength datasets. As a demonstration, we used the trained CNN with MultiNest to fit the broadband SEDs of Mrk 421 and 1ES 1959+650, successfully obtaining their parameter posterior distributions. This novel framework for fitting the SEDs of blazars will be extended to incorporate more sophisticated models based on external Compton and hadronic scenarios, allowing for multi-messenger constraints in the analysis. The models will be made publicly available via a web interface, the Markarian Multiwavelength Datacenter, to facilitate self-consistent modeling of multi-messenger data from blazar observations.
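A minimal sketch of a CNN-style emulator in this spirit is shown below (in PyTorch); the architecture, layer sizes, and parameter names are illustrative assumptions, not the authors' network.

```python
# Toy emulator mapping physical parameters to a log-SED; assumed
# architecture for illustration only.
import torch
import torch.nn as nn

class SEDEmulator(nn.Module):
    def __init__(self, n_params=6, n_freq=256):
        super().__init__()
        self.n_freq = n_freq
        self.fc = nn.Linear(n_params, 64 * (n_freq // 4))
        self.deconv = nn.Sequential(
            nn.ReLU(),
            nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, theta):                  # theta: (batch, n_params)
        x = self.fc(theta).view(-1, 64, self.n_freq // 4)
        return self.deconv(x).squeeze(1)       # (batch, n_freq) log-flux

model = SEDEmulator()
theta = torch.randn(8, 6)                      # e.g. B, delta, indices, ...
print(model(theta).shape)                      # torch.Size([8, 256])
```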
Shocked winds of massive stars in young stellar clusters have been proposed as possible sites in which relativistic particles are accelerated. Electrons accelerated in such an environment are expected to efficiently Comptonize the optical radiation (from massive stars) and the infrared radiation (re-scattered by the dust within the cluster), producing GeV-TeV gamma-rays. We investigate the time-dependent process of acceleration, propagation, and radiation of electrons in the stellar wind of the massive star $\Theta^1$ Ori C within the Trapezium cluster. This cluster is located within the nearby Orion Nebula (M 42). We show that the gamma-ray emission expected from the Trapezium cluster is consistent with the present observations of the Orion Molecular Cloud by the Fermi-LAT telescope, provided that the efficiency of energy conversion from the stellar wind to relativistic electrons is very low, i.e. $\chi < 10^{-4}$. For such low efficiencies, the gamma-ray emission from electrons accelerated in the stellar wind of $\Theta^1$ Ori C will be only barely observable by future Cherenkov telescopes, e.g. the Cherenkov Telescope Array (CTA).
I identify a point-symmetric morphology composed of three pairs of ears (small lobes) in the X-ray images of the core-collapse supernova remnant (CCSNR) N63A and argue that this morphology supports the jittering jets explosion mechanism (JJEM). The two opposite ears in each of the three pairs of SNR N63A are not equal to each other, one being larger than the other. From the morphology of SNR N63A, I infer that this asymmetry is due to asymmetrical opposite jets at launching. Namely, the newly born neutron star that launches the jets that explode the star does so, in many cases, with one jet more powerful than its counter-jet. I propose that this asymmetry arises because the accretion disk that launches the jets has no time to fully relax during a jet-launching episode. This implies that if the disk is born with two unequal sides, as expected in the JJEM, then during a large fraction, or even all, of the jet-launching episode the two sides remain unequal. I also show that the magnetic reconnection timescale, which is about the timescale for the magnetic field to relax, is not much shorter than the jet-launching episode; therefore, the two sides of the accretion disk might have different magnetic structures. The unequal sides of the accretion disk launch two opposite jets with different energies.
Context: Some apparently quiescent supermassive black holes (BHs) at the centers of galaxies show quasi-periodic eruptions (QPEs) in the X-ray band, the nature of which is still unknown. A possible origin for the eruptions is an accretion disk; however, the properties of such disks are restricted by the recurrence timescales and durations of the flares. Aims: In this work we test the possibility that the known QPEs can be explained by accretion from a compact accretion disk with an outer radius $r_{\rm out}\sim 10-40 r_{\rm g}$, focusing on the particular object GSN 069. Methods: We run several 3D GRMHD simulations of thin and thick disks with the {\tt HARMPI} code and study how the initial disk parameters, such as thickness, magnetic field configuration, magnetization, and Kerr parameter, affect the observational properties of QPEs. Results: We show that accretion onto a slowly rotating BH through a small, thick accretion disk with an initially low plasma $\beta$ can explain the observed flare duration, the time between outbursts, and the lack of evidence for variable jet emission. In order to form such a disk, the accreting matter should have a low net angular momentum. A potential source of such low-angular-momentum matter with a quasi-periodic feeding mechanism might be a tight binary of wind-launching stars.
Neutrinos with large self-interactions, arising from the exchange of light scalars or vectors with mass $M_\phi \simeq 10\,{\rm MeV}$, can play a useful role in cosmology for structure formation and in solving the Hubble tension. It has been proposed that large self-interactions of neutrinos may change the observed properties of supernovae, such as the neutrino luminosity or the duration of the neutrino burst. In this paper, we study the gravitational wave memory signal arising from supernova neutrinos. Our results reveal that the memory signal for self-interacting neutrinos is weaker than that for free-streaming neutrinos in the high-frequency range. Implications for detecting and differentiating between such signals with planned space-borne detectors, DECIGO and BBO, are also discussed.
We investigate the impact of a mean-field model for the $\alpha\Omega$-dynamo potentially active in the post-merger phase of a binary neutron star coalescence. We do so by deriving equations for ideal general relativistic magnetohydrodynamics (GRMHD) with an additional $\alpha$-term, which closely resemble their Newtonian counterparts but remain compatible with standard numerical relativity simulations. We propose a heuristic dynamo closure relation for the magnetorotational-instability-driven turbulent dynamo in the outer layers of a differentially rotating magnetar remnant and its accretion disk. As a first demonstration, we apply this framework to the early stages of post-merger evolution ($\lesssim 50\, \rm ms$). We demonstrate that, depending on the efficacy of the dynamo action, magnetically driven outflows can be present, with their amount of baryon loading correlating with the magnetic field amplification. These outflows can also contain precursor flaring episodes before settling into a quasi-steady state. For the dynamo parameters explored in this work, we observe electromagnetic energy fluxes of up to $10^{50}\, \rm erg/s$, although larger amplification parameters will likely lead to stronger fluxes. Our results are consistent with the expectation that substantial dynamo amplification (either during or after the merger) may be necessary for neutron-star remnants to power short gamma-ray bursts or precursors thereof.
The ultrahigh energy range of neutrino physics (above $\sim 10^{7} \, \mathrm{GeV}$), as yet devoid of detections, is an open landscape with challenges to be met and discoveries to be made. Neutrino-nucleon cross sections in that range - with center-of-momentum energies $\sqrt{s} \gtrsim 4 \, \mathrm{TeV}$ - are powerful probes of unexplored phenomena. We present a simple and accurate model-independent framework to evaluate how well these cross sections can be measured for an unknown flux and generic detectors. We also demonstrate how to characterize and compare detector sensitivity. We show that cross sections can be measured to $\simeq ^{+65}_{-30}$% precision over $\sqrt{s} \simeq$ 4-140 TeV ($E_\nu = 10^7$-$10^{10}$ GeV) with modest energy and angular resolution and $\simeq 10$ events per energy decade. Many allowed novel-physics models (extra dimensions, leptoquarks, etc.) produce much larger effects. In the distant future, with $\simeq 100$ events at the highest energies, the precision would be $\simeq 15\%$, probing even QCD saturation effects.
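The quoted precisions are broadly consistent with simple Poisson counting statistics, as the back-of-the-envelope sketch below shows; the full framework additionally marginalizes over the unknown flux, which is part of why the quoted uncertainties are asymmetric and somewhat larger.

```python
# Poisson scaling: N detected events constrain a rate (and hence, for a
# known flux, the cross section) to roughly 1/sqrt(N).
import math

for n_events in (10, 100):
    print(f"N = {n_events:4d} -> ~{100 / math.sqrt(n_events):.0f}% statistical precision")
```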
It is possible that the strongest interactions between dark matter and the Standard Model occur via the neutrino sector. Unlike gamma rays and charged particles, neutrinos provide a unique avenue to probe for astrophysical sources of dark matter, since they arrive unimpeded and undeflected from their sources. Previously, we reported on annihilations of dark matter to neutrinos; here, we review constraints on the decay of dark matter into neutrinos over a range of dark matter masses from MeV to ZeV, compiling previously reported limits, exploring new electroweak corrections and computing constraints where none have been computed before. We examine the expected contributions to the neutrino flux at current and upcoming neutrino experiments as well as photons from electroweak emission expected at gamma-ray telescopes, leading to constraints on the dark matter decay lifetime, which ranges from $\tau \sim 1.2\times10^{21}$ s at 10~MeV to $1.5\times10^{29}$~s at 1~PeV.
The Einstein Telescope (ET) is a proposed third-generation, wide-band gravitational wave (GW) detector. Given its improved detection sensitivity in comparison to the second-generation detectors, it will be capable of exploring the Universe with GWs up to very high redshifts. In this paper, we present a population-independent method to infer the functional form of star formation rate density (SFR) for different populations of compact binaries originating in stars from Population (Pop) I+II and Pop III using ET as a single instrument. We use an algorithm to answer three major questions regarding the SFR of different populations of compact binaries. Specifically, these questions refer to the termination redshift of the formation of Pop III stars, the redshift at peak SFR, and the functional form of SFR at high redshift, all of which remain to be elucidated. We show that the reconstruction of SFR as a function of redshift for the different populations of compact binaries is independent of the time-delay distributions up to $z \sim 14,$ and that the accuracy of the reconstruction only strongly depends on this distribution at higher redshifts of $z\gtrsim 14$. We define the termination redshift for Pop III stars as the redshift where the SFR drops to 1\% of its peak value. In this analysis, we constrain the peak of the SFR as a function of redshift and show that ET as a single instrument can distinguish the termination redshifts of different SFRs for Pop III stars, which have a true separation of at least $\Delta z \sim 2$. The accurate estimation of the termination redshift depends on correctly modelling the tail of the time-delay distribution, which constitutes delay times of $\gtrsim 8$ Gyr.
The annihilation of TeV-scale leptophilic dark matter into electron-positron pairs (hereafter $e^+e^-$) will produce a sharp cutoff in the local cosmic-ray $e^+e^-$ spectrum at an energy matching the dark matter mass. At these high energies, $e^+e^-$ cool quickly due to synchrotron interactions with magnetic fields and inverse-Compton scattering with the interstellar radiation field. These energy losses are typically modelled as a continuous process. However, inverse-Compton scattering is a stochastic energy-loss process in which interactions are rare but catastrophic. We show that when inverse-Compton scattering is modelled as a stochastic process, the expected $e^+e^-$ flux from dark matter annihilation is a factor of $\sim$2 larger near the dark matter mass than in the continuous model. This greatly enhances the detectability of heavy dark matter annihilating to $e^+e^-$ final states.
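The key point can be seen with a toy Poisson calculation: if catastrophic scatterings occur at rate $1/\tau$, a fraction $e^{-t/\tau}$ of electrons suffers no scattering by time $t$ and remains near the injection energy, whereas the continuous model leaves none there. A minimal sketch:

```python
# Toy Monte Carlo: number of catastrophic scatterings per electron is
# Poisson distributed; electrons with zero scatterings keep their energy.
import numpy as np

rng = np.random.default_rng(4)
tau = 1.0                                    # mean time between scatterings
t = 2.0                                      # propagation time, same units
n_scat = rng.poisson(t / tau, size=100_000)  # scatterings per electron
frac_unscattered = np.mean(n_scat == 0)
print(frac_unscattered, "vs analytic", np.exp(-t / tau))
```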
Measurements of neutron star masses, radii, and tidal deformability have direct connections to nuclear physics via the equation of state (EoS), which for the cold, catalyzed matter in neutron star cores is commonly represented as the pressure as a function of energy density. Microscopic models with exotic degrees of freedom display nontrivial structure in the speed of sound ($c_s$) in the form of first-order phase transitions and bumps, oscillations, and plateaus in the case of crossovers and higher-order phase transitions. We present a procedure based on Gaussian processes to generate an ensemble of EoSs that include nontrivial features. Using a Bayesian analysis incorporating measurements from X-ray sources, gravitational wave observations, and perturbative QCD results, we show that these features are compatible with current constraints. We investigate the possibility of a global maximum in $c_s$ that occurs within the densities realized in neutron stars -- implying a softening of the EoS and possibly an exotic phase in the core of massive stars -- and find that such a global maximum is consistent with, but not required by, current constraints.
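A minimal sketch of the kind of Gaussian-process draw involved (with an assumed kernel and hyperparameters, and a logistic squashing to keep $0 < c_s^2 < 1$) is shown below; it is schematic, not the paper's construction.

```python
# Random speed-of-sound curves from a squared-exponential GP, mapped onto
# the causal interval (0, 1); grid and hyperparameters are placeholders.
import numpy as np

rng = np.random.default_rng(5)
log_rho = np.linspace(0.0, 1.2, 100)          # density grid (arbitrary)
ell, amp = 0.3, 1.5                           # correlation length, amplitude
K = amp**2 * np.exp(-0.5 * (log_rho[:, None] - log_rho[None, :])**2 / ell**2)
draws = rng.multivariate_normal(
    np.zeros(log_rho.size), K + 1e-10 * np.eye(log_rho.size), size=5)

cs2 = 1.0 / (1.0 + np.exp(-draws))            # logistic map onto (0, 1)
print(cs2.min(), cs2.max())                   # causality: 0 < cs^2 < 1
```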
Electron densities in different locations of our Galaxy are obtained in pulsar astronomy by dividing the dispersion measure by the distance of the pulsar from Earth. For such measurements to be reliable, these distances should be obtained by methods that are independent of the electron density model; the results of such measurements are used in the present article to derive plasma mass densities. Our main analysis is based on the idea that the pulsar measurements probe a completely ionized hydrogen plasma. We use the electron densities to derive the plasma mass densities and compare them with dark matter mass densities, which are found to be much larger; some factors that may reduce this difference are discussed. The properties of the low-density plasma are analyzed using the Saha equation.
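The underlying arithmetic, with illustrative numbers, is simple:

```python
# Mean electron density from dispersion measure and distance, and the
# corresponding mass density for fully ionized hydrogen (n_e * m_p).
M_P = 1.673e-24           # proton mass [g]

dm_pc_cm3 = 30.0          # example dispersion measure [pc cm^-3]
d_pc      = 1000.0        # example pulsar distance [pc]

n_e = dm_pc_cm3 / d_pc    # mean electron density [cm^-3]
rho = n_e * M_P           # plasma mass density [g cm^-3]
print(f"n_e = {n_e:.3f} cm^-3, rho = {rho:.2e} g cm^-3")
```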
Ultra-high-energy (UHE) cosmic neutrinos, with energies above 100 PeV, could finally be discovered in the near future. Measuring their flavor composition would reveal information about their production and propagation, but new techniques are needed for UHE neutrino telescopes to be capable of doing so. We address this by proposing a new way to measure the UHE neutrino flavor composition that does not rely on individual telescopes having flavor-identification capabilities. We manufacture flavor sensitivity from the joint detection by one telescope sensitive to all flavors -- the radio array of IceCube-Gen2 -- and one mostly sensitive to $\nu_\tau$ -- GRAND. From this limited flavor sensitivity, predominantly to $\nu_\tau$, and even under conservative choices of neutrino flux and detector size, we extract important insight. For astrophysics, we forecast meaningful constraints on the neutrino production mechanism; for fundamental physics, vastly improved constraints on Lorentz-invariance violation. These are the first measurement forecasts of the UHE $\nu_\tau$ content.
I examine the morphologies of the brightest planetary nebulae (PNe) in the Milky Way Galaxy and conclude that violent binary interaction processes eject the main nebulae of the brightest PNe. The typical morphologies of the brightest PNe are multipolar, namely, they have been shaped by two or more major jet-launching episodes in varying directions, and they possess small to medium departures from pure point symmetry. I discuss some scenarios, including a rapid onset of the common envelope evolution (CEE) and the merger of the companion, mainly a white dwarf, with the asymptotic giant branch core at the termination of the CEE. Some of these might be progenitors of Type Ia supernovae (SNe Ia), as I suggest for SNR G1.9+0.3, the youngest SN Ia in the Galaxy.
The search for multi-messenger signals of binary black hole (BBH) mergers is crucial to understanding the merger process of BBHs and the associated astrophysical environment. Considering BBH mergers occurring in active galactic nucleus (AGN) accretion disks, we focus on the accompanying high-energy neutrino production from the interaction between the jet launched by the post-merger remnant BH and disk materials. Particles can be accelerated by the shocks generated from the jet-disk interaction and subsequently interact with the disk gas and radiation to produce high-energy neutrinos through hadronic processes. We demonstrate that the identification of high-energy neutrino signals from BBH mergers in AGN disks is feasible. In addition, the joint BBH gravitational wave (GW) and neutrino detection rate is derived, which can be used to constrain the BBH merger rate and the accretion rate of the remnant BH based on future associated detections of GWs and neutrinos. To date, an upper limit on the BBH merger rate density in AGN disks of $R_0 \lesssim 3\,\rm Gpc^{-3} yr^{-1}$ is derived for the fiducial parameter values, based on the current null association of GWs and neutrinos.
LS V +44 17 is a persistent Be/X-ray binary (BeXRB) that displayed a bright, double-peaked period of X-ray activity in late 2022/early 2023. We present a radio monitoring campaign of this outburst using the Very Large Array. Radio emission was detected, but only during the second, X-ray brightest, peak, where the radio emission followed the rise and decay of the X-ray outburst. LS V +44 17 is therefore the third neutron star BeXRB with a radio counterpart. Similar to the other two systems (Swift J0243.6+6124 and 1A 0535+262), its X-ray and radio luminosity are correlated: we measure a power law slope $\beta = 1.25^{+0.64}_{-0.30}$ and a radio luminosity of $L_R = (1.6\pm0.2)\times10^{26}$ erg/s at a $0.5-10$ keV X-ray luminosity of $2\times10^{36}$ erg/s (i.e. $\sim 1\%$ $L_{\rm Edd}$). This correlation index is slightly steeper than measured for the other two sources, while its radio luminosity is higher. We discuss the origin of the radio emission, specifically in the context of jet launching. The enhanced radio brightness compared to the other two BeXRBs is the first evidence of scatter in the giant BeXRB outburst X-ray--radio correlation, similar to the scatter observed in sub-classes of low-mass X-ray binaries. While a universal explanation for such scatter is not known, we explore several options: we conclude that the three sources do not follow proposed scalings between jet power and neutron star spin or magnetic field, and instead briefly explore the effects that ambient stellar wind density may have on BeXRB jet luminosity.
We have imaged the entirety of eight (plus one partial) Milky Way-like satellite systems, a total of 42 (45) satellites, from the Satellites Around Galactic Analogs (SAGA) II catalog in both H$\alpha$ and HI with the Canada-France-Hawaii Telescope and the Jansky Very Large Array. In these eight systems we have identified four cases where a satellite appears to be currently undergoing ram pressure stripping (RPS) as its HI gas collides with the circumgalactic medium (CGM) of its host. We also see a clear suppression of gas fraction ($M_\mathrm{HI}/M_\ast$) with decreasing (projected) satellite--host separation; to our knowledge, the first time this has been observed in a sample of Milky Way-like systems. Comparisons to the Auriga, APOSTLE, and TNG-50 cosmological zoom-in simulations show consistent global behavior, but they systematically under-predict gas fractions across all satellites by roughly 0.5 dex. Using a simplistic RPS model we estimate the average peak CGM density that satellites in these systems have encountered to be $\log \rho_\mathrm{cgm}/\mathrm{g\,cm^{-3}} \approx -27.3$. Furthermore, we see tentative evidence that these satellites are following a specific star formation rate-to-gas fraction relation that is distinct from field galaxies. Finally, we detect one new gas-rich satellite in the UGC903 system with an optical size and surface brightness meeting the standard criteria to be considered an ultra-diffuse galaxy.
Extreme emission line galaxies (EELGs), in which nebular emission contributes 30-40% of the flux in certain photometric bands, are ubiquitous in the early universe (z>6). We utilise deep NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) to investigate the properties of companion galaxies (projected distance <40 kpc, |dv|<10,000 km/s) around EELGs at z~3. Tests with the TNG100 simulation reveal that nearly all galaxies at z=3 will merge by z=0 with at least one companion galaxy selected using similar parameters. The median mass ratio of the most massive companion, and the total mass ratio of all companions, around EELGs are more than 10 times higher than in the control sample. Even when compared with control samples matched in stellar mass, or in stellar mass plus specific SFR, EELGs have three-to-five times higher mass ratios for the brightest companion and for the total of all companions. Our measurements suggest that EELGs are more likely to be experiencing strong interactions or undergoing major mergers, irrespective of their stellar mass or specific SFR. We suspect that gas cooling induced by strong interactions and/or major mergers could be triggering the extreme emission lines, and the increased merger rate might be responsible for the over-abundance of EELGs at z>6.
We combine deep imaging data from the CEERS early release JWST survey and HST imaging from CANDELS to examine the size-mass relation of star-forming galaxies and the morphology-quenching relation at stellar masses $\textrm{M}_{\star} \geq 10^{9.5} \ \textrm{M}_{\odot}$ over the redshift range $0.5 < z < 5.5$. In this study with a sample of 2,450 galaxies, we separate star-forming and quiescent galaxies based on their star-formation activity and confirm that star-forming and quiescent galaxies have different morphologies out to $z=5.5$, extending the results of earlier studies out to higher redshifts. We find that star-forming and quiescent galaxies have typical S\'{e}rsic indices of $n\sim1.3$ and $n\sim4.3$, respectively. Focusing on star-forming galaxies, we find that the slope of the size-mass relation is nearly constant with redshift, as was found previously, but shows a modest increase at $z \sim 4.2$. The intercept in the size-mass relation declines out to $z=5.5$ at rates that are similar to what earlier studies found. The intrinsic scatter in the size-mass relation is relatively constant out to $z=5.5$.
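For intuition on the quoted indices, the Sérsic profile is $I(r) = I_e \exp\{-b_n[(r/R_e)^{1/n} - 1]\}$; the sketch below evaluates it using the common approximation $b_n \approx 2n - 1/3$ (adequate for $n \gtrsim 1$).

```python
# Sérsic surface-brightness profile, evaluated at the two typical indices
# quoted above; radii and normalization are arbitrary.
import numpy as np

def sersic(r, r_e, n, I_e=1.0):
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 5.0, 5)
print(sersic(r, r_e=1.0, n=1.3))   # disk-like, star-forming
print(sersic(r, r_e=1.0, n=4.3))   # spheroid-like, quiescent
```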
The star formation rate (SFR) of galaxies can change due to interactions between galaxies, stellar feedback ejection of gas into the circumgalactic medium, and energy injection from accretion onto black holes. However, it is not clear which of these processes dominantly alters the formation of stars within galaxies. Johnson et al. (2018) reported the discovery of large gaseous nebulae in the intragroup medium of a galaxy group housing QSO PKS0405$-$123 and hypothesized they were created by galaxy interactions. We identify a sample of 30 group member galaxies at z$\sim$0.57 from the VLT/MUSE observations of the field and calculate their [OII]$\lambda$$\lambda$3727,3729 SFRs in order to investigate whether the QSO and nebulae have affected the SFRs of the surrounding galaxies. We find that star formation is more prevalent in galaxies within the nebulae, signifying galaxy interactions are fueling higher SFRs.
We report the first ground-based detection of the water line p-H2O(211-202) at 752.033 GHz in three z < 0.08 ultra-luminous infrared galaxies (ULIRGs): IRAS 06035-7102, IRAS 17207-0014 and IRAS 09022-3615. Using the Atacama Pathfinder EXperiment (APEX), with its Swedish-ESO PI Instrument for APEX (SEPIA) band-9 receiver, we detect this H2O line with overall signal-to-noise ratios of 8-10 in all three galaxies. Notably, this is the first detection of this line in IRAS 06035-7102. Our new APEX-measured fluxes, between 145 and 705 Jy km s$^{-1}$, are compared with previous values taken with the Herschel SPIRE FTS. We highlight the great capability of APEX to resolve the H2O line profiles at high spectral resolution while also improving the significance of the detection by a factor of two within moderate integration times. In exploring the correlation between p-H2O(211-202) and the total infrared luminosity, our galaxies are found to follow the trend at the bright end of the local ULIRG distribution. The p-H2O(211-202) line spectra are compared to the mid-J CO and HCN spectra and to the dust continuum previously observed with ALMA. In the complex interacting system IRAS 09022-3615, the profile of the water emission line is offset in velocity with respect to the ALMA CO(J = 4 - 3) emission. For IRAS 17207-0014 and IRAS 06035-7102, the profiles of the water and CO lines are spectroscopically aligned. This pilot study demonstrates the feasibility of conducting ground-based high-frequency observations of this key water line, opening the possibility of detailed follow-up campaigns to tackle its nature.
Photometric stellar surveys now cover a large fraction of the sky, probe to fainter magnitudes than large-scale spectroscopic studies, and are relatively free from the target-selection biases often associated with such studies. Photometric-metallicity estimates that include narrow/medium-band filters can achieve accuracy and precision comparable to existing low- and medium-resolution spectroscopic surveys such as SDSS/SEGUE and LAMOST, for metallicities as low as [Fe/H] $\sim -3.5$ to $-4.0$. Here we report on an effort to identify likely members of the Galactic disk system among the Very Metal-Poor (VMP; [Fe/H] $\leq -2$) and Extremely Metal-Poor (EMP; [Fe/H] $\leq -3$) stars. Our analysis is based on a sample of some 11.5 million stars with full space motions selected from the SkyMapper Southern Survey (SMSS) and the Stellar Abundance and Galactic Evolution Survey (SAGES). After applying a number of quality cuts, designed to obtain the best available metallicity and dynamical estimates, we analyze a total of about 7.74 million stars in the combined SMSS/SAGES sample. We employ two techniques which, depending on the method, identify between 5,878 and 7,600 VMP stars (19% to 25% of all VMP stars) and between 345 and 399 EMP stars (35% to 40% of all EMP stars) that appear to be members of the Galactic disk system on highly prograde orbits (v$_{\phi} > 150$ km s$^{-1}$), the majority of which have low orbital eccentricities (ecc $\le 0.4$). The large fractions of VMP/EMP stars associated with the MW disk system strongly suggest the presence of an early-forming ``primordial'' disk.
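Schematically, the kinematic selection quoted above amounts to cuts like the following; the column names and the catalogue itself are stand-ins, while the thresholds are those quoted in the text.

```python
# Mock version of the VMP disk-membership selection: metallicity cut plus
# prograde, low-eccentricity orbit cuts on a placeholder catalogue.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
cat = pd.DataFrame({
    "feh":  rng.normal(-2.2, 0.6, 10_000),   # photometric [Fe/H]
    "vphi": rng.normal(120, 80, 10_000),     # azimuthal velocity [km/s]
    "ecc":  rng.uniform(0, 1, 10_000),       # orbital eccentricity
})

vmp  = cat[cat.feh <= -2.0]
disk = vmp[(vmp.vphi > 150) & (vmp.ecc <= 0.4)]   # prograde, low-ecc
print(f"{len(disk)} of {len(vmp)} VMP stars on disk-like orbits")
```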
Multiwavelength dust continuum and polarization observations arising from self-scattering have been used to investigate grain sizes in young disks. However, the polarization from self-scattering is low in face-on optically thick disks, which puts some of the size constraints from polarization on hold, particularly for the younger and more massive disks. The 1.3 mm emission detected toward the hot ($\gtrsim$400 K) Class 0 disk IRAS 16293-2422 B has been attributed to self-scattering, predicting grain sizes between 200 and 2000 $\mu$m. We investigate the effects of grain size on the resultant flux and polarization fractions from self-scattering using a hot and massive Class 0 disk model and compare with observations. We compared new and archival high-resolution observations between 1.3 and 18 mm with a set of synthetic models. We have developed a new public tool, called Synthesizer, to automate this process. This is an easy-to-use program to generate synthetic observations from numerical simulations. Optical depths range from 130 at 1.3 mm to 2 at 18 mm. Predictions from populations with significant grain growth, including millimetric grains, are comparable to the observations at all wavelengths. The polarization fraction produced by self-scattering reaches a maximum of $\sim$0.1% at 1.3 mm for a maximum grain size of 100 $\mu$m, an order of magnitude lower than that observed with ALMA. From the comparison of Stokes I fluxes, we conclude that significant grain growth could be present in the young Class 0 disk IRAS 16293 B, particularly in the inner hot region ($<10$ au, $T>$ 300 K) where refractory organics evaporate. The polarization produced by self-scattering in our model is not high enough to explain the observations at 1.3 and 7 mm, and effects such as dichroic extinction or polarization reversal of elongated aligned grains remain other possible but untested scenarios.
AI and deep learning techniques are beginning to play an increasing role in astronomy as a necessary tool for dealing with the data avalanche. Here we describe an application for finding resolved Planetary Nebulae (PNe) in crowded, wide-field, narrow-band H-alpha survey imagery in the Galactic plane. PNe are important for studying the late stages of stellar evolution of low- to intermediate-mass stars. However, the ~3800 confirmed Galactic PNe fall far short of the numbers expected. Traditional visual searching for resolved PNe is time-consuming due to the large data size and areal coverage of modern astronomical surveys, especially those taken in narrow-band filters highlighting emission nebulae. To test and facilitate more objective, reproducible, efficient and reliable trawls for PN candidates, we have developed a new deep learning algorithm. In this paper, we applied the algorithm to several H-alpha digital surveys (e.g. IPHAS and VPHAS+). The training and validation dataset was built with true PNe from the HASH database. After transfer learning, the algorithm was applied to the VPHAS+ survey. We examined 979 out of 2284 survey fields, each covering $1 \times 1$ deg$^2$. With a sample of 454 PNe from IPHAS as our validation set, our algorithm correctly identified 444 of these objects (97.8%), with only 16 explicable 'false' positives. Our model returned ~20,000 detections, including 2637 known PNe and many other kinds of catalogued non-PNe such as HII regions. A total of 815 new high-quality PN candidates were found, 31 of which were selected as top-quality targets for subsequent optical spectroscopic follow-up. Representative preliminary confirmatory spectroscopy results are presented here to demonstrate the effectiveness of our techniques, with full details to be given in Paper II.
The tidal Love number quantifies how much a star deforms in the presence of an external gravitational potential and depends on the star's internal structure. In this work, we investigate two significant aspects of the tidal Love number: (i) the influence of the stellar polytropic index on the tidal Love number, and (ii) how the tidal Love number affects the pericenter shift of S-stars near Sgr A*, an important probe for strong-field tests of gravitational theories. We consider S-stars orbiting Sgr A* at pericenter distances of 45 au to 500 au, well below the S-2 orbit. The S-stars have polytropic indices n = 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, and 4.5, and eccentricity e = 0.9, with orbits inclined at $i = 90^{\circ}$. The tidal Love number is estimated for multipole moments $l=2$ and $l=3$. We find that the tidal Love number decreases as the polytropic index increases, and that the tidal Love number for the multipole moment $l=2$ dominates over that for $l=3$. The tidal distortion effect also causes a greater pericenter shift for compact-orbit S-stars with lower polytropic indices and for the tidal Love number with multipole moment $l=2$. The estimated results offer relevant insights for testing general relativity and its alternatives in the vicinity of Sgr A*.
During the early stages of galaxy evolution, a significant fraction of galaxies undergo a transitional phase between the "blue nugget" systems, which arise from the compaction of large, active star-forming disks, and the "red nuggets", which are red and passive compact galaxies. These objects are typically only observable with space telescopes, and detailed studies of their size, mass, and stellar population parameters have been conducted on relatively small samples. Strong gravitational lensing can offer a new opportunity to study them in detail, even with ground-based observations. In this study, we present the first sample of six \textit{bona fide} strongly lensed post-blue nugget (pBN) galaxies, discovered in the Kilo Degree Survey (KiDS). By using the lensing-magnified luminosity from optical and near-infrared bands, we have derived robust structural and stellar population properties of the multiple images of the background sources. The pBN galaxies have very small sizes ($<1.3$ kpc), high mass density inside 1 kpc ($\log \Sigma_1 /M_{\odot} \mathrm{kpc}^{-2}>9.3$), and low specific star formation rates ($\log \mathrm{sSFR/Gyr^{-1}}\lesssim0.5$), which places them between the blue and red nugget phases. The size-mass and $\Sigma_1$-mass relations of this sample are consistent with those of the red nuggets, while their sSFR is close to the lower end of compact star-forming blue nugget systems at the same redshift, suggesting a clear evolutionary link between them.
We study the magnetic field structures in six giant filaments associated with the spiral arms of the Milky Way by applying the Velocity Gradient Technique (VGT) to 13CO spectroscopic data from the GRS, FUGIN, and SEDIGISM surveys. Compared to polarized dust emission, the VGT allows us to separate the foreground and background using the velocity information, from which the orientation of the magnetic field can be reliably determined. We find that in most cases the magnetic fields stay aligned with the filament bodies, which are parallel to the disk midplane. Among these, G29, G47, and G51 exhibit smooth magnetic fields, while G24, G339, and G349 exhibit discontinuities. The fact that most filaments have magnetic fields aligned with the Galactic disk midplane suggests that Galactic shear can be responsible for shaping the filaments. The fact that the magnetic field can stay regular at the resolution of our analysis (<= 10 pc), where the turbulence crossing time is short compared to the shear time, suggests that turbulent motion cannot effectively disrupt the regular orientation of the magnetic field. The discontinuities found in some filaments can be caused by processes including filament reassembly, gravitational collapse, and stellar feedback.
This paper explores the application of machine learning methods for classifying astronomical sources using photometric data, including normal and emission line galaxies (ELGs; star-forming, starburst, AGN, broad-line), quasars, and stars. We utilized samples from Sloan Digital Sky Survey (SDSS) Data Release 17 (DR17) and the ALLWISE catalog, which contain spectroscopically labeled sources from SDSS. Our methodology comprises two parts. First, we conducted experiments, including three-class, four-class, and seven-class classifications, employing the Random Forest (RF) algorithm. This phase aimed to achieve optimal performance with balanced datasets. In the second part, we trained various machine learning methods, such as $k$-nearest neighbors (KNN), RF, XGBoost (XGB), voting, and artificial neural networks (ANN), using all available data, based on the promising results from the first phase. Our results highlight the effectiveness of combining optical and infrared features, which yields the best performance across all classifiers. Specifically, in the three-class experiment, the RF and XGB algorithms achieved identical average F1 scores of 98.93 per cent on both balanced and unbalanced datasets. In the seven-class experiment, our average F1 score was 73.57 per cent. Using the XGB method in the four-class experiment, we achieved F1 scores of 87.9 per cent for normal galaxies (NGs), 81.5 per cent for ELGs, 99.1 per cent for stars, and 98.5 per cent for quasars (QSOs). Unlike classical methods based on time-consuming spectroscopy, our experiments demonstrate the feasibility of using automated algorithms on carefully classified photometric data. With more data and ample training samples, detailed photometric classification becomes possible, aiding in the selection of candidates for follow-up observations.
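As an illustration of the kind of pipeline described above, the sketch below trains a Random Forest on combined optical and infrared colors. The catalog file and column names are hypothetical placeholders; the paper's exact feature set and tuning are not reproduced here.

```python
# Minimal photometric-classification sketch: SDSS optical (u,g,r,i,z) plus
# WISE infrared (W1,W2) magnitudes with spectroscopic labels (assumed columns).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("sdss_allwise_labeled.csv")   # hypothetical cross-matched catalog

# Colors combining optical and infrared bands are the usual feature choice.
df["u_g"], df["g_r"] = df["u"] - df["g"], df["g"] - df["r"]
df["r_i"], df["i_z"] = df["r"] - df["i"], df["i"] - df["z"]
df["w1_w2"], df["r_w1"] = df["W1"] - df["W2"], df["r"] - df["W1"]
features = ["u_g", "g_r", "r_i", "i_z", "w1_w2", "r_w1"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.2, stratify=df["label"], random_state=0)

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("macro-averaged F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```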
With the unprecedented increase in the number of known star clusters, quick and modern tools are needed for their analysis. In this work, we develop an artificial neural network trained on synthetic clusters to estimate the age, metallicity, extinction, and distance of $Gaia$ open clusters. We implement a novel technique to extract features from the colour-magnitude diagram of clusters by means of the QuadTree tool, and we adopt a multi-band approach. We obtain reliable parameters for $\sim 5400$ clusters. We demonstrate the effectiveness of our methodology in accurately determining crucial parameters of $Gaia$ open clusters by performing a comprehensive scientific validation. In particular, our analysis reproduces the Galactic metallicity gradient as observed by high-resolution spectroscopic surveys, demonstrating that our method reliably extracts information on metallicity from the colour-magnitude diagrams (CMDs) of stellar clusters. For the sample of clusters studied, we find an intriguing systematic shift toward older ages compared to previous analyses in the literature. This work introduces a novel approach to feature extraction using a QuadTree algorithm, effectively tracing sequences in CMDs despite photometric errors and outliers. The adoption of ANNs, rather than Convolutional Neural Networks, retains the full positional information and improves performance, while also demonstrating the potential for deriving cluster parameters from the simultaneous analysis of multiple photometric bands, beneficial for upcoming telescopes like the Vera C. Rubin Observatory. The implementation of ANN tools with robust isochrone-fitting techniques could provide further improvements in the quest for open cluster parameters.
Accurate knowledge of the interstellar medium (ISM) at high Galactic latitudes is crucial for future cosmic microwave background (CMB) polarization experiments, because extinction, albeit low, is a foreground larger than the anticipated signal in these regions. We develop a Bayesian model to identify a region of the Hertzsprung-Russell (HR) diagram and an associated dataset suited to constrain single-star extinction accurately at high Galactic latitudes. Using photometry from Gaia, 2MASS and ALLWISE, parallax from Gaia, and stellar parameters derived from the Gaia low-resolution BP/RP (XP) spectra as input data, we employ nested sampling to fit the model to the data and analyse samples from the extinction posterior. Charting small variations in extinction is complex due to both systematic errors and degeneracies between extinction and other stellar parameters. The systematics can be minimised by restricting our data to a region of the HR diagram where the stellar models are most accurate. Moreover, the degeneracies can be significantly reduced by including spectroscopic estimates of the effective temperature as data. We show that underestimating the measurement error on the data is detrimental to recovering an accurate extinction distribution. We demonstrate that a full posterior solution is necessary to understand the extinction parameter and to capture fine variations in the ISM. However, by using only the mean extinction and a prior assumption of spatial correlation, we can produce a dust map similar to other benchmark maps.
The universality or non-universality of the initial mass function (IMF) has significant implications for determining star formation rates and star formation histories from photometric properties of stellar populations. We reexamine whether the IMF is deficient in high-mass stars (top-light) in the low-density environment of the outer disk of M83 and constrain the shape of the IMF therein. Using archival Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) and near ultraviolet (NUV) data and new deep OmegaCAM narrowband H$\alpha$ imaging, we constructed a catalog of FUV-selected objects in the outer disk of M83. We counted H$\alpha$-bright clusters and clusters that are blue in FUV$-$NUV in the catalog, measured the maximum flux ratio $F_{\mathrm{H}\alpha}/f_{\lambda \mathrm{FUV}}$ among the clusters, and measured the total flux ratio $\Sigma F_{\mathrm{H}\alpha}/\Sigma f_{\lambda \mathrm{FUV}}$ over the catalog. We then compared these measurements to predictions from stellar population synthesis models made with a standard Salpeter IMF, truncated IMFs, and steep IMFs. We also investigated the effect of varying the assumed internal extinction on our results. We are not able to reproduce our observations with models using the standard Salpeter IMF or the truncated IMFs. It is only when assuming an average internal extinction of $0.10 < A_{\mathrm{V}} < 0.15$ in the outer disk stellar clusters that models with steep IMFs ($\alpha > 3.1$) simultaneously reproduce the observed cluster counts, the maximum observed $F_{\mathrm{H}\alpha}/f_{\lambda \mathrm{FUV}}$, and the observed $\Sigma F_{\mathrm{H}\alpha}/\Sigma f_{\lambda \mathrm{FUV}}$. Our results support a non-universal IMF that is deficient in high-mass stars in low-density environments.
We present CO(2-1) and $^{13}$CO(2-1) maps of the Cygnus-X molecular cloud complex using the 10m Heinrich Hertz Submillimeter Telescope (SMT). The maps cover the southern portion of the complex which is strongly impacted by the feedback from the Cygnus OB2 association. Combining CO(1-0) and $^{13}$CO(1-0) maps from the Nobeyama 45m Cygnus-X CO Survey, we carry out a multi-transition molecular line analysis with RADEX and derive the volume density of velocity-coherent gas components. We select those components with a column density in the power-law tail part of the column density probability distribution function (N-PDF) and assemble their volume density into a volume density PDF ($\rho$-PDF). The $\rho$-PDF exhibits a power-law shape in the range of 10$^{4.5}$ cm$^{-3}$ $\lesssim n_{\rm H_2} \lesssim$ 10$^{5.5}$ cm$^{-3}$ with a fitted slope of $\alpha = -1.12 \pm 0.05$. The slope is shallower than what is predicted by simulations of rotationally supported structures or those undergoing gravitational collapse. Applying the same analysis to synthetic observations with feedback may help identify the cause of the shallow slope. The $\rho$-PDF provides another useful benchmark for testing models of molecular cloud formation and evolution.
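The power-law tail fit described above can be illustrated in a few lines of numpy: histogram the component densities in log space and fit a straight line to the log-log PDF over the quoted density range. The input file is a hypothetical placeholder.

```python
# Fit a power-law slope to the high-density tail of a volume density PDF,
# analogous to the alpha = -1.12 +/- 0.05 value quoted above (illustrative).
import numpy as np

n_H2 = np.loadtxt("component_densities.txt")   # hypothetical RADEX-derived n_H2 [cm^-3]

# PDF per dex: counts / (N * bin width in log10 density).
bins = np.logspace(3.5, 6.5, 31)
hist, edges = np.histogram(n_H2, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])
pdf = hist / (len(n_H2) * np.diff(np.log10(bins)))

# Fit log p = alpha * log n + const over the quoted 10^4.5 - 10^5.5 cm^-3 range.
sel = (centers > 10**4.5) & (centers < 10**5.5) & (pdf > 0)
alpha, const = np.polyfit(np.log10(centers[sel]), np.log10(pdf[sel]), 1)
print(f"fitted rho-PDF slope alpha = {alpha:.2f}")
```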
The vastness of a clear night sky evokes curiosity about the distance to the stars. There are two primary methods for estimating stellar distances: parallax and luminosity. In this study, we present a new analysis revealing a noteworthy discrepancy between these two methods. Thanks to the accuracy of Gaia, parallaxes can be converted directly into distances. In contrast, luminosity distances require, apart from the determination of the apparent and absolute brightness of a star, the reddening value that allows a correction for interstellar extinction. Using 47 stars with non-peculiar reddening curves from the high-quality sample, we find that the luminosity distance overestimates the parallactic distance for most (80%) of these stars. This puzzling discrepancy can only be removed by incorporating a new population of large dust grains, so-called dark dust, into our model, which respects contemporary constraints on interstellar dust and is updated, for the first time, to account for absolute reddening. The model provides a visual extinction which unifies the conflicting distances. Another far-reaching consequence of the flat absorption and scattering properties of dark dust is that it broadens the light curves of SNe Ia, which serve as a measure of the quantity of dark energy.
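The two distance estimates being compared can be made concrete with a short worked example, based on the standard distance modulus with extinction, $m - M = 5\log_{10}(d/10\,{\rm pc}) + A_V$. All numbers below are illustrative, not taken from the paper.

```python
# Parallactic vs. luminosity distance for one illustrative star. If the true
# extinction is larger than adopted (e.g. a missed 'dark dust' component),
# the luminosity distance comes out too large, as for most stars above.
plx_mas = 2.857              # parallax [mas]  ->  d ~ 350 pc
m_V, M_V = 9.0, 0.5          # apparent and absolute V magnitudes (made up)

d_parallax = 1000.0 / plx_mas
print(f"parallactic distance: {d_parallax:.0f} pc")

for A_V in (0.40, 0.78):     # adopted vs. dark-dust-corrected extinction (made up)
    d_lum = 10 ** ((m_V - M_V - A_V + 5.0) / 5.0)
    print(f"A_V = {A_V:.2f}: luminosity distance = {d_lum:.0f} pc")
# Output: 350 pc from parallax; 417 pc with the low A_V, reconciled to ~350 pc
# once the extra extinction is included.
```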
We present an investigation into the first 500 Myr of galaxy evolution from the Cosmic Evolution Early Release Science (CEERS) survey. CEERS, one of 13 JWST ERS programs, targets galaxy formation from z~0.5 to z>10 using several imaging and spectroscopic modes. We make use of the first epoch of CEERS NIRCam imaging, spanning 35.5 sq. arcmin, to search for candidate galaxies at z>9. Following a detailed data reduction process implementing several custom steps to produce high-quality reduced images, we perform multi-band photometry across seven NIRCam broad and medium-band (and six Hubble broadband) filters focusing on robust colors and accurate total fluxes. We measure photometric redshifts and devise a robust set of selection criteria to identify a sample of 26 galaxy candidates at z~9-16. These objects are compact with a median half-light radius of ~0.5 kpc. We present an early estimate of the z~11 rest-frame ultraviolet (UV) luminosity function, finding that the number density of galaxies at M_UV ~ -20 appears to evolve very little from z~9 to z~11. We also find that the abundance (surface density [arcmin^-2]) of our candidates exceeds nearly all theoretical predictions. We explore potential implications, including that at z>10 star formation may be dominated by top-heavy initial mass functions, which would result in an increased ratio of UV light per unit halo mass, though a complete lack of dust attenuation and/or changing star-formation physics may also play a role. While spectroscopic confirmation of these sources is urgently required, our results suggest that the deeper views to come with JWST should yield prolific samples of ultra-high-redshift galaxies with which to further explore these conclusions.
Strong lensing offers a precious opportunity for studying the formation and early evolution of super star clusters, which are rare in our cosmic backyard. The Sunburst Arc, a lensed Cosmic Noon galaxy, hosts a young super star cluster with escaping Lyman continuum radiation. Analyzing archival HST images and emission line data from VLT/MUSE and X-shooter, we construct a physical model for the cluster and its surrounding photoionized nebula. We confirm that the cluster is $\lesssim4\,$Myr old and extremely massive ($M_\star \sim 10^7\,M_\odot$), yet has a central component as compact as several parsecs, and we find a gas-phase metallicity $Z=(0.22\pm0.03)\,Z_\odot$. The cluster is surrounded by $\gtrsim 10^5\,M_\odot$ of dense clouds that have been pressurized to $P\sim 10^9\,{\rm K}\,{\rm cm}^{-3}$, perhaps by stellar radiation, within about ten parsecs of the cluster. These clouds should have large neutral columns $N_{\rm HI} > 10^{22.5}\,{\rm cm}^{-2}$ to survive rapid ejection by radiation pressure. The clouds are likely dusty, as they show gas-phase depletion of silicon, and may be conducive to secondary star formation if $N_{\rm HI} > 10^{24}\,{\rm cm}^{-2}$ or if they sink further toward the cluster center. Detecting strong ${\rm N III]}\lambda\lambda$1750,1752, we infer heavy nitrogen enrichment, $\log({\rm N/O})=-0.21^{+0.10}_{-0.11}$. This requires efficiently retaining $\gtrsim 500\,M_\odot$ of nitrogen in the high-pressure clouds from massive stars heavier than $60\,M_\odot$ over the first 4 Myr. We suggest that the high-pressure clouds originate from partial or complete condensation of slow massive-star ejecta, which may have important implications for the puzzle of multiple stellar populations in globular clusters.
We present CO dynamical estimates of the masses of the supermassive black holes (SMBHs) in three nearby early-type galaxies: NGC0612, NGC1574 and NGC4261. Our analysis is based on Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 3-6 observations of the $^{12}$CO(2-1) emission line with spatial resolutions of $14-58$ pc ($0.01"-0.26"$). We detect disc-like CO distributions on scales from $\lesssim200$ pc (NGC1574 and NGC4261) to $\approx10$ kpc (NGC0612). In NGC0612 and NGC1574 the bulk of the gas is regularly rotating. The data also provide evidence for the presence of a massive dark object at the centre of NGC1574, allowing us to obtain the first measurement of its mass, $M_{\rm BH}=(1.0\pm0.2)\times10^{8}$ M$_{\odot}$ (1$\sigma$ uncertainty). In NGC4261, the CO kinematics are clearly dominated by the SMBH's gravitational influence, allowing us to determine an accurate black hole mass of $(1.62{\pm 0.04})\times10^{9}$ M$_{\odot}$ ($1\sigma$ uncertainty). This is fully consistent with a previous CO dynamical estimate obtained using a different modelling technique. Signs of non-circular gas motions (likely an outflow) are also identified in the inner regions of NGC4261. In NGC0612, we are only able to obtain a (conservative) upper limit of $M_{\rm BH}\lesssim3.2\times10^{9}$ M$_{\odot}$. This is likely due to the presence of a central CO hole (with a radius much larger than that of the SMBH sphere of influence), combined with the inability to obtain a robust prediction of the CO velocity curve. The three SMBH mass estimates are overall in agreement with predictions from the $M_{\rm BH}-\sigma_{\star}$ relation.
Rapidly outflowing cold H I gas is ubiquitously observed to be co-spatial with a hot phase in galactic winds, yet the ablation time of cold gas by the hot phase should be much shorter than the acceleration time. Previous work showed that efficient radiative cooling enables clouds to survive in hot galactic winds under certain conditions; magnetic fields can do the same, even in purely adiabatic simulations, for sufficiently small density contrasts between wind and cloud. In this work, we study the interplay between radiative cooling and magnetic draping via three-dimensional radiative magnetohydrodynamic simulations with perpendicular ambient fields and tangled internal cloud fields. We find that magnetic fields decrease the critical cloud radius for survival by two orders of magnitude (i.e., to sub-pc scales) in the strongly magnetized ($\beta_{\rm wind}=1$) case. Our results show that magnetic fields (i) accelerate cloud entrainment through magnetic draping, (ii) can cause faster cloud destruction in cases of inefficient radiative cooling, (iii) do not significantly suppress mass growth for efficiently cooling clouds, and, crucially, in combination with radiative cooling (iv) reduce the average overdensity by providing non-thermal pressure support of the cold gas. This substantially reduces the acceleration time compared to the destruction time (more than draping alone would), enhancing cloud survival. Our results may help to explain the tiny, rapidly outflowing cold gas clouds observed in galactic winds and the consequent high covering fraction of cold material in galactic halos.
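A rough version of the survival criterion behind the "critical cloud radius" above can be written down from the cloud-crushing time, $t_{\rm cc} = \sqrt{\chi}\,r_{\rm cl}/v_{\rm wind}$: a cloud survives roughly when the cooling time of the mixed phase is shorter than $t_{\rm cc}$. The sketch below uses this common back-of-envelope argument with illustrative numbers; it is not the paper's simulation setup.

```python
# Back-of-envelope critical cloud radius: set t_cc = t_cool,mix and solve for r.
import numpy as np

pc, Myr = 3.086e18, 3.156e13         # cgs conversions [cm], [s]
chi = 100.0                          # density contrast rho_cloud / rho_wind
v_wind = 1.0e8                       # wind speed, 1000 km/s in cm/s
t_cool_mix = 0.1 * Myr               # assumed cooling time of the mixed phase

r_crit = v_wind * t_cool_mix / (np.sqrt(chi) * pc)
print(f"critical cloud radius ~ {r_crit:.1f} pc")

t_cc = np.sqrt(chi) * (r_crit * pc) / v_wind / Myr
print(f"cloud-crushing time at r_crit ~ {t_cc:.2f} Myr (= t_cool,mix by construction)")
# The simulations above find magnetic support lowers this critical radius by
# ~2 dex in the beta_wind = 1 case, i.e. to sub-pc scales.
```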
We present dynamical scaling relations, combined with stellar population properties, for a subsample of about 6000 nearby galaxies with the most reliable dynamical models extracted from the full MaNGA sample of 10K galaxies. We show that the inclination-corrected mass plane (MP) for both early-type galaxies (ETGs) and late-type galaxies (LTGs), which links dynamical mass, projected half-light radius $R_{\rm e}$, and the second stellar velocity moment $\sigma_{\rm e}$ within $R_{\rm e}$, satisfies the virial theorem and is even tighter than the uncorrected one. We find a clear parabolic relation between $\lg(M/L)_{\rm e}$, the total mass-to-light ratio within a sphere of radius $R_{\rm e}$, and $\lg\sigma_{\rm e}$, with the $M/L$ increasing with $\sigma_{\rm e}$ and for older stellar populations. However, the relation for ETGs is linear, and that for the youngest galaxies is constant. We confirm and improve the relation between the average logarithmic total density slope $\overline{\gamma_{\rm T}}$ and $\sigma_{\rm e}$: $\overline{\gamma_{\rm T}}$ becomes steeper with increasing $\sigma_{\rm e}$ until $\lg(\sigma_{\rm e}/{\rm km\,s^{-1}})\approx 2.2$ and then remains constant around $\overline{\gamma_{\rm T}}\approx -2.2$. The $\overline{\gamma_{\rm T}}-\sigma_{\rm e}$ variation is larger for LTGs than for ETGs. At fixed $\sigma_{\rm e}$, the total density profiles steepen with galaxy age and for ETGs. We find generally low dark matter fractions within a sphere of radius $R_{\rm e}$, with a median $f_{\rm DM}(<R_{\rm e})=8$ per cent. However, we find that $f_{\rm DM}(<R_{\rm e})$ correlates better with $\sigma_{\rm e}$ than with stellar mass: the dark matter fraction increases to a median $f_{\rm DM}(<R_{\rm e})=33$ per cent for galaxies with $\sigma_{\rm e}\lesssim100\,{\rm km\,s^{-1}}$. The increased $f_{\rm DM}(<R_{\rm e})$ at low $\sigma_{\rm e}$ explains the parabolic $\lg(M/L)_{\rm e}-\lg\sigma_{\rm e}$ relation.
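The statement that the mass plane satisfies the virial theorem boils down to a scalar virial estimate of the form $M \approx k\,\sigma_{\rm e}^2 R_{\rm e}/G$, with $k\sim5$ as often adopted in the literature (the exact calibration is model dependent). A minimal worked example:

```python
# Scalar virial mass from sigma_e and R_e; k ~ 5 is a common literature choice,
# but the calibration depends on the dynamical model and light profile.
G = 4.301e-3                  # G in pc Msun^-1 (km/s)^2
sigma_e, R_e = 200.0, 4000.0  # km/s, pc (illustrative values)

for k in (4.0, 5.0, 6.0):
    print(f"k = {k:.0f}: M ~ {k * sigma_e**2 * R_e / G:.2e} Msun")
```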
We present the first detection of cool, neutral gas in the outskirts of low-z galaxy clusters, using a statistically significant sample of 3191 z$\approx$0.2 background quasar - foreground cluster pairs built by cross-matching the Hubble Spectroscopic Legacy Archive quasar catalog with optically- and SZ-selected cluster catalogs. The median cluster mass of our sample is $\approx 10^{14.2}$ M_sun, with a median impact parameter ($\rho_{cl}$) of $\approx5$ Mpc. We detect significant Lya and marginal CIV, but no OVI absorption in the signal-to-noise-ratio-weighted mean stacked spectra, with rest-frame equivalent widths of 0.096$\pm$0.011 A, 0.032$\pm$0.015 A, and <0.009 A (3$\sigma$), respectively. The Lya REW shows a declining trend with increasing $\rho_{cl}$ ($\rho_{cl}$ / $R_{500}$), which is well described by a power law with a slope of -0.79 (-0.70). The covering fractions (CFs) measured for Lya (21\%), CIV (10\%) and OVI (10\%) in cluster outskirts are significantly lower than in the circumgalactic medium (CGM). We also find that the CGM of galaxies that are closer to cluster centers, or that reside in massive clusters, is considerably deficient in neutral gas. The low Lya CF, along with the non-detection of a Lya signal when the strong absorbers (N(HI) > $10^{13} cm^{-2}$) are excluded, indicates a patchy distribution of cool gas in the outskirts. We argue that the cool gas in cluster outskirts arises from a combination of circumgalactic gas stripped from cluster galaxies and large-scale filaments feeding the clusters with cool gas.
We present the results of high-resolution spectral-line observations of dense molecular gas towards the nuclear region of the type 2 Seyfert galaxy NGC1068. MERLIN observations of the 22 GHz H2O maser were made to image the known off-nuclear maser emission at a radio jet component located about 0.3" north-east of the radio nucleus of the galaxy. High-angular-resolution ALMA observations have spatially resolved the HCN and HCO$^{+}$ emission in this region. The off-nuclear maser spots are found to nearly overlap with a ring-like molecular gas structure and trace an evolving shock-like structure, which appears to be energized by the interaction between the radio jet and the circumnuclear medium. A dynamic jet-ISM interaction is further supported by a systematic shift of the centroid velocities of the off-nuclear maser features over a period of 35 years. The integrated flux ratios of the HCO$^{+}$ line emission features at component C suggest a kinetic temperature T$_{k}$ $\gtrsim$ 300K and an H$_2$ density of $\gtrsim$ 10$^6$ cm$^{-3}$, conditions under which water masers may form. The diagnostics of the maser action in this jet-ISM interaction region are exemplary for galaxies hosting off-nuclear H2O maser emission.
The Kilo Degree Survey (KiDS) is currently the only sky survey providing optical ($ugri$) plus near-infrared (NIR, $ZYJHK_s$) seeing-matched photometry over an area larger than 1000 $\rm deg^2$. This is obtained by incorporating the NIR data from the VISTA Kilo Degree Infrared Galaxy (VIKING) survey, covering the same KiDS footprint. As such, the KiDS multi-wavelength photometry represents a unique dataset to test the ability of stellar population models to return robust photometric stellar mass ($M_*$) and star-formation rate (SFR) estimates. Here we use a spectroscopic sample of galaxies for which we possess $u g r i Z Y J H K_s$ ``gaussianized'' magnitudes from KiDS data release 4. We fit the spectral energy distribution from the 9-band photometry using: 1) three different popular libraries of stellar population templates, 2) single-burst, simple and delayed exponential star-formation history models, and 3) a wide range of priors on age and metallicity. As template-fitting codes we use two popular packages: LePhare and CIGALE. We investigate the variance of the stellar masses and star-formation rates from the different combinations of templates, star-formation recipes and codes to assess the stability of these estimates, and we define ``robust'' median quantities to be included in the upcoming KiDS data releases. As a science validation test, we derive the mass function, the star-formation rate function, and the SFR-$M_*$ relation for a low-redshift ($z<0.5$) sample of galaxies, which are in excellent agreement with previous literature data. The final catalog, containing $\sim290\,000$ galaxies with redshift $0.01<z<0.9$, is made publicly available.
We investigate the alignment of galaxy and halo orientations using the TNG300-1 hydrodynamical simulation. Our analysis reveals that the distribution of the 2D misalignment angle $\theta_{\rm{2D}}$ can be well described by a truncated shifted exponential (TSE) distribution with only \textit{one} free parameter across different redshifts and galaxy/halo properties. We demonstrate that the galaxy-ellipticity (GI) correlations of galaxies can be reproduced by perturbing halo orientations with the obtained $\theta_{\rm{2D}}$ distribution, with only a small bias ($<3^{\circ}$) possibly arising from unaccounted-for couplings between $\theta_{\rm{2D}}$ and other factors. We find that both the 2D and 3D misalignment angles $\theta_{\rm{2D}}$ and $\theta_{\rm{3D}}$ decrease with ex situ stellar mass fraction $F_{\rm{acc}}$, halo mass $M_{\rm{vir}}$ and stellar mass $M_{*}$, while increasing with disk-to-total stellar mass fraction $F_{\rm{disk}}$ and redshift. These dependences are in good agreement with our recent observational study based on the BOSS galaxy samples. Our results suggest that $F_{\rm{acc}}$ is a key factor in determining the galaxy-halo alignment. Grouping galaxies by $F_{\rm{acc}}$ nearly eliminates the dependence of $\theta_{\rm{3D}}$ on $M_{\rm{vir}}$ for all three principal axes, and also reduces the redshift dependence. For $\theta_{\rm{2D}}$, we find a more significant redshift dependence than for $\theta_{\rm{3D}}$ even after controlling for $F_{\rm{acc}}$, which may be attributed to the evolution of galaxy and halo shapes. Our findings provide a valuable model for observational studies and enhance our understanding of galaxy-halo alignment.
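The perturbation experiment described above can be sketched with a one-parameter misalignment distribution. The paper's exact TSE parametrization is not given here, so the snippet assumes $p(\theta)\propto e^{-\theta/\theta_0} - e^{-90^\circ/\theta_0}$ on $[0^\circ, 90^\circ]$ (an exponential shifted to vanish at the truncation angle) and perturbs halo position angles with draws from it.

```python
# Rejection sampler for an assumed TSE misalignment pdf, plus a toy perturbation
# of halo position angles. theta0 (deg) is the single free parameter.
import numpy as np

rng = np.random.default_rng(42)

def sample_tse(theta0, size):
    """Draw misalignment angles [deg] from p ~ exp(-t/theta0) - exp(-90/theta0)."""
    pmax = 1.0 - np.exp(-90.0 / theta0)           # pdf maximum, at theta = 0
    out = np.empty(0)
    while out.size < size:
        theta = rng.uniform(0.0, 90.0, 2 * size)
        u = rng.uniform(0.0, pmax, theta.size)
        keep = u < np.exp(-theta / theta0) - np.exp(-90.0 / theta0)
        out = np.concatenate([out, theta[keep]])
    return out[:size]

halo_pa = rng.uniform(0.0, 180.0, 10000)              # 2D halo position angles [deg]
dtheta = sample_tse(theta0=20.0, size=halo_pa.size)   # theta0 value is arbitrary here
galaxy_pa = (halo_pa + rng.choice([-1.0, 1.0], halo_pa.size) * dtheta) % 180.0
print(f"mean |misalignment| = {dtheta.mean():.1f} deg")
```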
We report the discovery of the ``mm fundamental plane of black-hole accretion'', a tight correlation between the nuclear 1 mm luminosity ($L_{\rm \nu, mm}$), the intrinsic $2$ -- $10$~keV X-ray luminosity ($L_{\rm X,2-10}$) and the supermassive black hole (SMBH) mass ($M_{\rm BH}$), with an intrinsic scatter ($\sigma_{\rm int}$) of $0.40$ dex. The plane is found for a sample of 48 nearby galaxies, most of which are low-luminosity active galactic nuclei (LLAGN). Combining these sources with a sample of high-luminosity (quasar-like) nearby AGN, we find that the plane still holds. We also find that $M_{\rm BH}$ correlates with $L_{\rm \nu, mm}$ at a highly significant level, although this correlation is less tight than the mm fundamental plane ($\sigma_{\rm int}=0.51$ dex). Crucially, we show that spectral energy distribution (SED) models for both advection-dominated accretion flows (ADAFs) and compact jets can explain the existence of these relations, which are not reproduced by the standard torus plus thin accretion disc models usually associated with quasar-like AGN. The ADAF models reproduce the observed relations somewhat better than those for compact jets, although neither provides a perfect prediction. Our findings thus suggest that radiatively inefficient accretion processes such as those in ADAFs or compact (and thus possibly young) jets may play a key role in both low- and high-luminosity AGN. This mm fundamental plane also offers a new, rapid method to (indirectly) estimate SMBH masses.
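A minimal version of such a plane fit is ordinary least squares on the logarithmic quantities; published analyses typically use regressors that model intrinsic scatter, so the sketch below (with a hypothetical input file) is only the skeleton of the method.

```python
# Fundamental-plane style fit: log L_mm = a log L_X + b log M_BH + c.
import numpy as np

logLmm, logLX, logMBH = np.loadtxt("llagn_sample.txt", unpack=True)  # assumed columns

A = np.column_stack([logLX, logMBH, np.ones_like(logLX)])
coef, *_ = np.linalg.lstsq(A, logLmm, rcond=None)
resid = logLmm - A @ coef
print(f"log L_mm = {coef[0]:.2f} log L_X + {coef[1]:.2f} log M_BH + {coef[2]:.2f}")
print(f"rms scatter = {resid.std():.2f} dex")
```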
The exploration of the low-acceleration $a<a_{0}$ regime, where $a_{0}=1.2 \times 10^{-10}$ m s$^{-2}$ is the acceleration scale of MOND around which gravitational anomalies on galactic scales appear, has recently been extended to the much smaller mass and length scales of local wide binaries thanks to the availability of the {\it Gaia} catalogue. Statistical methods to test the underlying structure of gravity using large samples of such binary stars, while dealing with the necessary presence of kinematic contaminants in such samples, have also been presented. However, an alternative approach using binary samples carefully selected to avoid any such contaminants, and consequently much smaller samples, has been lacking a formal statistical development. In the interest of having independent high-quality checks on the results of wide-binary gravity tests, here we develop a formal statistical framework for treating small, clean, wide-binary samples in the context of testing modifications to gravity of the form $G \to \gamma G$. The method is validated through extensive tests with synthetic data samples, and applied to recent {\it Gaia} DR3 binary star observational samples of relative velocities and internal separations on the plane of the sky, $v_{2D}$ and $r_{2D}$, respectively. Our final result for the high-acceleration region ($r_{2D}<0.01$ pc) is $\gamma=1.000 \pm 0.096$, in full accordance with Newtonian expectations. For the low-acceleration region ($r_{2D}>0.01$ pc), however, we obtain $\gamma=1.5 \pm 0.2$, inconsistent with the Newtonian value of $\gamma=1$ at the $2.6 \sigma$ level, and much more indicative of the MOND AQUAL prediction of close to $\gamma=1.4$.
Gravity drives the collapse of the molecular clouds in which stars form, yet the exact role of gravity in cloud collapse remains a complex issue. Studies point to a picture where star formation occurs in clusters. In a typical, pc-sized cluster-forming region, the collapse is hierarchical, and the stars should be born from regions of even smaller size ($\approx 0.1\;\rm pc$). The origin of this spatial arrangement remains under investigation. Based on a high-quality surface density map of the Perseus region, we construct a 3D density structure, compute the gravitational potential, derive the eigenvalues of the tidal tensor ($\lambda_{\rm min} < \lambda_{\rm mid} < \lambda_{\rm max}$), and analyze the behavior of gravity at every location, revealing its multiple roles in cloud evolution. We find that fragmentation is limited to several isolated, high-density ``islands''. Surrounding them is a vast amount of gas ($75 \%$ of the mass, $95 \%$ of the volume) that stays under the influence of extensive tides, where fragmentation is suppressed. This gas will be transported towards the high-density regions to fuel star formation. The spatial arrangement of regions under different tides explains the hierarchical and localized pattern of star formation inferred from the observations. Tides were first recognized by Newton, yet this is the first time their dominance in cloud evolution has been revealed. We expect this link between cloud density structure and the role of gravity to be strengthened by future studies, resulting in a clear view of the star formation process.
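The tidal classification described above reduces to computing the Hessian of the potential and its eigenvalues at every grid point. A minimal numpy sketch, with a hypothetical potential grid:

```python
# Tidal tensor T_ij = -d^2 Phi / dx_i dx_j on a 3D grid; its eigenvalues
# classify each location (all negative: fully compressive tides, collapse
# aided; any positive: extensive tides, fragmentation suppressed by stretching).
import numpy as np

phi = np.load("potential_grid.npy")     # hypothetical potential on a cubic grid
dx = 0.1                                # grid spacing [pc], illustrative

grad = np.gradient(phi, dx)             # dPhi/dx_i, list of 3 arrays
T = np.empty(phi.shape + (3, 3))
for i in range(3):
    for j, g in enumerate(np.gradient(grad[i], dx)):
        T[..., i, j] = -g               # minus the Hessian of Phi

lam = np.linalg.eigvalsh(T)             # ascending: lam_min <= lam_mid <= lam_max
compressive = (lam < 0).all(axis=-1)
print(f"fraction of volume with fully compressive tides: {compressive.mean():.2f}")
```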
We propose that the ultralight dark matter (ULDM) model, in which dark matter particles have a tiny mass of $m=O(10^{-22})$ eV, has quantum-mechanical characteristic scales for the physical quantities of observed galaxies, such as mass, size, acceleration, mass flux, and angular momentum. The typical angular momentum per dark matter particle is $\hbar$, and the typical physical quantities are functions of the specific angular momentum $\hbar/m$ and the average background density of the particles. If we instead use the Compton wavelength for the length scale, we can obtain bounds on these physical quantities. For example, there is an upper bound on the acceleration of ULDM-dominated objects, $a_c={c^3 m}/{\hbar}$. We suggest that the physical scales of galaxies depend on the time of their formation and that these characteristic scales are related to some mysteries of observed galaxies. Future observations from the James Webb Space Telescope and NANOGrav can provide evidence for the presence and evolution of these scales.
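The quoted acceleration bound follows from dimensional analysis of $(c, m, \hbar)$, and the associated length scales are easy to evaluate. A short worked computation for $m = 10^{-22}$ eV (values rounded):

```python
# Characteristic ULDM scales for m = 1e-22 eV, in SI units.
hbar, c, eV, pc = 1.055e-34, 2.998e8, 1.602e-19, 3.086e16

m = 1e-22 * eV / c**2            # particle mass [kg], ~1.8e-58 kg
a_c = c**3 * m / hbar            # acceleration bound a_c = c^3 m / hbar [m/s^2]
lam_C = hbar / (m * c)           # reduced Compton wavelength [m]
lam_dB = hbar / (m * 1e5)        # de Broglie length for v ~ 100 km/s [m]

print(f"a_c ~ {a_c:.0f} m/s^2")                       # ~46 m/s^2
print(f"Compton wavelength  ~ {lam_C / pc:.2f} pc")   # ~0.06 pc
print(f"de Broglie length   ~ {lam_dB / pc:.0f} pc")  # ~190 pc
```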
We present a hierarchical Bayesian pipeline, BP3M, that measures positions, parallaxes, and proper motions (PMs) for cross-matched sources between Hubble Space Telescope (HST) images and Gaia -- even for sparse fields ($N_*<10$ per image) -- expanding on the recent GaiaHub tool. This technique uses Gaia-measured astrometry as priors to predict the locations of sources in HST images, and is therefore able to put the HST images onto a global reference frame without the use of background galaxies/QSOs. Testing our publicly available code in the Fornax and Draco dSphs, we measure accurate PMs that are a median of 8-13 times more precise than Gaia DR3 alone for $20.5<G<21~\mathrm{mag}$. We explore the effect of observation strategies on BP3M astrometry using synthetic data, finding an optimal strategy that improves parallax and position precision at no cost to the PM uncertainty. Using 1619 HST images in the sparse COSMOS field (median 9 Gaia sources per HST image), we measure BP3M PMs for 2640 unique sources in the $16<G<21.5~\mathrm{mag}$ range, 25% of which have no Gaia PMs; the median BP3M PM uncertainty for $20.25<G<20.75~\mathrm{mag}$ sources is $0.44~$mas/yr, compared to $1.03~$mas/yr from Gaia, while the median BP3M PM uncertainty for sources without Gaia-measured PMs ($20.75<G<21.5~\mathrm{mag}$) is $1.16~$mas/yr. The statistics that underpin the BP3M pipeline are a generalized way of combining position measurements from different images, epochs, and telescopes, which allows information to be shared between surveys and archives to achieve higher astrometric precision than that of each catalog alone.
Using Hubble Space Telescope (HST)/Cosmic Origins Spectrograph (COS) observations of one of the most metal-poor dwarf star-forming galaxies (SFGs) in the local Universe, J2229+2725, we have discovered an extremely strong nebular CIV 1549, 1551 emission-line doublet, with an equivalent width of 43 A, several times higher than the values observed so far in low-redshift SFGs. Together with other extreme characteristics obtained from optical spectroscopy (oxygen abundance 12+log(O/H)=7.085+/-0.031, ratio O32 = I([OIII]5007)/I([OII]3727) ~ 53, and equivalent width of the Hbeta emission line EW(Hbeta) = 577 A), this galaxy greatly extends the range of physical properties of dwarf SFGs at low redshift and is a likely analogue of the high-redshift dwarf SFGs responsible for the reionization of the Universe. We find the ionizing radiation in J2229+2725 to be stellar in origin and the high EW(CIV 1549,1551) to be due to both extreme ionization conditions and a high carbon abundance, with a corresponding log C/O = -0.38, which is ~ 0.4 dex higher than the average value for nearby low-metallicity SFGs.
With the launch of JWST, understanding star formation in the early Universe is an active frontier in modern astrophysics. Whether the higher gas pressures and lower metallicities in the early Universe altered the shape of the stellar initial mass function (IMF) remains a fundamental open question. Since the IMF impacts nearly all observable properties of galaxies and controls how stars regulate galaxy growth, determining whether the IMF is variable is crucial for understanding galaxy formation. Here we report the detection of two Lyman-$\alpha$-emitting galaxies in the Epoch of Reionization with exceptionally top-heavy IMFs. Our analysis of JWST/NIRSpec data demonstrates that these galaxies exhibit spectra which are completely dominated by the nebular continuum. In addition to a clear Balmer jump, we observe a steep turnover in the ultraviolet continuum. Although this feature can be reproduced with a contrived damped Lyman-$\alpha$ absorption model, we show instead that it is two-photon emission from neutral hydrogen. Two-photon emission can only dominate if the ionizing emissivity is $\gtrsim10\times$ that of a typical star-forming galaxy. While weak He II emission disfavours ionizing contributions from AGN or X-ray binaries, such radiation fields can be produced in star clusters dominated by low-metallicity stars of $\gtrsim50\ {\rm M_{\odot}}$, where the IMF is $10-30\times$ more top-heavy than typically assumed. Such a top-heavy IMF implies that our understanding of star formation in the early Universe, and of the sources of reionization, may need revision.
Gravitational Wave (GW) analysis has grown in popularity as technology has advanced and the observation of GWs has become more precise. Although the sensitivity and the frequency of observation of GW signals are constantly improving, the possibility of noise in the collected GW data remains. In this paper, we propose two new machine and deep learning ensemble approaches (i.e., the ShallowWaves and DeepWaves Ensembles) for detecting different types of noise and patterns in datasets from GW observatories. Our research also investigates various machine and deep learning techniques for multi-class classification and provides a comprehensive benchmark, emphasizing the best results in terms of three commonly used performance metrics (i.e., accuracy, precision, and recall). We train and test our models on a dataset consisting of annotated time series from real-world data collected by the Advanced Laser Interferometer GW Observatory (LIGO). We empirically show that the best overall accuracy is obtained by the proposed DeepWaves Ensemble, followed closely by the ShallowWaves Ensemble.
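Although the ShallowWaves/DeepWaves architectures are not reproduced here, the general ensemble idea can be sketched with scikit-learn's soft-voting combination of heterogeneous classifiers; the feature and label files below are placeholders.

```python
# Soft-voting ensemble for multi-class classification of annotated GW time
# series. Features could be per-segment summary statistics; files are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.load("ligo_segment_features.npy")   # hypothetical per-segment features
y = np.load("ligo_segment_labels.npy")     # annotated noise/pattern classes

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=7)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                      random_state=0))],
    voting="soft")                          # average predicted class probabilities

scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```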
We present an extensive archaeoastronomical study of the orientations of seventeenth- and eighteenth-century Jesuit churches in the lands of the historic viceroyalty of New Spain. Our sample includes forty-one chapels and churches located mainly in present-day Mexico, which documentary sources indicate were built by the Society, and for which we measured the azimuths and heights of the horizon of their principal axes using satellite imagery and digital elevation models. Our results show that neither the orientation diagram nor the statistical analysis derived from the sample declination histogram can select a particular orientation pattern with an adequate level of confidence. We suggest some possible explanations for our results, discussing these North American churches within a broader cultural and geographical context that includes previous studies involving Jesuit mission churches in South America. Based on the analysis of the data presented here, we conclude that the orientation of Jesuit churches in the viceroyalty of New Spain most likely does not follow a well-defined prescription.
Far-infrared (far-IR) astrophysics missions featuring actively cooled telescopes will offer orders-of-magnitude improvements in observing speed at wavelengths where galaxies and forming planetary systems emit most of their light. The PRobe far-Infrared Mission for Astrophysics (PRIMA), which is currently under study, emphasizes low- and moderate-resolution spectroscopy throughout the far-IR. Full utilization of PRIMA's cold telescope requires far-IR detector arrays with per-pixel noise equivalent powers (NEPs) at or below $1\times10^{-19}$ W Hz$^{-1/2}$. We are developing low-volume aluminum kinetic inductance detector (KID) arrays to reach these sensitivities. We present the development of our long-wavelength (210 $\mu$m) array approach, with a focus on multitone measurements of our 1,008-pixel arrays. We measure an NEP below $1\times10^{-19}$ W Hz$^{-1/2}$ for 73 per cent of our pixels.
This report first describes the status quo regarding the emerging deployment of very large groups of low-Earth-orbit satellites in the late 2010s, the concerns raised by the international astronomy community, and the steps the community took to address the issue. We then describe the results of a series of four conferences held in 2020-21 that considered the impacts of large satellite constellations on a number of stakeholders, and how those outcomes resulted in the establishment of both the IAU Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference (IAU CPS) and its Community Engagement (CE) Hub. We finish with a brief description of the CE Hub's initial plans and activities, flowing from the recommendations of those conferences.
The pyramid wavefront sensor (PyWFS) has become increasingly popular in adaptive optics (AO) systems due to its high sensitivity. The main drawback of the PyWFS is that it is inherently nonlinear, which means that classic linear wavefront reconstruction techniques face a significant reduction in performance at high wavefront errors, particularly when the pyramid is unmodulated. In this paper, we consider the potential use of neural networks (NNs) to replace the widely used matrix-vector multiplication (MVM) control. We aim to test the hypothesis that the NN's ability to model nonlinearities will give it a distinct advantage over MVM control. We compare the performance of an MVM linear reconstructor against a dense NN, using daytime data acquired on the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument. In a first set of experiments, we produce wavefronts generated from 14 Zernike modes and the PyWFS responses at different modulation radii (25, 50, 75, and 100 mas). We find that the NN allows for a far more precise wavefront reconstruction at all modulations, with the difference in performance increasing in the regime where the PyWFS nonlinearity becomes significant. In a second set of experiments, we generate a dataset of atmosphere-like wavefronts and confirm that the NN outperforms the linear reconstructor; the SCExAO real-time computer software is used as the baseline for the latter. These results suggest that NNs are well positioned to improve upon linear reconstructors and stand to bring about a leap forward in AO performance in the near future.
Over the past decades, libraries of stellar spectra have been used in a large variety of science cases, including as sources of reference spectra for a given object or a given spectral type. Despite the existence of large libraries and the increasing number of large-scale spectral survey projects, there is to date only one very high-resolution spectral library offering spectra from a few hundred objects in the southern hemisphere (UVES-POP). We aim to extend the sample, offering a finer coverage of effective temperatures and surface gravities with a uniform collection of spectra obtained in the northern hemisphere. Between 2010 and 2020, we acquired several thousand echelle spectra of bright stars with the Mercator-HERMES spectrograph, located at the Roque de Los Muchachos Observatory on La Palma, whose pipeline offers high-quality data reduction products. We have also developed methods to correct for the instrumental response in order to approach the true shape of the spectral continuum. Additionally, we have devised a normalisation process to provide a homogeneous normalisation of the full spectral range for most of the objects. We present a new spectral library consisting of 3256 spectra covering 2043 stars. It combines a high signal-to-noise ratio and high spectral resolution over the entire range of effective temperatures and luminosity classes. The spectra are presented in four versions: raw, corrected for the instrumental response (with and without correction for atmospheric molecular absorption), and normalised (including the telluric correction).
The large combined field of view of the Geostationary Lightning Mapper (GLM) instruments onboard the GOES weather satellites makes them useful for studying populations of other atmospheric phenomena, such as bolides. Being a lightning mapper, GLM has many detection biases when applied to non-lightning phenomena, and these systematics must be studied and properly accounted for before precise measurements of the bolide flux can be made. We developed a Bayesian Poisson regression model which simultaneously estimates the instrumental biases and our statistic of principal interest: the latitudinal variation of the bolide flux. We find that the estimated bias due to the angle of incident light upon the instrument corresponds roughly to the known sensitivity of the GLM instruments. We compare our latitudinal flux variation estimates to existing theoretical models and find our estimates consistent with GLM being strongly biased towards high-velocity bolides.
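The core of such a model is a Poisson likelihood with a log-link rate combining a latitude term and instrumental covariates. The sketch below implements a plain maximum-likelihood version with an assumed cosine-of-latitude term; the paper's Bayesian model is richer than this skeleton.

```python
# Plain maximum-likelihood Poisson regression with a log link: counts per
# latitude bin with an exposure offset, a cos(latitude) flux term, and one
# instrumental covariate. File name, binning and covariates are assumptions.
import numpy as np
from scipy.optimize import minimize

lat, counts, exposure, x_inst = np.loadtxt("glm_bolide_bins.txt", unpack=True)

def neg_log_like(beta):
    b0, b1, b2 = beta
    log_rate = b0 + b1 * np.cos(np.radians(lat)) + b2 * x_inst
    mu = exposure * np.exp(log_rate)          # expected counts per bin
    return np.sum(mu - counts * np.log(mu))   # Poisson NLL up to a constant

fit = minimize(neg_log_like, x0=np.zeros(3), method="Nelder-Mead")
print("MLE coefficients (intercept, cos-latitude, instrumental):", fit.x)
```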
We take a fresh look at about 60 years of recommendations for US federal funding for astronomical and astrophysical facilities provided by seven survey committees at roughly 10-year intervals. It remains true that very roughly one third of the highest priority items were done with (mostly) federal funding within about 15 years of the reports; another third happened with (mostly) state, private, or international funding; and about a third never happened (and we might well not want them now). Some other very productive facilities were never quite recommended but entered the queue in other ways. We also take brief looks at the long-term achievements of the highest-priority facilities that were actually funded and built more or less as described in the decadal reports. We end with a very brief look at the gender balance of the various panels and committees and mention some broader issues that came to look important while we were collecting the primary data. A second paper will look at what sorts of institutions the panel and committee members have come from over the years.
Artificial satellites orbiting the Earth can, under certain conditions, be visible even to the naked eye. This form of light pollution jeopardises the research activities of the astronomical community: the traces left by the objects are clear and evident, and images taken for scientific purposes are damaged and degraded. The development of a mathematical model able to estimate a satellite's brightness is required, and it represents a first step towards capturing all aspects of the reflection phenomenon. The brightness model (by Politecnico di Milano) will be exploited to implement a realistic simulation of the apparent-magnitude evolution, and it could be used to develop an archetype of new-generation spacecraft with low light-pollution impact. Starting from classical photometric theory, which provides expressions for the radiant flux density of natural spherical bodies, the global laws describing flux densities and the associated apparent magnitudes are exploited to generalise the analysis. The study finally focuses on three-dimensional objects of arbitrary shape, which best represent the spacecraft geometry. To obtain representative results for the satellite brightness, a validation process has been carried out. Observational data on OneWeb satellites were collected by the GAL Hassin astronomical observatory, located in Isnello, near Palermo. The observations were carried out in order to map the satellites' brightness under various illumination conditions, also targeting single satellites across different positions on the sky (i.e., during rise, culmination and setting).
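As a baseline for such a brightness model, classical photometric theory gives the apparent magnitude of a diffuse (Lambertian) sphere via its phase function. The sketch below uses this textbook formula with illustrative parameters; it is not the Politecnico di Milano model itself.

```python
# Apparent magnitude of a diffuse sphere of radius R and Bond albedo rho at
# range d, with solar phase angle alpha, using the classical Lambertian-sphere
# phase function. Parameter values are illustrative, not OneWeb's.
import numpy as np

M_SUN = -26.74                       # apparent V magnitude of the Sun

def sphere_magnitude(R_m, d_m, albedo, alpha_rad):
    """m = m_sun - 2.5 log10[(2 rho / 3 pi) (R/d)^2 (sin a + (pi - a) cos a)]."""
    phase = np.sin(alpha_rad) + (np.pi - alpha_rad) * np.cos(alpha_rad)
    flux_ratio = (2.0 * albedo / (3.0 * np.pi)) * (R_m / d_m) ** 2 * phase
    return M_SUN - 2.5 * np.log10(flux_ratio)

# Example: 1 m effective radius, 1200 km range, albedo 0.2, 60 deg phase angle.
print(f"m ~ {sphere_magnitude(1.0, 1.2e6, 0.2, np.radians(60.0)):.1f}")  # ~6.4 mag
```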
This document presents the study conducted during the European Moon Rover System Pre-Phase A project, in which we developed a modular lunar rover system capable of carrying out different missions with different objectives. These include excavating and transporting over 200 kg of regolith, building an astrophysical observatory on the far side of the Moon, placing scientific instrumentation at the lunar south pole, and studying the volcanic history of our satellite. To achieve this, a modular approach has been adopted for the design of the platform in terms of locomotion and mobility, including onboard autonomy. A modular platform allows for accommodating different payloads and allocating them in the most advantageous positions for the mission they are to undertake (for example, giving direct access to the lunar surface to the payloads that require it), while also allowing payloads to be relocated and the rover design itself to be reconfigured to perform completely different tasks.
For many years, various experiments have attempted to shed light on the nature of dark matter (DM). This work investigates the possibility of using CaWO4 crystals for the direct search for spin-dependent DM interactions using the isotope 17O, which has a nuclear spin of 5/2. Due to the low natural abundance of 0.038%, a method to enrich the CaWO4 crystals with 17O during the crystal production process was developed at the Technical University of Munich. Three CaWO4 crystals were enriched, and their 17O content was measured by nuclear magnetic resonance spectroscopy at the University of Leipzig. This paper presents the concept and first results of the 17O enrichment and discusses the possibility of using enriched crystals to increase the sensitivity of the spin-dependent DM search with CRESST.
ESA's Euclid mission, now undergoing its final integration stage, is fully qualified. Euclid will perform an extragalactic survey ($0<z<2$) by observing in the visible and near-infrared wavelength range. To detect infrared radiation, it is equipped with the Near Infrared Spectrometer and Photometer (NISP) instrument, operating in the 0.9--2 $\mu$m range. In this paper, after introducing the survey strategy, we focus our attention on the NISP Data Processing Unit's Application Software, highlighting the experimental process used to obtain the final parametrization of the on-board processing of data produced by the array of 16 Teledyne HAWAII-2RG (HgCdTe) detectors. We report results from the latest ground test campaigns with the flight-configuration hardware: the complete optical system (Korsch anastigmat telescope), the detector array (0.56 deg$^2$ field of view), and the readout systems (16 Digital Control Units and Sidecar ASICs). The performance of the on-board processing is then presented. We also describe a major issue found during the final test phase. We show how the problem was identified and solved thanks to an intensive coordinated effort of an independent `Tiger' review team, led by ESA, and a team of NISP experts from the Euclid Consortium. An extended PLM-level campaign at ambient temperature in Li\`ege and a dedicated test campaign conducted in Marseille on the NISP EQM model eventually confirmed the resolution of the problem. Finally, we report examples of the outstanding spectrometric (using a Blue and two Red Grisms) and photometric performance of the NISP instrument, as derived from the end-to-end payload module test campaign at FOCAL 5 -- CSL; these results include the photometric Point Spread Function (PSF) determination and the spectroscopic dispersion verification.
Modeled searches for gravitational wave signals from compact binary mergers rely on template waveforms determined by the theory of general relativity (GR). Once a signal is detected, one generally performs model-agnostic tests of GR, either looking for consistency between the GR waveform and the data or introducing phenomenological deviations to detect a departure from GR. The non-trivial presence of beyond-GR physics can alter the waveform and could be missed by GR template-based searches. A recent study [Phys. Rev. D 107, 024017 (2023)] targeted binary black hole mergers, assuming parametrized deviations in lower post-Newtonian terms, and demonstrated a mild effect on the search sensitivity. Surprisingly, for the search space of binary neutron star (BNS) systems, where component masses range from 1 to $2.4\:\rm{M}_\odot$ and the parametrized deviations span the $1\sigma$ widths of the deviation parameters measured from the GW170817 event, the GR template bank is highly ineffectual for detecting non-GR signals. Here, we present a new hybrid method to construct a non-GR template bank for the BNS search space. The hybrid method uses the geometric approach of three-dimensional lattice placement to cover most of the parameter-space volume, followed by the random method to cover the boundary regions of the parameter space. We find that the non-GR bank size is $\sim$15 times larger than the conventional GR bank and is effectual at detecting non-GR signals in the target search space.
Reflexive metrics is a branch of science studies which explores how the demand for accountability and performance measurement in science has shaped the research culture in recent decades. Hypercompetition and publication pressure are part of this neoliberal culture. How do scientists respond to these pressures? Studies on research integrity and organizational culture suggest that people who feel treated unfairly by their institution are more likely to engage in deviant behaviour, such as scientific misconduct. Building on reflexive metrics, combined with studies of the influence of organisational culture on research integrity, this study reflects on the research behaviour of astronomers: 1) To what extent is research (mis-)behaviour reflexive, i.e. dependent on perceptions of publication pressure and of distributive and organisational justice? 2) What impact does scientific misconduct have on research quality? To perform this reflection, we conducted a comprehensive survey of academic and non-academic astronomers worldwide and received 3,509 responses. We found that publication pressure explains 19% of the variance in the occurrence of misconduct and between 7 and 13% of the variance in the perception of distributive and organisational justice, as well as in overcommitment to work. Our results on the perceived impact of scientific misconduct on research quality show that the epistemic harm of questionable research practices should not be underestimated. This suggests a need for policy change. In particular, less attention to metrics (such as publication rate) in the allocation of grants, telescope time and institutional rewards would foster better scientific conduct and hence research quality.
Recent research in the field of reflexive metrics has studied the emergence and consequences of evaluation gaps in science. The concept of an evaluation gap captures potential discrepancies between what researchers value about their research, in particular research quality, and what metrics measure. As a result, scientists may experience anomie and adopt innovative ways to cope. These often value quantity over quality and may even compromise research integrity. A consequence of such gaps may therefore be research misconduct and a decrease in research quality. In the language of rational choice theory, an evaluation gap persists if motivational factors arising out of the internal component of an actor's situation are incongruent with those arising out of the external components. The aim of this research is therefore to study and compare autonomous and controlled motivations to become an astronomer, to do research in astronomy, and to publish scientific papers. Moreover, we study how these different motivational factors affect publication pressure, the experience of organisational justice, and the observation of research misconduct. In summary, we find evidence for an evaluation gap: controlled motivational factors arising from evaluation procedures based on publication records drive up publication pressure, which, in turn, was found to increase the perceived frequency of misbehaviour.
Context. Arrays of radio antennas have proven to be successful in astroparticle physics with the observation of extensive air showers initiated by high-energy cosmic rays in the Earth's atmosphere. Accurate determination of the energy scale of the primary particles requires an absolute calibration of the radio antennas, for which, in recent years, the utilization of the Galactic emission as a reference source has emerged as a potential standard. Aims. To apply this "Galactic Calibration", a proper estimation of the systematic uncertainties on the prediction of the Galactic emission from sky models is necessary, which we aim to determine on a global level as well as for the specific cases of selected radio arrays. We further aim to quantify the influence of the quiet Sun on the Galactic Calibration. Methods. We compare four different sky models that predict the full-sky Galactic emission in the frequency range from 30 to 408 MHz. We make an inventory of the reference maps on which they rely and use the output of the models to determine their global level of agreement. Next, we take the sky exposures and frequency bands of selected radio arrays into account and repeat the comparison for each of them. Finally, we study the relative influence of the Sun in its quiet state by projecting it onto the sky with brightness data from recent measurements. Results. We find a systematic uncertainty of 12% on the predicted power from the Galactic emission, which scales to approximately half of that value as the uncertainty on the determination of the energy of cosmic particles. When looking at the selected radio arrays, the uncertainty on the predicted power varies between 10% and 19%. The influence of the quiet Sun turns out to be insignificant at the lowest frequencies but increases to a relative contribution of ~20% around 400 MHz.
In recent years, the field of Gravitational Wave Astronomy has flourished. With the advent of more sophisticated ground-based detectors and space-based observatories, it is anticipated that Gravitational Wave events will be detected at a much higher rate in the near future. One of the future data analysis challenges is performing robust statistical inference in the presence of detector noise transients or non-stationarities, as well as in the presence of stochastic Gravitational Wave signals of possible astrophysical and/or cosmological origin. The incomplete knowledge of the total noise of the observatory can introduce challenges in parameter estimation of detected sources. In this work, we propose a heavy-tailed, Hyperbolic likelihood, based on the Generalized Hyperbolic distribution. With the Hyperbolic likelihood we obtain a robust data analysis framework against data outliers, noise non-stationarities, and possible inaccurate modeling of the noise power spectral density. We apply this methodology to examples drawn from gravitational wave astronomy, and in particular to synthetic data sets from the planned LISA mission.
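To illustrate the robustness argument, the toy example below contrasts a Gaussian log-likelihood with a heavy-tailed generalized hyperbolic one on residuals containing a single strong outlier; the shape parameters are arbitrary illustrative choices, not those of the paper (requires scipy >= 1.8 for stats.genhyperbolic).

```python
# Illustrative sketch: compare a Gaussian log-likelihood with a heavy-tailed
# generalized hyperbolic one on data containing an outlier. This is a toy
# demonstration of robustness, not the likelihood implementation of the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(0.0, 1.0, 1000)
residuals[0] = 12.0  # a single strong outlier (e.g. a noise transient)

# Gaussian likelihood: the outlier dominates the total log-likelihood.
logL_gauss = stats.norm(0, 1).logpdf(residuals).sum()

# Generalized hyperbolic likelihood; the shape parameters (p, a, b) below
# are arbitrary choices giving heavy, symmetric tails.
gh = stats.genhyperbolic(p=-0.5, a=1.0, b=0.0)
logL_gh = gh.logpdf(residuals).sum()

# The outlier's individual penalty is far milder under the heavy-tailed model.
print("outlier logpdf, Gaussian:         ", stats.norm(0, 1).logpdf(12.0))
print("outlier logpdf, gen. hyperbolic:  ", gh.logpdf(12.0))
```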
Solar radio emissions provide several unique diagnostics for estimating different physical parameters of the solar corona that are otherwise simply inaccessible. However, imaging the highly dynamic solar coronal emissions, which span a large range of angular scales, at radio wavelengths is extremely challenging. At GHz frequencies, the MeerKAT radio telescope is at present possibly the best-suited instrument in the world for providing high-fidelity spectroscopic snapshot solar images. Here, we present the first published spectroscopic images of the Sun made using observations with MeerKAT in the 880-1670 MHz band. This work demonstrates the high fidelity of spectroscopic snapshot MeerKAT solar images through a comparison with simulated radio images at MeerKAT frequencies. The observed images show extremely good morphological similarity with the simulated images. Our analysis shows that below ~900 MHz MeerKAT images can recover essentially the entire flux density from the large angular scale solar disc. Not surprisingly, at higher frequencies the missing flux density can be as large as ~50%; however, it can potentially be estimated and corrected for. We believe that once solar observing with MeerKAT is commissioned, it will enable a host of novel studies, open the door to a large unexplored phase space with significant discovery potential, and pave the way for solar science with the upcoming Square Kilometre Array-Mid telescope, for which MeerKAT is a precursor.
The Vera C. Rubin Observatory's LSST Camera (LSSTCam) pixel response has been characterized using laboratory measurements with a grid of artificial stars. We quantify the contributions to photometry, centroid, point-spread function (PSF) size, and shape measurement errors due to small anomalies in the LSSTCam CCDs. The main sources of these anomalies are quantum efficiency variations and pixel area variations induced by the amplifier segmentation boundaries and "tree-rings", circular variations in silicon doping concentration. This laboratory study using artificial stars projected on the sensors shows overall small effects. The residual effects on PSF size and shape are below $0.1\%$, meeting the ten-year LSST survey science requirements. However, the CCD mid-line presents distortions that can have a moderate impact on PSF measurements; this feature can be avoided by masking the affected regions. Effects of tree-rings are observed on the centroids and PSFs of the artificial stars, and the nature of the effect is confirmed by a study of the flat-field response. Nevertheless, further studies of the full focal plane with stellar data should probe variations more completely and might reveal new features, e.g. wavelength-dependent effects. The results of this study can be used as a guide for the on-sky operation of LSSTCam.
Due to the massive increase in astronomical imagery (from facilities such as the James Webb Space Telescope and the Solar Dynamics Observatory), automatic image description is essential for solar and astronomical studies. Zernike moments (ZMs) are unique due to the orthogonality and completeness of Zernike polynomials (ZPs), making them valuable for converting a two-dimensional image into a one-dimensional series of complex numbers. The magnitude of ZMs is rotation invariant, and by applying image normalization, scale and translation invariance can be achieved as well; these are helpful properties for describing solar and astronomical images. In this package, we describe the characteristics of ZMs via several examples of solar (large- and small-scale) features and astronomical images. ZMs can describe the structure and morphology of objects in an image, enabling machine learning to identify and track features across several disciplines.
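As a concrete illustration of such rotation-invariant descriptors (using the third-party mahotas library rather than the package described above):

```python
# Illustrative sketch: rotation-invariant Zernike moment magnitudes of a toy
# image, computed with the mahotas library (not the package described above).
import numpy as np
import mahotas

# Toy "solar feature": a bright off-centre Gaussian blob on a 128x128 grid.
y, x = np.mgrid[0:128, 0:128]
image = np.exp(-(((x - 80) ** 2 + (y - 60) ** 2) / (2 * 10.0 ** 2)))

# Zernike moments within a disc of the given radius, up to polynomial
# degree 8; the returned magnitudes are invariant under image rotation.
moments = mahotas.features.zernike_moments(image, radius=60, degree=8)
print(moments.shape, moments[:5])
```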
The determination of the physical parameters of gravitational wave events is a fundamental pillar in the analysis of the signals observed by the current ground-based interferometers. Typically, this is done using Bayesian inference approaches which, albeit very accurate, are computationally expensive. We propose a convolutional neural network approach to perform this task. The convolutional neural network is trained using simulated signals injected into Gaussian noise. We verify the correctness of the neural network's output distribution and compare its estimates with the posterior distributions obtained from traditional Bayesian inference methods for some real events. The results demonstrate the convolutional neural network's ability to produce posterior distributions that are compatible with the traditional methods. Moreover, it achieves a remarkable inference speed, reducing the run time of Bayesian inference methods by orders of magnitude and enabling real-time analysis of gravitational wave signals. Despite a reduction in parameter accuracy, the neural network provides valuable initial estimates of key parameters of the event, such as the sky location, facilitating a multi-messenger approach.
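A minimal sketch of this type of network, under the simplifying assumptions of a 1D whitened strain input and an independent-Gaussian posterior parametrization (the paper's architecture and posterior model will differ):

```python
# Minimal sketch (assumptions: 1D whitened strain input, independent Gaussian
# approximation of the posterior) of a CNN mapping a time series to means and
# standard deviations for a few source parameters.
import torch
import torch.nn as nn

class PosteriorCNN(nn.Module):
    def __init__(self, n_params: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 8, 2 * n_params)  # mean and log-sigma per parameter

    def forward(self, strain):
        out = self.head(self.features(strain))
        mean, log_sigma = out.chunk(2, dim=-1)
        return mean, log_sigma.exp()

# Training would minimize the Gaussian negative log-likelihood of the true
# injected parameters under the predicted (mean, sigma):
model = PosteriorCNN()
strain = torch.randn(4, 1, 4096)   # batch of whitened strain segments
theta_true = torch.randn(4, 2)     # known injection parameters
mean, sigma = model(strain)
nll = (0.5 * ((theta_true - mean) / sigma) ** 2 + sigma.log()).sum(dim=-1).mean()
nll.backward()
```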
The present review aims to show that a modified space-time with an invariant minimum speed provides a relation with Weyl geometry in the weak-field Newtonian approximation. The deformed Special Relativity, so-called Symmetrical Special Relativity (SSR), has an invariant minimum speed $V$, which is associated with a preferred reference frame $S_V$ representing the vacuum energy, thus leading to the cosmological constant ($\Lambda$). The equation of state (EOS) of vacuum energy for $\Lambda$, i.e., $\rho_{\Lambda}=\epsilon=-p$, emerges naturally from such a space-time, where $p$ is the pressure and $\rho_{\Lambda}=\epsilon$ is the vacuum energy density. With the aim of establishing a relationship between $V$ and $\Lambda$ in the modified metric of the space-time, we consider a dark spherical universe with Hubble radius $R_H$, having a very low value of $\epsilon$ that governs the accelerated expansion of the universe. In doing this, we aim to show that the SSR metric is equivalent to a de Sitter (dS) metric ($\Lambda>0$). On the other hand, according to the Boomerang experiment, which reveals a slightly accelerated expansion of the universe, SSR leads to a dS metric with an approximation for $\Lambda \ll 1$ close to a flat space-time, in agreement with the $\Lambda CDM$ scenario where space is quasi-flat, so that $\Omega_{m}+\Omega_{\Lambda}\approx 1$. We have $\Omega_{cdm}\approx 23\%$ representing cold dark matter, $\Omega_m\approx 27\%$ for matter and $\Omega_{\Lambda}\approx 73\%$ for the vacuum energy. Thus, the theory is adjusted for the redshift $z=1$. This corresponds to the time $\tau_0$ of transition between gravity and anti-gravity, leading to a slight acceleration of expansion related to a tiny value of $\Lambda$, i.e., we find $\Lambda_0=1.934\times 10^{-35}s^{-2}$. This result is in agreement with observations.
In this paper we clarify that a regular metric can generate a singular spacetime. Our work focuses on static and spherically symmetric spacetimes, in which regularity requires all components of the Riemann tensor to be finite. There is work in the literature that assumes that the regularity of the metric is a sufficient condition to guarantee this. We study three regular metrics and show that they nevertheless describe singular spacetimes. We also show that these metrics can be interpreted as solutions for black holes whose matter source is described by nonlinear electrodynamics. We analyze the geodesic equations and the Kretschmann scalar to verify the existence of the curvature singularity. Moreover, we apply the change of the line element $r \rightarrow \sqrt{r^2+a^2}$, a process of regularization of spacetime already known in the literature. We then recompute the geodesic equations and the Kretschmann scalar and show that all metrics now yield regular spacetimes. This process transforms them into black-bounce solutions, two of which are new. We also discuss the properties of the event horizon and the energy conditions for all models.
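For orientation, applying this substitution to the Schwarzschild solution yields the well-known Simpson-Visser black-bounce line element (quoted here as a standard example, not one of the new solutions of this paper),
$$ds^2 = -\left(1-\frac{2M}{\sqrt{r^2+a^2}}\right)dt^2 + \left(1-\frac{2M}{\sqrt{r^2+a^2}}\right)^{-1}dr^2 + \left(r^2+a^2\right)d\Omega^2,$$
whose Kretschmann scalar remains finite everywhere for $a \neq 0$: the would-be singularity at $r=0$ is replaced by a throat of areal radius $a$.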
The significant properties and phase transitions of charged Anti-de Sitter (AdS) black holes have been extensively studied in a variety of modified theories of gravity in the presence of numerous matter fields. The goal of our current research is to investigate the AdS black hole's thermodynamics under the impact of $f(Q)$ gravity. Additionally, this paper explores the black hole's local stability and phase structure under the relevant gravity. We further use Ruppeiner geometry to probe the AdS black hole's microscopic structure, numerically computing the Ricci curvature scalar $R$ to explain the interactions between the AdS black hole's microscopic particles under the influence of $f(Q)$ gravity.
We show that detector switching profiles consisting of trains of delta couplings are a useful computational tool for efficiently approximating results involving continuous switching functions, in setups involving both a single detector and multiple detectors. The rapid convergence to the continuous results at all orders in perturbation theory, for sufficiently regular switchings, means that this tool can be used to obtain non-perturbative results for general particle detector phenomena with continuous switching functions.
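Schematically, the idea is to replace a continuous switching function $\chi(t)$ in the detector interaction Hamiltonian by a train of sharp couplings (the notation below is illustrative; the paper's precise construction of weights and times may differ),
$$\chi_N(t) = \sum_{n=1}^{N} c_n\,\delta(t - t_n),$$
with the weights $c_n$ and times $t_n$ chosen so that detector observables computed with $\chi_N$ converge to the continuous-switching results as $N$ grows.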
We give a scheme to geometrize the partial entanglement entropy (PEE) for holographic CFT in the context of AdS/CFT. More explicitly, given a point $\textbf{x}$ we geometrize the two-point PEEs between $\textbf{x}$ and any other point in terms of the bulk geodesics connecting these two points. We refer to these geodesics as the \textit{PEE threads}, which can be naturally regarded as the integral curves of a divergenceless vector field $V_{\textbf{x}}^{\mu}$, which we call the \emph{PEE thread flow}. The norm of $V_{\textbf{x}}^{\mu}$, which characterizes the density of the PEE threads, can be determined by physical requirements on the PEE. We show that, for any static interval or spherical region $A$, a unique bit thread configuration can be generated from the PEE thread configuration determined by the state. Hence, the non-intrinsic bit threads emerge from the intrinsic PEE threads. For static disconnected intervals, a vector field describing a divergenceless flow is no longer suitable for reproducing the RT formula. Instead, we weight each PEE thread by the number of times it intersects any homologous surface, and the RT formula is perfectly reformulated as the minimization, over all possible assignments of weights, of the weighted sum of the PEE threads.
By considering the concept of the modified Chaplygin gas (MCG) as a single fluid model unifying dark energy and dark matter, we construct a static, spherically symmetric charged black hole (BH) solution in the framework of General Relativity. The $P-V$ criticality of the charged anti-de Sitter (AdS) BH with a surrounding MCG is explored in the context of the extended phase space, where the negative cosmological constant operates as a thermodynamic pressure. This critical behavior shows that the small/large BH phase transition is analogous to the van der Waals liquid/gas phase transition. Accordingly, along the $P-V$ phase spaces, we derive the BH equations of state and then numerically evaluate the corresponding critical quantities. Critical exponents are also identified, demonstrating that the scaling behavior of thermodynamic quantities near criticality falls into a universal class. The use of \emph{geometrothermodynamic} (GT) tools finally offers a new perspective on locating the critical phase transition point. At this stage, we apply a class of GT tools, namely the Weinhold, Ruppeiner, HPEM, and Quevedo class I and II metrics. The findings are non-trivial, as each GT metric captures at least either the physical limitation point or the critical point of the phase transition. Overall, this paper provides a detailed study of the critical behavior of the charged AdS BH with surrounding MCG.
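For orientation, in the absence of the surrounding MCG this setup reduces to the standard charged (RN-)AdS case, whose extended-phase-space equation of state is
$$P = \frac{T}{2 r_h} - \frac{1}{8\pi r_h^{2}} + \frac{Q^{2}}{8\pi r_h^{4}},$$
with horizon radius $r_h$ and charge $Q$ (geometric units); its critical point obeys $P_c v_c / T_c = 3/8$ with $v = 2 r_h$, the same universal ratio as the van der Waals fluid. The MCG background deforms this equation of state and hence the critical quantities evaluated in the paper.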
By implementing the gravitational decoupling method, we find the deformed AdS-Schwarzschild black hole solution in the presence of an additional gravitational source that obeys the weak energy condition. We deliberately choose its energy density to be a monotonic function consistent with the constraints. In the method, there is a positive parameter, which we refer to as the deformation parameter, that adjusts the strength of the effects of the geometric deformations on the background geometry. The condition of having an event horizon limits the value of the deformation parameter from above. After deriving various thermodynamic quantities as functions of the event horizon radius, we focus mostly on the effects of the deformation parameter on the horizon structure, the thermodynamics of the solution, and the temperature of the Hawking-Page phase transition. The results show that as the deformation parameter increases, the minimum horizon radius required for a black hole to be in local thermodynamic equilibrium and the minimum temperature below which there is no black hole both decrease, while the horizon radius and the temperature of the first-order Hawking-Page phase transition both increase. Furthermore, when the deformation parameter vanishes, the thermodynamic behavior of the black hole is consistent with that reported in the literature.
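As the undeformed baseline against which these shifts can be read, the AdS-Schwarzschild relations quoted in the literature are
$$T = \frac{1}{4\pi r_h}\left(1 + \frac{3 r_h^{2}}{L^{2}}\right), \qquad T_{\min} = \frac{\sqrt{3}}{2\pi L}\ \ \text{at}\ \ r_h = \frac{L}{\sqrt{3}}, \qquad T_{\mathrm{HP}} = \frac{1}{\pi L}\ \ \text{at}\ \ r_h = L,$$
where $L$ is the AdS radius; according to the results above, increasing the deformation parameter lowers $T_{\min}$ below and raises $T_{\mathrm{HP}}$ above these values.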
We discuss the possibility of explaining observations usually attributed to the existence of dark matter by passing from general relativity (GR) to a modified theory of gravity, the embedding theory proposed by Regge and Teitelboim. In this approach, it is assumed that our space-time is a four-dimensional surface in a ten-dimensional flat ambient space. This clear geometric interpretation of a change of variable in the GR action leading to a new theory distinguishes this approach from the known alternatives: mimetic gravity and other variants. After the passage to the modified theory of gravity, besides the solutions corresponding to GR, additional solutions appear that can be interpreted as GR solutions with additional fictitious matter. It is precisely in this fictitious matter that one can try to see dark matter, with no need to assume the existence of dark matter as a fundamental object: its role is played by the degrees of freedom of the modified gravity. In the embedding theory, the number of degrees of freedom of the fictitious matter is sufficiently large, and hence one can attempt an explanation of all observations without complicating the theory any further.
We incorporate some corrections inspired by loop quantum gravity into the concept of gravitational collapse and propose a complete model of the dynamical process. The model carries the essence of a mass-independent upper bound on the curvature scalars, originally found as a crucial feature of black holes in loop quantum gravity. The quantum-inspired interior is immersed in a geometry filled with null radiation, and the two are matched at a distinct boundary hypersurface. The ultimate fate of the process depends on inhomogeneities of the metric tensor coefficients. We find a critical parameter $\lambda$ embedded in the inhomogeneity of the conformal factor of the interior metric. Examples with $\lambda < 0$ enforce an eventual collapse to singularity, while $\lambda > 0$ cases produce a non-singular collapse resulting in a loop-quantum-corrected Schwarzschild geometry modulo a conformal factor. Interestingly, for $\lambda < 0$ as well, there exist situations where the quantum effects are able to cause a bounce but fall short of preventing the ultimate formation of the singularity. The trapped surface formation condition is studied for the $\lambda<0$ case to infer the visibility of the final singularity. Notably, we find a possibility of the formation of three horizons during the course of the collapse; eventually, all of them merge into one single horizon which envelopes the final singularity. For the non-singular case, there is a possibility that the sphere can evolve into a wormhole throat whose radius is found to be inversely proportional to the critical parameter $\lambda$. Depending on the nature of the evolution and the shell regions, the collapsing shells violate some standard energy conditions, which can be associated with the quantum-inspired corrections.
Using time-domain integration of the wave equation, we show that scalar field perturbations around the $(2+1)$-dimensional asymptotically flat black hole with Gauss-Bonnet corrections are dynamically unstable, even when the coupling constant is sufficiently small.
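A minimal sketch of the time-domain technique (with a toy potential, not the Gauss-Bonnet effective potential of the paper): evolve $\psi_{,tt} = \psi_{,xx} - V(x)\,\psi$ with a leapfrog scheme and monitor the late-time signal, whose unbounded growth signals a dynamical instability.

```python
# Minimal sketch of time-domain integration of a 1+1 wave equation
# psi_tt = psi_xx - V(x) psi. The negative potential well below is a toy
# placeholder chosen to support a growing (unstable) mode.
import numpy as np

nx, dx = 2000, 0.1
x = (np.arange(nx) - nx // 2) * dx
dt = 0.5 * dx                               # satisfies the CFL condition
V = -0.05 * np.exp(-(x / 5.0) ** 2)         # toy potential (placeholder)

psi_prev = np.exp(-((x - 20.0) / 2.0) ** 2)  # initial Gaussian pulse
psi = psi_prev.copy()                        # zero initial time derivative

for step in range(4000):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    psi_next = 2 * psi - psi_prev + dt**2 * (lap - V * psi)
    psi_next[0] = psi_next[-1] = 0.0         # crude boundary treatment
    psi_prev, psi = psi, psi_next
    if step % 500 == 0:
        # Monitor the field at an "observer" point; unbounded growth of the
        # late-time signal indicates a dynamical instability.
        print(step, psi[nx // 2])
```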
We apply a new version of the quintessential $\alpha$-attractor inflaton potential (arXiv: 2305.00230 [gr-qc]) to make a detailed analysis of the phenomenon of \emph{spontaneous baryogenesis} in the post-inflationary kination period. In this context, we compute various temperatures, their respective numbers of e-folds, densities and the freeze-out values of the baryon-to-entropy ratio ($\eta_F$) at different mass scales, as functions of $\alpha$. The numerical calculations yield: for $\alpha = 0.29 - 0.30$, $\eta_F = 8.7\times 10^{-11} - 8.5\times 10^{-11}$, and for $\alpha = 4.2 - 4.3$, $\eta_F = 8.5\times 10^{-11} - 8.6\times 10^{-11}$. These results satisfy the experimental bounds quite satisfactorily. We also find a blue-tilted spectrum of relic gravitational waves (GW) with frequencies lying in two narrow bands corresponding to the two regions of $\alpha$ indicated above, \emph{viz.,} $f_{\mathrm{end}} = 2.00\times 10^{10} - 2.10\times 10^{10}$ Hz for the lower region of $\alpha$ and $f_{\mathrm{end}} = 2.12\times 10^{10} - 2.11\times 10^{10}$ Hz for the higher region of $\alpha$, during the transition from inflation to kination, which is supported by the current literature. The present-day peak values of the amplitudes of GWs emitted during radiation domination, $(\Omega_{\mathrm{GW},0}^{(\mathrm{RD})})_{\mathrm{peak}}$, are found to be $\sim 10^{-7}$, which is consistent with the requirement from nucleosynthesis, and the associated root-mean-square values, $\Omega_{\mathrm{GW},0}^{(\mathrm{RD})}\sim 10^{-18}$, conform to the characteristic strain of ongoing GW detectors. In this way, a unified picture of the roles of the quintessential $\alpha$-attractor model considered here in inflation, quintessence, spontaneous baryogenesis and gravitational wave production emerges from the present study.
In spacetime dimensions $n+1\geq 4$, we show the existence of solutions of the Einstein vacuum equations which describe asymptotically de Sitter spacetimes with prescribed smooth data at the conformal boundary. This provides a short alternative proof of a special case of a result by Shlapentokh-Rothman and Rodnianski, and generalizes earlier results by Friedrich and Anderson to all dimensions.
We investigate the violation of the Leggett-Garg inequalities for a quantum field, focusing on the two-time quasi-probability distribution function of the dichotomic variable with a coarse-grained scalar field. The Leggett-Garg inequalities are violated depending on the quantum state of the field and the size of coarse-graining. We also demonstrate that the violation of the Leggett-Garg inequalities appears even for the vacuum state and the squeezed state by properly constructing the dichotomic variable and the projection operator.
The interior of a pure-state black hole is known to be reconstructable from the Petz map by collecting a sufficiently large amount of the emitted Hawking radiation. This was established based on the Euclidean replica wormhole, which comes from an ensemble average over gravitational theories. On the other hand, this means that the Page curve and the interior reconstruction are both ensemble averages; thus, there is a possibility of large errors. In the previous study \cite{Bousso:2023efc}, it was shown that the entropy of the Hawking radiation has fluctuations of order $e^{-S_{\mathbf{BH}}}$ and is thus typical in the ensemble. In the present article, we show that the fluctuations of the relative entropy difference in the encoding map and of the entanglement fidelity of the Petz map are both suppressed by $e^{-S_{\mathbf{BH}}}$ compared to the signals, establishing typicality in the ensemble. In addition, we also compute the entanglement loss of the encoding map.
We develop a static charged stellar model in $f(R,T)$ gravity, where the modification is assumed to be linear in $T$, the trace of the energy-momentum tensor. The exterior spacetime of the charged object is described by the Reissner-Nordstr\"om metric. The interior solution is obtained by invoking the Buchdahl-Vaidya-Tikekar ansatz for the metric potential $g_{rr}$, which has a clear geometric interpretation. A detailed physical analysis of the model clearly shows distinct physical features of the resulting stellar configuration under such a modification. We find the maximum compactness bound for this class of compact stars, which generalizes the Buchdahl bound to a charged sphere described in $f(R,T)$ gravity. Our result shows physical behaviour that is distinct from general relativity.
This research explores the dynamics of massive particles within the framework of Scalar-Tensor-Vector Gravity (STVG) in the vicinity of Schwarzschild-MOG black holes. We study the influence of STVG on the circular motion of a massive test particle around the Schwarzschild-MOG black hole and determine the positions of the Innermost Stable Circular Orbit (ISCO) and the Marginally Bound Orbit (MBO). The study extends its investigation to the angular velocity of massive particles orbiting Schwarzschild-MOG black holes, comparing it to the predictions of general relativity. Finally, we investigate the gravitational analogue of radiation reaction by considering the Landau-Abraham-Dirac equation in STVG, and show that, because of radiation reaction, a massive test particle falls into the black hole.
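A minimal sketch of how such orbit positions are located in practice, shown for the Schwarzschild baseline $f = 1 - 2M/r$ (the Schwarzschild-MOG case would substitute the $\alpha$-dependent lapse function):

```python
# Minimal sketch: locating the ISCO for a static, spherically symmetric
# metric with lapse f(r), from the effective potential V(r) = f(r)(1 + L^2/r^2).
# Circular orbits satisfy V'(r) = 0; the ISCO additionally has V''(r) = 0.
# Shown for the Schwarzschild baseline (recovering r = 6M); Schwarzschild-MOG
# would replace f with its alpha-dependent form.
import sympy as sp

r, M, L2 = sp.symbols("r M L2", positive=True)
f = 1 - 2 * M / r                  # Schwarzschild lapse (baseline)
V = f * (1 + L2 / r**2)            # effective potential for timelike geodesics

# Solve V'(r) = 0 for the squared angular momentum of a circular orbit at r.
L2_circ = sp.solve(sp.diff(V, r), L2)[0]
print(sp.simplify(L2_circ))        # expected: M*r**2/(r - 3*M)

# Impose marginal stability V''(r) = 0 with that angular momentum.
isco = sp.solve(sp.simplify(sp.diff(V, r, 2).subs(L2, L2_circ)), r)
print(isco)                        # expected: [6*M]
```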
In the second part of a three-part series, we examine semi-classical models of black holes and white holes, generalizing to the axisymmetric case by modelling their near-horizon geometry via the radiative Kerr-Vaidya metrics. We examine the form of the energy-momentum tensor (EMT) near the apparent horizon and the experiences of various observers. Two of the four possible classes of Kerr-Vaidya solutions are counterparts of the spherically-symmetric self-consistent solutions: evaporating black holes and expanding white holes. We demonstrate a consistent description of an accreting black hole based on the ingoing Kerr-Vaidya metric with increasing mass, and further show that the model can be extended to the case where the angular momentum to mass ratio varies. However, pathologies are identified in the expanding white hole geometry, which reinforce controversies arising from the classical and quantum instabilities of their static counterparts. We also show that the apparent horizon of a Kerr-Vaidya black hole admits a description in terms of a Rindler horizon.
We study nonlinear matter models compatible with radiative Robinson--Trautman spacetimes and analyze their stability and well-posedness. The results lead us to formulate a conjecture relating the (in)stability and well/ill-posedness to the character of singularity appearing in the solutions. We consider two types of nonlinear electrodynamics models, namely we provide a radiative ModMax solution and extend recent results for the RegMax model by considering the magnetically charged case. In both cases, we investigate linear perturbations around stationary spherically symmetric solutions to determine the stability and principal symbol of the system to argue about well-posedness of these geometries. Additionally, we consider a nonlinear sigma model as a source for Robinson--Trautman geometry. This leads to stationary solutions with toroidal (as opposed to spherical) topology thus demanding modification of the analysis.
The asymptotic symmetry group of general relativity in asymptotically flat spacetimes can be extended from the Bondi-Metzner-Sachs (BMS) group to the generalized BMS (GBMS) group suggested by Campiglia and Laddha, which includes arbitrary diffeomorphisms of the celestial two-sphere. It can be further extended to the Weyl BMS (BMSW) group suggested by Freidel, Oliveri, Pranzetti and Speziale, which includes general conformal transformations. We compute the action of fully nonlinear BMSW transformations on the leading-order Bondi-gauge metric functions: specifically, the induced metric, Bondi mass aspect, angular momentum aspect, and shear. These results generalize previous linearized results in the BMSW context by Freidel et al., as well as nonlinear results in the BMS context by Chen, Wang, Wang and Yau. The transformation laws will be useful for exploring implications of the BMSW group.
We outline how the symmetry groups of spacetime are interpreted in a gauge-theoretic approach. Specifically, we focus on the hypermomentum concept and discuss the hyperfluid, that appropriately generalizes the perfect (Euler) fluid of general relativity to the case of continuous media with microstructure. We demonstrate that a possible violation of Lorentz invariance is most adequately understood by means of non-vanishing nonmetricity of a metric-affine geometry of spacetime.
We study the Cauchy problem for the Einstein-Boltzmann system with soft potentials in a cosmological setting. We assume the Bianchi I symmetry to describe a spatially homogeneous, but anisotropic universe and consider a cosmological constant $ \Lambda > 0 $ to describe an accelerated expansion of the universe. For the Boltzmann equation we introduce a new weight function and apply the method of Illner and Shinbrot to obtain the future global existence of spatially homogeneous, small solutions. For the Einstein equations we assume that the initial value of the Hubble variable is close to $ ( \Lambda / 3 )^{ 1 / 2 } $. We obtain the future global existence and asymptotic behavior of spatially homogeneous solutions to the Einstein-Boltzmann system with soft potentials.
We study strong gravitational lensing by a class of static wormhole geometries. Analytical approaches are developed, and the results differ substantially from strong lensing by black holes, first reported by Bozza. We consider two distinct situations: one in which the observer and the source are on the same side of the wormhole throat, and one in which they are on opposite sides. Distinctive features in our study arise from the fact that photon and antiphoton spheres may be present on both sides of the wormhole throat, and that the throat may itself act as a photon sphere. We show that strong gravitational lensing thus opens up a rich variety of possibilities for relativistic image formation, some of which are novel and qualitatively distinct from black hole lensing. These can serve as clear indicators of exotic wormhole geometries.
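For comparison, in black hole lensing the strong-deflection limit derived by Bozza gives a logarithmically divergent deflection angle as the impact parameter $b$ approaches the photon-sphere critical value $b_c$,
$$\alpha(b) = -\bar{a}\,\ln\!\left(\frac{b}{b_c} - 1\right) + \bar{b} + \mathcal{O}\big((b - b_c)\ln(b - b_c)\big);$$
the wormhole geometries studied here modify the coefficients $\bar{a}$, $\bar{b}$ and, when the throat itself acts as a photon sphere, the structure of this expansion.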
We establish a Penrose-type inequality with angular momenta for four dimensional, biaxially symmetric, maximal, asymptotically flat initial data sets $(M,g,k)$ for the Einstein equations with fixed angular momenta and horizon inner boundary associated to a 3-sphere outermost minimal surface. Moreover, equality holds if and only if the initial data set is isometric to a canonical time slice of a stationary Myers-Perry black hole.
We use the local momentum space technique to obtain an expansion of the Feynman propagators for the scalar field and the graviton up to first order in the background curvature. The expressions for the propagators are cross-checked against the past literature as well as against the expressions for the traced heat kernel coefficients. The propagators so obtained are used to compute one-loop divergences in the Vilkovisky-DeWitt effective action for a scalar field non-minimally coupled to gravity in an arbitrary background spacetime metric. The Vilkovisky-DeWitt effective action is then compared with the standard effective action in the limit $\kappa = 0$, where $\kappa = 2/M_P$ in terms of the Planck mass. The comparison yields the important result that taking the limit $\kappa=0$ after computing the Vilkovisky-DeWitt effective action is not equivalent to computing the Vilkovisky-DeWitt effective action for the same theory in the absence of gravity.
The dynamics of the Restricted 3 Body Problem in the Post-Newtonian context has been, and continues to be, studied extensively; a number of characteristics, such as ejections of bodies from the system, precession of orbits, chaotic trajectories and collisions, have been investigated and classified. In this paper, I examine the extent to which these characteristics are attributable to Relativistic causes (more correctly, to Post-Newtonian approximations of Relativistic causes), or, more prosaically, whether they already exist in the Newtonian case, and whether these or other effects would also be caused by the finite propagation time of the field, the impact of which is less well covered in the literature. For this project I have written bespoke code for the simulations, and I present the results methodically, showing many of the steps in the process. In the Appendices, I discuss the technical challenges of constructing, testing and calibrating the code, so that other non-experts in the field can reproduce the code and results and build on the work that I have done. Although the work presented here must be considered preliminary, the results suggest that the impact of the field propagation time on the behaviour of the test mass in the restricted 3 body problem outweighs the impact of the Post-Newtonian corrections, in particular due to a transfer of angular momentum from the binary system to the test mass; this effect may therefore be of interest to those studying accretion discs of binary systems.
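An illustrative sketch (not the author's bespoke code) of the Newtonian baseline, the planar circular restricted three-body problem in the rotating frame, with the Jacobi constant used as an accuracy diagnostic:

```python
# Illustrative sketch: Newtonian planar circular restricted three-body
# problem in the rotating frame (the baseline against which Post-Newtonian
# and field-propagation effects are compared).
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.01  # mass ratio m2/(m1+m2); primaries fixed at (-mu, 0) and (1-mu, 0)

def crtbp(t, s):
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)        # distance to the heavier primary
    r2 = np.hypot(x - (1 - mu), y)  # distance to the lighter primary
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - (1 - mu)) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

s0 = [0.5, 0.0, 0.0, 0.8]  # arbitrary test-mass initial condition
sol = solve_ivp(crtbp, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)

# The Jacobi constant C = x^2 + y^2 + 2(1-mu)/r1 + 2 mu/r2 - v^2 is conserved
# in the Newtonian problem, so its drift measures integration accuracy.
x, y, vx, vy = sol.y
C = (x**2 + y**2 + 2 * (1 - mu) / np.hypot(x + mu, y)
     + 2 * mu / np.hypot(x - (1 - mu), y) - (vx**2 + vy**2))
print("Jacobi constant drift:", C.max() - C.min())
```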
Reasoning by analogy permeates theoretical developments in physics and astrophysics, motivated by the unreachable nature of many phenomena at play. For example, analogies have been used to understand black hole physics, leading to the development of a thermodynamic theory for these objects and the discovery of the Hawking effect. The latter, which results from quantum field theory on black hole space-times, changed the way physicists approached this subject: what had started as a mere aid to understanding became a possible source of evidence via the research programme of `analogue gravity', which builds on analogue models for field effects. Some of these analogue models can be realised in the laboratory, allowing experimental tests of field effects. Here, we present a historical perspective on the connection between the Hawking effect and analogue models. We also present a literature review of current research, bringing history and contemporary physics together. We argue that the history of analogue gravity and the Hawking effect is divided into three distinct phases based on how and why analogue models have been used to investigate fields in the vicinity of black holes. Furthermore, we find that modern research signals a transition to a new phase, in which the impetus for the use of analogue models has surpassed the problem they were originally designed to solve.
We consider how the reduced dynamics of an open quantum system coupled to an environment admits the Poincar\'e symmetry. The reduced dynamics is described by a dynamical map, which is given by tracing out the environment from the total unitary evolution without initial correlations. We investigate dynamical maps which are invariant under the Poincar\'e group. Based on the unitary representation theory of the Poincar\'e group, we develop a systematic way to construct such dynamical maps. Using this construction, we derive the dynamical map of a massive particle with a finite spin and of a massless particle with a finite spin and a nonzero momentum. The dynamical map of a spinless massive particle is exemplified, and the conservation of the Poincar\'e generators is discussed. We then find the map with Poincar\'e invariance and four-momentum conservation. Further, we show that the conservation of the angular momentum and the boost operator makes the map of a spinless massive particle unitary.
Lorentzian quantum gravity is believed to cure the pathologies encountered in Euclidean quantum gravity, such as the conformal factor problem. We show that this is the case for the Lorentzian Regge path integral expanded around a flat background. We illustrate how a subset of local changes of the triangulation, so-called Pachner moves, allows one to isolate the indefinite nature of the gravitational action at the discrete level. The latter can be accounted for by oppositely chosen deformed contours of integration. Moreover, we construct a discretization-invariant local path integral measure for 3D Lorentzian Regge calculus and point out obstructions to defining such a measure in 4D. We see the work presented here as a first step towards establishing the existence of the non-perturbative Lorentzian path integral for Regge calculus and related frameworks such as spin foams. An extensive appendix provides an overview of Lorentzian Regge calculus, using the recently introduced concept of the complexified Regge action, and derives useful geometric formulae and identities needed in the main text.
It is assumed that the non-singular big-bang birth of the Universe, as set forth by the Einstein-Cartan theory, brought about the appearance of the cosmic microwave and dark energy backgrounds, dark matter, gravitons, as well as Dirac particles. On account of this assumption, a two-component description of the motion of quarks and leptons prior to the occurrence of hadronization is presented within the framework of the torsionful $\epsilon$-formalism of Infeld and van der Waerden. The relevant field equations are settled on the basis of the implementation of conjugate minimal-coupling covariant derivative operators that additively carry typical potentials for the cosmic backgrounds, as geometrically specified in a previous work. It appears that the derivation of the wave equations which control the spacetime propagation of Dirac fields at very early stages of the cosmic evolution must be tied up with the applicability of certain subsidiary relations. The wave equations themselves suggest that quarks and leptons interact not only with both of the cosmic backgrounds, but also with dark matter. Nevertheless, it becomes manifest that the inner structure of the framework allowed for does not give rise to any interaction between gravitons and Dirac particles. The overall formulation ascribes an intrinsically non-geometric character to Dirac's theory, in addition to exhibiting formal evidence that dark energy and dark matter must have partaken of a cosmic process of hadronization.
In this work, we wish to address the question of whether the quasi-normal modes, the characteristic frequencies associated with perturbed black hole spacetimes and central to the stability of these black holes, are themselves stable. Though the differential operator governing the perturbation of black hole spacetimes is self-adjoint, the boundary conditions are dissipative in nature, so that the spectral theorem becomes inapplicable and there is no guarantee regarding the stability of the quasi-normal modes. We provide a general method of transforming to the hyperboloidal coordinate system, for both asymptotically flat and asymptotically de Sitter spacetimes, which neatly captures the dissipative boundary conditions; the differential operator then becomes non-self-adjoint. Employing pseudospectrum analysis, implemented numerically through the Chebyshev spectral method, we present how the quasi-normal modes drift away from their unperturbed values under external perturbation of the scattering potential. Intriguingly, for strong enough perturbation, even the fundamental quasi-normal mode associated with gravitational perturbations drifts away from its unperturbed position for asymptotically de Sitter black holes, in stark contrast to the case of asymptotically flat black holes. Besides presenting several other interesting results, specifically for asymptotically de Sitter black holes, we also discuss the implications of the instability of the fundamental quasi-normal mode for the strong cosmic censorship conjecture.
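A minimal sketch of the two numerical ingredients mentioned above, the Chebyshev collocation differentiation matrix (Trefethen's classic "cheb" construction) and a brute-force pseudospectrum probe via $\sigma_{\min}(zI - A)$; the operator below is a toy non-self-adjoint matrix, not the hyperboloidal QNM operator of the paper.

```python
# Minimal sketch: Chebyshev differentiation matrix plus a pointwise
# epsilon-pseudospectrum probe. The operator A is a toy example.
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on n+1 Gauss-Lobatto points."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # negative-sum trick for the diagonal
    return D, x

D, x = cheb(32)
A = D + np.diag(np.exp(-x**2))    # toy first-order non-self-adjoint operator

# Pseudospectrum probe: a small sigma_min(zI - A) far from the spectrum
# signals spectral instability (eigenvalues drift easily under perturbation).
for z in (1.0 + 1.0j, 5.0 + 5.0j):
    smin = np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]
    print(f"sigma_min at z = {z}: {smin:.3e}")
```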
We introduce a toy model of baryogenesis where our usual visible Universe is a 3-brane coevolving with a hidden 3-brane in a multidimensional bulk, in an ekpyrotic-like approach. The visible matter and antimatter sectors are coupled with the hidden matter and antimatter sectors, breaking the C/CP invariance and leading to baryogenesis occurring after the quark-gluon era. The issue of leptogenesis is also discussed. This model complements cosmological approaches in which dark matter and dark energy could naturally emerge from many-brane scenarios.
We find three new exact solutions of the vacuum Einstein equations without cosmological constant in more than three dimensions. We consider globally hyperbolic spacetimes in which almost abelian Lie groups act on the spaces isometrically and simply transitively. We give left-invariant metrics on the spaces and solve the Ricci-flat conditions for the spacetimes. In the four-dimensional case, our solutions correspond to the Bianchi type II vacuum solution. Combining our results with previous studies, all spatially homogeneous solutions whose spaces have zero-dimensional moduli spaces of left-invariant metrics are now found. For the simplest solution, we show that the spatial dimensions cannot all expand or contract simultaneously in the late-time limit.
The Ehrenfest paradox for a rotating ring is examined and a kinematic resolution, within the framework of the special theory of relativity, is presented. Two different ways by which a ring can be brought from rest to rotational motion, whether by keeping the rest lengths of the blocks constituting the ring constant or by keeping their lengths in the inertial frame constant, are explored and their effect on the length of the material ring in the inertial as well as the co-rotating frame is checked. It is found that the ring tears at a point in the former case and remains intact in the latter case, but in neither of the two cases is the motion of the ring Born rigid during the transition from rest to rotational motion.
Stable light rings, which are associated with spacetime instabilities, are known to exist in four-dimensional stationary axisymmetric spacetimes that solve the Einstein-Maxwell equations (so-called electrovacuum solutions, with Faraday tensor $F_{\mu \nu} \neq 0$); however, they are not permitted in pure vacuum ($F_{\mu \nu} = 0$). In this work, we extend this result to spacetimes with a non-zero cosmological constant $\Lambda$. In particular, we demonstrate that stable light rings are permitted in $\Lambda$-electrovacuum ($F_{\mu \nu} \neq 0$, $\Lambda \neq 0$), but ruled out in $\Lambda$-vacuum ($F_{\mu \nu} = 0$, $\Lambda \neq 0$).
Within the next decade the Laser Interferometer Space Antenna (LISA) is due to be launched, providing the opportunity to extract physics from stellar objects and systems, such as \textit{Extreme Mass Ratio Inspirals} (EMRIs), otherwise undetectable to ground-based interferometers and Pulsar Timing Arrays (PTAs). Unlike previous sources detected by the currently available observational methods, these sources can \textit{only} be simulated using an accurate computation of the gravitational self-force. Whereas the field has seen outstanding progress in the frequency domain, metric reconstruction and self-force calculations are still an open challenge in the time domain. Such computations would not only further corroborate frequency-domain calculations and models, but also allow for fully self-consistent evolution of the orbit under the effect of the self-force. Given that we have \textit{a priori} information about the local structure of the discontinuity at the particle, we show how to construct discontinuous spatial and temporal discretisations by operating on discontinuous Lagrange and Hermite interpolation formulae, and hence recover higher-order accuracy. In this work we demonstrate how this technique, in conjunction with a well-suited gauge choice (hyperboloidal slicing) and numerical methods (discontinuous collocation with time-symmetric integration), provides a relatively simple method-of-lines numerical algorithm for the problem. This is the first of a series of papers studying the behaviour of a point particle prescribing circular geodesic motion in Schwarzschild in the \textit{time domain}. Here we describe the numerical machinery necessary for these computations and show that our approach is not only capable of highly accurate radiation-flux measurements but is also suitable for evaluating the necessary field and its derivatives at the particle limit.
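The core interpolation trick can be illustrated in a few lines. A minimal sketch, with invented jump values (the paper's construction for the self-force problem is more elaborate): subtract the known jump polynomial on one side of the discontinuity, interpolate the now-smooth data with ordinary Lagrange polynomials, then add the jump back.

```python
# Minimal sketch of discontinuous interpolation with a priori known jumps.
# The jump values below are invented for illustration.
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as P

jumps = [0.5, -0.3, 0.2]  # assumed known jumps [f], [f'], [f''] at x = 0
g = [jumps[k] / factorial(k) for k in range(len(jumps))]  # jump polynomial coeffs

def f(x):
    # Test function: smooth part sin(x) plus the jump polynomial for x > 0.
    return np.sin(x) + np.where(x > 0, P.polyval(x, g), 0.0)

nodes = np.linspace(-1.0, 1.0, 9)  # stencil straddling the discontinuity
smooth = f(nodes) - np.where(nodes > 0, P.polyval(nodes, g), 0.0)  # = sin(nodes)

def interp(x):
    # Ordinary Lagrange interpolation of the smoothed data, jump restored.
    val = sum(
        smooth[i] * np.prod([(x - xj) / (xi - xj)
                             for j, xj in enumerate(nodes) if j != i])
        for i, xi in enumerate(nodes)
    )
    return val + (P.polyval(x, g) if x > 0 else 0.0)

x_test = 0.37
print("error with jump correction:", abs(interp(x_test) - f(x_test)))
```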
We re-analyze the far-zone contribution to the two-body conservative dynamics arising from the interaction between radiative and longitudinal modes, the latter sourced by mass and angular momentum; in the mass case this is known as the tail process. We verify the expected correspondence between two-loop self-energy amplitudes and the gluing of two classical emission amplitudes (one at leading order, one at one loop). In particular, we show that the factorization of the self-energy amplitude involving the angular momentum is violated when applying standard computation procedures, due to a violation of the Lorentz gauge condition commonly adopted in perturbative computations. We show, however, that a straightforward fix exists, as the violation corresponds to a consistent anomaly and can be re-absorbed by the variation of a suitable action functional.
This paper explores the evolutionary behavior of the Earth-satellite binary system within the framework of ghost-free parity-violating gravity, together with the corresponding discussion of the parity-violating effect on laser-ranged satellites. For this purpose, we start our study with the Parameterized Post-Newtonian (PPN) metric of this gravity theory to study the orbital evolution of the satellites, in which the space-time sector of the metric is modified due to the parity violation. With this modified PPN metric, we calculate the effects of the parity-violating sector of the metric on the time evolution of the orbital elements of an Earth-satellite binary system. We find that among the five orbital elements, the parity violation has no effect on the semi-latus rectum, while the eccentricity and ascending node are affected only in a periodic manner. For these three orbital elements the results coincide with those of general relativity and are also consistent with present experimental observations. In particular, parity violation produces non-zero corrections to the inclination and pericenter which accumulate with the evolution of time, indicating that the parity violation of gravity produces observable secular effects. The observational constraint on the parity-violating effect is derived by confronting the theoretical prediction with the observed LAGEOS II pericenter advance, giving a constraint on the parity-violating parameter space from satellite experiments.
We consider Einstein-Weyl gravity with a minimally coupled scalar field in four-dimensional spacetime. Using the Minimal Geometric Deformation (MGD) approach, we split the highly nonlinear coupled field equations into two subsystems, describing the background geometry and the scalar field source, respectively. Taking the Schwarzschild-AdS metric as the background geometry, we derive analytical approximate solutions for the scalar field and the deformation metric functions with the Homotopy Analysis Method (HAM), providing their analytical approximations to fourth order. Moreover, we discuss the accuracy of the analytical approximations, showing that they are sufficiently accurate throughout the exterior spacetime.
There is a common expectation that the big-bang singularity must be resolved in quantum gravity, but it is not clear how this can be achieved. A major obstacle here is the difficulty of interpreting wave-functions in quantum gravity. The standard quantum mechanical framework requires a notion of time evolution and a proper definition of an invariant inner product having a probability interpretation, both of which are seemingly problematic in quantum gravity. We show that these two issues can actually be solved by introducing the embedding coordinates as dynamical variables \`a la Isham and Kuchar. The extended theory is identical to general relativity but has a larger group of gauge symmetries. The Wheeler-DeWitt equations describe the change of the wave-function from one arbitrary spacelike slice to another; however, the constraint algebra makes this evolution purely kinematical and furthermore forces the wave-function to be constrained to the subspace of zero-energy states. An inner product can also be introduced that satisfies all the necessary requirements. In this formalism, the big bang appears as a finite field-space boundary on which certain boundary conditions must be imposed for mathematical consistency. We explicitly illustrate this point both in the full theory and in the minisuperspace approximation.
We develop a new setting in the framework of braneworld holography to describe a pair of coupled and entangled uniformly accelerated universes. The model consists of two branes embedded in AdS space, capping off the UV and IR regions and giving rise to a notion of dS wedge holography. Specializing to a three-dimensional bulk, we show that dS JT gravity can emerge as an effective braneworld theory, provided that fluctuations transverse to the brane are included. We study the holographic entanglement entropy between the branes as well as the holographic complexity within the `complexity=anything' proposal. We reproduce a Page curve with respect to an observer collecting radiation on the UV brane, as long as we take the limit where gravity decouples in that universe, so that it acts as a non-gravitating bath. The Page curve emerges due to momentum-space (UV/IR) entanglement and can be understood as analogous to the `confinement-deconfinement' transition in theories with a mass gap. Moreover, the analysis of complexity shows that the hyperfast growth phenomenon is displayed within one set of proposals, while late-time linear growth can be recovered for a different set. Our framework thus provides a new test ground for understanding quantum information concepts in dS space and dS holography.
Configurations of rotating black holes in the cubic Galileon theory are computed by means of spectral methods. The equations are written in the 3+1 formalism, and the coordinates are based on the maximal slicing condition and the spatial harmonic gauge. The black holes are described as apparent horizons in equilibrium. This enables the first fully consistent computation of rotating black holes in this theory. Several quantities are extracted from the solutions; in particular, the vanishing of the mass is confirmed. A link is made between this and the fact that the solutions do not obey the zeroth law of black hole thermodynamics.
The models of New General Relativity have recently attracted the attention of the research community, and there are some works studying their dynamical properties. The formal aspects of this investigation have mostly been restricted to the primary constraints in the Hamiltonian analysis. However, this is by far not enough for counting their degrees of freedom or for judging whether the models are viable. In this paper we study the linearised equations in vacuum around the trivial Minkowski tetrad. By taking the approach of cosmological perturbation theory, we show that the numbers of primary constraints are very easily seen without any need for genuine Hamiltonian techniques, and we give the full count of linearised degrees of freedom in the weak-field limit of each and every version of New General Relativity without matter.
In this work, we consider a minimal coupling between a massive Klein-Gordon (KG) quantum scalar free field and the Janis-Newman-Winicour (JNW) spherically symmetric static black hole background, and compute its Hawking temperature and luminosity. This is done by calculating asymptotic wave solutions near and far from the black hole horizon; these are orthogonal mode solutions of the local Hilbert spaces. Using these mode solutions, we calculate the Bogolubov coefficients and then investigate the number density matrix of the created particles. The calculation shows that the spectrum is not exactly the Planck black-body radiation energy density distribution but a "gray-body" distribution that depends on the frequency of the emitted Hawking particles; the difference is a non-vanishing absorptivity factor for particles backscattered after the horizon of the collapsing body forms. Our motivation is to determine the position at which the Hawking pairs are created, for which two different proposals exist, the so-called "firewall" and the "quantum atmosphere".
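The gray-body distribution referred to above has the standard Hawking form, with a frequency-dependent transmission (gray-body) factor $\Gamma_{\ell}(\omega)$ multiplying the Planck factor,
$$\langle n_{\omega\ell}\rangle = \frac{\Gamma_{\ell}(\omega)}{e^{\omega/T_H} - 1},$$
so that $\Gamma_{\ell}\to 1$ recovers the black-body spectrum; the non-vanishing absorptivity of the backscattered modes is what makes $\Gamma_{\ell} < 1$.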
Inspired by DGP gravity, this paper investigates higher derivative gravity localized on the brane. Similar to the case in the bulk, we find that brane-localized higher derivative gravity generally suffers from the ghost problem. Besides, the spectrum includes complex-mass modes and is thus unstable for some parameter ranges. On the other hand, DGP gravity and brane-localized Gauss-Bonnet gravity are well-defined for suitable parameters. We also derive novel relations between the ghost and non-ghost modes, which remain invariant under variations of the DGP and Gauss-Bonnet parameters. Furthermore, we discuss various constraints on the parameters of brane-localized gravity in AdS/BCFT and wedge holography, respectively. These include the tachyon-free and ghost-free conditions for the Kaluza-Klein and brane-bending modes, and the positive definiteness of the boundary central charges and of the entanglement entropy. The tachyon-free and ghost-free conditions impose the strongest restrictions, requiring non-negative DGP and Gauss-Bonnet terms on the brane. The ghost-free condition rules out one class of brane-localized higher derivative gravity. Thus, such higher derivative gravity should be understood as a low-energy effective theory on the brane, valid below the ghost energy scale. Finally, we briefly discuss the applications of our results.
The motion of water is governed by the Navier-Stokes equations, which are complemented by the continuity equation to ensure local mass conservation. In this work, we construct the relativistic generalization of these equations through a gradient expansion for a fluid with conserved charge in a curved $d$-dimensional spacetime. We adopt a general hydrodynamic frame approach and introduce the Irreducible-Structure (IS) algorithm, which is based on derivatives of both the expansion scalar and the shear and vorticity tensors. By this method, we systematically generate all permissible gradients up to a specified order and derive the most comprehensive constitutive relations for a charged fluid, accurate to third-order gradients. These constitutive relations are formulated to apply to ordinary, non-conformal, and conformally invariant charged fluids. Furthermore, we examine the hydrodynamic frame dependence of the transport coefficients for a non-conformal charged fluid up to the third order in the gradient expansion. The frame dependence of the scalar, vector, and tensor parts of the constitutive relations is obtained in terms of the field redefinitions of the fundamental hydrodynamic variables. Managing these frame dependencies is challenging due to their non-linear character. However, in the linear regime, these higher-order transformations become tractable, enabling the identification of a set of frame-invariant coefficients. An advantage of employing these coefficients is the possibility of studying the linear equations of motion in any chosen frame and, hence, we apply this approach to the Landau frame. Subsequently, these linear equations are solved in momentum space, yielding dispersion relations for shear, sound, and diffusive modes for a non-conformal charged fluid, expressed in terms of the frame-invariant transport coefficients.
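For orientation, at first order in gradients and in the Landau frame, the constitutive relations that this expansion generalizes take the familiar relativistic Navier-Stokes form
$$T^{\mu\nu} = \varepsilon\, u^{\mu} u^{\nu} + p\, \Delta^{\mu\nu} - 2\eta\, \sigma^{\mu\nu} - \zeta\, \theta\, \Delta^{\mu\nu}, \qquad J^{\mu} = n\, u^{\mu} - \sigma\, T\, \Delta^{\mu\nu} \partial_{\nu}\!\left(\frac{\mu}{T}\right),$$
with shear tensor $\sigma^{\mu\nu}$, expansion scalar $\theta$, projector $\Delta^{\mu\nu} = g^{\mu\nu} + u^{\mu} u^{\nu}$, and transport coefficients $\eta$, $\zeta$, $\sigma$; the third-order relations constructed in the paper extend this structure.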
The effects of Ohmic and magnetic density currents are investigated in linearized Born-Infeld electrodynamics. The linearization is introduced through an external magnetic field: the vector potential of Born-Infeld electrodynamics is expanded around a magnetic background field, which we take to be uniform and constant in this paper. From the linearized Born-Infeld equations, we obtain the solutions for the refractive index associated with the superposition of electromagnetic waves, first when the current density is governed by Ohm's law, and second when the current density is set by an isotropic magnetic conductivity. These solutions are functions of the magnetic background (${\bf B}$) and of the wave propagation direction (${\bf k}$), and they also depend on the material conductivity and on the wave frequency. As a consequence, the dispersion and the absorption of the waves differ when ${\bf B}$ is parallel to ${\bf k}$ from the case where ${\bf B}$ is perpendicular to ${\bf k}$ in the material medium. These characteristics of the refractive index in relation to the directions of ${\bf B}$ and ${\bf k}$ open a discussion of birefringence in the material medium.
While sanctions in political and economic areas are now part of the standard repertoire of Western countries (not always endorsed by UN mandates), sanctions in science and culture in general are new. Historically, fundamental research as conducted at international research centers such as CERN has long been seen as a driver for peace, and the Science4Peace idea has been celebrated for decades. However, much changed with the war against Ukraine, and most Western science organizations put scientific cooperation with Russia and Belarus on hold immediately after the start of the war in 2022. In addition, joint publications and participation in conferences were banned by some institutions, going against the ideal of free scientific exchange and communication. These and other points were the topics of an international virtual panel discussion organized by the Science4Peace Forum together with the "Natural Scientists Initiative - Responsibility for Peace and Sustainability" (NatWiss e.V.) in Germany and the journal "Wissenschaft und Frieden" (W&F) (see the Figure). Fellows from the Hamburg Institute for Peace Research and Security Policy (IFSH), scientists collaborating with the large physics research institutes DESY and CERN, as well as climate and futures researchers, were represented on the panel. In this Dossier we document the panel discussion and give additional perspectives. The authors of the individual sections present their personal reflections, which should not be taken as implying that they are endorsed by the Science4Peace Forum or any other organization. It is regrettable that some colleagues who expressed support for this document felt that it would be unwise for them to co-sign it.
In this study, we explore the azimuthal angle decorrelation of lepton-jet pairs in e-p and e-A collisions as a means for precision measurements of the three-dimensional structure of bound and free nucleons. Utilizing soft-collinear effective theory, we perform the first-ever resummation of this process in e-p collisions at NNLL accuracy using a recoil-free jet axis. Our results are validated against Pythia simulations. In e-A collisions, we address the complex interplay between three characteristic length scales: the medium length $L$, the mean free path of the energetic parton in the medium $\lambda$, and the hadronization length $L_h$. We demonstrate that in the thin-dilute limit, where $L \ll L_h$ and $L \sim \lambda$, this process can serve as a robust probe of the three-dimensional structure for bound nucleons. We conclude by offering predictions for future experiments at the Electron-Ion Collider within this limit.
The top quark mass plays a central role in our understanding of the Standard Model and its extrapolation to high energies. However, precision measurements of the top quark mass at hadron colliders are notoriously difficult, motivating the development of qualitatively new approaches to take advantage of the enormous datasets of top quarks at the Large Hadron Collider (LHC). Recent advances in jet substructure have enabled the precise theoretical description of angular correlations in energy flux. However, their application to precision mass measurements is less direct since they measure a dimensionless angular scale. Inspired by the use of standard candles in cosmology, we introduce a single energy correlator-based observable that reflects the characteristic angular scales of both the $W$-boson and the top quark masses. This gives direct access to the dimensionless quantity $m_{t}/m_{W}$, from which $m_{t}$ can be extracted in a well-defined short-distance mass scheme as a function of the well-known $m_{W}$ (our LHC Cepheid variable). We perform a Monte-Carlo-based study to demonstrate the properties of our observable and the statistical feasibility of its extraction from the Run 2 and 3 and High-Luminosity LHC data sets. The resulting $m_t$ has remarkably small uncertainties from non-perturbative effects and is insensitive to the parton distribution functions. We believe that our proposed observable provides a road map for a rich program to achieve a record precision top quark mass measurement at the LHC.
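Heuristically (a back-of-envelope scaling, not the paper's precise definition of the observable): the decay products of a boosted particle of mass $m_X$ and transverse momentum $p_T$ imprint a characteristic angular scale

$$\theta_X \sim \frac{2\,m_X}{p_T},$$

so the ratio of the top and $W$ angular peaks, $\theta_t/\theta_W \simeq m_t/m_W$, is independent of $p_T$ and of the overall energy calibration, which is the sense in which the well-measured $m_W$ serves as the standard candle.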
MoEDAL's Apparatus for Penetrating Particles (MAPP) Experiment is designed to expand the search for new physics at the LHC, significantly extending the physics program of the baseline MoEDAL Experiment. The Phase-1 MAPP detector (MAPP-1) is currently undergoing installation at the LHC's UA83 gallery adjacent to the LHCb/MoEDAL region at Interaction Point 8 and will begin data-taking in early 2024. The focus of the MAPP experiment is on the quest for new feebly interacting particles$\unicode{x2014}$avatars of new physics with extremely small Standard Model couplings, such as minicharged particles (mCPs). In this study, we present the results of a comprehensive analysis of MAPP-1's sensitivity to mCPs arising in the canonical model involving the kinetic mixing of a massless dark $U(1)$ gauge field with the Standard Model hypercharge gauge field. We focus on several dominant production mechanisms of mCPs at the LHC across the mass$\unicode{x2013}$mixing parameter space of interest to MAPP: Drell$\unicode{x2013}$Yan pair production, direct decays of heavy quarkonia and light vector mesons, and single Dalitz decays of pseudoscalar mesons. The $95\%$ confidence level background-free sensitivity of MAPP-1 for mCPs produced at the LHC's Run 3 and the HL-LHC through these mechanisms, along with projected constraints on the minicharged strongly interacting dark matter window, are reported. Our results indicate that MAPP-1 exhibits sensitivity to sizable regions of unconstrained parameter space and can probe effective charges as low as $8 \times 10^{-4}\:e$ and $6 \times 10^{-4}\:e$ for Run 3 and the HL-LHC, respectively.
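For context, the canonical kinetic-mixing setup referenced above has the standard form (our notation; electroweak $O(1)$ factors suppressed)

$$\mathcal{L} \supset -\frac{1}{4}B'_{\mu\nu}B'^{\mu\nu} - \frac{\kappa}{2}\,B'_{\mu\nu}B^{\mu\nu} + \bar\psi\left(i\gamma^\mu\partial_\mu + e'\gamma^\mu B'_\mu - m_\psi\right)\psi,$$

where $B_{\mu\nu}$ is the hypercharge field strength and $B'_{\mu\nu}$ that of the massless dark $U(1)$; after diagonalizing the kinetic terms, the dark fermion $\psi$ acquires an effective electric charge $\epsilon \sim \kappa e'/e$, the quantity bounded by the sensitivity study.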
The thermal behavior of effective, chiral-condensate-dependent $U_A(1)$ anomaly couplings is investigated using the functional renormalization group approach in the $N_f = 3$ flavor meson model. We derive flow equations for anomaly couplings that arise from instantons of higher topological charge and that also depend on the chiral condensate. These flow equations are solved numerically for the $|Q|=1,2$ topological sectors at finite temperature. Assuming that the anomaly couplings at the ultraviolet scale may also exhibit explicit temperature dependence, we calculate the thermal behavior of the effective potential. In accordance with our earlier study [G. Fejos and A. Patkos, Phys. Rev. D {\bf 105}, 096007 (2022)], we find that for increasing temperatures the anomalous breaking of chiral symmetry tends to strengthen toward the pseudocritical temperature ($T_C$) of chiral symmetry breaking. It is revealed that below $T_C$ around $\sim$10\% of the $U_A(1)$ breaking arises from the $|Q|=2$ topological sector. Correspondingly, a detailed analysis of the thermal behavior of the mass spectrum is also presented.
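For orientation, flow equations of this type descend from the exact Wetterich equation (the standard FRG starting point; the anomaly-coupling flows of the paper follow from suitable projections of it),

$$\partial_k \Gamma_k = \frac{1}{2}\,\mathrm{STr}\!\left[\left(\Gamma_k^{(2)} + R_k\right)^{-1}\partial_k R_k\right],$$

where $R_k$ is the infrared regulator and $\Gamma_k^{(2)}$ the second functional derivative of the scale-dependent effective action.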
Heavy-ion collisions, such as Pb-Pb or p-Pb, produce conditions of extreme temperature and density that make hadronic matter transition to a new state called the quark-gluon plasma (QGP). Simulations of heavy-ion collisions provide a way to improve our understanding of the QGP's properties. These simulations are composed of a hybrid description that yields final observables in agreement with data from accelerators like the LHC and RHIC. However, recent works have pointed out that these hydrodynamic simulations can display acausal behavior during the evolution in certain regions, indicating a deviation from a faithful representation of the underlying QCD dynamics. To pursue a better understanding of this problem and its consequences, this work simulates two different collision systems, Pb-Pb and p-Pb at $\sqrt{s_{NN}} = 5.02$ TeV. In this context, our results show that causality violation, even though always present, typically occurs in a small part of the system, quantified by the total energy fraction residing in the acausal region. In addition, the acausal behavior can be reduced with changes in the pre-hydrodynamic factors and in the definition of the bulk-viscous relaxation time. Since these aspects are fairly arbitrary in current simulation models, without solid guidance from the underlying theory, it is reasonable to use the disturbing presence of acausal behavior in current simulations to guide improvements towards more realistic modeling. While this work does not solve the acausality problem, it sheds more light on the issue and proposes a way to address it in simulations of heavy-ion collisions.
FIRE is a program which performs integration-by-parts (IBP) reduction of Feynman integrals. Originally, the C++ version of FIRE relied on the computer algebra system Fermat by Robert Lewis to simplify rational functions. We present an upgrade of FIRE which incorporates FUEL, a new library initially described in a separate publication, which enables a flexible choice of third-party computer algebra systems as simplifiers, as well as efficient communication with some of the simplifiers as C++ libraries rather than through Unix pipes. We achieve significant speedups for IBP reduction of Feynman integrals involving many kinematic variables when using an open-source backend based on FLINT, newly added in this work, or the Symbolica backend developed by Ben Ruijl as a potential successor of FORM.
We investigate the effects of a broken-scale-invariance unparticle at the MUonE experiment. We choose the broken model because the original scale-invariant model is severely suppressed by constraints from cosmology and low-energy experiments. The broken scale-invariant unparticle model is categorized into four types: pseudoscalar, scalar, axial-vector, and vector unparticles. Each unparticle type is characterized by three free parameters: the coupling constant $\lambda$, the scaling dimension $d$, and the energy scale $\mu$ at which scale invariance is broken. After considering all of the available constraints on the model, we find that the MUonE experiment is sensitive to (axial-)vector unparticles with $1 < d < 1.4$ and $1\le \mu \le 12$ GeV.
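For reference, a commonly used form of the unparticle propagator with broken scale invariance (Georgi's normalization from the original unparticle literature; the paper's conventions may differ) is, for the scalar case,

$$\Delta(q^2) = \frac{A_d}{2\sin(d\pi)}\,\frac{i}{\left[-(q^2-\mu^2)-i\epsilon\right]^{2-d}}, \qquad A_d = \frac{16\pi^{5/2}}{(2\pi)^{2d}}\,\frac{\Gamma(d+1/2)}{\Gamma(d-1)\,\Gamma(2d)},$$

which reduces to the scale-invariant case as $\mu \to 0$.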
We present a projective framework for the construction of Integration by Parts (IBP) identities and differential equations for Feynman integrals, working in Feynman-parameter space. This framework originates with very early results which emerged long before modern techniques for loop calculations were developed. Adapting and generalising these results to the modern language, we use simple tools of projective geometry to generate sets of IBP identities and differential equations in parameter space, with a technique applicable to any loop order. We test the viability of the method on simple diagrams at one and two loops, providing a unified viewpoint on several existing results.
To study the light-baryon light-cone distribution amplitudes (LCDAs), the spatial correlator of the light baryon has been calculated up to one-loop order in coordinate space. The results reveal certain identities that do not appear in the study of meson DAs and PDFs. The correlator is then renormalized using the ratio renormalization scheme, which involves division by a zero-momentum matrix element. Through matching in coordinate space, the light-baryon light-cone correlator can then be extracted from the spatial correlator. These results provide insights into both the ultraviolet (UV) and infrared (IR) structures of the light-baryon spatial correlator, which is valuable for further exploration in this field. Furthermore, the employed ratio scheme is an efficient and lattice-friendly renormalization approach suitable for short-distance applications. These results can be used to study light-baryon LCDAs with lattice techniques.
In this study, we employ the homogeneous balance method to obtain an analytical solution to the Balitsky-Kovchegov (BK) equation with running coupling. We utilize two distinct prescriptions for the running coupling scale, namely the saturation-scale-dependent running coupling and the dipole-momentum-dependent running coupling. By fitting the proton structure function experimental data, we determine the free parameters in the analytical solution; the resulting $\chi^{2}/{\rm d.o.f.}$ values are $1.07$ and $1.43$, respectively. With these definitive solutions, we predict exclusive $J/\psi$ production and demonstrate that the analytical solutions with running coupling are in excellent agreement with the $J/\psi$ differential and total cross sections. Furthermore, our numerical results indicate that the analytical solution of the BK equation with running coupling can provide a reliable description of both the proton structure function and exclusive vector meson production.
In this paper, we comprehensively explore bottomonium mass spectra and decay properties by numerically solving the non-relativistic Schr\"odinger equation with an approximate quark-antiquark potential. We also incorporate spin-dependent terms (spin-spin, spin-orbit, and tensor) to remove mass degeneracy and to obtain the mass spectra of excited states ($nS, nP, nD, nF$, $n = 1, 2, 3, 4, 5$). Using the Van Royen-Weisskopf formula, we investigate leptonic decay constants and di-leptonic, di-gamma, tri-gamma, and di-gluon decay widths, including first-order radiative corrections. We also compute radiative transition widths, which give better insight into the non-perturbative aspects of QCD. The present results for the mass spectroscopy and decay properties agree well with available experimental values and other theoretical predictions, and may provide useful guidance for upcoming experimental studies in the near future.
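As a rough illustration of the numerical procedure, the following minimal sketch solves the radial Schr\"odinger equation for a generic Cornell-type quark-antiquark potential by finite differences; the parameter values are illustrative placeholders, not the paper's fitted ones, and spin-dependent terms are omitted.

import numpy as np
from scipy.linalg import eigh_tridiagonal

# Illustrative Cornell potential V(r) = -4 alpha_s/(3 r) + b r (natural units:
# energies in GeV, distances in GeV^-1); parameters are placeholders, not a fit.
m_b, alpha_s, b = 4.7, 0.36, 0.18      # GeV, dimensionless, GeV^2
mu = m_b / 2.0                          # reduced mass of the b-bbar pair
ell = 0                                 # S-wave

N, r_max = 4000, 40.0                   # uniform radial grid, r in (0, r_max]
r = np.linspace(r_max / N, r_max, N)    # start at h to avoid the r=0 singularity
h = r[1] - r[0]
V = -4.0 * alpha_s / (3.0 * r) + b * r + ell * (ell + 1) / (2.0 * mu * r**2)

# Finite-difference Hamiltonian for u(r) = r R(r): -u''/(2 mu) + V u = E u
diag = 1.0 / (mu * h**2) + V
off = np.full(N - 1, -1.0 / (2.0 * mu * h**2))
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))

# Candidate S-wave masses M = 2 m_b + E_n; gives the rough Upsilon(nS) pattern
print([round(2.0 * m_b + e, 3) for e in E])

Spin-spin, spin-orbit, and tensor interactions would then be added on top of these radial solutions to lift the degeneracies, as done in the paper.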
The long-range near-side ridge phenomenon in two-particle correlations is crucial for understanding the motion of partons after high-energy heavy-ion collisions. While it has been well explained by the hydrodynamic flow of the quark-gluon plasma (QGP) in heavy-ion collisions, the recent observation of the ridge structure in small systems has led to debates about the applicability of hydrodynamic models, since collisions in small systems may not suffice to produce the medium required for QGP matter. The Momentum Kick Model (MKM), on the other hand, explains the long-range near-side ridge phenomenon by a kinematic process: high-momentum jet particles collide with medium partons, transfer momentum to them (the "kick" process), and induce collective motion of the kicked partons, resulting in the ridge phenomenon. The MKM has successfully described the ridge structure in heavy-ion collisions at RHIC. Furthermore, since the ridge phenomenon in small systems is prominent in high-multiplicity events, the MKM with multiplicity dependence (MKMwM) has been studied in pp collisions at the LHC using a relationship between the number of kicked partons and the multiplicity through an impact parameter. In this research, we extend the previous study with more recent, experimental-data-driven parameters and apply them to new measurements that span a wider multiplicity range with $p_T$ and $\Delta\Phi$ bins at the LHC. We not only provide a theoretical basis for the ridge behavior seen in the new measurements but also predict the ridge structure at the energies scheduled for the upcoming LHC Run 3.
Kaon physics is at a turning point -- while the rare-kaon experiments NA62 and KOTO are in full swing, the end of their lifetime is approaching and the future experimental landscape needs to be defined. With HIKE, KOTO-II and LHCb-Phase-II on the table and under scrutiny, it is a very good moment to take stock and contemplate the opportunities these experiments and theoretical developments provide for particle physics in the coming decade and beyond. This paper provides a compact summary of talks and discussions from the Kaons@CERN 2023 workshop.
In this paper we study the magnetic dipole moments of the newly discovered $\Omega_{c}(3185)^0$ and $\Omega_c(3327)^0$, assuming that they are $S$-wave $D \Xi$ and $D^* \Xi$ molecular pentaquark states, respectively. Together with these states, the magnetic dipole moments of possible $D_s \Xi$, $D_s^* \Xi$, $D \Xi^*$, and $D_s \Xi^*$ pentaquark states are also studied. The magnetic dipole moments of these singly-charmed pentaquarks are estimated within the framework of QCD light-cone sum rules utilizing the photon distribution amplitudes. The results obtained for the magnetic dipole moments can be useful in the search for and study of the properties of singly-charmed pentaquark states.
Our present understanding of elementary particle interactions is the synergistic result of developments in theoretical ideas and of experimental advances that led to the theory known as the Standard Model of particle physics. Despite countless experimental confirmations, we believe that it is not yet the ultimate theory, for a number of reasons that I briefly recall in this lecture. My main focus is on the role that Flavour Physics has played in the development of the theory in its present formulation, as well as on the opportunities that this sector offers to discover physics beyond the Standard Model.
One-loop form factors for $h\rightarrow e^-e^+\gamma$ and $e^-e^+\rightarrow h\gamma$ in the $U(1)_{B-L}$ extension of the Standard Model are presented in this work. The computations are performed in the 't Hooft-Veltman gauge. Analytical results are expressed in terms of Passarino-Veltman functions following the standard notation of {\tt LoopTools}, so that the one-loop form factors can be evaluated numerically with {\tt LoopTools}. For the phenomenological results, the signal strengths of $e^-e^+\rightarrow h\gamma$, defined as the ratios of cross sections computed in the $U(1)_{B-L}$ extension models to the corresponding Standard Model ones, are analyzed at future lepton colliders. The signal strengths for both the vector and chiral $B-L$ models are scanned over the physical parameter space. We find that the effects of the charged Higgs in the chiral $B-L$ model can be probed easily with the help of polarized initial beams at future lepton colliders.
We study the excitation spectrum of light and strange mesons in diffractive scattering. We identify different hadron resonances through partial-wave analysis, which inherently relies on analysis models. Besides statistical uncertainties, the model dependence of the analysis introduces dominant systematic uncertainties. We discuss several of their sources for the $\pi^-\pi^-\pi^+$ and $K^0_S K^-$ final states and present methods to reduce them. We have developed a new approach exploiting a priori knowledge of signal continuity over adjacent final-state-mass bins to stably fit a large pool of partial waves to our data, allowing a clean identification of very small signals in our large data sets. For two-body final states of scalar particles, such as $K^0_S K^-$, mathematical ambiguities in the partial-wave decomposition lead to the same intensity distribution for different combinations of amplitude values. We discuss these ambiguities and present methods to resolve them, or at least to reduce the number of possible solutions. Resolving these issues will allow for a complementary analysis of the $a_J$-like resonance sector in these two final states.
In this short note we present aspects of the energy and charge deposition within McDIPPER, a novel 3D-resolved model for the initial state of ultrarelativistic heavy-ion collisions based on the $k_\perp$-factorized Color Glass Condensate hybrid approach. This framework is an initial-state Monte Carlo event generator which deposits the relevant conserved charges (energy, electric charge, and baryon densities) both at midrapidity and in the forward/backward regions of the collision. The event-by-event generator computes the gluon and (anti-)quark phase-space densities using the IP-Sat model, from which the conserved charges can be extracted directly. In this work we present the centrality and collision-energy dependence of the deposited conserved quantities at midrapidity and over the full event, i.e., the full $4\pi$ solid-angle range.
In this work, we explore a new picture of baryon number ($\mathcal{B}$) violation inspired by the formal analogies between the Brout-Englert-Higgs model and the Ginzburg-Landau model. A possible manifestation of this new picture could be the transition between a pair of neutrons and a pair of antineutrons (i.e. $nn \rightarrow \bar{n}\bar{n}$), which violates $\mathcal{B}$ by 4 units. In the presence of superfluid pairing interactions, two neutrons can form a Cooper pair, which can be modeled by a semi-classical complex scalar field carrying two units of $\mathcal{B}$. In the presence of the $\mathcal{B}$-violating terms, the system does not possess a continuous $U(1)$ symmetry but instead respects a discrete $Z_2$ symmetry. Before the spontaneous breaking of the $Z_2$ symmetry, the ground state (vacuum) of the neutron Cooper field and that of the antineutron Cooper field have degenerate energy levels. After the spontaneous breaking of the $Z_2$ symmetry, the degeneracy of the ground states is removed and a domain wall interpolating between the two inequivalent ground states can emerge. If the vacuum energy of the neutron Cooper field is higher than that of the antineutron Cooper field, the false vacuum ($nn$) decays into the true vacuum ($\bar{n}\bar{n}$) through quantum tunneling across the domain wall. The $nn \rightarrow \bar{n}\bar{n}$ transition can therefore be considered a false vacuum decay proceeding by quantum tunneling and induced by the superfluid pairing interactions. Both $\mathcal{B}$-violating and CP-violating effects can be quite naturally accommodated in the $nn \rightarrow \bar{n}\bar{n}$ transition. A $\mathcal{B}$-violating process accompanied by CP violation would open a promising avenue for exploring new physics beyond the Standard Model.
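A minimal toy parametrization of the symmetry structure described above (our illustration, not necessarily the authors' Lagrangian): for a Cooper-pair field $\phi$ carrying $\mathcal{B}=2$, a potential such as

$$V(\phi) = -m^2\,|\phi|^2 + \lambda\,|\phi|^4 - \epsilon\left(\phi^2 + \phi^{*2}\right)$$

has its continuous $U(1)_{\mathcal{B}}$ phase symmetry broken by the $\epsilon$ term, which changes $\mathcal{B}$ by four units (as in $nn \to \bar{n}\bar{n}$) while preserving a discrete $Z_2$; the two resulting near-degenerate minima play the roles of the $nn$ and $\bar{n}\bar{n}$ vacua, and a small energy splitting between them drives the false-vacuum decay through the domain wall.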
In this paper we consider a generalized Yukawa model, which consists of two Dirac fermions and two scalar fields. One of the scalar bosons in this model is assumed to be much heavier than the other particles, so it decouples at low energies. The low-energy effective Lagrangian (EL) of this model is derived; this Lagrangian describes the contributions of the heavy scalar to observables. We consider the cross sections of $s$- and $t$-channel processes obtained within the complete model and within its low-energy approximation, and we analyze the contributions to these cross sections of one-loop corrections coming from the light particles of the model. These are corrections to the parameters of the heavy boson and contributions of one-loop mixing between the light and heavy scalars. We identify ranges of the model's Yukawa couplings where these corrections are significant. We find that if the interaction between the fermions and either the light or the heavy scalar is strong enough, then the derived EL cannot be applied to describe the cross sections of the analyzed processes, even when the heavy scalar decouples. Implications of the present results for searches for new particles beyond the Standard Model are considered.
A Bayesian calibration, using experimental data from 2.76 $A$ TeV Pb-Pb collisions at the LHC, of a novel hybrid model is presented in which the usual pre-hydrodynamic and viscous relativistic fluid dynamic (vRFD) stages are replaced by a viscous anisotropic hydrodynamic (VAH) core that smoothly interpolates between the initial expansion-dominated, approximately boost-invariant, longitudinally free-streaming stage and the subsequent collision-dominated (3+1)-dimensional standard vRFD stage. This model yields meaningful constraints on the temperature-dependent specific shear and bulk viscosities, $(\eta/s)(T)$ and $(\zeta/s)(T)$, for temperatures up to about $700$ MeV (i.e., over twice the range that could be explored with earlier models). With its best-fit model parameters, the calibrated VAH model makes highly successful predictions for additional $p_T$-dependent observables for which high-quality experimental data are available but which were not used for the model calibration.
We present results for the large-$N$ limit of the chiral condensate computed from twisted reduced models. We follow a two-fold strategy: one approach consists of extracting the condensate from the quark-mass dependence of the pion mass, the other of extracting it from the mode number of the Dirac operator.
We analyze the two-loop $R$-parity violating supersymmetric contribution to the electric and chromoelectric dipole moments of fermions with a lepton and a gaugino in the intermediate state. It is found that this contribution can be sufficiently enhanced at large $\tan \beta$ to become comparable in size to the currently known $R$-parity violating Barr-Zee type process for TeV-scale supersymmetry breaking. We also give new upper limits on $R$-parity violating couplings from the atomic electric dipole moment and molecular beam experiments.
We extend the nonrelativistic QCD framework to explore the nature of the newly discovered di-$J/\psi$ resonances. Assuming they are either molecule-like states or tetraquarks, we calculate their hadroproduction cross sections at the LHC. We find that the observed resonances are most likely spin-2 particles, and that spin-0 counterparts should exist near these resonances. The ratios of the production cross sections of the observed resonances to those of the as-yet-unobserved spin-0 states are also presented, which might help to distinguish molecule-like states from tetraquarks.
We show that, in addition to the counting of canonical dimensions, a counting of loop orders is necessary to fully specify the power counting of Standard Model Effective Field Theory (SMEFT). Using concrete examples, we demonstrate that considering the canonical dimensions of operators alone may lead to inconsistent results. Counting both canonical dimensions and loop orders establishes a clear hierarchy of the terms in SMEFT. In practice, this serves to identify, and focus on, the potentially dominant effects in any given high-energy process in a meaningful way. Additionally, it leads to a consistent limitation of the number of free parameters in SMEFT applications.
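Schematically (a standard power-counting estimate, given here as our illustration rather than the paper's exact formula), the relative contribution of an operator of canonical dimension $d$ generated at loop order $L$ scales as

$$\frac{\delta\mathcal{A}}{\mathcal{A}_{\rm SM}} \sim \left(\frac{E}{\Lambda}\right)^{d-4}\left(\frac{g^2}{16\pi^2}\right)^{L},$$

so that, for example, a tree-level dimension-8 term can compete numerically with a one-loop dimension-6 term, which is why dimension counting alone does not fix the hierarchy.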
A new Goldstone particle, the Majoron, is introduced to explain the origin of neutrino mass in some new-physics models that assume neutrinos are Majorana particles. By expanding the signal region and using a likelihood analysis, it becomes possible to search for the Majoron at experiments originally designed to search for $\mu-e$ conversion. For the COMET experiment, the sensitivity to the process $\mu \rightarrow eJ$ can reach ${\mathcal{B}}(\mu \rightarrow eJ)=2.3\times 10^{-5}$ in Phase-I and $O(10^{-8})$ in Phase-II. The sensitivities of Majoron searches at other future experiments are also discussed in this article.
A wide range of dark matter candidates have been proposed and are actively being searched for in a large number of experiments, both at high (TeV) and low (sub-meV) energies. One dark matter candidate, a deeply bound $uuddss$ sexaquark $S$ with mass $\sim 2$ GeV (having the same quark content as the hypothesized H-dibaryon, but long-lived), is particularly difficult to explore experimentally. In this paper, we propose a scheme in which such a state could be produced at rest through the formation of $\bar{p}\,^3$He antiprotonic atoms and their annihilation into $S + K^+K^+\pi^-$, identified both through the unique tag of an $S=+2$, $Q=+1$ final state and through full kinematic reconstruction of the final state recoiling against it.
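The kinematic tag amounts to a missing-mass measurement: with the antiprotonic atom essentially at rest (binding energies are negligible at this level), the recoiling state satisfies

$$m_S^2 = \left(m_{\bar p} + m_{^3{\rm He}} - \sum_i E_i\right)^2 - \Big|\sum_i \vec{p}_i\Big|^2,$$

where the sums run over the measured $K^+K^+\pi^-$ momenta.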
The optimisation (tuning) of the free parameters of Monte Carlo event generators by comparing their predictions with data is important, since the simulations are used to calculate experimental efficiency and acceptance corrections, or to provide predictions for signatures of hypothetical new processes in experiments. We present a tuning procedure that is based on Bayesian reasoning and allows for a proper statistical interpretation of the results. The parameter space is fully explored using Markov Chain Monte Carlo. We apply the tuning procedure to the Herwig7 event generator, with both the cluster and the string hadronization models, using a large set of measurements from hadronic Z-boson decays produced at LEP in $e^{+}e^{-}$ collisions. Furthermore, we introduce a coherent propagation of uncertainties from the space of parameters to the space of observables, and we show the effects of including experimental correlations of the measurements. To allow comparison with the approaches of other groups, we repeat the tuning with weights applied to individual measurements.
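A minimal sketch of the Bayesian/MCMC ingredient, assuming a hypothetical closed-form stand-in for the generator response (real tunes interpolate the expensive generator with a surrogate) and a Gaussian likelihood with a bin covariance; this illustrates the statistical machinery, not the paper's actual pipeline.

import numpy as np
import emcee

# Hypothetical closed-form stand-in for the generator response; real tunes
# interpolate the expensive generator with a surrogate (e.g. polynomials).
def predict(theta, x):
    a, b = theta
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0.1, 1.0, 20)                      # observable bins
data = predict((1.0, 2.0), x) + 0.02 * rng.normal(size=x.size)
cov = np.diag(np.full(x.size, 0.02**2))            # add bin correlations here if known
cov_inv = np.linalg.inv(cov)

def log_posterior(theta):
    a, b = theta
    if not (0.0 < a < 5.0 and 0.0 < b < 10.0):     # flat priors on both parameters
        return -np.inf
    r = predict(theta, x) - data
    return -0.5 * r @ cov_inv @ r                  # Gaussian likelihood

ndim, nwalkers = 2, 16
p0 = rng.uniform([0.5, 1.0], [1.5, 3.0], size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)  # posterior on (a, b)
print(samples.mean(axis=0), samples.std(axis=0))

With a genuine generator surrogate in place of predict, the same chain yields the posterior from which the parameter-to-observable uncertainty propagation described above follows.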
Partial wave decomposition is one of the main tools in modern S-matrix studies. We present a method to compute partial waves for $2\to2$ scattering of spinning particles in arbitrary spacetime dimension. We identify partial waves as matrix elements of the rotation group with definite covariance properties under a subgroup. This allows us to use a variety of techniques from harmonic analysis to construct a novel algebra of weight-shifting operators. All spinning partial waves are generated by the action of these operators on a set of known scalar seeds. The text is accompanied by a {\it Mathematica} notebook that automatically generates partial waves. These results pave the way to systematic studies of the spinning S-matrix bootstrap and positivity bounds.
This is the first of two companion papers in which we prove that the recently discovered non-perturbative mechanism capable of giving mass to elementary fermions can, in the presence of weak interactions, also generate a mass for the $W$, and can thus be used as a viable alternative to the Higgs scenario. The non-perturbative fermion and $W$ masses have the form $m_f\sim C_f(\alpha)\Lambda_{RGI}$ and $M_W\sim g_w c_w(\alpha)\Lambda_{RGI}$, with $C_f(\alpha)$ and $c_w(\alpha)$ functions of the gauge couplings, $g_w$ the weak coupling, and $\Lambda_{RGI}$ the RGI scale of the theory. These parametric structures imply that a realistic model must include a new sector of massive fermions (Tera-fermions) subjected, besides Standard Model interactions, to some kind of super-strong gauge interactions (Tera-interactions), so that the RGI scale of the full theory, $\Lambda_T$, will be in the few-TeV region. The extension of the model to include hypercharge and particles that are singlets under strong interactions (leptons and Tera-leptons) is the focus of the companion paper, where we also discuss some phenomenological implications of this approach. One can show that, upon integrating out the (heavy) Tera-degrees of freedom, the resulting low-energy effective Lagrangian closely resembles the Standard Model Lagrangian. The argument rests on the conjecture that the 125 GeV resonance detected at the LHC is a $W^+W^-/ZZ$ composite state, bound by Tera-particle exchanges, and not an elementary object. Although we restrict ourselves to the one-family case, neglecting weak isospin splitting, this scheme has a certain number of merits with respect to the Standard Model. It offers a radical solution of the Higgs mass tuning problem, as there is no Higgs. It allows identifying $\Lambda_T$ with the electroweak scale. It helps reduce the number of Standard Model parameters, as elementary particle masses are determined by the dynamics.
This is the second of two companion papers in which we continue the construction of an elementary particle model with no Higgs. Here we show that the recently identified non-perturbative field-theoretical mechanism, alternative to the Higgs mechanism and capable of giving masses to quarks, Tera-quarks and the $W$, can also provide mass to leptons and Tera-leptons when the model is extended to include, besides strong, Tera-strong and weak interactions, also hypercharge. In the present approach elementary fermion masses are not free parameters but are determined by the dynamics of the theory. We derive parametric formulae for elementary particle masses from which we can ``predict'' the order of magnitude of the scale of the new Tera-interaction and obtain crude numerical estimates for mass ratios in fair agreement with phenomenology. The interest of considering elementary particle models endowed with this kind of non-perturbative mass generation mechanism is that they allow solving some of the conceptual problems of the present formulation of the Standard Model, namely the origin of the electroweak scale and naturalness.
Using the result of the VES Collaboration for $Br(J/\psi\to\gamma f_0(1710))$, we estimate the absolute branching fractions for the $f_0(1710)$ decays into $\pi\pi$, $K\bar K$, $\eta\eta$, $\omega\omega$, and $\omega\phi$. In addition, we estimate $Br(\psi(2S)\to\gamma f_0(1710))\approx3.5\times10^{-5}$ and $Br(\Upsilon(1S)\to\gamma f_0(1710))\approx1\times10^{-5}$.
Both parameters in the Higgs field's potential, its mass and quartic coupling, appear fine-tuned to near-critical values, which gives rise to the hierarchy problem and the metastability of the electroweak vacuum. Whereas such behavior appears puzzling in the context of particle physics, it is a common feature of dynamical systems, which has led to the suggestion that the parameters of the Higgs potential could be set through some dynamical process. In this article, we discuss how this notion could be extended to physics beyond the Standard Model (SM). We first review in which sense the SM Higgs parameters can be understood as near-critical and show that this notion can be extrapolated in a unique way for a generic class of SM extensions. Our main result is a prediction for the parameters of such models in terms of their corresponding Standard Model effective field theory Wilson coefficients and corresponding matching scale. For generic models, our result suggests that the scale of new (bosonic) physics lies close to the instability scale. We explore the potentially observable consequences of this connection, and illustrate aspects of our analysis with a concrete example. Lastly, we discuss implications of our results for several mechanisms of dynamical vacuum selection associated with various Beyond-Standard-Model (BSM) constructions.
A model of baryogenesis is introduced in which our usual visible Universe is a 3-brane coevolving with a hidden 3-brane in a multidimensional bulk. The visible matter and antimatter sectors are naturally coupled to the hidden matter and antimatter sectors, breaking C/CP invariance and leading to baryogenesis occurring after the quark-gluon era. The issue of leptogenesis is also discussed. The symmetry breaking occurs spontaneously, in connection with the presence of an extra scalar field supported by the $U(1)\otimes U(1)$ gauge group, which extends the conventional electromagnetic gauge field in the two-brane universe. Scalar waves also emerge as potential dark matter candidates and as a means of constraining the model.
Cascaded Compton scattering and Breit-Wheeler (BW) processes play fundamental roles in high-energy astrophysical sources and laser-driven quantum electrodynamics (QED) plasmas. A thorough comprehension of the polarization transfer in these cascaded processes is essential for elucidating the polarization mechanism of high-energy cosmic gamma rays and laser-driven QED plasmas. In this study, we employ analytical cross-section calculations and Monte Carlo (MC) numerical simulations to investigate the polarization transfer in the cascade of electron-seeded inverse Compton scattering (ICS) and the BW process. Theoretical analysis indicates that the polarization of background photons can transfer effectively to final-state particles in the first-generation cascade due to helicity transfer. Through MC simulations involving polarized background photons and unpolarized seed electrons, we obtain the characteristic polarization curves, as a function of particle energy, produced by the cascaded processes of ICS and BW pair production. Our results demonstrate that the first-generation photons from ICS exhibit non-decaying, stair-shaped polarization curves, in contrast to the linearly decaying ones of the first-generation electrons. Interestingly, this trend can be reversed in the second-generation cascade, facilitated by the presence of polarized first-generation BW pairs with fluctuating polarization curves. The cascade culminates with the production of second-generation BW pairs, once the energy of the second-generation photons falls below the BW threshold. Our findings provide crucial insights into the cascaded Compton scattering and BW processes, significantly contributing to the understanding and further exploration of laser-driven QED plasma creation in laboratory settings and of high-energy astrophysics.
Several indications for neutral scalars have been observed at the LHC. One of them, a broad resonance peaked at about 650 GeV which we call H(650), was first observed by an outsider combining published histograms from ATLAS and CMS on $ZZ\to 4\ell$ searches, and this combination shows a local significance close to 4 s.d. Since then, CMS has reported two other indications at the same mass, with similar local significances: $H\to WW\to 2\ell 2\nu$ and $H(650)\to b\bar{b}h(125)$, where $m_{b\bar{b}}\sim 90$ GeV and $h(125)\to\gamma\gamma$. ATLAS has completed its analysis of $ZZ\to 4\ell$, from which we infer an indication for H(650) with 3.5 s.d. significance. Assuming that the mass is already known from the former set, and combining these three results, one gets a global statistical significance above 6 s.d. H(650) has a coupling to $WW$ similar to that of h(125), and we therefore argue that a sum rule (SR) required by unitarity for $W^+W^-$ scattering implies a compensating effect from a doubly charged scalar $H^{++}$ with a large coupling to $W^+W^+$. We therefore predict that this mode should become visible through the vector boson fusion process $W^+W^+\to H^{++}$, naturally provided by the LHC. A recent indication for $H^{++}(450)\to W^+W^+$ from ATLAS allows a model-independent interpretation of this result through the SR constraint, which gives ${\rm BR}(H^{++}\to W^+W^+)\sim 10\%$, implying the occurrence of additional decay modes $H^+W^+$ and $H^+H^+$ from one or several light $H^+$ with masses below $m_{H^{++}} - m_W$ or $m_{H^{++}}/2$, that is, $m_{H^+} < 370$ GeV or $225$ GeV. A similar analysis is performed for $H^+(375)\to ZW$, indicated by ATLAS and CMS. Both channels suggest a scalar field content similar to the Georgi-Machacek model with triplets, at variance with the models usually considered. An alternative interpretation of the 650 GeV resonance as a tensor is also briefly discussed. Implications for precision measurements are presented.
The path optimization method, which was proposed to control the sign problem in quantum field theories with continuous degrees of freedom by means of machine learning, is applied to a spin model with discrete degrees of freedom. The method works by replacing the spins with continuous dynamical variables via the Hubbard-Stratonovich transformation and the spin sum with an integral. The one-dimensional (Lenz-)Ising model with a complex coupling constant is used as a laboratory for the sign problem in spin models. The average phase factor is enhanced by the path optimization method, indicating that the method can weaken the sign problem. Our results reproduce the analytic values with controlled statistical errors.
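Schematically, and in our conventions rather than necessarily the paper's normalization, the two steps read, for a spin coupling matrix $K_{ij}$,

$$Z = \sum_{\{s_i=\pm1\}} e^{\frac{1}{2}\sum_{ij} s_i K_{ij} s_j} \;\propto\; \int \prod_i d\phi_i \; \exp\!\Big[-\frac{1}{2}\sum_{ij}\phi_i (K^{-1})_{ij}\phi_j + \sum_i \ln\left(2\cosh\phi_i\right)\Big],$$

valid for invertible $K$ (suitably shifted if needed); path optimization then deforms the $\phi$ integration contour into the complex plane to enhance the average phase factor when $K$ is complex.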
Baryon-number violation (BV) occurs in the standard electroweak model. According to the Bloch-wave picture, the BV event rate should be significantly enhanced when the proton-proton collision center-of-mass (COM) energy goes beyond the sphaleron barrier height $E_{\rm sph}\simeq 9.0\,{\rm TeV}$. Here we compare the BV event rates at different COM energies, using the Bloch-wave band structure and the CT18 parton distribution function data, with the phase-space suppression factor included. As an example, the BV cross section at 25 TeV is 4 orders of magnitude bigger than that at 13 TeV. The probability of detection is further enhanced at higher energies, since an event at higher energy will produce on average more same-sign charged leptons.
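The comparison rests on the standard factorized convolution (schematic form, in our notation)

$$\sigma_{\rm BV}(s) = \sum_{i,j}\int_0^1 dx_1\,dx_2\; f_i(x_1,\mu_F)\,f_j(x_2,\mu_F)\;\hat\sigma^{\,ij}_{\rm BV}(\hat{s}=x_1 x_2 s),$$

with $f_i$ the CT18 parton distribution functions and $\hat\sigma_{\rm BV}$ the partonic BV cross section carrying the Bloch-wave enhancement and the phase-space suppression factor.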
Recently, Belle II reported the observation of the decay $B\to K M_X$, where $M_X$ denotes the missing mass, with a branching ratio much exceeding ${\cal B}(B\to K \nu\bar\nu)$, which is the only Standard Model (SM) process contributing to this reaction. If confirmed, this might be an indication of new non-SM particles produced in this decay. One possible explanation of the observed effect could be light dark-matter (DM) particles produced via a scalar mediator field. We give simple arguments that a combined analysis of the $B\to K M_X$ and $B\to K^* M_X$ reactions would be a clean probe of the scalar mediator scenario: (i) making use of the observed value ${\cal B}(B\to K M_X)\simeq 5.4\, {\cal B}(B\to K \nu\bar\nu)_{\rm SM}$ and (ii) assuming that the effect is due to light dark matter coupling to the top quark via a {\it scalar} mediator field, one finds an upper limit ${\cal B}(B\to K^* M_X) < 2.8 \, {\cal B}(B\to K^* \nu\bar\nu)_{\rm SM}$. Within the discussed scenario, this upper limit depends neither on the mass of the scalar mediator nor on the specific details of the unobserved dark-matter particles in the final state.
We present the first determination of the proton mechanical radius. The result was obtained by employing a novel theoretical approach that connects experimental data on deeply virtual Compton scattering with the spin-2 interaction characteristic of gravity coupling to matter. We find that the proton mechanical radius is significantly smaller than its charge radius, consistent with the latest lattice QCD computation.
The $U(1)_{L_\mu-L_\tau}$ extended Standard Model (SM) is anomaly free and contains a massive $Z^\prime$ boson. The associated Higgs, which generates the $Z'$ mass via spontaneous symmetry breaking (SSB), can mix with the SM Higgs. The new parameters related to the extra Higgs cannot be probed at the LHC with final states containing no more than $4$ leptons; therefore, we use signatures with at least $6$ leptons to probe the parameter space in LHC experiments. Since the SM predicts a cross section for a final state with at least $6$ leptons that is below current LHC visibility, the background is negligible, making this channel an extremely sensitive probe. We find that in a limited region of the parameter space this channel can strongly constrain the associated $U(1)_{L_\mu-L_\tau}$ coupling constant, even more so than final states with $4$ or fewer leptons.
The transverse-momentum-dependent distributions (TMDs), which are defined by gauge-invariant 3D parton correlators with staple-shaped lightlike Wilson lines, can be calculated from quark and gluon correlators fixed in the Coulomb gauge on a Euclidean lattice. These quantities can be expressed gauge-invariantly as the correlators of dressed fields in the Coulomb gauge, which reduce to the standard TMD correlators in the infinite boost limit. In the framework of Large-Momentum Effective Theory, a quasi-TMD defined from such correlators in a large-momentum hadron state can be matched to the TMD via a factorization formula, whose exact form is derived using Soft Collinear Effective Theory and verified at one-loop order. Compared to the currently used gauge-invariant quasi-TMDs, this new method can substantially improve statistical precision and simplify renormalization, thus providing a more efficient way to calculate TMDs in lattice QCD.
Exploiting crossing symmetry, the hadron scale pion valence quark distribution function is used to predict the kindred elementary valence quark fragmentation function (FF). This function defines the kernel of a quark jet fragmentation equation, which is solved to obtain the full pion FFs. After evolution to a scale typical of FF fits to data, the results for quark FFs are seen to compare favourably with such fits. However, the gluon FF is markedly different. Notably, although FF evolution equations do not themselves guarantee momentum conservation, inclusion of a gluon FF which, for four quark flavours, distributes roughly 11% of the total light-front momentum fraction, is sufficient to restore momentum conservation under evolution. Overall, significant uncertainty is attached to FFs determined via fits to data; hence, the features of the predictions described herein could potentially provide useful guidance for future such studies.
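For reference, the momentum sum rule in question has the standard form

$$\sum_{h}\int_0^1 dz\; z\,D_i^h(z;\zeta) = 1 \qquad \text{for each parton species } i,$$

so the quoted $\sim$11\% is the share of the total light-front momentum attributed to the gluon FF, fixed by requiring that this equality survive evolution.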
We comment on the paper ``Numerical study of the SWKB condition of novel classes of exactly solvable systems'' [Y. Nasuda and N. Sawado, Mod. Phys. Lett. A 36, 2150025 (2021)]. We show that it misrepresents our prior work [J. Bougie, A. Gangopadhyaya and C. Rasinariu, J. Phys. A: Math. Theor. 51, 375202 (2018)], and clarify this misunderstanding.
We explore the T-duality web of 6D Heterotic Little String Theories, focusing on flavor algebra reducing deformations. A careful analysis of the full flavor algebra, including Abelian factors, shows that the flavor rank is preserved under T-duality. This suggests a new T-duality invariant in addition to the Coulomb branch dimension and the two-group structure constants. We also engineer Little String Theories with non-simply laced flavor algebras, whose appearance we attribute to certain discrete 3-form fluxes in M-theory. Geometrically, these theories are engineered in F-theory with non-K\"ahler favorable K3 fibers. This geometric origin leads us to propose that freezing fluxes are preserved across T-duality. Along the way, we discuss various exotic models, including two inequivalent $\text{Spin(32)}/\mathbb{Z}_2$ models that are dual to the same $\text{E}_8 \times \text{E}_8$ theory, and a family of self-T-dual models.
In this paper we investigate the vacuum expectation values of the field squared and the energy-momentum tensor associated with a charged massive scalar quantum field in a $(1+D)$-dimensional de Sitter spacetime, induced by a plate (flat boundary) and a magnetic-flux-carrying cosmic string. In our analysis we assume that the flat boundary is perpendicular to the string and that the scalar field obeys the Robin boundary condition on the plate. In order to develop this analysis, we obtain the complete set of normalized positive-energy solutions of the Klein-Gordon equation compatible with the model setup. Having obtained these bosonic modes, we construct the corresponding Wightman function. The latter is given by the sum of two terms: one associated with the boundary-free spacetime, and the other induced by the flat boundary. Although we impose the Robin boundary condition on the field, we apply our formalism specifically to the Dirichlet and Neumann boundary conditions; the corresponding parts have opposite signs. Because the analysis of bosonic vacuum polarization in boundary-free de Sitter space and in the presence of a cosmic string has, in some sense, already been developed in the literature, here we are mainly interested in calculating the effects induced by the boundary. Closed expressions for the corresponding expectation values are provided, as well as their asymptotic behavior in different limiting regions. We show that the conical topology due to the cosmic string enhances the boundary-induced vacuum polarization effects for both the field squared and the energy-momentum tensor, compared to the case of a boundary in pure de Sitter spacetime. Moreover, the presence of the cosmic string and boundary induces a non-zero stress along the direction normal to the boundary. The corresponding vacuum force acting on the boundary is also investigated.
Cecotti and Vafa introduced the tt*-equation (topological-antitopological fusion equation), whose solutions describe massive deformations of supersymmetric conformal field theories. We describe some solutions of the tt*-equation constructed from the SU(2)$_k$-fusion algebra. The idea of the construction is due to Cecotti and Vafa, but we give a precise mathematical formulation and a description of the "holomorphic data" corresponding to the solutions using the DPW method. Furthermore, we give a relation between the solutions and representations of SU(2). As a special case, we consider the solutions corresponding to the supersymmetric A$_k$-minimal model.
This paper explores the dynamics of the Klein-Gordon oscillator in the presence of a cosmic string in the Som-Raychaudhuri spacetime. The exact solutions for the free case and the oscillator case are obtained and discussed. These solutions reveal the effects of the cosmic string and the spacetime geometry on the bosonic particles. To illustrate these results, some figures and tables are included.
This is a survey article for the Encyclopedia of Mathematical Physics, 2nd Edition. Topological defects are described in the context of the 2-dimensional Ising model on the lattice, in 2-dimensional quantum field theory, in topological quantum field theory in arbitrary dimension, and in higher-dimensional quantum field theory with a focus on 4-dimensional quantum electrodynamics.
We show how a path integral for reduced K\"{a}hler-Dirac fermions suffers from a phase ambiguity associated with the fermion measure that is an analog of the measure problem seen for chiral fermions. However, unlike the case of chiral symmetry, a doubler free lattice action exists which is invariant under the corresponding onsite symmetry. This allows for a clear diagnosis and solution to the problem using mirror fermions resulting in a unique gauge invariant measure. By introducing an appropriate set of Yukawa interactions which are consistent with 't Hooft anomaly cancellation we conjecture the mirrors can be decoupled from low energy physics. Moreover, the minimal such K\"{a}hler-Dirac mirror model yields a light sector which corresponds, in the flat space continuum limit, to the Pati-Salam GUT model.
Formulating non-Abelian gauge theories as tensor networks is known to be challenging due to the internal degrees of freedom, which result in degeneracy in the singular value spectrum. In two dimensions, it is straightforward to 'trace out' these degrees of freedom with the use of character expansion, giving a reduced tensor network in which the degeneracy associated with the internal symmetry is eliminated. In this work, we show that such an index loop also exists in higher dimensions in the form of a closed tensor network we call the 'armillary sphere'. This allows us to completely eliminate the matrix indices and reduce the overall size of the tensors in the same way as is possible in two dimensions. This formulation allows us to include significantly more representations at the same tensor size, thus making it possible to reach a greater level of numerical accuracy in tensor renormalization group computations.
In this paper we propose the notion of a transposed Poisson superalgebra. We prove that a transposed Poisson superalgebra can be constructed by means of a commutative associative superalgebra and an even-degree derivation of this algebra. Making use of this, we construct two examples of transposed Poisson superalgebras. One of them is the graded differential algebra of differential forms on a smooth finite-dimensional manifold, where we use the Lie derivative as the even-degree derivation. The second example is the commutative superalgebra of basic fields of quantum Yang-Mills theory, where we use the BRST supersymmetry as the even-degree derivation to define a graded Lie bracket. We show that a transposed Poisson superalgebra satisfies six identities that play an important role in the study of the structure of this algebra.
We propose the notion of a ternary skew-symmetric covariant tensor of third order, consider it as a 3-dimensional matrix, and study the ten-dimensional complex space of these tensors. We split this space into a direct sum of two five-dimensional subspaces, and on each subspace there is an irreducible representation of the rotation group, SO(3) -> SO(5). We find two independent SO(3)-invariants of ternary skew-symmetric tensors, one of which is a Hermitian metric and the other a quadratic form. We find the stabilizer of this quadratic form and its invariant properties. Making use of these invariant properties, we define an SO(3)-irreducible geometric structure on a five-dimensional complex Hermitian manifold. We study a connection on a five-dimensional complex Hermitian manifold with an SO(3)-irreducible geometric structure and find its curvature and torsion. The structures proposed in this paper and their study are motivated by the ternary generalization of Pauli's principle proposed by R. Kerner.
Based on the freely-falling Unruh-DeWitt model, we study the influence of gravitational waves on quantum many-body states, i.e., the twin-Fock (TF) state and mixtures of Dicke states. The amount of entanglement of the quantum many-body states first decreases and then increases with increasing gravitational-wave frequency. In particular, for some fixed gravitational-wave frequencies, the entanglement increases with increasing gravitational-wave amplitude, which differs from the usual expectation of gravity-induced decoherence and could provide a novel understanding of the quantum properties of gravitational waves.
We define two types of Witten zeta functions according to Cartan's classification of compact symmetric spaces. The type II function is the original Witten zeta function constructed by means of irreducible representations of a simple compact Lie group U. The type I Witten zeta functions, which we introduce here, are related to the irreducible spherical representations of U. They arise in harmonic analysis on compact symmetric spaces of the form U/K, where K is a maximal subgroup of U. To construct the type I zeta function we calculate the partition functions of 2d YM theory with broken gauge symmetry using the Migdal-Witten approach. We prove that for rank-one symmetric spaces the generating series for the values of the type I functions at integer arguments can be defined in terms of the generating series of the Riemann zeta function.
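For orientation, the original (type II) Witten zeta function of a compact simple Lie group U is the standard Dirichlet series

$$\zeta_U(s) = \sum_{\lambda} \frac{1}{\left(\dim V_\lambda\right)^{s}},$$

where the sum runs over the irreducible representations $V_\lambda$ of U; the type I functions are built analogously from the spherical representations relative to K.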
We combine perturbation theory with analytic and numerical bootstrap techniques to study the critical point of the long-range Ising (LRI) model in two and three dimensions. This model interpolates between short-range Ising (SRI) and mean-field behaviour. We use the Lorentzian inversion formula to compute infinitely many three-loop corrections in the two-dimensional LRI near the mean-field end. We further exploit the exact OPE relations that follow from bulk locality of the LRI to compute infinitely many two-loop corrections near the mean-field end, as well as some one-loop corrections near SRI. By including such exact OPE relations in the crossing equations for LRI we set up a very constrained bootstrap problem, which we solve numerically using SDPB. We find a family of sharp kinks for two- and three-dimensional theories which compare favourably to perturbative predictions, as well as some Monte Carlo simulations for the two-dimensional LRI.
Solid partitions are the 4D generalization of plane partitions in 3D and Young diagrams in 2D, and they can be visualized as stackings of 4D unit-size boxes in the positive corner of a 4D room. Physically, solid partitions arise naturally as 4D molten crystals that count equivariant D-brane BPS states on the simplest toric Calabi-Yau fourfold, $\mathbb{C}^4$, generalizing the 3D statement that plane partitions count equivariant D-brane BPS states on $\mathbb{C}^3$. In the construction of BPS algebras for toric Calabi-Yau threefolds, the so-called charge function on the 3D molten crystal is an important ingredient -- it is the generating function for the eigenvalues of an infinite tower of Cartan elements of the algebra. In this paper, we derive the charge function for solid partitions. Compared to the 3D case, the new feature is the appearance of contributions from certain 4-box and 5-box clusters, which will make the construction of the corresponding BPS algebra much more complicated than in the 3D case.
This paper introduces two operations in quiver gauge theories. The first operation takes a quiver with a permutation symmetry $S_n$ and gives a quiver with adjoint loops. The corresponding 3d $\mathcal{N}=4$ Coulomb branches are related by an orbifold of $S_n$. The second operation takes a quiver with $n$ nodes connected by edges of multiplicity $k$ and replaces them by $n$ nodes of multiplicity $qk$. The corresponding Coulomb branch moduli spaces are related by an orbifold of type $\mathbb{Z}_q^{n-1}$. The first operation generalises known cases that appeared in the literature. These two operations can be combined to generate new relations between moduli spaces that are constructed using the magnetic construction.
We study the two-matrix model for the double-scaled SYK model, called the ETH matrix model, introduced by Jafferis et al. [arXiv:2209.02131]. If we set the parameters $q_A,q_B$ of this model to zero, the potential of this two-matrix model is given by the Gaussian terms and the $q$-commutator squared interaction. We find that this model is solvable in the large $N$ limit, and we explicitly construct the planar one- and two-point functions of resolvents in terms of elliptic functions.
We revisit the space of gapped quantum field theories with a global O(N) symmetry in two spacetime dimensions. Previous works using the S-matrix bootstrap revealed a rich space in which integrable theories such as the non-linear sigma model appear at special points on the boundary, along with an abundance of unknown models hinting at unconventional UV behaviour. We extend the S-matrix set-up by including in the bootstrap form factors and spectral functions for the stress-energy tensor and conserved O(N) currents. Sum rules allow us to put bounds on the central charges of the conformal field theory (CFT) in the UV. We find that a large portion of the boundary can only flow from CFTs with infinite central charges. We trace this result to a particular behaviour of the amplitudes in physical kinematics and discuss its physical implications.
The aim of this paper is to study the resurgent transseries structure of the inhomogeneous and $q$-deformed Painlev\'e II equations. These equations appear in a variety of physical systems; here we focus on their description of $(2,4)$-super minimal string theory with either D-brane or RR-flux backgrounds. In this context they appear as double-scaled string equations of matrix models, and we relate the resurgent transseries structures appearing in this way with explicit matrix model computations. The main body of the paper is focused on studying the transseries structure of these equations as well as the corresponding resurgence analyses. Concretely, the aim is to give a recursion relation for the transseries sectors and to obtain the non-perturbative transmonomials -- {\it i.e.}, the instanton actions of the systems. From the resurgence point of view, the goal is to obtain Stokes data. These encode how the transseries parameters jump at the Stokes lines as one turns around the complex plane in order to produce a global transseries solution. The main result is a conjectured form for the transition functions of these transseries parameters. We explore how these equations are related to each other via the Miura map; in particular, we focus on how their resurgent properties can be translated into each other. We study the special solutions of the inhomogeneous Painlev\'e II equation and how these might be encoded in the transseries parameters. Specifically, we discuss the Hastings--McLeod solution and present some results on special function solutions. Finally, we discuss our results in the context of the matrix model and the (2,4)-minimal superstring theory.
In this paper, we introduce a fundamentally different, bottom-up approach to expanding tree-level Yang-Mills (YM) amplitudes into Yang-Mills-scalar (YMS) amplitudes and bi-adjoint-scalar (BAS) amplitudes. Our method relies solely on the intrinsic soft behavior of external gluons, eliminating the need for external aids such as Feynman rules or CHY rules. The recursive procedure preserves explicit gauge invariance at every step, ultimately resulting in a manifestly gauge-invariant outcome when the initial expression is already framed in a gauge-invariant manner. The resulting expansion can be directly analogized to the expansions of gravitational (GR) amplitudes using the double copy structure. When combined with the expansions of Einstein-Yang-Mills amplitudes obtained using the covariant double copy method from the existing literature, the expansions presented here yield gauge-invariant BCJ numerators.
In this work we investigate possible actions for antisymmetric two-tensor field models subject to constraints that force the field to acquire a nonzero vacuum expectation value, thereby spontaneously breaking Lorentz invariance. In order to ensure stability, we require that the associated Hamiltonian be bounded from below. It is shown that this requirement rules out any quadratic action constructed only from the antisymmetric tensor field. We then explicitly construct a hybrid model consisting of the antisymmetric two-tensor field together with a vector field, subject to three constraints, that is stable in Minkowski space.
Sonoluminescence is a well-known laboratory phenomenon in which an oscillating gas bubble in an appropriate environment periodically emits a flash of light in the visible frequency range. In this work, we study the system in the framework of analog gravity. We model the oscillating bubble in terms of an analog geometry and propose a non-minimal coupling prescription of the electromagnetic field with the geometry. The geometry behaves as an analogous oscillating time-dependent background in which repeated fluxes of photons are produced in a wide frequency range through parametric resonance from the quantum vacuum. Due to numerical limitations, we could only reach frequencies up to $\sim 10^5 ~\mbox{m}^{-1}$. However, we numerically fit the spectrum to a polynomial form covering the observed frequency range around $\sim 10^7 ~\mbox{m}^{-1}$. Our current analysis suggests that parametric resonance in an analog background may play a fundamental role in explaining such phenomena in the quantum field theory framework.
We find solutions of the heterotic string effective action describing the first-order $\alpha'$ corrections to two-charge black holes at finite temperature. Making explicit use of these solutions, we compute the corrections to the thermodynamic quantities: temperature, chemical potentials, mass, charges and entropy. We check that the first law of black hole mechanics is satisfied and that the thermodynamics agrees with the one extracted from the Euclidean on-shell action. Finally, we show that our results are in agreement with the corrections for the thermodynamics recently predicted by Chen, Maldacena and Witten.
An extended search for anomaly-free matter-coupled $N=(1,0)$ supergravity in six dimensions is carried out by two different methods, which we refer to as the graphical and rank methods. In the graphical method, the anomaly-free models are built from single gauge group models, called nodes, which can only have gravitational anomalies. We search for anomaly-free theories with gauge groups $G_1\times...\times G_n$ with $n=1,2,...$ (any number of factors) and $G_1\times...\times G_n \times U(1)_R$ where $n=1,2,3$ and $U(1)_R$ is the $R$-symmetry group. While we primarily consider models with tensor multiplet number $n_T=1$, we also provide some results for $n_T\ne 1$ with an unconstrained number of charged hypermultiplets. We find a large number of ungauged anomaly-free theories. However, in the case of $R$-symmetry gauged models with $n_T=1$, in addition to the three known anomaly-free theories with $G_1\times G_2 \times U(1)_R$ type symmetry, we remarkably find only six new anomaly-free models with symmetry groups of the form $G_1\times G_2\times G_3 \times U(1)_R$. In the case of $n_T=1$ and ungauged models, excluding low-rank group factors and considering only low-lying representations, we find all anomaly-free theories. Remarkably, the number of group factors does not exceed four in this class. The proof of completeness in this case relies on a bound which we establish for a parameter characterizing the difference between the number of non-singlet hypermultiplets and the dimension of the gauge group.
Perturbative superstring theory is revisited, with the goal of giving a simpler and more direct demonstration that multi-loop amplitudes are gauge-invariant (apart from known anomalies), satisfy space-time supersymmetry when expected, and have the expected infrared behavior. The main technical tool is to make the whole analysis, including especially those arguments that involve integration by parts, on supermoduli space, rather than after descending to ordinary moduli space.
We provide a prescription for computing two-point tree amplitudes in the pure spinor formalism that are finite and agree with the corresponding expressions in the field theories. In [arXiv:1906.06051v1-arXiv:1909.03672v3], the same results were presented for bosonic strings, and it was mentioned that they can be generalized to superstrings. The pure spinor formalism is a successful super-Poincare covariant approach to the quantization of superstrings [arXiv:hep-th/0001035v2]. Since the pure spinor formalism is equivalent to other superstring formalisms, we explicitly verify the above claim. We introduce a mostly BRST-exact operator in order to achieve this.
In this series of papers, we study a Hamiltonian model for 3+1d topological phases, based on a generalisation of lattice gauge theory known as "higher lattice gauge theory". Higher lattice gauge theory has so-called "2-gauge fields" describing the parallel transport of lines, just as ordinary gauge fields describe the parallel transport of points. In the Hamiltonian model this is represented by having labels on the plaquettes of the lattice, as well as on the edges. In this paper we summarize our findings in an accessible manner, with more detailed results and proofs to be presented in the other papers in the series. The Hamiltonian model supports both point-like and loop-like excitations, with non-trivial braiding between these excitations. We explicitly construct operators to produce and move these excitations, and use these to find the loop-loop and point-loop braiding relations. These creation operators also reveal that some of the excitations are confined, costing energy to separate. This is discussed in the context of condensation/confinement transitions between different cases of this model. We also discuss the topological charges of the model and use explicit measurement operators to re-derive a relationship between the number of charges measured by a 2-torus and the ground-state degeneracy of the model on the 3-torus. From these measurement operators, we can see that the ground-state degeneracy on the 3-torus is related to the number of types of linked loop-like excitations.
We use the spread complexity of the time-evolved state after a sudden quantum quench in the Lipkin-Meshkov-Glick (LMG) model, prepared in the ground state, as a probe of the quantum phase transition when the system is quenched towards the critical point. By studying the growth of the effective number of elements of the Krylov basis, namely those that contribute to the spread complexity more than a preassigned cutoff, we show how the two phases of the LMG model can be distinguished. We also explore the time evolution of the spread entropy after both non-critical and critical quenches. We show that the sum contributing to the spread entropy converges slowly in the symmetric phase of the LMG model compared to the broken phase, and that for a critical quench, the spread entropy diverges logarithmically at late times.
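For orientation, the quantities involved are the standard Krylov-basis definitions (conventions ours): expanding the time-evolved state in the Krylov basis $\{|K_n\rangle\}$ generated by the post-quench Hamiltonian,
$$p_n(t)=\left|\langle K_n|\psi(t)\rangle\right|^2,\qquad C(t)=\sum_n n\,p_n(t),\qquad S_{\rm spread}(t)=-\sum_n p_n(t)\ln p_n(t),$$
where $C(t)$ is the spread complexity and $S_{\rm spread}(t)$ the spread entropy; the "effective number of elements" counts those $n$ with $p_n(t)$ above the chosen cutoff.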
We identify the maximal chiral algebra of conformal cyclic orbifolds. In terms of this extended algebra, the orbifold is a rational and diagonal conformal field theory, provided the mother theory itself is also rational and diagonal. The operator content and operator product expansion of the cyclic orbifolds are revisited in terms of this algebra. The fusion rules and fusion numbers are computed via the Verlinde formula. This allows one to predict which conformal blocks appear in a given four-point function of twisted or untwisted operators, which is relevant for the computation of various entanglement measures in one-dimensional critical systems.
To compute the string one-loop correction to the Kahler potential of moduli fields of string compactifications in Einstein frame, one must compute the string one-loop corrections to the Einstein-Hilbert action, to the moduli kinetic terms, and to the definition of the holomorphic coordinates. In this note, we compute the string one-loop correction to the Einstein-Hilbert action of type II string theory compactified on orientifolds of Calabi-Yau threefolds. We find that the one-loop correction is determined by the new supersymmetric index studied by Cecotti, Fendley, Intriligator, and Vafa. As a simple application, we apply our results to estimate the size of the one-loop corrections around a conifold point in the Kahler moduli space.
We study 3d theories containing $\mathcal{N}=3$ Chern-Simons vector multiplets coupled to the $\mathrm{SU}(N)^3$ flavour symmetry of 3d $T_N$ theories with Chern-Simons levels $k_1$, $k_2$ and $k_3$. It was previously pointed out that these theories flow to infrared SCFTs with enhanced $\mathcal{N}=4$ supersymmetry when $1/k_1+1/k_2+1/k_3=0$. We examine superconformal indices of these theories, which reveal that the supersymmetry of the infrared SCFTs may get enhanced to $4 \leq \mathcal{N} \leq 6$ if such a condition is satisfied. Moreover, even if the Chern-Simons levels do not obey the aforementioned condition, we find that there is still an infinite family of theories that flows to infrared SCFTs with $\mathcal{N}=4$ supersymmetry. The 't Hooft anomalies of the one-form symmetries of these theories are analysed. As a by-product, we observe that there is generally a decoupled topological sector in the infrared. When the infrared SCFTs have $\mathcal{N} \geq 4$ supersymmetry, we also study the Higgs and Coulomb branch limits of the indices, which provide geometric information on the moduli space of the theories in question in terms of the Hilbert series.
By calculating inequivalent classical r-matrices for the $gl(2,\mathbb{R})$ Lie algebra as solutions of the (modified) classical Yang-Baxter equation ((m)CYBE), we classify the YB deformations of the Wess-Zumino-Witten (WZW) model on the $GL(2,\mathbb{R})$ Lie group into twelve inequivalent families. Most importantly, it is shown that each of these models can be obtained from a Poisson-Lie T-dual $\sigma$-model in the presence of spectator fields when the dual Lie group is taken to be Abelian, i.e., all deformed models have Poisson-Lie symmetry just as the undeformed WZW model on $GL(2,\mathbb{R})$ does. In this way, all deformed models are specified via spectator-dependent background matrices. For one case, the dual background is explicitly found.
We consider a pulsating string near a non-extremal black p-brane (p=5 and p=6) and investigate chaos in the corresponding string dynamics by examining the Fast Lyapunov Indicator (FLI) and the Poincare section. In our system, the energy and the charge play the role of control parameters. For generic values of these parameters, the numerical results show that the dynamics primarily fall into three modes: capture, escape to infinity, and quasi-periodic motion, depending on the initial location (near to or far away from the black brane horizon) of the string. Finally, probing different values of the winding number $n$, we find the dynamics to be sensitive to $n$. In particular, we observe the point-particle ($n=0$) scenario to be integrable, whereas at higher $n$ the dynamics appears chaotic.
Analytic continuations of integer-valued parameters can lead to profound insights, such as angular momentum in Regge theory, the number of replicas in spin glasses, the number of internal degrees of freedom, the spacetime dimension in dimensional regularization, and Wilson's renormalization group. In this work, we consider a new kind of analytic continuation of correlation functions, inspired by two recent approaches to underdetermined Dyson-Schwinger equations in $D$-dimensional spacetime. If the Green's functions $G_n=\langle\phi^n\rangle$ admit analytic continuation to complex values of $n$, the two different approaches are unified by a novel principle for self-consistent problems: singularities in the complex plane should be minimal. This principle manifests as the merging of different branches of Green's functions in the quartic theories. For $D=0$, we obtain the closed-form solutions of the general $g\phi^m$ theories, including the cases with complex coupling constant $g$ or non-integer power $m$. For $D=1$, we derive rapidly convergent results for the Hermitian quartic and non-Hermitian cubic theories by minimizing the complexity of the singularity at $n=\infty$.
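To make the $D=0$ statement concrete (a standard reduction, quoted here in our own normalization): in zero spacetime dimensions the path integral collapses to an ordinary integral, so the Green's functions are
$$G_n=\langle\phi^n\rangle=\frac{\int_\Gamma d\phi\,\phi^n\,e^{-S(\phi)}}{\int_\Gamma d\phi\,e^{-S(\phi)}},\qquad S(\phi)\propto g\,\phi^m,$$
with the integration contour $\Gamma$ chosen so that both integrals converge; closed-form expressions of this type are what admit the analytic continuation in $n$.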
The Celestial Holography program encompasses recent efforts to understand the flat space hologram in terms of a CFT living on the celestial sphere. A key development instigating these efforts came from understanding how soft limits of scattering encode infinite-dimensional symmetry enhancements corresponding to the asymptotic symmetry group of the bulk spacetime. Historically, the construction of the bulk-boundary dual pair has followed a bottom-up approach, matching symmetries on both sides. Recently, however, there has been exciting progress in formulating top-down descriptions using insights from twisted holography. This chapter reviews salient aspects of the celestial construction, the status of the dictionary, and active research directions. This is a preprint version of a chapter prepared for the Encyclopedia of Mathematical Physics, 2nd edition.
We review the effective field theory (EFT) bootstrap by formulating it as an infinite-dimensional semidefinite program (SDP), built from the crossing symmetric sum rules and the S-matrix primal ansatz. We apply the program to study the large-$N$ chiral perturbation theory ($\chi$PT) and observe excellent convergence of EFT bounds between the dual (rule-out) and primal (rule-in) methods. This convergence aligns with the predictions of duality theory in SDP, enabling us to analyze the bound states and resonances in the ultra-violet (UV) spectrum. Furthermore, we incorporate the upper bound of unitarity to uniformly constrain the EFT space from the UV scale $M$ using the primal method, thereby confirming the consistency of the large-$N$ expansion. In the end, we translate the large-$N$ $\chi$PT bounds to constrain the higher derivative corrections of holographic QCD models.
Recently, an infinite one-parameter generalisation of the Veneziano amplitude was bootstrapped using as input assumptions an integer mass spectrum, crossing symmetry, high-energy boundedness, and the exchange of finite spins. This new result was dubbed the \textit{hypergeometric Veneziano amplitude}, with the deformation parameter $r$ being a real number. Using the partial-wave decomposition and the positivity of its coefficients, we are able to bound the deformation parameter to $r \geq 0$ and also to obtain an upper bound on the number of spacetime dimensions, $D \leq 26$, which is the critical dimension of bosonic string theory.
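Schematically, the constraint being imposed is the standard one (generic $D$-dimensional form, not the paper's specific notation): the residue of the amplitude at each mass pole is expanded in Gegenbauer polynomials,
$$\mathrm{Res}_{s=m^2}\,A(s,t)=\sum_{J}c_{J}\,C_{J}^{\left(\frac{D-3}{2}\right)}(\cos\theta),$$
and unitarity requires every partial-wave coefficient to satisfy $c_{J}\geq 0$; demanding this for all poles and spins is what carves out $r \geq 0$ and $D \leq 26$.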
Topological holography is a holographic principle that describes the generalized global symmetry of a local quantum system in terms of a topological order in one higher dimension. This framework separates the topological data from the local dynamics of a theory and provides a unified description of the symmetry and duality in gapped and gapless phases of matter. In this work, we develop the topological holographic picture for (1+1)d quantum phases, covering both gapped phases and a wide range of quantum critical points, including phase transitions between symmetry-protected topological (SPT) phases, symmetry-enriched quantum critical points, deconfined quantum critical points, and intrinsically gapless SPT phases. Topological holography puts a strong constraint on the emergent symmetry and the anomaly for these critical theories. We show how the partition functions of these critical points can be obtained by dualizing (orbifolding) more familiar critical theories. The topological responses of the defect operators are also discussed in this framework. We further develop a topological holographic picture for conformal boundary states of (1+1)d rational conformal field theories. This framework provides a simple physical picture for understanding conformal boundary states and also uncovers the nature of the gapped phases corresponding to the boundary states.
In this work the connection established in [7, 8] between a model of two linked polymer rings with fixed Gaussian linking number forming a 4-plat and the statistical mechanics of non-relativistic anyon particles is explored. The excluded volume interactions have been switched off and only the interactions of entropic origin arising from the topological constraints are considered. An interpretation from the polymer point of view is provided for the field equations that minimize the energy of the model in the limit in which one of the spatial dimensions of the 4-plat becomes very large. It is shown that the self-dual contributions are responsible for the long-range interactions that are necessary for preserving the global topological properties of the system during thermal fluctuations. The non-self-dual part is also related to the topological constraints, and takes into account the local interactions acting on the monomers in order to prevent the breaking of the polymer lines. It turns out that the energy landscape of the two linked rings is quite complex. Assuming as a rough approximation that the monomer densities of half of the 4-plat are constant, at least two points of energy minimum are found. Classes of non-trivial self-dual solutions of the self-dual field equations are derived. ...
We present the recent development of a lightweight detector capable of accurate spatial, timing, and amplitude measurements of charged particles. The technology is based on double-sided double-metal p+\,--\,n\,--\,n+ micro-strip silicon sensors, ultra-light long aluminum-polyimide micro-cables for the analogue signal transfer, and a custom-developed SMX read-out ASIC capable of measuring the time ($\Delta t \lesssim 5 \,\mathrm{ns}$) and amplitude. Dense detector integration enables a material budget as low as $\sim 0.3\,\% X_0$. A sophisticated powering and grounding scheme keeps the noise under control. In addition to its primary application in the Silicon Tracking System of the future CBM experiment in Darmstadt, our detector will be utilized in other research applications.
To search for dark matter candidates with masses below $\mathcal{O}$(MeV), the SPLENDOR (Search for Particles of Light dark mattEr with Narrow-gap semiconDuctORs) experiment is developing novel narrow-bandgap semiconductors with electronic bandgaps on the order of 1-100 meV. In order to detect the charge signal produced by scattering or absorption events, SPLENDOR has designed a two-stage cryogenic HEMT-based amplifier with an estimated charge resolution approaching the single-electron level. A low-capacitance ($\sim$1.6 pF) HEMT is used as a buffer stage at $T=10\,\mathrm{mK}$ to mitigate effects of stray capacitance at the input. The buffered signal is then amplified by a higher-capacitance ($\sim$200 pF) HEMT amplifier stage at $T=4\,\mathrm{K}$. Importantly, the design of this amplifier makes it usable with any insulating material, allowing for rapid prototyping of a variety of novel detector materials. We present the two-stage cryogenic amplifier design, preliminary voltage noise performance, and an estimated charge resolution of 7.2 electrons.
The Cabibbo-favored decay $\Lambda_{c}^+ \to \Xi^{0}K^{+}\pi^{0}$ is studied for the first time using 6.1 fb$^{-1}$ of $e^+e^-$ collision data at center-of-mass energies between 4.600 and 4.840 GeV, collected with the BESIII detector at the BEPCII collider. With a double-tag method, the branching fraction of the three-body decay $\Lambda_{c}^+ \to \Xi^{0}K^{+}\pi^{0}$ is measured to be $(7.79 \pm 1.46 \pm 0.71) \times 10^{-3}$, where the first and second uncertainties are statistical and systematic, respectively. The branching fraction of the two-body decay $\Lambda_{c}^+ \to \Xi(1530)^{0}K^+$ is $(5.99\pm1.04\pm0.29)\times10^{-3}$, which is consistent with the previous result of $(5.02\pm0.99\pm0.31)\times 10^{-3}$. In addition, the upper limit on the branching fraction of the doubly Cabibbo-suppressed decay $\Lambda_{c}^+ \to nK^+\pi^0$ is determined to be $7.1 \times 10^{-4}$ at the 90$\%$ confidence level. The upper limits on the branching fractions of $\Lambda_{c}^+ \to \Sigma^{0}K^{+}\pi^{0}$ and $\Lambda K^{+}\pi^{0}$ are also determined to be $1.8\times 10^{-3}$ and $2.0 \times 10^{-3}$, respectively.
A search is presented for high-mass exclusive diphoton production via photon-photon fusion in proton-proton collisions at $\sqrt{s}$ = 13 TeV in events where both protons survive the interaction. The analysis utilizes data corresponding to an integrated luminosity of 103 fb$^{-1}$ collected in 2016-2018 with the central CMS detector and the CMS and TOTEM precision proton spectrometer (PPS). Events that have two photons with high transverse momenta ($p_\mathrm{T}^\gamma >$ 100 GeV), back-to-back in azimuth, and with a large diphoton invariant mass ($m_{\gamma\gamma} >$ 350 GeV) are selected. To remove the dominant inclusive diphoton backgrounds, the kinematic properties of the protons detected in PPS are required to match those of the central diphoton system. Only events having opposite-side forward protons detected with a fractional momentum loss between 0.035 and 0.15 (0.18) for the detectors on the negative (positive) side of CMS are considered. One exclusive diphoton candidate is observed for an expected background of 1.1 events. Limits at 95% confidence level are derived for the four-photon anomalous coupling parameters $\lvert\zeta_1\rvert <$ 0.073 TeV$^{-4}$ and $\lvert\zeta_2\rvert <$ 0.15 TeV$^{-4}$, using an effective field theory. Additionally, upper limits are placed on the production of axion-like particles with coupling strength to photons $f^{-1}$ that varies from 0.03 TeV$^{-1}$ to 1 TeV$^{-1}$ over the mass range from 500 to 2000 GeV.
Antinuclei can be produced in space either by collisions of high-energy cosmic rays with the interstellar medium or from the annihilation of dark matter particles into standard model particles. High-energy hadronic collisions at accelerators create a suitable environment for producing light (anti)nuclei. Hence, studying the production of (anti)nuclei in pp collisions at the LHC can provide crucial insights into the production mechanisms of nuclear states in our Universe. Recent measurements of the production of (anti)nuclei in and out of jets, and as a function of rapidity, in pp collisions at \mbox{$\sqrt{s}$ = 13 TeV} have been carried out with ALICE. The latter allows for the extrapolation of nuclear production models to forward rapidity, a region of interest for indirect searches for dark matter. Recent results on the annihilation cross-section of antinuclei are also discussed in the context of astrophysical measurements of the cosmic-ray flux. Such information is essential for studying the different sources of antinuclei in our Universe and for interpreting any future measurement of antinuclei in space.
High granularity calorimeters have become increasingly crucial in modern particle physics experiments, and their importance is set to grow even further in the future. The CLUstering of Energy (CLUE) algorithm has shown excellent performance in clustering calorimeter hits in the High Granularity Calorimeter (HGCAL) developed for the Phase-2 upgrade of the CMS experiment. In this paper, we investigate the suitability of the CLUE algorithm for future collider experiments and test its capabilities outside the HGCAL software reconstruction. To this end, we developed a new package, k4Clue, which is now fully integrated into the Gaudi software framework and supports the EDM4hep data format for inputs and outputs. We demonstrate the performance of CLUE in three detectors for future colliders: CLICdet for the CLIC accelerator, CLD for the FCC-ee collider, and a calorimeter based on Noble Liquid technology, also proposed for FCC-ee. We find excellent reconstruction performance for single-gamma events, even in the presence of noise, also when compared with other baseline algorithms. Moreover, CLUE demonstrates impressive timing capabilities, outperforming the other algorithms independently of the number of input hits. This work highlights the adaptability and versatility of the CLUE algorithm for a wide range of experiments and detectors, and the algorithm's potential for future high-energy physics experiments beyond CMS.
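For readers unfamiliar with CLUE, the core idea is density-based clustering in the style of Rodriguez-Laio: each hit gets a local energy density and a distance to its nearest higher-density neighbour, seeds are promoted where both are large, and the remaining hits follow their nearest higher. The NumPy sketch below illustrates this idea only; the parameters dc, rhoc, and delta_c are illustrative placeholders, not the k4Clue defaults, and the real implementation adds tiling and outlier handling.

```python
import numpy as np

def clue_sketch(xy, energy, dc=1.3, rhoc=5.0, delta_c=2.0):
    """Toy CLUE-style clustering: xy is (N, 2) hit positions, energy is (N,)."""
    n = len(xy)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Local density: energy collected within a radius dc around each hit.
    rho = np.where(dist < dc, energy[None, :], 0.0).sum(axis=1)
    # Distance to (and index of) the nearest hit with strictly higher density.
    higher = rho[None, :] > rho[:, None]
    delta = np.where(higher, dist, np.inf).min(axis=1)
    nh = np.where(higher, dist, np.inf).argmin(axis=1)
    # Seeds: dense hits that are far from any denser hit.
    seeds = (rho > rhoc) & (delta > delta_c)
    labels = np.full(n, -1)
    labels[seeds] = np.arange(seeds.sum())
    # Followers inherit the label of their nearest higher, in density order.
    for i in np.argsort(-rho):
        if labels[i] == -1 and np.isfinite(delta[i]):
            labels[i] = labels[nh[i]]
    return labels  # -1 marks unassigned hits / outliers
```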
Incorporating inductive biases into ML models is an active area of ML research, especially when ML models are applied to data about the physical world. Equivariant Graph Neural Networks (GNNs) have recently become a popular method for learning from physics data because they directly incorporate the symmetries of the underlying physical system. Drawing from the relevant literature on group-equivariant networks, this paper presents a comprehensive evaluation of the proposed benefits of equivariant GNNs, using real-world particle physics reconstruction tasks as an evaluation test-bed. We demonstrate that many of the theoretical benefits generally associated with equivariant networks may not hold for realistic systems, and introduce compelling directions for future research that will benefit both the scientific theory of ML and physics applications.
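As a concrete reminder of what equivariance means in this setting, one widely used construction (the E(n)-equivariant GNN of Satorras et al., quoted here purely as an illustration, not necessarily the architecture benchmarked in the paper) updates node features $h_i$ and coordinates $x_i$ as
$$m_{ij}=\phi_e\!\left(h_i,h_j,\lVert x_i-x_j\rVert^2\right),\qquad x_i'=x_i+\sum_{j\neq i}(x_i-x_j)\,\phi_x(m_{ij}),\qquad h_i'=\phi_h\Big(h_i,\sum_{j\neq i}m_{ij}\Big),$$
so that rotating or translating all input coordinates transforms the updated coordinates identically while leaving the features invariant.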
A search is presented for new Higgs bosons in proton-proton (pp) collision events in which a same-sign top quark pair is produced in association with a jet, via the pp $\to$ tH/A $\to$ t$\mathrm{\bar{t}}$c and pp $\to$ tH/A $\to$ t$\mathrm{\bar{t}}$u processes. Here, H and A represent the extra scalar and pseudoscalar boson, respectively, of the second Higgs doublet in the generalized two-Higgs-doublet model (g2HDM). The search is based on pp collision data collected at a center-of-mass energy of 13 TeV with the CMS detector at the LHC, corresponding to an integrated luminosity of 138 fb$^{-1}$. Final states with a same-sign lepton pair in association with jets and missing transverse momentum are considered. New Higgs bosons in the 200-1000 GeV mass range and new Yukawa couplings between 0.1 and 1.0 are targeted in the search, for scenarios in which either H or A appear alone, or in which they coexist and interfere. No significant excess above the standard model prediction is observed. Exclusion limits are derived in the context of the g2HDM.
Experimental activities involving multi-TeV muon collisions are a relatively recent endeavor. The community has limited experience in designing detectors for lepton interactions at center-of-mass energies of 10 TeV and beyond. This review provides a short overview of the machine characteristics and outlines potential sources of beam-induced background that could impact the detector performance. The strategy for mitigating the effects of beam-induced background on the detector at $\sqrt{s}=3$ TeV is discussed, focusing on the machine-detector interface, detector design, and the implementation of reconstruction algorithms. The physics potential at this center-of-mass energy is evaluated using a detailed detector simulation that incorporates the effects of beam-induced background. This evaluation concerns the Higgs boson couplings and the sensitivity to the Higgs field potential, which are then used to build confidence in the expectations at 10 TeV. The physics and detector requirements for an experiment at $\sqrt{s}=10$ TeV, outlined here, form the foundation for the initial detector concept at that center-of-mass energy.
The luminosity determination for the ATLAS detector at the LHC during Run 2 is presented, with $pp$ collisions at $\sqrt{s}=13$ TeV. The absolute luminosity scale is determined using van der Meer beam separation scans during dedicated running periods in each year, and extrapolated to the physics data-taking regime using complementary measurements from several luminosity-sensitive detectors. The total uncertainties in the integrated luminosities for each individual year of data-taking range from 0.9% to 1.1%, and are partially correlated between years. After standard data-quality selections, the full Run 2 $pp$ data sample corresponds to an integrated luminosity of $140.1\pm 1.2$ fb$^{-1}$, i.e. an uncertainty of 0.83%. A dedicated sample of low-pileup data recorded in 2017-18 for precision Standard Model physics measurements is analysed separately, and has an integrated luminosity of $338.1\pm 3.1$ pb$^{-1}$.
The particle physics community has agreed that an electron-positron collider is the next step for continued progress in this field, giving a unique opportunity for a detailed study of the Higgs boson. Several proposals are currently under evaluation by the international community. Any large particle accelerator will be a significant energy consumer, and so, today, we must be concerned about its impact on the environment. This paper evaluates the carbon impact of the construction and operations of one of these Higgs factory proposals, the Cool Copper Collider. It introduces several strategies to lower the carbon impact of the accelerator. It proposes a metric to compare the carbon costs of Higgs factories, balancing physics reach, energy needs, and carbon footprint for both construction and operations, and compares the various Higgs factory proposals within this framework. For the Cool Copper Collider, the compact 8 km footprint and the possibility of cut-and-cover construction greatly reduce the dominant contribution from embodied carbon.
The electric dipole moment (EDM) of elementary particles, arising from flavor-diagonal $CP$ violation, serves as a powerful probe for new physics beyond the Standard Model and holds the potential to provide novel insights in unraveling the puzzle of the matter-dominated Universe. Hyperon EDM is a largely unexplored territory. In this paper, we present a comprehensive angular analysis that focuses on entangled hyperon-antihyperon pairs in $J/\psi$ decays for the indirect extraction of hyperon EDM. The statistical sensitivities are investigated for BESIII and the proposed Super Tau-Charm Facility (STCF). Leveraging the statistics from the BESIII experiment, the estimated sensitivity for $\Lambda$ EDM can reach an impressive level of $10^{-19}$ $e$ cm, achieving a 3-orders-of-magnitude improvement over the only existing measurement in a fixed-target experiment at Fermilab with similar statistics. The estimated sensitivities for the $\Sigma^+$, $\Xi^-$, and $\Xi^0$ hyperons at the same level of $10^{-19}$ $e$ cm will mark the first-ever achievement and the latter two will be the first exploration of hyperons with two strange valence quarks. The EDM measurements for hyperons conducted at the BESIII experiment will be a significant milestone and serve as a litmus test for new physics such as supersymmetry and the left-right symmetrical model. Furthermore, at the STCF experiment, the sensitivity of hyperon EDM measurements can be further enhanced by 2 orders of magnitude. Additionally, this angular analysis enables the determination of $CP$ violation in hyperon decays, the effective weak mixing angle, and beam polarization.
We introduce a pairing-based graph neural network, $\textit{GemiNet}$, for simulating quantum many-body systems. Our architecture augments a BCS mean-field wavefunction with a generalized pair amplitude parameterized by a graph neural network. Variational Monte Carlo with GemiNet simultaneously provides an accurate, flexible, and scalable method for simulating many-electron systems. We apply GemiNet to two-dimensional semiconductor electron-hole bilayers and obtain highly accurate results on a variety of interaction-induced phases, including the exciton Bose-Einstein condensate, electron-hole superconductor, and bilayer Wigner crystal. Our study demonstrates the potential of physically-motivated neural network wavefunctions for quantum materials simulations.
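Schematically (in our own notation, for a balanced system of $N$ pairs), a geminal/BCS-type ansatz of this kind takes the determinantal form
$$\Psi_\theta(r_1^{\uparrow},\dots,r_N^{\uparrow};r_1^{\downarrow},\dots,r_N^{\downarrow})=\det\big[\Phi_\theta(r_i^{\uparrow},r_j^{\downarrow})\big]_{i,j=1}^{N},$$
where the fixed mean-field pair orbital is replaced by a pair amplitude $\Phi_\theta$ parameterized by the graph neural network, and energies are evaluated stochastically with variational Monte Carlo; the precise form used by GemiNet may differ in detail.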
The parity mapping provides a geometrically local encoding of the Quantum Approximate Optimization Algorithm (QAOA), at the expense of a quadratic qubit overhead for all-to-all connected problems. In this work, we benchmark the parity-encoded QAOA on spin-glass models. We address open questions in the scaling of this algorithm and show that, for a fixed number of parity-encoded QAOA layers, the performance drops as $N^{-1/2}$. We perform tensor-network calculations to confirm this result, and comment on the concentration of optimal QAOA parameters over problem instances.
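For reference, the underlying variational state is the standard QAOA ansatz (written generically; in the parity-encoded case the cost Hamiltonian $H_C$ contains the local spin-glass terms together with the parity constraint terms):
$$|\gamma,\beta\rangle=\prod_{l=1}^{p}e^{-i\beta_l H_M}\,e^{-i\gamma_l H_C}\,|+\rangle^{\otimes N},\qquad H_M=\sum_i X_i,$$
and "fixed number of layers" refers to fixed depth $p$ as the problem size $N$ grows.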
Arrays of neutral atoms trapped in optical tweezers have emerged as a leading platform for quantum information processing and quantum simulation due to their scalability, reconfigurable connectivity, and high-fidelity operations. Individual atoms are promising candidates for quantum networking due to their capability to emit indistinguishable photons that are entangled with their internal atomic states. Integrating atom arrays with photonic interfaces would enable distributed architectures in which nodes hosting many processing qubits could be efficiently linked together via the distribution of remote entanglement. However, many atom array techniques cease to work in close proximity to photonic interfaces, with atom detection via standard fluorescence imaging presenting a major challenge due to scattering from nearby photonic devices. Here, we demonstrate an architecture that combines atom arrays with up to 64 optical tweezers and a millimeter-scale photonic chip hosting more than 100 nanophotonic devices. We achieve high-fidelity (~99.2%), background-free imaging in close proximity to nanofabricated devices using a multichromatic excitation and detection scheme. The atoms can be imaged while trapped a few hundred nanometers above the dielectric surface, which we verify using Stark shift measurements of the modified trapping potential. Finally, we rearrange atoms into defect-free arrays and load them simultaneously onto the same or multiple devices.
The Hopf insulator is a representative class of three-dimensional topological insulators beyond the standard topological classification methods based on K-theory. In this letter, we discover the metallic counterpart of the Hopf insulator in non-Hermitian systems. While the Hopf invariant is not a stable topological index due to the additional non-Hermitian degree of freedom, we show that PT-symmetry stabilizes the Hopf invariant even in the presence of non-Hermiticity. In sharp contrast to the Hopf insulator phase in the Hermitian counterpart, we find that the non-Hermitian Hopf bundle exhibits a topologically protected non-Hermitian degeneracy, characterized by a two-dimensional surface of exceptional points. Despite the non-Hermiticity, the Hopf metal has a quantized Zak phase, which results in bulk-boundary correspondence, manifesting as drumhead-like surface states at the boundary. Finally, we show that, by breaking PT-symmetry, the nodal surface deforms into knotted exceptional lines. Our discovery of the Hopf metal phase provides the first confirmation that non-Hermitian topological phases exist outside the framework of the standard topological classifications.
Out of thermal equilibrium, bosonic quantum systems can Bose-condense away from the ground state, featuring a macroscopic occupation of an excited state, or even of multiple states in the so-called Bose-selection scenario. While theory has been developed describing such effects as they result from the nonequilibrium kinetics of quantum jumps, a theoretical understanding, and the development of practical strategies, to control and drive the system into desired Bose condensation patterns have been lacking. We show how fine-tuned single or multiple condensate modes, including their relative occupation, can be engineered by coupling the system to artificial quantum baths. Moreover, we propose a Bose `condenser', experimentally implementable in a superconducting circuit, where bath engineering is realized via auxiliary driven-damped two-level systems, that induces targeted Bose condensation into eigenstates of a chain of resonators. We further discuss the engineering of transition points between different Bose condensation configurations, which may find application for amplification, heat-flow control, and the design of highly-structured quantum baths.
Quantum networks allow for novel forms of quantum nonlocality. By exploiting the combination of entangled states and entangled measurements, strong nonlocal correlations can be generated across the entire network. So far, all proofs of this effect are essentially restricted to the idealized case of pure entangled states and projective local measurements. Here we present noise-robust proofs of network quantum nonlocality, for a class of quantum distributions on the triangle network that are based on entangled states and entangled measurements. The key ingredient is a result of approximate rigidity for local distributions that satisfy the so-called ``parity token counting'' property with high probability. Considering quantum distributions obtained with imperfect sources, we obtain noise robustness up to $\sim 80\%$ for dephasing noise and up to $\sim 0.67\%$ for white noise. Additionally, we can prove that all distributions in the vicinity of some ideal quantum distributions are nonlocal, with a bound on the total-variation distance. Our work opens interesting perspectives towards the practical implementation of quantum network nonlocality.
A tensor is a multidimensional array of numbers that can be used to store data, encode a computational relation, and represent quantum entanglement. In this sense a tensor can be viewed as a valuable resource whose transformation can lead to an understanding of structure in data, computational complexity, and quantum information. In order to facilitate the understanding of this resource, we propose a family of information-theoretically constructed preorders on tensors, which can be used to compare tensors with each other and to assess the existence of transformations between them. The construction places copies of a given tensor at the edges of a hypergraph and allows transformations at the vertices. A preorder is then induced by the transformations possible in a given growing sequence of hypergraphs. The new family of preorders generalises the asymptotic restriction preorder which Strassen defined in order to study the computational complexity of matrix multiplication. We derive general properties of the preorders and their associated asymptotic notions of tensor rank, and view recent results on tensor rank non-additivity, tensor networks, and algebraic complexity in this unifying frame. We hope that this work will provide a useful vantage point for exploring tensors in applied mathematics, physics, and computer science, but also from a purely mathematical point of view.
We present a family of Floquet circuits that can interpolate between non-interacting qubits, free propagation, generic interacting, and dual-unitary dynamics. We identify the operator entanglement entropy of the two-qubit gate as a good quantitative measure of the interaction strength. We test the persistence of localization in the vicinity of the non-interacting point by probing spectral statistics, decay of autocorrelators, and measuring entanglement growth. The finite-size analysis suggests that the many-body localized regime does not persist in the thermodynamic limit. Instead, our results are compatible with an integrability-breaking phenomenon.
In this study, we conducted a detailed investigation into the time evolution of the probability density within a 1D double-well potential hosting a Bose-Fermi mixture. This system comprised spinless bosons and spin one-half fermions with weak repulsive contact interactions. Notably, even at very low effective coupling constants, periodic probabilities were observed, indicating correlated tunneling of both bosons and fermions, leading to complete miscibility, which disappears when an external electric field is turned on. The electric field accentuated fermion-fermion interactions due to the Pauli exclusion principle, altering both boson density and interactions and leading to spatial redistribution of particles. These findings underscore the complex interplay between interactions, external fields, and spatial distributions within confined quantum systems. Our exploration of higher interaction strengths revealed conditions under which probability density functions are decoupled. Furthermore, we observed that increased fermion interaction, driven by the electric field, led to higher tunneling frequencies for both species because of the repulsive nature of the boson-fermion interaction. Conversely, increased boson-boson interaction resulted in complete tunneling of both species, especially when boson density was high, leading to effective fermion repulsion. Expanding our analysis to scenarios involving four bosons demonstrated that higher interaction values corresponded to increased oscillation frequencies in tunneling probabilities. Finally, by manipulating interaction parameters and activating the electric field, we achieved complete tunneling of both species, further increasing oscillation frequencies and resulting in intervals characterized by overlapping probability functions.
This paper is concerned with open quantum systems whose dynamic variables have an algebraic structure, similar to that of the Pauli matrices for finite-level systems. The Hamiltonian and the operators of coupling of the system to the external bosonic fields depend linearly on the system variables. The fields are represented by quantum Wiener processes which drive the system dynamics according to a quasilinear Hudson-Parthasarathy quantum stochastic differential equation whose drift vector and dispersion matrix are affine and linear functions of the system variables. This setting includes the zero-Hamiltonian isolated system dynamics as a particular case, where the system variables are constant in time, which makes them potentially applicable as a quantum memory. In a more realistic case of nonvanishing system-field coupling, we define a memory decoherence time when a mean-square deviation of the system variables from their initial values becomes relatively significant as specified by a weighting matrix and a fidelity parameter. We consider the decoherence time maximization over the energy parameters of the system and obtain a condition under which the zero Hamiltonian provides a suboptimal solution. This optimization problem is also discussed for a direct energy coupling interconnection of such systems.
Solving optimization problems with high performance is the goal of existing work on the Quantum Approximate Optimization Algorithm (QAOA). With this intention, we propose an advanced QAOA based on incremental learning, in which the training trajectory is proactively segmented into incremental phases. Taking the MaxCut problem as our example, we randomly select a small subgraph of the whole graph and, in the first phase, train the quantum circuit to obtain optimized parameters for the MaxCut of this subgraph. Then, in each subsequent incremental phase, a portion of the remaining nodes and edges is added to the current subgraph, and the circuit is retrained to obtain new optimized parameters. This is repeated until the MaxCut problem on the whole graph is solved. The key point is that the optimized parameters of the previous phase are reused as the initial parameters of the current phase. Numerous simulation experiments show that our method outperforms prevalent QAOA approaches in approximation ratio (AR) and training time. Specifically, the AR is 13.17% higher than that of standard QAOA on weighted random graphs.
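A minimal sketch of the incremental training loop described above (the callable train_qaoa and the even node split are hypothetical stand-ins for a QAOA simulator and the authors' growth schedule, not their code):

```python
import random
import networkx as nx

def incremental_qaoa(graph, n_phases, train_qaoa):
    """Train QAOA on a growing subgraph, warm-starting each phase
    with the optimized parameters of the previous one."""
    nodes = list(graph.nodes)
    random.shuffle(nodes)
    chunks = [nodes[i::n_phases] for i in range(n_phases)]  # even split
    active, params = [], None
    for chunk in chunks:
        active += chunk
        sub = graph.subgraph(active)           # current MaxCut instance
        params = train_qaoa(sub, init=params)  # reuse previous optimum
    return params  # parameters for the full-graph MaxCut circuit

# usage sketch: incremental_qaoa(nx.random_regular_graph(3, 20), 4, train_qaoa)
```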
We consider the problem of discrimination between two pure quantum states. It is well known that the optimal measurement under both the error-probability and log-loss criteria is a projection, while under an ``erasure-distortion'' criterion it is a three-outcome positive operator-valued measure (POVM). These results were derived separately. We present a unified approach which finds the optimal measurement under any distortion measure that satisfies a convexity relation with respect to the Bhattacharyya distance. Namely, whenever the measure is relatively convex (resp. concave), the measurement is the projection (resp. three-outcome POVM) above. The three above-mentioned results are obtained as special cases of this simple derivation. As for further measures to which our result applies, we prove that Renyi entropies of order $1$ and above (resp. $1/2$ and below) are relatively convex (resp. concave). A special setting of great practical interest is the discrimination between two coherent-light waveforms. In a remarkable work, Dolinar showed that a simple detector consisting of a photon counter and a feedback-controlled local oscillator attains the quantum-optimal error probability. Later it was shown that the same detector (with the same local signal) is also optimal in the log-loss sense. By applying a similar convexity approach, we obtain in a unified manner the optimal signal for a variety of criteria.
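For context, the projection referred to is the Helstrom measurement: for two equiprobable pure states the minimum error probability is
$$P_e=\frac{1}{2}\left(1-\sqrt{1-\left|\langle\psi_0|\psi_1\rangle\right|^2}\right),$$
and for pure states the Bhattacharyya distance, with respect to which the convexity condition is stated, is controlled by this same overlap $\langle\psi_0|\psi_1\rangle$.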
The second quantum revolution has been picking up momentum over the last decade. Quantum technologies are starting to attract more attention from governments, private companies, investors, and the public. The ability to control individual quantum systems for the purpose of information processing and communication is no longer a theoretical dream but is steadily becoming routine in laboratories and startups around the world. With this comes the need to educate the future generation of quantum engineers. This textbook is a companion to our video lectures on Overview of Quantum Communications from the Q-Leap Education project known as Quantum Academy of Science and Technology. It is a gentle introduction to quantum networks, and is suitable for use as a textbook for undergraduate students of diverse backgrounds. No prior knowledge of quantum physics or quantum information is assumed. Exercises are included in each chapter.
The quantum superposition principle is reconsidered on the basis of the adiabatic theorem of quantum mechanics, non-adiabatic dressed states, and experimental evidence. The physical mechanism and physical properties of quantum superposition are revealed.
With the maturity achieved by deep learning techniques, intelligent systems that can assist physicians in the daily interpretation of clinical images can play a very important role. In addition, quantum techniques applied to deep learning can enhance this performance, and federated learning techniques can realize privacy-friendly collaborative learning among different participants, solving privacy issues due to the use of sensitive data and reducing the amount of data to be collected by each individual participant. In this study we present a hybrid quantum neural network that can be used to quantify non-alcoholic liver steatosis and could be useful in the diagnostic process to determine a liver's suitability for transplantation; at the same time, we propose a federated learning approach based on a classical deep learning solution to solve the same problem, but using a reduced dataset in each part. The hybrid quantum ResNet model, consisting of 5 qubits and more than 100 variational gates, reaches a liver steatosis image classification accuracy of 97%, which is 1.8% higher than that of its classical counterpart, ResNet. Crucially, even with a reduced dataset, our hybrid approach consistently outperformed its classical counterpart, indicating superior generalization and less potential for overfitting in medical applications. In addition, a federated approach with up to 32 clients, despite a lower accuracy that nevertheless remains above 90%, would allow each participant to use a very small dataset, down to one-thirtieth of the full set. Our work, based on real-world clinical data, can be regarded as a scalable and collaborative starting point and could thus fulfill the need for an effective and reliable computer-assisted system that facilitates the daily diagnostic work of the clinical pathologist.
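The federated component follows the familiar federated-averaging pattern; a minimal sketch under that assumption (client_update and the list-of-arrays weight representation are schematic placeholders, not the paper's implementation):

```python
import numpy as np

def federated_average(global_weights, clients, client_update, rounds=10):
    """FedAvg-style loop: each client trains locally on its private
    (reduced) dataset; only weights are shared and size-weighted averaged."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in clients:                       # private local datasets
            updates.append(client_update(global_weights, data))
            sizes.append(len(data))
        frac = np.asarray(sizes, dtype=float)
        frac /= frac.sum()                         # weight by dataset size
        global_weights = [
            sum(f * u[k] for f, u in zip(frac, updates))
            for k in range(len(global_weights))
        ]
    return global_weights
```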
Recently, hybrid entanglement (HE), which involves entangling a qubit with a coherent state, has demonstrated superior performance in various quantum information processing tasks, particularly in quantum key distribution [arXiv:2305.18906 (2023)]. Despite its theoretical advantages, the practical generation of these states in the laboratory has been a challenge. In this context, we introduce a deterministic and efficient approach for generating HE states using quantum walks. Our method achieves a remarkable fidelity of 99.90% with just 20 time steps of a one-dimensional split-step quantum walk. This represents a significant improvement over prior approaches that yielded HE states only probabilistically, often with fidelities as low as 80%. Our scheme not only provides a robust solution to the generation of HE states but also highlights a unique advantage of quantum walks, thereby contributing to the advancement of this burgeoning field. Moreover, our scheme is experimentally feasible with current technology.
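For readers unfamiliar with the primitive, a bare-bones simulation of the underlying one-dimensional split-step walk is sketched below (coin-qubit only, periodic boundaries; this is the generic walk, not the paper's HE-state generation scheme, which additionally involves a coherent-state mode):

```python
import numpy as np

def split_step_walk(n_steps, theta1, theta2, n_sites=81):
    """1D split-step DTQW: alternate coin rotations with
    direction-dependent shifts of the two coin components."""
    def rot(th):
        return np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]])
    psi = np.zeros((n_sites, 2), dtype=complex)
    psi[n_sites // 2, 0] = 1.0                 # walker at origin, coin |0>
    for _ in range(n_steps):
        psi = psi @ rot(theta1).T              # first coin rotation
        psi[:, 0] = np.roll(psi[:, 0], +1)     # shift coin-0 component right
        psi = psi @ rot(theta2).T              # second coin rotation
        psi[:, 1] = np.roll(psi[:, 1], -1)     # shift coin-1 component left
    return psi                                 # amplitudes over (site, coin)
```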
In this paper, we study the problem of learning in quantum games - and other classes of semidefinite games - with scalar, payoff-based feedback. For concreteness, we focus on the widely used matrix multiplicative weights (MMW) algorithm and, instead of requiring players to have full knowledge of the game (and/or each other's chosen states), we introduce a suite of minimal-information matrix multiplicative weights (3MW) methods tailored to different information frameworks. The main difficulty to attaining convergence in this setting is that, in contrast to classical finite games, quantum games have an infinite continuum of pure states (the quantum equivalent of pure strategies), so standard importance-weighting techniques for estimating payoff vectors cannot be employed. Instead, we borrow ideas from bandit convex optimization and we design a zeroth-order gradient sampler adapted to the semidefinite geometry of the problem at hand. As a first result, we show that the 3MW method with deterministic payoff feedback retains the $\mathcal{O}(1/\sqrt{T})$ convergence rate of the vanilla, full information MMW algorithm in quantum min-max games, even though the players only observe a single scalar. Subsequently, we relax the algorithm's information requirements even further and we provide a 3MW method that only requires players to observe a random realization of their payoff observable, and converges to equilibrium at an $\mathcal{O}(T^{-1/4})$ rate. Finally, going beyond zero-sum games, we show that a regularized variant of the proposed 3MW method guarantees local convergence with high probability to all equilibria that satisfy a certain first-order stability condition.
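For reference, the full-information baseline that 3MW relaxes is the standard matrix multiplicative weights update (generic form; $V_s$ denotes the payoff-gradient matrix observed at round $s$ and $\eta>0$ a step size):
$$X_{t+1}=\frac{\exp\!\left(\eta\sum_{s=1}^{t}V_s\right)}{\operatorname{Tr}\exp\!\left(\eta\sum_{s=1}^{t}V_s\right)},$$
which keeps $X_{t+1}$ a valid density matrix; the 3MW variants replace $V_s$ with zeroth-order estimates reconstructed from scalar payoff observations.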
Quantum batteries are energy storage devices built using quantum mechanical objects, which are developed with the aim of outperforming their classical counterparts. Proposing optimal designs of quantum batteries which are able to exploit quantum advantages requires balancing the competing demands for fast charging, durable storage and effective work extraction. Here we study theoretically a bipartite quantum battery model, composed of a driven charger connected to an energy holder, within two paradigmatic cases of a driven-dissipative open quantum system: linear driving and quadratic driving. The linear battery is governed by a single exceptional point which splits the response of the battery into two regimes, one of which induces a good amount of useful work. Quadratic driving leads to a squeezed quantum battery, which generates plentiful useful work near to critical points associated with dissipative phase transitions. Our theoretical results may be realized with parametric cavities or nonlinear circuits, potentially leading to the manifestation of a quantum battery exhibiting squeezing.
Classification, the computational process of categorizing an input into pre-existing classes, is now a cornerstone of modern computation in the era of machine learning. Here we propose a new type of quantum classifier, based on quantum transport of particles in a trained quantum network. The classifier works by sending a quantum particle into a network and measuring the particle's exit point, which serves as a "class" and can be tuned by changing the network parameters. Using this scheme, we demonstrate three examples of classification; in the first, wave functions are classified according to their overlap with predetermined (random) groups. In the second, we classify wave functions according to their level of localization. Both examples use small training sets and achieve over 90\% precision and recall. The third classification scheme is a "real-world problem", concerning the classification of catalytic aromatic-aldehyde substrates according to their reactivity. Using experimental data, the quantum classifier reaches an average 86\% classification accuracy. We show that the quantum classifier outperforms its classical counterpart for these examples, thus demonstrating quantum advantage, especially in the regime of "small data". These results pave the way for a novel classification scheme, which can be implemented as an algorithm, and potentially realized experimentally on quantum hardware such as photonic networks.
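A toy version of the transport idea is sketched below (our own minimal construction: a random Hermitian Hamiltonian plays the role of the trained network, and exit-site probabilities play the role of class scores; the actual classifier trains the network parameters and may include absorption at the exits):

```python
import numpy as np
from scipy.linalg import expm

def transport_classify(psi_in, H, exit_sites, t=1.0):
    """Evolve an input state through a network Hamiltonian H and
    read the class off the most probable exit site."""
    psi_t = expm(-1j * H * t) @ psi_in          # continuous-time quantum walk
    probs = np.abs(psi_t[exit_sites]) ** 2      # probability at each exit
    return int(np.argmax(probs)), probs

# toy usage: a 6-site network with two exits acting as two classes
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 6)); H = (H + H.T) / 2  # random Hermitian "network"
psi = np.zeros(6, dtype=complex); psi[0] = 1.0  # particle injected at site 0
label, p = transport_classify(psi, H, exit_sites=[4, 5])
```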
We introduce a prototypical model for cavity polaritonic control of ultracold photochemistry by considering the resonant vibrational strong coupling of a rubidium dimer to a terahertz cavity. We demonstrate that at avoided crossings between a vibrational excitation and the vacuum photon absorption, the resulting polaritonic states between the molecule and photons can efficiently control the molecular vibrational Franck-Condon (FC) factors. Due to the entanglement between light and matter, the FC factor is transferred from one polaritonic branch to the other, leading to a polariton with a substantially enhanced FC factor. Utilizing this polariton state for photoassociation results in the enhanced formation of ultracold molecules. This work suggests a path to controlling photoassociation with cavity vacuum fields, and lays the ground for the emerging subfield of polaritonic ultracold chemistry.
Recent experiments demonstrate that a charge qubit consisting of a single electron bound to a solid neon surface exhibits an exceptionally long coherence time, making it a promising platform for quantum computing. However, some observations cast doubt on the direct correlation between the electron's binding mechanism and quantum states and the applied electric trapping potential. In this study, we introduce a theoretical framework to examine the electron's interactions with neon surface topography, such as bumps and valleys. By evaluating the surface charges induced by the electron, we demonstrate its strong perpendicular binding to the neon surface. The Schrodinger equation for the electron's lateral motion on the curved 2D surface is then solved for extensive topographical variations. Our results reveal that surface bumps can naturally bind an electron, forming unique ring-shaped quantum states that align with experimental observations. We also show that the electron's excitation energy can be smoothly tuned using a magnetic field to facilitate qubit operation. This study offers a leap in our understanding of e-neon qubit properties, laying the groundwork to guide their design and optimization for advancing quantum computing architectures.
NEWFOCUS is an EU COST Action targeted at exploring radical solutions that could influence the design of future wireless networks. The project aims to address some of the challenges associated with optical wireless communication (OWC) and to establish it as a complementary technology to radio frequency (RF)-based wireless systems in order to meet the demanding requirements of fifth generation (5G) and future sixth generation (6G) backhaul and access networks. Only 6G will be able to widely serve the exponential growth in connected devices (more than 500 billion expected by 2030), real-time holographic communication, future virtual reality, and similar applications. Space is emerging as the new frontier in 5G, 6G, and beyond communication networks, offering high-speed wireless coverage to remote areas both on land and at sea. This activity is supported by the recent development of low Earth orbit satellite mega-constellations. The focus of this 2nd White Paper is on the use of OWC as an enabling technology for medium- and long-range links for deployment in (i) smart cities and intelligent transportation systems; (ii) first- and last-mile access and backhaul/fronthaul wireless networks; (iii) hybrid free-space optics/RF adaptive wireless connections; (iv) space-to-ground, inter-satellite, ground-to-air, and air-to-air communications; and (v) underwater communications.
The advancement of quantum photonic technologies relies on the ability to precisely control the degrees of freedom of optically active states. Here, we realize real-time, room-temperature tunable strong plasmon-exciton coupling in 2D semiconductor monolayers enabled by a general approach that combines strain engineering with force- and voltage-adjustable plasmonic nanocavities. We show that the exciton energy and nanocavity plasmon resonance can be controllably toggled in concert by applying pressure with a plasmonic nanoprobe, allowing in operando control of detuning and coupling strength, with observed Rabi splittings >100 meV. Leveraging correlated force spectroscopy, nano-photoluminescence (nano-PL) and nano-Raman measurements, augmented with electromagnetic simulations, we identify distinct polariton bands and dark polariton states, and map their evolution as a function of nanogap and strain tuning. Uniquely, the system allows for manipulation of coupling strength over a range of cavity parameters without dramatically altering the detuning. Further, we establish that the tunable strong coupling is robust under multiple pressing cycles and repeated experiments over multiple nanobubbles. Finally, we show that the nanogap size can be directly modulated via an applied DC voltage between the substrate and plasmonic tip, highlighting the inherent nature of the concept as a plexcitonic nano-electro-mechanical system (NEMS). Our work demonstrates the potential to precisely control and tailor plexciton states localized in monolayer (1L) transition metal dichalcogenides (TMDs), paving the way for on-chip polariton-based nanophotonic applications ranging from quantum information processing to photochemistry.
This paper presents a new quantum protocol designed to simultaneously transmit information from one source to many recipients. The proposed protocol, which is based on the phenomenon of entanglement, is completely distributed and is provably information-theoretically secure. Numerous existing quantum protocols guarantee secure information communication between two parties but are not amenable to generalization to situations where the source must transmit information to two or more parties, and so must be applied sequentially two or more times in such a setting. The main novelty of the new protocol is its extensibility and generality to situations involving one party that must simultaneously communicate, in general, different messages to an arbitrary number of spatially distributed parties. This is achieved by the special way the transmitted information is encoded in the entangled state of the system, one of the distinguishing features compared to previous protocols. The protocol can prove expedient whenever an information broker, say, Alice, must communicate distinct secret messages to her agents, all in different geographical locations, in one go. Because the protocol is more complex than similar cryptographic protocols, involving communication among $n$ parties and relying on $GHZ_{n}$ tuples, we provide an extensive and detailed security analysis to prove that it is information-theoretically secure. Finally, in terms of implementation, the prevalent characteristic of the proposed protocol is its uniformity and simplicity, because it only requires CNOT and Hadamard gates and the local quantum circuits are identical for all information recipients.
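Since only CNOT and Hadamard gates are needed, the entangled resource itself is easy to illustrate. The sketch below (ours, not the paper's circuits) prepares the $GHZ_{n}$ state $(\vert 0\dots 0\rangle + \vert 1\dots 1\rangle)/\sqrt{2}$ on a NumPy statevector using exactly those two gates.

    import numpy as np

    # Prepare the GHZ_n resource state with only Hadamard and CNOT gates
    # (our statevector sketch, not the paper's full protocol).
    def ghz_state(n):
        dim = 2 ** n
        state = np.zeros(dim, dtype=complex)
        state[0] = 1.0
        # Hadamard on qubit 0 (the most significant bit of the basis index)
        Hgate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        state = (Hgate @ state.reshape(2, -1)).reshape(-1)
        # CNOT chain: qubit k controls qubit k+1
        for k in range(n - 1):
            idx = np.arange(dim)
            ctrl = (idx >> (n - 1 - k)) & 1           # control-bit values
            flipped = idx ^ (ctrl << (n - 2 - k))     # flip target if ctrl = 1
            state = state[flipped]                    # CNOT is an involution
        return state

    state = ghz_state(4)
    print(np.round(state[[0, -1]], 3))  # amplitudes of |0000> and |1111>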
The accurate computation of properties of large molecular systems is classically infeasible and is one of the applications in which it is hoped that quantum computers will demonstrate an advantage over classical devices. However, due to the limitations of present-day quantum hardware, variational-hybrid algorithms introduced to tackle these problems struggle to meet the accuracy and precision requirements of chemical applications. Here, we apply the Quantum Computed Moments (QCM) approach combined with a variety of noise-mitigation techniques to an 8 qubit/spin-orbital representation of the water molecule (H$_2$O). A noise-stable improvement on the variational result for a 4-excitation trial-state (circuit depth 25, 22 CNOTs) was obtained, with the ground-state energy computed to be within $1.4\pm1.2$ mHa of exact diagonalisation in the 14 spin-orbital basis. Thus, the QCM approach, despite an increased number of measurements and noisy quantum hardware (CNOT error rates c.1% corresponding to expected error rates on the trial-state circuit of order 20%), is able to determine the ground-state energy of a non-trivial molecular system at the required accuracy (c.0.1%). To the best of our knowledge, these results are the largest calculations performed on a physical quantum computer to date in terms of encoding individual spin-orbitals producing chemically relevant accuracy, and a promising indicator of how such hybrid approaches might scale to problems of interest in the low-error/fault-tolerant regimes as quantum computers develop.
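The classical post-processing behind QCM can be sketched in a few lines. The toy example below (ours, with a stand-in diagonal Hamiltonian rather than H$_2$O) converts the moments $\langle H^k\rangle$, $k \le 4$, measured on a trial state into cumulants and applies the Lanczos-expansion infimum estimate; the exact form of that estimate is our transcription of the formula quoted in the QCM literature.

    import numpy as np

    # Our toy sketch of the QCM post-processing step. The infimum formula
    # below is transcribed from the QCM literature (an assumption on our
    # part, not taken from this abstract).
    H = np.diag([0.0, 1.0, 2.0, 3.0])              # toy Hamiltonian, ground energy 0
    psi = np.sqrt(np.array([0.7, 0.1, 0.1, 0.1]))  # deliberately poor trial state

    m = [psi @ np.linalg.matrix_power(H, k) @ psi for k in range(1, 5)]
    c1 = m[0]
    c2 = m[1] - m[0]**2
    c3 = m[2] - 3*m[1]*m[0] + 2*m[0]**3
    c4 = m[3] - 4*m[2]*m[0] - 3*m[1]**2 + 12*m[1]*m[0]**2 - 6*m[0]**4

    e_qcm = c1 - c2**2 / (c3**2 - c2*c4) * (np.sqrt(3*c3**2 - 2*c2*c4) - c3)
    # raw variational 0.6 is corrected to ~0.05 (exact ground energy: 0)
    print(f"raw <H> = {c1:.3f}, QCM estimate = {e_qcm:.3f}")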
We study the properties of the random quantum states induced from uniformly random pure states on a bipartite quantum system by taking the partial trace over the larger subsystem. Most previous studies have adopted a viewpoint of "concentration of measure" and have focused on the behavior of states close to the average. In contrast, we investigate the large deviation regime, where the states may be far from the average. We prove the following results: First, the probability that the induced random state lies within a given set decreases neither slower nor faster than exponentially in the dimension of the subsystem traced out. Second, the exponent is equal to the quantum relative entropy of the maximally mixed state and the given set, multiplied by the dimension of the remaining subsystem. Third, the total probability of a given set strongly concentrates around the element closest to the maximally mixed state, a property that we call conditional concentration. Along the same lines, we also investigate the asymptotic behavior of the coherence of random pure states in a single system with large dimension.
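A quick numerical companion (our sketch): sampling Haar-random bipartite pure states and tracing out the larger subsystem shows the induced state concentrating around the maximally mixed state as the traced-out dimension grows.

    import numpy as np

    # Our sketch: induced states rho = Tr_B |psi><psi| for Haar-random psi.
    rng = np.random.default_rng(0)

    def induced_state(d_a, d_b):
        psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
        psi /= np.linalg.norm(psi)       # Haar-random pure state on A x B
        return psi @ psi.conj().T        # partial trace over subsystem B

    d_a = 4
    for d_b in (4, 64, 1024):
        rho = induced_state(d_a, d_b)
        # trace-norm distance to the maximally mixed state I/d_A
        dist = np.linalg.norm(rho - np.eye(d_a) / d_a, ord="nuc")
        print(f"d_B = {d_b:4d}: ||rho - I/d_A||_1 = {dist:.4f}")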
Consider the problem of minimizing an expected logarithmic loss over either the probability simplex or the set of quantum density matrices. This problem encompasses tasks such as solving the Poisson inverse problem, computing the maximum-likelihood estimate for quantum state tomography, and approximating positive semi-definite matrix permanents with the currently tightest approximation ratio. Although the optimization problem is convex, standard iteration complexity guarantees for first-order methods do not directly apply due to the absence of Lipschitz continuity and smoothness in the loss function. In this work, we propose a stochastic first-order algorithm named $B$-sample stochastic dual averaging with the logarithmic barrier. For the Poisson inverse problem, our algorithm attains an $\varepsilon$-optimal solution in $\tilde{O} (d^2/\varepsilon^2)$ time, matching the state of the art. When computing the maximum-likelihood estimate for quantum state tomography, our algorithm yields an $\varepsilon$-optimal solution in $\tilde{O} (d^3/\varepsilon^2)$ time, where $d$ denotes the dimension. This improves on the time complexities of existing stochastic first-order methods by a factor of $d^{\omega-2}$ and those of batch methods by a factor of $d^2$, where $\omega$ denotes the matrix multiplication exponent. Numerical experiments demonstrate that empirically, our algorithm outperforms existing methods with explicit complexity guarantees.
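To make the objective concrete, the sketch below minimizes the empirical log loss over the simplex with plain stochastic exponentiated gradient (entropy mirror descent). This is a simple baseline of our own on synthetic data, not the paper's $B$-sample stochastic dual averaging.

    import numpy as np

    # Our baseline sketch, not the paper's algorithm: minimize
    # f(x) = -(1/n) sum_i log(a_i^T x) over the probability simplex.
    rng = np.random.default_rng(0)
    n, d = 2000, 10
    A = rng.uniform(0.1, 1.0, size=(n, d))    # nonnegative data keeps a^T x > 0

    x = np.full(d, 1.0 / d)                   # start at the simplex center
    for t in range(1, 5001):
        a = A[rng.integers(n)]                # sample one loss term
        grad = -a / (a @ x)                   # gradient of -log(a^T x)
        x = x * np.exp(-grad / np.sqrt(t))    # EG step with step size 1/sqrt(t)
        x /= x.sum()                          # renormalize onto the simplex
    print("final empirical loss:", -np.mean(np.log(A @ x)))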
In this paper, we study the interaction between an atom and a finite-size Su-Schrieffer-Heeger (SSH) chain. We find that when the finite SSH chain is in the trivial phase, it can be viewed as a waveguide with finite bandwidth and a non-linear dispersion relation coupled to the atom. For the SSH chain in the topological phase, when the atom frequency is resonant with the edge modes, the atom couples to the two edge states. In this case, there exists a special channel that can be utilized to transfer the atomic excitation to the ends of the chain via an adiabatic process. Depending on which sub-lattice the atom couples to, its excitation can be transferred to the leftmost or the rightmost end of the chain, which offers a potential application in quantum information processing. Furthermore, the excitation transfer to the ends of the chain can also be realized without the adiabatic process. Our work provides a pathway toward controllable quantum information transfer based on atoms coupled to topological matter.
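The topological setting can be reproduced in a few lines (our sketch, hypothetical parameters): a finite SSH chain with alternating hoppings $v$ and $w$ hosts two near-zero-energy modes localized at its ends when $v < w$, and these are the edge states the atom hybridizes with.

    import numpy as np

    # Our sketch of the finite SSH chain the atom couples to.
    def ssh_hamiltonian(n_cells, v, w):
        n = 2 * n_cells
        H = np.zeros((n, n))
        for i in range(n - 1):
            # intra-cell hopping v on even bonds, inter-cell hopping w on odd
            H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w
        return H

    H = ssh_hamiltonian(20, v=0.5, w=1.0)       # topological phase (v < w)
    energies, states = np.linalg.eigh(H)
    edge = np.argsort(np.abs(energies))[:2]     # the two mid-gap modes
    print("edge-mode energies:", energies[edge])
    print("combined weight on the two leftmost / rightmost sites:",
          (states[:2, edge] ** 2).sum(), (states[-2:, edge] ** 2).sum())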
We study the effect of a $\delta$ distribution potential placed at $x_0\geq 0$ and multiplied by a parameter $\alpha$ on a quantum mechanical particle in an infinite square well over the segment $\left[-\,\frac{L}{2},\frac{L}{2}\right]$. We obtain the limit of the eigenfunctions of the time-independent Schr\"{o}dinger equation as $\alpha\nearrow+\infty$ and as $\alpha\searrow-\infty$. We see how each solution of the Schr\"{o}dinger equation corresponding to $\alpha=0$ changes as $\alpha$ runs through the real line. When $x_0$ is a rational multiple of $L$, there exist solutions of the Schr\"{o}dinger equation which vanish at $x_0$ and are unaffected by the value of $\alpha$. We show that each one of these has an energy that coincides with the energy of a certain limiting eigenfunction obtained by taking $|\alpha|\to\infty$. The expectation value of the position of a particle with wave function equal to the limiting eigenfunction is $x_0$.
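A finite-difference check of this picture is straightforward (our sketch, in units $\hbar = m = 1$ with $L = 1$): on a grid, the $\delta$ potential becomes a single bin of area $\alpha$, and the level whose eigenfunction vanishes at a rational $x_0$ is visibly insensitive to $\alpha$.

    import numpy as np

    # Our finite-difference sketch (hbar = m = 1, L = 1): the delta potential
    # at x0 is approximated by a single grid bin of area alpha.
    L, N = 1.0, 1000
    x = np.linspace(-L / 2, L / 2, N + 2)[1:-1]   # interior grid points
    h = x[1] - x[0]

    def levels(alpha, x0, k=5):
        V = np.zeros(N)
        V[np.argmin(np.abs(x - x0))] = alpha / h  # delta spike of area alpha
        Hmat = (np.diag(1.0 / h**2 + V)
                + np.diag(np.full(N - 1, -0.5 / h**2), 1)
                + np.diag(np.full(N - 1, -0.5 / h**2), -1))
        return np.linalg.eigvalsh(Hmat)[:k]

    # x0 = L/4 is a rational multiple of L: the n = 4 level (E = 8 pi^2,
    # whose eigenfunction vanishes at x0) stays put as alpha grows.
    for alpha in (0.0, 25.0, 1e6):
        print(f"alpha = {alpha:9.1f}:", np.round(levels(alpha, x0=0.25), 2))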
Solid-state quantum emitters coupled to integrated photonic nanostructures are quintessential for exploring fundamental phenomena in cavity quantum electrodynamics and are widely employed in photonic quantum technologies such as non-classical light sources, quantum repeaters, and quantum transducers. One of the most exciting promises of integrated quantum photonics is the potential for scalability, enabling mass production of miniaturized devices on a single chip. In reality, the yield of efficient and reproducible light-matter couplings is greatly hindered by the spectral and spatial mismatches between single solid-state quantum emitters and the confined or propagating optical modes supported by photonic nanostructures, preventing the high-throughput realization of large-scale integrated quantum photonic circuits for more advanced quantum information processing tasks. In this work, we introduce the concept of hyperspectral imaging to quantum optics, for the first time, to address this long-standing issue. By exploiting the extended mode with a unique dispersion in a 1D planar cavity, the spectral and spatial information of each individual quantum dot in an ensemble can be accurately and reliably extracted, with super-resolution, from a single wide-field photoluminescence image. With the extracted quantum dot positions and emission wavelengths, surface-emitting quantum light sources and in-plane photonic circuits can be deterministically fabricated with high throughput by etching the 1D confined planar cavity into 3D confined micropillars and 2D confined waveguides. Further extension of this technique using an open planar cavity could be exploited for pursuing a variety of compact quantum photonic devices with expanded functionalities for large-scale integration. Our work is expected to change the landscape of integrated quantum photonic technology.
The rotating wave approximation (RWA) plays a central role in the quantum dynamics of two-level systems. We derive corrections to the RWA using the renormalization group approach to asymptotic analysis. We study both the Rabi and Jaynes-Cummings models and compare our analytical results with numerical calculations.
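For a sense of the size of the corrections, the sketch below (ours, with illustrative parameters) integrates the same driven two-level system with the full cosine drive and with its RWA counterpart and reports the maximum population difference, which is where Bloch-Siegert-type corrections live.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Our sketch (illustrative parameters): resonant drive on a two-level
    # system, with the full cosine drive versus its RWA counterpart.
    omega0 = omega = 1.0                      # qubit and drive frequencies
    Omega = 0.2                               # Rabi frequency
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def rhs(full):
        def f(t, psi):
            if full:                          # full drive: Omega cos(wt) sx
                Hd = Omega * np.cos(omega * t) * sx
            else:                             # RWA drive, lab frame
                Hd = 0.5 * Omega * (np.cos(omega * t) * sx
                                    + np.sin(omega * t) * sy)
            return -1j * ((0.5 * omega0 * sz + Hd) @ psi)
        return f

    psi0 = np.array([1, 0], dtype=complex)    # start in the excited state
    ts = np.linspace(0, 60, 600)
    pops = {full: np.abs(solve_ivp(rhs(full), (0, 60), psi0, t_eval=ts,
                                   rtol=1e-9, atol=1e-9).y[0]) ** 2
            for full in (True, False)}
    print("max |full - RWA| population difference:",
          np.abs(pops[True] - pops[False]).max())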
Exactly solvable models play an extremely important role in many fields of quantum physics. In this study, the Schr\"{o}dinger equation is solved for a two-dimensional (2D) problem of two particles interacting via the Kratzer and modified Kratzer potentials. We find the exact bound-state solutions of the two-dimensional Schr\"{o}dinger equation with Kratzer-type potentials and present analytical expressions for the eigenvalues and eigenfunctions. The eigenfunctions are given in terms of the associated Laguerre polynomials.
We propose a technique for optimizing quantum programs by temporally stretching pre-calibrated pulses. As an example, we modify a three-qubit Toffoli gate implementation by using an off-the-shelf numerical optimization algorithm to shorten the cross-resonance pulses in the sequence. Preliminary quantum process tomography results suggest that our strategy sometimes halves a Toffoli gate's error in practice, increasing process fidelity from around 60% to around 80%. Unlike existing quantum control techniques, ours takes seconds to converge, demonstrating its potential utility when incorporated into a general-purpose compiler pass that improves both the time and the accuracy of quantum programs.
It is well-known that classical random walks on regular graphs converge to the uniform distribution. Quantum walks, in their various forms, are quantizations of their corresponding classical random walk processes. Gerhardt and Watrous (2003) demonstrated that continuous-time quantum walks do not converge to the uniform distribution on certain Cayley graphs of the Symmetric group, which by definition are all regular. In this paper, we demonstrate that discrete-time quantum walks, in the sense of quantized Markov chains as introduced by Szegedy (2004), also do not converge to the uniform distribution. We analyze the spectra of the Szegedy walk operators using the representation theory of the symmetric group. In the discrete setting, the analysis is complicated by the fact that we work within a Hilbert space of a higher dimension than the continuous case, spanned by pairs of vertices. Our techniques are general, and we believe they can be applied to derive similar analytical results for other non-commutative groups using the characters of their irreducible representations.
The presence of many degenerate $d/f$ orbitals makes polynuclear transition metal compounds such as iron-sulfur clusters in nitrogenase challenging for state-of-the-art quantum chemistry methods. To address this challenge, we present the first distributed multi-GPU (Graphics Processing Unit) ab initio density matrix renormalization group (DMRG) algorithm, suitable for modern high-performance computing (HPC) infrastructures. The central idea is to parallelize the most computationally intensive part, the multiplication of $O(K^2)$ operators with a trial wavefunction, where $K$ is the number of spatial orbitals, by combining operator parallelism for distributing the workload with a batched algorithm for performing contractions on GPU. With this new implementation, we are able to reach an unprecedented accuracy (1 milli-Hartree per metal) for the ground-state energy of an active space model (114 electrons in 73 active orbitals) of the P-cluster with a bond dimension $D=14000$ on 48 GPUs (NVIDIA A100 80 GB SXM), which is nearly three times larger than the bond dimensions reported in previous DMRG calculations for the same system using only CPUs.
Quantum convolutional neural networks (QCNNs) have attracted attention as one of the most promising algorithms for quantum machine learning. Reductions in the cost of training as well as improvements in performance are required for the practical implementation of these models. In this study, we propose a channel attention mechanism for QCNNs and show the effectiveness of this approach for quantum phase classification problems. Our attention mechanism creates multiple channels of output states based on measurements of the quantum bits. This simple approach improves the performance of QCNNs and outperforms a conventional approach that uses feedforward neural networks as additional post-processing.
We introduce a new notion called ${\cal Q}$-secure pseudorandom isometries (PRI). A pseudorandom isometry is an efficient quantum circuit that maps an $n$-qubit state to an $(n+m)$-qubit state in an isometric manner. In terms of security, we require that the output of a $q$-fold PRI on $\rho$, for $\rho \in {\cal Q}$, for any polynomial $q$, should be computationally indistinguishable from the output of a $q$-fold Haar isometry on $\rho$. By fine-tuning ${\cal Q}$, we recover many existing notions of pseudorandomness. We present a construction of PRIs and, assuming post-quantum one-way functions, prove their ${\cal Q}$-security for several interesting settings of ${\cal Q}$. We also demonstrate many cryptographic applications of PRIs, including length extension theorems for quantum pseudorandomness notions, message authentication schemes for quantum states, multi-copy secure public and private encryption schemes, and succinct quantum commitments.
Quantum control of integrated photonic devices has emerged as a powerful tool to overcome signal losses and obtain dynamic control over photon energies, without complex device customization. Here, we demonstrate the coherent quantum control of extraordinary optical transmission (EOT) in the visible regime by coupling plasmon resonances with a two-level quantum emitter (QE). We optimize the spectral response of the plasmon-emitter coupled system within a coupled-harmonic-oscillator model. We analyze the shift in the resonance frequency of the plasmon modes by varying the transition frequency of the QE through an external bias voltage. By sweeping the QE emission from green to red light, the hybrid mode resonance frequency shifts to longer wavelengths, with a maximum energy shift of up to 181 meV. Due to the coherent coupling of phase-locked modes in the presence of the QE, the lifetime of the ultrafast plasmon is enhanced by an order of magnitude. We discuss the impact of spectral and temporal modulation of the plasmon resonances on the characteristics of the EOT signal through the 3D finite difference time domain (FDTD) method. Our proposed method provides active electronic control of the EOT signal, making it a feasible and compact element in integrated photonic circuits for bio-sensing, high-resolution imaging, and molecular spectroscopy applications.
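The coupled-harmonic-oscillator picture invoked here amounts to diagonalizing a 2x2 matrix. The sketch below (ours, with illustrative numbers, not values fitted to the experiment) scans the plasmon resonance through the emitter line and reads off the hybrid modes and the Rabi splitting, which equals $2g$ at zero detuning.

    import numpy as np

    # Our illustrative numbers (not fitted to the experiment): hybrid modes
    # of the 2x2 coupled-oscillator Hamiltonian as the plasmon is tuned
    # through the emitter transition.
    g = 0.090                                 # coupling strength (eV)
    E_qe = 2.10                               # emitter transition energy (eV)
    for E_pl in (1.95, 2.10, 2.25):           # plasmon resonance (eV)
        lower, upper = np.linalg.eigvalsh(np.array([[E_pl, g], [g, E_qe]]))
        print(f"E_pl = {E_pl:.2f} eV: modes {lower:.3f} / {upper:.3f} eV, "
              f"splitting {upper - lower:.3f} eV")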
Here we introduce an improved approach to Variational Quantum Attack Algorithms (VQAA) on cryptographic protocols. Our methods provide robust quantum attacks on well-known cryptographic algorithms, more efficiently and with remarkably fewer qubits than previous approaches. We implement simulations of our attacks for symmetric-key protocols such as S-DES, S-AES and Blowfish. For instance, we show how our attack allows a classical simulation of a small 8-qubit quantum computer to find the secret key of one 32-bit Blowfish instance with 24 times fewer iterations than a brute-force attack. Our work also shows improvements in attack success rates for lightweight ciphers such as S-DES and S-AES. Further applications beyond symmetric-key cryptography are also discussed, including asymmetric-key protocols and hash functions. In addition, we comment on potential future improvements of our methods. Our results bring us one step closer to assessing the vulnerability of large-size classical cryptographic protocols with Noisy Intermediate-Scale Quantum (NISQ) devices, and set the stage for future research in quantum cybersecurity.
In this paper, expressions for the entropy and equations for the quantum distribution functions are obtained for systems of non-interacting fermions and bosons with an arbitrary, including small, number of particles.
We show that free many-body fermionic non-Hermitian systems require two distinct sets of topological invariants to describe the topology of energy bands and of states, respectively, with the latter yet to be explored. We identify 10 symmetry classes, defined by particle-hole, linearized time-reversal, and linearized chiral symmetries, leading to a 10-fold classification of the quantum states of many-body non-Hermitian topological phases. Unique topological invariants are defined in each class, dictating the topology of states. These findings pave the way for a deeper understanding of the topological phases of many-body non-Hermitian systems.
We provide a theory for electronic transitions induced by ultrashort electromagnetic pulses in two-dimensional artificial relativistic atoms which are created by a charged impurity in a gapped graphene monolayer. Using a non-perturbative sudden-perturbation approximation, we derive and discuss analytical expressions for the probabilities for excitation, ionization and electron-hole pair creation in this system.
The preparation of pure quantum states with high degrees of macroscopicity is a central goal of ongoing experimental efforts to control quantum systems. We present a state preparation protocol which renders a mechanical oscillator with an arbitrarily large coherent amplitude in a manifestly nonclassical state. The protocol relies on coherent state preparation followed by a projective measurement of a single Raman scattered photon, making it particularly suitable for cavity optomechanics. The nonclassicality of the state is reflected by sub-Poissonian phonon statistics, which can be accessed by measuring the statistics of subsequently emitted Raman sideband photons. The proposed protocol would facilitate the observation of nonclassicality of a mechanical oscillator that moves macroscopically relative to motion at the single-phonon level.
Ultracold atomic systems have emerged as strong contenders amongst the various quantum systems relevant for developing and implementing quantum technologies due to their enhanced control and flexibility of the operating conditions. In this thesis, we explore persistent currents generated in a ring-shaped quantum gas of strongly interacting \textit{N}-component fermions, specifically the so-called SU(\textit{N}) fermions. Our results, apart from being a relevant contribution to many-body physics, provide the `primum mobile' for a new concept of matter-wave circuits based on SU(\textit{N}) fermionic platforms, opening an exciting chapter in the field of atomtronics. Indeed, the specific properties of quantization are expected to provide the core for fabricating quantum devices with enhanced sensitivity, such as interferometers. At the same time, SU(\textit{N}) fermionic circuits show promise for engineering cold-atom quantum simulators with this artificial fermionic matter.
In the burgeoning domain of distributed quantum computing, achieving consensus amidst adversarial settings remains a pivotal challenge. We introduce an enhancement to the Quantum Byzantine Agreement (QBA) protocol, uniquely incorporating advanced error mitigation techniques: Twirled Readout Error Extinction (T-REx) and dynamical decoupling (DD). Central to this refined approach is the utilization of a Noisy Intermediate Scale Quantum (NISQ) source device for heightened performance. Extensive tests on both simulated and real-world quantum devices, notably IBM's quantum computer, provide compelling evidence of the effectiveness of our T-REx and DD adaptations in mitigating prevalent quantum channel errors. Subsequent to the entanglement distribution, our protocol adopts a verification method reminiscent of Quantum Key Distribution (QKD) schemes. The Commander then issues orders encoded in specific quantum states, like Retreat or Attack. In situations where received orders diverge, lieutenants engage in structured games to reconcile discrepancies. Notably, the frequency of these games is contingent upon the Commander's strategies and the overall network size. Our empirical findings underscore the enhanced resilience and effectiveness of the protocol in diverse scenarios. Nonetheless, scalability emerges as a concern with the growth of the network size. To sum up, our research illuminates the considerable potential of fortified quantum consensus systems in the NISQ era, highlighting the imperative for sustained research in bolstering quantum ecosystems.
The use of Evolutionary Algorithms (EA) for solving mathematical/computational optimization problems is inspired by the biological processes of evolution. Some of the primitives involved in the evolutionary process/paradigm are selection of 'fit' individuals (from a population sample) for retention, cloning, mutation, discarding, breeding, crossover, etc. In the Evolutionary Algorithm abstraction, the individuals are deemed to be solution candidates to an optimization problem, and additional solutions(/sets) are built by applying analogies to the above primitives (cloning, mutation, etc.) by means of evaluating a 'fitness' function/criterion. One such algorithm is Differential Evolution (DE), which can be used to compute the minima of functions such as the Rastrigin and Rosenbrock functions. This work is an attempt to study the result of applying the DE method to these functions, with candidate individuals generated on classical Turing-modeled computation, and to compare the same with those on state-of-the-art quantum computation. The study benchmarks the convergence of these functions by varying the parameters initialized and reports timing, convergence, and resource utilization results.
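A textbook DE/rand/1/bin loop on the Rastrigin function, whose global minimum is 0 at the origin, shows the classical side of this comparison. The sketch below is our own minimal configuration with hypothetical hyperparameters, not the exact setup benchmarked in the work.

    import numpy as np

    # Our minimal DE/rand/1/bin configuration, not the paper's benchmark.
    def rastrigin(x):
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(0)
    dim, pop_size, F, CR = 5, 40, 0.7, 0.9
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    cost = np.array([rastrigin(ind) for ind in pop])

    for gen in range(300):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = a + F * (b - c)                    # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # force one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if (t_cost := rastrigin(trial)) < cost[i]:  # greedy selection
                pop[i], cost[i] = trial, t_cost
    print("best Rastrigin value after 300 generations:", cost.min())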
In this paper, we investigate the influence of quasiperiodic perturbations on one-dimensional non-Hermitian diamond lattices that possess flat bands with an artificial magnetic flux $\theta$. Our study shows that the symmetry of these perturbations and the magnetic flux $\theta$ play a pivotal role in shaping the localization properties of the system. When $\theta=0$, the non-Hermitian lattice exhibits a single flat band in the crystalline case, and symmetric as well as antisymmetric perturbations can induce accurate mobility edges. In contrast, when $\theta=\pi$, the clean diamond lattice manifests three dispersionless bands referred to as an "all-band-flat" (ABF) structure, irrespective of the non-Hermitian parameter. The ABF structure restricts the transition from delocalized to localized states, as all states remain localized for any finite symmetric perturbation. Our numerical calculations further unveil that the ABF system subjected to antisymmetric perturbations exhibits multifractal-to-localized edges. Multifractal states are predominantly concentrated in the internal region of the spectrum. Additionally, we explore the case where $\theta$ lies within the range of $(0, \pi)$, revealing a diverse array of complex localization features within the system.
The recent realization of a two-site Kitaev chain featuring "poor man's Majorana" states demonstrates a path forward in the field of topological superconductivity. Harnessing the potential of these states for quantum information processing, however, requires increasing their robustness to external perturbations. Here, we form a two-site Kitaev chain using proximitized quantum dots hosting Yu-Shiba-Rusinov states. The strong hybridization between such states and the superconductor enables the creation of poor man's Majorana states with a gap larger than $70 \mathrm{~\mu eV}$. It also greatly reduces the charge dispersion compared to Kitaev chains made with non-proximitized quantum dots. The large gap and reduced sensitivity to charge fluctuations will benefit qubit manipulation and demonstration of non-abelian physics using poor man's Majorana states.
We describe a quantum algorithm based on an interior point method for solving a linear program with $n$ inequality constraints on $d$ variables. The algorithm explicitly returns a feasible solution that is $\varepsilon$-close to optimal, and runs in time $\sqrt{n}\, \mathrm{poly}(d,\log(n),\log(1/\varepsilon))$, which is sublinear for tall linear programs (i.e., $n \gg d$). Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford [FOCS '14]. This requires us to efficiently approximate the Hessian and gradient of the barrier function, and these are our main contributions. To approximate the Hessian, we describe a quantum algorithm for the spectral approximation of $A^T A$ for a tall matrix $A \in \mathbb R^{n \times d}$. The algorithm uses leverage score sampling in combination with Grover search, and returns a $\delta$-approximation by making $O(\sqrt{nd}/\delta)$ row queries to $A$. This generalizes an earlier quantum speedup for graph sparsification by Apers and de Wolf [FOCS '20]. To approximate the gradient, we use a recent quantum algorithm for multivariate mean estimation by Cornelissen, Hamoudi and Jerbi [STOC '22]. While a naive implementation introduces a dependence on the condition number of the Hessian, we avoid this by pre-conditioning our random variable using our quantum algorithm for spectral approximation.
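The classical core of the spectral-approximation subroutine is easy to demonstrate (our sketch; the quantum algorithm replaces the sampling loop with Grover search and row queries): sample rows of a tall matrix proportionally to their leverage scores, rescale, and compare $A^T A$ with its sampled estimate.

    import numpy as np

    # Our classical sketch of leverage-score sampling for approximating
    # A^T A; the quantum speedup itself is not reproduced here.
    rng = np.random.default_rng(0)
    n, d = 20000, 10
    A = rng.normal(size=(n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))

    Q, _ = np.linalg.qr(A)                        # exact leverage scores, for clarity
    lev = np.sum(Q**2, axis=1)
    probs = lev / lev.sum()

    k = 2000                                      # number of sampled rows
    idx = rng.choice(n, size=k, p=probs)
    S = A[idx] / np.sqrt(k * probs[idx, None])    # rescale so E[S^T S] = A^T A

    exact, approx = A.T @ A, S.T @ S
    err = np.linalg.norm(approx - exact, 2) / np.linalg.norm(exact, 2)
    print(f"relative spectral error with {k} of {n} rows: {err:.3f}")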
Anomaly detection is a crucial task in machine learning that involves identifying unusual patterns or events in data. It has numerous applications in various domains such as finance, healthcare, and cybersecurity. With the advent of quantum computing, there has been a growing interest in developing quantum approaches to anomaly detection. After reviewing traditional approaches to anomaly detection relying on statistical or distance-based methods, we will propose a Quadratic Unconstrained Binary Optimization (QUBO) model formulation of anomaly detection, compare it with classical methods, and discuss its scalability on current Quantum Processing Units (QPU).
Quantum superposition of high-dimensional states enables both computational speed-up and security in cryptographic protocols. However, the exponential complexity of tomographic processes makes certification of these properties a challenging task. In this work, we experimentally certify coherence witnesses tailored for quantum systems of increasing dimension, using pairwise overlap measurements enabled by a six-mode universal photonic processor fabricated with a femtosecond laser writing technology. In particular, we show the effectiveness of the proposed coherence and dimension witnesses for qudits of dimensions up to 5. We also demonstrate advantage in a quantum interrogation task, and show it is fueled by quantum contextuality. Our experimental results testify to the efficiency of this novel approach for the certification of quantum properties in programmable integrated photonic platforms.
We propose an Indirect Quantum Approximate Optimization Algorithm (IQAOA) in which the Quantum Alternating Operator Ansatz uses a general parameterized family of unitary operators to efficiently model the Hamiltonian describing the set of string vectors. This algorithm is an efficient alternative to QAOA, comprising: 1) a parametrized quantum circuit, executed on a quantum machine, that models the set of string vectors; 2) a classical meta-optimization loop, executed on a classical machine; and 3) an estimation of the average cost of each string vector, computed using a well-known, problem-dependent algorithm from the OR community. The indirect encoding given by a string vector is mapped into a solution by an efficient coding/decoding mechanism. The main advantage is a quantum circuit with a strongly limited number of gates that can be executed on today's noisy quantum machines. The numerical experiments carried out with IQAOA solve 8-customer TSP instances using the IBM simulator, which are, to the best of our knowledge, the largest TSP instances ever solved using a QAOA-based approach.
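The phrase "indirect encoding" is the key idea and is easy to illustrate classically. In the sketch below (ours, not the paper's circuit), a real-valued string vector is decoded into a TSP tour by sorting, random-key style, and an ordinary routine scores the tour; a QAOA loop would tune the distribution over such string vectors.

    import numpy as np

    # Our sketch of random-key indirect encoding for an 8-customer TSP:
    # the sort order of a real-valued string vector is the tour.
    rng = np.random.default_rng(0)
    cities = rng.uniform(size=(8, 2))        # hypothetical customer locations

    def decode(string_vector):
        """Random-key decoding: the argsort of the vector is the tour."""
        return np.argsort(string_vector)

    def tour_length(tour):
        return sum(np.linalg.norm(cities[tour[i]] - cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    # score many random string vectors in place of the quantum sampler
    best = min(tour_length(decode(rng.uniform(size=8))) for _ in range(2000))
    print("best tour length over 2000 random string vectors:", round(best, 3))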
Quantum error correction (QEC) with single-shot decoding enables reduction of errors after every single round of noisy stabilizer measurement, easing the time-overhead requirements for fault tolerance. Notably, several classes of quantum low-density-parity-check (qLDPC) codes are known which facilitate single-shot decoding, potentially giving them an additional overhead advantage. However, the perceived advantage of single-shot decoding is limited because it can significantly degrade the effective code distance. This degradation may be compensated for by using a much larger code size to achieve the desired target logical error rate, at the cost of increasing the amount of syndrome information to be processed as well as the complexity of logical operations. Alternatively, in this work we study sliding-window decoding, which corrects errors from previous syndrome measurement rounds while leaving the most recent errors for future correction. We observe that sliding-window decoding significantly improves the logical memory lifetime and hence the effective distance compared to single-shot decoding on hypergraph-product codes and lifted-product codes. Remarkably, we find that this improvement may not cost a larger decoding complexity. Thus, the sliding-window strategy can be more desirable for fast and accurate decoding for fault-tolerant quantum computing with qLDPC codes.
The Kitaev spin liquid, stabilized as the ground state of the Kitaev honeycomb model, is a paradigmatic example of a topological $\mathbb{Z}_2$ quantum spin liquid. The fate of the Kitaev spin liquid in the presence of an external magnetic field is a topic of current interest due to experiments that apparently unveil a $\mathbb{Z}_2$ topological phase in the so-called Kitaev materials, and theoretical studies predicting the emergence of an intermediate quantum phase of debated nature before the appearance of a trivial partially polarized phase. In this work, we employ hierarchical mean-field theory, an algebraic and numerical method based on the use of clusters preserving relevant symmetries and short-range quantum correlations, to investigate the quantum phase diagram of the antiferromagnetic Kitaev model in a [111] field. By using clusters of 24 sites, we predict that the Kitaev spin liquid transits through two intermediate phases characterized by stripe and chiral order, respectively, before entering the trivial partially polarized phase, differing from previous studies. We assess our results by performing exact diagonalization and computing the scaling of different observables, including the many-body Chern number and other topological quantities, thus establishing hierarchical mean-field theory as a method to study topological quantum spin liquids.
Quantum Computing allows, in principle, the encoding of the exponentially scaling many-electron wave function onto a linearly scaling qubit register, offering a promising solution to overcome the limitations of traditional quantum chemistry methods. An essential requirement for ground state quantum algorithms to be practical is the initialisation of the qubits to a high-quality approximation of the sought-after ground state. Quantum State Preparation (QSP) allows the preparation of approximate eigenstates obtained from classical calculations, but it is frequently treated as an oracle in quantum information. In this study, we conduct QSP on the ground state of prototypical strongly correlated systems, up to 28 qubits, using the Hyperion GPU-accelerated state-vector emulator. Various variational and non-variational methods are compared in terms of their circuit depth and classical complexity. Our results indicate that the recently developed Overlap-ADAPT-VQE algorithm offers the most advantageous performance for near-term applications.
Systems displaying quantum topological order feature robust characteristics that have been very attractive to quantum computing schemes. It has long been believed that the salient universal features of topologically ordered systems are invariably described by topological quantum field theories. In the current work, we illustrate that this is not necessarily so. Towards this end, we construct and study a rich class of two- and three-dimensional topologically ordered models featuring interacting anyons (both Abelian and non-Abelian). In these theories, the lowest excitation energies depend on the relative geometrical placement of the anyons leading to properties that cannot be described by topological quantum field theories. We examine these models by performing dualities to systems displaying conventional (i.e., Landau) orders. Our approach enables a general method for mapping general Landau type theories to topologically ordered dual models. The low-energy subspaces of our models are more resilient to thermal effects than those of surface codes.
The generation and manipulation of ultracold atomic ensembles in the quantum regime require the application of dynamically controllable microwave fields with ultra-low noise performance. Here, we present a low-phase-noise microwave source with two independently controllable output paths. Both paths generate frequencies in the range of $6.835\,$GHz $\pm$ $25\,$MHz for hyperfine transitions in $^{87}$Rb. The presented microwave source combines two commercially available frequency synthesizers: an ultra-low-noise oscillator at $7\,$GHz and a direct digital synthesizer for radiofrequencies. We demonstrate a low integrated phase noise of $480\,\mu$rad in the range of $10\,$Hz to $100\,$kHz and fast updates of frequency, amplitude and phase in sub-$\mu$s time scales. The highly dynamic control enables the generation of shaped pulse forms and the deployment of composite pulses to suppress the influence of various noise sources.
Security of a storage device against a tampering adversary has been a well-studied topic in classical cryptography. Such models give black-box access to an adversary, and the aim is to protect the stored message or abort the protocol if there is any tampering. In this work, we extend the scope of the theory of tamper detection codes against an adversary with quantum capabilities. We consider encoding and decoding schemes that are used to encode a $k$-qubit quantum message $\vert m\rangle$ to obtain an $n$-qubit quantum codeword $\vert {\psi_m} \rangle$. A quantum codeword $\vert {\psi_m} \rangle$ can be adversarially tampered via a unitary $U$ from some known tampering unitary family $\mathcal{U}_{\mathsf{Adv}}$ (acting on $\mathbb{C}^{2^n}$). Firstly, we initiate the general study of \emph{quantum tamper detection codes}, which detect if there is any tampering caused by the action of a unitary operator. In case there was no tampering, we would like to output the original message. We show that quantum tamper detection codes exist for any family of unitary operators $\mathcal{U}_{\mathsf{Adv}}$, such that $\vert\mathcal{U}_{\mathsf{Adv}} \vert < 2^{2^{\alpha n}}$ for some constant $\alpha \in (0,1/6)$; provided that unitary operators are not too close to the identity operator. Quantum tamper detection codes that we construct can be considered to be quantum variants of \emph{classical tamper detection codes} studied by Jafargholi and Wichs~['15], which are also known to exist under similar restrictions. Additionally, we show that when the message set $\mathcal{M}$ is classical, such a construction can be realized as a \emph{non-malleable code} against any $\mathcal{U}_{\mathsf{Adv}}$ of size up to $2^{2^{\alpha n}}$.
The quantum switch is a quantum process that creates a coherent control between different unitary operations, often described as a process that transforms a pair of unitary operations $(U_1, U_2)$ into a controlled unitary operation that coherently applies them in different orders as ${\vert {0} \rangle\!\langle {0} \vert} \otimes U_1 U_2 + {\vert {1} \rangle\!\langle {1} \vert} \otimes U_2 U_1$. This description, however, does not directly define its action on non-unitary operations. The action of the quantum switch on non-unitary operations is then chosen to be a ``natural'' extension of its action on unitary operations. In general, the action of a process on non-unitary operations is not uniquely determined by its action on unitary operations, so there could in principle be a set of inequivalent extensions of the quantum switch to non-unitary operations. We prove, however, that the natural extension is the only possibility for the quantum switch in the 2-slot case. In other words, contrary to the general case, the action of the quantum switch on non-unitary operations (as a linear and completely CP-preserving supermap) is completely determined by its action on unitary operations. We also discuss the general problem of when the complete description of a quantum process is uniquely determined by its action on unitary operations and identify a set of single-slot processes which are completely defined by their action on unitary operations.
While advances in quantum hardware occur in modest steps, simulators running on classical computers provide a valuable test bed for the construction of quantum algorithms. Given a unitary matrix that performs a certain operation, obtaining the equivalent quantum circuit, even as an approximation of the input unitary, is a non-trivial task and can be modeled as a search problem. This work presents an evolutionary search algorithm, based on the island model concept, for the decomposition of unitary matrices into their equivalent circuits. Three problems are explored: the coin for the quantum walker, the Toffoli gate and the Fredkin gate. The proposed algorithm proved to be efficient in the decomposition of quantum circuits, and as a generic approach, it is limited only by the available computational power.
The absence of thermalization in certain isolated many-body systems is of great fundamental interest. Many-body localization (MBL) is a widely studied mechanism for thermalization to fail in strongly disordered quantum systems, but it is still not understood precisely how the range of interactions affects the dynamical behavior and the existence of MBL, especially in dimensions $D>1$. By investigating nonequilibrium dynamics in strongly disordered $D=2$ electron systems with power-law interactions $\propto 1/r^{\alpha}$ and poor coupling to a thermal bath, here we observe MBL-like, prethermal dynamics for $\alpha=3$. In contrast, for $\alpha=1$, the system thermalizes, although the dynamics is glassy. Our results provide important insights for theory, especially since we obtained them on systems that are much closer to the thermodynamic limit than synthetic quantum systems employed in previous studies of MBL. Thus, our work is a key step towards further studies of ergodicity breaking and quantum entanglement in real materials.
Associative memories are devices storing information that can be fully retrieved given partial disclosure of it. We examine a toy model of associative memory and the ultimate limitations it is subjected to within the framework of general probabilistic theories (GPTs), which represent the most general class of physical theories satisfying some basic operational axioms. We ask ourselves how large the dimension of a GPT should be so that it can accommodate $2^m$ states with the property that any $N$ of them are perfectly distinguishable. Call $d(N,m)$ the minimal such dimension. Invoking an old result by Danzer and Gr\"unbaum, we prove that $d(2,m)=m+1$, to be compared with $O(2^m)$ when the GPT is required to be either classical or quantum. This yields an example of a task where GPTs outperform both classical and quantum theory exponentially. More generally, we resolve the case of fixed $N$ and asymptotically large $m$, proving that $d(N,m) \leq m^{1+o_N(1)}$ (as $m\to\infty$) for every $N\geq 2$, which yields again an exponential improvement over classical and quantum theories. Finally, we develop a numerical approach to the general problem of finding the largest $N$-wise mutually distinguishable set for a given GPT, which can be seen as an instance of the maximum clique problem on $N$-regular hypergraphs.
We study the hardness of the problem of finding the distance of quantum error-correcting codes. The analogous problem for classical codes is known to be NP-hard, even in approximate form. For quantum codes, various problems related to decoding are known to be NP-hard, but the hardness of the distance problem has not been studied before. In this work, we show that finding the minimum distance of stabilizer quantum codes exactly or approximately is NP-hard. This result is obtained by reducing the classical minimum distance problem to the quantum problem, using the CWS framework for quantum codes, which constructs a quantum code using a classical code and a graph. A main technical tool used for our result is a lower bound on the so-called graph state distance of 4-cycle free graphs. In particular, we show that for a 4-cycle free graph $G$, its graph state distance is either $\delta$ or $\delta+1$, where $\delta$ is the minimum vertex degree of $G$. Due to a well-known reduction from stabilizer codes to CSS codes, our results also imply that finding the minimum distance of CSS codes is NP-hard.
Primitive polynomials over finite fields are crucial for various domains of computer science, including classical pseudo-random number generation, coding theory and post-quantum cryptography. Nevertheless, the pursuit of an efficient classical algorithm for generating random primitive polynomials over finite fields remains an ongoing challenge. In this paper, we show how to solve this problem efficiently through hybrid quantum-classical algorithms, and designs of the specific quantum circuits to implement them are also presented. Our research paves the way for the rapid and real-time generation of random primitive polynomials in diverse quantum communication and computation applications.
In this paper we present and analyze an information-theoretic task that consists in learning a bit of information by spatially moving the "target" particle that encodes it. We show that, on one hand, the task can be solved with the use of additional independently prepared quantum particles, only if these are indistinguishable from the target particle. On the other hand, the task can be solved with the use of distinguishable quantum particles, only if they are entangled with the target particle. Our task thus provides a new example in which the entanglement apparently inherent to independently prepared indistinguishable quantum particles is put into use for information processing. Importantly, a novelty of our protocol lies in that it does not require any spatial overlap between the involved particles. Besides analyzing the class of quantum-mechanical protocols that solve our task, we gesture towards possible ways of generalizing our results and of applying them in cryptography.
A quantum thermal machine is an open quantum system that enables the conversion between heat and work at the micro or nano-scale. Optimally controlling such out-of-equilibrium systems is a crucial yet challenging task with applications to quantum technologies and devices. We introduce a general model-free framework based on Reinforcement Learning to identify out-of-equilibrium thermodynamic cycles that are Pareto optimal trade-offs between power and efficiency for quantum heat engines and refrigerators. The method does not require any knowledge of the quantum thermal machine, nor of the system model, nor of the quantum state. Instead, it only observes the heat fluxes, so it is both applicable to simulations and experimental devices. We test our method on a model of an experimentally realistic refrigerator based on a superconducting qubit, and on a heat engine based on a quantum harmonic oscillator. In both cases, we identify the Pareto-front representing optimal power-efficiency tradeoffs, and the corresponding cycles. Such solutions outperform previous proposals made in the literature, such as optimized Otto cycles, reducing quantum friction.
We give a simple proof of the well known fact that the approximate eigenvalues provided by the Rayleigh-Ritz variational method are increasingly accurate upper bounds to the exact ones. To this end, we resort to the variational principle, mentioned in most textbooks on quantum chemistry, and to a well known set of projection operators. We think that the present approach may be suitable for an advanced course on quantum mechanics or quantum chemistry.
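The monotonicity is easy to watch numerically. The sketch below (ours, with a toy potential) applies Rayleigh-Ritz to a particle in a box with a Gaussian bump, expanding in the first $K$ box eigenfunctions: the lowest eigenvalue decreases monotonically toward the exact ground-state energy as $K$ grows.

    import numpy as np

    # Our toy demonstration: Rayleigh-Ritz upper bounds for a particle in a
    # box on [0, pi] (hbar = m = 1) with a Gaussian bump potential, expanded
    # in the first K box eigenfunctions sqrt(2/pi) sin(n x).
    L = np.pi
    x = np.linspace(0, L, 2001)
    dx = x[1] - x[0]
    V = 5.0 * np.exp(-(x - L / 2) ** 2 / 0.1)       # smooth bump potential

    def basis(n):
        return np.sqrt(2 / L) * np.sin(n * x)       # normalized box states

    for K in (1, 2, 4, 8, 16):
        Hmat = np.zeros((K, K))
        for i in range(K):
            for j in range(K):
                kin = 0.5 * (i + 1) ** 2 if i == j else 0.0  # box energies n^2/2
                Hmat[i, j] = kin + np.sum(basis(i + 1) * V * basis(j + 1)) * dx
        print(f"K = {K:2d}: lowest Rayleigh-Ritz eigenvalue = "
              f"{np.linalg.eigvalsh(Hmat)[0]:.6f}")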
In this work, we propose a two-stage procedure to systematically address quantum noise inherent in quantum measurements. The idea behind it is intuitive: we first detect and then eliminate quantum noise so that the classical noise assumption is satisfied and measurement error mitigation works. In the first stage, inspired by coherence witness in the resource theory of quantum coherence, we design an efficient method to detect quantum noise. It works by fitting the difference between two measurement statistics to the Fourier series, where the statistics are obtained using maximally coherent states with relative phase and maximally mixed states as inputs. The fitting coefficients quantitatively benchmark quantum noise. In the second stage, we design various methods to eliminate quantum noise, inspired by the Pauli twirling technique. They work by executing randomly sampled Pauli gates before the measurement device and conditionally flipping the measurement outcomes in such a way that the effective measurement device contains only classical noise. We demonstrate the feasibility of the two-stage procedure numerically on Baidu Quantum Platform. Remarkably, the results show that quantum noise in measurement devices is significantly suppressed, and the quantum computation accuracy is substantially improved. We highlight that the two-stage procedure complements existing measurement error mitigation techniques, and they together form a standard toolbox for manipulating measurement errors in near-term quantum devices.
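A one-qubit toy model shows the second-stage mechanism (our sketch, using a minimal $\{I, X\}$ twirl that suffices for the real-valued coherence chosen here; the full scheme samples all Pauli gates): randomly applying $X$ before the measurement and flipping the recorded outcome cancels the off-diagonal, "quantum" part of the noisy POVM element.

    import numpy as np

    # Our one-qubit toy: a noisy POVM element with both classical (diagonal)
    # and quantum (off-diagonal) noise; a minimal {I, X} twirl with outcome
    # relabelling cancels the real off-diagonal part chosen here.
    E0 = np.array([[0.95, 0.10], [0.10, 0.05]], dtype=complex)  # outcome-0 element
    X = np.array([[0, 1], [1, 0]], dtype=complex)

    def prob0(rho, twirled):
        if not twirled:
            return np.real(np.trace(E0 @ rho))
        # average of: identity branch, and X branch with outcomes 0 <-> 1
        E0_eff = 0.5 * (E0 + X @ (np.eye(2) - E0) @ X)
        return np.real(np.trace(E0_eff @ rho))

    plus = 0.5 * np.ones((2, 2), dtype=complex)      # |+><+|, coherence-sensitive
    print("P(0) on |+>, raw    :", prob0(plus, False))  # biased by coherence
    print("P(0) on |+>, twirled:", prob0(plus, True))   # classical noise only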
We provide the first $\mathit{constant}$-$\mathit{round}$ construction of post-quantum non-malleable commitments under the minimal assumption that $\mathit{post}$-$\mathit{quantum}$ $\mathit{one}$-$\mathit{way}$ $\mathit{functions}$ exist. We achieve the standard notion of non-malleability with respect to commitments. Prior constructions required $\Omega(\log^*\lambda)$ rounds under the same assumption. We achieve our results through a new technique for constant-round non-malleable commitments which is easier to use in the post-quantum setting. The technique also yields an almost elementary proof of security for constant-round non-malleable commitments in the classical setting, which may be of independent interest. When combined with existing work, our results yield the first constant-round quantum-secure multiparty computation for both classical and quantum functionalities $\mathit{in}$ $\mathit{the}$ $\mathit{plain}$ $\mathit{model}$, under the $\mathit{polynomial}$ hardness of quantum fully-homomorphic encryption and quantum learning with errors.
We construct a metrology experiment in which the metrologist can sometimes amend her input state by simulating a closed timelike curve, a worldline that travels backward in time. The existence of closed timelike curves is hypothetical. Nevertheless, they can be simulated probabilistically by quantum-teleportation circuits. We leverage such simulations to pinpoint a counterintuitive nonclassical advantage achievable with entanglement. Our experiment echoes a common information-processing task: A metrologist must prepare probes to input into an unknown quantum interaction. The goal is to infer as much information per probe as possible. If the input is optimal, the information gained per probe can exceed any value achievable classically. The problem is that, only after the interaction does the metrologist learn which input would have been optimal. The metrologist can attempt to change her input by effectively teleporting the optimal input back in time, via entanglement manipulation. The effective time travel sometimes fails but ensures that, summed over trials, the metrologist's winnings are positive. Our Gedankenexperiment demonstrates that entanglement can generate operational advantages forbidden in classical chronology-respecting theories.
The quantum Jarzynski equality and the Crooks relation are fundamental laws connecting equilibrium processes with nonequilibrium fluctuations. They are promising tools to benchmark quantum devices and measure free energy differences. While they are well established theoretically and also experimental realizations for few-body systems already exist, their experimental validity in the quantum many-body regime has not been observed so far. Here, we present results for nonequilibrium protocols in systems with up to sixteen interacting degrees of freedom obtained on trapped ion and superconducting qubit quantum computers, which test the quantum Jarzynski equality and the Crooks relation in the many-body regime. To achieve this, we overcome present-day limitations in the preparation of thermal ensembles and in the measurement of work distributions on noisy intermediate-scale quantum devices. We discuss the accuracy to which the Jarzynski equality holds on different quantum computing platforms subject to platform-specific errors. The analysis reveals the validity of Jarzynski's equality in a regime with energy dissipation, compensated for by a fast unitary drive. This provides new insights for analyzing errors in many-body quantum simulators.
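The equality itself can be checked in miniature (our sketch, a single qubit rather than the many-body systems studied here): sample two-point-measurement work values for a sudden quench and compare $\langle e^{-\beta W}\rangle$ with $e^{-\beta \Delta F} = Z_1/Z_0$.

    import numpy as np

    # Our single-qubit check of <exp(-beta W)> = Z_1/Z_0 for a sudden quench
    # H0 -> H1, with work sampled via the two-point measurement scheme.
    rng = np.random.default_rng(0)
    beta = 1.0
    e0 = np.array([0.0, 1.0])                 # eigenvalues of H0
    e1 = np.array([-0.5, 1.5])                # eigenvalues of H1
    theta = 0.4                               # basis rotation between H0 and H1
    P = np.array([[np.cos(theta) ** 2, np.sin(theta) ** 2],
                  [np.sin(theta) ** 2, np.cos(theta) ** 2]])  # |<m1|n0>|^2

    p_init = np.exp(-beta * e0)
    p_init /= p_init.sum()                    # thermal state of H0

    M = 200000
    n = rng.choice(2, size=M, p=p_init)       # first energy measurement
    m = (rng.random(M) > P[n, 0]).astype(int) # second, after the quench
    W = e1[m] - e0[n]                         # two-point-measurement work

    Z0, Z1 = np.exp(-beta * e0).sum(), np.exp(-beta * e1).sum()
    print(f"<exp(-beta W)> = {np.exp(-beta * W).mean():.4f}  "
          f"vs  Z1/Z0 = {Z1 / Z0:.4f}")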
A prerequisite to the successful development of quantum computers and simulators is precise understanding of physical processes occurring therein, which can be achieved by measuring the quantum states they produce. However, the resources required for traditional quantum-state estimation scale exponentially with the system size, highlighting the need for alternative approaches. Here we demonstrate an efficient method for reconstruction of significantly entangled multi-qubit quantum states. Using a variational version of the matrix product state ansatz, we perform the tomography (in the pure-state approximation) of quantum states produced in a 20-qubit trapped-ion Ising-type quantum simulator, using the data acquired in only 27 bases with 1000 measurements in each basis. We observe superior state reconstruction quality and faster convergence compared to the methods based on neural network quantum state representations: restricted Boltzmann machines and feedforward neural networks with autoregressive architecture. Our results pave the way towards efficient experimental characterization of complex states produced by the quench dynamics of many-body quantum systems.
There has been much recent interest in near-term applications of quantum computers, i.e., using quantum circuits that have short decoherence times due to hardware limitations. Variational quantum algorithms (VQA), wherein an optimization algorithm implemented on a classical computer evaluates a parametrized quantum circuit as an objective function, are a leading framework in this space. An enormous breadth of algorithms in this framework have been proposed for solving a range of problems in machine learning, forecasting, applied physics, and combinatorial optimization, among others. In this paper, we analyze the iteration complexity of VQA, that is, the number of steps that VQA requires until its iterates satisfy a surrogate measure of optimality. We argue that although VQA procedures incorporate algorithms that can, in the idealized case, be modeled as classic procedures in the optimization literature, the particular nature of noise in near-term devices invalidates the claim of applicability of off-the-shelf analyses of these algorithms. Specifically, noise makes the evaluations of the objective function via quantum circuits biased. Commonly used optimization procedures, such as SPSA and the parameter shift rule, can thus be seen as derivative-free optimization algorithms with biased function evaluations, for which there are currently no iteration complexity guarantees in the literature. We derive the missing guarantees and find that the rate of convergence is unaffected. However, the level of bias contributes unfavorably to both the constant therein, and the asymptotic distance to stationarity, i.e., the more bias, the farther one is guaranteed, at best, to reach a stationary point of the VQA objective.
The role of entanglement in determining the non-classicality of a given interaction has gained significant traction over the last few years, in particular as the basis for new experimental proposals to test the quantum nature of the gravitational field. Here we show that the rate of gravity-mediated entanglement between two otherwise isolated optomechanical systems can be significantly increased by modulating the optomechanical coupling. This is most pronounced for low-mass, high-frequency systems - convenient for reaching the quantum regime - and can lead to improvements of several orders of magnitude, as well as a broadening of the measurement window. Nevertheless, significant obstacles remain. In particular, we find that modulations increase decoherence effects at the same rate as the entanglement improvements. This adds to the growing evidence that the constraint on noise (acting on the position degree of freedom) depends only on the particle mass, separation, and temperature of the environment, and cannot be improved by novel quantum control. Finally, we highlight the close connection between the observation of quantum correlations and the limits of measurement precision derived via the Cram\'er-Rao bound. An immediate consequence is that probing superpositions of the gravitational field places similar demands on detector sensitivity as entanglement verification.
Interference phenomena are often claimed to resist classical explanation. However, such claims are undermined by the fact that the specific aspects of the phenomenology upon which they are based can in fact be reproduced in a noncontextual ontological model [Catani et al., Quantum 7, 1119 (2023)]. This raises the question of what other aspects of the phenomenology of interference do in fact resist classical explanation. We answer this question by demonstrating that the most basic quantum wave-particle duality relation, which expresses the precise tradeoff between path distinguishability and fringe visibility, cannot be reproduced in any noncontextual model. We do this by showing that it is a specific type of uncertainty relation and then leveraging a recent result establishing that noncontextuality restricts the functional form of this uncertainty relation [Catani et al., Phys. Rev. Lett. 129, 240401 (2022)]. Finally, we discuss what sorts of interferometric experiments can demonstrate contextuality via the wave-particle duality relation.
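The duality relation in question, in its textbook form, is $D^2 + V^2 \le 1$, where $D$ is the which-path distinguishability and $V$ the interference fringe visibility, with equality attained for pure states.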
The Dirac-Fock (DF) model replaces the Hartree-Fock (HF) approximation in quantum chemistry when relativistic effects cannot be neglected. Since the Dirac operator is not bounded from below, the notion of ground state is problematic in this model, and several definitions have been proposed in the literature. We give a new definition of the ground state of the DF energy, inspired by Lieb's relaxed variational principle for HF. Our definition and existence proof are simpler and more natural than in previous works on DF, but remain more technical than in the nonrelativistic case. One first needs to construct a set of physically admissible density matrices satisfying a certain nonlinear fixed-point equation: we do this by introducing an iterative procedure, described in an abstract setting. The ground state is then found as a minimizer of the DF energy on this set.
Superconducting qubits typically use a dispersive readout scheme, in which a resonator is coupled to a qubit such that the resonator frequency is qubit-state dependent. Measurement is performed by driving the resonator, and the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, one could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace in a process we refer to as a measurement-induced state transition (MIST). These transitions degrade readout fidelity and constitute leakage, which precludes further operation of the qubit in, for example, error correction. Here we study these transitions experimentally with a transmon qubit by measuring their dependence on qubit frequency, average resonator photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels of the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating-wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest that the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of the higher transmon levels explains the observed noisy behavior. Moreover, we show that occupation of these higher energy levels poses a major challenge for fast qubit reset.
The bra-ket formalism is generally applied to carry out representation-free considerations in quantum mechanics. While it provides efficient means for this purpose, it also has inherent drawbacks. Some of these originate from the fact that the bra-ket notation is ill-suited to handling the domains of definition of the operators involved. A further drawback is that the bra-ket formalism, being a dual-space formalism, excludes the natural one-space point of view. In the present work, a concise representation-free scheme is constructed that is free of the drawbacks of the bra-ket formalism while still providing efficient means for quantum mechanical considerations, including all those offered by the bra-ket scheme. In contrast to the bra-ket scheme, the present scheme admits both the one-space and the dual-space interpretations. The strengths and drawbacks of the bra-ket formalism are reviewed first.
We treat the Cooper pairs in the superconducting electrodes of a Josephson junction (JJ) as an open system, coupled via Andreev scattering to external baths of electrons. The disequilibrium between the baths generates the direct-current bias applied to the JJ. In the weak-coupling limit we obtain a Markovian master equation that provides a simple dynamical description consistent with the main features of the JJ, including the form of the current-voltage characteristic, its hysteresis, and the appearance under periodic voltage driving of discrete Shapiro steps. For small dissipation, our model also exhibits a self-oscillation of the JJ's electrical dipole with frequency $\Omega = 2 e V / \hbar$ around mean voltage $V$. This self-oscillation, associated with "hidden attractors" of the nonlinear equations of motion, explains the observed production of monochromatic radiation with frequency $\Omega$ and its harmonics. We argue that this picture of the JJ as a quantum engine resolves open questions about the Josephson effect as an irreversible process and could open new perspectives in quantum thermodynamics and in the theory of dynamical systems.
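For context, the textbook Josephson relations behind the quoted frequency are $I = I_c \sin\varphi$ and $d\varphi/dt = 2eV/\hbar$, where $\varphi$ is the superconducting phase difference across the junction; a junction held at mean voltage $V$ therefore oscillates at $\Omega = 2eV/\hbar$.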
Taking advantage of the fact that the cardinalities of hidden variables in network scenarios can be assumed finite without loss of generality, we developed a numerical tool for finding explicit local models that reproduce a given statistical behaviour. The procedure was validated in the bilocal scenario using families of statistical behaviours for which the network-local boundary is known. Furthermore, we investigate the critical visibility of three notable distributions mixed with uniform random noise in the triangle network without inputs. We provide conjectures for the critical visibilities of the Greenberger-Horne-Zeilinger (GHZ) and W distributions (which are roots of 4th-degree polynomials), as well as a lower-bound estimate of the critical visibility of the Elegant Joint Measurement distribution. The developed codes and documentation are publicly available at github.com/mariofilho281/localmodels.
Thermal operations are a generic description of the state transitions allowed under thermodynamic restrictions. However, the quest for simpler methods encompassing all of these processes remains unfulfilled. We resolve this challenge through the catalytic use of thermal baths, which are assumed to be easily accessible. We select two sets of simplified operations: elementary thermal operations and Markovian thermal operations. They are known for their experimental feasibility, but fail to capture the full extent of thermal operations due to their innate Markovianity. We nevertheless demonstrate that this limitation can be overcome when the operations are enhanced by ambient-temperature Gibbs state catalysts. In essence, our result indicates that free states within thermal operations can act as catalysts that provide the necessary non-Markovianity for simpler operations. Furthermore, we prove that when any catalyst can be employed, the different thermal processes (thermal operations, elementary thermal operations, and Markovian thermal operations) converge. Notably, our results extend to scenarios involving initial states with coherence in the energy eigenbasis, which are notoriously difficult to characterise.
We investigate a scheme for observing a third-order exceptional point (EP3) in an ion-cavity setting. In the lambda-type level configuration, the ion is driven by a pump field, and the resonator is probed with another weak laser field. We exploit the highly asymmetric branching ratio of the ion's excited state to satisfy the weak-excitation limit, which allows us to construct the non-Hermitian Hamiltonian $(H_{\textrm{nH}})$. By fitting the cavity-transmission spectrum, the eigenvalues of $H_{\textrm{nH}}$ are obtained. The EP3 appears at the point where the Rabi frequency of the pump laser and the atom-cavity coupling constant balance the loss rates of the system. Feasible experimental parameters are provided.
We experimentally realize the Peregrine soliton in a highly particle-imbalanced two-component repulsive Bose-Einstein condensate (BEC) in the immiscible regime. The effective focusing dynamics and resulting modulational instability of the minority component provide the opportunity to dynamically create a Peregrine soliton, with the aid of an attractive potential well that seeds the initial dynamics. The Peregrine soliton formation is highly reproducible, and our experiments allow us to separately monitor the minority and majority components, and to compare with the single-component dynamics in the absence or presence of wells of varying depths. We demonstrate that each of the ingredients leveraged here is essential. Numerical corroboration and a theoretical basis for our findings are provided through 3D simulations emulating the experimental setting and through a one-dimensional analysis further exploring the soliton's evolution dynamics.
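For reference, the Peregrine soliton is the rational solution of the focusing nonlinear Schr\"odinger equation $i\psi_t + \frac{1}{2}\psi_{xx} + |\psi|^2\psi = 0$ (quoted here in standard dimensionless form, not the experiment's units): $\psi(x,t) = \left[1 - \frac{4(1+2it)}{1+4x^2+4t^2}\right]e^{it}$. It is localized in both space and time and peaks at three times the background amplitude.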
Stabilizer entropies (SE) measure deviations from stabilizer resources and are, as such, a fundamental ingredient for quantum advantage. In particular, the interplay of SE and entanglement is at the root of the complexity of classically simulating quantum many-body systems. In this paper, we study the dynamics of SE in a quantum many-body system away from equilibrium, after a quantum quench in an integrable system. We obtain two main results: (i) we show that SE, despite being an extensive quantity in the system size $L$, equilibrates in a time that scales at most linearly with the subsystem size; and (ii) we show that there is a SE length scale that increases linearly in time, akin to the spreading of correlations and entanglement.
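A commonly used definition (following Leone et al.; the paper may adopt an equivalent variant) is the stabilizer R\'enyi entropy $M_\alpha(|\psi\rangle) = \frac{1}{1-\alpha}\log_2\left[\sum_{P\in\mathcal{P}_N}\langle\psi|P|\psi\rangle^{2\alpha}/2^N\right]$, where the sum runs over all $4^N$ Pauli strings; $M_\alpha$ vanishes exactly on stabilizer states.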
Bringing magnetic metals into superconducting states represents an important approach for realizing unconventional superconductors and potentially even topological superconductors. Altermagnetism, classified as a third basic collinear magnetic phase, gives rise to intriguing momentum-dependent spin splitting of the band structure and results in an even number of spin-polarized Fermi surfaces due to the symmetry-enforced zero net magnetization. In this work, we investigate the effect of this new magnetic order on the superconductivity of a two-dimensional metal with d-wave altermagnetism and Rashba spin-orbit coupling. Specifically, we consider an extended attractive Hubbard interaction, determine the types of superconducting pairing that can occur in this system, and ascertain whether they possess topological properties. Through self-consistent mean-field calculations, we find that the system in general favors a mixture of spin-singlet s-wave and spin-triplet p-wave pairings, and that the altermagnetism is beneficial to the latter. Using symmetry arguments supported by detailed calculations, we show that a number of topological superconductors, including both first-order and second-order ones, can emerge when the p-wave pairing dominates. In particular, we find that the second-order topological superconductor is enforced by a $\mathcal{C}_{4z}\mathcal{T}$ symmetry, which renders the spin polarization of the Majorana corner modes into a unique entangled structure. Our study demonstrates that altermagnetic metals are fascinating platforms for the exploration of intrinsic unconventional superconductivity and topological superconductivity.
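As a point of reference, a minimal normal-state Hamiltonian of the type considered here is presumably of the form $H(\mathbf{k}) = \frac{k^2}{2m} - \mu + \beta(k_x^2 - k_y^2)\sigma_z + \alpha(k_x\sigma_y - k_y\sigma_x)$, where the $\beta$ term is the d-wave altermagnetic spin splitting (even in $\mathbf{k}$, hence zero net magnetization) and the $\alpha$ term is the Rashba spin-orbit coupling; the precise parametrization is our assumption, not quoted from the paper.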
We present experimental demonstrations of accurate and unambiguous single-shot discrimination between three quantum channels using a single trapped $^{40}\text{Ca}^{+}$ ion. The three channels cannot be distinguished unambiguously using repeated single channel queries, the natural classical analogue. We develop techniques for using the 6-dimensional $\text{D}_{5/2}$ state space for quantum information processing, and we implement protocols to discriminate quantum channel analogues of phase shift keying and amplitude shift keying data encodings used in classical radio communication. The demonstrations achieve discrimination accuracy exceeding $99\%$ in each case, limited entirely by known experimental imperfections.
In this work we characterize the false vacuum decay in the ferromagnetic quantum Ising chain with a weak longitudinal field subject to continuous monitoring of the local magnetization. Initializing the system in a metastable state, the false vacuum, we study the competition between coherent dynamics, which tends to create resonant bubbles of the true vacuum, and measurements which induce heating and reduce the amount of quantum correlations. To this end we exploit a numerical approach based on the combination of matrix product states with stochastic quantum trajectories which allows for the simulation of the trajectory-resolved non-equilibrium dynamics of interacting many-body systems in the presence of continuous measurements. We show how the presence of measurements affects the false vacuum decay: at short times the departure from the local minimum is accelerated while at long times the system thermalizes to an infinite-temperature incoherent mixture. For large measurement rates the system enters a quantum Zeno regime. The false vacuum decay and the thermalization physics are characterized in terms of the magnetization, connected correlation function, and the trajectory-resolved entanglement entropy.
The Blahut-Arimoto algorithm is a well-known method to compute classical channel capacities and rate-distortion functions. Recent works have extended this algorithm to compute various quantum analogs of these quantities. In this paper, we show how these Blahut-Arimoto algorithms are special instances of mirror descent, which is a type of Bregman proximal method, and a well-studied generalization of gradient descent for constrained convex optimization. Using recently developed convex analysis tools, we show how analysis based on relative smoothness and strong convexity recovers known sublinear and linear convergence rates for Blahut-Arimoto algorithms. This Bregman proximal viewpoint allows us to derive related algorithms with similar convergence guarantees to solve problems in information theory for which Blahut-Arimoto-type algorithms are not directly applicable. We apply this framework to compute energy-constrained classical and quantum channel capacities, classical and quantum rate-distortion functions, and approximations of the relative entropy of entanglement, all with provable convergence guarantees.
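For orientation, here is a minimal Python sketch of the classical Blahut-Arimoto iteration for channel capacity, the base algorithm that the paper reinterprets as mirror descent; the channel matrix below is a hypothetical example, and the quantum and constrained variants are not shown.

import numpy as np

def _kl_terms(W, q):
    # D(x) = sum_y p(y|x) log( p(y|x) / q(y) ), with the 0 log 0 = 0 convention.
    ratio = np.divide(W, q[:, None], out=np.ones_like(W), where=W > 0)
    return np.sum(W * np.log(ratio), axis=0)

def blahut_arimoto(W, iters=500):
    """Capacity (in nats) of a discrete memoryless channel W[y, x] = p(y|x)."""
    n_x = W.shape[1]
    p = np.full(n_x, 1.0 / n_x)      # start from the uniform input distribution
    for _ in range(iters):
        D = _kl_terms(W, W @ p)      # per-input KL divergence to the output marginal
        p = p * np.exp(D)            # multiplicative (mirror-descent-style) update
        p /= p.sum()
    return float(p @ _kl_terms(W, W @ p))

# Hypothetical example: binary symmetric channel with flip probability 0.1
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(W) / np.log(2), "bits")  # ~ 1 - h(0.1) = 0.531

The multiplicative-exponential update is exactly the entropic mirror-descent step over the probability simplex, which is the structural observation the paper generalizes.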
Unsupervised machine learning models build an internal representation of their training data without the need for explicit human guidance or feature engineering. This learned representation provides insights into which features of the data are relevant for the task at hand. In the context of quantum physics, training models to describe quantum states without human intervention offers a promising approach to gaining insight into how machines represent complex quantum states. The ability to interpret the learned representation may offer a new perspective on non-trivial features of quantum systems and their efficient representation. We train a generative model on two-qubit density matrices generated by a parameterized quantum circuit. In a series of computational experiments, we investigate the learned representation of the model and its internal understanding of the data. We observe that the model learns an interpretable representation which relates the quantum states to their underlying entanglement characteristics. In particular, our results demonstrate that the latent representation of the model is directly correlated with the entanglement measure concurrence. The insights from this study represent a proof of concept for interpretable machine learning of quantum states. Our approach offers insight into how machines learn to represent small-scale quantum systems autonomously.
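The measure referenced here is the two-qubit concurrence, defined as $C(\rho) = \max\{0, \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\}$, where the $\lambda_i$ are, in decreasing order, the square roots of the eigenvalues of $\rho(\sigma_y\otimes\sigma_y)\rho^*(\sigma_y\otimes\sigma_y)$; it ranges from 0 for separable states to 1 for maximally entangled ones.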
The problem of efficiently multiplying large numbers has been a long-standing challenge in classical computation and has been studied extensively for centuries. The existing classical algorithms appear to be close to their theoretical limit and offer little room for further improvement. However, with the advent of quantum computers and the need for quantum algorithms that can perform multiplication on quantum hardware, a new paradigm emerges. In this paper, inspired by the convolution theorem and the quantum amplitude amplification paradigm, we propose a quantum algorithm for integer multiplication with time complexity $O(\sqrt{n}\log^2 n)$, which outperforms the best-known classical algorithm, the Harvey algorithm, with its time complexity of $O(n \log n)$. Unlike the Harvey algorithm, our algorithm is not restricted to extremely large numbers, making it a versatile choice for a wide range of integer multiplication tasks. The paper also reviews the history and development of classical multiplication algorithms, which motivate our exploration of how quantum resources can provide new perspectives and possibilities for this fundamental problem.
Establishing the fundamental chemical principles that govern molecular electronic quantum decoherence has remained an outstanding challenge. Fundamental questions, such as how solvent and intramolecular vibrations or chemical functionalization contribute to the decoherence, remain unanswered and are beyond the reach of state-of-the-art theoretical and experimental approaches. Here we address this challenge by developing a strategy to isolate electronic decoherence pathways for molecular chromophores immersed in condensed-phase environments, which enables elucidating how electronic quantum coherence is lost. For this, we first identify resonant Raman spectroscopy as a general experimental method to reconstruct molecular spectral densities with full chemical complexity at room temperature, in solvent, and for fluorescent and non-fluorescent molecules. We then show how to quantitatively capture the decoherence dynamics from the spectral density and identify decoherence pathways by decomposing the overall coherence loss into contributions from individual molecular vibrations and solvent modes. We illustrate the utility of the strategy by analyzing the electronic decoherence pathways of the DNA base thymine in water. Its electronic coherences decay in ~30 fs. The early-time decoherence is determined by intramolecular vibrations, while the overall decay is dominated by the solvent. Chemical substitution of thymine modulates the decoherence, with hydrogen-bond interactions of the thymine ring with water leading to the fastest decoherence. Increasing temperature leads to faster decoherence, as it enhances the importance of solvent contributions, but leaves the early-time decoherence dynamics intact. The developed strategy opens key opportunities to establish the connection between molecular structure and quantum decoherence, as needed to develop chemical strategies for rationally modulating it.
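For pure-dephasing models, the standard link between a spectral density $J(\omega)$ and coherence loss, presumably the type of relation exploited here, is the decoherence function $\Gamma(t) = \int_0^\infty d\omega\, J(\omega)\coth(\hbar\omega/2k_BT)\,(1-\cos\omega t)/\omega^2$ (up to convention-dependent prefactors), with electronic coherences decaying as $e^{-\Gamma(t)}$; decomposing the integral over individual vibrational and solvent modes then yields per-mode decoherence contributions.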
Solving combinatorial optimization problems efficiently on emerging hardware, by converting a problem to its equivalent Ising model and obtaining the ground state, is known as Ising computing. Phase-binarized oscillators (PBOs), modeled through the Kuramoto model, have been proposed for Ising computing, and various device technologies have been used to implement them experimentally. In this paper, we show that an array of four dipole-coupled uniform-mode spin Hall nano-oscillators (SHNOs) can be used to implement such PBOs and solve the NP-hard combinatorial problem MaxCut on 4-node complete weighted graphs. We model the spintronic oscillators through two techniques: an approximate model of the coupled magnetization dynamics, and more accurate magnetization-dynamics modeling based on the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. Next, we compare the performance of these room-temperature spin oscillators, as well as generalized PBOs, with two alternative methods that solve the same MaxCut problem: a classical approximation algorithm, the Goemans-Williamson (GW) algorithm, and a Noisy Intermediate Scale Quantum (NISQ) algorithm, the Quantum Approximate Optimization Algorithm (QAOA). For four types of graphs with up to twenty nodes, we show that the approximation ratio (AR) and success probability (SP) obtained for generalized PBOs (Kuramoto model), as well as for the spin oscillators, are comparable to those of GW and much higher than those of QAOA for almost all graph instances. Moreover, unlike for GW, the time to solution (TTS) for generalized PBOs and spin oscillators does not grow with graph size for the instances we have explored. This can be a major advantage of PBOs in general, and spin oscillators in particular, for solving these types of problems, alongside the accuracy of the solutions they deliver.
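As an illustration of the PBO concept (not the paper's spintronic device model), the following Python sketch evolves generic Kuramoto phase oscillators with a second-harmonic injection term that locks phases to 0 or $\pi$, then reads out an Ising spin configuration and its cut value; the 4-node graph and all parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-node complete weighted graph (symmetric, zero diagonal).
J = np.array([[0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 1.0],
              [1.0, 3.0, 1.0, 0.0]])

def run_pbo(J, K=1.0, Ks=2.0, dt=0.01, steps=5000):
    # Gradient-like Kuramoto dynamics descending the Ising energy
    # E = (K/2) sum_ij J_ij cos(theta_i - theta_j); the sin(2 theta)
    # injection term locks each phase near 0 or pi (binarization).
    n = J.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        dtheta = K * np.sum(J * np.sin(diff), axis=1) - Ks * np.sin(2.0 * theta)
        theta = (theta + dt * dtheta) % (2.0 * np.pi)
    return np.where(np.cos(theta) > 0.0, 1, -1)   # read out Ising spins

spins = run_pbo(J)
cut_value = 0.25 * np.sum(J * (1 - np.outer(spins, spins)))
print("partition:", spins, " cut value:", cut_value)

Binarized phases give $\cos(\theta_i-\theta_j) = s_i s_j$, so the dynamics settle toward low-energy Ising configurations, i.e., large cuts.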
Quantum simulations of lattice gauge theories are anticipated to directly probe the real time dynamics of QCD, but scale unfavorably with the required truncation of the gauge fields. Improved Hamiltonians are derived to correct for the effects of gauge field truncations on the SU(3) Kogut-Susskind Hamiltonian. It is shown in $1+1D$ that this enables low chromo-electric field truncations to quantitatively reproduce features of the untruncated theory over a range of couplings and quark masses. In $3+1D$, an improved Hamiltonian is derived for lattice QCD with staggered massless fermions. It is shown in the strong coupling limit that the spectrum qualitatively reproduces aspects of two flavor QCD and simulations of a small system are performed on IBM's {\tt Perth} quantum processor.
A closed-loop lossy optomechanical system composed of one optical and two degenerate mechanical resonators is studied computationally. It represents an elementary synthetic plaquette, with the loop phase originating from the phases of the coupling coefficients. As a specific quantum attribute, we explore how quadrature variances can be controlled in a targeted resonator through the plaquette phase. A stark disparity between the optical heating and mechanical cooling factors is observed, rooted in the high ratio of the optical to mechanical damping constants. To combine this with mechanical squeezing, an amplitude modulation is imposed on the cavity-pumping laser. Our numerical analysis is based on three approaches geared towards complementary purposes: the time-integrator method for the instantaneous behavior, the Floquet technique for the steady-state or modulated response, and James' effective Hamiltonian method, which further examines the latter and explicitly discloses the role of upper-sideband modulation for squeezing. We offer physical insight into how the non-Hermiticity is instrumental in enhancing cooling and squeezing close to the exceptional points. This is linked to the behavior of the complex eigenvalue loci as a function of the intermechanical resonator coupling. Moreover, we show that the parameter space comprises an exceptional surface, making the exceptional-point singularities experimentally robust under parameter variations. However, the pump laser detuning breaks away from the exceptional surface unless it resides on the red sideband by an amount sufficiently close to the mechanical resonance frequency.
A quantum phase space version of the continuity equation for systems with internal degrees of freedom is derived. The one-dimensional Dirac equation is introduced and its phase space counterpart is found. The phase space representation of free motion and of scattering, in the nonrelativistic and relativistic cases, for setups with internal degrees of freedom is discussed and illustrated. Properties of the Wigner functions of unbound states are analysed.
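In one common representation (our choice for illustration, not necessarily the paper's), the one-dimensional Dirac Hamiltonian in question reads $\hat{H} = c\,\sigma_x \hat{p} + \sigma_z mc^2$, with the Pauli matrices acting on the internal (spinor) degree of freedom.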
We show how to achieve strong squeezing of a microwave output field by reservoir engineering a cavity magnomechanical system, consisting of a microwave cavity, a magnon mode, and a mechanical vibration mode. The magnon mode is simultaneously driven by two microwave fields at the blue and red sidebands associated with the vibration mode. The two-tone drive induces a squeezed magnonic reservoir for the intracavity field, leading to a squeezed cavity mode due to the cavity-magnon state swapping, which further yields a squeezed cavity output field. The squeezing of the output field is stationary and substantial using currently available parameters in cavity magnomechanics. The work indicates the potential of the cavity magnomechanical system in preparing squeezed microwave fields, and may find promising applications in quantum information science and quantum metrology.
The discovery of unidirectional invisibility and its broadband realization in optical media satisfying spatial Kramers-Kronig relations are important landmarks of non-Hermitian photonics. We offer a precise characterization of a higher-dimensional generalization of this effect and find sufficient conditions for its realization in the scattering of scalar waves in two and three dimensions and electromagnetic waves in three dimensions. More specifically, given a positive real number $\alpha$ and a continuum of unit vectors $\Omega$, we provide explicit conditions on the interaction potential (or the permittivity and permeability tensors of the scattering medium in the case of electromagnetic scattering) under which it displays perfect (non-approximate) invisibility whenever the incident wavenumber $k$ does not exceed $\alpha$ (i.e., $k\in(0,\alpha]$) and the direction of the incident wave vector ranges over $\Omega$. A distinctive feature of our approach is that it allows for the construction of potentials and linear dielectric media that display perfect directional invisibility in a finite frequency domain.
Spontaneous collapse models are modifications of standard quantum mechanics in which a physical mechanism is responsible for the collapse of the wavefunction, thus providing a way to solve the so-called "measurement problem". The two most famous of these models are the Ghirardi-Rimini-Weber (GRW) model and the Continuous Spontaneous Localisation (CSL) models. Here, we propose a new kind of non-relativistic spontaneous collapse model based on the idea of collapse points situated at fixed spacetime coordinates. This model shares properties of both the GRW and CSL models, while starting from different assumptions. We show that it can lead to dynamics quite similar to those of the GRW model while also naturally solving the problem of indistinguishable particles. On the other hand, we can also recover the same master equation as the CSL models. We then show how our proposed model solves the measurement problem in a manner conceptually similar to the GRW model. Finally, we show how the proposed model can also accommodate Newtonian gravity by treating the collapses as gravitational sources.
As quantum theory allows for information processing and computing tasks that are not possible with classical systems, there is a need for a quantum Internet beyond existing network systems. At the same time, the realization of a desirably functional quantum Internet is hindered by fundamental and practical challenges, such as high loss during transmission of quantum systems, decoherence due to interaction with the environment, and the fragility of quantum states. We study the implications of these constraints by analyzing the limitations on the scaling and robustness of the quantum Internet. Considering quantum networks, we present practical bottlenecks for secure communication, delegated computing, and resource distribution among end nodes. Motivated by the power of abstraction in graph theory (in association with quantum information theory), we consider graph-theoretic quantifiers to assess network robustness and provide critical values of communication lines for viable communication over the quantum Internet. In particular, we begin by discussing limitations on the usefulness of isotropic states as device-independent quantum key repeaters, which otherwise could be useful for device-independent quantum key distribution. We consider some quantum networks of practical interest, ranging from satellite-based networks connecting far-off spatial locations to currently available quantum processor architectures within computers, and analyze their robustness for quantum information processing tasks. Some of these tasks form primitives for delegated quantum computing, e.g., entanglement distribution and quantum teleportation. For some examples of quantum networks, we present algorithms to perform different network tasks of interest, such as constructing the network structure, finding the shortest path between a pair of end nodes, and optimizing the flow of resources at a node.
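As a toy version of the path-finding task mentioned above, the following Python sketch (using the networkx library; the topology and per-link success probabilities are hypothetical) selects the path that maximizes the end-to-end entanglement-distribution success probability by minimizing accumulated $-\log p$ edge weights.

import math
import networkx as nx

# Hypothetical quantum network: nodes are repeaters/end nodes; each edge
# carries a per-link entanglement-generation success probability p.
links = [("A", "B", 0.9), ("B", "C", 0.8), ("A", "D", 0.5),
         ("D", "C", 0.95), ("C", "E", 0.7), ("B", "E", 0.4)]

G = nx.Graph()
for u, v, p in links:
    # Maximizing the product of probabilities = minimizing the sum of -log p.
    G.add_edge(u, v, weight=-math.log(p), prob=p)

path = nx.shortest_path(G, "A", "E", weight="weight")
prob = math.exp(-nx.shortest_path_length(G, "A", "E", weight="weight"))
print("best path:", path, "end-to-end success probability: %.3f" % prob)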
In this paper, we consider giant atoms coupled to a one-dimensional topological waveguide reservoir and study two regimes. In the bandgap regime, where the giant-atom frequency lies outside the band, we study the generation and distribution of giant atom-photon bound states and the differences between the topological and trivial phases of the waveguide. When the coupling strengths of a giant atom to the two sublattice sites are equal, the photon distribution is symmetric; when they differ, a chiral photon distribution emerges. The coherent interactions between giant atoms are induced by virtual photons - or, equivalently, can be understood as an overlap of the photon bound-state wave functions - and decay exponentially with increasing distance between the giant atoms. We also find that, for the same bandgap width, the coherent interactions induced in the topological phase are larger than those induced in the trivial phase. In the band regime, where the giant-atom frequency lies within the band, we obtain, under the Born-Markov approximation, effective coherent and correlated dissipative interactions between the giant atoms mediated by the topological waveguide reservoir, which depend on the giant-atom coupling nodes. We analyze the effect of the configuration of the giant-atom coupling points on the decay and the associated dissipation. The results show that the coupling configuration and the frequency of the giant atoms can be designed to achieve zero decay and correlated dissipation together with non-zero coherent interactions. Finally, we use this scheme to realize excitation transfer between giant atoms. Our work will promote the study of topological matter coupled to giant atoms.
Color centers have emerged as a leading qubit candidate for realizing hybrid spin-photon quantum information technology. One major limitation of the platform, however, is that the characteristics of individual color centers are often strain dependent. As an illustrative case, the silicon-vacancy center in diamond typically requires millikelvin temperatures to achieve long coherence times, but strained silicon-vacancy centers have been shown to operate at temperatures beyond 1 K without phonon-mediated decoherence. In this work we combine high-stress silicon nitride thin films with diamond nanostructures to reproducibly create statically strained silicon-vacancy color centers (mean ground-state splitting of 608 GHz) with strain magnitudes of $\sim 4 \times 10^{-4}$. Based on modeling, this strain should be sufficient to allow a majority of silicon-vacancy centers within the measured sample to operate at elevated temperatures (1.5 K) without any degradation of their spin properties. This method offers a scalable approach to fabricating quantum memories that operate at elevated temperatures. Beyond silicon-vacancy centers, the method is sufficiently general that it can easily be extended to other platforms as well.
A key to understanding unconventional superconductivity lies in unraveling the pairing mechanism of mobile charge carriers in doped antiferromagnets, yielding an effective attraction between charges even in the presence of strong repulsive Coulomb interactions. Here, we study pairing in a minimal model of bilayer nickelates, featuring robust binding energies - despite dominant repulsive interactions - that are strongly enhanced in the finite doping regime. The mixed-dimensional (mixD) $t-J$ ladder we study features a crossover from tightly bound pairs of holes (closed channel) at small repulsion, to more spatially extended, correlated pairs of individual holes (open channel) at large repulsion. We derive an effective model for the latter, in which the attraction is mediated by the closed channel, in analogy to atomic Feshbach resonances. Using density matrix renormalization group (DMRG) simulations we reveal a dome of large binding energies at around $30\%$ doping and we observe the formation of a tetraparton density wave of plaquettes consisting of two spin-charge excitation pairs on neighboring rungs. Our work paves the way towards a microscopic theory of pairing in doped quantum magnets, in particular Ni-based superconductors, and our predictions can be tested in state-of-the-art quantum simulators.
Quantum algorithms have been widely studied in the context of combinatorial optimization problems. While this endeavor can often achieve quadratic speedups both analytically and in practice, theoretical and numerical studies remain limited, especially compared to the study of classical algorithms. We propose and study a new class of hybrid approaches to quantum optimization, termed Iterative Quantum Algorithms, which in particular generalizes the Recursive Quantum Approximate Optimization Algorithm. This paradigm can incorporate hard problem constraints, which we demonstrate by considering the Maximum Independent Set (MIS) problem. We show that, for QAOA with depth $p=1$, this algorithm performs exactly the same operations and selections as the classical greedy algorithm for MIS. We then turn to deeper $p>1$ circuits and other modifications of the quantum algorithm that can no longer be easily mimicked by classical algorithms, and empirically confirm improved performance. Our work demonstrates the practical importance of incorporating proven classical techniques into more effective hybrid quantum-classical algorithms.
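The classical baseline in question is a greedy algorithm for MIS; assuming the familiar minimum-degree greedy rule (our assumption for illustration), a minimal Python sketch with a hypothetical example graph reads:

import networkx as nx

def greedy_mis(G):
    # Minimum-degree greedy: repeatedly add a lowest-degree vertex to the
    # independent set, then delete it and its neighbors from the graph.
    H = G.copy()
    independent_set = []
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)
        independent_set.append(v)
        H.remove_nodes_from(list(H.neighbors(v)) + [v])
    return independent_set

# Hypothetical example: a 6-cycle, whose maximum independent set has size 3.
print(greedy_mis(nx.cycle_graph(6)))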
We study the work fluctuations in ergotropic heat engines, namely two-stroke quantum Otto engines where the work stroke is designed to extract the ergotropy (the maximum amount of work extractable via a cyclic unitary evolution) from a pair of quantum systems at canonical equilibrium at two different temperatures, and the heat stroke thermalizes the systems back to their respective reservoirs. We provide an exhaustive study of the case of two qutrits whose energy levels are equally spaced at two different frequencies, deriving the complete work statistics. Varying the temperatures and frequencies, only three kinds of optimal unitary strokes are found: the swap operator $U_1$, an idle swap $U_2$ (where one of the qutrits is regarded as an effective qubit), and a non-trivial permutation of energy eigenstates $U_3$, which corresponds to the composition of the two previous unitaries, namely $U_3=U_2 U_1$. While $U_1$ and $U_2$ are Hermitian (and hence involutions), $U_3$ is not. This has an impact on the thermodynamic uncertainty relations (TURs), which bound the signal-to-noise ratio of the extracted work in terms of the entropy production. In fact, we show that all TURs derived from a strong detailed fluctuation theorem are violated by the transformation $U_3$.
The quantization of $L$-shaped billiards and similar ones, i.e. billiards whose angles all equal $\pi/2$ or $3\pi/2$, is considered using the Fourier series expansion method. The corresponding wave functions and quantization conditions are written down and discussed, with a focus on superscar effects in such multi-rectangular billiards (MRB). It is found that a special set of POC modes gives rise to superscar phenomena in MRB: the billiard is excited as a whole to the modes closest to the semiclassical ones existing in its approximate copies, namely MRB whose parallel sides are in rational ratios to one another.
In three dimensions, the Landau-Streater channel coincides with the Werner-Holevo channel. This channel has no continuous parameter and hence cannot model environmental noise. We consider its convex combination with the identity channel, making it suitable as a one-parameter noise model on qutrits. Moreover, whereas the original Werner-Holevo channel is covariant under the full unitary group $SU(3)$, the extended family maintains covariance only under the subgroup $SO(3)$. This symmetry reduction allows us to investigate its impact on various properties of the original channel. In particular, we examine its influence on the channel's spectrum, divisibility, complementary channel, and exact or approximate degradability, as well as its various capacities. Specifically, we derive analytical expressions for the one-shot classical capacity and the entanglement-assisted capacity, and establish lower and upper bounds on the quantum capacity.
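Concretely, the family studied is presumably $\Phi_\lambda(\rho) = \lambda\rho + (1-\lambda)\,\frac{1}{2}[\mathrm{Tr}(\rho)\,I_3 - \rho^T]$ for $\lambda\in[0,1]$, where the second term is the qutrit ($d=3$) instance of the Werner-Holevo channel $\Phi_{\mathrm{WH}}(\rho) = \frac{1}{d-1}[\mathrm{Tr}(\rho)\,I - \rho^T]$; the parametrization convention is our assumption.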
In practical applications of free-space quantum communications, active beam coupling and stabilization techniques offer notable advantages, particularly when dealing with limited detector areas or when coupling into single-mode fibers (SMFs) to mitigate background noise. In this work, we introduce a highly enhanced active beam-wander-correction technique, specifically tailored to efficiently couple and stabilize beams into SMFs, particularly in scenarios where the initial optical alignment with the SMF is poor. To achieve this, we implement an SMF auto-coupling algorithm and a decoupled stabilization method, effectively and reliably correcting beam wander caused by atmospheric turbulence. The performance of the proposed technique is thoroughly validated through quantitative measurements of the temporal variation in the coupling efficiency (coincidence counts) of a laser beam (entangled photons). The results show significant improvements in both the mean value and the standard deviation of the coupling efficiency, even in the presence of 2.6 km of atmospheric turbulence. With a laser source, the mean coupling efficiency increases by over 50%, accompanied by a 4.4-fold improvement in the standard deviation. For the entangled photon source, a mean increase of 14% and an approximately 2-fold improvement in the standard deviation are observed. Furthermore, the proposed technique restores the fidelity of the polarization-entangled state, which had been compromised by atmospheric effects in the free-space channel, to a level close to the fidelity measured directly at the source. Our work will be helpful in designing spatial light-fiber coupling systems not only for free-space quantum communications but also for high-speed laser communications.
We give a complete classification of the anyon sectors of Kitaev's quantum double model on the infinite triangular lattice and for finite gauge group $G$, including the non-abelian case. As conjectured, the anyon sectors of the model correspond precisely to the irreducible representations of the quantum double algebra of $G$. Our proof consists of two main parts. In the first part, we construct for each irreducible representation of the quantum double algebra a pure state and show that the GNS representations of these pure states are pairwise disjoint anyon sectors. In the second part we show that any anyon sector is unitarily equivalent to one of the anyon sectors constructed in the first part. The first part of the proof crucially uses a description of the states in question as string-net condensates. Purity is shown by characterising these states as the unique states that satisfy appropriate sets of local constraints. At the core of the proof is the fact that certain groups of local gauge transformations act freely and transitively on collections of local string-nets. For the second part, we show that any anyon sector contains a pure state that satisfies all but a finite number of these constraints. Using known techniques we can then construct a pure state in the anyon sector that satisfies all but one of these constraints. Finally, we show explicitly that any such state must be a vector state in one of the anyon sectors constructed in the first part.
Optical excitations in moir\'e transition metal dichalcogenide bilayers lead to the creation of excitons, as electron-hole bound states, that are generically considered within a Bose-Hubbard framework. Here, we demonstrate that these composite particles obey an angular momentum commutation relation that is generally non-bosonic. This emergent spin description of excitons indicates a limitation to their occupancy on each site, which is substantial in the weak electron-hole binding regime. The effective exciton theory is accordingly a spin Hamiltonian, which further becomes a Hubbard model of emergent bosons subject to an occupancy constraint after a Holstein-Primakoff transformation. We apply our theory to three commonly studied bilayers (MoSe$_2$/WSe$_2$, WSe$_2$/WS$_2$, and WSe$_2$/MoS$_2$) and show that in the relevant parameter regimes their allowed occupancies never exceed three excitons. Our systematic theory provides guidelines for future research on the many-body physics of moir\'e excitons.
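For reference, the Holstein-Primakoff transformation maps spin-$S$ operators to bosons via $\hat{S}^+ = \sqrt{2S - \hat{b}^\dagger\hat{b}}\,\hat{b}$, $\hat{S}^- = \hat{b}^\dagger\sqrt{2S - \hat{b}^\dagger\hat{b}}$, and $\hat{S}^z = S - \hat{b}^\dagger\hat{b}$, so the bosonic occupation is constrained to $n \le 2S$; an occupancy cap of three excitons per site corresponds to an effective spin $S = 3/2$.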