Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-10 12:30 to 2023-11-14 11:30 | Next meeting is Friday Nov 1st, 11:30 am.
Studies of the Universe's transition to smoothness in the context of $\Lambda$CDM have all pointed to a transition radius no larger than ~300 Mpc. These studies are based on a broad array of tracers for the matter power spectrum, including galaxies, clusters, quasars, the Ly-alpha forest and anisotropies in the cosmic microwave background. It is therefore surprising, if not anomalous, to find many structures extending over scales as large as ~2 Gpc, roughly an order of magnitude greater than expected. Such a disparity suggests that new physics may be contributing to the formation of large-scale structure, warranting a consideration of the alternative FLRW cosmology known as the $R_h=ct$ universe. This model has successfully eliminated many other problems in $\Lambda$CDM. In this paper, we calculate the fractal (or Hausdorff) dimension in this cosmology as a function of distance, showing a transition to smoothness at ~2.2 Gpc, fully accommodating all of the giant structures seen thus far. This outcome adds further observational support for $R_h=ct$ over the standard model.
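For reference, the transition to smoothness is commonly quantified via the correlation (fractal) dimension of the tracer distribution; as a schematic definition (the paper's exact estimator may differ),

```latex
D(r) \;\equiv\; \frac{\mathrm{d}\ln N(<r)}{\mathrm{d}\ln r}\,,
```

where $N(<r)$ is the average number of tracers within a comoving distance $r$ of a given tracer. A homogeneous distribution has $N(<r)\propto r^3$, so the transition radius is the scale at which $D(r)\to 3$.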
We analyze the physical properties of the gaseous intracluster medium (ICM) at the center of massive galaxy clusters with TNG-Cluster, a new cosmological magnetohydrodynamical simulation. Our sample contains 352 simulated clusters spanning a halo mass range of $10^{14} < {\rm M}_{\rm 500c} / M_\odot < 2 \times 10^{15}$ at $z=0$. We focus on the proposed classification of clusters into cool-core (CC) and non-cool-core (NCC) populations, the $z=0$ distribution of cluster central ICM properties, and the redshift evolution of the CC cluster population. We analyze resolved structure and radial profiles of entropy, temperature, electron number density, and pressure. To distinguish between CC and NCC clusters, we consider several criteria: central cooling time, central entropy, central density, X-ray concentration parameter, and density profile slope. According to TNG-Cluster and with no a priori cluster selection, the distributions of these properties are unimodal, whereby CCs and NCCs represent the two extremes. Across the entire TNG-Cluster sample at $z=0$ and based on central cooling time, the strong CC fraction is $f_{\rm SCC} = 24\%$, compared to $f_{\rm WCC} = 60\% $ and $f_{\rm NCC} = 16\%$ for weak and non-cool-cores, respectively. However, the fraction of CCs depends strongly on both halo mass and redshift, although the magnitude and even direction of the trends vary with definition. The abundant statistics of simulated high-mass clusters in TNG-Cluster enable us to match observational samples and make a comparison with data. The CC fractions from $z=0$ to $z=2$ are in broad agreement with observations, as are radial profiles of thermodynamical quantities, both globally and split into CC versus NCC halos. TNG-Cluster can therefore be used as a laboratory to study the evolution and transformations of cluster cores due to mergers, AGN feedback, and other physical processes.
The intracluster medium (ICM) of galaxy clusters encodes the impact of the physical processes that shape these massive halos, including feedback from central supermassive black holes (SMBHs). In this study we examine the gas thermodynamics, kinematics, and the effects of SMBH feedback on the core of Perseus-like galaxy clusters with a new simulation suite: TNG-Cluster. We first make a selection of simulated clusters similar to Perseus based on total mass and inner ICM properties, i.e., their cool-core nature. We identify 30 Perseus-like systems among the 352 TNG-Cluster halos at $z=0$. Many exhibit thermodynamical profiles and X-ray morphologies with disturbed features such as ripples, bubbles and shock fronts that are qualitatively similar to X-ray observations of Perseus. To study observable gas motions, we generate XRISM mock X-ray observations and conduct a spectral analysis of the synthetic data. In agreement with existing Hitomi measurements, TNG-Cluster predicts subsonic gas turbulence in the central regions of Perseus-like clusters, with a typical line-of-sight velocity dispersion of 200 km/s. This implies that turbulent pressure contributes $< 10\%$ to the dominant thermal pressure. In TNG-Cluster, such low (inferred) values of ICM velocity dispersion coexist with high-velocity outflows and bulk motions of relatively small amounts of super-virial hot gas, moving at up to thousands of km/s. However, detecting these outflows observationally may prove challenging due to their anisotropic nature and projection effects. Driven by SMBH feedback, such outflows are responsible for many morphological disturbances in the X-ray maps of cluster cores. They also increase both the inferred and intrinsic ICM velocity dispersion. This effect is somewhat stronger when velocity dispersion is measured from higher-energy lines.
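As a rough order-of-magnitude check (our own estimate, not a number from the paper; assuming a Perseus-like ICM temperature $k_{\rm B}T \approx 4$ keV and mean molecular weight $\mu \approx 0.6$), the quoted dispersion implies

```latex
\frac{P_{\rm turb}}{P_{\rm th}}
\;\simeq\; \frac{\mu m_p \sigma_v^2}{k_{\rm B} T}
\;\approx\; \frac{0.6 \times 1.67\times10^{-24}\,{\rm g}\times(200\,{\rm km\,s^{-1}})^2}{4\,{\rm keV}}
\;\approx\; 6\%\,,
```

consistent with the stated $<10\%$ contribution of turbulent to thermal pressure.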
The most massive galaxy clusters in the Universe host tens to hundreds of massive satellite galaxies, but it is unclear if these satellites are able to retain their own gaseous atmospheres. We analyze the evolution of $\sim90,000$ satellites of stellar mass $\sim10^{9-12.5}\,M_\odot$ around 352 galaxy clusters of mass $M_{\rm vir}\sim10^{14.3-15.4}\,M_\odot$ at $z=0$ from the new TNG-Cluster suite of cosmological magneto-hydrodynamical galaxy cluster simulations. The number of satellites per host increases with host mass, and the mass--richness relation broadly agrees with observations. A halo of mass $M_{\rm 200c}\sim10^{14.5}\,(10^{15})\,M_\odot$ hosts $\sim100\,(300)$ satellites today. Only a minority of satellites retain some gas, and this fraction increases with stellar mass. Lower mass satellites $\sim10^{9-10}\,M_\odot$ are more likely to retain part of their cold interstellar medium, consistent with ram pressure preferentially removing hot extended gas first. At higher stellar masses $\sim10^{10.5-12.5}\,M_\odot$ the fraction of gas-rich satellites increases to unity, and nearly all satellites retain a sizeable portion of their hot, spatially-extended circumgalactic medium (CGM), despite the ejective activity of their supermassive black holes. According to TNG-Cluster, the CGM of these gaseous satellites can be seen in soft X-ray emission (0.5-2.0 keV) that is $\gtrsim10$ times brighter than the local background. This X-ray surface brightness excess around satellites extends to $\sim30-100$ kpc, and is strongest for galaxies with higher stellar masses and larger host-centric distances. Approximately 10 per cent of the soft X-ray emission in cluster outskirts $\sim0.75-1.5R_{\rm 200c}$ originates from satellites. The CGM of member galaxies reflects the dynamics of cluster-satellite interactions and contributes to the observationally-inferred properties of the intracluster medium.
We introduce the new TNG-Cluster project, an addition to the IllustrisTNG suite of cosmological magnetohydrodynamical simulations of galaxy formation. Our objective is to significantly increase the statistical sampling of the most massive and rare objects in the Universe: galaxy clusters with log(M_200c / Msun) ~ 14.3 - 15.4 at z=0. To do so, we re-simulate 352 cluster regions drawn from a 1 Gpc volume, thirty-six times larger than TNG300, keeping entirely fixed the IllustrisTNG physical model as well as the numerical resolution. This new sample of hundreds of massive galaxy clusters enables studies of the assembly of high-mass ellipticals and their supermassive black holes (SMBHs), brightest cluster galaxies (BCGs), satellite galaxy evolution and environmental processes, jellyfish galaxies, intracluster medium (ICM) properties, cooling and active galactic nuclei (AGN) feedback, mergers and relaxedness, magnetic field amplification, chemical enrichment, and the galaxy-halo connection at the high-mass end, with observables ranging from the optical, to radio synchrotron emission and the Sunyaev-Zel'dovich (SZ) effect, to X-ray emission, as well as their cosmological applications. We present an overview of the simulation, the cluster sample, selected comparisons to data, and a first look at the diversity and physical properties of our simulated clusters and their hot ICM.
Galaxy clusters are unique laboratories for studying astrophysical processes and their impact on gas kinematics. Despite their importance, the full complexity of gas motion within and around clusters remains poorly known. This paper is part of a series presenting first results from the new TNG-Cluster simulation, a suite of 352 massive clusters including the full cosmological context: mergers, accretion, baryonic processes, feedback, and magnetic fields. Studying the dynamics and coherence of gas flows, we find that gas motions in cluster cores and intermediate regions are largely balanced between inflows and outflows, exhibiting a Gaussian distribution centered at zero velocity. In the outskirts, the net velocity distribution becomes asymmetric, featuring a double peak in which the second peak reflects cosmic accretion. Across all cluster regions, the resulting net flow distribution reveals complex gas dynamics. These are strongly correlated with halo properties: at a given total cluster mass, unrelaxed, late-forming halos with less massive black holes and lower accretion rates exhibit more dynamic behavior. Our analysis shows no clear relationship between line-of-sight and radial gas velocities, suggesting that the line-of-sight velocity alone is insufficient to distinguish between inflowing and outflowing gas. Additional properties, such as temperature, can help break this degeneracy. A velocity structure function (VSF) analysis indicates more coherent gas motion in the outskirts and more disturbed kinematics towards halo centers. In all cluster regions, the VSF shows a slope close to the theoretical Kolmogorov value (1/3), except within 50 kpc of the cluster cores, where the slope is significantly steeper. The outcome of TNG-Cluster broadly aligns with observations of the VSF of multiphase gas in galaxy clusters, across scales ranging from ~1 kpc to ~1 Mpc.
We present the $\nu\phi$MTH, a Mirror Twin Higgs (MTH) model realizing asymmetric reheating, baryogenesis and twin-baryogenesis through the out-of-equilibrium decay of a right-handed neutrino without any hard $\mathbb{Z}_2$ breaking. The MTH is the simplest Neutral Naturalness solution to the little hierarchy problem and predicts the existence of a twin dark sector related to the Standard Model (SM) by a $\mathbb{Z}_2$ symmetry that is only softly broken by a higher twin Higgs vacuum expectation value. The asymmetric reheating cools the twin sector compared to the visible one, thus evading cosmological bounds on $\Delta N_{\mathrm{eff}}$. The addition of (twin-)colored scalars allows for the generation of the visible baryon asymmetry and, by virtue of the $\mathbb{Z}_2$ symmetry, also results in the generation of a twin baryon asymmetry. We identify a unique scenario with top-philic couplings for the new scalars that can satisfy all cosmological, proton decay and LHC constraints; yield the observed SM baryon asymmetry; and generate a wide range of possible twin baryon DM fractions, from negligible to unity. The viable regime of the theory contains several hints as to the possible structure of the Twin Higgs UV completion. Our results motivate the search for the rich cosmological and astrophysical signatures of twin baryons, and atomic dark matter more generally, at cosmological, galactic and stellar scales.
Measurements of the mean free path of Lyman-continuum photons in the intergalactic medium during the epoch of reionization can help constrain the nature of the sources, as well as the sinks, of hydrogen-ionizing radiation. A recent approach to this measurement has been to utilize composite spectra of multiple quasars at $z\sim 6$, and infer the mean free path after correcting the spectra for the presence of quasar proximity zones. This has revealed not only a steep drop in the mean free path from $z=5$ to $z=6$, but also potentially a mild tension with reionization simulations. We critically examine such direct measurements of the mean free path for biases due to quasar environment, incomplete reionization, and quasar proximity zones. Using cosmological radiative transfer simulations of reionization combined with one-dimensional radiative transfer calculations of quasar proximity zones, we find that the bias in the mean free path due to overdensities around quasars is minimal at $z\sim 6$. Patchiness of reionization at this redshift also does not affect the measurements significantly. Fitting our model to the data results in a mean free path of $\lambda_{\mathrm{mfp}}=0.90^{+0.66}_{-0.40}$ pMpc at $z=6$, which is consistent with the recent measurements in the literature, indicating robustness with respect to the modelling of quasar proximity zones. We also compare various ways in which the mean free path has been defined in simulations before the end of reionization. Overall, our finding is that recent measurements of the mean free path appear to be robust against several sources of potential bias.
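For context, the mean free path enters these measurements as the e-folding scale of the Lyman-continuum attenuation; schematically (a standard parameterization, not specific to this paper),

```latex
F_{\rm LL}(r) \;\propto\; \exp\!\left(-\,\frac{r}{\lambda_{\rm mfp}}\right),
```

so the stacked quasar flux blueward of the Lyman limit declines exponentially with proper distance $r$, and a steep drop in $\lambda_{\rm mfp}$ from $z=5$ to $z=6$ translates directly into a faster decline of the composite spectra.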
The thermal freeze-out mechanism in its classical form is tightly connected to physics beyond the Standard Model around the electroweak scale, which has been the target of enormous experimental efforts. In this work we study a dark matter model in which freeze-out is triggered by a strong first-order phase transition in a dark sector, and show that this phase transition must also happen close to the electroweak scale, i.e. in the temperature range relevant for gravitational wave searches with the LISA mission. Specifically, we consider the spontaneous breaking of a $U(1)^\prime$ gauge symmetry through the vacuum expectation value of a scalar field, which generates the mass of a fermionic dark matter candidate that subsequently annihilates into dark Higgs and gauge bosons. In this set-up the peak frequency of the gravitational wave background is tightly correlated with the dark matter relic abundance, and imposing the observed value for the latter implies that the former must lie in the milli-Hertz range. A peculiar feature of our set-up is that the dark sector is not necessarily in thermal equilibrium with the Standard Model during the phase transition, and hence the temperatures of the two sectors evolve independently. Nevertheless, the requirement that the universe does not enter an extended period of matter domination after the phase transition, which would strongly dilute any gravitational wave signal, places a lower bound on the portal coupling that governs the entropy transfer between the two sectors. As a result, the predictions for the peak frequency of gravitational waves in the LISA band are robust, while the amplitude can change depending on the initial dark sector temperature.
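For orientation, the correlation between transition temperature and observed peak frequency follows from the standard cosmological redshifting of the signal (a textbook relation for a signal of peak frequency $f_*$ emitted at temperature $T_*$ with Hubble rate $H_*$ and effective relativistic degrees of freedom $g_*$; not a new result of this paper):

```latex
f_0 \;\simeq\; 1.65\times 10^{-5}\,{\rm Hz}\,
\left(\frac{f_*}{H_*}\right)
\left(\frac{T_*}{100\,{\rm GeV}}\right)
\left(\frac{g_*}{100}\right)^{1/6},
```

which places transitions near the electroweak scale in the milli-Hertz band targeted by LISA.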
Recent measurements of the ionizing photon mean free path (MFP) based on composite quasar spectra may point to a late end to reionization at $z<6$. These measurements are challenging, however, because they rely on assumptions about the proximity zones of the quasars in the analysis. For example, some of the $z\sim 6$ quasars in the composite might have been close to large-scale regions where reionization was still ongoing ("neutral islands"), and it is unclear how this would affect the measurements. We address the question here with mock MFP measurements from radiative transfer simulations. We find that, even in the presence of neutral islands, the inferred MFP tracks to within $30 \%$ the true attenuation scale of the spatially averaged IGM, which includes opacity from both the ionized medium and the islands. During reionization, this scale is always shorter than the MFP in the ionized medium. The inferred MFP is sensitive at the $< 50\%$ level to assumptions about the quasar environments and lifetimes for realistic models. We demonstrate that future analyses with improved data may require explicitly modeling the effects of neutral islands on the composite spectra, and we outline a method for doing this. Lastly, we quantify the effects of neutral islands on Lyman-series transmission, which has been modeled with optically thin simulations in previous MFP analyses. Neutral islands can suppress transmission at $\lambda_{\rm rest} < 912$ Å significantly, up to a factor of 2 for $z_{\rm qso} = 6$ in a plausible reionization scenario, owing to absorption by many closely spaced lines as quasar light redshifts into resonance. However, the suppression is almost entirely degenerate with the spectrum normalization, and thus does not significantly bias the inferred MFP.
In this dissertation, the nature of Dark Energy (DE) is examined from both theoretical and phenomenological perspectives. The possibility of DE being a dynamical quantity in quantum field theory (QFT) in curved spacetime is studied. The primary aim is to go beyond the usual approach that relies on ad hoc fields and instead treat DE as the quantum vacuum under appropriate QFT renormalization. Specifically, the dynamical behavior of DE could arise from quantum vacuum fluctuations in the Universe, evolving alongside the background expansion. Thus, the evolution of the vacuum energy density can be expressed in terms of the Hubble function and its derivatives, $\rho_{\rm vac} =\rho_{\rm vac}(H)$. This approach yields a significant result: the equation of state of the quantum vacuum, derived from first principles, deviates from its traditional constant value of $w_{\rm vac}=-1$. Additionally, a new inflationary mechanism emerges in this context, rooted in quantum effects in curved spacetime. Moreover, the thesis presents a phenomenological exploration of two related models that go beyond the $\Lambda$CDM model: the Brans-Dicke model with a cosmological constant and the Running Vacuum Model, which is connected to the QFT calculations. These models have been tested against different datasets and scenarios to determine the constraints on their free parameters. The results of the fits are presented and discussed in relation to the cosmological tensions concerning $H_0$ and $\sigma_8$. The conclusions drawn from this thesis indicate promising signals of the dynamical behavior of the quantum vacuum, potentially impacting the cosmological constant problem and the cosmological tensions.
We perform a dynamical system analysis and a Bayesian model selection for a new set of interacting scenarios in the framework of modified holographic Ricci dark energy models (MHR-IDE). The dynamical analysis shows a modified radiation epoch and a late-time attractor corresponding to dark energy. We use a combination of background data such as type Ia supernovae, cosmic chronometers, cosmic microwave background, and baryon acoustic oscillations measurements. We find evidence against all the MHR-IDE scenarios studied with respect to $\Lambda$CDM, when the full joint analysis is considered.
We study generalizations of the original Alcubierre warp drive metric to the case of a curved background spacetime. We find that the presence of a horizon is essential when one moves from spherical to Cartesian coordinates in order to avoid additional singularities. For the specific case of a Schwarzschild black hole, the horizon would be effectively absent for observers inside the warp bubble, implying that warp drives may provide a safe route to cross horizons. Moreover, we discover that the black hole's gravitational field can decrease the amount of negative energy required to sustain a warp drive, which may be instrumental for creating microscopic warp drives in lab experiments. A Bose-Einstein condensate (BEC) model is also introduced to propose a possible test in the Analogue Gravity framework.
In this work, we examine the propagation of gravitational waves in cosmological and astrophysical spacetimes in the context of Einstein--Gauss-Bonnet gravity, in view of the GW170817 event. The perspective from which we approach the problem is to obtain a theory in which the gravitational wave speed is equal to that of light in vacuum, or at least compatible with the constraints imposed by the GW170817 event. As we show, in the context of Einstein--Gauss-Bonnet gravity, the propagation speed of gravitational waves in cosmological spacetimes can be compatible with the GW170817 event, and we reconstruct some viable models. However, the propagation of gravitational waves in spherically symmetric spacetimes violates the GW170817 constraints; thus it is impossible for a gravitational wave propagating in a spherically symmetric spacetime to have a propagation speed equal to that of light in vacuum. The same conclusion applies to the Einstein--Gauss-Bonnet theory with two scalars. We discuss the possible implications of our results for spherically symmetric spacetimes.
The warm inflationary scenario is investigated in the context of the affine gravity formalism. A general framework is provided for studying different single-field potentials. Using the sphaleron mechanism, we explain the continuous dissipation of the inflaton field into radiation, leading to the $\Gamma=\Gamma_0 T^3$ dissipation coefficient. The treatment is performed in the weak and strong dissipation limits. We consider the quartic potential as a detailed case study. Moreover, we discuss various constraints on inflationary models in general. We compare the theoretical results of the quartic potential model within warm inflation with the observational constraints from Planck 2018 and BICEP/Keck 2018, as expressed through the tensor-to-scalar ratio, the spectral index, and the perturbation spectrum.
Cosmic strings contribute to our understanding of the fundamental structure and evolution of the universe, and their detection could unveil new physical laws and phenomena. We therefore anticipate the detection of Stochastic Gravitational Wave Background (SGWB) signals generated by cosmic strings in space-based detectors. We have analyzed the detection capabilities of the individual space-based detectors LISA and Taiji, as well as the joint LISA-Taiji detector network, for SGWB signals produced by cosmic strings, taking into account other astrophysical noise sources. The results indicate that the LISA-Taiji network exhibits superior capabilities in detecting SGWB signals generated by cosmic strings. The LISA-Taiji network can achieve an uncertainty estimation of $\Delta G\mu/G\mu<0.5$ for cosmic string tension $G\mu\sim4\times10^{-17}$, can provide evidence for the presence of SGWB signals generated by cosmic strings at $G\mu\sim10^{-17}$, and strong evidence at $G\mu\sim10^{-16}$. In the presence of only the SGWB signal, it can achieve a relative uncertainty of $\Delta G\mu/G\mu<0.5$ for cosmic string tension $G\mu<10^{-18}$, and provide strong evidence at $G\mu\sim10^{-17}$.
Strongly coupled systems like the quark-hadron transition (if it is of first order) are becoming an active playground for the physics of cosmological first-order phase transitions. However, the traditional field-theoretic approach to strongly coupled first-order phase transitions poses a great challenge, driving recent efforts toward holographic dual theories with explicit numerical simulations. These holographic numerical simulations have revealed an intriguing linear correlation between the phase pressure difference (pressure difference away from the wall) and the non-relativistic terminal velocity of an expanding planar wall, which we reproduced analytically, alongside both cylindrical and spherical walls, from perfect-fluid hydrodynamics in our previous study, but only for a bag equation of state. We also found in our previous study a universal quadratic correlation between the wall pressure difference (pressure difference near the bubble wall) and the non-relativistic terminal wall velocity, regardless of wall geometry. In this paper, we generalize these analytic relations between the phase/wall pressure difference and the terminal wall velocity to a more realistic equation of state beyond the simple bag model, providing the most general predictions so far for future tests by holographic numerical simulations of strongly coupled first-order phase transitions.
We extend current models of the halo occupation distribution (HOD) to include a flexible, empirical framework for the forward modeling of the intrinsic alignment (IA) of galaxies. A primary goal of this work is to produce mock galaxy catalogs for the purpose of validating existing models and methods for the mitigation of IA in weak lensing measurements. This technique can also be used to produce new, simulation-based predictions for IA and galaxy clustering. Our model is probabilistically formulated, and rests upon the assumption that the orientations of galaxies exhibit a correlation with their host dark matter (sub)halo orientation or with their position within the halo. We examine the necessary components and phenomenology of such a model by considering the alignments between (sub)halos in a cosmological dark-matter-only simulation. We then validate this model for a realistic galaxy population in a set of simulations in the IllustrisTNG suite. We create an HOD mock with Illustris-like correlations using our method, constraining the associated IA model parameters, with the $\chi^2_{\rm dof}$ between our model's correlations and those of Illustris matching as closely as 1.4 and 1.1 for orientation--position and orientation--orientation correlation functions, respectively. By modeling the misalignment between galaxies and their host halo, we show that the 3-dimensional two-point position and orientation correlation functions of simulated (sub)halos and galaxies can be accurately reproduced from quasi-linear scales down to $0.1~h^{-1}{\rm Mpc}$. We also find evidence for environmental influence on IA within a halo. Our publicly-available software provides a key component enabling efficient determination of Bayesian posteriors on IA model parameters using observational measurements of galaxy-orientation correlation functions in the highly nonlinear regime.
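As a minimal illustrative sketch of the probabilistic approach described above (our own 2D toy example, not the paper's model or software; the Gaussian misalignment distribution and the $\langle\cos 2\Delta\theta\rangle$ statistic are simplifying assumptions), one can draw each galaxy's orientation as its host halo's orientation plus a random misalignment angle and measure the resulting alignment strength:

```python
import math
import random

rng = random.Random(0)

def sample_galaxy_angle(halo_angle, sigma, rng):
    """Galaxy position angle = halo angle plus a Gaussian misalignment."""
    return halo_angle + rng.gauss(0.0, sigma)

# toy halo orientations, uniform in [0, pi)
halos = [rng.uniform(0.0, math.pi) for _ in range(2000)]

stats = {}
for sigma in (0.1, 1.0):  # misalignment widths in radians
    gals = [sample_galaxy_angle(h, sigma, rng) for h in halos]
    # alignment statistic <cos 2*dtheta>: 1 = perfect alignment, 0 = random
    stats[sigma] = sum(math.cos(2.0 * (g - h))
                       for g, h in zip(gals, halos)) / len(halos)
    print(f"sigma={sigma}: <cos 2*dtheta> = {stats[sigma]:.2f}")
```

A small misalignment width $\sigma$ yields an alignment statistic near 1 (strong IA), while a large width washes the signal out towards 0; a model of this kind tunes such a width against measured orientation correlation functions.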
Different values of the Hubble constant have been reported in the literature over the past years. In particular, early-time measurements often result in an $H_0$ much smaller than those from late-time ones, producing a statistically significant discrepancy and giving rise to the so-called Hubble tension. The trouble with the Hubble constant is often treated as a cosmological problem. However, the Hubble constant can also be a laboratory to probe cosmology and particle physics models. In this work, we investigate whether explaining the $H_0$ trouble using non-thermal dark matter production aided by phantom-like cosmology is consistent with Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillation (BAO) data. We performed a full Monte Carlo simulation using CMB and BAO datasets with priors on the cosmological parameters $\Omega_b h^2$, $\Omega_c h^2$, $100\theta$, $\tau_{opt}$, and $w$, and concluded that non-thermal dark matter production aided by phantom-like cosmology yields at most $H_0=70.5$ km s$^{-1}$ Mpc$^{-1}$, which is consistent with some late-time measurements. However, if $H_0> 72$ km s$^{-1}$ Mpc$^{-1}$ as many late-time observations indicate, an alternative solution to the Hubble trouble is needed. Lastly, we limit the fraction of relativistic dark matter at matter-radiation equality to be at most 1\%.
We study chirality production in the pseudoscalar inflation model of magnetogenesis, taking into account the Schwinger effect and particle collisions in plasma in the relaxation-time approximation. We consider the Schwinger production of one Dirac fermion species by an Abelian gauge field in two cases: (i) the fermion carries only the weak charge with respect to the U(1) group, and (ii) it is also charged with respect to another strongly coupled gauge group. While the gradient-expansion formalism is employed to describe the evolution of the gauge field, the plasma is described by a hydrodynamical approach, which allows us to determine the number, energy density, and chirality of the produced fermions. We find that while chirality production is very efficient for both weakly and strongly interacting fermions, the resulting gauge field is typically stronger in the case of strongly interacting fermions, due to the suppression of the Schwinger conductivity by particle collisions.
Aims. We aim to characterize the gas properties in the cluster outskirts ($R_{500}<r<R_{200}$) and in the detected inter-cluster filaments ($>R_{200}$) of the A3391/95 system, and to compare them to predictions. Methods. We performed X-ray image and spectral analyses using the eROSITA PV data to assess the gas morphology and properties in the outskirts and the filaments in the directions of the previously detected Northern and Southern Filament of the A3391/95 system. We took particular care with the foreground treatment. Results. In the filament-facing outskirts of A3391 and the Northern Clump, we find higher temperatures than typical cluster outskirts profiles, with a significance of $1.6-2.8\sigma$, suggesting heating due to their connections with the filaments. We confirm surface brightness (SB) excess in the profiles of the Northern, Eastern, and Southern Filaments. From spectral analysis, we detect ~1 keV hot gas in the Northern and Southern Filaments. The filament metallicities are below 10% solar and the electron densities $n_e$ range between 2.6 and $6.3\times10^{-5}~\mathrm{cm^{-3}}$. The characteristic properties of the Little Southern Clump (LSC; ~1.5$R_{200}$ from A3395S in the Southern Filament) suggest that it is a small galaxy group. Excluding the LSC from the analysis of the Southern Filament decreases the gas density by 30%. This shows the importance of taking into account any clumps to avoid overestimating the gas measurements in the outskirts and filament regions. Conclusions. The $n_e$ of the filaments are consistent with the WHIM properties predicted by cosmological simulations, but the temperatures are higher, close to the upper WHIM temperature limit. As both filaments are short and located in a denser environment, stronger gravitational heating may be responsible for this temperature enhancement. The metallicities are low, but still within the range expected from the simulations.
We present a catalog of 689 galaxy cluster candidates detected at significance $\xi>4$ via their thermal Sunyaev-Zel'dovich (SZ) effect signature in 95 and 150 GHz data from the 500-square-degree SPTpol survey. We use optical and infrared data from the Dark Energy Camera and from the Wide-field Infrared Survey Explorer (WISE) and Spitzer satellites to confirm 544 of these candidates as clusters with $\sim94\%$ purity. The sample has an approximately redshift-independent mass threshold at redshift $z>0.25$ and spans $1.5 \times 10^{14} < M_{500c} < 9.1 \times 10^{14}$ $M_\odot/h_{70}$ and $0.03<z\lesssim1.6$ in mass and redshift, respectively; 21\% of the confirmed clusters are at $z>1$. We use external radio data from the Sydney University Molonglo Sky Survey (SUMSS) to estimate contamination to the SZ signal from synchrotron sources. The contamination reduces the recovered $\xi$ by a median value of 0.032, or $\sim0.8\%$ of the $\xi=4$ threshold value, and $\sim7\%$ of candidates have a predicted contamination greater than $\Delta \xi = 1$. With the exception of a small number of systems $(<1\%)$, an analysis of clusters detected in single-frequency 95 and 150 GHz data shows no significant contamination of the SZ signal by emission from dusty or synchrotron sources. This cluster sample will be a key component in upcoming astrophysical and cosmological analyses of clusters. The SPTpol millimeter-wave maps and associated data products used to produce this sample are available at https://pole.uchicago.edu/public/Data/Releases.html, and the NASA LAMBDA website. An interactive sky server with the SPTpol maps and Dark Energy Survey data release 2 images is also available at NCSA https://skyviewer.ncsa.illinois.edu.
We consider the effects of backreaction on axion-SU(2) dynamics during inflation. We use the linear evolution equations for the gauge field modes and compute their backreaction on the background quantities numerically using the Hartree approximation. We find a new dynamical attractor solution for the axion field and the vacuum expectation value of the gauge field, where the latter has an opposite sign with respect to the chromo-natural inflation solution. Our findings are of particular interest to the phenomenology of axion-SU(2) inflation, redefining parts of the viable parameter space. In addition, the backreaction effects lead to characteristic oscillatory features in the primordial gravitational wave background that are potentially detectable with upcoming gravitational wave detectors.
Third generation gravitational-wave (GW) detectors are expected to detect a large number of binary black holes (BBHs) out to large redshifts, opening up an independent probe of the large scale structure using their clustering. This probe will be complementary to probes based on galaxy clustering -- GW events could be observed up to very large redshifts ($z \sim 10$), although the source localization will be much poorer at large distances ($\sim$ tens of square degrees). We explore the possibility of probing the large scale structure from the spatial distribution of the observed BBH population, using their two-point (auto)correlation function. We find that we can estimate the bias factor of the BBH population (up to $z \sim 0.7$) with a few years of observations with these detectors. Our method relies solely on the source-location posteriors obtained from the GW events and does not require any information from electromagnetic observations. This will help in identifying the type of galaxies that host the BBH population, thus shedding light on their origins.
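The two-point (auto)correlation function referred to above is commonly estimated with the Landy-Szalay estimator; a minimal numpy sketch (naive $O(N^2)$ pair counting, illustrative rather than the authors' pipeline; bins should start above zero so that self-pairs are excluded):

```python
import numpy as np

def pair_counts(a, b, bins):
    """Histogram of pair separations between point sets a and b (N x 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=bins)[0].astype(float)

def landy_szalay(data, randoms, bins):
    """Landy-Szalay estimator xi(r) = (DD - 2DR + RR) / RR,
    with pair counts normalized by the number of ordered pairs."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, bins) / (nd * (nd - 1))
    rr = pair_counts(randoms, randoms, bins) / (nr * (nr - 1))
    dr = pair_counts(data, randoms, bins) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr
```

For GW sources, the point positions would themselves be drawn from the broad source-location posteriors, which dilutes the measured clustering amplitude relative to a galaxy survey.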
We present a novel model that may provide an interpretation for a class of non-repeating FRBs -- short ($<1~\rm{s}$), bright ($0.1 - 1000~\rm{Jy}$) bursts of MHz-GHz frequency radio waves. The model has three ingredients: a compact object progenitor, an effective magnetic field strength around $10^{10}~{\rm Gauss}$, and high-frequency (MHz-GHz) gravitational waves (GWs). At resonance, energy conversion from GWs to electromagnetic waves occurs when GWs pass through the magnetosphere of such compact objects, due to the Gertsenshtein-Zel'dovich effect. This conversion produces bursts of electromagnetic waves in the MHz-GHz range, leading to FRBs. Our model has three key features: (i) it predicts the peak flux, (ii) it naturally explains the pulse width, and (iii) it accounts for the coherent nature of FRBs. We thus conclude that a neutron star/magnetar could be the progenitor of FRBs. Further, our model offers a novel perspective on the indirect detection of GWs at high frequencies beyond current detection capabilities. Thus, transient events like FRBs are a rich source for the current era of multi-messenger astronomy.
The discrepancy between the value of the Hubble constant $H_0$ in the late, local universe and the one obtained from the Planck collaboration, representing an all-sky value for the early universe, has reached the 5-$\sigma$ level. Approaches to alleviate the tension span a wide range of ansatzes: increasing uncertainties in data acquisition, reducing biases in the astrophysical models that underlie the probes, or taking into account observer-dependent variances in the parameters of the cosmological background model. Yet, early- and late-universe probes are often treated as independent: they live on different length scales and require different perturbations to be subtracted. Hence, fitting a flat Friedmann-Lema\^itre-Robertson-Walker cosmology to different probes at different cosmic epochs can yield different sets of cosmological parameter values. Tensions arise if these background fits and perturbing biases are not consistently calibrated or synchronised with respect to each other. This consistent model-fitting calibration is lacking between the two $H_0$ values mentioned above, thus causing a tension. As shown here, this interpretation resolves the $H_0$ tension if 15% of the matter-density parameter obtained from the fit to the cosmic microwave background, $\Omega_m = 0.315$, is assigned to decoupled perturbations, yielding $\Omega_m = 0.267$ for the fit at the redshifts of the supernova observations. Existing theoretical analyses and data evaluations that support this solution are discussed.
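A quick arithmetic check of the quoted numbers (ours, not from the abstract): assigning 15% of the CMB-fit matter density to decoupled perturbations gives

```latex
\Omega_m^{\rm late} = (1 - 0.15)\,\Omega_m^{\rm CMB} = 0.85 \times 0.315 \approx 0.268,
```

consistent, up to rounding, with the value $\Omega_m = 0.267$ quoted for the fit at supernova redshifts.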
Higher-order theories of gravity are extensions of general relativity (GR) motivated mainly by high-energy physics in the search for an ultraviolet completion of GR. They are characterized by the inclusion of correction terms in the Einstein-Hilbert action that lead to higher-order field equations. In this paper, we investigate inflation in the GR extension built with all correction terms up to second order involving only the scalar curvature $R$, namely $R^{2}$, $R^{3}$, and $R\square R$. We study inflation within the Friedmann cosmological background, where we analyze the phase space of the model and explore inflation at leading order in the slow-roll approximation. Furthermore, we describe the evolution of scalar perturbations and properly establish the curvature perturbation. Finally, we confront the proposed model with recent observations from Planck, BICEP3/Keck, and BAO data.
Observations of local star-forming galaxies (SFGs) show a tight correlation between their singly ionized carbon line luminosity ($L_{\rm [C_{II}]}$) and star formation rate (SFR), suggesting that $L_{\rm [C_{II}]}$ may be a useful SFR tracer for galaxies. Some other galaxy populations, however, are found to have lower $L_{\rm [C_{II}]}{}/{}\rm SFR$ than the local SFGs, including the infrared-luminous, starburst galaxies at low and high redshifts, as well as some moderately star-forming galaxies at the epoch of re-ionization (EoR). The origin of this `$\rm [C_{II}]$ deficit' is unclear. In this work, we study the $L_{\rm [C_{II}]}$-SFR relation of galaxies using a sample of $z=0-8$ galaxies with $M_*\approx10^7-5\times10^{11}\,M_\odot$ extracted from cosmological volume and zoom-in simulations from the Feedback in Realistic Environments (FIRE) project. We find a simple analytic expression for $L_{\rm [C_{II}]}$/SFR of galaxies in terms of the following parameters: mass fraction of $\rm [C_{II}]$-emitting gas ($f_{\rm [C_{II}]}$), gas metallicity ($Z_{\rm gas}$), gas density ($n_{\rm gas}$) and gas depletion time ($t_{\rm dep}{}={}M_{\rm gas}{}/{}\rm SFR$). We find two distinct physical regimes, where $t_{\rm dep}$ ($Z_{\rm gas}$) is the main driver of the $\rm [C_{II}]$ deficit in $\rm H_2$-rich ($\rm H_2$-poor) galaxies. The observed $\rm [C_{II}]$ deficit of IR-luminous galaxies and early EoR galaxies, corresponding to the two different regimes, is due to short gas depletion time and low gas metallicity, respectively. Our result indicates that $\rm [C_{II}]$ deficit is a common phenomenon of galaxies, and caution needs to be taken when applying a constant $L_{\rm [C_{II}]}$-to-SFR conversion factor derived from local SFGs to estimate cosmic SFR density at high redshifts and interpret data from upcoming $\rm [C_{II}]$ line intensity mapping experiments.
A cosmological first-order phase transition (FOPT) can involve strong dynamics, but its bubble wall velocity is difficult to determine owing to the lack of detailed collision terms. Recent holographic numerical simulations of strongly coupled theories with a FOPT prefer a relatively small wall velocity, linearly correlated with the pressure difference between the false and true vacua for a planar wall. In this Letter, we analytically derive the non-relativistic limit of a planar/cylindrical/spherical wall expansion of a bubble strongly interacting with the thermal plasma. The planar-wall result reproduces the linear relation found previously in the holographic numerical simulations. The results for cylindrical and spherical walls can be directly tested in future numerical simulations. Once confirmed, the bubble wall velocity for a strongly coupled FOPT can be expressed purely in terms of hydrodynamics without invoking the underlying microphysics.
[Abridged] Galaxy clusters are the most massive gravitationally-bound systems in the universe and are widely considered to be an effective cosmological probe. We propose the first Machine Learning method using galaxy cluster properties to derive unbiased constraints on a set of cosmological parameters, including Omega_m, sigma_8, Omega_b, and h_0. We train the machine learning model with mock catalogs including "measured" quantities from the Magneticum multi-cosmology hydrodynamical simulations, such as gas mass, gas bolometric luminosity, gas temperature, stellar mass, cluster radius, total mass, velocity dispersion, and redshift, and correctly predict all parameters with uncertainties of the order of ~14% for Omega_m, ~8% for sigma_8, ~6% for Omega_b, and ~3% for h_0. This first test is exceptionally promising, as it shows that machine learning can efficiently map the correlations in the multi-dimensional space of the observed quantities to the cosmological parameter space and narrow down the probability that a given sample belongs to a given cosmological parameter combination. In the future, these ML tools can be applied to cluster samples with multi-wavelength observations from surveys like LSST, CSST, Euclid, and Roman in the optical and near-infrared bands, and eROSITA in X-rays, to constrain both the cosmology and the effect of baryonic feedback.
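As a toy illustration of mapping observed cluster quantities to cosmological parameters, here is a minimal $k$-nearest-neighbour regressor on a mock catalogue (the authors' actual model and features are specified in the paper; everything below is a hypothetical stand-in, numpy only):

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=5):
    """Predict target values for query points as the mean target of the
    k nearest training points in standardized feature space."""
    mu, sig = train_X.mean(axis=0), train_X.std(axis=0)
    tX = (train_X - mu) / sig
    qX = (query_X - mu) / sig
    d = np.linalg.norm(qX[:, None, :] - tX[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)
```

With `train_X` a mock catalogue of cluster observables (e.g. gas mass, temperature, velocity dispersion) generated under different parameter values and `train_y` the corresponding Omega_m, this captures the basic idea of learning the observable-to-parameter map from multi-cosmology simulations.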
The elastic scattering between dark matter (DM) and radiation can potentially explain the small-scale observations that challenge the cold dark matter paradigm, since damping density fluctuations via dark acoustic oscillations in the early universe erases small-scale structure. We study a semi-analytical subhalo model for dark matter interacting with radiation, based on the extended Press-Schechter formalism and a prescription for the tidal evolution of subhalos. We also test the elastic scattering between DM and neutrinos using observations of Milky-Way satellites from the Dark Energy Survey and PanSTARRS1. We conservatively impose strong constraints on the DM-neutrino scattering cross section of $\sigma_{{\rm DM}\text{-}\nu,n}\propto E_\nu^n$ $(n=0,2,4)$ at $95\%$ confidence level (CL), $\sigma_{{\rm DM}\text{-}\nu,0}< 10^{-32}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})$, $\sigma_{{\rm DM}\text{-}\nu,2}< 10^{-43}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})(E_\nu/E_{\nu}^0)^2$ and $\sigma_{{\rm DM}\text{-}\nu,4}< 10^{-54}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})(E_\nu/E_{\nu}^0)^4$, where $E_\nu$ is the neutrino energy and $E_\nu^0$ is the average momentum of relic cosmic neutrinos today, $E_\nu^0 \simeq 6.1\ {\rm K}$. By imposing a satellite forming condition, we obtain the strongest upper bounds on the DM-neutrino cross section at $95\%$ CL, $\sigma_{{\rm DM}\text{-}\nu,0}< 4\times 10^{-34}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})$, $\sigma_{{\rm DM}\text{-}\nu,2}< 10^{-46}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})(E_\nu/E_{\nu}^0)^2$ and $\sigma_{{\rm DM}\text{-}\nu,4}< 7\times 10^{-59}\ {\rm cm^2}\ (m_{\rm DM}/{\rm GeV})(E_\nu/E_{\nu}^0)^4$.
Generative adversarial networks (GANs) are frequently utilized in astronomy to construct emulators of numerical simulations. Nevertheless, training GANs can prove to be a precarious task, as they are prone to instability and often suffer from mode collapse. The diffusion model, by contrast, can also generate high-quality data without adversarial training, and has shown superiority over GANs on several natural image datasets. In this study, we undertake a quantitative comparison between the denoising diffusion probabilistic model (DDPM) and StyleGAN2 (one of the most robust types of GANs) via a set of robust summary statistics from the scattering transform. In particular, we utilize both models to generate images of 21 cm brightness temperature maps, as a case study, conditioned on the astrophysical parameters that govern the process of cosmic reionization. Using our new Fr\'echet Scattering Distance (FSD) as the evaluation metric to quantitatively compare the sample distributions of the generative models and simulations, we demonstrate that DDPM outperforms StyleGAN2 on varied sizes of training sets. Through Fisher forecasts, we demonstrate that on our datasets StyleGAN2 exhibits mode collapse in varied ways, while DDPM yields more robust generation. We also explore the role of classifier-free guidance in DDPM and show a preference for a non-zero guidance scale only when the training data are limited. Our findings indicate that the diffusion model presents a promising alternative to GANs for the generation of accurate images. These images can subsequently provide reliable parameter constraints, particularly in the realm of astrophysics.
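The Fréchet distance between two Gaussians is the building block behind metrics of this family (e.g. FID and, by analogy over scattering coefficients, the FSD defined here); a generic sketch, not the authors' code:

```python
import numpy as np
from scipy import linalg

def frechet_distance(x, y):
    """Frechet (2-Wasserstein) distance between Gaussians fitted to
    two sample sets x, y of shape (N, d):
    |mu1 - mu2|^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    c1 = np.cov(x, rowvar=False)
    c2 = np.cov(y, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):      # discard tiny numerical imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```

For an FSD-style metric, `x` and `y` would hold scattering-transform coefficients of generated and simulated maps rather than raw pixels.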
This note aims at investigating two different situations where the classical general relativistic dynamics competes with the evolution driven by Hawking evaporation. We focus, in particular, on binary systems of black holes emitting gravitational waves and gravitons, and on the cosmological evolution when black holes are immersed in their own radiation bath. Several non-trivial features are underlined in both cases.
The pulsar timing array (PTA) collaborations have recently suggested the presence of a gravitational wave background at nano-Hertz frequencies. In this paper, we explore a potential inflationary interpretation of this signal within the context of a simple and healthy parity-violating gravity model termed the Nieh-Yan modified Teleparallel Gravity. Within this model, two inflationary scenarios are evaluated, both yielding significant polarized primordial gravitational waves (PGWs) that align well with the results from PTA observations. Furthermore, the resulting PGWs can display strong circular polarization and significant anisotropies in the PTA frequency band, which are distinct features to be verified by observations of both PTA and the cosmic microwave background. The detection of such a distinctive background of PGWs is expected to provide strong evidence supporting our scenarios, as well as insights into inflationary dynamics and gravity theory.
The history of the seemingly simple problem of straight line fitting in the presence of both $x$ and $y$ errors has been fraught with misadventure, with statistically ad hoc and poorly tested methods abounding in the literature. The problem stems from the emergence of latent variables describing the "true" values of the independent variables, the priors on which have a significant impact on the regression result. By analytic calculation of maximum a posteriori values and biases, and comprehensive numerical mock tests, we assess the quality of possible priors. In the presence of intrinsic scatter, the only prior that we find to give reliably unbiased results in general is a mixture of one or more Gaussians with means and variances determined as part of the inference. We find that a single Gaussian is typically sufficient and dub this model Marginalised Normal Regression (MNR). We illustrate the necessity for MNR by comparing it to alternative methods on an important linear relation in cosmology, and extend it to nonlinear regression and an arbitrary covariance matrix linking $x$ and $y$. We publicly release a Python/Jax implementation of MNR and its Gaussian mixture model extension that is coupled to Hamiltonian Monte Carlo for efficient sampling, which we call ROXY (Regression and Optimisation with X and Y errors).
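The marginalisation at the heart of MNR admits a closed form: with a single-Gaussian prior $x_t \sim \mathcal{N}(\mu, w^2)$ on the latent true abscissae, the observed pair $(x, y)$ for the line $y = a x_t + b$ (Gaussian errors $\sigma_x$, $\sigma_y$, intrinsic scatter $\sigma$) is jointly Gaussian, so the latent variables integrate out analytically. A minimal numpy/scipy sketch of this likelihood (our own illustration of the idea, not the released ROXY code):

```python
import numpy as np
from scipy.optimize import minimize

def mnr_negloglike(theta, x, y, sx, sy):
    """Negative log-likelihood for y = a*x_t + b with Gaussian x/y errors,
    intrinsic scatter sig, and latent x_t ~ N(mu, w^2) marginalised out.
    Marginal per-point covariance: [[w^2+sx^2, a w^2], [a w^2, a^2 w^2+sy^2+sig^2]]."""
    a, b, log_sig, mu, log_w = theta
    sig2, w2 = np.exp(2 * log_sig), np.exp(2 * log_w)
    sxx = w2 + sx**2
    syy = a**2 * w2 + sy**2 + sig2
    sxy = a * w2
    det = sxx * syy - sxy**2
    dx, dy = x - mu, y - (a * mu + b)
    chi2 = (syy * dx**2 - 2 * sxy * dx * dy + sxx * dy**2) / det
    return 0.5 * np.sum(chi2 + np.log(det)) + len(x) * np.log(2 * np.pi)

def fit_mnr(x, y, sx, sy):
    """Maximise the marginal likelihood over (a, b, log sig, mu, log w)."""
    theta0 = np.array([1.0, 0.0, -1.0, x.mean(), np.log(x.std())])
    res = minimize(mnr_negloglike, theta0, args=(x, y, sx, sy),
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000,
                            "xatol": 1e-9, "fatol": 1e-9})
    return res.x
```

ROXY itself samples the corresponding posterior with Hamiltonian Monte Carlo and supports Gaussian-mixture priors rather than this simple optimisation.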
Recent attempts to fully resolve the Hubble tension with early dark energy models seem to favor a primordial Harrison-Zeldovich Universe, with an exactly scale-invariant scalar spectrum. Restoring the Harrison-Zeldovich spectrum within the single-field inflationary paradigm appears to be infeasible, prompting multi-field approaches based on either curvaton or waterfall models. In this Letter, we successfully align with the Harrison-Zeldovich spectrum within single-field chaotic inflation via a non-minimal derivative coupling, and the chaotic potential previously disfavoured by Planck+BICEP/Keck data in the standard $\Lambda$CDM model now returns to the scope of future polarization observations of the cosmic microwave background.
We address the properties of extreme black holes by considering the Christodoulou-Ruffini/Hawking mass-energy formula. By simple geometrical arguments, we find that the mass-energy formula is satisfied by two meaningful extreme black holes whose mass ($m$), charge ($Q$), and angular momentum/spin ($L$) are incommensurable with the black hole's irreducible mass ($m_{ir}$). These black holes have been studied in the Christodoulou diagram, and their topology in $E^3$ has been investigated by differential geometry. We show that one of the analyzed Kerr-Newman black holes corresponds to the case where the Gaussian curvature becomes zero at the poles. In the second extreme black hole examined, the fundamental quantities $m$, $Q$, and $L$ are linked to the irreducible mass by coefficients that depend solely on the golden ratio $\phi$. In this case, we show that if this extreme black hole satisfies the Pythagorean fundamental-forms relation at the umbilic points, then both the "scale parameter" (corresponding to twice the irreducible mass) and the Gauss curvature of the surface at the poles are equal to golden ratio numbers. For these two extreme black holes, we calculate the energy extractable by reversible transformations, finding that the percentage of energy extractable from the latter black hole is higher than from the former.
We revise the dynamics of interacting vector-like dark energy, a theoretical framework proposed to explain the accelerated expansion of the universe. By investigating the interaction between vector-like dark energy and dark matter, we analyze its effects on the cosmic expansion history and the thermodynamics of the accelerating universe. Our results demonstrate that the presence of interaction significantly influences the evolution of vector-like dark energy, leading to distinct features in its equation of state and energy density. We compare our findings with observational data and highlight the importance of considering interactions in future cosmological studies.
The common-envelope (CE) phase is a crucial stage in binary star evolution because the orbital separation can shrink drastically while ejecting the envelope of a giant star. Three-dimensional (3D) hydrodynamic simulations of CE evolution are indispensable to learning about the mechanisms that play a role during the CE phase. While these simulations offer great insight, they are computationally expensive. We propose a one-dimensional (1D) model to simulate the CE phase within the stellar evolution code $\texttt{MESA}$ by using a parametric drag force prescription for dynamical drag and adding the released orbital energy as heat into the envelope. We compute CE events of a $0.97\,\mathrm{M}_\odot$ asymptotic giant-branch star and a point mass companion with mass ratios of 0.25, 0.50, and 0.75, and compare them to 3D simulations of the same setup. The 1D CE model contains two free parameters, which we demonstrate are both needed to fit the spiral-in behavior and the fraction of ejected envelope mass of the 1D method to the 3D simulations. For mass ratios of 0.25 and 0.50, we find good-fitting 1D simulations, while for a mass ratio of 0.75, we do not find a satisfactory fit to the 3D simulation as some of the assumptions in the 1D method are no longer valid. In all our simulations, we find that the released recombination energy is important to accelerate the envelope and drive the ejection.
The most massive black holes in our Universe form binaries at the centre of merging galaxies. The recent evidence for a gravitational-wave (GW) background from pulsar timing may constitute the first observation that these supermassive black hole binaries (SMBHBs) merge. Yet, the most massive SMBHBs are out of reach of interferometric detectors and are exceedingly difficult to resolve individually with pulsar timing. These limitations call for unexplored strategies to detect individual SMBHBs in the uncharted frequency band $\lesssim10^{-5}\,\rm Hz$ in order to establish their abundance and decipher the coevolution with their host galaxies. Here we show that SMBHBs imprint detectable long-term modulations on GWs from stellar-mass binaries residing in the same galaxy. We determine that proposed deci-Hz GW interferometers sensitive to numerous stellar-mass binaries will uncover modulations from $\sim\mathcal{O}(10^{-1}$ - $10^4)$ SMBHBs with masses $\sim\mathcal{O}(10^7$ - $10^9)\,\rm M_\odot$ out to redshift $z\sim3.5$. This offers a unique opportunity to map the population of SMBHBs through cosmic time, which might remain inaccessible otherwise.
The Imaging X-ray Polarimetry Explorer measured with high significance the X-ray polarization of the brightest Z-source, Sco X-1, yielding in the nominal 2-8 keV energy band a polarization degree of 1.0(0.2)% and a polarization angle of 8(6){\deg} at the 90% confidence level. This observation was strictly simultaneous with observations performed by NICER, NuSTAR, and Insight-HXMT, which allowed for a precise characterization of its broad-band spectrum from soft to hard X-rays. The source was observed mainly in its soft state, with short periods of flaring. We also observed low-frequency quasi-periodic oscillations. From a spectro-polarimetric analysis, we constrain the polarization of the accretion disk to <3.2% at the 90% confidence level, compatible with expectations for an electron-scattering-dominated optically thick atmosphere at the Sco X-1 inclination of 44{\deg}; for the higher-energy Comptonized component, we obtain a polarization of 1.3(0.4)%, in agreement with expectations for a slab of Thomson optical depth ~7 and electron temperature ~3 keV. A rotation of the polarization with respect to previous observations by OSO-8 and PolarLight, and also with respect to the radio-jet position angle, is observed. This result may indicate a variation of the polarization with the source state that can be related to relativistic precession or to a change in the corona geometry with the accretion flow.
Superorbital periods that are observed in the brightness of Be/X-ray binaries may be driven by a misaligned and precessing Be star disc. We examine how the precessing disc model explains the superorbital variation of (i) the magnitude of the observed X-ray outbursts and (ii) the observed colour. With hydrodynamical simulations we show that the magnitude of the average accretion rate on to the neutron star, and therefore the X-ray outbursts, can vary by over an order of magnitude over the superorbital period for Be star spin-orbit misalignments $\gtrsim 70^\circ$ as a result of weak tidal truncation. Most Be/X-ray binaries are redder at optical maximum when the disc is viewed closest to face-on since the disc adds a large red component to the emission. However, A0538-66 is redder at optical minimum. This opposite behaviour requires an edge-on disc at optical minimum and a radially narrow disc such that it does not add a large red signature when viewed face-on. For A0538-66, the misalignment of the disc to the binary orbit must be about $70-80^\circ$ and the inclination of the binary orbit to the line of sight must be similarly high, although restricted to $<75^\circ$ by the absence of X-ray eclipses.
The double pulsar system, PSR J0737$-$3039A/B, consists of two neutron stars bound together in a highly relativistic orbit that is viewed nearly edge-on from the Earth. This alignment results in brief radio eclipses of the fast-rotating pulsar A when it passes behind the toroidal magnetosphere of the slow-rotating pulsar B. The morphology of these eclipses is strongly dependent on the geometric orientation and rotation phase of pulsar B, and their time-evolution can be used to constrain the geodetic precession rate of the pulsar. We demonstrate a Bayesian inference framework for modelling eclipse light-curves obtained with MeerKAT between 2019 and 2023. Using a hierarchical inference approach, we obtained a precession rate of $\Omega_{\rm SO}^{\rm B} = {5.16^{\circ}}^{+0.32^{\circ}}_{-0.34^{\circ}}$ yr$^{-1}$ for pulsar B, consistent with predictions from General Relativity to a relative uncertainty of 6.5%. This updated measurement provides a 6.1% test of relativistic spin-orbit coupling in the strong-field regime. We show that a simultaneous fit to all of our observed eclipses can in principle return a $\sim$1.5% test of spin-orbit coupling. However, systematic effects introduced by the current geometric orientation of pulsar B, along with inconsistencies between the observed and predicted eclipse light curves, result in difficult-to-quantify uncertainties. Assuming the validity of General Relativity, we definitively show that the spin-axis of pulsar B is misaligned from the total angular momentum vector by $40.6^{\circ} \pm 0.1^{\circ}$ and that the orbit of the system is inclined by approximately $90.5^{\circ}$ from the direction of our line of sight. Our measured geometry for pulsar B suggests the largely empty emission cone contains an elongated, horseshoe-shaped beam centered on the magnetic axis, and that it may not be re-detected as a radio pulsar until early 2035.
Particle-in-cell simulations have unveiled that shock-accelerated electrons do not follow a pure power-law distribution, but have an additional low-energy "thermal" part, which carries a considerable portion of the total energy of the electrons. Investigating the effects of these thermal electrons on gamma-ray burst (GRB) afterglows may provide valuable insights into the particle acceleration mechanisms. We solve the continuity equation of electrons in energy space, from which multi-wavelength afterglows are derived by incorporating processes including synchrotron radiation, synchrotron self-absorption, synchrotron self-Compton scattering, and gamma-gamma annihilation. First, there is an underlying positive correlation between the temporal and spectral indices due to the cooling of electrons. Moreover, thermal electrons would result in simultaneous non-monotonic variation of both the spectral and temporal indices at multiple wavelengths, which could be individually recorded by the 2.5-meter Wide Field Survey Telescope and the Vera Rubin Observatory Legacy Survey of Space and Time (LSST). The thermal electrons could also be diagnosed from afterglow spectra through synergistic observations in the optical (with LSST) and X-ray bands (with the Microchannel X-ray Telescope on board the Space Variable Objects Monitor). Finally, we use Monte Carlo simulations to obtain the distribution of the peak flux ratio ($R_{\rm X}$) between soft and hard X-rays, and of the time delay ($\Delta t$) between the peak times of the soft X-ray and optical light curves. The thermal electrons significantly raise the upper limits of both $R_{\rm X}$ and $\Delta t$. Thus the distribution of GRB afterglows with thermal electrons is more dispersive in the $R_{\rm X} - \Delta t$ plane.
The present study aims to reinforce the evidence for the ~9 s pulsation in the gamma-ray binary LS 5039, derived from a Suzaku observation in 2007 and a NuSTAR observation in 2016 (Yoneda et al. 2020). Through a reanalysis of the NuSTAR data incorporating the orbital Doppler correction, the 9.0538 s pulsation was confirmed successfully even in the 3--10 keV range, where it was previously undetectable. This was attained by identifying an energy-dependent drift in the pulse phase below 10 keV and correcting the pulse timing of individual photons for this effect. Similarly, an archival 0.7--12 keV data set of LS 5039, taken with the ASCA GIS in October 1999, was analyzed. The data showed possible periodicity at about 8.882 s, but again the energy-dependent phase drift was noticed below 10 keV. By correcting for this effect, and for the orbital Doppler delays in the LS 5039 system, the 2.8--12 keV periodicity became statistically significant at $8.891 \pm 0.001$ s. The periods measured with ASCA, Suzaku, and NuSTAR approximately follow an average period derivative of $\dot{P} = 3.0\times10^{-10}$ s s$^{-1}$. These results provide further evidence for the pulsation in this object and strengthen the scenario of Yoneda et al. (2020) that the compact object in LS 5039 is a strongly magnetized neutron star.
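A quick consistency check of the quoted period derivative from the ASCA (1999) and NuSTAR (2016) periods (ours, not from the paper):

```latex
\dot{P} \approx \frac{9.0538\,\mathrm{s} - 8.891\,\mathrm{s}}
                     {(2016 - 1999)\times 3.156\times10^{7}\,\mathrm{s}}
        \approx 3.0\times10^{-10}\ \mathrm{s\,s^{-1}},
```

in agreement with the average value quoted in the abstract.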
Super Soft X-ray Sources (SSS) are white dwarf (WD) binaries that radiate almost entirely below $\sim$1~keV. Their X-ray spectra are often complex when viewed with X-ray grating spectrometers, where numerous emission and absorption features are intermingled and hard to separate. The absorption features arise mostly from the WD atmosphere, for which radiative transfer models have been constructed. The emission features arise from the corona surrounding the WD atmosphere, in which incident emission from the WD surface is reprocessed. Modeling the corona requires different solvers and assumptions for the radiative transfer, which had yet to be achieved. We chose CAL87, an SSS in the Large Magellanic Cloud, which exhibits emission-dominated spectra from the corona, as the WD atmosphere emission is assumed to be completely blocked by the accretion disk. We constructed a radiative transfer model for the corona using two radiative transfer codes: xstar for a one-dimensional two-stream solver and MONACO for a three-dimensional Monte Carlo solver. We identified their differences and limitations in comparison to the spectra taken with the Reflection Grating Spectrometer onboard the XMM-Newton satellite. We finally obtained a sufficiently good spectral model of CAL87 based on the radiative transfer of the corona plus an additional collisionally ionized plasma. In the coming X-ray microcalorimeter era, spectral interpretation based on radiative transfer will be required for a wider range of sources than presented here.
There have been significant developments in period estimation tools and methods for analysing high-energy pulsars in the past few decades. However, these tools lack well-standardised methods for calculating uncertainties in the period estimate and other recovered parameters for Poisson-dominated data. Error estimation is important for assigning confidence intervals to the models we study, but due to their high computational cost, errors in pulsar periods were largely ignored in the past. Furthermore, existing literature has often employed semi-analytical techniques that lack rigorous mathematical foundations or focus predominantly on the analysis of white noise and time-series data. We present results from our numerical and analytical study of the error distribution of the recovered parameters of high-energy pulsar data using the $Z_n^2$ method. We comprehensively formalise the measure of error for the generic pulsar period with much higher reliability than some common methods. Our error estimation method becomes more reliable and robust when observing pulsars for a few kiloseconds, especially for typical pulsars with periods ranging from a few milliseconds to a few seconds. We have verified our results with observations of the \emph{Crab} pulsar, as well as a large set of simulated pulsars. Our codes are publicly available for use.
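For reference, the $Z_n^2$ statistic underlying the analysis is the sum of Rayleigh powers over the first $n$ harmonics of the trial-folded photon phases, $Z_n^2 = \frac{2}{N}\sum_{k=1}^{n}\left[\left(\sum_i \cos 2\pi k\phi_i\right)^2 + \left(\sum_i \sin 2\pi k\phi_i\right)^2\right]$; a minimal sketch (illustrative, not the authors' released code):

```python
import numpy as np

def z_n_squared(times, freq, n=2):
    """Z_n^2 statistic: sum of Rayleigh powers over the first n harmonics
    of the pulse phases implied by a trial frequency (photon arrival times)."""
    phases = (np.asarray(times) * freq) % 1.0
    k = np.arange(1, n + 1)[:, None]          # harmonic numbers
    arg = 2.0 * np.pi * k * phases[None, :]
    return 2.0 / len(phases) * float(np.sum(np.cos(arg).sum(axis=1) ** 2
                                            + np.sin(arg).sum(axis=1) ** 2))
```

Scanning `freq` over a grid and locating the peak of $Z_n^2$ is the usual period search; the paper's contribution concerns the error distribution of the parameters recovered this way.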
The lateral density data obtained for different secondaries of an extensive air shower (EAS) from an array of detectors are usually described by suitable lateral density functions (LDFs). Analyzing non-vertical simulated EASs generated with the CORSIKA code, we find that the lateral and polar density distributions of electrons and muons are asymmetric in the ground plane. This means that typical expressions for symmetric lateral density functions (SLDFs) (\emph{e.g.} the Nishimura-Kamata-Greisen function) are inadequate to accurately reconstruct the lateral and polar dependencies of such asymmetric electron or muon densities. In order to provide a more consistent LDF for non-vertical shower reconstruction in the ground plane, this paper considers the modification of the SLDF analytically. The asymmetry arising from additional attenuation, together with the correction of the positional coordinates (radial and polar) of cascade particles, introduces a gap length between the center of the concentric equidensity ellipses and the EAS core. A toy function is introduced as a basic LDF to describe the asymmetric lateral and polar density distributions of electrons or muons of EASs, thereby predicting the gap-length parameter. From this, the desired LDF describing the asymmetric density distributions of electrons and muons of EASs is obtained. We compare results from detailed simulations with the predictions of the analytical parametrization. The LDF derived in this work is found to be well suited to reconstructing EASs in the ground plane directly.
Increased eccentricity of a black-hole binary leads to a reduced merger time. Using $N$-body simulations and analytic approximations that include the effects of general relativity (GR), we show that even a low-mass companion orbiting a black-hole binary can cause significant eccentricity oscillations of the binary through the Kozai-Lidov mechanism. A companion with a mass as low as about 1% of the binary mass can drive the binary eccentricity up to $\gtrsim 0.8$, while a mass of a few percent can drive eccentricities greater than 0.98. For low-mass companions, this mechanism requires the companion's orbit to be closer to retrograde than to prograde with respect to the binary orbit, which may occur through capture of the third body. The effects of GR limit the radial range of the companion for which this mechanism works for the closest binaries. The merger timescale may be reduced by several orders of magnitude for a captured companion with a mass of only a few percent of the binary mass.
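The link between eccentricity and merger time quoted above follows the classic Peters (1964) gravitational-wave scaling: at fixed semi-major axis, the merger time shrinks roughly as $(1-e^2)^{7/2}$. A minimal sketch of that factor (illustrative only, not the paper's $N$-body machinery):

```python
def merger_time_suppression(e):
    """Approximate Peters (1964) factor by which the GW merger time of
    a binary at fixed semi-major axis is reduced relative to a circular
    orbit; a rough guide, asymptotically exact as e -> 1."""
    return (1.0 - e * e) ** 3.5

# An eccentricity of 0.8 shortens the merger time by a factor of ~30;
# e = 0.98 shortens it by roughly five orders of magnitude, matching
# the "several orders of magnitude" reduction quoted in the abstract.
```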
We present a start-to-end simulation aimed at studying the long-term fate of high-mass X-ray binaries and whether a Thorne-\.Zytkow object (T\.ZO) might ultimately be produced. We analyze results from a 3D hydrodynamical simulation that models the eventual fate of LMC X-4, a compact high-mass X-ray binary system, after the primary fills its Roche lobe and engulfs the neutron star companion. We discuss the outcome of this engulfment within the standard paradigm of T\.ZO formation. The post-merger angular momentum content of the stellar core is a key ingredient, as even a small amount of rotation can break spherical symmetry and produce a centrifugally supported accretion disk. Our findings suggest that the inspiraling neutron star, upon merging with the core, can accrete efficiently via a disk at high rates ($\approx 10^{-2}M_\odot/{\rm s}$), subsequently collapsing into a black hole and triggering a bright transient with a luminosity and duration typical of an ultra-long gamma-ray burst. We propose that the canonical framework for T\.ZO formation via common envelope needs to be revised, as the significant post-merger accretion feedback will unavoidably unbind the vast majority of the surrounding envelope.
Unveiling the nature of progenitors is crucial for understanding the origin and mechanism of core-collapse and thermonuclear supernovae (SNe). While several methods have been developed to derive stellar properties, many questions remain poorly understood. In this paper we demonstrate an observational approach to constraining the progenitors of supernova remnants (SNRs) using the abundances of carbon (C), nitrogen (N), and oxygen (O) in shock-heated circumstellar material (CSM). Our calculations with stellar evolution codes indicate that the total amount of these CNO elements provides a more sensitive determination of progenitor masses than the conventional method based on ejecta abundances. If the CNO lines (particularly those of C and N) are detected and their abundance ratios measured accurately, they can provide relatively robust constraints on the progenitor mass (and in some cases the rotation velocity) of SNRs. Since our method requires better energy resolution and a larger effective area in the soft X-ray band ($<1$~keV), XRISM, launched on September 7, 2023, and next-generation microcalorimeter missions such as Athena, Lynx, LEM, and HUBS will bring new insight into the link between progenitors and their remnants.
In this paper, we investigate the astrophysical processes of stellar-mass black holes (sMBHs) embedded in advection-dominated accretion flows (ADAFs) of supermassive black holes (SMBHs) in low-luminosity active galactic nuclei (AGNs). The sMBH undergoes Bondi accretion at a rate lower than that of the SMBH. Outflows from the sMBH-ADAF interact dynamically with their surroundings and form a cavity inside the SMBH-ADAF, thereby quenching accretion onto the SMBH. The Bondi accretion is then rapidly rejuvenated by turbulence. These processes give rise to quasi-periodic episodes of sMBH activity and create flickerings from relativistic jets driven by the Blandford-Znajek mechanism if the sMBH is maximally rotating. Accumulated successive sMBH outflows trigger viscous instability of the SMBH-ADAF, leading to a flare that follows a series of flickerings. Recently, a similarity among the orbits of near-infrared flares has been found by GRAVITY/VLTI astrometric observations of Sgr A$\!^{*}$: their loci over the last four years form a ring consistent with the well-determined SMBH mass. We apply the present model to Sgr A$\!^{*}$, which shows quasi-periodic flickerings. From fitting the radio to X-ray continuum, an sMBH of $\sim 40 M_{\odot}$ orbiting the central SMBH of Sgr A$\!^{*}$ is preferred. Such an extreme-mass-ratio inspiral (EMRI) provides an excellent laboratory for LISA, Taiji, and TianQin detection of mHz gravitational waves with strains of $\sim 10^{-17}$, as well as their polarization.
Pulsar wind nebulae are a possible final stage of the circumstellar evolution of massive stars, in which a fast-rotating, magnetised neutron star produces a powerful wind that interacts with the supernova ejecta. The shape of these so-called plerionic supernova remnants is influenced by the distribution of circumstellar matter at the time of the explosion, itself shaped by the magnetic field of the ambient medium into which the circumstellar bubble of the progenitor star expands. To understand the effects of magnetisation on the circumstellar medium and the resulting pulsar wind nebulae, we conduct 2D magnetohydrodynamical simulations. Our models explore the impact of the interstellar magnetic field on the morphology of a supernova remnant and pulsar wind nebula that develop in the circumstellar medium of a massive-star progenitor in the warm phase of the Milky Way's interstellar medium. Our simulations reveal jet-like structures that form on both sides, perpendicular to the equatorial plane of the pulsar, creating complex synthetic radio synchrotron emission. This morphology is characterized by a rectangular-like remnant, typical of the circumstellar medium of massive stars in a magnetized medium, along with a spinning-top structure within the projected rectangle. We suggest that this mechanism may be partially responsible for the complex morphologies observed in pulsar wind nebulae that do not conform to the typical torus-and-jet or bow-shock-and-tail shapes observed in most cases.
The radiation mechanism of fast radio bursts (FRBs) has been extensively studied but remains elusive. In the search for dark matter candidates, the QCD axion and axionlike particles (ALPs) have emerged as prominent possibilities. These elusive particles can aggregate into dense structures called axion stars through Bose-Einstein condensation (BEC). Such axion stars could constitute a significant portion of the mysterious dark matter in the universe. When these axion stars grow beyond a critical mass, usually through processes like accretion or merging, they undergo a self-driven collapse. Traditionally, for spherically symmetric axion clumps, the interaction between axions and photons does not lead to parametric resonance, especially when the QCD axion-photon coupling is at standard levels. Nevertheless, our study indicates that even QCD axion stars with typical coupling values can trigger stimulated decay during their collapse, rather than producing relativistic axions through self-interactions. This process results in short radio bursts, with durations of around 0.1 seconds, which could potentially be observed with radio telescopes like FAST or SKA. Furthermore, we find that collapsing axion stars for ALPs with specific parameters may emit radio bursts lasting just milliseconds with a peak luminosity of $1.60\times10^{42}~\mathrm{erg\,s^{-1}}$, matching the characteristics of the observed non-repeating FRBs.
We study the problem of reconstructing the mass composition of high-energy cosmic rays from extensive-air-shower data. We develop several machine-learning methods for reconstructing the energy spectra of individual primary nuclei at energies of 1-100 PeV, using the public data and Monte Carlo simulations of the KASCADE experiment from the KCDC platform. We estimate the uncertainties of our methods, including the unfolding procedure, and show that the overall accuracy exceeds that of the method used in the original studies of the KASCADE experiment.
Planets orbiting young stars are thought to experience atmospheric evaporation as a result of their host stars' high magnetic activity. We study the evaporation history and expected future of the three known transiting exoplanets in the young multiplanet system K2-198. Based on spectroscopic and photometric measurements, we estimate an age of the K-dwarf host star between 200 and 500 Myr, and calculate the high-energy environment of these planets using eROSITA X-ray measurements. We find that the innermost planet, K2-198c, likely lost its primordial envelope within the first few tens of Myr, regardless of the age at which the star drops out of the saturated X-ray regime. For the two outer planets, a range of initial envelope mass fractions is possible, depending on the not-yet-measured planetary masses and the star's spin-down history. Regarding the future of the system, we find that the outermost planet, K2-198b, is stable against photoevaporation for a wide range of planetary masses, while the middle planet, K2-198d, can only retain an atmosphere for masses between ~7 and 18 Earth masses. Lower-mass planets are too susceptible to mass loss, while for higher-mass planets a very thin present-day envelope is easily lost at the estimated mass-loss rates. Our results support the idea that all three planets started out above the radius valley in the (sub-)Neptune regime and were transformed into their current states by atmospheric evaporation, but also stress the importance of measuring planetary masses for (young) multiplanet systems before conducting more detailed photoevaporation simulations.
Tidal disruption events (TDEs) provide a valuable probe of the dynamics of stars in the nuclear environments of galaxies. Recent observations show that TDEs are strongly overrepresented in post-starburst or "green valley" galaxies, although the underlying physical mechanism remains unclear. Considering the possible interaction between stars and the active galactic nucleus (AGN) disk, TDE rates can change greatly compared to those in quiescent galactic nuclei. In this work, we revisit TDE rates by incorporating an evolving AGN disk within the framework of loss-cone theory. We numerically evolve the Fokker-Planck equations, accounting for star-disk interactions, in-situ star formation in the unstable region of the outer AGN disk, and the evolution of the accretion process onto the supermassive black hole (SMBH). We find that TDE rates are enhanced by about two orders of magnitude shortly after the AGN transitions into a non-active stage. During this phase, the accumulated stars are rapidly scattered into the loss cone owing to the disappearance of the inner standard thin disk. Our results provide an explanation for the overrepresentation of TDEs in post-starburst galaxies.
In this contribution, we briefly present the equation-of-state modelling for application to neutron stars and discuss current constraints coming from nuclear physics theory and experiments. To assess the impact of model uncertainties, we employ a nucleonic meta-modelling approach and perform a Bayesian analysis to generate posterior distributions for the equation of state with filters accounting for both our present low-density nuclear physics knowledge and high-density neutron-star physics constraints. The global structure of neutron stars thus predicted is discussed in connection with recent astrophysical observations.
We report on a simultaneous observational campaign with both Swift/XRT and NuSTAR targeting the symbiotic X-ray binary IGR J16194-2810. The main goal of the campaign was to investigate the possible presence of cyclotron scattering absorption features in the broad-band spectrum of the source, and to help advance our understanding of neutron star formation via the accretion-induced collapse of a white dwarf. The 1-30 keV spectrum of the source, as measured during our campaign, did not reveal any statistically significant absorption feature. The spectrum could be well described by a model comprising a hot thermal black-body component, most likely emerging from the surface of the accreting neutron star, and a power law with no measurable cut-off energy (affected by a modest absorption column density). Compared to previous analyses in the literature, we could rule out the presence of a colder thermal component emerging from an accretion disk, consistent with the idea that IGR J16194-2810 is a wind-fed binary (as are most symbiotic X-ray binaries). Our results were strengthened by exploiting archival XRT and INTEGRAL data, extending the validity of the spectral model over the 0.3-40 keV range and demonstrating that IGR J16194-2810 is unlikely to undergo significant spectral variability over time in the X-ray domain.
The Galactic Center Excess (GCE) $\gamma$-ray emission detected with the Large Area Telescope onboard the {\it Fermi Gamma-ray Space Telescope} has been considered a possible sign of dark matter (DM) annihilation, but other possibilities, such as a millisecond pulsar (MSP) origin, have also been suggested. A spectral fitting method constructed from the properties of $\gamma$-ray MSPs has been developed, and we apply it to the GCE emission in order to probe the MSP origin of the GCE. A population of $\sim$1660 MSPs can fit the spectrum of the GCE emission up to $\sim$10\,GeV, but the higher-energy part of the spectrum requires additional emission components. We further carry out a stacking analysis of 30--500\,GeV data for relatively nearby $\gamma$-ray MSPs, and the resulting flux upper limits are still lower than the fluxes of the GCE emission. We consider the single DM annihilation channel $\tau^{+}\tau^{-}$ or $b\bar{b}$, or the combination of the two for comparison, and find that they generally provide better fits than MSPs. Combinations of MSPs plus a DM channel are also tested, and MSPs plus the DM channel $b\bar{b}$ always provide better fits. Comparing this combination to the pure DM channel $b\bar{b}$, the MSP contribution is found to be only marginally needed.
Models of neutron stars are considered in the case of a uniform density distribution. An algebraic equation, valid for any equation of state, is obtained. This equation allows one to find the approximate mass of a star of a given density without resorting to the integration of differential equations. The solutions presented in the paper for various equations of state, including more realistic ones, differ from the exact solutions obtained by numerical integration of the differential equations by at most ~20%.
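The flavor of shortcut described here can be illustrated with a toy version: for a uniform-density sphere, fixing the compactness $\beta = GM/(Rc^2)$ and eliminating $R$ from $M = \tfrac{4}{3}\pi R^3 \rho$ gives an algebraic mass-density relation. This is our own illustrative construction, not the paper's specific equation:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]

def uniform_star_mass(rho, compactness=0.2):
    """Mass of a uniform-density sphere at fixed compactness
    beta = G M / (R c^2).  Eliminating R from M = (4/3) pi R^3 rho
    yields M = (beta c^2 / G)^{3/2} / sqrt((4/3) pi rho).
    A toy relation for illustration, not the paper's equation."""
    beta = compactness
    return (beta * C**2 / G) ** 1.5 / math.sqrt(4.0 * math.pi * rho / 3.0)
```

For a typical neutron-star density of $\sim 5\times 10^{17}\,\mathrm{kg\,m^{-3}}$ and a compactness of 0.2, this toy relation gives a mass of order $1.5\,M_\odot$ without any integration, illustrating why such algebraic estimates can land within tens of percent of full numerical solutions.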
Mergers of neutron stars (NSs) and black holes (BHs) are nowadays observed routinely thanks to gravitational-wave (GW) astronomy. In the isolated binary-evolution channel, a common-envelope (CE) phase of a red supergiant (RSG) and a compact object is crucial to sufficiently shrink the orbit and thereby enable a merger via GW emission. Here, we use the outcomes of two three-dimensional (3D) magnetohydrodynamic CE simulations of an initially 10.0 solar-mass RSG with a 5.0 solar-mass BH and a 1.4 solar-mass NS, respectively, to explore the further evolution and final fate of the post-CE binaries. Notably, the 3D simulations reveal that the post-CE binaries are likely surrounded by circumbinary disks (CBDs), which contain enough mass and angular momentum to influence the subsequent evolution. In our MESA modelling, the binary systems undergo another phase of mass transfer (MT), and we find that most donor stars do not explode in ultra-stripped supernovae (SNe), but rather in Type Ib/c SNe. The final orbits of our models with the BH companion are too wide, and NS kicks are required to sufficiently perturb the orbit and thus facilitate a merger via GW emission. Moreover, by exploring the influence of CBDs, we find that mass accretion from the disk widens the binary orbit, while CBD-binary resonant interactions can shrink the separation and increase the eccentricity, depending on the disk mass and lifetime. Efficient resonant contraction may even enable the NS/BH to merge with the remnant He star before a second SN explosion, which may be observed as a gamma-ray-burst-like transient, a luminous fast blue optical transient, or a Thorne-\.Zytkow object. For the surviving post-CE binaries, CBD-binary interactions may significantly increase the fraction of GW-induced double compact mergers. We conclude that accounting for CBDs may be crucial to better understand observed GW mergers.
PKS 1830$-$211 is a $\gamma$-ray emitting, high-redshift ($z = 2.507 \pm 0.002$), lensed flat-spectrum radio quasar. From mid-February to mid-April 2019, this source underwent a series of strong $\gamma$-ray flares that were detected by both AGILE-GRID and Fermi-LAT, reaching a maximum $\gamma$-ray flux of $F_{\rm E>100 MeV}\approx 2.3\times10^{-5}$ ph cm$^{-2}$ s$^{-1}$. Here we report on a coordinated campaign from both ground-based (Medicina, OVRO, REM, SRT) and orbiting facilities (AGILE, Fermi, INTEGRAL, NuSTAR, Swift, Chandra), with the aim of investigating the multi-wavelength properties of PKS 1830$-$211 through nearly simultaneous observations presented here for the first time. We find a possible break in the radio spectra in different epochs above 15 GHz, and a clear maximum of the 15 GHz data approximately 110 days after the main $\gamma$-ray activity periods. The spectral energy distribution shows a very pronounced Compton dominance ($> 200$) which challenges the canonical one-component emission model. We therefore propose that the cooled electrons of the first component are re-accelerated into a second component by, e.g., kink or tearing instabilities during the $\gamma$-ray flaring periods. We also note that PKS 1830$-$211 is a promising candidate for future observations with both Compton satellites (e.g., e-ASTROGAM) and Cherenkov arrays (CTAO), which, thanks to their improved sensitivity, will help extend data availability into currently uncovered energy bands.
We present a search for transient radio sources on time-scales of seconds to hours at 144 MHz using the LOFAR Two-metre Sky Survey (LoTSS). This search is conducted by examining short time-scale images derived from the LoTSS data. To allow imaging of LoTSS on short time-scales, a novel imaging and filtering strategy is introduced. This includes sky-model source subtraction, no cleaning or primary-beam correction, a simple source finder, fast filtering schemes, and source catalogue matching. The new strategy is first tested by injecting simulated transients, with a range of flux densities and durations, into the data. We find the limiting sensitivity to be 113 and 6 mJy for 8-second and 1-hour transients, respectively. The new imaging and filtering strategies are applied to 58 fields of the LoTSS survey, corresponding to LoTSS-DR1 (2% of the survey). One transient source is identified in the 8-second and 2-minute snapshot images. The source shows a one-minute flare in the 8-hour observation. Our method places the most sensitive constraints on, and estimates of, the transient surface density at low frequencies on time-scales of seconds to hours: $<4.0\cdot 10^{-4} \; \text{deg}^{-2}$ at 1 hour at a sensitivity of 6.3 mJy; $5.7\cdot 10^{-7} \; \text{deg}^{-2}$ at 2 minutes at a sensitivity of 30 mJy; and $3.6\cdot 10^{-8} \; \text{deg}^{-2}$ at 8 seconds at a sensitivity of 113 mJy. In the future, we plan to apply the strategies presented in this paper to all LoTSS data.
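Upper limits on the transient surface density of the kind quoted above follow from simple Poisson statistics: with zero detections in $N$ independent snapshots of solid angle $\Omega$ each, the rate upper limit at confidence $c$ is $-\ln(1-c)/(N\Omega)$. A minimal sketch (the function name and example numbers are ours, not the paper's pipeline):

```python
import math

def surface_density_upper_limit(n_snapshots, fov_deg2, confidence=0.95):
    """Poisson upper limit on the transient surface density [deg^-2]
    when zero transients are detected in n_snapshots independent images,
    each covering fov_deg2 square degrees."""
    return -math.log(1.0 - confidence) / (n_snapshots * fov_deg2)

# e.g. a hypothetical 100 snapshots of 10 deg^2 each with no detections
# yields a 95% upper limit of about 3e-3 transients per deg^2.
```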
For the analysis of gravitational-wave signals, fast and accurate waveform models are required; these enable us to infer the system properties of compact binary mergers. In this article, we introduce the NRTidalv3 model, which contains a closed-form expression describing tidal effects, focusing on binary neutron star systems. The model improves upon previous versions by employing a larger set of numerical-relativity data for its calibration, including high-mass-ratio systems and covering a wider range of equations of state. It also takes into account dynamical tidal effects and the known post-Newtonian mass-ratio dependence of individual calibration parameters. We implemented the model in the publicly available LALSuite software library by augmenting different binary black hole waveform models (IMRPhenomD, IMRPhenomX, and SEOBNRv5_ROM). We test the validity of NRTidalv3 by comparing it with numerical-relativity waveforms, as well as other tidal models. Finally, we perform parameter estimation for GW170817 and GW190425 with the new tidal approximant and find results overall consistent with previous studies.
The very-high-energy gamma-ray source HESS J1809-193 has been detected by the LHAASO and HAWC observatories beyond 100 TeV. It is an interesting candidate for exploring the underlying mechanisms of gamma-ray production due to the presence of supernova remnants, a pulsar, and molecular clouds close to it. We consider the injection of energetic cosmic rays from a past explosion, whose remnant may be SNR G011.0-00.0, located within the extended gamma-ray source HESS J1809-193. We explain the multi-wavelength data from the region of HESS J1809-193 with synchrotron, inverse Compton, and bremsstrahlung emission of cosmic-ray electrons, together with secondary gamma-ray production in interactions of cosmic-ray protons with the cold protons in the local molecular clouds, within a time-dependent framework including the diffusion loss of cosmic rays. The observational data have been modelled with the secondary photons produced by the time-evolved cosmic-ray spectrum, assuming the age of the explosion is 4500 years.
The eccentricity of binary black-hole mergers is predicted to be an indicator of the history of their formation. In particular, eccentricity is a strong signature of dynamical formation rather than formation by stellar evolution in isolated stellar systems. We investigate the efficacy of existing quasicircular parameter estimation pipelines in determining the source parameters of such eccentric systems. We create a set of simulated signals with eccentricity up to 0.3 and find that as the eccentricity increases, the recovered mass parameters are consistent with those of a binary with up to a $\simeq 10\%$ higher chirp mass and a mass ratio closer to unity. We also employ a full inspiral-merger-ringdown waveform model to perform parameter estimation on two gravitational-wave events, GW151226 and GW170608, to investigate this bias on real data. We find that the correlation between the masses and eccentricity persists in real data, but that there is also a correlation between the measured eccentricity and effective spin. In particular, using a nonspinning prior results in a spurious eccentricity measurement for GW151226, as it exhibits signs of nonzero black-hole spin. Performing parameter estimation with an aligned-spin, eccentric model, we constrain the eccentricities of GW151226 and GW170608 to be $<0.15$ and $<0.12$, respectively.
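The chirp mass referred to above is the standard mass combination $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$, the best-measured quantity for an inspiral. A quick sketch of the reported $\sim 10\%$ bias (the component masses below are hypothetical, chosen only for illustration):

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)^(3/5) / (m1+m2)^(1/5), the mass combination
    best measured from a compact-binary inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Hypothetical eccentric binary with component masses 12 and 7 Msun:
mc_true = chirp_mass(12.0, 7.0)
# A quasicircular recovery biased high by ~10%, as found for the
# injections in this abstract, would report roughly:
mc_biased = 1.10 * mc_true
```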
Recent gravitational-wave transient catalogs have used \pastro{}, the probability that a gravitational-wave candidate is astrophysical, to select interesting candidates for further analysis. Unlike false alarm rates, which exclusively capture the statistics of instrumental noise triggers, \pastro{} incorporates the rates at which triggers are generated by both astrophysical signals and instrumental noise when estimating the probability that a candidate is astrophysical. Multiple search pipelines can independently calculate \pastro{}, each employing a specific data reduction. While the range of \pastro{} results can indicate the range of uncertainties in its calculation, it complicates interpretation and subsequent analyses. We develop a statistical formalism to calculate a \emph{unified} \pastro{} for gravitational-wave candidates, consistently accounting for triggers from all pipelines and thereby incorporating extra information about a signal that is not available from any single pipeline. We demonstrate the properties of this method using a toy model and by application to the publicly available list of gravitational-wave candidates from the first half of the third LIGO-Virgo-KAGRA observing run. Adopting a unified \pastro{} for future catalogs would provide a simple and easy-to-interpret selection criterion that incorporates a more complete understanding of the strengths of the different search pipelines.
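At its core, a single-pipeline p_astro is a foreground/background odds ratio: the Poisson rate of astrophysical triggers times their density at the candidate's ranking statistic, divided by the total. A toy sketch of that single-pipeline ingredient (not the paper's unified multi-pipeline formalism; names and numbers are ours):

```python
def p_astro(signal_rate, noise_rate, signal_density, noise_density):
    """Toy single-pipeline p_astro: probability that a trigger is
    astrophysical, given Poisson rates of signal and noise triggers
    and the densities f and b of the ranking statistic under the
    signal and noise hypotheses."""
    foreground = signal_rate * signal_density
    background = noise_rate * noise_density
    return foreground / (foreground + background)

# A loud trigger where the signal density dominates gets p_astro near 1
# even though noise triggers are far more numerous overall.
```

The paper's contribution is to combine such ingredients consistently across pipelines, rather than reporting one p_astro per pipeline.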
If further confirmed in future analyses, the radius and mass measurement of HESS J1731-347, with $M=0.77^{+0.20}_{-0.17}~M_{\odot}$ and $R=10.4^{+0.86}_{-0.78}~\rm km$, would make it one of the lightest and smallest compact objects ever detected. This raises many questions about its nature and opens a window for different theories to explain such a measurement. In this article, we use the information from Doroshenko et al. (2022) on the mass, radius, and surface temperature, together with multimessenger observations of neutron stars, to investigate the possibility that HESS J1731-347 is one of the lightest observed neutron stars, a strange quark star, a hybrid star with an early deconfinement phase transition, or a dark-matter-admixed neutron star. The nucleonic and quark matter are modeled with realistic equations of state (EOSs), with a self-consistent calculation of the pairing gaps in quark matter. By performing a joint analysis of the thermal evolution and the mass-radius constraint, we find that, within a 1$\sigma$ confidence level, HESS J1731-347 is consistent with being a neutron star with a soft EOS, a strange or hybrid star with an early deconfinement phase transition and strong quark pairing, or a neutron star admixed with dark matter.
We compute the spectrum of linearized gravitational excitations of black holes with substantial angular momentum in the presence of higher-derivative corrections to general relativity. We do so perturbatively to leading order in the higher-derivative couplings and up to order fourteen in the black hole angular momentum. This allows us to accurately predict quasinormal mode frequencies of black holes with spins up to about $70\%$ of the extremal value. For some higher-derivative corrections, we find that sizable rotation enhances the frequency shifts by almost an order of magnitude relative to the static case.
Accurately understanding the equation of state (EOS) of high-density, zero-temperature quark matter plays an essential role in constraining the behavior of dense strongly interacting matter inside the cores of neutron stars. In this Letter, we study the weak-coupling expansion of the EOS of cold quark matter and derive the complete, gauge-invariant contributions from the long-wavelength, dynamically screened gluonic sector at next-to-next-to-next-to-leading order (N3LO) in the strong coupling constant $\alpha_s$. This elevates the EOS result to the $O(\alpha_s^3 \ln \alpha_s)$ level, leaving only one unknown constant from the unscreened sector at N3LO, and places it on par with its high-temperature counterpart from 2003.
We investigate the X-ray variability properties of Seyfert 1 galaxies belonging to the BAT AGN Spectroscopic Survey (BASS). The sample includes 151 unobscured (N$_{\rm H}<10^{22}$ cm$^{-2}$) AGNs observed with XMM-Newton for a total exposure time of ~27 Ms, representing the deepest variability study done so far with high signal-to-noise XMM-Newton observations, almost doubling the number of observations analysed in previous works. We constrain the relation between the normalised excess variance and the 2-10 keV AGN luminosities, black hole masses, and Eddington ratios. We find a highly significant correlation between $\sigma^{2}_{NXS}$ and $M_{\rm BH}$, with a scatter of ~0.85 dex. For sources with high $L_{2-10}$ this correlation has a lower normalisation, confirming that more luminous (higher-mass) AGNs show less variability. We explore the $\sigma^{2}_{NXS}$ vs. $M_{\rm BH}$ relation for the sub-sample of sources with $M_{\rm BH}$ estimated via the reverberation-mapping technique, finding a tighter anti-correlation, with a scatter of ~0.65 dex. We examine how $\sigma^{2}_{NXS}$ changes with energy by studying the relation between the variability in the hard (3-10 keV) and the soft (0.2-1 keV)/medium (1-3 keV) energy bands, finding that the spectral components dominating the hard band are more variable than those dominating the softer bands on timescales shorter than 10 ks.
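The normalised excess variance $\sigma^2_{NXS}$ used throughout this abstract has a standard estimator (e.g., Nandra et al. 1997; Vaughan et al. 2003): the light-curve variance beyond measurement noise, normalised by the squared mean rate. A minimal NumPy sketch:

```python
import numpy as np

def normalized_excess_variance(rates, errors):
    """Normalised excess variance sigma^2_NXS of a light curve:
    (1 / (N mu^2)) * sum[(x_i - mu)^2 - err_i^2],
    i.e. the fractional variance in excess of measurement noise.
    Can be negative when noise dominates the intrinsic variability."""
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    mu = rates.mean()
    return ((rates - mu) ** 2 - errors ** 2).sum() / (len(rates) * mu ** 2)
```

Computing this in separate energy bands (e.g. 0.2-1, 1-3, and 3-10 keV) is how the band-by-band comparison in the abstract is made.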
We present analytical and numerical models of the bright long GRB 210822A at $z=1.736$. The extreme intrinsic optical brightness, very similar to that of other bright GRBs (e.g., GRBs 080319B, 130427A, 160625A, 190114C, and 221009A), makes GRB 210822A an ideal case for studying the evolution of this particular kind of GRB. We use optical data from the RATIR instrument starting at $T+315.9$ s, publicly available optical data from other ground-based observatories, as well as Swift/UVOT, and X-ray data from the Swift/XRT instrument. The temporal profiles and spectral properties during the late stages are consistent with the conventional forward-shock model, complemented by a reverse-shock component that dominates the optical emission during the initial phases ($T<300$ s). Furthermore, we observe a break at $T=80000$ s, which we interpret as evidence of a jet break, constraining the opening angle to $\theta_\mathrm{j}=(3-5)$ degrees. Finally, we apply a machine-learning technique to model the multi-wavelength light curve of GRB 210822A using the AFTERGLOWPY library. We estimate the viewing angle $\theta_{\rm obs}=(6.4 \pm 0.1) \times 10^{-1}$ degrees, the energy $E_0=(7.9 \pm 1.6)\times 10^{53}$ erg, the electron index $p=2.54 \pm 0.10$, the fraction of thermal energy in electrons $\epsilon_\mathrm{e}=(4.63 \pm 0.91) \times 10^{-5}$ and in the magnetic field $\epsilon_\mathrm{B}= (8.66 \pm 1.01) \times 10^{-6}$, the efficiency $\chi = 0.89 \pm 0.01$, and the density of the surrounding medium $n_0 = 0.85 \pm 0.01~\mathrm{cm^{-3}}$.
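The quoted jet-break constraint can be sanity-checked against the standard Sari, Piran & Halpern (1999)-type relation between break time and jet half-opening angle. A hedged sketch (coefficient conventions vary slightly between papers, so treat the prefactor as approximate):

```python
import math

def jet_opening_angle_deg(t_jet_days, z, e_iso_53, n0, eta=0.2):
    """Approximate jet half-opening angle from the afterglow jet-break
    time, in the Sari, Piran & Halpern (1999) style:
    theta_j ~ 0.057 rad * t_j,d^{3/8} * ((1+z)/2)^{-3/8}
              * (E_iso/1e53 erg)^{-1/8} * (eta/0.2)^{1/8} * (n/0.1)^{1/8}.
    The numerical prefactor is convention-dependent."""
    theta_rad = (0.057
                 * t_jet_days ** 0.375
                 * ((1.0 + z) / 2.0) ** -0.375
                 * e_iso_53 ** -0.125
                 * (eta / 0.2) ** 0.125
                 * (n0 / 0.1) ** 0.125)
    return math.degrees(theta_rad)
```

Plugging in the abstract's values ($t_j = 80000$ s $\approx 0.93$ d, $z = 1.736$, $E_0 \approx 7.9\times10^{53}$ erg, $n_0 \approx 0.85$ cm$^{-3}$, $\chi \approx 0.89$) gives $\theta_j \approx 3$ degrees, consistent with the quoted 3-5 degree range.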
Accreting supermassive black holes (SMBHs) located at the center of galaxies are typically surrounded by large quantities of gas and dust. The structure and evolution of this circumnuclear material can be studied at different wavelengths, from the submillimeter to the X-rays. Recent X-ray studies have shown that the covering factor of the obscuring material tends to decrease with increasing Eddington ratio, likely due to radiative feedback on dusty gas. Here we study a sample of 549 nearby (z<0.1) hard X-ray (14-195 keV) selected non-blazar active galactic nuclei (AGN), and use the ratio between the AGN infrared and bolometric luminosity as a proxy of the covering factor. We find that, in agreement with what has been found by X-ray studies of the same sample, the covering factor decreases with increasing Eddington ratio. We also confirm previous findings which showed that obscured AGN typically have larger covering factors than unobscured sources. Finally, we find that the median covering factors of AGN located in different regions of the column density-Eddington ratio diagram are in good agreement with what would be expected from a radiation-regulated growth of SMBHs.
Collisionless shock waves have long been considered amongst the most prolific particle accelerators in the universe. Shocks alter the plasma they propagate through and often exhibit complex evolution across multiple scales. Interplanetary (IP) traveling shocks have been recorded in-situ for over half a century and act as a natural laboratory for experimentally verifying various aspects of large-scale collisionless shocks. A fundamentally interesting problem in both heliophysics and astrophysics is the acceleration of electrons to relativistic energies (more than 300 keV) by traveling shocks. This letter presents the first observations of field-aligned beams of relativistic electrons upstream of an IP shock, made possible by the instrumental capabilities of Solar Orbiter. This study aims to present the characteristics of the electron beams close to the source and contribute towards understanding their acceleration mechanism. On 25 July 2022, Solar Orbiter encountered an IP shock at 0.98 AU. The shock was associated with an energetic storm particle event which also featured upstream field-aligned relativistic electron beams observed 14 minutes prior to the actual shock crossing. The distance of the beam's origin was investigated using a velocity dispersion analysis (VDA). Peak-intensity energy spectra were analyzed and compared with those obtained from a semi-analytical fast-Fermi acceleration model. By leveraging Solar Orbiter's high-time-resolution Energetic Particle Detector (EPD), we have successfully showcased an IP shock's ability to accelerate relativistic electron beams. Our proposed acceleration mechanism offers an explanation for the observed electron beam and its characteristics, while we also explore the potential contributions of more complex mechanisms.
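The idea behind a velocity dispersion analysis is simple: electrons of different energies released together from a common source arrive dispersed in time, so the onset time in each energy channel is linear in the inverse particle speed, and the slope of that line is the path length travelled. A minimal numpy sketch with hypothetical channel energies and an illustrative (made-up) source distance:

```python
import numpy as np

C = 299792.458   # speed of light [km/s]
MEC2 = 511.0     # electron rest energy [keV]

def beta(E_keV):
    """Electron speed in units of c at kinetic energy E (relativistic)."""
    gamma = 1.0 + E_keV / MEC2
    return np.sqrt(1.0 - 1.0 / gamma**2)

# Synthetic onset times per energy channel: t_i = t0 + L / v_i
L_true = 0.05 * 1.496e8    # illustrative path length: 0.05 au in km
t0 = 100.0                 # common release time [s]
E = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # channel energies [keV]
t_onset = t0 + L_true / (beta(E) * C)

# VDA: linear fit of onset time vs inverse speed;
# slope = path length, intercept = release time
slope, intercept = np.polyfit(1.0 / (beta(E) * C), t_onset, 1)
print(f"path length = {slope / 1.496e8:.3f} au, release time = {intercept:.1f} s")
```

Real VDA applications fit noisy onset times per channel, but the geometry of the inference is exactly this linear regression.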
Core-collapse supernovae (CCSNe) offer extremely valuable insights into the dynamics of galaxies. Neutrino time profiles from CCSNe, in particular, could reveal unique details about collapsing stars and particle behavior in dense environments. However, CCSNe in our galaxy and the Large Magellanic Cloud are rare, and only one supernova neutrino observation has been made so far. To maximize the information obtained from the next Galactic CCSN, it is essential to combine analyses from multiple neutrino experiments in real time and transmit any relevant information to electromagnetic facilities within minutes. Locating the CCSN, in particular, is challenging, as it requires disentangling CCSN localization information from observational features associated with the properties of the supernova progenitor and the physics of the neutrinos. Yet, being able to estimate the progenitor distance from the neutrino signal would be of great help for the optimisation of the electromagnetic follow-up campaign that will start soon after the propagation of the neutrino alert. Existing CCSN distance measurement algorithms based on neutrino observations rely on the assumption that neutrino properties can be described by the Standard Model. This paper presents a swift and robust approach to extract CCSN and neutrino physics information, leveraging diverse next-generation neutrino detectors to counteract potential measurement biases from Beyond the Standard Model effects.
The 30 Doradus region in the Large Magellanic Cloud (LMC) is the most energetic star-forming region in the Local Group. It is powered by the feedback from the massive stars in R136, the 1-2 Myr old central massive cluster. 30 Doradus has therefore long been regarded as a laboratory for studying star and star cluster formation under conditions reminiscent of the early Universe. We use JWST NIRCam observations to analyse how star formation proceeds in the region. Using selections based on theoretical isochrones on colour-magnitude diagrams, we identify populations of different ages. We select pre-main-sequence (PMS) stars and young stellar objects that show excess emission from warm dust or emission lines. Studying the spatial distribution of the different populations, we find that the youngest PMS stars with ages < 0.5 Myr are located in an elongated structure that stretches towards the north-east from the central cluster. The same structure is found in the sources that show an infrared excess, appears to overlap with cold molecular gas, and covers previously investigated sites of ongoing star formation. Pre-main-sequence stars with ages between 1 and 4 Myr and upper main-sequence stars are concentrated in the centre of R136, while older stars are more uniformly distributed across the field and likely belong to the LMC field population. Nonetheless, we find stars with excess emission from warm dust or emission lines as far as 100 pc from the centre, indicating extended recent star formation. We interpret the elongated structure formed by the youngest PMS stars as an indication of the still-ongoing hierarchical assembly of the R136 cluster. Additionally, the lower density of old PMS stars with emission due to ongoing accretion in the central region suggests that feedback from the R136 stars is effective in disrupting the disks of PMS stars.
Radio relics are diffuse synchrotron sources in the outskirts of merging galaxy clusters energized by the merger shocks. In this paper, we present an overview of the radio relics in massive cluster mergers identified in the new TNG-Cluster simulation. This is a suite of magnetohydrodynamical cosmological zoom-in simulations of 352 massive galaxy clusters with $M_{\rm 500c}= 10^{14.0-15.3}\rm~M_{\odot}$ sampled from a 1 Gpc-size cosmological box. The simulations are performed using the moving-mesh code AREPO with the galaxy formation model and high numerical resolution consistent with the TNG300 run of the IllustrisTNG series. We post-process the shock properties obtained from the on-the-fly shock finder to estimate the diffuse radio emission generated by cosmological shockwaves for a total of $\sim300$ radio relics at redshift $z=0-1$. TNG-Cluster returns a variety of radio relics with diverse morphologies, encompassing textbook examples of double radio relics, single relics, and ``inverted'' radio relics that are convex to the cluster center. Moreover, the simulated radio relics reproduce both the abundance and statistical relations of observed relics. We find that extremely large radio relics ($>$ 2 Mpc) are predominantly produced in massive cluster mergers with $M_{\rm 500c}\gtrsim8\times10^{14}~\rm~M_{\odot}$. This underscores the significance of simulating massive mergers to study giant radio relics similar to those found in observations. We release a library of radio relics from the TNG-Cluster simulation, which will serve as a crucial reference for upcoming next-generation surveys.
The structure of magnetic fields in galaxies remains poorly constrained, despite the importance of magnetism in the evolution of galaxies. Radio synchrotron and far-infrared (FIR) dust polarization observations are the best methods to measure galactic-scale properties of magnetic fields in galaxies beyond the Milky Way. We use synthetic polarimetric observations of a simulated galaxy to identify and quantify the regions, scales, and interstellar medium (ISM) phases probed at FIR and radio wavelengths. Our studied suite of magnetohydrodynamical cosmological zoom-in simulations features high resolution (10 pc full-cell size) and multiple magnetization models. Our synthetic observations bear a striking resemblance to those of observed galaxies. We find that the total and polarized radio emission extends to approximately double the altitude above the galactic disk (half-intensity disk thickness of $h_\text{I radio} \sim h_\text{PI radio} = 0.23 \pm 0.03$ kpc) relative to the FIR total and polarized emission, which is concentrated in the disk midplane ($h_\text{I FIR} \sim h_\text{PI FIR} = 0.11 \pm 0.01$ kpc). Radio emission traces magnetic fields at scales of $\gtrsim 300$ pc, whereas FIR emission probes magnetic fields at the smallest scales of our simulations ($<300$ pc), comparable to our spatial resolution and well below the spatial resolution of existing FIR polarimetric measurements. Finally, we confirm that synchrotron emission traces a combination of the warm neutral and cold neutral gas phases, whereas FIR emission follows the densest gas in the cold neutral phase in the simulation. These results are independent of the ISM magnetic field strength. The complementarity we measure between radio and FIR wavelengths motivates future multiwavelength polarimetric observations to advance our knowledge of extragalactic magnetism.
Giant Low Surface Brightness Galaxies (GLSBGs) are fundamentally distinct from normal low surface brightness galaxies (LSBGs) in star formation and evolution. In this work, we collected 27 local GLSBGs. They have high stellar masses (M*>10^10 Msolar) and low SFRs. With specific SFRs (sSFR < 0.1 Gyr^-1) lower than the characteristic value of local star-forming (SF) galaxies of M*=10^10 Msolar, GLSBGs deviate from the SF main sequence (MS) defined for local SF galaxies by E07 and S16, respectively, at the high-M* regime. They are HI-rich systems with HI gas mass fractions (fHI) higher than those of the S16 MS galaxies, but they have little molecular gas (H2), implying a low efficiency of HI-to-H2 transition due to HI surface densities far below the minimum of 6 - 8 Msolar pc^-2 required for shielding the formed H2 from photodissociation. For GLSBGs, the inner, bulge-dominated part, with lower SFRs and higher M*, is the main force pulling the entire GLSBG off the MS, while the outer, disk-dominated part, with relatively higher SFRs and lower M*, reduces the deviation from the MS. In some cases, the outer, disk-dominated parts even tend to follow the MS. In NUV - r versus g - r colors, the outer, disk-dominated parts are blue and behave similarly to normal star-forming galaxies, while the inner, bulge-dominated parts are statistically red, indicating an inside-out star formation mechanism for the GLSBGs. They show few signs of external interactions in morphology, ruling out a recent major-merger scenario.
We present the results obtained by analysing a new AstroSat/UVIT far-ultraviolet (FUV) image of the collisional ring galaxy Cartwheel. The FUV emission is principally associated with the star-forming outer ring, with no UV detection from the nucleus and inner ring. A few sources are detected in the region between the inner and the outer rings, all of which lie along the spokes. The FUV fluxes from the detected sources are combined with aperture-matched multi-band photometric data from archival images to explore the post-collision star formation history of the Cartwheel. The data were corrected for extinction using Av derived from the Balmer decrement ratios and commonly used extinction curves. We find that the ring regions contain stellar populations with a wide range of ages, with the bulk of the FUV emission coming from non-ionizing stars, formed over the last 20 to 150 Myr, that are ~25 times more massive than the ionizing populations. On the other hand, regions belonging to the spokes have negligible current star formation, with the age of the dominant older population systematically increasing with distance from the outer ring. The presence of populations with a wide range of ages in the ring suggests that the stars formed in the wave in the past were dragged along it to the current position of the ring. We derive an average steady star formation rate, SFR=5 Msun/yr, over the past 150 Myr, with an increase to ~18 Msun/yr in the recent 10 Myr.
In this work, we investigate whether the compact object at the center of the Milky Way is a naked singularity described by the $\textit{q}$-metric spacetime. Our fitting of the astrometric and spectroscopic data for the S2 star implies that, similarly to the Schwarzschild black hole, the $\textit{q}$-metric naked singularity offers a satisfactory fit to the observed measurements. Additionally, it is shown that the shadow produced by the naked singularity is consistent with the shadow observed by the Event Horizon Telescope collaboration for Sgr A*. It is worth mentioning that the spatial distribution of the S-stars favors the notion that the compact object at the center of our Galaxy can be described by an almost static spacetime. Based on these findings, the $\textit{q}$-metric naked singularity emerges as a compelling candidate for further investigation.
We delve into the assembly pathways and environments of compact groups (CGs) of galaxies using mock catalogues generated from semi-analytical models (SAMs) run on the Millennium simulation. We investigate the ability of SAMs to replicate the observed CG environments and whether CGs with different assembly histories tend to inhabit specific cosmic environments. We also analyse whether the environment or the assembly history is more important in tailoring CG properties. We find that about half of the CGs in SAMs are non-embedded systems, 40% inhabit loose groups or nodes of filaments, while the rest are distributed evenly between filaments and voids, in agreement with observations. We observe that early-assembled CGs preferentially inhabit large galaxy systems (~ 60%), while around 30% remain non-embedded. Conversely, lately-formed CGs exhibit the opposite trend. We also find that lately-formed CGs have lower velocity dispersions and smaller crossing times than early-formed CGs, but mainly because they are preferentially non-embedded. Those lately-formed CGs that inhabit large systems do not show the same features. Therefore, the environment plays a strong role in these properties for lately-formed CGs. Early-formed CGs are more evolved, displaying larger velocity dispersions, shorter crossing times, and more dominant first-ranked galaxies, regardless of the environment. Finally, the difference in brightness between the two brightest members of CGs depends only on the assembly history and not on the environment. CGs residing in diverse environments have undergone varied assembly processes, making them suitable for studying their evolution and the interplay of nature and nurture on their traits.
Strong iron lines are a common feature of the optical spectra of active galactic nuclei (AGNs) and quasars from $z\sim 6-7$ to the local Universe, and [Fe/Mg] ratios show no cosmic evolution. During active episodes, accretion disks surrounding supermassive black holes (SMBHs) inevitably form stars in their self-gravitating parts, and these stars accrete at high rates. In this paper, we investigate the population evolution of accretion-modified stars (AMSs) as producers of iron and magnesium in AGNs. The AMSs, as a new type of star, may have any metallicity but suffer no significant mass loss from stellar winds, since the winds are choked by the dense medium of the disks and return to the core stars. Mass functions of the AMS population show a pile-up or cutoff pile-up shape in top-heavy or top-dominant forms if the stellar winds are strong, consistent with the narrow range of supernova (SN) explosions driven by the known pair instability. This provides an efficient way to produce metals. Meanwhile, SN explosions support an inflated disk appearing as a dusty torus. Furthermore, the evolving top-heavy initial mass functions (IMFs) lead to bright infrared luminosity in dusty regions. This contributes a new component in the infrared that is independent of the emission from the central part of the accretion disk, appearing as a long-term trend of the NIR continuum compared to optical variations. Moreover, the model can be further tested through reverberation mapping of emission lines, LIGO/LISA detections of gravitational waves, and signatures in spatially resolved observations from GRAVITY+/VLTI.
Grids of zero-age horizontal branch (ZAHB) models are presented, along with a suitable interpolation code, for -2.5 <= [Fe/H] <= -0.5, in steps of 0.2 dex, assuming Y = 0.25 and 0.29, [O/Fe] = +0.4 and +0.6, and [m/Fe] = 0.4 for all of the other alpha elements. The HB populations of 37 globular clusters (GCs) are fitted to these ZAHBs to derive their apparent distance moduli, (m-M)_V. With few exceptions, the best estimates of their reddenings from dust maps are adopted. The distance moduli are constrained using the prediction that (M_F606W-M_F814W)_0 colours of metal-poor, main-sequence stars at M_F606W >~ 5.0 have very little sensitivity to [Fe/H]. Intrinsic (M_F336W-M_F606W)_0 colours of blue HB stars, which provide valuable connections between GCs with exclusively blue HBs and other clusters of similar metallicity that also have red HB components, limit the uncertainties of relative (m-M)_V values to within +/- 0.03-0.04 mag. The ZAHB-based distances agree quite well with the distances derived by Baumgardt & Vasiliev (2021, MNRAS, 505, 5957). Their implications for GC ages are briefly discussed. Stellar rotation and mass loss appear to be more important than helium abundance variations in explaining the colour-magnitude diagrams of second-parameter GCs (those with anomalously blue HBs for their metallicities).
We report the discovery of three double-peaked Lyman-$\alpha$ emitters (LAEs) exhibiting strong blue peak emission at 2.9 $\lesssim z \lesssim$ 4.8 in VLT/MUSE data obtained as part of the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey. These strong blue peak systems provide a unique window into the scattering of Lyman-$\alpha$ photons by neutral hydrogen (HI), suggesting gas inflows along the line-of-sight and low HI column density. Two of them, at $z=2.9$ and $z=3.6$, are spatially extended halos whose core regions clearly exhibit stronger blue peak emission than red peak emission. However, spatial variations in the peak ratio and peak separation are evident over $25\times 26$ kpc ($z=2.9$) and $19\times28$ kpc ($z=3.6$) regions in these extended halos. Notably, these systems do not fall in the regime of Lyman-$\alpha$ blobs or nebulae. To the best of our knowledge, such a Lyman-$\alpha$ halo with a dominant blue core has not been observed previously. In contrast, the LAE at $z\sim4.8$ is a compact system spanning a $9\times9$ kpc region and stands as the highest-redshift strong blue peak emitter detected to date. The peak separation of the bright cores in these three systems ranges from $\Delta_{\mathrm{peak}}\sim370$ to $660$ km/s. The observed overall trend of decreasing peak separation with increasing radius is likely controlled by the HI column density and gas covering fraction. Based on various estimates, and in contrast to the compact LAE, our halos are found to be good candidates for LyC leakers. These findings shed light on the complex interplay between Lyman-$\alpha$ emission, gas kinematics, and ionising radiation properties, offering valuable insights into the evolution and nature of high-redshift galaxies.
Binary stars are believed to be key to understanding globular cluster evolution. In this paper, we present multi-band photometric analyses of five variables in the nearest Galactic globular cluster, M4, based on observations from CASE and the M4 Core Project with HST for four variables (V48, V49, V51, and V55), and on data collected with the T40 and C18 telescopes of the Wise Observatory for one variable (NV4). The light curves of the five binaries are analyzed using the Wilson-Devinney (WD) method, and their fundamental parameters are derived. A period-variation study was carried out using times of minima obtained from the literature for four binaries, and the nature of the observed variation is discussed. The evolutionary state of the systems is evaluated using the M-R diagram, in comparison with a few well-studied close binaries in other globular clusters. Based on data from the Gaia DR3 database, a three-dimensional vector-point diagram (VPD) was built to evaluate the cluster membership of the variables, and two of them (V49 and NV4) were found not to be cluster members.
Far-infrared (FIR) observations from the \textit{Herschel Space Observatory} are used to estimate the IR properties of UV-selected galaxies. We stack the PACS (100, 160 $\mu \mathrm{m}$) and SPIRE (250, 350 and 500 $\mu \mathrm{m}$) maps of the Chandra Deep Field South (CDFS) on a source list of galaxies selected in the rest-frame ultraviolet (UV) in the redshift range $0.6-1.2$. This source list is created using observations from the XMM-OM telescope survey in the CDFS using the UVW1 (2910 {\AA}) filter. The stacked data are binned according to the UV luminosity function (LF) of these sources, and the average photometry of the UV-selected galaxies is estimated. By fitting modified black bodies and IR model templates to the stacked photometry, average dust temperatures and total IR luminosities are determined. The luminosity-weighted average temperatures do not show significant evolution between the redshift bins centred at 0.7 and 1.0. The infrared excess (IRX) and the unobscured and obscured SFR values are obtained from the UV and IR luminosities. Dust attenuation is constant for UV luminosities above $9\times10^{10}$ $\mathrm{L_\odot}$, but increases as UV luminosity decreases below this threshold. It remains constant as a function of IR luminosity at fixed redshift across the luminosity range of our sources. In comparison to local luminous infrared galaxies (LIRGs) with similar SFRs, the higher-redshift star-forming galaxies in the sample show less dust attenuation. Finally, the inferred dust attenuation is used to correct the unobscured star formation rate density (SFRD) in the redshift range 0.6 to 1.2. The dust-corrected SFRDs are found to be consistent with measurements from IR-selected samples at the same redshifts.
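A modified-blackbody fit of the kind described above can be sketched as a one-parameter temperature scan, because the overall normalisation can be profiled out analytically. This is an illustrative numpy version with synthetic fluxes and an assumed emissivity index $\beta=1.8$, not the paper's actual fitting code:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
K = 1.381e-23   # Boltzmann constant [J/K]
C = 2.998e8     # speed of light [m/s]

def mbb(nu, T, beta=1.8):
    """Modified blackbody (arbitrary normalisation): nu^beta * B_nu(T)."""
    return nu ** beta * nu ** 3 / (np.exp(H * nu / (K * T)) - 1.0)

# Band centres matching the stacked Herschel photometry (100-500 um)
lam_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])
nu = C / (lam_um * 1e-6)

# Synthetic "observed" fluxes for a 30 K source with 5% noise (illustrative)
rng = np.random.default_rng(0)
obs = mbb(nu, 30.0) * (1 + 0.05 * rng.standard_normal(nu.size))

# 1-D temperature scan in log space; the normalisation drops out
# because we subtract the mean log residual at each trial T
Tgrid = np.arange(10.0, 60.0, 0.1)
chi2 = []
for T in Tgrid:
    r = np.log(obs) - np.log(mbb(nu, T))
    chi2.append(np.sum((r - r.mean()) ** 2))
T_best = Tgrid[int(np.argmin(chi2))]
print(f"best-fit dust temperature: {T_best:.1f} K")
```

Fitting in log space weights all bands equally; a real analysis would instead weight by measurement uncertainties and possibly fit $\beta$ jointly with $T$.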
Only recently have complex models that include the global dynamics of dwarf satellite galaxies, dark matter halo structure, gas infall, and the stellar disk in a cosmological context become available to study the dynamics of disk galaxies such as the Milky Way (MW). We use a MW model from a high-resolution hydrodynamical cosmological simulation named GARROTXA to establish the relationship between the vertical disturbances seen in its galactic disk and multiple perturbations from the dark matter halo, satellites, and gas. We calculate the bending modes in the galactic disk over the last 6 Gyr of evolution. To quantify the impact of dark matter and gas, we compute the vertical acceleration exerted by these components on the disk and compare it with the bending behavior using Fourier analysis. We find complex bending patterns at different radii and times, such as an inner retrograde mode with high frequency, as well as an outer, slower retrograde mode excited at different times. The amplitudes of these bending modes are highest during the early stages of the thin-disk formation and reach up to 8.5 km s$^{-1}$ in the late disk evolution. We find that the infall of satellite galaxies leads to a tilt of the disk and produces anisotropic gas accretion with subsequent star formation events and supernovae, creating significant vertical accelerations on the disk plane. The misalignment between the disk and the inner stellar/dark matter triaxial structure, formed during the ancient assembly of the galaxy, creates a strong vertical acceleration on the stars. We conclude that several agents trigger the bending of the stellar disk and its phase spirals in this simulation, including satellite galaxies, dark sub-halos, misaligned gaseous structures, and the inner dark matter profile, which coexist and influence each other, making it challenging to establish direct causality.
The emergence of supermassive black holes (SMBHs) in the early universe remains a topic of profound interest and debate. In this paper, we investigate the formation and growth of the first SMBHs within the framework of Modified Gravity (MOG), where gravity exhibits increased strength. We explore how MOG, as an alternative to the standard model, may offer novel insights into the emergence of SMBHs and potentially reconcile the discrepancies observed in the accretion and growth processes. We examine the dynamics of gas and matter in this modified gravitational framework, shedding light on the unique interplay between gravity and the formation of SMBHs.
The \ion{H}{I}-rich ultra-diffuse galaxies (HUDGs) offer a unique case for studies of star formation laws (SFLs), as they host low star formation efficiency (SFE) and low-metallicity environments where gas is predominantly atomic. We collect a sample of six HUDGs in the field and investigate their location relative to the extended Schmidt law ($\Sigma_{\text{SFR}} \propto \left(\Sigma_{\text{star}}^{0.5} \Sigma_{\text{gas}}\right)^{1.09}$). They are consistent with this relation, deviating by only 1.1 sigma. Furthermore, we find that HUDGs follow the tight correlation between the hydrostatic pressure in the galaxy mid-plane and the quantity on the x-axis of the extended Schmidt law ($\rm log(\Sigma_{star}^{0.5}\Sigma_{gas})$). This result indicates that these HUDGs can be self-regulated systems that reach dynamical and thermal equilibrium. In this framework, stellar gravity compresses the disk vertically and counteracts the gas pressure in the galaxy mid-plane to regulate star formation, as suggested by some theoretical models.
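A deviation from the extended Schmidt law is naturally expressed as a logarithmic offset in dex. The sketch below shows the arithmetic; the zero-point used here is illustrative, not the published calibration, and the assumed units (surface densities in Msun/pc^2, SFR surface density in Msun/yr/kpc^2) are conventional choices rather than values taken from the abstract:

```python
import numpy as np

def extended_schmidt_deviation(sigma_star, sigma_gas, sigma_sfr,
                               slope=1.09, lognorm=-4.76):
    """Offset (dex) from the extended Schmidt law
    log Sigma_SFR = slope * log(Sigma_star^0.5 * Sigma_gas) + lognorm.
    `lognorm` is an illustrative placeholder zero-point."""
    x = np.log10(np.sqrt(sigma_star) * sigma_gas)
    predicted = slope * x + lognorm
    return np.log10(sigma_sfr) - predicted

# Hypothetical galaxy: Sigma_star = 100, Sigma_gas = 10 (Msun/pc^2).
# Construct a Sigma_SFR that sits exactly on the relation, then perturb it.
on_relation = 10 ** (1.09 * np.log10(np.sqrt(100.0) * 10.0) - 4.76)
dev_zero = extended_schmidt_deviation(100.0, 10.0, on_relation)
dev_high = extended_schmidt_deviation(100.0, 10.0, 10.0 * on_relation)
print(f"on-relation offset: {dev_zero:.2f} dex; 10x SFR offset: {dev_high:.2f} dex")
```

Comparing such offsets to the scatter of the relation is what underlies the quoted 1.1 sigma consistency.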
In this work, we study the silicate dust content in the disk of one of the youngest eruptive stars, V900 Mon, at the highest angular resolution, probing down to the inner 10 au of the disk, and study the historical evolution of the system traced in part by a newly discovered emission clump. We performed high-angular-resolution mid-infrared interferometric observations of V900 Mon with MATISSE/VLTI, with a spatial coverage ranging from 38-m to 130-m baselines, and compared them to archival MIDI/VLTI data. We also mined and re-analyzed archival optical and infrared photometry of the star to study its long-term evolution since its eruption in the 1990s. We complemented our findings with integral field spectroscopy data from MUSE/VLT. The MATISSE/VLTI data suggest a radial variation of the silicate feature in the dusty disk, whereby at large spatial scales ($\geq10$ au) the protostellar disk's emission is dominated by large ($\geq1\,\mu m$) silicate grains, while at smaller spatial scales, closer to the star ($\leq5$ au), silicate emission is absent, suggesting self-shielding. We propose that the self-shielding may be the result of small dust grains at the base of the collimated CO outflow previously detected by ALMA. A newly discovered knot in the MUSE/VLT data, located at a projected distance of approximately 27,000 au from the star, is co-aligned with the molecular gas outflow at a P.A. of $250^\circ$ ($\pm5^\circ$), consistent with the position angle and inclination of the disk. The knot is seen in emission in H$\alpha$, [N II], and the [S II] doublet, and its kinematic age is about 5150 years. This ejected material could originate from a previous eruption.
A constant stellar mass-to-light ratio $M_{\star}/L$ has been widely used in studies of galaxy dynamics and strong lensing, which aim at disentangling the mass density distributions of dark matter and baryons. In this work, we take early-type galaxies from the cosmological hydrodynamic IllustrisTNG-100 simulation to investigate possible systematic biases in such inferences due to the constant-$M_{\star}/L$ assumption. To do so, we construct two-component matter density models, in which one component describes the dark matter distribution and the other the stellar mass, which is made to follow the light profile by assuming a constant factor of $M_{\star}/L$. Specifically, we adopt multiple commonly used dark matter models and light distributions. We fit the two-component models directly to the {\it total} matter density distributions of simulated galaxies to eliminate systematics from other modelling procedures. We find that galaxies in general have a more centrally concentrated stellar mass profile than light distribution. This is more significant among more massive galaxies, for which the $M_{\star}/L$ profile rises markedly towards the centre and may often exhibit a dented feature, due to ongoing star formation at about one effective radius, encompassing a quenched bulge region. As a consequence, a constant $M_{\star}/L$ causes a model degeneracy to be artificially broken under specific model assumptions, resulting in strong and model-dependent biases on estimated properties such as the central dark matter fraction and the initial mass function. Either a steeper dark matter profile with an over-predicted density fraction, or an over-predicted stellar mass normalization ($M_{\star}/L$), is often obtained through model fitting. The exact biased behaviour depends on the slope difference between mass and light, as well as on the adopted models for dark matter and light.
The arrival of the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), Euclid-Wide and Roman wide-area sensitive surveys will herald a new era in strong lens science, in which the number of known strong lenses is expected to rise from $\mathcal{O}(10^3)$ to $\mathcal{O}(10^5)$. However, current lens-finding methods still require time-consuming follow-up visual inspection by strong-lens experts to remove false positives, a burden that will only grow with these surveys. In this work we demonstrate a range of methods to produce calibrated probabilities to help determine the veracity of any given lens candidate. To do this we use the classifications from citizen science and multiple neural networks for galaxies selected from the Hyper Suprime-Cam (HSC) survey. Our methodology is not restricted to particular classifier types and could be applied to any strong lens classifier which produces quantitative scores. Using these calibrated probabilities, we generate an ensemble classifier combining citizen science and neural network lens finders. We find that such an ensemble can provide improved classification over the individual classifiers. We find that a false positive rate of $10^{-3}$ can be achieved with a completeness of $46\%$, compared to $34\%$ for the best individual classifier. Given the large number of galaxy-galaxy strong lenses anticipated in LSST, such improvement would still produce significant numbers of false positives, in which case using calibrated probabilities will be essential for the analysis of large lens populations.
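One simple route to calibrated probabilities is histogram binning on a labelled validation set: raw scores are mapped to the empirical lens fraction in each score bin, and the calibrated outputs of several classifiers can then be combined into an ensemble. This is a cruder cousin of the calibration methods the paper demonstrates, sketched here with synthetic scores and labels (all names and numbers below are illustrative):

```python
import numpy as np

def bin_calibrate(scores, labels, n_bins=10):
    """Return a function mapping raw scores in [0, 1] to calibrated
    probabilities, using the empirical positive fraction in equal-width
    score bins (histogram binning; Platt scaling or isotonic regression
    are common alternatives)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    probs = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.5
                      for b in range(n_bins)])
    return lambda s: probs[np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)]

# Illustrative validation set: 10% "lenses", scored by two classifiers
# with different (mis)calibrations
rng = np.random.default_rng(1)
labels = (rng.random(5000) < 0.1).astype(float)
s1 = np.clip(labels * 0.9 + rng.normal(0.05, 0.15, 5000), 0.0, 1.0)
s2 = np.clip(labels * 0.5 + rng.normal(0.25, 0.20, 5000), 0.0, 1.0)

# Calibrate each classifier, then average calibrated probabilities
cal1, cal2 = bin_calibrate(s1, labels), bin_calibrate(s2, labels)
ensemble = 0.5 * (cal1(s1) + cal2(s2))
```

In practice the calibration set must be independent of the scores being calibrated, and the ensemble weights can themselves be tuned; simple averaging is the minimal choice.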
Using deep JWST imaging from JADES, JEMS and SMILES, we characterize optically faint and extremely red galaxies at $z>3$ that were previously missing from galaxy census estimates. The data indicate the existence of abundant dusty and post-starburst-like galaxies down to $10^8$M$_\odot$, below the sensitivity limits of Spitzer and ALMA. Modeling the NIRCam and HST photometry of these red sources can result in extremely high values for both stellar mass and star formation rate (SFR); however, including 7 MIRI filters out to 21$\mu$m decreases the masses (median 0.6 dex for log$_{10}$M$^*$/M$_{\odot}>$10) and SFRs (median 10$\times$ for SFR$>$100 M$_{\odot}$/yr). At $z>6$, our sample includes a high fraction of little red dots (LRDs; NIRCam-selected dust-reddened AGN candidates). We significantly measure older stellar populations in the LRDs out to rest-frame 3$\mu$m (the stellar bump) and rule out a dominant contribution from hot dust emission, a signature of AGN contamination in stellar population measurements. This allows us to measure their contribution to the cosmic census at $z>3$, below the typical detection limits of ALMA ($L_{\rm IR}<10^{12}L_\odot$). We find that these sources, which are overwhelmingly missed by HST and ALMA, could effectively double the obscured fraction of the star formation rate density at $4<z<6$ compared to some estimates, showing that prior to JWST the obscured contribution from fainter sources could be underestimated. Finally, we identify five sources with evidence for Balmer breaks and high stellar masses at $5.5<z<7.7$. While spectroscopy is required to determine their nature, we discuss possible measurement systematics to explore with future data.
We present a novel implementation of active galactic nucleus (AGN) feedback through ultra-fast winds in the code gizmo. Our feedback recipe accounts for the angular dependence of radiative feedback upon black hole spin. We self-consistently evolve in time i) the gas accretion process from resolved scales to an unresolved AGN disc, ii) the evolution of the spin of the massive black hole (MBH), iii) the injection of AGN-driven winds into the resolved scales, and iv) the spin-induced anisotropy of the overall feedback process. We test our implementation by following the propagation of the wind-driven outflow into a homogeneous medium, and we compare the results against simple analytical models. Then, we consider an isolated galaxy setup and study the impact of AGN feedback on the evolution of the MBH and of the host galaxy. We find that: i) AGN feedback limits the gas inflow that powers the MBH, with a consequently weak impact on the host galaxy, characterized by a star formation (SF) suppression of about a factor of two in the nuclear region; ii) the impact of AGN feedback on the host galaxy and on MBH growth is primarily determined by the AGN luminosity, rather than by its angular pattern set by the MBH spin; iii) the imprint of the angular pattern of the AGN radiation emission manifests more clearly at high accretion rates. At such rates, the more isotropic angular patterns associated with higher spin values sweep gas out of the nuclear region more easily, causing slower MBH mass and spin growth and stronger quenching of the SF. We argue that the influence of the spin-dependent anisotropy of AGN feedback on MBH and galaxy evolution is likely to be relevant in scenarios characterized by high and prolonged MBH accretion episodes and by strong AGN wind-galaxy coupling. Such conditions are more frequently met in galaxy mergers and/or high-redshift galaxies.
Context. The presence of the [$\alpha$/Fe]-[Fe/H] bi-modality in the Milky Way disc has animated the Galactic archaeology community for more than two decades. Aims. Our goal is to investigate the chemical, temporal, and kinematical structure of the Galactic discs using abundances, kinematics, and ages derived self-consistently with the new Bayesian framework SAPP. Methods. We employ the public Gaia-ESO spectra, as well as Gaia EDR3 astrometry and photometry. Stellar parameters and chemical abundances are determined for 13 426 stars using NLTE models of synthetic spectra. Ages are derived for a sub-sample of 2 898 stars, including subgiants and main-sequence stars. The sample probes a large range of Galactocentric radii, $\sim$ 3 to 12 kpc, and extends out of the disc plane to $\pm$ 2 kpc. Results. Our new data confirm the known bi-modality in the [Fe/H] - [$\alpha$/Fe] space, which is often viewed as the manifestation of the chemical thin and thick discs. The over-densities significantly overlap in metallicity, age, and kinematics, and none of these is a sufficient criterion for distinguishing between the two disc populations. In contrast to previous studies, we find that the $\alpha$-poor disc population has a very extended [Fe/H] distribution and contains $\sim$ 20$\%$ old stars with ages of up to $\sim$ 11 Gyr. Conclusions. Our results suggest that the Galactic thin disc was in place early, at look-back times corresponding to redshifts z $\sim$ 2 or more. At ages of $\sim$ 9 to 11 Gyr, the two disc structures shared a period of co-evolution. Our data can be understood within the clumpy disc formation scenario, which does not require a pre-existing thick disc to initiate the formation of the thin disc. We anticipate that a similar evolution can be realised in cosmological simulations of galaxy formation.
A substantial number of ultra-high redshift (8 < z < 17) galaxy candidates have been detected with JWST, posing the question: are these observational results surprising in the context of current galaxy formation models? We address this question using the well-established Santa Cruz semi-analytic models, implemented within merger trees from GUREFT, a new suite of cosmological N-body simulations carefully designed for ultra-high redshift studies. Using our fiducial models calibrated at z=0, we present predictions for stellar mass functions, rest-frame UV luminosity functions, and various scaling relations. We find that our (dust-free) models predict galaxy number densities at z~11 (z~13) that are an order of magnitude (a factor of ~30) lower than the observational estimates. We estimate the uncertainty in the observed number densities due to cosmic variance, and find that it leads to a fractional error of ~20-30% at z=11 (~30-80% at z=14) for a 100 sq arcmin field. We explore which processes in our models are most likely to be rate-limiting for the formation of luminous galaxies at these early epochs, considering the halo formation rate, gas cooling, star formation, and stellar feedback, and conclude that the main limiting process is efficient stellar-driven winds. We find that a modest boost of a factor of ~4 to the UV luminosities, which could arise from a top-heavy stellar initial mass function, would bring our current models into agreement with the observations. Adding a stochastic component to the UV luminosity can also reconcile our results with the observations.
We present pure spectroscopic constraints on the UV luminosity functions and cosmic star formation rate (SFR) densities from 25 galaxies at $z_\mathrm{spec}=8.61-13.20$. By reducing the JWST/NIRSpec spectra taken in multiple programs of ERO, ERS, GO, and DDT with our analysis technique, we independently confirm 16 galaxies at $z_\mathrm{spec}=8.61-11.40$, including new redshift determinations, and a bright interloper at $z_\mathrm{spec}=4.91$ that had been claimed as a photometric candidate at z~16. In conjunction with nine galaxies at redshifts up to $z_\mathrm{spec}=13.20$ in the literature, we assemble a total sample of 25 spectroscopically confirmed galaxies and carefully derive the best estimates and lower limits of the UV luminosity functions. These UV luminosity function constraints are consistent with the previous photometric estimates within the uncertainties and indicate mild redshift evolution towards z~12, in tension with some theoretical models predicting rapid evolution. With these spectroscopic constraints, we obtain firm lower limits on the cosmic SFR densities and spectroscopically confirm a high SFR density at z~12, beyond the predictions of constant star-formation efficiency models, which supports earlier claims from photometric studies. While the removal of the bright interloper leaves no spectroscopically confirmed galaxies with stellar masses large enough to violate the $\Lambda$CDM model, we confirm star-forming galaxies at $z_\mathrm{spec}=11-13$ with stellar masses much higher than model predictions. Our results indicate possibilities of high star-formation efficiency (>5%), hidden AGN, a top-heavy initial mass function (possibly with Pop III), and large scatter/variance. Drawing on both the successful and unsuccessful spectroscopy results, we suggest observational strategies for efficiently removing low-redshift interlopers in future JWST programs.
With the launch of JWST and other scheduled missions aimed at probing the distant Universe, we are entering a promising new era for high-$z$ astronomy. One of our main goals is the detection of the first population of stars (Population III or Pop III stars), and models suggest that Pop III star formation continues well into the Epoch of Reionization (EoR), rendering this an attainable achievement. In this paper, we focus on our chances of detecting massive Pop III stars at the moment of their death as Pair-Instability Supernovae (PISNe). We estimate the probability of discovering PISNe during the EoR in galaxies with different stellar masses ($7.5 \leq \mathrm{Log}(M_\star/\mathrm{M_\odot}) \leq 10.5$) from six dustyGadget simulations of $50h^{-1}$ cMpc per side. We further assess the expected number of PISNe in surveys with JWST/NIRCam and Roman/WFI. On average, less than one PISN is expected in all examined JWST fields at $z \simeq 8$ with $\Delta z = 1$, and O(1) PISN may be found in a $\sim 1$ deg$^2$ Roman field in the best-case scenario, although different assumptions on the Pop III IMF and/or Pop III star-formation efficiency can decrease this number substantially. Including the contribution from unresolved low-mass halos holds the potential for increased discoveries. JWST/NIRCam and Roman/WFI allow the detection of massive-progenitor ($\sim 250 ~ \mathrm{M_\odot}$) PISNe through the optimal F200W-F356W, F277W-F444W, and F158-F213 colors. PISNe are also predominantly located at the outskirts of their host haloes, facilitating their separation from the underlying stellar emission thanks to the spatial-resolution capabilities of the instruments.
The formation mechanism of massive stars remains one of the main open problems in astrophysics, in particular the relationship between the mass of the most massive stars and that of the cores in which they form. Numerical simulations of the formation and evolution of large molecular clouds, within which dense cores and stars form self-consistently, show in general that the cores' masses increase in time, and that the most massive stars tend to appear later (by a few to several Myr) than lower-mass stars. Here we present an idealized model that incorporates accretion onto the cores as well as onto the stars, in which the core's mass growth is regulated by a ``gravitational choking'' mechanism that does not involve any form of support. This process is of purely gravitational origin and causes some of the mass accreted onto the core to stagnate there, rather than being transferred to the central stars. Thus, the simultaneous mass growth of the core and of the stellar mass can be computed. In addition, we estimate the maximum stellar mass allowed before the star's photoionizing radiation is capable of overcoming the accretion flow onto the core. This model constitutes a proof of concept for the simultaneous growth of the gas reservoir and the stellar mass, the delay in the formation of massive stars observed in cloud-scale numerical simulations, the need for massive, dense cores in order to form massive stars, and the observed correlation between the mass of the most massive star and the mass of the cluster it resides in. Our model also implies that, by the time massive stars begin to form in a core, a number of low-mass stars are expected to have already formed.
We present the OGLE collection of delta Scuti stars in the Large Magellanic Cloud and in its foreground. Our dataset encompasses a total of 15 256 objects, constituting the largest sample of extragalactic delta Sct stars published so far. In the case of 12 delta Sct pulsators, we detected additional eclipsing or ellipsoidal variations in their light curves. These are the first known candidates for binary systems containing delta Sct components beyond the Milky Way. We provide observational parameters for all variables, including pulsation periods, mean magnitudes, amplitudes, and Fourier coefficients, as well as long-term light curves in the I- and V-bands collected during the fourth phase of the OGLE project. We construct the period-luminosity (PL) diagram, in which fundamental-mode and first-overtone delta Sct stars form two nearly parallel ridges. The latter ridge is an extension of the PL relation obeyed by first-overtone classical Cepheids. The slopes of the PL relations for delta Sct variables are steeper than those for classical Cepheids, indicating that the continuous PL relation for first-overtone delta Sct variables and Cepheids is non-linear, exhibiting a break at a period of approximately 0.5 d. We also report the enhancement of the OGLE collection of Cepheids and RR Lyr stars with newly identified and reclassified objects, including pulsators contained in the recently published Gaia DR3 catalog of variable stars. As a by-product, we estimate the contamination rate in the Gaia DR3 catalogs of Cepheids and RR Lyr variables.
Observations of clusters suffer from issues such as completeness, projection effects, resolving individual stars, and extinction. How accurate, then, are measurements and conclusions likely to be? Here, we take cluster simulations (Westerlund 2-type and Orion-type), synthetically observe them to obtain luminosities, accounting for extinction and the inherent limits of Gaia, and then place them within the real Gaia DR3 catalogue. We then attempt to rediscover the clusters at distances between 500 pc and 4300 pc. We show which spatial and kinematic criteria are best able to pick out the simulated clusters, maximising completeness and minimising contamination. We then compare the properties of the 'observed' clusters with the original simulations, examining the degree of clustering, the identification of clusters and subclusters within the datasets, and whether the clusters are expanding or contracting. Even with a high level of incompleteness (e.g. $<2\%$ of stellar members identified), similar qualitative conclusions tend to be reached compared to the original dataset, but most quantitative conclusions are likely to be inaccurate. Accurate determination of the number, stellar membership, and kinematic properties of subclusters is the most problematic, particularly at larger distances, where cluster substructure disappears as the data become more incomplete, but also at smaller distances, where the misidentification of asterisms as true structure can be an issue. Unsurprisingly, we tend to obtain better quantitative agreement of properties for our more massive Westerlund 2-type cluster. We also make optical-style images of the clusters over our range of distances.
The large-scale gaseous shocks in the bulge of M31 can be naturally explained by a rotating stellar bar. We use gas dynamical models to provide an independent measurement of the bar pattern speed in M31. The gravitational potentials of our simulations are from a set of made-to-measure models constrained by stellar photometry and kinematics. If the inclination of the gas disk is fixed at $i = 77^{\circ}$, we find that a low pattern speed of $16-20\;\rm km\;s^{-1}\;kpc^{-1}$ is needed to match the observed position and amplitude of the shock features, as shock positions are too close to the bar major axis in high $\Omega_{b}$ models. The pattern speed can increase to $20-30\;\rm km\;s^{-1}\;kpc^{-1}$ if the inner gas disk has a slightly smaller inclination angle compared with the outer one. Including sub-grid physics such as star formation and stellar feedback has minor effects on the shock amplitude, and does not change the shock position significantly. If the inner gas disk is allowed to follow a varying inclination similar to the HI and ionized gas observations, the gas models with a pattern speed of $38\;\rm km\;s^{-1}\;kpc^{-1}$, which is consistent with stellar-dynamical models, can match both the shock features and the central gas features.
Closure invariants in interferometry carry calibration-independent information about the morphology of an observed object. Except in simple cases, the mapping between closure invariants and morphologies is not well established. We aim to demonstrate that closure invariants can be used to classify the morphology and estimate the morphological parameters of an object using simple machine learning models. We consider six morphological classes -- point-like, uniform circular disc, crescent, dual disc, crescent with elliptical accretion disc, and crescent with double jet lobes -- described by phenomenological parameters. Using simple logistic regression, multi-layer perceptron (MLP), convolutional neural network, and random forest models on closure invariants obtained from a sparse aperture coverage, we find that all models except logistic regression are able to classify the morphology with an $F_1$ score $\gtrsim 0.8$. The classification accuracy notably improves with greater aperture coverage. We also estimate the morphological parameters of the uniform circular disc, crescent, and dual disc classes using simple MLP models, and perform a parametric image reconstruction. The reconstructed images do not retain information about absolute position or intensity scale, but the estimated parameters and reconstructed images are found to correspond well with the inputs. However, prediction accuracy worsens with increasing morphological complexity. This proof-of-concept method opens an independent approach to interferometric imaging under challenging observing conditions, such as those faced by the Event Horizon Telescope and Very Long Baseline Interferometry in general, and can complement other methods to robustly constrain an object's morphology.
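The calibration-independence of closure quantities that underpins this approach can be illustrated with a toy example (a standard interferometry property, not code from the paper): the closure phase on a three-station triangle is unchanged by arbitrary station-based phase errors, because each station's error enters twice with opposite signs around the loop.

```python
import numpy as np

rng = np.random.default_rng(1)
# true visibility phases on baselines (1-2), (2-3), (3-1)
phi_true = rng.uniform(-np.pi, np.pi, 3)

def closure_phase(phis):
    """Sum of the three baseline phases, wrapped into (-pi, pi]."""
    return np.angle(np.exp(1j * np.sum(phis)))

# corrupt each baseline with arbitrary station-based phase errors g_i
g = rng.uniform(-np.pi, np.pi, 3)
phi_obs = phi_true + np.array([g[0] - g[1], g[1] - g[2], g[2] - g[0]])

cp_true = closure_phase(phi_true)
cp_obs = closure_phase(phi_obs)  # identical: station errors cancel in the loop
```

It is this cancellation that makes closure invariants attractive inputs for machine learning models when station calibration is poorly known.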
Strong gravitational lensing can produce copies of gravitational-wave signals from the same source with the same waveform morphologies but different amplitudes and arrival times. Some of these strongly-lensed gravitational-wave signals can be demagnified and become sub-threshold. We present TESLA-X, an enhanced version of the original GstLAL-based TargetEd Subthreshold Lensing seArch (TESLA) method, for improving the detection efficiency of these potential sub-threshold lensed signals. TESLA-X utilizes lensed injections to generate a targeted population model and a targeted template bank. We compare the performance of a full template bank search, TESLA, and TESLA-X via a simulation campaign, and demonstrate the performance of TESLA-X in recovering lensed injections, particularly those targeting a mock event. Our results show that the TESLA-X method achieves up to $\sim 20\%$ higher search sensitivity than the TESLA method within the sub-threshold regime, presenting a step towards detecting the first lensed gravitational wave. TESLA-X will be employed in the LIGO-Virgo-KAGRA collaboration-wide analysis to search for lensing signatures in the fourth observing run.
Purpose: The Hard X-ray Modulation Telescope, dubbed Insight-HXMT, is China's first X-ray astronomy satellite, launched on June 15th, 2017. Active and passive thermal control measures are employed to keep its devices at suitable temperatures. In this paper, we analyzed the on-orbit thermal monitoring data of the first five years and investigated the effect of thermal deformation on the point spread function (PSF) of the telescopes. Methods: We examined the on-orbit temperatures measured by 157 thermistors placed on the collimators, detectors, and their support structures, and compared the results with the thermal control requirements. The thermal deformation was evaluated via the relative orientation of the two star sensors installed on the main support structure. Its effect was estimated from the evolution of the PSF obtained with calibration scanning observations of the Crab Nebula. Conclusion: The on-orbit temperatures have met the thermal control requirements thus far, and the effect of thermal deformation on the PSF has been negligible after the on-orbit pointing calibration.
The Galclaim software is designed to identify associations between astrophysical transient sources and host galaxies by computing the probability of chance alignment. It is distributed as open-source Python software. It has already been used to identify, confirm, or reject host-galaxy candidates of GRBs and to validate or invalidate transient candidates in astrophysical observations. Such tools are also very useful for characterising archived transient candidates from large sky surveys.
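A common chance-alignment formalism, which may differ in detail from Galclaim's actual implementation, treats the background galaxy counts as Poisson-distributed, so the probability that an unrelated galaxy falls within the match radius follows from the expected count. A minimal sketch with made-up numbers:

```python
import math

def p_chance(radius_arcsec, sigma_per_arcsec2):
    """Probability that at least one unrelated galaxy falls within
    radius_arcsec of the transient, for a background surface density
    sigma_per_arcsec2 (galaxies per square arcsec), assuming Poisson
    statistics. Illustrative only; Galclaim's exact recipe may differ."""
    expected = math.pi * radius_arcsec ** 2 * sigma_per_arcsec2
    return 1.0 - math.exp(-expected)

# e.g. a 1" match radius in a field with 3e-3 galaxies per square arcsec
p = p_chance(1.0, 3e-3)  # small p suggests a genuine association
```

A low chance-alignment probability supports a genuine host association, while a value near unity flags a likely coincidence.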
Estimation of planetary orbital and physical parameters from light-curve data relies heavily on the accurate interpretation of Transit Timing Variation (TTV) measurements. In this letter, we review the process of TTV measurement and compare two fitting paradigms - one that makes transit-by-transit timing estimates and then fits a TTV model to the observed timings, and one that fits a global flux model to the entire light-curve data simultaneously. The latter is achieved either by solving for the underlying planetary motion (often referred to as "photodynamics") or by using an approximate or empirical shape of the TTV signal. We show that across a large range of the transit SNR regime, the probability distribution function (PDF) of the mid-transit time deviates significantly from a Gaussian, even if the flux errors are normally distributed. Treating the timing uncertainties as if they were normally distributed then leads to a misinterpretation of the TTV measurements. We illustrate these points using numerical experiments and conclude that a fitting process relying on global flux fitting, rather than on the derived TTVs, should be preferred.
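The non-Gaussianity of mid-transit-time estimates at low SNR can be reproduced with a toy experiment (illustrative only; the letter's models and numbers differ): fit a box-shaped transit to many noisy realizations and note that the best-fit times scatter far beyond any Gaussian core, because when noise dominates, the chi-square minimum can land anywhere in the search window.

```python
import numpy as np

rng = np.random.default_rng(42)

def box_model(t, t0, depth=0.002, dur=0.1):
    """Box-shaped transit: flux dips by `depth` for |t - t0| < dur/2."""
    return 1.0 - depth * (np.abs(t - t0) < dur / 2)

t = np.linspace(-0.5, 0.5, 201)
t0_grid = np.linspace(-0.4, 0.4, 161)
sigma = 0.004  # per-point noise, twice the transit depth: low SNR

def fit_t0(flux):
    """Grid-search chi-square fit for the mid-transit time."""
    chi2 = [np.sum((flux - box_model(t, tc)) ** 2) for tc in t0_grid]
    return t0_grid[int(np.argmin(chi2))]

estimates = []
for _ in range(300):
    flux = box_model(t, 0.0) + rng.normal(0, sigma, t.size)
    estimates.append(fit_t0(flux))
estimates = np.array(estimates)
# at low SNR the best-fit times scatter across the whole search window
spread = estimates.max() - estimates.min()
```

The heavy-tailed spread of `estimates` is exactly the behaviour that breaks the Gaussian-uncertainty assumption criticized in the letter.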
We propose a measure, the joint differential entropy of eigencolours, for determining the spatial complexity of exoplanets using only spatially unresolved light curve data. The measure can be used to search for habitable planets, based on the premise of a potential association between life and exoplanet complexity. We present an analysis using disk-integrated light curves from Earth, developed in previous studies, as a proxy for exoplanet data. We show that this quantity is distinct from previous measures of exoplanet complexity due to its sensitivity to spatial information that is masked by features with large mutual information between wavelengths, such as cloud cover. The measure has a natural upper limit and appears to avoid a strong bias toward specific planetary features. This makes it a candidate for being used as a generalisable measure of exoplanet habitability, since it is agnostic regarding the form that life could take.
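Under a Gaussian assumption, the joint differential entropy of the eigencolours reduces to a function of the eigenvalues of the band-band covariance matrix. The sketch below is a simplified stand-in for the paper's estimator, applied to synthetic multiband light curves: a "simple" planet whose bands share one common signal (cloud-like, high mutual information) versus a "complex" planet whose bands carry independent information.

```python
import numpy as np

def joint_entropy_eigencolours(X):
    """Gaussian estimate of the joint differential entropy of the
    eigencolours (principal components) of multiband light curves X,
    shape (n_times, n_bands). Simplified stand-in for the paper's
    estimator: H = 0.5 * (k*log(2*pi*e) + sum(log eigenvalues))."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    k = len(eigvals)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.sum(np.log(eigvals)))

rng = np.random.default_rng(3)
# "simple" planet: all bands dominated by one shared signal
common = rng.normal(0, 1.0, (500, 1))
simple = common + rng.normal(0, 0.05, (500, 4))
# "complex" planet: bands carry independent spatial information
complex_ = rng.normal(0, 1.0, (500, 4))

h_simple = joint_entropy_eigencolours(simple)
h_complex = joint_entropy_eigencolours(complex_)
```

Because the shared signal collapses most eigenvalues toward zero, the "simple" planet yields a much lower joint entropy, mirroring the abstract's point that the measure is sensitive to information masked by features with large mutual information between wavelengths.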
We present Spright, a Python package that implements a fast and lightweight mass-density-radius relation for small planets. The relation represents the joint planetary radius and bulk density probability distribution as the mean posterior predictive distribution of an analytical three-component mixture model. The analytical model, in turn, represents the probability for the planetary bulk density as three generalised Student's t-distributions with radius-dependent weights and means based on theoretical composition models. The approach is based on Bayesian inference and aims to overcome the rigidity of simple parametric mass-radius relations and the danger of overfitting inherent in non-parametric mass-radius relations. The package includes a set of pre-trained, ready-to-use relations based on two M dwarf catalogues, one FGK star catalogue, and two theoretical composition models for water-rich planets. The inference of new models is easy and fast, and the package includes a command-line tool that allows for coding-free use of the relation, including the creation of publication-quality plots. Additionally, we study whether the current mass and radius observations of small exoplanets support the presence of a population of water-rich planets positioned between rocky planets and sub-Neptunes. The study is based on Bayesian model comparison and finds moderately strong evidence against the existence of a water-world population around M dwarfs. However, this result depends on the chosen theoretical water-world density model, and a more conclusive answer requires a larger sample of precisely characterised planets and community consensus on a realistic water-world interior structure and atmospheric composition model.
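A three-component Student's t mixture of the kind described can be written down directly. The snippet below is illustrative only: the weights, degrees of freedom, means, and scales are made up, and Spright's actual parameterisation (with radius-dependent weights) differs in detail.

```python
import numpy as np
from math import gamma

def student_t_pdf(x, nu, mu, s):
    """Generalised Student's t density with location mu and scale s."""
    z = (x - mu) / s
    c = gamma((nu + 1) / 2) / (gamma(nu / 2) * np.sqrt(nu * np.pi) * s)
    return c * (1 + z ** 2 / nu) ** (-(nu + 1) / 2)

def density_pdf(rho, weights, nus, mus, scales):
    """Three-component mixture for the planetary bulk density; in the
    real relation the weights would depend on planet radius."""
    return sum(w * student_t_pdf(rho, nu, mu, s)
               for w, nu, mu, s in zip(weights, nus, mus, scales))

# made-up rocky / water-rich / puffy components, densities in g/cm^3
params = ([0.5, 0.3, 0.2], [5, 5, 5], [5.2, 3.0, 1.5], [0.8, 0.6, 0.5])
grid = np.linspace(-60.0, 60.0, 12001)
pdf = density_pdf(grid, *params)
total = np.sum(pdf) * (grid[1] - grid[0])  # should integrate to ~1
```

The heavy tails of the t components are what give such a relation robustness to outliers relative to a Gaussian mixture.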
ANAIS is a direct dark matter detection experiment whose goal is to confirm or refute, in a model-independent way, the positive annual modulation signal claimed by DAMA/LIBRA. Consisting of 112.5 kg of NaI(Tl) scintillators, ANAIS-112 has been taking data at the Canfranc Underground Laboratory in Spain since August 2017. Results from the analysis of three years of data are compatible with the absence of modulation and incompatible with DAMA/LIBRA. However, testing this signal relies on knowledge of the scintillation quenching factors (QF), which measure the relative efficiency for the conversion into light of nuclear recoil energy with respect to the same energy deposited by electrons. Previous measurements of the QF in NaI(Tl) show a large dispersion. Consequently, in order to better understand the response of the ANAIS-112 detectors to nuclear recoils, a specific neutron calibration program has been developed. This program combines two approaches: on the one hand, QF measurements were carried out in a monoenergetic neutron beam; on the other hand, the study presented here evaluates the QF by directly exposing the ANAIS-112 crystals to neutrons from low-activity $^{252}$Cf sources placed outside the lead shielding. Comparisons between these onsite neutron measurements and detailed GEANT4 simulations will be presented, confirming that this approach allows testing of different QF models.
Recently, low-mass dark matter direct-detection searches have been hindered by a low-energy background that drastically reduces the physics reach of the experiments. In the CRESST-III experiment, this background is characterised by a significant increase of events below 200 eV. As its origin is still unknown, it became necessary to develop new detector designs to reach a better understanding of the observations. Within the CRESST collaboration, three new detector layouts have been developed, and they are presented in this contribution.
Experimental 21 cm cosmology aims to detect the formation of the first stars during the cosmic dawn and the subsequent epoch of reionization by utilizing the 21 cm hydrogen line transition. While several experiments have published results that begin to constrain the shape of this signal, a definitive detection has yet to be achieved. In this paper, we investigate the influence of uncertain antenna-sky interactions on the possibility of detecting the signal, and aim to define the level of accuracy to which a simulated antenna beam pattern must agree with the actual observing beam pattern of the antenna to allow a confident detection of the global 21 cm signal. Using singular value decomposition, we construct a set of antenna power patterns that incorporate minor, physically motivated variations. We take the mean absolute difference between the original beam and the perturbed beam, averaged over frequency ($\Delta D$), to quantify this difference, identifying the correlation between $\Delta D$ and antenna temperature. To analyse the impact of $\Delta D$ on making a confident detection, we utilize the REACH Bayesian analysis pipeline and compare the Bayesian evidence $\log \mathcal{Z}$ and root-mean-square error for antenna beams of different $\Delta D$ values. Our calculations suggest that achieving agreement between the original and perturbed antenna power patterns with $\Delta D$ better than -35 dB is necessary for a confident detection of the global 21 cm signal. Furthermore, we discuss potential methods to achieve the required high level of accuracy within a global 21~cm experiment.
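A metric in the spirit of $\Delta D$ (not necessarily the paper's exact definition) can be computed as the mean absolute difference between two power patterns, averaged over frequency and expressed in dB; the beams below are toy random patterns, with a perturbation amplitude chosen to land well below the quoted -35 dB threshold.

```python
import numpy as np

def delta_d(beam_ref, beam_pert):
    """Mean absolute difference between two power patterns with shape
    (n_freq, n_theta, n_phi), averaged over frequency, in dB.
    Mirrors the spirit of the paper's Delta D, not its exact recipe."""
    diff = np.mean(np.abs(beam_pert - beam_ref), axis=(1, 2))  # per frequency
    return 10.0 * np.log10(np.mean(diff))

rng = np.random.default_rng(7)
beam = rng.uniform(0.5, 1.0, (10, 32, 64))  # toy power pattern (10 freqs)
pert = beam * (1 + 1e-4 * rng.standard_normal(beam.shape))  # ~0.01% ripple
dd = delta_d(beam, pert)  # comfortably below -35 dB for this perturbation
```

Scaling the perturbation amplitude up or down moves $\Delta D$ correspondingly, which is how a requirement such as "better than -35 dB" translates into a tolerance on beam-modelling errors.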
Exoplanet characterization missions planned for the future will soon enable searches for life beyond our solar system. Critical to the search will be the development of life detection strategies that can search for biosignatures while maintaining observational efficiency. In this work, we adopted a newly developed biosignature decision tree strategy for remote characterization of Earth-like exoplanets. The decision tree offers a step-by-step roadmap for detecting exoplanet biosignatures and excluding false positives, based on Earth's biosphere and its evolution over time. We followed the pathways for characterizing a modern Earth-like planet and an Archean Earth-like planet and evaluated the observational trades associated with coronagraph bandpass combinations for designs consistent with Habitable Worlds Observatory (HWO) precursor studies. With retrieval analyses of each bandpass (or combination), we demonstrate the utility of the decision tree and evaluate the uncertainty on a suite of biosignature chemical species and habitability indicators (i.e., the gas abundances of H$_2$O, O$_2$, O$_3$, CH$_4$, and CO$_2$). Notably, for modern Earth, less than an order of magnitude spread in the 1-$\sigma$ uncertainties was achieved for the abundances of H$_2$O and O$_2$, planetary surface pressure, and atmospheric temperature with three strategically placed bandpasses (two in the visible and one in the near-infrared). For the Archean, CH$_4$ and H$_2$O were detectable in the visible with a single bandpass.
In this paper, we present YunMa, an exoplanet cloud simulation and retrieval package, which enables the study of cloud microphysics and radiative properties in exoplanetary atmospheres. YunMa simulates the vertical distribution and sizes of cloud particles and their corresponding scattering signature in transit spectra. We validated YunMa against results from the literature. When coupled to the TauREx 3 platform, an open Bayesian framework for spectral retrievals, YunMa enables the retrieval of the cloud properties and parameters from transit spectra of exoplanets. The sedimentation efficiency ($f_{\mathrm{sed}}$), which controls the cloud microphysics, is set as a free parameter in retrievals. We assess the retrieval performances of YunMa through 28 instances of a K2-18 b-like atmosphere with different fractions of H$_2$/He and N$_2$, and assuming water clouds. Our results show a substantial improvement in retrieval performances when using YunMa instead of a simple opaque cloud model and highlight the need to include cloud radiative transfer and microphysics to interpret the next-generation data for exoplanet atmospheres. This work also inspires instrumental development for future flagships by demonstrating retrieval performances with different data quality.
Cool astrophysical objects, such as (exo)planets, brown dwarfs, or asymptotic giant branch stars, can be strongly affected by condensation. Condensation not only directly affects the chemical composition of the gas phase by removing elements; the condensed material also influences other chemical and physical processes in these objects, including, for example, the formation of clouds in planetary atmospheres and brown dwarfs or the dust-driven winds of evolved stars. In this study we introduce FastChem Cond, a new version of the FastChem equilibrium chemistry code that adds a treatment of equilibrium condensation. Determining the equilibrium composition under the impact of condensation is complicated by the fact that the number of condensates that can exist in equilibrium with the gas phase is limited by a phase rule. However, this phase rule does not directly provide information on which condensates are stable. A major advantage of FastChem Cond is that it automatically selects the set of stable condensates satisfying the phase rule. Besides normal equilibrium condensation, FastChem Cond can also be used with the rainout approximation commonly employed for atmospheres of brown dwarfs or (exo)planets. FastChem Cond is available as open-source code, released under the GPLv3 licence. In addition to the C++ code, FastChem Cond also offers a Python interface. Together with the code update, we also add about 290 liquid and solid condensate species to FastChem.
The memory effect in electrodynamics, discovered in 1981 by Staruszkiewicz and analysed further since, consists of an adiabatic shift of the position of a test particle. The recently proposed 'velocity kick' memory effect is in contradiction with these findings. We show that the 'velocity kick' memory is an artefact resulting from an unjustified interchange of limits. This example is a warning against drawing uncritical conclusions for spacetime fields from their asymptotic behavior.
The spin network technique is usually generalized to the relativistic case by replacing the $SO(4)$ group -- the Euclidean counterpart of the Lorentz group -- with its universal spin covering $SU(2)\times SU(2)$, or by using representations of the $SO(1,3)$ Lorentz group. We extend this approach by using the inhomogeneous Lorentz group $\mathcal{P}=SO(1,3)\rtimes T_4$, which results in a simplification of the spin network technique. The labels on the network graph corresponding to the subgroup of translations $T_4$ turn the intertwiners into products of $SU(2)$ parts and energy-momentum conservation delta functions. This maps relativistic spin networks to the usual Feynman diagrams for matter fields.
In AdS/CFT, we introduce a robust method for computing $n$-point gluon Mellin amplitudes, applicable in various spacetime dimensions. Using the Mellin transform and a recursive algorithm, we efficiently calculate tree-level gluon amplitudes. Our approach simplifies the representation of higher-point amplitudes, eliminating the need for complicated integrations. Crucially, the resulting amplitudes closely mirror those in flat space, allowing a straightforward dictionary between the two settings that circumvents explicit calculations.
In this paper, the properties of higher-dimensional holographic superconductors are studied in the background of $f(R)$ gravity and Born-Infeld electrodynamics. A specific model of $f(R)$ gravity is considered, allowing a perturbative approach to the problem. The Sturm-Liouville eigenvalue problem is used to analytically calculate the critical temperature and the condensation operator. An expression for the critical temperature in terms of the charge density, including the correction from modified gravity, is derived. It is seen that higher values of the Born-Infeld coupling parameter make the condensate harder to form. In addition, the limiting values of this parameter, above which Born-Infeld electrodynamics cannot be applied, are found for different dimensions. Another interesting property is that increasing modifications of $f(R)$ gravity lead to larger values of the critical temperature and a decrease in the condensation gap, which means that the condensate forms more easily.
The images of supermassive black holes captured by the Event Horizon Telescope (EHT) collaboration have given us access to the physical processes that occur in the vicinity of the event horizons of these objects. This has enabled us to learn more about the state of rotation of black holes, about the formation of relativistic jets in their vicinity, about the magnetic field in the regions close to them, and even about the existence of the photon ring. Furthermore, black hole imaging gives rise to a new way of testing general relativity in the strong field regime. This has initiated a line of research aimed at probing different physical scenarios. While many scenarios have been proposed in the literature that yield distortion effects that would be a priori detectable at the resolution achieved by future EHT observations, the vast majority of those scenarios involve strange objects or exotic matter content. Here, we consider a less heterodox scenario which, while involving non-exotic matter -- in the sense that it satisfies all energy conditions and is dynamically stable -- also leads to a deformation of the black hole shadow. We consider a specific concentration of non-emitting, relativistic matter of zero optical depth forming a bubble around the black hole. Due to gravitational refraction, such a self-interacting -- dark -- matter concentration may produce sub-annular images, i.e. subleading images inside the photon ring. We perform ray tracing in the space-time geometry produced by such a matter configuration and obtain the corresponding black hole images. While for concreteness we restrict our analysis to a specific matter distribution, modeling the bubble as a thin shell, effects qualitatively similar to those described here are expected to occur for more general density profiles.
The conventional phase space of classical physics treats space and time differently, and this difference carries over to field theories and quantum mechanics (QM). In this paper, the phase space is enhanced through two main extensions. Firstly, we promote the time choice of the Legendre transform to a dynamical variable. Secondly, we extend the Poisson brackets of matter fields to a spacetime symmetric form. The ensuing "spacetime phase space" is employed to obtain an explicitly covariant version of Hamilton equations for relativistic field theories. A canonical-like quantization of the formalism is then presented in which the fields satisfy spacetime commutation relations and the foliation is quantum. In this approach, the classical action is also promoted to an operator and retains explicit covariance through its non-separability in the matter-foliation partition. The problem of establishing a correspondence between the new noncausal framework (where fields at different times are independent) and conventional QM is solved through a generalization of spacelike correlators to spacetime. In this generalization, the Hamiltonian is replaced by the action, and conventional particles by off-shell particles. When the foliation is quantized, the previous map is recovered by conditioning on foliation eigenstates, in analogy with the Page and Wootters mechanism. We also provide an interpretation of the correspondence in which the causal structure of a given theory emerges from the quantum correlations between the system and an environment. This idea holds for general quantum systems and allows one to generalize the density matrix to an operator containing the information of correlators both in space and time.
Gravitationally induced interference is studied here in the framework of Teleparallel Gravity. We derive the gravitational phase difference and apply the result to the case of a Kerr spacetime. Afterwards, we compute the fringe shifts in a particle interference experiment and discuss how to increase their values by varying the relevant parameters: the area enclosed between the paths, the energy of the particles, the distance from the black hole, and the mass and spin of the black hole. It turns out that it is more difficult to detect the fringe shifts for massless particles than for massive particles. As a further application, we show how the mass of the black hole and its angular momentum can be obtained from a measurement of the fringe shifts. Finally, we compare the phase difference derived in Teleparallel Gravity with a previous result in General Relativity.
We investigate the tensor and vector perturbations around all three branches of the spatially flat universe with different connections in symmetric teleparallel gravity. The model we consider covers both the case of the f(Q) model and that of a non-minimal coupling between a scalar field and the non-metricity scalar. We focus on analyzing and comparing the propagation behavior and stability of the vector and tensor gravitational waves in a spatially flat universe with different connections.
Gravitational-wave (GW) observations provide unique information about compact objects. As detector sensitivity increases, new astrophysical sources of GWs could emerge. Close hyperbolic encounters are one such source class: scattering of stellar-mass compact objects is expected to manifest as GW burst signals in the frequency band of current detectors. We present a search for GWs from hyperbolic encounters in the second half of the third Advanced LIGO-Virgo observing run (O3b). We perform a model-informed search with the machine-learning-enhanced Coherent WaveBurst algorithm. No significant event has been identified in addition to the known detections of compact binary coalescences. We inject into the O3b data a non-spinning, third post-Newtonian-order-accurate hyperbolic encounter model with component masses between [2, 100] $M_{\odot}$, impact parameter in [60, 100] ${GM}/{c^2}$ and eccentricity in [1.05, 1.6]. We further discuss the properties of the recovered injections. For the first time, we report the sensitivity volume achieved for such sources, which for O3b data reaches up to 3.9$\pm 1.4 \times 10^5$ Mpc$^3$year for compact objects with masses between [20, 40] $M_{\odot}$, corresponding to a rate density upper limit of 0.589$\pm$0.094 $\times10^{-5}$Mpc$^{-3}$year$^{-1}$. Finally, we present the projected sensitive volume for the next observing runs of current detectors, namely O4 and O5.
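As an aside (not part of the abstract), the quoted rate-density upper limit is consistent with the standard zero-detection Poisson relation $R_{90} \simeq 2.3/\langle VT \rangle$; a minimal sketch, assuming that relation applies here:

```python
import math

def rate_upper_limit(vt_mpc3_yr, confidence=0.9):
    """Poisson upper limit on the rate density given zero detections:
    R = -ln(1 - C) / VT, the standard '2.3 / <VT>' rule at C = 90%.
    (An illustrative assumption, not a formula quoted from the abstract.)"""
    return -math.log(1.0 - confidence) / vt_mpc3_yr

vt = 3.9e5  # sensitive volume-time for 20-40 Msun encounters, Mpc^3 yr
r90 = rate_upper_limit(vt)
print(f"{r90:.3e} Mpc^-3 yr^-1")  # ~5.9e-6, matching the quoted 0.589e-5
```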
We examine Friedmann-Lema\^itre-Robertson-Walker (FLRW) models derived from ``Cotton Gravity'' (CG), a recently proposed gravity theory based on the Cotton tensor. Using an alternative but equivalent formulation, we find that, when applied to FLRW models, CG allows for geometric degrees of freedom absent in General Relativity; specifically, it leads to FLRW models with late-time accelerated expansion driven by negative spatial curvature, without involving a dark energy source or an imposed cosmological constant. A covariant characterization of the $\Lambda$CDM model follows as the unique FLRW dust model in CG with constant negative spatial curvature. Our results suggest that CG has appealing potential for extending General Relativity and its cosmological applications.
Unphysical equations of state result from the unrestricted use of the Synge G-trick of running the Einstein field equations backwards; in particular, this often results in $\rho + p < 0$, which implies a negative inertial mass density, something that does not occur in reality. This is the basis of some unphysical spacetime models, including phantom energy in cosmology and traversable wormholes. The slogan ``ER = EPR'' appears to have no basis in physics and is merely the result of vague and unbridled speculation. Wormholes (the ``ER'' of the slogan) are a mathematical curiosity of general relativity that has little to no application to a description of our universe. In contrast, quantum correlations (the ``EPR'' of the slogan) are a fundamental property of quantum mechanics that follows from the principle of superposition and holds regardless of the properties of gravity. The speculative line of thought that led to ``ER = EPR'' is part of a current vogue for anti-geometrical thinking that runs counter to (and threatens to erase) the great geometrical insights of the global structure program of general relativity.
This thesis investigates low-dimensional models of nonperturbative quantum gravity, with a special focus on Causal Dynamical Triangulations (CDT). We define the so-called curvature profile, a new quantum gravitational observable based on the quantum Ricci curvature. We subsequently study its coarse-graining capabilities on a class of regular, two-dimensional polygons with isolated curvature singularities, and we determine the curvature profile of (1+1)-dimensional CDT with toroidal topology. Next, we focus on CDT in 2+1 dimensions, investigating the behavior of the two-dimensional spatial slice geometries. We then turn our attention to matrix models, exploring a differential reformulation of the integrals over one- and two-matrix ensembles. Finally, we provide a hands-on introduction to computer simulations of CDT quantum gravity.
In [arXiv:2309.10034], we recently proposed a modification of general relativity in which a non-dynamical term related to topology is introduced in the Einstein equation. The original motivation for this theory is to allow for the non-relativistic limit to exist in any physical spacetime topology. In the present paper, we derive an exact static vacuum spherically symmetric solution of this theory. The metric represents a black hole (with positive Komar mass) and a naked white hole (with negative and opposite Komar mass) at opposite poles of an $\mathbb{S}^3$ universe. The solution is similar to the Schwarzschild metric, but the spacelike infinity is cut and replaced by a naked white hole at finite distance, implying that the spacelike hypersurfaces of the Penrose--Carter diagram are closed. This solution further confirms a result of [arXiv:2212.00675] suggesting that staticity of closed-space models in this theory requires a vanishing total mass. As a subcase of the solution, we also obtain a vacuum homogeneous 3-sphere, something not possible in general relativity. We also put the solution in perspective with other attempts at describing singularities on $\mathbb{S}^3$ and discuss how this theory could be used to study the behaviour of a black hole in an empty closed expanding universe.
We explore the thermodynamics of an ideal gas system with heat conduction, incorporating a model that accommodates heat dependence. Our model is constructed based on i) the first law of thermodynamics from an action formulation and ii) the existence condition of a (local) Lorentz boost between an Eckart observer and a Landau-Lifshitz observer--a condition that extends the stability criterion of thermal equilibrium. The implications of these conditions include: 1) Heat contributes to the energy density through the combination $q/n\Theta^2$, where $q$, $n$, and $\Theta$ represent heat, number density, and temperature, respectively. 2) The energy density exhibits a unique minimum at $q=0$ with respect to this contribution. 3) Our findings indicate an upper limit on the temperature of the ideal gas in thermal equilibrium. The upper limit is governed by the coefficient of the first non-vanishing contribution of heat around equilibrium to the energy density.
We investigate the properties of quantum entanglement and mutual information in the multi-event-horizon Schwarzschild-de Sitter (SdS) spacetime for massless Dirac fields. We obtain the expression for the evolution of the quantum state near the black hole event horizon (BEH) and the cosmological event horizon (CEH) in the SdS spacetime. In the Nariai limit, the physically accessible entanglement and mutual information are maximized, and the physically inaccessible correlations are zero. With an increase in the temperature of either horizon, the physically accessible correlations experience degradation. Notably, the initial state remains entangled and can be utilized in entanglement-based quantum information processing tasks, which differs from the scalar field case. Furthermore, the degradation of physically accessible correlations is more pronounced for small-mass black holes. In contrast, the physically inaccessible correlations separated by the CEH monotonically increase with the radiation temperature, and such correlations are not decisively influenced by the effect of particle creation at the BEH. Moreover, a similar phenomenon is observed for the inaccessible correlations separated by the BEH. This result differs from single-event-horizon spacetimes, in which the physically inaccessible entanglement is a monotonic function of the Hawking temperature.
The detection of low-frequency gravitational waves on Earth requires the reduction of displacement noise, which dominates the low-frequency band. One method to cancel test mass displacement noise is a neutron displacement-noise-free interferometer (DFI). This paper proposes a new neutron DFI configuration, a Sagnac-type neutron DFI, which uses a Sagnac interferometer in place of the Mach-Zehnder interferometer. We demonstrate that the sensitivity of the Sagnac-type neutron DFI is higher than that of a conventional neutron DFI with the same interferometer scale. This configuration is particularly significant for neutron DFIs with limited space for construction and limited flux from available neutron sources.
Presented are toy models for sub-luminal and super-luminal warp drives in 3+1 dimensions. The models are constructed in a chimeric manner - as different bulk space-times separated by thin membranes. The membranes contain perfect-fluid-like stress-energy tensors. The Israel junction conditions relate this stress-energy to a jump in extrinsic curvature across the brane, which in turn manifests as apparent acceleration in the bulk space-times. The acceleration on either side of the brane may be set individually by a choice of model parameters. The Weak Energy Condition (WEC) is shown to be satisfied everywhere in both models. Although the branes in these toy models are not compact, it is demonstrated that a super-luminal warp drive satisfying the WEC is possible. Additionally, the nature of these models provides a framework for speculation on a mechanism for the transition from sub-luminal to super-luminal warp. Neither quantum effects nor the stability of the models are considered.
In $F(R,R_{\mu\nu}R^{\mu\nu},R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})$ gravity, a general class of fourth-order theories of gravity, the multipole expansion of the metric outside a spatially compact source up to $1/c^3$ order is provided, and closed-form expressions for the source multipole moments are all presented explicitly. Since the integrals involved in the multipole moments are performed only over the actual source distributions, our result yields a ready-to-use form of the metric. Three separate parts are contained in the expansion. As in General Relativity, the massless tensor part, showing the Coulomb-like dependence, is characterized by the mass and spin multipole moments. By contrast, the massive scalar and tensor parts bear Yukawa-like dependences on the two massive parameters, respectively, and predict the appearance of six additional sets of source multipole moments.
Recently, Fernandes \cite{Fernandes:2023vux} discovered analytic stationary and axially-symmetric black hole solutions within semiclassical gravity, driven by the trace anomaly. The study unveils some distinctive features of these solutions. In this paper, we compute the gravitational waves emitted from an extreme mass-ratio inspiral (EMRI) around these quantum-corrected rotating black holes using the approximate kludge method. First, we derive the orbital energy, angular momentum and fundamental frequencies for orbits in the equatorial plane. We find that, for gravitational radiation described by quadrupole formulas, the contribution from the trace anomaly only appears at higher-order terms in the energy flux compared with the standard Kerr case. Therefore, we can compute the EMRI waveforms from the quantum-corrected rotating black hole using the Kerr fluxes. We assess the differences between the EMRI waveforms from rotating black holes with and without the trace anomaly by calculating the dephasing and mismatch. Our results demonstrate that space-borne gravitational wave detectors can distinguish the EMRI waveform from the quantum-corrected black holes with a fractional coupling constant of $\sim 10^{-3}$ within one year of observation. Finally, we compute the constraint on the coupling constant using the Fisher information matrix method and find that LISA could constrain the coupling constant to within $\sim 10^{-4}$ in suitable scenarios.
For asymptotically flat spacetimes, a conjecture by Strominger states that asymptotic BMS-supertranslations and their associated charges at past null infinity $\mathscr{I}^{-}$ can be related to those at future null infinity $\mathscr{I}^{+}$ via an antipodal map at spatial infinity $i^{0}$. We analyse the validity of this conjecture using Friedrich's formulation of spatial infinity, which gives rise to a regular initial value problem for the conformal field equations at spatial infinity. A central structure in this analysis is the cylinder at spatial infinity representing a blow-up of the standard spatial infinity point $i^{0}$ to a 2-sphere. The cylinder touches past and future null infinities $\mathscr{I}^{\pm}$ at the critical sets. We show that for a generic class of asymptotically Euclidean and regular initial data, BMS-supertranslation charges are not well-defined at the critical sets unless the initial data satisfies an extra regularity condition. We also show that given initial data that satisfy the regularity condition, BMS-supertranslation charges at the critical sets are fully determined by the initial data and that the relation between the charges at past null infinity and those at future null infinity directly follows from our regularity condition.
The stability of Reissner-Nordstr\"om black holes with an extremal mass-charge relation was determined by calculating the propagation speed of gravitational waves on this background in an effective field theory (EFT) of gravity. New results for the metric components are shown, along with the corresponding new extremal relation, part of which differs by a global factor of 2 from past published work. This new relation further develops the existing constraints on EFT parameters. The radial propagation speed for gravitational waves in the Regge-Wheeler gauge was calculated linearly for all perturbations, yielding exact luminality for all dimension-4 operators. The dimension-6 radial speed modifications introduce no constraints on the sign of the modified theory parameters from causality arguments, while the deviation from classical theories vanishes at both horizons. The angular speed was found to be altered for the dimension-4 operators, with possible new constraints on the modified theory suggested by causality arguments. Results are consistent with the existing literature on Schwarzschild black hole backgrounds, with some EFT terms becoming active only in non-vacuum spacetimes such as Reissner-Nordstr\"om black holes.
Moving mirrors have been used for a long time as simple models for studying various properties of black hole radiation, such as the thermal spectrum and entanglement entropy. These models are typically constructed to mimic the collapse of a spherically symmetric distribution of matter in the asymptotically Minkowski background. We generalize this correspondence to the case of non-trivial background geometry (e.g. a black hole in the AdS spacetime) and somewhat relax the assumption of spherical symmetry.
We prove boundedness and polynomial decay statements for solutions of the Teukolsky system for electromagnetic-gravitational perturbations of a Kerr-Newman exterior background, with parameters satisfying $|a|, |Q| \ll M$. The identification and analysis of the Teukolsky system in Kerr-Newman has long been problematic due to the long-standing problem of coupling and failure of separability of the equations. Here, we analyze a system satisfied by novel gauge-invariant quantities representing gravitational and electromagnetic radiation for coupled perturbations of a Kerr-Newman black hole. The bounds are obtained by making use of a generalization of the Chandrasekhar transformation into a system of coupled generalized Regge-Wheeler equations derived in previous work. Crucial to our resolution is the use of a combined energy-momentum tensor for the system which exploits its symmetric structure, performing an effective decoupling of the perturbations. As for other black hole solutions, such bounds on the Teukolsky system provide the first step in proving the nonlinear stability of the Kerr-Newman metric to gravitational and electromagnetic perturbations.
A novel first order action principle has been proposed as the possible foundation for a more fundamental theory of General Relativity and the Standard Model. It is shown in this article that the proposal improves the consistency of the field equations and the Noether currents.
In the context of general relativity, the geodesic deviation equation (GDE) relates the Riemann curvature tensor to the relative acceleration of two neighboring geodesics. In this paper, we consider the GDE for the generalized hybrid Metric-Palatini gravity and apply it in this model to investigate the structure of time-like, space-like, and null geodesics in the homogeneous and isotropic universe. We propose a particular case $f(R,{\cal R})=R+{\cal R}$ to study the numerical behavior of the deviation vector $\eta(z)$ and the observer area-distance $r_{0}(z)$ with respect to redshift $z$. Also, we consider the GDE in the framework of the scalar-tensor representation of the generalized hybrid Metric-Palatini gravity {\it i.e.} $f(R, {\cal R} )$, in which the model can be considered as dynamically equivalent to a gravitational theory with two scalar fields. Finally, we extend our calculations to obtain the modification of the Mattig relation in this model.
We consider the classic problem of a compact fluid source that behaves non-relativistically and that radiates gravitational waves. The problem consists of determining the metric close to the source as well as far away from it. The non-relativistic nature of the source leads to a separation of scales resulting in an overlap region where both the $1/c$ and (multipolar) $G$-expansions are valid. Standard approaches to this problem (the Blanchet--Damour and the DIRE approach) use the harmonic gauge. We define a `post-Newtonian' class of gauges that admit a Newtonian regime in inertial coordinates. In this paper we set up a formalism to solve for the metric for any post-Newtonian gauge choice. Our methods are based on previous work on the covariant theory of non-relativistic gravity (a $1/c$-expansion of general relativity that uses post-Newton-Cartan variables). At the order of interest in the $1/c$ and $G$-expansions we split the variables into two sets: transverse and longitudinal. We show that for the transverse variables the problem can be reduced to inverting Laplacian and d'Alembertian operators on their respective domains subject to appropriate boundary conditions. The latter are regularity in the interior and asymptotic flatness with a Sommerfeld no-incoming radiation condition imposed at past null infinity. The longitudinal variables follow from the gauge choice. The full solution is then obtained by the method of matched asymptotic expansion. We show that our methods reproduce existing results in harmonic gauge to 2.5PN order.
We propose that the underlying context of holographic duality and the Ryu-Takayanagi formula is that the volume measure of spacetime is a probability measure constrained by quantum dynamics. We define quantum stochastic processes using joint quantum distributions which are realized in a quantum system as expectation values of products of projectors. In anti-de Sitter JT gravity, we show that Einstein's equations arise from the evolution of probability under the quantum stochastic process induced by the boundary, with the area of compactified space in the gravitational theory identified as a probability density evolving under the quantum process. Extrapolating these and related results in flat JT gravity found in arXiv:2108.10916, we conjecture that general relativity arises in the semi-classical limit of the evolution of probability with respect to quantum stochastic processes.
Einstein, Podolsky, and Rosen (EPR) proposed, via a thought experiment, that the uncertainty principle might not provide a complete description of reality. We propose that the linear generalized uncertainty principle (GUP) may resolve the EPR paradox by demonstrating vanishing uncertainty at the minimal measurable length. This may shed light on the completeness of quantum mechanics, and leads us to propose an equivalence between the linear GUP and the Bekenstein bound, a bound that prescribes the maximum amount of information needed to completely describe a physical system down to the quantum level. This equivalence is verified by explaining the radii of the hydrogen atom and of nuclei, as well as the value of the cosmological constant. In a recently published study, we verified that the Einstein-Rosen (ER) bridge originates from the minimal length, or GUP. Considering these findings together, we propose that the linear GUP could function as a model for the ER=EPR conjecture.
Two singularity theorems can be proven if one attempts to let a Lorentzian cobordism interpolate between two topologically distinct manifolds. On the other hand, Cartier and DeWitt-Morette have given a rigorous definition for quantum field theories (qfts) by means of path integrals. This article uses their results to study whether qfts can be made compatible with topology changes. We show that path integrals over metrics need a finite norm for the latter, and that for degenerate metrics this problem can sometimes be resolved with tetrads. We prove that already in the neighborhood of some cuspidal singularities, difficulties can arise in defining certain qfts. On the other hand, we show that simple qfts can be defined around conical singularities that result from a topology change in a simple setup. We argue that the ground state of many theories of quantum gravity will imply a small cosmological constant and, during the expansion of the universe, will cause frequent topology changes. Unfortunately, it is difficult to describe the transition amplitudes consistently due to the aforementioned problems. We argue that one needs to describe qfts by stochastic differential equations, and in the case of gravity, by Regge calculus, in order to resolve this problem.
Hawking radiation can be regarded as a spontaneous and continuous creation of virtual particle-antiparticle pairs outside the event horizon of a black hole, where strong tidal forces prevent the annihilation: the particle escapes to infinity, contributing to the Hawking flux, while its corresponding antiparticle partner enters the event horizon and ultimately reaches the singularity. The aim of this paper is to investigate the energy density correlations between the Hawking particles and their partners across the event horizon of two models of non-singular black holes by calculating the two-point correlation function of the density operator of a massless scalar field. This analysis is motivated by the fact that in acoustic black holes particle-partner correlations are signalled by the presence of a peak in the equal-time density-density correlator. Performing the calculation in a Schwarzschild black hole, it was shown in [1] that the peak does not appear, mainly because of the singularity. It is then interesting to consider what happens when the singularity is not present. In the Hayward and Simpson-Visser non-singular black holes we show that the density-density correlator remains finite when the partner particle approaches the hypersurface that replaces the singularity, opening the possibility that partner-particle correlations can propagate towards other regions of spacetime instead of being lost in a singularity.
Working in a semi-classical setting, we consider solutions of the Einstein equations that exhibit light trapping in finite time according to distant observers. In spherical symmetry, we construct near-horizon quantities from the assumption of regularity of the renormalized expectation value of the energy-momentum tensor, and derive explicit coordinate transformations in the near-horizon region. We examine the boundary conditions appropriate for embedding the model into a cosmological background, describe their evaporation in the linear regime and highlight the observational consequences, while also discussing the implications for the laws of black hole mechanics.
We study the neutrino pair annihilation $\nu+\overline{\nu}\longrightarrow e^{-}+e^{+}$ around a massive source in asymptotic safety. The ratio $\frac{\dot{Q}}{\dot{Q}_{Newt}}$ of the energy deposition per unit time to that in the Newtonian case is derived and calculated. We find that the quantum corrections to the black hole spacetime affect the emitted energy rate ratio for the annihilation; interestingly, a more considerable quantum effect reduces the ratio slightly. Although the energy conversion is damped by the quantum correction, the energy deposition rate during neutrino-antineutrino annihilation remains sufficient, so the corrected annihilation process can still serve as a source of gamma-ray bursts. We also investigate the derivative $\frac{d\dot{Q}}{dr}$ with respect to the star's radius $r$ to show that the quantum effect for the black hole lowers the ratio; a more manifest quantum gravity influence leads to weaker neutrino pair annihilation.
We consider geodesic motion in the Kerr-Sen-AdS$_4$ spacetime. We obtain the equations of motion for light rays and test particles. Using parametric diagrams, we show the regions where radial and latitudinal geodesic motion is allowed. We analyse the impact of the parameter related to the dilatonic scalar on the orbits and find that it gives rise to richer and more complex orbital types.
This work focuses on the examination of a regular black hole within Verlinde's emergent gravity, specifically investigating the Hayward-like (modified) solution. The study reveals the existence of three horizons under certain conditions, i.e., an event horizon and two Cauchy horizons. Our results indicate regions in which phase transitions occur, based on the analysis of heat capacity and Hawking temperature. To compute the latter quantity, we utilize three distinct methods: the surface gravity approach, Hawking radiation, and the application of the first law of thermodynamics. In the case of the latter approach, it is imperative to introduce a correction to ensure the preservation of the Bekenstein-Hawking area law. Geodesic trajectories and critical orbits (photon spheres) are calculated, highlighting the presence of three light rings. Additionally, we investigate the black hole shadows. Furthermore, the quasinormal modes are explored using third- and sixth-order WKB approximations. In particular, we observe stable and unstable oscillations for certain frequencies. Finally, in order to comprehend the phenomena of time-dependent scattering in this scenario, we provide an investigation of the time-domain solution.
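For orientation, the surface-gravity route to the Hawking temperature mentioned in this abstract is the standard textbook relation (not a result specific to this paper): for a static, spherically symmetric metric with lapse function $f(r)$ and outer horizon at $r=r_h$,

```latex
T_H \;=\; \frac{\kappa}{2\pi}, \qquad
\kappa \;=\; \frac{1}{2}\,f'(r)\Big|_{r=r_h},
\qquad
S_{\rm BH} \;=\; \frac{A}{4}
```

in units $G=c=\hbar=k_B=1$; the correction introduced via the first law in the abstract is precisely what keeps the area law $S_{\rm BH}=A/4$ intact for the modified solution.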
The Unruh effect is one of the first calculations of what one would see when transiting between an inertial reference frame with its quantum field vacuum state and a non-inertial (specifically, uniformly accelerating) reference frame. The inertial reference frame's vacuum state would not correspond to the vacuum state of the non-inertial frame, and the observer in that frame would see radiation, with a corresponding Bose distribution and a temperature proportional to the acceleration (in natural units). In this paper, I compute the response of this non-inertial observer to a single frequency mode in the inertial frame and deduce that, indeed, the cumulative distribution (over the observer's proper time) of frequencies observed by the accelerating observer would be the Bose distribution with a temperature proportional to the acceleration. The conclusion is that the Unruh effect (and the related Hawking effect) is generic, in that it would appear with any incoming incoherent state, and the Bose distribution is obtained as a consequence of the non-inertial frame's motion, rather than some special property of the quantum vacuum. Through the analysis of a coherent set of signals, I show how to extract information from the spectrum that an accelerated observer would see (as well as from the radiation from a black hole).
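For reference, the standard Unruh result this abstract builds on (a textbook relation, not new to the paper): a uniformly accelerating observer with proper acceleration $a$ sees the inertial vacuum as a thermal Bose distribution,

```latex
\langle n(\omega)\rangle \;=\; \frac{1}{e^{2\pi\omega/a}-1},
\qquad
T_U \;=\; \frac{a}{2\pi}
\;\;\left(=\frac{\hbar a}{2\pi c k_B}\ \text{in SI units}\right),
```

which makes explicit the proportionality between temperature and acceleration referred to above.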
We demonstrate some shortcomings of "fixing the equations," an increasingly popular remedy for time evolution problems of effective field theories (EFTs). We compare the EFTs and their "fixed" versions to the UV theories from which they can be derived in two cases: K-essence and nonlinear Proca theory. We find that when an EFT breaks down due to loss of hyperbolicity, fixing does not approximate the UV theory well if the UV solution does not quickly settle down to vacuum. We argue that this can be related to the EFT approximation itself becoming invalid, which cannot be rectified by fixing.
Accelerated light has been demonstrated with laser light and diffraction. Within the diffracting field it is possible to identify a portion that carries most of the beam energy and propagates along a curved trajectory, as if it had been accelerated, for instance, by a gravitational field. Here we analyze the effects of this kind of acceleration on the entanglement between twin beams produced in spontaneous parametric down-conversion. Our results show that, under ideal conditions, acceleration does not significantly affect entanglement. The optical scheme introduced here can be useful for understanding processes at the boundary between gravitation and quantum physics.
We study the soft theorems for photons and gravitons at finite temperatures using the thermofield dynamics approach. The soft factors lose universality at finite temperatures as the soft amplitudes depend on the nature (or spin) of the particles participating in the scattering processes. However, at low temperatures, a universal behavior is observed in the cross-section of the soft processes. Further, we obtain the thermal contribution to the electromagnetic and gravitational memory effects and show that they are related to the soft factors consistently. The expected zero temperature results are obtained from the soft factors and memories. The thermal effects in soft theorems and memories seem to be sensitive to the spin of the particles involved in scattering.
We propose a new family of $\mathsf{U}(1)$ duality-invariant models for nonlinear ${\cal N}=1$ supersymmetric electrodynamics coupled to supergravity. It includes the Cribiori-Farakos-Tournoy-van Proeyen supergravity-matter theory for spontaneously broken local supersymmetry with a novel Fayet-Iliopoulos term without gauged $R$-symmetry. We present superconformal duality-invariant models, as well as new $\mathsf{U}(1)$ duality-invariant models for spontaneously broken local supersymmetry.
We study the linear stability of vacuum static, spherically symmetric solutions to the gravitational field equations of the Bergmann-Wagoner-Nordtvedt class of scalar-tensor theories (STT) of gravity, restricting ourselves to nonphantom theories, massless scalar fields and configurations with positive Schwarzschild mass. We consider only small radial (monopole) perturbations as the ones most likely to cause an instability. The problem reduces to the same Schroedinger-like master equation as is known for perturbations of Fisher's solution of general relativity (GR), but the corresponding boundary conditions that affect the final result of the study depend on the choice of the STT and a particular solution within it. The stability or instability conclusions are obtained for the Brans-Dicke, Barker and Schwinger STT as well as for GR nonminimally coupled to a scalar field with an arbitrary parameter $\xi$.
It was shown that Yang-Mills instantons on an internal space can trigger the expansion of our four-dimensional universe as well as the dynamical compactification of the internal space. We generalize the instanton-induced inflation and dynamical compactification to general Einstein manifolds with positive curvature and also to the FLRW metric with spatial curvature. We explicitly construct Yang-Mills instantons on all Einstein manifolds under consideration and find that the homogeneous and isotropic universe is allowed only if the internal space is homogeneous. We then consider the FLRW metric with spatial curvature as a solution of the eight-dimensional Einstein-Yang-Mills theory. We find that the open universe $(k=-1)$ admits bouncing solutions, unlike the other cases $(k=0, +1)$.
We construct the first analytic examples of self-gravitating anisotropic merons in the Einstein-Yang-Mills-Chern-Simons theory in three dimensions. The gauge field configurations have different meronic parameters along the three Maurer-Cartan $1$-forms and they are topologically nontrivial as the Chern-Simons invariant is nonzero. The corresponding backreacted metric is conformally a squashed three-sphere. The amount of squashing is related to the degree of anisotropy of the gauge field configurations that we compute explicitly in different limits of the squashing parameter. Moreover, the spectrum of the Dirac operator on this background is obtained explicitly for spin-1/2 spinors in the fundamental representation of $SU(2)$, and the genuine non-Abelian contributions to the spectrum are identified. The physical consequences of these results are discussed.
We present the Quantum Spectral Method for the analytical calculation of observables in classical periodic and quasi-periodic systems. It is based on a novel application of the correspondence principle, in which classical Fourier coefficients are obtained as the $\hbar\rightarrow 0$ limit of quantum matrix elements. Our method is particularly suited for self-force calculations for inspiralling bodies, where it could, for the first time, provide fully analytical expressions. We demonstrate our method by calculating an adiabatic electromagnetic inspiral, along with its associated radiation, at all orders in the multipole expansion.
The local two-dimensional Poincar\'e algebra near the horizon of an eternal AdS black hole, or in proximity to any bifurcate Killing horizon, is generated by the Killing flow and outward null translations on the horizon. In holography, this local Poincar\'e algebra is reflected as a pair of unitary flows in the boundary Hilbert space whose generators under modular flow grow and decay exponentially with a maximal Lyapunov exponent. This is a universal feature of many geometric vacua of quantum gravity. To explain this universality, we show that a two-dimensional Poincar\'e algebra emerges in any quantum system that has von Neumann subalgebras associated with half-infinite modular time intervals (modular future and past subalgebras) in a limit analogous to the near-horizon limit. In ergodic theory, quantum dynamical systems with future or past algebras are called quantum K-systems. The surprising statement is that modular K-systems are always maximally chaotic. Interacting quantum systems in the thermodynamic limit and large $N$ theories above the Hawking-Page phase transition are examples of physical theories with future/past subalgebras. We prove that the existence of (modular) future/past von Neumann subalgebras also implies a second law of (modular) thermodynamics and the exponential decay of (modular) correlators. We generalize our results from the modular flow to any dynamical flow with a positive generator and interpret the positivity condition as quantum detailed balance.
In this study, we construct a 1+1-dimensional, relativistic, free, complex scalar Quantum Field Theory on the noncommutative spacetime known as lightlike $\kappa$-Minkowski. The associated $\kappa$-Poincar\'e quantum group of isometries is triangular, and its quantum R matrix enables the definition of a braided algebra of N points that retains $\kappa$-Poincar\'e invariance. Leveraging our recent findings, we can now represent the generators of the deformed oscillator algebra as nonlinear redefinitions of undeformed oscillators, which are nonlocal in momentum space. The deformations manifest at the multiparticle level, as the one-particle states are identical to the undeformed ones. We successfully introduce a covariant and involutive deformed flip operator using the R matrix. The corresponding deformed (anti-)symmetrization operators are covariant and idempotent, allowing for a well-posed definition of multiparticle states, a result long sought in Quantum Field Theory on $\kappa$-Minkowski. We find that P and T are not symmetries of the theory, although PT (and hence CPT) is. We conclude by noticing that identical particles appear distinguishable in the new theory, and discuss the fate of the Pauli exclusion principle in this setting.
Phase transitions of Einstein-Gauss-Bonnet black holes are studied using Duan's $\phi$-field topological current theory, where black holes are treated as topological defects in the thermodynamic parameter space. The kinetics of thermodynamic defects are studied using Duan's bifurcation theory. In this picture, a first-order phase transition between small/large black hole phases is interpreted as the interchange of winding numbers between the defects as a result of some action at a distance. We observe a first-order phase transition between small/large black holes for $D=5$ Gauss-Bonnet theory similar to Reissner-Nordstr\"{o}m black holes in AdS space. This implies that these black hole solutions share the same topology and phase structure. We have also studied the phase transition of neutral black holes in $D\geq 6$ and found a transition between unstable small and large stable black hole phases similar to the case of neutral black holes in AdS space. Recently, it has been conjectured that black holes with similar topological structure exhibit the same thermodynamic properties. Our results strengthen the conjecture by connecting the topological nature of black holes to phase transitions.
Cosmologies of the lower Bianchi types, i.e. except those of type VIII or IX, admit a two-dimensional Abelian subgroup of the isometry group, the $G_2$. In orthogonal perfect fluid cosmologies of all lower Bianchi types except for type VI$_{-1/9}$ the $G_2$ acts orthogonally-transitively, which is closely related to an eventual cessation of the oscillations and thus to a quiescent singularity. But due to a degeneracy in the momentum constraints, such cosmologies of type VI$_{-1/9}$ do not necessarily have this property. As a consequence, the dynamics of type VI$_{-1/9}$ orthogonal perfect fluid cosmologies have the same degrees of freedom as those of the higher types VIII and IX and their dynamics are expected to be markedly different compared to those of the other lower Bianchi types. In this article we take a different approach to quiescence, namely the presence of an orthogonal stiff fluid. On the one hand, this completes the analysis of the initial singularity for all Bianchi orthogonal stiff fluid cosmologies. On the other hand, this allows us to get a grasp of the underlying dynamics of type VI$_{-1/9}$ perfect fluid cosmologies, in particular the effect of orthogonal transitivity as well as possible (asymptotic) polarization conditions. In particular, we show that a generic type VI$_{-1/9}$ cosmology with an orthogonal stiff fluid has similar asymptotics as a generic Bianchi type VIII or IX cosmology with an orthogonal stiff fluid. The only exceptions to this genericity are solutions satisfying (asymptotic) polarization conditions, and solutions for which the $G_2$ acts orthogonally-transitively. Only in those cases may the limits of the eigenvalues of the expansion-normalized Weingarten map be negative.
We calculate the corrections due to the noncommutativity of space on the Hamiltonian and, in turn, on the partition function of the canonical ensemble. We study some basic features of statistical mechanics and thermodynamics, including the equipartition and virial theorems and energy fluctuations (i.e., the correspondence with the microcanonical ensemble), in the framework of the noncommutative canonical ensemble. The corrections imposed by the noncommutativity of space are derived and the results are discussed.
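One common way to set up such corrections (a standard construction, though not necessarily the authors' exact scheme) is via the Bopp shift: with noncommuting coordinates $[\hat{x}_i,\hat{x}_j]=i\theta_{ij}$, one substitutes

```latex
\hat{x}_i \;\to\; x_i - \frac{\theta_{ij}\,p_j}{2\hbar},
\qquad
H(x,p) \;\to\; H\!\left(x - \frac{\theta p}{2\hbar},\, p\right),
\qquad
Z(\beta) \;=\; \mathrm{Tr}\, e^{-\beta H} \;=\; Z_0 + \mathcal{O}(\theta),
```

so the $\theta$-dependence enters the canonical partition function, and hence all derived thermodynamic quantities, as perturbative corrections.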
Significant evidence exists for the apparent disappearance of electron-type neutrinos in radioactive source experiments. Yet, interpreted within the standard `3+1 sterile neutrino scenario', precision short-baseline measurements of electron antineutrinos from nuclear reactors strongly disagree with these results. Recently, it has been demonstrated that allowing for a finite wavepacket size for the reactor neutrinos can ameliorate this tension; however, the smallness of the required wavepackets is a subject of intense debate. In this work, we demonstrate that a `broad' sterile neutrino may relax this tension in much the same way. Such a phenomenological possibility can arise in plausible hidden sector scenarios, such as a clockwork-style sector, for which we provide a concrete microscopic model.
The top quark, as the heaviest particle in the Standard Model (SM), defines an important mass scale for Higgs physics and the electroweak scale itself. It is therefore a well-motivated degree of freedom which could reveal the presence of new interactions beyond the SM. Correlating modifications of the top-Higgs interactions in the 2-Higgs-Doublet Model (2HDM), we analyse effective field theory deformations of these interactions from the point of view of a strong first-order electroweak phase transition (SFOEWPT). We show that such modifications are compatible with current Higgs data and that an SFOEWPT can be tantamount to a current overestimate of the sensitivity of exotic Higgs searches at the LHC in $t\bar t$ and four-top-quark final states. We argue that these searches remain robust from the point of view of accidental signal-background interference, so that the current experimental strategy might well lead to 2HDM-like discoveries in the near future.
Strong evidence for the breaking of lepton universality has recently been produced at the LHCb [Nature Physics, 18:277, 2022]. Beyond-Standard-Model physics introducing scalar leptoquarks may explain this observation. Such models give rise to low-energy parity- and time-reversal-violating phenomena in atoms and molecules. One of the leading effects among these phenomena is the nucleon-electron tensor-pseudotensor interaction when the low-energy experimental probe uses a quantum state of an atom or molecule predominantly characterized by closed electron shells. In the present paper the molecular interaction constant for the nucleon-electron tensor-pseudotensor interaction in the thallium-fluoride molecule -- used as such a sensitive probe by the CeNTREX collaboration [Quantum Sci. Technol., 6:044007, 2021] -- is calculated employing highly-correlated relativistic many-body theory. Accounting for up to quintuple excitations in the wavefunction expansion, the final result is $W_T(\text{Tl}) = -6.25 \pm 0.31\, $[$10^{-13} {\langle\Sigma\rangle}_A$ a.u.]. Interelectron correlation effects on the tensor-pseudotensor interaction are studied for the first time in a molecule, and a common framework for the calculation of such effects in atoms and molecules is presented.
We study two-loop corrections to the scattering amplitude of four massive leptons in quantum electrodynamics. These amplitudes involve previously unknown elliptic Feynman integrals, which we compute analytically using the differential equation method. In doing so, we uncover the details of the elliptic geometry underlying this scattering amplitude and show how to exploit its properties to obtain compact, easy-to-evaluate series expansions that describe the scattering of four massive leptons in QED in the kinematical regions relevant for Bhabha and M{\o}ller scattering processes.
In this paper we confront the next-to-leading order (NLO) CGC/saturation approach of Ref. [1] with the combined experimental HERA data and obtain its parameters. The model includes two features that are in accordance with our theoretical knowledge of deep inelastic scattering: $i$) the use of an analytical solution of the non-linear Balitsky-Kovchegov (BK) evolution equation, and $ii$) the exponential dependence of the saturation momentum on the impact parameter $b$, characterized by $Q_s \propto\exp( -m\, b )$, which reproduces the correct behaviour of the scattering amplitude at large $b$, in accord with the Froissart theorem. The model results are then compared to data at small $x$ for the proton structure function $F_{2}$, the longitudinal structure function $F_{L}$, the charm structure function $F_2^{c\bar{c}}$, exclusive vector meson ($J/\psi,\phi,\rho$) production and Deeply Virtual Compton Scattering (DVCS). We obtain good agreement for these processes in a wide kinematic range of $Q^2$ at small $x$. Our results provide a strong guide for finding an approach, based on the Color Glass Condensate/saturation effective theory for high energy QCD, to make reliable predictions from first principles, as well as for forthcoming experiments like the Electron-Ion Collider and the LHeC.
We review the two-zero mass matrix textures approach for Dirac neutrinos using the most recent global fit of the oscillation parameters. We find that three of the 15 possible textures are compatible with current experimental data, while the remaining two-zero textures are ruled out. Two of the allowed textures are consistent with the normal hierarchy of neutrino masses and are CP-conserving, while the third is compatible with both mass orderings and allows for CP violation. We also present the correlations between the oscillation parameters for the allowed two-zero textures.
One of the most straightforward extensions of the standard model (SM) is the addition of a second Higgs doublet, giving the two-Higgs-doublet models (THDM). In the type-I model, the additional Higgs doublet does not couple to any fermion via Yukawa interactions. Considering various theoretical and phenomenological constraints, we find the best-fit parameter set of the type-I model using the minimum-$\chi^2$ method by scanning the model's parameter space. We show that this optimal parameter set can be probed by precisely analyzing the decay processes $D^+ \rightarrow \mu^+ \nu_\mu$ and $B^0 \rightarrow K^* \mu^+ \mu^-$. Moreover, the decay channels $h \rightarrow s\bar{s}$, $W^+W^-$ and $gg$ can be used to distinguish the model from the SM. For a direct search at future colliders, we propose an investigation of the $e^+e^- \rightarrow t\bar{t} b\bar{b}$ process at the ILC. Including the initial-state-radiation correction and applying appropriate background cuts, the calculated scattering cross section shows that it is feasible to observe a clear and unique signal in the $t\bar{b}$ invariant mass distribution corresponding to charged Higgs pair creation.
The quantization of a massive spin $1/2$ field that satisfies the Klein-Gordon equation is studied. The framework is consistent, provided it is formulated as a pseudo-hermitian quantum field theory by the redefinition of the field dual and the identification of an operator that modifies the internal product of states in Hilbert space to preserve a real energy spectrum and unitary evolution. Since the fermion field has mass dimension one, the theory admits renormalizable fermion self-interactions.
The spontaneous breaking of global lepton number, an accidental symmetry of the Standard Model of particle physics, results in a massless Goldstone boson, the Majoron, which can serve as a cold dark matter candidate with properties similar to those of the axion. In this letter, we propose a novel mass-generation mechanism for the Majoron via radiative corrections induced by a tiny lepton-number-violating (LNV) Majorana mass term of the right-handed neutrinos in the canonical seesaw mechanism. We show that this LNV Majorana mass term not only generates the mass of the Majoron but also leads to a non-zero initial velocity of the Majoron field, which subsequently impacts the relic abundance of Majorons generated in the early universe via the misalignment mechanism. With the assistance of the Weinberg operator, the same initial velocity may also generate a lepton asymmetry, which is subsequently transferred into the baryon asymmetry of the universe (BAU) via the weak sphaleron process. As a result, the neutrino masses, dark matter and the BAU can all be addressed in this concise theoretical framework.
This search for Magnetic Monopoles (MMs) and High Electric Charge Objects (HECOs) with spins 0, 1/2 and 1 uses, for the first time, the full MoEDAL detector, exposed to 6.6 fb^-1 of 13 TeV proton-proton collisions. The results are interpreted in terms of Drell-Yan and photon-fusion pair production. Mass limits on the direct production of MMs carrying up to 10 Dirac magnetic charges and of HECOs with electric charge in the range 5e to 350e were achieved. The charge limits placed on MM and HECO production are currently the strongest in the world. MoEDAL is the only LHC experiment capable of being directly calibrated for highly-ionizing particles using heavy ions, and it has a detector system dedicated to definitively measuring magnetic charge.
We present a novel information-theoretic framework, termed TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of information bottleneck and bottleneck-based networks in the auto-encoding setting and identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. The TURBO framework is then introduced, providing a comprehensive derivation of its core concept: the maximisation of mutual information between various data representations, expressed in two directions reflecting the information flows. We illustrate that numerous prevalent neural network models are encompassed within this framework. The paper underscores the insufficiency of the information bottleneck concept in elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and the structure of neural network models, enabling more efficient and versatile applications.
We present results from an ongoing project concerned with the computation of $\mathcal{O}(1/m_Q)$ and $\mathcal{O}(1/m_Q^2)$ relativistic corrections to the static potential. These corrections are extracted from Wilson loops with two chromo-field insertions. We use gradient flow, which allows us to renormalize the inserted fields and leads to a significantly improved signal-to-noise ratio, providing access to loops with large spatial and temporal extents.
Motivated by the recent lattice QCD study of the $DD^*$ interaction at unphysical quark masses, we perform a theoretical study of the $DD^*$ interaction in covariant chiral effective field theory (ChEFT). In particular, we calculate the relevant leading-order two-pion exchange contributions. The results compare favorably with the lattice QCD results, supporting the conclusion that the intermediate-range $DD^*$ interaction is dominated by two-pion exchanges and the one-pion exchange contribution is absent. At a quantitative level, the covariant ChEFT results agree better with the lattice QCD results than their non-relativistic counterparts, showing the relevance of relativistic corrections in the charm sector.
We use renormalization group summed perturbation theory (RGSPT) to improve perturbation series in quantum chromodynamics in the determination of some of the standard model parameters.
Collider observations have mainly been studied on an event-by-event basis, leveraging several kinematic techniques. However, the intrinsic topological imprints of the ensemble of new physics events can be strikingly different from the SM background ensemble. Traditional topological data analysis (TDA) is known for its stability against small perturbations. However, a plethora of rich information encoded in the clustering of ensembles is often lost due to the unweighted filtration of simplicial complexes. Taking a singlet-extended model as an example, this work illustrates the rich global properties associated with the so-called distance-to-measure (DTM) filtration on Alpha complexes using weights determined from k-nearest neighbours.
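As a concrete illustration of the distance-to-measure underlying such a weighted filtration, here is a minimal numpy sketch of the empirical DTM (the root-mean-square distance from a query point to its $k$ nearest neighbours in the cloud); the toy point cloud and parameter choices are illustrative, not taken from the paper:

```python
import numpy as np

def dtm(points, queries, k):
    """Empirical distance-to-measure: for each query point, the root-mean-square
    distance to its k nearest neighbours in the point cloud."""
    # pairwise distances between each query and every cloud point
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    # keep the k smallest distances per query
    knn = np.sort(d, axis=1)[:, :k]
    return np.sqrt(np.mean(knn**2, axis=1))

# toy cloud: points on a unit ring, probed at an on-ring point and a far outlier
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
cloud = np.c_[np.cos(angles), np.sin(angles)]
queries = np.array([[1.0, 0.0], [5.0, 5.0]])
vals = dtm(cloud, queries, k=10)
# the on-ring query has a much smaller DTM than the outlier, so a DTM-weighted
# filtration suppresses sparse/outlying regions relative to dense structure
```

Because the DTM averages over $k$ neighbours, it is robust to individual noisy points, which is what restores the lost clustering information that an unweighted filtration discards.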
Production of heavy fermions in ultraperipheral collisions ($pp\to p+\gamma\gamma+p\to p+\chi^{+}\chi^{-}+p$) and the semiexclusive reaction ($pp \to p+\gamma\gamma^{*}+X \to p+\chi^{+}\chi^{-}+X$) is considered. Differential and total cross sections for the energies of the planned $pp$ colliders are presented.
We determine the pure-gauge $\mathrm{SU}(3)$ topological susceptibility slope $\chi^\prime$, related to the next-to-leading-order term of the momentum expansion of the topological charge density 2-point correlator, from numerical lattice Monte Carlo simulations. Our strategy consists in performing a double-limit extrapolation: first we take the continuum limit at fixed smoothing radius, then we take the zero-smoothing-radius limit. Our final result is $\chi^\prime = [17.1(2.1)~\mathrm{MeV}]^2$. We also discuss a theoretical argument to predict its value in the large-$N$ limit, which turns out to be remarkably close to the obtained $N=3$ lattice result.
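For context, the quantity computed here follows the standard definition (sign and normalization conventions vary in the literature): writing the momentum-space correlator of the topological charge density $q(x)$ as

```latex
\tilde{G}(p^2) \;=\; \int d^4x\; e^{ip\cdot x}\,
\langle q(x)\, q(0)\rangle \;=\; \chi \;+\; p^2\,\chi' \;+\; \mathcal{O}(p^4),
```

the susceptibility slope $\chi'$ is the coefficient of the next-to-leading term of the momentum expansion, with $\chi$ the usual topological susceptibility.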
The thermal QCD dilepton production rate is calculated at next-to-leading order in the strong coupling and at finite baryon chemical potential. The two-loop virtual photon self-energy is evaluated using finite temperature field theory and combined consistently with the self-energy in the Landau-Pomeranchuk-Migdal regime. We present new results for a dense baryonic plasma. The rates are then integrated using (3+1)-dimensional fluid-dynamical simulations calibrated to reproduce hadronic experimental results obtained at RHIC at energies ranging from those of the Beam Energy Scan to $\sqrt{s_{_{\rm NN}}} = 200$ GeV. We elaborate on the ability for dileptons to relay information about the plasma baryonic content and temperature.
Embedding symmetries in the architectures of deep neural networks can improve classification and network convergence in the context of jet substructure. These results hint at the existence of symmetries in jet energy depositions, such as rotational symmetry, arising from physical features of the underlying processes. We introduce a new family of jet observables, Jet Rotational Metrics (JRMs), which provide insights into the degree of discrete rotational symmetry of a jet. We show that JRMs are formidable jet discriminants and that they capture information not efficiently captured by traditional jet observables, such as N-subjettiness and Energy-Flow Polynomials.
The possibility of forming pion-sigma meson field vortices in a rotating empty vessel (in vacuum) and in pion-sigma Bose-Einstein condensates at a dynamically fixed particle number is studied within the linear $\sigma$ model at zero temperature. The charged $\sigma\pi^{\pm}$ and neutral $\sigma\pi^0$ complex field ans\"atze (models 1 and 2) are studied. We first analyse which of these configurations is energetically favorable for a system at rest. We then find the conditions under which a chiral field storm can arise in a rotating empty vessel. In model 1 we demonstrate the important role played by the electric field: its appearance allows for the formation of a supervortex (a vortex with large angular momentum) when the empty vessel rotates with an overcritical angular velocity. The influence of external electric and magnetic fields is also studied. Field configurations in the presence and absence of the meson self-interaction are found in both models. We then show that the description of the charged (in model 1) and neutral (in model 2) rotating pion-sigma Bose-Einstein condensates is analogous to that of Bose-Einstein condensates in cold atomic gases. Various field configurations such as vortex lines, rings and spirals are discussed. Conditions for the existence of rigid-body rotation of the vortex lattice are then analysed. Observational effects of vortex fields in rotating vessels, in energetic heavy-ion collisions, and in rotating superheavy nuclei and nuclearites are discussed.
In this paper, we show that the common hard kernel of double-log-type factorization for certain space-like parton correlators in the context of lattice parton distributions, the {\it heavy-light Sudakov hard kernel} or, equivalently, the {\it axial gauge quark field renormalization factor}, has a linear infrared (IR) renormalon in its phase angle. We explicitly demonstrate how this IR renormalon correlates with ultraviolet (UV) renormalons of next-to-leading power soft contributions in two explicit examples: transverse momentum dependent (TMD) factorization of the quasi wave function amplitude and threshold factorization of the quark quasi-PDF. Theoretically, the pattern of renormalon cancellation complies with general expectations for perturbative series induced by marginal perturbations of a UV fixed point. Practically, this linear renormalon explains the slow convergence of the imaginary parts observed in lattice extractions of the Collins-Soper kernel and has the potential to reduce numerical uncertainty.
Working to all orders in dimensionally-regularized QCD, we study the radiation of a photon whose energy is much lower than that of the external partons, but much larger than the masses of some quarks. We argue that the conventional soft photon theorem receives corrections at leading power in the photon energy, associated with soft virtual loops of massless fermions. These additive corrections take the form of an overall factor multiplying the non-radiative amplitude that is infrared finite and real to all orders in $\alpha_s$. Based on recent calculations of the three-loop soft gluon current, we identify the lowest-order, three-loop correction.
The framed standard model (FSM), constructed to explain, with some success, why there should be three and apparently only three generations of quarks and leptons in nature, falling into a hierarchical mass and mixing pattern, also suggests, among other things, a scalar boson $U$ with mass around 17 MeV and small couplings to quarks and leptons, which might explain the $g-2$ anomaly reported in experiment. The $U$ arises in the FSM initially as a state in the predicted `hidden sector' with mass around 17 MeV, which mixes with the standard model (SM) Higgs $h_W$, thereby acquiring a coupling to quarks and leptons and a mass just below 17 MeV. The initial purpose of the present paper is to check whether this proposal is compatible with experiments on semileptonic decays of $K$s and $B$s, where the $U$ can also appear. We find the answer to be affirmative, in that the contribution of $U$ to new physics as calculated in the FSM remains within the experimental bounds, but only if $m_U$ lies within a narrow range just below the unmixed mass. As a result, one obtains the estimate $m_U \sim 15 - 17$ MeV for the mass of $U$ and, from some further considerations, the estimate $\Gamma_U \sim 0.02$ eV for its width, both of which may be useful for an eventual experimental search. If found, it would be, for the FSM, not just the discovery of a predicted new particle, but the opening of a window into a whole ``hidden sector'' containing at least some, perhaps even the bulk, of the dark matter in the universe.
We discuss the constraints on the Higgs sector coming from the requirement that the matter-antimatter asymmetry be generated at the electroweak phase transition. These relate both to a strongly first-order phase transition, necessary for the preservation of the generated baryon asymmetry, and to CP violation, necessary for its generation. This scenario may lead to exotic decays of the Standard Model-like Higgs, a deviation of the di-Higgs production cross section, or CP violation in the Higgs sector. All these aspects are expected to be probed by the LHC as well as by electric dipole moment experiments, among others. Further phenomenological implications are discussed in this short review.
Dileptons produced during heavy-ion collisions represent a unique probe of the QCD phase diagram, conveying information about the state of the strongly interacting system at the moment their parent off-shell photon is created. In this study, we compute thermal dilepton yields from Au+Au collisions at different beam energies, employing a (3+1)-dimensional dynamical framework combined with emission rates that are accurate at next-to-leading order in perturbation theory and include baryon chemical potential dependence. By comparing the effective temperature extracted from the thermal dilepton invariant mass spectrum with the average temperature of the fluid, we offer a robust quantitative validation of dileptons as effective probes of the early quark-gluon plasma stage.
In the framework of a scalar QFT, we evaluate the decay of an initial massive state into two massless particles through a triangle-shaped diagram in which virtual fields propagate. Under certain conditions, the decaying state can be seen as a bound state, and is thus analogous to the neutral pion (a quark-antiquark pair) and to positronium (an electron-positron pair), both of which decay into two photons. While the pion is a relativistic composite object, positronium is a non-relativistic compound close to threshold. We examine similarities and differences between these two types of bound states.
In this work, we consider scattering amplitudes relevant for high-precision Large Hadron Collider (LHC) phenomenology. We analyse the general structure of amplitudes, and we review state-of-the-art methods for computing them. We discuss advantages and shortcomings of these methods, and we point out the bottlenecks in modern amplitude computations. As a practical illustration, we present frontier applications relevant for multi-loop multi-scale processes. We compute the helicity amplitudes for diphoton production in gluon fusion and photon+jet production in proton scattering in three-loop massless Quantum Chromodynamics (QCD). We have adopted a new projector-based prescription to compute helicity amplitudes in the 't Hooft-Veltman scheme. We also rederived the minimal set of independent Feynman integrals for this problem using the differential equations method, and we confirmed their intricate analytic properties. By employing modern methods for integral reduction, we provide the final results in a compact form, which is appropriate for efficient numerical evaluation. Beyond QCD, we have computed the two-loop mixed QCD-electroweak amplitudes for Z+jet production in proton scattering in light-quark-initiated channels, without closed fermion loops. This process provides important insight into the high-precision studies of the Standard Model, as well as into Dark Matter searches at the LHC. We have employed a numerical approach based on high-precision evaluation of Feynman integrals with the modern Auxiliary Mass Flow method. The obtained numerical results in all relevant partonic channels are evaluated on a two-dimensional grid appropriate for further phenomenological applications.
We investigate exclusive $J/\psi$ production at the future Electron-Ion Collider in China by utilizing the eSTARlight event generator. We model the cross section and kinematics by fitting to the world data on $J/\psi$ photoproduction. Projected statistical uncertainties on $J/\psi$ production are based on the design of a central detector consisting of a tracker and a vertex subsystem. The precision of the pseudo-data allows us to probe the near-threshold production mechanism, e.g. the re-scattering effect. The significance of the forward amplitudes is discussed as well. The design and optimization of the detector enhance the potential for exploring the near-threshold region and the realm of high four-momentum transfer squared.
Absorptive corrections, which are known to suppress proton-neutron transitions with large fractional momentum $z \to 1$ in $pp$ collisions, become dramatically strong on a nuclear target, pushing the partial cross sections of leading neutron production to the very periphery of the nucleus. The mechanism of pion and axial-vector $a_1$-meson interference, which successfully explains the observed single-spin asymmetry in polarized $pp \to nX$, is extended to collisions of polarized protons with nuclei. When corrected for nuclear effects, it explains the observed single-spin azimuthal asymmetry of neutrons produced in inelastic events, in which the nucleus violently breaks up. This single-spin asymmetry is found to be negative and nearly $A$-independent.
The neutrinophilic two-Higgs-doublet model is one of the simplest models explaining the origin of tiny Dirac neutrino masses. It introduces a new Higgs doublet with an eV-scale VEV to naturally generate the tiny neutrino masses. Because the charged-scalar decays depend on the same Yukawa coupling, the neutrino oscillation patterns can be probed with the dilepton signature from the decay of the charged scalar $H^\pm$. For example, the normal hierarchy (NH) predicts BR$(H^+\to e^+\nu)\ll$ BR$(H^+\to \mu^+\nu)\approx$ BR$(H^+\to \tau^+\nu)\simeq0.5$ when the lightest neutrino mass is below 0.01 eV, while the inverted hierarchy (IH) predicts BR$(H^+\to e^+\nu)/2\simeq$ BR$(H^+\to \mu^+\nu)\simeq$ BR$(H^+\to \tau^+\nu)\simeq0.25$. By precise measurement of BR$(H^+\to \ell^+\nu)$, one can hope to probe the lightest neutrino mass and the atmospheric mixing angle $\theta_{23}$. Through a detailed simulation of the dilepton signature and the corresponding backgrounds, we find that the 3 TeV CLIC could discover $M_{H^+}\lesssim1220$ GeV for NH and $M_{H^+}\lesssim1280$ GeV for IH. Meanwhile, the future 100 TeV FCC-hh collider could probe $M_{H^+}\lesssim1810$ GeV for NH and $M_{H^+}\lesssim2060$ GeV for IH.
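The quoted hierarchy patterns follow from the standard neutrinophilic-2HDM relation BR$(H^+\to\ell^+\nu)\propto\sum_i m_i^2|U_{\ell i}|^2$ with a massless lightest neutrino (our reading of the model, not spelled out in the abstract). A minimal Python sketch reproduces them; the $|U_{\ell i}|^2$ entries and mass-squared splittings below are approximate global-fit values used only for illustration:

```python
# Hedged sketch: BR(H+ -> l+ nu) ∝ Σ_i m_i^2 |U_li|^2, lightest neutrino massless.
# PMNS |U_li|^2 entries and splittings are approximate global-fit values.
DM21, DM31 = 7.4e-5, 2.5e-3          # mass-squared splittings [eV^2]
# Rows: e, mu, tau; columns: nu_1, nu_2, nu_3 (approximate |U_li|^2).
U2 = [[0.68, 0.30, 0.02],
      [0.11, 0.35, 0.54],
      [0.21, 0.35, 0.44]]

def branching_ratios(m2):
    """Normalized BR(H+ -> l+ nu) for the three lepton flavors; m2 = [m_i^2]."""
    w = [sum(m2[i] * U2[l][i] for i in range(3)) for l in range(3)]
    tot = sum(w)
    return [x / tot for x in w]

br_nh = branching_ratios([0.0, DM21, DM31])          # normal hierarchy, m1 = 0
br_ih = branching_ratios([DM31, DM31 + DM21, 0.0])   # inverted hierarchy, m3 = 0
```

Evaluating `br_nh` gives BR$(e)\approx0.03$ with BR$(\mu)\approx$ BR$(\tau)\approx0.5$, and `br_ih` gives BR$(e)/2\approx$ BR$(\mu)\approx$ BR$(\tau)$, matching the patterns stated above.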
The electromagnetic and gravitational form factors of decuplet baryons are systematically studied with a covariant quark-diquark approach. The model parameters are first determined through a global comparison with lattice calculation results. Then, the electromagnetic properties of the systems, including electromagnetic radii, magnetic moments, and electric quadrupole moments, are calculated. The obtained results are in agreement with experimental measurements and the results of other models. Finally, the gravitational form factors and the mechanical properties of the decuplet baryons, such as mass radii, energy densities, and spin distributions, are also calculated and discussed.
We give a systematic theoretical treatment of linear quantum detectors used in modern high energy physics experiments, including dark matter cavity haloscopes, gravitational wave detectors, and impulsive mechanical sensors. We show how to derive the coupling of signals of interest to these devices, and how to calculate noise spectra, signal-to-noise ratios, and detection sensitivities. We emphasize the role of quantum vacuum and thermal noise in these systems. Finally, we review ways in which advanced quantum techniques -- squeezing, non-demolition measurements, and entanglement -- can be or currently are used to enhance these searches.
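As a generic illustration of the noise-spectrum/SNR machinery such detector treatments rely on, here is a minimal sketch of the textbook matched-filter signal-to-noise ratio for stationary Gaussian noise (a standard relation, not a formula quoted from this abstract; the toy numbers are arbitrary):

```python
import numpy as np

# Matched-filter SNR for a linear detector with stationary Gaussian noise:
# SNR^2 = 4 * ∫_0^∞ |s(f)|^2 / S_n(f) df   (one-sided PSD convention).

def matched_filter_snr(signal_fft, psd_onesided, df):
    """Discrete approximation of the matched-filter SNR."""
    integrand = np.abs(signal_fft) ** 2 / psd_onesided
    return np.sqrt(4.0 * np.sum(integrand) * df)

# Toy example: flat (white) noise PSD plus a narrow-band signal in one bin.
df = 1.0                                   # frequency resolution [Hz]
freqs = np.arange(1, 1001) * df
psd = np.full_like(freqs, 1e-40)           # white-noise PSD [1/Hz]
sig = np.zeros_like(freqs)
sig[500] = 1e-19                           # signal amplitude in a single bin
snr = matched_filter_snr(sig, psd, df)     # = sqrt(4 * 1e-38/1e-40) = 20
```

Squeezing or back-action evasion, mentioned in the abstract, enters this picture by reducing $S_n(f)$ in the band where the signal lives, which raises the SNR for the same signal.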
In recent years, machine learning (ML) techniques have emerged as powerful tools for studying complex many-body systems, encompassing phase transitions in various domains of physics. This mini-review provides a concise yet comprehensive examination of the advances achieved in applying ML to the investigation of phase transitions, with a primary emphasis on those arising in nuclear matter studies.
We present improved predictions for a class of event-shape distributions, called angularities, for the contribution of the effective operator $H\to gg$ to hadronic Higgs decay, which suffers from large perturbative uncertainties. In the framework of Soft-Collinear Effective Theory, logarithmic terms of the distribution are resummed at NNLL$'$ accuracy, for which the two-loop constant of the gluon jet function for angularity is independently determined by a fit to the fixed-order distribution at NLO, corresponding to ${\mathcal{O}}(\alpha_s^2)$ relative to the Born rate. Our determination shows reasonable agreement with the value in a recently released thesis. In the fit, we use an asymptotic form with a fractional power conjectured from recoil corrections at one-loop order, which improves the accuracy of the determination at positive values of the angularity parameter $a$. The resummed distribution is matched to the NLO fixed-order results to make our predictions valid at all angularity values. We also discuss the first moment and a subtracted moment of the angularity as functions of $a$, which allow one to extract information on leading and subleading nonperturbative corrections associated with gluons.
Lattice studies suggest that at zero baryon chemical potential and increasing temperature there are three characteristic regimes in QCD that are connected by smooth analytical crossovers: a hadron gas regime at T < T_ch ~ 155 MeV, an intermediate regime, called the stringy fluid, at T_ch < T < ~ 3 T_ch, and a quark-gluon plasma regime at higher temperatures. These regimes have been interpreted as reflecting different approximate symmetries and effective degrees of freedom. In the hadron gas the effective degrees of freedom are hadrons and the approximate chiral symmetry of QCD is spontaneously broken. The intermediate regime has been interpreted as lacking spontaneous chiral symmetry breaking while exhibiting a new approximate symmetry, chiral spin symmetry, which is not a symmetry of the Dirac Lagrangian but is a symmetry of the confining part of the QCD Lagrangian. The high-temperature regime is the usual quark-gluon plasma, often considered to reflect ``deconfinement'' in some way. This paper explores the behavior of these regimes of QCD as the number of colors in the theory, N_c, gets large. In the large N_c limit the theory is center-symmetric and the notions of confinement and deconfinement are unambiguous. The energy density is O(N_c^0) in the meson gas, O(N_c^1) in the intermediate regime and O(N_c^2) in the quark-gluon plasma regime. In the large N_c limit these regimes may become distinct phases separated by first order phase transitions. The intermediate phase has the peculiar feature that glueballs should exist and have properties that are unchanged from what is seen in the vacuum (up to 1/N_c corrections), while the ordinary dilute gas of mesons with broken chiral symmetry disappears and approximate chiral spin symmetry should emerge.
We perform the renormalization group improved collinear resummation of the photon-gluon impact factors. We construct the resummed cross section for virtual photon-photon ($\gamma^*\gamma^*$) scattering which incorporates the impact factors and BFKL gluon Green's function up to the next-to-leading logarithmic accuracy in energy. The impact factors include important kinematical effects which are responsible for the most singular poles in Mellin space at next-to-leading order. Further conditions on the resummed cross section are obtained by requiring the consistency with the collinear limits. Our analysis is consistent with previous impact factor calculations at NLO, apart from a new term proportional to $C_F$ that we find for the longitudinal polarization. Finally, we use the resummed cross section to compare with the LEP data on the $\gamma^*\gamma^*$ cross section and with previous calculations. The resummed result is lower than the leading logarithmic approximation but higher than the pure next-to-leading one, and is consistent with the experimental data.
We first assemble a full set of equations for the Boltzmann equation in the diffusion approximation (BEDA) for studying thermalization/hydrodynamization as well as the production of massless quarks and antiquarks in out-of-equilibrium systems. In the BEDA, the time evolution of a generic system is characterized by the following space-time dependent quantities: the jet quenching parameter, the effective temperature, and two more quantities for each quark flavor that describe the conversion between gluons and quarks/antiquarks via the $2\leftrightarrow2$ processes. Out of the latter two quantities, an effective net quark chemical potential is defined, which equals the net quark chemical potential after thermal equilibration. We then study thermalization and the production of three flavors of massless quarks and antiquarks in spatially homogeneous systems initially filled only with gluons. A parametric understanding of thermalization and quark production is obtained for initially very dense or very dilute systems, complemented by detailed numerical simulations for intermediate values of the initial gluon occupancy $f_0$. For a wide range of $f_0$, the final equilibration time is found to be about one order of magnitude longer than that in the corresponding pure-gluon systems. Moreover, during the final stage of the thermalization process for $f_0\geq 10^{-4}$, gluons are found to thermalize earlier than quarks and antiquarks, undergoing top-down thermalization.
The leading-order chiral Lagrangian for the baryon octet and decuplet states coupled to Goldstone bosons and external sources contains six low-energy constants. Five of them are fairly well known from phenomenology, but the sixth is practically unknown. This coupling constant provides the strength of the (p-wave) coupling of Goldstone bosons to decuplet states. Its size and even its sign are under debate. The quark model and QCD at a large number of colors provide predictions, but some recent phenomenological analyses suggest even an opposite sign for the Delta-pion coupling. The Goldberger-Treiman relation connects this coupling constant to the axial charge of the Delta baryon. This suggests a Wu-type experiment to determine the unknown low-energy constant. While this is not feasible in the Delta sector because of the large hadronic width of the Delta, there is a flavor-symmetry-related process that is accessible: the weak semileptonic decay of the Omega baryon to a spin-3/2 cascade baryon. A broad research program is suggested that can pin down at least the rough size and the sign of the last unknown low-energy constant of the leading-order Lagrangian. It encompasses experimental measurements, in particular of the forward-backward asymmetry of the semileptonic decay, together with a determination of the quark-mass dependences using lattice QCD for the narrow decuplet states and chiral perturbation theory to extrapolate to the Delta sector. Besides discussing the strategy of the research program, the present work provides a feasibility check based on a simple leading-order calculation.
Measuring the total charm cross section is important for comparison with the highest-precision theoretical predictions available for charm today, which are completely known up to NNLO QCD for the total inclusive cross section. These predictions are also independent of charm fragmentation, while practical measurements of charm hadrons in a fiducial phase space are not. Recently the LHC experiments have reported non-universality of charm fragmentation: e.g. charm baryon-to-meson ratios differ between collision systems, and the related production fractions also depend on transverse momentum. This breaks the charm fragmentation universality that was assumed until recently for the extrapolation of experimental measurements to the full total charm cross section phase space. A proposal is made for how to address this non-universality in a data-driven way, without the need to implement any particular non-universal fragmentation model. As a practical example, this method is applied to the extrapolation of published LHC measurements of $D^0$ production at $\sqrt{s}=5$ TeV to the corresponding total charm cross section, fully accounting for charm fragmentation non-universality for the first time. The result, $8.43 ^{+1.05}_{-1.16}(\text{total})$ mb, differs substantially from the one assuming charm fragmentation universality, but still compares well to theoretical QCD predictions up to NNLO.
The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. We continue our study of the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD), avoiding soft-emission approximations. In this paper, we show (i) how the "instantaneous" interactions of Light-Cone Perturbation Theory must be included in the calculation to account for the effects of longitudinally-polarized gauge bosons in intermediate states, and (ii) how to compute virtual corrections to LPM emission rates, which will be necessary in order to make infrared-safe calculations of the characteristics of in-medium QCD showering of high-energy partons. In order to develop these topics in as simple a context as possible, we focus in the current paper not on QCD but on large-$N_f$ QED, where $N_f$ is the number of electron flavors.
We analyze the boson masses and their mixing in the Minimal Supersymmetric $SU(3)_{C}\otimes SU(3)_{L}\otimes U(1)_{N}$ model, and we show that all the numerical results are in agreement with current experimental limits.
Using a (3+1)-dimensional hybrid framework with parametric initial conditions, we study the rapidity-dependent directed flow $v_1(y)$ of identified particles, including pions, kaons, protons, and lambdas, in heavy-ion collisions. We consider Au+Au collisions at $\sqrt{s_{\rm NN}}$ ranging from 7.7 to 200 GeV. The dynamics in the beam direction is constrained using the measured pseudo-rapidity distribution of charged particles and the net-proton rapidity distribution. Within this framework, the directed flow of mesons is driven by the sideward pressure gradient of the tilted source, while that of baryons arises mainly from the initial asymmetric baryon distribution with respect to the beam axis, driven outward by the transverse expansion. Our approach successfully reproduces the rapidity and beam-energy dependence of $v_1$ for both mesons and baryons. We find that the $v_1(y)$ of baryons has strong constraining power on the initial baryon stopping, and, together with that of mesons, the directed flow probes the equation of state of dense nuclear matter at finite chemical potentials.
I provide a new idea based on geometric analysis to obtain a positive mass gap in pure non-abelian renormalizable Yang-Mills theory. The orbit space, that is, the space of connections of Yang-Mills theory modulo gauge transformations, is equipped with a Riemannian metric that naturally arises from the kinetic part of the reduced classical action and admits a positive definite sectional curvature. The corresponding regularized \textit{Bakry-\'Emery} Ricci curvature (if positive) is shown to produce a mass gap for $2+1$ and $3+1$ dimensional Yang-Mills theory, assuming the existence of a quantized Yang-Mills theory on $(\mathbb{R}^{1+2},\eta)$ and $(\mathbb{R}^{1+3},\eta)$, respectively. My result on the gap calculation, to be regarded at least as a heuristic one, applies to non-abelian Yang-Mills theory with any compact semi-simple Lie group in the aforementioned dimensions. In $2+1$ dimensions, the square of the Yang-Mills coupling constant $g^{2}_{YM}$ has the dimension of mass, and therefore the spectral gap of the Hamiltonian is essentially proportional to $g^{2}_{YM}$, with the proportionality constant being purely numerical, as expected. Due to the dimensional restriction on $3+1$ dimensional Yang-Mills theory, it seems one ought to introduce a length scale to obtain an energy scale. It turns out that a certain `trace' operation on the infinite-dimensional geometry naturally introduces a length scale, which has to be fixed by measuring the energy of the lowest glueball state. However, this remains to be understood in a rigorous way.
By employing the nonlinear Regge trajectory relation $M=m_R+\beta_x(x+c_{0x})^{2/3}\,\,(x=l,\,n_r)$, we investigate heavy-heavy systems such as doubly heavy diquarks, doubly heavy mesons, heavy-heavy baryons, and heavy-heavy tetraquarks. The fitted Regge trajectories illustrate that these heavy-heavy systems satisfy the above formula, indicating the existence of a universal description of heavy-heavy systems. The universality is embodied not only in the universal behavior $M{\sim}x^{2/3}$ but also in the universal parameters. The values of $c_{fn_r}$ and $c_{fl}$ vary among the different heavy-heavy systems, but they are all close to one. The inequality $\beta_{n_r}>\beta_{l}$ holds for all the heavy-heavy systems. Moreover, the expression for $\beta_x$ [Eq. (11)] explains its variation with the constituents' masses.
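To make the $M\sim x^{2/3}$ scaling concrete, the quoted trajectory relation can be evaluated in a few lines of Python; the parameter values below are purely illustrative placeholders, not the paper's fitted values:

```python
# Illustrative sketch of the nonlinear Regge trajectory
# M = m_R + beta_x * (x + c_0x)^(2/3), with x = l or n_r.
# All parameter values are hypothetical placeholders (toy GeV units).

def regge_mass(x, m_R=3.0, beta_x=0.6, c_0x=1.0):
    """Mass of the state with orbital/radial quantum number x."""
    return m_R + beta_x * (x + c_0x) ** (2.0 / 3.0)

# Masses along a toy orbital trajectory: monotonically rising, with
# level spacings that shrink as l grows -- the hallmark of M ~ x^(2/3).
masses = [regge_mass(l) for l in range(5)]
gaps = [b - a for a, b in zip(masses, masses[1:])]
```

The shrinking `gaps` distinguish this concave trajectory from a linear one ($M^2\sim x$, constant $dM^2$ spacing) and reflect the heavy-heavy dynamics the paper attributes to the formula.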
Motivated by the leptonic magnetic dipole moment anomalies as well as the $W$-boson mass excess, we study the DFSZ axion model. Taking theoretical and experimental constraints into account, we show that the muon and electron $g-2$ anomalies can be explained within the parameter space of the model, for extra Higgs bosons with mass spectra around the electroweak scale and almost equal contributions from one- and two-loop diagrams. A negative electron $g-2$ can be achieved by introducing heavy neutrinos. Furthermore, the $W$-boson mass excess can be consistently addressed within a mass range of the matter content testable at collider experiments.
We carry out a systematic analysis of the Cabibbo-favored (CF) and singly Cabibbo-suppressed (SCS) decays $D^0\to VV$, and demonstrate that the long-distance mechanism due to final-state interactions (FSIs) can provide a natural explanation for the polarization puzzles observed experimentally in $D^0\to VV$. Further observables, which can be measured at BESIII and possibly at LHCb, are also suggested.
We consider the problem of including a finite number of scattering centers in the dynamical energy loss and classical DGLV formalisms. Previously, either one or an infinite number of scattering centers were considered in energy loss calculations, while attempts to relax these approximations were largely inconclusive or incomplete. In reality, however, the number of scattering centers is generally estimated to be 4-5 at RHIC and the LHC, making the above approximations (a priori) inadequate and this theoretical problem significant for QGP tomography. We derive explicit analytical expressions for dynamical energy loss and DGLV up to $4^{th}$ order in opacity, resulting in complex mathematical expressions that, to our knowledge, are obtained here for the first time. These expressions are then implemented into an appropriately generalized DREENA framework to calculate the effects of higher orders in opacity on a wide range of high-$p_\perp$ light- and heavy-flavor predictions. Results of an extensive numerical analysis, together with interpretations of nonintuitive results, are presented. We find that, for both RHIC and the LHC, higher-order effects on high-$p_\perp$ observables are small, and the approximation of a single scattering center is adequate for the dynamical energy loss and DGLV formalisms.
Future searches for new physics beyond the Standard Model undoubtedly require a diverse approach, with experiments of complementary sensitivity to different classes of models. One direction that should be explored is feebly interacting particles (FIPs) with masses below the electroweak scale. Interest in FIPs has increased significantly in the last ten years. Searches for FIPs at colliders have intrinsic limitations in the region they can probe, significantly restricting exploration of the mass range $m_{\text{FIP}} < 5-10$\,GeV/c$^2$. Beam-dump-like experiments, characterized by extremely high luminosity at relatively high energies and effective coverage of the production and decay acceptance, are the perfect option for generically exploring the ``coupling frontier'' of light FIPs. Several proposals for beam-dump detectors are currently being considered by CERN for implementation at the SPS ECN3 beam facility. In this paper we analyse in depth how the characteristic geometric parameters of a beam-dump experiment influence the signal yield. We take an inclusive approach, considering the phenomenology of different types of FIPs. From the various production modes and kinematics, we demonstrate that the optimal layout maximising the production and decay acceptance consists of a detector located on the beam axis, at the shortest distance from the target allowed by the systems required to suppress beam-induced backgrounds.
The exotic states $X(3872)$ and $Z_c(3900)$ have long been conjectured to be isoscalar and isovector $\bar{D}^*D$ molecules. In this work, we propose the triangle diagram mechanism to investigate their production in $B$ decays, as well as that of their heavy quark spin symmetry partners $X_2(4013)$ and $Z_c(4020)$. We show that the large isospin breaking of the ratio $\mathcal{B}[B^+ \to X(3872) K^+]/\mathcal{B}[B^0 \to X(3872) K^0]$ can be attributed to the isospin breaking between the neutral and charged $\bar{D}^*D$ components in their wave functions. For the same reason, the branching fractions of $Z_c(3900)$ in $B$ decays are smaller than the corresponding ones of $X(3872)$ by at least one order of magnitude, which naturally explains its non-observation. A hierarchy for the production fractions of $X(3872)$, $Z_c(3900)$, $X_2(4013)$, and $Z_c(4020)$ in $B$ decays, consistent with all existing data, is predicted. Furthermore, with the factorization ansatz, we extract the decay constants of $X(3872)$, $Z_c(3900)$, and $Z_c(4020)$ as $\bar{D}^*D^{(*)}$ molecules via the $B$ decays, and then calculate their branching fractions in the relevant $B_{(s)}$ decays, which turn out to agree with all existing experimental data. The proposed mechanism is useful for elucidating the internal structure of the many exotic hadrons discovered so far and for extracting the decay constants of hadronic molecules, which can be used to predict their production in related processes.
The QED hadronic vacuum polarization function plays an important role in the determination of precision electroweak observables and of the anomalous magnetic moment of the muon. These contributions have been computed from data, by means of dispersion relations applied to electron-positron hadronic cross sections, or by first-principles lattice-QCD computations in the Standard Model. Today there is a discrepancy between the two approaches, which affects the comparison of the measured anomalous magnetic moment of the muon with theoretical predictions. In this article, we revisit the idea that this discrepancy may be explained by the presence of a new light gauge boson that couples to first-generation quarks and leptons and has a mass below the GeV scale. We discuss the requirements for its consistency with observations and the phenomenological implications of such a gauge extension.
We study whether it is possible to use high-$p_\perp$ data/theory to constrain the temperature dependence of the shear viscosity over entropy density ratio $\eta/s$ of the matter formed in ultrarelativistic heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). We use two approaches: i) we calculate the high-$p_\perp$ $R_{AA}$ and flow coefficients $v_2$, $v_3$ and $v_4$ assuming different $(\eta/s)(T)$ for the fluid-dynamically evolving medium; ii) we calculate the quenching strength ($\hat{q}/T^3$) from our dynamical energy loss model and convert it to $\eta/s$ as a function of temperature. It turns out that the first approach cannot distinguish between different $(\eta/s)(T)$ assumptions when the evolution is constrained to reproduce the low-$p_\perp$ data. In contrast, the $(\eta/s)(T)$ calculated using the second approach agrees surprisingly well with the $(\eta/s)(T)$ inferred through state-of-the-art Bayesian analyses of the low-$p_\perp$ data, even in the vicinity of $T_c$, while providing much smaller uncertainties at high temperatures.
I investigate the use of the SU(3) Clebsch-Gordan coefficients in light of the completeness and closure relations. I show that, for $\alpha_V = F/(F+D) \neq 1$, there is an additional interaction: the exchange of a $\rho$ meson between a $\Lambda$ and a $\Sigma^0$ hyperon, which affects only the symmetric coupling. I then calculate these additional coupling constants and show that this restores the completeness and closure of the SU(3) Clebsch-Gordan coefficients for all values of $\alpha_V$. Moreover, it increases the symmetry of the theory, since the baryon octet can now be grouped into four doublets. Finally, I add the new coupling constants to study numerical results for the hyperon onset in dense nuclear matter, treating $\alpha_V$ as a free parameter.
We find a simple spin Hamiltonian to describe physical states of $2+1$ dimensional SU(2) lattice gauge theory on a honeycomb lattice with a truncation of the electric field representation at $j_{\rm max}=\frac{1}{2}$. The simple spin Hamiltonian only contains local products of Pauli matrices, even though Gauss's law has been completely integrated out.
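To illustrate what a Hamiltonian built from "local products of Pauli matrices" means operationally (a generic toy construction, NOT the paper's honeycomb Hamiltonian — the lattice, terms, and couplings below are hypothetical), one can assemble such an operator on a few sites:

```python
import numpy as np

# Generic illustration: a spin Hamiltonian assembled from local products of
# Pauli matrices. Lattice, terms, and couplings are hypothetical placeholders.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli_string(ops, n):
    """Tensor product placing single-site operators ops = {site: op} on n sites."""
    out = np.array([[1.0]])
    for site in range(n):
        out = np.kron(out, ops.get(site, I2))
    return out

n = 4
edges = [(0, 1), (1, 2), (2, 3)]     # toy chain standing in for lattice links
J, h = 1.0, 0.5                      # hypothetical couplings
H = sum(J * pauli_string({i: Z, j: Z}, n) for i, j in edges)
H = H + sum(h * pauli_string({i: X}, n) for i in range(n))
```

The point of the abstract's result is that, after integrating out Gauss's law in the $j_{\rm max}=\tfrac{1}{2}$ truncation, the physical Hamiltonian takes exactly this structural form: a sum of few-site Pauli products, which is directly programmable on qubit hardware.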
The absence of semitauonic decays of charmed hadrons makes the decay processes mediated by the quark-level $c\to d \tau^+ \nu_{\tau}$ transition inadequate for probing generic new physics (NP) with all kinds of Dirac structures. To fill this gap, we consider in this paper the quasielastic neutrino scattering process $\nu_{\tau}+n\to \tau^-+\Lambda_c$, and propose searching for NP through the polarizations of the $\tau$ lepton and the $\Lambda_c$ baryon. In the framework of a general low-energy effective Lagrangian, we perform a comprehensive analysis of the (differential) cross sections and polarization vectors of the process, both within the Standard Model and in various NP scenarios, and scrutinize possible NP signals. We also explore the influence on our findings of the uncertainties in, and the different parametrizations of, the $\Lambda_c \to N$ transition form factors, and show that they have become one of the major challenges to further constraining possible NP through the quasielastic scattering process.
Perturbative calculations predict that the Standard Model (SM) effective potential should have a new minimum, well beyond the Planck scale, much deeper than the electroweak vacuum. As it is not obvious that gravitational effects can become strong enough to stabilize the potential, most authors have accepted the metastability scenario in a cosmological perspective. This perspective is needed to explain why the theory remains trapped in our electroweak vacuum, but requires controlling the properties of matter in the extreme conditions of the early universe. Alternatively, one can consider the completely different idea of a non-perturbative effective potential which, as at the beginning of the SM, is restricted to the pure $\Phi^4$ sector yet consistent with the now existing analytical and numerical studies. In this approach, where the electroweak vacuum is the lowest-energy state, besides the resonance of mass $m_h=125$ GeV defined by the quadratic shape of the potential at its minimum, the Higgs field should exhibit a second resonance with mass $690\pm10({\rm stat})\pm20({\rm sys})$ GeV associated with the zero-point energy determining the potential depth. Despite its large mass, this state would couple to longitudinal $W$s with the same typical strength as the low-mass state at 125 GeV and would represent a relatively narrow resonance of width $\Gamma_H=30$--$38$ GeV, mainly produced at the LHC by gluon-gluon fusion. It is therefore interesting that, in the LHC data, one can find various indications for a new resonance in the expected mass range with non-negligible statistical significance. As this could become an important new discovery with the addition of just two missing samples of Run 2 data, we outline further refinements of the theoretical predictions that could be obtained by implementing unitarity constraints, in the presence of fermion and gauge fields, with the coupled-channel calculations used for meson spectroscopy.
We update the full set of $\bar{B}_s\rightarrow K$ form factors using light-cone sum rules with an on-shell kaon. Our approach determines the relevant sum rule parameters -- the duality thresholds -- from a Bayesian fit for the first time. Using a modified version of the Boyd-Grinstein-Lebed parametrisation, we combine our sum rule results at low momentum transfer $q^2$ with more precise lattice QCD results at large $q^2$. We obtain a consistent description of the form factors in the full $q^2$ range. Applying these results to a recent LHCb measurement of branching ratios for the decays $B_s^0 \to \lbrace K^-, D_s^-\rbrace \mu^+\nu_\mu$, we determine the ratio of Cabibbo-Kobayashi-Maskawa elements $$ \notag \left|\frac{V_{ub}}{V_{cb}}\right|_{q^2<7\; \textrm{GeV}^2} = 0.0681\pm 0.0040 \quad \text{and} \quad \left|\frac{V_{ub}}{V_{cb}}\right|_{q^2>7 \;\textrm{GeV}^2} = 0.0801\pm 0.0047 \ , $$ which are mutually compatible at the $1.9\sigma$ level. We further comment on the sensitivity to Beyond the Standard Model effects through measurements of the shape of $B_s^0 \to K^- \mu^+\nu_\mu$ decays, in light of recent limits on such effects from other exclusive $b\to u\ell\nu$ processes.
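As a quick consistency check of the quoted $1.9\sigma$ compatibility between the two $|V_{ub}/V_{cb}|$ determinations, one can combine the two quoted uncertainties in quadrature (treating them, as a simplifying assumption, as uncorrelated):

```python
import math

# Quoted determinations of |Vub/Vcb| from the abstract, with their uncertainties:
v_low,  e_low  = 0.0681, 0.0040   # from q^2 < 7 GeV^2
v_high, e_high = 0.0801, 0.0047   # from q^2 > 7 GeV^2

# Difference in units of the combined uncertainty (added in quadrature,
# assuming for this estimate that the two errors are uncorrelated):
sigma = abs(v_high - v_low) / math.sqrt(e_low**2 + e_high**2)
print(f"{sigma:.1f} sigma")  # -> 1.9 sigma
```

Any correlation between the two extractions (e.g. shared form-factor inputs) would modify this naive estimate.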
A theory explaining the non-observation of dark matter and the source of dark energy is presented in this letter. By integrating the asymmetrical potential and the Higgs potential, we provide a model with instantaneous symmetry breaking and stable symmetry breaking, resulting in the unobserved dark matter and the observed matter, respectively. Two crucial parameters in this model are the frequency and strength of the symmetry breaking from the vacuum: the former helps explain the impact of the effective mass from the dark matter; the latter determines the source of the dark energy. The expected strength varies from one period to another, causing the accelerating or decelerating expansions of the universe. Considering the expected strength correlated with the vacuum expectation value, and based on the possible variations over time of the measured masses of fundamental particles such as the Z boson, one can perhaps derive the exact stage of the current universe.
The $\Sigma_{c}(2800)^+$ and $\Lambda_c(2940)^+$ states are among the most interesting and intriguing particles whose internal structures have not yet been elucidated. Inspired by this, the magnetic dipole moments of the $\Sigma_{c}(2800)^+$ and $\Lambda_c(2940)^+$ states, with quantum numbers $J^P = \frac{1}{2}^-$ and $J^P = \frac{3}{2}^-$ respectively, are analyzed in the framework of QCD light-cone sum rules, assuming that they are molecular states composed of a nucleon and a $D^{(*)}$ meson. The magnetic dipole moments are obtained as $\mu_{\Sigma_c^{+}}=0.26 \pm 0.05~\mu_N$ and $\mu_{\Lambda_c^{+}}=-0.31 \pm 0.04~\mu_N$. The magnetic dipole moment is the leading-order response of a bound system to a weak external magnetic field, and therefore offers an excellent platform to explore the inner organization of hadrons governed by the quark-gluon dynamics of QCD. Comparison of the findings of this analysis with future experimental results on the magnetic dipole moments of the $\Sigma_{c}(2800)^+$ and $\Lambda_c(2940)^+$ states may shed light on the nature and internal organization of these states. The electric quadrupole and magnetic octupole moments of the $\Lambda_c(2940)^+$ state have also been calculated, and are determined to be $\mathcal Q_{\Lambda_c^+} = (0.65 \pm 0.25) \times 10^{-3}$~fm$^2$ and $\mathcal O_{\Lambda_c^+} = -(0.38 \pm 0.10) \times 10^{-3}$~fm$^3$, respectively. These values indicate a non-spherical charge distribution. We hope that our estimates of the electromagnetic properties of the $\Sigma_{c}(2800)^+$ and $\Lambda_c(2940)^+$ states, together with the results of other theoretical studies of the spectroscopic parameters of these states, will be useful in the search for them in future experiments and will help to pin down their exact internal structures.
We examine $W$ and $Z$ boson pair production processes at electron-positron collider experiments in the $SU(3)_C\times SO(5)_W\times U(1)_X$ gauge-Higgs unification (GHU) model. We find that the deviation of the total cross section for the $e^-e^+\to W^-W^+$ process from the Standard Model (SM) prediction, for GHU parameter sets consistent with current experiments, is about 0.5% to 1.5% at $\sqrt{s}=250$ GeV and 0.6% to 2.2% at $\sqrt{s}=500$ GeV, depending on the initial electron and positron polarization. For the $e^-e^+\to ZZ$ process, the deviation from the SM in the GHU model is at most 1%. We also find that the unitarity bound for the $e^-e^+\to W^-W^+$ process is satisfied in the GHU model, as in the SM, as a consequence of the relationships among the coupling constants.
In this work, we study the flavor-changing neutral-current process $B_{c}\to{D}_{s}^{*}(\to{D}_{s}\pi)\ell^{+}\ell^{-}$ ($\ell = e$, $\mu$, $\tau$). The relevant weak transition form factors are obtained using the covariant light-front quark model, in which the main inputs, the meson wave functions of $B_{c}$ and $D_{s}^{*}$, are taken to be the numerical wave functions obtained by solving the Schr\"{o}dinger equation with the modified Godfrey-Isgur model. With the obtained form factors, we further investigate the relevant branching fractions and their ratios, as well as several angular observables: the forward-backward asymmetry $A_{FB}$, the polarization fractions $F_{L(T)}$, the $CP$-averaged angular coefficients $S_{i}$, and the $CP$ asymmetry coefficients $A_{i}$. We also present results for the clean angular observables $P_{1,2,3}$ and $P^{\prime}_{4,5,6,8}$, which reduce the uncertainties from the form factors. Our results show that the branching fractions of the electron and muon channels can reach the order of $10^{-8}$. With more data being accumulated at the LHCb experiment, our results are helpful for exploring this process and for deepening our understanding of the physics of the $b\to{s}\ell^{+}\ell^{-}$ transition.
The motivation behind exploring jet quenching-like phenomena in small systems arises from the experimental observation of heavy-ion-like behavior of particle production in high-multiplicity proton-proton ($p+p$) collisions. Quantifying jet quenching in $p+p$ collisions is a challenging task, as the magnitude of the nuclear modification factor ($R_{\rm AA}$ or $R_{\rm CP}$), which is used to quantify jet quenching, is influenced by several factors, such as the estimation of centrality and the scaling factor. The most common method of centrality estimation employed by the ALICE collaboration is based on measuring charged-particle multiplicity with the V0 detector situated at forward rapidity. This technique biases the event sample towards hard processes such as multijet final states, making it difficult to study the jet quenching effect in high-multiplicity $p+p$ collisions. In the present article, we propose to explore the use of a new and robust event classifier, flattenicity, which is sensitive to both multiple soft partonic interactions and hard processes. The quantity $\mathcal{P}_{\rm CP}$, analogous to $R_{\rm CP}$, has been estimated for high-multiplicity $p+p$ collisions at $\sqrt{s} = 13$ TeV using the \texttt{PYTHIA8} model, for both V0M (multiplicity classes selected based on the V0 detector acceptance) and flattenicity. The evolution of $\mathcal{P}_{\rm CP}$ with $p_{\rm T}$ shows a heavy-ion-like effect for flattenicity, which is attributed to the selection of softer transverse-momentum particles in high-multiplicity $p+p$ collisions.
In this article, the multi-region structure of the transverse momentum ($p_T$) spectra of identified particles produced in relativistic collisions is studied with the multi-component standard distribution (the Boltzmann, Fermi-Dirac, or Bose-Einstein distribution) in the framework of a multi-source thermal model. The results are interpreted in the framework of string model phenomenology, in which the multiple regions of the $p_T$ spectra correspond to string hadronization in the cascade process of string breaking. The contributions of string hadronization from the first-, second-, and third- (i.e., last-) generations of string breaking mainly form the high-, intermediate-, and low-$p_T$ regions, respectively. From the high- to the low-$p_T$ region, the extracted volume parameter increases rapidly, while the temperature and flow velocity parameters decrease gradually. The multi-region structure of the $p_T$ spectra thus reflects the volume, temperature, and flow velocity dynamics of the system's evolution. Given the successful application of the multi-component standard distribution, this work shows that simple classical theory can still play a significant role in the field of complex relativistic collisions.
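As an illustration of the multi-component standard distribution, a minimal sketch with purely hypothetical parameters (the temperatures, weights, and particle mass below are illustrative choices, not values from the paper) superposes two Boltzmann-type $p_T$ components with different temperatures and relative weights:

```python
import numpy as np

# Hypothetical two-component Boltzmann-type pT spectrum for pions.
m = 0.139                         # pion mass in GeV (illustrative choice)
pT = np.linspace(0.01, 5.0, 500)  # transverse momentum grid, GeV
dp = pT[1] - pT[0]

def boltzmann_pt(pT, m, T):
    """Standard (Boltzmann) midrapidity pT distribution,
    dN/dpT ~ pT * mT * exp(-mT / T), normalized to unit area on the grid."""
    mT = np.sqrt(pT**2 + m**2)    # transverse mass
    f = pT * mT * np.exp(-mT / T)
    return f / (f.sum() * dp)

# A soft source (larger weight, lower T) plus a harder source (smaller weight,
# higher T), mimicking low- and high-pT regions from different string generations:
spectrum = 0.8 * boltzmann_pt(pT, m, 0.12) + 0.2 * boltzmann_pt(pT, m, 0.30)
```

In a real analysis the weights play the role of relative source volumes, and the component parameters would be fitted to data rather than fixed by hand.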
We perform a comprehensive analysis of the scattering of matter and gravitational Kaluza-Klein (KK) modes in five-dimensional gravity theories. We consider scalar, fermion, and vector matter localized on a brane as well as in the bulk of the extra dimension, in an arbitrary warped background. While naive power-counting suggests that there are amplitudes which grow as fast as ${\cal O}(s^3)$ [where $s$ is the center-of-mass scattering energy-squared], we demonstrate that cancellations between the various contributions result in a total amplitude which grows no faster than ${\cal O}(s)$. Extending previous work on the self-interactions of the gravitational KK modes, we show that these cancellations occur due to sum-rule relations between the couplings and the masses of the modes, which can be proven from the properties of the mode equations describing the gravity and matter wavefunctions. We demonstrate that these properties are tied to the underlying diffeomorphism invariance of the five-dimensional theory. We discuss how our results generalize when the size of the extra dimension is stabilized via the Goldberger-Wise mechanism. Our conclusions are of particular relevance for freeze-out and freeze-in relic abundance calculations in dark matter models that include a spin-2 portal arising from an underlying five-dimensional theory.
We study the contributions of the $B\rightarrow \psi(3770)K[\psi(3770)\rightarrow D\bar{D}]$, $B\rightarrow K^*(1410)\pi[K^*(1410)\rightarrow K\pi]$ and $B\rightarrow X(3872)K[X(3872)\rightarrow J/\psi\gamma, \psi(2S)\gamma, D\bar{D}\pi, J/\psi\omega, J/\psi\pi\pi$ and $D\bar{D}^*\pi]$ quasi-two-body decays. There are no previous measurements of the three-body branching fractions for the three final states $X(3872)\rightarrow J/\psi\gamma$, $\psi(2S)\gamma$ and $D\bar{D}\pi$, but several quasi-two-body modes that can decay to these final states have been seen.
We study the heavy-dense limit of lattice QCD, with heavy quarks at high density. The effective three-dimensional theory has a sign problem, which is alleviated by sign optimization, where the path integration domain is deformed into complex space in a way that minimizes the phase oscillations. We simulate the theory via Hybrid Monte Carlo, for different volumes, both to leading order and to next-to-next-to-leading order in the hopping expansion, and show that sign optimization successfully mitigates the sign problem at volumes large enough that the usual re-weighting methods fail. Finally, we show that there is a significant overlap between the complex manifold generated by sign optimization and the Lefschetz thimbles associated with the theory.
We introduce and study a novel class of classical integrable many-body systems obtained by generalized $T\bar{T}$-deformations of free particles. The deformation terms are bilinears in the densities and currents of the continuum of charges counting asymptotic particles of different momenta. In these models, which we dub ``semiclassical Bethe systems'' for their link with the dynamics of Bethe ansatz wave packets, many-body scattering processes are factorised, and the two-body scattering shifts can be set to an almost arbitrary function of momenta. The dynamics is local but inherently different from that of known classical integrable systems. At short scales, the geometry of the deformation is dynamically resolved: either particles are slowed down (more space available), or accelerated via a novel classical particle-pair creation/annihilation process (less space available). The thermodynamics, at both finite and infinite volume, is described by equations of (or akin to) the thermodynamic Bethe ansatz, and at large scales generalized hydrodynamics emerges.
In this paper we compute the celestial operator product expansion between two outgoing positive-helicity gravitons in self-dual gravity. It has been shown that self-dual gravity is a $w_{1+\infty}$-invariant theory whose scattering amplitudes with all positive-helicity gravitons are one-loop exact. Celestial $w_{1+\infty}$ symmetry is generated by an infinite tower of (conformally soft) gravitons which are holomorphic conserved currents. We find that at any given order only a \textit{finite} number of $w_{1+\infty}$ descendants contribute to the OPE. This is somewhat surprising because the spectrum of conformal dimensions in celestial CFT is not bounded from below. However, it is consistent with our earlier analysis based on the representation theory of $w_{1+\infty}$. The phenomenon of truncation suggests that in some (unknown) formulation the spectrum of conformal dimensions in the dual two-dimensional theory can be bounded from below.
We formulate the effective field theory (EFT) of vector-tensor gravity for perturbations around an arbitrary background with a ${\it timelike}$ vector profile, which can be applied to study black hole perturbations. The vector profile spontaneously breaks both the time diffeomorphism and the $U(1)$ symmetry, leaving their combination and the spatial diffeomorphism as the residual symmetries in the unitary gauge. We derive two sets of consistency relations which guarantee the residual symmetries of the EFT. Also, we provide the dictionary between our EFT coefficients and those of generalized Proca (GP) theories, which enables us to identify a simple subclass of the EFT that includes the GP theories as a special case. For this subclass, we consider the stealth Schwarzschild(-de Sitter) background solution with a constant temporal component of the vector field and study the decoupling limit of the longitudinal mode of the vector field, explicitly showing that the strong coupling problem arises due to vanishing sound speeds. This is in sharp contrast to the case of gauged ghost condensate, in which perturbations are weakly coupled thanks to certain higher-derivative terms, i.e., the scordatura terms. This implies that, in order to consistently describe this type of stealth solution within the EFT, the scordatura terms must necessarily be taken into account in addition to those already included in the simple subclass.
In this article, we elucidate the structure and properties of a class of anomalous high-energy states of matter-free $U(1)$ quantum link gauge theory Hamiltonians using numerical and analytical methods. Such anomalous states, known as quantum many-body scars in the literature, have generated a lot of interest due to their athermal nature. Our starting Hamiltonian is $H = \mathcal{O}_{\mathrm{kin}} + \lambda \mathcal{O}_{\mathrm{pot}}$, where $\lambda$ is a real-valued coupling and $\mathcal{O}_{\mathrm{kin}}$ ($\mathcal{O}_{\mathrm{pot}}$) are summed local off-diagonal (diagonal) operators in the electric flux basis acting on the elementary plaquette $\square$. The spectrum of the model in its spin-$\frac{1}{2}$ representation on $L_x \times L_y$ lattices reveals the existence of sublattice scars, $|\psi_s \rangle$, which satisfy $\mathcal{O}_{\mathrm{pot},\square} |\psi_s\rangle = |\psi_s\rangle$ for all elementary plaquettes on one sublattice and $\mathcal{O}_{\mathrm{pot},\square} | \psi_s \rangle =0$ on the other, while being simultaneously zero modes or eigenstates of $\mathcal{O}_{\mathrm{kin}}$ with nonzero integer eigenvalues. We demonstrate a ``triangle relation'' connecting the sublattice scars with nonzero integer eigenvalues of $\mathcal{O}_{\mathrm{kin}}$ to particular sublattice scars with $\mathcal{O}_{\mathrm{kin}} = 0$ eigenvalues. A fraction of the sublattice scars have a simple description in terms of emergent short singlets, on which we place analytic bounds. We further construct a long-ranged parent Hamiltonian for which all sublattice scars in the null space of $\mathcal{O}_{\mathrm{kin}}$ become unique ground states, and we elucidate some of the properties of its spectrum. In particular, zero-energy states of this parent Hamiltonian turn out to be exact scars of another $U(1)$ quantum link model with a staggered short-ranged diagonal term.
Adinkra networks arise in the Carroll limit of supersymmetric QFT. Extensions of adinkras that are infinite-dimensional graphs have never previously been discussed in the literature. We call these ``infinite unfolded'' adinkras and study the properties of their realization on familiar 4D, $\cal N$ = 1 supermultiplets. A new feature in ``unfolded'' adinkras is the appearance of quantities whose actions resemble BRST operators within Verma-like modules. New ``net-centric'' quantities ${\widetilde \chi}_{(1)}$ and ${\widetilde \chi}_{(2)}$ are introduced, which, along with the quantity $\chi_{\rm o}$, describe distinctions between familiar supermultiplets in 4D, $\cal N$ = 1 theories. A previously unobserved property of all adinkras, which we call ``adinkra vorticity,'' is noted.
On the basis of (i) the discrete and continuous symmetries (and corresponding conserved charges), (ii) the ensuing algebraic structures of the symmetry operators and conserved charges, and (iii) a few basic concepts of differential geometry, we show that the celebrated Friedberg-Lee-Pang-Ren (FLPR) quantum mechanical model (describing the motion of a single non-relativistic particle of unit mass under the influence of a spatially 2D, rotationally invariant general potential) provides a tractable physical example of Hodge theory, in which the symmetry operators and conserved charges lead to physical realizations of the de Rham cohomological operators of differential geometry within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism. We concisely mention the Hodge decomposition theorem in the quantum Hilbert space of states and discuss the physicality criteria with respect to the conserved and nilpotent versions of the (anti-)BRST and (anti-)co-BRST charges (and the physical consequences that ensue from them).
Using the boundary state formalism and thermo field dynamics approach, we study a D$p$-brane at finite temperature in which the background Kalb-Ramond field, a $U(1)$ gauge potential, and tachyon field are turned on together with a general tangential dynamics. The thermal entropy of the brane will be studied. In addition, the behavior of the entropy after the tachyon condensation process will be investigated and some thermodynamic interpretations will be extracted.
We investigate the causality and stability of three different relativistic dissipative fluid-dynamical formulations emerging from a system of classical, ultra-relativistic scalar particles self-interacting via a quartic potential. For this particular interaction, all transport coefficients of Navier-Stokes, Bemfica-Disconzi-Noronha-Kovtun (BDNK), and second-order transient theories can be computed in analytical form. We first show that Navier-Stokes theory is acausal and unstable regardless of the matching conditions. On the other hand, BDNK theory can be linearly causal and stable for a particular set of matching choices that does not contain the so-called exotic Eckart prescription. In particular, using the Li\'enard-Chipart criterion, we obtain a set of sufficient conditions that guarantee the stability of the theory. Lastly, second-order transient hydrodynamic theory with Landau matching is shown to be linearly causal and stable.
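The Li\'enard-Chipart criterion invoked above is a general-purpose stability test for real polynomials, such as those arising from linearized dispersion relations. A generic sketch of the textbook criterion (not the paper's specific stability conditions) could look like:

```python
import numpy as np

# Lienard-Chipart stability test (textbook form): a real polynomial
#   p(s) = a0*s^n + a1*s^(n-1) + ... + an,  a0 > 0,
# is Hurwitz stable (all roots in the open left half-plane) iff all
# coefficients are positive and the odd leading principal minors of the
# Hurwitz matrix are positive.

def hurwitz_matrix(a):
    """n x n Hurwitz matrix, H[i, j] = a_{2(j+1)-(i+1)} with 0-indexed coefficients."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)
            if 0 <= k <= n:
                H[i, j] = a[k]
    return H

def lienard_chipart_stable(a, tol=1e-9):
    """True if p(s) with coefficients a (leading coefficient first) is Hurwitz stable."""
    a = np.asarray(a, dtype=float)
    if a[0] <= 0 or np.any(a <= 0):
        return False  # necessary condition: all coefficients positive
    H = hurwitz_matrix(a)
    n = len(a) - 1
    # Only the odd minors Delta_1, Delta_3, ... need to be checked.
    return all(np.linalg.det(H[:k, :k]) > tol for k in range(1, n + 1, 2))

print(lienard_chipart_stable([1, 3, 2]))     # s^2 + 3s + 2, roots -1, -2 -> True
print(lienard_chipart_stable([1, 1, 1, 1]))  # (s + 1)(s^2 + 1), roots on axis -> False
```

Its practical advantage over the full Routh-Hurwitz test is that only about half of the minors must be evaluated once positivity of the coefficients is established.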
Modular Graph Functions (MGFs) emerge from the low-energy expansion of the amplitude integral over the configurations of punctures on a torus in the study of string scattering amplitudes at one loop. These functions are SL(2,$\mathbb{Z}$)-invariant. To find the string scattering amplitude, we must integrate them over the moduli space of the torus. In this paper, we use the iterated-integral representation of MGFs to establish a depth-dependent basis for them up to depth three, where depth refers to the number of iterations in the integral. This basis satisfies a suitable Laplace equation. We integrate this basis, from depth zero to depth three, over the fundamental domain of SL(2,$\mathbb{Z}$) with a cut-off.
The double sine-Gordon field theory in the weak confinement regime is studied. It represents a small non-integrable deformation of the standard sine-Gordon model caused by a cosine perturbation with the frequency reduced by a factor of 2. This perturbation leads to the confinement of the sine-Gordon solitons, which become coupled into ``meson'' bound states. We classify the meson states in the weak confinement regime and obtain three asymptotic expansions for their masses, which can be used in different regions of the model parameters. It is shown that the sine-Gordon breathers, slightly deformed by the perturbation term, transform into the mesons as the sine-Gordon coupling constant increases.
We construct the $\mathrm{SL}(2, \mathbb C)$ quartic vertex with a generic stub parameter for the bosonic closed string field theory by characterizing the vertex region in the moduli space of 4-punctured sphere, and providing the necessary and sufficient constraints for the local coordinate maps. While $\mathrm{SL}(2, \mathbb C)$ vertices are not known to have a nice geometric recursive construction like the minimal area or hyperbolic vertices, they can be studied analytically which makes them more convenient for simple computations. In particular, we obtain exact formulas for the parametrization and volume of the vertex region as a function of the stub parameter. The main objective of having an explicit quartic vertex is to later study its decomposition using auxiliary fields.
Geometrical properties of spacetime are difficult to study in nonperturbative approaches to quantum gravity like Causal Dynamical Triangulations (CDT), where one uses simplicial manifolds to define the gravitational path integral instead of Riemannian manifolds. In particular, in CDT one relies on only two mathematical tools: a distance measure and a volume measure. In this paper, we define a notion of scalar curvature for metric spaces endowed with a volume measure or a random walk, without assuming or using notions of tensor calculus. Furthermore, we directly define the Ricci scalar, without the need to define and compute the Riemann or Ricci tensor first. For this, we make use of quantities, such as the surface of a geodesic sphere or the return probability of scalar diffusion processes, that can be computed both in these metric spaces and in a Riemannian manifold, where they receive scalar-curvature contributions. Our definitions recover the classical scalar curvature when the spaces are Riemannian manifolds. We propose seven methods to compute the scalar curvature in these spaces and compare their features in natural implementations on discrete spaces. The defined generalized scalar curvatures are easily implemented on discrete spaces such as graphs. We present the results of our definitions on random triangulations of a 2D sphere and plane. Additionally, we show the results of our generalized scalar curvatures on the quantum geometries of 2D CDT, where we find that all our definitions indicate a flat ground state of the gravitational path integral.
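For reference, the two probes named above behave as follows on a $d$-dimensional Riemannian manifold, where the leading corrections to their flat-space values carry the Ricci scalar $R$; these are the standard expansions that curvature definitions of this type are built to recover:

```latex
% Surface area of a geodesic sphere of radius r in d dimensions:
A(r) = A_{\rm flat}(r)\left(1 - \frac{R}{6d}\, r^{2} + \mathcal{O}(r^{4})\right),
% Heat-kernel return probability of a diffusion process after time t:
P(t) = \frac{1}{(4\pi t)^{d/2}}\left(1 + \frac{R}{6}\, t + \mathcal{O}(t^{2})\right).
```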
We calculate the thermal partition function for massless free bosonic higher-spin (HS) fields (Fronsdal theory) using the Feynman path-integral formalism, and we compute the free energy, average energy, energy fluctuations, specific heat, and entropy by applying the dimensional regularization method, which we also use to isolate the UV divergences of these quantities. We observe a duality, at thermal equilibrium, between the thermodynamic system of massless free bosonic HS fields on $d \geq 4$-dimensional Minkowski spacetime and the thermodynamic system of Klein-Gordon scalar fields on 4-dimensional Minkowski spacetime. We also find that the fluctuations in the energy $E$ are quite negligible in the thermodynamic limit $V \to \infty$ (IR divergence), and likewise negligible as $T \to \infty$; these energy fluctuations depend on the dimensionality of the system, i.e., $d \geq 4$, and on the temperature of the HS fields. The entropy of the thermodynamic system of massless free bosonic HS fields diverges logarithmically as the temperature goes to infinity, diverges in the thermodynamic limit $V \to \infty$, and depends on the spin $s$ of the massless free bosonic HS fields; it depends not only on the temperature of the system but also on its dimensionality, i.e., $d \geq 4$. However, the entropy is finite for finite values of the temperature and volume.
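The statements about energy fluctuations follow the standard canonical-ensemble relation, which connects them to the specific heat and makes the suppression in the thermodynamic limit explicit:

```latex
% Canonical-ensemble fluctuation relation:
\langle (\Delta E)^2 \rangle = k_B T^{2}\, C_V ,
\qquad C_V = \frac{\partial \langle E \rangle}{\partial T} .
% Since <E> and C_V are extensive (proportional to V), the relative fluctuation
\frac{\sqrt{\langle (\Delta E)^2 \rangle}}{\langle E \rangle}
\;\sim\; \frac{1}{\sqrt{V}} \;\longrightarrow\; 0
\quad \text{as } V \to \infty .
```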
The emerging field of quantum machine learning has the potential to revolutionize our perspectives on quantum computing and artificial intelligence. In the predominantly empirical realm of quantum machine learning, a theoretical void persists. This paper addresses that gap by highlighting the quantum cross entropy, a pivotal counterpart to the classical cross entropy. We establish the role of quantum cross entropy in quantum data compression, a fundamental machine learning task, by demonstrating that it acts as the compression rate for sub-optimal quantum source coding. Our approach involves a novel, universal quantum data compression protocol based on the quantum generalization of variable-length coding and the principle of quantum strong typicality. This reveals that quantum cross entropy can effectively serve as a loss function in quantum machine learning algorithms. Furthermore, we illustrate that the minimum of the quantum cross entropy coincides with the von Neumann entropy, reinforcing its role as the optimal compression rate and underscoring its significance in advancing our understanding of the theoretical framework of quantum machine learning.
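For concreteness, the quantum cross entropy between a source state $\rho$ and a model state $\sigma$ is conventionally defined as below; the stated minimum at the von Neumann entropy then follows from the non-negativity of the quantum relative entropy:

```latex
% Quantum cross entropy between density operators \rho and \sigma:
S(\rho, \sigma) = -\,\mathrm{Tr}\!\left(\rho \log \sigma\right),
% which decomposes into von Neumann entropy plus relative entropy:
S(\rho, \sigma) = S(\rho) + D(\rho \,\|\, \sigma),
\qquad D(\rho \,\|\, \sigma) \geq 0 ,
% so S(\rho, \sigma) is minimized, at the value S(\rho), when \sigma = \rho.
```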
We study the quantum degeneracies of BPS black holes of octonionic magical supergravity in five dimensions that is defined by the exceptional Jordan algebra. We define the quantum degeneracy purely number theoretically as the number of distinct states in the charge space with a given set of invariant labels of the discrete U-duality group. We argue that the quantum degeneracies of spherically symmetric stationary BPS black holes of octonionic magical supergravity in five dimensions are given by the Fourier coefficients of the modular forms of the exceptional group $E_{7(-25)}$. The charges of the black holes take values in the lattice defined by the exceptional Jordan algebra $J_3^{\mathbb{O}}(\mathcal{R})$ over integral octonions $\mathcal{R}$. The quantum degeneracies of charge states of rank one and rank two BPS black holes (zero area) are given by the Fourier coefficients of singular modular forms $E_4(Z)$ and $E_8(Z)=(E_4(Z))^2$ of $E_{7(-25)}(Z)$. The rank 3 (large) BPS black holes will be studied elsewhere. Following the work of N. Elkies and B. Gross on the embeddings of cubic rings $A$ into the exceptional Jordan algebra and their actions on the 24 dimensional orthogonal quadratic subspace of $J_3^{\mathbb{O}}(\mathcal{R})$, we show that the quantum degeneracies of rank one black holes described by such embeddings are given by the Fourier coefficients of the Hilbert modular forms of $SL(2,A)$. If the discriminant of the cubic ring $A$ is $D=p^2$ with $p$ a prime number then the isotropic lines in the 24 dimensional quadratic space define a pair of Niemeier lattices which can be taken as charge lattices of some BPS black holes. For $p=7$ they are the Leech lattice with no roots and the lattice $A_6^4$ with 168 root vectors. We also review the current status of the searches for the M/superstring theoretic origins of the octonionic magical supergravity.
Symmetries corresponding to local transformations of the fundamental fields that leave the action invariant give rise to (invertible) topological defects, which obey group-like fusion rules. One can construct more general (codimension-one) topological defects by specifying a map between gauge-invariant operators on one side of the defect and such operators on the other side. In this work, we apply such a construction to Maxwell theory in four dimensions and to the free compact scalar theory in two dimensions. In the case of Maxwell theory, we show that a topological defect that mixes the field strength $F$ and its Hodge dual $\star F$ can be at most an $SO(2)$ rotation. For rational values of the bulk coupling and the $\theta$-angle we find an explicit defect Lagrangian that realizes values of the $SO(2)$ angle $\varphi$ such that $\cos \varphi$ is also rational. We further determine the action of such defects on Wilson and 't Hooft lines and show that they are in general non-invertible. We repeat the analysis for the free compact scalar $\phi$ in two dimensions. In this case we find only four discrete maps: the trivial one, a $Z_2$ map $d\phi \rightarrow -d\phi$, a $\mathcal{T}$-duality-like map $d\phi \rightarrow i \star d\phi$, and the product of the last two.
In this paper we study the OPE between two positive-helicity outgoing gluons in the celestial CFT for Yang-Mills theory chirally coupled to a massive scalar background. This theory breaks translation as well as scale invariance. We compute the subleading terms in the OPE expansion and show that they are the same as the subleading terms of the OPE expansions in the MHV sector. As a result, the amplitudes of this theory also satisfy the set of differential equations obtained previously for MHV amplitudes in pure YM theory. This is not surprising, because the symmetries coming from the leading and subleading soft gluon theorems do not change in the presence of a massive scalar background.
We revisit the question addressed in recent papers by Garriga et al: What determines the rest frame of pair nucleation in a constant electric field? The conclusion reached in these papers is that pairs are observed to nucleate at rest in the rest frame of the detector which is used to detect the pairs. A similar conclusion should apply to bubble nucleation in a false vacuum. This conclusion, however, is subject to doubt due to the unphysical nature of the model of a constant eternal electric field that was used by Garriga et al: the number density of pairs in such a field would be infinite at any finite time. Here we address the same question in a more realistic model where the electric field is turned on at a finite time $t_0$ in the past. The process of turning on the field breaks the Lorentz invariance of the model and could in principle influence the frame of pair nucleation. We find, however, that the conclusion of Garriga et al still holds in the limit $t_0 \to -\infty$. This shows that the setup process of the electric field does not have a lasting effect on the observed rest frame of pair nucleation. On the other hand, the electric current and charge density due to the pairs are determined by the way in which the electric field was turned on.
The duality between type IIA superstring theory and M-theory enables us to lift bound states of D$0$-branes and $n$ parallel D$6$-branes to M-theory compactified on an $n$-centered multi-Taub-NUT space $\mathbb{TN}_{n}$. Accordingly, the rank $n$ K-theoretic Donaldson-Thomas invariants of $\mathbb{C}^{3}$ are connected with the index of M-theory on $\mathbb{C}^{3}\times\mathbb{TN}_{n}$. In this paper, we extend this connection by considering intersecting D$6$-branes. In the presence of a suitable Neveu-Schwarz $B$-field, the system preserves two supercharges. This system is T-dual to the configuration of tetrahedron instantons which we introduced in \cite{Pomoni:2021hkn}. We conjecture a closed-form expression for the K-theoretic tetrahedron instanton partition function, which is the generating function of the D$0$-D$6$ partition functions. We find that the tetrahedron instanton partition function coincides with the partition function of the magnificent four model for special values of the parameters, leading us to conjecture that our system of intersecting D$6$-branes can be obtained from the annihilation of D$8$-branes and anti-D$8$-branes. Remarkably, the K-theoretic tetrahedron instanton partition function allows an interpretation in terms of the index of M-theory on a noncompact Calabi-Yau fivefold which is related to a superposition of Kaluza-Klein monopoles. The dimensional reduction of the system allows us to express the cohomological tetrahedron instanton partition function in terms of the MacMahon function, generalizing the correspondence between Gromov-Witten invariants and Donaldson-Thomas invariants for Calabi-Yau threefolds.
We study out-of-equilibrium dynamics caused by global quantum quenches in fractonic scalar field theories. We consider several types of quenches, in particular, the mass quench in theories with different types of discrete rotational symmetries ($\mathbb{Z}_4$ and $\mathbb{Z}_8$), as well as an instantaneous quench via the transition between them. We also investigate fractonic boundary quenches, where the initial state is prepared on a finite-width slab in Euclidean time. We find that perturbing a fractonic system in finite volume especially highlights the restricted mobility via the formation and subsequent evolution of specific $\mathbb{Z}_4$-symmetric spatial structures. We discuss a generalization to $\mathbb{Z}_n$-symmetric field theories, and introduce a proper regularization, which allows us to explicitly deal with divergences inherent to fractonic field theories.
A new efficient approach to the analysis of nonlinear higher-spin equations, which treats the auxiliary spinor variables $Z_A$ and the integration homotopy parameters in the non-linear vertices of the higher-spin theory on an equal footing, is developed. While being the most general, the proposed approach is at the same time far simpler than those available so far. In particular, it is free from the necessity to use the Schouten identity. Remarkably, the problem of reconstruction of higher-spin vertices is mapped to the cohomology of certain polyhedra in terms of the homotopy parameters themselves. The new scheme provides a powerful tool for the study of higher-order corrections in higher-spin theory and, in particular, its spin-locality. It is illustrated by the analysis of the lower-order vertices, reproducing not only the results obtained previously by the shifted homotopy approach but also projectively-compact vertices with the minimal number of derivatives, which were so far unreachable within that scheme.
We develop the diagrammatic technique of quiver subtraction to facilitate the identification and evaluation of the $SU(n)$ hyper-K\"ahler quotient (HKQ) of the Coulomb branch of a $3d$ $\mathcal{N}=4$ unitary quiver theory. The target quivers are drawn from a wide range of theories, typically classified as ``good'' or ``ugly'', which satisfy identified selection criteria. Our subtraction procedure uses quotient quivers that are ``bad'', differing thereby from quiver subtractions based on Kraft-Procesi transitions. The procedure identifies one or more resultant quivers, the union of whose Coulomb branches corresponds to the desired HKQ. Examples include quivers whose Coulomb branches are moduli spaces of free fields, closures of nilpotent orbits of classical and exceptional type, and slices in the affine Grassmannian. We calculate the Hilbert series and Highest Weight Generating (HWG) functions for HKQ examples of low rank. For certain families of quivers, we are able to conjecture HWGs for arbitrary rank. We examine the commutation relations between quotient quiver subtraction and other diagrammatic techniques, such as Kraft-Procesi transitions, quiver folding, and discrete quotients.
In our earlier work, we studied the $\hat{Z}$-invariant (or homological blocks) for $SO(3)$ gauge group and found it to be the same as $\hat{Z}^{SU(2)}$. This motivated us to study the $\hat{Z}$-invariant for quotient groups $SU(N)/\mathbb{Z}_m$, where $m$ is some divisor of $N$. Interestingly, we find that the $\hat{Z}$-invariant is independent of $m$.
In this paper we present a new solution of the star-triangle relation having positive Boltzmann weights. The solution defines an exactly solvable two-dimensional Ising-type (edge interaction) model of statistical mechanics where the local ``spin variables'' can take arbitrary integer values, i.e., the number of possible spin states at each site of the lattice is infinite. There is also an equivalent ``dual'' formulation of the model, where the spins take continuous real values on the circle. From an algebraic point of view, this model is closely related to the 6-vertex model. It is connected with the construction of an intertwiner for two infinite-dimensional representations of the quantum affine algebra $U_q(\widehat{sl}(2))$ without the highest and lowest weights. The partition function of the model in the large-lattice limit is calculated by the inversion relation method. Amazingly, it coincides with the partition function of the off-critical 8-vertex free-fermion model.
In this paper, we study symmetry-resolved entanglement entropy in free bosonic quantum many-body systems. More precisely, by making use of a lattice regularization scheme, we compute symmetry-resolved R\'enyi entropies for free complex scalar fields as well as for a simple class of non-local field theories in which the entanglement entropy exhibits volume-law scaling. We present effective and approximate eigenvalues of the correlation matrix used to compute the symmetry-resolved entanglement entropy and show that they are consistent with the numerical results. Furthermore, we explore the equipartition of entanglement entropy and verify an effective equipartition in the massless limit. Finally, we comment on the entanglement entropy in the non-local quantum field theories and write down an explicit expression for the symmetry-resolved R\'enyi entropies.
We reconstruct type II supergravities by using building blocks of $O(d) \times O(d)$ invariants. These invariants are obtained by explicitly analyzing the $O(d) \times O(d)$ transformations of the 10-dimensional massless fields. Similar constructions have been done by employing double field theory or generalized geometry, but we complete the reconstruction entirely within the framework of the supergravities.
To the best of our knowledge, the lowest-order non-perturbative stringy interaction between an NS brane and a Dp brane remains unknown. Here we present the non-perturbative stringy amplitudes for a system of an F-string and a Dp brane and for a system of an NS5 brane and a Dp brane, for $0 \le p \le 6$. In either case, the F-string or NS5 brane and the Dp brane are placed parallel to each other at a separation. We obtain the respective amplitudes starting from the amplitude for a D1-D3 system in the former case and that for a D5-D3 system in the latter, either of which can be computed via the known D-brane technique, by applying IIB S-duality and various T-dualities, together with the consistency of both and the respective known long-range amplitudes.
Understanding the values and origin of the fundamental physical constants, one of the grandest challenges in modern science, has been discussed in particle physics, astronomy and cosmology. More recently, it was realised that the fundamental constants have a bio-friendly window set by life processes involving motion and flow. This window is related to intrinsic fluid properties, such as the energy and length scales in condensed matter, that are set by the fundamental constants. Here, we discuss important extrinsic factors governing the viscosity of complex fluids operating in life processes due to collective effects. We show that both extrinsic and intrinsic factors affecting viscosity need to be taken into account when estimating the bio-friendly range of fundamental constants from life processes, and our discussion provides a straightforward recipe for doing so. We also find that the relative role of extrinsic and intrinsic factors depends on their range of variability. Remarkably, the viscosity of a complex fluid such as blood, with significant extrinsic effects, is not far from the intrinsic viscosity calculated using the fundamental constants only, and we discuss the reason for this in terms of the dynamics of contact points between cells.
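The intrinsic viscosity mentioned above has a closed form in earlier work; we recall it here as hedged editorial background (to be checked against the original reference, not text from the abstract):

```latex
% Editorial recollection: the intrinsic lower bound on the kinematic viscosity
% of liquids obtained earlier by Trachenko and Brazhkin from fundamental
% constants is
\nu_{\min} \;=\; \frac{1}{4\pi}\,\frac{\hbar}{\sqrt{m_e\, m}}\,,
% where m_e is the electron mass and m is the molecule mass; the ``intrinsic
% viscosity'' in the abstract refers to estimates of this type.
```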
We obtain a uniform decomposition into Casimir eigenspaces (most of which are irreducible) of the fourth power of the adjoint representation $\mathfrak{g}^{\otimes 4}$ for all simple Lie algebras. We present universal, in Vogel's sense, formulae for the dimensions and split Casimir operator's eigenvalues of all terms in this decomposition. We assume that a similar uniform decomposition into Casimir eigenspaces with universal dimension formulae exists for an arbitrary power of the adjoint representations.
Evidence for the singly Cabibbo-suppressed decay $\Lambda_c^+\to p\pi^0$ is reported for the first time with a statistical significance of $3.7\sigma$, based on 6.0 $\mathrm{fb}^{-1}$ of $e^+e^-$ collision data collected at center-of-mass energies between 4.600 and 4.843 GeV with the BESIII detector at the BEPCII collider. The absolute branching fraction of $\Lambda_c^+\to p\pi^0$ is measured to be $(1.56^{+0.72}_{-0.58}\pm0.20)\times 10^{-4}$. Combined with the branching fraction of $\Lambda_c^+\to n\pi^+$, $(6.6\pm1.2\pm0.4)\times10^{-4}$, the ratio of the branching fractions of $\Lambda_c^+\to n\pi^+$ and $\Lambda_c^+\to p\pi^0$ is calculated to be $4.2^{+2.2}_{-1.9}$; this is an important input for the understanding of the decay mechanisms of charmed baryons. In addition, the absolute branching fraction of $\Lambda_c^+\to p\eta$ is measured to be $(1.63\pm0.31_{\rm stat}\pm0.11_{\rm syst}) \times10^{-3}$, which is consistent with previous measurements.
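As a quick arithmetic cross-check of the quoted ratio (our illustration, not the collaboration's analysis; the asymmetric uncertainties on $\mathcal{B}(\Lambda_c^+\to p\pi^0)$ are crudely symmetrized):

```python
import math

# Central values and (stat, syst) uncertainties as quoted in the abstract;
# the asymmetric +0.72/-0.58 error is symmetrized to ~0.65e-4 for this
# rough illustration only.
B_npi,  dB_npi  = 6.6e-4,  math.hypot(1.2e-4, 0.4e-4)
B_ppi0, dB_ppi0 = 1.56e-4, math.hypot(0.65e-4, 0.20e-4)

ratio = B_npi / B_ppi0
# naive uncorrelated relative-error propagation for the ratio
d_ratio = ratio * math.hypot(dB_npi / B_npi, dB_ppi0 / B_ppi0)
print(f"B(n pi+)/B(p pi0) = {ratio:.1f} +/- {d_ratio:.1f}")
```

The central value reproduces the quoted 4.2, and the naive symmetric uncertainty (about 2.0) is in the ballpark of the quoted $^{+2.2}_{-1.9}$.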
Based on $(10.09 \pm 0.04) \times 10^9$ $J/\psi$ events collected with the BESIII detector operating at the BEPCII collider, a partial wave analysis of the decay $J/\psi \to \phi \pi^{0}\eta$ is performed. We observe for the first time two new structures in the $\phi\eta$ invariant mass distribution, with statistical significances of $24.0\sigma$ and $16.9\sigma$: the first with $J^{\rm PC}$ = $1^{+-}$, mass M = (1911 $\pm$ 6 (stat.) $\pm$ 14 (syst.))~MeV/$c^{2}$, and width $\Gamma$ = (149 $\pm$ 12 (stat.) $\pm$ 23 (syst.))~MeV; the second with $J^{\rm PC}$ = $1^{--}$, mass M = (1996 $\pm$ 11 (stat.) $\pm$ 30 (syst.))~MeV/$c^{2}$, and width $\Gamma$ = (148 $\pm$ 16 (stat.) $\pm$ 66 (syst.))~MeV. These measurements provide important input for the strangeonium spectrum. In addition, the $f_0(980)-a_0(980)^0$ mixing signal in $J/\psi \to \phi f_0(980) \to \phi a_0(980)^0$ and the corresponding electromagnetic decay $J/\psi \to \phi a_0(980)^0$ are measured with improved precision, providing crucial information for understanding the nature of $a_0(980)^0$ and $f_0(980)$.
We measure the tau-to-light-lepton ratio of inclusive $B$-meson branching fractions $R(X_{\tau/\ell}) \equiv \mathcal{B}(B\to X \tau \nu)/\mathcal{B}(B \to X \ell \nu)$, where $\ell$ indicates an electron or muon, and thereby test the universality of charged-current weak interactions. We select events that have one fully reconstructed $B$ meson and a charged lepton candidate from $189~\mathrm{fb}^{-1}$ of electron-positron collision data collected with the Belle II detector. We find $R(X_{\tau/\ell}) = 0.228 \pm 0.016~(\mathrm{stat}) \pm 0.036~(\mathrm{syst})$, in agreement with standard-model expectations. This is the first direct measurement of $R(X_{\tau/\ell})$.
We report the highest-energy observation of entanglement, in top$-$antitop quark events produced at the Large Hadron Collider, using a proton$-$proton collision data set with a center-of-mass energy of $\sqrt{s}=13$ TeV and an integrated luminosity of 140 fb$^{-1}$ recorded with the ATLAS experiment. Spin entanglement is detected from the measurement of a single observable $D$, inferred from the angle between the charged leptons in their parent top- and antitop-quark rest frames. The observable is measured in a narrow interval around the top$-$antitop quark production threshold, where the entanglement detection is expected to be significant. It is reported in a fiducial phase space defined with stable particles to minimize the uncertainties that stem from limitations of the Monte Carlo event generators and the parton shower model in modelling top-quark pair production. The entanglement marker is measured to be $D=-0.547 \pm 0.002~\text{(stat.)} \pm 0.021~\text{(syst.)}$ for $340 < m_{t\bar{t}} < 380 $ GeV. The observed result is more than five standard deviations from a scenario without entanglement and hence constitutes both the first observation of entanglement in a pair of quarks and the highest-energy observation of entanglement to date.
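For background (an editorial addition summarizing the standard definitions used in such analyses, not text from the abstract), the marker $D$ and the entanglement criterion can be written as:

```latex
% Editorial background: the entanglement marker D is extracted from the
% distribution of the opening angle \varphi between the two charged leptons
% in their parent top/antitop rest frames,
\frac{1}{\sigma}\,\frac{d\sigma}{d\cos\varphi}
   \;=\; \frac{1}{2}\bigl(1 - D\cos\varphi\bigr),
\qquad
D \;=\; \frac{\operatorname{tr} C}{3},
% where C is the spin-correlation matrix of the t\bar{t} pair. Near threshold,
% separable states obey D >= -1/3, a bound the quoted D = -0.547 +/- 0.021
% violates by more than five standard deviations.
```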
As a prototype detector for the SHiP Surrounding Background Tagger (SBT), we constructed a cell (120 cm x 80 cm x 25 cm) made from corten steel that is filled with liquid scintillator (LS) composed of linear alkylbenzene (LAB) and 2,5-diphenyloxazole (PPO). The detector is equipped with two Wavelength-shifting Optical Modules (WOMs) for light collection of the primary scintillation photons. Each WOM consists of an acrylic tube that is dip-coated with a wavelength-shifting layer on its surface. Via internal total reflection, the secondary photons emitted by the molecules of the wavelength shifter are guided to a ring-shaped array of 40 silicon photomultipliers (SiPMs) coupled to the WOM for light detection. The granularity of these SiPM arrays provides an innovative method to gain spatial information on the particle crossing point. Several improvements in the detector design significantly increased the light yield with respect to earlier proof-of-principle detectors. We report on the performance of this prototype detector during an exposure to high-energy positrons at the DESY II test beam facility by measuring the collected photoelectron yield and the signal time-of-arrival in each of the SiPM arrays. Results on the detection efficiency and the reconstructed energy deposition of the incident positrons are presented, as well as the spatial and time resolution of the detector. These results are then compared to Monte Carlo simulations.
DarkSide-20k will be the next liquid-argon TPC built to perform a direct search for dark matter in the form of WIMPs. Its calibration with respect to both signal and backgrounds is key, as very few events are expected in the WIMP search. In this proceeding, several aspects of the calibration of the DarkSide-20k TPC are presented: the calibration system itself, the simulations of the calibration programs, and the simulations of the impact of the calibration system on the rest of the detector (reduction of the light-collection efficiency in the veto buffer, and background induced by the system in the TPC and veto).
This letter documents a search for flavour-changing neutral currents (FCNCs), which are strongly suppressed in the Standard Model, in events with a photon and a top quark with the ATLAS detector. The analysis uses data collected in $pp$ collisions at $\sqrt{s} = 13$ TeV during Run 2 of the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$. Both FCNC top-quark production and decay are considered. The final state consists of a charged lepton, missing transverse momentum, a $b$-tagged jet, one high-momentum photon and possibly additional jets. A multiclass deep neural network is used to classify events either as signal in one of the two categories, FCNC production or decay, or as background. No significant excess of events over the background prediction is observed and 95% CL upper limits are placed on the strength of left- and right-handed FCNC interactions. The 95% CL bounds on the branching fractions for the FCNC top-quark decays, estimated from both top-quark production and decay, are $\mathcal{B}(t\rightarrow u\gamma) < 0.85 \times 10^{-5}$ and $\mathcal{B}(t\to c\gamma) < 4.2 \times 10^{-5}$ for a left-handed $tq\gamma$ coupling, and $\mathcal{B}(t\to u\gamma) < 1.2 \times 10^{-5}$ and $\mathcal{B}(t\to c\gamma) < 4.5 \times 10^{-5}$ for a right-handed coupling.
The experiments at the LHC are implementing novel and challenging detector upgrades for the High-Luminosity LHC (HL-LHC), among which are the tracking systems. This paper reports on performance studies, illustrated with an electron trigger, using a simplified pixel tracker. To achieve a real-time trigger (e.g. processing HL-LHC collision events at 40 MHz), simple algorithms are developed for reconstructing pixel-based tracks and track isolation, utilizing look-up tables based on pixel detector information. Significant gains in electron trigger performance are seen when pixel detector information is included. In particular, a rate reduction of up to a factor of 20 is obtained with a signal selection efficiency of more than 95\% over the whole $\eta$ coverage of this detector. Furthermore, the algorithm reconstructs p-p collision points along the beam axis (z) direction with a resolution of 20 $\mu$m in the very central region ($|\eta| < 0.8$) and of up to 380 $\mu$m in the forward region (2.7 $< |\eta| <$ 3.0). This study, as well as the results, can easily be adapted to the muon case and to the different tracking systems at the LHC and at other machines beyond the HL-LHC. The feasibility of such real-time processing of the pixel information is mainly constrained by the Level-1 trigger latency of the experiment. How this might be overcome by the front-end ASIC design, new processors and embedded Artificial Intelligence algorithms is briefly discussed as well.
The parameterization of nucleon structure through Generalized Parton Distributions (GPDs) sheds new light on the internal dynamics of the nucleon. Owing to its direct interpretation in terms of GPDs, Deeply Virtual Compton Scattering (DVCS) is the golden channel for GPD investigation. The DVCS process interferes with the Bethe-Heitler (BH) mechanism to constitute the leading-order amplitude of the $eN \to eN\gamma$ process. The study of the $ep\gamma$ reaction with polarized positron and electron beams gives a complete set of unique observables to unravel the different contributions to the $ep\gamma$ cross section. This separates the different reaction amplitudes, providing direct access to their real and imaginary parts, which in turn places crucial constraints on the model dependences and associated systematic uncertainties of GPD extraction. The real part of the BH-DVCS interference amplitude is particularly sensitive to the $D$-term, which parameterizes the Gravitational Form Factors of the nucleon. The separation of the imaginary parts of the interference and DVCS amplitudes provides insight into possible higher-twist effects. We propose to measure the unpolarized and polarized Beam Charge Asymmetries (BCAs) of the $\vec{e}^{\pm}p \to e^{\pm}p \gamma$ process on an unpolarized hydrogen target with {\tt CLAS12}, using polarized positron and electron beams at 10.6 GeV. The azimuthal and $t$-dependences of the unpolarized and polarized BCAs will be measured over a large $(x_B,Q^2)$ phase space in a 100-day run at a luminosity of 0.66$\times 10^{35}$ cm$^{-2}\cdot$s$^{-1}$.
This paper explores the potential application of quantum and hybrid quantum-classical neural networks in power flow analysis. Experiments are conducted using two small datasets based on the IEEE 4-bus and 33-bus test systems. A systematic performance comparison is conducted among quantum, hybrid quantum-classical, and classical neural networks. The comparison is based on (i) generalization ability, (ii) robustness, (iii) training dataset size needed, (iv) training error, (v) training computational time, and (vi) training process stability. The results show that the developed hybrid quantum-classical neural network outperforms both quantum and classical neural networks, and hence can improve deep-learning-based power flow analysis in the noisy intermediate-scale quantum (NISQ) era.
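To make the setting concrete, here is a minimal, hypothetical sketch of learning-based power flow regression: a tiny NumPy network fit to synthetic two-bus data generated from the textbook relation $P = (V_1 V_2/X)\sin\delta$. This is not the paper's model or its IEEE datasets; the network size, learning rate, and data range are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-bus data: active power flow P = (V1*V2/X) * sin(delta),
# with unit voltages and line reactance X = 0.5 (illustrative values).
V1 = V2 = 1.0
X = 0.5
delta = rng.uniform(-0.5, 0.5, size=(256, 1))   # power angle (rad)
P = (V1 * V2 / X) * np.sin(delta)

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(delta @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2
    g = 2 * (pred - P) / len(P)         # dL/dpred for L = mean squared error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h**2)          # backprop through tanh
    gW1 = delta.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(delta @ W1 + b1) @ W2 + b2 - P) ** 2))
print(f"final training MSE: {mse:.2e}")
```

In the paper's hybrid setup, part of such a network would be replaced by a parameterized quantum circuit; the classical sketch only fixes the regression task being compared.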
Nonlocality is a fundamental aspect of quantum mechanics and an important resource in quantum information science. The ``spooky'' nature of nonlocality has stimulated persistent research efforts to uncover its mystery. Here I report on the discovery of Einstein locality, which clarifies an essential confusion between Bell nonlocality and Einstein nonlocality. The Einstein locality of quantum mechanics is recognized by exploring the superposition principle, and an Einstein locality model is built to gain a pictorial insight into the essence of Bell nonlocality. Moreover, existing results of double-slit experiments with entangled photons are presented as evidence of Einstein locality in reality. This work should constitute a conceptual breakthrough towards understanding nonlocality and will have far-reaching impacts on quantum science and technology.
The time evolution of quantum many-body systems is one of the most promising applications for near-term quantum computers. However, the utility of current quantum devices is strongly hampered by the proliferation of hardware errors. The minimization of the circuit depth for a given quantum algorithm is therefore highly desirable, since shallow circuits generally are less vulnerable to decoherence. Recently, it was shown that variational circuits are a promising approach to outperform current state-of-the-art methods such as Trotter decomposition, although the optimal choice of parameters is a computationally demanding task. In this work, we demonstrate a simplification of the variational optimization of circuits implementing the time-evolution operator of local Hamiltonians by directly encoding symmetries of the physical system under consideration. We study the expressibility of such constrained variational circuits for different models and symmetries. Our results show that the encoding of symmetries allows a reduction of the optimization cost by more than one order of magnitude, as well as scalability to arbitrarily large system sizes, without losing accuracy in most systems. Furthermore, we discuss the exceptions among constrained systems and provide an explanation in terms of a restricted light-cone width after imposing the constraints on the circuits.
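As background on the baseline method the abstract mentions, the following self-contained NumPy sketch illustrates first-order Trotter decomposition and its $O(t^2/n)$ error for a toy two-level Hamiltonian (our illustration, not the paper's variational circuits):

```python
import numpy as np

def U(H, t):
    """Matrix exponential exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

# Toy Hamiltonian H = A + B with non-commuting parts (Pauli Z and X).
A = np.diag([1.0, -1.0]).astype(complex)       # sigma_z
B = np.array([[0, 1], [1, 0]], dtype=complex)  # sigma_x

t = 1.0
exact = U(A + B, t)

# First-order Trotter: (e^{-iAt/n} e^{-iBt/n})^n has error O(t^2 / n),
# set by the commutator [A, B].
errs = []
for n in (1, 10, 100):
    step = U(A, t / n) @ U(B, t / n)
    approx = np.linalg.matrix_power(step, n)
    errs.append(np.linalg.norm(approx - exact, 2))
    print(f"n = {n:3d}  spectral-norm error = {errs[-1]:.2e}")
```

The error shrinks roughly as $1/n$, which is exactly the depth-versus-accuracy trade-off that variational circuits aim to beat.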
Deconfined quantum criticality describes continuous phase transitions that are not captured by the Landau-Ginzburg paradigm. Here, we investigate deconfined quantum critical points in the long-range, anisotropic Heisenberg chain. With matrix product state simulations, we show that the model undergoes a continuous phase transition from a valence bond solid to an antiferromagnet. We extract the critical exponents of the transition and connect them to an effective field theory obtained from bosonization techniques. We show that, beyond stabilizing the valence bond order, the long-range interactions are irrelevant and the transition is well described by a double-frequency sine-Gordon model. We propose how to realize and probe deconfined quantum criticality in our model with trapped-ion quantum simulators.
Controlling single-electron states becomes increasingly important due to the wide-ranging advances in electron quantum optics. Single-electron control enables coherent manipulation of individual electrons and the ability to exploit the wave nature of electrons, which offers various opportunities for quantum information processing, sensing, and metrology. Non-uniform magnetic fields provide a unique opportunity, offering new degrees of freedom for single-electron control. From the modeling perspective, however, conventional electron quantum transport theories are commonly based on gauge-dependent electromagnetic potentials, so a direct formulation in terms of intuitive electromagnetic fields is not possible. In an effort to rectify this, a gauge-invariant formulation of the Wigner equation for general electromagnetic fields has been proposed in [Nedjalkov et al., Phys. Rev. B, 2019, 99, 014423]. However, the complexity of this equation made it necessary to derive a more convenient formulation for linear electromagnetic fields [Nedjalkov et al., Phys. Rev. A, 2022, 106, 052213]. This formulation directly includes the classical form of the Lorentz force, as well as higher-order terms depending on the magnetic field gradient that are negligible for small variations of the magnetic field. In this work, we generalize this equation to include a general, non-uniform electric field and a linear, non-uniform magnetic field. The formulation thus obtained is applied to investigate the capabilities of a linear, non-uniform magnetic field to control single-electron states in terms of trajectory, interference patterns, and dispersion. This has led us to explore a new type of transport inside electronic waveguides based on snake trajectories, as well as the possibility of splitting wave packets to realize edge states.
We introduce quantum homomorphisms between quantum hypergraphs through the existence of perfect strategies for quantum non-local games canonically associated with the quantum hypergraphs. We show that the relation of homomorphism of a given type satisfies natural analogues of the properties of a pre-order. We show that quantum hypergraph homomorphisms of local type are closely related, and in some cases identical, to the TRO equivalence of finite-dimensionally acting operator spaces canonically associated with the hypergraphs.
Phonons traveling in solid-state devices have emerged as a universal excitation for coupling different physical systems. Microwave phonons have a wavelength similar to that of optical photons in solids, which is desirable for microwave-optical transduction of classical and quantum signals. Conceivably, building optomechanical integrated circuits (OMICs) that guide both photons and phonons can interconnect photonic and phononic devices. Here, we demonstrate an OMIC including an optomechanical ring resonator (OMR), in which co-resonant infrared photons and GHz phonons induce significantly enhanced interconversion. Our platform is hybrid, using the wide-bandgap semiconductor gallium phosphide (GaP) for wave guiding and piezoelectric zinc oxide (ZnO) for phonon generation. The OMR features photonic and phononic quality factors of $>1\times10^5$ and $3.2\times10^3$, respectively. The interconversion between photonic modes achieves an internal conversion efficiency of $\eta_i=(2.1\pm0.1)\%$ and a total device efficiency of $\eta_{tot}=0.57\times10^{-6}$ at a low acoustic pump power of 1.6 mW. The efficient conversion in OMICs enables microwave-optical transduction in quantum information and microwave photonics applications.
Mathematical analysis of the existence of eigenvalues is vital, as eigenvalues correspond to the occurrence of localization, an exceptionally important property of quantum walks. Previous studies have demonstrated that eigenvalue analysis utilizing the transfer matrix proves beneficial for space-inhomogeneous three-state quantum walks with a specific class of coin matrices, including Grover matrices. In this research, we turn our attention to the transfer matrix of three-state quantum walks with a general coin matrix. Building upon previous research methodologies, we investigate the properties of the transfer matrix in more depth and employ numerical analysis to derive eigenvalues for models that were previously unanalyzable.
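For readers unfamiliar with the setting, here is a small, self-contained NumPy sketch of the kind of numerical eigenvalue analysis mentioned: the evolution operator of a three-state Grover walk on a cycle, whose flat band pins an eigenvalue at 1 in every momentum sector, the degeneracy behind localization. This is an illustrative toy, not the paper's transfer-matrix method.

```python
import numpy as np

N = 21                                      # sites on a cycle
G = 2 / 3 * np.ones((3, 3)) - np.eye(3)     # 3x3 Grover coin (orthogonal)

# Build U = Shift * Coin: chirality component c = 0, 1, 2 moves by -1, 0, +1.
U = np.zeros((3 * N, 3 * N), dtype=complex)
for x in range(N):
    for c, dx in enumerate((-1, 0, 1)):
        y = (x + dx) % N
        for cp in range(3):
            U[3 * y + c, 3 * x + cp] = G[c, cp]

assert np.allclose(U @ U.conj().T, np.eye(3 * N))   # U is unitary
w = np.linalg.eigvals(U)

# The flat band of the three-state Grover walk gives eigenvalue 1 once per
# momentum sector, hence an N-fold degenerate eigenvalue <-> localization.
n_one = int(np.sum(np.isclose(w, 1)))
print("multiplicity of eigenvalue 1:", n_one)
```

For a general coin matrix the spectrum is no longer available in closed form, which is where the transfer-matrix and numerical analysis of the paper come in.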
Confocal fluorescence microscopy is widely applied for the study of point-like emitters such as biomolecules, material defects, and quantum light sources. Confocal techniques offer increased optical resolution, dramatic fluorescence background rejection and sub-nanometer localization, useful in super-resolution imaging of fluorescent biomarkers, single-molecule tracking, or the characterization of quantum emitters. However, rapid, noise-robust automated 3D focusing on point-like emitters has been missing for confocal microscopes. Here, we introduce FiND (Focusing in Noisy Domain), an imaging-free, non-trained 3D focusing framework that requires no hardware add-ons or modifications. FiND achieves focusing for signal-to-noise ratios down to 1, with a few-shot operation for signal-to-noise ratios above 5. FiND enables unsupervised, large-scale focusing on a heterogeneous set of quantum emitters. Additionally, we demonstrate the potential of FiND for real-time 3D tracking by following the drift trajectory of a single NV center indefinitely with a positional precision of < 10 nm. Our results show that FiND is a useful focusing framework for the scalable analysis of point-like emitters in biology, material science, and quantum optics.
We study the $p$-R\'{e}nyi entropy power inequality with a weight factor $t$ on two independent continuous random variables $X$ and $Y$. The extension essentially relies on a modulation of the sharp Young inequality, following Bobkov and Marsiglietti. Our result offers a R\'{e}nyi version of the entropy power inequality for quantum systems and can serve as a fundamental building block in quantum Shannon theory.
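For background (an editorial addition with standard definitions; the paper's precise weighted inequality and constants are not restated here):

```latex
% Editorial background: for a random vector X on R^n with density f, the
% Renyi entropy and the associated entropy power are
h_p(X) \;=\; \frac{1}{1-p}\,\log \int_{\mathbb{R}^n} f(x)^p \, dx ,
\qquad
N_p(X) \;=\; e^{\,2 h_p(X)/n},
% and in the Shannon limit p -> 1 the classical entropy power inequality for
% independent X and Y reads
N_1(X+Y) \;\ge\; N_1(X) + N_1(Y).
```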
Quantum measurements based on mutually unbiased bases (MUB) play crucial roles in foundational studies and quantum information processing. It is known that there exist inequivalent MUB, but little is known about their operational distinctions, let alone experimental demonstrations thereof. In this work, by virtue of a simple estimation problem, we experimentally demonstrate the operational distinctions between inequivalent triples of MUB in dimension 4 using high-precision photonic systems. The experimental estimation fidelities coincide well with the theoretical predictions, with only a 0.16$\%$ average deviation, which is 25 times smaller than the 4.1$\%$ difference between the maximum and minimum estimation fidelities. Our experiments clearly demonstrate that inequivalent MUB have different information-extraction capabilities and different merits for quantum information processing.
The rise of the 4H-silicon-carbide-on-insulator (SiCOI) platform marks a promising pathway towards the realization of monolithic quantum photonic networks. However, the challenge of establishing room-temperature entangled registers on these integrated photonics platforms remains unresolved. Herein, we demonstrate the first entangled processor on the SiCOI platform. We show that both deterministic generation of single divacancy electron spins and near-unity spin initialization of a single $^{13}$C nuclear spin can be achieved on SiCOI at room temperature. In addition to coherently manipulating the single nuclear spin, we prepare a maximally entangled state with a fidelity of 0.89 on this CMOS-compatible semiconductor-integrated photonics system. This work establishes the foundation for compact and on-chip solutions within existing defect-based computing and sensing protocols, positioning the SiCOI platform as the most promising candidate for integrated monolithic quantum photonic networks.
The non-Hermitian skin effect (NHSE), featured by the collapse of bulk-band eigenstates into localized boundary modes, is one of the most striking properties in the field of non-Hermitian physics. Unique physical phenomena related to the NHSE have attracted much interest; however, their experimental realizations usually require nonreciprocal hopping, which poses a great challenge in ultracold-atom systems. In this work, we propose to realize the NHSE in a 1D optical lattice with periodically driven ultracold atoms in the presence of staggered atomic loss. By studying the effective Floquet Hamiltonian in the high-frequency approximation, we reveal the underlying mechanism for the periodic-driving-induced NHSE. We find that the robust NHSE can be tuned by the driving phase, which is manifested in the dynamical localization. Most remarkably, we uncover a periodic-driving-induced critical skin effect for two coupled chains with different driving phases, accompanied by the appearance of size-dependent topological in-gap modes. Our studies provide a feasible route for observing the NHSE and exploring the corresponding unique physical phenomena arising from the interplay of non-Hermiticity and many-body statistics in ultracold-atom systems.
Quantum Bit String Comparators (QBSC) operate on two sequences of $n$ qubits, enabling the determination of their relationship: equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any $n$-qubit length has long posed a challenge, as comparators have a high cost footprint and introduce quantum delays. Efficient comparators reported so far are tied to inputs of fixed length; as a result, comparators without a generalized circuit cannot be employed at a higher level, though they are well-suited for problems of limited size. In this paper, we introduce a generalized design for the comparison of two $n$-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity, and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
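As a point of reference for the three relations such a comparator encodes, here is a minimal classical sketch (our own illustration with a hypothetical helper name, not the paper's circuit): a bitwise scan from the most significant bit, which is the decision logic a reversible quantum comparator must reproduce coherently.

```python
def compare_bitstrings(a: str, b: str) -> str:
    """Classical reference for the outcomes an n-qubit comparator encodes.

    a, b: equal-length bit strings, most significant bit first.
    Returns 'equal', 'greater' (a > b), or 'less' (a < b).
    (Illustrative helper, not the circuit proposed in the paper.)
    """
    assert len(a) == len(b), "comparators act on equal-length registers"
    for x, y in zip(a, b):       # scan from the most significant bit
        if x != y:               # the first differing bit decides the order
            return 'greater' if x > y else 'less'
    return 'equal'               # all bits matched
```

In a quantum circuit the same per-bit decision must be computed reversibly, which is why ancillary qubits (two, in the proposed design) are needed to carry the comparison outcome.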
We present a scheme for controlling quantum correlations by applying coherent feedback to the field exiting a cavity that interacts with a mechanical oscillator and magnons. In a hybrid cavity magnomechanical system with a movable mirror, the proposed coherent feedback scheme enhances both bipartite and tripartite quantum correlations. Moreover, we demonstrate that the resulting entanglement remains robust against ambient temperature in the presence of coherent feedback control.
We consider quantum computer architectures where interactions are mediated between hot qubits that are not in their mechanical ground state. Such situations occur, e.g., when cooling is imperfect, or when ions or atoms are moved around. We introduce quantum gates between logically encoded systems that consist of multiple physical ones and show how the encoding can be used to make these gates resilient against such imperfections. We demonstrate that, in this way, one can improve gate fidelities by enlarging the logical system, and counteract the effect of unknown positions or position fluctuations of the involved particles. We consider both a classical treatment of positions in terms of probability distributions, as well as a quantum treatment using mechanical eigenmodes. We analyze different settings, including a cool logical system mediating interactions between two hot systems, as well as two logical systems consisting of hot physical systems whose positions fluctuate collectively or individually. In all cases, we demonstrate a significant improvement of gate fidelities, which provides a platform-independent tool to mitigate thermal noise.
In a recent paper [Gampel, F. and Gajda, M., Phys. Rev. A 107, 012420 (2023)], the authors propose a new model to explain the existence of classical trajectories in the quantum domain. The idea is based on simultaneous position and momentum measurements and a "jump Markov process"; accordingly, they interpret the emergence of classical trajectories as sets of detection events. They successfully implemented the model for a free particle and for one in a harmonic potential. Here, we show that the continuous-observation limit is a realization of a coherent semiclassical expansion; moreover, as has already been demonstrated, the jump process is neither necessary nor observable. In other words, the collapse they propose is subject to a no-go theorem: even if it is real, it cannot be measured under the assumptions needed to obtain Newtonian classical dynamics.
The discovery of the non-Hermitian skin effect has revolutionized our understanding of non-Hermitian topological phases, where the usual bulk-boundary correspondence is broken and new topological phases specific to non-Hermitian systems are uncovered. The hybrid skin-topological effect (HSTE) is a class of newly discovered non-Hermitian topological states that simultaneously supports skin-localized topological edge states and extended bulk states. Here we provide a brief review of the HSTE, starting from the different mechanisms that have been used to realize it, including non-reciprocal couplings, onsite gain/loss, and non-Euclidean lattice geometries. We also review some theoretical developments closely related to the HSTE, including the concept of higher-order non-Hermitian skin effect, parity-time symmetry engineering, and the non-Hermitian chiral skin effect. Finally, we summarize recent experimental explorations of the HSTE, including its realization in electric-circuit systems, non-Hermitian photonic crystals, and active matter systems. We hope this review clarifies the concept of the hybrid skin-topological effect and inspires new findings of non-Hermitian topological states in higher-dimensional systems.
Diamond quantum sensing is an emerging technology for probing multiple physico-chemical parameters in the nano- to micro-scale dimensions within diverse chemical and biological contexts. Integrating these sensors into microfluidic devices enables the precise quantification and analysis of small sample volumes in microscale channels. In this Perspective, we present recent advancements in the integration of diamond quantum sensors with microfluidic devices and explore their prospects with a focus on forthcoming technological developments.
Post-selected metrology, also known as probabilistic metrology, can be employed as an efficient filter or compression channel to compress the number of samples without significant loss of precision. This metrological scheme is especially advantageous when the final measurements are either very noisy or expensive in practical experiments. In this work, we put forward a general theory on the compression channels in post-selected metrology. We define the basic notations characterizing the compression quality and illuminate the underlying structure. Previous experiments on post-selected optical phase estimation and weak-value amplification are shown to be particular cases of this general theory. Furthermore, we discover that for two categories of bipartite systems, the compression loss can be made arbitrarily small even when the compression channel is restricted to one subsystem. These findings can be employed to distribute quantum measurements so that the measurement noise and cost are dramatically reduced. Therefore, we expect they will find immediate applications in quantum technology.
Quantum computing holds the potential for quantum advantage in optimization problems, which requires advances in both quantum algorithms and hardware. Adiabatic quantum optimization is a conceptually valid approach, but it suffers from limited hardware coherence times. Counterdiabatic quantum protocols provide a shortcut to this process, steering the system along its ground state with a fast-changing Hamiltonian. In this work, we take full advantage of a digitized-counterdiabatic quantum optimization (DCQO) algorithm to find optimal solutions of the $p$-spin model with up to 4-local interactions. We choose a suitable scheduling function and initial Hamiltonian such that a single-layer quantum circuit suffices to produce a good ground-state overlap. By further optimizing the parameters with variational methods, we solve 2-spin, 3-spin, and 4-spin problems with unit accuracy for $100\%$, $93\%$, and $83\%$ of instances, respectively. As a particular case of the latter, we also solve factorization problems involving 5, 9, and 12 qubits. Due to its low computational overhead, our compact approach may become a valuable tool towards quantum advantage in the NISQ era.
A generic architecture of n-bit quantum operators, with n >= 3, is proposed for cost-effective transpilation, based on the layouts and the number of n neighboring physical qubits in IBM quantum computers. The proposed architecture is termed the "GALA-n quantum operator". The GALA-n quantum operator is designed using the visual approach of the Bloch sphere, from the visual representations of the rotational quantum operations for IBM native gates (square root of X, X, RZ, and CNOT). In this paper, we also propose a new formula for the quantum cost, termed the "transpilation quantum cost", which accounts for the total number of native gates, the number of SWAP gates, and the depth of the final transpiled quantum circuit. After transpilation, our proposed GALA-n quantum operator always has a lower transpilation quantum cost than conventional n-bit quantum operators, which are mainly constructed from costly n-bit Toffoli gates.
Bose-Einstein condensation is an intriguing phenomenon that has garnered significant attention in recent decades. The number of atoms within the condensate determines the scale of experiments that can be performed, making it crucial for quantum simulations. A condensate of thulium atoms was successfully achieved in a 1064-nm dipole trap, and the atom count was optimized. Surprisingly, the number of atoms saturated at a value closely resembling the count achieved in a 532-nm dipole trap. Drawing insights from machine-learning results, it was concluded that a three-body recombination process was likely limiting the atom number. This limitation was successfully overcome by leveraging Fano-Feshbach resonances. Additionally, the cooling time was optimized.
There has been much recent progress in controlling $p$-orbital degrees of freedom in optical lattices, for example with lattice shaking, sublattice swapping, and lattice potential programming. Here, we present a protocol for preparing lowest-Landau-level (LLL) states of cold atoms by adiabatically compressing $p$-orbital Bose-Einstein condensates confined in two-dimensional optical lattices. The system starts from a chiral $p+ip$ Bose-Einstein condensate (BEC) state, which acquires finite angular momentum by spontaneous symmetry breaking. Such chiral BEC states have been achieved in recent optical lattice experiments with cold atoms loaded in the $p$-bands. Through an adiabatic adjustment of the lattice potential, we compress the three-dimensional BEC into a two-dimensional system, in which the orbital degrees of freedom continuously morph into LLL states. This process is enforced by the discrete rotation symmetry of the lattice potential. The final quantum state inherits large angular momentum from the original chiral $p+ip$ state, with one quantized unit per particle. We investigate the quantum many-body ground state of interacting bosons in the LLL with contact repulsion, which leads to an exotic gapped BEC state. Our theory can be readily tested, as the required techniques are all accessible to current optical lattice experiments.
The inverse scattering transform is developed to solve the Maxwell-Bloch system of equations that describes two-level systems with inhomogeneous broadening, in the case of optical pulses that do not vanish at infinity in the future. The direct problem, which is formulated in terms of a suitably defined uniformization variable, combines features of the formalisms for decaying and non-decaying fields. The inverse problem is formulated in terms of a $2\times 2$ matrix Riemann-Hilbert problem. A novel aspect of the problem is that no reflectionless solutions can exist, and solitons are always accompanied by radiation. At the same time, it is also shown that, when the medium is initially in the ground state, the radiative components of the solutions decay upon propagation into the medium, giving rise to asymptotically reflectionless states. As in the case when the optical pulse decays rapidly in the distant past and the distant future, a medium that is initially excited decays to the stable ground state as $t\to \infty$ for sufficiently large propagation distances. Finally, the asymptotic state of the medium and certain features of the optical pulse inside the medium are considered, and the emergence of a transition region upon propagation in the medium is briefly discussed.
L. E. Ballentine's remarks in Physics Today about the QBist interpretation of quantum mechanics are generally wide of the mark.
The quantum to classical transition (QCT) is one of the central mysteries in quantum physics. This process is generally interpreted as state collapse from measurement or decoherence from interaction with the environment. Here we define the quantumness of a Hamiltonian by the free energy difference between its quantum and classical descriptions, which vanishes during the QCT. We apply this criterion to the many-body Rabi model and study its scaling law across the phase transition, finding that not only the temperature and the Planck constant, but also all the model parameters, are important for this transition. We show that the Jaynes-Cummings and anti-Jaynes-Cummings models exhibit greater quantumness than the Rabi model. Moreover, we show that the rotating-wave and anti-rotating-wave terms in this model have opposite quantumness in the QCT. We demonstrate that the quantumness may be enhanced or suppressed at the critical point. Finally, we estimate the quantumness of the Rabi model in current trapped-ion experiments. The quantumness provides an important tool to characterize the QCT in a vast number of many-body models.
We propose a design of a superconducting quantum memristive device in the microwave regime, that is, a microwave quantum memristor. It comprises two linked resonators, where the primary one is coupled to a superconducting quantum interference device (SQUID), allowing the adjustment of the resonator properties with an external magnetic flux. The auxiliary resonator is operated through weak measurements, providing feedback to the primary resonator via the SQUID and establishing stable memristive behavior via the external magnetic flux. The device operates with a classical input signal in one cavity while reading the response in the other, serving as a fundamental building block for arrays of microwave quantum memristors. In this sense, we observe that a bipartite setup can retain its memristive behavior while gaining entanglement and quantum correlations. Our findings open the door to the experimental implementation of memristive superconducting quantum devices and arrays of microwave quantum memristors on the path to neuromorphic quantum computing.
Let $A$ be a sparse Hermitian matrix, $f(x)$ be a univariate function, and $i, j$ be two indices. In this work, we investigate the query complexity of approximating $\bra{i} f(A) \ket{j}$. We show that for any continuous function $f(x):[-1,1]\rightarrow [-1,1]$, the quantum query complexity of computing $\bra{i} f(A) \ket{j}\pm \varepsilon/4$ is lower bounded by $\Omega(\widetilde{\deg}_\varepsilon(f))$. The upper bound is at most quadratic in $\widetilde{\deg}_\varepsilon(f)$ and is linear in $\widetilde{\deg}_\varepsilon(f)$ under certain mild assumptions on $A$. Here the approximate degree $\widetilde{\deg}_\varepsilon(f)$ is the minimum degree such that there is a polynomial of that degree approximating $f$ up to additive error $\varepsilon$ in the interval $[-1,1]$. We also show that the classical query complexity is lower bounded by $\widetilde{\Omega}(2^{\widetilde{\deg}_{2\varepsilon}(f)/6})$. Our results show that the quantum and classical separation is exponential for any continuous function of sparse Hermitian matrices, and also imply the optimality of implementing smooth functions of sparse Hermitian matrices by quantum singular value transformation. The main techniques we used are the dual polynomial method for functions over the reals, linear semi-infinite programming, and tridiagonal matrices.
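Since these bounds are stated in terms of the $\varepsilon$-approximate degree, a small numerical illustration may help. The sketch below is our own assumption-laden construction, not the paper's method: it upper-bounds $\widetilde{\deg}_\varepsilon(f)$ by truncating the Chebyshev series of $f$ on $[-1,1]$ (the true approximate degree uses the best polynomial, so truncation can only overestimate it).

```python
import math

def cheb_coeffs(f, n_terms, n_quad=2048):
    # Chebyshev coefficients of f on [-1, 1] via Gauss-Chebyshev quadrature:
    # c_k ~ (2/N) * sum_j f(cos t_j) cos(k t_j), with c_0 halved.
    thetas = [math.pi * (j + 0.5) / n_quad for j in range(n_quad)]
    fvals = [f(math.cos(t)) for t in thetas]
    coeffs = [2.0 / n_quad * sum(fv * math.cos(k * t)
                                 for fv, t in zip(fvals, thetas))
              for k in range(n_terms)]
    coeffs[0] /= 2.0
    return coeffs

def trunc_sup_error(f, coeffs, d, n_grid=401):
    # sup-norm error of the degree-d truncation, estimated on a uniform grid
    err = 0.0
    for i in range(n_grid):
        x = -1.0 + 2.0 * i / (n_grid - 1)
        t = math.acos(max(-1.0, min(1.0, x)))
        approx = sum(coeffs[k] * math.cos(k * t) for k in range(d + 1))
        err = max(err, abs(f(x) - approx))
    return err

def approx_degree_upper_bound(f, eps, max_deg=80):
    # smallest truncation degree achieving error <= eps
    # (an upper bound on the eps-approximate degree of f)
    coeffs = cheb_coeffs(f, max_deg + 1)
    for d in range(max_deg + 1):
        if trunc_sup_error(f, coeffs, d) <= eps:
            return d
    return None
```

For example, `approx_degree_upper_bound(abs, 0.05)` returns a modest degree, consistent with the classical fact that the approximate degree of $|x|$ scales as $\Theta(1/\varepsilon)$.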
Rydberg-atom synthetic dimensions in the form of a lattice of n$^3S_1$ levels, $58\leq n \leq 63$, coupled through two-photon microwave excitation are used to examine dynamics within the single-particle Su-Schrieffer-Heeger (SSH) Hamiltonian. This paradigmatic model of topological matter describes a particle hopping on a one-dimensional lattice with staggered hopping rates. Tunneling rates between lattice sites and on-site potentials are set by the microwave amplitudes and detunings, respectively. An atom is first excited to a Rydberg state that lies within the lattice and is then subject to the microwave dressing fields. After some time, the dressing fields are turned off and the evolution of the population distribution over the final lattice sites is monitored using field ionization. The measurements show the existence of long-lived symmetry-protected edge states and reveal direct long-distance tunneling between the edge states. The results are in good agreement with model calculations and further demonstrate the potential of Rydberg-atom synthetic dimensions to simulate and faithfully reproduce complex Hamiltonians.
Qudit-based quantum computation offers unique advantages over qubit-based systems in terms of noise mitigation capabilities as well as algorithmic complexity improvements. However, the software ecosystem for multi-state quantum systems is severely limited. In this paper, we highlight a quantum workflow for describing and compiling qudit systems. We investigate the design and implementation of a quantum compiler for qudit systems. We also explore several key theoretical properties of qudit computing as well as efficient optimization techniques. Finally, we provide demonstrations using physical quantum computers as well as simulations of the proposed quantum toolchain.
Great progress has been made in the field of quantum inertial sensors, from the laboratory to real-world use, which will ultimately require sensor miniaturization and ruggedization for dynamic motion. However, lateral atomic movement with respect to the sensing axis limits inertial sensing, motivating atom guides that ensure transverse atomic confinement. Compared to a relatively large free-space optical mode, an evanescent field (EF) mode with an advantageously small mode area allows for stronger atom-light interactions and lower optical power requirements for atom guides. We study EF atom guides in nanofibers and waveguides with traveling evanescent waves, rather than EF optical lattices with standing evanescent waves. Our preliminary demonstration of an EF atom guide is performed on a nanofiber using light at 685 nm and 937 nm, and we experimentally show that the atomic coherence in the EF atom guide (685/937 nm) is similar to that in the EF optical lattice. By replacing the 685 nm light with 793 nm light, we further demonstrate a sub-5 mW EF atom guide (793/937 nm) on nanofibers and assess a relative power reduction of 61$\%$ on nanofibers and 78$\%$ on membrane waveguides, compared to nanofibers (685/937 nm), for the same optical potential depth. Power reduction is beneficial for reducing total optical power requirements and improving heat dissipation in vacuum for atom guiding on chip-based membrane-photonic devices. To connect this work with EF-guided atom interferometry, we evaluate the viability and potential benefits of using the low-power light configuration on membrane waveguides, presenting fabricated linear and circular waveguides for future demonstrations. This is a promising step towards a chip-scale quantum inertial sensor array with reduced size, weight, and power.
We suggest generalized robustness for quantifying nonlocality and investigate its properties by comparing it with the white-noise and standard robustness measures. We show that white-noise robustness does not satisfy monotonicity under local operations and shared randomness, whereas the other measures do. To compare the standard and generalized robustness measures, we introduce the concept of inequivalence, which indicates a reversal of the order relationship depending on the choice of monotone. From an operational perspective, the inequivalence of monotones for resourceful objects implies the absence of free operations connecting them. Applying this concept, we find that the standard and generalized robustness measures are inequivalent between even- and odd-dimensional cases up to eight dimensions. This is obtained using randomly sampled CGLMP measurement settings on a maximally entangled state. This study contributes to the resource theory of nonlocality and sheds light on comparing monotones by using the concept of inequivalence, which is valid for all resource theories.
We exploit InSb's magnetically induced optical properties to propose THz sub-wavelength antenna designs that actively tune the radiative decay rates of dipole emitters in their proximity. The proposed designs include a spherical InSb antenna and a cylindrical Si-InSb hybrid antenna that exhibit distinct behaviors: the former dramatically enhances both radiative and non-radiative decay rates in the epsilon-near-zero region due to the dominant contribution of the Zeeman-split electric octupole mode, while the latter realizes significant radiative decay rate enhancement via a magnetic octupole mode, mitigating the quenching process and accelerating the photon production rate. A deep-learning-based optimization of the emitter position further enhances the quantum efficiency of the proposed hybrid system. These mechanisms are promising for tunable THz single-photon sources in integrated quantum networks.
We consider how to infer the time-ordering associated with measurement data from quantum experiments at two times and any number of qubits, defining an arrow-of-time inference problem. We consider conditions on the initial and final states that are symmetric or asymmetric under time reversal. We represent the spatiotemporal measurement data via the pseudo-density-matrix space-time state. There is a forward process, which is CPTP, and a reverse process, which is obtained via a novel recovery map based on inverting unitary dilations. For asymmetric conditions, the protocol determines whether the data is consistent with the unitary-dilation recovery map or the CPTP map. For symmetric conditions, the recovery map yields a valid unitary, and the experiment may have taken place in either direction. We also discuss adapting the approach to the Leifer-Spekkens or process-matrix space-time states.
Quantum computing is under rapid development, and today there are several cloud-based quantum computers (QCs) of modest size (hundreds of physical qubits). Although these QCs, along with their highly specialized classical support infrastructure, are in limited supply, they are readily available for remote access and programming. This work shows the viability of using intrinsic quantum hardware properties for fingerprinting the cloud-based QCs that exist today. We demonstrate the reliability of intrinsic fingerprinting with real QC characterization data as well as simulated QC data, and we detail a quantum physically unclonable function (Q-PUF) scheme for secure key generation using unique fingerprint data combined with fuzzy extraction. We use fixed-frequency transmon qubits for prototyping our methods.
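The fuzzy-extraction step can be illustrated with a toy code-offset construction; this is our own simplified sketch using a 3x repetition code, not the paper's Q-PUF design. Public helper data masks a random key with a codeword, so a later noisy fingerprint reading still recovers the same key as long as each block contains at most one flipped bit.

```python
import hashlib
import random

BLOCK = 3  # repetition factor; majority vote corrects 1 flipped bit per block

def encode(bits):
    # repetition-code encoder: each key bit becomes BLOCK identical bits
    return [b for b in bits for _ in range(BLOCK)]

def decode(code):
    # majority vote per block recovers the key bits
    return [1 if sum(code[i:i + BLOCK]) * 2 > BLOCK else 0
            for i in range(0, len(code), BLOCK)]

def enroll(reading):
    """reading: list of 0/1 'fingerprint' bits, length a multiple of BLOCK."""
    key = [random.randrange(2) for _ in range(len(reading) // BLOCK)]
    helper = [r ^ c for r, c in zip(reading, encode(key))]   # public helper data
    digest = hashlib.sha256(bytes(key)).hexdigest()          # commits to the key
    return helper, digest

def reproduce(reading2, helper):
    # a fresh noisy reading XOR helper is a corrupted codeword; decode it
    codeword = [r ^ h for r, h in zip(reading2, helper)]
    return decode(codeword)
```

Real fuzzy extractors use stronger error-correcting codes and a randomness extractor instead of a bare hash, but the XOR-mask-then-decode structure is the same.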
In this paper, we explore the application of semidefinite programming to the realm of quantum codes, specifically focusing on codeword stabilized (CWS) codes with entanglement assistance. Notably, we utilize the isotropic subgroup of the CWS group and the set of word operators of a CWS-type quantum code to derive an upper bound on the minimum distance. Furthermore, this characterization can be incorporated into the associated distance enumerators, enabling us to construct semidefinite constraints that lead to SDP bounds on the minimum distance or size of CWS-type quantum codes. We illustrate several instances where SDP bounds outperform LP bounds, and there are even cases where LP fails to yield meaningful results, while SDP consistently provides tight and relevant bounds. Finally, we also provide interpretations of the Shor-Laflamme weight enumerators and shadow enumerators for codeword stabilized codes, enhancing our understanding of quantum codes.
Reaching useful fault-tolerant quantum computation relies on successfully implementing quantum error correction (QEC). In QEC, quantum gates and measurements are performed to stabilize the computational qubits, and classical processing is used to convert the measurements into estimated logical Pauli frame updates or logical measurement results. While QEC research has concentrated on developing and evaluating QEC codes and decoding algorithms, specification and clarification of the requirements for the classical control system running QEC codes are lacking. Here, we elucidate the roles of the QEC control system, the necessity to implement low latency feed-forward quantum operations, and suggest near-term benchmarks that confront the classical bottlenecks for QEC quantum computation. These benchmarks are based on the latency between a measurement and the operation that depends on it and incorporate the different control aspects such as quantum-classical parallelization capabilities and decoding throughput. Using a dynamical system analysis, we show how the QEC control system latency performance determines the operation regime of a QEC circuit: latency divergence, where quantum calculations are unfeasible, classical-controller limited runtime, or quantum-operation limited runtime where the classical operations do not delay the quantum circuit. This analysis and the proposed benchmarks aim to allow the evaluation and development of QEC control systems toward their realization as a main component in fault-tolerant quantum computation.
An interpretive phenomenological approach is adopted to investigate scientists' attitudes and practices related to hype in science communication. Twenty-four active quantum physicists participated in 5 focus groups. Through a semi-structured questionnaire, their use of hype, attitudes, behaviours, and perspectives on hype in science communication were observed. The main results show that scientists primarily attribute hype generation to themselves, major corporations, and marketing departments. They see hype as crucial for research funding and use it strategically, despite concerns. Scientists view hype as coercive, compromising their work's integrity, leading to mostly negative feelings about it, except for collaborator-generated hype. A dissonance exists between scientists' involvement in hype, their opinions, and the negative emotions it triggers. They manage this by attributing responsibility to the academic system, downplaying their practices. This reveals hype in science communication as a calculated, persuasive tactic by academic stakeholders, aligning with a neoliberal view of science. Implications extend to science communication, media studies, regulation, and academia.
Background: Several approaches are currently trying to understand the generation of angular momentum in fission fragments. The microscopic TDDFT and the statistical FREYA models lead to different predictions for the distribution of the opening angle formed between the two spins, in particular at 0 and 180 degrees. Purpose: This letter aims to investigate how the geometry and the quantum nature of the spins affect the distribution of opening angles, in order to understand what leads to the different model predictions. Method: Various assumptions on the K distribution (K=0, isotropic, isotropic with total K=0, and from TDDFT) are investigated in a quantum approach. These distributions are then compared to the classical limit, obtained from the Clebsch-Gordan coefficients as $\hbar$ approaches zero. Results: It is shown that in all the schematic scenarios the quantal distributions of the opening angle lead to the expected behavior in the classical limit. The model shows that the quantal nature of the spins prevents the population of opening angles close to 0 and 180 degrees. The difference in opening angle between the 2D and isotropic 3D distributions is discussed, and it is shown that the realistic TDDFT opening angle distribution exhibits an intermediate behavior between the two cases. Conclusions: The last comparison reveals two key differences between the two models' predictions: the quantal nature of the spins in TDDFT and the assumption of zero K values in FREYA.
Optical quantum sensing promises measurement precision beyond that of classical sensors, up to the Heisenberg limit (HL). However, conventional methodologies often rely on prior knowledge of the target system to achieve the HL, presenting challenges in practical applications. Addressing this limitation, we introduce a Deep-Learning-based Quantum Sensing scheme (DQS), enabling optical quantum sensors to attain the HL in agnostic environments. DQS incorporates two essential components: a Graph Neural Network (GNN) predictor and a trigonometric interpolation algorithm. Operating within a data-driven paradigm, DQS utilizes the GNN predictor, trained on offline data, to unveil the intrinsic relationships between the optical setups employed in preparing the probe state and the resulting quantum Fisher information (QFI) after interaction with the agnostic environment. This distilled knowledge facilitates the identification of optimal optical setups associated with maximal QFI. Subsequently, DQS employs a trigonometric interpolation algorithm to recover the unknown parameter estimates for the identified optical setups. Extensive experiments are conducted to investigate the performance of DQS under different settings of up to eight photons. Our findings not only offer a new lens through which to accelerate optical quantum sensing tasks but also catalyze future research integrating deep learning and quantum mechanics.
$SU(N_c)$ gauge theories in the strong coupling limit can be described by integer variables representing monomers, dimers and baryon loops. We demonstrate how the D-Wave quantum annealer can perform importance sampling of $U(N_c)$ gauge theory in the strong coupling formulation. In addition to causing a sign problem in importance sampling, baryon loops induce a complex QUBO matrix which cannot be optimized by the D-Wave annealer. Instead, we show that simulating the sign-problem-free quenched action on the D-Wave is sufficient when combined with a sign reweighting method. As a first test on $SU(3)$ gauge theory, we simulate a $2 \times 2$ lattice and compare the results with the analytic solutions.
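Sign reweighting, used above to sidestep the complex QUBO, is a generic Monte Carlo technique: sample from the sign-free (quenched) weight $|w|$ and fold the sign into the observable, $\langle O\rangle = \langle O\,s\rangle_{|w|}/\langle s\rangle_{|w|}$. The sketch below illustrates only this mechanism on an invented toy configuration set — it is plain Python, not the paper's $U(N_c)$ action or D-Wave workflow:

```python
import random

def sign_reweighted_mean(configs, weight, observable, n_samples=100_000, seed=1):
    """Estimate <O> under a signed weight w(x) by sampling from |w(x)|
    and reweighting:  <O> = <O*sign>_{|w|} / <sign>_{|w|}."""
    rng = random.Random(seed)
    absw = [abs(weight(x)) for x in configs]
    total = sum(absw)
    probs = [w / total for w in absw]
    num = den = 0.0
    for _ in range(n_samples):
        # draw a configuration from the sign-free (quenched) ensemble
        x = rng.choices(configs, weights=probs)[0]
        s = 1.0 if weight(x) >= 0 else -1.0
        num += observable(x) * s
        den += s
    return num / den

# toy example: four "configurations" with signed weights
configs = [0, 1, 2, 3]
weight = lambda x: [2.0, -0.5, 1.0, -0.25][x]
obs = lambda x: float(x)
exact = sum(obs(x) * weight(x) for x in configs) / sum(weight(x) for x in configs)
est = sign_reweighted_mean(configs, weight, obs)
```

The method works as long as the average sign $\langle s\rangle_{|w|}$ stays away from zero; a vanishing average sign is precisely the sign problem.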
Quantum lattices are pivotal in the burgeoning fields of quantum materials and information science. Rapid developments in microscopy and quantum engineering allow for preparing and monitoring wave-packet dynamics on quantum lattices with increasing spatial and temporal resolution. Motivated by these emerging research interests, we present an analytical study of wave-packet diffusivity and diffusion length on tight-binding quantum lattices subject to stochastic noise. Our analysis points to the crucial role of spatial coherence and predicts a set of novel phenomena: (i) noise can enhance the transient diffusivity and diffusion length of sufficiently extended initial states; (ii) a smooth Gaussian initial state spreads slower than a localized initial state; (iii) a standing or traveling initial state with large momentum spreads faster than a localized initial state and exhibits a noise-induced peak in the transient diffusivity; (iv) the change in the time-dependent diffusivity and diffusion length relative to a localized initial state follows a universal dependence on the Gaussian width. These theoretical predictions and the underlying mechanism of spatial coherence suggest the possibility of controlling wave-packet dynamics on quantum lattices by spatial manipulations, with implications for materials science and quantum technologies.
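The slower spreading of a smooth Gaussian relative to a localized state can already be seen in the noiseless limit of a tight-binding ring, where the Gaussian's narrow momentum distribution implies a small group-velocity spread. A minimal numerical sketch (numpy; lattice size, time and widths are arbitrary choices, and the stochastic-noise terms of the paper are omitted):

```python
import numpy as np

# Tight-binding ring H = -J * sum_x (|x><x+1| + h.c.); evolve two initial
# states for a time t and compare their mean-square displacements (MSD).
n, J, t = 201, 1.0, 20.0
H = -J * (np.eye(n, k=1) + np.eye(n, k=-1))
H[0, -1] = H[-1, 0] = -J                          # periodic boundary
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

x = np.arange(n) - n // 2                         # positions centered at 0

def msd(psi0):
    """Position variance of the evolved state |psi(t)> = U |psi(0)>."""
    psi0 = psi0 / np.linalg.norm(psi0)
    p = np.abs(U @ psi0) ** 2
    return p @ x**2 - (p @ x) ** 2

local = np.zeros(n, complex); local[n // 2] = 1.0       # single-site state
gauss = np.exp(-x**2 / (2 * 8.0**2)).astype(complex)    # width-8 Gaussian at rest
```

In this coherent limit the localized state spreads ballistically over the full band of group velocities, while the broad Gaussian at rest samples only velocities near zero, so `msd(local)` far exceeds `msd(gauss)`; the paper's results concern how noise modifies this picture.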
Quantum error-correcting codes are crucial for quantum computing and communication. Currently, these codes are mainly categorized into additive, non-additive, and surface codes. Additive and non-additive codes utilize one or more invariant subspaces of the stabilizer group $G$ to construct quantum codes, so the selection of these invariant subspaces is a key issue. In this paper, we address this problem by introducing quotient space codes and a construction method for quotient space quantum codes. This new framework unifies additive and non-additive quantum codes. We demonstrate that codeword stabilizer codes are a special case within this framework and supplement their error-correction distance. Furthermore, we provide a simple proof of the Singleton bound for these quantum codes by establishing the code bound of quotient space codes, and we discuss the code bounds for pure and impure codes. The quotient space approach offers a concise and clear mathematical form for the study of quantum codes.
Self-testing is a method to certify quantum states and measurements in a device-independent way. Device-independent certification of quantum properties is based purely on the input-output measurement statistics of the involved devices, with minimal knowledge about their internal workings. Bipartite pure entangled states can be self-tested, but for multipartite pure entangled states the answer is not so straightforward. Nevertheless, \v{S}upi\'{c} et al. recently introduced a novel self-testing method for any pure entangled quantum state, which leverages network assistance and relies on bipartite entangled measurements; their scheme thus loses the true device-independent flavor of self-testing. In this regard, we provide a self-testing scheme for genuine multipartite pure entangled states in the true sense by employing a generalized Hardy-type non-local argument. Importantly, our approach involves only local operations and classical communication: it does not depend on bipartite entangled measurements and is free from any network assistance. In addition, we provide the device-independent bound on the maximum probability of success of the generalized Hardy-type nonlocality test.
Quantum transduction between microwave and optical photons is essential for realizing scalable quantum computers with superconducting qubits. Due to the large frequency difference between the microwave and optical ranges, transduction must proceed via intermediate bosonic modes or nonlinear processes. So far, the transduction efficiency $\eta$ via the magneto-optic Faraday effect (i.e., the light-magnon interaction) in the ferromagnet YIG has been limited to $\eta\sim 10^{-8} \mathrm{-} 10^{-15}$ due to the sample size limitation inside the cavity. Here, we take advantage of the fact that three-dimensional topological insulator thin films exhibit a topological Faraday effect that is independent of the sample thickness. This leads to a large Faraday rotation angle and therefore an enhanced light-magnon interaction in the thin-film limit. We show theoretically that the transduction efficiency can be greatly improved to $\eta\sim10^{-4}$ by utilizing heterostructures consisting of topological insulator thin films such as Bi$_2$Se$_3$ and ferromagnetic insulator thin films such as YIG.
In this paper we develop a novel method to solve problems involving quantum optical systems coupled to coherent quantum feedback loops featuring time delays. Our method is based on exact mappings of such non-Markovian problems to equivalent Markovian driven dissipative quantum many-body problems. We show that the resulting Markovian quantum many-body problems can be solved (numerically) exactly and efficiently using tensor network methods for a series of paradigmatic examples, consisting of driven quantum systems coupled to waveguides at several distant points. In particular, we show that our method allows solving problems in previously inaccessible regimes, including problems with arbitrarily long time delays and arbitrary numbers of excitations in the delay lines. We obtain solutions for the full real-time dynamics as well as the steady state in all these regimes. Finally, motivated by our results, we develop a novel mean-field approach, which allows us to find the solution semi-analytically and to identify parameter regimes where this approximation is in excellent agreement with our exact tensor network results.
Semiconductor-based superconducting qubits offer a versatile platform for studying hybrid quantum devices in circuit quantum electrodynamics (cQED) architecture. Most of these cQED experiments utilize coplanar waveguides, where the incorporation of DC gate lines is straightforward. Here, we present a technique for probing gate-tunable hybrid devices using a three-dimensional (3D) microwave cavity. A recess is machined inside the cavity wall for the placement of devices and gate lines. We validate this design using a hybrid device based on an InAs-Al nanowire Josephson junction. The coupling between the device and the cavity is facilitated by a long superconducting strip, the antenna. The Josephson junction and the antenna together form a gatemon qubit. We further demonstrate the gate-tunable cavity shift and two-tone qubit spectroscopy. This technique could be used to probe various quantum devices and materials in a 3D cQED architecture that requires DC gate voltages.
Recently, Apers and Piddock [TQC '23] strengthened the natural connection between quantum walks and electrical networks by considering Kirchhoff's Law and Ohm's Law. In this work, we develop the multidimensional electrical network by defining Kirchhoff's Alternative Law and Ohm's Alternative Law based on the novel multidimensional quantum walk framework by Jeffery and Zur [STOC '23]. This multidimensional electrical network allows us to sample from the electrical flow obtained via a multidimensional quantum walk algorithm and achieve exponential quantum-classical separations for certain graph problems. We first use this framework to find a marked vertex in one-dimensional random hierarchical graphs as defined by Balasubramanian, Li, and Harrow [arXiv '23], who generalised the well-known exponential quantum-classical separation of the welded tree problem by Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman [STOC '03] to random hierarchical graphs. Our result partially recovers their results with an arguably simpler analysis. Furthermore, by constructing a $3$-regular graph based on welded trees, this framework also allows us to show an exponential speedup for the pathfinding problem. This solves one of the open problems posed by Li [arXiv '23], who constructs a non-regular graph and uses the degree information to achieve a similar speedup. In analogy to the connection between the (edge-vertex) incidence matrix of a graph and Kirchhoff's Law and Ohm's Law in an electrical network, we also build the connection between the alternative incidence matrix and Kirchhoff's Alternative Law and Ohm's Alternative Law. By establishing this connection, we expect that the multidimensional electrical network could have further applications beyond quantum walks.
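On the classical side of this correspondence, Kirchhoff's and Ohm's Laws determine an electrical flow whose energy between two nodes is the effective resistance, computable from the pseudoinverse of the graph Laplacian. A small sketch of that standard textbook quantity (not the paper's multidimensional construction):

```python
import numpy as np

def effective_resistance(adj, s, t):
    """Effective resistance between nodes s and t of a graph with
    unit-resistance edges: R_eff = (e_s - e_t)^T L^+ (e_s - e_t),
    where L^+ is the Moore-Penrose pseudoinverse of the Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj        # graph Laplacian D - A
    Lp = np.linalg.pinv(L)
    e = np.zeros(len(adj)); e[s], e[t] = 1.0, -1.0
    return float(e @ Lp @ e)

# path 0-1-2: two unit resistors in series -> R_eff(0, 2) = 2
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
# triangle: 1 ohm in parallel with 2 ohms -> R_eff = 2/3
tri = np.ones((3, 3)) - np.eye(3)
```

Quantum-walk search and escape probabilities are governed by exactly such flow energies, which is the connection the paper generalises to its alternative laws.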
We study the recently discussed XX spin chain with non-local dephasing [arXiv:2310.03069] in a steady-state boundary-driven setting, confirming superdiffusive magnetization transport in the thermodynamic limit. The emergence of superdiffusion is rather interesting, as the Lindblad operators causing it are a coherent sum of two terms, each of which would separately cause diffusion. One therefore has a quantum phenomenon in which a coherent sum of two diffusive terms results in superdiffusion. We also study perturbations of the superdiffusive model, finding that breaking the exact form of the dissipators, as well as adding interactions to the XX chain, changes superdiffusion into diffusion.
By analyzing the characteristics of hardware-native Ising models and their performance on current and next-generation quantum annealers, we provide a framework for assessing the prospect of advantage for adiabatic evolution over classical heuristics such as simulated annealing. We conduct Ising-model experiments with coefficients drawn from a variety of distributions and provide a range for the moments of the distributions that lead to frustration in classical heuristics. Identifying the relationships between the linear and quadratic terms of the models allows an a priori analysis of problem-instance suitability on annealers. We then extend these experiments to a prototype of D-Wave's next-generation device, showing further performance improvements over the current Advantage annealers.
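As a concrete reference point for the classical heuristic mentioned above, a bare-bones simulated-annealing loop for an Ising energy $E(s)=\sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j$ might look as follows (a generic textbook sketch; the schedule and parameters are arbitrary choices, not the authors' experimental setup):

```python
import math, random

def simulated_annealing(h, J, steps=20_000, T0=2.0, seed=0):
    """Minimize E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, s_i in {-1,+1},
    with single-spin-flip Metropolis updates and geometric cooling."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    E = sum(h[i] * s[i] for i in range(n)) + \
        sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    best = E
    for k in range(steps):
        T = T0 * 0.999 ** k
        i = rng.randrange(n)
        # local field on spin i (J stored as an upper-triangular matrix)
        field = h[i] + sum(J[min(i, j)][max(i, j)] * s[j] for j in range(n) if j != i)
        dE = -2 * s[i] * field            # energy change if spin i is flipped
        if dE <= 0 or rng.random() < math.exp(-dE / max(T, 1e-9)):
            s[i] = -s[i]
            E += dE
            best = min(best, E)
    return best

# 4-spin ferromagnetic chain: J_{i,i+1} = -1, ground-state energy -3
h = [0.0] * 4
J = [[0.0] * 4 for _ in range(4)]
for i in range(3):
    J[i][i + 1] = -1.0
```

Frustrated instances are exactly those on which such a loop gets trapped in local minima, which is the regime where annealing hardware might offer an advantage.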
In ab-initio electronic structure simulations, fermion-to-qubit mappings represent the initial encoding step of the fermionic problem into qubits. This work introduces a physically-inspired method for constructing mappings that significantly simplify entanglement requirements when simulating states of interest. The presence of electronic excitations drives the construction of our mappings, reducing correlations for target states in the qubit space. To benchmark our method, we simulate ground states of small molecules and observe an enhanced performance when compared to classical and quantum variational approaches from prior research employing conventional mappings. In particular, on the quantum side, our mappings require a reduced number of entangling layers to achieve chemical accuracy for the $LiH$, $H_2$, $(H_2)_2$ and $H_4$ molecules using the RY hardware efficient ansatz. In addition, our mappings also provide an enhanced ground state simulation performance in the density matrix renormalization group algorithm for the $N_2$ molecule.
We present a single-photon-based device-independent quantum key distribution scheme secure even against no-signaling eavesdropping, which covers quantum attacks as well as attacks beyond the laws of physics that still obey the rules for spatially separated events. The operational scheme is inspired by the Tan-Walls-Collett (1991) interferometric setup, which used the entangled state of the two exit modes of a 50-50 beamsplitter resulting from a single photon entering one of its input ports, together with weak homodyne measurements at two spatially separated measurement stations. The physics and non-classicality of such an arrangement has been understood only recently. Our protocol links basic features of the first two emblematic protocols, BB84 and Ekert91: the random bits of the cryptographic key are obtained by measurements on a single photon, while security is positively tested if one observes a violation of a specific Bell inequality. The security analysis presented here is based on a decomposition of the correlations into extremal points of a no-signaling polytope, which allows for identification of the optimal strategy for any eavesdropping constrained only by the no-signaling principle. For this strategy, the key rate is calculated and then connected with the violation of a specific Clauser-Horne inequality. We also adapt this analysis to propose a self-testing quantum random number generator.
We introduce the first minimal and complete equational theory for quantum circuits. That is, we show that any true equation on quantum circuits can be derived from simple rules, all of them standard except a novel but intuitive one which states that a multi-control $2\pi$ rotation is nothing but the identity. Our work improves on the recent complete equational theories for quantum circuits by getting rid of several rules, including a fairly unpractical one. One of our main contributions is to prove the minimality of the equational theory, i.e., that none of the rules can be derived from the others. More generally, we demonstrate that any complete equational theory on quantum circuits (when all gates are unitary) requires rules acting on an unbounded number of qubits. Finally, we also simplify the complete equational theories for quantum circuits with ancillary qubits and/or qubit discarding.
Disorder can fundamentally modify the transport properties of a system. A striking example is Anderson localization, suppressing transport due to destructive interference of propagation paths. In inhomogeneous many-body systems, not all particles are localized for finite-strength disorder, and the system can become partially diffusive. Unravelling the intricate signatures of localization from such observed diffusion is a long-standing problem. Here, we experimentally study a degenerate, spin-polarized Fermi gas in a disorder potential formed by an optical speckle pattern. We record the diffusion in the disordered potential upon release from an external confining potential. We compare different methods to analyze the resulting density distributions, including a new method to capture particle dynamics by evaluating absorption-image statistics. Using standard observables, such as diffusion exponent and coefficient, localized fraction, or localization length, we find that some show signatures for a transition to localization above a critical disorder strength, while others show a smooth crossover to a modified diffusion regime. In laterally displaced disorder, we spatially resolve different transport regimes simultaneously which allows us to extract the subdiffusion exponent expected for weak localization. Our work emphasizes that the transition toward localization can be investigated by closely analyzing the system's diffusion, offering ways of revealing localization effects beyond the signature of exponentially decaying density distribution.
The combination of quantum many-body and machine learning techniques has recently proved to be a fertile ground for new developments in quantum computing. Several works have shown that it is possible to classically and efficiently predict the expectation values of local observables on all states within a phase of matter using a machine learning algorithm, after learning from data obtained from other states in the same phase. However, existing results are restricted to phases of matter such as ground states of gapped Hamiltonians and Gibbs states that exhibit exponential decay of correlations. In this work, we drop this requirement and show how it is possible to learn local expectation values for all states in a phase, where we adopt the Lindbladian phase definition of Coser \& P\'erez-Garc\'ia [Quantum 3, 174 (2019)], which defines states to be in the same phase if we can drive one to the other rapidly with a local Lindbladian. This definition encompasses the better-known Hamiltonian definition of phase of matter for gapped ground-state phases, and further applies to any family of states connected by short unitary circuits, as well as to non-equilibrium phases of matter and those stable under external dissipative interactions. Under this definition, we show that $N = O(\log(n/\delta)\,2^{\mathrm{polylog}(1/\epsilon)})$ samples suffice to learn local expectation values within a phase for a system with $n$ qubits, to error $\epsilon$ with failure probability $\delta$. This sample complexity is comparable to previous results on learning gapped and thermal phases, and it encompasses previous results of this nature in a unified way. Furthermore, we also show that we can learn families of states which go beyond the Lindbladian definition of phase, and we derive bounds on the sample complexity which depend on the mixing time between states under a Lindbladian evolution.
A model of joint random walk of two agents on an infinite plane is considered. The agents possess no means of mutual classical communication, but have access to quantum entanglement resource which is used according to a pre-arranged protocol. Depending on the details of the protocol, an effective force of attraction or repulsion emerges between the two agents. The emergence of this force from quantum entanglement is interpreted in terms of spherical or hyperbolic geometries for attraction or repulsion, respectively.
Polyatomic molecules have rich structural features that make them uniquely suited to applications in quantum information science, quantum simulation, ultracold chemistry, and searches for physics beyond the Standard Model. However, a key challenge is fully controlling both the internal quantum state and the motional degrees of freedom of the molecules. Here, we demonstrate the creation of an optical tweezer array of individual polyatomic molecules, CaOH, with quantum control of their internal quantum state. The complex quantum structure of CaOH results in a non-trivial dependence of the molecules' behavior on the tweezer light wavelength. We control this interaction and directly and nondestructively image individual molecules in the tweezer array with >90% fidelity. The molecules are manipulated at the single internal quantum state level, thus demonstrating coherent state control in a tweezer array. The platform demonstrated here will enable a variety of experiments using individual polyatomic molecules with arbitrary spatial arrangement.
We analyze quantum-geometric bounds on optical weights in topological phases with pairs of bands hosting non-trivial Euler class, a multi-gap invariant characterizing non-Abelian band topology. We show how the bounds constrain the combined optical weights of the Euler bands at different dopings and further restrict the size of the adjacent band gaps. In this process, we also consider the associated interband contributions to DC conductivities in the flat-band limit. We physically validate these results by recasting the bound in terms of transition rates associated with the optical absorption of light, and demonstrate how the Euler connections and curvatures can be determined through the use of momentum and frequency-resolved optical measurements, allowing for a direct measurement of this multi-band invariant. Additionally, we prove that the bound holds beyond the degenerate limit of Euler bands, resulting in nodal topology captured by the patch Euler class. In this context, we deduce optical manifestations of Euler topology within $\vec{k} \cdot \vec{p}$ models, which include AC conductivity, and third-order jerk photoconductivities in doped Euler semimetals. We showcase our findings with numerical validation in lattice-regularized models that benchmark effective theories for real materials and are themselves directly realizable in metamaterials and optical lattices.
Many topological and critical aspects of the Kitaev chain are well known, with several classic results. In contrast, the critical behavior of the strong Majorana zero modes (MZM) has been overlooked. Here we introduce two topological markers which, surprisingly, exhibit non-trivial signatures over the entire (1+1) Ising critical line. We first analytically compute the MZM fidelity ${\cal{F}}_{\rm MZM}$--a measure of the MZM mapping between parity sectors. It takes a universal value along the (1+1) Ising critical line, ${\cal{F}}_{\rm MZM}=\sqrt{8}/\pi$, independent of the energy. We also obtain an exact analytical result for the critical MZM occupation number ${{\cal N}}_{\rm MZM}$, which depends on Catalan's constant ${\cal G}\approx 0.91596559$, for both the ground state (${{\cal N}}_{\rm MZM}=1/2-4{\cal{G}}/\pi^2\approx 0.12877$) and the first excited state (${{\cal N}}_{\rm MZM}=1/2+(8-4{\cal{G}})/\pi^2\approx 0.93934$). We further compute finite-size corrections, which vanish identically for the special ratio $\Delta/t=\sqrt{2}-1$ between pairing and hopping in the critical Kitaev chain.
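The closed-form values quoted above are easy to check numerically; the snippet below (a plain arithmetic check using the standard digits of Catalan's constant) reproduces the quoted decimals:

```python
import math

G = 0.9159655941772190                    # Catalan's constant
f_mzm = math.sqrt(8) / math.pi            # universal critical MZM fidelity
n_gs  = 0.5 - 4 * G / math.pi**2          # ground-state MZM occupation
n_exc = 0.5 + (8 - 4 * G) / math.pi**2    # first-excited-state MZM occupation
```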
Lexicographically minimal string rotation (LMSR) is the problem of finding the minimal rotation of a string in lexicographical order; it is widely used in equality checking of graphs, polygons, automata and chemical structures. In this paper, we propose an $O(n^{3/4})$ quantum query algorithm for LMSR. In particular, the algorithm has average-case query complexity $O(\sqrt n \log n)$, which is shown to be asymptotically optimal up to a polylogarithmic factor, compared to its $\Omega\left(\sqrt{n/\log n}\right)$ lower bound. Furthermore, we show that our quantum algorithm outperforms any (classical) randomized algorithm in both the worst and average cases. As an application, it can be used in benzenoid identification and disjoint-cycle automata minimization.
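For context, the classical problem is solvable in linear time when the whole string can be read; Booth's algorithm is the standard example (a textbook implementation for reference, unrelated to the paper's quantum query model):

```python
def least_rotation(s: str) -> int:
    """Booth's algorithm: index of the lexicographically minimal
    rotation of s, in O(n) time via a failure-function scan of s+s."""
    s2 = s + s
    n = len(s2)
    f = [-1] * n          # failure function
    k = 0                 # start of the least rotation found so far
    for j in range(1, n):
        sj = s2[j]
        i = f[j - k - 1]
        while i != -1 and sj != s2[k + i + 1]:
            if sj < s2[k + i + 1]:
                k = j - i - 1
            i = f[i]
        if sj != s2[k + i + 1]:
            if sj < s2[k]:    # i == -1 here
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

k = least_rotation("baca")    # rotations: baca, acab, caba, abac
```

Here `k == 3`, since "abac" is the smallest rotation of "baca".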
One of the major obstacles faced by quantum-enabled technology is the environmental noise that causes decoherence in the quantum system, thereby destroying much of its quantum character and introducing errors while the system undergoes quantum operations and processing. A number of techniques have been invented to mitigate these environmental effects, many of them specific to the environment and the quantum task at hand. Here, we propose a protocol that makes arbitrary environments effectively noise-free or transparent using an ancilla, and which is particularly well suited to protect information stored in atoms. The ancilla, consisting of photons, is allowed to undergo a restricted but wide class of noisy operations. The protocol transfers the information of the system onto a decoherence-free subspace and later retrieves it back to the system. Consequently, it enables full protection of quantum information and entanglement in the atomic system from decoherence. We propose experimental schemes to implement this protocol on atomic systems in an optical cavity.
In several cases, open quantum systems can be successfully described using master equations relying on Born-Markov approximations, but going beyond these approaches has often become necessary. In this work, we introduce the NCA and NCA-Markov dynamical maps for open quantum systems, which go beyond these master equations by replacing the Born approximation with a self-consistent approximation known as the non-crossing approximation (NCA). These maps are formally similar to master equations, but allow one to capture non-perturbative effects of the environment at a moderate extra numerical cost. To demonstrate their capabilities, we apply them to the spin-boson model at zero temperature for both an Ohmic and a sub-Ohmic environment, showing that they can both qualitatively capture its strong-coupling behaviour and be quantitatively correct beyond standard master equations.
In this work, we analytically and numerically study the sideband transition dynamics of the driven quantum Rabi model (QRM). We focus in particular on the conditions under which external transverse drive fields induce first-order sideband transitions. Inducing sideband transitions between two different systems is an essential technique for various physical models, including the QRM. Despite its importance, however, no precise analytical study has yet been reported that successfully explains the sideband transition rates in a driven QRM for all system parameter configurations. In our study, we analytically derive the sideband transition rates based on second-order perturbation theory, without relying on the rotating wave approximation (RWA). Our formulas are valid for all ranges of drive frequencies and system parameters. Our analytically derived formula agrees well with the numerical results in a regime of moderate drive amplitudes. Interestingly, we find a non-trivial longitudinal drive effect derived from the transverse drive Hamiltonian. This yields significant corrections to the sideband transition rates that would otherwise be expected if the longitudinal effect were neglected. Using this approach, one can precisely estimate the sideband transition rates in the QRM without being confined to specific parameter regimes. This provides an important contribution to understanding experiments described by the driven QRM.
In this work, we analyze a number of noisy quantum channels on a family of qudit states. The channels studied are the dit-flip noise, phase flip noise, dit-phase flip noise, depolarizing noise, non-Markovian Amplitude Damping Channel (ADC), dephasing noise, and depolarization noise. To gauge the effect of noise, the fidelity between the original and the final states is studied. The change of coherence under the action of noisy channels is also studied, both analytically and numerically. Our approach is advantageous as it has an explicit relation to the original approach to the multi-qubit hypergraph states.
{\it Learning finite automata} (termed {\it model learning}) has become an important field in machine learning and has found useful realistic applications. Quantum finite automata (QFA) are simple models of quantum computers with finite memory. Due to their simplicity, QFA have good physical realizability, and one-way QFA have essential advantages over classical finite automata with regard to state complexity (two-way QFA are more powerful than classical finite automata in computational ability as well). As a different problem in {\it quantum learning theory} and {\it quantum machine learning}, in this paper our purpose is to initiate the study of {\it learning QFA with queries} (which may naturally be termed {\it quantum model learning}), and our main results concern learning two basic one-way QFA: (1) we propose a learning algorithm for measure-once one-way QFA (MO-1QFA) with polynomial-time query complexity; (2) we propose a learning algorithm for measure-many one-way QFA (MM-1QFA) with polynomial-time query complexity as well.
Understanding Quantum Technologies 2023 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms; history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, components fabs, raw materials), unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum computing algorithms, software development tools and use cases, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an update to the 2022 and 2021 editions, published in October 2022 and October 2021 respectively. An update log is provided at the end of the book.
In adiabatic quantum annealing, the required run-time to reach a given ground-state fidelity is dictated by the size of the minimum gap that appears between the ground and first excited states in the annealing spectrum. In general, the presence of avoided level crossings demands an exponential increase in the annealing time with the system size, which has consequences both for the efficiency of the algorithm and for the required qubit coherence times. One promising avenue being explored to produce more favourable gap scaling is the introduction of non-stoquastic XX-couplings in the form of a catalyst; of particular interest are catalysts which utilise accessible information about the optimisation problem in their construction. Here we show extreme sensitivity of the effect of an XX-catalyst to subtle changes in the encoding of the optimisation problem. In particular, we observe that a targeted catalyst containing a single coupling at constant strength can significantly reduce the gap closing with system size at an avoided level crossing. For slightly different encodings of the same problems, however, these same catalysts result in closing gaps in the annealing spectrum. To understand the origin of these closing gaps, we study how the evolution of the ground-state vector is altered by the presence of the catalyst and find that the negative components of the ground-state vector are key to understanding the response of the gap spectrum. We also consider how and when these closing gaps could be utilised in diabatic quantum annealing protocols, a promising alternative to adiabatic quantum annealing in which transitions to higher energy levels are exploited to reduce the run time of the algorithm.
Molecular vibrations couple to visible light only weakly, have small mutual interactions, and hence are often ignored in non-linear optics. Here we show that the extreme confinement provided by plasmonic nano- and pico-cavities can sufficiently enhance optomechanical coupling so that intense laser illumination drastically softens the molecular bonds. This optomechanical pumping regime produces strong distortions of the Raman vibrational spectrum, related to giant vibrational frequency shifts from an optical spring effect that is a hundred-fold larger than in traditional cavities. Theoretical simulations accounting for the multimodal nanocavity response and near-field-induced collective phonon interactions are consistent with the experimentally observed non-linear behavior exhibited in the Raman spectra of nanoparticle-on-mirror constructs illuminated by ultrafast laser pulses. Further, we show indications that plasmonic picocavities allow us to access the optical spring effect in single molecules with continuous illumination. Driving the collective phonon in the nanocavity paves the way to controlling reversible bond softening, as well as irreversible chemistry.
The advent of quantum technologies has brought much attention to the theoretical characterization of the computational resources they provide. One method to quantify quantum resources is to use a class of functions called magic monotones and stabilizer entropies, which are, however, notoriously hard and impractical to evaluate for large system sizes. In recent studies, a fundamental connection between information scrambling, the magic monotone mana, and the 2-Renyi stabilizer entropy was established. This connection simplified the calculation of magic monotones, but this class of methods still suffers from exponential scaling with respect to the number of qubits. In this work, we establish a way to sample an out-of-time-order correlator that approximates magic monotones and the 2-Renyi stabilizer entropy. We numerically show the relation of these sampled correlators to different non-stabilizerness measures for both qubit and qutrit systems and provide an analytical relation to the 2-Renyi stabilizer entropy. Furthermore, we put forward and simulate a protocol to measure the monotonic behaviour of magic for the time evolution of local Hamiltonians.
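For context on the quantity being approximated, the 2-Renyi stabilizer entropy of a pure state can be evaluated directly for tiny systems from the standard definition $M_2 = -\log_2\big(\sum_P \langle\psi|P|\psi\rangle^4 / d\big)$, with the sum over all Pauli strings. The following single-qubit sketch is illustrative only (it is not the sampling protocol of the paper, which avoids exactly this exponential sum):

```python
import numpy as np

# Single-qubit Pauli operators, including the identity
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),    # X
          np.array([[0, -1j], [1j, 0]]),                # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]   # Z

def stabilizer_renyi_2(psi):
    """2-Renyi stabilizer entropy M_2 of a pure single-qubit state psi."""
    d = 2
    s = sum(np.real(np.vdot(psi, P @ psi)) ** 4 for P in PAULIS)
    return -np.log2(s / d)

zero = np.array([1, 0], dtype=complex)                   # stabilizer state
t = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # magic T state

print(stabilizer_renyi_2(zero))  # ≈ 0 (no magic)
print(stabilizer_renyi_2(t))     # ≈ 0.415
```

The exponential cost is visible already here: for $n$ qubits the sum runs over $4^n$ Pauli strings, which is what motivates sampling-based estimators.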
The most complicated and challenging system within a light-pulse atom interferometer (LPAI) is the laser system, which controls the frequencies and intensities of multiple laser beams over time to configure quantum gravity and inertial sensors. The main functions of an LPAI laser system are to perform cold-atom generation and state-selective detection and to generate the coherent two-photon process for the light-pulse sequence. Substantial miniaturization and ruggedization of the laser system can be achieved by bringing most key functions of the laser and optical system onto a photonic integrated circuit (PIC). Here we demonstrate a high-performance silicon photonic carrier-suppressed single-sideband (CS-SSB) modulator PIC with dual-parallel Mach-Zehnder modulators (DP-MZMs) operating near 1560 nm, which can dynamically shift the frequency of the light for the desired function within the LPAI. Independent RF control of the channels in the SSB modulator enables an extensive study of imbalances in both the optical and RF phases and amplitudes, allowing us to simultaneously reach 30 dB carrier suppression and an unprecedented 47.8 dB sideband suppression with a peak conversion efficiency of -6.846 dB (20.7%). Using the silicon photonic SSB modulator with time-multiplexed frequency shifting in an LPAI laser system, we demonstrate cold-atom generation, state-selective detection, and the realization of atom interferometer fringes to estimate the gravitational acceleration, $g \approx 9.77 \pm 0.01 \,\rm{m/s^2}$, in a rubidium ($^{87}$Rb) atom system.
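As a quick sanity check on the quoted figures (illustrative arithmetic only), a power ratio in dB converts to a linear fraction via $10^{\mathrm{dB}/10}$, which reproduces the stated 20.7% peak conversion efficiency:

```python
def db_to_fraction(db):
    """Convert a power ratio in dB to a linear power fraction."""
    return 10 ** (db / 10)

# Quoted peak conversion efficiency of the CS-SSB modulator
eff = db_to_fraction(-6.846)
print(f"{eff:.3f}")  # -> 0.207, i.e. 20.7 %
```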
A strictly time-domain formulation of the log-sensitivity of the error signal to structured plant uncertainty is presented and analyzed through simple but representative classical and quantum systems. Results demonstrate that across a wide range of physical systems, maximization of performance (minimization of the error signal) asymptotically or at a specific time comes at the cost of increased log-sensitivity, implying a time-domain constraint analogous to the frequency-domain identity $\mathbf{S(s) + T(s) = I}$. While of limited value in classical problems based on asymptotic stabilization or tracking, such a time-domain formulation is valuable in assessing the reduced robustness cost concomitant with high-fidelity quantum control schemes predicated on time-based performance measures.
Microwave driving is a ubiquitous technique for superconducting qubits (SCQs), but the dressed-state description based on the conventionally used perturbation theory cannot fully capture the dynamics in the strong driving limit. Comprehensive studies beyond these approximations applicable to transmon-based circuit quantum electrodynamics (QED) systems are unfortunately rare, as the relevant works have been mainly limited to single-mode or two-state systems. In this work, we investigate a microwave-dressed transmon coupled to a single quantized mode over a wide range of driving parameters. We reveal that the interaction between the transmon and the resonator, as well as the properties of each mode, is significantly renormalized in the strong driving limit. Unlike previous theoretical works, we establish a non-recursive and non-Floquet theory beyond the perturbative regimes, which is in excellent quantitative agreement with the experiments. This work expands our fundamental understanding of dressed cavity-QED-like systems beyond the conventional approximations. It will also contribute to fast quantum gate implementation, qubit parameter engineering, and fundamental studies of driven nonlinear systems.
We propose a quantum error mitigation strategy for the variational quantum eigensolver (VQE) algorithm. We find, via numerical simulation, that very small amounts of coherent noise in VQE can cause substantial errors that are difficult to suppress by conventional mitigation methods, and yet our proposed strategy is able to significantly reduce them. The strategy is a combination of two previously reported techniques: randomized compiling (RC) and zero-noise extrapolation (ZNE). Intuitively, randomized compiling turns coherent errors in the circuit into stochastic Pauli errors, which facilitates extrapolation to the zero-noise limit when evaluating the cost function. Our numerical simulations of VQE for small molecules show that the proposed strategy can mitigate energy errors induced by various types of coherent noise by up to two orders of magnitude.
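To illustrate the ZNE half of such a strategy (a generic sketch under stated assumptions, not the authors' implementation): once RC has rendered the noise effectively stochastic, the cost function can be measured at several artificially amplified noise levels (e.g. via gate folding) and extrapolated back to zero noise with a polynomial fit. All names and the toy data below are hypothetical:

```python
import numpy as np

def zero_noise_extrapolate(scales, values, degree=1):
    """Fit expectation values measured at amplified noise scales and
    extrapolate the fit to the zero-noise limit (scale = 0)."""
    coeffs = np.polyfit(scales, values, degree)
    return np.polyval(coeffs, 0.0)

# Toy data: the measured energy degrades linearly with noise scale
scales = [1.0, 2.0, 3.0]      # noise amplification factors
values = [0.90, 0.80, 0.70]   # measured cost function at each scale
print(zero_noise_extrapolate(scales, values))  # -> ~1.0
```

In practice the choice of fit model (linear, Richardson, exponential) depends on how the dominant noise channel scales with amplification.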
Bosonic cat qubits stabilized by two-photon driven dissipation benefit from an exponential suppression of bit-flip errors and an extensive set of gates preserving this protection. These properties make them promising building blocks of a hardware-efficient and fault-tolerant quantum processor. In this paper, we propose a performance optimization of the repetition cat code architecture using fast but noisy CNOT gates for stabilizer measurements. This optimization leads to high thresholds for the physical figure of merit, given as the ratio between the intrinsic single-photon loss rate of the bosonic mode and the engineered two-photon loss rate, as well as a very favourable below-threshold scaling of the overhead required to reach a target logical error rate. Relying on the specific error models for cat qubit operations, this optimization exploits fast parity measurements, using accelerated low-fidelity CNOT gates combined with fast ancilla parity-check qubits. The significant enhancement in performance is explained by (1) the highly asymmetric error model of cat qubit CNOT gates, with a major error component on the control (ancilla) qubits, and (2) the robustness of the error-correction performance in the presence of the leakage induced by fast operations. To demonstrate this performance, we develop a method to sample the repetition code under circuit-level noise that also takes into account cat qubit state leakage.
In quantum information, trace distance is a basic metric of distinguishability between quantum states. However, there is no known efficient approach to estimate the value of trace distance in general. In this paper, we propose efficient quantum algorithms for estimating the trace distance within additive error $\varepsilon$ between mixed quantum states of rank $r$. Specifically, we first provide a quantum algorithm using $r \cdot \widetilde O(1/\varepsilon^2)$ queries to the quantum circuits that prepare the purifications of quantum states. Then, we modify this quantum algorithm to obtain another algorithm using $\widetilde O(r^2/\varepsilon^5)$ samples of quantum states, which can be applied to quantum state certification. These algorithms have query/sample complexities that are independent of the dimension $N$ of quantum states, and their time complexities only incur an extra $O(\log (N))$ factor. In addition, we show that the decision version of low-rank trace distance estimation is $\mathsf{BQP}$-complete.
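For reference, the quantity these algorithms estimate is $T(\rho,\sigma)=\frac{1}{2}\|\rho-\sigma\|_1$. A direct, dimension-dependent computation is easy for small density matrices (and is exactly what becomes infeasible for large $N$); the following sketch is purely illustrative:

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * ||rho - sigma||_1, computed from the
    eigenvalues of the Hermitian difference."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

# Orthogonal pure states are perfectly distinguishable: T = 1
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.array([[0, 0], [0, 1]], dtype=complex)
print(trace_distance(rho, sigma))  # -> 1.0

# Identical states are indistinguishable: T = 0
print(trace_distance(rho, rho))    # -> 0.0
```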
We experimentally demonstrate stable trapping and controlled manipulation of silica microspheres in a structured optical beam consisting of a dark focus surrounded by light in all directions - the so-called Dark Focus Tweezer. Results from power spectrum and potential analysis demonstrate the non-harmonicity of the trapping potential landscape, which is reconstructed from experimental data in agreement with Lorentz-Mie numerical simulations. Applications of the dark tweezer in levitated optomechanics and biophysics are discussed.
Qubits built out of Majorana zero modes (MZMs) constitute the primary path towards topologically protected quantum computing. Simulating the braiding process of multiple MZMs corresponds to the quantum dynamics of a superconducting many-body system. It is crucial to study the Majorana dynamics both in the presence of all other quasiparticles and for reasonably large system sizes. We present a method to calculate arbitrary many-body wavefunctions as well as their expectation values, correlators and overlaps from time evolved single-particle states of a superconductor, allowing for significantly larger system sizes. We calculate the fidelity, transition probabilities, and joint parities of Majorana pairs to track the quality of the braiding process. We show how the braiding success depends on the speed of the braid. Moreover, we demonstrate the topological CNOT two-qubit gate as an example of two-qubit entanglement. Our work opens the path to test and analyze the many theoretical implementations of Majorana qubits. Moreover, this method can be used to study the dynamics of any non-interacting superconductor.
This work investigates the out-of-equilibrium dynamics of dipole and higher-moment conserving systems with long-range interactions, drawing inspiration from trapped ion experiments in strongly tilted potentials. We introduce a hierarchical sequence of multipole-conserving models characterized by power-law decaying couplings. Although the moments are always globally conserved, adjusting the power-law exponents of the couplings induces various regimes in which only a subset of multipole moments are effectively locally conserved. We examine the late-time hydrodynamics analytically and numerically using an effective classical framework, uncovering a rich dynamical phase diagram that includes subdiffusion, conventional diffusion, and L\'evy flights. Our results are unified in an analytic reciprocal relationship that captures the nested hierarchy of hydrodynamics in multipole conserving systems where only a subset of the moments are locally conserved. Moreover, we extend our findings to higher dimensions and explore the emergence of long-time scales, reminiscent of pre-thermal regimes, in systems with low charge density. Lastly, we corroborate our results through state-of-the-art numerical simulations of a fully quantum long-range dipole-conserving system and discuss their relevance to trapped-ion experimental setups.
Quantum machine learning with variational quantum algorithms (VQA) has been actively investigated as a practical approach in the noisy intermediate-scale quantum (NISQ) era. Recent research reveals that data re-uploading, which repeatedly encodes classical data into a quantum circuit, is necessary for obtaining an expressive quantum machine learning model in the conventional quantum computing architecture. However, data re-uploading tends to require a large amount of quantum resources, which motivates us to find an alternative strategy for realizing expressive quantum machine learning efficiently. In this paper, we propose quantum machine learning with Kerr-nonlinear parametric oscillators (KPOs), another promising quantum computing device. The key idea is that we use not only the ground state and first excited state but also higher excited states, which allows us to use a large Hilbert space even with a single KPO. Our numerical simulations show that the expressibility of our method with only one mode of the KPO is much higher than that of the conventional method with six qubits. Our results pave the way towards resource-efficient quantum machine learning, which is essential for practical applications in the NISQ era.
We develop and apply an extension of the randomized compiling (RC) protocol that includes a special treatment of neighboring qubits and dramatically reduces crosstalk effects caused by the application of faulty gates on superconducting qubits in IBMQ quantum computers (\texttt{ibm\_lagos} and \texttt{ibmq\_ehningen}). Crosstalk errors, stemming from CNOT two-qubit gates, are a crucial source of errors on numerous quantum computing platforms. For the IBMQ machines, their magnitude is often overlooked. Our RC protocol turns coherent noise due to crosstalk into a depolarising noise channel that can then be treated using established error mitigation schemes, such as noise estimation circuits. We apply our approach to the quantum simulation of the non-equilibrium dynamics of the Bardeen-Cooper-Schrieffer (BCS) Hamiltonian for superconductivity, a particularly challenging model to simulate on quantum hardware because of the long-range interaction of Cooper pairs. With 135 CNOT gates, we work in a regime where crosstalk, as opposed to either trotterization or qubit decoherence, dominates the error. Our twirling of neighboring qubits is shown to dramatically improve the noise estimation protocol without the need to add new qubits or circuits and allows for a quantitative simulation of the BCS model.
The Bell experiment is discussed in the light of a new approach to the foundation of quantum mechanics. It is concluded from the basic model that the mind of any observer must be limited in some way: In certain contexts, he is simply not able to keep enough variables in his mind when making decisions. This has consequences for Bell's theorem, but it also seems to have wider consequences.
We investigate the rotational properties of a two-component, two-dimensional self-bound quantum droplet confined in a harmonic potential, and compare them with the well-known problem of a single-component atomic gas with contact interactions. For a fixed value of the trap frequency and some representative values of the atom number, we determine the lowest-energy state as the angular momentum increases. For a sufficiently small number of atoms, the angular momentum is carried via center-of-mass excitation. For larger values, when the angular momentum is sufficiently small, we observe vortex excitation instead. Depending on the actual atom number, one or more vortices enter the droplet. Beyond some critical value of the angular momentum, however, the droplet does not accommodate more vortices and the additional angular momentum is carried via center-of-mass excitation in a "mixed" state. Finally, the excitation spectrum is also briefly discussed.
Despite a very good understanding of single-particle Anderson localization in one-dimensional (1D) disordered systems, many-body effects are still full of surprises, a famous example being the interaction-driven many-body localization (MBL) problem, about which much has been written, and perhaps the best is yet to come. Interestingly enough, the non-interacting limit provides a natural playground to study non-trivial multiparticle physics, offering the possibility to test some general mechanisms with very large-scale exact diagonalization simulations. In this work, we first revisit the 1D many-body Anderson insulator through the lens of extreme value theory, focusing on the extreme polarizations of the equivalent spin chain model in a random magnetic field. A many-body-induced chain breaking mechanism is explored numerically, and compared to an analytically solvable toy model. A unified description, from weak to large disorder strengths $W$, emerges, in which the disorder-dependent average localization length $\xi(W)$ governs the extreme events leading to chain breaks. In particular, the tails of the local magnetization distributions are controlled by $\xi(W)$. Remarkably, we also obtain a quantitative understanding of the full distribution of the extreme polarizations, which is given by a Fr\'echet-type law. In a second part, we explore finite-interaction physics and the MBL question. For the available system sizes, we numerically quantify the difference in the extreme value distributions between the interacting problem and the non-interacting Anderson case. Strikingly, we observe a sharp "extreme-statistics transition" as $W$ changes, which may coincide with the MBL transition.
When reformulated as a resource theory, thermodynamics can analyze system behaviors in the single-shot regime. In this framework, the work required to implement state transitions is bounded by $\alpha$-Renyi divergences, and so the operations identified as efficient differ from those of stochastic thermodynamics. A detailed understanding of the difference between stochastic thermodynamics and resource-theoretic thermodynamics is therefore needed. To this end, we study reversibility in the single-shot regime, generalizing the two-level work reservoirs used there to multi-level work reservoirs, which achieve reversibility for any transition in the single-shot regime. Building on this, we systematically explore multi-level work reservoirs in the nondissipation regime with and without catalysts. The resource-theoretic results show that two-level work reservoirs undershoot Landauer's bound, misleadingly implying energy dissipation during computation. In contrast, we demonstrate that multi-level work reservoirs achieve Landauer's bound and produce zero entropy.
Interfaces of light and matter serve as a platform for exciting many-body physics and photonic quantum technologies. Due to the recent experimental realization of atomic arrays at sub-wavelength spacings, collective interaction effects such as superradiance have regained substantial interest. Their analytical and numerical treatment is however quite challenging. Here we develop a semiclassical approach to this problem that allows one to describe the coherent and dissipative many-body dynamics of interacting spins while taking into account lowest-order quantum fluctuations. For this purpose we extend the discrete truncated Wigner approximation, originally developed for unitarily coupled spins, to include collective, dissipative spin processes by means of truncated correspondence rules. This maps the dynamics of the atomic ensemble onto a set of semiclassical, numerically inexpensive stochastic differential equations. We benchmark our method with exact results for the case of Dicke decay, which shows excellent agreement. For small arrays we compare to exact simulations and a second-order cumulant expansion, again showing good agreement at early times and at moderate to strong driving. We conclude by studying the radiative properties of a spatially extended three-dimensional, coherently driven gas and compare the coherence of the emitted light to experimental results.
The papers by Janszky and Adam [Phys. Rev. A {\bf 46}, 6091 (1992)] and Chen \textit{et al.} [Phys. Rev. Lett. {\bf 104}, 063002 (2010)] are examples of works where one can find equivalences belonging to the following class: quantum harmonic oscillators subjected to different time-dependent frequency modulations, during a certain time interval $\tau$, exhibit exactly the same final null squeezing parameter ($r_f=0$). In the present paper, we discuss a more general class of squeezing equivalence, where the final squeezing parameter can be non-null ($r_f\geq0$). We show that when the interest is in controlling the forms of the frequency modulations, but keeping free the values of $r_f$ and $\tau$, this in general demands numerical calculations to find the values leading to squeezing equivalences (a particular case of this procedure recovers the equivalence found by Janszky and Adam). On the other hand, when the interest is not in previously controlling the form of these frequencies, but rather $r_f$ and $\tau$ (and also some constraints, such as minimization of energy), one can have analytical solutions for these frequencies leading to squeezing equivalences (particular cases of this procedure are usually applied in problems of shortcuts to adiabaticity, as done by Chen \textit{et al.}). In this way, the more general squeezing equivalence discussed here is connected to recent and important topics in the literature, for instance, the generation of squeezed states and the obtaining of shortcuts to adiabaticity.
We propose a physical principle for implementation of controllable interactions of identical electromagnetic bosons (excitons or polaritons) in two-dimensional (2D) semiconductors. The key ingredients are tightly bound biexcitons and in-plane anisotropy of the host structure due to, e.g., a uniaxial strain. We show that anisotropy-induced splitting of the radiative exciton doublet couples the biexciton state to continua of boson scattering states. As a result, two-body elastic scattering of bosons may be resonantly amplified when energetically tuned close to the biexciton by applying a transverse magnetic field or tuning the coupling with the microcavity photon mode. At the resonance, bosonic fields undergo quantum reaction of fusion accompanied by their squeezing. For excitons, we predict giant molecules (Feshbach dimers) which can be obtained from a biexciton via rapid adiabatic sweeping of the magnetic field across the resonance. The molecules possess non-trivial entanglement properties. Our proposal holds promise for the strongly-correlated photonics and quantum chemistry of light.
Near-term quantum simulators are mostly based on qubit-based architectures. However, their imperfect nature significantly limits their practical application. The situation is even worse for simulating fermionic systems, which underlie most of materials science and chemistry, as one has to adopt fermion-to-qubit encodings which create significant additional resource overhead and trainability issues. Thanks to recent advances in trapping and manipulation of neutral atoms in optical tweezers, digital fermionic quantum simulators are becoming viable. A key question is whether these emerging fermionic simulators can outperform qubit-based simulators for characterizing strongly correlated electronic systems. Here, we perform a comprehensive comparison of resource efficiency between qubit and fermionic simulators for variational ground-state emulation of fermionic systems in both condensed matter systems and quantum chemistry problems. We show that the fermionic simulators indeed outperform their qubit counterparts with respect to resources for quantum evolution (circuit depth), as well as classical optimization (number of required parameters and iterations). In addition, they show less sensitivity to the random initialization of the circuit. The relative advantage of fermionic simulators becomes even more pronounced as the interaction becomes stronger, when tunneling is allowed in more than one dimension, and for spinful fermions. Importantly, this improvement is scalable, i.e., the performance gap between fermionic and qubit simulators only grows for bigger system sizes.
Two hybrid quantum-classical reservoir computing models are presented to reproduce low-order statistical properties of a two-dimensional turbulent Rayleigh-B\'enard convection flow at a Rayleigh number ${\rm Ra}=10^5$ and a Prandtl number ${\rm Pr}=10$. These properties comprise the mean vertical profiles of the root mean square velocity and temperature and the turbulent convective heat flux. The two quantum algorithms differ in the arrangement of the circuit layers of the quantum reservoir, in particular the entanglement layers. The second of the two quantum circuit architectures, denoted as H2, enables a complete execution of the reservoir update inside the quantum circuit without the use of external memory. Their performance is compared with that of a classical reservoir computing model. To this end, all three models have to learn the nonlinear and chaotic dynamics of the turbulent flow at hand in a lower-dimensional latent data space spanned by the time-dependent expansion coefficients of the 16 most energetic Proper Orthogonal Decomposition (POD) modes. These training data are generated by a POD snapshot analysis from direct numerical simulations of the original turbulent flow. All reservoir computing models are operated in the reconstruction mode. We analyse different measures of the reconstruction error as functions of the hyperparameters that are either specific to the quantum cases or shared with the classical counterpart, such as the reservoir size and the leaking rate. We show that both quantum algorithms are able to reconstruct the essential statistical properties of the turbulent convection flow successfully, with performance similar to that of the classical reservoir network. Most importantly, the quantum reservoirs are smaller by a factor of 4 to 8 than their classical counterpart.
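For reference, the classical baseline in such comparisons is typically a leaky echo state network, whose reservoir state evolves as $x_{t+1} = (1-a)\,x_t + a\tanh(W x_t + W_{\rm in} u_t)$ with leaking rate $a$. The minimal sketch below is generic (all parameter names and values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

class LeakyESN:
    """Minimal leaky echo state network reservoir (classical baseline)."""
    def __init__(self, n_in, n_res, leak=0.3, spectral_radius=0.9):
        self.leak = leak
        W = rng.normal(size=(n_res, n_res))
        # Rescale the recurrent weights so the spectral radius is < 1,
        # a standard sufficient heuristic for the echo state property
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_in = rng.normal(size=(n_res, n_in))
        self.x = np.zeros(n_res)

    def step(self, u):
        pre = self.W @ self.x + self.W_in @ u
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

# Drive a 64-node reservoir with a 16-dimensional input, mimicking the
# 16 POD expansion coefficients used as the latent space in the paper
esn = LeakyESN(n_in=16, n_res=64)
for t in range(100):
    state = esn.step(np.sin(0.1 * t) * np.ones(16))
print(state.shape)  # -> (64,)
```

A linear readout trained on such states (e.g. by ridge regression) then maps the reservoir back to the latent POD coefficients; the quantum variants replace the recurrent core with a parametrized quantum circuit.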
We analyse the quantum Cheshire cat using contextuality theory, to see if this can tell us anything about how best to interpret this paradox. We show that this scenario can be analysed using the relation between three different measurements, which seem to result in a logical contradiction. We discuss how this contextual behaviour links to weak values, and coherences between prohibited states. Rather than showing a property of the particle is disembodied, the quantum Cheshire cat instead demonstrates the effects of these coherences, which are typically found in pre- and postselected systems.
The cross-resonance (CR) gate architecture, which exploits fixed-frequency transmon qubits and fixed couplings, is a leading candidate for quantum computing. Nonetheless, without tunability of qubit parameters such as qubit frequencies and couplings, gate operations can be limited by the presence of quantum crosstalk arising from the always-on couplings. As system sizes increase, this can become even more serious given the frequency collisions caused by fabrication uncertainties. Here, we introduce a CR gate-based transmon architecture with passive mitigation of both quantum crosstalk and frequency collisions. Assuming typical parameters, we show that ZZ crosstalk can be suppressed while maintaining XY couplings to support fast, high-fidelity CR gates. The architecture also allows one to go beyond the existing literature by extending the operating regions in which fast, high-fidelity CR gates are possible, thus alleviating the frequency-collision issue. To examine the practicality, we analyze the CR gate performance in multiqubit lattices and provide an intuitive model for identifying and mitigating the dominant source of error. We further investigate the impact of the state-of-the-art precision in setting qubit frequencies on the gates. We find that ZZ crosstalk and frequency collisions can be largely mitigated for neighboring qubits, while interactions beyond nearest-neighbor qubits can introduce new frequency collisions. As the strength is typically at the sub-MHz level, adding weak off-resonant drives to selectively shift qubits can mitigate these collisions. This work could be useful for suppressing quantum crosstalk and improving gate fidelities in large-scale quantum processors based on fixed-frequency qubits and fixed couplings.
Motivated by entanglement protection, our work utilizes a resonance effect to enhance optomechanical entanglement in the coherent-state representation. We propose a filtering model to filter out the significant detuning components between a thermal-mechanical mode and its surrounding heat baths in the weak coupling limit. We reveal that protecting continuous-variable entanglement involves the elimination of degrees of freedom associated with significant detuning components, thereby resisting decoherence. We construct a nonlinear Langevin equation of the filtering model and numerically show that the filtering model doubles the robustness of the stationary maximum optomechanical entanglement to the thermal fluctuation noise and mechanical damping. Furthermore, we generalize these results to an optical cavity array with one oscillating end-mirror to investigate the long-distance optimal optomechanical entanglement transfer. Our study breaks new ground for applying the resonance effect to protect quantum systems from decoherence and advancing the possibilities of large-scale quantum information processing and quantum network construction.
The development of a microwave electrometer with inherent uncertainty approaching its ultimate limit carries both fundamental and technological significance. Recently, the Rydberg electrometer has garnered considerable attention due to its exceptional sensitivity, small size, and broad tunability. This quantum sensor utilizes low-entropy laser beams to detect disturbances in atomic internal states, thereby circumventing the intrinsic thermal noise encountered by its classical counterparts. However, due to the thermal motion of atoms, the advanced Rydberg-atom microwave electrometer falls short of the standard quantum limit by over three orders of magnitude. In this study, we utilize an optically thin medium with approximately $5.2\times10^{5}$ laser-cooled atoms to implement heterodyne detection. By mitigating a variety of noise sources and strategically optimizing the parameters of the Rydberg electrometer, we achieve an electric-field sensitivity of 10.0 nV/cm/Hz$^{1/2}$ at a 100 Hz repetition rate, a factor of 2.6 above the standard quantum limit, and a minimum detectable field of 540 pV/cm. We also provide an in-depth analysis of noise mechanisms and determine optimal parameters to bolster the performance of Rydberg-atom sensors. Our work provides insights into the inherent capacities and limitations of Rydberg electrometers, while offering superior sensitivity for detecting weak microwave signals in numerous applications.
We report on the experimental realization and characterization of a qubit analog with semiconductor exciton-polaritons. In our system, a condensate of exciton-polaritons is confined by a spatially-patterned pump laser in an annular trap that supports energy-degenerate circulating currents of the polariton superfluid. Using temporal interference measurements, we observe coherent oscillations between a pair of counter-circulating superfluid vortex states of the polaritons coupled by elastic scattering off the laser-imprinted potential. The qubit basis states correspond to the symmetric and antisymmetric superpositions of the two vortex states forming orthogonal double-lobe spatial wavefunctions. By engineering the potential, we tune the coupling and coherent oscillations between the two circulating current states, control the energies of the qubit basis states, and initialize the qubit in the desired state. The dynamics of the system is accurately reproduced by our theoretical two-state model, and we discuss potential avenues to achieve complete control over our polaritonic qubits and realize controllable interactions between such qubits to implement quantum gates and algorithms analogous to quantum computation with standard qubits.
We discuss a laser-free, two-qubit geometric phase gate technique for generating high-fidelity entanglement between two trapped ions. The scheme works by ramping the spin-dependent force on and off slowly relative to the gate detunings, which adiabatically eliminates the spin-motion entanglement (AESE). We show how gates performed with AESE can eliminate spin-motion entanglement with multiple modes simultaneously, without having to specifically tune the control field detunings. This is because the spin-motion entanglement is suppressed by operating the control fields in a certain parametric limit, rather than by engineering an optimized control sequence. We also discuss physical implementations that use either electronic or ferromagnetic magnetic field gradients. In the latter, we show how to ``AESE" the system by smoothly turning on the \textit{effective} spin-dependent force by shelving from a magnetic field insensitive state to a magnetic field sensitive state slowly relative to the gate mode frequencies. We show how to do this with a Rabi or adiabatic rapid passage transition. Finally, we show how gating with AESE significantly decreases the gate's sensitivity to common sources of motional decoherence, making it easier to perform high-fidelity gates at Doppler temperatures.
Understanding thermodynamical measurement noise is of central importance for electrical and optical precision measurements, from mass-fabricated semiconductor sensors, where the Brownian motion of charge carriers poses limits, to optical reference cavities for atomic clocks or gravitational wave detection, which are limited by thermorefractive and thermoelastic noise arising from the transduction of temperature fluctuations into refractive index and length fluctuations. Here, we discover that, unexpectedly, charge carrier density fluctuations give rise to a novel noise process in recently emerged electro-optic photonic integrated circuits. We show that lithium niobate and lithium tantalate photonic integrated microresonators exhibit an unexpected flicker-type (i.e., $1/f^{1.2}$) scaling in their noise properties, deviating significantly from the well-established thermorefractive noise theory. We show that this noise is consistent with thermodynamical charge noise, which produces electric field fluctuations that are transduced via the strong Pockels effect of electro-optic materials. Our results establish electrical Johnson-Nyquist noise as a fundamental limitation for Pockels integrated photonics, crucial for determining performance limits of both classical and quantum devices, ranging from ultra-fast tunable and low-noise lasers and Pockels soliton microcombs to quantum transduction, squeezed light, and entangled photon-pair generation. Equally, this observation offers optical methods to probe mesoscopic charge fluctuations with exceptional precision.
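The abstract reports a flicker-type $1/f^{1.2}$ power spectral density rather than the thermorefractive prediction. As an illustrative sketch of how such a spectral exponent can be extracted from a noise trace (synthetic data and all parameters are our own assumptions, not the paper's measurement pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def flicker_noise(n, alpha, rng):
    """Generate noise with a 1/f^alpha power spectrum by shaping white noise in Fourier space."""
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n, d=1.0)
    shape = np.ones_like(f)
    shape[1:] = f[1:] ** (-alpha / 2.0)  # amplitude ~ f^(-alpha/2)  =>  PSD ~ f^(-alpha)
    spec *= shape
    spec[0] = 0.0  # drop the DC component
    return np.fft.irfft(spec, n)

def psd_slope(x, nseg, band=(1e-3, 1e-1)):
    """Average periodograms over segments and fit the log-log slope in a frequency band."""
    seg_len = len(x) // nseg
    f = np.fft.rfftfreq(seg_len, d=1.0)
    psd = np.zeros_like(f)
    for k in range(nseg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        psd += np.abs(np.fft.rfft(seg)) ** 2
    psd /= nseg
    sel = (f >= band[0]) & (f <= band[1])
    slope, _ = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)
    return slope

x = flicker_noise(1 << 20, alpha=1.2, rng=rng)
print(psd_slope(x, nseg=256))  # close to -1.2
```

A measured spectrum with slope near $-1$ to $-1.2$ in a log-log fit, instead of the thermorefractive shape, is the kind of signature the paper attributes to charge noise.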
We investigate the dynamics of the driven Jaynes-Cummings model, in which a two-level atom interacts with a quantized field while both the atom and the field are driven by an external classical field. Via an invariant approach, we transform the corresponding Hamiltonian into that of the standard Jaynes-Cummings model. Subsequently, the exact analytical solution of the Schr\"odinger equation for the driven system is obtained and employed to analyze some of its dynamical variables.
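The abstract does not reproduce the invariant-approach reduction; as an illustrative sketch only, a simplified time-independent analog (with assumed couplings $g$, $\varepsilon$, $\Omega$) shows how a field displacement can absorb both drives:

```latex
H = \omega a^{\dagger}a + \frac{\omega_{0}}{2}\sigma_{z}
  + g\left(a\,\sigma_{+} + a^{\dagger}\sigma_{-}\right)
  + \varepsilon\left(a + a^{\dagger}\right) + \Omega\,\sigma_{x}.
```

Applying the displacement $D(\beta)=\exp(\beta a^{\dagger}-\beta a)$ with real $\beta=-\varepsilon/\omega$ gives

```latex
D^{\dagger} H D = \omega a^{\dagger}a + \frac{\omega_{0}}{2}\sigma_{z}
  + g\left(a\,\sigma_{+} + a^{\dagger}\sigma_{-}\right)
  + \left(\Omega - \frac{g\varepsilon}{\omega}\right)\sigma_{x}
  - \frac{\varepsilon^{2}}{\omega},
```

which reduces to the standard Jaynes-Cummings Hamiltonian, up to a constant, when $\Omega = g\varepsilon/\omega$. The paper's time-dependent drives require the invariant method rather than this static shortcut.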
The accurate modeling of mode hybridization and calculation of radiative relaxation rates have been crucial to the design and optimization of superconducting quantum devices. In this work, we introduce a spectral theory for the electrohydrodynamics of superconductors that enables the extraction of the relaxation rates of excitations in a general three-dimensional distribution of superconducting bodies. Our approach addresses the long-standing problem of formulating a modal description of open systems that is both efficient and allows for second quantization of the radiative hybridized fields. This is achieved through the implementation of finite but transparent boundaries through which radiation can propagate into and out of the computational domain. The resulting spectral problem is defined within a coarse-grained formulation of the electrohydrodynamical equations that is suitable for the analysis of the non-equilibrium dynamics of multiscale superconducting quantum systems.
We discuss protocols for quantum position verification schemes based on the standard quantum cryptographic assumption that a tagging device can keep classical data secure [Kent, 2011]. Our schemes use a classical key replenished by quantum key distribution. The position verification requires no quantum communication or quantum information processing. The security of classical data makes the schemes secure against non-local spoofing attacks that apply to schemes that do not use secure tags. The schemes are practical with current technology and allow for errors and losses. We describe how a proof-of-principle demonstration might be carried out.
In this paper, we construct two-dimensional bipartite cluster states and perform single-qubit measurements on the bulk qubits. We explore the entanglement scaling of the unmeasured one-dimensional boundary state and show that, under certain conditions, the boundary state can undergo a volume-law to area-law entanglement transition driven by variations in the measurement angle. We connect this boundary-state entanglement transition to the measurement-induced phase transition in non-unitary (1+1)-dimensional circuits via the transfer matrix method. We also explore the application of this entanglement transition to computational complexity problems. Specifically, we establish a relation between the boundary-state entanglement transition and the sampling complexity of the bipartite 2d cluster state, which is directly related to the computational complexity of the corresponding Ising partition function with complex parameters. By examining the boundary-state entanglement scaling, we numerically identify the parameter regime in which the 2d quantum state can be efficiently sampled, indicating that the Ising partition function can be evaluated efficiently in that region.
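The basic pipeline (build a 2d cluster state, measure the bulk at an angle, examine the boundary entanglement) can be sketched on a tiny lattice; the lattice size, measurement angle, and postselected outcomes below are our own illustrative choices, far too small to exhibit the scaling transition itself:

```python
import numpy as np

LY, LX = 4, 2          # 4-row, 2-column lattice: column 0 = bulk, column 1 = boundary
N = LY * LX            # qubit q = col*LY + row; tensor axis q = qubit q

def cluster_state(edges, n):
    """|+>^n followed by a CZ gate on every edge of the lattice graph."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
    idx = np.arange(2 ** n)
    for a, b in edges:
        both = ((idx >> (n - 1 - a)) & 1) & ((idx >> (n - 1 - b)) & 1)
        psi[both == 1] *= -1.0
    return psi.reshape((2,) * n)

edges = [(c * LY + r, c * LY + r + 1) for c in range(LX) for r in range(LY - 1)]  # vertical
edges += [(r, LY + r) for r in range(LY)]                                          # horizontal

psi = cluster_state(edges, N)

# Project each bulk qubit onto (|0> + e^{i*theta}|1>)/sqrt(2), i.e. postselect
# the "+" outcome of a measurement at angle theta in the X-Y plane.
theta = 0.7
for _ in range(LY):  # bulk qubits occupy the leading tensor axes
    psi = (np.take(psi, 0, axis=0)
           + np.exp(-1j * theta) * np.take(psi, 1, axis=0)) / np.sqrt(2)
psi = psi / np.linalg.norm(psi)

# Entanglement entropy of the top half of the boundary chain.
M = psi.reshape(2 ** (LY // 2), -1)
lam = np.linalg.svd(M, compute_uv=False) ** 2
lam = lam[lam > 1e-12]
S = -np.sum(lam * np.log2(lam))
print(S)
```

Sweeping `theta` and the lattice size, and tracking how `S` scales with the boundary length, is the numerical experiment the paper's transfer-matrix analysis formalizes.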
The classical Remez inequality bounds the maximum of the absolute value of a polynomial of degree $d$ on a segment by its maximum on any subset $E$ of that segment of positive Lebesgue measure. Similarly, in several variables the maximum of the absolute value of a polynomial of degree $d$ over a larger set is bounded by its maximum on a subset. There are many such inequalities in the literature, but all of them deteriorate as the dimension grows. This article is devoted to dimension-free estimates of this type, where the subset is a grid or a rather sparse subset of the grid. The motivation for the dimension-free Remez inequality came very naturally from quantum learning theory, where one needs to reconstruct, with high probability, a large matrix from a relatively small number of random queries, see \cite{VZ22}, \cite{SVZ}. Our dimension-free inequality yields time-efficient and sample-optimal algorithms for learning low-degree polynomials in an astronomically large number of variables, as well as low-degree quantum observables on qudit ensembles; see \cite{SVZ} for these applications.
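For reference, the one-variable inequality alluded to above reads, in one standard normalization: if $p$ has degree at most $d$ and $E \subseteq [-1,1]$ is measurable with $|E| \ge 2 - s$ for some $0 < s < 2$, then

```latex
\sup_{x \in [-1,1]} |p(x)| \;\le\;
T_d\!\left(\frac{2+s}{2-s}\right)\,\sup_{x \in E} |p(x)|,
```

where $T_d$ is the Chebyshev polynomial of degree $d$. The constant blows up as $|E|$ shrinks but is independent of where $E$ sits inside the segment; the dimension-free results of the article replace $E$ by a grid-like subset of a multivariate domain.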
In this work, we consider the problem of secure key leasing, also known as revocable cryptography (Agarwal et al., Eurocrypt '23; Ananth et al., TCC '23), a strengthened security notion of its predecessor put forward in Ananth et al., Eurocrypt '21. The goal is to leverage the unclonable nature of quantum information to allow a lessor to lease a reusable quantum key for evaluating a classical functionality. Later, the lessor can request that the lessee provably delete the key, after which the lessee is completely deprived of the capability to evaluate. In this work, we construct a secure key leasing scheme for the decryption key of a (classical) public-key, homomorphic encryption scheme from standard lattice assumptions. We achieve a strong form of security where:
* The entire protocol uses only classical communication between a classical lessor (client) and a quantum lessee (server).
* Under standard assumptions, our security definition ensures that no computationally bounded quantum adversary can simultaneously provide a valid classical deletion certificate and distinguish ciphertexts.
Our security relies on the hardness of the learning with errors assumption. Ours is the first scheme based on a standard assumption that satisfies both properties above.
In three dimensions, the Landau-Streater channel coincides with the Werner-Holevo channel. Such a channel has no continuous parameter and hence cannot model environmental noise of adjustable strength. We consider its convex combination with the identity channel, making it suitable as a one-parameter noise model on qutrits. Moreover, whereas the original Werner-Holevo channel is covariant under the full unitary group $SU(3)$, the extended family retains covariance only under the subgroup $SO(3)$. This symmetry reduction allows us to investigate its impact on various properties of the original channel. In particular, we examine its influence on the channel's spectrum, divisibility, complementary channel, and exact or approximate degradability, as well as on several of its capacities. Specifically, we derive analytical expressions for the one-shot classical capacity and the entanglement-assisted capacity, together with lower and upper bounds on the quantum capacity.
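The coincidence of the two channels in $d=3$ can be checked numerically; the generator basis and normalization below are our own conventions (the paper's may differ by a unitary rotation):

```python
import numpy as np

# Hermitian SO(3) generators (L_k)_{lm} = -i * epsilon_{klm}; for spin 1 these
# define the Landau-Streater channel Phi(rho) = (1/2) * sum_k L_k rho L_k.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
L = [-1j * eps[k] for k in range(3)]

def landau_streater(rho):
    return 0.5 * sum(Lk @ rho @ Lk for Lk in L)

def werner_holevo(rho):
    # Werner-Holevo channel in d=3: (Tr(rho) I - rho^T) / (d - 1)
    return 0.5 * (np.trace(rho) * np.eye(3) - rho.T)

def extended(rho, x):
    """Convex combination x*Id + (1-x)*Werner-Holevo: a one-parameter qutrit noise model."""
    return x * rho + (1 - x) * werner_holevo(rho)

# Random qutrit density matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)

print(np.allclose(landau_streater(rho), werner_holevo(rho)))  # True
```

The equality rests on the identity $\sum_k L_k \rho L_k = \mathrm{Tr}(\rho)\,I - \rho^{T}$ for these imaginary antisymmetric generators, which holds for every input $\rho$.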
We study anisotropic thermalization in dilute gases of microwave-shielded polar molecular fermions. For collision energies above the threshold regime, we find that thermalization is suppressed due to a strong preference for forward scattering and a reduction of the total cross section with energy, significantly reducing the efficiency of evaporative cooling. We perform close-coupling calculations on the effective potential energy surface derived by Deng et al. [Phys. Rev. Lett. 130, 183001 (2023)] to obtain accurate two-body elastic differential cross sections across a range of collision energies. We use Gaussian process regression to obtain a global representation of the differential cross section over a wide range of collision angles and energies. The route to equilibrium is then analyzed with cross-dimensional rethermalization experiments, quantified by a measure of collisional efficiency toward achieving thermalization.
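As a minimal sketch of the interpolation step, assuming a toy forward-peaked cross section and hand-picked kernel hyperparameters (the paper fits a global 2d surface in angle and energy, not this 1d toy):

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=0.3, noise=1e-2):
    """Posterior mean of Gaussian process regression with an RBF kernel (zero prior mean)."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Toy stand-in for a differential cross section vs. scattering angle:
# strongly forward-peaked, as described for microwave-shielded collisions.
rng = np.random.default_rng(0)
theta = np.linspace(0.05, np.pi, 40)
dcs = np.exp(-theta / 0.5) + 0.02 * rng.standard_normal(theta.size)  # noisy samples

theta_fine = np.linspace(0.05, np.pi, 200)
fit = gp_posterior_mean(theta, dcs, theta_fine)  # smooth global representation
```

The GP posterior mean provides the smooth, differentiable surrogate that downstream rethermalization simulations can sample at arbitrary angles and energies.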
Manipulating individual trapped ions at the single quantum level has become standard practice in radio-frequency ion traps, enabling applications from quantum information processing to precision metrology. The key ingredient is ground-state cooling of the particle's motion through resolved-sideband laser cooling. Ultra-high-precision experiments using Penning ion traps will greatly benefit from the reduction of systematic errors offered by full motional control, with applications to atomic mass and $g$-factor measurements, determinations of fundamental constants, and related tests of fundamental physics. In addition, it will enable the implementation of quantum logic spectroscopy, a technique that has enabled a new class of precision measurements in radio-frequency ion traps. Here we demonstrate resolved-sideband laser cooling of the axial motion of a single $^9$Be$^+$ ion in a cryogenic 5 Tesla Penning trap system using a two-photon stimulated-Raman process, reaching a mean phonon number of $\bar{n}_z = 0.10(4)$. This is a fundamental step in the implementation of quantum logic spectroscopy for matter-antimatter comparison tests in the baryonic sector of the Standard Model and a key step towards improved precision experiments in Penning traps operating at the quantum limit.
In this paper, we propose a novel bipartite entanglement purification protocol built upon hashing and upon the guessing random additive noise decoding (GRAND) approach recently devised for classical error correction codes. Our protocol offers substantial advantages over existing hashing protocols, requiring fewer qubits for purification, achieving higher fidelities, and delivering better yields with reduced computational costs. We provide numerical and semi-analytical results to corroborate our findings and give a detailed comparison with the hashing protocol of Bennett et al. Although that pioneering work derived performance bounds, it did not offer an explicit construction for implementation. The present work fills that gap, offering both an explicit and a more efficient purification method. We demonstrate that our protocol is capable of purifying states with noise on the order of 10% per Bell pair even with a small ensemble of 16 pairs. We also explore a measurement-based implementation of the protocol to address practical setups with noise. This work opens the path to practical and efficient entanglement purification using hashing-based methods with feasible computational costs. Compared to the original hashing protocol, the proposed method can achieve a desired fidelity with up to one hundred times fewer initial resources. The proposed method is therefore well-suited to future quantum networks with limited resources, and it entails relatively low computational overhead.
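The classical GRAND principle the protocol builds on is easy to illustrate: instead of decoding via code structure, guess candidate noise patterns in order of decreasing likelihood until one explains the observed syndrome. A toy decoder for the classical [7,4] Hamming code (not the paper's quantum hashing construction) might look like:

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def grand_decode(received, H, max_weight=3):
    """Guess error patterns in order of increasing Hamming weight (most likely
    first for a low-noise channel) until the syndrome of (received XOR guess)
    vanishes; return the corrected word, or None if the budget is exhausted."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for positions in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            if not np.any((H @ ((received + e) % 2)) % 2):
                return (received + e) % 2
    return None

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # a valid Hamming codeword
noisy = codeword.copy()
noisy[2] ^= 1                               # single bit flip
print(grand_decode(noisy, H))               # recovers the original codeword
```

In the quantum protocol, analogous guessing over likely Pauli error patterns replaces the implicit decoding step of the original hashing scheme, which is where the computational savings come from.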
We introduce a new notion called ${\cal Q}$-secure pseudorandom isometries (PRI). A pseudorandom isometry is an efficient quantum circuit that maps an $n$-qubit state to an $(n+m)$-qubit state in an isometric manner. In terms of security, we require that the output of a $q$-fold PRI on $\rho$, for $\rho \in {\cal Q}$ and any polynomial $q$, be computationally indistinguishable from the output of a $q$-fold Haar isometry on $\rho$. By fine-tuning ${\cal Q}$, we recover many existing notions of pseudorandomness. We present a construction of PRIs and, assuming post-quantum one-way functions, prove their security for several interesting settings of ${\cal Q}$. We also demonstrate many cryptographic applications of PRIs, including length extension theorems for quantum pseudorandomness notions, message authentication schemes for quantum states, multi-copy secure public- and private-key encryption schemes, and succinct quantum commitments.
Quantum Computing allows, in principle, the encoding of the exponentially scaling many-electron wave function onto a linearly scaling qubit register, offering a promising solution to overcome the limitations of traditional quantum chemistry methods. An essential requirement for ground state quantum algorithms to be practical is the initialisation of the qubits to a high-quality approximation of the sought-after ground state. Quantum State Preparation (QSP) allows the preparation of approximate eigenstates obtained from classical calculations, but it is frequently treated as an oracle in quantum information. In this study, we conduct QSP on the ground state of prototypical strongly correlated systems, up to 28 qubits, using the Hyperion GPU-accelerated state-vector emulator. Various variational and non-variational methods are compared in terms of their circuit depth and classical complexity. Our results indicate that the recently developed Overlap-ADAPT-VQE algorithm offers the most advantageous performance for near-term applications.
Non-Hermiticity gives rise to distinct knot topology that has no Hermitian counterpart. Here, we report a comprehensive study of the knot topology in gapped non-Hermitian systems, based on the universal dilation method, with a long-coherence-time nitrogen-vacancy center in a $^{12}$C isotope-purified diamond. Both the braiding patterns of the energy bands and the eigenstate topology are revealed. Furthermore, the global biorthogonal Berry phase related to the eigenstate topology has been successfully observed, which identifies the topological invariant of the non-Hermitian system. Our method paves the way for further exploration of the interplay among band braiding, eigenstate topology, and symmetries in non-Hermitian quantum systems.
Quantum Key Distribution (QKD) is a prominent application in the field of quantum cryptography providing information-theoretic security for secret key exchange. The implementation of QKD systems on photonic integrated circuits (PICs) can reduce the size and cost of such systems and facilitate their deployment in practical infrastructures. To this end, continuous-variable (CV) QKD systems are particularly well-suited as they do not require single-photon detectors, whose integration is presently challenging. Here we present a CV-QKD receiver based on a silicon PIC capable of performing balanced detection. We characterize its performance in a laboratory QKD setup using a frequency multiplexed pilot scheme with specifically designed data processing allowing for high modulation and secret key rates. The obtained excess noise values are compatible with asymptotic secret key rates of 2.4 Mbit/s and 220 kbit/s at an emulated distance of 10 km and 23 km, respectively. These results demonstrate the potential of this technology towards fully integrated devices suitable for high-speed, metropolitan-distance secure communication.