Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-14 11:30 to 2023-11-17 12:30 | Next meeting is Friday Nov 17th, 11:30 am.
We investigate the string solutions and cosmological implications of the gauge ${\rm U(1)_Z}\,\times$ global ${\rm U(1)_{PQ}}$ model. With two hierarchical symmetry-breaking scales, the model exhibits three distinct string solutions: a conventional global string, a global string with a heavy core, and a gauge string as a bound state of the two global strings. The model has rich phenomenological implications for cosmology. During the evolution of the universe, these three types of strings can form a Y-junction configuration. Intriguingly, when this model is incorporated into the QCD axion framework, the heavy-core global strings emit more axion particles than conventional axion cosmic strings due to their higher tension. This radiation significantly enhances the QCD axion dark matter abundance, thereby opening up the QCD axion mass window. Consequently, axions with masses exceeding $\sim 10^{-5}\, {\rm eV}$ can constitute the entire dark matter abundance. Furthermore, in contrast to conventional gauge strings, the gauge strings in this model exhibit a distinctive behavior by radiating axions.
Many models of dark matter include self-interactions beyond gravity. A variety of astrophysical observations have previously been used to place limits on the strength of such self-interactions. However, previous works have generally focused either on short-range interactions resulting in individual dark matter particles scattering from one another, or on effectively infinite-range interactions which sum over entire dark matter halos. In this work, we focus on the intermediate regime: forces with range much larger than dark matter particles' inter-particle spacing, but still shorter than the length scales of known halos. We show that gradients in the dark matter density of such halos would still lead to observable effects. We focus primarily on effects in the Bullet Cluster, where finite-range forces would lead either to a modification of the collision velocity of the cluster or to a separation of the dark matter and the galaxies of each cluster after the collision. We also consider constraints from the binding of ultrafaint dwarf galaxy halos, and from gravitational lensing of the Abell 370 cluster. Taken together, these observations allow us to set the strongest constraints on dark matter self-interactions over at least five orders of magnitude in range, surpassing existing limits by many orders of magnitude throughout.
Massive neutrinos have a non-negligible impact on the formation of large-scale structures. We investigate the impact of massive neutrinos on the halo assembly bias effect, measured by the relative halo bias $\hat{b}$ as a function of the curvature of the initial density peak $\hat{s}$, neutrino excess $\epsilon_\nu$, or halo concentration $\hat{c}$, using a large suite of $\sum M_\nu{=}0.0$ eV and $0.4$ eV simulations with the same initial conditions. By tracing dark matter haloes back to their initial density peaks, we construct a catalogue of halo ``twins'' that collapsed from the same peaks but evolved separately with and without massive neutrinos, thereby isolating any effect of neutrinos on halo formation. We detect a $2\%$ weakening of the halo assembly bias as measured by $\hat{b}(\epsilon_\nu)$ in the presence of massive neutrinos. Due to the significant correlation between $\hat{s}$ and $\epsilon_\nu$~($r_{cc}{=}0.319$), the impact of neutrinos persists in the halo assembly bias measured by $\hat{b}(\hat{s})$, but is reduced by an order of magnitude to $0.1\%$. As the correlation between $\hat{c}$ and $\epsilon_\nu$ drops to $r_{cc}{=}0.087$, we do not detect any neutrino-induced impact on $\hat{b}(\hat{c})$, consistent with earlier studies. We also discover an equivalent assembly bias effect for the ``neutrino haloes'', whose concentrations are anti-correlated with the large-scale clustering of neutrinos.
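The cross-correlation coefficients $r_{cc}$ quoted in this abstract are presumably Pearson coefficients between halo properties; a minimal sketch of such a measurement on mock data (the arrays, the 0.3 coupling, and the sample size are illustrative stand-ins, not the simulation catalogues):

```python
import numpy as np

def pearson_cc(x, y):
    # Pearson cross-correlation coefficient r_cc between two halo properties
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

rng = np.random.default_rng(0)
s_hat = rng.normal(size=10_000)                 # stand-in for peak curvature
eps_nu = 0.3 * s_hat + rng.normal(size=10_000)  # partially correlated mock "neutrino excess"
r_cc = pearson_cc(s_hat, eps_nu)                # population value here would be ~0.29
```

In practice the catalogues would be rank-transformed or binned first, but the coefficient itself is computed exactly as above.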
Recently we estimated that about 5 percent of supermassive black hole triple systems are fundamentally unpredictable. These gargantuan chaotic systems are able to exponentially magnify Planck-length perturbations to astronomical scales within their interaction timescale. These results were obtained in the zero angular momentum limit, which we naively expected to be the most chaotic. Here, we generalise to triple systems with arbitrary angular momenta by systematically varying the initial virial ratio. We find the surprising result that increasing the angular momentum enhances the chaotic properties of triples. This is explained not only by the longer lifetimes, which allow for prolonged exponential growth, but also by an increase in the maximum Lyapunov exponent itself. For the ensemble of initially virialised triple systems, we conclude that the percentage of unpredictable supermassive black hole triples increases to about 30 percent. A further increase up to about 50 percent is reached when considering triples on smaller astrophysical scales. Fundamental unpredictability is thus a generic feature of chaotic, self-gravitating triple populations.
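The maximum Lyapunov exponent invoked here measures the e-folding rate at which nearby trajectories diverge. A toy illustration with a one-dimensional chaotic map, where the exponent is known analytically, not the authors' N-body computation:

```python
import math

def logistic_lyapunov(x0=0.2, n_burn=1000, n_iter=100_000, r=4.0):
    # lambda = < ln |f'(x)| > along an orbit of the logistic map f(x) = r x (1 - x);
    # for r = 4 the exact value is ln 2
    x = x0
    for _ in range(n_burn):              # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # ln |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n_iter

lam = logistic_lyapunov()  # converges toward ln 2 ~ 0.693
```

For the triple systems in the abstract, the same average is taken over the divergence of perturbed N-body trajectories; a larger exponent means Planck-scale perturbations reach astronomical scales in fewer e-folding times.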
We place limits on dark matter made up of compact objects significantly heavier than a solar mass, such as MACHOs or primordial black holes (PBHs). In galaxies, the gas of such objects is generally hotter than the gas of stars and will thus heat the stars even through purely gravitational interactions. Ultrafaint dwarf galaxies (UFDs) maximize this effect. Observations of the half-light radius in UFDs thus place limits on MACHO dark matter. We build upon previous constraints with an improved heating rate calculation, including both direct and tidal heating, and consideration of the heavier mass range above $10^4 \, M_\odot$. Additionally, we find that MACHOs may lose energy and migrate into the center of the UFD, increasing the heat transfer to the stars. UFDs can constrain MACHO dark matter with masses between about $10\, M_\odot$ and $10^8\, M_\odot$, and these are the strongest constraints over most of this range.
Recent studies on the environmental dependence of Type Ia supernova (SN Ia) luminosity focus on the local environment where the SN exploded, considering that this is more directly linked to the SN progenitors. However, there is debate about the dependence of SN Ia luminosity on the local environment, specifically on the local star formation rate (SFR). A recent study claims that the dependence is insignificant ($0.051 \pm 0.020$ mag; $2.6\sigma$), based on local SFR measurements obtained by fitting local $ugrizy$ photometry data. However, we find that this photometric local SFR measurement is inaccurate. We argue this based on the theoretical background of SFR measurement and the methodology used to make that claim with the local $ugrizy$ photometry data, especially the limited range of extinction parameters used when fitting the data. Therefore, we re-analyse the same host galaxies with the same fitting code, but with more physically motivated extinction treatments and global $ugriz$ photometry of host galaxies. We estimate global stellar mass and SFR. Then, local star formation environments are inferred using a method which showed that SNe Ia in globally passive galaxies have locally passive environments, while those in globally star-forming low-mass galaxies have locally star-forming environments. We find that there is a significant local environmental dependence of SN Ia luminosities: SNe Ia in locally star-forming environments are $0.072\pm0.021$ mag ($3.4\sigma$) fainter than those in locally passive environments, even after SN Ia luminosities have been further corrected by the BBC method, which reduces the size of the dependence.
The nature and origin of supermassive black holes (SMBHs) remain an open matter of debate within the scientific community. While various theoretical scenarios have been proposed, each with specific observational signatures, the lack of sufficiently sensitive X-ray observations hinders the progress of observational tests. In this white paper, we present how AXIS will contribute to solving this issue. With an angular resolution of 1.5$^{\prime\prime}$ on-axis and minimal off-axis degradation, we have designed a deep survey capable of reaching flux limits in the [0.5-2] keV range of approximately 2$\times$10$^{-18}$ \fcgs~ over an area of 0.13 deg$^2$ in approximately 7 million seconds (7 Ms). Furthermore, we have planned an intermediate depth survey covering approximately 2 deg$^2$ and reaching flux limits of about 2$\times$10$^{-17}$ \fcgs ~ in order to detect a significant number of SMBHs with X-ray luminosities (L$_X$) of approximately 10$^{42}$ \lx up to z$\sim$10. These observations will enable AXIS to detect SMBHs with masses smaller than 10$^5$ \ms, assuming Eddington-limited accretion and a typical bolometric correction for Type II AGN. AXIS will provide valuable information on the seeding and population synthesis models of SMBH, allowing for more accurate constraints on their initial mass function (IMF) and accretion history from z$\sim$0-10. To accomplish this, AXIS will leverage the unique synergy of survey telescopes such as JWST, Roman, Euclid, LSST, and the new generation of 30m class telescopes. These instruments will provide optical identification and redshift measurements, while AXIS will discover the smoking gun of nuclear activity, particularly in the case of highly obscured AGN or peculiar UV spectra as predicted and recently observed in the early Universe.
Domain walls (DWs) are topological defects originating from phase transitions in the early universe. In the presence of an energy imbalance between distinct vacua, enclosed DW cavities shrink until the entire network disappears. By studying the dynamics of thin-shell bubbles in General Relativity, we demonstrate that closed DWs with sizes exceeding the cosmic horizon tend to annihilate later than average. This delayed annihilation allows for the formation of large overdensities, which, upon entering the Hubble horizon, eventually collapse to form Primordial Black Holes (PBHs). We rely on 3D percolation theory to calculate the number density of these late-annihilating DWs, enabling us to infer the abundance of PBHs. A key insight from our study is that DW networks with the potential to emit observable Gravitational Waves are also likely to yield detectable PBHs. Additionally, we study the production of wormholes connected to baby universes and comment on the possibility of generating a multiverse.
The theory of superfluid dark matter is characterized by self-interacting sub-eV particles that thermalize and condense to form a superfluid core in galaxies. Massive black holes at the center of galaxies, however, modify the dark matter distribution and result in a density enhancement in their vicinity known as dark matter spikes. The presence of these spikes affects the evolution of binary systems by modifying their gravitational wave emission and inducing dynamical friction effects on the orbiting bodies. In this work, we assess the role of dynamical friction for bodies moving through a superfluid core enhanced by a central massive black hole. As a first step, we compute the dynamical friction force experienced by bodies moving in a circular orbit. Then, we estimate the gravitational wave dephasing of the binary, showing that the effect of the superfluid drag force is beyond the reach of space-based experiments like LISA, in contrast to collisionless dark matter, thereby providing an opportunity to distinguish between these dark matter models.
In addition to having important cosmological implications, the reheating phase is believed to play a crucial role in cosmology and particle-physics model building. Conventionally, reheating models with an arbitrary coupling of the inflaton to massless fields naturally lack precise predictions and are hence difficult to verify through observation. In this paper, we propose a simple and natural reheating mechanism in which a particle-physics model, namely the Type-I seesaw, is shown to play a major role in the entire reheating process, with the inflaton coupled to all fields only gravitationally. Besides successfully resolving the well-known neutrino mass and baryon asymmetry problems, the scenario offers successful reheating and predicts a distinct primordial gravitational wave spectrum and a non-vanishing lowest active neutrino mass. Our mechanism opens up a new avenue for integrating particle physics and cosmology in the context of reheating.
The existence of large-scale anisotropy cannot be ruled out by the cosmic microwave background (CMB) radiation. Over the years, several models have been proposed in the context of anisotropic inflation to account for the CMB's cold spot and hemispheric asymmetry. However, any small-scale anisotropy, if it exists during inflation, is not constrained, due to its nonlinear evolution in the subsequent phase. This small-scale anisotropy during inflation can play a non-trivial role in giving rise to the cosmic magnetic field, which is the subject of the present study. Assuming a particular phenomenological form of an anisotropic inflationary universe, we show that it can generate a large-scale magnetic field at the $1$-Mpc scale with a magnitude $\sim 4\times 10^{-20}\,{\rm G}$, within the observed bound. Because of the anisotropy, the conformal flatness property is lost, and the Maxwell field is generated even without explicit coupling. This immediately resolves the strong coupling problem of the standard magnetogenesis scenario. In addition, assuming very low conductivity during the reheating era, we further follow the evolution of the electromagnetic field as a function of the equation of state (EoS) $\omega_{\rm eff}$ and its effects on the present-day magnetic field.
As electromagnetic showers may alter the abundances of helium, lithium, and deuterium, we can place severe constraints on the lifetime and amount of electromagnetic energy injected by long-lived particles. Considering up-to-date measurements of the light element abundances that point to $Y_p=0.245\pm 0.003$, $({\rm D/H})= (2.527\pm 0.03)\times 10^{-5}$, $(^7{\rm Li/H})=1.58^{+0.35}_{-0.28} \times 10^{-10}$, $(^6{\rm Li}/^7{\rm Li})=0.05$, and the baryon-to-photon ratio obtained from the Cosmic Microwave Background data, $\eta=6.104 \times 10^{-10}$, we derive upper limits on the fraction of electromagnetic energy produced by long-lived particles. Our findings apply to decaying dark matter models, long-lived gravitinos, and other non-thermal processes that occurred in the early universe between $10^2$ and $10^{10}$ seconds.
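The quoted baryon-to-photon ratio can be cross-checked against the CMB baryon density through the standard BBN conversion $\eta_{10} \simeq 273.9\,\Omega_b h^2$; a back-of-the-envelope sketch (the coefficient is the commonly used literature value, not taken from this paper):

```python
ETA10_PER_OMEGA_B_H2 = 273.9  # standard conversion: eta_10 = 273.9 * Omega_b h^2

def omega_b_h2_from_eta(eta):
    # invert the conversion, with eta_10 = eta * 1e10
    return eta * 1e10 / ETA10_PER_OMEGA_B_H2

omega_b_h2 = omega_b_h2_from_eta(6.104e-10)  # ~0.0223, close to the Planck-inferred value
```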
The Axion Dark Matter eXperiment (ADMX) has previously excluded Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) axions between 680 and 790 MHz under the assumption that the dark matter is described by the isothermal halo model. However, the precise nature of the velocity distribution of dark matter is still unknown, and alternative models have been proposed. We report the results of a non-virialized axion search over the mass range 2.81--3.31 $\mu$eV, corresponding to the frequency range 680--800 MHz. This analysis marks the most sensitive search to date for non-virialized axions subject to Doppler effects in the Milky Way halo. Accounting for frequency shifts due to the detector's motion through the Galaxy, we exclude cold flow relic axions with a velocity dispersion of order $10^{-7}c$ with 95% confidence.
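The mass-frequency correspondence in this search follows from $f = m_a c^2/h$, since the photon from axion conversion carries essentially the axion rest energy. A quick check that the quoted 2.81--3.31 $\mu$eV range indeed maps onto 680--800 MHz (constants only; no experimental details assumed):

```python
H_EV_S = 4.135667696e-15  # Planck constant in eV s (CODATA)

def axion_photon_freq_mhz(m_a_ev):
    # f = m_a c^2 / h, with the mass already expressed as an energy in eV
    return m_a_ev / H_EV_S / 1e6

f_lo = axion_photon_freq_mhz(2.81e-6)  # ~679 MHz
f_hi = axion_photon_freq_mhz(3.31e-6)  # ~800 MHz
```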
We explore the cosmological implications of the local limit of nonlocal gravity, which is a classical generalization of Einstein's theory of gravitation within the framework of teleparallelism. An appropriate solution of this theory is the modified Cartesian flat cosmological model. The main purpose of this paper is to study linear perturbations about the orthonormal tetrad frame field adapted to the standard comoving observers in this model. The observational viability of the perturbed model is examined using all available data regarding the cosmic microwave background. The implications of the linearly perturbed modified Cartesian flat model are examined and it is shown that the model is capable of alleviating the $H_0$ tension.
We investigate the dynamics of a rapid-turning dark energy model, which arises from certain inspiraling field-space trajectories. We find the speed of sound $c_s$ of the dark energy perturbations around the background and show that $c_s$ is monotonically decreasing with time. Furthermore, it has a positive-definite lower bound that implies a certain clustering scale. We also extend the previously known background solution for dark energy to an exact solution that includes matter. This allows us to address the implications of our model for two cosmological tensions. More precisely, we argue that the $\sigma_8$ tension can be alleviated generically, while reducing the Hubble tension requires certain constraints on the parameter space of the model. Notably, a necessary condition for alleviating the Hubble tension is that the transition from matter domination to the dark energy epoch begins earlier than in $\Lambda$CDM.
In the past few years, many mesoscale systems have been proposed as possible detectors of sub-GeV dark matter particles. In this work, we point out the feasibility of probing the dark matter-nucleon scattering cross section using superconductor-based quantum devices with meV-scale energy thresholds. We compute new limits on the spin-independent dark matter scattering cross section using existing power measurement data from three different experiments, for masses from the MeV scale to 10 GeV. We derive the limits for both halo and thermalized dark matter populations.
Galaxy angular momenta (spins) contain valuable cosmological information, complementing their positions and velocities. The baryonic spin direction of galaxies has been probed as a reliable tracer of their host halos and the primordial spin modes. Here we use the TNG100 simulation of the IllustrisTNG project to study the spin magnitude correlations between the dark matter, gas and stellar components of galaxy-halo systems, and their evolution across cosmic history. We find that these components acquire similar initial spin magnitudes from the same tidal torque in Lagrangian space. At low redshifts, the gas component still traces the spin magnitude of the dark matter halo and the primordial spin magnitude. However, the traceability of the stellar component depends on the $ex$ $situ$ stellar mass fraction, $f_{\rm acc}$. Our results suggest that the galaxy baryonic spin magnitude can also serve as a tracer of the host halo and the initial perturbations, and that the similarity of their evolution histories affects the galaxy-halo correlations.
Polarization of the cosmic microwave background (CMB) can help probe the fundamental physics behind cosmic inflation via the measurement of primordial $B$ modes. As this requires exquisite control over instrumental systematics, some next-generation CMB experiments plan to use a rotating half-wave plate (HWP) as a polarization modulator. However, the HWP non-idealities, if not properly treated in the analysis, can result in additional systematics. In this paper, we present a simple, semi-analytical end-to-end model to propagate the HWP non-idealities through the macro-steps that make up any CMB experiment (observation of multi-frequency maps, foreground cleaning, and power spectra estimation) and compute the HWP-induced bias on the estimated tensor-to-scalar ratio, $r$. We find that the effective polarization efficiency of the HWP suppresses the polarization signal, leading to an underestimation of $r$. Laboratory measurements of the properties of the HWP can be used to calibrate this effect, but we show how gain calibration of the CMB temperature can also be used to partially mitigate it. On the basis of our findings, we present a set of recommendations for the HWP design that can help maximize the benefits of gain calibration.
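The polarization-efficiency suppression described here can be seen in a toy demodulation of an ideal continuously rotating HWP, where the sky polarization is modulated at four times the rotation angle and a non-unit efficiency $p$ rescales the recovered $Q$, $U$; this is a schematic model, not the paper's end-to-end pipeline:

```python
import numpy as np

def hwp_recover_qu(I, Q, U, pol_eff, n=1000):
    # ideal rotating-HWP timestream: d = I + p*(Q cos 4phi + U sin 4phi);
    # demodulating at 4phi returns p*Q and p*U rather than Q and U
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = I + pol_eff * (Q * np.cos(4 * phi) + U * np.sin(4 * phi))
    Q_est = 2.0 * np.mean(d * np.cos(4 * phi))
    U_est = 2.0 * np.mean(d * np.sin(4 * phi))
    return Q_est, U_est

Q_est, U_est = hwp_recover_qu(I=100.0, Q=1.0, U=0.5, pol_eff=0.95)
r_bias_factor = 0.95**2  # power spectra, and hence r, are suppressed by p^2
```

Since $B$-mode power is quadratic in the polarization amplitude, an uncalibrated efficiency $p$ biases the estimated $r$ low by the factor $p^2$.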
We constrain cosmology and baryonic feedback scenarios with a joint analysis of weak lensing, galaxy clustering, cosmic microwave background (CMB) lensing, and their cross-correlations (so-called 6$\times$2) using data from the Dark Energy Survey (DES) Y1 and the Planck satellite mission. Noteworthy features of our 6$\times$2 pipeline are: We extend CMB lensing cross-correlation measurements to a band surrounding the DES Y1 footprint (a $\sim 25\%$ gain in pairs), and we develop analytic covariance capabilities that account for different footprints and all cross-terms in the 6$\times$2 analysis. We also measure the DES Y1 cosmic shear two-point correlation function (2PCF) down to $0.^\prime 25$, but find that going below $2.^\prime 5$ does not increase cosmological information due to shape noise. We model baryonic physics uncertainties via the amplitude of Principal Components (PCs) derived from a set of hydro-simulations. Given our statistical uncertainties, varying the first PC amplitude $Q_1$ is sufficient to model small-scale cosmic shear 2PCF. For DES Y1+Planck 6$\times$2 we find $S_8=0.799\pm0.016$, comparable to the 5$\times$2 result of DES Y3+SPT/Planck $S_8=0.773\pm0.016$. Combined with our most informative cosmology priors -- baryon acoustic oscillation (BAO), big bang nucleosynthesis (BBN), type Ia supernovae (SNe Ia), and Planck 2018 EE+lowE -- we measure $S_8=0.817\pm 0.011$. Regarding baryonic physics constraints, our 6$\times$2 analysis finds $Q_1=2.8\pm1.8$. Combined with the aforementioned priors, it improves the constraint to $Q_1=3.5\pm1.3$. For comparison, the strongest feedback scenario considered in this paper, the cosmo-OWLS AGN ($\Delta T_\mathrm{heat}=10^{8.7}$ K), corresponds to $Q_1=5.84$.
We apply the gradient expansion approximation to the light-cone gauge, obtaining a separate-universe picture at non-linear order in perturbation theory within this framework. Thereafter, we use it to generalize the $\delta N$ formalism in terms of light-cone perturbations. As a consistency check, we demonstrate the conservation of the gauge-invariant curvature perturbation on uniform density hypersurfaces, $\zeta$, at the completely non-linear level. The approach provides a self-consistent framework to connect, at the non-linear level, quantities from the primordial universe, such as $\zeta$ written in terms of the light-cone parameters, to late-time observables.
Milky-Way and intergalactic dust extinction and reddening must be accounted for in measurements of distances throughout the universe. This work provides a comprehensive review of the various impacts of cosmic dust focusing specifically on its effects on two key distance indicators used in the distance ladder: Cepheid variable stars and Type Ia supernovae. We review the formalism used for computing and accounting for dust extinction and reddening as a function of wavelength. We also detail the current state of the art knowledge of dust properties in the Milky Way and in host galaxies. We discuss how dust has been accounted for in both the Cepheid and SN distance measurements. Finally, we show how current uncertainties on dust modeling impact the inferred luminosities and distances, but that measurements of the Hubble constant remain robust to these uncertainties.
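The core bookkeeping in the formalism reviewed here is how an extinction $A_\lambda$ enters the distance modulus; a minimal sketch, with the $R_\lambda$ value and the magnitudes as illustrative placeholders rather than fitted values from the review:

```python
def extinction_corrected_mu(m_app, M_abs, ebv, r_lambda=3.1):
    # A_lambda = R_lambda * E(B-V); distance modulus mu = m - M - A_lambda
    a_lambda = r_lambda * ebv
    return m_app - M_abs - a_lambda

def distance_mpc(mu):
    # mu = 5 log10(d / 10 pc)
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

# ignoring E(B-V) = 0.1 of dust would bias this mock SN distance high by ~15%
mu = extinction_corrected_mu(m_app=24.0, M_abs=-19.3, ebv=0.1)
```

The sensitivity of $H_0$ to dust modeling enters through exactly this chain: a systematic error in $A_\lambda$ propagates multiplicatively into the inferred distance as $10^{\Delta A_\lambda/5}$.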
The Cosmic Microwave Background (CMB) radiation offers a unique window into the early Universe, facilitating precise examinations of fundamental cosmological theories. However, the quest for detecting B-modes in the CMB, predicted by theoretical models of inflation, faces substantial challenges in terms of calibration and foreground modeling. The COSMOCal (COsmic Survey of Millimeter wavelengths Objects for CMB experiments Calibration) project aims at enhancing the accuracy of the absolute calibration of the polarization angle $\psi$ of current and future CMB experiments. The concept involves the construction of a well-characterized artificial source emitting in the frequency range [20-350] GHz that would act as an absolute calibrator for several polarization facilities on Earth. A feasibility study to place the artificial source in geostationary orbit, in the far field of all the telescopes on Earth, is ongoing. In the meantime, ongoing hardware work is dedicated to building a prototype to test the technology, and the precision and stability of the polarization recovery, in the 1 mm band (220-300 GHz). High-resolution experiments such as the NIKA2 camera at the IRAM 30m telescope will be deployed for such use. Once carefully calibrated ($\Delta\psi$ < 0.1 degrees), the source will be used to observe astrophysical targets such as the Crab nebula, which is the best candidate in the sky for the absolute calibration of CMB experiments.
Primordial black holes (PBHs) are mainly characterized by their mass function, which may be strongly suppressed over certain mass ranges. If this is the case, the absence of PBHs in these ranges forms mass gaps. In this paper, we investigate a PBH mass function with a mass gap. First, to obtain a data-supported PBH mass function with a mass gap for subsolar-mass PBHs, we fine-tune the coefficients of a model-independent power spectrum of primordial curvature perturbations. We then take this PBH mass function and calculate the energy density spectrum of the stochastic gravitational wave background from PBH mergers. We find that the location of its first peak is almost independent of the mass gap and is determined only by the probability distribution of the frequencies at which PBH binaries merge. Apart from the first peak, there must be an accompanying smaller trough at higher frequency resulting from the mass gap. Therefore, the detection of this smaller trough would provide more information about inflation and PBH formation.
We revisit a consistency test for speed of light variability, using the latest cosmological observations. This exercise can serve as a new diagnostic for the standard cosmological model and help distinguish it from a minimal varying speed of light scenario in the Friedmann-Lema\^{i}tre-Robertson-Walker universe. We deploy Gaussian processes to reconstruct cosmic distances and ages in the redshift range $0<z<2$, utilizing the Pantheon compilation of type-Ia supernova luminosity distances (SN), cosmic chronometers from differential galaxy ages (CC), and measurements of the radial and transverse modes of baryon acoustic oscillations ($r$-BAO and $a$-BAO, respectively). Such a test has the advantage of being independent of any non-zero cosmic curvature assumption - which can be degenerate with some variable speed of light models - as well as any dark energy model. We also examine the impact of cosmological priors on our analysis, such as the Hubble constant, the supernova absolute magnitude, and the sound horizon scale. We find no evidence for the speed of light variability hypothesis for most choices of priors and data-set combinations. However, mild deviations are seen at the $\sim 2\sigma$ confidence level for redshifts $z<1$ with some specific prior choices when $r$-BAO data are employed, and at $z>1$ with a particular reconstruction kernel when $a$-BAO data are included. Still, we ascribe no statistical significance to this result, bearing in mind the degeneracy between the associated priors in the combined analysis and the incompleteness of the $a$-BAO data set at higher $z$.
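A minimal numpy sketch of the Gaussian-process regression underlying such reconstructions, with a squared-exponential kernel and noiseless mock $H(z)$ points drawn from a flat $\Lambda$CDM curve (kernel hyperparameters and data are illustrative; the actual analysis uses real SN/CC/BAO data and optimizes hyperparameters):

```python
import numpy as np

def gp_predict(z_tr, y_tr, y_err, z_pr, ell=0.7, sig_f=100.0):
    # GP regression: mean = K* (K + N)^-1 y, squared-exponential kernel
    def kern(a, b):
        return sig_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = kern(z_tr, z_tr) + np.diag(y_err**2)           # add noise covariance
    Ks = kern(z_pr, z_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    cov = kern(z_pr, z_pr) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

z_tr = np.linspace(0.1, 2.0, 12)
h_true = lambda z: 70.0 * np.sqrt(0.3 * (1 + z)**3 + 0.7)  # mock flat LCDM H(z)
y_tr = h_true(z_tr)                       # noiseless mock "data"
y_err = np.full_like(z_tr, 3.0)           # nominal CC-like error bars
mean, std = gp_predict(z_tr, y_tr, y_err, np.array([1.05]))
```

The reconstructed mean and its uncertainty band are what get combined across data sets to test whether the distance and age relations remain mutually consistent without a varying $c$.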
We present a detailed exposition on the prospects of the formation of Primordial Black Holes (PBHs) during Slow Roll (SR) to Ultra Slow Roll (USR) sharp transitions in the framework of single-field inflation. We use an effective field theory (EFT) approach in order to keep the analysis model-independent and applicable to both the canonical and non-canonical cases. We show in detail how renormalizing the power spectrum to one loop order in $P(X,\phi)$ theories severely limits the prospects for PBH formation in a single-field inflationary framework. We demonstrate that for the allowed range of effective sound speed, $1<c_s<1.17$, the consistency of the one-loop corrected power spectrum leaves a small window for black hole masses, $M_{\rm PBH}\sim \mathcal{O}(10^2-10^3)\,{\rm g}$, to have sufficient e-foldings, $\Delta {\cal N}_{\rm Total}\sim {\cal O}(54-59)$, for inflation. We confirm that adding an SR regime after USR before the end of inflation does not significantly alter our conclusions. Our findings for sharp transitions strictly rule out the possibility of generating large masses of PBHs from all possible models of single-field inflation (canonical and non-canonical). Our results are at least valid for the situation where constraints from the loop effects are computed using either the Late-Time (LT) or Adiabatic-Wave function (AF) scheme, followed by Power Spectrum (PS) renormalization schemes.
Deep surveys of the CMB polarization have more information on the lensing signal than the quadratic estimators (QE) can capture. We showed in a recent work that a CMB lensing power spectrum built from a single optimized CMB lensing mass map, working in close analogy to state-of-the-art QE techniques, can result in an essentially optimal spectrum estimator at reasonable numerical cost. We extend this analysis here to account for real-life non-idealities including masking and realistic instrumental noise maps. As in the QE case, it is necessary to include small corrections to account for the estimator response to these anisotropies, which we demonstrate can be estimated easily from simulations. The realization-dependent debiasing of the spectrum remains robust, allowing unbiased recovery of the band powers even in cases where the statistical model used for the lensing map reconstruction is grossly wrong. This now enables robust and simultaneously optimal CMB lensing constraints from CMB data on all scales relevant for the inference of the neutrino mass or other parameters of our cosmological model.
The seed of dark matter can be generated from light spectator fields during inflation through a mechanism similar to that by which the seeds of the observed large-scale structure are produced from the inflaton field. The accumulated energy density of the corresponding excited modes, which is subdominant during inflation, comes to dominate the energy density of the universe around the time of matter-radiation equality and plays the role of dark matter. For spin-2 spectator fields, the Higuchi bound may seem to prevent the excitation of such light modes, since the deviation of the inflationary background from exact de Sitter spacetime is very small. However, sizable interactions with the inflaton field break (part of) the isometries of de Sitter space in the inflationary background and relax the Higuchi bound. Exploring this possibility in the context of the effective field theory of inflation, we propose a dark matter model consisting of spin-2 particles produced during inflation.
We perform SubHalo Abundance Matching (SHAM) studies on UNIT simulations with \{$\sigma, V_{\rm ceil}, v_{\rm smear}$\}-SHAM and \{$\sigma, V_{\rm ceil},f_{\rm sat}$\}-SHAM. They are designed to reproduce the clustering on 5--30$\,\hmpc$ of Luminous Red Galaxies (LRGs), Emission Line Galaxies (ELGs) and Quasi-Stellar Objects (QSOs) at $0.4<z<3.5$ from the DESI One Percent Survey. $V_{\rm ceil}$ models the incompleteness of the massive host (sub)haloes and is the key to the generalized SHAM. $v_{\rm smear}$ models the clustering effect of redshift uncertainties, providing measurements consistent with those from repeat observations. A free satellite fraction $f_{\rm sat}$ is necessary to reproduce the clustering of ELGs. We find that ELGs present a more complex galaxy--halo mass relation than LRGs, reflected in their weak constraints on $\sigma$. LRGs, QSOs and ELGs show increasing $V_{\rm ceil}$ values, corresponding to the massive galaxy incompleteness of LRGs, the quenched star formation of ELGs and the quenched black hole accretion of QSOs. For LRGs, a Gaussian $v_{\rm smear}$ fits the sub-samples in redshift bins better than the Lorentzian profile used for the other tracers. The impact of the statistical redshift uncertainty on ELG clustering is negligible. The best-fitting satellite fraction for DESI ELGs is around 4 per cent, lower than previous estimations for ELGs. The mean halo masses log$_{10}(\langle M_{\rm vir}\rangle)$ in $\Msun{}$ for LRGs, ELGs and QSOs are ${13.16\pm0.01}$, ${11.90\pm0.06}$ and ${12.66\pm0.45}$ respectively. Our generalized SHAM algorithms facilitate the production of multi-tracer galaxy mocks for cosmological tests.
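The $\sigma$ parameter in the SHAM variants above controls the scatter in the rank assignment between galaxies and (sub)haloes; a toy scatter-SHAM selection (the property name, seed, and lognormal-scatter convention are generic illustrations, not the authors' exact implementation, which additionally applies the $V_{\rm ceil}$ cut and satellite treatment):

```python
import numpy as np

def sham_select(vpeak, n_gal, sigma=0.2, seed=0):
    # rank (sub)haloes by Vpeak perturbed with lognormal scatter of width
    # sigma dex, then keep the top n_gal as the mock galaxy sample
    rng = np.random.default_rng(seed)
    v_scat = vpeak * 10.0 ** (sigma * rng.normal(size=vpeak.size))
    return np.argsort(v_scat)[::-1][:n_gal]

# mock Vpeak catalogue and a number density matching the target tracer
vpeak = np.random.default_rng(1).lognormal(mean=5.0, sigma=0.5, size=10_000)
sel = sham_select(vpeak, n_gal=500, sigma=0.2)
```

With $\sigma = 0$ this reduces to strict abundance matching; larger $\sigma$ mixes lower-$V_{\rm peak}$ haloes into the sample, which weakens the clustering and is what the DESI small-scale 2PCFs constrain.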
Scalars are widely used in cosmology to model novel phenomena such as the late-time cosmic acceleration. These are effective field theories with highly nonlinear interactions, including Horndeski theory/generalized galileon and beyond. We use the latest fully crossing symmetric positivity bounds to constrain these cosmological EFTs. These positivity bounds, based on fundamental principles of quantum field theory such as causality and unitarity, are able to constrain the EFT coefficients both from above and below. We first map the mass dependence of the fully crossing symmetric bounds, and find that a nonzero mass generically enlarges the positivity regions. We show that fine-tunings in the EFT construction can significantly reduce the viable regions and sometimes can be precarious. Then, we apply the positivity bounds to several models in the Horndeski class and beyond, explicitly listing the ready-to-use bounds with the model parameters, and discuss the implications for these models. The new positivity bounds are found to severely constrain some of these models, in which positivity requires the mass to be parametrically close to the cutoff of the EFT, effectively ruling them out. The examples include massive galileon, the original beyond Horndeski model, and DHOST theory with unity speed of gravity and nearly constant Newton's coupling.
We investigate the cosmological consequences of light sterile neutrinos with altered dispersion relations (ADRs) and couplings to an ultra-light, axion-like scalar field. In particular, we study the impact on the number of additional light fermionic degrees of freedom and on primordial nucleosynthesis. While the ADR leads to a new potential term in the Hamiltonian, the coupling to the scalar field results in a time-dependent, effective mass contribution. We solve the quantum kinetic equations (QKEs) for the neutrino density matrix and find that, in certain parameter regions, both new-physics effects can individually yield a suppressed population of sterile neutrino species and the correct observed amount of helium from nucleosynthesis. Combining both effects opens up new patches of parameter space that would otherwise be excluded by experimental bounds applying to models featuring only one of the effects.
We present a new high-resolution free-form mass model of Abell 2744, combining both weak-lensing (WL) and strong-lensing (SL) datasets from JWST. The SL dataset comprises 286 multiple images, the most extensive SL constraint to date for a single cluster. The WL dataset, employing photo-$z$ selection, yields a source density of ~ 350 arcmin$^{-2}$, the densest WL constraint to date. The combined mass reconstruction enables the highest-resolution mass map of Abell 2744 within the ~ 1.8 Mpc$\times$1.8 Mpc reconstruction region to date, revealing an isosceles triangular structure with two legs of ~ 1 Mpc and a base of ~ 0.6 Mpc. Although our algorithm MAximum-entropy ReconStruction (${\tt MARS}$) is entirely blind to the cluster galaxy distribution, the resulting mass reconstruction traces the brightest cluster galaxies remarkably well, with the five strongest mass peaks coinciding with the five most luminous cluster galaxies to within $\lesssim 2''$. We do not detect any unusual mass peaks that are not traced by the cluster galaxies, unlike the findings of previous studies. Our mass model shows the smallest scatters of SL multiple images in both the source (~0".05) and image (~0".1) planes, lower than those of previous studies by a factor of ~ 4. Although ${\tt MARS}$ represents the mass field with an extremely large number of free parameters (~ 300,000), it converges to a solution within a few hours thanks to our use of deep-learning techniques. We make our mass and magnification maps publicly available.
We study how the structural properties of globular clusters and dwarf galaxies are linked to their orbits in the Milky Way halo. From the inner to the outer halo, orbital energy increases and stellar systems gradually move out of internal equilibrium: in the inner halo, high-surface-brightness globular clusters are at pseudo-equilibrium, while further away, low-surface-brightness clusters and dwarfs appear more tidally disturbed. Dwarf galaxies are the latest to arrive in the halo, as indicated by their large orbital energies and pericenters, and have had time for no more than one orbit. Their (gas-rich) progenitors likely lost their gas during their recent arrival in the Galactic halo. If dwarfs are at equilibrium with their dark matter (DM) content, the DM density should anti-correlate with pericenter. However, the transformation of DM-dominated dwarfs from gas-rich rotation-supported into gas-poor dispersion-supported systems is unlikely to be accomplished during a single orbit. We suggest instead that the above anti-correlation is brought about by the combination of ram-pressure stripping and Galactic tidal shocks. Recent gas removal leads to an expansion of their stellar content caused by the associated loss of gravity, making them sufficiently fragile to be transformed near pericenter passage. Out-of-equilibrium dwarfs would explain the observed anti-correlation of kinematics-based DM density with pericenter without invoking the DM density itself, calling into question its previous estimates. Ram-pressure stripping and tidal shocks may contribute to the dwarf velocity dispersion excess. This scenario predicts the presence of numerous stars in the outskirts of dwarfs and of a few young stars in their cores.
Most Milky Way dwarf galaxies are much less bound to their host than are the relics of Gaia-Sausage-Enceladus and Sgr. These dwarfs are expected to have fallen into the Galactic halo less than 3 Gyr ago and to have undergone no more than one full orbit. Here, we have performed hydrodynamical simulations of this process, assuming that their progenitors are gas-rich, rotation-supported dwarfs. We follow their transformation through interactions with the hot corona and gravitational field of the Galaxy. Our dedicated simulations reproduce the structural properties of three dwarf galaxies: Sculptor, Antlia II and, with somewhat lower accuracy, Crater II. This includes reproducing their large velocity dispersions, which are caused by ram-pressure stripping and Galactic tidal shocks. Differences between dwarfs can be interpreted as due to different orbital paths, as well as to different initial conditions for their progenitor gas and stellar contents. However, we failed to suppress the rotational support of our Sculptor analog within a single orbit when it is fully dark-matter dominated. In addition, we have found that classical dwarf galaxies like Sculptor may have stellar cores sufficiently dense to survive the pericenter passage through adiabatic contraction. In contrast, our Antlia II and Crater II analogs are tidally stripped, explaining their large sizes, extremely low surface brightnesses, and velocity dispersions. This modeling explains the differences between dwarf galaxies by reproducing them as out-of-equilibrium stellar systems observed at different stages.
The Advanced X-ray Imaging Satellite (AXIS) promises revolutionary science in the X-ray and multi-messenger time domain. AXIS will leverage excellent spatial resolution (<1.5 arcsec), sensitivity (80x that of Swift), and a large collecting area (5-10x that of Chandra) across a 24-arcmin diameter field of view to discover and characterize a wide range of X-ray transients from supernova-shock breakouts to tidal disruption events to highly variable supermassive black holes. The observatory's ability to localize and monitor faint X-ray sources opens up new opportunities to hunt for counterparts to distant binary neutron star mergers, fast radio bursts, and exotic phenomena like fast X-ray transients. AXIS will offer a response time of <2 hours to community alerts, enabling studies of gravitational wave sources, high-energy neutrino emitters, X-ray binaries, magnetars, and other targets of opportunity. This white paper highlights some of the discovery science that will be driven by AXIS in this burgeoning field of time domain and multi-messenger astrophysics.
Stellar and black hole feedback heat and disperse surrounding cold gas clouds, launching gas flows off circumnuclear and galactic disks and producing a dynamic interstellar medium. On large scales bordering the cosmic web, feedback drives enriched gas out of galaxies and groups, seeding the intergalactic medium with heavy elements. In this way, feedback shapes galaxy evolution by shutting down star formation and ultimately curtailing the growth of structure after the peak at redshift 2-3. To understand the complex interplay between gravity and feedback, we must resolve both the key physics within galaxies and map the impact of these processes over large scales, out into the cosmic web. The Advanced X-ray Imaging Satellite (AXIS) is a proposed X-ray probe mission for the 2030s with arcsecond spatial resolution, large effective area, and low background. AXIS will untangle the interactions of winds, radiation, jets, and supernovae with the surrounding ISM across the wide range of mass scales and large volumes driving galaxy evolution and trace the establishment of feedback back to the main event at cosmic noon.
Pairs of active galactic nuclei (AGN) are observational flags of merger-driven SMBH growth, and represent an observable link between galaxy mergers and gravitational wave (GW) events. Thus, studying these systems across their various evolutionary phases can help quantify the role mergers play in the growth of SMBHs as well as in the future GW signals expected to be detected by pulsar timing arrays (PTAs). At the earliest stage, the system can be classified as a "dual AGN", where the SMBHs are gravitationally unbound and have typical separations <30 kpc; at the latest stage, the system can be classified as a "binary AGN", where the two massive host galaxies have likely been interacting for hundreds of megayears to gigayears. However, detecting and confirming pairs of AGN is non-trivial, and is complicated by the unique characteristics of merger environments. To date, there are fewer than 50 X-ray-confirmed dual AGN and only 1 strong binary AGN candidate. AXIS will revolutionize the field of dual AGN: the point-spread function (PSF), field of view (FOV), and effective area (Aeff) are expected to result in the detection of hundreds to thousands of new dual AGN across the redshift range 0 < z < 4. The AXIS AGN surveys will result in the first X-ray study that quantifies the frequency of dual AGN as a function of redshift up to z = 3.5.
Compact objects and supernova remnants provide nearby laboratories to probe the fate of stars after they die, and the way they impact, and are impacted by, their surrounding medium. The past five decades have significantly advanced our understanding of these objects, and showed that they are most relevant to our understanding of some of the most mysterious energetic events in the distant Universe, including Fast Radio Bursts and Gravitational Wave sources. However, many questions remain to be answered. These include: What powers the diversity of explosive phenomena across the electromagnetic spectrum? What are the mass and spin distributions of neutron stars and stellar mass black holes? How do interacting compact binaries with white dwarfs - the electromagnetic counterparts to gravitational wave LISA sources - form and behave? Which objects inhabit the faint end of the X-ray luminosity function? How do relativistic winds impact their surroundings? What do neutron star kicks reveal about fundamental physics and supernova explosions? How do supernova remnant shocks impact cosmic magnetism? This plethora of questions will be addressed with AXIS - the Advanced X-ray Imaging Satellite - a NASA Probe Mission Concept designed to be the premier high-angular resolution X-ray mission for the next decade. AXIS, thanks to its combined (a) unprecedented imaging resolution over its full field of view, (b) unprecedented sensitivity to faint objects due to its large effective area and low background, and (c) rapid response capability, will provide a giant leap in discovering and identifying populations of compact objects (isolated and binaries), particularly in crowded regions such as globular clusters and the Galactic Center, while addressing science questions and priorities of the US Decadal Survey for Astronomy and Astrophysics (Astro2020).
One of the key research themes identified by the Astro2020 decadal survey is Worlds and Suns in Context. The Advanced X-ray Imaging Satellite (AXIS) is a proposed NASA APEX mission that will become the prime high-energy instrument for studying star-planet connections from birth to death. This work explores the major advances in this broad domain of research that will be enabled by the AXIS mission, through X-ray observations of stars in clusters spanning a broad range of ages, flaring M-dwarf stars known to host exoplanets, and young stars exhibiting accretion interactions with their protoplanetary disks. In addition, we explore the ability of AXIS to use planetary nebulae, white dwarfs, and the Solar System to constrain important physical processes from the microscopic (e.g., charge exchange) to the macroscopic (e.g., stellar wind interactions with the surrounding interstellar medium).
Microquasar stellar systems emit electromagnetic radiation and high-energy particles. Thanks to their location within our own Galaxy, they can be observed in high detail. Still, many of their inner workings remain elusive; hence simulations, as the link between observations and theory, are highly useful. In this paper, both the high-energy particle and synchrotron radio emissions from simulated microquasar jets are calculated using special relativistic imaging. A finite-ray-speed imaging algorithm is employed on hydrodynamic simulation data, producing synthetic images as seen by a stationary observer. A hydrodynamical model is integrated into the above emission models. Synthetic spectra and maps are then produced that can be compared to observations from detector arrays. As an application, the model synthetically observes microquasars during an episodic ejection at two different spatio-temporal scales: one on the scale of the neutrino emission region and the other on the scale of the synchrotron radio emission. The results are compared to the sensitivities of existing detectors.
We study rotating compact stars that are mixtures of the ordinary nuclear matter of a neutron star and fermionic dark matter. After deriving equations describing a slowly rotating system made up of an arbitrary number of perfect fluids, we specialize to the two-fluid case, where the first fluid describes ordinary matter and the second describes dark matter. Electromagnetic observations of the moment of inertia and angular momentum directly probe ordinary matter, not dark matter. Taking this into account, we show that the I-Love-Q relations for dark-matter-admixed neutron stars can deviate significantly from the standard single-fluid relations.
We report on the results of a simulation-based study of colliding magnetized plasma flows. Our set-up mimics pulsed-power laboratory astrophysics experiments but, with an appropriate frame change, is also relevant to astrophysical jets with internal velocity variations. We track the evolution of the interaction region where the two flows collide. Cooling via radiative losses is included in the calculation. We systematically vary the plasma beta ($\beta_m$) of the flows, the strength of the cooling ($\Lambda_0$) and the exponent ($\alpha$) of the temperature dependence of the cooling function. We find that for strong magnetic fields a counter-propagating jet, called a "spine", is driven by pressure from the shocked toroidal fields. The spines eventually become unstable and break apart. We demonstrate how the formation and evolution of the spines depend on the initial flow parameters and provide a simple analytic model that captures the basic features of the flow.
A proposed setting for thermonuclear (Type Ia) supernovae is a white dwarf that has gained mass from a companion to the point of carbon ignition in the core. In the early stages of burning, there is a simmering phase that involves the formation and growth of a core convection zone. One aspect of this phase is the convective Urca process, a linking of weak nuclear reactions to convection that may alter the composition and structure of the white dwarf. The convective Urca process is not well understood and requires 3D fluid simulations to model realistically. Additionally, the convection is relatively slow (Mach number less than 0.005), so a low-Mach-number method is needed to make the simulations computationally feasible. Using the MAESTROeX low-Mach hydrodynamics code, we investigate recent changes to how the weak reactions are modeled in convective Urca simulations. We present results that quantify the changes to the reaction rates and their impact on the evolution of the simulation.
We develop two new highly efficient estimators to measure the polarization (Stokes parameters) in experiments that constrain the position angle of individual photons such as scattering and gas-pixel-detector polarimeters, and analyse in detail a previously proposed estimator. All three of these estimators are at least fifty percent more efficient on typical datasets than the standard estimator used in the field. We present analytic estimates of the variance of these estimators and numerical experiments to verify these estimates. Two of the three estimators can be calculated quickly and directly through summations over the measurements of individual photons.
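For concreteness, the standard estimator referred to above can be sketched for the normalized Stokes parameters: with photons drawn from a modulation $f(\psi)\propto 1 + q\cos 2\psi + u\sin 2\psi$, the sample means of $\cos 2\psi$ and $\sin 2\psi$ estimate $q/2$ and $u/2$. This is a generic textbook-style sketch, not the paper's improved estimators; the polarization fraction, angle, and sample size below are toy values.

```python
import numpy as np

def stokes_standard(psi):
    """Standard estimator for the normalized Stokes parameters from the
    measured position angles psi (radians) of individual photons.
    For f(psi) = (1 + q cos 2psi + u sin 2psi) / (2*pi),
    E[cos 2psi] = q/2 and E[sin 2psi] = u/2, hence the factor of 2."""
    q = 2.0 * np.mean(np.cos(2.0 * psi))
    u = 2.0 * np.mean(np.sin(2.0 * psi))
    return q, u

# Toy dataset: rejection-sample angles with polarization fraction p_true
# at polarization angle psi0 (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
p_true, psi0 = 0.5, 0.3
samples = []
while len(samples) < 200_000:
    cand = rng.uniform(0.0, 2.0 * np.pi, 10_000)
    accept = rng.uniform(0.0, 1.0 + p_true, 10_000) \
        < 1.0 + p_true * np.cos(2.0 * (cand - psi0))
    samples.extend(cand[accept])
psi = np.array(samples[:200_000])

q_hat, u_hat = stokes_standard(psi)
p_hat = np.hypot(q_hat, u_hat)          # recovered polarization fraction
angle_hat = 0.5 * np.arctan2(u_hat, q_hat)  # recovered polarization angle
```

The improved estimators of the paper reduce the variance of this baseline by weighting the per-photon information; the sketch above only fixes the benchmark they are compared against.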
Rotating black holes exhibit a remarkable set of hidden symmetries near their horizon. These have been shown to determine phenomena such as absorption, scattering, superradiance and, more recently, tidal deformations, also known as Love numbers. They have also led to a proposal for a dual thermal CFT with left and right movers recovering the entropy of the black hole. In this work we provide a constructive explanation of these hidden symmetries via analytic continuation to Klein signature. We first show that the near-horizon region of extremal black holes is a Kleinian static solution with mass $M$ and NUT charge $N$. We then analyze the self-dual solution, namely a Kerr black hole with a NUT charge $N=\pm M$. Remarkably, the self-dual solution is self-similar to its near-horizon region, and hence approximate symmetries become exact: in particular, the original two isometries of Kerr are promoted to seven exact symmetries embedded in a conformal algebra. We analyze its full conformal group in Kleinian twistor space, where a breaking $SO(4,2) \to SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ occurs due to the insertion of a preferred time direction for the black hole. Finally, we show that its spectrum is integrable and behaves like that of the hydrogen atom, being solvable in terms of elementary polynomials. Perturbing to astrophysical black holes with $N=0$, we obtain a hyperfine-splitting structure.
SGR J1935+2154 is known as a source of soft gamma radiation. Hyperflares from the magnetar were detected at a frequency of 1.25 GHz with the FAST telescope in May 2020. The magnetar falls within the survey conducted with the LPA radio telescope at 111 MHz. We carried out a check of the previously published (Fedorova et al. 2021) pulse from the magnetar SGR J1935+2154. The data received by the LPA are recorded in parallel in two modes with low and high frequency-time resolution: 6 channels with a channel width of 415 kHz and a time resolution of \Delta t = 100 ms, and 32 channels with a channel width of 78 kHz and a time resolution of \Delta t = 12.5 ms. The original search was carried out using the data with low frequency-time resolution. The search for dispersed signals in the meter wavelength range is more difficult than in the decimeter range, owing to scattering, which scales as the fourth power of frequency, and to dispersion smearing of the pulse within frequency channels, which scales as the second power of frequency. In order to collect the broadened pulse signal and obtain the best S/N, the search used an algorithm based on the convolution of the multichannel data with a scattered-pulse template. The shape of the template corresponds to that of a scattered pulse with a dispersion measure (DM) of 375 pc/cm3. For the repeated verification, we used the same data in which the pulse from the magnetar had been detected. An additional check of the published pulse was also carried out using the data with higher frequency-time resolution. Since the dispersion smearing within a frequency channel is 5 times smaller in the 32-channel data than in the 6-channel data, an approximately twofold increase in S/N could be expected. No pulse emission with S/N > 4 showing the DM-dependent peak shift expected for SGR J1935+2154 was detected in either the 32-channel or the 6-channel data.
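The scaling invoked above — intra-channel dispersion smearing proportional to channel width — can be checked with a back-of-the-envelope sketch using the standard dispersion constant and the narrow-channel approximation (the function name is ours; the numbers are those quoted in the abstract):

```python
# Dispersion constant K in units of MHz^2 pc^-1 cm^3 s (standard value).
K_DM = 4.148808e3

def intra_channel_smearing(dm, freq_mhz, chan_width_mhz):
    """Dispersion smearing across a single frequency channel, using the
    narrow-channel approximation dt ~= 2 * K * DM * d_nu / nu^3 (seconds)."""
    return 2.0 * K_DM * dm * chan_width_mhz / freq_mhz**3

# DM = 375 pc/cm^3 at 111 MHz, as in the LPA search described above.
t_6ch = intra_channel_smearing(375.0, 111.0, 0.415)   # 415 kHz channels
t_32ch = intra_channel_smearing(375.0, 111.0, 0.078)  # 78 kHz channels
```

With these numbers the smearing is roughly 0.94 s per 415 kHz channel versus roughly 0.18 s per 78 kHz channel, a ratio of 415/78, consistent with the fivefold reduction quoted above.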
One approach to testing general relativity (GR) introduces free parameters in the post-Newtonian (PN) expansion of the gravitational-wave (GW) phase. If systematic errors on these testing GR (TGR) parameters exceed the statistical errors, this may signal a false violation of GR. Here, we consider systematic errors produced by unmodeled binary eccentricity. Since the eccentricity of GW events in ground-based detectors is expected to be small or negligible, the use of quasicircular waveform models for testing GR may be safe when analyzing a small number of events. However, as the catalog size of GW detections increases, more stringent bounds on GR deviations can be placed by combining information from multiple events. In that case, even small systematic biases may become significant. We apply the approach of hierarchical Bayesian inference to model the posterior probability distributions of the TGR parameters inferred from a population of eccentric binary black holes (BBHs). We assume each TGR parameter value varies across the BBH population according to a Gaussian distribution. We compute the posterior distributions for these Gaussian hyperparameters. This is done for LIGO and Cosmic Explorer (CE). We find that systematic biases from unmodeled eccentricity can signal false GR violations for both detectors when considering constraints set by a catalog of events. We also compute the projected bounds on the $10$ TGR parameters when eccentricity is included as a parameter in the waveform model. We find that the first four dimensionless TGR deformation parameters can be bounded at $90\%$ confidence to $\delta \hat{\varphi}_i \lesssim 10^{-2}$ for LIGO and $\lesssim 10^{-3}$ for CE [where $i=(0,1,2,3)$]. In comparison to the circular orbit case, the combined bounds on the TGR parameters worsen by a modest factor of $\lesssim 2$ when eccentricity is included in the waveform.
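The hierarchical step described above can be sketched on a grid under the simplifying assumption that each event's marginal likelihood for a single TGR parameter is approximately Gaussian; the population Gaussian can then be marginalized analytically per event. The function name and the toy catalog values below are illustrative, not the paper's data.

```python
import numpy as np

def hyper_posterior(means, sigmas, mu_grid, sigma_grid):
    """Grid posterior (flat priors) on population hyperparameters (mu, sigma)
    when event i contributes a Gaussian likelihood N(means[i], sigmas[i]^2)
    for its TGR parameter and the population model is N(mu, sigma^2).
    Marginalizing each event analytically gives the per-event likelihood
    N(means[i]; mu, sigmas[i]^2 + sigma^2)."""
    mu = mu_grid[:, None, None]
    s2 = sigma_grid[None, :, None] ** 2 + sigmas[None, None, :] ** 2
    log_like = -0.5 * np.sum((means - mu) ** 2 / s2 + np.log(2 * np.pi * s2),
                             axis=-1)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

# Toy catalog: three events with TGR-parameter estimates consistent with GR.
means = np.array([0.0, 0.1, -0.1])
sigmas = np.array([0.1, 0.1, 0.1])
mu_grid = np.linspace(-1.0, 1.0, 201)
sigma_grid = np.linspace(0.01, 1.0, 100)
post = hyper_posterior(means, sigmas, mu_grid, sigma_grid)
i_mu, i_sigma = np.unravel_index(post.argmax(), post.shape)
```

A systematic bias, such as unmodeled eccentricity, would shift every `means[i]` coherently, pulling the hyperposterior on `mu` away from zero even when each individual event is statistically consistent with GR.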
Vast amounts of astronomical photometric data are generated by various projects, requiring significant effort to identify variable stars and other object classes. In light of this, a general, widely applicable classification framework would simplify the task of designing custom classifiers. We present a novel deep learning framework for classifying light curves using a weakly supervised object detection model. Our framework identifies the optimal windows for both light curves and power spectra automatically, and zooms in on their corresponding data. This allows for automatic feature extraction from both the time and frequency domains, enabling our model to handle data across different scales and sampling intervals. We train our model on datasets obtained from both space-based and ground-based multi-band observations of variable stars and transients. We achieve an accuracy of 87% for combined variables and transient events, which is comparable to the performance of previous feature-based models. Our trained model can be applied directly to data from other missions, such as ASAS-SN, without requiring any retraining or fine-tuning. To address known issues with miscalibrated predictive probabilities, we apply conformal prediction to generate robust predictive sets that guarantee true-label coverage with a given probability. Additionally, we incorporate various anomaly detection algorithms to empower our model with the ability to identify out-of-distribution objects. Our framework is implemented in the Deep-LC toolkit, an open-source Python package hosted on GitHub and PyPI.
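The conformal-prediction step mentioned above can be sketched in its simplest split-conformal form for classification. This is a generic sketch of the technique, not Deep-LC's implementation; the score choice and the toy classifier outputs are assumptions.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification. Nonconformity score:
    1 - predicted probability of the true class on a calibration set.
    Returns a boolean mask (n_test, n_classes) of the prediction sets,
    which cover the true label with probability >= 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = np.ceil((n + 1) * (1.0 - alpha)) / n
    q_hat = np.quantile(scores, level, method="higher")
    return test_probs >= 1.0 - q_hat

# Toy example: a confident two-class classifier calibrated on 99 samples.
cal_probs = np.tile([0.9, 0.1], (99, 1))
cal_labels = np.zeros(99, dtype=int)
test_probs = np.array([[0.95, 0.05], [0.5, 0.5]])
sets = conformal_sets(cal_probs, cal_labels, test_probs)
```

Note that under this simple score an ambiguous example (the second test row) can yield an empty set, effectively an abstention; richer scores trade set size against such behavior.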
Magnetar magnetospheres are strongly twisted, and are able to power sudden energetic events through the rapid release of stored electromagnetic energy. In this paper, we investigate twisted relativistic force-free axisymmetric magnetospheres of rotating neutron stars. We obtain numerical solutions of such configurations using the method of simultaneous relaxation for the magnetic field inside and outside the light-cylinder. We introduce a toroidal magnetic field in the region of closed field-lines that is associated with a poloidal electric current distribution in that region, and explore various mathematical expressions for that distribution. We find that, by increasing the twist, a larger fraction of magnetic field-lines crosses the light-cylinder and opens up to infinity, thus increasing the size of the polar caps and enhancing the spin-down rate. We also find that, for moderately to strongly twisted magnetospheres, the region of closed field-lines ends at some distance inside the light-cylinder. We discuss the implications of these solutions on the variation of magnetar spin-down rates, moding and nulling of pulsars, the relation between the angular shear and the twist and the overall shape of the magnetosphere.
Pulsar searches underpin pulsar navigation, gravitational wave detection and other research topics. Currently, the volume of pulsar candidates collected by the Five-hundred-meter Aperture Spherical radio Telescope (FAST) is growing explosively, which has brought challenges for its pulsar candidate filtering system. In particular, the multi-view heterogeneous data and the class imbalance between true pulsars and non-pulsar candidates have negative effects on traditional single-modal supervised classification methods. In this study, a multi-modal, semi-supervised pulsar candidate sifting algorithm is presented, which adopts a hybrid ensemble clustering scheme combining density-based and partition-based methods with a feature-level fusion strategy for the input data and a data-partition strategy for parallelization. Experiments on both HTRU2 (The High Time Resolution Universe Survey) and actual FAST observation data demonstrate that the proposed algorithm identifies pulsars effectively: on HTRU2, the precision and recall of its parallel mode reach 0.981 and 0.988; on FAST data, they reach 0.891 and 0.961, while the running time also decreases significantly as the number of parallel nodes increases, within limits. We conclude that our algorithm is a feasible approach to large-scale pulsar candidate sifting for FAST drift-scan observations.
The classical nova, AT 2023prq, was discovered on 2023 August 15 and is located at a distance of 46 kpc from the Andromeda Galaxy (M 31). Here we report photometry and spectroscopy of the nova. The 'very fast' ($t_{2,r^{\prime}}\sim3.4$ d) and low luminosity ($M_{r^{\prime}}\sim-7.6$) nature of the transient along with the helium in its spectra would indicate that AT 2023prq is a 'faint-and-fast' He/N nova. Additionally, at such a large distance from the centre of M 31, AT 2023prq is a member of the halo nova population.
Revealing the temporal evolution of individual heavy elements synthesized in the merger ejecta from binary neutron star mergers not only improves our understanding of the origin of heavy elements beyond iron but also clarifies the energy sources of kilonovae. In this work, we present a comprehensive analysis of the temporal evolution of the energy fraction of each nuclide based on the $r$-process nucleosynthesis simulations. The heavy elements dominating the kilonova emission within $\sim100$~days are identified, including $^{127}$Sb, $^{128}$Sb, $^{129}$Sb, $^{130}$Sb, $^{129}$Te, $^{132}$I, $^{222}$Rn, $^{223}$Ra, $^{224}$Ra, and $^{225}$Ac. It is found that the late-time kilonova light curve ($t\gtrsim20$~days) is highly sensitive to the presence of the heavy element $^{225}$Ac (with a half-life of 10.0~days). Our analysis shows that the James Webb Space Telescope (JWST), with its high sensitivity in the near-infrared band, is a powerful instrument for the identification of these specific heavy elements.
We conduct numerical simulations of multiple nova eruptions in detached, widely separated symbiotic systems that include an asymptotic giant branch (AGB) companion, to investigate the impact of white dwarf (WD) mass and binary separation on the evolution of the system. The accretion rate is determined using the Bondi-Hoyle-Lyttleton method, incorporating orbital momentum loss caused by factors such as gravitational radiation, magnetic braking, and drag. The WD in such a system accretes matter from the strong wind of the AGB companion until the latter finishes shedding its envelope, which occurs on an evolutionary time scale of $\approx 3 \times 10^5$ years. Throughout all simulations, we use a consistent AGB model with an initial mass of $1.0 \mathrm {M_\odot}$ while varying the WD mass and binary separation, as they are the critical factors influencing nova eruption behavior. We find that the accretion rate fluctuates between high and low rates during the evolutionary period, significantly impacted by the AGB's mass loss rate. We show that, unlike novae in cataclysmic variables, the orbital period may either increase or decrease during the evolution, depending on the model, while the separation consistently decreases. Furthermore, we have identified cases in which the WDs produce weak, non-ejective novae and experience mass gain. This suggests that, provided such accretion efficiency can be achieved by a more massive WD and maintained for long enough, these systems could potentially serve as progenitors of Type Ia supernovae.
Non-accreting neutron stars display diverse characteristics, leading us to classify them into several groups. This chapter is an observationally driven review in which we survey the properties of the different classes of isolated neutron stars: from the 'normal' rotation-powered pulsars to magnetars, the most magnetic neutron stars in the Universe we know of; from the central compact objects (sometimes also called anti-magnetars) in supernova remnants to the X-ray dim isolated neutron stars. We also highlight a few sources that have exhibited features straddling those of different sub-groups, blurring the apparent diversity of the neutron star zoo and pointing to a grand unification.
We propose a mechanism to explain the low-frequency QPOs observed in X-ray binary systems and AGNs. To do this, we perturb stable accretion disks around Kerr and EGB black holes at different angular velocities, revealing the characteristics of the shock waves and oscillations present on the disk. Applying this perturbation to scenarios with different values of the coupling constant $\alpha$ for EGB black holes and different spin parameters for Kerr black holes, we numerically observe changes in the dynamic structure of the disk and in the oscillations. Through various numerical models, we find that the formation of one- and two-armed spiral shock waves on the disk serves as a mechanism for the generation of QPOs. We compare the QPOs obtained from the numerical calculations with the low-frequency QPOs observed in X-ray binary systems and AGN sources, and find that the results are highly consistent with observations. The shock mechanism on the disk, which leads to quasi-periodic oscillations, explains the X-ray binaries and AGNs studied in this article. From the numerical findings, we conclude that the QPOs depend more strongly on the EGB coupling constant than on the black hole's spin parameter. However, the primary impact on the oscillations and QPOs is driven by the perturbation's angular velocity. According to the models, the perturbation's asymptotic speed $V_{\infty}=0.2$ is responsible for generating QPO frequencies independently of the black hole's spin parameter and the EGB coupling constant. Therefore, for this moderate value of $V_{\infty}$, a two-armed spiral shock wave formed on the disk is suggested as the decisive mechanism in explaining low-frequency QPOs.
The Sagittarius Dwarf Spheroidal galaxy (Sgr) is investigated as a target for DM annihilation searches utilising J-factor distributions calculated directly from a high-resolution hydrodynamic simulation of the infall and tidal disruption of Sgr around the Milky Way. In contrast to past studies, the simulation incorporates DM, stellar and gaseous components for both the Milky Way and the Sgr progenitor galaxy. The simulated distributions account for significant tidal disruption affecting the DM density profile. Our estimate of the J-factor value for Sgr, $J_{\text{Sgr}}=1.48\times 10^{10}$ M$_\odot^2$ kpc$^{-5}$ ($6.46\times10^{16}\ \text{GeV}^2\ \text{cm}^{-5}$), is significantly lower than found in prior studies. This value, while formally a lower limit, is likely close to the true J-factor value for Sgr. It implies that a DM cross-section incompatibly large in comparison with existing constraints would be required to attribute recently observed $\gamma$-ray emission from Sgr to DM annihilation. We also calculate a J-factor value using an NFW profile fitted to the simulated DM density distribution to facilitate comparison with past studies. This NFW J-factor value supports the conclusion that most past studies have overestimated the dark matter density of Sgr on small scales. This, together with the fact that Sgr has recently been shown to emit $\gamma$-rays of astrophysical origin, complicates the use of Sgr in indirect DM detection searches.
In this work, we study the images of a Kerr black hole (BH) immersed in uniform magnetic fields, illuminated by the synchrotron radiation of charged particles in the jet. We focus in particular on the spontaneously vortical motions (SVMs) of charged particles in the jet region and investigate the polarized images of the electromagnetic radiation from the trajectories along SVMs. We find that there is a critical value $\omega_c$ for a charged particle released at a given initial position and subjected to an outward force: once $|qB_0/m|=|\omega_B|>|\omega_c|$, charged particles can move along SVMs in the jet region. Our simplified model suggests that the SVM radiation can act as the light source that illuminates the BH and forms a photon ring structure.
We discuss in detail the possibility that the ``type-II majoron'' -- that is, the pseudo Nambu-Goldstone boson that arises in the context of the type-II seesaw mechanism if the lepton number is spontaneously broken by an additional singlet scalar -- accounts for the dark matter (DM) observed in the universe. We study the requirements the model's parameters have to fulfill in order to reproduce the measured DM relic abundance through two possible production mechanisms in the early universe, freeze-in and misalignment, both during a standard radiation-dominated era and during early matter domination. We then study possible signals of type-II majoron DM and the present and expected constraints on the parameter space that can be obtained from cosmological observations, direct detection experiments, and present and future searches for decaying DM at neutrino telescopes and cosmic-ray experiments. We find that -- depending on the majoron mass, the production mechanism, and the vacuum expectation value of the type-II triplet -- all three decay modes (photons, electrons, neutrinos) of majoron DM particles can yield observable signals at future indirect searches for DM. Furthermore, in a corner of the parameter space, detection of majoron DM is possible through electron recoil at running and future direct detection experiments.
The interiors of neutron stars reach densities and temperatures beyond the limits of terrestrial experiments, providing vital laboratories for probing nuclear physics. While the star's interior is not directly observable, its pressure and density determine the star's macroscopic structure, which affects the spectra observed by telescopes. The relationship between the observations and the internal state is complex and partially intractable, presenting difficulties for inference. Previous work has focused on regressing the parameters describing the internal state from stellar spectra. We demonstrate a calculation of the full likelihood of the internal state parameters given observations, accomplished by replacing intractable elements with machine learning models trained on samples of simulated stars. Our machine-learning-derived likelihood allows us to perform maximum a posteriori estimation of the parameters of interest, as well as full likelihood scans. We demonstrate the technique by inferring stellar mass and radius from an individual stellar spectrum, as well as equation of state parameters from a set of spectra. Our results are more precise than pure regression models, reducing the width of the parameter residuals by 11.8% in the most realistic scenario. The neural networks will be released as a tool for fast simulation of neutron star properties and observed spectra.
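The inference strategy described above — swap the intractable likelihood for a trained surrogate, then maximise the posterior — can be sketched with a toy model. Everything below is invented for illustration (a Gaussian stand-in plays the role of the neural surrogate; grid bounds and widths are made up); it is not the authors' pipeline.

```python
import numpy as np

# Toy stand-in for a trained ML surrogate of log p(spectrum | theta).
# A real implementation would evaluate a neural network here; the Gaussian
# form and its widths are assumptions for this sketch.
def surrogate_loglike(theta, spectrum_summary):
    mass, radius = theta
    mu_m, mu_r = spectrum_summary  # pretend the spectrum implies these values
    return -0.5 * ((mass - mu_m) ** 2 / 0.1 ** 2
                   + (radius - mu_r) ** 2 / 0.5 ** 2)

def map_estimate(spectrum_summary, n_grid=200):
    """Maximum a posteriori over a brute-force (mass, radius) grid scan;
    a flat prior inside the grid bounds is implicit."""
    masses = np.linspace(1.0, 2.5, n_grid)   # solar masses (assumed range)
    radii = np.linspace(9.0, 15.0, n_grid)   # km (assumed range)
    logpost = np.array([[surrogate_loglike((m, r), spectrum_summary)
                         for r in radii] for m in masses])
    i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
    return masses[i], radii[j]

m_hat, r_hat = map_estimate((1.4, 12.0))
```

The grid scan is deliberately naive; with a differentiable surrogate one would instead use a gradient-based optimiser.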
The ability of deep learning (DL) approaches to learn generalised signal and noise models, coupled with their fast inference on GPUs, holds great promise for enhancing gravitational-wave (GW) searches in terms of speed, parameter space coverage, and search sensitivity. However, the opaque nature of DL models severely harms their reliability. In this work, we meticulously develop a DL model stage-wise and work towards improving its robustness and reliability. First, we address the problems in maintaining the purity of training data by deriving a new metric that better reflects the visual strength of the 'chirp' signal features in the data. Using a reduced, smooth representation obtained through a variational auto-encoder (VAE), we build a classifier to search for compact binary coalescence (CBC) signals. Our tests on real LIGO data show an impressive performance of the model. However, upon probing the robustness of the model through adversarial attacks, its simple failure modes were identified, underlining how such models can still be highly fragile. As a first step towards bringing robustness, we retrain the model in a novel framework involving a generative adversarial network (GAN). Over the course of training, the model learns to eliminate the primary modes of failure identified by the adversaries. Although absolute robustness is practically impossible to achieve, we demonstrate some fundamental improvements earned through such training, like sparseness and reduced degeneracy in the extracted features at different layers inside the model. We show that these gains are achieved at practically zero loss in terms of model performance on real LIGO data before and after GAN training. Through a direct search on 8.8 days of LIGO data, we recover two significant CBC events from GWTC-2.1, GW190519_153544 and GW190521_074359. We also report the search sensitivity obtained from an injection study.
Magnetorotational instability (MRI)-driven turbulence and dynamo phenomena are analyzed using direct statistical simulations. Our approach begins by developing a unified mean-field model that combines the traditionally decoupled problems of the large-scale dynamo and angular-momentum transport in accretion disks. The model consists of a hierarchical set of equations, capturing up to the second-order cumulants, while a statistical closure approximation is employed to model the three-point correlators. We highlight the web of interactions that connect different components of stress tensors -- Maxwell, Reynolds, and Faraday -- through shear, rotation, correlators associated with mean fields, and nonlinear terms. We determine the dominant interactions crucial for the development and sustenance of MRI turbulence. Our general mean field model for the MRI-driven system allows for a self-consistent construction of the electromotive force, inclusive of inhomogeneities and anisotropies. Within the realm of large-scale magnetic field dynamo, we identify two key mechanisms -- the rotation-shear-current effect and the rotation-shear-vorticity effect -- that are responsible for generating the radial and vertical magnetic fields, respectively. We provide the explicit (nonperturbative) form of the transport coefficients associated with each of these dynamo effects. Notably, both of these mechanisms rely on the intrinsic presence of large-scale vorticity dynamo within MRI turbulence.
Magnetic reconnection is a ubiquitous phenomenon for magnetized plasmas and leads to the rapid reconfiguration of magnetic field lines. During reconnection events, plasma is heated and accelerated until the magnetic field lines enclose and capture the plasma within a circular configuration. These plasmoids could therefore observationally manifest themselves as hot spots that are associated with flaring behavior in supermassive black hole systems, such as Sagittarius A$^\ast$. We have developed a novel algorithm for identifying plasmoid structures, which incorporates watershed and custom closed contouring steps. From the identified plasmoids, we determine the plasma characteristics and energetics in magnetohydrodynamical simulations. The algorithm's performance is showcased for a high-resolution suite of axisymmetric ideal and resistive magnetohydrodynamical simulations of turbulent accretion discs surrounding a supermassive black hole. For validation purposes, we also evaluate several Harris current sheets that are well-investigated in the literature. Interestingly, we recover the characteristic power-law distribution of plasmoid sizes for both the black hole and Harris sheet simulations. This indicates that while the dynamics are vastly different, with different dominant plasma instabilities, the plasmoid creation behavior is similar. Plasmoid occurrence rates for resistive general relativistic magnetohydrodynamical simulations are significantly higher than for the ideal counterpart. Moreover, the largest identified plasmoids are consistent with sizes typically assumed for semi-analytical interpretation of observations. We recover a positive correlation between the plasmoid formation rate and a decrease in black-hole-horizon-penetrating magnetic flux. The developed algorithm has enabled an extensive quantitative analysis of plasmoid formation in black hole accretion simulations.
Nearby radio galaxies (RGs) of Fanaroff-Riley Class I (FR-I) are considered possible sites for the production of observed ultra-high-energy cosmic rays (UHECRs). Among those, some exhibit blazar-like inner jets, while others display plume-like structures. We reproduce the flow dynamics of FR-I jets using relativistic hydrodynamic simulations. Subsequently, we track the transport and energization of cosmic ray (CR) particles within the simulated jet flows using Monte Carlo simulations. The key determinant of flow dynamics is the mean Lorentz factor of the jet-spine flow, $\langle\Gamma\rangle_{\rm{spine}}$. When $\langle\Gamma\rangle_{\rm{spine}}\gtrsim$ several, the jet spine remains almost unimpeded, but for $\langle\Gamma\rangle_{\rm{spine}}\lesssim$ a few, substantial jet deceleration occurs. CRs gain energy mainly through diffusive shock acceleration for $E\lesssim1$~EeV and shear acceleration for $E\gtrsim1$~EeV. The time-asymptotic energy spectrum of CRs escaping from the jet can be modeled by a double power law, transitioning from $\sim E^{-0.6}$ to $\sim E^{-2.6}$ around a break energy, $E_{\rm{break}}$, with an exponential cutoff at $E_{\rm{break}}\langle\Gamma\rangle_{\rm{spine}}^2$. $E_{\rm{break}}$ is limited either by the Hillas confinement condition or by particle escape from the cocoon via fast spatial diffusion. The spectral slopes primarily arise from multiple episodes of shock and relativistic shear accelerations, and the confinement-escape processes within the cocoon. The exponential cutoff is determined by non-gradual shear acceleration that boosts the energy of high-energy CRs by a factor of $\sim \langle\Gamma\rangle_{\rm{spine}}^2$. We suggest that the model spectrum derived in this work could be employed to investigate the contribution of RGs to the observed population of UHECRs.
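The quoted spectral shape — a double power law transitioning from $\sim E^{-0.6}$ to $\sim E^{-2.6}$ at $E_{\rm break}$, with an exponential cutoff at $E_{\rm break}\langle\Gamma\rangle_{\rm spine}^2$ — can be written down directly. This is only an illustrative parameterisation (the smoothness of the break and the normalisation are assumptions, not from the paper):

```python
import numpy as np

def cr_spectrum(E, E_break, gamma_spine, s1=0.6, s2=2.6):
    """Double power law with an exponential cutoff, following the slopes
    quoted in the abstract: ~E^-0.6 below E_break, ~E^-2.6 above, cut off
    at E_break * <Gamma>_spine^2.  Normalisation is arbitrary."""
    E_cut = E_break * gamma_spine ** 2
    low = (E / E_break) ** -s1
    high = (E / E_break) ** -s2
    # Smoothly join the two branches (the sharpness index 5 is an assumption)
    broken = (low ** -5 + high ** -5) ** (-1.0 / 5.0)
    return broken * np.exp(-E / E_cut)

E = np.logspace(-2, 2, 100)                       # energy in EeV
flux = cr_spectrum(E, E_break=1.0, gamma_spine=5.0)
```

Below the break the local log-log slope approaches $-0.6$, and the cutoff only matters once $E$ nears $E_{\rm break}\langle\Gamma\rangle_{\rm spine}^2 = 25$ EeV for these sample values.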
RX J0440.9+4431 is an accreting X-ray pulsar (XRP) that remained relatively unexplored until recently, when major X-ray outburst activity enabled more in-depth studies. Here, we report on the discovery of ${\sim}0.2$ Hz quasi-periodic oscillations (QPOs) from this source observed with $Fermi$-GBM. The appearance of QPOs in RX J0440.9+4431 is thricely transient, that is, QPOs appear only above a certain luminosity, only at certain pulse phases (namely those corresponding to the peak of its sine-like pulse profile), and only for a few oscillations at a time. We argue that this newly discovered phenomenon (appearance of thricely transient QPOs -- or ATTO) occurs if QPOs are fed through an accretion disk whose inner region viscosity is unstable to mass accretion rate and temperature variations. Such variations are triggered when the source switches to the super-critical accretion regime and the emission pattern changes. We also argue that the configuration of the emission region is likely responsible for the observed spin-phase dependence of the QPOs.
Self-lensing flares (SLFs) are expected to be produced once or twice per orbit by an accreting massive black hole binary (MBHB), if the eclipsing MBHB is observed close to edge-on. SLFs can provide valuable electromagnetic (EM) signatures to accompany the gravitational waves (GWs) detectable by the upcoming Laser Interferometer Space Antenna (LISA). EM follow-ups are crucial for, e.g., sky-localization, and constraining the Hubble constant and the graviton mass. We use high-resolution two-dimensional viscous hydrodynamical simulations of a circumbinary disk (CBD) embedding a MBHB. We then feed very high-cadence outputs of these hydrodynamical simulations into a general-relativistic ray-tracing code to produce synthetic spectra and phase-folded light curves. Our main results show a significant periodic amplification of the flux with the characteristic shape of a sharp flare with a central dip, as the foreground black hole (BH) transits across the minidisk and shadow of the background BH, respectively. These corroborate previous conclusions based on the microlensing approximation and analytical toy models of the emission geometry. We also find that at lower inclinations, without some occlusion of the minidisk emission by the CBD, shocks from quasi-periodic mass-trading between the minidisks can produce bright flares which can mimic SLFs and could hinder their identification.
Theoretical studies of angular momentum transport suggest that isolated stellar-mass black holes are born with negligible dimensionless spin magnitudes $\chi \lesssim 0.01$. However, recent gravitational-wave observations indicate $\gtrsim 15\%$ of binary black hole systems contain at least one black hole with a non-negligible spin magnitude. One explanation is that the first-born black hole spins up the stellar core of what will become the second-born black hole through tidal interactions. Typically, the second-born black hole is the ``secondary'' (less-massive) black hole, though it may become the ``primary'' (more-massive) black hole through a process known as mass-ratio reversal. We investigate this hypothesis by analysing data from the third gravitational-wave transient catalog (GWTC-3) using a ``single-spin'' framework in which only one black hole may spin in any given binary. Given this assumption, we show that at least $28\%$ (90% credibility) of the LIGO--Virgo--KAGRA binaries contain a primary with significant spin, possibly indicative of mass-ratio reversal. We find no evidence for binaries that contain a secondary with significant spin. However, the single-spin framework is moderately disfavoured (natural log Bayes factor $\ln B = 3.1$) when compared to a model that allows both black holes to spin. If future studies can firmly establish that most merging binaries contain two spinning black holes, it may call into question our understanding of formation mechanisms for binary black holes or the efficiency of angular momentum transport in black hole progenitors.
Detailed measurements of the spectral structure of cosmic-ray electrons and positrons from 10.6 GeV to 7.5 TeV are presented from over 7 years of observations with the CALorimetric Electron Telescope (CALET) on the International Space Station. Because of the excellent energy resolution (a few percent above 10 GeV) and the outstanding e/p separation (10$^5$), CALET provides optimal performance for a detailed search for structures in the energy spectrum. The analysis uses data up to the end of 2022, and the statistics of observed electron candidates have increased more than threefold since the last publication in 2018. By adopting an updated boosted decision tree analysis, a sufficient proton rejection power up to 7.5 TeV is achieved, with a residual proton contamination of less than 10%. The observed energy spectrum becomes gradually harder in the lower energy region from around 30 GeV, consistent with AMS-02, but from 300 to 600 GeV it is considerably softer than the spectra measured by DAMPE and Fermi-LAT. At high energies, the spectrum presents a sharp break around 1 TeV, with a spectral index change from -3.15 to -3.91, and a broken power law fits the data in the energy range from 30 GeV to 4.8 TeV better than a single power law, with 6.9 sigma significance, which is compatible with the DAMPE results. The break is consistent with the expected effects of radiation loss during propagation from distant sources (except for the highest energy bin). We have fitted the spectrum with a model consistent with the positron flux measured by AMS-02 below 1 TeV and interpreted the electron + positron spectrum with possible contributions from pulsars and nearby sources. Above 4.8 TeV, a possible contribution from known nearby supernova remnants, including Vela, is addressed by an event-by-event analysis providing a higher proton-rejection power than a purely statistical analysis.
We present a search for transient radio sources on time-scales of seconds to hours at 144 MHz using the LOFAR Two-metre Sky Survey (LoTSS). This search is conducted by examining short time-scale images derived from the LoTSS data. To allow imaging of LoTSS on short time-scales, a novel imaging and filtering strategy is introduced. This includes sky model source subtraction, no cleaning or primary beam correction, a simple source finder, fast filtering schemes and source catalogue matching. This new strategy is first tested by injecting simulated transients, with a range of flux densities and durations, into the data. We find the limiting sensitivity to be 113 and 6 mJy for 8 second and 1 hour transients, respectively. The new imaging and filtering strategies are applied to 58 fields of the LoTSS survey, corresponding to LoTSS-DR1 (2% of the survey). One transient source is identified in the 8 second and 2 minute snapshot images. The source shows a one-minute-duration flare in the 8 hour observation. Our method yields the most sensitive constraints on, and estimates of, the transient surface density at low frequencies on time-scales of seconds to hours: $<4.0\cdot 10^{-4} \; \text{deg}^{-2}$ at 1 hour at a sensitivity of 6.3 mJy; $5.7\cdot 10^{-7} \; \text{deg}^{-2}$ at 2 minutes at a sensitivity of 30 mJy; and $3.6\cdot 10^{-8} \; \text{deg}^{-2}$ at 8 seconds at a sensitivity of 113 mJy. In the future, we plan to apply the strategies presented in this paper to all LoTSS data.
In observational studies of infrared dark clouds, the number of detections of CO freeze-out onto dust grains (CO depletion) at pc scales is extremely limited, and the conditions for its occurrence are therefore still unknown. We report a new object where pc-scale CO depletion is expected. As a part of the Kagoshima Galactic Object survey with the Nobeyama 45-m telescope by Mapping in Ammonia lines (KAGONMA), we have made mapping observations of NH3 inversion transition lines towards the star-forming region associated with CMa OB1, including IRAS 07077-1026, IRAS 07081-1028, and PGCC G224.28-0.82. By comparing the spatial distributions of the NH3 (1,1) and C18O (J=1-0) emission, an intensity anti-correlation was found in IRAS 07077-1026 and IRAS 07081-1028 on the ~1 pc scale. Furthermore, we obtained a lower abundance of C18O, at least in IRAS 07077-1026, than in the other parts of the star-forming region. After examining high-density gas dissipation, photodissociation, and CO depletion, we concluded that the intensity anti-correlation in IRAS 07077-1026 is due to CO depletion. On the other hand, in the vicinity of the centre of PGCC G224.28-0.82, the emission lines of both NH3 (1,1) and C18O (J=1-0) were strongly detected, although the gas temperature and density were similar to those of IRAS 07077-1026. This indicates that there are situations where C18O (J=1-0) cannot trace dense gas on the pc scale, and implies that the conditions under which C18O (J=1-0) can or cannot trace dense gas remain unclear.
We provide an explanation for the reduced dynamical friction on galactic bars in spinning dark matter halos. Earlier work based on linear theory predicted an increase in dynamical friction when dark halos have a net forward rotation, because prograde orbits couple to bars more strongly than retrograde orbits. Subsequent numerical studies, however, found the opposite trend: dynamical friction weakens with increasing spin of the halo. We revisit this problem and demonstrate that linear theory in fact correctly predicts a reduced torque in forward-rotating halos. We show that shifting the halo mass from retrograde to prograde phase space generates a positive gradient in the distribution function near the origin of the z-angular momentum (Lz=0), which results in a resonant transfer of Lz to the bar, making the net dynamical friction weaker. While this effect is subdominant for the major resonances, including the corotation resonance, it leads to a significant positive torque on the bar for the series of direct radial resonances, as these resonances are strongest at Lz=0. The overall dynamical friction from spinning halos is shown to decrease with the halo's spin, in agreement with the secular behavior of N-body simulations. We validate our linear calculation by computing the nonlinear torque from individual resonances using the angle-averaged Hamiltonian.
We present the integrated VLT-MUSE spectrum of the central 2'x2' (30x30 pc$^{2}$) of NGC 2070, the dominant giant HII region of the Tarantula Nebula in the Large Magellanic Cloud, together with an empirical far-ultraviolet spectrum constructed via LMC template stars from the ULLYSES survey and Hubble Tarantula Treasury Project UV photometry. NGC 2070 provides a unique opportunity to compare results from individual stellar populations (e.g. VLT FLAMES Tarantula Survey) in a metal-poor starburst region to the integrated results from the population synthesis tools Starburst99, Charlot & Bruzual and BPASS. The metallicity of NGC 2070 inferred from standard nebular strong line calibrations is 0.4$\pm$0.1 dex lower than obtained from direct methods. The Halpha-inferred age of 4.2 Myr from Starburst99 is close to the median age of OB stars within the region, although individual stars span a broad range of 1-7 Myr. The inferred stellar mass is close to that obtained for the rich star cluster R136 within NGC 2070, although this contributes only 21% to the integrated far-UV continuum. HeII 1640 emission is dominated by classical WR stars and main sequence WNh+Of/WN stars. 18% of the NGC~2070 far-UV continuum flux arises from very massive stars with >100 Msun, including several very luminous Of supergiants. None of the population synthesis models at low metallicities is able to successfully reproduce the far-UV spectrum of NGC 2070. We attribute these issues to the treatment of mass-loss in very massive stars and the lack of contemporary empirical metal-poor templates, plus WR stars produced via binary evolution.
We present results of a search for transiting exoplanets in 10-yr long photometry with thousands of epochs taken in the direction of the Galactic bulge. This photometry was collected in the fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV). Our search covered approx. 222 000 stars brighter than I = 15.5 mag. Selected transits were verified using a probabilistic method. The search resulted in 99 high-probability candidates for transiting exoplanets. The estimated distances to these targets are between 0.4 kpc and 5.5 kpc, which is a significantly wider range than for previous transit searches. The planets found are Jupiter-size, with the exception of one (named OGLE-TR-1003b) located in the hot Neptune desert. If the candidate is confirmed, it can be important for studies of highly irradiated intermediate-size planets. The existing long-term, high-cadence photometry of our candidates increases the chances of detecting transit timing variations at long timescales. Selected candidates will be observed by the future NASA flagship mission, the Nancy Grace Roman Space Telescope, in its search for Galactic bulge microlensing events, which will further enhance the photometric coverage of these stars.
Nearly every massive galaxy contains a supermassive black hole (BH) at its center. For decades, both theory and numerical simulations have indicated that BHs play a central role in regulating the growth and quenching of galaxies. Specifically, BH feedback, by heating or blowing out the interstellar medium (ISM), serves as the groundwork for current models of massive galaxy formation. However, direct evidence for such an impact on the galaxy-wide ISM from BHs has only been found in some extreme objects. For general galaxy populations, it remains unclear whether and how BHs impact the ISM. Here, based on a large sample of nearby galaxies with measured masses of both black holes and atomic hydrogen, the major component of the cold ISM, we reveal that the atomic hydrogen content ($f_{\rm HI} = M_{\rm HI}/M_{\star}$) is tightly anti-correlated with black hole mass ($M_{\rm BH}$), with $f_{\rm HI} \propto M^{-\alpha}_{\rm BH}$ ($\alpha \sim 0.5-0.6$). This correlation is valid across five orders of magnitude in $M_{\rm BH}$. Once this correlation is taken into account, $f_{\rm HI}$ loses its dependence on other galactic parameters, demonstrating that $M_{\rm BH}$ serves as the primary driver of $f_{\rm HI}$. These findings provide critical evidence for how the accumulated energy from BH accretion impacts the galaxy-wide ISM, representing a crucial step forward in our understanding of the role of BHs in regulating the growth and quenching of massive galaxies.
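As a purely illustrative back-of-the-envelope on the quoted scaling: only the exponent $\alpha \sim 0.5$-$0.6$ comes from the abstract; the normalisation and pivot mass below are invented to make the sketch self-contained.

```python
import numpy as np

# Illustrative only: the abstract reports f_HI = M_HI/M_* ∝ M_BH^-alpha with
# alpha ~ 0.5-0.6.  The anchor point (f0, m0) is a made-up assumption.
def f_hi(m_bh, alpha=0.55, f0=1.0, m0=1e7):
    """Atomic-gas fraction implied by a pure power-law scaling,
    anchored (arbitrarily) at f_HI = f0 for M_BH = m0 solar masses."""
    return f0 * (m_bh / m0) ** -alpha

# A 100x (2 dex) increase in M_BH lowers f_HI by 10^(2*alpha) ≈ 12.6x
ratio = f_hi(1e9) / f_hi(1e7)
```

The point of the sketch is the steepness: over the five decades of $M_{\rm BH}$ covered by the correlation, a slope of $\alpha = 0.55$ implies a $\sim 10^{2.75}$-fold drop in $f_{\rm HI}$.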
Our understanding of how the size of galaxies has evolved over cosmic time is based on the use of the half-light (effective) radius as a size indicator. Although the half-light radius has many advantages for structurally parameterising galaxies, it does not provide a measure of the global extent of the objects, but only an indication of the size of the region containing the innermost 50% of the galaxy's light. Therefore, the observed mild evolution of the effective radius of disc galaxies with cosmic time is conditioned by the evolution of the central part of the galaxies rather than by the evolutionary properties of the whole structure. Expanding on the works by Trujillo et al. (2020) and Chamba et al. (2022), we study the size evolution of disc galaxies using as a size indicator the radial location of the gas density threshold for star formation. As a proxy to evaluate this quantity, we use the radial position of the truncation (edge) in the stellar surface mass density profiles of galaxies. To conduct this task, we have selected 1048 disc galaxies with M$_{\rm stellar}$ $>$ 10$^{10}$ M$_{\odot}$ and spectroscopic redshifts up to z=1 within the HST CANDELS fields. We have derived their surface brightness, colour and stellar mass density profiles. Using the new size indicator, the observed scatter of the size-mass relation (~0.1 dex) decreases by a factor of ~2 compared to that using the effective radius. At a fixed stellar mass, Milky Way-like (M$_{\rm stellar}$ ~ 5$\times$10$^{10}$ M$_{\odot}$) disc galaxies have on average increased their sizes by a factor of two in the last 8 Gyr, while the surface stellar mass density at the edge position has decreased by more than an order of magnitude from ~13 M$_{\odot}$/pc$^2$ (z=1) to ~1 M$_{\odot}$/pc$^2$ (z=0). These results reflect a dramatic evolution of the outer part of MW-like disc galaxies, growing ~1.5 kpc Gyr$^{-1}$.
Recent discoveries of a super-virial hot phase of the Milky Way circumgalactic medium (CGM) have launched new questions regarding the multi-phase structure of the CGM around the Galaxy. We use 1.05 Ms of archival Chandra/HETG observations to characterize highly ionized metal absorption at z=0 along the line of sight of the quasar NGC 3783. We detect two distinct temperature phases, with $\log(T_1/\mathrm{K}) = 5.83^{+0.15}_{-0.07}$, a warm-hot virial temperature, and $\log(T_2/\mathrm{K}) = 6.61^{+0.12}_{-0.06}$, a hot super-virial temperature. The super-virial hot phase coexisting with the warm-hot virial phase has been detected in absorption along only two other sightlines and in one stacking analysis. There is scatter in the temperatures of both the hot and the warm-hot gas. Similar to previous observations, we detect super-solar abundance ratios of metals in the hot phase, with a Ne/O ratio 2$\sigma$ above solar mixtures. These new detections deepen the mystery of the mechanism behind the super-virial hot phase, but provide evidence that it is a true property of the CGM rather than an isolated observation. The super-virial CGM could hold the key to understanding the physical and chemical history of the Milky Way.
Hierarchical triple-star systems consist of three components organised into an inner binary ($M_{1}$,$M_{2}$) and a more distant outer tertiary ($M_{3}$) star. The LAMOST Medium-Resolution Spectroscopic Survey (LAMOST-MRS) has provided a large sample for the study of triple system populations. We used the Peak Amplitude Ratio (PAR) method to obtain the mass ratios ($q_\mathrm{{in}}$, $q_\mathrm{{out}}$) of a triple system from its normalised spectrum. By calculating the Cross-Correlation Function (CCF), we determined the correlation between the mass ratio $q_\mathrm{{out}}$ ($M_{3}$/($M_{1}$+$M_{2}$)) and the amplitude ratio ($A_{3}$/($A_{1}$+$A_{2}$)). We derived $q_\mathrm{{in}}$ of $0.5-1.0$ and $q_\mathrm{{out}}$ between 0.2 and 0.8. By fitting a power-law function to the corrected $q_\mathrm{{in}}$ distribution, the $\gamma_\mathrm{{in}}$ values are estimated to be $-0.654\pm2.915$, $4.304\pm1.125$ and $11.371\pm1.309$ for A, F and G type stars, respectively. The derived $\gamma_\mathrm{{in}}$ values increase as the mass decreases, indicating that less massive stars are more likely to have companion stars with similar masses. By fitting a power-law function to the corrected $q_\mathrm{{out}}$ distribution, the ${\gamma_\mathrm{{out}}}$ values are estimated to be $-2.016\pm0.172$, $-1.962\pm0.853$ and $-1.238\pm0.141$ for G, F and A type stars, respectively. The ${\gamma_\mathrm{{out}}}$ values show a trend of growth toward lower primary star masses.
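Not the authors' PAR pipeline — just a generic, self-contained sketch (with invented numbers) of the kind of fit quoted above: drawing mass ratios from a power law $f(q) \propto q^{\gamma}$ on a bounded interval and recovering the slope $\gamma$ from the binned distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_powerlaw(gamma, q_min, q_max, n, rng):
    """Draw q from f(q) ∝ q^gamma on [q_min, q_max] by inverse-CDF sampling
    (valid for gamma != -1)."""
    u = rng.random(n)
    a = gamma + 1.0
    return (q_min ** a + u * (q_max ** a - q_min ** a)) ** (1.0 / a)

def fit_gamma(q, bins=20):
    """Recover the slope by a least-squares line fit to the log-binned,
    normalised histogram (a crude stand-in for a proper likelihood fit)."""
    counts, edges = np.histogram(q, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    good = counts > 0
    return np.polyfit(np.log(centers[good]), np.log(counts[good]), 1)[0]

# Invented example: gamma = -1.2 on the q_out range 0.2-0.8 quoted above
q = sample_powerlaw(gamma=-1.2, q_min=0.2, q_max=0.8, n=50_000, rng=rng)
gamma_hat = fit_gamma(q)
```

A negative $\gamma$, as found for $q_\mathrm{out}$, means low mass ratios dominate; the large positive $\gamma_\mathrm{in}$ values for F and G stars instead pile the probability up near $q_\mathrm{in} \approx 1$ ("twin" companions).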
We present radio observations of 24 confirmed and candidate strongly lensed quasars identified by the Gaia Gravitational Lenses (GraL) working group. We detect radio emission from 8 systems in 5.5 and 9 GHz observations with the Australia Telescope Compact Array (ATCA), and from 12 systems in 6 GHz observations with the Karl G. Jansky Very Large Array (VLA). The resolution of our ATCA observations is insufficient to resolve the radio emission into multiple lensed images, but we do detect multiple images from 11 VLA targets. We have analysed these systems using our observations in conjunction with existing optical measurements, including measuring offsets between the radio and optical positions for each image, and building updated lens models. These observations significantly expand the existing sample of lensed radio quasars, suggest that most lensed systems are detectable at radio wavelengths with targeted observations, and demonstrate the feasibility of population studies with high resolution radio imaging.
RR Lyrae stars are pulsating stars, many of which also show a long-term variation called the Blazhko effect, a modulation of the amplitude and phase of the light curve. In this work, we investigated the incidence rate of the Blazhko effect in first-overtone pulsating RR Lyrae (RRc) stars of the Galactic halo. The focus was on the Stripe 82 region of the Galactic halo, which was studied by Sesar et al. using Sloan Digital Sky Survey (SDSS) data. In their work, 104 RR Lyrae stars were classified as RRc type. We combined their SDSS light curves with Zwicky Transient Facility (ZTF) data and used them to document the Blazhko properties of these RRc stars. Our analysis showed that among the 104 RRc stars, 8 were in fact RRd stars and were excluded from the study. Of the remaining 96, 34 were of Blazhko type and 62 were non-Blazhko, giving an incidence rate of 35.42% for Blazhko RRc stars. The shortest Blazhko period found was 12.808 +/- 0.001 d for SDSS 747380, while the longest was 3100 +/- 126 d for SDSS 3585856. Combining the SDSS and ZTF data sets increased the probability of detecting the small variations due to the Blazhko effect, and thus provided a unique opportunity to find longer Blazhko periods. We found that 85% of the Blazhko RRc stars had a Blazhko period longer than 200 d.
We determine the ortho/para ratios of NH2D and NHD2 in two dense, starless cores, where the formation of these molecules is thought to be dominated by gas-phase reactions, which, in turn, is predicted to result in deviations from the statistical spin ratios. The Large APEX sub-Millimeter Array (LAsMA) multibeam receiver of the Atacama Pathfinder EXperiment (APEX) telescope was used to observe the prestellar cores H-MM1 and Oph D in Ophiuchus in the ground-state lines of ortho and para NH2D and NHD2. The fractional abundances of these molecules were derived employing 3D radiative transfer modelling, using different assumptions about the abundance profiles as functions of density. We also ran gas-grain chemistry models with different scenarios for proton/deuteron exchange and chemical desorption from grains, to test whether one of these models can reproduce the observed spin ratios. The observationally deduced ortho/para ratios of NH2D and NHD2 are, in both cores, within 10% of their statistical values of 3 and 2, respectively; taking 3-sigma limits, deviations of about 20% from these values are allowed. Of the chemistry models tested here, the one that assumes a proton hop (as opposed to full scrambling) in the reactions contributing to ammonia formation, together with a constant efficiency of chemical desorption, comes nearest to the observed abundances and spin ratios. The nuclear spin ratios derived here are in conflict with spin-state chemistry models that assume full scrambling in the proton-donation and hydrogen-abstraction reactions leading to deuterated ammonia. The efficiency of chemical desorption strongly influences the predicted abundances of NH3, NH2D, and NHD2, but has a lesser effect on their ortho/para ratios; for these, the proton-exchange scenario in the gas is decisive. We suggest that this is because of rapid re-processing of ammonia and related cations by gas-phase ion-molecule reactions.
We present JCMT POL-2 850 um dust polarization observations and Mimir H-band stellar polarization observations toward the starless core L1512. We detect a highly ordered core-scale magnetic field traced by the POL-2 data, whose orientation is consistent with the parsec-scale magnetic fields traced by Planck data, suggesting that the large-scale fields thread from the low-density region to the dense core region in this cloud. The surrounding magnetic field traced by the Mimir data shows a wider variation in field orientation, suggesting there could be a transition in magnetic field morphology at the envelope scale. L1512 was suggested in a previous study, via time-dependent chemical analysis, to be older than 1.4 Myr, hinting that the magnetic field could be strong enough to slow the collapse of L1512. In this study, we use the Davis-Chandrasekhar-Fermi method to derive a plane-of-sky magnetic field strength ($B_{pos}$) of 18$\pm$7 uG and an observed mass-to-flux ratio ($\lambda_{obs}$) of 3.5$\pm$2.4, suggesting that L1512 is magnetically supercritical. However, the absence of significant infall motion and the presence of an oscillating envelope are inconsistent with a magnetically supercritical condition. Using a virial analysis, we suggest the presence of a hitherto hidden line-of-sight magnetic field of ~27 uG, giving a total mass-to-flux ratio ($\lambda_{tot}$) of ~1.6, in which case both magnetic and kinetic pressures are important in supporting the L1512 core. Alternatively, L1512 may have only just reached supercriticality and could begin to collapse at any time.
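The Davis-Chandrasekhar-Fermi estimate used above combines the gas density, the non-thermal velocity dispersion, and the dispersion of polarization angles. A minimal sketch, assuming the commonly used correction factor Q = 0.5 and a mean gas mass of 2.8 m_H per H2 molecule; the input numbers are placeholders, not the paper's measurements:

```python
import math

M_H = 1.6735575e-24  # hydrogen atom mass, g

def dcf_bpos_uG(n_h2_cm3, sigma_v_kms, sigma_theta_deg, Q=0.5):
    """B_pos = Q * sqrt(4*pi*rho) * sigma_v / sigma_theta (CGS units),
    returned in microgauss; assumes 2.8 m_H of gas mass per H2."""
    rho = 2.8 * M_H * n_h2_cm3            # g cm^-3
    sigma_v = sigma_v_kms * 1e5           # cm s^-1
    sigma_theta = math.radians(sigma_theta_deg)
    return Q * math.sqrt(4.0 * math.pi * rho) * sigma_v / sigma_theta * 1e6

def mass_to_flux_ratio(N_h2_cm2, B_uG):
    """lambda = 7.6e-21 * N(H2)[cm^-2] / B[uG] (Crutcher-style convention);
    lambda > 1 means magnetically supercritical."""
    return 7.6e-21 * N_h2_cm2 / B_uG

# Placeholder dense-core numbers: n(H2) = 1e4 cm^-3, sigma_v = 0.2 km/s,
# polarization-angle dispersion 10 degrees -> B_pos of a few tens of uG.
b_pos = dcf_bpos_uG(1e4, 0.2, 10.0)
```

Note that B_pos scales linearly with the velocity dispersion and inversely with the angle dispersion, which is why both must be measured carefully.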
We consider a static, spherically symmetric space-time with an electric field arising from a quadratic metric-affine extension of General Relativity. Such a space-time is free of singularities at the centre of black holes, while at large distances it quickly reduces to the usual Reissner-Nordstr\"om solution. We probe this space-time metric, which is uniquely characterized by two length scales, $r_q$ and $\ell$, using the astrometric and spectroscopic measurements of the orbital motion of the S2 star around the Galactic Center. Our analysis constrains $r_q$ to be below $2.7M$ for values $\ell<120\,{\rm AU}$, strongly favouring a central object that resembles a Schwarzschild black hole.
There is a class of binary post-AGB stars that are surrounded by Keplerian disks and often present outflows resulting from gas escaping from the disk. We present maps and detailed models of the 12CO and 13CO J=2-1 emission lines for four objects: AC Herculis, 89 Herculis, IRAS 19125+0343, and R Scuti. Our maps and models allow us to study their morphology, kinematics, and mass distribution. Our maps and modelling of AC Her reveal that 95% of the total nebular mass is located in the disk, so this source is disk-dominated (like the Red Rectangle, IW Carinae, and IRAS 08544-4431). On the contrary, our maps and modelling of 89 Herculis, IRAS 19125+0343, and R Scuti suggest that the outflow is the dominant component of the nebula, defining a new subclass of nebulae around binary post-AGB stars: the outflow-dominated ones. Beyond CO, the molecular content of this kind of source was barely known. We also present the first very deep single-dish radio molecular survey of these objects in the 1.3, 2, 3, 7, and 13 mm bands. Our results allow us to classify our sources as O- or C-rich. We also conclude that these sources present, in general, a low molecular richness compared to circumstellar envelopes around AGB stars and other post-AGB stars, especially in the disk-dominated cases. This thesis presents a comprehensive study at millimetre wavelengths. On the one hand, we perform a detailed kinematic study of these objects through NOEMA interferometric observations and detailed models. On the other hand, we study the chemistry of these sources thanks to our sensitive single-dish observations. The union of these different methods yields a comprehensive study of the molecular gas present in these sources. We hope this Ph.D. thesis will become a reference for future studies of molecular gas in nebulae around binary post-AGB stars.
The various successes of Milgrom's MOND have led to suggestions that its critical acceleration parameter $a_0 \approx 1.2\times 10^{-10}\,{\rm m\,s^{-2}}$ is a fundamental physical constant in the same category as the gravitational constant (for example), and therefore requires no further explanation. There is no independent evidence supporting this conjecture. Motivated by empirical indications of self-similarities in the exterior part of the optical disk (the optical annulus), we describe a statistical analysis of four large samples of optical rotation curves and find that quantitative indicators of self-similar dynamics on the optical annulus are irreducibly present in each of the samples. These symmetries lead to the unambiguous identification of a characteristic point, $(R_c,V_c)$, on each annular rotation curve, where $R_c \approx f(M,S)$ and $V_c \approx g(M)$ for absolute magnitude $M$ and surface brightness $S$. This opens the door to an investigation of the behaviour of the associated characteristic acceleration $a_c \equiv V_c^2/R_c$ across each sample. The first observation is that since $a_c \approx g^2(M)/f(M,S)$, $a_c$ is a constant within any given disk but varies between disks. Calculation then shows that $a_c$ lies in the approximate range $(1.2\pm0.5)\times 10^{-10}\,{\rm m\,s^{-2}}$ for each sample. It follows that Milgrom's $a_0$ is effectively identical to $a_c$, and his critical acceleration boundary is actually the characteristic boundary, $R=R_c$, on any given disk. Since $a_c$ varies between galaxies, so must $a_0$. In summary, Milgrom's critical acceleration boundary is an objective characteristic of the optical disk, and $a_0$ cannot be a fundamental physical constant.
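The characteristic acceleration $a_c \equiv V_c^2/R_c$ is simple arithmetic once the characteristic point is known; the only subtlety is unit conversion. A sketch using hypothetical rotation-curve values (150 km/s at 6 kpc, chosen purely for illustration):

```python
KPC_IN_M = 3.0857e19  # one kiloparsec in metres

def characteristic_acceleration(v_c_kms, r_c_kpc):
    """a_c = V_c^2 / R_c in m s^-2, for V_c in km/s and R_c in kpc."""
    return (v_c_kms * 1e3) ** 2 / (r_c_kpc * KPC_IN_M)

# Hypothetical characteristic point: V_c = 150 km/s at R_c = 6 kpc,
# which lands near Milgrom's a_0 ~ 1.2e-10 m/s^2.
a_c = characteristic_acceleration(150.0, 6.0)
```

Sweeping plausible $(R_c, V_c)$ pairs over a sample in this way reproduces the spread $(1.2\pm0.5)\times 10^{-10}\,{\rm m\,s^{-2}}$ quoted in the abstract.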
The wavelength coverage and sensitivity of JWST now enable us to probe the rest-frame UV - optical spectral energy distributions (SEDs) of galaxies at high redshift ($z>4$). From these SEDs it is possible, in principle, to infer key physical properties through SED fitting, including stellar masses, star formation rates, and dust attenuation. These in turn can be compared with the predictions of galaxy formation simulations, allowing us to validate and refine the incorporated physics. However, the inference of physical properties, particularly from photometry alone, can lead to large uncertainties and potential biases. Instead, it is now possible, and common, for simulations to be \emph{forward-modelled} to yield synthetic observations that can be compared directly to real observations. In this work, we measure the JWST broadband fluxes and colours of a robust sample of $5<z<10$ galaxies using the Cosmic Evolution Early Release Science (CEERS) Survey. We then analyse predictions from a variety of models using the same methodology and compare the NIRCam/F277W magnitude distribution and NIRCam colours with observations. We find that the predicted and observed magnitude distributions are similar, at least at $5<z<8$. At $z>8$ the distributions differ somewhat, though our observed sample size is small and thus susceptible to statistical fluctuations. Likewise, the predicted and observed colour evolution show broad agreement, at least at $5<z<8$. There is, however, some disagreement between the observed and modelled strength of the strong-line contribution. In particular, all the models fail to reproduce the F410M-F444W colour at $z>8$, though, again, the sample size is small here.
We study the resolved stellar populations and derive the star formation history of NGC 5474, a peculiar star-forming dwarf galaxy at a distance of $\sim 7$ Mpc, using Hubble Space Telescope Advanced Camera for Surveys data from the Legacy Extragalactic UV Survey (LEGUS) program. We apply an improved colour-magnitude diagram fitting technique based on the code SFERA and use the latest PARSEC-COLIBRI stellar models. Our results are the following. The off-centre bulge-like structure, suggested to constitute the bulge of the galaxy, is dominated by star formation (SF) activity that began $14$ Gyr ago and lasted until at least $1$ Gyr ago. Nevertheless, this component shows clear evidence of prolonged SF activity (lasting until $\sim 10$ Myr ago). We estimate the total stellar mass of the bulge-like structure to be $(5.0 \pm 0.3) \times 10^{8}$ \MSUN. Such a mass is consistent with published suggestions that this structure is in fact an independent system orbiting around, and not within, NGC 5474's disc. The stellar over-density located to the south-west of the bulge-like structure shows a significant SF event older than $1$ Gyr, while it is characterised by two recent peaks of SF, around $\sim10$ and $\sim100$ Myr ago. In the last Gyr, the behaviour of the stellar disc is consistent with what is known in the literature as `gasping'. The synchronised burst at $10-35$ Myr in all components might hint at a recent gravitational interaction between the stellar bulge-like structure and the disc of NGC 5474.
Here we explore the impact of all major factors, such as the non-homogeneous gas distribution, galactic rotation, and gravity, on the observational appearance of superbubbles in nearly face-on spiral galaxies. The results of our 3D numerical simulations are confronted with the observed gas column density distribution in the largest south-east superbubble in the late-type spiral galaxy NGC 628. We make use of the star formation history inside the bubble, derived from the resolved stellar population seen in the HST images, to obtain its energy, and demonstrate that the results of the numerical simulations are in good agreement with the observed gas surface density distribution. We also show that the observed gas column density distribution constrains the gaseous-disk scale height and the midplane gas density, provided the energy input rate can be obtained from observations. This implies that observations of large holes in the interstellar gas distribution and their stellar populations have the potential to solve the degeneracy between the midplane gas density and the gaseous-disk scale height in nearly face-on galaxies. The possible role of superbubbles in driving secondary star formation in galaxies is also briefly discussed.
Convolutional neural networks (CNNs) are the state-of-the-art technique for identifying strong gravitational lenses. Although they are highly successful in recovering genuine lens systems with a high true-positive rate, the unbalanced nature of the data set (lens systems are rare) still leads to a high false-positive rate. For these techniques to be successful in upcoming surveys (e.g. with Euclid), most emphasis should be placed on reducing false positives rather than on reducing false negatives. In this paper, we introduce densely connected neural networks (DenseNets) as the CNN architecture in a new pipeline-ensemble model containing an ensemble of classification CNNs and regression CNNs to classify and rank-order lenses, respectively. We show that DenseNets achieve comparable true-positive rates but considerably lower false-positive rates than residual networks (ResNets). We therefore recommend DenseNets for future missions involving large data sets, such as Euclid, where low false-positive rates play a key role in the automated follow-up and analysis of large numbers of strong gravitational lens candidates when human vetting is no longer feasible.
The radio astronomy community is adopting deep learning techniques to deal with the huge data volumes expected from the next generation of radio observatories. Bayesian neural networks (BNNs) provide a principled way to model uncertainty in the predictions made by deep learning models and will play an important role in extracting well-calibrated uncertainty estimates from the outputs of these models. However, most commonly used approximate Bayesian inference techniques, such as variational inference and MCMC-based algorithms, exhibit a "cold posterior effect" (CPE), whereby the posterior must be down-weighted in order to obtain good predictive performance. The CPE has been linked to several factors, such as data augmentation or dataset curation leading to a misspecified likelihood, and prior misspecification. In this work we use MCMC sampling to show that a Gaussian parametric family is a poor variational approximation to the true posterior and gives rise to the CPE previously observed in morphological classification of radio galaxies using variational-inference-based BNNs.
We report the discovery of broad components with P-Cygni profiles of the hydrogen and helium emission lines in two low-redshift, low-metallicity dwarf compact star-forming galaxies (SFGs), SBS 1420+540 and J1444+4840. We find small stellar masses of 10^{6.24} and 10^{6.59} M$_\odot$, low oxygen abundances 12+log O/H of 7.75 and 7.45, high velocity dispersions reaching $\sigma$ ~700 and ~1200 km/s, high terminal velocities of the stellar wind of ~1000 and ~1000-1700 km/s, respectively, and large EW(H$\beta$) of ~300A for both. For SBS 1420+540, we succeeded in capturing an eruption phase by monitoring the variations of the broad-to-narrow component flux ratio. We observe a sharp increase of that ratio by a factor of 4 in 2017 and a decrease by about an order of magnitude in 2023. The peak luminosity of ~10^{40} erg/s of the broad component in $L$(H$\alpha$) lasted for about 6 years out of three decades of monitoring. This leads us to conclude that there is probably an LBV candidate (LBVc) in this galaxy. As for J1444+4840, its very high $L$(H$\alpha$) of about 10^{41} erg/s, close to values observed in active galactic nuclei (AGNs) and Type IIn supernovae (SNe), and the variability of no more than 20 per cent in the broad-to-narrow flux ratio of the hydrogen and helium emission lines over an 8-year monitoring period do not allow us to conclude definitively that it contains an LBVc. On the other hand, the possibility that the line variations are due to a long-lived stellar transient of LBV/SNIIn type cannot be ruled out.
We propose to learn latent-space representations of radio galaxies, and to this end train a very deep variational autoencoder (\protect\Verb+VDVAE+) on RGZ DR1, an unlabeled dataset. We show that the encoded features can be leveraged for downstream tasks such as classifying galaxies in labeled datasets and similarity search. Results show that the model is able to reconstruct its inputs, capturing their salient features. We use the latent codes of galaxy images from the MiraBest Confident and FR-DEEP NVSS datasets to train various non-neural-network classifiers. We find that the latter can differentiate FRI from FRII galaxies, achieving \textit{accuracy} $\ge 76\%$, \textit{roc-auc} $\ge 0.86$, \textit{specificity} $\ge 0.73$ and \textit{recall} $\ge 0.78$ on the MiraBest Confident dataset, comparable to results obtained in previous studies. The performance of simple classifiers trained on FR-DEEP NVSS data representations is on par with that of a deep learning (CNN-based) classifier trained on images in previous work, highlighting how powerful the compressed information is. We successfully exploit the learned representations to search a dataset for galaxies that are semantically similar to a query image belonging to a different dataset. Although generating new galaxy images (e.g. for data augmentation) is not our primary objective, we find that the \protect\Verb+VDVAE+ model is a relatively good emulator. Finally, as a step toward anomaly/novelty detection, a density estimator -- Masked Autoregressive Flow (\protect\Verb+MAF+) -- is trained on the latent codes, such that the log-likelihood of the data can be estimated. The downstream tasks conducted in this work demonstrate the meaningfulness of the latent codes.
We present the first large-scale 3D kinematic study of ~2000 spectroscopically confirmed young stars (<20 Myr) in 18 star clusters and OB associations (hereafter groups), combining Gaia astrometry and Gaia-ESO Survey spectroscopy. We measure 3D velocity dispersions for all groups, which range from 0.61 to 7.4 km/s (1D velocity dispersions of 0.35 to 4.3 km/s). We find that the majority of groups have anisotropic velocity dispersions, suggesting they are not dynamically relaxed. From the 3D velocity dispersions, measured radii, and estimates of the total mass, we estimate the virial state and find that all systems are super-virial when only the stellar mass is considered, but that some systems are sub-virial when the mass of the molecular cloud is taken into account. We observe an approximately linear correlation between the 3D velocity dispersion and the group mass, which would imply that the virial state of groups scales as the square root of the group mass; however, we do not observe a strong correlation between virial state and group mass. In agreement with their virial state, we find that nearly all of the groups studied are in the process of expanding and that the expansion is anisotropic, implying that groups were not spherical prior to expansion. One group, Rho Oph, is found to be contracting and in a sub-virial state (when the mass of the surrounding molecular cloud is considered). This work provides a glimpse of the potential of combining Gaia with data from the next generation of spectroscopic surveys.
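The virial state estimated above is commonly summarized by the virial parameter $\alpha_{\rm vir} = 5\sigma_{\rm 1D}^2 R / (GM)$, where $\alpha_{\rm vir} = 1$ corresponds to virial equilibrium and larger values indicate a super-virial (unbound) system. A minimal sketch with illustrative numbers, not the paper's measurements:

```python
G_PC = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def virial_parameter(sigma_1d_kms, radius_pc, mass_msun):
    """alpha_vir = 5 * sigma_1D^2 * R / (G * M); alpha_vir = 1 corresponds
    to virial equilibrium (2K = |W| for a uniform sphere)."""
    return 5.0 * sigma_1d_kms ** 2 * radius_pc / (G_PC * mass_msun)

# Hypothetical group: stellar mass alone vs. stars plus molecular cloud.
alpha_stars = virial_parameter(1.5, 2.0, 800.0)    # stars only: super-virial
alpha_total = virial_parameter(1.5, 2.0, 8000.0)   # adding cloud mass lowers alpha
```

This illustrates the abstract's point: the same measured dispersion and radius can imply a super-virial state with stellar mass only, yet a sub-virial one once the cloud mass is included.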
The ages of young star clusters are fundamental clocks to constrain the formation and evolution of pre-main-sequence stars and their protoplanetary disks and exoplanets. However, dating methods for very young clusters often disagree, casting doubts on the accuracy of the derived ages. We propose a new method to derive the kinematic age of star clusters based on the evaporation ages of their stars. The method is validated and calibrated using hundreds of clusters identified in a supernova-driven simulation of the interstellar medium forming stars for approximately 40 Myr within a 250 pc region. We demonstrate that the clusters' evaporation-age uncertainty can be as small as about 10% for clusters with a large enough number of evaporated stars and small but realistic observational errors. We have obtained evaporation ages for a pilot sample of 10 clusters, finding a good agreement with their published isochronal ages. The evaporation ages will provide important constraints for modeling the pre-main-sequence evolution of low-mass stars, as well as to investigate the star-formation and gas-evaporation history of young clusters. These ages can be more accurate than isochronal ages for very young clusters, for which observations and models are more uncertain.
Context. The presence of an [$\alpha$/Fe]-[Fe/H] bi-modality in the Milky Way disc has animated the Galactic archaeology community for more than two decades. Aims. Our goal is to investigate the chemical, temporal, and kinematic structure of the Galactic discs using abundances, kinematics, and ages derived self-consistently with the new Bayesian framework SAPP. Methods. We employ the public Gaia-ESO spectra, as well as Gaia EDR3 astrometry and photometry. Stellar parameters and chemical abundances are determined for 13 426 stars using NLTE models of synthetic spectra. Ages are derived for a sub-sample of 2 898 stars, including subgiants and main-sequence stars. The sample probes a large range of Galactocentric radii, $\sim$ 3 to 12 kpc, and extends out of the disc plane to $\pm$ 2 kpc. Results. Our new data confirm the known bi-modality in the [Fe/H] - [$\alpha$/Fe] space, which is often viewed as the manifestation of the chemical thin and thick discs. The over-densities significantly overlap in metallicity, age, and kinematics, and none of these quantities is a sufficient criterion for distinguishing between the two disc populations. In contrast to previous studies, we find that the $\alpha$-poor disc population has a very extended [Fe/H] distribution and contains $\sim$ 20$\%$ old stars with ages of up to $\sim$ 11 Gyr. Conclusions. Our results suggest that the Galactic thin disc was in place early, at look-back times corresponding to redshifts z $\sim$ 2 or more. At ages of $\sim$ 9 to 11 Gyr, the two disc structures shared a period of co-evolution. Our data can be understood within the clumpy disc formation scenario, which does not require a pre-existing thick disc to initiate the formation of the thin disc. We anticipate that a similar evolution can be realised in cosmological simulations of galaxy formation.
We present a new selection of 358 blue compact dwarf galaxies (BCDs) from 5,000 square degrees of the Dark Energy Survey (DES), and the spectroscopic follow-up of a subsample of 68 objects. For the subsample of 34 objects with deep spectra, we measure the metallicity via the direct T$_e$ method using the auroral [\oiii]$\lambda$ 4363 emission line. These BCDs have an average oxygen abundance of 12+log(O/H) = 7.8, with stellar masses between 10$^7$ and 10$^8$ M$_\odot$ and specific star formation rates between $\sim$ 10$^{-9}$ and 10$^{-7}$ yr$^{-1}$. We compare the position of our BCDs with the mass-metallicity (M-Z) and luminosity-metallicity (L-Z) relations derived from the Local Volume Legacy sample. We find that the scatter about the M-Z relation is smaller than the scatter about the L-Z relation. We identify a correlation between the offsets from the M-Z and L-Z relations, which we suggest is due to the contribution of metal-poor inflows. Finally, we explore the validity of the mass-metallicity-SFR fundamental plane in the mass range probed by our galaxies. We find that BCDs with stellar masses smaller than $10^{8}$ M$_{\odot}$ do not follow the extrapolation of the fundamental plane. This result suggests that mechanisms other than the balance between inflows and outflows may be at play in regulating the position of low-mass galaxies in the M-Z-SFR space.
We present a new analysis of the rest-frame UV and optical spectra of a sample of three $z>8$ galaxies discovered behind the gravitational lensing cluster RX\,J2129.4+0009. We combine these observations with $z>7.5$ galaxies from the literature for which similar measurements are available. As already pointed out in other studies, the high [\oiii]$\lambda$5007/[\oii]$\lambda$3727 ratios ($O_{32}$) and steep UV continuum slopes ($\beta$) are consistent with the values observed for low-redshift Lyman continuum emitters, suggesting that such galaxies contribute to the ionizing budget of the intergalactic medium. We construct a logistic regression model to estimate the probability of a galaxy being a Lyman continuum emitter based on the measured \MUV, $\beta$, and $O_{32}$. Using this probability and the UV luminosity function, we construct an empirical model that estimates the contribution of high-redshift galaxies to reionization. The preferred scenario in our analysis shows that at $z\sim8$, the average escape fraction of the galaxy population (i.e., including both LyC emitters and non-emitters) varies with \MUV, with intermediate-UV-luminosity ($-19<M_{UV}<-16$) galaxies having larger escape fractions. Galaxies with faint UV luminosity ($-16<M_{UV}<-13.5$) contribute most of the ionizing photons. The relative contribution of faint versus bright galaxies depends on redshift, with the intermediate-UV galaxies becoming more important over time. UV-bright galaxies, although more likely to be LCEs at a given log($O_{32}$) and $\beta$, contribute the least to the total ionizing photon budget.
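A logistic regression model of this kind maps the three observables to a probability through a sigmoid. The coefficients below are placeholders chosen only to illustrate the expected trends (brighter $M_{\rm UV}$, bluer $\beta$, and higher $O_{32}$ raising the LyC-emitter probability); they are not the paper's fitted values:

```python
import math

def p_lyc_emitter(m_uv, beta, log_o32, coeffs=(-1.0, -0.1, -1.5, 1.2)):
    """P(LCE) = sigmoid(b0 + b1*M_UV + b2*beta + b3*log10(O32)).
    The coefficients are hypothetical placeholders for illustration."""
    b0, b1, b2, b3 = coeffs
    z = b0 + b1 * m_uv + b2 * beta + b3 * log_o32
    return 1.0 / (1.0 + math.exp(-z))

# Reference galaxy: M_UV = -20, beta = -2.2, log10(O32) = 0.6.
p_ref = p_lyc_emitter(-20.0, -2.2, 0.6)
```

Folding such a probability over the UV luminosity function then gives the population-averaged escape fraction, as done in the abstract's empirical model.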
Active galaxies contain a supermassive black hole at their center, which grows by accreting matter from the surrounding galaxy. The accretion process in the central ~10 parsecs has not been directly resolved in previous observations, due to the small apparent angular sizes involved. We observed the active nucleus of the Circinus Galaxy using sub-millimeter interferometry. A dense inflow of molecular gas is evident on sub-parsec scales. We calculate that less than 3% of this inflow is accreted by the black hole, with the rest being ejected by multiphase outflows, providing feedback to the host galaxy. The observations also reveal a dense gas disk surrounding the inflow; the disk is gravitationally unstable, which drives the accretion into the central ~1 parsec.
When a detailed model of a stellar population is unavailable, it is most common to assume that stellar masses are independently and identically distributed according to some distribution: the universal initial mass function (IMF). However, stellar masses resulting from causal, long-ranged physics cannot be truly random and independent, and the IMF may vary with environment. To compare stochastic sampling with a physical model, we run a suite of 100 STARFORGE radiation magnetohydrodynamics simulations of low-mass star cluster formation in $2000M_\odot$ clouds that form $\sim 200$ stars each on average. The stacked IMF from the simulated clouds has a sharp truncation at $\sim 28 M_\odot$, well below the typically-assumed maximum stellar mass $M_{\rm up} \sim 100-150M_\odot$ and the total cluster mass. The sequence of star formation is not totally random: massive stars tend to start accreting sooner and finish later than the average star. However, final cluster properties such as maximum stellar mass and total luminosity have a similar amount of cloud-to-cloud scatter to random sampling. Therefore stochastic sampling does not generally model the stellar demographics of a star cluster as it is forming, but may describe the end result fairly well, if the correct IMF -- and its environment-dependent upper cutoff -- are known.
Investigating Protostellar Accretion (IPA) is a JWST Cycle~1 GO program that uses NIRSpec IFU and MIRI MRS to obtain 2.9--28~$\mu$m spectral cubes of young, deeply embedded protostars with luminosities of 0.2 to 10,000~L$_{\odot}$ and central masses of 0.15 to 12~M$_{\odot}$. In this Letter, we report the discovery of a highly collimated atomic jet from the Class~0 protostar IRAS~16253$-$2429, the lowest luminosity source ($L_\mathrm{bol}$ = 0.2 $L_\odot$) in the IPA program. The collimated jet is detected in multiple [Fe~II] lines, [Ne~II], [Ni~II], and H~I lines, but not in molecular emission. The atomic jet has a velocity of about 169~$\pm$~15~km\,s$^{-1}$, after correcting for inclination. The width of the jet increases with distance from the central protostar from 23 to~60 au, corresponding to an opening angle of 2.6~$\pm$~0.5\arcdeg. By comparing the measured flux ratios of various fine structure lines to those predicted by simple shock models, we derive a shock velocity of 54~km\,s$^{-1}$ and a preshock density of 2.0$\times10^{3}$~cm$^{-3}$ at the base of the jet. From these quantities, and assuming a cylindrical cross-section for the jet, we derive an upper limit for the mass loss rate from the protostar of 1.1$\times10^{-10}~M_{\odot}$~yr~$^{-1}$. The low mass loss rate is consistent with simultaneous measurements of low mass accretion rate ($2.4~\pm~0.8~\times~10^{-9}~M_{\odot}$~yr$^{-1}$) for IRAS~16253$-$2429 from JWST observations (Watson et al. in prep), indicating that the protostar is in a quiescent accretion phase. Our results demonstrate that very low-mass protostars can drive highly collimated, atomic jets, even during the quiescent phase.
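The quoted mass-loss upper limit can be reproduced to order of magnitude from the numbers in the abstract, treating the jet as a cylinder of preshock gas moving at the deprojected jet speed. The mean particle mass of 1.4 m_H is our assumption; the authors' detailed treatment may differ:

```python
import math

AU_CM = 1.49598e13        # astronomical unit in cm
M_SUN_G = 1.989e33        # solar mass in g
SEC_PER_YR = 3.156e7      # seconds per year
M_H = 1.6735575e-24       # hydrogen atom mass in g

def jet_mass_loss_rate(n_cm3, v_kms, radius_au, mu=1.4):
    """Mdot = (mu * m_H * n) * v * (pi * r^2), converted to Msun/yr,
    for a cylindrical jet of preshock density n and radius r."""
    rho = mu * M_H * n_cm3
    area = math.pi * (radius_au * AU_CM) ** 2
    return rho * (v_kms * 1e5) * area * SEC_PER_YR / M_SUN_G

# Abstract's numbers: n = 2.0e3 cm^-3, v = 169 km/s, base width 23 au
# (so radius ~11.5 au); this lands near the quoted ~1e-10 Msun/yr.
mdot = jet_mass_loss_rate(2.0e3, 169.0, 11.5)
```

The agreement with the quoted $1.1\times10^{-10}\,M_\odot\,{\rm yr}^{-1}$ is at the order-of-magnitude level; ionization fraction and filling-factor corrections would refine it.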
This document describes BinCodex, a common format for the output of binary population synthesis (BPS) codes agreed upon by the members of the LISA Synthetic UCB Catalogue Group. The goal of the format is to provide a common reference framework to describe the evolution of a single, isolated binary system or a population of isolated binaries.
The performance of the proposed MATHUSLA detector as an instrument for studying the physics of cosmic rays by measuring extensive air showers is presented. The MATHUSLA detector is designed to observe and study the decay of long-lived particles produced at the pp interaction point of the CMS detector at CERN during the HL-LHC data-taking period. The proposed MATHUSLA detector will be composed of many layers of long scintillating bars, which cannot register more than one hit per bar and therefore cannot correctly report hit coordinates when multiple hits occur. This study shows that adding a layer of RPC detectors with both analogue and digital readout significantly enhances the capabilities of MATHUSLA to measure the local densities and arrival times of charged particles at the front of air showers. We discuss open issues in cosmic-ray physics that the proposed MATHUSLA detector with an additional layer of RPC detectors could address, and conclude by comparing with other air-shower facilities that measure cosmic rays in the PeV energy range.
In this report, we present the results of our analysis of trace-position changes during NIRISS/SOSS observations. We examine the visit-to-visit impact of the GR700XD pupil wheel (PW) position alignment on the trace positions of spectral orders 1 and 2 using the data obtained to date. Our goal is to improve the wavelength solution by correlating the trace positions on the detector with the PW position angle. We find that there is a one-to-one correspondence between PW position and spectral trace rotation for both orders. This in turn allowed us to find an analytic model that predicts the trace position/shape as a function of PW position with sub-pixel accuracy of ~0.1 pixels. Such a function can be used to predict the trace position in low signal-to-noise-ratio cases, and/or as a template to track trace-position changes as a function of time in Time Series Observations (TSOs).
When utilizing the NIRISS/SOSS mode on JWST, the pupil wheel (tasked with orienting the GR700XD grism into the optical path) does not consistently settle into its commanded position, resulting in minor misalignments of a few fractions of a degree. These small offsets introduce noticeable visit-to-visit changes in the trace positions of the NIRISS SOSS spectral orders, which in turn can lead to variations in the wavelength solution. In this report, we present the visit-to-visit characterization of the NIRISS GR700XD wavelength calibration for spectral orders 1 and 2. Employing data from Calibration Program 1512 (PI: Espinoza), which intentionally and randomly sampled assorted pupil wheel positions during observations of the A-star BD+60-1753, as well as data from preceding commissioning and calibration activities, we demonstrate that the wavelength solution can fluctuate in a predictable fashion between visits by up to a few pixels. Using two independent polynomial regression models for spectral orders 1 and 2, respectively, with the measured x-pixel positions of known hydrogen absorption features in the A-star spectra and the pupil wheel positions as regressors, we can accurately predict the wavelength solution for a particular visit with an RMS error within a few tenths of a pixel. We incorporate these models into PASTASOSS, a Python package for predicting the GR700XD spectral traces, which now allows users to accurately predict spectral trace positions and their associated wavelengths for any NIRISS/SOSS observation.
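The regression strategy described in this abstract can be sketched generically. The sketch below is illustrative only: the calibration points are invented placeholders (not measurements from Program 1512), and a simple closed-form least-squares line stands in for the report's actual per-order polynomial models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical calibration points: pupil-wheel offset from the commanded
# position (degrees) vs. measured x-pixel shift of a known absorption
# feature. Values are illustrative placeholders, not Program 1512 data.
pw_offset = [-0.30, -0.15, 0.00, 0.10, 0.25]
pixel_shift = [-2.10, -1.05, 0.00, 0.70, 1.75]
slope, intercept = fit_line(pw_offset, pixel_shift)

def predicted_shift(offset_deg):
    """Predict the trace/wavelength-solution shift for a new visit."""
    return slope * offset_deg + intercept
```

Once such a mapping is calibrated, a new visit's measured pupil-wheel position alone suffices to predict the trace shift, which is what makes the approach useful for low signal-to-noise observations.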
Track reconstruction algorithms are critical for polarization measurements. In addition to traditional moment-based track reconstruction approaches, convolutional neural networks (CNNs) are a promising alternative. However, the hexagonal-grid track images produced by gas pixel detectors (GPDs) for better anisotropy do not match classical rectangle-based CNNs, and converting the track images from hexagonal to square grids results in a loss of information. We developed a new hexagonal CNN algorithm for track reconstruction and polarization estimation in X-ray polarimeters, which extracts emission angles and absorption points from photoelectron track images and predicts the uncertainty of the estimated emission angles. Simulated data from the PolarLight test were used to train and test the hexagonal CNN models. For individual energies, the hexagonal CNN algorithm produced 15-30% improvements in the modulation factor compared to the moment analysis method for 100% polarized data, and its performance was comparable to that of the rectangle-based CNN algorithm newly developed by the IXPE team, but at a much lower computational cost.
Complementary metal-oxide semiconductor (CMOS) sensors have been widely used as soft X-ray detectors in several fields owing to their recent developments and unique advantages. The parameters of CMOS detectors have been extensively studied and evaluated. However, the signal-to-noise ratio, a key parameter in certain fields, has not been sufficiently studied. In this study, we analysed the charge distribution of the CMOS detector GSENSE2020BSI and proposed a two-dimensional segmentation method to discriminate signals according to the charge distribution. The effect of the two-dimensional segmentation method on the GSENSE2020BSI detector was qualitatively evaluated, and the optimal feature parameters for the method were studied for G2020BSI. We found, however, that the two-dimensional segmentation method is insensitive to the choice of feature parameters.
Line observations of young stellar objects (YSOs) at (sub)millimeter wavelengths provide essential information on gas kinematics in star- and planet-forming environments. For Class 0 and I YSOs, identification of Keplerian rotation is of particular interest, because it reveals the presence of rotationally supported disks still embedded in infalling envelopes and enables us to dynamically measure the protostellar mass. We have developed a Python library, SLAM (Spectral Line Analysis/Modeling), with a primary focus on the analysis of emission-line data at (sub)millimeter wavelengths. Here, we present an overview of the pvanalysis tool from SLAM, which is designed to identify Keplerian rotation of a disk and measure the dynamical mass of the central object using a position-velocity (PV) diagram of emission-line data. The advantage of this tool is that it analyzes observational features of the given data and thus requires little computational time and few parameter assumptions, in contrast to detailed radiative transfer modeling. In this article, we introduce the basic concept and usage of this tool, present an application to observational data, and discuss remaining caveats.
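The dynamical-mass measurement described here rests on the standard Keplerian relation $v_{\rm los} = \sqrt{GM/r}\,\sin i$. The sketch below illustrates that relation only; it is not the SLAM/pvanalysis API, and the function name and example numbers are invented for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def dynamical_mass(v_los_kms, r_au, incl_deg=90.0):
    """Central mass (solar masses) implied by a Keplerian feature at
    radius r_au with line-of-sight velocity v_los_kms, corrected for
    disk inclination via v_los = sqrt(G M / r) * sin(i)."""
    v = (v_los_kms * 1e3) / math.sin(math.radians(incl_deg))
    return v ** 2 * (r_au * AU) / G / M_SUN

# A 2 km/s Keplerian velocity at 100 au in an edge-on disk implies a
# central mass of roughly half a solar mass.
m_star = dynamical_mass(2.0, 100.0)
```

In practice, pvanalysis extracts many such radius-velocity points from the PV diagram (edge and ridge methods) and fits them jointly, but each point constrains the central mass through this same relation.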
Low-background liquid xenon detectors are utilized in the investigation of rare events, including dark matter and neutrinoless double beta decay. For their calibration, gaseous $^{220}$Rn can be used. After being introduced into the xenon, its progeny isotope $^{212}$Pb induces homogeneously distributed, low-energy ($<30$ keV) electronic recoil interactions. We report on the characterization of such a source for use in the XENONnT experiment. It consists of four commercially available $^{228}$Th sources with an activity of 55 kBq. These sources provide a high $^{220}$Rn emanation rate of about 8 kBq. We find no indication of the release of the long-lived $^{228}$Th above 1.7 mBq. Although an unexpected $^{222}$Rn emanation rate of about 3.6 mBq is observed, the source is still in line with the requirements of the XENONnT experiment.
The Galclaim software is designed to identify associations between astrophysical transient sources and host galaxies by computing the probability of chance alignment. It is distributed as an open-source Python package. It has already been used to identify, confirm or reject host-galaxy candidates of GRBs and to validate or invalidate transient candidates in astrophysical observations. Such tools are also very useful for characterising archived transient candidates from large sky-survey telescopes.
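The chance-alignment probability mentioned here is commonly computed from the standard Poisson chance-coincidence formula, $P = 1 - \exp(-\pi r^2 \sigma)$, where $\sigma$ is the surface density of galaxies at least as bright as the candidate host. The sketch below shows that generic formula; it is not Galclaim's actual implementation or API, and the function name and numbers are illustrative.

```python
import math

def p_chance(radius_arcsec, density_per_arcsec2):
    """Probability that at least one unrelated galaxy of the given sky
    density lies within radius_arcsec of the transient, assuming
    galaxies are Poisson-distributed: P = 1 - exp(-pi * r^2 * sigma)."""
    expected = math.pi * radius_arcsec ** 2 * density_per_arcsec2
    return 1.0 - math.exp(-expected)

# A small offset in a sparse field gives a low chance-alignment
# probability, favouring a genuine association; in a dense field the
# same offset makes a chance alignment likely.
p_sparse = p_chance(2.0, 1e-4)   # ~0.13%
p_dense = p_chance(2.0, 1e-1)    # ~72%
```

A candidate host is typically retained only when this probability falls below some threshold, so the same galaxy can be a convincing host in a sparse field and an unconvincing one in a crowded field.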
The modular energy operator is formalized in curved spacetime through the timeless approach proposed by Page and Wootters. The investigation is motivated by the peculiar behavior of the near-horizon region of a black hole and its quantum effects, which restricts the study to the horizon's immediate vicinity. The focus lies on the perspective of a static observer positioned close to the horizon. This paper highlights how the behavior of the modular energy in this region differs from that in flat spacetime. Furthermore, it is observed that the geometry of the spacetime influences the non-local properties of the modular energy. Moreover, within the event horizon of the black hole, the modular energy exhibits a completely distinct behavior, rendering its modular character imperceptible in this specific region.
In this study, we explore the black bounce solution in Rastall gravity and its potential source field, which can be described as a black hole or wormhole solution depending on certain parameters. We focus on the Bardeen-type black bounce and the Simpson-Visser solution, and we aim to identify an appropriate source field for these solutions. Our findings suggest that in Rastall gravity a source for the black bounce solution with non-linear electrodynamics can be found. However, with a non-linear electromagnetic source alone, i.e., without an accompanying scalar field, it is impossible to identify an appropriate source for the black bounce solution. We also investigate the energy conditions outside the event horizon for two types of black bounce solutions: Simpson-Visser and Bardeen. We find that these solutions do not satisfy the null energy condition, but we also reveal that Rastall gravity has more flexibility for maintaining some of the energy conditions by selecting an appropriate value for the Rastall parameter $\gamma$.
We show how to capture both the non-unitary Page curve and replica wormhole-like contributions that restore unitarity in a toy quantum system with random dynamics. The motivation is to find the simplest dynamical model that captures this aspect of gravitational physics. In our model, we evolve with an ensemble of Hamiltonians with GUE statistics within microcanonical windows. The entropy of the averaged state gives the non-unitary curve, the averaged entropy gives the unitary curve, and the difference comes from matrix index contractions in the Haar averaging that connect the density matrices in a replica wormhole-like manner.
We revisit the charged rotating Ba\~nados-Teitelboim-Zanelli (BTZ) solution in the three-dimensional Einstein-Maxwell-$\Lambda$ system. After the erroneous announcement of its discovery at the end of the original BTZ paper in 1992, the solution was first obtained by Cl\'ement in a paper published in 1996 by coordinate transformations from the charged non-rotating BTZ solution. While Cl\'ement's form of the solution is valid only for ${\Lambda<0}$, we present a new form for a wider range of $\Lambda$ by uniform scaling transformations and a reparametrization. We also introduce new coordinates corresponding to the Doran coordinates in the Kerr spacetime, in which the metric and its inverse are regular at the Killing horizon and are described by elementary functions. Lastly, we show that (i) the algebraic Cotton type of the spacetime is type III on the Killing horizon and type I away from the horizon, and (ii) the energy-momentum tensor for the Maxwell field is of the Hawking-Ellis type I everywhere.
The detection of gravitational waves by the LIGO-Virgo-KAGRA collaboration has inaugurated a new era in gravitational physics, providing an opportunity to test general relativity and its modifications in the strong gravity regime. One such test involves the study of the ringdown phase of gravitational waves from binary black-hole coalescence, which can be decomposed into a superposition of quasinormal modes. In general relativity, the spectra of quasinormal modes depend on the mass, spin, and charge of the final black hole, but they can be influenced by additional properties of the black hole, as well as corrections to general relativity. In this work, we employ the modified Teukolsky formalism developed in a previous study to investigate perturbations of slowly rotating black holes in a modified theory known as dynamical Chern-Simons gravity. Specifically, we derive the master equations for the $\Psi_0$ and $\Psi_4$ Weyl scalar perturbations that characterize the radiative part of gravitational perturbations, as well as for the scalar field perturbations. We employ metric reconstruction techniques to obtain explicit expressions for all relevant quantities. Finally, by leveraging the properties of spin-weighted spheroidal harmonics to eliminate the angular dependence from the evolution equations, we derive two radial, second-order, ordinary differential equations for $\Psi_0$ and $\Psi_4$, respectively. These equations are coupled to another radial, second-order, ordinary differential equation for the scalar field perturbations. This work is the first attempt to derive a master equation for black holes in dynamical Chern-Simons gravity using curvature perturbations. The master equations can be numerically integrated to obtain the quasinormal mode spectrum of slowly rotating black holes in this theory, making progress in the study of ringdown in dynamical Chern-Simons gravity.
The two-point correlation function of a massive field $\langle \chi(\tau)\chi(0)\rangle$, measured along an observer's worldline in de Sitter (dS), decays exponentially as $\tau \to \infty$. Meanwhile, every dS observer is surrounded by a horizon and the holographic interpretation of the horizon entropy $S_{\rm dS}$ suggests that the correlation function should stop decaying, and start behaving erratically at late times. We find evidence for this expectation in Jackiw-Teitelboim gravity by finding a topologically nontrivial saddle, which is suppressed by $e^{-S_{\rm dS}}$, and which gives a constant contribution to $|\langle \chi(\tau)\chi(0)\rangle|^2$. This constant might have the interpretation of the late-time average of $|\langle \chi(\tau)\chi(0)\rangle|^2$ over all microscopic theories that have the same low-energy effective description.
We investigate the gravitational Aharonov-Bohm effect by placing a quantum system in free fall around a gravitating body, {\it e.g.} a satellite orbiting the Earth. Since the system is in free fall, by the equivalence principle the quantum system is locally in flat, gravity-free space-time - it is screened from the gravitational field. For a slightly elliptical orbit, the gravitational potential will change with time. This leads to the energy levels of the quantum system developing side bands, which is the signature of this version of the Aharonov-Bohm effect. This contrasts with the usual signature of the Aharonov-Bohm effect, namely a shift of interference fringes.
Kerr-Newman de Sitter (KNdS) spacetimes have a rich thermodynamic structure that involves multiple horizons, and so differs in key respects from asymptotically flat or AdS black holes. In this paper, we show that certain features of KNdS spacetimes can be reproduced by a constrained system of $N$ non-interacting spins in a magnetic field. Both the KNdS and spin systems have bounded energy and entropy, a maximum of the entropy in the interior of the energy range, and a symmetry that maps lower energy states to higher energy states with the same entropy. Consequently, both systems have a temperature that can be positive or negative, where the gravitational temperature is defined analogously to that of the spins. We find that the number of spins $N$ corresponds to $1/\Lambda$ for black holes with very small charge $q$ and rotation parameter $a$, and scales like $\sqrt{(a^2+q^2)/\Lambda}$ for larger values of $a$ and $q$. By studying constrained spin systems, we provide insight into the thermodynamics of KNdS spacetimes and its quantum mechanical description.
Combining previous results, the D3-anti-D3-brane pair potential $U(T,\phi)$ is presented here, where the inflaton $\phi$ measures the separation between the D3-brane and the anti-D3-brane, and the complex scalar mode $T$ becomes tachyonic when the branes collide and annihilate. Besides the distinct form of the inflationary potential, this hybrid inflationary model differs from a typical hybrid model in two important aspects: (1) $U(T,\phi)$ becomes complex when $T$ becomes tachyonic, and ${\rm Im}\,U(T,\phi)$ plays an important role in the dynamics towards the end of the inflationary epoch; (2) tunnelling can happen during the inflationary epoch, which is particularly relevant if there are multiple D3-anti-D3-brane pairs in different warped throats. Besides the production of cosmic superstrings, the model offers the possibility of a first-order phase transition that may generate density perturbations large enough for primordial black hole production. The stochastic gravitational wave background from these sources remains to be fully investigated.
In this work, the influence of the boundary term $B$ is analyzed in a string-like thick brane scenario in the context of $f(T, B)$ gravity. For that, three models of $f(T, B)$ are proposed, i.e., $f_1(T, B)=T+k{B}^{n}$, $f_2(T, B)=T+k(-T+B)^{n}$ and $f_3(T, B)=T+k_1T^2+k_2{B}^2$, where $n$, $k$ and $k_{1,2}$ are parameters that control the deviation from the usual teleparallelism. The first relevant result is the appearance of a sharply localized peak of the energy density at the core of the brane. Furthermore, the greater the influence of the boundary term, the more new maxima and minima appear in the energy density, indicating the emergence of structures capable of splitting the brane. The second relevant result comes from the analysis of the gravitational perturbations, where the effective potential takes the form of supersymmetric quantum mechanics, leading to well-localized massless modes.
Integrable structures arise in general relativity when the spacetime possesses a pair of commuting Killing vectors admitting 2-spaces orthogonal to the group orbits. The physical interpretation of such spacetimes depends on the norm of the Killing vectors. They include stationary axisymmetric spacetimes, Einstein-Rosen waves with two polarizations, Gowdy models, and colliding plane gravitational waves. We review the general formalism of linear systems with variable spectral parameter, solution generating techniques, and various classes of exact solutions. In the case of the Einstein-Rosen waves, we also discuss the Poisson algebra of charges and its quantization. This is an invited contribution to the 2nd edition of the Encyclopedia of Mathematical Physics.
The knowledge of what entered a black hole (BH) is completely lost as it evaporates. This contradicts the unitarity principle of quantum mechanics and is referred to as the information loss paradox. Understanding the end stages of BH evaporation is key to resolving this paradox. As a first step, we need exact models that can mimic 4-D BHs of General Relativity in the classical limit and offer a systematic way to include high-energy corrections. While there are various models in the literature, there is no systematic procedure by which one can study high-energy corrections. In this work, for the first time, we obtain the Callan-Giddings-Harvey-Strominger (CGHS) model -- a (1+1)-D model -- from the 4-D Horndeski action, the most general scalar-tensor theory that does not lead to Ostrogradsky ghosts. We then show that the 4-D Horndeski action can systematically provide a route to include the higher-order terms relevant at the end stages of black hole evaporation. We derive the leading-order Hawking flux while discussing some intriguing characteristics of the corrected CGHS models. We compare our results with other works and discuss the implications for primordial BHs.
Einstein's theory in the vacuum was recently shown to possess an $SO(2)$ duality invariance, which is broken by coupling to matter. Duality invariance can be restored by enlarging the phase space of the theory to allow for violations of the algebraic Bianchi identity. We show that in cases where the matter content can be understood as a component of the torsion tensor, duality can be restored, and we compute the corresponding duality current. We consider the case of NS-NS gravity as an example and find that the duality current is given by the divergence of the axion. In the linearized approximation of the low-energy heterotic string theory, these results imply that duality of the generalized Riemann curvature tensor implements Riemannian, axion-dilaton and electromagnetic dualities simultaneously.
We study the degrees of freedom of $R^2$ gravity in flat spacetime with two approaches. By rewriting the theory a la Stueckelberg, and implementing Lorentz-like gauges for the metric perturbations, we confirm that the pure theory propagates one scalar degree of freedom, while the full theory contains two tensor modes in addition. We then consider the degrees of freedom by directly examining the metric perturbations. We show that the degrees of freedom of the full theory match those obtained with the manifestly covariant approach. In contrast, we find that pure $R^2$ gravity has no degrees of freedom. We show that a similar discrepancy between the two approaches appears also in a theory dual to the three-form, and arises due to the Lorentz-like gauges, which lead to fictitious modes even after the residual gauge redundancy has been taken into account. At first sight, this implies a discontinuity between the full theory and the pure case. By studying the first-order corrections of the full $R^2$ gravity beyond the linear regime, we show that at high energies both scalar and tensor degrees of freedom become strongly coupled. This implies that the apparent discontinuity of pure and full $R^2$ gravity is just an artefact of the perturbation theory, and further supports the absence of degrees of freedom in the pure $R^2$ gravity.
The 5n-vector ensemble method is a multiple test for the targeted search of continuous gravitational waves from an ensemble of known pulsars. This method can improve the detection probability by combining the results from individually undetectable pulsars when a few signals are near the detection threshold. In this paper, we apply the 5n-vector ensemble method to the O3 data set from the LIGO and Virgo detectors, considering an ensemble of 201 known pulsars. We find no evidence for a signal from the ensemble and set a 95% credible upper limit on the mean ellipticity, assuming a common exponential distribution for the pulsars' ellipticities. Using two independent hierarchical Bayesian procedures, we find upper limits of $1.2 \times 10^{-9}$ and $2.5 \times 10^{-9}$ on the mean ellipticity for the 201 analyzed pulsars.
A complete translation of Paolo Gulmanelli's seminar notes "Su una teoria dello spin isotopico" (Casa Editrice, Milano, 1957) is presented. This is the only exposition that contains all the elements of Pauli's 1953 attempt at a non-Abelian Kaluza-Klein theory.
Recently, it was shown that two distant test masses, each prepared in a spatially superposed quantum state, become entangled through their mutual gravitational interaction. This entanglement, it was argued, is a signature of the quantum nature of gravity. We extend this treatment to a many-body system in a general setup and study the entanglement properties of the time-evolved state. We exactly compute the time-dependent I-concurrence for every bipartition and obtain the necessary and sufficient condition for the creation of genuine many-body entanglement. We further show that this entanglement is of generalised GHZ type when certain conditions are met. We also evaluate the amount of multipartite entanglement in the system using a set of generalised Meyer-Wallach measures.
We review recent calculations of quasinormal modes and asymptotic tails of the Bardeen spacetime, interpreted as a quantum-corrected Schwarzschild-like black hole. Massless electromagnetic and Dirac fields and a massive scalar field are considered. The first few overtones are much more sensitive than the fundamental mode to changes in the quantum-correction parameter, because the correction deforms the black hole geometry near the event horizon. While the asymptotic tails of the massless fields are identical to those of the Schwarzschild case, the tails of the massive field differ from the Schwarzschild limit at both intermediate and asymptotic times.
In this work, we construct a fractional matter sector for general relativity. In particular, we propose a suitable fractional anisotropy function relating the tangential and radial pressures of a spherically symmetric fluid, based on the Gr\"unwald-Letnikov fractional derivative. The system is closed by implementing the polytropic equation of state for the radial pressure. We solve the system of integro-differential equations by Euler's method and explore the behavior of the physical quantities, namely, the normalized energy density, the normalized mass function, and the compactness.
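The Grünwald-Letnikov derivative invoked in this abstract has a standard numerical definition as a weighted backward-difference sum, $D^{\alpha}f(x) \approx h^{-\alpha}\sum_{k}(-1)^{k}\binom{\alpha}{k}f(x-kh)$. The sketch below implements that textbook formula only; it is not the paper's actual solver, and the function name and parameters are illustrative.

```python
def gl_fractional_derivative(f, x, alpha, h=1e-3, n_terms=2000):
    """Grunwald-Letnikov fractional derivative of order alpha at x via the
    truncated sum  D^a f(x) ~ h^(-a) * sum_k (-1)^k C(a, k) f(x - k*h).
    The weights are built with the recursion w_{k+1} = w_k*(k - a)/(k + 1)."""
    total = 0.0
    coeff = 1.0  # w_0 = 1; thereafter w_k = (-1)^k * C(alpha, k)
    for k in range(n_terms):
        total += coeff * f(x - k * h)
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

# Sanity check: for alpha = 1 the weights reduce to {1, -1, 0, 0, ...},
# so the sum collapses to the ordinary backward difference, and for
# f(x) = x the derivative is 1.
d1 = gl_fractional_derivative(lambda t: t, 1.0, 1.0)

# A genuinely fractional order interpolates between f and f'.
d_half = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
```

For integer orders the binomial weights terminate and the formula reduces to the classical finite difference, which is why the GL definition is a natural bridge between ordinary and fractional calculus in numerical work such as the Euler-method integration described above.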
In this work, we analyze the behavior of light and matter as they pass near and through a traversable wormhole. In particular, we study the trajectories of massive and massless particles and the dust accretion around a traversable wormhole previously reported in Eur. Phys. J. C \textbf{82} (2022) no.7, 605. For massive particles, we integrate the trajectory equation for ingoing and outgoing geodesics and classify the orbits of particles scattered by the wormhole according to their asymptotic behavior far from the throat. We represent all the time-like trajectories on an embedding surface, which explicitly shows the trajectories of i) particles that are deflected by the throat and remain in the same universe, ii) particles that traverse the wormhole to another universe, and iii) particles that get trapped in the wormhole in unstable circular orbits. For massless particles, we numerically integrate the trajectory equation to perform ray-tracing around the wormhole, distinguishing the particles that traverse the wormhole from those that are only deflected by the throat. For the study of accretion, we consider the steady and spherically symmetric accretion of dust. Our results show that the wormhole parameters can significantly affect the behavior of light and matter near the wormhole. Some comparisons with the behavior of matter around black holes are made.
A new form of quasiclassical space-time dynamics for constrained systems reveals how quantum effects can be derived systematically from canonical quantization of gravitational systems. These quasiclassical methods lead to additional fields, representing quantum fluctuations and higher moments, that are coupled to the classical metric components. The new fields describe non-adiabatic quantum dynamics and can be interpreted as implicit formulations of non-local quantum corrections in a field theory. This field-theory aspect is studied here for the first time, applied to a gravitational system for which a tractable model is constructed. Static solutions for the relevant fields can be obtained in almost closed form. They reveal new properties of potential near-horizon and asymptotic effects in canonical quantum gravity and demonstrate the overall consistency of the formalism.
We show that the Einstein equations in the vacuum are invariant under an $SO(2)$ duality symmetry which rotates the curvature 2-form into its tangent space Hodge dual. Akin to electric-magnetic duality in gauge theory, the duality operation maps classical solutions into each other. As an example, we demonstrate that the Kerr solution is non-linearly mapped by duality into Kerr-Taub-NUT.
By applying the symmetric and trace-free formalism in terms of the irreducible Cartesian tensors, the metric for the external gravitational field of a spatially compact stationary source is provided in $F(X,Y,Z)$ gravity, a generic fourth-order theory of gravity, where $X:=R$ is the Ricci scalar, $Y:=R_{\mu\nu}R^{\mu\nu}$ is the Ricci square, and $Z:=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is the Riemann square. A new type of gauge condition is proposed so that the linearized gravitational field equations of $F(X,Y,Z)$ gravity are greatly simplified, and the stationary metric in the region exterior to the source is then derived. In applying the result, integrations are performed only over the domain occupied by the source. The multipole expansion of the metric potential in $F(X,Y,Z)$ gravity for a spatially compact stationary source is also presented. In the expansion, the corrections of $F(X,Y,Z)$ gravity to General Relativity are Yukawa-like, dependent on two characteristic lengths. Two additional sets of mass-type source multipole moments appear in the corrections, and the salient feature characterizing them is that the integrations in their expressions are always modulated by a common radial factor related to the source distribution.
In the work (arXiv:1109.3846 [gr-qc]), to obtain a simple and economic formulation of field equations for generalised theories of gravity described by the Lagrangian $\sqrt{-g}L\big(g^{\alpha\beta},R_{\mu\nu\rho\sigma}\big)$, the key equality $\big(\partial L/\partial g^{\mu\nu}\big)_{R_{\alpha\beta\kappa\omega}} =2P_{\mu}^{~\lambda\rho\sigma}R_{\nu\lambda\rho\sigma}$ was derived. In this note, it is demonstrated that such an equality can be derived directly from an off-shell Noether current associated with an arbitrary vector field. As byproducts, a generalized Bianchi identity related to the divergence of the expression for the field equations, together with the Noether potential, is obtained. On this basis, we further propose a systematic procedure to derive the equations of motion from the Noether current, and then extend this procedure to more general higher-order gravities whose Lagrangian encompasses additional terms involving covariant derivatives of the Riemann tensor. To our knowledge, both the detailed expressions for the field equations and the Noether potential associated with such theories are given here at a general level for the first time. All the results reveal that using the Noether current to determine field equations establishes a straightforward connection between the symmetry of the Lagrangian and the equations of motion, and this approach even avoids calculating the derivative of the Lagrangian density with respect to the metric.
In the realm of astrophysics, black holes exist within nonvacuum cosmological backgrounds, making it crucial to investigate how these backgrounds influence the properties of black holes. In this work, we first introduce a novel static spherically-symmetric exact solution of Einstein field equations representing a surrounded hairy black hole. This solution represents a generalization of the hairy Schwarzschild solution recently derived using the extended gravitational decoupling method. Then, we discuss how the new induced modification terms attributed to the primary hairs and various background fields affect the geodesic motion in comparison to the conventional Schwarzschild case. Although these modifications may appear insignificant in most cases, we identify specific conditions where they can be comparable to the Schwarzschild case for some particular background fields.
Electromagnetic and gravitational perturbations on Kerr spacetime can be reconstructed from solutions to the Teukolsky equations. We study the canonical quantization of solutions to these equations for any integer spin. Our quantization scheme involves the analysis of the Hertz potential and one of the Newman-Penrose scalars, which must be related via the Teukolsky-Starobinsky identities. We show that the canonical commutation relations between the fields can be implemented if and only if the Teukolsky-Starobinsky constants are positive, which is the case both for gravitational perturbations and Maxwell fields. We also obtain the Hadamard parametrix of the Teukolsky equation, which is the basic ingredient for a local and covariant renormalization scheme for non-linear observables. We also discuss the relation of the canonical energy of Teukolsky fields to that of gravitational perturbations.
This work examines some aspects related to the existence of negative mass. The requirement for the partition function to converge leads to two distinct approaches. Initially, convergence is achieved by assuming a negative absolute temperature, which results in an imaginary partition function and complex entropy. Subsequently, convergence is maintained by keeping the absolute temperature positive while introducing an imaginary velocity. This modification leads to a positive partition function and real entropy. It seems that the use of an imaginary velocity may yield more plausible physical results than the use of a negative temperature, at least for the partition function and entropy.
If we imagine rewinding the universe to early times, the scale factor shrinks, and the existence of a finite spatial volume may play a role in quantum tunnelling effects in a closed universe. It has recently been shown that such finite-volume effects dynamically generate an effective equation of state that could support a cosmological bounce. In this work, we extend the analysis to the case in which a (homogeneous) anisotropy is present, and identify a criterion for a successful bounce in terms of the size of the closed universe and the properties of the quantum field.
In recent years, there has been significant interest in the field of extended black hole thermodynamics, where the cosmological constant and/or other coupling parameters are treated as thermodynamic variables. Drawing inspiration from the Iyer-Wald formalism, which reveals the intrinsic and universal structure of conventional black hole thermodynamics, we illustrate that a proper extension of this formalism also unveils the underlying theoretical structure of extended black hole thermodynamics. As a remarkable consequence, for any gravitational theory described by a diffeomorphism invariant action, it is always possible to construct a consistent extended thermodynamics using this extended formalism.
Very recently, Harada proposed a gravitational theory which is of third order in the derivatives of the metric tensor, with the property that any solution of Einstein's field equations (EFEs), possibly with a cosmological constant, is necessarily a solution of the new theory. He then applied his theory to derive a second-order ODE for the evolution of the scale factor of the FLRW metric. Remarkably, he showed that, even in a matter-dominated universe with zero cosmological constant, there is a late-time transition from decelerating to accelerating expansion. Harada also derived a generalisation of the Schwarzschild solution. However, as his starting point he assumed an unnecessarily restricted form for a static spherically symmetric metric. In this note the most general spherically symmetric static vacuum solution of the theory is derived. Mantica and Molinari have shown that Harada's theory may be recast into the form of the EFEs with an additional source term in the form of a second-order conformal Killing tensor (CKT). Accordingly, they have dubbed the theory conformal Killing gravity. Then, using a result in a previous paper of theirs on CKTs in generalised Robertson-Walker spacetimes, they rederived Harada's generalised evolution equation for the scale factor of the FLRW metric. However, Mantica and Molinari appear to have overlooked the fact that all solutions of the new theory (except those satisfying the EFEs) admit a non-trivial second-order Killing tensor. Such Killing tensors are invaluable when considering the geodesics of a metric, as they lead to a second quadratic invariant of the motion in addition to that derived from the metric.
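For context on the final remark (standard differential geometry, not specific to conformal Killing gravity): a symmetric tensor $K_{\mu\nu}$ satisfying the Killing tensor equation yields a quadratic constant of geodesic motion,

```latex
\nabla_{(\rho} K_{\mu\nu)} = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}}{\mathrm{d}\tau}\bigl(K_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu}\bigr)
= \dot{x}^{\rho}\dot{x}^{\mu}\dot{x}^{\nu}\,\nabla_{\rho}K_{\mu\nu} = 0 ,
```

valid along any affinely parametrized geodesic $\dot{x}^{\rho}\nabla_{\rho}\dot{x}^{\mu}=0$; this conserved quantity supplements the invariant $g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}$ derived from the metric.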
In the realm of lower-dimensional accelerating spacetimes, it is well-established that the presence of domain walls, which are co-dimension one topological defects, is a necessary condition for their construction. We expand the geometric framework by adding a conformally coupled scalar field. This endeavor leads to the identification of several new families of three-dimensional accelerating spacetimes with asymptotically locally anti-de Sitter (AdS) behavior. Notably, one of these solutions showcases a hairy generalization of the accelerating BTZ black hole. This solution is constructed at both slow and rapid phases of acceleration, and its connection with established vacuum spacetime models is explicitly elucidated. The inclusion of the scalar field imparts a non-constant Ricci curvature to the domain wall, thereby rendering these configurations particularly suitable for the construction of two-dimensional quantum black holes. To establish a well-posed variational principle in the presence of the domain wall, two essential steps are undertaken. First, we extend the conventional renormalized AdS$_3$ action to accommodate the presence of the scalar field. Second, we incorporate the Gibbons--Hawking--York term for internal boundaries together with the domain wall tension. We engage in holographic computations, thereby determining the explicit form of the holographic stress tensor. In this context, the stress tensor can be expressed as that of a perfect fluid situated on a curved background. Additionally, it paves the way to ascertaining the spacetime mass. Finally, we close by demonstrating the existence of three-dimensional accelerating spacetimes with asymptotically locally flat and asymptotically locally de Sitter geometries, particularly those embodying black holes.
We construct a one-dimensional dual theory that effectively describes the sector of the (2+1)D flat gravity phase space near a Flat Space Cosmology (FSC) saddle labeled by definite mass and angular momentum. This Schwarzian type action describes the dynamics of the (Pseudo-) Goldstone Bosons of BMS$_3$ algebra on a circle as the symmetry is spontaneously and anomalously broken. This 1D theory, living on the celestial circle, provides an explicit construction of a celestial dual in (2+1)D. We use it to calculate the semiclassical entropy of Flat Space Cosmologies and find perfect agreement with existing literature.
The memory effect in electrodynamics, discovered in 1981 by Staruszkiewicz and analysed further in later works, consists of an adiabatic shift of the position of a test particle. The 'velocity kick' memory effect, supposedly discovered recently, contradicts these findings. We show that the 'velocity kick' memory is an artefact resulting from an unjustified interchange of limits. This example is a warning against drawing uncritical conclusions for spacetime fields from their asymptotic behavior.
In $F(R,R_{\mu\nu}R^{\mu\nu},R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})$ gravity, a general class of fourth-order theories of gravity, the multipole expansion of the metric outside a spatially compact source up to $1/c^3$ order is provided, and the closed-form expressions for the source multipole moments are all presented explicitly. Since the integrals involved in the multipole moments are performed only over the actual source distributions, our result yields a ready-to-use form of the metric. Three separate parts are contained in the expansion. As in General Relativity, the massless tensor part, showing the Coulomb-like dependence, is characterized by the mass and spin multipole moments. By contrast, the massive scalar and tensor parts bear a Yukawa-like dependence on the two massive parameters, respectively, and predict the appearance of six additional sets of source multipole moments.
We define a new strategy to scan jet substructure in heavy-ion collisions. The scope is multifold: (i) test the dominance of vacuum jet dynamics at early times, (ii) capture the transition from coherent to incoherent jet energy loss, and (iii) study elastic scatterings in the medium, which are either hard and perturbative or soft and responsible for jet thermalisation. To achieve that, we analyse the angular distribution of the hardest splitting, $\theta_{\rm hard}$, above a transverse momentum scale, $k_t^{\rm min}$, in high-$p_t$ jets. Sufficiently high values of $k_t^{\rm min}$ target the regime in which the observable is uniquely determined by vacuum-like splittings and energy loss, leaving the jet substructure unmodified compared to proton-proton collisions. Decreasing $k_t^{\rm min}$ enhances the sensitivity to the relation between energy loss and the intra-jet structure and, in particular, to signatures of colour decoherence at small angles. At wider angles it also becomes sensitive to hard elastic scatterings with the medium and, therefore, to the perturbative regime of medium response. Choosing $k_t^{\rm min}\approx 0$ leads to order-one effects of non-perturbative origin, such as hadronisation and, potentially, soft scatterings responsible for jet thermalisation. We perform a comprehensive analysis of this observable with three state-of-the-art jet-quenching Monte Carlo event generators. Our study paves the way for defining jet observables in heavy-ion collisions that are dominated by perturbative QCD and thus calculable from first principles.
A new mechanism for chiral symmetry restoration at extremely high magnetic fields is proposed in the context of the Magnetic Catalysis scenario in Weyl Semimetals. Contrary to previous proposals, here we show that, at very large magnetic fields, the transverse velocity of the axion field, the phase mode of the chiral condensate $\langle\bar{\psi}\psi\rangle$, becomes effectively one-dimensional and its fluctuations destroy a possible nonzero value of this fermionic condensate. We also show that, although the $U(1)$ chiral symmetry is not broken at extremely large magnetic fields, the spectrum of the system consists of a well-defined gapless bosonic excitation, connected to the axion mode, and a correlated insulating fermionic liquid that is neutral under $U(1)$ chiral transformations. We also discuss some consequences of this theory that can be contrasted with experiments.
We study the top-Higgs coupling with a CP violating phase $\xi$ at a future multi-TeV muon collider. We focus on processes that are directly sensitive to the top quark Yukawa coupling: $t\bar{t}h$, $tbh\mu\nu$, and $t\bar{t}h\nu\bar{\nu}$ with $h\rightarrow b\bar{b}$ and semileptonic top decays. At different energies, different processes dominate the cross section, providing complementary information. At and above an energy of $\mathcal{O}(10)$ TeV, vector boson fusion processes dominate. As we show, in the Standard Model there is destructive interference in the vector boson fusion processes $t\bar{t}h\nu\bar{\nu}$ and $tbh\mu\nu$ between the top quark Yukawa and Higgs-gauge boson couplings. A CP violating phase changes this interference, and the cross section measurement is very sensitive to the size of the CP violating angle. Although we find that the cross sections are measured to $\mathcal{O}(50\%)$ statistical uncertainty at $1\sigma$, a 10 and 30 TeV muon collider can bound the CP violating angle $|\xi|\lesssim9.0^\circ$ and $|\xi|\lesssim5.4^\circ$, respectively. However, cross section measurements are insensitive to the sign of the CP violating angle. To determine that the coupling is truly CP violating, observables sensitive to CP violation must be measured. We find in the $t\bar{t}h$ process the azimuthal angle between the $t+\bar{t}$ plane and the initial state muon+Higgs plane shows good discrimination for $\xi=\pm0.1\pi$. For the $tbh\mu\nu$ and $t\bar{t}h\nu\bar{\nu}$ processes, the operator proportional to $\left(\vec{p}_\mu\times\vec{p}_h\right)\cdot \vec{p}_t$ is sensitive to the sign of CP phase $\xi$. From these observables, we construct asymmetry parameters that show good distinction between different values and signs of the CP violating angle.
Infrared and collinear (IRC) safety has long been used as a proxy for robustness when developing new jet substructure observables. This guiding philosophy has been carried into the deep learning era, where IRC-safe neural networks have been used for many jet studies. For graph-based neural networks, the most straightforward way to achieve IRC safety is to weight particle inputs by their energies. However, energy-weighting by itself does not guarantee that perturbative calculations of machine-learned observables will enjoy small non-perturbative corrections. In this paper, we demonstrate the sensitivity of IRC-safe networks to non-perturbative effects by training an energy flow network (EFN) to maximize its sensitivity to hadronization. We then show how to construct Lipschitz Energy Flow Networks (L-EFNs), which are both IRC safe and relatively insensitive to non-perturbative corrections. We demonstrate the performance of L-EFNs on generated samples of quark and gluon jets, and showcase fascinating differences between the learned latent representations of EFNs and L-EFNs.
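The energy-weighting idea behind IRC safety can be illustrated classically (a minimal sketch, not the paper's EFN, and with no neural network involved): an observable of the form $\sum_i z_i\,\Phi(\theta_i)$, built from energy fractions $z_i$, is unchanged by soft emissions and by exactly collinear splittings.

```python
# Illustrative sketch of an energy-weighted (IRC-safe) jet observable:
# O = sum_i z_i * phi(theta_i), with z_i = E_i / E_total.
def observable(particles, phi=lambda theta: theta ** 2):
    """particles: list of (energy, angle) pairs."""
    total = sum(e for e, _ in particles)
    return sum((e / total) * phi(theta) for e, theta in particles)

jet = [(50.0, 0.1), (30.0, 0.25), (20.0, 0.4)]
base = observable(jet)

# Soft safety: a zero-energy emission contributes nothing.
soft = jet + [(0.0, 0.3)]

# Collinear safety: splitting the 50 GeV particle into two particles
# at the same angle, sharing its energy, leaves O unchanged.
collinear = [(25.0, 0.1), (25.0, 0.1), (30.0, 0.25), (20.0, 0.4)]
```

The Lipschitz construction in the paper constrains how fast the learned analogue of `phi` can vary, which is what tames the non-perturbative sensitivity.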
By tagging one or two intact protons in the forward direction, it is possible to select and measure exclusive photon-fusion processes at the LHC. The same processes can also be measured in heavy ion collisions, where they are often denoted as ultraperipheral collision (UPC) processes. Such measurements open up the possibility of probing certain dimension-8 operators and their positivity bounds at the LHC. As a demonstration, we perform a phenomenological study of the $\gamma\gamma\to \ell^+\ell^-$ processes, and find that the measurements of this process at the HL-LHC provide reaches on a set of dimension-8 operator coefficients that are comparable to the ones at future lepton colliders. We also point out that the $\gamma q\to \gamma q$ process could potentially have better reaches on similar types of operators due to its larger cross section, but a more detailed experimental study is needed to estimate the signal and background rates of this process. The validity of effective field theory (EFT) and the robustness of the positivity interpretation are also discussed.
Heavy neutral leptons (HNLs) are often among the hypothetical ingredients behind nonzero neutrino masses. If sufficiently light, they can be produced and detected in fixed-target-like experiments. We show that if the HNLs belong to a richer -- but rather generic -- dark sector, their production mechanism can deviate dramatically from expectations associated with the standard-model weak interactions. In more detail, we postulate that the dark sector contains an axion-like particle (ALP) that naturally decays into HNLs. Since ALPs mix with the pseudoscalar hadrons, the HNL flux might be predominantly associated with the production of neutral mesons (e.g., $\pi^0$, $\eta$) as opposed to charged hadrons (e.g., $\pi^\pm$, $K^\pm$). In this case, the physics responsible for HNL production and for HNL decay are not directly related, and experiments like DUNE might be sensitive to HNLs that are too weakly coupled to the standard model to be produced via weak interactions, as is generically the case for HNLs that play a direct role in the type-I seesaw mechanism.
In this paper, we scrutinize the radiatively generated QCD $\theta$ parameter at the two-loop level, based on both full analytical loop functions obtained with the Fock-Schwinger gauge method and the effective field theory approach, using simplified models. We observe that the radiatively generated $\theta$ parameters at the low energy scale precisely match between the two approaches. This validates perturbative loop calculations of the QCD $\theta$ parameter with the Fock-Schwinger gauge method. Furthermore, it is also shown that the ordinary Fujikawa method for the radiative $\theta$ parameter, using $\bar\theta = - \arg \det M_q^{\rm loop}$, does not cover all contributions in the simplified models. However, we also find that when there is a scale hierarchy in the $CP$-violating sector, the Fujikawa method is numerically sufficient. As an application, we calculate the radiative $\theta$ parameter at the two-loop level in a slightly extended Nelson-Barr model, in which spontaneous $CP$ violation occurs to solve the strong $CP$ problem. We find that a part of the radiative $\theta$ parameter cannot be described by the Fujikawa method.
NOvA is a long-baseline neutrino oscillation experiment that measures oscillations in charged-current $\nu_{\mu} \rightarrow \nu_{\mu}$ (disappearance) and $\nu_{\mu} \rightarrow \nu_{e}$ (appearance) channels, and their antineutrino counterparts, using neutrinos of energies around 2 GeV over a distance of 810 km. In this work we reanalyze the dataset first examined in our previous paper [Phys. Rev. D 106, 032004 (2022)] using an alternative statistical approach based on Bayesian Markov Chain Monte Carlo. We measure oscillation parameters consistent with the previous results. We also extend our inferences to include the first NOvA measurements of the reactor mixing angle $\theta_{13}$ and the Jarlskog invariant. We use these results to quantify the strength of our inferences about CP violation, as well as to examine the effects of constraints from short-baseline measurements of $\theta_{13}$ using antineutrinos from nuclear reactors when making NOvA measurements of $\theta_{23}$. Our long-baseline measurement of $\theta_{13}$ is also shown to be consistent with the reactor measurements, supporting the general applicability and robustness of the PMNS framework for neutrino oscillations.
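The Bayesian MCMC machinery itself is standard; purely as an illustration (a toy one-parameter Gaussian posterior, not the NOvA oscillation likelihood), a random-walk Metropolis sampler can be sketched as:

```python
import math
import random

def metropolis(logpost, x0, n_steps, step=0.1, seed=1):
    # Random-walk Metropolis: propose x' ~ N(x, step) and accept
    # with probability min(1, post(x') / post(x)).
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 0.55 and width 0.05, a purely
# hypothetical stand-in for a single oscillation parameter.
logpost = lambda x: -0.5 * ((x - 0.55) / 0.05) ** 2
chain = metropolis(logpost, x0=0.5, n_steps=20000)
burned = chain[2000:]  # discard burn-in
mean = sum(burned) / len(burned)
```

Real analyses marginalize many-dimensional posteriors and report credible intervals rather than a single mean, but the accept/reject kernel is the same idea.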
We compute the relaxation times for massive quarks and anti-quarks to align their spins with the angular velocity in a rigidly rotating medium at finite temperature and baryon density. The rotation effects are implemented using a fermion propagator immersed in a cylindrical rotating environment. The relaxation time is computed as the inverse of the interaction rate to produce an asymmetry between the quark (anti-quark) spin components along and opposite to the angular velocity. For conditions resembling heavy-ion collisions, the relaxation times for quarks are smaller than for anti-quarks. For semi-central collisions the relaxation time is within the possible life-time of the QGP for all collision energies. However, for anti-quarks this happens only for collision energies $\sqrt{s_{NN}}\gtrsim 50$ GeV. The results are quantified in terms of the intrinsic quark and anti-quark polarizations, namely, the probability to build the spin asymmetry as a function of time. Our results show that these intrinsic polarizations tend to 1 with time at different rates given by the relaxation times with quarks reaching a sizable asymmetry at a faster pace. These are key results to further elucidate the mechanisms of hyperon polarization in relativistic heavy-ion collisions.
It has been found that the isoscalar-scalar and isovector-scalar mesons play significant roles in nuclear matter physics. However, the underlying structures of these resonances are not yet well understood. We construct a three-flavor baryonic extended linear sigma model including both the two- and four-quark constituents of scalar mesons with respect to the axial transformation and study the nuclear matter properties with the relativistic mean field method. The nuclear matter properties at saturation density and the hadron spectra are well reproduced simultaneously with this model, in which only the two-quark component of the scalar mesons couples to the baryon fields and dominates the nuclear force. A plateau-like structure is found in the symmetry energy of nuclear matter due to the multi-meson couplings, and it is crucial for understanding the neutron skin thickness of $^{208}\rm Pb$ and the tidal deformability of the neutron star from GW170817. At high density regions, the vector mesons are found to be rather crucial, especially the ones coupling with four-quark configurations. This model can be easily extended to study neutron stars, even with hyperons in their interior.
We present results of simulations of light-nuclei production in Au+Au collisions at a collision energy of $\sqrt{s_{NN}}=$ 3 GeV within the updated Three-fluid Hydrodynamics-based Event Simulator Extended by UrQMD (Ultra-relativistic Quantum Molecular Dynamics) final State interactions (THESEUS). The results are compared with recent STAR data. The light-nuclei production is treated within the thermodynamical approach on an equal basis with hadrons. The only additional parameter related to the light nuclei is the energy density of the late freeze-out that imitates the afterburner stage of the collision, because the light nuclei do not participate in the UrQMD evolution. It is found that the late freeze-out is preferable for deuterons, tritons, and $^3$He. Remarkably, the $^4$He observables are better reproduced with the standard freeze-out. This suggests that the $^4$He nuclei better survive the afterburner stage because they are more spatially compact and tightly bound objects. This is an argument in favor of a dynamical treatment of light nuclei. The simulations indicate that the collision dynamics is determined by the hadronic phase. The calculated results reveal a good, though not perfect, reproduction of the data on bulk observables and directed flow. The elliptic flow turns out to be more intricate.
We present the one-loop correction to the $W$ boson mass in U(1)$_z$ type extensions of the standard model. We compare it to an approximation, often used in high energy physics tools. We point out that if the $Z'$ boson -- predicted in U(1)$_z$ type extensions -- is much heavier than the $Z$ boson, then the use of the complete set of one-loop corrections is necessary.
Polarization and spin correlations have been studied in detail for top quarks at the LHC, but have been explored very little for the other flavors of quarks. In this paper we consider the processes $pp\to q\bar{q}$ with $q = b$, $c$ or $s$. Utilizing the partial preservation of the quark's spin information in baryons in the jet produced by the quark, we examine possible analysis strategies for ATLAS and CMS to measure the quark polarization and spin correlations. We find polarization measurements for the $b$ and $c$ quarks to be feasible, even with the currently available datasets. Spin correlation measurements for $b\bar b$ are possible using the CMS Run 2 parked data, while such measurements for $c\bar c$ will become possible with higher integrated luminosity. For the $s$ quark, we find the measurements to be challenging with the standard triggers. We also provide leading-order QCD predictions for the polarization and spin correlations expected in the $b\bar b$ and $c\bar c$ samples with the cuts envisioned for the above analyses. Apart from establishing experimentally the existence of spin correlations in $b\bar b$ and $c\bar c$ systems produced in $pp$ collisions, the proposed measurements can provide new information on the polarization transfer from quarks to baryons and might even be sensitive to physics beyond the Standard Model.
A mechanism of heavy quark dominance in the orbital excitation is proposed in this paper and shown to be reasonable for singly and doubly heavy baryons. In the relativistic quark model, an analysis of the Hamiltonian reveals the mechanism: the excitation mode with lower energy levels is always associated with the heavy quark(s), and the splitting of the energy levels is suppressed by the heavy quark(s). Thus, the heavy quarks dominate the orbital excitation of singly and doubly heavy baryons. Furthermore, a physical understanding of this mechanism is given in a semi-classical way. Accordingly, the predicted mass spectra of singly and doubly heavy baryons confirm the rationality of this mechanism. In addition, an interesting consequence of this mechanism is that a heavy-light meson is more likely to be produced in the strong decay of high-orbital excited states, which is supported by experiments. This mechanism is rooted in the breakdown of mass symmetry. Therefore, it may also be valid for other multi-quark systems, such as the tetraquarks Qqqq and QQqq, or the pentaquarks Qqqqq and QQqqq.
The Muon g-2 experiment at Fermilab published its first result, based on the Run-1 dataset, in 2021, showing good agreement with the previous experimental result from Brookhaven National Laboratory at comparable precision (0.46 ppm). In August 2023 we released our new result from the Run-2 and Run-3 datasets, which measures $a_\mu$ to 0.21 ppm, a more than two-fold improvement in precision with respect to Run-1, and which reaches a precision of 0.20 ppm when combined with the Run-1 result. We discuss the improvements of the Run-2/3 analysis with respect to Run-1, the current status of the theory prediction, and future prospects.
In this work we investigate the inclusive photoproduction of the $C$-odd $S$-wave fully-charmed tetraquark at electron-ion colliders within the nonrelativistic QCD (NRQCD) factorization framework, at the lowest order in velocity and $\alpha_s$. The value of the NRQCD long-distance matrix element is estimated from two phenomenological potential models. Our studies reveal that the photoproduction of $1^{+-}$ fully-charmed tetraquark may be difficult to observe at HERA and EicC, nevertheless its observation prospect at EIC appears to be bright.
During the last 15 years the Radio MonteCarLow Working Group has been providing valuable support to the development of radiative corrections and Monte Carlo event generators for low energy $e^+e^-$ data and $\tau$-lepton decays. While the working group has been operating for more than 15 years without a formal basis for funding, parts of our program have recently been included as a Joint Research Initiative in the group application of the European hadron physics community, STRONG2020, to the European Union, with a more specific goal of creating an annotated database for low-energy hadronic cross sections in $e^+e^-$ collisions. In parallel, the theory community is continuing its effort towards the realization of improved Monte Carlo generators for low energy $e^+e^-$ annihilation into hadrons. Full NNLO corrections in the leptonic sector are to be combined with an improved treatment of radiative corrections involving pions. This is of relevance for the precise determination of the leading hadronic contribution to the muon g-2. We report on these initiatives.
The anomalies observed in the $W$ mass measurements at the CDF-II experiment and the excesses seen around 95~GeV at the Large Hadron Collider (LHC) motivate this work, in which we investigate and constrain the parameter space of the Scale Invariant Scotogenic Model with a Majorana dark matter candidate. The scanned parameters are chosen to be consistent with the dark matter relic density and with the signal strength rates of the observed $\sim95$~GeV excesses in the $\gamma\gamma$, $b\bar b$ and $\tau^+\tau^-$ final states. Furthermore, the model's viable parameters can be probed at both the LHC and future $e^+e^-$ colliders through di-Higgs production.
UV-completions of quantum field theories (QFTs) based on string-inspired nonlocality have been proposed to improve the high-energy behavior of local QFT, with the hope of including gravity. One problematic issue is how to realize spontaneous symmetry breaking without introducing an infinite tower of ghosts in the perturbative spectrum. In this letter, a weakly nonlocal extension of the Standard Model (SM) is proposed: the Fuzzy Standard Model (FSM). It is a smooth deformation of the SM based on covariant star-products of fields. This new formalism realizes electroweak symmetry breaking without ghosts at tree level, and it does not introduce any new elementary particles. We argue that the FSM has several appealing theoretical and phenomenological features that deserve to be investigated in future works.
We describe the calculation of the three-loop QCD corrections to quark and gluon form factors. The relevant three-loop Feynman diagrams are evaluated and the resulting three-loop Feynman integrals are reduced to a small set of known master integrals by using integration-by-parts relations. Our calculation confirms the recent results by Baikov et al.\ for the three-loop form factors. In addition, we derive the subleading ${\cal O}(\epsilon)$ terms for the fermion-loop type contributions to the three-loop form factors which are required for the extraction of the fermionic contributions to the four-loop quark and gluon collinear anomalous dimensions. The finite parts of the form factors are used to determine the hard matching coefficients for the Drell-Yan process and inclusive Higgs-production in soft-collinear effective theory.
In this paper, analytic sufficient and necessary conditions are obtained for the CP-conserving two-Higgs-doublet potential to be bounded from below, by using the copositivity of tensors. This is achieved by treating the potential as a quartic homogeneous polynomial in the moduli of the two Higgs doublet fields, with the angular dependence described by the misalignment of the two doublets, and then solving three minimum problems with respect to the misalignment. Finally, the analytic conditions are established with the help of the corresponding theory and methods of higher-order tensors.
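The copositivity logic can be illustrated in its simplest sub-case: a quartic $a x^2 + b y^2 + 2c\,xy$ in the nonnegative moduli-squared $x=|\Phi_1|^2$, $y=|\Phi_2|^2$ is copositive iff $a\ge 0$, $b\ge 0$ and $c+\sqrt{ab}\ge 0$. A numerical cross-check of this textbook criterion (illustrative only; the paper's full conditions also track the doublet misalignment):

```python
import math

def copositive_2x2(a, b, c):
    # Known criterion for a*x^2 + b*y^2 + 2*c*x*y >= 0 on x, y >= 0.
    return a >= 0 and b >= 0 and c + math.sqrt(a * b) >= 0

def min_on_grid(a, b, c, n=200):
    # Brute-force minimum of the form on the unit quarter-circle
    # x = cos(t), y = sin(t), t in [0, pi/2] (homogeneity makes the
    # overall scale irrelevant).
    vals = []
    for i in range(n + 1):
        t = (math.pi / 2) * i / n
        x, y = math.cos(t), math.sin(t)
        vals.append(a * x * x + b * y * y + 2 * c * x * y)
    return min(vals)

# Hypothetical couplings: with a = b = 0.5, the cross term c = -0.4
# passes (c + sqrt(ab) = 0.1 >= 0) while c = -0.6 fails.
```

The grid minimum goes negative exactly when the analytic criterion fails, mirroring how the paper's tensor conditions certify boundedness without scanning field space.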
We present a variational quantum eigensolver (VQE) algorithm for the efficient bootstrapping of the causal representation of multiloop Feynman diagrams in the Loop-Tree Duality (LTD) or, equivalently, the selection of acyclic configurations in directed graphs. A loop Hamiltonian based on the adjacency matrix describing a multiloop topology, and whose different energy levels correspond to the number of cycles, is minimized by VQE to identify the causal or acyclic configurations. The algorithm has been adapted to select multiple degenerate minima and thus achieves higher detection rates. A performance comparison with a Grover-based algorithm is discussed in detail. The VQE approach requires, in general, fewer qubits and shorter circuits for its implementation, albeit with lower success rates.
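The underlying classical task can be stated plainly: among all $2^E$ orientations of a multiloop topology's edges, select the acyclic (causal) ones. A brute-force classical counterpart for tiny graphs (illustrative only, using Kahn's topological sort; not the quantum algorithm):

```python
from itertools import product

def is_acyclic(n, edges):
    # Kahn's algorithm: a directed graph is acyclic iff every vertex
    # can be removed in topological order.
    indeg = [0] * n
    for u, v in edges:
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n

def causal_orientations(n, undirected_edges):
    # Enumerate all 2^E edge orientations; keep the acyclic ones.
    causal = []
    for flips in product([False, True], repeat=len(undirected_edges)):
        edges = [(v, u) if f else (u, v)
                 for (u, v), f in zip(undirected_edges, flips)]
        if is_acyclic(n, edges):
            causal.append(tuple(edges))
    return causal

# One-loop "triangle" topology: 3 vertices and 3 edges give 8
# orientations, of which 6 are acyclic (the 2 directed cycles drop out).
triangle = [(0, 1), (1, 2), (2, 0)]
```

The exponential blow-up of this enumeration is precisely what motivates the VQE and Grover approaches for larger multiloop topologies.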
The analytic continuations (ACs) of the double-variable Horn $H_1$ and $H_5$ functions have been derived for the first time using the automated symbolic $\textit{Mathematica}$ package $\texttt{Olsson.wl}$. The use of Pfaff-Euler transformations has been emphasised to derive ACs covering regions that are otherwise not accessible. The corresponding regions of convergence (ROCs) are obtained using its companion package $\texttt{ROC2.wl}$. A $\textit{Mathematica}$ package $\texttt{HornH1H5.wl}$, containing all the derived ACs and the associated ROCs, along with a demonstration file, is made publicly available at https://github.com/souvik5151/Horn_H1_H5.git .
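The need for such continuations is already visible in the one-variable Gauss function: the defining series converges only for $|z|<1$, while the Pfaff transformation ${}_2F_1(a,b;c;z)=(1-z)^{-a}\,{}_2F_1(a,c-b;c;z/(z-1))$ reaches arguments outside that disc. A toy numerical check (one-variable analogue only; the packages above handle the genuinely two-variable Horn cases):

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    # Truncated Gauss hypergeometric series; valid only for |z| < 1.
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def hyp2f1_pfaff(a, b, c, z):
    # Pfaff transformation: maps z < -1 (divergent series) to
    # z/(z-1) in (0, 1), where the series converges.
    return (1.0 - z) ** (-a) * hyp2f1_series(a, c - b, c, z / (z - 1.0))

# 2F1(1,1;2;z) = -ln(1-z)/z, so at z = -2 the exact value is ln(3)/2,
# even though the direct series diverges there.
val = hyp2f1_pfaff(1.0, 1.0, 2.0, -2.0)
```

The automated packages chain many such transformations (and track the resulting ROCs) instead of applying a single one by hand.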
We present a unified framework for the perturbative factorization connecting Euclidean correlations to light-cone correlations. Starting from nonlocal quark and gluon bilinear correlators, we derive the relevant hard-matching kernel up to next-to-leading order, both for the flavor singlet and non-singlet combinations, in non-forward and forward kinematics, and in coordinate and momentum space. The results for the generalized parton distributions (GPDs), parton distribution functions (PDFs), and distribution amplitudes (DAs) are obtained by choosing appropriate kinematics. The renormalization and matching are done in a state-of-the-art scheme. We also clarify some issues raised on the perturbative matching of GPDs in the literature. Our results provide a complete manual for extracting all leading-twist GPDs, PDFs, and DAs from lattice simulations of Euclidean correlations, in either the coordinate- or momentum-space factorization approach.
Extending the Higgs sector of the Standard Model (SM) by just one additional Higgs doublet field leads to the two-Higgs-doublet model (2HDM). In the Type-I $Z_2$-symmetric limit of the 2HDM, all five new physical Higgs states can be fairly light, $\mathcal{O}(100)$\,GeV or less, without being in conflict with current data from the direct Higgs boson searches and the $B$-physics measurements. In this article, we establish that the new neutral as well as the charged Higgs bosons in this model can all be simultaneously observable in the multi-$b$ final state. The statistical significance of the signature for each of these Higgs states, resulting from the electroweak (EW) production of their pairs, can exceed 5$\sigma$ at the 13\,TeV High-Luminosity Large Hadron Collider (HL-LHC). Since the parameter space configurations where this is achievable are precluded in the other, more extensively pursued, 2HDM Types, an experimental validation of our findings would be a clear indication that the true underlying Higgs sector in nature is the Type-I 2HDM.
In this article, we review and update implications of the muon anomalous magnetic moment (muon $g-2$) anomaly for two-Higgs-doublet models (2HDMs), which are classified according to imposed symmetries and their resulting Yukawa sector. In the minimal setup, the muon $g-2$ anomaly can be accommodated by the type-X (lepto-philic) 2HDM, flavor-aligned 2HDM (FA2HDM), muon-specific 2HDM ($\mu$2HDM), and $\mu\tau$-flavor violating 2HDM. We summarize all relevant experimental constraints from high-energy collider experiments and flavor experiments, as well as the theoretical constraints from the perturbative unitarity and vacuum stability bounds, to these 2HDMs in light of the muon $g-2$ anomaly. We clarify the available parameter spaces of these 2HDMs and investigate how to probe the remaining parameter regions in future experiments. In particular, we find that, due to the updated $B_s\to\mu^+ \mu^-$ measurement, the remaining parameter region of the FA2HDM is almost equivalent to the one of the type-X 2HDM. Furthermore, based on collider simulations, we find that the type-X 2HDM is excluded and the $\mu$2HDM scenario will be covered with the upcoming Run 3 data.
We consider a gauge symmetry extension of the standard model given by $SU(3)_C\otimes SU(2)_L\otimes U(1)_X\otimes U(1)_N\otimes Z_2$ with minimal particle content, where $X$ and $N$ are family dependent but determine the hypercharge as $Y=X+N$, while $Z_2$ is an exact discrete symmetry. In our scenario, the $X$ charge assignment (with $N$ then fixed as $N=Y-X$) is inspired by the observed number of fermion families, while the $Z_2$ assignment is inspired by the stability of dark matter. We examine the mass spectra of fermions, scalars, and gauge bosons, as well as their interactions, in the presence of a kinetic mixing term between the $U(1)_{X,N}$ gauge fields. We discuss in detail the phenomenology of the new gauge boson and of the right-handed neutrino dark matter stabilized by $Z_2$ conservation. We obtain parameter spaces that simultaneously satisfy the recent CDF $W$-boson mass, electroweak precision measurements, particle collider constraints, and dark matter observables, with a kinetic mixing parameter that is not necessarily small.
We study three-flavor QCD in a uniform magnetic field using chiral perturbation theory ($\chi$PT). We construct the vacuum free energy density, quark condensate shifts induced by the magnetic field and the renormalized magnetization to $\mathcal{O}(p^6)$ in the chiral expansion. We find that the calculation of the free energy is greatly simplified by cancellations among two-loop diagrams involving charged mesons. In comparing our results with recent $2+1$-flavor lattice QCD data, we find that the light quark condensate shift at $\mathcal{O}(p^6)$ is in better agreement than the shift at $\mathcal{O}(p^4)$. We also find that the renormalized magnetization, due to its smallness, possesses large uncertainties at $\mathcal{O}(p^{6})$ due to the uncertainties in the low-energy constants.
Observation of lepton number violation would represent a groundbreaking discovery with profound consequences for fundamental physics, and as such it has motivated an extensive experimental program searching for neutrinoless double beta decay. However, the violation of lepton number can also be tested by a variety of other observables. We focus on the possibilities of probing this fundamental symmetry within the framework of the Standard Model Effective Field Theory (SMEFT) beyond the minimal dimension-5. Specifically, we study the bounds imposed by all relevant low-energy observables on $\Delta L = 2$ dimension-7 effective operators beyond the electron flavor, and confront them with derived high-energy collider limits. We also discuss how the synergy of the analyzed multi-frontier observables can play a crucial role in distinguishing among different dimension-7 SMEFT operators.
We explore the possibility that an $SU(2)_L$ triplet scalar with hypercharge $Y=0$ is the origin of the $95\,$GeV diphoton excess. For a small mixing angle with the Standard Model Higgs, its neutral component naturally has a sizable branching ratio to $\gamma\gamma$, such that its Drell-Yan production via $pp\to W^*\to H H^\pm$ is sufficient to obtain the desired signal strength, where $H^\pm$ is the charged Higgs component of the triplet. The predictions of this setup are: 1) The $\gamma\gamma$ signal has a $p_T$ spectrum different from gluon fusion but similar to associated production. 2) Photons are produced in association with tau leptons and jets, but generally do not fall into the vector-boson fusion category. 3) The existence of a charged Higgs with $m_{H^\pm}\approx\!(95\pm5)\,$GeV leading to $\sigma(pp\to \tau\tau\nu\nu)\approx0.4\,$pb, which is at the level of the current limit and can be discovered with Run 3 data. 4) A positive-definite shift in the $W$ mass, as suggested by the current global electroweak fit.
The absence of semitauonic decays of charmed hadrons makes the decay processes mediated by the quark-level $c\to d \tau^+ \nu_{\tau}$ transition inadequate for probing generic new physics (NP) with all kinds of Dirac structures. To fill this gap, we consider in this paper the quasielastic neutrino scattering process $\nu_{\tau}+n\to \tau^-+\Lambda_c$, and propose searching for NP through the polarizations of the $\tau$ lepton and the $\Lambda_c$ baryon. In the framework of a general low-energy effective Lagrangian, we perform a comprehensive analysis of the (differential) cross sections and polarization vectors of the process, both within the Standard Model and in various NP scenarios, and scrutinize possible NP signals. We also explore how our findings are affected by the uncertainties in, and the different parametrizations of, the $\Lambda_c \to N$ transition form factors, and show that these have become one of the major obstacles to further constraining possible NP through the quasielastic scattering process.
In an effort to understand nuclei in terms of quarks, we develop an effective theory of low-energy quantum chromodynamics in which a single quark contained in a nucleus is driven by a mean field due to the other constituents of the nucleus. We analyze why the number of $d$ quarks in light stable nuclei is much the same as that of $u$ quarks, while for heavier nuclei, beginning with ${\rm Ca}^{40}$, the number of $d$ quarks is greater than the number of $u$ quarks. To account for the finiteness of the periodic table, we invoke a version of gauge/gravity duality between the dynamics of stable nuclei and that of extremal black holes. Under the assumption that the end of stability for heavy nuclei is dual to the occurrence of a naked singularity, we find that the maximal number of protons in stable nuclei is $Z_{\max}^{\rm H}\approx 82$.
Based on chiral perturbation theory at the leading order, we show the presence of a new phase in rapidly rotating two-flavor QCD matter: a domain-wall Skyrmion phase. Based on the chiral Lagrangian with a Wess-Zumino-Witten (WZW) term responsible for the chiral anomaly and the chiral vortical effect, it was previously shown that the ground state in the high-density region under rapid rotation is a chiral soliton lattice (CSL) consisting of a stack of $\eta$-solitons. In a large parameter region, a single $\eta$-soliton decays into a pair of non-Abelian solitons, each of which carries ${\rm SU}(2)_{\rm V}/{\rm U}(1) \simeq {\mathbb C}P^1 \simeq S^2$ moduli as a consequence of the spontaneously broken vector symmetry ${\rm SU}(2)_{\rm V}$. In such a non-Abelian CSL, we construct the effective world-volume theory of a single non-Abelian soliton, obtaining a $d=2+1$ dimensional ${\mathbb C}P^1$ model with a topological term originating from the WZW term. We show that when the chemical potential is larger than a critical value, a topological lump supported by the second homotopy group $\pi_2(S^2) \simeq {\mathbb Z}$ has negative energy and is spontaneously created, implying the domain-wall Skyrmion phase. This lump corresponds in the bulk to a Skyrmion supported by the third homotopy group $\pi_3[{\rm SU}(2)] \simeq {\mathbb Z}$ and carrying a baryon number. This composite state is called a domain-wall Skyrmion, and it is stable even in the absence of the Skyrme term. An analytic formula for the effective nucleon mass in this medium is obtained as $4\sqrt{2}\pi f_{\pi}f_\eta/m_{\pi} \sim 1.21$ GeV, with the decay constants $f_{\pi}$ and $f_\eta$ of the pions and the $\eta$ meson, respectively, and the pion mass $m_{\pi}$; this is surprisingly close to the nucleon mass in the QCD vacuum.
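As an order-of-magnitude cross-check of the quoted estimate (an editorial illustration, not part of the paper; the numerical values of the decay constants and pion mass below are representative assumptions), the combination $4\sqrt{2}\pi f_{\pi}f_\eta/m_{\pi}$ indeed comes out near $1.2$ GeV:

```python
import math

# Representative values in MeV (assumptions, not taken from the paper):
f_pi = 92.0    # pion decay constant
f_eta = 104.0  # eta decay constant
m_pi = 140.0   # pion mass

# Effective nucleon mass estimate from the abstract's formula
m_eff = 4 * math.sqrt(2) * math.pi * f_pi * f_eta / m_pi  # in MeV
print(f"{m_eff/1000:.2f} GeV")  # ~1.21 GeV, close to the nucleon mass
```

With slightly different choices of the decay constants the result shifts by a few percent, but it stays close to the quoted $1.21$ GeV.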
Using the formalism of generalized fractional derivatives, we study a two-dimensional non-relativistic meson system in which the mesons interact through a Cornell potential. The system is formulated in the domain of symplectic quantum mechanics by means of the generalized fractional Nikiforov-Uvarov method, from which the corresponding Wigner function and energy eigenvalues are derived. The effect of the fractional parameters $\alpha$ and $\beta$ on the ground-state solution is analyzed through the Wigner function for the charm-anticharm, bottom-antibottom, and $b\overline{c}$ mesons. One of the fundamental achievements of such a Cornell model is the determination of the heavy-quarkonia mass spectra. We have computed these masses and the
QCD matter in a strong magnetic field exhibits a rich phase structure. In the presence of an external magnetic field, the chiral Lagrangian for two flavors is accompanied by the Wess-Zumino-Witten (WZW) term, containing an anomalous coupling of the neutral pion $\pi_0$ to the magnetic field via the chiral anomaly. Due to this term, the ground state is inhomogeneous, in the form of either a chiral soliton lattice (CSL), an array of solitons along the direction of the magnetic field, or a domain-wall Skyrmion (DWSk) phase, in which Skyrmions supported by $\pi_3[{\rm SU}(2)] \simeq {\mathbb Z}$ appear inside the solitons as topological lumps supported by $\pi_2(S^2) \simeq {\mathbb Z}$ in the effective worldvolume theory of the soliton. In this paper, we determine the phase boundary between the CSL and DWSk phases beyond the single-soliton approximation, within the leading order of chiral perturbation theory. To this end, we explore a domain-wall Skyrmion chain in multiple-soliton configurations. First, we construct the effective theory of the CSL by the moduli approximation, obtaining a ${\mathbb C}P^1$ model, or O(3) model, gauged by a background electromagnetic gauge field, with two kinds of topological terms coming from the WZW term: one is the topological lump charge in the 2+1 dimensional worldvolume, and the other is a topological term counting the soliton number. Topological lumps in the 2+1 dimensional worldvolume theory are superconducting rings whose sizes are constrained by the flux quantization condition. The negative-energy condition for the lumps yields the phase boundary between the CSL and DWSk phases. We find that a large region inside the CSL is occupied by the DWSk phase, and that the CSL remains metastable in the DWSk phase in the vicinity of the phase boundary.
Parity violation in QCD is a consequence of the so-called QCD $\theta$-vacuum. As a result, parity-odd fragmentation functions arise, and they bring in new observables in back-to-back dihadron production in $e^+e^-$-annihilation experiments [Phys.Rev.Lett. 106 (2011) 042001]. Experimental measurements of the corresponding parity-odd fragmentation functions can therefore shed light on the local CP-violation effect in QCD. In this paper, we investigate the decay contribution to these parity-odd fragmentation functions and compute its contribution to the new observables. In principle, the decay contribution can and should be excluded in theoretical analyses and experimental measurements; however, this has not been common practice so far. Furthermore, given that the value of the $\theta$-parameter is extremely small ($\theta < 3 \times 10^{-10}$), one should be very careful when constraining these parity-odd fragmentation functions from experiments.
The electromagnetic and gravitational form factors of decuplet baryons are systematically studied with a covariant quark-diquark approach. The model parameters are first discussed and determined through a global comparison with lattice calculation results. Then, the electromagnetic properties of the systems, including electromagnetic radii, magnetic moments, and electric-quadrupole moments, are calculated. The obtained results are in agreement with experimental measurements and with the results of other models. Finally, the gravitational form factors and the mechanical properties of the decuplet baryons, such as mass radii, energy densities, and spin distributions, are also calculated and discussed.
We study correlations between geometric subfactors living on the Ryu-Takayanagi surface that bounds the entanglement wedge. Using the surface-state correspondence and the bit threads program, we are able to calculate the mutual information and conditional mutual information between subfactors. This enables us to count the shared Bell pairs between subfactors, and we propose an entanglement distillation procedure over these subsystems via a SWAP gate protocol. We comment on extensions to multipartite entanglement.
We devise a general method for obtaining $0$-form noninvertible discrete chiral symmetries in $4$-dimensional $SU(N)/\mathbb Z_p$ and $SU(N)\times U(1)/\mathbb Z_p$ gauge theories with matter in arbitrary representations, where $\mathbb Z_p$ is a subgroup of the electric $1$-form center symmetry. Our approach involves placing the theory on a three-torus and utilizing the Hamiltonian formalism to construct noninvertible operators by introducing twists compatible with the gauging of $\mathbb Z_p$. These theories exhibit electric $1$-form and magnetic $1$-form global symmetries, and their generators play a crucial role in constructing the corresponding Hilbert space. The noninvertible operators are demonstrated to project onto specific Hilbert space sectors characterized by particular magnetic fluxes. Furthermore, when subjected to twists by the electric $1$-form global symmetry, these surviving sectors reveal an anomaly between the noninvertible and the $1$-form symmetries. We argue that an anomaly implies that certain sectors, characterized by the eigenvalues of the electric symmetry generators, exhibit multi-fold degeneracies. When we couple these theories to axions, infrared axionic noninvertible operators inherit the ultraviolet structure of the theory, including the projective nature of the operators and their anomalies. We discuss various examples of vector and chiral gauge theories that showcase the versatility of our approach.
We show that the characteristic function of the probability distribution associated with the change of an observable in a two-point measurement protocol with a perturbation can be written as an auto-correlation function between an initial state and a certain state evolved by an effective unitary operator. Using this identification, we probe how the evolved state spreads in the corresponding conjugate space by defining a notion of the complexity of the spread of this evolved state. For a sudden-quench scenario, where the parameters of an initial Hamiltonian (taken as the observable measured in the two-point measurement protocol) are suddenly changed to a new set of values, we first obtain the corresponding Krylov basis vectors and the associated Lanczos coefficients for an initial pure state, and then obtain the spread complexity. Interestingly, we find that in such a protocol the Lanczos coefficients can be related to various cost functions used in the geometric formulation of circuit complexity, for example the one used to define the Fubini-Study complexity. We illustrate the evolution of the spread complexity both analytically, using Lie-algebraic techniques, and through numerical computations. This is done for cases in which the Hamiltonians before and after the quench are taken as different combinations of chaotic and integrable spin chains. We show that the complexity saturates at large values of the parameter only when the pre-quench Hamiltonian is chaotic. Further, in these examples we also discuss the important role played by the initial state, which is determined by the time-evolved perturbation operator.
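The Krylov-basis construction referred to above follows the standard Lanczos recursion; a minimal numerical sketch (an editorial illustration in which a random Hermitian matrix stands in for the post-quench Hamiltonian, not the spin-chain models of the paper) is:

```python
import numpy as np

def lanczos(H, psi0, nmax):
    """Build the Krylov basis and Lanczos coefficients (a_n, b_n) for psi0 under H."""
    K = [psi0 / np.linalg.norm(psi0)]
    a, b = [], []
    for n in range(nmax):
        w = H @ K[-1]
        a.append(np.real(np.vdot(K[-1], w)))
        w = w - a[-1] * K[-1] - (b[-1] * K[-2] if b else 0)
        # full reorthogonalization against the basis built so far, for stability
        for v in K:
            w = w - np.vdot(v, w) * v
        bn = np.linalg.norm(w)
        if bn < 1e-12:
            break
        b.append(bn)
        K.append(w / bn)
    return np.array(K), np.array(a), np.array(b)

def spread_complexity(H, psi0, t, nmax=20):
    """C(t) = sum_n n |<K_n| e^{-iHt} |psi0>|^2 in the Krylov basis."""
    K, _, _ = lanczos(H, psi0, nmax)
    E, U = np.linalg.eigh(H)
    psit = U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))  # exact evolution
    amps = K.conj() @ psit                                   # <K_n|psi(t)>
    return np.sum(np.arange(len(K)) * np.abs(amps) ** 2)

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
H = (A + A.conj().T) / 2                 # random Hermitian "Hamiltonian"
psi0 = np.zeros(16, complex); psi0[0] = 1.0
print(spread_complexity(H, psi0, 0.0))   # ~0 at t=0: the state is the first Krylov vector
```

At later times the weight migrates to higher Krylov vectors and the complexity grows before saturating, qualitatively as described in the abstract.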
We show how, starting with the one-string space of states in the BRST formalism, one can construct a large class of physical quantities containing, in particular, scattering amplitudes for the bosonic string and the superstring. The same techniques work for the heterotic string.
In this work we study Liouville conformal blocks with degenerate primaries and one operator in an irregular representation of the Virasoro algebra. Using an algebraic approach, we derive modified BPZ equations satisfied by such blocks and subsequently construct corresponding integral representations based on integration over non-compact Lefschetz cycles. The integral representations are then used to derive novel types of flat connections on the irregular conformal block bundle.
We analyse the scalar curvature of the vector multiplet moduli space $\mathcal{M}^{\rm VM}_X$ of type IIA string theory compactified on a Calabi--Yau manifold $X$. While the volume of $\mathcal{M}^{\rm VM}_X$ is known to be finite, cases have been found where the scalar curvature diverges positively along trajectories of infinite distance. We classify the asymptotic behaviour of the scalar curvature for all large volume limits within $\mathcal{M}^{\rm VM}_X$, for any choice of $X$, and provide the source of the divergence both in geometric and physical terms. Geometrically, there are effective divisors whose volumes do not vary along the limit. Physically, the EFT subsector associated to such divisors is decoupled from gravity along the limit, and defines a rigid $\mathcal{N}=2$ field theory with a non-vanishing moduli space curvature $R_{\rm rigid}$. We propose that the relation between scalar curvature divergences and field theories that can be decoupled from gravity is a common trait of moduli spaces compatible with quantum gravity.
The principle of maximum ignorance posits that the coarse-grained description of a system is maximally agnostic about its underlying microscopic structure. We briefly review this principle for random matrix theory and for the eigenstate thermalization hypothesis. We then apply this principle in holography to construct ensembles of random mixed states. This leads to an ensemble of microstates which models our microscopic ignorance, and which on average reproduces the effective semiclassical physics of a given bulk state. We call this ensemble the state-averaging ansatz. The output of our model is a prediction for semiclassical contributions to variances and higher statistical moments over the ensemble of microstates. The statistical moments provide coarse-grained -- yet gravitationally non-perturbative -- information about the microstructure of the individual states of the ensemble. We show that these contributions exactly match the on-shell action of known wormhole configurations of the gravitational path integral. These results strengthen the view that wormholes simply parametrize the ignorance of the microstructure of a fundamental state, given a fixed semiclassical bulk description.
We compute the entanglement entropy of Hawking radiation in a bath attached to a deformed eternal AdS black hole, dual to field theories in which each theory is backreacted by the presence of a uniform static distribution of heavy fundamental quarks. The entanglement entropy of the Hawking radiation increases linearly with time until an island emerges after the Page time; at that point the entanglement entropy saturates at a constant value equal to twice the Bekenstein-Hawking entropy of the deformed black hole, resulting in the Page curve. Furthermore, we study the effects of the backreaction on the Page curve and observe that introducing the deformation delays the appearance of the island and shifts the Page curve to later times.
We study the nonlinear dynamical evolution of spinodal decomposition in a first-order superfluid phase transition using a simple holographic model in the probe limit. We first confirm the linear stability analysis based on quasinormal modes and verify the existence of a critical length scale related to a gradient instability -- negative speed of sound squared -- of the superfluid sound mode, which is a consequence of a negative thermodynamic charge susceptibility. We present a comparison between our case and the standard Cahn-Hilliard equation for spinodal instability, in which a critical length scale can also be derived from a diffusive instability. We then perform several numerical tests, which include the nonlinear time evolution directly from an unstable state and fast quenches from a stable to an unstable state in the spinodal region. Our numerical results provide a real-time description of spinodal decomposition and phase separation in one and two spatial dimensions. We reveal the existence of four different stages in the dynamical evolution and characterize their main properties. Finally, we investigate the strength of dynamical heterogeneity using the spatial variance of the local chemical potential, and we correlate the latter with other features of the dynamical evolution.
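For reference, the standard Cahn-Hilliard linear analysis mentioned above (textbook material; the parameter values below are our own illustrative assumptions) gives a growth rate that is positive only for wavenumbers below a critical $k_c$, which defines the critical length scale for the spinodal instability:

```python
import numpy as np

# Illustrative parameters (assumptions): mobility M, spinodal "negative diffusivity"
# magnitude eps, and gradient-energy coefficient gamma
M, eps, gamma = 1.0, 0.5, 0.1

def growth_rate(k):
    """Linearized Cahn-Hilliard growth rate: sigma(k) = M k^2 (eps - gamma k^2)."""
    return M * k**2 * (eps - gamma * k**2)

k_c = np.sqrt(eps / gamma)   # only modes with k < k_c grow
L_c = 2 * np.pi / k_c        # critical length scale: shorter wavelengths are stable
print(f"k_c = {k_c:.3f}, L_c = {L_c:.3f}")
```

Perturbations with wavelength above $L_c$ grow and drive phase separation, while shorter-wavelength modes are damped, mirroring the critical length scale found holographically in the paper.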
Twisted four-dimensional supersymmetric Yang-Mills theory famously gives a useful point of view on the Donaldson and Seiberg-Witten invariants of four-manifolds. In this paper we generalize the construction to include a path integral formulation of generalizations of Donaldson invariants for smooth \underline{families} of four-manifolds. Mathematically these are equivariant cohomology classes for the action of the oriented diffeomorphism group on the space of metrics on the manifold. In principle these cohomology classes should contain nontrivial information about the topology of the diffeomorphism group of the four-manifold. We show that the invariants may be interpreted as the standard topologically twisted path integral of four-dimensional $\mathcal{N}=2$ supersymmetric Yang-Mills coupled to topologically twisted background fields of conformal supergravity.
Using a manifestly supersymmetric formalism, we determine the general structure of two- and three-point functions of the supercurrent and the flavour current of N = 2 superconformal field theories. We also express them in terms of N = 1 superfields and compare them to the generic N = 1 correlation functions. A general discussion of the N = 2 supercurrent superfield and the multiplet of anomalies, together with their definition as derivatives with respect to the supergravity prepotentials, is also included.
We study the quantum degeneracies of BPS black holes of octonionic magical supergravity in five dimensions that is defined by the exceptional Jordan algebra. We define the quantum degeneracy purely number theoretically as the number of distinct states in the charge space with a given set of invariant labels of the discrete U-duality group. We argue that the quantum degeneracies of spherically symmetric stationary BPS black holes of octonionic magical supergravity in five dimensions are given by the Fourier coefficients of the modular forms of the exceptional group $E_{7(-25)}$. The charges of the black holes take values in the lattice defined by the exceptional Jordan algebra $J_3^{\mathbb{O}}(\mathcal{R})$ over integral octonions $\mathcal{R}$. The quantum degeneracies of charge states of rank one and rank two BPS black holes (zero area) are given by the Fourier coefficients of singular modular forms $E_4(Z)$ and $E_8(Z)=(E_4(Z))^2$ of $E_{7(-25)}(Z)$. The rank 3 (large) BPS black holes will be studied elsewhere. Following the work of N. Elkies and B. Gross on the embeddings of cubic rings $A$ into the exceptional Jordan algebra and their actions on the 24 dimensional orthogonal quadratic subspace of $J_3^{\mathbb{O}}(\mathcal{R})$, we show that the quantum degeneracies of rank one black holes described by such embeddings are given by the Fourier coefficients of the Hilbert modular forms of $SL(2,A)$. If the discriminant of the cubic ring $A$ is $D=p^2$ with $p$ a prime number then the isotropic lines in the 24 dimensional quadratic space define a pair of Niemeier lattices which can be taken as charge lattices of some BPS black holes. For $p=7$ they are the Leech lattice with no roots and the lattice $A_6^4$ with 168 root vectors. We also review the current status of the searches for the M/superstring theoretic origins of the octonionic magical supergravity.
We investigate the existence of self-dual configurations in the restricted gauged baby Skyrme model enlarged with a $Z_2$ symmetry, which introduces a real scalar field. For this purpose, we implement the Bogomol'nyi procedure, which provides a lower bound for the energy together with the respective self-dual equations whose solutions saturate this bound. To solve the self-dual equations, we focus specifically on a class of topological structures called compactons. We obtain the corresponding numerical solutions within two distinct scenarios, each defined by a scalar field, allowing us to describe different magnetic media. Finally, we analyze how the compacton profiles change when immersed in each medium.
In this Letter we consider out-of-equilibrium phenomena in the complex Sachdev-Ye-Kitaev (SYK) model supplemented with an attractive Hubbard interaction (SYK+U). This model provides a clear-cut transition from the non-Fermi-liquid phase of pure SYK to the superconducting phase, through a pseudogap phase with non-synchronized Cooper pairs. We investigate the quench of the phase soft mode in this model and the relaxation to the equilibrium state. Using the relation with the Hamiltonian mean-field (HMF) model, we show that the SYK+U model exhibits several interesting phenomena, such as violent relaxation, long-lived quasi-stationary states, out-of-equilibrium finite-time phase transitions, non-extensivity, and a tower of condensates. We comment on the holographic dual gravity counterparts of these phenomena.
We study $d$-dimensional scalar field theory in the Local Potential Approximation of the functional renormalization group. Sturm-Liouville methods allow the eigenoperator equation to be cast as a Schrödinger-type equation. Combining solutions in the large-field limit with the Wentzel-Kramers-Brillouin approximation, we solve analytically for the scaling dimension $d_n$ of high-dimension potential-type operators $\mathcal{O}_n(\varphi)$ around a non-trivial fixed point. We find that $d_n = n(d-d_\varphi)$ to leading order in $n$ as $n \to \infty$, where $d_\varphi=\frac{1}{2}(d-2+\eta)$ is the scaling dimension of the field $\varphi$, and we determine the power-law growth of the subleading correction. For $O(N)$ invariant scalar field theory, the scaling dimension is just double this, for all fixed $N\geq0$ and additionally for $N=-2,-4,\ldots \,.$ These results are universal, independent of the choice of cutoff function, which we keep general throughout, subject only to some weak constraints.
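The quoted leading-order formula is easy to evaluate numerically; a quick illustration (the value of $\eta$ below is an example, not taken from the paper):

```python
def d_phi(d, eta):
    """Scaling dimension of the field: d_phi = (d - 2 + eta) / 2."""
    return 0.5 * (d - 2 + eta)

def d_n_leading(n, d, eta, on_invariant=False):
    """Leading large-n scaling dimension d_n = n (d - d_phi); doubled for O(N) theory."""
    dn = n * (d - d_phi(d, eta))
    return 2 * dn if on_invariant else dn

# Example: d = 3 with a small illustrative anomalous dimension eta
d, eta = 3, 0.036
print(d_phi(d, eta))             # ~0.518
print(d_n_leading(10, d, eta))   # ~10 * (3 - 0.518) = 24.82
```

So in this example each extra power of the field adds roughly $d - d_\varphi \approx 2.48$ to the operator dimension at leading order in $n$.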
The supertwistor and bi-supertwistor formulations for ${\mathcal N}$-extended anti-de Sitter (AdS) superspace in four dimensions, ${\rm AdS}^{4|4\mathcal N}$, were derived two years ago in arXiv:2108.03907. In the present paper, we introduce a novel realisation of the ${\mathcal N}$-extended AdS supergroup $\mathsf{OSp}(\mathcal{N}|4;\mathbb{R})$ and apply it to develop a coset construction for ${\rm AdS}^{4|4\mathcal N}$ and the corresponding differential geometry. This realisation naturally leads to an atlas on ${\rm AdS}^{4|4\mathcal N}$ (that is a generalisation of the stereographic projection for a sphere) that consists of two charts with chiral transition functions for ${\mathcal N}>0$. A manifestly $\mathsf{OSp}(\mathcal{N}|4;\mathbb{R})$ invariant model for a superparticle in ${\rm AdS}^{4|4\mathcal N}$ is proposed. Additionally, by employing a conformal superspace approach, we describe the most general conformally flat $\mathcal N$-extended supergeometry. This construction is then specialised to the case of ${\rm AdS}^{4|4\mathcal N}$.
In this paper, we explore the relationship between holographic Wilsonian renormalization groups and stochastic quantization in conformally coupled scalar theory in AdS$_{4}$. The relationship between these two different frameworks was first proposed in arXiv:1209.2242 and has been tested in various free theories; research on interacting theories, however, has only recently begun. In this paper, we show that the stochastic four-point function obtained from the Langevin equation is completely captured by the holographic quadruple-trace deformation when the Euclidean action $S_{E}$ is given by $S_{E}=-2I_{os}$, where $I_{os}$ is the holographic on-shell action of the conformally coupled scalar theory in AdS$_{4}$, together with the condition that the stochastic fictitious time $t$ is identified with the AdS radial variable $r$. We extensively explore the case in which the boundary condition on the conformal boundary is a Dirichlet boundary condition, and in that case the stochastic three-point function trivially vanishes. This agrees with the fact that the holographic triple-trace deformation vanishes when a Dirichlet boundary condition is imposed on the conformal boundary.
In \cite{1808.03288}, logarithmic corrections to the subleading soft photon and soft graviton theorems were derived in four spacetime dimensions from the ratio of IR-finite S-matrices. This was achieved after factoring out the IR-divergent components from the traditional electromagnetic and gravitational S-matrices using the Grammer-Yennie prescription. Although the loop-corrected subleading soft theorems were derived from one-loop scattering amplitudes involving scalar particles in a minimally coupled theory with a scalar contact interaction, it has been conjectured that the soft factors are universal (theory independent) and one-loop exact (they do not receive corrections from higher loops). This paper extends the analysis conducted in \cite{1808.03288} to encompass general spinning-particle scattering with non-minimal couplings permitted by gauge invariance and general coordinate invariance. By re-deriving the $\ln\omega$ soft factors in this generic setup, we establish their universal nature. Furthermore, we summarize the results of loop-corrected soft photon and graviton theorems up to sub-subleading order, which follow from the analysis of one- and two-loop QED and quantum gravity S-matrices. While the classical versions of these soft factors have already been derived in the literature, we put forth conjectures regarding the quantum soft factors and outline potential strategies for their derivation.
As in random matrix theories, eigenvector/value distributions are important quantities for random tensors in their applications. Recently, real eigenvector/value distributions of Gaussian random tensors have been explicitly computed by expressing them as partition functions of quantum field theories with quartic interactions. This procedure for computing distributions of random tensors is general, powerful, and intuitive, because one can take advantage of the well-developed techniques and knowledge of quantum field theories. In this paper we extend the procedure to cases in which random tensors have mean backgrounds and the eigenvector equations have random deviations. In particular, we study in detail the case in which the background is a rank-one tensor, namely the case of a spiked tensor. We discuss the condition under which the background rank-one tensor has a visible peak in the eigenvector distribution. We obtain a threshold value, which agrees with a previous result in the literature.
We construct symmetry-preserving lattice regularizations of 2d QED with one and two flavors of Dirac fermions, as well as the `3450' chiral gauge theory, by leveraging bosonization and recently-proposed modifications of Villain-type lattice actions. The internal global symmetries act just as locally on the lattice as they do in the continuum, the anomalies are reproduced at finite lattice spacing, and in each case we find a sign-problem-free dual formulation.
We consider D1-D5-P states in the untwisted sector of the D1-D5 orbifold CFT where we excite one copy of the seed CFT with a left-moving superconformal descendant. When the theory is deformed away from this region of moduli space these states can `lift', despite being BPS at the orbifold point. For descendants formed from the supersymmetry $G^{\alpha}_{\!\dot{A},-s}$ and R-symmetry $J^a_{-n}$ current modes we obtain explicit results for the expectation value of the lifts for various subfamilies of states at second order in the deformation parameter. A smooth $\sim\sqrt{h}$ behaviour is observed in the lifts of these subfamilies for large dimensions. Using covering space Ward identities we then find a compact expression for the lift of the above $J^a_{-n}$ descendant states valid for arbitrary dimensions. In the large-dimension limit this lift scales as $\sim\sqrt{h}\,$, strengthening the conjecture that this is a universal property of the lift of D1-D5-P states. We observe that the lift is not simply a function of the total dimension, but depends on how the descendant level is partitioned amongst modes.
Geometrical properties of spacetime are difficult to study in nonperturbative approaches to quantum gravity like Causal Dynamical Triangulations (CDT), where one uses simplicial manifolds, instead of Riemannian manifolds, to define the gravitational path integral. In particular, in CDT one relies on only two mathematical tools: a distance measure and a volume measure. In this paper, we define a notion of scalar curvature for metric spaces endowed with a volume measure or a random walk, without assuming or using notions of tensor calculus. Furthermore, we directly define the Ricci scalar, without the need to define and compute the Riemann or Ricci tensor a priori. For this, we make use of quantities, such as the surface of a geodesic sphere or the return probability of scalar diffusion processes, that can be computed in these metric spaces, as in a Riemannian manifold, where they receive scalar curvature contributions. Our definitions recover the classical results for the scalar curvature when the spaces are Riemannian manifolds. We propose seven methods to compute the scalar curvature in these spaces, and we compare their features in natural implementations on discrete spaces. The defined generalized scalar curvatures are easily implemented on discrete spaces, such as graphs. We present the results of our definitions on random triangulations of a 2D sphere and plane. Additionally, we show the results of our generalized scalar curvatures on the quantum geometries of 2D CDT, where we find that all our definitions indicate a flat ground state of the gravitational path integral.
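The idea of extracting the Ricci scalar from a distance and a volume measure alone can be illustrated with the classical geodesic-ball expansion $V(r) = V_{\rm flat}(r)\,[1 - \tfrac{R}{6(d+2)}\,r^2 + O(r^4)]$. A minimal sketch (our own illustration, using the closed-form ball area on the round unit 2-sphere rather than a triangulation) recovers $R = 2$:

```python
import math

def ball_area_sphere(r):
    """Area of a geodesic ball of radius r on the unit 2-sphere: 2*pi*(1 - cos r)."""
    return 2 * math.pi * (1 - math.cos(r))

def ricci_scalar_from_volume(r, vol, d=2):
    """Invert V(r) = V_flat(r) * (1 - R r^2 / (6(d+2)) + O(r^4)) for R."""
    v_flat = math.pi * r**2          # flat-space ball volume in d = 2
    return 6 * (d + 2) * (1 - vol / v_flat) / r**2

r = 0.05                             # small radius so higher-order terms are negligible
R_est = ricci_scalar_from_volume(r, ball_area_sphere(r))
print(round(R_est, 3))               # ~2.0, the scalar curvature of the unit sphere
```

On a graph or triangulation one would replace the closed-form ball area by a count of vertices (or total volume) within graph distance $r$, which is the spirit of the discrete implementations studied in the paper.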
We propose that the domain walls formed in a classical Ginzburg-Landau model can exhibit topologically stable but thermodynamically metastable states. This proposal relies on Allen-Cahn's assertion that the velocity of a domain wall at some point is proportional to the mean curvature at that point. From this assertion we speculate that a domain wall resembles a rubber band that can wind around the background geometry in a nontrivial way and thus exist permanently. We numerically verify our proposal in two and three spatial dimensions by using periodic boundary conditions as well as Neumann boundary conditions. We find that there are always possibilities to form topologically stable domain walls in the final equilibrium states. However, from a thermodynamic point of view these topologically nontrivial domain walls have higher free energies and are therefore metastable. Such metastable states protected by topology could potentially serve as storage media in the computer and information technology industry.
A timely and long-term programme of kaon decay measurements at an unprecedented level of precision is presented, leveraging the capabilities of the CERN Super Proton Synchrotron (SPS). The proposed HIKE programme is firmly anchored on the experience built up studying kaon decays at the SPS over the past four decades, and includes rare processes, CP violation, dark sectors, symmetry tests and other tests of the Standard Model. The programme is based on a staged approach involving experiments with charged and neutral kaon beams, as well as operation in beam-dump mode. The various phases will rely on a common infrastructure and set of detectors.
Results obtained by the RPC ECOgas@GIF++ Collaboration, using Resistive Plate Chambers operated with new, eco-friendly gas mixtures based on Tetrafluoropropene and carbon dioxide, are shown and discussed in this paper. Tests aimed at assessing the performance of this kind of detector in high-irradiation conditions, analogous to the ones foreseen for the coming years at the Large Hadron Collider experiments, were performed, and demonstrate performance essentially similar to that obtained with the gas mixtures currently in use, based on Tetrafluoroethane, which is being progressively phased out for its possible contribution to the greenhouse effect. Long-term aging tests are also being carried out, with the goal of demonstrating the possibility of using these eco-friendly gas mixtures during the whole High-Luminosity phase of the Large Hadron Collider.
Current and future experiments need to know the stopping power of liquid argon. It is used directly in calibration, where commonly the minimum-ionizing portion of muon tracks is used as a standard candle. Similarly, muon range is used as a measure of muon energy. More broadly, the stopping power figures into the simulation of all charged particles, and so uncertainty propagates widely throughout data analysis of all sorts. The main parameter that controls stopping power is the mean excitation energy, or I-value. Direct experimental information for argon's I-value comes primarily from measurements of gaseous argon, with a very limited amount of information from solid argon, and none from liquid argon. A powerful source of indirect information is also available from oscillator strength distribution calculations. We perform a new calculation and find that from oscillator strength information alone, the I-value of gaseous argon is $(187\pm 5)$\,eV. In combination with the direct measurements and other calculations, we recommend $(187\pm 4)$\,eV for gaseous argon. For liquid argon, we evaluate the difference in central value and uncertainty incurred by the difference of phase and recommend $(197\pm 7)$\,eV. All uncertainties are given to 68\% C.L.
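As an illustrative aside (not part of the paper), the sensitivity of the stopping power to the I-value can be sketched with the standard Bethe formula, here with the density correction omitted; the constants (muon mass, argon $Z/A$) are standard reference numbers, and the two I-values are the gaseous and liquid recommendations quoted above.

```python
import math

def bethe_dedx(beta_gamma, I_eV, z=1.0, Z=18.0, A=39.948, M=105.658):
    """Mean stopping power <-dE/dx> in MeV cm^2/g from the Bethe formula
    (density correction omitted). M is the incident muon mass in MeV."""
    K = 0.307075                  # coefficient, MeV mol^-1 cm^2
    me = 0.510999                 # electron mass, MeV
    gamma = math.sqrt(1.0 + beta_gamma**2)
    beta2 = (beta_gamma / gamma)**2
    I = I_eV * 1e-6               # mean excitation energy, MeV
    # Maximum energy transfer to an electron in a single collision
    tmax = 2 * me * beta_gamma**2 / (1 + 2 * gamma * me / M + (me / M)**2)
    arg = 2 * me * beta_gamma**2 * tmax / I**2
    return K * z**2 * (Z / A) / beta2 * (0.5 * math.log(arg) - beta2)

# Near the ionization minimum (beta*gamma ~ 3.5), switching between the
# gaseous and liquid I-values shifts dE/dx by a fraction of a percent,
# which propagates directly into calorimetric calibration.
gas = bethe_dedx(3.5, 187.0)
liq = bethe_dedx(3.5, 197.0)
print(gas, liq, (gas - liq) / gas)
```

A larger I-value lowers the logarithmic term, so the liquid recommendation yields a slightly smaller stopping power than the gaseous one.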
A search for the very rare $D^0 \to \mu^+\mu^-$ decay is performed using data collected by the LHCb experiment in proton-proton collisions at $\sqrt{s} = 7$, 8 and $13~\rm{TeV}$, corresponding to an integrated luminosity of $9~\rm{fb^{-1}}$. The search is optimised for $D^0$ mesons from $D^{\ast+}\to D^0\pi^+$ decays but is also sensitive to $D^0$ mesons from other sources. No evidence for an excess of events over the expected background is observed. An upper limit on the branching fraction of this decay is set at $\mathcal{B}(D^0 \to \mu^+\mu^-) < 3.1 \times 10^{-9}$ at a $90\%$ CL. This represents the world's most stringent limit, constraining models of physics beyond the Standard Model.
This Letter reports the observation of single top quarks produced together with a photon, which directly probes the electroweak coupling of the top quark. The analysis uses 139 fb$^{-1}$ of 13 TeV proton-proton collision data collected with the ATLAS detector at the Large Hadron Collider. Requiring a photon with transverse momentum larger than 20 GeV and within the detector acceptance, the fiducial cross section is measured to be 688 $\pm$ 23 (stat.) $^{+75}_{-71}$ (syst.) fb, to be compared with the standard model prediction of 515 $^{+36}_{-42}$ fb at next-to-leading order in QCD.
Accurate jet charge identification is essential for precise electroweak and flavor measurements at the high-energy frontier. We propose a novel method, the Leading Particle Jet Charge method (LPJC), to determine the jet charge from information about the leading charged particle. Tested on $Z \to b\bar{b}$ and $Z \to c\bar{c}$ samples at a center-of-mass energy of 91.2 GeV, the LPJC achieves an effective tagging power of 20%/9% for c/b jets, respectively. Combined with the Weighted Jet Charge method (WJC), we develop a Heavy Flavor Jet Charge method (HFJC), which achieves an effective tagging power of 39%/20% for c/b jets, respectively. This paper also discusses the dependence of jet charge identification performance on the fragmentation process of heavy flavor jets and on critical detector performance.
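As a toy illustration (not the authors' implementation), a momentum-weighted jet charge of the standard form $Q_\kappa=\sum_i q_i\,p_{T,i}^{\kappa}/\sum_i p_{T,i}^{\kappa}$ over the charged constituents can be sketched as follows; the particle list and the choice $\kappa=0.5$ are made up for the example.

```python
def weighted_jet_charge(particles, kappa=0.5):
    """Momentum-weighted jet charge Q = sum(q_i * pT_i^k) / sum(pT_i^k),
    where `particles` is a list of (charge, pT) pairs for the charged
    constituents of the jet."""
    num = sum(q * pt**kappa for q, pt in particles)
    den = sum(pt**kappa for _, pt in particles)
    return num / den

# Hypothetical charged constituents of a positive jet: (charge, pT in GeV)
jet = [(+1, 25.0), (-1, 8.0), (+1, 5.0), (-1, 1.5)]
print(weighted_jet_charge(jet))
```

The weighting emphasizes the hard constituents, which retain the most memory of the initiating quark's charge; a leading-particle method instead keys on the single hardest track.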
The first observation of the decays $J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0}$ and $J\!/\!\psi \rightarrow p \bar{\Sigma}^{-} K_{S}^{0}$ is reported using $(10087\pm44)\times10^{6}$ $J\!/\!\psi$ events recorded by the BESIII detector at the BEPCII storage ring. The branching fractions of each channel are determined to be $\mathcal{B}(J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0})=(1.361 \pm 0.006 \pm 0.025) \times 10^{-4}$ and $\mathcal{B}(J\!/\!\psi \rightarrow p \bar{\Sigma}^{-} K_{S}^{0})=(1.352 \pm 0.006 \pm 0.025) \times 10^{-4}$. The combined result is $\mathcal{B}(J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0} +c.c.)=(2.725 \pm 0.009 \pm 0.050) \times 10^{-4}$, where the first uncertainty is statistical and the second systematic. The results presented are in good agreement with the branching fractions of the isospin partner decay $J\!/\!\psi \rightarrow p K^- \bar\Sigma^0 + c.c.$.
Based on $(10.09 \pm 0.04) \times 10^9$ $J/\psi$ events collected with the BESIII detector operating at the BEPCII collider, a partial wave analysis of the decay $J/\psi \to \phi \pi^{0}\eta$ is performed. We observe for the first time two new structures in the $\phi\eta$ invariant mass distribution, with statistical significances of $24.0\sigma$ and $16.9\sigma$; the first with $J^{\rm PC}$ = $1^{+-}$, mass M = (1911 $\pm$ 6 (stat.) $\pm$ 14 (sys.))~MeV/$c^{2}$, and width $\Gamma = $ (149 $\pm$ 12 (stat.) $\pm$ 23 (sys.))~MeV, the second with $J^{\rm PC}$ = $1^{--}$, mass M = (1996 $\pm$ 11 (stat.) $\pm$ 30 (sys.))~MeV/$c^{2}$, and width $\Gamma$ = (148 $\pm$ 16 (stat.) $\pm$ 66 (sys.))~MeV. These measurements provide important input for the strangeonium spectrum. In addition, the $f_0(980)-a_0(980)^0$ mixing signal in $J/\psi \to \phi f_0(980) \to \phi a_0(980)^0$ and the corresponding electromagnetic decay $J/\psi \to \phi a_0(980)^0$ are measured with improved precision, providing crucial information to understand the nature of $a_0(980)^0$ and $f_0(980)$.
DarkSide-20k will be the next liquid argon TPC built to perform a direct search for dark matter in the form of WIMPs. Its calibration to both signal and backgrounds is key, as very few events are expected in the WIMP search. In this proceeding, aspects of the calibration of the TPC of DarkSide-20k are presented: the calibration system itself, the simulations of the calibration programs, and the simulations of the impact of the calibration system on the rest of the detector (reduction of the light collection efficiency in the veto buffer, and background induced by the system in the TPC and veto).
We measure the decay time and the attenuation length of newly developed wavelength-shifting fibers, the YS series from Kuraray, which have a fast response. Using a 405 nm laser, the decay times of the YS-2, 4, and 6 are measured to be $3.70 \pm 0.04$ ns, $2.06 \pm 0.03$ ns, and $1.50 \pm 0.02$ ns, respectively. The decay time of Y-11 is measured to be $7.16 \pm 0.09$ ns using the same system. All fibers are found to have similar attenuation lengths of more than 4 meters. When combined with the plastic scintillators EJ-200 and EJ-204, the YS series have better time resolution than Y-11, with light yields of 60-100% of Y-11.
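As an illustration of the kind of measurement involved (our own toy, not the authors' analysis): photon arrival times from a fiber follow an exponential decay, so the decay time can be recovered from a log-linear fit to the timing histogram. The simulated sample size and binning below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 2.06                              # ns, e.g. the YS-4 value above
times = rng.exponential(tau_true, 200_000)   # simulated photon arrival times

# Histogram the decay curve and fit log(counts) vs. t with a straight line;
# the slope of the line is -1/tau.
counts, edges = np.histogram(times, bins=60, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 50                           # skip low-statistics bins
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
tau_fit = -1.0 / slope
print(tau_fit)
```

In a real measurement one would also deconvolve the instrumental response of the laser and photodetector, which this sketch ignores.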
The industry of quantum technologies is rapidly expanding, offering promising opportunities for various scientific domains. Among these emerging technologies, Quantum Machine Learning (QML) has attracted considerable attention due to its potential to revolutionize data processing and analysis. In this paper, we investigate the application of QML in the field of remote sensing. It is believed that QML can provide valuable insights for the analysis of data from space. We delve into the common beliefs surrounding the quantum advantage in QML for remote sensing and highlight the open challenges that need to be addressed. To shed light on these challenges, we conduct a study focused on the problem of kernel value concentration, a phenomenon that adversely affects the runtime of quantum computers. Our findings indicate that while this issue negatively impacts quantum computer performance, it does not entirely negate the potential quantum advantage in QML for remote sensing.
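As a toy illustration of kernel value concentration (our own sketch, not the paper's study): fidelity-type kernel entries between independent Haar-random states concentrate around $1/d$, so for unrelated inputs the kernel matrix entries become exponentially close in the number of qubits and require exponentially many shots to distinguish.

```python
import numpy as np

def random_state(d, rng):
    """Haar-random pure state via a normalized complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
means = {}
for n_qubits in (4, 8):
    d = 2**n_qubits
    # Fidelity kernel entries |<psi|phi>|^2 for independent random states
    overlaps = [abs(np.vdot(random_state(d, rng), random_state(d, rng)))**2
                for _ in range(1000)]
    means[d] = float(np.mean(overlaps))
print(means)   # mean overlap ~ 1/d: entries concentrate as qubits are added
```

Structured feature maps on real data need not behave like Haar-random states, which is one reason concentration does not automatically rule out an advantage.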
Machine learning tasks are an exciting application for quantum computers, as it has been proven that they can learn certain problems more efficiently than classical ones. Applying quantum machine learning algorithms to classical data can have many important applications, as qubits allow for dealing with exponentially more data than classical bits. However, preparing the corresponding quantum states usually requires an exponential number of gates and therefore may ruin any potential quantum speedups. Here, we show that classical data with a sufficiently quickly decaying Fourier spectrum after being mapped to a quantum state can be well-approximated by states with a small Schmidt rank (i.e., matrix product states) and we derive explicit error bounds. These approximated states can, in turn, be prepared on a quantum computer with a linear number of nearest-neighbor two-qubit gates. We confirm our results numerically on a set of $1024\times1024$-pixel images taken from the 'Imagenette' dataset. Additionally, we consider different variational circuit ans\"atze and demonstrate numerically that one-dimensional sequential circuits achieve the same compression quality as more powerful ans\"atze.
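A toy version of the compression argument (our own sketch with a synthetic smooth image, not the paper's Imagenette experiment): amplitude-encode the pixels, reshape the state into a matrix over a bipartition of the qubits, and truncate its Schmidt (singular-value) decomposition. A rapidly decaying Fourier spectrum translates into rapidly decaying Schmidt coefficients, so a small rank suffices.

```python
import numpy as np

# Synthetic smooth (hence rapidly decaying Fourier spectrum) 32x32 image
x = np.linspace(-1, 1, 32)
X, Y = np.meshgrid(x, x)
img = 1.0 / (1.0 + X**2 + Y**2) + 0.3 * np.cos(2 * np.pi * (X + 0.5 * Y))

# Amplitude encoding: flatten and normalize into a 1024-dim state vector,
# then view it as a 32x32 matrix splitting the ten qubits into two halves.
psi = img.flatten() / np.linalg.norm(img)
M = psi.reshape(32, 32)

# Keep only the chi largest Schmidt coefficients and renormalize
U, s, Vt = np.linalg.svd(M)
chi = 8
M_trunc = U[:, :chi] * s[:chi] @ Vt[:chi]
phi = M_trunc.flatten()
phi /= np.linalg.norm(phi)

fidelity = abs(np.dot(psi, phi))**2
print(fidelity)
```

A bond-dimension-$\chi$ matrix product state over the same bipartition reproduces exactly this truncation, and can be prepared with a linear number of nearest-neighbor two-qubit gates.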
We develop quantum information processing primitives for the planar rotor, the state space of a particle on a circle. By interpreting rotor wavefunctions as periodically identified wavefunctions of a harmonic oscillator, we determine the group of bosonic Gaussian operations inherited by the rotor. This $n$-rotor Clifford group, $\text{U}(1)^{n(n+1)/2} \rtimes \text{GL}_n(\mathbb{Z})$, is represented by continuous $\text{U}(1)$ gates generated by polynomials quadratic in angular momenta, as well as discrete $\text{GL}_n(\mathbb Z)$ momentum sign-flip and sum gates. We classify homological rotor error-correcting codes [arXiv:2303.13723] and various rotor states based on equivalence under Clifford operations. Reversing direction, we map homological rotor codes and rotor Clifford operations back into oscillators by interpreting occupation-number states as rotor states of non-negative angular momentum. This yields new multimode homological bosonic codes protecting against dephasing and changes in occupation number, along with their corresponding encoding and decoding circuits. In particular, we show how to non-destructively measure the oscillator phase using conditional occupation-number addition and postselection. We also outline several rotor and oscillator varieties of the GKP-stabilizer codes [arXiv:1903.12615].
Assessment of practical quantum information processing (QIP) remains partial without understanding the limits imposed by noise. Unfortunately, the mere description of noise grows exponentially with system size, becoming cumbersome even for modest-sized systems of imminent practical interest. We fulfill the need for estimates on performing noisy quantum state preparation, verification, and observation. To do the estimation we propose fast numerical algorithms to maximize the expectation value of any $d$-dimensional observable over states of bounded purity. This bound on purity factors in noise in a measurable way. Our fastest algorithm takes $O(d)$ steps if the eigendecomposition of the observable is known, and otherwise takes $O(d^3)$ steps at worst. The algorithms also solve maximum likelihood estimation for quantum state tomography with convex, and even non-convex, purity constraints. Numerics show that the performance of our key sub-routine (which finds, in linear time, a probability vector with bounded norm that most overlaps with a fixed vector) can be several orders of magnitude faster than a common state-of-the-art convex optimization solver. Our work fosters a practical way forward to assess limitations on QIP imposed by quantum noise. Along the way, we also give a simple but fundamental insight: noisy systems (equivalently, noisy Hamiltonians) always give a higher ground-state energy than their noiseless counterparts.
Understanding the stability of strongly correlated phases of matter when coupled to environmental degrees of freedom is crucial for identifying the conditions under which these states may be observed. Here, we focus on the paradigmatic one-dimensional Bose-Hubbard model, and study the stability of the Luttinger liquid and Mott insulating phases in the presence of local particle exchange with site-independent baths of non-interacting bosons. We perform a numerically exact analysis of this model by adapting the recently developed wormhole quantum Monte Carlo method for retarded interactions to a continuous-time formulation with worm updates; we show how the wormhole updates can be easily implemented in this scheme. For an Ohmic bath, our numerical findings confirm the scaling prediction that the Luttinger-liquid phase becomes unstable at infinitesimal bath coupling. We show that the ensuing phase is a long-range ordered superfluid with spontaneously-broken U(1) symmetry. While the Mott insulator remains a distinct phase for small bath coupling, it exhibits diverging compressibility and non-integer local boson occupation in the thermodynamic limit. Upon increasing the bath coupling, this phase undergoes a transition to a long-range ordered superfluid. Finally, we discuss the effects of super-Ohmic dissipation on the Luttinger-liquid phase. Our results are compatible with a stable dissipative Luttinger-liquid phase that transitions to a long-range ordered superfluid at a finite system-bath coupling.
We analyze the ergodic properties of a metallic wavefunction for the Anderson model on a disordered random regular graph with branching number $k=2$. The first few $q$-moments $I_q$ associated with the zero-energy eigenvector are numerically computed up to sizes $N=4\times 10^6$. We extract their corresponding fractal dimensions $D_q$ in the thermodynamic limit together with correlated volumes $N_q$ that control finite-size effects. At intermediate values of disorder $W$, we obtain ergodicity $D_q=1$ for $q=1,2$ and correlation volumes that increase fast upon approaching the Anderson transition, $\log(\log(N_q))\sim W$. We then focus on the extraction of the volume $N_0$ associated with the typical value of the wavefunction $e^{\langle\log|\psi|^2\rangle}$, which follows a similar tendency as the ones for $N_1$ or $N_2$. Its value at intermediate disorders is close to, but smaller than, the so-called ergodic volume previously found via the supersymmetric formalism and belief-propagation algorithms. None of the computed correlated volumes shows a tendency to diverge up to disorders $W\approx 15$; specifically, none with exponent $\nu=1/2$. Deeper in the metal, we characterize the crossover to system sizes much smaller than the first correlated volume, $N_1\gg N$. Once this crossover has taken place, we obtain evidence of a scaling in which the derivative of the first fractal dimension $D_1$ behaves critically with an exponent $\nu=1$.
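As a self-contained illustration of the quantities involved (not the paper's computation), the finite-size fractal dimension $D_q \simeq \ln I_q/[(1-q)\ln N]$, with $I_q=\sum_i |\psi_i|^{2q}$, distinguishes an ergodic random vector from a localized one:

```python
import numpy as np

def fractal_dimension(psi, q):
    """Finite-size fractal dimension D_q = ln(I_q) / ((1-q) ln N),
    with I_q = sum_i |psi_i|^(2q), for a normalized state psi."""
    N = psi.size
    Iq = np.sum(np.abs(psi)**(2 * q))
    return np.log(Iq) / ((1 - q) * np.log(N))

rng = np.random.default_rng(0)
N = 2**16

# Ergodic-like state: normalized complex Gaussian vector, so I_2 ~ 2/N and
# D_2 = 1 - ln(2)/ln(N) -> 1 in the thermodynamic limit
v = rng.normal(size=N) + 1j * rng.normal(size=N)
ergodic = v / np.linalg.norm(v)

# Localized state: all weight on one site, giving D_q = 0 for every q
localized = np.zeros(N)
localized[0] = 1.0

print(fractal_dimension(ergodic, 2), fractal_dimension(localized, 2))
```

The $\ln 2/\ln N$ offset at finite $N$ is precisely the kind of finite-size effect that the correlated volumes $N_q$ are introduced to control.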
Non-signalling conditions encode minimal requirements that any (quantum) systems put into spatial arrangements must satisfy in order to be consistent with special relativity. Recent works have argued that in scenarios involving more than two parties, conditions compatible with relativistic causality do not have to satisfy all possible non-signalling conditions but only a subset of them. Here we show that correlations satisfying only this subset of constraints have to satisfy highly non-local monogamy relations between the effects of space-like separated random variables. These monogamy relations take the form of new entropic inequalities between the various systems, and we give a general method to derive them. Using these monogamy relations we refute previous suggestions for physical mechanisms that could lead to relativistically causal correlations, demonstrating that such mechanisms would lead to superluminal signalling.
This pedagogical article solves an interesting problem in quantum measure theory. Although a quantum measure $\mu$ is a generalization of an ordinary probability measure, $\mu$ need not satisfy the usual additivity condition. Instead, $\mu$ satisfies a grade-2 additivity condition. Besides the quantum measure problem, we present three additional combinatorial problems. These are (1)\enspace A sum of binomial coefficients problem;\enspace (2)\enspace A recurrence relation problem; and\enspace (3)\enspace An iterated vector problem. We show that these three problems are equivalent in that they have a common solution. We then show that this solves the original quantum measure problem.
Blockmodeling of a given problem represented by an $N\times N$ adjacency matrix can be found by swapping rows and columns of the matrix (i.e., multiplying the matrix from the left and right by a permutation matrix). In general, through performing this task, row and column permutations affect the fitness value in optimization: for an $N\times N$ matrix, it requires $O(N)$ computations to find (or update) the fitness value of a candidate solution. On quantum computers, permutations can be applied in parallel and efficiently, and their implementations can be as simple as a single-qubit operation (a NOT gate on a qubit), which takes an $O(1)$-time algorithmic step. In this paper, using permutation matrices, we describe a quantum blockmodeling for data analysis tasks. In the model, the measurement outcomes of a small group of qubits are mapped to indicate the fitness value. Therefore, we show that it is possible to find or update the fitness value in $O(\log(N))$ time. This leads us to show that when the number of iterations is less than $\log(N)$, it may be possible to reach the same solution exponentially faster on quantum computers in comparison to classical computers. In addition, since on quantum circuits different sequences of permutations can be applied in parallel (in superposition), the machine learning task in this model can be implemented more efficiently on quantum computers.
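For contrast with the classical side of this argument, here is a minimal sketch (our own, with a made-up within-block edge-count fitness, not the paper's model) of blockmodeling by simultaneous row/column permutation, $P A P^{T}$, of an adjacency matrix:

```python
import numpy as np

def permute(A, perm):
    """Apply the same permutation to rows and columns: P A P^T."""
    return A[np.ix_(perm, perm)]

def block_fitness(A, k=2):
    """Toy fitness: number of edges falling inside k equal diagonal blocks."""
    n = A.shape[0] // k
    return sum(A[i*n:(i+1)*n, i*n:(i+1)*n].sum() for i in range(k))

rng = np.random.default_rng(0)
# Planted 2-block structure: dense within blocks, empty across them
A = np.zeros((8, 8), dtype=int)
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)

scramble = rng.permutation(8)
B = permute(A, scramble)              # hide the block structure
unscramble = np.argsort(scramble)     # inverse permutation
print(block_fitness(B), block_fitness(permute(B, unscramble)))
```

Each fitness evaluation here touches $O(N^2)$ entries (or $O(N)$ per updated row/column), which is the classical cost the quantum encoding aims to beat.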
Efficient transfer of quantum information between remote parties is a crucial challenge for quantum communication over atmospheric channels. Random fluctuations of the channel transmittance are a major disturbing factor for its practical implementation. We study correlations between channel transmittances at different moments of time and focus on two transmission protocols. The first is related to the robustness of both discrete- and continuous-variable entanglement between time-separated light pulses, showing a possibility to enlarge the effective dimension of the Hilbert space. The second addresses a preselection of high-transmittance events by testing them with bright classical pulses followed by quantum light. Our results show a high capacity of the time-coherence resource for encoding and transferring quantum states of light in atmospheric channels.
Let $n\geq 8$ be an integer divisible by 4. The Clifford-cyclotomic gate set $\mathcal{G}_n$ is the universal gate set obtained by extending the Clifford gates with the $z$-rotation $T_n = \mathrm{diag}(1,\zeta_n)$, where $\zeta_n$ is a primitive $n$-th root of unity. In this note, we show that, when $n$ is a power of 2, a multiqubit unitary matrix $U$ can be exactly represented by a circuit over $\mathcal{G}_n$ if and only if the entries of $U$ belong to the ring $\mathbb{Z}[1/2,\zeta_n]$. We moreover show that $\log(n)-2$ ancillas are always sufficient to construct a circuit for $U$. Our results generalize prior work to an infinite family of gate sets and show that the limitations that apply to single-qubit unitaries, for which the correspondence between Clifford-cyclotomic operators and matrices over $\mathbb{Z}[1/2,\zeta_n]$ fails for all but finitely many values of $n$, can be overcome through the use of ancillas.
While the Vlasov-Maxwell equations provide an \textit{ab-initio} description of collisionless plasmas, solving them is often impractical due to high computational costs. In this work, we implement a semi-implicit Vlasov-Maxwell solver utilizing the quantized tensor network (QTN) framework. This framework allows one to efficiently represent and manipulate low-rank approximations of high-dimensional data sets. As a result, the cost of the solver scales polynomially with parameter $D$ (the so-called bond dimension), which is directly related to the error associated with the low-rank approximation. By increasing $D$, convergence to the dynamics that the solver would obtain without any low-rank approximation is guaranteed. We find that for the 2D3V test problems considered here, a modest $D=64$ appears to be sufficient for capturing the expected physics, despite the simulations using a total of $2^{36}$ grid points and thus requiring $D=2^{18}$ for exact calculations. Additionally, we utilize a QTN time evolution scheme based on the Dirac-Frenkel variational principle, which allows us to use larger time steps than that prescribed by the Courant-Friedrichs-Lewy (CFL) constraint. As such, the QTN format appears to be a promising means of approximately solving the Vlasov-Maxwell equations with significantly reduced cost.
Quantum copy protection, introduced by Aaronson, enables giving out a quantum program-description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs. As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection. Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs -- significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection. Finally, we construct qsiO relative to an efficient quantum oracle.
In this work, we employ a differential evolution algorithm to identify the optimal configurations of small atomic ensembles supporting quantum states with maximal radiative lifetime. We demonstrate that atoms mostly tend to assemble in quasi-regular structures whose specific geometry depends strongly on the minimal interatomic distance $r_{min}$. We identify the underlying physics that governs the suppression of the radiative losses in particular geometries. However, we reveal that the specific configurations in small ensembles are not easily predictable based on the knowledge established for arrays of large size. In particular, the states that inherit their properties from bound states in the continuum in infinite lattices turn out to be the most subradiant over a wide range of $r_{min}$ values. We also show that for small interatomic distances, chains with modulated interatomic distances exhibit a fast exponential decrease of the radiative losses with the size of the ensemble.
It has been experimentally demonstrated that molecular-vibration polaritons formed by strong coupling of a molecular vibration to an infrared cavity mode can significantly modify the physical properties and chemical reactivity of various molecular systems. However, a complete theoretical understanding of the underlying mechanisms of the modifications remains elusive due to the complexity of the hybrid system, especially the collective nature of polaritonic states in systems containing many molecules. We develop here the semiclassical theory of molecular-vibration-polariton dynamics based on the truncated Wigner approximation that is tractable in large molecular systems and simultaneously captures the quantum character of photons in the optical cavity. The theory is then applied to investigate the nuclear dynamics of a system of identical diatomic molecules having the ground-state Morse potential and strongly coupled to an infrared cavity mode in the ultrastrong coupling regime. The collective and resonance effects of the molecular-vibration-polariton formation on the nuclear dynamics are observed.
In this manuscript, we present a coherence measure based on the quantum optimal transport cost via the convex-roof extension. We also obtain analytical solutions of the quantifier for pure states. Finally, we propose an operational interpretation of the coherence measure for pure states.
We utilize degenerate perturbation theory to investigate continuous-time quantum search on second-order truncated simplex lattices. In this work, we show that the construction of the Hamiltonian must consider the structure of the lattice. This idea enables effective application of degenerate perturbation theory to third- and higher-order lattices. We identify two constraints on the reduction of the dimension of the Hamiltonian. In addition, we elucidate the influence of the distinct configurations of marked vertices on the quantum search.
We propose a quantum state distance and develop a family of geometrical quantum speed limits (QSLs) for open and closed systems. The QSL time contains a free alternative function, and with particular choices of this function we derive three QSL times. Two of them are exactly the ones presented in Refs. [1] and [2], respectively, while the third provides a unified QSL time for both open and closed systems. The three QSL times are attainable for any given initial state, in the sense that there exists a dynamics driving the initial state along the geodesic. We numerically compare the tightness of the three QSL times and find that optimizing the alternative function typically yields a tighter QSL time.
The Quantum Approximate Optimization Algorithm (QAOA), a pivotal paradigm in the realm of variational quantum algorithms (VQAs), offers promising computational advantages for tackling combinatorial optimization problems. Well-defined initial circuit parameters, responsible for preparing a parameterized quantum state encoding the solution, play a key role in optimizing QAOA. However, classical optimization techniques encounter challenges in discerning optimal parameters that align with the optimal solution. In this work, we propose a hybrid optimization approach that integrates Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and a bilinear strategy as an innovative alternative to conventional optimizers for predicting optimal parameters of QAOA circuits. GRU serves to stochastically initialize favorable parameters for depth-1 circuits, while CNN predicts initial parameters for depth-2 circuits based on the optimized parameters of depth-1 circuits. To assess the efficacy of our approach, we conducted a comparative analysis with traditional initialization methods using QAOA on Erd\H{o}s-R\'enyi graph instances, revealing superior approximation ratios. We employ the bilinear strategy to initialize QAOA circuit parameters at greater depths, with reference parameters obtained from GRU-CNN optimization. This approach allows us to forecast parameters for a depth-12 QAOA circuit, yielding a remarkable approximation ratio of 0.998 across 10 qubits, which surpasses that of the random initialization strategy and the PPN2 method at a depth of 10. The proposed hybrid GRU-CNN bilinear optimization method significantly improves the effectiveness and accuracy of parameter initialization, offering a promising iterative framework for QAOA that elevates its performance.
Estimation of physical observables for unknown quantum states is an important problem that underlies a wide range of fields, including quantum information processing, quantum physics, and quantum chemistry. In the context of quantum computation, in particular, existing studies have mainly focused on holistic state tomography or on estimation of specific observables with known classical descriptions, leaving aside the important class of problems where the estimation target itself depends on the measurement outcome. In this work, we propose an adaptive measurement optimization method that is useful for quantum subspace expansion (QSE) methods, namely the variational simulation methods that utilize classical postprocessing on measurement outcomes. The proposed method first determines the measurement protocol based on the QSE calculation for classically simulatable states, and then adaptively updates the protocol according to the quantum measurement results. As a numerical demonstration, we show for excited-state simulation of molecules that (i) we are able to reduce the number of measurements by an order of magnitude by constructing an appropriate measurement strategy, and (ii) the adaptive iteration converges successfully even for the strongly correlated molecule H$_4$. Our work reveals that the potential of the QSE method can be enhanced by elaborate measurement protocols, and opens a path to further pursue efficient quantum measurement techniques in practical computations.
We present perturbative energy eigenvalues (up to second order) for Coulomb- and harmonic-oscillator-type fields within a perturbation scheme. The required unperturbed eigenvalues ($E_{n}^{(0)}$) are obtained analytically by exploiting, via the Langer transformation, similarities between the expressions derived from the unperturbed Hamiltonians for the two fields and those for the one-dimensional generalized Morse field. To obtain the energy eigenvalues perturbatively, we need the diagonal and off-diagonal matrix elements of the unperturbed and perturbed Hamiltonians, which are obtained with the help of recursion identities and of integrals of generalized Laguerre polynomials that have analytical results.
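For reference, the standard Rayleigh-Schrödinger expressions underlying such a second-order calculation, in terms of the matrix elements $V_{nm}=\langle n^{(0)}|V|m^{(0)}\rangle$ of the perturbation in the unperturbed basis:

```latex
E_n \simeq E_n^{(0)} + V_{nn}
      + \sum_{m \neq n} \frac{|V_{nm}|^2}{E_n^{(0)} - E_m^{(0)}}
```

The diagonal element $V_{nn}$ supplies the first-order shift, while the off-diagonal elements enter only at second order, which is why both kinds of matrix elements are needed above.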
The concept of non-Hermiticity has expanded the understanding of band topology, leading to the emergence of counter-intuitive phenomena. One example is the non-Hermitian skin effect (NHSE), which involves the concentration of eigenstates at the boundary. However, despite the potential insights to be gained from high-dimensional non-Hermitian quantum systems in areas such as curved space, higher-order topological phases, and black holes, the realization of this effect in high dimensions has remained unexplored. Here, we create a two-dimensional (2D) non-Hermitian topological band for ultracold fermions in spin-orbit-coupled optical lattices with tunable dissipation, and experimentally examine the spectral topology in the complex eigenenergy plane. We experimentally demonstrate pronounced nonzero spectral winding numbers when dissipation is added to the system, which establishes the existence of the 2D skin effect. We also demonstrate that a pair of exceptional points (EPs) is created in momentum space, connected by an open-ended bulk Fermi arc, in contrast to the closed loops found in Hermitian systems. The associated EPs emerge and shift with increasing dissipation, leading to the formation of the Fermi arc. Our work sets the stage for further investigation into simulating non-Hermitian physics in high dimensions and paves the way for understanding the interplay of quantum statistics with the NHSE.
Excitation energy transport can be significantly enhanced by strong light-matter interactions. In the present work, we explore intriguing features of coherent transient exciton wave packet dynamics on a lossless disordered polaritonic wire. Our main results can be understood in terms of the effective exciton group velocity, a new quantity we obtain from the polariton dispersion. Under weak and moderate disorder, we find that the early wave packet spread velocity is controlled by the overlap of the initial exciton momentum distribution and its effective group velocity. Conversely, when disorder is stronger, the initial state is nearly irrelevant, and red-shifted cavities support excitons with greater mobility. Our findings provide guiding principles for optimizing ultrafast coherent exciton transport based on the magnitude of disorder and the polariton dispersion. The presented perspectives may be valuable for understanding and designing new polaritonic platforms for enhanced exciton energy transport.
We explore defect formation and universal critical dynamics in two-dimensional (2D) two-component Bose-Einstein condensates (BECs) subjected to two types of potential traps: a homogeneous trap and a harmonic trap. We focus on the non-equilibrium universal dynamics of the miscible-immiscible phase transition for both linear and nonlinear quench protocols. Given the spatial independence of the critical point, we find that the inhomogeneity of the trap does not affect the phase transition of the system, and the critical exponents can still be explained by the homogeneous Kibble-Zurek mechanism. By analyzing the Bogoliubov excitations, we establish a power-law relationship between the domain correlation length, the phase transition delay, and the quench time. Furthermore, through real-time simulations of the phase transition dynamics, we demonstrate the formation of domain defects and the delay of the phase transition in non-equilibrium dynamics, along with the corresponding universal scaling of the correlation length and phase transition delay for various quench times and quench coefficients, in good agreement with our analytical predictions. Our study confirms that the universality class of two-component BECs remains unaffected by dimensionality, while larger nonlinear coefficients effectively suppress non-adiabatic excitations, offering a novel perspective on achieving adiabatic evolution.
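The power-law relations referred to above take the standard Kibble-Zurek form for a linear quench (standard KZM expressions; the equilibrium correlation-length exponent $\nu$ and dynamical exponent $z$ are not quoted in the abstract):

```latex
% Kibble-Zurek scaling for a linear quench of duration \tau_Q:
\hat{t} \sim \tau_Q^{\,\nu z/(1+\nu z)}, \qquad
\hat{\xi} \sim \tau_Q^{\,\nu/(1+\nu z)},
% where \hat{t} is the freeze-out time (phase-transition delay) and
% \hat{\xi} is the domain correlation length at freeze-out.
```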
Squeezed states of light have been used extensively to increase the precision of measurements, from the detection of gravitational waves to the search for dark matter. In the optical domain, high levels of vacuum noise squeezing are possible due to the availability of low-loss optical components and high-performance squeezers. At microwave frequencies, however, limitations of the squeezing devices and the high insertion loss of microwave components make squeezing vacuum noise an exceptionally difficult task. Here we demonstrate a new record for the direct measurement of microwave squeezing. We use an ultra-low-loss setup and weakly nonlinear kinetic inductance parametric amplifiers to squeeze microwave noise 7.8(2) dB below the vacuum level. The amplifiers exhibit resilience to magnetic fields and permit the demonstration of record squeezing levels inside fields of up to 2 T. Finally, we exploit the high critical temperature of our amplifiers to squeeze a warm thermal environment, achieving vacuum-level noise at a temperature of 1.8 K. These results enable experiments that combine squeezing with magnetic fields and permit quantum-limited microwave measurements at elevated temperatures, significantly reducing the complexity and cost of the cryogenic systems required for such experiments.
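For reference, squeezing levels quoted in dB can be converted to noise-variance ratios with the standard dB relation (a convenience snippet; only the 7.8 dB figure is taken from the abstract):

```python
# Convert a squeezing level (dB below the vacuum level) to the
# corresponding noise variance relative to vacuum, V/V_vac = 10^(-S/10).

def squeezed_variance_ratio(squeezing_db: float) -> float:
    """Noise variance relative to the vacuum level for squeezing S in dB."""
    return 10 ** (-squeezing_db / 10)

ratio = squeezed_variance_ratio(7.8)
print(f"7.8 dB of squeezing -> {ratio:.3f} of the vacuum noise variance")
```

So the reported 7.8 dB corresponds to roughly one sixth of the vacuum noise variance.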
We explore decoding methods for the surface code under depolarizing noise by mapping the problem onto an Ising-model optimization. We consider two kinds of mapping, with and without a soft constraint, as well as various optimization solvers, including simulated annealing (SA) implemented on a CPU, the "Fujitsu Digital Annealer" (DA), a hardware architecture specialized for Ising problems, and CPLEX, an exact integer-programming solver. We find that the proposed Ising-based decoding approaches provide higher accuracy than the minimum-weight perfect matching (MWPM) algorithm for depolarizing noise, comparable to that of minimum-distance decoding using CPLEX. While the decoding time is longer than MWPM's on a single CPU core, our method is amenable to parallelization and easy to implement on dedicated hardware, suggesting potential future speedups. Regarding the two mappings to the Ising model, the SA decoder yielded higher accuracy without the soft constraint. In contrast, the DA decoder shows less difference between the two mappings, indicating that DA can find a better solution within a smaller number of iterations even under the soft constraint. Our results are important for devising efficient and fast decoders feasible with quantum-computer control devices.
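The optimization primitive behind the SA decoder can be illustrated with a generic simulated-annealing loop for an Ising energy $E(s) = -\sum_{i<j} J_{ij} s_i s_j$ (a minimal sketch on a toy ferromagnetic chain; the couplings, schedule, and step counts are illustrative, not the paper's surface-code mapping):

```python
import math
import random

def anneal(J, n, steps=5000, t0=2.0, t1=0.05, seed=0):
    """Minimize E(s) = -sum_{i<j} J[i][j] s_i s_j by simulated annealing."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(spins):
        return -sum(J[i][j] * spins[i] * spins[j]
                    for i in range(n) for j in range(i + 1, n))

    e = energy(s)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)   # geometric cooling schedule
        i = rng.randrange(n)
        # Energy change from flipping spin i: dE = 2 s_i sum_j J_ij s_j.
        de = 2 * s[i] * sum(J[min(i, j)][max(i, j)] * s[j]
                            for j in range(n) if j != i)
        if de <= 0 or rng.random() < math.exp(-de / t):
            s[i] = -s[i]
            e += de
    return s, e

# Toy instance: ferromagnetic chain with ground-state energy -(n-1).
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    J[i][i + 1] = 1.0

spins, e = anneal(J, n)
print("final energy:", e)
```

In the decoding setting the couplings would instead encode the syndrome constraints of the surface code, and hardware such as the DA replaces this software loop.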
It is a common perception that a sharp projective measurement on one side of a Bell experiment destroys the entanglement of the shared state, thereby preventing the demonstration of sequential sharing of nonlocality. In contrast, we introduce a local randomness-assisted projective measurement protocol enabling the sharing of nonlocality by an arbitrary number of sequential observers (Bobs) with a single spatially separated party, Alice. Subsequently, we reveal a crucial feature of the interplay between the degrees of incompatibility of the observables of the two parties that enables the unbounded sharing of nonlocality. Our findings not only offer a new paradigm for understanding the fundamental role of incompatibility in demonstrating quantum nonlocality but also pave a new path for various information processing tasks based on local randomness-assisted projective measurements.
Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; and a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the Dandelion Chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes, as well as the novel Dandelion Chart integrated into it, through two case studies on different types of quantum algorithms and in-depth interviews with 12 domain experts. The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.
Two quantum systems, each described by a random-matrix ensemble, are coupled to each other via a number of transition states. Each system is strongly coupled to a large number of channels. The average transmission probability is the product of three factors describing, respectively, formation of the first system from the entrance channel, decay of the second system through the exit channel, and transport through the transition states. Each transition state contributes a Breit-Wigner resonance; in general, the resonances overlap.
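The Breit-Wigner contribution of a single transition state has the familiar resonance form (schematic; the widths $\Gamma_a^{(1,2)}$ and normalization conventions are generic textbook choices, not specified in the abstract):

```latex
% Breit-Wigner factor of transition state a at resonance energy E_a:
T_a(E) \;\propto\; \frac{\Gamma_a^{(1)}\,\Gamma_a^{(2)}}
                        {(E - E_a)^2 + \Gamma_a^2/4},
% with total width \Gamma_a = \Gamma_a^{(1)} + \Gamma_a^{(2)};
% overlapping resonances correspond to |E_a - E_b| \lesssim \Gamma_{a,b}.
```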
Let U be a universe on n elements, let k be a positive integer, and let F be a family of (implicitly defined) subsets of U. We consider the problems of partitioning U into k sets from F, covering U with k sets from F, and packing k non-intersecting sets from F into U. Classically, these problems can be solved via inclusion-exclusion in O*(2^n) time [BjorklundHK09]. Quantumly, there are faster algorithms for graph coloring with running time O(1.9140^n) [ShimizuM22] and for Set Cover with a small number of sets with running time O(1.7274^n |F|^O(1)) [AmbainisBIKPV19]. In this paper, we give a quantum speedup for Set Partition, Set Cover, and Set Packing whenever there is a classical enumeration algorithm that lends itself to a quadratic quantum speedup, which, for any subinstance on a subset X of U, enumerates at least one member of a k-partition, k-cover, or k-packing (if one exists) restricted to (or projected onto, in the case of k-cover) the set X in O*(c^{|X|}) time with c<2. Our bounded-error quantum algorithm runs in O*((2+c)^(n/2)) time for Set Partition, Set Cover, and Set Packing. When c<=1.147899, our algorithm is slightly faster than O*((2+c)^(n/2)); when c approaches 1, it matches the running time of [AmbainisBIKPV19] for Set Cover when |F| is subexponential in n. For Graph Coloring, we further improve the running time to O(1.7956^n) by leveraging faster algorithms for coloring with a small number of colors to better balance our divide-and-conquer steps. For Domatic Number, we obtain an O((2-\epsilon)^n) running time for some \epsilon>0.
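The running-time bound O*((2+c)^(n/2)) has effective per-element base sqrt(2+c), which is easy to tabulate against the bases quoted in the abstract (a sanity-check snippet using only numbers from the abstract):

```python
import math

# Effective base of the quantum bound O*((2+c)^(n/2)), i.e. sqrt(2+c),
# compared with the classical inclusion-exclusion base 2 and the
# 1.7274 base of [AmbainisBIKPV19] for Set Cover.

def quantum_base(c: float) -> float:
    """Per-element base of the O*((2+c)^(n/2)) running time."""
    return math.sqrt(2 + c)

for c in (2.0, 1.147899, 1.0):
    print(f"c = {c} -> base {quantum_base(c):.4f}")
```

For c = 2 (trivial enumeration) the base is exactly 2, recovering the classical bound, while c -> 1 gives sqrt(3) ≈ 1.7321, close to the 1.7274 of [AmbainisBIKPV19], consistent with the matching claimed in the abstract.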
Topological characteristics in quantum systems typically determine the ground state, while the corresponding quantum phase transition (QPT) can be identified through quenching dynamics. Based on exact results for extended Kitaev chains, we demonstrate that the system dynamics can be understood through the precession of an ensemble of free pseudo-spins in a magnetic field. The topology of the driven Hamiltonian is determined by the average winding number of the non-equilibrium state. Furthermore, we establish that the singularity of the dynamical quantum phase transition (DQPT) arises from two perpendicular pseudo-spin vectors associated with the pre- and post-quench Hamiltonians. Moreover, we investigate the distinct behaviors of the dynamic pairing order parameter in the topological and non-topological regions. These findings offer valuable insights into the non-equilibrium behavior of topological superconductors, contributing to the understanding of the resilience of topological properties in driven quantum systems.
We investigate theoretically the enhancement of mechanical squeezing in a multimode optomechanical system by introducing a coherent phonon-photon interaction via the backward stimulated Brillouin scattering (BSBS) process. The coherent photon-phonon interaction, in which two optical modes couple to a Brillouin acoustic mode with a large decay rate, provides an extra channel for the cooling of a Duffing mechanical oscillator. The squeezing degree and the robustness to thermal noise of the Duffing mechanical mode can be greatly enhanced. When the Duffing nonlinearity is weak, the squeezing degree of the mechanical mode in the presence of BSBS can be improved by more than an order of magnitude compared with its absence. Our scheme may be extended to other quantum systems to study novel quantum effects.
Quantum supervised learning, utilizing variational circuits, stands out as a promising technology for NISQ devices due to its efficiency in hardware resource utilization during the creation of quantum feature maps and the implementation of hardware-efficient ansatz with trainable parameters. Despite these advantages, the training of quantum models encounters challenges, notably the barren plateau phenomenon, leading to stagnation in learning during optimization iterations. This study proposes an innovative approach: an evolutionary-enhanced ansatz-free supervised learning model. In contrast to parametrized circuits, our model employs circuits with variable topology that evolve through an elitist method, mitigating the barren plateau issue. Additionally, we introduce a novel concept, the superposition of multi-hot encodings, facilitating the treatment of multi-classification problems. Our framework successfully avoids barren plateaus, resulting in enhanced model accuracy. Comparative analysis with state-of-the-art variational quantum classifiers reveals a substantial improvement in training efficiency and precision. Furthermore, we conduct tests on a challenging class of datasets, traditionally problematic for conventional kernel machines, demonstrating a potential alternative path toward quantum advantage in supervised learning in the NISQ era.
Randomized measurements (RMs) provide a practical scheme to probe complex many-body quantum systems. While they are a very powerful tool to extract local information, global properties such as entropy or bipartite entanglement remain hard to probe, requiring a number of measurements or classical post-processing resources growing exponentially in the system size. In this work, we address the problem of estimating global entropies and mixed-state entanglement via partial-transposed (PT) moments, and show that efficient estimation strategies exist under the assumption that all the spatial correlation lengths are finite. Focusing on one-dimensional systems, we identify a set of approximate factorization conditions (AFCs) on the system density matrix which allow us to reconstruct entropies and PT moments from information on local subsystems. Combined with the RM toolbox, this yields a simple strategy for entropy and entanglement estimation which is provably accurate assuming that the state to be measured satisfies the AFCs, and which only requires polynomially-many measurements and post-processing operations. We prove that the AFCs hold for finite-depth quantum-circuit states and translation-invariant matrix-product density operators, and provide numerical evidence that they are satisfied in more general, physically-interesting cases, including thermal states of local Hamiltonians. We argue that our method could be practically useful to detect bipartite mixed-state entanglement for large numbers of qubits available in today's quantum platforms.
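For concreteness, the PT moments $p_k = \mathrm{Tr}[(\rho^{T_A})^k]$ can be computed directly for a small example. The sketch below uses a two-qubit Bell state and the known moment-based PPT condition $p_3 \ge p_2^2$ (so $p_3 < p_2^2$ witnesses entanglement); both the example state and the criterion check are illustrative additions, not taken from the abstract:

```python
import numpy as np

# Partial-transpose (PT) moments p_k = Tr[(rho^{T_A})^k] for the
# two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2).

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)                       # |Phi+><Phi+|

# Partial transpose on the first qubit of a 2x2 (x) 2x2 system:
# (rho^{T_A})_{(ia),(jb)} = rho_{(ja),(ib)}.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

p2 = np.trace(rho_pt @ rho_pt).real
p3 = np.trace(rho_pt @ rho_pt @ rho_pt).real
print(f"p2 = {p2:.3f}, p3 = {p3:.3f}")           # p3 < p2^2 flags entanglement
```

For the Bell state one finds p2 = 1 and p3 = 1/4, so p3 < p2^2 and the state is detected as entangled from PT moments alone, the kind of quantity the randomized-measurement strategy above is designed to estimate.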
An exactly solvable model of a trapped interacting Bose-Einstein condensate (BEC) coupled in the dipole approximation to a quantized light mode in a cavity is presented. The model can be seen as a generalization of the harmonic-interaction model for a trapped BEC coupled to a bosonic bath. After obtaining the ground-state energy and wavefunction in closed form, we focus on computing the correlations in the system. The reduced one-particle density matrices of the bosons and the cavity are constructed and diagonalized analytically, and the von Neumann entanglement entropy of the BEC and the cavity is expressed explicitly as a function of the number and mass of the bosons, the frequencies of the trap and cavity, and the cavity-boson coupling strength. The results allow one to study the impact of the cavity on the bosons and vice versa on an equal footing. As an application, we investigate a specific case of basic interest in itself, namely, non-interacting bosons in a cavity. We find that both the bosons and the cavity develop correlations in a complementary manner as the coupling between them increases. Whereas the cavity wavepacket broadens in Fock space, the BEC density saturates in real space. On the other hand, while the cavity depletion saturates, and hence so does the BEC-cavity entanglement entropy, the BEC becomes strongly correlated and eventually increasingly fragmented. The latter phenomenon implies single-trap fragmentation of otherwise ideal bosons, whose induced long-range interaction is mediated by the cavity. Finally, as a complementary investigation, the mean-field equations for the BEC-cavity system are solved analytically as well, and the breakdown of mean-field theory for the cavity and the bosons with increasing coupling is discussed. Further applications are envisaged.
Entangled photons (biphotons) in the time-frequency degree of freedom play an important role in both foundational physics and advanced quantum technologies. How to fully characterize them is a key scientific issue. Here, by introducing a frequency shift in one arm of the interferometer, we theoretically propose a generalized combination quantum interferometer that allows simultaneous measurement of the amplitude and phase of biphotons associated with both the frequency sum and difference in a single interferometer, performing the full tomography of biphotons. The results are compared with the Hong-Ou-Mandel and N00N-state interferometers, which allow only partial tomography of biphotons, and the experimental feasibility is also discussed. This provides an alternative method for the full characterization of an arbitrary two-photon state with exchange symmetry and might become a useful tool for high-dimensional quantum information processing.
Quantum error mitigation (QEM) is crucial for near-term quantum devices, as noise inherently exists in physical quantum systems and undermines the accuracy of quantum algorithms. A typical purification-based QEM method, called Virtual Distillation (VD), aims to mitigate state preparation errors and achieve effective exponential suppression using multiple copies of the noisy state. However, an imperfect VD circuit implementation may yield negative mitigation outcomes, potentially more severe than those obtained without QEM. To address this, we introduce Circuit-Noise-Resilient Virtual Distillation (CNR-VD). This method, featuring a calibration procedure that utilizes easily prepared input states, refines the outcomes of VD when its circuit is contaminated by noise, seeking to recover the results of an ideally conducted VD circuit. Simulation results demonstrate that the CNR-VD estimator effectively reduces deviations induced by noise in the VD circuit, showcasing improvements in accuracy of up to an order of magnitude compared to the original VD. Meanwhile, CNR-VD raises the gate-noise threshold for VD, enabling positive effects even in the presence of higher noise levels. Furthermore, the strength of our work lies in its applicability beyond specific QEM algorithms, as the estimator can also be applied to generic Hadamard-test circuits. The proposed CNR-VD significantly enhances the noise resilience of VD, and is thus anticipated to elevate the performance of quantum algorithm implementations on near-term quantum devices.
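The two-copy VD estimator itself, $\langle O\rangle_{\rm VD} = \mathrm{Tr}(O\rho^2)/\mathrm{Tr}(\rho^2)$, can be sketched at the density-matrix level (the depolarized single-qubit state and observable below are illustrative choices; the CNR-VD calibration against circuit noise is not modeled here):

```python
import numpy as np

# Two-copy virtual distillation: <O>_VD = Tr(O rho^2) / Tr(rho^2).
Z = np.diag([1.0, -1.0])                  # observable O = Z
psi = np.array([1.0, 0.0])                # ideal state |0>, <Z> = 1
rho_ideal = np.outer(psi, psi)

p = 0.2                                   # depolarizing error rate
rho = (1 - p) * rho_ideal + p * np.eye(2) / 2

raw = np.trace(Z @ rho).real              # unmitigated expectation
vd = (np.trace(Z @ rho @ rho) / np.trace(rho @ rho)).real

print(f"ideal 1.000, raw {raw:.3f}, VD {vd:.3f}")
```

Here the raw expectation is 0.8 while the VD estimate is about 0.976, illustrating how purification by powers of the state suppresses the error; CNR-VD targets the remaining bias introduced when the VD circuit itself is noisy.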
We make two contributions pertaining to the study of the quantum chromatic numbers of small graphs. Firstly, in an elegant paper, Man\v{c}inska and Roberson [\textit{Baltic Journal on Modern Computing}, 4(4), 846-859, 2016] gave an example of a graph $G_{14}$ on 14 vertices with quantum chromatic number 4 and classical chromatic number 5, and conjectured that this is the smallest graph exhibiting a separation between the two parameters. We describe a computer-assisted proof of this conjecture, thereby resolving a longstanding open problem in quantum graph theory. Our second contribution pertains to the study of the rank-$r$ quantum chromatic numbers. While it can now be shown that for every $r$, $\chi_q$ and $\chi^{(r)}_q$ are distinct, few small examples of separations between these parameters are known. We give the smallest known example of such a separation in the form of a graph $G_{21}$ on 21 vertices with $\chi_q(G_{21}) = \chi^{(2)}_q(G_{21}) = 4$ and $ \xi(G_{21}) = \chi^{(1)}_q(G_{21}) = \chi(G_{21}) = 5$. The previous record was held by a graph $G_{msg}$ on 57 vertices that was first considered in the aforementioned paper of Man\v{c}inska and Roberson and which satisfies $\chi_q(G_{msg}) = 3$ and $\chi^{(1)}_q(G_{msg}) = 4$. In addition, $G_{21}$ provides the first provable separation between the parameters $\chi^{(1)}_q$ and $\chi^{(2)}_q$. We believe that our techniques for constructing $G_{21}$ and lower bounding its orthogonal rank could be of independent interest.
The following submission constitutes a guide and an introduction to a collection of articles submitted as a Ph.D. dissertation at the University of Gda\'nsk. In the dissertation, we study the fundamental limitations within the selected quantum and supra-quantum cryptographic scenarios in the form of upper bounds on the achievable key rates. We investigate various security paradigms, bipartite and multipartite settings, as well as single-shot and asymptotic regimes. Our studies, however, extend beyond the derivations of the upper bounds on the secret key rates in the mentioned scenarios. In particular, we propose a novel type of rerouting attack on the quantum Internet for which we find a countermeasure and benchmark its efficiency. Furthermore, we propose several upper bounds on the performance of quantum (key) repeaters settings. We derive a lower bound on the secret key agreement capacity of a quantum network, which we tighten in an important case of a bidirectional quantum network. The squashed nonlocality derived here as an upper bound on the secret key rate is a novel non-faithful measure of nonlocality. Furthermore, the notion of the non-signaling complete extension arising from the complete extension postulate as a counterpart of purification of a quantum state allows us to study analogies between non-signaling and quantum key distribution scenarios.
Disorder can have a dramatic impact on the transport properties of quantum systems. On the one hand, Anderson localization, arising from destructive quantum interference of multiple-scattering paths, can halt transport entirely. On the other hand, processes involving time-dependent random forces, such as Fermi acceleration, proposed as a mechanism for high-energy cosmic particles, can expedite particle transport significantly. The competition of these two effects in time-dependent inhomogeneous or disordered potentials can give rise to interesting dynamics, but experimental observations are scarce. Here, we experimentally study the dynamics of an ultracold, non-interacting Fermi gas expanding inside a disorder potential with finite spatial and temporal correlations. Depending on the disorder's strength and rate of change, we observe several distinct regimes of tunable anomalous diffusion, ranging from weak localization and subdiffusion to superdiffusion. Especially for strong disorder, where the expansion shows effects of localization, an intermediate regime is present in which quantum interference appears to counteract acceleration. Our system connects the phenomena of Anderson localization with second-order Fermi acceleration and paves the way to experimentally investigating Fermi acceleration in the regime of quantum transport.
A variable-range interacting Ising model with spin-1/2 particles exhibits distinct behavior depending on the fall-off rates in the range of interactions, notably non-local (NL), quasi-local (QL), and local. It is unknown if such a transition occurs in this model with an arbitrary spin quantum number. We establish its existence by analyzing the profiles of entanglement entropy, mutual information, and genuine multipartite entanglement (GME) of the weighted graph state (WGS), which is prepared when the multi-level maximally coherent state at each site evolves according to the spin-s Ising Hamiltonian. Specifically, we demonstrate that the scaling of time-averaged mutual information and the divergence in the first derivative of GME with respect to the fall-off rate in the WGS can indicate the transition point from NL to QL, which scales logarithmically with individual spin dimension. Additionally, we suggest that the existence of a saturation value of a finite number of qudits capable of mimicking the GME pattern of an arbitrarily large system-size can reveal the second transition point between quasi-local and local regions.
We propose a new scheme to control the shape of the Autler-Townes (AT) doublet in the photoelectron spectrum from atomic resonance-enhanced multiphoton ionization (REMPI). The scheme is based on the interference of two AT doublets created by ionization of the strongly driven atom from the ground and the resonantly excited state using tailored bichromatic femtosecond (fs) laser pulses. In this scheme, the quantum phase of the photoelectrons is crucial for the manipulation of the AT doublet. The laser polarization state and the relative optical phase between the two colors are used to manipulate the interference pattern. We develop an analytical model to describe the bichromatic REMPI process and provide a physical picture of the control mechanism. To validate the model, the results are compared to an ab initio calculation based on the solution of the 2D time-dependent Schr\"odinger equation for the non-perturbative interaction of an atom with intense polarization-shaped bichromatic fs-laser pulses. Our results indicate that the control mechanism is robust with respect to the laser intensity facilitating its experimental observation.
Silicon color centers have recently emerged as promising candidates for commercial quantum technology, but their interaction with electric fields has yet to be investigated. In this paper, we demonstrate electrical manipulation of telecom silicon color centers by fabricating lateral electrical diodes with an integrated G center ensemble in a commercial silicon-on-insulator wafer. The ensemble optical response is characterized under the application of a reverse-biased DC electric field, observing both 100% modulation of the fluorescence signal and a wavelength redshift of approximately 1.4 GHz/V above a threshold voltage. Finally, we use G center fluorescence to directly image the electric field distribution within the devices, obtaining insight into the spatial and voltage-dependent variation of the junction depletion region and the associated mediating effects on the ensemble. A strong correlation between emitter-field coupling and generated photocurrent is observed. Our demonstration enables electrical control and stabilization of semiconductor quantum emitters.
This paper presents a laboratory activity aimed at developing knowledge and expertise in microwave applications at cryogenic temperatures. The experience focuses on the detection of single infrared photons through Microwave Kinetic Inductance Detectors (MKIDs). The experimental setup, theoretical concepts, and activities involved are detailed, highlighting the skills and knowledge gained through the experience. This experiment is designed for postgraduate students in the field of quantum technologies.
Trapped samples of ultracold molecules are often short-lived, because close collisions between them result in trap loss. We investigate the use of shielding with static electric fields to create repulsive barriers between polar molecules to prevent such loss. Shielding is very effective even for RbCs, with a relatively low dipole moment, and even more effective for molecules such as NaK, NaRb and NaCs, with progressively larger dipoles. Varying the electric field allows substantial control over the scattering length, which will be crucial for the stability or collapse of molecular Bose-Einstein condensates. This arises because the dipole-dipole interaction creates a long-range attraction that is tunable with electric field. For RbCs, the scattering length is positive across the range where shielding is effective, because the repulsion responsible for shielding dominates. For NaK, the scattering length can be tuned across zero to negative values. For NaRb and NaCs, the attraction is strong enough to support tetraatomic bound states, and the scattering length passes through resonant poles where these states cross threshold. For KAg and CsAg, there are multiple bound states and multiple poles. For each molecule, we calculate the variation of scattering length with field and comment on the possibilities for exploring new physics.
Photon loss is the biggest enemy of scalable photonic quantum information processing. This problem can be tackled with quantum error correction, provided that the overall photon loss is below a threshold of 1/3. However, all reported on-demand and indistinguishable single-photon sources still fall short of this threshold. Here, by using tailor-shaped laser pulse excitation of a high-quantum-efficiency single quantum dot deterministically coupled to a tunable open microcavity, we demonstrate a high-performance source with a single-photon purity of 0.9795(6), photon indistinguishability of 0.986(16), and an overall system efficiency of 0.717(20), simultaneously. This source reaches, for the first time, the efficiency threshold for scalable photonic quantum computing. With this source, we further demonstrate 1.87(13) dB intensity squeezing, and consecutive 40-photon events with a 1.67 mHz count rate.
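The threshold comparison is simple arithmetic: a photon-loss threshold of 1/3 corresponds to a minimum overall source efficiency of 2/3 (both numbers are from the abstract; the comparison snippet is merely a sanity check):

```python
# Photon loss below 1/3 means overall efficiency above 1 - 1/3 = 2/3.
loss_threshold = 1 / 3
efficiency_threshold = 1 - loss_threshold        # ~0.667
reported_efficiency = 0.717                      # reported 0.717(20)

print(f"threshold {efficiency_threshold:.3f}, reported {reported_efficiency:.3f}")
print("above threshold:", reported_efficiency > efficiency_threshold)
```

The reported 0.717(20) clears the 2/3 bound even at the lower edge of its uncertainty (0.697).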
Perturbative methods are attractive to describe the electronic structure of molecular systems because of their low-computational cost and systematically improvable character. In this work, a two-step perturbative approach is introduced combining multi-state Rayleigh-Schr\"odinger (effective Hamiltonian theory) and state-specific Brillouin-Wigner schemes to treat degenerate configurations and yield an efficient evaluation of multiple energies. The first step produces model functions and an updated definition of the perturbative partitioning of the Hamiltonian. The second step inherits the improved starting point provided in the first step, enabling then faster processing of the perturbative corrections for each individual state. The here-proposed two-step method is exemplified on a model-Hamiltonian of increasing complexity.
One of the demanding frontiers in ultracold science is identifying laser cooling schemes for complex atoms and molecules, out of their vast spectra of internal states. Motivated by a need to expand the set of available ultracold molecules for applications in fundamental physics, chemistry, astrochemistry, and quantum simulation, we propose and demonstrate an automated graph-based search approach for viable laser cooling schemes. The method is time efficient and the outcomes greatly surpass the results of manual searches used so far. We discover new laser cooling schemes for C$_2$, OH$^+$, CN, YO, and CO$_2$ that can be viewed as surprising or counterintuitive compared to previously identified laser cooling schemes. In addition, a central insight of this work is that the reinterpretation of quantum states and transitions between them as a graph can dramatically enhance our ability to identify new quantum control schemes for complex quantum systems. As such, this approach will also be applicable to complex atoms and, in fact, any complex many-body quantum system with a discrete spectrum of internal states.
We prove a new concentration result for non-catalytic decoupling by showing that, for suitably large $t$, applying a unitary chosen uniformly at random from an approximate $t$-design on a quantum system followed by a fixed quantum operation almost decouples, with high probability, the given system from another reference system to which it may initially have been correlated. Earlier works either did not obtain high decoupling probability, or used provably inefficient unitaries, or required catalytic entanglement for decoupling. In contrast, our approximate unitary designs always guarantee decoupling with exponentially high probability and, under certain conditions, lead to computationally efficient unitaries. As a result we conclude that, under suitable conditions, efficiently implementable approximate unitary designs achieve relative thermalisation in quantum thermodynamics with exponentially high probability. We also establish the scrambling property of black holes when the black-hole evolution follows a pseudorandom approximate unitary $t$-design, as opposed to the Haar-random evolution considered earlier by Hayden and Preskill.
We introduce a two-parameter ensemble of generalized $2\times 2$ real symmetric random matrices called the $\beta$-Rosenzweig-Porter ensemble (\brpe), parameterized by $\beta$, a fictitious inverse temperature of the analogous Coulomb gas model, and $\gamma$, controlling the relative strength of disorder. \brpe\ encompasses the RPE from all of Dyson's threefold symmetry classes: orthogonal, unitary, and symplectic for $\beta=1,2,4$. First, we study the energy correlations by calculating the density and second moment of the nearest-neighbor spacing (NNS) and robustly quantify the crossover among various degrees of level repulsion. Second, the dynamical properties are determined from an exact calculation of the temporal evolution of the fidelity, enabling an identification of the characteristic Thouless and equilibration timescales. The relative depth of the correlation hole in the average fidelity serves as a dynamical signature of the crossover from chaos to integrability and enables us to construct the phase diagram of \brpe\ in the $\gamma$-$\beta$ plane. Our results are in qualitative agreement with numerically computed fidelities for $N\gg2$ matrix ensembles. Furthermore, we observe that for large $N$ the second moment of the NNS and the relative depth of the correlation hole exhibit a second-order phase transition at $\gamma=2$.
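The building block of such ensembles, a $2\times 2$ real symmetric matrix whose nearest-neighbor spacing follows in closed form from its eigenvalues, can be sketched numerically. This toy reproduces only the $\beta=1$ (orthogonal) spacing statistics under a standard GOE-like normalization; it is not the full two-parameter \brpe:

```python
import math, random

def spacing(a, b, c):
    # Eigenvalues of [[a, c], [c, b]] are (a+b)/2 +/- sqrt(((a-b)/2)**2 + c**2),
    # so the nearest-neighbor spacing is:
    return math.sqrt((a - b) ** 2 + 4 * c ** 2)

# Deterministic sanity check against explicit eigenvalues (+1 and -1):
assert abs(spacing(1.0, -1.0, 0.0) - 2.0) < 1e-12

# Sampling Gaussian entries (diagonal variance 1, off-diagonal variance 1/2)
# gives the beta = 1 level-repulsion statistics whose density and second
# moment the abstract generalizes to arbitrary beta and disorder gamma:
random.seed(0)
samples = [spacing(random.gauss(0, 1), random.gauss(0, 1),
                   random.gauss(0, 1 / math.sqrt(2)))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to sqrt(pi) ~ 1.77 for this normalization
```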
We quantize the electromagnetic field in the presence of a nonmoving dielectric sphere in vacuum. The sphere is assumed to be lossless, dispersionless, isotropic, and homogeneous. The quantization is performed using normalized eigenmodes as well as plane-wave modes. We specify two useful alternative bases of normalized eigenmodes: spherical eigenmodes and scattering eigenmodes. A canonical transformation between plane-wave modes and normalized eigenmodes is derived. This formalism is employed to study the scattering of a single photon, coherent squeezed light, and two-photon states off a dielectric sphere. In the latter case we calculate the second-order correlation function of the scattered field, thereby unveiling the angular distribution of the Hong-Ou-Mandel interference for a dielectric sphere acting as a three-dimensional beam splitter. Our results are analytically derived for an arbitrary size of the dielectric sphere with a particular emphasis on the small-particle limit. This work sets the theoretical foundation for describing the quantum interaction between light and the motional, rotational and vibrational degrees of freedom of a dielectric sphere.
We study the ground-state properties of self-bound dipolar droplets in quasi-two-dimensional geometry by using the Gaussian state theory. We show that there exist two quantum phases corresponding to the macroscopic squeezed vacuum and squeezed coherent states. We further show that the radial size versus atom number curve exhibits a double-dip structure, as a result of the multiple quantum phases. In particular, we find that the critical atom number for the self-bound droplets is determined by the quantum phases, which allows us to distinguish the quantum state and validates the Gaussian state theory.
We show that the quantum dynamics of any system with $SU(1,1)$ symmetry gives rise to emergent anti-de Sitter spacetimes in 2+1 dimensions (AdS$_{2+1}$). Using the continuous circuit depth, a quantum evolution is mapped to a trajectory in AdS$_{2+1}$. Whereas the time measured in laboratories becomes either the proper time or the proper distance, quench dynamics follow geodesics of AdS$_{2+1}$. Such a geometric approach provides a unified interpretation of a wide range of prototypical phenomena that otherwise appear disconnected. For instance, the light cone of AdS$_{2+1}$ underlies the expansion of unitary fermions released from harmonic traps, the onset of parametric amplification, and the exceptional points that represent $PT$-symmetry breaking in non-Hermitian systems. Our work provides a transparent means of optimizing quantum controls by exploiting shortest paths in the emergent spacetimes. It also allows experimentalists to engineer emergent spacetimes and induce tunneling between different AdS$_{2+1}$ geometries.
Quantum coherence, nonlocality, and contextuality are key resources for quantum advantage in metrology, communication, and computation. We introduce a graph-based approach to derive classicality inequalities that bound local, non-contextual, and coherence-free models, offering a unified description of these seemingly disparate quantum resources. Our approach generalizes recently proposed basis-independent coherence witnesses, and recovers all non-contextuality inequalities of the exclusivity graph approach. Moreover, violations of certain classicality inequalities witness preparation contextuality. We describe an algorithm to find all such classicality inequalities, and use it to analyze some of the simplest scenarios.
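A standard example underlying the exclusivity-graph approach that this abstract generalizes: for the 5-cycle (KCBS) exclusivity graph, the classical non-contextual bound on the sum of event probabilities is the independence number of the graph, which a brute-force search recovers. This is an illustrative sketch, not the authors' algorithm:

```python
from itertools import combinations

# KCBS exclusivity graph: a 5-cycle, where adjacent events are exclusive.
n = 5
edges = {(i, (i + 1) % n) for i in range(n)}

def independent(subset):
    """True if no two events in the subset are exclusive (adjacent)."""
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(subset, 2))

# Classical (non-contextual) bound = independence number alpha(G).
alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r) if independent(s))
print(alpha)  # → 2
# The quantum bound is the Lovasz theta number, sqrt(5) ~ 2.236 for the
# 5-cycle, so quantum theory violates the classicality inequality.
```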
We investigate the time evolution of the second-order R\'enyi entropy (RE) for bosons in a one-dimensional optical lattice following a sudden quench of the hopping amplitude $J$. Specifically, we examine systems that are quenched into the strongly correlated Mott-insulating (MI) regime with $J/U\ll 1$ ($U$ denotes the strength of the on-site repulsive interaction) from the MI limit with $J=0$. In this regime, the low-energy excited states can be effectively described by fermionic quasiparticles known as doublons and holons, which are excited in entangled pairs by the quench. By developing an effective theory, we derive a direct relation between the RE and correlation functions associated with doublons and holons. This relation allows us to analytically calculate the RE and to obtain a physical picture for it, both in the ground state and during the time evolution following the quench, in terms of doublon-holon pairs. In particular, we show that the RE is proportional to the population of doublon-holon pairs that span the boundary of the subsystem. Our quasiparticle picture reveals several remarkable features absent in previous studies of entanglement-entropy dynamics in free-fermion models, and it provides valuable insights into the dynamics of entanglement entropy in strongly correlated systems.
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than two qubits. In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition of a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices, excluding the challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, leads to good model accuracy even for PPTES states that were outside the training data. Our aim is to enhance the model's generalization on PPTES. By applying entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, we improve the model's accuracy and its ability to recognize PPTES. Moreover, by constructing an ensemble of Siamese models, even better generalization is observed, in analogy with the idea of finding separate types of entanglement witnesses for different classes of states. The neural models' code and training schemes, as well as data generation procedures, are available at github.com/Maticraft/quantum_correlations.
Phase stabilization of distant quantum time-bin interferometers is a major challenge for quantum communication networks, and is typically achieved by exchanging optical reference signals, which can be particularly challenging over free-space channels. We demonstrate a novel approach using reference-frame-independent time-bin quantum key distribution that completely avoids the need for active relative phase stabilization while simultaneously overcoming a highly multi-mode channel without any active mode filtering. We realized a proof-of-concept demonstration using hybrid polarization and time-bin entangled photons, which achieved a sustained asymptotic secure key rate of greater than 0.06 bits/coincidence over a 15 m multi-mode fiber optical channel. This is achieved without any mode filtering, mode sorting, adaptive optics, active basis selection, or active phase alignment. This scheme enables passive self-compensating time-bin quantum communication that can be readily applied to long-distance links and various wavelengths, and could be useful for a variety of spatially multi-mode and fluctuating channels involving rapidly moving platforms, including airborne and satellite systems.
We introduce a quantum error mitigation technique based on probabilistic error cancellation to eliminate errors which have accumulated during the application of a quantum circuit. Our approach is based on applying an optimal "denoiser" after the action of a noisy circuit and can be performed with an arbitrary number of extra gates. The denoiser is given by an ensemble of circuits distributed with a quasiprobability distribution. For a simple noise model, we show that efficient, local denoisers can be found, and we demonstrate their effectiveness for the digital quantum simulation of the time evolution of simple spin chains.
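The mechanism behind such a quasiprobability denoiser can be illustrated on the textbook case of a single-qubit depolarizing channel, whose inverse map is a quasi-probability mixture of Pauli conjugations (one weight is negative); in practice the mixture is sampled and compensated by classical post-processing. This is a generic probabilistic-error-cancellation sketch, not the paper's specific construction:

```python
# Hedged PEC sketch: invert a single-qubit depolarizing channel with a
# quasi-probability mixture of Pauli conjugations.

def mat(*rows): return [list(r) for r in rows]
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
def dag(A): return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

X = mat((0, 1), (1, 0))
Y = mat((0, -1j), (1j, 0))
Z = mat((1, 0), (0, -1))

def pauli_twirl(rho, p):
    """Depolarizing-type map: (1-p) rho + (p/3) sum_P P rho P^dag."""
    out = [[(1 - p) * rho[i][j] for j in range(2)] for i in range(2)]
    for P in (X, Y, Z):
        PrP = mul(mul(P, rho), dag(P))
        out = [[out[i][j] + (p / 3) * PrP[i][j] for j in range(2)]
               for i in range(2)]
    return out

p = 0.1
f = 1 - 4 * p / 3            # Bloch-vector shrinking factor of the noise
p_inv = 0.75 * (1 - 1 / f)   # negative => quasi-probability "denoiser"

rho = mat((0.5, 0.5), (0.5, 0.5))            # |+><+|
recovered = pauli_twirl(pauli_twirl(rho, p), p_inv)
print(abs(recovered[0][1] - 0.5) < 1e-12)    # → True: coherence restored
```

Because `p_inv` is negative, the denoiser is not a physical channel; it is realized on hardware by sampling circuits and reweighting outputs, at the cost of increased sampling variance.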
Quantum key distribution (QKD) networks are expected to enable information-theoretically secure (ITS) communication over large-scale networks. Most research on relay-based QKD networks assumes that all relays or nodes are completely trustworthy. However, the malicious behavior of even a single node can undermine the security of a QKD network. Current research on QKD networks primarily addresses passive attacks, such as eavesdropping, conducted by malicious nodes. Although solutions such as majority voting and secret sharing have been proposed for point-to-point QKD systems to counter active attacks, these strategies are not directly transferable to QKD networks owing to their different security requirements. We propose a new paradigm for the security requirements of QKD networks and address active attacks by colluding malicious nodes. First, regarding security, we introduce an ITS distributed authentication scheme, which additionally offers two crucial security properties to QKD networks: identity unforgeability and non-repudiation. Second, concerning correctness, we propose an ITS fault-tolerant consensus scheme based on our ITS distributed authentication to ensure global consistency, enabling participating nodes to collaborate correctly in a more practical manner. Our simulations show that our scheme exhibits a significantly lower growth trend in key consumption than the original pre-shared-keys scheme. For instance, in larger networks, such as one with 80 nodes, our scheme's key consumption is only 13.1\% of that of the pre-shared-keys scheme.
Towards the practical use of quantum computers in the NISQ era, as well as the realization of fault-tolerant quantum computers that utilize quantum error correction codes, pressing needs have emerged for control hardware and software platforms. In particular, a clear demand has arisen for platforms that allow classical processing to be integrated with quantum processing. While recent works discuss the requirements for such quantum-classical processing integration formulated at the gate level, discussions at the pulse level are lacking, despite being critically important. Moreover, defining concrete pulse-level performance benchmarks for the control system is key to the necessary quantum-classical integration. In this work, we categorize the requirements for quantum-classical processing at the pulse level, demonstrate these requirements with a variety of use cases, including recently published works, and propose well-defined performance benchmarks for quantum control systems. We utilize a comprehensive pulse-level language that allows embedding universal classical processing in the quantum program and hence allows for a general formulation of benchmarks. We expect the metrics defined in this work to form a solid basis for continuing to push the boundaries of quantum computing via control systems, bridging the gap between low-level and application-level implementations with relevant metrics.
The quantum state associated with an unknown experimental preparation procedure can be determined by performing quantum state tomography. If the statistical uncertainty in the data dominates over other experimental errors, then a tomographic reconstruction procedure must express this uncertainty. A rigorous way to accomplish this is via statistical confidence regions in state space. Naturally, the size of such a region decreases with an increasing number of samples, but it also depends critically on the construction method of the region. We compare recent methods for constructing confidence regions, as well as a reference method based on a Gaussian approximation. For the comparison, we propose an operational measure, finding that there is a significant difference between methods, but that which method is preferable can depend on the details of the state-preparation scenario.
We present a quantum algorithm to achieve higher-order transformations of Hamiltonian dynamics. Namely, the algorithm takes as input a finite number of queries to a black-box seed Hamiltonian dynamics to simulate a desired Hamiltonian. Our algorithm efficiently simulates linear transformations of any seed Hamiltonian with a bounded energy range consisting of a polynomial number of terms in system size, making use of only controlled-Pauli gates and time-correlated randomness. This algorithm is an instance of quantum functional programming, where the desired function is specified as a concatenation of higher-order quantum transformations. By way of example, we demonstrate the simulation of negative time-evolution and time-reversal, and perform a Hamiltonian learning task.
Stabilizer simulation can efficiently handle an important class of quantum circuits consisting exclusively of Clifford gates. However, all existing extensions of this simulation to arbitrary quantum circuits including non-Clifford gates suffer from an exponential runtime. To address this challenge, we present a novel approach for efficient stabilizer simulation of arbitrary quantum circuits, at the cost of reduced precision. Our key idea is to compress an exponential-sum representation of the quantum state into a single abstract summand covering (at least) all occurring summands. This allows us to introduce an abstract stabilizer simulator that efficiently manipulates abstract summands by over-approximating the effect of circuit operations, including Clifford gates, non-Clifford gates, and (internal) measurements. We implemented our abstract simulator in a tool called Abstraqt and experimentally demonstrate that Abstraqt can establish circuit properties intractable for existing techniques.
Multipartite entanglement plays a crucial role in the design of the Quantum Internet, due to its peculiarities with no classical counterpart. Yet, for entanglement-based quantum networks, a key open issue is the lack of an effective entanglement access control (EAC) strategy for properly handling and coordinating the quantum nodes in accessing the entangled resource. In this paper, we design a quantum-genuine EAC to solve the contention problem arising in accessing a multipartite entangled resource. The proposed quantum-genuine EAC is able to: i) fairly select a subset of nodes granted access to the contended resource; ii) preserve the privacy and anonymity of the identities of the selected nodes; iii) avoid delegating the signaling associated with entanglement access control to the classical network. We also conduct a theoretical analysis of noise effects on the proposed EAC, which captures the complex noise effects through meaningful parameters.
Source radiation (radiation reaction) and vacuum-field fluctuations can be seen as two inseparable contributions to processes such as spontaneous emission, the Lamb shift, or the Casimir force. Here, we propose how they can be individually probed and their space-time structure revealed in electro-optic sampling experiments. This makes it possible to experimentally study causality at the single-photon level and to reveal space-like and time-like correlations in the quantum vacuum. A connection to the time-domain fluctuation-dissipation theorem is also made.
We investigate spin-charge separation of a spin-1/2 Fermi system confined in a triple well where multiple bands are occupied. We assume that our finite fermionic system is close to fully spin polarized while being doped by a hole and an impurity fermion with opposite spin. Our setup involves ferromagnetic couplings among the particles in different bands, leading to the development of strong spin-transport correlations in an intermediate interaction regime. Interactions are then strong enough to lift the degeneracy among singlet and triplet spin configurations in the well of the spin impurity but not strong enough to prohibit hole-induced magnetic excitations to the singlet state. Despite the strong spin-hole correlations, the system exhibits spin-charge deconfinement allowing for long-range entanglement of the spatial and spin degrees of freedom.
Quantum computing requires a universal set of gate operations; regarding gates as rotations, any rotation angle must be possible. However, a real device may only be capable of $B$ bits of resolution, i.e., it might support only $2^B$ possible variants of a given physical gate. Naive discretization of an algorithm's gates to the nearest available options causes coherent errors, while decomposing an impermissible gate into several allowed operations increases circuit depth. Conversely, demanding higher $B$ can greatly complicate the hardware. Here we explore an alternative: probabilistic angle interpolation (PAI). This effectively implements any desired, continuously parametrised rotation by randomly choosing one of three discretised gate settings and post-processing individual circuit outputs. The approach is particularly relevant for near-term applications where one would in any case average over many circuit executions to estimate expectation values. While PAI increases that sampling cost, we prove that (a) the approach is optimal in the sense that PAI achieves the least possible overhead and (b) the overhead is remarkably modest even with thousands of parametrised gates and only $7$ bits of resolution available. This is a profound relaxation of the engineering requirements for first-generation quantum computers, where even $5$-$6$ bits of resolution may suffice and, as we demonstrate, the approach is many orders of magnitude more efficient than prior techniques. Moreover, we conclude that even for more mature late-NISQ hardware, no more than $9$ bits will be necessary.
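The simplest ingredient of this idea, reproducing a target rotation angle on average by randomizing over the two nearest grid angles, can be sketched as follows. The actual PAI protocol mixes three discretised settings with quasi-probabilities and post-processes circuit outputs, so this is only an illustration of the interpolation step, with a hypothetical helper function:

```python
import math, random

def interpolate_angle(theta, bits):
    """Pick one of the two grid angles bracketing theta so that the *average*
    chosen angle equals theta exactly. Hypothetical helper illustrating the
    interpolation step only; full PAI mixes three settings with
    quasi-probabilities and post-processes measurement outcomes."""
    step = 2 * math.pi / (1 << bits)   # grid spacing for B bits of resolution
    k = int(theta // step)
    lo, hi = k * step, (k + 1) * step
    p_hi = (theta - lo) / step         # linear-interpolation weight
    chosen = hi if random.random() < p_hi else lo
    return chosen, lo, hi, p_hi

theta, bits = 1.0, 7                   # 7 bits of resolution, as in the abstract
chosen, lo, hi, p_hi = interpolate_angle(theta, bits)
assert lo <= chosen <= hi
# The mixture reproduces theta in expectation:
assert abs(p_hi * hi + (1 - p_hi) * lo - theta) < 1e-12
```

Averaging angles is not the same as averaging unitaries, which is why the full protocol needs three settings and output post-processing; the sketch only shows why randomization removes the coherent bias of naive rounding.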
A restricted form of Landauer's Principle, independent of computational considerations, is shown to hold for thermal systems by reference to the joint entropy associated with conjugate observables. It is shown that the source of the compensating entropy for irreversible physical processes is due to the ontological uncertainty attending values of such mutually incompatible observables, rather than due to epistemic uncertainty as traditionally assumed in the information-theoretic approach. In particular, it is explicitly shown that erasure of logical (epistemic) information via reset operations is not equivalent to erasure of thermodynamic entropy, so that the traditional, information-theoretic form of Landauer's Principle is not supported by the physics. A further implication of the analysis is that there is no Maxwell's Demon in the real world.
The dissipative variant of the Ising model in a transverse field is one of the most important models in the analysis of open quantum many-body systems, due to its paradigmatic character for understanding driven-dissipative quantum phase transitions, as well as its relevance in modelling diverse experimental platforms in atomic physics and quantum simulation. Here, we present an exact solution for the steady state of the transverse-field Ising model in the limit of infinite-range interactions, with local dissipation and inhomogeneous transverse fields. Our solution holds despite the lack of any collective spin symmetry or even permutation symmetry. It allows us to investigate first- and second-order dissipative phase transitions, driven-dissipative criticality, and captures the emergence of a surprising "spin blockade" phenomenon. The ability of the solution to describe spatially-varying local fields provides a new tool to study disordered open quantum systems in regimes that would be extremely difficult to treat with numerical methods.
The problem of decomposing an arbitrary Clifford element into a sequence of Clifford gates is known as Clifford synthesis. Drawing inspiration from similarities between this and the famous Rubik's Cube problem, we develop a machine learning approach for Clifford synthesis based on learning an approximation to the distance to the identity. This approach is probabilistic and computationally intensive. However, when a decomposition is successfully found, it often involves fewer gates than existing synthesis algorithms. Additionally, our approach is much more flexible than existing algorithms in that arbitrary gate sets, device topologies, and gate fidelities may be incorporated, allowing the approach to be tailored to a specific device.
We give an almost complete characterization of the hardness of $c$-coloring $\chi$-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that: 1) We give a new distributed algorithm that finds a $c$-coloring in $\chi$-chromatic graphs in $\tilde{\mathcal{O}}(n^{\frac{1}{\alpha}})$ rounds, with $\alpha = \bigl\lfloor\frac{c-1}{\chi - 1}\bigr\rfloor$. 2) We prove that any distributed algorithm for this problem requires $\Omega(n^{\frac{1}{\alpha}})$ rounds. Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state. We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or $c$-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs.
In this work we identify coherent electron-vibron interactions between near-resonant and non-resonant electronic levels that contribute beyond standard optomechanical models of off-resonant or resonant SERS. By developing an open-system quantum model based on first principles of molecular interactions, we show how the Raman interference of resonant and non-resonant contributions can modify the SERS peaks by several orders of magnitude with respect to former optomechanical models and over the fluorescence backgrounds. This cooperative optomechanical mechanism generates an enhancement of nonclassical photon-pair correlations between Stokes and anti-Stokes photons, which can be detected by photon-counting measurements. Our results demonstrate Raman enhancements and suppressions of coherent nature that significantly impact the standard estimations of the optomechanical contribution to SERS spectra and their quantum-mechanical observable effects.
Nanoscale quantum dots in microwave cavities can be used as a laboratory for exploring electron-electron interactions and their spin in the presence of quantized light and a magnetic field. We show how a simple theoretical model of this interplay at resonance predicts complex but measurable effects. New polariton states emerge that combine spin, relative modes, and radiation. These states have intricate spin-space correlations and undergo polariton transitions controlled by the microwave cavity field. We uncover novel topological effects involving highly correlated spin and charge density that display singlet-triplet and inhomogeneous Bell-state distributions. Signatures of these transitions are imprinted in the photon distribution, which will allow for optical read-out protocols in future experiments and nanoscale quantum technologies.
This article examines the current status of quantum computing in Earth observation (EO) and satellite imagery. We analyze the potential limitations and applications of quantum learning models when dealing with satellite data, considering the persistent challenges of profiting from quantum advantage and of finding the optimal division of labor between high-performance computing (HPC) and quantum computing (QC). We then assess some parameterized quantum circuit models transpiled into a Clifford+T universal gate set. The T-gates shed light on the quantum resources required to deploy quantum models, either on an HPC system or on several QC systems. In particular, if the T-gates cannot be simulated efficiently on an HPC system, we can leverage a quantum computer and its computational advantage over conventional techniques. Our quantum resource estimation shows that quantum machine learning (QML) models with a sufficient number of T-gates provide a quantum advantage if and only if they generalize to unseen data points better than their classical counterparts deployed on the HPC system and break the symmetry in their weights at each learning iteration, as in conventional deep neural networks. We also estimate the quantum resources required for some QML models as an initial innovation. Lastly, we define the optimal sharing between an HPC+QC system for executing QML models on hyperspectral satellite images. These constitute a unique dataset compared with other satellite images, since they require only a limited number of input qubits and only a small number of labeled benchmark images are available, making them less challenging to deploy on quantum computers.
We investigate the dynamics of the driven Jaynes-Cummings model, where a two-level atom interacts with a quantized field and both atom and field are driven by an external classical field. Via an invariant approach, we are able to transform the corresponding Hamiltonian into that of the standard Jaynes-Cummings model. Subsequently, the exact analytical solution of the Schr\"odinger equation for the driven system is obtained and employed to analyze some of its dynamical variables.
Local unitary transformations cannot affect the quantum correlations between two systems sharing an entangled state, although they do influence the outcomes of local measurements. By considering local squeezing operations, we introduce an extended family of observables allowing violation of the CHSH Bell inequality for two-mode Gaussian systems. We show that local squeezing can enable or enhance the identification of non-local two-mode states. In particular, we show that local squeezing followed by photon/no-photon discrimination can suffice to reveal non-locality in a broad ensemble of pure and mixed two-mode Gaussian states.
We study a class of bistochastic matrices generalizing unistochastic matrices. Given a complex bipartite unitary operator, we construct a bistochastic matrix having as entries the normalized squared Frobenius norms of the blocks. We show that the closure of the set of generalized unistochastic matrices is the whole Birkhoff polytope. We characterize the points on the edges of the Birkhoff polytope that belong to a given level of our family of sets, proving that the different (non-convex) levels have a rich inclusion structure. We also study the corresponding generalization of orthostochastic matrices. Finally, we introduce and study the natural probability measures induced on our sets by the Haar measure of the unitary group. These probability measures interpolate between the natural measure on the set of unistochastic matrices and the Dirac measure supported on the van der Waerden matrix.
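The block construction can be sketched for the smallest bipartite case, $d=2$ blocks of size $2\times 2$: taking the $4\times 4$ discrete Fourier transform as the example unitary yields the van der Waerden (flat) matrix mentioned at the end of the abstract. A minimal sketch, not the authors' code:

```python
import cmath

d = 2
N = d * d
# Example bipartite unitary: the 4x4 discrete Fourier transform.
U = [[cmath.exp(2j * cmath.pi * j * k / N) / N ** 0.5 for k in range(N)]
     for j in range(N)]

# M[i][j] = ||block(i,j)||_F^2 / d : normalized squared Frobenius norms
# of the d x d blocks of U.
M = [[sum(abs(U[d*i + r][d*j + c]) ** 2 for r in range(d) for c in range(d)) / d
      for j in range(d)] for i in range(d)]

print([[round(x, 3) for x in row] for row in M])  # → [[0.5, 0.5], [0.5, 0.5]]
# Rows and columns sum to 1, so M is bistochastic; for the DFT we recover
# the van der Waerden matrix.
```

Unitarity guarantees that each block-row and block-column of squared norms sums to $d$, so dividing by $d$ always yields a bistochastic matrix, whatever unitary is chosen.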
As quantum processors advance, the emergence of large-scale decentralized systems involving interacting quantum-enabled agents is on the horizon. Recent research efforts have explored quantum versions of Nash and correlated equilibria as solution concepts for strategic quantum interactions, but these approaches did not directly connect to decentralized adaptive setups where agents possess limited information. This paper delves into the dynamics of quantum-enabled agents within decentralized systems that employ no-regret algorithms to update their behaviors over time. Specifically, we investigate two-player quantum zero-sum games and polymatrix quantum zero-sum games, showing that no-regret algorithms converge to separable quantum Nash equilibria in time-average. In the case of general multi-player quantum games, our work leads to a novel solution concept, (separable) quantum coarse correlated equilibria (QCCE), as the convergent outcome of the time-averaged behavior of no-regret algorithms, offering a natural solution concept for decentralized quantum systems. Finally, we show that computing QCCEs can be formulated as a semidefinite program and establish the existence of entangled (i.e., non-separable) QCCEs, which cannot be approached via the current paradigm of no-regret learning.
We present a framework for quantization of electromagnetic field in the presence of dielectric media with time-varying optical properties. Considering a microscopic model for the dielectric as a collection of matter fields interacting with the electromagnetic environment, we allow for the possibility of dynamically varying light-matter coupling. We obtain the normal modes of the coupled light-matter degrees of freedom, showing that the corresponding creation and annihilation operators obey equal-time canonical commutation relations. We show that these normal modes can consequently couple to quantum emitters in the vicinity of dynamic dielectric media, and the resulting radiative properties of atoms are thus obtained. Our results are pertinent to time-varying boundary conditions realizable across a wide range of state-of-the-art physical platforms and timescales.
We consider the time-dependent Schr\"odinger equation that is generated on the bosonic Fock space by a long-range quantum many-body Hamiltonian. We derive the first bound on the maximal speed of particle transport in these systems that is thermodynamically stable and holds all the way down to microscopic length scales. For this, we develop a novel multiscale rendition of the ASTLO (adiabatic spacetime localization observables) method. Our result opens the door to deriving the first thermodynamically stable Lieb-Robinson bounds on general local operators for these long-range interacting bosonic systems.
We investigate the dynamics of a particle in a binary lattice with staggered on-site energies. An additional static force is introduced which further adjusts the on-site energies. The binary lattice appears to be unrelated to the semiclassical Rabi model, which describes a periodically driven two-level system. However, in a certain parity subspace, the Floquet Hamiltonian of the semiclassical Rabi model can be exactly mapped to that of the binary lattice. These connections provide a different perspective for analyzing lattice systems. At resonance, namely when the mismatch of on-site energies between adjacent sites is nearly a multiple of the strength of the static force, level anticrossing occurs. This phenomenon is closely related to the Bloch-Siegert shift in the semiclassical Rabi model. At the $n$th order resonance, an initially localized particle exhibits periodic jumps between site $0$ and site $(2n+1)$, rather than continuous hopping between adjacent sites. The binary lattice with a static force serves as a bridge linking condensed matter physics and quantum optics, due to its connection with the semiclassical Rabi model.
The occurrence of a second-order superradiant quantum phase transition is brought to light in a quantum system consisting of two interacting qubits coupled to the same quantized field mode. We introduce an appropriate thermodynamic-like limit for the integrable two-qubit quantum Rabi model with spin-spin interaction. Namely, it is obtained by letting the ratios of the spin-spin and spin-mode couplings to the mode frequency tend to infinity, regardless of the spin-to-mode frequency ratio.
We study the properties of the random quantum states induced from the uniformly random pure states on a bipartite quantum system by taking the partial trace over the larger subsystem. Most of the previous studies have adopted the viewpoint of "concentration of measure" and have focused on the behavior of the states close to the average. In contrast, we investigate the large deviation regime, where the states may be far from the average. We prove the following results: First, the probability that the induced random state is within a given set decreases neither slower nor faster than exponentially in the dimension of the subsystem traced out. Second, the exponent is equal to the quantum relative entropy of the maximally mixed state and the given set, multiplied by the dimension of the remaining subsystem. Third, the total probability of a given set strongly concentrates around the element closest to the maximally mixed state, a property that we call conditional concentration. Along the same lines, we also investigate the asymptotic behavior of the coherence of random pure states in a single system with large dimension.
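Schematically, the stated large-deviation rate can be summarized as follows, where $d_E$ denotes the dimension of the traced-out subsystem, $d_S$ the dimension of the remaining subsystem, $\pi = \mathbb{1}/d_S$ the maximally mixed state, and $\mathcal{K}$ the given set of states (this notation is an assumption for illustration, not taken from the abstract):

$$
\Pr\left[\rho \in \mathcal{K}\right] \;\approx\; \exp\!\left(-\,d_E \, d_S \, \min_{\sigma \in \mathcal{K}} D(\pi \,\|\, \sigma)\right),
$$

i.e., the probability is exponentially small in $d_E$, with a rate given by the relative entropy from the maximally mixed state to the closest element of $\mathcal{K}$, scaled by $d_S$.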
Color centers in diamond are quantum systems with optically active spin states that show long coherence times and are therefore promising candidates for the development of efficient spin-photon interfaces. However, only a small portion of the emitted photons is generated by the coherent optical transition of the zero-phonon line (ZPL), which limits the overall performance of the system. Embedding these emitters in photonic crystal cavities improves the coupling to the ZPL photons and increases their emission rate. Here, we demonstrate the fabrication process of "Sawfish" cavities, a recently proposed design with the experimentally realistic potential to simultaneously enhance the emission rate by a factor of 46 and couple photons into a single-mode fiber with an efficiency of 88%. The presented process allows for the fabrication of fully suspended devices with a total length of 20.5 $\mu$m and feature sizes as small as 40 nm. The optical characterization shows fundamental mode resonances that follow the behavior expected from the corresponding design parameters, and quality (Q) factors as high as 3825. Finally, we investigate the effects of nanofabrication on the devices and show that, despite a noticeable erosion of the fine features, the measured cavity resonances deviate by only 0.9 (1.2)% from the corresponding simulated values. This proves that the Sawfish design is robust against fabrication imperfections, which makes it an attractive choice for the development of quantum photonic networks.
Quantum Bit String Comparators (QBSC) operate on two sequences of $n$ qubits, enabling the determination of their relationship: equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient, generalized comparators for arbitrary $n$-qubit lengths has long posed a challenge, as such comparators carry a high cost footprint and introduce quantum delays. Efficient comparators, by contrast, are typically tied to inputs of fixed length; comparators without a generalized circuit therefore cannot be employed at a higher level, though they are well suited to problems of limited size. In this paper, we introduce a generalized design for the comparison of two $n$-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity, and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
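As a minimal classical sketch of the relation such a comparator extracts (not the quantum circuit itself, whose gate-level construction is given in the paper), the three possible outcomes for two equal-length bit strings can be written as:

```python
def compare_bitstrings(a: str, b: str) -> str:
    """Classical reference for the relation a QBSC determines between
    two n-bit registers: '=', '>', or '<' (most-significant bit first).
    Function name and interface are illustrative, not from the paper."""
    assert len(a) == len(b), "comparator assumes equal-length inputs"
    if a == b:
        return "="
    return ">" if int(a, 2) > int(b, 2) else "<"
```

A quantum comparator produces the same three-way relation, but encodes the outcome in ancilla qubits so it can condition subsequent quantum operations, much like an `if` statement conditions classical control flow.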
We present a scheme for controlling quantum correlations by applying feedback to the cavity mode that exits the cavity while interacting with a mechanical oscillator and magnons. In a hybrid cavity magnomechanical system with a movable mirror, the proposed coherent feedback scheme allows for the enhancement of both bipartite and tripartite quantum correlations. Moreover, we demonstrate that the resulting entanglement remains robust against ambient temperature in the presence of coherent feedback control.
Sizable hyperpolarisation, i.e. an imbalance of the occupation numbers of nuclear spins in a sample deviating from thermal equilibrium, is needed in various fields of science. For example, hyperpolarised tracers are utilised in magnetic resonance imaging (MRI) in medicine, and polarised beams and targets are employed in nuclear physics to study the spin dependence of nuclear forces. Here we show that the quantum interference of transitions induced by radio-wave pumping with longitudinal and radial pulses is able to produce large polarisations at small magnetic fields. This method is simpler than established methods, is theoretically understood, and has been experimentally proven for beams of metastable hydrogen atoms in the keV energy range. It should also work for a variety of samples at rest. Thus, this technique opens the door to a new generation of polarised tracers, possibly low-field MRI with better spatial resolution, or the production of polarised fuel to increase the efficiency of fusion reactors by manipulating the involved cross sections.