Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-10-31 11:30 to 2023-11-03 12:30 | Next meeting is Friday Nov 3rd, 11:30 am.
We provide a detailed study of natural inflation with a periodic non-minimal coupling, which is a well-motivated inflationary model that admits an explicit UV completion. We demonstrate that this construction can satisfy the most recent observational constraints from Planck and the BICEP/Keck collaborations. We also compute the corresponding relic gravitational wave background due to tensor perturbations and show that future space-borne interferometers, such as DECIGO, BBO and ALIA, may be able to detect it. Next, we extend this analysis and establish the validity of these results in a multi-field model featuring an additional $R^2$ term in the action, which allows us to interpolate between natural and scalaron (a.k.a.~Starobinsky) inflation. We investigate the conditions under which the aforementioned future interferometers will have the capability to differentiate between pure natural inflation and natural-scalaron inflation. The latter analysis could open the door to distinguishing between single-field and multi-field inflation through gravitational wave observations in more general contexts.
Gravitational microlensing is one of the strongest observational techniques to observe non-luminous astrophysical bodies. Existing microlensing observations provide tantalizing evidence of a population of low-mass objects whose origin is unknown. These events may be caused by terrestrial-mass free-floating planets or by exotic objects such as primordial black holes. However, the nature of these objects cannot be resolved on an event-by-event basis, as the induced light curve is degenerate for lensing bodies of identical mass. One must instead statistically compare \textit{distributions} of lensing events to determine the nature of the lensing population. While existing surveys lack the statistics required to identify multiple subpopulations of lenses, this will change with the launch of the Nancy Grace Roman Space Telescope. Roman's Galactic Bulge Time Domain Survey is expected to observe hundreds of low-mass microlensing events, enabling a robust statistical characterization of this population. In this paper, we show that by exploiting features in the distribution of lensing event durations, Roman will be sensitive to a subpopulation of primordial black holes hidden amongst a background of free-floating planets. Roman's reach will extend to primordial black hole dark matter fractions as low as $f_\text{PBH} = 10^{-4}$ at peak sensitivity, and will be able to conclusively determine the origin of existing ultrashort-timescale microlensing events. A positive detection would provide evidence that a significant fraction of the cosmological dark matter consists of macroscopic, non-luminous objects.
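To see why event-duration distributions can separate lens populations, recall that the Einstein crossing time scales as the square root of the lens mass. Below is a minimal numerical sketch of this scaling, with placeholder distances, transverse velocity, and masses (not values from the paper).
\begin{verbatim}
# Hedged illustration: Einstein crossing time t_E ~ sqrt(M_lens), the basic
# scaling that lets event-duration distributions separate lens populations.
# All distances, velocities, and masses are placeholder assumptions.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
KPC = 3.086e19         # m
M_EARTH = 5.972e24     # kg
M_SUN = 1.989e30       # kg

def einstein_crossing_time(m_lens_kg, d_l=4*KPC, d_s=8*KPC, v_perp=2.0e5):
    """Einstein-radius crossing time (s) for a point lens."""
    d_ls = d_s - d_l
    r_e = np.sqrt(4*G*m_lens_kg*d_l*d_ls / (c**2 * d_s))   # Einstein radius (m)
    return r_e / v_perp

for label, mass in [("free-floating planet (1 M_Earth)", M_EARTH),
                    ("PBH (1e-4 M_sun)", 1e-4 * M_SUN)]:
    t_e = einstein_crossing_time(mass)
    print(f"{label}: t_E ~ {t_e/3600:.1f} h")
\end{verbatim}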
Galaxies of different types are not equally distributed in the Local Universe. In particular, the supergalactic plane is prominent among the brightest ellipticals, but inconspicuous among the brightest disk galaxies. This striking difference provides a unique test for our understanding of galaxy and structure formation. Here we use the SIBELIUS DARK constrained simulation to confront the predictions of the standard Lambda Cold Dark Matter ($\Lambda$CDM) model and standard galaxy formation theory with these observations. We find that SIBELIUS DARK reproduces the spatial distributions of disks and ellipticals and, in particular, the observed excess of massive ellipticals near the supergalactic equator. We show that this follows directly from the local large-scale structure and from the standard galaxy formation paradigm, wherein disk galaxies evolve mostly in isolation, while giant ellipticals congregate in the massive clusters that define the supergalactic plane. Rather than being anomalous as earlier works have suggested, the distributions of giant ellipticals and disks in the Local Universe and in relation to the supergalactic plane are key predictions of the $\Lambda$CDM model.
We investigate the possibility of learning the representations of a cosmological multifield dataset from the CAMELS project. We train a very deep variational autoencoder on images which comprise three channels, namely gas density (Mgas), neutral hydrogen density (HI), and magnetic field amplitudes (B). The clustering of the images in feature space with respect to some cosmological/astrophysical parameters (e.g. $\Omega_{\rm m}$) suggests that the generative model has learned latent space representations of the high dimensional inputs. We assess the quality of the latent codes by conducting a linear test on the extracted features, and find that a single dense layer is capable of recovering some of the parameters to a promising level of accuracy, especially the matter density, whose prediction corresponds to a coefficient of determination $R^{2} = 0.93$. Furthermore, results show that the generative model is able to produce images that exhibit statistical properties which are consistent with those of the training data, down to scales of $k\sim 4\,h/{\rm Mpc}$.
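A minimal sketch of the "linear test" step described above: a single dense (linear) layer regressing a parameter from frozen latent codes, scored with the coefficient of determination. The array shapes and toy data below are illustrative assumptions, not the CAMELS latents.
\begin{verbatim}
# Hedged sketch of a linear probe on frozen latent codes: fit one dense
# (linear) layer to predict a cosmological parameter and report R^2.
# `latents` and `omega_m` are toy placeholders, not the CAMELS data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 128))          # encoder outputs (N, d)
omega_m = 0.3 + 0.1 * latents[:, 0] + 0.01 * rng.normal(size=1000)  # toy target

z_tr, z_te, y_tr, y_te = train_test_split(latents, omega_m,
                                           test_size=0.2, random_state=0)
probe = LinearRegression().fit(z_tr, y_tr)      # the "single dense layer"
print("R^2 on held-out maps:", r2_score(y_te, probe.predict(z_te)))
\end{verbatim}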
We provide a measurement of the deficit in the Sunyaev-Zel'dovich Compton-$y$ signal towards cosmic voids, by stacking a catalogue of 97,090 voids constructed with BOSS-DR12 data on the $y$ maps built on data from the Atacama Cosmology Telescope (ACT) DR4 and the Planck satellite. We detect the void signal with a significance of $7.3\,\sigma$ with ACT and $9.7\,\sigma$ with Planck, finding good agreement between the void radial $y$ profiles extracted from the two maps. The inner-void profile (for angular separations within the void angular radius) is reconstructed with significances of $4.7\sigma$ and $6.1\sigma$ with ACT and Planck, respectively; we model this profile with a simple model that assumes uniform gas (under)density and temperature, which enables us to place constraints on the product $(-\delta_{\rm v}T_{\rm e})$ of the void density contrast (negative) and the electron temperature. The best-fit values from the two data sets are $(-\delta_{\rm v}T_{\rm e})=(6.5\pm 2.3)\times 10^{5}\,\text{K}$ for ACT and $(8.6 \pm 2.1)\times 10^{5}\,\text{K}$ for Planck ($68\%$ C.L.), which agree within their uncertainties. The data allow us to place lower limits on the expected void electron temperature at $2.7\times10^5\,\text{K}$ with ACT and $5.1\times10^5\,\text{K}$ with Planck ($95\%$ C.L.); these results translate into upper limits on the ratio between the void electron density and the cosmic mean of $n^{\rm v}_{\rm e}/\bar{n}_{\rm e}\leqslant 0.73$ and $0.49$ ($95\%$ C.L.), respectively. Our findings demonstrate the feasibility of using tSZ observations to constrain the gas properties inside cosmic voids, and confirm that voids are under-pressured regions compared to their surroundings.
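Schematically, the constraint on the product $(-\delta_{\rm v}T_{\rm e})$ follows from the standard Compton-$y$ integral combined with the uniform-gas assumption stated above (a simplified relation for illustration, not the paper's full profile model):
$$ y = \frac{\sigma_{\rm T} k_{\rm B}}{m_{\rm e} c^{2}} \int n_{\rm e}\, T_{\rm e}\, {\rm d}l , $$
so for gas of uniform density $n_{\rm e} = (1+\delta_{\rm v})\bar{n}_{\rm e}$ and temperature $T_{\rm e}$ filling a void of line-of-sight extent $L$, the deficit relative to mean-density gas at the same temperature scales as $\Delta y \propto \bar{n}_{\rm e} L\, \delta_{\rm v} T_{\rm e}$, which is why the stacked decrement constrains the combination $(-\delta_{\rm v}T_{\rm e})$.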
Efficiently analyzing maps from upcoming large-scale surveys requires gaining direct access to a high-dimensional likelihood and generating large-scale fields with high fidelity, both of which represent major challenges. Using CAMELS simulations, we employ state-of-the-art score-based diffusion models to achieve both tasks simultaneously. We show that our model, HIDM, is able to efficiently generate high-fidelity large-scale HI maps that are in good agreement with the CAMELS power spectrum, probability distribution, and likelihood up to second moments. HIDM represents a step forward towards maximizing the scientific return of future large-scale surveys.
Line intensity mapping is a promising probe of the universe's large-scale structure. We explore the sensitivity of the DSA-2000, a forthcoming array consisting of over 2000 dishes, to the statistical power spectrum of neutral hydrogen's 21 cm emission line. These measurements would reveal the distribution of neutral hydrogen throughout the low-redshift universe without the need to resolve individual sources. The success of these measurements relies on the instrument's sensitivity and resilience to systematics. We show that the DSA-2000 will have the sensitivity needed to detect the 21 cm power spectrum at z=0.5 and across power spectrum modes of 0.03-31.32 h/Mpc with 0.1 h/Mpc resolution. We find that supplementing the nominal array design with a dense core of 200 antennas will expand its sensitivity at low power spectrum modes and enable measurement of Baryon Acoustic Oscillations (BAOs). Finally, we present a qualitative discussion of the DSA-2000's unique resilience to sources of systematic error that can preclude 21 cm intensity mapping.
We investigate the feasibility of using the COmoving Lagrangian Acceleration (COLA) technique to efficiently generate galaxy mock catalogues that can accurately reproduce the statistical properties of observed galaxies. Our proposed scheme combines the subhalo abundance matching (SHAM) procedure with COLA simulations, utilizing only three free parameters: the scatter magnitude ($\sigma_{\rm scat}$) in SHAM, the initial redshift ($z_{\rm init}$) of the COLA simulation, and the time stride ($da$) used by COLA. In this proof-of-concept study, we focus on a subset of BOSS CMASS NGC galaxies within the redshift range $z\in [0.45, 0.55]$. We perform a $\mathtt{GADGET}$ simulation and low-resolution COLA simulations with various combinations of $(z_{\rm init}, da)$, each using $1024^{3}$ particles in an $800~h^{-1}{\rm Mpc}$ box. By minimizing the difference between the COLA mock and the CMASS NGC galaxies in the monopole of the two-point correlation function (2PCF), we obtain the optimal $\sigma_{\rm scat}$. We find that by setting $z_{\rm init}=29$ and $da=1/30$, we achieve good agreement between the COLA mock and the CMASS NGC galaxies within the range of 4 to $20~h^{-1}{\rm Mpc}$, at a computational cost two orders of magnitude lower than that of the N-body code. Moreover, a detailed verification is performed by comparing various statistical properties, such as the anisotropic 2PCF, three-point clustering, and power spectrum multipoles, which shows that the GADGET and COLA mock catalogues perform similarly against the CMASS NGC galaxies. Furthermore, we assess the robustness of the COLA mock catalogues across different cosmological models, demonstrating consistent results in the resulting 2PCFs. Our findings suggest that COLA simulations are a promising tool for efficiently generating mock catalogues for emulators and machine learning analyses in exploring the large-scale structure of the Universe.
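A hedged sketch of the calibration step described above: choose $\sigma_{\rm scat}$ by minimizing the mismatch between the mock and data 2PCF monopoles over 4-20 $h^{-1}$Mpc. The helper that stands in for the COLA+SHAM pipeline is a toy placeholder, not the paper's code.
\begin{verbatim}
# Hedged sketch: tune the SHAM scatter sigma_scat by minimizing the chi^2
# between mock and data 2PCF monopoles over 4-20 Mpc/h. The helper below is a
# trivial stand-in for the real COLA+SHAM pipeline.
import numpy as np
from scipy.optimize import minimize_scalar

r = np.linspace(4.0, 20.0, 17)                     # separations in Mpc/h

def monopole_2pcf_mock(sigma_scat):
    """Placeholder: in practice, run SHAM on the COLA subhalo catalogue and
    measure xi_0(r); here a toy power law whose amplitude damps with scatter."""
    return (r / 5.0) ** -1.8 * np.exp(-sigma_scat**2)

xi0_data = monopole_2pcf_mock(0.35) \
    + 1e-3 * np.random.default_rng(1).normal(size=r.size)   # toy "data"
sigma_data = 0.05 * np.abs(xi0_data) + 1e-3                  # toy errors

def chi2(sigma_scat):
    d = monopole_2pcf_mock(sigma_scat) - xi0_data
    return np.sum((d / sigma_data) ** 2)

best = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
print("recovered sigma_scat ~", round(best.x, 3))
\end{verbatim}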
Wash-in leptogenesis is a powerful mechanism to generate the baryon asymmetry of the Universe that treats right-handed-neutrino interactions on the same footing as electroweak sphaleron processes: as mere spectator processes acting on the background of chemical potentials in the Standard Model plasma. Successful wash-in leptogenesis requires this chemical background to be CP-violating, which can be achieved by violating any of the more than ten global charges that are conserved in the Standard Model at very high temperatures. In this paper, we demonstrate that the primordial charge asymmetries required for wash-in leptogenesis can be readily produced by evaporating primordial black holes (PBHs). Our argument is based on the fact that the Hawking radiation emitted by PBHs contains more or less any state in the particle spectrum. Therefore, if heavy states with CP-violating decays are present in the ultraviolet, PBH evaporation will unavoidably lead to the production of these states. We illustrate this scenario by means of a simple toy model where PBH evaporation leads to the production of heavy particles that we call asymmetrons and whose decay results in a primordial charge asymmetry for right-handed electrons, which in turn sets the initial conditions for wash-in leptogenesis. We focus on the parameter region where the decay of the initial thermal asymmetron abundance occurs long before PBH evaporation and only results in a negligible primordial charge asymmetry. PBH evaporation at later times then serves as a mechanism to resurrect the asymmetron abundance and ensure the successful generation of the baryon asymmetry after all. We conclude that PBHs can act as asymmetry-producing machines that grant access to whatever CP-violating physics may be present in the ultraviolet, rekindling it at lower energies where it can be reprocessed into a baryon asymmetry by right-handed neutrinos.
This paper introduces a technique called NKL, which cleans both polarized and unpolarized foregrounds from HI intensity maps by applying a Karhunen-Lo\`eve transform to the needlet coefficients. In NKL, one takes advantage of correlations not only along the line of sight, but also between different angular regions, referred to as ``chunks''. This provides a distinct advantage over many of the standard techniques applied in map space that one finds in the literature, which do not consider such spatial correlations. Moreover, the NKL technique does not require any priors on the nature of the foregrounds, which is important when considering polarized foregrounds. We also introduce a modified version of GNILC, referred to as MGNILC, which incorporates an approximation of the foregrounds to improve performance. The NKL and MGNILC techniques are tested on simulated maps which include polarized foregrounds. Their performance is compared to the GNILC, GMCA, ICA and PCA techniques. Two separate tests were performed: one at $1.84 < z < 2.55$ and the other at $0.31 < z < 0.45$. NKL was found to provide the best performance in both tests, providing a factor of 10 to 50 improvement over GNILC at $k < 0.1\,h\,{\rm Mpc}^{-1}$ in the higher redshift case and $k < 0.03\,h\,{\rm Mpc}^{-1}$ in the lower redshift case. However, none of the methods were found to recover the power spectrum satisfactorily at all BAO scales.
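For orientation, a minimal principal-component (Karhunen-Lo\`eve) cleaning of line-of-sight spectra is sketched below; the actual NKL method works on needlet coefficients and additionally exploits correlations between angular chunks, which this toy example does not.
\begin{verbatim}
# Hedged sketch of Karhunen-Loeve / PCA foreground cleaning along the line of
# sight: remove the few eigenmodes carrying the spectrally smooth foreground
# power. Toy stand-in only, not the needlet-space NKL method.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_pix = 64, 2000
nu = np.linspace(0.4, 0.5, n_freq)[:, None]              # GHz, toy band

foreground = 1e3 * (nu / 0.45) ** (-2.7) \
    * rng.lognormal(0, 0.3, size=(1, n_pix))             # smooth-spectrum
signal = 1e-3 * rng.normal(size=(n_freq, n_pix))         # toy HI fluctuations
maps = foreground + signal                               # (freq, pixel)

cov = np.cov(maps)                                       # freq-freq covariance
evals, evecs = np.linalg.eigh(cov)
modes = evecs[:, -3:]                                    # 3 largest KL modes
cleaned = maps - modes @ (modes.T @ maps)                # project them out

print("residual rms / signal rms:",
      np.std(cleaned - signal) / np.std(signal))
\end{verbatim}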
The reconstruction of the large-scale velocity field from the grouped Cosmicflows-4 (CF4) database is presented. The lognormal bias in the inferred distances and velocities is corrected by the Bias Gaussianization correction (BGc) scheme, and the linear density and velocity fields are reconstructed by means of the Wiener filter (WF) and constrained realizations (CRs) algorithm. These tools are tested against a suite of random and constrained Cosmicflows-3-like mock data. The CF4 data consist of three main subsamples: the 6dFGS data, the SDSS data, and the `others'. The individual contributions of the subsamples have been studied. The quantitative analysis of the velocity field is done mostly by means of the mean overdensity ($\Delta_L(R)$) and bulk velocity ($V_{\mathrm{bulk}}(R)$) profiles out to $300\, h^{-1}{\rm Mpc}$. The $V_{\mathrm{bulk}}(R)$ and $\Delta_{\mathrm L}(R)$ profiles of the CF4 data without its 6dFGS component are consistent with the cosmic variance to within $1\sigma$. The 6dFGS sample dominates the $V_{\mathrm{bulk}}$ ($\Delta_{\mathrm L}$) profile beyond $\sim120\, h^{-1}{\rm Mpc}$, and drives it to roughly a $3.4\sigma$ ($-1.9\sigma$) excess (deficiency) relative to the cosmic variance at $R\sim250\ (190)\ \, h^{-1}{\rm Mpc}$. The excess in the amplitude of $V_{\mathrm{bulk}}$ is dominated by its Supergalactic X component, roughly in the direction of the Shapley Concentration. The amplitude and alignment of the inferred velocity field from the CF4 data are in $\sim(2\,-\,3)\,\sigma$ discrepancy with respect to the $\Lambda$CDM model. Namely, the field is somewhat atypical, yet there is no compelling tension with the model.
Recent progress has revealed a number of constraints that cosmological correlators and the closely related field-theoretic wavefunction must obey as a consequence of unitarity, locality, causality and the choice of initial state. When combined with symmetries, namely homogeneity, isotropy and scale invariance, these constraints enable one to compute large classes of simple observables, an approach known as (boostless) cosmological bootstrap. Here we show that it is possible to relax the restriction of scale invariance, if one retains a discrete scaling subgroup. We find an infinite class of solutions to the weaker bootstrap constraints and show that they reproduce and extend resonant non-Gaussianity, which arises in well-motivated models such as axion monodromy inflation. We find no evidence of the new non-Gaussian shapes in the Planck data. Intriguingly, our results can be re-interpreted as a deformation of the scale-invariant case to include a complex order of the total energy pole, or more evocatively interactions with a complex number of derivatives. We also discuss for the first time IR-divergent resonant contributions and highlight an inconsequential inconsistency in the previous literature.
Observations of the 21\,cm line from neutral hydrogen promise to be an exciting new probe of astrophysics and cosmology during the Cosmic Dawn and through the Epoch of Reionization (EoR) to when dark energy accelerates the expansion of the Universe. At each of these epochs, separating bright foregrounds from the cosmological signal is a primary challenge that requires exquisite calibration. In this paper, we present a new calibration method called \textsc{nucal} that extends redundant-baseline calibration, allowing spectral variation in antenna responses to be solved for by using correlations between visibilities measuring the same angular Fourier modes at different frequencies. By modeling the chromaticity of the beam-weighted sky with a tunable set of discrete prolate spheroidal sequences (DPSS), we develop a calibration loop that optimizes for spectrally smooth calibrated visibilities. Crucially, this technique does not require explicit models of the sky or the primary beam. With simulations that incorporate realistic source and beam chromaticity, we show that this method solves for unsmooth bandpass features, exposes narrowband interference systematics, and suppresses smooth-spectrum foregrounds below the level of 21\,cm reionization models, even within much of the so-called "wedge" region where current foreground mitigation techniques struggle. We show that this foreground subtraction can be performed with minimal cosmological signal loss for certain well-sampled angular Fourier modes, making spectral-redundant calibration a promising technique for current and next-generation 21\,cm intensity mapping experiments.
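A compact illustration of the DPSS ingredient: fitting a small set of discrete prolate spheroidal sequences to a visibility spectrum to separate the spectrally smooth part from unsmooth bandpass structure. This is a toy sketch with made-up data, not the nucal calibration loop.
\begin{verbatim}
# Hedged sketch: model the smooth part of a visibility spectrum with a few
# DPSS (Slepian) vectors, as in DPSS-based delay filtering. Toy data only.
import numpy as np
from scipy.signal.windows import dpss

n_freq = 256
freqs = np.linspace(100e6, 200e6, n_freq)               # Hz
delay_cut = 300e-9                                       # smoothness scale (s)
bandwidth = freqs[-1] - freqs[0]

# Half time-bandwidth product sets how many modes stay below the delay cut.
nw = bandwidth * delay_cut
n_modes = int(2 * nw) + 1
basis = dpss(n_freq, nw, Kmax=n_modes).T                 # (n_freq, n_modes)

rng = np.random.default_rng(0)
smooth = 10.0 * np.exp(-((freqs - 150e6) / 60e6) ** 2)   # foreground-like
unsmooth = 0.3 * rng.normal(size=n_freq)                 # bandpass structure
vis = smooth + unsmooth

coeffs, *_ = np.linalg.lstsq(basis, vis, rcond=None)     # least-squares fit
smooth_model = basis @ coeffs
print("rms residual after removing smooth model:",
      np.std(vis - smooth_model).round(3))
\end{verbatim}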
We reanalyse the Pantheon+ supernova catalogue to compare a cosmology with non-FLRW evolution, the "timescape cosmology", with the standard $\Lambda$CDM cosmology. Pantheon+ is the largest available Type Ia supernova dataset for a geometric comparison between the two models. We construct a covariance matrix to be as independent of cosmology as possible, including independence from the FLRW geometry and from peculiar velocities defined with respect to FLRW average evolution. Within this framework, which goes far beyond most other definitions of "model independence", we introduce new statistics to refine Type Ia supernova (SneIa) light-curve analysis. In addition to the conventional galaxy correlation functions used to define the scale of statistical homogeneity, we introduce empirical statistics which enable a refined analysis of the distribution biases of the SneIa light-curve parameters $\beta c$ and $\alpha x_1$. For lower redshifts, the Bayesian analysis highlights important features attributable to the increased number of low-redshift supernovae, the artefacts of model-dependent light-curve fitting and the cosmic structure through which we observe supernovae. This indicates the need for cosmology-independent data reduction to conduct a stronger investigation of the emergence of statistical homogeneity and to compare alternative cosmologies in light of recent challenges to the standard model. "Dark energy" is generally invoked as a place-holder for "new physics". Our from-first-principles reanalysis of the Pantheon+ catalogue supports future deeper studies of the interplay of matter and nonlinear spacetime geometry, in a data-driven setting. For the first time in 25 years, we find evidence that the Pantheon+ catalogue already contains such a wealth of data that with further reanalysis, a genuine "paradigm shift" may soon emerge. [Abridged]
We present an emulator for the two-point clustering of biased tracers in real space. We construct this emulator using neural networks calibrated with more than $400$ cosmological models in an 8-dimensional cosmological parameter space that includes massive neutrinos and dynamical dark energy. The properties of biased tracers are described via a Lagrangian perturbative bias expansion which is advected to Eulerian space using the displacement field of numerical simulations. The cosmology dependence is captured thanks to a cosmology-rescaling algorithm. We show that our emulator is capable of describing the power spectrum of galaxy formation simulations for a sample mimicking that of a typical Emission-Line survey at $z \sim 1$ with an accuracy of $1-2\%$ up to nonlinear scales $k \sim 0.7\, h\, \mathrm{Mpc}^{-1}$.
Upcoming large galaxy surveys will subject the standard cosmological model, $\Lambda$CDM, to new precision tests. These can be tightened considerably if theoretical models of galaxy formation are available that can predict galaxy clustering and galaxy-galaxy lensing on the full range of measurable scales, throughout volumes as large as those of the surveys, and with sufficient flexibility that uncertain aspects of the underlying astrophysics can be marginalised over. This, in particular, requires mock galaxy catalogues in large cosmological volumes that can be directly compared to observation, and that can be optimised empirically by Markov Chain Monte Carlo or other similar schemes to eliminate or estimate astrophysical parameters related to galaxy formation when constraining cosmology. Semi-analytic galaxy formation methods implemented on top of cosmological dark matter simulations offer a computationally efficient approach to construct physically based and flexibly parametrised galaxy formation models, and as such they are more powerful than purely empirical models, which are faster still. Here we introduce an updated methodology for the semi-analytic L-GALAXIES code, allowing it to be applied to simulations of the new MillenniumTNG project, producing galaxies directly on fully continuous past lightcones, potentially over the full sky, out to high redshift, and for all galaxies more massive than $\sim 10^8\,{\rm M}_\odot$. We investigate the numerical convergence of the resulting predictions, and study the projected galaxy clustering signals of different samples. The new methodology can be viewed as an important step towards more faithful forward-modelling of observational data, helping to reduce systematic distortions in the comparison of theory to observations.
Measurement of the global 21-cm signal during Cosmic Dawn (CD) and the Epoch of Reionization (EoR) is made difficult by bright foreground emission which is 2-5 orders of magnitude larger than the expected signal. Fitting for a physics-motivated parametric forward model of the data within a Bayesian framework provides a robust means to separate the signal from the foregrounds, given sufficient information about the instrument and sky. It has previously been demonstrated that, within such a modelling framework, a foreground model of sufficient fidelity can be generated by dividing the sky into $N$ regions and scaling a base map assuming a distinct uniform spectral index in each region. Using the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) as our fiducial instrument, we show that, if unaccounted-for, amplitude errors in low-frequency radio maps used for our base map model will prevent recovery of the 21-cm signal within this framework, and that the level of bias in the recovered 21-cm signal is proportional to the amplitude and the correlation length of the base-map errors in the region. We introduce an updated foreground model that is capable of accounting for these measurement errors by fitting for a monopole offset and a set of spatially-dependent scale factors describing the ratio of the true and model sky temperatures, with the size of the set determined by Bayesian evidence-based model comparison. We show that our model is flexible enough to account for multiple foreground error scenarios allowing the 21-cm sky-averaged signal to be detected without bias from simulated observations with a smooth conical log spiral antenna.
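A schematic of the foreground parametrisation described above: per-region scale factors applied to a base map, a distinct spectral index in each region, and a monopole offset. The region assignments, base map, and parameter values below are toy placeholders, not REACH inputs.
\begin{verbatim}
# Hedged sketch of the region-based foreground model: per-region scale factors
# applied to a base map, a distinct spectral index in each region, and a
# monopole offset. All inputs are toy placeholders.
import numpy as np

n_pix, n_regions = 3072, 4
rng = np.random.default_rng(0)
base_map_408 = rng.lognormal(mean=3.0, sigma=0.5, size=n_pix)  # K at 408 MHz
region_of_pix = rng.integers(0, n_regions, size=n_pix)         # region labels
freqs = np.linspace(50.0, 200.0, 151)                          # MHz

def foreground_model(scale, beta, offset):
    """T_model(nu, pix) = offset + scale[r]*T_base(pix)*(nu/408)^(-beta[r])."""
    s = scale[region_of_pix] * base_map_408                    # (n_pix,)
    b = beta[region_of_pix]                                    # (n_pix,)
    return offset + s[None, :] * (freqs[:, None] / 408.0) ** (-b[None, :])

model = foreground_model(scale=np.ones(n_regions),
                         beta=np.full(n_regions, 2.55),
                         offset=0.0)
sky_avg = model.mean(axis=1)          # sky-averaged foreground spectrum
print(sky_avg[:3])
\end{verbatim}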
Diffuse radio recombination lines (RRLs) in the Galaxy are possible foregrounds for redshifted 21~cm experiments. We use EDGES drift scans centered at $-26.7^{\circ}$~declination to characterize diffuse RRLs across the southern sky. We find RRLs averaged over the large antenna beam ($72^{\circ} \times 110^{\circ}$) reach minimum amplitudes between right ascensions~2-6~h. In this region, the C$\alpha$ absorption amplitude is $33\pm11$~mK (1$\sigma$) averaged over 50-87~MHz ($27\gtrsim z \gtrsim15$ for the 21~cm line) and increases strongly as frequency decreases. C$\beta$ and H$\alpha$ lines are consistent with no detection with amplitudes of $13\pm14$ and $12\pm10$~mK (1$\sigma$), respectively. At 108-124.5~MHz ($z\approx11$) in the same region, we find no evidence for carbon or hydrogen lines at the noise level of 3.4~mK (1$\sigma$). Conservatively assuming observed lines come broadly from the diffuse interstellar medium, as opposed to a few compact regions, these amplitudes provide upper limits on the intrinsic diffuse lines. The observations support expectations that Galactic RRLs can be neglected as significant foregrounds for a large region of sky until redshifted 21~cm experiments, particularly those targeting Cosmic Dawn, move beyond the detection phase. We fit models of the spectral dependence of the lines averaged over the large beam of EDGES, which may contain multiple line sources with possible line blending, and find that including degrees of freedom for expected smooth, frequency-dependent deviations from local thermodynamic equilibrium (LTE) is preferred over simple LTE assumptions for C$\alpha$ and H$\alpha$ lines. For C$\alpha$ we estimate departure coefficients $0.79<b_n\beta_n<4.5$ along the inner Galactic Plane and $0<b_n\beta_n<2.3$ away from the inner Galactic Plane.
Hubble constant $H_0$ and weighted amplitude of matter fluctuations $S_8$ determinations are biased to higher and lower values, respectively, in the late Universe with respect to early Universe values inferred by the Planck collaboration within flat $\Lambda$CDM cosmology. If these anomalies are physical, i.e. not due to systematics, they naively suggest that $H_0$ decreases and $S_8$ increases with effective redshift. Here, subjecting matter density today $\Omega_{m}$ to a prior, corresponding to a combination of Planck CMB and BAO data, we perform a consistency test of the Planck-$\Lambda$CDM cosmology and show that $S_8$ determinations from $f \sigma_8(z)$ constraints increase with effective redshift. Due to the redshift evolution, a $\sim 3 \sigma$ tension in the $S_8$ parameter with Planck at lower redshifts remarkably becomes consistent with Planck within $1 \sigma$ at high redshifts. This provides corroborating support for an $S_8$ discrepancy that is physical in origin. We further confirm that the flat $\Lambda$CDM model is preferred over a theoretically ad hoc model with a jump in $S_8$ at a given redshift. In the absence of the CMB+BAO $\Omega_m$ prior, we find that $> 3 \sigma$ tensions with Planck in low redshift data are ameliorated by shifts in the parameters in high redshift data. Results here and elsewhere suggest that the $\Lambda$CDM cosmological parameters are redshift dependent. Fitting parameters that evolve with redshift is a recognisable hallmark of model breakdown.
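To make the inference step concrete: given an $f\sigma_8$ measurement at an effective redshift and a prior on $\Omega_m$, one solves for $\sigma_8$ today through the linear growth factor and forms $S_8$. Below is a minimal numerical sketch under flat $\Lambda$CDM, with placeholder input values (not numbers from the paper).
\begin{verbatim}
# Hedged sketch: convert an fsigma8(z_eff) measurement plus an Omega_m prior
# into S_8 = sigma_8 (Omega_m/0.3)^0.5, using the flat-LCDM linear growth
# factor. Input numbers are placeholders.
import numpy as np
from scipy.integrate import quad

def E(z, om):                               # H(z)/H0 for flat LCDM
    return np.sqrt(om * (1 + z) ** 3 + 1 - om)

def growth_D(z, om):                        # unnormalised linear growth factor
    integrand = lambda zp: (1 + zp) / E(zp, om) ** 3
    return 2.5 * om * E(z, om) * quad(integrand, z, np.inf)[0]

def growth_rate_f(z, om):                   # f ~ Omega_m(z)^0.55 approximation
    om_z = om * (1 + z) ** 3 / E(z, om) ** 2
    return om_z ** 0.55

om_prior = 0.315                            # placeholder CMB+BAO prior
z_eff, fs8 = 0.5, 0.46                      # placeholder measurement

sigma8_z = fs8 / growth_rate_f(z_eff, om_prior)
sigma8_0 = sigma8_z * growth_D(0.0, om_prior) / growth_D(z_eff, om_prior)
S8 = sigma8_0 * np.sqrt(om_prior / 0.3)
print("S_8 =", round(S8, 3))
\end{verbatim}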
We forecast the prospects for cross-correlating future line intensity mapping (LIM) surveys with the current and future Ly-$\alpha$ forest data. We use large cosmological hydrodynamic simulations to model the expected emission signal for the CO rotational transition in the COMAP LIM experiment at the 5-year benchmark and the Ly-$\alpha$ forest absorption signal for various surveys, including eBOSS, DESI, and PFS. We show that CO$\times$Ly-$\alpha$ forest can significantly enhance the detection signal-to-noise ratio of CO, with a $200$ to $300 \%$ improvement when cross-correlated with the forest observed in the Prime Focus Spectrograph (PFS) survey and a $50$ to $75\%$ enhancement for the currently available eBOSS or the upcoming DESI observations. We compare to the signal-to-noise improvements expected for a galaxy survey and show that CO$\times$Ly-$\alpha$ is competitive with even a spectroscopic galaxy survey in raw signal-to-noise. Furthermore, our study suggests that the clustering of CO emission is tightly constrained by CO$\times$Ly-$\alpha$ forest, due to the increased signal-to-noise ratio and the simplicity of Ly-$\alpha$ absorption power spectrum modeling. Any foreground contamination or systematics are expected not to be shared between LIM surveys and Ly-$\alpha$ forest observations; this provides an unbiased inference. Our findings highlight the potential benefits of utilizing the Ly-$\alpha$ forest to aid in the initial detection of signals in line intensity experiments. For example, we also estimate that [CII]$\times$Ly-$\alpha$ forest measurements from EXCLAIM and DESI/eBOSS, respectively, should have a larger signal-to-noise ratio than planned [CII]$\times$quasar observations by about an order of magnitude. Our results can be readily applied to actual data thanks to the observed quasar spectra in eBOSS Stripe 82, which overlaps with several LIM surveys.
We present a novel approach for estimating cosmological parameters, $\Omega_m$, $\sigma_8$, $w_0$, and one derived parameter, $S_8$, from 3D lightcone data of dark matter halos in redshift space covering a sky area of $40^\circ \times 40^\circ$ and redshift range of $0.3 < z < 0.8$, binned to $64^3$ voxels. Using two deep learning algorithms, Convolutional Neural Network (CNN) and Vision Transformer (ViT), we compare their performance with the standard two-point correlation (2pcf) function. Our results indicate that CNN yields the best performance, while ViT also demonstrates significant potential in predicting cosmological parameters. By combining the outcomes of the Vision Transformer, the Convolutional Neural Network, and the 2pcf, we achieved a substantial reduction in error compared to the 2pcf alone. To better understand the inner workings of the machine learning algorithms, we employed the Grad-CAM method to investigate the sources of essential information in activation maps of the CNN and ViT. Our findings suggest that the algorithms focus on different parts of the density field and redshift depending on which parameter they are predicting. This proof-of-concept work paves the way for incorporating deep learning methods to estimate cosmological parameters from large-scale structures, potentially leading to tighter constraints and improved understanding of the Universe.
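For orientation, a minimal 3D convolutional network of the kind described above, mapping a $64^3$ voxel grid to the four target parameters; the layer sizes and the choice of PyTorch are illustrative assumptions, not the architecture used in the paper.
\begin{verbatim}
# Hedged sketch: a small 3D CNN regressing four cosmological parameters from a
# 64^3 voxel grid. Architecture and framework (PyTorch) are illustrative only.
import torch
import torch.nn as nn

class VoxelCNN(nn.Module):
    def __init__(self, n_params=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, x):                    # x: (batch, 1, 64, 64, 64)
        return self.head(self.features(x).flatten(1))

model = VoxelCNN()
halos = torch.randn(2, 1, 64, 64, 64)        # toy halo-density lightcone voxels
print(model(halos).shape)                    # -> torch.Size([2, 4])
\end{verbatim}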
Unveiling the true nature of Dark Matter (DM), which manifests itself only through gravity, is one of the principal quests in physics. Leading candidates for DM are weakly interacting massive particles (WIMPs) or ultralight bosons (axions), at opposite extremes in mass scales, that have been postulated by competing theories to solve deficiencies in the Standard Model of particle physics. Whereas DM WIMPs behave like discrete particles ($\varrho$DM), quantum interference between DM axions is manifested as waves ($\psi$DM). Here, we show that gravitational lensing leaves signatures in multiply-lensed images of background galaxies that reveal whether the foreground lensing galaxy inhabits a $\varrho$DM or $\psi$DM halo. Whereas $\varrho$DM lens models leave well documented anomalies between the predicted and observed brightnesses and positions of multiply-lensed images, $\psi$DM lens models correctly predict the level of anomalies left over by $\varrho$DM lens models. More challengingly, when subjected to a battery of tests for reproducing the quadruply-lensed triplet images in the system HS 0810+2554, $\psi$DM is able to reproduce all aspects of this system whereas $\varrho$DM often fails. The ability of $\psi$DM to resolve lensing anomalies even in demanding cases like HS 0810+2554, together with its success in reproducing other astrophysical observations, tilts the balance toward new physics invoking axions.
We present refined cosmological parameter constraints derived from a cosmic shear analysis of the fourth data release of the Kilo-Degree Survey (KiDS-1000). Our main improvements include enhanced galaxy shape measurements made possible by an updated version of the lensfit code and improved shear calibration achieved with a newly developed suite of multi-band image simulations. Additionally, we incorporated recent advancements in cosmological inference from the joint Dark Energy Survey Year 3 and KiDS-1000 cosmic shear analysis. Assuming a spatially flat standard cosmological model, we constrain $S_8\equiv\sigma_8(\Omega_{\rm m}/0.3)^{0.5} = 0.776_{-0.027-0.003}^{+0.029+0.002}$, where the second set of uncertainties accounts for the systematic uncertainties within the shear calibration. These systematic uncertainties stem from minor deviations from realism in the image simulations and the sensitivity of the shear measurement algorithm to the morphology of the galaxy sample. Despite these changes, our results align with previous KiDS studies and other weak lensing surveys, and we find a ${\sim}2.3\sigma$ level of tension with the Planck cosmic microwave background constraints on $S_8$.
Several pulsar timing array (PTA) collaborations recently announced the first detection of a stochastic gravitational wave (GW) background, leaving open the question of its source. We explore the possibility that it originates from cosmic inflation, a guaranteed source of primordial GW. The inflationary GW background amplitude is enhanced at PTA scales by a non-standard early cosmological evolution, driven by Dirac-Born-Infeld (DBI) scalar dynamics motivated by string theory. The resulting GW energy density has a broken power-law frequency profile, entering the PTA band with a peak amplitude consistent with the recent GW detection. After this initial DBI kination epoch, the dynamics starts a new phase mainly controlled by the scalar potential. It provides a realization of an early dark energy scenario aimed at relaxing the $H_0$ tension, and a late dark energy model which explains the current cosmological acceleration with no need of a cosmological constant. Hence our mechanism - besides providing a possible explanation for the recent PTA results - connects them with testable properties of the physics of the dark universe.
The Baryon Acoustic Oscillations (BAO) from Integrated Neutral Gas Observations (BINGO) radio telescope will use the neutral hydrogen emission line to map the Universe in the redshift range $0.127 \le z \le 0.449$, with the main goal of probing BAO. In addition, the instrument's optical design and hardware configuration support the search for Fast Radio Bursts (FRBs). In this work, we propose the use of a BINGO Interferometry System (BIS) including new auxiliary, smaller radio telescopes (hereafter \emph{outriggers}). The interferometric approach makes it possible to pinpoint the FRB sources in the sky. We present here the results of several BIS configurations combining BINGO horns with and without mirrors ($4$ m, $5$ m, and $6$ m) and 5, 7, 9, or 10 single-horn outriggers. We developed a new {\tt Python} package, {\tt FRBlip}, which generates synthetic FRB mock catalogs and computes, based on a telescope model, the observed signal-to-noise ratio (S/N), which we used to compute numerically the detection rates of the telescopes and how many interferometric pairs of telescopes (\emph{baselines}) can observe an FRB. FRBs observed by more than one baseline are the ones whose location can be determined. We thus evaluate the performance of the BIS regarding FRB localization. We found that the BIS will be able to localize 23 FRBs yearly with single-horn outriggers in the best configuration (using 10 outriggers with 6 m mirrors), with redshift $z \leq 0.96$; the full localization capability depends on the number and the type of the outriggers. Wider beams are best for pinpointing FRB sources because potential candidates will be observed by more baselines, while narrow beams look deep in redshift. The BIS can be a powerful extension of the regular BINGO telescope, dedicated to observing hundreds of FRBs during Phase 1. Many of them will be well localized with a single horn + 6 m dish as outriggers. (Abridged)
We investigate the strong gravitational lensing phenomena caused by a black hole with a dark matter halo. In this study, we examine strong gravitational lensing with two significant dark matter models: the universal rotation curve model and the cold dark matter model. To do this, we first numerically estimate the strong lensing coefficients and strong deflection angles for both the universal rotation curve and cold dark matter models. It is observed that the deflection angle, denoted as $\alpha_D$, increases with the parameter $\alpha$ while holding the value of $\gamma \cdot 2M$ constant. Additionally, it increases with the parameter $\gamma \cdot 2M$ while keeping the value of $\alpha$ constant. The strong deflection angle, $\alpha_D$, for the black hole with a dark matter halo, with parameters $\alpha=0$ and $\gamma \cdot 2M$, greatly enhances the gravitational bending effect and surpasses the corresponding case of the standard Schwarzschild black hole ($A=0, B=0, \alpha=0, \gamma \cdot 2M=0$). Furthermore, we investigate the astrophysical consequences through strong gravitational lensing observations, using examples of various supermassive black holes such as $M87^{*}$ and $SgrA^{*}$ located at the centers of several galaxies. It is observed that black holes with dark matter halos can be quantitatively distinguished and characterized from the standard Schwarzschild black hole ($A=0, B=0, \alpha=0, \gamma \cdot 2M=0$). The findings in our analysis suggest that observational tests for black holes influenced by dark matter halos are indeed feasible and viable.
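For reference, the strong lensing coefficients mentioned above are conventionally defined through the logarithmic divergence of the bending angle near the photon sphere, in the standard strong-deflection-limit expansion (the explicit dependence of the coefficients on $\alpha$ and $\gamma\cdot 2M$ is not reproduced here):
$$ \alpha_D(\theta) \simeq -\bar{a}\,\log\!\left(\frac{\theta D_{OL}}{u_{\rm ph}} - 1\right) + \bar{b} , $$
where $u_{\rm ph}$ is the impact parameter of the photon sphere, $D_{OL}$ the observer-lens distance, and $(\bar{a},\bar{b})$ the metric-dependent strong lensing coefficients.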
We have found numerically the initial conditions in the $(R, H)$ plane leading to successful Starobinsky inflation in $R+R^2$ gravity for isotropic metrics with positive spatial curvature. Trajectories can reach the inflationary regime either directly or after going through a bounce, or even a recollapse followed by a bounce. Our numerical plots indicate that ``good'' initial conditions exist even for large initial spatial curvature; however, we argue that such a trajectory must cross a region of rather large $R$ or $H$. This means that the range of viability of the $R+R^2$ theory in the $(R,H)$ plane directly affects the question of the viability of Starobinsky inflation for a positive-spatial-curvature isotropic Universe.
In this paper we present the European Low Frequency Survey (ELFS), a project that will enable foreground-free measurements of primordial $B$-mode polarization down to the level of $10^{-3}$ by measuring the Galactic and extra-Galactic emissions in the 5--120\,GHz frequency window. Indeed, the main difficulty in measuring the B-mode polarization comes not just from its sheer faintness, but from the fact that many other objects in the Universe also emit polarized microwaves, which mask the faint CMB signal. The first stage of this project will be carried out in synergy with the Simons Array (SA) collaboration, installing a 5.5--11 GHz coherent receiver at the focus of one of the three 3.5\,m SA telescopes in Atacama, Chile ("ELFS on SA"). The receiver will be equipped with a fully digital back-end based on the latest Xilinx RF System-on-Chip devices that will provide frequency resolution of 1\,MHz across the whole observing band, allowing us to clean the scientific signal from unwanted radio frequency interference, particularly from low-Earth orbit satellite mega-constellations. This paper reviews the scientific motivation for ELFS and its instrumental characteristics, and provides an update on the development of ELFS on SA.
The large-scale distribution of matter, as mapped by photometric surveys like the Dark Energy Survey (DES), serves as a powerful probe of cosmology. It is especially sensitive to both the amplitude of matter clustering ($\sigma_8$) and the total matter density ($\Omega_m$). The fiducial analysis of the two-point clustering statistics of these surveys is invariably done in configuration space, where complex masking schemes are easier to handle. However, such an analysis inherently mixes different scales together, requiring special care in modeling. In this study, we present an analysis of DES Y3 3x2 clustering data in harmonic space, where small and large scales are better separated and can be neatly modeled using perturbative techniques. Using conservative scale cuts together with the Limber approximation and a Gaussian covariance assumption in this first study, we model the clustering data under a linear bias model for galaxies, incorporating a comprehensive treatment of astrophysical effects. We subsequently extend this fiducial analysis to explore a third-order biasing prescription. For our fiducial analysis, we get $S_8=0.789\pm0.020$, consistent with the configuration space analysis presented by the DES collaboration, although under our different modeling choices, we find a preference for a lower $\Omega_m$ and a higher $\sigma_8$. The analysis sets the stage for a future search for signatures of primordial non-Gaussianity and blue-tilted isocurvature perturbations from photometric surveys.
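For orientation, under the Limber approximation with a linear galaxy bias $b$, the harmonic-space clustering spectrum used in such analyses takes the standard schematic form (the full DES model additionally includes lensing, magnification and other astrophysical contributions):
$$ C_{\ell}^{gg} \simeq \int {\rm d}\chi\, \frac{b^{2}\, n^{2}(\chi)}{\chi^{2}}\; P_{\rm m}\!\left(k=\frac{\ell+1/2}{\chi},\, z(\chi)\right), $$
where $n(\chi)$ is the normalised radial selection function of the galaxy sample and $P_{\rm m}$ the matter power spectrum.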
The apparent tension between the luminosity functions of red supergiant (RSG) stars and of RSG progenitors of Type II supernovae (SNe) is often referred to as the RSG problem, and it has motivated some to suggest that many RSGs end their lives without a SN explosion. However, the luminosity functions of RSG SN progenitors presented so far were biased to high luminosities, because the sensitivity of the search was not considered. Here, we use limiting magnitudes to calculate a bias-corrected RSG progenitor luminosity function. We find that only $(36\pm11)\%$ of all RSG progenitors are brighter than a bolometric magnitude of $-7\,\text{mag}$, a significantly smaller fraction than the $(56\pm5)\%$ quoted by Davies & Beasor (2020). The larger uncertainty is due to the relatively small progenitor sample, while uncertainties on measured quantities such as magnitudes, bolometric corrections, extinction, or SN distances only have a minor impact, as long as they fluctuate randomly for different objects in the sample. When comparing the luminosity function of RSG SN progenitors to that of M-type supergiants in the Large Magellanic Cloud, we find that they are consistent, due to the flatter shape of the progenitor luminosity function. The RSG progenitor luminosity function, hence, does not imply the existence of failed SNe. The presented statistical method is not limited to progenitor searches, but applies to any situation in which a measurement is made for a sample of detected objects, but the probed quantity or property can only be determined for part of the sample.
The Advanced X-ray Imaging Satellite (AXIS) is a Probe-class concept that will build on the legacy of the Chandra X-ray Observatory by providing low-background, arcsecond-resolution imaging in the 0.3-10 keV band across a 450 arcminute$^2$ field of view, with an order of magnitude improvement in sensitivity. AXIS utilizes breakthroughs in the construction of lightweight segmented X-ray optics using single-crystal silicon, and developments in the fabrication of large-format, small-pixel, high readout rate CCD detectors with good spectral resolution, allowing a robust and cost-effective design. Further, AXIS will be responsive to target-of-opportunity alerts and, with onboard transient detection, will be a powerful facility for studying the time-varying X-ray universe, following on from the legacy of the Neil Gehrels Swift Observatory, which revolutionized studies of the transient X-ray Universe. In this paper, we present an overview of AXIS, highlighting the prime science objectives driving the AXIS concept and how the observatory design will achieve these objectives.
This paper compiles the model parameters and zero-temperature properties of an extensive collection of published theoretical nuclear interactions, including 251 non-relativistic (Skyrme-like), 252 relativistic mean field (RMF) and point-coupling (RMF-PC), and 13 Gogny-like forces. This forms the most exhaustive tabulation of model parameters to date. The properties of uniform symmetric matter and pure neutron matter at the saturation density are determined. Symmetry properties found from the second-order term of a Taylor expansion in neutron excess are compared to the energy difference of pure neutron and symmetric nuclear matter at the saturation density. Selected liquid-droplet model parameters, including the surface tension and surface symmetry energy, are determined for semi-infinite surfaces. Liquid droplet model neutron skin thicknesses and dipole polarizabilities of the neutron-rich closed-shell nuclei $^{48}$Ca and $^{208}$Pb are compared to published theoretical Hartree-Fock and experimental results. In addition, the radii, binding energies, moments of inertia and tidal deformabilities of 1.2, 1.4 and 1.6 M$_\odot$ neutron stars are computed. An extensive correlation analysis of bulk matter, nuclear structure, and low-mass neutron star properties is performed and compared to nuclear experiments and astrophysical observations.
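The "second-order term of a Taylor expansion in neutron excess" referred to above is the standard symmetry-energy expansion of the energy per nucleon (schematic notation):
$$ \frac{E}{A}(n,\delta) = \frac{E}{A}(n,0) + S_{2}(n)\,\delta^{2} + \mathcal{O}(\delta^{4}), \qquad \delta \equiv \frac{n_{\rm n}-n_{\rm p}}{n_{\rm n}+n_{\rm p}}, $$
so the comparison performed in the paper is between $S_{2}$ at the saturation density and the finite difference between pure neutron matter ($\delta=1$) and symmetric matter ($\delta=0$) energies at the same density.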
In 2020, the HAWC Collaboration presented the first catalog of gamma-ray sources emitting above 56 TeV and 100 TeV. With nine sources detected, this was the highest-energy source catalog to date. Here, we present the results of a re-analysis of the original data, along with additional data acquired since then. We use a new version of the reconstruction software with better pointing accuracy and improved gamma/hadron separation. We now see more than 25 sources above 56 TeV, with most sources located in the Galactic plane. The vast majority of these seem to be leptonic pulsar wind nebulae, but some have been shown to have hadronic emission. We show spectra and discuss possible emission mechanisms of some of the most interesting sources, including the ones the HAWC Collaboration considers PeVatron candidates.
Context. Blazar AO 0235+164, located at redshift z = 0.94, has undergone several sharp multi-spectral-range flaring episodes during the last decades. In particular, the episodes peaking in 2008 and 2015, which received extensive multi-wavelength coverage, exhibited interesting behavior. Aims. We study the origin of these two observed flares by constraining the properties of the observed photo-polarimetric variability, of the broad-band spectral energy distribution, and of the time-evolution behavior of the source as seen by ultra-high-resolution total-flux and polarimetric very-long-baseline interferometry (VLBI) imaging. Methods. The analysis of VLBI images allows us to constrain the kinematic and geometrical parameters of the 7 mm jet. We use the Discrete Correlation Function to compute the statistical correlation and the delays between emission at different spectral ranges. Multi-epoch modeling of the spectral energy distributions allows us to propose specific models of emission, in particular for the unusual spectral features observed in this source in the X-ray region of the spectrum during strong multi-spectral-range flares. Results. We find that these X-ray spectral features can be explained by an emission component originating in a particle distribution separate from the one responsible for the two standard blazar bumps. This is in agreement with the results of our correlation analysis, which do not find a strong correlation between the X-rays and the remaining spectral ranges. We find that both external-Compton-dominated and synchrotron self-Compton-dominated models can explain the observed spectral energy distributions. However, synchrotron self-Compton models are strongly favored by the delays and geometrical parameters inferred from the observations.
We present the results from an analysis of data from an \textit{XMM-Newton} observation of the accreting high mass X-ray binary pulsar GX 301$-$2. Spectral analysis in the non-flaring segment of the observation revealed that the equivalent width of the iron fluorescence emission is correlated with the observed absorption column density, and that the ratio of the iron K$\beta$ and K$\alpha$ line strengths varied with the flux of the source. Coherent pulsations were detected at the spin period of the pulsar, 687.9$\pm$0.1 s, and a secondary pulsation was also detected with a period of 671.8$\pm$0.2 s, most prominent in the energy band of the iron line. At the spin period of the neutron star, the pulsation of the iron line has a low amplitude and the profile is different from that of the continuum. Pulse phase-resolved spectroscopy also revealed pulsations of the iron emission line during the non-flaring segment of the light curve. At the secondary period, both the iron line and the continuum have nearly identical pulse fractions and pulse profiles. The additional periodicity can be attributed to the beat frequency between the spin of the neutron star and the Keplerian frequency of a stellar wind clump in retrograde motion around the neutron star. Reprocessed X-ray emission originating from the clump can produce the observed secondary pulsations both in the continuum and in the iron fluorescence line. The clump rotating around the neutron star is estimated to be approximately five light-seconds away from the neutron star.
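As a rough consistency check of the beat interpretation (illustrative arithmetic based on the quoted periods, not a number stated in the abstract): for reprocessing material in retrograde motion, the observed frequency is the sum of the spin and orbital frequencies, so
$$ \nu_{\rm K} = \frac{1}{671.8\,{\rm s}} - \frac{1}{687.9\,{\rm s}} \approx 3.5\times10^{-5}\,{\rm Hz} \;\Longrightarrow\; P_{\rm K} \approx 2.9\times10^{4}\,{\rm s} \approx 8\,{\rm h}, $$
which would be the implied Keplerian period of the clump around the neutron star.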
The influence of intergalactic magnetic fields on the strong gravitational lensing of blazar secondary gamma radiation is discussed. Currently, two cases of strong gravitational lensing of blazar gamma radiation are known, where the radiation is deflected by galaxies on the line of sight between the blazars and the Earth. The magnetic field can affect the motion of the electron-positron pairs generated by the primary radiation, and thereby change the directions of the secondary gamma radiation. This modifies the gravitational lens equation and leads to a dependence of the observed signal in the secondary gamma radiation on the photon energy and on the magnetic field. Accordingly, it is possible in principle to estimate intergalactic magnetic fields from the time delay of signals, from the angular positions of images (for future high-resolution gamma-ray telescopes), or from the shape of the observed energy spectrum. This method is demonstrated with the example of the blazar B0218+357. In this case, however, it is not possible to obtain useful constraints due to the large distances to the blazar and the lens galaxy. The result is only a lower limit on the magnetic field, $B>2\times10^{-17}$ G, which is weaker than other existing constraints. But future discoveries of lensed blazars may provide more favourable opportunities for measuring the magnetic fields, especially with the help of a new generation of gamma-ray telescopes such as e-ASTROGAM, GECAM, and SVOM, as well as future gamma-ray telescopes with high angular resolution, $\sim0.1''$.
AM CVn-type systems are ultracompact, helium-accreting binary systems which are evolutionarily linked to the progenitors of thermonuclear supernovae and are expected to be strong Galactic sources of gravitational waves, detectable by upcoming space-based interferometers. AM CVn binaries with orbital periods $\lesssim$ 20--23 min exist in a constant high state with a permanently ionised accretion disc. We present the discovery of TIC 378898110, a bright ($G=14.3$ mag), nearby ($309.3 \pm 1.8$ pc), high-state AM CVn binary discovered in TESS two-minute-cadence photometry. At optical wavelengths this is the third-brightest AM CVn binary known. The photometry of the system shows a 23.07172(6) min periodicity, which is likely to be the `superhump' period and implies an orbital period in the range 22--23 min. There is no detectable spectroscopic variability. The system underwent an unusual, year-long brightening event during which the dominant photometric period changed to a shorter period (constrained to $20.5 \pm 2.0$ min), which we suggest may be evidence for the onset of disc-edge eclipses. The estimated mass transfer rate, $\log (\dot{M} / \mathrm{M_\odot} \mathrm{yr}^{-1}) = -6.8 \pm 1.0$, is unusually high and may suggest a high-mass or thermally inflated donor. The binary is detected as an X-ray source, with a flux of $9.2 ^{+4.2}_{-1.8} \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ in the 0.3--10 keV range. TIC 378898110 is the shortest-period binary system discovered with TESS, and its large predicted gravitational-wave amplitude makes it a compelling verification binary for future space-based gravitational wave detectors.
We study the effect of temperature on the global properties of static and slowly rotating self-gravitating Bose-Einstein condensate (BEC) stars within general relativity. We employ a recently developed temperature-dependent BEC equation of state (EoS) to describe the stellar matter, assuming that the condensate can be described by a non-relativistic EoS. Stellar profiles are obtained using the general relativistic Hartle-Thorne slow-rotation approximation equations. We find that the mass-radius values decrease with increasing temperature for both the static and rotating cases, although the presence of temperature supports high mass values at lower central densities. The countering effects of rotation and temperature on the BEC stellar structure have been analysed and quantified. We report that the inclusion of temperature has a significant effect on the rotating stellar profiles but a negligible effect on the maximum mass, as in the case of the static system. We have also studied the effect of the EoS parameters -- the boson mass and the strength of the self-interaction -- on the global properties of static and rotating BEC stars in the presence of temperature.
LISA, the Laser Interferometer Space Antenna, will usher in a new era in gravitational-wave astronomy. As the first anticipated space-based gravitational-wave detector, it will expand our view to the millihertz gravitational-wave sky, where a spectacular variety of interesting new sources abound: from millions of ultra-compact binaries in our Galaxy, to mergers of massive black holes at cosmological distances; from the beginnings of inspirals that will venture into the ground-based detectors' view to the death spiral of compact objects into massive black holes, and many sources in between. Central to realising LISA's discovery potential are waveform models, the theoretical and phenomenological predictions of the pattern of gravitational waves that these sources emit. This white paper is presented on behalf of the Waveform Working Group for the LISA Consortium. It provides a review of the current state of waveform models for LISA sources, and describes the significant challenges that must yet be overcome.
We investigate how the photon polarization is affected by the interaction with axion-like particles (ALPs) in the rotating magnetic field of a neutron star (NS). Using quantum Boltzmann equations, we demonstrate that the periodic magnetic field of millisecond NSs enhances the interaction of photons with ALPs and induces a circular polarization in the photons. A binary system including an NS and a companion star could serve as a probe. When the NS is in front of the companion star with respect to an observer on Earth, the previously linearly polarized photons acquire a circular polarization as a result of the interaction with ALPs. After half a binary period, the companion star passes in front of the NS, and the circular polarization of the photons disappears, reverting to linear polarization. The excluded parameter space for a millisecond NS with a 300~Hz rotation frequency covers coupling constants of $1.7\times10^{-11}~\text{GeV}^{-1}\leq g_{a\gamma\gamma}\leq1.6\times10^{-3}~\text{GeV}^{-1}$ for ALP masses in the range $7\times10^{-12}~\text{eV}\leq m_a\leq1.5\times 10^{3}~\text{eV}$.
The multi-messenger joint observations of GW170817 and GRB170817A shed new light on the study of short-duration gamma-ray bursts (SGRBs). Not only did they substantiate the assumption that SGRBs originate from binary neutron star (BNS) mergers, but they also confirmed that the jet generated by this type of merger must be structured, so that the observed energy of an SGRB depends on the viewing angle of the observer. However, the precise structure of the jet is still subject to debate. Moreover, whether a single unified jet model can be applied to all SGRBs is not known. Another uncertainty is the delay timescale of BNS mergers with respect to the star formation history of the Universe. In this paper, we conduct a global test of both delay and jet models of BNS mergers across a wide parameter space with simulated SGRBs. We compare the simulated peak flux, redshift and luminosity distributions with the observed ones and test the goodness-of-fit for a set of models and parameter combinations. Our simulations suggest that GW170817/GRB 170817A and all SGRBs can be understood within the framework of a universal structured jet viewed at different viewing angles. Furthermore, models invoking a jet plus cocoon structure with a lognormal delay timescale are most favored. Some other combinations (e.g. a Gaussian delay with a power-law jet model) are also acceptable. However, the Gaussian delay with Gaussian jet model and the entire set of power-law delay models are disfavored.
GX 3$+$1, an atoll-type neutron star low-mass X-ray binary, was observed four times by the Soft X-ray Telescope and the Large Area X-ray Proportional Counters on board \textit{AstroSat} between October 5, 2017 and August 9, 2018. The hardness-intensity diagram of the source showed it to be in the soft spectral state during all four observations. The spectra of the source could be adequately fit with a model consisting of blackbody ($\mathtt{bbody}$) and power-law ($\mathtt{powerlaw}$) components. This yielded a blackbody radius and mass accretion rate of $\sim$8 km and $\sim$2 $\times$ $10^{-9}$ M$_{\odot}$ y$^{-1}$, respectively. In one of the observations, a Type I X-ray burst with a rise time and e-folding time of 0.6 and 5.6 s, respectively, was detected. Time-resolved spectral analysis of the burst showed that the source underwent a photospheric radius expansion. The radius of the emitting blackbody in GX 3$+$1 and its distance were estimated to be 9.19 $\substack{+0.97\\-0.82}$ km and 10.17 $\substack{+0.07\\-0.18}$ kpc, respectively. Temporal analysis of the burst yielded upper limits on the fractional RMS amplitude of 7$\%$, 5$\%$ and 6$\%$ during the burst start, the burst maximum and right after the radius expansion phase, respectively.
Light curves are the primary observable of Type I X-ray bursts. Computational X-ray burst models must be matched to observed light curves. Most of the error in simulated light curves comes from uncertainties in $rp$-process reaction rates, which can be reduced via precision mass measurements of neutron-deficient isotopes along the $rp$-process path. We perform a precise Penning trap mass measurement of $^{27}$P utilizing the ToF-ICR technique. We use this measurement to calculate $rp$-process reaction rates and input these rates into an X-ray burst model to reduce the simulated light curve uncertainty. We also use the mass measurement of $^{27}$P to validate the Isobaric Multiplet Mass Equation (IMME) for the A=27, T=$\frac{3}{2}$ isospin quartet to which $^{27}$P belongs. The mass excess of $^{27}$P was measured to be -670.7(6) keV, a fourteen-fold improvement in precision over the value reported in the 2020 Atomic Mass Evaluation (AME2020). X-ray burst light curves were produced with the MESA (Modules for Experiments in Stellar Astrophysics) code using the new mass and associated reaction rates. Changes in the mass of $^{27}$P appear to have minimal effect on the light curves, even in burster systems tailored to maximize the impact, so the mass of $^{27}$P does not play a significant role in X-ray burst light curves. It is important to keep in mind, however, that more advanced models do not just provide more precise results, but often qualitatively different ones. This result brings us a step closer to extracting stellar parameters from individual X-ray burst observations. The IMME has been validated for the $A=27$, $T=3/2$ quartet: the normal quadratic form of the IMME using the latest data yields a reduced $\chi^2$ of 2.9, and the cubic term required to generate an exact fit to the latest data matches theoretical attempts to predict this term.
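As a side note on the IMME validation quoted above, the quadratic fit and its reduced $\chi^2$ amount to a short weighted least-squares problem. A minimal sketch in Python, with purely illustrative placeholder mass excesses standing in for the four quartet members (the measured values are not reproduced here):

```python
import numpy as np

# Isospin projections T_z of the four members of an A=27, T=3/2 quartet.
Tz = np.array([-1.5, -0.5, 0.5, 1.5])

# Placeholder mass excesses (keV) and 1-sigma uncertainties; purely
# illustrative numbers -- substitute the measured quartet values.
me = np.array([-660.0, -5590.0, -10310.0, -14590.0])
sig = np.array([0.6, 2.0, 1.5, 0.4])

# Weighted quadratic fit ME(T_z) = a + b*T_z + c*T_z**2 (the "normal" IMME form).
coeffs = np.polyfit(Tz, me, deg=2, w=1.0 / sig)
model = np.polyval(coeffs, Tz)

# Four data points minus three fitted parameters leaves one degree of freedom.
chi2_red = float(np.sum(((me - model) / sig) ** 2)) / (len(Tz) - 3)
print(f"[c, b, a] = {coeffs}, reduced chi^2 = {chi2_red:.2f}")
```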
We have studied the spectro-temporal properties of the neutron star low-mass X-ray binary GX 9$+$1 using data from \textit{NuSTAR}/FPM and from \textit{AstroSat}/SXT and LAXPC. The hardness-intensity diagram of the source showed it to be in the soft spectral state during both observations. \textit{NuSTAR} spectral analysis yielded an inclination angle ($\theta$) $=$ 29$\substack{+3\\-4}^{\circ}$ and inner disk radius ($R_{in}$) $\leq$ 19.01 km. Assuming that the accretion disk was truncated at the Alfv\'en radius during the observation, the upper limits of the magnetic dipole moment ($\mu$) and the magnetic field strength ($B$) at the poles of the neutron star in GX 9$+$1 were calculated to be 1.45$\times$$10^{26}$ G cm$^3$ and 2.08$\times$$10^8$ G, respectively (for $k_A$ $=$ 1). Flux-resolved spectral analysis with \textit{AstroSat} data showed the source to be in the soft spectral state ($F_{disk}$/$F_{total}$ $\sim$0.9) with a monotonic increase in the mass accretion rate ($\dot{m}$) along the banana branch. The analysis also showed the presence of absorption edges at $\sim$1.9 and $\sim$2.4 keV, likely due to Si XIII and S XV, respectively. Temporal analysis with \textit{LAXPC-20} data in the 0.02 $-$ 100 Hz range revealed the presence of noise components, which could be characterized with broad Lorentzian components.
The recent detections of the $\sim10$-s long $\gamma$-ray bursts (GRBs) 211211A and 230307A followed by softer temporally extended emission (EE) and kilonovae, point to a new GRB class. Using state-of-the-art first-principles simulations, we introduce a unifying theoretical framework that connects binary neutron star (BNS) and black hole-NS (BH-NS) merger populations with the fundamental physics governing compact-binary GRBs (cbGRBs). For binaries with large total masses $M_{\rm tot}\gtrsim2.8\,M_\odot$, the compact remnant created by the merger promptly collapses into a BH, surrounded by an accretion disk. The duration of the pre-magnetically arrested disk (MAD) phase sets the duration of the roughly constant power cbGRB and could be influenced by the disk mass, $M_d$. We show that massive disks ($M_d\gtrsim0.1\,M_\odot$), which form for large binary mass ratio $q\gtrsim1.2$ in BNS or $q\lesssim3$ in BH-NS mergers, inevitably produce 211211A-like long cbGRBs. Once the disk becomes MAD, the jet power drops with the mass accretion rate as $\dot{M}\sim t^{-2}$, naturally establishing the EE decay. Two scenarios are plausible for short cbGRBs. They can be powered by BHs with less massive disks, which form for other $q$ values. Alternatively, for binaries with $M_{\rm tot}\lesssim2.8\,M_\odot$, mergers should go through a hypermassive NS (HMNS) phase, as inferred for GW170817. Magnetized outflows from such HMNSs, which typically live for $\lesssim1\,{\rm s}$, offer an alternative progenitor for short cbGRBs. The first scenario is challenged by the bimodal GRB duration distribution and the fact that the Galactic BNS population peaks at sufficiently low masses that most mergers should go through a HMNS phase.
It is recognized that some core-collapse supernovae (SNe) show a double-peaked radio light curve within a few years of the explosion. A shell of circumstellar medium (CSM) detached from the SN progenitor has been considered to play a viable role in producing such a re-brightening of the radio emission. Here, we propose another mechanism that can give rise to a double-peaked radio light curve in core-collapse SNe. The key ingredient in the present work is to expand the model for the evolution of the synchrotron spectral energy distribution (SED) to a generic form, including the fast and slow cooling regimes, as guided by the widely-accepted modeling scheme of gamma-ray burst afterglows. We show that even without introducing an additional CSM shell, the radio light curve shows a double-peaked morphology when the system becomes optically thin to synchrotron self-absorption at the observing frequency during the fast cooling regime. We can observe this double-peaked feature if the transition from the fast cooling to the slow cooling regime occurs during the typical observational timescale of SNe. This situation is realized when the minimum Lorentz factor of injected electrons is initially large enough for the non-thermal electrons' SED to be distinct from the thermal distribution. We propose SN 2007bg as a special case of double-peaked radio SNe that can possibly be explained by the presented scenario. Our model can serve as a potential diagnostic for electron acceleration properties in SNe.
The vast majority of stars in the nearby stellar neighborhood are M dwarfs. Their low masses and luminosities result in slow rates of nuclear evolution and minimal changes to the star's observable properties, even over astronomical timescales. However, they possess relatively powerful magnetic dynamos and resulting X-ray to UV activity, compared to their bolometric luminosities. This magnetic activity does undergo an observable decline over time, making it an important potential age determinant for M dwarfs. Observing this activity is important not only for studying the outer atmospheres of these stars, but also for comparing the behaviors of different spectral type subsets of M dwarfs, e.g., those with partially vs. fully convective interiors. Beyond stellar astrophysics, understanding the X-ray to UV activity of M dwarfs over time is also important for studying the atmospheres and habitability of any hosted exoplanets. Earth-sized exoplanets, in particular, are more commonly found orbiting M dwarfs than any other stellar type, and thermal escape (driven by the M dwarf X-ray to UV activity) is believed to be the dominant atmospheric loss mechanism for these planets. Utilizing recently calibrated M dwarf age-rotation relationships, also constructed as part of the $\textit{Living with a Red Dwarf}$ program (Engle & Guinan 2023), we have analyzed the evolution of M dwarf activity over time, in terms of coronal (X-ray), chromospheric (Lyman-$\alpha$ and Ca II), and overall X--UV (5--1700 Angstrom) emissions. The activity-age relationships presented here will be useful not only for studying exoplanet habitability and atmospheric loss, but also for studying the different dynamo and outer atmospheric heating mechanisms at work in M dwarfs.
Optically dense clouds in the interstellar medium composed predominantly of molecular hydrogen, known as molecular clouds, are sensitive to energy injection in the form of photon absorption, cosmic-ray scattering, and dark matter (DM) scattering. The ionization rates in dense molecular clouds are heavily constrained by observations of abundances of various molecular tracers. Recent studies have set constraints on the DM-electron scattering cross section using measurements of ionization rates in dense molecular clouds. Here we calculate the analogous bounds on the DM-proton cross section using the molecular Migdal effect, recently adapted from the neutron scattering literature to the DM context. These bounds may be the strongest limits on a strongly-coupled DM subfraction, and represent the first application of the Migdal effect to astrophysical systems.
We present 0.075" (~400 pc) resolution ALMA observations of the [CII] and dust continuum emission from the host galaxy of the z=6.5406 quasar P036+03. We find that the emission arises from a thin, rotating disk with an effective radius of 0.21" (1.1 kpc). The velocity dispersion of the disk is consistent with a constant value of 66.4+-1.0 km/s, yielding a scale height of 80+-30 pc. The [CII] velocity field reveals a distortion that we attribute to a warp in the disk. Modeling this warped disk yields an inclination estimate of 40.4+-1.3 degrees and a rotational velocity of 116+-3 km/s. The resulting dynamical mass estimate of (1.96+-0.10) x 10^10 Msun is lower than previous estimates, which strengthens the conclusion that the host galaxy is less massive than expected based on local scaling relations between the black hole mass and the host galaxy mass. Using archival MUSE Ly-alpha observations, we argue that counter-rotating halo gas could provide the torque needed to warp the disk. We further detect a region with excess (15-sigma) dust continuum emission, which is located 1.3 kpc northwest of the galaxy's center and is gravitationally unstable (Toomre-Q < 0.04). We posit this is a star-forming region whose formation was triggered by the warp, because the region is located within a part of the warped disk where gas can efficiently lose angular momentum. The combined ALMA and MUSE imaging provides a unique view of how gas interactions within the disk-halo interface can influence the growth of massive galaxies within the first billion years of the universe.
Most of a galaxy's mass is located out to hundreds of kiloparsecs beyond its stellar component. This diffuse reservoir of gas, the circumgalactic medium (CGM), acts as the interface between a galaxy and the cosmic web that connects galaxies. We present kiloparsec-scale resolution integral field spectroscopy of emission lines that trace cool ionized gas from the center of a nearby galaxy to 30 kpc into its CGM. We find a smooth surface brightness profile with a break in slope at twice the 90% stellar radius. At this radius, the gas also transitions from being photoionized by HII star-forming regions in the disk to being ionized by shocks or the extragalactic UV background at larger distances. These changes represent the boundary between the interstellar medium (ISM) and the CGM, revealing the shape and extent of the dominant reservoir of baryonic matter in galaxies for the first time.
Using empty-field `Quick Look' images from the first two epochs of the VLA Sky Survey (VLASS) observations, centred on the positions of $\sim3700$ individually radio-non-detected active galactic nuclei (AGNs) at $z\ge4$, we performed an image stacking analysis to examine the sub-mJy emission at $3$ GHz. We found characteristic monochromatic radio powers of $P_\mathrm{char}=(2-13) \times 10^{24}$ W Hz$^{-1}$, indicating that AGN-related radio emission is widespread in the sample. The signal-to-noise ratios of the redshift-binned median stacked maps are between $4$ and $6$, and we expect that with the inclusion of the yet-to-be-completed third-epoch VLASS observations, the detection limit defined as signal-to-noise ratio $\mathrm{SNR}\ge6$ could be reached, and the redshift dependence can be determined. To obtain information on the general spectral properties of the faint radio emission in high-redshift AGNs, we confined the sample to $\sim3000$ objects covered by both the VLASS and the Faint Images of the Radio Sky at Twenty-centimeters (FIRST) survey. We found that the flux densities from the median stacked maps show a characteristic spectral index of $\alpha^*=-0.30\pm0.15$, which is in agreement with the median spectral index of the radio-detected $z\ge4$ AGNs from our high-redshift AGN catalogue. The three-band mid-infrared colour--colour diagram based on Wide-field Infrared Survey Explorer observations provides further support regarding the AGN contribution to the radio emission in the sub-mJy sample.
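The characteristic spectral index quoted above is effectively a two-point measurement between the stacked 3 GHz (VLASS) and 1.4 GHz (FIRST) flux densities. A minimal sketch of that estimate, using hypothetical stacked flux densities chosen only to illustrate the arithmetic (convention $S\propto\nu^{\alpha}$):

```python
import numpy as np

def two_point_spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha in the convention S ~ nu**alpha."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# Hypothetical median stacked flux densities (mJy) at the two survey frequencies.
s_vlass, nu_vlass = 0.060, 3.0e9   # stacked VLASS Quick Look value at 3 GHz
s_first, nu_first = 0.075, 1.4e9   # stacked FIRST value at 1.4 GHz

alpha = two_point_spectral_index(s_vlass, nu_vlass, s_first, nu_first)
print(f"alpha = {alpha:+.2f}")   # ~ -0.29 for these illustrative inputs
```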
We analyze TYPHOON long slit absorption line spectra of the starburst barred spiral galaxy NGC 1365 obtained with the Progressive Integral Step Method covering an area of 15 square kpc. Applying a population synthesis technique, we determine the spatial distribution of the ages and metallicity of the young and old stellar populations together with star formation rates, reddening, extinction and the ratio R$_V$ of extinction to reddening. We detect a clear indication of inside-out growth of the stellar disk beyond 3 kpc, characterized by an outward increasing luminosity fraction of the young stellar population, a decreasing average age and a history of mass growth that finished 2 Gyr later in the outermost disk. The metallicity of the young stellar population is clearly super-solar but decreases towards larger galactocentric radii with a gradient of -0.02 dex/kpc. On the other hand, the metal content of the old population does not show a gradient and stays constant at a level roughly 0.4 dex lower than that of the young population. In the center of NGC 1365 we find a confined region where the metallicity of the young population drops dramatically and becomes lower than that of the old population. We attribute this to infall of metal-poor gas and, additionally, to interrupted chemical evolution, in which star formation is stopped by AGN and supernova feedback and then, after several Gyr, resumes with gas ejected by stellar winds from earlier generations of stars. We provide a simple model calculation in support of the latter.
We have carried out HST snapshot observations at 1.1 $\mu$m of 281 candidate strongly lensed galaxies identified in the wide-area extragalactic surveys conducted with the Herschel space observatory. Our candidates comprise systems with $500\,\mu$m flux densities of $S_{500}\geq 80$ mJy. We model and subtract the surface brightness distribution for the 130 systems in which we identify a candidate foreground lens. After combining visual inspection, archival high-resolution observations, and lens subtraction, we divide the systems into different classes according to their lensing likelihood. We confirm 65 systems to be lensed. Of these, 30 are new discoveries. We successfully perform lens modelling and source reconstruction on 23 systems, where the foreground lenses are isolated galaxies and the background sources are detected in the HST images. All of these systems are successfully modelled as singular isothermal ellipsoids. The Einstein radii of the lenses and the magnifications of the background sources are consistent with previous studies. However, the background source circularised radii (between 0.34 kpc and 1.30 kpc) are $\sim$3 times smaller than those measured in the sub-mm/mm for a similarly selected and partially overlapping sample. We compare our lenses with those in the SLACS survey, confirming that our lens-independent selection is more effective at picking up fainter and diffuse galaxies and group lenses. This sample represents the first step towards characterising the near-IR properties and stellar masses of the gravitationally lensed dusty star-forming galaxies.
[Abridged] JWST observations of the Orion Bar have shown the incredible richness of PAH bands and their variation on small scales. We aim to probe the photochemical evolution of PAHs across the key zones of the photodissociation region (PDR) that is the Orion Bar using unsupervised machine learning. We use NIRSpec and MIRI IFU data from the JWST ERS Program PDRs4All. We leverage bisecting k-means clustering to generate detailed spatial maps of the spectral variability in several wavelength regions. We discuss the variations in the cluster profiles and connect them to the local physical conditions. We interpret these variations with respect to the key zones: the HII region, the atomic PDR zone, and the three dissociation fronts. The PAH emission exhibits spectral variation that depends strongly on spatial position in the PDR. We find the 8.6um band to behave differently from all other bands, which vary systematically with one another. We find uniform variation in the 3.4-3.6um bands and in the 3.4/3.3 intensity ratio. We attribute the carrier of the 3.4-3.6um bands to a single side group attached to very similarly sized PAHs. Cluster profiles reveal a transition between characteristic profile classes of the 11.2um feature from the atomic to the molecular PDR zone. We find the carriers of each of the profile classes to be independent, and argue that the latter are PAH clusters existing solely deep in the molecular PDR. Clustering also reveals a connection between the 11.2 and 6.2um bands, and shows that clusters generated from variation in the 10.9-11.63um region can be used to recover those in the 5.95-6.6um region. Clustering is a powerful tool for characterizing PAH variability on both spatial and spectral scales. For individual bands as well as global spectral behaviours, we find UV-processing to be the most important driver of the evolution of PAHs and their spectral signatures in the Orion Bar.
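As a rough illustration of the clustering step described above (not the PDRs4All pipeline itself), bisecting k-means can be applied to normalized spaxel spectra from an IFU cube to produce a map of spectral-variability zones; scikit-learn (version 1.1 and later) provides a BisectingKMeans implementation. All array shapes and the number of clusters below are hypothetical:

```python
import numpy as np
from sklearn.cluster import BisectingKMeans

# Hypothetical IFU cube: (ny, nx) spaxels, each with nchan spectral channels
# (e.g. a continuum-subtracted cut-out around one PAH band).
ny, nx, nchan = 60, 80, 200
rng = np.random.default_rng(0)
cube = rng.random((ny, nx, nchan))            # placeholder data

# Normalize each spaxel spectrum so clusters trace profile shape, not brightness.
spectra = cube.reshape(-1, nchan)
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)

# Bisecting k-means repeatedly splits one cluster in two until n_clusters is reached.
model = BisectingKMeans(n_clusters=6, random_state=0)
labels = model.fit_predict(spectra)

# Reshape the labels back into a spatial map of spectral-variability zones.
cluster_map = labels.reshape(ny, nx)
print(cluster_map.shape, np.bincount(labels))
```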
Urea is a prebiotic molecule that has been detected in a few sources in the interstellar medium (ISM) and in the Murchison meteorite. Being stable against ultraviolet radiation and high-energy electron bombardment, urea is expected to be present in interstellar ices. Theoretical and experimental studies suggest that isocyanic acid (HNCO) and formamide (NH$_2$CHO) are possible precursors of urea. However, uncertainties still exist regarding its formation routes. Previous computational works characterised urea formation in the gas phase or in the presence of a few water molecules by reaction of formamide with nitrogen-bearing species. In this work, we investigated the reaction of HNCO + NH$_3$ on an ice cluster model of 18 water molecules mimicking interstellar ice mantles by means of quantum chemical computations. We characterised different mechanisms involving both closed-shell and open-shell species at the B3LYP-D3(BJ)/ma-def2-TZVP level of theory, among which the radical-radical H$_2$NCO + NH$_2$ coupling was found to be the most favourable, as it is almost barrierless. In this path, the presence of the icy surfaces is crucial, as they act as reactant concentrators/suppliers, as well as third bodies able to dissipate the energy liberated during urea formation.
We provide global models of line-driven winds of B supergiants for metallicities corresponding to the Large and Small Magellanic Clouds. The velocity and density structure of the models is determined consistently from hydrodynamical equations with the radiative force derived in the comoving frame and level populations computed from kinetic equilibrium equations. We provide a formula expressing the predicted mass-loss rates in terms of stellar luminosity, effective temperature, and metallicity. Predicted wind mass-loss rates decrease with decreasing metallicity as $\dot M\sim Z^{0.60}$ and are proportional to the stellar luminosity. The mass-loss rates increase below the region of the bistability jump at about 20\,kK because of iron recombination. In agreement with previous theoretical and observational studies, we find a smooth change of wind properties in the region of the bistability jump. With decreasing metallicity, the bistability jump becomes weaker and shifts to lower effective temperatures. At lower metallicities above the bistability jump, our predictions provide similar rates to those used in current evolutionary models, but our rates are significantly lower than older predictions below the bistability jump. Our predicted mass-loss rates agree with observational estimates derived from the H$\alpha$ line, assuming that observations of stellar winds in the Galaxy and the Magellanic Clouds are uniformly affected by clumping. The models nicely reproduce the dependence of terminal velocities on temperature derived from ultraviolet spectroscopy.
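To make the quoted metallicity and luminosity scalings concrete, the toy snippet below rescales a reference mass-loss rate as $\dot M \propto L\,Z^{0.60}$; the normalization is hypothetical, and the snippet deliberately omits the effective-temperature dependence and bistability behaviour captured by the paper's full fitting formula:

```python
def scaled_mdot(lum_lsun, z_over_zsun, mdot_ref=1e-7, lum_ref=1e5):
    """Toy mass-loss rate (Msun/yr) scaled as Mdot ~ L * Z**0.60.

    mdot_ref is a hypothetical normalization at lum_ref (in Lsun) and solar
    metallicity; it does not reproduce the paper's fitting formula, which
    also depends on the effective temperature.
    """
    return mdot_ref * (lum_lsun / lum_ref) * z_over_zsun**0.60

# The same star at SMC-like metallicity (Z ~ 0.2 Zsun) loses mass at about
# 0.2**0.60 ~ 0.38 of its Galactic-metallicity rate.
print(scaled_mdot(3e5, 1.0), scaled_mdot(3e5, 0.2))
```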
We propose polarization of scattered optical light from intermediate Galactic latitude infrared cirrus as a new diagnostic to constrain models of interstellar dust and the anisotropic interstellar radiation field (aISRF). For single scattering by a sphere, with Mie scattering phase functions for intensity and polarized intensity for a dust model at a given wavelength (Sloan $r$ and $g$ bands), and with models of anisotropic illumination from the entire sky (represented in HEALPix), we develop the formalism for calculating useful summary parameters for an integrated flux nebula (IFN): average of the phase function weighted by the illumination, polarization angle ($\psi$), and polarization fraction ($p$). To demonstrate the diagnostic discrimination of polarization from scattered light, we report on the effects of different anisotropic illumination models and different dust models on the summary parameters for the Spider IFN. The summary parameters are also sensitive to the IFN location, as we illustrate using FRaNKIE illumination models. For assessing the viability of dust and aISRF models, we find that observations of $\psi$ and $p$ of scattered light are indeed powerful new diagnostics to complement joint modeling of the intensity of scattered light (related to the average phase function) and the intensity of thermal dust emission. However, optically thin IFNs that can be modelled using single scattering are faint and $p$ is not large, as it could be with Rayleigh scattering, and so these observations need to be carried out with care and precision. Results for the Draco nebula compared to the Spider illustrate the challenge.
We present a novel method for identifying cosmic web filaments using the IllustrisTNG (TNG100) cosmological simulations and investigate the impact of filaments on galaxies. We compare the use of cosmic density field estimates from the Delaunay Tessellation Field Estimator (DTFE) and the Monte Carlo Physarum Machine (MCPM), which is inspired by the slime mold organism, in the DisPerSE structure identification framework. The MCPM-based reconstruction identifies filaments with higher fidelity, finding more low-prominence/diffuse filaments and better tracing the true underlying matter distribution than the DTFE-based reconstruction. Using our new filament catalogs, we find that most galaxies are located within 1.5-2.5 Mpc of a filamentary spine, with little change in the median specific star formation rate and the median galactic gas fraction with distance to the nearest filament. Instead, we introduce the filament line density, $\Sigma_\mathrm{fil}$(MCPM), as the total MCPM overdensity per unit length of a local filament segment, and find that this parameter is a superior predictor of galactic gas supply and quenching. Our results indicate that most galaxies are quenched and gas-poor near high-line density filaments at z<=1. At z=0, quenching in log(M*/Msun)>10.5 galaxies is mainly driven by mass, while lower-mass galaxies are significantly affected by the filament line density. In high-line density filaments, satellites are strongly quenched, whereas centrals have reduced star formation, but not gas fraction, at z<=0.5. We discuss the prospect of applying our new filament identification method to galaxy surveys with SDSS, DESI, Subaru PFS, etc. to elucidate the effect of large-scale structure on galaxy formation.
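One plausible reading of the filament line density defined above is the MCPM overdensity integrated along a local spine segment divided by the segment's length. A schematic sketch under that assumption (the array names and sampling are hypothetical, not the DisPerSE/MCPM data model):

```python
import numpy as np

def filament_line_density(spine_points, overdensity_at_points):
    """Integrated overdensity per unit length along a local filament segment.

    spine_points: (N, 3) coordinates (e.g. in Mpc) sampled along the spine.
    overdensity_at_points: MCPM overdensity interpolated onto those points.
    """
    seg_lengths = np.linalg.norm(np.diff(spine_points, axis=0), axis=1)
    seg_overdensity = 0.5 * (overdensity_at_points[1:] + overdensity_at_points[:-1])
    return np.sum(seg_overdensity * seg_lengths) / np.sum(seg_lengths)

# Hypothetical straight 2 Mpc segment sampled at 11 points with constant overdensity.
pts = np.column_stack([np.linspace(0.0, 2.0, 11), np.zeros(11), np.zeros(11)])
delta = np.full(11, 5.0)
print(filament_line_density(pts, delta))   # -> 5.0 for this flat example
```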
We study the astrochemical diagnostics of the isolated massive protostar G28.20-0.05. We analyze data from ALMA 1.3~mm observations with resolution of 0.2 arcsec ($\sim$1,000 au). We detect emission from a wealth of species, including oxygen-bearing (e.g., $\rm{H_2CO}$, $\rm{CH_3OH}$, $\rm{CH_3OCH_3}$), sulfur-bearing (SO$_2$, H$_2$S) and nitrogen-bearing (e.g., HNCO, NH$_2$CHO, C$_2$H$_3$CN, C$_2$H$_5$CN) molecules. We discuss their spatial distributions, physical conditions, correlation between different species and possible chemical origins. In the central region near the protostar, we identify three hot molecular cores (HMCs). HMC1 is part of a mm continuum ring-like structure, is closest in projection to the protostar, has the highest temperature of $\sim300\:$K, and shows the most line-rich spectra. HMC2 is on the other side of the ring, has a temperature of $\sim250\:$K, and is of intermediate chemical complexity. HMC3 is further away, $\sim3,000\:$au in projection, cooler ($\sim70\:$K) and is the least line-rich. The three HMCs have similar mass surface densities ($\sim10\:{\rm{g\:cm}}^{-2}$), number densities ($n_{\rm H}\sim10^9\:{\rm{cm}}^{-3}$) and masses of a few $M_\odot$. The total gas mass in the cores and in the region out to $3,000\:$au is $\sim 25\:M_\odot$, which is comparable to that of the central protostar. Based on spatial distributions of peak line intensities as a function of excitation energy, we infer that the HMCs are externally heated by the protostar. We estimate column densities and abundances of the detected species and discuss the implications for hot core astrochemistry.
Due to ram-pressure stripping, jellyfish galaxies are thought to lose large amounts, if not all, of their interstellar medium. Nevertheless, some, but not all, observations suggest that jellyfish galaxies exhibit enhanced star formation compared to control samples, even in their ram pressure-stripped tails. We use the TNG50 cosmological gravity+magnetohydrodynamical simulation, with an average spatial resolution of 50-200 pc in the star-forming regions of galaxies, to quantify the star formation activity and rates (SFRs) of more than 700 jellyfish galaxies at $z=0-1$ with stellar masses $10^{8.3-10.8}\,\mathrm{M}_\odot$ in hosts with mass $10^{10.5-14.3}\,\mathrm{M}_\odot$. We extract their global SFRs, the SFRs within their main stellar body vs. within the tails, and we follow the evolution of the star formation along their individual evolutionary tracks. We compare the findings for jellyfish galaxies to those of diversely-constructed control samples, including against satellite and field galaxies with matched redshift, stellar mass, gas fraction and host halo mass. According to TNG50, star formation and ram-pressure stripping can indeed occur simultaneously within any given galaxy, and frequently do so. Moreover, star formation can also take place within the ram pressure-stripped tails, even though the latter is typically subdominant. However, TNG50 does not predict elevated population-wide SFRs in jellyfish compared to analog satellite galaxies with the same stellar mass or gas fraction. Simulated jellyfish galaxies do undergo bursts of elevated star formation along their history but, at least according to TNG50, these do not translate into a population-wide enhancement at any given epoch.
We present a study of the connection between the escape fraction of Lyman-alpha (Ly$\alpha$) and Lyman-continuum (LyC) photons within a sample of N=152 star-forming galaxies selected from the VANDELS survey at $3.85<z_{spec}<4.95$. By combining measurements of H$\alpha$ equivalent width $(W_\lambda(\rm{H\alpha}))$ derived from broad-band photometry with Ly$\alpha$ equivalent width $(W_\lambda(Ly\alpha))$ measurements from VANDELS spectra, we individually estimate $f_{\rm{esc}}^{Ly\alpha}$ for our full sample. In agreement with previous studies, we find a positive correlation between $W_\lambda(Ly\alpha)$ and $f_{\rm{esc}}^{Ly\alpha}$, increasing from $f_{\rm{esc}}^{Ly\alpha}\simeq0.04$ at $W_\lambda(Ly\alpha)=10$\r{A} to $f_{\rm{esc}}^{Ly\alpha}\simeq0.1$ at $W_\lambda(Ly\alpha)=25$\r{A}. For the first time at $z\sim4-5$, we investigate the relationship between $f_{\rm{esc}}^{Ly\alpha}$ and $f_{\rm{esc}}^{\rm{LyC}}$ using $f_{\rm{esc}}^{\rm{LyC}}$ estimates derived using the equivalent widths of low-ionization, FUV absorption lines in composite VANDELS spectra. Our results indicate that $f_{\rm{esc}}^{\rm{LyC}}$ rises monotonically with $f_{\rm{esc}}^{Ly\alpha}$, following the relation $f_{\rm{esc}}^{\rm{LyC}}\simeq 0.15^{+0.06}_{-0.04}f_{\rm{esc}}^{Ly\alpha}$. Based on composite spectra of sub-samples with roughly constant $W_\lambda(Ly\alpha)$, but very different $f_{\rm{esc}}^{Ly\alpha}$, we show that the $f_{\rm{esc}}^{\rm{LyC}}-f_{\rm{esc}}^{Ly\alpha}$ correlation is not driven by a secondary correlation between $f_{\rm{esc}}^{Ly\alpha}$ and $W_\lambda(Ly\alpha)$. The $f_{\rm{esc}}^{\rm{LyC}}-f_{\rm{esc}}^{Ly\alpha}$ correlation is in good qualitative agreement with theoretical predictions and provides further evidence that estimates of $f_{\rm{esc}}^{\rm{LyC}}$ within the Epoch of Reionization should be based on proxies sensitive to neutral gas density/geometry and dust attenuation.
We propose a scenario in which the Large Magellanic Cloud (LMC) is on its second passage around the Milky Way. Using a series of tailored N-body simulations, we demonstrate that such orbits are consistent with current observational constraints on the mass distribution and relative velocity of both galaxies. The previous pericentre passage of the LMC could have occurred 5-10 Gyr ago at a distance >~100 kpc, large enough for it to retain its current population of satellites. The perturbations of the Milky Way halo induced by the LMC look nearly identical to those in the first-passage scenario; however, the distribution of LMC debris is considerably broader in the second-passage model. We examine the likelihood of current and past association with the Magellanic system for dwarf galaxies in the Local Group, and find that in addition to 10-11 current LMC satellites, it could have brought a further 4-6 galaxies that were lost after the first pericentre passage. In particular, four of the classical dwarfs - Carina, Draco, Fornax and Ursa Minor - each have a ~50% probability of once belonging to the Magellanic system, thus providing a possible explanation for the ``plane of satellites'' conundrum.
In this paper, we demonstrate that the inference of galaxy stellar masses via spectral energy distribution (SED) fitting techniques for galaxies formed in the first billion years after the Big Bang carries fundamental uncertainties owing to the loss of star formation history (SFH) information from the very first episodes of star formation in the integrated spectra of galaxies. While this early star formation can contribute substantially to the total stellar mass of high-redshift systems, ongoing star formation at the time of detection outshines the residual light from earlier bursts, hampering the determination of accurate stellar masses. As a result, order-of-magnitude uncertainties in stellar masses can be expected. We demonstrate this potential problem via direct numerical simulation of galaxy formation in a cosmological context. In detail, we carry out two cosmological simulations with significantly different stellar feedback models, which span a significant range in star formation history burstiness. We compute mock SEDs for these model galaxies at z=7 via 3D dust radiative transfer calculations, and then fit these SEDs with the Prospector SED fitting software. The uncertainties in derived stellar masses that we find for z>7 galaxies motivate the development of new techniques and/or star formation history priors to model early Universe star formation.
We present observational evidence of a correlation between the high-mass slope of the stellar initial mass function (IMF) in young star clusters and their stellar surface density, $\sigma_{*}$. When the high-mass end of the IMF is described by a power law of the form $dN/d{\rm log}{M_{*}}\propto M_{*}^{-\Gamma}$, the value of $\Gamma$ is seen to weakly decrease with increasing $\sigma_{*}$, following a $\Gamma=1.31~\sigma_{*}^{-0.095}$ relation. We also present a model that can explain these observations. The model is based on the idea that the coalescence of protostellar cores in a protocluster-forming clump is more efficient in high-density environments where cores are more closely packed. The efficiency of the coalescence process is calculated as a function of the parental clump properties, in particular the relation between its mass and radius as well as its core formation efficiency. The main result of this model is that the increased efficiency of the coalescence process leads to shallower slopes of the IMF, in agreement with the observations of young clusters, and the observations are best reproduced with compact protocluster-forming clumps. These results have significant implications for the shape of the IMF in different Galactic and extragalactic environments and important consequences for galactic evolution.
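The fitted relation above is simple to evaluate directly; the short sketch below shows how weakly $\Gamma$ flattens with increasing surface density (the surface-density units are assumed to be those adopted in the paper):

```python
import numpy as np

def gamma_high_mass(sigma_star):
    """High-mass IMF slope from the fitted relation Gamma = 1.31 * sigma_***(-0.095).

    sigma_star: stellar surface density in the units adopted in the paper
    (the normalization of 1.31 assumes those units).
    """
    return 1.31 * np.asarray(sigma_star, dtype=float) ** (-0.095)

# Gamma flattens only weakly: a factor of 100 in sigma_* changes Gamma by ~35%.
for s in (1.0, 10.0, 100.0, 1000.0):
    print(s, round(float(gamma_high_mass(s)), 3))
```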
The inner hundred parsecs of the Milky Way hosts the nearest supermassive black hole, the largest reservoir of dense gas, the greatest stellar density, hundreds of massive main-sequence and post-main-sequence stars, and the highest volume density of supernovae in the Galaxy. As the nearest environment in which it is possible to simultaneously observe many of the extreme processes shaping the Universe, it is one of the most well-studied regions in astrophysics. Due to its proximity, we can study the center of our Galaxy on scales down to a few hundred AU, a hundred times better than in similar Local Group galaxies and thousands of times better than in the nearest active galaxies. The Galactic Center (GC) is therefore of outstanding astrophysical interest. However, in spite of intense observational work over the past decades, fundamental open questions about the GC remain. JWST has the unique capability to provide us with the necessary, game-changing data. In this White Paper, we advocate for a JWST NIRCam survey that aims to address central questions that we have identified as a community: i) the 3D structure and kinematics of gas and stars; ii) ancient star formation and its relation with the overall history of the Milky Way, as well as recent star formation and its implications for the overall energetics of our Galaxy's nucleus; and iii) the (non-)universality of star formation and the stellar initial mass function. We advocate for a large-area, multi-epoch, multi-wavelength NIRCam survey of the inner 100\,pc of the Galaxy in the form of a Treasury GO JWST Large Program that is open to the community. We describe how this survey will derive the physical and kinematic properties of ~10,000,000 stars, how it will address the key unknowns, and how it will provide a valuable resource for the community with long-lasting legacy value.
In the absence of supplementary heat, the radiative cooling of halo gas around massive galaxies (Milky Way mass and above) leads to an excess of cold gas or stars beyond observed levels. AGN jet-induced heating is likely essential, but the specific properties of the jets remain unclear. Our previous work (Su et al. 2021) concludes from simulations of a halo with $10^{14} M_\odot$ that a successful jet model should have an energy flux comparable to the free-fall energy flux at the cooling radius and should inflate a sufficiently wide cocoon with a long enough cooling time. In this paper, we investigate three jet modes with constant fluxes satisfying the criteria, including high-temperature thermal jets, cosmic ray (CR)-dominant jets, and widely precessing kinetic jets in $10^{12}-10^{15}\,{\rm M}_{\odot}$ halos using high-resolution, non-cosmological MHD simulations with the FIRE-2 (Feedback In Realistic Environments) stellar feedback model, conduction, and viscosity. We find that scaling the jet energy according to the free-fall energy at the cooling radius can successfully suppress the cooling flows and quench galaxies without obviously violating observational constraints. We investigate an alternative scaling method in which we adjust the energy flux based on the total cooling rate within the cooling radius. However, we observe that the strong interstellar medium (ISM) cooling dominates the total cooling rate in this scaling approach, resulting in a jet flux that exceeds the amount needed to suppress the cooling flows. With the same energy flux, the CR-dominant jet is most effective in suppressing the cooling flow across all the surveyed halo masses due to the enhanced CR pressure support. We confirm that the criteria for a successful jet model, which we proposed in Su et al. (2021), work across a much wider range, encompassing halo masses of $10^{12}-10^{15} {\rm M_\odot}$.
We report a new, rare detection of HI 21-cm absorption associated with a quasar (only six are known at $1<z<2$), here towards J2339-5523 at $z_{em}$ = 1.3531, discovered through the MeerKAT Absorption Line Survey (MALS). The absorption profile is broad ($\sim 400$ km/s), and the peak is redshifted by $\sim 200$ km/s from $z_{em}$. Interestingly, optical/FUV spectra of the quasar from the Magellan-MIKE/HST-COS spectrographs do not show any absorption features associated with the 21-cm absorption. This is despite the coincident presence of the optical quasar and the radio `core' inferred from a flat spectrum component of flux density $\sim 65$ mJy at high frequencies ($>5$ GHz). The simplest explanation would be that no large HI column (N(HI)$>10^{17}$ cm$^{-2}$) is present towards the radio `core' and the optical AGN. Based on the joint optical and radio analysis of a heterogeneous sample of 16 quasars ($z_{median}$ = 0.7) and 15 radio galaxies ($z_{median}$ = 0.3) with HI 21-cm absorption detections and matched in 1.4 GHz luminosity (L$_{\rm 1.4\,GHz}$), a consistent picture emerges in which quasars primarily trace the gas in the inner circumnuclear disk and the cocoon created by the jet-ISM interaction. These exhibit an L$_{1.4\,\rm GHz}$ - $\Delta V_{\rm null}$ correlation and a frequent mismatch between the radio and optical spectral lines. The radio galaxies show no such correlation and likely trace the gas from the cocoon and the galaxy-wide ISM outside the photoionization cone. The analysis presented here demonstrates the potential of radio spectroscopic observations to reveal the origin of the absorbing gas associated with AGN that may be missed in optical observations.
We present the results of our Keck/DEIMOS spectroscopic follow-up of i-band-dropout protocluster candidate galaxies at $z\sim6$ in the COSMOS field. We securely detect Lyman-$\alpha$ emission lines in 14 of the 30 objects targeted, 10 of them at $z=6$ with a signal-to-noise ratio of $5-20$; the remaining objects are either non-detections or interlopers with redshifts too different from $z=6$ to be part of the protocluster. The 10 galaxies at $z\approx6$ make the protocluster one of the richest at $z>5$. The emission lines exhibit asymmetric profiles with high skewness values ranging from 2.87 to 31.75, with a median of 7.37. This asymmetry is consistent with them being Ly$\alpha$, resulting in a redshift range of $z=5.85-6.08$. Using the spectroscopic redshifts, we re-calculate the overdensity map for the COSMOS field and find the galaxies to be in a significant overdensity at the $4\sigma$ level, with a peak overdensity of $\delta=11.8$ (compared to the previous value of $\delta=9.2$). The protocluster galaxies have stellar masses derived from Bagpipes SED fits of $10^{8.29}-10^{10.28} \rm \,M_{\rm \odot}$ and star formation rates of $2-39\,\rm M_{\rm \odot}\rm\,yr^{-1}$, placing them on the main sequence at this epoch. Using a stellar-to-halo-mass relationship, we estimate the dark matter halo mass of the most massive halo in the protocluster to be $\sim 10^{12}\rm M_{\rm \odot}$. By comparison with halo mass evolution tracks from simulations, the protocluster is expected to evolve into a Virgo- or Coma-like cluster by the present day.
Future space-based far-infrared astrophysical observatories will require exquisitely sensitive detectors consistent with the low optical backgrounds. The PRobe far-Infrared Mission for Astrophysics (PRIMA) will deploy arrays of thousands of superconducting kinetic inductance detectors (KIDs) sensitive to radiation between 25 and 265 $\mu$m. Here, we present laboratory characterization of prototype, 25 -- 80 $\mu$m wavelength, low-volume, aluminum KIDs designed for the low-background environment expected with PRIMA. A compact parallel plate capacitor is used to minimize the detector footprint and suppress TLS noise. A novel resonant absorber is designed to enhance response in the band of interest. We present noise and optical efficiency measurements of these detectors taken with a low-background cryostat and a cryogenic blackbody. A microlens-hybridized KID array is found to be photon noise limited down to about 50 aW with a limiting detector NEP of about $6.5 \times 10^{-19}~\textrm{W/Hz}^{1/2}$. A fit to an NEP model shows that our optical system is well characterized and understood down to 50 aW. We discuss future plans for low-volume aluminum KID array development as well as the testbeds used for these measurements.
We present results from the characterization and optimization of six Skipper CCDs for use in a prototype focal plane for the SOAR Integral Field Spectrograph (SIFS). We tested eight Skipper CCDs and selected six for SIFS based on performance results. The Skipper CCDs are 6k $\times$ 1k, 15 $\mu$m pixels, thick, fully-depleted, $p$-channel devices that have been thinned to $\sim 250 \mu$m, backside processed, and treated with an antireflective coating. We optimize readout time to achieve $<4.3$ e$^-$ rms/pixel in a single non-destructive readout and $0.5$ e$^-$ rms/pixel in $5 \%$ of the detector. We demonstrate single-photon counting with $N_{\rm samp}$ = 400 ($\sigma_{\rm 0e^-} \sim$ 0.18 e$^-$ rms/pixel) for all 24 amplifiers (four amplifiers per detector). We also perform conventional CCD characterization measurements such as cosmetic defects ($ <0.45 \%$ ``bad" pixels), dark current ($\sim 2 \times 10^{-4}$ e$^-$/pixel/sec.), charge transfer inefficiency ($3.44 \times 10^{-7}$ on average), and charge diffusion (PSF $< 7.5 \mu$m). We report on characterization and optimization measurements that are only enabled by photon-counting. Such results include voltage optimization to achieve full-well capacities $\sim 40,000-63,000$ e$^-$ while maintaining photon-counting capabilities, clock induced charge optimization, non-linearity measurements at low signals (few tens of electrons). Furthermore, we perform measurements of the brighter-fatter effect and absolute quantum efficiency ($\gtrsim\, 80 \%$ between 450 nm and 980 nm; $\gtrsim\,90 \%$ between 600 nm and 900 nm) using Skipper CCDs.
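The single-electron resolution quoted above follows from the defining property of Skipper CCDs: averaging $N_{\rm samp}$ non-destructive reads of the same pixel reduces the effective read noise roughly as $1/\sqrt{N_{\rm samp}}$. A small sketch of that scaling, treating the samples as fully independent (an idealization) and taking the 4.3 e$^-$ single-read figure from the abstract:

```python
import numpy as np

def skipper_read_noise(sigma_single, n_samp):
    """Effective read noise after averaging n_samp non-destructive samples,
    assuming independent Gaussian single-sample noise."""
    return sigma_single / np.sqrt(n_samp)

sigma_single = 4.3   # e- rms/pixel for a single non-destructive readout (from the abstract)
for n in (1, 16, 100, 400):
    print(n, round(skipper_read_noise(sigma_single, n), 3))
# N_samp = 400 gives ~0.215 e- rms/pixel, in line with the ~0.18 e- rms/pixel
# quoted for the optimized amplifiers.
```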
Future far-infrared astrophysics observatories will require focal plane arrays containing thousands of ultra-sensitive, superconducting detectors, each of which needs to be optically coupled to the telescope. At longer wavelengths, many approaches have been developed including feedhorn arrays and macroscopic arrays of lenslets. However, with wavelengths as short as 25 microns, optical coupling in the far-infrared remains challenging. In this paper, we present a novel approach for fabricating far-infrared monolithic silicon microlens arrays using grayscale lithography and deep reactive ion etching. The design, fabrication, and characterization of the microlens arrays are discussed. We compare the designed and fabricated lens profile, and calculate that the fabricated lenses will achieve 84% encircled power for the designed detector, which is only 3% less than the designed performance. We also present methods developed for anti-reflection coating microlens arrays and for a silicon-to-silicon die bonding process to hybridize microlens arrays with detector arrays.
Given the urgency to reduce fossil fuel energy production to make climate tipping points less likely, we call for resource-aware knowledge gain in the research areas on Universe and Matter, with emphasis on the digital transformation. A portfolio of measures is described in detail and then summarized according to the timescales required for their implementation. The measures will both contribute to sustainable research and accelerate scientific progress through increased awareness of resource usage. This work is based on a three-day workshop on sustainability in digital transformation held in May 2023.
This paper describes the design of a 5.5:1 bandwidth feed antenna and reflector system, intended for use in hydrogen intensity mapping experiments. The system is optimized to reduce systematic effects that can arise in these experiments from scattering within the feed/reflector and cross-coupling between antennas. The proposed feed is an ultra wideband Vivaldi style design and was optimized to have a smooth frequency response, high gain, and minimal shadowing of the reflector dish. This feed can optionally include absorptive elements which reduce systematics but degrade sensitivity. The proposed reflector is a deep parabolic dish with $f/d = 0.216$ along with an elliptical collar to provide additional shielding. The procedure for optimizing these design choices is described.
For the analysis of data taken by Imaging Air Cherenkov Telescopes (IACTs), a large number of air shower simulations are needed to derive the instrument response. The simulations are very complex, involving computationally expensive and memory-intensive calculations, and are usually performed repeatedly for different observation intervals to take into account the varying optical sensitivity of the instrument. The use of generative models based on deep neural networks offers the prospect of memory-efficient storage of huge simulation libraries and cost-effective generation of a large number of simulations in an extremely short time. In this work, we use Wasserstein Generative Adversarial Networks to generate photon showers for an IACT equipped with the FlashCam design, which has more than $1{,}500$ pixels. Using simulations of the H.E.S.S. experiment, we demonstrate the successful generation of high-quality IACT images. The analysis includes a comprehensive study of the generated image quality based on low-level observables and the well-known Hillas parameters that describe the shower shape. We demonstrate for the first time that the generated images have high fidelity with respect to low-level observables, the Hillas parameters, their physical properties, as well as their correlations. The increase in generation speed, of order $10^5$, yields promising prospects for fast and memory-efficient simulations of air showers for IACTs.
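For readers unfamiliar with the objective underlying this work, the sketch below shows a generic Wasserstein critic loss with gradient penalty (WGAN-GP) in PyTorch; it is an illustration of the technique only, not the architecture or training configuration used for the FlashCam images:

```python
import torch

def critic_loss_wgan_gp(critic, real_imgs, fake_imgs, gp_weight=10.0):
    """Wasserstein critic loss with gradient penalty for (N, C, H, W) image batches."""
    # Wasserstein estimate: the critic should score real images high, fakes low.
    loss_w = critic(fake_imgs).mean() - critic(real_imgs).mean()

    # Gradient penalty evaluated on random interpolates between real and fake images.
    eps = torch.rand(real_imgs.size(0), 1, 1, 1, device=real_imgs.device)
    interp = (eps * real_imgs + (1.0 - eps) * fake_imgs).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return loss_w + gp_weight * gp
```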
Next generation high contrast imaging instruments face a challenging trade-off: they will be required to deliver data with high spectral resolution at a relatively fast cadence (minutes) and across a wide field of view (arcseconds). For instruments that employ focal plane wavefront sensing and therefore require super-Nyquist sampling, these requirements cannot simultaneously be met with a traditional lenslet integral field spectrograph (IFU). For the SPIDERS pathfinder instrument, we are demonstrating an imaging Fourier transform spectrograph (IFTS) that offers a different set of trade-offs than a lenslet IFU, delivering spectral resolution of up to R = 20,000 across a dark hole. We present preliminary results from the SPIDERS IFTS, including a chromaticity analysis of its dark hole, demonstrate a spectral differential imaging (SDI) improvement of up to a factor of 40, and present a first application of spectro-coherent differential imaging, combining both coherent differential imaging (CDI) and SDI.
A stable-frequency transmitter with relative radial acceleration to a receiver will show a change in received frequency over time, known as a "drift rate''. For a transmission from an exoplanet, we must account for multiple components of drift rate: the exoplanet's orbit and rotation, the Earth's orbit and rotation, and other contributions. Understanding the drift rate distribution produced by exoplanets relative to Earth can a) help us constrain the range of drift rates to check in a Search for Extraterrestrial Intelligence (SETI) project to detect radio technosignatures and b) help us assess the validity of signals of interest, as we can compare drifting signals with the expected drift rates from the target star. In this paper, we modeled the drift rate distribution for $\sim$5300 confirmed exoplanets, using parameters from the NASA Exoplanet Archive (NEA). We find that confirmed exoplanets have drift rates such that 99\% of them fall within the $\pm$53 nHz range. This implies a distribution-informed maximum drift rate $\sim$4 times lower than in previous work. To mitigate the observational biases inherent in the NEA, we also simulated an exoplanet population built to reduce these biases. The results suggest that, for a Kepler-like target star without known exoplanets, $\pm$0.44 nHz would be sufficient to account for 99\% of signals. This reduction in the recommended maximum drift rate is partially due to inclination effects and a bias towards short orbital periods in the NEA. These narrowed drift rate maxima will increase the efficiency of searches and save significant computational effort in future radio technosignature searches.
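The normalized drift rates quoted above (in nHz, i.e. $10^{-9}$ s$^{-1}$) are essentially the line-of-sight acceleration divided by the speed of light, since $\dot{f}/f = a_r/c$. A minimal sketch of the maximum drift contributed by a circular exoplanet orbit, using a hypothetical hot-Jupiter-like example rather than any entry from the NEA:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def max_orbital_drift_nhz(m_star_msun, a_au):
    """Maximum normalized drift rate |df/dt|/f = a_orb/c (in nHz) for a
    transmitter on a planet in a circular orbit, assuming edge-on geometry."""
    accel = G * m_star_msun * M_SUN / (a_au * AU) ** 2   # orbital acceleration
    return accel / C * 1e9                               # s^-1 -> nHz

# Hypothetical example: Sun-like star with a planet at 0.05 au (hot-Jupiter-like).
print(round(max_orbital_drift_nhz(1.0, 0.05), 1), "nHz")   # ~8 nHz
# The Earth's own orbital acceleration contributes only ~0.02 nHz by comparison.
print(round(max_orbital_drift_nhz(1.0, 1.0), 3), "nHz")
```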
We calculate the reflection of diffuse galactic emission by meteor trails and investigate its potential relationship to Meteor Radio Afterglow (MRA). The formula to calculate the reflection of diffuse galactic emission is derived from a simplified case, assuming that the signals are mirrored by the cylindrical over-dense ionization trail of meteors. The overall observed reflection is simulated through a ray tracing algorithm together with the diffuse galactic emission modelled by the GSM sky model. We demonstrate that the spectrum of the reflected signal is broadband and follows a power law with a negative spectral index of around -1.3. The intensity of the reflected signal varies with local sidereal time and the brightness of the meteor and can reach 2000 Jy. These results agree with some previous observations of MRAs. Therefore, we think that the reflection of galactic emission by meteor trails can be a possible mechanism causing MRAs, which is worthy of further research.
This article analyses the cosmological model of Ruban's space-time in the presence of bulk viscosity in the form of Ricci dark energy within the framework of Brans-Dicke theory (Brans and Dicke, Phys. Rev. 124, 925 (1961)). We assume that outer space is filled with dark matter and viscous Ricci dark energy (VRDE) in a pressureless situation. The coefficient of total bulk viscosity is taken to be proportional to the velocity and the rate at which the Universe is expanding (the explicit form and its constants are specified in the paper). To solve the field equations of the RDE model, we utilize a relation among the metric potentials together with a power-law relation between the average scale factor a(t) and the scalar field. To examine the evolutionary dynamics of the Universe, we investigate the deceleration parameter (q), the jerk parameter (j), the equation-of-state (EoS) parameter, Om(z), the stability of the obtained models through the square of the sound speed, the $w_{de}$--$w_{de}'$ plane, and the statefinder parameter planes (r, s) and (q, r), presenting the results graphically. The VRDE model is found to be compatible with the present accelerated expansion of the Universe.
We show how to use Exceptional Field Theory to efficiently compute $n$-point couplings of all Kaluza-Klein modes for vacua that can be uplifted from maximal gauged supergravities to 10/11 dimensions via a consistent truncation. Via the AdS/CFT correspondence, these couplings encode the $n$-point functions of holographic conformal field theories. Our methods show that these $n$-point couplings are controlled by the $n$-point invariant of scalar harmonics of the maximally symmetric point of the truncation, allowing us to show that infinitely many $n$-point couplings vanish for any vacua of the truncation, even though they may be allowed by the remnant symmetry group of the vacua. This gives new results even for the maximally supersymmetric AdS$_5 \times S^5$, AdS$_4 \times S^7$ and AdS$_7 \times S^4$ vacua of string and M-theory, where we prove old conjectures about the vanishing of $n$-point extremal and near-extremal couplings. Focusing in particular on cubic couplings for vacua of 5-dimensional gauged supergravity, we derive explicit universal formulae encoding these couplings for any vacuum within a consistent truncation. We use this to compute known and new couplings involving spin-0, spin-1, and spin-2 fields for the AdS$_5 \times S^5$ vacuum of IIB string theory.
We present a convenient method of algebraic classification of 2+1 spacetimes into the types I, II, D, III, N and O, without using any field equations. It is based on the 2+1 analogue of the Newman-Penrose curvature scalars $\Psi_A$, which are specific projections of the Cotton tensor onto a suitable null triad. The algebraic types are then simply determined by the gradual vanishing of such Cotton scalars, starting with those of the highest boost weight. This classification is directly related to the specific multiplicity of the Cotton-aligned null directions (CANDs) and to the corresponding Bel-Debever criteria. Using a bivector (that is 2-form) decomposition, we demonstrate that our method is fully equivalent to the usual Petrov-type classification of 2+1 spacetimes based on the eigenvalue problem and determining the respective canonical Jordan form of the Cotton-York tensor. We also derive a simple synoptic algorithm of algebraic classification based on the key polynomial curvature invariants. To show the practical usefulness of our approach, we perform the classification of several explicit examples, namely the general class of Robinson-Trautman spacetimes with an aligned electromagnetic field and a cosmological constant, and other metrics of various algebraic types.
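Assuming the 2+1 scheme mirrors the familiar Newman-Penrose pattern of progressively vanishing scalars in a triad aligned with a multiple CAND, a toy classification routine could look like the following; the precise conditions used in the paper may differ, so this is purely illustrative:

```python
def algebraic_type(psi, tol=1e-12):
    """Toy classifier for psi = (Psi0, Psi1, Psi2, Psi3, Psi4), the Cotton
    scalars in a null triad aligned with a multiple Cotton-aligned null
    direction (CAND). Assumes the 2+1 scheme follows the 4D Newman-Penrose
    pattern of progressively vanishing scalars; the paper's exact conditions
    may differ."""
    p0, p1, p2, p3, p4 = (abs(x) < tol for x in psi)
    if p0 and p1 and p2 and p3 and p4:
        return "O"
    if p0 and p1 and p2 and p3:
        return "N"
    if p0 and p1 and p2:
        return "III"
    if p0 and p1 and p3 and p4:
        return "D"
    if p0 and p1:
        return "II"
    if p0:
        return "I"
    return "general (triad not aligned with a CAND)"

print(algebraic_type((0.0, 0.0, 1.0, 0.0, 0.0)))   # -> "D" in this toy scheme
```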
In the present letter, the solutions of the Klein-Gordon equation in the space-time of an infinite domain wall are studied. It is shown that the horizon is a special region, where initial conditions (scattering conditions) at asymptotic past cannot be defined for the field. This is in harmony with the fact that this region cannot be crossed, a possibility that was suggested in \cite{lanosa}. If these results are correct, then it makes no sense to impose boundary conditions at this location.
We present a comprehensive study exploring the relationship between transport properties and measures of quantum entanglement in the Einstein-Maxwell-Axion-Horndeski theory. By using holographic duality, we study the entanglement measures, holographic entanglement entropy (HEE) and entanglement wedge cross-section (EWCS), and the transport coefficients for this model, and analyze their dependence on the free parameters, which we classify into an action parameter, observable parameters and an axion factor. We find contrasting behaviors between HEE and EWCS with respect to the observable parameters (charge and temperature) and the axion factor, indicating that they capture different types of quantum correlations. We also find that HEE exhibits a positive correlation with both charge and thermal excitations, whereas EWCS exhibits a negative correlation with charge-related conductivities and thermal fluctuations. Furthermore, we find that the Horndeski coupling term, as the modification to the standard gravity theory, does not change the qualitative behaviors of the conductivities or the entanglement measures.
This paper investigates the de Sitter-Schwarzschild black hole in the framework of Finslerian space-time, because Finslerian geometry can explain problems that Einstein's gravity cannot. To this end, we assume the Ricci curvature is constant in the Finsler space and obtain an exact solution of the field equations in the Finsler space-time. This solution is equivalent to the Finslerian Schwarzschild-de Sitter-like black hole. A constant Ricci curvature in Finslerian space-time requires the Ricci curvature ($\lambda$) of its two-dimensional subspace to be constant. We find that for $\lambda\neq1$ this solution resembles a black hole surrounded by a cloud of strings. Furthermore, we investigate null and time-like geodesics for $\lambda=1$.
The motion of water is governed by the Navier-Stokes equations, accompanied by the continuity equation to ensure local mass conservation. In this work, we construct the relativistic generalization of these equations via a gradient expansion for a fluid with a conserved charge in a curved $d$-dimensional spacetime. We follow a general hydrodynamic frame approach and introduce the irreducible-structure (IS) algorithm, which is based on derivatives of the expansion scalar and of the shear and vorticity tensors. In this manner, we systematically generate all permissible gradients up to a specified order and derive the most comprehensive constitutive relations for a charged fluid, accurate up to third order in gradients. The constitutive relations are formulated both for ordinary, non-conformal, and for conformal-invariant charged fluids. We also examine the hydrodynamic frame dependence of the transport coefficients for a non-conformal charged fluid up to third order in the gradient expansion. The frame dependence of the scalar, vector, and tensor parts of the constitutive relations is obtained in terms of field redefinitions of the fundamental hydrodynamic variables. These frame dependencies are challenging to manage due to their non-linear character. In the linear regime, however, these higher-order transformations become tractable, allowing us to identify frame-invariant transport coefficients. One advantage of employing these coefficients is the possibility of studying the linear equations of motion in any chosen frame, which we do in the Landau frame. Subsequently, these linear equations are solved in momentum space, yielding dispersion relations for the shear, sound, and diffusive modes of a non-conformal charged fluid in terms of the frame-invariant transport coefficients.
The quantum off-shell recursion provides an efficient and universal computational tool for loop-level scattering amplitudes. In this work, we present a new comprehensive computational framework based on the quantum off-shell recursion for binary black hole systems. Using the quantum perturbiner method, we derive the recursions and solve them explicitly up to two-loop order. We develop a power-counting prescription that enables the straightforward separation of classical diagrams. We also devise a classification scheme that optimizes the integration by parts (IBP) reduction process, which makes higher-loop calculations more tractable. By employing the soft expansion technique, we remove irrelevant terms from the loop integrands and express them in terms of master integrals. We classify the one-loop and the two-loop classical diagrams, and their loop integrands are represented by linear combinations of the master integrals. Finally, we explicitly calculate the classical scalar 2 to 2 amplitudes in the potential region up to the 3PM order and reproduce the known results.
Using holographic duality, we investigate the impact of finite temperature on the instability and splitting patterns of quadruply quantized vortices, providing the first-ever analysis in this context. Through linear stability analysis, we reveal the occurrence of two consecutive dynamical transitions. At a specific low temperature, the dominant unstable mode transitions from the $2$-fold rotational symmetry mode to the $3$-fold one, followed by a transition from the $3$-fold one to the $4$-fold one at a higher temperature. As the temperature is increased further, we also observe that the $5$- and $6$-fold rotational symmetry unstable modes get excited successively. Employing full non-linear numerical simulations, we further demonstrate that these two novel dynamical transitions, along with the temperature-induced instabilities of the $5$- and $6$-fold rotational symmetry modes, can be identified by examining the resulting distinct splitting patterns, which offers a promising route for experimental verification in cold atom gases.
Quantum fields in regions of extreme spacetime curvature give rise to a wealth of effects, like Hawking radiation at the horizon of black holes. While quantum field theory in black hole spacetimes can only be studied theoretically, it can be tested in controlled laboratory analogue experiments. Typically, a fluid accelerating from sub- to supersonic speed creates an effectively curved spacetime for the acoustic field, with an apparent horizon where the speed of the fluid equals the speed of sound. Here we create effective curved spacetimes with a quantum fluid of light, with smooth and steep acoustic horizons and various supersonic fluid speeds. We use a recently developed spectroscopy method to measure the spectrum of acoustic excitations on these spacetimes, thus observing negative-energy modes in the supersonic regions. This demonstrates the potential of quantum fluids of light for the study of field theories on curved spacetimes.
General relativity (GR) admits two alternative formulations with the same dynamics, attributing the gravitational phenomena to the torsion or the nonmetricity of the manifold's connection. They lead, respectively, to the teleparallel equivalent of general relativity (TEGR) and the symmetric teleparallel equivalent of general relativity (STEGR). In this work, we focus on STEGR and present its differences with the conventional, curvature-based GR. We exhibit the 3+1 decomposition of the STEGR Lagrangian in the coincident gauge and present the Hamiltonian, together with the Hamiltonian and momentum constraints. For the particular case of spherical symmetry, we explicitly show the differences in the Hamiltonian between GR and STEGR, one of the few genuinely different features of the two formulations of gravity, and the repercussions this might have for numerical relativity.
We study the EFT of a spinning compact object and show that, with appropriate gauge fixing, computations become amenable to worldline quantum field theory techniques. We use the resulting action to compute Compton and one-loop scattering amplitudes at fourth order in spin. By matching these amplitudes to solutions of the Teukolsky equations, we fix the values of the Wilson coefficients appearing in the EFT such that it reproduces Kerr black hole scattering. We keep track of the spin supplementary condition throughout our computations and discuss alternative ways to ensure its preservation.
We study the asymptotic behavior of the solution curves of the dynamics of spacetimes of the topological type $\Sigma_{p}\times \mathbb{R}$, $p>1$, where $\Sigma_{p}$ is a closed Riemann surface of genus $p$, in the regime of $2+1$ dimensional classical general relativity. The configuration space of the gauge-fixed dynamics is identified with the Teichm\"uller space ($\mathcal{T}\Sigma_{p}\approx \mathbb{R}^{6p-6}$) of $\Sigma_{p}$. Utilizing properties of the Dirichlet energy of certain harmonic maps, estimates derived from the associated elliptic equations, and a few standard results of the theory of compact Riemann surfaces, we prove that every non-trivial solution curve runs off the edge of the Teichm\"uller space in the limit of the big bang singularity and approaches the space of projective measured laminations/foliations ($\mathcal{PML}$/$\mathcal{PMF}$), the Thurston boundary of the Teichm\"uller space.
The information loss paradox is widely regarded as one of the biggest open problems in theoretical physics. Several classical and quantum features must be present to enable its formulation. First, an event horizon is needed to justify the objective status of tracing out degrees of freedom inside the black hole. Second, evaporation must be completed (or nearly completed) in finite time according to a distant observer, and thus the formation of the black hole should also occur in finite time. In spherical symmetry these requirements constrain the possible metrics strongly enough to obtain a unique black hole formation scenario and match their parameters with the semiclassical results. However, the two principal generalizations of surface gravity, the quantity that determines the Hawking temperature, do not agree with each other on the dynamical background. Neither can correspond to the emission of nearly-thermal radiation. We infer from this that the information loss problem cannot be consistently posed in its standard form.
There is much recent development towards interferometric measurements of holographic quantum uncertainties in an emergent background space-time. Despite increasing promise for the target detection regime of Planckian strain power spectral density, the foundational insights of the motivating theories have not been connected to a phenomenological model of observables measured in a realistic experiment. This work proposes a candidate model, based on the central hypothesis that all horizons are universal boundaries of coherent quantum information -- where the decoherence of space-time happens for the observer. The prediction is inspired by 't Hooft's algebra for black hole information that gives coherent states on horizons, whose spatial correlations were shown by Verlinde and Zurek to also appear on holographic fluctuations of causal boundaries in flat space-time (conformal Killing horizons). Time-domain correlations are projected from Planckian jitters whose coherence scales match causal diamonds, motivated by Banks' framework for the emergence of space-time and locality. The universality of this coherence on causal horizons compels a multimodal research program probing concordant signatures: An analysis of cosmological data to probe primordial correlations, motivated by Hogan's interpretation of well-known CMB anomalies as coherent fluctuations on the inflationary horizon, and upcoming 3D interferometers to probe causal diamonds in flat space-time. Candidate interferometer geometries are presented, with a modeled frequency spectrum for each design.
We observe a parallel between the null Killing vector on the horizon and the degenerate Killing vectors at the north and south poles in the Kerr-Taub-NUT and general Plebanski solutions. This suggests a correspondence between the pairs angular momentum/velocity and NUT charge/potential. We treat time as a real line, such that the Misner strings are physical. We find that the NUT charge spreads along the Misner strings, analogous to the way the mass of the Schwarzschild black hole sits at its spacetime singularity. We develop procedures to calculate all the thermodynamic quantities and find that the results are consistent with the first law (Wald formalism), the Euclidean action and the Smarr relation. We also apply the Wald formalism, the Euclidean action approach, and the (generalized) Komar integration to the electric and magnetic black holes in a class of EMD theories, and also to boosted black strings and Kaluza-Klein monopoles in five dimensions, to gain a better understanding of how to deal with the subtleties associated with Dirac and Misner strings.
When tetrad (metric) fields are not invertible, the standard canonical formulation of gravity cannot be adopted as it is. Here we develop a Hamiltonian theory of gravity for non-invertible tetrad. In contrast to Einstein gravity, this phase is found to exhibit three local degrees of freedom. This reflects a discrete discontinuity in the limit of a vanishing tetrad determinant. For the particular case of vanishing lapse, the Hamiltonian constraint disappears from the classical theory upon fixing the torsional gauge-freedom. Any state functional invariant under the internal gauge rotations and spatial diffeomorphisms is a formal solution of the associated quantum theory. The formulation here provides a Hamiltonian basis to analyze gravity theory around a physical singularity, which corresponds to a zero of the tetrad determinant in curved spacetime.
This work is devoted to the thermodynamic description of a phantom scenario proposed previously by the authors. The presence of a negative chemical potential is unavoidable if we demand a well-defined thermodynamic framework, since the cosmological model passes from a phantom stage at the present time to a future de Sitter evolution. As noted earlier in other works, we find that the negativity of the chemical potential is necessary to save phantom dark energy from thermodynamic inconsistencies.
We explore an interesting connection between black hole shadow parameters and the acceleration bounds for radial linear uniformly accelerated (LUA) trajectories in static spherically symmetric black hole spacetime geometries of the Schwarzschild type. For an incoming radial LUA trajectory to escape back to infinity, there exists a bound on its magnitude of acceleration and on the distance of closest approach from the event horizon of the black hole. We calculate these bounds and the shadow parameters, namely the photon sphere radius and the shadow radius, explicitly for specific black hole solutions in $d$-dimensional Einstein's theory of gravity, in pure Lovelock theory of gravity and in the $\mathcal{F}(R)$ theory of gravity. We find that, for a particular choice of boundary data, the photon sphere radius $r_{ph}$ is equal to the bound on the radius of closest approach $r_b$ of the incoming radial LUA trajectory, while the shadow radius $r_{sh}$ is equal to the inverse magnitude of the acceleration bound $|a|_b$ for the LUA trajectory to turn back to infinity. Using the effective potential technique, we further show that the same relations are valid in any theory of gravity for static spherically symmetric black hole geometries of the Schwarzschild type. Investigating the trajectories in a more general class of static spherically symmetric black hole spacetimes, we find that the two relations are valid separately for two different choices of boundary data.
We investigate complex quaternion-valued exterior differential forms over 4-dimensional Lorentzian spacetimes and explore Weyl spinor fields as minimal left ideals within the complex quaternion algebra. The variational derivation of the coupled Einstein-Weyl equations from an action is presented, and the resulting field equations for both first and second order variations are derived and simplified. Exact plane symmetric solutions of the Einstein-Weyl equations are discussed, and two families of exact solutions describing left-moving and right-moving neutrino plane waves are provided. The study highlights the significance of adjusting a quartic self-coupling of the Weyl spinor in the action to ensure the equivalence of the field equations.
There is a surge of research devoted to the formalism and physical manifestations of non-Lorentzian kinematical symmetries, focusing especially on the ones associated with the Galilei and Carroll relativistic limits (the speed of light taken to infinity or to zero, respectively). The investigations have also been extended to quantum deformations of the Carrollian and Galilean symmetries, in the sense of (quantum) Hopf algebras. The case of 2+1 dimensions is particularly worth studying, due to both the mathematical nature of the corresponding (classical) theory of gravity and the recently finalized classification of all quantum-deformed algebras of spacetime isometries. Consequently, the list of all quantum deformations of the (anti-)de Sitter-Carroll algebra is immediately provided by its well-known isomorphism with either the Poincar\'{e} or the Euclidean algebra. Quantum contractions from the (anti-)de Sitter to the (anti-)de Sitter-Carroll classification allow one to almost completely recover the latter. One may therefore conjecture that the analogous contractions from the (anti-)de Sitter to (anti-)de Sitter-Galilei $r$-matrices provide (almost) all coboundary deformations of the (anti-)de Sitter-Galilei algebra. This scheme is complemented by deriving (Carrollian and Galilean) quantum contractions of deformations of the Poincar\'{e} algebra, leading to coboundary deformations of the Carroll and Galilei algebras.
Considerable attention has been paid to the study of the quantum geometry of nonrotating black holes within the framework of Loop Quantum Cosmology. This interest has been reinvigorated since the introduction of a novel effective model by Ashtekar, Olmedo, and Singh. Despite recent advances in its foundation, certain questions about its quantization still remain open. Here we complete this quantization, taking as our starting point an extended phase-space formalism suggested by several authors, including the proposers of the model. Adopting a prescription that has proven successful in Loop Quantum Cosmology, we construct an operator representation of the Hamiltonian constraint. By searching for solutions to this constraint operator in a sufficiently large set of dual states, we show that it can be solved for a continuous range of the black hole mass. This fact speaks in favour of a conventional classical limit (at least for large masses) and contrasts with recent works that advocate a discrete spectrum. We present an algorithm that determines the solutions in closed form. To build the corresponding physical Hilbert space and conclude the quantization, we carry out an asymptotic analysis of those solutions, which allows us to introduce a suitable inner product on them.
We study entanglement dynamics in toy models of black hole information built out of chaotic many-body quantum systems, by utilising a coarse-grained description of entanglement dynamics in such systems known as the `entanglement membrane'. We show that in these models the Page curve associated to the entropy of Hawking radiation arises from a transition in the entanglement membrane around the Page time, in an analogous manner to the change in quantum extremal surfaces that leads to the Page curve in semi-classical gravity. We also use the entanglement membrane prescription to study the Hayden-Preskill protocol, and demonstrate how information initially encoded in the black hole is rapidly transferred to the radiation around the Page time. Our results relate recent developments in black hole information to generic features of entanglement dynamics in chaotic many-body quantum systems.
Particles at rest before the arrival of a burst of gravitational waves move, after the wave has passed, with constant velocity along diverging geodesics. As recognized by Souriau 50 years ago and then forgotten, their motion is particularly simple in Baldwin-Jeffery-Rosen (BJR) coordinates (which are, however, defined only in coordinate patches): it is determined once the first integrals associated with the 5-parameter isometry group (recently identified as L\'evy-Leblond's ``Carroll'' group with broken rotations) are used. A global description can be given instead in terms of Brinkmann coordinates; however, this requires solving a Sturm-Liouville equation, whereas the relation between BJR and Brinkmann coordinates requires solving yet another Sturm-Liouville equation. The theory is illustrated by geodesic motion in a linearly polarized (approximate) ``sandwich'' wave proposed by Gibbons and Hawking for gravitational collapse, and by circularly polarized approximate sandwich waves with a Gaussian envelope.
This paper presents a study of charged black holes in quintic quasi-topological gravity, for which we construct numerical solutions and investigate their thermodynamics and conserved quantities. We verify the first law of thermodynamics and compare our findings with those of Einstein gravity. We examine the physical properties of the solutions, considering anti-de Sitter, de Sitter, and flat solutions. Our analysis shows that the anti-de Sitter solutions exhibit thermal stability, whereas the de Sitter and flat solutions do not. Finally, we discuss the implications of our results and possible future research directions.
In scenarios of physical interest in Loop Quantum Cosmology, with a preinflationary epoch where the kinetic energy of the inflaton dominates, the analytic study of the dynamics of the primordial fluctuations has been carried out by neglecting the inflaton potential in those stages of the evolution. In this work we develop approximations to investigate the influence of the potential as the period of kinetic dominance gives way to the inflationary regime, treating the potential as a perturbation. Specifically, we study how the potential modifies the effective mass that dictates the dynamics of the scalar perturbations in the preinflationary epochs, within the framework of the so-called hybrid prescription for Loop Quantum Cosmology. Moreover, we motivate and model a transition period that connects the kinetically dominated regime with inflation, allowing us to study the interval of times where the contribution of the potential is no longer negligible but an inflationary description is not yet valid. Finally, we include the main modifications coming from a slow-roll correction to a purely de Sitter evolution of the perturbations during inflation. We analytically solve the dynamics of the perturbations in each of these different epochs of cosmological evolution, starting from initial conditions fixed by the criterion of asymptotic Hamiltonian diagonalization. This enables us to compute and quantitatively analyze the primordial power spectrum in a specific case, using a quadratic inflaton potential.
We study the black hole spacetime structure of a model consisting of the standard Maxwell theory and a $p$-power-Yang-Mills term. This non-linear contribution introduces a non-Abelian charge into the global solution, resulting in a modified structure of the standard Reissner-Nordstr\"{o}m black hole. Specifically, we focus on the model with $p=1/2$, which gives rise to a new type of modified Reissner-Nordstr\"{o}m black hole. For this class of black holes, we compute the event horizon, the innermost stable circular orbit, and the conditions to preserve the weak cosmic censorship conjecture. The latter condition sets a well-established relation between the electric and the Yang-Mills charges. As a first astrophysical implication, the accretion properties of spherical steady flows are investigated in detail. Extensive numerical examples of how the Yang-Mills charge affects the accretion process of an isothermal fluid in comparison to the standard Reissner-Nordstr\"{o}m and Schwarzschild black holes are displayed. Finally, analytical solutions in the fully relativistic regime, along with numerical computations, of the mass accretion rate for a polytropic fluid in terms of the electric and Yang-Mills charges are obtained. As a main result, the mass accretion rate efficiency is considerably improved, with respect to the standard Reissner-Nordstr\"{o}m and Schwarzschild solutions, for negative values of the Yang-Mills charge.
We provide the analytic waveform in the time domain for the scattering of two Kerr black holes, at leading order in the post-Minkowskian expansion and up to fourth order in both spins. The result is obtained by generalizing the KMOC formalism to radiative observables, combined with the analytic continuation of the five-point scattering amplitude to complex kinematics. We use analyticity arguments to express the waveform directly in terms of the three-point coupling of the graviton to the spinning particles and the gravitational Compton amplitudes, completely bypassing the need to compute and integrate the five-point amplitude. In particular, this allows us to easily include higher-order spin contributions for any spinning compact body. Finally, in the spinless case we find a new compact and gauge-invariant representation of the Kovacs-Thorne waveform.
Opinion is divided about the nature of state dependence in the black hole interior. Some argue that it is a necessary feature, while others argue it is a bug. In this paper, we consider the extended half-sided modular translation $U(s_0)$ (with $s_0 > 0$) of Leutheusser and Liu that takes us inside the horizon. We note that we can use this operator to construct a modular Hamiltonian $H$ and a conjugation $J$ on the modular time-evolved wedges. The original thermofield double translates to a new cyclic and separating vector in the shifted algebra. We use these objects and the Connes' cocycle to repeat Witten's crossed product construction in this new setting, and to obtain a Type II$_\infty$ algebra that is independent of the various choices, in particular that of the cyclic separating vector. Our emergent times are implicitly boundary-dressed. But if one admits an ``extra'' observer in the interior, we argue that the (state-independent) algebra can be Type I or Type II$_1$ instead of Type II$_\infty$, depending on details. Along with these general considerations, we present some specific calculations in the setting of the Poincare BTZ black hole. We identify a specific pointwise (as opposed to non-local) modular translation in BTZ-Kruskal coordinates that is analytically tractable, exploiting a connection with AdS-Rindler. This modular translation can reach the singularity. Curiously, these modular time Cauchy slices become (piecewise) null at the singularity.
We consider four-dimensional general relativity with vanishing cosmological constant defined on a manifold with a boundary. In Lorentzian signature, the timelike boundary is of the form $\boldsymbol{\sigma} \times \mathbb{R}$, with $\boldsymbol{\sigma}$ a spatial two-manifold that we take to be either flat or $S^2$. In Euclidean signature, we take the boundary to be $S^2\times S^1$. We consider conformal boundary conditions, whereby the conformal class of the induced metric and trace $K$ of the extrinsic curvature are fixed at the timelike boundary. The problem of linearised gravity is analysed using the Kodama-Ishibashi formalism. It is shown that for a round metric on $S^2$ with constant $K$, there are modes that grow exponentially in time. We discuss a method to control the growing modes by varying $K$. The growing modes are absent for a conformally flat induced metric on the timelike boundary. We provide evidence that the Dirichlet problem for a spherical boundary does not suffer from non-uniqueness issues at the linearised level. We consider the extension of black hole thermodynamics to the case of conformal boundary conditions, and show that the form of the Bekenstein-Hawking entropy is retained.
Rastall gravity is the same as General Relativity, up to a simple algebraic redefinition of what is called the energy-momentum tensor. Despite this having been very clearly explained by M. Visser several years ago, there are still many papers claiming big differences between the two formulations of the gravitational equations and trying to use them for problems of physics. In doing so, the totally ignored task is to explain why the conserved energy-momentum quantities and the quantities used for other purposes are different from each other. Moreover, when researchers use the non-conserved energy density and pressure for determining the sound speed, this is simply inconsistent with Rastall gravity. I carefully explain all this, and also show how one could construct a variational principle producing equations in the Rastall form.
We provide a first quantitative indication that the wave function of the proton contains unequal distributions of charm quarks and antiquarks, i.e. a nonvanishing intrinsic valence charm distribution. A significant nonvanishing valence component cannot be perturbatively generated, hence our results reinforce previous evidence that the proton contains an intrinsic (i.e., not radiatively generated) charm quark component. We establish our result through a determination of the parton distribution functions (PDFs) of charm quarks and antiquarks in the proton. We propose two novel experimental probes of this intrinsic charm valence component: D-meson asymmetries in Z+c-jet production at the LHCb experiment, and flavor-tagged structure functions at the Electron-Ion Collider.
This study focuses on the decay of the $B_c$ meson to S-wave charmonia. Using lattice inputs on the $B_c\to J/\psi$ form factors, we obtain the $B_c\to\eta_c$ form factors using heavy quark spin symmetry (HQSS) relations between the associated form factors, after parametrizing and extracting the possible symmetry-breaking corrections. Using the $q^2$ shapes of these form factors, we extract the branching fractions $\mathcal{B}(B_c^-\to \eta_c\ell^-\bar{\nu})$ (with $\ell =\tau, \mu (e)$) and the decay rate distributions, and predict the Standard Model estimate for the observable $R(\eta_c)=\Gamma(B_c^-\to \eta_c\tau^-\bar{\nu})/\Gamma(B_c^-\to \eta_c\mu^-\bar{\nu}) = 0.302 \pm 0.010$. In addition, we extract the radial wave functions $\psi_{B_c}^R(0)$, $\psi_{J/\psi}^R(0)$ and $\psi_{\eta_c}^R(0)$ at small quark-antiquark distances from the available information on the form factors from the lattice and from experimental data on radiative and rare decays of the $J/\psi$ and $\eta_c$ mesons. To do so, we choose the framework of nonrelativistic QCD (NRQCD) effective theory. Using our results, we estimate the branching fractions of a few non-leptonic decays of $B_c$ to $J/\psi$ or $\eta_c$ and other light mesons. We also update the numerical estimates of the cross sections $\sigma(e^+e^- \to J/\psi \eta_c, \eta_c\gamma)$ and predict the branching fractions of $Z$ boson decays to either $J/\psi$ or $\eta_c$ final states or both.
In high-energy particle collisions, charged track finding is a complex yet crucial endeavour. We propose a quantum algorithm, specifically quantum template matching, to enhance the accuracy and efficiency of track finding. Abstracting the Quantum Amplitude Amplification routine by introducing a data register, and utilising a novel oracle construction, allows data to be parsed to the circuit and matched with a hit-pattern template, without prior knowledge of the input data. Furthermore, we address the challenges posed by missing hit data, demonstrating the ability of the quantum template matching algorithm to successfully identify charged-particle tracks from hit patterns with missing hits. Our findings therefore propose quantum methodologies tailored for real-world applications and underline the potential of quantum computing in collider physics.
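As a rough illustration of the amplitude-amplification primitive this algorithm builds on (not the paper's data-register or oracle construction), the following classical simulation amplifies the probability of a matching template index; the template bank size and the marked index are arbitrary placeholders:

```python
# Illustrative classical simulation of amplitude amplification, the primitive
# underlying quantum template matching. The template bank, the marked index and
# the iteration count are placeholders, not the paper's construction.
import numpy as np

def amplitude_amplification(marked, n_items, n_iters):
    """Simulate Grover-style amplification of the 'marked' template indices."""
    amps = np.full(n_items, 1.0 / np.sqrt(n_items))   # uniform superposition
    for _ in range(n_iters):
        amps[marked] *= -1.0                          # oracle: phase-flip matching templates
        amps = 2.0 * amps.mean() - amps               # diffusion: inversion about the mean
    return amps**2                                    # measurement probabilities

# Toy example: 64 hit-pattern templates, one of which matches the data.
probs = amplitude_amplification(marked=[37], n_items=64,
                                n_iters=int(np.pi / 4 * np.sqrt(64)))
print(probs.argmax(), probs.max())                    # index 37 with probability close to 1
```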
We perform a comprehensive analysis of the scattering of matter and gravitational Kaluza-Klein (KK) modes in five-dimensional gravity theories. We consider matter localized on a brane as well as in the bulk of the extra dimension for scalars, fermions and vectors respectively, and consider an arbitrary warped background. While naive power-counting suggests that there are amplitudes which grow as fast as ${\cal O}(s^3)$ [where $s$ is the center-of-mass scattering energy-squared], we demonstrate that cancellations between the various contributions result in a total amplitude which grows no faster than ${\cal O}(s)$. Extending previous work on the self-interactions of the gravitational KK modes, we show that these cancellations occur due to sum-rule relations between the couplings and the masses of the modes that can be proven from the properties of the mode equations describing the gravity and matter wavefunctions. We demonstrate that these properties are tied to the underlying diffeomorphism invariance of the five-dimensional theory. We discuss how our results generalize when the size of the extra dimension is stabilized via the Goldberger-Wise mechanism. Our conclusions are of particular relevance for freeze-out and freeze-in relic abundance calculations for dark matter models including a spin-2 portal arising from an underlying five-dimensional theory.
The rare $B \to K^{(*)} \bar{\ell} \ell$ decays exhibit a long-standing tension with Standard Model (SM) predictions, which can be attributed to a lepton-universal short-distance $b \to s \bar{\ell} \ell$ interaction. We present two novel methods to disentangle this effect from long-distance dynamics: one based on the determination of the inclusive $b \to s \bar{\ell} \ell$ rate at high dilepton invariant mass ($q^2\geq 15~{\rm GeV}^2$), the other based on the analysis of the $q^2$ spectrum of the exclusive modes $B \to K^{(*)} \bar{\ell} \ell$ (in the entire $q^2$ range). Using the first method, we show that the SM prediction for the inclusive $b \to s \bar{\ell} \ell$ rate at high dilepton invariant mass is in good agreement with the result obtained summing the SM predictions for one- and two-body modes ($K$, $K^*$, $K\pi$). This observation allows us to perform a direct comparison of the inclusive $b \to s \bar{\ell} \ell$ rate with data. This comparison shows a significant deficit ($\sim 2\sigma$) in the data, fully compatible with the deficit observed at low-$q^2$ on the exclusive modes. This provides independent evidence of an anomalous $b \to s \bar{\ell} \ell$ short-distance interaction, free from uncertainties on the hadronic form factors. To test the short-distance nature of this effect we use a second method, where we analyze the exclusive $B \to K^{(*)} \bar{\ell} \ell$ data in the entire $q^2$ region. Here, after using a dispersive parametrization of the charmonia resonances, we extract the non-SM contribution to the universal Wilson coefficient $C_9$ for every bin in $q^2$ and for every polarization. The $q^2$- and polarization-independence of the result, and its compatibility with the inclusive determination, provide a consistency check of the short-distance nature of this effect.
We introduce a new parameterization of $B\rightarrow D \pi \ell \nu$ form factors using a partial-wave expansion and derive bounds on the series coefficients using analyticity and unitarity. This is the first generalization of the model-independent formalism developed by Boyd, Grinstein, and Lebed for $B \to D \ell \nu$ to semileptonic decays with multi-hadron final states, and enables data-driven form factor determinations with robust, systematically-improvable uncertainties. Using this formalism, we extract the form-factor parameters for $B \to D_2^\ast(\to D\pi) \ell \nu$ decays in a model-independent way from fits of data from the Belle Experiment, and, for the first time, study the two-pole structure in the $D\pi$ S-wave in semileptonic decays employing lineshapes from unitarized chiral perturbation theory.
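For orientation, the single-hadron BGL setup that this formalism generalizes expands each form factor in the conformal variable $z$; the precise outer functions, subthreshold poles and the multi-hadron partial-wave generalization are those constructed in the paper, so the expressions below are only the standard schematic starting point:
\[
z(q^2;t_0)=\frac{\sqrt{t_+-q^2}-\sqrt{t_+-t_0}}{\sqrt{t_+-q^2}+\sqrt{t_+-t_0}},
\qquad
f(q^2)=\frac{1}{B(z)\,\phi(z)}\sum_{n\ge 0} a_n\, z^n,
\qquad
\sum_{n\ge 0} |a_n|^2 \le 1,
\]
where $t_+$ is the pair-production threshold, $B(z)$ is a Blaschke factor removing subthreshold poles, and $\phi(z)$ is an outer function fixed by unitarity; the unitarity bound on the coefficients $a_n$ is what makes the truncation error systematically improvable.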
We study interacting electromagnetic fields in the framework of the effective QED action, implementing the everlasting nature of the photon's interaction with electron-positron loop fluctuations in the vacuum state. We develop a polarization summation based on the requirement that the always-interacting (dressed) photons are the asymptotic states. We show that, as a result, the interaction-picture-based Schwinger-Dyson summation is extended in the strong-coupling limit to a continued fraction, for which there is no Landau pole.
We report on the charged-particle multiplicity dependence of net-proton cumulant ratios up to sixth order from $\sqrt{s}$ = 200 GeV $p$+$p$ collisions at the Relativistic Heavy Ion Collider (RHIC). The measured ratios $C_{4}/C_{2}$, $C_{5}/C_{1}$, and $C_{6}/C_{2}$ decrease with increasing charged-particle multiplicity and rapidity acceptance. Neither the Skellam baselines nor PYTHIA8 calculations account for the observed multiplicity dependence. In addition, the ratios $C_{5}/C_{1}$ and $C_{6}/C_{2}$ approach negative values in the highest-multiplicity events. The negative ratios in the most central $p$+$p$ collisions at 200 GeV, similar to those observed in central Au+Au collisions at 200 GeV, imply the formation of thermalized QCD matter.
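For readers unfamiliar with the observable, the sketch below (a minimal, illustrative computation, not the analysis code) shows how cumulant ratios such as $C_4/C_2$ are built from an event-by-event net-proton distribution and compared with the Skellam baseline, for which all the ratios quoted above equal one; the Skellam-distributed toy sample is purely synthetic:

```python
# Minimal sketch: cumulant ratios of an event-by-event net-proton distribution,
# compared with the Skellam baseline (for which C4/C2 = C5/C1 = C6/C2 = 1).
# The Skellam-distributed toy sample below is purely illustrative.
import numpy as np

def cumulants(x):
    """Cumulants C1..C6 of the sample x, expressed via its central moments."""
    m = [np.mean((x - x.mean())**k) for k in range(7)]   # m[k] = k-th central moment
    return {1: x.mean(),
            2: m[2],
            3: m[3],
            4: m[4] - 3 * m[2]**2,
            5: m[5] - 10 * m[3] * m[2],
            6: m[6] - 15 * m[4] * m[2] - 10 * m[3]**2 + 30 * m[2]**3}

rng = np.random.default_rng(0)
net_p = rng.poisson(3.0, 10**6) - rng.poisson(2.0, 10**6)  # toy net-proton numbers per event
C = cumulants(net_p.astype(float))
print(C[4] / C[2], C[5] / C[1], C[6] / C[2])               # all close to 1 for a Skellam sample
```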
The LHCb collaboration has recently announced the discovery of two hidden-charm pentaquark states that also have strange quark content, $P_{cs}(4338)$ and $P_{cs}(4459)$; the analysis points towards both hadrons having isospin equal to zero and spin-parity quantum numbers $\frac12^-$ and $\frac32^-$, respectively. We perform herein a systematic investigation of the $qqsc\bar{c}$ $(q=u,\,d)$ system by means of a chiral quark model, along with a highly accurate computational method, the Gaussian expansion approach combined with the complex-scaling technique. Baryon-meson configurations in both singlet- and hidden-color channels are considered. The $P_{cs}(4338)$ and $P_{cs}(4459)$ signals can be well identified as molecular bound states with dominant components $\Lambda J/\psi$ $(60\%)$ and $\Xi_c D$ $(23\%)$ for the lowest-energy case and $\Xi_c D^*$ $(72\%)$ for the highest-energy one. Besides, some narrow resonances are also found in each allowed $I(J^P)$ channel in the energy region of $4.6-5.5$ GeV, except for $1(\frac12^-)$, where a shallow bound state with dominant $\Xi^*_c D^*$ structure is obtained at $4673$ MeV with binding energy $E_B=-3$ MeV. These exotic states are expected to be confirmed in future high-energy experiments.
The differential cross section and polarization observables for the elastic reaction induced by deuteron scattering off electrons at rest are calculated in the one-photon-exchange (Born) approximation. Specific attention is given to the kinematical conditions, that is, to the specific range of incident energy and transferred momentum. The particular interest of this reaction is that it gives access to very small transferred momenta. Numerical estimates are given for the polarization observables that describe single- and double-spin effects, provided that the polarization components (both vector and tensor) of each particle in the reaction are determined in the rest frame of the electron target.
QCD matter in a strong magnetic field exhibits a rich phase structure. In the presence of an external magnetic field, the chiral Lagrangian for two flavors is accompanied by the Wess-Zumino-Witten (WZW) term containing an anomalous coupling of the neutral pion $\pi_0$ to the magnetic field via the chiral anomaly. Due to this term, the ground state is inhomogeneous, taking the form of either a chiral soliton lattice (CSL), an array of solitons in the direction of the magnetic field, or a domain-wall Skyrmion (DWSk) phase, in which Skyrmions supported by $\pi_3[{\rm SU}(2)] \simeq {\mathbb Z}$ appear inside the solitons as topological lumps supported by $\pi_2(S^2) \simeq {\mathbb Z}$ in the effective worldvolume theory of the soliton. In this paper, we determine the phase boundary between the CSL and DWSk phases beyond the single-soliton approximation, within the leading order of chiral perturbation theory. To this end, we explore a domain-wall Skyrmion chain in multiple soliton configurations. First, we construct the effective theory of the CSL by the moduli approximation, and obtain the ${\mathbb C}P^1$ model or O(3) model, gauged by a background electromagnetic gauge field, with two kinds of topological terms coming from the WZW term: one is the topological lump charge in the 2+1 dimensional worldvolume and the other is a topological term counting the soliton number. Topological lumps in the 2+1 dimensional worldvolume theory are superconducting rings and their sizes are constrained by the flux quantization condition. The negative energy condition of the lumps yields the phase boundary between the CSL and DWSk phases. We find that a large region inside the CSL is occupied by the DWSk phase, and that the CSL remains metastable in the DWSk phase in the vicinity of the phase boundary.
Deviations in the trilinear self-coupling of the Higgs boson at 125 GeV from the Standard Model (SM) prediction are a sensitive test of physics Beyond the SM (BSM). The LHC experiments searching for the simultaneous production of two Higgs bosons are starting to become sensitive to such deviations. Therefore, precise predictions for the trilinear Higgs self-coupling in different BSM models are required in order to be able to test them against current and future bounds. We present the new framework $\texttt{anyH3}$, a $\texttt{Python}$ library that can be utilized to obtain predictions for trilinear scalar couplings up to the one-loop level in any renormalisable theory. The program uses the $\texttt{UFO}$ format as input and is able to automatically apply a wide variety of renormalisation schemes involving minimal and non-minimal subtraction conditions. External-leg corrections are also computed automatically, and finite external momenta can optionally be taken into account. The $\texttt{Python}$ library comes with convenient command-line as well as $\texttt{Mathematica}$ user interfaces. We perform cross-checks using consistency conditions such as UV-finiteness and decoupling, and also by comparing against results known in the literature. As example applications, we obtain results for the trilinear self-coupling of the SM-like Higgs boson in various concrete BSM models, and study the effects of external momenta as well as of different renormalisation schemes.
Raoul Gatto's contributions to the establishment of electron-positron colliders as a fundamental discovery tool in particle physics are illustrated. His collaboration with Bruno Touschek, both in the construction of AdA and in proposing ADONE, is highlighted through unpublished photographs and original documents.
The light-cone distribution amplitude (LCDA) of the pion contains information about the parton momentum carried by the quarks and is an important theoretical input for various predictions of exclusive processes at high energy, including the pion electromagnetic form factor. Progress towards constraining the fourth Mellin moment of the LCDA using the heavy-quark operator product expansion (HOPE) method is presented.
The library provides a set of C++/Python functions for computing cross sections of ultraperipheral collisions of high-energy particles under the equivalent photon approximation. Cross sections are represented as multiple integrals over the phase space. The integrals are calculated through recursive application of one-dimensional integration algorithms. The paper contains an introduction to the theory of ultraperipheral collisions, discusses the library's approach and provides a few examples of calculations.
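A minimal sketch of the integration strategy described above (generic code, not the library's actual implementation): a multi-dimensional phase-space integral evaluated by recursive application of a one-dimensional quadrature routine, here scipy's quad; the integrand and limits are placeholders:

```python
# Generic sketch (not the library's code): a multiple integral evaluated by
# recursive application of a one-dimensional quadrature routine.
from scipy.integrate import quad

def nested_integral(f, limits, args=()):
    """Integrate f(x1, ..., xn) over box-shaped limits = [(a1, b1), ..., (an, bn)]."""
    (a, b), rest = limits[0], limits[1:]
    if not rest:
        return quad(lambda x: f(*args, x), a, b)[0]
    # integrate the remaining variables for each value of the current one
    return quad(lambda x: nested_integral(f, rest, args + (x,)), a, b)[0]

# Toy example: int_0^1 int_0^2 x*y dy dx = 1
print(nested_integral(lambda x, y: x * y, [(0.0, 1.0), (0.0, 2.0)]))
```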
The transverse-momentum-dependent distributions (TMDs), which are defined by gauge-invariant 3D parton correlators with staple-shaped lightlike Wilson lines, can be calculated from quark and gluon correlators fixed in the Coulomb gauge on a Euclidean lattice. These quantities can be expressed gauge-invariantly as the correlators of dressed fields in the Coulomb gauge, which reduce to the standard TMD correlators in the infinite boost limit. In the framework of Large-Momentum Effective Theory, a quasi-TMD defined from such correlators in a large-momentum hadron state can be matched to the TMD via a factorization formula, whose exact form is derived using Soft Collinear Effective Theory and verified at one-loop order. Compared to the currently used gauge-invariant quasi-TMDs, this new method can substantially improve statistical precision and simplify renormalization, thus providing a more efficient way to calculate TMDs in lattice QCD.
The application of machine learning in the sciences has seen exciting advances in recent years. As a widely applicable technique, anomaly detection has long been studied in the machine learning community. In particular, deep neural network-based out-of-distribution detection has made great progress for high-dimensional data. Recently, these techniques have been showing their potential in scientific disciplines. We take a critical look at their prospects for application, including data universality, experimental protocols, model robustness, etc. We discuss examples that simultaneously display transferable practices and domain-specific challenges, providing a starting point for establishing a novel interdisciplinary research paradigm in the near future.
We propose a novel strategy for disentangling proton collisions at hadron colliders such as the LHC that considerably improves over the current state of the art. Employing a metric inspired by optimal transport problems as the cost function of a graph neural network, our algorithm is able to compare two particle collections with different noise levels and learns to flag particles originating from the main interaction amidst products from up to 200 simultaneous pileup collisions. We thereby sidestep the critical task of obtaining a ground truth by labeling particles and avoid arduous human annotation in favor of labels derived in situ through a self-supervised process. We demonstrate how our approach - which, unlike competing algorithms, is trivial to implement - improves the resolution in key objects used in precision measurements and searches alike, and we present large sensitivity gains in searching for exotic Higgs boson decays at the High-Luminosity LHC.
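As a concrete, generic stand-in for such an optimal-transport-inspired cost (the paper's network and training loss are not reproduced here), the following sketch computes an entropically regularized transport distance between two particle collections via the Sinkhorn iteration; the feature arrays are arbitrary placeholders:

```python
# Illustrative sketch of an optimal-transport-style cost between two particle
# collections, computed with the entropically regularized Sinkhorn iteration.
# This is a generic stand-in for the transport-inspired metric mentioned above,
# not the paper's actual loss or network.
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iters=200):
    """x, y: arrays of shape (n, d) and (m, d) of particle features (e.g. eta, phi, pT)."""
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # pairwise distances
    K = np.exp(-cost / eps)                                        # Gibbs kernel
    a, b = np.full(len(x), 1.0 / len(x)), np.full(len(y), 1.0 / len(y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):                                       # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]                             # transport plan
    return float((plan * cost).sum())                              # transported cost

rng = np.random.default_rng(1)
print(sinkhorn_cost(rng.normal(size=(30, 3)), rng.normal(size=(40, 3))))
```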
We analyse the radiative stability of the next-to-tribimaximal mixings ($NTBM$) under variation of the SUSY breaking scale ($m_S$) in the MSSM, for both normal ordering (NO) and inverted ordering (IO), at the fixed input value of the seesaw scale $M_R = 10^{15}$ GeV and two different values of $\tan \beta$. All the neutrino oscillation parameters receive varying radiative corrections at the electroweak scale, irrespective of the $m_S$ values; these are all within the $3\sigma$ range of the latest global-fit data at the low value of $\tan \beta$ (30). NO is found to be more stable than IO for all four NTBM mixing patterns.
We propose theories of a complete mirror world with parity (P) solving the strong CP problem. P exchanges the entire Standard Model (SM) with its mirror copy. We derive bounds on the two new mass scales that arise: $v'$, where parity and mirror electroweak symmetry are spontaneously broken, and $v_3$, where the color groups break to the diagonal strong interactions. The strong CP problem is solved even if $v_3 \ll v^{\prime}$, in which case heavy coloured states at the scale $v_3$ may be accessible at the LHC and future colliders. Furthermore, we argue that the breaking of P introduces negligible contributions to $\bar \theta_\text{QCD}$, starting at three-loop order. The symmetry breaking at $v_3$ can be made dynamical, without introducing an additional hierarchy problem.
A large amount of data on hadronic two-body weak decays of anti-triplet charmed baryons $T_{c\bar 3}$ to an octet baryon $T_8$ and an octet or singlet pseudoscalar meson $P$, $T_{c \bar 3} \to T_8 P$, has been measured. SU(3) flavor symmetry has been applied to study these decays in order to obtain insights into weak interactions in charm physics. However, not all the decays needed to determine the SU(3) irreducible amplitudes have been measured, forbidding a complete global analysis. Previously, it has been shown that data from the measured decays can be used in a global fit to determine all except one parity-violating and one parity-conserving amplitude of the relevant SU(3) irreducible amplitudes, leaving 8 hadronic two-body weak decay channels involving $\Xi^0_c$ to $\eta$ or $\eta'$ transitions undetermined. It is important to obtain information about these decays in order to guide experimental searches. In this work, using decay modes newly measured by BESIII and Belle in 2022, we carry out a global analysis and parameterize the unknown amplitudes to provide ranges for the branching ratios of the 8 undetermined decays. Our results indicate that SU(3) flavor symmetry can explain the measured data exceptionally well, with a remarkably small minimal $\chi^2/d.o.f.$ of 1.21, and predict 80 observables in 45 decays for future experimental data to test. We then vary the unknown SU(3) amplitudes to obtain the allowed ranges of branching ratios for the 8 undetermined decays. We find that some of them are within reach of near-future experimental capabilities. We urge our experimental colleagues to carry out the related searches.
The flavour puzzle is one of the greatest mysteries in particle physics. A `flavour deconstruction' of the electroweak gauge symmetry, by promoting at least part of it to the product of a third family factor (under which the Higgs is charged) times a light family factor, allows one to address the flavour puzzle at a low scale due to accidentally realised $U(2)^5$ flavour symmetries. The unavoidable consequence is new heavy gauge bosons with direct couplings to the Higgs, threatening the stability of the electroweak scale. In this work, we propose a UV complete model of flavour based on deconstructing only hypercharge. We find that the model satisfies finite naturalness criteria, benefiting from the smallness of the hypercharge gauge coupling in controlling radiative Higgs mass corrections and passing phenomenological bounds. Our setup allows one to begin explaining flavour at the TeV scale, while dynamics solving the large hierarchy problem can lie at a higher scale up to around 10 TeV - without worsening the unavoidable little hierarchy problem. The low-energy phenomenology of the model is dominated by a single $Z'$ gauge boson with chiral and flavour non-universal couplings, with mass as light as a few TeV thanks to the $U(2)^5$ symmetry. The natural parameter space of the model will be probed by the HL-LHC and unavoidably leads to large positive shifts in the $W$-boson mass, as well as an enhancement in $\text{Br}(B_{s,d} \to \mu^+ \mu^-)$. Finally, we show that a future electroweak precision machine such as FCC-ee easily has the reach to fully exclude the model.
We use the scattering method to calculate the gravitational wave emission from axion annihilation in the axion cloud formed by the superradiance process around a Kerr black hole. We treat axions annihilating to gravitons as a three-body decay process and calculate the corresponding decay width. In this approach, we can straightforwardly obtain the radiation power of the gravitational wave and give an analytical approximate result including the spin effects of the Kerr black hole. Our study can also provide a cross-check of the numerical results obtained with the traditional method.
The elastic and inelastic neutral-current $\nu$ ($\overline{\nu}$) scattering off a polarized nucleon is discussed. The inelastic scattering concerns the single-pion production process. We show that measurements of the spin asymmetries can help to distinguish between neutrino and antineutrino neutral-current scattering processes. The spin asymmetries also encode information about the type of target. Eventually, detailed studies of the inelastic spin asymmetries can improve understanding of the resonant-nonresonant pion production mechanism.
We compute the heavy quarkonium complex potential for an arbitrary strength of the magnetic field generated in relativistic heavy-ion collisions. First, the one-loop gluon polarization tensor is obtained in the presence of an external, constant, and homogeneous magnetic field using the Schwinger proper-time formalism in Euclidean space. The gluon propagator is computed from the gluon polarization tensor and is used to calculate the dielectric permittivity in the presence of the magnetic field in the static limit. The modified dielectric permittivity is then used to compute the heavy quarkonium complex potential. We find that the heavy quarkonium complex potential is anisotropic in nature, depending on the angle between the quark-antiquark ($Q\bar{Q}$) dipole axis and the direction of the magnetic field. We discuss the effect of the magnetic field strength and of the angular orientation of the dipole on the heavy quarkonium potential, and how the magnetic field influences the thermal widths of quarkonium states. Further, we discuss the limitations of the strong-field approximation employed in the literature in the light of heavy-ion observables, as the effect of the magnetic field on the quarkonium potential is very small.
Two Higgs doublets respect a mirror symmetry that is spontaneously violated, so that their vacuum expectation values can realize a small difference. Under this symmetry, three newly introduced right-handed neutrinos, rather than the standard model fermions, transform with odd parity. Accordingly, the neutrino masses and the charged fermion masses are proportional, respectively, to the difference and the sum of the vacuum expectation values of the two Higgs doublets. From a phenomenological perspective, such nearly degenerate Higgs doublets with large cancellation are equivalent to a Dirac seesaw mechanism with high suppression.
Physics beyond the Standard Model that is resonant in one or more dimensions has been a longstanding focus of countless searches at colliders and beyond. Recently, many new strategies for resonant anomaly detection have been developed, where sideband information can be used in conjunction with modern machine learning, in order to generate synthetic datasets representing the Standard Model background. Until now, this approach was only able to accommodate a relatively small number of dimensions, limiting the breadth of the search sensitivity. Using recent innovations in point cloud generative models, we show that this strategy can also be applied to the full phase space, using all relevant particles for the anomaly detection. As a proof of principle, we show that the signal from the R\&D dataset from the LHC Olympics is findable with this method, opening up the door to future studies that explore the interplay between depth and breadth in the representation of the data for anomaly detection.
With the use of mathematical techniques of tropical geometry, it was shown by Mikhalkin some twenty years ago that certain Gromov-Witten invariants associated with topological quantum field theories of pseudoholomorphic maps can be computed by going to the tropical limit of the geometries in question. Here we examine this phenomenon from the physics perspective of topological quantum field theory in the path integral representation, beginning with the case of the topological sigma model before coupling it to topological gravity. We identify the tropicalization of the localization equations, investigate its geometry and symmetries, and study the theory and its observables using the standard cohomological BRST methods. We find that the worldsheet theory exhibits a nonrelativistic structure, similar to theories of the Lifshitz type. Its path-integral formulation does not require a worldsheet complex structure; instead, it is based on a worldsheet foliation structure.
We develop a systematic method to classify connected \'etale algebras $A$ in a (possibly degenerate) pre-modular category $\mathcal B$. In particular, we find that the category of $A$-modules, $\mathcal B_A$, has rank bounded from above by $\lfloor\text{FPdim}(\mathcal B)\rfloor$. For demonstration, we classify connected \'etale algebras in some $\mathcal B$'s which appear in physics. Physically, the results constrain (or fix) the ground-state degeneracies of (certain) $\mathcal B$-symmetric gapped phases. We study massive deformations of rational conformal field theories such as minimal models and Wess-Zumino-Witten models. In most of our examples, the classification suggests that the symmetries $\mathcal B$ are spontaneously broken.
String vertices of open-closed string field theory on an arbitrary closed string background with $N$ identical D-branes are investigated when $N$ is large. We identify the relevant geometric master equation and solve it using open-closed hyperbolic genus zero string vertices with a milder systolic constraint. The limits corresponding to integrating out open or closed strings are investigated. We highlight the possible implications of our construction to AdS/CFT correspondence.
We use Krylov complexity to study operator growth in the $q$-body dissipative SYK model, where the dissipation is modeled by linear and random $p$-body Lindblad operators. In the large $q$ limit, we analytically establish the linear growth of two sets of coefficients for any generic jump operators. We numerically verify this by implementing the bi-Lanczos algorithm, which transforms the Lindbladian into a pure tridiagonal form. We find that the Krylov complexity saturates inversely with the dissipation strength, while the dissipative timescale grows logarithmically. This is akin to the behavior of other $\mathfrak{q}$-complexity measures, namely out-of-time-order correlator (OTOC) and operator size, which we also demonstrate. We connect these observations to continuous quantum measurement processes. We further investigate the pole structure of a generic auto-correlation and the high-frequency behavior of the spectral function in the presence of dissipation, thereby revealing a general principle for operator growth in dissipative quantum chaotic systems.
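For orientation, the sketch below shows the standard Lanczos recursion, i.e. the Hermitian (closed-system) special case of the bi-Lanczos algorithm mentioned above, producing the coefficients $a_n$, $b_n$ that tridiagonalize the generator and underlie the Krylov construction; the random matrix stands in for an actual Liouvillian:

```python
# Minimal sketch of the standard Lanczos recursion (the Hermitian special case
# of the bi-Lanczos algorithm used above), producing the coefficients a_n, b_n
# that tridiagonalize the generator. The random symmetric matrix is a stand-in.
import numpy as np

def lanczos(L, v0, n_steps):
    """Tridiagonalize the Hermitian matrix L starting from the vector v0."""
    a, b, basis = [], [], [v0 / np.linalg.norm(v0)]
    for n in range(n_steps):
        w = L @ basis[-1]
        a.append(np.real(np.vdot(basis[-1], w)))
        w = w - a[-1] * basis[-1] - (b[-1] * basis[-2] if n > 0 else 0.0)
        for q in basis:                         # full reorthogonalization for stability
            w = w - np.vdot(q, w) * q
        b.append(np.linalg.norm(w))
        if b[-1] < 1e-12:                       # Krylov space exhausted
            break
        basis.append(w / b[-1])
    return np.array(a), np.array(b)

rng = np.random.default_rng(2)
H = rng.normal(size=(200, 200))
H = (H + H.T) / 2                               # toy Hermitian generator
a_n, b_n = lanczos(H, rng.normal(size=200), n_steps=30)
```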
We review the main ideas underlying the emerging theory of Yangians -- the new type of hidden symmetry in string-inspired models. Their classification by quivers is a far-reaching generalization of the classification of simple Lie algebras by Dynkin diagrams. However, this remains largely a program, while a more constructive approach proceeds through toric Calabi-Yau spaces, the related supersymmetric systems, and Duistermaat-Heckman/equivariant integrals between the fixed points in ADHM-like moduli spaces. These fixed points are classified by crystals (Young-type diagrams), and Yangian generators describe ``instanton'' transitions between them. Detailed examples will be presented elsewhere.
The constraints arising from anomaly cancellation are particularly strong for chiral theories in six dimensions. We make progress towards a complete classification of 6D supergravities with minimal supersymmetry and non-abelian gauge group. First, we generalize a previously known infinite class of anomaly-free theories which has $T\gg 9$ to essentially any semi-simple gauge group and infinitely many choices of hypermultiplets. The construction relies on having many decoupled sectors, all selected from a list of four simple theories which we identify. Second, we use ideas from graph theory to rephrase the task of finding anomaly-free theories as constructing cliques in a certain multigraph. A branch-and-bound type algorithm is described which can be used to explicitly construct, in a $T$-independent way, anomaly-free theories with an arbitrary number of simple factors in the gauge group. We implement these ideas to generate an ensemble of $\mathcal{O}(10^7)$ irreducible cliques from which anomaly-free theories may be easily built, and as a special case obtain a complete list of $19,\!847$ consistent theories for $T=0$, for which the maximal gauge group rank is $24$. Modulo the new infinite families, we give a complete characterization of anomaly-free theories and show that the bound $T\leq 273$ is sharp.
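To illustrate the clique-based reformulation in generic terms, the sketch below enumerates cliques of an abstract compatibility relation by depth-first branching; the `compatible` predicate and the toy vertices are placeholders, and the actual search would encode the anomaly-cancellation conditions and prune with bounds on $T$ and the gauge-group rank.

```python
def enumerate_cliques(vertices, compatible, max_size=None):
    """Depth-first enumeration of cliques of an abstract 'compatibility'
    relation. `compatible(u, v)` stands in for the pairwise consistency
    conditions between two simple gauge-group factors."""
    cliques = []

    def extend(clique, candidates):
        cliques.append(clique)
        if max_size is not None and len(clique) >= max_size:
            return
        for i, v in enumerate(candidates):
            # only vertices compatible with v (and with all earlier members,
            # by construction) may extend the clique further
            rest = [u for u in candidates[i + 1:] if compatible(u, v)]
            extend(clique + [v], rest)

    extend([], list(vertices))
    return cliques

# toy example: vertices are integers, "compatible" if they differ by more than 1
cliques = enumerate_cliques(range(6), lambda u, v: abs(u - v) > 1)
print(max(cliques, key=len))
```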
The Cubic CFT can be understood as the O(3)-invariant CFT perturbed by a slightly relevant operator. In this paper, we use conformal perturbation theory together with the conformal data of the O(3) vector model to compute the anomalous dimension of scalar bilinear operators of the Cubic CFT. When the $Z_2$ symmetry that flips the signs of $\phi_i$ is gauged, the Cubic model describes a certain phase transition of a quantum dimer model. The scalar bilinear operators are the order parameters of this phase transition. Based on the conformal data of the O(3) CFT, we determine the correction to the critical exponent as $\eta_{*}^{Cubic}-\eta_{*}^{O(3)}\approx -0.0215(49)$. The O(3) data is obtained using the numerical conformal bootstrap method to study all four-point correlators involving the four operators $v=\phi_i$, $s=\sum_i \phi_i\phi_i$, and the leading scalar operators with O(3) isospin $j=2$ and 4. According to large-charge effective theory, the leading operator with charge $Q$ has scaling dimension $\Delta_{Q}=c_{3/2} Q^{3/2}+c_{1/2}Q^{1/2}$. We find a good match with this prediction up to isospin $j=6$ for spins 0 and 2, and measure the coefficients $c_{3/2}$ and $c_{1/2}$.
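A minimal sketch of how coefficients like $c_{3/2}$ and $c_{1/2}$ can be extracted by a linear least-squares fit of the large-charge formula; the numerical values of $\Delta_Q$ below are made-up placeholders, not the bootstrap data of the paper.

```python
import numpy as np

# Illustrative only: placeholder values for the leading scaling dimensions
# Delta_Q at charge Q = 2, 3, 4 (not the paper's data).
Q = np.array([2.0, 3.0, 4.0])
Delta = np.array([1.21, 2.05, 3.02])

# Fit Delta_Q = c_{3/2} Q^{3/2} + c_{1/2} Q^{1/2} by linear least squares
A = np.column_stack([Q**1.5, Q**0.5])
(c32, c12), *_ = np.linalg.lstsq(A, Delta, rcond=None)
print(f"c_3/2 = {c32:.3f}, c_1/2 = {c12:.3f}")
```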
This thesis investigates the impact of the background magnetic field on correlations or entanglement between pairs created by the background electric field in quantum field theoretic systems in the Minkowski, the primordial inflationary de Sitter and the Rindler spacetimes. These analyses might provide insight into the relativistic entanglement in the early inflationary universe scenario, where such background fields might exist due to primordial fluctuations, and in the near-horizon of non-extremal black holes, which are often endowed with background electromagnetic fields due to the accretion of plasma onto them.
In this paper, we explore the concept of pseudo R\'enyi entropy within the context of quantum field theories (QFTs). The transition matrix is constructed by applying operators situated in different regions to the vacuum state. Specifically, when the operators are positioned in the left and right Rindler wedges respectively, we discover that the logarithmic term of the pseudo R\'enyi entropy is necessarily real. In other cases, the result might be complex. We provide direct evaluations of specific examples within 2-dimensional conformal field theories (CFTs). Furthermore, we establish a connection between these findings and the pseudo-Hermitian condition. Our analysis reveals that the reality or complexity of the logarithmic term of the pseudo R\'enyi entropy can be explained through this pseudo-Hermitian framework. Additionally, we investigate the divergent term of the pseudo R\'enyi entropy. Interestingly, we observe a universal divergent term in the second pseudo R\'enyi entropy within 2-dimensional CFTs. This universal term depends solely on the conformal dimension of the operator under consideration. For the $n$-th pseudo R\'enyi entropy ($n\ge 3$), the divergent term is intricately related to the specific details of the underlying theory.
We develop a test for the vanishing of higher central charges of a fermionic topological order, which is a necessary condition for the existence of a gapped boundary, purely in terms of the modular data of the super-modular tensor category. More precisely, we test whether a given super-MTC has $c = 0$ mod $\frac{1}{2}$, and, if so, whether the modular extension with $c =0$ mod $8$ has vanishing higher central charges. The test itself does not require an explicit computation of the modular extensions and is easily carried out. We apply this test to known examples of super-modular tensor categories. Since our test allows us to obtain information about the chiral central charge of a super-modular tensor category in terms of its modular data without direct knowledge of its modular extensions, this can also be thought of as the first step towards a fermionic analogue of the Gauss-Milgram formula.
At its critical point, the three-dimensional lattice Ising model is described by a conformal field theory (CFT), the 3d Ising CFT. Instead of carrying out simulations on Euclidean lattices, we use the Quantum Finite Elements method to implement radially quantized critical $\phi^4$ theory on simplicial lattices approaching $\mathbb{R} \times S^2$. Computing the four-point function of identical scalars, we demonstrate the power of radial quantization by the accurate determination of the scaling dimensions $\Delta_{\epsilon}$ and $\Delta_{T}$ as well as ratios of the operator product expansion (OPE) coefficients $f_{\sigma \sigma \epsilon}$ and $f_{\sigma \sigma T}$ of the first spin-0 and spin-2 primary operators $\epsilon$ and $T$ of the 3d Ising CFT.
The usual Bogolyubov R-operation works in non-renormalizable theories in the same way as in renormalizable ones. However, in the non-renormalizable case, the counter-terms eliminating ultraviolet divergences do not repeat the structure of the original Lagrangian but contain new terms with higher powers of fields and derivatives, increasing from order to order of perturbation theory (PT). If one does not aim to obtain finite off-shell Green functions but limits oneself only to the finiteness of the S-matrix, then one can use the equations of motion and drastically reduce the number of independent counter-terms. For example, it is possible to reduce all counter-terms to a form containing only operators with four fields and an arbitrary number of derivatives. And although there will still be infinitely many such counter-terms, in order to fix the arbitrariness of the subtraction procedure, one can normalize the on-shell 4-point amplitude, which must be known for arbitrary kinematics, plus the 6-point amplitude at one point. All other multiparticle amplitudes are then calculated unambiguously. Within the framework of perturbation theory, the number of independent counter-terms at a given order is finite, and so is the number of normalization conditions. The constructed counter-terms are not absorbed into the normalization of a single coupling constant, and the Lagrangian contains an infinite number of terms; but after fixing the arbitrariness, it allows one to obtain unambiguous predictions for observables.
A geometrisation scheme internal to the category of Lie supergroups is discussed for the supersymmetric de Rham cocycles on the super-Minkowski group $\,\mathbb{T}\,$ which determine the standard super-$p$-brane dynamics with that target, and interpreted within Cartan's approach to the modelling of orbispaces of group actions by homotopy quotients. The ensuing higher geometric objects are shown to carry a canonical equivariant structure for the action of a discrete subgroup of $\,\mathbb{T}$, which results in their descent to the corresponding orbifolds of $\,\mathbb{T}\,$ and in the emergence of a novel class of superfield theories with defects.
We investigate the quantum entanglement of free-fermion models on fractal lattices with non-integer dimension and broken translation symmetry. For gapless systems with a finite density of states at the chemical potential, we find a universal scaling of entanglement entropy (EE) as $S_{A} \sim L_{A}^{d_{s}-1} \log L_{A}$ that is independent of the partition scheme, where $d_s$ is the space dimension in which the fractals are embedded, and $L_A$ is the linear size of the subsystem $A$. This scaling extends the Widom conjecture for translation-invariant systems to fractal lattices. We also study the entanglement contour (EC) as a real-space entanglement ``tomography''. The EC data show a self-similar and universal pattern called the ``entanglement fractal'' (EF), which resembles Chinese papercutting and remains invariant under different partition schemes, which underlies the robustness of the EE scaling. We propose a set of rules to artificially generate the EF pattern, which matches the numerical results in the scaling limit. For gapped systems, we observe that the fractal feature of $A$'s boundary affects the EE scaling as $S_{A} \sim L_{A}^{d_{\rm bf}}$, where $d_{\rm bf}$ is the Hausdorff dimension of $A$'s boundary, generalizing the area law. Meanwhile, the EC mainly localizes at $A$'s boundary. Our study reveals how fractal geometry interacts with the entanglement of free fermions. Future directions from physics and mathematics are discussed, e.g., experimental verification and the Laplacian on fractals.
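For orientation, the sketch below computes the free-fermion entanglement entropy from the eigenvalues of the subsystem correlation matrix (Peschel's formula); a 1d tight-binding chain is used purely as a stand-in for the fractal lattices studied in the paper.

```python
import numpy as np

def entanglement_entropy(C_A):
    """Von Neumann entanglement entropy of a free-fermion state from the
    eigenvalues of the correlation matrix C_ij = <c_i^dag c_j> restricted
    to subsystem A (Peschel's formula)."""
    nu = np.linalg.eigvalsh(C_A)
    nu = np.clip(nu, 1e-12, 1 - 1e-12)          # avoid log(0)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

# toy example: half-filled tight-binding chain instead of a fractal lattice
L = 200
H = -np.eye(L, k=1) - np.eye(L, k=-1)
eps, U = np.linalg.eigh(H)
occ = U[:, eps < 0]                              # filled single-particle modes
C = occ @ occ.conj().T                           # full correlation matrix
for LA in (10, 20, 40):
    print(LA, entanglement_entropy(C[:LA, :LA]))
```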
Although photons do not interact directly with an external magnetic field, such a field may indirectly affect photons in the presence of a charged environment. This opens up an interesting possibility to continuously control the entanglement of photon beams without using any crystalline devices. We study this possibility in the framework of an appropriate QED model. Within an approximation, we find that such entanglement has a resonant nature, namely a peak behavior at certain magnetic field strengths, depending on the characteristics of the photon beams, the direction of the magnetic field, and the parameters of the charged medium. Numerical calculations illustrating the above-mentioned resonant behavior of the entanglement measure and some concluding remarks are presented.
In this work the connection established in [7, 8] between a model of two linked polymer rings with fixed Gaussian linking number forming a 4-plat and the statistical mechanics of non-relativistic anyon particles is explored. The excluded-volume interactions are switched off and only the interactions of entropic origin arising from the topological constraints are considered. An interpretation from the polymer point of view of the field equations that minimize the energy of the model, in the limit in which one of the spatial dimensions of the 4-plat becomes very large, is provided. It is shown that the self-dual contributions are responsible for the long-range interactions that are necessary for preserving the global topological properties of the system during thermal fluctuations. The non-self-dual part is also related to the topological constraints, and takes into account the local interactions acting on the monomers that prevent the breaking of the polymer lines. It turns out that the energy landscape of the two linked rings is quite complex. Assuming as a rough approximation that the monomer densities of half of the 4-plat are constant, at least two points of energy minimum are found. Classes of non-trivial self-dual solutions of the self-dual field equations are derived. ...
Turbulence is a complex spatial and temporal structure created by the strong non-linear dynamics of fluid flows at high Reynolds numbers. Despite being a ubiquitous phenomenon that has been studied for centuries, a full understanding of turbulence remains a formidable challenge. Here, we introduce tools from the fields of quantum chaos and Random Matrix Theory (RMT) and present a detailed analysis of image datasets generated from turbulence simulations of incompressible and compressible fluid flows. Focusing on two observables, the data Gram matrix and the single-image distribution, we study both the local and global eigenvalue statistics and compare them to classical chaos, uncorrelated noise and natural images. We show that from the RMT perspective, the turbulence Gram matrices lie in the same universality class as quantum chaotic rather than integrable systems, and that the data exhibit power-law scalings in the bulk of the eigenvalues which differ markedly from uncorrelated classical chaos, random data, and natural images. Interestingly, we find that the single-sample distribution appears fully RMT-chaotic, but deviates from chaos at larger correlation lengths, as well as exhibiting different scaling properties.
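A minimal sketch of the kind of RMT diagnostic involved: the mean consecutive level-spacing ratio of a data Gram matrix, compared against the GOE ($\approx 0.536$) and Poisson ($\approx 0.386$) reference values; random vectors stand in for the turbulence snapshots.

```python
import numpy as np

def mean_spacing_ratio(eigs):
    """Mean consecutive spacing ratio <r> = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>,
    a standard random-matrix diagnostic (GOE ~ 0.536, Poisson ~ 0.386)."""
    s = np.diff(np.sort(eigs))
    s = s[s > 0]
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

# stand-in "dataset": N image vectors of dimension D (random here, not turbulence data)
rng = np.random.default_rng(1)
N, D = 500, 2000
X = rng.normal(size=(N, D))
gram = X @ X.T / D                     # data Gram matrix
print(mean_spacing_ratio(np.linalg.eigvalsh(gram)))
```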
In this paper we study the simplifying effects of supersymmetry on celestial OPEs at both tree and loop level. We find at tree level that theories with unbroken supersymmetry around a stable vacuum have celestial soft current algebras satisfying the Jacobi identity, and we show at one loop that celestial OPEs in these theories have no double poles.
The Checkerboard conformal field theory is an interesting representative of a large class of non-unitary, logarithmic Fishnet CFTs (FCFTs) in arbitrary dimension which have been intensively studied in recent years. Its planar Feynman graphs have the structure of a regular square lattice with checkerboard colouring. Such graphs are integrable since each coloured cell of the lattice is equal to an R-matrix in the principal series representations of the conformal group. We compute perturbatively and numerically the anomalous dimension of the shortest single-trace operator in two reductions of the Checkerboard CFT: the first corresponds to the fishnet limit of the twisted ABJM theory in 3D, whereas the spectrum in the second, 2D reduction contains the energy of the BFKL Pomeron. We derive an analytic expression for the Checkerboard analogues of Basso--Dixon 4-point functions, as well as for the class of Diamond-type 4-point graphs with disc topology. The properties of the latter are studied in terms of the OPE for operators with open indices. We prove that the spectrum of the theory receives corrections only at even orders in the loop expansion, and we conjecture a modification of the Checkerboard CFT in which quantum corrections occur only with a given periodicity in the loop order.
A systematic approach to dualities in symmetric (1+1)d quantum lattice models has recently been proposed in terms of module categories over the symmetry fusion categories. By characterizing the non-trivial way in which dualities intertwine closed boundary conditions and charge sectors, these can be implemented by unitary matrix product operators. In this manuscript, we explain how to turn such duality operators into unitary linear depth quantum circuits via the introduction of ancillary degrees of freedom that keep track of the various sectors. The linear depth is consistent with the fact that these dualities change the phase of the states on which they act. When supplemented with measurements, we show that dualities with respect to symmetries encoded into nilpotent fusion categories can be realised in constant depth. The resulting circuits can for instance be used to efficiently prepare short- and long-range entangled states or map between different gapped boundaries of (2+1)d topological models.
We consider a k-essence model in which a single scalar field can be responsible for both primordial inflation and the presently observed acceleration of the cosmological background geometry. The former is driven by a slow-roll potential, and the latter by a dynamical dimensional reduction process which freezes the scalar field on a degenerate surface, turning it into a cosmological constant. This is done by proposing a realizable, stable cosmic time crystal, albeit with a different interpretation of the "moving ground state": there is no motion because the system loses degrees of freedom. Furthermore, the model is free of pathologies such as propagating superluminal perturbations, negative energies, and perturbation instabilities.
In these notes we give a pedagogical account of the replica trick derivation of CFT entanglement and its holographic counterpart, i.e. the Lewkowycz-Maldacena derivation of the Ryu-Takayanagi formula. The application to an 'island set-up' for the calculation of black hole radiation entropy is briefly discussed. Further topics include the relation to thermal entropy, thermofield double constructions, and statements about the emergence of gravity from entanglement through reinterpretations of gravitational first laws.
We explore non-invertible symmetries in two-dimensional lattice models with subsystem $\mathbb Z_2$ symmetry. We introduce a subsystem $\mathbb Z_2$-gauging procedure, called the subsystem Kramers-Wannier transformation, which generalizes the ordinary Kramers-Wannier transformation. The corresponding duality operators and defects are constructed by gaugings on the whole or half of the Hilbert space. By gauging twice, we derive fusion rules of duality operators and defects, which enriches ordinary Ising fusion rules with subsystem features. Subsystem Kramers-Wannier duality defects are mobile in both spatial directions, unlike the defects of invertible subsystem symmetries. We finally comment on the anomaly of the subsystem Kramers-Wannier duality symmetry, and discuss its subtleties.
Consider a system consisting of a D$p$ and a D$p'$ brane, placed parallel to each other at a separation, with $p - p' = 2 n$ (assuming $p \ge p'$, the integer $n \ge 0$, and $p' \ge 0$). When either the D$p$ or the D$p'$ carries a worldvolume electric flux, one in general expects a non-vanishing open string pair production due to pairs of virtual open strings/anti-open strings connecting the two D-branes under the action of the applied flux. However, this will not be true for $p' = 0$ and $p = 2, 4, 6$ when the D$p$ carries a pure electric flux. In this note, we explore the case for which a finite, non-vanishing open string pair production rate can indeed be produced when a certain worldvolume flux is applied to the D$p$ brane, and explain the physics behind this.
Evaluating a lattice path integral in terms of spectral data and matrix elements pertaining to a suitably defined quantum transfer matrix, we derive form-factor series expansions for the dynamical two-point functions of arbitrary local operators in fundamental Yang-Baxter integrable lattice models at finite temperature. The summands in the series are parameterised by solutions of the Bethe Ansatz equations associated with the eigenvalue problem of the quantum transfer matrix. We elaborate on the example of the XXZ chain for which the solutions of the Bethe Ansatz equations are sufficiently well understood in certain limiting cases. We work out in detail the case of the spin-zero operators in the antiferromagnetic massive regime at zero temperature. In this case the thermal form-factor series turn into series of multiple integrals with fully explicit integrands. These integrands factorize into an operator-dependent part, determined by the so-called Fermionic basis, and a part which we call the universal weight as it is the same for all spin-zero operators. The universal weight can be inferred from our previous work. The operator-dependent part is rather simple for the most interesting short-range operators. It is determined by two functions $\rho$ and $\omega$ for which we obtain explicit expressions in the considered case. As an application we rederive the known explicit form-factor series for the two-point function of the magnetization operator and obtain analogous expressions for the magnetic current and the energy operators.
We construct a holographic dual theory of the one-dimensional anisotropic Heisenberg spin chain, which includes two Chern-Simons gauge fields and a charged scalar field. Thermodynamic quantities of the spin chain at low temperatures, which are exactly calculated from integrability, are completely reproduced by the dual theory on three-dimensional black hole backgrounds, and an exact matching of the parameters between the dual theory and the spin chain is obtained. The holographic dual theory provides a new theoretical framework for analyzing the quantum spin chain and one-dimensional quantum many-body systems.
In this note we discuss features of the simplest spinning Discrete Series Unitary Irreducible Representations (UIRs) of SO(1,4). These representations are known to be realised in the single-particle Hilbert space of a free gauge field propagating in a fixed four-dimensional de Sitter background. They showcase distinct features as compared to the more common Principal Series realised by heavy fields. Upon computing the one-loop sphere path integral we show that the \emph{edge modes} of the theory can be understood in terms of a Discrete Series of SO$(1,2)$. We then canonically quantise the theory and show how group theory constrains the mode decomposition. We further clarify the role played by the second SO(4) Casimir in the single-particle Hilbert space of the theory.
We consider the boundary dual of AdS3xS3xK3 for NS5-flux Q5=1, which is described by a sigma model with target space given by the d-fold symmetric product of K3. Building on results in algebraic geometry, we address the problem of deforming it away from the orbifold point from the viewpoint of topological strings. We propose how the 't Hooft expansion can be geometrized in terms of Gromov-Witten invariants and, in favorable settings, how it can be summed up to all orders in closed form. We consider an explicit example in detail for which we discuss the genus expansion around the orbifold point, as well as the divergence in the strong coupling regime. We find that within the domain of convergence, scale separation does not occur. However, in order for the mathematical framework to be applicable in the first place, we need to consider "reduced" Gromov-Witten invariants that fit, as we argue, naturally to topologically twisted N=4 strings. There are some caveats and thus to what extent this toy model captures the physics of strings on AdS3xS3xK3 remains to be seen.
In this paper, we consider the spin-2 field perturbations of four families of supergravity solutions. These include AdS$_5$ and AdS$_7$ backgrounds of type IIA as well as AdS$_4$ and AdS$_6$ backgrounds of type IIB. As the main result, we show that, in all the cases, there is a solution given by a combination of the warp factors. We also find the respective mass spectra. We analyze the normalizability of the solutions and identify the superconformal multiplets dual to them.
The Lanzhou Cooling-Storage-Ring facility is set to conduct experiments involving Uranium-Uranium collisions at center-of-mass energies ranging from 2.12 to 2.4 GeV. Our investigation is focused on various bulk observables, which include the charged particle multiplicity ($N_{\text{ch}}$), average transverse momentum ($\langle p_{\text{T}}\rangle$), initial eccentricity ($\epsilon_{n}$), and flow harmonics ($v_{n}$), for different orientations of U+U collisions within the range $0^{\circ} < \theta < 120^{\circ}$ at $\sqrt{s_{\mathrm{NN}}} = 2.12$ GeV ($p_{\mathrm{lab}}$ = 500 MeV). Among the various collision configurations at this energy, the tip-tip scenario emerges with the highest average charged particle multiplicity, $\langle N_{\text{ch}} \rangle$. Notably, both the second- and third-order eccentricities, $\epsilon_{2,3}$, reveal intricate patterns as they vary with impact parameter across distinct configurations. The tip-tip configuration displays the most pronounced magnitude of rapidity-odd directed flow ($v_{1}$), whereas the body-body configuration exhibits the least pronounced magnitude. Concerning elliptic flow ($v_{2}$) near mid-rapidity ($|\eta| < 1.0$), a negative sign is observed for all configurations except side-side, which exhibits a distinctly positive sign. Within the spectrum of configurations, the body-body scenario displays the highest magnitude of $v_{2}$. For reaction-plane-correlated triangular flow ($v_{3}$), the body-body configuration emerges with the largest magnitude while the side-side exhibits the smallest. Our study seeks to establish a fundamental understanding of various U+U collision configurations in preparation for the forthcoming CEE experiment.
Light scalar $X_{0}$ or vector $X_{1}$ particles have been introduced as a possible explanation for the $(g-2)_{\mu}$ anomaly and dark matter phenomena. Using $(8.998\pm 0.039)\times10^9$ $J/\psi$ events collected by the BESIII detector, we search for a light muon-philic scalar $X_{0}$ or vector $X_{1}$ in the processes $J/\psi\to\mu^+\mu^- X_{0,1}$ with $X_{0,1}$ decaying invisibly. No obvious signal is found, and the upper limits on the coupling $g_{0,1}'$ between the muon and the $X_{0,1}$ particles are set between $1.1\times10^{-3}$ and $1.0\times10^{-2}$ for $X_{0,1}$ masses in the range $1<M(X_{0,1})<1000$~MeV$/c^2$ at the 90\% confidence level.
We report a search for time variations of the solar $^8$B neutrino flux using 5,804 live days of Super-Kamiokande data collected between May 31, 1996, and May 30, 2018. Super-Kamiokande measured the precise time of each solar neutrino interaction over 22 calendar years to search for solar neutrino flux modulations with unprecedented precision. Periodic modulations are searched for in a data set comprised of five-day-interval solar neutrino flux measurements with a maximum likelihood method. We also applied the Lomb-Scargle method to this data set to compare with previous reports. The only significant modulation found is due to the elliptic orbit of the Earth around the Sun. The observed modulation is consistent with astronomical data: we measure an eccentricity of (1.53$\pm$0.35)\,\% and a perihelion shift of ($-$1.5$\pm$13.5)\,days.
The vector-boson production cross-section for the Higgs boson decay in the $H \rightarrow WW^{\ast} \rightarrow e\nu\mu\nu$ channel is measured as a function of kinematic observables sensitive to the Higgs boson production and decay properties as well as integrated in a fiducial phase space. The analysis is performed using the proton-proton collision data collected by the ATLAS detector in Run 2 of the LHC at $\sqrt{s}= 13$ $\text{TeV}$ center-of-mass energy corresponding to an integrated luminosity of 139 fb$^{-1}$. The different flavor final state is studied by selecting an electron and a muon originating from a pair of $W$ bosons and compatible with the Higgs boson decay. The data are corrected for the effects of detector inefficiency and resolution, and the measurements are compared with different state-of-the-art theoretical predictions. The differential cross-sections are used to constrain anomalous interactions described by dimension-six operators in an Effective Field Theory.
A study of prompt $\Xi_{c}^{+}$ production in proton-lead collisions is performed with the LHCb experiment at a centre-of-mass energy per nucleon pair of 8.16 TeV, using $p$Pb and Pb$p$ data collected in 2016 with estimated integrated luminosities of approximately 12.5 and 17.4 nb$^{-1}$, respectively. The $\Xi_{c}^{+}$ production cross-section, as well as the ratio of the $\Xi_{c}^{+}$ to $\Lambda_{c}^{+}$ production cross-sections, are measured as functions of the transverse momentum and rapidity and compared to the latest theory predictions. The forward-backward asymmetry is also measured as a function of the $\Xi_{c}^{+}$ transverse momentum.
When a measurement of a physical quantity is reported, the total uncertainty is usually decomposed into statistical and systematic uncertainties. This decomposition is not only useful to understand the contributions to the total uncertainty, but also to propagate these contributions in a subsequent analysis, such as combinations or interpretation fits including results from other measurements or experiments. In profile-likelihood fits, contributions of systematic uncertainties are most often quantified using impacts, which are not adequate for such applications. We discuss the difference between these impacts and uncertainty components, and propose a simple method to determine the latter.
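As a generic illustration (not necessarily the prescription proposed in the paper), a common way to quote an uncertainty component is the quadrature difference between the total profile-likelihood uncertainty and the uncertainty from a fit with the corresponding nuisance parameters fixed:

```python
import math

def uncertainty_component(sigma_total, sigma_fixed):
    """Quadrature-difference estimate of one uncertainty component:
    sigma_total is the full profile-likelihood uncertainty, sigma_fixed the
    uncertainty from a fit with the relevant nuisance parameters held fixed."""
    if sigma_fixed > sigma_total:
        raise ValueError("fixed-parameter fit should not give a larger uncertainty")
    return math.sqrt(sigma_total**2 - sigma_fixed**2)

# illustrative numbers only
sigma_total = 0.082           # full fit
sigma_stat = 0.061            # all systematic nuisance parameters fixed
print("systematic component:", uncertainty_component(sigma_total, sigma_stat))
```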
A search is performed for a heavy particle decaying into different-flavour dilepton final states, using 139 $\mathrm{fb}^{-1}$ of proton-proton collision data at $\sqrt{s}=13$ TeV collected in 2015-2018 by the ATLAS detector at the Large Hadron Collider. Final states with electrons, muons and hadronically decaying tau leptons are considered ($e\mu$, $e\tau$ or $\mu\tau$). No significant excess over the Standard Model predictions is observed. Upper limits on the production cross-section are set as a function of the mass of a Z' boson, a supersymmetric $\tau$-sneutrino, and a quantum black hole. The observed 95% CL lower mass limits obtained on a typical benchmark-model Z' boson are 5.0 TeV ($e\mu$), 4.0 TeV ($e\tau$), and 3.9 TeV ($\mu\tau$).
We present the first comprehensive tests of light-lepton universality in the angular distributions of semileptonic $B^0$-meson decays to charged spin-1 charmed mesons. We measure five angular-asymmetry observables as functions of the decay recoil that are sensitive to lepton-universality-violating contributions. We use events where one neutral $B$ is fully reconstructed in $\Upsilon\left(4S\right)\to{}B \overline{B}$ decays in data corresponding to $189~\mathrm{fb}^{-1}$ integrated luminosity from electron-positron collisions collected with the Belle II detector. We find no significant deviation from the standard model expectations.
We discuss a tensor network method for constructing the adiabatic gauge potential -- the generator of adiabatic transformations -- as a matrix product operator, which allows us to adiabatically transport matrix product states. Adiabatic evolution of tensor networks offers a wide range of applications, of which two are explored in this paper: improving tensor network optimization and scanning phase diagrams. By efficiently transporting eigenstates to quantum criticality and performing intermediary density matrix renormalization group (DMRG) optimizations along the way, we demonstrate that we can compute ground and low-lying excited states faster and more reliably than a standard DMRG method at or near quantum criticality. We demonstrate a simple automated step size adjustment and detection of the critical point based on the norm of the adiabatic gauge potential. Remarkably, we are able to reliably transport states through the critical point of the models we study.
Feedback-based control is the de facto standard when it comes to controlling classical stochastic systems and processes. However, standard feedback-based control methods are challenged by quantum systems due to measurement-induced backaction and partial observability. Here we remedy this by using weak quantum measurements and model-free reinforcement learning agents to perform quantum control. By comparing control algorithms with and without state estimators for stabilizing a quantum particle in an unstable state near a local potential-energy maximum, we show how a trade-off between state estimation and controllability arises. For the scenario where the classical analogue is highly nonlinear, the reinforcement-learned controller has an advantage over the standard controller. Additionally, we demonstrate the feasibility of using transfer learning to develop a quantum control agent trained via reinforcement learning on a classical surrogate of the quantum control problem. Finally, we present results showing how the reinforcement learning control strategy differs from the classical controller in the nonlinear scenarios.
Of existing entanglement witnesses that utilize only collective measurements of a spin ensemble, not all can detect genuine multipartite entanglement (GME), and none can detect Greenberger-Horne-Zeilinger (GHZ) states beyond the tripartite case. We fill this gap by introducing an entanglement witness that detects GME of spin ensembles, whose total spin is half-integer, using only collective spin measurements. Our witness is based on a nonclassicality test introduced by Tsirelson, and solely requires the measurement of total angular momentum along different directions. States detected by our witness are close to a family of GHZ-like states, which includes GHZ states of an odd number of spin-half particles. We also study the robustness of our witness under depolarizing noise, and derive exact noise bounds for detecting noisy GHZ states.
We introduce entanglement witnesses for spin ensembles which detect genuine multipartite entanglement using only measurements of the total angular momentum. States that are missed by most other angular-momentum-based witnesses for spin ensembles, which include Greenberger-Horne-Zeilinger states and certain superpositions of Dicke states, can be effectively detected by our witness. The protocol involves estimating the probability that the total angular momentum is positive along equally spaced directions on a plane. Alternatively, one could measure along a single direction at different times, under the assumption that the total spin undergoes a uniform precession. Genuine multipartite entanglement is detected when the observed score exceeds a separable bound. Exact analytical expressions for the separable bound are obtained for spin ensembles $j_1\otimes j_2\otimes\dots \otimes j_N$ such that the total spin is a half-integer, and numerical results are reported for the other cases. Finally, we conjecture an expression for the separable bound when the total spin is not known, which is well supported by the numerical results.
In this paper, we identify a family of nonconvex continuous optimization instances, each $d$-dimensional instance with $2^d$ local minima, to demonstrate a quantum-classical performance separation. Specifically, we prove that the recently proposed Quantum Hamiltonian Descent (QHD) algorithm [Leng et al., arXiv:2303.01471] is able to solve any $d$-dimensional instance from this family using $\widetilde{\mathcal{O}}(d^3)$ quantum queries to the function value and $\widetilde{\mathcal{O}}(d^4)$ additional 1-qubit and 2-qubit elementary quantum gates. On the other side, a comprehensive empirical study suggests that representative state-of-the-art classical optimization algorithms/solvers (including Gurobi) would require a super-polynomial time to solve such optimization instances.
Quantum computers are not yet up to the task of providing computational advantages for practical stochastic diffusion models commonly used by financial analysts. In this paper we introduce a class of stochastic processes that are both realistic in terms of mimicking financial market risks and more amenable to potential quantum computational advantages. The models we study are based on a regime-switching volatility model driven by a Markov chain with observable states. The basic model features a geometric Brownian motion with drift and volatility parameters determined by the finite states of a Markov chain. We study algorithms to estimate credit risk and option pricing on a gate-based quantum computer. These models bring us closer to realistic market settings, and therefore bring quantum computing closer to the realm of practical applications.
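A classical Monte Carlo sketch of the basic model, assuming illustrative parameters: a discrete Markov chain over volatility regimes drives the drift and volatility of a geometric Brownian motion, and a discounted payoff is averaged over paths. This is only a stand-in for the quantum algorithms studied in the paper.

```python
import numpy as np

def simulate_regime_switching_gbm(S0, mus, sigmas, P, T=1.0, steps=252,
                                  paths=10000, seed=0):
    """Monte Carlo paths of a GBM whose drift and volatility are set by the
    current state of a discrete Markov chain with transition matrix P."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S = np.full(paths, S0, dtype=float)
    state = np.zeros(paths, dtype=int)
    for _ in range(steps):
        # evolve the Markov-chain regime of each path
        u = rng.random(paths)
        cum = np.cumsum(P[state], axis=1)
        state = (u[:, None] > cum).sum(axis=1)
        # GBM step with regime-dependent drift and volatility
        mu, sig = mus[state], sigmas[state]
        z = rng.standard_normal(paths)
        S *= np.exp((mu - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z)
    return S

P = np.array([[0.98, 0.02],    # calm -> {calm, turbulent}
              [0.05, 0.95]])   # turbulent -> {calm, turbulent}
S_T = simulate_regime_switching_gbm(100.0, np.array([0.05, 0.02]),
                                    np.array([0.15, 0.45]), P)
# discounted European call payoff, purely illustrative numbers
print("call estimate:", np.exp(-0.03) * np.maximum(S_T - 105.0, 0).mean())
```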
For a quantum error correcting code to be used in practice, it needs to be equipped with an efficient decoding algorithm, which identifies corrections given the observed syndrome of errors. Hypergraph product codes are a promising family of constant-rate quantum LDPC codes that have a linear-time decoding algorithm called Small-Set-Flip ($\texttt{SSF}$) (Leverrier, Tillich, Z\'emor FOCS 2015). The algorithm proceeds by iteratively applying small corrections which reduce the syndrome weight. Together, these small corrections can provably correct large errors for sufficiently large codes with sufficiently large (but constant) stabilizer weight. However, this guarantee does not hold for small codes with low stabilizer weight. In this case, $\texttt{SSF}$ can terminate with stopping failures, meaning it encounters an error for which it is unable to identify a small correction. We find that the structure of the errors that cause stopping failures has a simple form for sufficiently small qubit failure rates. We propose a new decoding algorithm called the Projection-Along-a-Line ($\texttt{PAL}$) decoder to supplement $\texttt{SSF}$ after stopping failures. Using $\texttt{SSF}+\texttt{PAL}$ as a combined decoder, we find an order-of-magnitude improvement in the logical error rate.
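A heavily simplified sketch in the spirit of a syndrome-weight-reducing decoder: it greedily applies the single-qubit flip that most reduces the syndrome weight and reports a stopping failure when no flip helps. The real $\texttt{SSF}$ decoder searches over subsets of generator supports of a hypergraph product code; the parity-check matrix below is a toy example.

```python
import numpy as np

def greedy_flip_decode(H, syndrome, max_iters=100):
    """Toy stand-in for a syndrome-weight-reducing decoder.
    H: parity-check matrix over GF(2); syndrome: observed syndrome vector.
    Returns (correction, stopped) where stopped=True marks a stopping failure."""
    s = syndrome.copy() % 2
    correction = np.zeros(H.shape[1], dtype=int)
    for _ in range(max_iters):
        if not s.any():
            return correction, False
        # change in syndrome weight from flipping each qubit
        gains = [s.sum() - ((s + H[:, j]) % 2).sum() for j in range(H.shape[1])]
        j = int(np.argmax(gains))
        if gains[j] <= 0:
            return correction, True      # stopping failure: no flip reduces the weight
        correction[j] ^= 1
        s = (s + H[:, j]) % 2
    return correction, True

# toy example: repetition-code checks
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
error = np.array([0, 1, 0, 0])
corr, stopped = greedy_flip_decode(H, H @ error % 2)
print(corr, stopped)
```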
A physical system is determined by a finite set of initial conditions and laws represented by equations. The system is computable if we can solve the equations in all instances using a ``finite body of mathematical knowledge''. In this case, if the laws of the system can be coded into a computer program, then, given the system's initial conditions, one can compute the system's evolution. This scenario is tacitly taken for granted. But is this reasonable? The answer is negative, and a straightforward example is when the initial conditions or equations involve irrational numbers like Chaitin's Omega Number: no program can deal with such numbers because of their ``infinity''. Are there incomputable physical systems? This question has been studied theoretically in the last 30--40 years. This article presents a class of quantum protocols producing quantum random bits. Theoretically, we prove that every infinite sequence generated by these quantum protocols is strongly incomputable -- no algorithm computing any bit of such a sequence can be proved correct. This theoretical result is not only more robust than the ones in the literature: experimental results support and complement it.
We prove that if the boundary of a topological insulator divides the plane into two regions containing arbitrarily large balls, then it acts as a conductor. Conversely, we show that topological insulators that fit within strips do not need to admit conducting boundary modes.
Classical computing has borne witness to the development of machine learning. The integration of quantum technology into this mix will lead to unimaginable benefits and be regarded as a giant leap forward in mankind's ability to compute. Demonstrating the benefits of this integration now becomes essential. With the advance of quantum computing, several machine-learning techniques have been proposed that use quantum annealing. In this study, we implement a matrix factorization method using quantum annealing for image classification and compare the performance with traditional machine-learning methods. Nonnegative/binary matrix factorization (NBMF) was originally introduced as a generative model, and we propose a multiclass classification model as an application. We extract the features of handwritten digit images using NBMF and apply them to solve the classification problem. Our findings show that when the amounts of data, features, and epochs are small, the accuracy of models trained by NBMF is superior to that of classical machine-learning methods, such as neural networks. Moreover, we found that training models using a quantum annealing solver significantly reduces computation time. Under certain conditions, there is a benefit to using quantum annealing technology with machine learning.
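A minimal classical sketch of the underlying workflow, assuming plain multiplicative-update NMF in place of NBMF (i.e., without the binary constraint or the annealer): factorize training images as V ≈ WH and use the columns of H as low-dimensional features for a downstream classifier.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF V ~ W H (Lee-Seung). In NBMF the H
    factor would additionally be constrained to be binary, with the binary
    subproblem handed to a quantum annealer; that constraint is omitted here."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy "images": two noisy prototypes of 64 pixels each
rng = np.random.default_rng(1)
protos = rng.random((2, 64))
V = np.vstack([protos[i % 2] + 0.05 * rng.random(64) for i in range(40)]).T  # 64 x 40
W, H = nmf(V, k=2)
# columns of H are low-dimensional features of each image, usable by any classifier
print(H.shape, np.round(H[:, :4], 2))
```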
We present a quantum-classical hybrid algorithm for calculating the ground state and its energy of a quantum many-body Hamiltonian by proposing an adaptive construction of a quantum state for the quantum-selected configuration interaction (QSCI) method. QSCI allows us to select important electronic configurations in the system to perform a CI calculation (subspace diagonalization of the Hamiltonian) by sampling measurements of a suitable input quantum state on a quantum computer, but how to prepare a desirable input state has remained a challenge. We propose an adaptive construction of the input state for QSCI in which we run QSCI repeatedly to grow the input state iteratively. We numerically illustrate that our method, dubbed \textit{ADAPT-QSCI}, can yield accurate ground-state energies for small molecules, including a noisy situation for eight qubits where the error rates of two-qubit gates and measurements are both as large as 1\%. ADAPT-QSCI serves as a promising method to take advantage of current noisy quantum devices and pushes forward their application to quantum chemistry.
Variational quantum eigensolvers (VQEs) are one of the most important and effective applications of quantum computing, especially in the current noisy intermediate-scale quantum (NISQ) era. VQE ansatzes fall into two main classes: problem-agnostic and problem-specific. Problem-agnostic methods often suffer from trainability issues, while the performance of problem-specific methods usually relies on choices of initial reference states that are often hard to determine. In this paper, we propose an Entanglement-variational Hardware-efficient Ansatz (EHA), and numerically compare it with some widely used ansatzes by solving benchmark problems in quantum many-body systems and quantum chemistry. Our EHA is problem-agnostic and hardware-efficient, especially suitable for NISQ devices, and has potential for wide application. EHA can achieve a higher level of accuracy in finding ground states and their energies in most cases, even compared with problem-specific methods. The performance of EHA is robust to the choice of initial states and parameter initialization, and it has the ability to quickly adjust the entanglement to the required amount, which is also the fundamental reason for its superiority.
Qubits based on ions trapped in linear radio-frequency traps form a successful platform for quantum computing, due to their high-fidelity operations, all-to-all connectivity and degree of local control. In principle there is no fundamental limit to the number of ion-based qubits that can be confined in a single 1d register. However, in practice there are two main issues associated with long trapped-ion crystals, which stem from the 'softening' of their modes of motion upon scaling up: high heating rates of the ions' motion, and a dense motional spectrum; both impede the performance of high-fidelity qubit operations. Here we propose a holistic, scalable architecture for quantum computing with large ion crystals that overcomes these issues. Our method relies on dynamically operated optical potentials that instantaneously segment the ion crystal into cells of a manageable size. We show that these cells behave as nearly independent quantum registers, allowing for parallel entangling gates on all cells. The ability to reconfigure the optical potentials guarantees connectivity across the full ion crystal, and also enables efficient mid-circuit measurements. We study the implementation of large-scale parallel multi-qubit entangling gates that operate simultaneously on all cells, and present a protocol to compensate for crosstalk errors, enabling full-scale usage of an extensively large register. We illustrate that this architecture is advantageous both for fault-tolerant digital quantum computation and for analog quantum simulations.
The paper considers the problem of finding a given substring in a text. It is known that the complexity of a classical search query in an unordered database is linear in the length of the text and of the given substring. At the same time, Grover's quantum search provides a quadratic speedup in query complexity and gives the correct result with high probability. We propose a hybrid classical-quantum algorithm (a hybrid random-quantum algorithm, to be more precise) that implements Grover's search to find a given substring in a text. As expected, the algorithm works (a) with a high probability of obtaining the correct result and (b) with a quadratic query speedup compared to the classical case. What is new is that our algorithm uses the technique of uniform hash function families. As a result, our algorithm is much more memory efficient (in terms of the number of qubits used) compared to previously known quantum algorithms.
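For intuition about the quadratic speedup, the sketch below is a statevector simulation of textbook Grover search for a single marked index, using about $(\pi/4)\sqrt{N}$ oracle queries; the hashing layer of the hybrid algorithm is not modeled.

```python
import numpy as np

def grover_search(n_qubits, marked, seed=None):
    """Statevector simulation of Grover's algorithm for one marked item
    out of N = 2**n_qubits. Returns the measured index and the number of
    oracle queries used (~ pi/4 * sqrt(N))."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))
    n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(n_iters):
        psi[marked] *= -1                      # oracle: phase flip on the marked item
        psi = 2 * psi.mean() - psi             # diffusion: inversion about the mean
    probs = np.abs(psi) ** 2
    rng = np.random.default_rng(seed)
    return int(rng.choice(N, p=probs / probs.sum())), n_iters

result, queries = grover_search(10, marked=777, seed=3)
print(result, queries)   # with high probability: 777, using ~25 oracle queries
```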
Accurate and scalable methods for computational quantum chemistry can accelerate research and development in many fields, ranging from drug discovery to advanced material design. Solving the electronic Schr\"odinger equation is the core problem of computational chemistry. However, the combinatorial complexity of this problem makes it intractable to find exact solutions, except for very small systems. The idea of quantum computing originated from this computational challenge of simulating quantum mechanics. We propose an end-to-end quantum chemistry pipeline based on the variational quantum eigensolver (VQE) algorithm and integrated with both HPC-based simulators and a trapped-ion quantum computer. Our platform orchestrates hundreds of simulation jobs on compute resources to efficiently complete a set of ab initio chemistry experiments with a wide range of parameterization. Per- and poly-fluoroalkyl substances (PFAS) are a large family of human-made chemicals that pose a major environmental and health issue globally. Our simulations include breaking a carbon-fluorine bond in trifluoroacetic acid (TFA), a common PFAS chemical; this is a common pathway towards the destruction and removal of PFAS. Molecules are modeled on both a quantum simulator and a trapped-ion quantum computer, specifically IonQ Aria. Using basic error mitigation techniques, the 11-qubit TFA model (56 entangling gates) on IonQ Aria yields near-quantitative results with milli-Hartree accuracy. Our results show the current state and future projections for quantum computing in solving the electronic structure problem, push the boundaries for the VQE algorithm and quantum computers, and facilitate the development of quantum chemistry workflows.
We propose a two-mode two-photon microlaser based on a single semiconductor quantum dot grown inside a two-mode microcavity. We explore both incoherent and coherent pumping at low temperatures to achieve suitable conditions for two-mode two-photon lasing. The two-mode two-photon stimulated emission is strongly suppressed, while the single-photon stimulated emission is enhanced by exciton-phonon interactions. In a coherently pumped quantum dot one can achieve large two-mode two-photon lasing where single-photon lasing is almost absent. We also discuss the generation of a steady-state two-mode entangled state using two-photon resonant pumping.
Quantum Error Mitigation (QEM) presents a promising near-term approach to reduce error when estimating expectation values in quantum computing. Here, we introduce QEM techniques tailored for quantum annealing, using Zero-Noise Extrapolation (ZNE). We implement ZNE through zero-temperature extrapolation as well as energy-time rescaling. We conduct experimental investigations into the quantum critical dynamics of a transverse-field Ising spin chain, demonstrating the successful mitigation of thermal noise through both of these techniques. Moreover, we show that energy-time rescaling effectively mitigates control errors in the coherent regime where the effect of thermal noise is minimal. Our ZNE results agree with exact calculations of the coherent evolution over a range of annealing times that exceeds the coherent annealing range by almost an order of magnitude.
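A minimal sketch of the extrapolation step in ZNE, assuming synthetic data: an observable measured at several effective noise levels is fit with a low-order polynomial and evaluated at zero noise.

```python
import numpy as np

def zero_noise_extrapolate(noise_levels, values, degree=2):
    """Polynomial zero-noise extrapolation: fit values measured at nonzero
    effective noise levels and return the fitted value at zero noise."""
    coeffs = np.polyfit(noise_levels, values, deg=degree)
    return float(np.polyval(coeffs, 0.0))

# synthetic data: true noiseless value 0.80, observable degraded by thermal noise
noise = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
measured = (0.80 - 0.30 * noise + 0.05 * noise**2
            + 0.005 * np.random.default_rng(2).normal(size=5))
print(zero_noise_extrapolate(noise, measured))   # close to 0.80
```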
Quantum error correction is crucial for scalable quantum information processing applications. Traditional discrete-variable quantum codes that use multiple two-level systems to encode logical information can be hardware-intensive. An alternative approach is provided by bosonic codes, which use the infinite-dimensional Hilbert space of harmonic oscillators to encode quantum information. Two promising features of bosonic codes are that syndrome measurements are natively analog and that they can be concatenated with discrete-variable codes. In this work, we propose novel decoding methods that explicitly exploit the analog syndrome information obtained from the bosonic qubit readout in a concatenated architecture. Our methods are versatile and can be generally applied to any bosonic code concatenated with a quantum low-density parity-check (QLDPC) code. Furthermore, we introduce the concept of quasi-single-shot protocols as a novel approach that significantly reduces the number of repeated syndrome measurements required when decoding under phenomenological noise. To realize the protocol, we present a first implementation of time-domain decoding with the overlapping window method for general QLDPC codes, and a novel analog single-shot decoding method. Our results lay the foundation for general decoding algorithms using analog information and demonstrate promising results in the direction of fault-tolerant quantum computation with concatenated bosonic-QLDPC codes.
The expressibility of an ansatz used in a variational quantum algorithm is defined as the uniformity with which it can explore the space of unitary matrices. The expressibility of a particular ansatz has a well-defined upper bound. In this work, we show that the expressibility also has a well-defined lower bound in the hypothesis space. We provide an analytical expression for the lower bound of the covering number, which is directly related to expressibility. We also perform numerical simulations to support our claim. To numerically calculate the bond length of a diatomic molecule, we take hydrogen ($H_2$) as a prototype system and calculate the error in the energy at the equilibrium point for different ansatzes. We study the variation of the energy error with circuit depth and show that in each ansatz template a plateau exists for a range of circuit depths, which we call the set of acceptable points; the corresponding expressibility is called the best expressive region. We report that the width of this best expressive region in the hypothesis space is inversely proportional to the average error. Our analysis reveals that, alongside trainability, the lower bound of expressibility also plays a crucial role in selecting variational quantum ansatzes.
Engineering high-fidelity two-qubit gates is an indispensable step toward practical quantum computing. For superconducting quantum platforms, one important obstacle is the stray interaction between qubits, which causes significant coherent errors. For transmon qubits, protocols for mitigating such errors usually involve fine-tuning the hardware parameters or introducing typically noisy flux-tunable couplers. In this work, we propose a simple scheme to cancel these stray interactions. The coupler used for the cancellation is a driven high-coherence resonator, where the amplitude and frequency of the drive serve as control knobs. Through the resonator-induced-phase (RIP) interaction, the static ZZ coupling can be entirely neutralized. We numerically show that such a scheme can enable short and high-fidelity entangling gates, including cross-resonance CNOT gates within 40 ns and adiabatic CZ gates within 140 ns. Our architecture is not only ZZ free but also contains no extra noisy components, so that it preserves the coherence times of fixed-frequency transmon qubits. With state-of-the-art coherence times, the error of our cross-resonance CNOT gate can be reduced to below $10^{-4}$.
We explore the role of exceptional points and complex eigenvalues in the occurrence of the quantum Mpemba effect. To this end, we study a two-level driven dissipative system subjected to an oscillatory electric field and dissipative coupling with the environment. We find that both exceptional points and complex eigenvalues can lead to multiple quantum Mpemba effects, where time-evolved trajectories corresponding to two different initial conditions may intersect each other more than once. Such multiple intersections originate from additional algebraic time dependence at the exceptional points and from oscillatory relaxation in the case of complex eigenvalues. We provide analytical results for the quantum Mpemba effect in the density matrix and other observables in the presence of coherence. The system temperature shows a multiple thermal quantum Mpemba effect. The distance function measured in terms of the Kullback-Leibler divergence is found to have only a single intersection, whereas the corresponding speed can surprisingly give rise to multiple intersections.
Nonstabilizerness, or magic, is an essential quantum resource for performing universal quantum computation. Robustness of magic (RoM) in particular characterizes the degree of usefulness of a given quantum state for non-Clifford operations. While the mathematical formalism of RoM can be given in a concise manner, it is extremely challenging to determine the RoM in practice, since it involves superexponentially many pure stabilizer states. In this work, we present efficient novel algorithms to compute the RoM. The crucial technique is a subroutine that achieves two remarkable features in the calculation of overlaps between pure stabilizer states: (i) the time complexity per stabilizer is reduced exponentially, and (ii) the space complexity is reduced superexponentially. Based on this subroutine, we present algorithms to compute the RoM for arbitrary states of up to $n=7$ qubits on a laptop, while brute-force methods require a memory size of 86 TiB. As a byproduct, the proposed subroutine allows us to simulate the stabilizer fidelity up to $n=8$ qubits, for which naive methods require a memory size of 86 PiB so that no state-of-the-art classical computer can execute the computation. We further propose novel algorithms that utilize prior knowledge of the structure of the target quantum state, such as permutation symmetry or partial disentanglement, and numerically demonstrate our state-of-the-art results for copies of magic states and partially disentangled quantum states. The series of algorithms constitutes a comprehensive ``handbook'' to scale up the computation of the RoM, and we envision that the proposed technique applies to the computation of other quantum resource measures as well.
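For a single qubit the RoM reduces to a tiny linear program over the six stabilizer states, which the sketch below solves with scipy; for the Bloch vector $(1,1,1)/\sqrt{3}$ it should return roughly $\sqrt{3}\approx 1.73$. The paper's contribution concerns taming the superexponential growth of this program at larger qubit numbers, which is not addressed here.

```python
import numpy as np
from scipy.optimize import linprog

# Single-qubit stabilizer states as Bloch vectors (eigenstates of X, Y, Z)
stab_bloch = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)

def robustness_of_magic(bloch):
    """RoM(rho) = min sum_i |x_i| such that rho = sum_i x_i |s_i><s_i| over
    stabilizer states s_i. For one qubit this is a tiny linear program."""
    n = len(stab_bloch)
    # write x = p - q with p, q >= 0 and minimize sum(p + q)
    c = np.ones(2 * n)
    # equality constraints: sum_i x_i = 1 (trace) and sum_i x_i b_i = bloch
    A_eq = np.vstack([np.ones(n), stab_bloch.T])           # 4 x n
    A_eq = np.hstack([A_eq, -A_eq])                        # act on (p, q)
    b_eq = np.concatenate([[1.0], bloch])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

# T-type magic state, Bloch vector (1,1,1)/sqrt(3): expect RoM ~ sqrt(3)
print(robustness_of_magic(np.array([1.0, 1.0, 1.0]) / np.sqrt(3)))
```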
In a nonlocal game, two noncommunicating players cooperate to convince a referee that they possess a strategy that does not violate the rules of the game. Quantum strategies enable players to optimally win some games by performing joint measurements on a shared entangled state, but computing these strategies can be challenging. We develop a variational algorithm for computing strategies of nonlocal games and show that it can yield optimal strategies for small examples of both convex and non-convex games. We show that our algorithm returns an optimal quantum strategy for a graph coloring game; whereas no optimal quantum strategy was previously known for this problem. Moreover, we describe how this technique can be run on quantum computers to discover shallow-depth circuits that yield optimal quantum strategies. We argue that such circuits will be useful for benchmarking quantum computers because of the ability to verify the solutions at scale and the experiment's sensitivity to 2-qubit gate noise. Finally, we demonstrate the use of nonlocal games as a benchmarking strategy experimentally on 11 IBM quantum computers.
We present an efficient and robust protocol for quantum-enhanced sensing using a single spin qubit in a topological waveguide system. Our method relies on topologically paired bound states, which are localized near the spin and can be effectively regarded as a two-level system. Through the lens of Bayesian inference theory, we show that the sensitivity can reach the Heisenberg limit across a large field range. Inheriting the topological robustness of the waveguide, our sensing protocol is robust against local perturbations. The advantages of our protocol are manifold, as it allows for sensing various parameters and uses a product initial state, which can be easily prepared in experiments. We expect this approach to pave the way towards robust topological quantum sensors based on near-term quantum platforms such as topological photonics and Rydberg arrays.
In the domain of quantum metrology, cat states have demonstrated their utility despite their inherent fragility with respect to photon loss. Here, we introduce noise-robust optical cat states which exhibit metrological robustness for phase estimation in the regime of high photon numbers. These cat states are obtained from the intense laser-driven process of high harmonic generation (HHG) and, in the ideal case of vanishing losses, show almost twice the quantum Fisher information (QFI) compared to the even and odd cat states. More importantly, these HHG-cat states are much more robust against noise, such that the noisy HHG-cat state outperforms the pure even/odd cat states even in the presence of more than $25\%$ losses in the regime of high photon numbers. Furthermore, in the regime of small losses, the HHG-cat state remains almost pure while the even/odd cat state counterpart has already decohered to the maximally mixed state. This demonstrates that high-photon-number optical cat states can indeed be used for metrological applications even in the presence of losses.
Ground state preparation is classically intractable for general Hamiltonians. On quantum devices, shallow parameterized circuits can be effectively trained to obtain short-range entangled states under the paradigm of the variational quantum eigensolver, while deep circuits are generally untrainable due to the barren plateau phenomenon. In this Letter, we give a general lower bound on the variance of circuit gradients for arbitrary quantum circuits composed of local 2-designs. Based on our unified framework, we prove the absence of barren plateaus in training finite-local-depth circuits for the ground states of local Hamiltonians. These circuits are allowed to be deep in the conventional definition of circuit depth, so that they can generate long-range entanglement, but their local depths are finite, i.e., only a finite number of non-commuting gates act on each individual qubit. This fact suggests that long-range entangled ground states, such as topologically ordered states, can in general be prepared efficiently on quantum devices via variational methods. We validate our analytical results with extensive numerical simulations and demonstrate the effectiveness of variational training using the generalized toric code model.
The inherently low signal-to-noise ratio of NMR and MRI is now being addressed by hyperpolarization methods. For example, iridium-based catalysts that reversibly bind both parahydrogen and ligands in solution can hyperpolarize protons (SABRE) or heteronuclei (X-SABRE) on a wide variety of ligands, using a complex interplay of spin dynamics and chemical exchange processes, with common signal enhancements between $10^3-10^4$. This does not approach obvious theoretical limits, and further enhancement would be valuable in many applications (such as imaging mM concentration species in vivo). Most SABRE/X-SABRE implementations require far lower fields (${\mu}T-mT$) than standard magnetic resonance (>1T), and this gives an additional degree of freedom: the ability to fully modulate fields in three dimensions. However, this has been underexplored because the standard simplifying theoretical assumptions in magnetic resonance need to be revisited. Here we take a different approach, an evolutionary strategy algorithm for numerical optimization, Multi-Axis Computer-aided HEteronuclear Transfer Enhancement for SABRE (MACHETE-SABRE). We find nonintuitive but highly efficient multi-axial pulse sequences which experimentally can produce a 10-fold improvement in polarization over continuous excitation. This approach optimizes polarization differently than traditional methods, thus gaining extra efficiency.
The quantum switch is a physical process that creates a coherent control between different unitary operations; it is often described as a process that transforms a pair of unitary operations $(U_1 , U_2)$ into a controlled unitary operation that coherently applies them in different orders, as ${\vert {0} \rangle\!\langle {0} \vert} \otimes U_1 U_2 + {\vert {1} \rangle\!\langle {1} \vert} \otimes U_2 U_1$. This description, however, does not directly define its action on non-unitary operations. The action of the quantum switch on non-unitary operations is then chosen to be a "natural" extension of its action on unitary operations. Since, in general, the action of a process on non-unitary operations is not uniquely determined by its action on unitary operations alone, there could in principle be a set of inequivalent extensions of the quantum switch to non-unitary operations. In this paper, we prove that there is a unique way to extend the action of the quantum switch to non-unitary operations. In other words, contrary to the general case, the action of the quantum switch on non-unitary operations is completely determined by its action on unitary operations. We also discuss the general problem of when the complete description of a quantum process is uniquely determined by its action on unitary operations and identify a set of single-slot processes which are completely defined by their action on unitary operations.
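For reference, the extension commonly adopted in the literature acts at the level of Kraus operators: for channels $\mathcal{E}_1=\{K_i\}$ and $\mathcal{E}_2=\{L_j\}$,
$$ \mathcal{S}(\mathcal{E}_1,\mathcal{E}_2)(\rho)=\sum_{i,j} W_{ij}\,\rho\,W_{ij}^{\dagger},\qquad W_{ij}=\vert 0\rangle\!\langle 0\vert\otimes K_i L_j+\vert 1\rangle\!\langle 1\vert\otimes L_j K_i, $$
which reduces to the controlled-unitary expression above when each channel has a single Kraus operator.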
Exact single photons cannot be generated on demand due to their infinite tails. To quantify how close realizable optical states can be to some target single photon in one dimension, we argue that there are two natural but incompatible ways to specify the target state. Either it can be expressed as a photon with a chosen positive-frequency spectrum, or it can be described as an (unphysical) photon in a chosen positive-time pulse. The results show that for sufficiently short target pulses, the closest realizable states contain substantial multiphoton components. Upper and lower bounds for the maximum fidelity are derived and are expressed as functions of the size of the target state's tail, for negative time or negative frequency, respectively. We also generalize the bounds to arbitrary photon-number states.
We consider the quantum state control of a multi-state system, which evolves an initial state into a target state. We explicitly demonstrate the control method in an interesting case involving the transfer and rotation of a Schr\"{o}dinger cat state through a coupled harmonic oscillator chain at a predetermined time $T$. We use the gradient-based Krotov's method to design the time-dependent parameters of the coupled chain and find an optimal control shape that evolves the system into the target state. We show that the prescribed quantum state control can be achieved with high fidelity, and the robustness of the control against generic environmental noise is explored. Our findings will be of interest for the optimal control of many-body open quantum systems in the presence of environmental noise.
Quantum many-body scars are an intriguing dynamical regime in which quantum systems exhibit coherent dynamics and long-range correlations when prepared in certain initial states. We use this combination of coherence and many-body correlations to benchmark the performance of present-day quantum computing devices by using them to simulate the dynamics of an antiferromagnetic initial state in mixed-field Ising chains of up to 19 sites. In addition to calculating the dynamics of local observables, we also calculate the Loschmidt echo and a nontrivial connected correlation function that witnesses long-range many-body correlations in the scarred dynamics. We find coherent dynamics to persist over up to 40 Trotter steps even in the presence of various sources of error. To obtain these results, we leverage a variety of error mitigation techniques including noise tailoring, zero-noise extrapolation, dynamical decoupling, and physically motivated postselection of measurement results. Crucially, we also find that using pulse-level control to implement the Ising interaction yields a substantial improvement over the standard CNOT-based compilation of this interaction. Our results demonstrate the power of error mitigation techniques and pulse-level control to probe many-body coherence and correlation effects on present-day quantum hardware.
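A minimal gate-level sketch of the simulated dynamics (illustrative only; the paper's results rely on pulse-level compilation and error mitigation, and all parameter values below are assumptions): a first-order Trotter circuit for the mixed-field Ising chain starting from the antiferromagnetic initial state, written with Qiskit.

```python
# First-order Trotter circuit for H = J*sum Z_i Z_{i+1} + hx*sum X_i + hz*sum Z_i,
# starting from the Neel state |0101...>. Parameter values are illustrative.
from qiskit import QuantumCircuit

def trotter_circuit(n_sites=7, n_steps=10, dt=0.25, J=1.0, hx=0.5, hz=0.3):
    qc = QuantumCircuit(n_sites)
    for q in range(1, n_sites, 2):      # prepare the antiferromagnetic initial state
        qc.x(q)
    for _ in range(n_steps):
        for q in range(n_sites - 1):     # exp(-i J dt Z_i Z_{i+1}); rzz(t) = exp(-i t/2 ZZ)
            qc.rzz(2 * J * dt, q, q + 1)
        for q in range(n_sites):         # exp(-i hx dt X_i) and exp(-i hz dt Z_i)
            qc.rx(2 * hx * dt, q)
            qc.rz(2 * hz * dt, q)
    qc.measure_all()
    return qc

print(trotter_circuit().depth())
```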
We show that the dynamics of generic quantum systems concentrate around their equilibrium value when measured at arbitrary times. This means that the probability of finding them away from equilibrium is exponentially suppressed, with a decay rate given by the effective dimension. Our result allows us to place a lower bound on the recurrence time of quantum systems, since recurrences correspond to the rare events of finding a state away from equilibrium. In many-body systems, this bound is doubly exponential in system size. We also show corresponding results for free fermions, which display a weaker concentration and earlier recurrences.
The increasing capabilities of quantum computing hardware and the challenge of realizing deep quantum circuits require fully automated and efficient tools for compiling quantum circuits. To express arbitrary circuits in a sequence of native gates specific to the quantum computer architecture, it is necessary to make algorithms portable across the landscape of quantum hardware providers. In this work, we present a compiler capable of transforming and optimizing a quantum circuit targeting a shuttling-based trapped-ion quantum processor. It consists of custom algorithms built on top of the quantum circuit framework Pytket. The performance was evaluated for a wide range of quantum circuits, and the results show that the gate counts can be reduced by factors of up to 5.1 compared to standard Pytket and up to 2.2 compared to standard Qiskit compilation.
A passive quantum key distribution (QKD) transmitter generates the quantum states prescribed by a QKD protocol at random, combining a fixed quantum mechanism and a post-selection step. By avoiding the use of active optical modulators externally driven by random number generators, passive QKD transmitters offer immunity to modulator side channels and potentially enable higher frequencies of operation. Recently, the first linear-optics setup suitable for passive decoy-state QKD has been proposed. In this work, we simplify the prototype and adopt sharply different approaches for BB84 polarization encoding and decoy-state generation. On top of this, we elaborate a tight, custom-made security analysis that dispenses with an unnecessary assumption and a post-selection step that are central to the former proposal.
A milestone in the field of quantum computing will be solving problems in quantum chemistry and materials faster than state-of-the-art classical methods. The current understanding is that achieving quantum advantage in this area will require some degree of fault tolerance. While hardware is improving towards this milestone, optimizing quantum algorithms also brings it closer to the present. Existing methods for ground state energy estimation are costly in that they require a number of gates per circuit that grows exponentially with the desired number of bits of precision. We reduce this cost exponentially by developing a ground state energy estimation algorithm for which this cost grows linearly in the number of bits of precision. Relative to recent resource estimates of ground state energy estimation for the industrially relevant molecules ethylene-carbonate and PF$_6^-$, the estimated gate count and circuit depth are reduced by factors of 43 and 78, respectively. Furthermore, the algorithm can use additional circuit depth to reduce the total runtime. These features make our algorithm a promising candidate for realizing quantum advantage in the era of early fault-tolerant quantum computing.
Bulk dislocation lattice defects are instrumental in identifying translationally active topological insulators (TATIs), featuring band inversion at a finite momentum (${\bf K}_{\rm inv}$). As such, TATIs host robust gapless modes around the dislocation core when the associated Burgers vector ${\bf b}$ satisfies ${\bf K}_{\rm inv} \cdot {\bf b}=\pi$ (modulo $2 \pi$). From the time evolution of appropriate density matrices, we show that when a TATI enters, via a real-time ramp, a trivial or translationally inert topological insulating phase devoid of gapless dislocation modes, the signatures of the pre-ramp defect modes survive for a long time. More intriguingly, as the system ramps into a TATI phase from any translationally inert insulator, the signature of the dislocation mode dynamically builds up near its core, which is prominent for slow ramps. We exemplify these generic outcomes for two-dimensional time-reversal symmetry breaking insulators. The proposed dynamic responses at the dislocation core can be experimentally observed in quantum crystals, optical lattices and metamaterials with a time-tunable band gap.
Hilbert space fragmentation (HSF) is a phenomenon in which the Hilbert space of an isolated quantum system splits into exponentially many disconnected subsectors. Fragmented systems do not thermalize after long-time evolution because the dynamics are restricted to a small subsector. Inspired by recent developments in HSF, we construct a Hamiltonian that exhibits HSF in momentum space. We show that persistent-current (PC) states emerge due to the HSF in momentum space. We also investigate the stability of the PC states against a random potential, which breaks the structure of the HSF, and find that the decay rate of the PC is almost independent of the current velocity.
When a time propagator $e^{\delta t A}$ for duration $\delta t$ consists of two noncommuting parts $A=X+Y$, Trotterization approximately decomposes the propagator into a product of exponentials of $X$ and $Y$. Various Trotterization formulas have been utilized in quantum and classical computers, but much less is known about Trotterization with a time-dependent generator $A(t)$. Here, for $A(t)$ given by the sum of two operators $X$ and $Y$ with time-dependent coefficients, $A(t) = x(t) X + y(t) Y$, we develop a systematic approach to derive high-order Trotterization formulas with the minimum possible number of exponentials. In particular, we obtain fourth-order and sixth-order Trotterization formulas involving seven and fifteen exponentials, respectively, which are no more than those required for time-independent generators. We also construct another fourth-order formula consisting of nine exponentials with a smaller error coefficient. Finally, we numerically benchmark the fourth-order formulas in a Hamiltonian simulation of a quantum Ising chain, showing that the 9-exponential formula yields smaller errors per local quantum gate than the well-known Suzuki formula.
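For orientation, the textbook second-order baseline for a time-dependent generator combines a midpoint evaluation of the coefficients with Strang splitting (this is only the starting point, not the paper's fourth- and sixth-order constructions):
$$ \mathcal{T}\exp\!\left[\int_{t}^{t+\delta t} A(s)\,ds\right] = e^{\frac{\delta t}{2}\bar{x}X}\, e^{\delta t\,\bar{y}Y}\, e^{\frac{\delta t}{2}\bar{x}X} + \mathcal{O}(\delta t^{3}), \qquad \bar{x}=x\!\left(t+\tfrac{\delta t}{2}\right),\ \bar{y}=y\!\left(t+\tfrac{\delta t}{2}\right). $$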
Quantum many-body scar states are highly excited eigenstates of many-body systems that exhibit atypical entanglement and correlation properties relative to typical eigenstates at the same energy density. Scar states also give rise to infinitely long-lived coherent dynamics when the system is prepared in a special initial state having finite overlap with them. Many models with exact scar states have been constructed, but the fate of scarred eigenstates and dynamics when these models are perturbed is difficult to study with classical computational techniques. In this work, we propose state preparation protocols that enable the use of quantum computers to study this question. We present protocols both for individual scar states in a particular model and for superpositions of them that give rise to coherent dynamics. For superpositions of scar states, we present both a system-size-linear-depth unitary and a finite-depth nonunitary state preparation protocol, the latter of which uses measurement and postselection to reduce the circuit depth. For individual scarred eigenstates, we formulate an exact state preparation approach based on matrix product states that yields quasipolynomial-depth circuits, as well as a variational approach with a polynomial-depth ansatz circuit. We also provide proof-of-principle state-preparation demonstrations on superconducting quantum hardware.
The power-law random banded matrix (PLRBM) ensemble is a paradigmatic model of the Anderson localization transition (AT). In $d$ dimensions, PLRBMs are random matrices with algebraically decaying off-diagonal elements $H_{\vec{n}\vec{m}}\sim 1/|\vec{n}-\vec{m}|^\alpha$, exhibiting an AT at $\alpha=d$. In this work, we investigate the fate of the PLRBM under non-Hermiticity. We consider the case where the random on-site diagonal potential takes complex values, mimicking an open system subject to random gain-loss terms. We provide an analytical understanding of the model by generalizing the Anderson-Levitov resonance counting technique to the non-Hermitian case. This generalization identifies two competing mechanisms due to non-Hermiticity: one favoring localization and the other delocalization. The competition between the two gives rise to an AT at $d/2\le \alpha\le d$. The value of the critical $\alpha$ depends on the strength of the on-site potential, reminiscent of Hermitian disordered short-range models in $d>2$. Within the localized phase, the wave functions are algebraically localized with an exponent $\alpha$ even for $\alpha<d$. This result provides an example of non-Hermiticity-induced localization.
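A minimal numerical sketch of such an ensemble, under assumed conventions (simplified normalization, real symmetric hopping, box-distributed complex on-site disorder), in $d=1$:

```python
# One sample of a d=1 non-Hermitian PLRBM: hopping ~ 1/|n-m|^alpha plus a random
# complex (gain-loss) on-site potential, diagonalized with numpy.
import numpy as np

def plrbm_nonhermitian(N=500, alpha=0.75, W=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    dist = np.abs(n[:, None] - n[None, :]).astype(float)
    np.fill_diagonal(dist, 1.0)                 # avoid division by zero on the diagonal
    H = rng.normal(size=(N, N)) / dist**alpha   # algebraically decaying amplitudes
    H = (H + H.T) / 2                           # real symmetric hopping part
    np.fill_diagonal(H, 0.0)
    # complex on-site disorder: random potential plus random gain-loss terms
    H = H + np.diag(W * (rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)))
    return H

evals, evecs = np.linalg.eig(plrbm_nonhermitian())
ipr = np.sum(np.abs(evecs)**4, axis=0)          # inverse participation ratio per eigenvector
print(ipr.mean())
```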
The coherent dynamics of a quantum mechanical two-level system passing through an anti-crossing of two energy levels can give rise to Landau-Zener-St\"uckelberg-Majorana (LZSM) interference. LZSM interference spectroscopy has proven to be a fruitful tool to investigate charge noise and charge decoherence in semiconductor quantum dots (QDs). Recently, bilayer graphene has emerged as a promising platform to host highly tunable QDs, potentially useful for hosting spin and valley qubits. So far, no coherent oscillations have been observed in this system and little is known about charge noise in this material. Here, we report coherent charge oscillations and $T_2^*$ charge decoherence times in a bilayer graphene double QD. The charge decoherence times are measured independently using LZSM interference and photon-assisted tunneling. Both techniques yield $T_2^*$ average values in the range of 400 to 500 ps. The observation of charge coherence allows the origin and spectral distribution of charge noise in this material to be studied in future experiments.
A hybrid system established by the direct interaction between a magnon mode and a superconducting transmon qubit is used to realize a high-degree magnon blockade. This is a fundamental route toward quantum manipulation at the single-magnon level and the preparation of single-magnon sources. By weakly driving the magnon and probing the qubit, our magnon-blockade proposal can be optimized when the transversal coupling strength between the magnon and the qubit equals the detuning between the qubit and the probe field, or that between the magnon and the drive field. Under this condition, the equal-time second-order correlation function $g^{(2)}(0)$ can be analytically minimized when the probe intensity is about three times the drive intensity. Moreover, the magnon blockade can be further enhanced by a proper choice of drive intensity and system decay rate, whose magnitudes go beyond those of current cavity-QED and cavity-optomechanics systems. In particular, the correlation function reaches $g^{(2)}(0)\sim10^{-7}$, about two orders of magnitude lower than that for the photon blockade in cavity optomechanics. We also discuss the effects on $g^{(2)}(0)$ of thermal noise and of the extra longitudinal interaction between the magnon and the qubit. Our optimized blockade conditions are found to persist in these nonideal situations.
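For reference, with $m$ the magnon annihilation operator, the equal-time second-order correlation function quantifying the blockade is
$$ g^{(2)}(0)=\frac{\langle m^{\dagger}m^{\dagger}m\,m\rangle}{\langle m^{\dagger}m\rangle^{2}}, $$
with $g^{(2)}(0)\ll 1$ signalling strong magnon antibunching, i.e., blockade.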
We present a new framework for creating a quantum version of a classical game, based on Fine's theorem. This theorem shows that for a given set of marginals, a system of Bell's inequalities constitutes both necessary and sufficient conditions for the existence of the corresponding joint probability distribution. Using Fine's theorem, we re-express both the player payoffs and their strategies in terms of a set of marginals, thus paving the way for the consideration of sets of marginals -- corresponding to entangled quantum states -- for which no corresponding joint probability distribution may exist. By harnessing quantum states and employing Positive Operator-Valued Measures (POVMs), we then consider particular quantum states that can potentially resolve dilemmas inherent in classical games.
In this study, we explore the non-Markovian cost function for quantum error mitigation (QEM) and the representation of two-qubit operators using Dirac Gamma matrices, central to the structure of relativistic quantum mechanics. The primary focus of quantum computing research, particularly with noisy intermediate-scale quantum (NISQ) devices, is on reducing errors and decoherence for practical application. While much of the existing research concentrates on Markovian noise sources, the study of non-Markovian sources is crucial given their inevitable presence in most solid-state quantum computing devices. We introduce a non-Markovian model of quantum state evolution and a corresponding QEM cost function for NISQ devices, considering an environment typified by simple harmonic oscillators as a noise source. The Dirac Gamma matrices, integral to areas of physics like quantum field theory and supersymmetry, share a common algebraic structure with two-qubit gate operators. By representing the latter using Gamma matrices, we are able to more effectively analyze and manipulate these operators due to the distinct properties of Gamma matrices. We evaluate the fluctuations of the output quantum state for identity and SWAP gate operations in two-qubit operations across various input states. By comparing these results with experimental data from ion-trap and superconducting quantum computing systems, we estimate the key parameters of the QEM cost functions. Our results reveal that as the coupling strength between the quantum system and its environment increases, so does the QEM cost function. This study underscores the importance of non-Markovian models for understanding quantum state evolution and the practical implications of the QEM cost function when assessing experimental results from NISQ devices.
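For reference, the Dirac Gamma matrices satisfy the Clifford algebra
$$ \{\gamma^{\mu},\gamma^{\nu}\}=2\,\eta^{\mu\nu}\,\mathbb{1}_{4},\qquad \eta=\mathrm{diag}(+1,-1,-1,-1), $$
and, being $4\times 4$, act on a space of the same dimension as two-qubit operators, which is the dimensional match exploited in the representation described above.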
Quantum measurement is one of the most fascinating and most discussed phenomena in quantum physics, owing to the impact of the measurement action on the system and the resulting interpretation issues. Weak measurements were proposed to amplify measured signals by exploiting a quantity called a weak value, but also to overcome philosophical difficulties related to the system perturbation induced by the measurement process. The method finds many applications and raises many philosophical questions as well, especially about the proper interpretation of the observations. In this paper, we show that any weak value can be expressed as the expectation value of a suitable non-normal operator. We propose a preliminary explanation of the anomalous and amplification behavior of weak values based on the theory of non-normal matrices: the weak value differs from an eigenvalue precisely when the operator involved in the expectation value is non-normal. Our study paves the way for a deeper understanding of the measurement phenomenon, helps the design of experiments, and is a call for collaboration between researchers in both fields to unravel new quantum phenomena induced by non-normality.
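For reference, the weak value of an observable $A$ with pre-selected state $\vert\psi_i\rangle$ and post-selected state $\vert\phi_f\rangle$ is
$$ A_{w}=\frac{\langle\phi_f\vert A\vert\psi_i\rangle}{\langle\phi_f\vert\psi_i\rangle}, $$
a generally complex number that can lie far outside the spectrum of $A$; this anomalous regime is the one exploited for amplification.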
Secure communication over long distances is one of the major problems of modern informatics. Classical transmissions are recognized to be vulnerable to quantum computer attacks. Remarkably, the same quantum mechanics that engenders quantum computers offers guaranteed protection against such attacks via quantum key distribution (QKD). Yet, long-distance transmission is problematic since the unavoidable signal decay in optical channels occurs at a distance of about a hundred kilometers. We propose to resolve this problem with a QKD protocol, further referred to as the Terra Quantum QKD protocol (TQ-QKD protocol). In our protocol, we use semiclassical pulses containing enough photons for random bit encoding and exploit erbium amplifiers to relay the photon pulses, while at the same time ensuring that at the chosen pulse intensity only a few photons could leak outside the channel, even over distances of about a hundred meters. As a result, an eavesdropper will not be able to efficiently utilize the lost part of the signal. The central component of the TQ-QKD protocol is the end-to-end loss control of the fiber-optic communication line, since optical losses could in principle be used by the eavesdropper to obtain the transmitted information. However, our control precision is such that if the degree of the leak is below the detectable level, then the leaking states are quantum, since they contain only a few photons. Therefore, the parts of the bit-encoding states representing `0' and `1' that are available to the eavesdropper are nearly indistinguishable. Our work presents an experimental demonstration of the TQ-QKD protocol allowing quantum key distribution over 1079 kilometers. Further refining the quality of the scheme's components will expand the attainable transmission distances. This paves the way for creating a secure global QKD network in the upcoming years.
We investigate the dynamics of entanglement between the system and the environment during thermalization of a noninteracting fermionic impurity coupled to a fermionic thermal bath. We show that transient entanglement can be observed even in the weak coupling regime, when the reduced dynamics and thermodynamics of the system can be well described by an effectively classical and Markovian master equation for the state populations. This entanglement vanishes for long times, but is preserved over timescales comparable to the relaxation time. Its magnitude depends only weakly on the system-environment coupling but instead strongly on the purity of the initial state of the system. We relate the presence of such transient entanglement to the unitary character of the system-bath dynamics underlying the reduced Markovian description.
Information processing, quantum or classical, relies on channels transforming multiple input states to different corresponding outputs. Previous research has established bounds on the thermodynamic resources required for such operations, but no protocols have been specified for their optimal implementation. For the insightful case of qubits, we here develop explicit protocols to transform multiple states in an energetically optimal manner. We first prove conditions on the feasibility of carrying out such transformations at all, and then quantify the achievable work extraction. Our results uncover a fundamental incompatibility between the thermodynamic ideal of slow, quasistatic processes and the information-theoretic requirement to preserve distinguishability between different possible output states.
We investigate the behavior of entanglement between a single fermionic level and a fermionic bath in three distinct thermodynamic regimes. First, in thermal equilibrium, we analyze the dependence of entanglement on the considered statistical ensemble: for the grand canonical state, it is generated only for a sufficiently strong system-bath coupling, whereas it is present for arbitrarily weak couplings for the canonical state with a fixed particle number. The threshold coupling strength, at which entanglement appears, is shown to strongly depend on the bath bandwidth. Second, we consider the relaxation to equilibrium. In this case a transient entanglement in a certain time interval can be observed even in the weak-coupling regime, when the reduced dynamics and thermodynamics of the system can be well described by an effectively classical and Markovian master equation for the state populations. At strong coupling strengths, entanglement is preserved for long times and converges to its equilibrium value. Finally, in voltage-driven junctions, a steady-state entanglement is generated for arbitrarily weak system-bath couplings at a certain threshold voltage. It is enhanced in the strong-coupling regime, and it is reduced by either the particle-hole or the tunnel coupling asymmetry.
I propose a version of quantum mechanics featuring a discrete and finite number of states that is plausibly a model of the real world. The model is based on standard unitary quantum theory of a closed system with a finite-dimensional Hilbert space. Given certain simple conditions on the spectrum of the Hamiltonian, Schr\"odinger evolution is periodic, and it is straightforward to replace continuous time with a discrete version, with the result that the system only visits a discrete and finite set of state vectors. The biggest challenges to the viability of such a model come from cosmological considerations. The theory may have implications for questions of mathematical realism and finitism.
An Ising model is defined by an objective function that is a quadratic form in qubit (spin) variables. The Ising problem is to determine the values of the variables that minimize the objective function, and many optimization problems can be reduced to this problem. In this paper, we focus on optimization problems related to permutations, where the goal is to find the optimal permutation out of the $n!$ possible permutations of $n$ elements. To represent these problems as Ising models, a commonly employed approach is to use a kernel that utilizes one-hot encoding to find any one of the $n!$ permutations as the optimal solution. However, this kernel contains a large number of quadratic terms and high absolute coefficient values. The main contribution of this paper is the introduction of a novel permutation encoding technique called dual-matrix domain-wall, which significantly reduces the number of quadratic terms and the maximum absolute coefficient values in the kernel. Surprisingly, our dual-matrix domain-wall encoding reduces the quadratic term count and maximum absolute coefficient values from $n^3-n^2$ and $2n-4$ to $6n^2-12n+4$ and $2$, respectively. We also demonstrate the applicability of our encoding technique to partial permutations and Quadratic Unconstrained Binary Optimization (QUBO) models. Furthermore, we discuss a family of permutation problems that can be efficiently implemented using Ising/QUBO models with our dual-matrix domain-wall encoding.
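For comparison, the conventional one-hot kernel (the baseline whose $n^{3}-n^{2}$ quadratic terms the new encoding avoids, not the dual-matrix domain-wall encoding itself) can be sketched as follows; the penalty weight and the coefficient-dictionary format are assumptions made for the example.

```python
# One-hot permutation kernel as a QUBO: penalty sum over rows and columns of
# (sum_j x[i][j] - 1)^2, enforcing that the binary matrix x is a permutation matrix.
from itertools import combinations

def one_hot_permutation_qubo(n, penalty=1.0):
    """Return (linear, quadratic) QUBO coefficients of the one-hot permutation kernel."""
    linear, quadratic = {}, {}
    groups = [[(i, j) for j in range(n)] for i in range(n)]   # each row sums to 1
    groups += [[(i, j) for i in range(n)] for j in range(n)]  # each column sums to 1
    for group in groups:
        for v in group:                      # expanding (sum x - 1)^2 gives -x per variable
            linear[v] = linear.get(v, 0.0) - penalty
        for u, v in combinations(group, 2):  # and +2 x_u x_v for each pair in the group
            quadratic[(u, v)] = quadratic.get((u, v), 0.0) + 2 * penalty
    return linear, quadratic

lin, quad = one_hot_permutation_qubo(5)
print(len(quad))   # n^3 - n^2 = 100 quadratic terms for n = 5
```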
We present a simple proof of a sufficient condition for the uniqueness of non-equilibrium steady states of Gorini-Kossakowski-Sudarshan-Lindblad equations. We demonstrate the applications of the sufficient condition using examples of the transverse-field Ising model, the XYZ model, and the tight-binding model with dephasing.
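For reference, the GKSL (Lindblad) equation whose non-equilibrium steady states ($\dot{\rho}_{\mathrm{ss}}=0$) are at issue reads
$$ \dot{\rho}=-i[H,\rho]+\sum_{k}\left(L_{k}\rho L_{k}^{\dagger}-\tfrac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right), $$
with Hamiltonian $H$ and jump operators $L_{k}$ (here $\hbar=1$).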
An important problem in quantum computation is the generation of single-qubit quantum gates such as the Hadamard ($H$) and $\pi/8$ ($T$) gates, which are components of a universal set of gates. Qubits in experimental realizations of quantum computing devices interact with their environment. While the environment is often considered as an obstacle leading to a decrease of the gate fidelity, in some cases it can be used as a resource. Here we consider the problem of optimal generation of the $H$ and $T$ gates using coherent control and the environment as a resource acting on the qubit via incoherent control. For this problem, we study the quantum control landscape, which represents the behaviour of the infidelity as a functional of the controls. We consider three landscapes, with infidelities defined by steering between two, three (via the Goerz-Reich-Koch approach), and four matrices in the qubit Hilbert space. We observe that for the $H$ gate, which is a Clifford gate, the distributions of minimal values obtained with gradient search have a simple single-peak form for all three infidelities. However, for the $T$ gate, which is a non-Clifford gate, the situation is surprisingly different: the distribution for the infidelity defined by two matrices also has one peak, whereas the distributions for the infidelities defined by three and four matrices have two peaks, which might indicate the existence of two isolated minima in the control landscape. Importantly, among these three infidelities only those defined with three and four matrices guarantee closeness of the generated gate to the target and can be used as a good measure of closeness. We study sets of optimized solutions for this most general, previously untreated case of coherent and incoherent controls acting together, and discover that they form submanifolds in the control space and, unexpectedly, in some cases two isolated submanifolds.
Superconducting quantum processors have made significant progress in size and computing potential. As a result, the practical cryogenic limitations of operating large numbers of superconducting qubits are becoming a bottleneck for further scaling. Due to the low thermal conductivity and the dense optical multiplexing capacity of telecommunications fiber, converting qubit signal processing to the optical domain using microwave-to-optics transduction would significantly relax the strain on cryogenic space and thermal budgets. Here, we demonstrate high-fidelity multi-shot optical readout through an optical fiber of a superconducting transmon qubit connected via a coaxial cable to a fully integrated piezo-optomechanical transducer. Using a demolition readout technique, we achieve a multi-shot readout fidelity of >0.99 at 6 $\mu$W of optical power transmitted into the cryostat with as few as 200 averages, without the use of a quantum-limited amplifier. With improved frequency matching between the transducer and the qubit readout resonator, we anticipate that single-shot optical readout is achievable. Due to the small footprint (<0.15mm$^2$) and the modular fiber-based architecture, this device platform has the potential to scale towards use with thousands of qubits. Our results illustrate the potential of piezo-optomechanical transduction for low-dissipation operation of large quantum processors.
The $p$-adic unitary operator $U$ is defined as an invertible operator on a $p$-adic ultrametric Banach space such that $\left |U\right |=\left |U^{-1}\right |=1$. We point out that $U$ has a spectral measure valued in $\textbf{projection functors}$, which can be explained as measure theory on the formal group scheme. The spectral decomposition of $U$ is complete when $\psi$ is a $p$-adic wave function. We study $\textbf{the Galois theory of operators}$. The abelian extension theory of $\mathbb{Q}_p$ is connected to the topological properties of the $p$-adic unitary operator. We classify the $p$-adic unitary operators into three types: $\textbf{Teichm\"uller type}$, $\textbf{continuous type}$, and $\textbf{pro-finite type}$. Finally, we establish a $\textbf{framework of $p$-adic quantum mechanics}$, where the projection functor plays the role of quantum measurement.
Uniform continuity bounds on entropies are generally expressed in terms of a single distance measure between a pair of probability distributions or quantum states, typically, the total variation distance or trace distance. However, if an additional distance measure between the probability distributions or states is known, then the continuity bounds can be significantly strengthened. Here, we prove a tight uniform continuity bound for the Shannon entropy in terms of both the local- and total variation distances, sharpening an inequality proven in [I. Sason, IEEE Trans. Inf. Th., 59, 7118 (2013)]. We also obtain a uniform continuity bound for the von Neumann entropy in terms of both the operator norm- and trace distances. The bound is tight when the quotient of the trace distance by the operator norm distance is an integer. We then apply our results to compute upper bounds on the quantum- and private classical capacities of channels. We begin by refining the concept of approximate degradable channels, namely, $\varepsilon$-degradable channels, which are, by definition, $\varepsilon$-close in diamond norm to their complementary channel when composed with a degrading channel. To this end, we introduce the notion of $(\varepsilon,\nu)$-degradable channels; these are $\varepsilon$-degradable channels that are, in addition, $\nu$-close in completely bounded spectral norm to their complementary channel, when composed with the same degrading channel. This allows us to derive improved upper bounds to the quantum- and private classical capacities of such channels. Moreover, these bounds can be further improved by considering certain unstabilized versions of the above norms. We show that upper bounds on the latter can be efficiently expressed as semidefinite programs. We illustrate our results by obtaining a new upper bound on the quantum capacity of the qubit depolarizing channel.
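For orientation, the standard single-distance baseline that such two-distance results refine is the Audenaert-Fannes bound: for states $\rho,\sigma$ on a $d$-dimensional space with trace distance $T=\tfrac{1}{2}\Vert\rho-\sigma\Vert_{1}\le 1-1/d$,
$$ \vert S(\rho)-S(\sigma)\vert\le T\log(d-1)+h(T),\qquad h(T)=-T\log T-(1-T)\log(1-T). $$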
The eigenstate thermalization hypothesis (ETH) is a successful theory that establishes the criteria for ergodicity and thermalization in isolated quantum many-body systems. In this work, we investigate the thermalization properties of the spin-$1/2$ XXZ chain with linearly inhomogeneous interactions. We demonstrate that the introduction of the inhomogeneous interactions leads to an onset of quantum chaos and thermalization, which, however, becomes inhibited for sufficiently strong inhomogeneity. To exhibit ETH, and to display its breakdown upon varying the strength of the interactions, we probe the statistics of energy levels and the properties of matrix elements of local observables in eigenstates of the inhomogeneous XXZ spin chain. Moreover, we investigate the dynamics of the entanglement entropy and the survival probability, which further evidence the thermalization and its breakdown in the considered model. We outline a way to experimentally realize the XXZ chain with linearly inhomogeneous interactions in systems of ultracold atoms. Our results highlight a mechanism for the emergence of ETH due to the insertion of inhomogeneities in an otherwise integrable system and illustrate the arrest of quantum dynamics in the presence of strong interactions.
Optical excitations in moir\'e transition metal dichalcogenide bilayers lead to the creation of excitons, as electron-hole bound states, that are generically considered within a Bose-Hubbard framework. Here, we demonstrate that these composite particles obey an angular momentum commutation relation that is generally non-bosonic. This emergent spin description of excitons indicates a limitation to their occupancy on each site, which is substantial in the weak electron-hole binding regime. The effective exciton theory is accordingly a spin Hamiltonian, which further becomes a Hubbard model of emergent bosons subject to an occupancy constraint after a Holstein-Primakoff transformation. We apply our theory to three commonly studied bilayers (MoSe$_2$/WSe$_2$, WSe$_2$/WS$_2$, and WSe$_2$/MoS$_2$) and show that in the relevant parameter regimes their allowed occupancies never exceed three excitons. Our systematic theory provides guidelines for future research on the many-body physics of moir\'e excitons.
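For reference, in one common convention the Holstein-Primakoff transformation invoked here maps spin-$S$ operators to bosons $b$ as
$$ S^{+}=\sqrt{2S-b^{\dagger}b}\;b,\qquad S^{-}=b^{\dagger}\sqrt{2S-b^{\dagger}b},\qquad S^{z}=S-b^{\dagger}b, $$
so that the emergent bosons inherit the occupancy constraint $b^{\dagger}b\le 2S$ per site.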