Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-14 11:30 to 2023-11-17 12:30 | Next meeting is Friday Nov 1st, 11:30 am.
In nonminimally coupled theories where a scalar field is coupled to the Ricci scalar, neutron stars (NSs) can acquire scalar charges through an interaction with matter mediated by gravity. On the other hand, the same theories do not give rise to hairy black hole (BH) solutions. Observations of gravitational waves (GWs) emitted from an inspiralling NS-BH binary system therefore offer the possibility of constraining the NS scalar charge. Moreover, the nonminimally coupled scalar-tensor theories generate a breathing scalar mode besides the two tensor polarizations. Using the GW200115 data of the coalescence of a BH-NS binary, we place observational constraints on the NS scalar charge as well as on the nonminimal coupling strength for a subclass of massless Horndeski theories with luminal GW propagation. Unlike past related works, we exploit a waveform for a mixture of tensor and scalar polarizations. Taking the breathing mode into account, the scalar charge is constrained more tightly than in an analysis of the tensor GWs alone. In nonminimally coupled theories including Brans-Dicke gravity and spontaneous scalarization scenarios with/without a kinetic screening, we put new bounds on the model parameters of each theory.
The main purpose of this work is to investigate the properties of the non-thermal emission in the interacting cluster pairs Abell 0399-Abell 0401 and Abell 21-PSZ2 G114.9. In both cases, their connection along a filament is supported by the SZ effect detected by the Planck satellite, and in the case of Abell 0399-Abell 0401 the presence of a radio bridge has already been confirmed by LOFAR observations at 140 MHz. Here, we analyse new high-sensitivity wideband (250-500 MHz) uGMRT data of these two systems and describe an injection procedure to place limits on the spectrum of the Abell 0399-Abell 0401 bridge and on the radio emission between Abell 21 and PSZ2 G114.9. For the A399-A401 pair, we constrain the steep spectral index of the bridge emission to alpha>2.2 at 95% confidence level between 140 MHz and 400 MHz. For the A21-PSZ2 G114.9 pair, we place an upper limit on the flux density of the bridge emission with two different methods, finding at the central frequency of 383 MHz a conservative value of fu_1<260 mJy at 95% confidence level and a lower value of fu_2<125 mJy at 80% confidence level, based on visual inspection and a morphological criterion. Our work provides a constraint on the spectrum of the A399-A401 bridge which disfavours shock acceleration as the main mechanism for the radio emission.
When a galaxy falls into a cluster, its outermost parts are the most affected by the environment. In this paper, we study the influence of a dense environment on different galaxy components to better understand how this affects the evolution of galaxies. As a laboratory for this study, we use the Hydra cluster, which is close to virialization yet still shows evidence of substructures. We present a multi-wavelength bulge-disc decomposition performed simultaneously in 12 bands from S-PLUS data for 52 galaxies brighter than m$_{r}$= 16. We model the galaxies with a Sersic profile for the bulge and an exponential profile for the disc. We find that smaller, more compact, bulge-dominated galaxies tend to exhibit redder colours at a fixed stellar mass. This suggests that the same mechanisms (ram-pressure stripping and tidal stripping) that cause the compaction of these galaxies also cause them to stop forming stars. The bulge size is unrelated to the galaxy's stellar mass, while the disc size increases with stellar mass, indicating the dominant role of the disc in the overall galaxy mass-size relation we find. Furthermore, our analysis of the environment reveals that quenched galaxies are prevalent in regions likely associated with substructures. However, these areas also harbour a minority of star-forming galaxies, primarily resulting from galaxy interactions. Lastly, we find that ~37 percent of the galaxies exhibit bulges that are bluer than their discs, indicative of an outside-in quenching process in this type of dense environment.
In this work we study the early-time behavior and the background evolution of ultra-light vector dark matter. We present a model for vector dark matter in an anisotropic Bianchi type I universe. Vector fields source anisotropies in the early universe, characterized by a shear tensor which rapidly decays once the fields start oscillating, making them viable dark matter candidates. We present the set of equations needed to evolve scalar cosmological perturbations in the linear regime, in both the synchronous and Newtonian gauges. We show that the shear tensor has to be taken into account in the calculation of adiabatic initial conditions.
We present a detailed analysis of a new, iterative density reconstruction algorithm. This algorithm uses a decreasing smoothing scale to better reconstruct the density field in Lagrangian space. We implement this algorithm to run on the Quijote simulations, and extend it to (a) include a smoothing kernel that transitions smoothly from anisotropic to isotropic, and (b) include a variant that does not correct for redshift-space distortions. We compare the performance of this algorithm with the standard reconstruction method. Our examination of the methods includes cross-correlation of the reconstructed density field with the linear density field, reconstructed two-point functions, and BAO parameter fitting. We also examine the impact of various parameters, such as the smoothing scale, anisotropic smoothing, tracer type/bias, and the inclusion of second-order perturbation theory. We find that the two reconstruction algorithms are comparable in most of the areas we examine. In particular, both algorithms give consistent fits of the BAO parameters, and the fits are robust over a range of smoothing scales. We find the iterative algorithm is significantly better at removing redshift-space distortions. The new algorithm is thus a promising method for ongoing and future large-scale structure surveys.
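The annealing idea in this abstract can be sketched in a few lines: estimate linear (Zel'dovich-type) displacements from the density field, then re-estimate from the residual with a progressively smaller Gaussian smoothing scale. This is a minimal illustration under simplifying assumptions (a purely linear inversion, an isotropic kernel, an arbitrary smoothing schedule), not the authors' pipeline.

```python
import numpy as np

def iterative_displacement(delta, box, r_start=10.0, r_end=2.5, n_iter=4):
    """Annealed reconstruction sketch: accumulate Zel'dovich displacements
    psi(k) = i k delta(k)/k^2, re-estimated each pass from the residual
    field with a Gaussian smoothing scale that shrinks per iteration."""
    n = delta.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid division by zero for the k=0 mode
    psi = np.zeros((3,) + delta.shape)
    for r in np.geomspace(r_start, r_end, n_iter):
        # linear theory: delta = -div(psi); the residual is what psi misses so far
        div_psi = sum(
            np.real(np.fft.ifftn(1j * ki * np.fft.fftn(psi[i])))
            for i, ki in enumerate((kx, ky, kz))
        )
        resid_k = np.fft.fftn(delta + div_psi) * np.exp(-0.5 * k2 * r**2)
        for i, ki in enumerate((kx, ky, kz)):
            psi[i] += np.real(np.fft.ifftn(1j * ki / k2 * resid_k))
    return psi
```

Each pass explains more of the small-scale density as the kernel sharpens; the paper's anisotropic-to-isotropic kernel and redshift-space handling would replace the simple Gaussian used here.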
The overabundance of red and massive candidate galaxies observed by the James Webb Space Telescope (JWST) implies efficient structure formation or a large star formation efficiency at high redshift $z\sim 10$. In the scenario of a low star formation efficiency, because massive neutrinos tend to suppress the growth of structure, the JWST observation tightens the upper bound on the neutrino masses. Assuming $\Lambda$ cold dark matter cosmology and a star formation efficiency $\epsilon \lesssim 0.1$, we perform joint analyses of Planck+JWST and Planck+BAO+JWST, and obtain improved constraints $\sum m_\nu<0.214\,\mathrm{eV}$ and $\sum m_\nu < 0.114\,\mathrm{eV}$ at 95% confidence level, respectively. The inverted mass ordering, which implies $\sum m_\nu\geq 0.1\,\mathrm{eV}$, is excluded by Planck+BAO+JWST at 92% confidence level.
We present the first detailed analysis of the ultra-steep spectrum radio halo in the merging galaxy cluster Abell 521, based on upgraded Giant Metrewave Radio Telescope (uGMRT) observations. The combination of radio observations (300-850 MHz) and archival X-ray data provides a new window into the complex physics occurring in this system. Compared to all previous analyses, our sensitive radio images detect the centrally located radio halo emission to a greater extent of $\sim$ 1.3 Mpc. A faint extension of the southeastern radio relic has been discovered. We also detect another relic, recently discovered by MeerKAT and coincident with a possible shock front in the X-rays, northwest of the center. We find that the integrated spectrum of the radio halo is well fitted by a power law with a spectral index of $-1.86 \pm 0.12$. A spatially resolved spectral index map reveals spectral index fluctuations, as well as an outward radial steepening of the average spectral index. The radio and X-ray surface brightnesses are well correlated for the entire halo and for different sub-parts of it, with sub-linear correlation slopes (0.50$-$0.65). We also find a mild anti-correlation between the spectral index and the X-ray surface brightness. The newly detected extensions of the SE relic and the counter relic are consistent with a merger in the plane of the sky.
We use a large set of halo mass function (HMF) models in order to investigate their ability to represent the observational Cluster Mass Function (CMF), derived from the $\mathtt{GalWCat19}$ cluster catalogue, within the $\Lambda$CDM cosmology. We apply the $\chi^2$ minimization procedure to constrain the free parameters of the models, namely $\Omega_m$ and $\sigma_8$. We find that all HMF models fit the observational CMF well, while the Bocquet et al. model provides the best fit, with the lowest $\chi^2$ value. Utilizing the {\em Index of Inconsistency} (IOI) measure, we further test the possible inconsistency of the models with respect to a variety of {\em Planck 2018} $\Lambda$CDM cosmologies, resulting from the combination of different probes (CMB - BAO or CMB - DES). We find that the HMF models that fit the observed CMF well provide cosmological parameters consistent with those of the {\em Planck} CMB analysis, except for the Press $\&$ Schechter, Yahagi et al., and Despali et al. models, which return large IOI values. The inverse $\chi_{\rm min}^2$-weighted average values of $\Omega_m$ and $\sigma_8$ over all 23 theoretical HMF models are ${\bar \Omega_{m,0}}=0.313\pm 0.022$ and ${\bar \sigma_8}=0.798\pm0.040$, which are clearly consistent with the results of {\em Planck}-CMB, giving $S_8=\sigma_8\left(\Omega_m/0.3\right)^{1/2}= 0.815\pm 0.05$. Within the $\Lambda$CDM paradigm and independently of the HMF model selected in the analysis, we find that the current CMF shows no $\sigma_8$ tension with the corresponding {\em Planck}-CMB results.
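As a quick consistency check, the quoted $S_8$ follows directly from the weighted-average values given in the abstract (a trivial numerical check, not part of the paper's analysis):

```python
import math

# weighted averages quoted in the abstract
sigma8_bar, omega_m_bar = 0.798, 0.313

# S_8 = sigma_8 * (Omega_m / 0.3)^(1/2)
s8 = sigma8_bar * math.sqrt(omega_m_bar / 0.3)
print(round(s8, 3))  # 0.815, matching the quoted value
```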
Exclusion zones in the cross-correlations between critical points (peak-void, peak-wall, filament-wall, filament-void) of the density field define quasi-standard rulers that can be used to constrain dark matter and dark energy cosmological parameters. The average size of the exclusion zone is found to scale linearly with the typical distance between extrema. The latter changes as a function of the matter content of the universe in a predictable manner, but its comoving size remains essentially constant in the linear regime of structure growth on large scales, unless an incorrect cosmology is assumed in the redshift-distance relation. This can be used to constrain the dark energy parameters when considering a survey that scans a range of redshifts. The precision of the parameter estimation is assessed using a set of cosmological simulations: we find a 4$\sigma$ detection of a 5% change in matter content, or about a 3.8$\sigma$ detection of a 50% shift in the dark energy parameter, using a full-sky survey up to redshift 0.5.
We look for signatures of the Hu-Sawicki f(R) modified gravity theory, proposed to explain the observed accelerated expansion of the universe, in observations of the galaxy distribution, the cosmic microwave background (CMB), and gravitational lensing of the CMB. We study constraints obtained by using observations of the CMB primary anisotropies alone, before adding the galaxy power spectrum and its cross-correlation with CMB lensing. We show that cross-correlation of the galaxy distribution with lensing measurements is crucial to breaking parameter degeneracies, placing tighter constraints on the model. In particular, we set a strong upper limit of $\log{\lvert f_{R_0}\rvert }<-4.61$ at 95% confidence level. This means that while the model may explain the accelerated expansion, its impact on large-scale structure closely resembles that of General Relativity. Studies of this kind with future data sets will probe smaller potential deviations from General Relativity.
In this work, we study the evolution of Betti curves obtained by persistent-homology analysis of point clouds formed by halos in different cosmological $N$-body simulations. We show that they can be approximated by a scaled log-normal distribution function with reasonable precision. Our analysis shows that the shapes and maxima of the Betti curves depend on the mass range of the selected subpopulation of halos, while the resolution of a simulation does not play any significant role, provided that the mass distribution of simulated halos is complete down to a given mass scale. In addition, we study how Betti curves change with the evolution of the Universe, i.e., their dependence on redshift. Sampling subpopulations of halos within certain mass ranges up to redshift $z=2.5$ yields a surprisingly small difference between the corresponding Betti curves. We propose that this may be an indicator of the existence of a new specific topological invariant in the structure of the Universe.
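The scaled log-normal approximation of a Betti curve can be illustrated with a simple least-squares fit. The functional form, parameter values, and synthetic "Betti curve" below are assumptions for the sketch, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaled_lognormal(x, amp, mu, sigma):
    """Scaled log-normal profile: amp times the log-normal density LN(x; mu, sigma)."""
    return amp * np.exp(-0.5 * ((np.log(x) - mu) / sigma) ** 2) / (x * sigma * np.sqrt(2 * np.pi))

# synthetic "Betti curve": number of k-cycles vs. filtration scale
scale = np.linspace(0.1, 10.0, 200)
betti_true = scaled_lognormal(scale, 500.0, 0.8, 0.5)
rng = np.random.default_rng(42)
betti_noisy = betti_true + rng.normal(0.0, 1.0, scale.size)

# recover (amp, mu, sigma) from the noisy curve
popt, pcov = curve_fit(scaled_lognormal, scale, betti_noisy, p0=[300.0, 0.5, 0.7])
```

With low noise the three parameters are recovered to within a few percent, which is the sense in which a scaled log-normal "approximates" such a curve.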
The Next Generation Very Large Array (ngVLA) is a planned radio interferometer providing unprecedented sensitivity at wavelengths between 21 cm and 3 mm. Its array of 263 antenna elements will be spatially distributed across North America to enable both superb low-surface-brightness recovery and sub-milliarcsecond angular resolution imaging. The project was developed by the international astronomy community under the lead of the National Radio Astronomy Observatory (NRAO), and is anticipated to be built between 2027 and 2037. Two workshops were held in 2022 and 2023 with the goal of discussing and consolidating the scientific interests in the ngVLA within the German astronomical community. This community paper constitutes a collection of 41 science ideas which the German community aims to pursue with the ngVLA in the 2030s. This is not a complete list, and the ideas are not developed at the level of a "Science Book"; the present document is therefore mainly to be considered a "living document" that provides a basis for further discussion within the community. As such, additional contributions are welcome and will be considered for inclusion in future revisions.
The warm-hot plasma in cosmic web filaments is thought to comprise a large fraction of the gas in the local Universe. So far, the search for this gas has focused on mapping its emission, or detecting its absorption signatures against bright, point-like sources. Future, non-dispersive, high spectral resolution X-ray detectors will, for the first time, enable absorption studies against extended objects. Here, we use the Hydrangea cosmological hydrodynamical simulations to predict the expected properties of intergalactic gas in and around massive galaxy clusters, and investigate the prospects of detecting it in absorption against the bright cores of nearby, massive, relaxed galaxy clusters. We probe a total of $138$ projections from the simulation volumes, finding $16$ directions with a total column density $N_{O VII} > 10^{14.5}$ cm$^{-2}$. The strongest absorbers are typically shifted by $\pm 1000$ km/s with respect to the rest frame of the cluster they are nearest to. Realistic mock observations with future micro-calorimeters, such as the Athena X-ray Integral Field Unit or the proposed Line Emission Mapper (LEM) X-ray probe, show that the detection of cosmic web filaments in O VII and O VIII absorption against galaxy cluster cores will be feasible. An O VII detection with a $5\sigma$ significance can be achieved in $10-250$ ks with Athena for most of the galaxy clusters considered. The O VIII detection becomes feasible only with a spectral resolution of around $1$ eV, comparable to that envisioned for LEM.
Primordial non-Gaussianity of the local type induces a strong scale-dependent bias in the clustering of halos in the late-time Universe. This signature is particularly promising for providing constraints on the non-Gaussianity parameter $f_{\rm NL}$ from galaxy surveys, as the bias amplitude grows with scale and becomes important on large, linear scales. However, there is a well-known degeneracy between the real prize, the $f_{\rm NL}$ parameter, and the (non-Gaussian) assembly bias, i.e., the halo formation history-dependent contribution to the amplitude of the signal, which could seriously compromise the ability of large-scale structure surveys to constrain $f_{\rm NL}$. We show how the assembly bias can be modeled and constrained, thus almost completely recovering the power of galaxy surveys to competitively constrain primordial non-Gaussianity. In particular, studying hydrodynamical simulations, we find that a proxy for the halo properties that determine assembly bias can be constructed from the photometric properties of galaxies. Using a prior on the assembly bias guided by this proxy degrades the statistical errors on $f_{\rm NL}$ only mildly compared to an ideal case where the assembly bias is perfectly known. The systematic error on $f_{\rm NL}$ that the proxy induces can be safely kept under control.
Cosmological weak lensing measurements rely on a precise measurement of the shear two-point correlation function (2PCF), along with a deep understanding of the systematics that affect it. In this work, we demonstrate a general framework for detecting and modeling the impact of PSF systematics on the cosmic shear 2PCF and mitigating its impact on cosmological analysis. Our framework can describe leakage and modeling error from all spin-2 quantities contributed by the PSF second and higher moments, rather than just the second moments, using the cross-correlations between galaxy shapes and PSF moments. We interpret null tests using the HSC Year 3 (Y3) catalogs with this formalism, and find that leakage from the spin-2 combination of PSF fourth moments is the leading contributor to additive shear systematics, with a total contamination that is an order of magnitude higher than that contributed by PSF second moments alone. We conduct a mock cosmic shear analysis for HSC Y3 and find that, if uncorrected, PSF systematics can bias the cosmological parameters $\Omega_m$ and $S_8$ by $\sim$0.3$\sigma$. The traditional second-moment-based model can only correct for a 0.1$\sigma$ bias, leaving the contamination largely uncorrected. We conclude that it is necessary to model both PSF second- and fourth-moment contamination for the HSC Y3 cosmic shear analysis. We also reanalyze the HSC Y1 cosmic shear data with our updated systematics model, and identify a 0.07$\sigma$ bias on $\Omega_m$ when using the more restricted second-moment model from the original analysis. We demonstrate how to self-consistently use the method in both real space and Fourier space, assess shear systematics in tomographic bins, and test for PSF model overfitting.
We use the Sloan Digital Sky Survey (SDSS) BOSS galaxies and their overlap with approximately 416 sq. degrees of deep $grizy$-band imaging from the Subaru Hyper Suprime-Cam survey (HSC). We measure three two-point correlations that form the basis of the cosmological inference presented in our companion papers, Miyatake et al. and Sugiyama et al. We use three approximately volume-limited subsamples of spectroscopic galaxies, selected by their $i$-band magnitude from SDSS-BOSS: LOWZ (0.1<z<0.35), CMASS1 (0.43<z<0.55), and CMASS2 (0.55<z<0.7). We present high signal-to-noise ratio measurements of the projected correlation functions of these galaxies, which are expected to be proportional to the matter correlation function times the galaxy bias on large scales. In order to break the degeneracy between the amplitude of the matter correlation and the bias of these galaxies, we use the distortions of the shapes of HSC galaxies due to weak gravitational lensing to measure the galaxy-galaxy lensing signal, which probes the galaxy-matter cross-correlation of the SDSS-BOSS galaxies. We also measure the cosmic shear correlation functions from HSC galaxies, which are related to the projected matter correlation function. We demonstrate the robustness of our measurements with a variety of systematic tests. Our use of a single sample of HSC source galaxies is crucial to calibrate any residual systematic biases in the inferred redshifts of our galaxies. We also describe the construction of a suite of mocks: i) spectroscopic galaxy catalogs which obey the clustering and abundance of each of the three SDSS-BOSS subsamples, and ii) galaxy shape catalogs which obey the footprint of the HSC survey and have been appropriately sheared by the large-scale structure expected in a $\Lambda$CDM model. We use these mock catalogs to compute the covariance of each of our observables.
The precise estimation of statistical errors and the accurate removal of systematic errors are the two major challenges for stage IV cosmic shear surveys. We explore their impact for the China Space-Station Telescope (CSST), with a survey area of $\sim17,500\deg^2$ extending up to redshift $\sim4$. We consider statistical errors contributed by the Gaussian covariance, the connected non-Gaussian covariance, and the super-sample covariance. We find that the non-Gaussian covariances, which are dominated by the super-sample covariance, can largely reduce the signal-to-noise of the two-point statistics for CSST, leading to a $\sim1/3$ loss in the figure-of-merit for the matter clustering properties ($\sigma_8-\Omega_m$ plane) and a $1/6$ loss for the dark energy equation of state ($w_0-w_a$ plane). We further place requirements on the mitigation of systematics, namely the intrinsic alignment of galaxies, baryonic feedback, shear multiplicative bias, and bias in the redshift distribution, for an unbiased cosmology. The $10^{-2}$ to $10^{-3}$ level requirements emphasize the strong need for related studies, to support future model selection and the associated priors for the nuisance parameters.
Dark energy is believed to be responsible for the accelerated expansion of the universe. In this paper, we reconstruct the dark energy scalar field potential $V(\phi)$ from the Hubble parameter H(z) through Gaussian process analysis. Our goal is to investigate dark energy using various H(z) datasets and priors. We find that the choice of prior and of the H(z) dataset significantly affects the reconstructed $V(\phi)$. We then compare two models, Power Law and Free Field, to the reconstructed $V(\phi)$ using the $\chi^2$ method. The results suggest that the models are generally in agreement with the reconstructed potential within a $3\sigma$ confidence interval, except in the case of Observational H(z) Data (OHD) with the Planck 18 (P18) prior. Additionally, we simulate H(z) data to measure the effect of increasing the number of data points on the accuracy of the reconstructed $V(\phi)$. We find that doubling the number of H(z) data points can improve the accuracy of the reconstructed $V(\phi)$ by 5$\%$ to 30$\%$.
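A Gaussian-process reconstruction of H(z) of the kind described can be sketched with a plain squared-exponential kernel and zero-mean prior. The data points, kernel hyperparameters, and noise levels below are made up for illustration; they are not the datasets or kernel choices used in the paper.

```python
import numpy as np

def gp_reconstruct(z_obs, h_obs, h_err, z_grid, sigma_f=100.0, ell=1.0):
    """Zero-mean GP regression with a squared-exponential kernel.
    Returns the posterior mean and standard deviation on z_grid."""
    def kern(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = kern(z_obs, z_obs) + np.diag(h_err**2)   # data covariance incl. noise
    Ks = kern(z_grid, z_obs)
    mean = Ks @ np.linalg.solve(K, h_obs)
    cov = kern(z_grid, z_grid) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# synthetic flat-LCDM H(z) data (H0 = 70, Om = 0.3), purely illustrative
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.2, 1.5, 2.0])
h_true = 70.0 * np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
rng = np.random.default_rng(7)
err = np.full(z.size, 3.0)
h_obs = h_true + rng.normal(0.0, err)

z_grid = np.linspace(0.0, 2.0, 41)
mean, std = gp_reconstruct(z, h_obs, err, z_grid)
```

The reconstructed mean curve (and its error band) is what would then feed the $V(\phi)$ reconstruction; the paper's sensitivity to the prior enters through the chosen H0 value and kernel hyperparameters.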
The scale-dependent bias of galaxy density contrasts is an important signal to be extracted in constraining local primordial non-Gaussianity ($f_{\rm NL}^{\text{local}}$) from observations of large-scale structure. Constraints so obtained rely on the assumption that horizon-scale features in the galaxy power spectrum are exclusively due to primordial physical mechanisms. Yet, post-inflationary effects can induce modulations to the galaxy number density that appear as horizon-scale, scale-dependent bias. We investigate the effect of two such sources of scale-dependent bias, the free-streaming of light relics and fluctuations in the background of ionising radiation, on precision measurements of local primordial non-Gaussianity $f_{\rm NL}^{\text{local}}$ from galaxy power spectrum measurements. Using the SPHEREx survey, which aims to reach $\sigma(f_{\rm NL}^{\rm local}) \lesssim 1$, as a test case, we show that ignoring the scale-dependent bias induced by free-streaming particles can negatively bias the inferred value of $f_{\rm NL}^{\rm local}$ by $\sim 0.1-0.3\sigma$. Ignoring the effect of ionising radiation fluctuations can negatively bias the inferred value of $f_{\rm NL}^{\rm local}$ by $ \sim 1\sigma$. The range of biases depends on the source populations and the ranges of scales used in the analysis, as well as the value of the neutrino mass and the modelling of the impact of ionising radiation. If these sources of scale-dependent bias are included in the analysis, forecasts for $f_{\rm NL}^{\rm local}$ are unbiased but degraded.
We perform a weak lensing mass mapping analysis to identify troughs, which are defined as local minima in the mass map. Since weak lensing probes the projected matter along the line of sight, these troughs can be produced by single voids or by multiple voids projected along the line of sight. To scrutinise the origins of the weak lensing troughs, we systematically investigate the line-of-sight structure of troughs selected from the latest Subaru Hyper Suprime-Cam (HSC) Year 3 weak lensing data covering $433.48 \, \mathrm{deg}^2$. From a curved-sky mass map constructed with the HSC data, we identify 15 troughs with a signal-to-noise ratio higher than $5.7$ and address their line-of-sight density structure utilizing the redshift distributions of two galaxy samples: photometric luminous red galaxies observed by HSC and spectroscopic galaxies detected by the Baryon Oscillation Spectroscopic Survey. While most of the weak lensing signals due to the troughs are explained by multiple voids aligned along the line of sight, we find that two of the 15 troughs potentially originate from single voids at redshift $\sim 0.3$. The single-void interpretation appears to be consistent with our three-dimensional mass mapping analysis. We argue that single voids can indeed reproduce the observed weak lensing signals at the troughs if these voids are not spherical but highly elongated along the line-of-sight direction.
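Trough identification of the kind described (local minima of a signal-to-noise map above a magnitude threshold) can be sketched with a minimum filter. The flat-sky pixel grid and sign convention here are illustrative simplifications of the curved-sky HEALPix analysis.

```python
import numpy as np
from scipy import ndimage

def find_troughs(snr_map, snr_cut=5.7, size=3):
    """Return pixel indices of local minima whose |S/N| exceeds snr_cut.
    Troughs are negative excursions, so we keep values below -snr_cut."""
    is_min = snr_map == ndimage.minimum_filter(snr_map, size=size)
    return np.argwhere(is_min & (snr_map < -snr_cut))
```

On the real curved-sky map the same logic would run over HEALPix neighbours rather than a square pixel window, but the selection (local minimum plus a significance cut) is unchanged.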
Recently, many works have tried to realize cosmological accelerated expansion in string theory models in the asymptotic regions of field space, with a typical scalar potential $V(\varphi)$ having an exponential fall-off $e^{-\gamma\, \varphi}$. Those attempts have been plagued by the fact that $V$ is too steep, namely $\gamma \geq 2/\sqrt{d-2}$ in a $d$-dimensional spacetime. We revisit the corresponding dynamical system for arbitrary $d$ and $\gamma$, and show that for an open universe ($k=-1$), there exists a new stable fixed point $P_1$ precisely if $\gamma > 2/\sqrt{d-2}$. Building on the recent work arXiv:2210.10813, we show in addition that cosmological solutions asymptoting to $P_1$ exhibit accelerated expansion in various fashions (semi-eternal, eternal, transient with parametrically controlled number of e-folds, or rollercoaster). We finally present realizations in string theory of these cosmological models with asymptotically accelerating solutions, for $d=4$ or $d=10$. We also show that these solutions do not admit a cosmological event horizon, and discuss the possibility of this being a generic feature of quantum gravity.
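The steepness bound that controls the existence of the new fixed point is worth making concrete; this is a trivial numerical illustration of the condition quoted in the abstract, nothing more.

```python
import math

def gamma_crit(d):
    """Critical steepness of V ~ exp(-gamma * phi): the stable open-universe
    (k = -1) fixed point P1 exists precisely for gamma > 2/sqrt(d-2)."""
    return 2.0 / math.sqrt(d - 2)

print(gamma_crit(4))   # sqrt(2) ~ 1.414 in four dimensions
print(gamma_crit(10))  # ~ 0.707 in ten dimensions
```

The bound relaxes with growing $d$, which is why the paper can present both $d=4$ and $d=10$ string-theory realizations of asymptotically accelerating solutions.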
Thanks to the MUSE integral field spectrograph on the VLT, extragalactic distance measurements with the [O III] 5007 A planetary nebula luminosity function (PNLF) are now possible out to approx. 40 Mpc. Here we analyze the VLT/MUSE data for 20 galaxies from the ESO public archive to identify the systems' planetary nebulae (PNe) and determine their PNLF distances. Three of the galaxies do not contain enough PNe for a robust measure of the PNLF, and the results for one other system are compromised by the galaxy's internal extinction. However, we obtain robust PNLF distances for the remaining 16 galaxies, two of which are isolated and beyond 30 Mpc in a relatively unperturbed Hubble flow. From these data, we derive a Hubble constant of 74.2 +/- 7.2 (stat) +/- 3.7 (sys) km/s/Mpc, a value that is very similar to those found from other quality indicators (e.g., Cepheids, the tip of the red giant branch, and surface brightness fluctuations). At present, the uncertainty is dominated by the small number of suitable galaxies in the ESO archive and their less-than-ideal observing conditions and calibrations. Based on our experience with these systems, we identify the observational requirements necessary for the PNLF to yield a competitive value for H0 that is independent of the SN Ia distance scale and helps resolve the current tension in the Hubble constant.
Practically all the full-fledged MOND theories propounded to date are of the modified-gravity (MG) type: they modify only the Newtonian, Poisson action of the gravitational potential, or the general-relativistic Einstein-Hilbert action, leaving other terms (inertia) intact. Here, I discuss the interpretation of MOND as modified inertia (MI). My main aim is threefold: (a) to advocate exploring MOND theories beyond MG, and appreciating their idiosyncrasies, (b) to highlight the fact that secondary predictions of such theories can differ materially from those of MG theories, and (c) to demonstrate some of this with specific MI models. I discuss some definitions and generalities concerning MI. I then present instances of MI in physics, and the lessons we can learn from them for MOND. I then concentrate on a specific class of nonrelativistic MOND MI models, and contrast their predictions with those of the two workhorse MG theories -- AQUAL and QUMOND. The MI models predict a possibly stronger external-field effect, e.g., on low-acceleration systems in the solar neighborhood, such as very wide binary stars, and on vertical motions in disc galaxies. More generally, the workings of the effect are rather different, and depend in different ways on dimensionless characteristics of the system, such as frequency ratios of the external and internal fields, eccentricity of trajectories, etc. These models predict a {\it much} weaker effect of the Galactic field in the inner Solar System than is predicted by AQUAL/QUMOND. I also show how noncircular motions -- such as those perpendicular to the disc -- modify the standard, algebraic mass-discrepancy-acceleration relation (aka RAR) that MI predicts for exactly circular orbits. These differences, and more that are discussed, can potentially offer ways to distinguish between the theories.
In this work, we examine the propagation of gravitational waves in cosmological and astrophysical spacetimes in the context of Einstein--Gauss-Bonnet gravity, in view of the GW170817 event. The perspective from which we approach the problem is to obtain a theory that yields a gravitational wave speed equal to that of light in vacuum, or at least a speed compatible with the constraints imposed by the GW170817 event. As we show, in the context of Einstein--Gauss-Bonnet gravity, the propagation speed of gravitational waves in cosmological spacetimes can be compatible with the GW170817 event, and we reconstruct some viable models. However, the propagation of gravitational waves in spherically symmetric spacetimes violates the GW170817 constraints; thus it is impossible for a gravitational wave propagating in a spherically symmetric spacetime to have a propagation speed equal to that of light in vacuum. The same conclusion applies to the Einstein--Gauss-Bonnet theory with two scalars. We discuss the possible implications of our results for spherically symmetric spacetimes.
Massive neutrinos have a non-negligible impact on the formation of large-scale structures. We investigate the impact of massive neutrinos on the halo assembly bias effect, measured by the relative halo bias $b$ as a function of the curvature of the initial density peak $s$, the neutrino excess $\epsilon_\nu$, or the halo concentration $c$, using a large suite of $\Sigma M_\nu{=}0.0$ eV and $0.4$ eV simulations with the same initial conditions. By tracing dark matter haloes back to their initial density peaks, we construct a catalogue of halo "twins" that collapsed from the same peaks but evolved separately with and without massive neutrinos, thereby isolating any effect of neutrinos on halo formation. We detect a 2% weakening of the halo assembly bias as measured by $b(\epsilon_\nu)$ in the presence of massive neutrinos. Due to the significant correlation between $s$ and $\epsilon_\nu$~($r_{cc}{=}0.319$), the impact of neutrinos persists in the halo assembly bias measured by $b(s)$, but is reduced by an order of magnitude, to 0.1%. As the correlation between $c$ and $\epsilon_\nu$ drops to $r_{cc}{=}0.087$, we do not detect any neutrino-induced impact on $b(c)$, consistent with earlier studies. We also discover an analogous assembly bias effect for the "neutrino haloes", whose concentrations are anti-correlated with the large-scale clustering of neutrinos.
We study, analytically and numerically, the structure and evolution of relativistic jetted blast waves that propagate in uniform media, such as those that generate afterglows of gamma-ray bursts. Similar to previous studies, we find that the evolution can be divided into two parts: (i) a pre-spreading phase, in which the jet core angle is roughly constant, $\theta_{c,0}$, and the shock Lorentz factor along the axis, $\Gamma_a$, evolves as a part of the Blandford-McKee solution, and (ii) a spreading phase, in which $\Gamma_a$ drops exponentially with the radius and the core angle, $\theta_c$, grows rapidly. Nevertheless, the jet remains collimated during the relativistic phase, where $\theta_c(\Gamma_a\beta_a=1)\simeq 0.4\theta_{c,0}^{1/3}$. The transition between the phases takes place when $\Gamma_a\simeq 0.2\theta_{c,0}^{-1}$. We find that the "wings" of jets with an initially "narrow" structure ($\frac{d \log\,E_{iso}}{d\log\,\theta}<-3$ outside of the core, where $E_{iso}$ is the isotropic equivalent energy) start evolving during the pre-spreading phase. By the spreading phase these jets evolve to a self-similar profile, which is independent of the initial structure, where in the wings $\Gamma(\theta)\propto\theta^{-1.5}$ and $E_{iso}(\theta)\propto \theta^{-2.6}$. Jets with an initially "wide" structure roughly keep their initial profile during their entire evolution. We provide an analytic description of the evolution of the jet lateral profile for a range of initial structures, as well as the evolution of $\Gamma_a$ and $\theta_c$. For off-axis GRBs, we present a relation between the initial jet structure and the rising phase of the light curve. Applying our model to GW170817, we find that initially the jet had $\theta_{c,0}=0.4-4.5~\deg$ and wings which are consistent with $E_{iso} \propto \theta^{-3}-\theta^{-4}$.
Disk-fed accretion onto neutron stars can power a wide range of astrophysical sources, from X-ray binaries to accretion-powered millisecond pulsars, ultra-luminous X-ray sources, and gamma-ray bursts. A crucial parameter controlling the gas-magnetosphere interaction is the strength of the stellar dipole. In addition, coherent X-ray pulsations in many neutron star systems indicate that the star's dipole moment is oblique relative to its rotation axis. It is therefore critical to systematically explore the 2D parameter space of the star's magnetic field strength and obliquity, which this work does, for the first time, in the framework of 3D general-relativistic magnetohydrodynamics. If the accretion disk carries its own vertical magnetic field, this introduces an additional factor: the relative polarity of the disk and stellar magnetic fields. We find that depending on the strength of the stellar dipole and the star-disk relative polarity, the neutron star's jet power can either increase or decrease with increasing obliquity. For weak dipole strength (equivalently, high accretion rate), the parallel polarity results in a positive correlation between jet power and obliquity, whereas the anti-parallel orientation displays the opposite trend. For stronger dipoles, the relative polarity effect disappears, and jet power always decreases with increasing obliquity. The influence of the relative polarity gradually disappears as obliquity increases. Highly oblique pulsars tend to have an increased magnetospheric radius, a lower mass accretion rate, and enter the propeller regime at lower magnetic moments than aligned stars.
We report on contemporaneous optical observations at ~10 ms timescales of two repeat bursts (FRB 20201023, FRB 20220908) from the repeating fast radio burst (FRB) 20180916B, taken with the 'Alopeke camera on the Gemini North telescope. These repeat bursts have radio fluences of 2.8 and 3.5 Jy ms, respectively, placing them approximately in the lower 50th percentile for fluence from this repeating source. The 'Alopeke data reveal no significant optical detections at the FRB position, and we place 3-sigma upper limits on the optical fluences of <8.3e-3 and <7.7e-3 Jy ms after correcting for line-of-sight extinction. Together, these yield the most sensitive limits on the optical-to-radio fluence ratio of an FRB on these timescales, eta < 3e-3, improving on previous limits by roughly an order of magnitude. These measurements rule out progenitor models where FRB 20180916B has a fluence ratio similar to optical pulsars like the Crab pulsar, or where the optical emission is produced as inverse Compton radiation in a pulsar magnetosphere or young supernova remnant. Our ongoing program with 'Alopeke on Gemini-N will continue to monitor repeating FRBs, including FRB 20180916B, to search for optical counterparts on ms timescales.
Magnetars are slowly rotating neutron stars that possess the strongest magnetic fields ($10^{14}-10^{15} \mathrm{G}$) known in the cosmos. They display a range of transient high-energy electromagnetic activity. The brightest and most energetic of these events are the gamma-ray bursts (GRBs) known as magnetar giant flares (MGFs), with isotropic energy $E\approx10^{44}-10^{46} \mathrm{erg}$. There are only seven detections identified as MGFs to date: three unambiguous events occurred in our Galaxy and the Magellanic Clouds, and the other four MGF candidates are associated with nearby star-forming galaxies. As all seven identified MGFs are bright at Earth, additional weaker events likely remain unidentified in archival data. We conducted a search of the Fermi Gamma-ray Burst Monitor (GBM) database for candidate extragalactic MGFs and, when possible, collected localization data from the Interplanetary Network (IPN) satellites. Our search yielded one convincing event, GRB 180128A. The IPN localizes this burst to NGC 253, commonly known as the Sculptor Galaxy. This event is the second MGF in modern astronomy to be associated with this galaxy, marking the first time two bursts have been associated with a single galaxy outside our own. Here, we detail the archival search criteria that uncovered this event and its spectral and temporal properties, which are consistent with expectations for an MGF. We also discuss the theoretical implications and finer burst structures resolved from various binning methods. Our analysis provides observational evidence for an eighth identified MGF.
Supermassive black holes can experience super-Eddington peak mass fallback rates following the tidal disruption of a star. The theoretical expectation is that part of the infalling material is expelled by means of an accretion disk wind, whose observational signature includes blueshifted absorption lines of highly ionized species in X-ray spectra. To date, however, only one such ultra-fast outflow (UFO) has been reported in the tidal disruption event (TDE) ASASSN-14li. Here we report on the discovery of transient absorption-like signatures in X-ray spectra of the TDE AT2020ksf/Gaia20cjk (at a redshift of $z$=0.092), following an X-ray brightening $\sim 230$ days after UV/optical peak. We find that while no statistically significant absorption features are present initially, they appear on a timescale of several days, and remain detected up to 770 days after peak. Simple thermal continuum models, combined with a power-law or neutral absorber, do not describe these features well. Adding a partial covering, low velocity ionized absorber improves the fit at early times, but fails at late times. A high velocity (v$_w$ $\sim$ 42000 km s$^{-1}$, or -0.15c), ionized absorber (ultra-fast outflow) provides a good fit to all data. The few day timescale of variability is consistent with expectations for a clumpy wind. We discuss several scenarios that could explain the X-ray delay, as well as the potential for larger scale wind feedback. The serendipitous nature of the discovery could suggest a high incidence of UFOs in TDEs, alleviating some of the tension with theoretical expectations.
Kilonovae are a class of astronomical transients observed as counterparts to mergers of compact binary systems, such as binary neutron star (BNS) or black hole-neutron star (BHNS) systems. They serve as probes for heavy-element nucleosynthesis in astrophysical environments, and, combined with the distance to the merger constrained by the gravitational wave emission, they can place constraints on the Hubble constant. Obtaining the physical parameters (e.g. ejecta mass, velocity, composition) of a kilonova from observations is a complex inverse problem, usually tackled by sampling-based inference methods such as Markov-chain Monte Carlo (MCMC) or nested sampling techniques. These methods rely on evaluating the likelihood of the observed data given the parameters; since a full simulation of a compact object merger involves expensive computations, this likelihood can become intractable, rendering likelihood-based inference approaches inapplicable. We propose here to use Simulation-based Inference (SBI) techniques to infer the physical parameters of BNS kilonovae from their spectra, using simulations produced with KilonovaNet. Our model uses Amortized Neural Posterior Estimation (ANPE) together with an embedding neural network to accurately predict posterior distributions from simulated spectra. We further test our model with real observations from AT2017gfo, the only kilonova with multi-messenger data, and show that our estimates agree with previous likelihood-based approaches.
In this paper, we provide a first physical interpretation for the Event Horizon Telescope (EHT)'s 2017 observations of Sgr A*. Our main approach is to compare resolved EHT data at 230 GHz and unresolved non-EHT observations from radio to X-ray wavelengths to predictions from a library of models based on time-dependent general relativistic magnetohydrodynamics (GRMHD) simulations, including aligned, tilted, and stellar wind-fed simulations; radiative transfer is performed assuming both thermal and non-thermal electron distribution functions. We test the models against 11 constraints drawn from EHT 230 GHz data and observations at 86 GHz, 2.2 $\mu$m, and in the X-ray. All models fail at least one constraint. Light curve variability provides a particularly severe constraint, which nearly all strongly magnetized (MAD) models and a large fraction of weakly magnetized (SANE) models fail. A number of models fail only the variability constraints. We identify a promising cluster of these models, which are MAD and have inclination $i \le$ 30$^\circ$. They have accretion rate $(5.2$--$9.5)\times10^{-9}M_\odot$yr$^{-1}$, bolometric luminosity $(6.8$--$9.2)\times10^{35}$ erg s$^{-1}$, and outflow power $(1.3$--$4.8)\times10^{38}$ erg s$^{-1}$. We also find that: all models with $i \ge$ 70$^\circ$ fail at least two constraints, as do all models with equal ion and electron temperature; exploratory, non-thermal model sets tend to have higher 2.2 $\mu$m flux density; and the population of cold electrons is limited by X-ray constraints due to the risk of bremsstrahlung overproduction. Finally, we discuss physical and numerical limitations of the models, highlighting the possible importance of kinetic effects and the duration of the simulations.
We present the first event-horizon-scale images and spatiotemporal analysis of Sgr A* taken with the Event Horizon Telescope in 2017 April at a wavelength of 1.3 mm. Imaging of Sgr A* has been conducted through surveys over a wide range of imaging assumptions using the classical CLEAN algorithm, regularized maximum likelihood methods, and a Bayesian posterior sampling method. Different prescriptions have been used to account for scattering effects by the interstellar medium towards the Galactic Center. Mitigation of the rapid intra-day variability that characterizes Sgr A* has been carried out through the addition of a "variability noise budget" in the observed visibilities, facilitating the reconstruction of static full-track images. Our static reconstructions of Sgr A* can be clustered into four representative morphologies that correspond to ring images with three different azimuthal brightness distributions, and a small cluster that contains diverse non-ring morphologies. Based on our extensive analysis of the effects of sparse $(u,v)$-coverage, source variability and interstellar scattering, as well as studies of simulated visibility data, we conclude that the Event Horizon Telescope Sgr A* data show compelling evidence for an image that is dominated by a bright ring of emission with a ring diameter of $\sim$ 50 $\mu$as, consistent with the expected "shadow" of a $4\times10^6 M_\odot$ black hole in the Galactic Center located at a distance of 8 kpc.
Astrophysical black holes are expected to be described by the Kerr metric. This is the only stationary, vacuum, axisymmetric metric, without electromagnetic charge, that satisfies Einstein's equations and does not have pathologies outside of the event horizon. We present new constraints on potential deviations from the Kerr prediction based on 2017 EHT observations of Sagittarius A* (Sgr A*). We calibrate the relationship between the geometrically defined black hole shadow and the observed size of the ring-like images using a library that includes both Kerr and non-Kerr simulations. We use the exquisite prior constraints on the mass-to-distance ratio for Sgr A* to show that the observed image size is within $\sim$ 10$\%$ of the Kerr predictions. We use these bounds to constrain metrics that are parametrically different from Kerr as well as the charges of several known spacetimes. To consider alternatives to the presence of an event horizon we explore the possibility that Sgr A* is a compact object with a surface that either absorbs and thermally re-emits incident radiation or partially reflects it. Using the observed image size and the broadband spectrum of Sgr A*, we conclude that a thermal surface can be ruled out and a fully reflective one is unlikely. We compare our results to the broader landscape of gravitational tests. Together with the bounds found for stellar mass black holes and the M87 black hole, our observations provide further support that the external spacetimes of all black holes are described by the Kerr metric, independent of their mass.
We numerically study the diffusion and scattering of cosmic rays (CRs) together with their acceleration processes in the framework of the modern understanding of magnetohydrodynamic (MHD) turbulence. Based on the properties of compressible MHD turbulence obtained from observations and numerical experiments, we investigate the interaction of CRs with plasma modes. We find that (1) the gyroradius of particles increases exponentially with the acceleration timescale; (2) the momentum diffusion presents a power-law relationship with the gyroradius in the strong turbulence regime, and shows a plateau in the weak turbulence regime, implying a stochastic acceleration process; (3) the spatial diffusion is dominated by parallel diffusion in the sub-Alfv\'enic regime, while it is dominated by perpendicular diffusion in the super-Alfv\'enic one; (4) as for the interaction of CRs with plasma modes, the particle acceleration is dominated by the fast mode in the high $\beta$ case, while in the low $\beta$ case it is dominated by the fast and slow modes; (5) in the presence of acceleration, magnetosonic modes still play a critical role in the diffusion and scattering processes of CRs, in good agreement with earlier theoretical predictions.
The detection of a secular post-merger gravitational wave (GW) signal in a binary neutron star (BNS) merger serves as strong evidence for the formation of a long-lived post-merger neutron star (NS), which can help constrain the maximum mass of NSs and differentiate NS equations of state. We focus specifically on the detection of GW emission from rigidly rotating NSs formed through BNS mergers, using several kilohertz GW detectors that have been designed. We simulate BNS mergers within the detection limit of LIGO-Virgo-KAGRA O4 and determine what fraction of the simulated sources may have a detectable secular post-merger GW signal. For kilohertz detectors designed in the same configuration as LIGO A+, we find that the design with peak sensitivity at approximately $2{\rm kHz}$ is most appropriate for such signals. The fraction of sources that have a detectable secular post-merger GW signal would be approximately $0.94\% - 11\%$ when the spin-downs of the post-merger rigidly rotating NSs are dominated by GW radiation, and approximately $0.46\% - 1.6\%$ when the contribution of electromagnetic (EM) radiation to the spin-down process is non-negligible. We also estimate this fraction for other well-known proposed kilohertz GW detectors and find that, with advanced designs, it can reach approximately $12\% - 45\%$ for the GW-dominated spin-down case and $4.7\% - 16\%$ when both GW and EM radiation are considered.
A wealth of astrophysical and cosmological observational evidence shows that about 85$\%$ of the matter content of the universe is non-baryonic dark matter. Huge experimental efforts have been deployed to search for the direct detection of dark matter via its scattering on target nucleons, its production in colliders, and its indirect detection via its annihilation products. Inelastic scattering of high-energy cosmic rays off dark matter particles populating the Milky Way halo would produce secondary gamma rays from the decay of the neutral pions produced in such interactions, providing a new avenue to probe dark matter properties. We compute here the sensitivity of an H.E.S.S.-like observatory, a current-generation ground-based Cherenkov telescope array, to the expected gamma-ray flux from collisions of Galactic cosmic rays and dark matter in the center of the Milky Way. We also derive sensitivity prospects for the upcoming Cherenkov Telescope Array (CTA) and Southern Wide-field Gamma-ray Observatory (SWGO). The expected sensitivity allows us to probe a so-far poorly constrained range of dark matter masses, from keV to sub-GeV, and provides complementary constraints on the dark matter-proton scattering cross section traditionally probed by deep underground direct dark matter experiments.
The diffuse supernova neutrino background (DSNB) is the constant flux of neutrinos and antineutrinos emitted by all past core collapses in the observable Universe. We study the potential to extract information on the neutrino lifetime from the upcoming observation of the DSNB flux, which has a unique sensitivity to neutrino nonradiative decay for $\tau / m \in \left[ 10^9, 10^{11}\right]$~s/eV. To this end, we incorporate, for the first time, astrophysical uncertainties, the contribution from failed supernovae, and a three-neutrino description of neutrino nonradiative decay. We present our predictions for future detection at the running Super-Kamiokande + Gd and the upcoming Hyper-Kamiokande, JUNO, and DUNE experiments. Finally, our results show the importance of identifying the neutrino mass ordering for restricting the possible scenarios for the impact of nonradiative neutrino decay on the DSNB.
Our knowledge of neutron star (NS) masses has been renewed once again by the identification of the heaviest known NS, PSR J$0952-0607$. By taking advantage of both the mass observations of supermassive neutron stars and the tidal deformability derived from the event GW170817, a joint constraint on the tidal deformability is obtained. A wide-ranging correlation between NS pressure and tidal deformability within the density range from the saturation density $\rho_0$ to $5.6\rho_0$ is discovered, which directly yields a constrained NS EoS. The newly constrained EoS has a small uncertainty and a softer behavior at high densities without the inclusion of extra degrees of freedom, showing its potential as an indicator for the composition of the NS core.
The detection of a high velocity (~ 0.3c) inflow of highly ionized matter during a long XMM-Newton observation of the luminous Seyfert galaxy PG 1211+143 in 2014 offered the first direct observational evidence of a short-lived accretion event, where matter approaching at a high inclination to the black hole spin plane may result in warping and tearing of the inner accretion disc, with subsequent inter-ring collisions producing shocks, loss of rotational support and rapid mass infall. In turn, such accretion events provide an explanation for the ultrafast outflows (UFOs) now recognised as a common property of many luminous Seyfert galaxies. While the ultra-fast inflow in PG 1211+143 was detected in only one of 7 spacecraft orbits, a weaker (lower column density) inflow, at a much lower redshift of ~ 0.123, is revealed in the soft X-ray spectrum by summing RGS data over the full 5-week XMM-Newton campaign. Modelling of the simultaneous, stacked pn data finds evidence for a similar low redshift absorption component in a previously unexplained feature on the low energy wing of the Fe K emission line complex near 6 keV. We briefly consider Doppler and strong gravity explanations for the observed redshift, with the former indicating a distant inflow feeding off-plane accretion, where an infall velocity of v ~ 0.038c and a (free-fall) radius of 1400 R$_{g}$ lie beyond the tearing radius for PG 1211+143, but still within the sphere of influence of the SMBH. An intriguing alternative, recently given added credence, is the gravitational redshift of absorption in matter orbiting the SMBH at a radius of ~ 27 R$_{g}$. In the latter case the narrow RGS absorption line spectrum constrains the thickness of the orbiting ring, owing to the strong velocity shear so close to the black hole.
In this paper, I investigate the neutral current (NC) scattering of antineutrinos off neutron star (NS) matter constituents at zero temperature. The standard matter in the NS is modeled in the framework of both the extended relativistic mean-field (E-RMF) model and the nonrelativistic Korea-IBS-Daegu-SKKU energy density functional (KIDS-EDF) models. In the E-RMF model, I use the new G3(M) parameterization, which was constrained by the recent PREX II measurement of the neutron distribution of $^{208}\rm{Pb}$, while the KIDS-EDF models are constrained by terrestrial experiments, gravitational-wave signals, and astrophysical observations. Using these optimal and well-constrained matter models, I then calculate the antineutrino differential cross section (ADCS) and antineutrino mean free path (AMFP) for the interaction of antineutrinos with NS matter constituents using linear response theory. I find that the AMFP for the KIDS0 and KIDSA models is smaller than for the SLy4 model and the E-RMF model with the G3(M) parameterization. The AMFP of the SLy4 model is almost identical to that of the E-RMF model with the G3(M) parameterization. Contributions of each nucleon to the total AMFP are also presented for the G3(M) model.
Forbidden dark matter cannot, by definition, annihilate into a pair of heavier partners, either SM particles or partners in the dark sector, at the late stage of cosmological evolution. We point out the possibility of reactivating the forbidden annihilation channel around supermassive black holes. Being attracted towards a black hole, the forbidden dark matter is significantly accelerated and can overcome the annihilation threshold. The subsequent decay of the annihilation products to photons leaves a unique signal around the black hole, which can serve as a smoking gun for forbidden dark matter. For illustration, the Fermi-LAT data around Sgr $A^*$ provide a preliminary constraint on the thermally averaged cross section of the reactivated forbidden annihilation that is consistent with the DM relic density requirement.
Black hole spectroscopy is the program to measure the complex gravitational-wave frequencies of merger remnants, and to quantify their agreement with the characteristic frequencies of black holes computed at linear order in black hole perturbation theory. In a "weaker" (non-agnostic) version of this test, one assumes that the frequencies depend on the mass and spin of the final Kerr black hole as predicted in perturbation theory. Linear perturbation theory is expected to be a good approximation only at late times, when the remnant is close enough to a stationary Kerr black hole. However, it has been claimed that a superposition of overtones with frequencies fixed at their asymptotic values in linear perturbation theory can reproduce the waveform strain even at the peak. Is this overfitting, or are the overtones physically present in the signal? To answer this question, we fit toy models of increasing complexity, waveforms produced within linear perturbation theory, and full numerical relativity waveforms using both agnostic and non-agnostic ringdown models. We find that higher overtones are unphysical: their role is mainly to "fit away" features such as initial data effects, power-law tails, and (when present) nonlinearities. We then identify physical quasinormal modes by fitting numerical waveforms in the original, agnostic spirit of the no-hair test. We find that a physically meaningful ringdown model requires the inclusion of higher multipoles, quasinormal mode frequencies induced by spherical-spheroidal mode mixing, and nonlinear quasinormal modes. Even in this "infinite signal-to-noise ratio" version of the original spectroscopy test, there is convincing evidence for the first overtone of the dominant multipole only well after the peak of the radiation.
I analyze a new X-ray image of the youngest supernova remnant (SNR) in the Galaxy, the Type Ia SNR G1.9+0.3, and reveal a very clear point-symmetrical structure. Since explosion models of Type Ia supernovae (SNe Ia) do not form such morphologies, the point-symmetrical morphology must come from the circumstellar material (CSM) into which the ejecta expands. The large-scale point-symmetry that I identify and the known substantial deceleration of the ejecta of SNR G1.9+0.3 suggest a relatively massive CSM of $>1\,M_\odot$. I argue that the most likely explanation is the explosion of this SN Ia inside a planetary nebula (PN). The scenario that predicts a large fraction of SNe Ia inside PNe (SNIPs) is the core-degenerate scenario. Other SN Ia scenarios might lead to only a very small fraction of SNIPs, or none at all.
Transition blazars exhibit a shift from one subclass to another during different flux states. It is therefore crucial to study them to understand the underlying physics of blazars. We probe the origin of the multi-wavelength emission from the transition blazar B2 1308+326 using the 14-year-long gamma-ray light curve from Fermi and quasi-simultaneous data from Swift. We used the Bayesian block algorithm to identify epochs of flaring and quiescent flux states and modelled the broadband SEDs for these epochs. We employed the one-zone leptonic model, in which the synchrotron emission produces the low-energy part of the SED and the high-energy part is produced by the IC emission of external seed photons. We also investigated its multi-band variability properties and gamma-ray flux distribution, and the correlation between optical and gamma-ray emissions. We observed a historically bright flare from B2 1308+326 across the optical to gamma-ray bands in June and July 2022. The highest daily averaged gamma-ray flux was (14.24$\pm$2.36) $\times$ 10$^{-7}$ ph cm$^{-2}$ s$^{-1}$, detected on 1 July 2022. The gamma-ray flux distribution was found to be log-normal. The optical and gamma-ray emissions are well correlated with zero time lag. The synchrotron peak frequency changes from $\sim 8 \times$ 10$^{12}$ Hz (in the quiescent state) to $\sim 6 \times$ 10$^{14}$ Hz (in the flaring state), together with a decrease in the Compton dominance, providing a hint that the source transitions from an LSP to an ISP. The SEDs for these two states are well fitted by one-zone leptonic models. The parameters in the model fits are essentially consistent between both SEDs, except for the Doppler-beaming factor, which changes from $\sim$15.6 to $\sim$27 during the transition. An increase in the Doppler factor might cause both the flare and the transition of B2 1308+326 from an LSP to an ISP blazar.
Thorne-\.{Z}ytkow objects (T\.{Z}Os), hypothetical merger products in which a central neutron star powers a stellar envelope, are traditionally considered steady-state configurations, though their assembly, especially through dynamical channels, is not well-understood. The predominant focus in the literature has been the observational signatures related to the long-term fate and evolution of T\.{Z}Os, with their initial formation often treated as a given. However, the foundational calculations supporting the existence of T\.{Z}Os assumed non-rotating, spherically symmetric initial conditions that are inconsistent with a merger scenario. In this work, we explore the implications of post-merger dynamics in T\.{Z}O formation scenarios with field binary progenitors, specifically the role that angular momentum transport during the common envelope phase plays in constraining possible merger products, using the tools of stellar evolution and three-dimensional hydrodynamics. We also propose an alternative steady-state outcome for these mergers: the thin-envelope T\.{Z}O. These potential X-ray sources would follow a series of bright transient events and may be of interest to upcoming time-domain surveys.
Boson stars arise as solutions of a massive complex scalar field coupled to gravity. A variety of scalar potentials, giving rise to different types of boson stars, have been studied in the literature. Here we study instead the effect of promoting the kinetic term of the scalar field to a nonlinear sigma model -- an extension that is naturally motivated by UV completions of gravity like string theory. We consider the $\mathrm{O}(3)$ and $\mathrm{SL}(2,\mathbb{R})$ sigma models with minimally interacting potentials and obtain their boson star solutions. We study the maximum mass and compactness of the solutions as a function of the curvature of the sigma model and compare the results to the prototypical case of mini boson stars, which are recovered in the case of vanishing curvature. The effect of the curvature turns out to be dramatic. While $\mathrm{O}(3)$ stars are massive and can reach a size as compact as $R\sim 3.3 GM$, $\mathrm{SL}(2,\mathbb{R})$ stars are much more diffuse and only astrophysically viable if the bosons are ultralight. These results show that the scalar kinetic term is at least as important as the potential in determining the properties of boson stars.
Since the discovery of the cosmic X-ray background (CXB), astronomers have strived to understand the accreting supermassive black holes (SMBHs) contributing to its peak in the 10-40 keV band. Existing soft X-ray telescopes could study this population up to only 10 keV, and, while NuSTAR (focusing on 3--24 keV) made great progress, it also left significant uncertainties in characterizing the hard X-ray population, crucial for calibrating current population synthesis models. This paper presents an in-depth analysis of simulations of two extragalactic surveys (deep and wide) with the High-Energy X-ray Probe (HEX-P), each observed for 2 Ms. Applying established source detection techniques, we show that HEX-P surveys will reach a flux of $\sim$10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ in the 10-40 keV band, an order of magnitude fainter than current NuSTAR surveys. With the large sample of new hard X-ray detected sources ($\sim2000$), we showcase HEX-P's ability to resolve more than 80% of the CXB up to 40 keV into individual sources. The expected precision of HEX-P's resolved background measurement will allow us to distinguish between population synthesis models of SMBH growth. HEX-P leverages accurate broadband (0.5-40 keV) spectral analysis and the combination of soft and hard X-ray colors to provide obscuration constraints even for the fainter sources, with the overall objective of measuring the Compton-thick fraction. With unprecedented sensitivity in the 10--40 keV band, HEX-P will explore the hard X-ray emission from AGN to flux limits never reached before, thus expanding the parameter space for serendipitous discoveries. Consequently, it is plausible that new models will be needed to capture the population HEX-P will unveil.
We present the first Event Horizon Telescope (EHT) observations of Sagittarius A* (Sgr A$^*$), the Galactic center source associated with a supermassive black hole. These observations were conducted in 2017 using a global interferometric array of eight telescopes operating at a wavelength of $\lambda=1.3\,{\rm mm}$. The EHT data resolve a compact emission region with intrahour variability. A variety of imaging and modeling analyses all support an image that is dominated by a bright, thick ring with a diameter of $51.8 \pm 2.3\,\mu$as (68\% credible interval). The ring has modest azimuthal brightness asymmetry and a comparatively dim interior. Using a large suite of numerical simulations, we demonstrate that the EHT images of Sgr A$^*$ are consistent with the expected appearance of a Kerr black hole with mass ${\sim}4 \times 10^6\,{\rm M}_\odot$, which is inferred to exist at this location based on previous infrared observations of individual stellar orbits as well as maser proper motion studies. Our model comparisons disfavor scenarios where the black hole is viewed at high inclination ($i > 50^\circ$), as well as non-spinning black holes and those with retrograde accretion disks. Our results provide direct evidence for the presence of a supermassive black hole at the center of the Milky Way galaxy, and for the first time we connect the predictions from dynamical measurements of stellar orbits on scales of $10^3-10^5$ gravitational radii to event horizon-scale images and variability. Furthermore, a comparison with the EHT results for the supermassive black hole M87$^*$ shows consistency with the predictions of general relativity spanning over three orders of magnitude in central mass.
It is exceedingly rare to find quiescent low-mass galaxies in the field. UGC5205 is an example of such a quenched field dwarf ($M_\star\sim3\times10^8M_\odot$). Despite a wealth of cold gas ($M_{\rm HI}\sim 3.5 \times 10^8 M_\odot$) and GALEX emission indicating significant star formation in the past few hundred Myr, there is no detection of H$\alpha$ emission -- tracing star formation in the last $\sim 10$ Myr -- across the face of the galaxy. Meanwhile, the near-equal-mass companion of UGC5205, PGC027864, is starbursting ($\rm EW_{\rm H\alpha}>1000$ Angstrom). In this work, we present new Karl G. Jansky Very Large Array (VLA) 21 cm line observations of UGC5205 that demonstrate that the lack of star formation is caused by an absence of HI in the main body of the galaxy. The HI of UGC5205 is highly disturbed, with the bulk of it residing in several kpc-long tails, while the HI of PGC027864 is dominated by ordered rotation. We model the stellar populations of UGC5205 to show that, as indicated by the UV and H$\alpha$ emission, the galaxy underwent a coordinated quenching event $\sim\!100-300$ Myr ago. The asymmetry of outcomes for UGC5205 and PGC027864 demonstrates that major mergers can both quench and trigger star formation in dwarfs. However, because the gas remains bound to the system, we suggest that such mergers only temporarily quench star formation. We estimate a total quenched time of $\sim 560$ Myr for UGC5205, consistent with established upper limits on the quenched fraction of a few percent for dwarfs in the field.
The shapes of galaxies, in particular their outer regions, are important guideposts to their formation and evolution. Here we report on the discovery of strongly box-shaped morphologies of the otherwise well-studied elliptical and lenticular galaxies NGC 720 and NGC 2768 from deep imaging. The boxiness is strongly manifested in a shape parameter $A_4/a$ of $-0.04$ in both objects, and significant center shifts of the isophotes of $\sim$ 2--4 kpc are also seen. One origin for such asymmetries commonly stated in the literature is a merger, although the number of such cases is still sparse and the exact properties of the individual boxy objects are highly diverse. Indeed, for NGC 2768, we identify a progenitor candidate (dubbed Pelops) in the residual images, which appears to be a dwarf satellite that is currently merging with NGC 2768. At its absolute magnitude of M$_r$ of $-$12.2 mag, the corresponding Sersic radius of 2.4 kpc is more extended than those of typical dwarf galaxies from the literature. However, systematically larger radii are known to occur in systems undergoing tidal disruption. This finding is bolstered by the presence of a tentative tidal stream feature in archival GALEX data. Finally, further structures in this fascinating host galaxy include rich dust lanes and a vestigial X-shaped bulge component.
There has been a discussion for many years on whether the disc in the Milky Way extends down to low metallicity. We aim to address this question by employing a large sample of giant stars with radial velocities and homogeneous metallicities based on the Gaia DR3 XP spectra. We study the 3D velocity distribution of stars in various metallicity ranges, including the very metal-poor regime (VMP, [M/H] $<-2.0$). We find that a clear disc population starts to emerge only around [M/H] $\sim -1.3$, and is not visible for [M/H] $<-1.6$. Using Gaussian Mixture Modeling (GMM), we show that there are two halo populations in the VMP regime: one stationary and one with a net prograde rotation of $\sim80\,\mathrm{km/s}$. In this low-metallicity range, we are able to constrain the contribution of a rotation-supported disc sub-population to a maximum of $\sim 3$\%. We compare our results to previous claims of discy VMP stars, in both observations and simulations, and find that a prograde halo component could explain most of these claims.
The mass distribution in massive elliptical galaxies encodes their evolutionary history, thus providing an avenue to constrain the baryonic astrophysics in their evolution. The power-law assumption for the radial mass profile in ellipticals has been sufficient to describe several observables to the noise level, including strong lensing and stellar dynamics. In this paper, we quantitatively constrained any deviation, or the lack thereof, from the power-law mass profile in massive ellipticals through a joint lensing-dynamics analysis of a large statistical sample of 77 galaxy-galaxy lens systems. We performed an improved and uniform lens modelling of these systems from archival Hubble Space Telescope imaging using the automated lens modelling pipeline dolphin. We combined the lens model posteriors with the stellar dynamics to constrain the deviation from the power law after accounting for the line-of-sight lensing effects, a first for analyses on galaxy-galaxy lenses. We find that the Sloan Lens ACS Survey (SLACS) lens galaxies with a mean redshift of 0.2 are consistent with the power-law profile within 1.1$\sigma$ (2.8$\sigma$) and the Strong Lensing Legacy Survey (SL2S) lens galaxies with a mean redshift of 0.6 are consistent within 0.8$\sigma$ (2.1$\sigma$), for a spatially constant (Osipkov-Merritt) stellar anisotropy profile. We adopted the spatially constant anisotropy profile as our baseline choice based on previous dynamical observables of local ellipticals. However, spatially resolved stellar kinematics of lens galaxies are necessary to differentiate between the two anisotropy models. Future studies will use our lens models to constrain the mass distribution individually in the dark matter and baryonic components.
High-velocity clouds (HVCs) are multi-phase gas structures whose velocities (|v_LSR|>100 km/s) are too high to be explained by Galactic disk rotation. While large HVCs are well characterized, compact and small HVCs (with HI angular sizes of a few degrees) are poorly understood. Possible origins for such small clouds include Milky Way halo gas or fragments of the Magellanic System, but neither their origin nor their connection to the Milky Way halo has been confirmed. We use new Hubble Space Telescope/Cosmic Origins Spectrograph UV spectra and Green Bank Telescope HI spectra to measure the metallicities of five small HVCs in the southern Galactic sky projected near the Magellanic System. We build a set of distance-dependent Cloudy photoionization models for each cloud and calculate their ionization-corrected metallicities. All five small HVCs have oxygen metallicities <0.17 Z_sun, indicating they do not originate in the disk of the Milky Way. Two of the five have metallicities of 0.16-0.17 Z_sun, similar to the Magellanic Stream, suggesting these clouds are fragments of the Magellanic System. The remaining three clouds have much lower metallicities of 0.02-0.04 Z_sun. While the origin of these low-metallicity clouds is unclear, they could be gaseous mini-halos or gas stripped from dwarf galaxies by ram pressure or tidal interactions. These results suggest that small HVCs do not all reside in the inner Milky Way halo or the Magellanic System, but instead can trace more distant structures.
We present a comparison of the presence and properties of dust in two distinct phases of the Milky Way's interstellar medium: the warm neutral medium (WNM) and the warm ionized medium (WIM). Using distant pulsars at high Galactic latitudes and vertical distances ($|b| > 40^\circ$, $D \sin|b| > 2$ kpc) as probes, we measure their dispersion measures and trace the neutral hydrogen component of the warm neutral medium ($\text{WNM}_\text{HI}$) through its HI column density. Together with the dust intensity along these same sightlines, we separate the respective dust contributions of each ISM phase in order to determine whether the ionized component contributes to the dust signal. We measure the temperature ($T$), spectral index ($\beta$), and dust opacity ($\tau/N_{H}$) in both phases. We find $T~{\text{(WNM}_\text{HI})}=20^{+3}_{-2}$~K, $\beta~{\text{(WNM}_\text{HI})} = 1.5\pm{0.4}$, and $\tau_{\text{353}}/N_{H}~{\text{(WNM}_\text{HI})}=(1.0\pm0.1)\times 10^{-26}$~cm$^2$. Assuming that the temperature and spectral index are the same in the WNM$_\text{HI}$ and the WIM, and given our simple model in which widely separated lines of sight can be fit together, we find evidence for a dust signal associated with the ionized gas, with $\tau_{\text{353}}/N_{H}~\text{(WIM)}=(0.3\pm0.3)\times 10^{-26}$~cm$^2$, about three times smaller than $\tau_{\text{353}}/N_{H}~{\text{(WNM}_\text{HI})}$. We are 80% confident that $\tau_{\text{353}}/N_{H}~\text{(WIM)}$ is at least two times smaller than $\tau_{\text{353}}/N_{H}~{\text{(WNM}_\text{HI})}$.
The Carbon-Enhanced Metal-Poor (CEMP) stars with no enhancement of neutron-capture elements, the so-called CEMP-no stars, are believed to be the direct descendants of first-generation stars and provide a unique opportunity to probe early Galactic nucleosynthesis. We present a detailed chemical and kinematic analysis of two extremely metal-poor stars, HE1243$-$2408 and HE0038$-$0345, using high-resolution (R${\sim}$86,000) HERMES spectra. For the object HE1243$-$2408 we could make a detailed comparison with available literature values; however, only limited information is available for HE0038$-$0345. Our estimated metallicities for these two objects are $-$3.05 and $-$2.92, respectively. With estimated [C/Fe] (1.03 and 1.05) and [Ba/Fe] ($-$0.18 and $-$0.11), respectively, the objects are found to be bona fide CEMP-no stars. From the observed abundances of C, Na, Mg, and Ba (i.e., A(C), A(Na), A(Mg), A(Ba)), the objects are found to belong to Group II CEMP-no stars. A detailed abundance-profile analysis indicates that the objects were accreted from dSph satellite galaxies, supporting hierarchical Galaxy assembly. Further, our analysis shows that the progenitors of the stars were likely Pop II core-collapse supernovae. The object HE0038$-$0345 is found to be a high-energy, prograde, outer-halo object, and HE1243$-$2408 a high-energy, retrograde, inner-halo object. Our detailed chemodynamical analysis shows that HE1243$-$2408 is related to the I'itoi structure, whereas HE0038$-$0345 is likely related to the Sgr or GSE events. The masses of the progenitor galaxies inferred from the stars' dynamics are consistent with their likely origin in massive dSph galaxies.
Our study aims to investigate the outer disc structure of the Milky Way using red clump (RC) stars. We analysed the distribution of the largest sample of RC stars to date, homogeneously covering the entire Galactic plane in the range $40^\circ \le \ell \le 340^\circ$ and $-10^\circ \le b \le +10^\circ$. This sample allows us to model the RC star distribution in the Galactic disc and better constrain the properties of the flare and warp of the Galaxy. Our results show that the scale length of the old stellar disc depends only weakly on azimuth, with an average value of $1.95 \pm 0.26$ kpc. On the other hand, significant disc flaring is detected: the scale height of the disc increases from 0.38 kpc in the solar neighbourhood to $\sim 2.2$ kpc at R $\approx 15$ kpc. The flare exhibits a slight asymmetry, with a scale height $\sim 1$ kpc larger below the Galactic plane than above it. We also confirm the warping of the outer disc, which can be modelled as $Z_w = (0.0057 \pm 0.0050)\,[R - (7358 \pm 368)\,\mathrm{pc}]^{1.40 \pm 0.09}\,\sin(\phi + (2^\circ.03 \pm 0^\circ.18))$, with $R$ and $Z_w$ in pc. Our analysis reveals a noticeable north-south asymmetry in the warp, with a greater amplitude in the southern direction than in the northern. Comparing our findings with younger tracers from the literature, we observe an age dependency of both the flare and the warp. The increase in flare strength with age suggests secular evolution of the disc as the preferred mechanism for forming the flare, while the increase of the maximum warp amplitude with age indicates that warp dynamics could be the cause of the variation in warp properties with age.
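As a quick illustration of the warp model quoted above, one can evaluate the best-fit relation at its central parameter values (a sketch only: it ignores the quoted uncertainties and assumes both $R$ and $Z_w$ are in pc, as the fit implies):

```python
import math

# Warp model from the abstract, central values only (uncertainties ignored):
# Z_w = 0.0057 * (R - 7358 pc)^1.40 * sin(phi + 2.03 deg)
def warp_height_pc(R_pc, phi_deg):
    if R_pc <= 7358:          # inside the onset radius the warp vanishes
        return 0.0
    return 0.0057 * (R_pc - 7358) ** 1.40 * math.sin(math.radians(phi_deg + 2.03))

# Near the line of maximum warp (phi ~ 90 deg) at R = 15 kpc:
z = warp_height_pc(15_000, 90.0)
print(f"{z / 1000:.2f} kpc")   # roughly 1.6 kpc
```

At R $\approx$ 15 kpc the central values give a warp amplitude of order 1.5 kpc, comparable to the flare scale height quoted for the same radius.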
Dust is a ubiquitous component of our Galaxy. Although it accounts for only $\sim1\%$ of the mass of the ISM, it is an essential part of the Galaxy: it shapes our view of it by obscuring starlight at shorter wavelengths and re-emitting at longer wavelengths. Studies of the dust distribution based on emission at longer wavelengths can suffer from distance ambiguities, in part due to the uncertain Galactic potential. However, another aspect of dust, the polarisation of background starlight, when combined with distance information, provides direct observational evidence of the number of dust clouds encountered along the line of sight. We observed 15 open clusters, distributed at increasing distances along three lines of sight, using two Indian national facilities. The measured polarisation is used to scrutinise the dust distribution and the orientation of the local plane-of-sky magnetic field in the selected directions. The analysis of stars observed towards the distant cluster King 8 shows two foreground dust layers, at distances of $\sim 500$ pc and $\sim 3500$ pc. Similar analyses towards the other clusters also reveal multiple dust layers.
Semi-detached binaries are in the stage of mass transfer and play a crucial role in studying the physics of mass transfer between interacting binaries. Large-scale time-domain surveys provide massive numbers of binary light curves, while Gaia offers high-precision astrometric data. In this paper, we develop, validate, and apply a pipeline that combines an MCMC method with a forward model and DBSCAN clustering to search for semi-detached binaries and to estimate their inclination, relative radius, mass ratio, and temperature ratio from the light curve. We train our model on mock light curves from PHOEBE, which provide broad coverage of light curve simulations for semi-detached binaries. Applying our pipeline to TESS sectors 1-26, we have identified 77 semi-detached binary candidates. Utilizing distances from Gaia, we determine their masses and radii with median fractional uncertainties of ~26% and ~7%, respectively. With these 77 candidates added, the catalog of semi-detached binaries with orbital parameters has been expanded by approximately 20%. The comparison and statistical results show that our candidates align well with the compiled samples and with the PARSEC model in the Teff-L and M-R relations. Combined with literature samples, a comparative analysis with stability criteria for conservative mass transfer indicates that ~97.4% of the samples are undergoing nuclear-timescale mass transfer, while two samples (GO Cyg and TIC 454222105) lie within the limits of the stability criteria for dynamical- and thermal-timescale mass transfer and are currently undergoing thermal-timescale mass transfer. Additionally, one system (IR Lyn) lies very close to the upper limit for delayed dynamical-timescale mass transfer.
Stars like company. They are mostly formed in clusters, and their lives are often altered by the presence of one or more companions. Interaction processes between components may lead to complex outcomes such as Algols, blue stragglers, chemically altered stars, and type Ia supernovae, as well as progenitors of gravitational wave sources, to cite a few. Observational astronomy has entered the era of big data, and thanks to large surveys, such as the space missions Kepler, TESS, and Gaia, and ground-based spectroscopic surveys like RAVE, Gaia-ESO, APOGEE, LAMOST, and GALAH (to name a few), the field is going through a true revolution, as illustrated by the recent detection of stellar black holes and neutron stars as companions of massive but also low-mass stars. In this review, I present why it is important to care about stellar multiples, what the main large surveys are in which many binaries are harvested, and finally some features of the Non-Single Star catalogue of Gaia, which is, to date, the largest homogeneous catalogue of astrometric, spectroscopic, and eclipsing binaries.
Collisional (de-)excitation of H$_{2}$ by helium plays an important role in the thermal balance and chemistry of various astrophysical environments, making accurate rate coefficients essential for the interpretation of observations of the interstellar medium. Our goal is to utilize a state-of-the-art potential energy surface (PES) to provide comprehensive state-to-state rate coefficients for He-induced transitions among rovibrational levels of H$_{2}$. We perform quantum scattering calculations for the H$_{2}$-He system and provide state-to-state rate coefficients for 1 089 transitions between rovibrational levels of H$_{2}$ with internal energies up to 15 000 cm$^{-1}$, for temperatures ranging from 20 to 8 000 K. Our results show good agreement with previous calculations for pure rotational transitions between low-lying rotational levels, but we find significant discrepancies for rovibrational processes involving highly excited rotational and vibrational states. We attribute these differences to two key factors: the broader range of intramolecular distances covered by the ab initio points, and the superior accuracy of the PES, resulting from the use of state-of-the-art quantum chemistry methods compared to the previous lower-level calculations. Radiative transfer calculations performed with the new collisional data indicate that the populations of rotational levels in excited vibrational states are significantly modified, highlighting the critical need for this updated dataset in models of high-temperature astrophysical environments.
Finding the emergence of the first generation of metals in the early Universe, and identifying their origin, are some of the most important goals of modern astrophysics. We present deep JWST/NIRSpec spectroscopy of GS-z12, a galaxy at z=12.5, in which we report the detection of C III]${\lambda}{\lambda}$1907,1909 nebular emission. This is the most distant detection of a metal transition and the most distant redshift determination via emission lines. In addition, we report tentative detections of [O II]${\lambda}{\lambda}$3726,3729 and [Ne III]${\lambda}$3869, and possibly O III]${\lambda}{\lambda}$1661,1666. By using the accurate redshift from C III], we can model the Ly${\alpha}$ drop to reliably measure an absorbing column density of hydrogen of $N_{HI} \approx 10^{22}$ cm$^{-2}$ - too high for an IGM origin and implying abundant ISM in GS-z12 or CGM around it. We infer a lower limit for the neutral gas mass of about $10^7$ MSun which, compared with a stellar mass of $\approx4 \times 10^7$ MSun inferred from the continuum fitting, implies a gas fraction higher than about 0.1-0.5. We derive a solar or even super-solar carbon-to-oxygen ratio, tentatively [C/O]>0.15. This is higher than the C/O measured in galaxies discovered by JWST at z=6-9, and higher than the C/O arising from Type-II supernovae enrichment, while AGB stars cannot contribute to carbon enrichment at these early epochs and low metallicities. Such a high C/O in a galaxy observed 350 Myr after the Big Bang may be explained by the yields of extremely metal poor stars, and may even be the heritage of the first generation of supernovae from Population III progenitors.
Based on atomic-hydrogen (HI) observations using the Five-hundred-meter Aperture Spherical radio Telescope (FAST), we present a detailed study of the gas-rich massive S0 galaxy NGC 1023 in a nearby galaxy group. The presence of an extended, warped HI disk in NGC 1023 indicates that this S0 galaxy originated from a spiral galaxy. The data also suggest that NGC 1023 is interacting with four dwarf galaxies. In particular, one of the largest dwarf galaxies has fallen into the gas disk of NGC 1023, forming a rare bright-dark galaxy pair with a large gas clump. This clump shows the signature of a galaxy but has no optical counterpart, implying that it is a newly formed starless galaxy. Our results suggest, for the first time, that a massive S0 galaxy in a galaxy group can form via morphological transformation from a spiral under the joint action of multiple tidal interactions.
The low-mass metal-poor stars in the Galaxy that preserve in their atmospheres the chemical imprints of the gas clouds from which they formed can be used as probes of the origin and evolution of elements in the early Galaxy, of early star formation, and of nucleosynthesis. Among metal-poor stars, a large fraction, the so-called carbon-enhanced metal-poor (CEMP) stars, exhibit a high abundance of carbon. These stars show diverse abundance patterns, particularly for the heavy elements, on the basis of which they are classified into different groups. The diversity of abundance patterns points to different formation scenarios. Hence, accurate classification of CEMP stars and knowledge of their distribution are essential for understanding the role and contribution of each group. While CEMP-s and CEMP-r/s stars can be used to gain insight into binary interactions at very low metallicity, CEMP-no stars can be used to probe the properties of the first stars and early nucleosynthesis. To exploit the full potential of CEMP stars for Galactic archaeology, a homogeneous analysis of each class is extremely important. Our efforts towards, and contributions to, providing an improved classification scheme for CEMP-s and CEMP-r/s stars and characterizing the companion asymptotic giant branch (AGB) stars of CH, CEMP-no, CEMP-s, and CEMP-r/s binary systems are discussed. Some recent results based on low- and high-resolution spectroscopic analyses of a large number of potential CH and CEMP star candidates are highlighted.
Using N-body simulations, we explore the effects of growing a supermassive black hole (SMBH) prior to or during the formation of a stellar bar. Keeping the final mass and growth rate of the SMBH fixed, we show that if it is introduced before or while the bar is still growing, the SMBH does not cause a decrease in bar amplitude; rather, in most cases the bar is strengthened. In addition, early-growing SMBHs always either decrease the buckling amplitude, delay buckling, or both. This weakening of buckling is caused by an increase in the disk vertical velocity dispersion at radii well beyond the nominal black hole sphere of influence. While we find considerable stochasticity and sensitivity to initial conditions, the only case where the SMBH causes a decrease in bar amplitude is when it is introduced after the bar has attained a steady state. In this case we confirm previous findings that the decrease in bar strength is a result of the scattering of bar-supporting orbits with small pericenter radii. By heating the inner disk both radially and vertically, an early-growing SMBH increases the fraction of stars that can be captured by the Inner Lindblad Resonance (ILR) and the vertical ILR, thereby strengthening both the bar and the boxy/peanut-shaped bulge. Using orbital frequency analysis of star particles, we show that when an SMBH is introduced early and the bar forms around it, the bar is populated by different families of regular bar-supporting orbits than when the bar forms without an SMBH.
The complex chemistry that occurs in star-forming regions can provide insight into the formation of prebiotic molecules at various evolutionary stages of star formation. To study this process, we present millimeter-wave interferometric observations of the neighboring hot cores W3(H$_2$O) and W3(OH) carried out using the NOEMA interferometer. We have analyzed the distributions of six molecules that account for most observed lines across both cores and have constructed physical parameter maps for rotational temperature, column density, and velocity field with corresponding uncertainties. We discuss the derived spatial distributions of these parameters in the context of the physical structure of the source. We propose the use of HCOOCH$_3$ as a new temperature tracer in W3(H$_2$O) and W3(OH) in addition to the more commonly used CH$_3$CN. By analyzing the physically derived parameters for each molecule across both W3(H$_2$O) and W3(OH), the work presented herein further demonstrates the impact of physical environment on hot cores at different evolutionary stages.
The peptide-like molecule cyanoformamide (NCCONH2) is the cyano (CN) derivative of formamide (NH2CHO). It is known to play a role in the synthesis of nucleic acid precursors under prebiotic conditions. In this paper, we present a tentative detection of NCCONH2 in the interstellar medium (ISM) using Atacama Large Millimeter/submillimeter Array (ALMA) archival data. Ten unblended lines of NCCONH2 are seen at around the 3$\sigma$ noise level toward Sagittarius B2(N1E), a position slightly offset from the continuum peak. The column density of NCCONH2 is estimated to be $2.4\times 10^{15}$ cm$^{-2}$, and its fractional abundance toward Sgr B2(N1E) is $6.9\times10^{-10}$. The abundance ratio between NCCONH2 and NH2CHO is estimated to be ~0.01. We also searched for other peptide-like molecules toward Sgr B2(N1E). The abundances of NH2CHO, CH3NCO, and CH3NHCHO toward Sgr B2(N1E) are about one tenth of those toward Sgr B2(N1S), while the abundance of CH3CONH2 is only one twentieth of that toward Sgr B2(N1S).
I examine the applicability of ecological concepts to issues in space environmentalism. Terms such as "ecosystem", "carrying capacity", and "tipping point" are either ambiguous or well defined but not applicable to orbital space and its contents; using such terms uncritically may cause more confusion than enlightenment. On the other hand, it may well be fruitful to adopt the approach of the Planetary Boundaries Framework, defining trackable metrics that capture the damage to the space environment. I argue that the key metric is simply the number of Anthropogenic Space Objects (ASOs), rather than, for example, their reflectivity. This number is currently doubling every 1.7 years, and we are heading towards degree-scale separations on the sky. Overcrowding of the sky is a problem astronomers and satellite operators have in common.
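To make the quoted doubling time concrete, here is a back-of-the-envelope sketch of the implied growth (the starting count below is an arbitrary illustrative number, not a figure from the text):

```python
# Exponential growth implied by a 1.7-year doubling time:
# N(t) = N0 * 2**(t / t_double)
N0 = 10_000          # hypothetical starting ASO count (illustrative only)
t_double = 1.7       # years, the doubling time quoted above

for t in (0, 5, 10):
    print(f"after {t:2d} yr: {N0 * 2 ** (t / t_double):,.0f} objects")
```

At this rate the count grows by roughly a factor of 60 per decade, which is why a simple object count can serve as the trackable metric.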
Upper limits and confidence intervals are a convenient way to present experimental results. With modern experiments producing more and more data, it is often necessary to reduce the volume of the results. A common approach is to take the maximum over a set of upper limits, which yields a single upper limit valid for the entire set. This, however, can be very inefficient. In this paper we introduce functional upper limits and confidence intervals that allow results to be summarized much more efficiently. An application to upper limits in all-sky continuous gravitational wave searches is worked out, with a method for deriving upper limits using linear programming.
GW200129 is claimed to be the first-ever observation of spin-induced orbital precession in gravitational waves (GWs) from an individual binary system. However, this claim warrants a cautious evaluation because the GW event coincided with a broadband noise disturbance in LIGO Livingston caused by the 45 MHz electro-optic modulator system. In this paper, we present a state-of-the-art neural network that is able to model and mitigate the broadband noise from the LIGO Livingston interferometer. We also demonstrate that our neural network mitigates the noise better than the algorithm used by the LIGO-Virgo-KAGRA collaboration. Finally, we re-analyse GW200129 with the improved data quality and show that the evidence for precession is still present.
A novel approach is presented for discovering the PDEs that govern the motion of satellites in space. The method is based on SINDy, a data-driven technique capable of identifying the underlying dynamics of complex physical systems from time series data. SINDy is utilized to uncover PDEs that describe the laws of physics in space, which are non-deterministic and influenced by various factors such as drag or the reference area (related to the attitude of the satellite). In contrast to prior works, the physically interpretable coordinate system is maintained, and no dimensionality reduction technique is applied to the data. By training the model with multiple representative LEO trajectories, encompassing various inclinations, eccentricities, and altitudes, and testing it with unseen orbital motion patterns, a mean error of around 140 km for the positions and 0.12 km/s for the velocities is achieved. The method offers the advantage of delivering interpretable, accurate, and complex models of orbital motion that can be employed for propagation or as inputs to predictive models for other variables of interest, such as atmospheric drag or the probability of collision in an encounter with a spacecraft or other space objects. In conclusion, the work demonstrates the promising potential of using SINDy to discover the equations governing the behaviour of satellites in space. The technique has been successfully applied to uncover PDEs describing the motion of satellites in LEO with high accuracy, and it possesses several advantages over traditional models, including the ability to provide physically interpretable, accurate, and complex models of orbital motion derived from high-entropy datasets.
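The core of SINDy is sequentially thresholded least squares (STLSQ) over a library of candidate terms. The following minimal sketch identifies a hypothetical 1-D toy system, dx/dt = -2x, standing in for the orbital dynamics (this is an illustration of the general technique, not the authors' actual setup or library):

```python
import numpy as np

# Toy trajectory of dx/dt = -2x, a stand-in for real dynamics data
t = np.linspace(0.0, 2.0, 201)
x = np.exp(-2.0 * t)
dxdt = np.gradient(x, t)                      # numerical time derivative

# Candidate library Theta(x) = [1, x, x^2]
Theta = np.column_stack([np.ones_like(x), x, x**2])

# Sequentially thresholded least squares: fit, zero out small terms, refit
xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
for _ in range(10):
    small = np.abs(xi) < 0.1                  # sparsity threshold
    xi[small] = 0.0
    active = ~small
    if active.any():
        xi[active], *_ = np.linalg.lstsq(Theta[:, active], dxdt, rcond=None)

print(np.round(xi, 3))                        # coefficient on x should be close to -2
```

The sparsity threshold is the key knob: it trades completeness of the library against interpretability of the recovered equation.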
In the past two decades, it has been convincingly argued that magnetospheric radio emissions of cyclotron maser origin can occur in exoplanetary systems, as they do for solar-system planets, with the same periodicity as the planetary orbit. These emissions are primarily expected at low frequencies (usually below 100 MHz; cf. Farrell et al., 1999; Zarka, 2007). The radio detection of exoplanets would considerably expand the field of comparative magnetospheric physics and star-planet plasma interactions (Hess & Zarka, 2011). We have developed a prediction code for exoplanetary radio emissions, PALANTIR: "Prediction Algorithm for star-pLANeT Interactions in Radio". The code was developed for the construction of an up-to-date and evolving target catalog, based on observed exoplanet physical parameters, radio emission theory, and magnetospheric physics embedded in scaling laws. It is based on, and extends, previous work by Grießmeier et al. (2007b). Using PALANTIR, we prepared an updated list of targets of interest for radio emission searches, and we compare our results with previous studies conducted with similar models (Grießmeier, 2017). As a next step, we aim to improve the code by adding new models and updating those already in use.
Current endeavours in exoplanet characterisation rely on atmospheric retrieval to quantify crucial physical properties of remote exoplanets from observations. However, the scalability and efficiency of the technique are under strain with increasing spectroscopic resolution and forward-model complexity. The situation has become more acute with the recent launch of the James Webb Space Telescope and other upcoming missions. Recent advances in Machine Learning provide optimisation-based Variational Inference as an alternative approach to approximate Bayesian posterior inference. In this investigation we combine a Normalising Flow-based neural network with our newly developed differentiable forward model, Diff-Tau, to perform Bayesian inference in the context of atmospheric retrieval. Using examples from real and simulated spectroscopic data, we demonstrate the superiority of our proposed framework: 1) training our neural network requires only a single observation; 2) it produces high-fidelity posterior distributions similar to sampling-based retrieval; 3) it requires 75% less forward-model computation to converge; and 4) we perform, for the first time, Bayesian model selection on a trained neural network. Our proposed framework contributes to the latest developments in neural-powered atmospheric retrieval. Its flexibility and speed hold the potential to complement sampling-based approaches on large and complex data sets in the future.
We propose a measure, the joint differential entropy of eigencolours, for determining the spatial complexity of exoplanets using only spatially unresolved light curve data. The measure can be used to search for habitable planets, based on the premise of a potential association between life and exoplanet complexity. We present an analysis using disk-integrated light curves from Earth, developed in previous studies, as a proxy for exoplanet data. We show that this quantity is distinct from previous measures of exoplanet complexity due to its sensitivity to spatial information that is masked by features with large mutual information between wavelengths, such as cloud cover. The measure has a natural upper limit and appears to avoid a strong bias toward specific planetary features. This makes it a candidate for being used as a generalisable measure of exoplanet habitability, since it is agnostic regarding the form that life could take.
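Under a Gaussian approximation, a joint differential entropy of this kind can be computed from the covariance of the multiband light curves; an orthogonal PCA rotation to eigencolours leaves the covariance determinant, and hence this entropy, unchanged. The sketch below on synthetic data is only illustrative of the qualitative behaviour the abstract describes (strong inter-band mutual information, e.g. from cloud cover, suppresses the measure); the Gaussian formula H = ½ log((2πe)^k det Σ) is an assumption here, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # number of time samples

# Case A: four bands dominated by one shared signal (high mutual information
# between wavelengths) plus small independent components.
shared = rng.normal(size=n)
bands_shared = np.column_stack([shared + 0.1 * rng.normal(size=n) for _ in range(4)])

# Case B: four bands carrying largely independent information.
bands_indep = rng.normal(size=(n, 4))

def joint_differential_entropy(X):
    """Gaussian estimate of the joint differential entropy of the bands
    (equal to that of their eigencolours, since PCA is a rotation):
    H = 0.5 * log((2*pi*e)^k * det(Sigma))."""
    Sigma = np.cov(X, rowvar=False)
    k = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (k * np.log(2.0 * np.pi * np.e) + logdet)

H_shared = joint_differential_entropy(bands_shared)
H_indep = joint_differential_entropy(bands_indep)
# Strong inter-band correlation collapses det(Sigma), so H_shared < H_indep
```

This reproduces the key property claimed for the measure: features with large mutual information between wavelengths contribute little, so the entropy tracks genuinely distinct spatial information.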
We revisit Duff, Okun, and Veneziano's divergent views on the number of fundamental constants and argue that the issue can be put to rest by taking spacetime as the starting point. This procedure disentangles the part of the resolution that depends on the assumed spacetime (whether relativistic or not) from the part that depends on the theories built over it. Defining the number of fundamental constants as the minimal number of independent standards necessary to express all observables, as assumed by Duff, Okun, and Veneziano, it is shown that the same units fixed by the apparatuses used to construct the spacetimes suffice to express all observables of the physical laws defined over them. As a result, the number of fundamental constants equals two in Galilei spacetime and one in relativistic spacetimes.
We address the question of whether thermal QCD at high temperature is chaotic from the ${\cal M}$-theory dual of QCD-like theories at intermediate coupling as constructed in arXiv:2004.07259, and find the dual to be unusually chaotic-like - hence the title ${\cal M}$-theoretic (too) Unusually Chaotic Holographic (${\cal M}$UCH) QCD. The EOM of the gauge-invariant combination $Z_s(r)$ of scalar metric perturbations is shown to possess an irregular singular point at the horizon radius $r_h$. Very interestingly, at a specific value of the imaginary frequency and momentum used to read off the analogs of the "Lyapunov exponent" $\lambda_L$ and "butterfly velocity" $v_b$, not only does $r_h$ become a regular singular point, but truncating the incoming-mode solution of $Z_s(r)$ as a power series around $r_h$ yields a "missing pole", i.e., $C_{n, n+1}=0,\ {\rm det}\ M^{(n)}=0$, $n\in\mathbb{Z}^+$, is satisfied for a single $n\geq3$ depending on the values of the string coupling $g_s$, the number of (fractional) $D3$-branes $(M)N$ and of flavor $D7$-branes $N_f$ in the parent type IIB setup (arXiv:0902.1540); e.g., for the QCD(EW-scale)-inspired $N=100, M=N_f=3, g_s=0.1$, one finds a missing pole at $n=3$. For integral $n>3$, truncating $Z_s(r)$ at ${\cal O}((r-r_h)^n)$ yields $C_{n, n+1}=0$ at order $n,\ \forall n\geq3$. Remarkably, (assuming preservation of isotropy in $\mathbb{R}^3$ even with the inclusion of higher-derivative corrections) the aforementioned gauge-invariant combination of scalar metric perturbations receives no ${\cal O}(R^4)$ corrections. Hence, the aforementioned analogs of $\lambda_L$ and $v_b$ are unrenormalized up to ${\cal O}(R^4)$ in ${\cal M}$ theory.
In gravity, spacelike separated regions can be dependent on each other due to the constraint equations. In this paper, we give a natural definition of subsystem independence and gravitational dressing of perturbations in classical gravity. We find that extremal surfaces, non-perturbative lumps of matter, and generic trapped surfaces are structures that enable dressing and subregion independence. This leads to a simple intuitive picture for why extremal surfaces tend to separate independent subsystems. The underlying reason is that localized perturbations on one side of an extremal surface contribute negatively to the mass on the other side, making the gravitational constraints behave as if there exist both negative and positive charges. Our results support the consistency of islands in massless gravity, shed light on the Python's lunch, and provide hints on the nature of the split property in perturbatively quantized general relativity. We also prove a theorem bounding the area of certain surfaces in spherically symmetric asymptotically de Sitter spacetimes from above and below in terms of the horizon areas of de Sitter and Nariai. This theorem implies that it is impossible to deform a single static patch without also deforming the opposite patch, provided we assume spherical symmetry and an energy condition.
We present the manifestly covariant canonical operator formalism of a Weyl-invariant (or equivalently, locally scale-invariant) gravity whose classical action consists of the well-known conformal gravity and Weyl-invariant scalar-tensor gravity, on the basis of the Becchi-Rouet-Stora-Tyutin (BRST) formalism. It is shown that there exists a Poincar\'e-like $\mathit{IOSp}(8|8)$ global symmetry as in Einstein's general relativity, which should be contrasted with the case of only the Weyl-invariant scalar-tensor gravity, where one has a more extended Poincar\'e-like $\mathit{IOSp}(10|10)$ global symmetry. This reduction of the global symmetry is attributed to the presence of the St\"{u}ckelberg symmetry.
We present the computation of logarithmic corrections to near-extremal black hole entropy from one-loop Euclidean gravity path integral around the near-horizon geometry. We extract these corrections employing a suitably modified heat kernel method, where the near-extremal near-horizon geometry is treated as a perturbation around the extremal near-horizon geometry. Using this method we compute the logarithmic corrections to non-rotating solutions in four dimensional Einstein-Maxwell and $\mathcal{N} = 2,4,8$ supergravity theories. We also discuss the limit that suitably recovers the extremal black hole results.
This paper investigates holographic torus correlators of generic operators at conformal infinity and at a finite cutoff in AdS$_3$ gravity coupled to a free scalar field. Using a near-boundary analysis and solving the gravitational boundary value problem, we solve Einstein's equations and calculate mixed correlators for massless and massive coupled scalar fields. The conformal Ward identity on the torus is reproduced holographically, which can be regarded as a consistency check. Further, recurrence relations for a specific class of higher-point correlators are derived, validating AdS$_3$/CFT$_2$ with non-trivial boundary topology. The two-point scalar correlator is computed explicitly on the thermal AdS$_3$ saddle, and the higher-point correlators associated with scalar and stress-tensor operators are explored.
Based on the previously formulated theory of spherical perturbations in a cosmological medium of self-gravitating, scalarly charged fermions with Higgs scalar interaction, and on the similarity properties of such models, the formation of seeds of supermassive black holes (SSBH) in the early Universe is studied. Using numerical simulation of the process, it is shown that the mass of the SSBH reaches a limiting value during the evolution, after which it begins to slowly decrease. The possible influence of nonlinearity on this process is discussed.
The Unruh vacuum is widely used as the quantum state describing black hole evaporation, since near the horizon it reproduces the physical state of a quantum field, the so-called in-vacuum, when the black hole is formed by gravitational collapse. We examine the relation between these two quantum states in the background spacetime of a Reissner-Nordstr\"om black hole (both extremal and non-extremal), highlighting similarities and striking differences.
Spacetime inversion symmetries such as parity and time reversal play a central role in physics, but they are usually treated as global symmetries. In quantum gravity there are no global symmetries, so any spacetime inversion symmetries must be gauge symmetries. In particular this includes $\mathcal{CRT}$ symmetry (in even dimensions usually combined with a rotation to become $\mathcal{CPT}$), which in quantum field theory is always a symmetry and seems likely to be a symmetry of quantum gravity as well. In this article we discuss what it means to gauge a spacetime inversion symmetry, and we explain some of the more unusual consequences of doing this. In particular we argue that the gauging of $\mathcal{CRT}$ is automatically implemented by the sum over topologies in the Euclidean gravity path integral, that in a closed universe the Hilbert space of quantum gravity must be a real vector space, and that in Lorentzian signature manifolds which are not time-orientable must be included as valid configurations of the theory. In particular we give an example of an asymptotically-AdS time-unorientable geometry which must be included to reproduce computable results in the dual CFT.
To first post-Newtonian order, if two test particles revolve in opposite directions about a massive, spinning body along circular equatorial orbits with the same radius, they take different times to return to the reference direction relative to which their motion is measured: this is the so-called gravitomagnetic clock effect. The satellite moving in the same sense as the rotation of the primary is slower, experiencing a retardation with respect to the case in which the primary does not spin, while the one circling in the opposite sense is faster, its orbital period being shorter than in the static case. The resulting time difference due to the stationary gravitomagnetic field of the central spinning body is proportional to the angular momentum per unit mass of the latter through a numerical factor which so far has been found to be $4\pi$. A numerical integration of the equations of motion of a fictitious test particle moving along a circular path in the equatorial plane of a hypothetical rotating object, including the gravitomagnetic acceleration to first post-Newtonian order, shows that the gravitomagnetic corrections to the orbital periods are in fact larger by a factor of $4$ in both the prograde and retrograde cases. This outcome, which makes the proportionality coefficient of the gravitomagnetic difference in the orbital periods of the two counter-revolving orbiters equal to $16\pi$, confirms an analytical calculation recently published by the present author.
We investigate the strong field lensing observables for the Damour-Solodukhin wormhole and examine how small the values of the deviation parameter $\lambda$ need to be to reproduce the observables of the Schwarzschild black hole. While the extremely tiny values of $\lambda$ indicated by matter accretion or Hawking evaporation are not disputed, it turns out that $\lambda$ could actually assume values considerably higher than those and still reproduce black hole lensing signatures. The lensing observations thus provide a surprising counterexample to the intuitive expectation that all experiments ought to lead to the mimicking of black holes for the same range of values of $\lambda$.
Stable massless wormholes are theoretically interesting in their own right as well as for astrophysical applications, especially as galactic halo objects. The study of gravitational lensing observables for such objects is therefore of importance, and we undertake it here by applying the parametrized post-Newtonian method of Keeton and Petters to the massless dyonic charged wormholes of Einstein-Maxwell-Dilaton field theory and to the massless Ellis wormhole of the Einstein minimally coupled scalar field theory. The paper exemplifies how the lensing signatures of two different solutions belonging to two different theories can be qualitatively similar from the observational point of view, with quantitative differences appearing depending on the parameter values. Surprisingly, an unexpected divergence appears in the correction to the differential time delay, which seems to call for a review of its original derivation.
We extend a recent work on weak-field first-order light deflection in MOdified Gravity (MOG) by comprehensively analyzing the actual observables of gravitational lensing in both the weak and strong field regimes. The static spherically symmetric black hole (BH) obtained by Moffat, which we call here the Schwarzschild-MOG (abbreviated SMOG), contains a repulsive Yukawa-like force characterized by the MOG parameter $\alpha>0$ that diminishes gravitational attraction. We point out a remarkable feature of the SMOG: it resembles a regular \textit{brane-world} BH in the range $-1<\alpha <0$, giving rise to a negative "tidal charge" $Q$ ($=\frac{1}{4}\frac{\alpha }{1+\alpha}$) interpreted as an imprint from the $5D$ bulk with an imaginary source charge $q$ in the brane. In this brane-world range, the Yukawa-like force of MOG is attractive, enhancing gravitational attraction. For $-\infty <\alpha <-1$, the SMOG represents a naked singularity. Specifically, we investigate the effect of $\alpha$, i.e. of Yukawa-type forces, on the weak-field (up to third PPN order) and strong-field lensing observables. For illustration, we consider the supermassive BH SgrA* with $\alpha =0.055$ in the weak field to quantify the deviation of the observables from GR, but in general we leave $\alpha$ unrestricted both in sign and in magnitude, so that future accurate lensing measurements, which are quite challenging, may constrain $\alpha$.
A recent trend of research indicates that not only massive but also massless (zero asymptotic Newtonian mass) wormholes can reproduce the post-merger initial ring-down gravitational waves characteristic of a black hole horizon. In the massless case, it is the non-zero charge of other fields, equivalent to what we call here the "Wheelerian mass", that is responsible for mimicking the ring-down quasi-normal modes. In this paper, we inquire whether the same Wheelerian mass can reproduce black hole observables in an altogether different experiment, viz., strong-field lensing. We examine two classes of massless wormholes, one in the Einstein-Maxwell-Dilaton (EMD) theory and the other in the Einstein-Minimally-coupled-Scalar field (EMS) theory. Observables such as the radius of the shadow, the image separation, and the magnification for the corresponding Wheelerian masses are compared with those of a black hole (an idealized SgrA* chosen for illustration), assuming that the three types of lenses share the same minimum impact parameter and distance from the observer. It turns out that, while the massless EMS wormholes can closely mimic the black hole in terms of strong-field lensing observables, the EMD wormholes show considerable differences due to the presence of the dilatonic charge. The conclusion is that masslessness alone is enough to closely mimic the Schwarzschild black hole strong-lensing observables in the EMS theory, but not in the EMD theory, where extra parameters also influence those observables. The motion of timelike particles is briefly discussed for completeness.
This paper is concerned with the global stability of plane wave solutions to the relativistic string equation under non-small perturbations. Under certain decay assumptions on the plane wave, we conclude that the perturbed system admits a globally smooth solution if the perturbation along the transversal direction is sufficiently small, while the perturbation along the travelling direction is allowed to be large. By choosing a gauge adapted to the plane wave solution, we deduce an equivalent Euler-Lagrange equation for the perturbation, whose quasilinear structure is reflected precisely in the induced geometry of the relativistic string. This enables a geometrically adapted and weighted energy argument for which robust estimates suffice. Moreover, due to the non-trivial background solution, the induced metric of the relativistic string involves linear perturbations with undetermined signs (rather than quadratic perturbations, as in the case of trivial background solutions), and hence a key observation is needed to guarantee that the energies associated with the multipliers are positive up to lower-order terms.
This work is divided into two parts. The first examines recent proposals for "witnessing" quantum gravity via entanglement from the point of view of Bronstein's original objection to a quantization of gravity. Using techniques from open quantum systems, we sketch how unavoidable decoherence from both inertial and gravitational backreaction between probe and detector could spoil the experimental detection of the quantization of gravity. We argue that this "failure" is actually an inherent feature of any quantum description that attempts to incorporate the equivalence principle exactly within quantum dynamics. In the second part, we speculate on how an exact realization of the equivalence principle might be implemented in an effective quantum field theory via the general covariance of correlators. While we are far from giving an explicit construction of such a theory, we point out some features and consequences of such a program.
The post-Newtonian (PN) formalism plays an integral role in the models used to extract information from gravitational wave data, but models that incorporate this formalism are inherently approximations. Disagreement between an approximate model and nature will produce mismodeling biases in the parameters inferred from data, introducing systematic error. We here carry out a proof-of-principle study of such systematic error by considering signals produced by quasi-circular, inspiraling black hole binaries through an injection and recovery campaign. In particular, we study how unknown, but calibrated, higher-order PN corrections to the gravitational wave phase impact systematic error in recovered parameters. As a first study, we produce injected data of non-spinning binaries as detected by a current, second-generation network of ground-based observatories and recover them with models of varying PN order in the phase. We find that the truncation of higher-order (beyond 3.5PN) corrections to the phase can produce significant systematic error even at the signal-to-noise ratios of current detector networks. We propose a method to mitigate systematic error by marginalizing over our ignorance of the waveform through the inclusion of higher-order PN coefficients as new model parameters. We show that this method can greatly reduce systematic error, at the cost of increased statistical error.
Based on string theory, loop quantum gravity, black hole physics, and other approaches to quantum gravity, physicists have proposed generalized uncertainty principle (GUP) modifications. In this work, we obtain exact solutions of Einstein's field equations when the GUP effect is taken into account, and these solutions describe GUP modifications of rotating black holes. We analyze two different ways of constructing GUP rotating black holes (Model $I$ and Model $II$). Model $I$ takes into account the modification of the mass by the GUP, i.e. the change of the mass due to the quantization of space, and the resulting GUP-rotating black hole metric (18) is similar in form to the Kerr black hole. Model $II$ takes into account the modification of the rotating black hole when the GUP acts as an external field, in which case the GUP behaves like an electric charge, and the resulting GUP-rotating black hole metric (19) is similar in form to the Kerr-Newman black hole. Requiring the GUP-rotating black holes (18) and (19) to be thermodynamically self-consistent, the functional relations for the GUP model parameters corresponding to Model $I$ and Model $II$ can be obtained. The difference between (18) and (19) in the spacetime structure provides a basis for probing the physical nature of GUP-rotating black holes through observation, which is of great significance for understanding the GUP modification of black holes.
We investigate false vacuum decay catalysed by black holes under the influence of the higher-order Gauss-Bonnet term. We study both the bubble nucleation and Hawking-Moss types of phase transition in arbitrary dimension. The equations of motion of the "bounce" solutions, in which bubbles nucleate around black holes of arbitrary dimension, are found in the thin-wall approximation, and the instanton action is computed. The headline result, that the tunnelling action for static instantons is the difference in entropy between the seed and remnant black holes, is shown to hold in arbitrary dimension. We also study the Hawking-Moss transition and find a picture similar to the Einstein case, with one curious five-dimensional exception (due to a mass gap). In four dimensions, we find as expected that the Gauss-Bonnet term only impacts topology-changing transitions, i.e. when vacuum decay removes the seed black hole altogether, or in a (Hawking-Moss) transition where a black hole is created. In the former case, topology-changing transitions are suppressed (for positive GB coupling $\alpha$), whereas the latter case results in an enhanced transition.
We develop semiclassical methods for studying bubble nucleation in models with parameters that vary slowly in time. Introducing a more general rotation of the time contour allows access to a larger set of final states, and typically a non-Euclidean rotation is necessary in order to find the most relevant tunneling solution. We work primarily with effective quantum mechanical models parametrizing tunneling along restricted trajectories in field theories, which are sufficient, for example, to study thin wall bubble nucleation. We also give one example of an exact instanton solution in a particular Kaluza-Klein cosmology where the circumference of the internal circle is changing in time.
There has recently been considerable interest in the question of whether, and under which conditions, accelerated cosmological expansion can arise in the asymptotic regions of field space of a $d$-dimensional EFT. We conjecture that such acceleration is impossible unless there exist metastable de Sitter vacua in more than $d$ dimensions. That is, we conjecture that `Asymptotic Acceleration Implies de Sitter' (AA$\Rightarrow$DS). Phrased negatively, we argue that the $d$-dimensional `No Asymptotic Acceleration' conjecture (a.k.a. the `strong asymptotic dS conjecture') follows from the de Sitter conjecture in more than $d$ dimensions. The key idea is that the relevant field-space asymptotics almost always correspond to decompactification and that the only positive energy contribution which decays sufficiently slowly in this regime is the vacuum energy of a higher-dimensional metastable vacuum. This result is in agreement with recent Swampland bounds on the potential in the asymptotics of field space from e.g. the species bound, but is significantly more constraining. As an intriguing observation, we note that the asymptotic expansion that arises from compactifying a de Sitter vacuum always satisfies, and can saturate, the bound from the Trans-Planckian Censorship Conjecture.
The power-law parametrization of the energy density spectrum of the gravitational wave (GW) background is a useful tool to study its physics and origin. While scalar-induced secondary gravitational waves (SIGWs) from some particular models fit the signal detected by the NANOGrav, Parkes Pulsar Timing Array, European Pulsar Timing Array, and Chinese Pulsar Timing Array collaborations better than GWs from supermassive black hole binaries (SMBHBs), we test the consistency of the data with the infrared part of the SIGW spectrum, which is largely model-independent. Through Bayesian analysis, we show that the infrared parts of SIGWs fit the data better than the GW background from SMBHBs. The results give tentative evidence for SIGWs.
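For illustration, a power-law parametrization of the spectrum, $\Omega_{\rm GW}(f) = A (f/f_{\rm ref})^{\gamma}$, becomes a straight line in log-log space, so a toy fit can be done with ordinary least squares. The amplitude, slope, reference frequency, and noise level below are illustrative placeholders, not values from the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated GW-background energy-density spectrum following a power law
# Omega_GW(f) = A * (f / f_ref)**gamma, with 5% lognormal scatter
f_ref = 1e-8                        # Hz (hypothetical reference frequency)
A_true, gamma_true = 1e-9, 1.8      # illustrative amplitude and slope
f = np.logspace(-9, -7, 30)         # nHz-band frequencies, PTA-like range
Omega = A_true * (f / f_ref) ** gamma_true * np.exp(0.05 * rng.normal(size=f.size))

# In log-log space the power law is linear:
# log Omega = log A + gamma * log(f / f_ref)
slope, intercept = np.polyfit(np.log(f / f_ref), np.log(Omega), 1)
gamma_hat, A_hat = slope, np.exp(intercept)
```

A full analysis like the one described here would replace this least-squares step with a Bayesian fit and compare models (e.g. SIGW infrared slope vs. SMBHB slope) via their evidences.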
We study black hole linear perturbation theory in a four-dimensional Schwarzschild (anti) de Sitter background. When dealing with a positive cosmological constant, the corresponding spectral problem is solved systematically via the Nekrasov-Shatashvili functions or, equivalently, classical Virasoro conformal blocks. However, this approach can be more complicated to implement for certain perturbations if the cosmological constant is negative. For these cases, we propose an alternative method to set up perturbation theory for both small and large black holes in an analytical manner. Our analysis reveals a new underlying recursive structure that involves multiple polylogarithms. We focus on gravitational, electromagnetic, and conformally coupled scalar perturbations subject to Dirichlet and Robin boundary conditions. The low-lying modes of the scalar sector of gravitational perturbations and its hydrodynamic limit are studied in detail.
We investigate the properties of the fermionic Fulling-Rindler vacuum for a massive Dirac field in a general number of spatial dimensions. As important local characteristics, the fermionic condensate and the expectation value of the energy-momentum tensor are evaluated. The renormalization is reduced to the subtraction of the corresponding expectation values for the Minkowski vacuum. It is shown that the fermion condensate vanishes for a massless field and is negative for nonzero mass. Unlike the case of scalar fields, the fermionic vacuum stresses are isotropic for the general case of massive fields. The energy density and the pressures are negative. For a massless field the corresponding spectral distributions exhibit thermal properties with the standard Unruh temperature; however, the density-of-states factor is not Planckian for a general number of spatial dimensions. Another interesting feature is that the thermal distribution is of the Bose-Einstein type in an even number of spatial dimensions. This feature has been observed previously in the response of a particle detector uniformly accelerating through the Minkowski vacuum. In an even number of space dimensions, the fermion condensate and the mean energy-momentum tensor coincide for the fields realizing the two inequivalent irreducible representations of the Clifford algebra. In the massless case, we also consider the vacuum energy-momentum tensor for Dirac fields in the conformal vacuum of the Milne universe, in the static open universe, and in the hyperbolic vacuum of de Sitter spacetime.
In this paper, we investigate a non-canonical scalar field model in the background dynamics of an anisotropic Locally Rotationally Symmetric (LRS) Bianchi type I universe, where gravity is minimally coupled to the scalar field, which plays the role of dark energy, while pressureless dust plays the role of dark matter. We perform a dynamical system analysis to characterize the cosmological evolution of the model, with and without interaction in the dark sector. First, we convert the evolution equations into an autonomous system of ordinary differential equations using a suitable choice of dimensionless variables normalized over the Hubble scale. We choose the scalar field coupling and potential such that the autonomous system reduces to a 2D system. Linear stability theory is applied to the critical points to determine their nature. From the analysis, we find some interesting cosmological scenarios, such as late-time scalar-field dominated solutions which evolve in the quintessence era but cannot solve the coincidence problem. Accelerated scaling attractors are also obtained that correspond to the late-phase evolution in agreement with present observational data; these solutions also provide possible mechanisms to alleviate the coincidence problem. A complete cosmic evolution is obtained from early inflation to a late-time dark energy-dominated phase, connected through a matter-dominated transient phase of the universe. Furthermore, we find that for different values of the interaction parameter $\alpha$, the evolutionary trajectories of the Hubble parameter and the distance modulus forecast by the model agree well with observational data sets.
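As a minimal sketch of the dynamical-systems workflow described above, one can take the standard canonical quintessence system with an exponential potential in a flat FRW background (a simpler stand-in for the paper's non-canonical anisotropic model), integrate the 2D autonomous system in e-folds N = ln a, and verify convergence to the scalar-field dominated critical point:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Expansion-normalized variables: x ~ scalar kinetic energy, y ~ potential
# energy (both divided by the Hubble scale); dust as the dark matter.
lam = 1.0  # slope of the exponential potential V ~ exp(-lam * phi)

def rhs(N, s):
    x, y = s
    dx = -3.0 * x + np.sqrt(6.0) / 2.0 * lam * y**2 + 1.5 * x * (1.0 + x**2 - y**2)
    dy = -np.sqrt(6.0) / 2.0 * lam * x * y + 1.5 * y * (1.0 + x**2 - y**2)
    return [dx, dy]

# Integrate from a matter-dominated-like initial state in N = ln(a)
sol = solve_ivp(rhs, (0.0, 60.0), [1e-3, 1e-3], rtol=1e-10, atol=1e-12)
x_f, y_f = sol.y[:, -1]

# Late-time attractor: the scalar-field dominated critical point
# (x, y) = (lam/sqrt(6), sqrt(1 - lam**2/6)), stable for lam**2 < 3,
# with effective equation of state w_eff = x**2 - y**2 = lam**2/3 - 1
w_eff = x_f**2 - y_f**2
```

For lam = 1 the trajectory lands on the quintessence-like attractor with w_eff = -2/3; classifying the fixed point analytically amounts to checking the eigenvalues of the Jacobian of `rhs` there, which is the "linear stability theory" step of the abstract.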
We construct entangled microstates of a pair of holographic CFTs whose dual semiclassical description includes big bang-big crunch AdS cosmologies in spaces without boundaries. The cosmology is supported by inhomogeneous heavy matter and it partially purifies the bulk entanglement of two disconnected auxiliary AdS spacetimes. We show that the island formula for the fine grained entropy of one of the CFTs follows from a standard gravitational replica trick calculation. In generic settings, the cosmology is contained in the entanglement wedge of one of the two CFTs. We then investigate properties of the cosmology-to-boundary encoding map, and in particular, its non-isometric character. Restricting our attention to a specific class of states on the cosmology, we provide an explicit, and state-dependent, boundary representation of operators acting on the cosmology. Finally, under genericity assumptions, we argue for a non-isometric to approximately-isometric transition of the cosmology-to-boundary map for ``simple'' states on the cosmology as a function of the bulk entanglement, with tensor network toy models of our setup as a guide.
In this paper, the study of canonical quantization of a free real massive scalar field in the Schwarzschild spacetime is continued. The normalization constants for the eigenfunctions of the corresponding radial equation are calculated, providing the necessary coefficients for the doubly degenerate scatteringlike states that are used in the expansion of the quantum field. It is shown that one can pass to a new type of states such that the spectrum of states with energies larger than the mass of the field splits into two parts. The first part consists of states that resemble properly normalized plane waves far away from the black hole, so they just describe the theory for an observer located in that area. The second part consists of states that live relatively close to the horizon and whose wave functions decrease when one goes away from the black hole. The appearance of the second part of the spectrum, which follows from the initial degeneracy of the scatteringlike states, is a consequence of the topological structure of the Schwarzschild spacetime.
We study the holographic interpretation of the bulk instability, i.e. the bulk Lyapunov exponent, in the motion of open classical bosonic strings in AdS black hole/brane/string backgrounds. In the vicinity of homogeneous and isotropic horizons the bulk Lyapunov exponent saturates the MSS chaos bound, but in fact has nothing to do with chaos, as our string configurations live in an integrable sector. In the D1-D5-p black string background, the bulk Lyapunov exponent is deformed away from the MSS value both by the rotation (the infrared deformation) and by the existence of an asymptotically flat region (the ultraviolet deformation). The dynamics is still integrable and has nothing to do with chaos (either in gravity or in field theory). Instead, the bulk Lyapunov scale exactly captures the values of the quasinormal mode frequencies. Therefore, the meaning of the bulk chaos is that it determines the thermal decay rate due to the coupling to the heat bath, i.e. the horizon.
In $F(R,R_{\mu\nu}R^{\mu\nu},R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})$ gravity, a general class of fourth-order theories of gravity, the multipole expansion of the metric outside a spatially compact source up to $1/c^3$ order is provided, and the closed-form expressions for the source multipole moments are all presented explicitly. Since the integrals involved in the multipole moments are performed only over the actual source distributions, our result yields a ready-to-use form of the metric. The expansion contains three separate parts. As in General Relativity, the massless tensor part, showing the Coulomb-like dependence, is characterized by the mass and spin multipole moments. By contrast, the massive scalar and tensor parts both bear a Yukawa-like dependence on the two massive parameters, respectively, and predict the appearance of six additional sets of source multipole moments.
This is a review article about neutrino mass and mixing and flavour model building strategies based on modular symmetry. After an introduction to neutrino mass and lepton mixing, we turn to the main subject of this review, namely a pedagogical introduction to modular symmetry as a candidate for family symmetry, from the bottom-up point of view. After an informal introduction to modular symmetry, we introduce the modular group, and discuss its fixed points and residual symmetry, assuming supersymmetry throughout. We then introduce finite modular groups of level $N$ and modular forms with integer or rational modular weights, corresponding to simple geometric groups or their double or metaplectic covers, including the most general finite modular groups and vector-valued modular forms, with detailed results for $N=2, 3, 4, 5$. The interplay between modular symmetry and generalized CP symmetry is discussed, deriving CP transformations on matter multiplets and modular forms, highlighting the CP fixed points and their implications. Compactification of extra dimensions generally leads to a number of moduli, and modular invariance with factorizable and non-factorizable multiple moduli based on symplectic modular invariance and automorphic forms is reviewed. Modular strategies for understanding fermion mass hierarchies are discussed, including the weighton mechanism, small deviations from fixed points, and texture zeroes. Then examples of modular models are discussed based on single modulus $A_4$ models, a minimal $S'_4$ model of leptons (and quarks), and a multiple moduli model based on three $S_4$ groups capable of reproducing the Littlest Seesaw model. We then extend the discussion to include Grand Unified Theories (GUTs) based on modular (flipped) $SU(5)$ and $SO(10)$. Finally we discuss top-down approaches, including eclectic flavour symmetry and moduli stabilisation.
We critically assess to what extent it makes sense to bound the Wilson coefficients of dimension-six operators. In the context of Higgs physics, we establish that a closely related observable, $c_H$, is well-defined and satisfies a two-sided bound. $c_H$ is derived from the low momentum expansion of the scattering amplitude, or the derivative of the amplitude at the origin with respect to the Mandelstam variable $s$, expressed as $M(H_iH_i\rightarrow H_jH_j)=c_H s +O(g_\text{SM}, s^{-2})$ where $g_\text{SM}$ represents all Standard Model couplings. This observable is non-dispersive and, as a result, not sign-definite. We also determine the conditions under which the bound on $c_H$ is equivalent to a bound on the dimension-six operator $O_H=\partial|H|^2\,\partial|H|^2$.
We present a simple set of power counting rules which allows us to easily estimate calculable instanton effects up to ${\cal O}(1)$ factors. We apply the resulting Instanton NDA to examine the effects of small instantons on various axion models. We confirm that mechanisms that increase the axion mass via small instantons generically also lead to an enhancement of misaligned instanton contributions to the axion potential, deepening the axion quality problem. For generic models, new sources of CP violation in the UV must be absent in order to raise the axion mass above the QCD prediction. However, we find that $Z_N$ and composite axions are UV-safe against these misalignment effects. Axion GUT models are also insensitive to UV contributions at the GUT scale, unless a very large number of extra states are introduced below this scale.
We posit that the distinct patterns observed in fermion masses and mixings are due to a minimally broken $\mathrm{U}(2)_{q+e}$ flavor symmetry acting on left-handed quarks and right-handed charged leptons, giving rise to an accidental $\mathrm{U}(2)^5$ symmetry at the renormalizable level without imposing selection rules on the Weinberg operator. We show that the symmetry can be consistently gauged by explicit examples and comment on realizations in $\mathrm{SU}(5)$ unification. Via a model-independent SMEFT analysis, we find that selection rules due to $\mathrm{U}(2)_{q+e}$ enhance the importance of charged lepton flavor violation as a probe, where significant experimental progress is expected in the near future.
We introduce a model of hadronization based on invertible neural networks that faithfully reproduces a simplified version of the Lund string model for meson hadronization. Additionally, we introduce a new training method for normalizing flows, termed MAGIC, that improves the agreement between simulated and experimental distributions of high-level (macroscopic) observables by adjusting single-emission (microscopic) dynamics. Our results constitute an important step toward realizing a machine-learning based model of hadronization that utilizes experimental data during training. Finally, we demonstrate how a Bayesian extension to this normalizing-flow architecture can be used to provide analysis of statistical and modeling uncertainties on the generated observable distributions.
Precision measurements of anomalous quartic couplings of electroweak gauge bosons allow us to search for deviations from the Standard Model predictions and for signals of new physics. Here, we obtain the constraints on anomalous quartic gauge couplings using the presently available data on the production of gauge-boson pairs via vector boson fusion. We work in the Higgs effective theory framework and obtain the present bounds on the operators' Wilson coefficients. Anomalous quartic gauge boson couplings lead to rapidly growing cross sections, and we discuss the impact of a unitarization procedure on the attainable limits.
We propose a new regime of minimal QCD axion dark matter that lies between the pre- and post-inflationary scenarios, such that the Peccei-Quinn (PQ) symmetry is restored only on sufficiently large spatial scales. This leads to a novel cosmological evolution, in which strings and domain walls re-enter the horizon and annihilate later than in the ordinary post-inflationary regime, possibly even after the QCD crossover. Such dynamics can occur if the PQ symmetry is restored by inflationary fluctuations, i.e. the Hubble parameter during inflation $H_I$ is larger than the PQ breaking scale $f_a$, but it is not thermally restored afterwards. Solving the Fokker-Planck equation, we estimate the number of inflationary e-folds required for the PQ symmetry to be, on average, restored. Moreover, we show that, in large parts of parameter space where the radial mode is displaced from the minimum by de Sitter fluctuations, a string network forms due to the radial mode oscillating over the top of its potential after inflation. In both cases we identify order-one ranges in $H_I/f_a$ and in the quartic coupling $\lambda$ of the PQ potential that lead to the late-string dynamics. In this regime the cosmological dark matter abundance can be reproduced for axion decay constants as low as the astrophysical constraint $O(10^8)$ GeV, corresponding to axion masses up to $10^{-2}~{\rm eV}$, and with miniclusters with masses as large as $O(10)M_\odot$.
We find an exact solution of the Cahn eikonal model, which describes Coulomb-nuclear interference in elastic scattering of charged hadrons. The cases of both point-like and extended particles equipped with electromagnetic form factors are considered. According to the solution obtained, the Coulomb-nuclear contributions are not exponentiated and cannot be added to the Coulomb phase. At the same time, the $O(\alpha)$-approximation of the amplitude is ambiguous. We also consider a model with multiple Coulomb scattering under the assumption that Coulomb scattering occurs on uncorrelated charged partons. In this case, in the approximation of a large number of partons, the scattering can be described in terms of an effective potential (the mean field) resulting from the interaction of multiparton systems. Such a scattering pattern of the protons is possible in principle, but at extremely high energies.
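To make the eikonal framework behind this abstract concrete, the sketch below evaluates a toy eikonal amplitude $f(q)=\int_0^\infty b\,db\,J_0(qb)\,[1-e^{i\chi(b)}]$ for a purely absorptive Gaussian profile $i\chi(b)=-\chi_0 e^{-b^2/2B}$, using only the standard library. This is an illustration of the general eikonal representation, not the Cahn model itself; the parameters `chi0` and `B` are illustrative assumptions.

```python
import math

def j0(x, n=400):
    # Bessel J0 via its integral representation J0(x) = (1/pi) * int_0^pi cos(x sin t) dt
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def eikonal_amplitude(q, chi0=0.1, B=1.0, bmax=10.0, n=2000):
    # Toy eikonal amplitude with a purely absorptive Gaussian profile:
    # f(q) = int_0^bmax b db J0(qb) [1 - exp(-chi0 * exp(-b^2/(2B)))]
    # (midpoint rule; chi0 and B are illustrative, not fitted values)
    h = bmax / n
    s = 0.0
    for k in range(n):
        b = (k + 0.5) * h
        s += b * j0(q * b) * (1.0 - math.exp(-chi0 * math.exp(-b * b / (2.0 * B))))
    return s * h
```

In the Born (weak-profile) limit the forward amplitude reduces to $\chi_0 B$, which the numerical result approaches for small `chi0`; non-exponentiation effects of the kind discussed in the abstract show up precisely when such a linearized treatment fails.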
The effective potential has been previously calculated through three-loop order, in Landau gauge, for a general renormalizable theory using dimensional regularization. However, dimensional regularization is not appropriate for softly broken supersymmetric gauge theories, because it explicitly violates supersymmetry. In this paper, I obtain the three-loop effective potential using a supersymmetric regulator based on dimensional reduction. Checks follow from the vanishing of the effective potential in examples with supersymmetric vacua, and from renormalization scale invariance in examples for which supersymmetry is broken, either spontaneously or explicitly by soft terms. As byproducts, I obtain the three-loop Landau gauge anomalous dimension for the scalar component of a chiral supermultiplet, and the beta function for the field-independent vacuum energy.
We consider a simple setup with a dark sector containing dark electrons charged under an abelian $U(1)_D$ gauge symmetry. We show that if the massless dark photon associated with the $U(1)_D$ is produced during inflation in such a way as to form a classical dark electric field, then dark electron-positron pairs are also produced close to the end of inflation via the Schwinger mechanism. Their mass is larger than the Hubble scale so they are non-relativistic at production and remain so for the rest of their cosmic evolution following reheating. They can account for the dark matter abundance today in scenarios with a high scale of inflation and efficient reheating. This points to a very heavy mass for the dark electrons of order $10^{14}$ GeV. Even with a dark gauge coupling comparable to the visible one of electromagnetism, the dark electrons are so heavy that they do not thermalize with the dark photons throughout their cosmic history. Assuming negligible kinetic mixing with the visible $U(1)$ the dark electrons remain decoupled from the Standard Model thermal bath as well.
There is growing interest in the development of a muon collider that would make it possible to produce lepton collisions at energies of several TeV. Among other areas, such a machine could contribute significantly to electroweak gauge boson, Higgs boson and top quark physics. In this work we pay attention to the latter, in particular to effective flavor-changing top-quark interactions. We discuss two flavor-changing $t \bar q$ ($q=u,c$) production processes that can be a good probe of the dimension-six top quark and gauge boson operators in the SMEFT. We consider all nine operators that can generate flavor-changing top quark couplings. After comparing with the current LHC limits, we identify two of them that have the highest sensitivity. On the other hand, for two others the sensitivity is so low that they can be safely discarded from future studies.
A number of discrepancies have emerged between lattice computations and data-driven dispersive evaluations of the RBC/UKQCD intermediate-window hadronic contribution to the muon anomalous magnetic moment. It is therefore interesting to obtain data-driven estimates for the light-quark-connected and strange-plus-disconnected components of this window quantity, allowing for a more detailed comparison between the lattice and data-driven approaches. The aim of this paper is to provide these estimates, extending the analysis to several other window quantities, including two windows designed to focus on the region in which the two-pion contribution is dominant. Clear discrepancies are observed for all light-quark-connected contributions considered, while good agreement with lattice results is found for strange-plus-disconnected contributions to the quantities for which corresponding lattice results exist. The largest of these discrepancies is that for the RBC/UKQCD intermediate window, where, as previously reported, our data-driven result, $a_\mu^{W1,{\rm lqc}}=198.9(1.1)\times 10^{-10}$, is in significant tension with the results of 8 different recent lattice determinations. Our strategy is the same as recently employed in obtaining data-driven estimates for the light-quark-connected and strange-plus-disconnected components of the full leading-order hadronic vacuum polarization contribution to the muon anomalous magnetic moment. Updated versions of those earlier results are also presented, for completeness.
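For context on what "window" means here: the RBC/UKQCD intermediate window is defined by a smooth weight in Euclidean time, a smeared step function with the commonly quoted parameters $t_0 = 0.4$ fm, $t_1 = 1.0$ fm, $\Delta = 0.15$ fm. The sketch below is a minimal illustration of that weight, not the paper's analysis code; in actual evaluations this weight multiplies the Euclidean-time integrand of the hadronic vacuum polarization contribution.

```python
import math

def window_weight(t_fm, t0=0.4, t1=1.0, delta=0.15):
    # Smeared step-function weight defining the RBC/UKQCD intermediate window:
    # w(t) = Theta(t, t0) - Theta(t, t1), with
    # Theta(t, ts) = [1 + tanh((t - ts)/Delta)] / 2
    theta = lambda t, ts: 0.5 * (1.0 + math.tanh((t - ts) / delta))
    return theta(t_fm, t0) - theta(t_fm, t1)
```

The weight is close to 1 inside the window (e.g. near $t = 0.7$ fm) and falls smoothly to 0 outside it, which is why short- and long-distance systematics are suppressed in this quantity.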
Axion-like particles (ALPs) are promising dark matter candidates. Their signals in direct detection experiments arise from the well-known inverse Primakoff effect or the inverse Compton scattering of the ALPs with the electron. In this paper, we revisit the direct detection of ALPs by carefully considering, for the first time, the interference between the inverse Primakoff amplitude and the inverse Compton amplitude in the scattering process $a+e \to e+\gamma$. We show that the interference term becomes the dominant contribution to the scattering for large ALP energies. Given the new analytical formula, signals or constraints of ALP couplings in various projected experiments are investigated. Our results show that these experiments may put strong constraints on ALP couplings for relatively heavy ALPs. We further study projected constraints on the ALP from the JUNO experiment, which yields competitive constraints on ALP couplings with a ten-year exposure.
Analytical solutions to the microscopic Boltzmann equation are useful in testing the applicability and accuracy of macroscopic hydrodynamic theory. In this work, we present exact solutions of the relativistic Boltzmann equation, based on a new family of exact solutions of the relativistic ideal hydrodynamic equations [Phys. Rev. C 105, L021902]. To the best of our knowledge, this is the first exact solution that allows either symmetric or asymmetric longitudinal expansion with broken boost invariance.
We study the photoproduction process of dileptons in heavy-ion collisions at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies. The equivalent photon approximation, which equates the electromagnetic field of high-energy charged particles to a virtual photon flux, is used to calculate the dilepton production processes. The numerical results demonstrate that the experimental study of dileptons in ultra-peripheral collisions is feasible at RHIC and LHC energies.
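The equivalent photon approximation has a standard closed form for a point-like charge; the sketch below evaluates the Weizsäcker-Williams photon number spectrum with an impact-parameter cutoff, using stdlib-only modified Bessel functions. This is a generic textbook formula, not the paper's calculation, and the charge $Z$, Lorentz factor $\gamma$, and cutoff `b_min_fm` used in the example are illustrative assumptions.

```python
import math

def bessel_k(n, x, tmax=30.0, steps=4000):
    # Modified Bessel K_n via the integral representation
    # K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt   (midpoint rule)
    h = tmax / steps
    s = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return s * h

def epa_photon_spectrum(omega, Z, gamma, b_min_fm):
    # dN/domega (per GeV) for a point charge Z with impact parameters b > b_min:
    # the standard equivalent-photon (Weizsacker-Williams) spectrum,
    # with xi = omega * b_min / (gamma * hbar*c).
    alpha, hbar_c = 1.0 / 137.036, 0.1973  # fine-structure constant; GeV*fm
    xi = omega * b_min_fm / (gamma * hbar_c)
    k0, k1 = bessel_k(0, xi), bessel_k(1, xi)
    return (2.0 * Z ** 2 * alpha / (math.pi * omega)) * (
        xi * k0 * k1 - 0.5 * xi ** 2 * (k1 ** 2 - k0 ** 2))
```

The spectrum falls steeply with photon energy, which is why ultra-peripheral dilepton photoproduction is dominated by low-energy photons and scales strongly with $Z^2$.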
We give a new and simple proof of the inconsistency of the Bethe-West-Yennie parametrization for Coulomb-nuclear interference.
From its humble beginnings to groundbreaking discoveries, the journey of particle physics has revolutionized our understanding of the universe. Its progression, from the discovery of new particles to gauge symmetries and a succession of successful theories describing the underlying makeup of matter and the fundamental forces that govern it, led to the establishment of the Standard Model (SM). The discovery of neutrino oscillations compelled scientists to explore theories that extend beyond the Standard Model, opening up new frontiers in particle physics. The paradigm has since shifted toward a single comprehensive theory at the scale of 10^14 GeV. This article traces the journey of particle physics from the basic constituents of matter to the theory of everything.
We propose a dark gauge symmetry $U(1)_D$ under which only the three right-handed neutrinos $\nu_{1,2,3R}$ transform nontrivially. Anomaly cancellation demands the dark charges $D=0,-1,+1$ for $\nu_{1,2,3R}$, respectively. The dark charge is broken by two units down to a dark parity, i.e. $U(1)_D\to P_D=(-1)^D$, which stabilizes a dark matter candidate. Interestingly, the model manifestly supplies neutrino masses via joint seesaw and scotogenic mechanisms, called scotoseesaw.
We compute all helicity amplitudes for the scattering of five partons in two-loop QCD in all the relevant flavor configurations, retaining all contributing color structures. We employ tensor projection to obtain helicity amplitudes in the 't Hooft-Veltman scheme starting from a set of primitive amplitudes. Our analytic results are expressed in terms of massless pentagon functions, and are easy to evaluate numerically. These amplitudes provide important input to investigations of collinear-factorization breaking and to studies of the multi-Regge kinematics regime.
In this paper we study neutrino masses and mixings by adding a scalar triplet $\eta$ to the particle content of the minimal inverse seesaw. We realise this extension of the minimal inverse seesaw by implementing the modular group $\Gamma(3)$, isomorphic to the non-abelian discrete symmetry group $A_4$. We also use a $Z_3$ symmetry group to restrain certain interaction terms in the Lagrangian of the model. We study the baryon asymmetry of the universe, neutrinoless double-beta decay and dark matter in our work. In order to check the consistency of our model with various experimental constraints, we calculate the effective mass, the relic density and baryogenesis via leptogenesis. Interestingly, we find our model quite compatible with the experimental bounds; it is also successful in producing the neutrino masses and mixings in the 3$\sigma$ range.
In this contribution we describe a recent study focused on the lattice calculation of inclusive decay rates of heavy mesons. We show how the inclusive calculation can be achieved starting from appropriately normalised four-point lattice correlation functions. The correlators used in this project come from gauge ensembles provided by the JLQCD and ETM collaborations. An essential point of this method is the extraction of spectral densities from lattice correlators, which is obtained using two of the most recent approaches in the literature. The lattice results are compared with the analytic predictions of the operator-product expansion (OPE), with which a remarkable agreement is found. This study represents the first step towards a full lattice QCD study of inclusive semileptonic decays of heavy mesons.
The anomalous dimension of the heavy-light quark current in HQET is calculated up to four loops. The N$^3$LL perturbative correction to $f_B/f_D$ is obtained.
The Fermilab Proton-Improvement-Plan-II (PIP-II) is being implemented in order to support the precision neutrino oscillation measurements at the Deep Underground Neutrino Experiment, the U.S. flagship neutrino experiment. The PIP-II LINAC is presently under construction and is expected to provide 800 MeV protons with 2 mA current. This white paper summarizes the outcome of the first workshop, held May 10-13, 2023, on exploiting this capability for new physics opportunities in kinematic regimes that are unavailable to other facilities, in particular through a potential beam dump facility implemented at the end of the LINAC. Various new physics opportunities were discussed over a wide kinematic range, from the eV scale to the keV and MeV scales. We also emphasize that the timely establishment of the beam dump facility at Fermilab is essential to exploit these new physics opportunities.
The resummation calculation (ResBos) is a widely used tool for the simulation of single vector boson production at colliders. In this work, we develop a significant improvement over the ResBos code by increasing the accuracy from NNLL+NLO to N${}^3$LL+NNLO, and release the ResBos v2.0 code. Furthermore, we propose a new non-perturbative function that includes information about the rapidity of the system (IFY). The IFY functional form was fitted to data from fixed target experiments, the Tevatron, and the LHC. We find that the non-perturbative function has mild rapidity dependence based on the results of the fit. Finally, we investigate the effects of this increased precision on the $W$ boson measurement by CDF and its impact on future LHC measurements.
The lightest neutralino ($\tilde{\chi}_1^0$) is a good Dark Matter (DM) candidate in the R-parity conserving Minimal Supersymmetric Standard Model (MSSM). In this work, we consider the light higgsino-like neutralino as the Lightest Stable Particle (LSP), thanks to a rather small higgsino mass parameter $\mu$. We then estimate the prominent radiative corrections to the neutralino-neutralino-Higgs boson vertices. We show that for a higgsino-like $\tilde{\chi}_1^0$, these corrections can significantly influence the spin-independent direct detection cross-section, even contributing close to 100\% in certain regions of the parameter space. These corrections, therefore, play an important role in deducing constraints on the mass of the higgsino-like lightest neutralino DM, and thus on the $\mu$ parameter.
We compute the next-to-leading order (NLO) corrections to the vertices where a pair of the lightest neutralinos couples to the CP-even (light or heavy) Higgs scalars. In particular, the lightest neutralino is assumed to be a dominantly Bino-like mixed state, composed of Bino and Higgsino or Bino, Wino, and Higgsino. After computing all the three-point functions in the electroweak MSSM, we detail the contributions from the counterterms that arise in renormalizing these vertices at one-loop order. The amendment of the renormalized vertices impacts the spin-independent direct detection cross-sections of the scattering of nucleons with dark matter. We perform a comprehensive numerical scan over the parameter space where all the points satisfy the present B-physics constraints and accommodate the muon's anomalous magnetic moment. We then exemplify a few benchmark points which are consistent with the present searches for supersymmetric particles. After including the renormalized one-loop vertices, the spin-independent DM-nucleon cross-sections may be enhanced by up to $20\%$ compared to the tree-level results. Finally, with the NLO cross-section, we use the recent LUX-ZEPLIN (LZ) results on neutralino-nucleon scattering to display the relative rise in the lowest allowed band of the Higgsino mass parameter in the $M_1-\mu$ plane of the electroweak MSSM.
Using data consisting of top quarks produced with additional final leptons collected by the CMS detector at a center-of-mass energy of $\sqrt{s}=$13 TeV from 2016 to 2018 (138 fb$^{-1}$), a search for beyond standard model (BSM) physics is presented. The BSM physics is probed in the context of Effective Field Theory (EFT) by parameterizing potential new physics effects in terms of 26 dimension-six EFT operators. The data are categorized based on lepton multiplicity, total lepton charge, jet multiplicities, and b-tagged jet multiplicities. To gain further sensitivity to potential new physics (NP) effects, events in each jet category are binned using kinematic differential distributions. A simultaneous fit to data is performed to put constraints on the 26 operators. The results are consistent with the standard model prediction.
We investigate the speed of sound and polytropic index of quantum chromodynamics (QCD) matter in the full phase diagram based on a 3-flavor Polyakov-looped Nambu-Jona-Lasinio (pNJL) model. The speed of sound and polytropic index in both the isothermal and adiabatic cases show a dip structure on the low-chemical-potential side of the chiral phase transition boundary, and these quantities reach their global minima at the critical endpoint (CEP) without vanishing completely, with the adiabatic values slightly greater than the isothermal ones. Unlike the speed of sound, the polytropic index also exhibits a peak around the chiral phase transition boundary. Along hypothetical chemical freeze-out lines, the speed of sound decreases rapidly near the CEP, followed by a small spinodal behavior, while the polytropic index, especially in the isothermal case, exhibits a more pronounced dip structure, nearly reaching zero, as it approaches the CEP.
Motivated by the first observation of the double-charm tetraquark $T_{cc}^+(3875)$ by the LHCb Collaboration, we investigate the nature of $T_{cc}^+$ as an isoscalar $DD^*$ hadronic molecule in a meson-exchange potential model incorporating coupled-channel effects and three-body unitarity. The $D^0D^0\pi^+$ invariant mass spectrum can be well described and the $T_{cc}^+$ pole structure can be precisely extracted. Under the hypothesis that the interactions between the heavy flavor hadrons can be saturated by the light meson-exchange potentials, the near-threshold dynamics of $T_{cc}^+$ can shed light on the binding of its heavy-quark spin symmetry (HQSS) partners $DD$ and $D^*D^*$ ($I=0$) and on the nature of other heavy hadronic molecule candidates such as $X(3872)$ and $Z_c(3900)$ in the charmed-anticharmed systems. The latter states can be related to $T_{cc}^+$ in the meson-exchange potential model with limited assumptions based on the SU(3) flavor symmetry relations. The combined analysis, on the one hand, indicates the HQSS breaking effects among those HQSS partners, and on the other hand, highlights the role played by the short- and long-distance dynamics for the near-threshold $D^{(*)}D^{(*)}$ and $D\bar{D}^{(*)}+c.c.$ systems.
We find a new utility of neutrons, usually treated as an experimental nuisance causing unwanted background, in probing new physics signals. They can either be radiated from neutrons (neutron bremsstrahlung) or appear through secondary particles from neutron-on-target interactions, dubbed "neutron beam dump". As a concrete example, we take the FASER/FASER2 experiment as a "factory" of high-energy neutrons that interact with the iron dump. We find that neutron-initiated bremsstrahlung contributions are comparable to proton-initiated ones, in terms of the resulting flux and the range of couplings that can be probed. The neutron bremsstrahlung can be used to probe dark gauge bosons with non-zero neutron coupling. In particular, we investigate protophobic gauge bosons and find that FASER/FASER2 can probe new parameter space. We also illustrate the possibility of neutron-induced secondary particles by considering axion-like particles with electron couplings. We conclude that the physics potential of FASER/FASER2 in terms of new physics searches can be greatly extended and improved with the inclusion of neutron interactions.
We investigate the quark-level effective operators related to the neutrinoless double beta $(0\nu\beta\beta)$ decay process, and their ultraviolet completions relevant to chiral enhancement effects at the hadronic level. We classify several kinds of leptoquark models, matching to different standard model effective operators. Assuming weakly-coupled new physics, we find the ongoing $0\nu\beta\beta$-decay experiments are sensitive to a new physics scale of around $2\sim 4$ TeV, which is within the reach of LHC searches. We discuss the discovery potential of such resonances in the same-sign dilepton channels at the LHC. The direct LHC searches and indirect $0\nu\beta\beta$-decay searches are therefore complementary to each other in testing the UV completions of the effective operators for $0\nu\beta\beta$ decay.
We compute the two-loop helicity amplitudes for the scattering of five gluons, including all contributions beyond the leading-color approximation. The analytic expressions are represented as linear combinations of transcendental functions with rational coefficients, which we reconstruct from finite-field samples obtained with the numerical unitarity method. Guided by the requirement of removing unphysical singularities, we find a remarkably compact generating set of rational coefficients, which we are able to display entirely in the manuscript. We implement our results in a public code, which provides efficient and reliable numerical evaluations for phenomenological applications.
We study low-energy scattering of spin-1/2 baryons from the perspective of quantum information science, focusing on the correlation between entanglement minimization and the appearance of accidental symmetries. The baryons transform as an octet under the SU(3) flavor symmetry and their interactions below the pion threshold are described by contact operators in an effective field theory (EFT) of QCD. Despite there being 64 channels in the 2-to-2 scattering, only six independent operators in the EFT are predicted by SU(3). We show that successive entanglement minimization in SU(3)-symmetric channels is correlated with increasingly large emergent symmetries in the EFT. In particular, we identify scattering channels whose entanglement suppression is indicative of emergent SU(6), SO(8), SU(8), and SU(16) symmetries. We also observe the appearance of non-relativistic conformal invariance in channels with unnaturally large scattering lengths. Improved precision from lattice simulations could help determine the degree of entanglement suppression, and consequently the amount of accidental symmetry, in low-energy QCD.
We propose a novel effect that accounts for the photon emission from a quark-gluon plasma in the presence of a weak external magnetic field. Although the weak magnetic photon emission from the quark-gluon plasma only leads to a small correction to the photon production rate, the induced photon spectrum can be highly azimuthally anisotropic, as a consequence of the coupled effect of the magnetic field and the longitudinal dynamics in the background medium. Using a realistic medium evolution containing a tilted fireball configuration, the direct-photon elliptic flow measured in experiments is reproduced. By comparison with the experimental data on direct-photon elliptic flow, the magnitude of the magnetic field in heavy-ion collisions before 1 fm/c can be extracted. For the top RHIC collision energy, right after the pre-equilibrium evolution, $|eB|$ is found to be no larger than a few percent of the pion mass squared.
In this study, we analyzed the weak decays induced by $J^P = \frac{1}{2}^{+} \to \frac{3}{2}^{-} $ transitions within the light-cone sum rules. Specifically, semileptonic decays of the bottom baryons into the P-wave baryons $\Lambda_b \to \Lambda_c(2625) \ell \nu_l$ and $\Xi_b \to \Xi_c(2815) \ell \nu_l$ as well as nonleptonic $\Lambda_b \to \Lambda_c(2625) \pi (\rho)$ and $\Xi_b \to \Xi_c(2815) \pi (\rho)$ decays are investigated. The form factors for the considered transitions are obtained within the sum rules method. With the calculated form factors, the decay widths of the processes are determined. Up to now, only the decay width for $\Lambda_b^0 \to \Lambda_c^+ \mu^- \nu_\mu$ has been measured among the considered decays, and we observe that our finding is quite compatible with the measurement. We also compare our results with the predictions of other approaches.
The formation of weakly bound clusters in the hot and dense environment at midrapidity is one of the surprising phenomena observed experimentally in heavy-ion collisions, from low center-of-mass energies of a few GeV up to ultra-relativistic energies of several TeV. Three approaches have been advanced to describe the cluster formation: coalescence at kinetic freeze-out, cluster formation during the entire heavy-ion collision by potential interaction between nucleons, and deuteron production by hadronic reactions. We identify experimental observables which can discriminate between these production mechanisms for deuterons.
We investigate elastic photon-proton and photon-photon scattering in a holographic QCD model, focusing on the Regge regime. Considering the contributions of Pomeron and Reggeon exchange, the total and differential cross sections are calculated. While our model involves several parameters, by virtue of the universality of the Pomeron and Reggeon, for most of them the values determined in the preceding study of proton-proton and proton-antiproton scattering can be employed. Once the two adjustable parameters, the Pomeron-photon and Reggeon-photon coupling constants, are determined from the experimental data on the total cross sections, both cross sections can be predicted in a wide kinematic region, from the GeV to the TeV scale. We show that the total cross section data can be well described within the model, and our predictions for the photon-proton differential cross section are consistent with the data.
We report results of a search for dark-matter-nucleon interactions via a dark mediator using optimized low-energy data from the PandaX-4T liquid xenon experiment. With the ionization-signal-only data and utilizing the Migdal effect, we set the most stringent limits on the cross section for dark matter masses ranging from 30~$\rm{MeV/c^2}$ to 2~$\rm{GeV/c^2}$. Under the assumption that the dark mediator is a dark photon that decays into scalar dark matter pairs in the early Universe, we rule out a significant portion of the parameter space of such a thermal relic dark-matter model.
We present rules for computing scattering amplitudes of charged scalar matter and photons, where the photon has non-zero spin Casimir $\rho$, and is therefore a continuous spin particle (CSP). The amplitudes reduce to familiar scalar QED when $\rho\rightarrow 0$. As a concrete example, we compute the pair annihilation and Compton scattering amplitudes in this theory and comment on their physical properties, including unitarity and scaling behavior at small and large $\rho$.
The calculation of the neutron electric dipole moment within effective field theories for physics beyond the Standard Model requires non-perturbative hadronic matrix elements of effective operators composed of quark and gluon fields. In order to use input from lattice computations, these matrix elements must be translated from a scheme suitable for lattice QCD to the minimal-subtraction scheme used in the effective-field-theory framework. The accuracy goal in the context of the neutron electric dipole moment necessitates at least a one-loop matching calculation. Here, we provide the one-loop matching coefficients for the $CP$-odd three-gluon operator between two different minimally subtracted 't Hooft-Veltman schemes and the gradient flow. This completes our program to obtain the one-loop gradient-flow matching coefficients for all $CP$-violating and flavor-conserving operators in the low-energy effective field theory up to dimension six.
We revisit the spin effects induced by thermal vorticity by calculating them directly from the spin-dependent distribution functions. For spin-1/2 particles, we give the polarization up to first order in thermal vorticity and compare it with the usual result calculated from the spin vector. For spin-1 particles, we find that all the off-diagonal elements vanish and there is no spin alignment up to first order in thermal vorticity. We present the spin alignment arising at second order in thermal vorticity. We also find that the spin effects for both Dirac and vector particles receive an extra contribution when the spin direction is associated with the particle's momentum.
We review the current status and implications of the anomalies (i.e. deviations from the Standard Model predictions) in semi-leptonic $B$ meson decays, both in the charged and in the neutral current. In $b\to s\ell^+\ell^-$ transitions significant tensions between measurements and the Standard Model predictions exist. They are most pronounced in the branching ratios ${\cal B}_{B \to K\mu^+\mu^-}$ and ${\cal B}_{B_s\to\phi\mu^+\mu^-}$ (albeit quite dependent on the form factors used) as well as in angular observables in $B\to K^*\mu^+\mu^-$ (the $P_5^\prime$ anomaly). Because the measurements of ${\cal B}_{B_s\to \mu^+\mu^-}$ and of the ratios $R_K$ and $R_{K^*}$ agree reasonably well with the SM predictions, this points towards (dominantly) lepton flavour universal New Physics coupling vectorially to leptons, i.e. contributions to $C_9^{\rm U}$. In fact, global fits prefer this scenario over the SM hypothesis by $5.8\sigma$. Concerning $b\to c\tau\nu$ transitions, $R(D)$ and $R(D^*)$ suggest constructive New Physics at the level of $10\%$ (w.r.t. the Standard Model amplitude) with a significance above $3\sigma$. We discuss New Physics explanations of both anomalies separately as well as possible combined explanations. In particular, a left-handed vector current solution to $R(D^{(*)})$, either via the $U_1$ leptoquark or the combination of the scalar leptoquarks $S_1$ and $S_3$, leads to an effect in $C_9^{\rm U}$ via an off-shell penguin with the right sign and magnitude and a combined significance (including a tree-level effect resulting in $C_{9\mu}^\mathrm{V}=-C_{10\mu}^\mathrm{V}$ and $R(D^{(*)})$) of $6.3\sigma$. Such a scenario can be tested with $b \to s \tau^+\tau^-$ decays. Finally, we point out an interesting possible correlation of $R(D^{(*)})$ with non-leptonic $B$ anomalies.
We analyse in detail the QED corrections to the total decay width and the moments of the electron energy spectrum of the inclusive semi-leptonic $B \to X_c e \nu$ decay. Our calculation includes short-distance electroweak corrections, the complete ${\cal O}(\alpha)$ partonic terms and leading-logarithmic QED effects up to ${\cal O}(\Lambda^3_{\rm QCD}/m_b^3)$. A comprehensive numerical comparison of our results against those obtained with the Monte Carlo (MC) tool PHOTOS is presented. While the comparison indicates good overall agreement, our computation contains QED effects not included in PHOTOS and should therefore better describe photon radiation to $B \to X_c e \nu$ as measured by the $B$-factories. Our calculations represent the first steps in the construction of a fully differential higher-order QED MC generator for inclusive semi-leptonic $B$ decays.
We consider a braneworld scenario in which a flat 4-D brane, embedded in $M^{3,1} \times S^1$, is moving on or spiraling around the $S^1$. Although the induced metric on the brane is 4-D Minkowski, the would-be Lorentz symmetry of the brane is broken globally by the compactification. As recently pointed out this means causal bulk signals can propagate superluminally and even backwards in time according to brane observers. Here we consider the effective action on the brane induced by loops of bulk fields. We consider a variety of self-energy and vertex corrections due to bulk scalars and gravitons and show that bulk loops with non-zero winding generate UV-finite Lorentz-violating terms in the 4-D effective action. The results can be accommodated by the Standard Model Extension, a general framework for Lorentz-violating effective field theory.
The total cross section of the process $\mu^- \mu^+ \to \nu_\mu \bar{\nu}_\mu t \bar{t} H$ depends strongly on the CP phase $\xi$ of the top Yukawa coupling: the ratio of the $\xi=\pi$ to the $\xi = 0$ (SM) cross section grows to 670 at $\sqrt{s} = 30$ TeV and to 3400 at 100 TeV. We study the cause of this strong energy dependence and identify its origin as the $(E/m_W^{})^2$ growth of the weak-boson-fusion sub-amplitudes $W_L^- W_L^+ \to t \bar{t} H$, with both $W$s longitudinally polarized. We repeat the study in the SMEFT framework, where EW gauge invariance is manifest, and find that the highest-energy cross section is reduced to a quarter of the complex-top-Yukawa model result, with the same power of energy. By applying the Goldstone boson (GB) equivalence theorem, we identify the origin of this strong energy growth of the SMEFT amplitudes as the dimension-6 $\pi^- \pi^+ ttH$ vertex, where $\pi^\pm$ denotes the GB of $W^\pm$. We obtain the unitarity bound on the coefficient of the SMEFT operator by studying all $2\to2$ and $2\to3$ cross sections in the $J=0$ channel.
The relativistic Langevin equation poses a number of technical and conceptual problems related to its derivation and underlying physical assumptions. Recently, a method was proposed in [A. Petrosyan and A. Zaccone, J. Phys. A: Math. Theor. 55 015001 (2022)] to derive the relativistic Langevin equation from a first-principles particle-bath Lagrangian. As a result of the particle-bath coupling, a new ``restoring force'' term appeared, which breaks translation symmetry. Here we revisit this problem, aiming to derive a fully translation-invariant relativistic Langevin equation. We succeed in doing so by adopting the renormalization-potential protocol originally suggested by Caldeira and Leggett. The relativistic renormalization potential is derived here and shown to reduce to Caldeira and Leggett's form in the non-relativistic limit. Its introduction removes the restoring force, and a fully translation-invariant relativistic Langevin equation is derived for the first time. The physically necessary character of the renormalization potential is discussed in analogy with non-relativistic systems, where it emerges from the renormalization of the tagged-particle dynamics by its interaction with the bath oscillators (a phenomenon akin to level repulsion or avoided crossing in condensed matter). We discuss the properties that the corresponding non-Markovian friction kernel has to satisfy, with implications ranging from transport models of the quark-gluon plasma to relativistic viscous hydrodynamic simulations and electrons in graphene.
In pursuit of precise and fast theory predictions for the LHC, we present an implementation of the MadNIS method in the MadGraph event generator. A series of improvements in MadNIS further enhance its efficiency and speed. We validate this implementation for realistic partonic processes and find significant gains from using modern machine learning in event generators.
Leptoquark models may explain deviations from the Standard Model observed in decay processes involving heavy quarks at high-energy colliders. Such models give rise to low-energy parity- and time-reversal-violating phenomena in atoms and molecules. One of the leading effects among these phenomena is the nucleon-electron tensor-pseudotensor interaction when the low-energy experimental probe uses a quantum state of an atom or molecule predominantly characterized by closed electron shells. In the present paper the molecular interaction constant for the nucleon-electron tensor-pseudotensor interaction in the thallium-fluoride molecule -- used as such a sensitive probe by the CeNTREX collaboration [Quantum Sci. Technol., 6:044007, 2021] -- is calculated employing highly-correlated relativistic many-body theory. Accounting for up to quintuple excitations in the wavefunction expansion, the final result is $W_T(\text{Tl}) = -6.25 \pm 0.31\ [10^{-13}\, \langle\Sigma\rangle_A\ \text{a.u.}]$. Interelectron correlation effects on the tensor-pseudotensor interaction are studied for the first time in a molecule, and a common framework for the calculation of such effects in atoms and molecules is presented.
NOvA is a long-baseline neutrino oscillation experiment that measures oscillations in charged-current $\nu_{\mu} \rightarrow \nu_{\mu}$ (disappearance) and $\nu_{\mu} \rightarrow \nu_{e}$ (appearance) channels, and their antineutrino counterparts, using neutrinos of energies around 2 GeV over a distance of 810 km. In this work we reanalyze the dataset first examined in our previous paper [Phys. Rev. D 106, 032004 (2022)] using an alternative statistical approach based on Bayesian Markov Chain Monte Carlo. We measure oscillation parameters consistent with the previous results. We also extend our inferences to include the first NOvA measurements of the reactor mixing angle $\theta_{13}$ and the Jarlskog invariant. We use these results to quantify the strength of our inferences about CP violation, as well as to examine the effects of constraints from short-baseline measurements of $\theta_{13}$ using antineutrinos from nuclear reactors when making NOvA measurements of $\theta_{23}$. Our long-baseline measurement of $\theta_{13}$ is also shown to be consistent with the reactor measurements, supporting the general applicability and robustness of the PMNS framework for neutrino oscillations.
This is part of a series of papers describing the new curve integral formalism for scattering amplitudes of the colored scalar tr$\phi^3$ theory. We show that the curve integral manifests a very surprising fact about these amplitudes: the dependence on the number of particles, $n$, and the loop order, $L$, is effectively decoupled. We derive the curve integrals at tree level for all $n$. We then show that, at higher loop order, it suffices to study the curve integrals for $L$-loop tadpole-like amplitudes, which have just one particle per color trace-factor. By combining these tadpole-like formulas with the tree-level result, we find formulas for the all-$n$ amplitudes at $L$ loops. We illustrate this result by giving explicit curve integrals for all the amplitudes in the theory, including the non-planar amplitudes, through two loops, for all $n$.
We analyse the properties of Wilson loop observables for holographic gauge theories when the dual bulk geometries have a single boundary or multiple boundaries (Euclidean wormholes). Such observables lead to a generalisation and refinement of the characterisation in arXiv:2202.01372 based on the compressibility of cycles and the pinching limit of higher genus Riemann surfaces, since they carry information about the dynamics and phase structure of the dual gauge theory of arbitrary dimensionality. Finally, we describe how backreacting correlated observables such as Wilson loops can lead to Euclidean wormhole saddles in the dual gravitational path integral, by taking advantage of a representation-theoretic entanglement structure proposed in arXiv:2110.14655 and arXiv:2204.01764.
Quantum chaotic systems are characterized by energy correlations in their spectral statistics, usually probed by the distribution of nearest-neighbor level spacings. Some signatures of chaos, like the spectral form factor (SFF), take all the correlations into account, while others sample only short-range or long-range correlations. Here, we characterize correlations between eigenenergies at all possible spectral distances. Specifically, we study the distribution of $k$-th neighbor level spacings ($k$nLS) and compute its associated $k$-th neighbor spectral form factor ($k$nSFF). This leads to two new full-range signatures of quantum chaos, the variance of the $k$nLS distribution and the minimum value of the $k$nSFF, which quantitatively characterize correlations between pairs of eigenenergies with any number of levels $k$ between them. We find exact and approximate expressions for these signatures in the three Gaussian ensembles of random matrix theory (GOE, GUE and GSE) and in integrable systems with completely uncorrelated spectra (the Poisson ensemble). We illustrate our findings in an XXZ spin chain with disorder, which interpolates between chaotic and integrable behavior. Our refined measures of chaos allow us to probe deviations from Poissonian and random-matrix behavior in realistic systems. This illustrates how the measures we introduce shed new light on the study of many-body quantum systems that lie in between the fully chaotic and fully integrable models.
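The $k$-th neighbor spacing statistics described above are easy to probe numerically. The sketch below is my own illustration (not the paper's code): for an uncorrelated (Poisson) spectrum with unit mean spacing, the $k$-th neighbor spacing is a sum of $k$ independent exponentials, so the $k$nLS variance grows linearly with $k$; the function name `knls_variance` is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def knls_variance(levels, k):
    """Variance of the k-th neighbor level spacings of a sorted spectrum."""
    spacings = levels[k:] - levels[:-k]
    return spacings.var()

# Poisson (uncorrelated) spectrum with unit mean spacing:
levels = np.cumsum(rng.exponential(1.0, size=200_000))

# each k-th spacing is a sum of k iid Exp(1) variables -> mean k, variance ~ k
for k in (1, 4, 9):
    print(k, knls_variance(levels, k))
```

For a chaotic (random-matrix) spectrum, spectral rigidity would suppress this variance well below $k$, which is what makes the $k$nLS variance a useful full-range chaos indicator.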
In this note, we classify topological solitons of $n$-brane fields, which are nonlocal fields that describe $n$-dimensional extended objects. We consider a class of $n$-brane fields that formally define a homomorphism from the $n$-fold loop space $\Omega^n X_D$ of spacetime $X_D$ to a space $\mathcal{E}_n$. Examples of such $n$-brane fields are Wilson operators in $n$-form gauge theories. The solitons are singularities of the $n$-brane field, and we classify them using the homotopy theory of ${\mathbb{E}_n}$-algebras. We find that the classification of codimension ${k+1}$ topological solitons with ${k\geq n}$ can be understood using homotopy groups of $\mathcal{E}_n$. In particular, they are classified by ${\pi_{k-n}(\mathcal{E}_n)}$ when ${n>1}$ and by ${\pi_{k-n}(\mathcal{E}_n)}$ modulo a ${\pi_{1-n}(\mathcal{E}_n)}$ action when ${n=0}$ or ${1}$. However, for ${n>2}$, their classification goes beyond the homotopy groups of $\mathcal{E}_n$ when ${k< n}$, which we explore through examples. We compare this classification to $n$-form $\mathcal{E}_n$ gauge theory. We then apply this classification to an ${n}$-form symmetry described by the abelian group ${G^{(n)}}$ that is spontaneously broken to ${H^{(n)}\subset G^{(n)}}$, for which the order parameter characterizing this symmetry-breaking pattern is an ${n}$-brane field with target space ${\mathcal{E}_n = G^{(n)}/H^{(n)}}$. We discuss this classification in the context of many examples, both with and without 't Hooft anomalies.
In this paper, we explore the string theory landscape obtained from type IIB and F-theory flux compactifications. We first give a comprehensive introduction to a number of mathematical finiteness theorems, indicate how they have been obtained, and clarify their implications for the structure of the locus of flux vacua. Subsequently, in order to address finer details of the locus of flux vacua, we propose three mathematically precise conjectures on the expected number of connected components, geometric complexity, and dimensionality of the vacuum locus. With the recent breakthroughs on the tameness of Hodge theory, we believe they are amenable to rigorous mathematical tools and can be successfully addressed in the near future. The remainder of the paper is concerned with more technical aspects of the finiteness theorems. In particular, we investigate their local implications and explain how infinite tails of disconnected vacua approaching the boundaries of the moduli space are forbidden. To make this precise, we present new results on asymptotic expansions of Hodge inner products near arbitrary boundaries of the complex structure moduli space.
We introduce a new elliptic integrable $\sigma$-model in the form of a two-parameter deformation of the Principal Chiral Model on the group $\text{SL}_{\mathbb{R}}(N)$, generalising a construction of Cherednik for $N=2$ (up to reality conditions). We exhibit the Lax connection and $\mathcal{R}$-matrix of this theory, which depend meromorphically on a spectral parameter valued in the torus. Furthermore, we explain the origin of this model from an equivariant semi-holomorphic 4-dimensional Chern-Simons theory on the torus. This approach opens the way for the construction of a large class of elliptic integrable $\sigma$-models, with the deformed Principal Chiral Model as the simplest example.
We introduce a Hamiltonian lattice model for the $(1+1)$-dimensional $\text{SU}(N_c)$ gauge theory coupled to one adjoint Majorana fermion of mass $m$. The discretization of the continuum theory uses staggered Majorana fermions. We analyze the symmetries of the lattice model and find lattice analogs of the anomalies of the corresponding continuum theory. An important role is played by the lattice translation by one lattice site, which in the continuum limit involves a discrete axial transformation. On a lattice with periodic boundary conditions, the Hilbert space breaks up into sectors labeled by the $N_c$-ality $p=0, \ldots N_c-1$. Our symmetry analysis implies various exact degeneracies in the spectrum of the lattice model. In particular, it shows that, for $m=0$ and even $N_c$, the sectors $p$ and $p'$ are degenerate if $|p-p'| = N_c/2$. In the $N_c = 2$ case, we explicitly construct the action of the Hamiltonian on a basis of gauge-invariant states, and we perform both a strong coupling expansion and exact diagonalization for lattices of up to $12$ lattice sites. Upon extrapolation of these results, we find good agreement with the spectrum computed previously using discretized light-cone quantization. One of our new results is the first numerical calculation of the fermion bilinear condensate.
In a recent paper [arXiv:2308.00038], Anupam, Chowdhury, and Sen conjectured that the finite temperature Euclidean five-dimensional Cvetic-Youm solution saturating the BPS bound is supersymmetric. In this paper, we explicitly construct Killing spinors for this solution in five-dimensional minimal supergravity. We also expand on the previous discussions of Killing spinors for the finite temperature Euclidean Kerr-Newman solution saturating the BPS bound. For both these cases, we show that the total charge gets divided into two harmonic sources on three-dimensional flat base space.
We study the scattering processes of kink-antikink and kink-kink pairs in a field theory model with a non-differentiable potential at its minima. The kink-antikink scattering includes cases of capture and escape of the soliton pair, separated by a critical velocity. Around this critical velocity, the behavior is fractal. The emission of radiation strongly influences the small-velocity cases, with the most radiative cases also being the most chaotic. The radiation appears through the emission of compact oscillons and the formation of compact shockwaves. The kink-kink scattering happens elastically, with no emission of radiation. Some features of both the kink-antikink and the kink-kink scattering are explained using a collective coordinate model, even though the kink-kink case exhibits a null-vector problem.
Recent studies on the holographic descriptions of Kerr black holes indicate that conformal or warped conformal symmetries are responsible for Kerr black hole physics at both the background and perturbation levels. In the present paper, we extend the validity of these studies to the case of accelerating Kerr black holes. By invoking a set of non-trivial diffeomorphisms near the horizon bifurcation surface of the accelerating Kerr black hole, the Dirac brackets among the charges of the diffeomorphisms form the symmetry algebra of a warped CFT, which consists of one Virasoro and one Kac-Moody algebra with central extensions. This provides evidence that warped CFTs are possible holographic duals of accelerating Kerr black holes. The thermal entropy formula of the warped CFT, fixed by modular parameters and vacuum charges, reproduces the entropy of the rotating black hole with acceleration.
The theory of particle scattering is concerned with transition amplitudes between states that belong to unitary representations of the Poincar\'e group. The latter acts as the isometry group of Minkowski spacetime $\mathbb{M}$, making natural the introduction of relativistic tensor fields encoding the particles of interest. Since the Poincar\'e group also acts as a group of conformal isometries of null infinity $\mathcal{I}$, massless particles can also be very naturally encoded into Carrollian conformal fields living on $\mathcal{I}$. In this work we classify the two- and three-point correlation functions that such Carrollian conformal fields can have in any consistent quantum theory of massless particles, in arbitrary dimension. We then show that bulk correlators of massless fields in $\mathbb{M}$ explicitly reduce to these Carrollian conformal correlators when evaluated on $\mathcal{I}$, although in the case of time-ordered bulk correlators this procedure appears singular at first sight. However, we show that the Carrollian correlators of the descendant fields are perfectly regular and precisely carry the information about the corresponding S-matrix elements.
We present, for the first time, covariant expressions for the infinitesimal conformal symmetry transformations for maximal depth partially massless fields of spin-$s \geq 2$ on four-dimensional de Sitter spacetime ($dS_{4}$). In the spin-2 case, the invariance of both the field equations and the covariant action is demonstrated. For spin-$s >2$, the invariance is demonstrated only at the level of the field equations. For all spins $s \geq 2$, we show that the algebra does not close on the conformal algebra $so(4,2)$ (in agreement with previous statements in the literature), due to the appearance of new symmetry transformations that contain higher derivatives. We thus call our conformal transformations `unconventional'. Our results concerning the closure of the full symmetry algebra are inconclusive. Then we shift focus to the question of supersymmetry (SUSY) on $dS_{4}$ and our objective is twofold. First, we uncover a non-interacting supermultiplet that consists of a complex partially massless spin-2 field and a complex spin-3/2 field with zero mass parameter on $dS_{4}$. Second, we showcase the appearance of the unconventional conformal symmetries in the bosonic subalgebra of our supermultiplet (the bosonic subalgebra is neither $so(4,1)$ nor $so(4,2)$, while its full structure is currently unknown). Open questions arising from our findings are also discussed.
This paper investigates the effects of a statistical interaction for graphene-like systems, which provides Haldane-like properties for topologically trivial lattices. The associated self-energy correction yields an effective next-nearest-neighbor hopping, inducing the topological phase, whose specific solutions are scrutinized. In the presence of an external magnetic field, it leads to a renormalized quasi-particle structure with generalized Landau levels and explicit valley asymmetry. A suitable tool for implementing these achievements is a judicious indefinite-metric quantization, leading to advances in the foundations of field theory. Since the topological behavior is encoded in the radiative corrections, an unequivocal treatment using an integral representation is carefully developed.
1-loop amplitudes in 4d $N=4$ supergravity with non-vanishing soft-scalar limits and anomalous $SU(1,1)$ duality present a 1-loop precursor of a 4-loop UV divergence. In this paper we find that in 6d maximal supergravity there are 3-loop order amplitudes with non-vanishing soft-scalar limits and anomalous $E_{5(5)}$ duality, induced by a 3-loop UV divergence. If the relevant non-vanishing soft-scalar limits are detected in 1-loop amplitudes, this would supply a 1-loop precursor of a 3-loop UV divergence in 6d. In that case, an analogous search for duality anomalies (non-vanishing soft-scalar limits) of 1-loop multi-point amplitudes might provide precursors of higher-loop UV divergences in models where higher-loop computations are extremely difficult.
In the past decades considerable efforts have been made in order to understand the critical features of both classical and quantum long-range interacting models. The case of the Berezinskii-Kosterlitz-Thouless (BKT) universality class, as in the $2d$ classical $XY$ model, is considerably complicated by the presence, for short-range interactions, of a line of renormalization group fixed points. In this paper we discuss a field theoretical treatment of the $2d$ $XY$ model with long-range couplings and we compare it with results from the self-consistent harmonic approximation. These methods lead to a rich phase diagram, where both power-law BKT scaling and spontaneous symmetry breaking appear for the same (intermediate) decay rates of long-range interactions. We also discuss the Villain approximation for the $2d$ $XY$ model with power-law couplings, providing hints that, in the long-range regime, it fails to reproduce the correct critical behavior. The obtained results are then applied to the long-range quantum XXZ spin chain at zero temperature. We discuss the relation between the phase diagrams of the two models and we give predictions about the scaling of the order parameter of the quantum chain close to the transition.
The estimation of categorical distributions under marginal constraints summarizing some sample from a population in the most-generalizable way is key for many machine-learning and data-driven approaches. We provide a parameter-agnostic theoretical framework that enables this task ensuring (i) that a categorical distribution of Maximum Entropy under marginal constraints always exists and (ii) that it is unique. The procedure of iterative proportional fitting (IPF) naturally estimates that distribution from any consistent set of marginal constraints directly in the space of probabilities, thus deductively identifying a least-biased characterization of the population. The theoretical framework together with IPF leads to a holistic workflow that enables modeling any class of categorical distributions solely using the phenomenological information provided.
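The iterative proportional fitting procedure named above is simple to sketch. The toy below is my own illustration (not the authors' implementation): it alternately rescales the rows and columns of a seed table until both target marginals are matched; starting from a uniform seed, the resulting maximum-entropy table is the independent product of the marginals.

```python
import numpy as np

def ipf(seed, row_marg, col_marg, iters=100):
    """Iterative proportional fitting: rescale rows and columns in turn
    until the table matches the target marginal distributions."""
    p = seed.astype(float).copy()
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]   # enforce row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # enforce column sums
    return p

row = np.array([0.6, 0.4])
col = np.array([0.3, 0.7])
# uniform seed -> the maximum-entropy fit is the outer product row[i]*col[j]
p = ipf(np.ones((2, 2)) / 4, row, col)
print(p)
```

With a non-uniform seed the same loop converges to the I-projection of the seed onto the set of tables with these marginals, which is the least-biased estimate the abstract refers to.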
Error-correcting codes are known to define chiral 2d lattice CFTs where all the $U(1)$ symmetries are enhanced to $SU(2)$. In this paper, we extend this construction to a broader class of length-$n$ codes which define full (non-chiral) CFTs with $SU(2)^n$ symmetry, where $n=c+\bar c$. We show that codes give a natural discrete ensemble of 2d theories in which one can compute averaged observables. The partition function obtained by averaging over all codes weighted equally is found to be given by the sum over modular images of the vacuum character of the full extended symmetry group, and in this case the number of modular images is finite. This averaged partition function has a large gap, scaling linearly with $n$, in primaries of the full $SU(2)^n$ symmetry group. Using the sum over modular images, we conjecture the form of the genus-2 partition function. This exhibits the connected contributions to disconnected boundaries characteristic of wormhole solutions in a bulk dual.
The study of spectral properties of natural geometric elliptic partial differential operators acting on smooth sections of vector bundles over Riemannian manifolds is a central theme in global analysis, differential geometry and mathematical physics. Instead of studying the spectrum of a differential operator $L$ directly, one usually studies its spectral functions, that is, spectral traces of some functions of the operator, such as the spectral zeta function $\zeta(s)=\Tr L^{-s}$ and the heat trace $\Theta(t)=\Tr\exp(-tL)$. The kernel $U(t;x,x')$ of the heat semigroup $\exp(-tL)$, called the heat kernel, plays a major role in quantum field theory and quantum gravity, index theorems, non-commutative geometry, integrable systems and financial mathematics. We review some recent progress in the study of spectral asymptotics. We study more general spectral functions, such as $\Tr f(tL)$, that we call quantum heat traces. Also, we define new invariants of differential operators that depend not only on their eigenvalues but also on the eigenfunctions, and, therefore, contain much more information about the geometry of the manifold. Furthermore, we study some new invariants, such as $\Tr\exp(-tL_+)\exp(-sL_-)$, that contain relative spectral information of two differential operators. Finally, we show how the convolution of the semigroups of two different operators can be computed by using purely algebraic methods.
We review the structure of maximal $D=11$ and $D=10$ supergravities. Upon dimensional reduction, these theories give rise to the unique maximal supergravities in all lower spacetime dimensions $D<10$. In $D$ dimensions, maximal supergravity exhibits the exceptional global symmetry group E$_{11-D}$, part of which is realized as hidden symmetries that become manifest only after proper dualization of the fields. We also briefly review the reformulation of $D=11$ supergravity as an exceptional field theory, which renders the appearance of hidden symmetries manifest.
The problem of constructing local bulk observables from boundary CFT data is of paramount importance in holography. In this work, we begin addressing this question from a modern bootstrap perspective. Our main tool is the boundary operator expansion (BOE), which holds for any QFT in AdS. Following Kabat and Lifschytz, we argue that the BOE is strongly constrained by demanding locality of correlators involving bulk fields. Focusing on 'AdS form factors' of one bulk and two boundary insertions, we reformulate these locality constraints as a complete set of sum rules on the BOE data. We show that these sum rules lead to a manifestly local representation of form factors in terms of 'local blocks'. The sum rules are valid non-perturbatively, but are especially well-adapted for perturbative computations in AdS, where they allow us to bootstrap the BOE data in a systematic fashion. Finally, in the flat space limit, we show that the AdS form factor reduces to an ordinary QFT form factor. We provide a phase shift formula for it in terms of the BOE and CFT data. In two dimensions, this formula makes manifest Watson's equations for integrable form factors, under certain extremality assumptions on the CFT. We discuss possible modifications of our formalism to account for dressed operators in AdS.
The Gubser solution to inviscid relativistic fluid dynamics is used to examine the role of transverse expansion on the energy spectrum of photons radiated by quark-gluon plasma. Transverse flow is shown to be a modest effect on the energy spectrum of photons as a whole, despite its large effect on rare high-energy photons produced at low temperatures. An exact expression is derived for the volume of the plasma as a function of its temperature. A simple formula is obtained for the energy spectrum of high-energy thermal photons, which is used to relate the inverse slope $T_{\textrm{eff}}$ of the photon spectrum at energy $E$ to the maximum temperature of the plasma $T_0$, finding $T_{\textrm{eff}} \approx T_0/(1+\frac{5}{2} \frac{T_0}{E})$.
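The closing inverse-slope relation can be sanity-checked numerically; the sample values of $T_0$ and $E$ below are illustrative placeholders of our own choosing, not numbers taken from the paper:

```python
# Check the quoted inverse-slope relation T_eff ~ T0 / (1 + (5/2) T0/E)
# for a few illustrative photon energies (all values hypothetical).
def t_eff(t0, energy):
    """Inverse slope of the high-energy thermal photon spectrum."""
    return t0 / (1.0 + 2.5 * t0 / energy)

t0 = 0.4  # hypothetical maximum plasma temperature, GeV
slopes = {e: t_eff(t0, e) for e in (1.0, 2.0, 4.0)}  # photon energies, GeV
# T_eff rises toward T0 from below as the photon energy E grows
```

As expected from the formula, the effective temperature underestimates $T_0$ most strongly at low photon energy.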
We suggest a new indicator of quantum chaos based on the logarithmic out-of-time-order correlator. On the one hand, this indicator correctly reproduces the average classical Lyapunov exponent in the semiclassical limit and directly links the definitions of quantum chaos and of a classical K-system. On the other hand, it can be analytically calculated using the replica trick and the Schwinger-Keldysh diagram technique on a $2n$-fold Keldysh contour. To illustrate this approach, we consider several one-dimensional systems, including the quantum cat map, and three paradigmatic large-$N$ models, including the Sachdev-Ye-Kitaev model. Furthermore, we find that correlations between replicas can reduce the magnitude of the Lyapunov exponent compared to estimates based on conventional out-of-time-order correlators.
Massive type IIA flux compactifications of the form AdS$_4 \times X_6$, where $X_6$ admits a Calabi-Yau metric and O6-planes wrapping three-cycles, display families of vacua with parametric scale separation between the compactification scale and the AdS$_4$ radius, generated by an overall rescaling of internal four-form fluxes. For toroidal orbifolds one can perform two T-dualities and map this background to an orientifold of massless type IIA compactified on an SU(3)-structure manifold with fluxes. Via a 4d EFT analysis, we generalise this last construction and embed it into new branches of supersymmetric and non-supersymmetric vacua with similar features. We apply our results to propose new infinite families of vacua based on elliptic fibrations with metric fluxes. Parametric scale separation is achieved by an asymmetric flux rescaling which, however, in general is not a simple symmetry of the 4d equations of motion. At this level of approximation the vacua are stable but, unlike in the Calabi-Yau case, they display a non-universal mass spectrum of light fields.
This paper introduces two operations in quiver gauge theories. The first operation takes a quiver with a permutation symmetry $S_n$ and gives a quiver with adjoint loops. The corresponding 3d $\mathcal{N}=4$ Coulomb branches are related by an orbifold of $S_n$. The second operation takes a quiver with $n$ nodes connected by edges of multiplicity $k$ and replaces them by $n$ nodes of multiplicity $qk$. The corresponding Coulomb branch moduli spaces are related by an orbifold of type $\mathbb{Z}_q^{n-1}$. The first operation generalises known cases that appeared in the literature. These two operations can be combined to generate new relations between moduli spaces that are constructed using the magnetic construction.
$T\bar T$ deformed CFTs with positive deformation parameter have been proposed to be holographically dual to Einstein gravity in a glue-on $\mathrm{AdS}_3$ spacetime. The latter is constructed from AdS$_3$ by gluing a patch of an auxiliary AdS$_3^*$ spacetime to its asymptotic boundary. In this work, we propose a glue-on version of the Ryu-Takayanagi formula, which is given by the signed area of an extremal surface. The extremal surface is anchored at the endpoints of an interval on a cutoff surface in the glue-on geometry. It consists of an RT surface lying in the AdS$_3$ part of the spacetime and its extension to the AdS$_3^*$ region. The signed area is the length of the RT surface minus the length of the segments in AdS$_3^*$. We find that the Ryu-Takayanagi formula with the signed area reproduces the entanglement entropy of a half interval for $T\bar T$-deformed CFTs on the sphere. We then study the properties of extremal surfaces on various glue-on geometries, including Poincar\'e $\mathrm{AdS}_3$, global $\mathrm{AdS}_3$, and the BTZ black hole. When anchored on multiple intervals at the boundary, the signed area of the minimal surfaces undergoes phase transitions with novel properties. In all of these examples, we find that the glue-on extremal surfaces exhibit a minimum length related to the deformation parameter of $T\bar T$-deformed CFTs.
Using Commercial Off-The-Shelf (COTS) Operational Amplifiers (OpAmps) and Complementary Metal-Oxide Semiconductor (CMOS) transistors, we present a demonstration of the Q-Pix front-end architecture, a novel readout solution for kiloton-scale Liquid Argon Time Projection Chamber (LArTPC) detectors. The Q-Pix scheme employs a Charge-Integrate/Reset process based on the Least Action principle, enabling pixel-scale self-triggering charge collection and processing, minimizing energy consumption, and maximizing data compression. We examine the architecture's sensitivity, linearity, noise, and other features at the circuit board level and draw comparisons to SPICE simulations. Furthermore, we highlight the resemblance between the Q-Pix front-end and a Sigma-Delta modulator, emphasizing that digital data processing techniques developed for Sigma-Delta modulators can be directly applied to Q-Pix, resulting in enhanced signal-to-noise performance. These insights will inform the development of Q-Pix front-end designs in integrated circuits (IC) and guide data collection and processing for future large-scale LArTPC detectors in neutrino physics and other high-energy physics experiments.
Particle dark matter could belong to a multiplet that includes an electrically charged state. WIMP dark matter ($\chi^{0}$) accompanied by a negatively charged excited state ($\chi^{-}$) with a small mass difference (e.g. $<$ 20 MeV) can form a bound state with a nucleus such as xenon. This bound-state formation is rare and the released energy is $\mathcal{O}(1\text{--}10)$ MeV depending on the nucleus, making large liquid scintillator detectors suitable for detection. We searched for bound-state formation events with xenon in two experimental phases of the KamLAND-Zen experiment, a xenon-doped liquid scintillator detector. No statistically significant events were observed. For a benchmark parameter set of WIMP mass $m_{\chi^{0}} = 1$ TeV and mass difference $\Delta m = 17$ MeV, we set the most stringent upper limits on the recombination cross section times velocity $\langle\sigma v\rangle$ and the decay width of $\chi^{-}$ to $9.2 \times 10^{-30}$ ${\rm cm^3/s}$ and $8.7 \times 10^{-14}$ GeV, respectively, at 90% confidence level.
We present measurements of elliptic flow ($v_{2}$) of $K_{s}^{0}$, $\Lambda$, $\bar{\Lambda}$, $\phi$, $\Xi^{-}$, $\overline{\Xi}^{+}$, and $\Omega^{-}$+$\overline{\Omega}^{+}$ at mid-rapidity ($|\eta| <$ 1.0) in isobar collisions ($^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr) at $\sqrt{s_{\mathrm{NN}}}$ = 200 GeV. The centrality and transverse momentum ($p_{\mathrm{T}}$) dependence of elliptic flow is presented. The number of constituent quark (NCQ) scaling of $v_{2}$ in isobar collisions is discussed. $p_{\mathrm{T}}$-integrated elliptic flow ($\left\langle v_{2}\right\rangle$) is observed to increase from central to peripheral collisions. The ratio of $\left\langle v_{2}\right\rangle$ between the two isobars shows a deviation from unity for strange hadrons ($K_{s}^{0}$, $\Lambda$ and $\bar{\Lambda}$) indicating a difference in nuclear structure and deformation. A system size dependence of strange hadron $v_{2}$ at high $p_{\mathrm{T}}$ is observed among Ru+Ru, Zr+Zr, Cu+Cu, Au+Au, and U+U systems. A multi-phase transport (AMPT) model with string melting (SM) describes the experimental data well in the measured $p_{\mathrm{T}}$ range for isobar collisions at $\sqrt{s_{\mathrm{NN}}}$ = 200 GeV.
This paper reports cross-section measurements of $ZZ$ production in $pp$ collisions at $\sqrt{s}=13.6$ TeV at the Large Hadron Collider. The data were collected by the ATLAS detector in 2022, and correspond to an integrated luminosity of 29 fb$^{-1}$. Events in the $ZZ\rightarrow4\ell$ ($\ell = e$, $\mu$) final states are selected and used to measure the inclusive and differential cross-sections in a fiducial region defined close to the analysis selections. The inclusive cross-section is further extrapolated to the total phase space with a requirement of 66 $< m_Z <$ 116 GeV for both $Z$ bosons, yielding $16.8 \pm 1.1$ pb. The results are well described by the Standard Model predictions.
Accelerator searches for new resonances have a long-standing history of discoveries that have driven advances in our understanding of nature. Since 2010, the Large Hadron Collider (LHC) has probed previously inaccessible energy scales, allowing it to search for new heavy resonances predicted by a wide range of theories beyond the standard model (BSM). In particular, resonance decays into fermionic final states are often seen as golden channels since they provide a clear signal, typically a peak in the invariant mass of the decay products over a smoothly falling background distribution. This review summarizes the key concepts of the experimental searches for new resonances decaying to fermions, in the context of the BSM theories that motivate them, and presents the latest results of the ATLAS and CMS experiments, focusing on the complete LHC Run 2 dataset. Future prospects at the high-luminosity LHC and potential future colliders are also surveyed.
Two-particle angular correlation is one of the most powerful tools to study the mechanism of particle production in proton--proton (pp) collision systems, relating particle pairs through their differences in azimuthal angle ($\Delta\varphi$) and rapidity ($\Delta y$). Hadronization processes are influenced by various physical phenomena, such as resonance decays, Coulomb interactions, conservation of energy and momentum, and others, depending on the quark content of the particles involved. Therefore, each correlation function is unique and shows a different dependence on transverse momentum $p_{\mathrm{T}}$ and/or multiplicity. The angular correlation functions reported by the ALICE Collaboration in pp collisions showed an anticorrelation at short range in ($\Delta y,\Delta\varphi$) for baryon pairs which is not predicted by any theoretical model. In this contribution, this behavior is investigated by studying identified charged hadrons (i.e., $\pi^{\pm}$, $\rm K^{\pm}$, and p($\bar{\rm p}$)) in the ($\Delta y,\Delta\varphi$) space in pp collisions at $\sqrt{s} = 13$ TeV recorded by ALICE. In addition, to distinguish the various physical contributions, collisions with different multiplicities are analyzed separately and diverse normalization methods are applied.
Flow harmonic coefficients, $v_n$, which are the key to studying the hydrodynamics of the quark-gluon plasma (QGP) created in heavy-ion collisions, have been measured in various collision systems, kinematic regions, and using various particle species. The study of flow harmonics in a wide pseudorapidity range is particularly valuable to understand the temperature dependence of the shear viscosity to entropy density ratio of the QGP. This paper presents the first LHCb results on the second- and third-order flow harmonic coefficients of charged hadrons as a function of transverse momentum in the forward region, corresponding to pseudorapidities between 2.0 and 4.9, using the data collected from PbPb collisions in 2018 at a nucleon-nucleon center-of-mass energy of $5.02$ TeV. The coefficients measured using the two-particle angular correlation analysis method are smaller than the central-pseudorapidity measurements at ALICE and ATLAS from the same collision system but share similar features.
Future experiments at high-energy $e^+e^-$ colliders will focus on extremely precise Standard Model measurements. Among the most important physics benchmarks is the capability to resolve Higgs decays into W or Z pairs in their fully hadronic decay modes (4 jets in the final state), based only on the invariant mass of the jet pair coming from the decay of the on-shell boson. This translates into a relative energy resolution target of $30\%/\sqrt{E}$, well beyond current detector performance. Dual-readout calorimetry is a technique which aims to improve the energy resolution, for single hadrons and hadronic jets, by exploiting the information produced by two different physical processes, namely scintillation and Cherenkov light emission. The IDEA detector, whose concept has been included in both the FCC and CEPC Conceptual Design Reports, is based on a dual-readout fibre calorimeter with independent fibre readout exploiting Silicon Photomultipliers (SiPMs). The individual SiPM information will be beneficial for a highly granular calorimeter design, opening the door to advanced reconstruction techniques such as particle flow and a variety of neural-network algorithms. In this paper, the status of the calorimeter prototypes developed to demonstrate the feasibility of the dual-readout method in combination with high granularity is illustrated. The specific design choices for each prototype are presented, together with the performance achieved at high-energy test beams or through simulations.
Dark photons have been considered potential candidates for dark matter. Dark photon dark matter (DPDM) has a mass and interacts with electromagnetic fields via kinetic mixing with a coupling constant $\chi$; as a result, DPDM is converted into ordinary photons at metal surfaces. Using a millimeter-wave receiver set in a radio-shielding box, we performed experiments to detect conversion photons from DPDM in the frequency range 10--18 GHz, which corresponds to a mass range of 41--74 $\mu\mathrm{eV}$. We found no conversion photon signal in this range and set upper limits of $\chi < (0.5\text{--}3.9) \times 10^{-10}$ at a 95% confidence level.
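The quoted frequency-to-mass correspondence is just $mc^2 = h\nu$; a quick check with the CODATA value of Planck's constant (the helper function name is ours) reproduces the 41--74 $\mu$eV range:

```python
# Convert the receiver's frequency range to a dark-photon mass range via m c^2 = h f.
H_EV_S = 4.135667696e-15  # Planck constant in eV*s (CODATA 2018)

def mass_microev(freq_ghz):
    """Photon (hence dark photon) rest-mass energy in micro-eV for a frequency in GHz."""
    return H_EV_S * freq_ghz * 1e9 * 1e6  # GHz -> Hz -> eV -> micro-eV

lo, hi = mass_microev(10), mass_microev(18)
# lo ~ 41.4 micro-eV and hi ~ 74.4 micro-eV, matching the quoted 41--74 range
```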
The Scattering and Neutrino Detector at the LHC (SND@LHC) started taking data at the beginning of Run 3 of the LHC. The experiment is designed to perform measurements with neutrinos produced in proton-proton collisions at the LHC in an energy range between 100 GeV and 1 TeV. It covers a previously unexplored pseudo-rapidity range of $7.2<\eta<8.4$. The detector is located 480 m downstream of the ATLAS interaction point in the TI18 tunnel. It comprises a veto system, a target consisting of tungsten plates interleaved with nuclear emulsion and scintillating fiber (SciFi) trackers, followed by a muon detector (UpStream, US, and DownStream, DS). In this article we report the measurement of the muon flux in three subdetectors: the emulsion, the SciFi trackers and the DownStream muon detector. The muon flux per integrated luminosity through an 18$\times$18 cm$^{2}$ area in the emulsion is $1.5 \pm 0.1(\textrm{stat}) \times 10^4\,\textrm{fb/cm}^{2}$. The muon flux per integrated luminosity through a 31$\times$31 cm$^{2}$ area in the centre of the SciFi is $2.06\pm0.01(\textrm{stat})\pm0.12(\textrm{sys}) \times 10^{4}\,\textrm{fb/cm}^{2}$. The muon flux per integrated luminosity through a 52$\times$52 cm$^{2}$ area in the centre of the downstream muon system is $2.35\pm0.01(\textrm{stat})\pm0.10(\textrm{sys}) \times 10^{4}\,\textrm{fb/cm}^{2}$. The total relative uncertainty of the measurements by the electronic detectors is 6% for the SciFi and 4% for the DS measurement. The Monte Carlo simulation prediction of these fluxes is 20--25% lower than the measured values.
While non-contextual hidden-variable theories have been proven impossible, contextual ones are possible. In a contextual hidden-variable theory, an observable is called a beable if the hidden variable assigns its value in a given measurement context specified by a state and a preferred observable. Halvorson and Clifton characterized the algebraic structure of beables as a von Neumann subalgebra, called a beable subalgebra, of the full observable algebra such that the probability distribution of every observable affiliated therewith admits the ignorance interpretation. On the other hand, we have shown that for every von Neumann algebra there is a unique set-theoretical universe such that the internal "real numbers" bijectively correspond to the observables affiliated with the given von Neumann algebra. Here, we show that a set-theoretical universe is associated with a beable subalgebra if and only if it is ZFC-satisfiable, namely, every theorem of ZFC set theory holds with probability equal to unity. Moreover, we show that there is a unique maximal ZFC-satisfiable subuniverse "implicitly definable", in the sense of Malament and others, by the given measurement context. The set-theoretical language for the ZFC-satisfiable universe, characterized by the present work, rigorously reconstructs Bohr's notion of the "classical language" used to describe the beables in a given measurement context.
The search for elusive Nagaoka-type ferromagnetism in the Hubbard model has recently enjoyed renewed attention with the advent of a variety of experimental platforms enabling its realization, including moir\'e materials, quantum dots, and ultracold atoms in optical lattices. Here, we demonstrate a universal mechanism for Nagaoka ferromagnetism (that applies to both bipartite and nonbipartite lattices) based on the formation of ferromagnetic polarons consisting of a dopant dressed with polarized spins. Using large-scale density-matrix renormalization group calculations, we present a comprehensive study of the ferromagnetic polaron in an electron-doped Hubbard model, establishing various polaronic properties such as its size and energetics. Moreover, we systematically probe the internal structure of the magnetic state, through the use of pinning fields and three-point spin-charge-spin correlation functions, for both the single-polaron limit and the high-density regime of interacting polarons. Our results highlight the crucial role of mobile polarons in the birth of global ferromagnetic order from local ferromagnetism and provide a unified framework to understand the development and demise of the Nagaoka-type ferromagnetic state across dopings.
Shadow tomography aims to build a classical description of a quantum state from a sequence of simple random measurements. Physical observables are then reconstructed from the resulting classical shadow. Shadow protocols which use single-body random measurements are simple to implement and capture few-body observables efficiently, but do not apply to systems with fundamental number conservation laws, such as ultracold atoms. We address this shortcoming by proposing and analyzing a new local shadow protocol adapted to such systems. The "All-Pairs" protocol requires one layer of two-body gates and only $\textrm{poly}(V)$ samples to reconstruct arbitrary few body observables. Moreover, by exploiting the permutation symmetry of the protocol, we derive a linear time post-processing algorithm. We provide a proof-of-principle reference implementation and demonstrate the reconstruction of 2- and 4-point functions in a paired Luttinger liquid of hardcore bosons.
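For orientation, the single-body randomized-measurement protocol that the All-Pairs scheme improves upon can be sketched for one qubit. This is our own toy illustration of standard Pauli-basis classical shadows (estimator $\hat\rho = 3U^\dagger|b\rangle\langle b|U - I$), not the All-Pairs protocol itself, which requires a layer of two-body gates:

```python
import random

# Toy single-qubit classical shadows: measure in a random Pauli basis,
# form rho_hat = 3 U^dag |b><b| U - I, and average Tr(X rho_hat).
R = 2 ** -0.5
I2   = [[1, 0], [0, 1]]             # Z-basis measurement
Hd   = [[R, R], [R, -R]]            # rotates X eigenstates to the Z basis
HSdg = [[R, -1j * R], [R, 1j * R]]  # rotates Y eigenstates to the Z basis

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def estimate_X(state, shots, rng):
    """Shadow estimate of <X> for a pure single-qubit state vector."""
    X = [[0, 1], [1, 0]]
    total = 0.0
    for _ in range(shots):
        U = rng.choice([I2, Hd, HSdg])
        psi = matvec(U, state)                       # rotate, then measure in Z
        b = 0 if rng.random() < abs(psi[0]) ** 2 else 1
        # column b of U^dag is U^dag|b>; Tr(X rho_hat) = 3 <b|U X U^dag|b>
        col = [U[b][0].conjugate(), U[b][1].conjugate()]
        Xcol = matvec(X, col)
        total += 3 * (col[0].conjugate() * Xcol[0]
                      + col[1].conjugate() * Xcol[1]).real
    return total / shots

rng = random.Random(7)
plus = [R, R]                        # |+>, exact <X> = 1
est = estimate_X(plus, 20000, rng)   # statistical error ~ 0.01
```

The key point the abstract makes is that this single-body scheme fails for number-conserving systems, which is what motivates the two-body All-Pairs construction.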
We calculate two-body scattering phase shifts on a quantum computer using a leading order short-range effective field theory Hamiltonian. The algorithm combines the variational quantum eigensolver and the quantum subspace expansion. As an example, we consider scattering in the deuteron $^3$S$_1$ partial wave. We calculate scattering phase shifts with a quantum simulator and on real hardware. We also study how noise impacts these calculations and discuss noise mitigation required to extend our work to larger quantum processing units. With current hardware, up to five superconducting qubits can produce acceptable results, and larger calculations will require a significant noise reduction.
We explore the robustness of the correlation matrix Hamiltonian reconstruction technique with respect to the choice of operator basis, studying the effects of bases that are undercomplete and overcomplete, i.e., containing too few or too many operators, respectively. An approximation scheme for reconstructing from an undercomplete basis is proposed and performed numerically on select models. We discuss the confounding effects of conserved quantities and symmetries on reconstruction attempts. We apply these considerations to a variety of one-dimensional systems in zero- and finite-temperature regimes.
Spin-orbit coupled dynamics are of fundamental interest in both quantum optical and condensed matter systems alike. In this work, we show that photonic excitations in pseudospin-1/2 atomic lattices exhibit an emergent spin-orbit coupling when the geometry is chiral. This spin-orbit coupling arises naturally from the electric dipole interaction between the lattice sites and leads to spin polarized excitation transport. Using a general quantum optical model, we determine analytically the conditions that give rise to spin-orbit coupling and characterize the behavior under various symmetry transformations. We show that chirality-induced spin textures are associated with a topologically nontrivial Zak phase that characterizes the chiral setup. Our results demonstrate that chiral atom arrays are a robust platform for realizing spin-orbit coupled topological states of matter.
Accurate modeling of noise in realistic quantum processors is critical for constructing fault-tolerant quantum computers. While a full simulation of actual noisy quantum circuits provides information about correlated noise among all qubits and is therefore accurate, it is computationally expensive, as it requires resources that grow exponentially with the number of qubits. In this paper, we propose an efficient systematic construction of approximate noise channels, whose accuracy can be enhanced by incorporating noise components with higher qubit-qubit correlation degree. To formulate such approximate channels, we first present a method, dubbed the cluster expansion approach, to decompose the Lindbladian generator of an actual Markovian noise channel into components based on interqubit correlation degree. We then generate a $k$-th order approximate noise channel by truncating the cluster expansion and incorporating noise components with correlations up to the $k$-th degree. We require that the approximate noise channels be accurate and also "honest", i.e., that the actual errors are not underestimated in our physical models. As an example application, we apply our method to model noise in a three-qubit quantum processor that stabilizes a [[2,0,0]] codeword, which is one of the four Bell states. We find that, for realistic noise strengths typical of fixed-frequency superconducting qubits coupled via always-on static interactions, correlated noise beyond two-qubit correlation can significantly affect the code simulation accuracy. Since our approach provides a systematic noise characterization, it enables accurate, honest and scalable approximations that simulate large numbers of qubits from full modeling or experimental characterization of small enough quantum subsystems, which are efficient but still retain the essential noise features of the entire device.
We propose a method for preparing superposition states of qubits using different axes of the Bloch sphere. The method utilizes the Y-axis of the Bloch sphere via the IBM-native $\sqrt{X}$ (square root of X) gate, instead of the X-axis via the non-native Hadamard gate, in order to transpile cost-effective quantum circuits on IBM quantum computers. In this paper, we show that the final quantum circuits transpiled with our method always have a lower quantum cost than the transpiled quantum circuits using Hadamard gates.
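The underlying linear algebra is easy to verify: $\sqrt{X}|0\rangle$ is an equal-weight superposition lying on the Bloch sphere's Y-axis, just as $H|0\rangle$ lies on the X-axis, so both give 50/50 computational-basis statistics. A minimal sketch in plain Python using the standard gate matrices (this is not IBM transpiler output):

```python
# sqrt(X) and Hadamard acting on |0>: same measurement statistics,
# different Bloch-sphere axes for the resulting superposition.
SX = [[(1 + 1j) / 2, (1 - 1j) / 2],
      [(1 - 1j) / 2, (1 + 1j) / 2]]
H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -(2 ** -0.5)]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

zero = [1, 0]
psi_sx = apply(SX, zero)  # = e^{i pi/4} (|0> - i|1>)/sqrt(2): on the Y axis
psi_h = apply(H, zero)    # = (|0> + |1>)/sqrt(2): on the X axis

probs_sx = [abs(a) ** 2 for a in psi_sx]  # [0.5, 0.5]
probs_h = [abs(a) ** 2 for a in psi_h]    # [0.5, 0.5]
```

The two preparations differ only by the relative phase of the superposition, which is why a $\sqrt{X}$-based preparation can stand in for a Hadamard-based one when the circuit accounts for that phase.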
Distant quantum control via quantum gates represents an essential step toward realizing distributed quantum networks. An efficient theoretical protocol for the dual non-local implementation of controlled-NOT (CNOT) gates between two separated partners is presented in this regard. The suggested protocol requires one ebit together with local operations and classical communication channels. The efficiency of the teleportation scheme is quantified through an infidelity measure. The numerical results show that the infidelity of performing the CNOT gate between legitimate partners depends on the initial qubit settings. It is also shown that the protocol performs efficiently if the CNOT control qubit and the auxiliary qubit are prepared in the same direction. Furthermore, we provide a noise analysis for the suggested scheme. We find that by keeping the noise strengths under the threshold $\frac{1}{4}$, one can implement the dual non-local CNOT gate optimally.
Two-dimensional semiconductor-superconductor heterostructures form the foundation of numerous nanoscale physical systems. However, measuring the properties of such heterostructures, and characterizing the semiconductor in situ, is challenging. A recent experimental study [arXiv:2107.03695] was able to probe the semiconductor within the heterostructure using microwave measurements of the superfluid density. This work revealed a rapid depletion of the superfluid density in the semiconductor, caused by the in-plane magnetic field, which in the presence of spin-orbit coupling creates so-called Bogoliubov Fermi surfaces. The experimental work used a simplified theoretical model that neglected the presence of non-magnetic disorder in the semiconductor, hence describing the data only qualitatively. Motivated by these experiments, we introduce a theoretical model describing a disordered semiconductor with strong spin-orbit coupling that is proximitized by a superconductor. Our model provides specific predictions for the density of states and the superfluid density. The presence of disorder leads to the emergence of a gapless superconducting phase, which may be viewed as a manifestation of a Bogoliubov Fermi surface. When applied to real experimental data, our model shows excellent quantitative agreement, enabling the extraction of material parameters such as the mean free path and mobility, and an estimate of the $g$-tensor after taking into account the orbital contribution of the magnetic field. Our model can be used to probe in situ parameters of other superconductor-semiconductor heterostructures and can be further extended to give access to transport properties.
Photonic quantum computers are currently one of the primary candidates for fault-tolerant quantum computation. At the heart of photonic quantum computation lies the strict requirement for suitable quantum sources, e.g., high-purity, high-brightness single-photon sources. To build a practical quantum computer, thousands to millions of such sources are required. In this article, we theoretically propose a unique single-photon source design on a thin-film lithium niobate (TFLN) platform co-integrated with superconducting nanowire single-photon detectors. We show that a judicious design of the single-photon source, using thin-film periodically poled lithium niobate (PPLN) waveguides, back-illuminated grating couplers (GCs), and directly bonded or integrated cavity-coupled superconducting nanowire single-photon detectors (SNSPDs), can lead to a simple but practical high-efficiency heralded single-photon source using current fabrication technology. Such a device will eliminate the requirement of out-coupling the generated photons and can lead to a fully integrated solution. The proposed design can be useful for fusion-based quantum computation, for multiplexed single-photon sources, and for efficient on-chip generation and detection of squeezed light.
Experimental quantum physics and computing platforms rely on sophisticated computer control and timing systems that must be deterministic. An exemplar is the sequence used to create a Bose-Einstein condensate at the University of Illinois, which involves 46,812 analog and digital transitions over 100 seconds with 20 ns timing precision and nanosecond timing drift. We present a control and sequencing platform, using industry-standard National Instruments hardware to generate the necessary digital and analog signals, that achieves this level of performance. The system uses a master 10 MHz reference clock that is disciplined to the Global Positioning System satellite constellation and leverages low-phase-noise clock distribution hardware for timing stability. A Python-based user front-end provides a flexible language to describe experimental procedures and easy-to-implement version control. A library of useful peripheral hardware that can be purchased as low-cost evaluation boards provides enhanced capabilities. We provide a GitHub repository containing example Python sequences and libraries for peripheral devices as a resource for the community.
Among the variational wave functions for fermionic Hamiltonians, neural network backflow (NNBF) and hidden fermion determinant states (HFDS) are two prominent classes that provide accurate approximations to the ground state. Here we develop a unifying view of fermionic neural quantum states, casting them all in the framework of NNBF. NNBF wave functions have configuration-dependent single-particle orbitals (SPO) which are parameterized by a neural network. We show that HFDS with $r$ hidden fermions can be written as NNBF with an $r \times r$ determinant Jastrow and a restricted low-rank $r$ additive correction to the SPO. Furthermore, we show that in NNBF wave functions, such determinant Jastrows can generically be removed at the cost of further complicating the additive SPO correction, increasing its rank by $r$. We numerically and analytically compare additive SPO corrections generated by the product of two matrices with inner dimension $r$. We find that larger-$r$ wave functions span a larger space, and we give evidence that simpler and more direct updates to the SPOs tend to be more expressive and energetically favorable. This suggests that the standard NNBF approach is preferred among related choices. Finally, we uncover that the row selection used to pick single-particle orbitals allows significant sign and amplitude modulation between nearby configurations and is partially responsible for the quality of NNBF and HFDS wave functions.
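The NNBF structure summarized above, configuration-dependent orbitals built from a baseline matrix plus a low-rank additive correction, followed by row selection and a determinant, can be illustrated with a deliberately tiny toy model. Everything below (the rank-1 correction, the tanh feature map standing in for the neural network, and all the numbers) is our own hypothetical sketch, not the architecture from the paper:

```python
import math

# Toy NNBF amplitude for 2 spinless fermions on 4 sites:
# configuration-dependent orbitals M(x) = M0 + u(x) v^T (rank-1 backflow),
# amplitude = determinant of the rows selected by the occupied sites.
M0 = [[1.0, 0.2], [0.5, -0.3], [-0.4, 0.8], [0.1, 0.6]]  # 4 sites x 2 orbitals
v = [0.7, -0.2]                                          # fixed right factor

def u_of_x(occ):
    # stand-in for a neural network: any smooth function of the configuration
    s = sum(occ)
    return [math.tanh(0.3 * i * s) for i in range(4)]

def amplitude(occ):
    """occ: ordered pair of the 2 occupied site indices."""
    u = u_of_x(occ)
    M = [[M0[i][a] + u[i] * v[a] for a in range(2)] for i in range(4)]
    r0, r1 = M[occ[0]], M[occ[1]]            # row selection by occupied sites
    return r0[0] * r1[1] - r0[1] * r1[0]     # 2x2 determinant

# Fermionic antisymmetry comes for free from the determinant:
# amplitude((1, 0)) == -amplitude((0, 1))
```

In the real ansatz the correction has rank $r$ and the feature map is a trained network; the toy only shows how row selection lets the same orbital matrix produce configuration-dependent signs and amplitudes.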
The cooperative modification of spontaneous radiative decay is a paradigmatic many-emitter effect in quantum optics. So far its experimental realization has involved interactions mediated by rapidly escaping photons that do not play an active role in the emitter dynamics. Here we explore cooperative dynamics of quantum emitters in an optical lattice that interact by radiating atomic matter waves. Using the ability to prepare weakly and strongly interacting many-body phases of excitations in an array of matter-wave emitters, we demonstrate directional super- and subradiance from a superfluid phase with tunable radiative phase lags, and directly access the buildup of coherence imprinted by the emitted radiation across a Mott insulator. We investigate the onset of cooperative dynamics for slow wave propagation and observe a coupling to collective bound states with radiation trapped at and between the emitters. Our results in open-system quantum electrodynamics establish ultracold matter waves as a versatile tool for studying many-body quantum optics in spatially extended and ordered systems.
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a notion closely related to a robust combiner. A universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases. Although robust combiners and universal constructions for classical cryptography are widely studied, their quantum analogues have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives, including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them. Separately, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves this open problem.
Recent constructions of the first asymptotically good quantum LDPC (qLDPC) codes led to two breakthroughs in complexity theory: the NLTS (No Low-Energy Trivial States) theorem (Anshu, Breuckmann, and Nirkhe, STOC'23), and explicit lower bounds against a linear number of levels of the Sum-of-Squares (SoS) hierarchy (Hopkins and Lin, FOCS'22). In this work, we obtain improvements to both of these results using qLDPC codes of low rate:
- Whereas Anshu et al. only obtained NLTS Hamiltonians from qLDPC codes of linear dimension, we show the stronger result that qLDPC codes of arbitrarily small positive dimension yield NLTS Hamiltonians.
- The SoS lower bounds of Hopkins and Lin are only weakly explicit because they require running Gaussian elimination to find a nontrivial codeword, which takes polynomial time. We resolve this shortcoming by introducing a new method of planting a strongly explicit nontrivial codeword in linear-distance qLDPC codes, which in turn yields strongly explicit SoS lower bounds.
Our "planted" qLDPC codes may be of independent interest, as they provide a new way of ensuring a qLDPC code has positive dimension without resorting to parity check counting, and therefore provide more flexibility in the code construction.
Global multipartite entanglement may place a constraint on wave-particle duality. We investigate this constraint relation between global entanglement and quantitative wave-particle duality in tripartite systems. We perform quantum state tomography to reconstruct the reduced density matrix using the OriginQ quantum computing cloud platform. As a result, we show, both theoretically and experimentally, that the quantitative wave-particle duality is indeed constrained by the global tripartite entanglement.
The accurate description and robust computational modeling of the nonequilibrium properties of quantum systems remain challenges in condensed matter physics. In this work, we develop a linear-scaling computational simulation technique for the non-equilibrium dynamics of quantum quench systems. In particular, we report a polynomial expansion of the Loschmidt echo to describe the dynamical quantum phase transitions of noninteracting quantum quench systems. The expansion-based method allows us to efficiently compute the Loschmidt echo for infinitely large systems without diagonalizing the system Hamiltonian. To demonstrate its utility, we highlight quench dynamics in tight-binding quasicrystals and disordered lattices in one spatial dimension. In addition, the role of the wave vector in the quench dynamics of lattice models is addressed. We observe wave vector-independent dynamical phase transitions in self-dual localization models.
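For reference, the quantity involved is the Loschmidt echo $L(t)=|\langle\psi_0|e^{-iH_1 t}|\psi_0\rangle|^2$ after a parameter quench. The sketch below computes it for a small single-particle Aubry-André chain by brute-force diagonalization, whereas the paper's point is precisely to avoid diagonalization via a polynomial expansion; the model parameters are illustrative:

```python
import numpy as np

def aubry_andre(n, lam, beta=(np.sqrt(5) - 1) / 2):
    """Tight-binding chain with a quasiperiodic on-site potential."""
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -1.0              # nearest-neighbor hopping
    for i in range(n):
        h[i, i] = lam * np.cos(2 * np.pi * beta * (i + 1))
    return h

def loschmidt_echo(h0, h1, times):
    """L(t) = |<psi0| exp(-i H1 t) |psi0>|^2, psi0 = ground state of H0."""
    _, v0 = np.linalg.eigh(h0)
    psi0 = v0[:, 0]
    e1, v1 = np.linalg.eigh(h1)
    c = v1.T @ psi0                                    # overlaps with H1 eigenbasis
    return np.array([abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * e1 * t))) ** 2
                     for t in times])

ts = np.linspace(0.0, 5.0, 50)
L = loschmidt_echo(aubry_andre(40, 0.5), aubry_andre(40, 3.0), ts)  # quench across lam=2
```

Nonanalytic dips of $-\ln L(t)$ toward zero signal dynamical quantum phase transitions; here the quench crosses the self-dual point $\lambda=2$ of the Aubry-André model.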
Coherently dressed spins have shown promising results as building blocks for future quantum computers owing to their resilience to environmental noise and their compatibility with global control fields. This mode of operation allows for more amenable qubit architecture requirements and simplifies signal routing on the chip. However, multi-qubit operations, such as qubit addressability and two-qubit gates, are yet to be demonstrated to establish global control in combination with dressed qubits as a viable path to universal quantum computing. Here we demonstrate simultaneous on-resonance driving of degenerate qubits using a global field while retaining addressability for qubits with equal Larmor frequencies. Furthermore, we implement SWAP oscillations during on-resonance driving, constituting a demonstration of driven two-qubit gates. Significantly, our findings highlight the fragility of entangling gates between superposition states and how dressing can increase the noise robustness. These results represent a crucial milestone towards global control operation with dressed qubits, and open the door to interesting spin physics with degenerate spins.
We introduce a meta logarithmic-Sobolev (log-Sobolev) inequality for the Lindbladian of all single-mode phase-covariant Gaussian channels of bosonic quantum systems, and prove that this inequality is saturated by thermal states. We show that our inequality provides a general framework to derive information theoretic results regarding phase-covariant Gaussian channels. Specifically, by using the optimality of thermal states, we explicitly compute the optimal constant $\alpha_p$, for $1\leq p\leq 2$, of the $p$-log-Sobolev inequality associated to the quantum Ornstein-Uhlenbeck semigroup. These constants were previously known for $p=1$ only. Our meta log-Sobolev inequality also enables us to provide an alternative proof for the constrained minimum output entropy conjecture in the single-mode case. Specifically, we show that for any single-mode phase-covariant Gaussian channel $\Phi$, the minimum of the von Neumann entropy $S\big(\Phi(\rho)\big)$ over all single-mode states $\rho$ with a given lower bound on $S(\rho)$, is achieved at a thermal state.
Today's mechanical sensors are capable of detecting extremely weak perturbations while operating near the standard quantum limit. However, further improvements can be made in both sensitivity and bandwidth when we reduce the noise originating from the process of measurement itself -- the quantum-mechanical backaction of measurement -- and go below this 'standard' limit, possibly approaching the Heisenberg limit. One of the ways to eliminate this noise is by measuring a quantum nondemolition variable such as the momentum in a free-particle system. Here, we propose and characterize theoretical models for direct velocity measurement that utilize traditional electric and magnetic transducer designs to generate a signal while enabling this backaction evasion. We consider the general readout of this signal via electric or magnetic field sensing by creating toy models analogous to the standard optomechanical position-sensing problem, thereby facilitating the assessment of measurement-added noise. Using simple models that characterize a wide range of transducers, we find that the choice of readout scheme -- voltage or current -- for each mechanical detector configuration implies access to either the position or velocity of the mechanical sub-system. This in turn suggests a path forward for key fundamental physics experiments such as the direct detection of dark matter particles.
The geometric phase is a fundamental quantity characterizing the holonomic feature of quantum systems. It is well known that the evolution operator of a quantum system undergoing a cyclic evolution can be simply written as the product of holonomic and dynamical components for the three special cases concerning the Berry phase, adiabatic non-Abelian geometric phase, and nonadiabatic Abelian geometric phase. However, for the most general case concerning the nonadiabatic non-Abelian geometric phase, how to separate the evolution operator into holonomic and dynamical components is a long-standing open problem. In this work, we solve this open problem. We show that the evolution operator of a quantum system can always be separated into the product of holonomy and dynamic operators. Building on this result, we further derive a matrix representation of this separation formula for cyclic evolution, and give a necessary and sufficient condition for a general evolution to be purely holonomic. Our finding is not only of theoretical interest in itself, but also of vital importance for the application of quantum holonomy. It unifies the representations of all four types of evolution concerning the adiabatic/nonadiabatic Abelian/non-Abelian geometric phase, and provides a general approach to realizing purely holonomic evolution.
In this work the root of macroscopic quantum effects is revealed based on the quasiparticle model of collective excitations in an arbitrary degenerate electron gas. The $N$-electron quantum system is considered as $N$ streams coupled through Poisson's relation, which are localized in momentum space rather than in real space, as electron localization is assumed in ordinary many-body theories. Using a new wavefunction representation, the $N+1$-coupled system is reduced to simple pseudoforce equations via the quasiparticle (collective quantum) model, leading to a generalized matter-wave dispersion relation. It is shown that the resulting dual-lengthscale de Broglie matter-wave theory predicts macroscopic quantum effects and deterministic field trajectories for charges moving in the electron gas, due to the coupling of the electrostatic field to the local electron number density. It is remarked that any quantum many-body system composed of a large number of interacting particles acts as a dual-arm device, controlling microscopic single-particle effects with one hand and macroscopic phenomena with the other. The current analysis can be further extended to include the magnetic potential and spin-exchange effects. The present model can also be used to confirm macroscopic entanglement of charged particles embedded in a quantum electron fluid.
The circuit class $\mathsf{QAC}^0$ was introduced by Moore (1999) as a model for constant depth quantum circuits where the gate set includes many-qubit Toffoli gates. Proving lower bounds against such circuits is a longstanding challenge in quantum circuit complexity; in particular, showing that polynomial-size $\mathsf{QAC}^0$ cannot compute the parity function has remained an open question for over 20 years. In this work, we identify a notion of the \emph{Pauli spectrum} of $\mathsf{QAC}^0$ circuits, which can be viewed as the quantum analogue of the Fourier spectrum of classical $\mathsf{AC}^0$ circuits. We conjecture that the Pauli spectrum of $\mathsf{QAC}^0$ circuits satisfies \emph{low-degree concentration}, in analogy to the famous Linial, Nisan, Mansour theorem on the low-degree Fourier concentration of $\mathsf{AC}^0$ circuits. If true, this conjecture immediately implies that polynomial-size $\mathsf{QAC}^0$ circuits cannot compute parity. We prove this conjecture for the class of depth-$d$, polynomial-size $\mathsf{QAC}^0$ circuits with at most $n^{O(1/d)}$ auxiliary qubits. We obtain new circuit lower bounds and learning results as applications: this class of circuits cannot correctly compute
- the $n$-bit parity function on more than $(\frac{1}{2} + 2^{-\Omega(n^{1/d})})$-fraction of inputs, and
- the $n$-bit majority function on more than $(1 - 1/\mathrm{poly}(n))$-fraction of inputs.
Additionally, we show that this class of $\mathsf{QAC}^0$ circuits with limited auxiliary qubits can be learned with quasipolynomial sample complexity, giving the first learning result for $\mathsf{QAC}^0$ circuits. More broadly, our results add evidence that ``Pauli-analytic'' techniques can be a powerful tool in studying quantum circuits.
Variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm designed for noisy intermediate-scale quantum (NISQ) computers. It is promising for quantum chemical calculations (QCC) because it can calculate the ground-state energy of a target molecule. Although VQE has the potential to achieve higher accuracy than classical approximation methods in QCC, it is challenging to realize this on current NISQ computers due to the significant impact of noise. Density matrix embedding theory (DMET) is a well-known technique to divide a molecule into multiple fragments, which can mitigate the noise impact on VQE. However, our preliminary evaluation shows that the naive combination of DMET and VQE does not outperform a gold-standard classical method. In this work, we present three approaches to mitigate the noise impact for the DMET+VQE combination. (1) The size of the quantum circuits used by VQE is decreased by reducing the number of bath orbitals, which represent interactions between multiple fragments in DMET. (2) Reduced density matrices (RDMs), which are used to calculate the molecular energy in DMET, are calculated accurately based on expectation values obtained by executing quantum circuits on a noiseless quantum computer simulator. (3) The parameters of the quantum circuit optimized by VQE are refined with mathematical post-processing. The evaluation using a noisy quantum computer simulator shows that our approaches significantly improve the accuracy of the DMET+VQE combination. Moreover, we demonstrate that on a real NISQ device, the DMET+VQE combination applying our three approaches achieves higher accuracy than the gold-standard classical method.
Flexible Job Shop Scheduling (FJSSP) is a complex optimization problem crucial for real-world process scheduling in manufacturing. Efficiently solving such problems is vital for maintaining competitiveness. This paper introduces a Quantum Annealing-based solving algorithm (QASA) to address FJSSP, utilizing quantum annealing and classical techniques. QASA optimizes the multi-criteria FJSSP, considering makespan, total workload, and job priority concurrently. It employs a Hamiltonian formulation with Lagrange parameters to integrate constraints and objectives, allowing objective prioritization through weight assignment. To manage computational complexity, large instances are decomposed into subproblems, and a decision logic based on bottleneck factors is used. Experiments on benchmark problems show that QASA, combining tabu search, simulated annealing, and quantum annealing, outperforms a classical solving algorithm (CSA) in solution quality (set coverage and hypervolume ratio metrics). Computational efficiency analysis indicates QASA achieves superior Pareto solutions with a reasonable increase in computation time compared to CSA.
Control over the joint spectral amplitude of a photon pair has proved highly desirable for many quantum applications, since it contains the spectral quantum correlations and has crucial effects on the indistinguishability of photons, with promising emerging applications involving complex quantum functions and frequency encoding of qudits. To date, this has been achieved by engineering a single degree of freedom, either by custom-poling the nonlinear crystal or by shaping the pump pulse. We present a combined approach in which two degrees of freedom, the phase-matching function and the pump spectrum, are controlled. This approach enables two-dimensional control of the joint spectral amplitude, generating a variety of spectrally encoded quantum states, including frequency-uncorrelated states, frequency-bin Bell states, and biphoton qudit states. In addition, photon bunching and anti-bunching are controlled via the joint spectral amplitude, reflecting the symmetry of the phase-matching function.
Quantum annealers are suited to solve several logistics optimization problems expressed in the QUBO formulation. However, the solutions proposed by quantum annealers are generally not optimal, as thermal noise and other disturbing effects arise when the number of qubits involved in the calculation is too large. In order to deal with this issue, we propose the use of the classical branch-and-bound algorithm, which divides the problem into sub-problems described by a lower number of qubits. We analyze the performance of this method on two problems, the knapsack problem and the traveling salesman problem. Our results show the advantages of this method, which balances the number of steps the algorithm has to make against the amount of error in the quantum hardware's solution that the user is willing to accept. All the results are actual runs on the D-Wave Advantage quantum annealer.
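To illustrate the QUBO formulation such annealers consume, here is a toy knapsack instance encoded with a quadratic capacity penalty and binary slack variables, minimized by brute force in place of the annealer; the instance numbers and penalty weight are made up for the sketch:

```python
import itertools

values  = [6, 10, 12]      # toy item values (hypothetical)
weights = [1, 2, 3]        # toy item weights
W       = 5                # knapsack capacity
P       = 50               # penalty weight, larger than the max total value

n_slack = W.bit_length()   # binary slack bits encode the unused capacity

def qubo_energy(x, s):
    """QUBO objective: -value + P * (load + slack - W)^2."""
    load  = sum(w * xi for w, xi in zip(weights, x))
    slack = sum((2 ** j) * sj for j, sj in enumerate(s))
    value = sum(v * xi for v, xi in zip(values, x))
    return -value + P * (load + slack - W) ** 2

# brute-force minimization stands in for the annealer on this tiny instance
best = min(
    ((x, s) for x in itertools.product([0, 1], repeat=len(values))
            for s in itertools.product([0, 1], repeat=n_slack)),
    key=lambda xs: qubo_energy(*xs))
```

On hardware, each binary variable becomes a qubit, so the branch-and-bound decomposition described in the abstract directly shrinks the variable count of each sub-QUBO.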
Shortcuts to adiabaticity guide systems along fast tracks to the final states that adiabatic control would reach. A variety of methods have been proposed as shortcuts to adiabaticity. The basic theory of shortcuts to adiabaticity was established in the 2010s, but the field is still developing and many fundamental findings have been reported. In this Topical Review, we give a pedagogical introduction to the theory of shortcuts to adiabaticity and revisit the relations between different methods. Some versatile approximations in counterdiabatic driving, one of the methods of shortcuts to adiabaticity, are explained in detail. We also summarize recent progress in studies of shortcuts to adiabaticity.
Altering chemical reactivity and material structure in confined optical environments is on the rise, and yet a conclusive understanding of the microscopic mechanisms remains elusive. This originates mostly from the fact that accurately predicting vibrational and reactive dynamics for solvated ensembles of realistic molecules is no small endeavor, and adding (collective) strong light-matter interaction does not simplify matters. Here, we establish a framework based on a combination of machine learning (ML) models, trained using density-functional theory calculations, and molecular dynamics to accelerate such simulations. We then apply this approach to evaluate strong coupling, changes in the reaction rate constant, and their influence on enthalpy and entropy for the deprotection reaction of 1-phenyl-2-trimethylsilylacetylene, which has been studied previously both experimentally and using ab initio simulations. While we find qualitative agreement with critical experimental observations, especially with regard to the changes in kinetics, we also find differences in comparison with previous theoretical predictions. The features for which the ML-accelerated and ab initio simulations agree show the experimentally estimated kinetic behavior. Conflicting features indicate that a contribution of electronic polarization to the reaction process is more relevant than currently believed. Our work demonstrates the practical use of ML for polaritonic chemistry, discusses limitations of common approximations, and paves the way for a more holistic description of polaritonic chemistry.
In the current Noisy Intermediate-Scale Quantum (NISQ) era, encoding large amounts of data in quantum devices is challenging, and noise significantly affects the quality of the obtained results. A viable approach for the execution of quantum classification algorithms is the introduction of a well-known machine learning paradigm, namely, ensemble methods. Indeed, ensembles combine multiple internal classifiers, which are characterized by compact sizes due to the smaller data subsets used for training, to achieve more accurate and robust prediction performance. In this way, it is possible to reduce the qubit requirements with respect to a single larger classifier while achieving comparable or improved performance. In this work, we present an implementation and an extensive empirical evaluation of ensembles of quantum classifiers for binary classification, with the purpose of providing insights into their effectiveness, limitations, and potential for enhancing the performance of basic quantum models. In particular, three classical ensemble methods and three quantum classifiers have been taken into account here. Hence, the scheme that has been implemented (in Python) has a hybrid nature. The results, obtained on real-world datasets, have shown an accuracy advantage for the ensemble techniques with respect to the single quantum classifiers, as well as an improvement in robustness. In fact, the ensembles have turned out to be able to mitigate both unsuitable data normalizations and repeated measurement inaccuracies, making quantum classifiers more stable.
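The accuracy gain from combining classifiers can be sketched with a purely classical stand-in, where each "quantum classifier" is modeled as an independent predictor that is correct with probability 0.7; the 0.7 figure and the simulation are illustrative assumptions, not numbers from the paper:

```python
import random

random.seed(0)

def weak_predict(y_true, p=0.7):
    """Stand-in for a small noisy classifier: correct with probability p."""
    return y_true if random.random() < p else 1 - y_true

def majority(preds):
    """Majority vote over an odd number of binary predictions."""
    return int(sum(preds) > len(preds) / 2)

trials = 20000
single_hits = ensemble_hits = 0
for _ in range(trials):
    y = random.randint(0, 1)
    preds = [weak_predict(y) for _ in range(3)]
    single_hits   += preds[0] == y          # one classifier alone
    ensemble_hits += majority(preds) == y   # three-classifier ensemble

single_acc   = single_hits / trials         # ~0.70
ensemble_acc = ensemble_hits / trials       # ~0.784 = p^3 + 3p^2(1-p)
```

For three independent predictors, majority voting lifts the accuracy from $p$ to $p^3 + 3p^2(1-p)$, which is the basic mechanism the ensemble methods in the paper exploit.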
With the explosion of data over the past decades, there has been a corresponding explosion of techniques to extract information from labeled data, quasi-labeled data, and data with no labels known a priori. For data with at best quasi-labels, graphs are a natural structure for connecting points to further extract information. In particular, anomaly detection in graphs is a method to determine which data points do not possess the latent characteristics of the other data. There is a variety of classical methods to score vertices on their anomaly level with respect to the graph, ranging from straightforward checks of a node's local topology to intricate neural networks. Leveraging the structure of the graph, we propose a first quantum-based technique to calculate the anomaly score of each node by continuously traversing the graph in a particular manner. The proposed algorithm incorporates well-known characteristics of quantum random walks, and an adjustment to the algorithm is given to mitigate the increasing depth of the circuit. The algorithm is rigorously shown to converge to the expected probability with respect to the initial condition.
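A classical random-walk analogue of the idea — score a node as anomalous when the walk dynamics rarely visit it — can be sketched as follows; the quantum walk of the paper is replaced by a classical row-stochastic walk purely for illustration, and the toy graph is hypothetical:

```python
import numpy as np

def walk_scores(adj, steps=20):
    """Anomaly score = 1 - average visiting probability of a
    uniform-start classical random walk (classical stand-in
    for the quantum-walk traversal)."""
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]               # row-stochastic transition matrix
    p = np.full(len(adj), 1 / len(adj))  # uniform initial distribution
    visits = np.zeros(len(adj))
    for _ in range(steps):
        p = p @ P
        visits += p
    return 1 - visits / steps

# toy graph: a triangle (nodes 0,1,2) with a loosely attached pendant node 3
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = walk_scores(adj)                # pendant node 3 scores highest
```

Rarely visited nodes sit outside the dense structure of the graph, so the pendant node receives the highest anomaly score in this sketch.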
We study a 2D mesoscopic ring with an anisotropic effective mass, considering surface quantum confinement effects. The ring is defined on the surface of a cone, which can be controlled topologically and mapped to a 2D ring in flat space. We demonstrate through numerical analysis that the electronic properties, the magnetization, and the persistent current undergo significant changes due to quantum confinement and the anisotropic mass. We investigate these changes in the direct-band-gap semiconductors SiC, ZnO, GaN, and AlN. There is an upward (or downward) shift in the energy sub-bands for different values of the curvature parameter and the anisotropy. Manifestations of this nature are also seen in the Fermi energy profile as a function of the magnetic field and in the ring width as a function of the curvature parameter. Aharonov-Bohm (AB) and de Haas-van Alphen (dHvA) oscillations are also studied, and we find that they are sensitive to variations in curvature and anisotropy.
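For context, the flux periodicity behind AB oscillations can be sketched with the textbook ideal 1D ring, whose single-particle energies are $E_n \propto (n - \Phi/\Phi_0)^2$ in units of $\hbar^2/2m^*R^2$; the curvature and anisotropy corrections studied in the paper are omitted from this toy model:

```python
import numpy as np

def ground_energy(phi, nmax=10):
    """Ground-state energy of an ideal 1D ring threaded by flux,
    phi = Phi/Phi0, in units of hbar^2 / (2 m* R^2)."""
    n = np.arange(-nmax, nmax + 1)       # angular momentum quantum numbers
    return ((n - phi) ** 2).min()        # lowest parabola wins

phis = np.linspace(0.0, 2.0, 201)
E = np.array([ground_energy(p) for p in phis])   # periodic with period 1
```

The ground energy is periodic in the flux quantum, so the persistent current $I = -\partial E/\partial \Phi$ oscillates with the same AB period; curvature and mass anisotropy shift and reshape these parabolas in the full model.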
Device-independent quantum key distribution (DIQKD) is a key distribution scheme whose security is based on the laws of quantum physics but does not require any assumptions about the devices used in the protocol. The security of the existing entanglement-based DIQKD protocol relies on the Bell test. Here, we propose an efficient device-independent quantum key distribution (EDIQKD) protocol in which one participant prepares states and transmits them through a quantum channel to another participant, who measures them. In this prepare-and-measure protocol, the transmission process between participants is characterized via process tomography for security, ruling out any mimicry by classical initial, transmission, and final states. When comparing the minimum number of bits of the raw key, i.e., the key rounds, required to guarantee security against collective attacks, the efficiency of the EDIQKD protocol is two orders of magnitude higher than that of the DIQKD protocol for a reliable key, whose quantum bit error rate is allowed up to 6.5%. This advantage will enable participants to substantially conserve the resources demanded for entangled pairs and measurements. According to the highest detection efficiency in the most advanced recent photonic experiments, our protocol can be realized with a non-zero key rate and remains more efficient than usual DIQKD. Our protocol and its security analysis may offer helpful insight into identifying typical prepare-and-measure quantum information tasks within the device-independent scenario.
Multipartite Einstein-Podolsky-Rosen (EPR) steering admits multipartite entanglement in the presence of uncharacterized verifiers, enabling practical applications in semi-device-independent protocols. Such applications generally require stronger steerability, while unavoidable noise weakens steerability and consequently degrades the performance of quantum information processing. Here, we propose and demonstrate the optimal local filter operation to distill genuine tripartite EPR steering from two copies of weakly steerable assemblages in two semi-device-independent cases, the one-sided device-independent scenario and the two-sided device-independent scenario, on three-qubit generalized Greenberger-Horne-Zeilinger states. The advantage of the optimal local filter is confirmed by the distilled assemblage in terms of higher assemblage fidelity with perfectly genuine tripartite steerable assemblages, as well as greater violation of the inequality witnessing genuine tripartite steerable assemblages. Our results benefit the distillation of multipartite EPR steering in practice, where the number of copies of initial assemblages is generally finite.
Noise is in general inevitable and detrimental to practical and useful quantum communication and computation. Under the resource theory framework, resource distillation serves as a generic tool to overcome the effect of noise. Yet, conventional resource distillation protocols generally require operations on multiple copies of resource states, and strong limitations exist that restrict their practical utility. Recently, by relaxing the setting of resource distillation to only approximating the measurement statistics instead of the quantum state, a resource-frugal protocol, virtual resource distillation, was proposed, which allows more effective distillation of noisy resources. Here, we report its experimental implementation on a four-qubit photonic quantum system for the distillation of quantum coherence (up to dimension 4) and bipartite entanglement. We show the virtual distillation of the maximally superposed state of dimension four from a state of dimension two, an impossible task in conventional coherence distillation. Furthermore, we demonstrate the virtual distillation of entanglement with operations acting only on a single copy of the noisy EPR pair, and showcase the quantum teleportation task using the virtually distilled EPR pair with a significantly improved fidelity of the teleported state. These results illustrate the feasibility of the virtual resource distillation method and pave the way for accurate manipulation of quantum resources with noisy quantum hardware.
Recent advances in quantum information and quantum science have inspired the development of various compact, dynamically structured ansatze that are expected to be realizable on Noisy Intermediate-Scale Quantum (NISQ) devices. However, the ansatz-construction strategies developed hitherto involve considerable pre-circuit measurements, and thus on NISQ platforms they deviate significantly from their ideal structures. It is thus imperative that the usage of quantum resources be minimized while retaining the expressivity and dynamical structure of the ansatz, which can tailor itself depending on the degree of strong correlation. We propose a novel ansatz-construction strategy based on \textit{ab-initio} many-body perturbation theory that requires \textit{no} pre-circuit measurement and thus remains structurally unaffected by hardware noise. The accuracy and quantum complexity associated with the ansatz are solely dictated by a pre-defined perturbative order, as desired, and hence are tunable. Furthermore, the underlying perturbative structure of the ansatz-construction pipeline enables us to decompose any high-rank excitation that appears at higher perturbative orders into various low-rank operators, and thus keeps the execution gate depth to a minimum. With a number of challenging applications to strongly correlated systems, we demonstrate that our ansatz performs significantly better, in terms of accuracy, parameter count, and circuit depth, than allied unitary coupled-cluster based ansatze.
Polariton thermalization is a key process in achieving light-matter Bose--Einstein condensation, spanning from solid-state semiconductor microcavities at cryogenic temperatures to surface plasmon nanocavities with molecules at room temperature. Originating from the matter component of polariton states, the microscopic mechanisms of thermalization are closely tied to specific material properties. In this work, we investigate polariton thermalization in strongly-coupled molecular systems. We develop a microscopic theory addressing polariton thermalization through electron-phonon interactions (known as vibronic coupling) with low-energy molecular vibrations. This theory presents a simple analytical method to calculate the temperature-dependent polariton thermalization rate, utilizing experimentally accessible spectral properties of bare molecules, such as the Stokes shift and the temperature-dependent linewidth of photoluminescence, in conjunction with well-known parameters of optical cavities. Our findings demonstrate remarkable agreement with recent experimental reports of nonequilibrium polariton condensation in both ground and excited states, and explain the thermalization bottleneck effect observed at low temperatures. This study showcases the significance of vibrational degrees of freedom in polariton condensation and offers practical guidance for future experiments, including the selection of suitable material systems and cavity designs.
The quantization of systems composed of transmission lines connected to lumped circuits poses significant challenges, arising from the interplay between continuous and discrete degrees of freedom. A widely adopted strategy, based on the pioneering work of Yurke and Denker, entails representing the lumped circuit contributions using Lagrangian densities that incorporate Dirac $\delta$-functions. However, this approach introduces complications, as highlighted in the recent literature, including divergent momentum densities, necessitating the use of regularization techniques. In this work, we introduce a $\delta$-free Lagrangian formulation without the need for a discretization of the transmission line or mode expansions. This is achieved by explicitly enforcing boundary conditions at the line ends. In this framework, the derivation of the Heisenberg equations of the network is straightforward. We demonstrate that, in the Heisenberg representation a finite-length transmission line can be described as a two-port system composed of two resistors and two controlled sources with delay. This equivalent model extends the one-port model which is commonly used in the literature for semi-infinite transmission lines. Finally, we apply our approach to analytically solvable networks.
Integrated photonic systems provide a flexible platform where artificial lattices can be engineered in a reconfigurable fashion. Here, we show that one-dimensional photonic arrays with engineered losses allow the realization of topological excitations stemming from non-Hermiticity and bulk mode criticality. We show that a generalized modulation of the local photonic losses allows the creation of topological modes both in the presence of periodicity and even in the quasiperiodic regime. We demonstrate that a localization transition of all the bulk photonic modes can be engineered in the presence of a quasiperiodic loss modulation, and we further demonstrate that such a transition can be created in the presence of both resonance frequency modulation and loss modulation. We finally address the robustness of this phenomenology to the presence of higher-neighbor couplings and disorder in the emergence of criticality and topological modes. Our results put forward a strategy to engineer topology and criticality solely from engineered losses in a photonic system, establishing a potential platform to study the impact of non-linearities in topological and critical photonic matter.
We show that the deficiency indices of magnetic Schr\"odinger operators with several local singularities can be computed in terms of the deficiency indices of operators carrying just one singularity each. We discuss some applications to physically relevant operators.
In recent years, strong expectations have been raised for the possible power of quantum computing for solving difficult optimization problems, based on theoretical, asymptotic worst-case bounds. Can we expect this to have consequences for Linear and Integer Programming when solving instances of practically relevant size, a fundamental goal of Mathematical Programming, Operations Research and Algorithm Engineering? Answering this question faces a crucial impediment: The lack of sufficiently large quantum platforms prevents performing real-world tests for comparison with classical methods. In this paper, we present a quantum analog for classical runtime analysis when solving real-world instances of important optimization problems. To this end, we measure the expected practical performance of quantum computers by analyzing the expected gate complexity of a quantum algorithm. The lack of practical quantum platforms for experimental comparison is addressed by hybrid benchmarking, in which the algorithm is performed on a classical system, logging the expected cost of the various subroutines that are employed by the quantum versions. In particular, we provide an analysis of quantum methods for Linear Programming, for which recent work has provided asymptotic speedup through quantum subroutines for the Simplex method. We show that a practical quantum advantage for realistic problem sizes would require quantum gate operation times that are considerably below current physical limitations.
The inclusion of temperature effects is important to properly simulate and interpret experimentally observed vibrationally resolved electronic spectra. We present a numerically exact approach for evaluating these spectra at finite temperature using the thermofield coherence dynamics. In this method, which avoids implementing an algorithm for solving the von Neumann equation for the coherence, the thermal vibrational ensemble is first mapped to a pure-state wavepacket in an augmented space, and this wavepacket is then propagated by solving the standard, zero-temperature Schr\"{o}dinger equation with the split-operator Fourier method. We show that the finite-temperature spectra obtained with the thermofield coherence dynamics in a Morse potential agree exactly with those computed by Boltzmann-averaging the spectra of individual vibrational levels. Because the split-operator thermofield dynamics on a full tensor-product grid is restricted to low-dimensional systems, we briefly discuss how the accessible dimensionality can be increased by various techniques developed for the zero-temperature split-operator Fourier method.
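The split-operator Fourier propagation that underlies this scheme can be sketched numerically. The following minimal example (the one-dimensional harmonic potential, grid, and time step are illustrative assumptions, not the paper's setup) propagates a Gaussian wavepacket with second-order Strang splitting and checks that the norm is conserved:

```python
import numpy as np

# Grid and harmonic potential (illustrative parameters; hbar = m = 1)
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x**2
dt = 0.01

# Initial Gaussian wavepacket, displaced from the potential minimum
psi = np.exp(-(x - 1.0)**2 / 2) / np.pi**0.25

def split_operator_step(psi):
    """One second-order (Strang) split-operator step: V/2, then T, then V/2."""
    psi = np.exp(-0.5j * V * dt) * psi
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi = np.exp(-0.5j * V * dt) * psi
    return psi

for _ in range(1000):
    psi = split_operator_step(psi)

# Unitary propagation preserves the norm
norm = np.sum(np.abs(psi)**2) * dx
print(round(norm, 6))
```

The kinetic part is applied as a pure phase in momentum space, which is what makes the method both fast and exactly unitary per step.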
We introduce an efficient algorithm based on the quantum noise framework for simulating open quantum systems on quantum devices. We prove that the open system dynamics can be simulated by repeatedly applying random unitary gates acting on the system qubits plus a single ancillary bath qubit representing the environment. This algorithm represents a notable step forward compared to current approaches, not only because the ancilla overhead always remains constant regardless of the system size, but also because it provides a perturbative approximation of the full Lindblad equation to first order in the environment coupling constants, allowing one to reach a better target accuracy than a first-order approximation in the time step and thus reducing the total number of steps. When the perturbative approximation does not hold, one can take smaller time steps, and the approach reduces to the standard solution to first order in the time step. As a future perspective, this framework easily accommodates non-Markovian effects by relaxing the bath-qubit reset prescription.
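The constant single-ancilla structure can be illustrated with a textbook collision model (a generic sketch of the idea, not the paper's perturbative algorithm; the coupling angle and step count are made-up values): a fresh bath qubit interacts unitarily with the system and is reset after every step, which reproduces amplitude damping on the system qubit.

```python
import numpy as np

# Collision-model sketch: one bath qubit mediates amplitude damping on the
# system qubit; after each interaction the bath qubit is reset to |0>.
theta = 0.3           # system-bath coupling angle per collision (arbitrary)
p = np.sin(theta)**2  # effective decay probability per step

# Excitation-exchange unitary on (system ⊗ bath), basis |00>,|01>,|10>,|11>
U = np.eye(4, dtype=complex)
c, s = np.cos(theta), np.sin(theta)
U[1, 1], U[1, 2] = c, -s   # rotation in the |01>, |10> subspace
U[2, 1], U[2, 2] = s, c

def collide(rho_sys):
    """One collision: couple to a fresh |0> bath qubit, evolve, trace out bath."""
    rho = np.kron(rho_sys, np.array([[1, 0], [0, 0]], dtype=complex))
    rho = U @ rho @ U.conj().T
    rho4 = rho.reshape(2, 2, 2, 2)      # indices (s, b, s', b')
    return np.einsum('ikjk->ij', rho4)  # partial trace over the bath qubit

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # system starts in |1>
for _ in range(50):
    rho = collide(rho)

# The excited population decays as (1 - p)^n, as for amplitude damping
expected = (1 - p)**50
print(abs(rho[1, 1].real - expected) < 1e-12)
```

The bath register is the same single qubit throughout, so the ancilla overhead stays constant no matter how long the evolution runs, mirroring the constant-overhead claim above.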
We study a generic family of Lindblad master equations modeling bipartite open quantum systems, where one tries to stabilize a quantum system by carefully designing its interaction with another, dissipative, quantum system - a strategy known as quantum reservoir engineering. We provide sufficient conditions for convergence of the considered Lindblad equations; our setting accommodates the case where steady states are not unique but rather supported on a given subspace of the underlying Hilbert space. We apply our result to a Lindblad master equation proposed for the stabilization of so-called cat qubits, a system that received considerable attention in recent years due to its potential applications in quantum information processing.
Quantum networks crucially rely on the availability of high-quality entangled pairs of qubits, known as entangled links, distributed across distant nodes. Maintaining the quality of these links is a challenging task due to the presence of time-dependent noise, also known as decoherence. Entanglement purification protocols offer a solution by converting multiple low-quality entangled states into a smaller number of higher-quality ones. In this work, we introduce a framework to analyse the performance of entanglement buffering setups that combine entanglement consumption, decoherence, and entanglement purification. We propose two key metrics: the availability, which is the steady-state probability that an entangled link is present, and the average consumed fidelity, which quantifies the steady-state quality of consumed links. We then investigate a two-node system, where each node possesses two quantum memories: one for long-term entanglement storage, and another for entanglement generation. We model this setup as a continuous-time stochastic process and derive analytical expressions for the performance metrics. Our findings unveil a trade-off between the availability and the average consumed fidelity. We also bound these performance metrics for a buffering system that employs the well-known bilocal Clifford purification protocols. Importantly, our analysis demonstrates that, in the presence of noise, consistently purifying the buffered entanglement increases the average consumed fidelity, even when some buffered entanglement is discarded due to purification failures.
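The availability metric has a simple renewal-theory flavor that can be illustrated with a toy alternating-renewal simulation (exponential generation and consumption times with made-up rates; the paper's actual model, with purification and two memories per node, is considerably richer):

```python
import numpy as np

# Toy sketch of "availability": a link is generated after an exponential
# waiting time (rate mu) and then consumed after an exponential holding
# time (rate lam). All rates are illustrative assumptions.
rng = np.random.default_rng(7)
mu, lam = 2.0, 1.0   # generation rate, consumption rate

up_time = down_time = 0.0
for _ in range(200000):
    down_time += rng.exponential(1 / mu)  # waiting for a new link
    up_time += rng.exponential(1 / lam)   # link present until consumed

availability = up_time / (up_time + down_time)
analytic = mu / (mu + lam)  # steady-state fraction of time a link exists
print(abs(availability - analytic) < 0.01)
```

The trade-off discussed in the abstract enters once purification is added: purification attempts shorten the effective up-time (lowering availability) while raising the fidelity of what is eventually consumed.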
We propose two optimal phase-estimation schemes that can be used for quantum-enhanced long-baseline interferometry. By using distributed entanglement, it is possible to eliminate the loss of stellar photons during transmission over the baselines. The first protocol is a sequence of gates using nonlinear optical elements, optimized over all possible measurement schemes to saturate the Cram\'er-Rao bound. The second approach builds on an existing protocol, which encodes the time of arrival of the stellar photon into a quantum memory. Our modified version reduces both the number of ancilla qubits and the number of gate operations by a factor of two.
In this book, we provide a comprehensive introduction to the most recent advances in the application of machine learning methods in quantum sciences. We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms for phase classification, representation of many-body quantum states, quantum feedback control, and quantum circuits optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, statistical approach to machine learning, and quantum machine learning.
The Scalable ZX-calculus is a compact graphical language used to reason about linear maps between quantum states. These diagrams have multiple applications, but they frequently have to be constructed on a case-by-case basis. In this work we present a method to encode quantum programs implemented in a fragment of the linear dependently typed Proto-Quipper-D language as families of SZX-diagrams. We define a subset of translatable Proto-Quipper-D programs and show that our procedure is able to encode non-trivial algorithms as diagrams that grow linearly in the size of the program.
The quantum internet is one of the frontiers of quantum information science research. It will revolutionize the way we communicate and do other tasks, and it will allow for tasks that are not possible using the current, classical internet. The backbone of a quantum internet is entanglement distributed globally in order to allow for such novel applications to be performed over long distances. Experimental progress is currently being made to realize quantum networks on a small scale, but much theoretical work is still needed in order to understand how best to distribute entanglement, especially with the limitations of near-term quantum technologies taken into account. This work provides an initial step towards this goal. In this work, we lay out a theory of near-term quantum networks based on Markov decision processes (MDPs), and we show that MDPs provide a precise and systematic mathematical framework to model protocols for near-term quantum networks that is agnostic to the specific implementation platform. We start by simplifying the MDP for elementary links introduced in prior work, and by providing new results on policies for elementary links. In particular, we show that the well-known memory-cutoff policy is optimal. Then we show how the elementary link MDP can be used to analyze a quantum network protocol in which we wait for all elementary links to be active before creating end-to-end links. We then provide an extension of the MDP formalism to two elementary links, which is useful for analyzing more sophisticated quantum network protocols. Here, as new results, we derive linear programs that give us optimal steady-state policies with respect to the expected fidelity and waiting time of the end-to-end link.
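The memory-cutoff policy mentioned above can be illustrated with a toy steady-state simulation of a single elementary link (the success probability and cutoff are made-up values; this sketches only the fraction of time a link is available, not the full MDP analysis with fidelities and waiting-time distributions):

```python
import numpy as np

# Memory-cutoff policy sketch: each time step an entanglement attempt
# succeeds with probability p; a successful link is stored for at most
# t_cut steps, then discarded and regeneration restarts.
rng = np.random.default_rng(0)
p, t_cut, steps = 0.3, 5, 500000

active_steps = 0
age = -1  # -1: no link; otherwise steps since generation
for _ in range(steps):
    if age < 0:
        if rng.random() < p:
            age = 0           # link generated this step
    else:
        active_steps += 1     # link held in memory
        age += 1
        if age >= t_cut:
            age = -1          # cutoff reached: discard the link

fraction_active = active_steps / steps
analytic = t_cut / (t_cut + 1 / p)  # renewal-reward steady-state value
print(abs(fraction_active - analytic) < 0.01)
```

Lowering `t_cut` discards links before they decohere too much (raising fidelity) at the cost of a smaller active fraction, which is exactly the kind of trade-off the MDP framework is built to optimize.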
We explore the dynamics of quantum spin systems in two and three dimensions using an exact mapping to classical stochastic processes. In recent work we explored the effectiveness of sampling around the mean-field evolution as determined by a stochastically averaged Weiss field. Here, we show that this approach can be significantly extended by sampling around the instantaneous Weiss field associated with each stochastic trajectory taken separately. This trajectory-resolved approach incorporates sample-to-sample fluctuations and allows for longer simulation times. We demonstrate the utility of this approach for quenches in the two-dimensional and three-dimensional quantum Ising model. We show that the method is particularly advantageous in situations where the average Weiss field vanishes, but the trajectory-resolved Weiss fields are non-zero. We discuss the connection to the gauge-P phase space approach, where the trajectory-resolved Weiss field can be interpreted as a gauge degree of freedom.
When a generic quantum system is prepared in a simple initial condition, it typically equilibrates toward a state that can be described by a thermal ensemble. A known exception is localized systems, which are non-ergodic and do not thermalize; however, local observables are still believed to become stationary. Here we demonstrate that this general picture is incomplete by constructing product states which feature periodic high-fidelity revivals of the full wavefunction and local observables that oscillate indefinitely. The system neither equilibrates nor thermalizes. This is analogous to the phenomenon of weak ergodicity breaking due to many-body scars and challenges aspects of the current MBL phenomenology, such as the logarithmic growth of the entanglement entropy. To support our claim, we combine analytic arguments with large-scale tensor network numerics for the disordered Heisenberg chain. Our results hold for arbitrarily long times in chains of 160 sites up to machine precision.
Entanglement is a fundamental concept at the core of quantum information science, both in terms of its theoretical underpinnings and practical applications. A key priority in studying and utilizing entanglement is to find reliable procedures for the generation of entangled states. In this research, we propose a graph-based method for systematically searching for schemes that can produce genuine entanglement in arbitrary $N$-partite linear bosonic systems, without postselection. While entanglement generation without postselection yields schemes that are more robust for quantum tasks, it is in general more challenging to find appropriate circuits for systems with a large number of parties. We present a practical strategy to mitigate this limitation through the implementation of our graph technique. Our physical setup is based on the sculpting protocol, which utilizes $N$ spatially overlapped subtractions of single bosons to convert Fock states of evenly distributed bosons into entanglement. We have identified general schemes for $N$-partite qubit GHZ and W states, which are significantly more efficient than previous schemes. In addition, our scheme for generating the superposition of $N=3$ GHZ and W entangled states illustrates that our approach can be extended to derive more generalized forms of entangled states. Furthermore, we have found an $N$-partite GHZ state generation scheme for qudits, which requires substantially fewer particles than previous proposals. These results demonstrate the power of our approach in discovering clear solutions for the generation of intricate entangled states. Our schemes can be directly realized in many-boson systems. As a proof of concept, we propose a linear optical scheme for the generation of the Bell state by heralding detections.
This paper develops practical summation techniques in ZXW calculus to reason about quantum dynamics, such as unitary time evolution. First we give a direct representation of a wide class of sums of linear operators, including arbitrary qubit Hamiltonians, in ZXW calculus. As an application, we demonstrate the linearity of the Schr\"odinger equation and give a diagrammatic representation of the Hamiltonian in Greene-Diniz et al., the first paper to model carbon capture using quantum computing. We then use the Cayley-Hamilton theorem to show in principle how to exponentiate arbitrary qubit Hamiltonians in ZXW calculus. Finally, we develop practical techniques and show how to do Taylor expansion and Trotterization diagrammatically for Hamiltonian simulation. This sets up the framework for applying ZXW calculus to problems in quantum chemistry and condensed matter physics.
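Trotterization, which the paper carries out diagrammatically, corresponds numerically to the standard product formula. A minimal one-qubit matrix check with non-commuting terms $H = X + Z$ (an illustrative choice, not an example from the paper) confirms the first-order $O(1/n)$ error scaling:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0

def expmh(H, s):
    """exp(-1j*H*s) for a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * s)) @ V.conj().T

exact = expmh(X + Z, t)

def trotter(n):
    """First-order Trotter product (e^{-iXt/n} e^{-iZt/n})^n."""
    step = expmh(X, t / n) @ expmh(Z, t / n)
    return np.linalg.matrix_power(step, n)

errs = [np.linalg.norm(trotter(n) - exact) for n in (10, 100)]
# First-order error scales as O(1/n): ~10x more steps, ~10x smaller error
print(errs[1] < errs[0] / 5)
```

Since $X$ and $Z$ do not commute, the product formula is only approximate, and refining the step count shrinks the error linearly, the same behavior the diagrammatic Trotterization must reproduce.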
Chirality, or handedness, is a geometrical property denoting a lack of mirror symmetry. Chirality is ubiquitous in nature and is associated with the non-reciprocal interactions observed in complex systems ranging from biomolecules to topological materials. Here, we demonstrate that chiral arrangements of dipole-coupled atoms or molecules can facilitate the unidirectional transport of helical photonic excitations without breaking time-reversal symmetry. We show that such helicity dependent transport stems from an emergent spin-orbit coupling induced by the chiral geometry, which results in nontrivial topological properties. We also examine the effects of collective dissipation and find that many-body coherences lead to helicity dependent photon emission: an effect we call helical superradiance. Our results demonstrate an intimate connection between chirality, topology, and photon helicity that may contribute to molecular photodynamics in nature and could be probed with near-term quantum simulators.
The possibility of attaining chiral edge modes under periodic driving has spurred tremendous attention, both theoretically and experimentally, especially in light of anomalous Floquet topological phases that feature vanishing Chern numbers unlike any static counterpart. We here consider a periodically modulated honeycomb lattice and experimentally relevant driving protocols, which allows us to obtain edge modes of various character in a simple model. We calculate the phase diagram over a wide range of parameters and recover an anomalous topological phase with quasienergy gaps harbouring edge states with opposite chirality. Motivated by the advances in single-site control in optical lattices, we investigate wave packet dynamics localized at the edges in distinct Floquet topological regimes that cannot be achieved in equilibrium. We analyse transport properties in edge modes originating from the same bands, but with support at different quasienergies and sublattices as well as possessing different chiralities. We find that an anomalous Floquet topological phase can in general generate more robust chiral edge motion than a Haldane phase. Our results demonstrate that the rich interplay of wave packet dynamics and topological edge states can serve as a versatile tool in ultracold quantum gases in optical lattices.
We review the methodology to theoretically treat parity-time- ($\mathcal{PT}$-) symmetric, non-Hermitian quantum many-body systems... (For the full abstract see paper)
We study the problem of learning the Hamiltonian of a many-body quantum system from experimental data. We show that the rate of learning depends on the amount of control available during the experiment. We consider three control models: one where time evolution can be augmented with instantaneous quantum operations, one where the Hamiltonian itself can be augmented by adding constant terms, and one where the experimentalist has no control over the system's time evolution. With continuous quantum control, we provide an adaptive algorithm for learning a many-body Hamiltonian at the Heisenberg limit: $T = \mathcal{O}(\epsilon^{-1})$, where $T$ is the total amount of time evolution across all experiments and $\epsilon$ is the target precision. This requires only preparation of product states, time-evolution, and measurement in a product basis. In the absence of quantum control, we prove that learning is standard quantum limited, $T = \Omega(\epsilon^{-2})$, for large classes of many-body Hamiltonians, including any Hamiltonian that thermalizes via the eigenstate thermalization hypothesis. These results establish a quadratic advantage in experimental runtime for learning with quantum control.
We explore a full dynamical phase diagram by means of a double quench protocol that depends on a relaxation time as the only control parameter. The protocol comprises two fixed quenches and an intermediate relaxation time that determines the phase in which the quantum state is placed after the final quench. We apply it to an anharmonic Lipkin-Meshkov-Glick model. This model displays two excited-state quantum phase transitions which split the spectrum into three different phases: two of them are symmetry-breaking phases, and one is a disordered phase. As a consequence, our protocol induces several kinds of dynamical phase transitions. We characterize all of them in terms of the constants of motion that identify the three phases of the model.
A proper quantum memory is argued to consist of a quantum channel that cannot be simulated with a measurement followed by classical information storage and a final state preparation, i.e., an entanglement-breaking (EB) channel. The verification of quantum memories (non-EB channels) is a task in which an honest user wants to test the quantum memory of an untrusted, remote provider. This task is inherently suited for the class of protocols with trusted quantum inputs, sometimes called measurement-device-independent (MDI) protocols. Here, we study the MDI certification of non-EB channels in continuous variable (CV) systems. We provide a simple witness based on adversarial metrology, and describe an experimentally friendly protocol that can be used to verify all non-Gaussian incompatibility breaking quantum memories. Our results can be tested with current technology and can be applied to test other devices resulting in non-EB channels, such as CV quantum transducers and transmission lines.
High-fidelity preparation of quantum states in an interacting many-body system is often hindered by the lack of knowledge of such states and by limited decoherence times. Here we study a quantum optimal control (QOC) approach for fast generation of quantum ground states in a finite-sized Jaynes-Cummings lattice with unit filling. Our result shows that the QOC approach can generate quantum many-body states with high fidelity when the evolution time is above a threshold time, and it can significantly outperform the adiabatic approach. We study the dependence of the threshold time on the parameter constraints and the connection of the threshold time with the quantum speed limit. We also show that the QOC approach can be robust against control errors. Our result can lead to advances in the application of the QOC for many-body state preparation.
This article is a brief introduction to quantum algorithms for the eigenvalue problem in quantum many-body systems. Rather than a broad survey of topics, we focus on providing a conceptual understanding of several quantum algorithms that cover the essentials of adiabatic evolution, variational methods, phase detection algorithms, and several other approaches. For each method, we discuss the potential advantages and remaining challenges.
The representation of a quantum wave function as a neural network quantum state (NQS) provides a powerful variational ansatz for finding the ground states of many-body quantum systems. Nevertheless, due to the complex variational landscape, traditional methods often employ the computation of the quantum geometric tensor, consequently complicating optimization techniques. Contributing to efforts aiming to formulate alternative methods, we introduce an approach that bypasses the computation of the metric tensor and instead relies exclusively on first-order gradient descent with a Euclidean metric. This allows for the application of larger neural networks and the use of more standard optimization methods from other machine learning domains. Our approach leverages the principle of imaginary-time evolution by constructing a target wave function derived from the Schr\"odinger equation, and then training the neural network to approximate this target. We make this method adaptive and stable by determining the optimal time step and keeping the target fixed until the energy of the NQS decreases. We demonstrate the benefits of our scheme via numerical experiments with the 2D J1-J2 Heisenberg model, which showcase enhanced stability and energy accuracy in comparison to direct energy loss minimization. Importantly, our approach displays competitiveness with the well-established density matrix renormalization group method and NQS optimization with stochastic reconfiguration.
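The target-construction idea can be stripped to its essentials without the neural network: apply the first-order imaginary-time step $(1 - \tau H)$ exactly to a small state and renormalize, which drives the variational energy down to the ground state. The following toy uses a two-site Heisenberg Hamiltonian (an illustrative choice; the paper trains an NQS to fit each such target rather than applying the step exactly):

```python
import numpy as np

# Two-site Heisenberg H = X1X2 + Y1Y2 + Z1Z2 (ground energy -3)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = sum(np.kron(P, P) for P in (X, Y, Z))

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

tau = 0.1
energies = []
for _ in range(200):
    energies.append((psi.conj() @ H @ psi).real)
    target = psi - tau * (H @ psi)        # first-order imaginary-time target
    psi = target / np.linalg.norm(target)  # renormalize, then repeat

print(round(energies[-1], 4))
```

Each iteration damps excited-state components relative to the ground state, so the energy converges to the singlet value $-3$; in the NQS setting the network only has to follow this slowly moving target rather than navigate the full variational landscape at once.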
The manner in which probability amplitudes of paths sum up to form wave functions of orbital angular momentum eigenstates is described. Using a generalization of stationary-phase analysis, distributions are derived that provide a measure of how paths contribute towards any given eigenstate. In the limit of long travel-time, these distributions turn out to be real-valued, non-negative functions of a momentum variable that describes classical travel between the endpoints of a path (with the paths explicitly including nonclassical ones, described in terms of elastica). The distributions are functions of both this characteristic momentum as well as a polar angle that provides a tilt, relative to the z-axis of the chosen coordinate system, of the geodesic that connects the endpoints. The resulting description provides a replacement for the well-known "vector model" for describing orbital angular momentum, and importantly, it includes treatment of the case when the quantum number $\ell$ is zero (i.e., s-states).
Understanding thermodynamical measurement noise is of central importance for electrical and optical precision measurements from mass-fabricated semiconductor sensors, where the Brownian motion of charge carriers poses limits, to optical reference cavities for atomic clocks or gravitational wave detection, which are limited by thermorefractive and thermoelastic noise due to the transduction of temperature fluctuations to the refractive index and length fluctuations. Here, we discover that, unexpectedly, charge-carrier density fluctuations give rise to a novel noise process in recently emerged electro-optic photonic integrated circuits. We show that Lithium Niobate and Lithium Tantalate photonic integrated microresonators exhibit an unexpected Flicker type (i.e. $1/f^{1.2}$) scaling in their noise properties, significantly deviating from the well-established thermorefractive noise theory. We show that this noise is consistent with thermodynamical charge noise, which leads to electrical field fluctuations that are transduced via the strong Pockels effects of electro-optic materials. Our results establish electrical Johnson-Nyquist noise as the fundamental limitation for Pockels integrated photonics, crucial for determining performance limits for both classical and quantum devices, ranging from ultra-fast tunable and low-noise lasers, Pockels soliton microcombs, to quantum transduction, squeezed light or entangled photon-pair generation. Equally, this observation offers optical methods to probe mesoscopic charge fluctuations with exceptional precision.
Based on the number operator on the half-line, we introduce a similarity transformation of the Berry-Keating Hamiltonian, whose eigenfunctions vanish at the Dirichlet boundary by the zeros of the Riemann zeta function. If the Riemann hypothesis (RH) holds true, then its eigenvalues correspond to the imaginary parts of the nontrivial zeros. Moreover, we explore the possibility of whether the introduced Hamiltonian can serve as an approach to the RH within the Hilbert-P\'olya conjecture, which can be shown by proving the reality of all the eigenvalues of the Hamiltonian. In an attempt to show the latter, we identify the effective Hamiltonian in the Mellin space, where the Dirichlet boundary condition manifests itself as an integral boundary condition. The effective Hamiltonian can be transformed into the Berry-Keating Hamiltonian, $\hat{H}_\text{BK}$, without altering the domain on which $\hat{H}_\text{BK}$ is self-adjoint. In essence, the nontrivial zeros of the Riemann zeta function follow from the self-adjoint eigenvalue problem, $\hat{H}_\text{BK} \, h_s (z) = \varepsilon_s \, h_s (z)$, subject to the integral boundary condition $\int_0^\infty dz \, (1+ e^z)^{-1} h_s(z) = 0$.
This paper addresses the question of thermodynamic entropy production in the context of the dynamical Casimir effect. Specifically, we study a scalar quantum field confined within a one-dimensional ideal cavity subject to time-varying boundary conditions dictated by an externally prescribed trajectory of one of the cavity mirrors. The central question is how the thermodynamic entropy of the field evolves over time. Utilizing an effective Hamiltonian approach, we compute the entropy production and reveal that it exhibits scaling behavior with the number of particles created in the short-time limit. Furthermore, this approach elucidates the direct connection between this entropy and the emergence of quantum coherence within the mode basis of the field. In addition, by considering a distinct approach based on the time evolution of Gaussian states, we examine the long-time limit of entropy production within a single mode of the field. This approach results in establishing a connection between the thermodynamic entropy production in a single field mode and the entanglement between that particular mode and all other modes. Consequently, by employing two distinct approaches, we comprehensively address both the short-term and long-term dynamics of the system. Our results thus link the irreversible dynamics of the field, as measured by entropy production and induced by the dynamical Casimir effect, to two fundamental aspects of quantum mechanics: coherence and entanglement.
How could quantum cryptography help us achieve what is not achievable in classical cryptography? In this work we consider the following problem, which we call succinct RSPV for classical functions (SRC). Suppose $f$ is a function described by a polynomial-time classical Turing machine, which is public; the client would like to sample a random $x$ as the function input and use a protocol to send $f(x)$ to the server. What's more, (1) when the server is malicious, what it knows in the passing space should be no more than $f(x)$; (2) the communication should be succinct (that is, independent of the running time of evaluating $f$). Solving this problem in classical cryptography seems to require strong cryptographic assumptions. We show that, perhaps surprisingly, it's possible to solve this problem with quantum techniques under much weaker assumptions. By allowing for quantum communication and computations, we give a protocol for this problem assuming only collapsing hash functions [Unr16]. Our work conveys an interesting message that quantum cryptography could outperform classical cryptography on a new type of problem, namely, reducing communication in meaningful primitives without using heavy classical cryptographic assumptions.
This paper reports an inductorless transimpedance amplifier (TIA) with a very compact size and adequate performance for spin-qubit readout operations in monolithic quantum processors. The TIA has been designed and fabricated in a commercially available 22nm FDSOI CMOS foundry technology. The measurement results show a transimpedance gain of 103 dB{\Omega} with a bandwidth of 13 GHz at room temperature, and it is expected to exhibit slightly superior performance at cryogenic temperatures. The power consumption amounts to 4.1 mW. The core area amounts to 0.00025 mm2, i.e., about two orders of magnitude smaller than prior-art works, and approaches the qubit size, which makes the inductorless TIA a compact enabling solution for monolithic quantum processors.
The traditional view from particle physics is that quantum gravity effects should only become detectable at extremely high energies and small length scales. Due to the significant technological challenges involved, there has been limited progress in identifying experimentally detectable effects that can be accessed in the foreseeable future. However, in recent decades, the size and mass of quantum systems that can be controlled in the laboratory have reached unprecedented scales, enabled by advances in ground-state cooling and quantum-control techniques. Preparing massive systems in quantum states paves the way for the exploration of a low-energy regime in which gravity can be both sourced and probed by quantum systems. Such approaches constitute an increasingly viable alternative to accelerator-based, laser-interferometric, torsion-balance, and cosmological tests of gravity. In this review, we provide an overview of proposals where massive quantum systems act as interfaces between quantum mechanics and gravity. We discuss conceptual difficulties in the theoretical description of quantum systems in the presence of gravity, review tools for modeling massive quantum systems in the laboratory, and provide an overview of the current state-of-the-art experimental landscape. Proposals covered in this review include, among others, precision tests of gravity, tests of gravitationally-induced wavefunction collapse and decoherence, as well as gravity-mediated entanglement. We conclude the review with an outlook and discussion of future questions.