Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-07 11:30 to 2023-11-10 12:30 | Next meeting is Friday Nov 10th, 11:30 am.
Recently, pulsar timing array (PTA) collaborations, including NANOGrav, have reported evidence of a stochastic gravitational wave background in the nHz frequency range, which can be interpreted as gravitational waves from the preheating era. In this context, we demonstrate that the emission of this stochastic gravitational wave background can be attributed to fluctuations occurring at the end of inflation, thereby giving rise to the Hubble tension. At the onset of inflation the frequency of the gravitational wave signal was $f=0.08\,\mathrm{nHz}$, but it rapidly transitioned to $f=1\,\mathrm{nHz}$ precisely at the end of inflation. Just before the end of inflation, a phase characterized by curvature perturbations is known to occur, causing a swift increase in the frequency.
Chain inflation is an alternative to slow-roll inflation in which the inflaton tunnels through a large number of consecutive minima in its potential. In this work we perform the first comprehensive calculation of the gravitational wave spectrum of chain inflation. In contrast to slow-roll inflation, the spectrum does not stem from quantum fluctuations of the gravitational field during inflation, but rather from bubble collisions during the first-order phase transitions associated with vacuum tunneling. Our calculation is performed within an effective theory of chain inflation which builds on an expansion of the tunneling rate capturing most of the available model space. The effective theory can be seen as chain inflation's analogue of the slow-roll expansion in rolling models of inflation. We show that chain inflation produces a very characteristic double-peak spectrum: a faint high-frequency peak associated with the gravitational radiation emitted during inflation, and a strong low-frequency peak associated with the graceful exit from chain inflation (marking the transition to the radiation-dominated epoch). There exist very exciting prospects to test the gravitational wave signal of chain inflation with the aLIGO-aVIRGO-KAGRA network, with LISA, and/or with pulsar timing array experiments. A particularly intriguing possibility we point out is that chain inflation could be the source of the stochastic gravitational wave background recently detected by NANOGrav, PPTA, EPTA and CPTA. We also show that the gravitational wave signal of chain inflation is often accompanied by running/higher running of the scalar spectral index, to be tested at future Cosmic Microwave Background experiments.
We present a simple and promising new method to measure the expansion rate and the geometry of the universe that combines observations related to the time delays between the multiple images of time-varying sources, strongly lensed by galaxy clusters, and Type Ia supernovae, exploding in galaxies belonging to the same lens clusters. By means of two different statistical techniques that adopt realistic errors on the relevant quantities, we quantify the accuracy of the inferred cosmological parameter values. We show that the estimate of the Hubble constant is robust and competitive, and depends only mildly on the chosen cosmological model. Remarkably, the two probes separately produce confidence regions on the cosmological parameter planes that are oriented in complementary ways, thus providing in combination valuable information on the values of the other cosmological parameters. We conclude by illustrating the immediate observational feasibility of the proposed joint method in a well-studied lens galaxy cluster, with a relatively small investment of telescope time for monitoring with a 2-3 m class ground-based telescope.
We explore the effect of the ionizing UV background (UVB) on the redshift-space clustering of low-$z$ ($z \leq 0.5$) OVI absorbers using Sherwood simulations incorporating "WIND"-only (i.e., outflows driven by stellar feedback) and "AGN+WIND" feedback. These simulations show a positive clustering signal up to a scale of 3 Mpc. We find that the effect of feedback is restricted to small scales (i.e., $\leq 2$ Mpc or 200 km s$^{-1}$ at $z \sim 0.3$), and "WIND"-only simulations produce a stronger clustering signal than simulations incorporating "AGN+WIND" feedback. How the clustering signal is affected by the assumed UVB depends on the feedback processes assumed. For the simulations considered here, the effect of the UVB is confined to even smaller scales (i.e., $<1$ Mpc or $\approx 100$ km s$^{-1}$ at $z \sim 0.3$). These scales are also affected by exclusion caused by line blending. Therefore, our study suggests that clustering at intermediate scales (i.e., 1-2 Mpc for the simulations considered here), together with the observed column density distribution, can be used to constrain the effect of feedback in simulations.
The cosmic dawn 21-cm signal is enabled by Ly$\alpha$ photons through a process called the Wouthuysen-Field effect. An accurate model of the signal in this epoch hinges on the accuracy of the computation of the Ly$\alpha$ coupling, which requires one to calculate the specific intensity of UV radiation from sources such as the first stars. Most traditional calculations of the Ly$\alpha$ coupling assume a delta-function scattering cross-section, as the resonant nature of Ly$\alpha$ scattering makes an accurate radiative transfer solution computationally expensive. Attempts to improve upon this traditional approach using numerical radiative transfer have recently emerged. However, the radiative transfer computation in these treatments suffers from assumptions such as a uniform density of intergalactic gas, zero gas temperature, and absence of gas bulk motion, or numerical approximations such as core skipping. We investigate the role played by these approximations in setting the value of the Ly$\alpha$ coupling and the 21-cm signal at cosmic dawn. We present results of Monte Carlo radiative transfer simulations, without core skipping, and show that neglecting gas temperature in the radiative transfer significantly underestimates the scattering rate and hence the Ly$\alpha$ coupling and the 21-cm signal. We also discuss the effect of these processes on the 21-cm power spectrum from the cosmic dawn. This work points the way towards higher-accuracy models to enable better inferences from future measurements.
Optical circular polarization observations can directly test the particle composition of black hole jets. Here we report on the first observations of the BL Lac type object S4 0954+65 in highly linearly polarized states. While no circular polarization was detected, we were able to place upper limits of <0.5% at the 99.7% confidence level. Using a simple model and our novel optical circular polarization observations, we can constrain the allowed parameter space for the magnetic field strength and composition of the emitting particles. Our results favor models that require magnetic field strengths of only a few Gauss and models where the jet composition is dominated by electron-positron pairs. We discuss our findings in the context of typical magnetic field strength requirements for blazar emission models.
We present an eigenfunction method to analyze 161 visual light curves (LCs) of Type Ia supernovae (SNe Ia) obtained by the Carnegie Supernova Project to characterize their diversity and host-galaxy correlations. The eigenfunctions are based on the delayed-detonation scenario and use three parameters: the LC stretch, determined by the amount of deflagration burning governing the $^{56}$Ni production; the main-sequence mass M_MS of the progenitor white dwarf, controlling the explosion energy; and its central density rho_c, shifting the $^{56}$Ni distribution. Our analysis tool (SPAT) extracts the parameters from observations and projects them into physical space using their allowed ranges M_MS < 8 M_sun and rho_c < 7-8x10^9 g/cc. The residuals between fits and individual LC points are ~1-3% for ~92% of objects. We find two distinct M_MS groups corresponding to fast (~40-65 Myr) and slow (~200-500 Myr) stellar evolution. Most underluminous SNe Ia have hosts with low star formation but high M_MS, suggesting slow evolution times of the progenitor system. 91T-like SNe show very similar LCs and high M_MS and are correlated with star-forming regions, making them potentially important tracers of star formation in the early Universe out to z = 4-11. Some 6% of outliers with `non-physical' parameters can be attributed to superluminous SNe Ia and subluminous SNe Ia with actively star-forming hosts. We show the importance of LC coverage out to ~60 days past maximum for deciphering the SNe Ia diversity and for high-precision SNe Ia cosmology. Finally, our method and results are discussed within the framework of multiple explosion scenarios, and in light of upcoming surveys.
It was pointed out previously~\cite{Kinney:2014jya} that a sufficiently negative running of the spectral index of curvature perturbations from (ordinary i.e. cold) inflation is able to prevent eternal inflation from ever occurring. Here, we reevaluate those original results, but in the context of warm inflation, in which a substantial radiation component (produced by the inflaton) exists throughout the inflationary period. We demonstrate that the same general requirements found in the context of ordinary (cold) inflation also hold true in warm inflation; indeed an even tinier amount of negative running is sufficient to prevent eternal inflation. This is particularly pertinent, as models featuring negative running are more generic in warm inflation scenarios. Finally, the condition for the existence of eternal inflation in cold inflation -- that the curvature perturbation amplitude exceed unity on superhorizon scales -- becomes more restrictive in the case of warm inflation. The curvature perturbations must be even larger, i.e. even farther out on the potential, away from the part of the potential where observables, e.g. in the Cosmic Microwave Background, are produced.
The halo concentration-mass relation has ubiquitous use in modeling the matter field for cosmological and astrophysical analyses, and including the imprints from galaxy formation physics is paramount for its robust usage. Many analyses, however, probe the matter around halos selected by a given halo/galaxy property -- rather than by halo mass -- and the imprints under each selection choice can be different. We employ the CAMELS simulation suite to quantify the astrophysics and cosmology dependence of the concentration-mass relation, $c_{\rm vir}-M_{\rm vir}$, when selected on five properties: (i) velocity dispersion, (ii) formation time, (iii) halo spin, (iv) stellar mass, and (v) gas mass. We construct simulation-informed nonlinear models for all properties as a function of halo mass, redshift, and six cosmological/astrophysical parameters, with a mass range $M_{\rm vir} \in [10^{11}, 10^{14.5}] M_\odot/h$. There are many mass-dependent imprints in all halo properties, with clear connections across different properties and non-linear couplings between the parameters. Finally, we extract the $c_{\rm vir}-M_{\rm vir}$ relation for subsamples of halos that have scattered above/below the mean property-$M_{\rm vir}$ relation for a chosen property. Selections on gas mass or stellar mass have a significant impact on the astrophysics/cosmology dependence of $c_{\rm vir}$, while those on any of the other three properties have a significant (mild) impact on the cosmology (astrophysics) dependence. We show that ignoring such selection effects can lead to errors of $\approx 25\%$ in baryon imprint modelling of $c_{\rm vir}$. Our nonlinear model for all properties is made publicly available.
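To make the selection effect concrete, the following minimal numpy sketch (not the CAMELS pipeline; the property, correlation slope, and scatter are invented for illustration) shows how splitting halos into subsamples above/below the mean property-mass relation shifts the mean concentration at fixed mass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy halo catalogue: concentrations correlated with a secondary property
# (standing in for, e.g., formation time) at fixed mass.
n = 100_000
log_mvir = rng.uniform(11.0, 14.5, n)          # log10 Mvir [Msun/h]
prop = rng.normal(0.0, 1.0, n)                 # mass-independent property scatter
log_cvir = (1.0 - 0.1 * (log_mvir - 12.0)      # assumed mean c-M relation
            + 0.15 * prop                      # assumed property-concentration coupling
            + rng.normal(0.0, 0.1, n))         # intrinsic scatter

# Subsamples scattered above/below the mean property-mass relation.
bins = np.linspace(11.0, 14.5, 8)
for sel, label in [(prop > 0, "above"), (prop < 0, "below")]:
    mean_c = [log_cvir[sel & (log_mvir >= lo) & (log_mvir < hi)].mean()
              for lo, hi in zip(bins[:-1], bins[1:])]
    print(label, np.round(mean_c, 3))
```

Any property-concentration coupling of this kind propagates a selection cut directly into the inferred $c_{\rm vir}-M_{\rm vir}$ relation, which is the effect the paper quantifies.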
The visibilities measured by radio astronomical interferometers include non-astronomical correlated signals that arise from the local environment of the array. These correlated signals are especially important in compact arrays such as those under development for 21\,cm intensity mapping. The amplitudes of the contaminated visibilities can exceed the expected 21\,cm signal and represent a significant systematic effect. We study the receiver noise radiated by antennas in compact arrays and develop a model for how it couples to other antennas. We apply the model to the Tianlai Dish Pathfinder Array (TDPA), a compact array of sixteen 6-m dish antennas. The coupling model includes electromagnetic simulations, measurements with a network analyzer, and measurements of the noise of the receivers. We compare the model to drift-scan observations with the array and set requirements on the level of antenna cross-coupling for 21\,cm intensity mapping instruments. We find that for the TDPA, cross-coupling would have to be reduced by TBD orders of magnitude in order to contribute negligibly to the visibilities.
We explore the cosmic birefringence signal produced by an ultralight axion field with a small CP-violating coupling to bulk SM matter in addition to the usual CP-preserving photon coupling. The change in the vacuum expectation value of the field between recombination and today results in a frequency-independent rotation of the plane of CMB linear polarization across the entire sky. While many previous approaches rely on the axion rolling from a large initial expectation value, the couplings considered in this work robustly generate the birefringence signal regardless of initial conditions, by sourcing the field from the cosmological nucleon density. We place bounds on such monopole-dipole interactions using measurements of the birefringence angle from Planck and WMAP data, which improve upon existing constraints by up to three orders of magnitude. We also discuss UV completions of this model, and possible strategies to avoid fine-tuning the axion mass.
Dark matter haloes grow at a rate that depends on the value of the cosmological parameters $\sigma_8$ and $\Omega_{\rm m}$ through the initial power spectrum and the linear growth factor. While halo abundance is routinely used to constrain these parameters, through cluster abundance studies, the halo growth rate is not. In recent work, we proposed constraining the cosmological parameters using observational estimates of the overall dynamical "age" of clusters, expressed, for instance, by their half-mass assembly redshift $z_{50}$. Here we explore the prospects for using the instantaneous growth rate, as estimated from the halo merger rate, from the average growth rate over the last dynamical time, or from the fraction of systems with recent episodes of major growth. We show that the merger rate is mainly sensitive to the amplitude of fluctuations $\sigma_8$, while the rates of recent growth provide constraints in the $\Omega_{\rm m}$-$\sigma_8$ plane that are almost orthogonal to those provided by abundance studies. Data collected for forthcoming cluster abundance studies, or studies of the galaxy merger rate in current and future galaxy surveys, may thus provide additional constraints on the cosmological parameters complementary to those already derived from halo abundance.
Fuzzy Dark Matter (FDM) has recently emerged as an interesting alternative model to the standard Cold Dark Matter (CDM). In this model, dark matter consists of very light bosonic particles with quantum mechanical effects on galactic scales. Using the N-body code AX-GADGET, we perform cosmological simulations of FDM that fully model the dynamical effects of the quantum potential throughout cosmic evolution. Through the combined analysis of FDM volume and high-resolution zoom-in simulations of different FDM particle masses ($m_{\chi}$ $\sim$ $10^{-23} - 10^{-21}$ eV/c$^2$), we study how FDM impacts the abundance of substructure and the inner density profiles of dark matter haloes. For the first time, using our FDM volume simulations, we provide a fitting formula for the FDM-to-CDM subhalo abundance ratio as a function of the FDM mass. More importantly, our simulations clearly demonstrate that there exists an extended FDM particle mass interval able to reproduce the observed substructure counts and, at the same time, create substantial cores ($r_{c} \sim 1$ kpc) in the density profile of dwarf galaxies ($\approx 10^{9}-10^{10}$ M$_{\odot}$), which stands in stark contrast with CDM predictions even with baryonic effects taken into account. The dark matter distribution in the faintest galaxies then offers a clear way to discriminate between FDM and CDM.
The immense diversity of the galaxy population in the universe is believed to stem from disparate merging histories, stochastic star formation, and the multi-scale influence of filamentary environments. No single initial condition of the early universe was expected to explain alone how the galaxies formed and evolved to possess such varied traits at the present epoch. However, several observational studies have revealed that the key physical properties of the observed galaxies in the local universe appear to be regulated by one single factor, whose identity has remained shrouded in mystery to date. Here, we report our identification of this single regulating factor as the degree of misalignment between the initial tidal field and the protogalaxy inertia tensors. The spin parameters, formation epochs, stellar-to-total mass ratios, stellar ages, sizes, colors, metallicities and specific heat energies of the galaxies from the IllustrisTNG suite of hydrodynamic simulations are all found to depend almost linearly and strongly on this initial condition, when the differences in galaxy total mass, environmental density and shear are controlled to vanish. Such cosmological predispositions, if properly identified, turn out to be much more impactful on galaxy evolution than conventionally thought.
In view of recent interest in high-frequency detectors, broad features of gravitational wave signals from phase transitions taking place soon after inflation are summarized. The influence of the matter domination era that follows the slow-roll stage is quantified in terms of two equilibration rates. Turning to the highest-frequency part of the spectrum, we show how it is constrained by the fact that the bubble distance scale must exceed the mean free path.
We consider a cosmological model with an interaction between dynamical quintessence dark energy and cold dark matter. The evolution of the dark energy equation-of-state parameter is determined by the dark energy adiabatic sound speed and a dark-sector interaction parameter, which should be a more physically correct model than the previously used one, in which such evolution was given by some fixed dependence on the scale factor. The constraints on the interaction parameter and the other parameters of the model were obtained using cosmic microwave background, baryon acoustic oscillation, and SN Ia supernova data.
Recently we put forward a framework where the dark matter (DM) component within virialized halos is subject to a non-local interaction originating from fractional gravity (FG) effects. In previous works we demonstrated that such a framework can substantially alleviate the small-scale issues of the standard $\Lambda$CDM paradigm, without altering the DM mass profile predicted by $N$-body simulations, and retaining its successes on large cosmological scales. In this paper we dig deeper to probe FG via high-quality data of individual dwarf galaxies, by exploiting the rotation velocity profiles inferred from stellar and gas kinematic measurements in $8$ dwarf irregulars, and the projected velocity dispersion profiles inferred from the observed dynamics of stellar tracers in $7$ dwarf spheroidals and in the ultra-diffuse galaxy DragonFly 44. We find that FG can reproduce extremely well the rotation and dispersion curves of the analysed galaxies, performing in most instances significantly better than the standard Newtonian setup.
The search for non-Gaussian signatures in the Cosmic Microwave Background (CMB) is crucial for understanding the physics of the early Universe. Given the possibility of non-Gaussian fluctuations in the CMB, a recent revision to the standard $\Lambda$-Cold Dark Matter ($\Lambda$CDM) model has been proposed, dubbed "Super-$\Lambda$CDM". This model introduces additional free parameters to account for the potential effects of a trispectrum in the primordial fluctuations. In this study, we explore the impact of the Super-$\Lambda$CDM model on current constraints on neutrino physics. In agreement with previous research, our analysis reveals that for most of the datasets, the Super-$\Lambda$CDM parameter $A_0$ significantly deviates from zero at over a $95\%$ confidence level. We then demonstrate that this signal might influence current constraints in the neutrino sector. Specifically, we find that the current constraints on neutrino masses may be relaxed by over a factor of two within the Super-$\Lambda$CDM framework, owing to the correlation with $A_0$. Consequently, fixing $A_0=0$ might introduce a bias, leading to overly stringent constraints on the total neutrino mass.
We revisit the one-loop corrections to CMB-scale perturbations induced by small-scale modes in single-field models which undergo a phase of ultra slow-roll (USR) inflation. There were concerns that large loop corrections run against the notion of the decoupling of scales, and that they cancel out once the boundary terms are included in the Hamiltonian. We highlight that the non-linear coupling between the long and short modes and the modulation of the short-mode power spectrum by the long mode are the key physical reasons behind the large loop corrections. In particular, for the modulation by the long mode to be significant, there should be a strong scale-dependent enhancement in the power spectrum of the short mode, which is the hallmark of USR inflation. We highlight the important role played by the would-be decaying mode, which was not taken into account properly in recent works claiming the loop cancellation. We confirm the original conclusion that the loop corrections are genuine and that they can be dangerous for PBH formation unless the transition to the final attractor phase is mild.
The neutrino magnetic moment operator takes a tiny but non-zero value within the standard model (SM) of particle physics, and rather enhanced values in various new-physics models. The magnetic moment ($\mu_\nu$) is generated through quantum loop corrections and can give rise to spin-flavor oscillations in the presence of an external magnetic field. In addition, several studies predict the existence of a primordial magnetic field (PMF) in the early universe, extending back to the era of Big Bang Nucleosynthesis (BBN) and before. The recent NANOGrav measurement can be considered a strong indication of the presence of these PMFs. In this work, we consider the effect of the PMF on the flux of relic neutrinos. For Dirac neutrinos, we show that half of the active relic neutrinos can become sterile due to spin-flavor oscillations well before becoming non-relativistic owing to the expansion of the Universe, and also before the formation of galaxies and hence of intergalactic fields, subject to the constraints on the combined value of $\mu_\nu$ and the cosmic magnetic field at the time of neutrino decoupling. For the upper limit of the PMF allowed by BBN, this can be true even if the experimental bounds on $\mu_{\nu}$ approach a few times its SM value.
We investigate enhancements of the power spectrum, large enough to produce primordial black holes, in models with multiple scalar fields. We derive the criteria that can lead to an exponential amplification of the curvature perturbation on subhorizon scales, while leaving the perturbations stable on superhorizon scales. We apply our results to the three-field ultra-light scenario and show how the presence of extra turning parameters in the field space can yield distinct observables compared to two fields. Finally, we present analytic solutions for both sharp and broad turns, and clarify the role of the Hubble friction, which has previously been overlooked.
Using a fully Bayesian approach, Gaussian Process regression is extended to include marginalisation over the kernel choice and kernel hyperparameters. In addition, Bayesian model comparison via the evidence enables direct kernel comparison. The calculation of the joint posterior was implemented with a transdimensional sampler, which simultaneously samples over the discrete kernel choice and the kernel hyperparameters by embedding these in a higher-dimensional space, from which samples are taken using nested sampling. This method was explored on synthetic data from exoplanet transit light curve simulations. The true kernel was recovered in the low-noise region, while no kernel was preferred for larger noise. Furthermore, inference of the physical exoplanet hyperparameters was conducted. In the high-noise region, either the bias in the posteriors was removed, the posteriors were broadened, or the accuracy of the inference was increased. In addition, the uncertainty in the mean-function predictive distribution increased due to the uncertainty in the kernel choice. Subsequently, the method was extended to marginalisation over mean functions and noise models and applied to the inference of the present-day Hubble parameter, $H_0$, from real measurements of the Hubble parameter as a function of redshift, derived from the cosmologically model-independent cosmic chronometer and $\Lambda$CDM-dependent baryon acoustic oscillation observations. The inferred $H_0$ values from the cosmic chronometers, baryon acoustic oscillations, and combined datasets are $H_0$ = 66$\pm$6 km/s/Mpc, $H_0$ = 67$\pm$10 km/s/Mpc, and $H_0$ = 69$\pm$6 km/s/Mpc, respectively. The kernel posterior of the cosmic chronometers dataset prefers a non-stationary linear kernel. Finally, the datasets are shown to be not in tension, with ln(R)=12.17$\pm$0.02.
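As a toy illustration of kernel comparison, the sketch below scores a few scikit-learn kernels on synthetic data; it approximates each kernel's evidence by its optimised log marginal likelihood rather than the nested-sampling evidence used in the paper, so it is a simplified stand-in, not the authors' transdimensional method:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, DotProduct, WhiteKernel

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 30)[:, None]                   # e.g. a redshift grid
y = 65.0 + 35.0 * x.ravel() + rng.normal(0.0, 5.0, 30)   # synthetic H(z)-like data

kernels = {
    "RBF": 1.0 * RBF() + WhiteKernel(),
    "Matern-3/2": 1.0 * Matern(nu=1.5) + WhiteKernel(),
    "Linear": DotProduct() + WhiteKernel(),
}

# Optimised log marginal likelihood as a crude proxy for the evidence.
log_ml = {name: GaussianProcessRegressor(kernel=k, normalize_y=True)
                .fit(x, y).log_marginal_likelihood_value_
          for name, k in kernels.items()}

vals = np.array(list(log_ml.values()))
weights = np.exp(vals - vals.max())
weights /= weights.sum()
for name, w in zip(log_ml, weights):
    # Kernel-marginalised predictions would mix the per-kernel predictive
    # distributions with these weights.
    print(f"{name}: weight {w:.3f}")
```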
The cosmic magnification is able to probe the geometry of large scale structure on cosmological scales, thereby providing another window for probing theories of the late-time accelerated expansion of the Universe. It holds the potential to reveal new information on the nature of dark energy and modified gravity. By using the angular power spectrum, we investigated cosmic magnification in beyond-Horndeski gravity. We incorporated the known relativistic corrections and considered only large scales, where relativistic effects are known to become substantial. We probed both the total relativistic signal and the individual relativistic signals. Our results suggest that surveys at low redshifts ($z \,{\lesssim}\, 0.5$) will be able to measure directly the total relativistic signal in the total magnification angular power spectrum, without the need for multi-tracer analysis (to beat down cosmic variance); similarly, for the Doppler signal, at the given $z$. However, for the integrated-Sachs-Wolfe, the time-delay, and the (gravitational) potential signals, respectively, it will require surveys at high redshifts ($z \,{\gtrsim}\, 3$). For both aforementioned sets of signals, their amplitudes at the given $z$ will ordinarily surpass cosmic variance and hence can be detected directly; whereas at other $z$, multi-tracer techniques will need to be taken into account. We also found that beyond-Horndeski gravity boosts relativistic effects and, consequently, the cosmic magnification. Conversely, relativistic effects enhance the potential of the total magnification angular power spectrum to detect the imprint of beyond-Horndeski gravity.
In this Letter we discuss the intrinsic pathologies associated with theories formulated in the framework of symmetric teleparallel geometries and argue how they are prone to propagating Ostrogradski ghosts. We illustrate our general argument by working out the cosmological perturbations in $f(\mathbb{Q})$ theories. We focus on the three branches of spatially flat cosmologies and show that they are all pathological. Two of these branches exhibit reduced linear spectra, signalling that they are infinitely strongly coupled. For the remaining branch we unveil the presence of seven propagating modes associated with the gravitational sector and we show that at least one of them is a ghost. Our results show the non-viability of these cosmologies and shed light on the number of propagating degrees of freedom in these theories.
We present a first measurement of the galaxy-galaxy-CMB lensing bispectrum. The signal is detected at $26\sigma$ and $22\sigma$ significance using two samples from the unWISE galaxy catalog at mean redshifts $\bar{z}=0.6$ and $1.1$ and lensing reconstructions from Planck PR4. We employ a compressed bispectrum estimator based on the cross-correlation between the square of the galaxy overdensity field and CMB lensing reconstructions. We present a series of consistency tests to ensure the cosmological origin of our signal and rule out potential foreground contamination. We compare our results to model predictions from a halo model previously fit to only two-point spectra, finding reasonable agreement when restricting our analysis to large scales. Such measurements of the CMB lensing galaxy bispectrum will have several important cosmological applications, including constraining the uncertain higher-order bias parameters that currently limit lensing cross-correlation analyses.
We present a new method for obtaining photometric redshifts (photo-z) for sources observed by multiple photometric surveys using a combination (conflation) of the redshift probability distributions (PDZs) obtained independently from each survey. The conflation of the PDZs has several advantages over the usual method of modelling all the photometry together, including modularity, speed, and accuracy of the results. Using a sample of galaxies with narrow-band photometry in 56 bands from J-PAS and deeper grizy photometry from the Hyper-SuprimeCam Subaru Strategic program (HSC-SSP), we show that PDZ conflation significantly improves photo-z accuracy compared to fitting all the photometry or using a weighted average of point estimates. The improvement over J-PAS alone is particularly strong for i>22 sources, which have low signal-to-noise ratio in the J-PAS bands. For the entire i<22.5 sample, we obtain a 64% (45%) increase in the number of sources with redshift errors |Dz|<0.003, a factor 3.3 (1.9) decrease in the normalised median absolute deviation of the errors (sigma_NMAD), and a factor 3.2 (1.3) decrease in the outlier rate compared to J-PAS (HSC-SSP) alone. The photo-z accuracy gains from combining the PDZs of J-PAS with a deeper broadband survey such as HSC-SSP are equivalent to increasing the depth of J-PAS observations by ~1.2--1.5 magnitudes. These results demonstrate the potential of PDZ conflation and highlight the importance of including the full PDZs in photo-z catalogues.
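Conflation of independent probability densities is just their normalised pointwise product, which is what makes the approach modular and fast. A minimal sketch (hypothetical Gaussian PDZs standing in for the J-PAS and HSC-SSP outputs):

```python
import numpy as np

def conflate(pdzs, z):
    """Conflation of independent PDZs: normalised pointwise product."""
    prod = np.prod(pdzs, axis=0)
    return prod / np.trapz(prod, z)

z = np.linspace(0.0, 2.0, 2001)
gauss = lambda mu, sig: np.exp(-0.5 * ((z - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

pdz_jpas = gauss(0.82, 0.05)   # hypothetical narrow-band PDZ
pdz_hsc  = gauss(0.78, 0.10)   # hypothetical broadband PDZ
pdz_comb = conflate([pdz_jpas, pdz_hsc], z)
print("combined mode:", z[np.argmax(pdz_comb)])  # pulled toward the narrower PDZ
```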
We explore a re-parameterization of the lensing amplitude tension between weak lensing (WL) and cosmic microwave background (CMB) data and its implications for a joint resolution with the Hubble tension. Specifically, we focus on the lensing amplitude over a scale of 12 Mpc in absolute distance units using a derived parameter $S_{12}$ and show its constraints from recent surveys in comparison with Planck 2018. In WL alone, we find that the absolute distance convention correlates $S_{12}$ with $H_0$. Accounting for this correlation in the 3D space $S_{12}\times \omega_m \times h$ reproduces the usual levels of $2$--$3\sigma$ tension inferred from $S_8\times\Omega_m$. Additionally, we derive scaling relations in the $S_8\times h$ and $S_{12}\times h$ planes that are allowed by $\Lambda$CDM and extrapolate target scalings needed to solve the $H_0$ and lensing-amplitude tensions jointly in a hypothetical beyond-$\Lambda$CDM model. As a test example, we quantify how the early dark energy scenario compares with these target scalings. Useful fitting formulae for $S_8$ and $S_{12}$ as a function of other cosmological parameters in $\Lambda$CDM are provided, with 1% precision.
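For reference, the conventional lensing-amplitude parameter and its 12 Mpc analogue take the form

$$ S_8 \equiv \sigma_8\left(\frac{\Omega_{\rm m}}{0.3}\right)^{0.5}, \qquad S_{12} \equiv \sigma_{12}\left(\frac{\omega_{\rm m}}{0.14}\right)^{0.4}, $$

where $\sigma_{12}$ is the fluctuation amplitude in spheres of 12 Mpc (absolute units) and $\omega_{\rm m} = \Omega_{\rm m} h^2$; the $S_8$ definition is standard, while the $S_{12}$ exponent shown is one convention from the literature and may differ from the paper's choice.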
In this work, we study general features of a regime where gauge fields produced during inflation cause a strong backreaction on the background evolution, and its impact on the spectrum and the correlation length of the gauge fields. With this aim, the gradient-expansion formalism previously proposed for the description of inflationary magnetogenesis in purely kinetic or purely axial coupling models is extended to the case when both types of coupling are present. As it is formulated in position space, this method allows us to self-consistently take into account the backreaction of the generated gauge fields on the inflationary background, because it captures the nonlinear evolution of all physically relevant gauge-field modes at once. Using this extended gradient-expansion formalism, suitable for a wide range of inflationary magnetogenesis models, we study gauge-field production in a specific generalization of the Starobinsky $R^2$-model with a nonminimal coupling of gauge fields to gravity. In the Einstein frame, this model implies, in addition to an asymptotically flat inflaton potential, also a nontrivial form of the kinetic and axial coupling functions, which decrease in time and are thus potentially suitable for the generation of gauge fields with a scale-invariant or even red-tilted power spectrum. The numerical analysis shows, however, that backreaction, which unavoidably occurs in this model for the interesting range of parameters, strongly alters the behavior of the spectrum and does not allow us to obtain a sufficiently large correlation length for the magnetic field. We also reveal an oscillatory behavior of the generated field, caused by the retarded response of the gauge field to changes in the inflaton velocity.
Much of the research in supernova cosmology is based on the assumption that the peak luminosity of type Ia supernovae (SNe Ia), after a standardization process, is independent of the galactic environment. A series of recent studies suggested that there is a significant correlation between the standardized luminosity and the progenitor age of SNe Ia. The correlation found in the most recent work by Lee et al. is strong enough to explain the extra dimming of distant SNe Ia and therefore casts doubt on the direct evidence for cosmic acceleration. The present work incorporates the uncertainties of progenitor ages, which were ignored in Lee et al., into a fully Bayesian inference framework. We find a weaker dependence of the supernova standardized luminosity on the progenitor age, but the detection of the correlation remains significant (3.5$\sigma$). Assuming that such a correlation can be extended to high redshift and applying it to the Pantheon SN Ia data set, we confirm that, when the Hubble residual does not include intrinsic scatter, the age bias could be the primary cause of the observed extra dimming of distant SNe Ia. Furthermore, we use the PAge formalism, which is a good approximation to many dark energy and modified gravity models, to perform a model comparison. We find that if intrinsic scatter is included in the Hubble residual, the Lambda cold dark matter model remains a good fit. However, in a scenario without intrinsic scatter, the Lambda cold dark matter model faces a challenge.
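Schematically, the age dependence amounts to adding a progenitor-age term to the standard Tripp standardization,

$$ \mu = m_B - M_B + \alpha x_1 - \beta c - \gamma\,(\tau_{\rm age} - \tau_{\rm pivot}), $$

where the sign convention and pivot age are illustrative placeholders; the paper's hierarchical framework additionally marginalises over the (previously ignored) uncertainties on $\tau_{\rm age}$ and, optionally, an intrinsic-scatter term.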
In General Relativity, approximations based on the spherical collapse model, such as Press--Schechter theory and its extensions, are able to predict the number of objects of a certain mass in a given volume. In this paper we use a machine learning algorithm to test whether such approximations hold in screened modified gravity theories. To this end, we train random forest classifiers on data from N-body simulations to study the formation of structures in $\Lambda$CDM as well as in screened modified gravity theories, in particular $f(R)$ and nDGP gravity. The models are taught to distinguish structure membership in the final conditions from spherical aggregations of density field behaviour in the initial conditions. We examine the differences between machine learning models that have learned structure formation from each gravity, as well as the model that has learned from $\Lambda$CDM. We also test the generalisability of the $\Lambda$CDM model on data from $f(R)$ and nDGP gravities of varying strengths, and therefore the generalisability of Extended-Press-Schechter spherical collapse to these types of modified gravity.
Cosmological correlators from inflation are often generated at tree level and hence loop contributions are bounded to be small corrections by perturbativity. Here we discuss a scenario where this is not the case. Recently, it has been shown that for any number of scalar fields of any mass, the parity-odd trispectrum of a massless scalar must vanish in the limit of exact scale invariance due to unitarity and the choice of initial state. By carefully handling UV-divergences, we show that the one-loop contribution is non-vanishing and hence leading. Surprisingly, the one-loop parity-odd trispectrum is simply a rational function of kinematics, which we compute explicitly in a series of models, including single-clock inflation. Although the loop contribution is the leading term in the parity-odd sector, its signal-to-noise ratio is typically bounded from above by that of a corresponding tree-level parity-even trispectrum, unless instrumental noise and systematics for the two observables differ. Furthermore, we identify a series of loop contributions to the wavefunction that cancel exactly when computing correlators, suggesting a more general phenomenon.
In the coming decades, the space-based gravitational-wave (GW) detectors such as Taiji, TianQin, and LISA are expected to form a network capable of detecting millihertz GWs emitted by the mergers of massive black hole binaries (MBHBs). In this work, we investigate the potential of GW standard sirens from the Taiji-TianQin-LISA network in constraining cosmological parameters. For the optimistic scenario in which electromagnetic (EM) counterparts can be detected, we predict the number of detectable bright sirens based on three different MBHB population models, i.e., pop III, Q3d, and Q3nod. Our results show that the Taiji-TianQin-LISA network alone could achieve a constraint precision of $0.9\%$ for the Hubble constant, meeting the standard of precision cosmology. Moreover, the Taiji-TianQin-LISA network could effectively break the cosmological parameter degeneracies generated by the CMB data, particularly in the dynamical dark energy models. When combined with the CMB data, the joint CMB+Taiji-TianQin-LISA data offer $\sigma(w)=0.036$ in the $w$CDM model, which is close to the latest constraint result obtained from the CMB+SN data. We also consider a conservative scenario in which EM counterparts are not available. Due to the precise sky localizations of MBHBs by the Taiji-TianQin-LISA network, the constraint precision of the Hubble constant is expected to reach $1.2\%$. In conclusion, the GW standard sirens from the Taiji-TianQin-LISA network will play a critical role in helping solve the Hubble tension and shedding light on the nature of dark energy.
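The cosmological content of a bright siren is the distance-redshift relation: the waveform supplies $d_L$ and the EM counterpart supplies $z$, so each event constrains the parameters through one equation. A minimal sketch for a flat $w$CDM cosmology (parameter values are placeholders):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def dl_mpc(z, h0=67.0, om=0.315, w=-1.0):
    """Luminosity distance [Mpc] in flat wCDM."""
    e = lambda zp: np.sqrt(om * (1 + zp) ** 3 +
                           (1 - om) * (1 + zp) ** (3 * (1 + w)))
    integral, _ = quad(lambda zp: 1.0 / e(zp), 0.0, z)
    return (1 + z) * C_KM_S / h0 * integral

# A percent-level dL measurement at known z translates into the
# percent-level H0 constraints quoted for the network.
print(dl_mpc(1.0), dl_mpc(1.0, w=-0.9))
```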
Galaxy morphology, a key tracer of the evolution of a galaxy's physical structure, has motivated extensive research on machine learning techniques for efficient and accurate galaxy classification. The emergence of quantum computers has generated optimism about the potential for significantly improving the accuracy of such classifications by leveraging the large dimensionality of quantum Hilbert space. This paper presents a quantum-enhanced support vector machine algorithm for classifying galaxies based on their morphology. The algorithm requires the computation of a kernel matrix, a task that is performed on a simulated quantum computer using a quantum circuit conjectured to be intractable on classical computers. The results show similar performance between the classical and quantum-enhanced support vector machine algorithms. For a training size of $40$k, the receiver operating characteristic curve for differentiating ellipticals and spirals has an area under the curve (ROC AUC) of $0.946\pm 0.005$ for both the classical and quantum-enhanced algorithms. Additionally, we demonstrate for a small dataset that the performance of a noise-mitigated quantum SVM algorithm on a quantum device is in agreement with simulation. Finally, a necessary condition for achieving a potential quantum advantage is presented. This investigation is among the very first applications of quantum machine learning in astronomy and highlights its potential for further application in this field.
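The quantum enhancement only changes how the Gram (kernel) matrix is estimated; the classifier itself is an ordinary kernel SVM. A minimal sketch using scikit-learn's precomputed-kernel interface, with a classical RBF Gram matrix standing in for the quantum kernel (synthetic data, not the galaxy catalogue):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X_train, X_test = rng.normal(size=(200, 4)), rng.normal(size=(50, 4))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
y_test = (X_test[:, 0] > 0).astype(int)

def gram(A, B, gamma=0.5):
    """Stand-in kernel; a quantum kernel would instead estimate each
    entry as the fidelity between two data-encoding circuit states."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

clf = SVC(kernel="precomputed").fit(gram(X_train, X_train), y_train)
scores = clf.decision_function(gram(X_test, X_train))
print("ROC AUC:", roc_auc_score(y_test, scores))
```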
In the presence of self-interactions, the post-inflationary evolution of the inflaton field is driven into the non-linear regime by the resonant growth of its fluctuations. The once spatially homogeneous coherent inflaton is converted into a collection of inflaton particles with non-vanishing momentum. Fragmentation significantly alters the energy transfer rate to the inflaton's offspring during the reheating epoch. In this work we introduce a formalism to quantify the effect of fragmentation on particle production rates, and determine the evolution of the inflaton and radiation energy densities, including the corresponding reheating temperatures. For an inflaton potential with a quartic minimum, we find that the efficiency of reheating is drastically diminished after backreaction, yet it can lead to temperatures above the big bang nucleosynthesis limit for sufficiently large couplings. In addition, we use a lattice simulation to estimate the spectrum of induced gravitational waves, sourced by the scalar inhomogeneities, and discuss detectability prospects. We find that a Boltzmann approach allows one to accurately predict some of the main features of this spectrum.
We investigate the production of primordial black holes (PBHs) in a mixed inflaton-curvaton scenario with a quadratic curvaton potential, assuming the curvaton is in de Sitter equilibrium during inflation with $\langle \chi\rangle =0$. In this setup, the curvature perturbation sourced by the curvaton is strongly non-Gaussian, containing no leading Gaussian term. We show that for $m^2/H^2\gtrsim 0.3$, the curvaton contribution to the spectrum of primordial perturbations on CMB scales can be kept negligible but on small scales the curvaton can source PBHs. In particular, PBHs in the asteroid mass range $10^{-16}M_{\odot}\lesssim M\lesssim 10^{-10}M_{\odot}$ with an abundance reaching $f_{\rm PBH} = 1$ can be produced when the inflationary Hubble scale $H\gtrsim 10^{12}$ GeV and the curvaton decay occurs in the window from slightly before the electroweak transition to around the QCD transition.
We consider the relic density and positivity bounds for freeze-in scalar dark matter with general Higgs-portal interactions up to dimension-8 operators. When the dimension-4 and dimension-6 Higgs-portal interactions are proportional to the squared masses of the Higgs or the scalar dark matter, as in certain microscopic models such as massive graviton, radion, or general metric couplings with conformal and disformal modes, we can take the dimension-8 derivative Higgs-portal interactions to be dominant for determining the relic density via the 2-to-2 thermal scattering of the Higgs fields after reheating. We discuss the implications of the positivity bounds for microscopic models. First, a massive graviton or radion mediates attractive forces between the Higgs and scalar dark matter, and the resultant dimension-8 operators respect the positivity bounds. Second, the disformal couplings in the general metric allow for the subluminal propagation of the graviton but violate the positivity bounds. We show that there is a wide parameter space for explaining the correct relic density from the freeze-in mechanism and that the positivity bounds can nontrivially constrain the dimension-8 derivative Higgs-portal interactions in the presence of similar dimension-8 self-interactions for the Higgs and dark matter.
Deploying \textit{multiple sharp transitions} (MSTs) under a unified framework, we investigate the formation of Primordial Black Holes (PBHs) and the production of Scalar Induced Gravitational Waves (SIGWs) by incorporating the one-loop corrected renormalized-resummed scalar power spectrum. With an effective sound speed parameter $1 \leq c_s \leq 1.17$, the direct consequence is the generation of PBH masses spanning $M_{\rm PBH}\sim{\cal O}(10^{-31}M_{\odot}-10^{4}M_{\odot})$, thus evading the well-known \textit{no-go theorem} on the PBH mass. Our results are consistent with the extensive NANOGrav 15-year data and the sensitivities outlined by other terrestrial and space-based experiments (e.g., LISA, HLVK, BBO, and HLV(O3)).
Galactic conformity is the phenomenon in which a galaxy of a certain physical property is correlated with its neighbors of the same property, implying a possible causal relationship. The observed auto correlations of emission line galaxies (ELGs) from the highly complete DESI One-Percent survey exhibit a strong clustering signal on small scales, providing clear evidence for the conformity effect of ELGs. Building upon the original subhalo abundance matching (SHAM) method developed by Gao et al. (2022, 2023), we propose a concise conformity model to improve the ELG-halo connection. In this model, the number of satellite ELGs is boosted by a factor of $\sim 5$ in the halos whose central galaxies are ELGs. We show that the mean ELG satellite number in such central halos is still smaller than 1, and the model does not significantly increase the overall satellite fraction. With this model, we can recover the ELG auto correlations well down to the smallest scales explored with the current data (i.e. $r_{\mathrm{p}} > 0.03$ $\mathrm{Mpc}\,h^{-1}$ in real space and $s > 0.3$ $\mathrm{Mpc}\,h^{-1}$ in redshift space), while the cross correlations between luminous red galaxies (LRGs) and ELGs are nearly unchanged. Although our SHAM model has only 8 parameters, we further verify that it can accurately describe the ELG clustering in the entire redshift range from $z = 0.8$ to $1.6$. We therefore expect that this method can be used to generate high-quality ELG lightcone mocks for DESI.
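A minimal sketch of the conformity boost (the occupation forms and numbers are invented for illustration; only the factor-of-5 boost in halos with ELG centrals follows the text):

```python
import numpy as np

rng = np.random.default_rng(3)
log_m = rng.uniform(11.0, 14.0, 500_000)   # toy halo masses, log10 Msun/h

# Illustrative central-ELG probability and mean satellite occupation.
p_cen = 0.05 * np.exp(-0.5 * ((log_m - 12.0) / 0.3) ** 2)
n_sat_mean = 0.002 * 10 ** (0.9 * (log_m - 12.0))

is_elg_cen = rng.random(log_m.size) < p_cen

# Conformity: boost satellite occupation by ~5 where the central is an ELG.
n_sat_boost = rng.poisson(n_sat_mean * np.where(is_elg_cen, 5.0, 1.0))
n_sat_plain = rng.poisson(n_sat_mean)

f_sat = lambda ns: ns.sum() / (ns.sum() + is_elg_cen.sum())
print("mean N_sat | ELG central:", n_sat_boost[is_elg_cen].mean())  # stays < 1
print("satellite fraction with/without boost:",
      f_sat(n_sat_boost), f_sat(n_sat_plain))
```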
We present deep upper limits from the 2014 Murchison Widefield Array (MWA) Phase I observing season, with a particular emphasis on identifying the spectral fingerprints of extremely faint radio frequency interference (RFI) contamination in the 21~cm power spectra (PS). After meticulous RFI excision involving a combination of the \textsc{SSINS} RFI flagger and a series of PS-based jackknife tests, our lowest upper limit on the Epoch of Reionization (EoR) 21~cm PS signal is $\Delta^2 \leq 1.61\cdot10^4 \text{ mK}^2$ at $k=0.258\text{ h Mpc}^{-1}$ at a redshift of 7.1 using 14.7 hours of data. By leveraging our understanding of how even fainter RFI is likely to contaminate the EoR PS, we are able to identify ultra-faint RFI signals in the cylindrical PS. Surprisingly, this signature is most obvious in PS formed with less than an hour of data, but it is potentially subdominant to other systematics in multiple-hour integrations. Since the total RFI budget in a PS detection is quite strict, this nontrivial integration behavior suggests a need to model coherently integrated ultra-faint RFI in PS measurements more realistically, so that its potential contribution to a future detection can be diagnosed.
We propose a new model to generate large anisotropies in the scalar-induced gravitational wave (SIGW) background via sound speed resonance in the inflaton-curvaton mixed scenario. Cosmological curvature perturbations are not only exponentially amplified at a resonant frequency, but also preserve significant non-Gaussianity of the local type, described by $f_{\mathrm{nl}}$. Besides a significant enhancement of the energy-density fraction spectrum, large anisotropies in SIGWs can be generated because of super-horizon modulations of the energy density due to the existence of primordial non-Gaussianity. A reduced angular power spectrum $\tilde{C}_{\ell}$ could reach an amplitude of $[\ell(\ell+1)\tilde{C}_{\ell}]^{1/2} \sim 10^{-2}$, leading to potential measurements via planned gravitational-wave detectors such as DECIGO. The large anisotropies in SIGWs would serve as a powerful probe of the early universe, shedding new light on inflationary dynamics, primordial non-Gaussianity, and primordial black hole dark matter.
We consider the formation of primordial black holes (PBHs) in the radiation-dominated Universe, generated from the collapse of super-horizon curvature fluctuations that overlap with others on larger scales. Using a set of different curvature profiles, we show that the threshold for PBH formation (defined as the critical peak of the compaction function) can be decreased by several percent thanks to the overlap between the fluctuations. In the opposite case, when the fluctuations are sufficiently decoupled, the threshold values behave as if the fluctuations were isolated (isolated peaks). We find that the analytical estimates of arXiv:1907.13311 can be used accurately when applied to the peak that leads to the gravitational collapse. We also study the dynamics in detail and estimate the final PBH mass for different initial configurations, showing that the profile dependence has a significant effect on it.
We discuss long-lasting gravitational wave sources arising and operating during the radiation-dominated stage. Under a set of assumptions, we establish the correspondence between the cosmological evolution of a source and the resulting gravitational wave spectrum. Namely, for the energy density of the source falling as $\rho_s \propto 1/a^{\beta}$ in a Universe expanding with scale factor $a$, the spectrum takes the form $\Omega_{gw} \propto f^{2\beta-8}$ in certain ranges of values of the constant $\beta$ and frequencies $f$. In particular, matching to the best-fit power-law shape of the stochastic gravitational wave background recently discovered by Pulsar Timing Array collaborations, one identifies $\beta \approx 5$. We demonstrate the correspondence with concrete examples of long-lasting sources: domain walls and cosmic strings.
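The matching quoted above works as follows, taking $\Omega_{gw} \propto f^{2}$ as representative of the approximate PTA best-fit slope:

$$ \rho_s \propto a^{-\beta} \;\Longrightarrow\; \Omega_{gw}(f) \propto f^{\,2\beta-8}, \qquad 2\beta - 8 \approx 2 \;\Rightarrow\; \beta \approx 5. $$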
Boson stars arise as solutions of a massive complex scalar field coupled to gravity. A variety of scalar potentials, giving rise to different types of boson stars, have been studied in the literature. Here we study instead the effect of promoting the kinetic term of the scalar field to a nonlinear sigma model -- an extension that is naturally motivated by UV completions of gravity like string theory. We consider the $\mathrm{O}(3)$ and $\mathrm{SL}(2,\mathbb{R})$ sigma models with minimally interacting potentials and obtain their boson star solutions. We study the maximum mass and compactness of the solutions as a function of the curvature of the sigma model and compare the results to the prototypical case of mini boson stars, which are recovered in the case of vanishing curvature. The effect of the curvature turns out to be dramatic. While $\mathrm{O}(3)$ stars are massive and can reach a size as compact as $R\sim 3.3 GM$, $\mathrm{SL}(2,\mathbb{R})$ stars are much more diffuse and only astrophysically viable if the bosons are ultralight. These results show that the scalar kinetic term is at least as important as the potential in determining the properties of boson stars.
We study in detail the ejecta conditions and theoretical nucleosynthetic results for 17 three-dimensional core-collapse supernova (CCSN) simulations done with \textsc{Fornax}. We find that multi-dimensional effects introduce many complexities into ejecta conditions. We see stochastic electron fraction evolution, complex peak temperature distributions and histories, and long-tail distributions of the time spent within nucleosynthetic temperature ranges. These all lead to substantial variation in CCSN nucleosynthetic yields and differences with 1D results. We discuss the production of lighter $\alpha$-nuclei, radioactive isotopes, heavier elements, and a few isotopes of special interest. Comparing pre-CCSN and CCSN contributions, we find that a significant fraction of elements between roughly Si and Ge are generically produced in CCSNe. We find that $^{44}$Ti exhibits an extended production timescale compared to $^{56}$Ni, which may explain its different distribution and higher than previously predicted abundances in supernova remnants such as Cas A and SN1987A. We also discuss the morphology of the ejected elements. This study highlights the high-level diversity of ejecta conditions and nucleosynthetic results in 3D CCSN simulations and emphasizes the need for additional long-term 3D simulations to properly address such complexities.
In this paper, we use Topological Data Analysis (TDA), a mathematical approach for studying data shape, to analyse Fast Radio Bursts (FRBs). Applying the Mapper algorithm, we visualise the topological structure of a large FRB sample. Our findings reveal three distinct FRB populations based on their inferred source properties, and show a robust structure indicating their morphology and energy. We also identify potential non-repeating FRBs that might become repeaters based on proximity in the Mapper graph. This work showcases TDA's promise in unraveling the origin and nature of FRBs.
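For readers unfamiliar with Mapper, here is a minimal sketch using the open-source kepler-mapper package (one common Mapper implementation, not necessarily the paper's pipeline; the FRB feature table and clustering settings are placeholders):

```python
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

# Hypothetical FRB feature table (e.g. fluence, width, bandwidth, DM),
# replaced here by random data.
rng = np.random.default_rng(4)
X = rng.normal(size=(600, 4))

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))  # the "lens" function
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=10, perc_overlap=0.3),
                   clusterer=DBSCAN(eps=1.0, min_samples=5))
mapper.visualize(graph, path_html="frb_mapper.html", title="FRB Mapper graph")
```

Nodes of the resulting graph are overlapping clusters of FRBs; connected components and branches in the graph are what get interpreted as distinct populations.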
In order to shed light on the characteristics of the broad line region (BLR) in a narrow-line Seyfert 1 galaxy, we present an analysis of X-ray, UV, and optical spectroscopic observations of the broad emission lines in Mrk 110. For the broad-band modelling of the emission-line luminosity, we adopt the `locally optimally emitting cloud' approach, which allows us to place constraints on the gas radial and density distribution. By exploring additional environmental effects, we investigate the possible scenarios resulting in the observed spectra. We find that the photoionised gas in Mrk 110 responsible for the UV emission can fully account for the observed low-ionisation X-ray lines. The overall ionisation of the gas is lower, and one radial power-law distribution with a high integrated covering fraction $C_{\mathrm{f}} \approx 0.5$ provides an acceptable description of the emission lines spanning from X-rays to the optical band. The BLR is likely more compact than the broad-line Seyfert 1s studied so far, extending from $\sim\!10^{16}$ to $\sim\!10^{18}$ cm, and limited by the dust sublimation radius at the outer edge. Despite the large colour excess predicted by the Balmer ratio, the best fit suggests $E(B-V)\approx0.03$ for both the ionising luminosity and the BLR, indicating that extinction might be uniform over a range of viewing angles. While the adopted data-modelling technique does not allow us to place constraints on the geometry of the BLR, we show that the addition of models with a clumpy, equatorial, wind-like structure may lead to a better description of the observed spectra.
Swift J1357.2-0933 is a black hole transient of particular interest due to the recurrent optical dips found during its first two outbursts (in 2011 and 2017), with no obvious X-ray equivalent. We present fast optical photometry during its two most recent outbursts, in 2019 and 2021. Our observations reveal that the optical dips were present in every observed outburst of the source, although they were shallower and showed longer recurrence periods in the two most recent and fainter events. We perform a global study of the dip properties in the four outbursts, and find that they do not follow a common temporal evolution. In addition, we discover a correlation with the X-ray and optical fluxes, with dips being deeper and showing shorter recurrence periods during brighter stages. This trend seems to extend even to the faintest, quiescent states of the source. Finally, we discuss these results in the context of the possible connection between optical dips and outflows found in previous works.
In `spider' pulsars, the X-ray band is dominated by Intrabinary Shock (IBS) synchrotron emission. While the double-peaked X-ray light curves from these shocks have been well characterized in several spider systems (both black widows and redbacks), the polarization of this emission is yet to be studied. Motivated by the new polarization capability of the Imaging X-ray Polarization Explorer (IXPE) and the confirmation of highly ordered magnetic fields in pulsar wind nebulae, we model the IBS polarization, employing two potential magnetic field configurations: toroidal magnetic fields imposed by the pre-shock pulsar wind, and tangential shock-generated fields, which follow the post-shock flow. We find that if IBSs host ordered magnetic fields, the synchrotron X-rays from spider binaries can display a high degree of polarization ($\gtrsim50\%$), while the polarization angle variation provides a good probe of the binary geometry and the magnetic field structure. Our results encourage polarization observational studies of spider pulsars, which can distinguish the proposed magnetic models and better constrain unique properties of these systems.
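For context, the textbook maximum polarization degree of synchrotron emission from power-law electrons ($dN/dE \propto E^{-p}$) in a perfectly ordered magnetic field is

$$ \Pi_{\max} = \frac{p+1}{p+7/3}, $$

which evaluates to $\approx 69-75\%$ for $p = 2-3$; the $\gtrsim 50\%$ quoted above therefore corresponds to a largely ordered IBS field, with field disorder diluting $\Pi$ below this ceiling.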
Black hole solutions in the braneworld scenario are predicted to possess a tidal charge parameter, leaving imprints in the quasinormal spectrum. We conduct an extensive computation of this spectrum and use it to construct a waveform model for the ringdown relaxation regime of binary black hole mergers observed by LIGO and Virgo. Applying a Bayesian time-domain analysis formalism, we analyse a selected dataset from the GWTC-3 LIGO-Virgo-KAGRA catalog of binary coalescences, bounding the value of the tidal charge. With our analysis we obtain the first robust constraints on such charges, highlighting the importance of accounting for the previously ignored correlations with the other black hole intrinsic parameters.
The observability of afterglows from binary neutron star mergers occurring within AGN disks is investigated. We perform 3D GRMHD simulations of a post-merger system and follow the jet launched from the compact object. We use semi-analytic techniques to study the propagation of the blast wave powered by the jet through an AGN disk-like external environment, extending to distances beyond the disk scale height. The synchrotron emission produced by the jet-driven forward shock is calculated to obtain the afterglow emission. The observability of this emission at different frequencies is assessed by comparing it to the quiescent AGN emission. In the scenarios where the afterglow could temporarily outshine the AGN, we find that detection will be more feasible at higher frequencies ($>10^{14}$ Hz), and the electromagnetic counterpart could manifest as fast variability in the AGN emission, on timescales of less than a day.
In this study, we highlight the capacity of current and forthcoming air shower arrays utilizing water-Cherenkov stations to detect neutrino events spanning energies from $10\,$GeV to $100\,$TeV. This detection approach leverages individual stations equipped with both bottom and top photosensors, making use of features of the signal time trace and machine learning techniques. Our findings demonstrate the competitiveness of this method compared to established and future neutrino-detection experiments, including IceCube and the upcoming Hyper-Kamiokande experiment.
Although low-frequency quasiperiodic oscillations (LFQPOs) are commonly detected in the X-ray light curves of accreting black hole X-ray binaries, their origin still remains elusive. In this study, we conduct phase-resolved spectroscopy in a broad energy band for LFQPOs in MAXI J1820+070 during its 2018 outburst, utilizing Insight-HXMT observations. By employing the Hilbert-Huang transform method, we extract the intrinsic quasiperiodic oscillation (QPO) variability, and obtain the corresponding instantaneous amplitude, phase, and frequency functions for each data point. With well-defined phases, we construct QPO waveforms and phase-resolved spectra. By comparing the phase-folded waveform with that obtained from the Fourier method, we find that phase folding on the phase of the QPO fundamental frequency leads to a slight reduction in the contribution of the harmonic component. This suggests that the phase difference between QPO harmonics exhibits time variability. Phase-resolved spectral analysis reveals strong concurrent modulations of the spectral index and flux across the bright hard state. The modulation of the spectral index could potentially be explained by both the corona and jet precession models, with the latter requiring efficient acceleration within the jet. Furthermore, significant modulations in the reflection fraction are detected exclusively during the later stages of the bright hard state. These findings provide support for the geometric origin of LFQPOs and offer valuable insights into the evolution of the accretion geometry during the outburst in MAXI J1820+070.
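As a concrete illustration of the Hilbert-Huang step described above, the sketch below decomposes a uniformly sampled light curve into intrinsic mode functions and Hilbert-transforms the mode nearest the QPO frequency to obtain the instantaneous amplitude and phase. This is a minimal reading of the technique using the PyEMD and SciPy packages, not the authors' pipeline; `rate`, `dt`, and `f_qpo` are placeholder inputs.

```python
# Minimal sketch of Hilbert-Huang phase extraction for QPO folding
# (an illustration of the general technique, not the authors' code).
import numpy as np
from PyEMD import EMD            # pip install EMD-signal
from scipy.signal import hilbert

def qpo_phase(rate, dt, f_qpo):
    """Return instantaneous amplitude and phase of the QPO mode."""
    imfs = EMD()(rate)                            # empirical mode decomposition
    f_grid = np.fft.rfftfreq(rate.size, dt)
    # pick the intrinsic mode whose dominant Fourier frequency is closest
    # to the known QPO centroid frequency f_qpo
    peak_f = [f_grid[1 + np.argmax(np.abs(np.fft.rfft(imf))[1:])] for imf in imfs]
    qpo_imf = imfs[np.argmin(np.abs(np.array(peak_f) - f_qpo))]
    analytic = hilbert(qpo_imf)                   # analytic signal
    return np.abs(analytic), np.angle(analytic)   # amplitude, phase in (-pi, pi]

# the phases can then drive waveform folding or phase-resolved spectra, e.g.
# phase_bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 17))
```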
We report on IXPE, NICER and XMM-Newton observations of the magnetar 1E 2259+586. We find that the source is significantly polarized at about or above 20% for all phases except for the secondary peak, where it is more weakly polarized. The polarization degree is strongest during the primary minimum, which is also the phase where an absorption feature has been identified previously (Pizzocaro et al. 2019). The polarization angle of the photons is consistent with a rotating vector model with a mode switch between the primary minimum and the rest of the rotation of the neutron star. We propose a scenario in which the emission at the source is weakly polarized (as in a condensed surface) and, as the radiation passes through a plasma arch, resonant cyclotron scattering off protons produces the observed polarized radiation. This confirms the magnetar nature of the source, with a surface field greater than about $10^{15}$ G.
Tests of general relativity with gravitational wave observations from merging compact binaries continue to confirm Einstein's theory of gravity with increasing precision. However, these tests have so far only been applied to signals that were first confidently detected by matched-filter searches assuming general relativity templates. This raises the question of selection biases: what is the largest deviation from general relativity that current searches can detect, and are current constraints on such deviations necessarily narrow because they are based on signals that were detected by templated searches in the first place? In this paper, we estimate the impact of selection effects for tests of the inspiral phase evolution of compact binary signals with a simplified version of the GstLAL search pipeline. We find that selection biases affect the search for very large values of the deviation parameters, much larger than the constraints implied by the detected signals. Therefore, combined population constraints from confidently detected events are mostly unaffected by selection biases, with the largest effect being a broadening at the $\sim10$ % level for the $-1$PN term. These findings suggest that current population constraints on the inspiral phase are robust without factoring in selection biases. Our study does not rule out a disjoint, undetectable binary population with large deviations from general relativity, or stronger selection effects in other tests or search procedures.
In an accreting X-ray pulsar, a neutron star accretes matter from a stellar companion through an accretion disk. The high magnetic field of the rotating neutron star disrupts the inner edge of the disc, funneling the gas to flow onto the magnetic poles on its surface. Hercules X-1 is in many ways the prototypical X-ray pulsar; it shows persistent X-ray emission and it resides with its companion HZ Her, a two-solar-mass star, at about 7 kpc from Earth. Its emission varies on three distinct timescales: the neutron star rotates every 1.2 seconds, it is eclipsed by its companion every 1.7 days, and the system exhibits a superorbital period of 35 days which has remained remarkably stable since its discovery. Several lines of evidence point to the source of this variation being the precession of the accretion disc, the precession of the neutron star, or both. Despite the many hints over the past fifty years, the precession of the neutron star itself has not yet been confirmed or refuted. Here we present X-ray polarization measurements with the Imaging X-ray Polarimetry Explorer (IXPE) which probe the spin geometry of the neutron star. These observations provide direct evidence that the 35-day period is set by the free precession of the neutron star crust, with the important implication that the crust is fractionally asymmetric at the level of a few parts in ten million. Furthermore, we find indications that the basic spin geometry of the neutron star is altered by torques on timescales of a few hundred days.
The classifications of Fermi-LAT unassociated sources are studied using multiple machine learning (ML) methods. The updated data from 4FGL-DR3 are divided into high Galactic latitude (HGL, Galactic latitude $|b|>10^\circ$) and low Galactic latitude (LGL, $|b|\le10^\circ$) regions. In the HGL region, a voting ensemble of four binary ML classifiers achieves a 91$\%$ balanced accuracy. In the LGL region, an additional Bayesian-Gaussian (BG) model with three parameters is introduced to eliminate abnormal soft-spectrum AGNs from the training set and from the ML-identified AGN candidates, and a voting ensemble of four ternary ML algorithms reaches an 81$\%$ balanced accuracy. We then construct a catalog of Fermi-LAT all-sky unassociated sources. Our classification results show that (i) there are 1037 AGN candidates and 88 pulsar candidates with a balanced accuracy of $0.918 \pm 0.029$ in the HGL region, consistent with those given in previous all-sky ML approaches; and (ii) there are 290 AGN-like candidates, 135 pulsar-like candidates, and 742 other-like candidates with a balanced accuracy of $0.815 \pm 0.027$ in the LGL region, which differ from those in previous all-sky ML approaches. Additionally, different training sets and class weights were tested for their impact on classifier accuracy and predicted results. The findings suggest that while different training approaches can yield similar model accuracy, the predicted numbers across different categories can vary significantly. Thus, reliable evaluation of the predicted results is deemed crucial in the ML approach for Fermi-LAT unassociated sources.
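For readers unfamiliar with the voting-ensemble construction, the snippet below shows the generic pattern with scikit-learn, scored with the same balanced-accuracy metric. The four member classifiers and the synthetic feature/label arrays are placeholders, not the authors' exact configuration.

```python
# Generic four-member soft-voting ensemble scored with balanced accuracy,
# in the spirit of the HGL classification step (placeholder features/labels).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# stand-in for gamma-ray features (spectral index, curvature, variability, ...)
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.85, 0.15],
                           random_state=0)
clf = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    voting="soft",                   # average the predicted class probabilities
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```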
On August 25th 2013, Dana Patchick from the "Deep Sky Hunters" (DSH) amateur astronomer group discovered a diffuse nebulosity in the Wide-field Infrared Survey Explorer (WISE) mid-IR image archive that had no optical counterpart but appeared similar to many planetary nebulae (PNe) in WISE. As his 30th discovery, he named it Pa 30, and it was added to the HASH PN database as a new PN candidate. Little did he know how important his discovery would become. Ten years later, this object is the only known bound remnant of a violent double WD merger accompanied by a rare Type Iax SN, observed and recorded by Chinese and Japanese astronomers in 1181 AD. This makes Pa 30 and its central star IRAS 00500+6713 (WD J005311) the only SN Iax remnant in our Galaxy, the only known bound remnant of any SN, and, based on the central star's spectrum, the only Wolf-Rayet star known that neither has a massive progenitor nor is the central star of a planetary nebula. We cover this story and our key role in it.
The KM3NeT neutrino telescope is currently being deployed at two different sites in the Mediterranean Sea. First searches for astrophysical neutrinos have been performed using data taken with the partial detector configuration already in operation. The paper presents the results of two independent searches for neutrinos from compact binary mergers detected during the third observing run of the LIGO and Virgo gravitational wave interferometers. The first search looks for a global increase in the detector counting rates that could be associated with inverse beta decay events generated by MeV-scale electron anti-neutrinos. The second one focuses on upgoing track-like events mainly induced by muon (anti-)neutrinos in the GeV--TeV energy range. Both searches yield no significant excess for the sources in the gravitational wave catalogs. For each source, upper limits on the neutrino flux and on the total energy emitted in neutrinos in the respective energy ranges have been set. Stacking analyses of binary black hole mergers and neutron star-black hole mergers have also been performed to constrain the characteristic neutrino emission from these categories.
We describe a novel operator-splitting approach to numerical relativistic magnetohydrodynamics designed to expand its applicability to the domain of ultra-high magnetisation. In this approach, the electromagnetic field is split into the force-free component, governed by the equations of force-free degenerate electrodynamics (FFDE), and the perturbation component, governed by the perturbation equations derived from the full system of relativistic magnetohydrodynamics (RMHD). The combined system of the FFDE and perturbation equations is integrated simultaneously, for which various numerical techniques developed for hyperbolic conservation laws can be used. At the end of every time-step of numerical integration, the force-free and the perturbation components of the electromagnetic field are recombined and the result is regarded as the initial value of the force-free component for the next time-step, whereas the initial value of the perturbation component is set to zero. To explore the potential of this approach, we built a 3rd-order WENO code, which was used to carry out 1D and 2D test simulations. Their results show that this operator-splitting approach allows us to bypass the stiffness of RMHD in the ultra-high-magnetisation regime, where the perturbation component becomes very small.
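Schematically, the time-step described above can be summarized as follows; `evolve_ffde` and `evolve_perturbation` are hypothetical stand-ins for the FFDE and perturbation-equation integrators (the paper's actual update is a 3rd-order WENO scheme).

```python
# Schematic of one step of the splitting scheme described in the abstract;
# evolve_ffde and evolve_perturbation are hypothetical integrator stubs.
import numpy as np

def split_step(B_ff, E_ff, dB, dE, hydro, dt):
    # integrate the force-free and perturbation systems over the same step
    B_ff_new, E_ff_new = evolve_ffde(B_ff, E_ff, dt)
    dB_new, dE_new, hydro = evolve_perturbation(dB, dE, hydro, B_ff, E_ff, dt)
    # recombine: the summed field becomes the next force-free initial value,
    # while the perturbation component is reset to zero
    B_ff, E_ff = B_ff_new + dB_new, E_ff_new + dE_new
    dB, dE = np.zeros_like(dB), np.zeros_like(dE)
    return B_ff, E_ff, dB, dE, hydro
```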
Analyzing single-dish and VLBI radio, as well as Fermi-LAT $\gamma$-ray observations, we explained the three major $\gamma$-ray flares in the $\gamma$-ray light curve of FSRQ J1048+7143 with the spin-orbit precession of the dominant-mass black hole in a supermassive black hole binary system. Here, we report the detection of a fourth $\gamma$-ray flare from J1048+7143, appearing in the time interval predicted in our previous work. Using an updated analysis covering the time range between 2008 Aug 4 and 2023 Jun 4, we further constrain the parameters of the hypothetical supermassive binary black hole at the heart of J1048+7143, and we predict the occurrence of the fifth major $\gamma$-ray flare, which would appear only if the jet still lies close to our line of sight. The fourth major $\gamma$-ray flare also shows the two-subflare structure, further strengthening our scenario in which the occurrence of the subflares is the signature of the precession of a spine-sheath jet structure that quasi-periodically interacts with a proton target, e.g. clouds in the broad-line region.
Radio emission from magnetars provides a unique probe of the relativistic, magnetized plasma within the near-field environment of these ultra-magnetic neutron stars. The transmitted waves can undergo birefringent and dispersive propagation effects that result in frequency-dependent conversions of linear to circularly polarized radiation and vice-versa, thus necessitating classification when relating the measured polarization to the intrinsic properties of neutron star and fast radio burst (FRB) emission sites. We report the detection of such behavior in 0.7-4 GHz observations of the P = 5.54 s radio magnetar XTE J1810$-$197 following its 2018 outburst. The phenomenon is restricted to a narrow range of pulse phase centered around the magnetic meridian. Its temporal evolution is closely coupled to large-scale variations in magnetic topology that originate from either plastic motion of an active region on the magnetar surface or free precession of the neutron star crust. Our model of the effect deviates from simple theoretical expectations for radio waves propagating through a magnetized plasma. Birefringent self-coupling between the transmitted wave modes, line-of-sight variations in the magnetic field direction and differences in particle charge or energy distributions above the magnetic pole are explored as possible explanations. We discuss potential links between the immediate magneto-ionic environments of magnetars and those of FRB progenitors.
Eccentric compact binary mergers are significant scientific targets for current and future gravitational wave observatories. To detect and analyze eccentric signals, there is an increasing effort to develop waveform models, numerical relativity simulations, and parameter estimation frameworks for eccentric binaries. Unfortunately, current models and simulations use different internal parameterisations of eccentricity in the absence of a unique natural definition of eccentricity in general relativity, which can result in incompatible eccentricity measurements. In this paper, we adopt a standardized definition of eccentricity and mean anomaly based solely on waveform quantities, and make our implementation publicly available through an easy-to-use Python package, gw_eccentricity. This definition is free of gauge ambiguities, has the correct Newtonian limit, and can be applied as a postprocessing step when comparing eccentricity measurements from different models. This standardization puts all models and simulations on the same footing and enables direct comparisons between eccentricity estimates from gravitational wave observations and astrophysical predictions. We demonstrate the applicability of this definition and the robustness of our implementation for waveforms of different origins, including post-Newtonian theory, effective one body, extreme mass ratio inspirals, and numerical relativity simulations. We focus on binaries without spin-precession in this work, but possible generalizations to spin-precessing binaries are discussed.
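For concreteness, the standardized eccentricity in this line of work is built from the waveform's 22-mode angular frequency measured at periastron and apastron passages. Below is a minimal sketch of that mapping, assuming the transformation described in the associated paper; in practice the gw_eccentricity package itself should be used.

```python
# Sketch of the waveform-based eccentricity mapping, assuming the
# transformation from the associated paper; omega_p and omega_a are the
# 22-mode angular frequencies interpolated at periastron and apastron.
import numpy as np

def e_gw(omega_p, omega_a):
    # frequency-based eccentricity from periastron/apastron frequencies
    e_om = (np.sqrt(omega_p) - np.sqrt(omega_a)) / \
           (np.sqrt(omega_p) + np.sqrt(omega_a))
    # transformation ensuring the correct Newtonian limit
    psi = np.arctan2(1.0 - e_om**2, 2.0 * e_om)
    return np.cos(psi / 3.0) - np.sqrt(3.0) * np.sin(psi / 3.0)

# limits: e_gw -> 0 as omega_p -> omega_a (circular), e_gw -> 1 as e_om -> 1
```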
The enhanced activity typical of the core of Seyfert galaxies can drive powerful winds where high-energy phenomena can occur. In spite of their high power content, the number of such non-jetted active galactic nuclei (AGN) detected in gamma rays is very limited. In this Letter, we report the identification and measurement of the gamma-ray flux from NGC 4151, a Seyfert galaxy located at about 15.8 Mpc. The source is known for hosting ultra-fast outflows (UFOs) in its innermost core through X-ray spectroscopic observations, thereby becoming the first UFO host ever recognized in gamma rays. UFOs are mildly relativistic, wide opening angle winds detected in the innermost parsecs of active galaxies where strong shocks can develop. We interpret the gamma-ray flux as a result of diffusive shock acceleration at the wind termination shock of the UFO and inelastic hadronic collisions in its environment. Interestingly, NGC 4151 is also spatially coincident with a weak excess of neutrino events identified by the IceCube neutrino observatory. We compute the contribution of the UFO to such a neutrino excess and we discuss other possible emission regions such as the AGN corona.
The luminosity distance is a key observable of gravitational-wave (GW) observations. We demonstrate how one can correctly retrieve the luminosity distance of compact binary coalescences (CBCs) if the GW signal is "strongly lensed". We perform a proof-of-concept parameter estimation for the luminosity distance supposing (i) strong lensing produces two lensed GW signals emitted from a CBC, (ii) the Advanced LIGO-Virgo network detects both lensed signals as independent events, and (iii) the two events are identified as strongly lensed signals originating from the same source. Taking into account the maximum magnification allowed in the two-image lensing scenario and simulated GW signals emitted from four different binary black holes, we find that strong lensing can improve the precision of the distance estimation of a CBC by up to a factor of a few compared to what can be expected without lensing.
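The relation underlying this improvement is the standard lensing degeneracy (our schematic summary, not an equation quoted from the paper): magnification $\mu$ rescales the strain amplitude, so an unlensed analysis of a single image infers a biased distance,

$$ h_{\rm lensed}(f)=\sqrt{\mu}\,h(f) \quad\Longrightarrow\quad \hat d_L=\frac{d_L}{\sqrt{\mu}}, $$

and identifying two images whose relative magnification is bounded constrains $\mu$, thereby tightening the recovered $d_L$.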
Fast radio bursts (FRBs) are extragalactic transients with typical durations of milliseconds. FRBs have been shown, however, to fluctuate on a wide range of timescales: some show sub-microsecond sub-bursts while others last up to a few seconds in total. Probing FRBs on a range of timescales is crucial for understanding their emission physics, how to detect them effectively, and how to maximize their utility as astrophysical probes. FRB 20121102A is the first-known repeating FRB source. Here we show that FRB 20121102A is able to produce isolated microsecond-duration bursts whose total durations are more than ten times shorter than all other known FRBs to date. The polarimetric properties of these micro-bursts resemble those of the longer-lasting bursts, suggesting a common emission mechanism producing FRBs spanning a factor of 1,000 in duration. Furthermore, this work shows that there exists a population of ultra-fast radio bursts that current wide-field FRB searches are missing due to insufficient time-resolution. These results indicate that FRBs occur more frequently and with greater diversity than initially thought. This could also influence our understanding of energy, wait time, and burst rate distributions.
This study investigates the geodesic motion of test particles, both massless and massive, within a Schwarzschild-Klinkhamer (SK) wormhole space-time. We specifically consider the influence of cosmic strings on the system and analyze the effective potential, observing that the presence of a cosmic string parameter alters it for null and time-like geodesics. Moreover, we calculate the deflection angle for null geodesics and demonstrate that the cosmic string modifies this angle and induces a shift in the results. Additionally, we extend our investigation to this SK-wormhole space-time but with a global monopole. We explore the geodesic motion of test particles in this scenario and find that the effective potential is affected by the global monopole. Similarly, we determine the deflection angle for null geodesics and show that the global monopole parameter introduces modifications to this angle. Lastly, we present several known solutions for space-times involving cosmic strings and global monopoles within the framework of this SK-wormhole.
Internal shocks are a leading candidate for the dissipation mechanism that powers the prompt $\gamma$-ray emission in gamma-ray bursts (GRBs). In this scenario a compact central source produces an ultra-relativistic outflow with varying speeds, causing faster parts or shells to collide with slower ones. Each collision produces a pair of shocks -- a forward shock (FS) propagating into the slower leading shell and a reverse shock (RS) propagating into the faster trailing shell. The RS always has a smaller lab-frame speed than the FS but is typically the stronger of the two shocks, leading to different conditions in the two shocked regions, both of which contribute to the observed emission. We show that optically-thin synchrotron emission from both (weaker FS + stronger RS) can naturally explain key features of prompt GRB emission such as the pulse shapes, the time evolution of the $\nu F_\nu$ peak flux and photon energy, and the spectrum. In particular, it can account for two features commonly observed in GRB spectra: (i) a sub-dominant low-energy spectral component (often interpreted as ``photospheric''-like), or (ii) a doubly-broken power-law spectrum with the low-energy spectral slope approaching the slow-cooling limit. Both features can be obtained while maintaining high overall radiative efficiency without any fine-tuning of the physical conditions.
The cores of dense stars are a powerful laboratory for studying feebly-coupled particles such as axions. Some of the strongest constraints on axionlike particles and their couplings to ordinary matter derive from considerations of stellar axion emission. In this work we study the radiation of axionlike particles from degenerate neutron star matter via a lepton-flavor-violating (LFV) coupling that leads to muon-electron conversion when an axion is emitted. We calculate the axion emission rate per unit volume (emissivity) and by comparing with the rate of neutrino emission, we infer upper limits on the LFV coupling that are at the level of $|g_{ae\mu}| \lesssim 10^{-6}$. For the hotter environment of a supernova, such as SN 1987A, the axion emission rate is enhanced and the limit is stronger, at the level of $|g_{ae\mu}| \lesssim 10^{-11}$, competitive with laboratory limits. Interestingly, our derivation of the axion emissivity reveals that axion emission via the LFV coupling is suppressed relative to the familiar lepton-flavor-preserving channels by a factor of $T^2 E_{F,e}^2 / (m_\mu^2 - m_e^2)^2 \sim T^2/m_\mu^2$, which is responsible for the relatively weaker limits.
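To see why the supernova bound is so much stronger, one can evaluate the quoted suppression factor at representative temperatures (typical literature values, not numbers taken from the abstract):

$$ \frac{T^2}{m_\mu^2}\bigg|_{\rm SN} \sim \left(\frac{30\,{\rm MeV}}{105.7\,{\rm MeV}}\right)^2 \approx 8\times10^{-2}, \qquad \frac{T^2}{m_\mu^2}\bigg|_{\rm NS} \sim \left(\frac{0.1\,{\rm MeV}}{105.7\,{\rm MeV}}\right)^2 \approx 10^{-6}, $$

so the hotter supernova core pays a far smaller penalty for the lepton-flavor-violating channel, consistent with the five-orders-of-magnitude gap between the two limits.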
We present a JWST mid-infrared spectrum of the under-luminous Type Ia Supernova (SN Ia) 2022xkq, obtained with the medium-resolution spectrometer on the Mid-Infrared Instrument (MIRI) $\sim130$ days post-explosion. We identify the first MIR lines beyond 14 $\mu$m in SN Ia observations. We find features unique to under-luminous SNe Ia, including: isolated emission of stable Ni, strong blends of [Ti II], and large ratios of singly ionized to doubly ionized species in both [Ar] and [Co]. Comparisons to normal-luminosity SNe Ia spectra at similar phases show a tentative trend between the width of the [Co III] 11.888 $\mu$m feature and the SN light-curve shape. Using non-LTE multi-dimensional radiation-hydrodynamics simulations and the observed electron-capture elements, we constrain the mass of the exploding white dwarf. The best-fitting model shows that SN 2022xkq is consistent with an off-center delayed-detonation explosion of a near-Chandrasekhar-mass WD (M$_{\rm ej}$ $\approx 1.37$ M$_{\odot}$) of high central density ($\rho_c \geq 2.0\times10^{9}$ g cm$^{-3}$) viewed equator-on, which produced M($^{56}$Ni) $= 0.324$ M$_{\odot}$ and M($^{58}$Ni) $\geq 0.06$ M$_{\odot}$. The observed line widths are consistent with the overall abundance distribution, and the narrow stable-Ni lines indicate little to no mixing in the central regions, favoring central ignition of sub-sonic carbon burning followed by an off-center deflagration-to-detonation transition beginning at a single point. Additional observations may further constrain the physics, revealing the presence of additional species including Cr and Mn. Our work demonstrates the power of using the full coverage of MIRI in combination with detailed modeling to elucidate the physics of SNe Ia at a level not previously possible.
We consider the generation of gravitational waves by a binary system containing a wormhole. In the Newtonian limit, the gravitational potential of a wormhole requires an effective mass that takes into account radial tension effects. This definition allows us to derive the gravitational-wave production of homogeneous and heterogeneous binary systems. We then study gravitational-wave generation by orbiting wormhole-wormhole and wormhole-black hole binary systems before coalescence. Cases involving negative mass require more careful handling. We also calculate the energy lost to gravitational radiation by a particle orbiting the wormhole and by a particle moving straight through the wormhole mouth.
We perform a calculation of dense and hot nuclear matter where the mean interaction between nucleons is described by in-medium effective fields and where we employ analytical approximations of the Fermi integrals. We generalize a previous work [1], where we addressed the case of the Fermi gas model with an in-medium effective mass. In the present work, we fully treat the in-medium interaction by considering both its contribution to the in-medium effective fields, which can be subsumed into the mass in some cases, and to the potential term. Our formalism is general and could be applied to relativistic and non-relativistic approaches. It is illustrated for different popular models -- Skyrme, nonlinear, and density-dependent relativistic mean-field models -- and it provides a clear understanding of the in-medium correction to the pressure, which is present in the case of the Skyrme models but not for the relativistic ones. For the Fermi integrals, we compare the analytical approximation to the so-called ``exact'' numerical calculations in order to quantitatively estimate the accuracy of the approximation.
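A classic example of such an analytical approximation (not necessarily the specific one adopted in the paper) is the Sommerfeld expansion of the Fermi integral in the degenerate limit:

$$ F_k(\eta)=\int_0^\infty\frac{x^k\,dx}{1+e^{x-\eta}} \simeq \frac{\eta^{k+1}}{k+1}\left[1+\frac{\pi^2}{6}\,\frac{k(k+1)}{\eta^2}+\cdots\right], \qquad \eta\gg1, $$

where $\eta=\mu/T$ is the degeneracy parameter; the leading term is the zero-temperature result and the $\eta^{-2}$ correction captures the thermal smearing of the Fermi surface.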
When two galaxies merge, they often produce a supermassive black hole binary (SMBHB) at their center. Numerical simulations with cold dark matter show that SMBHBs typically stall at separations of a few parsecs and take billions of years to coalesce. This is known as the final parsec problem. We suggest that ultralight dark matter (ULDM) halos around SMBHBs can generate dark matter waves due to gravitational cooling. These waves can effectively carry away orbital energy from the black holes, rapidly driving them together. To test this hypothesis, we performed numerical simulations of black hole binaries inside ULDM halos. Our results imply that ULDM waves can lead to the rapid orbital decay of black hole binaries.
This document describes BinCodex, a common format for the output of binary population synthesis (BPS) codes agreed upon by the members of the LISA Synthetic UCB Catalogue Group. The goal of the format is to provide a common reference framework to describe the evolution of a single, isolated binary system or a population of isolated binaries.
We test Milgromian dynamics (MOND) using wide binary stars (WBs) with separations of $2-30$ kAU. Locally, the WB orbital velocity in MOND should exceed the Newtonian prediction by $\approx 20\%$ at asymptotically large separations given the Galactic external field effect (EFE). We investigate this with a detailed statistical analysis of \emph{Gaia} DR3 data on 8611 WBs within 250 pc of the Sun. Orbits are integrated in a rigorously calculated gravitational field that directly includes the EFE. We also allow line of sight contamination and undetected close binary companions to the stars in each WB. We interpolate between the Newtonian and Milgromian predictions using the parameter $\alpha_{\rm{grav}}$, with 0 indicating Newtonian gravity and 1 indicating MOND. Directly comparing the best Newtonian and Milgromian models reveals that Newtonian dynamics is preferred at $19\sigma$ confidence. Using a complementary Markov Chain Monte Carlo analysis, we find that $\alpha_{\rm{grav}} = -0.021^{+0.065}_{-0.045}$, which is fully consistent with Newtonian gravity but excludes MOND at $16\sigma$ confidence. This is in line with the similar result of Pittordis and Sutherland using a somewhat different sample selection and less thoroughly explored population model. We show that although our best-fitting model does not fully reproduce the observations, an overwhelmingly strong preference for Newtonian gravity remains in a considerable range of variations to our analysis. Adapting the MOND interpolating function to explain this result would cause tension with rotation curve constraints. We discuss the broader implications of our results in light of other works, concluding that MOND must be substantially modified on small scales to account for local WBs.
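The abstract does not spell out the interpolation; one natural schematic reading (our assumption, to be checked against the paper) is a linear blend of the two predicted gravitational fields,

$$ \boldsymbol g = \boldsymbol g_{\rm N} + \alpha_{\rm grav}\left(\boldsymbol g_{\rm MOND} - \boldsymbol g_{\rm N}\right), $$

so that $\alpha_{\rm grav}=0$ reproduces the Newtonian orbital velocities and $\alpha_{\rm grav}=1$ the Milgromian ones, including the external field effect.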
We present a study of molecular gas, traced via CO (3-2) from ALMA data, of four z < 0.2, `radio quiet', type 2 quasars (log [L(bol)/(erg/s)] = 45.3 - 46.2; log [L(1.4 GHz)/(W/Hz)] = 23.7 - 24.3). Targets were selected to have extended radio lobes (>= 10 kpc) and compact, moderate-power jets (1 - 10 kpc; log [Pjet/(erg/s)] = 43.2 - 43.7). All targets show evidence of central molecular outflows, or injected turbulence, within the gas disks (traced via high-velocity wing components in CO emission-line profiles). The inferred velocities (Vout = 250 - 440 km/s) and spatial scales (0.6 - 1.6 kpc) are consistent with those of other samples of luminous low-redshift AGN. In two targets, we observe extended molecular gas structures beyond the central disks, containing 9 - 53 % of the total molecular gas mass. These structures tend to be elongated, extending from the core and wrapping around (or along) the radio lobes. Their properties are similar to those of the molecular gas filaments observed around the radio lobes of mostly `radio loud' Brightest Cluster Galaxies. They have: projected distances of 5 - 13 kpc; bulk velocities of 100 - 340 km/s; velocity dispersions of 30 - 130 km/s; inferred mass outflow rates of 4 - 20 Msolar/yr; and estimated kinetic powers of log [Ekin/(erg/s)] = 40.3 - 41.7. Our observations are consistent with simulations that suggest moderate-power jets can have a direct (but modest) impact on molecular gas on small scales, through direct jet-cloud interactions. Then, on larger scales, jet cocoons can push gas aside. Both processes could contribute to the long-term regulation of star formation.
The relatively red wavelength range (4800-9300 Å) of the VLT Multi Unit Spectroscopic Explorer (MUSE) limits which metallicity diagnostics can be used; in particular excluding those requiring the [O II] $\lambda\lambda$3726,29 doublet. We assess various strong-line diagnostics by comparing to sulphur $T_e$-based metallicity measurements for a sample of 671 HII regions from 36 nearby galaxies from the MUSE Atlas of Disks (MAD) survey. We find that the O3N2 and N2 diagnostics return a narrower range of metallicities which lie up to ~0.3 dex below the $T_e$-based measurements, with a clear dependence on both metallicity and ionisation parameter. The N2S2H$\alpha$ diagnostic shows a near-linear relation with the $T_e$-based metallicities, although with a systematic downward offset of ~0.2 dex, but no clear dependence on ionisation parameter. These results imply that the N2S2H$\alpha$ diagnostic produces the most reliable results when studying the distribution of metals within galaxies with MUSE. On sub-HII-region scales, the O3N2 and N2 diagnostics measure metallicity decreasing towards the centres of HII regions, contrary to expectations. The S-calibration and N2S2H$\alpha$ diagnostics show no evidence of this, and show a positive relationship between ionisation parameter and metallicity at 12 + log(O/H) > 8.4, implying the relationship between ionisation parameter and metallicity differs on local and global scales. We also present HIIdentify, a python tool developed to identify HII regions within galaxies from H$\alpha$ emission maps. All segmentation maps and measured emission-line strengths for the 4408 HII regions identified within the MAD sample are available to download.
Polar-ring galaxies are systems in which a photometrically and kinematically decoupled component is highly inclined to the major axis of the host galaxy. These objects have been explored since the 1970s, but the rarity of such systems has made their study difficult. We examine a sample of 18,362 galaxies from the Sloan Digital Sky Survey (SDSS) Stripe 82 for the presence of galaxies with polar structures. Using deep SDSS Stripe 82, DESI Legacy Imaging Surveys, and Hyper Suprime-Cam Subaru Strategic Program imaging, we select 53 good candidate galaxies with photometrically decoupled polar rings, 9 galaxies with polar halos, and 34 possibly forming polar-ring galaxies, compared to the 13 polar-ring candidates previously mentioned in the literature for Stripe 82. Our results suggest that the occurrence rate of galaxies with polar structures may be significantly underestimated, as revealed by the deep observations, and may amount to 1-3% of non-dwarf galaxies.
Since the 1970s, astronomers have struggled with the issue of how matter can be accreted to promote black hole growth. While low-angular-momentum stars may be devoured by the black hole, they are not a sustainable source of fuel. Gas, which could potentially provide an abundant fuel source, presents another challenge due to its enormous angular momentum. While viscous torques are not significant, gas is subject to gravity torques from non-axisymmetric potentials such as bars and spirals. Primary bars can exchange angular momentum with the gas inside corotation, driving it inward spiraling until the inner Lindblad resonance is reached. An embedded nuclear bar can then take over. As the gas reaches the black hole's sphere of influence, the torque turns negative, fueling the center. Dynamical friction also accelerates the infall of gas clouds closer to the nucleus. However, due to the Eddington limit, growing a black hole from a stellar-mass seed is a slow process. The existence of very massive black holes in the early universe remains a puzzle that could potentially be solved through direct collapse of massive clouds into black holes or super-Eddington accretion.
The variable continuum emission of an active galactic nucleus (AGN) produces corresponding responses in the broad emission lines, which are modulated by light travel delays, and contain information on the physical properties, structure, and kinematics of the emitting gas region. The reverberation mapping technique, a time series analysis of the driving light curve and response, can recover some of this information, including the size and velocity field of the broad line region (BLR). Here we introduce a new forward-modeling tool, the Broad Emission Line MApping Code (BELMAC), which simulates the velocity-resolved reverberation response of the BLR to any given input light curve by setting up a 3D ensemble of gas clouds for various specified geometries, velocity fields, and cloud properties. In this work, we present numerical approximations to the transfer function by simulating the velocity-resolved responses to a single continuum pulse for sets of models representing a spherical BLR with a radiatively driven outflow and a disk-like BLR with Keplerian rotation. We explore how the structure, velocity field, and other BLR properties affect the transfer function. We calculate the response-weighted time delay (reverberation "lag"), which is considered to be a proxy for the luminosity-weighted radius of the BLR. We investigate the effects of anisotropic cloud emission and matter-bounded (completely ionized) clouds and find the response-weighted delay is only equivalent to the luminosity-weighted radius when clouds emit isotropically and are radiation-bounded (partially ionized). Otherwise, the luminosity-weighted radius can be overestimated by up to a factor of 2.
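The quantities named here follow the standard reverberation-mapping formalism: the line response to the driving continuum $C(t)$ is encoded in a transfer function $\Psi$, and the response-weighted delay is its first moment,

$$ L(v,t)=\int \Psi(v,\tau)\,C(t-\tau)\,d\tau, \qquad \tau_{\rm RW}=\frac{\int \tau\,\Psi(\tau)\,d\tau}{\int \Psi(\tau)\,d\tau}, $$

which is the quantity BELMAC compares against the luminosity-weighted BLR radius.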
We present a hierarchical Bayesian inference approach to estimating the structural properties and the phase-space center of a globular cluster (GC), given the spatial and kinematic information of its stars, based on lowered isothermal cluster models. As a first step towards more realistic modelling of GCs, we build a differentiable, accurate emulator of the lowered isothermal distribution function using interpolation. The reliable gradient information provided by the emulator allows the use of Hamiltonian Monte Carlo methods to sample large Bayesian models with hundreds of parameters, thereby enabling inference on hierarchical models. We explore the use of hierarchical Bayesian modelling to address several issues encountered in observations of GCs, including an unknown GC center, incomplete data, and measurement errors. Our approach not only avoids the common technique of radial binning but also incorporates the aforementioned uncertainties in a robust and statistically consistent way. By demonstrating the reliability of our hierarchical Bayesian model on simulations, our work lays the foundation for more realistic and complex modelling of real GC data.
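As a toy analogue of this approach, the sketch below fits a "cluster" centre and scale to noisy 2D star positions with latent per-star coordinates and NUTS/HMC in numpyro, with no radial binning. The real model replaces the Gaussian with the emulated lowered isothermal distribution function and carries many more parameters.

```python
# Toy hierarchical model: cluster centre + scale inferred from noisy star
# positions, with per-star "true" positions as latent parameters (no binning).
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(obs, err):
    center = numpyro.sample("center", dist.Normal(jnp.zeros(2), 10.0).to_event(1))
    r_s = numpyro.sample("r_s", dist.LogNormal(0.0, 1.0))   # scale radius
    with numpyro.plate("stars", obs.shape[0]):
        true = numpyro.sample("true", dist.Normal(center, r_s).to_event(1))
        numpyro.sample("obs", dist.Normal(true, err[:, None]).to_event(1), obs=obs)

key, k1, k2 = random.split(random.PRNGKey(0), 3)
truth = dist.Normal(jnp.array([1.0, -2.0]), 0.5).sample(k1, (500,))
obs = truth + 0.2 * random.normal(k2, truth.shape)   # add measurement noise

mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(key, obs=obs, err=0.2 * jnp.ones(500))
mcmc.print_summary()
```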
Traditional spectral analysis methods are increasingly challenged by the exploding volumes of data produced by contemporary astronomical surveys. In response, we develop deep-Regularized Ensemble-based Multi-task Learning with Asymmetric Loss for Probabilistic Inference ($\rm{deep-REMAP}$), a novel framework that utilizes the rich synthetic spectra from the PHOENIX library and observational data from the MARVELS survey to accurately predict stellar atmospheric parameters. By harnessing advanced machine learning techniques, including multi-task learning and an innovative asymmetric loss function, $\rm{deep-REMAP}$ demonstrates superior predictive capabilities in determining effective temperature, surface gravity, and metallicity from observed spectra. Our results reveal the framework's effectiveness in extending to other stellar libraries and properties, paving the way for more sophisticated and automated techniques in stellar characterization.
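The abstract does not give the form of the asymmetric loss; the snippet below illustrates the general idea with the familiar pinball (quantile) loss, in which residuals of opposite sign are weighted unequally. deep-REMAP's actual loss may differ in detail.

```python
import numpy as np

def asymmetric_loss(y_pred, y_true, tau=0.7):
    # pinball/quantile-style loss: positive residuals cost tau, negative ones
    # cost (1 - tau); tau = 0.5 recovers the symmetric L1 loss
    r = y_pred - y_true
    return np.mean(np.where(r > 0.0, tau, 1.0 - tau) * np.abs(r))
```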
Dynamical friction can be a valuable tool for inferring dark matter properties that are difficult to constrain by other methods. Most applications of dynamical friction calculations are concerned with the long-term angular momentum loss and orbital decay of the perturber within its host. This, however, assumes knowledge of the unknown initial conditions of the system. We advance an alternative methodology to infer the host properties from the perturber's shape distortions induced by the tides of the wake of dynamical friction, which we refer to as tidal dynamical friction. As the shape distortions rely on the tidal field, which has a predominantly local origin, we present a strategy to find the local wake by integrating the stellar orbits back in time along with the perturber, then removing the perturber's potential and re-integrating them back to the present. This provides perturbed and unperturbed coordinates and hence a change in coordinates, density, and acceleration fields, which yields the back-reaction experienced by the perturber. The method successfully recovers the tidal field of the wake based on a comparison with N-body simulations. We show that, similar to the tidal field itself, the noise and randomness of the dynamical friction force due to the finite number of stars is also dominated by regions close to the perturber. Stars near the perturber influence it more but are smaller in number, causing a high variance in the acceleration field. These fluctuations are intrinsic to dynamical friction. We show that a stellar density of $0.0014 {\rm M_\odot\, kpc^{-3}}$ yields an inherent variance of 10% in the dynamical friction force. The current method extends the family of dynamical friction methods that allow for the inference of host properties from the tidal forces of the wake. It can be applied to specific galaxies, such as the Magellanic Clouds, with Gaia data.
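The back-and-forth integration strategy lends itself to a compact sketch. Here `a_host` and `a_pert` are hypothetical acceleration fields of the host and the perturber, `pos_now`, `vel_now`, `dt`, and `n` are assumed inputs, and a leapfrog stepper stands in for the real orbit integrator.

```python
# Sketch of the perturbed-vs-unperturbed construction described above;
# a_host and a_pert are hypothetical acceleration-field callables.
import numpy as np

def integrate(pos, vel, acc, dt, n_steps):
    for _ in range(n_steps):                 # kick-drift-kick leapfrog
        vel = vel + 0.5 * dt * acc(pos)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * acc(pos)
    return pos, vel

# 1) integrate star orbits backwards with the perturber included
pos_b, vel_b = integrate(pos_now, vel_now, lambda x: a_host(x) + a_pert(x), -dt, n)
# 2) re-integrate forwards with the perturber's potential removed
pos_u, vel_u = integrate(pos_b, vel_b, a_host, +dt, n)
# 3) the displacement pos_now - pos_u maps the wake's density and
#    acceleration perturbation, i.e. the tidal dynamical friction field
```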
Detailed astrochemical models are a key component in interpreting observations of interstellar and circumstellar molecules, since they allow important physical properties of the gas and its evolutionary history to be deduced. We update one of the most widely used astrochemical databases to reflect advances in experimental and theoretical estimates of rate coefficients and to respond to the large increase in the number of molecules detected in space since our last release in 2013. We present the sixth release of the UMIST Database for Astrochemistry (UDfA), a major expansion of the gas-phase chemistry that describes the synthesis of interstellar and circumstellar molecules. Since our last release, we have undertaken a major review of the literature which has increased the number of reactions by over 40% to a total of 8767 and the number of species by over 55% to 737. We have made a particular effort to include many of the new species detected in space over the past decade, including those from the QUIJOTE and GOTHAM surveys, as well as providing references to the original data sources. We use the database to investigate the gas-phase chemistries appropriate to O-rich and C-rich conditions in TMC-1 and to the circumstellar envelope of the C-rich AGB star IRC+10216, and identify successes and failures of gas-phase-only models. This update is a significant improvement to the UDfA database. For the dark cloud and C-rich circumstellar envelope models, calculations match around 60% of the abundances of observed species to within an order of magnitude. There are a number of detected species, however, that are not included in the model either because their gas-phase chemistry is unknown or because they are likely formed via surface reactions on icy grains. Future laboratory and theoretical work is needed to include such species in reaction networks.
We present an analysis of the cloud-cloud collision (CCC) process in the Galactic molecular complex S235. Our new CO observations performed with the PMO-13.7m telescope reveal two molecular clouds, namely S235-Main and S235-ABC, with $\sim$4 km s$^{-1}$ velocity separation. The bridge feature, the possible colliding interface, and the complementary distribution of the two clouds are significant observational signatures of cloud-cloud collision in S235. The most direct evidence of the cloud-cloud collision process in S235 is that S235-Main (at a distance of 1547$^{+44}_{-43}$ pc) and S235-ABC (1567$^{+33}_{-39}$ pc) meet at almost the same position (within the 1$\sigma$ error range) at a supersonic relative speed. We identified ten $^{13}$CO clumps from the PMO-13.7m observations, 22 dust cores from the archival SCUBA-2 data, and 550 YSOs from NIR-MIR data. 63$\%$ of the YSOs cluster in seven MST groups (M1$-$M7). The tight association between the YSO groups (M1 $\&$ M7) and the bridge feature suggests that the CCC process triggers star formation there. The collisional-impact subregion (the South) shows 3$-$5 times higher CFE and SFE (average values of 12.3$\%$ and 10.6$\%$, respectively) than the non-collisional-impact subregion (2.4$\%$ and 2.6$\%$, respectively), suggesting that the CCC process may have enhanced the CFE and SFE of the clouds compared to those without collision influence.
Understanding the methodological robustness in identifying and quantifying high-redshift bars is essential for studying their evolution with the James Webb Space Telescope (JWST). Using a sample of nearby spiral galaxies, we created simulated images consistent with the observational conditions of the Cosmic Evolution Early Release Science (CEERS) survey. Through a comparison of measurements before and after image degradation, we show that the bar measurements for massive galaxies remain robust against noise. While the bar position angle measurement is unaffected by resolution, both the bar size ($a_{\rm bar}$) and bar ellipticity are typically underestimated, with the extent depending on $a_{\rm bar}/{\rm FWHM}$. To address these effects, correction functions are derived. We find that the detection rate of bars remains at $\sim$ 1 when $a_{\rm bar}/{\rm FWHM}$ is above 2, below which the rate drops sharply, quantitatively validating the effectiveness of using $a_{\rm bar}>2\times {\rm FWHM}$ as a bar detection threshold. By holding the true bar fraction ($f_{\rm bar}$) constant and accounting for both resolution effects and intrinsic bar size growth, the simulated CEERS images yield an apparent F444W-band $f_{\rm bar}$ that decreases significantly with higher redshift. Remarkably, this simulated apparent $f_{\rm bar}$ is in good agreement with the JWST observations reported by Conte et al., suggesting that the observed $f_{\rm bar}$ is significantly underestimated, especially at higher redshifts, leading to an overstated evolution of the $f_{\rm bar}$. Our results underscore the importance of disentangling the true $f_{\rm bar}$ evolution from resolution effects and bar size growth.
Large sky spectroscopic surveys have reached the scale of photometric surveys in terms of sample sizes and data complexity. These huge datasets require efficient, accurate, and flexible automated tools for data analysis and science exploitation. We present the Galaxy Spectra Network (GaSNet-II), a supervised multi-network deep learning tool for spectra classification and redshift prediction. GaSNet-II can be trained to identify a customized number of classes and optimize the redshift predictions for classified objects in each of them. It also provides redshift errors, using a network-of-networks that reproduces a Monte Carlo test on each spectrum by randomizing the weight initialization. As a demonstration of the capability of the deep learning pipeline, we use 260k Sloan Digital Sky Survey spectra from Data Release 16, separated into 13 classes including 140k Galactic and 120k extragalactic objects. GaSNet-II achieves 92.4% average classification accuracy over the 13 classes (larger than 90% for the majority of them), and an average redshift error of approximately 0.23% for galaxies and 2.1% for quasars. We further train/test the same pipeline to classify spectra and predict redshifts for a sample of 200k 4MOST mock spectra and 21k publicly released DESI spectra. On 4MOST mock data, we reach 93.4% accuracy in 10-class classification and an average redshift error of 0.55% for galaxies and 0.3% for active galactic nuclei. On DESI data, we reach 96% accuracy in (star/galaxy/quasar only) classification and an average redshift error of 2.8% for galaxies and 4.8% for quasars, despite the small sample size available. GaSNet-II can process ~40k spectra in less than one minute on a standard desktop GPU. This makes the pipeline particularly suitable for real-time analyses of Stage-IV survey observations and an ideal tool for feedback loops aimed at night-by-night survey strategy optimization.
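The "network-of-networks" error estimate amounts to an ensemble spread over random weight initializations; schematically (with a hypothetical `make_net` factory and placeholder training/test data):

```python
import numpy as np

# Hypothetical ensemble of identically trained networks that differ only in
# their random weight initialization (the "network of networks" idea).
ensemble = [make_net(seed=s).fit(train_spectra, train_z) for s in range(10)]

z_preds = np.stack([net.predict(spectra) for net in ensemble])  # (n_nets, n_spec)
z_hat = z_preds.mean(axis=0)      # ensemble redshift estimate
z_err = z_preds.std(axis=0)       # Monte-Carlo spread -> per-spectrum error
```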
The fraction of massive stars in young stellar clusters is of importance as they are the dominant sources of both mechanical and radiative feedback, strongly influencing the thermal and dynamical state of their birth environments. A significant fraction of massive stars escape from their parent cluster via dynamical interactions of single stars and/or multiple stellar systems. M 17 is the nearest giant H II region hosting a very young and massive cluster: NGC 6618. Our aim is to identify stars brighter than G < 21 mag that belong to NGC 6618, including the (massive) stars that may have escaped since its formation, and to determine the cluster distance and age. We identified 42 members of NGC 6618, of which eight have a spectral type of O, with a mean distance of 1675 pc, a transverse velocity dispersion of about 3 km/s, and a radial velocity dispersion of 6 km/s. Another ten O stars are associated with NGC 6618, but they cannot be classified as members due to poor astrometry or high extinction. We have also identified six O-star runaways. The relative transverse velocity of these runaways ranges from 10 to 70 km/s and their kinematic age ranges from about 100 to 750 kyr. Given the already established young age of NGC 6618 (< 1 Myr), this implies that massive stars are being ejected from the cluster during or directly after the cluster formation process. When constructing the initial mass function, one has to take into account the massive stars that have already escaped from the cluster, that is, about 30% of the O stars of the original population of NGC 6618. The trajectories of the O runaways can be traced back to the central 0.25 pc region of NGC 6618. The good agreement between the evolutionary and kinematic ages of the runaways implies that the latter provides an independent way to estimate (a lower limit to) the age of the cluster.
Cloud-cloud collisions (CCCs) are expected to compress gas and trigger star formation. However, it is not well understood how the collisions and the induced star formation affect galactic-scale properties. By developing an on-the-fly algorithm to identify CCCs at each timestep in a galaxy simulation and a model that relates CCC-triggered star formation to collision speeds, we perform simulations of isolated galaxies to study the evolution of galaxies and giant molecular clouds (GMCs) with prescriptions of self-consistent CCC-driven star formation and stellar feedback. We find that the simulation with the CCC-triggered star formation produces slightly higher star formation rates and a steeper Kennicutt-Schmidt relation than that with a more standard star formation recipe, although collision speeds and frequencies are insensitive to the star formation models. In the simulation with the CCC model, about 70% of the stars are born via CCCs, and colliding GMCs with masses of $\approx 10^{5.5}\,M_{\odot}$ are the main drivers of CCC-driven star formation. In the simulation with the standard star formation recipe, about 50% of stars are born in colliding GMCs even without the CCC-triggered star formation model. These results suggest that CCCs may be one of the most important star formation processes in galaxy evolution. Furthermore, we find that a post-processing analysis of CCCs, as used in previous studies in galaxy simulations, may lead to slightly greater collision speeds and significantly lower collision frequencies than the on-the-fly analysis.
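The abstract does not detail the on-the-fly identification; the deliberately simplified stand-in below conveys the idea, flagging overlapping cloud pairs that approach faster than a threshold speed, with the relative speed then fed to the triggered star formation model. Real implementations typically track the cloud membership of gas particles between timesteps instead.

```python
import numpy as np

def find_collisions(centers, radii, vels, v_min=1.0):
    """Flag cloud pairs that overlap while approaching faster than v_min.
    A simplified stand-in for an on-the-fly membership-tracking algorithm."""
    events = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            sep = np.linalg.norm(centers[i] - centers[j])
            v_rel = np.linalg.norm(vels[i] - vels[j])
            if sep < radii[i] + radii[j] and v_rel > v_min:
                events.append((i, j, v_rel))   # v_rel feeds the SF model
    return events
```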
The fates of massive galaxies are tied to the evolution of their central supermassive black holes (BHs), due to the influence of AGN feedback. Correlations within simulated galaxy populations suggest that the masses of BHs are governed by properties of their host dark matter haloes, such as the binding energy and assembly time, at a given halo mass. However, the full picture must be more complex as galaxy mergers have also been shown to influence the growth of BHs and the impact of AGN. In this study, we investigate this problem through a controlled experiment, using the genetic modification technique to adjust the assembly history of a Milky Way-like galaxy simulated with the EAGLE model. We change the halo assembly time (and hence the binding energy) in the absence of any disruptive merger events, and find little change in the integrated growth of the BH. We attribute this to the angular momentum support provided by a galaxy disc, which reduces the inflow of gas towards the BH and effectively decouples the BH's growth from the halo's properties. Introducing major mergers into the assembly history disrupts the disc, causing the BH to grow $\approx 4\times$ more massive and inject feedback that reduces the halo baryon fraction by a factor of $\approx 2$ and quenches star formation. Merger events appear essential to the diversity in BH masses in EAGLE, and we also show that they increase the halo binding energy; correlations between these quantities may therefore be the result of merger events.
Planets in young star clusters could shed light on planet formation and evolution, since star clusters can provide accurate age estimates. However, the number of transiting planets detected in clusters was only $\sim 30$, too small for statistical analysis. Thanks to the unprecedented high-precision astrometric data provided by Gaia DR2 and Gaia DR3, many new open clusters (OCs) and comoving groups have been identified. The UPiC project aims to find observational evidence of, and interpret, how planets form and evolve in cluster environments. In this work, we cross-match the stellar catalogs of new OCs and comoving groups with confirmed planets and candidates. We carefully remove false positives and obtain the largest catalog of planets in star clusters to date, which consists of 73 confirmed planets and 84 planet candidates. After age validation, we obtain the radius--age diagram of these planets/candidates. We find an increase in the fraction of hot Jupiters (HJs) around 100 Myr and attribute it to flyby-induced high-eccentricity migration in star clusters. An additional small bump in the fraction of HJs after 1 Gyr is detected, which indicates that the formation timescale of HJs around field stars is much longer than that in star clusters. Thus, stellar environments play important roles in the formation of HJs. The hot-Neptune desert occurs around 100 Myr in our sample. A combination of photoevaporation and high-eccentricity migration may sculpt the hot-Neptune desert in clusters.
We present chemical abundance ratios of 70 star-forming galaxies at $z\sim4$-10 observed by the JWST/NIRSpec ERO, GLASS, and CEERS programs. Among the 70 galaxies, we have pinpointed 2 galaxies, CEERS_01019 at $z=8.68$ and GLASS_150008 at $z=6.23$, with extremely low C/N ([C/N]$\lesssim -1$), evidenced by CIII]$\lambda\lambda$1907,1909, NIII]$\lambda$1750, and NIV]$\lambda\lambda$1483,1486, which show high N/O ratios ([N/O]$\gtrsim 0.5$) comparable with that of GN-z11, regardless of whether stellar or AGN radiation is assumed. Such low C/N and high N/O ratios found in CEERS_01019 and GLASS_150008 (additionally identified in GN-z11) are strongly biased towards the equilibrium of the CNO cycle, suggesting that these 3 galaxies are enriched by metals processed by the CNO cycle. On the C/N vs. O/H plane, these 3 galaxies do not coincide with Galactic HII regions, normal star-forming galaxies, or nitrogen-loud quasars with asymptotic-giant-branch stars, but with globular-cluster (GC) stars, indicating a connection with GC formation. We compare the C/O and N/O of these 3 galaxies with those of theoretical models, and find that they are explained by scenarios with dominant CNO-cycle materials, i.e. Wolf-Rayet stars, supermassive ($10^{3}-10^{5}\ M_{\odot}$) stars, and tidal disruption events, interestingly with a requirement of frequent direct collapses. For all 70 galaxies, we present measurements of Ne/O, S/O, and Ar/O, together with C/O and N/O. We identify 4 galaxies with very low Ne/O, $\log(\rm Ne/O)<-1.0$, indicating abundant massive ($\gtrsim30\ M_\odot$) stars.
Classical Cepheids (DCEPs) play a fundamental role in the calibration of the extragalactic distance ladder, which eventually leads to the determination of the Hubble constant ($H_0$) thanks to the period-luminosity ($PL$) and period-Wesenheit ($PW$) relations exhibited by these pulsating variables. It is therefore of great importance to establish the dependence of the $PL/PW$ relations on metallicity. We aim to quantify the metallicity dependence of the Galactic DCEPs' $PL/PW$ relations for a variety of photometric bands ranging from the optical to the near-infrared. We gathered a literature sample of 910 DCEPs with available [Fe/H] values from high-resolution spectroscopy or metallicities from the Gaia Radial Velocity Spectrometer. For all these stars, we collected photometry in the $G_{BP},G_{RP},G,I,V,J,H,K_S$ bands and astrometry from Gaia DR3. These data have been used to investigate the metal dependence of both the intercepts and slopes of a variety of $PL/PW$ relations at multiple wavelengths. We find a large negative metallicity effect on the intercept (the $\gamma$ coefficient) of all the $PL/PW$ relations investigated in this work, while the present data still do not allow us to draw firm conclusions regarding the metal dependence of the slope (the $\delta$ coefficient). The typical values of $\gamma$ are around $-0.4$ to $-0.5$ mag/dex, i.e. larger in absolute value than most of the recent determinations in the literature. We carried out several tests which confirm the robustness of our results. As in our previous works, we find that the inclusion of a global zero-point offset of the Gaia parallaxes yields smaller values of $\gamma$ (in an absolute sense). However, the assumption of the geometric distance of the LMC seems to indicate that larger values of $\gamma$ (in an absolute sense) would be preferred.
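For reference, the $\gamma$ and $\delta$ coefficients refer to the metallicity terms of a period-luminosity-metallicity relation of the schematic form used in this literature,

$$ M = \alpha + \beta\,\log P + \gamma\,[\mathrm{Fe/H}] + \delta\,[\mathrm{Fe/H}]\,\log P, $$

so $\gamma\approx-0.4$ to $-0.5$ mag/dex means that metal-rich Cepheids are intrinsically brighter at fixed period.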
Observations of clusters suffer from issues such as completeness, projection effects, resolving individual stars, and extinction. How accurate, then, are measurements and conclusions likely to be? Here, we take cluster simulations (Westerlund 2- and Orion-type), synthetically observe them to obtain luminosities, accounting for extinction and the inherent limits of Gaia, and then place them within the real Gaia DR3 catalogue. We then attempt to rediscover the clusters at distances between 500 pc and 4300 pc. We show which spatial and kinematic criteria are best able to pick out the simulated clusters, maximising completeness and minimising contamination. We then compare the properties of the 'observed' clusters with the original simulations. We look at the degree of clustering, the identification of clusters and subclusters within the datasets, and whether the clusters are expanding or contracting. Even with a high level of incompleteness (e.g. $<2\%$ of stellar members identified), similar qualitative conclusions tend to be reached compared to the original dataset, but most quantitative conclusions are likely to be inaccurate. The number, stellar membership, and kinematic properties of subclusters are the most problematic to determine correctly, particularly at larger distances, where cluster substructure disappears as the data become more incomplete, but also at smaller distances, where the misidentification of asterisms as true structure can be problematic. Unsurprisingly, we tend to obtain better quantitative agreement of properties for our more massive Westerlund 2-type cluster. We also make optical-style images of the clusters over our range of distances.
The ANAIS experiment is intended to search for dark matter annual modulation with ultrapure NaI(Tl) scintillators in order to provide a model-independent confirmation or refutation of the long-standing DAMA/LIBRA positive annual modulation signal in the low-energy detection rate, using the same target and technique. Other experiments exclude the region of parameters singled out by DAMA/LIBRA. However, these experiments use different target materials, so the comparison of their results depends on the models assumed for the dark matter particle and its distribution in the galactic halo. ANAIS-112, consisting of nine 12.5 kg NaI(Tl) modules produced by Alpha Spectra Inc., arranged in a 3$\times$3 matrix configuration, has been taking data smoothly with excellent performance at the Canfranc Underground Laboratory, Spain, since August 2017. The last published results, corresponding to a three-year exposure, were compatible with the absence of modulation and incompatible with DAMA/LIBRA at a sensitivity above 2.5$\sigma$ C.L. We report the present status of the experiment and a reanalysis of the first three years of data using new filtering protocols based on machine-learning techniques. This reanalysis improves the sensitivity previously achieved for the DAMA/LIBRA signal. Updated sensitivity prospects are also presented: with the improved filtering, testing the DAMA/LIBRA signal at 5$\sigma$ will be within reach in 2025.
The goal of this work is to characterize the polarization effects of the VLTI and GRAVITY. This is needed to calibrate polarimetric observations with GRAVITY for instrumental effects and to understand the systematic error introduced into the astrometry by birefringence when observing targets with significant intrinsic polarization. By combining a model of the VLTI light path and its mirrors with dedicated experimental data, we construct a full polarization model of the VLTI UTs and the GRAVITY instrument. We first characterize all telescopes together to construct a UT calibration model for polarized targets. We then expand the model to include the differential birefringence. With this, we can constrain the systematic errors for highly polarized targets. Together with this paper, we publish a standalone Python package to calibrate the instrumental effects on polarimetric observations. This enables the community to use GRAVITY to observe targets in a polarimetric observing mode. We demonstrate the calibration model with the Galactic Center star IRS 16C. For this source, we can constrain the polarization degree to within 0.4% and the polarization angle to within 5 deg, consistent with the literature. Furthermore, we show that there is no significant contrast loss, even if the science and fringe-tracker targets have significantly different polarization, and we determine that the phase error in such an observation is smaller than 1 deg, corresponding to an astrometric error of 10 $\mu$as. With this work, we enable the use of the polarimetric mode with GRAVITY/UTs and outline the steps necessary to observe and calibrate polarized targets. We demonstrate that it is possible to measure the intrinsic polarization of astrophysical sources with high precision and that polarization effects do not limit astrometric observations of polarized targets.
The subtle influence of gravitational waves on the apparent positions of celestial bodies offers novel observational windows. We calculate the expected astrometric signal induced by an isotropic Stochastic Gravitational Wave Background (SGWB) in the short-distance limit. Our focus is on the resultant proper motion of Solar System objects, a signal on the same time scales addressed by Pulsar Timing Arrays (PTA). We derive the corresponding astrometric deflection patterns, finding that they manifest as distinctive dipole and quadrupole correlations or, in some cases, may be absent. Our analysis encompasses both Einsteinian and non-Einsteinian polarisations. We estimate the upper limits on the amplitude of a scale-invariant SGWB that could be obtained by tracking the proper motions of large numbers of Solar System objects such as asteroids. With the Gaia satellite and the Vera C. Rubin Observatory poised to track an extensive sample of asteroids, ranging from $O(10^5)$ to $O(10^6)$ objects, we highlight the significant future potential for such surveys to contribute to our understanding of the SGWB.
Thermal inertia estimates are available for only a few hundred objects, and the results are based almost exclusively on thermophysical modeling (TPM). We present a novel thermal inertia estimation method, the Asteroid Thermal Inertia Analyzer (ASTERIA). The core of the ASTERIA model is a Monte Carlo approach based on Yarkovsky drift detection. We validate our model on asteroid Bennu plus ten well-characterized near-Earth asteroids (NEAs) for which good TPM-based estimates of the thermal inertia exist. The tests show that ASTERIA provides reliable results consistent with the literature values. The new method is independent of the TPM, allowing independent verification of the results. As the Yarkovsky effect is more pronounced in small asteroids, a noteworthy advantage of ASTERIA over the TPM is its ability to work with smaller asteroids, for which the TPM typically lacks input data. We used ASTERIA to estimate the thermal inertia of 38 NEAs, 31 of them sub-km asteroids. Twenty-nine objects in our sample are classified as Potentially Hazardous Asteroids. On the limitation side, ASTERIA is somewhat less accurate than the TPM. The applicability of our model is limited to NEAs, as the Yarkovsky effect has yet to be detected in main-belt asteroids. However, we can expect a significant increase in high-quality measurements of the input parameters relevant to ASTERIA from upcoming surveys. This will surely increase the reliability of the results generated by ASTERIA and widen the model's applicability.
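A highly schematic Monte Carlo in the spirit of the method above: sample candidate physical parameters and keep those whose predicted Yarkovsky drift matches the detected one. The drift model and all numbers below are placeholders, not the ASTERIA implementation.

    import numpy as np

    rng = np.random.default_rng(7)
    drift_obs, drift_err = -4.0e-4, 0.5e-4     # toy detected da/dt and its error

    def model_drift(gamma, diameter, density):
        """Placeholder drift model, decreasing with thermal inertia gamma."""
        return (-1.0e-3 * (100.0 / diameter) * (1000.0 / density)
                / (1.0 + gamma / 200.0))

    n = 200_000
    gamma = rng.uniform(0.0, 1000.0, n)        # thermal inertia prior
    diameter = rng.normal(150.0, 15.0, n)      # m, with measurement scatter
    density = rng.normal(1200.0, 100.0, n)     # kg/m^3

    # Accept samples whose predicted drift matches the detection.
    accepted = np.abs(model_drift(gamma, diameter, density) - drift_obs) < drift_err
    print("accepted:", accepted.sum())
    print("thermal inertia percentiles:", np.percentile(gamma[accepted], [16, 50, 84]))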
This work reports the performance evaluation of an SDR readout system based on the latest generation (Gen3) of AMD's Radio Frequency System-on-Chip (RFSoC) processing platform, which integrates a full-stack processing system and a powerful FPGA with up to 32 high-speed, high-resolution 14-bit Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs). The proposed readout system uses a previously developed multi-band, double-conversion IQ RF-mixing board targeting a multiplexing factor of approximately 1,000 bolometers in a bandwidth between 4 and 8 GHz, in line with state-of-the-art microwave SQUID multiplexers ($\mu$MUX). The characterization of the system was performed in two stages, under the conditions typically imposed by the multiplexer and the cold readout circuit. First, in transmission, showing that the noise and spurious levels of the generated tones are close to the values imposed by the cold readout. Second, in RF loopback, presenting noise values better than $-100$ dBc/Hz, fully in agreement with state-of-the-art readout systems. This demonstrates that the RFSoC Gen3 device is a suitable enabling technology for the next generation of superconducting detector readout systems, reducing system complexity and increasing system integration without performance degradation.
The PRobe far-Infrared Mission for Astrophysics (PRIMA) is under study as a potential far-IR space mission, featuring actively cooled optics and both imaging and spectroscopic instrumentation. To take full advantage of the low background afforded by a cold telescope, spectroscopy with PRIMA requires detectors with a noise equivalent power (NEP) better than $1 \times 10^{-19}$ W Hz$^{-1/2}$. To meet this goal we are developing large-format arrays of kinetic inductance detectors (KIDs) to work across the $25-250$ micron range. Here we present the design and characterization of a single-pixel prototype detector optimized for $210$ micron. The KID consists of a lens-coupled aluminum inductor-absorber connected to a niobium interdigitated capacitor to form a 2 GHz resonator. We measure the performance of this detector with optical loading in the $0.01 - 300$ aW range. At low loading the detector achieves an NEP of $9\times10^{-20}$ W Hz$^{-1/2}$ at a 10 Hz readout frequency, and the lens-absorber system achieves a good optical efficiency. An extrapolation of these measurements suggests this detector may remain photon-noise limited up to 20 fW, offering a high dynamic range for PRIMA observations of bright astronomical sources.
Coronal Mass Ejections (CMEs) are immense eruptions of plasma and magnetic fields that are propelled outward from the Sun, sometimes with velocities greater than 2000 km/s. They are responsible for some of the most severe space weather at Earth, including geomagnetic storms and solar energetic particle (SEP) events. We have developed CORHEL-CME, an interactive tool that allows non-expert users to routinely model multiple CMEs in a realistic coronal and heliospheric environment. The tool features a web-based user interface that allows the user to select a time period of interest, and employs RBSL flux ropes to create stable and unstable pre-eruptive configurations within a background global magnetic field. The properties of these configurations can first be explored in a zero-beta magnetohydrodynamic (MHD) model, followed by complete CME simulations in thermodynamic MHD, with propagation out to 1 AU. We describe design features of the interface and computations, including the innovations required to efficiently compute results on practical timescales with moderate computational resources. CORHEL-CME is now implemented at NASA's Community Coordinated Modeling Center (CCMC) using NASA Amazon Web Services (AWS). It will be available to the public by the time this paper is published.
While proper orbital elements are currently available for more than 1 million asteroids, taxonomic information is still lagging behind. Surveys like SDSS-MOC4 provided preliminary information for more than 100,000 objects, but many asteroids still lack even a basic taxonomy. In this study, we use Dark Energy Survey (DES) data to provide new information on asteroid physical properties. By cross-correlating the new DES database with other databases, we investigate how asteroid taxonomy is reflected in DES data. While the resolution of DES data is not sufficient to distinguish between different asteroid taxonomies within the complexes, except for V-type objects, it can provide information on whether an asteroid belongs to the C- or S-complex. Here, we use machine learning methods, optimized through genetic algorithms, to predict the labels of more than 68,000 asteroids with no prior taxonomic information. Using a high-quality, limited set of asteroids with data on $gri$ slopes and $i-z$ colors, we detect 409 new candidate V-type asteroids. Their orbital distribution is highly consistent with that of other known V-type objects.
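As a toy sketch of this classification step: a plain random forest on two photometric features stands in for the paper's genetic-algorithm-optimized pipeline, and the training values below are synthetic, not DES data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Stand-in training set: (gri slope, i-z color) for C- and S-complex asteroids.
    X_train = np.vstack([rng.normal([0.00, 0.00], 0.05, (500, 2)),    # C-like
                         rng.normal([0.15, -0.05], 0.05, (500, 2))])  # S-like
    y_train = np.array(["C"] * 500 + ["S"] * 500)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X_train, y_train, cv=5).mean())
    clf.fit(X_train, y_train)
    # Predict the complex of an asteroid lacking prior taxonomy.
    print(clf.predict([[0.02, -0.01]]))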
Image degradation impedes our ability to extract information from astronomical observations. One factor contributing to this degradation is ``dome seeing'', the reduction in image quality due to variations in the index of refraction within the observatory dome. Addressing this challenge, we introduce a novel setup, DIMSUM (Differential Image Motion Sensor Using Multisources), which offers simple installation and provides direct characterization of local index-of-refraction variations. This is achieved by measuring differential image motion using strobed imaging that effectively ``freezes'' the atmosphere, aligning our captured images with the timescale of thermal fluctuations and thereby giving a more accurate representation of dome seeing effects. Our apparatus has been installed within the Auxiliary Telescope of the Vera C. Rubin Observatory. Early results from our setup are encouraging. Not only do we observe a correlation between the characteristic differential image motion (DIM) values and local temperature fluctuations (a leading cause of variations in the index of refraction), but the data also hint at the potential of DIM measurements to characterize dome seeing with greater precision in subsequent tests. Our preliminary findings underscore the potential of DIMSUM as a powerful tool for enhancing image quality in ground-based astronomical observations. Further refinement and data collection will likely solidify its place as a useful component for managing dome seeing in major observatories like the Vera C. Rubin Observatory.
Astronomical data reduction is usually done with processing pipelines that consist of a series of individual processing steps that can be executed stand-alone. These processing steps are strung together into workflows and fed with data to address a particular processing goal. In this paper, we propose a data processing system that automatically derives processing workflows for different use cases from a single specification of a cascade of processing steps. The system works by using formalized descriptions of data processing pipelines that specify the input and output of each processing step. Inputs can be existing data or the output of a previous step. Rules to select the most appropriate input data are directly attached to the description. A version of the proposed system has been implemented as the ESO Data Processing System (EDPS) in the Python language. The specification of processing cascades and data organisation rules uses a restrictive set of Python classes, attributes, and functions. The EDPS implementation of the proposed system was used to demonstrate that it is possible to derive automatically, from a single specification of a pipeline processing cascade, the workflows that the European Southern Observatory uses for quality control, archive production, and specialized science reduction. The EDPS will replace all data reduction systems with differing workflow specifications currently in use at the European Southern Observatory.
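To make the idea concrete, here is an illustrative sketch, not the actual EDPS API, of how a cascade of formally described steps (each declaring its inputs and output) can be resolved into a workflow for a given target product:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessingStep:
        name: str
        inputs: list   # required inputs: raw data or products of prior steps
        output: str    # name of the product this step creates

    @dataclass
    class Cascade:
        steps: list = field(default_factory=list)

        def workflow_for(self, target):
            """Recursively resolve the ordered steps needed to produce `target`."""
            step = next(s for s in self.steps if s.output == target)
            plan = []
            for inp in step.inputs:
                if any(s.output == inp for s in self.steps):
                    plan += self.workflow_for(inp)
            # Drop duplicates while preserving execution order.
            return list(dict.fromkeys(plan + [step.name]))

    cascade = Cascade([
        ProcessingStep("make_master_bias", ["raw_bias"], "master_bias"),
        ProcessingStep("make_master_flat", ["raw_flat", "master_bias"], "master_flat"),
        ProcessingStep("reduce_science",
                       ["raw_science", "master_bias", "master_flat"],
                       "reduced_science"),
    ])
    print(cascade.workflow_for("reduced_science"))
    # ['make_master_bias', 'make_master_flat', 'reduce_science']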
We describe updates and new elements of the Catalogue of Cometary Orbits and their Dynamical Evolution (CODE) which, in its original 2020 version, was introduced by Kr\'olikowska & Dybczy\'nski (2020). Currently, the CODE Catalogue offers rich sets of orbital solutions for an almost complete sample of Oort spike comets discovered between 1900 and 2021. We often offer several orbital solutions based on different nongravitational force models or different treatments of the observational material. An important novelty is that the `previous' (at the previous perihelion, or at 120,000 au from the Sun in the past) and `next' (at the next perihelion, or at 120,000 au after leaving the planetary zone) orbits are given in two variants: one with the dynamical model restricted to the full Galactic tide (with all individual stars omitted), and a second in which all currently known stellar perturbers are also taken into account. Calculations of the previous and next orbits were performed using the up-to-date StePPeD database of potential stellar perturbers.
We report a study exploring how the use of deep neural networks with astronomical Big Data may help us find and uncover new insights into underlying phenomena: through our experiments towards unsupervised knowledge extraction from astronomical Big Data, we serendipitously found that deep convolutional autoencoders tend to reject telluric lines in stellar spectra. With further experiments, we found that only when the spectra are in the barycentric frame does the network automatically identify the statistical independence between the two components, stellar vs telluric, and reject the latter. We exploit this finding and turn it into a proof-of-concept method for removing telluric lines from stellar spectra in a fully unsupervised fashion: we increase the inter-observation entropy of telluric absorption lines by imposing a random, virtual radial velocity on the observed spectrum. This technique results in a non-standard form of ``whitening'' of the atmospheric components of the spectrum, decorrelating them across multiple observations. We process more than 250,000 spectra from the High Accuracy Radial velocity Planetary Search (HARPS) and, with qualitative and quantitative evaluations against a database of known telluric lines, show that most of the telluric lines are successfully rejected. Our approach, `Stellar Karaoke', needs no prior knowledge of parameters such as observation time, location, or the distribution of atmospheric molecules, and processes each spectrum in milliseconds. We also train and test on Sloan Digital Sky Survey (SDSS) spectra and see a significant performance drop due to the low resolution. We discuss directions for developing tools on top of the introduced method in the future.
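A sketch of the virtual radial velocity trick described above: each observed spectrum is Doppler-shifted by a random velocity before training, which decorrelates telluric lines (fixed in the observer frame) across observations. The velocity range and the toy spectrum are illustrative assumptions.

    import numpy as np

    C_KM_S = 299_792.458  # speed of light in km/s

    def apply_virtual_rv(wave, flux, rng, v_max=200.0):
        """Shift a spectrum by a random velocity in [-v_max, v_max] km/s and
        resample it back onto the original wavelength grid."""
        v = rng.uniform(-v_max, v_max)
        shifted_wave = wave * (1.0 + v / C_KM_S)   # non-relativistic Doppler shift
        return np.interp(wave, shifted_wave, flux), v

    rng = np.random.default_rng(42)
    wave = np.linspace(5000.0, 5050.0, 2048)                        # angstroms
    flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5025.0) / 0.3) ** 2)  # toy line
    augmented, v = apply_virtual_rv(wave, flux, rng)
    print(f"applied virtual RV: {v:.1f} km/s")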
Young $\delta$ Scuti stars have proven to be valuable asteroseismic targets, but obtaining robust uncertainties on their inferred properties is challenging. We aim to quantify the random uncertainties in grid-based modelling of $\delta$ Sct stars. We apply Bayesian inference using nested sampling and a neural network emulator of stellar models, testing our method on both simulated and real stars. Based on the results for simulated stars, we demonstrate that our method can recover plausible posterior probability density estimates while accounting for both the random uncertainty from the observations and that from the neural network emulation. We find that the posterior distributions of the fundamental parameters can be significantly non-Gaussian and multi-modal, and can have strong covariances. We conclude that our method reliably estimates the random uncertainty in the modelling of $\delta$ Sct stars and paves the way for the investigation and quantification of the systematic uncertainty.
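A minimal sketch of such an inference loop, with a toy polynomial "emulator" standing in for the paper's neural network and dynesty as one possible nested sampler (the paper's actual tooling and observables may differ):

    import numpy as np
    from dynesty import NestedSampler  # pip install dynesty

    obs = np.array([15.0, 1.8])        # toy observed quantities
    obs_err = np.array([0.5, 0.1])

    def emulate(theta):
        """Stand-in emulator: maps (mass, age) to model observables."""
        mass, age = theta
        return np.array([10.0 * mass + 0.5 * age, mass + 0.2 * age])

    def loglike(theta):
        resid = (emulate(theta) - obs) / obs_err
        return -0.5 * np.sum(resid ** 2)

    def prior_transform(u):
        """Map the unit cube to uniform priors: mass in [1, 2], age in [0, 2]."""
        return np.array([1.0 + u[0], 2.0 * u[1]])

    sampler = NestedSampler(loglike, prior_transform, ndim=2, nlive=200)
    sampler.run_nested(print_progress=False)
    print(sampler.results.samples.shape)   # posterior samples (weighted)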
Imaging Air Cherenkov Telescopes (IACTs) are essential to ground-based observations of gamma rays in the GeV to TeV regime. One particular challenge of ground-based gamma-ray astronomy is the effective rejection of the hadronic background. We propose a new deep-learning-based algorithm for classifying images measured with single or multiple IACTs. We interpret the detected images as a collection of triggered sensors that can be represented by graphs and analyzed by graph convolutional networks. For images cleaned of the light from the night sky, this allows for an efficient algorithm design that bypasses the challenge of sparse images faced by deep learning approaches based on computer vision techniques such as convolutional neural networks. We investigate different graph network architectures and find promising performance, with improvements over previous machine-learning and deep-learning-based methods.
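A toy illustration of the graph representation described above: only triggered pixels become nodes, and edges connect each node to its nearest neighbours, sidestepping the sparsity of full camera images. Pixel positions, charges, and the choice of a k-nearest-neighbour graph are all schematic assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    xy = rng.normal(0.0, 1.0, (40, 2))          # positions of triggered pixels
    charge = rng.exponential(1.0, 40)           # per-pixel signal amplitude

    k = 5
    tree = cKDTree(xy)
    _, idx = tree.query(xy, k=k + 1)            # first neighbour is the node itself
    edges = [(i, j) for i in range(len(xy)) for j in idx[i, 1:]]

    # Node features (position + charge) and an edge list, ready for a graph network.
    node_features = np.column_stack([xy, charge])
    print(node_features.shape, len(edges))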
Environmental neutrons are a source of background for rare event searches (e.g., dark matter direct detection and neutrinoless double beta decay experiments) taking place in deep underground laboratories. The overwhelming majority of these neutrons are produced in the cavern walls through the intrinsic radioactivity of the rock and concrete. Their flux and spectrum depend on time and location. Precise knowledge of this background is necessary to devise sufficient shielding and veto mechanisms, improving the sensitivity of neutron-susceptible underground experiments. In this report, we present the design and expected performance of a mobile neutron detector for the LNGS underground laboratory. The detector is based on the capture-gated spectroscopy technique and essentially comprises a stack of plastic scintillator bars wrapped in gadolinium foils. Extensive simulation studies demonstrate that the detector will be capable of measuring ambient neutrons at low flux levels ($\sim$$10^{-6}\,\mathrm{n/cm^2/s}$) at LNGS, where the ambient gamma flux is about five orders of magnitude larger.
Atmospheric characterisation of exoplanets from the ground is an actively growing field of research. In this context we have created the ATMOSPHERIX consortium: a research project aimed at characterising exoplanet atmospheres using ground-based high-resolution spectroscopy. This paper presents the publicly available data analysis pipeline and demonstrates the robustness of the recovered planetary parameters from synthetic data. Simulating planetary transits by injecting synthetic transmission spectra of a hot Jupiter into real SPIRou observations of the non-transiting system Gl 15 A, we show that our pipeline successfully recovers the planetary signal and the input atmospheric parameters. We also introduce a deep learning algorithm to optimise data reduction, which proves to be a reliable alternative to the commonly used principal component analysis. We estimate the level of uncertainties and possible biases when retrieving parameters such as temperature and composition, and hence the level of confidence in the case of retrieval from real data. Finally, we apply our pipeline to two real transits of HD~189733 b observed with SPIRou and obtain results similar to those in the literature. In summary, we have developed a publicly available and robust pipeline for the forthcoming studies of the targets to be observed in the framework of the ATMOSPHERIX consortium, which can easily be adapted to high-resolution instruments other than SPIRou (e.g. VLT-CRIRES, MAROON-X, ELT-ANDES).
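A schematic of the injection test described above: a synthetic planetary transmission signature, Doppler-shifted by the planet's orbital velocity, is multiplied into observed spectra. All wavelengths, depths, and velocities below are illustrative values, not the ATMOSPHERIX configuration.

    import numpy as np

    C_KM_S = 299_792.458

    def inject_transit(wave, flux, model_wave, model_depth, v_planet):
        """Multiply an observed spectrum by a shifted planetary transmission model."""
        shifted = model_wave * (1.0 + v_planet / C_KM_S)
        depth = np.interp(wave, shifted, model_depth, left=0.0, right=0.0)
        return flux * (1.0 - depth)

    wave = np.linspace(16000.0, 16010.0, 1024)   # angstroms, toy near-IR grid
    flux = np.ones_like(wave)                    # flat stand-in for a real spectrum
    model_wave = wave.copy()
    model_depth = 0.01 * np.exp(-0.5 * ((model_wave - 16005.0) / 0.2) ** 2)
    injected = inject_transit(wave, flux, model_wave, model_depth, v_planet=15.0)
    print(f"max injected absorption: {1 - injected.min():.4f}")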
In a companion paper, we introduced a publicly available pipeline to characterise exoplanet atmospheres through high-resolution spectroscopy. In this paper, we use this pipeline to study the biases and degeneracies that arise in the atmospheric characterisation of exoplanets in near-infrared ground-based transmission spectroscopy. We inject synthetic planetary transits into sequences of SPIRou spectra of the well-known M dwarf star Gl 15 A and study the effects of different assumptions on the retrieval. We focus on (i) mass and radius uncertainties, (ii) non-isothermal vertical profiles, and (iii) identification and retrieval of multiple species. We show that the uncertainties on mass and radius should be accounted for in retrievals and that depth-dependent temperature information can be derived from high-resolution transmission spectroscopy data. Finally, we discuss the impact of selecting wavelength orders in the retrieval and the issues that arise when trying to identify a single species in a multi-species atmospheric model. This analysis allows us to better understand the results obtained through transmission spectroscopy and their limitations, in preparation for the analysis of actual SPIRou data.
We investigate the imaging of point sources with a monopole gravitational lens, such as the Solar Gravitational Lens, in the geometric optics limit. We compute the light amplification of the lens used in conjunction with a telescope featuring a circular aperture that is placed in the focal region of the lens, compared to the amount of light collected by the same telescope unaided by a gravitational lens. We recover an averaged point-spread function that is in robust agreement with a wave-theoretical description of the lens and can be used in practical calculations or simulations.
We investigate the utility of a constellation of four satellites in heliocentric orbit, equipped with accurate means to measure intersatellite ranges, round-trip times and phases of signals coherently retransmitted between members of the constellation. Our goal is to reconstruct the measured trace of the gravitational gradient tensor as accurately as possible. Intersatellite ranges alone are not sufficient for its determination, as they do not account for any rotation of the satellite constellation, which introduces fictitious forces and accelerations. However, measuring signal round-trip time differences among the satellites supplies the necessary observables to estimate, and subtract, the effects of rotation. Utilizing, in addition, the approximate distance and direction from the Sun, it is possible to approach an accuracy of $10^{-24}~{\rm s}^{-2}$ for a constellation with typical intersatellite distances of 1,000 km in an orbit with a 1 astronomical unit semi-major axis. This is deemed sufficient to detect the presence of a galileonic modification of the solar gravitational field.
This paper proposes an acceleration effect whereby a local, short-time acceleration produces additional broadening of a spectral line while leaving the central value of the line unaffected. The effect can be considered a local and non-uniform generalization of the Unruh effect. Although the acceleration-induced line broadening is too small to be measured in an ordinary lab setup, it may offer a key concept for gaining a simple and unified perspective on the cosmic acceleration and the radial acceleration discrepancy of rotating galaxies, without introducing any missing energy or matter in the universe. We find that measuring the acceleration of the cosmic expansion by fitting the distance-redshift relation is essentially a measurement of the line or redshift broadening, and that the cosmic-acceleration-induced line broadening also plays a crucial role in the acceleration discrepancy at the outskirts of rotating galaxies. Possible predictions of the effect are also discussed.
In this paper, we investigate the evolution of cosmological perturbations within the context of Bianchi type-I spacetimes. We consider models containing viscous fluids with evolving cosmological ($\Lambda$) and Newtonian gravitational ($G$) parameters. Our primary emphasis is the investigation of how over-densities evolve in the viscous matter content of the Bianchi type-I model. In particular, we investigate the generation and propagation of signals associated with large-scale structures in this setting. We contrast our findings with the predictions of the classic $\Lambda$CDM ($\Lambda$-Cold-Dark-Matter) cosmological model to draw relevant contrasts and insights. Our findings emphasise the need to incorporate viscous fluids into the Bianchi type-I geometry, as well as the dynamical variations of $\Lambda$ and $G$. These factors influence the rate of structure growth in the cosmos as a whole. Thus, our findings shed light on the complex dynamical interplay between viscosity, evolving cosmological parameters, and the growth of large-scale structures in an anisotropic universe.
We conduct a novel study to obtain the initial spin of the primordial black holes created during a first-order phase transition due to delayed false vacuum decay. Remaining within the parameter space consistent with observational bounds, we express the abundance and the initial spin of the primordial black holes as functions of the phase transition parameters. The abundance of the primordial black holes is extremely sensitive to the phase transition parameters. We also find that the initial spin weakly depends on all parameters except the transition temperature.
This article investigates the radial and non-radial geodesic structures of the generalized K-essence Vaidya spacetime. Within the framework of K-essence geometry, it is important to note that the metric is not conformally equivalent to the conventional gravitational metric. This study employs a non-canonical action of the Dirac-Born-Infeld kind. We categorize the generalized K-essence Vaidya mass function into two distinct forms. Both forms of the mass function have been extensively utilized to analyze the radial and non-radial time-like and null geodesics in great detail inside the comoving plane. Indications of the existence of a wormhole can be noted during the extreme phases of spacetime, particularly in relation to black holes and white holes, resembling the Einstein-Rosen bridge. In addition, we have also detected a distinctive indication of the quantum tunneling phenomenon around the central singularity.
A remarkable double copy relation of Einstein gravity to QCD in Regge asymptotics is $\Gamma^{\mu\nu}= \frac12C^\mu C^\nu- \frac12N^\mu N^\nu$, where $\Gamma^{\mu\nu}$ is the gravitational Lipatov vertex in the $2\to 3$ graviton scattering amplitude, $C^\mu$ its Yang-Mills counterpart, and $N^\mu$ the QED bremsstrahlung vertex. In QCD, the Lipatov vertex is a fundamental building block of the BFKL equation describing $2\to N$ scattering of gluons at high energies. Likewise, the gravitational Lipatov vertex is a key ingredient in a 2-D effective field theory framework describing trans-Planckian $2\to N$ graviton scattering. We construct a quantitative correspondence between a semi-classical Yang-Mills framework for radiation in gluon shockwave collisions and its counterpart in general relativity. In particular, we demonstrate the Lipatov double copy in a dilute-dilute approximation corresponding to $R_{S,L}, R_{S,H} \ll b$, with $R_{S,L}$ and $R_{S,H}$ the respective emergent Schwarzschild radii generated in the shockwave collisions and $b$ the impact parameter. We outline extensions of the correspondence developed here to the dilute-dense computation of gravitational wave radiation in the close vicinity of one of the black holes, to the construction of graviton propagators in the shockwave background, and to a renormalization group approach to computing $2\to N$ amplitudes that incorporates graviton reggeization and coherent graviton multiple scattering.
We study pure $D$-dimensional Einstein gravity in spacetimes with a generic null boundary. We focus on the symplectic form of the solution phase space, which comprises a $2D$-dimensional boundary part and a $2(D(D-3)/2+1)$-dimensional bulk part. The symplectic form is the sum of the bulk and boundary parts, obtained through integration over a codimension-1 surface (the null boundary) and a codimension-2 spatial section of it, respectively. Notably, while the total symplectic form is a closed 2-form over the solution phase space, neither the boundary nor the bulk symplectic form is closed, due to the symplectic flux of the bulk modes passing through the boundary. Furthermore, we demonstrate that the $D(D-3)/2+1$-dimensional Lagrangian submanifold of the bulk part of the solution phase space has a Carrollian structure, with the metric on the $D(D-3)/2$-dimensional part being the Wheeler-DeWitt metric, and the Carrollian kernel vector corresponding to the outgoing Robinson-Trautman gravitational wave solution.
In this paper, we study linearized gravity in the near-horizon region of the Schwarzschild black hole in four-dimensional spacetime. Under the Newman-Unti gauge, we derive the most general near-horizon symmetry and solution space without any near-horizon fall-off condition. There are four towers of surface charges, generic functions on the horizon, associated with the near-horizon symmetry. With suitable near-horizon fall-off conditions, we reveal a soft graviton theorem from the Ward identity of the near-horizon supertranslation in both coordinate space and momentum space.
Generalized entropy, which has recently been proposed, puts all the known and apparently different entropies, such as the Tsallis, R\'{e}nyi, Barrow, Kaniadakis, Sharma-Mittal, and loop quantum gravity entropies, under a single umbrella. However, the microscopic origin of such generalized entropy, as well as its relation to thermodynamic systems, is not clear. In the present work, we provide a microscopic thermodynamic explanation of the generalized entropies from the canonical and grand-canonical ensembles. It turns out that in both the canonical and grand-canonical descriptions, the generalized entropies can be interpreted as the statistical ensemble average of a series of microscopic quantities given by powers $\left(-k\ln{\rho}\right)^n$ (with $n$ a positive integer and $\rho$ the phase space density of the respective ensemble), along with a term representing the fluctuation of the Hamiltonian and of the particle number of the system under consideration (in the case of the canonical ensemble, the fluctuation in the particle number vanishes).
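In formulas, the statement above reads schematically (directly transcribing the abstract, with $c_n$ denoting unspecified expansion coefficients):

$$ S_{\rm gen} = \sum_{n} c_n \left\langle \left(-k \ln \rho\right)^{n} \right\rangle + \big(\text{fluctuations of } H \text{ and } N\big), $$

where the angle brackets denote the ensemble average, and the fluctuation term in the particle number $N$ drops out in the canonical ensemble.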
We determine hidden conformal symmetries behind the evolution equations of black hole perturbations in a vector-tensor theory of gravity. Such hidden symmetries are valid everywhere in the exterior region of a spherically symmetric, asymptotically flat black hole geometry. They allow us to factorize the second-order operators controlling the black hole perturbations into a product of two commuting first-order operators. As a consequence, we are able to analytically determine the most general time-dependent solutions of the black hole perturbation equations. We focus on solutions belonging to a highest-weight representation of a conformal symmetry, showing that they correspond to quasi-bound states with ingoing behaviour at the black hole horizon and exponential decay at spatial infinity. Their time dependence is characterized by purely imaginary frequencies, with imaginary parts separated by integers, like the overtones of quasi-normal modes in General Relativity.
We propose high-frequency gravitational wave (GW) detectors based on Rydberg atoms. Rydberg atoms are sensitive detectors of electric fields. By setting up a constant magnetic field, a weak electric field is generated upon the arrival of GWs. The weak electric field signal is then detected via electromagnetically induced transparency (EIT) in the system of Rydberg atoms. Recently, the sensitivity of Rydberg atoms has been further improved by combining them with a superheterodyne detection method. Hence, even the weak signal generated by the GWs turns out to be detectable. We calculate the amplitude of the Rabi frequency of the Rydberg atoms induced by the GWs and show that the sensitivity of the Rydberg atoms becomes maximal when the size of the Rydberg atoms is close to the wavelength of the GWs. As an example, we evaluate the sensitivity of a GW detector with Rubidium Rydberg atoms and find that the detector can probe GWs with a frequency of 26.4 GHz and an amplitude around $10^{-20}$. We argue that the sensitivity can be further enhanced by exploiting entangled atoms.
How does one formalize the structure of structures necessary for the foundations of physics? This work is an attempt at conceptualizing the metaphysics of pregeometric structures, upon which new and existing notions of quantum geometry may find a foundation. We discuss the philosophy of pregeometric structures due to Wheeler, Leibniz as well as modern manifestations in topos theory. We draw attention to evidence suggesting that the framework of formal language, in particular, homotopy type theory, provides the conceptual building blocks for a theory of pregeometry. This work is largely a synthesis of ideas that serve as a precursor for conceptualizing the notion of space in physical theories. In particular, the approach we espouse is based on a constructivist philosophy, wherein ``structureless structures'' are syntactic types realizing formal proofs and programs. Spaces and algebras relevant to physical theories are modeled as type-theoretic routines constructed from compositional rules of a formal language. This offers the remarkable possibility of taxonomizing distinct notions of geometry using a common theoretical framework. In particular, this perspective addresses the crucial issue of how spatiality may be realized in models that link formal computation to physics, such as the Wolfram model.
Based on the recent work [1,2], we formulate the first and second laws of stochastic thermodynamics in the framework of general relativity. These laws are established for a charged Brownian particle moving in a heat reservoir and subject to an external electromagnetic field in a generic stationary spacetime background; in order to maintain general covariance, they are presented respectively in terms of the divergences of the energy current and the entropy density current. The stability of the equilibrium state is also analyzed.
Black holes are treated as topological defects in the thermodynamic parameter space of Einstein-Gauss-Bonnet gravity theory. The kinetics of thermodynamic defects are studied using Duan's bifurcation theory. In this picture, a first-order phase transition between small/large black hole phases is interpreted as the interchange of winding numbers between the defects as a result of some action at a distance. We observe a first-order phase transition between small/large black holes for $D=5$ Gauss-Bonnet theory similar to Reissner-Nordstr\"om black holes in AdS space. This implies that these black hole solutions share the same topology and phase structure. We have also studied the phase transition of neutral black holes in $D\geq 6$ and found a transition between unstable small and large stable black hole phases similar to the case of neutral black holes in AdS space. It has been conjectured that black holes with similar topological nature exhibit the same thermodynamic properties. Our results strengthen the conjecture by connecting topology to phase transitions.
We study the renormalization of a particular sector of Horndeski theory. We focus on the nonminimal coupling of a scalar field to the Gauss-Bonnet term through an arbitrary function of the former plus a kinetic coupling to the Einstein tensor. In the asymptotically AdS sector of the theory, we perform a near-boundary expansion of the fields and we work out the asymptotic form of the action and its variation. By assuming a power expansion of the scalar coupling function and the Gauss-Bonnet term, we find specific conditions on their coefficients such that the action and charges are finite. To accomplish the latter, a finite set of intrinsic boundary terms has to be added. If the nonminimal kinetic coupling is absent, the trace of the holographic stress-energy tensor cannot be zero while the dual CFT remains unitary as the scalar mass lies outside the Breitenlohner-Freedman bound. However, if one considers the kinetic coupling to the Einstein tensor, we find that its contribution allows one to recover the unitarity of the dual CFT, motivating the introduction of that term from a holographic viewpoint.
A neutron star in an inspiraling binary system is tidally deformed by its companion, and the effect leaves a measurable imprint on the emitted gravitational waves. While the tidal interaction falls within the regime of static tides during the early stages of inspiral, a regime of dynamical tides takes over in the later stages. The description of dynamical tides found in the literature makes integral use of a spectral representation of the tidal deformation, in which it is expressed as a sum over the star's normal modes of vibration. This description is deeply rooted in Newtonian fluid mechanics and gravitation, and we point out that considerable obstacles manifest themselves in an extension to general relativity. To remedy this we propose an alternative, mode-less description of dynamical tides that can be formulated in both Newtonian and relativistic mechanics. Our description is based on a time-derivative expansion of the tidal dynamics. The tidal deformation is characterized by two sets of Love numbers: the static Love numbers $k_\ell$ and the dynamic Love numbers $\ddot{k}_\ell$. These are computed here for polytropic stellar models in both Newtonian gravity and general relativity. The time-derivative expansion of the tidal dynamics seems to preclude any attempt to capture an approach to resonance, which occurs when the frequency of the tidal field becomes equal to a normal-mode frequency. To overcome this limitation we propose a pragmatic extension of the time-derivative expansion which does capture an approach to resonance. We demonstrate that with this extension, our formulation of dynamical tides should be just as accurate as the $f$-mode truncation of the mode representation, in which the sum over modes is truncated to a single term involving the star's fundamental mode of vibration.
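Schematically, and with dimensionful prefactors involving the stellar mass and radius suppressed (so this illustrates the structure rather than the paper's exact normalization), the time-derivative expansion characterizes the induced tidal response as

$$ Q_\ell(t) \sim -k_\ell\,\mathcal{E}_\ell(t) - \ddot{k}_\ell\,\ddot{\mathcal{E}}_\ell(t) + \cdots, $$

so the static Love numbers $k_\ell$ multiply the tidal field $\mathcal{E}_\ell$ itself, while the dynamic Love numbers $\ddot{k}_\ell$ multiply its second time derivative.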
Using effective field theory methods, we derive the Carrollian analog of the geodesic action. We find that it contains both `electric' and `magnetic' contributions that are in general coupled to each other. The equations of motion descending from this action are the Carrollian pendant of geodesics, allowing surprisingly rich dynamics. As an example, we derive Carrollian geodesics on a Carroll-Schwarzschild background and discover an effective potential similar to the one appearing in geodesics on Schwarzschild backgrounds. However, the Newton term in the potential turns out to depend on the Carroll particle's energy. As a consequence, there is only one circular orbit localized at the Carroll extremal surface, and this orbit is unstable. For large impact parameters, the deflection angle is half the value of the general relativistic light-bending result. For impact parameters slightly bigger than the Schwarzschild radius, orbits wind around the Carroll extremal surface. For small impact parameters, geodesics get reflected by the Carroll black hole, which acts as a perfect mirror.
Adopting an intrinsic Carrollian viewpoint, we show that the generic Carrollian scalar field action is a combination of electric and magnetic actions, found in the literature by taking the Carrollian limit of the relativistic scalar field. This leads to non-trivial dynamics: there is motion in Carrollian physics.
Here we continue studying the Wahlquist metric. We know that the wave equation written for a zero-mass scalar particle in the background of this metric gives Heun-type solutions. To be able to use the existing literature on Heun functions, we try to put our wave equation into the standard form for these functions. Then we calculate the reflection coefficient of a wave coming from infinity and scattered at the center using this formalism. In this new version, we give an alternative derivation of the reflection coefficient formulae.
We present the first examples in black hole thermodynamics of multicritical phase transitions, in which more than three distinct black hole phases merge at a critical point. Working in the context of non-linear electrodynamics, we explicitly present examples of black hole quadruple and quintuple points, and demonstrate how $n$-tuple critical points can be obtained. Our results indicate that black holes can have multiple phases beyond the three types observed so far, resembling the behaviour of multicomponent chemical systems. We discuss the interpretation of our results in the context of the Gibbs Phase Rule.
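As a reminder of the chemical analogy invoked here (a textbook fact, not a result of the paper): for an ordinary system with $C$ components and $P$ coexisting phases, the Gibbs phase rule bounds the number of independent intensive variables by

$$ F = C - P + 2, $$

so an $n$-tuple point requires a correspondingly large number of thermodynamic degrees of freedom, which is the sense in which these multicritical black holes resemble multicomponent chemical systems.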
We employ Verlinde's entropic force scenario to extract modified Friedmann equations by taking into account the zero-point-length correction to the gravitational potential. Starting from the modified gravitational potential due to the zero-point length, we first find the logarithmic corrections to the entropy expression and then derive the modified Friedmann equations. Interestingly enough, we observe that the corrected Friedmann equations are similar to the Friedmann equations in the braneworld scenario. In addition, from the corrected Friedmann equations, we point out a possible connection to the GUP principle, which might have implications for the Hubble tension. To this end, we discuss the evolution of the scale factor under the effect of the zero-point length. Finally, bearing in mind that the minimal length is of the Planck order, we obtain a bouncing universe, with a critical density and a minimal scale factor of the order of the Planck length.
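For context, the zero-point-length corrected potential commonly adopted in this line of work (our guess at the paper's starting point, based on the standard form in the literature) reads

$$ \phi(r) = -\frac{GM}{\sqrt{r^2 + l_0^2}}, $$

where $l_0$ is the zero-point length, expected to be of the order of the Planck length; it reduces to the Newtonian potential for $r \gg l_0$ while remaining finite as $r \to 0$.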
Allowing for the possibility of extra dimensions, there are two paradigms: either the extra dimensions are hidden from observations by being compact and small as in Kaluza-Klein scenarios, or the extra dimensions are large/non-compact and undetectable due to a large warping as in the Randall-Sundrum scenario. In the latter case, the five-dimensional background has a large curvature, and Isaacson's construction of the gravitational energy-momentum tensor, which relies on the assumption that the wavelength of the metric fluctuations is much smaller than the curvature length of the background spacetime, cannot be used. In this paper, we construct the gravitational energy-momentum tensor in a strongly curved background such as Randall-Sundrum. We perform a scalar-vector-tensor decomposition of the metric fluctuations with respect to the $SO(1,3)$ background isometry and construct the covariantly-conserved gravitational energy-momentum tensor out of the gauge-invariant metric fluctuations. We give a formula for the power radiated by gravitational waves and verify it in known cases. In using the gauge-invariant metric fluctuations to construct the gravitational energy-momentum tensor we follow previous work done in cosmology. Our framework has applicability beyond the Randall-Sundrum model.
Kaniadakis entropy is a one-parameter deformation of the classical Boltzmann-Gibbs-Shannon entropy, arising from a self-consistent relativistic statistical theory. Assuming a Kaniadakis-type generalization of the entropy associated with the apparent horizon of the Friedmann-Robertson-Walker (FRW) Universe and using the gravity-thermodynamics conjecture, a new cosmological scenario is obtained based on the modified Friedmann equations. By employing these modified equations, we analyze slow-roll inflation, driven by a scalar field with a power-law potential, at the early stages of the Universe. We explore the phenomenological consistency of this model by computing the scalar spectral index and the tensor-to-scalar ratio. Comparison with the latest Planck data allows us to constrain the Kaniadakis parameter to $\kappa\lesssim\mathcal{O}(10^{-13}\div10^{-12})$, which is discussed in relation to other observational bounds in the literature. We also disclose the effects of the Kaniadakis correction term on the growth of perturbations at the early stages of the Universe by employing the spherically symmetric collapse formalism in the linear regime of density perturbations. We find that the profile of the density contrast is non-trivially affected in this scenario. Interestingly enough, we observe that increasing the Kaniadakis parameter $\kappa$ corresponds to a faster growth of perturbations in a Universe governed by the corrected Friedmann equations. Finally, we comment on the consistency of the primordial power spectrum of scalar perturbations with the best fit provided by Planck.
The holographic gauge/gravity duality provides an explicit reduction of quantum field theory (QFT) calculations in the semi-classical large-$N$ limit to sets of `gravitational' differential equations whose analysis can reveal all details of the spectra of thermal QFT correlators. We argue that in certain cases, a complete reconstruction of the spectrum and of the corresponding correlator is possible from only the knowledge of an infinite, discrete set of pole-skipping points traversed by a single (hydrodynamic) mode computed in a series expansion in an inverse number of spacetime dimensions. Conceptually, this reduces the computation of a QFT correlator spectrum to performing a set of purely algebraic manipulations. With the help of the pole-skipping analysis, we also uncover a novel structure underpinning the coefficients that enter the hydrodynamic dispersion relations.
The traditional presentation of Unimodular Gravity (UG) consists in stating that it is an alternative theory of gravity that restricts the generic diffeomorphism invariance of General Relativity. In particular, as often encountered in the literature, unlike General Relativity, Unimodular Gravity is invariant solely under volume-preserving diffeomorphisms. That characterization of UG has led to some confusion and incorrect statements in various treatments of the subject. For instance, it is sometimes claimed (mistakenly) that only spacetime metrics such that $|\det g_{\mu \nu}| = 1$ can be considered valid solutions of the theory. Additionally, that same (incorrect) statement is often invoked to argue that some particular gauges (e.g. the Newtonian or synchronous gauge) are not allowed when dealing with cosmological perturbation theory in UG. The present article is devoted to clarifying those and other misconceptions regarding the notion of diffeomorphism invariance in general, and its usage in the context of UG in particular.
We explore the duality invariance of the Maxwell and linearized Einstein-Hilbert actions on a non-rotating black hole background. On shell these symmetries are electric-magnetic duality and Chandrasekhar duality, respectively. Off shell they lead to conserved quantities; we demonstrate that one of the consequences of these conservation laws is that even- and odd-parity metric perturbations have equal Love numbers. Along the way we derive an action principle for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities in electromagnetism.
We study the EFT of a spinning compact object and show that, with appropriate gauge fixing, computations become amenable to worldline quantum field theory techniques. We use the resulting action to compute Compton and one-loop scattering amplitudes at fourth order in spin. By matching these amplitudes to solutions of the Teukolsky equations, we fix the values of the Wilson coefficients appearing in the EFT such that it reproduces Kerr black hole scattering. We keep track of the spin supplementary condition throughout our computations and discuss alternative ways to ensure its preservation.
A combined explanation of the deviations from Standard Model predictions in $h\to e\tau$, $h\to \mu\tau$, $b\to s \ell^+ \ell^-$, the $W$ mass and $R({D^{(*)}})$ as well as the excess in $t\to bH^+(130\,{\rm GeV})\to b\overline{b}c$ is proposed: We show that a Two-Higgs-Doublet Model with non-minimal flavour violation can simultaneously explain these hints for new physics without violating the stringent bounds from e.g. $\mu\to e\gamma$, $\tau\to \mu\gamma$, $B_s-\bar B_s$ mixing, $b\to s\gamma$, low mass di-jet and $pp\to H^+H^-\to \tau^+\tau^-\nu\bar\nu$ searches. Furthermore, a shift in the SM Higgs coupling strength to tau leptons as well as a non-zero $t\to hc$ rate is predicted, as preferred by recent measurements. We propose three benchmark points providing such a simultaneous explanation and calculate their predictions, including collider signatures which can be tested with upcoming LHC run-3 data.
We present a subtraction scheme for ultraviolet (UV) divergent, infrared (IR) safe scalar Feynman integrals in dimensional regularization with any number of scales. This is done by the introduction of $u$-variables, which are a suitable generalization of dihedral coordinates on the open string moduli space to Feynman integrals. The subtraction scheme furnishes subtraction terms which are products of lower loop Feynman integrals deformed by order $\epsilon$ powers of $u$-variables and deformations of the degree of divergence. The result is a canonical and algorithmic prescription to express the Feynman integral as a sum of convergent integrals dressed with inverse powers of $\epsilon$.
In this paper, we set up the numerical S-matrix bootstrap using a basis inspired by the crossing symmetric dispersion relation (CSDR). As motivation for examining the local version of the CSDR, we derive a new crossing symmetric, 3-channels-plus-contact-terms representation of the Virasoro-Shapiro amplitude in string theory that converges everywhere except at the poles. We then focus on gapped theories and give novel analytic and semi-analytic derivations of several bounds on low-energy data. We examine the high-energy behaviour of the experimentally measurable $\rho$-parameter, introduced by Khuri and Kinoshita and defined as the ratio of the real to the imaginary part of the amplitude in the forward limit. Contrary to expectations, we find numerical evidence that there could be multiple changes in the sign of this ratio before it asymptotes at high energies. We compare our approach with other existing numerical methods and find agreement, with improved convergence.
In high-energy heavy-ion collisions a nearly perfect fluid, the so-called strongly coupled quark-gluon plasma, forms. After the short period of thermalisation, the evolution of this medium can be described by the laws of relativistic hydrodynamics. The time evolution of the quark-gluon plasma can be understood through direct photon spectra measurements, which are sensitive to the entire period between the thermalisation and the freeze-out of the medium. I present a new analytic formula that describes the thermal photon radiation, derived from an exact and finite solution of relativistic hydrodynamics with an accelerating velocity field. I then compare my calculations to the most recent nonprompt direct-photon spectrum for $Au+Au$ collisions at $\sqrt{s_{NN}}=200$ GeV. I find convincing agreement between the model and the data, which allows an estimate of the initial temperature in the center of the fireball.
Charged current interactions of neutrinos inside the Earth can result in secondary muons and $\tau$-leptons which are detectable by a large swath of existing and planned neutrino experiments through a wide variety of event topologies. Consideration of such events can improve detector performance and provide unique signatures which help with event reconstruction. In this work, we describe NuLeptonSim, a propagation tool for neutrinos and charged leptons that builds on the fast NuTauSim framework. NuLeptonSim considers energy losses of charged leptons, modelled either continuously for performance or stochastically for accuracy, as well as interaction models for all flavors of neutrinos, including the Glashow resonance. We demonstrate the results of including these effects on the Earth emergence probability of various charged leptons from different flavors of primary neutrino, and their corresponding energy distributions. We find that the emergence probability of muons can be higher than that of taus for energies below 100 PeV, whether from a primary muon or $\tau$ neutrino, and that the Glashow resonance contributes a surplus of emerging leptons near the resonant energy.
The string+${}^3P_0$ model of spin-dependent hadronization is applied to the fragmentation of a string stretched between a quark and an antiquark with entangled spin states, assumed to be produced in the $e^+e^-$ annihilation process. The model accounts systematically for the spin correlations in the hadronization chain and is formulated as a recursive recipe suitable for implementation in a Monte Carlo event generator. The recipe is applied to the production of two back-to-back pseudoscalar mesons in $e^+e^-$ annihilation, and it is shown to reproduce the form of the azimuthal distribution of the hadrons expected in QCD.
Chiral perturbation theory predicts the chiral anomaly to induce a so-called Chiral Soliton Lattice at sufficiently large magnetic fields and baryon chemical potentials. This state breaks translational invariance in the direction of the magnetic field and was shown to be unstable with respect to charged pion condensation. Improving on previous work by considering a realistic pion mass, we employ methods from type-II superconductivity and construct a three-dimensional pion (and baryon) crystal perturbatively, close to the instability curve of the Chiral Soliton Lattice. We find an analogue of the usual type-I/type-II transition in superconductivity: Along the instability curve for magnetic fields $eB > 0.12\, {\rm GeV}^2$ and chemical potentials $\mu< 910\, {\rm MeV}$, this crystal can continuously supersede the Chiral Soliton Lattice. For smaller magnetic fields the instability curve must be preceded by a discontinuous transition.
We collect spectra extracted in the $I=\ell=1$ $\pi\pi$ sector provided by various lattice QCD collaborations and study the $m_\pi$ dependence of $\rho$-meson properties using Hamiltonian Effective Field Theory (HEFT). In this unified analysis, the coupling constant and cutoff mass characterizing the $\rho - \pi \pi$ vertex are both found to depend only weakly on $m_\pi$, while the mass of the bare $\rho$, associated with a simple quark-model state, shows a linear dependence on $m_\pi^2$. Both the lattice results and experimental data are described well. Drawing on HEFT's ability to describe the pion mass dependence of resonances in a single formalism, we map the phase shift as a function of $m_\pi$, and expose interesting discrepancies in contemporary lattice QCD results.
Strong electromagnetic fields are inevitably created in high-energy heavy-ion collisions. Although these fields may become weak compared to the energy scales of the strong interaction by the time the quark-gluon plasma (QGP) starts to evolve hydrodynamically, they are potentially important for some electromagnetic probes. In this work, we focus on the dissipative corrections to the QGP due to the presence of a weak external magnetic field, and accordingly calculate the induced photon radiation in the framework of viscous hydrodynamics. With event-by-event hydrodynamic simulations, the experimentally measured direct-photon elliptic flow can be well reproduced. Correspondingly, the direct-photon elliptic flow implies a magnetic field strength around $0.1\,m_\pi^2 \sim 10^{16}$ G. This is a weak field by the standards of heavy-ion physics, compatible with theoretical predictions, yet it is still an ultra-strong magnetic field in nature.
Using lattice simulations, we analyze the influence of uniform rotation on the equation of state of gluodynamics. For sufficiently slow rotation, the free energy of the system can be expanded in a series of powers of the angular velocity. We calculate the moment of inertia, given by the quadratic coefficient of this expansion, using both analytic continuation and derivative methods, which yield results in good agreement. We find that the moment of inertia unexpectedly takes a negative value below the ``supervortical temperature'' $T_s = 1.50(10) T_c$, vanishes at $T = T_s$, and becomes positive at higher temperatures. We discuss how our results are related to the scale anomaly and the magnetic gluon condensate. We point out that the negativity of the moment of inertia is in qualitative agreement with our previous lattice calculations, which indicate that rigid rotation increases the critical temperatures in gluodynamics and QCD.
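Operationally, extracting a moment of inertia as "the quadratic coefficient of this expansion" amounts to fitting $F(\Omega) \simeq F_0 - \tfrac{1}{2} I \Omega^2$ at small $\Omega$. A minimal sketch on synthetic data follows; the sign convention and all numbers are assumptions for illustration, not lattice values.

```python
import numpy as np

# Fit a slow-rotation free-energy expansion F(w) ~ F0 - (1/2) I w^2
# on synthetic "measurements"; I < 0 mimics the supervortical regime.
rng = np.random.default_rng(0)
w = np.linspace(0.0, 0.1, 11)               # angular velocities (toy units)
I_true, F0 = -0.3, 1.0                      # assumed values for the demo
F = F0 - 0.5 * I_true * w**2 + rng.normal(0.0, 1e-6, w.size)

slope, intercept = np.polyfit(w**2, F, 1)   # F is linear in w^2
I_fit = -2.0 * slope
print(f"fitted I = {I_fit:+.4f}  (true {I_true:+.4f})")
```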
We perform an isospin analysis of CKM-favored two-body weak hadronic decays of bottom mesons. Obtaining the factorizable contributions from the spectator-quark model for $N_c=3$, we identify a systematic pattern among the isospin reduced amplitudes of the nonfactorizable terms in $B$ decay modes. This pattern allows us to derive a generic formula that assists in predicting the branching fractions of these decays. Inspired by this observation, we extend our analysis to $p$-wave meson emitting decays of the $B$ meson, $B \to PA/PT/PS$, which have a similar isospin structure, and make predictions for decays where experimental measurements are not yet available.
We study the role of renormalon cancellation schemes and perturbative scale choices in extractions of the strong coupling constant $\alpha_s(m_Z)$ and the leading non-perturbative shift parameter $\Omega_1$ from resummed predictions of the $e^+e^-$ event shape thrust. We calculate the thrust distribution to N$^{3}$LL$^\prime$ resummed accuracy in Soft-Collinear Effective Theory (SCET) matched to the fixed-order $\mathcal{O}(\alpha_s^2)$ prediction, and perform a new high-statistics computation of the $\mathcal{O}(\alpha_s^3)$ matching in EERAD3, although we do not include the latter in our final $\alpha_s$ fits due to some observed systematics that require further investigation. We are primarily interested in testing the phenomenological impact of varying among three renormalon cancellation schemes and two sets of perturbative scale profile choices. We then perform a global fit to available data spanning center-of-mass energies between 35 and 207 GeV in each scenario. Relevant subsets of our results are consistent with prior SCET-based extractions of $\alpha_s(m_Z)$, but we are also led to a number of novel observations. Notably, we find that the combined effect of altering the renormalon cancellation scheme and profile parameters can lead to few-percent-level impacts on the extracted values in the $\alpha_s-\Omega_1$ plane, indicating a potentially important systematic theory uncertainty that should be accounted for. We also observe that fits performed over windows dominated by dijet events are typically of higher quality than those that extend into the far tails of the distributions, possibly motivating future fits focused more heavily on this region. Finally, we discuss how different estimates of the three-loop soft matching coefficient $c_{\tilde{S}}^3$ can also lead to measurable changes in the fitted $\lbrace \alpha_s, \Omega_1 \rbrace$ values.
We explore the sensitivity of future hadron collider measurements in constraining the fermionic Higgs portal, focusing on the case where the new fermions are not accessible in exotic Higgs decays. These portals arise in neutral naturalness and dark matter models and are very challenging to test at colliders. We study the reach of the high-luminosity option of the Large Hadron Collider (HL-LHC), the high-energy upgrade of the LHC (HE-LHC) and a Future Circular Collider (FCC) in off-shell Higgs and double-Higgs production. Interestingly, quantum enhanced indirect probes provide the best sensitivity. We then compare these constraints to the limits one expects to find from other Higgs probes. It is shown that the studied Higgs processes provide complementary constraints, implying that a multi-prong approach will be needed to exploit the full potential of the HL-LHC, HE-LHC and FCC in testing fermionic Higgs-portal interactions. This article provides a roadmap for such a multifaceted strategy.
Several experiments have measured deviations in $B\to D^{(*)}$ semileptonic decays that point to new physics at the TeV scale violating lepton flavor universality. A scalar leptoquark $S_1$ is known to be able to solve this anomaly by modifying $b\to c\tau\bar\nu$. In the context of composite Higgs models, we consider a theory containing $H$ and $S_1$ as Nambu-Goldstone bosons (NGBs) of a new strongly interacting sector, with ordinary resonances at a scale ${\cal O}(10)$ TeV. Assuming anarchic partial compositeness of the Standard Model (SM) fermions, we calculate the potential of the NGBs, which is dominated by the fermions of the third generation, compute $R_{D^{(*)}}$, and estimate the corrections to flavor observables due to the presence of $S_1$. We find that the SM spectrum and $m_{S_1}\sim$ TeV can be obtained with an NGB decay constant of order $\sim 5$ TeV, simultaneously reproducing the deviation in $R_{D^{(*)}}$. We find that the bounds on the flavor observables $B_{K^{(*)}\nu\nu}$, $g_\tau^W$, BR$(\tau\to\mu\gamma)$ and $\Delta m_{B_s}$ are saturated, with the first two requiring a coupling between resonances $g_*\lesssim 2$, whereas the third demands $m_{S_1}\gtrsim 1.7$ TeV, up to corrections of ${\cal O}(1)$.
The problem of normalisation of the modular forms in modular invariant lepton and quark flavour models is discussed. Modular invariant normalisations of the modular forms are proposed.
Understanding the transitions of nucleons into various resonance structures through electromagnetic interactions plays a pivotal role in advancing our comprehension of the strong interaction within the domain of quark confinement. Furthermore, precise insights into the elastic and resonance structures of nucleons are indispensable for deciphering the physics behind experimental neutrino-nucleus scattering cross-section data, which remain theoretically challenging even at the level of neutrino-nucleon interactions, whose profound understanding is imperative for neutrino oscillation experiments. One promising avenue involves the direct evaluation of lepton-nucleon scattering cross sections across the quasi-elastic, resonance, shallow-inelastic, and deep-inelastic regions, which can be achieved through the hadronic tensor formalism in lattice QCD. In this work, we present the determination of the nucleon's Sachs electric form factor using the hadronic tensor formalism and verify that it is consistent with that from the conventional three-point function calculation. We additionally obtain the transition form factor from the nucleon to its first radially excited state within a finite volume. Consequently, we identify the latter with the nucleon-to-Roper transition form factor $G_E^*(Q^2)$, determine the corresponding longitudinal helicity amplitude $S_{1/2}(Q^2)$, and compare our findings with experimental measurements, for the first time using the hadronic tensor formalism. The limitations and systematic improvements of the approach are also discussed.
We propose a new way of understanding how chiral symmetry is realized in the high-temperature phase of QCD. Based on the finding that a simple free instanton gas precisely describes the details of the lowest part of the spectrum of the lattice overlap Dirac operator, we propose an instanton-based random matrix model of QCD with dynamical quarks. Simulations of this model reveal that even for small quark mass the Dirac spectral density has a singularity at the origin, caused by a dilute gas of free instantons. Even though the interaction mediated by light dynamical quarks creates small instanton-anti-instanton molecules, these do not influence the singular part of the spectrum, and this singular part is shown to dominate Banks-Casher-type sums in the chiral limit. By generalizing the Banks-Casher formula to the singular spectrum, we show that in the chiral limit the chiral condensate vanishes if there are at least two massless flavors. We also resolve a long-standing debate by demonstrating that for two massless quark flavors the $U(1)_A$ symmetry remains broken up to arbitrarily high finite temperatures.
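For context, the standard Banks-Casher relation (the non-singular case that the above generalizes) ties the chiral condensate to the Dirac spectral density at the origin,

$$\Sigma \;=\; \lim_{m\to 0}\,\lim_{V\to\infty}\; \pi\,\rho(0)\,,$$

so the fate of the condensate in the chiral limit is controlled entirely by the accumulation of near-zero Dirac modes, which is precisely the part of the spectrum that the dilute instanton gas dominates.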
We estimate the electrical and thermal conductivity of a hadron resonance gas (HRG) in a time-varying magnetic field, and compare with the constant and zero magnetic field cases. Considering the exponential decay of electromagnetic fields with time, a kinetic theory framework provides microscopic expressions for the electrical and thermal conductivity in terms of relaxation and decay times. In the absence of a magnetic field only a single time scale appears, while in the finite magnetic field case the expressions carry two time scales: the relaxation time and the cyclotron period. Estimating the conductivities for HRG matter in the three cases -- zero, constant, and time-varying magnetic fields -- we study the validity of the Wiedemann-Franz law. We notice that in the high-temperature domain the ratio saturates at a particular value, which may be regarded as the Lorenz number of the hadron resonance gas. Relative to these saturation values, the deviation from the Wiedemann-Franz law is quantified in the low-temperature domain. For the first time, the present work sketches this quantitative deviation from the Wiedemann-Franz law for a hadron resonance gas in constant and time-varying magnetic fields.
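The two-time-scale structure can be illustrated with the textbook Drude-type result, in which the longitudinal conductivity is suppressed by the product of the cyclotron frequency and the relaxation time $\tau$. The sketch below is a generic toy, not the paper's HRG expressions; every parameter value is an assumption.

```python
import numpy as np

# Toy Drude magnetotransport: sigma_xx = sigma_0 / (1 + (w_c * tau)^2),
# with w_c = eB/m the cyclotron frequency (unit charge). Generic illustration.
q, m, n = 1.0, 0.14, 0.1      # charge, mass (GeV), density (GeV^3); assumed
tau = 5.0                     # relaxation time (GeV^-1); assumed
sigma0 = n * q**2 * tau / m   # zero-field Drude conductivity

for eB in (0.0, 0.01, 0.05):  # magnetic field strengths, eB in GeV^2
    w_c = eB / m              # cyclotron frequency, the second time scale
    sigma_xx = sigma0 / (1.0 + (w_c * tau) ** 2)
    print(f"eB={eB:5.2f} GeV^2  w_c*tau={w_c*tau:5.2f}  "
          f"sigma_xx/sigma0={sigma_xx/sigma0:.3f}")
```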
The two-Higgs-doublet model with a $U(1)_H$ gauge symmetry (N2HDM-$U(1)$) has several advantages compared to the ``standard'' $Z_2$ version (N2HDM-$Z_2$): it is purely based on gauge symmetries, involves only spontaneous symmetry breaking, and is more predictive because it contains one parameter less in the Higgs potential, which further ensures $CP$ conservation, thereby avoiding the stringent bounds from electric dipole moments. After pointing out that a second, so far unknown version of the N2HDM-$U(1)$ exists, we examine the phenomenological consequences for the Large Hadron Collider (LHC) of the differences in the scalar potentials. In particular, we find that while the N2HDM-$Z_2$ predicts suppressed branching ratios for decays into different Higgs bosons in the case of small scalar mixing (as suggested by Higgs coupling measurements), both versions of the N2HDM-$U(1)$ allow for sizable rates. This is particularly relevant in light of the CMS excess in resonant Higgs-pair production at around $650\,$GeV, with a Standard Model Higgs boson decaying to photons and a new scalar with a mass of $\approx90\,$GeV decaying to bottom quarks (i.e., compatible with the CMS and ATLAS $\gamma\gamma$ excesses at $95\,$GeV and $\approx 670\,$GeV). As we will show, this excess can be addressed within the N2HDM-$U(1)$ in the case of a nonminimal Yukawa sector, predicting an interesting and unavoidable $Z+ b\bar b$ signal and motivating further asymmetric di-Higgs searches at the LHC.
The precise determination of the leptonic $CP$-phase is one of the major goals for future-generation long-baseline experiments. On the other hand, if new physics beyond the Standard Model exists, a robust determination of this $CP$-phase may be a challenge. Moreover, it has been pointed out that, in this scenario, an apparent discrepancy in the $CP$-phase measurement at different experiments may arise. In this work, we investigate the determination of the Dirac $CP$-phase and the atmospheric mixing angle $\theta_{23}$ at several long-baseline configurations: ESSnuSB, T2HKK, and a DUNE-like experiment. We use the nonstandard neutrino interactions (NSI) formalism as a framework. We find that complementarity between ESSnuSB and a DUNE-like experiment will be favorable for obtaining a reliable value of the $CP$-phase within the aforementioned scenario. Moreover, the T2HKK proposal can help to constrain the matter NSI parameters.
We propose a minimal gauged $U(1)_{B-L}$ extension of the minimal supersymmetric Standard Model (MSSM) which resolves the cosmological moduli problem via thermal inflation, and realizes late-time Affleck-Dine leptogenesis so as to generate the right amount of baryon asymmetry at the end of thermal inflation. The present relic density of dark matter can be explained by sneutrinos, MSSM neutralinos, axinos, or axions. Cosmic strings from $U(1)_{B-L}$ breaking are very thick, so the expected stochastic gravitational wave background from cosmic string loops has a spectrum different from that of the conventional Abelian-Higgs model, which would be distinguishable at least at LISA and DECIGO. The characteristic spectrum is due to a flat potential, and may be regarded as a hint of supersymmetry. Combined with the resolution of the moduli problem, the expected signal of gravitational waves constrains the $U(1)_{B-L}$ breaking scale to be $\mathcal{O}(10^{12-13})\,{\rm GeV}$. Interestingly, our model provides a natural possibility for explaining the observed ultra-high-energy cosmic rays, thanks to the fact that the core width of strings in our scenario is very large, allowing a large enhancement of particle emissions from the cusps of string loops. Condensation of the $LH_u$ flat direction inside string cores arises inevitably and can also be the main source of the ultra-high-energy cosmic rays, accompanied by ultra-high-energy lightest supersymmetric particles.
Generation of simulated detector response to collision products is crucial to data analysis in particle physics, but computationally very expensive. One subdetector, the calorimeter, dominates the computational time due to the high granularity of its cells and complexity of the interactions. Generative models can provide more rapid sample production, but currently require significant effort to optimize performance for specific detector geometries, often requiring many models to describe the varying cell sizes and arrangements, without the ability to generalize to other geometries. We develop a $\textit{geometry-aware}$ autoregressive model, which learns how the calorimeter response varies with geometry, and is capable of generating simulated responses to unseen geometries without additional training. The geometry-aware model outperforms a baseline unaware model by over $50\%$ in several metrics such as the Wasserstein distance between the generated and the true distributions of key quantities which summarize the simulated response. A single geometry-aware model could replace the hundreds of generative models currently designed for calorimeter simulation by physicists analyzing data collected at the Large Hadron Collider. This proof-of-concept study motivates the design of a foundational model that will be a crucial tool for the study of future detectors, dramatically reducing the large upfront investment usually needed to develop generative calorimeter models.
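As an illustration of the kind of metric quoted above, the one-dimensional Wasserstein distance between the true and generated distributions of a summary quantity (say, total deposited energy) is directly available in SciPy. The samples below are synthetic placeholders, not calorimeter data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
true_sample = rng.normal(50.0, 5.0, 10_000)  # stand-in for full-simulation truth
generated = rng.normal(50.5, 5.5, 10_000)    # stand-in for generative-model output

# Smaller is better; an improved model would shrink this number.
print(f"Wasserstein distance: {wasserstein_distance(true_sample, generated):.3f}")
```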
We study the inclusive $H_b \to X_s \gamma$ decay with $H_b$ a beauty baryon, in particular $\Lambda_b$, employing an expansion in the heavy quark mass at $\mathcal{O}(m_b^{-3})$ at leading order in $\alpha_s$, keeping the dependence on the hadron spin. For a polarized baryon we compute the distribution $\displaystyle\frac{d^2\Gamma}{dy \, d \cos \theta_P}$, with $y=2E_\gamma/m_b$, $E_\gamma$ the photon energy and $\theta_P$ the angle between the baryon spin vector and the photon momentum in the $H_b$ rest frame. We discuss the correlation between the baryon and photon polarization, and show that effects of physics beyond the Standard Model can modify the photon polarization asymmetry. We also discuss a method to treat the singular terms in the photon energy spectrum obtained from the OPE.
We examine possible experimental signatures that may be exploited to search for stable supermassive particles with electric charges of $O(1)$ in future underground experiments, and in the upcoming JUNO experiment in particular. The telltale signal, providing a unique signature of such particles, would be a correlated sequence of three or more nuclear recoils along a straight line, corresponding to the motion of a non-relativistic ($\beta \lesssim 10^{-2}$) particle that could enter the detector from any direction. We provide some preliminary estimates for the expected event rates.
Leptoquarks are theoretically well motivated and have received increasing attention in recent years, as they can explain several hints of physics beyond the Standard Model. In this article, we calculate the renormalisation group evolution of models with scalar leptoquarks. We compute the anomalous dimensions for all couplings (gauge, Yukawa, Higgs and leptoquark interactions) of the most general Lagrangian at the two-loop level and the corresponding threshold corrections at one loop. The most relevant analytic results are presented in the Appendix, while the notebook containing the full expressions can be downloaded at https://github.com/SumitBanikGit/SLQ-RG. In our phenomenological analysis, we consider some exemplary cases with a focus on gauge and Yukawa coupling unification.
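The generic machinery behind such a calculation is the integration of coupled RG equations $dg_i/dt = \beta_i(g)$. The following is a minimal sketch (one-loop Standard Model gauge couplings only, nothing like the full two-loop leptoquark system of the paper); the initial values are rough numbers near the $Z$ mass and are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop running dg_i/dt = b_i g_i^3 / (16 pi^2), with t = ln(mu / mu_0).
B = np.array([41/10, -19/6, -7.0])  # one-loop SM coefficients (GUT-normalised g1)

def beta(t, g):
    return B * g**3 / (16.0 * np.pi**2)

g0 = np.array([0.46, 0.65, 1.22])   # rough couplings at mu_0 ~ m_Z (assumed)
sol = solve_ivp(beta, [0.0, np.log(1e16 / 91.2)], g0, rtol=1e-8)
print("g1, g2, g3 at 1e16 GeV:", np.round(sol.y[:, -1], 3))
```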
This work is dedicated to the study of the power expansion in the transverse momentum dependent (TMD) factorization theorem. Each genuine term in this expansion gives rise to a series of kinematic power corrections (KPCs). All terms of this series exhibit the same properties as the leading term and share the same nonperturbative content. Among the various power corrections, KPCs are especially important since they restore charge conservation and frame invariance, which are violated at a fixed power order. I derive and sum a series of KPCs associated with the leading-power term of the TMD factorization theorem. The resulting expression resembles a hadronic tensor computed with free massless quarks while still satisfying a proven factorization statement. Additionally, I provide an explicit check of this novel form of the factorization theorem at next-to-leading order (NLO) and demonstrate the restoration of the frame-invariant argument of the leading-power coefficient function. Numerical estimates show that incorporating the summed KPCs into the cross-section leads to an almost constant shift, which may help to explain the observed challenges in TMD phenomenology.
We calculate the two-point massless QCD correlator of nonlocal (composite) vector quark currents with chains of fermion one-loop radiative corrections inserted into the gluon lines. The correlator depends on the Bjorken fraction $x$ related to the composite current and, in the large-$\beta_0$ approximation, gives the main contributions in each order of perturbation theory. In this approximation, these contributions dominate the endpoint behavior of the leading-twist distribution amplitudes of light mesons in the framework of QCD sum rules. Based on this, we analyze the endpoint behavior of these distribution amplitudes for $\pi$ and longitudinally polarized $\rho^\parallel$ mesons and find inequalities for their moments.
There are strong phenomenological arguments favoring the existence of vector-like leptons and quarks in nature. In spite of extensive studies conducted in search of vector-like quarks, there are only a limited number of experimental studies on vector-like leptons. Moreover, these searches do not include all possible decay modes of vector-like leptons. Therefore, the analyses done so far are incomplete. In this letter, we highlight decay channels that are not covered by different experimental analyses, with a focus on L3, ATLAS, and CMS results. We argue that experimental analyses should be redone considering these shortcomings.
In this work, we attempt to answer the question, "What is the minimal viable renormalizable $SU(5)$ GUT with representations no higher than adjoints?" We find that an $SU(5)$ model with a pair of vectorlike fermions $5_F+\overline{5}_F$, as well as two copies of $15_H$ Higgs fields, is the minimal candidate that accommodates the correct charged fermion and neutrino masses and can also address the matter-antimatter asymmetry of the universe. Our results show that the presented model is highly predictive and will be fully tested by a combination of upcoming proton decay experiments, collider searches, and low-energy experiments in search of flavor violation. Moreover, we also entertain the possibility of adding a pair of vectorlike fermions $10_F+\overline{10}_F$ or $15_F+\overline{15}_F$ (instead of a $5_F+\overline{5}_F$). Our study reveals that the entire parameter space of these two models, even with minimal particle content, cannot be fully probed due to a possibly longer proton lifetime beyond the reach of Hyper-Kamiokande.
We present a new calculation of next-to-leading-order corrections of the strong and electroweak interactions to like-sign W-boson scattering at the Large Hadron Collider, implemented in the Monte Carlo integrator Bonsay. The calculation includes leptonic decays of the $\mathrm{W}$ bosons. It comprises the whole tower of next-to-leading-order contributions to the cross section, which scale like $\alpha_\mathrm{s}^3\alpha^4$, $\alpha_\mathrm{s}^2\alpha^5$, $\alpha_\mathrm{s}\alpha^6$, and $\alpha^7$ in the strong and electroweak couplings $\alpha_\mathrm{s}$ and $\alpha$. We present a detailed survey of numerical results confirming the occurrence of large pure electroweak corrections of the order of $\sim-12\%$ for integrated cross sections and even larger corrections in the high-energy tails of distributions. The electroweak corrections account for the major part of the complete next-to-leading-order correction, which amounts to $15{-}20\%$ in size, depending on the details of the event selection chosen for analysing vector-boson scattering. Moreover, we compare the full next-to-leading-order corrections to approximate results based on the neglect of contributions that are not enhanced by the vector-boson-scattering kinematics (VBS approximation) and on resonance expansions for the $\mathrm{W}$-boson decays (double-pole approximation); the quality of this approximation is good within $\sim 1.5\%$ for integrated cross sections and the dominant parts of the differential distributions. Finally, for the leading-order predictions, we construct different versions of effective vector-boson approximations, which are based on cross-section contributions that are enhanced by collinear emission of $\mathrm{W}$ bosons off the initial-state (anti)quarks; in line with previous findings in the literature, it turns out that the approximative quality is rather limited for applications at the LHC.
We analyze data from the dark matter direct detection experiments PandaX-4T, LUX-ZEPLIN and XENONnT to place bounds on neutrino electromagnetic properties (magnetic moments, millicharges, and charge radii). We also show how these bounds will improve at the future facility DARWIN. In our analyses we implement a more conservative treatment of background uncertainties than usually done in the literature. From the combined analysis of all three experiments we can place very strong bounds on the neutrino magnetic moments and on the neutrino millicharges. We show that even though the bounds on the neutrino charge radii are not very strong from the analysis of current data, DARWIN could provide the first measurement of the electron neutrino charge radius, in agreement with the Standard Model prediction.
Building on our previous work, we study the harmonic coefficients of both inclusive and diffractive azimuthal-angle-dependent lepton-jet correlations at the Hadron-Electron Ring Accelerator and the future Electron-Ion Collider. Numerical calculations of the inclusive and diffractive harmonics, and of the ratio of harmonics in $e+\text{Au}$ and $e+p$, indicate their strong power to discriminate between saturation and non-saturation models. Additionally, we demonstrate that the $t$-dependent diffractive harmonics can serve as novel observables for the nuclear density profile.
A novel observable, the double nuclear modification factor, is proposed to probe simultaneously the initial and final state effects in nucleus-nucleus collisions. An interesting competition between the combinatorial enhancement in the double parton scattering and the suppression due to parton energy loss can be observed in the production rate of two hard particles. In particular, the production of $J/\psi$ mesons in association with a $W$ boson is not suppressed but is enhanced in the region of moderate transverse momenta, unlike the case of unassociated (inclusive) $J/\psi$ production. At the same time, in the region of high enough transverse momenta the nuclear modification factor for associated $J/\psi+W$ production converges to that of unassociated $J/\psi$.
In binary superfluid counterflow systems, vortex nucleation arises as a consequence of hydrodynamic instabilities when the coupling coefficient and counterflow velocity exceed critical values. When dealing with two identical components, one might naturally anticipate that the numbers of vortices generated would remain equal. However, our investigation has unveiled a remarkable phenomenon: in AC counterflow systems, once the coupling coefficient and frequency exceed certain critical values, a surprising symmetry breaking occurs. This results in an asymmetry in the number of vortices in the two components. We establish that this phenomenon represents a novel continuous phase transition, which, as indicated by the phase diagram, is exclusively observable in AC counterflow. We explain this intriguing phenomenon through soliton structures, thus shedding light on the intricate and distinct characteristics of quantum fluid instability and its rich spectrum of phenomena.
Effective field theories endowed with a nontrivial moduli space may break down through several distinct effects as the probed energy scales increase. These may include the appearance of a finite number of new states, or the emergence of an infinite tower of states, as predicted by the Distance Conjecture. Consequently, the moduli space can be partitioned according to which kind of state first breaks down the effective description, and the effective-theory cutoff has to be regarded as a function of the moduli that may abruptly change form across the components of the partition. In this work we characterize such a slicing of the moduli space, induced by the diverse breakdown mechanisms, in a two-fold way. Firstly, employing the recently formulated Tameness Conjecture, we show that the partition of the moduli space so constructed is composed of only a finite number of distinct components. Secondly, we illustrate how this partition can be concretely constructed by means of supervised machine learning techniques, with minimal bottom-up information.
We consider the path integral of a quantum field theory in Minkowski spacetime with fixed boundary values (for the elementary fields) on asymptotic boundaries. We define and study the corresponding boundary correlation functions obtained by taking derivatives of this path integral with respect to the boundary values. The S-matrix of the QFT can be extracted directly from these boundary correlation functions after smearing. We interpret this relation in terms of coherent state quantization and derive the constraints on the path-integral as a function of boundary values that follow from the unitarity of the S-matrix. We then study the locality structure of boundary correlation functions. In the massive case, we find that the boundary correlation functions for generic locations of boundary points are dominated by a saddle point which has the interpretation of particles scattering in a small elevator in the bulk, where the location of the elevator is determined dynamically, and the S-matrix can be recovered after stripping off some dynamically determined but non-local ``renormalization'' factors. In the massless case, we find that while the boundary correlation functions are generically analytic as a function on the whole manifold of locations of boundary points, they have special singularities on a sub-manifold, points on which correspond to light-like scattering in the bulk. We completely characterize this singular scattering sub-manifold, and find that the corresponding residues of the boundary correlations at these singularities are precisely given by S-matrices. This analysis parallels the analysis of bulk-point singularities in AdS/CFT and generalizes it to the case of multi-bulk point singularities.
We diagnose the stability of the Migdal-Eliashberg theory for a Fermi surface coupled to a gapless boson in 2+1 dimensions. We provide a scheme for diagonalizing the Bethe-Salpeter ladder when small-angle scattering mediated by the boson plays a dominant role. We find a large number of soft modes which correspond to shape fluctuations of the Fermi surface, and these shape deformations follow diffusion-like dynamics on the Fermi surface. Surprisingly, the odd-parity deformations of a convex Fermi surface become unstable near the non-Fermi-liquid regime of the Ising-nematic quantum critical point, and our finding calls for a revisit of the Migdal-Eliashberg framework. The implications of the Bethe-Salpeter eigenvalues for transport are discussed in the companion paper [H.Guo,XXX].
We extend the kinetic operator formalism developed in the companion paper [H.Guo,XXX] to study the general eigenvalues of the fluctuation normal modes. We apply the formalism to calculate the optical conductivity of a critical Fermi surface near the Ising-nematic quantum critical point. We find that the conductivity is the sum of multiple conduction channels including both the soft and non-soft eigenvectors of the kinetic operator, and therefore it is not appropriate to interpret the optical conductivity using the extended Drude formula for momentum-conserving systems. We also show that the propagation of the Fermi-surface soft modes is governed by a Boltzmann equation from which hydrodynamics emerges. We calculate the viscosity, which shows a clear signature of the non-Fermi-liquid physics.
We explicitly construct K-theoretic and elliptic stable envelopes for certain moduli spaces of vortices, and apply this to enumerative geometry of rational curves in these varieties. In particular, we identify the quantum difference equations in equivariant variables with quantum Knizhnik-Zamolodchikov equations, and give their monodromy in terms of geometric elliptic R-matrices. A novel geometric feature in these constructions is that the varieties under study are not holomorphic symplectic, yet nonetheless have representation-theoretic significance. In physics, they originate from 3d supersymmetric gauge theories with $\mathcal{N} = 2$ rather than $\mathcal{N} = 4$ supersymmetry. We discuss an application of the results to the ramified version of the quantum q-Langlands correspondence of Aganagic, Frenkel, and Okounkov.
We consider the multiparticle asymmetric diffusion model (MADM) introduced by Sasamoto and Wadati with integrability-preserving reservoirs at the boundaries. In contrast to the open asymmetric simple exclusion process (ASEP), the number of particles allowed per site is unbounded in the MADM. Taking inspiration from the stationary measure in the symmetric case, i.e. the rational limit, we first obtain the length-1 solution and then show that the steady state can be expressed as an iterated product of Jackson q-integrals. In the proof of the stationarity condition, we observe a cancellation mechanism that closely resembles that of the matrix product ansatz. To our knowledge, the occupation probabilities in the steady state of the boundary-driven MADM were not available before.
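For readers unfamiliar with the q-calculus ingredient, the Jackson q-integral has a defining series that is straightforward to evaluate numerically, $\int_0^a f(x)\, d_q x = a(1-q)\sum_{k\ge 0} q^k f(a q^k)$ for $0<q<1$. The snippet below implements this standard definition only; it is not the paper's steady-state construction.

```python
def jackson_q_integral(f, a, q, terms=200):
    """Jackson q-integral of f on [0, a]: a*(1-q)*sum_k q^k f(a q^k), 0 < q < 1."""
    assert 0.0 < q < 1.0
    return a * (1.0 - q) * sum(q**k * f(a * q**k) for k in range(terms))

# Check against the known q-analogue: int_0^1 x d_q x = 1/(1+q) -> 1/2 as q -> 1.
for q in (0.5, 0.9, 0.99):
    print(f"q={q}: {jackson_q_integral(lambda x: x, 1.0, q):.6f}"
          f"  vs  {1.0/(1.0+q):.6f}")
```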
We develop a suitably general version of quantum mechanics that can handle nonassociative algebras of observables and which reduces to standard quantum theory in the traditional associative setting. Our algebraic approach is naturally probabilistic and is based on the universal enveloping algebra of a general nonassociative algebra. We formulate properties of states together with notions of trace, and use them to develop GNS constructions. We describe Heisenberg and Schr\"odinger pictures of completely positive dynamics, and we illustrate our formalism with the explicit examples of finite-dimensional matrix Jordan algebras as well as the octonion algebra.
We discuss a general framework for the analytic Langlands correspondence over an arbitrary local field F introduced and studied in our works arXiv:1908.09677, arXiv:2103.01509 and arXiv:2106.05243, in particular including non-split and twisted settings. Then we specialize to the archimedean cases (F=C and F=R) and give a (mostly conjectural) description of the spectrum of the Hecke operators in various cases in terms of opers satisfying suitable reality conditions, as predicted in part in arXiv:2103.01509, arXiv:2106.05243 and arXiv:2107.01732. Finally, we apply the tools of the analytic Langlands correspondence over archimedean fields in genus zero to the Gaudin model and its generalizations, as well as their q-deformations.
The equations of the open 2-dimensional Toda lattice (TL) correspond to Leznov-Saveliev equations (LSE), interpreted as zero-curvature Yang-Mills equations on the variety of $O(3)$-orbits on Minkowski space when the gauge algebra is the image of $\mathfrak{sl}(2)$ under a principal embedding into a simple finite-dimensional Lie algebra $\mathfrak{g}(A)$ with Cartan matrix $A$. The known integrable super versions of the TL equations correspond to matrices $A$ of two different types. I interpret the super LSE of the first type (LSE1) as zero-curvature equations for the \textit{reduced} connection on the non-integrable distribution on the supervariety of $OSp(1|2)$-orbits on the $N=1$-extended Minkowski superspace; the Leznov-Saveliev method of solution is applicable only for $\mathfrak{g}(A)$ finite-dimensional and admitting a superprincipal embedding $\mathfrak{osp}(1|2)\to\mathfrak{g}(A)$. The simplest LSE1 is the super Liouville equation; it can also be interpreted in terms of the superstring action. Olshanetsky introduced LSE2 -- another type of super TL equations. Olshanetsky's equations, as well as LSE1 with infinite-dimensional $\mathfrak{g}(A)$, can be solved by the Inverse Scattering Method. The interpretation of these equations remains an open problem, except for the super Liouville equation -- the only case where the two types of LSE coincide. I also review related, less known and less popular mathematical constructions involved.
We consider Little String Theories (LSTs) that are engineered by $N$ parallel M5-branes probing a transverse $\mathbb{Z}_M$ geometry. By exploiting a dual description in terms of F-theory compactified on a toric Calabi-Yau threefold $X_{N,M}$, we establish numerous symmetries that leave the BPS partition function $\mathcal{Z}_{N,M}$ invariant. They furthermore act in a non-perturbative fashion from the point of view of the low-energy quiver gauge theory associated with the LST. We present different group-theoretical organisations of these symmetries, thereby generalising the results of [arXiv:1811.03387] to the case of generic $M \geq 1$. We also provide a Mathematica package that allows one to represent them in terms of matrices that act linearly on the K\"ahler parameters of $X_{N,M}$. From the perspective of dual realisations of the LSTs the symmetries found here act in highly nontrivial ways: as an example, we consider a formulation of $\mathcal{Z}_{N,M}$ in terms of correlation functions of a vertex operator algebra, whose commutation relations are governed by an affine quiver algebra. We show the impact of the symmetry transformations on the latter and discuss the invariance of $\mathcal{Z}_{N,M}$ from this perspective for concrete examples.
We explore the 't Hooft-Veneziano limit of Polyakov loop models at finite baryon chemical potential. Using methods we developed earlier, we calculate the two- and $N$-point correlation functions of the Polyakov loops. This makes it possible to compute various potentials in the confinement phase and to derive the screening masses outside the confinement region. In particular, we establish the existence of complex masses and an oscillating decay of correlations in a certain range of parameters. Furthermore, we show that the calculation of the $N$-point correlation function in the confinement phase reduces to the geometric median problem. This leads to a large-$N$ analog of the $Y$ law for the baryon potential.
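The geometric median problem referenced above (finding the point that minimizes the summed Euclidean distances to $N$ given points) is classically solved by Weiszfeld's fixed-point iteration. The toy below is a generic solver for illustration, not the paper's calculation.

```python
import numpy as np

def geometric_median(points, iters=500, eps=1e-12):
    """Weiszfeld iteration for the point minimizing sum_i |x - p_i|."""
    x = points.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)  # avoid /0
        x_new = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # three test "sources"
print("geometric median:", geometric_median(pts))     # the Fermat (Y-junction) point
```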
We ask whether Krylov complexity is mutually compatible with the circuit and Nielsen definitions of complexity. We show that the Krylov complexities between three states fail to satisfy the triangle inequality and so cannot be a measure of distance: there is no possible metric for which Krylov complexity is the length of the shortest path to the target state or operator. We show this explicitly in the simplest example, a single qubit, and in general.
We discuss a new classical action that enables efficient computation of gluonic tree amplitudes but does not contain any triple-point vertices. This new formulation is obtained via a canonical transformation of the light-cone Yang-Mills action, with the field transformations based on Wilson line functionals. In addition to MHV vertices, the action also contains $\mathrm{N}^k\mathrm{MHV}$ vertices, where $1 \leq k \leq (n - 4)$ and $n$ is the number of external legs. We computed tree-level amplitudes with up to 8 gluons and found agreement with standard results. The classical action is, however, not sufficient to obtain the rational parts of amplitudes, in particular the finite amplitudes with all-same-helicity gluons. In order to systematically develop quantum corrections to this new action, we derive the one-loop effective action in such a way that no quantum contributions are missing at one loop.
We study the $N$-point Coon amplitude discovered first by Baker and Coon in the 1970s and then again independently by Romans in the 1980s. This Baker-Coon-Romans (BCR) amplitude retains several properties of tree-level string amplitudes, namely duality and factorization, with a $q$-deformed version of the string spectrum. Although the formula for the $N$-point BCR amplitude is only valid for ${q > 1}$, the four-point case admits a straightforward extension to all ${q \geq 0}$ which reproduces the usual expression for the four-point Coon amplitude. At five points, there are inconsistencies with factorization when pushing ${q < 1}$. Despite these issues, we find a new relation between the five-point BCR amplitude and Cheung and Remmen's four-point basic hypergeometric amplitude, placing the latter within the broader family of Coon amplitudes. Finally, we compute the $q \to \infty$ limit of the $N$-point BCR amplitudes and discover an exact correspondence between these amplitudes and the field theory amplitudes of a scalar transforming in the adjoint representation of a global symmetry group with an infinite set of non-derivative single-trace interaction terms. This correspondence at $q = \infty$ is the first definitive realization of the Coon amplitude (in any limit) from a field theory described by an explicit Lagrangian.
The classic Abraham-Lorentz-Dirac self-force of point-like particles is generalized within an effective field theory setup to include linear spin and susceptibility effects described perturbatively, in that setup, by effective couplings in the action. Electromagnetic self-interactions of the point-like particle are integrated out using the in-in supersymmetric worldline quantum field theory formalism. Divergences are regularized with dimensional regularization and the resulting equations of motion are in terms only of an external electromagnetic field and the particle degrees of freedom.
The addition of mass terms in general breaks gauge symmetries, which can usually be recovered via Stueckelberg fields. The massive BF model describes massive spin-1 particles while preserving the $U(1)$ symmetry without Stueckelberg fields. Replacing the spin-1 curvature (field strength) by the Riemann tensor, one can define its spin-2 analogue (the massive ``BR'' model). Here we investigate the canonical structure of the free mBR model in terms of gauge invariants in arbitrary dimensions and compare with the massive BF model. We also investigate nonlinear completions of the mBR model in arbitrary dimensions. In $D=3$ we find a nonlinear completion in the form of a bimetric model which is a subcase of a new class of bimetric models whose decoupling limit is ghost-free at leading order. Their spectrum consists only of massive spin-2 particles. In arbitrary dimensions $D\ge 3$ we show that the consistency of a possible single-metric completion of the mBR model is related to the consistency of a higher-rank description of massless spin-1 particles in arbitrary backgrounds.
Deriving an effective massless field theory for fluctuations about a braneworld spacetime requires analysis of the second-order differential equation for the transverse-space wavefunction. There can be two strikingly different types of effective theory. For a supersymmetric braneworld, one involves a technically consistent embedding of a supergravity theory on the worldvolume; the other can produce, in certain situations, a genuine localisation of gravity near the worldvolume, but not via a technically consistent embedding. In the latter situation, the theory's dynamics remains higher-dimensional, but there can still be a lower-dimensional effective-theory interpretation of the dynamics at low worldvolume momenta / large worldvolume distances. This paper examines the conditions under which such a gravity localisation is possible. Localising gravity about braneworld spacetimes requires finding solutions to transverse-space self-adjoint Sturm-Liouville problems admitting a normalisable zero mode in the noncompact transverse space. This in turn requires analysis of Sturm-Liouville problems with singular radial endpoints, following a formalism originating in the work of Hermann Weyl. Examples of such gravity-localising braneworld systems are found and analysed in this formalism with underlying "skeleton" braneworlds of Salam-Sezgin, resolved D3-brane and Randall-Sundrum II types.
We study the von Neumann algebra description of the inflationary quasi-de Sitter (dS) space. Unlike perfect dS space, quasi-dS space allows the nonzero energy flux across the horizon, which can be identified with the expectation value of the static time translation generator. Moreover, as a dS isometry associated with the static time translation is spontaneously broken, the fluctuation in time is accumulated, which induces the fluctuation in the energy flux. When the inflationary period is given by $(\epsilon_H H)^{-1}$ where $\epsilon_H$ is the slow-roll parameter measuring the increasing rate of the Hubble radius, both the energy flux and its fluctuation diverge in the $G \to 0$ limit. Taking the fluctuation in the energy flux and that in the observer's energy into account, we argue that the inflationary quasi-dS space is described by Type II$_\infty$ algebra. As the entropy is not bounded from above, this is different from Type II$_1$ description of perfect dS space in which the entropy is maximized by the maximal entanglement. We also show that our result is consistent with the observation that the von Neumann entropy for the density matrix reflecting the fluctuations above is interpreted as the generalized entropy.
The configuration entropy (CE) provides a measure of the stability of physical systems that are spatially localized. An increase in the CE is associated with an increase in the instability of the system. In this work we apply a recently developed holographic description of a rotating plasma in order to investigate the behaviour of the CE when the plasma has angular momentum. Considering the holographic dual of the plasma, namely a rotating AdS black hole, the CE is computed at different rotational speeds and temperatures. The result shows not only an increase with the rotational speed $v$ but, in particular, a divergence of the CE as $v$ approaches the speed of light: $v \to 1$. We discuss these results, showing that they are consistent with the change in the geometry of the black hole caused by the rotation and the corresponding variation of the volume of the dual plasma. We also connect the results found here with those obtained in a recent work, where it was shown that the complete dissociation of heavy mesons in a plasma is represented by a positive singularity in the CE.
We consider quantum and classical first-order transitions, at equilibrium and under out-of-equilibrium conditions, mainly focusing on quench and slow quasi-adiabatic protocols. For these phenomena, we review the finite-size scaling theory appropriate to describe the general features of the large-scale (and, for dynamic phenomena, long-time) behavior of finite-size systems.
We study the phase structure and charge transport at finite temperature and chemical potential in the non-Hermitian PT-symmetric holographic model of arXiv:1912.06647. The non-Hermitian PT-symmetric deformation is realized by promoting the parameter of a global U(1) symmetry to a complex number. Depending on the strength of the deformation, we find three phases: stable PT-symmetric phase, unstable PT-symmetric phase, and an unstable PT-symmetry broken phase. In the three phases, the square of the condensate and also the spectral weight of the AC conductivity at zero frequency are, respectively, positive, negative, and complex. We check that the Ferrell-Glover-Tinkham sum rule for the AC conductivity holds in all the three phases. We also investigate a complexified U(1) rotor model with PT-symmetric deformation, derive its phase structure and condensation pattern, and find a zero frequency spectral weight analogous to the holographic model.
The non-commutative electrodynamics based on the canonical Poisson gauge theory is studied in this paper. For a pure spatial non-commutativity, we investigate plane-wave solutions in the presence of a constant and uniform magnetic background field for the classical electrodynamics of the canonical Poisson gauge theory. Writing the electrodynamics equations in momentum space, we obtain the properties of the medium, governed by the permittivity and permeability tensors, in terms of the non-commutative parameter. For the plane-wave solutions mentioned, the dispersion relations are modified by the magnetic background, and the corresponding group velocity is affected by the spatial non-commutative parameter. We construct the energy-momentum tensor and discuss the conserved components of this tensor in the spatially non-commutative case. The birefringence phenomenon is shown through the modified dispersion relations, which depend directly on the non-commutative corrections as well as on the magnetic background field. Using the bound from the polarized vacuum with laser (PVLAS) experiment on the vacuum magnetic birefringence, we estimate a theoretical value for the spatial non-commutative parameter.
The double copy programme relies crucially on the so-called color-kinematics duality which, in turn, is widely believed to descend from a kinematic algebra possessed by gauge theories. In this paper we construct the kinematic algebra of gauge invariant and off-shell self-dual Yang-Mills theory, up to trilinear maps. This structure is a homotopy algebra of the same type as the ones recently uncovered in Chern-Simons and full Yang-Mills theories. To make contact with known results for the self-dual sector, we show that it reduces to the algebra found by Monteiro and O'Connell upon taking light-cone gauge and partially solving the self-duality constraints. Finally, we test a double copy prescription recently proposed in [1] and reproduce self-dual gravity.
We consider 4-dimensional $\mathcal{N} = 2$ superconformal quiver theories with $SU(N)^M$ gauge group and bi-fundamental matter, and evaluate correlation functions of $n$ coincident Wilson loops in the planar limit of the theory. Exploiting specific untwisted/twisted combinations of these operators and using supersymmetric localization, we are able to resum the whole perturbative expansion and find exact expressions for these correlators that are valid for all values of the 't Hooft coupling. Moreover, we analytically derive the leading strong-coupling behaviour of the correlators, showing that they obey a remarkably simple rule. Our analysis is complemented by numerical checks based on a Pad\'e resummation of the perturbative series.
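The Padé step used for such numerical checks can be prototyped with SciPy, which constructs numerator and denominator polynomials from the Taylor coefficients of a series. Here the exponential series is a stand-in; it is not the paper's perturbative data.

```python
import math
import numpy as np
from scipy.interpolate import pade

# [2/2] Pade approximant from the first five Taylor coefficients of exp(x).
an = [1.0 / math.factorial(k) for k in range(5)]
p, q = pade(an, 2)            # p/q matches the input series to O(x^4)

x = 1.5
print(f"Pade [2/2] at x={x}: {p(x)/q(x):.6f}   exp(x): {np.exp(x):.6f}")
```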
We study giant graviton expansions of the superconformal index of 4d orbifold/orientifold theories. In general, a giant graviton expansion is given as a multiple sum over wrapping numbers. It is known that the expansion can be reduced to a simple sum for ${\cal N}=4$ $U(N)$ SYM by choosing appropriate expansion variables. We find that such a reduction occurs for a few examples of orbifold and orientifold theories: the $\mathbb{Z}_k$ orbifold and orientifolds with $O3$ and $O7$. We also argue that for a quiver gauge theory associated with a toric Calabi-Yau $3$-fold, the simple-sum expansion works only if the toric diagram is a triangle, that is, if the Calabi-Yau is an orbifold of $\mathbb{C}^3$.
We introduce a "radial" two-point invariant for quantum field theory in de Sitter (dS) analogous to the radial coordinate used in conformal field theory. We show that the two-point function of a free massive scalar in the Bunch-Davies vacuum has an exponentially convergent series expansion in this variable with positive coefficients only. Assuming a convergent K\"all\'en-Lehmann decomposition, this result is then generalized to the two-point function of any scalar operator non-perturbatively. A corollary of this result is that, starting from two-point functions on the sphere, an analytic continuation to an extended complex domain is admissible. dS two-point configurations live inside or on the boundary of this domain, and all the paths traced by Wick rotations between dS and the sphere or between dS and Euclidean Anti-de Sitter are also contained within this domain.
Noble-element time projection chambers are a leading technology for rare event detection in physics, such as for dark matter and neutrinoless double beta decay searches. Time projection chambers typically assign the event position in the drift direction using the relative timing of the prompt scintillation and delayed charge collection signals, allowing for reconstruction of an absolute position in the drift direction. In this paper, alternative methods for assigning the event drift distance via quantification of electron diffusion in a pure high-pressure xenon gas time projection chamber are explored. Data from the NEXT-White detector demonstrate the ability to achieve good position assignment accuracy for both high- and low-energy events. Using point-like energy deposits from $^{83\mathrm{m}}$Kr calibration electron captures ($E \sim 45$ keV), the position of origin of low-energy events is determined to 2 cm precision with bias $< 1$ mm. A convolutional neural network approach is then used to quantify diffusion for longer tracks ($E \geq 1.5$ MeV), yielding a precision of 3 cm on the event barycenter. The precision achieved with these methods indicates the feasibility of energy calibrations better than 1% FWHM at Q$_{\beta\beta}$ in pure xenon, as well as the potential for event fiducialization in large future detectors using an alternative method that does not rely on primary scintillation.
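The underlying inversion is simple: a point-like ionisation deposit spreads during drift, with the measured width growing as the square root of the drift length, so a measured width can be traded for a drift coordinate. A minimal sketch follows; the diffusion coefficient and intrinsic width below are assumed placeholders, not NEXT-White calibration values.

```python
# Drift-distance assignment from diffusion: sigma^2 = sigma0^2 + D^2 * z,
# so a measured spread sigma can be inverted for the drift length z.
D_DIFF = 1.0   # mm / sqrt(cm), diffusion coefficient (assumed placeholder)
SIGMA0 = 0.5   # mm, intrinsic spread at production (assumed placeholder)

def drift_distance(sigma_mm):
    """Drift length z (cm) from the measured spread of the charge cloud (mm)."""
    return (sigma_mm**2 - SIGMA0**2) / D_DIFF**2

for s in (3.0, 10.0, 20.0):
    print(f"sigma = {s:5.1f} mm  ->  z = {drift_distance(s):7.1f} cm")
```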
Single top quark production in association with vector bosons provides a unique way to probe the electroweak sector of the standard model at the Large Hadron Collider. In this talk the latest experimental results of the ATLAS and CMS Collaborations for these processes are presented.
We report the first measurement of the atmospheric neutrino-oxygen neutral-current quasielastic (NCQE) cross section in the gadolinium-loaded Super-Kamiokande (SK) water Cherenkov detector. In June 2020, SK began a new experimental phase, named SK-Gd, by loading 0.011% by mass of gadolinium into the ultrapure water of the SK detector. The introduction of gadolinium to ultrapure water has the effect of improving the neutron-tagging efficiency. Using a 552.2 day data set from August 2020 to June 2022, we measure the NCQE cross section to be 0.74 $\pm$ 0.22(stat.) $^{+0.85}_{-0.15}$ (syst.) $\times$ 10$^{-38}$ cm$^{2}$/oxygen in the energy range from 160 MeV to 10 GeV, which is consistent with the atmospheric neutrino-flux-averaged theoretical NCQE cross section and the measurement in the SK pure-water phase within the uncertainties. Furthermore, we compare the models of the nucleon-nucleus interactions in water and find that the Binary Cascade model and the Liege Intranuclear Cascade model provide a somewhat better fit to the observed data than the Bertini Cascade model. Since the atmospheric neutrino-oxygen NCQE reactions are one of the main backgrounds in the search for diffuse supernova neutrino background (DSNB), these new results will contribute to future studies - and the potential discovery - of the DSNB in SK.
This paper presents a search for a new $Z^\prime$ resonance decaying into a pair of dark quarks which hadronise into dark hadrons before promptly decaying back as Standard Model particles. This analysis is based on proton-proton collision data recorded at $\sqrt{s}=13$ TeV with the ATLAS detector at the Large Hadron Collider between 2015 and 2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. After selecting events containing large-radius jets with high track multiplicity, the invariant mass distribution of the two highest-transverse-momentum jets is scanned to look for an excess above a data-driven estimate of the Standard Model multijet background. No significant excess of events is observed and the results are thus used to set 95 % confidence-level upper limits on the production cross-section times branching ratio of the $Z^\prime$ to dark quarks as a function of the $Z^\prime$ mass for various dark-quark scenarios.
A major remaining challenge for $Nb_3Sn$ high-field magnets is their training due to random temperature variations in the coils. The main objective of our research is to reduce or eliminate it by finding novel impregnation materials with respect to the epoxies currently used. An organic olefin-based thermosetting dicyclopentadiene (DCP) resin, $C_{10}H_{12}$, commercially available in Japan as TELENE from RIMTEC, was used to impregnate a short $Nb_3Sn$ undulator coil developed by ANL and FNAL. This magnet reached its short-sample limit after only two quenches, compared with several dozen when CTD-101K was used. Ductility, i.e. the ability to accept large strains, and toughness were identified as key properties in achieving these results. In addition, we have been investigating whether mixing TELENE with high-heat-capacity ceramic powders increases the specific heat ($C_p$) of impregnated $Nb_3Sn$ superconducting magnets. The viscosity, heat capacity, thermal conductivity, and other physical properties of TELENE with high-$C_p$ powder fillers were measured in this study as a function of temperature and magnetic field. Mixing TELENE with either $Gd_2O_3$, $Gd_2O_2S$, or $HoCu_2$ increases its $C_p$ tenfold. We have also investigated the effect of 10 Gy of gamma-ray irradiation, performed at the Takasaki Advanced Radiation Research Institute in Takasaki, Japan, on the mechanical properties of pure and mixed TELENE. Whereas both TELENE-82wt%$Gd_2O_3$ and TELENE-83wt%$HoCu_2$ performed well, the best mechanical properties after irradiation were obtained for TELENE-87wt%$Gd_2O_2S$. Testing a short undulator with the latter impregnation material in the future will verify whether it further improves the coils' thermal stability. Short magnet training will lead to better magnet reliability, lower risk, and substantial savings in accelerator commissioning costs.
A search for a heavy CP-odd Higgs boson, $A$, decaying into a $Z$ boson and a heavy CP-even Higgs boson, $H$, is presented. It uses the full LHC Run 2 dataset of $pp$ collisions at $\sqrt{s}=13$ TeV collected with the ATLAS detector, corresponding to an integrated luminosity of $140$ fb$^{-1}$. The search for $A\to ZH$ is performed in the $\ell^+\ell^- t\bar{t}$ and $\nu\bar{\nu}b\bar{b}$ final states and surpasses the reach of previous searches in different final states in the region with $m_H>350$ GeV and $m_A>800$ GeV. No significant deviation from the Standard Model expectation is found. Upper limits are placed on the production cross-section times the decay branching ratios. Limits with less model dependence are also presented as functions of the reconstructed $m(t\bar{t})$ and $m(b\bar{b})$ distributions in the $\ell^+\ell^- t\bar{t}$ and $\nu\bar{\nu}b\bar{b}$ channels, respectively. In addition, the results are interpreted in the context of two-Higgs-doublet models.
The $B^+ \rightarrow D_s^+ D_s^- K^+$ decay is observed for the first time using proton-proton collision data collected by the LHCb detector at centre-of-mass energies of $7$, $8$ and $13\, \text{TeV}$, corresponding to an integrated luminosity of $9\,\text{fb}^{-1}$. Its branching fraction relative to that of the $B^{+} \rightarrow D^{+} D^{-} K^{+}$ decay is measured to be $$\frac{B\left(B^{+} \rightarrow D_s^{+} D_s^{-} K^{+}\right)}{B\left(B^{+} \rightarrow D^{+} D^{-} K^{+}\right)}=0.525 \pm 0.033 \pm 0.027 \pm 0.034,$$ where the first uncertainty is statistical, the second systematic, and the third is due to the uncertainties on the branching fractions of the $D_s^{\pm} \rightarrow K^{\mp} K^{\pm} \pi^{\pm}$ and $D^{\pm} \rightarrow K^{\mp} \pi^{\pm} \pi^{\pm}$ decays. This measurement fills an experimental gap in the knowledge of the family of Cabibbo-favoured $\bar{b} \rightarrow \bar{c} c \bar{s}$ transitions and opens the path for unique studies of spectroscopy in the future.
The first simultaneous test of muon-electron universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ and $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays is performed, in two ranges of the dilepton invariant-mass squared, $q^{2}$. The analysis uses beauty mesons produced in proton-proton collisions collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of 9 $\mathrm{fb}^{-1}$. Each of the four lepton universality measurements reported is either the first in the given $q^{2}$ interval or supersedes previous LHCb measurements. The results are compatible with the predictions of the Standard Model.
A simultaneous analysis of the $B^+\to K^+\ell^+\ell^-$ and $B^0\to K^{*0}\ell^+\ell^-$ decays is performed to test muon-electron universality in two ranges of the square of the dilepton invariant mass, $q^2$. The measurement uses a sample of beauty meson decays produced in proton-proton collisions collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of $9$ $\text{fb}^{-1}$. A sequence of multivariate selections and strict particle identification requirements produce a higher signal purity and a better statistical sensitivity per unit luminosity than previous LHCb lepton universality tests using the same decay modes. Residual backgrounds due to misidentified hadronic decays are studied using data and included in the fit model. Each of the four lepton universality measurements reported is either the first in the given $q^2$ interval or supersedes previous LHCb measurements. The results are compatible with the predictions of the Standard Model.
In recent years, interest has grown in alternative strategies for the search for New Physics beyond the Standard Model. One envisaged solution lies in the development of anomaly detection algorithms based on unsupervised machine learning techniques. In this paper, we propose a new Generative Adversarial Network-based auto-encoder model that allows both anomaly detection and model-independent background modeling. This algorithm can be integrated with other model-independent tools in a complete heavy resonance search strategy. The proposed strategy has been tested on the LHC Olympics 2020 dataset with promising results.
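The core mechanism of autoencoder-based anomaly detection - flagging events whose compressed representation reconstructs poorly - can be illustrated without the adversarial machinery. The following is a minimal sketch, not the paper's GAN-based model: it substitutes a linear (PCA) autoencoder on synthetic data, and all variable names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "background": correlated Gaussian events in 10 features.
mixing = rng.normal(size=(10, 10))
background = rng.normal(size=(20000, 10)) @ mixing
# Hypothetical "anomalies": events shifted away from the background manifold.
anomalies = rng.normal(size=(200, 10)) @ mixing + 4.0

# A linear autoencoder is PCA: encode into the top-k principal components.
k = 4
mu = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mu, full_matrices=False)
encoder = vt[:k]

def reconstruction_error(x):
    z = (x - mu) @ encoder.T          # compress ("encode")
    x_rec = z @ encoder + mu          # reconstruct ("decode")
    return np.sum((x - x_rec) ** 2, axis=1)

# Events the background-trained model reconstructs poorly are anomaly
# candidates; the threshold is set from the background itself.
threshold = np.quantile(reconstruction_error(background), 0.99)
flagged = reconstruction_error(anomalies) > threshold
print(f"flagged anomaly fraction: {flagged.mean():.2f}")
```

A GAN-based auto-encoder replaces the linear projection with learned networks, but the anomaly score rests on the same reconstruction-error logic.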
Using $(10087\pm44)\times10^{6}$ $J/\psi$ events collected with the BESIII detector at the BEPCII $e^+e^-$ storage ring at the center-of-mass energy of $\sqrt{s}=3.097~\rm{GeV}$, we present a search for the rare semi-muonic charmonium decay $J/\psi\to D^{-}\mu^{+}\nu_{\mu}+c.c.$. Since no significant signal is observed, we set an upper limit on the branching fraction of $\mathcal{B}(J/\psi\to D^{-}\mu^{+}\nu_{\mu}+c.c.)<5.6\times10^{-7}$ at the $90\%$ confidence level. This is the first search for a weak decay of charmonium with a muon in the final state.
We study single-image super-resolution algorithms for photons at collider experiments based on generative adversarial networks. We treat the energy depositions of simulated electromagnetic showers of photons and neutral-pion decays in a toy electromagnetic calorimeter as 2D images and we train super-resolution networks to generate images with an artificially increased resolution by a factor of four in each dimension. The generated images are able to reproduce features of the electromagnetic showers that are not obvious from the images at nominal resolution. Using the artificially-enhanced images for the reconstruction of shower-shape variables and of the position of the shower center results in significant improvements. We additionally investigate the utilization of the generated images as a pre-processing step for deep-learning photon-identification algorithms and observe improvements in the case of training samples of small size.
We study scalable machine learning models for full event reconstruction in high-energy electron-positron collisions based on a highly granular detector simulation. Particle-flow reconstruction can be formulated as a supervised learning task using tracks and calorimeter clusters or hits. We compare a graph neural network and a kernel-based transformer and demonstrate that both avoid quadratic memory allocation and computational cost while achieving realistic reconstruction. We show that hyperparameter tuning on a supercomputer significantly enhances the physics performance of the models, improving the jet transverse momentum resolution by up to 50% compared to the baseline. The resulting model is highly portable across hardware processors. Finally, we demonstrate that the model can be trained on highly granular inputs consisting of tracks and calorimeter hits, resulting in a physics performance competitive with the baseline. Datasets and software to reproduce the studies are published following the findable, accessible, interoperable, and reusable (FAIR) principles.
Image decomposition plays a crucial role in various computer vision tasks, enabling the analysis and manipulation of visual content at a fundamental level. Overlapping images, which occur when multiple objects or scenes partially occlude each other, pose unique challenges for decomposition algorithms. The task intensifies when working with sparse images, where the scarcity of meaningful information complicates the precise extraction of components. This paper presents a solution that leverages the power of deep learning to accurately extract individual objects within multi-dimensional overlapping-sparse images, with a direct application in high-energy physics: the decomposition of overlaid elementary particles obtained from imaging detectors. In particular, the proposed approach tackles a highly complex yet unsolved problem: identifying and measuring independent particles at the vertex of neutrino interactions, where one expects to observe detector images with multiple indiscernible overlapping charged particles. By decomposing the image of the detector activity at the vertex through deep learning, it is possible to infer the kinematic parameters of the identified low-momentum particles - which otherwise would remain neglected - and enhance the reconstructed energy resolution of the neutrino event. We also present an additional step - which can be tuned directly on detector data - that combines the above method with a fully-differentiable generative model to improve the image decomposition further and, consequently, the resolution of the measured parameters, achieving unprecedented results. This improvement is crucial for precisely measuring the parameters that govern neutrino flavour oscillations and for searching for asymmetries between matter and antimatter.
The network motif identification problem aims to find topological patterns in biological networks. Identifying non-overlapping motifs is a computationally challenging problem on classical computers. Quantum computers enable the solution of high-complexity problems that do not scale on classical computers. In this paper, we develop the first quantum solution, called QOMIC (Quantum Optimization for Motif IdentifiCation), to the motif identification problem. QOMIC transforms the motif identification problem using an integer model, which serves as the foundation of our quantum solution. We develop and implement the quantum circuit to find motif locations in the given network using this model. Our experiments demonstrate that QOMIC outperforms the existing solutions developed for classical computers in terms of motif counts. We also observe that QOMIC can efficiently find motifs in human regulatory networks associated with five neurodegenerative diseases: Alzheimer's, Parkinson's, Huntington's, Amyotrophic Lateral Sclerosis (ALS), and Motor Neurone Disease (MND).
We study the quantum system made of a particle trapped in a spherically symmetric Gaussian well, with special emphasis on the weakly bound regime. In two and three dimensions, we perform highly accurate numerical calculations for the lowest states of the spectrum (n < 4) using the Lagrange Mesh Method. The critical parameters, for which the lowest states pass into the continuum, are estimated up to six decimal digits. The behavior of the energy near the threshold is also investigated. In particular, we estimate the coefficients of the leading terms of the energy expansion around the critical parameter. Additionally, we analyse the asymptotic behavior of the exact wave function at small and large distances to motivate a few-parameter Ansatz which is locally accurate in the whole domain of the radial coordinate. We use this Ansatz to build a basis set and estimate the binding energy of the deuteron in a realistic model of nuclear physics, where the short-range interaction between nucleons is described by the Gaussian well. We observe an extremely fast convergence of the energy as a function of the number of terms.
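As context for the numerical side of this problem, the spectrum of a particle in a Gaussian well can also be obtained with a plain finite-difference diagonalization (rather than the Lagrange Mesh Method used in the paper). This is a minimal sketch of the 3D s-wave radial problem in units $\hbar = 2m = 1$; the well depth, width and grid parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# s-wave radial equation in 3D (units hbar = 2m = 1):
#   -u''(r) + V(r) u(r) = E u(r),  u(0) = 0,  V(r) = -V0 exp(-(r/sigma)^2)
V0, sigma = 10.0, 1.0                # illustrative depth and width
N, rmax = 3000, 20.0
r = np.linspace(rmax / N, rmax, N)   # omit r = 0, where u vanishes
h = r[1] - r[0]

V = -V0 * np.exp(-(r / sigma) ** 2)
diag = 2.0 / h**2 + V                # finite-difference kinetic term + potential
off = -np.ones(N - 1) / h**2

E = eigh_tridiagonal(diag, off, eigvals_only=True)
print("bound-state energies:", E[E < 0])
```

Lowering $V_0$ towards its critical value pushes the bound state towards the threshold $E = 0$, which is the weakly bound regime the paper analyses.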
As dedicated quantum devices, Ising machines can solve large-scale binary optimization problems in milliseconds. There is emerging interest in utilizing Ising machines to train feedforward neural networks, driven by the rapid rise of generative artificial intelligence. However, existing methods can only train single-layer feedforward networks because of the complex nonlinear network topology. This paper proposes an Ising learning algorithm to train quantized neural networks (QNNs) by incorporating two essential techniques, namely a binary representation of the network topology and order reduction of the loss function. To the best of our knowledge, this is the first algorithm to train multi-layer feedforward networks on Ising machines, providing an alternative to gradient-based backpropagation. First, training a QNN is formulated as a quadratic constrained binary optimization (QCBO) problem by representing neuron connections and activation functions as equality constraints. All quantized variables are encoded by binary bits based on a binary encoding protocol. Second, the QCBO is converted to a quadratic unconstrained binary optimization (QUBO) problem that can be efficiently solved on Ising machines. The conversion leverages both penalty functions and Rosenberg order reduction, which together eliminate the equality constraints and reduce the high-order loss function to a quadratic one. Under some assumptions, theoretical analysis shows that the space complexity of our algorithm is $\mathcal{O}(H^2L + HLN\log H)$, quantifying the required number of Ising spins. Finally, the effectiveness of the algorithm is validated with a simulated Ising machine on the MNIST dataset. After annealing for 700 ms, the classification accuracy reaches 98.3%. Among 100 runs, the success probability of finding the optimal solution is 72%. As the number of spins on Ising machines increases, our algorithm has the potential to train deeper neural networks.
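The Rosenberg order-reduction step mentioned above can be shown in isolation: a cubic term $xyz$ is replaced by $wz$ with an auxiliary bit $w$, plus a penalty that vanishes exactly when $w = xy$. A minimal brute-force check of this substitution (the penalty weight and variable names are illustrative):

```python
import itertools

def cubic(x, y, z):
    return x * y * z                      # original third-order term

def quadratized(x, y, z, w, M=10):
    # Rosenberg penalty: 0 iff w == x*y on binary inputs, >= 1 otherwise.
    penalty = x * y - 2 * x * w - 2 * y * w + 3 * w
    return w * z + M * penalty            # at most quadratic in the bits

# For every (x, y, z), minimizing over the auxiliary bit w recovers x*y*z,
# so the QUBO optimum coincides with the original higher-order optimum.
for x, y, z in itertools.product([0, 1], repeat=3):
    assert min(quadratized(x, y, z, w) for w in (0, 1)) == cubic(x, y, z)
print("Rosenberg reduction verified on all 8 binary assignments")
```

Applied term by term, this is how a high-order loss is brought into the quadratic form an Ising machine can anneal.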
The formulation of a measurement theory for relativistic quantum field theory (QFT) has recently been an active area of research. In contrast to the asymptotic measurement framework that was enshrined in QED, the new proposals aim to supply a measurement framework for measurements in local spacetime regions. This paper surveys episodes in the history of quantum theory that contemporary researchers have identified as precursors to their own work and discusses how they laid the groundwork for current approaches to local measurement theory for QFT.
Non-Gaussian quantum states are crucial to fault-tolerant quantum computation with continuous-variable systems. Usually, the generation of such states involves trade-offs between success probability and the quality of the resultant state. For example, injecting squeezed light into a multimode interferometer and postselecting on certain patterns of photon-number outputs in all but one mode, a fundamentally probabilistic task, can herald the creation of cat states, Gottesman-Kitaev-Preskill (GKP) states, and more. We consider the addition of a non-Gaussian resource state, particularly single photons, to this configuration and show how it improves the qualities and generation probabilities of desired states. With only two modes, adding a single-photon source improves the GKP-state fidelity from 0.68 to 0.95, and adding a second then increases the success probability eightfold; for cat states with a fixed target fidelity, the probability of success can be improved by factors of up to 4 by adding single-photon sources. These results demonstrate the usefulness of additional commonplace non-Gaussian resources for generating desirable states of light.
Conventional hydrodynamics describes systems with few long-lived excitations. In one dimension, however, many experimentally relevant systems feature a large number of long-lived excitations even at high temperature, because they are proximate to integrable limits. Such models cannot be treated using conventional hydrodynamics. The framework of generalized hydrodynamics (GHD) was recently developed to treat the dynamics of one-dimensional models: it combines ideas from integrability, hydrodynamics, and kinetic theory to come up with a quantitative theory of transport. GHD has successfully settled several longstanding questions about one-dimensional transport; it has also been leveraged to study dynamical questions beyond the transport of conserved quantities, and to systems that are not integrable. In this article we introduce the main ideas and predictions of GHD, survey some of the most recent theoretical extensions and experimental tests of the GHD framework, and discuss some open questions in transport that the GHD perspective might elucidate.
Exponential observables, formulated as $\log \langle e^{\hat{X}}\rangle$ where $\hat{X}$ is an extensive quantity, play a critical role in the study of quantum many-body systems; examples include the free energy and the entanglement entropy. Given that $e^{\hat{X}}$ becomes exponentially large (or small) in the thermodynamic limit, accurately computing the expectation value of this exponential quantity presents a significant challenge. In this Letter, we propose a comprehensive algorithm for quantifying these observables in interacting fermion systems, utilizing the determinant quantum Monte Carlo (DQMC) method. We have applied this novel algorithm to the 2D half-filled Hubbard model. In the strong-coupling limit, our method showcases a significant accuracy improvement compared to conventional methods that are derived from the internal energy. We also illustrate that this novel approach delivers highly efficient and precise measurements of the $n$-th R\'enyi entanglement entropy. Even more noteworthy is that this improvement comes without incurring an increase in computational complexity. The algorithm effectively suppresses exponential fluctuations and can be easily generalized to other models.
Quantum simulation of many-body systems, particularly using ultracold atoms and trapped ions, presents a unique form of quantum control -- it is a direct implementation of a multi-qubit gate generated by the Hamiltonian. As a consequence, it also faces a unique challenge in terms of benchmarking, because the well-established gate benchmarking techniques are unsuitable for this form of quantum control. Here we show that the symmetries of the target many-body Hamiltonian can be used to benchmark and even characterize experimental errors in the quantum simulation. We consider two forms of errors: (i) unitary errors arising from systematic errors in the applied Hamiltonian and (ii) canonical non-Markovian errors arising from random shot-to-shot fluctuations in the applied Hamiltonian. We show that the dynamics of the expectation value of the target Hamiltonian itself, which is ideally constant in time, can be used to characterize these errors. In the presence of errors, the expectation value of the target Hamiltonian shows characteristic thermalization dynamics when it satisfies the operator thermalization hypothesis (OTH): an oscillation at short times followed by relaxation to a steady-state value in the long-time limit. We show that while the steady-state value can be used to characterize the coherent errors, the amplitude of the oscillations can be used to estimate the non-Markovian errors. We develop two experimental protocols to characterize the unitary errors based on these results, one of which requires single-qubit addressing while the other does not. We also develop a protocol to detect and partially characterize non-Markovian errors.
Trapped ions are a promising technology for building scalable quantum computers. Not only can they provide a high qubit quality, but they also enable modular architectures, referred to as Quantum Charge Coupled Device (QCCD) architecture. Within these devices, ions can be shuttled (moved) throughout the trap and through different dedicated zones, e.g., a memory zone for storage and a processing zone for the actual computation. However, this movement incurs a cost in terms of required time steps, which increases the probability of decoherence, and, thus, should be minimized. In this paper, we propose a formalization of the possible movements in ion traps via Boolean satisfiability. This formalization allows for determining the minimal number of time steps needed for a given quantum algorithm and device architecture, hence reducing the decoherence probability. An empirical evaluation confirms that -- using the proposed approach -- minimal results (i.e., the lower bound) can be determined for the first time. An open-source implementation of the proposed approach is publicly available at https://github.com/cda-tum/mqt-ion-shuttler.
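To convey the flavor of such a satisfiability formalization (a toy model, not the paper's encoding), the sketch below uses the z3 solver to find the minimal number of time steps for a single ion moving along a line of zones: it asks, for increasing $T$, whether a $T$-step schedule satisfying the movement constraints exists.

```python
from z3 import And, Bool, Implies, Not, Or, Solver, sat  # pip install z3-solver

ZONES = 5  # zones 0..4; start in zone 0 (memory), goal is zone 4 (processing)

def feasible(T):
    s = Solver()
    at = [[Bool(f"at_{t}_{z}") for z in range(ZONES)] for t in range(T + 1)]
    for t in range(T + 1):
        s.add(Or(*at[t]))                              # ion is somewhere...
        for z1 in range(ZONES):
            for z2 in range(z1 + 1, ZONES):
                s.add(Not(And(at[t][z1], at[t][z2])))  # ...and only one place
    s.add(at[0][0], at[T][ZONES - 1])                  # start and goal zones
    for t in range(T):                                 # stay or move one zone
        for z in range(ZONES):
            nxt = [at[t + 1][n] for n in (z - 1, z, z + 1) if 0 <= n < ZONES]
            s.add(Implies(at[t][z], Or(*nxt)))
    return s.check() == sat

T = 0
while not feasible(T):
    T += 1
print("minimal number of time steps:", T)   # 4 for this toy instance
```

The actual formalization must additionally track many ions, the trap connectivity and shuttling conflicts, but the minimization loop over $T$ works the same way.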
Lasers with high spectral purity are indispensable for optical clocks and for the coherent manipulation of atomic and molecular qubits in applications such as quantum computing and quantum simulation. Stabilisation of the laser to a reference can provide a narrow linewidth and high spectral purity. However, widely used diode lasers exhibit fast phase noise that prevents high-fidelity qubit manipulation. Here we demonstrate a self-injection locked diode laser system utilizing a medium-finesse cavity. The cavity not only provides a stable resonance frequency, but also acts as a low-pass filter for phase noise beyond the cavity linewidth of around 100 kHz, resulting in low phase noise from dc to the injection-lock limit. We model the expected laser performance and benchmark it using a single trapped $^{40}$Ca$^{+}$ ion as a spectrum analyser. We show that the fast phase noise of the laser at relevant Fourier frequencies of 100 kHz to >2 MHz is suppressed to a noise floor of between -110 dBc/Hz and -120 dBc/Hz, an improvement of 20 to 30 dB over state-of-the-art Pound-Drever-Hall-stabilized extended-cavity diode lasers. This strong suppression avoids incoherent (spurious) spin flips during the manipulation of optical qubits and improves laser-driven gates when using diode lasers, with applications in quantum logic spectroscopy, quantum simulation and quantum computation.
We present a methodological argument to refute the so-called many-worlds interpretation (MWI) of quantum theory. Several known criticisms in the literature have already pointed out problematic aspects of this interpretation, such as the lack of a satisfactory account of probabilities, or the huge ontological cost of MWI. Our criticism, however, does not go into the technical details of any version of MWI, but is at the same time more general and more radical. We show, in fact, that a whole class of theories--of which MWI is a prime example--fails to satisfy some basic tenets of science which we call facts about natural science. The problem with approaches like MWI is that, in order to reproduce the observed empirical evidence about any concrete quantum measurement outcome, they require as a tacit assumption that the theory does in fact apply to an arbitrarily large range of phenomena, and ultimately to all phenomena. We call this fallacy the holistic inference loop, and we show that it is incompatible with the facts about natural science, rendering MWI untenable and dooming it to be refuted.
There has been a recent interest in proposing quantum protocols whose security relies on weaker computational assumptions than their classical counterparts. Importantly for our work, it has recently been shown that public-key encryption (PKE) from one-way functions (OWF) is possible if we consider quantum public keys. Notice that we do not expect classical PKE from OWF given the impossibility results of Impagliazzo and Rudich (STOC'89). However, the distribution of quantum public keys is a challenging task. Therefore, the main question that motivates our work is whether quantum PKE from OWF is possible if we have classical public keys. Such protocols are impossible if ciphertexts are also classical, given the impossibility result of Austrin et al. (CRYPTO'22) on quantum-enhanced key agreement (KA) with classical communication. In this paper, we focus on black-box separation for PKE with a classical public key and quantum ciphertext from OWF under the polynomial compatibility conjecture, first introduced by Austrin et al. More precisely, we show the separation when the decryption algorithm of the PKE does not query the OWF. We prove our result by extending the techniques of Austrin et al., and we show an attack on KA in an extended classical communication model where the last message in the protocol can be a quantum state.
Yuan and Feng [Eur. Phys. J. Plus 138:70, 2023] recently proposed a modification of the nested Mach-Zehnder interferometer experiment performed by Danan et al. [Phys. Rev. Lett. 111:240402, 2013] and argued that photons give "contradictory" answers about where they have been, when traces are locally imprinted on them in different ways. They concluded that their results are comprehensible from what they call the "three-path interference viewpoint", but difficult to explain from the "discontinuous trajectory" viewpoint advocated by Danan et al. We argue that the weak trace approach (the basis of the "discontinuous trajectory" viewpoint) provides a consistent explanation of the Yuan-Feng experiment. The contradictory messages of the photons are just another example of photons lying about where they have been when the experimental method of Danan et al. is applied in an inappropriate setup.
In broadband quantum optical systems, nonlinear interactions among a large number of frequency components induce complex dynamics that may defy heuristic analysis. In this work we introduce a perturbative framework for factoring out reservoir degrees of freedom and establishing a concise effective model (effective field theory) for the remaining system. Our approach combines approximate diagonalization of judiciously partitioned subsystems with master equation techniques. We consider cascaded optical $\chi^{(2)}$ (quadratic) nonlinearities as an example and show that the dynamics can be construed (to leading order) as self-phase modulations of dressed fundamental modes plus cross-phase modulations of dressed fundamental and second-harmonic modes. We then formally eliminate the second-harmonic degrees of freedom and identify emergent features of the fundamental wave dynamics, such as two-photon loss channels, and examine conditions for accuracy of the reduced model in dispersive and dissipative parameter regimes. Our results highlight the utility of system-reservoir methods for deriving accurate, intuitive reduced models for complex dynamics in broadband nonlinear quantum photonics.
Color centers in diamond are quantum systems with optically active spin states that show long coherence times and are therefore promising candidates for the development of efficient spin-photon interfaces. However, only a small portion of the emitted photons is generated by the coherent optical transition of the zero-phonon line (ZPL), which limits the overall performance of the system. Embedding these emitters in photonic crystal cavities improves the coupling to the ZPL photons and increases their emission rate. Here, we demonstrate the fabrication process of "Sawfish" cavities, a recently proposed design that has the experimentally realistic potential to simultaneously enhance the emission rate by a factor of 46 and couple photons into a single-mode fiber with an efficiency of 88%. The presented process allows for the fabrication of fully suspended devices with a total length of 20.5 $\mu$m and feature sizes as small as 40 nm. The optical characterization shows fundamental mode resonances that follow the behavior expected from the corresponding design parameters and quality (Q) factors as high as 3825. Finally, we investigate the effects of nanofabrication on the devices and show that, despite a noticeable erosion of the fine features, the measured cavity resonances deviate by only 0.9 (1.2)% from the corresponding simulated values. This proves that the Sawfish design is robust against fabrication imperfections, which makes it an attractive choice for the development of quantum photonic networks.
The non-Hermiticity of the system gives rise to distinct knot topology that has no Hermitian counterpart. Here, we report a comprehensive study of the knot topology in gapped non-Hermitian systems based on the universal dilation method with a long coherence time nitrogen-vacancy center in a $^{\text{12}}$C isotope purified diamond. Both the braiding patterns of energy bands and the eigenstate topology are revealed. Furthermore, the global biorthogonal Berry phase related to the eigenstate topology has been successfully observed, which identifies the topological invariance for the non-Hermitian system. Our method paves the way for further exploration of the interplay among band braiding, eigenstate topology and symmetries in non-Hermitian quantum systems.
Constructing Bell inequalities for n-partite, k-setting, d-dimensional (n,k,d) systems is both important and very difficult. Inspired by the iteration-formula form of the Mermin-Ardehali-Belinski{\u{\i}}-Klyshko (MABK) inequality, we generalize the multi-component correlation functions for bipartite d-dimensional systems to n-partite ones, and construct the corresponding Bell inequality. The Collins-Gisin-Linden-Massar-Popescu inequality can be reproduced in this way. The most important result is that, for prime d, the general Bell function in full-correlated multi-component correlation function form for (n,2,d) systems can be reformulated as an iteration formula in terms of two full-correlated multi-component Bell functions for (n-1,2,d) systems. As applications, we recover the MABK inequality and the most robust coincidence Bell inequalities for the (3,2,3), (4,2,3), (5,2,3), and (3,2,5) Bell scenarios with this iteration formula. This implies that the iteration formula is an efficient way of constructing multi-partite Bell inequalities. In addition, we also give some new Bell inequalities with the same robustness but inequivalent to the known ones.
The utility of a quantum computer depends heavily on the ability to reliably perform accurate quantum logic operations. For finding optimal control solutions, it is of particular interest to explore model-free approaches, since their quality is not constrained by the limited accuracy of theoretical models for the quantum processor - in contrast to many established gate implementation strategies. In this work, we utilize a continuous-control reinforcement learning algorithm to design entangling two-qubit gates for superconducting qubits; specifically, our agent constructs cross-resonance and CNOT gates without any prior information about the physical system. Using a simulated environment of fixed-frequency, fixed-coupling transmon qubits, we demonstrate the capability to generate novel pulse sequences that outperform the standard cross-resonance gates in both fidelity and gate duration, while maintaining a comparable susceptibility to stochastic unitary noise. We further showcase an augmentation in training and input information that allows our agent to adapt its pulse design abilities to drifting hardware characteristics, importantly with little to no additional optimization. Our results exhibit clearly the advantages of unbiased adaptive-feedback learning-based optimization methods for transmon gate design.
When strongly pumped at twice their resonant frequency, non-linear resonators develop a high-amplitude intracavity field, a phenomenon known as parametric self-oscillation. The boundary at which this instability occurs can be extremely sharp and thereby presents an opportunity for realizing a detector. Here we operate such a device based on a superconducting microwave resonator whose non-linearity is engineered from kinetic inductance. The device indicates the absorption of low-power microwave wavepackets by transitioning to a self-oscillating state. Using calibrated wavepackets, we measure the detection efficiency at zeptojoule wavepacket energies. We then apply the device to measurements of electron spin resonance, using an ensemble of $^{209}$Bi donors in silicon that are inductively coupled to the resonator. We achieve a latched readout of the spin signal with an amplitude five hundred times greater than that of the underlying spin echoes.
Cross-platform verification, a critical undertaking in the realm of early-stage quantum computing, endeavors to characterize the similarity of two imperfect quantum devices executing identical algorithms, utilizing minimal measurements. While the random measurement approach has been instrumental in this context, its quasi-exponential computational demand with increasing qubit count hinders its feasibility in large-qubit scenarios. To bridge this knowledge gap, here we introduce an innovative multimodal learning approach, recognizing that the data in this task embody two distinct modalities: measurement outcomes and the classical description of compiled circuits on the explored quantum devices, both enriched with unique information. Building upon this insight, we devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation. The learned representation can effectively characterize the similarity between the explored quantum devices when executing new quantum algorithms not present in the training data. We evaluate our proposal on platforms featuring diverse noise models, encompassing system sizes up to 50 qubits. The achieved results demonstrate a three-orders-of-magnitude improvement in prediction accuracy compared to random measurements and offer compelling evidence of the complementary roles played by each modality in cross-platform verification. These findings pave the way for harnessing the power of multimodal learning to overcome challenges in wider quantum system learning tasks.
Accurate simulations of vibrational molecular spectra are expensive on conventional computers. Compared to the electronic structure problem, the vibrational structure problem with quantum computers is less investigated. In this work we accurately estimate the quantum resources, such as the number of qubits and quantum gates, required for vibrational structure calculations on a programmable quantum computer. Our approach is based on quantum phase estimation and focuses on fault-tolerant quantum devices. In addition to asymptotic estimates for generic chemical compounds, we present a more detailed analysis of the quantum resources needed for the simulation of the Hamiltonian arising in the vibrational structure calculation of acetylene-like polyynes of interest. Leveraging nested commutators, we provide an in-depth quantitative analysis of Trotter errors compared to prior investigations. Ultimately, this work serves as a guide for analyzing the potential quantum advantage within vibrational structure simulations.
In this work we first examine the hardness of solving various search problems by hybrid quantum-classical strategies, namely, by algorithms that have both quantum and classical capabilities. We then construct a hybrid quantum-classical search algorithm and analyze its success probability. Regarding the former, for search problems that are allowed to have multiple solutions and in which the input is sampled according to arbitrary distributions, we establish their hybrid quantum-classical query complexities -- i.e., given a fixed number of classical and quantum queries, we determine the probability of solving the search task. At a technical level, our results generalize the framework for hybrid quantum-classical search algorithms proposed by Rosmanis. Namely, for an arbitrary distribution $D$ on Boolean functions, the probability that an algorithm equipped with $\tau_c$ classical and $\tau_q$ quantum queries succeeds in finding a preimage of $1$ for a function sampled from $D$ is at most $\nu_D \cdot(2\sqrt{\tau_c} + 2\tau_q + 1)^2$, where $\nu_D$ captures the average (over $D$) fraction of preimages of $1$. As applications of our hardness results, we first revisit and generalize the security of the Bitcoin protocol called the Bitcoin backbone to a setting where the adversary has both quantum and classical capabilities, presenting a new hybrid honest majority condition necessary for the protocol to properly operate. Secondly, we examine the generic security of hash functions against hybrid adversaries. Regarding our second contribution, we design a hybrid algorithm which first spends all of its classical queries and in the second stage runs a ``modified Grover'' where the initial state depends on the distribution $D$. We show how to analyze its success probability for arbitrary target distributions and, importantly, its optimality for the uniform and the Bernoulli distribution cases.
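The quoted bound is easy to evaluate. The toy calculation below plugs in a uniform distribution with a single preimage of $1$ among $2^n$ inputs (so $\nu_D = 2^{-n}$) and shows that classical queries enter only through $\sqrt{\tau_c}$ while quantum queries enter linearly; the parameter choices are illustrative.

```python
import math

def hybrid_success_bound(nu_D, tau_c, tau_q):
    """nu_D * (2*sqrt(tau_c) + 2*tau_q + 1)^2, capped at 1 since it bounds a probability."""
    return min(1.0, nu_D * (2 * math.sqrt(tau_c) + 2 * tau_q + 1) ** 2)

n = 40
nu_D = 2.0 ** -n       # one marked input among 2^n

# 2^20 classical queries: bound ~ 4 * 2^20 * 2^-40 = 2^-18 (still tiny).
print(hybrid_success_bound(nu_D, tau_c=2**20, tau_q=0))
# 2^20 quantum queries: bound ~ 4 * 2^40 * 2^-40, i.e. already capped at 1.
print(hybrid_success_bound(nu_D, tau_c=0, tau_q=2**20))
```

This quadratic separation between the two query types is what drives the new hybrid honest-majority condition in the Bitcoin backbone analysis.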
A host of dynamical measures of quantum correlations -- out-of-time-ordered correlators, Loschmidt echo, generalized entanglement and observational entropy -- are useful for inferring the underlying classical chaotic dynamics in the quantum regime. In this work, these measures are employed to analyse the quantum kicked top with kick strength $k$. It is shown that, despite the differences in their definitions, these measures are periodic in $k$, and that the periodicity depends on the number of spins represented by the kicked top. The periodic behaviour arises from the structure of the kicked-top Floquet operator and spans the regime in which the corresponding classical dynamics is predominantly chaotic. This result can guide experiments towards the right choice of kick strengths to avoid repetitive dynamics.
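The claimed periodicity can be checked directly on the Floquet operator. A minimal sketch with QuTiP: for integer $j$ (i.e. an even number of spins $N = 2j$) the kick phases $k m^2/2j$ are invariant under $k \to k + 4\pi j$, so $F(k + 4\pi j) = F(k)$, a period that grows with the number of spins (though not necessarily the minimal one). The kick strength and spin size below are illustrative.

```python
import numpy as np
from qutip import jmat

j = 5                                    # collective spin j = N/2, i.e. N = 10 spins
p = np.pi / 2
Jy, Jz = jmat(j, 'y'), jmat(j, 'z')

def floquet(k):
    # Kicked-top Floquet operator: a twist about z followed by a rotation about y.
    return (-1j * k * Jz**2 / (2 * j)).expm() * (-1j * p * Jy).expm()

k = 3.7                                  # arbitrary kick strength
period = 4 * np.pi * j                   # candidate j-dependent period
diff = (floquet(k) - floquet(k + period)).norm()
print(f"|| F(k) - F(k + 4*pi*j) || = {diff:.2e}")   # ~0 up to rounding
```

Since the listed dynamical measures are all functions of the Floquet operator, they inherit this periodicity in $k$.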
The non-Hermitian skin effect (NHSE), indicating the breakdown of conventional bulk-boundary correspondence, is one of the most intriguing phenomena of non-Hermitian systems. Previous realizations of the NHSE typically require unequal left-right couplings or on-site gain and loss. Here we propose a new mechanism for realizing the NHSE via dissipative couplings, in which the left-right couplings have equal strengths but phases that do not satisfy complex conjugation. When combined with multi-channel interference provided by a periodic dissipative-coherent coupling structure, the dissipative couplings can lead to unequal left-right couplings, inducing the NHSE. Moreover, we show that the non-Hermiticity induced by dissipative couplings can be fully transformed into nonreciprocity-type non-Hermiticity without adding extra gain-loss-type non-Hermiticity. Thus, this mechanism enables unidirectional energy transmission without introducing additional insertion loss. Our work opens a new avenue for the study of non-Hermitian topological effects and the design of directional optical networks.
Optimization of circuits is an essential task for both quantum and classical computers to improve their efficiency. Classical logic optimization is known to be difficult, however, and a lot of heuristic approaches have been developed so far. In this study, we define and construct a quantum algorithmic primitive called quantum circuit unoptimization, which makes a given quantum circuit more complex by introducing redundancies while preserving circuit equivalence, i.e., the inverse operation of circuit optimization. Using quantum circuit unoptimization, we propose the quantum circuit equivalence test, a decision problem contained in both the NP and BQP classes. Furthermore, as a practical application, we construct concrete unoptimization recipes to generate compiler benchmarks and evaluate circuit optimization performance using Qiskit and Pytket. Our numerical simulations demonstrate that the quantum circuit unoptimizer systematically generates redundant circuits that are challenging for compilers to optimize, which can be used to compare the performance of different compilers and to improve them. We also offer potential applications of quantum circuit unoptimization, such as generating quantum-advantageous machine learning datasets and quantum computer fidelity benchmarks.
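A minimal sketch of the unoptimization idea (not the paper's concrete recipes): pad a circuit with randomly placed gate/inverse pairs, which preserves equivalence, then ask a compiler to re-optimize and compare depths. Qiskit's transpile plays the role of the compiler under test; the gate choices are illustrative.

```python
import random
from qiskit import QuantumCircuit, transpile

def unoptimize(circ, n_pairs=20, seed=7):
    """Pad `circ` with redundant gate/inverse pairs; the result is
    equivalent to `circ` but syntactically more complex."""
    rng = random.Random(seed)
    out = circ.copy()
    for _ in range(n_pairs):
        q = rng.randrange(circ.num_qubits)
        theta = rng.uniform(0, 3.14)
        out.rz(theta, q)
        out.cx(q, (q + 1) % circ.num_qubits)
        out.cx(q, (q + 1) % circ.num_qubits)   # CX pair cancels to identity
        out.rz(-theta, q)                      # undoes the first rotation
    return out

qc = QuantumCircuit(3)
qc.h(0); qc.cx(0, 1); qc.cx(1, 2)

redundant = unoptimize(qc)
recovered = transpile(redundant, basis_gates=["rz", "sx", "cx"],
                      optimization_level=3)
print("original depth:    ", qc.depth())
print("unoptimized depth: ", redundant.depth())
print("re-optimized depth:", recovered.depth())
```

How closely the re-optimized depth returns to the original is exactly the kind of compiler benchmark the paper proposes, with recipes designed to be much harder to undo than this naive padding.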
Based on the Luttinger-Kohn Hamiltonian in the axial approximation, the transcendental equation determining the hole subband dispersions in a cylindrical Ge nanowire is analytically derived. This equation is more general than that derived using the spherical approximation, and is suitable for studying the growth-direction dependence of the subband dispersions. The axial approximation gives nearly exact low-energy subband dispersions for the high-symmetry nanowire growth directions [001] and [111]. The perturbation corrections from the non-axial terms are negligible for these two directions. The lowest two subband dispersions can be regarded as two shifted parabolic curves with an energy gap at $k_{z}=0$ for both growth directions [001] and [111]. At the position of the energy gap, the eigenstates for growth direction [111] are inverted relative to those for growth direction [001]. A nanowire growth direction at which the energy gap at $k_{z}=0$ closes is predicted to exist between the directions [001] and [111].
We investigate the scattering processes of two photons in a one-dimensional (1D) waveguide coupled to two giant atoms. By adjusting the accumulated phase shifts between the coupling points, we are able to effectively manipulate the characteristics of the scattered photons. Utilizing the Lippmann-Schwinger (LS) formalism, we derive analytical expressions for the wavefunctions describing two photons interacting in the separate, braided and nested configurations. Based on these wavefunctions, we also obtain analytical expressions for the incoherent power spectra and second-order correlation functions. In contrast to small atoms, the incoherent spectrum, which is defined by the correlation of the bound state, can exhibit four distinct peaks and a wider frequency range. Additionally, the second-order correlation functions in the transmission and reflection fields can be tuned to exhibit either bunching or antibunching under resonant driving, a behavior that is not possible with small atoms. These unique features offered by giant atoms in waveguide QED could benefit the generation of non-classical itinerant photons in quantum networks.
With its rich dynamics, the quantum harmonic oscillator is an innate platform for understanding real-world quantum systems, and could even excel as the heart of a quantum computer. A particularly promising and rapidly advancing platform that harnesses quantum harmonic oscillators for information processing is the bosonic circuit quantum electrodynamics (cQED) system. In this article, we provide perspectives on the progress, challenges, and future directions in building a bosonic cQED quantum computer. We describe the main hardware building blocks and how they facilitate quantum error correction, metrology, and simulation. We conclude with our views of the key challenges that lie on the horizon, as well as scientific and cultural strategies for overcoming them and building a practical quantum computer with bosonic cQED hardware.
Optical tweezers use light from a tightly focused laser beam to manipulate the motion of tiny particles. When the laser light is strongly focused, but still paraxial, its electromagnetic field is characterized by a longitudinal component which is of magnitude comparable to the transverse ones and which has been ignored in theoretical analyses of the tweezing forces. In our work we revisit the calculations of the various components of the radiation-pressure force, within the limits of the Rayleigh regime or dipole approximation, for the case where a tiny particle interacts, in free space, with a circularly polarized optical vortex beam, taking into account this ignored field term. We show that this term is responsible for considerable modifications in the magnitude of the various components, and also for the appearance of terms involving the coupling of the spin angular momentum (SAM) and the orbital angular momentum (OAM) of the photons of the vortex beam. We compare our findings with those obtained by ignoring the longitudinal field component.
Consider an open quantum system which interacts with its environment. Assuming that the experimenter has access only to the system, an interesting question is whether it is possible to detect initial correlations between the system and the environment by performing measurements only on the system. Various methods have been proposed to detect such correlations by local measurements on the system. After reviewing these methods, we show that correlations between the system and the environment are always detectable. In particular, we show that one can always find a unitary evolution, for the whole system-environment, such that the trace-distance method, proposed to witness correlations locally, succeeds. Then, we find the condition for the existence of an optimal unitary evolution, for which the entire correlation is locally detectable. Finally, we consider the case in which the system and the environment interact through a time-independent Hamiltonian, and show that, in this case, correlations can be undetectable by local measurements on the system.
The imperfection of measurements in real-world scenarios can compromise the performance of device-independent quantum key distribution (DIQKD) protocols. In this paper, we focus on a specific DIQKD protocol that relies on the violation of Svetlichny's inequality (SI), considering an eavesdropper utilizing the convex combination attack. Our analysis covers both the three-party DIQKD case and the general $n$-party scenario. Our main result is the relationship between the measurement accuracy and the extractable secret-key rate in all multi-party scenarios. The result demonstrates that as measurement accuracy improves, the extractable secret-key rate approaches $1$, reaching its maximum value when the measurement accuracy is perfect. We reveal that achieving positive extractable secret-key rates requires a threshold measurement accuracy that is consistently higher than the critical measurement accuracy necessary to violate the SI. We depict these thresholds for $n$-party scenarios ranging from $n=3$ to $n=10$, demonstrating that as the number of parties ($n$) increases, both thresholds exhibit a rapid and monotonic convergence towards unity. Furthermore, we consider a scenario involving a non-maximally entangled state with imperfect measurements, where the emission of the initial GHZ state undergoes noise during transmission, resulting in a Werner state. The study further quantifies and demonstrates the relationship between the extractable secret-key rate, the visibility of the Werner state, and the measurement accuracy, specifically emphasizing the three-party scenario. This study aims to illuminate the influence of imperfect measurement accuracy on the security and performance of multi-party DIQKD protocols. The results emphasize the importance of high measurement accuracy in achieving positive secret-key rates and maintaining the violation of the SI.
We show how the directional collective response of atomic arrays to light can be exploited for the dissipative generation of entangled atomic states, relevant for e.g. quantum metrology. We consider an atomic array illuminated by a paraxial beam of a squeezed-vacuum field and demonstrate that quantum-squeezing correlations are dissipatively transferred to the array atoms, resulting in an atomic spin-squeezed steady state. We find that the entanglement transfer efficiency and hence the degree of spin squeezing are determined by the resonant optical reflectivity of the array. Considering realistic cases of finite-size array and illuminating beam, we find how the spin-squeezing strength scales with system parameters, such as the number of layers in the array and its spatial overlap with the beam. We discuss applications in atomic clocks both in optical and microwave domains.
This paper proposes an efficient stabilizer circuit simulation algorithm that only traverses the circuit forward once. We introduce phase symbolization into stabilizer generators, which allows possible Pauli faults in the circuit to be accumulated explicitly as symbolic expressions in the phases of stabilizer generators. This way, the measurement outcomes are also symbolic expressions, and we can sample them by substituting the symbolic variables with concrete values, without traversing the circuit repeatedly. We show how to integrate symbolic phases into the stabilizer tableau and maintain them efficiently using bit-vector encoding. A new data layout of the stabilizer tableau in memory is proposed, which improves the performance of our algorithm (and other stabilizer simulation algorithms based on the stabilizer tableau). We implement our algorithm and data layout in a Julia package named \texttt{SymPhase.jl}, and compare it with Stim, the state-of-the-art simulator, on several benchmarks. We show that \texttt{SymPhase.jl} has superior performance in terms of sampling time, which is crucial for generating a large number of samples for further analysis.
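For reference, the baseline workflow that SymPhase.jl is benchmarked against looks as follows in Stim: the stabilizer circuit, including possible Pauli faults, is compiled once and then sampled repeatedly. The two-qubit circuit below is an illustrative example, not one of the paper's benchmarks.

```python
import stim

circuit = stim.Circuit("""
    H 0
    CNOT 0 1
    X_ERROR(0.1) 0 1   # possible Pauli faults on both qubits
    M 0 1
""")

sampler = circuit.compile_sampler()
samples = sampler.sample(shots=8)   # one row of measurement outcomes per shot
print(samples.astype(int))
```

The symbolic-phase approach instead keeps the fault variables unevaluated inside the stabilizer tableau, so drawing a new sample amounts to substituting concrete values into the symbolic measurement expressions rather than simulating again.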
We study the topological entanglement entropy and scalar chirality of a topologically ordered skyrmion formed in a two-dimensional triangular lattice. Scalar chirality remains a smooth function of the magnetic field in both helical and quantum skyrmion phases. In contrast, topological entanglement entropy remains almost constant in the quantum skyrmion phase, whereas it experiences enhanced fluctuations in the helical phase. Therefore, topological entanglement entropy is an effective tool to distinguish between the two phases and pinpoint the quantum phase transition in the system.
The idea of making photons effectively interact has attracted a lot of interest in recent years, for several reasons. Firstly, since photons do not naturally interact with each other, it is of fundamental physical interest to see what kind of medium can mediate interactions between these fundamental and non-interacting particles, and to what extent. Secondly, photonics is a major candidate for future quantum technology, due to the easy manipulation, readout, and transport of photons, which makes them ideal for quantum information processing. Finally, achieving strong and tunable interactions among photons would open up an avenue for exploring the many-body physics of a fluid of light. In this thesis, we will see how a cavity formed of subwavelength lattices of two-level atoms can confine photons to a nonlinear environment for a long time, such that emitted photons have accumulated strong correlations both among their momenta and in their temporal statistics. This speaks of a strong photon-photon interaction within the cavity. The nonlinearity originates in the saturability of individual atoms, and the lattice structure results in a strong and low-loss collective interaction with light. While a single atomic lattice has a largely linear nature, as the effect of individual atoms washes out in the collective response, the confining geometry of the cavity means the photons are exposed to the underlying saturability of the atoms for such a long time that the nonlinearity is revived. We will analyse this system both using a standard input-output formalism, where the nonlinear physics of the system is handled numerically, and a powerful Green's function-based approach that allows for exact analytical results with no additional approximations. This analytical description has the potential to lead to an exact study of the many-body physics of interacting photons in a two-dimensional setting.
Weak values of quantum observables are a powerful tool for investigating a broad spectrum of quantum phenomena. For this reason, several methods to measure them in the laboratory have been proposed. Some of these methods require weak interactions and postselection, while others are deterministic, but require statistics over a number of experiments growing exponentially with the number of measured particles. Here we propose a deterministic dimension-independent scheme for estimating weak values of arbitrary observables. The scheme, based on coherently controlled SWAP operations, does not require prior knowledge of the initial and final states, nor of the measured observables, and therefore can work with uncharacterized preparation and measurement devices. As a byproduct, our scheme provides an alternative expression for two-time states, that is, states describing quantum systems subject to pre- and post-selection. Using this expression, we show that the controlled-SWAP scheme can be used to estimate weak values for a class of two-time states associated with bipartite quantum states with positive partial transpose.
Quantum signal processing (QSP) and the quantum singular value transformation (QSVT) are pivotal tools for simplifying the development of quantum algorithms. These techniques leverage polynomial transformations on the eigenvalues or singular values of block-encoded matrices, achieved with the use of just one control qubit. However, the degree of the polynomial transformations scales linearly with the length of the QSP protocol. In this work, we extend the original QSP ansatz by introducing multiple control qubits. Assuming that powers of two of the matrix to transform are easily implementable - as in Shor's factoring algorithm - we can achieve polynomial transformations with degrees that scale exponentially with the number of control qubits. This work aims to provide a partial characterization of the polynomials that can be implemented using this approach, with the original phase estimation circuit and discrete logarithm serving as illustrative examples.
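To make the setting concrete, the sketch below implements the standard single-control-qubit QSP sequence as plain $2\times2$ matrix products and checks that, with trivial phases, $\langle 0|U|0\rangle$ reduces to the degree-$d$ Chebyshev polynomial $T_d(x)$. This illustrates the baseline ansatz whose polynomial degree grows linearly with the protocol length; it is not the multi-control-qubit construction proposed in the paper.

```python
import numpy as np

def qsp_unitary(x, phases):
    """QSP sequence e^{i phi_0 Z} * prod_k [ W(x) e^{i phi_k Z} ],
    where W(x) is the signal unitary with <0|W(x)|0> = x."""
    s = np.sqrt(1 - x**2)
    W = np.array([[x, 1j * s], [1j * s, x]])
    U = np.diag([np.exp(1j * phases[0]), np.exp(-1j * phases[0])])
    for phi in phases[1:]:
        U = U @ W @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    return U

# With all phases zero, U = W(x)^d and <0|U|0> = cos(d * arccos x) = T_d(x).
d = 5
for x in np.linspace(-0.9, 0.9, 7):
    P = qsp_unitary(x, np.zeros(d + 1))[0, 0].real
    assert abs(P - np.cos(d * np.arccos(x))) < 1e-10
print(f"trivial-phase QSP of length {d} reproduces the Chebyshev polynomial T_{d}")
```

In the paper's setting, if powers $W^{2^k}$ of the signal unitary are cheap to implement, the reachable polynomial degree instead grows exponentially with the number of control qubits.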
Two grapes irradiated inside a microwave (MW) oven typically produce a series of sparks and can ignite a violent plasma. The underlying cause of the plasma has been attributed to the formation of morphological-dependent resonances (MDRs) in the aqueous dielectric dimers that lead to the generation of a strong evanescent MW hotspot between them. Previous experiments have focused on the electric-field component of the field as the driving force behind the plasma ignition. Here we couple an ensemble of nitrogen-vacancy (NV) spins in nanodiamonds (NDs) to the magnetic-field component of the dimer MW field. We demonstrate the efficient coupling of the NV spins to the MW magnetic-field hotspot formed between the grape dimers using Optically Detected Magnetic Resonance (ODMR). The ODMR measurements are performed by coupling NV spins in NDs to the evanescent MW fields of a copper wire. When placing a pair of grapes around the NDs and matching the ND position with the expected magnetic-field hotspot, we see an enhancement in the ODMR contrast by more than a factor of two compared to the measurements without grapes. Using finite-element modelling, we attribute our experimental observation of the field enhancement to the MW hotspot formation between the grape dimers. The present study not only validates previous work on understanding grape-dimer resonator geometries, but it also opens up a new avenue for exploring novel MW resonator designs for quantum technologies.
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path. While interior point methods follow the central path with an iterative algorithm that works with successive linearizations of the perturbed KKT conditions, we perform a single simulation working directly with the nonlinear complementarity equations. Combining our approach with iterative refinement techniques, we obtain an exact solution to a linear optimization problem involving $m$ constraints and $n$ variables using at most $\mathcal{O} \left( (m + n) \text{nnz} (A) \kappa (\mathcal{M}) L \cdot \text{polylog} \left(m, n, \kappa (\mathcal{M}) \right) \right)$ elementary gates and $\mathcal{O} \left( \text{nnz} (A) L \right)$ classical arithmetic operations, where $ \text{nnz} (A)$ is the total number of non-zero elements found in the constraint matrix, $L$ denotes binary input length of the problem data, and $\kappa (\mathcal{M})$ is a condition number that depends only on the problem data.
Quantum Key Distribution (QKD) is a prominent application in the field of quantum cryptography providing information-theoretic security for secret key exchange. The implementation of QKD systems on photonic integrated circuits (PICs) can reduce the size and cost of such systems and facilitate their deployment in practical infrastructures. To this end, continuous-variable (CV) QKD systems are particularly well-suited as they do not require single-photon detectors, whose integration is presently challenging. Here we present a CV-QKD receiver based on a silicon PIC capable of performing balanced detection. We characterize its performance in a laboratory QKD setup using a frequency multiplexed pilot scheme with specifically designed data processing allowing for high modulation and secret key rates. The obtained excess noise values are compatible with asymptotic secret key rates of 2.4 Mbit/s and 220 kbit/s at an emulated distance of 10 km and 23 km, respectively. These results demonstrate the potential of this technology towards fully integrated devices suitable for high-speed, metropolitan-distance secure communication.
A future quantum internet is expected to generate, distribute, store and process quantum bits (qubits) over the globe by linking different quantum nodes via quantum states of light. To facilitate long-haul operation, quantum repeaters, the building blocks of a long-distance quantum network, have to operate at telecom wavelengths to take advantage of both the low-loss fiber network and the well-established technologies for optical communications. Semiconductor quantum dots (QDs) have so far exhibited exceptional performance as key elements for quantum repeaters, i.e., quantum light sources and spin-photon interfaces, but only in the near-infrared (NIR) regime. The development of high-performance telecom-band QD devices is therefore highly desirable for a future solid-state quantum internet based on fiber networks. In this review, we present the physics and the technological developments towards epitaxial QD devices emitting at the telecom O- and C-bands for quantum networks, using advanced epitaxial growth for direct telecom emission and quantum frequency conversion (QFC) for telecom-band down-conversion. We also discuss the challenges and opportunities ahead for realizing telecom QD devices with improved performance and expanded functionalities by taking advantage of hybrid integration.
Low latency and low power consumption are key goals for future networks. Fiber optics is already widely deployed for its high transmission speed. We investigate whether optical decoding offers further advantages towards these goals. We compare the decoding latency and power consumption of an optical chip with those of its electronic counterpart built from MOSFETs, and find that optical processing offers benefits in both speed and power consumption. For future networks and real-time applications, this can bring substantial advantages over current electronic processors.
Variational quantum algorithms are among the most promising approaches to solving optimization problems on quantum computers in the noisy intermediate-scale era of quantum computing. The Quantum Alternating Operator Ansatz provides an algorithmic framework for constrained, combinatorial optimization problems. As opposed to the better-known standard QAOA protocol, the constraints of the optimization problem are built into the mixing layers of the ansatz circuit, thereby limiting the search to the much smaller Hilbert space of feasible solutions. In this work we develop mixing operators for a wide range of scheduling problems, including the flexible job shop problem. These mixing operators are based on a special control scheme defined by a constraint graph model. After giving an explicit construction of these mixing operators, we prove that they preserve feasibility and explore the feasible subspace.
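As a concrete illustration of a feasibility-preserving mixer (the textbook XY mixer for one-hot or fixed-Hamming-weight encodings, not the paper's constraint-graph construction), the following sketch verifies that the mixing Hamiltonian commutes with the total excitation number and therefore never leaves the feasible subspace:

    import numpy as np
    from functools import reduce

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.diag([1, -1]).astype(complex)
    I2 = np.eye(2, dtype=complex)

    def kron_all(ops):
        return reduce(np.kron, ops)

    def xy_mixer(n, i, j):
        # (X_i X_j + Y_i Y_j)/2 hops an excitation between qubits i and j,
        # conserving the Hamming weight of the computational basis state.
        ox = [I2] * n; ox[i] = X; ox[j] = X
        oy = [I2] * n; oy[i] = Y; oy[j] = Y
        return 0.5 * (kron_all(ox) + kron_all(oy))

    n = 3
    # Total excitation number operator: sum_i |1><1|_i
    N_op = sum(kron_all([I2] * k + [(I2 - Z) / 2] + [I2] * (n - k - 1))
               for k in range(n))
    H = xy_mixer(n, 0, 1)
    print(np.allclose(H @ N_op, N_op @ H))  # True: feasibility is preserved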
Standard cavity cooling of atoms or dielectric particles is based on the action of dispersive optical forces in high-finesse cavities. We investigate here a complementary regime characterized by large cavity losses, resembling the standard Doppler cooling technique. For a single two-level emitter, a modification of the cooling rate is obtained from the Purcell enhancement of spontaneous emission in the large-cooperativity limit. This mechanism is aimed at cooling quantum emitters without closed transitions, as is the case for molecular systems, where the Purcell effect can mitigate the loss of population from the cooling cycle. We extend our analytical formulation to the many-particle case governed by weak individual coupling but exhibiting collective strong Purcell enhancement to a cavity mode.
Hybrid quantum-classical algorithms appear to be the most promising approach for near-term quantum applications. An important bottleneck is the classical optimization loop, where multiple local minima and the emergence of barren plateaus make these approaches less appealing. To improve the optimization, the Quantum Natural Gradient (QNG) method [Quantum 4, 269 (2020)] was introduced, which uses information about the local geometry of the quantum state-space. While QNG-based optimization is promising, each step requires more quantum resources, since computing the QNG requires $O(m^2)$ quantum state preparations, where $m$ is the number of parameters in the parameterized circuit. In this work we propose two methods that reduce the resources/state preparations required for QNG, while keeping the advantages and performance of QNG-based optimization. Specifically, we first introduce the Random Natural Gradient (RNG), which uses random measurements and the classical Fisher information matrix (as opposed to the quantum Fisher information used in QNG). The required quantum resources reduce to linear $O(m)$ and thus offer a quadratic "speed-up", while in our numerical simulations RNG matches QNG in terms of accuracy. We give some theoretical arguments for RNG and then benchmark the method against QNG on both classical and quantum problems. Secondly, inspired by stochastic-coordinate methods, we propose a novel approximation to the QNG, which we call the Stochastic-Coordinate Quantum Natural Gradient, that optimizes only a small (randomly sampled) fraction of the total parameters at each iteration. This method also performs equally well in our benchmarks, while using fewer resources than the QNG.
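For orientation, QNG and the proposed RNG share the same preconditioned update rule; they differ in which Fisher-type matrix is estimated and at what quantum cost (an illustrative sketch, not the authors' code):

    import numpy as np

    def natural_gradient_step(theta, grad, fisher, eta=0.1, reg=1e-6):
        # theta <- theta - eta * F^{-1} grad, where F is a Fisher-type metric:
        # the quantum Fisher information for QNG (O(m^2) state preparations),
        # or the classical Fisher information estimated from random
        # measurements for RNG (O(m) state preparations). The small
        # regularizer keeps the solve stable when F is near-singular.
        F = fisher + reg * np.eye(len(theta))
        return theta - eta * np.linalg.solve(F, grad)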
In this study, we simulated the algorithmic performance of a small neutral atom quantum computer and compared its performance when operating with all-to-all versus nearest-neighbor connectivity. This comparison was made using a suite of algorithmic benchmarks developed by the Quantum Economic Development Consortium. Circuits were simulated with a noise model consistent with experimental data from Nature 604, 457 (2022). We find that all-to-all connectivity improves simulated circuit fidelity by $10\%-15\%$, compared to nearest-neighbor connectivity.
Protein folding processes are a vital aspect of molecular biology that is hard to simulate with conventional computers. Quantum algorithms have been proven superior for certain problems and may help tackle this complex life science challenge. We analyze the resource requirements for simulating protein folding on a quantum computer, assessing this problem's feasibility in the current and near-future technological landscape. We calculate the minimum number of qubits, interactions, and two-qubit gates necessary to build a heuristic quantum algorithm with the specific information of a folding problem. In particular, we focus on the resources needed to build quantum operations based on the Hamiltonian linked to the protein folding models for a given amino acid count. Such operations are a fundamental component of these quantum algorithms, guiding the evolution of the quantum state for efficient computations. Specifically, we study coarse-grained folding models on the lattice and the fixed-backbone side-chain conformation model, and assess their compatibility with the constraints of existing quantum hardware given different bit-encodings. We conclude that the number of qubits required falls within current technological capabilities. However, the limiting factor is the high number of interactions in the Hamiltonian, resulting in a quantum gate count unavailable today.
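As a rough illustration of how such qubit counts arise (a hypothetical turn-based encoding; the paper's exact counts depend on the chosen model and bit-encoding), each free turn of the chain on a lattice with q directions can be assigned ceil(log2 q) qubits:

    import math

    def qubits_for_lattice_fold(n_residues, n_directions=4):
        # Rough estimate for a turn-based coarse-grained lattice encoding:
        # the first bond can be fixed by symmetry, leaving (n_residues - 2)
        # free turns, each encoded in ceil(log2(n_directions)) qubits.
        bits_per_turn = math.ceil(math.log2(n_directions))
        return (n_residues - 2) * bits_per_turn

    print(qubits_for_lattice_fold(10))  # 16 qubits for a 10-residue chain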
Spin glasses are canonical examples of complex matter. Although much about their structure remains uncertain, they inform the description of a wide array of complex phenomena, ranging from magnetic ordering in metals with impurities to aspects of evolution, protein folding, climate models, combinatorial optimization, and artificial intelligence. Indeed, spin glass theory forms a mathematical basis for neuromorphic computing and brain modeling. Advancing experimental insight into their structure requires repeatable control over microscopic degrees of freedom. Here, we achieve this at the atomic level using a quantum-optical system composed of ultracold gases of atoms coupled via photons resonating within a confocal cavity. This active quantum gas microscope realizes an unusual type of transverse-field vector spin glass with all-to-all connectivity. Spin configurations are observed in cavity emission and reveal the emergence of replica symmetry breaking and nascent ultrametric structure as signatures of spin-glass order. The driven-dissipative nature of the system manifests as a nonthermal Parisi distribution, in qualitative correspondence with Monte Carlo simulations. The controllability provided by this new spin-glass system, potentially down to the quantum-spin level, enables the study of spin-glass physics in novel regimes with application to quantum neural network computing.
Nonlinear differential equations exhibit rich phenomena in many fields but are notoriously challenging to solve. Recently, Liu et al. [1] demonstrated the first efficient quantum algorithm for dissipative quadratic differential equations under the condition $R < 1$, where $R$ measures the ratio of nonlinearity to dissipation using the $\ell_2$ norm. Here we develop an efficient quantum algorithm based on [1] for reaction-diffusion equations, a class of nonlinear partial differential equations (PDEs). To achieve this, we improve upon the Carleman linearization approach introduced in [1] to obtain a faster convergence rate under the condition $R_D < 1$, where $R_D$ measures the ratio of nonlinearity to dissipation using the $\ell_{\infty}$ norm. Since $R_D$ is independent of the number of spatial grid points $n$ while $R$ increases with $n$, the criterion $R_D<1$ is significantly milder than $R<1$ for high-dimensional systems and can stay convergent under grid refinement for approximating PDEs. As applications of our quantum algorithm we consider the Fisher-KPP and Allen-Cahn equations, which have interpretations in classical physics. In particular, we show how to estimate the mean square kinetic energy in the solution by postprocessing the quantum state that encodes it to extract derivative information.
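The idea behind Carleman linearization can be seen in the simplest setting, the spatially homogeneous Fisher-KPP (logistic) equation $u' = au - bu^2$: the monomials $y_k = u^k$ obey the linear cascade $y_k' = ka\,y_k - kb\,y_{k+1}$, which becomes a finite linear system upon truncation. A classical sketch under these assumptions (not the quantum algorithm itself):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, N = 1.0, 1.0, 10          # growth, nonlinearity, truncation order

    # Truncated Carleman matrix: y_k' = k*a*y_k - k*b*y_{k+1}, k = 1..N
    C = np.zeros((N, N))
    for k in range(1, N + 1):
        C[k - 1, k - 1] = k * a
        if k < N:
            C[k - 1, k] = -k * b

    u0, T = 0.1, 1.0
    y0 = np.array([u0 ** k for k in range(1, N + 1)])
    sol = solve_ivp(lambda t, y: C @ y, (0, T), y0,
                    rtol=1e-10, atol=1e-12, dense_output=True)

    u_carleman = sol.sol(T)[0]       # first component approximates u(T)
    u_exact = a * u0 / (b * u0 + (a - b * u0) * np.exp(-a * T))
    print(u_carleman, u_exact)       # close agreement in this small-R regime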
Adaptive variational quantum simulation algorithms use information from the quantum computer to dynamically create optimal trial wavefunctions for a given problem Hamiltonian. A key ingredient in these algorithms is a predefined operator pool from which trial wavefunctions are constructed. Finding suitable pools is critical for the efficiency of the algorithm as the problem size increases. Here, we present a technique called operator pool tiling that facilitates the construction of problem-tailored pools for arbitrarily large problem instances. By first performing an ADAPT-VQE calculation on a smaller instance of the problem using a large, but computationally inefficient operator pool, we extract the most relevant operators and use them to design more efficient pools for larger instances. We demonstrate the method here on strongly correlated quantum spin models in one and two dimensions, finding that ADAPT automatically finds a highly effective ansatz for these systems. Given that many problems, such as those arising in condensed matter physics, have a naturally repeating lattice structure, we expect the pool tiling method to be a widely applicable technique apt for such systems.
There exist bipartite entangled states whose violations of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality can be observed by a single Alice and arbitrarily many sequential Bobs [Phys. Rev. Lett. 125, 090401 (2020)]. Here we consider the analogue for tripartite systems: a tripartite entangled state is shared among Alice, Bob and multiple Charlies. The first Charlie measures his qubit and then passes it to the next Charlie, who measures it again with different measurements, and so on. The goal is to maximize the number of Charlies that can observe some kind of nonlocality with the single Alice and Bob. It has been shown that at most two Charlies can share genuine nonlocality of the Greenberger-Horne-Zeilinger (GHZ) state with Alice and Bob via violation of the Svetlichny inequality [Quantum Inf. Process. 18, 42 (2019) and Phys. Rev. A 103, 032216 (2021)]. In this work, we show that arbitrarily many Charlies can share standard nonlocality (via violations of the Mermin inequality) and another kind of genuine nonlocality (known as genuinely nonsignal nonlocality) with the single Alice and single Bob.
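For reference, the Mermin inequality invoked here can be checked directly: the GHZ state attains the algebraic maximum of 4 for the three-qubit Mermin operator, against the local-hidden-variable bound of 2 (a minimal numerical check):

    import numpy as np
    from functools import reduce

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    kron = lambda *ops: reduce(np.kron, ops)

    # Three-qubit Mermin operator; local hidden variables obey |<M>| <= 2.
    M = kron(X, X, X) - kron(X, Y, Y) - kron(Y, X, Y) - kron(Y, Y, X)

    ghz = np.zeros(8, dtype=complex)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)       # (|000> + |111>)/sqrt(2)

    print(np.real(ghz.conj() @ M @ ghz))   # 4.0: maximal quantum violation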
There is strong evidence in the literature that quantum non-Markovianity hinders the presence of Quantum Darwinism. In this Letter, we study the relation between Quantum Darwinism and approximate quantum Markovianity for open quantum systems by exploiting the properties of quantum conditional mutual information. We show that for approximately Markovian quantum processes the conditional mutual information retains the scaling property characteristic of Quantum Darwinism. We then obtain two general bounds on the backflow of information, with which we show that the presence of Quantum Darwinism restricts the information backflow, so that the quantum non-Markovianity must be small.
The historical significance of the Stern-Gerlach experiment lies in its provision of the initial evidence for space quantization. Over time, its sequential form has evolved into an elegant paradigm that effectively illustrates the fundamental principles of quantum theory. To date, the practical implementation of the sequential Stern-Gerlach experiment has not been fully achieved. In this study, we demonstrate the capability of programmable quantum processors to simulate the sequential Stern-Gerlach experiment. Specific parametric shallow quantum circuits, suited to the limitations of current noisy quantum hardware, are given to replicate the functionality of Stern-Gerlach devices with the ability to perform measurements in different directions. Surprisingly, Wigner's Stern-Gerlach interferometer can be readily implemented in our sequential quantum circuit, and with the same circuits it is also feasible to implement Wheeler's delayed-choice experiment. We propose using cross-shaped programmable quantum processors to showcase sequential experiments, and the simulation results demonstrate strong alignment with theoretical predictions. With the rapid advancement of cloud-based quantum computing, such as BAQIS Quafu, we believe the proposed solution is well-suited for deployment on the cloud, allowing for public accessibility. Our findings not only expand the potential applications of quantum computers, but also contribute to a deeper comprehension of the fundamental principles underlying quantum theory.
Thermalization processes degrade the states of any working medium, turning any initial state into a passive state from which no work can be extracted. Recently, it has been shown that this degradation can be avoided if two identical thermalization processes take place in coherently controlled order, in a scenario known as the quantum SWITCH. In some situations, control over the order even enables work extraction when the medium was initially in a passive state. This activation phenomenon, however, is subject to a limitation: to extract non-zero work, the initial temperature of the medium should be less than half of the temperature of the reservoirs. Here we analyze this limitation, showing that it still holds true even when the medium interacts with $N\ge 2$ reservoirs in a coherently-controlled order. Then, we show that the limitation can be lifted when the medium and the control systems are initially correlated. In particular, when the medium and control are entangled, work extraction becomes possible for every initial value of the local temperature of the medium.
Conversion of chemical energy into mechanical work is the fundamental mechanism of several natural phenomena at the nanoscale, like molecular machines and Brownian motors. Quantum mechanical effects are relevant for optimising these processes and for implementing them at the atomic scale. This paper focuses on engines that transform chemical work into mechanical work through energy and particle exchanges with thermal sources at different chemical potentials. Irreversibility is introduced by modelling the engine transformations with finite-time dynamics generated by a time-dependent quantum master equation. Quantum degenerate gases provide maximum efficiency for reversible engines, whereas the classical limit implies small efficiency. For irreversible engines, both the output power and the efficiency at maximum power are much larger in the quantum regime than in the classical limit. The analysis of ideal homogeneous gases captures the impact of quantum statistics on the above performance, which persists in the presence of interactions and more general trapping. The dependence of performance on different types of Bose-Einstein condensates (BECs) is also studied: the BECs under consideration are standard BECs with a finite fraction of particles in the ground state, and generalised BECs where eigenstates with parallel momenta, or those with coplanar momenta, are macroscopically occupied according to the confinement anisotropy. Quantum statistics is therefore a resource for enhanced performance in converting chemical into mechanical work.
We give an approximation algorithm for Quantum Max-Cut which works by rounding an SDP relaxation to an entangled quantum state. The SDP is used to choose the parameters of a variational quantum circuit. The entangled state is then represented as the quantum circuit applied to a product state. The algorithm achieves an approximation ratio of 0.582 on triangle-free graphs. The previous best algorithms, by Anshu, Gosset, and Morenz and by Parekh and Thompson, achieved approximation ratios of 0.531 and 0.533, respectively. In addition, we study the EPR Hamiltonian, which we argue is a natural intermediate problem that isolates some key quantum features of local Hamiltonian problems. For the EPR Hamiltonian, we give an approximation algorithm with approximation ratio $1 / \sqrt{2}$ on all graphs.
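For context, the standard Quantum Max-Cut Hamiltonian (conventions differ by an overall normalization) sums singlet projectors over the edges of $G = (V, E)$:

$$ H_{\mathrm{QMC}} \;=\; \sum_{(i,j) \in E} w_{ij}\, \frac{I - X_i X_j - Y_i Y_j - Z_i Z_j}{4}, $$

and the approximation ratio compares the energy achieved by the rounded state against the maximum eigenvalue of $H_{\mathrm{QMC}}$.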
This paper explores the potential benefits of quantum coherence and quantum discord for supervised machine learning in the non-universal quantum computing model known as deterministic quantum computing with one qubit (DQC1). We show that the DQC1 model can be leveraged to develop an efficient method for estimating complex kernel functions, and we demonstrate a simple relationship between coherence consumption and the kernel function, a crucial element in machine learning. The paper presents an implementation of a binary classification problem on IBM hardware using the DQC1 model and analyzes the impact of quantum coherence and hardware noise. The advantage of our proposal lies in its utilization of quantum discord, which is more resilient to noise than entanglement.
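Schematically, a DQC1 circuit estimates the normalized trace of a unitary: after a controlled-$U$ acting on a maximally mixed register, $\langle X \rangle$ and $\langle Y \rangle$ of the control qubit give the real and imaginary parts of $\mathrm{Tr}(U)/2^n$. A toy classical emulation of such a kernel entry (an illustrative encoding, not the paper's specific feature map):

    import numpy as np

    def dqc1_trace(U):
        # The quantity a DQC1 circuit estimates: Tr(U) / 2^n.
        return np.trace(U) / U.shape[0]

    def rz(t):  # single-qubit Z rotation used as a toy data encoding
        return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

    x1, x2 = 0.3, 1.1
    k = dqc1_trace(rz(x1).conj().T @ rz(x2))   # overlap-style kernel entry
    print(k)                                   # cos((x2 - x1)/2) = 0.921...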
We propose a class of pure states of two-dimensional lattice systems realizing topological order associated with unitary rational vertex operator algebras. We show that the states are well-defined in the thermodynamic limit and have exponential decay of correlations. The construction provides a natural way to insert anyons and compute certain topological invariants. It also gives candidates for bosonic states in non-trivial invertible phases, including the $E_8$ phase.
Strongly interacting systems can be described in terms of correlation functions at various orders. A quantum analog of high-order correlations is the topological entanglement in topologically ordered states of matter at zero temperature, usually quantified by the topological entanglement entropy (TEE). In this work, we propose a statistical interpretation that unifies the two under the same information-theoretic framework. We demonstrate that the existence of a non-zero TEE can be understood in the statistical view as the emergent $n$th-order mutual information $I_n$ (for arbitrary integer $n\ge 3$) reflected in projectively measured samples, which also makes explicit the equivalence between the two existing methods for its extraction -- the Kitaev-Preskill and the Levin-Wen constructions. To exploit the statistical nature of $I_n$, we construct a restricted Boltzmann machine (RBM) which captures the high-order correlations, and correspondingly the topological entanglement, encoded in the distribution of projected samples by representing the entanglement Hamiltonian of a local region under the proper basis. Furthermore, we derive a closed form which presents a method to interrogate the trained RBM, making explicit the analytical form of arbitrary orders of correlation relevant for $I_n$. We remark that this interrogation method for extracting high-order correlations can also be applied to the construction of auxiliary fields that disentangle many-body interactions relevant for diverse interacting models.
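For reference, the Kitaev-Preskill construction extracts the TEE $\gamma$ from the entanglement entropies of three mutually adjacent regions $A$, $B$, $C$:

$$ -\gamma \;=\; S_A + S_B + S_C - S_{AB} - S_{BC} - S_{AC} + S_{ABC}, $$

whose right-hand side is precisely the third-order mutual information $I_3(A\!:\!B\!:\!C)$, the quantity given a statistical interpretation in this work.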
The idea of simulating quantum physics with controllable quantum devices was proposed several decades ago. With the extensive development of quantum technology, large-scale simulation, such as analog quantum simulation that tailors an artificial Hamiltonian to mimic the system of interest, has been implemented on elaborate quantum experimental platforms. However, due to limitations caused by significant noise and restricted connectivity, analog simulation is generically infeasible on near-term quantum computing platforms. Here we propose an alternative analog simulation approach for near-term quantum devices. Our approach circumvents these limitations by adaptively partitioning the bath into several groups based on the performance of the quantum devices. We apply our approach to simulate the free induction decay of the electron spin in a diamond NV$^-$ center coupled to a huge number of nuclei, and investigate the nonclassicality induced by the nuclear spin polarization. The simulation is implemented collaboratively with authentic devices and simulators on IBM quantum computers. We have also applied our approach to address the nonclassical noise caused by crosstalk between qubits. This work sheds light on a flexible approach to simulating large-scale materials on noisy near-term quantum computers.
We propose and explore a notion of decomposably divisible (D-divisible) differentiable quantum evolution families on matrix algebras. This is achieved by replacing the complete positivity requirement imposed on the propagator with the more general condition of decomposability. It is shown that such D-divisible dynamical maps satisfy a generalized version of the master equation and are completely characterized by their time-local generators. Necessary and sufficient conditions for D-divisibility are found. Additionally, decomposable trace-preserving semigroups are examined.
A restriction in the quality and quantity of available qubits presents a substantial obstacle to the application of near-term and early fault-tolerant quantum computers to practical tasks. To confront this challenge, techniques for effectively augmenting the system size through classical processing have been proposed; one promising approach is quantum circuit cutting. The main idea of quantum circuit cutting is to decompose an original circuit into smaller sub-circuits and combine outputs from these sub-circuits to recover the original output. Although this approach enables us to simulate quantum circuits larger than those physically available, it incurs classical overheads quantified by two metrics: the sampling overhead in the number of measurements needed to reconstruct the original output, and the number of channels in the decomposition. Thus, it is crucial to devise a decomposition method that minimizes both of these metrics, thereby reducing the overall execution time. This paper studies the problem of decomposing the parallel $n$-qubit identity channel, i.e., $n$-parallel wire cutting, into a set of local operations and classical communication; we then give an optimal wire-cutting method, comprising channels based on mutually unbiased bases, that achieves minimal overheads in both the sampling overhead and the number of channels, without ancilla qubits. This is in stark contrast to the existing method that achieves the optimal sampling overhead but requires ancilla qubits. Moreover, we derive a tight lower bound on the number of channels in parallel wire cutting without ancilla systems and show that, among existing methods, only ours achieves this lower bound. Notably, our method shows an exponential improvement in the number of channels compared to the aforementioned ancilla-assisted method that achieves the optimal sampling overhead.
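Schematically, wire cutting decomposes the $n$-qubit identity channel into locally implementable channels with real quasiprobability weights,

$$ \mathcal{I}^{\otimes n}(\rho) \;=\; \sum_i c_i\, \mathcal{E}_i(\rho), \qquad \sum_i c_i = 1, $$

and the two metrics in question are the number of distinct channels $\mathcal{E}_i$ and the sampling overhead $\gamma = \big( \sum_i |c_i| \big)^2$, i.e., the multiplicative increase in the number of shots needed to reach a fixed accuracy.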
We introduce two families of criteria for detecting and quantifying the entanglement of a bipartite quantum state of arbitrary local dimension. The first is based on measurements in mutually unbiased bases and the second is based on equiangular measurements. Both criteria give a qualitative result in terms of the state's entanglement dimension and a quantitative result in terms of its fidelity with the maximally entangled state. The criteria are universally applicable since no assumptions on the state are required. Moreover, the experimenter can control the trade-off between resource-efficiency and noise-tolerance by selecting the number of measurements performed. For paradigmatic noise models, we show that only a small number of measurements are necessary to achieve nearly-optimal detection in any dimension. The number of global product projections scales only linearly in the local dimension, thus paving the way for detection and quantification of very high-dimensional entanglement.
We propose an interpretation of physics named potentiality realism. This view, which can be applied to classical as well as to quantum physics, regards potentialities (i.e., intrinsic, objective propensities for individual events to obtain) as elements of reality, thereby complementing the actual properties taken by physical variables. This allows one to naturally reconcile realism and fundamental indeterminism in any theoretical framework. We discuss our specific interpretation of propensities, which requires them to depart from being probabilities at the formal level, while still allowing for statistics and the law of large numbers. This view helps reconcile classical and quantum physics by showing that most of the conceptual problems that are customarily taken to be unique issues of the latter -- such as the measurement problem -- are actually common to all indeterministic physical theories.
We report on a novel phase-locking technique for fiber-based Mach-Zehnder interferometers based on discrete single-photon detections, and demonstrate it in an experimental setup. Our interferometer decodes relative-phase-encoded optical pulse pairs for quantum key distribution applications and requires no locking laser in addition to the weak received signal. Our simple locking scheme is shown to produce Ornstein-Uhlenbeck dynamics and to achieve optimal phase noise for a given count rate. In the case of wavelength drifts, such as those arising during the reception of Doppler-shifted satellite signals, the arm-length difference is continuously readjusted to keep the interferometer phase stable.
Ground-state preparation is an important task in quantum computation. The probabilistic imaginary-time evolution (PITE) method is a promising candidate for preparing the ground state of a Hamiltonian; it comprises a single ancilla qubit and forward- and backward-controlled real-time evolution operators. Ground-state preparation is a challenging task even for quantum computers, being classified in the complexity class Quantum Merlin-Arthur; however, optimal parameters for PITE can enhance its computational efficiency to a certain degree. In this study, we analyze the computational cost of the PITE method for both linear and exponential scheduling of the imaginary-time step size. First, we analytically discuss an error defined as the closeness between the states acted on by the exact and approximate imaginary-time evolution operators; the optimal imaginary-time step size and rate of change of imaginary time are also discussed. The analytical discussion is then validated using numerical simulations for a one-dimensional Heisenberg chain. We find that linear scheduling works well when the eigenvalues of the Hamiltonian are unknown: over a wide range of eigenstates, linear scheduling returns smaller errors on average. However, the linearity of the scheduling causes problems for some specific energy regions of eigenstates. To avoid these problems, incorporating a certain level of nonlinearity into the scheduling, such as by inclusion of an exponential character, is preferable for reducing the computational cost of the PITE method. These findings can contribute significantly to the field of ground-state preparation of many-body Hamiltonians on quantum computers.
We derive a compact analytical solution for the $n$th-order equal-time correlation functions using the scattering matrix ($S$ matrix) under a weak coherent state input. Our solution applies to any dissipative quantum system that respects U(1) symmetry. We further extend our analytical solution to two categories depending on whether the input and output channels are identical: the first category provides a different path for studying cross-correlations and multiple-drive cases, while the second is instrumental in studying waveguide quantum electrodynamics systems. Our analytical solution allows for easy investigation of the statistical properties of multiple photons even in complex systems. Furthermore, we have developed a user-friendly open-source Python library known as the quantum correlation solver, which provides a convenient means to study various dissipative quantum systems satisfying the above criteria. Our study enables the use of the $S$ matrix to study photonic correlations and advances the possibilities for exploring complex systems.
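For reference, the correlators in question are the standard normalized equal-time correlation functions of an output mode $\hat{a}$,

$$ g^{(n)}(0) \;=\; \frac{\langle \hat{a}^{\dagger n} \hat{a}^{n} \rangle}{\langle \hat{a}^{\dagger} \hat{a} \rangle^{n}}, $$

which the $S$-matrix solution evaluates perturbatively in the weak coherent drive.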
We show that interacting bosons on a ring which are driven periodically by a rotating potential can support discrete time crystals whose absolute stability can be proven. The absolute stability is demonstrated by an exact mapping of discrete time crystal states to low-lying eigenstates of a time-independent model that reveals spontaneous breaking of space translation symmetry. The mapping ensures that there are no residual time-dependent terms that could lead to heating of the system and destruction of discrete time crystals. We also analyze periodically kicked bosons where the mapping is approximate only and cannot guarantee the absolute stability of discrete time crystals. Besides illustrating potential sources of instability, the kicked bosons model demonstrates a rich field for investigating the interplay between different time and space symmetry breaking, as well as the stability of time crystal behavior in contact with a thermal reservoir.
The Rydberg blockade is a key ingredient for entangling atoms in arrays. However, it requires atoms to be spaced well within the blockade radius, which limits the range of local quantum gates. Here we break this constraint using Floquet frequency modulation, with which we demonstrate Rydberg-blockade entanglement beyond the traditional blockade radius and show how the enlarged entanglement range improves qubit connectivity in a neutral atom array. Further, we find that the coherence of entangled states can be extended under Floquet frequency modulation. Finally, we realize Rydberg anti-blockade states for two sodium Rydberg atoms within the blockade radius. Such Rydberg anti-blockade states for atoms at close range enable the robust preparation of strongly interacting, long-lived Rydberg states, yet their steady-state population cannot be achieved with only the conventional static drive. Our work transforms between the paradigmatic regimes of Rydberg blockade and anti-blockade, and paves the way for realizing more connected, coherent, and tunable neutral atom quantum processors with a single approach.
Approaching the long-time dynamics of non-Markovian open quantum systems presents a challenging task if the bath is strongly coupled. Recent proposals address this problem through a representation of the so-called process tensor in terms of a tensor network, which can be contracted to matrix product state (MPS) form. We show that for Gaussian environments the stationarity of the bath response can be exploited in order to efficiently construct such a MPS with infinite MPS evolution methods. The result structurally resembles open system evolution with carefully designed auxiliary degrees of freedom, as in hierarchical or pseudomode methods. Here, however, these degrees of freedom are generated automatically by the MPS evolution algorithm. Crucially, the semi-group property of the resulting propagator enables us to reach arbitrary evolution times and apply spectral theory for a systematic exploration of asymptotic properties, such as phase transitions in the steady state. Moreover, our algorithm for contracting the process tensor network leads to significant computational speed-up over existing proposals in the strong coupling regime.
The chiral edge modes of a topological superconductor can transport fermionic quasiparticles, with Abelian exchange statistics, but they can also transport non-Abelian anyons: edge-vortices bound to a $\pi$-phase domain wall that propagates along the boundary. A pair of such edge-vortices is injected by the application of an $h/2e$ flux bias over a Josephson junction. Existing descriptions of the injection process rely on the instantaneous scattering approximation of the adiabatic regime [Beenakker et al., Phys. Rev. Lett. 122 (2019)], where the internal dynamics of the Josephson junction is ignored. Here we go beyond that approximation in a time-dependent many-body simulation of the injection process, followed by a braiding of mobile edge-vortices with a pair of immobile Abrikosov vortices in the bulk of the superconductor. Our simulation sheds light on the properties of the Josephson junction needed for a successful implementation of a flying topological qubit.
A new approximate Quantum State Preparation (QSP) method is introduced, called the Walsh Series Loader (WSL). The WSL approximates quantum states defined by real-valued functions of a single real variable with a depth independent of the number $n$ of qubits. Two approaches are presented. The first approximates the target quantum state by a Walsh series truncated at order $O(1/\sqrt{\epsilon})$, where $\epsilon$ is the precision of the approximation in terms of infidelity; the circuit depth is also $O(1/\sqrt{\epsilon})$, the size is $O(n+1/\sqrt{\epsilon})$ and only one ancilla qubit is needed. The second method accurately represents quantum states with sparse Walsh series. The WSL loads $s$-sparse Walsh series into $n$ qubits with a depth governed by the product of $s$ and $k$, the maximum number of bits with value $1$ in the binary decomposition of the Walsh function indices: the associated quantum circuit approximates the sparse Walsh series up to an error $\epsilon$ with a depth $O(sk)$, a size $O(n+sk)$ and one ancilla qubit. In both cases, the protocol is a Repeat-Until-Success (RUS) procedure with a probability of success $P=\Theta(\epsilon)$, giving an averaged total time of $O(1/\epsilon^{3/2})$ for the WSL (resp. $O(sk/\epsilon)$ for the sparse WSL). Amplitude amplification can be used to reduce the total time dependency on $\epsilon$ by a factor $O(1/\sqrt{\epsilon})$, but it increases the size and depth of the associated quantum circuits, making them linearly dependent on $n$. These protocols give overall efficient algorithms with no exponential scaling in any parameter. They can be generalized to any complex-valued, multivariate, almost-everywhere-differentiable function. The Repeat-Until-Success Walsh Series Loader is so far the only method which prepares a quantum state with a circuit depth and an averaged total time independent of the number of qubits.
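A classical sketch of the sparse Walsh-series truncation underlying the loader (this snippet only illustrates the series approximation; in the WSL the retained terms are implemented as diagonal phase operations in the quantum circuit):

    import numpy as np

    def fwht(v):
        # Fast Walsh-Hadamard transform (unnormalized; self-inverse up to
        # a factor of len(v)).
        v = v.copy()
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                for j in range(i, i + h):
                    v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
            h *= 2
        return v

    nq = 6                                   # 2**nq grid points
    x = np.arange(2 ** nq) / 2 ** nq
    f = np.exp(-4 * (x - 0.5) ** 2)          # target real-valued function

    coeffs = fwht(f) / 2 ** nq               # Walsh coefficients of f
    s = 8                                    # keep the s largest terms
    keep = np.argsort(np.abs(coeffs))[-s:]
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]

    f_approx = fwht(sparse)                  # reconstruct the sparse series
    print(np.max(np.abs(f - f_approx)))      # sparse truncation error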
Variational quantum algorithms have emerged as a cornerstone of contemporary quantum algorithms research. Practical implementations of these algorithms, despite offering certain levels of robustness against systematic errors, show a decline in performance due to the presence of stochastic errors and limited coherence time. In this work, we develop a recipe for mitigating quantum gate errors for variational algorithms using zero-noise extrapolation. We introduce an experimentally amenable method to control error strength in the circuit. We utilize the fact that gate errors in a physical quantum device are distributed inhomogeneously over different qubits and qubit pairs. As a result, one can achieve different circuit error sums based on the manner in which abstract qubits in the circuit are mapped to a physical device. We find that the estimated energy in the variational approach is approximately linear with respect to the circuit error sum (CES). Consequently, a linear fit through the energy-CES data, when extrapolated to zero CES, can approximate the energy estimated by a noiseless variational algorithm. We demonstrate this numerically and investigate the applicability range of the technique.
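The extrapolation step amounts to a simple linear fit of measured energies against the circuit error sum, evaluated at CES = 0 (a sketch with hypothetical numbers):

    import numpy as np

    # Hypothetical data: variational energies from different qubit-to-device
    # mappings, each with its own circuit error sum (CES).
    ces    = np.array([0.021, 0.034, 0.048, 0.066])
    energy = np.array([-1.02, -0.97, -0.92, -0.85])

    slope, intercept = np.polyfit(ces, energy, 1)
    print("zero-noise estimate:", intercept)  # approximates the ideal energy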
Parametrized quantum circuits (PQCs) are quantum circuits which consist of both fixed and parametrized gates. In recent approaches to quantum machine learning (QML), PQCs are essentially ubiquitous and play a role analogous to that of classical neural networks. They are used to learn various types of data, with an underlying expectation that if the PQC is made sufficiently deep, and the data plentiful, the generalization error will vanish, and the model will capture the essential features of the distribution. While there exist results proving the approximability of square-integrable functions by PQCs under the $L^2$ distance, approximation for other function spaces and under other distances has been less explored. In this work we show that PQCs can approximate the space of continuous functions, $p$-integrable functions and the $H^k$ Sobolev spaces under specific distances. Moreover, we develop generalization bounds that connect different function spaces and distances. These results provide a theoretical basis for different applications of PQCs, for example for solving differential equations. Furthermore, they provide us with new insight into how to design PQCs and loss functions which better suit the specific needs of the users.
We study phase-controlled planar Josephson junctions comprising a two-dimensional electron gas with strong spin-orbit coupling and d-wave superconductors, which offer the advantage of a high critical temperature. We show that the region between the two superconductors can be tuned into a topological state by an in-plane Zeeman field and can host Majorana bound states. The phase diagram as a function of the Zeeman field, chemical potential, and phase difference between the superconductors exhibits Majorana bound states over a wide range of parameters. We further investigate the behavior of the topological gap and its dependence on the type of d-wave pairing, i.e., d, d+is, or d+id', and note the difficulties that can arise due to the presence of gapless excitations in pure d-wave superconductors. On the other hand, planar Josephson junctions based on superconductors with d+is and d+id' pairings can potentially lead to realizations of Majorana bound states. Our proposal can be realized in cuprate superconductors, e.g., in a twisted bilayer combined with the layered semiconductor Bi2O2Se.
The Chern number is a crucial invariant for characterizing the topological features of two-dimensional quantum systems. The real-space Chern number allows us to extract the topological properties of systems without translational symmetry, and hence plays an important role in investigating topological systems with disorder or impurities. On the other hand, the twisted boundary condition (TBC) can also be used to define the Chern number in the absence of translational symmetry. Based on the perturbative nature of the TBC under appropriate gauges, we derive two real-space formulae for the Chern number (namely, the non-commutative Chern number and the Bott index formula), which are numerically confirmed for the Chern insulator and the quantum spin Hall insulator. Our results not only establish the equivalence between the real-space and TBC formulae for the Chern number, but also provide concrete and instructive examples for deriving real-space topological invariants through the twisted boundary condition.
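For reference, one standard form of the Bott index referred to here (following Loring and Hastings): with $P$ the projector onto occupied states and $X$, $Y$ position operators on an $L_x \times L_y$ lattice, define $U = P e^{2\pi i X / L_x} P + (I - P)$ and $V = P e^{2\pi i Y / L_y} P + (I - P)$; then

$$ \mathrm{Bott}(U, V) \;=\; \frac{1}{2\pi}\, \mathrm{Im}\, \mathrm{Tr} \log \left( V U V^{\dagger} U^{\dagger} \right), $$

which coincides with the Chern number for a gapped system on a torus.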
Generalized string-net models have recently been proposed in order to enlarge the set of possible topological quantum phases emerging from the original string-net construction. In the present work, we do not consider vertex excitations and restrict ourselves to plaquette excitations, or fluxons, that satisfy important identities. We explain how to compute the energy-level degeneracies of the generalized string-net Hamiltonian associated with an arbitrary unitary fusion category. In contrast to the degeneracy of the ground state, which is purely topological, that of excited energy levels depends not only on the Drinfeld center of the category, but also on internal multiplicities obtained from the tube algebra defined from the category. For a noncommutative category, these internal multiplicities result in extra nontopological degeneracies. Our results are valid for any trivalent graph and any orientable surface. We illustrate our findings with nontrivial examples.
The curvelet transform is a special type of wavelet transform, which is useful for estimating the locations and orientations of waves propagating in Euclidean space. We prove an uncertainty principle that lower-bounds the variance of these estimates, for radial wave functions in n dimensions. As an application of this uncertainty principle, we show the infeasibility of one approach to constructing quantum algorithms for solving lattice problems, such as the approximate shortest vector problem (approximate-SVP), and bounded distance decoding (BDD). This gives insight into the computational intractability of approximate-SVP, which plays an important role in algorithms for integer programming, and in post-quantum cryptosystems. In this approach to solving lattice problems, one prepares quantum superpositions of Gaussian-like wave functions centered at lattice points. A key step in this procedure requires finding the center of each Gaussian-like wave function, using the quantum curvelet transform. We show that, for any choice of the Gaussian-like wave function, the error in this step will be above the threshold required to solve BDD and approximate-SVP.
The presence of competing interactions due to geometry leads to frustration in quantum spin models. As a consequence, the ground state of such systems often displays a large degeneracy that can be lifted by thermal or quantum effects. One such example is the antiferromagnetic Ising model on the Kagome lattice. It was shown that while the same model on the triangular lattice is ordered at zero temperature for small transverse field due to an order-by-disorder mechanism, the Kagome lattice resists any such effects and exhibits only short-range spin correlations and a trivial paramagnetic phase. We embed this model on the latest architecture of D-Wave's quantum annealer, the Advantage2 prototype, which uses the highly connected Zephyr graph. Using advanced embedding and calibration techniques, we are able to embed a Kagome lattice of 231 sites with mixed open and periodic boundary conditions on the full graph of the currently available prototype. Through forward annealing experiments, we show that under a finite longitudinal field the system exhibits a one-third magnetization plateau, consistent with a classical spin liquid state of reduced entropy. An anneal-pause-quench protocol is then used to extract an experimental ensemble of states resulting from the equilibration of the model at finite transverse and longitudinal field. This allows us to construct a partial phase diagram and confirm that the system exits the constrained Hilbert space of the classical spin liquid when subjected to a transverse field. We connect our results to previous theoretical work and quantum Monte Carlo simulations, confirming the validity of the quantum simulation realized here and providing insight into the performance of the D-Wave quantum annealer in simulating non-trivial quantum systems in equilibrium.
Characterizing the properties of multiparticle quantum systems is a crucial task for quantum computing and many-body quantum physics. The task, however, becomes extremely challenging when the system size becomes large and when the properties of interest involve global measurements on a large number of sites. Here we develop a multi-task neural network model that can accurately predict global properties of many-body quantum systems, like string order parameters and many-body topological invariants, using only limited measurement data gathered from a few neighbouring sites. The model can simultaneously predict multiple quantum properties, including not only expectation values of quantum observables, but also general nonlinear functions of the quantum state, such as entanglement entropies. Remarkably, we find that multi-task training over a given set of quantum properties enables our model to discover new properties outside the original set. Without any labeled data, the model can perform unsupervised classification of quantum phases of matter and uncover unknown boundaries between different phases.
Efficiently estimating the expectation values of fermionic Hamiltonians, including $k$-particle reduced density matrices ($k$-RDMs) of an $n$-mode fermionic state, is crucial for quantum simulations of a wealth of physical systems in many-body physics, chemistry, and materials science. Yet, conventional quantum state tomography methods are too costly in terms of their resource requirements. Classical shadow (CS) algorithms have been proposed to address this task by substantially reducing the number of copies of quantum states required. However, the implementation of these algorithms faces a significant challenge due to the inherent noise in near-term quantum devices, leading to inaccuracies in gate operations. To address this challenge, we propose an error-mitigated CS algorithm for fermionic systems. For $n$-qubit quantum systems, our algorithm, which employs the easily prepared initial state $|0^n\rangle\!\langle 0^n|$ assumed to be noiseless, provably and efficiently estimates all elements of $k$-RDMs using $\widetilde{\mathcal O}(kn^k)$ copies of quantum states and $\widetilde{\mathcal O}(\sqrt{n})$ calibration measurements. It does so even in the presence of gate or measurement noise such as depolarizing, amplitude damping, or $X$-rotation noise of at most constant strength. Furthermore, our algorithm exhibits scaling comparable to previous CS algorithms for fermionic systems with respect to the number of quantum state copies, while demonstrating enhanced resilience to noise. We numerically demonstrate the performance of our algorithm in the presence of these noise sources, as well as under Gaussian unitary noise. Our results underscore the potential utility of implementing our algorithm on near-term quantum devices.
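Schematically, a classical-shadow protocol builds the single-shot estimator

$$ \hat{\rho} \;=\; \mathcal{M}^{-1}\!\left( U^{\dagger} |b\rangle\langle b| U \right), $$

where $U$ is drawn at random from the chosen ensemble (fermionic Gaussian unitaries in this setting), $|b\rangle$ is the observed computational-basis outcome, and $\mathcal{M}$ is the measurement channel; the error-mitigated variant additionally calibrates $\mathcal{M}$ against the noisy implementation using measurements on the $|0^n\rangle$ reference state.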
In this work, we propose a fully differentiable iterative decoder for quantum low-density parity-check (LDPC) codes. The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers. Both component decoders are defined over the same sparse decoding graph, enabling seamless integration and scalability to large codes. The core idea is to use the GNN component between consecutive BP runs, so that the knowledge from the previous BP run, if stuck in a local minimum caused by trapping sets or short cycles in the decoding graph, can be leveraged to better initialize the next BP run. By doing so, the proposed decoder can learn to compensate for sub-optimal BP decoding graphs that result from the design constraints of quantum LDPC codes. Since the entire decoder remains differentiable, gradient descent-based training is possible. We compare the error-rate performance of the proposed decoder against various post-processing methods such as random perturbation, enhanced feedback, augmentation, and ordered-statistics decoding (OSD), and show that a carefully designed training process lowers the error floor significantly. As a result, our proposed decoder outperforms the first three methods using significantly fewer post-processing attempts. The source code of our experiments is available online.
Quantum image processing is a growing field attracting attention from both the quantum computing and image processing communities. We propose a novel method combining a graph-theoretic approach to optimal surface segmentation with hybrid quantum-classical optimization of the problem-directed graph. Surface segmentation is modeled classically as a graph partitioning problem in which a smoothness constraint is imposed to control surface variation for realistic segmentation. Specifically, segmentation refers to a source set identified by a minimum s-t cut that divides graph nodes into source (s) and sink (t) sets; the resulting surface consists of the graph nodes located on the boundary between the two. Characteristics of the problem-specific graph, including its directed edges, connectivity, and edge capacities, are embedded in a quadratic objective function whose minimum value corresponds to the ground-state energy of an equivalent Ising Hamiltonian. This work explores the use of quantum processors for image segmentation, which has important applications in medical image analysis. We present a theoretical basis for the quantum implementation of LOGISMOS and the results of a simulation study on simple images. The Quantum Approximate Optimization Algorithm (QAOA) was used in two simulation studies to determine ground-state energies and identify bitstring solutions encoding the optimal segmentation; the objective function encodes surface segmentation in 2-D and 3-D images while incorporating a smoothness constraint. We demonstrate that the proposed approach solves the geometry-constrained surface segmentation problem optimally, with the capability of locating multiple minimum points corresponding to the globally minimal solution.
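Schematically, the minimum s-t cut underlying such segmentation admits a standard QUBO form (a generic construction; the paper's objective additionally encodes the smoothness constraint):

$$ C(x) \;=\; \sum_{(i \to j) \in E} c_{ij}\, x_i (1 - x_j), \qquad x_s = 1, \quad x_t = 0, \quad x_i \in \{0, 1\}, $$

where $x_i = 1$ marks node $i$ as belonging to the source set; substituting $x_i = (1 + z_i)/2$ with $z_i = \pm 1$ yields the equivalent Ising Hamiltonian whose ground state encodes the optimal surface.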
Millimeter-wave superconducting resonators are a useful tool for studying quantum device coherence in a new frequency domain. However, improving resonators is difficult without a robust and reliable method for coupling millimeter-wave signals to 2D structures. We develop and characterize a tapered transition structure coupling a rectangular waveguide to a planar slotline waveguide with better than 0.5 dB efficiency over 14 GHz, and use it to measure ground-shielded resonators in the W band (75-110 GHz). Having decoupled the resonators from radiative losses, we consistently achieve single-photon quality factors above $10^5$, with a two-level-system loss limit above $10^6$, and verify the effectiveness of oxide removal treatments to reduce loss. These values are 4-5 times higher than those previously reported in the W band, and much closer to typical planar microwave devices, demonstrating the potential for low-loss on-chip millimeter wave quantum technology.
We introduce a new notion called ${\cal Q}$-secure pseudorandom isometries (PRIs). A pseudorandom isometry is an efficient quantum circuit that maps an $n$-qubit state to an $(n+m)$-qubit state in an isometric manner. In terms of security, we require that the output of a $q$-fold PRI on $\rho$, for $\rho \in {\cal Q}$ and any polynomial $q$, should be computationally indistinguishable from the output of a $q$-fold Haar isometry on $\rho$. By fine-tuning ${\cal Q}$, we recover many existing notions of pseudorandomness. We present a construction of PRIs and, assuming post-quantum one-way functions, prove their ${\cal Q}$-security for several interesting settings of ${\cal Q}$. We also demonstrate many cryptographic applications of PRIs, including length-extension theorems for quantum pseudorandomness notions, message authentication schemes for quantum states, multi-copy secure public- and private-key encryption schemes, and succinct quantum commitments.