Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-10-31 11:30 to 2023-11-03 12:30 | Next meeting is Friday Nov 3rd, 11:30 am.
The suppression of relic gravitational waves due to their conversion into electromagnetic radiation in a cosmological magnetic field is studied. It is shown that the subsequent elimination of photons from the beam, due to their interaction with the primordial plasma, prevents the photons from converting back into gravitational waves. The coupled system of equations describing gravitational and electromagnetic wave propagation in an arbitrary curved space-time and in an external magnetic field is derived. The system of equations is solved numerically in the Friedmann-Lemaître-Robertson-Walker metric for the upper limit on the intergalactic magnetic field strength of 1 nG. We conclude that the conversion of gravitational waves into photons in the intergalactic magnetic field can significantly change the amplitude of relic gravitational waves and their frequency spectrum.
In recent years, primordial black holes (PBHs) have emerged as one of the most interesting and hotly debated topics in cosmology. Among other possibilities, PBHs could explain both some of the signals from binary black hole mergers observed in gravitational wave detectors and an important component of the dark matter in the Universe. Significant progress has been achieved both on the theory side and from the point of view of observations, including new models and more accurate calculations of PBH formation, evolution, clustering, and merger rates, as well as new astrophysical and cosmological probes. In this work, we review, analyse and combine the latest developments in order to perform end-to-end calculations of the various gravitational wave signatures of PBHs. Different ways to distinguish PBHs from stellar black holes are emphasized. Finally, we discuss their detectability with LISA, the first planned gravitational-wave observatory in space.
4-Dimensional Einstein-Gauss-Bonnet (4DEGB) gravity has garnered significant attention in the last few years as a phenomenological competitor to general relativity. We consider the theoretical and observational implications of this theory in both the early and late universe, (re-)deriving background and perturbation equations and constraining its characteristic parameters with data from cosmological probes. Our investigation surpasses the scope of previous studies by incorporating non-flat spatial sections. We explore the consequences of 4DEGB for the sound and particle horizons in the very early universe, and demonstrate that 4DEGB can provide an independent solution to the horizon problem for some values of its characteristic parameter $\alpha$. Finally, we constrain an unexplored regime of this theory in the limit of small coupling $\alpha$ (empirically supported in the post-Big Bang Nucleosynthesis era by prior constraints). This version of 4DEGB includes a geometric term that resembles dark radiation at the background level, but whose influence on the perturbed equations is qualitatively distinct from that of standard forms of dark radiation. In this limit, only one beyond-$\Lambda$CDM degree of freedom persists, which we denote $\tilde{\alpha}_C$. Our analysis yields the estimate $\tilde{\alpha}_C = (-9 \pm 6) \times 10^{-6}$, thereby providing a new constraint on a previously untested sector of 4DEGB.
We review and update constraints on the Early Dark Energy (EDE) model from cosmological data sets, in particular Planck PR3 and PR4 cosmic microwave background (CMB) data and large-scale structure (LSS) data sets including galaxy clustering and weak lensing data from the Dark Energy Survey, Subaru Hyper Suprime-Cam, and KiDS+VIKING-450, as well as BOSS/eBOSS galaxy clustering and Lyman-$\alpha$ forest data. We detail the fit to CMB data, and perform the first analyses of EDE using the CamSpec and HiLLiPoP likelihoods for Planck CMB data, rather than Plik; both yield a tighter upper bound on the allowed EDE fraction than that found with Plik. We then supplement CMB data with large-scale structure data in a series of new analyses. All these analyses are concordant in their Bayesian preference for $\Lambda$CDM over EDE, as indicated by marginalized posterior distributions. We perform a series of tests of the impact of priors on these results, and compare with frequentist analyses based on the profile likelihood, finding qualitative agreement with the Bayesian results. All these tests suggest prior volume effects are not a determining factor in analyses of EDE. This work provides both a review of existing constraints and several new analyses.
Reconstructing astrophysical and cosmological fields from observations is challenging. It requires accounting for non-linear transformations, mixing of spatial structure, and noise. In contrast, forward simulators that map fields to observations are readily available for many applications. We present a versatile Bayesian field reconstruction algorithm rooted in simulation-based inference and enhanced by autoregressive modeling. The proposed technique is applicable to generic (non-differentiable) forward simulators and allows sampling from the posterior for the underlying field. We show first promising results on a proof-of-concept application: the recovery of cosmological initial conditions from late-time density fields.
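The autoregressive enhancement mentioned above can be summarized by a single factorization: rather than learning the high-dimensional field posterior in one shot, it is decomposed into a sequence of low-dimensional conditionals that are each amenable to simulation-based inference (a generic sketch of the idea, not the paper's exact notation):

$$p(\phi \mid d) \;=\; \prod_{i=1}^{N} p(\phi_i \mid \phi_{<i},\, d),$$

where $\phi_i$ denotes a pixel (or mode) of the underlying field, $\phi_{<i}$ the components already sampled, and $d$ the observation; sampling the conditionals in order yields a draw from the full posterior without requiring a differentiable simulator.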
The near infrared background (NIRB) is the collective light from unresolved sources observed in the 1-10 $\mu$m band. The measured NIRB angular power spectrum on angular scales $\theta \gtrsim 1$ arcmin exceeds predictions from known galaxy populations by roughly two orders of magnitude. The nature of the sources producing these fluctuations is still unknown. Here we test primordial black holes (PBHs) as sources of the NIRB excess. Considering PBHs as a cold dark matter (DM) component, we model the emission of gas accreting onto PBHs in a cosmological framework. We account for accretion both in the intergalactic medium (IGM) and in DM haloes. We self-consistently derive the IGM temperature evolution, considering ionization and heating due to X-ray emission from PBHs. Besides $\Lambda$CDM, we consider a model that accounts for the modification of the linear matter power spectrum due to the presence of PBHs; we also explore two PBH mass distributions, i.e. a $\delta$-function and a lognormal distribution. For each model, we compute the mean intensity and the angular power spectrum of the NIRB produced by PBHs with mass 1-$10^3~\mathrm{M}_{\odot}$. In the limiting case in which the entirety of DM is made of PBHs, the PBH emission contributes $<1$ per cent of the observed NIRB fluctuations. This value decreases to $<0.1$ per cent if current constraints on the abundance of PBHs are taken into account. We conclude that PBHs are ruled out as substantial contributors to the NIRB.
Screening mechanisms allow light scalar fields to dynamically avoid the constraints that come from our lack of observation of a long-range fifth force. Galactic-scale tests are of particular interest when the light scalar is introduced to explain the dark matter or dark energy that dominates our cosmology. To date, much of the literature on screening in galaxies has relied on simplifying approximations. In this work, we calculate numerical solutions for scalar fields with screening mechanisms in galactic contexts, and use these to derive new, precise conditions governing where fifth forces are screened. We show that the commonly used binary screened/unscreened threshold can predict a fifth-force signal in situations where a fuller treatment does not, leading us to conclude that existing constraints might be significantly overestimated. We show that various other approximations of the screening radius provide a more accurate proxy for screening, although they fail to exactly reproduce the true screening surface in certain regions of parameter space. As a demonstration of our scheme, we apply it to an idealised Milky Way and thus identify the region of parameter space in which the solar system is screened.
Big Bang Nucleosynthesis (BBN) is a strong probe for constraining new physics, including gravitation. $f(R)$ gravity is an interesting alternative to general relativity which introduces additional degrees of freedom known as scalarons. In this work we demonstrate the existence of black hole solutions in $f(R)$ gravity and develop a relation between the scalaron mass and the black hole mass. We use the observed bound on the freeze-out temperature to constrain the scalaron mass range by modifying the cosmic expansion rate at the BBN epoch, and deduce the mass range of primordial black holes (PBHs), which are astrophysical dark matter candidates. The range of scalaron mass which does not spoil the BBN era is found to be $10^{-16}-10^4 \text{ eV}$. The scalaron mass window $10^{-16}-10^{-14}\text{ eV}$ is consistent with the $f(R)$ gravity PPN parameter derived from solar-system experiments. The corresponding PBH mass range is $10^6-10^{-14}\text{ }M_{\odot}$. Scalarons constrained by BBN are also eligible to accommodate axion-like dark matter particles. Estimates of the deuterium (D) fraction and the relative D+$^3$He abundance in the $f(R)$ gravity scenario show that the BBN history mimics that of general relativity. While the PBH mass range is eligible for non-baryonic dark matter, the BBN-bounded scalarons provide an independent strong-field test of $f(R)$ gravity.
The era of precision cosmology allows us to test the composition of the dark matter. Mixed ultralight or fuzzy dark matter (FDM) is a cosmological model in which the dark matter is composed of a combination of particles of mass $m\leq 10^{-20}$ eV, with an astrophysical de Broglie wavelength, and particles with a negligible wavelength sharing the properties of cold dark matter (CDM). In this work, we simulate cosmological volumes with a dark matter wave function for the ultralight component coupled gravitationally to CDM particles. We investigate the impact of a mixture of CDM and FDM in various proportions $(0\%,\;1\%,\;10\%,\;50\%,\;100\%)$ and for ultralight particle masses ranging over five orders of magnitude $(2.5\times 10^{-25}\;\mathrm{eV}-2.5\times 10^{-21}\;\mathrm{eV})$. To track the evolution of density perturbations in the non-linear regime, we adapt the simulation code AxioNyx to solve the CDM dynamics coupled to an FDM wave function obeying the Schr\"odinger-Poisson equations. We obtain the non-linear power spectrum and study the impact of the wave effects on the growth of structure on different scales. We confirm that the steady-state solution of the Schr\"odinger-Poisson system holds at the centers of halos in the presence of a CDM component when the CDM composes $50\%$ or less of the dark matter, but find no stable density core when the FDM accounts for $10\%$ or less of the dark matter. We implement a modified friends-of-friends halo finder and find good agreement between the halo abundance measured in our simulations and the predictions of the adapted halo model axionHMCode.
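For readers unfamiliar with how a dark matter wave function is evolved, a minimal pseudo-spectral kick-drift-kick step for the Schrödinger-Poisson system is sketched below, assuming a periodic box and code units in which $4\pi G\bar\rho = 1$; this is an illustrative toy, not the AxioNyx implementation, and the names (`psi`, `hbar_m`) are placeholders.

```python
import numpy as np

def sp_step(psi, dt, box_size, hbar_m=1.0):
    """One kick-drift-kick step of Schrodinger-Poisson on a periodic grid."""
    n = psi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    def potential(psi):
        # overdensity sources Poisson's equation; code units: laplacian(phi) = delta
        delta_k = np.fft.fftn(np.abs(psi) ** 2 - 1.0)
        phi_k = np.where(k2 > 0, -delta_k / np.maximum(k2, 1e-30), 0.0)
        return np.real(np.fft.ifftn(phi_k))

    psi = psi * np.exp(-0.5j * dt * potential(psi) / hbar_m)                  # half kick
    psi = np.fft.ifftn(np.fft.fftn(psi) * np.exp(-0.5j * dt * hbar_m * k2))   # full drift
    psi = psi * np.exp(-0.5j * dt * potential(psi) / hbar_m)                  # half kick
    return psi
```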
We study the evolution of heavy stars ($M\ge40{\rm M}_\odot$) undergoing pair instability in the presence of annihilating dark matter. Focusing on the scenario where the dark matter is in capture-annihilation equilibrium, we model the profile of energy injection in the local thermal equilibrium approximation. We find that significant changes to the masses of astrophysical black holes formed by (pulsational) pair-instability supernovae can occur when the ambient dark matter density is $ \rho_{\rm DM} \gtrsim10^9 \rm \, GeV \, cm^{-3}$. There are two distinct outcomes, depending on the dark matter mass. For masses $m_{\rm DM}\gtrsim1$ GeV, the DM is primarily confined to the core. The annihilation prolongs core helium burning, resulting in more oxygen being formed, fueling a more violent explosion during the pair-instability-induced contraction. This drives stronger pulsations, leading to lighter black holes than predicted by standard stellar evolution. For masses $m_{\rm DM}\lesssim0.5$ GeV, there is significant dark matter in the envelope, leading to a phase where the star is supported by the annihilation energy. This reduces the core temperature and density, allowing the star to evade the pair instability and form heavier black holes. We find a mass gap for all models studied.
We present zephyr, a novel method that integrates cutting-edge normalizing flow techniques into a mixture density estimation framework, enabling the effective use of heterogeneous training data for photometric redshift inference. Compared to previous methods, zephyr demonstrates enhanced robustness for both point estimation and distribution reconstruction by leveraging normalizing flows for density estimation and incorporating careful uncertainty quantification. Moreover, zephyr offers unique interpretability by explicitly disentangling the contributions from multi-source training data, which can facilitate future weak lensing analyses by providing an additional quality assessment. As probabilistic generative deep learning techniques gain increasing prominence in astronomy, zephyr offers a template for handling heterogeneous training data while remaining interpretable and robustly accounting for observational uncertainties.
The pursuit of understanding the mysteries surrounding dark energy has sparked significant interest within the field of cosmology. While conventional approaches, such as the cosmological constant, have been extensively explored, alternative theories incorporating scalar field-based models and modified gravity have emerged as intriguing avenues. Among these, teleparallel theories of gravity, specifically the $f(T,\phi)$ formulation, have gained prominence as a means to understand dark energy within the framework of teleparallelism. In this study, we investigate two well-studied models of teleparallel dark energy and examine the presence of cosmological singularities within these scenarios. Using the Goriely-Hyde procedure, we examine the dynamical systems governing the cosmological equations of these models. Our analysis reveals that both models exhibit Type IV singularities, but only for a limited range of initial conditions. These results could indicate a potential edge for teleparallel cosmological models over other modified-gravity counterparts, as the models we examine appear to allow only weak singularities, and only under non-generic initial conditions.
We introduce the notion of a Bayesian-analysis-motivated `reliability' that gives a truer distinction between cusps and cores, and of other halo parameters (like mass-concentration), in an ensemble of observed galaxies. Our approach goes beyond the standard statistical techniques of parameter estimation and model fitting. We create hundreds of thousands of realistic mock SPARC RCs, with both cuspy and cored DM density profiles as model inputs. These RCs carefully incorporate the details of the SPARC data, such as the nature of the observed uncertainties and the different sources of scatter arising from observation, the presence of baryons, DM mass-concentration, etc. Bayesian analysis of these mock RCs enables us to reconstruct and identify the parameter space in galaxy observables and theory where one can venture beyond best fits to a preferred DM halo model, or model selection between different density models. We find that it is imperative to choose low stellar surface density ($\Sigma_{\star}$) galaxies for reliable cusp-vs-core distinction; for example, RC data for galaxies with $\Sigma_{\star} \leq 2.5$ is needed for a 75\% confidence in distinguishing cusps from cores. Similarly, we find that for correct estimation of the halo masses and concentrations, the RCs need to be measured out to at least a radial distance of $0.8r_s$, where $r_s$ is the scale radius of the corresponding DM halo density profile. Out of the total $\sim$135 SPARC galaxies, using our reliability criteria, we find that only 21 RCs clear the bar to be used for unbiased cusp-core distinction as well as DM halo mass-concentration estimates at a $\geq$75\% reliability confidence level. With $\geq$66\% ($\geq$50\%) reliability settings, the sample size increases to 44 (59). Interestingly, in the $\geq$75\% reliable subsample, there are 5 times more galaxies that are reliably cored than cuspy. [Abridged]
In this study, we investigate how baryonic effects vary with scale and local density environment, mainly by utilizing a novel statistic, the environment-dependent wavelet power spectrum (env-WPS). With four state-of-the-art cosmological simulation suites, EAGLE, SIMBA, Illustris, and IllustrisTNG, we compare the env-WPS of the total matter density field between the hydrodynamic and dark matter-only (DMO) runs at $z=0$. We find that the clustering is most strongly suppressed in the emptiest environment of $\rho_\mathrm{m}/\bar\rho_\mathrm{m}<0.1$, with maximum amplitudes of $\sim67-89$ per cent at wavenumbers $k\sim1.86-10.96\ h\,\mathrm{Mpc}^{-1}$, and less suppressed in higher-density environments on small scales (except in Illustris). In the environments of $\rho_\mathrm{m}/\bar\rho_\mathrm{m}\geqslant0.316$ ($\geqslant10$ in EAGLE), feedback also leads to enhancement features at intermediate and large scales, which are most pronounced in the densest environment of $\rho_\mathrm{m}/\bar\rho_\mathrm{m}\geqslant100$ and reach a maximum of $\sim 7-15$ per cent at $k\sim0.87-2.62\ h\,\mathrm{Mpc}^{-1}$ (except in Illustris). The baryon fraction of the local environment decreases with increasing density, tracing the feedback strength and potentially explaining some differences between simulations. We also measure the volume and mass fractions of the local environments, which are affected at the $\gtrsim 1$ per cent level by baryon physics. In conclusion, our results reveal that baryonic processes can change the overall cosmic structure significantly on scales of $k>0.1\ h\,\mathrm{Mpc}^{-1}$. These findings warrant further investigation and confirmation using much larger or more numerous simulations, comparing different cosmic structure classifications, or altering a specific process in the simulation, e.g. star formation, black hole accretion, stellar feedback or active galactic nucleus feedback.
We propose the possibility that compact extra dimensions can obtain a large size through higher-dimensional inflation, relating the weakness of the present-day gravitational force to the size of the observable universe. Solving the horizon problem implies that the fundamental scale of gravity is smaller than $10^{13}$ GeV, which can be realised in a braneworld framework for any number of extra dimensions. However, the requirement of an (approximately) flat power spectrum of primordial density fluctuations consistent with present observations makes this simple proposal viable only for one extra dimension at around the micron scale. After the end of five-dimensional inflation, the radion modulus can be stabilised at a vacuum with positive energy of the order of the present dark energy scale. An attractive possibility is based on the contribution to the Casimir energy of right-handed neutrinos with a mass at a similar scale.
The cosmological evolution within the framework of exponential $F(R)$ gravity is analysed by assuming two forms for dark matter: (a) a standard dust-like fluid and (b) an axion scalar field. As shown in previous literature, an axion-like field oscillates during the cosmological evolution but can play the role of dark matter when approaching the minimum of its potential. Both scenarios are confronted with recent observational data, including the Pantheon Type Ia supernovae, Hubble parameter estimations (cosmic chronometers), Baryon Acoustic Oscillations and Cosmic Microwave Background distances. The models describe these observations as well as the $\Lambda$CDM model does, supporting the viability of exponential $F(R)$ gravity. The differences between the two descriptions of dark matter are also analysed.
This paper studies a parametrization of the equation-of-state (EoS) parameter of dark energy (DE) using the Square-Root (SR) form, i.e. $\omega_{SR}=\omega_{0}+\omega_{1}\frac{z}{\sqrt{z^{2}+1}}$, where $\omega_{0}$ and $\omega_{1}$ are free constants. This parametrization is examined in the context of the recently suggested $f(Q)$ gravity theory, an alternative to General Relativity (GR) in which gravitational effects are attributed to the non-metricity scalar $Q$, with the functional form $f(Q)=Q+\alpha Q^{n}$, where $\alpha$ and $n$ are arbitrary constants. We derive observational constraints on the model parameters using the Hubble dataset with 31 data points and the Supernovae (SNe) dataset from the Pantheon compilation with 1048 data points. For the current model, the evolution of the deceleration parameter, the density parameter, the EoS for DE, and the $Om(z)$ diagnostic are all investigated. The deceleration parameter favors the current accelerated expansion phase, and the EoS parameter for DE has a quintessence nature at the present time.
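The SR form is simple enough to evaluate directly; a short sketch (with illustrative, not best-fit, values of $\omega_0$ and $\omega_1$) showing that the EoS interpolates between $\omega_0$ today and $\omega_0+\omega_1$ at high redshift:

```python
import numpy as np

def w_sr(z, w0=-1.0, w1=0.5):
    """Square-Root EoS parametrization: w(z) = w0 + w1 * z / sqrt(z^2 + 1)."""
    z = np.asarray(z, dtype=float)
    return w0 + w1 * z / np.sqrt(z**2 + 1.0)

print(w_sr(0.0))   # -> w0 at the present epoch (z = 0)
print(w_sr(1e6))   # -> approaches w0 + w1 in the far past (z >> 1)
```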
We present a pan-chromatic study of AT2017bcc, a nuclear transient that was discovered in 2017 within the skymap of a reported burst-like gravitational wave candidate, G274296. It was initially classified as a superluminous supernova, and then reclassified as a candidate tidal disruption event. Its optical light curve has since shown ongoing variability with a structure function consistent with that of an active galactic nucleus; however, earlier data show no variability for at least 10 years prior to the 2017 outburst. The spectrum shows complex profiles in the broad Balmer lines: a central component with a broad blue wing, and a boxy component with time-variable blue and red shoulders. The H$\alpha$ emission profile is well modelled using a circular accretion disc component, and a blue-shifted double Gaussian which may indicate a partially obscured outflow. Weak narrow lines, together with the previously flat light curve, suggest that this object represents a dormant galactic nucleus which has recently been re-activated. Our time-series modelling of the Balmer lines suggests that this is connected to a disturbance in the disc morphology, and we speculate this could involve a sudden violent event such as a tidal disruption event involving the central supermassive black hole. Although we find that the redshifts of AT2017bcc ($z=0.13$) and G274296 ($z>0.42$) are inconsistent, this event adds to the growing diversity of both nuclear transients and multi-messenger contaminants.
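The variability argument above rests on the structure function of the light curve; one common first-order definition, sketched here on hypothetical epoch/magnitude arrays (the exact estimator used in the paper may differ), is:

```python
import numpy as np

def structure_function(t, m, lag_bins):
    """First-order SF: rms magnitude difference as a function of time lag."""
    i, j = np.triu_indices(len(t), k=1)   # all epoch pairs; O(N^2), fine for sparse data
    lags = np.abs(t[i] - t[j])
    dm2 = (m[i] - m[j]) ** 2
    idx = np.digitize(lags, lag_bins)
    return np.array([np.sqrt(dm2[idx == b].mean()) if np.any(idx == b) else np.nan
                     for b in range(1, len(lag_bins))])
```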
The axion is a long-postulated boson that can simultaneously solve two fundamental problems of modern physics: the charge-parity symmetry problem in the strong interaction and the enigma of dark matter. In this work we estimate, by means of Monte Carlo simulations, the sensitivity of the Dark-photons$\&$Axion-Like particles Interferometer (DALI), a new-generation Fabry-P\'erot haloscope proposed to probe axion dark matter in the 25-250 $\mu$eV band.
We study the possibility that dark matter re-enters kinetic equilibrium with a radiation bath after kinetic decoupling, a scenario we dub kinetic recoupling. This naturally occurs, for instance, with certain types of resonantly-enhanced interactions, or as the result of a phase transition. While late kinetic decoupling damps structure on small scales below a cutoff, kinetic recoupling produces more complex changes in the power spectrum that depend on the nature and extent of the recoupling period. We explore the features that kinetic recoupling imprints upon the matter power spectrum, and discuss how such features can be traced to dark matter microphysics with future observations.
A recent work presented a solution for a Schwarzschild black hole with a force-free magnetic field, showing how the canonical form of the background metric changes [Found. Phys. 52, 93 (2022)]. It is therefore natural to seek analogous modified solutions for the Kerr black hole. The goal of this paper is thus to pinpoint an exact solution for Kerr black holes perturbed by force-free magnetic fields. To do this, we use the well-known tetrad formalism and obtain an explicit expression for the electromagnetic field-strength tensor in the background (Kerr) metric based on tangent-space calculations. Analyzing the stress-energy tensor reveals the perturbed factors in the metric, allowing us to better understand the physics of the disks around massive and supermassive black holes. These results show that it is not enough to rely solely on perturbation theory set against the background metric without considering the backreaction of the force-free magnetofluid on it. This research indicates the necessity of studying the impact of force-free magnetic fields on the structure and evolution of these enigmatic cosmic objects.
Ultra-slow-roll (USR) inflation predicts an exponential amplification of scalar perturbations at small scales, which leads to a stochastic gravitational wave background (SGWB) through the coupling of the scalar and tensor modes at second order in the expansion of the Einstein equation. In this work, we search for such a scalar-induced SGWB in the NANOGrav 15-year (NG15) dataset, and find that the SGWB from USR inflation could explain the observed data. We constrain the amplitude of the scalar power spectrum to $P_{\mathrm{Rp}} > 10^{-1.80}$ at $95\%$ confidence level (C.L.) at the scale of $k\sim 20\, \mathrm{pc}^{-1}$. We find that $\log_{10} P_{\mathrm{Rp}}$ is degenerate with the peak scale $\log_{10} k_{\mathrm{p}}$. We also obtain the parameter space allowed by the data in the USR inflationary scenario, where the $e$-folding number of the duration of the USR phase has a lower limit $\Delta N > 2.80$ ($95\%$ C.L.) when the USR phase ends at $N\approx 20$. Since the priors for the model parameters in the USR model are uncertain, we do not calculate Bayes factors. Instead, to quantify the goodness of fit, we calculate the maximum values of the log-likelihood for USR inflation, bubble collisions in a cosmological phase transition, and inspiraling supermassive black hole binaries (SMBHBs), respectively. Our results imply that the SGWB from USR inflation can fit the data better than the one from SMBHBs.
Constrained cosmological simulations play an important role in modelling the local Universe, enabling investigation of the dark matter content of local structures and their formation histories. We introduce a method for determining the extent to which individual haloes are reliably reconstructed between constrained simulations, and apply it to the Constrained Simulations in BORG (CSiBORG) suite of $101$ high-resolution realisations across the posterior probability distribution of initial conditions from the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm. The method is based on the overlap of the initial Lagrangian patch of a halo in one simulation with those in another, and therefore measures the degree to which the haloes' particles are initially coincident. By this metric we find consistent reconstructions of $M\gtrsim10^{14}~M_\odot / h$ haloes across the CSiBORG simulations, indicating that the constraints from the BORG algorithm are sufficient to pin down the masses, positions and peculiar velocities of clusters to high precision. The effect of the constraints tapers off towards lower mass however, and the halo spins and concentrations are largely unconstrained at all masses. We document the advantages of evaluating halo consistency in the initial conditions, describe how the method may be used to quantify our knowledge of the halo field given galaxy survey data analysed through the lens of probabilistic inference machines such as BORG, and describe applications to matched but unconstrained simulations.
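In its simplest form, the Lagrangian-patch criterion reduces to a set intersection over the particle IDs that make up each halo in the initial conditions; the sketch below is a deliberately simplified, unweighted version of the overlap statistic (the CSiBORG definition additionally weights by mass in initial-condition cells):

```python
def patch_overlap(ids_a, ids_b):
    """Fraction of halo A's initial (Lagrangian) particles also in halo B."""
    a, b = set(ids_a), set(ids_b)
    return len(a & b) / len(a)

# haloes matched across two constrained realisations share most of their particles:
print(patch_overlap([1, 2, 3, 4], [2, 3, 4, 5]))  # -> 0.75
```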
Upcoming large galaxy surveys will subject the standard cosmological model, $\Lambda$CDM, to new precision tests. These can be tightened considerably if theoretical models of galaxy formation are available that can predict galaxy clustering and galaxy-galaxy lensing on the full range of measurable scales, throughout volumes as large as those of the surveys, and with sufficient flexibility that uncertain aspects of the underlying astrophysics can be marginalised over. This, in particular, requires mock galaxy catalogues in large cosmological volumes that can be directly compared to observation, and that can be optimised empirically by Markov Chain Monte Carlo or other similar schemes to eliminate or estimate astrophysical parameters related to galaxy formation when constraining cosmology. Semi-analytic galaxy formation methods implemented on top of cosmological dark matter simulations offer a computationally efficient approach to construct physically based and flexibly parametrised galaxy formation models, and as such they are more potent than still faster, but purely empirical, models. Here we introduce an updated methodology for the semi-analytic L-GALAXIES code, allowing it to be applied to simulations of the new MillenniumTNG project, producing galaxies directly on fully continuous past lightcones, potentially over the full sky, out to high redshift, and for all galaxies more massive than $\sim 10^8\,{\rm M}_\odot$. We investigate the numerical convergence of the resulting predictions, and study the projected galaxy clustering signals of different samples. The new methodology can be viewed as an important step towards more faithful forward-modelling of observational data, helping to reduce systematic distortions in the comparison of theory to observations.
In [arXiv:2204.13980], we proposed and motivated a modification of the Einstein equation as a function of the topology of the Universe, in the form of a bi-connection theory. The new equation features an additional "topological term" related to a second, non-dynamical reference connection chosen as a function of the spacetime topology. In the present paper, we analyse the cosmological consequences of this modification. First, we show that expansion becomes blind to the spatial curvature in this new theory, i.e. the expansion laws no longer feature the spatial curvature parameter (i.e. $\Omega_{\not= K} = 1, \ \forall \, \Omega_K$), while this curvature is still present in the evaluation of distances. Second, we derive the first-order perturbations of this homogeneous solution. Compared with general relativity, two additional gauge-invariant variables coming from the reference connection are present: a scalar and a vector mode, both sourced by the shear of the cosmic fluid. Finally, we confront this model with observations. The differences with the $\Lambda$CDM model are negligible; in particular, the Hubble and curvature tensions are still present. Nevertheless, since the main difference between the two models is the influence of the background spatial curvature on the dynamics, increased precision on the measurement of that parameter might allow us to observationally distinguish them.
A cosmological first-order phase transition (FOPT) can involve strong dynamics, but its bubble wall velocity is difficult to determine due to the lack of detailed collision terms. Recent holographic numerical simulations of strongly coupled theories with a FOPT prefer a relatively small wall velocity, linearly correlated with the pressure difference between the false and true vacua for a planar wall. In this Letter, we analytically derive the non-relativistic limit of planar, cylindrical, and spherical wall expansion for a bubble strongly interacting with the thermal plasma. The planar-wall result reproduces the linear relation found previously in the holographic numerical simulations. The results for cylindrical and spherical walls can be directly tested in future numerical simulations. Once confirmed, the bubble wall velocity for a strongly coupled FOPT can be expressed purely in terms of hydrodynamics without invoking the underlying microphysics.
We present and validate the catalog of Lyman-$\alpha$ forest fluctuations for 3D analyses using the Early Data Release (EDR) from the Dark Energy Spectroscopic Instrument (DESI) survey. We used 88,511 quasars collected from DESI Survey Validation (SV) data and the first two months of the main survey (M2). We present several improvements to the method used to extract the Lyman-$\alpha$ absorption fluctuations performed in previous analyses from the Sloan Digital Sky Survey (SDSS). In particular, we modify the weighting scheme and show that it can improve the precision of the correlation function measurement by more than 20%. This catalog can be downloaded from https://data.desi.lbl.gov/public/edr/vac/edr/lya/fuji/v0.3 and it will be used in the near future for the first DESI measurements of the 3D correlations in the Lyman-$\alpha$ forest.
We implement support for a cosmological parameter estimation algorithm as proposed by Racine et al. (2016) in Commander, and quantify its computational efficiency and cost. For a semi-realistic simulation similar to Planck LFI 70 GHz, we find that the computational cost of producing one single sample is about 20 CPU-hours and that the typical Markov chain correlation length is $\sim$100 samples. The net effective cost per independent sample is thus $\sim$2000 CPU-hours, compared with the 812 CPU-hours required for all low-level processing of Planck LFI and WMAP in Cosmoglobe Data Release 1. Thus, although technically possible to run already in its current state, future work should aim to reduce the effective cost per independent sample by one order of magnitude to avoid excessive runtimes, for instance through multi-grid preconditioners and/or derivative-based Markov chain sampling schemes. This work demonstrates the computational feasibility of true Bayesian cosmological parameter estimation with end-to-end error propagation for high-precision CMB experiments without likelihood approximations, but it also highlights the need for additional optimizations before it is ready for full production-level analysis.
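The quoted effective cost follows directly from multiplying the per-sample cost by the chain correlation length; a one-line sanity check of the numbers in the abstract:

```python
cost_per_sample = 20   # CPU-hours per Gibbs sample (quoted above)
corr_length = 100      # Markov chain correlation length in samples (quoted above)
print(cost_per_sample * corr_length)   # -> 2000 CPU-hours per independent sample
```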
The astrophysical Stochastic Gravitational Wave Background (SGWB) originates from the mergers of compact binary objects that are not detected as individual events, along with other sources such as supernovae and magnetars. Each individual GW signal is time-varying over a time scale that depends on the chirp mass of the coalescing binary. Another relevant timescale is that at which the sources repeat, which depends on the merger rate. The combined effect of these two leads to a breakdown of the time-translation symmetry of the observed SGWB and hence to correlations between different frequency modes in the signal covariance matrix of the SGWB. Using an ensemble of SGWB realisations from binary black hole coalescences, calculated from simulations with different black hole mass distributions and merger rates, we show how the structure of the signal covariance matrix varies. This structure carries information about the sources beyond the power spectrum. We show that using this additional information yields a significant improvement in the Figure of Merit relative to power-spectrum estimation alone, for the LIGO-Virgo-KAGRA (LVK) network of detectors at design sensitivity with two years of observation. Including the off-diagonal correlations of the SGWB covariance in data analysis pipelines will be beneficial in the quest for the SGWB signal in the LVK frequency bands as well as at lower frequencies, and in gaining insight into its origin.
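The frequency-frequency correlations described here can be estimated by averaging outer products of short-time Fourier transforms over data segments; a hedged sketch of such a generic estimator (not the paper's pipeline):

```python
import numpy as np

def freq_covariance(x, fs, nseg, nfft):
    """Complex covariance between Fourier modes, averaged over time segments.
    Off-diagonal power signals broken time-translation symmetry."""
    segs = x[: nseg * nfft].reshape(nseg, nfft)
    X = np.fft.rfft(segs, axis=1)        # (nseg, nfreq) Fourier modes per segment
    Xc = X - X.mean(axis=0)
    cov = (Xc.conj().T @ Xc) / nseg      # (nfreq, nfreq) covariance estimate
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, cov
```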
The recent evidence of a stochastic background of gravitational waves in the nHz band by pulsar-timing array (PTA) experiments has shed new light on the formation and evolution of massive black hole binaries with masses $\sim 10^8$--$10^9 M_\odot$. The PTA data are consistent with a population of such binaries merging efficiently after the coalescence of their galactic hosts, and presenting masses slightly larger than previously expected. This momentous discovery calls for investigating the prospects of detecting the smaller ($\sim 10^5$--$10^7 M_\odot$) massive black hole binaries targeted by the Laser Interferometer Space Antenna (LISA). By using semi-analytic models for the formation and evolution of massive black hole binaries calibrated against the PTA results, we find that LISA will observe at least a dozen and up to thousands of black hole binaries during its mission duration. The minimum number of detections rises to $\sim 70$ if one excludes models that only marginally reproduce the quasar luminosity function at $z=6$. We also assess LISA's parameter estimation capabilities with state-of-the-art waveforms including higher modes and a realistic instrumental response, and find that the masses, sky position, and distance of the detected systems will typically be estimated to within 1%, 10 square degrees, and 10%, respectively (assuming a 4-year mission).
The pursuit of unraveling the true essence of dark energy has become an immensely captivating endeavor in modern cosmology. Alongside the conventional cosmological constant approach, a diverse range of ideas has been proposed, encompassing scalar field-based models and various modified gravity approaches. A particularly intriguing notion involves exploring scalar field dark energy models within quantum gravitationally motivated cosmologies, with non-canonical theories standing out as a prominent candidate in this context. Hence, in this work, we investigate three widely recognized non-canonical scalar field dark energy models: phantom, quintom, and DBI dark energy. Employing the Goriely-Hyde procedure, we demonstrate the presence of singularities in both finite and infinite time within these frameworks, and show that these singularities can manifest regardless of the system's initial conditions. Moreover, we establish how cosmological singularities of types I-IV can arise in all of these models. This work shows that non-canonical regimes for dark energy can allow for most of the prominent cosmological singularities in a variety of models.
In this work, we reconstruct the Hubble diagram using various data sets, including correlated ones, with artificial neural networks (ANNs). Starting from ReFANN, which was built for data sets with independent uncertainties, we extend it to include non-Gaussian data points, as well as data sets with covariance matrices, among others. Furthermore, we compare our results with existing ones derived from Gaussian processes, and we also perform null tests in order to assess the validity of the concordance model of cosmology.
Ultralight dark photons are compelling dark matter candidates, but their allowed kinetic mixing with the Standard Model photon is severely constrained by requiring that the dark photons do not collapse into a cosmic string network in the early Universe. Direct detection in minimal production scenarios for dark photon dark matter is strongly limited, if not entirely excluded; discovery of sub-meV dark photon dark matter would therefore point to a nonminimal dark sector. We describe a model that evades such constraints, capable of producing cold dark photons in any parameter space accessible to future direct detection experiments. The associated production dynamics yield additional signatures in cosmology and small-scale structure, allowing for possible positive identification of this particular class of production mechanisms.
A short phenomenological account of the genesis and evolution of the universe is presented, with emphasis on the primordial phases as well as its physical composition, i.e. dark matter and dark energy. We discuss Einstein's theory of General Relativity and its consequences for the birth of modern relativistic astrophysics. We introduce the Big-Bang theory of Mons. Lemaître as well as the competing Steady State theory of Fred Hoyle. Since the Big-Bang theory appeared to be in agreement with the Christian doctrine of creation, Pope Pius XII delivered a message to the Pontifical Academy of Sciences in 1951 claiming a certain agreement between the creation account in the book of Genesis and the Big-Bang theory (a concordist view), a position which he did not repeat later. On the other hand, Lemaître always kept the scientific and theological planes separate, as two parallel "lines" never intersecting, i.e., as two complementary "magisteria". Similar tensions between science and theology emerge also today with the Hartle-Hawking solution to the Wheeler-DeWitt equation in quantum cosmology and its related speculations. To avoid confusion between theological and physical concepts, we briefly summarise the concept of creation in Christian theology.
Common-envelope evolution - where a star is engulfed by a companion - is a critical but poorly understood step in, e.g., the formation pathways for gravitational-wave sources. However, it has been extremely challenging to identify observable signatures of such systems. We show that for systems involving a neutron star, the hypothesized super-Eddington accretion onto the neutron star produces MeV-range, months-long neutrino signals within reach of present and planned detectors. While there are substantial uncertainties on the rate of such events (0.01-1/century in the Milky Way) and the neutrino luminosity (which may be less than the accretion power), this signal can only be found if dedicated new searches are developed. If detected, the neutrino signal would lead to significant new insights into the astrophysics of common-envelope evolution.
In this paper, we present updated estimates of the velocity of the neutron star (NS) in the supernova remnant (SNR) Cassiopeia A using over two decades of Chandra observations. We use two methods: 1) recording NS positions from dozens of Chandra observations, including the astrometric uncertainty estimates on the data points but not correcting the astrometry of the observations, and 2) correcting the astrometry of the 13 Chandra observations that have a sufficient number of point sources with identified Gaia counterparts. For method #1, we find a velocity of 280 $\pm$ 123 km s$^{-1}$ at an angle of 87 $\pm$ 22 degrees south of east. For method #2, we find a velocity of 445 $\pm$ 90 km s$^{-1}$ at an angle of 68 $\pm$ 12 degrees south of east. Both of these results are consistent with the explosion-center-estimated velocity of $\sim$350 km s$^{-1}$ and with the previous 10-year-baseline proper motion measurement of 570 $\pm$ 260 km s$^{-1}$, but our use of additional data over a longer baseline has reduced the uncertainty by a factor of 2-3. Our estimates rule out velocities $\gtrsim$600 km s$^{-1}$ and better match simulations of Cassiopeia A that include NS kick mechanisms.
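The quoted velocities combine a measured proper motion with an adopted distance via the standard conversion $v\,[\mathrm{km\,s^{-1}}] = 4.74\,\mu\,[\mathrm{arcsec\,yr^{-1}}]\,d\,[\mathrm{pc}]$; a quick check, assuming the canonical Cassiopeia A distance of $\sim$3.4 kpc (not quoted in the abstract) and a back-derived, purely illustrative proper motion:

```python
mu_mas_yr = 27.6   # illustrative proper motion in mas/yr (back-derived, hypothetical)
d_pc = 3400.0      # adopted Cas A distance in pc (assumption)
v_kms = 4.74 * (mu_mas_yr / 1000.0) * d_pc
print(round(v_kms))   # -> ~445 km/s, matching the method #2 estimate
```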
FR0 galaxies constitute the most abundant jet population in the local Universe. With their compact jet structure, they are broadband photon emitters and have been proposed as multi-messenger sources. Recently, these sources have been detected for the first time in $\gamma$ rays. Using a revised FR0 catalog, we confirm that the FR0 population as a whole is $\gamma$-ray emitting, and we also identify two significant individual sources. For the first time, we find a correlation between the 5 GHz core radio luminosity and the $\gamma$-ray luminosity in the 1-800 GeV band, with a 4.5$\sigma$ statistical significance. This is clear evidence that the jet emission mechanism is similar in nature for FR0s and the well-studied canonical FR (FRI and FRII) radio galaxies. Furthermore, we perform broadband SED modeling for the significantly detected sources as well as the subthreshold source population using a one-zone SSC model. Within the maximum jet power budget, our modeling shows that the detected gamma rays from the jet can be explained as inverse Compton photons. To explain the multi-wavelength observations of these galaxies, the modeling requires a low bulk Lorentz factor and a jet composition far from equipartition, with the particle energy density dominating over the magnetic field energy density.
We present highly sensitive measurements taken with MeerKAT at 1280 MHz, as well as archival GBT, MWA and VLA images at 333, 88 and 74 MHz. We report the detection of synchrotron radio emission from the infrared dark cloud (IRDC) associated with the halo of the Sgr B complex on a scale of ~60 pc. A strong spatial correlation between low-frequency radio continuum emission and dense molecular gas, combined with spectral index measurements, indicates enhanced synchrotron emission by cosmic-ray electrons. The correlation of the Fe I 6.4 keV K-alpha line with the synchrotron emission provides compelling evidence that low-energy cosmic-ray electrons are responsible for producing the K-alpha line emission. The synchrotron emission within the halo of the Sgr B cloud complex has a mean spectral index alpha = -1 +/- 1, from which we infer a magnetic field strength of ~100 muG for cloud densities nH = 10^4-10^5 cm^-3 and estimate cosmic-ray ionization rates between 10^-14 and 10^-13 s^-1. Furthermore, the energy spectrum of primary cosmic-ray electrons is constrained to be E^(-3 +/- 1) for typical energies of a few hundred MeV. The extrapolation of this spectrum to higher energies is consistent with the X-ray and gamma-ray emission detected from this cloud. These measurements have important implications for the role that high cosmic-ray electron fluxes at the Galactic center play in the production of radio synchrotron emission, the Fe I K-alpha line emission at 6.4 keV, and the ~GeV gamma-ray emission throughout the central molecular zone (CMZ).
Core-collapse supernovae (SNe) are one of the most powerful cosmic sources of neutrinos, with energies of several MeV. The emission of neutrinos and antineutrinos of all flavors carries away the gravitational binding energy of the compact remnant and drives its evolution from the hot initial to the cold final state. Detecting these neutrinos from Earth and analyzing the emitted signals present a unique opportunity to explore the neutrino mass ordering problem. This research outlines the detection of neutrinos from SNe and their relevance in understanding the neutrino mass ordering. The focus is on developing a model-independent analysis strategy, achieved by comparing distinct detection channels in large underground detectors. The objective is to identify potential indicators of mass ordering within the neutrino sector. Additionally, a thorough statistical analysis is performed on the anticipated neutrino signals for both mass orderings. Despite uncertainties in supernova explosion parameters, an exploration of the parameter space reveals an extensive array of models with significant sensitivity to differentiate between mass orderings. The assessment of various observables and their combinations underscores the potential of forthcoming supernova observations in addressing the neutrino mass ordering problem.
The recent serendipitous discovery of a new population of short-duration X-ray transients, thought to be associated with collisions of compact objects or stellar explosions in distant galaxies, has motivated efforts to build up statistical samples by mining X-ray telescope archives. Most searches to date, however, do not fully exploit recent developments in the signal and image processing research domains to optimise searches for short X-ray flashes. This paper addresses this issue by presenting a new source detection pipeline, STATiX (Space and Time Algorithm for Transients in X-rays), which directly operates on 3-dimensional X-ray data cubes consisting of two spatial and one temporal dimension. The algorithm leverages wavelet transforms and the principles of sparsity to denoise X-ray observations and then detect source candidates on the denoised data cubes. The light curves of the detected sources are then characterised using the Bayesian blocks algorithm to identify flaring periods. We describe the implementation of STATiX in the case of XMM-Newton data, present extensive validation and performance verification tests based on simulations, and also apply the pipeline to a small subset of seven XMM-Newton observations which are known to contain transient sources. In addition to known flares in the selected fields, we report a previously unknown short-duration transient found by our algorithm that is likely associated with a flaring Galactic star. This discovery demonstrates the potential of applying STATiX to the full XMM-Newton archive.
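The light-curve characterisation step can be illustrated with the standard Bayesian blocks algorithm as implemented in astropy (a toy illustration of the segmentation idea, not the STATiX code itself):

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
# hypothetical photon arrival times: uniform background plus a short flare
t = np.sort(np.concatenate([rng.uniform(0.0, 1000.0, 200),    # background events
                            rng.uniform(400.0, 420.0, 80)]))  # flare events
edges = bayesian_blocks(t, fitness='events', p0=0.01)         # optimal change points
print(edges)   # a short, high-rate block near t ~ 400-420 flags the flaring period
```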
The existence of a plateau in the short-duration tail of the observed distribution of cosmological Long-soft Gamma Ray Bursts (LGRBs) has been argued as the first direct evidence of Collapsars. A similar plateau in the short-duration tail of the observed duration distribution of Short-hard Gamma Ray Bursts (SGRBs) has been suggested as evidence of compact binary mergers. We present an equally plausible alternative interpretation for this evidence, which is purely statistical. Specifically, we show that the observed plateau in the short-duration tail of the duration distribution of LGRBs can naturally occur in the statistical distributions of strictly-positive physical quantities, exacerbated by the effects of mixing with the duration distribution of SGRBs, observational selection effects and data aggregation (e.g., binning) methodologies. The observed plateau in the short-duration tail of the observed distributions of SGRBs can similarly result from a combination of sample incompleteness and inhomogeneous binning of data.
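The statistical point can be reproduced with a toy experiment: draw durations from a lognormal mixture (any strictly positive distribution behaves similarly) and histogram them in logarithmic bins; the mixture parameters below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy duration mixture: a long-duration and a short-duration lognormal population
t90 = np.concatenate([rng.lognormal(np.log(30.0), 1.0, 70000),   # "LGRB-like"
                      rng.lognormal(np.log(0.8), 1.0, 30000)])   # "SGRB-like"
bins = np.logspace(-2, 3, 60)
counts, _ = np.histogram(t90, bins=bins)
dN_dT = counts / np.diff(bins)
# plotted on log-log axes, dN/dT can show an apparent flattening at short
# durations from the mixing and binning alone, with no extra source population
```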
The $\gamma$-ray emission of blazars implies the presence of large-scale magnetic fields in the intergalactic medium, but their origin remains a mystery. Using recent data from MAGIC, H.E.S.S. and $\textit{Fermi}$-LAT, we investigate whether the large-scale magnetic fields in the intergalactic medium could have been generated by a first-order electroweak phase transition in the two-Higgs-doublet model (2HDM). We study two representative scenarios in which we vary the initial conditions of the magnetic field and the plasma, assuming either a primordial magnetic field with maximal magnetic helicity or a primordial magnetic field with negligible magnetic helicity in a plasma with kinetic helicity. Considering a primordial magnetic field with maximal helicity and applying the conservative constraints derived from MAGIC and $\textit{Fermi}$-LAT data, we demonstrate that a first-order electroweak phase transition within the 2HDM may account for the observed intergalactic magnetic fields in the case of the strongest transitions. We show that this parameter space also predicts strong gravitational wave signals within reach of space-based detectors such as LISA, providing a striking multi-messenger signal of the 2HDM.
We show, for the first time, radio measurements of the depth of shower maximum ($X_\text{max}$) of air showers induced by cosmic rays that are compared to measurements of the established fluorescence method at the same location. Using measurements at the Pierre Auger Observatory, we show full compatibility between our radio data set and the previously published fluorescence data set, and between a subset of air showers observed simultaneously with both the radio and fluorescence techniques, a measurement setup unique to the Pierre Auger Observatory. Furthermore, we show the radio $X_\text{max}$ resolution as a function of energy and demonstrate the ability to make competitive, high-resolution $X_\text{max}$ measurements with even a sparse radio array. With this, we show that the radio technique is capable of cosmic-ray mass composition studies, both at Auger and at other experiments.
We report on a new search for continuous gravitational waves from NS 1987A, the neutron star born in SN 1987A, using open data from Advanced LIGO and Virgo's third observing run (O3). The search covered frequencies from 35 to 1050 Hz, more than five times the band of the only previous gravitational wave search to constrain NS 1987A [B. J. Owen et al., ApJL 935, L7 (2022)]. It used an improved code and coherently integrated from 5.10 days to 14.85 days depending on frequency. No astrophysical signals were detected. By expanding the frequency range and using O3 data, this search improved on the strain upper limits from the previous search and was sensitive at the highest frequencies to ellipticities of $1.6\times10^{-5}$ and r-mode amplitudes of $4.4\times10^{-4}$, both an order of magnitude improvement over the previous search and both well within the range of theoretical predictions.
The Auger Engineering Radio Array (AERA), part of the Pierre Auger Observatory, is currently the largest array of radio antenna stations deployed for the detection of cosmic rays, spanning an area of $17$ km$^2$ with 153 radio stations. It detects the radio emission of extensive air showers produced by cosmic rays in the $30-80$ MHz band. Here, we report the AERA measurements of the depth of the shower maximum ($X_\text{max}$), a probe for mass composition, at cosmic-ray energies between $10^{17.5}$ to $10^{18.8}$ eV, which show agreement with earlier measurements with the fluorescence technique at the Pierre Auger Observatory. We show advancements in the method for radio $X_\text{max}$ reconstruction by comparison to dedicated sets of CORSIKA/CoREAS air-shower simulations, including steps of reconstruction-bias identification and correction, which is of particular importance for irregular or sparse radio arrays. Using the largest set of radio air-shower measurements to date, we show the radio $X_\text{max}$ resolution as a function of energy, reaching a resolution better than $15$ g cm$^{-2}$ at the highest energies, demonstrating that radio $X_\text{max}$ measurements are competitive with the established high-precision fluorescence technique. In addition, we developed a procedure for performing an extensive data-driven study of systematic uncertainties, including the effects of acceptance bias, reconstruction bias, and the investigation of possible residual biases. These results have been cross-checked with air showers measured independently with both the radio and fluorescence techniques, a setup unique to the Pierre Auger Observatory.
Andreev-Bashkin entrainment makes the hydrodynamics of a binary superfluid solution particularly interesting. We investigate the stability and motion of quantum vortices in such a system.
We perform the first numerical simulations modeling the inflow and outflow of magnetized plasma in the Kerr-Sen spacetime, which describes classical spinning black holes (BHs) in string theory. We find that the Blandford-Znajek (BZ) mechanism, which is believed to power astrophysical relativistic outflows or ``jets'', is valid even for BHs in an alternative theory of gravity, including near the extremal limit. The BZ mechanism releases outward Poynting-flux-dominated plasma as frame-dragging forces magnetic field lines to twist. However, for non-spinning BHs, where frame-dragging is absent, we find an alternative powering mechanism through the release of gravitational potential energy during accretion. Outflows from non-spinning stringy BHs can be approximately $250\%$ more powerful than those from Schwarzschild BHs, due to their relatively smaller event horizon sizes and, thus, higher curvatures. Finally, by constructing the first synthetic images of near-extremal non-Kerr BHs from time-dependent simulations, we find that these can be ruled out by horizon-scale interferometric images of accreting supermassive BHs.
The launching of astrophysical jets provides the most compelling observational evidence for the direct extraction of black hole (BH) spin energy via the Blandford-Znajek (BZ) mechanism. Whilst it is known that spinning Kerr BHs within general relativity (GR) follow the BZ jet power relation, the nature of BH energy extraction in general theories of gravity has not been adequately addressed. This study performs the first comprehensive investigation of the BZ jet power relation by utilising a generalised BH spacetime geometry which describes parametric deviations from the Kerr metric of GR, yet recovers the Kerr metric in the limit that all deviation parameters vanish. By performing and analysing an extensive suite of three-dimensional covariant magnetohydrodynamics (MHD) simulations of magnetised gas accretion onto these generalised BH spacetimes, we find that the BZ jet power relation still holds, in some instances yielding jet powers far in excess of what can be produced by even extremal Kerr BHs. It is shown that varying the quadrupole moment of the BH can enhance or suppress the effects of BH spin, and by extension of frame-dragging. This variation greatly enhances or suppresses the observed jet power and the underlying photon ring image asymmetry, introducing a previously unexplored yet important degeneracy in BH parameter inference.
Time-dependent photoionization modeling of warm absorber outflows in active galactic nuclei can play an important role in understanding the interaction between warm absorbers and the central black hole. The warm absorber may be out of equilibrium because of the variable nature of the central continuum. In this paper, with the help of time-dependent photoionization modeling, we study how the warm absorber gas changes with time and how it reacts to changing radiation fields. Incorporating a flaring incident light curve, we investigate the behavior of warm absorbers using a photoionization code that simultaneously and consistently solves the time-dependent equations of level population, heating and cooling, and radiative transfer. We simulate the physical processes in the gas clouds, such as ionization, recombination, heating, cooling, and the transfer of ionizing radiation through the cloud. We show that time-dependent radiative transfer is important and that calculations which omit this effect quantitatively and systematically underestimate the absorption. Such models provide crucial insights into the characteristics of warm absorbers and can constrain their density and spatial distribution.
We numerically construct a series of axisymmetric rotating magnetic wind solutions, aiming to explore the observational properties of massive white dwarf (WD) merger remnants with a strong magnetic field, a fast spin, and an intense mass loss, as inferred for WD J005311. We investigate the magnetospheric structure and the resultant spin-down torque exerted on the merger remnant with respect to the surface magnetic flux $\Phi_*$, spin angular frequency $\Omega_*$, and mass loss rate $\dot M$. We confirm that the wind properties for $\sigma \equiv \Phi^2_* \Omega_*^2/\dot M v_\mathrm{esc}^3 \gtrsim 1$ significantly deviate from those of the spherical Parker wind, where $v_\mathrm{esc}$ is the escape velocity at the stellar surface. For such a rotating magnetic wind sequence, we find: (i) quasi-periodic mass eruptions triggered by magnetic reconnection along the equatorial plane, and (ii) a scaling relation for the spin-down torque $T \approx (1/2) \times \dot{M} \Omega_* R^2_* \sigma^{1/4}$. We apply our results to discuss the spin-down evolution and wind anisotropy of massive WD merger remnants, the latter of which could be probed by follow-up observations of WD J005311 with Chandra.
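A quick numerical illustration of the scaling relation above: the sketch below evaluates $\sigma$ and the torque $T \approx (1/2)\,\dot{M} \Omega_* R^2_* \sigma^{1/4}$ in CGS units. All stellar parameters here are placeholder values chosen only to land in the magnetized regime, not quantities fitted to WD J005311.

```python
import numpy as np

# Illustrative evaluation of the abstract's scaling relation (placeholder values).
G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7   # CGS constants

M_star = 1.2 * Msun            # remnant mass (assumed)
R_star = 3.0e8                 # radius in cm, ~3000 km (assumed)
B_star = 1.0e7                 # surface field in G (assumed)
P_spin = 300.0                 # spin period in s (assumed)
Mdot   = 1.0e-6 * Msun / yr    # wind mass-loss rate in g/s (assumed)

Omega = 2.0 * np.pi / P_spin
Phi   = B_star * R_star**2                      # surface magnetic flux (order of magnitude)
v_esc = np.sqrt(2.0 * G * M_star / R_star)      # surface escape velocity

sigma = Phi**2 * Omega**2 / (Mdot * v_esc**3)   # magnetization parameter
T = 0.5 * Mdot * Omega * R_star**2 * sigma**0.25

print(f"sigma = {sigma:.2f}, spin-down torque T = {T:.2e} erg")
```

For these numbers $\sigma$ comes out at a few, i.e. in the regime where the abstract reports significant deviations from the spherical Parker wind.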
The nearby dwarf galaxy POX 52 at $z = 0.021$ hosts an active galactic nucleus (AGN) with a black-hole (BH) mass of $M_{\rm BH} \sim 10^{5-6} M_\odot$ and an Eddington ratio of $\sim$ 0.1-1. This object provides a rare opportunity to study both AGN and host-galaxy properties in a low-mass, highly accreting system. To do so, we collected its multi-wavelength data from X-ray to radio. First, we construct a spectral energy distribution and, by fitting it with AGN and host-galaxy components, constrain the AGN-disk and dust-torus components. Then, while accounting for the AGN-disk emission, we decompose optical HST images. As a result, we find that a classical bulge component is probably present, and its mass ($M_{\rm bulge}$) is consistent with the value expected from the local relation. Lastly, we analyze new quasi-simultaneous X-ray (0.2-30 keV) data obtained by NuSTAR and XMM-Newton. The X-ray spectrum can be reproduced by multi-color blackbody, warm and hot coronae, and disk and torus reflection components. Based on this, the spin is estimated to be $a_{\rm spin} = 0.998_{-0.814}$, which could suggest that most of the current BH mass was achieved by prolonged mass accretion. Given the presence of the bulge, POX 52 would have undergone a galaxy merger, while the $M_{\rm BH}$-$M_{\rm bulge}$ relation and the inferred prolonged accretion could suggest that AGN feedback occurred. Regarding the AGN structure, the spectral slope of the hot corona, its strength relative to the bolometric emission, and the torus structure are found to be consistent with Eddington-ratio dependencies found for nearby AGNs.
This paper investigates the spin-up of a mass-accreting star in a close binary system passing through the first stage of mass exchange in the Hertzsprung gap. Inside the accreting star, angular momentum is carried by meridional circulation and shear turbulence. The circulation carries part of the angular momentum entering the accretor to its surface; the greater the rate at which angular momentum arrives at the accretor, the larger this part. It is assumed that this part of the angular momentum can then be removed by the disk, away from the accretor. If the angular momentum of the matter entering the accretor is more than half the Keplerian value, then the angular momentum acquired by the accretor during the mass-exchange stage does not depend on the rate of arrival of angular momentum. The accretor may have the characteristics of a Be star immediately after the end of mass exchange.
We assess the impact of accurate, self-consistent modelling of thermal effects in neutron-star merger remnants in the context of third-generation gravitational-wave detectors. This is done through the use, in Bayesian model selection experiments, of numerical-relativity simulations of binary neutron star (BNS) mergers modelled through: a) nuclear, finite-temperature (or ``tabulated'') equations of state (EoSs), and b) their simplified piecewise (or ``hybrid'') representation. These cover four different EoSs, namely SLy4, DD2, HShen and LS220. Our analyses make direct use of the Newman-Penrose scalar $\psi_4$ output by the numerical simulations. Considering a detector network formed by three Cosmic Explorers, we show that differences in the gravitational-wave emission predicted by the two models are detectable with a natural logarithmic Bayes factor $\log{\cal{B}}\geq 5$ at average distances of $d_L \simeq 50$ Mpc, reaching $d_L \simeq 100$ Mpc for source inclinations $\iota \leq 0.8$, regardless of the EoS. This impact is most pronounced for the HShen EoS. For low inclinations, only the DD2 EoS prevents the detectability of such modelling differences at $d_L \simeq 150$ Mpc. Our results suggest that the use of a self-consistent treatment of thermal effects is crucial for third-generation gravitational-wave detectors.
We present self-consistent three-dimensional core-collapse supernova simulations of a rotating $20M_\odot$ progenitor model with various initial angular velocities from $0.0$ to $4.0$ rad s$^{-1}$, using a smoothed particle hydrodynamics code, SPHYNX, and a grid-based hydrodynamics code, FLASH. We identify two strong gravitational-wave features, with peak frequencies of $\sim300$ Hz and $\sim1.3$ kHz in the first $100$ ms postbounce. We demonstrate that these two features are associated with the $m=1$ deformation of the proto-neutron star (PNS) induced by the low-$T/|W|$ instability, regardless of the simulation code. The $300$ Hz feature is present in models with an initial angular velocity between $1.0$ and $4.0$ rad s$^{-1}$, while the $1.3$ kHz feature is present only in a narrower range, from $1.5$ to $3.5$ rad s$^{-1}$. We show that the $1.3$ kHz signal originates from the high-density inner core of the PNS, and that the $m=1$ deformation triggers a strongly asymmetric distribution of electron anti-neutrinos. In addition to the $300$ Hz and $1.3$ kHz features, we also observe a weaker but noticeable gravitational-wave feature from higher-order modes in the range between $1.5$ and $3.5$ rad s$^{-1}$. Its peak frequency is around $800$ Hz initially and gradually increases to $900-1000$ Hz. Therefore, in addition to the bounce signal, the detection of the $300$ Hz feature, the $1.3$ kHz feature, the higher-order modes, and even the related asymmetric emission of neutrinos could provide additional diagnostics for estimating the initial angular velocity of a collapsing core.
As the most energetic explosions in the Universe, gamma-ray bursts (GRBs) are commonly believed to be generated by relativistic jets. Recent observational evidence suggests that the jets producing GRBs are likely to have a structured nature. Some studies have suggested that non-axisymmetric structured jets may be formed through internal non-uniform magnetic dissipation processes or the precession of the central engine. In this study, we analyze the potential characteristics of GRB afterglows within the framework of non-axisymmetric structured jets. We simplify the profile of the asymmetric jet as a step function of the azimuth angle, dividing the entire jet into individual elements. By considering specific cases, we demonstrate that the velocity, energy, and line-of-sight direction of each jet element can greatly affect the behaviour of the overall light curve. The radiative contributions from multiple elements may lead to the appearance of multiple distinct peaks or plateaus in the light curve. Furthermore, fluctuations in the rising and declining segments of each peak can be observed. These findings establish a theoretical foundation for future investigations into the structural characteristics of GRBs by leveraging GRB afterglow data.
Supernova explosions attributed to the unseen companions in several binary systems identified by the Third Gaia Data Release (Gaia DR3) may be responsible for a number of well-known and well-studied features in the radio sky, including the Low-Latitude-Intermediate-Velocity Arch and the North-Celestial-Pole Loop. Slices from the longitude-latitude-velocity data cube of the $\lambda$ 21-cm Galactic neutral hydrogen HI4PI survey (HI4PI Collaboration et al. 2016) show multiple signatures of an expanding shell. The source of this expansion, which includes the Low-Latitude-Intermediate-Velocity Arch on the approaching side, may be the neutron star candidate in the Gaia DR3 1093757200530267520 binary. If we make the simplifying assumptions that the expansion of the cavity is uniform and spherically symmetric, then the explosion took place about 700,000 years ago. The momentum is in reasonable agreement with recent model estimates for a supernova this old. The HI on the receding side of this cavity is interacting with the gas approaching us on the near side of a second cavity. The North-Celestial-Pole Loop appears to be located at the intersection of these two expanding features. The neutron star candidate in the Gaia DR3 1144019690966028928 binary may be (in part) responsible for this second cavity. Explosions from other candidates may account for the observed elongation of this second cavity along the line of sight. We can use the primary stars in these binaries to anchor the distances to the Low-Latitude-Intermediate-Velocity Arch and the North-Celestial-Pole Loop, which are about 167 and 220 pc, respectively.
We describe the interaction of parallel-propagating Alfv\'en waves with ion-acoustic waves and other Alfv\'en waves, in magnetized, high-$\beta$ collisionless plasmas. This is accomplished through a combination of analytical theory and numerical fluid simulations of the Chew-Goldberger-Low (CGL) magnetohydrodynamic (MHD) equations closed by Landau-fluid heat fluxes. An asymptotic ordering is employed to simplify the CGL-MHD equations and derive solutions for the deformation of an Alfv\'en wave that results from its interaction with the pressure anisotropy generated either by an ion-acoustic wave or another, larger-amplitude Alfv\'en wave. The difference in timescales of acoustic and Alfv\'enic fluctuations at high-$\beta$ means that interactions that are local in wavenumber space yield little modification to either mode within the time it takes the acoustic wave to Landau damp away. Instead, order-unity changes in the amplitude of Alfv\'enic fluctuations can result after interacting with frequency-matched acoustic waves. Additionally, we show that the propagation speed of an Alfv\'en-wave packet in an otherwise homogeneous background is a function of its self-generated pressure anisotropy. This allows for the eventual interaction of separate co-propagating Alfv\'en-wave packets of differing amplitudes. The results of the CGL-MHD simulations agree well with these predictions, suggesting that theoretical models relying on the interaction of these modes should be reconsidered in certain astrophysical environments. Applications of these results to weak Alfv\'enic turbulence and to the interaction between the compressive and Alfv\'enic cascades in strong, collisionless turbulence are also discussed.
Theoretical models of accretion discs and observational data indicate that the X-ray emission from the inner parts of an accretion disc can irradiate its outer regions and induce a thermal wind, which carries away mass and angular momentum from the disc. Our aim is to investigate the influence of the thermal wind on the outburst light curves of black hole X-ray binary systems. We carry out numerical simulations of non-stationary disc accretion with a wind using the upgraded open code freddi. We assume that the wind launches only from the ionised part of the disc and may turn off if the latter shrinks fast enough. Our estimates of the viscosity parameter $\alpha$ are shifted downward compared to a scenario without a wind. Generally, the correction to $\alpha$ depends on the spectral hardness of the central X-rays and the disc outer radius, but is unlikely to exceed a factor of 10 in the case of a black hole low-mass X-ray binary (BH LMXB). We fit the 2002 outburst of the BH LMXB 4U 1543-47 taking into account the thermal wind. The mass loss in the thermal wind is of the order of the accretion rate onto the central object at the peak of the outburst. The new estimate of the viscosity parameter $\alpha$ for the accretion disc in this system is about two times lower than the previous one. Additionally, we calculate the evolution of the number of hydrogen atoms towards 4U 1543-47 due to the thermal wind from the hot disc.
Magnetically arrested accretion disks (MADs) around rapidly rotating black holes (BHs) have been proposed as a model for jetted tidal disruption events (TDEs). However, the dynamics of strongly magnetized disks in more realistic simulations that can mimic the chaotic dynamics during a TDE have previously been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD disk interacting with an injected TDE stream with impact parameter $\beta\equiv R_t/R_p=4-7$ to investigate how strongly magnetized TDEs differ from the standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD state can be sustained, and jets powered by the BH spin produced, in a TDE. We also demonstrate that the strength of the self-intersection shock depends on how dense the disk is relative to the stream, i.e. on the density contrast $f_\rho=\rho_d/\rho_s$. The jet or funnel can become significantly tilted (by $10-30^\circ$) by the self-intersection outflow when $f_\rho \leq 0.1$. In models with a powerful jet and $f_\rho\leq 0.01$, the tilted jet interacts with and ultimately tilts the disk by as much as $23^\circ$ from the incoming stream. We illustrate that as $f_\rho$ increases, the tilt of the jet and disk is expected to realign with the BH spin once $f_\rho \geq 0.1$. We illustrate how the tilt can rapidly realign if $f_\rho$ increases rapidly, and apply this to TDEs which have shown X-ray evolution on timescales of days to weeks.
In this Reply we include the corrections suggested in the Comment [Phys. Rev. Lett. 131, 169001]. We show that their impact on our results is small, and that the overall conclusions of the Article [Phys. Rev. Lett. 129, 111102] are robust. As pointed out in the Article, it is crucial to account for the statistical uncertainty in the ringdown starting time, neglected in most previous studies. This uncertainty is ~40 times larger than the systematic shift induced by the software bug mentioned in the Comment. The remaining discrepancies between the Comment and the Article can be attributed to additional differences in the setup, notably the sampling rate and the noise estimation method (in the Article, the latter was chosen to mimic the original methods of [Phys. Rev. Lett. 123, 111102]). Beyond data analysis considerations, the physics of the problem cannot be ignored. As shown in [arXiv:2302.03050], a model consisting of a sum of constant-amplitude overtones starting at the peak of the waveform introduces uncontrolled systematic uncertainties in the measurement due to dynamical and strong-field effects. These theoretical considerations imply that studies based on such models cannot be interpreted as black hole spectroscopy tests.
NGC 1068 is a nearby, widely studied Seyfert II galaxy presenting radio, infrared, X- and $\gamma$-ray emission, as well as strong evidence for high-energy neutrino emission. Recently, the evidence for neutrino emission was explained in a multimessenger model in which the neutrinos originate from the corona of the active galactic nucleus (AGN). In this environment $\gamma$-rays are strongly absorbed, so that an additional contribution, e.g. from the circumnuclear starburst ring, is necessary. In this work, we discuss whether the radio jet can be an alternative source of the $\gamma$-rays between about $0.1$ and $100$ GeV observed by Fermi-LAT. In particular, we include both leptonic and hadronic processes, i.e. accounting for inverse Compton emission and signatures from $pp$ as well as $p\gamma$ interactions. In order to constrain our calculations, we use VLBA and ALMA observations of the radio knot structures, which are spatially resolved at different distances from the supermassive black hole. Our results show that the best leptonic scenario for the prediction of the Fermi-LAT data is provided by the radio knot closest to the central engine. For that, a magnetic field strength of $\sim 1\,\text{mG}$ is needed, as well as a strong spectral softening of the relativistic electron distribution at $(1-10)\,\text{GeV}$. However, we show that neither such a weak magnetic field strength nor such a strong softening is expected for that knot. A possible explanation for the $\sim$ 10 GeV $\gamma$-rays can be provided by hadronic pion production in the case of a gas density $\gtrsim 10^4\,\text{cm}^{-3}$. Nonetheless, this process cannot contribute significantly to the low-energy end of the Fermi-LAT range. We conclude that the emission sites in the jet are not able to explain the $\gamma$-rays across the whole Fermi-LAT energy band.
Accretion onto black holes is one of the most efficient energy sources in the Universe. Black hole accretion powers some of the most luminous objects in the Universe, including quasars, active galactic nuclei, tidal disruption events, gamma-ray bursts, and black hole X-ray transients. In the present review, we give an astrophysical overview of black hole accretion processes, with a particular focus on black hole X-ray binary systems. In Section 1, we briefly introduce the basic paradigms of black hole accretion. The physics related to accretion onto black holes is introduced in Section 2. Models proposed for black hole accretion are discussed in this section, from the Shakura-Sunyaev thin disk to the advection-dominated accretion flow. Observational signatures that make contact with stellar-mass black hole accretion are introduced in Section 3, including spectral and fast-variability properties. A short conclusion is given in Section 4.
State-of-the-art surveys reveal that most massive stars in the universe evolve in close binaries. Massive stars in such systems are expected to develop aspherical envelopes due to tidal interactions and/or rotational effects. Recently, it was shown that point explosions in oblate stars can produce relativistic equatorial ring-like outflows. Moreover, since stripped-envelope stars in binaries can expand enough to fill their Roche lobes anew, it is likely that these stars die with a greater degree of asphericity than the oblate spheroid geometry previously studied. We investigate the effects of this asymmetry by studying the gas dynamics of axisymmetric point explosions in stars in various stages of filling their Roche lobes. We find that point explosions in these pear-shaped stars produce trans-relativistic ejecta that coalesces into bullets pointed both toward and away from the binary companion. We present this result and comment on key morphological differences between core-collapse explosions in spherical versus distorted stars in binary systems, effects on gravitational wave sources, and observational signatures that could be used to glean these explosion geometries from current and future surveys.
We consider the general problem of a Parker-type non-relativistic isothermal wind from a rotating and magnetic star. Using the magnetohydrodynamics (MHD) code athena++, we construct an array of simulations in the stellar rotation rate $\Omega_\ast$ and the isothermal sound speed $c_T$, and calculate the mass, angular momentum, and energy loss rates across this parameter space. We also briefly consider the three-dimensional case, with misaligned magnetic and rotation axes. We discuss applications of our results to the spin-down of normal stars, to highly-irradiated exoplanets, and to nascent highly-magnetic and rapidly-rotating neutron stars born in massive star core collapse.
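For reference, the unmagnetized, non-rotating limit of this problem is the classic transonic isothermal Parker wind, which obeys $(v/c_T)^2 - \ln(v/c_T)^2 = 4\ln(r/r_c) + 4r_c/r - 3$ with sonic radius $r_c = GM/2c_T^2$. The sketch below (standard textbook physics, not the paper's athena++ setup) solves this relation numerically:

```python
import numpy as np
from scipy.optimize import brentq

def parker_v(r_over_rc):
    """v/c_T for the transonic isothermal Parker wind at radius r/r_c."""
    rhs = 4.0 * np.log(r_over_rc) + 4.0 / r_over_rc - 3.0
    f = lambda w: w - np.log(w) - rhs        # w = (v/c_T)^2
    if abs(r_over_rc - 1.0) < 1e-12:
        return 1.0                           # exactly sonic at r = r_c
    if r_over_rc < 1.0:
        w = brentq(f, 1e-12, 1.0)            # subsonic branch inside r_c
    else:
        w = brentq(f, 1.0, 1e6)              # supersonic branch outside r_c
    return np.sqrt(w)

for x in (0.2, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"r/r_c = {x:5.1f} -> v/c_T = {parker_v(x):.3f}")
```

The mass-loss rate then follows from $\dot M = 4\pi r^2 \rho v$ evaluated at any radius; this spherical solution provides the natural baseline against which the rotating, magnetized models surveyed in the abstract can be compared.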
We use local stratified shearing-box simulations to elucidate the impact of two-temperature thermodynamics on the thermal structure of coronae in radiatively efficient accretion flows. Rather than treating the coronal plasma as an isothermal fluid, we use a simple, parameterized cooling function that models the collisional transfer of energy from the ions to the rapidly cooling leptons. Two-temperature models naturally form temperature inversions, with a hot, magnetically dominated corona surrounding a cold disc. Simulations with net vertical flux (NF) magnetic fields launch powerful magnetocentrifugal winds that would enhance accretion in a global system. The outflow rates are much better converged with increasing box height than analogous isothermal simulations, suggesting that the winds into two-temperature coronae may be sufficiently strong to evaporate a thin disc and form a radiatively inefficient accretion flow under some conditions. We find evidence for multiphase structure in the corona, with broad density and temperature distributions, and we propose criteria for the formation of a multiphase corona. The fraction of cooling in the surface layers of the disc is substantially larger for NF fields compared to zero net-flux configurations, with moderate NF simulations radiating ${\gtrsim}30$ per cent of the flow's total luminosity above two midplane scale-heights. Our work shows that NF fields may efficiently power the coronae of luminous Seyfert galaxies and quasars, providing compelling motivation for future studies of the heating mechanisms available to NF fields and the interplay of radiation with two-temperature thermodynamics.
The propagation of gravitational waves can reveal fundamental features of the structure of spacetime. For instance, differences in the propagation of the two gravitational-wave polarizations would be a smoking gun for parity violation in the gravitational sector, as expected from birefringent theories like Chern-Simons gravity. Here we look for evidence of amplitude birefringence in the third catalog of detections by the Laser Interferometer Gravitational-Wave Observatory and Virgo through the use of birefringent templates inspired by dynamical Chern-Simons gravity. From $71$ binary-black-hole signals, we obtain the most precise constraints on gravitational-wave amplitude birefringence yet, measuring a birefringent attenuation of $\kappa = -0.019^{+0.038}_{-0.029} \, \mathrm{Gpc}^{-1}$ at $100 \, \mathrm{Hz}$ with $90\%$ credibility, equivalent to a parity-violation energy scale of $M_{\rm PV} \gtrsim 6.8 \times 10^{-21}\, {\rm GeV}$.
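Schematically, amplitude birefringence attenuates one circular polarization and amplifies the other, with an effect growing with frequency and propagation distance; the $\kappa$ above is quoted at a reference frequency of 100 Hz. The toy sketch below applies one such parameterization to a frequency-domain waveform (the linear-in-frequency scaling, signs, and normalization are illustrative assumptions, not the paper's exact template model):

```python
import numpy as np

def apply_birefringence(hp_f, hc_f, freqs, kappa, d_gpc, f0=100.0):
    """Toy amplitude birefringence: hp_f, hc_f are frequency-domain
    polarizations, kappa in 1/Gpc (quoted at f0), d_gpc in Gpc."""
    h_R = (hp_f - 1j * hc_f) / np.sqrt(2.0)        # right-circular mode
    h_L = (hp_f + 1j * hc_f) / np.sqrt(2.0)        # left-circular mode
    x = kappa * d_gpc * (freqs / f0)               # dimensionless attenuation
    h_R, h_L = h_R * np.exp(-x), h_L * np.exp(+x)  # opposite helicities
    return (h_R + h_L) / np.sqrt(2.0), 1j * (h_R - h_L) / np.sqrt(2.0)

# Demo with dummy flat spectra:
freqs = np.linspace(20.0, 512.0, 5)
hp, hc = np.ones_like(freqs, dtype=complex), 1j * np.ones(freqs.size)
print(apply_birefringence(hp, hc, freqs, kappa=-0.019, d_gpc=1.0))
```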
Stars with initial masses in the range of 8-25 solar masses are thought to end their lives as hydrogen-rich supernovae (SNe II). Based on pre-explosion images from the Hubble Space Telescope ($HST$) and the $Spitzer$ Space Telescope, we place tight constraints on the progenitor candidate of the type IIP SN 2023ixf in Messier 101. Fitting the spectral energy distribution (SED) of the progenitor with dusty stellar spectral models yields an effective temperature of 3091$^{+422}_{-258}$ K. The luminosity is estimated as log($L/$L$_{\odot}$)$\sim4.83$, consistent with a red supergiant (RSG) star with an initial mass of 12$^{+2}_{-1}$ M$_{\odot}$. The derived mass loss rate (6-9$\times10^{-6}$ M$_{\odot}$ yr$^{-1}$) is much lower than that inferred from the flash spectroscopy of the SN, suggesting that the progenitor experienced a sudden increase in mass loss when approaching the final explosion. In the infrared bands, significant deviation from the range of regular RSGs in the color-magnitude diagram and in the period-luminosity space of the progenitor star indicates enhanced mass loss and dust formation. Combined with new evidence of polarization at the early phases of SN 2023ixf, such violent mass loss is likely a result of binary interaction.
We report the results of careful astrometric measurements of the Cannonball pulsar J0002+6216 carried out over three years using the High Sensitivity Array (HSA). We significantly refine the proper motion to $\mu=35.3\pm0.6$ mas yr$^{-1}$ and place new constraints on the distance, with the overall effect of lowering the velocity and increasing the inferred age to $47.60\pm0.80$ kyr. Although the pulsar is brought more in line with the standard natal kick distribution, this new velocity has implications for the morphology of the pulsar wind nebula that surrounds it, the density of the interstellar medium through which it travels, and the age of the supernova remnant (CTB 1) from which it originates.
We present recent results of the TELAMON program, which uses the Effelsberg 100-m telescope to monitor the radio spectra of active galactic nuclei (AGN) under scrutiny in astroparticle physics, namely TeV blazars and neutrino-associated AGN. Our sample includes all known Northern TeV-emitting blazars as well as blazars positionally coincident with IceCube neutrino alerts. Polarization can give additional insight into the source properties, as the polarized emission is often found to vary on different timescales and with different amplitudes than the total intensity emission. Here, we present an overview of the polarization properties of the TeV-emitting TELAMON sources at four frequencies in the 20 mm and 7 mm bands. While at 7 mm roughly $82\,\%$ of all observed sources are found to be significantly polarized, at 20 mm the percentage is $\sim58\,\%$. We find that most of the sources exhibit mean fractional polarizations of $<5\%$, matching the expectation of rather low polarization levels in these sources from previous studies at lower radio frequencies. Nevertheless, we demonstrate examples of how the polarized emission can provide additional information beyond the total intensity.
The discovery and timing follow-up of millisecond pulsars (MSPs) are necessary not just for their usefulness in Pulsar Timing Arrays (PTAs) but also for investigating their own intriguing properties. In this work, we provide the findings of the decade-long timing of the four MSPs discovered by the Giant Metre-wave Radio Telescope (GMRT), including their timing precision, model parameters, and newly detected proper motions. We compare the timing results for these MSPs before and after the GMRT upgrade in 2017 and characterise the improvement in timing precision due to the bandwidth upgrade. We discuss the suitability of these four GMRT MSPs, as well as the usefulness of their decade-long timing data, for PTA experiments. These data may aid the global effort to improve the signal-to-noise ratio (S/N) of the recently detected signature of gravitational waves in the cross-correlation statistics of MSP timing residuals.
The reconstruction of very inclined air showers is a new challenge for next-generation radio experiments such as the AugerPrime radio upgrade, BEACON, and GRAND, which focus on the detection of ultra-high-energy particles. To tackle this, we study the electromagnetic particle content of very inclined air showers, which has scarcely been studied so far. Using the simulation tools CORSIKA and CoREAS, and analytical modeling, we explore the energy range of the particles that contribute most to the radio emission, quantify their lateral extent, and estimate the atmospheric depth at which the radio emission is strongest. We find that the distribution of the electromagnetic component in very inclined air showers has characteristic features that could lead to clear signatures in the radio signal, and hence impact the reconstruction strategies of next-generation radio-detection experiments.
We investigate the formation of dense stellar clumps in a suite of high-resolution cosmological zoom-in simulations of a massive, star forming galaxy at $z \sim 2$ under the presence of strong quasar winds. Our simulations include multi-phase ISM physics from the Feedback In Realistic Environments (FIRE) project and a novel implementation of hyper-refined accretion disk winds. We show that powerful quasar winds can have a global negative impact on galaxy growth while in the strongest cases triggering the formation of an off-center clump with stellar mass ${\rm M}_{\star}\sim 10^{7}\,{\rm M}_{\odot}$, effective radius ${\rm R}_{\rm 1/2\,\rm Clump}\sim 20\,{\rm pc}$, and surface density $\Sigma_{\star} \sim 10^{4}\,{\rm M}_{\odot}\,{\rm pc}^{-2}$. The clump progenitor gas cloud is originally not star-forming, but strong ram pressure gradients driven by the quasar winds (orders of magnitude stronger than experienced in the absence of winds) lead to rapid compression and subsequent conversion of gas into stars at densities much higher than the average density of star-forming gas. The AGN-triggered star-forming clump reaches ${\rm SFR} \sim 50\,{\rm M}_{\odot}\,{\rm yr}^{-1}$ and $\Sigma_{\rm SFR} \sim 10^{4}\,{\rm M}_{\odot}\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$, converting most of the progenitor gas cloud into stars in $\sim$2\,Myr, significantly faster than its initial free-fall time and with stellar feedback unable to stop star formation. In contrast, the same gas cloud in the absence of quasar winds forms stars over a much longer period of time ($\sim$35\,Myr), at lower densities, and losing spatial coherency. The presence of young, ultra-dense, gravitationally bound stellar clumps in recently quenched galaxies could thus indicate local positive feedback acting alongside the strong negative impact of powerful quasar winds, providing a plausible formation scenario for globular clusters.
SDSS J1004+4112 is a well-studied gravitational lens with a recently measured time delay between its first- and fourth-arriving quasar images. Using this new constraint, we present updated free-form lens reconstructions using the lens inversion method {\tt GRALE}, which uses only multiple-image and time-delay data as inputs. In addition, we obtain hybrid lens reconstructions by including a model of the brightest cluster galaxy (BCG) as a Sersic lens. For both reconstructions, we use two sets of images as input: one with all identified images, and the other a revised set leaving out images that have been potentially misidentified. We also develop a source-position optimization MCMC routine, performed on completed {\tt GRALE} runs, that allows each model to better match the observed image positions and time delays. All the reconstructions produce similar mass distributions, with the hybrid models finding a steeper profile in the center. Similarly, all the mass distributions are well fit by the Navarro-Frenk-White (NFW) profile, with results consistent with previous parametric reconstructions and those derived from Chandra X-ray observations. We identify a $\sim 5 \times 10^{11} M_{\odot}$ substructure, apparently unaffiliated with any cluster member galaxy and present in all our models, and assess whether it is real. Using our free-form and hybrid models we predict a central quasar image time delay of $\sim 2980 \pm 270$ and $\sim 3280 \pm 215$ days, respectively. A future measurement of this time delay, while observationally challenging, would further constrain the steepness of the central density profile.
Modern cosmological hydrodynamical galaxy simulations provide tens of thousands of reasonably realistic synthetic galaxies across cosmic time. However, quantitatively assessing the level of realism of simulated universes in comparison to the real one is difficult. In this paper of the ERGO-ML series (Extracting Reality from Galaxy Observables with Machine Learning), we utilize contrastive learning to directly compare a large sample of simulated and observed galaxies based on their stellar-light images. This eliminates the need to specify summary statistics and allows us to exploit the whole information content of the observations. We produce survey-realistic galaxy mock datasets resembling real Hyper Suprime-Cam (HSC) observations using the cosmological simulations TNG50 and TNG100. Our focus is on galaxies with stellar masses between $10^9$ and $10^{12} M_\odot$ at $z=0.1-0.4$. This allows us to evaluate the realism of the simulated TNG galaxies in comparison to actual HSC observations. We apply the self-supervised contrastive learning method NNCLR to the images from both the simulated and observed datasets ($g$, $r$, $i$ bands). This results in a 256-dimensional representation space encoding all relevant observable galaxy properties. First, this allows us to identify simulated galaxies that closely resemble real ones by seeking similar images in this multi-dimensional space. More powerfully, we quantify the alignment between the representations of these two image sets, finding that the majority ($\gtrsim 70$ per cent) of the TNG galaxies align well with observed HSC images. However, a subset of simulated galaxies with larger sizes, steeper Sersic profiles, smaller Sersic ellipticities, and larger asymmetries appears unrealistic. We also demonstrate the utility of our derived image representations by inferring properties of real HSC galaxies using the simulated TNG galaxies as ground truth.
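To make the "seeking similar images" step concrete, one simple way to query a learned representation space is a nearest-neighbour search, sketched below with scikit-learn. The random arrays stand in for the 256-dimensional NNCLR embeddings; the sample sizes, metric, and neighbour count are arbitrary choices, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
z_tng = rng.normal(size=(5000, 256))   # stand-in embeddings of simulated galaxies
z_hsc = rng.normal(size=(1000, 256))   # stand-in embeddings of observed galaxies

# Cosine distance is a common choice for contrastive embeddings.
nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(z_tng)
dist, idx = nn.kneighbors(z_hsc)

# idx[i] lists the five simulated galaxies most similar to observed galaxy i.
print(idx[0], dist[0])
```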
A supermassive black hole (SMBH) surrounded by a dense nuclear star cluster resides at the center of many galaxies. In this dense environment, high-velocity collisions frequently occur between stars. About $10 \%$ of the stars within the Milky Way's nuclear star cluster collide with other stars before evolving off the main sequence. Collisions preferentially affect tightly bound stars, which orbit most quickly and pass through the regions of highest stellar density. Over time, collisions therefore shape the bulk properties of the nuclear star cluster. We examine the effect of collisions on the cluster's stellar density profile. We show that collisions produce a turning point in the density profile which can be determined analytically. Varying the initial density profile and the collision model, we characterize the evolution of the stellar density profile over $10$ Gyr. We find that old, initially cuspy populations exhibit a break around $0.1$ pc in their density profile, while shallow density profiles retain their initial shape outside of $0.01$ pc. The initial density profile is always preserved outside of a few tenths of a parsec, irrespective of initial conditions. Lastly, we comment on the implications of collisions for the luminosity and color of stars in the collisionally shaped inner cluster.
The standard paradigm for the formation of the Universe suggests that large structures are formed from hierarchical clustering by the continuous accretion of less massive galaxy systems through filaments. In this context, filamentary structures play an important role in the properties and evolution of galaxies by connecting high-density regions, such as nodes, and by being surrounded by low-density regions, such as cosmic voids. The availability of the filament and critical-point catalogues extracted by \textsc{DisPerSE} from the \textsc{Illustris} TNG300-1 hydrodynamic simulation allows a detailed analysis of these structures. The halo occupation distribution (HOD) is a powerful tool for linking galaxies and dark matter halos, allowing one to constrain models of galaxy formation and evolution. In this work we combine the advantages of halo occupancy with information from the filament network to analyse the HOD in filaments and nodes. In our study, we distinguish the inner regions of cosmic filaments and nodes from their surroundings. The results show that the filamentary structures follow a trend similar to that of the total galaxy sample, indicating that, although the filaments span a wide range of densities, they may represent regions of average density. In the case of the node sample, an excess of faint and blue galaxies is found for the low-mass nodes, suggesting that these structures are not virialised and that galaxies may be continuously falling in through the filaments. The higher-mass halos, instead, could be at a more advanced stage of evolution, showing features of virialised structures.
Spiral structures are important drivers of the secular evolution of disc galaxies; however, the origin of spiral arms and their effects on the development of galaxies remain mysterious. In this work, we present two three-armed spiral galaxies at z~0.3 in the Middle Age Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey. Taking advantage of the high spatial resolution (~0.6'') of the Multi Unit Spectroscopic Explorer (MUSE), we investigate the two-dimensional distributions of different spectral parameters: Halpha, gas-phase metallicity, and D4000. We notice significant offsets in Halpha (~0.2 dex) as well as in gas-phase metallicity (~0.05 dex) among the spiral arms and the downstream and upstream regions of MAGPI1202197197 (SG1202). This observational signature suggests that the spiral structure in SG1202 is consistent with arising from density wave theory. No azimuthal variation in Halpha or gas-phase metallicity is observed in MAGPI1204198199 (SG1204), which can be attributed to the tighter spiral arms in SG1204 than in SG1202, which come with stronger mixing effects in the disc. The absence of azimuthal D4000 variation in both galaxies suggests that stars of different ages are well mixed between the spiral arms and distributed around the disc regions. The different azimuthal distributions of Halpha and D4000 highlight the importance of the time scales traced by various spectral parameters when studying 2D distributions in spiral galaxies. This work demonstrates the feasibility of constraining spiral structures by tracing the interstellar medium (ISM) and stellar populations at z~0.3, with a plan to expand the study to the full MAGPI survey.
We present a hierarchical Bayesian pipeline, BP3M, that measures positions, parallaxes, and proper motions (PMs) for cross-matched sources between Hubble Space Telescope (HST) images and Gaia -- even for sparse fields ($N_*<10$ per image) -- expanding on the recent GaiaHub tool. This technique uses Gaia-measured astrometry as priors to predict the locations of sources in HST images, and is therefore able to put the HST images onto a global reference frame without the use of background galaxies/QSOs. Testing our publicly available code in the Fornax and Draco dSphs, we measure accurate PMs that are a median of 8-13 times more precise than Gaia DR3 alone for $20.5<G<21~\mathrm{mag}$. We are able to explore the effect of observation strategies on BP3M astrometry using synthetic data, finding an optimal strategy that improves parallax and position precision at no cost to the PM uncertainty. Using 1619 HST images in the sparse COSMOS field (median 9 Gaia sources per HST image), we measure BP3M PMs for 2640 unique sources in the $16<G<21.5~\mathrm{mag}$ range, 25% of which have no Gaia PMs; the median BP3M PM uncertainty for $20.25<G<20.75~\mathrm{mag}$ sources is $0.44~$mas/yr, compared to $1.03~$mas/yr from Gaia, while the median BP3M PM uncertainty for sources without Gaia-measured PMs ($20.75<G<21.5~\mathrm{mag}$) is $1.16~$mas/yr. The statistics that underpin the BP3M pipeline are a generalized way of combining position measurements from different images, epochs, and telescopes, which allows information to be shared between surveys and archives to achieve higher astrometric precision than that from each catalog alone.
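The closing sentence describes a Gaussian combination of position measurements across epochs. A toy one-dimensional version (an assumption-laden sketch, not the BP3M code) is the conjugate normal update below, fitting position and proper motion to epoch data under a Gaia-like prior; all numbers are made up.

```python
import numpy as np

t   = np.array([0.0, 3.1, 7.4])        # observation epochs in yr (toy)
x   = np.array([10.2, 13.0, 17.5])     # measured 1-D positions in mas (toy)
sig = np.array([1.0, 0.8, 1.2])        # per-epoch uncertainties in mas (toy)

A = np.column_stack([np.ones_like(t), t])      # linear model: x(t) = x0 + mu*t
W = np.diag(1.0 / sig**2)                      # inverse-variance weights

prior_mean = np.array([10.0, 1.0])             # prior (x0 [mas], mu [mas/yr])
P0 = np.linalg.inv(np.diag([2.0**2, 0.5**2]))  # prior precision matrix

# Gaussian posterior: precisions add, means combine by precision weighting.
post_cov  = np.linalg.inv(A.T @ W @ A + P0)
post_mean = post_cov @ (A.T @ W @ x + P0 @ prior_mean)

print("x0, mu =", post_mean, "+-", np.sqrt(np.diag(post_cov)))
```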
One of the fundamental questions in astronomy is how planetary systems form and evolve. Measuring planetary occurrence and architecture as a function of time directly addresses this question. In the fourth paper of the Planets Across Space and Time (PAST) series, we investigate the occurrence and architecture of Kepler planetary systems as functions of kinematic age by using the LAMOST-Gaia-Kepler sample. To isolate the age effect, other stellar properties (e.g., metallicity) are controlled. We find the following results. (1) The fraction of stars with Kepler-like planets ($F_{\text{Kep}}$) is about 50% for all stars; no significant trend is found between $F_{\text{Kep}}$ and age. (2) The average planet multiplicity ($\bar{N}_p$) exhibits a decreasing trend (~2$\sigma$ significance) with age, from $\bar{N}_p$~3 for stars younger than 1 Gyr to $\bar{N}_p$~1.8 for stars of about 8 Gyr. (3) The number of planets per star ($\eta=F_{\text{Kep}}\times\bar{N}_p$) also shows a decreasing trend (~2-3$\sigma$ significance), from $\eta$~1.6-1.7 for young stars to $\eta$~1.0 for old stars. (4) The mutual orbital inclination of the planets ($\sigma_{i,k}$) increases from $1.2^{+1.4}_{-0.5}$ to $3.5^{+8.1}_{-2.3}$ as stars age from 0.5 to 8 Gyr, with a best fit of $\log{\sigma_{i,k}}=0.2+0.4\times\log{\frac{\text{Age}}{\text{1Gyr}}}$. Interestingly, the Solar System also fits this trend. The near-independence of $F_{\text{Kep}}$~50% on age implies that planet formation is robust and stable across Galactic history. The age dependence of $\bar{N}_p$ and $\sigma_{i,k}$ demonstrates that planetary architecture evolves: planetary systems generally become dynamically hotter, with fewer planets, as they age.
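The quoted best fit is straightforward to evaluate; the snippet below reproduces the endpoints of the trend given above (inclinations assumed to be in degrees, the conventional unit for $\sigma_{i,k}$):

```python
import numpy as np

# sigma_ik from the quoted fit: log10(sigma_ik) = 0.2 + 0.4*log10(age/Gyr),
# i.e. sigma_ik ~ 1.6 * (age/Gyr)^0.4.
for age_gyr in (0.5, 1.0, 4.6, 8.0):        # 4.6 Gyr ~ the Sun's age
    sigma_ik = 10.0**(0.2 + 0.4 * np.log10(age_gyr))
    print(f"age = {age_gyr:4.1f} Gyr -> sigma_i,k ~ {sigma_ik:.2f} deg")
```

At 0.5 and 8 Gyr this gives ~1.2 and ~3.6, close to the central values quoted in point (4).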
We extend the analysis presented in Contini et al. 2023 to higher redshifts, up to $z=2$, by focusing on the relation between the intracluster light (ICL) fraction and the halo mass, its dependence on redshift, and the roles played by the halo concentration and formation time, in a large sample of simulated galaxy groups/clusters with $13\lesssim \log M_{halo} \lesssim 15$. Moreover, a key focus is to isolate the relative contributions of the main channels of ICL formation to the total amount. The ICL fraction at higher redshift is weakly dependent on halo mass and comparable with that at the present time, in agreement with recent observations. Stellar stripping, mergers and pre-processing are the main channels of ICL formation, with stellar stripping accounting for $\sim 90\%$ of the total ICL, regardless of halo mass and redshift. Pre-processing is an important process by which clusters accrete already formed ICL. The diffuse component forms very early ($z\sim 0.6$), and its formation depends on both the concentration and the formation time of the halo, with more concentrated, earlier-forming haloes assembling their ICL earlier than later-forming ones. The efficiency of this process is independent of halo mass, but increases with decreasing redshift, which implies that stellar stripping becomes more important with time as the concentration increases. This highlights the link between the ICL and the dynamical state of a halo: groups/clusters that have a higher fraction of diffuse light are more concentrated, relaxed, and at an advanced stage of growth.
Isotopic ratios are good tools for probing stellar nucleosynthesis and chemical evolution. We performed high-sensitivity mapping observations of the J=7-6 rotational transitions of OCS, OC34S, O13CS, and OC33S toward the Galactic Center giant molecular cloud Sagittarius B2 (Sgr B2) with the IRAM 30 m telescope. Positions with optically thin and uncontaminated lines are chosen to determine the sulfur isotope ratios. A 32S/34S ratio of $17.1\pm0.9$ was derived from the OCS and OC34S lines, while a 34S/33S ratio of $6.8\pm1.9$ was derived directly from the integrated intensity ratio of OC34S and OC33S. With independent and accurate measurements of the 32S/34S ratio, our results confirm the termination of the decreasing trend of 32S/34S ratios toward the Galactic Center, suggesting a drop in the production of massive stars at the Galactic Center.
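Because the 34S/33S ratio is taken directly from an integrated intensity ratio, its uncertainty follows from standard quotient error propagation. A minimal sketch, with made-up intensities chosen only so the output lands near the quoted $6.8\pm1.9$:

```python
import numpy as np

def ratio_with_error(I1, s1, I2, s2):
    """r = I1/I2 with first-order (uncorrelated) error propagation."""
    r = I1 / I2
    return r, r * np.sqrt((s1 / I1)**2 + (s2 / I2)**2)

# Toy integrated intensities (K km/s) for OC34S and OC33S:
r, sr = ratio_with_error(3.4, 0.3, 0.5, 0.12)
print(f"34S/33S = {r:.1f} +/- {sr:.1f}")
```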
Currently, astrometric microlensing is the only tool that can directly measure the mass of a single star; it can also help us to detect compact objects such as isolated neutron stars and black holes. The number of microlensing events being predicted and reported is increasing. In this paper, potential lens stars are selected from three types of stars: high-proper-motion stars, nearby stars, and high-mass stars. For each potential lens star, we adopt a large search scope to find possible matching sources, so as to avoid missing events as far as possible. Using Gaia DR3 data, we predict 4500 astrometric microlensing events with a signal $>0.1$ mas that occur between J2010.0 and J2070.0, of which 1664 events are different from those found previously. There are 293 lens stars that can cause two or more events, of which 5 can cause more than 50 events each. We find that for 116 events the background star lies more than 8 arcsec from the proper-motion path of the lens star at the reference epoch, with a maximum distance of 16.6 arcsec, so the cone-search method of expanding the search range of sources for each potential lens star can reduce the chance of missing events.
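For scale, the signal of such events follows from the standard point-lens formulas: $\theta_E = \sqrt{\kappa M \pi_{\rm rel}}$ with $\kappa = 4G/(c^2\,\mathrm{au}) \approx 8.144~\mathrm{mas}\,M_\odot^{-1}$, and a maximum centroid shift of $\theta_E/(2\sqrt{2})$ for an unresolved source and a dark lens. A sketch with illustrative numbers (not one of the predicted events):

```python
import numpy as np

KAPPA = 8.144                                  # mas per solar mass

def einstein_radius_mas(M_msun, d_lens_pc, d_src_pc):
    pi_rel = 1e3 / d_lens_pc - 1e3 / d_src_pc  # relative parallax in mas
    return np.sqrt(KAPPA * M_msun * pi_rel)

theta_E = einstein_radius_mas(M_msun=0.6, d_lens_pc=50.0, d_src_pc=2000.0)
shift_max = theta_E / (2.0 * np.sqrt(2.0))     # peak shift, at u = sqrt(2)
print(f"theta_E = {theta_E:.2f} mas, max centroid shift = {shift_max:.2f} mas")
```

Shifts like this sit well above the 0.1 mas selection threshold used above.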
Numerical simulations and observations show that galaxies are not uniformly distributed. In cosmology, the largest known structures in the universe are galaxy filaments, formed from the hierarchical clustering of galaxies due to gravitational forces. These consist of "walls" and "bridges" that connect clusters. Here, we use graph theory to model these structures as Euclidean networks in three-dimensional space. Using percolation theory, we reduce cosmological graphs based on the valency of nodes to reveal the innermost, most robust structure. By constraining the network, we then find thresholds for physical features, such as length scale and density, at which galaxy filament clusters are identified.
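As an illustration of this valency-based reduction, the sketch below builds a Euclidean graph over random 3-D points (a stand-in for a galaxy catalogue) and prunes it to its k-core with networkx; the box size, linking length, and valency threshold are arbitrary demonstration choices.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 50.0, size=(2000, 3))      # toy 3-D positions (Mpc)

link_length = 5.0                                  # linking threshold (assumed)
pairs = cKDTree(pos).query_pairs(r=link_length)    # all pairs closer than threshold

G = nx.Graph()
G.add_nodes_from(range(len(pos)))
G.add_edges_from(pairs)

# k-core reduction: iteratively strip nodes with valency < k, keeping
# only the most robustly connected part of the network.
core = nx.k_core(G, k=4)
print(f"{core.number_of_nodes()} of {len(pos)} nodes survive in the k=4 core")
```

On genuinely filamentary (clustered) data the surviving core traces the densest connected structures, whereas near-Poisson points retain little.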
The dust in planet-forming disks evolves rapidly through growth and radial drift, and external photoevaporation also contributes to this evolution in massive star-forming regions. We test whether the presence of substructures can explain the survival of the dust component and the observed millimeter continuum emission in protoplanetary disks located within massive star-forming regions. We also characterize the dust content removed by the photoevaporative winds. For this, we performed hydrodynamical simulations of protoplanetary disks subject to irradiation fields of $F_{UV} = 10^2$, $10^3$, and $10^4\, G_0$, with different dust trap locations. We used the FRIED grid to derive the mass loss rate for each irradiation field and disk properties, and then measured the evolution of the dust mass over time. For each simulation we estimate the continuum emission at $\lambda = 1.3\, \textrm{mm}$ along with the radii encompassing $90\%$ of the continuum flux, and characterize the dust size distribution entrained in the photoevaporative winds, along with the resulting far-ultraviolet (FUV) cross section. Our simulations show that the presence of dust traps can extend the lifetime of the dust component of the disk to a few million years if the FUV irradiation is $F_{UV} \lesssim 10^3 G_0$, but only if the dust traps are located inside the photoevaporative truncation radius. The dust component of a disk quickly disperses if the FUV irradiation is strong ($10^4\, G_0$) or if the substructures are located outside the photoevaporation radius. We find, however, that the dust grains entrained in the photoevaporative winds may result in an absorption FUV cross section of $\sigma \approx 10^{-22}\, \textrm{cm}^2$ at early times of evolution ($<$0.1 Myr), which is enough to trigger a self-shielding effect that reduces the total mass loss rate and slows down disk dispersal in a negative feedback loop.
Stars and planets move supersonically in a gaseous medium during planetary engulfment, stellar interactions, and within protoplanetary disks. For a nearly uniform medium, the relevant parameters are the Mach number and the size of the body, $R$, relative to its accretion radius, $R_A$. Over many decades, numerical and analytical work has characterized the flow, the drag on the body, and the possible suite of instabilities. Only a limited amount of work has treated the stellar boundary as it is in many of these astrophysical settings: a hard sphere at $R$. We therefore present new 3-D Athena++ hydrodynamic calculations for a large range of parameters. For $R_A\ll R$, the results are as expected for pure hydrodynamics with minimal impact from gravity, which we verify by comparing to experimental wind tunnel data in air. When $R_A\approx R$, a hydrostatically supported separation bubble forms behind the gravitating body, exerting significant pressure on the sphere and driving a recompression shock which intersects with the bow shock. For $R_A\gg R$, the bubble transitions into an isentropic, spherically symmetric halo, as seen in earlier works. These two distinct regimes of flow morphology may be treated separately in terms of their shock stand-off distance and drag coefficients. Most importantly for astrophysical applications, we propose a new formula for the dynamical friction which depends on the ratio of the shock stand-off distance to $R_A$. That exploration also reveals the minimum size of the simulation domain needed to accurately capture the deflection of incoming streamlines due to gravity.
To probe star formation processes, we present a multi-scale and multi-wavelength investigation of the `Snake' nebula/infrared dark cloud G11.11$-$0.12 (hereafter, G11; length $\sim$27 pc). Spitzer images hint at the presence of sub-filaments (in absorption), and reveal four infrared-dark hub-filament system (HFS) candidates (extent $<$ 6 pc) toward G11, where massive clumps ($>$ 500 $M_{\odot}$) and protostars are identified. The $^{13}$CO(2-1), C$^{18}$O(2-1), and NH$_{3}$(1,1) line data reveal a noticeable velocity oscillation toward G11, as well as its left part (or part-A) around V$_{lsr}$ of 31.5 km s$^{-1}$, and its right part (or part-B) around V$_{lsr}$ of 29.5 km s$^{-1}$. The common zone of these cloud components is investigated toward the center of G11, which hosts one HFS. Each cloud component hosts two sub-filaments. In comparison to part-A, more ATLASGAL clumps are observed toward part-B. The JWST near-infrared images reveal one infrared-dark HFS candidate (extent $\sim$0.55 pc) around the massive protostar G11P1 (i.e., G11P1-HFS). Hence, the infrared observations reveal multiple infrared-dark HFS candidates on multiple scales in G11. The ALMA 1.16 mm continuum map shows multiple finger-like features (extent $\sim$3500-10000 AU) surrounding a dusty envelope-like feature (extent $\sim$18000 AU) toward the central hub of G11P1-HFS. Signatures of forming massive stars are found toward the center of the envelope-like feature. The ALMA H$^{13}$CO$^{+}$ line data show two cloud components with a velocity separation of $\sim$2 km s$^{-1}$ toward G11P1. Overall, the collision process, the ``fray and fragment'' mechanism, and the ``global non-isotropic collapse'' scenario all seem to be operational in G11.
Observations of clusters suffer from issues such as completeness, projection effects, the resolving of individual stars, and extinction. As such, how accurate are the measurements and conclusions drawn from them likely to be? Here, we take cluster simulations (Westerlund 2- and Orion-type), synthetically observe them to obtain luminosities, accounting for extinction and the inherent limits of Gaia, and then place them within the real Gaia DR3 catalogue. We then attempt to rediscover the clusters at distances between 500 pc and 4300 pc. We show which spatial and kinematic criteria are best able to pick out the simulated clusters, maximising completeness and minimising contamination. We then compare the properties of the 'observed' clusters with the original simulations. We look at the degree of clustering, the identification of clusters and subclusters within the datasets, and whether the clusters are expanding or contracting. Even with a high level of incompleteness (e.g. $<2\%$ of stellar members identified), similar qualitative conclusions tend to be reached compared to the original dataset, but most quantitative conclusions are likely to be inaccurate. Accurate determination of the number, stellar membership, and kinematic properties of subclusters is the most problematic, particularly at larger distances, due to the disappearance of cluster substructure as the data become more incomplete, but also at smaller distances, where the misidentification of asterisms as true structure can be problematic. Unsurprisingly, we tend to obtain better quantitative agreement of properties for our more massive Westerlund 2-type cluster. We also make optical-style images of the clusters over our range of distances.
Aims: We used the nearby Coma Cluster as a laboratory to probe the impact of ram pressure on star formation and to constrain the characteristic timescales and velocities for the stripping of the non-thermal ISM. Methods: We used high-resolution ($6.5'' \approx 3\,\mathrm{kpc}$), multi-frequency ($144\,\mathrm{MHz} - 1.5\,\mathrm{GHz}$) radio continuum imaging of the Coma Cluster to resolve the low-frequency radio spectrum across the discs and tails of 25 ram-pressure-stripped galaxies. With resolved spectral index maps across these galaxy discs, we constrained the impact of ram pressure perturbations on galaxy star formation. We measured multi-frequency flux-density profiles along each of the ram-pressure-stripped tails in our sample and then fit the resulting radio continuum spectra with a simple synchrotron aging model. Results: We show that ram-pressure-stripped tails in Coma have steep ($-2 \lesssim \alpha \lesssim -1$) spectral indices. The discs of galaxies undergoing ram pressure stripping have integrated spectral indices within the expected range for shock acceleration from supernovae ($-0.8 \lesssim \alpha \lesssim -0.5$), though there is a tail towards flatter values. In a resolved sense, there are gradients in spectral index across the discs of ram-pressure-stripped galaxies in Coma. These gradients are aligned with the direction of the observed radio tails, with the flattest spectral indices found on the `leading half'. From the best-fit break frequencies we estimate the projected plasma velocities along the tails to be of the order of hundreds of kilometers per second, with the precise magnitude depending on the assumed magnetic field strength.
The nature of dark matter in the Universe is one of the hardest unsolved problems in modern physics. Indeed, on one hand, the overwhelming indirect evidence from astrophysics seems to leave no doubt about its existence; on the other hand, direct search experiments, especially those conducted with low-background detectors in underground laboratories all over the world, seem to deliver only null results, with a few debated exceptions. Furthermore, the lack of predicted candidates at the LHC energy scale has made this dichotomy even more puzzling. We recall the most important phases of this novel branch of experimental astro-particle physics, analyzing the interconnections among the main projects involved in this challenging quest, and we draw conclusions slightly different from the way the problem is commonly understood.
Anomalous Cepheids (ACEPs) are intermediate-mass, metal-poor pulsators mostly discovered in dwarf galaxies of the Local Group. However, recent Galactic surveys, including Gaia DR3, have found a few hundred ACEPs in the Milky Way. Their origin is not well understood. We aim to investigate the origin and evolution of Galactic ACEPs by studying for the first time the chemical composition of their atmospheres. We used UVES@VLT to obtain high-resolution spectra for a sample of 9 ACEPs belonging to the Galactic halo. We derived the abundances of 12 elements: C, Na, Mg, Si, Ca, Sc, Ti, Cr, Fe, Ni, Y, and Ba. We complemented these data with literature abundances for an additional three ACEPs that were previously incorrectly classified as type II Cepheids, increasing the sample to a total of 12 stars. All the investigated ACEPs have an iron abundance [Fe/H]$<-1.5$ dex, as expected from theoretical predictions for these pulsators. The abundance ratios of the different elements to iron show that the ACEPs' chemical composition is generally consistent with that of Galactic halo field stars, except for sodium, which is found to be overabundant in 9 of the 11 ACEPs where it was measured, closely resembling second-generation stars in Galactic globular clusters. The same comparison with dwarf and ultra-faint satellites of the Milky Way reveals more differences than similarities, so it is unlikely that the bulk of Galactic ACEPs originated in such galaxies and subsequently dissolved into the Galactic halo. The principal finding of this work is the unexpected overabundance of sodium in ACEPs. We explored several hypotheses to explain this feature, finding that the most promising scenario is the evolution of low-mass stars in binary systems with either mass transfer or merging. Detailed modelling is needed to confirm this hypothesis.
We have measured structural parameters and radial color profiles of 108 ultra-diffuse galaxies (UDGs), carefully selected from six distant massive galaxy clusters in the Hubble Frontier Fields (HFF) in the redshift range from 0.308 to 0.545. Our best-fitting GALFIT models show that the HFF UDGs have a median S\'ersic index of 1.09, close to the value of 0.86 for local UDGs in the Coma cluster. The median axis ratio is 0.68 for HFF UDGs and 0.74 for Coma UDGs. The structural similarity between HFF and Coma UDGs suggests that they are the same kind of galaxies seen at different times and that the structures of UDGs do not change for at least several billion years. By checking the distribution of HFF UDGs in the rest-frame $UVJ$ and $UVI$ diagrams, we find that a large fraction of them are star-forming. Furthermore, a majority of HFF UDGs show small $\rm U-V$ color gradients within the $1\,R_{e,\rm SMA}$ region: the fluctuation of the median radial color profile of HFF UDGs is smaller than 0.1\,mag, comparable to that of Coma UDGs. Our results indicate that cluster UDGs may fade or quench in a self-similar way, irrespective of the radial distance, in less than $\sim$4 Gyr.
Dust emission is an important tool in studies of star-forming clouds, as a tracer of column density and, indirectly, via the dust evolution that is connected to the history and physical conditions of the clouds. We examine radiative transfer (RT) modelling of dust emission over an extended cloud region, using a filament in the Taurus molecular cloud as an example. We examine how well far-infrared observations can be used to determine both the cloud and the dust properties. Using different assumptions for the cloud shape, radiation field, and dust properties, we fit RT models to Herschel observations of the Taurus filament. Further comparisons are made with measurements of the near-infrared extinction. The models are used to examine the degeneracies between the different cloud parameters and the dust properties. The results show a significant dependence on the assumed cloud structure and the spectral shape of the external radiation field. If these are constrained to the most likely values, the observations can be explained only if the dust far-infrared (FIR) opacity has increased by a factor of 2-3 relative to the values in the diffuse medium. However, a narrow range of FIR wavelengths provides only weak evidence of spatial variations in the dust, even in models covering several square degrees of a molecular cloud. The analysis of FIR dust emission is affected by several sources of uncertainty. Further constraints are therefore needed from observations at shorter wavelengths, especially regarding the trends in dust evolution.
The first year of JWST has revealed a surprisingly large number of luminous galaxy candidates beyond $z>10$. While some galaxies are already spectroscopically confirmed, there is mounting evidence that a subsample of the candidates with particularly red inferred UV colors are in fact lower-redshift contaminants. These interlopers are often found to be `HST-dark' or `optically-faint' galaxies at $z\sim2-6$, a population key to understanding dust-obscured star formation throughout cosmic time. This paper demonstrates the complementarity of ground-based mm-interferometry and JWST infrared imaging in unveiling the true nature of red 1.5-2.0 $\mu$m dropouts that have been selected as ultra-high-redshift galaxy candidates. We present NOEMA Polyfix follow-up observations of four JWST red 1.5-2.0 $\mu$m dropouts selected by Yan et al. (2023) as ultra-high-redshift candidates in the PEARLS field. The new NOEMA observations constrain the rest-frame far-infrared continuum emission and efficiently discriminate between intermediate- and high-redshift solutions. We report $>10\sigma$ NOEMA continuum detections of all our target galaxies at observed frequencies of $\nu$=236 and 252 GHz, with FIR slopes indicating a redshift $z<5$. We model their optical-to-FIR spectral energy distribution (SED) with multiple SED codes and find that they are not $z>10$ galaxies but instead dust-obscured, massive star-forming galaxies at $z\sim 2-4$. The contribution to the cosmic star-formation rate density of such sources is not negligible at $z\simeq 3.5$ ($\phi\gtrsim(1.9-4.4)\times10^{-3}\ \rm{cMpc}^{-3}$), in line with previous studies of optically-faint/sub-millimeter galaxies. This work showcases a new way to select intermediate- to high-redshift dust-obscured galaxies in JWST fields with minimal wavelength coverage, opening a new window on obscured star formation at intermediate redshifts. [abridged]
Stellar feedback plays a crucial role in star formation and the life cycle of molecular clouds. The intense star formation region 30 Doradus, located in the Large Magellanic Cloud (LMC), is a unique target for detailed investigation of stellar feedback owing to the proximity of the hosting galaxy and modern observational capabilities that together allow us to resolve individual molecular clouds $-$ the nurseries of star formation. We study the impact of large-scale feedback on the molecular gas using new observational data in the $^{12}$CO(3$-$2) line obtained with the APEX telescope. Our data cover an unprecedented area of 13.9 sq. deg. of the LMC disc with a spatial resolution of 5 pc and provide an unbiased view of the molecular clouds in the galaxy. Using these data, we locate molecular clouds in the disc of the galaxy, estimate their properties, such as the areal number density, relative velocity and separation, width of the line profile, CO-line luminosity, size, and virial mass, and compare these properties between the clouds of 30 Doradus and those in the rest of the LMC disc. We find that, compared with the rest of the observed molecular clouds in the LMC disc, those in 30 Doradus show the highest areal number density; they are spatially more clustered, move faster with respect to each other, and feature larger linewidths. In parallel, we do not find statistically significant differences in properties such as the CO-line luminosity, size, and virial mass between the clouds of 30 Doradus and the rest of the observed field. We interpret our results as signatures of gas dispersal and fragmentation due to high-energy large-scale feedback.
Groups of galaxies are the intermediate-density environment in which much of the evolution of galaxies is thought to take place. In spectroscopic redshift surveys, one can identify these groups as close associations in position and redshift. However, spectroscopic surveys will always be more limited in luminosity and completeness than imaging ones. Here we combine the Galaxy And Mass Assembly (GAMA) group catalogue with the extended Satellites Around Galactic Analogues (xSAGA) catalogue of machine-learning-identified low-redshift satellite galaxies. We find 1825 xSAGA galaxies within the bounds of the GAMA equatorial fields (m < 21), 1562 of which could have a counterpart in the GAMA spectroscopic catalogue (m < 19.8). Of these, 1326 do have a GAMA counterpart, with 974 below z=0.03 (true positives) and 352 above (false positives). By cross-correlating the GAMA group catalogue with the xSAGA catalogue, we can extend and characterize the satellite content of GAMA galaxy groups. We find that most groups have <5 xSAGA galaxies associated with them, although richer groups may have more. Each additional xSAGA galaxy contributes only a small fraction of the group's total stellar mass (<<10%). Selecting GAMA groups that resemble the Milky Way halo, with a few (<4) bright galaxies, we find that xSAGA can add sources a magnitude fainter to a group and that the Local Group does not stand out in its number of bright satellites. We explore the quiescent fraction of xSAGA galaxies in GAMA groups and find good agreement with the literature.
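For readers who want to reproduce the matching step, a counterpart search of this kind is typically a nearest-neighbour match on the sky. The sketch below uses astropy with mock coordinate arrays standing in for the real GAMA and xSAGA tables; the 3-arcsecond tolerance is illustrative, not the paper's actual criterion.

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    # mock catalogues standing in for the published GAMA and xSAGA tables
    rng = np.random.default_rng(0)
    gama_ra, gama_dec = rng.uniform(129, 141, 5000), rng.uniform(-2, 3, 5000)
    gama_z = rng.uniform(0.0, 0.4, 5000)
    xsaga_ra, xsaga_dec = rng.uniform(129, 141, 200), rng.uniform(-2, 3, 200)

    gama = SkyCoord(ra=gama_ra * u.deg, dec=gama_dec * u.deg)
    xsaga = SkyCoord(ra=xsaga_ra * u.deg, dec=xsaga_dec * u.deg)

    # nearest GAMA neighbour for every xSAGA galaxy
    idx, sep2d, _ = xsaga.match_to_catalog_sky(gama)
    matched = sep2d < 3.0 * u.arcsec              # illustrative tolerance
    true_pos = matched & (gama_z[idx] < 0.03)     # xSAGA targets z < 0.03
    print(matched.sum(), true_pos.sum())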
Evidence abounds that young stellar objects undergo luminous bursts of intense accretion that are short compared to the time it takes to form a star. It remains unclear how much these events contribute to the main-sequence masses of the stars. We demonstrate the power of time-series far-infrared (far-IR) photometry to answer this question compared to similar observations at shorter and longer wavelengths. We start with model spectral energy distributions that have been fit to 86 Class 0 protostars in the Orion molecular clouds. The protostars sample a broad range of envelope densities, cavity geometries, and viewing angles. We then increase the luminosity of each model by factors of 10, 50, and 100 and assess how these luminosity increases manifest in the form of flux increases over wavelength ranges of interest. We find that the fractional change in the far-IR luminosity during a burst more closely traces the change in the accretion rate than photometric diagnostics at mid-infrared and submillimeter wavelengths. We also show that observations at far-IR and longer wavelengths reliably track accretion changes without confusion from large, variable circumstellar and interstellar extinction that plague studies at shorter wavelengths. We close by discussing the ability of a proposed far-IR surveyor for the 2030s to enable improvements in our understanding of the role of accretion bursts in mass assembly.
We investigate light propagation in the gravitational field of multiple gravitational lenses. Assuming these lenses are sufficiently spaced to prevent interaction, we consider a linear alignment of the transmitter, lenses, and receiver. Remarkably, in this axially-symmetric configuration, we can solve the relevant diffraction integrals -- a result that offers valuable analytical insights. We show that the point-spread function (PSF) is affected by the number of lenses in the system. Even a single lens is useful for transmission, whether it is used as part of the transmitter or acts on the receiver's side. We show that power transmission via a pair of lenses benefits from light amplification on both ends of the link. The second lens plays an important role by focusing the signal to a much tighter spot; but in practical lensing scenarios, that lens changes the structure of the PSF on scales much smaller than the telescope, so the additional gain due to the presence of the second lens is independent of its properties and is governed solely by the transmission geometry. Evaluating the signal-to-noise ratio (SNR) in various transmission scenarios, we find that a single-lens transmission performs on par with a pair of lenses. The fact that the second lens amplifies the brightness of the first one creates a challenging background for signal reception. Nevertheless, in all the cases considered here, we have found practically relevant SNR values. As a result, we were able to demonstrate the feasibility of establishing interstellar power transmission links relying on gravitational lensing -- a finding with profound implications for applications targeting interstellar power transmission.
ESA's Hera mission aims to visit the binary asteroid Didymos in late 2026, investigating its physical characteristics and the outcome of the impact of NASA's DART spacecraft in more detail. Two CubeSats on board Hera plan to perform a ballistic landing on the secondary of the system, called Dimorphos. In this type of landing the translational state during descent is not controlled, which reduces the spacecraft's complexity but also increases its sensitivity to deployment maneuver errors and dynamical uncertainties. This paper introduces a novel methodology to analyse the effect of these uncertainties on the dynamics of the lander and to design a trajectory that is robust against them. The methodology consists of propagating the uncertain state of the lander using the non-intrusive Chebyshev interpolation (NCI) technique, which approximates the uncertain dynamics with a polynomial expansion, and analysing the results using the pseudo-diffusion indicator, derived from the coefficients of the polynomial expansion, which quantifies the rate of growth of the set of possible states of the spacecraft over time. This indicator is used here to constrain the impact velocity and angle to values that allow for successful settling on the surface. This information is then used to optimize the landing trajectory by applying the NCI technique inside the transcription of the problem. The resulting trajectory is more robust than one obtained with a conventional method, improving the landing success rate by 20 percent and significantly reducing the landing footprint.
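To make the NCI idea concrete, here is a minimal one-dimensional sketch: a toy stand-in for the lander dynamics is sampled at Chebyshev nodes of an uncertain deployment parameter and fitted with a Chebyshev expansion, and the size of the non-constant coefficients is used as a crude spread measure in the spirit of, but not identical to, the paper's pseudo-diffusion indicator.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def propagate(dv, t=100.0):
        """Toy stand-in for the lander propagation: final along-track
        offset as a function of a scaled deployment error dv in [-1, 1]."""
        return 0.5 * dv * t + 0.01 * dv**2 * t**2

    nodes = C.chebpts1(9)                          # Chebyshev sampling nodes
    samples = np.array([propagate(dv) for dv in nodes])
    coeffs = C.chebfit(nodes, samples, deg=8)      # polynomial surrogate

    print(C.chebval(0.3, coeffs))                  # cheap surrogate evaluation

    # spread of the reachable set, read off the non-constant coefficients
    spread = np.sum(np.abs(coeffs[1:]))
    print(spread)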
We present the DECam Ecliptic Exploration Project (DEEP) survey strategy, including the observing cadence for orbit determination, exposure times, field pointings, and filter choices. The overall goal of the survey is to discover and characterize the orbits of a few thousand Trans-Neptunian Objects (TNOs) using the Dark Energy Camera (DECam) on the Cerro Tololo Inter-American Observatory (CTIO) Blanco 4-meter telescope. The experiment is designed to collect a very deep series of exposures totaling a few hours on sky for each of several 2.7 square degree DECam fields of view, reaching a depth of about magnitude 26.2 in a wide VR filter that encompasses both the V and R bandpasses. In the first year, several nights were combined to achieve a sky area of about 34 square degrees. In subsequent years, the fields have been re-visited to allow TNOs to be tracked for orbit determination. When complete, DEEP will be the largest survey of the outer solar system ever undertaken in terms of the number of newly discovered objects, and the most prolific at producing multi-year orbital information for the population of minor planets beyond Neptune at 30 au.
Machine learning techniques can automatically identify outliers in massive datasets, far faster and more reproducibly than human inspection ever could. But finding such outliers immediately leads to the question: which features render this input anomalous? We propose a new feature attribution method, Inverse Multiscale Occlusion, that is specifically designed for outliers, for which we have little prior knowledge of the type of features we want to identify, and for which model performance is questionable because anomalous test data likely exceed the limits of the training data. We demonstrate our method on outliers detected in galaxy spectra from the Dark Energy Spectroscopic Instrument and find its results to be much more interpretable than those of alternative attribution approaches.
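As background, a plain occlusion attribution, of which the paper's method is an outlier-adapted inverse variant, works roughly as sketched below; the masking strategy and window scales are illustrative only.

    import numpy as np

    def occlusion_attribution(spectrum, score_fn, scales=(4, 16, 64)):
        """Occlusion-style attribution for an anomaly score: mask windows
        of the input at several scales and record how the score responds."""
        base = score_fn(spectrum)
        attr = np.zeros_like(spectrum, dtype=float)
        for w in scales:
            for start in range(0, len(spectrum), w):
                masked = spectrum.copy()
                masked[start:start + w] = np.median(spectrum)  # occlude window
                attr[start:start + w] += (base - score_fn(masked)) / len(scales)
        return attr

    # toy demo: the "anomaly" is a spike, and the score is its amplitude
    spec = np.ones(256); spec[100:104] = 8.0
    print(occlusion_attribution(spec, lambda s: s.max()).argmax())  # ~100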
For diffraction-limited optical systems an accurate physical optics model is necessary to properly evaluate instrument performance. Astronomical observatories outfitted with coronagraphs for direct exoplanet imaging require physical optics models to simulate the effects of misalignment and diffraction. Accurate knowledge of the observatory's PSF is integral to the design of high-contrast imaging instruments and the simulation of astrophysical observations. The state of the art is to model the misalignment, ray aberration, and diffraction across multiple software packages, which complicates the design process. Gaussian Beamlet Decomposition (GBD) is a ray-based method of diffraction calculation that has been widely implemented in commercial optical design software. By performing the coherent calculation with data from the ray model of the observatory, the ray aberration errors can be fed directly into the physical optics model of the coronagraph, enabling a more integrated model of the observatory. We develop a formal algorithm for the transfer-matrix method of GBD and evaluate it against analytical results and a traditional physical optics model to assess the suitability of GBD for high-contrast imaging simulations. Our GBD simulations of the observatory PSF, when compared to the analytical Airy function, have a sum-normalized RMS difference of ~10^-6. These fields are then propagated through a Fraunhofer model of an exoplanet-imaging coronagraph, where the mean residual numerical contrast is 4x10^-11, with a maximum near the inner working angle of 5x10^-9. These results show considerable promise for the future development of GBD as a viable propagation technique in high-contrast imaging. We developed this algorithm in an open-source software package and outline a path for its continued development to increase the fidelity and flexibility of diffraction simulations using GBD.
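The optical core of GBD is the propagation of each beamlet's complex beam parameter through the system's ray-transfer (ABCD) matrices. The snippet below shows that building block for a single beamlet and illustrative optics; it is not the paper's full transfer-matrix algorithm, and all numbers are made up for the example.

    import numpy as np

    def propagate_q(q, abcd):
        """Propagate the complex beam parameter q of one Gaussian beamlet
        through an ABCD ray-transfer matrix: q' = (A q + B) / (C q + D)."""
        (A, B), (C_, D) = abcd
        return (A * q + B) / (C_ * q + D)

    wavelength = 500e-9
    w0 = 1e-3                                  # beamlet waist [m]
    zr = np.pi * w0**2 / wavelength            # Rayleigh range
    q0 = 1j * zr                               # waist at z = 0

    free_space = np.array([[1.0, 10.0], [0.0, 1.0]])        # 10 m of propagation
    thin_lens  = np.array([[1.0, 0.0], [-1.0 / 5.0, 1.0]])  # f = 5 m lens

    q1 = propagate_q(propagate_q(q0, free_space), thin_lens)
    w = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q1)))  # beam radius [m]
    print(w)

A full GBD calculation decomposes the pupil into many such beamlets, propagates each with the (generally astigmatic) matrices taken from the ray trace, and coherently sums the fields at the detector.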
The known near-Earth object (NEO) population consists of over 32,000 objects, with a discovery rate of over 3000 NEOs per year. An essential component of the next generation of NEO surveys is an understanding of the population of known objects, including an accounting of the discovery rate per year as a function of size. Using a near-Earth asteroid (NEA) reference model developed for NASA's NEO Surveyor (NEOS) mission and a model of the major current and historical ground-based surveys, an estimate of the current NEA survey completeness as a function of size and absolute magnitude has been determined (termed the Known Object Model; KOM). This allows for an understanding of the intersection of the known catalog of NEAs and the objects expected to be observed by NEOS. The current NEA population is found to be $\sim38\%$ complete for objects larger than 140 m, consistent with estimates by Harris & Chodas (2021). NEOS is expected to catalog more than two thirds of the NEAs larger than 140 m, resulting in $\sim76\%$ of NEAs being cataloged at the end of its 5-year nominal survey (Mainzer et al. 2023), making significant progress towards the US Congressional mandate. The KOM estimates that $\sim77\%$ of the currently cataloged objects will be detected by NEOS, with those not detected contributing $\sim9\%$ to the final completeness at the end of its 5-year mission. This model allows the NEO Surveyor mission to be placed in the context of current surveys to more completely assess progress toward the goal of cataloging the population of hazardous asteroids.
Space-based gravitational wave (GW) detection is one of the most anticipated GW detection projects of the next decade and will detect abundant compact binary systems. However, the precise prediction of space GW waveforms remains unexplored. To address the data-processing challenges posed by the increased waveform complexity arising from the detector response and second-generation time-delay interferometry (TDI 2.0), we propose an interpretable pre-trained large model named CBS-GPT (Compact Binary Systems Waveform Generation with Generative Pre-trained Transformer). Three models were trained to predict the waveforms of massive black hole binaries (MBHBs), extreme mass-ratio inspirals (EMRIs), and galactic binaries (GBs), achieving prediction accuracies of 98%, 91%, and 99%, respectively. The CBS-GPT model exhibits notable interpretability: its hidden parameters effectively capture the intricate information of the waveforms, even with a complex instrument response and a wide parameter range. Our research demonstrates the potential of large pre-trained models in gravitational wave data processing, opening up new opportunities for future tasks such as gap completion, GW signal detection, and signal noise reduction.
An event-based maximum likelihood method for handling X-ray polarimetry data is extended to include the effects of background and nonuniform sampling of the possible position-angle space. While nonuniform sampling in position-angle space generally introduces cross terms in the uncertainties of the polarization parameters that could create degeneracies, there are interesting cases that engender no bias or parameter covariance. When background is included in the Poisson-based likelihood formulation, the formula for the minimum detectable polarization (MDP) has nearly the same form as in the case of Gaussian statistics derived by Elsner et al. (2012) in the limiting case of an unpolarized signal. A polarized background is also considered, which demonstrably increases the uncertainties in source polarization measurements. In addition, a Kolmogorov-style test of the event position-angle distribution is proposed that can provide an unbinned test of models where the polarization angle in Stokes space depends on event characteristics such as time or energy.
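For orientation, the Gaussian-limit expression being referred to has, schematically, the form $\mathrm{MDP}_{99} \simeq \frac{4.29}{\mu R_S}\sqrt{(R_S+R_B)/T}$, where $\mu$ is the modulation factor, $R_S$ and $R_B$ are the source and background count rates, and $T$ is the exposure time. The prefactor and notation here are quoted from memory of the standard result; consult Elsner et al. (2012) for the exact expression and its derivation.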
Ncorpi$\mathcal{O}$N is an $N$-body software package developed for the time-efficient integration of collisional and fragmenting systems of planetesimals or moonlets orbiting a central mass. It features a fragmentation model, based on crater scaling and ejecta models, able to realistically simulate a violent impact. The user of Ncorpi$\mathcal{O}$N can choose between four different built-in modules to compute self-gravity and detect collisions. One of these uses a mesh-based algorithm to treat mutual interactions in $\mathcal{O}(N)$ time. Another module, much more efficient than the standard Barnes-Hut tree code, is an $\mathcal{O}(N)$ tree-based algorithm called FalcON. It relies on fast multipole expansions for gravity computation, and we adapted it to collision detection as well. Computation time is reduced by building the tree structure along a three-dimensional Hilbert curve. For the same precision in mutual gravity computation, Ncorpi$\mathcal{O}$N is found to be up to 25 times faster than the widely used software REBOUND. Ncorpi$\mathcal{O}$N is written entirely in the C language and only needs a C compiler to run. A Python add-on, which requires only basic Python libraries, produces animations of the simulations from the output files. The name Ncorpi$\mathcal{O}$N, reminiscent of a scorpion, comes from the French $N$-corps, meaning $N$-body, and from the mathematical notation $\mathcal{O}(N)$, the running time of the software being almost linear in the total number $N$ of moonlets. Ncorpi$\mathcal{O}$N is designed for the study of accreting or fragmenting disks of planetesimals or moonlets. It detects collisions and computes mutual gravity faster than REBOUND, and unlike other $N$-body integrators, it can resolve a collision by fragmentation. The fast multipole expansions are implemented up to order six to allow for high precision in mutual gravity computation.
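The space-filling-curve trick is easy to sketch: sort the bodies by their position along the curve so that spatially nearby bodies become contiguous in memory, and tree cells become index ranges. Ncorpi$\mathcal{O}$N uses a 3-D Hilbert curve (and is written in C); for brevity the Python stand-in below uses the closely related Morton (Z-order) key, which captures the same sort-then-build idea.

    import numpy as np

    def part1by2(v):
        """Spread the 10 low bits of v so they occupy every third bit."""
        v = v & np.uint64(0x3FF)
        v = (v | (v << np.uint64(16))) & np.uint64(0x30000FF)
        v = (v | (v << np.uint64(8)))  & np.uint64(0x300F00F)
        v = (v | (v << np.uint64(4)))  & np.uint64(0x30C30C3)
        v = (v | (v << np.uint64(2)))  & np.uint64(0x9249249)
        return v

    def morton_key(pos, lo, hi, bits=10):
        """Z-order key of 3-D positions scaled onto a 2**bits grid."""
        g = ((pos - lo) / (hi - lo) * (2**bits - 1)).astype(np.uint64)
        return (part1by2(g[:, 0]) | (part1by2(g[:, 1]) << np.uint64(1))
                | (part1by2(g[:, 2]) << np.uint64(2)))

    rng = np.random.default_rng(1)
    pos = rng.uniform(-1.0, 1.0, (10_000, 3))
    pos = pos[np.argsort(morton_key(pos, -1.0, 1.0))]
    # after the sort, any tree cell is a contiguous slice of `pos`,
    # which is what enables near-O(N) tree construction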
Simulating the evolution of the gravitational N-body problem becomes extremely computationally expensive as N increases, since the problem complexity scales quadratically with the number of bodies. We study the use of Artificial Neural Networks (ANNs) to replace expensive parts of the integration of planetary systems. Neural networks that include physical knowledge have grown in popularity in the last few years, although few attempts have been made to use them to speed up the simulation of the motion of celestial bodies. We study the advantages and limitations of using Hamiltonian Neural Networks to replace computationally expensive parts of the numerical simulation. We compare the results of the numerical integration of a planetary system with asteroids with those obtained by a Hamiltonian Neural Network and a conventional Deep Neural Network, with special attention to understanding the challenges of this problem. Due to the non-linear nature of the gravitational equations of motion, errors in the integration propagate. To increase the robustness of a method that uses neural networks, we propose a hybrid integrator that evaluates the prediction of the network and replaces it with the numerical solution if considered inaccurate. Hamiltonian Neural Networks can make predictions that resemble the behavior of symplectic integrators, but they are challenging to train and, in our case, fail when the inputs differ by ~7 orders of magnitude. In contrast, Deep Neural Networks are easy to train but fail to conserve energy, leading to fast divergence from the reference solution. The hybrid integrator designed around the neural networks increases the reliability of the method and prevents large energy errors without significantly increasing the computational cost. For this problem, the use of neural networks results in faster simulations when the number of asteroids is >70.
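A schematic version of the fallback logic is sketched below, with an energy-conservation check standing in for whatever accuracy criterion the authors actually use; nn_step, numeric_step, and hamiltonian are placeholder callables, and the demo dynamics are a toy harmonic oscillator, not the paper's planetary system.

    import numpy as np

    def hybrid_step(state, dt, nn_step, numeric_step, hamiltonian, tol=1e-6):
        """Trust the network prediction only if it conserves the Hamiltonian
        to within tol; otherwise fall back to the numerical integrator."""
        e0 = hamiltonian(state)
        candidate = nn_step(state, dt)
        if abs(hamiltonian(candidate) - e0) / abs(e0) < tol:
            return candidate               # network prediction accepted
        return numeric_step(state, dt)     # fallback: numerical solution

    # toy demo: harmonic oscillator with a deliberately crude "network";
    # the crude surrogate fails the check, so the guard falls back to leapfrog
    H = lambda s: 0.5 * (s[0]**2 + s[1]**2)            # s = (q, p)

    def leapfrog(s, dt):
        q, p = s
        p -= 0.5 * dt * q
        q += dt * p
        p -= 0.5 * dt * q
        return np.array([q, p])

    fake_nn = lambda s, dt: s + dt * np.array([s[1], -s[0]])  # Euler-like guess
    s = np.array([1.0, 0.0])
    for _ in range(1000):
        s = hybrid_step(s, 0.01, fake_nn, leapfrog, H)
    print(H(s))   # stays near 0.5 thanks to the fallback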
Scientific data collected at ESO's observatories are freely and openly accessible online through the ESO Science Archive Facility. In addition to the raw data straight out of the instruments, the ESO Science Archive also contains four million processed science files available for use by scientists and astronomy enthusiasts worldwide. ESO subscribes to the FAIR (Findable, Accessible, Interoperable, Reusable) guiding principles for scientific data management and stewardship. All data in the ESO Science Archive are distributed according to the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0).
PRIMA (the PRobe far-Infrared Mission for Astrophysics) is a concept for a far-infrared (IR) observatory. PRIMA features a cryogenically cooled 1.8 m diameter telescope and is designed to carry two science instruments enabling ultra-high-sensitivity imaging and spectroscopic studies in the 24 to 235 micron wavelength range. The resulting observatory is a powerful survey and discovery machine, with mapping speeds 2-4 orders of magnitude better than those of its far-IR predecessors. The bulk of the observing time on PRIMA should be made available to the community through a General Observer (GO) program offering 75% of the mission time over 5 years. In March 2023, the international astronomy community was encouraged to prepare authored contributions articulating scientific cases that are enabled by the telescope's massive sensitivity advance and broad spectral coverage, and that could be performed within the context of the GO program. This document, the PRIMA General Observer Science Book, is the edited collection of the 76 received contributions.
The Kessler syndrome refers to the escalating accumulation of space debris from frequent space activities, threatening future space exploration. Addressing this issue is vital. Several AI models, including Convolutional Neural Networks, Kernel Principal Component Analysis, and Model-Agnostic Meta-Learning, have been assessed with various data types. Earlier studies highlighted the combination of the YOLO object detector and a linear Kalman filter (LKF) for object detection and tracking. Advancing this, the current paper introduces a novel methodology for the Comprehensive Orbital Surveillance and Monitoring Of Space by Detecting Satellite Residuals (CosmosDSR) by combining YOLOv3 with an Unscented Kalman Filter (UKF) for tracking satellites in sequential images. Using the Spacecraft Recognition Leveraging Knowledge of Space Environment (SPARK) dataset for training and testing, YOLOv3 precisely detected and classified all satellite categories (mean average precision = 97.18%, F1 = 0.95) with few errors (TP = 4163, FP = 209, FN = 237). Both CosmosDSR and an LKF implemented for comparison tracked satellites accurately (UKF: MSE = 2.83, RMSE = 1.66; LKF: MSE = 2.84, RMSE = 1.66). The current study is limited to images generated in a space simulation environment, but the CosmosDSR methodology shows great potential for detecting and tracking satellites, paving the way for solutions to the Kessler syndrome.
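As a reference point for the tracking stage, the LKF baseline that CosmosDSR is compared against can be written in a few lines for a constant-velocity motion model; the state layout and noise levels below are illustrative, not the paper's settings. The UKF replaces the linear predict step with sigma-point propagation through a (possibly nonlinear) motion model.

    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
    Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe position only
    Q, R = np.eye(4) * 1e-3, np.eye(2) * 2.0            # noise (illustrative)

    def kf_step(x, P, z):
        """One predict/update cycle on the detector's bounding-box centroid z."""
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - Hm @ x)                        # update
        P = (np.eye(4) - K @ Hm) @ P
        return x, P

    rng = np.random.default_rng(0)
    x, P = np.zeros(4), np.eye(4) * 10.0
    for k in range(20):                                 # noisy straight-line track
        z = np.array([k * 3.0, 100.0]) + rng.normal(0.0, 1.4, 2)
        x, P = kf_step(x, P, z)
    print(x[:2])   # filtered position estimate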
The recently obtained $\textit{special}$ Buchdahl-inspired metric [Phys. Rev. D 107, 104008 (2023)] describes asymptotically flat spacetimes in pure Ricci-squared gravity. The metric depends on a new (Buchdahl) parameter $\tilde{k}$ of higher-derivative character and reduces to the Schwarzschild metric for $\tilde{k}=0$. For the case $\tilde{k}\in(-1,0)$, it was shown to describe a traversable Morris-Thorne-Buchdahl (MTB) wormhole [Eur. Phys. J. C 83, 626 (2023)], where the weak energy condition is formally violated. In this paper, we briefly review the $\textit{special}$ Buchdahl-inspired metric, focusing on the construction of the $\zeta$-Kruskal-Szekeres (KS) diagram and the conditions for a wormhole to emerge. Interestingly, the MTB wormhole structure appears to permit the formation of closed timelike curves (CTCs). More specifically, a CTC straddles the throat, comprising two segments positioned in opposite quadrants of the $\zeta$-KS diagram. The closed timelike loop thus passes through the wormhole throat twice, causing $\textit{two}$ reversals in the time direction experienced by the (timelike) traveller on the CTC. The key to constructing a CTC lies in identifying any given pair of antipodal points $(T,X)$ and $(-T,-X)$ $\textit{on the wormhole throat}$ in the $\zeta$-KS diagram as corresponding to the same spacetime event. It is interesting to note that the Campanelli-Lousto metric in Brans-Dicke gravity is known to support two-way traversable wormholes, and the formation of the CTCs presented herein applies equally to the Campanelli-Lousto solution.
The effective low-temperature dynamics of near-extremal black holes in the s-wave sector is governed by the quantum fluctuations of the Schwarzian mode of JT gravity. Utilizing as a proxy a planar charged black hole in asymptotically Anti-de Sitter spacetime, we investigate the effects of these fluctuations on a probe scalar field. The corresponding holographic real-time boundary correlators are computed following a holographic renormalization procedure, using the so-called gravitational Schwinger-Keldysh (grSK) geometry and known exact results for boundary correlators from the near-horizon region. This analysis gives rise to a retarded Green's function that decays as a power law at late Lorentzian times. Its analytic structure indicates the presence of a branch cut in the complex frequency domain at finite temperature. These features are a non-perturbative hallmark that prevails as long as the planar transverse space is kept compact.
A new class of modified gravity theories, made possible by subtle features of the canonical formulation of general covariance, naturally allows MOND-like (MOdified Newtonian Dynamics) behavior in effective space-time solutions without introducing new fields. A detailed analysis reveals a relationship with various quantum-gravity features, in particular in canonical approaches, and shows several properties of potential observational relevance. A fundamental origin of MOND and a corresponding solution to the dark-matter problem are therefore possible and testable.
We prove a new geometric inequality that relates the Arnowitt-Deser-Misner (ADM) mass of initial data to a quasilocal angular momentum of a marginally outer trapped surface (MOTS) inner boundary. The inequality is expressed in terms of a 1-spinor, which satisfies an intrinsic first-order Dirac-type equation. Furthermore, we show that if the initial data is axisymmetric, then the divergence-free vector used to define the quasilocal angular momentum cannot be a Killing field of the generic boundary.
Vacuum spherically symmetric loop quantum gravity in the midi-superspace approximation, using inhomogeneous horizon-penetrating slices, has been studied for a decade, and it has been noted that the singularity is eliminated. It is replaced by a region of high curvature and potentially large quantum fluctuations. It was recently pointed out that the effective semiclassical metric implies the existence of a shell of effective matter which violates energy conditions in the regions where the curvature is largest. Here we propose an alternative way of treating the problem that is free from such shells. The ambiguity in the treatment is related to the existence of new observables in the quantum theory that characterize the area excitations, and to how the counterpart of diffeomorphisms in the discrete quantum theory is mapped to the continuum semi-classical picture. The resulting space-time in the high-curvature region inside the horizon is approximated by a metric of the Simpson--Visser wormhole type, which connects the black hole interior to a white hole in a smooth manner.
In three-dimensional pseudo-Riemannian manifolds, the Cotton tensor arises as the variation of the gravitational Chern-Simons action with respect to the metric. It is Weyl-covariant, symmetric, traceless, and covariantly conserved. Performing a reduction of the Cotton tensor with respect to Carrollian diffeomorphisms in a suitable frame, one uncovers four sets of Carrollian relatives of the Cotton tensor, which are conformal and obey Carrollian conservation equations. Each set of Carrollian Cotton tensors is alternatively obtained as the variation of a distinct Carroll-Chern-Simons action with respect to the degenerate metric and the clock form of a strong Carroll structure. The four Carroll-Chern-Simons actions emerge in the Carrollian reduction of the original Chern-Simons ascendant. They inherit its anomalous behaviour under diffeomorphisms and Weyl transformations. The extrema of these Carrollian actions are discussed and illustrated.
We establish and develop a novel methodology to treat higher-order non-linear effects of gravitational radiation scattered from binary inspirals, employing modern scattering-amplitudes methods on the effective picture of the binary as a composite particle. We spell out our procedure for studying such effects: assembling tree amplitudes via generalized-unitarity methods and employing the closed-time-path formalism to derive the causal effective actions, which encompass the full conservative and dissipative dynamics. We push through to a new state of the art for these higher-order effects, up to the third subleading tail effect, at order $G_N^5$ and the $5$-loop level, which corresponds to the $8.5$PN order. We formulate the consequent dissipated energy for these higher-order corrections and carry out a renormalization analysis, uncovering a new subleading RG flow of the quadrupole coupling. For all higher-order tail effects we find perfect agreement with the partial observable results available in PN and self-force theories.
We apply Wald's formalism to a Lagrangian within generalised Proca gravity that admits a Schwarzschild black hole with a non-trivial vector field. The resulting entropy differs from that of the same black hole in General Relativity by a logarithmic correction modulated by the only independent charge of the vector field. We find conditions on this charge to guarantee that the entropy is a non-decreasing function of the black hole area, as is the case in GR. If this requirement is extended to black hole mergers, we find that for Planck scale black holes, a non-decreasing entropy is possible only if the area of the final black hole is several times larger than the initial total area of the merger. Finally, we discuss some implications of the vector Galileon entropy from the point of view of entropic gravity.
Spinfoams provide a framework for the dynamics of loop quantum gravity that is manifestly covariant under the full four-dimensional diffeomorphism symmetry group of general relativity. In this way they complete the ideal of three-dimensional diffeomorphism covariance that consistently motivates loop quantum gravity at every step. Specifically, spinfoam models aim to provide a projector onto, and a physical inner product on, the simultaneous kernel of all of the constraints of loop quantum gravity by means of a discretization of the gravitational path integral. In the limit of small Planck constant, they are closely related to the path integral for Regge calculus, while at the same time retaining all of the tools of a canonical quantum theory of gravity. They may also be understood as generalizations of well-understood state sum models for topological quantum field theories. In this chapter, we review all of these aspects of spinfoams, as well as review in detail the derivation of the currently most used spinfoam model, the EPRL model, calculational tools for it, and the various extensions of it in the literature. We additionally summarize some of the successes and open problems in the field.
In this paper we consider four-dimensional (4D) linearized gravity (LG) with a planar boundary, where the most general boundary conditions are derived following Symanzik's approach. The boundary breaks diffeomorphism invariance, and this results in a breaking of the corresponding Ward identity. From this, on the boundary we find two conserved currents which form an algebraic structure of the Kac-Moody type, with a central charge proportional to the action ``coupling''. Moreover, we identify the boundary degrees of freedom, which are two symmetric rank-2 tensor fields, and derive the symmetry transformations, which are diffeomorphisms. The corresponding most general 3D action is obtained, and contact with the higher-dimensional theory is established by requiring that the 3D equations of motion coincide with the 4D boundary conditions. Through this kind of holographic procedure, we find two solutions: LG for a single tensor field and LG for two tensor fields with a mixing term. Curiously, we find that Symanzik's 4D boundary term, which governs the whole procedure, contains a mass term of the Fierz-Pauli type for the bulk graviton.
We consider the rotating generalization of the Bardeen black hole solution in the presence of a cloud of strings (CoS). The parameter space for which the black hole horizon exists is determined. We also study the static limit surface and the ergo-region in the presence of the CoS parameter. We consider photon orbits and obtain the deformation of black hole shadows due to rotation for various values of the CoS parameter. The shadow deformation is used to determine the black hole spin for different values of the black hole parameters.
The landscape of six-dimensional supergravities is dramatically constrained by the cancellation of gauge and gravitational anomalies, but the full extent of its implications has not been uncovered. We explore the cancellation of global anomalies of the Dai-Freed type in this setting with abelian and simply laced gauge groups, finding novel constraints. In particular, we exclude arbitrarily large abelian charges in an infinite family of theories for certain types of quadratic refinements, including a specific one defined in the literature. We also show that the Gepner orientifold with no tensor multiplets is anomaly-free for a different choice, as well as a number of heterotic models with and without spacetime supersymmetry in six dimensions. The latter analysis extends previous results in ten dimensions to some lower-dimensional settings in the heterotic landscape.
The optical characteristics of three types of black holes (BHs) surrounded by a thin accretion disk are discussed, namely the Schwarzschild BH, Bardeen BH, and Hayward BH. We calculate the deflection angle of light as it traverses the vicinity of each BH using numerical integration and semi-analytical methods, revealing that both approaches can effectively elucidate the deflection of light around the BH. We investigate the optical appearance of the accretion disk and its corresponding observational images at various viewing angles, discovering that the luminosity in the region near the BH on the inner side of the accretion disk is higher than that on the outer side owing to higher material density in closer proximity to the BH. We observe a significant accumulation of brightness on the left side of the accretion disk, attributed to the motion of matter and geometric effects. Our findings emphasize the significant influence of the observation inclination angle on the observed outcomes. An increase in the observation inclination angle results in the separation of higher-order images. With the improvement in EHT observation accuracy, we believe that the feature of a minimal distance between the innermost region of the direct image of the Hayward BH and the outermost region of the secondary image can be used as an indicator for identifying Hayward BHs.
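For reference, both the numerical and the semi-analytical routes typically start from the standard bending-angle integral for a static, spherically symmetric metric with lapse $f(r)$: $\alpha(b) = 2\int_{r_0}^{\infty} \frac{dr}{r^2\sqrt{1/b^2 - f(r)/r^2}} - \pi$, where $b$ is the impact parameter and $r_0$ the turning point satisfying $1/b^2 = f(r_0)/r_0^2$. This is the textbook form, with the Schwarzschild, Bardeen, and Hayward cases differing only in the choice of $f(r)$; the paper's exact conventions may differ.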
We classify and describe totally geodesic and parallel hypersurfaces for the entire class of Siklos spacetimes. A large class of minimal hypersurfaces is also described.
We report on a recently introduced Functional Renormalization Group (RG) Equation, and we apply it to quantum gravity in Lorentzian spacetimes. While the RG flow is state-dependent, it is possible to evaluate state and background independent contributions to the flow. Taking into account only these universal terms, the RG flow exhibits a non-trivial fixed point in the Einstein-Hilbert truncation, providing a mechanism for Asymptotic Safety in Lorentzian quantum gravity.
We consider mimetic Horava gravity, in which the scalar field of mimetic gravity is used to construct diffeomorphism-invariant models that reduce to Horava gravity in the synchronous gauge. We show that the gravitational action with the addition of the Gibbons-Hawking-York term and the mimetic Horava action are equivalent for manifolds whose topology is $R \times \Sigma$, where $\Sigma$ is a three-dimensional hypersurface; otherwise, the mimetic Horava action does not contain any surface terms.
We consider perturbative solutions in Einstein gravity with higher-derivative extensions and address some subtle issues in taking the extremal limit. As a concrete new result, we construct the perturbative rotating black hole in five dimensions with equal angular momenta $J$ and general mass $M$ in Einstein-Gauss-Bonnet gravity, up to and including linear order in the standard Gauss-Bonnet coupling constant $\alpha$. We obtain the near-horizon structure of the near-extremal solution, with a blackening factor of order $\alpha$. In the extremal limit, the mass-angular momentum relation reduces to $M=\frac32 \pi^{\frac13} J^{\frac23} + \pi \alpha$. The positive sign of the $\alpha$-correction implies that the centrifugal repulsion associated with rotation becomes weaker than the gravitational attraction under the unitarity requirement for the Gauss-Bonnet term.
In this article, we reduce the approximate wave function of a Dirac particle outside the horizon of the Kerr-Newman-de Sitter (KN-dS) black hole to an effective potential $V$, and derive both its real and imaginary parts. Fermions cannot individually produce superradiance, but we show that Dirac particles in the KN-dS black hole background admit a special solution in which they form a Cooper pair, and can thus produce superradiance. Treating the real and imaginary parts separately, we find that when either part of $V$ has a maximum, a potential barrier may form outside the horizon, allowing superradiance to occur.
We derive conservation laws in the Symmetric Teleparallel Equivalent of General Relativity (STEGR) by direct application of Noether's theorem. This approach allows us to construct covariant conserved currents, the corresponding superpotentials, and invariant charges. A necessary component of our construction is the concept of "turning off" gravity, introduced in the framework of STEGR to define the flat and torsionless connection. By calculating currents, one can obtain local characteristics of the gravitational field, such as the energy density. Surface integration of superpotentials gives charges that correspond to global quantities of the system, such as mass and momentum. To test our results for the obtained currents and superpotentials, we calculate the energy density measured by a freely falling observer in simple solutions (the Friedmann universe, the Schwarzschild black hole) and the total mass of the Schwarzschild black hole. We find ambiguities in obtaining the connection, which explicitly affect the values of the conserved quantities, and discuss possible solutions to this problem.
We examine how conformal boundaries encode energy transport coefficients -- namely transmission and reflection probabilities -- of corresponding conformal interfaces in symmetric orbifold theories. These constitute a large class of irrational theories and are closely related to holographic setups. Our central goal is to compare such coefficients at the orbifold point (a field theory calculation) against their values when the orbifold is highly deformed (a gravity calculation) -- an approach akin to past AdS/CFT-guided comparisons of physical quantities at strong versus weak coupling. At the orbifold point, we find that the (weighted-average) transport coefficients are simply averages of coefficients in the underlying seed theory. We then focus on the symmetric orbifold of the $\mathbb{T}^4$ sigma model interface CFT dual to type IIB supergravity on the 3d Janus solution. We compare the holographic transmission coefficient, found in [1], to that at the orbifold point. We find that the profile of the transmission coefficient increases substantially with the coupling, in contrast to the boundary entropy. We also present some related ideas about twisted-sector data encoded by boundary states.
The observation of gravitational waves from multiple compact binary coalescences by the LIGO-Virgo-KAGRA detector networks has enabled us to infer the underlying distribution of compact binaries across a wide range of masses, spins, and redshifts. In light of the new features found in the mass spectrum of binary black holes and the uncertainty regarding binary formation models, non-parametric population inference has become increasingly popular. In this work, we develop a data-driven clustering framework that can identify features in the component mass distribution of compact binaries simultaneously with those in the corresponding redshift distribution from gravitational wave data in the presence of significant measurement uncertainties, while making very few assumptions about the functional form of these distributions. Our generalized model is capable of inferring correlations among various population properties, such as the redshift evolution of the shape of the mass distribution itself, in contrast to most existing non-parametric inference schemes. We test our model on simulated data and demonstrate the accuracy with which it can reconstruct the underlying distributions of component masses and redshifts. We also re-analyze public LIGO-Virgo-KAGRA data from events in GWTC-3 using our model and compare our results with those from alternative parametric and non-parametric population inference approaches. Finally, we investigate the potential presence of correlations between mass and redshift in the population of binary black holes in GWTC-3 (those observed by the LIGO-Virgo-KAGRA detector network in their first 3 observing runs), without making any assumptions about the specific nature of these correlations.
We consider a little-known aspect of signature change, where the overall sign of the metric is allowed to change, with physical implications. We show how, in different formulations of general relativity, this type of classical signature change across boundaries with a degenerate metric can be made consistent with a change in sign (and value) of the cosmological constant $\Lambda$. In particular, the separate "mostly plus" and "mostly minus" signature sectors of Lorentzian gravity are most naturally associated with different signs of $\Lambda$. We show how this general phenomenon allows for classical solutions where the open dS patch can arise from a portion of AdS spacetime. These can be interpreted as classical "imaginary space" extensions of the usual Lorentzian theory, with $a^2<0$.
Anisotropic spherically symmetric solutions within the framework of the Brans-Dicke theory are uncovered through a unique gravitational decoupling approach involving a minimal geometric transformation. This transformation effectively divides the Einstein field equations into two separate systems, altering the radial metric component. The first system encompasses the influence of the seed source, derived from the metric functions of the isotropic Tolman IV solution, while the anisotropic source is subjected to two specific constraints in order to address the second system. Matching conditions determine the unknown constants at the boundary of the stellar object, enabling a comprehensive examination of the internal structure of stellar systems. This investigation delves into the impact of the decoupling parameter, the Brans-Dicke parameters, and a scalar field on the structural characteristics of anisotropic spherically symmetric spacetimes, all while considering the strong energy conditions.
In the absence of an exact development of cosmological models based on Mashhoon's nonlocal gravity, the Newtonian regime clarifies some of its aspects. To make this model more reliable, semi-post-Newtonian considerations may be useful. One important feature to consider is the formation of horizons, which can be understood as a consequence of the finite interaction velocity. This highlights the importance of extending the integration across the entire space-time, rather than focusing solely on the spatial sector. In this context, we show that the density of effective dark matter would increase, while the density of baryonic matter decreases, during the deep matter-dominated era. This finding contradicts the predictions of the standard model of cosmology and raises concerns about the compatibility of using the Tohline--Kuhn kernel and treating the nonlocal effects as an effective dark matter in $\Lambda$CDM.
Kastor and Traschen constructed totally anti-symmetric conserved currents that are linear in the Riemann curvature in spacetimes admitting Killing-Yano tensors. The construction does not refer to any field equations and is built on the algebraic and differential symmetries of the Riemann tensor as well as on the Killing-Yano equation. Here we give a systematic generalization of their work and find divergence-free currents that are built from the powers of the curvature tensor. A rank-4 divergence-free tensor that is constructed from the powers of the curvature tensor plays a major role here and it comes from the Lanczos-Lovelock theory.
In this work, we probe weak gravitational lensing by a static spherically symmetric black hole in $f(R)$ gravity in a non-plasma medium (vacuum). We discuss the propagation of a light ray in a static black hole solution of $f(R)$ gravity. For this purpose, we find the Gaussian optical curvature in weak gravitational lensing by utilizing the optical geometry of this black hole solution. Furthermore, we find the deflection angle up to leading order by employing the Gauss-Bonnet theorem. We present a graphical analysis of the deflection angle with respect to the various parameters that govern the black hole. Further, we calculate the Hawking temperature for this black hole via a topological method and compare it with the standard method of deriving the Hawking temperature. We also analyze the Schr\"odinger-like Regge-Wheeler equation, derive a bound on the greybody factor for a static black hole in the framework of $f(R)$ gravity, and show graphically that the bound converges to 1. We also investigate the silhouette, or shadow, generated by this static $f(R)$ black hole. Moreover, we constrain the non-negative real constant and the cosmological constant from the observed angular diameters of M87* and Sgr A* released by the EHT. We then probe how the cosmological constant, the non-negative real constant, and the mass affect the shadow radius. Finally, we demonstrate that, in the eikonal limit, the real part of the scalar-field quasinormal mode frequency can be determined from the shadow radius.
Starting with on-shell amplitudes compatible with the scattering of Kerr black holes, we produce the gravitational waveform and memory effect including spin at their leading post-Minkowskian orders to all orders in the spins of both scattering objects. For the memory effect, we present results at next-to-leading order as well, finding a closed form for all spin orders when the spins are anti-aligned and equal in magnitude. Considering instead generically oriented spins, we produce the next-to-leading-order memory to sixth order in spin. Compton-amplitude contact terms up to sixth order in spin are included throughout our analysis.
We revisit and improve the analytic study of arXiv:1804.03462 of spherically symmetric but dynamical black holes in Einstein's gravity coupled to a real scalar field. We introduce a series expansion in a small parameter $\epsilon$ that implements slow time dependence. At leading order, the generic solution is a quasi-stationary Schwarzschild-de Sitter (SdS) metric, i.e. one where time dependence enters only through the mass and cosmological constant parameters of SdS. The two coupled ODEs describing the leading-order time dependence are solved up to quadrature for an arbitrary scalar potential. Higher-order corrections can be consistently computed, as we show by explicitly solving the Einstein equations at next-to-leading order as well. We comment on how the quasi-stationary expansion introduced here is equivalent to the non-relativistic $1/c$ expansion.
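Concretely, the leading-order statement is that the metric function keeps the SdS form with adiabatically varying parameters, schematically $f(r,t) = 1 - 2M(\epsilon t)/r - \Lambda(\epsilon t)\,r^2/3$, with the two coupled ODEs governing $M(\epsilon t)$ and $\Lambda(\epsilon t)$.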
Using the Schwinger-Keldysh path integral, we draw a connection between localized quantum field theories and more commonly used models of local probes in Relativistic Quantum Information (RQI). By integrating over and then tracing out the inaccessible modes of the localized field being used as a probe, we show that, at leading order in perturbation theory, the dynamics of any finite number of modes of the probe field is exactly that of a finite number of harmonic-oscillator Unruh-DeWitt (UDW) detectors. The equivalence is valid for a rather general class of input states of the probe-target field system, as well as for any arbitrary number of modes included as detectors. The path integral also provides a closed-form expression which gives us a systematic way of obtaining the corrections to the UDW model at higher orders in perturbation theory due to the existence of the additional modes that have been traced out. This approach vindicates and extends a recently proposed bridge between detector-based and field-theory-based measurement frameworks for quantum field theory [arXiv:2308.11698], and also points to potential connections between particle detector models in RQI and other areas of physics where path integral methods are more commonplace -- in particular, the Wilsonian approach to the renormalization group and effective field theories.
The chiral crossover of QCD at finite temperature and vanishing baryon density turns into a second order phase transition if lighter than physical quark masses are considered. If this transition occurs sufficiently close to the physical point, its universal critical behaviour would largely control the physics of the QCD phase transition. We quantify the size of this region in QCD using functional approaches, both Dyson-Schwinger equations and the functional renormalisation group. The latter allows us to study both critical and non-critical effects on an equal footing, facilitating a precise determination of the scaling regime. We find that the physical point is far away from the critical region. Importantly, we show that the physics of the chiral crossover is dominated by soft modes even far beyond the critical region. While scaling functions determine all thermodynamic properties of the system in the critical region, the order parameter potential is the relevant quantity away from it. We compute this potential in QCD using the functional renormalisation group and Dyson-Schwinger equations and provide a simple parametrisation for phenomenological applications.
We derive the finite-temperature quantum-tunneling rate from first principles. The decay rate depends on both real- and imaginary-time; we demonstrate that the relevant instantons should therefore be defined on a Keldysh-Schwinger contour, and how the familiar Euclidean-time result arises from it in the limit of large physical times. We generalize previous results for excited initial states, and identify distinct behavior in the high- and low-temperature limits, incorporating effects from background fields. We construct a consistent perturbative scheme that incorporates large finite-temperature effects.
Parton showers which efficiently incorporate quantum interference effects have been shown to run efficiently on quantum computers. However, so far these quantum parton showers did not include the full kinematical information required to reconstruct an event, which in classical parton showers requires the use of a veto algorithm. In this work, we show that adding one extra assumption about the discretization of the evolution variable allows one to construct a quantum veto algorithm which reproduces the full quantum interference in the event and includes kinematical effects. We finally show that for certain initial states the quantum interference effects generated in this veto algorithm are classically tractable, such that an efficient classical algorithm can be devised.
We derive a general formula for two-loop counterterms in Effective Field Theories (EFTs) using a geometric approach. This formula allows the two-loop results of our previous paper to be applied to a wide range of theories. The two-loop results hold for loop graphs in EFTs where the interaction vertices contain operators of arbitrarily high dimension, but at most two derivatives. We also extend our previous one-loop result to include operators with an arbitrary number of derivatives, as long as there is at most one derivative acting on each field. The final result for the two-loop counterterms is written in terms of geometric quantities such as the Riemann curvature tensor of the scalar manifold and its covariant derivatives. As applications of our results, we give the two-loop counterterms and renormalization group equations for the $O(n)$ EFT to dimension six, the scalar sector of the Standard Model Effective Field Theory (SMEFT) to dimension six, and chiral perturbation theory to order $p^6$.
The software feyntrop for direct numerical evaluation of Feynman integrals is presented. We focus on the underlying combinatorics and polytopal geometries facilitating these methods. In particular, matroids, generalized permutohedra, and normality are discussed in detail.
The analytical exact iteration method (AEIM) has been widely used to solve the N-dimensional radial Schrodinger equation with a medium-modified form of the Cornell potential; here it is generalized to finite magnetic field (eB) using the quasi-particle approach in a hot quantum chromodynamics (QCD) medium. The energy eigenvalues have been calculated in N-dimensional space for any state (n,l). These results are used to study the properties of quarkonium states, i.e., the binding energy and mass spectra, the dissociation temperature, and the thermodynamical properties in N-dimensional space. We determine the binding energy of the ground states of quarkonium as a function of magnetic field and dimensionality number, and likewise the effects of magnetic field and dimensionality number on the ground-state mass spectra. The main result concerns the dissociation temperatures of the ground states of quarkonia as functions of magnetic field and dimensionality number, obtained using the dissociation-energy criterion. Finally, we calculate the thermodynamical properties of the QGP (i.e., pressure, energy density, and speed of sound) at finite eB with an ideal equation of state (EoS).
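As a cross-check of the kind of eigenvalue problem described here, one can diagonalize the reduced radial equation with a Cornell potential on a finite-difference grid. This is a generic numerical sketch, not the AEIM of the paper; the potential parameters are hypothetical and the magnetic-field modification is omitted.

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    # Hypothetical Cornell parameters (GeV units): V(r) = -kappa/r + sigma*r
    kappa, sigma, mu = 0.5, 0.18, 0.75   # coupling, string tension, reduced mass
    l, N = 0, 3                          # s-wave, ordinary three dimensions

    r = np.linspace(1e-4, 15.0, 4000)
    h = r[1] - r[0]
    # Centrifugal term of the N-dimensional reduced radial equation
    L2 = l * (l + N - 2) + (N - 1) * (N - 3) / 4.0
    V = -kappa / r + sigma * r + L2 / (2 * mu * r**2)

    # Discretize -u''/(2 mu) + V u = E u as a symmetric tridiagonal matrix
    diag = 1.0 / (mu * h**2) + V
    off = np.full(len(r) - 1, -1.0 / (2 * mu * h**2))
    E = eigh_tridiagonal(diag, off, eigvals_only=True, select='i',
                         select_range=(0, 2))
    print("lowest levels (GeV):", E)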
We present novel analyses on accessing the 3D gluon content of the proton via spin-dependent TMD gluon densities, calculated through the spectator-model approach. Our formalism embodies a fit-based spectator-mass modulation function, suited to catch longitudinal-momentum effects in a wide kinematic range. Particular attention is paid to the time-reversal even Boer--Mulders and the time-reversal odd Sivers functions, whose accurate knowledge, needed to perform precise 3D analyses of nucleons, motivates synergies between LHC and EIC Communities.
Lattice QCD offers the possibility of computing parton distributions from first principles, although not in the usual $\overline{MS}$ factorization scheme. We study in this paper the evolution of non-singlet parton distribution functions (PDFs) in the short-distance factorization scheme which notably arises in lattice calculations in the pseudo-distribution approach. We provide an assessment of non-perturbative evolution of PDFs from already published lattice matrix elements, and show how this evolution can be used to reduce the fluctuations of the lattice data. We compare our result with expectations obtained from a perturbative matching to $\overline{MS}$. By highlighting the limitations of the current computations, we advocate for a new strategy using lattice calculations in small volume.
The charged Kaon meson ($K^+$) features several hadronic decay modes, but the most relevant contribution to its decay width stems from the leptonic decay $K^+ \rightarrow \mu^+ \nu_\mu $. Given the precision acquired on the rare decay mode $K^+ \rightarrow \mu^+ \nu_\mu + X$, one can use the data to set constraints on sub-GeV hidden sectors featuring light species that could contribute to it. Light gauge bosons that couple to muons could give rise to sizeable contributions. In this work, we use data from the $K^+ \rightarrow \mu^+\nu_{\mu} l^+l^-$ and $K^+ \rightarrow \mu^+ \nu_{\mu} \nu \bar{\nu}$ decays to place limits on light vector bosons present in Two Higgs Doublet Models (2HDM) augmented by an Abelian gauge symmetry, 2HDM-$U(1)_X$. We put our findings into perspective with collider bounds, atomic parity violation, neutrino-electron scattering, and polarized electron scattering probes to show that rare Kaon decays provide competitive bounds in the sub-GeV mass range for different values of $\tan\beta$.
Many different approaches have been made to explain the nature of dark matter (DM), but it remains an unsolved mystery of our universe. In this work we examine a type II two-Higgs-doublet model extended by a complex singlet (2HDMS), where the pseudo-scalar component of the singlet acts as a natural DM candidate. The DM candidate is stabilized by a $Z_2'$ symmetry, which is broken spontaneously by the singlet acquiring a vacuum expectation value (vev). This vev in turn causes the scalar component of the singlet to mix with the scalar components of the two doublets, which results in three scalar Higgs particles. Additionally, we aim to accommodate the excess around 95 GeV observed at CMS and LEP, which can be explained by one of the three scalar Higgs particles. After introducing the model, we apply experimental and theoretical constraints and find a viable benchmark point. We then look into the DM phenomenology as well as the collider phenomenology.
The effective field theory of low-energy excitations (magnons) describing antiferromagnets is mapped into scalar electrodynamics of a charged scalar field interacting with an external electromagnetic potential. In the presence of a constant inhomogeneous external magnetic field, the latter problem is technically reduced to the problem of charged-particle creation from the vacuum by an electric potential step (x-step). The magnetic moment plays here the role of the electric charge, and magnons and antimagnons differ from each other in the sign of the magnetic moment. In the framework of such a consideration, it is important to take into account the vacuum instability (the Schwinger effect) under magnon-antimagnon production on magnetic field inhomogeneities (an analog of pair creation from the vacuum by electric-like fields). We demonstrate how to use the strong-field QED with x-steps developed by the authors (SPG and DMG) to study magnon-antimagnon pair production on magnetic field inhomogeneities. Characteristics of the vacuum instability obtained for some magnetic steps that allow exact solutions of the Klein-Gordon equation are presented. In particular, we consider examples of magnetic steps with very sharp field derivatives that correspond to a regularization of the Klein step. In the case of smooth-gradient steps, we describe a universal behavior of the flux density of created magnon pairs. We also note that since low-energy magnons are bosons with small effective mass, this may offer the first opportunity to observe the Schwinger effect for Bose statistics, in particular the bosonic Klein effect, under laboratory conditions. Moreover, it turns out that for Bose statistics a new mechanism for amplifying pair creation appears, which we call the statistically-assisted Schwinger effect.
Built upon the state-of-the-art A-Multi-Phase-Transport (AMPT) model, we develop a new module of chiral anomaly transport (CAT), which can trace the evolution of the initial topological charge of the gauge field created through sphaleron transitions at finite temperature and external magnetic field in heavy-ion collisions, and from which the eventual experimental signals of the chiral magnetic effect (CME) can be computed. The CAT explicitly shows the generation and evolution of the charge separation; the CME signals obtained through the CAT are quantitatively in agreement with the experimental measurements in Au+Au collisions at $\sqrt{s}=200~{\rm GeV}$, and the centrality dependence of the CME fraction follows that of the fireball temperature.
Masses of the ground, orbitally and radially excited states of the asymmetric fully heavy tetraquarks, composed of charm (c) and bottom (b) quarks and antiquarks, are calculated in the relativistic diquark-antidiquark picture. The relativistic quark model based on the quasipotential approach and quantum chromodynamics is used to construct the quasipotentials of the quark-quark and diquark-antidiquark interactions. These quasipotentials consist of the short-range one-gluon exchange and long-distance linear confinement interactions. Relativistic effects are consistently taken into account. A tetraquark is considered as a bound state of a diquark and an antidiquark, which are treated as spatially extended colored objects and interact as a whole. It is shown that most of the investigated tetraquark states (including all ground states) lie above the fall-apart strong decay thresholds into a meson pair. As a result, they can be observed as wide resonances. Nevertheless, several orbitally excited states lie slightly above or even below these fall-apart thresholds, and thus they could be narrow states.
We measure the complete set of angular coefficients $J_i$ for exclusive $\bar{B} \to D^* \ell \bar{\nu}_\ell$ decays ($\ell = e, \mu$). Our analysis uses the full $711\,\mathrm{fb}^{-1}$ Belle data set with hadronic tag-side reconstruction. The results allow us to extract the form factors describing the $B \to D^*$ transition and the Cabibbo-Kobayashi-Maskawa matrix element $|V_{\rm cb}|$. Using recent lattice QCD calculations for the hadronic form factors, we find $|V_{\rm cb}| = (41.0 \pm 0.7) \times 10^{-3}$ using the BGL parameterization, compatible with determinations from inclusive semileptonic decays. We search for lepton flavor universality violation as a function of the hadronic recoil parameter $w$, and investigate the differences of the electron and muon angular distributions. We find no deviation from Standard Model expectations.
We compute the first moments of the $q^2$ distribution in inclusive semileptonic $B$ decays as functions of the lower cut on $q^2$, confirming a number of results given in the literature and adding the $O(\alpha_s^2\beta_0)$ BLM contributions. We then include the $q^2$-moments recently measured by Belle and Belle II in a global fit to the moments. The new data are compatible with the other measurements and slightly decrease the uncertainty on the nonperturbative parameters and on $|V_{cb}|$. Our updated value is $|V_{cb}|=(41.97\pm 0.48)\times 10^{-3}$.
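Operationally, a moment "as a function of the lower cut" is just the expectation value of powers of $q^2$ restricted to events with $q^2$ above the cut. A minimal sketch on a toy Monte Carlo sample (the toy $q^2$ shape below is invented for illustration, not the measured spectrum):

    import numpy as np

    def q2_moments(q2, cut, n_max=2):
        """<(q^2)^n> for q^2 > cut, n = 1..n_max, from an event sample."""
        sel = q2[q2 > cut]
        return [np.mean(sel**n) for n in range(1, n_max + 1)]

    rng = np.random.default_rng(1)
    toy = rng.triangular(0.0, 4.0, 11.6, size=100_000)  # toy q^2 spectrum, GeV^2
    for cut in (0.0, 3.0, 6.0):
        print(f"cut = {cut} GeV^2 ->", q2_moments(toy, cut))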
The idea of partial compositeness (PC) in Composite Higgs models offers an attractive means to explain the flavour hierarchies observed in nature. In this talk, predictions of a minimal UV realisation of PC, considering each Standard-Model (SM) fermion to mix linearly with a bound state consisting of a new scalar and a new fermion, are presented, taking into account the dynamical emergence of the composites. Employing the non-perturbative functional renormalisation group, the scaling of the relevant correlation functions is examined and the resulting SM-fermion mass spectrum is analysed.
The ALICE collaboration recently reported the mean transverse momentum as a function of charged-particle multiplicity for different pp-collision classes defined based on the "jettiness" of the event. The event "jettiness" is quantified using transverse spherocity, measured at midpseudorapidity ($|\eta|<0.8$) considering charged particles with transverse momentum within $0.15<p_{\rm T}<10$ GeV/$c$. Comparisons to PYTHIA 8 (tune Monash) predictions show a notable disagreement between the event generator and data for jetty events that increases as a function of charged-particle multiplicity. This paper investigates the origin of such a disagreement using the PYTHIA 8 event generator. Since at intermediate and high $p_{\rm T}$ ($2<p_{\rm T}<10$ GeV/$c$) the spectral shape is expected to be modified by color reconnection or jets, their effects on the average $p_{\rm T}$ are studied. The results indicate that the discrepancy originates from the multijet yield overpredicted by PYTHIA 8, which increases with the charged-particle multiplicity. This finding is important to understand the way transverse spherocity and multiplicity bias pp collisions, and how well models like PYTHIA 8 reproduce those biases. The studies are pertinent since transverse spherocity is currently used as an event classifier by experiments at the LHC.
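For reference, transverse spherocity is commonly defined as $S_0 = (\pi^2/4)\,\min_{\hat{n}} \left(\sum_i |\vec{p}_{{\rm T},i} \times \hat{n}| / \sum_i p_{{\rm T},i}\right)^2$, with $\hat{n}$ a unit vector in the transverse plane, so that $S_0 \to 0$ for pencil-like (jetty) events and $S_0 \to 1$ for isotropic ones. A brute-force sketch of this definition on a toy event (illustrative only):

    import numpy as np

    def spherocity(px, py, n_axes=360):
        """S0 = (pi^2/4) * min_n (sum |pT x n| / sum pT)^2, minimised by
        brute force over trial unit vectors in the transverse plane."""
        pt = np.hypot(px, py)
        phis = np.linspace(0, np.pi, n_axes, endpoint=False)
        nx, ny = np.cos(phis), np.sin(phis)
        # |pT x n| = |px*ny - py*nx| for each particle/axis pair
        cross = np.abs(np.outer(px, ny) - np.outer(py, nx))
        ratios = cross.sum(axis=0) / pt.sum()
        return (np.pi**2 / 4.0) * ratios.min()**2

    rng = np.random.default_rng(0)
    # Isotropic toy event -> S0 near 1; a back-to-back event would give S0 near 0
    phi = rng.uniform(0, 2 * np.pi, 50)
    pt = rng.exponential(1.0, 50)
    print(spherocity(pt * np.cos(phi), pt * np.sin(phi)))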
We investigate the $K_1(1270)-K_1(1400)$ mixing caused by flavor $SU(3)$ symmetry breaking. The mixing angle is expressed through a $K_{1A}\to K_{1B}$ matrix element induced by the operators that break flavor $SU(3)$ symmetry. The QCD contribution to this matrix element is assumed to dominate and is calculated using QCD sum rules. A three-point correlation function is defined and handled both at the hadron and quark-gluon levels. The quark-gluon level calculation is based on the operator product expansion up to dimension-5 condensates. A detailed numerical analysis is performed to determine the Borel parameters, and the obtained mixing angle is $\theta_{K_1}=22^{\circ}\pm 7^{\circ}$ or $\theta_{K_1}=68^{\circ}\pm 7^{\circ}$.
The Higgs boson decay to a massive bottom quark pair provides the dominant contribution to the Higgs boson width. We present an exact result for such a decay induced by the bottom quark Yukawa coupling with next-to-next-to-leading order (NNLO) QCD corrections. We have adopted the canonical differential equations in the calculation and obtained the result in terms of multiple polylogarithms. We also compute the contribution from the decay to four bottom quarks, which consists of complete elliptic integrals or their one-fold integrals. The small bottom quark mass limit coincides with the previous calculation using the large momentum expansion. The threshold expansion exhibits power divergent terms in the bottom quark velocity, which has a structure different from that in $e^+e^-\to t\bar{t}$ but can be reproduced by computing the corresponding Coulomb Green function. The NNLO corrections significantly reduce the uncertainties from both the renormalization scale and the renormalization scheme of the bottom quark Yukawa coupling. Our result can be applied to a heavy scalar decay to a top quark pair.
We derive the order $p^8$ Lagrangian of odd intrinsic parity for mesonic chiral perturbation theory, and provide the resulting operator basis in the supplementary material. Neglecting the non-zero singlet trace, we find $999$ operators for a general number of quark flavours $N_f$, $705$ for $N_f=3$ and $92$ for $N_f=2$. Our numbers agree with those obtained through the Hilbert series approach in the literature. Including a singlet trace, as needed for the physical case of $N_f=2$, instead yields $1210$ operators for a general $N_f$, $892$ for $N_f=3$ and $211$ for $N_f=2$.
In this talk we review jet production in a large variety of collision systems using the JETSCAPE event generator and Hybrid Hadronization. Hybrid Hadronization combines quark recombination, applicable when distances between partons in phase space are small, and string fragmentation appropriate for dilute parton systems. It can therefore smoothly describe the transition from very dilute parton systems like $e^++e^-$ to full $A+A$ collisions. We test this picture by using JETSCAPE to generate jets in various systems. Comparison to experimental data in $e^++e^-$ and $p+p$ collisions allows for a precise tuning of vacuum baseline parameters in JETSCAPE and Hybrid Hadronization. Proceeding to systems with jets embedded in a medium, we study in-medium hadronization for jet showers. We quantify the effects of an ambient medium, focusing in particular on the dependence on the collective flow and size of the medium. Our results clarify the effects we expect from in-medium hadronization of jets on observables like fragmentation functions, hadron chemistry and jet shape.
After the possible discovery of new particles, it will be crucial to determine the properties, and in particular the couplings, of the new states. Here, we focus on scalar trilinear couplings, employing as an example the case of the trilinear coupling of scalar top quarks (stops) to the Higgs boson in the Minimal Supersymmetric Standard Model (MSSM). We discuss possible strategies for experimentally determining the stop trilinear coupling parameter, which controls the stop--stop--Higgs interaction, and we demonstrate the impact of different prescriptions for the renormalisation of this parameter. We find that the best prospects for determining the stop trilinear coupling arise from its quantum effects entering the model prediction for the mass of the SM-like Higgs boson in comparison to the measured value, pointing out that the prediction for the Higgs-boson mass has a high sensitivity to the stop trilinear coupling even for heavy masses of the non-standard particles. Regarding the renormalisation of the stop trilinear coupling, we identify a renormalisation scheme that is preferred given the present level of accuracy, and we clarify the origin of potentially large logarithms that cannot be resummed with standard renormalisation group methods.
Based on our previous work, we study the harmonic coefficients of both inclusive and diffractive azimuthal-angle-dependent lepton-jet correlations at the Hadron-Electron Ring Accelerator (HERA) and the future Electron-Ion Collider (EIC). Numerical calculations of the inclusive and diffractive harmonics and of the ratio of harmonics in $e+\text{Au}$~and $e+p$ indicate their strong power to discriminate between saturation and non-saturation models. Additionally, we demonstrate that the $t$-dependent diffractive harmonics can serve as novel observables for the nuclear density profile.
The trilinear Higgs coupling $\lambda_{hhh}$ is a crucial tool to probe the structure of the Higgs potential and to search for possible effects of physics beyond the Standard Model (SM). Focusing on the Two-Higgs-Doublet Model as a concrete example, we identify parameter regions in which $\lambda_{hhh}$ is significantly enhanced with respect to its SM prediction. Taking into account all relevant corrections up to the two-loop level, we show that current experimental bounds on $\lambda_{hhh}$ already rule out significant parts of the otherwise unconstrained parameter space. We illustrate the interpretation of the current results and future measurement prospects on $\lambda_{hhh}$ for a benchmark scenario. Recent results from direct searches for BSM scalars in the $A\to ZH$ channel and their implications will also be discussed in this context.
The observed pattern of fermion masses and mixing is an outstanding puzzle in particle physics, generally known as the flavor problem. Over the years, guided by precision neutrino oscillation data, discrete flavor symmetries have often been used to explain the neutrino mixing parameters, which look very different from the quark sector. In this review, we discuss the application of non-Abelian finite groups to the theory of neutrino masses and mixing in the light of current and future neutrino oscillation data. We start with an overview of the neutrino mixing parameters, comparing different global fit results and limits on normal and inverted neutrino mass ordering schemes. Then, we discuss a general framework for implementing discrete family symmetries to explain neutrino masses and mixing. We discuss CP violation effects, giving an update of CP predictions for trimaximal models with nonzero reactor mixing angle and models with partial $\mu-\tau$ reflection symmetry, and constraining models with neutrino mass sum rules. The connection between texture zeroes and discrete symmetries is also discussed. We summarize viable higher-order groups, which can explain the observed pattern of lepton mixing where the non-zero $\theta_{13}$ plays an important role. We also review the prospects of embedding finite discrete symmetries in the Grand Unified Theories and with extended Higgs fields. Models based on modular symmetry are also briefly discussed. A major part of the review is dedicated to the phenomenology of flavor symmetries and possible signatures in the current and future experiments at the intensity, energy, and cosmic frontiers. In this context, we discuss flavor symmetry implications for neutrinoless double beta decay, collider signals, leptogenesis, dark matter, as well as gravitational waves.
The keV-scale sterile neutrino is a qualified candidate for dark matter in the Dodelson-Widrow (DW) mechanism. However, the mixing angle needed to provide a sufficient amount of dark matter is in contradiction with astrophysical observations. To alleviate this tension, we introduce an effective interaction, i.e. $g_a (\phi/\Lambda)\partial_{\mu}a \overline{\nu_\alpha}\gamma^{\mu} \gamma_5 \nu_\alpha$, among the Standard Model neutrino $\nu_\alpha$, the axion $a$, and a singlet $\phi$. The axial-vector interaction form is determined by the axion shift symmetry, and the singlet $\phi$ with a dynamically varied vacuum expectation value is introduced to reinforce the axial-vector coupling strength and evade the stringent neutrino oscillation constraints. The effective potential generated by the new interaction can cancel the SM counterpart, resulting in an enhanced conversion probability between the SM neutrino and the sterile neutrino. Hence, the production rate of sterile neutrinos can be substantially enlarged with smaller mixing compared to the DW mechanism.
The Collins-Soper (CS) kernel is a nonperturbative function that characterizes the rapidity evolution of transverse-momentum-dependent parton distribution functions (TMDPDFs) and wave functions. In this Letter, we calculate the CS kernel for pion and proton targets and for quasi-TMDPDFs of leading and next-to-leading power. The calculations are carried out on the CLS ensemble H101 with dynamical $N_f=2+1$ clover-improved Wilson fermions. Our analyses demonstrate the consistency of different lattice extractions of the CS kernel for mesons and baryons, as well as for twist-two and twist-three operators, even though lattice artifacts could be significant. This consistency corroborates the universality of the lattice-determined CS kernel and suggests that a high-precision determination of it is in reach.
Decays of unstable heavy particles usually involve the coherent sum of several amplitudes, as in a multiple-slit experiment. Dedicated amplitude analysis techniques have been widely used to resolve these amplitudes for a better understanding of the underlying dynamics. For special cases, where two spin-1/2 particles and two (pseudo-)scalar particles are present in the process, multiple equivalent solutions are found due to intrinsic symmetries in the summed probability density function. In this paper, the problem of multiple solutions is discussed and a scheme to overcome it is proposed by fixing some free parameters. Toy studies are performed to validate the strategy. A new approach to align helicities of initial- and final-state particles in different decay chains is also introduced.
The masses, current couplings and widths of the fully heavy scalar tetraquarks $X_{\mathrm{4Q}}=QQ\overline{Q}\overline{Q}$, $Q=c, b$ are calculated by modeling them as four-quark systems composed of an axial-vector diquark and antidiquark. The masses $m^{(\prime)}$ and couplings $ f^{(\prime)}$ of these tetraquarks are computed in the context of the QCD sum rule method by taking into account a nonperturbative term proportional to the gluon condensate $\langle \alpha _{s}G^{2}/ \pi \rangle$. Results $ m=(6570 \pm 55)~\mathrm{MeV}$ and $m^{\prime}=(18540 \pm 50)~\mathrm{MeV}$ are used to fix kinematically allowed hidden-flavor decay channels of these states. It turns out that the processes $X_{\mathrm{4c}}\rightarrow J/\psi J/\psi $, $X_{\mathrm{4c}}\rightarrow \eta _{c}\eta _{c}$, and $X_{\mathrm{4c }}\rightarrow \eta _{c}\chi _{c1}(1P)$ are possible decay modes of $X_{ \mathrm{4c}}$. The partial widths of these channels are evaluated by means of the couplings $g_{i}, i=1,2,3$ which describe strong interactions of the tetraquark $X_{\mathrm{4c}}$ and mesons at the relevant vertices. The couplings $ g_{i}$ are extracted from the QCD three-point sum rules by extrapolating the corresponding form factors $g_{i}(Q^2) $ to the mass shell of a final meson. The mass of the scalar tetraquark $X_{\mathrm{4b}}$ is below the $\eta_b \eta_b$ and $\Upsilon(1S)\Upsilon(1S)$ thresholds; therefore, it does not fall apart into these bottomonia, but transforms into conventional particles through other mechanisms. Comparing $m=(6570 \pm 55)~\mathrm{MeV}$ and $ \Gamma _{\mathrm{4c}}=(110 \pm 21)~\mathrm{MeV}$ with the parameters of structures observed by the LHCb, ATLAS and CMS collaborations, we interpret $ X_{4c}$ as the resonance $X(6600)$ reported by CMS. Comparisons are made with other theoretical predictions.
The idea that new physics could take the form of feebly interacting particles (FIPs) - particles with a mass below the electroweak scale, but which may have evaded detection due to their tiny couplings or very long lifetime - has gained a lot of traction in the last decade, and numerous experiments have been proposed to search for such particles. It is important, and now very timely, to consistently compare the potential of these experiments for exploring the parameter space of various well-motivated FIPs. The present paper addresses this pressing issue by presenting an open-source tool to estimate the sensitivity of many experiments - located at Fermilab or at CERN's SPS, LHC, and FCC-hh - to various models of FIPs in a unified way: the Mathematica-based code SensCalc.
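A core geometric ingredient in any such sensitivity estimate is the probability that a long-lived particle decays inside the fiducial decay volume; the expected signal then scales as (FIP yield) x (geometric acceptance) x (decay probability) x (branching ratio) x (detection efficiency). Below is a minimal sketch of the decay-probability factor with invented numbers; it bears no relation to SensCalc's actual Mathematica interface.

    import numpy as np

    def decay_probability(p, m, ctau, z1, z2):
        """Probability that a FIP of momentum p and mass m, with proper
        decay length c*tau, decays between z1 and z2 (all in consistent
        units, e.g. GeV and meters)."""
        gamma_beta = p / m                 # relativistic boost factor
        lam = gamma_beta * ctau            # lab-frame decay length
        return np.exp(-z1 / lam) - np.exp(-z2 / lam)

    # Toy numbers: a 1 GeV FIP with 10 GeV momentum and c*tau = 50 m,
    # with a decay volume spanning 60-110 m downstream of the target.
    print(decay_probability(p=10.0, m=1.0, ctau=50.0, z1=60.0, z2=110.0))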
We study the behavior of hadronic matter in the presence of an external magnetic field within the van der Waals hadron resonance gas model, considering both attractive and repulsive interactions among the hadrons. Various thermodynamic quantities like pressure ($P$), energy density ($\varepsilon$), magnetization ($\mathcal{M}$), entropy density ($s$), squared speed of sound ($c_{\rm s}^{2}$), and specific-heat capacity at constant volume ($c_{v}$) are calculated as functions of temperature ($T$) and static finite magnetic field ($eB$). We also consider the effect of baryochemical potential ($\mu_{B}$) on the above-mentioned thermodynamic observables in the presence of a magnetic field. Further, we estimate the magnetic susceptibility ($\chi_{\rm M}^{2}$), relative permeability ($\mu_{\rm r}$), and electrical susceptibility ($\chi_{\rm Q}^{2}$), which can help us to understand the system better. Through this model, we quantify a liquid-gas phase transition in the $T$-$eB$-$\mu_B$ phase space.
It was recently estimated that the strong isospin-symmetry breaking (ISB) corrections to the Fermi matrix element in free neutron decay could be of the order $10^{-4}$, one order of magnitude larger than the na\"{\i}ve estimate based on the Behrends-Sirlin-Ademollo-Gatto theorem. To investigate this claim, we derive a general expression of the leading ISB correction to hadronic Fermi matrix elements, which takes the form of a four-point correlation function in lattice gauge theory and is straightforward to compute from first principles. Our formalism paves the way for the first determination of such correction in the neutron sector with fully-controlled theory uncertainties.
We propose a new dark matter contender within the context of the so-called ``dark dimension'', an innovative 5-dimensional construct that has a compact space with characteristic length-scale in the micron range. The new dark matter candidate is the radion, a bulk scalar field whose quintessence-like potential drives an inflationary phase described by a 5-dimensional de Sitter (or approximate) solution of Einstein equations. We show that the radion could be ultralight and thereby serve as a fuzzy dark matter candidate. We advocate a simple cosmological production mechanism bringing into play unstable Kaluza-Klein graviton towers which are fueled by the decay of the inflaton. We demonstrate that the fuzzy radion can accommodate the signal recently observed in pulsar timing arrays.
We review the status of the Standard Model theory of neutron beta decay. Particular emphasis is put on the recent developments in the electroweak radiative corrections. Given that some existing approaches give slightly different results, we thoroughly review the origin of discrepancies, and provide our recommended value for the radiative correction to the neutron and nuclear decay rates. The use of dispersion relation, lattice Quantum Chromodynamics and effective field theory framework allows for high-precision theory calculations at the level of $10^{-4}$, turning neutron beta decay into a powerful tool to search for new physics, complementary to high-energy collider experiments. We offer an outlook to the future improvements.
Heavy quarks are excellent probes to understand the hot and dense medium formed in ultra-relativistic collisions. In a hadronic medium, studying the transport properties, e.g. the drag ($\gamma$), momentum diffusion ($B_{0}$), and spatial diffusion ($D_{s}$) coefficients of open charmed hadrons can provide useful information about the medium. Moreover, the fluctuations of charmed hadrons can help us to locate the onset of their deconfinement. In this work, we incorporate attractive and repulsive interactions in the well-established van der Waals hadron resonance gas model (VDWHRG) and study the diffusion and fluctuations of charmed hadrons. This study helps us understand the importance of interactions in the system, which affect both the diffusion and fluctuations of charmed hadrons.
We perform a theoretical analysis of the semileptonic decays $\eta^{(\prime)} \to \pi^0 \ell^+ \ell^-$ and $\eta' \to \eta \ell^+ \ell^-$, where $\ell = e, \mu$, via a charge-conjugation-conserving two-photon mechanism. The underlying form factors are modeled using vector-meson dominance, phenomenological input, and $\mathrm{U}(3)$ flavor symmetry. We consider both a monopole and a dipole model, the latter tailored such that the expected high-energy behavior is ensured. Furthermore, we benchmark the effect of $S$-wave rescattering contributions to the decays. We infer significant effects of the form factors neglected in the literature so far, still finding branching ratios of the various decays well below the current experimental upper limits.
We obtain two-dimensional relativistic densities and currents of energy and momentum in a proton at rest. These densities are obtained at surfaces of fixed light front time, which physically corresponds to using an alternative synchronization convention. Mathematically, this is done using tilted light front coordinates, which consist of light front time and ordinary spatial coordinates. In this coordinate system, all sixteen components of the energy-momentum tensor obtain clear physical interpretations, and the nine Galilean components reproduce results from standard light front coordinates. We find angular modulations in several densities that are absent in the corresponding instant form results, which are explained as optical effects arising from using fixed light front time when motion is present within the target. Additionally, transversely-polarized spin-half targets exhibit an energy dipole moment -- which evaluates to $-1/4$ for all targets if the Belinfante EMT is used, but which is target dependent and vanishes for pointlike fermions if the asymmetric EMT is instead used.
In this letter, we recall the main facts concerning the g-factor of positronium and show why its value is important. Taking it properly into account may provide a solution to the reported discrepancy between QED theory and experiment concerning the hyperfine splitting of the positronium ground state. We also give the only experimental value that existing experiments can provide, $g_{\mathrm{Ps}}=2.0023\pm 0.0012$ at $3\sigma$.
We find a general formula for the two-loop renormalization counterterms of a scalar quantum field theory with interactions containing up to two derivatives, extending 't~Hooft's one-loop result. The method can also be used for theories with higher derivative interactions, as long as the terms in the Lagrangian have at most one derivative acting on each field. We show that diagrams with factorizable topologies do not contribute to the renormalization group equations. The results in this paper will be combined with the geometric method in a subsequent paper to obtain the counterterms and renormalization group equations for the scalar sector of effective field theories (EFT) to two-loop order.
Reconstructing unstable heavy particles requires sophisticated techniques to sift through the large number of possible permutations for assignment of detector objects to the underlying partons. An approach based on a generalized attention mechanism, symmetry preserving attention networks (Spa-Net), has been previously applied to top quark pair decays at the Large Hadron Collider which produce only hadronic jets. Here we extend the Spa-Net architecture to consider multiple input object types, such as leptons, as well as global event features, such as the missing transverse momentum. In addition, we provide regression and classification outputs to supplement the parton assignment. We explore the performance of the extended capability of Spa-Net in the context of semi-leptonic decays of top quark pairs as well as top quark pairs produced in association with a Higgs boson. We find significant improvements in the power of three representative studies: a search for ttH, a measurement of the top quark mass, and a search for a heavy Z' decaying to top quark pairs. We present ablation studies to provide insight on what the network has learned in each case.
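To make the mechanism concrete, here is a minimal single-head self-attention sketch in which heterogeneous objects (jets, a lepton, and missing transverse momentum embedded as a global token) share one sequence; this is the basic idea behind feeding multiple input types to an attention-based assignment network. It is a toy illustration only, not the actual SPA-NET architecture or code.

    import numpy as np

    def attention(q, k, v):
        """Single-head scaled dot-product attention with a stable softmax."""
        scores = q @ k.T / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ v

    rng = np.random.default_rng(0)
    d = 16
    jets = rng.normal(size=(6, d))     # six jet embeddings
    lepton = rng.normal(size=(1, d))   # one lepton embedding
    met = rng.normal(size=(1, d))      # missing pT embedded as a global token
    tokens = np.vstack([jets, lepton, met])   # heterogeneous objects, one sequence
    out = attention(tokens, tokens, tokens)   # contextualised embeddings
    print(out.shape)   # (8, 16): inputs to assignment/regression/classification heads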
We consider the impact of combining precision spectroscopic measurements made in atomic hydrogen with similar measurements made in atomic deuterium on the search for physics beyond the Standard Model. Specifically we consider the wide class of models that can be described by an effective Yukawa-type interaction between the nucleus and the electron. We find that it is possible to set bounds on new light-mass bosons that are orders of magnitude more sensitive than those set using a single isotope only, provided the interaction couples differently to the deuteron and proton. Further enhancements of these bounds by an order of magnitude or more would be made possible by extending the current measurements of the isotope shift of the 1s$_{1/2}$-2s$_{1/2}$ transition frequency to that of a transition between the 2s$_{1/2}$ state and a Rydberg s-state.
Event-by-event particle ratio fluctuations in simulated data sets from three different models, UrQMD, AMPT, and Pythia, are studied using the fluctuation variable $\nu_{dyn}$. The simulated data sets are produced in $pp$ collisions at four different LHC energies, $\sqrt{s} = 2.76, 5.02, 7$ and $13$ TeV. The variation of the fluctuation parameter $\nu_{dyn}$ for the accepted meson and baryon pairs, i.e., $[\pi, K]$, $[\pi, p]$ and $[p, K]$, with increasing mean charged-particle multiplicity ($\langle N_{ch} \rangle$) is investigated. It is observed that the correlation within the particle pair $[\pi, K]$ is stronger than that of the other two particle pairs, $[\pi, p]$ and $[p, K]$. However, an energy-wise inspection of the fluctuation variable $\nu_{dyn}$ for $0-10\%$ centrality data shows an increase in the correlation between the particles in each pair for all three models considered.
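For reference, the variable is conventionally defined as $\nu_{dyn}[A,B] = \langle N_A(N_A-1)\rangle/\langle N_A\rangle^2 + \langle N_B(N_B-1)\rangle/\langle N_B\rangle^2 - 2\langle N_A N_B\rangle/(\langle N_A\rangle\langle N_B\rangle)$, which vanishes for independent Poissonian production. A quick numerical sketch on toy multiplicities (not generator output):

    import numpy as np

    def nu_dyn(nA, nB):
        """nu_dyn[A,B] from event-by-event multiplicity arrays nA, nB."""
        mA, mB = nA.mean(), nB.mean()
        fAA = np.mean(nA * (nA - 1)) / mA**2
        fBB = np.mean(nB * (nB - 1)) / mB**2
        fAB = np.mean(nA * nB) / (mA * mB)
        return fAA + fBB - 2 * fAB

    rng = np.random.default_rng(2)
    # Independent Poisson production -> nu_dyn consistent with zero
    n_pi = rng.poisson(40.0, 200_000)
    n_K = rng.poisson(5.0, 200_000)
    print(nu_dyn(n_pi, n_K))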
Aims. In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximated locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods. The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results. We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features than similar codes such as SourceExtractor, and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
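The two-step mean variance estimation idea is simple to sketch: first fit a mean network with a squared-error loss, then freeze it and fit a variance head with a Gaussian negative log-likelihood. The toy below (PyTorch, invented 1D heteroscedastic data) illustrates only the training schedule, not ASID-FE itself:

    import torch
    import torch.nn as nn

    x = torch.linspace(-2, 2, 1024).unsqueeze(1)
    y = x.pow(3) + (0.1 + 0.3 * x.abs()) * torch.randn_like(x)  # noisy toy data

    mean_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    var_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

    opt = torch.optim.Adam(mean_net.parameters(), lr=1e-2)
    for _ in range(500):                      # step 1: fit the mean only (MSE)
        opt.zero_grad()
        loss = (mean_net(x) - y).pow(2).mean()
        loss.backward()
        opt.step()

    opt = torch.optim.Adam(var_net.parameters(), lr=1e-2)
    for _ in range(500):                      # step 2: fit the variance, mean frozen
        opt.zero_grad()
        var = var_net(x).exp()                # log-variance head keeps var positive
        nll = 0.5 * (var.log() + (y - mean_net(x).detach()).pow(2) / var).mean()
        nll.backward()
        opt.step()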
Black holes evolve by evaporation of their event horizon. While this process is believed to be unitary, there is no consensus on the recovery of information in black hole entropy. A missing link is a unit of information in black hole evaporation. Distinct from Hawking radiation, we identify evaporation in entangled pairs by the $\mathbb{P}^2$ topology of the event horizon, consistent with the Bekenstein-Hawking entropy, in units of the Boltzmann constant $k_B$, at uniformly spaced horizon areas. It derives from continuation of $\mathbb{P}^2$ in Rindler spacetime prior to gravitational collapse, subject to a tight correlation of the fundamental frequency of Quasi-Normal-Mode (QNM) ringing in gravitational and electromagnetic radiation. Information extraction from entangled pairs by detecting one over the surface spanned by three faces of a large cube carries a unit of information of $2\log3$ upon including measurement of spin.
We calculate the typical bipartite entanglement entropy $\langle S_A\rangle_N$ in systems containing indistinguishable particles of any kind as a function of the total particle number $N$, the volume $V$, and the subsystem fraction $f=V_A/V$, where $V_A$ is the volume of the subsystem. We expand our result as a power series $\langle S_A\rangle_N=a f V+b\sqrt{V}+c+o(1)$, and find that $c$ is universal (i.e., independent of the system type), while $a$ and $b$ can be obtained from a generating function characterizing the local Hilbert space dimension. We illustrate the generality of our findings by studying a wide range of different systems, e.g., bosons, fermions, spins, and mixtures thereof. We provide evidence that our analytical results describe the entanglement entropy of highly excited eigenstates of quantum-chaotic spin and boson systems, which is distinct from that of integrable counterparts.
We discuss two-dimensional conformal field theories (CFTs) which are invariant under gauging a non-invertible global symmetry. At every point on the orbifold branch of $c=1$ CFTs, it is known that the theory is self-dual under gauging a $\mathbb{Z}_2 \times \mathbb{Z}_2$ symmetry, and has $\mathsf{Rep}(H_8)$ and $\mathsf{Rep}(D_8)$ fusion category symmetries as a result. We find that gauging the entire $\mathsf{Rep}(H_8)$ fusion category symmetry maps the orbifold theory at radius $R$ to that at radius $2/R$. At $R=\sqrt{2}$, which corresponds to two decoupled Ising CFTs (Ising$^2$ in short), the theory is self-dual under gauging the $\mathsf{Rep}(H_8)$ symmetry. This implies the existence of a new topological defect line in the Ising$^2$ CFT obtained from half-space gauging of the $\mathsf{Rep}(H_8)$ symmetry, which commutes with the $c=1$ Virasoro algebra but does not preserve the fully extended chiral algebra. We bootstrap its action on the $c=1$ Virasoro primary operators, and find that there are no relevant or marginal operators preserving it. Mathematically, the new topological line combines with the $\mathsf{Rep}(H_8)$ symmetry to form a bigger fusion category which is a $\mathbb{Z}_2$-extension of $\mathsf{Rep}(H_8)$. We solve the pentagon equations including the additional topological line and find 8 solutions, where two of them are realized in the Ising$^2$ CFT. Finally, we show that the torus partition functions of the Monster$^2$ CFT and Ising$\times$Monster CFT are also invariant under gauging the $\mathsf{Rep}(H_8)$ symmetry.
We develop a theory of flows in the space of Riemannian metrics induced by neural network gradient descent. This is motivated in part by recent advances in approximating Calabi-Yau metrics with neural networks and is enabled by recent advances in understanding flows in the space of neural networks. We derive the corresponding metric flow equations, which are governed by a metric neural tangent kernel, a complicated, non-local object that evolves in time. However, many architectures admit an infinite-width limit in which the kernel becomes fixed and the dynamics simplify. Additional assumptions can induce locality in the flow, which allows for the realization of Perelman's formulation of Ricci flow that was used to resolve the 3d Poincar\'e conjecture. We apply these ideas to numerical Calabi-Yau metrics, including a discussion on the importance of feature learning.
Following on our previous work arXiv:2204.07593 and arXiv:2306.01043 studying the orbits of quantum states under Clifford circuits via `reachability graphs', we introduce `contracted graphs' whose vertices represent classes of quantum states with the same entropy vector. These contracted graphs represent the double cosets of the Clifford group, where the left cosets are built from the stabilizer subgroup of the starting state and the right cosets are built from the entropy-preserving operators. We study contracted graphs for stabilizer states, as well as W states and Dicke states, discussing how the diameter of a state's contracted graph constrains the `entropic diversity' of its $2$-qubit Clifford orbit. We derive an upper bound on the number of entropy vectors that can be generated using any $n$-qubit Clifford circuit, for any quantum state. We speculate on the holographic implications for the relative proximity of gravitational duals of states within the same Clifford orbit. Although we concentrate on how entropy evolves under the Clifford group, our double-coset formalism, and thus the contracted graph picture, is extendable to generic gate sets and generic state properties.
We present a numerical quantum Monte Carlo (QMC) method for simulating the 3D phase transition on the recently proposed fuzzy sphere [Phys. Rev. X 13, 021009 (2023)]. By introducing an additional $SU(2)$ layer degree of freedom, we reformulate the model into a form suitable for sign-problem-free QMC simulation. From finite-size scaling, we show that this QMC-friendly model undergoes a quantum phase transition belonging to the 3D Ising universality class, and at the critical point we compute the scaling dimensions from the state-operator correspondence, which largely agree with the predictions of the conformal field theory. These results pave the way toward constructing sign-problem-free models for QMC simulations on the fuzzy sphere, which could advance future studies of more sophisticated criticalities.
We construct the embedding of the $\lambda$-model on $SL(2, \mathbb{R}) \times SU(2) \times SU(2)$ in type-II supergravity. In the absence of deformation, the ten-dimensional background corresponds to the near-horizon limit of the NS1-NS5-NS5 brane intersection. We show that when the deformation is turned on, supersymmetry is broken by half, and the solution preserves 8 supercharges. The Penrose limits along two null geodesics of the deformed geometry are also considered. It turns out that none of the associated plane-wave backgrounds exhibits supernumerary supercharges.
Nicolai maps offer an alternative description of supersymmetric theories via nonlinear and nonlocal transformations characterized by the so-called `free-action' and `determinant-matching' conditions. The latter expresses the equality of the Jacobian determinant of the transformation with the one obtained by integrating out the fermions, which so far have been considered only to quadratic terms. We argue that such a restriction is not substantial, as Nicolai maps can be constructed for arbitrary nonlinear sigma models, which feature four-fermion interactions. The fermionic effective one-loop action then gets generalized to higher loops and the perturbative tree expansion of such Nicolai maps receives quantum corrections in the form of fermion loop decorations. The `free-action condition' continues to hold for the classical map, but the `determinant-matching condition' is extended to an infinite hierarchy in fermion loop order. After general considerations for sigma models in four dimensions, we specialize to the case of $\mathbb{C}\mathrm{P}^N$ symmetric spaces and construct the associated Nicolai map. These sigma models admit a formulation with only quadratic fermions via an auxiliary vector field, which however does not simplify the construction of the map.
Recent works have proposed the use of the formalism of Positive Operator Valued Measures to describe time measurements in quantum mechanics. This work aims to expand on the work done by other authors, by generalizing the previously proposed construction method of such measures to include causal Poincar\'e transformations, in order to construct measures which are covariant with respect to such transformations.
We investigate the existence of $\frac18$-BPS black hole microstates in Type IIB string theory on $\mathrm{AdS}_5 \times \mathrm{S}^5$. As will be explained, these states are in one-to-one correspondence with the Schur operators comprising the chiral algebra of $\mathcal{N}=4$ super-Yang-Mills, and a conjecture of Beem et al. implies that the Schur sector only contains graviton operators and hence $\frac18$-BPS black holes do not exist. We scrutinize this conjecture from multiple angles. Concerning the macroscopic counting, we rigorously prove that the flavored Schur index cannot exhibit black hole entropy growth, and provide numerical evidence that the flavored MacDonald index also does not exhibit such growth. Next, we go beyond counting to examine the algebraic structure, beginning by presenting evidence for the well-definedness of the super-$\mathcal{W}$ algebra of Beem et al., then using modular differential equations to argue for an upper bound on the lightest non-graviton operator if existent, and finally performing a systematic construction of cohomologies to recover only gravitons. Along the way, we clarify key aspects of the 4d/2d correspondence using the formalism of the holomorphic topological twist.
We give a mathematical perspective on string compactifications. Submitted as a chapter in the Encyclopedia of Mathematical Physics.
We consider the equal-mass quantum Toda lattice with balanced loss-gain for two and three particles. The two-particle Toda lattice is integrable, and two integrals of motion which are in involution have been found. The bound-state energy and the corresponding eigenfunctions have been obtained numerically for a few low-lying states. The three-particle quantum Toda lattice with balanced loss-gain and velocity-mediated coupling admits mixed phases of integrability and chaos depending on the value of the loss-gain parameter. We have obtained analytic expressions for two integrals of motion which are in involution. Although an analytic expression for the third integral has not been found, the numerical investigation suggests integrability below a critical value of the loss-gain strength and chaos above this critical value. The level spacing distribution changes from the Wigner-Dyson to the Poisson distribution as the loss-gain parameter passes through this critical value and approaches zero. An identical behaviour is seen in terms of the gap-ratio distribution of the energy levels. The existence of mixed phases of quantum integrability and chaos in the specified ranges of the loss-gain parameter has also been confirmed independently via the study of level repulsion and complexity in higher order excited states.
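The gap-ratio diagnostic mentioned here is easy to reproduce: with ordered levels $E_n$ and spacings $s_n = E_{n+1} - E_n$, one computes $r_n = \min(s_n, s_{n+1})/\max(s_n, s_{n+1})$, whose mean is $\approx 0.386$ for Poisson (integrable) statistics and $\approx 0.531$ for the GOE (chaotic) case; unlike the level spacing distribution, it needs no unfolding. A minimal sketch on synthetic spectra:

    import numpy as np

    def mean_gap_ratio(levels):
        """<r> with r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1})."""
        s = np.diff(np.sort(levels))
        r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
        return r.mean()

    rng = np.random.default_rng(3)
    poisson_levels = np.cumsum(rng.exponential(1.0, 5000))   # integrable-like
    H = rng.normal(size=(2000, 2000))
    H = (H + H.T) / 2                                        # GOE matrix
    goe_levels = np.linalg.eigvalsh(H)[800:1200]             # bulk of spectrum
    print(mean_gap_ratio(poisson_levels), mean_gap_ratio(goe_levels))
    # expected: ~0.386 (Poisson) and ~0.531 (GOE)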
We study statistical properties of the matrix elements entering the eigenstate thermalization hypothesis by considering observables written in the energy eigenbasis of generic quantum systems and truncated to small microcanonical windows. We put forward the picture that below a certain energy scale the collective statistical properties of matrix elements exhibit an emergent unitary symmetry. In particular, below this scale the spectrum of the microcanonically truncated operator exhibits universal behavior, for which we introduce readily testable criteria. We support this picture by numerical simulations and demonstrate the existence of an emergent unitary-symmetry scale for all considered operators in chaotic many-body quantum systems. We discuss the operator and system-size dependence of this energy scale and put our findings into the context of previous works exploring the emergence of random-matrix behavior in narrow energy windows.
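The truncation procedure itself is straightforward to mimic numerically: write an observable in the eigenbasis of a (toy) chaotic Hamiltonian, keep only the rows and columns whose energies fall in a narrow window, and study the spectrum of the resulting block. A self-contained random-matrix sketch, not the many-body models of the paper:

    import numpy as np

    rng = np.random.default_rng(4)
    D = 2000
    H = rng.normal(size=(D, D))
    H = (H + H.T) / np.sqrt(2 * D)          # toy "chaotic" Hamiltonian (GOE-like)
    E, U = np.linalg.eigh(H)

    A = rng.normal(size=(D, D))
    A = (A + A.T) / 2                       # generic Hermitian observable
    A_eig = U.T @ A @ U                     # observable in the energy eigenbasis

    # Truncate to a narrow microcanonical window around E = 0
    window = np.abs(E) < 0.05
    A_trunc = A_eig[np.ix_(window, window)]
    spec = np.linalg.eigvalsh(A_trunc)
    print(window.sum(), "states in window; truncated spectrum width:",
          spec.max() - spec.min())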
The variational method employing the amplitude and width as collective coordinates of the Klein-Gordon oscillon leads to a dynamical system with unstable periodic orbits that blow up when perturbed. We propose a multiscale variational approach free from the blow-up singularities. An essential feature of the proposed trial function is the inclusion of the third collective variable: a correction for the nonuniform phase growth. In addition to determining the parameters of the oscillon, our approach detects the onset of its instability.
We investigate higher-order corrections to correlators in a general CFT with the double trace $T\bar{T}$ deformation. Traditional perturbation theory proves inadequate for addressing this issue, due to the intricate stress tensor flow induced by the deformation. To tackle this challenge, we introduce a novel technique termed the conservation equation method. This method leverages the trace relation and conservation property of the stress tensor to establish relationships between higher and lower-order corrections and subsequently determine the correlators by enforcing symmetry properties. As an illustration, we compute both first and higher-order corrections, demonstrating the impact of stress tensor deformation on correlators in a general deformed CFT. Our results align with existing calculations in the literature.
We give an overview of moduli stabilization in compactifications of string theory. We summarize current methods for construction and analysis of vacua with stabilized moduli, and we describe applications to cosmology and particle physics. This is a contribution to the Handbook of Quantum Gravity.
The extended algebra of the free electromagnetic fields, including infrared singular fields, and the almost radial gauge, both introduced earlier, are postulated for the construction of quantum electrodynamics in a Hilbert space (no indefinite metric). Both the Dirac and electromagnetic fields are constructed up to the first order (based on the incoming fields) as operators in the Hilbert space, and shown to have physically well-interpretable asymptotic behavior in the far past and at spacelike separations. The Dirac field tends in the far past to the free incoming field, carrying its own Coulomb field, but with no 'soft photon dressing'. The spacelike asymptotic limit of the electromagnetic field yields a conserved operator field, which is a sum of contributions of the incoming Coulomb field and of the low energy limit of the incoming free electromagnetic field. This should agree with the operator field similarly constructed with the use of outgoing fields, which then relates these past and future characteristics. Higher orders are expected not to change this picture, but their construction needs a treatment of the UV question, which has not been undertaken and remains a problem for further investigation.
We extend a recent de Sitter holographic proposal and entanglement entropy prescription to generic closed FRW cosmologies in arbitrary dimensions, and propose that for large classes of bouncing and Big Bang/Big Crunch cosmologies, the full spacetime can be encoded holographically on two holographic screens, associated to two antipodal observers. In the expanding phase, the two screens lie at the apparent horizons. In the contracting phase, there is an infinite number of possible trajectories of the holographic screens, which can be grouped in equivalence classes. In each class the effective holographic theory can be derived from a pair of ``parent'' screens on the apparent horizons. A number of cases including moduli-dominated cosmologies escape our discussion, and it is expected that two antipodal observers and their associated screens do not suffice to reconstruct these cosmologies. The leading contributions to the entanglement entropy between the screens arise from a minimal extremal trapped or anti-trapped surface lying in the region between them. This picture entails a time-dependent realization of the ER=EPR conjecture, where an effective geometrical bridge connecting the screens via the minimal extremal surface emerges from entanglement. For the Big Crunch contracting cases, the screens disentangle and the geometrical bridge closes off when the minimal extremal trapped sphere hits the Big Crunch singularity at a finite time before the collapse of the Universe. Semiclassical, thermal corrections are incorporated in the cases of radiation-dominated cosmologies.
The worldsheet axion plays a crucial role in the dynamics of the Yang-Mills confining flux tubes. According to the lattice measurements, its mass is of order the string tension and its coupling is close to a certain critical value. Using the S-matrix Bootstrap, we construct non-perturbative $2 \to 2$ branon scattering amplitudes which also feature a weakly coupled axion resonance with these properties. We study the extremal bootstrap amplitudes in detail and show that the axion plays a dominant role in their UV completion in two distinct regimes, in one of which it cannot be considered a parametrically light particle. We conjecture that the actual flux tube amplitudes exhibit a similar behavior.
We show that the recently proposed equations for the holomorphic sector of higher-spin theory in $d=4$, also known as chiral, can be naturally extended to describe interacting symmetric higher-spin gauge fields in any dimension. This is achieved with the aid of Vasiliev's off-shell higher-spin algebra. The latter contains an ideal, associated with traces, that has to be factored out in order to set the equations on shell. To identify this ideal in interactions, we observe the global $sp(2)$ symmetry that underlies it to all orders. The field-dependent $sp(2)$ generators are found in closed form and appear to be remarkably simple. The traceful higher-spin vertices are analyzed against locality and shown to be all-order space-time spin-local in the gauge sector, as well as spin-local in the Weyl sector. The vertices are found manifestly in the form of curious integrals over hypersimplices. We also extend to any $d$ the higher-spin shift symmetry observed earlier in $d=4$, which is known to be tightly related to spin-locality.
In this note, we use a new bottom-up method based on soft theorems to construct the expansion of single-trace Yang-Mills-scalar amplitudes recursively. The resulting expansion manifests the gauge invariance for any polarization carried by external gluons, as well as the permutation symmetry among external gluons. Our result is equivalent to that found by Clifford Cheung and James Mangan via the so-called covariant color-kinematics duality approach.
In this paper we calculate the matrix of modular transformations of the one-point toric conformal blocks in the Neveu-Schwarz sector of $N=1$ super Liouville field theory. For this purpose we use an explicit expression for this matrix as an integral of a product of certain elements of the fusion matrix. This integral is computed using a chain of integral identities for supersymmetric hyperbolic gamma functions, derived by degeneration of the integrals of parafermionic elliptic gamma functions.
Twist operators implement symmetries in bounded regions of space. Standard twists are a special class of twists constructed using modular tools. The twists corresponding to translations have interesting special properties: they can continuously move an operator from a region to a disjoint one without ever passing through the gap separating the two. In addition, their generators satisfy the spectrum condition. We compute these twists explicitly for the two-dimensional chiral fermion field. The twist generator gives rise to a new type of energy inequality, in which the smeared energy density is bounded below by an operator.
In this letter we present a scan for new vacua within consistent truncations of eleven/ten-dimensional supergravity down to five dimensions that preserve $N = 2$ supersymmetry, following their complete classification in arXiv:2112.03931. We first make explicit the link between the equations for exceptional Sasaki-Einstein backgrounds in arXiv:1602.02158 and the standard BPS equations for $5d$ $N = 2$ supergravity of arXiv:1601.00482. This derivation allows us to expedite a scan for vacua preserving $N = 2$ supersymmetry within the framework used for the classification presented in arXiv:2112.03931.
A search for W' bosons decaying to a top and a bottom quark in final states including an electron or a muon is performed with the CMS detector at the LHC. The analyzed data correspond to an integrated luminosity of 138 fb$^{-1}$ of proton-proton collisions at a center-of-mass energy of 13 TeV. Good agreement with the standard model expectation is observed and no evidence for the existence of the W' boson is found over the mass range examined. The largest observed deviation from the standard model expectation is found for a W' boson mass ($m_\mathrm{W'}$) hypothesis of 3.8 TeV with a relative decay width of 1%, with a local (global) significance of 2.6 (2.0) standard deviations. Upper limits on the production cross sections of W' bosons decaying to a top and a bottom quark are set. Left- and right-handed W' bosons with $m_\mathrm{W'}$ below 3.9 and 4.3 TeV, respectively, are excluded at the 95% confidence level, under the assumption that the new particle has a narrow decay width. Limits are also set for relative decay widths up to 30%. These are the most stringent limits to date on this W' boson decay channel.
Development related to PandABox-based fly scans is an important part of the active work on Mamba, the software framework for beamline experiments at the High Energy Photon Source (HEPS); presented in this paper is the progress of our development, and some outlook for advanced fly scans based on knowledge learned during the process. By treating fly scans as a collaboration between a few loosely coupled subsystems - motors / mechanics, detectors / data processing, sequencer devices like PandABox - systematic analyses of issues in fly scans are conducted. Interesting products of these analyses include a general-purpose software-based fly-scan mechanism, a general way to design undulator-monochromator fly scans, a sketch of how to practically implement online tuning of fly-scan behaviours based on processing of the data acquired, and many more. Based on the results above, an architectural discussion on >= 10 kHz fly scans is given.
We report the first search for the Sagittarius tidal stream of axion dark matter around 4.55 $\mu$eV using CAPP-12TB haloscope data acquired in March of 2022. Our result excluded the Sagittarius tidal stream of Dine-Fischler-Srednicki-Zhitnitskii and Kim-Shifman-Vainshtein-Zakharov axion dark matter densities of $\rho_a\gtrsim0.184$ and $\gtrsim0.025$ GeV/cm$^3$, respectively, over a mass range from 4.51 to 4.59 $\mu$eV at a 90\% confidence level.
The NA48/2 experiment at CERN reports the first observation of the $K^{\pm} \rightarrow \pi^{0} \pi^{0} \mu^{\pm} \nu$ decay based on a sample of 2437 candidates with 15% background contamination collected in 2003--2004. The decay branching ratio in the kinematic region of the squared dilepton mass above $0.03$~GeV$^2/c^4$ is measured to be $(0.65 \pm 0.03) \times 10^{-6}$. The extrapolation to the full kinematic space, using a specific model, is found to be $(3.45 \pm 0.16) \times 10^{-6}$, in agreement with chiral perturbation theory predictions.
One of the prime goals of the COMPASS experiment at CERN is the study of the light meson spectrum, with a particular emphasis on the search for exotic states. The focus of this paper is on signals of the lightest hybrid candidate $\pi_1(1600)$ with spin-exotic quantum numbers $J^{PC}=1^{-+}$ in several decay channels such as $\pi^-\pi^+\pi^-$, $\eta^{(\prime)}\pi^-$, $\omega\pi^-\pi^0$, $\pi^-\pi^+\pi^-\eta$, and $K_S K_S \pi$. In addition, we highlight new results for the $K^-\pi^+\pi^-$ final state, which indicate a supernumerary state with respect to the constituent quark model with $J^{P}=0^-$.
Future e$^+$e$^-$ colliders, thanks to their clean environment and triggerless operation, offer a unique opportunity to search for long-lived particles (LLPs). Considered in this contribution are promising prospects for LLP searches offered by the International Large Detector (ILD), with a Time Projection Chamber (TPC) as the core of its tracking systems, providing almost continuous tracking. The ILD has been developed as a detector concept for the ILC; however, studies aimed at understanding the ILD performance at other collider concepts are ongoing. Based on the full detector simulation, we study the possibility of reconstructing decays of both light and heavy LLPs at the ILD. For the heavy, $\mathcal{O}$(100 GeV) LLPs, we consider a challenging scenario with small mass splitting between the LLP and the dark matter candidate, resulting in only a very soft displaced track pair in the final state, not pointing to the interaction point. We account for the soft beam-induced background (from measurable e$^+$e$^-$ pairs and hadron photo-production processes), expected to give the dominant background contribution due to a very high cross section, and show possible means of its reduction. As the opposite extreme scenario we consider the production of a light, $\mathcal{O}$(1 GeV) pseudo-scalar LLP, which decays to two highly boosted and almost collinear displaced tracks. We also present the corresponding results for an alternative ILD design, where the TPC is replaced by a silicon tracker modified from the Compact Linear Collider detector (CLICdet) design.
The heavy-ion collisions (A--A) at Large Hadron Collider (LHC) energies have confirmed the production of the quark-gluon plasma (QGP), a new state of nuclear matter in which quarks and gluons are deconfined. The light-flavour hadrons ($\pi$, K, p), which constitute the bulk of the produced particles, carry useful information about the collision geometry and about the collective behaviour and thermal properties of the QGP. The measurements of light-flavour hadron production in small collision systems (pp and p--A) at LHC energies have shown the onset of collective phenomena (e.g. radial flow and long-range correlations) that resemble what is typically observed in nucleus-nucleus collisions and attributed to the formation of a deconfined system of quarks and gluons. The new results on identified light-flavour particle production measured in high-multiplicity triggered pp collisions at $\sqrt{s}=13$~TeV from ALICE Run 2 will be presented in search of collective behaviour in small collision systems. The transverse momentum ($p_{\rm T}$) spectra of the identified particles show a hardening at intermediate $p_{\rm T}$. The mean transverse momenta ($\langle p_{\rm T} \rangle$) are shown as a function of the charged-particle multiplicity. The ratios of $p_{\rm T}$-spectra and the ratios of the integrated yields of kaons and protons to pions are also presented and compared with published results.
Using neutrinos produced at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL), the COHERENT collaboration has studied the Pb($\nu_e$,X$n$) process with a lead neutrino-induced-neutron (NIN) detector. Data from this detector are fit jointly with previously collected COHERENT data on this process. A combined analysis of the two datasets yields a cross section that is $0.29^{+0.17}_{-0.16}$ times that predicted by the MARLEY event generator using experimentally-measured Gamow-Teller strength distributions, consistent with no NIN events at 1.8$\sigma$. This is the first inelastic neutrino-nucleus process COHERENT has studied, among several planned exploiting the high flux of low-energy neutrinos produced at the SNS.
Reported here are transverse single-spin asymmetries ($A_{N}$) in the production of charged hadrons as a function of transverse momentum ($p_T$) and Feynman-$x$ ($x_F$) in polarized $p^{\uparrow}$+$p$, $p^{\uparrow}$+Al, and $p^{\uparrow}$+Au collisions at $\sqrt{s_{_{NN}}}=200$ GeV. The measurements have been performed at forward and backward rapidity ($1.4<|\eta|<2.4$) over the range of $1.5<p_{T}<7.0~{\rm GeV}/c$ and $0.04<|x_{F}|<0.2$. A nonzero asymmetry is observed for positively charged hadrons at forward rapidity ($x_F>0$) in $p^{\uparrow}$+$p$ collisions, whereas the $p^{\uparrow}$+Al and $p^{\uparrow}$+Au results show smaller asymmetries. This finding provides new opportunities to investigate the origin of transverse single-spin asymmetries and a tool to study nuclear effects in $p$+$A$ collisions.
We search for energetic electron recoil signals induced by boosted dark matter (BDM) from the galactic center using the COSINE-100 array of NaI(Tl) crystal detectors at the Yangyang Underground Laboratory. The signal would be an excess of events with energies above 4 MeV over the well-understood background. Because no excess of events is observed in a 97.7 kg$\cdot$years exposure, we set limits on BDM interactions under a variety of hypotheses. Notably, we explore the dark photon parameter space, obtaining limits competitive with direct dark photon search experiments, particularly for dark photon masses below 4\,MeV when the invisible decay mode is considered. Furthermore, by comparing our results with a previous BDM search conducted by the Super-Kamiokande experiment, we find that the COSINE-100 detector has advantages in searching for low-mass dark matter. This analysis demonstrates the potential of the COSINE-100 detector to search for MeV electron recoil signals produced by dark sector particle interactions.
We report the results of a search for inelastic scattering of weakly interacting massive particles (WIMPs) off $^{127}$I nuclei using NaI(Tl) crystals with a data exposure of 97.7 kg$\cdot$years from the COSINE-100 experiment. The signature of inelastic WIMP-$^{127}$I scattering is a nuclear recoil accompanied by a 57.6 keV $\gamma$-ray from the prompt deexcitation, producing a more energetic signal than the typical WIMP nuclear recoil. We found no evidence for this inelastic scattering signature and set a 90\% confidence level upper limit on the WIMP-proton spin-dependent, inelastic scattering cross section of $1.2 \times 10^{-37} {\rm cm^{2}}$ at a WIMP mass of 500 ${\rm GeV/c^{2}}$.
Construction of the new all-silicon Inner Tracker (ITk), developed by the ATLAS collaboration for the High Luminosity LHC, started in 2020 and is expected to continue till 2028. The ITk detector will include 18,000 highly segmented and radiation hard n+-in-p silicon strip sensors (ATLAS18), which are being manufactured by Hamamatsu Photonics. Mechanical and electrical characteristics of produced sensors are measured upon their delivery at several institutes participating in a complex Quality Control (QC) program. The QC tests performed on each individual sensor check the overall integrity and quality of the sensor. During the QC testing of production ATLAS18 strip sensors, an increased number of sensors that failed the electrical tests was observed. In particular, IV measurements indicated an early breakdown, while large areas containing several tens or hundreds of neighbouring strips with low interstrip isolation were identified by the Full strip tests, and leakage current instabilities were measured in a long-term leakage current stability setup. Moreover, a high surface electrostatic charge reaching a level of several hundreds of volts per inch was measured on a large number of sensors and on the plastic sheets, which mechanically protect these sensors in their paper envelopes. Accumulated data indicates a clear correlation between observed electrical failures and the sensor charge-up. To mitigate the above-described issues, the QC testing sites significantly modified the sensor handling procedures and introduced sensor recovery techniques based on irradiation of the sensor surface with UV light or application of intensive flows of ionized gas. In this presentation, we will describe the setups implemented by the QC testing sites to treat silicon strip sensors affected by static charge and evaluate the effectiveness of these setups in terms of improvement of the sensor performance.
Using a sample of $(10087\pm44)\times 10^6$ $J/\psi$ events, about fifty times larger than the sample previously analyzed, a further investigation of the $J/\psi \rightarrow \gamma 3(\pi^+\pi^-)$ decay is performed. A significant distortion at 1.84 GeV/$c^2$ in the line shape of the $3(\pi^+\pi^-)$ invariant mass spectrum is observed for the first time, which is analogous to the behavior of the $X(1835)$ and could be resolved by two overlapping resonant structures, $X(1840)$ and $X(1880)$. The new state $X(1880)$ is observed with a statistical significance of $14.7\sigma$. The mass and width of the $X(1880)$ are determined to be $1882.1\pm1.7\pm0.7$ MeV/$c^2$ and $30.7\pm5.5 \pm2.4$ MeV, respectively, which indicates the existence of a $p\bar{p}$ bound state.
Although water is almost transparent to visible light, we demonstrate that the air-water interface interacts strongly with visible light via what we hypothesize to be the photomolecular effect. In this effect, transverse-magnetic polarized photons cleave off water clusters from the air-water interface. We use over 10 different experiments to demonstrate the existence of this effect and its dependence on the wavelength, incident angle, and polarization of visible light. We further demonstrate that visible light heats up thin fogs, suggesting that this process can impact weather, climate, and the Earth's water cycle. Our study suggests that the photomolecular effect should occur widely in nature, from clouds to fogs, ocean to soil surfaces, and plant transpiration, and can also lead to new applications in energy and clean water.
One of the most striking many-body phenomena in nature is the sudden change of macroscopic properties as the temperature or energy reaches a critical value. Such equilibrium transitions have been predicted and observed in two and three spatial dimensions, but have long been thought not to exist in one-dimensional (1D) systems. Fifty years ago, Dyson and Thouless pointed out that a phase transition in 1D can occur in the presence of long-range interactions, but an experimental realization has so far not been achieved due to the requirement to both prepare equilibrium states and realize sufficiently long-range interactions. Here we report on the first experimental demonstration of a finite-energy phase transition in 1D. We use the simple observation that finite-energy states can be prepared by time-evolving product initial states and letting them thermalize under the dynamics of a many-body Hamiltonian. By preparing initial states with different energies in a 1D trapped-ion quantum simulator, we study the finite-energy phase diagram of a long-range interacting quantum system. We observe a ferromagnetic equilibrium phase transition as well as a crossover from a low-energy polarized paramagnet to a high-energy unpolarized paramagnet in a system of up to $23$ spins, in excellent agreement with numerical simulations. Our work demonstrates the ability of quantum simulators to realize and study previously inaccessible phases at finite energy density.
Generating entanglement between distant quantum systems is at the core of quantum networking. In recent years, numerous theoretical protocols for remote entanglement generation have been proposed, of which many have been experimentally realized. Here, we provide a modular theoretical framework to elucidate the general mechanisms of photon-mediated entanglement generation between single spins in atomic or solid-state systems. Our framework categorizes existing protocols at various levels of abstraction and allows for combining the elements of different schemes in new ways. These abstraction layers make it possible to readily compare protocols for different quantum hardware. To enable the practical evaluation of protocols tailored to specific experimental parameters, we have devised numerical simulations based on the framework with our codes available online.
While quantum state tomography is notoriously hard, most states hold little interest for practically minded tomographers. Given that states and unitaries appearing in Nature are of bounded gate complexity, it is natural to ask whether efficient learning becomes possible. In this work, we prove that to learn a state generated by a quantum circuit with $G$ two-qubit gates to a small trace distance, a sample complexity scaling linearly in $G$ is necessary and sufficient. We also prove that the optimal query complexity to learn a unitary generated by $G$ gates to a small average-case error scales linearly in $G$. While sample-efficient learning can be achieved, we show that under reasonable cryptographic conjectures, the computational complexity of learning states and unitaries of gate complexity $G$ must scale exponentially in $G$. We illustrate how these results establish fundamental limitations on the expressivity of quantum machine learning models and provide new perspectives on no-free-lunch theorems in unitary learning. Together, our results answer how the complexity of learning quantum states and unitaries relates to the complexity of creating these states and unitaries.
The Jordan-Wigner transformation is frequently utilised to rewrite quantum spin chains in terms of fermionic operators. When the resulting Hamiltonian is bilinear in fermions, i.e. the fermions are free, the exact spectrum follows from the eigenvalues of a matrix typically growing only linearly with the size of the system. However, several Hamiltonians that do not admit a Jordan-Wigner transformation to fermion bilinears still have the same type of free-fermion spectra. The spectra of such ``free fermions in disguise" models can be found exactly by an intricate but explicit construction of the raising and lowering operators. We generalise the methods further to find a family of such spin chains. We compute the exact spectrum, and generalise an elegant graph-theory construction to our model. We also explain how this family admits an $N$=2 lattice supersymmetry.
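To make the free-fermion statement above concrete, here is a minimal sketch (Python/NumPy; it uses the standard transverse-field Ising chain, where the Jordan-Wigner transformation does yield fermion bilinears, rather than the "free fermions in disguise" models of the paper). It verifies that the full $2^N$-dimensional spectrum is generated by $N$ single-particle energies obtained from an $N \times N$ eigenproblem, i.e. a matrix growing only linearly with system size.

import numpy as np
from itertools import combinations

def dense_tfim(N, J, h):
    # Full 2^N x 2^N Hamiltonian H = -J sum X_i X_{i+1} - h sum Z_i (open chain)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])
    def op(single, site):
        out = np.ones((1, 1))
        for k in range(N):
            out = np.kron(out, single if k == site else np.eye(2))
        return out
    H = sum(-h * op(Z, i) for i in range(N))
    H = H + sum(-J * op(X, i) @ op(X, i + 1) for i in range(N - 1))
    return H

def single_particle_energies(N, J, h):
    # Jordan-Wigner gives H = sum_ij [c+_i A_ij c_j + (c+_i B_ij c+_j + h.c.)/2]
    # with A_ii = 2h, A_{i,i+1} = -J, B_{i,i+1} = -B_{i+1,i} = -J; the energies
    # are the square roots of the eigenvalues of the N x N matrix (A-B)(A+B).
    A = 2 * h * np.eye(N)
    B = np.zeros((N, N))
    for i in range(N - 1):
        A[i, i + 1] = A[i + 1, i] = -J
        B[i, i + 1], B[i + 1, i] = -J, J
    return np.sqrt(np.clip(np.linalg.eigvalsh((A - B) @ (A + B)), 0, None))

N, J, h = 6, 1.0, 0.7
eps = single_particle_energies(N, J, h)
# Many-body levels are E = sum_k (n_k - 1/2) eps_k over all occupations n_k
levels = sorted(sum(eps[list(occ)]) - 0.5 * eps.sum()
                for r in range(N + 1) for occ in combinations(range(N), r))
print(np.allclose(levels, np.linalg.eigvalsh(dense_tfim(N, J, h))))  # True

The models of the abstract have spectra of exactly this additive form even though no analogous Jordan-Wigner rewriting to bilinears exists; there the raising and lowering operators must be built by the explicit construction the authors generalise.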
Simulating quantum spin systems at finite temperatures is an open challenge in many-body physics. This work studies the temperature-dependent spin dynamics of a pivotal compound, FeI$_2$, to determine if universal quantum effects can be accounted for by a phenomenological renormalization of the dynamical spin structure factor $S(\mathbf{q}, \omega)$ measured by inelastic neutron scattering. Renormalization schemes based on the quantum-to-classical correspondence principle are commonly applied at low temperatures to the harmonic oscillators describing normal modes. However, it is not clear how to extend this renormalization to arbitrarily high temperatures. Here we introduce a temperature-dependent normalization of the classical moments, whose magnitude is determined by imposing the quantum sum rule, i.e. $\int d\omega d\mathbf{q} S(\mathbf{q}, \omega) = N_S S (S+1)$ for $N_S$ dipolar magnetic moments. We show that this simple renormalization scheme significantly improves the agreement between the calculated and measured $S(\mathbf{q}, \omega)$ for FeI$_{2}$ at all temperatures. Due to the coupled dynamics of dipolar and quadrupolar moments in that material, this renormalization procedure is extended to classical theories based on SU(3) coherent states, and by extension, to any SU(N) coherent state representation of local multipolar moments.
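As a minimal illustration of the renormalization step described above (our own sketch in Python/NumPy; the grid and the synthetic "classical" $S(\mathbf{q},\omega)$ are placeholders, not data from the paper), one can rescale a classically computed structure factor so that its total weight satisfies the quoted quantum sum rule:

import numpy as np

def impose_quantum_sum_rule(S_qw, q, w, n_sites, spin):
    # Rescale S(q, w) so that  integral dq dw S(q, w) = N_S * S * (S + 1)
    weight = np.trapz(np.trapz(S_qw, w, axis=1), q)
    return S_qw * (n_sites * spin * (spin + 1.0) / weight)

q = np.linspace(0, np.pi, 64)
w = np.linspace(0, 10.0, 256)
Q, W = np.meshgrid(q, w, indexing="ij")
S_cl = np.exp(-((W - 2.0 - np.sin(Q)) ** 2))     # stand-in classical result
S_qm = impose_quantum_sum_rule(S_cl, q, w, n_sites=100, spin=1.0)
print(np.trapz(np.trapz(S_qm, w, axis=1), q))    # -> 200.0 = N_S S(S+1)

The temperature dependence enters because the rescaling factor is recomputed for each temperature's classical $S(\mathbf{q},\omega)$.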
Enhancing the ability to resolve axial details is crucial in three-dimensional optical imaging. We provide experimental evidence showcasing the ultimate precision achievable in axial localization using vortex beams. For Laguerre-Gauss (LG) beams, this remarkable limit can be attained with just a single intensity scan. This proof-of-principle demonstrates that microscopy techniques based on LG vortex beams can potentially benefit from the introduced quantum-inspired superresolution protocol.
In this paper, we propose a novel bipartite entanglement purification protocol built upon hashing and upon the guessing random additive noise decoding (GRAND) approach recently devised for classical error correction codes. Our protocol offers substantial advantages over existing hashing protocols, requiring fewer qubits for purification, achieving higher fidelities, and delivering better yields with reduced computational costs. We provide numerical and semi-analytical results to corroborate our findings and a detailed comparison with the hashing protocol of Bennett et al. Although that pioneering work established performance bounds, it did not offer an explicit construction for implementation. The present work fills that gap, offering both an explicit and a more efficient purification method. We demonstrate that our protocol is capable of purifying states with noise on the order of 10% per Bell pair even with a small ensemble of 16 pairs. We also explore a measurement-based implementation of the protocol to address practical setups with noise. This work opens the path to practical and efficient entanglement purification using hashing-based methods with feasible computational costs. Compared to the original hashing protocol, the proposed method can achieve a desired fidelity with up to one hundred times fewer initial resources. The proposed method therefore seems well suited for future quantum networks with a limited number of resources, and it entails a relatively low computational overhead.
Expectation values of observables are routinely estimated using so-called classical shadows -- the outcomes of randomized basis measurements on a repeatedly prepared quantum state. In order to trust the accuracy of shadow estimation in practice, it is crucial to understand the behavior of the estimators under realistic noise. In this work, we prove that any shadow estimation protocol involving Clifford unitaries is stable under gate-dependent noise for observables with bounded stabilizer norm -- a quantity originally introduced in the context of simulating Clifford circuits. For these observables, we also show that the protocol's sample complexity is essentially identical to the noiseless case. In contrast, we demonstrate that estimation of `magic' observables can suffer from a bias that scales exponentially in the system size. We further find that so-called robust shadows, which aim at mitigating noise, can introduce a large bias in the presence of gate-dependent noise compared to unmitigated classical shadows. On a technical level, we identify average noise channels that affect shadow estimators and allow for a more fine-grained control of noise-induced biases.
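For readers unfamiliar with the protocol being analyzed, the following noiseless toy sketch (Python/NumPy, our own construction) implements shadow estimation with randomized single-qubit Pauli-basis measurements, a common variant; the paper's results concern Clifford unitaries and noisy gates, which this sketch does not model:

import numpy as np
from itertools import product

rng = np.random.default_rng(7)
PAULI = {"X": np.array([[0, 1], [1, 0]], complex),
         "Y": np.array([[0, -1j], [1j, 0]], complex),
         "Z": np.array([[1, 0], [0, -1]], complex)}

def snapshot(rho, n):
    # Measure each qubit in a random Pauli basis; sample the joint outcome
    bases = rng.choice(list("XYZ"), size=n)
    outcomes, probs = [], []
    for s in product([1, -1], repeat=n):
        proj = np.ones((1, 1), complex)
        for q in range(n):
            proj = np.kron(proj, (np.eye(2) + s[q] * PAULI[bases[q]]) / 2)
        outcomes.append(s)
        probs.append(np.real(np.trace(rho @ proj)))
    idx = rng.choice(len(outcomes), p=np.array(probs) / np.sum(probs))
    return bases, outcomes[idx]

def shadow_estimate(rho, pauli_string, n, shots=20000):
    # Unbiased estimator: factor 3 per non-identity site when bases match, else 0
    total = 0.0
    for _ in range(shots):
        bases, s = snapshot(rho, n)
        val = 1.0
        for q, p in enumerate(pauli_string):
            if p != "I":
                val *= 3.0 * s[q] if bases[q] == p else 0.0
        total += val
    return total / shots

bell = np.zeros(4, complex); bell[0] = bell[3] = 2 ** -0.5
rho = np.outer(bell, bell.conj())
print(shadow_estimate(rho, "XX", n=2))   # ~ +1 within sampling error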
The vibrational relaxation of NO molecules scattering from an Au(111) surface has served as the focus of efforts to understand nonadiabatic energy transfer at metal-molecule interfaces. Experimental measurements and previous theoretical efforts suggest that multi-quantum NO vibrational energy relaxation occurs via electron-hole pair excitations in the metal. Here, using a linearized semiclassical approach, we accurately predict the vibrational relaxation of NO from the $\nu_i=3$ state for different incident translational energies. We also accurately capture the central role of transient electron transfer from the metal to the molecule in mediating the vibrational relaxation process, but fall short of quantitatively predicting the full extent of multi-quantum relaxation for high incident vibrational excitations ($\nu_i = 16$).
Pure shape dynamics (PSD) is a novel implementation of the relational framework originally proposed by Julian Barbour and Bruno Bertotti. PSD represents a Leibnizian/Machian approach to physics in that it completely describes the dynamical evolution of a physical system without resorting to any structure external to the system itself. The chapter discusses how PSD effectively describes a de Broglie-Bohm N-body system and the conceptual benefits of such a relational description. The analysis will highlight the new directions in the quest for an understanding of the nature of the wave function that are opened up by a modern relationalist elaboration on de Broglie's and Bohm's original insights.
Atomic defects in solid-state materials are promising candidates for quantum interconnect and networking applications. Recently, a series of atomic defects have been identified in the silicon platform, where scalable device integration can be enabled by mature silicon photonics and electronics technologies. In particular, T centers hold great promise due to their telecom band optical transitions and the doublet ground state electronic spin manifold with long coherence times. However, an open challenge for advancing the T center platform is to enhance its weak and slow zero phonon line emission. In this work, we demonstrate the cavity-enhanced fluorescence emission from a single T center. This is realized by integrating single T centers with a low-loss, small mode-volume silicon photonic crystal cavity, which results in an enhancement of the fluorescence decay rate by a factor of $F$ = 6.89. Efficient photon extraction enables the system to achieve an average photon outcoupling rate of 73.3 kHz at the zero phonon line. The dynamics of the coupled system is well modeled by solving the Lindblad master equation. These results represent a significant step towards building efficient T center spin-photon interfaces for quantum information processing and networking applications.
We theoretically investigate the merging behaviour of two identical supersolids formed by dipolar Bose-Einstein condensates confined within a double-well potential. By adiabatically tuning the barrier height and the spacing between the two wells for specific trap aspect ratios, the two supersolids move toward each other, leading to the emergence of a variety of ground state phases, including a supersolid state, a macrodroplet state, a ring state, and a labyrinth state. We construct a phase diagram that characterizes the various states seen during the merging transition. Further, we calculate the force required to pull the two portions of the gas apart, finding that the merged supersolids act like a deformable plastic material. Our work paves the way for future studies of layer structure in dipolar supersolids and the interaction between them in experiments.
Modern 4-wave mixing spectroscopies are expensive to obtain experimentally and computationally. In certain cases, the unfavorable scaling of quantum dynamics problems can be improved using a generalized quantum master equation (GQME) approach. However, the inclusion of multiple (light-matter) interactions complicates the equation of motion and leads to seemingly unavoidable cubic scaling in time. In this paper, we present a formulation that greatly simplifies and reduces the computational cost of previous work that extended the GQME framework to treat arbitrary numbers of quantum measurements. Specifically, we remove the time derivatives of quantum correlation functions from the modified Mori-Nakajima-Zwanzig framework by switching to a discrete-convolution implementation inspired by the transfer-tensor approach. We then demonstrate the method's capabilities by simulating 2D electronic spectra for the excitation-energy-transfer dimer model. In our method, the resolution of the data can be arbitrarily coarsened, especially along the $t_2$ axis, which mirrors how the data are obtained experimentally. Even in a modest case, this demands $\mathcal{O}(10^3)$ fewer data points. We are further able to decompose the spectra into 1-, 2-, and 3-time correlations, showing how and when the system enters a Markovian regime where further measurements are unnecessary to predict future spectra and the scaling becomes quadratic. This offers the ability to generate long-time spectra using only short-time data, enabling access to timescales previously beyond the reach of standard methodologies.
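The discrete-convolution idea credited above to the transfer-tensor approach can be illustrated in a few lines (our own sketch in Python/NumPy; the toy one-step map is a placeholder, and the paper's multi-time generalization is not reproduced here). Dynamical maps E_1..E_K learned at short times determine transfer tensors T_m, which then propagate the dynamics indefinitely by discrete convolution:

import numpy as np

def transfer_tensors(maps):
    # Solve E_n = sum_{m=1}^{n} T_m E_{n-m} (with E_0 = identity) for T_1..T_K
    E = [np.eye(maps[0].shape[0])] + list(maps)
    T = []
    for n in range(1, len(E)):
        Tn = E[n].copy()
        for m in range(1, n):
            Tn -= T[m - 1] @ E[n - m]
        T.append(Tn)
    return T

def extend(maps, T, n_extra):
    # Propagate past the learned window: E_n = sum_{m=1}^{K} T_m E_{n-m}
    E = [np.eye(maps[0].shape[0])] + list(maps)
    for _ in range(n_extra):
        E.append(sum(T[m] @ E[-1 - m] for m in range(len(T))))
    return E[1:]

M = np.diag([1.0, 0.8])                       # toy Markovian one-step map
maps = [np.linalg.matrix_power(M, n) for n in (1, 2, 3)]
T = transfer_tensors(maps)
print(np.allclose(T[1], 0), np.allclose(T[2], 0))  # memoryless: only T_1 survives
E_long = extend(maps, T, n_extra=5)
print(np.allclose(E_long[-1], np.linalg.matrix_power(M, 8)))  # True

The Markovian regime mentioned in the abstract is precisely the situation where the higher transfer tensors become negligible and the convolution collapses to repeated application of T_1.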
A commercial quantum key distribution (QKD) system needs to be formally certified to enable its wide deployment. The certification should include the system's robustness against known implementation loopholes and attacks that exploit them. Here we ready a fiber-optic QKD system for this procedure. The system has a prepare-and-measure scheme with decoy-state BB84 protocol, polarisation encoding, qubit source rate of 312.5 MHz, and is manufactured by QRate in Russia. We detail its hardware and post-processing. We analyse the hardware for any possible implementation loopholes and discuss countermeasures. We then amend the system design to address the highest-risk loopholes identified. We also work out technical requirements on the certification lab and outline its possible structure.
Kerr parametric oscillators (KPOs) can stabilize the superpositions of coherent states, which can be utilized as qubits, and are promising candidates for realizing hardware-efficient quantum computers. Although elementary gates for universal quantum computation with KPO qubits have been proposed, these gates are usually based on adiabatic operations and thus need long gate times, which result in errors caused by photon loss in KPOs realized by, e.g., superconducting circuits. In this work, we accelerate the elementary gates by experimentally feasible control methods, which are based on numerical optimization of pulse shapes for shortcuts to adiabaticity. By numerical simulations, we show that the proposed methods can achieve speedups compared to adiabatic ones by up to six times with high gate fidelities of 99.9%. These methods are thus expected to be useful for quantum computers with KPOs.
Gibbs states (i.e., thermal states) can be used for several applications such as quantum simulation, quantum machine learning, quantum optimization, and the study of open quantum systems. Moreover, semi-definite programming, combinatorial optimization problems, and training quantum Boltzmann machines can all be addressed by sampling from well-prepared Gibbs states. However, preparing and sampling from Gibbs states on a quantum computer are notoriously difficult tasks: they can require a large overhead in resources and/or calibration even in the simplest cases, and implementations may be limited to a specific set of systems. We propose a method based on sampling from a quasi-distribution consisting of tensor products of mixed states on local clusters, i.e., expanding the full Gibbs state into a sum of products of local "Gibbs-cumulant" type states that are easier to implement and sample from on quantum hardware. We begin by presenting results for 4-spin linear chains with XY spin interactions, for which we obtain the $ZZ$ dynamical spin-spin correlation functions. We also present the results of measuring the specific heat of the 8-spin chain Gibbs state $\rho_8$.
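For orientation, the reported quantity can be computed by brute force for the quoted 4-spin XY chain (our own exact-diagonalization baseline in Python/NumPy, with arbitrary coupling and temperature; this is the benchmark such a sampling method would be compared against, not the proposed cluster expansion itself):

import numpy as np

N, J, beta = 4, 1.0, 1.0
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]], complex)
Z = np.array([[1, 0], [0, -1]], complex)

def site_op(op, i):
    out = np.ones((1, 1), complex)
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

# XY chain: H = J sum_i (X_i X_{i+1} + Y_i Y_{i+1}), open boundaries
H = sum(J * (site_op(X, i) @ site_op(X, i + 1) + site_op(Y, i) @ site_op(Y, i + 1))
        for i in range(N - 1))

w, V = np.linalg.eigh(H)
rho = (V * np.exp(-beta * w)) @ V.conj().T    # unnormalized Gibbs state
rho /= np.trace(rho).real

def zz_corr(t, i=0, j=1):
    # C(t) = Tr[rho Z_i(t) Z_j] with Z_i(t) = e^{iHt} Z_i e^{-iHt}
    U = (V * np.exp(-1j * w * t)) @ V.conj().T
    return np.trace(rho @ U.conj().T @ site_op(Z, i) @ U @ site_op(Z, j))

for t in np.linspace(0.0, 2.0, 5):
    print(f"t = {t:.1f}  C(t) = {zz_corr(t):.4f}")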
Quantum image processing is a growing field attracting attention from both the quantum computing and image processing communities. We propose a novel method combining a graph-theoretic approach to optimal surface segmentation with hybrid quantum-classical optimization of the problem-directed graph. The surface segmentation is modeled classically as a graph partitioning problem in which a smoothness constraint is imposed to control surface variation for realistic segmentation. Specifically, segmentation refers to a source set identified by a minimum s-t cut that divides graph nodes into the source (s) and sink (t) sets. The resulting surface consists of graph nodes located on the boundary between the source and the sink. Characteristics of the problem-specific graph, including its directed edges, connectivity, and edge capacities, are embedded in a quadratic objective function whose minimum value corresponds to the ground state energy of an equivalent Ising Hamiltonian. This work explores the use of quantum processors for image segmentation problems, which have important applications in medical image analysis. Here, we present a theoretical basis for the quantum implementation of LOGISMOS and the results of a simulation study on simple images. The Quantum Approximate Optimization Algorithm (QAOA) was used to conduct two simulation studies whose objective was to determine the ground state energies and to identify bitstring solutions that encode the optimal segmentation of the objective functions. The objective function encodes tasks associated with surface segmentation in 2-D and 3-D images while incorporating a smoothness constraint. We demonstrate that the proposed approach can solve the geometry-constrained surface segmentation problem optimally, with the capability of locating multiple minimum points corresponding to the globally minimal solution.
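To fix ideas, here is a self-contained depth-1 QAOA statevector sketch (Python/NumPy, our own toy: the four-node weighted graph and Ising cost stand in for the LOGISMOS-derived objective, whose construction is not reproduced here). It builds the diagonal Ising cost, applies the cost and mixer layers, and grid-searches the angles for the minimum energy:

import numpy as np
from itertools import product

n = 4
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 2): 0.5}   # toy weights

# Diagonal Ising cost C(z) = sum_{(i,j)} w_ij z_i z_j with z_i = +/-1
zvals = np.array(list(product([1, -1], repeat=n)))
cost = sum(wgt * zvals[:, i] * zvals[:, j] for (i, j), wgt in edges.items())

def apply_rx(psi, beta, q):
    # Mixer e^{-i beta X} on qubit q of an n-qubit statevector
    U = np.array([[np.cos(beta), -1j * np.sin(beta)],
                  [-1j * np.sin(beta), np.cos(beta)]])
    psi = np.moveaxis(psi.reshape([2] * n), q, 0).reshape(2, -1)
    return np.moveaxis((U @ psi).reshape([2] * n), 0, q).reshape(-1)

def qaoa_energy(gamma, beta):
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+...+>
    psi = np.exp(-1j * gamma * cost) * psi                # cost layer
    for q in range(n):
        psi = apply_rx(psi, beta, q)                      # mixer layer
    return np.real(np.vdot(psi, cost * psi))

grid = np.linspace(0, np.pi, 40)
g, b = min(((g, b) for g in grid for b in grid), key=lambda p: qaoa_energy(*p))
print("best <C> =", qaoa_energy(g, b), " exact minimum =", cost.min())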
Polaritonic lattice configurations in dimension $D=2$ are used as simulators of topological phases, based on symmetry class A Hamiltonians. Numerical and topological studies are performed in order to characterise the bulk topology of insulating phases, which is predicted to be connected to non-trivial edge mode states on the boundary. By using spectrally flattened Hamiltonians on specific lattice geometries with time-reversal symmetry breaking, e.g. the Kagome lattice, we obtain maps from the Brillouin zone into Grassmannian spaces, which are further investigated by the topological method of space fibrations. Numerical evidence reveals a connection between the sum of valence band Chern numbers and the index of the projection operator onto the valence band states. Along these lines, we discover an index formula which resembles other index theorems and the classical result of Atiyah-Singer, but without any Dirac operator and from a different perspective. Through a combination of different tools, in particular homotopy and homology-cohomology duality, we provide a comprehensive mathematical framework which fully addresses the source and structure of topological phases in coupled polaritonic array systems. Based on these results, it becomes possible to infer further designs and models of two-dimensional single-sheet Chern insulators, implemented as polariton simulators.
This article proposes a new method to increase the efficiency of stimulated Raman adiabatic passage (STIRAP) in superconducting circuits using a shortcut-to-adiabaticity (STA) method. The STA speeds up the adiabatic process before decoherence has a significant effect, thus leading to increased efficiency. This method achieves fast, high-fidelity coherent population transfer, known as super-adiabatic STIRAP (saSTIRAP), in a dressed-state-engineered $\Lambda$ system with polariton states in circuit QED.
We demonstrate that it is possible to construct operators that stabilize the constraint-satisfying subspaces of computational problems in their Ising representations. We provide an explicit recipe to construct unitaries and associated measurements for some such constraints. The stabilizer measurements allow the detection of constraint violations and provide a route to recovery back into the constrained subspace. We call this technique ``subspace correction''. As an example, we explicitly investigate the stabilizers using the simplest local constraint subspace: Independent Set. We find an algorithm that is guaranteed to produce a perfect uniform or weighted distribution over all constraint-satisfying states when paired with a stopping condition: a quantum analogue of partial rejection sampling. The stopping condition can be modified for sub-graph approximations. We show that it can prepare exact Gibbs distributions on $d$-regular graphs below a critical hardness $\lambda_d^*$ in sub-linear time. Finally, we look at a potential use of subspace correction for fault-tolerant depth-reduction. In particular, we investigate how the technique detects and recovers errors induced by Trotterization in preparing the maximum independent set using an adiabatic state preparation algorithm.
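A classical caricature of the detect-and-recover loop (our own sketch in Python; the paper's construction is quantum, with stabilizer measurements replacing the explicit classical checks below, and the exactness guarantees quoted above are not established by this toy). For Independent Set, a violation is an edge whose two endpoints are both occupied; recovery resamples the flagged sites, and the loop stops when no check fails:

import numpy as np

rng = np.random.default_rng(0)

def sample_independent_set(adj, lam, max_iter=10_000):
    # Hard-core model with occupation odds lam: propose, then repeatedly
    # detect violated edges and resample their endpoints (recovery step)
    n = len(adj)
    p = lam / (1.0 + lam)
    occ = rng.random(n) < p
    for _ in range(max_iter):
        violated = [(i, j) for i in range(n) for j in adj[i]
                    if i < j and occ[i] and occ[j]]
        if not violated:                  # stopping condition: all checks pass
            return occ
        for v in {v for e in violated for v in e}:
            occ[v] = rng.random() < p     # resample flagged sites
    raise RuntimeError("no constraint-satisfying sample found")

adj = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]   # 6-cycle
print(sample_independent_set(adj, lam=0.5))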
We study the complex nonlinear dynamics of the two-photon Dicke model in the semiclassical limit by considering cavity and qubit dissipation. In addition to the normal and super-radiant phases, another phase that contains abundant chaos-related phenomena is found under balanced rotating and counter-rotating couplings. In particular, chaos may manifest itself through period-doubling bifurcation, intermittent chaos, or quasi-periodic oscillation, depending on the value of qubit frequency. Transition mechanisms that exist in these three distinct routes are investigated through the system's long-time evolution and bifurcation diagram. Additionally, we provide a comprehensive phase diagram detailing both the existence of stable fixed points and the aforementioned chaos-related dynamics.
An analytical theory is developed to calculate the dissipatively stable concurrence of a system of two coupled superconducting flux qubits in a strong driving field. The conditions for the generation and destruction of entangled states during the formation of multiphoton transition regions due to Landau--Zener--St\"uckelberg--Majorana interference are found. Based on the solution of the Floquet--Markov equation, a technique is proposed to adjust the amplitudes of the dc and ac fields for effective control of the entanglement between qubit states while taking the effects of decoherence into account.
The concepts of geometric phase and wave-particle duality are interlinked with several fundamental phenomena in quantum physics, but their mutual relationship still forms an uncharted open problem. Here we address this question by studying the geometric phase of a photon in double-slit interference. In particular, we discover a general complementarity relation for the photon that connects the geometric phase it exhibits in the observation plane and the which-path information it encodes at the two slits. The relation can be seen as quantifying wave-particle duality of the photon via the geometric phase, thus corroborating a foundational link between two ubiquitous notions in quantum physics research.
The shareability of quantum correlations among the constituent parties of a multiparty quantum system is restricted by the quantum information theoretic concept called monogamy. Depending on the multiparty quantum system, different measures of quantum correlations show disparate signatures of monogamy. We characterize the shareability of quantum correlations, of both the entanglement-separability and the information-theoretic kinds, in a multiparty quantum spin system containing two- and three-body interactions with respect to its system parameters and an external applied magnetic field. The monogamy score in this system exhibits both monogamous and non-monogamous traits depending on the quantum correlation measure, the strengths of the system parameters, and the external magnetic field. The percentage of non-monogamous states when information-theoretic quantum correlations are considered is higher than that for the entanglement-separability kind in the allowed ranges of these variables. The integral powers of the quantum correlation measures for which the non-monogamous states become monogamous are identified.
The problem of implementation security in quantum key distribution (QKD) refers to the difficulty of meeting the requirements of mathematical security proofs in real-life QKD systems. Here, we provide a succinct review of this topic, focusing on discrete-variable QKD setups. In particular, we discuss some of their main vulnerabilities and comment on possible approaches to overcome them.
The unitary description of beam splitters (BSs) and optical parametric amplifiers (OPAs) in terms of the dynamical Lie groups $SU(2)$ and $SU(1,1)$ has a long history. Recently, an inherent duality has been proposed that relates the unitaries of both optical devices. At the physical level, this duality relates the linear nature of a lossless BS to the nonlinear Parametric Down-Conversion (PDC) process exhibited by an OPA. Here, we argue that the duality between BS and PDC can instead be naturally interpreted by analyzing the geometrical properties of both Lie groups, an approach that explicitly connects the dynamical group description of the optical devices with the aforementioned duality. Furthermore, we show that the BS-PDC duality can be represented through tensor network diagrams, enabling the implementation of a PDC as a circuit on a standard quantum computing platform. Thus, it is feasible to simulate nonlinear processes by using single-qubit unitaries that can be implemented on currently available digital quantum processors.
We introduce and analyze the entanglement properties of randomized hypergraph (RH) states, an extended notion of the randomization procedure in the quantum logic gates for the usual graph states recently proposed in the literature. The probabilities of applying imperfect generalized controlled-$Z$ gates simulate noisy operations on the qubits. We compute entanglement measures such as negativity, concurrence, and genuine multiparticle negativity, and show that entanglement exhibits a non-monotonic behavior in terms of the randomness parameters. This is a consequence of the non-uniformity of the associated hypergraphs, reinforcing the claim that the entanglement of randomized graph states is monotonic because they are related to $2$-uniform hypergraphs. Moreover, we observe the phenomena of entanglement sudden death and entanglement sudden birth in RH states. This work unveils a connection between the non-uniformity of hypergraphs and the loss of entanglement.
Quantum process tomography (QPT) is a fundamental task in characterizing the dynamics of quantum systems. In contrast to standard QPT, the ancilla-assisted process tomography (AAPT) framework introduces an extra ancilla system such that only a single input state is needed. In this paper, we extend the two-stage solution, a method originally designed for standard QPT, to perform AAPT. Our algorithm has $O(Md_A^2d_B^2)$ computational complexity, where $M$ is the number of types of measurement operators, $d_A$ is the dimension of the quantum system of interest, and $d_B$ is the dimension of the ancilla system. We then establish an upper bound on the estimation error and discuss the optimal design of the input state in AAPT. A numerical example on a phase damping process demonstrates the effectiveness of the optimal design and illustrates the theoretical error analysis.
Scalable quantum computers hold the promise to solve hard computational problems, such as prime factorization, combinatorial optimization, simulation of many-body physics, and quantum chemistry. While being key to understanding many real-world phenomena, simulation of non-conservative quantum dynamics presents a challenge for unitary quantum computation. In this work, we focus on simulating non-unitary parity-time symmetric systems, which exhibit a distinctive symmetry-breaking phase transition as well as other unique features that have no counterpart in closed systems. We show that a qutrit, a three-level quantum system, is capable of realizing this non-equilibrium phase transition. By using two physical platforms - an array of trapped ions and a superconducting transmon - and by controlling their three energy levels in a digital manner, we experimentally simulate the parity-time symmetry-breaking phase transition. Our results indicate the potential advantage of multi-level (qudit) processors in simulating physical effects, where additional accessible levels can play the role of a controlled environment.
Quantum computers are nearing the thousand qubit mark, with the current focus on scaling to improve computational performance. As quantum processors grow in complexity, new challenges arise such as the management of device variability and the interface with supporting electronics. Spin qubits in silicon quantum dots are poised to address these challenges with their proven control fidelities and potential for compatibility with large-scale integration. Here, we demonstrate the integration of 1024 silicon quantum dots with on-chip digital and analogue electronics, all operating below 1 K. A high-frequency analogue multiplexer provides fast access to all devices with minimal electrical connections, enabling characteristic data across the quantum dot array to be acquired in just 5 minutes. We achieve this by leveraging radio-frequency reflectometry with state-of-the-art signal integrity, reaching a minimum integration time of 160 ps. Key quantum dot parameters are extracted by fast automated machine learning routines to assess quantum dot yield and understand the impact of device design. We find correlations between quantum dot parameters and room temperature transistor behaviour that may be used as a proxy for in-line process monitoring. Our results show how rapid large-scale studies of silicon quantum devices can be performed at lower temperatures and measurement rates orders of magnitude faster than current probing techniques, and form a platform for the future on-chip addressing of large scale qubit arrays.
Photonic lattices enable the experimental exploration of transport and localization phenomena, two of the major goals in physics and technology. In particular, the optical excitation of lattice sites that evanescently couple to a lattice array emulates radiation processes into structured reservoirs, a fundamental subject in quantum optics. Moreover, the simultaneous excitation of two sites simulates collective phenomena, leading to phase-controlled enhanced or suppressed radiation, namely super- and subradiance. This work presents an experimental study of collective radiative processes on a photonic kagome lattice. A single or simultaneous -- in- or out-of-phase -- excitation of the outlying sites controls the radiation dynamics. Specifically, we demonstrate a controllable transition between a fully localized profile at the two outlying sites and a completely dispersed state in the quasi-continuum. Our results present photonic lattices as a platform to emulate and experimentally explore quantum optical phenomena in two-dimensional structured reservoirs, while harnessing such phenomena for controlling transport dynamics and implementing all-optical switching devices.
In this study, we explore how a quantum pathfinding algorithm called Gaussian Amplitude Amplification (GAA) can be used to solve the DNA sequencing problem. To do this, sequencing by hybridization was assumed, wherein short fragments of the nucleic acids, called oligonucleotides, of length l were gathered and then assembled. The process of reassembling the sequence was then abstracted into the graph problem of finding the Hamiltonian path with the least cost. The constructed directed graph was then converted into sequential bipartite graphs in order to use GAA. The results of our simulation reveal that for the case where l = 2 and a spectrum size of |S| = 4, the probability of finding the optimal (least-cost) solution is approximately 70.92%, a significant improvement compared to the 4.17% obtained when the path is chosen randomly. While this study only focused on the ideal scenario where there are no errors in the spectrum, the outcomes presented here demonstrate the plausibility of using GAA as a genome sequencing method.
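The classical preprocessing described above is easy to make concrete (our own sketch in Python; the toy 3-mer spectrum is invented, and the brute-force search below plays the role that GAA performs on the sequential bipartite graphs). Each directed edge u -> v costs the number of new bases v appends after u, so the least-cost Hamiltonian path gives the most parsimonious reassembly:

from itertools import permutations

def overlap(a, b):
    # Largest k < len(b) with suffix(a, k) == prefix(b, k)
    for k in range(min(len(a), len(b)) - 1, 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def assemble(spectrum):
    # Brute-force least-cost Hamiltonian path (fine for tiny |S|)
    best = None
    for order in permutations(spectrum):
        cost = sum(len(v) - overlap(u, v) for u, v in zip(order, order[1:]))
        if best is None or cost < best[0]:
            best = (cost, order)
    cost, order = best
    seq = order[0]
    for u, v in zip(order, order[1:]):
        seq += v[overlap(u, v):]
    return cost, seq

print(assemble(["ATG", "TGG", "GGC", "GCA"]))   # -> (3, 'ATGGCA')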
The one-dimensional 3-body Wolfes model with 2- and 3-body interactions, also known as the $G_2/I_6$-rational integrable model of the Hamiltonian reduction, is exactly solvable and superintegrable. Its Hamiltonian $H$ and two integrals ${\cal I}_{1}, {\cal I}_{2}$, which can be written as algebraic differential operators in two variables of the 2nd and 6th orders, respectively, are represented as non-linear combinations of the $g^{(2)}$ or $g^{(3)}$ (hidden) algebra generators in a minimal manner. Using a specially designed MAPLE-18 code it is found that $(H, {\cal I}_1, {\cal I}_2, [{\cal I}_1, {\cal I}_2])$ are the four generating elements of the {\it quartic} polynomial algebra of integrals. This algebra is embedded in the universal enveloping algebra of $g^{(3)}$. The 3-body Calogero model is mentioned briefly.
Beyond their origin in modeling many-body quantum systems, tensor networks have emerged as a promising class of models for solving machine learning problems, notably in unsupervised generative learning. While possessing many desirable features arising from their quantum-inspired nature, tensor network generative models have previously been largely restricted to binary or categorical data, limiting their utility in real-world modeling problems. We overcome this by introducing a new family of tensor network generative models for continuous data, which are capable of learning from distributions containing continuous random variables. We develop our method in the setting of matrix product states, first deriving a universal expressivity theorem proving the ability of this model family to approximate any reasonably smooth probability density function with arbitrary precision. We then benchmark the performance of this model on several synthetic and real-world datasets, finding that the model learns and generalizes well on distributions of continuous and discrete variables. We develop methods for modeling different data domains, and introduce a trainable compression layer which is found to increase model performance given limited memory or computational resources. Overall, our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.
Transformers are increasingly employed for graph data, demonstrating competitive performance in diverse tasks. To incorporate graph information into these models, it is essential to enhance node and edge features with positional encodings. In this work, we propose novel families of positional encodings tailored for graph transformers. These encodings leverage the long-range correlations inherent in quantum systems, which arise from mapping the topology of a graph onto interactions between qubits in a quantum computer. Our inspiration stems from the recent advancements in quantum processing units, which offer computational capabilities beyond the reach of classical hardware. We prove that some of these quantum features are theoretically more expressive for certain graphs than the commonly used relative random walk probabilities. Empirically, we show that the performance of state-of-the-art models can be improved on standard benchmarks and large-scale datasets by computing tractable versions of quantum features. Our findings highlight the potential of leveraging quantum computing capabilities to potentially enhance the performance of transformers in handling graph data.
Quantum state teleportation represents a pillar of quantum information and a milestone on the roadmap towards quantum networks with a large number of nodes. Successful photonic demonstrations of this protocol have been carried out employing different qubit encodings. However, demonstrations in the Fock basis encoding are challenging, due to the impossibility of creating a coherent superposition of vacuum-one photon states on a single mode with linear optics. Previous realizations using such an encoding strongly relied on ancillary modes of the electromagnetic field, which only allowed the teleportation of subsystems of entangled states. Here, we enable quantum teleportation of genuine vacuum-one photon states avoiding ancillary modes, by exploiting coherent control of a resonantly excited semiconductor quantum dot in a micro-cavity. Within our setup, we can teleport vacuum-one-photon qubits and perform entanglement swapping in such an encoding. Our results may disclose new potentialities of quantum dot single-photon sources for quantum information applications.
The Holstein model, which describes purely local coupling of an itinerant excitation (electron, hole, exciton) with zero-dimensional (dispersionless) phonons, represents the paradigm for short-range excitation-phonon interactions. It is demonstrated here how spectral properties of small Holstein polarons -- heavily phonon-dressed quasiparticles, formed in the strong-coupling regime of the Holstein model -- can be extracted from an analog quantum simulator of this model. This simulator, which is meant to operate in the dispersive regime of circuit quantum electrodynamics, has the form of an array of capacitively coupled superconducting transmon qubits and microwave resonators, the latter being subject to a weak external driving. The magnitude of $XY$-type coupling between adjacent qubits in this system can be tuned through an external flux threading the SQUID loops between those qubits; this translates into an {\em in-situ} flux-tunable hopping amplitude of a fictitious itinerant spinless-fermion excitation, allowing one to access all the relevant physical regimes of the Holstein model. By employing the kernel-polynomial method, based on expanding dynamical response functions in Chebyshev polynomials of the first kind and their recurrence relation, the relevant single-particle momentum-frequency resolved spectral function of this system is computed here for a broad range of parameter values. To complement the evaluation of the spectral function, it is also explained how -- by making use of the many-body version of the Ramsey interference protocol -- this dynamical-response function can be measured in the envisioned analog simulator.
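The kernel-polynomial step named above is compact enough to sketch (Python/NumPy, our own stand-in: a uniform tight-binding chain replaces the driven circuit-QED array, and the exact diagonalization used for rescaling would be replaced by spectral-bound estimates in practice). Chebyshev moments follow from the three-term recurrence and are damped with the Jackson kernel:

import numpy as np

def kpm_ldos(H, psi, n_moments=128, n_energies=400, eps=0.05):
    # <psi| delta(E - H) |psi> via Chebyshev moments mu_n = <psi|T_n(H~)|psi>
    w = np.linalg.eigvalsh(H)                 # spectral bounds (demo only)
    a = (w[-1] - w[0]) / (2 - eps)
    b = (w[-1] + w[0]) / 2
    Ht = (H - b * np.eye(len(H))) / a         # spectrum rescaled into (-1, 1)

    t0, t1 = psi.copy(), Ht @ psi             # recurrence t_{n+1} = 2 H~ t_n - t_{n-1}
    mu = [psi @ t0, psi @ t1]
    for _ in range(n_moments - 2):
        t0, t1 = t1, 2 * Ht @ t1 - t0
        mu.append(psi @ t1)
    mu = np.array(mu)

    n = np.arange(n_moments)                  # Jackson damping factors
    g = ((n_moments - n + 1) * np.cos(np.pi * n / (n_moments + 1))
         + np.sin(np.pi * n / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1)))
    g /= n_moments + 1

    x = np.linspace(-0.99, 0.99, n_energies)
    Tnx = np.cos(np.outer(n, np.arccos(x)))   # T_n(x) on the energy grid
    damped = g * mu
    rho = damped[0] * Tnx[0] + 2 * damped[1:] @ Tnx[1:]
    rho /= np.pi * np.sqrt(1 - x ** 2)
    return a * x + b, rho / a                 # undo the rescaling

L = 200
H = -np.eye(L, k=1) - np.eye(L, k=-1)         # uniform hopping chain
psi = np.zeros(L); psi[0] = 1.0
E, rho = kpm_ldos(H, psi)
print(np.trapz(rho, E))                       # ~ 1: LDOS normalization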
Two transformative waves of computing have redefined the way we approach science. The first wave came with the birth of the digital computer, which enabled scientists to numerically simulate their models and analyze massive datasets. This technological breakthrough led to the emergence of many sub-disciplines bearing the prefix "computational" in their names. Currently, we are in the midst of the second wave, marked by the remarkable advancements in artificial intelligence. From predicting protein structures to classifying galaxies, the scope of its applications is vast, and there can only be more awaiting us on the horizon. While these two waves influence scientific methodology at the instrumental level, in this dissertation, I will present the computational lens in science, aiming at the conceptual level. Specifically, the central thesis posits that computation serves as a convenient and mechanistic language for understanding and analyzing information processing systems, offering the advantages of composability and modularity. This dissertation begins with an illustration of the blueprint of the computational lens, supported by a review of relevant previous work. Subsequently, I will present my own works in quantum physics and neuroscience as concrete examples. In the concluding chapter, I will contemplate the potential of applying the computational lens across various scientific fields, in a way that can provide significant domain insights, and discuss potential future directions.
Single-photon emitters (SPEs) within wide-bandgap materials represent an appealing platform for the development of single-photon sources operating at room temperature. Group III-nitrides have previously been shown to host efficient SPEs, attributed to deep energy levels within the large bandgap of the material, in a way that is similar to the extensively investigated colour centres in diamond. Anti-bunched emission from defect centres within gallium nitride (GaN) and aluminium nitride (AlN) has recently been demonstrated. While such emitters are particularly interesting due to the compatibility of III-nitrides with cleanroom processes, the nature of these defects and the optimal conditions for forming them are not fully understood. Here, we investigate Al implantation in a commercial AlN epilayer through subsequent steps of thermal annealing and confocal microscopy measurements. We observe a fluence-dependent increase in the density of the emitters, resulting in the creation of ensembles at the maximum implantation fluence. Annealing at 600 °C results in the optimal yield of SPE formation at the maximum fluence, while a significant reduction in SPE density is observed at lower fluences. These findings suggest that the mechanism of vacancy formation plays a key role in the creation of the emitters, and open new perspectives in the defect engineering of SPEs in the solid state.
We study Haar-random orthonormal bases and the pretty good measurement as measurement choices for Bayesian state estimation. We consider different ensembles of states under an $N$-update Bayesian algorithm given $N$ Haar-random measurement bases. We obtain a bound on the fidelity averaged over IID sequences of such random measurement bases for a uniform ensemble of pure states. We also find that Clifford unitaries can only give a weak lower bound on the average fidelity, in contrast to Haar-random unitaries, for ensembles of mixed qubit states. For a single-shot update, we use the Petz-recovery correspondence of the pretty good measurement to argue that it gives pretty good Bayesian mean estimates.
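For readers unfamiliar with the pretty good measurement, the following hedged sketch builds its POVM elements $M_i = \rho^{-1/2} p_i \rho_i \rho^{-1/2}$ (with $\rho = \sum_i p_i \rho_i$ and the inverse square root taken on the support of $\rho$) for a toy ensemble; the ensemble and all names are illustrative, not taken from the paper.

import numpy as np

def pgm(probs, states):
    """POVM elements of the pretty good measurement for ensemble {p_i, rho_i}."""
    rho = sum(p * r for p, r in zip(probs, states))
    vals, vecs = np.linalg.eigh(rho)
    inv_sqrt = np.where(vals > 1e-12, 1.0 / np.sqrt(np.clip(vals, 1e-12, None)), 0.0)
    s = (vecs * inv_sqrt) @ vecs.conj().T            # rho^{-1/2} on its support
    return [s @ (p * r) @ s for p, r in zip(probs, states)]

def success_probability(probs, states, povm):
    """P_succ = sum_i p_i Tr[M_i rho_i]."""
    return float(np.real(sum(p * np.trace(m @ r)
                             for p, r, m in zip(probs, states, povm))))

# Toy usage: two non-orthogonal pure qubit states with equal priors.
ket0 = np.array([1.0, 0.0]); ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(k, k.conj()) for k in (ket0, ketp)]
povm = pgm([0.5, 0.5], states)
print(success_probability([0.5, 0.5], states, povm))  # ~0.854 for this pair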
Entanglement in continuous-variable non-Gaussian states provides irreplaceable advantages in many quantum information tasks. However, the sheer amount of information in such states grows exponentially and makes a full characterization impossible. Here, we develop a neural network that allows us to use correlation patterns to effectively detect continuous-variable entanglement through homodyne detection. Using a recently defined stellar hierarchy to rank the states used for training, our algorithm works not only on any kind of Gaussian state but also on a whole class of experimentally achievable non-Gaussian states, including photon-subtracted states. With the same limited amount of data, our method provides higher accuracy than the usual entanglement-detection methods based on maximum-likelihood tomography. Moreover, in order to visualize the effect of the neural network, we employ a dimension-reduction algorithm on the patterns. This shows that a clear boundary appears between the entangled states and the others after the neural-network processing. In addition, these techniques allow us to compare different entanglement witnesses and understand how they work. Our findings provide a new approach for the experimental detection of continuous-variable quantum correlations without resorting to a full tomography of the state, and confirm the exciting potential of neural networks in quantum information processing.
Fault-tolerant quantum computation with bosonic qubits often necessitates the use of noisy discrete-variable ancillae. In this work, we establish a comprehensive and practical fault-tolerance framework for such a hybrid system and synthesize it with fault-tolerant protocols by combining bosonic quantum error correction (QEC) and advanced quantum control techniques. We introduce essential building blocks of error-corrected gadgets by leveraging ancilla-assisted bosonic operations using a generalized variant of path-independent quantum control (GPI). Using these building blocks, we construct a universal set of error-corrected gadgets that tolerate a single photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably, our construction requires only dispersive coupling between bosonic modes and ancillae, as well as beam-splitter coupling between bosonic modes, both of which have been experimentally demonstrated with strong strengths and high accuracy. Moreover, each error-corrected bosonic qubit comprises only a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC in the full fault-tolerant setting. We numerically demonstrate the feasibility of our schemes using current experimental parameters in the circuit-QED platform. Finally, we present a hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code utilizing only beam-splitter couplings. Our estimates suggest that the overall noise threshold can be reached using existing hardware. The fault-tolerant schemes developed here extend beyond four-legged cat qubits and can be adapted for other rotation-symmetric codes, offering a promising avenue toward scalable and robust quantum computation with bosonic qubits.
A quantum memory is a crucial keystone for enabling large-scale quantum networks. For practical implementation, specific properties are desirable: long storage time, selective and efficient coupling with other systems, and high memory efficiency. Though many quantum memory systems have been developed thus far, none of them can perfectly meet all requirements. This work proposes a quantum memory based on color centers in hexagonal boron nitride (hBN), whose performance is evaluated with a simple theoretical model of suitable defects in a cavity. Employing density functional theory calculations, 257 triplet and 211 singlet spin electronic transitions have been investigated. Among these defects, we found that some inherit the $\Lambda$ electronic structure desirable for a Raman-type quantum memory, with optical transitions that can couple to other quantum systems. Further, the quality factor and bandwidth required to achieve a 95\% writing efficiency are examined for each defect; both parameters are influenced by the radiative transition rate of the defect state. In addition, the triplet-singlet spin multiplicity indicates potential for quantum sensing, in particular via optically detected magnetic resonance. This work therefore demonstrates the potential of hBN defects as a quantum memory in future quantum networks.
Long-range, terrestrial quantum networks will require high-brightness single-photon sources emitting in the telecom C-band for maximum transmission rate. Many applications additionally demand triggered operation with high indistinguishability and narrow spectral linewidth. This would enable the efficient implementation of photonic gate operations and photon storage in quantum memories, as required, for instance, for a quantum repeater. In particular, semiconductor quantum dots (QDs) have shown these properties in the near-infrared regime. However, the simultaneous demonstration of all these properties in the telecom C-band has been elusive. Here, we present a coherently (incoherently) optically-pumped narrow-band (0.8 GHz) triggered single-photon source in the telecom C-band. The source simultaneously shows high single-photon purity with $g^{(2)}(0) = 0.026$ ($g^{(2)}(0) = 0.014$), high two-photon interference visibility of 0.508 (0.664), and high application-ready rates of 0.75 MHz (1.45 MHz) of polarized photons. The source is based on a QD coupled to a circular Bragg grating cavity combined with spectral filtering. Coherent (incoherent) operation is performed via the novel SUPER scheme (phonon-assisted excitation).
One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution problem to the Learning with Errors problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry that this reduction can be made more powerful by replacing the Learning with Errors problem with a quantum equivalent, where the errors are given in quantum superposition. In the context of codes, this can be adapted to a reduction from finding short codewords to a quantum decoding problem for random linear codes. We therefore consider in this paper the quantum decoding problem, where we are given a superposition of noisy versions of a codeword and we want to recover the corresponding codeword. When we measure the superposition, we get back the usual classical decoding problem, for which the best known algorithms are, in the constant-rate and constant-error-rate regime, exponential in the code length. However, we show here that when the noise rate is small enough, the quantum decoding problem can be solved in quantum polynomial time. Moreover, we also show that the problem can in principle be solved quantumly (albeit not efficiently) at noise rates for which the associated classical decoding problem cannot be solved at all for information-theoretic reasons. We then revisit Regev's reduction in the context of codes. We show that using our algorithms for the quantum decoding problem in Regev's reduction matches the best known quantum algorithms for the short codeword problem. This shows in some sense the tightness of Regev's reduction when considering the quantum decoding problem and also paves the way for new quantum algorithms for the short codeword problem.
Semiconductor quantum dots (QDs) in planar germanium (Ge) heterostructures have emerged as frontrunners for future hole-based quantum processors. Notably, the large spin-orbit interaction of holes offers rapid, coherent electrical control of spin states, which can be further beneficial for interfacing hole spins to microwave photons in superconducting circuits via coherent charge-photon coupling. Here, we present strong coupling between a hole charge qubit, defined in a double quantum dot (DQD) in planar Ge, and microwave photons in a high-impedance ($Z_\mathrm{r} = 1.3 ~ \mathrm{k}\Omega$) superconducting quantum interference device (SQUID) array resonator. Our investigation reveals vacuum-Rabi splittings with coupling strengths up to $g_{0}/2\pi = 260 ~ \mathrm{MHz}$ and a cooperativity of $C \sim 100$, depending on the DQD tuning, confirming the strong charge-photon coupling regime in planar Ge. Furthermore, utilizing the frequency tunability of our resonator, we explore the quenched energy splitting associated with the strongly-correlated Wigner molecule (WM) states that emerge in Ge QDs. The observed enhanced coherence of the WM excited state signals the presence of distinct symmetries within the related spin functions, serving as a precursor to strong coupling between photons and spin-charge hybrid qubits in planar Ge. This work paves the way towards coherent quantum connections between remote hole qubits in planar Ge, required to scale up hole-based quantum processors.
Spin-based quantum information processing makes extensive use of spin-state manipulation, ranging from dynamical decoupling of nuclear spins in quantum sensing experiments to applying logical gates on qubits in a quantum processor. Here we present an antenna for strong driving in quantum sensing experiments and theoretically address challenges of the strong-driving regime. First, we designed and implemented a micron-scale planar spiral RF antenna capable of delivering intense fields to a sample. The planar antenna is tailored for quantum sensing experiments using the diamond's nitrogen-vacancy (NV) center and should be applicable to other solid-state defects. The antenna has a broad bandwidth of 22 MHz, is compatible with scanning probes, and is suitable for cryogenic and ultrahigh-vacuum conditions. We measure the magnetic field induced by the antenna and estimate a field-to-current ratio of $113\pm 16$ G/A, representing a sixfold increase in efficiency compared to the state of the art. We demonstrate the antenna by driving Rabi oscillations in $^1$H spins of an organic sample on the diamond surface and measure $^1$H Rabi frequencies of over 500 kHz, i.e., $\pi$-pulses shorter than 1 $\mu$s - faster than previously reported in NV-based nuclear magnetic resonance (NMR). Finally, we discuss the implications of driving spins with a field tilted from the transverse plane in a regime where the driving amplitude is comparable to the spin-state splitting, such that the rotating-wave approximation does not describe the dynamics well. We present a recipe to optimize pulse fidelity in this regime based on a phase- and offset-shifted sine drive, which may be optimized without numerical optimization procedures or precise modeling of the experiment. We consider this approach over a range of driving amplitudes and show that it is particularly efficient in the case of a tilted driving field.
Quantum Recurrent Neural Networks (QRNNs) are robust candidates for modeling and predicting future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need for mid-circuit measurements. These increase the requirements on quantum hardware, which in the current NISQ era does not allow reliable computations. Emulation arises as the main near-term alternative for exploring the potential of QRNNs, but existing quantum emulators are not dedicated to circuits with multiple intermediate measurements. In this context, we design a specific emulation method based on the density matrix formalism. The mathematical development is explicitly provided as a compact formulation using tensor notation. It allows us to show how the present and past information from a time series is transmitted through the circuit, and how to reduce the computational cost at every time step of the emulated network. In addition, we derive the analytical gradient and Hessian of the network outputs with respect to its trainable parameters, with an eye on gradient-based training and the noisy outputs that would appear when using real quantum processors. We finally test the presented methods using a novel hardware-efficient ansatz and three diverse datasets that include univariate and multivariate time series. Our results show how QRNNs can make accurate predictions of future values by capturing non-trivial patterns of input series with different complexities.
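The density-matrix bookkeeping that makes mid-circuit measurements emulable without trajectory sampling can be illustrated as follows: a non-post-selected projective measurement acts as the channel $\rho \mapsto \sum_k P_k \rho P_k$, so the whole circuit step can be pushed through a single density matrix. The sketch below is a generic two-qubit toy example, not the paper's tensor formulation or QRNN ansatz.

import numpy as np

def apply_unitary(rho, U):
    return U @ rho @ U.conj().T

def measure_qubit(rho, k, n):
    """Non-selective computational-basis measurement of qubit k (of n)."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    def embed(P):
        ops = [np.eye(2, dtype=complex)] * n; ops[k] = P
        out = ops[0]
        for o in ops[1:]:
            out = np.kron(out, o)
        return out
    E0, E1 = embed(P0), embed(P1)
    return E0 @ rho @ E0 + E1 @ rho @ E1   # projectors are Hermitian

# Toy usage: 2 qubits, entangle, then measure qubit 0 mid-circuit.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
rho = np.zeros((4, 4), dtype=complex); rho[0, 0] = 1.0
rho = apply_unitary(rho, np.kron(H, np.eye(2)))
rho = apply_unitary(rho, CNOT)
rho = measure_qubit(rho, 0, 2)             # off-diagonal coherences are erased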
In multipartite Bell scenarios, we study the nonlocality robustness of the Greenberger-Horne-Zeilinger (GHZ) state. When each party performs planar measurements forming a regular polygon, we exploit the symmetry of the resulting correlation tensor to drastically accelerate the computation of (i) a Bell inequality via Frank-Wolfe algorithms, and (ii) the corresponding local bound. The Bell inequalities obtained are facets of the symmetrised local polytope and they give the best known upper bounds on the nonlocality robustness of the GHZ state for three to ten parties. Moreover, for four measurements per party, we generalise our facets and hence show, for any number of parties, an improvement on Mermin's inequality in terms of noise robustness. We also compute the detection efficiency of our inequalities and show that some give rise to activation of nonlocality in star networks, a property that was only shown with an infinite number of measurements.
Quantum batteries are quantum systems used to store energy that is later extracted by an external agent in the form of work to perform some task. Here we study the charging of a hybrid quantum battery via a collisional model mediated by an anti-Jaynes-Cummings interaction obtained from an off-resonant Raman configuration. The battery is made of two distinct components: a stationary, infinite-dimensional quantum system (e.g., a harmonic oscillator) and a stream of small-dimensional ones (e.g., qutrits). The charging protocol consists of sequentially interacting the harmonic oscillator with each element of the stream, one at a time, under the action of an external energy source. The goal is to analyze how the charging of both the harmonic oscillator and the qutrits is affected by the correlation properties of the stream.
We demonstrate a simple technique for adding controlled dissipation to Rydberg atom experiments. In our experiments we excite cold rubidium atoms in a magneto-optical trap to $70$-S Rydberg states whilst simultaneously inducing forced dissipation by resonantly coupling the Rydberg state to a hyperfine level of the short-lived $6$-P state. The resulting effective dissipation can be varied in strength and switched on and off during a single experimental cycle.
The quantum dense coding (DC) protocol, which has no security feature, deals with the transmission of classical information encoded in a quantum state using shared entanglement between a single sender and a single receiver. An appropriate variant of it has been established as a quantum key distribution (QKD) scheme for shared two-qubit maximally entangled states, with the security proof utilizing the uncertainty relation of complementary observables and the Shor-Preskill entanglement purification scheme. We present the DC-based QKD protocol for higher-dimensional systems and report lower bounds on the secret key rate when the shared state is a two-qudit maximally entangled state, as well as for mixtures of maximally entangled states with different ranks. The analysis also includes the impact of noisy channels on the secure key rates, before and after encoding. In both the noiseless and the noisy scenarios, we demonstrate that the key rate, as well as the robustness of the protocol against noise, increases with the dimension. Further, we prove that the set of useless states in the DC-based QKD protocol is convex and compact.
High-dimensional entanglement provides unique ways of transcending the limitations of current qubit-based approaches to quantum information processing and quantum communications. The generation of time-frequency qudit states offers significantly increased quantum capacities while keeping the number of photons constant, but poses significant challenges regarding the measurements available for the certification of entanglement. Here, we develop a new scheme and experimentally demonstrate the certification of 24-dimensional entanglement and 9-dimensional quantum steering. We then subject our photon pairs to dispersion conditions equivalent to transmission through 600 km of fiber and still certify 21-dimensional entanglement. Furthermore, we use a steering inequality to prove 7-dimensional entanglement in a semi-device-independent manner, demonstrating that large chromatic dispersion is not an obstacle to distributing and certifying high-dimensional entanglement and quantum steering. Our highly scalable scheme is based on commercial telecommunication optical fiber components and recently developed low-jitter high-efficiency single-photon detectors, thus opening new pathways towards advanced large-scale quantum information processing and high-performance, noise-tolerant quantum communications with time-energy measurements.
We propose two different derivations of Pythagoras' theorem and apply them to the study of discrete and continuum states.
The scarcity of qubits is a major obstacle to the practical usage of quantum computers in the near future. To circumvent this problem, various circuit knitting techniques have been developed to partition large quantum circuits into subcircuits that fit on smaller devices, at the cost of a simulation overhead. In this work, we study a particular method of circuit knitting based on quasiprobability simulation of nonlocal gates with operations that act locally on the subcircuits. We investigate whether classical communication between these local quantum computers can help. We provide a positive answer by showing that for circuits containing $n$ nonlocal CNOT gates connecting two circuit parts, the simulation overhead can be reduced from $O(9^n)$ to $O(4^n)$ if one allows for classical information exchange. Similar improvements can be obtained for general Clifford gates and, at least in a restricted form, for other gates such as controlled rotation gates.
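To see where the quoted overheads come from (standard quasiprobability-sampling bookkeeping; the paper's contribution is the improved decomposition, not this argument), write a nonlocal gate as a signed mixture of local channels,
$$\mathcal{U}(\rho)=\sum_a c_a\,(\mathcal{E}_a\otimes\mathcal{F}_a)(\rho),\qquad \gamma=\sum_a |c_a|,$$
so that Monte Carlo sampling of the terms inflates the sampling cost by $\gamma^{2}$ per gate. For the CNOT, $\gamma=3$ with local operations alone, while classical communication brings it down to $\gamma=2$; for $n$ gates the factors multiply, giving $9^{n}$ versus $4^{n}$.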
We consider the computational task of sampling a bit string $x$ from a distribution $\pi(x)=|\langle x|\psi\rangle|^2$, where $\psi$ is the unique ground state of a local Hamiltonian $H$. Our main result describes a direct link between the inverse spectral gap of $H$ and the mixing time of an associated continuous-time Markov chain with steady state $\pi$. The Markov chain can be implemented efficiently whenever ratios of ground-state amplitudes $\langle y|\psi\rangle/\langle x|\psi\rangle$ are efficiently computable, the spectral gap of $H$ is at least inverse-polynomial in the system size, and the starting state of the chain satisfies a mild technical condition that can be efficiently checked. This extends a previously known relationship between sign-problem-free Hamiltonians and Markov chains. The tool which enables this generalization is the so-called fixed-node Hamiltonian construction, previously used in Quantum Monte Carlo simulations to address the fermionic sign problem. We implement the proposed sampling algorithm numerically and use it to sample from the ground state of the Haldane-Shastry Hamiltonian with up to 56 qubits. We observe empirically that our Markov chain based on the fixed-node Hamiltonian mixes more rapidly than the standard Metropolis-Hastings Markov chain.
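As a concrete (if simplified) illustration of sampling with only amplitude ratios, the sketch below runs a standard discrete-time Metropolis-Hastings chain targeting $\pi(x)=|\langle x|\psi\rangle|^2$; the paper's chain is continuous-time and built from the fixed-node Hamiltonian, which this toy version does not reproduce.

import numpy as np

def metropolis_sample(amp, n_qubits, n_steps, rng):
    """amp(x) -> <x|psi> for a bit string x (tuple of 0/1 entries)."""
    x = tuple(rng.integers(0, 2, size=n_qubits))
    samples = []
    for _ in range(n_steps):
        i = rng.integers(n_qubits)                  # propose a single-bit flip
        y = list(x); y[i] ^= 1; y = tuple(y)
        ratio = amp(y) / amp(x)                     # only ratios are needed
        if rng.random() < min(1.0, abs(ratio) ** 2):
            x = y
        samples.append(x)
    return samples

# Toy usage: |psi> is a product state cos(theta)|0> + sin(theta)|1> per qubit,
# so amplitudes (and the target marginals) are classically computable.
theta = 0.3
amp = lambda x: np.prod([np.cos(theta) if b == 0 else np.sin(theta) for b in x])
rng = np.random.default_rng(1)
samples = metropolis_sample(amp, n_qubits=8, n_steps=5000, rng=rng)
mean_ones = np.mean([sum(s) for s in samples[1000:]])  # ~ 8 sin^2(theta)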
A prerequisite to the successful development of quantum computers and simulators is precise understanding of physical processes occurring therein, which can be achieved by measuring the quantum states they produce. However, the resources required for traditional quantum-state estimation scale exponentially with the system size, highlighting the need for alternative approaches. Here we demonstrate an efficient method for reconstruction of significantly entangled multi-qubit quantum states. Using a variational version of the matrix product state ansatz, we perform the tomography (in the pure-state approximation) of quantum states produced in a 20-qubit trapped-ion Ising-type quantum simulator, using the data acquired in only 27 bases with 1000 measurements in each basis. We observe superior state reconstruction quality and faster convergence compared to the methods based on neural network quantum state representations: restricted Boltzmann machines and feedforward neural networks with autoregressive architecture. Our results pave the way towards efficient experimental characterization of complex states produced by the quench dynamics of many-body quantum systems.
Neutron imaging is a non-destructive technique for analyzing a wide class of samples, such as archaeological artifacts or industrial structures and materials. Technological advances in recent decades have had a great impact on the neutron imaging technique, which has evolved from simple radiographs using films (2D) to modern tomography systems with digital processing (3D). The Instituto de Pesquisas Energ\'eticas e Nucleares (IPEN), in Brazil, houses a 5 MW research nuclear reactor, called IEA-R1, with a neutron imaging instrument providing $1.0 \times 10^{6}\,n/\mathrm{cm}^{2}\,\mathrm{s}$ at the sample position. IEA-R1 is over 60 years old, and the future of neutron science in Brazil, including imaging, will be expanded at a new facility called the Brazilian Multipurpose Reactor (RMB). The new reactor will house a suite of instruments at the Neutron National Laboratory, including a neutron imaging facility called Neinei. Inspired by recent work, we model the Neinei instrument through stochastic Monte Carlo simulations. We investigate the sensitivity of the key neutron imaging parameter (the $L/D$ ratio) to the neutron flux, and the results are compared to data from the Neutra (PSI), Antares (FRM II), BT2 (NIST) and DINGO (OPAL) instruments. The results are promising and provide avenues for future improvements.
Genuine multipartite entanglement of a given multipartite pure quantum state can be quantified through its geometric measure of entanglement, which, up to logarithms, is simply the maximum overlap of the corresponding unit tensor with product unit tensors, a quantity which is also known as the injective norm of the tensor. Our general goal in this work is to estimate this injective norm for randomly sampled tensors. To this end, we study and compare various algorithms, based either on the widely used alternating least squares method or on a novel normalized gradient descent approach, and suited to either symmetrized or non-symmetrized random tensors. We first benchmark their respective performances on the case of symmetrized real Gaussian tensors, whose asymptotic average injective norm is known analytically. Having established that our proposed normalized gradient descent algorithm generally performs best, we then use it to provide approximate numerical values for the average injective norm of complex Gaussian tensors (i.e.~up to normalization uniformly distributed multipartite pure quantum states), with or without permutation-invariance. Finally, we are also able to estimate the average injective norm of random matrix product states constructed from Gaussian local tensors, with or without translation-invariance. All these results constitute the first numerical estimates on the amount of genuinely multipartite entanglement typically present in various models of random multipartite pure states.
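A hedged sketch of the alternating-least-squares baseline for the injective norm follows (the simplest of the algorithm families compared above; random restarts guard against local optima, and nothing here reproduces the paper's tuned normalized-gradient-descent variant).

import numpy as np

def injective_norm_als(T, n_restarts=20, n_iters=100, seed=0):
    """Estimate max_{unit u,v,w} |sum_ijk T_ijk u_i* v_j* w_k*| by
    alternating updates: with two factors fixed, the optimal third factor
    is the normalized partial contraction."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_restarts):
        u, v, w = [(lambda x: x / np.linalg.norm(x))(
            rng.normal(size=k) + 1j * rng.normal(size=k)) for k in T.shape]
        for _ in range(n_iters):
            g = np.einsum('ijk,j,k->i', T, v.conj(), w.conj()); u = g / np.linalg.norm(g)
            g = np.einsum('ijk,i,k->j', T, u.conj(), w.conj()); v = g / np.linalg.norm(g)
            g = np.einsum('ijk,i,j->k', T, u.conj(), v.conj()); w = g / np.linalg.norm(g)
        best = max(best, abs(np.einsum('ijk,i,j,k->', T, u.conj(), v.conj(), w.conj())))
    return best

# Toy usage: a random complex Gaussian 3-qubit "state" tensor, normalized.
rng = np.random.default_rng(2)
T = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
T /= np.linalg.norm(T)
print(injective_norm_als(T))   # maximal overlap with product states, <= 1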
We propose a scheme to realize a frustrated Bose-Hubbard model with ultracold atoms in an optical lattice that comprises the frustrated spin-1/2 quantum XX model. Our approach is based on a square ladder of magnetic flux close to $\pi$ with one real and one synthetic spin dimension. Although this system does not have geometrical frustration, we show that at low energies it maps into an effective triangular ladder with staggered fluxes for specific values of the synthetic tunneling. We numerically investigate its rich phase diagram and show that it contains bond-ordered-wave and chiral superfluid phases. Our scheme gives access to minimal instances of frustrated magnets without the need for real geometrical frustration, in a setup of minimal experimental complexity.
We present a tomographic protocol for the characterization of multiqubit quantum channels. We discuss a specific class of input states for which the set of Pauli measurements at the output of the channel directly relates to its Pauli transfer matrix components. We compare our results to those of standard quantum process tomography, showing an exponential reduction in the number of different experimental configurations required to extract a single matrix element, while keeping the same number of shots. This paves the way for more efficient experimental implementations whenever selective knowledge of the Pauli transfer matrix is needed. We provide several examples and simulations.
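For reference, the Pauli transfer matrix underlying such protocols is $R_{ij} = \mathrm{Tr}[P_i\,\mathcal{E}(P_j)]/d$. The sketch below evaluates it numerically for a single-qubit channel given by Kraus operators, which can be useful for checking simulations against tomographic estimates; the amplitude-damping example is illustrative only, not from the paper.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def ptm(kraus_ops):
    """Single-qubit Pauli transfer matrix R_ij = Tr[P_i E(P_j)] / 2."""
    channel = lambda rho: sum(K @ rho @ K.conj().T for K in kraus_ops)
    R = np.empty((4, 4))
    for i, Pi in enumerate(PAULIS):
        for j, Pj in enumerate(PAULIS):
            R[i, j] = np.real(np.trace(Pi @ channel(Pj))) / 2
    return R

# Toy usage: amplitude damping with decay probability g = 0.1; the nonzero
# entries are R[0,0]=1, R[3,0]=g, R[1,1]=R[2,2]=sqrt(1-g), R[3,3]=1-g.
g = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
print(np.round(ptm([K0, K1]), 3))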
Extracting useful information from noisy near-term quantum simulations requires error mitigation strategies. A broad class of these strategies rely on precise characterization of the noise source. We study the robustness of such strategies when the noise is imperfectly characterized. We adapt an Imry-Ma argument to predict the existence of a threshold in the robustness of error mitigation for random spatially local circuits in spatial dimensions $D \geq 2$: noise characterization disorder below the threshold rate allows for error mitigation up to times that scale with the number of qubits. For one-dimensional circuits, by contrast, mitigation fails at an $\mathcal{O}(1)$ time for any imperfection in the characterization of disorder. As a result, error mitigation is only a practical method for sufficiently well-characterized noise. We discuss further implications for tests of quantum computational advantage, fault-tolerant probes of measurement-induced phase transitions, and quantum algorithms in near-term devices.
We investigate and characterize the emergence of finite-component dissipative phase transitions (DPTs) in nonlinear photon resonators subject to $n$-photon driving and dissipation. Exploiting a semiclassical approach, we derive general results on the occurrence of second-order DPTs in this class of systems. We show that for all odd $n$, no second-order DPT can occur while, for even $n$, the competition between higher-order nonlinearities determines the nature of the criticality and allows for second-order DPTs to emerge only for $n=2$ and $n=4$. As pivotal examples, we study the full quantum dynamics of three- and four-photon driven-dissipative Kerr resonators, confirming the prediction of the semiclassical analysis on the nature of the transitions. The stability of the vacuum and the typical timescales needed to access the different phases are also discussed. We also show a first-order DPT where multiple solutions emerge around zero, low, and high-photon numbers. Our results highlight the crucial role played by strong and weak symmetries in triggering critical behaviors, providing a Liouvillian framework to study the effects of high-order nonlinear processes in driven-dissipative systems, that can be applied to problems in quantum sensing and information processing.
Quantum complexity theory is concerned with the amount of elementary quantum resources needed to build a quantum system or a quantum operation. The fundamental question in quantum complexity is to define and quantify suitable complexity measures. This non-trivial question has attracted the attention of quantum information scientists, computer scientists, and high-energy physicists alike. In this paper, we combine the approach in \cite{LBKJL} and well-established tools from noncommutative geometry \cite{AC, MR, CS} to propose a unified framework for \textit{resource-dependent complexity measures of general quantum channels}, also known as \textit{Lipschitz complexity}. This framework is suitable for studying the complexity of both open and closed quantum systems. The central class of examples in this paper is the so-called \textit{Wasserstein complexity} introduced in \cite{LBKJL, PMTL}. We use geometric methods to provide upper and lower bounds on this class of complexity measures \cite{N1,N2,N3}. Finally, we study the Lipschitz complexity of random quantum circuits and the dynamics of open quantum systems in the finite-dimensional setting. In particular, we show that generically the complexity grows linearly in time before the \textit{return time}. This is the same qualitative behavior conjectured by Brown and Susskind \cite{BS1, BS2}. We also provide an infinite-dimensional example where linear growth does not hold.
The emergence of fractonic topological phases and novel universality classes for quantum dynamics highlights the importance of dipolar symmetry in condensed matter systems. In this work, we study the properties of symmetry-breaking phases of the dipolar symmetries in fermionic models in various spatial dimensions. In such systems, fermions obtain energy dispersion through dipole condensation. Due to the nontrivial commutation between the translation symmetry and dipolar symmetry, the Goldstone modes of the dipolar condensate are strongly coupled to the dispersive fermions and naturally give rise to non-Fermi liquids at low energies. The IR description of the dipolar symmetry-breaking phase is analogous to the well-known theory of a Fermi surface coupled to an emergent U(1) gauge field. We also discuss the crossover behavior when the dipolar symmetry is slightly broken and the cases with anisotropic dipolar conservation.
We investigate the emergence and nature of exceptional points located on exceptional hyper-surfaces of non-Hermitian transfer matrices for finite-range one-dimensional lattice models. We unravel the non-trivial role of these exceptional points in determining the system-size scaling of the electrical conductance in the non-equilibrium steady state. We observe that the band edges of the system always correspond to transfer-matrix exceptional points. Interestingly, although the lower band edge always occurs at wave-vector $k=0$, the upper band edge may or may not correspond to $k=\pi$. Nonetheless, in all cases the system exhibits universal subdiffusive transport at every band edge, with the conductance scaling as $N^{-b}$ with exponent $b=2$. However, when the upper band edge is not located at $k=\pi$, the conductance features interesting oscillations with an overall $N^{-2}$ scaling. Our work further reveals that this setup is uniquely suited to systematically generating higher-order transfer-matrix exceptional points at the upper band edge when one considers finite-range hoppings beyond nearest neighbour. Additional exceptional points other than those at the band edges are shown to occur, although, interestingly, these do not give rise to anomalous transport.
We investigate bounds on the speed and non-adiabatic entropy production, and the trade-off relation between them, for classical stochastic processes with time-independent transition rates. Our results show that the time required to evolve from an initial to a desired target state is bounded from below by the information-theoretic $\infty$-R\'enyi divergence between these states, divided by the total rate. Furthermore, we conjecture, and provide extensive numerical evidence for, an information-theoretic bound on the non-adiabatic entropy production and a novel dissipation-time trade-off relation that outperforms previous bounds in some cases.
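Schematically (with the argument order of the divergence being our assumption, since the abstract does not spell it out), the speed bound reads
$$\tau \;\ge\; \frac{D_{\infty}\!\left(p_{\tau}\,\|\,p_{0}\right)}{\Gamma}, \qquad D_{\infty}(p\|q)=\ln\max_{x}\frac{p(x)}{q(x)},$$
where $p_{0}$ and $p_{\tau}$ are the initial and target distributions and $\Gamma$ is the total transition rate.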
Long-lived singlet states (LLS) of nuclear spin pairs have been extensively studied and utilized in the isotropic phase via liquid state NMR. However, there are hardly any reports of LLS in the anisotropic phase that allows contribution from the dipolar coupling in addition to the scalar coupling, thereby opening many exciting possibilities. Here we report observing LLS in a pair of nuclear spins partially oriented in the nematic phase of a liquid crystal solvent. The spins are strongly interacting via the residual dipole-dipole coupling. We observe LLS in the oriented phase living up to three times longer than the usual spin-lattice relaxation time constant ($T_1$). Upon heating, the system undergoes a phase transition from nematic into isotropic phase, wherein the LLS is up to five times longer lived than the corresponding $T_1$. Interestingly, the LLS prepared in the oriented phase can survive the transition from the nematic to the isotropic phase. As an application of LLS in the oriented phase, we utilize its longer life to measure the small translational diffusion coefficient of solute molecules in the liquid crystal solvent. Finally, we propose utilizing the phase transition to lock or unlock access to LLS.
It is shown that rational extensions of the isotropic Dunkl oscillator in the plane can be obtained by adding some terms either to the radial equation or to the angular one obtained in the polar coordinates approach. In the former case, the isotropic harmonic oscillator is replaced by an isotropic anharmonic one, whose wavefunctions are expressed in terms of $X_m$-Laguerre exceptional orthogonal polynomials. In the latter, it becomes an anisotropic potential, whose explicit form has been found in the simplest case associated with $X_1$-Jacobi exceptional orthogonal polynomials.
The controlled-SWAP and controlled-controlled-NOT gates are at the heart of the original proposal of reversible classical computation by Fredkin and Toffoli. Their widespread use in quantum computation, both in the implementation of classical logic subroutines of quantum algorithms and in quantum schemes with no direct classical counterparts, has made it imperative early on to pursue their efficient decomposition in terms of the lower-level gate sets native to different physical platforms. Here, we add to this body of literature by providing several logically equivalent circuits for the Toffoli and Fredkin gates under all-to-all and linear qubit connectivity, the latter with two different routings for control and target qubits. Besides achieving the lowest CNOT counts in the literature for all these configurations, we also demonstrate the remarkable effectiveness of the obtained decompositions at mitigating coherent errors on near-term quantum computers via equivalent circuit averaging. We first quantify the performance of the method in silico with a coherent-noise model before validating it experimentally on a superconducting quantum processor. In addition, we consider the case where the three qubits on which the Toffoli or Fredkin gates act nontrivially are not adjacent, proposing a novel scheme to reorder them that saves one CNOT for every SWAP. This scheme also finds use in the shallow implementation of long-range CNOTs. Our results highlight the importance of considering different entanglement structures and connectivity constraints when designing efficient quantum circuits.
Consequences of enforcing permutational symmetry, as required by the Pauli principle (spin-statistics theorem), on the state space of molecular ensembles interacting with the quantized radiation mode of a cavity are discussed. The Pauli-allowed collective states are obtained by means of group theory, i.e., by projecting the state space onto the appropriate irreducible representations of the permutation group of the indistinguishable molecules. It is shown that with an increasing number of molecules the ratio of Pauli-allowed collective states decreases very rapidly. Bosonic states are more abundant than fermionic states, and the brightness of the Pauli-allowed state space (contribution from photon-excited states) increases (decreases) with increasing fine structure in the energy levels of the material ground (excited) state manifold. Numerical results are shown for the realistic example of rovibrating H$_2$O molecules interacting with an infrared (IR) cavity mode.
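The projection onto permutation-symmetric states can be mimicked numerically with the brute-force symmetrizer $P=\frac{1}{N!}\sum_{\sigma}U_{\sigma}$; the toy sketch below counts the fully symmetric (bosonic) states of $N$ $d$-level systems and checks the count against the multiset formula $\binom{N+d-1}{d-1}$. It is a generic group-theory check, not the paper's rovibrational computation.

import numpy as np
from itertools import permutations
from math import comb, factorial

def symmetrizer(n_sites, d):
    """Projector onto the fully symmetric subspace of n_sites d-level systems."""
    dim = d ** n_sites
    P = np.zeros((dim, dim))
    for sigma in permutations(range(n_sites)):
        U = np.zeros((dim, dim))                  # permutation matrix for sigma
        for idx in range(dim):
            digits = np.base_repr(idx, base=d).zfill(n_sites)
            permuted = ''.join(digits[sigma[i]] for i in range(n_sites))
            U[int(permuted, base=d), idx] = 1.0
        P += U
    return P / factorial(n_sites)

n, d = 3, 3
P = symmetrizer(n, d)
print(int(round(np.trace(P))), comb(n + d - 1, d - 1))   # both give 10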
The $\eta$-pseudo-$PT$ symmetry theory explores the conditions under which non-Hermitian Hamiltonians can possess real spectra despite the violation of $PT$ symmetry; that is, the adjoint of $H$, denoted $H^{\dagger}$, is expressed as $H^{\dagger}=PTHPT$. This theory introduces a new symmetry operator, $\tilde{\eta}=PT\eta$, which acts on the Hilbert space. The $\eta$-pseudo-$PT$ symmetry condition requires the Hamiltonian to commute with the $\tilde{\eta}$ operator, leading to real eigenvalues. We discuss some general implications of our results for the coupled non-Hermitian harmonic oscillator.
In the context of non-Hermitian quantum mechanics, many systems are known to possess a pseudo-$PT$ symmetry, i.e., the non-Hermitian Hamiltonian $H$ is related to its adjoint $H^{\dagger}$ via the relation $H^{\dagger}=PTHPT$. We propose a derivation of pseudo-$PT$ symmetry and $\eta$-pseudo-Hermiticity simultaneously for time-dependent non-Hermitian Hamiltonians by introducing a new metric $\tilde{\eta}(t)=PT\eta(t)$ that does not satisfy the time-dependent quasi-Hermiticity relation but obeys the Heisenberg evolution equation. Here, we solve the SU(1,1) time-dependent non-Hermitian Hamiltonian, construct time-dependent solutions by employing this new metric, and discuss concrete physical applications of our results.
We propose a distributed quantum computing (DQC) architecture in which individual small-sized quantum computers are connected to a shared quantum gate processing unit (S-QGPU). The S-QGPU comprises a collection of hybrid two-qubit gate modules for remote gate operations. In contrast to conventional DQC systems, where each quantum computer is equipped with dedicated communication qubits, S-QGPU effectively pools the resources (e.g., the communication qubits) together for remote gate operations, and thus significantly reduces the cost of not only the local quantum computers but also the overall distributed system. Our preliminary analysis and simulation show that S-QGPU's shared resources for remote gate operations enable efficient resource utilization. When not all computing qubits (also called data qubits) in the system require simultaneous remote gate operations, S-QGPU-based DQC architecture demands fewer communication qubits, further decreasing the overall cost. Alternatively, with the same number of communication qubits, it can support a larger number of simultaneous remote gate operations more efficiently, especially when these operations occur in a burst mode.
Quantum many-body scars (QMBS) are exceptional energy eigenstates of quantum many-body systems associated with violations of thermalization for special non-equilibrium initial states. Their various systematic constructions require fine-tuning of local Hamiltonian parameters. In this work we demonstrate that the setting of long-range interacting quantum spin systems generically hosts robust QMBS. We analyze spectral properties upon raising the power-law decay exponent $\alpha$ of spin-spin interactions from the solvable permutationally-symmetric limit $\alpha=0$. First, we numerically establish that, although spectral signatures of chaos appear for infinitesimal $\alpha$, the towers of $\alpha=0$ energy eigenstates with large collective spin are smoothly deformed as $\alpha$ is increased and exhibit characteristic QMBS features. To elucidate the nature and fate of these states in larger systems, we introduce an analytical approach based on mapping the spin Hamiltonian onto a relativistic quantum rotor non-linearly coupled to an extensive set of bosonic modes. We exactly solve for the eigenstates of this interacting impurity model and show their self-consistent localization in large-spin sectors of the original Hamiltonian for $0<\alpha<d$. Our theory unveils the stability mechanism of such QMBS for arbitrary system size and predicts instances of its breakdown, e.g., near dynamical critical points or in the presence of semiclassical chaos, which we verify numerically in long-range quantum Ising chains. As a byproduct, we find a predictive criterion for the presence or absence of heating under periodic driving for $0<\alpha<d$, beyond existing Floquet-prethermalization theorems. Broader perspectives of this work range from independent applications of the technical toolbox developed here to informing experimental routes to metrologically useful multipartite entanglement.
Here we investigate the role of quantum interference in the quantum homogenizer, whose convergence properties model a thermalization process. In the original quantum homogenizer protocol, a system qubit converges to the state of identical reservoir qubits through partial-swap interactions, which allow interference between reservoir qubits. We design an alternative, incoherent quantum homogenizer, where each system-reservoir interaction is moderated by a control qubit using a controlled-swap interaction. We show that our incoherent homogenizer satisfies the essential conditions for homogenization, being able to transform a qubit from any state to any other state to arbitrary accuracy, with negligible impact on the reservoir qubits' states. Our results show that the convergence properties of homogenization machines that are important for modelling thermalization do not depend on coherence between qubits in the homogenization protocol. We then derive bounds on the resources required to re-use the homogenizers for performing state transformations. This demonstrates that both homogenizers are universal for any number of homogenizations, at an increased resource cost.
The quantum rate-distortion function plays a fundamental role in quantum information theory; however, there is currently no practical algorithm that can efficiently compute this function to high accuracy for moderate channel dimensions. In this paper, we show how symmetry reduction can significantly simplify common instances of the entanglement-assisted quantum rate-distortion problem. This allows for more efficient computation regardless of the numerical algorithm being used, and provides insight into the quantum channels that obtain the optimal rate-distortion trade-off. Additionally, we propose an inexact variant of the mirror descent algorithm for computing the quantum rate-distortion function with provable sublinear convergence rates. We show how this mirror descent algorithm is related to the Blahut-Arimoto and expectation-maximization methods previously used to solve similar problems in information theory. Using these techniques, we present the first numerical experiments computing a multi-qubit quantum rate-distortion function, and show that our proposed algorithm converges faster and to higher accuracy than existing methods.
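For orientation, the classical Blahut-Arimoto iteration that such mirror-descent methods generalize alternates between the conditional and marginal output distributions. The sketch below is the textbook classical version (probability vectors in place of density matrices), included only to illustrate that alternating structure; it is not the paper's quantum algorithm.

import numpy as np

def blahut_arimoto(p, d, beta, n_iters=500):
    """p: source distribution, d[x, xhat]: distortion matrix, beta > 0.
    Returns one (rate, distortion) point on the rate-distortion curve."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])           # output marginal
    for _ in range(n_iters):
        w = q * np.exp(-beta * d)                       # unnormalized q(xhat|x)
        w /= w.sum(axis=1, keepdims=True)
        q = p @ w                                       # updated output marginal
    D = float(np.sum(p[:, None] * w * d))
    R = float(np.sum(p[:, None] * w * np.log(w / q)))   # in nats
    return R, D

# Toy usage: binary uniform source with Hamming distortion,
# where the exact curve is R(D) = H(1/2) - H(D).
p = np.array([0.5, 0.5]); d = 1.0 - np.eye(2)
print(blahut_arimoto(p, d, beta=2.0))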
Non-Hermitian dynamics, as observed in photonic, atomic, electrical, and optomechanical platforms, holds great potential for sensing applications and signal processing. Recently, fully tunable nonreciprocal optical interaction has been demonstrated between levitated nanoparticles. Here, we use this tunability to investigate the collective non-Hermitian dynamics of two nonreciprocally and nonlinearly interacting nanoparticles. We observe parity-time symmetry breaking and, for sufficiently strong coupling, a collective mechanical lasing transition, where the particles move along stable limit cycles. This work opens up a research avenue of nonequilibrium multi-particle collective effects, tailored by the dynamic control of individual sites in a tweezer array.
Applying a Weyl-Stratonovich transform to the evolution equation of the Wigner function in an electromagnetic field yields a multidimensional gauge-invariant equation that is numerically very challenging to solve. In this work, we apply simplifying assumptions of linear electromagnetic fields and electron evolution in a plane (two-dimensional transport), which reduces the complexity and makes it possible to gain first experience with a gauge-invariant Wigner equation. We present an equation analysis and show that a finite-difference approach for solving the high-order derivatives allows for a reformulation into a Fredholm integral equation. The resolvent expansion of the latter contains consecutive integrals, which is favorable for Monte Carlo solution approaches. To that end, we present two stochastic (Monte Carlo) algorithms that evaluate averages of generic physical quantities or directly the Wigner function. The algorithms give rise to a quantum particle model, which interprets quantum transport in heuristic terms.
Quantum error mitigation refers to techniques that post-process errors occurring in a quantum system, reducing the expected error to achieve higher accuracy. Zero-noise extrapolation is one such method: it first amplifies the noise and then extrapolates the observable expectation of interest to the noise-free point. Conventionally, this method depends on the error model of the noise, since error rates, the parameters describing the degree of noise, are presumed in the noise-amplification procedure. In this paper, we show that the purity of the output states of noisy circuits can assist the extrapolation procedure and avoid the presumption of error rates. We also discuss the form of the fitting model used in extrapolation. We verify this method, and compare it with the ordinary zero-noise extrapolation method, via numerical simulations and experiments on the cloud-based quantum computer Quafu. It is shown that, with the assistance of purity, the extrapolation is more stable under random fluctuations of the measurements and under different kinds of noise.
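The extrapolation step itself is simple to sketch: measure an observable at several amplified noise levels and extrapolate a fit to zero. The toy below uses synthetic data and presumed noise-scale factors, i.e., precisely the ingredient that the purity-assisted variant proposed above replaces with measured purities.

import numpy as np

def zne(scale_factors, expectations, degree=2):
    """Polynomial (Richardson-style) extrapolation to the zero-noise point."""
    coeffs = np.polyfit(scale_factors, expectations, deg=degree)
    return np.polyval(coeffs, 0.0)

# Synthetic experiment: ideal value 1.0, decay rate 0.15 per unit noise scale,
# plus shot noise on each measured expectation value.
rng = np.random.default_rng(3)
scales = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
noisy = np.exp(-0.15 * scales) + rng.normal(scale=0.005, size=scales.size)
print(zne(scales, noisy))   # close to the ideal exp(0) = 1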
Using quantum algorithms, we obtain, for accuracy $\epsilon$, $0<\epsilon<1/4$, and confidence $1-\delta$, $0<\delta<1$, a new sample complexity upper bound of $O(\log(1/\delta)/\epsilon)$ as $\epsilon,\delta\rightarrow 0$ (up to a polylogarithmic factor in $\epsilon^{-1}$) for a general agnostic learning model, provided the hypothesis class is of finite cardinality. This greatly improves upon a corresponding sample complexity of asymptotic order $\Theta(\log(1/\delta)/\epsilon^{2})$, known in the literature to be attainable by means of classical (non-quantum) algorithms for an agnostic learning problem, also with a hypothesis set of finite cardinality (see, for example, Arunachalam and de Wolf (2018) and the classical statistical learning theory references cited there). Thus, for general agnostic learning, the quantum speedup in the rate of learning that we achieve is quadratic in $\epsilon^{-1}$ (up to a polylogarithmic factor).
Quantum information processing is the foundation of quantum technology. Protocols of quantum information share secrets between two distant parties for secure communication (quantum key distribution), teleport quantum states, and stand at the heart of quantum computation. While various protocols of quantum communication have already been realized, and even commercialized, their communication speed is generally low, limited by the narrow electronic bandwidth of the measurement apparatus in the MHz-to-GHz range, which is orders of magnitude lower than the optical bandwidth of available quantum optical sources (10-100 THz). We present and demonstrate an efficient method to process quantum information with such broadband sources in parallel over multiplexed frequency channels, using parametric homodyne detection for simultaneous measurement of all the channels. Specifically, we propose two basic protocols: a multiplexed continuous-variable quantum key distribution (CV-QKD) protocol and a multiplexed continuous-variable quantum teleportation protocol. We demonstrate the multiplexed CV-QKD protocol in a proof-of-principle experiment, where we successfully carry out QKD over 23 uncorrelated spectral channels and show the ability to detect eavesdropping in any of them. These multiplexed methods (and similar ones) will enable quantum processing to be carried out in parallel over hundreds of channels, potentially increasing the throughput of quantum protocols by orders of magnitude.
Circuit description languages are a class of quantum programming languages in which programs are classical and produce a description of a quantum computation, in the form of a quantum circuit. Since these programs can leverage all the expressive power of high-level classical languages, circuit description languages have been successfully used to describe complex and practical quantum algorithms, whose circuits, however, may involve many more qubits and gate applications than current quantum architectures can actually muster. In this paper, we present Proto-Quipper-R, a circuit description language endowed with a linear dependent type-and-effect system capable of deriving parametric upper bounds on the width of the circuits produced by a program. We prove both the standard type safety results and that the resulting resource analysis is correct with respect to a big-step operational semantics. We also show that our approach is expressive enough to verify realistic quantum algorithms.
Satellite-based greenhouse gas (GHG) sensing technologies play a critical role in the study of global carbon emissions and climate change. However, none of the existing satellite-based GHG sensing technologies can simultaneously achieve measurements of broad bandwidth, high temporal-spatial resolution, and high sensitivity. Recently, dual-comb spectroscopy (DCS) has been proposed as a superior candidate technology for GHG sensing because it can measure broadband spectra with high temporal-spatial resolution and high sensitivity. The main barrier to deploying DCS on satellites is the short open-air measurement distance achieved thus far: prior research had not implemented DCS over open-air paths beyond 20 km. Here, by developing a bistatic setup using time-frequency dissemination and high-power optical frequency combs, we have implemented DCS over a 113 km turbulent horizontal open-air path. Our experiment successfully measured GHG with a 7 nm spectral bandwidth at a 10 kHz frequency and achieved a CO2 sensing precision of <2 ppm in 5 minutes and <0.6 ppm in 36 minutes. Our results represent a significant step towards advancing the implementation of DCS as a satellite-based technology and improving technologies for GHG monitoring.
The RKKY interaction is an important theoretical model for the indirect exchange interaction in magnetic multilayers. The expression for the RKKY range function in three dimensions and lower was derived in the 1950s. However, the one-dimensional expression is still being studied in recent years owing to its strong singularity. By using an adiabatic limit of the retarded Green's function form that is directly related to the RKKY interaction in one dimension, we decompose the singularity and recover the range function. Furthermore, we show that, in the adiabatic limit, the RKKY interaction also induces one-dimensional spin pumping and a distance-dependent magnetic damping.