This white paper explores the detectability of intermediate-mass black holes (IMBHs) wandering in the Milky Way (MW) and massive local galaxies, with particular emphasis on the role of AXIS. IMBHs, with masses in the range $10^{3}-10^{6} \, M_\odot$, are commonly found at the centers of dwarf galaxies and may exist, as yet undiscovered, in the MW. Using model spectra for advection-dominated accretion flows (ADAFs), we calculate the expected fluxes emitted by a population of wandering IMBHs with a mass of $10^5 \, M_\odot$ in various MW environments and extrapolate our results to massive local galaxies. Around $40\%$ of the potential population of wandering IMBHs in the MW can be detected in an AXIS deep field. We propose criteria to aid in selecting IMBH candidates using already available optical surveys. We also show that IMBHs wandering in $>200$ galaxies within $10$ Mpc can be easily detected with AXIS when passing through dense galactic environments (e.g., molecular clouds and the cold neutral medium). In summary, we highlight the potential X-ray detectability of wandering IMBHs in local galaxies and provide insights for guiding future surveys. Detecting wandering IMBHs is crucial for understanding their demographics, evolution, and the merging history of galaxies.
Overdense regions at high redshift ($z \gtrsim 2$) are perfect laboratories to study the relations between environment and SMBH growth, and the AGN feedback processes on the surrounding galaxies and diffuse gas. In this white paper, we discuss how AXIS will 1) constrain the AGN incidence in protoclusters as a function of parameters such as redshift, overdensity, and mass of the structure; 2) search for low-luminosity and obscured AGN in the satellite galaxies of luminous QSOs at $z>6$, exploiting the large galaxy density around such biased objects; 3) probe the AGN feedback on the proto-ICM via the measurement of the AGN contribution to the gas ionization and excitation, and the detection of extended X-ray emission from the ionized gas and from radio jets; 4) discover new large-scale structures in the wide and deep AXIS surveys as spikes in the redshift distribution of X-ray sources. These goals can be achieved only with an X-ray mission with the capabilities of AXIS, ensuring a strong synergy with current and future state-of-the-art facilities at other wavelengths. This White Paper is part of a series commissioned for the AXIS Probe Concept Mission; additional AXIS White Papers can be found at this http URL with a mission overview at https://arxiv.org/abs/2311.00780.
We explore the potential of using the low-redshift Lyman-$\alpha$ (Ly$\alpha$) forest surrounding luminous red galaxies (LRGs) as a tool to constrain active galactic nuclei (AGN) feedback models. Our analysis is based on snapshots from the Illustris and IllustrisTNG simulations at a redshift of $z=0.1$. These simulations offer an ideal platform for studying the influence of AGN feedback on the gas surrounding galaxies, as they share the same initial conditions and underlying code but incorporate different feedback prescriptions. Both simulations show significant impacts of feedback on the temperature and density of the gas around massive halos. Following our previous work, we adjusted the UV background in both simulations to align with the observed number density of Ly$\alpha$ lines ($\rm dN/dz$) in the intergalactic medium and study the Ly$\alpha$ forest around massive halos hosting LRGs, at impact parameters ($r_{\perp}$) ranging from 0.1 to 100 pMpc. Our findings reveal that $\rm dN/dz$, as a function of $r_{\perp}$, is approximately 1.5 to 2 times higher in IllustrisTNG compared to Illustris out to $r_{\perp}$ of $\sim 10$ pMpc. To further assess whether existing data can effectively discern these differences, we search for archival data containing spectra of background quasars probing foreground LRGs. Through a feasibility analysis based on these data, we demonstrate that ${\rm dN/dz} (r_{\perp})$ measurements can distinguish between the feedback models of IllustrisTNG and Illustris at a significance exceeding 12$\sigma$. This underscores the potential of ${\rm dN/dz} (r_{\perp})$ measurements around LRGs as a valuable benchmark observation for discriminating between different feedback models.
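As a back-of-the-envelope illustration of such a feasibility estimate, the sketch below converts the difference between two model ${\rm dN/dz}(r_\perp)$ curves into a Gaussian-equivalent distinguishing significance via a simple $\chi^2$ sum; the curve shapes, bin count, and error bars are invented placeholders rather than the paper's measurements.

```python
import numpy as np

# Hypothetical dN/dz(r_perp) curves for the two simulations (placeholder shapes).
r_perp = np.logspace(-1, 1, 10)                  # impact parameter bins [pMpc]
dndz_illustris = 40.0 * (r_perp / 10.0) ** 0.3   # arbitrary normalization/slope
dndz_tng = 1.75 * dndz_illustris                 # TNG ~1.5-2x higher within ~10 pMpc
sigma = 0.15 * dndz_tng                          # assumed 15% error per bin

# chi^2 distance between the two models given the errors; sqrt(chi^2) is a
# rough Gaussian-equivalent significance for telling them apart.
chi2 = np.sum((dndz_tng - dndz_illustris) ** 2 / sigma ** 2)
print(f"distinguishing significance ~ {np.sqrt(chi2):.1f} sigma")  # ~9 sigma here
```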
Galaxies are biased tracers of the underlying cosmic web, which is dominated by dark matter components that cannot be directly observed. The relationship between dark matter density fields and galaxy distributions can be sensitive to assumptions in cosmology and to the astrophysical processes embedded in galaxy formation models, which remain uncertain in many aspects. Based on state-of-the-art galaxy formation simulation suites with varied cosmological parameters and sub-grid astrophysics, we develop a diffusion generative model to predict the unbiased posterior distribution of the underlying dark matter fields from given stellar mass fields, while marginalizing over the uncertainties in cosmology and galaxy formation.
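A minimal sketch of one common way to train such a model (a DDPM-style noise-prediction objective, conditioned on the stellar mass field); the network `eps_model`, the noise schedule, and the field tensors are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical noise-prediction network: eps_model(x_t, t, cond) -> predicted noise.
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(eps_model, dm_field, stellar_field):
    """One DDPM training step: noise the dark matter field x_0 to x_t and train
    the network to predict the injected noise, conditioned on the stellar field."""
    b = dm_field.shape[0]
    t = torch.randint(0, T, (b,), device=dm_field.device)
    a_bar = alphas_bar.to(dm_field.device)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(dm_field)
    x_t = a_bar.sqrt() * dm_field + (1.0 - a_bar).sqrt() * eps
    return F.mse_loss(eps_model(x_t, t, stellar_field), eps)
```

Sampling the trained model many times for one stellar mass field then yields draws from the posterior over dark matter fields, which is what allows the marginalization over cosmology and sub-grid physics described above.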
The X-ray spectrum of the Coma galaxy cluster was studied using data from the XMM-Newton observatory. We combined 7 observations performed with the MOS camera of XMM-Newton in the 40'x40' region centered on the Coma cluster. The analyzed observations were performed in 2000-2005 and have a total duration of 196 ks. We focus on the analysis of the MOS camera spectra because they are less affected by the strong instrumental line-like background. The obtained spectrum was fitted with a model including contributions from the Solar system/Milky Way hot plasma and a power-law X-ray background. The contribution of the instrumental background was modeled as a power law (not convolved with the effective area) plus a number of Gaussian lines. The contribution from the Coma cluster was modeled with single-temperature hot plasma emission. In addition, we searched for possible non-thermal radiation in the vicinity of the center of the Coma cluster, originating e.g. from synchrotron emission of relativistic electrons in a turbulent magnetic field. We compared the results with previous works by other authors and with spectra obtained from other instruments operating in the similar 1-10 keV energy range. This careful and detailed spectral analysis is a necessary step toward our future work: searching for manifestations of axion-like particles in the Coma cluster.
In this work, we quantify the cosmological signatures of dark energy radiation -- a novel description of dark energy, which proposes that the dynamical component of dark energy comprises a thermal bath of relativistic particles sourced by thermal friction from a slowly rolling scalar field. For a minimal model with particle production emerging from first principles, we find that the abundance of radiation sourced by dark energy can be as large as $\Omega_{\text{DER}} = 0.03$, exceeding the bounds on relic dark radiation by three orders of magnitude. Although the background and perturbative evolution of dark energy radiation is distinct from quintessence, we find that current and near-future cosmic microwave background and supernova data will not distinguish these models of dark energy. We also find that our constraints on all models are dominated by their impact on the expansion rate of the Universe. Considering extensions that allow the dark radiation to populate neutrinos, axions, and dark photons, we evaluate the direct detection prospects of a thermal background composed of these candidates consistent with cosmological constraints on dark energy radiation. Our study indicates that a resolution of $\sim 6 \, \text{meV}$ is required to achieve sensitivity to relativistic neutrinos compatible with dark energy radiation in a neutrino capture experiment on tritium. We also find that dark matter axion experiments lack sensitivity to a relativistic thermal axion background, even if enhanced by dark energy radiation, and dedicated search strategies are required to probe new parameter space. We derive constraints arising from a dark photon background from oscillations into visible photons, and find that several orders of magnitude of viable parameter space can be explored with planned experimental programs such as DM Radio and LADERA.
Anisotropy properties -- halo spin, shape, position offset, velocity offset, and orientation -- are an important family of dark matter halo properties that indicate the level of directional variation of the internal structures of haloes. These properties reflect the dynamical state of haloes, which in turn depends on their mass assembly history. In this work, we study the evolution of anisotropy properties in response to merger activity using the IllustrisTNG simulations. We find that the response trajectories of the anisotropy properties significantly deviate from secular evolution. These trajectories have the same qualitative features and timescales across a wide range of merger and host properties. We propose explanations for the behaviour of these properties and connect their evolution to the relevant stages of merger dynamics. We measure the relevant dynamical timescales. We also explore the dependence of the strength of the response on the time of the merger, the merger ratio, and the mass of the main halo. These results provide insight into the physics of halo mergers and their effects on the statistical behaviour of halo properties. This study paves the way towards a physical understanding of scaling relations, particularly of how systematics in their scatter are connected to the mass assembly histories of haloes.
The post-inflationary Universe can pass through a long epoch of effective matter-dominated expansion. This era may allow for both the parametric amplification of initial fluctuations and the gravitational collapse of inflaton perturbations. We perform first-of-their-kind high-resolution simulations that span the resonant phase and the subsequent gravitational collapse of the inflaton field by seguing from a full Klein-Gordon treatment of resonance to a computationally efficient Schr\"odinger-Poisson description that accurately captures the gravitational dynamics when most quanta are nonrelativistic. We consider a representative example in which resonance generates $\mathcal{O}(10^{-1})$ overdensities and gravitational collapse follows promptly as resonance ends. We observe the formation of solitonic cores inside inflaton halos and complex gravitational dynamics on scales of $10^{-27}\,\mathrm{m}$, greatly extending the possible scope of nonlinear post-inflationary gravitational dynamics.
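For intuition, here is a minimal 1D split-step (kick-drift-kick) Schr\"odinger-Poisson integrator of the kind applicable once most quanta are nonrelativistic; the units (hbar/m = 1, 4 pi G = 1), grid, timestep, and seeded overdensity are toy choices, not the paper's simulation setup.

```python
import numpy as np

N, L, dt = 256, 1.0, 1e-5
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi = (1.0 + 0.1 * np.cos(2 * np.pi * x / L)).astype(complex)  # seeded overdensity
psi /= np.sqrt(np.mean(np.abs(psi) ** 2))                      # mean density = 1

def potential(psi):
    """Solve Poisson's equation nabla^2 V = |psi|^2 - 1 in Fourier space."""
    rho_k = np.fft.fft(np.abs(psi) ** 2 - 1.0)
    V_k = np.where(k != 0, -rho_k / np.maximum(k ** 2, 1e-30), 0.0)
    return np.real(np.fft.ifft(V_k))

for _ in range(1000):
    psi *= np.exp(-0.5j * dt * potential(psi))                        # half kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # drift
    psi *= np.exp(-0.5j * dt * potential(psi))                        # half kick
```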
We analyse theories that do not have a de Sitter vacuum and cannot lead to slow-roll quintessence, but which nevertheless support a transient era of accelerated cosmological expansion due to interactions between a scalar $\phi$ and either a hidden sector thermal bath, which evolves as Dark Radiation, or an extremely light component of Dark Matter. We show that simple models can explain the present-day Dark Energy of the Universe consistently with current observations. This is possible both when $\phi$'s potential has a hilltop form and when it has a steep exponential run-away, as might naturally arise from string theory. We also discuss a related theory of multi-field quintessence, in which $\phi$ is coupled to a sector that sources a subdominant component of Dark Energy, which overcomes many of the challenges of slow-roll quintessence.
With the detection of black hole mergers by the LIGO gravitational wave observatory, there has been increasing interest in the possibility that dark matter may be in the form of solar mass primordial black holes. One of the predictions implicit in this idea is that compact clouds in the broad emission line regions of high redshift quasars will be microlensed, leading to changes in line structure and the appearance of new emission features. In this paper the effect of microlensing on the broad emission line region is reviewed by reference to gravitationally lensed quasar systems where microlensing of the emission lines can be unambiguously identified. It is then shown that although changes in Seyfert galaxy line profiles occur on timescales of a few years, these galaxies are too nearby for a significant chance that they could be microlensed, and the changes are plausibly attributed to intrinsic variations in line structure. In contrast, in a sample of 53 high redshift quasars, 9 quasars show large changes in line profile at a rate consistent with microlensing. These changes occur on a timescale an order of magnitude too short to be associated with the dynamics of the emission line region. The main conclusion of the paper is that the observed changes in quasar emission line profiles are consistent with microlensing by a population of solar mass compact bodies making up the dark matter, although other explanations like intrinsic variability are possible. Such bodies are most plausibly identified as primordial black holes.
We propose a new scheme to regularize the stress-energy tensor and the two-point function of free quantum scalar fields propagating in cosmological spacetimes. We generalize the adiabatic regularization method by introducing two additional mass scales not present in the standard program. If we set them to the order of the physical scale of the problem, we obtain ultraviolet-regularized quantities that do not distort the amplitude of the power spectra at the infrared momentum scales amplified by the non-adiabatic expansion of the universe. This is not ensured by the standard adiabatic method. We also show how our proposed subtraction terms can be interpreted as a renormalization of coupling constants in Einstein's equations. Finally, we illustrate our proposed regularization method in two scenarios of cosmological interest: de Sitter inflation and geometric reheating.
It has recently been suggested that black holes (BHs) may exhibit growth of their mass with time, such that their mass is proportional to the cosmological scale factor to the power $n$, with suggested values $n \sim 3$ for supermassive BHs in elliptical galaxies. Here we test these predictions with stellar mass BHs in X-ray binaries using their masses and ages. We perform two sets of tests to assess the compatible values of $n$. First, we assume that no compact object grows over the Tolman-Oppenheimer-Volkoff limit, which marks the borderline between neutron stars and BHs. We show that half of the BHs would have been born with a mass below this limit if $n=3$ applies. The possibility that all BHs were born above the limit is rejected at $4\,\sigma$ if $n=3$ applies. In the second test, we assume that the masses of BHs at their formation stay the same over cosmic history. We compare the mass distribution of the youngest BHs, which could not have grown yet, to their older counterparts. The distributions are compatible for $n = -0.8^{+1.2}_{-4.5}$, with $n=3$ excluded at $87\,\%$ confidence. This result may be biased, as massive BHs tend to have a massive companion. Correcting for this bias yields $n\sim 0$. We conclude that the mass and age estimates of stellar mass BHs are incompatible with cosmological growth with $n \sim 3$ and favor their mass not changing with time.
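The first test amounts to simple arithmetic: under $M \propto a^n$ (with $a_0 = 1$), a BH observed today with mass $M_{\rm obs}$ and age $t_{\rm age}$ would have been born with $M_{\rm birth} = M_{\rm obs}/(1+z_{\rm birth})^n$. A sketch with astropy, using illustrative numbers rather than the paper's sample:

```python
import astropy.units as u
from astropy.cosmology import Planck18, z_at_value

def birth_mass(m_obs_msun, age_gyr, n=3.0):
    """Mass at formation under M proportional to a^n, taking a_0 = 1 today."""
    t_birth = Planck18.age(0) - age_gyr * u.Gyr
    z_birth = float(z_at_value(Planck18.age, t_birth))
    return m_obs_msun / (1.0 + z_birth) ** n

# e.g. a 7 Msun BH born ~10 Gyr ago starts far below the ~2.2 Msun TOV limit if n=3:
print(f"M_birth ~ {birth_mass(7.0, 10.0):.2f} Msun")
```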
We study scatter-like interactions between neutrinos and dark matter in light of different combinations of temperature, polarization and lensing data released by three independent CMB experiments - the Planck satellite, the Atacama Cosmology Telescope (ACT), and the South Pole Telescope (SPT) - in conjunction with Baryon Acoustic Oscillation (BAO) measurements. We apply two different statistical methodologies. Alongside the usual marginalization technique, we cross-check all the results through a profile likelihood analysis. As a first step, working under the assumption of massless neutrinos, we perform a comprehensive (re-)analysis aimed at assessing the validity of some recent results hinting at a mild preference for non-vanishing interactions from small-scale CMB data. We find the results already documented in the literature to be remarkably robust, confirming that interactions with a strength $u_{\nu\rm{DM}} \sim 10^{-5} - 10^{-4}$ appear to be globally favored by ACT (both alone and in combination with Planck). This result is corroborated by the inclusion of additional data, such as the recent ACT-DR6 lensing likelihood, as well as by the profile likelihood analysis. Interestingly, a fully consistent preference for interactions emerges from SPT as well, although it is weaker than the one obtained from ACT. As a second step, we repeat the same analysis considering neutrinos as massive particles. Despite the larger parameter space, all the hints pointing towards interactions are also confirmed in this more realistic case. In addition, we report a very mild preference for interactions in Planck+BAO alone (not found in the massless case) which aligns with the small-scale data. While this latter result is not fully confirmed by the profile likelihood analysis, the profile distribution does confirm that interactions are not disfavoured by Planck.
Sterile neutrinos only interact with the Standard Model through the neutrino sector, and thus represent a simple dark matter (DM) candidate with many potential astrophysical and cosmological signatures. Recently, sterile neutrinos produced through self-interactions of active neutrinos have received attention as a particle candidate that can yield the entire observed DM relic abundance without violating the most stringent constraints from X-ray observations. We examine the consistency of this production mechanism with the abundance of small-scale structure in the universe, as captured by the population of ultra-faint dwarf galaxies orbiting the Milky Way, and derive a lower bound on the sterile-neutrino particle mass of $37$ keV. Combining these results with previous limits from particle physics and astrophysics excludes $100\%$ sterile-neutrino DM produced by a strong neutrino self-coupling mediated by a heavy ($\gtrsim 1~\mathrm{GeV}$) scalar particle; however, the data permit sterile-neutrino DM production via a light mediator.
A joint hadronic model is shown to quantitatively explain the observations of diffuse radio emission from galaxy clusters in the form of minihalos, giant halos, relics, and their hybrid, transitional stages. Cosmic-ray diffusion of order $D\sim 10^{31\text{--}32}\text{ cm}^2\text{ s}^{-1}$, inferred independently from relic energies, the spatial variability of giant-halo spectra, and the spectral evolution of relics, reproduces the observed spatio-spectral distributions, explains the recently discovered mega-halos as enhanced peripheral magnetization, and quenches electron (re)acceleration by weak shocks or turbulence. For instance, the hard-to-soft evolution along secondary-electron diffusion explains both the soft spectra in most halo peripheries and relic downstreams, and the hard spectra in most halo centres and relic edges, where the photon index can reach $\alpha\simeq -0.5$ regardless of the Mach number $\mathcal{M}$ of the coincident shock. Such spatio-spectral modeling, recent $\gamma$-ray observations, and additional accumulated evidence are thus shown to support a previous claim (Keshet 2010) that the seamless transitions among minihalos, giant halos, and relics, their similar energetics, integrated spectra, and delineating discontinuities, the inconsistent $\mathcal{M}$ inferred from radio vs. X-rays in leptonic models, and additional observations, all indicate that these diffuse radio phenomena are manifestations of the same cosmic-ray ion population, with no need to invoke less natural alternatives.
Primordial magnetic fields (PMFs) can enhance baryon perturbations on scales below the photon mean free path. However, a magnetically driven baryon fluid becomes turbulent near recombination, thereby damping out baryon perturbations below the turbulence scale. In this letter, we show that the initial growth in baryon perturbations gravitationally induces growth in the dark matter perturbations, which are unaffected by turbulence and eventually collapse to form $10^{-11}-10^3\ M_{\odot}$ dark matter minihalos. If the magnetic fields purportedly detected in blazar observations are PMFs generated after inflation and have a Batchelor spectrum, then such PMFs could potentially produce dark matter minihalos.
In this work we revisit power-law, $\frac{1}{M^2}R^\beta$, inflation to find the deviations from $R^2$ inflation allowed by current CMB and LSS observations. We compute the power spectra for scalar and tensor perturbations numerically and perform an MCMC analysis to put constraints on the parameters $M$ and $\beta$ from Planck-2018, BICEP3, and other LSS observations. We consider a general reheating scenario and also vary the number of e-foldings during inflation, $N_{pivot}$, along with the other parameters. We find $\beta = 1.966^{+0.035}_{-0.042}$, $M= \left(3.31^{+5}_{-2}\right)\times 10^{-5}$ and $N_{pivot} = 41^{+10}_{-10}$ at $95\%$ C.L. This indicates that the current observations allow deviations from Starobinsky inflation. The scalar spectral index, $n_s$, and tensor-to-scalar ratio, $r$, derived from these parameters are consistent with the Planck and BICEP3 observations.
We study whether the signal seen by pulsar timing arrays (PTAs) may originate from gravitational waves (GWs) induced by large primordial perturbations. Such perturbations may be accompanied by a sizeable primordial black hole (PBH) abundance. We improve existing analyses and show that PBH overproduction disfavors Gaussian scenarios for scalar-induced GWs at $2\sigma$ and single-field inflationary scenarios, accounting for non-Gaussianity, at $3\sigma$ as the explanation of the most constraining NANOGrav 15-year data. This tension can be relaxed in models where non-Gaussianities suppress the PBH abundance. On the flip side, the PTA data does not constrain the abundance of PBHs.
Recently, pulsar timing array (PTA) experiments have provided compelling evidence for the existence of a nanohertz stochastic gravitational wave background (SGWB). In this work, we demonstrate that cosmic string loops generated by cosmic global strings offer a viable explanation for the observed nanohertz SGWB data, requiring a cosmic string tension parameter of $\log(G\mu) \sim -12$ and a loop number density of $\log N \sim 4$. Additionally, we revisit the impact of cosmic string loops on the abundance of massive galaxies at high redshifts. However, our analysis reveals challenges in identifying a consistent parameter space that can concurrently explain both the SGWB data and observations from the James Webb Space Telescope. This indicates the necessity of either extending the existing model employed in this research or acknowledging distinct physical origins for these two phenomena.
Recently, Pulsar Timing Array (PTA) collaborations have detected a stochastic gravitational wave background (SGWB) at nano-Hz frequencies, with domain wall (DW) networks proposed as potential sources. To be cosmologically viable, DWs must annihilate before dominating the universe's energy budget, thus generating a SGWB. While sub-horizon DWs shrink and decay rapidly, causality requires DWs of super-horizon size to continue growing until they reach the Hubble horizon. Those entering the horizon latest can be heavier than a Hubble patch and collapse into primordial black holes (PBHs). By applying percolation theory, we make a first estimate of the PBH abundance originating from DW networks. We conduct a Bayesian analysis of the PTA signal, interpreting it as the outcome of a SGWB from DW networks and accounting for PBH overproduction as a prior. We include contributions from supermassive black hole binaries along with their astrophysical priors. Our findings indicate that DWs, as the proposed source of the PTA signal, result in the production of PBHs about ten times heavier than the Sun. Mergers of binaries formed by these PBHs generate a second SGWB in the kilo-Hz domain, which could be observable in ongoing or planned Earth-based interferometers if the correlation length of the DW network is greater than approximately 60$\%$ of the cosmic horizon, $L \gtrsim 0.6 t$.
NANOGrav, EPTA, PPTA, and CPTA have announced evidence for a stochastic signal in their latest data sets. Supermassive black hole binaries (SMBHBs) are expected to be the most promising gravitational-wave (GW) sources for pulsar timing arrays. Assuming an astro-informed formation model, we use the NANOGrav 15-year data set to constrain the gravitational wave background (GWB) from SMBHBs. Our results prefer a large turn-over eccentricity of the SMBHB orbit at the time GWs begin to dominate the SMBHB evolution. Furthermore, the GWB spectrum is extrapolated to the space-borne GW detector frequency band by including the inspiral-merger-cutoff phases of SMBHBs, and should be detectable by LISA, Taiji and TianQin in the near future.
We perform a Bayesian analysis of the NANOGrav 15 yr and IPTA DR2 pulsar timing residuals and show that the recently detected stochastic gravitational-wave background (SGWB) is compatible with a SGWB produced by bubble dynamics during a cosmological first-order phase transition. The timing data suggest that the phase transition would occur around the QCD confinement temperature and would have a slow rate of completion. This scenario can naturally lead to the abundant production of primordial black holes (PBHs) with solar masses. These PBHs can potentially be detected by the current and next-generation gravitational wave detectors LIGO-Virgo-KAGRA, Einstein Telescope, and Cosmic Explorer, by astrometry with Gaia, and by 21-cm surveys.
Combining cosmological probes has consolidated the standard cosmological model with percent precision, but some tensions have recently emerged when certain parameters are estimated from the local or primordial Universe. The origin of this behaviour is still under debate; it is therefore crucial to study as many probes as possible to cross-check the results with independent methods and provide additional pieces of information for the cosmological puzzle. In this work, by combining several late-Universe probes ($0<z<10$), namely Type Ia SuperNovae, Baryon Acoustic Oscillations, Cosmic Chronometers, and Gamma-Ray Bursts, we aim to derive cosmological constraints independently of local or early-Universe anchors. To test the standard cosmological model and its various extensions, considering an evolving Dark Energy Equation of State and the curvature as a free parameter, we analyse each probe individually and in all their possible combinations. Assuming a flat $\Lambda$CDM model, the full combination of probes provides $H_0=67.2^{+3.4}_{-3.2}$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.325\pm0.015$ (68$\%$ C.L.). Considering a flat $w$CDM model, we measure $w_0=-0.91^{+0.07}_{-0.08}$ (68$\%$ C.L.), while by relaxing the flatness assumption ($\Lambda$CDM model, 95$\%$ C.L.) we obtain $\Omega_k=0.125^{+0.167}_{-0.165}$. Finally, we analytically characterize the degeneracy directions and the relative orientation of the probes' contours. By calculating the Figure-of-Merit, we quantify the synergies among independent methods, estimate the constraining power of each probe, and identify which provides the best contribution to the inference process. While awaiting new cosmological surveys, this study confirms the need for new emerging probes in the landscape of modern cosmology.
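A sketch of the Figure-of-Merit bookkeeping, taking ${\rm FoM} \propto 1/\sqrt{\det C}$ for a 2x2 parameter covariance $C$ (e.g. in the $\Omega_m$-$w_0$ plane) and combining probes at the Fisher level; the covariances below are invented for illustration only.

```python
import numpy as np

def figure_of_merit(cov):
    """FoM ~ 1/sqrt(det C): larger means tighter, more orthogonal contours."""
    return 1.0 / np.sqrt(np.linalg.det(np.asarray(cov)))

# Illustrative (Omega_m, w0) covariances for two probes with different degeneracies:
c1 = np.array([[0.015**2, 0.8 * 0.015 * 0.08],
               [0.8 * 0.015 * 0.08, 0.08**2]])    # strongly degenerate probe
c2 = np.array([[0.02**2, -0.5 * 0.02 * 0.10],
               [-0.5 * 0.02 * 0.10, 0.10**2]])    # opposite degeneracy direction
c12 = np.linalg.inv(np.linalg.inv(c1) + np.linalg.inv(c2))  # Fisher combination

# Crossed degeneracy directions boost the combined FoM beyond either probe alone.
print([round(figure_of_merit(c), 1) for c in (c1, c2, c12)])
```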
A cosmological scenario in which the onset of neutrino free streaming in the early Universe is delayed until close to the epoch of matter-radiation equality has been shown to provide a good fit to some cosmic microwave background (CMB) data, while being somewhat disfavored by Planck CMB polarization data. To clarify this situation, we investigate in this paper CMB-independent constraints on this scenario from the Full Shape of the galaxy power spectrum. Although this scenario predicts significant changes to the linear matter power spectrum, we find that it can provide a good fit to the galaxy power spectrum data. Interestingly, we show that the data display a modest preference for a delayed onset of neutrino free streaming over the standard model of cosmology, which is driven by the galaxy power spectrum data on mildly non-linear scales. This conclusion is supported by both profile likelihood and Bayesian exploration analyses, showing the robustness of the results. Compared to the standard cosmological paradigm, this scenario predicts a significant suppression of structure on subgalactic scales. While our analysis relies on the simplest cosmological representation of neutrino self-interactions, we argue that this persistent, and somewhat consistent, picture in which neutrino free streaming is delayed motivates the exploration of particle models capable of reconciling all CMB, large-scale structure, and laboratory data.
This paper presents the third data release of the INvestigating Stellar Population In RElics (INSPIRE) project, comprising 52 ultra-compact massive galaxies (UCMGs) observed with the X-Shooter spectrograph. We measure integrated stellar velocity dispersions, [Mg/Fe] abundances, ages, and metallicities for all the INSPIRE objects. We thus infer star formation histories and confirm the existence of a degree of relicness (DoR), defined in terms of the fraction of stellar mass formed by $z=2$, the time at which a galaxy has assembled 75\% of its mass, and the final assembly time. Objects with a high DoR assembled their stellar mass at early epochs, while low-DoR objects show a non-negligible fraction of later-formed populations and hence a spread in ages and metallicities. A higher DoR correlates with larger [Mg/Fe], super-solar metallicity, and larger velocity dispersion values. The 52 UCMGs span a large range of DoR, from 0.83 down to 0.06, with 38 of them having formed more than 75\% of their mass by $z=2$. Of these, nine are extreme relics (DoR$>0.7$), since they formed the totality ($>99\%$) of their stellar mass by redshift $z=2$. The remaining 14 UCMGs cannot be considered relics, as they are characterised by more extended star formation histories. With INSPIRE, we have built the first sizeable sample of relics outside the local Universe, up to $z\sim0.4$, increasing the number of confirmed relics by a factor of $>10$ and opening up an important window on the mass assembly of massive galaxies in the high-z Universe.
There is compelling evidence that the Universe is undergoing a late phase of accelerated expansion. One of the simplest explanations for this behaviour is the presence of dark energy. A plethora of microphysical models for dark energy have been proposed. The hope is that, with the ever increasing precision of cosmological surveys, it will be possible to precisely pin down the model. We show that this is unlikely and that, at best, we will have a phenomenological description for the microphysics of dark energy. Furthermore, we argue that the current phenomenological prescriptions are ill-equipped for shedding light on the fundamental theory of dark energy.
We predict the X-ray background (XRB) expected from the population of quasars detected by the JWST spectroscopic surveys over the redshift range $z \sim 4-7$. We find that the measured UV emissivities, in combination with a best-fitting quasar SED template, imply a $\sim 10$ times higher unresolved X-ray background than is allowed by current experimental constraints. We illustrate the difficulty of simultaneously matching the faint end of the quasar luminosity function and the X-ray background constraints. We discuss possible origins and consequences of this discrepancy.
We study how the structural properties of globular clusters and dwarf galaxies are linked to their orbits in the Milky Way halo. From the inner to the outer halo, orbital energy increases and stellar systems gradually move out of internal equilibrium: in the inner halo, high-surface-brightness globular clusters are at pseudo-equilibrium, while further away, low-surface-brightness clusters and dwarfs appear more tidally disturbed. Dwarf galaxies are the latest arrivals into the halo, as indicated by their large orbital energies and pericenters, and have had no time for more than one orbit. Their (gas-rich) progenitors likely lost their gas during their recent arrival in the Galactic halo. If dwarfs are at equilibrium with their dark matter (DM) content, the DM density should anti-correlate with pericenter. However, the transformation of DM-dominated dwarfs from gas-rich rotation-supported into gas-poor dispersion-supported systems is unlikely to be accomplished during a single orbit. We suggest instead that the above anti-correlation arises from the combination of ram-pressure stripping and Galactic tidal shocks. Recent gas removal leads to an expansion of their stellar content caused by the associated loss of gravity, making them sufficiently fragile to be transformed near pericenter passage. Out-of-equilibrium dwarfs would explain the observed anti-correlation of kinematics-based DM density with pericenter without invoking the DM density itself, calling into question its previous estimates. Ram-pressure stripping and tidal shocks may contribute to the dwarf velocity dispersion excess. This scenario predicts the presence of numerous stars in their outskirts and a few young stars in their cores.
Most Milky Way dwarf galaxies are much less bound to their host than are relics of Gaia-Sausage-Enceladus and Sgr. These dwarfs are expected to have fallen into the Galactic halo less than 3 Gyr ago, and will therefore have undergone no more than one full orbit. Here, we have performed hydrodynamical simulations of this process, assuming that their progenitors are gas-rich, rotation-supported dwarfs. We follow their transformation through interactions with the hot corona and gravitational field of the Galaxy. Our dedicated simulations reproduce the structural properties of three dwarf galaxies: Sculptor, Antlia II and, with somewhat lower accuracy, Crater II. This includes reproducing their large velocity dispersions, which are caused by ram-pressure stripping and Galactic tidal shocks. Differences between dwarfs can be interpreted as due to different orbital paths, as well as to different initial conditions for their progenitor gas and stellar contents. However, we failed to suppress the rotational support of our Sculptor analog within a single orbit if it is fully dark-matter dominated. In addition, we have found that classical dwarf galaxies like Sculptor may have stellar cores sufficiently dense to survive the pericenter passage through adiabatic contraction. In contrast, our Antlia II and Crater II analogs are tidally stripped, explaining their large sizes, extremely low surface brightnesses, and velocity dispersions. This modeling explains differences between dwarf galaxies by reproducing them as out-of-equilibrium stellar systems caught at different stages of their transformation.
One of the main sources of uncertainty in modern cosmology is the present rate of the Universe's expansion, H0, called the Hubble constant. Once again, different observational techniques yield different results, causing a new "Hubble tension". In the present work we review the historical roots of the Hubble constant from the beginning of the twentieth century, when modern cosmology started, to the present. We develop the arguments that gave rise to the importance of measuring the expansion of the Universe, describe its discovery, and discuss the different pioneering works to measure it. There has been a long dispute on this matter, even in the present epoch, which is marked by high-precision instrumentation and therefore smaller uncertainties in the relevant parameters. It is now once again necessary to conduct a careful and critical revision of the different methods before one invokes new physics to solve the so-called Hubble tension.
Gravitational waves (GWs) from chirping binary black holes (BBHs) provide unique opportunities to test general relativity (GR) in the strong-field regime. However, testing GR can be challenging when incomplete physical modeling of the expected signal gives rise to systematic biases. In this study, we investigate for the first time the potential influence of wave effects in gravitational lensing (which we refer to as microlensing) on tests of GR using GWs. We utilize an isolated point-lens model for microlensing, with lens masses ranging from $10$ to $10^5~$M$_\odot$, and base our conclusions on an astrophysically motivated population of BBHs in the LIGO-Virgo detector network. Our analysis centers on two theory-agnostic tests of gravity: the inspiral-merger-ringdown consistency test (IMRCT) and the parameterized tests. Our findings reveal two key insights: First, microlensing can significantly bias GR tests, with a confidence level exceeding $5\sigma$. Notably, substantial deviations from GR $(\sigma > 3)$ tend to align with a strong preference for microlensing over an unlensed signal, underscoring the need for microlensing analysis before claiming any erroneous GR deviations. Nonetheless, we do encounter scenarios where deviations from GR remain significant ($1 < \sigma < 3$), yet the Bayes factor lacks the strength to confidently assert microlensing. Second, deviations from GR correlate with pronounced interference effects, which appear when the GW frequency ($f_\mathrm{GW}$) becomes comparable to the inverse of the time delay between microlens-induced images ($t_\mathrm{d}$). These false deviations peak in the wave-dominated region and fade where $f_\mathrm{GW}\cdot t_\mathrm{d}$ deviates significantly from unity. Our findings apply broadly to any microlensing scenario, extending beyond specific models and parameter spaces, as we relate the observed biases to the fundamental characteristics of lensing.
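The interference condition can be made concrete with the standard geometric-optics time delay between the two images of an isolated point lens, $t_{\rm d} = (4GM_{Lz}/c^3)\left[y\sqrt{y^2+4}/2 + \ln\!\big((\sqrt{y^2+4}+y)/(\sqrt{y^2+4}-y)\big)\right]$, where $M_{Lz}$ is the redshifted lens mass and $y$ the impact parameter in Einstein units. A sketch evaluating $f_\mathrm{GW}\cdot t_\mathrm{d}$ for illustrative values:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def point_lens_time_delay(m_lz_msun, y):
    """Time delay between the two point-lens images; m_lz_msun is the
    redshifted lens mass, y the impact parameter in Einstein units."""
    s = np.sqrt(y * y + 4.0)
    return 4 * G * m_lz_msun * Msun / c**3 * (0.5 * y * s + np.log((s + y) / (s - y)))

# Interference (hence potential GR-test bias) is strongest when f_GW * t_d ~ 1;
# the lens mass, impact parameter, and GW frequency below are illustrative.
m_lz, y, f_gw = 1e3, 1.0, 100.0
td = point_lens_time_delay(m_lz, y)
print(f"t_d = {td*1e3:.1f} ms, f_GW * t_d = {f_gw * td:.1f}")
```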
This study presents the capabilities of the AXIS telescope in estimating redshifts from X-ray spectra alone (X-ray redshifts, XZs). Through extensive simulations, we establish that AXIS observations enable reliable XZ estimates for more than 5500 obscured Active Galactic Nuclei (AGN) up to redshift $z\sim 6$ in the proposed deep (7 Ms) and intermediate (375 ks) surveys. Notably, at least 1600 of them are expected to be in the Compton-thick regime ($\log N_H/\mathrm{cm^{-2}}\geq 24$), underscoring the pivotal role of AXIS in sampling these elusive objects that continue to be poorly understood. XZs provide an efficient alternative for optical/infrared faint sources, overcoming the need for time-consuming spectroscopy, potential limitations of photometric redshifts, and potential issues related to multi-band counterpart association. This approach will significantly enhance the accuracy of constraints on the X-ray luminosity function and obscured AGN fractions up to high redshift. This White Paper is part of a series commissioned for the AXIS Probe Concept Mission; additional AXIS White Papers can be found at the AXIS website (this http URL) with a mission overview here: arXiv:2311.00780.
Hot gas around a supermassive black hole (SMBH) should be captured within the gravitational "sphere of influence", characterized by the Bondi radius. Deep Chandra observations have spatially resolved the Bondi radii of at least five nearby SMBHs. Contrary to earlier hot accretion models that predicted a steep temperature increase within the Bondi radius, none of the resolved temperature profiles exhibit such an increase. The temperature inside the Bondi radius appears to be complex, indicative of a multi-temperature phase of hot gas with a cooler component at about 0.2-0.3 keV. The density profiles within the Bondi regions are shallow, suggesting the presence of strong outflows. These findings might be explained by recent realistic numerical simulations that suggest that large-scale accretion inside the Bondi radius can be chaotic, with cooler gas raining down in some directions and hotter gas outflowing in others. With an angular resolution similar to Chandra and a significantly larger collecting area, AXIS will collect enough photons to map the emerging accretion flow within and around the "sphere of influence" of a sample of active galactic nuclei (AGNs). AXIS will reveal transitions in the inflow that ultimately fuels the AGN, as well as outflows that provide feedback to the environment.
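For scale, the Bondi radius follows from $R_B = 2GM/c_s^2$, with $c_s$ the adiabatic sound speed of the ambient hot gas; a quick estimate for an M87*-like system (illustrative numbers, not values from this paper):

```python
G, mp, keV = 6.674e-11, 1.673e-27, 1.602e-16   # SI; keV in joules

def bondi_radius_pc(m_bh_msun, kT_keV, mu=0.6, gamma=5.0 / 3.0):
    """R_B = 2 G M / c_s^2, with c_s the adiabatic sound speed of the hot gas."""
    cs2 = gamma * kT_keV * keV / (mu * mp)     # sound speed squared [m^2/s^2]
    return 2 * G * m_bh_msun * 1.989e30 / cs2 / 3.086e16

# A ~6.5e9 Msun SMBH in ~1 keV gas gives R_B of a couple hundred parsecs:
print(f"R_B ~ {bondi_radius_pc(6.5e9, 1.0):.0f} pc")
```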
The full, radio to $\gamma$-ray spectrum of the Fermi bubbles is shown to be consistent with standard strong-shock electron acceleration at the bubble edge, without ad-hoc energy cutoffs, if the ambient interstellar radiation is strong; the $\gamma$-ray cooling break should then have a microwave counterpart, undetected until now. Indeed, a broadband bubble-edge analysis uncovers a pronounced downstream dust component, which masked the anticipated $\sim35$ GHz spectral break, and the same overall radio softening consistent with Kraichnan diffusion previously reported in $\gamma$-rays.
We present VLT spectroscopy, high-resolution imaging, and time-resolved photometry of KY TrA, the optical counterpart to the X-ray binary A 1524-61. We perform refined astrometry of the field, yielding improved coordinates for KY TrA and for the field-star interloper of similar optical brightness that we locate $0.64 \pm 0.04$ arcsec to the SE. From the spectroscopy, we refine the radial velocity semi-amplitude of the donor star to $K_2 = 501 \pm 52$ km s$^{-1}$ by employing the correlation between this parameter and the full width at half maximum of the H$\alpha$ emission line. The $r$-band light curve shows an ellipsoidal-like modulation with a likely orbital period of $0.26 \pm 0.01$ d ($6.24 \pm 0.24$ h). These numbers imply a mass function $f(M_1) = 3.2 \pm 1.0$ M$_\odot$. The KY TrA de-reddened quiescent colour $(r-i)_0 = 0.27 \pm 0.08$ is consistent with a donor star of spectral type K2 or later, in the case of a significant accretion-disc contribution to the optical continuum. The colour allows us to place a very conservative upper limit on the companion star mass, $M_2 \leq 0.94$ M$_\odot$, and, in turn, on the binary mass ratio, $q = M_2/M_1 \leq 0.31$. By exploiting the correlation between the binary inclination and the depth of the H$\alpha$ line trough, we establish $i = 57 \pm 13$ deg. All these values lead to a compact object and donor mass of $M_1 = 5.8^{+3.0}_{-2.4}$ M$_\odot$ and $M_2 = 0.5 \pm 0.3$ M$_\odot$, respectively, thus confirming the black hole nature of the accreting object. In addition, we estimate a distance to the system of $8.0 \pm 0.9$ kpc.
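The quoted mass function follows directly from the measured quantities via $f(M_1) = P K_2^3 / (2\pi G)$, which is an absolute lower limit on the compact object mass; a quick check:

```python
import numpy as np

G, Msun, day = 6.674e-11, 1.989e30, 86400.0   # SI units

def mass_function(k2_kms, p_days):
    """f(M1) = P K2^3 / (2 pi G), in solar masses."""
    return (p_days * day) * (k2_kms * 1e3) ** 3 / (2 * np.pi * G) / Msun

# K2 = 501 km/s and P = 0.26 d give ~3.4 Msun, consistent with the quoted
# 3.2 +/- 1.0 Msun within the measurement uncertainties.
print(f"f(M1) = {mass_function(501.0, 0.26):.1f} Msun")
```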
Microquasar binary stellar systems emit electromagnetic radiation and high-energy particles over a broad energy spectrum. However, they are so far away that it is hard to observe their details. Simulation offers the link between relatively scarce observational data and the rich theoretical background. In this work, high-energy particle emission from simulated twin microquasar jets is calculated in a unified manner. From the cascade of emission within an element of jet matter to the dynamic and radiative model of the whole jet, the series of physical processes involved is integrated into a single framework. A programme suite assembled around model data produces synthetic images and spectra directly comparable to potential observations by contemporary arrays. The model is capable of describing a multitude of system geometries, incorporating increasing levels of realism depending on need and available computational resources. As a demonstration, the modelling process is applied to a typical microquasar, which is synthetically observed from different angles using various imaging geometries. Furthermore, the resulting intensities are comparable to the sensitivity of existing detectors. The combined background emission from a potential distribution of microquasars is also modelled.
Kinetic simulations of relativistic turbulence have significantly advanced our understanding of turbulent particle acceleration. Recent progress has highlighted the need for an updated acceleration theory that can account for acceleration within the plasma's coherent structures. Here, we investigate how turbulent intermittency models connect statistical fluctuations in turbulence to regions of high dissipation. This connection is established by employing a generalized She-Leveque model to describe the exponents $\zeta_p$ for the structure functions $S^p \propto l^{\zeta_p}$. The fitting of the scaling exponents provides a measure of the co-dimension of the dissipative structures, and we subsequently measure their filling fraction. We perform our analysis for a range of magnetizations $\sigma$ and magnetic field fluctuations ${\delta B_0}/{B_0}$. We find that increasing the values of $\sigma$ and ${\delta B_0}/{B_0}$ allows the cascade to break sheets into smaller regions of dissipation that resemble chains of plasmoids. However, as their dissipation increases, the dissipative regions become less volume-filling. With this work we aim to inform future turbulent acceleration theories that incorporate particle energization from interactions with coherent structures within relativistic turbulence.
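A toy version of the structure-function measurement, shown below for a synthetic 1D Brownian signal (for which $\zeta_p = p/2$): estimate each $\zeta_p$ by fitting $S^p(l) = \langle |b(x+l)-b(x)|^p \rangle \propto l^{\zeta_p}$ in log-log space. Fitting a generalized She-Leveque form to the measured exponents, as done in the paper, would be the subsequent step and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.cumsum(rng.standard_normal(2**14))   # toy 1D "field": a Brownian walk

def zeta_p(signal, p, lags):
    """Slope of log S_p(l) vs log l, i.e. the structure-function exponent."""
    sp = [np.mean(np.abs(signal[l:] - signal[:-l]) ** p) for l in lags]
    return np.polyfit(np.log(lags), np.log(sp), 1)[0]

lags = np.unique(np.logspace(0.5, 3, 15).astype(int))
for p in (1, 2, 3, 4):
    print(p, round(zeta_p(b, p, lags), 2))   # expect zeta_p ~ p/2 for this signal
```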
Highly collimated relativistic jets are a defining feature of certain active galactic nuclei (AGN), yet their formation mechanism remains elusive. Previous observations and theoretical models have proposed that the ambient medium surrounding the jets could exert pressure, playing a crucial role in shaping the jets. However, direct observational confirmation of such a medium has been lacking. In this study, we present very long baseline interferometric (VLBI) observations of 3C 84 (NGC 1275), located at the center of the Perseus Cluster. Through monitoring observations with the Very Long Baseline Array (VLBA) at 43 GHz, a jet knot was detected to have been ejected from the sub-parsec scale core in the late 2010s. Intriguingly, this knot propagated in a direction significantly offset from the parsec-scale jet direction. To delve deeper into the matter, we employ follow-up VLBA 43 GHz observations, tracing the knot's trajectory until the end of 2022. We discovered that the knot abruptly changed its trajectory in the early 2020s, realigning itself with the parsec-scale jet direction. Additionally, we present results from an observation of 3C 84 with the Global VLBI Alliance (GVA) at 22 GHz, conducted near the monitoring period. By jointly analyzing the GVA 22 GHz image with a VLBA 43 GHz image observed about one week apart, we generated a spectral index map, revealing an inverted spectrum region near the edge of the jet where the knot experienced deflection. These findings suggest the presence of a dense, cold ambient medium characterized by an electron density exceeding $\sim10^5\ {\rm cm^{-3}}$, which guides the jet's propagation on parsec scales and significantly contributes to the overall shaping of the jet.
We present Event Horizon Telescope (EHT) 1.3 mm measurements of the radio source located at the position of the supermassive black hole Sagittarius A* (Sgr A*), collected during the 2017 April 5--11 campaign. The observations were carried out with eight facilities at six locations across the globe. Novel calibration methods are employed to account for Sgr A*'s flux variability. The majority of the 1.3 mm emission arises from horizon scales, where intrinsic structural source variability is detected on timescales of minutes to hours. The effects of interstellar scattering on the image and its variability are found to be subdominant to intrinsic source structure. The calibrated visibility amplitudes, particularly the locations of the visibility minima, are broadly consistent with a blurred ring with a diameter of $\sim$50 $\mu$as, as determined in later works in this series. Contemporaneous multi-wavelength monitoring of Sgr A* was performed at 22, 43, and 86 GHz and at near infrared and X-ray wavelengths. Several X-ray flares from Sgr A* are detected by Chandra, one at low significance jointly with Swift on 2017 April 7 and the other at higher significance jointly with NuSTAR on 2017 April 11. The brighter April 11 flare is not observed simultaneously by the EHT but is followed by a significant increase in millimeter flux variability immediately after the X-ray outburst, indicating a likely connection in the emission physics near the event horizon. We compare Sgr A*'s broadband flux during the EHT campaign to its historical spectral energy distribution and find both the quiescent and flare emission are consistent with its long-term behaviour.
We present the first Event Horizon Telescope (EHT) observations of Sagittarius A* (Sgr A$^*$), the Galactic center source associated with a supermassive black hole. These observations were conducted in 2017 using a global interferometric array of eight telescopes operating at a wavelength of $\lambda=1.3\,{\rm mm}$. The EHT data resolve a compact emission region with intrahour variability. A variety of imaging and modeling analyses all support an image that is dominated by a bright, thick ring with a diameter of $51.8 \pm 2.3\,\mu$as (68\% credible interval). The ring has modest azimuthal brightness asymmetry and a comparatively dim interior. Using a large suite of numerical simulations, we demonstrate that the EHT images of Sgr A$^*$ are consistent with the expected appearance of a Kerr black hole with mass ${\sim}4 \times 10^6\,{\rm M}_\odot$, which is inferred to exist at this location based on previous infrared observations of individual stellar orbits as well as maser proper motion studies. Our model comparisons disfavor scenarios where the black hole is viewed at high inclination ($i > 50^\circ$), as well as non-spinning black holes and those with retrograde accretion disks. Our results provide direct evidence for the presence of a supermassive black hole at the center of the Milky Way galaxy, and for the first time we connect the predictions from dynamical measurements of stellar orbits on scales of $10^3-10^5$ gravitational radii to event horizon-scale images and variability. Furthermore, a comparison with the EHT results for the supermassive black hole M87$^*$ shows consistency with the predictions of general relativity spanning over three orders of magnitude in central mass.
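The consistency check against the dynamical mass is essentially a one-line computation in general relativity: a non-spinning black hole casts a shadow of angular diameter $2\sqrt{27}\,GM/(c^2 D)$, and spin changes this by only a few percent. A sketch using the quoted mass and an assumed Galactic-center distance of 8.15 kpc:

```python
import numpy as np

G, c, Msun, kpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e19   # SI units
uas = np.pi / 180 / 3600 / 1e6                             # microarcsec in radians

def shadow_diameter_uas(m_msun, d_kpc):
    """Schwarzschild shadow diameter 2*sqrt(27)*theta_g, theta_g = GM/(c^2 D)."""
    theta_g = G * m_msun * Msun / c**2 / (d_kpc * kpc)
    return 2 * np.sqrt(27) * theta_g / uas

# ~4e6 Msun at 8.15 kpc predicts ~50 uas, cf. the measured 51.8 +/- 2.3 uas:
print(f"{shadow_diameter_uas(4.0e6, 8.15):.1f} uas")
```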
PSR J1641+8049 is a 2 ms black widow pulsar with a 2.2 h orbital period, detected in the radio and $\gamma$-rays. We performed new phase-resolved multi-band photometry of PSR J1641+8049 using the OSIRIS instrument at the Gran Telescopio Canarias. The obtained data were analysed together with new radio-timing observations from the Canadian Hydrogen Intensity Mapping Experiment (CHIME), X-ray data from the Spectrum-RG/eROSITA all-sky survey, and all available optical photometric observations. An updated timing solution based on the CHIME data is presented, which accounts for secular and periodic modulations in the pulse dispersion. The system parameters obtained through the light curve analysis, including the distance to the source (4.6-4.8 kpc) and the orbital inclination (56-59 deg), are found to be consistent with previous studies. However, the optical flux of the source at the maximum brightness phase faded by a factor of $\sim$2 as compared to previous observations. Nevertheless, the face of the J1641+8049 companion remains one of the most strongly heated (8000-9500 K) by a pulsar among the known black widow pulsars. We also report a new estimate of the pulsar proper motion of $\approx$2 mas yr$^{-1}$, which yields a spin-down luminosity of $\approx$4.87$\times 10^{34}$ erg s$^{-1}$ and a corresponding heating efficiency of the companion by the pulsar of 0.3-0.7. The pulsar was not detected in X-rays, implying an X-ray luminosity of $<3 \times 10^{31}$ erg s$^{-1}$ at the date of observations.
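Obtaining the quoted spin-down luminosity requires removing the Shklovskii (proper-motion) contribution, $\dot{P}_{\rm shk} = P\mu^2 d/c$, before evaluating $\dot{E} = 4\pi^2 I \dot{P}_{\rm int}/P^3$. A sketch in CGS units, assuming the canonical moment of inertia $I = 10^{45}$ g cm$^2$; the observed $\dot{P}$ below is a placeholder, not a value from the paper:

```python
import numpy as np

c, I = 2.998e10, 1e45                          # CGS: cm/s, g cm^2
mas_yr = np.pi / 180 / 3600e3 / 3.156e7        # mas/yr in rad/s

def edot(p_s, pdot_obs, mu_masyr, d_kpc):
    """Spin-down luminosity after the Shklovskii correction."""
    pdot_shk = p_s * (mu_masyr * mas_yr) ** 2 * (d_kpc * 3.086e21) / c
    return 4 * np.pi**2 * I * (pdot_obs - pdot_shk) / p_s**3

# P = 2 ms, mu ~ 2 mas/yr, d ~ 4.7 kpc; Pdot_obs = 1e-20 is a made-up placeholder
# chosen so the output lands near the reported ~4.9e34 erg/s.
print(f"Edot ~ {edot(2e-3, 1.0e-20, 2.0, 4.7):.2e} erg/s")
```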
Non-thermal components in the intra-cluster medium (ICM), such as turbulence, magnetic fields, and cosmic rays, imprint the past and current energetic activities of jets from the active galactic nuclei (AGN) of member galaxies, as well as disturbances caused by galaxy cluster mergers. Meter- and centimeter-wavelength radio observations of synchrotron radiation allow us to diagnose these non-thermal components. Here we report on our discovery of an unidentified diffuse radio source, named the Flying Fox, near the center of the Abell 1060 field. The Flying Fox has an elongated ring-like structure and a central bar shape, but no obvious host galaxy. The average spectral index of the Flying Fox is -1.4, steeper than those of radio sources seen at meter wavelengths. We discuss the possibilities that it is a radio lobe, a phoenix, a radio halo or relic, or an odd radio circle (ORC). In conclusion, the Flying Fox is not clearly explained by any known class of radio source.
In this paper we quantify the temporal variability and image morphology of the horizon-scale emission from Sgr A*, as observed by the EHT in 2017 April at a wavelength of 1.3 mm. We find that the Sgr A* data exhibit variability that exceeds what can be explained by the uncertainties in the data or by the effects of interstellar scattering. The magnitude of this variability can be a substantial fraction of the correlated flux density, reaching $\sim$100\% on some baselines. Through an exploration of simple geometric source models, we demonstrate that ring-like morphologies provide better fits to the Sgr A* data than do other morphologies with comparable complexity. We develop two strategies for fitting static geometric ring models to the time-variable Sgr A* data; one strategy fits models to short segments of data over which the source is static and averages these independent fits, while the other fits models to the full dataset using a parametric model for the structural variability power spectrum around the average source structure. Both geometric modeling and image-domain feature extraction techniques determine the ring diameter to be $51.8 \pm 2.3$ $\mu$as (68\% credible intervals), with the ring thickness constrained to have an FWHM between $\sim$30\% and 50\% of the ring diameter. To bring the diameter measurements to a common physical scale, we calibrate them using synthetic data generated from GRMHD simulations. This calibration constrains the angular size of the gravitational radius to be $4.8_{-0.7}^{+1.4}\,\mu$as, which we combine with an independent distance measurement from maser parallaxes to determine the mass of Sgr A* to be $4.0_{-0.6}^{+1.1} \times 10^6$ M$_{\odot}$.
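The final mass step is a direct inversion of $\theta_g = GM/(c^2 D)$; a sketch using the quoted angular size and an illustrative maser-parallax distance of 8.15 kpc:

```python
import numpy as np

G, c, Msun, kpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e19   # SI units
uas = np.pi / 180 / 3600 / 1e6                             # microarcsec in radians

def mass_from_thetag(theta_g_uas, d_kpc):
    """Invert theta_g = G M / (c^2 D) for the black hole mass in solar masses."""
    return theta_g_uas * uas * (d_kpc * kpc) * c**2 / G / Msun

# theta_g = 4.8 uas with a ~8.15 kpc distance gives ~4e6 Msun:
print(f"M ~ {mass_from_thetag(4.8, 8.15):.1e} Msun")
```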
We carried out a detailed temporal and spectral study of BL\,Lac using long-term \emph{Fermi}-LAT and \emph{Swift}-XRT/UVOT observations during the period MJD\,59000-59943. The daily-binned $\gamma$-ray light curve displays a maximum flux of $(1.74\pm 0.09)\times 10^{-5} \rm \,ph\,cm^{-2}\,s^{-1}$ on MJD\,59868, the highest daily $\gamma$-ray flux observed from BL\,Lac. The $\gamma$-ray variability is characterised through the power spectral density (PSD), the r.m.s.-flux relation, and a flux-distribution study. We find that a power-law model fits the PSD with index $\sim 1$, suggesting a long-memory process at work. The observed r.m.s.-flux relation exhibits a linear trend, indicating that the $\gamma$-ray flux distribution is log-normal. The skewness and Anderson-Darling tests and the histogram fit reject normality of the flux distribution and instead suggest that it is log-normal. The fractional variability amplitude shows that the source is more variable in the X-ray band than in the optical/UV/$\gamma$-ray bands. To gain insight into the underlying physical process, we extracted broadband spectra from different time periods of the light curve. The broadband spectra are statistically fitted with a convolved one-zone leptonic model with different forms of the particle energy distribution. We find that the spectral energy distributions during different flux states can be reproduced well by synchrotron, synchrotron-self-Compton, and external-Compton emission from a broken power-law electron distribution under the equipartition condition. A comparison of the best-fit physical parameters shows that the variations between flux states are mostly related to an increase in the bulk Lorentz factor and spectral hardening of the particle distribution.
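A minimal version of such a normality-vs-log-normality test with scipy: apply the Anderson-Darling statistic to the fluxes and to their logarithms. The synthetic light curve below stands in for the actual Fermi-LAT fluxes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
flux = rng.lognormal(mean=-11.5, sigma=0.6, size=500)   # toy log-normal fluxes

for label, sample in [("flux", flux), ("log flux", np.log(flux))]:
    ad = stats.anderson(sample, dist="norm")
    reject = ad.statistic > ad.critical_values[2]       # 5% significance level
    print(f"{label}: A^2 = {ad.statistic:.1f}, reject normality: {reject}")
```

For a genuinely log-normal light curve, normality is rejected for the raw fluxes but not for their logarithms, which is the signature reported above.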
This work studies the dynamics of geodesic motion in a curved spacetime around a Schwarzschild black hole perturbed by the gravitational field of a distant axisymmetric distribution of mass enclosing the system. This spacetime can serve as a versatile model for a diverse range of astrophysical scenarios and, in particular, for extreme mass ratio inspirals, which are the focus of this work. We show that the system is non-integrable by employing Poincar\'e surfaces of section and rotation numbers. By utilising the rotation numbers, the widths of resonances are calculated, which are then used to establish the relation between the underlying perturbation parameter driving the system from integrability and the quadrupole parameter characterising the perturbed metric. This relation allows us to estimate the phase shift caused by the resonance during an inspiral.
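The rotation-number diagnostic is generic to near-integrable systems; the toy sketch below computes rotation numbers across orbits of the Chirikov standard map, where a plateau at a rational value as the initial condition is varied marks a resonance island and the plateau's extent measures the resonance width (a stand-in illustration, not the geodesic integrator used in the paper).

```python
import numpy as np

def rotation_number(k, p0, n=20000):
    """Average winding per iterate (in units of 2 pi) of a standard-map orbit."""
    theta, p, wind = 0.0, p0, 0.0
    for _ in range(n):
        p += k * np.sin(theta)   # kick
        theta += p               # rotation
        wind += p                # accumulate the angular increment
    return (wind / n) / (2 * np.pi)

# Scanning the initial momentum: a flat stretch in nu(p0) signals an island.
for p0 in np.linspace(0.9, 1.2, 7):
    print(f"p0 = {p0:.2f}, nu = {rotation_number(0.5, p0):.4f}")
```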
Field line helicity measures the net linking of magnetic flux with a single magnetic field line. It offers a finer topological description than the usual global magnetic helicity integral, while still being invariant in an ideal evolution unless there is a flux of helicity through the domain boundary. In this chapter, we explore how to appropriately define field line helicity in different volumes in a way that preserves a meaningful topological interpretation. We also review the time evolution of field line helicity under both boundary motions and magnetic reconnection.
Core-collapse supernovae (CCSNe) are among the most energetic processes in our Universe and are central to our understanding of the formation and chemical composition of the Universe. The precise measurement of the neutrino light curve from CCSNe is crucial to understanding the hydrodynamics and fundamental processes that drive them. The IceCube Neutrino Observatory has mass-independent sensitivity to CCSNe within the Milky Way and some sensitivity to higher-mass CCSNe in the Large and Small Magellanic Clouds. The envisaged large-scale extension of the IceCube detector, IceCube-Gen2, opens the possibility for new sensor designs and trigger concepts that could increase the number of neutrinos detected from a CCSN burst compared to IceCube. In this contribution, we study how wavelength-shifting technology can be used in IceCube-Gen2 to measure the fast modulations of the neutrino signal due to standing accretion shock instabilities (SASI).
We propose a novel idea for the coherent, intense, millisecond radio emission of cosmic fast radio bursts (FRBs), which have recently been identified with flares from a magnetar. Motivated by the conventional paradigm of Type III solar radio bursts, we explore the emission of coherent plasma line radiation at the electron plasma frequency and its harmonic as potential candidates for the coherent FRB emission associated with magnetar flares. We discuss the emission-region parameters in relativistic, strongly magnetized plasmas consisting of electrons, positrons and protons. The goal is to make observable predictions of this model to confront the multi-wavelength observations of FRBs from magnetars. These results will impact both observational radio astronomy and space-based astrophysics.
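To see what emission-region densities this scenario requires, one can invert the electron plasma frequency, $f_p = (2\pi)^{-1}\sqrt{n_e e^2/(\varepsilon_0 m_e)}$, for frequencies in the observed GHz band. The numbers below are purely illustrative.

    import numpy as np

    # Electron plasma frequency f_p = (1/2pi) * sqrt(n_e e^2 / (eps0 m_e)).
    e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12

    def f_p(n_e_cm3):
        n_e = n_e_cm3 * 1e6  # cm^-3 -> m^-3
        return np.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * np.pi)

    for n in [1e8, 1e10, 2.4e10]:
        print(f"n_e = {n:.1e} cm^-3 -> f_p = {f_p(n)/1e9:.2f} GHz")
    # A fundamental near 1.4 GHz requires n_e ~ 2.4e10 cm^-3
    # (emission at the harmonic relaxes this by a factor of 4).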
An unidentified $\gamma$-ray source 4FGL J1838.2+3223 has been proposed as a pulsar candidate. We present optical time-series multi-band photometry of its likely optical companion obtained with the 2.1-m telescope of the Observatorio Astron\'omico Nacional San Pedro M\'artir, Mexico. The observations and the data from the Zwicky Transient Facility revealed brightness variability with a period of $\approx$4.02 h, likely associated with the orbital motion of the binary system. The folded light curves have a single sine-like peak per period with an amplitude of about three magnitudes, accompanied by fast sporadic flares of up to one magnitude. We reproduce them by modelling the heating of the companion by the pulsar. As a result, the companion side facing the pulsar is strongly heated up to 11300$\pm$400 K, while the temperature of its back side is only 2300$\pm$700 K. It has a mass of 0.10$\pm$0.05 ${\rm M}_\odot$ and underfills its Roche lobe with a filling factor of $0.60^{+0.10}_{-0.06}$. This implies that 4FGL J1838.2+3223 likely belongs to the `spider' pulsar family. The estimated distance of $\approx$3.1 kpc is compatible with Gaia results. We detect a flare from the source in X-rays and the ultraviolet using Swift archival data, and another one in X-rays with the eROSITA all-sky survey. Both flares have an X-ray luminosity of $\sim$10$^{34}$ erg s$^{-1}$, which is two orders of magnitude higher than the upper limit in quiescence obtained from eROSITA assuming a spectral shape typical of spider pulsars. If the spider interpretation is correct, these flares are among the strongest observed from non-accreting spider pulsars.
Neutron stars may experience differential rotation on short, dynamical timescales following extreme astrophysical events like binary neutron star mergers. In this work, the masses and radii of differentially rotating neutron star models are computed. We employ a set of equations of state for dense hypernuclear and $\Delta$-admixed-hypernuclear matter obtained within the framework of CDF theory in the relativistic Hartree-Fock (RHF) approximation. Results are shown for varying meson-$\Delta$ couplings, or equivalently the $\Delta$-potential in nuclear matter. A comparison of our results with those obtained for non-rotating stars shows that the maximum mass difference between differentially rotating and static stars is independent of the underlying particle composition of the star. We further find that the decrease in the radii and the increase in the maximum masses of stellar models when $\Delta$-isobars are added to hyperonuclear matter (as initially observed for static and uniformly rotating stars) persist also in the case of differentially rotating neutron stars.
The fraction of local dwarf galaxies that host massive black holes is arguably the cleanest diagnostic of the dominant seed formation mechanism of today's supermassive black holes. A 5 per cent constraint on this quantity can be achieved with AXIS observations of 3300 galaxies across the mass spectrum, through a combination of serendipitous extragalactic fields plus a dedicated 1 Msec GO program.
Some of the most important information on a radio pulsar is derived from its average pulse profile. Many early pulsar studies were necessarily based on only a few such profiles. There, discrete profile components were linked to emission mechanism models for individual stars through human interpretation. For the population as a whole, profile morphology must reflect the geometry and overall evolution of the radio emitting regions. The problem, however, is that this population is becoming too large for intensive studies of all sources individually. Moreover, connecting profiles from a large collection of pulsars rapidly becomes cumbersome. In this article, we present ToPP, the first-ever unsupervised method to sort pulsars by profile-shape similarity using graph topology. We apply ToPP to the publicly available European Pulsar Network profile database, providing the first organised visual overview of multi-frequency profiles representing 90 individual pulsars. We find discrete evolutionary tracks, varying from simple single-component profiles at all frequencies towards diverse mixtures of more complex profiles with frequency evolution. The profile evolution is continuous, extending out to millisecond pulsars, and does not fall into sharp classes. We interpret the profiles as a mixture of pulsar core/cone emission type, spin-down energetics, and the line-of-sight impact angle towards the magnetic axis. We show how ToPP can systematically classify sources into the Rankin empirical profile scheme. ToPP is one of the key unsupervised methods that will be essential to explore upcoming pulsar census data, such as that expected from the Square Kilometre Array.
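The following minimal sketch conveys the flavour of graph-based profile sorting; the actual ToPP algorithm and its similarity measure may differ. Normalise a set of profiles, compute pairwise distances, and extract a minimum spanning tree whose edges connect each profile to its most similar neighbours, yielding the kind of evolutionary tracks described above.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    phase = np.linspace(0, 1, 256)

    # Toy "pulse profiles": Gaussian components with varying width/separation.
    profiles = np.array([
        np.exp(-0.5 * ((phase - 0.5) / w) ** 2)
        + a * np.exp(-0.5 * ((phase - 0.5 - s) / w) ** 2)
        for w, a, s in rng.uniform([0.01, 0.0, 0.02], [0.05, 1.0, 0.1], (30, 3))
    ])
    profiles /= profiles.max(axis=1, keepdims=True)  # peak-normalise

    # Euclidean distance matrix and its minimum spanning tree.
    diff = profiles[:, None, :] - profiles[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    mst = minimum_spanning_tree(dist)  # sparse matrix: edges of the profile graph

    print(f"{mst.nnz} edges connect {len(profiles)} profiles")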
A (toy) model for cold and lukewarm strongly-coupled nuclear matter at finite baryon density is used to study neutrino transport. The complete charged-current two-point correlators are computed in the strongly-coupled medium and their impact on neutrino transport is analyzed. The full result is compared with various approximations for the current correlators and the distributions, including the degenerate approximation, the hydrodynamic approximation and the diffusive approximation, and we comment on their successes. Further improvements are discussed.
With the aim of exploring the evidence for or against phase transitions in cold and dense baryonic matter, the inference of the sound speed and equation-of-state for dense matter in neutron stars is extended in view of new observational data. The impact of the heavy (2.35 $M_\odot$) black widow pulsar PSR J0952-0607 and of the unusually light supernova remnant HESS J1731-347 is inspected. In addition, a detailed re-analysis is performed of the low-density constraint based on chiral effective field theory and of the perturbative QCD constraint at asymptotically high densities, in order to clarify the influence of these constraints on the inference procedure. The trace anomaly measure, $\Delta = 1/3 - P/\varepsilon$, is also computed and discussed. A systematic Bayes factor assessment quantifies the evidence (or non-evidence) for low averaged sound speeds $(c_s^2 \leq 0.1)$, a prerequisite for a phase transition, within the range of densities realized in the cores of neutron stars. One of the consequences of including PSR J0952-0607 in the database is a further stiffening of the equation-of-state, resulting in a central density of a 2.1 solar-mass neutron star of less than five times the equilibrium density of normal nuclear matter at the 68\% level. The evidence against small sound speeds in neutron star cores is further strengthened. Within the inferred 68\% posterior credible bands, only a weak first-order phase transition with a coexistence density interval $\Delta n/n \lesssim 0.2$ would be compatible with the observed data.
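For concreteness, the two diagnostics quoted above are simple functionals of the equation of state: the squared sound speed $c_s^2 = dP/d\varepsilon$ and the trace anomaly measure $\Delta = 1/3 - P/\varepsilon$. The sketch below evaluates both for an arbitrary toy polytrope, not for the inferred equation of state.

    import numpy as np

    # Toy pressure-energy density relation (illustrative polytrope, c = 1 units).
    eps = np.linspace(150.0, 1200.0, 200)   # energy density [MeV/fm^3]
    P = 2.0e-4 * eps**2                     # toy pressure [MeV/fm^3]

    cs2 = np.gradient(P, eps)               # squared sound speed c_s^2 = dP/deps
    Delta = 1.0 / 3.0 - P / eps             # trace anomaly measure

    # A phase transition would show up as a density window with c_s^2 <~ 0.1.
    print(f"c_s^2 range: {cs2.min():.2f} - {cs2.max():.2f}")
    print(f"Delta range: {Delta.min():.2f} - {Delta.max():.2f}")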
We investigate the influence of magnetic field amplification on core-collapse supernovae of highly magnetized progenitors through three-dimensional simulations. Considering rotating models, we observe a strong correlation between the exponential growth of the magnetic field in the gain region and the initiation of shock revival, with a faster onset compared to the non-rotating model. We highlight that the mean magnetic field is exponentially amplified by the $\alpha$-effect of the dynamo process, which operates efficiently as the kinetic helicity of the turbulence within the gain region increases. Our findings indicate that the significant amplification of the mean magnetic fields leads to the development of locally intense turbulent magnetic fields, particularly in the vicinity of the poles, thereby promoting shock revival by neutrino heating.
In 1977, Blandford and Znajek showed that the electromagnetic field surrounding a rotating black hole can harvest its spin energy and use it to power a collimated astrophysical jet, such as the one launched from the center of the elliptical galaxy M87. Today, interferometric observations with the Event Horizon Telescope (EHT) are delivering high-resolution, event-horizon-scale, polarimetric images of the supermassive black hole M87* at the jet launching point. These polarimetric images offer an unprecedented window into the electromagnetic field structure around a black hole. In this paper, we show that a simple polarimetric observable -- the phase $\angle\beta_2$ of the second azimuthal Fourier mode of the linear polarization in a near-horizon image -- depends on the sign of the electromagnetic energy flux and therefore provides a direct probe of black hole energy extraction. In Boyer-Lindquist coordinates, the Poynting flux for axisymmetric electromagnetic fields is proportional to the product $B^\phi B^r$. The phase $\angle\beta_2$ likewise depends on the ratio $B^\phi/B^r$, thereby enabling an observer to experimentally determine the direction of electromagnetic energy flow in the near-horizon environment. Data from the 2017 EHT observations of M87* are consistent with electromagnetic energy outflow. Currently envisioned multi-frequency observations of M87* will achieve higher dynamic range and angular resolution, and hence deliver measurements of $\angle\beta_2$ closer to the event horizon as well as better constraints on Faraday rotation. Such observations will enable a definitive test for energy extraction from the black hole M87*.
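In the commonly used convention (e.g., Palumbo et al. 2020), $\beta_2$ is the image-integrated $m=2$ azimuthal Fourier coefficient of the complex linear polarization $P = Q + iU$. The sketch below computes it for a toy ring image; the paper's exact normalisation conventions may differ.

    import numpy as np

    # Toy image grid in polar coordinates about the image centre.
    ny = nx = 128
    y, x = np.mgrid[:ny, :nx] - nx / 2
    rho, varphi = np.hypot(x, y), np.arctan2(y, x)

    # Toy ring with a rotationally symmetric, twisted polarization pattern.
    I = np.exp(-0.5 * ((rho - 25) / 4) ** 2)
    chi = varphi + np.deg2rad(70)        # EVPA twisted relative to radial
    P = 0.3 * I * np.exp(2j * chi)       # complex linear polarization Q + iU

    # m = 2 azimuthal mode, normalised by the total intensity.
    beta2 = (P * np.exp(-2j * varphi)).sum() / I.sum()
    print(f"|beta2| = {abs(beta2):.3f}, phase = {np.degrees(np.angle(beta2)):.1f} deg")
    # The phase of beta2 encodes the B^phi/B^r ratio and hence the direction
    # of electromagnetic energy flow discussed in the paper.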
One of the active debates in core-collapse supernova (CCSN) theory is how significantly neutrino flavor conversions induced by neutrino-neutrino self-interactions change the conventional picture of CCSN dynamics. Recent studies have indicated that strong flavor conversions can occur inside neutrino spheres, where neutrinos are tightly coupled to matter. These flavor conversions are associated with collisional instability, fast neutrino-flavor conversion (FFC), or both. The impact of these flavor conversions on CCSN dynamics is, however, still highly uncertain due to the lack of global simulations of quantum kinetic neutrino transport with appropriate microphysical inputs. Given fluid profiles from a recent CCSN model at three different time snapshots in the early post-bounce phase, we perform global quantum kinetic simulations in spherical symmetry with an essential set of microphysics. We find that strong flavor conversions occur in optically thick regions, resulting in a substantial change of the neutrino radiation field. The neutrino heating in the gain region is smaller than in the case with no flavor conversions, whereas the neutrino cooling in the optically thick region is commonly enhanced. Based on the neutrino data obtained from our multi-angle neutrino transport simulations, we also assess some representative classical closure relations by applying them to the diagonal components of the neutrino density matrix. We find that the Eddington tensors can be well approximated by these closure relations except in the regions where flavor conversions are most vigorous. We also analyze the neutrino signal by carrying out detector simulations for Super-Kamiokande, DUNE, and JUNO. We propose a useful strategy to identify the signature of flavor conversions in the neutrino signal, which can be easily implemented in real data analyses of CCSN neutrinos.
Astrometry from the Gaia mission was recently used to discover the two nearest known stellar-mass black holes (BHs), Gaia BH1 and Gaia BH2. Both systems contain $\sim 1\,M_{\odot}$ stars in wide orbits ($a\approx$1.4 AU, 4.96 AU) around $\sim9\,M_{\odot}$ BHs. These objects are among the first stellar-mass BHs not discovered via X-rays or gravitational waves. The companion stars -- a solar-type main sequence star in Gaia BH1 and a low-luminosity red giant in Gaia BH2 -- are well within their Roche lobes. However, the BHs are still expected to accrete stellar winds, leading to potentially detectable X-ray or radio emission. Here, we report observations of both systems with the Chandra X-ray Observatory and radio observations with the Very Large Array (for Gaia BH1) and MeerKAT (for Gaia BH2). We did not detect either system, leading to X-ray upper limits of $L_X < 10^{29.4}$ and $L_X < 10^{30.1}\,\rm erg\,s^{-1}$ and radio upper limits of $L_r < 10^{25.2}$ and $L_r < 10^{25.9}\,\rm erg\,s^{-1}$. For Gaia BH2, the non-detection implies that the accretion rate near the horizon is much lower than the Bondi rate, consistent with recent models for hot accretion flows. We discuss implications of these non-detections for broader BH searches, concluding that it is unlikely that isolated BHs will be detected via ISM accretion in the near future. We also calculate evolutionary models for the binaries' future evolution using Modules for Experiments in Stellar Astrophysics (MESA). We find that Gaia BH1 will be X-ray bright for 5--50 Myr when the star is a red giant, including 5 Myr of stable Roche lobe overflow. Since no symbiotic BH X-ray binaries are known, this implies either that fewer than $\sim 10^4$ Gaia BH1-like binaries exist in the Milky Way, or that they are common but have evaded detection, perhaps due to very long outburst recurrence timescales.
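The Bondi-rate comparison can be sketched as an order-of-magnitude estimate, $\dot M_{\rm B} = 4\pi G^2 M^2 \rho / (c_s^2 + v^2)^{3/2}$. All ambient-medium numbers below are assumptions for illustration, not the values adopted in the paper.

    import numpy as np

    G, c = 6.674e-11, 2.998e8
    M_sun, m_p = 1.989e30, 1.673e-27
    yr = 3.156e7

    M_bh = 9 * M_sun     # Gaia BH-like mass
    n = 1e6              # ambient density [m^-3] (assumed)
    rho = n * m_p
    v = 50e3             # relative wind speed [m/s] (assumed)
    cs = 10e3            # sound speed [m/s] (assumed)

    mdot_bondi = 4 * np.pi * G**2 * M_bh**2 * rho / (cs**2 + v**2) ** 1.5
    print(f"Mdot_Bondi ~ {mdot_bondi / M_sun * yr:.1e} M_sun/yr")

    # Even a tiny radiative efficiency applied to the full Bondi rate sits
    # near the quoted X-ray limits, illustrating why the non-detections
    # favour accretion well below the Bondi rate.
    L = 1e-4 * mdot_bondi * c**2   # assumed efficiency of 1e-4
    print(f"L ~ {L * 1e7:.1e} erg/s")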
Jets powered by AGN in the early Universe ($z \gtrsim 6$) have the potential not only to define the trajectories of the first-forming massive galaxies but also to enable the accelerated growth of their associated SMBHs. Under typical assumptions, jets could even reconcile observed quasars with light-seed formation scenarios; however, not only are constraints on the parameters of the first jets lacking, but observations of these objects are scarce. Owing to the significant energy density of the CMB at these epochs, capable of quenching radio emission, observations will require powerful, high-angular-resolution X-ray imaging to map and characterize these jets. As such, \textit{AXIS} will be necessary to understand early SMBH growth and feedback.
Polarized (sub)millimeter emission from dust grains in circumstellar disks was initially thought to be due to grains aligned with the magnetic field. However, higher-resolution multi-wavelength observations, along with improved models, found that this polarization is dominated by self-scattering at shorter wavelengths (e.g., 870 $\mu$m) and by grains aligned with something other than magnetic fields at longer wavelengths (e.g., 3 mm). Nevertheless, the polarization signal is expected to depend on the underlying substructure, and observations hitherto have been unable to resolve polarization in multiple rings and gaps. HL Tau, a protoplanetary disk located 147.3 $\pm$ 0.5 pc away, is the brightest Class I or Class II disk at millimeter/submillimeter wavelengths. Here we present deep, high-resolution 870 $\mu$m polarization observations of HL Tau, resolving the polarization in both the rings and the gaps. We find that the gaps have polarization angles with a significant azimuthal component and a higher polarization fraction than the rings. Our models show that the disk polarization is due to both scattering and emission from aligned, effectively prolate grains. The intrinsic polarization of aligned dust grains is likely over 10%, much higher than expected from low-resolution observations (~1%). Asymmetries and dust features are revealed in the polarization observations that are not seen in non-polarimetric observations.
Some stars explode at the end of their lives as supernovae (SNe). SNe release a substantial amount of matter and energy into the interstellar medium, providing significant feedback to star formation and gas dynamics in a galaxy. While such feedback has a crucial role in galaxy formation and evolution, simulations of galaxy formation have implemented it only with simple {\it sub-grid models}, instead of numerically solving the evolution of gas elements around SNe in detail, due to a lack of resolution. We develop a method combining machine learning and Gibbs sampling to predict how a supernova (SN) affects the surrounding gas. The fidelity of our model in the thermal energy and momentum distributions outperforms that of low-resolution SN simulations. Our method can replace SN sub-grid models and help properly simulate unresolved SN feedback in galaxy formation simulations. We find that employing our new approach reduces the necessary computational cost to $\sim$ 1 percent compared to directly resolving SN feedback.
Wide binaries, with separations between the two stars from a few AU to more than several thousand AU, are valuable objects for various research topics in Galactic astronomy. As the number of newly reported wide binaries continues to increase, studying the chemical abundances of their component stars becomes more important. We conducted high-resolution near-infrared (NIR) spectroscopy for six pairs of wide binary candidates using the Immersion Grating Infrared Spectrometer (IGRINS) at the Gemini-South telescope. One pair was excluded from the wide binary sample due to a significant difference in radial velocity between its component stars, while the remaining five pairs exhibited homogeneous properties in 3D motion and chemical composition between the pair stars. The differences in [Fe/H] ranged from 0.00 to 0.07 dex for these wide binary pairs. The abundance differences between components are comparable to previous results from optical spectroscopy for other samples. In addition, when combining our data with literature data, it appears that the scatter of abundance differences increases in wide binaries with larger separations. However, SVO2324 and SVO3206 showed minimal differences in most elements despite their large separations, supporting the idea that multiple formation mechanisms operate in different wide binaries. This study is the first investigation of the chemical properties of wide binaries based on NIR spectroscopy. Our results further highlight that NIR spectroscopy is an effective tool for stellar chemical studies based on equivalent measurements of chemical abundances of the two stars in each wide binary system.
The most metal-poor stars (e.g. [Fe/H] $\leq-2.5$) are the ancient fossils from the early assembly epoch of our Galaxy, very likely before the formation of the thick disc. Recent studies have shown that a non-negligible fraction of them have prograde planar orbits, which makes their origin a puzzle. It has been suggested that a later-formed rotating bar could have driven these old stars from the inner Galaxy outward, and transformed their orbits to be more rotation-dominated. However, it is not clear if this mechanism can explain these stars as observed in the solar neighborhood. In this paper, we explore the possibility of this scenario by tracing these stars backwards in an axisymmetric Milky Way potential with a bar perturber. We integrate their orbits backward for 6 Gyr under two bar models: one with a constant pattern speed and another one with a decelerating speed. Our experiments show that, under the constantly-rotating bar model, the stars of interest are little affected by the bar and cannot have been shepherded from a spheroidal inner Milky Way to their current orbits. In the extreme case of a rapidly decelerating bar, some of the very metal-poor stars on planar and prograde orbits can be brought from the inner Milky Way, but $\sim90\%$ of them were nevertheless already rotation-dominated ($J_{\phi}$ $\geq$ 1000 km s$^{-1}$ kpc) 6 Gyr ago. The chance of these stars having started with spheroid-like orbits with small rotation ($J_{\phi}$ $\lesssim$ 600 km s$^{-1}$ kpc) is very low ($<$ 3$\%$). We therefore conclude that, within the solar neighborhood, the bar is unlikely to have shepherded a significant fraction of inner Galaxy spheroid stars to produce the overdensity of stars on prograde, planar orbits that is observed today.
The Kennicutt-Schmidt (KS) relation between the gas and the star formation rate (SFR) surface density ($\Sigma_{\rm gas}$-$\Sigma_{\rm SFR}$) is essential to understand star formation processes in galaxies. So far, it has been measured up to z~2.5 in main-sequence galaxies. In this letter, we place constraints at z~4.5 using a sample of four massive main-sequence galaxies observed by ALMA at high resolution. We obtained ~0.3"-resolution [CII] and continuum maps of our objects, which we then converted into gas and obscured-SFR surface density maps. In addition, we produced unobscured-SFR surface density maps by convolving Hubble ancillary data in the rest-frame UV. We then derived the average $\Sigma_{\rm SFR}$ in various $\Sigma_{\rm gas}$ bins, and estimated the uncertainties using Monte Carlo sampling. Our galaxy sample follows the KS relation measured in main-sequence galaxies at lower redshift and lies slightly below predictions from simulations. Our data points probe the high end in terms of both $\Sigma_{\rm gas}$ and $\Sigma_{\rm SFR}$, and gas depletion timescales (285-843 Myr) remain similar to those of z~2 objects. However, three of our objects are clearly morphologically disturbed, and we could have expected shorter gas depletion timescales (~100 Myr), similar to merger-driven starbursts at lower redshifts. This suggests that the mechanisms triggering starbursts at high redshift may differ from those in the low- and intermediate-z Universe.
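The depletion timescales quoted above follow directly from $t_{\rm dep} = \Sigma_{\rm gas}/\Sigma_{\rm SFR}$, with only a unit conversion involved; a quick check with illustrative numbers (not the paper's exact values):

    # t_dep = Sigma_gas / Sigma_SFR, with Sigma_SFR converted to per pc^2.
    sigma_gas = 1e3        # M_sun / pc^2
    sigma_sfr = 2.0        # M_sun / yr / kpc^2 = 2e-6 M_sun / yr / pc^2

    t_dep = sigma_gas / (sigma_sfr * 1e-6)   # years
    print(f"t_dep ~ {t_dep/1e6:.0f} Myr")     # ~500 Myr, within the 285-843 Myr range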
Spatially resolved observations of AGN host galaxies undergoing feedback are one of the most relevant avenues through which galactic evolution can be studied, given the long-lasting effects AGN feedback has on gas reservoirs, star formation, and AGN environments at all scales. In this context, we report results from VLT/MUSE integral-field optical spectroscopy of TN J1049-1258, one of the most powerful radio sources known, at a redshift of 3.7. We detected extended ($\sim$ 18 kpc) Lyman $\alpha$ emission, spatially aligned with the radio axis, redshifted by 2250 $\pm$ 60 km s$^{-1}$ with respect to the host galaxy systemic velocity, and co-spatial with UV continuum emission. This Lyman $\alpha$ emission could arise from a companion galaxy, although there are arguments against this interpretation. Alternatively, it might correspond to an outflow of ionized gas stemming from the radio galaxy. This would be the highest-redshift spatially resolved ionized outflow known to date. The enormous amount of energy injected, however, appears unable to quench the host galaxy's prodigious star formation, which occurs at a rate of $\sim$4500 M$_{\odot}$ yr$^{-1}$, estimated from its far-infrared luminosity. Within the field we also found two companion galaxies at projected distances of $\sim$25 kpc and $\sim$60 kpc from the host, which suggests the host galaxy is harbored within a protocluster.
We present two-dimensional (2D) particle-in-cell (PIC) simulations of 2D Bernstein-Greene-Kruskal (BGK) modes, which are exact nonlinear steady-state solutions of the Vlasov-Poisson equations, on a 2D plane perpendicular to a background magnetic field, with a cylindrically symmetric electric potential localized on the plane. The PIC simulations are initialized using analytic electron distributions and electric potentials from the theory. We confirm the validity of such solutions at resolutions up to a $2048^2$ grid. We show that the solutions are dynamically stable for a stronger background magnetic field, keeping the other parameters of the model fixed, but become unstable when the field strength is weaker than a certain value. When a mode becomes unstable, we observe that the instability begins with the excitation of azimuthal electrostatic waves and ends in a spiral pattern.
This paper presents an overview of the QUARKS survey, which stands for `Querying Underlying mechanisms of massive star formation with ALMA-Resolved gas Kinematics and Structures'. The QUARKS survey is observing 139 massive clumps covered by 156 pointings at ALMA Band 6 ($\lambda\sim$ 1.3 mm). In conjunction with data obtained from the ALMA-ATOMS survey at Band 3 ($\lambda\sim$ 3 mm), QUARKS aims to carry out an unbiased statistical investigation of the massive star formation process within protoclusters down to a scale of 1000 au. This overview paper describes the observations and data reduction of the QUARKS survey, and gives a first look at an exemplar source, the mini-starburst Sgr B2(M). The wide-bandwidth (7.5 GHz) and high-angular-resolution (~0.3 arcsec) observations of the QUARKS survey allow us to resolve much more compact cores than the ATOMS survey could, and to detect previously unrevealed fainter filamentary structures. The spectral windows cover transitions of species including CO, SO, N$_2$D$^+$, SiO, H$_{30}\alpha$, H$_2$CO, CH$_3$CN and many other complex organic molecules, tracing gas components with different temperatures and spatial extents. QUARKS aims to deepen our understanding of several scientific topics of massive star formation, such as the mass transport within protoclusters by (hub-)filamentary structures, the existence of massive starless cores, the physical and chemical properties of dense cores within protoclusters, and the feedback from already-formed high-mass young protostars.
Magnetic fields are a defining yet enigmatic aspect of the interstellar medium (ISM), with their three-dimensional mapping posing a substantial challenge. In this study, we harness the innovative Velocity Gradient Technique (VGT), underpinned by magnetohydrodynamic (MHD) turbulence theories, to elucidate the magnetic field structure by applying it to the atomic neutral hydrogen (HI) emission line and the molecular tracer $^{12}$CO. We construct the tomography of the magnetic field in the low-mass star-forming region L1688, utilizing two approaches: (1) VGT-HI combined with the Galactic rotational curve, and (2) stellar polarization paired with precise star parallax measurements. Our analysis reveals that the magnetic field orientations deduced from stellar polarization undergo a distinct directional change in the vicinity of L1688, providing evidence that the misalignment between VGT-HI and stellar polarization stems from the influence of the molecular cloud's magnetic field on the polarization of starlight. When comparing VGT-$^{12}$CO to stellar polarization and Planck polarization data, we observe that VGT-$^{12}$CO effectively reconciles the misalignment noted with VGT-HI, showing statistical alignment with Planck polarization measurements. This indicates that VGT-$^{12}$CO could be integrated with VGT-HI, offering vital insights into the magnetic fields of molecular clouds, thereby enhancing the accuracy of our 3D magnetic field reconstructions.
We report the discovery of a new Blue Large-Amplitude Pulsator, SMSS J184506-300804 (SMSS-BLAP-1), in Data Release 2 of the SkyMapper Southern Sky Survey. We conduct high-cadence photometric observations in the $u$ band to confirm a periodic modulation of the lightcurve. SMSS-BLAP-1 has a ~19-min pulsation period with an amplitude of 0.2 mag in the $u$ band and is similar to the classical BLAPs found by OGLE. From spectroscopic observations with the Wide-Field Spectrograph on the ANU 2.3m telescope, we confirm it as a low-gravity BLAP: best-fit parameters from our spectral model grid are estimated as $T_\mathrm{eff}$ = 27,000 K and $\log g$ (cm s$^{-2}$) = 4.4. Remarkably, we find evidence of a periodic signal in the residual lightcurve that could hint at a non-radial pulsation mode, and an excess of Ca II K and Na I D absorption from potential circumstellar material.
The tremendous tidal force associated with the supermassive black hole (SMBH) at the center of our Galaxy is expected to strongly suppress star formation in its vicinity. Stars within 1" of the SMBH thus likely formed further out and migrated to their current positions. In this study, we conducted spectroscopic observations of S0-6/S10, one of the closest late-type stars to the SMBH (projected distance of about 0.3"). Using metal absorption lines in the spectra of S0-6, we measured its radial velocity from 2014 to 2021 and detected a marginal acceleration, which indicates that S0-6 is close to the SMBH. The S0-6 spectra were employed to determine its stellar parameters, including temperature, chemical abundances ([M/H], [Fe/H], [alpha/Fe], [Ca/Fe], [Mg/Fe], [Ti/Fe]), and age. Our results suggest that S0-6 is very old (> ~10 Gyr) and has an origin different from that of stars born in the central pc region.
We investigate the black hole mass function (BHMF) and Eddington ratio distribution function (ERDF) of broad-line AGNs at z=4, based on a sample of 52 quasars with i<23.2 at 3.50 < z < 4.25 from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) S16A-Wide2 dataset, and 1,462 quasars with i<20.2 in the same redshift range from the Sloan Digital Sky Survey (SDSS) DR7 quasar catalog. Virial BH masses of quasars are estimated using the width of the CIV 1549{\AA} line and the continuum luminosity at 1350{\AA}. To obtain the intrinsic broad-line AGN BHMF and ERDF, we correct for the incompleteness in the low-mass and/or low-Eddington-ratio ranges caused by the flux-limited selection. The resulting BHMF is constrained down to $\log M_{\rm BH}/M_{\odot}\sim7.5$. In comparison with broad-line AGN BHMFs at z=2 in the literature, we find that the number density of massive SMBHs peaks at higher redshifts, consistent with the "down-sizing" evolutionary scenario. Additionally, the resulting ERDF shows a negative dependence on BH mass, suggesting more massive SMBHs tend to accrete at lower Eddington ratios at z=4. With the derived intrinsic broad-line AGN BHMF, we also evaluate the active fraction of broad-line AGNs among the entire SMBH population at z=4. The resulting active fraction may suggest a positive dependence on BH mass. Finally, we examine the time evolution of broad-line AGN BHMF between z=4 and 6 through solving the continuity equation. The results suggest that the broad-line AGN BHMFs at z=4-6 only show evolution in their normalization, but with no significant changes in their shape.
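Single-epoch virial masses of this kind are typically computed with a calibration such as that of Vestergaard \& Peterson (2006); whether the paper uses this exact recipe is an assumption here, and the sketch below is only illustrative.

    import numpy as np

    # Single-epoch virial BH mass from the CIV line, using the commonly
    # adopted Vestergaard & Peterson (2006) calibration:
    #   log(M_BH/M_sun) = log[(FWHM/1000 km/s)^2 (lam*L_1350/1e44 erg/s)^0.53] + 6.66
    def mbh_civ(fwhm_kms, lam_L1350_erg_s):
        return (2 * np.log10(fwhm_kms / 1e3)
                + 0.53 * np.log10(lam_L1350_erg_s / 1e44) + 6.66)

    # Illustrative quasar: FWHM(CIV) = 4000 km/s, lam*L_1350 = 1e46 erg/s.
    print(f"log(M_BH/M_sun) = {mbh_civ(4000.0, 1e46):.2f}")  # ~8.9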
We provide a catalogue of atmospheric parameters for 1,806,921 cool dwarfs from Gaia DR3 which lie within the range covered by LAMOST cool dwarf spectroscopic parameters: $3200\,{\rm K} < T_{\rm eff} < 4300\,{\rm K}$, $-0.8 < {\rm [M/H]} < 0.2$ dex, and $4.5 < \log g < 5.5$ dex. Our values are derived from machine learning models trained on multi-band photometry corrected for dust. The photometric data comprise optical SDSS r, i, z bands, near-infrared 2MASS J, H, K, and mid-infrared ALLWISE W1, W2. We used both random forest and LightGBM machine learning models and found similar results from both, with error dispersions of 68 K, 0.22 dex, and 0.05 dex for $T_{\rm eff}$, [M/H], and $\log g$, respectively. Assessment of the relative feature importance of the different photometric colors indicated W1--W2 as most sensitive to both $T_{\rm eff}$ and $\log g$, and J--H as most sensitive to [M/H]. We find that our values show good agreement with APOGEE, but differ significantly from those provided as part of Gaia DR3.
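A minimal sketch of this kind of pipeline, on synthetic stand-in data rather than the LAMOST training set, using scikit-learn's random forest and including the feature-importance ranking mentioned above:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    # Stand-ins for de-reddened colors, e.g. r-i, i-z, J-H, H-K, W1-W2.
    colors = rng.normal(size=(n, 5))
    # Toy label: T_eff driven mostly by the last two colors plus noise.
    teff = 3750 + 300 * colors[:, 4] + 100 * colors[:, 2] + rng.normal(0, 50, n)

    X_tr, X_te, y_tr, y_te = train_test_split(colors, teff, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    resid = model.predict(X_te) - y_te
    print(f"T_eff dispersion: {resid.std():.0f} K")
    # feature_importances_ plays the role of the color-sensitivity ranking.
    print(model.feature_importances_.round(2))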
Context: Radio wavelengths offer a unique possibility of tracing the total star-formation rate (SFR) in galaxies, both obscured and unobscured. To probe the dust-unbiased star-formation history, an accurate measurement of the radio luminosity function (LF) for star-forming galaxies (SFGs) is crucial. Aims: We make use of an SFG sample (5915 sources) from the Very Large Array (VLA) COSMOS 3 GHz data to perform new modeling of the radio LF. By integrating the analytical LF, we aim to calculate the cosmic SFR density (SFRD) history since $z\sim5$. Methods: For the first time, we use models of both pure luminosity evolution (PLE) and joint luminosity+density evolution (LADE) to fit the LFs directly to the radio data using a full maximum-likelihood analysis, considering the sample completeness correction. We also incorporate updated observations of local radio LFs and radio source counts into the fitting process to obtain additional constraints. Results: We find that the PLE model cannot describe the evolution of the radio LF at high redshift ($z>2$). By construction, our LADE models can successfully fit a large amount of data on radio LFs and source counts of SFGs from recent observations. Our SFRD curve shows a good fit to the SFRD points derived from previous radio estimates. Given that our radio LFs are free from the biases affecting previous studies that fitted the $1/V_{\rm max}$ LF points, our SFRD results should improve on these earlier estimates. At $z<2$, our SFRD matches well the multi-wavelength compilation of Madau \& Dickinson, while our SFRD turns over at a slightly higher redshift ($2<z<2.5$) and falls more rapidly out to high redshift.
Giant Radio Galaxies (GRGs) are Active Galactic Nuclei (AGN) with radio emission that extends over projected sizes $>0.7\,$Mpc. The large angular sizes associated with GRGs complicate their identification in radio survey images using traditional source finders. In this Note, we use DRAGNhunter, an algorithm designed to find double-lobed radio galaxies, to search for GRGs in the Faint Images of the Radio Sky at Twenty cm survey (FIRST). Radio and optical images of identified candidates are visually inspected to confirm their authenticity, resulting in the discovery of $63$ previously unreported GRGs.
Context. Star clusters constitute a relevant part of the stellar population in our Galaxy. The feedback processes they exert on the interstellar medium impact multiple physical processes, from the chemical to the dynamical evolution of the Galaxy. In addition, young and massive stellar clusters might act as efficient particle accelerators, possibly contributing to the production of cosmic rays. Aims. We aim to evaluate the wind luminosity driven by the young (< 30 Myr) Galactic open stellar clusters observed by the Gaia space mission, which is crucial to determine the energy channeled into accelerated particles. Methods. To this end, we develop a method relying on the number, magnitude, and line-of-sight extinction of the stars observed per cluster. Assuming that the stellar mass function follows a Kroupa mass distribution, and accounting for the maximum stellar mass allowed by both the parent cluster age and mass, we conservatively estimate the mass and wind luminosity of 387 local clusters within the second data release of Gaia. Results. We compare the results of our computation with recent estimates of young cluster masses. With respect to these, we provide a sample three times larger, particularly above a few thousand solar masses, which is of the utmost relevance for predicting the gamma-ray emission resulting from the interaction of accelerated particles. In fact, the cluster wind luminosity distribution we obtain extends up to $3\times10^{38}$ erg s$^{-1}$, a promising feature in terms of potential particle acceleration scenarios.
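A Kroupa mass function can be sampled directly, e.g. by rejection sampling; the sketch below (illustrative only, with the wind-luminosity prescription omitted) shows the mass-budget part of such a calculation.

    import numpy as np

    # Sample stellar masses from a Kroupa-like IMF (slope -1.3 below
    # 0.5 M_sun, -2.3 above; 0.08-100 M_sun) by rejection sampling in log m.
    rng = np.random.default_rng(3)

    def kroupa_pdf(m):
        return np.where(m < 0.5, (m / 0.5) ** -1.3, (m / 0.5) ** -2.3)

    def sample_kroupa(n, m_lo=0.08, m_hi=100.0):
        out = []
        while len(out) < n:
            m = np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), n))
            # In log-m space the density is m * pdf(m); its maximum is at m_lo.
            keep = rng.uniform(0, 1, n) < m * kroupa_pdf(m) / (m_lo * kroupa_pdf(m_lo))
            out.extend(m[keep])
        return np.array(out[:n])

    masses = sample_kroupa(10_000)
    print(f"cluster mass ~ {masses.sum():.0f} M_sun, max star {masses.max():.1f} M_sun")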
Galaxies play a key role in our endeavor to understand how structure formation proceeds in the Universe. For any precision study of cosmology or galaxy formation, there is a strong demand for huge sets of realistic mock galaxy catalogs, spanning cosmologically significant volumes. For such a daunting task, methods that can produce a direct mapping between dark matter halos from dark matter-only simulations and galaxies are strongly preferred, as producing mocks from full-fledged hydrodynamical simulations or semi-analytical models is too expensive. Here we present a Graph Neural Network-based model that is able to accurately predict key properties of galaxies such as stellar mass, $g-r$ color, star formation rate, gas mass, stellar metallicity, and gas metallicity, purely from dark matter properties extracted from halos along the full assembly history of the galaxies. Tests based on the TNG300 simulation of the IllustrisTNG project show that our model can recover the baryonic properties of galaxies to high accuracy, over a wide redshift range ($z = 0-5$), for all galaxies with stellar masses more massive than $10^9\,M_\odot$ and their progenitors, with strong improvements over the state-of-the-art methods. We further show that our method makes substantial strides toward providing an understanding of the implications of the IllustrisTNG galaxy formation model.
Thanks to their strong emission lines, planetary nebulae (PNe) are essential tracers of the kinematics of the diffuse halo and intracluster light, where stellar spectroscopy is unfeasible. However, that is not all they can reveal about the underlying stellar population. In recent years, it has also been found that PNe in the metal-poor halos of galaxies have different properties (specific frequency, luminosity function) than PNe in the more metal-rich galaxy centers. A more quantitative understanding of the role of age and metallicity in these relations would turn PNe into valuable stellar-population tracers. To achieve that, a full characterization of PNe in regions where the stellar light can also be analysed in detail is necessary. In this work, we make use of integral-field spectroscopic data covering the central regions of galaxies, which allow us to measure stellar ages and metallicities as well as to detect PNe. This analysis is fundamental to calibrating PNe as stellar population tracers and to pushing our understanding of galaxy properties to unprecedented galactocentric distances.
Using the SWIFT simulation code we study different forms of active galactic nuclei (AGN) feedback in idealized galaxy groups and clusters. We first present a physically motivated model of black hole (BH) spin evolution and a numerical implementation of thermal isotropic feedback (representing the effects of energy-driven winds) and collimated kinetic jets that they launch at different accretion rates. We find that kinetic jet feedback is more efficient at quenching star formation in the brightest cluster galaxies (BCGs) than thermal isotropic feedback, while simultaneously yielding cooler cores in the intracluster medium (ICM). A hybrid model with both types of AGN feedback yields moderate star formation rates, while having the coolest cores. We then consider a simplified implementation of AGN feedback by fixing the feedback efficiencies and the jet direction, finding that the same general conclusions hold. We vary the feedback energetics (the kick velocity and the heating temperature), the fixed efficiencies and the type of energy (kinetic versus thermal) in both the isotropic and the jet case. The isotropic case is largely insensitive to these variations. In particular, we highlight that kinetic isotropic feedback (used e.g. in IllustrisTNG) is similar in its effects to its thermal counterpart (used e.g. in EAGLE). On the other hand, jet feedback must be kinetic in order to be efficient at quenching. We also find that it is much more sensitive to the choice of energy per feedback event (the jet velocity), as well as the efficiency. The former indicates that jet velocities need to be carefully chosen in cosmological simulations, while the latter motivates the use of BH spin evolution models.
Black hole driven outflows in galaxies hosting active galactic nuclei (AGN) may interact with their interstellar medium (ISM) affecting star formation. Such feedback processes, reminiscent of those seen in massive galaxies, have been reported recently in some dwarf galaxies. However, such studies have usually been on kiloparsec and larger scales and our knowledge on the smallest spatial scales to which these feedback processes can operate is unclear. Here we demonstrate radio jet$-$ISM interaction on the scale of an asymmetric triple radio structure of $\sim$ 10 parsec size in NGC 4395. This triple radio structure is seen in the 15 GHz continuum image and the two asymmetric jet-like structures are situated on either side of the radio core that coincides with the optical {\it Gaia} position. The high resolution radio image and the extended [OIII]$\lambda$5007 emission, indicative of an outflow, are spatially coincident and are consistent with the interpretation of a low power radio jet interacting with the ISM. Modelling of the spectral lines using {\tt MAPPINGS}, and estimation of temperature using optical integral field spectroscopic data suggest shock ionization of the gas. The continuum emission at 237 GHz, though weak, was found to spatially coincide with the AGN. However, the CO(2$-$1) line emission was found to be displaced by around 20 parsec northward of the AGN core. The spatial coincidence of molecular H$_2$$\lambda$2.4085 along the jet direction, the morphology of ionised [OIII]$\lambda$5007 and displacement of the CO(2$-$1) emission argues for conditions less favourable for star formation in the central $\sim$ 10 parsec region.
The Milky Way is thought to host a huge population of interstellar objects (ISOs), with a number density of approximately $10^{15}\,\mathrm{pc}^{-3}$ around the Sun, which are formed and shaped by a diverse set of processes ranging from planet formation to galactic dynamics. We define a novel framework: first to predict the properties of this Galactic ISO population by combining models of processes across planetary and galactic scales, and second to make inferences about the processes modelled by comparing the predicted population to what is observed. We predict the spatial and compositional distribution of the Galaxy's population of ISOs by modelling the Galactic stellar population with data from the APOGEE survey and combining this with a protoplanetary disk chemistry model. Selecting ISO water mass fraction as an example observable quantity, we evaluate its distribution both at the position of the Sun and averaged over the Galactic disk; our prediction for the Solar neighbourhood is compatible with the inferred water mass fraction of 2I/Borisov. We show that the well-studied Galactic stellar metallicity gradient has a corresponding ISO compositional gradient. We also demonstrate the inference part of the framework by using the currently observed ISO composition distribution to constrain the parent star metallicity dependence of the ISO production rate. This constraint, and other inferences made with this framework, will improve dramatically as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) progresses and more ISOs are observed. Finally, we explore generalisations of this framework to other Galactic populations, such as that of exoplanets.
We extend the analysis presented in \cite{contini2023a} to higher redshifts, up to $z=2$, focusing on the relation between the intracluster light (ICL) fraction and halo mass, its dependence on redshift, and the roles played by halo concentration and formation time, in a large sample of simulated galaxy groups/clusters with $13\lesssim \log M_{halo} \lesssim 15$. Moreover, a key focus is to isolate the relative contributions of the main channels of ICL formation to the total amount. The ICL fraction at higher redshift depends only weakly on halo mass and is comparable with that at the present time, in agreement with recent observations. Stellar stripping, mergers and pre-processing are the main channels of ICL formation, with stellar stripping accounting for $\sim 90\%$ of the total ICL, regardless of halo mass and redshift. Pre-processing is an important process by which clusters accrete already formed ICL. The diffuse component forms very early, at $z\sim 0.6$, and its formation depends on both the concentration and the formation time of the halo, with more concentrated and earlier-formed haloes assembling their ICL earlier than later-formed ones. The efficiency of this process is independent of halo mass, but increases with decreasing redshift, which implies that stellar stripping becomes more important with time as the concentration increases. This highlights the link between the ICL and the dynamical state of a halo: groups/clusters that have a higher fraction of diffuse light are more concentrated, relaxed, and in an advanced stage of growth.
Dynamical friction can be a valuable tool for inferring dark matter properties that are difficult to constrain by other methods. Most applications of dynamical friction calculations are concerned with the long-term angular momentum loss and orbital decay of the perturber within its host. This, however, assumes knowledge of the unknown initial conditions of the system. We advance an alternative methodology to infer the host properties from the perturber's shape distortions induced by the tides of the wake of dynamical friction, which we refer to as tidal dynamical friction. As the shape distortions rely on the tidal field, which has a predominantly local origin, we present a strategy to find the local wake by integrating the stellar orbits back in time along with the perturber, then removing the perturber's potential and re-integrating them back to the present. This provides perturbed and unperturbed coordinates and hence a change in coordinates, density, and acceleration fields, which yields the back-reaction experienced by the perturber. The method successfully recovers the tidal field of the wake, based on a comparison with N-body simulations. We show that, similar to the tidal field itself, the noise and randomness of the dynamical friction force due to the finite number of stars is also dominated by regions close to the perturber. Stars near the perturber influence it more but are fewer in number, causing a high variance in the acceleration field. These fluctuations are intrinsic to dynamical friction. We show that a stellar density of $0.0014\,{\rm M_\odot\, kpc^{-3}}$ yields an inherent 10% variance in the dynamical friction force. The current method extends the family of dynamical friction methods that allow the inference of host properties from the tidal forces of the wake. It can be applied to specific galaxies, such as the Magellanic Clouds, with Gaia data.
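The back-and-forth integration strategy described above can be sketched schematically in a toy 2D potential: integrate the tracers back in time with the perturber present, then return to the present with the perturber removed, and difference the two present-day configurations. Everything below (potentials, units, numbers) is illustrative, and the perturber is held static for simplicity.

    import numpy as np

    def accel(pos, with_perturber, x_p):
        # Softened attractive host field plus optional perturber field.
        a = -pos / (np.linalg.norm(pos, axis=1, keepdims=True) ** 2 + 1.0)
        if with_perturber:
            d = pos - x_p
            a += -0.1 * d / (np.linalg.norm(d, axis=1, keepdims=True) ** 2 + 0.01)
        return a

    def leapfrog(pos, vel, with_perturber, x_p, dt, n):
        for _ in range(n):
            vel += 0.5 * dt * accel(pos, with_perturber, x_p)
            pos += dt * vel
            vel += 0.5 * dt * accel(pos, with_perturber, x_p)
        return pos, vel

    rng = np.random.default_rng(7)
    pos = rng.normal(0, 2, (2000, 2))
    vel = rng.normal(0, 0.5, (2000, 2))
    x_p = np.array([2.0, 0.0])

    # Back in time with the perturber on (flip velocities to reverse time),
    # then forward to the present with the perturber removed.
    p_b, v_b = leapfrog(pos.copy(), -vel.copy(), True, x_p, 0.01, 500)
    p_u, v_u = leapfrog(p_b, -v_b, False, x_p, 0.01, 500)

    delta = pos - p_u   # wake-induced displacement field of the tracers
    print(f"rms displacement: {np.sqrt((delta**2).sum(1)).mean():.3f}")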
We delve into the assembly pathways and environments of compact groups (CGs) of galaxies using mock catalogues generated from semi-analytical models (SAMs) run on the Millennium simulation. We investigate the ability of SAMs to replicate the observed CG environments and whether CGs with different assembly histories tend to inhabit specific cosmic environments. We also analyse whether the environment or the assembly history is more important in shaping CG properties. We find that about half of the CGs in SAMs are non-embedded systems, 40% inhabit loose groups or nodes of filaments, while the rest are distributed evenly between filaments and voids, in agreement with observations. We observe that early-assembled CGs preferentially inhabit large galaxy systems (~60%), while around 30% remain non-embedded. Conversely, late-formed CGs exhibit the opposite trend. We also find that late-formed CGs have lower velocity dispersions and larger crossing times than early-formed CGs, mainly because they are preferentially non-embedded. Those late-formed CGs that inhabit large systems do not show the same features. Therefore, the environment plays a strong role in these properties for late-formed CGs. Early-formed CGs are more evolved, displaying larger velocity dispersions, shorter crossing times, and more dominant first-ranked galaxies, regardless of the environment. Finally, the difference in brightness between the two brightest members of a CG depends only on the assembly history and not on the environment. CGs residing in diverse environments have undergone varied assembly processes, making them suitable for studying their evolution and the interplay of nature and nurture in their traits.
Binary stars are believed to be key determinants in understanding globular cluster evolution. In this paper, we present multi-band photometric analyses of five variables in the nearest Galactic globular cluster, M4, using observations from CASE and the M4 Core Project with HST for four variables (V48, V49, V51, and V55), and data collected with the T40 and C18 telescopes of the Wise Observatory for one variable (NV4). The light curves of the five binaries are analyzed using the Wilson-Devinney (WD) method and their fundamental parameters are derived. A period-variation study was carried out using times of minima obtained from the literature for four binaries, and the nature of the observed variation is discussed. The evolutionary state of the systems is evaluated using the M-R diagram, in comparison with a few well-studied close binaries in other globular clusters. Based on data from the Gaia DR3 database, a three-dimensional vector-point diagram (VPD) was built to evaluate the cluster membership of the variables, and two of them (V49 and NV4) were found not to be cluster members.
Our understanding of how the size of galaxies has evolved over cosmic time is based on the use of the half-light (effective) radius as a size indicator. Although the half-light radius has many advantages for structurally parameterising galaxies, it does not provide a measure of the global extent of an object, but only an indication of the size of the region containing the innermost 50% of its light. Therefore, the observed mild evolution of the effective radius of disc galaxies with cosmic time is conditioned by the evolution of the central part of the galaxies rather than by the evolutionary properties of the whole structure. Expanding on the works by Trujillo et al. (2020) and Chamba et al. (2022), we study the size evolution of disc galaxies using as a size indicator the radial location of the gas density threshold for star formation. As a proxy for this quantity, we use the radial position of the truncation (edge) in the stellar surface mass density profiles of galaxies. To conduct this task, we have selected 1048 disc galaxies with M$_{\rm stellar}$ $>$ 10$^{10}$ M$_{\odot}$ and spectroscopic redshifts up to z=1 within the HST CANDELS fields. We have derived their surface brightness, colour and stellar mass density profiles. Using the new size indicator, the observed scatter of the size-mass relation (~0.1 dex) decreases by a factor of ~2 compared to that using the effective radius. At a fixed stellar mass, Milky Way-like (M$_{\rm stellar}$ ~ 5$\times$10$^{10}$ M$_{\odot}$) disc galaxies have on average increased their sizes by a factor of two in the last 8 Gyr, while the surface stellar mass density at the edge position has decreased by more than an order of magnitude, from ~13 M$_{\odot}$/pc$^2$ (z=1) to ~1 M$_{\odot}$/pc$^2$ (z=0). These results reflect a dramatic evolution of the outer parts of MW-like disc galaxies, which grow at ~1.5 kpc Gyr$^{-1}$.
The detection of low-energy anti-neutrinos from reactors and other sources has typically relied on the conversion of an anti-neutrino on hydrogen, producing a positron and a free neutron. This neutron is subsequently captured on a secondary element with a large neutron-capture cross section, such as gadolinium or cadmium. We have studied the anti-neutrino conversion and suggest other elements that have a comparable cross section for anti-neutrino reactions. With most neutron captures on gadolinium, it is possible to obtain two or three delayed gamma signals of known energy. With today's fast electronics, this leads to the possibility of a triple delayed coincidence, using the positron annihilation on atomic-shell electrons as the starting signal. We have also found that an isotope of tungsten, $^{183}$W, offers a large anti-neutrino interaction cross section of $1.19 \times 10^{-46}$ m$^2$ and an anti-neutrino threshold energy of 2.094 MeV. This reaction produces a nuclear m1 excited state of $^{183}$Ta$^*$ that emits a signature secondary gamma pulse of 73 keV with a 107 ns half-life. This offers a new delayed-coincidence technique that can cleanly identify anti-neutrinos with less shielding, with the added advantage of the anti-neutrino threshold shifting down to low energies.
Context. The accuracy of photometric calibration has gradually become a limiting factor in various fields of astronomy, limiting the scientific output of a host of research. Calibration using artificial light sources in low Earth orbit remains largely unexplored. Aims. We aim to demonstrate that photometric calibration using light sources in low Earth orbit is a viable and competitive alternative/complement to current calibration techniques, and to explore the associated ideas and basic theory. Methods. We present the publicly available Python code Streaktools as a means to simulate and perform photometric calibration using real and simulated light streaks. We use Streaktools to perform `pill' aperture photometry on 131 simulated streaks, and MCMC-based PSF model-fitting photometry on 425 simulated streaks, in an attempt to recover the magnitude zeropoint of a real exposure of the DECam instrument on the Blanco 4m telescope. Results. We show that calibration using pill photometry is too inaccurate to be useful, but that PSF photometry is able to produce unbiased and accurate ($1\sigma$ error = 3.4 mmag) estimates of the zeropoint of a real image in a realistic scenario, with a reasonable light source.
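A 'pill' aperture is the natural generalisation of a circular aperture to a streak: all pixels within some radius of the segment joining the streak endpoints. The sketch below is a minimal stand-alone implementation; Streaktools' actual interface and background treatment may differ.

    import numpy as np

    def pill_mask(shape, p0, p1, r):
        """Boolean mask of pixels within radius r of segment p0-p1 (x, y)."""
        yy, xx = np.mgrid[: shape[0], : shape[1]]
        p = np.stack([xx, yy], axis=-1).astype(float)
        a, b = np.asarray(p0, float), np.asarray(p1, float)
        ab = b - a
        # Parameter of the closest point on the segment, clamped to [0, 1].
        t = np.clip(((p - a) @ ab) / (ab @ ab), 0.0, 1.0)
        d = np.linalg.norm(p - (a + t[..., None] * ab), axis=-1)
        return d <= r

    img = np.random.default_rng(5).normal(100.0, 3.0, (256, 256))  # toy image
    mask = pill_mask(img.shape, (40, 60), (200, 180), r=6.0)
    flux = img[mask].sum() - np.median(img) * mask.sum()  # crude background removal
    print(f"aperture pixels: {mask.sum()}, background-subtracted flux: {flux:.1f}")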
The search for extraterrestrial (alien) life is one of the greatest scientific quests, yet it raises fundamental questions about just what we should be looking for and how. We approach alien hunting from the perspective of an experimenter engaging in binary classification with some true positive and confounding positive probabilities (TPP and CPP). We derive the Bayes factor in such a framework between two competing hypotheses, which we use to classify experiments as impotent, imperfect or ideal. Similarly, the experimenter can be classified as dogmatic, biased or agnostic. We show how the unbounded explanatory and evasion capability of aliens poses fundamental problems to experiments directly seeking aliens. Instead, we advocate framing the experiments as looking for that which lies outside of known processes, which means the hypotheses we test do not directly concern aliens per se. To connect back to aliens requires a second level of model selection, for which we derive the final odds ratio in a Bayesian framework. This reveals that it is fundamentally impossible ever to establish alien life at some threshold odds ratio, $\mathcal{O}_{\mathrm{crit}}$, unless we deem the prior probability that some as-yet-undiscovered natural process could explain the event to be less than $(1+\mathcal{O}_{\mathrm{crit}})^{-1}$. This elucidates how alien hunters need to carefully consider the challenging problem of how probable unknown unknowns are, such as new physics or chemistry, and why it is arguably most fruitful to focus on experiments for which our domain knowledge is thought to be asymptotically complete.
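The quoted bound follows in one line. Writing $p_{\rm u}$ for the prior probability that some as-yet-undiscovered natural process explains the event (our notation, chosen to match the statement above), even an ideal experiment can only rule out known processes, so the posterior odds $\mathcal{O}$ in favour of aliens obey

\[
\mathcal{O} \;\le\; \frac{1 - p_{\rm u}}{p_{\rm u}},
\qquad\text{so}\qquad
\mathcal{O} \ge \mathcal{O}_{\mathrm{crit}}
\;\;\text{requires}\;\;
p_{\rm u} \;\le\; \left(1 + \mathcal{O}_{\mathrm{crit}}\right)^{-1}.
\]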
From April to May 2023, the superBIT telescope was lifted to the Earth's stratosphere by a helium-filled super-pressure balloon, to acquire astronomical imaging from above (99.5% of) the Earth's atmosphere. It was launched from New Zealand and then, over 40 days, circumnavigated the globe five times at latitudes between 40 and 50 degrees South. Attached to the telescope were four 'DRS' (Data Recovery System) capsules containing 5 TB of solid-state data storage, plus a GNSS receiver, Iridium transmitter, and parachute. Data from the telescope were copied to these, and two were dropped over Argentina. They drifted 61 km horizontally while they descended 32 km, but we predicted their descent vectors to within 2.4 km: in this location, the discrepancy appears irreducible below 2 km because of high-speed, gusty winds and local topography. The capsules then reported their own locations to within a few metres. We recovered the capsules and successfully retrieved all of superBIT's data - despite the telescope itself being later destroyed on landing.
Halite (NaCl mineral) has exhibited the potential to preserve microorganisms for millions of years on Earth. This mineral has also been identified on Mars and in meteorites. In this study, we investigated the potential of halite crystals to protect microbial life forms on the surface of an airless body (e.g., a meteorite), for instance during a lithopanspermia process (the interplanetary travel step) in the early Solar System. To investigate the effect of the radiation of the young Sun on microorganisms, we performed extensive simulation experiments employing a synchrotron facility. We focused on two exposure conditions: vacuum (low Earth orbit, $10^{-4}$ Pa) and vacuum-ultraviolet (VUV) radiation (range 57.6 - 124 nm, flux 7.14 W m$^{-2}$), with the latter representing an extreme scenario with high VUV fluxes comparable to the radiation of a stellar superflare from the young Sun. The stellar VUV parameters were estimated using the very well-studied young-Sun analog $\kappa^{1}$ Cet. To evaluate the protective effects of halite, we entrapped a halophilic archaeon (Haloferax volcanii) and a non-halophilic bacterium (Deinococcus radiodurans) in laboratory-grown halite. Control groups were cells entrapped in salt crystals (mixtures of different salts and NaCl) and non-trapped (naked) cells, respectively. All groups were exposed either to vacuum alone or to vacuum plus VUV. Our results demonstrate that halite can serve as protection against vacuum and VUV radiation, regardless of the type of microorganism. In addition, we found that the protection is higher than that provided by crystals obtained from mixtures of salts. This extends the protective effects of halite documented in previous studies and reinforces the case for considering crystals of this mineral as potential preservation structures on airless bodies or as vehicles for the interplanetary transfer of microorganisms.
The Sloan Digital Sky Survey (SDSS) is one of the largest international astronomy organizations. We present demographic data based on surveys of its members from 2014, 2015 and 2016, during the fourth phase of SDSS (SDSS-IV). We find about half of SDSS-IV collaboration members were based in North America, a quarter in Europe, and the remainder in Asia and Central and South America. Overall, 26-36% are women (from 2014 to 2016), and up to 2% report non-binary genders. 11-14% report that they are racial or ethnic minorities where they live. The fraction of women drops with seniority, and is also lower among collaboration leadership. Men in SDSS-IV were more likely to report being in a leadership role, and for the role to be funded and formally recognized. SDSS-IV collaboration members are twice as likely as the general population to have a parent with a college degree, and are ten times more likely to have a parent with a PhD. This trend is slightly enhanced for female collaboration members. Despite this, the fraction of first generation college students (FGCS) is significant (31%). This fraction is higher among collaboration members who are racial or ethnic minorities (40-50%), and lower among women (15-25%). SDSS-IV implemented many inclusive policies and established a dedicated committee, the Committee on INclusiveness in SDSS (COINS). More than 60% of the collaboration agree that the collaboration is inclusive; however, collaboration leadership agree with this more strongly than the general membership. In this paper, we explain these results in full, including the history of inclusive efforts in SDSS-IV. We conclude with a list of suggested recommendations based on our findings, which can be used to improve equity and inclusion in large astronomical collaborations, which we argue is not only a moral good but will also optimize their scientific output.
The synergy between high-resolution rotational spectroscopy and quantum-chemical calculations is essential for exploring the future detection of molecules, especially when spectroscopic parameters are not yet available. Using highly correlated ab initio quartic force fields (QFFs) from explicitly correlated coupled cluster theory, a complete set of rotational constants and centrifugal distortion constants for D$_2$CS and the cis/trans-DCSD isomers has been produced. Comparing our new ab initio results for D$_2$CS with new laboratory rotational spectroscopy data for the same species, the computed $B$ and $C$ rotational constants are accurate to within 0.1%, while the deviation of the $A$ constant is only slightly larger. Additionally, quantum chemical vibrational frequencies are provided; these spectral reference data and the new experimental rotational lines will serve as references for potential observation of these deuterated sulfur species with either ground-based radio telescopes or space-based infrared observatories.
We investigate the impact of transient noise artifacts, or {\it glitches}, on gravitational wave inference, and the efficacy of data cleaning procedures in recovering unbiased source properties. Due to their time-frequency morphology, broadband glitches produce moderate to significant biasing of posterior distributions away from true values. In contrast, narrowband glitches have negligible biasing effects owing to distinct signal and glitch morphologies. We inject simulated binary black hole signals into data containing three common glitch types from past LIGO-Virgo observing runs, and reconstruct both signal and glitch waveforms using {\tt BayesWave}, a wavelet-based Bayesian analysis. We apply the standard LIGO-Virgo-KAGRA deglitching procedure to the detector data: we subtract the glitch waveform estimated by the joint {\tt BayesWave} inference before performing parameter estimation with detailed compact binary waveform models. We find that this deglitching effectively mitigates bias from broadband glitches, with posterior peaks aligning with true values post-deglitching. This provides a baseline validation of existing techniques, while demonstrating waveform reconstruction improvements to the Bayesian algorithm for robust astrophysical characterization in glitch-prone detector data.
We investigate the propagation of certain non-plane wave solutions to Maxwell's equations in both flat and curved spacetimes. We find that such solutions (or rather parts of them) exhibit accelerative behaviour, and in particular do not propagate on straight lines. Having established these results, we then turn to their conceptual significance -- which, in brief, we take to be the following: (i) one should not assume that the part of electromagnetic waves from outer space that is subject to detection is localised onto null trajectories; therefore (ii) astrophysicists and cosmologists should at least be wary about making such assumptions in their inferences from obtained data, for to do so may lead to incorrect inferences regarding the nature of our universe.
We calculate the reflection of diffuse galactic emission by meteor trails and investigate its potential relationship to meteor radio afterglows (MRAs). The formula for the reflected diffuse galactic emission is derived in a simplified case, assuming that the signals are mirrored by the cylindrical over-dense ionization trail of a meteor. The overall observed reflection is simulated through a ray tracing algorithm together with diffuse galactic emission modelled with the GSM sky model. We demonstrate that the spectrum of the reflected signal is broadband and follows a power law with a negative spectral index of around -1.3. The intensity of the reflected signal varies with local sidereal time and the brightness of the meteor, and can reach 2000 Jy. These results agree with some previous observations of MRAs. We therefore conclude that the reflection of galactic emission by meteor trails is a possible mechanism for MRAs, worthy of further research.
Solutions for a slowly rotating Kerr-like black hole with a flat horizon are explored in gravitational theories modified by dynamical Chern-Simons terms, with cylindrical metrics that asymptotically approach anti-de Sitter spacetime. It is shown that the cross-term of the metric is unaffected by perturbations of the Chern-Simons scalar, independently of whether the dynamical Chern-Simons field equation is uncharged or charged with an electric field. From this result, it follows that the Chern-Simons scalar field can affect only those metrics that asymptotically approach flat spacetime.
In loop quantum cosmology, ambiguities in the Hamiltonian constraint can result in models with varying phenomenological predictions. In the homogeneous isotropic models, these ambiguities were settled, and the improved dynamics was found to be a unique and phenomenologically viable choice. This issue has remained unsettled upon the inclusion of anisotropies, and in the Bianchi-I model there exist two generalizations of the isotropic improved dynamics. In the first of these, labelled the $\bar \mu$ quantization, the edge length of holonomies depends on the inverse of the directional scale factor. This quantization has been favored since it results in universal bounds on energy density and anisotropic shear, and can be viably formulated for non-compact as well as compact spatial manifolds. However, there exists an earlier quantization, labelled the $\bar \mu'$ quantization, where the edge lengths of holonomies depend on the inverse of the square root of the directional triads. This quantization is also non-singular and has so far been believed to yield a consistent physical picture for spatially compact manifolds. We examine the physical viability of these quantizations for different types of matter in detail by performing a large number of numerical simulations. Our analysis reveals certain limitations which have so far remained unnoticed. We find that while being non-singular, the $\bar \mu'$ quantization suffers from a surprising problem where one of the triad components and the associated polymerized term retains Planckian character even at large volumes. As a result, not only is the anisotropic shear not preserved across the bounce, which is most pronounced in the vacuum case, but the universe can exhibit an unexpected cyclic evolution. These problematic features are absent from the $\bar \mu$ quantization, leaving it as the only viable prescription for loop quantizing the Bianchi-I model.
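For orientation, in terms of the directional triad components $p_i$ and the area gap $\Delta$, the two schemes discussed above are commonly written (our sketch of the standard conventions; indices $i,j,k$ all distinct) as
$$\bar\mu_i = \sqrt{\Delta}\sqrt{\frac{|p_i|}{|p_j\,p_k|}}, \qquad \bar\mu'_i = \sqrt{\frac{\Delta}{|p_i|}},$$
so that $\bar\mu_i \propto 1/a_i$ scales with the inverse directional scale factor, while $\bar\mu'_i$ scales with the inverse square root of the directional triad.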
In this paper, we develop a quantum theory of homogeneously curved tetrahedron geometry, by applying the combinatorial quantization to the phase space of tetrahedron shapes defined in arXiv:1506.03053. Our method is based on the relation between this phase space and the moduli space of SU(2) flat connections on a 4-punctured sphere. The quantization results in the physical Hilbert space as the solution of the quantum closure constraint, which quantizes the classical closure condition $M_4M_3M_2M_1=1$, $M_\nu\in$ SU(2), for the homogeneously curved tetrahedron. The quantum group Uq(su(2)) emerges as the gauge symmetry of a quantum tetrahedron. The physical Hilbert space of the quantum tetrahedron coincides with the Hilbert space of 4-valent intertwiners of Uq(su(2)). In addition, we define the area operators quantizing the face areas of the tetrahedron and compute the spectrum. The resulting spectrum is consistent with the usual Loop-Quantum-Gravity area spectrum in the large spin regime but differs for small spins. This work closely relates to 3+1 dimensional Loop Quantum Gravity in the presence of a cosmological constant and provides a justification for the emergence of the quantum group in the theory.
Emergent modified gravity is a canonical theory based on general covariance in which the spacetime is not fundamental, but rather an emergent object. This feature allows for modifications of the classical theory and can be used to model new effects, such as those suggested by quantum gravity. We discuss how matter fields can be coupled to emergent modified gravity, realize the coupling of the perfect fluid, identify the symmetries of the system, and explicitly obtain the Hamiltonian in spherical symmetry. We formulate the Oppenheimer-Snyder collapse model in canonical terms, permitting us to extend the model to emergent modified gravity and obtain an exact solution for dust collapsing from spatial infinity, including some effects suggested by quantum gravity. In this solution the collapsing dust forms a black hole, the star radius then reaches a minimum with vanishing velocity and finite positive acceleration, and the star proceeds to re-emerge, now behaving as a white hole. While the geometry on the minimum-radius surface is regular in vacuum, it is singular in the presence of dust. However, the fact that the geometry is emergent, and that the fundamental fields composing the phase space are regular, allows us to continue the canonical solution in a meaningful way, obtaining the global structure of the star interior. The star-interior solution is complemented by the vacuum solution describing the star-exterior region through a continuous junction at the star radius. This gluing process can be viewed as the imposition of boundary conditions, which is non-unique and does not follow from the equations of motion. This ambiguity gives rise to different possible physical outcomes of the collapse. We discuss two such phenomena: the formation of a wormhole and the transition from a black hole to a white hole.
Parameterized Kerr spacetimes allow us to test the nature of black holes in model-independent ways. Such spacetimes contain several arbitrary functions and, as a matter of practicality, one Taylor expands them about infinity and keeps only a finite number of orders in the expansion. In this paper, we focus on the parameterized spacetime preserving the Killing symmetries of a Kerr spacetime and show that an unphysical divergence may appear in the metric if such a truncation is performed in the series expansion. To remedy this, we redefine the arbitrary functions so that the divergence disappears, at least for several known black hole solutions that can be mapped to the parameterized Kerr spacetime. We propose two restricted classes of the refined parameterized Kerr spacetime that contain only one or two arbitrary functions and yet can reproduce exactly all the example black hole spacetimes considered in this paper. The Petrov type of the parameterized Kerr spacetime is I, while that of the restricted class with one arbitrary function remains D. We also compute the ringdown frequencies and the shapes of black hole shadows for the parameterized spacetime and show how they deviate from Kerr. The refined black hole metrics with Kerr symmetries presented here are practically more useful than those proposed in the previous literature.
We study the redistribution of fermionic steering and the relations among fermionic Bell nonlocality, steering, and entanglement in the background of the Garfinkle-Horowitz-Strominger dilaton black hole. We analyze the meaning of fermionic steering in terms of the Bell inequality in curved spacetime. We find that the fermionic steering, which was previously found to survive in the extreme dilaton black hole, cannot be considered to be nonlocal. We also find that the dilaton gravity can redistribute the fermionic steering, but cannot redistribute Bell nonlocality, which means that the physically inaccessible steering is also not nonlocal. Unlike the inaccessible entanglement, the inaccessible steering may increase non-monotonically with the dilaton. Furthermore, we obtain monogamy relations between the fermionic steering and entanglement in dilaton spacetime. In addition, we show the difference between fermionic and bosonic steering in curved spacetime.
The orbital eccentricity plays a crucial role in shaping the dynamics of binary black hole (BBH) mergers. Remarkably, our recent findings reveal a universal oscillation in essential dynamical quantities: the peak luminosity $L_{\text{peak}}$, mass $M_f$, spin $\alpha_f$, and recoil velocity $V_f$ of the final remnant black hole, as the initial eccentricity $e_0$ varies. In this letter, by leveraging RIT's extensive numerical-relativity simulations of nonspinning eccentric orbital BBH mergers, we not only confirm the universal oscillation in peak amplitudes (including harmonic modes), similar to the oscillations observed in $L_{\text{peak}}$, $M_f$, $\alpha_f$, and $V_f$, but also make the first discovery of a ubiquitous spiral-like internal fine structure that correlates $L_{\text{peak}}$, $M_f$, $\alpha_f$, $V_f$, and the peak amplitudes. This distinctive feature, which we term the "fingerprint" of eccentric orbital BBH mergers, carries important implications for unraveling the intricate dynamics and astrophysics associated with such mergers.
In this paper, we consider an analytical model of a dynamical photon sphere in charged Vaidya spacetime. Using the homothetic Killing vector, we transform the metric to conformally static coordinates, in which an extra conserved quantity exists along null trajectories; this allows us to reduce the geodesic equations of motion to first-order differential equations. We compare the radius of the photon sphere with that in uncharged Vaidya spacetime and find that charge always decreases the radius of the photon sphere.
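The extra conserved quantity invoked above is the standard one associated with a homothetic vector (a textbook fact, stated here for convenience): if $\nabla_{(a}\xi_{b)} = c\, g_{ab}$ and $k^a$ is the tangent to an affinely parametrized null geodesic, then
$$\frac{d}{d\lambda}\left(\xi_a k^a\right) = k^a k^b \nabla_{(a}\xi_{b)} = c\, k^a k_a = 0,$$
so $\xi_a k^a$ is conserved along null rays even though $\xi^a$ is not a genuine Killing vector.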
A universal framework for quantum-classical dynamics based on information-theoretic approaches is presented. Based on this, we analyze the interaction between quantum matter and a classical gravitational field. We point out that, under the assumption of conservation of momentum or energy, a classical gravitational field cannot change the momentum or energy of a quantum system, in conflict with existing observations (e.g., the free fall experiment), whereas a quantum gravitational field can do so. Our analysis exposes the fundamental relationship between conservation laws and the quantum properties of objects, offering new perspectives for the study of quantum gravity.
The electromagnetic radiation emitted by an accelerated charged particle can be described theoretically as the interaction of the charge with the so-called Fulling-Davies-Unruh thermal bath in the coordinate frame co-accelerating with the charge. We derive a similar theoretical description of the gravitational radiation associated with a point-like object as the interaction of the classical system with the Fulling-Davies-Unruh thermal bath in the co-accelerating coordinate frame.
We study the null asymptotic structure of Einstein-Maxwell theory in three-dimensional (3D) spacetimes. Although devoid of bulk gravitational degrees of freedom, the system admits a massless photon and can therefore accommodate electromagnetic radiation. We derive fall-off conditions for the Maxwell field that contain both Coulombic and radiative modes with non-vanishing news. The latter produces non-integrability and fluxes in the asymptotic surface charges, and gives rise to a non-trivial 3D Bondi mass loss formula. The resulting solution space is thus analogous to a dimensional reduction of 4D pure gravity, with the role of gravitational radiation played by its electromagnetic cousin. We use this simplified setup to investigate choices of charge brackets in detail, and compute in particular the recently introduced Koszul bracket. When the latter is applied to Wald-Zoupas charges, which are conserved in the absence of news, it leads to the field-dependent central extension found earlier in [arXiv:1503.00856]. We also consider (Anti-)de Sitter asymptotics to further exhibit the analogy between this model and 4D gravity with leaky boundary conditions.
Very recently, Harada proposed a gravitational theory which is of third order in the derivatives of the metric tensor, with the property that any solution of Einstein's field equations (EFEs), possibly with a cosmological constant, is necessarily a solution of the new theory. Remarkably, he showed that even in a matter-dominated universe with zero cosmological constant, there is a late-time transition from decelerating to accelerating expansion. Harada also derived an exact solution which is a generalisation of the Schwarzschild solution. However, this was not the most general static spherically symmetric vacuum solution of the theory, and the general solution was subsequently obtained by Barnes. Recently, Tarciso et al. have considered regular black holes in Harada's theory coupled to non-linear electrodynamics and scalar fields. In particular, they exhibit a four-parameter solution with a zero scalar field whose source is a Maxwell electromagnetic field. It is a straightforward generalisation of Harada's vacuum solution, analogous to the Reissner-Nordström generalisation of the Schwarzschild solution. However, this solution is not the most general static spherically symmetric solution of the Harada-Maxwell field equations (i.e.\ Harada gravitational fields with a Maxwell electromagnetic source). The most general such solution is obtained in this paper.
To the first post-Newtonian order, if two test particles revolve in opposite directions about a massive, spinning body along two circular, equatorial orbits with the same radius, they take different times to return to the reference direction relative to which their motion is measured: this is the so-called gravitomagnetic clock effect. The satellite moving in the same sense as the rotation of the primary is slower, experiencing a retardation with respect to the case where the latter does not spin, while the one circling in the opposite sense to the rotation of the source is faster, its orbital period being shorter than in the static case. The resulting time difference due to the stationary gravitomagnetic field of the central spinning body is proportional to the angular momentum per unit mass of the latter through a numerical factor which so far has been found to be $4\pi$. A numerical integration of the equations of motion of a fictitious test particle moving along a circular path in the equatorial plane of a hypothetical rotating object, including the gravitomagnetic acceleration to the first post-Newtonian order, shows that the gravitomagnetic corrections to the orbital periods are actually larger by a factor of $4$ in both the prograde and retrograde cases. Such an outcome, which makes the proportionality coefficient of the gravitomagnetic difference in the orbital periods of the two counter-revolving orbiters equal to $16\pi$, confirms an analytical calculation recently published in the literature by the present author.
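As a quick numerical illustration (ours, not from the paper) of the size of the effect: the classic $4\pi$ result corresponds to a period difference $\Delta T = T_+ - T_- = 4\pi J/(Mc^2)$, independent of orbital radius, while the paper argues for $16\pi J/(Mc^2)$. For Earth-like parameters:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2 (unused here, kept for context)
c = 2.998e8        # speed of light, m/s

# Earth-like central body (illustrative values)
M = 5.972e24       # mass, kg
J = 5.86e33        # spin angular momentum, kg m^2 / s

dT_classic = 4 * math.pi * J / (M * c**2)   # textbook 4*pi coefficient
dT_paper   = 4 * dT_classic                 # 16*pi coefficient argued in the paper

print(f"Delta T (4*pi)  = {dT_classic:.2e} s")   # ~1.4e-7 s
print(f"Delta T (16*pi) = {dT_paper:.2e} s")     # ~5.5e-7 s
```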
There are a number of theoretical predictions for astrophysical and cosmological objects which emit high-frequency ($10^6-10^9$~Hz) gravitational waves (GW) or contribute to the stochastic high-frequency GW background. Here we propose a new sensitive detector in this frequency band, based on existing cryogenic ultra-high quality factor quartz Bulk Acoustic Wave cavity technology coupled to near-quantum-limited SQUID amplifiers at $20$~mK. We show that spectral strain sensitivities reaching $10^{-22}$ per $\sqrt{\text{Hz}}$ per mode are possible, which in principle can cover the frequency range with multiple ($>100$) modes with quality factors varying between $10^6$ and $10^{10}$, allowing wide-bandwidth detection. Due to its compactness and well-established manufacturing process, the system is easily scalable into arrays and distributed networks that can also improve the overall sensitivity and enable coincidence analysis to guard against false detections.
The motion of a radiating point particle can be represented by a series of geodesics whose "constants" of motion evolve slowly with time. The evolution of these constants of motion can be determined directly from the self-force equations of motion. In the presence of spacetime symmetries, the situation simplifies: there exist not only constants of motion conjugate to these symmetries, but also conserved currents whose fluxes can be used to determine their evolution. Such a relationship between point-particle motion and fluxes of conserved currents is a flux-balance law. However, there exist constants of motion that are not related to spacetime symmetries, the most notable example of which is the Carter constant in the Kerr spacetime. In this paper, we first present a new approach to flux-balance laws for spacetime symmetries, using the techniques of symplectic currents and symmetry operators, which can also generate more general conserved currents. We then derive flux-balance laws for all constants of motion in the Kerr spacetime, using the fact that the background, geodesic motion is integrable. For simplicity, we restrict derivations in this paper to the scalar self-force problem. While generalizing the discussion in this paper to the gravitational case will be straightforward, there will be additional complications in turning these results into a practical flux-balance law in this case.
We introduce configuration space path integrals for quantum fields interacting with classical fields. We show that this can be done consistently by directly proving that the dynamics are completely positive, without resorting to master equation methods. These path integrals allow one to readily impose space-time symmetries, including Lorentz invariance or diffeomorphism invariance. They generalize and combine the Feynman-Vernon path integral of open quantum systems and the stochastic path integral of classical stochastic dynamics while respecting symmetry principles. We introduce a path integral formulation of general relativity where the space-time metric is treated classically, as well as a diffeomorphism invariant theory based on the trace of Einstein's equations. The result is a candidate for a fundamental theory that reconciles general relativity with quantum mechanics.
We use the Hamilton-Jacobi formalism to derive asymptotic solutions to the dynamical equations for inflationary T-models in a flat Friedmann-Lema\^itre-Robertson-Walker spacetime, both in the kinetic-dominance stage and in the slow-roll stage. With an appropriate Pad\'e summation, the expansions for the Hubble parameter in those two stages can be matched, which in turn determines the relation between the expansions for the number of e-folds and allows us to compute the total amount of inflation as a function of the initial data or, conversely, to select initial data that correspond to a fixed total amount of inflation. Using the slow-roll stage expansions, we also derive expressions for the corresponding spectral indexes $n_s$ accurate to order $1/N^2$, and $r$ accurate to order $1/N^3$ in the number of e-folds $N$ in the slow-roll approximation.
We seek the inverse formulas for the cosmological unifying relation between gluons and conformally coupled scalars. We demonstrate that the weight-shifting operators derived from the conformal symmetry at the dS late-time boundary can serve as the inverse operators for the 3-point cosmological correlators. However, in the case of the 4-point cosmological correlator, we observe that the inverse of the unifying relation cannot be constructed from the weight-shifting operators. Despite this failure, we are inspired to propose a "weight-shifting uplifting" method for the 4-point gluon correlator.
The power-law parametrization of the energy density spectrum of the gravitational wave (GW) background is a useful tool for studying its physics and origin. While scalar induced secondary gravitational waves (SIGWs) from some particular models fit the signal detected by the NANOGrav, Parkes Pulsar Timing Array, European Pulsar Timing Array, and Chinese Pulsar Timing Array collaborations better than GWs from supermassive black hole binaries (SMBHBs), we test the consistency of the data with the infrared part of SIGWs, which is somewhat model-independent. Through Bayesian analysis, we show that the infrared parts of SIGWs fit the data better than the GW background from SMBHBs. The results give tentative evidence for SIGWs.
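For context (our sketch, our notation): the power-law parametrization referred to above takes the form
$$\Omega_{\rm GW}(f) = A\left(\frac{f}{f_{\rm ref}}\right)^{\gamma},$$
and in the infrared, $f \ll f_{\rm peak}$, the SIGW spectrum is expected to approach a nearly universal slope $\gamma \simeq 3$, up to logarithmic corrections, largely independently of the source model; this is the model-insensitive feature such an analysis can exploit.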
Beginning with the Everett-DeWitt many-worlds interpretation of quantum mechanics, there have been a series of proposals for how the state vector of a quantum system might split at any instant into orthogonal branches, each of which exhibits approximately classical behavior. In an earlier version of the present work, we proposed a decomposition of a state vector into branches by finding the minimum of a measure of the mean squared quantum complexity of the branches in the branch decomposition. In the present article, we adapt the earlier version to quantum electrodynamics of electrons and protons on a lattice in Minkowski space. The earlier version, however, here is simplified by replacing a definition of complexity based on the physical vacuum with a definition based on the bare vacuum. As a consequence of this replacement, the physical vacuum itself is expected to branch, yielding branches with energy densities slightly larger than that of the unbranched vacuum but no observable particle content. If the vacuum energy renormalization constant is chosen as usual to give 0 energy density to the unbranched vacuum, vacuum branches will appear to have a combination of dark energy and dark matter densities. The hypothesis that vacuum branching is the origin of the observed dark energy and dark matter densities leads to an estimate of $O(10^{-18}\,\mathrm{m}^3)$ for the parameter $b$ which enters the complexity measure governing branch formation and sets the boundary between quantum and classical behavior.
We compare recent one-loop-level, scattering-amplitude-based computations of the classical part of the gravitational bremsstrahlung waveform to the frequency-domain version of the corresponding Multipolar-Post-Minkowskian waveform result. When referring the one-loop result to the classical averaged momenta $\bar p_a = \frac12 (p_a+p'_a)$, the two waveforms are found to agree at the Newtonian and first post-Newtonian levels, as well as at the first-and-a-half post-Newtonian level, i.e. for the leading-order quadrupolar tail. However, we find significant differences at the second-and-a-half post-Newtonian level, $O\left( \frac{G^2}{c^5} \right)$, i.e. when reaching: (i) the first post-Newtonian correction to the linear quadrupole tail; (ii) Newtonian-level linear tails of higher multipolarity (odd octupole and even hexadecapole); (iii) radiation-reaction effects on the worldlines; and (iv) various contributions of cubically nonlinear origin (notably linked to the quadrupole$\times$quadrupole$\times$quadrupole coupling in the wavezone). These differences are reflected at the sub-sub-sub-leading level in the soft expansion, $\sim \omega \ln \omega$, i.e. $O\left(\frac{1}{t^2} \right)$ in the time domain. Finally, we compute the first four terms of the low-frequency expansion of the Multipolar-Post-Minkowskian waveform and check that they agree with the corresponding existing classical soft graviton results.
Inspired by DGP gravity, this paper investigates higher-derivative gravity localized on a brane. Similar to the bulk case, we find that brane-localized higher-derivative gravity generally suffers from the ghost problem. Moreover, the spectrum includes complex-mass modes and is unstable for some parameter ranges. On the other hand, DGP gravity and brane-localized Gauss-Bonnet gravity are well-defined for suitable parameters. We also find novel algebraic identities of the mass spectrum, which reveal its global nature and can characterize phase transitions of the mass spectrum. Furthermore, we discuss various constraints on the parameters of brane-localized gravity in AdS/BCFT and wedge holography, respectively. These include the tachyon-free and ghost-free conditions of Kaluza-Klein and brane-bending modes, the positive definiteness of boundary central charges, and entanglement entropy. The tachyon-free and ghost-free conditions impose the strongest restrictions, requiring non-negative DGP gravity and Gauss-Bonnet gravity on the brane. The ghost-free condition rules out one class of brane-localized higher-derivative gravity. Thus, such higher-derivative gravity should be understood as a low-energy effective theory on the brane, valid below the ghost energy scale. Finally, we briefly discuss the applications of our results.
Using effective field theory methods, we derive the Carrollian analog of the geodesic action. We find that it contains both `electric' and `magnetic' contributions that are in general coupled to each other. The equations of motion descending from this action are the Carrollian pendant of geodesics, allowing surprisingly rich dynamics. As an example, we derive Carrollian geodesics on a Carroll-Schwarzschild background and discover an effective potential similar to the one appearing in geodesics on Schwarzschild backgrounds. However, the Newton term in the potential turns out to depend on the Carroll particle's energy. As a consequence, there is only one circular orbit localized at the Carroll extremal surface, and this orbit is unstable. For large impact parameters, the deflection angle is half the value of the general relativistic light-bending result. For impact parameters slightly bigger than the Schwarzschild radius, orbits wind around the Carroll extremal surface. For small impact parameters, geodesics get reflected by the Carroll black hole, which acts as a perfect mirror.
We explore the role of parametrizations of nonperturbative QCD functions in global analyses, with a specific application to extending a phenomenological analysis of the parton distribution functions (PDFs) of the charged pion realized in the xFitter fitting framework. The parametrization dependence of the PDFs in our pion fits substantially enlarges the uncertainties relative to the experimental sources estimated in previous analyses. We systematically explore the parametrization dependence by employing a novel technique that automates the generation of polynomial parametrizations for PDFs using B\'ezier curves. This technique is implemented in the C++ module Fant\^omas, which is included in the xFitter program. Our analysis reveals that the sea and gluon distributions in the pion are not well disentangled, even when considering measurements in leading-neutron deep inelastic scattering. For example, pion PDF solutions with a vanishing gluon and a large quark sea are still experimentally allowed, which elevates the importance of ongoing lattice and nonperturbative QCD calculations, together with the planned pion scattering experiments, for conclusive studies of pion structure.
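As a generic illustration of the idea (ours; the actual Fantômas parametrization differs in its details), a PDF can be parametrized by modulating a standard $x^{a}(1-x)^{b}$ carrier with a Bézier curve built from Bernstein polynomials:

```python
import numpy as np
from math import comb

def bezier(x, control_points):
    """Bezier curve of degree n = len(control_points)-1 on x in [0, 1],
    written in the Bernstein polynomial basis."""
    n = len(control_points) - 1
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(control_points))

def pdf_model(x, a, b, control_points):
    """Toy pion PDF shape: carrier x^a (1-x)^b times a Bezier modulation.

    Varying the control points changes the mid-x shape without spoiling
    the small-x and large-x endpoint behavior set by a and b.
    """
    return x**a * (1 - x)**b * bezier(x, control_points)

x = np.linspace(1e-3, 1.0, 200)
xfx = x * pdf_model(x, a=-0.5, b=2.0, control_points=[1.0, 1.2, 0.9, 1.1])
```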
In several models of physics beyond the Standard Model (BSM), discrete symmetries play an important role. For instance, in order to avoid flavor changing neutral currents (FCNC), a discrete $Z_2$ symmetry is imposed on Two-Higgs-Doublet Models (2HDM). This can lead to the formation of domain walls (DW) as the $Z_2$ symmetry gets spontaneously broken during electroweak symmetry breaking (EWSB) in the early universe. Due to this simultaneous spontaneous breaking of both the discrete symmetry and the electroweak symmetry, the vacuum manifold has the structure of two disconnected 3-spheres, and the resulting domain walls can exhibit several special properties not shared by standard domain walls. We focus on some of these properties, such as CP- and electric-charge-violating vacua localized inside the domain walls. The breaking of $U(1)_{em}$ inside the wall is an instance of the known "clash-of-symmetries" mechanism, meaning that the symmetry group inside the wall is smaller than the symmetry group far from the wall. We also discuss the scattering of top quarks off such domain walls and show, for example, that a top quark can be reflected or transmitted as a bottom quark.
We study spontaneous emission of a photon during the transitions between relativistic Landau states of an electron in a constant magnetic field that can reach the Schwinger value of $H_c = 4.4 \times 10^9$ T. In contrast to the conventional method in which detection of both the final electron and the photon is implied in a certain basis, here we derive the photon state as it evolves from the process itself. It is shown that the emitted photon state represents a twisted Bessel beam propagating along the field axis with a total angular momentum (TAM) projection onto this axis $\ell-\ell'$ where $\ell$ and $\ell'$ are the TAM of the initial electron and of the final one, respectively. Thus, the majority of the emitted photons turn out to be twisted with $\ell-\ell' \gtrsim 1$, even when the magnetic field reaches the critical value of $H\sim H_c$. The transitions without a change of the electron angular momentum, $\ell'=\ell$, are possible, yet much less probable. We also compare our findings with those for a spinless charged particle and demonstrate their good agreement for the transitions without change of the electron spin projection even in the critical fields, while the spin-flip transitions are generally suppressed. In addition, we argue that whereas the ambiguous choice of an electron spin operator affects the differential probability of emission, this problem can partially be circumvented for the photon evolved state because it is the electron TAM rather than the spin alone that defines the TAM of the emitted twisted photon.
Exploiting an evolution scheme for parton distribution functions (DFs) that is all-orders exact, contemporary lattice-QCD (lQCD) results for low-order Mellin moments of the pion valence quark DF are shown to be mutually consistent. The analysis introduces a means by which key odd moments can be obtained from the even moments in circumstances where only the latter are available. Combining these elements, one arrives at parameter-free lQCD-based predictions for the pointwise behaviour of pion valence, glue, and sea DFs, with sound uncertainty estimates. The behaviour of the pion DFs at large light-front momentum fraction, $x> 0.85$, is found to be consistent with QCD expectations and continuum analyses of pion structure functions, i.e., damping like $(1 -x)^{\beta_{\rm parton}}$, with $\beta_{\rm valence} \approx 2.4$, $\beta_{\rm glue} \approx 3.6$, $\beta_{\rm sea} \approx 4.6$. It may be possible to test these predictions using data from forthcoming experiments.
Two particular ratios related to mesons are proposed for the study of the conformal window in $SU(3)$ gauge theory with fundamental fermions. Lattice and other studies indicate that the lower end of the window, $N_f^*$, lies at around 7 - 13 flavors, a wide range without a clear consensus. Here we propose the decay constant to mass ratios of mesons, $f_{PS,V} / m_V$, as a proxy, since lattice studies have shown that below the conformal window they are largely $N_f$-independent, while at the upper end of the conformal window they vanish. The drop from a non-zero constant value to zero at $N_f = 16.5$ might be indicative of $N_f^*$. We compute $f_V / m_V$ to N$^3$LO and $f_{PS} / m_V$ to NNLO in (p)NRQCD. The results are unambiguously reliable just below $N_f = 16.5$, hence they are expanded \`a la Banks-Zaks in $\varepsilon = 16.5 - N_f$. The convergence properties of the series, and matching with the non-perturbative, infinite-volume, continuum- and chiral-extrapolated lattice results at $N_f = 10$, suggest that the perturbative results might be reliable down to $N_f = 12$. A sudden drop is observed at $N_f = 12$ and $N_f = 13$ in $f_V / m_V$ and $f_{PS} / m_V$, respectively.
We review the current status and future perspectives of the QCD-based method of light-cone sum rules. The two main versions of these sum rules, using light-meson and $B$-meson distribution amplitudes, are introduced, and the most important applications of the method are discussed. We also outline open problems and future prospects of this method.
We present a determination of the parton distribution functions (PDFs) of the proton from HERA data using a PDF parametrization inspired by a quantum statistical model of proton dynamics. This parametrization is characterised by a very small number of parameters, yet it leads to a reasonably good description of the data, comparable with other parametrizations on the market. It may thus provide an alternative to standard parametrizations, useful for studying parametrization bias and for possibly simplifying the fit procedure thanks to the small number of parameters. Interestingly, the model reproduces key physical features, such as a $\bar d$ distribution larger than $\bar u$, that HERA data alone are not able to constrain when using more flexible parametrizations. Moreover, polarized distributions are described in the model by the same parameters as the unpolarized ones, opening up the possibility of a simultaneous fit with the same number of parameters. The results of this study can motivate future work in this direction.
This study delves into the contribution of the strange quark within the proton, which influences several fundamental proton properties. By establishing a robust relationship between the proton's quantum anomalous energy and the sigma term, we extract the strangeness sigma term $\sigma_{sN} = 420.1 \pm 59.3$ MeV. Additionally, we present novel results for the proton trace anomalous energy, finding a value of $0.12\pm0.02$. Our analysis integrates the most recent data from the Hall C and GlueX collaborations, strongly supporting the existence of a non-zero strangeness component inside the proton with a statistical significance of $7.1\sigma$. Furthermore, we investigate the scheme independence of the extraction and examine the consistency of the sigma terms obtained from the datasets provided by the two collaborations. Our analysis reveals compatibility between the extraction results of the respective experiments.
The super-weak force is a minimal, anomaly-free U(1) extension of the standard model, designed to explain the origin of (i) neutrino masses and mixing matrix elements, (ii) dark matter, (iii) cosmic inflation, (iv) stabilization of the electroweak vacuum and (v) leptogenesis. In this talk we discuss the phenomenological status of the model and provide viable scenarios for the physics of the items in this list.
We study a one-loop induced neutrino mass model with an inert isospin triplet scalar field with $Y=0$ and heavier Majorana right-handed fermions. We present a numerical analysis of neutrino oscillations and lepton flavor violation, and demonstrate the allowed regions for normal and inverted hierarchies. We then discuss dark matter (DM) candidates satisfying the relic density, of which the model has two: fermionic DM (FDM) and bosonic DM (BDM). Finally, we classify the four cases NH+FDM, NH+BDM, IH+FDM, IH+BDM and search for allowed points in each case.
We compute the equation of state (EoS) of strange quark stars (SQSs) in the MIT bag model with a density-dependent bag pressure characterized by a Gaussian distribution function. The density dependence of the bag pressure is controlled by three key parameters, namely the asymptotic value $B_{as}$, $\Delta B (= B_0 - B_{as})$, and $\beta$. We explore various parameter combinations ($B_{as}$, $\Delta B$, $\beta$) that adhere to the Bodmer-Witten conjecture, a criterion for the stability of SQSs. Our primary aim is to analyze the effects of these parameter variations on the structural properties of SQSs. However, we find that none of the combinations can satisfy the NICER data for PSR J0030+0451 and the constraint on tidal deformability from GW170817, so this model cannot describe reasonable SQS configurations. We also extend our work to calculate the structural properties of hybrid stars (HSs). With the density-dependent bag model (DDBM), these astrophysical constraints are fulfilled by HS configurations within a very restricted range of the three parameters. The present work is the first to constrain the parameters of the DDBM for both SQSs and HSs using the recent astrophysical constraints on tidal deformability from GW170817 and on the mass-radius relationship from NICER data.
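A hedged sketch of the functional form implied by these parameters (conventions differ between density-dependent bag model papers; here $\rho$ is the baryon density and $\rho_0$ the saturation density):
$$B(\rho) = B_{as} + \Delta B\, \exp\left[-\beta\left(\frac{\rho}{\rho_0}\right)^{2}\right],$$
so that $B(0) = B_0 = B_{as} + \Delta B$ and $B \to B_{as}$ at asymptotically large densities, consistent with the roles of the three parameters named above.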
We propose a leptogenesis scenario where baryon asymmetry generation is assisted by the kinetic motion of the majoron, $J$, in the process of lepton-number violating inverse decays of a right-handed neutrino, $N$. We investigate two distinct scenarios depending on the source of the majoron kinetic motion: 1) the misalignment mechanism, and 2) the kinetic misalignment mechanism. The former case can naturally generate the observed baryon asymmetry for a majoron mass $m_J \gtrsim \,{\rm TeV}$ and a right-handed neutrino mass $M_{N} \gtrsim 10^{11}\,{\rm GeV}$. However, an additional decay channel of the majoron is required to avoid the overclosure problem of the majoron oscillation. The latter scenario works successfully for $m_J \lesssim 100\,{\rm keV}$ and $M_{N} \lesssim 10^9\,{\rm GeV}$, while $M_N$ can even lie far below the temperature of the electroweak phase transition as long as a sufficiently large kinetic misalignment is provided. We also find that a sub-$100\,{\rm keV}$ majoron is a viable candidate for dark matter.
In this paper, we estimate the number of event topologies that can be produced in $pp$ collisions at the Large Hadron Collider (LHC) without violating kinematic and other constraints. We use numeric calculations and combinatorics, guided by large-scale Monte Carlo simulations of Standard Model (SM) processes. We then set an upper limit on the probability that new physics may escape detection, assuming a model-agnostic approach. The calculated probability is surprisingly large, so the fact that the LHC has not yet found new physics is not entirely surprising. We argue that the most promising direction for maximising the chances of finding new physics is to use unsupervised machine learning for anomaly detection.
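As a toy illustration of the combinatorial counting involved (ours, not the paper's actual procedure), one can count final-state topologies as multisets of reconstructed object types up to a maximum multiplicity, which is a stars-and-bars problem:

```python
from math import comb

def count_topologies(n_object_types, max_objects):
    """Count distinct event topologies, treating a topology as an unordered
    multiset of reconstructed objects (e.g. e, mu, photon, jet, MET) with
    total multiplicity from 1 to max_objects.

    Stars and bars: the number of multisets of size k drawn from t types
    is C(k + t - 1, k).
    """
    t = n_object_types
    return sum(comb(k + t - 1, k) for k in range(1, max_objects + 1))

# Toy example: 6 object types, up to 10 objects per event
print(count_topologies(6, 10))   # 8007 possible topologies
```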
Building an organized Yukawa structure of quarks and leptons is essential for understanding the fermion mass hierarchy and flavor mixing in particle physics. Inspired by the similarity of the CKM and PMNS mixings, a common mass pattern for up-type and down-type quarks, charged leptons, and Dirac neutrinos is realized in terms of hierarchical masses. An organized structure of Yukawa couplings can be expressed on a Yukawa basis. The CKM and PMNS mixings have the same mathematical forms due to $SO(2)$ family symmetry. By fitting the CKM/PMNS mixing data, the flat pattern emerges as a successful flavor structure. The fundamental fermions can thus be explained by a new picture relating gauge structure and flavor structure.
We study the exclusive photoproduction of a $ \pi ^{0}\gamma $ pair with large invariant mass $ M_{\gamma \pi}^2 $, which is sensitive to the exchange of either two quarks or two gluons in the $ t $-channel. In this letter, we show that the process involving two-gluon exchanges does not factorize in the Bjorken limit at leading twist. This can be explicitly demonstrated by the fact that there exist diagrams, contributing at leading twist, for which Glauber gluons remain trapped, due to the pinching of the contour integrations of both the plus and minus components of the Glauber gluon momentum. For the same reason, $\pi^0$-nucleon scattering to two photons suffers from the same issue. On the other hand, we stress that there are no issues with respect to collinear factorization for the quark channels. By considering an analysis of all potential reduced diagrams of leading pinch surfaces, we argue that the quark channel is safe from Glauber pinches, and therefore a collinear factorization in that case follows through without any problems. This means that processes where gluon exchanges are forbidden, such as the exclusive photoproduction of $ \pi ^{\pm}\gamma $ and $ \rho^{0,\,\pm} \gamma $, are unaffected by the factorization breaking effects we point out in this letter.
We use an anti-de Sitter/Quantum Chromodynamics (AdS/QCD) based holographic light-front wavefunction (LFWF) for vector mesons, in conjunction with the dipole model, to investigate the cross-section data for diffractive and exclusive $J/\psi$ and $\psi(2S)$ production. We confront the experimental data using a new explicit form of the holographic LFWF, in which the longitudinal confinement dynamics of the light-front Schr\"odinger equation is captured by the (1+1)-dimensional 't Hooft equation in the large-$N_c$ approximation, in addition to the transverse confinement dynamics governed by the confining mass scale parameter $\kappa$ of the vector mesons. We obtain the LFWF parameters by fitting to the exclusive $J/\psi$ electroproduction data from electron-proton collisions at the HERA collider for $m_c = 1.27$ GeV. Our results suggest that the dipole model, together with holographic meson LFWFs with longitudinal confinement, gives a successful description of the differential scattering cross-section for exclusive $J/\psi$ electroproduction for the H1 and ZEUS data. We also predict the rapidity distributions of the differential scattering cross-section and the total photoproduction of $J/\psi$ and $\psi(2S)$ states in proton-proton ultra-peripheral collisions (UPCs) at center of mass energies $\sqrt{s} = 7, 13$ TeV. Using a minimal set of parameters, our predictions for the UPCs are in good agreement with the recent experimental observations by the ALICE and LHCb Collaborations.
There is a significant interest in testing quantum entanglement and Bell inequality violation in high-energy experiments. Since the analyses in high-energy experiments are performed with events statistically averaged over phase space, the states used to determine observables depend on the choice of coordinates through an event-dependent basis and are thus not genuine quantum states, but rather "fictitious states." We prove that if Bell inequality violation is observed with a fictitious state, then it implies the same for a quantum sub-state. We further show analytically that the basis which diagonalizes the spin-spin correlations is optimal for constructing fictitious states, and for maximizing the violation of Bell's inequality.
We study the dynamics of scalar fields with compact field spaces, or axions, in de Sitter space. We argue that the field space topology can qualitatively affect the physics of these fields beyond just which terms are allowed in their actions. We argue that the sharpest difference is for massless fields -- the free massless noncompact scalar field does not admit a two-point function that is both de Sitter-invariant and well-behaved at long distances, while the massless compact scalar does. As proof that this difference can be observable, we show that the long-distance behavior of a heavy scalar field, and thus its cosmological collider signal, can qualitatively change depending on whether it interacts with a light compact or noncompact scalar field. We find an interesting interplay between the circumference of the field space and the Hubble scale. When the field space is much larger than Hubble, the compact field behaves similarly to a light noncompact field and forces the heavy field to dilute much faster than any free field can. However, depending on how much smaller the field space is compared to Hubble, the compact field can cause the heavy scalar to decay either faster or slower than any free field and so we conclude that there can be qualitative and observable consequences of the field space's topology in inflationary correlation functions.
Studies of $J/\psi$ $v_2$ at RHIC and LHC energies have provided important elements towards understanding the production mechanisms and the thermalization of charm quarks. Bottomonia have an advantage since they are cleaner probes. A brief discussion is provided of $\Upsilon(1S)$ $v_2$, which can become a new probe of the QGP, including the need for studies in small systems.
The analytic continuations (ACs) of the double variable Horn $H_1$ and $H_5$ functions have been derived for the first time using the automated symbolic $\textit{Mathematica}$ package $\texttt{Olsson.wl}$. The use of Pfaff-Euler transformations has been emphasised to derive ACs covering regions that are otherwise not accessible. The corresponding regions of convergence (ROCs) are obtained using its companion package $\texttt{ROC2.wl}$. A $\textit{Mathematica}$ package $\texttt{HornH1H5.wl}$, containing all the derived ACs and the associated ROCs, along with a demonstration file, is made publicly available at https://github.com/souvik5151/Horn_H1_H5 .
The investigation of the two-particle source function in lead-lead collisions simulated with the EPOS model at a center of mass energy per nucleon pair of $\sqrt{s_{_{\text{NN}}}}=2.76$ TeV is presented. The two-particle source functions are reconstructed directly on an event-by-event basis for pions, kaons and protons separately, using the final stage of EPOS. A L\'evy source shape is observed for all three particle species in the individual events, deviating significantly from a Gaussian shape. The source parameters are extracted as functions of collision centrality and pair average transverse mass ($m_{\text{T}}$). The L\'evy exponent is found to be ordered according to particle mass. The L\'evy scale parameter is found to scale with $m_{\text{T}}$ for all particle species according to Gaussian hydrodynamical predictions; however, there is no $m_{\text{T}}$-scaling found across the species. In the case of pions, the effects of decay products and hadronic rescattering are also investigated. The L\'evy exponent decreases when decay products are included in the analysis. Without hadronic rescattering and decay products, the source shape is close to a Gaussian.
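For reference, the L\'evy source shape referred to above is conventionally defined in femtoscopy (our sketch of the standard convention) as
$$\mathcal{L}(\alpha, R; \mathbf{r}) = \frac{1}{(2\pi)^3}\int d^3q\; e^{i\mathbf{q}\cdot\mathbf{r}}\, e^{-\frac{1}{2}|\mathbf{q}R|^{\alpha}},$$
where $R$ is the L\'evy scale parameter and $\alpha$ the L\'evy exponent; $\alpha = 2$ recovers a Gaussian and $\alpha = 1$ a Cauchy shape, so values $\alpha < 2$ quantify the deviation from a Gaussian source.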
If dark matter is light, this may be due to a seesaw mechanism, just as it is for neutrinos. It is postulated that both originate from the same type of heavy fermion anchors, either singlets or triplets. In the latter case, a shift of the $W$ mass is predicted, as suggested by the CDF precision measurement. A spontaneously broken dark $U(1)$ gauge symmetry is assumed, resulting in freeze-in long-lived light dark matter.
Generation of simulated detector response to collision products is crucial to data analysis in particle physics, but computationally very expensive. One subdetector, the calorimeter, dominates the computational time due to the high granularity of its cells and complexity of the interactions. Generative models can provide more rapid sample production, but currently require significant effort to optimize performance for specific detector geometries, often requiring many models to describe the varying cell sizes and arrangements, without the ability to generalize to other geometries. We develop a $\textit{geometry-aware}$ autoregressive model, which learns how the calorimeter response varies with geometry, and is capable of generating simulated responses to unseen geometries without additional training. The geometry-aware model outperforms a baseline unaware model by over $50\%$ in several metrics such as the Wasserstein distance between the generated and the true distributions of key quantities which summarize the simulated response. A single geometry-aware model could replace the hundreds of generative models currently designed for calorimeter simulation by physicists analyzing data collected at the Large Hadron Collider. This proof-of-concept study motivates the design of a foundational model that will be a crucial tool for the study of future detectors, dramatically reducing the large upfront investment usually needed to develop generative calorimeter models.
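The headline metric above is straightforward to compute for any summary observable; as a generic sketch (ours, with toy stand-in samples rather than actual shower data):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Stand-ins for a summary observable (e.g. total deposited energy per shower)
true_samples = rng.gamma(shape=5.0, scale=2.0, size=10_000)   # full-simulation reference
gen_samples  = rng.gamma(shape=5.2, scale=1.9, size=10_000)   # generative model output

# 1D earth mover's distance between the two empirical distributions
d = wasserstein_distance(true_samples, gen_samples)
print(f"Wasserstein distance: {d:.4f}")
```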
Electroweak dipole operators in the Standard Model Effective Field Theory (SMEFT) are important indirect probes of quantum effects of new physics beyond the Standard Model (SM), yet they remain poorly constrained by current experimental analyses because they do not interfere with the SM amplitudes in standard cross-section observables. In this Letter, we point out that dipole operators flip fermion helicities and are therefore ideally studied through single transverse spin asymmetries. We illustrate this at a future electron-positron collider with transversely polarized beams, where the effect appears as azimuthal $\cos\phi$ and $\sin\phi$ distributions that originate from the interference of the electron dipole operators with the SM and are linearly dependent on their Wilson coefficients. This new method can improve the current constraints on the electron dipole couplings by one to two orders of magnitude, without depending on other new physics operators, and can also simultaneously constrain both their real and imaginary parts, offering a new opportunity for probing potential $CP$-violating effects.
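Schematically (the parametrization below is ours, not the Letter's), the dipole-SM interference with transverse beam polarization $b_T$ enters the azimuthal distribution as
$$ \frac{d\sigma}{d\phi} \;\propto\; 1 + b_T\left[\,a_1\,\mathrm{Re}\,C\,\cos\phi + a_2\,\mathrm{Im}\,C\,\sin\phi\,\right], $$
so fitting the two modulations determines the real and imaginary parts of the Wilson coefficient $C$ separately, each entering linearly rather than quadratically.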
Q-balls are bound-state configurations of complex scalars stabilized by a conserved Noether charge Q. They are solutions to a second-order differential equation that is structurally identical to the one obeyed by Euclidean vacuum-decay bounces in three dimensions. This enables us to translate the recent tunneling potential approach to Q-balls, which amounts to a reformulation of the problem that can simplify the task of finding approximate and even exact Q-ball solutions.
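Concretely, the radial profile of a Q-ball with internal frequency $\omega$ obeys
$$ \frac{d^2\phi}{dr^2} + \frac{2}{r}\frac{d\phi}{dr} \;=\; \frac{\partial}{\partial\phi}\left[V(\phi) - \tfrac{1}{2}\omega^2\phi^2\right], \qquad \phi'(0)=0, \quad \phi(r\to\infty)=0, $$
which is precisely the equation of an O(3)-symmetric Euclidean bounce in the shifted potential $V_\omega = V - \tfrac{1}{2}\omega^2\phi^2$; this identification is what allows tunneling-potential techniques to be carried over.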
We analyze the UV breakdown of sub-GeV dark matter models that live in a new, dark U(1) sector. Many of these models include a scalar field, which is either the dark matter itself or a dark Higgs field that generates mass terms for the dark matter particle via spontaneous symmetry breaking. A quartic self-coupling of this scalar field is generically allowed, and we show that its running is largely governed by the strength of the U(1) gauge coupling, $\alpha_D$. Furthermore, its Landau pole consistently lies below that of the gauge coupling. Link fields, which couple to both the dark sector and the Standard Model (SM), connect these Landau poles to constraints on SM charged particles. Current LHC constraints on link fields are compatible with $\alpha_D \lesssim 0.5 - 1$ for most of the mass range in most models, while smaller values, $\alpha_D \lesssim 0.15$, are favored for Majorana DM.
Viscous hydrodynamics serves as a successful mesoscopic description of the Quark-Gluon Plasma produced in relativistic heavy-ion collisions. In order to investigate how such an effective description emerges from the underlying microscopic dynamics, we calculate the hydrodynamic and non-hydrodynamic modes of linear response in the sound channel from a first-principles calculation in kinetic theory. We do this with a new approach wherein we discretize the collision kernel to directly calculate eigenvalues and eigenmodes of the evolution operator. This allows us to study the Green's functions at any point in the complex frequency plane. Our study focuses on scalar theory with quartic interaction, and we find that the analytic structure of the Green's functions in the complex plane is far more complicated than simple poles or cuts. This is a first step towards an equivalent study in QCD kinetic theory.
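A minimal numerical illustration of the strategy (a toy relaxation-type kernel with a single conserved density, not the paper's scalar-theory collision operator): discretize the linearized evolution operator at fixed wavenumber $k$ and identify hydrodynamic modes as the eigenvalues approaching zero as $k\to 0$.

    import numpy as np

    def toy_evolution_operator(k, n=64, tau=1.0):
        """Toy linearized kinetic operator -i k v - C on a velocity grid in (-1, 1),
        with a relaxation-time collision matrix C conserving only the density moment."""
        v, w = np.polynomial.legendre.leggauss(n)  # velocity nodes and quadrature weights
        P = np.outer(np.ones(n), w) / 2.0          # projector onto the conserved moment
        C = (np.eye(n) - P) / tau
        return -1j * k * np.diag(v) - C

    for k in (0.5, 0.1, 0.02):
        eig = np.linalg.eigvals(toy_evolution_operator(k))
        slow = eig[np.argsort(np.abs(eig))][:3]    # modes closest to the origin
        print(k, np.sort_complex(slow))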
Inspired by the recent observation of $e^+e^-\to \omega X(3872)$ by the BESIII Collaboration, in this work we study the production of the charmonia $\chi_{cJ}(2P)$ in $e^+e^-$ annihilation. We find that $e^+e^-\to\omega\chi_{c0}(2P)$ and $e^+e^-\to \omega\chi_{c2}(2P)$ have sizable production rates when taking the cross section data from $e^+e^-\to \omega X(3872)$ as the scaling point and treating the $X(3872)$ as the charmonium $\chi_{c1}(2P)$. Considering that the dominant decay modes of the $\chi_{c0}(2P)$ and $\chi_{c2}(2P)$ involve $D\bar{D}$ final states, we propose that $e^+e^-\to \omega D\bar{D}$ is an ideal process for identifying the $\chi_{c0}(2P)$ and $\chi_{c2}(2P)$, similar to the situation in the $D\bar{D}$ invariant mass spectra of the $\gamma\gamma\to D\bar{D}$ and $B^+\to D^+{D}^- K^+$ processes. With the continuous accumulation of experimental data, these production processes offer a promising avenue for exploration by the BESIII and Belle II collaborations.
The initiation of a novel neutrino physics program at the Large Hadron Collider (LHC) and the purpose-built Forward Physics Facility (FPF) proposal have motivated studies exploring the discovery potential of these searches. This requires resolving degeneracies between new physics predictions and uncertainties in modeling neutrino production in the forward kinematic region. The present work investigates a broad selection of existing predictions for the parent hadron spectra at FASER$\nu$ and the FPF to parameterize expected correlations in the neutrino spectra produced in their decays and to determine, based on Fisher information, the highest achievable precision for their observation. This allows for setting constraints on various physics processes within and beyond the Standard Model, including neutrino non-standard interactions. We also illustrate how combining multiple neutrino observables could confirm, already during the ongoing LHC Run 3, the enhanced-strangeness scenario proposed to resolve the cosmic-ray muon puzzle.
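The Fisher-information step can be sketched in a few lines: for Poisson-distributed counts $\mu_b(\theta)$ in energy bins, the Fisher matrix and the resulting Cram\'er-Rao bound follow directly (the two-parameter power-law flux below is purely illustrative).

    import numpy as np

    def fisher_matrix(mu, dmu):
        """Poisson Fisher matrix F_ij = sum_b dmu_b/dtheta_i * dmu_b/dtheta_j / mu_b."""
        return np.einsum('ib,jb->ij', dmu, dmu / mu)

    E = np.linspace(100.0, 1000.0, 10)         # illustrative energy bin centers
    N, gamma = 50.0, 2.0                       # normalization and spectral index
    mu = N * (E / 100.0) ** (-gamma)           # expected neutrino counts per bin
    dmu = np.stack([mu / N,                    # derivative w.r.t. N
                    -mu * np.log(E / 100.0)])  # derivative w.r.t. gamma

    F = fisher_matrix(mu, dmu)
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # Cramer-Rao bound on (N, gamma)
    print(sigma)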
The partial decay widths and production mechanism of the three pentaquark states, $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$, discovered by the LHCb Collaboration in 2019, are still under debate. In this work, we employ the contact-range effective field theory approach to construct the $\bar{D}^{(*)}\Sigma_{c}^{(*)}$, $\bar{D}^{*}\Lambda_c$, $\bar{D}\Lambda_c$, $J/\psi p$, and $\eta_c p$ coupled-channel interactions to dynamically generate the multiplet of hidden-charm pentaquark molecules by reproducing the masses and widths of the $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$. Assuming that the pentaquark molecules are produced in the $\Lambda_b$ decay via triangle diagrams, where the $\Lambda_{b}$ first decays into $D_{s}^{(\ast)}\Lambda_{c}$, the $D_{s}^{(\ast)}$ then scatters into $\bar{D}^{(\ast)}K$, and finally the molecules are dynamically generated by the $\bar{D}^{(\ast)}\Lambda_{c}$ interactions, we calculate the branching fractions of the decays $\Lambda_b \to {P_{\psi}^{N}}K$ using the effective Lagrangian approach. With the partial decay widths of these pentaquark molecules, we further estimate the branching fractions of the decays $ \Lambda_b \to ( P_{\psi}^{N} \to J/\psi p )K $ and $ \Lambda_b \to ( P_{\psi}^{N}\to \bar{D}^* \Lambda_c )K $. Our results show that the pentaquark states $P_{\psi}^{N}(4312)$, $P_{\psi}^{N}(4440)$, and $P_{\psi}^{N}(4457)$, as hadronic molecules, can be produced in the $\Lambda_b$ decay, while their heavy quark spin symmetry partners are invisible in the $J/\psi p$ invariant mass distribution because of their small production rates. Our studies show that it is possible to observe some of the pentaquark states in the $\Lambda_b\to \bar{D}^*\Lambda_c K$ decays.
Experiments at particle colliders are the primary source of insight into physics at microscopic scales. Searches at these facilities often rely on optimization of analyses targeting specific models of new physics. Increasingly, however, data-driven model-agnostic approaches based on machine learning are also being explored. A major challenge is that such methods can be highly sensitive to the presence of many irrelevant features in the data. This paper presents Boosted Decision Tree (BDT)-based techniques to improve anomaly detection in the presence of many irrelevant features. First, a BDT classifier is shown to be more robust than neural networks for the Classification Without Labels approach to finding resonant excesses assuming independence of resonant and non-resonant observables. Next, a tree-based probability density estimator using copula transformations demonstrates significant stability and improved performance over normalizing flows as irrelevant features are added. The results make a compelling case for further development of tree-based algorithms for more robust resonant anomaly detection in high energy physics.
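A minimal sketch of the BDT-based Classification Without Labels step with scikit-learn (toy features, not the paper's datasets or hyperparameters): train a classifier to separate a signal-region sample from a sideband sample and use its output as the anomaly score.

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 20_000
    sideband = rng.normal(0.0, 1.0, size=(n, 10))        # mixed sample 1: background only
    signal_region = np.vstack([
        rng.normal(0.0, 1.0, size=(n - 500, 10)),        # mostly background ...
        rng.normal(1.5, 0.5, size=(500, 10)),            # ... plus a small injected signal
    ])

    X = np.vstack([sideband, signal_region])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    clf = HistGradientBoostingClassifier(max_iter=200).fit(X, y)
    scores = clf.predict_proba(signal_region)[:, 1]      # anomaly score per event
    print(np.sort(scores)[-5:])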
The problem of normalisation of the modular forms in modular invariant lepton and quark flavour models is discussed. Modular invariant normalisations of the modular forms are proposed.
We study Berry connections for supersymmetric ground states of 2d $\mathcal{N}=(2,2)$ GLSMs quantised on a circle, which are generalised periodic monopoles, with the aim of providing a fruitful physical arena for mathematical constructions related to the latter. These are difference modules encoding monopole solutions due to Mochizuki, as well as an alternative algebraic description of solutions in terms of vector bundles endowed with filtrations. The simultaneous existence of these descriptions is an example of a Riemann-Hilbert correspondence. We demonstrate how these constructions arise naturally by studying the ground states as the cohomology of a one-parameter family of supercharges. Through this, we show that the two sides of this correspondence are related to two types of monopole spectral data that have a direct interpretation in terms of the physics of the GLSM: the Cherkis-Kapustin spectral variety (difference modules) as well as twistorial spectral data (vector bundles with filtrations). By considering states generated by D-branes and leveraging the difference modules, we derive novel difference equations for brane amplitudes. We then show that in the conformal limit, these degenerate into novel difference equations for hemisphere or vortex partition functions, which are exactly calculable. Beautifully, when the GLSM flows to a nonlinear sigma model with K\"ahler target $X$, we show that the difference modules are related to deformations of the equivariant quantum cohomology of $X$, whereas the vector bundles with filtrations are related to the equivariant K-theory.
The ALE partition functions of a 6d (1,0) SCFT are interesting observables which are able to detect the global structure of the SCFT. They are defined to be the equivariant partition functions of the SCFT on a background with the topology of a two-dimensional torus times an ALE singularity. In this work, we compute the ALE partition functions of M-string orbifold SCFTs, extending our previous results for the M-string SCFTs. Via geometric engineering, our results about ALE partition functions are connected to the theory of higher-rank Donaldson-Thomas invariants for resolutions of elliptic Calabi-Yau threefold singularities. We predict that their generating functions satisfy interesting modular properties. The partition functions receive contributions from BPS strings probing the ALE singularity, whose worldsheet theories we determine via a chain of string dualities. For this class of backgrounds the BPS strings' worldsheet theories become relative field theories that are sensitive to discrete data generalizing to 6d the familiar choices of flat connections at infinity for instantons on ALE spaces. A novel feature we observe in the case of M-string orbifold SCFTs, which does not arise for the M-string SCFT, is the existence of frozen BPS strings which are pinned at the orbifold singularity and carry fractional instanton charge with respect to the 6d gauge fields.
Heterotic toroidal $Z_2\times Z_2$ orbifolds may possess discrete torsion between the two defining orbifold twists, in the form of additional cocycle factors in their one-loop partition functions. Using Gauged Linear Sigma Models (GLSMs), the consequences of discrete torsion can be uncovered when the orbifold is smoothed out by switching on appropriate blowup modes. Here, blowup modes with twisted oscillator excitations are chosen to reproduce bundles that are close to the standard embedding without torsion. The orbifold resolutions with discrete torsion are distinguished from resolutions without torsion, since they require NS5-branes at their exceptional cycles.
We study examples of fourth-order Picard-Fuchs operators that are Hadamard products of two second order Picard-Fuchs operators. Each second order Picard-Fuchs operator is associated with a family of elliptic curves, and the Hadamard product computes period integrals on the fibred product of the two elliptic surfaces. We construct 3-cycles on this geometry as the union of 2-cycles in the fibre over contours on the base. We then use the special Lagrangian condition to constrain the contours on the base. This leads to a construction reminiscent of spectral networks and exponential networks that have previously appeared in string theory literature.
We study the construction of manifestly covariant worldline actions from coadjoint orbits. A coadjoint orbit is a submanifold of the dual vector space of the Lie algebra generated by the coadjoint action. Since a coadjoint orbit is a symplectic space, we derive the worldline particle action from its symplectic two-form. One subtlety in formulating worldline particle actions from coadjoint orbits is choosing a coordinate system that adequately exhibits the physical properties of the particles. We introduce Hamiltonian constraints through the defining conditions of the isometry, which allows us to write a manifestly covariant worldline action. We demonstrate our method for both massive and massless particles in Minkowski and AdS spacetimes.
We study the unflavored Schur indices in the $\mathcal{N}=4$ super-Yang-Mills theory for the $B_n,C_n,D_n, G_2$ gauge groups. We explore two methods, namely the character expansion method and the Fermi gas method, to efficiently compute the $q$-series expansion of the Schur indices to some high orders. Using the available data and the modular properties, we are able to fix the exact formulas for the general gauge groups up to some high ranks and discover some interesting new features. We also identify some empirical modular anomaly equations, but unlike the case of $A_n$ groups, they are quite complicated and not sufficiently useful to fix exact formulas for gauge groups of arbitrary rank.
We develop a split representation for celestial amplitudes in celestial holography, by cutting internal lines of Feynman diagrams in Minkowski space. More explicitly, the bulk-to-bulk propagators associated with the internal lines are expressed as a product of two boundary-to-bulk propagators with a coinciding boundary point integrated over the celestial sphere. Applying this split representation, we compute the conformal partial wave and conformal block expansions of celestial four-point functions of massless scalars and photons on the Euclidean celestial sphere. In the $t$-channel massless scalar amplitude, we observe novel intermediate exchanges of staggered modules in the conformal block expansion.
We study supersymmetric AdS$_3$ flux vacua of massive type-IIA supergravity on anisotropic G2 orientifolds. Depending on the value of the $F_4$ flux, the seven-dimensional compact space can either have six small and one large dimension, such that the external space is scale-separated and effectively four-dimensional, or have all seven compact dimensions small and parametrically scale-separated from the three external ones. Within this setup we also discuss the Distance Conjecture (including appropriate D4-branes) and highlight that such vacua provide a non-trivial example of the so-called Strong Spin-2 Conjecture.
By means of $\epsilon$ and large $N$ expansions, we study generalizations of the $O(N)$ model where the fundamental fields are tensors of rank $r$ rather than vectors, and where the global symmetry (up to additional discrete symmetries and quotients) is $O(N)^r$, focusing on the cases $r\leq 5$. Owing to the distinct ways of performing index contractions, these theories contain multiple quartic operators, which mix under the RG flow. At all large $N$ fixed points, melonic operators are absent and the leading Feynman diagrams are bubble diagrams, so that all perturbative fixed points can be readily matched to full large $N$ solutions obtained from Hubbard-Stratonovich transformations. The family of fixed points we uncover extends to arbitrarily high values of $r$, and as their number grows superexponentially with $r$, these theories offer a vast generalization of the critical $O(N)$ model. We also study sextic $O(N)^r$ theories, whose large $N$ limits are obscured by the fact that the dominant Feynman diagrams are not restricted to melonic or bubble diagrams. For these theories the large $N$ dynamics differ qualitatively across different values of $r$, and we demonstrate that the RG flows possess a numerous and diverse set of perturbative fixed points beginning at rank four.
In this study, we demonstrate that an inviscid fluid in a near-equilibrium state, when viewed in the Lagrangian picture in $d+1$ spacetime dimensions, can be reformulated as a $(d-1)$-form gauge theory. We construct a fluid/p-form dictionary and show that volume-preserving diffeomorphisms on the fluid side manifest as a U(1) gauge symmetry on the $(p+1)$-form gauge theory side. Intriguingly, Kelvin's circulation theorem and the mass continuity equation appear respectively as the Gauss law and the Bianchi identity on the gauge theory side. Furthermore, we show that, at the level of the sources, the vortices on the fluid side correspond to the p-branes on the gauge theory side. We also consider fluid mechanics in the presence of boundaries and examine the boundary symmetries and corresponding charges from both the fluid and gauge theory perspectives.
We study avatars of T-duality within Sen's formalism for self-dual field strengths in various dimensions. This formalism is shown to naturally accommodate the T-duality relation between Type IIA/IIB theories when compactified on a circle without the need for imposing the self-duality constraint by hand, as is usually done. We also continue our study of this formalism on two-dimensional target spacetimes and initiate its study as a worldsheet theory. In particular, we show that Sen's action provides a natural worldsheet-based understanding of twisted and asymmetrically twisted strings. Finally, we show that the $\mathrm{T}\bar{\mathrm{T}}$-deformed theory of left- and right-chiral bosons described in Sen's formalism possesses a scaling limit that is related via field-theoretic T-duality to a recently studied integrable deformation of quantum mechanics.
We study ensembles of 1/2-BPS bound states of fundamental strings and NS-fivebranes (NS5-F1 states) in the AdS decoupling limit. We revisit a solution corresponding to an ensemble average of these bound states, and find that the appropriate duality frame for describing the near-source structure is the T-dual NS5-P frame, where the bound state is a collection of momentum waves on the fivebranes. We find that the fivebranes are generically well-separated; this property results in the applicability of perturbative string theory. The geometry sourced by the typical microstate is not close to that of an extremal non-rotating black hole; instead the fivebranes occupy a ball whose radius is parametrically much larger than the "stretched horizon" scale of the corresponding black hole. These microstates are thus better characterized as BPS fivebrane stars than as small black holes. When members of the ensemble spin with two fixed angular potentials about two orthogonal planes, we find that the spherical ball of the non-rotating ensemble average geometry deforms into an ellipsoid. This contrasts with ring structures obtained when fixing the angular momenta instead of the angular potentials; we trace this difference of ensembles to large fluctuations of the angular momentum in the ensemble of fixed angular potential.
We investigate quantum fluctuations of metric components in coherent 1/2-BPS bound states of $n_1$ fundamental strings and $n_5$ NS5-branes. The leading order contribution in an expansion in $1/(n_1n_5)$ is calculated via a combination of analytical and numerical methods. We find that the fluctuations are small away from a tiny distance from the source, comparable to the 6d Planck scale. Comparing this result with an analysis in the literature of fluctuations in the maximally mixed state, we conclude that the large fluctuations previously found for the latter are statistical rather than quantum in nature, and that perturbative string theory provides an accurate description of these backgrounds.
We extend the left-handed string formalism to one-loop level, focusing only on the infrared limit, where the Green's function for the left-handed string is expanded around the cusp of the modular parameter. This expansion leads to the separating degeneration limit of a Riemann surface, corresponding to a sphere and a torus connected by a long tube. The well-behaved short-distance behavior of the Green's function requires all marked points to be inserted on the sphere. Analogously to the tree-level calculations, we obtain Dirac $\delta$-functions by integrating out the anti-holomorphic variables. The constraints embedded in these $\delta$-functions, associated with the marked points on the sphere part of the Riemann surface, are the same Scattering Equations as at tree level. After the integration over the modular parameter, we observe the expected pattern of the infrared divergence, consistent with the one-loop results from box diagram calculations in field theory.
We study the Drinfeld double of the (equivariant spherical) Cohomological Hall algebra (COHA) in the sense of Kontsevich and Soibelman, associated to a smooth toric Calabi-Yau 3-fold $X$. For general reasons, the COHA acts on the cohomology of the moduli spaces of certain perverse coherent systems on $X$ via "raising operators". Conjecturally, the COHA action extends to an action of the Drinfeld double obtained by adding the "lowering operators". In this paper, we show that the Drinfeld double is a generalization of the notion of the Cartan doubled Yangian defined earlier by Finkelberg and others. We further extend this "$3d$ Calabi-Yau perspective" on Lie theory by associating a root system to certain families of $X$. We formulate a conjecture that the above-mentioned action of the Drinfeld double factors through a shifted Yangian of the root system. The shift is explicitly determined by the moduli problem and the choice of stability conditions, and is expressed in terms of an intersection number in $X$. We check the conjectures in several examples, including a special case of an earlier conjecture of Costello.
We provide a prescription for computing two-point tree amplitudes in the pure spinor formalism that are finite and agree with the corresponding field theory expressions. In [arXiv:1906.06051v1-arXiv:1909.03672v3], the same results were presented for bosonic strings, and it was mentioned that they can be generalized to superstrings. The pure spinor formalism is a successful super-Poincar\'e covariant approach to the quantization of superstrings [arXiv:hep-th/0001035v2]. Since the pure spinor formalism is equivalent to other superstring formalisms, we explicitly verify the above claim. We introduce a mostly BRST-exact operator in order to achieve this.
The entanglement entropy of a free scalar field in its ground state is dominated by an area law term. It is noteworthy, however, that the study of entanglement in scalar field theory has not advanced far beyond the ground state. In this paper, we extend the study of entanglement of harmonic systems, which include free scalar field theory as a continuum limit, to the case of the most general Gaussian states, namely the squeezed states. We find the eigenstates and the spectrum of the reduced density matrix and we calculate the entanglement entropy. Finally, we apply our method to free scalar field theory in 1+1 dimensions and show that, for very squeezed states, the entanglement entropy is dominated by a volume term, unlike the ground-state case. Even though the state of the system is time-dependent in a non-trivial manner, this volume term is time-independent. We expect this behaviour to hold in higher dimensions as well, as it emerges in a large-squeezing expansion of the entanglement entropy for a general harmonic system.
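For context, the standard route for any Gaussian state proceeds via the symplectic eigenvalues $\nu_k$ of the reduced covariance matrix (in the convention where the vacuum has $\nu_k = 1/2$):
$$ S_A \;=\; \sum_k \left[\left(\nu_k+\tfrac{1}{2}\right)\ln\left(\nu_k+\tfrac{1}{2}\right) - \left(\nu_k-\tfrac{1}{2}\right)\ln\left(\nu_k-\tfrac{1}{2}\right)\right], $$
so the volume-law behaviour found for strongly squeezed states corresponds to extensively many $\nu_k$ lying far above the vacuum value.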
Recent years have seen a rapid development of applications of quantum computation to quantum field theory. The first algorithms for quantum simulation of scattering have been proposed in the context of scalar and fermionic theories, requiring thousands of logical qubits. These algorithms are not suitable for simulating scattering of incoming bound states, as the initial state preparation typically relies on adiabatically transforming wavepackets of the free theory into wavepackets of the interacting theory. In this paper we present a strategy to excite wavepackets of the interacting theory directly from the vacuum of the interacting theory, allowing for the preparation of states of composite particles. This is the first step towards digital quantum simulation of scattering of bound states. The approach is based on the Haag-Ruelle scattering theory, which provides a way to construct creation and annihilation operators of a theory in a full, nonperturbative framework. We provide a quantum algorithm requiring a number of ancillary qubits that is logarithmic in the size of the wavepackets, and with a success probability depending on the state being prepared and on the lattice parameters. The gate complexity for a single iteration of the circuit is equivalent to that of a time evolution for a fixed time.
We derive various properties of symmetric product orbifolds of $T\bar{T}$- and $J\bar{T}$-deformed CFTs from a field-theoretical perspective. First, we generalise the known formula for the torus partition function of a symmetric orbifold theory in terms of that of the seed to non-conformal two-dimensional QFTs; specialising this to seed $T\bar{T}$- and $J\bar{T}$-deformed CFTs reproduces previous results in the literature. Second, we show that the single-trace $T\bar{T}$ and $J\bar{T}$ deformations preserve the Virasoro and Kac-Moody symmetries of the undeformed symmetric product orbifold CFT, including their fractional counterparts, as well as the KdV charges. Finally, we discuss correlation functions in these theories. By extending a previously proposed basis of operators for $J\bar{T}$-deformed CFTs to the single-trace case, we explicitly compute the correlation functions of both untwisted and twisted-sector operators and compare them to an appropriate set of holographic correlators. Our derivations are based mainly on Hilbert space techniques and completely avoid the use of conformal invariance, which is not present in these models.
We compute the equivariant partition function of the six-dimensional M-string SCFTs on a background with the topology of a product of a two-dimensional torus and an ALE singularity. We determine the result by exploiting BPS strings probing the singularity, whose worldvolume theories we determine via a chain of string dualities. A distinctive feature we observe is that for this class of backgrounds the BPS strings' worldsheet theories become relative field theories that are sensitive to finer discrete data generalizing to 6d the familiar choices of flat connections at infinity for instantons on ALE spaces. We test our proposal against a conjectural 6d N = (1,0) generalization of the Nekrasov master formula, as well as against known results on ALE partition functions in four dimensions.
In this paper we present a large class of flux backgrounds and solve the shortest vector problem in type IIB string theory on an orientifold of the $1^9$ Landau-Ginzburg model.
We study systems of staggered boson Hamiltonians on a one-dimensional lattice and, in particular, how the translation symmetry by one unit in these systems is in reality a non-invertible symmetry closely related to T-duality. We also study the simplest systems of clock models derived from these staggered boson Hamiltonians. We show that the non-invertible symmetries of these lattice models, together with the discrete ${\mathbb Z}_N$ symmetry, predict that these are critical points with a $U(1)$ current algebra at $c=1$ and radius $\sqrt{2N}$ whenever $N>4$.
M{\o}ller maps are identifications between the observables of a perturbatively interacting physical system and the observables of its underlying free (i.e. non-interacting) system. This work studies and characterizes obstructions to the existence of such identifications. The main results are existence and importantly also non-existence theorems, which in particular imply that M{\o}ller maps do not exist for non-Abelian Chern-Simons and Yang-Mills theories on globally hyperbolic Lorentzian manifolds.
The interaction of a high-current $O(100\,\mu\mathrm{A})$, medium-energy $O(10\,\mathrm{GeV})$ electron beam with a thick target $O(1\,\mathrm{m})$ produces an overwhelming shower of standard matter particles in addition to hypothetical Light Dark Matter particles. While most of the radiation (gammas, electrons/positrons, and neutrons) is contained in the thick target, deeply penetrating particles (muons, neutrinos, and light dark matter particles) propagate over long distances, producing intense secondary beams. Using sophisticated Monte Carlo simulations based on FLUKA and GEANT4, we explored the characteristics of secondary muons, neutrinos, and (hypothetical) dark scalar particles produced by the interaction of the Jefferson Lab 11 GeV intense electron beam with the experimental Hall-A beam dump. Considering the possible beam energy upgrade, this study was repeated for a 20 GeV CEBAF beam.
The production of prompt $D^+_{s}$ and $D^+$ mesons is measured by the LHCb experiment in proton-lead ($p\mathrm{Pb}$) collisions in both the forward ($1.5<y^*<4.0$) and backward ($-5.0<y^*<-2.5$) rapidity regions at a nucleon-nucleon center-of-mass energy of $\sqrt {s_{\mathrm{NN}}}=8.16\,$TeV. The nuclear modification factors of both $D^+_{s}$ and $D^+$ mesons are determined as a function of transverse momentum, $p_{\mathrm{T}}$, and rapidity. In addition, the $D^+_{s}$ to $D^+$ cross-section ratio is measured as a function of the charged particle multiplicity in the event. An enhanced $D^+_{s}$ to $D^+$ production in high-multiplicity events is observed for the whole measured $p_{\mathrm{T}}$ range, in particular at low $p_{\mathrm{T}}$ and backward rapidity, where the significance exceeds six standard deviations. This constitutes the first observation of strangeness enhancement in charm quark hadronization in high-multiplicity $p\mathrm{Pb}$ collisions. The results are also qualitatively consistent with the presence of quark coalescence as an additional charm quark hadronization mechanism in high-multiplicity proton-lead collisions.
The energy and mass measurements of jets are crucial tasks for the Large Hadron Collider experiments. This paper presents a new calibration method to simultaneously calibrate these quantities for large-radius jets measured with the ATLAS detector using a deep neural network (DNN). To address the specificities of the calibration problem, special loss functions and training procedures are employed, and a complex network architecture, which includes feature annotation and residual connection layers, is used. The DNN-based calibration is compared to the standard numerical approach in an extensive series of tests. The DNN approach is found to perform significantly better in almost all of the tests and over most of the relevant kinematic phase space. In particular, it consistently improves the energy and mass resolutions, with a 30% better energy resolution obtained for transverse momenta $p_{\text{T}}>500$ GeV.
This work introduces the software tool Comprehensive Particle Identification (CPID), a modular approach to combined PID for future Higgs factories implemented in the Key4hep framework. Its structure is explained, the current module library is laid out, and initial performance measures are presented using the ILD detector as an example. A basic run of CPID already performs as well as the default full-simulation ILD PID reconstruction, but allows for the easy and convenient addition of further PID observables, improving PID performance in future analyses and high-level reconstruction tasks such as strange tagging.
This work presents the status and plans of the International Large Detector (ILD) concept, one of the most detailed and comprehensive detector concepts for a future Higgs factory. Most hardware groups have demonstrated ILD's performance targets and continue development, focusing on further improvements and on making ILD fit for a circular collider. Their status, new developments and plans are elaborated. Two examples are given of new reconstruction methods that utilise hardware developments and improve the prospects for advanced physics analyses.
INO-ICAL is a proposed underground particle physics experiment to study the neutrino oscillation parameters by detecting neutrinos produced in atmospheric air showers. The Iron CALorimeter (ICAL) will have 151 layers of iron stacked vertically, with active detector elements in between the iron layers. The iron layers will be magnetized to enable the measurement of the momentum and charge of the $\mu^-$ (or $\mu^+$) produced by $\nu_\mu$ (or $\bar{\nu}_\mu$) interactions. Resistive Plate Chambers (RPCs) have been chosen as the active detector elements due to their large area coverage, uncompromised sensitivity, consistent performance over decades, and cost effectiveness. The major factors that determine the physics potential of the ICAL experiment are the efficiency, position resolution and time resolution of the large-area RPCs. A prototype detector called miniICAL (with 11 iron layers) was commissioned to understand the engineering challenges in building the large-scale magnet and its ancillary systems, and also to study the performance of the RPC detectors and readout electronics developed by the INO collaboration. As part of the performance study of the RPC detectors, an attempt is made to improve their position and time resolution. Even a small improvement in the position and time resolution will help to improve the measurements of the momentum and directionality of the neutrinos in ICAL. The Time-over-Threshold (ToT) of the RPC pulses (signals) is recorded by the readout electronics. ToT is a measure of the pulse width and consequently of the amplitude. This information is used to improve the time and position resolution of the RPCs and consequently the physics potential of INO.
This paper presents a measurement of fiducial and differential cross-sections for $W^{+}W^{-}$ production in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS experiment at the Large Hadron Collider using a dataset corresponding to an integrated luminosity of 139 fb$^{-1}$. Events with exactly one electron, one muon and no hadronic jets are studied. The fiducial region in which the measurements are performed is inspired by searches for the electroweak production of supersymmetric charginos decaying to two-lepton final states. The selected events have moderate values of missing transverse momentum and the `stransverse mass' variable $m_{\textrm{T2}}$, which is widely used in searches for supersymmetry at the LHC. The ranges of these variables are chosen so that the acceptance is enhanced for direct $W^{+}W^{-}$ production and suppressed for production via top quarks, which is treated as a background. The fiducial cross-section and particle-level differential cross-sections for six variables are measured and compared with two theoretical SM predictions from perturbative QCD calculations.
Searches for the exclusive decays of the Higgs boson to an $\omega$ meson and a photon or a $K^{*}$ meson and a photon can probe flavour-conserving and flavour-violating Higgs boson couplings to light quarks, respectively. Searches for these decays, along with the analogous $Z$ boson decay to an $\omega$ meson and a photon, are performed with a $pp$ collision data sample corresponding to integrated luminosities of up to 134 fb$^{-1}$ collected at $\sqrt{s}=13$ TeV with the ATLAS detector at the CERN Large Hadron Collider. The obtained 95% confidence-level upper limits on the respective branching fractions are ${\cal B}(H\rightarrow\omega\gamma)< 5.5\times 10^{-4}$, ${\cal B}(H\rightarrow K^{*}\gamma)< 2.2\times10^{-4}$ and ${\cal B}(Z\rightarrow \omega\gamma)<3.9\times 10^{-6}$. The limits for $H\rightarrow \omega\gamma$ and $Z\rightarrow \omega\gamma$ are 370 times and 140 times the Standard Model expected values, respectively. The result for $Z\rightarrow \omega\gamma$ corresponds to a two-orders-of-magnitude improvement over the limit obtained by the DELPHI experiment at LEP.
This work is the second part of a simulation study investigating the processing of densely packed and moving granular assemblies by positron emission particle tracking (PEPT). Since medical PET scanners commonly used for PEPT are very expensive, a PET-like detector system based on cost-effective organic plastic scintillator bars is being developed and tested for its capabilities. In this context, the spatial resolution of a resting positron source, a source moving on a freely designed model path, and a particle motion given by a DEM (Discrete Element Method) simulation is studied using Monte Carlo simulations and the software toolkit Geant4. This not only extended the simulation and reconstruction to a moving source but also significantly improved the spatial resolution compared to previous work by adding oversampling and iteration to the reconstruction algorithm. Furthermore, in the case of a source following a trajectory developed from DEM simulations, a very good resolution of about 1 mm in all three directions and an average three-dimensional deviation between simulated and reconstructed events of 2.3 mm could be determined. Thus, the resolution for a realistic particle motion within the generic grate system (which is the test rig for further experimental studies) is well below the smallest particle size. The simulation of the dependence of the reconstruction accuracy on tracer particle location revealed a nearly constant efficiency within the entire detector system, which demonstrates that boundary effects can be neglected.
Identification of quark flavor is essential for collider experiments in high-energy physics, relying on flavor tagging algorithms. In this study, using a full simulation of the Circular Electron Positron Collider (CEPC), we investigated the flavor tagging performance of two different algorithms: ParticleNet, originally developed at CMS, and LCFIPlus, the current flavor tagging algorithm employed at CEPC. Compared to LCFIPlus, ParticleNet significantly enhances flavor tagging performance, resulting in a significant improvement in benchmark measurement accuracy, i.e., a 36% improvement for the $\nu\bar{\nu}H\to c\bar{c}$ measurement and a 75% improvement for the $|V_{cb}|$ measurement via W boson decay, when the CEPC operates as a Higgs factory at a center-of-mass energy of 240 GeV with an integrated luminosity of 5.6 $ab^{-1}$. We compared the performance of ParticleNet and LCFIPlus for different vertex detector configurations, observing that the inner radius is the most sensitive parameter, followed by the material budget and spatial resolution.
Bell experiments have confirmed that quantum entanglement is an inseparable correlation, but there is no faster-than-light influence between two entangled particles when a local measurement is performed. However, how such an inseparable correlation is maintained and manifested when the two entangled particles are space-like separated is still not well understood. The recently proposed least observability principle for quantum mechanics brings new insights to this question. Here we show that even though the inseparable correlation may be initially created by previous physical interaction between the two particles, the preservation and manifestation of such an inseparable correlation are achieved through extremizing an information metric that measures the additional observability of the bipartite system due to vacuum fluctuations. This holds even though there is no further interaction once the two particles move apart, and the underlying vacuum fluctuations are local. An example of two entangled free particles described by Gaussian wave packets is provided to illustrate these results.
Adiabatic quantum computing has demonstrated how the quantum Zeno effect can be used to construct quantum optimisers. However, much less work has been done to understand how more general Zeno effects could be used in a similar setting. We use a construction based on three-state systems rather than qubits directly, so that a qubit can remain after projecting out one of the states. We find that our model of computing is able to recover the dynamics of a transverse-field Ising model, and several generalisations are possible. Our methods allow constraints to be implemented non-perturbatively and do not need tunable couplers, unlike simple transverse-field implementations. We further discuss how to implement the protocol physically using methods building on STIRAP protocols for state transfer. We identify a substantial challenge: settings defined exclusively by measurement or dissipative Zeno effects do not allow for frustration, and in these settings pathological spectral features arise, leading to unfavorable runtime scaling. We discuss methods to overcome this challenge, for example by including gain as well as loss, as is often done in optical Ising machines.
These are the lecture notes of the master's course "Quantum Computing", taught at Chalmers University of Technology every fall since 2020, with participation of students from RWTH Aachen and Delft University of Technology. The aim of this course is to provide a theoretical overview of quantum computing, excluding specific hardware implementations. Topics covered in these notes include quantum algorithms (such as Grover's algorithm, the quantum Fourier transform, phase estimation, and Shor's algorithm), variational quantum algorithms that utilise an interplay between classical and quantum computers [such as the variational quantum eigensolver (VQE) and the quantum approximate optimisation algorithm (QAOA), among others], quantum error correction, various versions of quantum computing (such as measurement-based quantum computation, adiabatic quantum computation, and the continuous-variable approach to quantum information), the intersection of quantum computing and machine learning, and quantum complexity theory. Lectures on these topics are compiled into 12 chapters, most of which contain a few suggested exercises at the end, and are interspersed with four tutorials, which provide practical exercises as well as further details. At Chalmers, the course is taught in seven weeks, with three two-hour lectures or tutorials per week. It is recommended that students taking the course have some previous experience with quantum physics, but this is not strictly necessary.
From the perspective of quantum many-body physics, the Floquet code of Hastings and Haah can be thought of as a measurement-only version of the Kitaev honeycomb model where a periodic sequence of two-qubit XX, YY, and ZZ measurements dynamically stabilizes a toric code state with two logical qubits. However, the most striking feature of the Kitaev model is its intrinsic fractionalization of quantum spins into an emergent gauge field and itinerant Majorana fermions that form a Dirac liquid, which is absent in the Floquet code. Here we demonstrate that by varying the measurement strength of the honeycomb Floquet code one can observe features akin to the fractionalization physics of the Kitaev model at finite temperature. Introducing coherent errors to weaken the measurements we observe three consecutive stages that reveal qubit fractionalization (for weak measurements), the formation of a Majorana liquid (for intermediate measurement strength), and Majorana pairing together with gauge ordering (for strong measurements). Our analysis is based on a mapping of the imperfect Floquet code to random Gaussian fermionic circuits (networks) that can be Monte Carlo sampled, exposing two crossover peaks. With an eye on circuit implementations, our analysis demonstrates that the Floquet code, in contrast to the toric code, does not immediately break down to a trivial state under weak measurements, but instead gives way to a long-range entangled Majorana liquid state.
Fractonic constraints can lead to exotic properties of quantum many-body systems. Here, we investigate the dynamics of fracton excitations on top of the ground states of a one-dimensional, dipole-conserving Bose-Hubbard model. We show that nearby fractons undergo a collective motion mediated by exchanging virtual dipole excitations, which provides a powerful dynamical tool to characterize the underlying ground state phases. We find that in the gapped Mott insulating phase, fractons are confined to each other, as motion requires the exchange of massive dipoles. When crossing the phase transition into a gapless Luttinger liquid of dipoles, fractons deconfine. Their transient deconfinement dynamics scales diffusively and exhibits strong but subleading contributions described by a quantum Lifshitz model. We examine prospects for experimental realization in tilted Bose-Hubbard chains by numerically simulating the adiabatic state preparation and subsequent time evolution, and find clear signatures of the low-energy fracton dynamics.
Efficient coupling of optically active qubits to optical cavities is a key challenge for solid-state-based quantum optics experiments and future quantum technologies. Here we present a quantum photonic interface based on a single Tin-Vacancy center in a micrometer-thin diamond membrane coupled to a tunable open microcavity. We use the full tunability of the microcavity to selectively address individual Tin-Vacancy centers within the cavity mode volume. Purcell enhancement of the Tin-Vacancy center optical transition is evidenced both by a reduction of the optical excited-state lifetime and by a broadening of the optical linewidth. As the emitter selectively reflects the single-photon component of the incident light, the coupled emitter-cavity system exhibits strong quantum nonlinear behavior. On resonance, we observe a transmission dip of 50% for a low incident photon number per Purcell-reduced excited-state lifetime, while the dip disappears as the emitter is saturated at higher photon numbers. Moreover, we demonstrate that the emitter strongly modifies the photon statistics of the transmitted light by observing photon bunching. This work establishes a versatile and tunable platform for advanced quantum optics experiments and proof-of-principle demonstrations towards quantum networking with solid-state qubits.
Magic is a property of a quantum state that characterizes its deviation from a stabilizer state, serving as a useful resource for achieving universal quantum computation, e.g., within schemes that use Clifford operations. In this work, we study magic, as quantified by the stabilizer R\'enyi entropy (SRE), in a class of models known as generalized Rokhsar-Kivelson systems, i.e., Hamiltonians that allow a stochastic matrix form (SMF) decomposition. The ground state wavefunctions of these systems can be written explicitly throughout their phase diagram, and their properties can be related to associated classical statistical mechanics problems, thereby allowing powerful analytical and numerical approaches that are not usually available in conventional quantum many-body settings. As a result, we are able to express the SRE in terms of wave function coefficients that can be understood as a free energy difference of related classical problems. We apply this insight to a range of quantum many-body SMF Hamiltonians, which allows us to study numerically the SRE of large high-dimensional systems, and in some cases to obtain analytical results. We observe that the behaviour of the SRE is relatively featureless across quantum phase transitions in these systems, although it is indeed singular (in its first or higher-order derivative, depending on the nature of the transition). On the contrary, we find that the maximum of the SRE generically occurs at a cusp away from the quantum critical point, where its derivative suddenly changes sign. Furthermore, we compare the SRE and the logarithm of overlaps with specific stabilizer states, asymptotically realised in the ground state phase diagrams of these systems. We find that they display strikingly similar behaviors, which in turn establishes rigorous bounds on the min-relative entropy of magic.
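For reference, the stabilizer 2-R\'enyi entropy of an $n$-qubit pure state is commonly defined as
$$ M_2(|\psi\rangle) \;=\; -\log_2\left(\frac{1}{2^n}\sum_{P\in\mathcal{P}_n}\langle\psi|P|\psi\rangle^4\right), $$
with the sum running over all $4^n$ Pauli strings; $M_2$ vanishes exactly on pure stabilizer states, which is why its maxima and singular features track the distance from the stabilizer states across the phase diagram.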
The increasing complexity of recent photonic experiments challenges the development of efficient multi-channel coincidence counting systems with high-level functionality. Here, we report a coincidence unit able to count detection events ranging from singles to 16-fold coincidences with full channel-number resolution. The device operates within sub-100~ps coincidence time windows, with a maximum input frequency of 1.5~GHz and an overall jitter of less than 10~ps. The unit's high-level timing performance renders it suitable for quantum photonic experiments employing low-timing-jitter single-photon detectors. Additionally, the unit can be used in complex photonic systems to drive feed-forward loops. We have demonstrated the developed coincidence counting unit in photon-number-resolving detection to directly quantify the statistical properties of light, specifically coherent and thermal states, with a fidelity exceeding 0.999 for up to 60~photons.
Deriving an arrow of time from time-reversal symmetric microscopic dynamics is a fundamental open problem in physics. Here we focus on several derivations of dissipative dynamics and the thermodynamic arrow of time to study precisely how time-reversal symmetry is broken in open classical and quantum systems. These derivations all involve the Markov approximation applied to a system interacting with an infinite heat bath. We find that the Markov approximation does not imply a violation of time-reversal symmetry. Our results show instead that the time-reversal symmetry is maintained in standard dissipative equations of motion, such as the Langevin equation and the Fokker-Planck equation in open classical dynamics, and the Brownian motion, the Lindblad and the Pauli master equations in open quantum dynamics. In all cases, the resulting equations of motion describe thermalisation that occurs into the future as well as into the past. As a consequence, we argue that the resulting dynamics are better described by a definition of Markovianity that is symmetric with respect to the future and the past.
In this paper, we study a faithful translation of a two-player quantum Morra game, which builds on previous work by including the classical game as a special case. We propose a natural deformation of the game in the quantum regime in which Alice has a winning advantage, breaking the balance of the classical game. A Nash equilibrium can be found in some cases by employing a pure strategy, which is impossible in the classical game, where a mixed strategy is always required. We prepared our states using photonic qubits on a linear optics setup, with an average deviation of less than 2% with respect to the measured outcome probabilities. Finally, we discuss potential applications of the quantum Morra game to the study of quantum information and communication.
We present an all-optical method to measure and compensate for residual magnetic fields present in a cloud of ultracold atoms trapped in an optical dipole trap. Our approach leverages the increased loss from the trapped atomic sample through electromagnetically induced absorption. Modulating the excitation laser provides coherent sidebands, resulting in a $\Lambda$-type pump-probe scheme. Scanning an additional magnetic offset field leads to pairs of sub-natural-linewidth resonances, whose positions encode the magnetic field in all three spatial directions. Our measurement scheme is readily implemented in typical quantum gas experiments and has no particular hardware requirements.
Variational quantum approaches have shown great promise in finding near-optimal solutions to computationally challenging tasks. Nonetheless, enforcing constraints in a disciplined fashion has been largely unexplored. To address this gap, this work proposes a hybrid quantum-classical algorithmic paradigm termed VQEC that extends the celebrated VQE to handle optimization with constraints. As with the standard VQE, the vector of optimization variables is captured by the state of a variational quantum circuit (VQC). To deal with constraints, VQEC optimizes a Lagrangian function classically over both the VQC parameters and the dual variables associated with constraints. To comply with the quantum setup, variables are updated via a perturbed primal-dual method leveraging the parameter-shift rule. Among a wide gamut of potential applications, we showcase how VQEC can approximately solve quadratically-constrained binary optimization (QCBO) problems, find stochastic binary policies satisfying quadratic constraints on average and in probability, and solve large-scale linear programs (LPs) over the probability simplex. Under an assumption on the error with which the VQC approximates an arbitrary probability mass function (PMF), we provide bounds on the optimality gap attained by the VQC. Numerical tests on a quantum simulator investigate the effect of various parameters and corroborate that VQEC can generate high-quality solutions.
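A schematic of the primal-dual loop described above, with the parameter-shift rule supplying the circuit gradients (the cost and constraint functions here are classical placeholders standing in for VQC expectation values, chosen so that the shift rule is exact):

    import numpy as np

    def expval(theta):        # placeholder for a VQC expectation value f(theta)
        return np.sum(np.sin(theta))

    def constraint(theta):    # placeholder constraint g(theta) <= 0
        return np.cos(theta[0]) - 0.5

    def shift_grad(f, theta, s=np.pi / 2):
        """Parameter-shift gradient: df/dtheta_i = [f(theta+s e_i) - f(theta-s e_i)] / 2."""
        g = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = s
            g[i] = 0.5 * (f(theta + e) - f(theta - e))
        return g

    theta, lam, eta, rho = np.zeros(4), 0.0, 0.1, 0.1
    for _ in range(200):
        grad_L = shift_grad(expval, theta) + lam * shift_grad(constraint, theta)
        theta = theta - eta * grad_L                   # primal descent on the Lagrangian
        lam = max(0.0, lam + rho * constraint(theta))  # projected dual ascent
    print(theta, lam)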
We study the electronic excitation spectra in solid molecular hydrogen (phase I) at ambient temperature and pressures of 5-90 GPa using Quantum Monte Carlo methods and Many-Body Perturbation Theory. In this range, the system changes from a wide-gap molecular insulator to a semiconductor, altering the nature of the excitations from localized to delocalized. Computed gaps and spectra agree with experiments, demonstrating the ability to predict accurately the band gaps of many-body systems in the presence of nuclear quantum and thermal effects. We also explore the changes in the electronic gap for the hydrogen isotopes.
We analyze the robustness against non-static noise of clock transitions implemented via a method of continuous dynamical decoupling (CDD) in a hyperfine Zeeman multiplet in $^{87}$Rb. The emergence of features specific to the quadratic corrections to the linear Zeeman effect is evaluated. Our analytical approach, which combines methods of stochastic analysis with time-dependent perturbation theory, allows tracing the decoherence process for generic noise sources. Working first with a basic CDD scheme, it is shown that the amplitude and frequency of the (driving) control field can be appropriately chosen to force the non-static random input to have a (time-dependent) perturbative character. Moreover, in the dressed-state picture, the effect of noise is described in terms of an operative random variable whose properties, dependent on the driving field, can be analytically characterized. In this framework, the relevance of the spectral density of the fluctuations to the performance of the CDD technique is precisely assessed. In particular, the range of noise correlation times where the method of decoherence reduction is still efficient is identified. The results obtained in the basic CDD framework are extrapolated to concatenated schemes. The generality of our approach allows its applicability beyond the specific atomic system considered.
Microwave-to-optical transducers are integral to the future of superconducting quantum computing, as they would enable scaling and long-distance communication of superconducting quantum processors through optical fiber links. However, optically-induced microwave noise poses a significant challenge to achieving quantum transduction between microwave and optical frequencies. In this work, we study light-induced microwave noise in an integrated electro-optical transducer harnessing the Pockels effect of thin-film lithium niobate. We reveal three sources of added noise with distinctive time constants ranging from sub-100 nanoseconds to milliseconds. Our results provide insights into the mechanisms of, and corresponding mitigation strategies for, light-induced microwave noise in superconducting microwave-optical transducers, and pave the way towards realizing the ultimate goal of quantum transduction.
In this paper we study the connections between three paradigms in number theory: the adelic formulation of the Riemann zeta function, the Weil explicit formula, and the concepts of the so-called probabilistic number theory initiated by Harald Bohr. We give a different reformulation, rooted in the adelic framework, of the theory of the value distribution of the Riemann zeta function. By introducing the Bohr compactification of the real numbers as a natural probability space for this theory, we show that the Weil explicit sum can be expressed in terms of covariances and expected values attached to random variables defined on this space. Moreover, we express the explicit formula as a limit of spectral integrals attached to operators defined on the Hilbert space of square-integrable functions on the Bohr compactification. This gives a probabilistic and geometrical interpretation of the Weil explicit formula.
We develop a new theoretical framework for describing light-matter interactions in cavity quantum electrodynamics (QED), which is optimized for efficient convergence at arbitrarily strong coupling strengths and is naturally applicable to low-dimensional materials. This new Hamiltonian is obtained by applying a unitary gauge transformation to the p$\cdot$A Hamiltonian, with a shift on both the matter coordinate and the photonic coordinate, then performing a phase rotation and transforming to the reciprocal space of the matter. By formulating the light-matter interaction in terms of an upper-bounded effective coupling parameter, this method allows one to easily converge eigenspectra calculations for any coupling strength, even far into the ultra-strong and deep-strong coupling regimes. We refer to this new approach as the Reciprocal Asymptotically Decoupled (RAD) Hamiltonian. The RAD Hamiltonian allows for fast convergence of the polariton eigenspectrum with a much smaller matter and photon basis, compared to the commonly used p$\cdot$A or dipole gauge Hamiltonians. It also allows one to go beyond the commonly used long-wavelength approximation and accurately describe the spatial variations of the field inside the cavity, which ensures the conservation of momentum between light and matter.
The cost of measuring quantum expectation values of an operator can be reduced by grouping the Pauli string ($SU(2)$ tensor product) decomposition of the operator into maximally commuting sets. We detail an algorithm, presented in [1], to partition the full set of $m$-qubit Pauli strings into the minimal number of commuting families, and benchmark its performance with dense Hamiltonians on IBM hardware. We also compare how our method scales relative to graph-theoretic techniques for the generally commuting case.
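For intuition, the following sketch (a greedy baseline under the standard symplectic commutation test, not the minimal-family algorithm of [1]) partitions the $4^m - 1$ nontrivial $m$-qubit Pauli strings into mutually commuting families; the minimal achievable number is $2^m + 1$.

```python
import itertools

def commutes(p, q):
    """Pauli strings commute iff they anticommute on an even number of sites."""
    anti = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return anti % 2 == 0

m = 3
strings = ["".join(s) for s in itertools.product("IXYZ", repeat=m)]
strings.remove("I" * m)  # the identity commutes with everything

families = []
for p in strings:        # greedy: put each string into the first compatible family
    for fam in families:
        if all(commutes(p, q) for q in fam):
            fam.append(p)
            break
    else:
        families.append([p])

# 4^m - 1 strings; the minimal number of commuting families is 2^m + 1 = 9 here
print(f"{len(strings)} strings grouped into {len(families)} families")
```

The greedy pass typically lands above the optimum, which is precisely the gap the exact partitioning algorithm closes.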
As we venture into the Intermediate-Scale Quantum (ISQ) era, the proficiency of modular arithmetic operations becomes pivotal for advancing quantum cryptographic algorithms. This study presents an array of quantum circuits, each precision-engineered for modular arithmetic functions critical to cryptographic applications. Central to our exposition are quantum modular adders, multipliers, and exponential operators, whose designs are rigorously optimized for ISQ devices. We provide a theoretical framework and practical implementations in the PennyLane quantum software, bridging the gap between conceptual and applied quantum computing. Our simulations validate the efficacy of these methodologies, offering a strategic compass for developing quantum algorithms that align with the rapid progression of quantum technology.
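As a flavor of such constructions, here is a minimal Draper-style adder in PennyLane (a plain Fourier-basis addition of a classical constant modulo $2^n$, assumed here as a representative building block; the paper's circuits implement full modular reduction, multiplication, and exponentiation, which this sketch does not). The endianness convention (wire 0 most significant) is an assumption of this sketch.

```python
import numpy as np
import pennylane as qml

n = 4                       # register width
a = 5                       # classical constant to add
dev = qml.device("default.qubit", wires=n)

@qml.qnode(dev)
def add_const(b):
    qml.BasisState(np.array([int(bit) for bit in f"{b:0{n}b}"]), wires=range(n))
    qml.QFT(wires=range(n))
    # In the Fourier basis, adding `a` is a layer of single-qubit phases:
    # wire k (most significant first) acquires phase 2*pi*a / 2^(k+1).
    for k in range(n):
        qml.PhaseShift(2 * np.pi * a / 2 ** (k + 1), wires=k)
    qml.adjoint(qml.QFT)(wires=range(n))
    return qml.probs(wires=range(n))

b = 9
result = int(np.argmax(add_const(b)))
print(f"{b} + {a} mod {2**n} = {result}")   # expect (9 + 5) mod 16 = 14
```

Modular adders then wrap this primitive with comparison and conditional subtraction stages, which is where most of the ISQ-era optimization effort goes.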
Causal networks beyond the one underlying the paradigmatic Bell theorem can lead to new kinds and applications of non-classical behavior. Their study, however, has been hindered by the fact that they define a non-convex set of correlations, and only very incomplete or approximate descriptions have been obtained so far, even for the simplest scenarios. Here, we take a different stance on the problem and consider the relative volume of the classical or non-classical correlations a given network gives rise to. Among many other results, we show instances where the inflation technique, arguably the most disseminated tool in the community, is unable to detect a significant portion of the non-classical behaviors. Interestingly, we also show that the use of interventions, a central tool in causal inference, can substantially enhance our ability to witness non-classicality.
High-quality squeezed light is an important resource for a variety of applications. Multiple methods for generating squeezed light are known, having been demonstrated theoretically and experimentally. However, the effectiveness of these methods -- in particular, the inherent limitations on the signals that can be produced -- has received little consideration. Here we present a comparative theoretical analysis of generating highly-displaced, high-brightness squeezed light from a linear optical method -- a beam-splitter mixing a squeezed vacuum and a strong coherent state -- and from parametric amplification methods, including an optical parametric oscillator, an optical parametric amplifier, and a dissipative optomechanical squeezer seeded with coherent states. We show that the quality of the highly-displaced, high-brightness squeezed states that can be generated using these methods is limited on a fundamental level by the physical mechanism utilized; across all methods there are significant tradeoffs between brightness, squeezing, and overall uncertainty. We explore the nature and extent of these tradeoffs specific to each mechanism, identify the optimal operation modes for each, and provide an argument for why this type of tradeoff is unavoidable for parametric-amplifier-type squeezers.
The mobility edge, a central concept in disordered models for localization-delocalization transitions, has rarely been discussed in the context of random matrix theory (RMT). Here we report a new class of random matrix models built by directly coupling two random matrices, and show that their overlapped and non-overlapped spectra exhibit totally different scaling behaviors, which can be used to construct tunable mobility edges. This model is a direct generalization of the Rosenzweig-Porter model, which hosts ergodic, localized, and non-ergodic extended (NEE) phases. A generic theory for these phase transitions is presented, which applies equally well to dense, sparse, and even correlated random matrices in different ensembles. We show that the phase diagram is fully characterized by two scaling exponents, which we map out in various conditions. Our model provides a general framework for realizing mobility edges and non-ergodic phases in a controllable way in RMT, paving the way for many intriguing applications, both in the pure mathematics of RMT and in possible implementations of mobility edges in many-body models, chiral symmetry breaking in QCD, and the stability of large ecosystems.
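A quick numerical probe of the underlying physics (the base Rosenzweig-Porter model, not the paper's coupled-matrix generalization) shows the three regimes through the inverse participation ratio (IPR) of mid-spectrum eigenvectors; the matrix size and exponents below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024

def mean_ipr(gamma, n_mid=64):
    """IPR ~ 1/N in the ergodic phase, O(1) when localized, in between for NEE."""
    G = rng.standard_normal((N, N))
    G = (G + G.T) / np.sqrt(2)                       # GOE-like symmetric matrix
    H = np.diag(rng.standard_normal(N)) + N ** (-gamma / 2) * G
    _, vecs = np.linalg.eigh(H)
    mid = vecs[:, N // 2 - n_mid // 2 : N // 2 + n_mid // 2]
    return float(np.mean(np.sum(mid ** 4, axis=0)))

# gamma < 1: ergodic; 1 < gamma < 2: non-ergodic extended; gamma > 2: localized
for gamma in (0.5, 1.5, 2.5):
    print(f"gamma = {gamma}: mean IPR = {mean_ipr(gamma):.4f}")
```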
Classical locally recoverable codes, which permit highly efficient recovery from localized errors as well as global recovery from larger errors, provide some of the most useful codes for distributed data storage in practice. In this paper, we initiate the study of quantum locally recoverable codes (qLRCs). In the long term, like their classical counterparts, such qLRCs may be used for large-scale quantum data storage. Our results also have concrete implications for quantum LDPC codes, which are applicable to near-term quantum error correction. After defining quantum local recoverability, we provide an explicit construction of qLRCs based on the classical LRCs of Tamo and Barg (2014), which we show have (1) a close-to-optimal rate-distance tradeoff (i.e. near the Singleton bound), (2) an efficient decoder, and (3) good spatial locality in a physical implementation. Although the analysis is significantly more involved than in the classical case, we obtain close-to-optimal parameters by introducing a "folded" version of our quantum Tamo-Barg (qTB) codes, which we then analyze using a combination of algebraic techniques. We furthermore present and analyze two additional constructions using more basic techniques, namely random qLRCs and qLRCs from AEL distance amplification. Each of these constructions has some advantages, but neither achieves all 3 properties of our folded qTB codes described above. We complement these constructions with Singleton-like bounds showing that our qLRC constructions achieve close-to-optimal parameters. We also apply these results to obtain Singleton-like bounds for qLDPC codes, which to the best of our knowledge are novel. Finally, we show that even the weakest form of a stronger locality property called local correctability, which permits more robust local recovery and is achieved by certain classical codes, is impossible quantumly.
In this paper we study single-qutrit quantum circuits consisting of words over the Clifford+$\mathcal{D}$ gate set, where $\mathcal{D}$ consists of cyclotomic gates of the form $\text{diag}(\pm\xi^{a},\pm\xi^{b},\pm\xi^{c})$, where $\xi$ is a primitive $9$-th root of unity and $a,b,c$ are integers. We characterize classes of qutrit unit vectors $z$ with entries in $\mathbb{Z}[\xi, \frac{1}{\chi}]$ based on the possibility of reducing their smallest denominator exponent (sde) with respect to $\chi := 1 - \xi$ by applying an appropriate gate in Clifford+$\mathcal{D}$. We do this by studying the notion of `derivatives mod $3$' of an arbitrary element of $\mathbb{Z}[\xi]$ and using it to study the smallest denominator exponent of $HDz$, where $H$ is the qutrit Hadamard gate and $D \in \mathcal{D}$. In addition, we reduce the problem of finding all unit vectors of a given sde to that of finding integral solutions of a positive definite quadratic form along with some additional constraints. As a consequence, we prove that the Clifford+$\mathcal{D}$ gates naturally arise as the gates with sde $0$ and $3$ in the group $U(3,\mathbb{Z}[\xi, \frac{1}{\chi}])$ of $3 \times 3$ unitaries with entries in $\mathbb{Z}[\xi, \frac{1}{\chi}]$.
In this paper, we study the synchronization of dissipative quantum harmonic oscillators in the framework of open quantum systems via the Active-Passive Decomposition (APD) configuration. We show that two or more quantum systems may be synchronized when the quantum systems of interest are embedded in dissipative environments and influenced by a common classical system. Such a classical system is typically termed a controller, which (1) can drive the quantum systems across different regimes (e.g., from periodic to chaotic motion), and (2) constructs the so-called Active-Passive Decomposition configuration such that all the quantum objects under consideration may be synchronized. The main finding of this paper is that complete synchronization, as measured by the standard quantum deviation, can be achieved in both stable regimes (quantum limit cycles) and unstable regimes (quantum chaotic motion). As an example, we show numerically that complete synchronization can be realized for quantum mechanical resonators in an optomechanical setup.
This study presents a solution of the Yakubovsky equations for four-body bound states in momentum space that bypasses the common use of two-body $t$-matrices. Typically, such solutions depend on the fully-off-shell two-body $t$-matrices, which are obtained from the Lippmann-Schwinger integral equation for two-body subsystem energies controlled by the second and third Jacobi momenta. Instead, we use a version of the Yakubovsky equations that does not require $t$-matrices, facilitating the direct use of two-body interactions. This approach streamlines the programming and reduces the computational time. Numerically, we find that this direct approach to the Yakubovsky equations, using two-body interactions, produces four-body binding energies consistent with those obtained from the conventional $t$-matrix-dependent Yakubovsky equations, for both separable (Yamaguchi and Gaussian) and non-separable (Malfliet-Tjon) interactions.
The twig edge states in graphene-like structures can be viewed as the fourth class of states, complementary to their zigzag, bearded, and armchair counterparts. In this work, we study a rod-in-plasma system in a honeycomb lattice with twig edges under external magnetic fields and lattice scaling, and show that twig edge states can exist in different phases of the system, such as the quantum Hall phase, the quantum spin Hall phase, and the insulating phase. The twig edge states in the quantum Hall phase exhibit robust one-way transmission immune to backscattering and thus provide a novel avenue for solving the plasma communication blackout problem. Moreover, we demonstrate that corner and edge states can exist within the trivial band gap of the insulating phase by modulating the on-site potential of the twig edges. In particular, helical edge states with the unique feature of pseudospin-momentum locking, which can be excited by chiral sources, are demonstrated at the twig edges within the trivial band gap. Our results show that many topological-like behaviors of electromagnetic waves are not necessarily tied to the exact topology of the system, and that twig edges and interface engineering can bring new opportunities for more flexible manipulation of electromagnetic waves.
Quantum devices in the Noisy Intermediate-Scale Quantum (NISQ) era are limited by high error rates and short decoherence times. Typically, compiler optimisations have provided solutions at the gate level. Instead, we exploit the finest level of quantum control and introduce a set of pulse-level quantum compiler optimisations: sQueeze. Rather than relying on existing calibration that may be inaccurate, we provide a method for the live calibration of two new parameterised basis gates, $R_{x}(\theta)$ and $R_{zx}(\theta)$, using an external server. We validate our techniques using IBM quantum devices and the OpenPulse control interface over more than 8 billion shots. The $R_{x}(\theta)$ gates are on average 52.7% more accurate than their current native Qiskit decompositions, while the $R_{zx}(\theta)$ gates are 22.6% more accurate on average. These more accurate pulses also provide up to a 4.1$\times$ speed-up for single-qubit operations and a 3.1$\times$ speed-up for two-qubit gates. Overall, sQueeze demonstrates up to a 39.6% improvement in the fidelity of quantum benchmark algorithms compared to conventional approaches.
The electromagnetic trapping of ion chains can be regarded as a process of non-trivial entangled quantum state preparation within Hilbert spaces of the local axial motional modes. To begin uncovering properties of this entanglement resource produced as a byproduct of conventional ion-trap quantum information processing, the quantum continuous-variable formalism is herein utilized to focus on the leading-order entangled ground state of local motional modes in the presence of a quadratic trapping potential. The decay of entanglement between disjoint subsets of local modes is found to exhibit features of entanglement structure and response to partial measurement reminiscent of the free massless scalar field vacuum. With significant fidelities between the two, even for large system sizes, a framework is established for initializing quantum field simulations via "imaging" extended entangled states from natural sources, rather than building correlations through deep circuits of few-body entangling operators. By calculating probabilities in discrete Fock subspaces of the local motional modes, considerations are presented for locally transferring these pre-distributed entanglement resources to the qudits of ion internal energy levels, improving this procedure's anticipated experimental viability.
In the BCS limit, density profiles of unpolarized trapped fermionic clouds of atoms are largely featureless. Therefore, it is a delicate task to analyze them in order to quantify their respective interaction and temperature contributions. Temperature measurements have so far mostly been performed indirectly, by sweeping isentropically from the BCS to the BEC limit. Instead, we suggest here a direct thermometry, which relies on measuring the column density and comparing the obtained data with a Hartree-Bogoliubov mean-field theory combined with a local density approximation. For the case of an attractive interaction between two components of $^{6}$Li atoms trapped in a tri-axial harmonic confinement, we show that minimizing the error within such an experiment-theory collaboration turns out to be a reasonable criterion for analyzing measured densities in detail and, thus, for ultimately determining the sample temperatures. The findings are discussed in view of various possible sources of error.
Though the topic of causal inference is typically considered in the context of classical statistical models, recent years have seen great interest in extending causal inference techniques to quantum and generalized theories. Causal identification is a type of causal inference problem concerned with recovering from observational data and qualitative assumptions the causal mechanisms generating the data, and hence the effects of hypothetical interventions. A major obstacle to a theory of causal identification in the quantum setting is the question of what should play the role of "observational data," as any means of extracting data at a certain locus will almost certainly disturb the system. Hence, one might think a priori that quantum measurements are already too much like interventions, so that the problem of causal identification trivializes. This is not the case. Fixing a limited class of quantum instruments (namely the class of all projective measurements) to play the role of "observations," we note that as in the classical setting, there exist scenarios for which causal identification is not possible. We then present sufficient conditions for quantum causal identification, starting with a quantum analogue of the well-known "front-door criterion" and finishing with a broader class of scenarios for which the effect of a single intervention is identifiable. These results emerge from generalizing the process-theoretic account of classical causal inference due to Jacobs, Kissinger, and Zanasi beyond the setting of Markov categories, and thereby treating the classical and quantum problems uniformly.
Continuous time crystals (CTCs) are characterized by sustained oscillations that break time-translation symmetry. Since equilibrium CTCs were ruled out by no-go theorems, the emergence of such dynamical phases has been observed in various driven-dissipative quantum platforms. The current understanding of CTCs is mainly based on mean-field (MF) theories, which leave open the question of whether long-range time-crystalline order can exist in noisy, spatially extended systems without the protection of all-to-all couplings. Here, we propose a new kind of CTC realized in a quantum contact model through self-organized bistability (SOB). The exotic CTCs stem from the interplay between collective dissipation induced by first-order absorbing phase transitions (APTs) and slow constant driving provided by an incoherent pump. The stability of such oscillatory phases in finite dimensions under the action of intrinsic quantum fluctuations is scrutinized by the functional renormalization group method and numerical simulations. Occurring at the edge of quantum synchronization, the CTC phase exhibits an inherent period and amplitude with a coherence time diverging with system size, thus also constituting a boundary time crystal (BTC). Our results serve as a solid route towards self-protected CTCs in strongly interacting open systems.
The study of noise-assisted transport in quantum systems is essential for a wide range of applications, from near-term NISQ devices to models of quantum biology. Here, we study a generalised XXZ model in the presence of stochastic collision noise, which allows us to describe environments beyond the standard Markovian formulation. Our analysis, based on the local magnetization and the inverse participation ratio (IPR) or its generalisation, the inverse ergodicity ratio (IER), reveals clear regimes in which the transport rate and coherence time can be controlled by the dissipation in a consistent manner. In addition, when considering several excitations, we characterize the interplay between collisions and system interactions, identifying regimes in which transport is counterintuitively enhanced when increasing the collision rate, even in the case of initially separated excitations. These results constitute an example of the essential building blocks for understanding quantum transport in structured noisy and warm disordered environments.
We introduce a variational manifold of simple tensor network states for the study of a family of constrained models that describe spin-1/2 systems as realized by Rydberg atom arrays. Our manifold permits analytical calculation, via perturbative expansion, of one- and two-point functions in arbitrary spatial dimensions and allows for efficient computation of the matrix elements required for variational energy minimization and variational time evolution in up to three dimensions. We apply this framework to the PXP model on the hypercubic lattice in 1D, 2D, and 3D, and show that, in each case, it exhibits quantum phase transitions breaking the sub-lattice symmetry in equilibrium and hosts quantum many-body scars out of equilibrium. We demonstrate that our variational ansatz qualitatively captures all these phenomena and predicts key quantities with an accuracy that increases with the dimensionality of the lattice, and we conclude that our method can be interpreted as a generalization of mean-field theory to constrained spin models.
sQUlearn introduces a user-friendly, NISQ-ready Python library for quantum machine learning (QML), designed for seamless integration with classical machine learning tools like scikit-learn. The library's dual-layer architecture serves both QML researchers and practitioners, enabling efficient prototyping, experimentation, and pipelining. sQUlearn provides a comprehensive toolset that includes both quantum kernel methods and quantum neural networks, along with features like customizable data encoding strategies, automated execution handling, and specialized kernel regularization techniques. By focusing on NISQ-compatibility and end-to-end automation, sQUlearn aims to bridge the gap between current quantum computing capabilities and practical machine learning applications.
The noncommutative sum-of-squares (ncSoS) hierarchy was introduced by Navascu\'{e}s-Pironio-Ac\'{i}n as a sequence of semidefinite programming relaxations for approximating values of noncommutative polynomial optimization problems, which were originally intended to generalize quantum values of nonlocal games. Recent work has started to analyze the hierarchy for approximating ground energies of local Hamiltonians, initially through rounding algorithms which output product states for degree-2 ncSoS applied to Quantum Max-Cut. Some rounding methods are known which output entangled states, but they use degree-4 ncSoS. Based on this, Hwang-Neeman-Parekh-Thompson-Wright conjectured that degree-2 ncSoS cannot beat product-state approximations for Quantum Max-Cut and gave a partial proof relying on a conjectural generalization of Borell's inequality. In this work we consider a family of Hamiltonians (called the quantum rotor model in the condensed matter literature, or the lattice $O(k)$ vector model in quantum field theory) with infinite-dimensional local Hilbert space $L^{2}(S^{k - 1})$, and show that a degree-2 ncSoS relaxation approximates the ground state energy better than any product state.
Repeated measurements can induce entanglement phase transitions in the dynamics of quantum systems. Interacting models, both chaotic and integrable, generically show a stable volume-law entangled phase at low measurement rates, which disappears for free, Gaussian fermions. Interactions break the Gaussianity of a dynamical map in its unitary part, but non-Gaussianity can be introduced through measurements as well. By comparing the entanglement and non-Gaussianity structure of different protocols, we propose a new single-particle indicator of the measurement-induced phase transition, and we use it to argue in favour of the stability of the transition when non-Gaussianity is purely provided by measurements.
We provide a simple prescription to extract an effective Pauli noise model from classical simulations of a noisy experimental protocol for a unitary gate. This prescription yields the closest Pauli channel approximation to the error channel associated with the gate implementation, as measured by the Frobenius distance between quantum channels. Informed by these results, we highlight some puzzles regarding the quantitative treatment of coherent errors.
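A minimal single-qubit illustration of this prescription (with a hypothetical coherent over-rotation as the error channel, not the paper's experimental data): the Pauli channel closest in Frobenius distance keeps the diagonal of the Pauli transfer matrix (PTM), and the Pauli error probabilities follow from a Walsh-Hadamard-type transform of those diagonal entries.

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
paulis = [I, X, Y, Z]

def ptm(kraus):
    """PTM of a channel given by Kraus operators: R_ij = Tr[P_i E(P_j)] / 2."""
    R = np.zeros((4, 4))
    for i, Pi in enumerate(paulis):
        for j, Pj in enumerate(paulis):
            out = sum(K @ Pj @ K.conj().T for K in kraus)
            R[i, j] = np.real(np.trace(Pi @ out)) / 2
    return R

# Example: coherent over-rotation around Z by angle eps (a non-Pauli error).
eps = 0.1
U = np.diag([np.exp(-1j * eps / 2), np.exp(1j * eps / 2)])
R = ptm([U])

# Closest Pauli channel: keep the PTM diagonal (the Pauli twirl); then the
# Pauli probabilities follow from the symmetric Walsh-Hadamard-type matrix W.
f = np.diag(R)
W = np.array([[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]])
p = W @ f / 4                      # W is its own inverse up to the factor 4
print("PTM diagonal:", np.round(f, 4))
print("Pauli probabilities (I, X, Y, Z):", np.round(p, 4))
```

For this example the twirled channel is pure dephasing, $p_Z = (1 - \cos\epsilon)/2$, which the script reproduces.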
In the classical context, it is well known that, sometimes, if a search does not find its target, it is better to start the process anew: a strategy known as resetting. The quantum counterpart of resetting also speeds up the detection process by eliminating dark states, i.e., situations where the particle avoids detection. In this work, we introduce a most probable position resetting (MPR) protocol, in which the particle is reset to the position where the probability of finding it would have been maximal had the system evolved unitarily over a given time window. In a tight-binding lattice model, there is a 2-fold degeneracy (left and right) in the positions of maximum probability. The survival probability with the optimal restart rate approaches zero (the detection probability approaches one) when the particle is reset to both sides with equal probability. This protocol significantly reduces the optimal mean first-detected-passage time (FDT), and performs better even when the detector is far away, compared to the usual resetting protocols in which the particle is returned to its initial position. We propose a modified protocol, adaptive MPR, in which the probabilities of resetting to the right or left are made a function of the resetting step. With this protocol, we see a further reduction of the optimal mean FDT and an improvement in the search process when the detector is far away.
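The two-fold degeneracy invoked by the MPR protocol is easy to see numerically: free tight-binding evolution from a localized site produces two symmetric ballistic fronts, so the most probable position is degenerate between left and right. A minimal sketch (lattice size, hopping, and time window are arbitrary illustrative choices):

```python
import numpy as np

L, t_hop, tau = 101, 1.0, 20.0
H = -t_hop * (np.eye(L, k=1) + np.eye(L, k=-1))    # tight-binding Hamiltonian

vals, vecs = np.linalg.eigh(H)
psi0 = np.zeros(L); psi0[L // 2] = 1.0             # particle starts at the center
psi = vecs @ (np.exp(-1j * vals * tau) * (vecs.T @ psi0))
prob = np.abs(psi) ** 2

peaks = np.argsort(prob)[-2:]                      # the two degenerate maxima
print("most probable sites:", sorted(peaks), "around center", L // 2)
print("peak probabilities:", np.round(prob[peaks], 4))
```

The two reported sites sit symmetrically at roughly $\pm 2 t \tau$ from the center, which is exactly the left/right pair the MPR protocol resets to.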
We introduce an explicit construction for a key distribution protocol in the Quantum Computational Timelock (QCT) security model, where one assumes that computationally secure encryption may only be broken after a time much longer than the coherence time of available quantum memories. Taking advantage of the QCT assumptions, we build a key distribution protocol called HM-QCT from the Hidden Matching problem, for which there exists an exponential gap in one-way communication complexity between classical and quantum strategies. We establish that the security of HM-QCT against arbitrary i.i.d. attacks can be reduced to the difficulty of solving the underlying Hidden Matching problem with classical information. Legitimate users, on the other hand, can use quantum communication, which gives them the possibility of sending multiple copies of the same quantum state while retaining an information advantage. This leads to a key distribution scheme with everlasting security over $n$ bosonic modes. Such a level of security is unattainable with purely classical techniques. Remarkably, the scheme remains secure with up to $\mathcal{O}\big( \frac{\sqrt{n}}{\log(n)}\big)$ input photons for each channel use, extending its functionality and potentially outperforming QKD rates by several orders of magnitude.
Preparing thermal and ground states is an essential quantum algorithmic task for quantum simulation. In this work, we construct the first efficiently implementable and exactly detailed-balanced Lindbladian for Gibbs states of arbitrary noncommutative Hamiltonians. Our construction can also be regarded as a continuous-time quantum analog of the Metropolis-Hastings algorithm. To prepare the quantum Gibbs state, our algorithm invokes Hamiltonian simulation for a time proportional to the mixing time and the inverse temperature $\beta$, up to polylogarithmic factors. Moreover, the gate complexity reduces significantly for lattice Hamiltonians as the corresponding Lindblad operators are (quasi-) local (with radius $\sim\beta$) and only depend on local Hamiltonian patches. Meanwhile, purifying our Lindbladians yields a temperature-dependent family of frustration-free "parent Hamiltonians", prescribing an adiabatic path for the canonical purified Gibbs state (i.e., the Thermal Field Double state). These favorable features suggest that our construction is the ideal quantum algorithmic counterpart of classical Markov chain Monte Carlo sampling.
The traditional view from particle physics is that quantum gravity effects should only become detectable at extremely high energies and small length scales. Due to the significant technological challenges involved, there has been limited progress in identifying experimentally detectable effects that can be accessed in the foreseeable future. However, in recent decades, the size and mass of quantum systems that can be controlled in the laboratory have reached unprecedented scales, enabled by advances in ground-state cooling and quantum-control techniques. Preparing massive systems in quantum states paves the way for exploring a low-energy regime in which gravity can be both sourced and probed by quantum systems. Such approaches constitute an increasingly viable alternative to accelerator-based, laser-interferometric, torsion-balance, and cosmological tests of gravity. In this review, we provide an overview of proposals in which massive quantum systems act as interfaces between quantum mechanics and gravity. We discuss conceptual difficulties in the theoretical description of quantum systems in the presence of gravity, review tools for modeling massive quantum systems in the laboratory, and provide an overview of the current state-of-the-art experimental landscape. Proposals covered in this review include, among others, precision tests of gravity, tests of gravitationally-induced wavefunction collapse and decoherence, and gravity-mediated entanglement. We conclude the review with an outlook and a discussion of future questions.
Topological and symmetry-protected non-Hermitian zero modes have attracted considerable interest in the past few years. Here we reveal that they can exhibit unusual behavior when transitioning between the extended and localized regimes: when weakly coupled to a non-Hermitian reservoir, such a zero mode displays a linearly decreasing amplitude as a function of space, which is not caused by an exceptional point (EP) of a Hamiltonian, either of the entire system or of the reservoir itself. Instead, we attribute it to the non-Bloch solution of a linear homogeneous recurrence relation, together with the underlying non-Hermitian particle-hole symmetry and the zeroness of its energy.
Previously proposed quantum algorithms for solving linear systems of equations cannot be implemented in the near term due to the required circuit depth. Here, we propose a hybrid quantum-classical algorithm, called Variational Quantum Linear Solver (VQLS), for solving linear systems on near-term quantum computers. VQLS seeks to variationally prepare $|x\rangle$ such that $A|x\rangle\propto|b\rangle$. We derive an operationally meaningful termination condition for VQLS that allows one to guarantee that a desired solution precision $\epsilon$ is achieved. Specifically, we prove that $C \geq \epsilon^2 / \kappa^2$, where $C$ is the VQLS cost function and $\kappa$ is the condition number of $A$. We present efficient quantum circuits to estimate $C$, while providing evidence for the classical hardness of its estimation. Using Rigetti's quantum computer, we successfully implement VQLS up to a problem size of $1024\times1024$. Finally, we numerically solve non-trivial problems of size up to $2^{50}\times2^{50}$. For the specific examples that we consider, we heuristically find that the time complexity of VQLS scales efficiently in $\epsilon$, $\kappa$, and the system size $N$.
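As a toy illustration of the cost function and the bound quoted above, the following classical emulation (our own sketch with a product-RY ansatz, not the paper's quantum-circuit estimator; $\epsilon$ is taken here as the trace distance to the exact normalized solution) minimizes $C$ and checks $C \geq \epsilon^2/\kappa^2$ numerically.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 3
A = np.eye(2 ** n) + 0.3 * rng.standard_normal((2 ** n, 2 ** n))
A = (A + A.T) / 2                         # Hermitian example matrix
b = np.ones(2 ** n) / np.sqrt(2 ** n)     # |b> = |+>^n

def ansatz(thetas):
    """Product of single-qubit RY rotations acting on |0...0> (real amplitudes)."""
    state = np.array([1.0])
    for th in thetas:
        state = np.kron(state, [np.cos(th / 2), np.sin(th / 2)])
    return state

def cost(thetas):
    psi = A @ ansatz(thetas)
    return 1.0 - (b @ psi) ** 2 / (psi @ psi)   # C = 1 - |<b|A|x>|^2 / ||A|x>||^2

res = minimize(cost, rng.uniform(0, np.pi, n), method="Nelder-Mead")

x = ansatz(res.x)
x_exact = np.linalg.solve(A, b); x_exact /= np.linalg.norm(x_exact)
eps = np.sqrt(max(0.0, 1.0 - (x @ x_exact) ** 2))   # trace distance, pure states
kappa = np.linalg.cond(A)
print(f"C = {res.fun:.4f} >= eps^2/kappa^2 = {eps**2 / kappa**2:.4f}")
```

The product ansatz generally cannot reach $C = 0$ for an entangled solution, but the inequality between the final cost and $\epsilon^2/\kappa^2$ holds throughout the optimization.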
We propose a new algorithm to synthesise quantum circuits for phase polynomials, which takes into account the qubit connectivity of the quantum computer. We focus on the architectures of currently available NISQ devices. Our algorithm generates circuits with a smaller CNOT depth than the algorithms currently used in Staq and tket, while improving the runtime with respect to the former.
In quantum mechanics, measurement can be used to prepare a quantum state. This principle applies even to macroscopic objects, which may enable us to observe the classical-quantum transition. Here, we demonstrate conditional mechanical squeezing of a mg-scale suspended mirror (i.e., the center-of-mass mode of a pendulum) near quantum regimes, through continuous linear position measurement and quantum state prediction. In the experiment, the pendulum interacts with coherent photon fields in a detuned optical cavity, which creates an optical spring. Furthermore, the detuned cavity allows us to perform linear position measurement by direct photo-detection of the reflected light. We experimentally verify the conditional squeezing using a theory that combines prediction and retrodiction based on causal and anti-causal filters. As a result, the standard deviations of position and momentum are given by 36 times the zero-point amplitude of position $q_{\rm zpf}$ and 89 times the zero-point amplitude of momentum $p_{\rm zpf}$, respectively. Compared to the previous study, the achieved squeezing level is about 5 times closer to the zero-point motion, despite the mass of the mechanical oscillator being approximately 7 orders of magnitude greater. Our demonstration is thus a first step towards quantum control of massive objects whose mass is large enough to measure gravitational interactions. Such quantum control will pave the way to testing quantum mechanics with the center-of-mass mode of massive objects.
We focus on the measurement defined by the decomposition based on Schur-Weyl duality on $n$ qubits. In the first setting, we discuss the asymptotic behavior of the measurement outcome when the state is the permutation mixture $\rho_{mix,n,l}$ of the state $| 1^{l} \, 0^{n-l} \rangle := | 1 \rangle^{\otimes l} \otimes |0\rangle^{\otimes (n-l)}$. In contrast, when the state is the Dicke state $|\Xi_{n,l}\rangle$, the measurement outcome takes one deterministic value. These two cases exhibit completely different behaviors. In the second setting, we study the case when the state is the tensor product of the permutation mixture $\rho_{mix,k,l}$ and the Dicke state $| \Xi_{n-k,m-l} \rangle$. We derive various types of asymptotic distributions, including a kind of central limit theorem, as $n$ goes to infinity.
We use purity, a principle borrowed from the foundations of quantum information, to show that all special symmetric dagger-Frobenius algebras in CPM(fHilb) are canonical, i.e. that they arise by doubling of special symmetric dagger-Frobenius algebras in fHilb. In particular, this applies to all classical structures.
Diagrammatic representations of quantum algorithms and circuits offer novel approaches to their design and analysis. In this work, we describe extensions of the ZX-calculus especially suitable for parameterized quantum circuits, in particular for computing observable expectation values as functions of, or for fixed, parameters, which are important algorithmic quantities in a variety of applications ranging from combinatorial optimization to quantum chemistry. We provide several new ZX-diagram rewrite rules and generalizations for this setting. In particular, we give formal rules for dealing with linear combinations of ZX-diagrams, where the relative complex-valued scale factors of each diagram must be kept track of, in contrast to most previously studied single-diagram realizations where these coefficients can be effectively ignored. This allows us to directly import a number of useful relations from operator analysis to the ZX-calculus setting, including causal cone and quantum gate commutation rules. We demonstrate that the diagrammatic approach offers useful insights into algorithm structure and performance by considering several ansatze from the literature, including realizations of hardware-efficient ansatze and QAOA. We find that, by employing a diagrammatic representation, calculations across different ansatze can become more intuitive and potentially easier to approach systematically than by alternative means. Finally, we outline how diagrammatic approaches may aid in the design and study of new and more effective quantum circuit ansatze.
We give a presentation by generators and relations of the group of Clifford+T operators on two qubits. The proof relies on an application of the Reidemeister-Schreier theorem to an earlier result of Greylyn, and has been formally verified in the proof assistant Agda.
We establish a formal bridge between qubit-based and photonic quantum computing. We do this by defining a functor from the ZX calculus to linear optical circuits. In the process we provide a compositional theory of quantum linear optics which allows one to reason about events involving multiple photons, such as those required to perform linear-optical and fusion-based quantum computing.
Quipper and Proto-Quipper are a family of quantum programming languages that, by their nature as circuit description languages, involve two runtimes: one at which the program generates a circuit and one at which the circuit is executed, normally with probabilistic results due to measurements. Accordingly, the language distinguishes two kinds of data: parameters, which are known at circuit generation time, and states, which are known at circuit execution time. Sometimes, it is desirable for the results of measurements to control the generation of the next part of the circuit. Therefore, the language needs to turn states, such as measurement outcomes, into parameters, an operation we call dynamic lifting. The goal of this paper is to model this interaction between the runtimes by providing a general categorical structure enriched in what we call "bisets". We demonstrate that the biset-enriched structure achieves a proper semantics of the two runtimes and their interaction, by showing that it models a variant of Proto-Quipper with dynamic lifting. The present paper deals with the concrete categorical semantics of this language, whereas a companion paper deals with the syntax, type system, operational semantics, and abstract categorical semantics.
Phase gadgets have proved to be an indispensable tool for reasoning about ZX-diagrams, being used in the optimisation and simulation of quantum circuits and in the theory of measurement-based quantum computation. In this paper we study phase gadgets for qutrits. We present the flexsymmetric variant of the original qutrit ZX-calculus, which allows for rewriting that is closer in spirit to the original (qubit) ZX-calculus. In this calculus phase gadgets look as you would expect, but there are non-trivial differences in their properties. We devise new qutrit-specific tricks to extend the graphical Fourier theory of qubits, resulting in a translation between the 'additive' phase gadgets and a 'multiplicative' counterpart we dub phase multipliers. This enables us to generalise the qubit notion of multiple-control to qutrits in two ways. The first type controls on a single tritstring, while the second type applies the gate a number of times equal to the tritwise multiplication modulo 3 of the control qutrits. We show how both types of control can be implemented for any qutrit Z or X phase gate, ancilla-free, using only Clifford and phase gates. The first requires a polynomial number of gates and exponentially small phases, while the second requires an exponential number of gates but constant-sized phases. This is interesting because such a construction is not possible in the qubit setting. As an application of these results we find a construction for emulating arbitrary qubit diagonal unitaries, and specifically find an ancilla-free emulation of the qubit CCZ gate that requires only three single-qutrit non-Clifford gates, provably fewer than the four T gates needed for qubits with ancilla.
Path sums are a convenient symbolic formalism for quantum operations with applications to the simulation, optimization, and verification of quantum protocols. Unlike quantum circuits, path sums are not limited to unitary operations, but can express arbitrary linear ones. Two problems, therefore, naturally arise in the study of path sums: the unitarity problem and the extraction problem. The former is the problem of deciding whether a given path sum represents a unitary operator. The latter is the problem of constructing a quantum circuit, given a path sum promised to represent a unitary operator. In this paper, we show that the unitarity problem is co-NP-hard in general, but that it is in P when restricted to Clifford path sums. We then provide an algorithm to synthesize a Clifford circuit from a unitary Clifford path sum. The circuits produced by our extraction algorithm are of the form C1-H-C2, where C1 and C2 are Hadamard-free circuits and H is a layer of Hadamard gates. We also provide a heuristic generalization of our extraction algorithm to arbitrary path sums. While this algorithm is not guaranteed to succeed, it often succeeds and typically produces natural looking circuits. Alongside applications to the optimization and decompilation of quantum circuits, we demonstrate the capability of our algorithm by synthesizing the standard quantum Fourier transform directly from a path sum.
Many quantum computers have constraints regarding which two-qubit operations are locally allowed. To run a quantum circuit under those constraints, qubits need to be mapped to different quantum registers, and multi-qubit gates need to be routed accordingly. Recent developments have shown that compiling strategies based on Steiner trees provide a competitive tool for routing CNOTs. However, these algorithms require the qubit map to be decided before routing. Moreover, the qubit map is fixed throughout the computation, i.e. a logical qubit is never moved to a different physical qubit register. This is inefficient with respect to the CNOT count of the resulting circuit. In this paper, we propose the algorithm PermRowCol for routing CNOTs in a quantum circuit. It dynamically remaps logical qubits during the computation, and thus results in fewer output CNOTs than the algorithms Steiner-Gauss and RowCol. Here we focus on circuits over CNOT only, but this method could be generalized to a routing and mapping strategy on Clifford+T circuits by slicing the quantum circuit into subcircuits composed of CNOTs and single-qubit gates. Additionally, PermRowCol can be used in place of Steiner-Gauss in the synthesis of phase polynomials as well as in the extraction of quantum circuits from ZX-diagrams.
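The parity-matrix picture underlying Steiner-Gauss, RowCol, and PermRowCol can be sketched in a few lines (here without the connectivity constraints, which are exactly what the Steiner-tree machinery adds): a CNOT circuit acts on computational basis states as an invertible matrix over GF(2), and synthesis amounts to Gaussian elimination in which each row operation is one CNOT.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5

# random invertible parity matrix over GF(2) (odd integer determinant)
while True:
    M = rng.integers(0, 2, (n, n))
    if round(abs(np.linalg.det(M))) % 2 == 1:
        break

A, cnots = M.copy(), []
for col in range(n):                     # Gauss-Jordan elimination over GF(2)
    if A[col, col] == 0:                 # fix a zero pivot with one row op
        pivot = next(r for r in range(col + 1, n) if A[r, col])
        A[col] ^= A[pivot]; cnots.append((pivot, col))
    for row in range(n):                 # clear the rest of the column
        if row != col and A[row, col]:
            A[row] ^= A[col]; cnots.append((col, row))

# Reducing M to I with ops O_m...O_1 means M = O_1...O_m over GF(2), so the
# circuit applies the recorded CNOTs in reverse temporal order; verify it.
B = np.eye(n, dtype=int)
for c, t in reversed(cnots):             # CNOT(c, t): row_t += row_c (mod 2)
    B[t] ^= B[c]
print(len(cnots), "CNOTs; circuit reproduces M:", np.array_equal(B, M))
```

Connectivity-aware methods restrict the allowed (c, t) pairs to hardware edges, and PermRowCol additionally forgoes restoring the identity permutation, which is where the CNOT savings come from.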
In this paper, we investigate the performance of tunable quantum neural networks in the Quantum Probably Approximately Correct (QPAC) learning framework. Tunable neural networks are quantum circuits made of multi-controlled X gates. By tuning the set of controls, these circuits are able to approximate any Boolean function. This architecture is particularly suited to the QPAC-learning framework, as it can handle the superposition produced by the oracle. In order to tune the network so that it approximates a target concept, we have devised and implemented an algorithm based on amplitude amplification. The numerical results show that this approach can efficiently learn concepts from a simple class.
In the one-way model of measurement-based quantum computation (MBQC), computation proceeds via measurements on some standard resource state. So-called flow conditions ensure that the overall computation is deterministic in a suitable sense, with Pauli flow being the most general of these. Existing work on rewriting MBQC patterns while preserving the existence of flow has focused on rewrites that reduce the number of qubits. In this work, we show that introducing new Z-measured qubits, connected to any subset of the existing qubits, preserves the existence of Pauli flow. Furthermore, we give a unique canonical form for stabilizer ZX-diagrams inspired by recent work of Hu & Khesin. We prove that any MBQC-like stabilizer ZX-diagram with Pauli flow can be rewritten into this canonical form using only rules which preserve the existence of Pauli flow, and that each of these rules can be reversed while also preserving the existence of Pauli flow. Hence we have complete graphical rewriting for MBQC-like stabilizer ZX-diagrams with Pauli flow.
Q# is a standalone domain-specific programming language from Microsoft for writing and running quantum programs. Like most industrial languages, it was designed without a formal specification, which can naturally lead to ambiguity in its interpretation. We aim to provide a formal language definition for Q#, placing the language on a solid mathematical foundation and enabling further evolution of its design and type system. This paper presents $\lambda$-Q#, an idealized version of Q# that illustrates how we may view Q# as a quantum Algol (algorithmic language). We show the safety properties enforced by $\lambda$-Q#'s type system and present its equational semantics based on a fully complete algebraic theory by Staton.
We provide a universal construction of the category of finite-dimensional C*-algebras and completely positive trace-nonincreasing maps from the rig category of finite-dimensional Hilbert spaces and unitaries. This construction, which can be applied to any dagger rig category, is described in three steps, each associated with their own universal property, and draws on results from dilation theory in finite dimension. In this way, we explicitly construct the category that captures hybrid quantum/classical computation with possible nontermination from the category of its reversible foundations. We discuss how this construction can be used in the design and semantics of quantum programming languages.
Ambiguity is a natural language phenomenon occurring at different levels of syntax, semantics, and pragmatics. It is widely studied; in psycholinguistics, for instance, there is a variety of competing studies of human disambiguation processes. These studies are empirical and based on eye-tracking measurements. Here we take first steps towards formalizing these processes for semantic ambiguities, in which we identified the presence of two features: (1) joint plausibility degrees of different possible interpretations, and (2) causal structures according to which certain words play a more substantial role in the processes. The novel sheaf-theoretic model of definite causality developed by Gogioso and Pinzani in QPL 2021 offers tools to model and reason about these features. We applied this theory to a dataset of ambiguous phrases extracted from the psycholinguistics literature, together with human plausibility judgements we collected using the Amazon Mechanical Turk engine. We measured the causal fractions of different disambiguation orders within the phrases and discovered two prominent orders: from subject to verb in subject-verb phrases, and from object to verb in verb-object phrases. We also found evidence of a delay in the disambiguation of polysemous versus homonymous verbs, again compatible with psycholinguistic findings.
We present a topology-aware optimisation technique for circuits of mixed ZX phase gadgets, based on conjugation by CX gates and simulated annealing.
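For readers unfamiliar with the classical backbone, a generic simulated-annealing skeleton of the kind such a pass is built on looks as follows (the cost function and move set here are hypothetical placeholders; the actual technique scores a topology-aware circuit cost and moves by conjugating phase gadgets with CX gates).

```python
import math, random

def anneal(state, cost, random_move, steps=10_000, t0=1.0, t1=1e-3):
    """Generic simulated annealing with a geometric cooling schedule."""
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)          # temperature at step k
        cand = random_move(cur)
        cand_c = cost(cand)
        # accept downhill moves always, uphill moves with prob e^{-dC/T}
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
    return best, best_c

# toy usage: minimise a bumpy one-dimensional function over the integers
state, value = anneal(0, lambda x: (x - 7) ** 2 + 3 * (x % 5),
                      lambda x: x + random.choice([-2, -1, 1, 2]))
print(state, value)
```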
De Finetti theorems tell us that if we expect the likelihood of outcomes to be independent of their order, then these sequences of outcomes could be equivalently generated by drawing an experiment at random from a distribution and repeating it over and over. In particular, the quantum de Finetti theorem says that exchangeable sequences of quantum states are always represented by distributions over a single state produced over and over. The main result of this paper is that this quantum de Finetti construction has a universal property as a categorical limit. This allows us to pass canonically between categorical treatments of finite-dimensional quantum theory and the infinite-dimensional. The treatment here is through understanding the properties of (co)limits with respect to the contravariant functor which takes a C*-algebra describing a physical system to its convex, compact space of states, and through discussion of the Radon probability monad. We also show that the same categorical analysis justifies a continuous de Finetti theorem for classical probability.
Tavis-Cummings (TC) cavity quantum electrodynamical effects, describing the interaction of $N$ atoms with an optical resonator, are at the core of atomic, optical and solid state physics. The full numerical simulation of TC dynamics scales exponentially with the number of atoms. By restricting the open quantum system to a single excitation, typical of experimental realizations in quantum optics, we analytically solve the TC model with an arbitrary number of atoms with linear complexity. This solution allows us to devise the Quantum Mapping Algorithm of Resonator Interaction with $N$ Atoms (Q-MARINA), an intuitive TC mapping to a quantum circuit with linear space and time scaling, whose $N+1$ qubits represent atoms and a lossy cavity, while the dynamics is encoded through $2N$ entangling gates. Finally, we benchmark the robustness of the algorithm on a quantum simulator and superconducting quantum processors against the quantum master equation solution on a classical computer.
It is well known that the future thermal cone -- which is the set of all states thermomajorized by a given initial state -- forms a convex polytope in the quasi-classical realm, and that one can explicitly write down a map which relates the permutations to the extreme points of this polytope. Given any such extreme point we review a formula for a Gibbs-stochastic matrix that maps the initial state to said extremal state, and we uncover the simple underlying structure. This allows us to draw a connection to the theory of transportation polytopes, which leads to the notions of "well-structured" and "stable" Gibbs states. While the former relates to the number of extremal states being maximal, the latter characterizes when thermomajorization is a partial order in the quasi-classical realm; this corresponds to the impossibility of cyclic state transfers. Moreover, we give simple criteria for degeneracy of the polytope, that is, for checking whether the extreme point map maps two different permutations to the same state.
The Pauli measurements (the measurements that can be performed with Clifford operators followed by measurement in the computational basis) are a fundamental object in quantum information. It is well-known that there is no assignment of outcomes to all Pauli measurements that is both complete and consistent. We define two classes of hidden variable assignments based on relaxing either condition. Partial hidden variable assignments retain the consistency condition, but forfeit completeness. Contextual hidden variable assignments retain completeness but forfeit consistency. We use techniques from spectral graph theory to characterize the incompleteness and inconsistency of the respective hidden variable assignments. As an application, we interpret our incompleteness result as a statement of contextuality and our inconsistency result as a statement of nonlocality. Our results show that we can obtain large amounts of contextuality and nonlocality using Clifford gates and measurements.
We show the applicability of the Cartan decomposition of Lie algebras to quantum circuits. This approach can be used to synthesize circuits that efficiently implement any desired unitary operation. Our method finds explicit quantum circuit representations of the algebraic generators of the relevant Lie algebras, allowing the direct implementation of a Cartan decomposition on a quantum computer. The construction is recursive and allows us to expand any circuit down to generators and rotation matrices on individual qubits; through our recursive algorithm we find that the generators themselves can be expressed explicitly with controlled-NOT (CNOT) and SWAP gates. Our approach is independent of the standard CNOT implementation and can be easily adapted to other cross-qubit circuit elements. In addition to its versatility, we also achieve near-optimal counts when working with CNOT gates, with an asymptotic CNOT cost of $\frac{21}{16}4^n$ for $n$ qubits.
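For reference, the textbook Cartan (KAK) structure on which such recursive syntheses are built (a standard statement, not the paper's specific choice of involution or basis) reads
$$\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}, \qquad [\mathfrak{k},\mathfrak{k}] \subseteq \mathfrak{k}, \qquad [\mathfrak{k},\mathfrak{p}] \subseteq \mathfrak{p}, \qquad [\mathfrak{p},\mathfrak{p}] \subseteq \mathfrak{k},$$
so that every group element factorizes as
$$U = K_1 \, e^{a} \, K_2, \qquad K_{1,2} \in e^{\mathfrak{k}}, \qquad a \in \mathfrak{a} \subseteq \mathfrak{p},$$
where $\mathfrak{a}$ is a Cartan subalgebra contained in $\mathfrak{p}$; recursing on the factors $K_1$ and $K_2$ eventually reduces the circuit to single-qubit rotations and the entangling generators.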
We systematically characterize the dynamical evolution of parity-time ($\mathcal{PT}$)-symmetric two-level systems with spin-dependent dissipation. When the energy-gap control parameters are tuned, a section of imaginary spectra terminated by exceptional points (EPs) appears in the regimes where the dissipation term is dominant. If the parameters are tuned linearly in time, the dynamical evolution can be characterized by the parabolic cylinder equations, which can be solved analytically. We find that the asymptotic behavior of the particle probability on the two levels shows an initial-state-independent redistribution in the slow-tuning-speed limit when the system is driven nonadiabatically across EPs. Equal distributions appear when the non-dissipative Hamiltonian shows gap closing. As long as the non-dissipative Hamiltonian displays level anti-crossing, the final distribution becomes unbalanced. The ratios between the occupation probabilities are given analytically. These results are confirmed by numerical simulations. The predicted equal-distribution phenomenon may be employed to distinguish gap closing from anti-crossing between two energy bands.
As qubit-based platforms face near-term technical challenges in terms of scalability, qudits, $d$-level quantum bases of information, are being implemented on multiple platforms as an alternative for Quantum Information Processing (QIP). It is, therefore, crucial to study their efficiency for QIP compared to more traditional qubit platforms, especially since each additional quantum level represents an additional source of environmental coupling. We present a comparative study of the infidelity scalings of a qudit system and an $n$-qubit system with identical Hilbert-space dimensions and noisy environments. The first-order response of the Average Gate Infidelity (AGI) to the noise in the Lindblad formalism, which was found to be gate-independent, was calculated analytically for the two systems being compared. This yielded a critical curve $(d^2-1)/(3\log_2(d))$ for the ratio of their respective figures of merit, defined as the gate time in units of the decoherence time. This quantity indicates how time-efficient operations on these systems are relative to decoherence timescales, and the critical curve is especially useful for precisely benchmarking qudit platforms with smaller values of $d$. The curve delineates regions where each system has a higher rate of increase of the AGI than the other. This condition on gate efficiency was applied to different existing platforms. Specific qudit platforms were found to possess gate efficiencies competitive with state-of-the-art qubit platforms. Numerical simulations complemented this work and allowed for a discussion of the applicability and limits of the linear-response formalism.
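Evaluating the quoted critical curve is immediate; for example (a direct evaluation of the formula above, with the $d$ values chosen arbitrarily):

```python
import numpy as np

# Critical ratio of figures of merit (gate time over decoherence time) below
# which a single qudit outpaces an n-qubit register of equal dimension.
for d in (2, 3, 4, 5, 8):
    print(f"d = {d}: critical ratio = {(d**2 - 1) / (3 * np.log2(d)):.3f}")
```

The curve grows quickly with $d$, which is why the benchmark is most discriminating for small qudit dimensions.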
Future quantum technologies such as quantum communication, quantum sensing, and distributed quantum computation will rely on networks of shared entanglement between spatially separated nodes. In this work, we provide improved protocols/policies for entanglement distribution along a linear chain of nodes, both homogeneous and inhomogeneous, that take into account practical limitations such as photon losses, non-ideal measurements, and quantum memories with short coherence times. For a wide range of parameters, our policies improve upon previously known policies, such as the "swap-as-soon-as-possible" policy, with respect to both the waiting time and the fidelity of the end-to-end entanglement. This improvement is greatest for the most practically relevant cases, namely, for short coherence times, high link losses, and highly asymmetric links. To obtain our results, we model entanglement distribution using a Markov decision process, and then we use the Q-learning reinforcement learning (RL) algorithm to discover new policies. These new policies are characterized by dynamic, state-dependent memory cutoffs and collaboration between the nodes. In particular, we quantify this collaboration between the nodes: our quantifiers tell us how much "global" knowledge of the network every node has. Finally, our understanding of the performance of large quantum networks is currently limited by the computational inefficiency of simulating them using RL or other optimization methods. Thus, we also present a method for nesting policies in order to obtain policies for large repeater chains. By nesting our RL-based policies for small repeater chains, we obtain policies for large repeater chains that improve upon the swap-as-soon-as-possible policy, paving the way for a scalable method of obtaining policies for long-distance entanglement distribution.
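A minimal tabular Q-learning loop of the kind used to discover such policies (the toy environment below, with its arrival probability and fidelity decay, is a hypothetical stand-in for the paper's repeater-chain Markov decision process): states are the age of a stored entangled link, and actions are "wait" or "discard and retry".

```python
import numpy as np

rng = np.random.default_rng(0)
n_ages, actions = 6, 2            # age 0..5; action 0 = wait, 1 = discard
Q = np.zeros((n_ages + 1, actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(age, action):
    """Toy dynamics: waiting may pay off if a second link arrives before the
    memory decoheres; the payoff decays with the stored link's age."""
    if action == 1 or age >= n_ages:          # discard and retry
        return 0, -0.1                        # small cost for restarting
    if rng.random() < 0.3:                    # second link arrives: reward
        return 0, 1.0 - 0.25 * age            # fidelity decays with memory age
    return age + 1, 0.0                       # keep waiting, link ages

age = 0
for _ in range(50_000):
    a = rng.integers(actions) if rng.random() < eps else int(np.argmax(Q[age]))
    nxt, r = step(age, a)
    Q[age, a] += alpha * (r + gamma * Q[nxt].max() - Q[age, a])  # Bellman update
    age = nxt

cutoff = next((s for s in range(n_ages) if Q[s].argmax() == 1), n_ages)
print("learned memory cutoff (discard at age):", cutoff)
```

The learned state-dependent cutoff is the single-link analogue of the dynamic memory cutoffs the RL-based repeater policies discover.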
Large-scale quantum computers will inevitably need quantum error correction (QEC) to protect information against decoherence. Given that the overhead of such error correction is often formidable, autonomous quantum error correction (AQEC) proposals offer a promising near-term alternative. AQEC schemes work by transforming error states into excitations that can be efficiently removed through engineered dissipation. We propose a new AQEC scheme, called the Star code, which can autonomously correct or suppress all single-qubit error channels using two transmons as encoders with a tunable coupler and two lossy resonators as a cooling source. We demonstrate theoretically and numerically a quadratic improvement in the lifetime of the logical states for realistic parameters. The Star code requires only two-photon interactions and can be realized with linear coupling elements, avoiding the higher-order drive or dissipation terms that are difficult to implement in many other AQEC proposals. The Star code can be adapted to other planar superconducting circuits, offering a scalable alternative to single qubits for incorporation in larger quantum computers or error-correction codes.
Quantum Error Mitigation (QEM) enables the extraction of high-quality results from presently available noisy quantum computers. In this approach, the effect of the noise on observables of interest is mitigated using multiple measurements without additional hardware overhead. Unfortunately, current QEM techniques are limited to weak noise or lack scalability. In this work, we introduce a QEM method termed `Adaptive KIK' that adapts to the noise level of the target device and can therefore handle moderate-to-strong noise. The method is experimentally simple to implement -- it involves no tomographic information or machine-learning stage, and the number of different quantum circuits to be implemented is independent of the size of the system. Furthermore, we show that it can be successfully integrated with randomized compiling to handle both incoherent and coherent noise. Our method handles spatially correlated and time-dependent noise, which makes it possible to accumulate shots over days or more even though the noise and calibrations drift in time. Finally, we discuss and demonstrate why our results suggest that gate-calibration protocols should be revised when using QEM. We demonstrate our findings on IBM quantum computers and through numerical simulations.
We explain how to apply a Gaussian-preserving operator to a fermionic Gaussian state. We use this method to study the evolution of the entanglement entropy of an Ising spin chain following a Lindblad dynamics with string measurement operators, focusing on the quantum-jump unraveling of this Lindbladian. We find that the asymptotic entanglement entropy obeys an area law for finite-range string operators and a volume law for string ranges that scale with the system size. The same behavior is observed for the measurement-only dynamics, suggesting that measurements can play a leading role in this context.
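The underlying Gaussian toolbox is standard; a minimal sketch of computing entanglement entropy from the correlation matrix of a free-fermion chain (a ground-state example only; the paper's string-measurement dynamics is not reproduced here):

```python
import numpy as np

# Entanglement entropy of a fermionic Gaussian state from its correlation
# matrix C_ij = <c_i^dag c_j> (standard free-fermion technique).
L = 100
H = -(np.eye(L, k=1) + np.eye(L, k=-1))   # tight-binding chain
e, U = np.linalg.eigh(H)
occ = U[:, e < 0]                          # fill negative-energy modes
C = occ @ occ.conj().T                     # ground-state correlation matrix

def entropy(C_A):
    nu = np.linalg.eigvalsh(C_A)
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

for ell in [10, 25, 50]:                   # subsystem sizes
    print(ell, entropy(C[:ell, :ell]))
```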
Preparing ground states and thermal states is essential for simulating quantum systems on quantum computers. Despite the hope for practical quantum advantage in quantum simulation, popular state preparation approaches have been challenged. Monte Carlo-style quantum Gibbs samplers have emerged as an alternative, but prior proposals have been unsatisfactory due to technical obstacles rooted in energy-time uncertainty. We introduce simple continuous-time quantum Gibbs samplers that overcome these obstacles by efficiently simulating Nature-inspired quantum master equations (Lindbladians). In addition, we construct the first provably accurate and efficient algorithm for preparing certain purified Gibbs states (called thermal field double states in high-energy physics) of rapidly thermalizing systems; this algorithm also benefits from a quantum walk speedup. Our algorithms' costs have a provable dependence on temperature, accuracy, and the mixing time (or spectral gap) of the relevant Lindbladian. We complete the first rigorous proof of finite-time thermalization for physically derived Lindbladians by developing a general analytic framework for nonasymptotic secular approximation and approximate detailed balance. Given the success of classical Markov chain Monte Carlo (MCMC) algorithms and the ubiquity of thermodynamics, we anticipate that quantum Gibbs sampling will become indispensable in quantum computing.
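The fixed-point property these samplers are built around can be checked in the smallest possible example; a sketch verifying that a thermal (Davies-type) qubit Lindbladian converges to the Gibbs state (illustrative rates, not the paper's algorithm):

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit Lindbladian with thermal rates; its unique fixed point is the
# Gibbs state exp(-beta*H)/Z, the property quantum Gibbs samplers generalize.
w, beta, g = 1.0, 2.0, 0.5
nbar = 1.0 / (np.exp(beta * w) - 1.0)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus (lowers energy)
sp = sm.conj().T
H = 0.5 * w * np.diag([1.0, -1.0]).astype(complex)

def dissipator(L_op):
    # superoperator for D[L](rho) in the column-stacking (vec) convention
    I = np.eye(2)
    LdL = L_op.conj().T @ L_op
    return (np.kron(L_op.conj(), L_op)
            - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I))

Lsup = -1j * (np.kron(np.eye(2), H) - np.kron(H.T, np.eye(2)))
Lsup += g * (nbar + 1) * dissipator(sm) + g * nbar * dissipator(sp)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)        # start in the excited state
rho_t = (expm(Lsup * 50.0) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
gibbs = expm(-beta * H); gibbs /= np.trace(gibbs)
print("steady state:\n", np.round(rho_t.real, 4))
print("Gibbs state:\n", np.round(gibbs.real, 4))
```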
Thin-film nanostructures with embedded M\"ossbauer nuclei have been successfully used for x-ray quantum-optical applications with hard x-rays coupled in grazing incidence. Here we address theoretically a new geometry, in which hard x-rays are coupled in forward incidence (front coupling), setting the stage for waveguide QED with nuclear x-ray resonances. We develop a general model, based on the Green's function formalism, of the field-nucleus interaction in one-dimensional waveguides, and show that it combines aspects of both nuclear forward scattering, visible as dynamical beating in the spatio-temporal response, and the resonance structure known from grazing incidence, visible in the spectrum of the guided modes. The interference of multiple modes is shown to play an important role, resulting in beats with wavelengths on the order of tens of microns, on the scale of practical photolithography. This allows for the design of special sample geometries to explore the resonant response of micro-striped waveguides, opening a new toolbox of geometrical design for hard x-ray quantum optics.
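The quoted beat scale follows from two-mode interference, with beat length $2\pi/|\beta_1-\beta_2|$; a back-of-the-envelope sketch (the 14.4 keV energy and mrad-scale mode angles are assumptions for illustration, not the paper's actual modes):

```python
import numpy as np

# Interference of two guided modes with propagation constants beta_n gives
# spatial beats of period 2*pi / |beta_1 - beta_2|.
lam = 1.2398e-9 / 14.4                    # wavelength in m at E = 14.4 keV (Fe-57)
k0 = 2 * np.pi / lam
theta1, theta2 = 1.5e-3, 2.5e-3           # internal mode angles (rad), illustrative
beta1 = k0 * (1 - theta1**2 / 2)          # paraxial propagation constants
beta2 = k0 * (1 - theta2**2 / 2)
print("beat length = %.1f microns" % (2 * np.pi / abs(beta1 - beta2) * 1e6))
```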
Quantum algorithms are attracting intense interest due to their potential to significantly outperform classical algorithms. Yet applying quantum algorithms to optimization problems meets challenges related to the efficiency of training, the shape of the cost landscape, the accuracy of the output, and the ability to scale to large problem sizes. Here, we present an approximate gradient-based quantum algorithm for hardware-efficient circuits with amplitude encoding. We show how simple linear constraints can be incorporated directly into the circuit without modifying the objective function with penalty terms. We test the algorithm numerically on MaxCut problems on complete weighted graphs with thousands of nodes and run it on a superconducting quantum processor. We find that, for unconstrained MaxCut problems with more than 1000 nodes, a hybrid approach combining our algorithm with the classical solver CPLEX can find a better solution than CPLEX alone. This demonstrates that hybrid optimization is one of the leading use cases for modern quantum devices.
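The amplitude-encoding idea can be caricatured classically; a sketch that rounds the signs of an amplitude-encoded vector to a cut and descends the smooth surrogate $x^T W x$ (illustrative only; the paper's circuit, constraint handling, and gradient scheme are more elaborate):

```python
import numpy as np

# MaxCut via "soft spins" stored as normalized amplitudes: minimizing the
# quadratic form x^T W x favors large cuts, since
# cut(s) = (sum of weights - s^T W s) / 4 for s in {-1,+1}^N.
rng = np.random.default_rng(0)
N = 16
W = rng.random((N, N)); W = np.triu(W, 1); W = W + W.T   # weighted complete graph

def cut_value(s):
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))

x = rng.normal(size=N); x /= np.linalg.norm(x)           # amplitude-encoded trial state
print("cut from rounded amplitudes:", cut_value(np.sign(x)))

for _ in range(200):                                     # projected gradient descent
    x -= 0.05 * (2 * W @ x)
    x /= np.linalg.norm(x)
print("cut after gradient descent:", cut_value(np.sign(x)))
```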
We introduce a classical algorithm for sampling the output of shallow, noisy random circuits on two-dimensional qubit arrays. The algorithm builds on the recently proposed "space-evolving block decimation" (SEBD) and extends it to the case of noisy circuits. SEBD is based on a mapping of 2D unitary circuits to 1D {\it monitored} ones, which feature measurements alongside unitary gates; it exploits the presence of a measurement-induced entanglement phase transition to achieve efficient (approximate) sampling below a finite critical depth $T_c$. Our noisy-SEBD algorithm unravels the action of noise into measurements, further lowering entanglement and enabling efficient classical sampling up to larger circuit depths. We analyze a class of physically relevant noise models (unital qubit channels) within a two-replica statistical-mechanics treatment, finding weak measurements to be the optimal (i.e., most disentangling) unraveling. We then locate the noisy-SEBD complexity transition as a function of circuit depth and noise strength in realistic circuit models. As an illustrative example, we show that circuits on heavy-hexagon qubit arrays with noise rates of $\approx 2\%$ per CNOT, as on IBM Quantum processors, can be efficiently sampled up to a depth of 5 iSWAP (or 10 CNOT) gate layers. Our results help sharpen the requirements for practical hardness of simulation of noisy hardware.
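The noise-to-measurement unraveling at the heart of noisy-SEBD can be seen in one qubit; a sketch checking that weak $Z$ measurements, averaged over outcomes, reproduce a dephasing channel (a single-qubit illustration only):

```python
import numpy as np

# Kraus pair M_pm = (cos(th) I ± sin(th) Z)/sqrt(2) is a weak Z measurement:
# sum M^dag M = I, and averaging the outcomes gives dephasing with p = sin^2(th).
I2, Z = np.eye(2), np.diag([1.0, -1.0])
p = 0.02                                         # dephasing probability per gate
th = np.arcsin(np.sqrt(p))
Mp = (np.cos(th) * I2 + np.sin(th) * Z) / np.sqrt(2)
Mm = (np.cos(th) * I2 - np.sin(th) * Z) / np.sqrt(2)
assert np.allclose(Mp.conj().T @ Mp + Mm.conj().T @ Mm, I2)   # POVM completeness

rho = np.array([[0.5, 0.5], [0.5, 0.5]])         # |+><+|, maximally coherent
avg = Mp @ rho @ Mp.conj().T + Mm @ rho @ Mm.conj().T
deph = (1 - p) * rho + p * Z @ rho @ Z           # the dephasing channel
assert np.allclose(avg, deph)
print("weak-measurement unraveling reproduces the dephasing channel")
```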
We investigate the Casimir-Lifshitz force (CLF) between two identical graphene strip gratings, laid on finite dielectric substrates, using the scattering-matrix (S-matrix) approach derived from the Fourier Modal Method with Local Basis Functions (FMM-LBF). We fully take into account the high-order electromagnetic diffraction, the multiple scattering, and the exact 2D features of the graphene strips. We show that the non-additivity, one of the most interesting features of the CLF in general, is significant and can be modulated in situ by varying the graphene chemical potential, without any change in the material geometry. We discuss the nature of the geometrical effects and show the relevance of the geometric parameter d/D (the ratio between separation and grating period), which allows one to identify the parameter regions where the additive result is fully acceptable and those where the full calculation is needed. This study can open the way to deeper experimental exploration of the non-additive features of the CLF with micro- or nano-electromechanical graphene-based systems.
The theory of entropic gravity conjectures that gravity emerges thermodynamically rather than being a fundamental force. One of the main criticisms of entropic gravity is that it would lead to quantum massive particles losing coherence in free fall, which is not observed experimentally. This criticism was refuted in [Phys. Rev. Res. 3, 033065 (2021)], where a nonrelativistic master equation modeling gravity as an open-quantum-system interaction demonstrated that, in the strong-coupling limit, coherence is maintained and conventional free-fall dynamics is reproduced. Moreover, that master equation was shown to be fully compatible with the qBounce experiment for ultracold neutrons. Motivated by this, we extend these results to gravitationally accelerating Dirac fermions. We do so by using the Dirac equation in Rindler space and modeling entropic gravity as a thermal bath, thus again adopting the open-quantum-systems approach. We demonstrate that in the strong-coupling limit our entropic-gravity model maintains quantum coherence for Dirac fermions, and that spin is not affected by entropic gravity. Using the Foldy-Wouthuysen transformation, we show that the model reduces to the nonrelativistic master equation, supporting the entropic-gravity hypothesis for Dirac fermions. Finally, we show how antigravity seemingly arises from the Dirac equation for free-falling antiparticles, but use numerical simulations to demonstrate that this phenomenon originates from zitterbewegung and thus does not violate the equivalence principle.
Consequences of enforcing permutational symmetry, as required by the Pauli principle (spin-statistics theorem), on the state space of molecular ensembles interacting with the quantized radiation mode of a cavity are discussed. The Pauli-allowed collective states are obtained by means of group theory, i.e., by projecting the state space onto the appropriate irreducible representations of the permutation group of the indistinguishable molecules. It is shown that with an increasing number of molecules the ratio of Pauli-allowed collective states decreases very rapidly. Bosonic states are more abundant than fermionic states, and the brightness of the Pauli-allowed state space (the contribution from photon-excited states) increases (decreases) with increasing fine structure in the energy levels of the material ground (excited) state manifold. Numerical results are shown for the realistic example of rovibrating H$_2$O molecules interacting with an infrared (IR) cavity mode.
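The rapid decrease of the Pauli-allowed fraction is already visible from counting the fully symmetric (bosonic) sector; a sketch (the number of internal levels $m$ is an arbitrary illustrative choice):

```python
from math import comb

# For N identical molecules with m internal levels each, the totally
# symmetric subspace of (C^m)^(tensor N) has dimension C(N+m-1, m-1),
# out of m^N states in total; the allowed fraction shrinks fast with N.
m = 4
for N in [2, 4, 8, 16, 32]:
    print(N, comb(N + m - 1, m - 1) / m**N)
```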
Choosing an optimal time step $\delta t$ is crucial for efficient Hamiltonian simulation based on Trotterization but difficult due to the complex structure of the Trotter error. Here we develop a method that measures the Trotter error by combining the second- and fourth-order Trotterizations rather than relying on mathematical error bounds. Implementing this method, we construct an algorithm, which we name Trotter24, for adaptively choosing nearly the largest step size $\delta t$, which keeps quantum circuits shallowest, within an error tolerance $\epsilon$ preset for our purpose. Trotter24 applies to generic Hamiltonians, including time-dependent ones, and can be generalized to any order of Trotterization. Benchmarking it on a quantum spin chain, we find the adaptively chosen $\delta t$ to be about ten times larger than that inferred from known upper bounds on Trotter errors. Trotter24 thus keeps the quantum circuit shallower within the error tolerance, in exchange for the cost of additional measurements.
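The core idea, measuring the Trotter error by comparing second- and fourth-order steps, fits in a few lines; a classical sketch on a small transverse-field Ising chain (illustrative model and tolerance; the paper's measurement-based estimator on hardware is not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# Compare Strang (2nd order) and Suzuki (4th order) steps to estimate the
# local Trotter error, then shrink dt until it meets a preset tolerance.
L, J, h = 6, 1.0, 0.8
X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def op(o, i):                        # operator o on site i of an L-site chain
    m = np.eye(1)
    for j in range(L):
        m = np.kron(m, o if j == i else np.eye(2))
    return m

A = -J * sum(op(Z, i) @ op(Z, i + 1) for i in range(L - 1))   # ZZ part
B = -h * sum(op(X, i) for i in range(L))                      # X part

def S2(dt):                          # Strang splitting, error O(dt^3) per step
    return expm(-0.5j * A * dt) @ expm(-1j * B * dt) @ expm(-0.5j * A * dt)

def S4(dt):                          # Suzuki 4th-order composition of S2
    p = 1.0 / (4 - 4 ** (1 / 3))
    return S2(p*dt) @ S2(p*dt) @ S2((1 - 4*p)*dt) @ S2(p*dt) @ S2(p*dt)

eps, dt = 1e-4, 0.2                  # tolerance per step; initial guess
err = np.linalg.norm(S2(dt) - S4(dt), 2)
while err > eps:                     # shrink dt until the measured error fits
    dt *= 0.8
    err = np.linalg.norm(S2(dt) - S4(dt), 2)
print("adapted dt = %.4f  (measured step error %.2e)" % (dt, err))
```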
Quantum metrology is recognized for its capability to offer high-precision estimation by utilizing quantum resources such as entanglement. Here, we propose a generalized Tavis-Cummings model with an added $XY$ spin interaction to explore the impact of many-body effects on the estimation precision, quantified by the quantum Fisher information (QFI). By deriving an effective description of our model, we establish a closed-form relation between the QFI and the spin fluctuation induced by the $XY$ interaction. Based on this exact relation, we emphasize the indispensable role of the spin anisotropy in achieving Heisenberg-scaling precision for estimating a weak magnetic field, and we observe that the estimation precision can be enhanced by increasing the strength of the spin anisotropy. We also reveal a clear scaling transition of the QFI in the Tavis-Cummings model when the coupling reduces to an Ising interaction. Our results enrich metrology theory with many-body effects and present an alternative approach to improving the estimation precision by harnessing many-body quantum phases.
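The QFI itself can be evaluated numerically for any pure probe state; a generic sketch contrasting Heisenberg and standard scaling under collective phase encoding (a textbook check, not the paper's Tavis-Cummings model):

```python
import numpy as np

# Pure-state QFI: F_Q = 4( <d_psi|d_psi> - |<psi|d_psi>|^2 ), derivative by
# finite differences. GHZ vs product probe under exp(-i*theta*Jz) shows
# N^2 (Heisenberg) vs N (standard) scaling.
def qfi(state, theta, d=1e-6):
    dpsi = (state(theta + d) - state(theta - d)) / (2 * d)
    psi = state(theta)
    return float(4 * ((dpsi.conj() @ dpsi).real - abs(psi.conj() @ dpsi) ** 2))

N = 6
dim = 2 ** N
Jz = np.array([sum(1 if (k >> i) & 1 else -1 for i in range(N)) / 2
               for k in range(dim)])              # collective Jz eigenvalues

def ghz(theta):
    psi = np.zeros(dim, complex); psi[0] = psi[-1] = 1 / np.sqrt(2)
    return np.exp(-1j * theta * Jz) * psi

def product_x(theta):                             # all spins along +x
    return np.exp(-1j * theta * Jz) * np.full(dim, 1 / np.sqrt(dim), complex)

print("GHZ     QFI =", round(qfi(ghz, 0.3), 3), " (expect N^2 =", N**2, ")")
print("product QFI =", round(qfi(product_x, 0.3), 3), " (expect N =", N, ")")
```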
Quantum networks will be able to service consumers with long-distance entanglement by means of repeater nodes that can both generate external Bell pairs with their neighbors, i.i.d. with probability $p$, and perform internal Bell-state measurements (BSMs), which succeed with some probability $q$. The actual values of these probabilities depend on the experimental parameters of the network in question. While global link-state knowledge is needed to maximize the rate of entanglement generation between any two consumers, this may be an unreasonable demand given the dynamic nature of the network. This work evaluates a local-link-state-knowledge, multi-path routing protocol that works with time-multiplexed repeaters able to perform BSMs across different time steps. We show that the average rate increases with the time-multiplexing block length $k$, although the initial latency also increases. When a step-function memory-decoherence model is introduced, such that qubits are held in quantum memory for a time exponentially distributed with mean $\mu$, an optimal block length $k_\text{opt}$ appears; $k_\text{opt}$ increases as $p$ decreases or $\mu$ increases. This value balances the benefits of time multiplexing against the increased risk of losing a previously established entangled pair.
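The emergence of an optimal block length can be seen in a toy Monte Carlo of a single repeater; a sketch (the block-wise BSM timing, survival model, and all parameters are simplified assumptions, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy time-multiplexed repeater: within a block of k slots each of the two
# links retries until success (prob. p per slot); stored halves survive an
# Exp(mu)-distributed lifetime; a BSM (prob. q) is applied at block end.
def rate(k, p=0.3, q=0.9, mu=8.0, blocks=20000):
    succ = 0
    for _ in range(blocks):
        slots = rng.geometric(p, size=2)        # success slot of each link (1, 2, ...)
        if slots.max() > k:                     # a link never came up in this block
            continue
        waits = k - slots                       # storage time until the BSM
        alive = rng.exponential(mu, size=2) > waits
        if alive.all() and rng.random() < q:
            succ += 1
    return succ / (blocks * k)                  # end-to-end pairs per time slot

for k in [1, 2, 4, 8, 16, 32]:                  # a maximum appears at some k_opt
    print(k, round(rate(k), 4))
```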
Single-photon transitions are one of the key technologies for designing and operating very-long-baseline atom interferometers tailored for terrestrial gravitational-wave and dark-matter detection. Since such setups aim at the detection of relativistic and beyond-Standard-Model physics, the analysis of interferometric phases as well as of atomic diffraction must be performed to this precision and must include these effects; in contrast, most treatments to date have focused on idealized diffraction. Here, we study single-photon transitions, both magnetically induced and direct, in gravity and in Standard-Model extensions modeling dark matter as well as violations of the Einstein equivalence principle. We take into account relativistic effects such as the coupling of internal to center-of-mass degrees of freedom, induced by the mass defect, as well as the gravitational redshift of the diffracting light pulse. To this end, we also include the chirping of the light pulse required in terrestrial setups, together with the associated modification of the momentum transfer for single-photon transitions.
Electromagnetically induced transparency (EIT) and Autler-Townes splitting (ATS) are generally characterized and distinguished by the width of the transparency window created in the absorption profile of a weak probe in the presence of a strong control field. This often leads to ambiguities, as both phenomena yield similar spectroscopic signatures. An objective test based on the Akaike information criterion (AIC), however, offers a quantitative way to discern the two regimes when applied to the probe absorption profile. The transition value of the control-field strength obtained this way turns out to be higher than the value given by the pole analysis of the corresponding off-diagonal density-matrix element. By contrast, when we apply the test to the ground-state coherence and to the measured coherence quantifier, it yields a distinct transition point around the predicted value even in the presence of noise. Our test accurately captures the transition between the two regimes, indicating that a proper measure of coherence is essential for making such distinctions.
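The AIC test amounts to fitting both line-shape models and comparing penalized likelihoods; a sketch on synthetic data (illustrative model forms and noise level; the paper applies the test to measured profiles and coherences):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# AIC comparison of ATS-like vs EIT-like line shapes on a synthetic
# probe-absorption profile.
x = np.linspace(-6, 6, 400)

def ats(x, a, d, g):            # two shifted Lorentzians (Autler-Townes)
    return a * (g / ((x - d)**2 + g**2) + g / ((x + d)**2 + g**2))

def eit(x, a, b, G, g):         # broad absorption minus a narrow dip (EIT)
    return a * G / (x**2 + G**2) - b * g / (x**2 + g**2)

data = ats(x, 1.0, 2.0, 0.8) + 0.02 * rng.normal(size=x.size)

def aic(model, p0):
    popt, _ = curve_fit(model, x, data, p0=p0, maxfev=20000)
    rss = np.sum((data - model(x, *popt)) ** 2)
    return x.size * np.log(rss / x.size) + 2 * len(popt)   # Gaussian-noise AIC

a_ats, a_eit = aic(ats, [1, 2, 1]), aic(eit, [1, 0.5, 3, 0.5])
rel = np.array([a_ats, a_eit]) - min(a_ats, a_eit)
w = np.exp(-0.5 * rel)
print("AIC weights (ATS, EIT):", np.round(w / w.sum(), 3))
```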
A Coherent Ising Machine (CIM) is a network of optical parametric oscillators that solves combinatorial optimization problems by finding the ground state of an Ising Hamiltonian. In CIMs, realizing the Zeeman term is problematic because the variable amplitudes of the optical-parametric-oscillator pulses representing the spins create a size mismatch between the interaction and Zeeman terms. Three approaches have been proposed so far to address this problem for CIMs: the absolute-mean-amplitude method, the auxiliary-spin method, and the chaotic-amplitude-control (CAC) method. This paper focuses on the efficient implementation of Zeeman terms within the mean-field CIM model, a physics-inspired heuristic solver without quantum noise. The mean-field model is computationally cheaper than more physically accurate models, which makes it suitable for implementation on field-programmable gate arrays (FPGAs) and for large-scale simulations. We first examine the performance of the mean-field CIM model realizing the Zeeman term with the CAC method, comparing it against a more physically accurate model. We then compare the CAC method with the other Zeeman-term realization techniques on both the mean-field model and the more physically accurate model. In both models, the CAC method outperforms the other methods while retaining similar performance.
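The mean-field-plus-CAC structure can be sketched directly; a minimal iteration with the Zeeman term included in the injection field (the equation structure follows the chaotic-amplitude-control literature; the discretization and all parameters are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean-field CIM soft-spin dynamics with a CAC-style error variable e_i that
# keeps |x_i|^2 near a target a, mitigating the amplitude mismatch that
# otherwise distorts the Zeeman (field) term h.
N = 50
J = rng.normal(size=(N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
h = rng.normal(size=N)                       # Zeeman term

x = 0.01 * rng.normal(size=N)                # soft-spin amplitudes
e = np.ones(N)                               # CAC error variables
dt, pump, beta, a = 0.02, 0.2, 0.3, 1.0
for _ in range(4000):
    inj = J @ x + h                          # injection field incl. Zeeman term
    x += dt * ((-1 + pump - x**2) * x + e * inj)
    e += dt * (-beta * (x**2 - a) * e)       # amplitude feedback

s = np.sign(x)
print("Ising energy:", -0.5 * s @ J @ s - h @ s)
```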
Non-Hermitian quasicrystals constitute a unique class of disordered open systems with PT-symmetry breaking, localization, and topological triple phase transitions. In this work, we uncover the effect of quantum correlation on phase transitions and entanglement dynamics in non-Hermitian quasicrystals. Focusing on two interacting bosons in a Bose-Hubbard lattice with quasiperiodically modulated gain and loss, we find that the onsite interaction between the bosons drags the PT and localization transition thresholds toward weaker-disorder regions compared with the noninteracting case. Moreover, the interaction expands the critical point of the triple phase transition of the noninteracting system into a critical phase with mobility edges, whose domain can be flexibly controlled by tuning the interaction strength. Systematic analyses of the spectrum, inverse participation ratio, topological winding number, wave-packet dynamics, and entanglement entropy lead to consistent predictions about the correlation-driven phases and transitions in our system. Our findings pave the way for further studies of the interplay between disorder and interaction in non-Hermitian quantum matter.
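The single-particle limit of such models is easy to scan; a sketch of a lattice with quasiperiodic gain and loss, tracking PT breaking and localization (a one-particle analogue only; the interacting two-boson physics is beyond this sketch):

```python
import numpy as np

# Nearest-neighbor hopping plus quasiperiodic imaginary (gain/loss) potential
# V_n = i*gamma*cos(2*pi*alpha*n). The largest imaginary eigenvalue signals
# PT breaking; the mean inverse participation ratio signals localization.
L = 233
alpha = (np.sqrt(5) - 1) / 2                 # inverse golden ratio
n = np.arange(L)
hop = np.eye(L, k=1) + np.eye(L, k=-1)
for gamma in [0.2, 0.6, 1.0, 1.4]:
    H = hop + 1j * gamma * np.diag(np.cos(2 * np.pi * alpha * n))
    w, v = np.linalg.eig(H)
    prob = np.abs(v) ** 2 / np.sum(np.abs(v) ** 2, axis=0)
    ipr = np.mean(np.sum(prob ** 2, axis=0))
    print(f"gamma={gamma}: max Im(E) = {w.imag.max():.4f}, mean IPR = {ipr:.4f}")
```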
The development of classical ergodic theory has had a significant impact on mathematics, physics, and the applied sciences in general. The quantum ergodic theory of Hamiltonian dynamics is motivated by the foundations of thermodynamics and statistical mechanics and is still much debated. A quantum channel, i.e., a completely positive trace-preserving map, is the most general representation of quantum dynamics and an essential ingredient of quantum information theory and quantum computation. In this work, we study the ergodic theory of quantum channels by characterizing the different levels of the ergodic hierarchy, from integrable to mixing. The channels on a single system are constructed from unitary operators acting on a bipartite state, followed by tracing out the environment. The interaction strength of these unitary operators, measured in terms of operator entanglement, provides sufficient conditions for the channel to be mixing. Using block-diagonal unitary operators, we construct a set of non-ergodic channels. The full hierarchy from integrable to mixing is characterized explicitly in the case of two-qubit unitary operators. Moreover, we study interacting many-body quantum systems, including the famous Sachdev-Ye-Kitaev (SYK) model, and show that they display mixing within the framework of the quantum channel.
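The channel construction in question is compact to code; a sketch building Kraus operators from a Haar-random two-qubit unitary with a fixed environment state and iterating the channel to probe mixing (a generic illustration; the operator-entanglement analysis is not included):

```python
import numpy as np

rng = np.random.default_rng(4)

# Channel on one qubit: K_i = (I (x) <i|) U (I (x) |0>), with U a Haar-random
# two-qubit unitary (QR of a complex Gaussian matrix). Mixing means all
# inputs converge to the same fixed state under iteration.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)
U4 = U.reshape(2, 2, 2, 2)                     # indices: s' e' s e
K = [U4[:, i, :, 0] for i in range(2)]         # environment |0> in, <i| out

def channel(rho):
    return sum(k @ rho @ k.conj().T for k in K)

for rho in [np.diag([1.0, 0.0]).astype(complex),
            np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)]:
    for _ in range(60):
        rho = channel(rho)
    print(np.round(rho, 4))                    # both inputs approach the same state
```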
We study the Quantum Zeno Effect (QZE) on a single qubit on IBM Quantum Experience devices subjected to multiple measurements. We consider two cases: Rabi evolution and free decay. In both cases we observe the occurrence of the QZE as an increase of the survival probability with the number of measurements.
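The ideal-device expectation behind this measurement is the textbook Zeno scaling; a sketch (the noiseless formula only, with no hardware effects):

```python
import numpy as np

# Rabi evolution interrupted by N projective measurements:
# P_survive(N) = cos(Omega*T/(2N))^(2N) -> 1 as N grows.
Omega, T = np.pi, 1.0            # a pulse that would fully flip the qubit (N=1 gives 0)
for N in [1, 2, 4, 8, 16, 32]:
    print(N, np.cos(Omega * T / (2 * N)) ** (2 * N))
```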
We introduce a new quantum decoder based on a variant of the pretty good measurement, defined via an alternative matrix quotient. We use this decoder to derive new lower bounds on the error exponent, in both the one-shot and asymptotic regimes, for the classical-quantum and the entanglement-assisted channel coding problems. Our bounds are expressed in terms of the measured (for the one-shot bounds) and sandwiched (for the asymptotic bounds) channel R\'enyi mutual information of order between 1/2 and 1. They are not comparable with some previously established bounds for general instances, yet they are tight (for rates close to capacity) when the underlying channel is classical.
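For contrast, the baseline pretty good measurement is easy to construct explicitly; a sketch (the standard PGM only; the paper's alternative matrix quotient is not reproduced here):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

rng = np.random.default_rng(5)

# Standard PGM: M_i = rho^{-1/2} (p_i rho_i) rho^{-1/2} with rho = sum p_i rho_i.
# Six random pure states in d = 4 keep rho generically full rank.
def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

d, n = 4, 6
p = np.full(n, 1 / n)
states = [rand_state(d) for _ in range(n)]
rho = sum(pi * s for pi, s in zip(p, states))
r_inv_half = mpow(rho, -0.5)
M = [r_inv_half @ (pi * s) @ r_inv_half for pi, s in zip(p, states)]
assert np.allclose(sum(M), np.eye(d), atol=1e-8)          # POVM completeness

p_succ = sum(pi * np.trace(Mi @ si).real
             for pi, Mi, si in zip(p, M, states))
print("PGM success probability:", round(p_succ, 4))
```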
Frustration in magnetic materials arising from competing exchange interactions can prevent the system from adopting long-range magnetic order and can instead lead to a diverse range of novel quantum and topological states with exotic quasiparticle excitations. Here, we review prominent examples of such emergent phenomena, including magnetically disordered and extensively degenerate spin ices, which feature emergent magnetic-monopole excitations; highly entangled quantum spin liquids with fractional spinon excitations, topological order, and emergent gauge fields; and complex particle-like topological spin textures known as skyrmions. We provide an overview of recent advances in the search for magnetically disordered candidate materials on the three-dimensional pyrochlore lattice and the two-dimensional triangular, kagome, and honeycomb lattices, the latter with bond-dependent Kitaev interactions, as well as on lattices supporting topological magnetism. We highlight experimental signatures of these often elusive phenomena and single out the most suitable experimental techniques that can be used to detect them. Our review also aims at providing a comprehensive guide for designing and investigating novel frustrated magnetic materials, with the potential of addressing some important open questions in contemporary condensed matter physics.
Using quantum algorithms, we obtain, for accuracy $\epsilon>0$ and confidence $1-\delta$, $0<\delta<1$, a new sample complexity upper bound of $O(\log(1/\delta)/\epsilon)$ as $\epsilon,\delta\rightarrow 0$ (up to a polylogarithmic factor in $\epsilon^{-1}$) for a general agnostic learning model, provided the hypothesis class is of finite cardinality. This greatly improves upon the corresponding sample complexity of asymptotic order $\Theta(\log(1/\delta)/\epsilon^{2})$ known to be attainable by classical (non-quantum) algorithms for an agnostic learning problem, likewise with a hypothesis set of finite cardinality (see, for example, Arunachalam and de Wolf (2018) and the classical statistical learning theory references cited there). Thus, for general agnostic learning, the quantum speedup in the rate of learning that we achieve is quadratic in $\epsilon^{-1}$ (up to a polylogarithmic factor).
We offer an argument against simplicity as a sole intrinsic criterion for nomic realism. The argument is based on the simplicity bubble effect. Underdetermination in quantum foundations illustrates the case.
The complete characterization of a generic $d$-dimensional Floquet topological phase is usually hard because it requires information about the micromotion throughout the entire driving period. In a recent work [L. Zhang et al., Phys. Rev. Lett. 125, 183001 (2020)], an experimentally feasible dynamical detection scheme was proposed to characterize integer Floquet topological phases using quantum quenches. However, this theory is still far from complete, especially for free-fermion Floquet topological phases, where the states can also be characterized by $Z_{2}$ invariants. Here we develop the first full and unified dynamical characterization theory for the $Z_{2}$ Floquet topological phases of different dimensionality and tenfold-way symmetry classes. The protocol quenches the system from a trivial, static initial state into the Floquet topological regime by suddenly changing the parameters and turning on the periodic driving. By measuring minimal information about the Floquet bands via stroboscopic time-averaged spin polarizations, we show that the topological spin-texture patterns emerging at certain discrete momenta of the Brillouin zone, namely the highest-order band-inversion surfaces of the $0$ or $\pi$ gap, provide a measurable dynamical $Z_{2}$ Floquet invariant. This invariant uniquely determines the Floquet boundary modes in the corresponding quasienergy gap and characterizes the $Z_{2}$ Floquet topology. We illustrate the application of our theory with one- and two-dimensional models that are accessible in current quantum-simulation experiments. Our work provides a highly feasible way to detect $Z_{2}$ Floquet topology and completes the dynamical characterization of the full tenfold classes of Floquet topological phases, which should advance both theoretical and experimental research.
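The central observable, a time-averaged spin polarization that vanishes on band-inversion surfaces, can be previewed in a static two-band caricature; a sketch (static bands in place of Floquet bands; the model and parameters are illustrative):

```python
import numpy as np

# For h(k) = d(k).sigma and an initial state fully polarized along -z, the
# long-time average of <sigma_i>(k) is -d_i*d_z/|d|^2 (precession about d),
# so it vanishes exactly on band-inversion surfaces where d_z = 0.
m = 0.5                                          # 1D model: d = (sin k, 0, m - cos k)
for k in np.linspace(-np.pi, np.pi, 9):
    d = np.array([np.sin(k), 0.0, m - np.cos(k)])
    avg = -d * d[2] / (d @ d)                    # time-averaged spin texture
    print(f"k={k:+.2f}  <sz>_avg = {avg[2]:+.3f}")   # sign flips across the BIS
```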
Standard cavity cooling of atoms or dielectric particles is based on the action of dispersive optical forces in high-finesse cavities. Here we investigate a complementary regime characterized by large cavity losses, resembling the standard Doppler cooling technique. For a single two-level emitter, a modification of the cooling rate is obtained from the Purcell enhancement of spontaneous emission in the large-cooperativity limit. This mechanism is aimed at the cooling of quantum emitters without closed transitions, as is the case for molecular systems, where the Purcell effect can mitigate the loss of population from the cooling cycle. We extend our analytical formulation to the many-particle case, governed by weak individual coupling but exhibiting strong collective Purcell enhancement to a cavity mode.
We describe a quantum algorithm for finding the smallest eigenvalue of a Hermitian matrix. This algorithm combines Quantum Phase Estimation and Quantum Amplitude Estimation to achieve a quadratic speedup with respect to the best classical algorithm in terms of matrix dimensionality, i.e., $\widetilde{\mathcal{O}}(\sqrt{N}/\epsilon)$ black-box queries to an oracle encoding the matrix, where $N$ is the matrix dimension and $\epsilon$ is the desired precision. In contrast, the best classical algorithm for the same task requires $\Omega(N)\text{polylog}(1/\epsilon)$ queries. In addition, this algorithm allows the user to select any constant success probability. We also provide a similar algorithm with the same runtime that allows us to prepare a quantum state lying mostly in the matrix's low-energy subspace. We implement simulations of both algorithms and demonstrate their application to problems in quantum chemistry and materials science.
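The measurement statistics that such QPE-based minimum-finding relies on can be emulated classically; a sketch (it samples eigenvalues with the Born-rule weights only; no quantum speedup is implied):

```python
import numpy as np

rng = np.random.default_rng(6)

# Emulating the QPE step: measuring the phase register on input |psi> returns
# eigenvalue lambda_j with probability |<v_j|psi>|^2; repeating and keeping
# the minimum converges to lambda_min whenever |psi> overlaps the ground state.
N = 64
A = rng.normal(size=(N, N)); A = (A + A.T) / 2
evals, evecs = np.linalg.eigh(A)

psi = rng.normal(size=N); psi /= np.linalg.norm(psi)
probs = np.abs(evecs.T @ psi) ** 2               # QPE outcome distribution
samples = rng.choice(evals, size=200, p=probs)   # repeated QPE "shots"
print("min sampled:", samples.min(), " true lambda_min:", evals[0])
```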