Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-10 12:30 to 2023-11-14 11:30 | Next meeting is Friday Nov 1st, 11:30 am.
Semi-analytic modeling furnishes an efficient avenue for characterizing the properties of dark matter halos associated with satellites of Milky Way-like systems, as it easily accounts for uncertainties arising from halo-to-halo variance, the orbital disruption of satellites, baryonic feedback, and the stellar-to-halo mass (SMHM) relation. We use the SatGen semi-analytic satellite generator -- which incorporates both empirical models of the galaxy-halo connection in the field and analytic prescriptions for the orbital evolution of these satellites after they enter a host galaxy -- to create large samples of Milky Way-like systems and their satellites. By selecting satellites in the sample that match the observed properties of a particular dwarf galaxy, we can then infer arbitrary properties of the satellite galaxy within the Cold Dark Matter paradigm. For the Milky Way's classical dwarfs, we provide inferred values (with associated uncertainties) for the maximum circular velocity $v_{max}$ and the radius $r_{max}$ at which it occurs, varying over two choices of feedback model and two prescriptions for the SMHM relation that populate dark matter halos with physically distinct galaxies. While simple empirical scaling relations can recover the median inferred value for $v_{max}$ and $r_{max}$, this approach provides realistic correlated uncertainties and aids interpretability through variation of the model. For these different models, we also demonstrate how the internal properties of a satellite's dark matter profile correlate with its orbit, and we show that it is difficult to reproduce observations of the Fornax dwarf without strong baryonic feedback. The technique developed in this work is flexible in its application of observational data and can leverage arbitrary information about the satellite galaxies to make inferences about their dark matter halos and population statistics.
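The selection-based inference described above can be illustrated with a minimal sketch (not the authors' SatGen pipeline): draw a mock catalog of satellites, keep the realizations whose observable matches a given dwarf within its uncertainty, and read off percentiles of $v_{max}$ and $r_{max}$ from the matched subsample. The catalog, scaling relations, and numbers below are placeholders.

```python
import numpy as np

# Hypothetical mock catalog of satellites drawn from semi-analytic realizations
# (columns and scaling relations are illustrative; actual SatGen outputs differ).
rng = np.random.default_rng(0)
n = 200_000
log_mstar = rng.normal(6.5, 1.2, n)                                   # log10 stellar mass [Msun]
log_vmax = 1.2 + 0.22 * (log_mstar - 6.0) + rng.normal(0, 0.08, n)    # log10 v_max [km/s]
log_rmax = 0.1 + 0.9 * (log_vmax - 1.2) + rng.normal(0, 0.1, n)       # log10 r_max [kpc]

# "Observed" dwarf: stellar mass with uncertainty (placeholder numbers).
obs_log_mstar, obs_err = 6.8, 0.15

# Select mock satellites consistent with the observation (within 2 sigma).
match = np.abs(log_mstar - obs_log_mstar) < 2 * obs_err

# Inferred v_max and r_max, with correlated uncertainties, from the matched sample.
for name, vals in [("v_max", 10**log_vmax[match]), ("r_max", 10**log_rmax[match])]:
    lo, med, hi = np.percentile(vals, [16, 50, 84])
    print(f"{name}: {med:.1f} (+{hi - med:.1f}/-{med - lo:.1f})")
```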
We study how structural properties of globular clusters and dwarf galaxies are linked to their orbits in the Milky Way halo. From the inner to the outer halo, orbital energy increases and stellar systems gradually move out of internal equilibrium: in the inner halo, high-surface brightness globular clusters are at pseudo-equilibrium, while further away, low-surface brightness clusters and dwarfs appear more tidally disturbed. Dwarf galaxies are the most recent arrivals into the halo, as indicated by their large orbital energies and pericenters, and have had time for no more than one orbit. Their (gas-rich) progenitors likely lost their gas during their recent arrival in the Galactic halo. If dwarfs are at equilibrium with their dark matter (DM) content, the DM density should anti-correlate with pericenter. However, the transformation of DM-dominated dwarfs from gas-rich rotation-supported into gas-poor dispersion-supported systems is unlikely to be accomplished during a single orbit. We suggest instead that the above anti-correlation is brought about by the combination of ram-pressure stripping and of Galactic tidal shocks. Recent gas removal leads to an expansion of their stellar content caused by the associated gravity loss, making them sufficiently fragile to be transformed near pericenter passage. Out-of-equilibrium dwarfs would explain the observed anti-correlation of kinematics-based DM density with pericenter without invoking DM density itself, questioning its previous estimates. Ram-pressure stripping and tidal shocks may contribute to the dwarf velocity dispersion excess. This scenario also predicts the presence of numerous stars in their outskirts and a few young stars in their cores.
We investigate a cosmological model in which a fraction of the dark matter is atomic dark matter (ADM). This ADM consists of dark versions of the electron and of the proton, interacting with each other and with dark photons just as their light sector versions do, but interacting with everything else only gravitationally. We find constraints given current cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) data, with and without an $H_0$ prior, and with and without enforcing a big bang nucleosynthesis consistent helium abundance. We find that, at low dark photon temperature, one can have consistency with BAO and CMB data, with a fraction of dark matter that is ADM ($f_{\rm adm}$) as large as $\sim 0.1$. Such a large $f_{\rm adm}$ leads to a suppression of density fluctuations today on scales below about 60 Mpc that may be of relevance to the $\sigma_8$ tension. Our work motivates calculation of nonlinear corrections to matter power spectrum predictions in the ADM model. We forecast parameter constraints to come from future ground-based CMB surveys, and find that if ADM is indeed the cause of the $\sigma_8$ tension, the influence of the ADM, primarily on CMB lensing, will likely be detectable at high significance.
Most Milky Way dwarf galaxies are much less bound to their host than are relics of Gaia-Sausage-Enceladus and Sgr. These dwarfs are expected to have fallen into the Galactic halo less than 3 Gyr ago, and will therefore have undergone no more than one full orbit. Here, we have performed hydrodynamical simulations of this process, assuming that their progenitors are gas-rich, rotation-supported dwarfs. We follow their transformation through interactions with the hot corona and gravitational field of the Galaxy. Our dedicated simulations reproduce the structural properties of three dwarf galaxies: Sculptor, Antlia II and, with somewhat lower accuracy, Crater II. This includes reproducing their large velocity dispersions, which are caused by ram-pressure stripping and Galactic tidal shocks. Differences between dwarfs can be interpreted as due to different orbital paths, as well as to different initial conditions for their progenitor gas and stellar contents. However, we were unable to suppress the rotational support of our Sculptor analog within a single orbit if it is fully dark-matter dominated. In addition, we have found that classical dwarf galaxies like Sculptor may have stellar cores sufficiently dense to survive the pericenter passage through adiabatic contraction. By contrast, our Antlia II and Crater II analogs are tidally stripped, explaining their large sizes, extremely low surface brightnesses, and velocity dispersions. This modeling explains the differences between dwarf galaxies by reproducing them as out-of-equilibrium stellar systems at different stages of transformation.
This paper provides an overview of genetic algorithms as a powerful optimization tool for single- and multi-modal functions. We illustrate this technique using analytical examples and then explore how genetic algorithms can be used as a parameter estimation tool in cosmological models to maximize the likelihood function. Finally, we discuss potential future applications of these algorithms in cosmology.
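As a concrete illustration of the technique (not the code used in the paper), the sketch below implements a minimal genetic algorithm with selection, crossover, and mutation to maximize a toy multi-modal log-likelihood; the objective, population size, and mutation scale are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(theta):
    # Toy multi-modal objective standing in for a cosmological likelihood.
    x, y = theta
    return -((x - 1.0)**2 + (y + 2.0)**2) + 2.0 * np.cos(3.0 * x) * np.cos(3.0 * y)

def genetic_maximize(fitness, bounds, pop_size=100, n_gen=200, mut_sigma=0.1, elite_frac=0.2):
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(n_gen):
        fit = np.array([fitness(p) for p in pop])
        order = np.argsort(fit)[::-1]                 # selection: rank by fitness
        elite = pop[order[:n_elite]]
        # Crossover: children mix parameters of two randomly chosen elite parents.
        parents = elite[rng.integers(0, n_elite, size=(pop_size - n_elite, 2))]
        mask = rng.random((pop_size - n_elite, dim)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        # Mutation: small Gaussian perturbations, clipped to the prior box.
        children += rng.normal(0, mut_sigma, children.shape)
        pop = np.clip(np.vstack([elite, children]), lo, hi)
    best = max(pop, key=fitness)
    return best, fitness(best)

best, best_fit = genetic_maximize(log_likelihood, bounds=[(-5, 5), (-5, 5)])
print("best-fit parameters:", best, " log-likelihood:", best_fit)
```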
This paper investigates non-thermal leptogenesis from inflaton decays in the minimal extension of the canonical type-I seesaw model, where a complex singlet scalar $\phi$ is introduced to generate the Majorana masses of right-handed neutrinos (RHNs) and to play the role of inflaton. First, we systematically study non-thermal leptogenesis with the least model dependence. We give a general classification of the parameter space and find four characteristic limits by carefully examining the interplay between inflaton decay into RHNs and the decay of RHNs into standard-model particles. Three of the four limits are truly non-thermal, with a final efficiency larger than that of thermal leptogenesis. Two analytic estimates for these three limits are provided with working conditions to examine the validity. In particular, we find that the {\it strongly non-thermal RHNs} scenario occupies a large parameter space, including the oscillation-preferred $K$ range, and works well for a relatively low reheating temperature $T^{}_{\rm RH} \geq 10^3~{\rm GeV}$, extending the lower bound on the RHN mass to $2\times 10^{7}~{\rm GeV}$. The lepton flavor effects are discussed. Second, we demonstrate that such a unified picture for inflation, neutrino masses, and baryon number asymmetry can be realized by either a Coleman-Weinberg potential (for the real part of $\phi$) or a natural inflation potential (for the imaginary part of $\phi$). The allowed parameter ranges for successful inflation and non-thermal leptogenesis are much more constrained than those without inflationary observations. We find that non-thermal leptogenesis from inflaton decay offers a testable framework for the early Universe, which can be probed further with upcoming cosmological and neutrino data. The model-independent investigation of non-thermal leptogenesis should be useful in exploring this direction.
Theoretical models struggle to reproduce dynamically cold disks with significant rotation-to-dispersion support ($V_{\rm rot}/\sigma$) observed in star-forming galaxies in the early Universe, at redshift $z>4$. We aim to explore the possible emergence of dynamically cold disks in cosmological simulations and to understand if different kinematic tracers can help reconcile the tension between theory and observations. We use 3218 galaxies from the SERRA suite of zoom-in simulations, with $8<\log(M_*/M_{\odot})<10.3$ and SFR$<128\,M_{\odot}\,{\rm yr}^{-1}$, within the range $4<z<9$. We generate hyper-spectral data cubes for 6436 synthetic observations of H$\alpha$ and [CII]. We find that the choice of kinematic tracer strongly influences gas velocity dispersion estimates. When using H$\alpha$ ([CII]) synthetic observations, we observe a strong (mild) correlation between $\sigma$ and $M_*$. Such a difference arises mostly for $M_*>10^9\,M_{\odot}$ galaxies, for which $\sigma_{H\alpha}>2\sigma_{CII}$ for a significant fraction of the sample. Regardless of the tracer, our predictions suggest the existence of massive ($M_*>10^{10}M_{\odot}$) galaxies with $V_{\rm rot}/\sigma>10$ at $z>4$, maintaining cold disks for >10 orbital periods (200 Myr). Furthermore, we do not find any significant redshift dependence for the $V_{\rm rot}/\sigma$ ratio in our sample. Our simulations predict the existence of dynamically cold disks in the early Universe. However, different tracers are sensitive to different kinematic properties. While [CII] effectively traces the thin, gaseous disk of galaxies, H$\alpha$ includes the contribution from ionized gas beyond the disk, characterized by prevalent vertical or radial motions that may be associated with outflows. The presence of H$\alpha$ halos could be a signature of such galactic outflows. This emphasizes the importance of combining ALMA and JWST/NIRSpec studies of high-z galaxies.
The small-scale linear information in galaxy samples typically lost during non-linear growth can be restored to a certain level by density-field reconstruction, which has been demonstrated to improve the precision of baryon acoustic oscillation (BAO) measurements. As proposed in the literature, a joint analysis of the power spectrum before and after the reconstruction enables an efficient extraction of information carried by high-order statistics. However, the statistics of the post-reconstruction density field are difficult to model. In this work, we circumvent this issue by developing an accurate emulator for the pre-reconstructed, post-reconstructed, and cross power spectra ($P_{\rm pre}$, $P_{\rm post}$, $P_{\rm cross}$) up to $k=0.5~h~{\rm Mpc^{-1}}$ based on the \textsc{Dark Quest} N-body simulations. The accuracy of the emulator is at the percent level, namely, the error of the emulated monopole and quadrupole of the power spectra is less than $1\%$ and $5\%$ of the ground truth, respectively. A fit to an example set of power spectra using the emulator shows that the constraints on cosmological parameters are substantially improved using $P_{\rm pre}$+$P_{\rm post}$+$P_{\rm cross}$ with $k_{\rm max}=0.25~h~{\rm Mpc^{-1}}$, compared to those derived from $P_{\rm pre}$ alone, namely, the constraints on ($\Omega_m$, $H_0$, $\sigma_8$) are tightened by $\sim41 \%-55\%$, and the uncertainties of the derived BAO and RSD parameters ($\alpha_{\perp}$, $\alpha_{||}$, $f\sigma_8$) shrink by $\sim 28\%-54\%$, respectively. This highlights the complementarity among $P_{\rm pre}$, $P_{\rm post}$ and $P_{\rm cross}$, and demonstrates the efficiency and practicality of a joint $P_{\rm pre}$, $P_{\rm post}$ and $P_{\rm cross}$ analysis for cosmological inference.
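The emulation idea can be sketched with a toy example (this is not the \textsc{Dark Quest}-based emulator itself): train a Gaussian-process regressor on power spectra computed over a one-parameter family and check its accuracy at an unseen parameter value. The spectra, parameter, and ranges below are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy training set: power spectra P(k) on a fixed k grid, varying one parameter
# (a sigma_8-like amplitude). A real emulator trains on many parameters and on
# simulation measurements of P_pre, P_post and P_cross.
k = np.logspace(-2, np.log10(0.5), 50)                   # h/Mpc
sigma8_train = np.linspace(0.7, 0.9, 15)[:, None]
pk_train = np.array([(s**2) * 1e4 * k / (1 + (k / 0.1)**2.5) for s in sigma8_train.ravel()])

# Emulate log P(k), treating each k bin as one output of a multi-output GP.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                              normalize_y=True)
gp.fit(sigma8_train, np.log(pk_train))

# Predict the spectrum at a new parameter value and check the accuracy.
sigma8_test = np.array([[0.81]])
pk_pred = np.exp(gp.predict(sigma8_test))[0]
pk_true = (0.81**2) * 1e4 * k / (1 + (k / 0.1)**2.5)
print("max fractional error: %.2e" % np.max(np.abs(pk_pred / pk_true - 1)))
```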
Intensity mapping of 21cm emission from neutral hydrogen (HI) promises to be a powerful probe of large-scale structure in the post-reionisation epoch. However, HI intensity mapping (IM) experiments will suffer the loss of long-wavelength line-of-sight HI modes in the galactic foreground subtraction process. The loss of these modes is particularly problematic for detecting HI IM cross-correlations with projected large-scale structure tracers, such as CMB secondary anisotropies. Here, we propose a cross-bispectrum estimator to recover the cross-correlation of the HI IM field, $\delta T_{21},$ with the CMB lensing field, $\kappa,$ constructed by correlating the position-dependent HI power spectrum with the mean overdensity traced by CMB lensing. We study the cross-bispectrum estimator, $B^{\bar \kappa \delta T_{21} \delta T_{21}},$ in the squeezed limit and forecast its detectability based on HI IM measurements from HIRAX and CMB lensing measurements from AdvACT. The cross-bispectrum improves constraints on cosmological parameters; in particular, the constraint on the dark energy equation-of-state parameter, $w_0,$ improves on the HI IM auto-power spectra constraint by 44\% (to 0.014), while the constraint on $w_a$ improves by 33\% (to 0.08), assuming Planck priors in each case. These results are robust to HI IM foreground removal because they largely derive from small-scale HI modes. The HI-HI-$\kappa$ cross-bispectrum thus provides a novel way to recover HI correlations with CMB lensing and constrain cosmological parameters at a level that is competitive with next-generation galaxy redshift surveys. As a striking example of this, we find that the combined constraint on the sum of the neutrino masses, while varying all redshift and standard cosmological parameters within a $w_0w_a\Omega_K$CDM model, is 5.5 meV.
In this proceeding we consider primordial black holes (PBHs) as a dark matter candidate. We discuss the existing limits on the fraction $f_{pbh}$ of the dark matter composed of PBHs as a function of PBH mass. These limits cover almost the entire possible mass range, with the only currently open window at $3\cdot 10^{16}-10^{18}$ g, in which PBHs can make up to 100% of the dark matter content of the universe. We present estimates of the capabilities of the near-future instruments (Einstein Probe/WXT, SVOM/MXT) and discuss the potential of next-generation missions (Athena, THESEUS, eXTP) to probe this mass range. We discuss the targets most suitable for PBH dark matter searches with these missions and the potential limiting effect of systematics on the derived results.
The study of supernova siblings, supernovae with the same host galaxy, is an important avenue for understanding and measuring the properties of Type Ia Supernova (SN Ia) light curves (LCs). Thus far, sibling analyses have mainly focused on optical LC data. Considering that LCs in the near-infrared (NIR) are expected to be better standard candles than those in the optical, we carry out the first analysis compiling SN siblings with only NIR data. We perform an extensive literature search of all SN siblings and find six sets of siblings with published NIR photometry. We calibrate each set of siblings, ensuring they are on homogeneous photometric systems, fit the LCs with the SALT3-NIR and SNooPy models, and find median absolute differences in $\mu$ values between siblings of 0.248 mag and 0.186 mag, respectively. To evaluate the significance of these differences beyond measurement noise, we run simulations that mimic these LCs and provide an estimate for the uncertainty on these median absolute differences of $\sim$0.052 mag, and we find that our analysis supports the existence of intrinsic scatter in the NIR at the 99% level. When comparing the same sets of SN siblings, we observe a median absolute difference in $\mu$ values between siblings of 0.177 mag when using optical data alone as compared to 0.186 mag when using NIR data alone. We attribute this to either limited statistics, poor-quality NIR data, or poor reduction of the NIR data, all of which will be improved with the Nancy Grace Roman Space Telescope.
The large-angular-scale falloff in the autocorrelation function for the cosmic microwave background (CMB) temperature has long intrigued cosmologists and fueled speculation about suppressed superhorizon power. Here we highlight an inconsistency between the temperature quadrupole and the more recently obtained E-mode polarization quadrupole from Planck PR3. The temperature quadrupole arises primarily at the CMB surface of last scatter, while the polarization quadrupole arises primarily from the epoch of reionization, but the two still probe comparable distance scales. Although the temperature quadrupole is intriguingly low compared with that expected in the standard $\Lambda$CDM cosmological model (deviating by much more than $1\sigma$), the polarization quadrupole turns out to be somewhat high, at the $1\sigma$ level. We calculate the joint probability distribution function for both and find a slight tension: the observed pair of quadrupoles is inconsistent with the model at the $2.3\sigma$ confidence level. The problem is robust to simple changes to the cosmological model. If the high polarization quadrupole survives further scrutiny, then this result disfavors, at comparable significance, new superhorizon physics. The full-sky coverage and pristine foreground subtraction of the LiteBIRD satellite will be ideal to help resolve this question.
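The joint-probability argument can be illustrated with a toy Monte Carlo (not the calculation performed in the paper): for a Gaussian sky, each quadrupole estimator follows a $\chi^2$ distribution with five degrees of freedom, and one can count simulated skies at least as extreme as an observed low-temperature/high-polarization pair. The numbers below are placeholders and the temperature-polarization correlation is neglected.

```python
import numpy as np

rng = np.random.default_rng(3)

# Under a Gaussian LCDM sky, each quadrupole estimator \hat C_2 = sum_m |a_2m|^2 / 5
# follows a chi^2 distribution with 5 dof. T and E are treated as independent here
# (a simplification), and the observed values are illustrative, not the Planck numbers.
C2_T_theory, C2_E_theory = 1.0, 1.0          # theory expectations (normalized units)
C2_T_obs, C2_E_obs = 0.2, 2.0                # illustrative "low T, high E" pair

n = 1_000_000
C2_T_sim = C2_T_theory * rng.chisquare(5, n) / 5
C2_E_sim = C2_E_theory * rng.chisquare(5, n) / 5

# Fraction of simulated skies at least as extreme as the observed pair
# (T at least as low AND E at least as high).
p = np.mean((C2_T_sim <= C2_T_obs) & (C2_E_sim >= C2_E_obs))
print(f"joint p-value ~ {p:.4f}")
```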
The imprint of interacting dark energy (IDE) needs to be correctly identified in order to avoid bias in constraints on IDE. This paper investigates the large-scale imprint of IDE in redshift space distortions, using Euclid-like photometric prescriptions. A first attempt at incorporating the IDE dynamics in the galaxy (clustering and evolution) biases is made. Without IDE dynamics taken into account in the galaxy biases, as is conventionally done, the results suggest that for a constant dark energy equation of state parameter, an IDE model where the dark energy transfer rate is proportional to the dark energy density exhibits an alternating, positive-negative effect in the redshift space distortions angular power spectrum. However, when the IDE dynamics is incorporated in the galaxy biases, it is found that the apparent positive-negative alternating effect vanishes: implying that neglecting IDE dynamics in the galaxy biases can result in "artefacts" that can lead to incorrect identification of the IDE imprint. In general, the results show that multi-tracer analysis will be needed to beat down cosmic variance in order for the redshift space distortions angular power spectrum as a statistic to be a viable diagnostic of IDE. Moreover, it is found that redshift space distortions hold the potential to constrain IDE on large scales, at redshifts $z \,{\leq}\, 1$; with the scenario having IDE dynamics incorporated in the biases showing better potential.
We explore in detail the dynamics of multi-field inflationary models. We first revisit the two-field case and rederive the coordinate independent expression for the attractor solution with either small or large turn rate, emphasizing the role of isometries for the existence of rapid-turn solutions. Then, for three fields in the slow-twist regime we provide elegant expressions for the attractor solution for generic field-space geometries and potentials and study the behaviour of first order perturbations. For generic $\mathcal{N}$-field models, our method quickly grows in algebraic complexity. We observe that field-space isometries are common in the literature and are able to obtain the attractor solutions and deduce stability for some isometry classes of $\mathcal{N}$-field models. Finally, we apply our discussion to concrete supergravity models. These analyses conclusively demonstrate the existence of $\mathcal{N}>2$ dynamical attractors distinct from the two-field case, and provide tools useful for future studies of their phenomenology in the cosmic microwave background and stochastic gravitational wave spectrum.
We report the discovery of a compact group of galaxies, CGG-z5, at z~5.2 in the EGS field covered by the JWST/CEERS survey. CGG-z5 was selected as the highest overdensity of galaxies at z>2 in recent JWST public surveys and it consists of six candidate members lying within a projected area of $1.5"\times3"$ (10$\times$20~kpc$^2$). All group members are HST/F435W and HST/F606W dropouts while securely detected in the JWST/NIRCam bands, yielding a narrow range of robust photometric redshifts $5.0<z<5.3$. The most massive galaxy in the group has a stellar mass log$(M_{*}/M_{\odot})\approx9.8$, while the rest are low-mass satellites (log$(M_{*}/M_{\odot})\approx8.4-9.2$). While several group members were already detected in the HST and IRAC bands, the low stellar masses and the compactness of the structure required the sensitivity and resolution of JWST for its identification. To assess the nature and evolutionary path of CGG-z5, we searched for similar compact structures in the \textsc{Eagle} simulations and followed their evolution with time. We find that all the identified structures merge into a single galaxy by z=3 and form a massive galaxy (log$(M_{*}/M_{\odot})>11$) at z~1. This implies that CGG-z5 could be a "proto-massive galaxy" captured during a short-lived phase of massive galaxy formation.
We study a single-field inflation model in which the inflaton potential has an upward step between two slow-roll regimes by taking into account the finite width of the step. We calculate the probability distribution function (PDF) of the curvature perturbation $P[{\cal{R}}]$ using the $\delta N$ formalism. The PDF has an exponential tail only for positive ${\cal{R}}$, whose slope depends on the step width. We find that the tail may have a significant impact on the estimation of the primordial black hole abundance. We also show that the PDF $P[{\cal{R}}]$ becomes highly asymmetric on a particular scale exiting the horizon before the step, at which the curvature power spectrum has a dip. This asymmetric PDF may leave an interesting signature in large-scale structure, such as voids.
We generalize the recently proposed Stepped Partially Acoustic Dark Matter (SPartAcous) model by including additional massless degrees of freedom in the dark radiation sector. We fit SPartAcous and its generalization against cosmological precision data from the cosmic microwave background, baryon acoustic oscillations, large-scale structure, type Ia supernovae, and Cepheid variables. We find that SPartAcous significantly reduces the $H_0$ tension but does not provide any meaningful improvement of the $S_8$ tension, while the generalized model succeeds in addressing both tensions, and provides a better fit than $\Lambda\mathrm{CDM}$ and other dark sector models proposed to address the same tensions. In the generalized model, $H_0$ can be raised to $71.4~\mathrm{km/s/Mpc}$ (95% upper limit) if the fitted data does not include the direct measurement from the SH0ES collaboration, and to $73.7~\mathrm{km/s/Mpc}$ (95% upper limit) if it does. A version of $\texttt{CLASS}$ that has been modified to analyze this model is publicly available at https://github.com/ManuelBuenAbad/class_spartacous
We consider in detail the possibility that the observed neutrino oscillations are due to refraction on ultralight scalar boson dark matter. We introduce the refractive mass squared, $\tilde{m}^2$, and study its properties: its dependence on neutrino energy, the state of the background, etc. If the background is in the state of a cold gas of particles, $\tilde{m}^2$ shows a resonance dependence on energy. Above the resonance ($E \gg E_R $), we find that $\tilde{m}^2$ has the same properties as the usual vacuum mass squared. Below the resonance, $\tilde{m}^2$ decreases with energy, which (if realised) allows us to avoid the cosmological bound on the sum of neutrino masses. Also, $\tilde{m}^2$ may depend on time. We consider the validity of the results: effects of multiple interactions with scalars, and modification of the dispersion relation. We show that for the values of the system parameters required to reproduce the observed neutrino masses, perturbativity is broken at low energies, up to just above the resonance. If the background is in the state of a coherent classical field, the refractive mass does not depend on energy explicitly but may show time dependence. It coincides with the refractive mass in a cold gas at high energies. The refractive nature of neutrino mass can be tested by searching for its dependence on energy and time.
Cosmic Microwave Background (CMB)-independent approaches are frequently used in the literature to provide estimates of the Hubble constant ($H_0$). In this work, we report CMB-independent constraints on $H_0$ in an anisotropic extension of the $\Lambda$CDM model using the Big Bang Nucleosynthesis (BBN), Baryonic Acoustic Oscillations (BAOs), Cosmic Chronometer (CC), and Pantheon+ (PP) compilation of Type Ia supernovae and SH0ES Cepheid host distance anchors data. In the anisotropic model, we find $H_{\rm 0}=70.1^{+1.2}_{-1.5}\; (72.67\pm 0.85)\;\rm km\, s^{-1}\, Mpc^{-1}$, both at 68\% CL, from BAO+BBN+CC+PP (BAO+BBN+CC+PPSH0ES) data. The analyses of the anisotropic model with the two combinations of data sets reveal that anisotropy is positively correlated with $H_0$, and an anisotropy of the order $10^{-14}$ in the anisotropic model reduces the $H_0$ tension by $\sim 2\sigma$.
The extragalactic high-energy $\gamma$-ray sky is dominated by blazars, which are active galactic nuclei with their jets pointing towards us. Distance measurements are of fundamental importance, yet they are challenging for some of these sources because any spectral signature from the host galaxy may be outshone by the non-thermal emission from the jet. In this paper, we present a method to constrain redshifts for these sources that relies only on data from the Large Area Telescope on board the Fermi Gamma-ray Space Telescope. This method takes advantage of the signatures that the pair-production interaction between photons with energies larger than approximately 10 GeV and the extragalactic background light leaves on $\gamma$-ray spectra. We find upper limits for the distances of 303 $\gamma$-ray blazars, classified as 157 BL Lacertae objects, 145 of uncertain class, and 1 flat-spectrum radio quasar, whose redshifts are otherwise unknown. These derivations can be useful for planning observations with imaging atmospheric Cherenkov telescopes and also for testing theories of supermassive black hole evolution. Our results are applied to estimate the detectability of these blazars with the future Cherenkov Telescope Array, finding that at least 21 of them could be studied in a reasonable exposure of 20 h.
We study the gravitational wave (GW) spectrum produced by acoustic waves in the early universe, such as would be produced by a first order phase transition, focusing on the low-frequency side of the peak. We confirm with numerical simulations the Sound Shell model prediction of a steep rise with wave number $k$ of $k^9$ to a peak whose magnitude grows at a rate $(H/k_\text{p})H$, where $H$ is the Hubble rate and $k_\text{p}$ the peak wave number, set by the peak wave number of the fluid velocity power spectrum. We also show that hitherto neglected terms give a shallower part with amplitude $(H/k_\text{p})^2$ in the range $H \lesssim k \lesssim k_\text{p}$, which in the limit of small $H/k$ rises as $k$. This linear rise has been seen in other modelling and also in direct numerical simulations. The relative amplitude between the linearly rising part and the peak therefore depends on the peak wave number of the velocity spectrum and the lifetime of the source, which in an expanding background is bounded above by the Hubble time $H^{-1}$. For slow phase transitions, which have the lowest peak wave number and the loudest signals, the acoustic GW peak appears as a localized enhancement of the spectrum, with a rise to the peak less steep than $k^9$. The shape of the peak, absent in vortical turbulence, may help to lift degeneracies in phase transition parameter estimation at future GW observatories.
This note aims at investigating two different situations where the classical general relativistic dynamics compete with the evolution driven by Hawking evaporation. We focus, in particular, on binary systems of black holes emitting gravitational waves and gravitons, and on the cosmological evolution when black holes are immersed in their own radiation bath. Several non-trivial features are underlined in both cases.
The signal-to-noise ratio efficiency $\epsilon_{\rm SNR}$ in axion dark matter searches has been estimated using large-statistic simulation data reflecting the background information and the expected axion signal power obtained from a real experiment. This usually requires a lot of computing time even with the assistance of powerful computing resources. In this work, employing a Savitzky-Golay filter for background subtraction, we estimated a fully analytical $\epsilon_{\rm SNR}$ without relying on large-statistic simulation data, but only with an arbitrary axion mass and the relevant signal shape information. Hence, our work can provide $\epsilon_{\rm SNR}$ using minimal computing time and resources prior to the acquisition of experimental data, without the detailed information that has to be obtained from real experiments. Axion haloscope searches have observed the coincidence that the frequency-independent scale factor $\xi$ is approximately consistent with $\epsilon_{\rm SNR}$. This is confirmed analytically in this work, provided the window length of the Savitzky-Golay filter is sufficiently wide, i.e., at least 5 times the signal window.
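A minimal sketch of the Savitzky-Golay background-subtraction step is shown below (illustrative only: the spectrum is synthetic and the numbers are placeholders, not settings from an actual haloscope analysis).

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)

# Toy haloscope-like power spectrum: smooth baseline + narrow axion-like line + noise.
n_bins = 4001
f = np.arange(n_bins)                                          # frequency bins (arbitrary units)
baseline = 1.0 + 1e-4 * (f - n_bins / 2) + 0.05 * np.sin(f / 900.0)
signal = 0.01 * np.exp(-0.5 * ((f - 2000) / 5.0) ** 2)         # ~10-bin-wide line
spectrum = baseline + signal + rng.normal(0, 3e-3, n_bins)

# Savitzky-Golay background estimate; the window (201 bins) is much wider than the
# signal (~10 bins) so the line is not absorbed into the baseline fit.
background = savgol_filter(spectrum, window_length=201, polyorder=4)
residual = spectrum / background - 1.0

print("peak significance ~", residual[2000] / residual[:1500].std())
```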
The cosmological principle is fundamental to the standard cosmological model. It assumes that the Universe is homogeneous and isotropic on very large scales. As the basic assumption, it must stand the test of various observations. In this work, using the region fitting (RF) method, we map the all-sky distribution of cosmological parameters ($\Omega_{m}$ and $H_{0}$) and find that the distribution significantly deviates from isotropy. A local matter underdensity region exists toward (${308.4^{\circ}}$$_{-48.7}^{+47.6}$, ${-18.2^{\circ}}$$_{-28.8}^{+21.1}$), as well as a preferred direction of the cosmic anisotropy (${313.4^{\circ}}$$_{-18.2}^{+19.6}$, ${-16.8^{\circ}}$$_{-10.7}^{+11.1}$), in galactic coordinates. The similar directions may imply that the local matter density might be responsible for the anisotropy of the accelerated expansion of the Universe. Results of statistical isotropy analyses, including Isotropy and Isotropy with real-data positions (RP), show high confidence levels. For the local matter underdensity, the statistical significances are 2.78$\sigma$ (isotropy) and 2.34$\sigma$ (isotropy RP). For the cosmic anisotropy, the statistical significances are 3.96$\sigma$ (isotropy) and 3.15$\sigma$ (isotropy RP). The comparison of these two kinds of statistical isotropy analyses suggests that the inhomogeneous spatial distribution of the real sample can increase the deviation from isotropy. Similar results and findings are also obtained from reanalyses of the low-redshift sample (lp+) and with a lower screening angle ($\theta_\mathrm{max}$ = 60$^{\circ}$), but with a slight decrease in statistical significance. Overall, our results provide clear indications of a possible cosmic anisotropy. This possibility must be taken seriously. Further testing is needed to better understand this signal.
In this work, we compute multi-field core and halo properties in wave Dark Matter models. We focus on the case where Dark Matter consists of two light (real) scalars, interacting gravitationally. As in the single-field Ultra Light Dark Matter (ULDM) case, the scalar field behaves as a coherent BEC with a definite ground state (at fixed total mass), often referred to in the literature as a gravitational soliton. We establish an efficient algorithm to find the ground and excited states of such two-field systems. We then use simulations to investigate the gravitational collapse and virialization, starting from different initial conditions, into solitons and surrounding halo. As in the single-field case, a virialized halo forms with a gravitational soliton (ground state) at the center. We find some evidence for an empirical relation between the soliton mass and energy and those of the host halo. We use this to then find a numerical relation between the properties of the two. Finally, we use this to address the issue of alleviating some of the tensions that single-field ULDM has with observational data, in particular, the issue of how a galaxy's core and radius are related. We find that if galaxies of different masses have similar percentages of the two species, then the core-radius scaling tension is not addressed. However, if the lighter species is more abundant in lighter galaxies, then the tension can be alleviated.
The Diffuse Supernova Neutrino Background (DSNB) -- a probe of the core-collapse mechanism and the cosmic star-formation history -- has not been detected, but its discovery may be imminent. A significant obstacle for DSNB detection in Super-Kamiokande (Super-K) is detector backgrounds, especially due to atmospheric neutrinos (more precisely, these are foregrounds), which are not sufficiently understood. We perform the first detailed theoretical calculations of these foregrounds in the range 16--90 MeV in detected electron energy, taking into account several physical and detector effects, quantifying uncertainties, and comparing our predictions to the 15.9 livetime years of pre-gadolinium data from Super-K stages I--IV. We show that our modeling reasonably reproduces this low-energy data as well as the usual high-energy atmospheric-neutrino data. To accelerate progress on detecting the DSNB, we outline key actions to be taken in future theoretical and experimental work. In a forthcoming paper, we use our modeling to detail how low-energy atmospheric-neutrino events register in Super-K and suggest new cuts to reduce their impact.
We explore the impact of a magnetar giant flare (GF) on the neutron star (NS) crust, and the associated potential baryon mass ejection. We consider that sudden magnetic energy dissipation creates a thin high-pressure shell above a portion of the NS surface, which drives a relativistic shockwave into the crust, heating a fraction of these layers to sufficiently high energies to become unbound along directions unconfined by the magnetic field. We explore this process by means of spherically-symmetric relativistic hydrodynamical simulations. For an initial shell pressure $P_{\rm GF}$ we find that the total unbound ejecta mass roughly obeys the relation $M_{\rm ej}\sim4-9\times 10^{24}$ g $(P_{\rm GF}/10^{30}$ ergs cm$^{-3})^{1.43}$. For $P_{\rm GF}\sim10^{30}-10^{31}$ ergs cm$^{-3}$ corresponding to the dissipation of a magnetic field of strength $\sim10^{15.5}-10^{16}$ G, we find $M_{\rm ej}\sim10^{25}-10^{26}$ g with asymptotic velocities $v_{\rm ej}/c\sim 0.3-0.6$ compatible with the ejecta properties inferred from the radio afterglow of the GF from SGR 1806-20. Because the flare excavates crustal material to a depth characterized by an electron fraction $Y_e \approx 0.40-0.46$, and this material is ejected with high entropy and a rapid expansion timescale, the conditions are met for heavy element $r$-process nucleosynthesis via the alpha-rich freeze-out mechanism. Given an energetic GF rate of roughly once per century in the Milky Way, we find that GFs could contribute an appreciable heavy $r$-process source that tracks star formation. We predict that GFs are accompanied by short, minutes-long, luminous $\sim 10^{39}$ ergs s$^{-1}$ optical transients powered by $r$-process decay ("nova brevis"), akin to scaled-down kilonovae. Our findings also have implications for FRBs from repeating magnetar flares, particularly the high rotation measures of the synchrotron nebulae surrounding these sources.
Astrometry from the Gaia mission was recently used to discover the two nearest known stellar-mass black holes (BHs), Gaia BH1 and Gaia BH2. Both systems contain $\sim 1\,M_{\odot}$ stars in wide orbits ($a\approx$1.4 AU, 4.96 AU) around $\sim9\,M_{\odot}$ BHs. These objects are among the first stellar-mass BHs not discovered via X-rays or gravitational waves. The companion stars -- a solar-type main sequence star in Gaia BH1 and a low-luminosity red giant in Gaia BH2 -- are well within their Roche lobes. However, the BHs are still expected to accrete stellar winds, leading to potentially detectable X-ray or radio emission. Here, we report observations of both systems with the Chandra X-ray Observatory and radio observations with the Very Large Array (for Gaia BH1) and MeerKAT (for Gaia BH2). We did not detect either system, leading to X-ray upper limits of $L_X < 10^{29.4}$ and $L_X < 10^{30.1}\,\rm erg\,s^{-1}$ and radio upper limits of $L_r < 10^{25.2}$ and $L_r < 10^{25.9}\,\rm erg\,s^{-1}$. For Gaia BH2, the non-detection implies that the accretion rate near the horizon is much lower than the Bondi rate, consistent with recent models for hot accretion flows. We discuss implications of these non-detections for broader BH searches, concluding that it is unlikely that isolated BHs will be detected via ISM accretion in the near future. We also calculate evolutionary models for the binaries' future evolution using Modules for Experiments in Stellar Astrophysics (MESA). We find that Gaia BH1 will be X-ray bright for 5--50 Myr when the star is a red giant, including 5 Myr of stable Roche lobe overflow. Since no symbiotic BH X-ray binaries are known, this implies either that fewer than $\sim 10^4$ Gaia BH1-like binaries exist in the Milky Way, or that they are common but have evaded detection, perhaps due to very long outburst recurrence timescales.
Gravitational-wave (GW) observations of neutron star-black hole (NSBH) mergers are sensitive to the nuclear equation of state (EOS). Using realistic simulations of NSBH mergers, incorporating both GW and electromagnetic (EM) selection to ensure sample purity, we find that a GW detector network operating at O5-sensitivities will constrain the radius of a $1.4~M_{\odot}$ NS and the maximum NS mass with $1.6\%$ and $13\%$ precision, respectively. The results demonstrate strong potential for insights into the nuclear EOS, provided NSBH systems are robustly identified.
Massive stars can explode in powerful supernovae (SNe) forming neutron stars, but they may also collapse directly into black holes (BHs). Understanding and predicting their final fate is increasingly important, e.g., in the context of gravitational-wave astronomy. The interior mixing of stars in general and convective boundary mixing in particular remain among the largest uncertainties in their evolution. Here, we investigate the influence of convective boundary mixing on the pre-SN structure and explosion properties of massive stars. Using the 1D stellar evolution code MESA, we model single, non-rotating stars of solar metallicity with initial masses of $5-70\mathrm{M_\odot}$ and convective core step-overshooting of $0.05-0.50H_\mathrm{P}$. Stars are evolved until the onset of iron core collapse, and the pre-SN models are exploded using a parametric, semi-analytic SN code. We use the compactness parameter to describe the interior structure of stars at core collapse. Larger convective core overshooting shifts the location of the compactness peak by $1-2\mathrm{M_\odot}$ to higher $M_\mathrm{CO}$. As the luminosity of the pre-SN progenitor is determined by $M_\mathrm{CO}$, we predict BH formation for progenitors with luminosities $5.35<\log(L/\mathrm{L_\odot})<5.50$ and $\log(L/\mathrm{L_\odot})>5.80$. The luminosity range of BH formation agrees well with the observed luminosity of the red supergiant star N6946BH1 that disappeared without a bright SN and likely collapsed into a BH. While some of our models in the luminosity range $\log(L/\mathrm{L_\odot})=5.1-5.5$ indeed collapse to form BHs, this does not fully explain the lack of observed SN~IIP progenitors at these luminosities, i.e., the missing red-supergiant problem. Convective core overshooting affects the BH masses, the pre-SN location of stars in the Hertzsprung-Russell diagram, and the plateau luminosity and duration of SN~IIP lightcurves. [Abridged]
Collisionless shock waves have long been considered amongst the most prolific particle accelerators in the universe. Shocks alter the plasma they propagate through and often exhibit complex evolution across multiple scales. Interplanetary (IP) traveling shocks have been recorded in-situ for over half a century and act as a natural laboratory for experimentally verifying various aspects of large-scale collisionless shocks. A fundamentally interesting problem in both helio- and astrophysics is the acceleration of electrons to relativistic energies (more than 300 keV) by traveling shocks. This letter presents the first observations of field-aligned beams of relativistic electrons upstream of an IP shock, observed thanks to the instrumental capabilities of Solar Orbiter. This study aims to present the characteristics of the electron beams close to the source and to contribute towards understanding their acceleration mechanism. On 25 July 2022, Solar Orbiter encountered an IP shock at 0.98 AU. The shock was associated with an energetic storm particle event which also featured upstream field-aligned relativistic electron beams observed 14 minutes prior to the actual shock crossing. The distance of the beam's origin was investigated using a velocity dispersion analysis (VDA). Peak-intensity energy spectra were analyzed and compared with those obtained from a semi-analytical fast-Fermi acceleration model. By leveraging Solar Orbiter's high-time-resolution Energetic Particle Detector (EPD), we have successfully showcased an IP shock's ability to accelerate relativistic electron beams. Our proposed acceleration mechanism offers an explanation for the observed electron beam and its characteristics, while we also explore the potential contributions of more complex mechanisms.
Dark matter is a popular candidate for a new source of primary charged particles, especially positrons in cosmic rays, which have been proposed to account for observed anomalies. While this hypothesis of decaying or annihilating DM is mostly applied to our Galaxy, it could lead to some interesting phenomena when applied to other galaxies. In this work, we look into the hypothetical asymmetry in gamma radiation from the upper and lower hemispheres of the dark matter halo of the Andromeda galaxy due to inverse Compton scattering of starlight on the DM-produced electrons and positrons. While our 2D toy model raises expectations for a possible effect, a more complex approach yields a negligible effect for the dark halo case but shows some prospects for a dark disk model.
Magnetohydrodynamic turbulence drives the central engine of post-merger remnants, potentially powering both a nucleosynthetically active disk wind and the relativistic jet behind a short gamma-ray burst. We explore the impact of the magnetic field on this engine by simulating three post-merger black hole accretion disks using general relativistic magnetohydrodynamics with Monte Carlo neutrino transport, in each case varying the initial magnetic field strength. We find increasing ejecta masses associated with increasing magnetic field strength. We find that a fairly robust main $r$-process pattern is produced in all three cases, scaled by the ejected mass. Changing the initial magnetic field strength has a considerable effect on the geometry of the outflow and hints at complex central engine dynamics influencing lanthanide outflows. We find that actinide production is especially sensitive to magnetic field strength, with the overall actinide mass fraction calculated at 1 Gyr post-merger increasing by more than a factor of six with a tenfold increase in magnetic field strength. This hints at a possible connection to the variability in actinide enhancements exhibited by metal-poor, $r$-process-enhanced stars.
We present a framework for modeling astrophysical pulses from radio pulsars and fast radio bursts (FRBs). This framework, called fitburst, generates synthetic representations of dynamic spectra that are functions of several physical and heuristic parameters; the heuristic parameters can nonetheless accommodate a vast range of distributions in spectral energy. fitburst is designed to optimize the modeling of features induced by effects that are intrinsic and extrinsic to the emission mechanism, including the magnitude and frequency dependence of pulse dispersion and scatter-broadening. fitburst removes intra-channel smearing through two-dimensional upsampling, and can account for phase wrapping of "folded" signals that are typically acquired during pulsar-timing observations. We demonstrate the effectiveness of fitburst in modeling data containing pulsars and FRBs observed with the Canadian Hydrogen Intensity Mapping Experiment (CHIME) telescope.
Using publicly available gamma-ray observations of Sagittarius A* (Sgr A*), we constructed its ~6-month light curve (from 2022 June 22 to 2022 December 19) and subsequently built its associated periodogram to search for a clear periodic signal. The light curve was constructed using the Fermitools package from observations of the Fermi satellite. The associated periodogram was built using the R package RobPer, through a two-step model-fitting procedure employing the unweighted tau-regression method. To reduce the likelihood of false positive detections, we incorporated a window function method into our analysis. We identify a clear, significant peak in the periodogram at 76.32 minutes. The periodicity found is consistent with two other works in the literature at different wavelengths, supporting the idea of a unique oscillatory physical mechanism.
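For illustration, a similar period search can be sketched in Python with a standard Lomb-Scargle periodogram (a stand-in for the robust-regression periodogram of RobPer actually used), including a window-function check on the sampling pattern; the light curve below is synthetic and all numbers are placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(7)

# Toy unevenly sampled light curve with a 76-minute modulation (illustrative only).
t = np.sort(rng.uniform(0, 30 * 24 * 60, 500))          # times in minutes over ~30 days
period_true = 76.32
flux = 1.0 + 0.2 * np.sin(2 * np.pi * t / period_true) + rng.normal(0, 0.1, t.size)

freq, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 200.0,
                                             maximum_frequency=1 / 20.0)
best_period = 1 / freq[np.argmax(power)]
print(f"best period: {best_period:.2f} min")

# Window-function check: a periodogram of constant flux with the same sampling
# reveals peaks caused purely by the observing cadence rather than the source.
_, win_power = LombScargle(t, np.ones_like(t), fit_mean=False,
                           center_data=False).autopower(minimum_frequency=1 / 200.0,
                                                        maximum_frequency=1 / 20.0)
print("strongest window-function peak power:", win_power.max())
```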
Detailed measurements of the spectral structure of cosmic-ray electrons and positrons from 10.6 GeV to 7.5 TeV are presented from over 7 years of observations with the CALorimetric Electron Telescope (CALET) on the International Space Station. Because of the excellent energy resolution (a few percent above 10 GeV) and the outstanding e/p separation (10$^5$), CALET provides optimal performance for a detailed search for structures in the energy spectrum. The analysis uses data up to the end of 2022, and the number of observed electron candidates has increased more than threefold since the last publication in 2018. By adopting an updated boosted decision tree analysis, a sufficient proton rejection power up to 7.5 TeV is achieved, with a residual proton contamination of less than 10%. The observed energy spectrum becomes gradually harder in the lower energy region from around 30 GeV, consistent with AMS-02, but from 300 to 600 GeV it is considerably softer than the spectra measured by DAMPE and Fermi-LAT. At high energies, the spectrum presents a sharp break around 1 TeV, with a spectral index change from -3.15 to -3.91, and a broken power law fits the data in the energy range from 30 GeV to 4.8 TeV better than a single power law, with 6.9 sigma significance, which is compatible with the DAMPE results. The break is consistent with the expected effects of radiation loss during the propagation from distant sources (except the highest energy bin). We have fitted the spectrum with a model consistent with the positron flux measured by AMS-02 below 1 TeV and interpreted the electron + positron spectrum with possible contributions from pulsars and nearby sources. Above 4.8 TeV, a possible contribution from known nearby supernova remnants, including Vela, is addressed by an event-by-event analysis providing a higher proton-rejection power than a purely statistical analysis.
Direct detection of gravitational waves and binary black hole mergers has provided remarkable tests of general relativity. In order to have a definitive answer as to whether the black hole spacetime under test is Kerr or non-Kerr, one requires accurate mapping of the metric. Since extreme mass-ratio inspirals (EMRIs) are perfect candidates for space-based detectors, Laser Interferometer Space Antenna (LISA) observations will serve a crucial purpose in mapping the spacetime metric. In this article, we consider such a study with the Johannsen spacetime that captures the deviations from the Kerr black hole and further discuss their detection prospects. We analytically derive the leading-order post-Newtonian corrections to the averaged energy and angular momentum fluxes generated by a stellar-mass object exhibiting eccentric equatorial motion in the Johannsen background. We further obtain the orbital evolution of the inspiralling object within the adiabatic approximation and estimate the orbital phase. We finally assess the possible detectability of deviations from the Kerr black hole by estimating the gravitational wave dephasing, and highlight the crucial role of LISA observations.
We report the detection of 5 new candidate binary black hole (BBH) merger signals in the publicly released data from the second half of the third observing run (O3b) of advanced LIGO and advanced Virgo. The LIGO-Virgo-KAGRA (LVK) collaboration reported 35 compact binary coalescences (CBCs) in their analysis of the O3b data [1], with 30 BBH mergers having coincidence in the Hanford and Livingston detectors. We confirm 17 of these for a total of 22 detections in our analysis of the Hanford-Livingston coincident O3b data. We identify candidates using a search pipeline employing aligned-spin quadrupole-only waveforms. Our pipeline is similar to the one used in our O3a coincident analysis [2], except for a few improvements in the veto procedure and the ranking statistic, and we continue to use an astrophysical probability of one half as our detection threshold, following the approach of the LVK catalogs. Most of the new candidates reported in this work are placed in the upper and lower mass gaps of the black hole (BH) mass distribution. One BBH event also shows a sign of spin-orbit precession with negatively aligned spins. We also identify a possible neutron star-black hole (NSBH) merger. We expect these events to help inform the black hole mass and spin distributions inferred in a full population analysis.
M60, an elliptical galaxy located 16.5~Mpc away, has an active nucleus with a very low luminosity and an extremely low accretion rate. Its central supermassive black hole (SMBH) has a mass of $M_{\rm BH}\sim4.5\times10^{9}\, M_{\odot}$ and a Schwarzschild radius corresponding to an angular size of $R_{\rm S}\sim5.4\,\mu\mathrm{as}$. To investigate the nature of its innermost radio nucleus, data from the Very Long Baseline Array (VLBA) at 4.4 and 7.6~GHz were reduced. The VLBA images reveal a compact component with total flux densities of $\sim$20~mJy at both frequencies, a size of $\leq$0.27~mas (99.7$\%$ confidence level), about 0.022~pc ($50\,R_{\rm S}$) at 7.6~GHz, and a brightness temperature of $\geq6\times10^{9}$~K. This suggests that the observed centi-parsec-scale compact core could be attributed to a nonthermal jet base or an advection-dominated accretion flow (ADAF) with nonthermal electrons. The extremely compact structure also supports the presence of an SMBH in the center. Our results indicate that M60 is a promising target for broad-band VLBI observations at millimeter wavelengths to probe ADAF scenarios and tightly constrain the potential photon ring (about 28\,$\mu$as) around its SMBH.
Numerical models allow the investigation of phenomena that cannot exist in a laboratory. Computational simulations are therefore essential for advancing our knowledge of astrophysics; however, the very nature of simulation requires making assumptions that can substantially affect the outcome. Here, we present the challenges faced when simulating dim thermonuclear explosions, Type Iax supernovae. This class of dim events produces slow-moving, sparse ejecta that present challenges for simulation. We investigate the limitations of the equation of state and its applicability to the expanding, cooling ejecta. We also discuss how the "fluff", i.e. the low-density gas on the grid in lieu of vacuum, inhibits the ejecta as it expands. We explore how the final state of the simulation changes as we vary the character of the burning, which influences the outcome of the explosion. These challenges are applicable to a wide range of astrophysical simulations, and are important to discuss and overcome as a community.
Jets from supermassive black holes in the centers of active galaxies are the most powerful persistent sources of electromagnetic radiation in the Universe. To infer the physical conditions in the otherwise out-of-reach regions of extragalactic jets, we usually rely on fitting of their spectral energy distribution (SED). The calculation of radiative models for the jet non-thermal emission usually relies on numerical solvers of coupled partial differential equations. In this work, machine learning is used to tackle the problem of high computational complexity in order to significantly reduce the SED model evaluation time, which is needed for SED fitting with Bayesian inference methods. We compute SEDs based on the synchrotron self-Compton model for blazar emission using the radiation code ATHE${\nu}$A, and use them to train neural networks, exploring whether these can replace the original computationally expensive code. We find that a neural network with Gated Recurrent Unit neurons can effectively replace the ATHE${\nu}$A leptonic code for this application, while it can be efficiently coupled with MCMC and nested sampling algorithms for fitting purposes. We demonstrate this through applications to both simulated data sets and observational data. We offer this tool to the community through a public repository. We present a proof-of-concept application of neural networks to blazar science. This is the first step in a list of future applications involving hadronic processes and even larger parameter spaces.
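A minimal sketch of the emulator idea, assuming a GRU that scans a fixed frequency grid conditioned on the model parameters: the architecture, sizes, and training data below are placeholders, not the network trained on ATHE${\nu}$A outputs.

```python
import torch
import torch.nn as nn

class SEDEmulator(nn.Module):
    """Toy GRU-based emulator mapping model parameters to a log-SED on a fixed
    frequency grid (an illustrative sketch only)."""
    def __init__(self, n_params=5, n_freq=100, hidden=64):
        super().__init__()
        self.n_freq = n_freq
        self.embed = nn.Linear(n_params + 1, hidden)   # parameters + normalized log-frequency
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, params):                         # params: (batch, n_params)
        batch = params.shape[0]
        lognu = torch.linspace(0, 1, self.n_freq, device=params.device)
        lognu = lognu.view(1, -1, 1).expand(batch, -1, -1)
        x = torch.cat([params.unsqueeze(1).expand(-1, self.n_freq, -1), lognu], dim=-1)
        h, _ = self.gru(torch.relu(self.embed(x)))     # scan the frequency grid
        return self.head(h).squeeze(-1)                # (batch, n_freq) predicted log-flux

# Training skeleton against precomputed (parameters, log-SED) pairs (random stand-ins here).
model = SEDEmulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
params = torch.randn(256, 5)                           # stand-in training inputs
log_sed = torch.randn(256, 100)                        # stand-in training targets
for epoch in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(params), log_sed)
    loss.backward()
    opt.step()
```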
Core-collapse supernovae (CCSNe) offer extremely valuable insights into the dynamics of galaxies. Neutrino time profiles from CCSNe, in particular, could reveal unique details about collapsing stars and particle behavior in dense environments. However, CCSNe in our Galaxy and the Large Magellanic Cloud are rare, and only one supernova neutrino observation has been made so far. To maximize the information obtained from the next Galactic CCSN, it is essential to combine analyses from multiple neutrino experiments in real time and transmit any relevant information to electromagnetic facilities within minutes. Locating the CCSN, in particular, is challenging, requiring disentangling CCSN localization information from observational features associated with the properties of the supernova progenitor and the physics of the neutrinos. Yet, being able to estimate the progenitor distance from the neutrino signal would be of great help for the optimisation of the electromagnetic follow-up campaign that will start soon after the neutrino alert is issued. Existing CCSN distance measurement algorithms based on neutrino observations, however, rely on the assumption that neutrino properties can be described by the Standard Model. This paper presents a swift and robust approach to extract CCSN and neutrino physics information, leveraging diverse next-generation neutrino detectors to counteract potential measurement biases from Beyond the Standard Model effects.
The stochastic gravitational-wave background is imprinted on the times of arrival of radio pulses from millisecond pulsars. Traditional pulsar timing analyses fit a timing model to each pulsar and search the residuals of the fit for a stationary time correlation. This method breaks down at gravitational-wave frequencies below the inverse observation time of the array; therefore, existing analyses restrict their searches to frequencies above 1 nHz. An effective method to overcome this challenge is to study the correlation of secular drifts of parameters in the pulsar timing model itself. In this paper, we show that timing model correlations are sensitive to sub-nanohertz stochastic gravitational waves and perform a search using existing measurements of pulsar spin-decelerations and pulsar binary orbital decay rates. We do not observe a signal at our present sensitivity, constraining the stochastic gravitational-wave relic energy density to $\Omega_\text{GW} ( f ) < 3.8 \times 10 ^{ - 9} $ at 450~pHz, with a sensitivity that scales as the frequency squared down to approximately 10 pHz. We place additional limits on the amplitude of a power-law spectrum of $A_\star \lesssim 1.8\times10^{-14}$ for a reference frequency of $f_* = 1~{\rm year} ^{-1} $ and the spectral index expected from supermassive black hole binaries, $\gamma = 13/3$. If detection of a supermassive black hole binary signal above 1 nHz is confirmed, this search method will serve as a critical complementary probe of the dynamics of galaxy evolution.
Numerical simulations of merging compact objects and their remnants form the theoretical foundation for gravitational wave and multi-messenger astronomy. While Cartesian-coordinate-based adaptive mesh refinement is commonly used for simulations, spherical-like coordinates are more suitable for nearly spherical remnants and azimuthal flows, owing to lower numerical dissipation in the evolution of fluid angular momentum and to the smaller number of computational cells required. However, the use of spherical coordinates to numerically solve hyperbolic partial differential equations can result in severe Courant-Friedrichs-Lewy (CFL) timestep limitations, which can make simulations prohibitively expensive. This paper addresses this issue for coupled spacetime and general relativistic magnetohydrodynamics evolutions by introducing a double FFT filter and implementing it within the fully MPI-parallelized SphericalNR framework in the Einstein Toolkit. We demonstrate the effectiveness and robustness of the filtering algorithm by applying it to a number of challenging code tests, which it passes while demonstrating convergence and significantly increasing the timestep compared to unfiltered simulations.
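As a rough illustration of why azimuthal filtering relaxes the CFL constraint on spherical-like grids, the sketch below damps high azimuthal wavenumbers near the poles of a latitude-longitude grid with a single FFT pass. It is a generic toy, not the double-filter algorithm implemented in SphericalNR, and the sin(theta)-dependent cutoff is an assumption made purely for illustration.

```python
# Toy azimuthal FFT filter: near the poles the physical cell width in phi shrinks
# by sin(theta), so high azimuthal wavenumbers m are removed to keep the effective
# resolution (and hence the CFL-limited timestep) comparable to the equator.
# This is a generic sketch, not the SphericalNR double-filter implementation.
import numpy as np

def filter_azimuthal(field, theta):
    """field: shape (n_theta, n_phi); theta: colatitude of each grid row."""
    n_phi = field.shape[1]
    m = np.fft.rfftfreq(n_phi, d=1.0 / n_phi)        # integer azimuthal wavenumbers
    fhat = np.fft.rfft(field, axis=1)
    for i, th in enumerate(theta):
        m_max = max(1.0, np.sin(th) * n_phi / 2.0)   # keep fewer modes near the poles
        fhat[i, m > m_max] = 0.0
    return np.fft.irfft(fhat, n=n_phi, axis=1)

n_theta, n_phi = 64, 128
theta = np.linspace(1e-3, np.pi - 1e-3, n_theta)
rng = np.random.default_rng(0)
filtered = filter_azimuthal(rng.normal(size=(n_theta, n_phi)), theta)
print(filtered.shape)
```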
We study the underlying physics of cosmic-ray (CR) driven instabilities that play a crucial role in CR transport across a wide range of scales, from interstellar to galaxy cluster environments. By examining the linear dispersion relation of CR-driven instabilities in a magnetised electron-ion background plasma, we establish that both the intermediate-scale and gyroscale instabilities have a resonant origin, and we show that these resonances can be understood via a simple graphical interpretation. These instabilities destabilise wave modes parallel to the large-scale background magnetic field at significantly different scales and with very different phase speeds. Furthermore, we show that approximating the electron-ion background plasma with either magnetohydrodynamics (MHD) or Hall-MHD fails to capture the fastest growing instability in the linear regime, namely the intermediate-scale instability. This finding highlights the importance of accurately characterising the background plasma for resolving the most unstable wave modes. Finally, we discuss the implications of the different phase speeds of unstable modes for particle-wave scattering. Further work is needed to investigate the relative importance of these two instabilities in the non-linear, saturated regime and to develop a physical understanding of the effective CR transport coefficients in large-scale CR hydrodynamics theories.
General Teleparallel theories assume that curvature is vanishing in which case gravity can be solely represented by torsion and/or nonmetricity. Using differential form language, we express the Riemannian Gauss-Bonnet invariant concisely in terms of two General Teleparallel Gauss-Bonnet invariants, a bulk and a boundary one. Both terms are boundary terms in four dimensions. We also find that the split is not unique and present two possible alternatives. In the absence of nonmetricity our expressions coincide with the well-known Metric Teleparallel Gauss-Bonnet invariants for one of the splits. Next, we focus on the description where only nonmetricity is present and show some examples in different spacetimes. We finish our discussion by formulating novel modified Symmetric Teleparallel theories constructed with our new scalars.
The next generation of ground-based gravitational-wave detectors, Einstein Telescope (ET) and Cosmic Explorer (CE), presents a unique opportunity to put constraints on dense matter, among many other groundbreaking scientific goals. A recent study further strengthened the science case of ET, in particular by examining the performance of different detector designs. In this paper we present a more detailed study of the nuclear physics section of that work. In particular, focusing on two different detector configurations (the single-site triangular-shaped design and a design consisting of two widely separated "L-shaped" interferometers), we study the detection prospects of binary neutron star (BNS) mergers and how they can reshape our understanding of the underlying equation of state (EoS) of dense matter. We employ several state-of-the-art EoS models and synthetic BNS merger catalogs, and we make use of the Fisher information matrix (FIM) formalism to quantify statistical errors on the astrophysical parameters describing individual BNS events. To check the reliability of the FIM method, we further perform a full parameter estimation for a few simulated events. Based on the uncertainties on the tidal deformabilities associated with these events, we outline a mechanism to extract the underlying injected EoS using a recently developed meta-modelling approach within a Bayesian framework. Our results suggest that with $\gtrsim 500$ events with signal-to-noise ratio greater than $12$, we will be able to pin down the underlying EoS governing neutron star matter very precisely.
Low-metallicity very massive stars with initial masses of $\sim 140$--$260\, {\rm M_\odot}$ are expected to end their lives as pair-instability supernovae (PISNe). The abundance pattern resulting from a PISN differs drastically from regular core-collapse supernova (CCSN) models and is expected to be seen in very metal-poor (VMP) stars of ${\rm[Fe/H]}\lesssim -2$. Despite the routine discovery of many VMP stars, the unique abundance pattern expected from PISNe has not been unambiguously detected. The recently discovered VMP star LAMOST J1010+2358, however, shows a peculiar abundance pattern that is remarkably well fit by a PISN, indicating the potential first discovery of a bona fide star born from gas polluted by a PISN. In this paper, we study the detailed nucleosynthesis in a large set of CCSN models of Pop III and Pop II stars of metallicity ${\rm[Fe/H]}=-3$ with masses ranging from $12$ to $30\,{\rm M_\odot}$. We find that the observed abundance pattern in LAMOST J1010+2358 can be fit at least equally well by CCSN models of $\sim 12$--$14\,{\rm M_\odot}$ that undergo negligible fallback following the explosion. The best-fit CCSN models provide a fit that is even marginally better than the best-fit PISN model. We conclude that the measured abundance pattern in LAMOST J1010+2358 could have originated from a CCSN and therefore cannot be unambiguously identified with a PISN, given the set of elements measured in it to date. We identify key elements whose measurement in future detections of stars like LAMOST J1010+2358 can differentiate between a CCSN and a PISN origin.
In this work, we investigate the electromagnetic energy released by astrophysical black holes within the Kerr-Taub-NUT solution, which describes rotating black holes with a nonvanishing gravitomagnetic charge. In our study, we consider the black holes in the X-ray binary systems GRS 1915+105, GRO J1655-40, XTE J1550-564, A0620-00, H1743-322, and GRS 1124-683. We show that the Kerr-Taub-NUT spacetime can explain the radiative efficiency of these sources inferred from the continuum fitting method (CFM). We also show that, in the framework of the Blandford-Znajek mechanism, it is possible to reproduce the observed jet power. We combine the results of the two analyses for the selected objects to obtain more stringent constraints on the spacetime parameters. We show that, as in the case of the Kerr spacetime, the Kerr-Taub-NUT solution cannot simultaneously explain the observed jet power and radiative efficiency of GRS 1915+105.
Ultraluminous X-ray sources (ULXs) represent an extreme class of accreting compact objects: from the identification of some of the accretors as neutron stars to the detection of powerful winds travelling at 0.1-0.2 c, the increasing evidence points towards ULXs harbouring stellar-mass compact objects undergoing highly super-Eddington accretion. Measuring their intrinsic properties, such as the accretion rate onto the compact object, the outflow rate, the masses of accretor/companion -- hence their progenitors, lifetimes, and future evolution -- is challenging due to ULXs being mostly extragalactic and in crowded fields. Yet ULXs represent our best opportunity to understand super-Eddington accretion physics and the paths through binary evolution to eventual double compact object binaries and gravitational wave sources. Through a combination of end-to-end and single-source simulations, we investigate the ability of HEX-P to study ULXs in the context of their host galaxies and compare it to XMM-Newton and NuSTAR, the current instruments with the most similar capabilities. HEX-P's higher sensitivity, which is driven by its narrow point-spread function and low background, allows it to detect pulsations and broad spectral features from ULXs better than XMM-Newton and NuSTAR. We describe the value of HEX-P in understanding ULXs and their associated key physics, through a combination of broadband sensitivity, timing resolution, and angular resolution, which make the mission ideal for pulsation detection and low-background, broadband spectral studies.
In a previous work [1], it was determined whether a putative vortex is non-abelian by studying its radiation channels. The example considered there was an $SU(2)$ gauge model whose internal orientational modes are described by a sphere $S^2$. The non-abelian effects presented in that reference were not very pronounced, owing to the compactness of this space. In the present work, the analysis is extended to a vortex whose internal space is non-compact. This situation may be realised by semi-local supersymmetric vortices [2]-[9]. As the internal space has infinite volume, a highly energetic perturbation may propagate along the object. A specific configuration is presented in which the internal space is the resolved conifold with its Ricci-flat metric. The curious feature is that it corresponds to a static vortex, that is, the perturbation is due only to the internal modes. Even though the vortex is static, the emission of gravitational radiation is considerable in the present case. This suggests that the presence of slowly moving objects that can emit a large amount of gravitational radiation is a hint of non-abelianity.
We report on a campaign on the bright black hole X-ray binary Swift J1727.8$-$1613 centered around five observations by the Imaging X-ray Polarimetry Explorer (IXPE). This is the first time it has been possible to trace the evolution of the X-ray polarization of a black hole X-ray binary across a hard to soft state transition. The 2--8 keV polarization degree slowly decreased from $\sim$4\% to $\sim$3\% across the five observations, but remained in the North-South direction throughout. Using the Australia Telescope Compact Array (ATCA), we measure the intrinsic 7.25 GHz radio polarization to align in the same direction. Assuming the radio polarization aligns with the jet direction (which can be tested in the future with resolved jet images), this implies that the X-ray corona is extended in the disk plane, rather than along the jet axis, for the entire hard intermediate state. This in turn implies that the long ($\gtrsim$10 ms) soft lags that we measure with the Neutron star Interior Composition ExploreR (NICER) are dominated by processes other than pure light-crossing delays. Moreover, we find that the evolution of the soft lag amplitude with spectral state differs from the common trend seen for other sources, implying that Swift J1727.8$-$1613 is a member of a hitherto under-sampled sub-population.
Jellyfish galaxies are promising laboratories for studying radiative cooling and magnetic fields in multiphase gas flows. Their long, dense tails are observed to be magnetised, and they extend up to 100 kpc into the intracluster medium (ICM), suggesting that their gas is thermally unstable, so that the cold gas mass grows with time rather than being fully dissolved in the hot wind as a result of hydrodynamical interface instabilities. In this paper we use the AREPO code to perform magnetohydrodynamical wind-tunnel simulations of a jellyfish galaxy experiencing ram-pressure stripping by interacting with an ICM wind. The ICM density, temperature and velocity that the galaxy encounters are time-dependent and comparable to what a real jellyfish galaxy experiences while orbiting in the ICM. In simulations with a turbulent magnetised wind we reproduce observations, which show that the magnetic field is aligned with the jellyfish tails. During the galaxy's infall into the cluster with a near edge-on geometry, the gas flow in the tail is fountain-like, implying preferential stripping of gas where the rotational velocity vectors add up with the ram pressure, while fall-back occurs in the opposite case. Hence, the tail velocity shows a memory of the rotation pattern of the disc. At the time of the nearest cluster passage, ram-pressure stripping is so strong that the fountain flow is destroyed and the tail is instead dominated by removal of gas. We show that the gas in the tail is highly prone to fragmentation, as predicted for shattering due to radiative cooling.
We present CloudFlex, a new open-source tool for predicting the absorption-line signatures of cool gas in galaxy halos with complex small-scale structure. Motivated by analyses of cool material in hydrodynamical simulations of turbulent, multiphase media, we model individual cool gas structures as assemblies of cloudlets with a power-law distribution of cloudlet mass $\propto m_{\rm cl}^{-\alpha}$ and relative velocities drawn from a turbulent velocity field. The user may specify $\alpha$, the lower limit of the cloudlet mass distribution ($m_{\rm cl,min}$), and several other parameters that set the total mass, size, and velocity distribution of the complex. We then calculate the MgII 2796 absorption profiles induced by the cloudlets along pencil-beam lines of sight. We demonstrate that at fixed metallicity, the covering fraction of sightlines with equivalent widths $W_{2796} < 0.3$ Ang increases significantly with decreasing $m_{\rm cl,min}$, cool cloudlet number density ($n_{\rm cl}$), and cloudlet complex size. We then present a first application, using this framework to predict the projected $W_{2796}$ distribution around ${\sim}L^*$ galaxies. We show that the observed incidences of $W_{2796}>0.3$ Ang sightlines within 10 kpc < $R_{\perp}$ < 50 kpc are consistent with our model over much of parameter space. However, they are underpredicted by models with $m_{\rm cl,min}\ge100M_{\odot}$ and $n_{\rm cl}\ge0.03$ $\rm cm^{-3}$, in keeping with a picture in which the inner cool circumgalactic medium (CGM) is dominated by numerous low-mass cloudlets ($m_{\rm cl}\lesssim100M_{\odot}$) with a volume filling factor ${\lesssim}1\%$. When used to simultaneously model absorption-line datasets built from multi-sightline and/or spatially-extended background probes, CloudFlex will enable detailed constraints on the size and velocity distributions of structures comprising the photoionized CGM.
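A minimal sketch of one ingredient described above: drawing cloudlet masses from a truncated power law dN/dm proportional to m^(-alpha) by inverse-transform sampling, and assigning relative velocities from a Gaussian stand-in for a turbulent field. The numbers (alpha, mass limits, velocity scale) are placeholders, not CloudFlex defaults, and this is not the CloudFlex implementation itself.

```python
# Illustrative sampling of a cloudlet population: power-law masses via
# inverse-transform sampling plus Gaussian relative velocities as a crude
# stand-in for a turbulent velocity field.
import numpy as np

def sample_cloudlet_masses(n, alpha, m_min, m_max, rng):
    """Inverse-transform sampling of a truncated power-law mass function."""
    u = rng.uniform(size=n)
    if np.isclose(alpha, 1.0):
        return m_min * (m_max / m_min) ** u
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

rng = np.random.default_rng(42)
masses = sample_cloudlet_masses(10_000, alpha=2.0, m_min=10.0, m_max=1e5, rng=rng)  # Msun (assumed)
sigma_turb = 20.0                                   # km/s, assumed turbulent velocity scale
velocities = rng.normal(0.0, sigma_turb, size=(masses.size, 3))
print(masses.min(), masses.max(), velocities.std())
```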
Radiative cooling and AGN heating are thought to form a feedback loop that regulates the evolution of low redshift cool-core galaxy clusters. Numerical simulations suggest that formation of multiphase gas in the cluster core imposes a floor on the ratio of cooling time ($t_{\rm cool}$) to free-fall time ($t_{\rm ff}$) at $\min ( t_{\rm cool} / t_{\rm ff} ) \approx 10$. Observations of galaxy clusters show evidence for such a floor, and cluster cores with $\min ( t_{\rm cool} / t_{\rm ff} ) \lesssim 30$ usually contain abundant multiphase gas. However, there are important outliers. One of them is Abell 2029, a massive galaxy cluster ($M_{200} \gtrsim 10^{15}$ M$_\odot$) with $\min( t_{\rm cool}/t_{\rm ff}) \sim 20$, but little apparent multiphase gas. In this paper, we present high resolution 3D hydrodynamic AMR simulations of a cluster similar to A2029 and study how it evolves over a period of 1-2 Gyr. These simulations suggest that Abell 2029 self-regulates without producing multiphase gas because its central black hole ($\sim 5\times 10^{10} \, M_\odot$) is massive enough for Bondi accretion of hot ambient gas to supply sufficient feedback energy to compensate for radiative cooling.
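For orientation, the feedback-versus-cooling balance invoked above can be checked at the back-of-the-envelope level: a Bondi accretion rate is computed from the black-hole mass and assumed ambient gas properties, converted to a feedback power with an efficiency epsilon, and compared with a cooling luminosity. Apart from the quoted black-hole mass, all inputs below are illustrative assumptions, not values from the simulations.

```python
# Back-of-the-envelope estimate: Bondi accretion rate Mdot = 4*pi*(G*M)^2*rho/c_s^3,
# feedback power P = epsilon * Mdot * c^2, compared with an assumed cooling luminosity.
import numpy as np

G = 6.674e-8          # cgs
M_SUN = 1.989e33      # g
M_P = 1.673e-24       # g
KEV = 1.602e-9        # erg
C = 2.998e10          # cm/s

M_bh = 5e10 * M_SUN               # black-hole mass quoted above
n_e = 0.05                        # cm^-3, assumed ambient electron density
kT = 7.0 * KEV                    # assumed ambient temperature
mu, rho_factor = 0.6, 1.9         # mean molecular weight; rho ~ 1.9 n_e m_p (assumed)

c_s = np.sqrt(5.0 / 3.0 * kT / (mu * M_P))      # adiabatic sound speed
rho = rho_factor * n_e * M_P
mdot_bondi = 4 * np.pi * (G * M_bh) ** 2 * rho / c_s**3

epsilon = 0.01                                   # assumed feedback efficiency
P_feedback = epsilon * mdot_bondi * C**2
L_cool = 1e45                                    # erg/s, assumed cooling luminosity

print(f"Bondi rate : {mdot_bondi / (M_SUN / 3.15e7):.2f} Msun/yr")
print(f"Feedback   : {P_feedback:.2e} erg/s")
print(f"Cooling    : {L_cool:.1e} erg/s")
```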
Studies of star formation in various galaxy cluster mergers have reached apparently contradictory conclusions regarding whether mergers stimulate star formation, quench it, or have no effect. Because the mergers studied span a range of time since pericenter (TSP), it is possible that the apparent effect on star formation is a function of TSP. We use a sample of 12 bimodal mergers to assess star formation as a function of TSP. We measure the equivalent width of the H-alpha emission line in ${\sim}100$ member galaxies in each merger, classify galaxies as emitters or non-emitters, and then classify emitters as star-forming galaxies (SFG) or active galactic nuclei (AGN) based on the [NII] $\lambda6583$ line. We quantify the distribution of SFG and AGN relative to non-emitters along the spatial axis defined by the subcluster separation. The SFG and AGN fractions vary from merger to merger, but show no trend with TSP. The spatial distribution of SFG is consistent with that of non-emitters in eight mergers, but shows significant avoidance of the system center in the remaining four mergers, including the three with the lowest TSP. If there is a connection between star formation activity and TSP, probing it further will require more precise TSP estimates and more mergers with TSP in the range of 0-400 Myr.
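A schematic version of the classification step described above, with illustrative thresholds that are not the values adopted in the study: emitters are selected by an H-alpha equivalent-width cut, and emitters are then split into SFG and AGN using the [NII] 6583 to H-alpha flux ratio.

```python
# Toy classification: emitter vs non-emitter by H-alpha equivalent width,
# then SFG vs AGN by the [NII]6583/H-alpha ratio.  Thresholds are assumed
# for illustration only.
import numpy as np

EW_HA_MIN = 2.0        # Angstrom, assumed emitter threshold
LOG_NII_HA_MAX = -0.3  # assumed SFG/AGN dividing line in log10([NII]/Halpha)

def classify(ew_halpha, flux_nii, flux_halpha):
    if ew_halpha < EW_HA_MIN:
        return "non-emitter"
    ratio = np.log10(flux_nii / flux_halpha)
    return "SFG" if ratio < LOG_NII_HA_MAX else "AGN"

print(classify(5.0, 1.0, 4.0))   # -> SFG
print(classify(5.0, 3.0, 4.0))   # -> AGN
print(classify(0.5, 1.0, 4.0))   # -> non-emitter
```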
The Milky Way has undergone significant transformations in its early history, characterised by violent mergers and the accretion of satellite galaxies. Among these events, the infall of the satellite galaxy Gaia-Enceladus/Sausage is recognised as the last major merger event, fundamentally altering the evolution of the Milky Way and shaping its chemo-dynamical structure. However, recent observational evidence suggests that the Milky Way has also undergone notable episodes of star formation in the past 4 Gyr, thought to be triggered by perturbations from the Sagittarius dwarf galaxy (Sgr). Here we report, for the first time, chemical signatures of the Sgr accretion event within the past 4 Gyr, using the [Fe/H] and [O/Fe] ratios in the thin disc. These signatures reveal that the previously discovered V-shaped structure in the age-[Fe/H] relation varies across different Galactic locations and contains rich substructure. Interestingly, we discover a discontinuous structure at z$_{\rm max}$ $<$ 0.3 kpc, interrupted by a recent burst of star formation from 4 Gyr to 2 Gyr ago. In this episode, we find a significant rise in oxygen abundance leading to a distinct [O/Fe] gradient, contributing to the formation of young O-rich stars. Combined with the simulated star formation history and chemical abundances of Sgr, we suggest that Sgr is an important actor in the discontinuous chemical evolution of the Milky Way disc.
This is the fourth paper of our new release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library. Following the updated solar-scaled, alpha-enhanced, and white dwarf model libraries, we present here alpha-depleted ([alpha/Fe] = -0.2) evolutionary tracks and isochrones, suitable for studying the alpha-depleted stars discovered in Local Group dwarf galaxies and in the Milky Way. These calculations include all improvements and updates of the solar-scaled and alpha-enhanced models; they span a mass range between 0.1 and 15 Msun and 21 metallicities between [Fe/H] = -3.20 and +0.45, with a helium-to-metal enrichment ratio dY/dZ = 1.31, homogeneous with the solar-scaled and alpha-enhanced models. The isochrones -- available in several photometric filters -- cover an age range between 20 Myr and 14.5 Gyr, including the pre-main-sequence phase. We have compared our isochrones with independent calculations of alpha-depleted stellar models, available for the same alpha-element depletion adopted in the present investigation. We also discuss the effect of an alpha-depleted heavy element distribution on the bolometric corrections in different wavelength regimes. Our alpha-depleted evolutionary tracks and isochrones are publicly available at the BaSTI website.
Results of surface photometry of 50 galaxies in the Local Volume, based on archival images obtained with the Hubble Space Telescope, are presented. Integrated magnitudes in the V and I bands are derived for the sample galaxies, along with brightness and color profiles. The obtained photometric parameters are compared with the measurements of other authors.
We compare the properties of the stellar populations of the globular clusters and field stars in two dwarf spheroidal galaxies (dSphs): ESO269-66, a near neighbor of the giant S0 galaxy NGC 5128, and KKs3, one of the few extremely isolated dSphs within 10 Mpc. The histories of star formation in these galaxies are known from previous work on deep stellar photometry using images from the Hubble Space Telescope (HST). The age and metal content for the nuclear globular clusters in KKs3 and ESO269-66 are known from literature spectroscopic studies: T=12.6 billion years, [Fe/H]=-1.5 and -1.55 dex. We use the Sersic function to construct and analyze the profiles of the surface density of the stars with high and low metallicities (red and blue) in KKs3 and ESO269-66, and show that (1) the profiles of the density of red stars are steeper than those of blue stars, which is indicative of gradients of metallicity and age in the galaxies, and (2) the globular clusters in KKs3 and ESO269-66 contain roughly 4 and 40%, respectively, of all the old stars in the galaxies with metallicities [Fe/H]~-1.5 to -1.6 dex and ages of 12-14 billion years. The globular clusters are, therefore, relicts of the first, most powerful bursts of star formation in the central regions of these objects. It is probable that, because of its isolation, KKs3 has lost a smaller fraction of old low-metallicity stars than ESO269-66.
Surface photometry data on 90 dwarf irregular galaxies (dIrrs) in a wide vicinity of the Virgo cluster and 30 isolated dIrrs are presented. Images from the Sloan Digital Sky Survey (SDSS) are used. The following mean photometric characteristics (colors and central surface brightness) are obtained for objects in the two samples: (V-I)o=0.75 mag (sigma=0.19 mag), (B-V)o=0.51 mag (sigma=0.13 mag), and SBv=22.16 mag/sq.arcsec (sigma=1.02 mag/sq.arcsec) for the dIrrs in the vicinity of the Virgo cluster; (V-I)o=0.66 mag (sigma=0.43 mag), (B-V)o=0.57 mag (sigma=0.16 mag), and SBv=22.82 mag/sq.arcsec (sigma=0.73 mag/sq.arcsec) for the isolated galaxies. The mean central surface brightness of the isolated galaxies in this sample is lower than that of the dIrrs in a denser environment. The average color characteristics of the dIrrs in the different environments are the same to within ~0.2 mag.
We report on the discovery of a significant and compact over-density of old and metal-poor stars in the KiDS survey (data release 4). The discovery is confirmed by deeper HSC-SSC data revealing the old Main Sequence Turn-Off of a stellar system located at a distance from the sun of $D_{\sun}=145^{+14}_{-13}$~kpc in the direction of the Sextans constellation. The system has absolute integrated magnitude ($M_V=-3.9^{+0.4}_{-0.3}$), half-light radius ($r_h=193^{+61}_{-46}$~pc), and ellipticity ($e=0.46^{+0.11}_{-0.15}$) typical of Ultra Faint Dwarf galaxies (UFDs). The central surface brightness is near the lower limits of known local dwarf galaxies of similar integrated luminosity, as expected for stellar systems that escaped detection until now. The distance of the newly found system suggests that it is likely a satellite of our own Milky Way, consequently, we tentatively baptise it Sextans~II (KiDS-UFD-1).
Given the high incidence of binaries among mature massive field stars, it is clear that multiplicity is an inevitable outcome of high-mass star formation. Understanding how massive multiples form requires the study of the birth environments of massive stars, covering the innermost to outermost regions. We aim to detect and characterise low-mass companions around massive young stellar objects (MYSOs) during and shortly after their formation phase. To investigate large spatial scales, we carried out an $L'$-band high-contrast direct imaging survey seeking low-mass companions (down to $L_{\text{bol}}\approx 10 L_{\odot}$, or late A-type) around thirteen previously identified MYSOs using the VLT/NACO instrument. From these images, we searched for companions on wide orbits, covering scales from 300 to 56,000 au. Detection limits were determined for all targets, and we tested the gravitational binding to the central object based on chance projection probabilities. We have discovered a total of thirty-nine potential companions around eight MYSOs, the large majority of which have never been reported to date. We derive a multiplicity frequency (MF) of $62\pm13$% and a companion fraction (CF) of $3.0\pm0.5$. The derived MF and CF are compared to those of other studies over similar separation ranges, either at a fixed evolutionary stage spanning a wide range of masses or vice versa. We find an increased MF and CF compared to previous studies targeting MYSOs, showing that the trend of multiplicity scaling with primary mass extends to younger evolutionary stages. The separations at which the companions are found and their locations in relation to the primary star allow us to discuss the implications for massive star formation theories.
We present an analysis of the UV continuum slopes for a sample of $176$ galaxy candidates at $8 < z_{\mathrm{phot}} < 16$. Focusing primarily on a new sample of $125$ galaxies at $\langle z \rangle \simeq 11$ selected from $\simeq 320$ arcmin$^2$ of public JWST imaging data across $15$ independent datasets, we investigate the evolution of $\beta$ in the galaxy population at $z > 8$. In the redshift range $8 < z < 10$, we find evidence for a relationship between $\beta$ and $M_{\rm UV}$, such that galaxies with brighter UV luminosities display redder UV slopes, with $\rm{d}\beta/ \rm{d} M_{\rm UV} = -0.17 \pm 0.03$. A comparison with literature studies down to $z\simeq2$ suggests that a $\beta-M_{\rm UV}$ relation has been in place from at least $z\simeq10$, with a slope that does not evolve strongly with redshift, but with an evolving normalisation such that galaxies at higher redshifts become bluer at fixed $M_{\rm UV}$. We find a significant trend between $\beta$ and redshift, with the inverse-variance weighted mean value evolving from $\langle \beta \rangle = -2.17 \pm 0.05$ at $z = 9.5$ to $\langle \beta \rangle = -2.56 \pm 0.05$ at $z = 11.5$. Based on a comparison with stellar population models, we find that at $z>10.5$ the average UV continuum slope is consistent with the intrinsic blue limit of `dust-free' stellar populations $(\beta_{\mathrm{int}} \simeq -2.6)$. These results suggest that the moderately dust-reddened galaxy population at $z < 10$ was essentially dust free at $z \simeq 11$. The extremely blue galaxies being uncovered at $z>10$ place important constraints on the dust content of early galaxies, and imply that the already observed galaxy population is likely supplying an ionizing photon budget capable of maintaining ionized IGM fractions of $\gtrsim 5$ per cent at $z\simeq11$.
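For readers unfamiliar with how beta is obtained, a minimal sketch follows: beta is the slope of log f_lambda against log lambda for bands probing the rest-frame UV continuum (f_lambda proportional to lambda^beta). The wavelengths and fluxes in the example are placeholders, not measurements from the paper.

```python
# Estimate the UV continuum slope beta from broad-band photometry by a
# straight-line fit of log(f_lambda) versus log(lambda).
import numpy as np

def uv_slope(lambdas_um, f_lambda):
    """Least-squares slope of log f_lambda vs log lambda, i.e. beta."""
    beta, _intercept = np.polyfit(np.log10(lambdas_um), np.log10(f_lambda), 1)
    return beta

lam = np.array([1.5, 2.0, 2.77])             # observed microns, illustrative band pivots
flx = 1e-20 * (lam / lam[0]) ** (-2.4)       # a fake f_lambda with beta = -2.4
print(uv_slope(lam, flx))                    # ~ -2.4
```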
The mid-infrared spectra of star-forming galaxies (SFGs) are characterized by broad polycyclic aromatic hydrocarbon (PAH) emission features at 3-20 $\mu$m. As these features are redshifted, they are predicted to dominate the flux at specific mid-infrared wavelengths, leading to substantial redshift-dependent color variations in broad-band photometry. The advent of JWST for the first time allows the study of this effect for normal SFGs. Based on spectral energy distribution templates, we here present tracks in mid-infrared (4.4, 7.7, 10, 15, and 18 $\mu$m) color-color diagrams describing the redshift dependence of SFG colors. In addition, we present simulated color-color diagrams by populating these tracks using the cosmic star-formation history and the star-formation rate function. Depending on redshift, we find that SFGs stand out in the color-color diagrams by several magnitudes. We provide the first observational demonstration of this effect for galaxies detected in the JWST Early Release Observations of the field towards the lensing cluster SMACS J0723.3$-$7327. While the distribution of detected galaxies is consistent with the simulations, the numbers are substantially boosted by lensing effects. The PAH emitter with the highest spectroscopic redshift, detected in all bands, is a multiply-imaged galaxy at $z=1.45$. There is also a substantial number of cluster members, which do not exhibit PAH emission, except for one SFG at $z=0.38$. Future wider-field observations will further populate mid-infrared color-color diagrams and provide insight into the evolution of typical SFGs.
A tight positive correlation between the stellar mass and the gas-phase metallicity of galaxies has been observed at low redshifts. The redshift evolution of this correlation can strongly constrain theories of galaxy evolution. The advent of JWST allows probing the mass-metallicity relation at redshifts far beyond what was previously accessible. Here we report the discovery of two emission-line galaxies at redshifts 8.15 and 8.16 in JWST NIRCam imaging and NIRSpec spectroscopy of targets gravitationally lensed by the cluster RXJ2129.4$+$0005. We measure their metallicities and stellar masses along with nine additional galaxies at $7.2 < z_{\rm spec} < 9.5$ to report the first quantitative statistical inference of the mass-metallicity relation at $z\approx8$. We measure $\sim 0.9$ dex evolution in the normalization of the mass-metallicity relation from $z \approx 8$ to the local Universe; at fixed stellar mass, galaxies are 8 times less metal enriched at $z \approx 8$ compared to the present day. Our inferred normalization is in agreement with the predictions of the FIRE simulations. Our inferred slope of the mass-metallicity relation is similar to or slightly shallower than that predicted by FIRE or observed at lower redshifts. We compare the $z \approx 8$ galaxies to extremely low metallicity analog candidates in the local Universe, finding that they are generally distinct from extreme emission-line galaxies or "green peas" but are similar in strong emission-line ratios and metallicities to "blueberry galaxies". Despite this similarity, at fixed stellar mass, the $z \approx 8$ galaxies have systematically lower metallicities compared to blueberry galaxies.
We present a study of the environments of 17 Lyman-$\alpha$ (Ly$\alpha$) emitting galaxies (LAEs) in the reionisation era ($5.8 < z < 8$) identified by JWST/NIRSpec as part of the JWST Advanced Deep Extragalactic Survey (JADES). Unless situated in sufficiently (re)ionised regions, Ly$\alpha$ emission from these galaxies would be strongly absorbed by neutral gas in the intergalactic medium (IGM). We conservatively estimate sizes of the ionised regions required to reconcile the relatively low Ly$\alpha$ velocity offsets ($\Delta v_\text{Ly$\alpha$}<300\,\mathrm{km\,s^{-1}}$) with moderately high Ly$\alpha$ escape fractions ($f_\mathrm{esc,\,Ly\alpha}>5\%$) observed in our sample of LAEs, suggesting the presence of ionised hydrogen along the line of sight towards at least eight out of 17 LAEs. We find minimum physical `bubble' sizes of the order of $R_\text{ion}\sim0.1$-$1\,\mathrm{pMpc}$ are required in a patchy reionisation scenario where ionised bubbles containing the LAEs are embedded in a fully neutral IGM. Around half of the LAEs in our sample are found to coincide with large-scale galaxy overdensities seen in FRESCO at $z \sim 5.8$-$5.9$ and $z\sim7.3$, suggesting Ly$\alpha$ transmission is strongly enhanced in such overdense regions, and underlining the importance of LAEs as tracers of the first large-scale ionised bubbles. Considering only spectroscopically confirmed galaxies, we find our sample of UV-faint LAEs ($M_\text{UV}\gtrsim-20\,\mathrm{mag}$) and their direct neighbours are generally not able to produce the required ionised regions based on the Ly$\alpha$ transmission properties, suggesting lower-luminosity sources likely play an important role in carving out these bubbles. These observations demonstrate the combined power of JWST multi-object and slitless spectroscopy in acquiring a unique view of the early Universe during cosmic reionisation via the most distant LAEs.
In June 2022, Gaia DR3 provided the astronomy community with about one million spectra from the Radial Velocity Spectrometer (RVS) covering the CaII triplet region. However, one-third of the published spectra have 15<S/N<25 per pixel, such that they pose problems for classical spectral analysis pipelines; therefore, alternative ways to tap into these large datasets need to be devised. We aim to leverage the versatility and capabilities of machine learning techniques for stellar parametrisation by combining Gaia-RVS spectra with the full set of Gaia products and high-resolution, high-quality ground-based spectroscopic reference datasets. We developed a hybrid convolutional neural network (CNN) that combines the Gaia DR3 RVS spectra, photometry (G, G_BP, G_RP), parallaxes, and XP coefficients to derive atmospheric parameters (Teff, log(g), and overall [M/H]) as well as chemical abundances ([Fe/H] and [{\alpha}/M]). We trained the CNN with a high-quality training sample based on APOGEE DR17 labels. With this CNN, we derived homogeneous atmospheric parameters and abundances for 886,080 RVS stars that show remarkable precision and accuracy compared to external datasets (such as GALAH and asteroseismology). The CNN is robust against noise in the RVS data, and we derive very precise labels down to S/N=15. We are able to characterise the [{\alpha}/M]-[M/H] bimodality from the inner regions to the outer parts of the Milky Way, which has never been done using RVS spectra or similar datasets. This work is the first to combine machine learning with such diverse datasets and paves the way for large-scale machine learning analysis of Gaia-RVS spectra from future data releases. Large, high-quality datasets can be optimally combined thanks to the CNN, thereby realising the full power of spectroscopy, astrometry, and photometry.
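The following is a schematic hybrid network in the spirit of the approach described above, not the authors' architecture: a 1D convolutional branch for the RVS spectrum is merged with a dense branch for the scalar inputs (photometry, parallax, XP coefficients), and the combined features predict the stellar labels. All layer sizes and input dimensions are illustrative assumptions.

```python
# Schematic hybrid CNN: spectral branch + scalar branch, concatenated,
# predicting Teff, logg, [M/H], [Fe/H], [alpha/M].  Sizes are illustrative.
import tensorflow as tf

N_PIX = 2401        # assumed number of RVS spectral pixels
N_SCALARS = 20      # assumed number of photometric/astrometric/XP inputs
N_LABELS = 5        # Teff, logg, [M/H], [Fe/H], [alpha/M]

spec_in = tf.keras.Input(shape=(N_PIX, 1), name="rvs_spectrum")
x = tf.keras.layers.Conv1D(16, 7, activation="relu")(spec_in)
x = tf.keras.layers.MaxPooling1D(4)(x)
x = tf.keras.layers.Conv1D(32, 7, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)

scal_in = tf.keras.Input(shape=(N_SCALARS,), name="photometry_parallax_xp")
y = tf.keras.layers.Dense(64, activation="relu")(scal_in)

z = tf.keras.layers.Concatenate()([x, y])
z = tf.keras.layers.Dense(128, activation="relu")(z)
labels = tf.keras.layers.Dense(N_LABELS)(z)

model = tf.keras.Model([spec_in, scal_in], labels)
model.compile(optimizer="adam", loss="mse")
model.summary()
```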
The coupling state between ions and neutrals in the interstellar medium plays a key role in the dynamics of magnetohydrodynamic (MHD) turbulence, but is challenging to study numerically. In this work, we investigate the damping of MHD turbulence in a partially ionized medium using 3D two-fluid (ions+neutrals) simulations generated with the AthenaK code. Specifically, we examine the velocity, density, and magnetic field statistics of the two-fluid MHD turbulence in different regimes of neutral-ion coupling. Our results demonstrate that when ions and neutrals are strongly coupled, the velocity statistics resemble those of single-fluid MHD turbulence. Both the velocity structures and kinetic energy spectra of ions and neutrals are similar, while their density structures can be significantly different. With an excess of small-scale sharp density fluctuations in ions, the density spectrum in ions is shallower than that of neutrals. When ions and neutrals are weakly coupled, the turbulence in ions is more severely damped due to the ion-neutral collisional friction than that in neutrals, resulting in a steep kinetic energy spectrum and density spectrum in ions compared to the Kolmogorov spectrum. We also find that the magnetic energy spectrum basically follows the shape of the kinetic energy spectrum of ions, irrespective of the coupling regime. In addition, we find large density fluctuations in ions and neutrals and thus spatially inhomogeneous ionization fractions. As a result, the neutral-ion decoupling and damping of MHD turbulence take place over a range of length scales.
Cold, substellar objects such as brown dwarfs have long been recognized as contaminants in color-selected samples of active galactic nuclei (AGNs). In particular, their near- to mid-infrared colors (1-5 $\mu$m) can closely resemble the V-shaped ($f_{\lambda}$) spectra of highly-reddened accreting supermassive black holes ("little red dots"), especially at $6 < z < 7$. Recently, a NIRCam-selected sample of little red dots over 45 arcmin$^2$ has been followed up with deep NIRSpec multi-object prism spectroscopy through the UNCOVER program. By investigating the acquired spectra, we identify three of the 14 followed-up objects as T/Y dwarfs with temperatures between 650 and 1300 K and distances between 0.8 and 4.8 kpc. At $4.8^{+0.6}_{-0.1}$ kpc, Abell2744-BD1 is the most distant brown dwarf discovered to date. We identify the remaining 11 objects as extragalactic sources at $z_{\rm spec} \gtrsim 5$. Given that three of these sources are strongly-lensed images of the same AGN (Abell2744-QSO1), we derive a brown dwarf contamination fraction of $25\%$ in this NIRCam-selection of little red dots. We find that in the near-infrared filters, brown dwarfs appear much bluer than the highly-reddened AGN, providing an avenue for distinguishing the two and compiling cleaner samples of photometrically selected highly-reddened AGN.
We report the discovery of a large-scale structure at z=3.44 revealed by JWST data in the EGS field. This structure, dubbed "Cosmic Vine", consists of 20 galaxies with spectroscopic redshifts at $3.43<z<3.45$ and six galaxy overdensities with consistent photometric redshifts, making up a vine-like structure extending over a ~4x0.2 pMpc^2 area. The two most massive galaxies (M*~10^10.9 Msun) of the Cosmic Vine are found to be quiescent with bulge-dominated morphologies ($B/T>70\%$). Comparisons with simulations suggest that the Cosmic Vine would form a cluster with halo mass >10^14 Msun at z=0, and that the two massive galaxies are likely forming the brightest cluster galaxies (BCGs). The results unambiguously reveal that massive quiescent galaxies can form in growing large-scale structures at z>3, thus disfavoring environmental quenching mechanisms that require a virialized cluster core. Instead, as suggested by their interacting and bulge-dominated morphologies, the two galaxies were likely quenched by a merger-triggered starburst or AGN feedback before falling into a cluster core. Moreover, we find that the observed specific star formation rates of massive quiescent galaxies in z>3 dense environments are two orders of magnitude lower than those of the BCGs in the TNG300 simulation. This discrepancy potentially poses a challenge to models of massive cluster galaxy formation. Future studies comparing a large sample with dedicated cluster simulations are required to solve the problem.
The sensitivity of aLIGO detectors is adversely affected by the presence of noise caused by light scattering. Low frequency seismic disturbances can create higher frequency scattering noise adversely impacting the frequency band in which we detect gravitational waves. In this paper, we analyze instances of a type of scattered light noise we call "Fast Scatter" that is produced by motion at frequencies greater than 1 Hz, to locate surfaces in the detector that may be responsible for the noise. We model the phase noise to better understand the relationship between increases in seismic noise near the site and the resulting Fast Scatter observed. We find that mechanical damping of the Arm Cavity Baffles (ACBs) led to a significant reduction of this noise in recent data. For a similar degree of seismic motion in the 1-3 Hz range, the rate of noise transients is reduced by a factor of ~ 50.
We present a framework for the efficient computation of optimal Bayesian decisions under intractable likelihoods, by learning a surrogate model for the expected utility (or its distribution) as a function of the action and data spaces. We leverage recent advances in simulation-based inference and Bayesian optimization to develop active learning schemes to choose where in parameter and action spaces to simulate. This allows us to learn the optimal action in as few simulations as possible. The resulting framework is extremely simulation efficient, typically requiring fewer model calls than the associated posterior inference task alone, and a factor of $100-1000$ more efficient than Monte-Carlo based methods. Our framework opens up new capabilities for performing Bayesian decision making, particularly in the previously challenging regime where likelihoods are intractable, and simulations expensive.
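A toy sketch of the core idea above, under strong simplifying assumptions: Monte-Carlo estimates of the expected utility over a one-dimensional action space are fit with a Gaussian-process surrogate, whose maximum approximates the optimal action. The simulator, utility, and prior below are toy stand-ins, not the framework from the paper, and a full scheme would additionally use an acquisition rule to decide where in parameter and action space to simulate next.

```python
# Toy surrogate of the expected utility U(a): Monte-Carlo estimates at a few
# actions are interpolated with a Gaussian process, and the surrogate maximum
# gives an approximately optimal action.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulate(theta):                 # toy forward model
    return theta + rng.normal(0.0, 0.1)

def utility(action, theta):          # toy utility: penalise squared error
    return -(action - theta) ** 2

def mc_expected_utility(action, n=64):
    thetas = rng.normal(0.0, 1.0, size=n)          # toy prior draws
    return np.mean([utility(action, simulate(t)) for t in thetas])

actions = np.linspace(-2, 2, 15)
u_hat = np.array([mc_expected_utility(a) for a in actions])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(actions[:, None], u_hat)

grid = np.linspace(-2, 2, 400)[:, None]
best_action = grid[np.argmax(gp.predict(grid))][0]
print("approximately optimal action:", best_action)
```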
Transition-edge sensor (TES) bolometers are broadly used for background-limited astrophysical measurements from the far-infrared to mm-waves. Many planned future instruments require increasingly large detector arrays, but their scalability is limited by their cryogenic readout electronics. Microwave SQUID multiplexing offers a highly capable scaling solution through the use of inherently broadband circuitry, enabling readout of hundreds to thousands of channels per microwave line. As with any multiplexing technique, the channelization mechanism gives rise to electrical crosstalk which must be understood and controlled so as to not degrade the instrument sensitivity. Here, we explore implications relevant for TES bolometer array applications, focusing in particular on upcoming mm-wave observatories such as the Simons Observatory and AliCPT. We model the relative contributions of the various underlying crosstalk mechanisms, evaluate the difference between fixed-tone and tone-tracking readout systems, and discuss ways in which crosstalk nonlinearity will complicate on-sky measurements.
ASTRI-Horn is the prototype of the nine telescopes that form the ASTRI Mini-Array, under construction at the Teide Observatory in Spain and devoted to observing the sky above 10 TeV. It adopts an innovative optical design based on a dual-mirror Schwarzschild-Couder configuration, and its camera, composed of a matrix of monolithic multipixel silicon photomultipliers (SiPMs), is managed by ad hoc tailored front-end electronics based on a peak-detector operation mode. During the Crab Nebula campaign in 2018-2019, ASTRI-Horn was affected by gain variations induced by high levels of night sky background. This paper reports the work performed to detect and quantify the effects of these gain variations in shower images. The analysis required simultaneous observations of the night sky background flux in the 300-650 nm wavelength band, performed with the auxiliary instrument UVscope, a calibrated multi-anode photomultiplier working in single counting mode. As a result, a maximum gain reduction of 15% was obtained, in agreement with the value previously computed from the variance of the background level in each image. This ASTRI-Horn gain reduction was caused by current limitation of the voltage supply. The analysis presented in this paper provides a method to evaluate possible variations in the nominal response of SiPMs when scientific observations are performed in the presence of high night sky background, as in dark or gray conditions.
An accurate description of the center-to-limb variation (CLV) of stellar spectra is becoming an increasingly critical factor in both stellar and exoplanet characterization. In particular, the CLV of spectral lines is extremely challenging as its characterization requires highly detailed knowledge of the stellar physical conditions. To this end, we present the Numerical Empirical Sun-as-a-Star Integrator (NESSI) as a tool for translating high-resolution solar observations of a partial field of view into disk-integrated spectra that can be used to test common assumptions in stellar physics.
We report results of an in-depth numerical investigation of three-dimensional projection effects which could influence the observed loop-like structures in an optically thin solar corona. Several archetypal emitting geometries are tested, including collections of luminous structures with circular cross-sections of fixed and random size, light-emitting structures with highly anisotropic cross-sections, as well as two-dimensional stochastic current density structures generated by fully-developed magnetohydrodynamic (MHD) turbulence. A comprehensive set of statistical signatures is used to compare the line-of-sight-integrated emission signals predicted by the constructed numerical models with the loop profiles observed by the extreme ultraviolet telescope onboard the flight 2.1 of the High-Resolution Coronal Imager (Hi-C). The results suggest that typical cross-sectional emission envelopes of the Hi-C loops are unlikely to have high eccentricity, and that the observed loops cannot be attributed to randomly oriented quasi-two-dimensional emitting structures, some of which would produce anomalously strong optical signatures due to an accidental line-of-sight alignment expected in the coronal veil scenario \citep{malanushenko2022}. The possibility of apparent loop-like projections of very small (close to the resolution limit) or very large (comparable with the size of an active region) light-emitting sheets remains open, but the intermediate range of scales commonly associated with observed loop systems is most likely filled with true quasi-one-dimensional (roughly axisymmetric) structures embedded into the three-dimensional coronal volume.
In any physical system, when we move from short to large scales, new spacetime symmetries emerge that help to simplify the dynamics of the system. In this letter we demonstrate that certain variations of the symmetries of General Relativity at large scales generate effects equivalent to Dark Matter. In particular, we reproduce the Tully-Fisher law. Additionally, we demonstrate that the dark matter effects derived in this way are consistent with the predictions of MOND, without modifying gravity.
In this work, we address the thermodynamical evolution of the universe in the context of Loop Quantum Cosmology by considering the conditions for the existence of a time arrow in this approach. We find that, for a time arrow to exist in our universe, in the sense of obedience to the Generalized Second Law of Thermodynamics, the initial state of the cosmos must correspond to one of negative entropy.
We study the Hilbert space of a system of $n$ black holes with an inner product induced by replica wormholes. This takes the form of a sum over permutations, which we interpret in terms of a gauge symmetry. The resulting inner product is degenerate, with null states lying in representations corresponding to Young diagrams with too many rows. We count the remaining states in a large $n$ limit, which is governed by an emergent collective Coulomb gas description of the shape of typical Young diagrams. This exhibits a third-order phase transition when the null states become numerous. We find that the dimension of the black hole Hilbert space accords with a microscopic interpretation of Bekenstein-Hawking entropy.
Only 5% of what makes up the Universe is well understood, consisting of baryonic matter and radiation. Dark matter and dark energy correspond to the remaining 95% of the Universe, and their origin and evolution have not yet been satisfactorily explained. Dark matter, supposedly present in the halo regions of galaxies, appears to be the mechanism behind the unusual behavior of stellar tangential velocities, which are higher than predicted from the interaction with visible matter alone. Using the solutions of the equations of motion for the spacetime generated by a spinning cosmic string with internal structure in Brans-Dicke gravitation, the present work evaluates whether this type of string can play the role currently ascribed to dark matter: producing the typical rotation curves of galaxies, which maintain the tangential velocities of their stars and whose behavior cannot be explained solely by the observed baryonic matter. To this end, the model was used to obtain the stellar velocities of four Sc-type galaxies, which were then compared with their respective observed values.
We continue our work on the study of spherically symmetric loop quantum gravity coupled to two spherically symmetric scalar fields, one of which acts as a clock. As a consequence of the presence of the latter, we can define a true Hamiltonian for the theory. The spherically symmetric context allows us to carry out precise, detailed calculations. Here we study the theory in regions of large values of the radial coordinate. This allows us to define in detail the vacuum of the theory and study its quantum states, yielding a quantum field theory on a quantum spacetime that makes contact with the usual treatment on classical spacetimes.
We calculate the total thrust resulting from the interaction between charged scalar modes and a superradiant Reissner-Nordstr\"om black hole, when the modes are deflected by a hemispherical perfect mirror located at a finite distance from the black hole's horizon.
In this work we introduce and study unimodular-mimetic $f(\mathcal{G})$ gravity, where the unimodular and mimetic constraints are incorporated through corresponding Lagrange multipliers. We present the field equations governing this theory and discuss their main properties. Using the reconstruction scheme, we obtain a quadratic unimodular-mimetic $f(\mathcal{G})=A\mathcal{G}^2$ gravity capable of describing the hybrid expansion law and power-law evolution. Furthermore, we employ an inverted reconstruction technique to derive the specific $f(\mathcal{G})$ function that reproduces the Hubble rate of a symmetric bounce. The unimodular-mimetic $f(\mathcal{G})=A\mathcal{G}^2$ theory is also shown to be compatible with the BICEP2/Keck and Planck data. To this end, we incorporate updated constraints on the tensor-to-scalar ratio and the spectral index, utilizing a perfect-fluid approach to the slow-roll parameters. Through this analysis, we demonstrate that the theoretical framework presented here can indeed characterize inflation in agreement with the observational findings. Consequently, the introduced extension appears to have the potential to describe and encompass a wide spectrum of cosmological models.
The emission of continuous gravitational waves (CWs), with durations much longer than typical data-taking runs, is expected from several sources: notably spinning neutron stars that are asymmetric with respect to their rotation axis, and more exotic sources such as ultra-light scalar boson clouds formed around Kerr black holes and sub-solar mass primordial binary black holes. Unless the signal time evolution is well predicted and its relevant parameters are accurately known, the search for CWs is typically based on semi-coherent methods, where the full data set is divided into shorter chunks of given duration, which are properly processed and then incoherently combined. In this paper we present a semi-coherent method in which the so-called \textit{5-vector} statistic is computed for the various data segments and then summed after the removal of the Earth Doppler modulation and the signal's intrinsic spin-down. The method can work with segment durations of several days, thanks to a two-stage procedure in which an initial rough correction of the Doppler and spin-down is followed by a refined step in which the residual variations are removed. This method can be efficiently applied to directed searches, where the source position is known to a good level of accuracy, and in the candidate follow-up stage of wide-parameter-space searches.
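A schematic of the semi-coherent stacking idea (not the 5-vector implementation): a known phase model, standing in for the Doppler- and spin-down-corrected signal phase, is heterodyned out of each data segment, a per-segment power statistic is formed, and the statistics are summed incoherently. Sampling rate, segment length, and signal parameters below are illustrative assumptions.

```python
# Toy semi-coherent search: demodulate each segment with a phase model
# (Doppler term omitted for brevity), compute per-segment power, sum over segments.
import numpy as np

FS = 16.0                 # Hz, assumed sampling rate
T_SEG = 3600.0            # s, assumed segment length for this toy
F0, F1 = 2.5, -1e-9       # assumed signal frequency (Hz) and spin-down (Hz/s)

def phase_model(t):
    return 2 * np.pi * (F0 * t + 0.5 * F1 * t**2)

def segment_statistic(x, t):
    demod = x * np.exp(-1j * phase_model(t))        # heterodyne to baseband
    return np.abs(np.sum(demod))**2 / len(x)        # power in the demodulated bin

rng = np.random.default_rng(1)
t_all = np.arange(0, 10 * T_SEG, 1 / FS)
data = rng.normal(size=t_all.size) + 0.01 * np.cos(phase_model(t_all))

n_per_seg = int(T_SEG * FS)
stat = sum(
    segment_statistic(data[i:i + n_per_seg], t_all[i:i + n_per_seg])
    for i in range(0, data.size - n_per_seg + 1, n_per_seg)
)
print("semi-coherent statistic:", stat)
```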
The observation of gravitational waves opens up a new window to probe the universe and the nature of the gravitational field itself. As a result, they serve as a new and promising tool not only to test our current theories but also to study models that go beyond our current understanding. In this paper, inspired by recent successes in scalar and Maxwell electrodynamics, we analyze the role played by the (quantum) Unruh effect in the production of both classical and quantum gravitational waves by a uniformly accelerated mass. In particular, we show the fundamental role played by zero-energy (Rindler) gravitons in building up the gravitational radiation, as measured by inertial observers, emitted by the body.
We show that the stress tensor of a real scalar quantum field on Reissner-Nordstr{\"o}m-deSitter spacetime exhibits correlations over macroscopic distances near the Cauchy horizon. These diverge as the Cauchy horizon is approached and are universal, i.e., state-independent. This signals a breakdown of the semi-classical approximation near the Cauchy horizon. We also investigate the effect of turning on a charge of the scalar field and consider the correlation of the stress tensor between the two poles of the Cauchy horizon of Kerr-de Sitter spacetime.
We consider the classification of supersymmetric black hole solutions to five-dimensional STU gauged supergravity that admit torus symmetry. This reduces to a problem in toric K\"ahler geometry on the base space. We introduce the class of separable toric K\"ahler surfaces that unify product-toric, Calabi-toric and orthotoric K\"ahler surfaces, together with an associated class of separable 2-forms. We prove that any supersymmetric toric solution that is timelike, with a separable K\"ahler base space and Maxwell fields, outside a horizon with a compact (locally) spherical cross-section, must be locally isometric to the known black hole or its near-horizon geometry. An essential part of the proof is a near-horizon analysis which shows that the only possible separable K\"ahler base space is Calabi-toric. In particular, this also implies that our previous black hole uniqueness theorem for minimal gauged supergravity applies to the larger class of separable K\"ahler base spaces.
We study the asymptotic behaviour of Bianchi type VI$_0$ spacetimes with orthogonal perfect fluid matter satisfying Einstein's equations. In particular, we prove a conjecture due to Wainwright about the initial singularity of such spacetimes. Using the expansion-normalized variables of Wainwright-Hsu, we demonstrate that for a generic solution the initial singularity is vacuum dominated, anisotropic and silent. In addition, by employing known results on Bianchi backgrounds, we obtain convergence results on the asymptotics of solutions to the Klein-Gordon equation on all backgrounds of this type, except for one specific case.
In holographic duality an eternal AdS black hole is described by two copies of the boundary CFT in the thermal field double state. This identification has many puzzles, including the boundary descriptions of the event horizons, the interiors of the black hole, and the singularities. Compounding these mysteries is the fact that, while there is no interaction between the CFTs, observers from them can fall into the black hole and interact. We address these issues in this paper. In particular, we (i) present a boundary formulation of a class of in-falling bulk observers; (ii) present an argument that a sharp bulk event horizon can only emerge in the infinite $N$ limit of the boundary theory; (iii) give an explicit construction in the boundary theory of an evolution operator for a bulk in-falling observer, making manifest the boundary emergence of the black hole horizons, the interiors, and the associated causal structure. A by-product is a concept called causal connectability, which is a criterion for any two quantum systems (which do not need to have a known gravity dual) to have an emergent sharp horizon structure.
We compute the contact manifold of null geodesics of the family of spacetimes $\{(\mathbb{S}^2\times\mathbb{S}^1, g_\circ-\frac{d^2}{c^2}dt^2)\}_{d,c\in\mathbb{N}^+\text{ coprime}}$, with $g_\circ$ the round metric on $\mathbb{S}^2$ and $t$ the $\mathbb{S}^1$-coordinate. We find that these are the lens spaces $L(2c,1)$ together with the pushforward of the canonical contact structure on $ST\mathbb{S}^2\cong L(2,1)$ under the natural projection $L(2,1)\to L(2c,1)$. We extend this computation to $Z\times \mathbb{S}^1$ for $Z$ a Zoll manifold. On the other hand, motivated by these examples, we show how Engel geometry can be used to describe the manifold of null geodesics of a certain class of three-dimensional spacetimes, by considering the Cartan deprolongation of their Lorentz prolongation. We characterize the three-dimensional contact manifolds that are contactomorphic to the space of null geodesics of a spacetime. The characterization consists in the existence of an overlying Engel manifold with a certain foliation and, in this case, we also retrieve the spacetime.
In holographic duality an eternal AdS black hole is described by two copies of the boundary CFT in the thermofield double state. In this paper we provide explicit constructions in the boundary theory of infalling time evolutions which can take bulk observers behind the horizon. The constructions also help to illuminate the boundary emergence of the black hole horizons, the interiors, and the associated causal structure. A key element is the emergence, in the large $N$ limit of the boundary theory, of a type III$_1$ von Neumann algebraic structure from the type I boundary operator algebra and the half-sided modular translation structure associated with it.
To what extent does the black hole information paradox lead to violations of quantum mechanics? I explain how black hole complementarity provides a framework to articulate how quantum characterizations of black holes can remain consistent despite the information paradox. I point out that there are two ways to cash out the notion of consistency in play here: an operational notion and a descriptive notion. These two ways of thinking about consistency lead to (at least) two principles of black hole complementarity: an operational principle and a descriptive principle. Our background philosophy of science regarding realism/instrumentalism might initially lead us to prefer one principle over the other. However, the recent physics literature, which applies tools from quantum information theory and quantum computational complexity theory to various thought experiments involving quantum systems in or around black holes, implies that the operational principle is successful where the descriptive principle is not. This then lets us see that for operationalists the black hole information paradox might no longer be pressing.
We initiate an investigation into separable, but physically reasonable, states in relativistic quantum field theory. In particular we will consider the minimum amount of energy density needed to ensure the existence of separable states between given spacelike separated regions. This is a first step towards improving our understanding of the balance between entanglement entropy and energy (density), which is of great physical interest in its own right and also in the context of black hole thermodynamics. We will focus concretely on a linear scalar quantum field in a topologically trivial, four-dimensional globally hyperbolic spacetime. For rather general spacelike separated regions $A$ and $B$ we prove the existence of a separable quasi-free Hadamard state. In Minkowski spacetime we provide a tighter construction for massive free scalar fields: given any $R>0$ we construct a quasi-free Hadamard state which is stationary, homogeneous, spatially isotropic and separable between any two regions in an inertial time slice $t=\mathrm{const.}$ all of whose points have a distance $>R$. We also show that the normal ordered energy density of these states can be made $\le 10^{31}\frac{m^4}{(mR)^8}e^{-\frac14mR}$ (in Planck units). To achieve these results we use a rather explicit construction of test-functions $f$ of positive type for which we can get sufficient control on lower bounds on $\hat{f}$.
Carrollian conformal field theories (carrollian CFTs) are natural field theories on null infinity of an asymptotically flat spacetime or, in general, geometries with conformal carrollian structure. Using a basis transformation, gravitational S-matrix elements can be brought into the form of correlators of a carrollian CFT. Therefore, it has been suggested that carrollian CFTs could provide a co-dimension one dual description to gravity in asymptotically flat spacetimes. In this work, we construct an embedding space formalism for three-dimensional carrollian CFTs and use it to determine two- and three-point correlators. These correlators are fixed by the global subgroup ISO(3,1) of the carrollian conformal symmetries, i.e., the Bondi--van der Burg--Metzner--Sachs symmetries (BMS). The correlators coincide with well-known two- and three-point scattering amplitudes in Minkowski space written with respect to a basis of asymptotic position states.
We find a large internal symmetry within 4-dimensional Poincar\'e gauge theory. In the Riemann-Cartan geometry of Poincar\'e gauge theory the field equation and geodesics are invariant under projective transformation, just as in affine geometry. However, in the Riemann-Cartan case the torsion and nonmetricity tensors change. By generalizing the Riemann-Cartan geometry to allow both torsion and nonmetricity while maintaining local Lorentz symmetry, the difference between the antisymmetric part of the nonmetricity Q and the torsion T becomes a projectively invariant linear combination $S = T - Q$ with the same symmetry as torsion. The structure equations may be written entirely in terms of S and the corresponding Riemann-Cartan curvature. The new description of the geometry has manifest projective and Lorentz symmetries, and vanishing nonmetricity. Torsion, S and Q lie in the vector space of vector-valued 2-forms. Within the extended geometry we define rotations with axis in the direction of S. These rotate both torsion and nonmetricity while leaving S invariant. In n dimensions and (p, q) signature this gives a large internal symmetry. The four-dimensional case acquires SO(11,9) or Spin(11,9) internal symmetry, sufficient for the Standard Model. The most general action up to linearity in second derivatives of the solder form now includes combinations quadratic in torsion and nonmetricity, torsion-nonmetricity couplings, and the Einstein-Hilbert action. Imposing projective invariance reduces this to dependence on S and curvature alone. The new internal symmetry decouples from gravity in agreement with the Coleman-Mandula theorem.
We deal with suitable nonlinear versions of Jauregui's isocapacitary mass in 3-manifolds with nonnegative scalar curvature and compact outermost minimal boundary. These masses, which depend on a parameter $1<p\leq 2$, interpolate between Jauregui's mass ($p=2$) and Huisken's isoperimetric mass (as $p \to 1^+$). We derive positive mass theorems for these masses under mild conditions at infinity, and we show that these masses do coincide with the ADM mass when the latter is defined. We finally work out a nonlinear potential theoretic proof of the Penrose inequality in the optimal asymptotic regime.
The study of Kerr geodesics has a long history, particularly for those occurring within the equatorial plane, which is generally well-understood. However, upon comparison with the classification introduced by one of us \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.105.024075}{[Phys. Rev. D 105, 024075 (2022)]}, it becomes apparent that certain classes of geodesics, such as trapped orbits, are still lacking analytical solutions. Thus, in this study, we provide explicit analytical solutions for equatorial timelike geodesics in Kerr spacetime, including solutions for trapped orbits, which capture the characteristics of special geodesics, such as the positions and conserved quantities of circular orbits, bound orbits, and deflecting orbits. Specifically, we determine the precise location at which retrograde orbits undergo a transition from counter-rotating to prograde motion due to the strong gravitational effects near the rotating black hole. Interestingly, we observe that for orbits with negative energy, the trajectory remains prograde despite the negative angular momentum. Furthermore, we investigate the intriguing phenomenon of deflecting orbits exhibiting an increased number of revolutions around the black hole as their turning point approaches that of the trapped orbit. Additionally, we find that only prograde marginal deflecting geodesics are capable of traversing through the ergoregion. In summary, our findings present explicit solutions for equatorial timelike geodesics and offer insights into the dynamics of particle motion in the vicinity of a rotating black hole.
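For orientation, the equatorial timelike geodesics referred to above obey the standard radial equation in Boyer-Lindquist coordinates (per unit rest mass, with $E$ and $L$ the conserved energy and angular momentum, in units $G=c=1$); this is textbook material quoted only as context, not the new solutions of the paper:
\[
\left(\frac{dr}{d\tau}\right)^{2} \;=\; E^{2}-1+\frac{2M}{r}+\frac{a^{2}\!\left(E^{2}-1\right)-L^{2}}{r^{2}}+\frac{2M\left(L-aE\right)^{2}}{r^{3}},
\]
so circular, bound, trapped and deflecting orbits are distinguished by the root structure of the right-hand side.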
States of Low Energy are a class of exact Hadamard states for free quantum fields on cosmological spacetimes whose structure is fixed at {\it all} scales by a minimization principle. The original construction was for Friedmann-Lema\^{i}tre geometries and is here generalized to anisotropic Bianchi I geometries relevant to primordial cosmology. In addition to proving the Hadamard property, systematic series expansions in the infrared and ultraviolet are developed. The infrared expansion is convergent and induces in the massless case a leading spatial long distance decay that is always Minkowski-like but anisotropy modulated. The ultraviolet expansion is shown to be equivalent to the Hadamard property, and a non-recursive formula for its coefficients is presented.
Although the WKB series converges only asymptotically and guarantees the exact result solely in the eikonal regime, we have managed to derive concise analytical expressions for the quasinormal modes and grey-body factors of black holes, expanding beyond the eikonal approximation. Remarkably, these expressions demonstrate unexpectedly strong accuracy. We suggest a comprehensive approach for deriving analytical expressions for grey-body factors and quasinormal modes at various orders beyond the eikonal approximation. Two cases are examined as examples: the Schwarzschild-de Sitter black hole and hairy black holes within the framework of Effective Field Theory. We have publicly shared a generic code that calculates analytical expressions for grey-body factors and quasinormal modes of spherical black holes.
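As background for the eikonal regime mentioned above, the well-known leading-order eikonal relation (general textbook result, not specific to the models of the paper) expresses the quasinormal modes through the photon-sphere orbital frequency $\Omega_c$ and Lyapunov exponent $\lambda$,
\[
\omega_{\ell n}\;\simeq\;\Omega_c\left(\ell+\tfrac{1}{2}\right)-i\left(n+\tfrac{1}{2}\right)\lvert\lambda\rvert,
\qquad \Omega_c=\lvert\lambda\rvert=\frac{1}{3\sqrt{3}\,M}\ \ \text{(Schwarzschild)},
\]
and beyond-eikonal corrections of the kind constructed in the paper appear as terms suppressed by inverse powers of $\ell+\tfrac{1}{2}$.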
In this thesis, we investigate the method of conformal renormalization applied to theories with degrees of freedom beyond the metric ones. Specifically, we examine this method in the presence of a scalar field. To do this, as part of a review, we revisit the action principle of General Relativity and Einstein's equations, in addition to re-examining the conditions for this theory to have a well-defined variational principle when Dirichlet boundary conditions are imposed. We then explore various methods for calculating conserved charges in asymptotically flat spaces. In asymptotically anti-de Sitter spaces, we study two renormalization schemes that are relevant to this work. To motivate the method used here, we observe that Conformal Gravity is finite for spaces that are asymptotically anti-de Sitter, as demonstrated in [Grumiller, 2014]. This guiding principle gives us a clue as to how conformal symmetry may be related to renormalization in spacetimes with such asymptotics. The basis of this construction is the extension of a covariant tensor under Weyl rescalings composed of the metric and the scalar field as proposed in [Oliva and Ray, 2011]. This extension ensures that the conformal weight of this tensor is equal to that of the Weyl tensor. We extend this realization by considering tensor-scalar theories with conformal symmetry, coupled with the Einstein-AdS action written in the MacDowell-Mansouri form. Despite the fact that the Einstein-AdS sector breaks conformal symmetry, we show that the entire theory can still be renormalized if the scalar field has an appropriate decay when considering asymptotically anti-de Sitter solutions. Finally, we study black hole-type solutions, calculating their Hawking temperature and the Euclidean on-shell action, explicitly demonstrating that the latter is finite for asymptotically anti-de Sitter spaces.
We examine the quantum properties of scalar-tensor gravity with a coupling to the Gauss-Bonnet term, exploring both linear and quadratic couplings. We calculate the leading-order corrections to the non-relativistic one-body gravitational potential and to the metric by studying the external gravitational field of a point-like scalar particle. Light-like scattering is studied and compared with the classical theory. We find that loop corrections are strongly suppressed and cannot significantly affect the black hole shadow for quadratic coupling. The leading-order corrections are important for small-angle scattering and can contribute to the formation of the black hole shadow in the case of linear coupling.
Cosmologies of the lower Bianchi types, i.e., all types except VIII and IX, admit a two-dimensional Abelian subgroup of the isometry group, the $G_2$. For orthogonal perfect fluid cosmologies of almost all lower Bianchi types the $G_2$ acts orthogonally-transitively, which is related to a cessation of the oscillations observed in the higher Bianchi types. However, in orthogonal perfect fluid cosmologies of type VI$_{-1/9}$ the $G_2$ does not necessarily act orthogonally-transitively. As a consequence, the dynamics of type VI$_{-1/9}$ orthogonal perfect fluid cosmologies have the same degrees of freedom as those of the higher types VIII and IX, and their dynamics are expected to be markedly different from those of the other lower Bianchi types. In this article we take a different approach to quiescence, namely the presence of an orthogonal stiff fluid. On the one hand, this completes the analysis of the initial singularity for Bianchi cosmologies with an orthogonal stiff fluid. On the other hand, it allows us to get a grasp of the underlying dynamics, in particular the effect of orthogonal transitivity as well as possible (asymptotic) polarization conditions. In particular, we show that a generic type VI$_{-1/9}$ cosmology with an orthogonal stiff fluid has similar asymptotics to a generic Bianchi type VIII or IX cosmology with an orthogonal stiff fluid. The exceptions to this genericity are solutions satisfying an asymptotic polarization condition and solutions for which the $G_2$ acts orthogonally-transitively. Only in those cases may the limits of the eigenvalues of the expansion-normalized Weingarten map be negative. We also obtain a concise way to represent the dynamics which works more generally for the exceptional type VI$_{-1/9}$ cosmologies, and we obtain a monotonic function for the case of a non-stiff orthogonal perfect fluid that is stiffer than a radiation fluid.
This work proposes a new method for measuring the gluon jet fraction in a jet sample produced at a hadron collider. The method uses model quark/gluon templates -- distributions of quark and gluon jets over a jet macro-parameter. Within this framework, it is possible to estimate the model uncertainty of the measurement associated with the deviation of the model quark/gluon templates from the true ones. The issue of data-motivated corrections to the model quark/gluon templates is also discussed.
To measure the characteristics of quark and gluon jets in hadron-hadron collisions, two samples of jets are used. Given the large statistics of jets at the LHC, the two-sample method requires taking into account the following corrections: (1) using the measured fractions of quark and gluon jets instead of the model ones, (2) correcting for the contribution of jets with an unidentified flavour, and (3) accounting for the dependence of the quark- and gluon-jet distributions on the jet sample selection conditions. The paper presents an improved two-sample method that incorporates these corrections.
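As an illustration of the underlying two-sample idea (a minimal sketch of the uncorrected baseline, not the improved method of the paper): with two jet samples whose quark fractions $f_1 \neq f_2$ are known, the measured distributions of a jet observable are mixtures of the pure quark and gluon templates and can be unfolded bin by bin.

```python
import numpy as np

def extract_templates(D1, D2, f1, f2):
    """Solve D_i = f_i * Q + (1 - f_i) * G bin by bin for the pure quark (Q)
    and gluon (G) templates, given two measured distributions D1, D2
    (normalized histograms) with known quark fractions f1 != f2."""
    D1, D2 = np.asarray(D1, float), np.asarray(D2, float)
    Q = ((1.0 - f2) * D1 - (1.0 - f1) * D2) / (f1 - f2)
    G = (f1 * D2 - f2 * D1) / (f1 - f2)
    return Q, G

# Toy example: build mixtures from assumed (hypothetical) templates and recover them.
bins = np.linspace(0.0, 1.0, 21)
x = 0.5 * (bins[1:] + bins[:-1])
Q_true = np.exp(-8 * x); Q_true /= Q_true.sum()   # hypothetical quark template
G_true = np.exp(-3 * x); G_true /= G_true.sum()   # hypothetical gluon template
f1, f2 = 0.7, 0.3                                 # assumed quark fractions
D1 = f1 * Q_true + (1 - f1) * G_true
D2 = f2 * Q_true + (1 - f2) * G_true
Q, G = extract_templates(D1, D2, f1, f2)
print(np.allclose(Q, Q_true), np.allclose(G, G_true))  # True True
```

The corrections listed in the abstract address precisely what this sketch ignores: imperfectly known fractions, unidentified-flavour jets, and selection-dependent templates.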
In this work we put forward the inclusion of error-mitigation routines in the training of Variational Quantum Circuit (VQC) models. In detail, we define a Real-Time Quantum Error Mitigation (RTQEM) algorithm to assist the task of fitting functions on quantum chips with VQCs. While state-of-the-art QEM methods cannot address the exponential loss concentration induced by noise in current devices, we demonstrate that our RTQEM routine can enhance VQCs' trainability by reducing the corruption of the loss function. We tested the algorithm by simulating and deploying the fit of a one-dimensional {\it u}-quark Parton Distribution Function (PDF) on a superconducting single-qubit device, and we further analyzed the scalability of the proposed technique by simulating a multidimensional fit with up to 8 qubits.
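A schematic of the real-time mitigation idea (a toy NumPy sketch under simplifying assumptions, not the actual RTQEM implementation of the paper): each noisy loss evaluation is rescaled by a mitigation factor that is re-estimated on the fly from a reference expectation value whose ideal result is known, before the gradient step is taken.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_expectation(theta, x):
    # Idealized single-qubit-like model: <Z> = cos(theta * x)
    return np.cos(theta * x)

def noisy_expectation(theta, x, t):
    # Hypothetical device: slowly drifting depolarizing-like factor plus shot noise.
    depol = 0.6 + 0.2 * np.sin(0.01 * t)
    return depol * exact_expectation(theta, x) + rng.normal(0.0, 0.005)

def estimate_mitigation(t, theta_ref=1.0, x_ref=0.5, shots=32):
    # "Real-time" step: measure a reference point with known ideal value and
    # estimate the rescaling that undoes the (assumed) global damping.
    ideal = exact_expectation(theta_ref, x_ref)
    noisy = np.mean([noisy_expectation(theta_ref, x_ref, t) for _ in range(shots)])
    return ideal / noisy

def mitigated_loss(theta, xs, ys, t):
    scale = estimate_mitigation(t)
    preds = np.array([scale * noisy_expectation(theta, x, t) for x in xs])
    return np.mean((preds - ys) ** 2)

# Fit a toy one-dimensional target (a stand-in for the PDF fit in the abstract).
xs = np.linspace(0.1, 1.0, 20)
ys = exact_expectation(2.0, xs)                  # target generated with theta = 2.0
theta, lr, eps = 0.5, 0.2, 0.05
for t in range(300):
    grad = (mitigated_loss(theta + eps, xs, ys, t)
            - mitigated_loss(theta - eps, xs, ys, t)) / (2 * eps)
    theta -= lr * grad
print("fitted theta ~", theta)                   # approaches 2 despite the drifting noise
```

The essential point mirrored here is that the mitigation map is refreshed during training, so a drifting noise model does not bias the loss landscape seen by the optimizer.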
In this paper we discuss $SU(5)^{3}$ with cyclic symmetry as a possible grand unified theory (GUT). The basic idea of such a tri-unification is that there is a separate $SU(5)$ for each fermion family, with the Higgs doublet(s) arising from the third family $SU(5)$, providing a basis for charged fermion mass hierarchies. $SU(5)^{3}$ tri-unification is the first theory that reconciles the idea of gauge non-universality with the idea of gauge coupling unification, opening the possibility to build consistent non-universal descriptions of Nature that are valid all the way up to the scale of grand unification. As a concrete example, we propose a grand unified embedding of the tri-hypercharge model $U(1)_Y^3$ based on an $SU(5)^{3}$ framework with cyclic symmetry. We discuss a minimal tri-hypercharge example which can account for all the quark and lepton (including neutrino) masses and mixing parameters. We show that it is possible to unify the many gauge couplings into a single gauge coupling associated with the cyclic $SU(5)^{3}$ gauge group, by assuming minimal multiplet splitting, together with a set of colour octet scalars. We also study proton decay in this example, and present the predictions for the proton lifetime in the dominant $e^+\pi^0$ channel.
We explore the ideas of resurgence and Pad\'{e}-Borel resummation in the Euler-Heisenberg Lagrangian of scalar quantum electrodynamics, which has remained largely unexamined in these contexts. We thereby extend the related seminal works in spinor quantum electrodynamics, while contrasting the similarities and differences in the two cases. We investigate in detail the efficacy of resurgent extrapolations starting from just a finite number of terms in the weak-field expansions of the 1-loop and 2-loop scalar quantum electrodynamics Euler-Heisenberg Lagrangian. While we re-derive some of the well-known 1-loop and 2-loop contributions in representations suitable for Pad\'{e}-Borel analyses, other contributions have been derived for the first time. For instance, we find a closed analytic form for the one-particle reducible contribution at 2-loop, which until recently was thought to be unimportant. It is pointed out that there could be an interesting interplay between the one-particle irreducible and one-particle reducible terms in the strong-field limit. For the 1-loop contribution, the resurgent analysis may be mapped effectively into two copies of the spinor quantum electrodynamics weak-field expansions, differing only in their prefactors and masses. This simple mapping is no longer true at 2-loops. The singularity structures in the Pad\'{e}-Borel transforms at 1-loop and 2-loop are examined in some detail. Analytic continuation to the electric field case and the generation of an imaginary part is also studied. We compare the Pad\'{e}-Borel reconstructions to closed analytic forms or to numerically computed values in the full theory. It is seen that Pad\'{e}-Borel resummations based on even a small number of terms from the weak-field expansions can accurately approximate the strong-field behaviour.
Two of the conditions that have been suggested to determine the lower boundary of the conformal window in asymptotically free gauge theories are the linear condition, $\gamma_{\bar\psi\psi,IR}=1$, and the quadratic condition, $\gamma_{\bar\psi\psi,IR}(2-\gamma_{\bar\psi\psi,IR})=1$, where $\gamma_{\bar\psi\psi,IR}$ is the anomalous dimension of the operator $\bar\psi\psi$ at an infrared fixed point in a theory. We compare these conditions as applied to an ${\cal N}=1$ supersymmetric gauge theory with gauge group $G$ and $N_f$ pairs of massless chiral superfields $\Phi$ and $\tilde \Phi$ transforming according to the respective representations ${\cal R}$ and $\bar {\cal R}$ of $G$. We use the fact that $\gamma_{\bar\psi\psi,IR}$ and the value $N_f = N_{f,cr}$ at the lower boundary of the conformal window are both known exactly for this theory. In contrast to the case with a non-supersymmetric gauge theory, here we find that in higher-order calculations, the linear condition provides a more accurate determination of $N_{f,cr}$ than the quadratic condition when both are calculated to the same finite order of truncation in a scheme-independent expansion.
It is pointed out that every renormalizable field theory has a symmetry which is hidden in plain sight. In all practical cases, it is also broken softly, either explicitly or spontaneously. Implications for neutrino mass and dark matter are discussed.
We derive an analytical expression for the contribution of order $m\alpha^2 (Z\alpha)^6$ to the hydrogen Lamb shift which comes from the diagrams for radiative corrections to the Wichmann-Kroll potential. We use modern methods of multiloop calculations based on IBP reduction, the DRA method, and differential equations.
We present ERNIE, a computer program which generates nuclear reactor electron antineutrinos and the inverse beta decay events induced by these particles, using the Monte Carlo method. The program allows the use of different antineutrino energy spectrum models and can simulate the time evolution of the overall antineutrino spectrum due to the burn-up effect. The output of the program can readily be used in detector simulations made with, e.g., Geant4.
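A minimal sketch of the kind of sampling such a generator performs (the spectrum shape below is a placeholder, and the leading-order Vogel-Beacom cross-section approximation is quoted only for illustration; neither is taken from the ERNIE paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def reactor_spectrum(E_nu):
    """Placeholder antineutrino flux shape in arbitrary units; a real spectrum
    model would be plugged in here."""
    return np.exp(-E_nu / 1.5) * (E_nu > 1.806)     # IBD threshold ~1.806 MeV

def ibd_cross_section(E_nu):
    """Leading-order inverse-beta-decay cross section (Vogel-Beacom
    approximation), in cm^2; E_nu in MeV."""
    E_e = E_nu - 1.293                              # positron energy, MeV
    p_e = np.sqrt(np.maximum(E_e**2 - 0.511**2, 0.0))
    return 0.0952e-42 * E_e * p_e

def sample_ibd_energies(n, E_max=10.0):
    """Rejection-sample antineutrino energies of detected IBD events,
    i.e. from flux(E) * sigma(E)."""
    E_grid = np.linspace(1.81, E_max, 1000)
    w_max = np.max(reactor_spectrum(E_grid) * ibd_cross_section(E_grid))
    out = []
    while len(out) < n:
        E = rng.uniform(1.81, E_max)
        if rng.uniform(0.0, w_max) < reactor_spectrum(E) * ibd_cross_section(E):
            out.append(E)
    return np.array(out)

events = sample_ibd_energies(10000)
print("mean detected antineutrino energy ~", round(events.mean(), 2), "MeV")
```

Burn-up evolution would correspond to letting the spectrum model depend on time through the changing fuel composition.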
A new upcoming version of SusHi is introduced. It features unified input of Standard Model (SM) and beyond-the-SM (BSM) parameters for higher-order total cross sections for Higgs production in gluon fusion, heavy-quark annihilation, and Higgsstrahlung. Like previous versions of SusHi, it provides links to codes like 2HDMC and FeynHiggs, but can also process standard SLHA output of spectrum generators like SOFTSUSY and SPheno.
We consider the contribution of the Odderon to diffractive $pp$ and $p\bar p$ elastic scattering at large center-of-mass energy. We identify the Odderon and Pomeron with the Reggeized $1^{\pm-}$ and $2^{++}$ glueballs in the bulk, respectively. For the gravity dual description we use the repulsive wall model, to account for the proper Gribov diffusion in off-forward scattering. The eikonalized and unitarized amplitudes exhibit a vanishingly small $\rho$-parameter and a slope parameter fixed by twice the closed-string slope. The results for the differential and total cross sections are compared to the empirical results reported recently by the TOTEM collaboration.
Generalized distribution amplitudes (GDAs) of mesons can be probed by the reactions $e^- e^+ \to M_1 M_2 \gamma$, which are accessible at electron-positron colliders such as BESIII and Belle II. After discussing the neutral meson production case in the first paper of this series, we discuss here the complementary case of charged meson ($M^+ M^-$) production, where one can extract the complete information on GDAs from the interference of the amplitudes of the two competing processes in which the photon is emitted either in the initial or in the final state. Considering the importance of charged meson production, we present a complete expression for the interference term of the cross section, which is experimentally accessible thanks to its charge-conjugation-specific property. We adopt two types of models for leading-twist $\pi \pi$ GDAs to estimate the size of the interference term in the process $e^- e^+ \to \pi^+ \pi^- \gamma$ numerically, namely a model extracted from previous experimental results on $\gamma^* \gamma \to \pi^0 \pi^0$ at Belle and the asymptotic form predicted by QCD evolution equations. We include in the calculation the kinematical higher-twist corrections up to twist 4 for the helicity amplitudes. Both models of GDAs indicate that the kinematical corrections are not negligible for the interference term of the cross section measured at BESIII; it is therefore necessary to include them if one aims to extract the GDAs precisely. On the other hand, the kinematical corrections are very small for the measurements at Belle II, and the leading twist-2 formula for the interference term will be good enough to describe the charge-conjugation-odd part of the differential cross section.
We describe the approach to lattice extraction of Generalized Parton Distributions (GPDs) that is based on the use of the double distributions (DDs) formalism within the pseudo-distribution framework. The advantage of using DDs is that GPDs obtained in this way have the mandatory polynomiality property, a non-trivial correlation between $x$- and $\xi$-dependences of GPDs. Another advantage of using DDs is that the $D$-term appears as an independent entity in the DD formalism rather than a part of GPDs $H$ and $E$. We relate the $\xi$-dependence of GPDs to the width of the $\alpha$-profiles of the corresponding DDs, and discuss strategies for fitting lattice-extracted pseudo-distributions by DDs.
Nuclear structure at short $NN$-distances is still poorly understood. In particular, the full quantum structure of the nucleus with a correlated $NN$-pair is a challenge to theory. So far, model descriptions have been limited to the average mean-field picture of the remaining nuclear system after removing the $NN$-pair. In the recent experiment of the BM@N Collaboration at JINR \cite{Patsyuk:2021fju}, the reactions $^{12}\mbox{C}(p,2pn_s)^{10}\mbox{B}$ and $^{12}\mbox{C}(p,2pp_s)^{10}\mbox{Be}$ induced by the hard elastic $pp$ scattering were studied. Here, $n_s$ or $p_s$ denote the undetected slow nucleon in the rest frame of $^{12}\mbox{C}$. In contrast to the previous experiments, the residual bound nucleus was also detected which requires a new level of theoretical understanding. In the present work, we apply the technique of fractional parentage coefficients of the translationally-invariant shell model (TISM) to calculate the spectroscopic amplitude of the system $NN-B$ where $B$ is the remaining nuclear system. The spectroscopic amplitude enters the full amplitude of a nuclear reaction. The relative $NN-B$ wave function is no longer a free parameter of the model but is uniquely related to the internal state of $B$. The interaction of the target proton with the $NN$-pair is considered in the impulse approximation. We also include the initial- and final state interactions of absorptive type as well as the single charge exchange processes. Our calculations are in a reasonable agreement with the BM@N data.
We use the available data on $\langle dN/dy\rangle$ and $\langle p_T\rangle$ for identified hadrons, including $\pi^{+}+\pi^{-}$, $K^{+}+K^{-}$, $p+\overline{p}$, $K^*(892)^0$ and $\varphi$ mesons, registered at midrapidity ($\vert y\vert < 0.5$) in central 0-5% Au-Au, Pb-Pb and Xe-Xe collisions over a broad range of energies, in order to compare their relative contributions to the Bjorken energy density. Particles such as the strangeness-neutral $\varphi$ meson (an $s\overline{s}$ system) and the $K$ meson (containing a single $s$ quark) are of particular interest because they might have different production mechanisms and differ in their sensitivity to the properties of the QGP medium formed in relativistic heavy-ion collisions.
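For reference, the Bjorken estimate referred to above is the standard expression (with $\tau_0$ the formation time and $S_\perp$ the transverse overlap area), in which each identified species contributes through its rapidity density and mean transverse mass:
\[
\varepsilon_{\rm Bj}\;=\;\frac{1}{\tau_0 S_\perp}\frac{dE_T}{dy}\;\simeq\;\frac{1}{\tau_0 S_\perp}\sum_i \langle m_{T,i}\rangle\,\frac{dN_i}{dy},
\qquad \langle m_{T,i}\rangle \approx \sqrt{\langle p_{T,i}\rangle^{2}+m_i^{2}},
\]
which is why the measured $\langle dN/dy\rangle$ and $\langle p_T\rangle$ values are the relevant inputs for the comparison.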
Using the dispersive method we perform a two-loop analysis of the leading non-perturbative power correction to the change in jet transverse momentum $p_T$, in the small-$R$ limit of the Cambridge-Aachen jet clustering algorithm. We frame the calculation in such a way as to maintain the connection with the universal "Milan factor" that corrects for the naive inclusive treatment of the leading hadronization corrections. We derive an enhancement factor that differs from the universal Milan factor computed for event-shape variables as well as from the corresponding enhancement factor previously derived for the $k_t$ algorithm. Our calculation directly exploits the soft and triple-collinear limit of the QCD matrix element and phase space, which is relevant for capturing the coefficient of the leading $1/R$ power correction. As an additional check on our new calculational framework, we also independently confirm the known result for the $k_t$ algorithm.
In the context of the Standard Model effective field theory (SMEFT), the next-to-next-to-leading order (NNLO) QCD corrections to the Higgsstrahlung ($Vh$) processes in hadronic collisions are calculated and matched to a parton shower (PS). NNLO+PS precision is achieved for the complete sets of SMEFT operators that describe the interactions between the Higgs and two vector bosons and the couplings of the Higgs, a $W$ or a $Z$ boson, and light fermions. A POWHEG-BOX implementation of the computed NNLO SMEFT corrections is provided that allows for a realistic exclusive description of $Vh$ production at the level of hadronic events. This feature makes it an essential tool for future Higgs characterisation studies by the ATLAS and CMS collaborations. Utilising our new Monte Carlo code, the numerical impact of NNLO+PS corrections on the kinematic distributions in $pp \to Zh \to \ell^+ \ell^- h$ production is explored, employing well-motivated SMEFT benchmark scenarios.
Parton-level event generators are one of the most computationally demanding parts of the simulation chain for the Large Hadron Collider. The rapid deployment of computing hardware different from the traditional CPU+RAM model in data centers around the world mandates a change in event generator design. These changes are required in order to provide economically and ecologically sustainable simulations for the high-luminosity era of the LHC. We present the first complete leading-order parton-level event generation framework capable of utilizing most modern hardware. Furthermore, we discuss its performance in the standard candle processes of vector boson and top-quark pair production with up to five additional jets.
A minimal extension of the Standard Model (SM) featuring two scalar leptoquarks, an SU(2) doublet with hypercharge 1/6 and a singlet with hypercharge 1/3, is proposed as an economical benchmark model for studies of the interplay between flavour physics and the properties of the neutrino sector. The presence of this type of leptoquark radiatively generates neutrino masses and offers a simultaneous explanation for the current B-physics anomalies involving $b \to c \ell \nu_\ell$ decays. The model can also accommodate both the muon magnetic moment and the recently reported $W$-mass anomalies, while complying with the most stringent constraints from lepton flavour violating observables.
In the present work, we re-analyze the available data for $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ by considering the contributions from the $N(2080){3/2}^-$ and $N(2270)3/2^-$ molecules instead of any nucleon resonances in the $s$ channel, where the $N(2080)3/2^-$ was proposed to be a $K^\ast \Sigma$ molecule as the strange partner of the $P_c^+(4457)$ hadronic molecular state, and the $N(2270)3/2^-$ was assumed to be a $K^*\Sigma^*$ molecule as the strange partner of the $\bar{D}^\ast \Sigma^\ast_c$ bound states that are predicted to be members of the same heavy-quark spin symmetry multiplet as the $P_c$ states. It turns out that all the available cross-section data can be well reproduced, indicating that the molecular structures of the possible $N(2080){3/2}^-$ and $N(2270)3/2^-$ states are compatible with the available data for $K^\ast\Sigma$ photoproduction reactions. Further analysis shows that for both the $\gamma p\to K^{\ast +}\Sigma^0$ and $\gamma p \to K^{\ast 0}\Sigma^+$ reactions, the $N(2080){3/2}^-$ exchange provides the dominant contributions to the cross sections in the near-threshold energy region, and significant contributions from the $N(2270)3/2^-$ exchange to the cross sections in the higher-energy region are also found. Predictions for the beam asymmetry $\Sigma$, target asymmetry $T$, and recoil baryon asymmetry $P$ are presented and compared with those from our previous work. Measurements of these observables are called for to further constrain the reaction mechanisms of $K^\ast\Sigma$ photoproduction and to verify the molecular scenario for the $N(2080){3/2}^-$ and $N(2270)3/2^-$ states.
Motivated by the recent measurement of muon anomalous magnetic moment at Fermilab, the rapid progress of the LHC search for supersymmetry, and the significantly improved sensitivities of dark matter direct detection experiments, we studied their impacts on the Minimal Supersymmetric Standard Model (MSSM). We conclude that higgsino mass should be larger than about $500~{\rm GeV}$ for $M_1 < 0 $ and $630~{\rm GeV}$ for $M_1 > 100~{\rm GeV}$, where $M_1$ denotes the bino mass. These improved bounds imply a tuning of ${\cal{O}}(1\%)$ to predict the $Z$-boson mass and simultaneously worsen the naturalness of the $Z$- and $h$-mediated resonant annihilations to achieve the measured dark matter density. We also conclude that the LHC restrictions have set lower bounds on the sparticle mass spectra: $ m_{\tilde{\chi}_1^0} \gtrsim 210~{\rm GeV}$, $m_{\tilde{\chi}_2^0}, m_{\tilde{\chi}_1^\pm} \gtrsim 235~{\rm GeV}$, $m_{\tilde{\chi}_3^0} \gtrsim 515~{\rm GeV}$, $m_{\tilde{\chi}_4^0} \gtrsim 525~{\rm GeV}$, $m_{\tilde{\chi}_2^\pm} \gtrsim 530~{\rm GeV}$, $m_{\tilde{\nu}_\mu} \gtrsim 235~{\rm GeV}$, $ m_{\tilde{\mu}_1} \gtrsim 215~{\rm GeV}$, and $m_{\tilde{\mu}_2} \gtrsim 250~{\rm GeV}$, where $\tilde{\chi}_{2}^0$ and $\tilde{\chi}_1^\pm$ are wino-dominated when they are lighter than about $500~{\rm GeV}$. These bounds are far beyond the reach of the LEP experiments in searching for supersymmetry and have not been acquired before. In addition, we illuminate how some parameter spaces of the MSSM have been tested at the LHC and provide five scenarios in which the theory coincides with the LHC restrictions. Once the muon g-2 anomaly is confirmed to originate from supersymmetry, this research may serve as a guide to explore the characteristics of the MSSM in future experiments.
We present a recast, in different benchmark models, of the recent CMS search that uses the endcap muon detector system to identify displaced showers produced by decays of long-lived particles (LLPs). The exceptional shielding provided by the steel between the stations of the muon system drastically reduces the Standard Model background that limits other existing ATLAS and CMS searches. At the same time, by using the muon system as a sampling calorimeter, the search is sensitive to LLP energies rather than masses. We show that, thanks to these characteristics, this new search approach is sensitive to LLP masses even lighter than a GeV, and can be complementary to proposed and existing dedicated LLP experiments.
We study the contribution of a heavy right-handed Majorana neutrino to neutrinoless double beta decay ($0\nu\beta\beta$) via four-fermion effective interactions of Nambu-Jona-Lasinio (NJL) type. In this physical scenario, the sterile neutrino contributes to the nuclear transition through gauge, contact, and mixed interactions. Using the lower limit on the half-life of $0\nu\beta\beta$ from the KamLAND-Zen experiment, we then constrain the effective right-handed coupling between the sterile neutrino and the $W$ boson: $\mathcal{G}^{W}_{R}$. Eventually, we show that the obtained bounds are compatible with those found in the literature, which highlights the complementarity of this type of phenomenological study with high-energy experiments.
Some properties of a neutrino may differ significantly depending on whether it is of Dirac or Majorana type. The type is determined by the relative size of the Dirac and Majorana masses, which may vary if they arise from an oscillating scalar dark matter field. We show that the change can be significant enough to convert the neutrino type between Dirac and Majorana periodically while satisfying constraints on the dark matter. This neutrino type oscillation predicts periodic modulations of the event rates in various neutrino phenomena, including neutrinoless double beta decay. As the energy density, and thus the oscillation amplitude, of the dark matter evolves on the cosmic time scale, the neutrino masses change accordingly, which provides an interesting link between present-time neutrino physics and early-universe cosmology, including leptogenesis.
We point out a dark matter candidate which arises in a minimal extension of solutions to the hierarchy problem based on compositeness. In such models, some or all of the Standard Model fields are composites of a conformal field theory (CFT) which confines near the electroweak scale. We posit an elementary scalar field, whose mass is expected to lie near the cutoff of the CFT, and whose couplings to the Standard Model are suppressed by the cutoff. Hence it can naturally be ultraheavy and feebly coupled. This scalar can constitute all of the dark matter for masses between $10^{10}$ GeV and $10^{18}$ GeV, with the relic abundance produced by the freeze-in mechanism via a coupling to the CFT. The principal experimental constraints come from bounds on the tensor-to-scalar ratio. We speculate about future detection prospects.
In this work we present a one-loop calculation of the bulk viscosity of a system of rotating, hot and dense spin-1/2 fermions within the Kubo formalism, computed from correlation functions of fields, which are in turn used to obtain the spectral function of the energy-momentum tensor. The calculation is carried out in curved space with the help of the tetrad formalism, in which the gamma matrices assume their generic, space-dependent structure. Techniques of thermal field theory are employed which take the three energy scales, viz. temperature, chemical potential and angular velocity, into account in the Matsubara frequency summation. The study is performed in the regime of very large angular velocities, ranging from 0.1 to 1.0 GeV. The fermion propagator used in this work is appropriate for this regime of large angular velocities. We explore the behaviour of the bulk viscosity with angular velocity, temperature and chemical potential through our plots.
We present a procedure leveraging Bayesian deep active learning to rapidly produce highly accurate approximate bounded-from-below conditions for arbitrary renormalizable scalar potentials, in the form of a neural network which may be saved and exported for use in arbitrary parameter-space scans. We explore the performance of our procedure on three different scalar potentials with either highly nontrivial or unknown symbolic bounded-from-below conditions (the two-Higgs doublet model, the three-Higgs doublet model, and a version of the Georgi-Machacek model without custodial symmetry). We find that we can produce fast and highly accurate binary classifiers for all three potentials. Furthermore, for the potentials for which no known symbolic necessary and sufficient conditions for boundedness-from-below exist, our classifiers substantially outperform some common approximate analytical methods, such as producing tractable sufficient but not necessary conditions or evaluating boundedness-from-below conditions for scenarios in which only a subset of the theory's fields acquire vevs. Our methodology can be readily adapted to any renormalizable scalar field theory. For the community's use, we have developed a Python package, BFBrain, which allows for the rapid implementation of our analysis procedure on user-specified scalar potentials with a high degree of customizability.
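A schematic of the active-learning loop (a toy sketch: a simple two-field potential with known exact bounded-from-below conditions serves as the oracle, and a small query-by-committee ensemble stands in for the Bayesian network used in BFBrain; none of this reproduces the package's actual API):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def is_bfb(l1, l2, l3):
    """Oracle label for the toy potential V = l1*phi1^4 + l2*phi2^4 + l3*phi1^2*phi2^2,
    whose exact bounded-from-below conditions are l1 > 0, l2 > 0, l3 > -2*sqrt(l1*l2)."""
    return (l1 > 0) & (l2 > 0) & (l3 > -2.0 * np.sqrt(np.clip(l1, 0, None) * np.clip(l2, 0, None)))

def sample_couplings(n):
    return rng.uniform(-1.0, 1.0, size=(n, 3))

def committee_disagreement(models, X):
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return probs.std(axis=0)          # high spread = models disagree = informative point

# Initial labelled set and unlabelled candidate pool.
X = sample_couplings(200)
y = is_bfb(*X.T).astype(int)
pool = sample_couplings(20000)

for step in range(10):
    models = [MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=600,
                            random_state=s).fit(X, y) for s in range(5)]
    # Query the most ambiguous candidates and label them with the oracle.
    idx = np.argsort(committee_disagreement(models, pool))[-100:]
    X = np.vstack([X, pool[idx]])
    y = np.concatenate([y, is_bfb(*pool[idx].T).astype(int)])
    pool = np.delete(pool, idx, axis=0)

# Evaluate the final committee (majority vote) on fresh points.
X_test = sample_couplings(5000)
pred = np.mean([m.predict(X_test) for m in models], axis=0) > 0.5
print("accuracy:", np.mean(pred == is_bfb(*X_test.T)))
```

In the realistic potentials of the paper the oracle is instead a numerical check of the quartic part along sampled field directions, which is exactly why an exported classifier is valuable for fast parameter-space scans.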
We investigate the differential emission rate of neutral scalar bosons from a highly magnetized relativistic plasma. We show that three processes contribute at leading order: particle splitting ($\psi\rightarrow \psi+\phi $), antiparticle splitting ($\bar{\psi} \rightarrow \bar{\psi}+\phi $), and particle-antiparticle annihilation ($\psi + \bar{\psi}\rightarrow \phi $). This is in contrast to the scenario with zero magnetic field, where only the annihilation process contributes to boson production. We examine the impact of Landau-level quantization on the energy dependence of the rate and investigate the angular distribution of the emitted scalar bosons. The differential rates resulting from both (anti)particle splitting and annihilation processes are typically suppressed in the direction of the magnetic field and enhanced in perpendicular directions. Overall, the background magnetic field significantly amplifies the total emission rate. We speculate that our model calculations provide valuable theoretical insights with potentially important applications.
We consider continuum-formulation QCD in four dimensions with twelve massless fundamental quark flavors. Splitting the SU(N) gauge field into background and fluctuation parts, we use well-developed techniques to calculate the one-loop effective action for the theory. We find that for a constant self-dual background field-strength tensor the notorious infrared divergences of the effective action cancel between gauge and matter sectors if the number of massless quark flavors is exactly $N_f = 4N$. The ultraviolet divergences of the effective action are non-perturbatively renormalized with a $\beta$-function that matches the known perturbative result in the weak-coupling limit. The resulting UV- and IR-finite effective action possesses a non-trivial minimum which has lower free energy than the perturbative vacuum, and for which the expectation value of the Polyakov loop vanishes. Inclusion of finite-temperature effects points to the presence of a first-order phase transition to the perturbative vacuum with a calculable critical temperature.
We calculate the four-top quark operator contributions to Higgs production via gluon fusion in the Standard Model Effective Field Theory. The four-top operators enter for the first time via two-loop diagrams. Due to their chiral structure they contain $\gamma_5$, so special care needs to be taken when using dimensional regularisation for the loop integrals. We use two different schemes for the continuation of $\gamma_5$ to $D$ space-time dimensions in our calculations and present a mapping for the parameters in the two schemes. This generically leads to an interplay of different operators, such as four-top operators, chromomagnetic operators or Yukawa-type operators at the loop level. We validate our results by examples of matching onto UV models.
We derive a general formula for two-loop counterterms in Effective Field Theories (EFTs) using a geometric approach. This formula allows the two-loop results of our previous paper to be applied to a wide range of theories. The two-loop results hold for loop graphs in EFTs where the interaction vertices contain operators of arbitrarily high dimension, but at most two derivatives. We also extend our previous one-loop result to include operators with an arbitrary number of derivatives, as long as there is at most one derivative acting on each field. The final result for the two-loop counterterms is written in terms of geometric quantities such as the Riemann curvature tensor of the scalar manifold and its covariant derivatives. As applications of our results, we give the two-loop counterterms and renormalization group equations for the O(n) EFT to dimension six, the scalar sector of the Standard Model Effective Field Theory (SMEFT) to dimension six, and chiral perturbation theory to order $p^6$.
Kaon physics is at a turning point -- while the rare-kaon experiments NA62 and KOTO are in full swing, the end of their lifetime is approaching and the future experimental landscape needs to be defined. With HIKE, KOTO-II and LHCb-Phase-II on the table and under scrutiny, it is a very good moment in time to take stock and contemplate about the opportunities these experiments and theoretical developments provide for particle physics in the coming decade and beyond. This paper provides a compact summary of talks and discussions from the Kaons@CERN 2023 workshop.
It has recently been understood that the complete global symmetry of finite group topological gauge theories contains the structure of a higher-group. Here we study the higher-group structure in (3+1)D $\mathbb{Z}_2$ gauge theory with an emergent fermion, and point out that pumping chiral $p+ip$ topological states gives rise to a $\mathbb{Z}_{8}$ 0-form symmetry with mixed gravitational anomaly. This ordinary symmetry mixes with the other higher symmetries to form a 3-group structure, which we examine in detail. We then show that in the context of stabilizer quantum codes, one can obtain logical CCZ and CS gates by placing the code on a discretization of $T^3$ (3-torus) and $T^2 \rtimes_{C_2} S^1$ (2-torus bundle over the circle) respectively, and pumping $p+ip$ states. Our considerations also imply the possibility of a logical $T$ gate by placing the code on $\mathbb{RP}^3$ and pumping a $p+ip$ topological state.
We consider the 4-dimensional $\mathcal{N}=1$ Lie superconformal algebra and search for completely "symmetric" (in the graded sense) 3-index invariant tensors. The solution we find is unique and we show that the corresponding invariant polynomial cubic in the generalized curvatures of superconformal gravity vanishes. Consequently, the associated Chern-Simons polynomial is a non-trivial anomaly cocycle. We explicitly compute this cocycle to all orders in the independent fields of superconformal gravity and establish that it is BRST equivalent to the so-called superconformal $a$-anomaly. We briefly discuss the possibility that the superconformal $c$-anomaly also admits a similar Chern-Simons formulation.
Periodically driven quantum systems known as Floquet insulators can host topologically protected bound states known as "$\pi$ modes" that exhibit response at half the frequency of the drive. Such states can also appear in undriven lattice field theories when time is discretized as a result of fermion doubling, raising the question of whether these two phenomena could be connected. Recently, we demonstrated such a connection at the level of an explicit mapping between the spectra of a continuous-time Floquet model and a discrete-time undriven lattice fermion model. However, this mapping relied on a symmetry of the single-particle spectrum that is not present for generic drive parameters. Inspired by the example of the temporal Wilson term in lattice field theory, in this paper we extend this mapping to the full drive parameter space by allowing the parameters of the discrete-time model to be frequency-dependent. The spectra of the resulting lattice fermion models exactly match the quasienergy spectrum of the Floquet model in the thermodynamic limit. Our results demonstrate that spectral features characteristic of beyond-equilibrium physics in Floquet systems can be replicated in static systems with appropriate time discretization.
We formulate a geometric measurement theory of dynamical classical systems possessing both continuous and discrete degrees of freedom. The approach is covariant with respect to choices of clocks and canonically incorporates laboratories. The latter are embedded symplectic submanifolds of an odd-dimensional symplectic structure. When suitably defined, symplectic geometry in odd dimensions is exactly the structure needed for covariance. A fundamentally probabilistic viewpoint allows classical supergeometries to describe discrete dynamics. We solve the problem of how to construct probabilistic measures on supermanifolds given a (possibly odd dimensional) supersymplectic structure. This relies on a superanalog of the Hodge star for differential forms and a description of probabilities by convex cones. We also show how stochastic processes such as Markov chains can be described by supergeometry.
We employ the Einstein-Abelian-Higgs theory to investigate the structure of vortex-antivortex lattices within a superconductor driven by spatial periodic magnetic fields. By adjusting the parameters of the external magnetic field, including the period ($\mathcal{T}$) and the amplitude ($B_0$), various distinct vortex states emerge. These states encompass the Wigner crystallization state, the vortex cluster state, and the suppressed state. Additionally, we present a comprehensive phase diagram to demarcate the specific regions where these structures emerge, contributing to our understanding of superconductivity in complex magnetic environments.
We point out that area laws of quantum-information quantities indicate limitations of block transformations as well-behaved real-space renormalization group (RG) maps, which in turn guides the design of better RG schemes. Mutual-information area laws imply the difficulty of Kadanoff's block-spin method in two dimensions (2D) or higher due to the growth of short-scale correlations among the spins on the boundary of a block. A leap to the tensor-network RG, in hindsight, follows the guidance of mutual information and is efficient in 2D, thanks to its mixture of quantum and classical perspectives and the saturation of entanglement entropy in 2D. In three dimensions (3D), however, entanglement grows according to the area law, posing a threat to the 3D block-tensor map as an apt RG transformation. As numerical evidence, we show that estimates of the 3D Ising critical exponents fail to improve when more couplings are retained. As guidance on how to proceed, a tensor-network toy model is proposed to capture the 3D entanglement-entropy area law.
We study the one-loop renormalisation of 4d SU(N) Yang-Mills theory with $M$ adjoint representation scalar multiplets. We calculate the coupled one-loop renormalization group flows for this theory by developing an algebraic description, which we find to be characterised by a non-associative algebra of marginal couplings. The 4d one-loop beta function of the gauge coupling $g^2$ vanishes for the case $M = 22$, which is intriguing for string theory. There are real fixed flows (fixed points of $\lambda/g^2$) only for $M\geq406$, rendering one-loop fixed points of the gauge coupling and scalar couplings incompatible.
We extend the computation of the one-loop partition function in AdS$_{d+1}$, using the method of [arXiv:2201.09043] and [arXiv:2303.02711] for scalars and fermions, to the case of $U(1)$ vectors. This method utilizes the eigenfunctions of the AdS Laplacian for vectors. For finite temperature, the partition function is obtained by generalizing the eigenfunctions so that they are invariant under the quotient group action which defines the thermal AdS spaces. The results obtained match those available in the literature. As an application of these results, we then analyze the phases of scalar QED theories at one loop in $d=2,3$. We do this first as a function of the AdS radius at zero temperature, showing that the results reduce to those in flat space in the large AdS radius limit. Thereafter the phases are studied as a function of the scalar mass and temperature. We also derive effective potentials for scalar QED theories with $N$ scalars.
Supersymmetry is a theme with many facets that has dominated much of high energy physics over the past decades. In this contribution I present a very personal perspective on these developments, which has also been shaped in an important way by my interactions with Julius Wess.
We propose a generalized left-handed (chiral) gauge choice for the genus one Riemann surface, realized through a singular gauge transformation of worldsheet coordinates. The transformation predominantly affects the logarithmic non-zero modes of the Green's function, leaving non-holomorphic and non-logarithmic modes unchanged. This procedure yields $\delta$-functions for chiral coordinates and box-diagram-like integrals in terms of modular parameters. The resulting $\delta$-functions formulate one-loop level Scattering Equations that simplify to satisfy the tree-level solutions, constraining the locations of the marked points. Subsequent integrals agree with the field-theoretic box diagram for the four-point amplitude, in accordance with the divergent $\epsilon$ expansions derived from dimensional regularization in the infrared limit. We conclude by highlighting potential avenues for future research, including the exploration of methodologies that preclude the need for worldsheet coordinates reparametrization and their implications for accurately capturing infrared behavior from modular parameter integrals.
We discuss a decomposition formula for simple products of fermion correlation functions with cyclic constraints and its applications to spin sums of superstring amplitudes. Based on some facts which are noted or derived in this paper, we propose a candidate form of this decomposition formula for some higher-genus cases, including the genus-two case. Although we had to use several conjectures and assumptions due to unsolved mathematical difficulties, the method described in the text may be an efficient way to obtain the decomposition formula in higher-genus cases. In particular, for those cases, we propose a concrete method to sum over non-singular even spin structures for the product of an arbitrary number of fermion correlation functions with cyclic constraints in superstring amplitudes. In the course of these considerations we also propose an explicit generalization of the Eisenstein-Kronecker series to the higher-genus cases.
We revisit semiclassical strings, focusing in particular on rigidly rotating strings, in the near-horizon geometry of two orthogonal stacks of NS5-branes (I-branes) using the string sigma model. We determine the conserved charges for the probe string moving in the resulting $\mathbb{R}_t \times S^3_{\theta_1}\times S^3_{\theta_2}$ background supported by NS-NS two-forms and find a regularized dispersion relation for different values of the integration constants. Using configurations that move simultaneously on both spheres, we obtain a giant-magnon-like dispersion relation for one particular set of parameters, while another consistent set of values gives a dispersion relation reminiscent of the single spike.
We calculate the instanton corrections to energy spectra of one-dimensional quantum mechanical oscillators to all orders and unify them in a closed form transseries description. Using alien calculus, we clarify the resurgent structure of these transseries and demonstrate two approaches in which the Stokes constants can be derived. As a result, we formulate a minimal one-parameter transseries for the natural nonperturbative extension to the perturbative energy, which captures the Stokes phenomenon in a single stroke. We derive these results in three models: quantum oscillators with cubic, symmetric double well and cosine potentials. In the latter two examples, we find that the resulting full transseries for the energy has a more convoluted structure that we can factorise in terms of a minimal and a median transseries. For the cosine potential we briefly discuss this more complicated transseries structure in conjunction with topology and the concept of the resurgence triangle.
The superconformal index $Z$ of the 6d (2,0) theory on $S^5 \times S^1$ (which is related to the localization partition function of 5d SYM on $S^5$) should be captured at large $N$ by the quantum M2 brane theory in the dual M-theory background. Generalizing the type IIA string theory limit of this relation discussed in arXiv:2111.15493 and arXiv:2304.12340, we consider semiclassically quantized M2 branes in a half-supersymmetric 11d background which is a twisted product of thermal AdS$_7$ and $S^4$. We show that the leading non-perturbative term at large $N$ is reproduced precisely by the 1-loop partition function of an "instanton" M2 brane wrapped on $S^1\times S^2$ with $S^2\subset S^4$. Similarly, the (2,0) theory analog of the BPS Wilson loop expectation value is reproduced by the partition function of a "defect" M2 brane wrapped on thermal AdS$_3\subset$ AdS$_7$. We comment on a curious analogy of these results with similar computations in arXiv:2303.15207 and arXiv:2307.14112 of the partition function of quantum M2 branes in AdS$_4 \times S^7/\mathbb Z_k$ which reproduced the corresponding localization expressions in the ABJM 3d gauge theory.
In a recent paper, we stated conjectural presentations for the equivariant quantum K ring of partial flag varieties, motivated by physics considerations. In this companion paper, we analyze these presentations mathematically. We prove that if the conjectured relations hold, then they must form a complete set of relations. Our main result is a proof of the conjectured presentation in the case of the incidence varieties. We also show that if a quantum K divisor axiom holds (as conjectured by Buch and Mihalcea), then the conjectured presentation also holds for the complete flag variety.
We study the multiplicity of irreducible representations in the decomposition of $n$ fundamentals of $SU(N)$ weighted by a power of their dimension in the large $n$ and large $N$ double scaling limit. A nontrivial scaling is obtained by keeping $n/N^2$ fixed, which plays the role of an order parameter. We find that the system generically undergoes a fourth order phase transition in this parameter, from a dense phase to a dilute phase. The transition is enhanced to third order for the unweighted multiplicity, and disappears altogether when weighting with the first power of the dimension. This corresponds to the infinite temperature partition function of non-Abelian ferromagnets, and the results should be relevant to the thermodynamic limit of such ferromagnets at high temperatures.
We have constructed a generative artificial intelligence model to predict dual gravity solutions when provided with the input of holographic entanglement entropy. The model utilized in our study is based on the transformer algorithm, widely used for various natural language tasks including text generation, summarization, and translation. This algorithm possesses the ability to understand the meanings of input and output sequences by utilizing multi-head attention layers. In the training procedure, we generated pairs of examples consisting of holographic entanglement entropy data and their corresponding metric solutions. Once the model has completed the training process, it demonstrates the ability to generate predictions regarding a dual geometry that corresponds to the given holographic entanglement entropy. Subsequently, we proceed to validate the dual geometry to confirm its correspondence with the holographic entanglement entropy data.
We study the two-matrix model for the double-scaled SYK model, called the ETH matrix model, introduced by Jafferis et al. [arXiv:2209.02131]. If we set the parameters $q_A,q_B$ of this model to zero, the potential of this two-matrix model is given by the Gaussian terms and the $q$-commutator squared interaction. We find that this model is solvable in the large $N$ limit, and we explicitly construct the planar one- and two-point functions of the resolvents in terms of elliptic functions.
Hadronic resonance production plays an important role both in elementary and in nucleus-nucleus collisions. In heavy-ion collisions, since the lifetimes of short-lived resonances are comparable with the lifetime of the late hadronic phase, regeneration and rescattering effects become important and ratios of yields of resonances relative to those of longer lived particles can be used to estimate the time interval between the chemical and kinetic freeze-out. The measurements in pp and p--Pb collisions constitute a reference for nuclear collisions and provide information for tuning event generators inspired by Quantum Chromodynamics.
The proposed Circular Electron Positron Collider (CEPC) imposes new challenges for the vertex detector in terms of pixel size and material budget. A Monolithic Active Pixel Sensor (MAPS) prototype called TaichuPix, based on a column drain readout architecture, has been developed to address the need for high spatial resolution. In order to evaluate the performance of the TaichuPix-3 chips, a beam test was carried out at DESY II TB21 in December 2022. Meanwhile, the Data Acquisition (DAQ) for a multi-plane configuration was tested during the beam test. This work presents the characterization of the TaichuPix-3 chips with two different processes, including cluster size, spatial resolution, and detection efficiency. The analysis results indicate a spatial resolution better than 5 $\mu m$ and a detection efficiency exceeding 99.5% for both TaichuPix-3 chips with the two different processes.
Recognizing symmetries in data allows for significant boosts in neural network training. In many cases, however, the underlying symmetry is present only in an idealized dataset, and is broken in the training data, due to effects such as arbitrary and/or non-uniform detector bin edges. Standard approaches, such as data augmentation or equivariant networks, fail to represent the nature of the full, broken symmetry. We introduce a novel data-augmentation scheme that respects the true underlying symmetry and avoids artifacts by augmenting the training set with transformed pre-detector examples whose detector response is then resimulated. In addition, we encourage the network to treat the augmented copies identically, allowing it to learn the broken symmetry. While the technique can be extended to other symmetries, we demonstrate its application on rotational symmetry in particle physics calorimeter images. We find that neural networks trained with pre-detector rotations converge to a solution more quickly than networks trained with standard post-detector augmentation, and that networks modified to encourage similar internal treatment of augmentations of the same input converge even faster.
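A minimal sketch of a training objective consistent with this description is given below; the `resimulate_detector` function, the `features`/`head` split of the model, and the consistency penalty are hypothetical placeholders chosen for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def augmented_training_step(model, x_pre, y, resimulate_detector,
                            n_aug=4, consistency_weight=1.0):
    """Hypothetical sketch: symmetry-respecting augmentation with resimulation.

    x_pre is a batch of pre-detector events; resimulate_detector(x_pre) is assumed
    to apply a random symmetry transformation (e.g. a rotation) *before* running
    the detector simulation and to return post-detector images. All names here
    are placeholders, not the paper's API.
    """
    # transformed-and-resimulated copies of the same underlying events
    views = [resimulate_detector(x_pre) for _ in range(n_aug)]
    feats = [model.features(v) for v in views]      # internal representations
    preds = [model.head(f) for f in feats]          # task predictions

    # usual supervised loss, averaged over the augmented copies
    task_loss = sum(F.cross_entropy(p, y) for p in preds) / n_aug

    # penalty encouraging identical internal treatment of the augmented copies
    mean_feat = torch.stack(feats).mean(dim=0)
    consistency = sum(F.mse_loss(f, mean_feat) for f in feats) / n_aug

    return task_loss + consistency_weight * consistency
```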
The first observation of the decays $J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0}$ and $J\!/\!\psi \rightarrow p \bar{\Sigma}^{-} K_{S}^{0}$ is reported using $(10087\pm44)\times10^{6}$ $J\!/\!\psi$ events recorded by the BESIII detector at the BEPCII storage ring. The branching fractions of each channel are determined to be $\mathcal{B}(J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0})=(1.361 \pm 0.006 \pm 0.025) \times 10^{-4}$ and $\mathcal{B}(J\!/\!\psi \rightarrow p \bar{\Sigma}^{-} K_{S}^{0})=(1.352 \pm 0.006 \pm 0.025) \times 10^{-4}$. The combined result is $\mathcal{B}(J\!/\!\psi \rightarrow \bar{p} \Sigma^{+} K_{S}^{0} +c.c.)=(2.725 \pm 0.009 \pm 0.050) \times 10^{-4}$, where the first uncertainty is statistical and the second systematic. The results presented are in good agreement with the branching fractions of the isospin partner decay $J\!/\!\psi \rightarrow p K^- \bar\Sigma^0 + c.c.$.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline experiment exploiting the liquid argon TPC technology. DUNE will have sensitivity to low energy physics searches, such as the detection of supernova and solar neutrinos. DUNE will consist of four modules with a total liquid argon mass of 70 kton, placed 1.5 km underground at the Sanford Underground Research Facility in the USA. These modules are being designed considering the specific requirements of the low energy physics searches. As a result, DUNE will have a unique sensitivity for the detection of electron neutrinos from a core-collapse supernova burst, and solar and diffuse supernova background neutrinos can also be detected.
HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges and performance studies. The development of the new benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several WLCG computing centres, as well as the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark.
We propose a Rydberg-atom-based single-photon detector for signal readout in dark matter haloscope experiments between 40 ${\mu}$eV and 200 ${\mu}$eV (10 GHz and 50 GHz). At these frequencies, standard haloscope readout using linear amplifiers is limited by quantum measurement noise, which can be avoided by using a single-photon detector. Our single-photon detection scheme can offer scan rate enhancements up to a factor of $10^4$ over traditional linear amplifier readout, and is compatible with many different haloscope cavities. We identify multiple haloscope designs that could use our Rydberg-atom-based single-photon detector to search for QCD axions with masses above 40 ${\mu}$eV (10 GHz), currently a minimally explored parameter space.
Protecting entanglement from decoherence is a critical aspect of quantum information processing. For many-body quantum systems evolving under decoherence, estimating multipartite entanglement is challenging. This challenge can be met by considering a distance-based measure such as the relative entropy of entanglement, which decisively measures entanglement in both pure and mixed states. In this work, we investigate the tripartite entanglement dynamics of pure and mixed states in the presence of a structured dephasing environment at finite temperature. We show that the robustness of the quantum system to decoherence depends on the distribution of entanglement and its relation to different configurations of the bath. If the bath is structured such that each qubit has its own environment, the system has different dynamics compared to when the bath is common to all three qubits. From the results we conjecture that there is a connection between the distribution of entanglement among the qubits and the distribution of bath degrees of freedom, and that the interplay of these two distributions determines the decay rate of the entanglement dynamics. The sustainability of tripartite entanglement is shown to be enhanced significantly in the presence of reservoir memory.
In this paper, we demonstrate transient dynamics of the Husimi Q-representation to visualize and characterize the phase synchronization behavior of a two-level system (qubit) driven by a laser field in both the Markovian and non-Markovian regimes. In the Markovian regime, the phase preference of the qubit disappears in the long-time limit, whereas the long-time phase localization persists in the non-Markovian regime. We also plot the maximum of the shifted phase distribution in two different ways: (a) by varying the detuning and laser drive strength, and (b) by varying the system-bath coupling and laser drive strength. A signature of quantum phase synchronization, viz. the Arnold tongue, is demonstrated through the maximal value of the shifted phase distribution. Phase synchronization is observed inside the tongue region, while the region outside the tongue is desynchronized. The synchronization regions are determined by various system-environment parameters, and the qubit phase synchronization is shown to be enhanced in the non-Markovian regime.
We solve the dynamics of multi-mode cavity QED with frustrated atom-photon couplings using non-perturbative diagrammatics. Our technique enables a thorough investigation of the nature of the spin glass transition hosted in these platforms. We focus in particular on the role of quantum noise in each of the atomic ensembles which form the frustrated spin network modeling the experiment. We report on the stabilizing effect of strong quantum fluctuations in fostering a glassy phase over extended time scales. At variance with this behaviour, in the semi-classical limit, spin glass order is instead pre-thermally obstructed by the ferromagnetic correlations present at the level of individual atomic ensembles, which substantially delay spin glass formation. Our results set the stage for studying cavity QED experiments with tunable quantum fluctuations, and accompanying them in the transition from semi-classical to strongly correlated operational regimes.
The interplay between delocalisation and repulsive interactions can cause electronic systems to undergo a Mott transition between a metal and an insulator. Here we use neural network hidden fermion determinantal states (HFDS) to uncover this transition in the disordered, fully-connected Hubbard model. Whilst dynamical mean-field theory (DMFT) provides exact solutions to physical observables of the model in the thermodynamic limit, our method allows us to directly access the wavefunction for finite system sizes well beyond the reach of exact diagonalisation. We directly benchmark our results against state-of-the-art calculations obtained using a Matrix Product State (MPS) ansatz. We demonstrate how HFDS is able to obtain more accurate results in the metallic regime and in the vicinity of the transition, with the volume law of entanglement exhibited by the system being prohibitive to the MPS ansatz. We use the HFDS method to calculate the amplitudes of the wavefunction, the energy and double occupancy, the quasi-particle weight and the energy gap, hence providing novel insights into this model and the nature of the transition. Our work paves the way for the study of strongly correlated electron systems with neural quantum states.
Strong laser-atom interactions can produce highly non-classical states of light through the process of high-harmonic generation in atoms. When high-harmonic generation takes place, the quantum state of the fundamental mode following the interaction is a Schr\"odinger cat state: a superposition of the laser's initial coherent state and the coherent state with a smaller amplitude that results from its interaction with the atoms. Here, we demonstrate that new light states with significantly different Wigner function distributions can be produced by combining two separate Schr\"odinger cat states. By engineering the parameters of the Schr\"odinger cat states, we are able to produce Wigner functions that exhibit high sensitivity with respect to the system parameters. Our research paves the way for the creation of non-classical light by superposing Schr\"odinger cat states, with applications such as quantum sensing.
The resource estimation tools provided by Azure Quantum and Microsoft Quantum Development Kit are described. Using these tools one can automatically evaluate the logical and physical resources required to run algorithms on fault-tolerant quantum computers. An example is given of obtaining resource estimates for quantum fault-tolerant implementations of three different multiplication algorithms.
Much of our progress in understanding microscale biology has been powered by advances in microscopy. For instance, super-resolution microscopes allow the observation of biological structures at near-atomic-scale resolution, while multi-photon microscopes allow imaging deep into tissue. However, biological structures and dynamics still often remain out of reach of existing microscopes, with further advances in signal-to-noise, resolution and speed needed to access them. In many cases, the performance of microscopes is now limited by quantum effects -- such as noise due to the quantisation of light into photons or, for multi-photon microscopes, the low cross-section of multi-photon scattering. These limitations can be overcome by exploiting features of quantum mechanics such as entanglement. Quantum effects can also provide new ways to enhance the performance of microscopes, such as new super-resolution techniques and new techniques to image at difficult to reach wavelengths. This review provides an overview of these various ways in which quantum techniques can improve microscopy, including recent experimental progress. It seeks to provide a realistic picture of what is possible, and what the constraints and opportunities are.
The dynamics of quantum systems under an adiabatic Hamiltonian has attracted attention not only in quantum control but also in a wide range of fields from condensed matter physics to high-energy physics because of its non-perturbative behavior. Here we analyze the adiabatic dynamics of two-level and multilevel systems using exact WKB analysis, a non-perturbative method. As a result, we obtain a formula for the transition probability that is similar to the known formula for the two-level system. For multilevel systems, we show that the same analysis can be applied as long as the Hamiltonian is a real symmetric matrix. These results will serve as a basis for applications of exact WKB analysis in various fields of physics.
Exploiting the power of quantum computation to realise superior machine learning algorithms has been a major research focus of recent years, but the prospects of quantum machine learning (QML) remain dampened by considerable technical challenges. A particularly significant issue is that generic QML models suffer from so-called barren plateaus in their training landscapes -- large regions where cost function gradients vanish exponentially in the number of qubits employed, rendering large models effectively untrainable. A leading strategy for combating this effect is to build problem-specific models which take into account the symmetries of their data in order to focus on a smaller, relevant subset of Hilbert space. In this work, we introduce a family of rotationally equivariant QML models built upon the quantum Fourier transform, and leverage recent insights from the Lie-algebraic study of QML models to prove that (a subset of) our models do not exhibit barren plateaus. In addition to our analytical results, we numerically test our rotationally equivariant models on a dataset of simulated scanning tunnelling microscope images of phosphorus impurities in silicon, where rotational symmetry naturally arises, and find that they dramatically outperform their generic counterparts in practice.
In recent years, quantum computing has evolved as an exciting frontier, with the development of numerous algorithms dedicated to constructing quantum circuits that adeptly represent quantum many-body states. However, this domain remains in its early stages and requires further refinement to understand better the effective construction of highly-entangled quantum states within quantum circuits. Here, we demonstrate that quantum many-body states can be universally represented using a quantum circuit comprising multi-qubit gates. Furthermore, we evaluate the efficiency of a quantum circuit constructed with two-qubit gates in quench dynamics for the transverse-field Ising model. In this specific model, despite the initial state being classical without entanglement, it undergoes long-time evolution, eventually leading to a highly-entangled quantum state. Our results reveal that a diamond-shaped quantum circuit, designed to approximate the multi-qubit gate-based quantum circuit, remarkably excels in accurately representing the long-time dynamics of the system. Moreover, the diamond-shaped circuit follows the volume law behavior in entanglement entropy, offering a significant advantage over alternative quantum circuit constructions employing two-qubit gates.
An atomic array coupled to a one-dimensional nanophotonic waveguide allows photon-mediated dipole-dipole interactions and nonreciprocal decay channels, hosting many intriguing quantum phenomena owing to its distinctive and emergent quantum correlations. In this atom-waveguide quantum system, we theoretically investigate the atomic excitation dynamics and its transport properties, specifically at an interface of dissimilar atomic arrays with different interparticle distances. We find that the atomic excitation dynamics depends strongly on the interparticle distances of the dissimilar arrays and the directionality of the nonreciprocal couplings. By tuning these parameters, a dominant excitation reflection can be achieved at the interface of the arrays. We further study two effects on the transport properties: that of an external drive and that of single-excitation delocalization over multiple atoms, where we manifest a rich interplay between multi-site excitation and the relative phase in determining the transport properties. Finally, we present an intriguing trapping effect of atomic excitation by designing multiple zones of dissimilar arrays. Our results can provide insights into nonequilibrium quantum dynamics in dissimilar arrays and shed light on confining and controlling quantum registers useful for quantum information processing.
As quantum processors grow, new performance benchmarks are required to capture the full quality of the devices at scale. While quantum volume is an excellent benchmark, it focuses on the highest quality subset of the device and so is unable to indicate the average performance over a large number of connected qubits. Furthermore, it is a discrete pass/fail metric, so it neither reflects continuous improvements in hardware nor provides quantitative guidance for large-scale algorithms. For example, there may be value in error mitigated Hamiltonian simulation at scale with devices unable to pass strict quantum volume tests. Here we discuss a scalable benchmark which measures the fidelity of a connecting set of two-qubit gates over $N$ qubits by measuring gate errors using simultaneous direct randomized benchmarking in disjoint layers. Our layer fidelity can be easily related to algorithmic run time, via the quantity $\gamma$ defined in Ref.\cite{berg2022probabilistic}, which can be used to estimate the number of circuits required for error mitigation. The protocol is efficient and obtains all the pair rates in the layered structure. Compared to regular (isolated) RB this approach is sensitive to crosstalk. As an example, we measure an $N=80~(100)$ qubit layer fidelity on a 127 qubit fixed-coupling "Eagle" processor (ibm\_sherbrooke) of 0.26(0.19) and on the 133 qubit tunable-coupling "Heron" processor (ibm\_montecarlo) of 0.61(0.26). This can easily be expressed as a layer-size-independent quantity, error per layered gate (EPLG), which is here $1.7\times10^{-2}(1.7\times10^{-2})$ for ibm\_sherbrooke and $6.2\times10^{-3}(1.2\times10^{-2})$ for ibm\_montecarlo.
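The quoted EPLG values are consistent with defining the error per layered gate as $1-\mathrm{LF}^{1/n_{2q}}$ with $n_{2q}=N-1$ two-qubit gates in a length-$N$ chain; the sketch below is an assumption inferred from the quoted numbers, not code from the paper.

```python
# Hedged sketch: convert a measured layer fidelity (LF) over an N-qubit chain
# into an error per layered gate (EPLG), assuming EPLG = 1 - LF**(1/n_2q)
# with n_2q = N - 1 two-qubit gates in the chain (our reading of the abstract).
def eplg(layer_fidelity: float, n_qubits: int) -> float:
    n_2q = n_qubits - 1
    return 1.0 - layer_fidelity ** (1.0 / n_2q)

print(f"N=80, LF=0.26 -> EPLG ~ {eplg(0.26, 80):.1e}")  # ~1.7e-2 (ibm_sherbrooke)
print(f"N=80, LF=0.61 -> EPLG ~ {eplg(0.61, 80):.1e}")  # ~6.2e-3 (ibm_montecarlo)
```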
We revisit the problem of two-terminal transport of non-interacting Fermi particles across the tight-binding chain by employing the semi-microscopic model for the contacts, where we mimic the self-thermalization property of the contacts by using the Lindblad relaxation operators. It is argued that the dissipative dynamics of the contacts can essentially modify the line-shape of resonant peaks as compared to the Landauer-B\"uttiker theory. We also address the effect of this dissipative dynamics, which we refer to as external decoherence, on particle transport in disordered chains. It is shown that external decoherence reduces conductance fluctuations but does not affect the Anderson localization length.
We consider the discrimination of bipartite quantum states and establish a relation between nonlocal quantum state ensembles and quantum data hiding. Using a bound on optimal local discrimination of bipartite quantum states, we provide a sufficient condition for a bipartite quantum state ensemble to be used to construct a quantum data-hiding scheme. Our results are illustrated by examples in multidimensional bipartite quantum systems.
Ensembles of nitrogen-vacancy (NV) center spins in diamond offer a robust, precise and accurate magnetic sensor. As their applications move beyond the laboratory, practical considerations including size, complexity, and power consumption become important. Here, we compare two commonly-employed NV magnetometry techniques -- continuous-wave (CW) vs pulsed magnetic resonance -- in a scenario limited by total available optical power. We develop a consistent theoretical model for the magnetic sensitivity of each protocol that incorporates NV photophysics, in particular the incomplete spin polarization associated with limited optical power. After comparing the models' behaviour to experiments, we use them to predict the relative DC sensitivity of CW versus pulsed operation for an optical-power-limited, shot-noise-limited NV ensemble magnetometer. We find a $\sim 2-3 \times$ gain in sensitivity for pulsed operation, which is significantly smaller than seen in power-unlimited, single-NV experiments. Our results provide a resource for practical sensor development, informing protocol choice and identifying optimal operation regimes when optical power is constrained.
We analyze the expressivity of a universal deep neural network that can be organized as a series of nested qubit rotations, accomplished by adjustable data re-uploads. While the maximal expressive power increases with the depth of the network and the number of qubits, it is fundamentally bounded by the data encoding mechanism. Focusing on regression problems, we systematically investigate the expressivity limits for different measurements and architectures. Entanglement, introduced either through entangling layers or through global measurements, drives the expressivity towards this bound. In these cases, entanglement leads to an enhancement of the approximation capabilities of the network compared to local readouts of the individual qubits in non-entangling networks. We attribute this enhancement to a larger survival set of Fourier harmonics when decomposing the output signal.
Convex functions of quantum states play a key role in quantum physics, with examples ranging from Bell inequalities to von Neumann entropy. However, in experimental scenarios, direct measurements of these functions are often impractical. We address this issue by introducing two methods for determining rigorous confidence bounds for convex functions based on informationally incomplete measurements. Our approach outperforms existing protocols by providing tighter bounds for a fixed confidence level and number of measurements. We evaluate the performance of our methods using both numerical and experimental data. Our findings demonstrate the efficacy of our approach, paving the way for improved quantum state certification in real-world applications.
We consider a damped oscillator mode that is resonantly driven and is coupled to an arbitrary target system via the position quadrature operator. For such a composite open quantum system, we develop a numerical method to compute the reduced density matrix of the target system and the low-order moments of the quadrature operators. In this method, we solve the evolution equations for quantities related to moments of the quadrature operators, rather than for the density matrix elements as in the conventional approach. The application to an optomechanical setting shows that the new method can compute the correlation functions accurately with a significant reduction in the computational cost. Since the method does not involve any approximation in its abstract formulation itself, we investigate the numerical accuracy closely. This study reveals the numerical sensitivity of the new approach in certain parameter regimes. We find that this issue can be alleviated by using the position basis instead of the commonly used Fock basis.
We show that multiqubit quantum channels which may be realised via stabilizer circuits without classical control (Clifford channels) have a particularly simple structure. They can be equivalently defined as channels that preserve mixed stabilizer states, or as channels with a stabilizer Choi state. Up to unitary encoding and decoding maps, any Clifford channel is a product of stabilizer state preparations, qubit discardings, identity channels and dephasing channels. This simple structure makes it possible to characterise the information-theoretic properties of such channels.
Open quantum many-body systems with controllable dissipation can exhibit novel features in their dynamics and steady states. A paradigmatic example is the dissipative transverse field Ising model. It has been shown recently that the steady state of this model with all-to-all interactions is genuinely non-equilibrium near criticality, exhibiting a modified time-reversal symmetry and violating the fluctuation-dissipation theorem. Experimental study of such non-equilibrium steady-state phase transitions is however lacking. Here we propose realistic experimental setups and measurement schemes for current trapped-ion quantum simulators to demonstrate this phase transition, where controllable dissipation is engineered via a continuous weak optical pumping laser. With extensive numerical calculations, we show that strong signatures of this dissipative phase transition and its non-equilibrium properties can be observed with a small system size across a wide range of system parameters. In addition, we show that the same signatures can also be seen if the dissipation is instead achieved via Floquet dynamics with periodic and probabilistic resetting of the spins. Dissipation engineered in this way may allow the simulation of more general types of driven-dissipative systems or facilitate the dissipative preparation of useful many-body entangled states.
In this paper, the SUSY partner Hamiltonians of the quasi-exactly solvable (QES) sextic potential $V^{\rm qes}(x) = \nu\, x^{6} + 2\, \nu\, \mu\,x^{4} + \left[\mu^2-(4N+3)\nu \right]\, x^{2}$, $N \in \mathbb{Z}^+$, are revisited from a Lie algebraic perspective. It is demonstrated that, in the variable $ \tau=x^2$, the underlying $\mathfrak{sl}_2(\mathbb{R})$ hidden algebra of $V^{\rm qes}(x)$ is inherited by its SUSY partner potential $V_1(x)$ only for $N=0$. At fixed $N>0$, the algebraic polynomial operator $h(x,\,\partial_x;\,N)$ that governs the $N$ exact eigenpolynomial solutions of $V_1$ is derived explicitly. These odd-parity solutions appear in the form of zero modes. The potential $V_1$ can be represented as the sum of a polynomial and rational parts. In particular, it is shown that the polynomial component is given by $V^{\rm qes}$ with a different non-integer (cohomology) parameter $N_1=N-\frac{3}{2}$. A confluent second-order SUSY transformation is also implemented for a modified QES sextic potential possessing the energy reflection symmetry. By taking $N$ as a continuous real constant and using the Lagrange-mesh method, highly accurate values ($\sim 20$ s. d.) of the energy $E_n=E_n(N)$ in the interval $N \in [-1,3]$ are calculated for the three lowest states $n=0,1,2$ of the system. The critical value $N_c$ above which tunneling effects (instanton-like terms) can occur is obtained as well. At $N=0$, the non-algebraic sector of the spectrum of $V^{\rm qes}$ is described by means of compact physically relevant trial functions. These solutions allow us to determine the effects in accuracy when the first-order SUSY approach is applied on the level of approximate eigenfunctions.
This tutorial aims at giving an introductory treatment of the circuit analysis of superconducting qubits, i.e., two-level systems in superconducting circuits. It also touches upon couplings between such qubits and how microwave driving and these couplings can be used for single- and two-qubit gates, as well as how to include noise when calculating the dynamics of the system. We also discuss higher-dimensional superconducting qudits. The tutorial is intended for new researchers with limited or no experience with the field but should be accessible to anyone with a bachelor's degree in physics. The tutorial introduces the basic methods used in quantum circuit analysis, starting from a circuit diagram and ending with a quantized Hamiltonian, that may be truncated to the lowest levels. We provide examples of all the basic techniques throughout the discussion, while in the last part of the tutorial we discuss several of the most commonly used circuits for quantum-information applications. This includes both worked examples of single qubits and examples of how to analyze the coupling methods that allow multiqubit operations. In several detailed appendices, we provide the interested reader with an introduction to more advanced techniques for handling larger circuit designs.
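As a minimal worked example of the kind of circuit quantization such a tutorial covers (a textbook LC resonator, not an example taken from the tutorial itself): writing the flux across the inductor as $\Phi$ and the charge on the capacitor as $Q$, the circuit Lagrangian leads to the Hamiltonian
$$ H = \frac{Q^2}{2C} + \frac{\Phi^2}{2L}, \qquad [\hat{\Phi},\hat{Q}] = i\hbar, $$
which, once $\Phi$ and $Q$ are promoted to conjugate operators, is a harmonic oscillator of frequency $\omega = 1/\sqrt{LC}$. Replacing the quadratic inductive term with a Josephson junction's cosine potential introduces the anharmonicity that allows the lowest two levels to be isolated as a qubit.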
We prove the STP=BQP conjecture of Freedman, Hastings and Shokrian-Zini [1], namely that the two-qubit singlet/triplet measurement is quantum computationally universal given only an initial ensemble of maximally mixed single qubits. This provides a method for quantum computing that is fully rotationally symmetric (i.e. reference frame independent), using primitives that are both physically very accessible and provably the simplest possible.
We developed the theory of elastic electron tunneling through a potential barrier driven by a strong high-frequency electromagnetic field. It is demonstrated that the driven barrier can be considered as a stationary two-barrier potential which contains the quasi-stationary electron states confined between these two barriers. When the energy of an incident electron coincides with the energy of the quasi-stationary state, the driven barrier becomes fully transparent for the electron (the resonant tunneling). The developed theory is applied to describe electron transport through a quantum point contact irradiated by an electromagnetic wave.
Spin-based applications of the negatively charged nitrogen-vacancy (NV) center in diamond require efficient spin readout. One approach is the spin-to-charge conversion (SCC), relying on mapping the spin states onto the neutral (NV$^0$) and negative (NV$^-$) charge states followed by a subsequent charge readout. With high charge-state stability, SCC enables extended measurement times, increasing precision and minimizing noise in the readout compared to the commonly used fluorescence detection. Nano-scale sensing applications, however, require shallow NV centers within a few nm of the surface, where surface-related effects might degrade the NV charge state. In this article, we investigate the charge state initialization and stability of single NV centers implanted $\approx 5$ nm below the surface of a flat diamond plate. We demonstrate the SCC protocol on four shallow NV centers suitable for nano-scale sensing, obtaining a reduced readout noise of 5--6 times the spin-projection noise limit. We investigate the general applicability of SCC for shallow NV centers and observe a correlation between NV charge-state stability and readout noise. Coating the diamond with glycerol improves both charge initialization and stability. Our results reveal the influence of the surface-related charge environment on the NV charge properties and motivate further investigations to functionalize the diamond surface with glycerol or other materials for charge-state stabilization and efficient spin-state readout of shallow NV centers suitable for nano-scale sensing.
Measurement-based quantum thermal machines are fascinating models of thermodynamic cycles where measurement protocols play an important role in the performance and functioning of the cycle. Alongside theoretical advances, interesting experimental implementations have been reported. Here we move a step further by considering in this class of cycles $\mathcal{PT}$-symmetric non-Hermitian Hamiltonians and their implications for quantum thermal machines fueled by generalized measurements. We present theoretical results indicating that $\mathcal{PT}$-symmetric effects and measurement protocols are related along the cycle. Furthermore, by tuning the parameters suitably it is possible to improve the power output (engine configuration) and the cooling rate (refrigerator configuration), operating in the Otto limit, in a finite-time cycle that satisfies the quantum adiabatic theorem. Our model also allows switching the cycle between engine and refrigerator configurations, depending on the strength of the measurement protocol.
The choice of mathematical representation when describing physical systems is of great consequence, and this choice is usually determined by the properties of the problem at hand. Here we examine the little-known wave operator representation of quantum dynamics, and explore its connection to standard methods of quantum dynamics. This method takes as its central object the square root of the density matrix, and consequently enjoys several unusual advantages over standard representations. By combining this with purification techniques imported from quantum information, we are able to obtain a number of results. Not only is this formalism able to provide a natural bridge between phase and Hilbert space representations of both quantum and classical dynamics, we also find that the wave operator representation leads to novel semiclassical approximations of both real and imaginary time dynamics, as well as a transparent correspondence to the classical limit. This is demonstrated via the example of quadratic and quartic Hamiltonians, while the potential extensions of the wave operator and its application to quantum-classical hybrids are discussed. We argue that the wave operator provides a new perspective that links previously unrelated representations, and is a natural candidate model for scenarios (such as hybrids) in which positivity cannot be otherwise guaranteed.
We consider a quantum lattice spin model featuring exact quasiparticle towers of eigenstates with low entanglement at finite size, known as quantum many-body scars (QMBS). We show that the states in the neighboring part of the energy spectrum can be superposed to construct entire families of low-entanglement states whose energy variance decreases asymptotically to zero as the lattice size is increased. As a consequence, they have a relaxation time that diverges in the thermodynamic limit, and therefore exhibit the typical behavior of exact QMBS although they are not exact eigenstates of the Hamiltonian for any finite size. We refer to such states as \textit{asymptotic} QMBS. These states are orthogonal to any exact QMBS at any finite size, and their existence shows that the presence of an exact QMBS leaves important signatures of non-thermalness in the rest of the spectrum; therefore, QMBS-like phenomena can hide in what is typically considered the thermal part of the spectrum. We support our study using numerical simulations in the spin-1 XY model, a paradigmatic model for QMBS, and we conclude by presenting a weak perturbation of the model that destroys the exact QMBS while keeping the asymptotic QMBS.
We consider a two-level atom that follows a worldline of constant velocity, while interacting with a massless scalar field in a thermal state through: (i) an Unruh-DeWitt coupling, and (ii) a coupling that involves the time derivative of the field. We treat the atom as an open quantum system, with the field playing the role of the environment, and employ a master equation to describe its time evolution. We study the dynamics of entanglement between the moving atom and an auxiliary qubit at rest and isolated from the thermal field. We find that in the case of the standard Unruh-DeWitt coupling and for high temperatures of the environment the decay of entanglement is delayed due to the atom's motion. Instead, in the derivative coupling case, the atom's motion always causes the rapid death of entanglement.
It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension $d$. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. An important consequence of our analysis is that the ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is worse than previously thought, as we prove that expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$. Finally, we discuss how our results improve our understanding of concentration in $t$-designs.
Electronic spin defects in the environment of an optically-active spin can be used to increase the size and hence the performance of solid-state quantum registers, especially for applications in quantum metrology and quantum communication. Previous works on multi-qubit electronic-spin registers in the environment of a Nitrogen-Vacancy (NV) center in diamond have only included spins directly coupled to the NV. As this direct coupling is limited by the central spin coherence time, it significantly restricts the register's maximum attainable size. To address this problem, we present a scalable approach to increase the size of electronic-spin registers. Our approach exploits a weakly-coupled probe spin together with double-resonance control sequences to mediate the transfer of spin polarization between the central NV spin and an environmental spin that is not directly coupled to it. We experimentally realize this approach to demonstrate the detection and coherent control of an unknown electronic spin outside the coherence limit of a central NV. Our work paves the way for engineering larger quantum spin registers with the potential to advance nanoscale sensing, enable correlated noise spectroscopy for error correction, and facilitate the realization of spin-chain quantum wires for quantum communication.
A new approximate Quantum State Preparation (QSP) method is introduced, called the Walsh Series Loader (WSL). The WSL approximates quantum states defined by real-valued functions of a single real variable with a depth independent of the number $n$ of qubits. Two approaches are presented: the first approximates the target quantum state by a Walsh series truncated at order $O(1/\sqrt{\epsilon})$, where $\epsilon$ is the precision of the approximation in terms of infidelity. The circuit depth is also $O(1/\sqrt{\epsilon})$, the size is $O(n+1/\sqrt{\epsilon})$ and only one ancilla qubit is needed. The second method accurately represents quantum states with sparse Walsh series. The WSL loads an $s$-sparse Walsh series into $n$ qubits with a depth controlled by $s$ and $k$, the maximum number of bits with value $1$ in the binary decomposition of the Walsh function indices. The associated quantum circuit approximates the sparse Walsh series up to an error $\epsilon$ with a depth $O(sk)$, a size $O(n+sk)$ and one ancilla qubit. In both cases, the protocol is a Repeat-Until-Success (RUS) procedure with a probability of success $P=\Theta(\epsilon)$, giving an averaged total time of $O(1/\epsilon^{3/2})$ for the WSL (resp. $O(sk/\epsilon)$ for the sparse WSL). Amplitude amplification can be used to reduce the total time dependence on $\epsilon$ by a factor $O(1/\sqrt{\epsilon})$, but it increases the size and depth of the associated quantum circuits, making them linearly dependent on $n$. These protocols give overall efficient algorithms with no exponential scaling in any parameter. They can be generalized to any complex-valued, multi-variate, almost-everywhere-differentiable function. The Repeat-Until-Success Walsh Series Loader is so far the only method which prepares a quantum state with a circuit depth and an averaged total time independent of the number of qubits.
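As an illustration of the classical side of such a loader, the sketch below computes the Walsh coefficients of a discretized target function and keeps only the $s$ largest ones; this schematic preprocessing is our own assumption for illustration and is not the quantum circuit construction described above.

```python
import numpy as np
from scipy.linalg import hadamard

n_qubits = 6
N = 2 ** n_qubits
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = np.exp(-10.0 * (x - 0.5) ** 2)         # example real-valued target function

H = hadamard(N)                             # Walsh functions (natural ordering), entries +/-1
coeffs = H @ f / N                          # a_j such that f = sum_j a_j * w_j

s = 8                                       # keep only the s largest Walsh coefficients
keep = np.argsort(np.abs(coeffs))[-s:]
sparse_coeffs = np.zeros_like(coeffs)
sparse_coeffs[keep] = coeffs[keep]
f_approx = H @ sparse_coeffs                # s-sparse Walsh-series approximation of f

overlap = (f @ f_approx) / np.sqrt((f @ f) * (f_approx @ f_approx))
print(f"kept {s}/{N} Walsh terms, infidelity proxy 1-|overlap|^2 = {1 - overlap**2:.2e}")
```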
In this paper, maze generation using quantum annealing is proposed. We reformulate a standard maze-generation algorithm into a specific form of a quadratic unconstrained binary optimization (QUBO) problem suitable as input for the quantum annealer. To generate more difficult mazes, we introduce an additional cost function $Q_{update}$. The difficulty of the mazes was evaluated by the time 12 human subjects took to solve them. To check the efficiency of our maze-generation scheme, we investigated the time-to-solution of a quantum processing unit, a classical computer, and a hybrid solver.
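For readers unfamiliar with the QUBO form, the generic sketch below shows how a hard constraint such as "exactly one of a set of binary variables is 1" becomes a quadratic penalty $P(\sum_i x_i - 1)^2$; the variable names are hypothetical and this is not the paper's actual maze encoding.

```python
# Generic, hypothetical illustration of encoding the constraint
# "exactly one of the variables is 1" as the QUBO penalty P*(sum_i x_i - 1)^2.
# Expanding with x_i^2 = x_i gives diagonal terms -P and pairwise terms +2P
# (the constant +P offset is dropped, as usual for QUBOs).
from itertools import combinations

def one_hot_penalty(variables, penalty=2.0):
    """Return QUBO coefficients {(u, v): weight} for P*(sum x_i - 1)^2."""
    Q = {}
    for v in variables:
        Q[(v, v)] = Q.get((v, v), 0.0) - penalty
    for u, v in combinations(variables, 2):
        Q[(u, v)] = Q.get((u, v), 0.0) + 2.0 * penalty
    return Q

print(one_hot_penalty(["wall_up", "wall_down", "wall_left"]))
```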
Spontaneous pattern formation from a uniform state is a widely studied nonlinear optical phenomenon that shares similarities with non-equilibrium pattern formation in other scientific domains. Here we show how a single layer of atoms in an array can undergo nonlinear amplification of fluctuations, leading to the formation of intricate optical patterns. The origin of the patterns is intrinsically cooperative, eliminating the necessity of mirrors or cavities, although introduction of a mirror in the vicinity of the atoms significantly modifies the scattering profiles. The emergence of these optical patterns is tied to a bistable collective response, which can be qualitatively described by a long-wavelength approximation, similar to a nonlinear Schr\"odinger equation of optical Kerr media or ring cavities. These collective excitations have the ability to form singular defects and unveil atomic position fluctuations through wave-like distortions.
We apply principal component analysis (PCA) to a set of electrical output signals from a commercially available superconducting nanowire single-photon detector (SNSPD) to investigate their photon-number-resolving capability. We find that the rising edge as well as the amplitude of the electrical signal have the strongest dependence on photon number. Accurately measuring the rising edge while simultaneously measuring the voltage of the pulse amplitude maximizes the photon-number resolution of SNSPDs. Using an optimal basis of principal components, we show unambiguous discrimination between one- and two-photon events, as well as partial resolution up to five photons. This expands the use-case of SNSPDs to photon-counting experiments, without the need for detector multiplexing architectures.
We consider the quantum dynamics of a many-fermion system in $\mathbb R^d$ with an ultraviolet regularized pair interaction as previously studied in [M. Gebert, B. Nachtergaele, J. Reschke, and R. Sims, Ann. Henri Poincar\'e 21.11 (2020)]. We provide a Lieb-Robinson bound under substantially relaxed assumptions on the potentials. We also improve the associated one-body Lieb-Robinson bound on $L^2$-overlaps to an almost ballistic one (i.e., an almost linear light cone) under the same relaxed assumptions. Applications include the existence of the infinite-volume dynamics and clustering of ground states in the presence of a spectral gap. We also develop a fermionic continuum notion of conditional expectation and use it to approximate time-evolved fermionic observables by local ones, which opens the door to other applications of the Lieb-Robinson bounds.
Out of thermal equilibrium, bosonic quantum systems can Bose-condense away from the ground state, featuring a macroscopic occupation of an excited state, or even of multiple states in the so-called Bose-selection scenario. While theory has been developed describing such effects as they result from the nonequilibrium kinetics of quantum jumps, a theoretical understanding, and the development of practical strategies, to control and drive the system into desired Bose condensation patterns have been lacking. We show how fine-tuned single or multiple condensate modes, including their relative occupation, can be engineered by coupling the system to artificial quantum baths. Moreover, we propose a Bose `condenser', experimentally implementable in a superconducting circuit, where bath engineering is realized via auxiliary driven-damped two-level systems, that induces targeted Bose condensation into eigenstates of a chain of resonators. We further discuss the engineering of transition points between different Bose condensation configurations, which may find application for amplification, heat-flow control, and the design of highly-structured quantum baths.
Lasers with high spectral purity are indispensable for optical clocks and for the coherent manipulation of atomic and molecular qubits in applications such as quantum computing and quantum simulation. Stabilisation of the laser to a reference can provide a narrow linewidth and high spectral purity. However, widely-used diode lasers exhibit fast phase noise that prevents high fidelity qubit manipulation. Here we demonstrate a self-injection locked diode laser system utilizing a medium finesse cavity. The cavity not only provides a stable resonance frequency, but at the same time acts as a low-pass filter for phase noise beyond the cavity linewidth of around 100 kHz, resulting in low phase noise from dc to the injection lock limit. We model the expected laser performance and benchmark it using a single trapped $^{40}$Ca$^{+}$-ion as a spectrum analyser. We show that the fast phase noise of the laser at relevant Fourier frequencies of 100 kHz to >2 MHz is suppressed to a noise floor of between -110 dBc/Hz and -120 dBc/Hz, an improvement of 20 to 30 dB over state-of-the-art Pound-Drever-Hall-stabilized extended-cavity diode lasers. This strong suppression avoids incoherent (spurious) spin flips during manipulation of optical qubits and improves laser-driven gates using diode lasers, with applications in quantum logic spectroscopy, quantum simulation and quantum computation.
Gaussian Boson sampling (GBS) plays a crucially important role in demonstrating quantum advantage. As a major imperfection, the limited connectivity of the linear optical network weakens the quantum advantage result in recent experiments. Here we introduce an enhanced classical algorithm for simulating GBS processes with limited connectivity. It computes the loop Hafnian of an $n \times n$ symmetric matrix with bandwidth $w$ in $O(nw2^w)$ time, which is better than the previous fastest algorithm, which runs in $O(nw^2 2^w)$ time. This classical algorithm helps clarify how limited connectivity affects the computational complexity of GBS and tightens the boundary of quantum advantage in the GBS problem.
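For context, the loop Hafnian entering GBS probabilities can be computed by brute force for small matrices; the sketch below is a naive exponential-time reference that makes the quantity explicit, not the banded $O(nw2^w)$ algorithm described in the abstract.

```python
import numpy as np

def loop_hafnian(A: np.ndarray) -> float:
    """Naive loop Hafnian of a symmetric matrix A (exponential time).

    Sums, over all ways of covering the index set by pairs and self-loops,
    the product of A[i, j] for each pair and A[i, i] for each self-loop.
    Intended only as a small-n reference implementation.
    """
    def rec(indices):
        if not indices:
            return 1.0
        i, rest = indices[0], indices[1:]
        total = A[i, i] * rec(rest)                      # index i forms a self-loop
        for k, j in enumerate(rest):                     # index i pairs with a later index j
            total += A[i, j] * rec(rest[:k] + rest[k + 1:])
        return total

    return rec(tuple(range(A.shape[0])))

A = np.array([[0.1, 0.2], [0.2, 0.3]])
print(loop_hafnian(A))   # A[0,1] + A[0,0]*A[1,1] = 0.2 + 0.03 = 0.23
```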
The Bin Packing Problem is a classic problem with wide industrial applicability. In fact, the efficient packing of items into bins is one of the toughest challenges in many logistic corporations and is a critical issue for reducing storage costs or improving vehicle space allocation. In this work, we build on our previously published quantum-classical framework known as Q4RealBPP and elaborate on the solving of real-world-oriented instances of the Bin Packing Problem. With this purpose, this paper focuses on the following characteristics: i) the existence of heterogeneous bins, ii) the extension of the framework to solve not only three-dimensional, but also one- and two-dimensional instances of the problem, iii) requirements for item-bin associations, and iv) delivery priorities. All these features have been tested in this paper, as well as the ability of Q4RealBPP to solve real-world-oriented instances.
This paper concerns the long-standing question of representing (totally) anti-symmetric functions in high dimensions. We propose a new ansatz based on the composition of an odd function with a fixed set of anti-symmetric basis functions. We prove that this ansatz can exactly represent every anti-symmetric and continuous function and the number of basis functions has efficient scaling with respect to dimension (number of particles).
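Schematically (our paraphrase, not the paper's notation), the ansatz takes the form
$$ f(x_1,\dots,x_N) \;=\; \phi\big(g_1(x_1,\dots,x_N),\,\dots,\,g_K(x_1,\dots,x_N)\big), $$
where each basis function $g_k$ is anti-symmetric under exchange of any two particles and $\phi$ is odd, $\phi(-y)=-\phi(y)$. Swapping two particles then flips the sign of every $g_k$ simultaneously, and the oddness of $\phi$ flips the sign of $f$, so anti-symmetry holds by construction.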