CWRU PAT Coffee Agenda

Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30

Showing votes from 2023-11-17 12:30 to 2023-11-21 11:30 | Next meeting is Friday, Nov 24th, 11:30 am.

users

  • No papers in this section today!

astro-ph.CO

  • Ultralight $(L_\mu-L_\tau)$ vector dark matter interpretation of NANOGrav observations.- [PDF] - [Article]

    Debtosh Chowdhury, Arpan Hait, Subhendra Mohanty, Suraj Prakash
     

    The angular correlation of pulsar residuals observed by NANOGrav and other pulsar timing array (PTA) collaborations shows evidence in support of the Hellings-Downs correlation expected from stochastic gravitational waves (SGW). In this paper, we offer a non-gravitational-wave explanation of the observed pulsar timing correlations as caused by an ultra-light $L_{\mu} - L_{\tau}$ gauge boson dark matter (ULDM). ULDM can affect the pulsar correlations in two ways. The gravitational potential of vector ULDM gives rise to a Shapiro time delay of the pulsar signals and a non-trivial angular correlation (as compared to the scalar ULDM case). In addition, if the pulsars carry a non-zero charge under the dark matter gauge group, then the electric field of the local dark matter causes an oscillation of the pulsar and a corresponding Doppler shift of the pulsar signal. We point out that pulsars carry a significant charge of muons, and thus the $L_{\mu} - L_{\tau}$ vector dark matter contributes to both the Doppler oscillations and the time delay of the pulsar signals. Our analysis shows that the NANOGrav data are better fit by the $L_{\mu} - L_{\tau}$ ULDM scenario than by the SGW or the SGW-with-Shapiro-time-delay hypotheses.

  • Revisiting quantum black holes from effective loop quantum gravity.- [PDF] - [Article]

    Geeth Ongole, Parampreet Singh, Anzhong Wang
     

    We systematically study a family of loop quantizations of the classical Kruskal spacetimes using the effective description motivated from loop quantum gravity for four generic parameters, $c_o$, $m$, $\delta_b$ and $\delta_c$, where the latter two denote the polymerization parameters which capture the underlying quantum geometry. We focus on the family where the polymerization parameters are constant on dynamical trajectories, of which the Ashtekar-Olmedo-Singh (AOS) and Corichi-Singh (CS) models appear as special cases. We study general features of singularity resolution in all these models due to quantum gravity effects and analytically extend the solutions across the white hole (WH) and black hole (BH) horizons to the exterior. We find that the leading term in the asymptotic expansion of the Kretschmann scalar is $r^{-4}$. However, for AOS and CS models with black hole masses greater than a solar mass, the dominant term behaves as $r^{-6}$ out to the size of the observable universe, and our analysis can be used to phenomenologically constrain the choice of parameters for other models. In addition, one can uniquely fix the parameter $c_o$ by requiring that the Hawking temperature at the BH horizon be consistent, to leading order, with its classical value for a macroscopic BH. Assuming that the BH and WH masses are of the same order, we are able to identify a family of choices of $\delta_b$ and $\delta_c$ which share all the desired properties of the AOS model.

  • Holographic Quantum-Foam Blurring, and Localization of Gamma-Ray Burst GRB221009A.- [PDF] - [Article]

    Eric Steinbring
     

    Gamma-ray burst GRB221009A was of unprecedented brightness in gamma-rays and X-rays, and through to the far ultraviolet, allowing for identification within a host galaxy at redshift z=0.151 by multiple space and ground-based optical/near-infrared telescopes and enabling a first association - via cosmic-ray air-shower events - with a photon of 251 TeV. That is in direct tension with a potentially observable phenomenon of quantum gravity (QG), where spacetime "foaminess" accumulates in wavefronts propagating cosmological distances, and at high-enough energy could render distant yet bright pointlike objects invisible, by effectively spreading their photons out over the whole sky. But this effect would not result in photon loss, so it remains distinct from any absorption by extragalactic background light. A simple multiwavelength average of foam-induced blurring is described, analogous to atmospheric seeing from the ground. When scaled within the fields of view for the Fermi and Swift instruments, it fits all z<5 GRB angular-resolution data of 10 MeV or any lesser peak energy and can still be consistent with the highest-energy localization of GRB221009A: a limiting bound of about 1 degree is in agreement with a holographic QG-favored formulation.

  • Microlensing of halo objects in the exterior part of the Galaxy.- [PDF] - [Article]

    Tabib Rayed Hossain, Prabir Kumar Haldar, Mehedi Kalam
     

    In this paper, microlenses in the form of oblate clusters of dark matter structures, known as massive astrophysical compact halo objects (MACHOs), in the galactic halo are considered. The NFW density profile [1] is derived from observational data and works best in the halo region of the exterior part of the galaxy. Hence this profile is used to plot the potential, deflection angle, and critical and caustic curves for the aforementioned microlenses using numerical methods. Moreover, this model is compared with an older density profile model [2], and the differences in their caustic and critical curves are pointed out. This leads to the conclusion that the NFW model produces caustic and critical curves that occupy a smaller region than those produced by the older density profile model. However, the differences are not significant enough for these structures to act as distinguishable microlenses, as they are so small as to be beyond the reach of modern telescopes; thus only the light curves from these structures can be detected.

  • Quantum technologies for fundamental (HE) physics.- [PDF] - [Article]

    D. Blas
     

    In this brief contribution I will highlight some directions where the developments in the frontier of (quantum) metrology may be key for fundamental high energy physics (HEP). I will focus on the detection of dark matter and gravitational waves, and introduce ideas from atomic clocks and magnetometers, large atomic interferometers and detection of small fields in electromagnetic cavities. Far from being comprehensive, this contribution is an invitation to everyone in the HEP and quantum technologies communities to explore this fascinating topic.

  • Optimal squeezing for high-precision atom interferometers.- [PDF] - [Article]

    Polina Feldmann, Fabian Anders, Alexander Idel, Christian Schubert, Dennis Schlippert, Luis Santos, Ernst M. Rasel, Carsten Klempt
     

    We show that squeezing is a crucial resource for interferometers based on the spatial separation of ultra-cold interacting matter. Atomic interactions lead to a general limitation on the precision of these atom interferometers, which can be surpassed neither by larger atom numbers nor by conventional phase or number squeezing. However, tailored squeezed states make it possible to overcome this sensitivity bound by anticipating the major detrimental effect that arises from the interactions. We envisage applications in future high-precision differential matter-wave interferometers, in particular gradiometers, e.g., for gravitational-wave detection.

  • Evolution of X-ray galaxy Cluster Properties in a Representative Sample (EXCPReS). Optimal binning for temperature profile extraction.- [PDF] - [Article]

    C.M.H. Chen, M. Arnaud, E. Pointecouteau, G.W. Pratt, A. Iqbal
     

    We present XMM-Newton observations of a representative X-ray selected sample of 31 galaxy clusters at moderate redshift $(0.4<z<0.6)$, spanning the mass range $10^{14} < M_{\textrm 500} < 10^{15}$~M$_\odot$. This sample, EXCPReS (Evolution of X-ray galaxy Cluster Properties in a Representative Sample), is used to test and validate a new method to produce optimally-binned cluster X-ray temperature profiles. The method uses a dynamic programming algorithm, based on partitioning of the soft-band X-ray surface brightness profile, to obtain a binning scheme that optimally fulfills a given signal-to-noise threshold criterion out to large radius. From the resulting optimally-binned EXCPReS temperature profiles, combined with those from the local REXCESS sample, we provide a generic scaling relation between the relative error on the temperature and the [0.3-2] keV surface brightness signal-to-noise ratio, and its dependence on temperature and redshift. We derive an average scaled 3D temperature profile for the sample. Comparing to the average scaled 3D temperature profiles from REXCESS, we find no evidence for evolution of the average profile shape within the redshift range that we probe.
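    The signal-to-noise threshold criterion above can be illustrated with a toy sketch. The paper uses a dynamic programming partition of the surface brightness profile; the minimal greedy pass below (merging consecutive annuli until a target S/N is met) only shows the idea, and the function names and the simplified Poisson S/N formula are illustrative assumptions, not the paper's implementation.

    ```python
    import math

    def net_snr(signal, background):
        # Toy Poisson S/N of background-subtracted counts: S / sqrt(S + 2B)
        return signal / math.sqrt(signal + 2.0 * background) if signal > 0 else 0.0

    def greedy_bins(sig_counts, bkg_counts, target_snr):
        """Merge consecutive radial annuli until each bin reaches target_snr.

        Returns (start, end) index ranges, end exclusive; leftover low-S/N
        annuli at large radius are folded into the last bin.
        """
        bins, start, s, b = [], 0, 0.0, 0.0
        for i, (cs, cb) in enumerate(zip(sig_counts, bkg_counts)):
            s += cs
            b += cb
            if net_snr(s, b) >= target_snr:
                bins.append((start, i + 1))
                start, s, b = i + 1, 0.0, 0.0
        if start < len(sig_counts) and bins:
            bins[-1] = (bins[-1][0], len(sig_counts))
        return bins

    # Declining cluster profile: bright inner annuli bin alone, outskirts merge.
    profile = greedy_bins([400, 200, 100, 50, 25, 12, 6], [10] * 7, 10.0)
    ```

    A dynamic-programming version would instead search over all partitions for an optimal one; the greedy pass is only the simplest realization of the threshold criterion.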

  • Quantum Enhancement in Dark Matter Detection with Quantum Computation.- [PDF] - [Article]

    Shion Chen, Hajime Fukuda, Toshiaki Inada, Takeo Moroi, Tatsumi Nitta, Thanaporn Sichanugrist
     

    We propose a novel method to significantly enhance the signal rate in qubit-based dark matter detection experiments with the help of quantum interference. Various quantum sensors possess ideal properties for detecting wave-like dark matter, and qubits, commonly employed in quantum computers, are excellent candidates for dark matter detectors. We demonstrate that, by designing an appropriate quantum circuit to manipulate the qubits, the signal rate scales proportionally to $n_{\rm q}^2$, with $n_{\rm q}$ being the number of sensor qubits, rather than linearly with $n_{\rm q}$. Consequently, in dark matter detection with a substantial number of sensor qubits, a significant increase in the signal rate can be expected. We provide a specific example of a quantum circuit that achieves this enhancement by coherently combining the phase evolution in each individual qubit due to its interaction with dark matter. We also demonstrate that the circuit is fault-tolerant against dephasing noise, a critical quantum noise source in quantum computers. The enhancement mechanism proposed here is applicable to various modalities of quantum computers, provided that the quantum operations relevant to enhancing the dark matter signal can be applied to these devices.
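    The $n_{\rm q}^2$ versus $n_{\rm q}$ scaling claimed above is the familiar coherent-versus-incoherent distinction, which a two-function numerical toy makes explicit (the amplitude value below is an arbitrary assumption, not a number from the paper):

    ```python
    def coherent_rate(n_q, amp=1e-3):
        # Amplitudes from n_q qubits add first, then square: |n_q * a|^2 = n_q^2 |a|^2
        return abs(n_q * amp) ** 2

    def incoherent_rate(n_q, amp=1e-3):
        # Excitation probabilities add independently: n_q * |a|^2
        return n_q * abs(amp) ** 2

    # The coherent/incoherent ratio grows linearly with the number of qubits.
    gain = coherent_rate(100) / incoherent_rate(100)
    ```

    Here `gain` is ~100 for 100 qubits, which is the enhancement factor the circuit is designed to capture by combining the per-qubit phase evolutions before measurement.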

  • Galaxy clustering multi-scale emulation.- [PDF] - [Article]

    Tyann Dumerchat, Julian Bautista
     

    Simulation-based inference has seen increasing interest in the past few years as a promising approach to modelling the non-linear scales of galaxy clustering. The common Gaussian-process approach is to train an emulator over the cosmological and galaxy-halo connection parameters independently at every scale. We present a new Gaussian process model that extends the input parameter space dimensions and allows a non-diagonal noise covariance matrix. We use our new framework to emulate simultaneously all scales of the non-linear clustering of galaxies in redshift space from the AbacusSummit N-body simulations at redshift $z=0.2$. The model includes nine cosmological parameters, five halo occupation distribution (HOD) parameters and one scale dimension. Accounting for the limited resolution of the simulations, we train our emulator on scales from $0.3~h^{-1}\mathrm{Mpc}$ to $60~h^{-1}\mathrm{Mpc}$ and compare its performance with the standard approach of building one independent emulator for each scale. The new model yields more accurate and precise constraints on cosmological parameters compared to the standard approach. As the new model is able to interpolate over the scale dimension, we are also able to account for the Alcock-Paczynski distortion effect, leading to more accurate constraints on the cosmological parameters.

  • Ripped {\Lambda}CDM: an observational contender to the consensus cosmological model.- [PDF] - [Article]

    R. Lazkoz, V. Salzano, L. Fernandez-Jambrina, M. Bouhmadi-López
     

    Current observations do not rule out the possibility that the Universe might end up in an abrupt event. Different such scenarios may be explored through suitable parameterizations of the dark energy and then confronted with cosmological background data. Here we parameterize a pseudorip scenario using a particular sigmoid function and carry out an in-depth, multifaceted examination of its evolutionary features and statistical performance. This depiction of a non-violent final fate of our cosmos is arguably statistically favoured over the consensus {\Lambda}CDM model according to some Bayesian discriminators.

  • Observational bounds on extended minimal theories of massive gravity: New limits on the graviton mass.- [PDF] - [Article]

    Antonio De Felice, Suresh Kumar, Shinji Mukohyama, Rafael C. Nunes
     

    In this work, we derive for the first time observational constraints on the extended Minimal Theory of Massive Gravity (eMTMG) framework in light of Planck-CMB data, geometrical measurements from Baryon Acoustic Oscillations (BAO), Type Ia supernovae from the recent Pantheon+ samples, and also the auto- and cross-correlation cosmic shear measurements from the KiDS-1000 survey. Given the great freedom in the choice of dynamics for the theory, we consider an observationally motivated subclass in which the background evolution of the Universe goes through a transition from one (positive or negative) value of the effective cosmological constant to another. From the statistical point of view, we did not find evidence of such a transition, i.e., of a deviation from the standard $\Lambda$CDM behavior, and from the joint analysis using Planck + BAO + Pantheon+ data, we constrain the graviton mass to $< 6.6 \times 10^{-34}$ eV at 95% CL. We use KiDS-1000 survey data to constrain the evolution of the scalar perturbations of the model and its limits on the growth of structure predicted by the eMTMG scenario. In this case, we find weak evidence at 95% CL for a non-zero graviton mass. We interpret and discuss these results in light of the current tension on the $S_8$ parameter. We conclude that, within the subclass considered, the current data are only able to impose upper bounds on the eMTMG dynamics. Given its potential beyond this subclass, eMTMG can be classified as a good candidate for modified gravity, serving as a framework in which observational data can effectively constrain (or confirm) the graviton mass and deviations from the standard $\Lambda$CDM behavior.

  • A Method for Obtaining Cosmological Models Consistency Relations and Gaussian Processes Testing.- [PDF] - [Article]

    J. F. Jesus, A. A. Escobal, R. Valentim, S. H. Pereira
     

    In the present work, we apply consistency relation tests to several cosmological models, including the flat and non-flat $\Lambda$CDM models, as well as the flat XCDM model. The analysis uses a non-parametric Gaussian Processes method to reconstruct various cosmological quantities of interest, such as the Hubble parameter $H(z)$ and its derivatives from $H(z)$ data, as well as the comoving distance and its derivatives from SNe Ia data. We construct consistency relations from these quantities which should be valid only in the context of each model and test them with the current data. We were able to find a general method of constructing such consistency relations in the context of $H(z)$ reconstruction. In the case of comoving distance reconstruction, there was no general method of constructing such relations, and a specific consistency relation had to be written for each model. From $H(z)$ data, we have analyzed consistency relations for all three of the above-mentioned models, while for SNe Ia data we have analyzed consistency relations only for the flat and non-flat $\Lambda$CDM models. Concerning the flat $\Lambda$CDM model, some inconsistency was found, at more than $2\sigma$ c.l., with the $H(z)$ data in the interval $1.8\lesssim z\lesssim2.4$, while the other models were all consistent at this c.l. Concerning the SNe Ia data, the flat $\Lambda$CDM model was consistent in the $0<z<2.5$ interval, at $1\sigma$ c.l., while the non-flat $\Lambda$CDM model was consistent in the same interval, at 2$\sigma$ c.l.
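    As a concrete example of a model consistency relation of this kind, the standard $Om(z)$ diagnostic (a common choice in this literature, not necessarily one of the paper's exact relations) built from $E^2(z)=H^2(z)/H_0^2$ is constant and equal to $\Omega_m$ if and only if flat $\Lambda$CDM holds, so any reconstructed drift with redshift signals a departure from the model:

    ```python
    def E_sq(z, Om0, Ok0=0.0):
        # Dimensionless Hubble rate squared for (non-)flat LambdaCDM:
        # E^2 = Om0 (1+z)^3 + Ok0 (1+z)^2 + (1 - Om0 - Ok0)
        return Om0 * (1 + z) ** 3 + Ok0 * (1 + z) ** 2 + (1.0 - Om0 - Ok0)

    def om_diagnostic(z, Om0, Ok0=0.0):
        # Om(z) = (E^2(z) - 1) / ((1+z)^3 - 1); constant (= Om0) iff flat LCDM
        return (E_sq(z, Om0, Ok0) - 1.0) / ((1 + z) ** 3 - 1.0)

    zs = [0.5, 1.0, 1.5, 2.0]
    flat = [om_diagnostic(z, 0.3) for z in zs]              # identically 0.3
    curved = [om_diagnostic(z, 0.3, Ok0=0.05) for z in zs]  # drifts with z
    ```

    In the paper's setting, $E^2(z)$ would come from the Gaussian-process reconstruction of the $H(z)$ data rather than from an assumed model, and the test is whether the reconstructed diagnostic is consistent with a constant at the quoted confidence levels.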

  • Direct Optimal Mapping Image Power Spectrum and its Window Functions.- [PDF] - [Article]

    Zhilei Xu, Honggeun Kim, Jacqueline N. Hewitt, Kai-Feng Chen, Nicholas S. Kern, Elizabeth Rath, Ruby Byrne, Adélie Gorce, Zachary E. Martinot, Joshua S. Dillon, Bryna J. Hazelton, Adrian Liu, Miguel F. Morales, Zara Abdurashidova, Tyrone Adams, James E. Aguirre, Paul Alexander, Zaki S. Ali, Rushelle Baartman, Yanga Balfour, Adam P. Beardsley, Gianni Bernardi, Tashalee S. Billings, Judd D. Bowman, Richard F. Bradley, Philip Bull, Jacob Burba, Steven Carey, Chris L. Carilli, Carina Cheng, David R. DeBoer, Eloy de Lera Acedo, Matt Dexter, Nico Eksteen, John Ely, Aaron Ewall-Wice, Nicolas Fagnoni, Randall Fritz, Steven R. Furlanetto, Kingsley Gale-Sides, Brian Glendenning, Deepthi Gorthi, Bradley Greig, Jasper Grobbelaar, Ziyaad Halday, Jack Hickish, Daniel C. Jacobs, Austin Julius, MacCalvin Kariseb, et al. (32 additional authors not shown)
     

    The key to detecting neutral hydrogen during the epoch of reionization (EoR) is to separate the cosmological signal from the dominating foreground radiation. We developed direct optimal mapping (Xu et al. 2022) to map interferometric visibilities; it contains only linear operations, with full knowledge of point spread functions from visibilities to images. Here we present an FFT-based image power spectrum and its window functions based on direct optimal mapping. We use noiseless simulation, based on the Hydrogen Epoch of Reionization Array (HERA) Phase I configuration, to study the image power spectrum properties. The window functions show $<10^{-11}$ power leakage from the foreground-dominated region into the EoR window; the 2D and 1D power spectra also verify the separation between the foregrounds and the EoR. Furthermore, we simulated visibilities from a $uv$-complete array and calculated its image power spectrum. The result shows that the foreground--EoR leakage is further suppressed below $10^{-12}$, dominated by the tapering function sidelobes; the 2D power spectrum does not show signs of the horizon wedge. The $uv$-complete result provides a reference case for future 21cm cosmology array designs.

  • Soft Scattering Evaporation of Dark Matter Subhalos by Inner Galactic Gases.- [PDF] - [Article] - [UPDATED]

    Xiao-jun Bi, Yu Gao, Mingjie Jin, Yugen Lin, Qian-Fei Xiang
     

    The large gap between a galactic dark matter subhalo's velocity and its own gravitational binding velocity means that small subhalos can be evaporated, before the dark matter thermalizes with baryons, due to the low binding velocity. If dark matter acquires an electromagnetic dipole moment, the survival of low-mass subhalos places stringent limits on the photon-mediated soft scattering. The current stringent direct detection limits point to a small dipole moment, which lets DM decouple early and allows small subhalos to form. We calculate the DM kinetic decoupling temperature in the early Universe and evaluate the smallest protohalo mass. In the late Universe, low-mass subhalos can be evaporated via soft collisions with ionized gas and accelerated cosmic rays. We calculate the subhalo evaporation rate and show that subhalos lighter than $10^{-5}M_{\odot}$ in the gaseous inner galactic region are subject to evaporation via dark matter's effective electric and magnetic dipole moments below current direct detection limits, which potentially affects the low-mass subhalo distribution in the galactic center.

  • Detection of hidden photon dark matter using the direct excitation of transmon qubits.- [PDF] - [Article] - [UPDATED]

    Shion Chen, Hajime Fukuda, Toshiaki Inada, Takeo Moroi, Tatsumi Nitta, Thanaporn Sichanugrist
     

    We propose a novel dark matter detection method utilizing the excitation of superconducting transmon qubits. Assuming hidden photon dark matter with a mass of $O(10)\ \mu{\rm eV}$, the classical wave-matter oscillation induces an effective ac electric field via the small kinetic mixing with the ordinary photon. This serves as a coherent drive field for a qubit when it is resonant, evolving it from the ground state towards the first excited state. We evaluate the rate of such evolution and the observable excitations in the measurements, as well as the search sensitivity to the hidden photon dark matter. For a selected mass, one can reach $\epsilon \sim 10^{-12}-10^{-14}$ (where $\epsilon$ is the kinetic mixing parameter of the hidden photon) with a single standard transmon qubit. A simple extension to the frequency-tunable SQUID-based transmon enables a mass scan covering the whole $4-40\ \mu{\rm eV}$ ($1-10$ GHz) range within a reasonable length of run time. The scalability of the sensitivity with the number of qubits also makes this a promising platform, in accord with the rapid evolution of superconducting quantum computer technology.

  • Local Limit of Nonlocal Gravity: A Teleparallel Extension of General Relativity.- [PDF] - [Article] - [UPDATED]

    Javad Tabatabaei, Shant Baghram, Bahram Mashhoon
     

    We describe a general constitutive framework for a teleparallel extension of the general theory of relativity. This approach goes beyond the teleparallel equivalent of general relativity (TEGR) by broadening the analogy with the electrodynamics of media. In particular, the main purpose of this paper is to investigate in detail a local constitutive extension of TEGR that is the local limit of nonlocal gravity (NLG). Within this framework, we study the modified FLRW cosmological models. Of these, the most cogent turns out to be the modified Cartesian flat model, which is shown to be inconsistent with the existence of a positive cosmological constant. Moreover, dynamic dark energy and other components of the modified Cartesian flat model evolve differently with the expansion of the universe as compared to the standard flat cosmological model. The observational consequences of the modified Cartesian flat model are briefly explored, and it is shown that the model is capable of resolving the $H_0$ tension.

  • Strong gravitational lensing's 'external shear' is not shear.- [PDF] - [Article] - [UPDATED]

    Amy Etherington, James W. Nightingale, Richard Massey, Sut-Ieng Tam, XiaoYue Cao, Anna Niemiec, Qiuhan He, Andrew Robertson, Ran Li, Aristeidis Amvrosiadis, Shaun Cole, Jose M. Diego, Carlos S. Frenk, Brenda L. Frye, David Harvey, Mathilde Jauzac, Anton M. Koekemoer, David J. Lagattuta, Marceau Limousin, Guillaume Mahler, Ellen Sirks, Charles L. Steinhardt
     

    The distribution of mass in galaxy-scale strong gravitational lenses is often modelled as an elliptical power law plus 'external shear', which notionally accounts for neighbouring galaxies and cosmic shear. We show that it does not. Except in a handful of rare systems, the best-fit values of external shear do not correlate with independent measurements of shear: from weak lensing in 45 Hubble Space Telescope images, or in 50 mock images of lenses with complex distributions of mass. Instead, the best-fit shear is aligned with the major or minor axis of 88% of lens galaxies; and the amplitude of the external shear increases if that galaxy is disky. We conclude that 'external shear' attached to a power law model is not physically meaningful, but a fudge to compensate for lack of model complexity. Since it biases other model parameters that are interpreted as physically meaningful in several science analyses (e.g. measuring galaxy evolution, dark matter physics or cosmological parameters), we recommend that future studies of galaxy-scale strong lensing should employ more flexible mass models.

  • Model-independent reconstruction of the Interacting Dark Energy Kernel: Binned and Gaussian process.- [PDF] - [Article] - [UPDATED]

    Luis A. Escamilla, Ozgur Akarsu, Eleonora Di Valentino, J. Alberto Vazquez
     

    The cosmological dark sector remains an enigma, offering numerous possibilities for exploration. One particularly intriguing option is a (non-minimal) interaction between dark matter and dark energy. In this paper, to investigate this scenario, we have implemented Binned and Gaussian-process model-independent reconstructions of the interaction kernel alongside the equation of state, using data from BAO, Pantheon+ and Cosmic Chronometers. In addition to the reconstruction process, we conducted a model selection to analyze how our methodology performed against the standard $\Lambda$CDM model. The results revealed a slight (at least 1$\sigma$) indication of oscillatory dynamics in the interaction kernel and, as a by-product, also in the DE and DM. A consequence of this outcome is the possibility of a sign change in the direction of the energy transfer between DE and DM, and a possible transition from a negative DE energy density at early times to a positive one at late times. While our reconstructions provided a better fit to the data compared to the standard model, the Bayesian evidence showed an intrinsic penalization due to the extra degrees of freedom. Nevertheless, these reconstructions could be used as a basis for other physical models with lower complexity but similar behavior.

  • Photon to axion conversion during Big Bang Nucleosynthesis.- [PDF] - [Article] - [UPDATED]

    Antonio J. Cuesta, José I. Illana, Manuel Masip
     

    We investigate how the resonant conversion, at a temperature $\bar{T}=25$-$65$ keV, of a fraction of the CMB photons into an axion-like majoron affects BBN. The scenario, which assumes the presence of a primordial magnetic field and the subsequent decay of the majorons into neutrinos at $T\approx 1$ eV, has been proposed to solve the $H_0$ tension. We find two main effects. First, since we lose photons to majorons at $\bar{T}$, the baryon-to-photon ratio is smaller at the beginning of BBN $(T>\bar{T})$ than during decoupling and structure formation ($T\ll \bar{T}$). This relaxes the $2\sigma$ mismatch between the observed deuterium abundance and the one predicted by the standard $\Lambda$CDM model. Second, since the conversion implies a sudden drop in the temperature of the CMB during the final phase of BBN, it interrupts the synthesis of lithium and beryllium and reduces their final abundance, possibly alleviating the lithium problem.

  • Scalar-Induced Gravitational Waves from Ghost Inflation and Parity Violation.- [PDF] - [Article] - [UPDATED]

    Sebastian Garcia-Saenz, Yizhou Lu, Zhiming Shuai
     

    We calculate the scalar-induced gravitational wave energy density in the theory of Ghost Inflation, assuming scale invariance and taking into account both the power spectrum- and trispectrum-induced contributions. For the latter we consider the leading cubic and quartic couplings of the comoving curvature perturbation in addition to two parity-violating quartic operators. In the parity-even case, we find the relative importance of the trispectrum-induced signal to be suppressed by the requirement of perturbativity, strengthening a no-go theorem recently put forth. The parity-odd signal, even though also bound to be small, is non-degenerate with the Gaussian contribution and may in principle be comparable to the parity-even non-Gaussian part, thus potentially serving as a probe of the Ghost Inflation scenario and of parity violating physics during inflation.

  • Biases in velocity reconstruction: investigating the effects on growth rate and expansion measurements in the local universe.- [PDF] - [Article] - [UPDATED]

    Ryan J. Turner, Chris Blake
     

    The local galaxy peculiar velocity field can be reconstructed from the surrounding distribution of large-scale structure and plays an important role in calibrating cosmic growth and expansion measurements. In this paper, we investigate the effect of the stochasticity of these velocity reconstructions on the statistical and systematic errors in cosmological inferences. By introducing a simple statistical model between the measured and theoretical velocities, whose terms we calibrate from linear theory, we derive the bias in the model velocity. We then use lognormal realisations to explore the potential impact of this bias when using a cosmic flow model to measure the growth rate of structure, and to sharpen expansion rate measurements from host galaxies for gravitational wave standard sirens with electromagnetic counterparts. Although our illustrative study does not contain fully realistic observational effects, we demonstrate that in some scenarios these corrections are significant and result in a measurable improvement in determinations of the Hubble constant compared to standard forecasts.

  • Constraining the halo-ISM connection through multi-transition carbon monoxide line-intensity mapping.- [PDF] - [Article] - [UPDATED]

    Dongwoo T. Chung
     

    Line-intensity mapping (LIM) surveys will characterise the cosmological large-scale structure of emissivity in a range of atomic and molecular spectral lines, but existing literature rarely considers whether these surveys can recover excitation properties of the tracer gas species, such as the carbon monoxide (CO) molecule. Combining basic empirical and physical assumptions with the off-the-shelf Radex radiative transfer code or a Gaussian process emulator of Radex outputs, we devise a basic dark matter halo model for CO emission by tying bulk CO properties to halo properties, exposing physical variables governing CO excitation as free parameters. The CO Mapping Array Project (COMAP) is working towards a multi-band survey programme to observe both CO(1-0) and CO(2-1) at $z\sim7$. We show that this programme, as well as a further 'Triple Deluxe' extension to higher frequencies covering CO(3-2), is fundamentally capable of successfully recovering the connection between halo mass and CO abundances, and constraining the molecular gas kinetic temperature and density within the star-forming interstellar medium in ways that single-transition CO LIM cannot. Given a fiducial thermal pressure of $\sim10^4$ K cm$^{-3}$ for molecular gas in halos of $\sim10^{10}\,M_\odot$, simulated multi-band COMAP surveys successfully recover the thermal pressure within 68% interval half-widths of 0.5--0.6 dex. Construction of multi-frequency LIM instrumentation to access multiple CO transitions is crucial in harnessing this capability, as part of a cosmic statistical probe of gas metallicity, dust chemistry, and other physical parameters in star-forming regions of the first galaxies and proto-galaxies out of reionisation.

  • What can galaxy shapes tell us about physics beyond the standard model?.- [PDF] - [Article] - [UPDATED]

    Oliver H. E. Philcox, Morgane J. König, Stephon Alexander, David N. Spergel
     

    The shapes of galaxies trace scalar physics in the late-Universe through the large-scale gravitational potential. Are they also sensitive to higher-spin physics? We present a general study into the observational consequences of vector and tensor modes in the early and late Universe, through the statistics of cosmic shear and its higher-order generalization, flexion. Higher-spin contributions arise from both gravitational lensing and intrinsic alignments, and we give the leading-order correlators for each (some of which have been previously derived), in addition to their flat-sky limits. In particular, we find non-trivial sourcing of shear $EB$ and $BB$ spectra, depending on the parity properties of the source. We consider two sources of vector and tensor modes: scale-invariant primordial fluctuations and cosmic strings, forecasting the detectability of each for upcoming surveys. Shear is found to be a powerful probe of cosmic strings, primarily through the continual sourcing of vector modes; flexion adds little to the constraining power except on very small scales ($\ell\gtrsim 1000$), though it could be an intriguing probe of as-yet-unknown rank-three tensors or halo-scale physics. Such probes could be used to constrain new physics proposed to explain recent pulsar timing array observations.

  • Generalized cold-atom simulators for vacuum decay.- [PDF] - [Article] - [UPDATED]

    Alexander C. Jenkins, Ian G. Moss, Thomas P. Billam, Zoran Hadzibabic, Hiranya V. Peiris, Andrew Pontzen
     

    Cold-atom analogue experiments are a promising new tool for studying relativistic vacuum decay, allowing us to empirically probe early-Universe theories in the laboratory. However, existing analogue proposals place stringent requirements on the atomic scattering lengths that are challenging to realize experimentally. Here we generalize these proposals and show that any stable mixture between two states of a bosonic isotope can be used as a relativistic analogue. This greatly expands the range of suitable experimental setups, and will thus expedite efforts to study vacuum decay with cold atoms.

  • A visual tool for assessing tension-resolving models in the $H_0$-$\sigma_8$ plane.- [PDF] - [Article] - [UPDATED]

    Igor de O. C. Pedreira, Micol Benetti, Elisa G. M. Ferreira, Leila L. Graef, Laura Herold
     

    Beyond-$\Lambda$CDM models, which were proposed to resolve the "Hubble tension", often have an impact on the discrepancy in the amplitude of matter clustering, the "$\sigma_8$-tension". To explore the interplay between the two tensions, we propose a simple method to visualize the relation between the two parameters $H_0$ and $\sigma_8$: For a given extension of the $\Lambda$CDM model and data set, we plot the relation between $H_0$ and $\sigma_8$ for different amplitudes of the beyond-$\Lambda$CDM physics. We use this visualization method to illustrate the trend of selected cosmological models, including non-minimal Higgs-like inflation, early dark energy, a varying effective electron mass, an extra number of relativistic species and modified dark energy models. We envision that the proposed method could be a useful diagnostic tool to illustrate the behaviour of complex cosmological models with many parameters in the context of the $H_0$ and $\sigma_8$ tensions.

astro-ph.HE

  • Future Prospects for Constraining Black-Hole Spacetime: Horizon-scale Variability of Astrophysical Jet.- [PDF] - [Article]

    Kotaro Moriyama, Alejandro Cruz-Osorio, Yosuke Mizuno, Christian M. Fromm, Antonios Nathanail, Luciano Rezzolla
     

    The Event Horizon Telescope (EHT) Collaboration has recently published the first horizon-scale images of the supermassive black holes M87* and Sgr A* and provided some first information on the physical conditions in their vicinity. The comparison between the observations and three-dimensional general-relativistic magnetohydrodynamic (GRMHD) simulations has enabled the EHT to set initial constraints on the properties of these black-hole spacetimes. However, accurately distinguishing the properties of the accretion flow from those of the spacetime, most notably the black-hole mass and spin, remains challenging because of the degeneracies the emitted radiation suffers when varying the properties of the plasma and those of the spacetime. The next-generation EHT (ngEHT) observations are expected to remove some of these degeneracies by exploring the complex interplay between the disk and jet dynamics, which represents one of the most promising tools for extracting information on the black-hole spin. By using GRMHD simulations of magnetically arrested disks (MADs) and general-relativistic radiative-transfer (GRRT) calculations of the emitted radiation, we have studied the properties of the jet and the accretion-disk dynamics on spatial scales that are comparable with the horizon. In this way, we are able to highlight that the radial and azimuthal dynamics of the jet are well correlated with the black-hole spin. Based on the resolution and image-reconstruction capabilities of the ngEHT observations of M87*, we can assess the detectability and associated uncertainty of this correlation. Overall, our results serve to assess the prospects for constraining the black-hole spin with future EHT observations.

  • Mergers of black hole binaries driven by misaligned circumbinary discs.- [PDF] - [Article]

    Rebecca G. Martin, Stephen Lepp, Bing Zhang, C. J. Nixon, Anna C. Childs
     

    With hydrodynamical simulations we examine the evolution of a highly misaligned circumbinary disc around a black hole binary including the effects of general relativity. We show that a disc mass of just a few percent of the binary mass can significantly increase the binary eccentricity through von-Zeipel--Kozai-Lidov (ZKL) like oscillations provided that the disc lifetime is longer than the ZKL oscillation timescale. The disc begins as a relatively narrow ring of material far from the binary and spreads radially. When the binary becomes highly eccentric, disc breaking forms an inner disc ring that quickly aligns to polar. The polar ring drives fast retrograde apsidal precession of the binary that weakens the ZKL effect. This allows the binary eccentricity to remain at a high level and may significantly shorten the black hole merger time. The mechanism requires the initial disc inclination relative to the binary to be closer to retrograde than to prograde.

  • Minutes-duration Optical Flares with Supernova Luminosities.- [PDF] - [Article]

    Anna Y. Q. Ho, Daniel A. Perley, Ping Chen, Steve Schulze, Vik Dhillon, Harsh Kumar, Aswin Suresh, Vishwajeet Swain, Michael Bremer, Stephen J. Smartt, Joseph P. Anderson, G. C. Anupama, Supachai Awiphan, Sudhanshu Barway, Eric C. Bellm, Sagi Ben-Ami, Varun Bhalerao, Thomas de Boer, Thomas G. Brink, Rick Burruss, Poonam Chandra, Ting-Wan Chen, Wen-Ping Chen, Jeff Cooke, Michael W. Coughlin, Kaustav K. Das, Andrew J. Drake, Alexei V. Filippenko, James Freeburn, Christoffer Fremling, Michael D. Fulton, Avishay Gal-Yam, Lluís Galbany, Hua Gao, Matthew J. Graham, Mariusz Gromadzki, Claudia P. Gutiérrez, et al. (40 additional authors not shown)
     

    In recent years, certain luminous extragalactic optical transients have been observed to last only a few days. Their short observed duration implies a different powering mechanism from the most common luminous extragalactic transients (supernovae) whose timescale is weeks. Some short-duration transients, most notably AT2018cow, display blue optical colours and bright radio and X-ray emission. Several AT2018cow-like transients have shown hints of a long-lived embedded energy source, such as X-ray variability, prolonged ultraviolet emission, a tentative X-ray quasiperiodic oscillation, and large energies coupled to fast (but subrelativistic) radio-emitting ejecta. Here we report observations of minutes-duration optical flares in the aftermath of an AT2018cow-like transient, AT2022tsd (the "Tasmanian Devil"). The flares occur over a period of months, are highly energetic, and are likely nonthermal, implying that they arise from a near-relativistic outflow or jet. Our observations confirm that in some AT2018cow-like transients the embedded energy source is a compact object, either a magnetar or an accreting black hole.

  • Multi-messenger astrophysics in the gravitational-wave era.- [PDF] - [Article]

    Geoffrey Mo, Rahul Jayaraman, Danielle Frostig, Michael M. Fausnaugh, Erik Katsavounidis, George R. Ricker
     

    The observation of GW170817, the first binary neutron star merger observed in both gravitational waves (GW) and electromagnetic (EM) waves, kickstarted the age of multi-messenger GW astronomy. This new technique presents an observationally rich way to probe extreme astrophysical processes. With the onset of the LIGO-Virgo-KAGRA Collaboration's O4 observing run and wide-field EM instruments well-suited for transient searches, multi-messenger astrophysics has never been so promising. We review recent searches and results for multi-messenger counterparts to GW events, and describe existing and upcoming EM follow-up facilities, with a particular focus on WINTER, a new near-infrared survey telescope, and TESS, an exoplanet survey space telescope.

  • Numerical viscosity and resistivity in MHD turbulence simulations.- [PDF] - [Article]

    Lakshmi Malvadi Shivakumar, Christoph Federrath
     

    To ensure that magnetohydrodynamical (MHD) turbulence simulations accurately reflect the physics, it is critical to understand numerical dissipation. Here we determine the hydrodynamic and magnetic Reynolds number (Re and Rm) as a function of linear grid resolution N, in MHD simulations with purely numerical viscosity and resistivity (implicit large eddy simulations; ILES). We quantify the numerical viscosity in the subsonic and supersonic regime, via simulations with sonic Mach numbers of Mach=0.1 and Mach=10, respectively. We find Re=(N/N_Re)^p_Re, with p_Re=[1.2,1.4] and N_Re=[0.8,1.7] for Mach=0.1, and p_Re=[1.5,2.0] and N_Re=[0.8,4.4] for Mach=10, and we find Rm=(N/N_Rm)^p_Rm, with p_Rm=[1.3,1.5] and N_Rm=[1.1,2.3] for Mach=0.1, and p_Rm=[1.2,1.6] and N_Rm=[0.1,0.7] for Mach=10. The resulting magnetic Prandtl number (Pm=Rm/Re) is consistent with a constant value of Pm=1.3+/-1.1 for Mach=0.1, and Pm=2.0+/-1.4 for Mach=10. We compare our results with an independent study in the subsonic regime and find excellent agreement in p_Re and p_Rm, and agreement within a factor of ~2 for N_Re and N_Rm (due to differences in the codes and solvers). We compare these results to the target Re and Rm set in direct numerical simulations (DNS, i.e., using explicit viscosity and resistivity) from the literature. This comparison and our ILES relations can be used to determine whether a target Re and Rm can be achieved in a DNS for a given N. We conclude that for the explicit (physical) dissipation to dominate over the numerical dissipation, the target Reynolds numbers must be set lower than the corresponding numerical values.
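
The quoted scaling relations can be evaluated directly. Below is a minimal sketch (not from the paper) that plugs a linear grid resolution N into Re = (N/N_Re)^p_Re and Rm = (N/N_Rm)^p_Rm; the parameter values used are illustrative midpoints of the quoted subsonic (Mach = 0.1) fit ranges, not fitted numbers:

```python
# Hedged sketch: evaluate the ILES scaling relations Re = (N / N_Re)^p_Re
# and Rm = (N / N_Rm)^p_Rm quoted in the abstract, for a given resolution N.
# Midpoints of the quoted subsonic parameter ranges are used purely for illustration.

def numerical_reynolds(N, N0, p):
    """Effective Reynolds number implied by numerical dissipation at resolution N."""
    return (N / N0) ** p

# Illustrative midpoints of the quoted Mach = 0.1 ranges (assumed, not fitted here).
p_Re, N_Re = 1.3, 1.25   # midpoints of p_Re in [1.2, 1.4], N_Re in [0.8, 1.7]
p_Rm, N_Rm = 1.4, 1.7    # midpoints of p_Rm in [1.3, 1.5], N_Rm in [1.1, 2.3]

for N in (256, 512, 1024):
    Re = numerical_reynolds(N, N_Re, p_Re)
    Rm = numerical_reynolds(N, N_Rm, p_Rm)
    print(f"N = {N:5d}:  Re ~ {Re:9.0f}   Rm ~ {Rm:9.0f}   Pm ~ {Rm / Re:.2f}")
```

Inverting the relation, N = N_Re * Re^(1/p_Re), gives the grid resolution at which numerical dissipation alone reaches a target Reynolds number, which is how such fits can be used to check whether the explicit dissipation in a DNS actually dominates.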

  • Optical intra-day variability of the blazar S5 0716+714.- [PDF] - [Article]

    Tushar Tripathi, Alok C. Gupta, Ali Takey, Rumen Bachev, Oliver Vince, Anton Strigachev, Pankaj Kushwaha, E. G. Elhosseiny, Paul J. Wiita, G. Damljanovic, Vinit Dhiman, A. Fouad, Haritma Gaur, Minfeng Gu, G. E. Hamed, Shubham Kishore, A. Kurtenkov, Shantanu Rastogi, E. Semkov, I. Zead, Zhongli Zhang
     

    We present extensive recent multi-band optical photometric observations of the blazar S5 0716+714 carried out over 53 nights with two telescopes in India, two in Bulgaria, one in Serbia, and one in Egypt during 2019 November -- 2022 December. We collected 1401, 689, 14726, and 165 photometric image frames in B, V, R, and I bands, respectively. We monitored the blazar quasi-simultaneously during 3 nights in B, V, R, and I bands; 4 nights in B, V, and R; 2 nights in V, R, and I; 5 nights in B and R; and 2 nights in V and R bands. We also took 37 nights of data only in the R band. Single-band data are used to study intraday flux variability, and quasi-simultaneous observations in two or more bands allow us to search for colour variation in the source. We employ the power-enhanced F-test and the nested ANOVA test to search for genuine flux and colour variations in the light curves of the blazar on intraday timescales. Out of 12, 11, 53, and 5 nights of observations, intraday variations with amplitudes between ~3% and ~20% are detected in 9, 8, 31, and 3 nights in B, V, R, and I bands, respectively, corresponding to duty cycles of 75%, 73%, 58%, and 60%. These duty cycles are lower than those typically measured at earlier times. On these timescales, both bluer-when-brighter and redder-when-brighter colour variations are seen, though nights with no measurable colour variation are also present. We briefly discuss possible explanations for this observed intraday variability.

  • Realtime Follow-up of External Alerts with the IceCube Supernova Data Acquisition System.- [PDF] - [Article]

    Nora Valtonen-Mattila
     

    The IceCube Neutrino Observatory is uniquely sensitive to MeV neutrinos emitted during a core-collapse supernova. The Supernova Data Acquisition System (SNDAQ) monitors the detector rate deviation in real time, searching for bursts of MeV neutrinos. We present a new analysis stream that uses SNDAQ to respond to external alerts from gravitational-wave events detected by LIGO-Virgo-KAGRA.

  • Electromagnetic Counterparts Powered by Kicked Remnants of Black Hole Binary Mergers in AGN Disks.- [PDF] - [Article]

    Ken Chen, Zi-Gao Dai
     

    The disk of an active galactic nucleus (AGN) is widely regarded as a prominent formation channel of binary black hole (BBH) mergers that can be detected through gravitational waves (GWs). Besides, the presence of dense environmental gas offers the potential for an embedded BBH merger to produce electromagnetic (EM) counterparts. In this paper, we investigate EM emission powered by the kicked remnant of a BBH merger occurring within the AGN disk. The remnant BH will launch a jet by accreting magnetized medium as it traverses the disk. The resulting jet will decelerate and dissipate energy into a lateral cocoon during its propagation. We explore three radiation mechanisms of the jet-cocoon system: jet breakout emission, disk cocoon cooling emission, and jet cocoon cooling emission, and find that the jet cocoon cooling emission is the most likely to be detected in its frequency bands. We predict a soft X-ray transient, lasting for O($10^3$) s, to serve as an EM counterpart, whose time delay of O(10) days after the GW trigger facilitates follow-up observations. Consequently, BBH mergers in the AGN disk represent a novel multimessenger source. In the future, enhanced precision in measuring and localizing GWs, coupled with diligent searches for such associated EM signals, will effectively validate or constrain the origin of BBH mergers in the AGN disk.

  • Superradiance in the Kerr-Taub-NUT spacetime.- [PDF] - [Article]

    Bum-Hoon Lee, Wonwoo Lee, Yong-Hui Qi
     

    Superradiance is the effect of field waves being amplified during reflection from a charged or rotating black hole. In this paper, we study the low-energy dynamics of super-radiant scattering of massive scalar and massless higher spin field perturbations in a generic axisymmetric stationary Kerr-Taub-NUT (Newman-Unti-Tamburino) spacetime, which represents sources with both gravitomagnetic monopole moment (magnetic mass) and gravitomagnetic dipole moment (angular momentum). We obtain a generalized Teukolsky master equation for all spin perturbation fields. The equations are separated into their angular and radial parts. The angular equations lead to spin-weighted spheroidal harmonic functions that generalize those in Kerr spacetime. We identify an effective spin as a coupling between frequency (or energy) and the NUT parameter. The behaviors of the radial wave function near the horizon and at the infinite boundary are studied. We provide analytical expressions for low-energy observables such as emission rates and cross sections of all massless fields with spin, including scalar, neutrino, electromagnetic, Rarita-Schwinger, and gravitational waves.

  • Exploring the Potential for Detecting Rotational Instabilities in Binary Neutron Star Merger Remnants with Gravitational Wave Detectors.- [PDF] - [Article]

    Argyro Sasli, Nikolaos Karnesis, Nikolaos Stergioulas
     

    We explore the potential for detecting rotational instabilities in the post-merger phase of binary neutron star mergers using different network configurations of upgraded and next-generation gravitational wave detectors. Our study employs numerically generated post-merger waveforms, which reveal the re-excitation of the $l=m=2$ $f$-mode at a time of $O(10)$ ms after merger. We evaluate the detectability of these signals by injecting them into colored Gaussian noise and performing a reconstruction as a sum of wavelets using Bayesian inference. Computing the overlap between the reconstructed and injected signal, restricted to the instability part of the post-merger phase, we find that one could infer the presence of rotational instabilities with a network of planned 3rd-generation detectors, depending on the total mass and distance to the source. For a recently suggested high-frequency detector design, we find that the instability part would be detectable even at 200 Mpc, significantly increasing the anticipated detection rate. For a network consisting of the existing HLV detectors, but upgraded to twice the A+ sensitivity, we confirm that the peak frequency of the whole post-merger gravitational-wave emission could be detectable with a network signal-to-noise ratio of 8 at a distance of 40 Mpc.

  • Absolutely Scintillating: constraining $\nu$ mass with black hole-forming supernovae.- [PDF] - [Article]

    George Parker, Michael Wurm
     

    The terrestrial detection of a neutrino burst from the next galactic core-collapse supernova (CCSN) will provide profound insight into stellar astrophysics, as well as fundamental neutrino physics. Using Time-Of-Flight (ToF) effects, a CCSN signal can be used to constrain the absolute neutrino mass. In this work, we study the case where a black hole forms during core-collapse, abruptly truncating the neutrino signal. This sharp cutoff is a feature that can be leveraged in a ToF study, enabling strict limits to be set on the neutrino mass which are largely model-independent. If supernova neutrinos are detected on Earth in liquid scintillator detectors, the exceptional energy resolution would allow an energy-dependent sampling of the ToF effects at low neutrino energies. One promising experimental program is the Jiangmen Underground Neutrino Observatory (JUNO), a next-generation liquid scintillator detector currently under construction in China. Using three-dimensional black hole-forming core-collapse supernova simulations, the sensitivity of a JUNO-like detector to the absolute neutrino mass is conservatively estimated to be $m_\nu < 0.39^{+0.06}_{-0.01}$ eV for a 95% CL bound. A future-generation liquid scintillator observatory like THEIA-100 could even achieve sub-0.2 eV sensitivity.
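
The ToF effect that such mass limits rely on follows from the standard delay formula for a relativistic neutrino, Δt ≈ (D / 2c) (m c² / E)². The sketch below (illustrative numbers, not the paper's analysis) shows why low-energy neutrinos and a sharp signal cutoff are so valuable:

```python
# Hedged sketch of the time-of-flight delay behind CCSN neutrino mass limits:
# a neutrino of mass m and energy E >> m c^2 arrives later than a massless
# signal by dt ~ (D / 2c) * (m c^2 / E)^2. Values below are illustrative only.

KPC_M = 3.0857e19          # metres per kiloparsec
C = 2.99792458e8           # speed of light, m/s

def tof_delay_s(distance_kpc, m_ev, energy_mev):
    """ToF delay (seconds) for mass m_ev [eV] and energy energy_mev [MeV]."""
    ratio = m_ev / (energy_mev * 1.0e6)          # (m c^2 / E), both in eV
    return 0.5 * (distance_kpc * KPC_M / C) * ratio ** 2

# A 1 eV neutrino of 10 MeV from a 10 kpc (galactic) source lags by ~5 ms;
# the delay grows as 1/E^2, so the good energy resolution of liquid
# scintillators at low energies sharpens the constraint.
delay = tof_delay_s(10.0, 1.0, 10.0)
print(f"delay = {delay * 1e3:.1f} ms")
```

Because the delay scales as m², halving the mass bound requires resolving arrival-time offsets four times smaller, which is why a sharp black-hole-formation cutoff in the burst is such a powerful timing reference.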

  • Polarimetric Geometric Modeling for mm-VLBI Observations of Black Holes.- [PDF] - [Article]

    Freek Roelofs, Michael D. Johnson, Andrew Chael, Michael Janssen, Maciek Wielgus, Avery E. Broderick, Event Horizon Telescope Collaboration
     

    The Event Horizon Telescope (EHT) is a millimeter very long baseline interferometry (VLBI) array that has imaged the apparent shadows of the supermassive black holes M87* and Sagittarius A*. Polarimetric data from these observations contain a wealth of information on the black hole and accretion flow properties. In this work, we develop polarimetric geometric modeling methods for mm-VLBI data, focusing on approaches that fit data products with differing degrees of invariance to broad classes of calibration errors. We establish a fitting procedure using a polarimetric "m-ring" model to approximate the image structure near a black hole. By fitting this model to synthetic EHT data from general relativistic magnetohydrodynamic models, we show that the linear and circular polarization structure can be successfully approximated with relatively few model parameters. We then fit this model to EHT observations of M87* taken in 2017. In total intensity and linear polarization, the m-ring fits are consistent with previous results from imaging methods. In circular polarization, the m-ring fits indicate the presence of event-horizon-scale circular polarization structure, with a persistent dipolar asymmetry and orientation across several days. The same structure was recovered independently of observing band, used data products, and model assumptions. Despite this broad agreement, imaging methods do not produce similarly consistent results. Our circular polarization results, which imposed additional assumptions on the source structure, should thus be interpreted with some caution. Polarimetric geometric modeling provides a useful and powerful method to constrain the properties of horizon-scale polarized emission, particularly for sparse arrays like the EHT.

  • Magnetohydrodynamics predicts heavy-tailed distributions of axion-photon conversion.- [PDF] - [Article] - [UPDATED]

    Pierluca Carenza, Ramkishor Sharma, M.C. David Marsh, Axel Brandenburg, Eike Ravensburg
     

    The interconversion of axionlike particles (ALPs) and photons in magnetised astrophysical environments provides a promising route to search for ALPs. The strongest limits to date on light ALPs use galaxy clusters as ALP-photon converters. However, such studies traditionally rely on simple models of the cluster magnetic fields, with the state-of-the-art being Gaussian random fields (GRFs). We present the first systematic study of ALP-photon conversion in more realistic, turbulent fields from dedicated magnetohydrodynamic (MHD) simulations, which we compare with GRF models. For GRFs, we analytically derive the distribution of conversion ratios at fixed energy and find that it follows an exponential law. We find that the MHD models agree with the exponential law for typical, small-amplitude mixings but exhibit distinctly heavy tails for rare and large mixings. We explain how non-Gaussian features, e.g. coherent structures and local spikes in the MHD magnetic field, are responsible for the heavy tail. Our results suggest that limits placed on ALPs using GRFs are robust.

  • Modeling Solids in Nuclear Astrophysics with Smoothed Particle Hydrodynamics.- [PDF] - [Article] - [UPDATED]

    Irina Sagert, Oleg Korobkin, Ingo Tews, Bing-Jyun Tsao, Hyun Lim, Michael J. Falato, Julien Loiseau
     

    Smoothed Particle Hydrodynamics (SPH) is a frequently applied tool in computational astrophysics to solve the fluid dynamics equations governing the systems under study. For some problems, for example when involving asteroids and asteroid impacts, the additional inclusion of material strength is necessary in order to accurately describe the dynamics. In compact stars, that is white dwarfs and neutron stars, solid components are also present. Neutron stars have a solid crust which is the strongest material known in nature. However, their dynamical evolution, when modeled via SPH or other computational fluid dynamics codes, is usually described as a purely fluid dynamics problem. Here, we present the first 3D simulations of neutron-star crustal toroidal oscillations including material strength with the Los Alamos National Laboratory SPH code FleCSPH. In the first half of the paper, we present the numerical implementation of solid material modeling together with standard tests. The second half is on the simulation of crustal oscillations in the fundamental toroidal mode. Here, we dedicate a large fraction of the paper to approaches which can suppress numerical noise in the solid. If not minimized, the latter can dominate the crustal motion in the simulations.

  • Optical Emission Model for Binary Black Hole Merger Remnants Travelling through Discs of Active Galactic Nuclei.- [PDF] - [Article] - [UPDATED]

    J. C. Rodríguez-Ramírez, C. R. Bom, B. Fraga, R. Nemmen
     

    Active galactic nuclei (AGNs) have been proposed as plausible sites for hosting a sizable fraction of the binary black hole (BBH) mergers measured through gravitational waves (GWs) by the LIGO-Virgo-KAGRA (LVK) experiment. These GWs could be accompanied by radiation feedback due to the interaction of the BBH merger remnant with the AGN disc. We present a new predicted radiation signature driven by the passage of a kicked BBH remnant through a thin AGN disc. We analyse the situation of a merger occurring outside the thin disc, where the merger is of second or higher generation in a hierarchical merging sequence. The coalescence produces a kicked BH remnant that eventually plunges into the disc, accretes material, and inflates jet cocoons. We consider the case of a jet cocoon propagating quasi-parallel to the disc plane and study the outflow that results when the cocoon emerges from the disc. We calculate the transient emission of the emerging cocoon using a photon diffusion model typically employed to describe the light curves of supernovae. Depending on the parameter configuration, the flare produced by the emerging cocoon could be comparable to or exceed the AGN background emission at optical and extreme-ultraviolet wavelengths. For instance, in AGNs with central engines of $\sim 5\times10^{6}$ M$_\odot$, flares driven by BH remnants with masses of $\sim$100 M$_\odot$ can appear $\sim$10-100 days after the GW, lasting for a few days.

  • Revisiting the implications of Liouville's theorem to the anisotropy of cosmic rays.- [PDF] - [Article] - [UPDATED]

    Cainã de Oliveira, Leonardo Paulo Maia, Vitor de Souza
     

    We present a solution to Liouville's equation for an ensemble of charged particles propagating in magnetic fields. The solution is expressed as an expansion of the phase-space density in spherical harmonics, allowing a direct interpretation of the distribution of arrival directions of cosmic rays. The results are found for chosen conditions of variability and source distributions. We show there are two conditions for an initially isotropic flux of particles to remain isotropic while traveling through a magnetic field: isotropy and homogeneity of the sources. The formalism is used to analyze the data measured by the Pierre Auger Observatory, contributing to the understanding of the dependence of the dipole amplitude on energy and predicting the energy at which the quadrupole signal should be measured.

  • IceCube search for neutrinos from GRB 221009A.- [PDF] - [Article] - [UPDATED]

    Karlijn Kruiswijk, Bennett Brinson, Rachel Procter-Murphy, Jessie Thwaites, Nora Valtonen-Mattila
     

    GRB 221009A is the brightest gamma-ray burst (GRB) ever observed. The extremely high observed flux of high- and very-high-energy photons provides a unique opportunity to probe the predicted neutrino counterpart to the electromagnetic emission. We have used a variety of methods to search for neutrinos in coincidence with the GRB over several time windows during its precursor, prompt, and afterglow phases. MeV-scale neutrinos are studied using photomultiplier rate scalers, which are normally used to search for galactic core-collapse supernova neutrinos. GeV neutrinos are searched for using DeepCore triggers; these events lack directional localization but can indicate an excess in the event rate. Neutrinos of 10 GeV - 1 TeV and >TeV are searched for using traditional neutrino point-source methods, which take into account the direction and time of events, with DeepCore and the entire IceCube detector, respectively. The >TeV results include both a fast-response analysis conducted by IceCube in real time, with time windows of T$_0 - 1$ to T$_0 + 2$ hours and T$_0 \pm 1$ day around the time of GRB 221009A, and an offline analysis with 3 new time windows up to T$_0 - 1$ to T$_0 + 14$ days, the longest time period we consider. The combination of observations by IceCube covers 9 orders of magnitude in neutrino energy, from MeV to PeV, placing upper limits on predicted neutrino emission across this range.

  • Estimation of redshift and associated uncertainty of Fermi/LAT extra-galactic sources with Deep Learning.- [PDF] - [Article] - [UPDATED]

    Sarvesh Gharat, Abhimanyu Borthakur, Gopal Bhatta
     

    With the advancement of technology, machine learning-based analytical methods have pervaded nearly every discipline in modern studies. In particular, a number of methods have been employed to estimate the redshift of gamma-ray loud active galactic nuclei (AGN), which are a class of supermassive black hole systems known for their intense multi-wavelength emissions and violent variability. Determining the redshifts of AGNs is essential for understanding their distances, which, in turn, sheds light on our current understanding of the structure of the nearby universe. However, the task involves a number of challenges, such as the need for meticulous follow-up observations across multiple wavelengths and astronomical facilities. In this study, we employ a simple yet effective deep learning model with a single hidden layer of $64$ neurons and a dropout of 0.25 in the hidden layer, on a sample of AGNs with known redshifts from the latest AGN catalog, 4LAC-DR3, obtained from Fermi-LAT. We utilized their spectral, spatial, and temporal properties to robustly predict the redshifts of AGNs, as well as to quantify their associated uncertainties by modifying the model using two different variational inference methods. We achieve a correlation coefficient of 0.784 on the test set from the frequentist model, and 0.777 and 0.778 from the two variants of variational inference; when used to make predictions on the samples with unknown redshifts, we achieve mean predictions of 0.421, 0.415, and 0.393, with standard deviations of 0.258, 0.246, and 0.207 from the models, respectively.
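
As a rough illustration of the stated architecture (one hidden layer of 64 units with dropout 0.25), the following NumPy sketch implements a forward pass only; the feature count, activation, and weights are placeholders, not the paper's choices:

```python
import numpy as np

# Hedged sketch of the abstract's network shape: a single hidden layer of 64
# neurons with dropout 0.25, mapping source features to a redshift estimate.
# Feature count, activation, and weights here are illustrative placeholders.

rng = np.random.default_rng(0)
n_features, n_hidden = 12, 64

W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

def predict(x, train=False, p_drop=0.25):
    """One forward pass; dropout is active only when train=True."""
    h = np.maximum(0.0, x @ W1 + b1)          # ReLU hidden layer, 64 units
    if train:
        mask = rng.random(h.shape) >= p_drop  # drop 25% of hidden units
        h = h * mask / (1.0 - p_drop)         # inverted-dropout rescaling
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(5, n_features))          # five mock AGN feature vectors
z_hat = predict(x)                            # one redshift estimate each
print(z_hat.shape)
```

Keeping dropout active at prediction time and sampling repeatedly is one common way to obtain predictive uncertainties from such a network, though the paper itself reports using variational inference methods for that step.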

  • Unraveling Parameter Degeneracy in GRB Data Analysis.- [PDF] - [Article] - [UPDATED]

    Keneth Garcia-Cifuentes, Rosa Leticia Becerra, Fabio De Colle, Felipe Vargas
     

    Gamma-ray burst (GRB) afterglow light curves and spectra provide information about the density of the environment, the energy of the explosion, the properties of the particle acceleration process, and the structure of the decelerating jet. Due to the large number of parameters involved, the model can present a certain degree of parameter degeneracy. In this paper, we generated synthetic photometric data points using a standard GRB afterglow model and fit them using the Markov Chain Monte Carlo (MCMC) method, which has emerged as the preferred approach for analysing and interpreting data in astronomy. We show that, depending on the choice of priors, the parameter degeneracy can go unnoticed by the MCMC method. Furthermore, we apply the MCMC method to analyse the GRB 170817A afterglow. We find that there is a complete degeneracy between the energy of the explosion $E$, the density of the environment $n$, and the microphysical parameters describing the particle acceleration process (e.g. $\epsilon_e$ and $\epsilon_B$), which cannot be determined by the afterglow light curve alone. Our results emphasise the importance of gaining a deep understanding of the degeneracy properties which can be present in GRB afterglow models, as well as the limitations of the MCMC method. In the case of GRB 170817A, we get the following values for the physical parameters: $E=8\times 10^{50}-1 \times 10^{53}$ erg, $n=7\times 10^{-5}-9\times10^{-3}$, $\epsilon_e=10^{-3}-0.3$, $\epsilon_B=10^{-10}-0.3$.

  • Induced magnetic field in accretion disks around neutron stars.- [PDF] - [Article] - [UPDATED]

    A. V. Kuzin
     

    There are X-ray pulsating sources that are explained by accretion from disks around neutron stars. Such disks deserve a detailed analysis. In particular, the dipole magnetic field of the central star may penetrate the disk, giving rise to an induced magnetic field inside the disk due to the frozen-in condition. The growth of the induced field can be limited by turbulent diffusion. In the present work, I calculate the induced field in this case. The problem is reduced to the induction equation, to which I have found an analytical solution describing the radial and vertical structures of the induced field. The radial structure is close to the earlier predicted dependence on the difference in angular velocities between the magnetosphere and the disk, $b \propto \Omega_{\rm s} - \Omega_{\rm k}$, while the vertical structure of the induced field is close to a linear dependence on the altitude above the equator, $b \propto z$. The possibility of the existence of non-stationary quasi-periodic components of the induced field is discussed.

astro-ph.GA

  • Multi-phase characterization of AGN winds in 5 local type-2 quasars.- [PDF] - [Article]

    G. Speranza, C. Ramos Almeida, J. A. Acosta-Pulido, A. Audibert, L. R. Holden, C. N. Tadhunter, A. Lapi, O. González-Martín, M. Brusa, I. E. López, B. Musiimenta, F. Shankar
     

    We present MEGARA (Multi-Espectr\'ografo en GTC de Alta Resoluci\'on para Astronom\'ia) Integral Field Unit (IFU) observations of 5 local type-2 quasars (QSO2s, z $\sim 0.1$) from the Quasar Feedback (QSOFEED) sample. These active galactic nuclei (AGN) have bolometric luminosities of 10$^{45.5-46}$ erg/s and stellar masses of $\sim$10$^{11}$ M$_{\odot}$. We explore the kinematics of the ionized gas through the [O~III]$\lambda$5007 $\r{A}$ emission line. The nuclear spectra of the 5 QSO2s, extracted in a circular aperture of $\sim$ 1.2" ($\sim$ 2.2 kpc) in diameter, show signatures of high velocity winds in the form of broad (full width at half maximum; 1300$\leq$FWHM$\leq$2240 km/s) and blueshifted components. We find that 4 out of the 5 QSO2s present outflows that we can resolve with our seeing-limited data, and they have radii ranging from 3.1 to 12.6 kpc. In the case of the two QSO2s with extended radio emission, we find that it is well-aligned with the outflows, suggesting that low-power jets might be compressing and accelerating the ionized gas in these radio-quiet QSO2s. In the four QSO2s with spatially resolved outflows, we measure ionized mass outflow rates of 3.3-6.5 Msun/yr when we use [S~II]-based densities, and of 0.7-1.6 Msun/yr when trans-auroral line-based densities are considered instead. We compare them with the corresponding molecular mass outflow rates (8 - 16 Msun/yr), derived from CO(2-1) ALMA observations at 0.2" resolution. Both phases show lower outflow mass rates than those expected from observational scaling relations where uniform assumptions on the outflow properties were adopted. This might indicate that the AGN luminosity is not the only driver of massive outflows and/or that these relations need to be re-scaled using accurate outflow properties. We do not find a significant impact of the outflows on the global star formation rates.

  • The Age Distribution of Stellar Orbit Space Clumps.- [PDF] - [Article]

    Verena Fürnkranz, Hans-Walter Rix, Johanna Coronado, Rhys Seeburger
     

    The orbit distribution of young stars in the Galactic disk is highly structured, from well-defined clusters to streams of stars that may be widely dispersed across the sky, but are compact in orbital action-angle space. The age distribution of such groups can constrain the timescales over which co-natal groups of stars disperse into the `field'. Gaia data have proven powerful to identify such groups in action-angle space, but the resulting member samples are often too small and have too narrow a CMD coverage to allow robust age determinations. Here, we develop and illustrate a new approach that can estimate robust stellar population ages for such groups of stars. This first entails projecting the predetermined action-angle distribution into the 5D space of positions, parallaxes and proper motions, where much larger samples of likely members can be identified over a much wider range of the CMD. It then entails isochrone fitting that accounts for a) widely varying distances and reddenings; b) outliers and binaries; c) sparsely populated main sequence turn-offs, by incorporating the age information of the low-mass main sequence; and d) the possible presence of an intrinsic age spread in the stellar population. When we apply this approach to 92 nearby stellar groups identified in 6D orbit space, we find that they are predominantly young ($\lesssim 1$ Gyr), mono-age populations. Many groups are established (known) localized clusters with possible tidal tails, others tend to be widely dispersed and manifestly unbound. This new age-dating tool offers a stringent approach to understanding on which orbits stars form in the solar neighborhood and how quickly they disperse into the field.
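Robustness to outliers and binaries (point b above) is commonly achieved with a mixture likelihood: each star's residual is modelled as drawn from either a narrow "single star" component or a broad contaminant component. A generic numpy sketch of this standard trick (the paper's actual likelihood is richer and is not reproduced here):

```python
import numpy as np

def mixture_loglike(resid, sigma, f_out=0.05, sigma_out=1.0):
    """Per-star log-likelihood with an outlier mixture: with probability
    1 - f_out the residual is a narrow Gaussian (good member), otherwise
    a broad Gaussian (binary/contaminant). All widths are illustrative."""
    inlier = (1 - f_out) * np.exp(-0.5 * (resid / sigma) ** 2) / (
        np.sqrt(2 * np.pi) * sigma)
    outlier = f_out * np.exp(-0.5 * (resid / sigma_out) ** 2) / (
        np.sqrt(2 * np.pi) * sigma_out)
    return np.log(inlier + outlier)

# Invented colour residuals (mag); the last star is a likely binary.
resid = np.array([0.01, -0.02, 0.03, 0.9])
ll_mix = mixture_loglike(resid, sigma=0.05).sum()
ll_gauss = (-0.5 * (resid / 0.05) ** 2
            - np.log(np.sqrt(2 * np.pi) * 0.05)).sum()
print(ll_mix > ll_gauss)  # the mixture is not dragged down by the outlier
```

A pure Gaussian likelihood would let the single discrepant star dominate the fit; the mixture caps its influence, which is what makes the age estimate robust.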

  • Ursa Major III/UNIONS 1: the darkest galaxy ever discovered?.- [PDF] - [Article]

    Raphaël Errani, Julio F. Navarro, Simon E. T. Smith, Alan W. McConnachie
     

    The recently-discovered stellar system Ursa Major III/UNIONS 1 (UMa3/U1) is the faintest known Milky Way satellite to date. With a stellar mass of $16^{+6}_{-5}\,\rm M_\odot$ and a half-light radius of $3\pm1$ pc, it is either the darkest galaxy ever discovered or the faintest self-gravitating star cluster known to orbit the Galaxy. Its line-of-sight velocity dispersion suggests the presence of dark matter, although current measurements are inconclusive because of the unknown contribution to the dispersion of potential binary stars. We use N-body simulations to show that, if self-gravitating, the system could not survive in the Milky Way tidal field for more than a single orbit (roughly 0.4 Gyr), which strongly suggests that the system is stabilized by the presence of large amounts of dark matter. If UMa3/U1 formed at the centre of a ~$10^9\rm M_\odot$ cuspy LCDM halo, its velocity dispersion would be predicted to be of order ~1 km/s. This is roughly consistent with the current estimate, which, neglecting binaries, places $\sigma_{\rm los}$ in the range 1 to 4 km/s. Because of its dense cusp, such a halo should be able to survive the Milky Way tidal field, keeping UMa3/U1 relatively unscathed until the present time. This implies that UMa3/U1 is in all likelihood the faintest and densest dwarf galaxy satellite of the Milky Way, with important implications for alternative dark matter models, and for the minimum halo mass threshold for luminous galaxy formation in the LCDM cosmology.

  • Quasar Sightline and Galaxy Evolution (QSAGE) -- III. The mass-metallicity and fundamental metallicity relation in $z \approx$ 2.2 galaxies.- [PDF] - [Article]

    H. M. O. Stephenson, J. P. Stott, F. Cullen, R. M. Bielby, N. Amos, R. Dutta, M. Fumagalli, N. Tejos, J. N. Burchett, R. A. Crain, J. X. Prochaska
     

    We present analysis of the mass-metallicity relation (MZR) for a sample of 67 [OIII]-selected star-forming galaxies at a redshift range of $z=1.99 - 2.32$ ($z_{\text{med}} = 2.16$) using \emph{Hubble Space Telescope} Wide Field Camera 3 grism spectroscopy from the Quasar Sightline and Galaxy Evolution (QSAGE) survey. Metallicities were determined using empirical gas-phase metallicity calibrations based on the strong emission lines [OII]3727,3729, [OIII]4959,5007 and H$\beta$. Star-forming galaxies were identified, and distinguished from active-galactic nuclei, via Mass-Excitation diagrams. Using $z\sim0$ metallicity calibrations, we observe a negative offset in the $z=2.2$ MZR of $\approx -0.51$ dex in metallicity when compared to locally derived relationships, in agreement with previous literature analysis. A similar offset of $\approx -0.46$ dex in metallicity is found when using empirical metallicity calibrations that are suitable out to $z\sim5$, though our $z=2.2$ MZR, in this case, has a shallower slope. We find agreement between our MZR and those predicted from various galaxy evolution models and simulations. Additionally, we explore the extended fundamental metallicity relation (FMR) which includes an additional dependence on star formation rate (SFR). Our results consistently support the existence of the FMR, as well as revealing an offset of $0.28\pm0.04$ dex in metallicity compared to locally-derived relationships, consistent with previous studies at similar redshifts. We interpret the negative correlation with SFR at fixed mass, inferred from an FMR existing for our sample, as being caused by the efficient accretion of metal-poor gas fuelling SFR at cosmic noon.

  • The edges of galaxies in the Fornax Cluster: Fifty percent smaller and denser compared to the field.- [PDF] - [Article]

    Nushkia Chamba, Matthew Hayes, LSST Dark Energy Science Collaboration
     

    Physically motivated measurements are crucial for understanding galaxy growth and the role of the environment on their evolution. In particular, the growth of galaxies as measured by their size or radial extent provides an empirical approach for addressing this issue. However, the established definitions of galaxy size used for nearly a century are ill-suited for these studies because of a previously ignored bias. The conventionally-measured radii consistently miss the diffuse, outer extensions of stellar emission which harbour key signatures of galaxy growth, including star formation and gas accretion or removal. This issue is addressed by examining low surface brightness truncations or galaxy "edges" as a physically motivated tracer of size based on star formation thresholds. Our total sample consists of $\sim900$ galaxies with stellar masses in the range $10^5 M_{\odot} < M_{\star} < 10^{11} M_{\odot}$. This sample of nearby cluster, group satellite and nearly isolated field galaxies was compiled using multi-band imaging from the Fornax Deep Survey, deep IAC Stripe 82 and Dark Energy Camera Legacy Surveys. Across the full mass range studied, we find that compared to the field, the edges of galaxies in the Fornax Cluster are located at 50% smaller radii and the average stellar surface density at the edges is two times higher. These results are consistent with the rapid removal of loosely bound neutral hydrogen (HI) in hot, crowded environments which truncates galaxies outside-in earlier, preventing the formation of more extended sizes and lower density edges. In fact, we find that galaxies with lower HI fractions have edges with higher stellar surface density. Our results highlight the importance of deep imaging surveys to study the low surface brightness imprints of the large scale structure and environment on galaxy evolution.

  • The discovery of the faintest known Milky Way satellite using UNIONS.- [PDF] - [Article]

    Simon E. T. Smith, William Cerny, Christian R. Hayes, Federico Sestito, Jaclyn Jensen, Alan W. McConnachie, Marla Geha, Julio Navarro, Ting S. Li, Jean-Charles Cuillandre, Raphaël Errani, Ken Chambers, Stephen Gwyn, Francois Hammer, Michael J. Hudson, Eugene Magnier, Nicolas Martin
     

    We present the discovery of Ursa Major III/UNIONS 1, the least luminous known satellite of the Milky Way, which is estimated to have an absolute V-band magnitude of $+2.2^{+0.4}_{-0.3}$ mag, equivalent to a total stellar mass of 16$^{+6}_{-5}$ M$_{\odot}$. Ursa Major III/UNIONS 1 was uncovered in the deep, wide-field Ultraviolet Near Infrared Optical Northern Survey (UNIONS) and is consistent with an old ($\tau > 11$ Gyr), metal-poor ([Fe/H] $\sim -2.2$) stellar population at a heliocentric distance of $\sim$ 10 kpc. Despite being compact ($r_{\text{h}} = 3\pm1$ pc) and composed of so few stars, we confirm the reality of Ursa Major III/UNIONS 1 with Keck II/DEIMOS follow-up spectroscopy and identify 11 radial velocity members, 8 of which have full astrometric data from $Gaia$ and are co-moving based on their proper motions. Based on these 11 radial velocity members, we derive an intrinsic velocity dispersion of $3.7^{+1.4}_{-1.0}$ km s$^{-1}$, but some caveats preclude this value from being interpreted as a direct indicator of the underlying gravitational potential at this time. Primarily, the exclusion of the largest velocity outlier from the member list drops the velocity dispersion to $1.9^{+1.4}_{-1.1}$ km s$^{-1}$, and the subsequent removal of an additional outlier star produces an unresolved velocity dispersion. While the presence of binary stars may be inflating the measurement, the possibility of a significant velocity dispersion makes Ursa Major III/UNIONS 1 a high priority candidate for multi-epoch spectroscopic follow-up to deduce the true nature of this incredibly faint satellite.
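The sensitivity of an 11-star dispersion to a single outlier can be illustrated with a maximum-likelihood estimate that profiles out the mean velocity at each trial dispersion. The velocities below are invented (the member velocities are not listed in the abstract); only the qualitative behaviour, one outlier inflating the dispersion and its removal leaving it unresolved, mirrors the text:

```python
import numpy as np

def intrinsic_dispersion(v, err):
    """Grid-based maximum-likelihood estimate of the intrinsic velocity
    dispersion; the error-weighted mean is profiled out at each trial
    sigma. Returns the grid minimum when the dispersion is unresolved."""
    sigmas = np.linspace(0.01, 10.0, 2000)
    best, best_ll = 0.0, -np.inf
    for s in sigmas:
        var = s ** 2 + err ** 2
        mu = np.sum(v / var) / np.sum(1.0 / var)  # error-weighted mean
        ll = -0.5 * np.sum(np.log(var) + (v - mu) ** 2 / var)
        if ll > best_ll:
            best, best_ll = s, ll
    return best

# Hypothetical radial velocities (km/s) for 11 members, 1 km/s errors;
# the last entry plays the role of the large velocity outlier.
v = np.array([-0.5, 0.3, -1.1, 0.8, 1.4, -0.2, 0.6, -0.9, 0.1, 1.0, 9.0])
err = np.full_like(v, 1.0)

sig_all = intrinsic_dispersion(v, err)
sig_clip = intrinsic_dispersion(v[:-1], err[:-1])  # drop the outlier
print(sig_all > sig_clip)  # a single star can inflate the dispersion
```

With the outlier removed, the scatter of the remaining stars is consistent with the measurement errors alone, so the fitted intrinsic dispersion collapses to the grid floor, the toy analogue of an "unresolved velocity dispersion".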

  • The 23.01 release of Cloudy.- [PDF] - [Article]

    Chamani M. Gunasekera, Peter A. M. van Hoof, Marios Chatzikos, Gary J. Ferland
     

    We announce the C23.01 update of Cloudy. This corrects a simple coding error, present since $\sim$ 1990, in one routine that required a conversion from the line-center to the mean normalization of the Ly$\alpha$ optical depth. This affects the destruction of H I Ly$\alpha$ by background opacities. Its largest effect is upon the Ly$\alpha$ intensity in high-ionization dusty clouds, where the predicted intensity is now up to three times stronger. Other properties that depend on Ly$\alpha$ destruction, such as grain infrared emission, change in response.

  • The MAGPI Survey: Drivers of kinematic asymmetries in the ionised gas of $z\sim0.3$ star-forming galaxies.- [PDF] - [Article]

    R. S. Bagge, C. Foster, A. Battisti, S. Bellstedt, M. Mun, K. Harborne, S. Barsanti, T. Mendel, S. Brough, S. M. Croom, C. D. P. Lagos, T. Mukherjee, Y. Peng, R-S. Remus, G. Santucci, P. Sharda, S. Thater, J. van de Sande, L. M. Valenzuela, E. Wisnioski, T. Zafar, B. Ziegler
     

    Galaxy gas kinematics are sensitive to the physical processes that contribute to a galaxy's evolution. It is expected that external processes will cause more significant kinematic disturbances in the outer regions, while internal processes will cause more disturbances in the inner regions. Using a subsample of 47 galaxies ($0.27<z<0.36$) from the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey, we conduct a study into the source of kinematic disturbances by measuring the asymmetry present in the ionised gas line-of-sight velocity maps at the $0.5R_e$ (inner regions) and $1.5R_e$ (outer regions) elliptical annuli. By comparing the inner and outer kinematic asymmetries, we aim to better understand what physical processes are driving the asymmetries in galaxies. We find the local environment plays a role in kinematic disturbance, in agreement with other integral field spectroscopy studies of the local universe, with most asymmetric systems being in close proximity to a more massive neighbour. We do not find evidence suggesting that hosting an Active Galactic Nucleus (AGN) contributes to asymmetry within the inner regions, with some caveats due to emission line modelling. In contrast to previous studies, we do not find evidence that processes leading to asymmetry also enhance star formation in MAGPI galaxies. Finally, we find a weak anti-correlation between stellar mass and asymmetry (i.e., high stellar mass galaxies are less asymmetric). We conclude by discussing possible sources driving the asymmetry in the ionised gas, such as disturbances being present in the colder gas phase (either molecular or atomic) prior to the gas being ionised, and non-axisymmetric features (e.g., a bar) being present in the galactic disk. Our results highlight the complex interplay between ionised gas kinematic disturbances and physical processes involved in galaxy evolution.

  • The abundance discrepancy in ionized nebulae: which are the correct abundances?.- [PDF] - [Article]

    José Eduardo Méndez-Delgado, Jorge García-Rojas
     

    Ionized nebulae are key to understanding the chemical composition and evolution of the Universe. Among these nebulae, H~{\sc ii} regions and planetary nebulae are particularly important as they provide insights into the present and past chemical composition of the interstellar medium, along with the nucleosynthetic processes involved in the chemical evolution of the gas. However, the heavy-element abundances derived from collisionally excited lines (CELs) and recombination lines (RLs) do not align. This longstanding abundance-discrepancy problem calls into question our absolute abundance determinations. Which of the lines (if any) provides the correct heavy-element abundances? Recently, it has been shown that there are temperature inhomogeneities concentrated within the highly ionized gas of H~{\sc ii} regions, causing the reported discrepancy. However, planetary nebulae do not exhibit the same trends as the H~{\sc ii} regions, suggesting a different origin for the abundance discrepancy. In these proceedings, we briefly discuss the state of the art of the abundance discrepancy problem in both H~{\sc ii} regions and planetary nebulae.

  • Galaxy stellar and total mass estimation using machine learning.- [PDF] - [Article]

    Jiani Chu, Hongming Tang, Dandan Xu, Shengdong Lu, Richard Long
     

    Conventional galaxy mass estimation methods suffer from model assumptions and degeneracies. Machine learning, which reduces the reliance on such assumptions, can be used to determine how well present-day observations can yield predictions for the distributions of stellar and dark matter mass. In this work, we use a general sample of galaxies from the TNG100 simulation to investigate the ability of multi-branch convolutional neural network (CNN) based machine learning methods to predict the central (i.e., within $1-2$ effective radii) stellar and total masses, and the stellar mass-to-light ratio $M_*/L$. These models take galaxy images and spatially-resolved mean velocity and velocity dispersion maps as inputs. Such CNN-based models can in general break the degeneracy between baryonic and dark matter in the sense that the model can make reliable predictions on the individual contributions of each component. For example, with $r$-band images and two galaxy kinematic maps as inputs, our model predicting $M_*/L$ has a prediction uncertainty of 0.04 dex. Moreover, to investigate which (global) features significantly contribute to the correct predictions of the properties above, we utilize a gradient boosting machine. We find that galaxy luminosity dominates the prediction of all masses in the central regions, with stellar velocity dispersion coming next. We also investigate the main contributing features when predicting stellar and dark matter mass fractions ($f_*$, $f_{\rm DM}$) and the dark matter mass $M_{\rm DM}$, and discuss the underlying astrophysics.
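A generic way to ask which global features drive a model's predictions, in the spirit of the gradient-boosting analysis above, is permutation importance: shuffle one feature at a time and record how much the prediction error degrades. A numpy-only sketch with an ordinary least-squares surrogate model and invented feature names and coefficients (not the paper's actual features or machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic "global features": luminosity dominates the target, velocity
# dispersion contributes, colour is irrelevant. This mimics the paper's
# qualitative finding; all numbers are invented.
logL = rng.normal(10.0, 0.5, n)
sigma_v = rng.normal(2.0, 0.2, n)
colour = rng.normal(0.8, 0.1, n)
log_mass = 1.0 * logL + 0.4 * sigma_v + rng.normal(0, 0.05, n)

X = np.column_stack([logL, sigma_v, colour])
names = ["luminosity", "velocity_dispersion", "colour"]

# Fit an ordinary least-squares surrogate model (with intercept).
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, log_mass, rcond=None)
base_mse = np.mean((log_mass - A @ coef) ** 2)

# Permutation importance: the MSE increase when a feature is shuffled.
importance = {}
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    mse = np.mean((log_mass - np.column_stack([Xp, np.ones(n)]) @ coef) ** 2)
    importance[name] = mse - base_mse

ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked[0])
```

Here shuffling luminosity destroys most of the predictive power, shuffling the velocity dispersion costs less, and shuffling the irrelevant colour costs essentially nothing, reproducing the kind of ranking the paper extracts from its gradient boosting machine.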

  • Low-mass Pop III star formation due to the HD-cooling induced by weak Lyman-Werner radiation.- [PDF] - [Article]

    Sho Nishijima, Shingo Hirano, Hideyuki Umeda
     

    Lyman-Werner (LW) radiation photodissociating molecular hydrogen (H$_2$) influences the thermal and dynamical evolution of the Population III (Pop III) star-forming gas cloud. The effect of powerful LW radiation has been well investigated in the context of supermassive black hole formation in the early universe. However, the average intensity in the early universe is several orders of magnitude lower. For a comprehensive study, we investigate the effects of LW radiation at $18$ different intensities, ranging from $J_{\rm LW}/J_{21}=0$ (no radiation) to $30$ (H-cooling cloud), on the primordial star-forming gas cloud obtained from a three-dimensional cosmological simulation. The overall trend with increasing radiation intensity is a gradual increase in the gas cloud temperature, consistent with previous works. Due to the HD-cooling, on the other hand, the dependence of gas cloud temperature on $J_{\rm LW}$ deviates from the aforementioned increasing trend for a specific range of intensities ($J_{\rm LW}/J_{21}=0.025-0.09$). In HD-cooling clouds, the temperature remained below $200$ K for $10^5$ yr after the first formation of the high-density region, maintaining a low accretion rate. Finally, the HD-cooling clouds have only a low-mass dense core (above $10^8\,{\rm cm^{-3}}$) with about $1-16\, M_{\odot}$, inside which a low-mass Pop III star with $\leq\!0.8\,M_{\odot}$ (so-called "surviving star") could form. The upper limit of star formation efficiency $M_{\rm core}/M_{\rm vir, gas}$ significantly decreases from $10^{-3}$ to $10^{-5}$ as HD-cooling becomes effective. This tendency indicates that, whereas the total gas mass in the host halo increases with the LW radiation intensity, the total Pop III stellar mass does not increase similarly.

  • The Pre-explosion Environments and The Progenitor of SN 2023ixf from the Hobby Eberly Telescope Dark Energy Experiment (HETDEX).- [PDF] - [Article]

    Chenxu Liu, Xinlei Chen, Xinzhong Er, Gregory R. Zeimann, Jozsef Vinko, J. Craig Wheeler, Erin Mentuch Cooper, Dustin Davis, Daniel J. Farrow, Karl Gebhardt, Helong Guo, Gary J. Hill, Lindsay House, Wolfram Kollatschny, Fanchuan Kong, Brajesh Kumar, Xiangkun Liu, Sarah Tuttle, Michael Endl, Parker Duke, William D. Cochran, Jinghua Zhang, Xiaowei Liu
     

    Supernova (SN) 2023ixf was discovered on May 19th, 2023. The host galaxy, M101, was observed by the Hobby Eberly Telescope Dark Energy Experiment (HETDEX) collaboration over the period April 30, 2020 -- July 10, 2020, using the Visible Integral-field Replicable Unit Spectrograph (VIRUS; $3470\lesssim\lambda\lesssim5540$ \r{A}) on the 10-m Hobby-Eberly Telescope (HET). The fiber filling factor within $\pm$ 30 arcsec of SN 2023ixf is 80% with a spatial resolution of 1 arcsec. The r<5.5 arcsec surroundings are 100% covered. This allows us to analyze the spatially resolved pre-explosion local environments of SN 2023ixf with nebular emission lines. The 2-dimensional (2D) maps of the extinction and the star-formation rate (SFR) surface density ($\Sigma_{\rm SFR}$) show weak increasing trends in the radial distributions within the r<5.5 arcsec regions, suggesting lower values of extinction and SFR in the vicinity of the progenitor of SN 2023ixf. The median extinction and that of the surface density of SFR within r<3 arcsec are $E(B-V)=0.06\pm0.14$, and $\Sigma_{\rm SFR}=10^{-5.44\pm0.66}~\rm M_{\odot}\cdot yr^{-1}\cdot arcsec^{-2}$. There is no significant change in extinction before and after the explosion. The gas metallicity does not change significantly with the separation from SN 2023ixf. The metal-rich branch of the $R_{23}$ calculations indicates that the gas metallicity around SN 2023ixf is similar to the solar metallicity ($\sim Z_{\odot}$). The archival deep images from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) show a clear detection of the progenitor of SN 2023ixf in the $z$-band at $22.778\pm0.063$ mag, but non-detections in the remaining four bands of CFHTLS ($u,g,r,i$). The results suggest a massive progenitor of $\approx$ 22 $M_\odot$.

  • The PAU Survey: a new constraint on galaxy formation models using the observed colour redshift relation.- [PDF] - [Article]

    G. Manzoni, C. M. Baugh, P. Norberg, L. Cabayol, J. L. van den Busch, A. Wittje, D. Navarro-Girones, M. Eriksen, P. Fosalba, J. Carretero, F. J. Castander, R. Casas, J. De Vicente, E. Fernandez, J. Garcia-Bellido, E. Gaztanaga, J. C. Helly, H. Hoekstra, H. Hildebrandt, E. J. Gonzalez, S. Koonkor, R. Miquel, C. Padilla, P. Renard, E. Sanchez, I. Sevilla-Noarbe, M. Siudek, J. Y. H. Soo, P. Tallada-Crespi
     

    We use the GALFORM semi-analytical galaxy formation model implemented in the Planck Millennium N-body simulation to build a mock galaxy catalogue on an observer's past lightcone. The mass resolution of this N-body simulation is almost an order of magnitude better than in previous simulations used for this purpose, allowing us to probe fainter galaxies and hence build a more complete mock catalogue at low redshifts. The high time cadence of the simulation outputs allows us to make improved calculations of galaxy properties and positions in the mock. We test the predictions of the mock against the Physics of the Accelerating Universe Survey, a narrow band imaging survey with highly accurate and precise photometric redshifts, which probes the galaxy population over a lookback time of 8 billion years. We compare the model against the observed number counts, redshift distribution and evolution of the observed colours and find good agreement; these statistics avoid the need for model-dependent processing of the observations. The model produces red and blue populations that have similar median colours to the observations. However, the bimodality of galaxy colours in the model is stronger than in the observations. This bimodality is reduced on including a simple model for errors in the GALFORM photometry. We examine how the model predictions for the observed galaxy colours change when perturbing key model parameters. This exercise shows that the median colours and relative abundance of red and blue galaxies provide a constraint on the strength of the feedback driven by supernovae used in the model.

  • Combining astrophysical datasets with CRUMB.- [PDF] - [Article]

    Fiona A. M. Porter, Anna M. M. Scaife
     

    At present, the field of astronomical machine learning lacks widely-used benchmarking datasets; most research employs custom-made datasets which are often not publicly released, making comparisons between models difficult. In this paper we present CRUMB, a publicly-available image dataset of Fanaroff-Riley galaxies constructed from four "parent" datasets extant in the literature. In addition to providing the largest image dataset of these galaxies, CRUMB uses a two-tier labelling system: a "basic" label for classification and a "complete" label which provides the original class labels used in the four parent datasets, allowing for disagreements in an image's class between different datasets to be preserved and selective access to sources from any desired combination of the parent datasets.
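The two-tier labelling can be represented with a simple record type: one coarse class for training, plus the original per-parent labels so cross-dataset disagreements survive. The field names and parent-dataset names below are illustrative, not CRUMB's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CrumbSource:
    """A source with CRUMB-style two-tier labels (illustrative schema)."""
    name: str
    basic: str                                    # e.g. "FRI" or "FRII"
    complete: dict = field(default_factory=dict)  # parent dataset -> label

    def disagrees(self):
        # True when the parent datasets assigned conflicting classes.
        return len(set(self.complete.values())) > 1

# Hypothetical catalogue entries with invented names and parent datasets.
catalogue = [
    CrumbSource("J1234+56", "FRI", {"ParentA": "FRI", "ParentB": "FRI"}),
    CrumbSource("J0101-12", "FRII", {"ParentA": "FRII", "ParentB": "FRI"}),
]

# Selective access: sources present in a chosen parent dataset.
from_parent_a = [s for s in catalogue if "ParentA" in s.complete]
conflicted = [s.name for s in catalogue if s.disagrees()]
print(conflicted)  # only the second source carries a disagreement
```

Keeping the per-parent labels alongside the basic one lets a user train on the coarse classes while still filtering out (or studying) the sources the parent datasets disagree on.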

  • The WHaD diagram: Classifying the ionizing source with one single emission line.- [PDF] - [Article]

    S. F. Sánchez, A. Z. Lugo-Aranda, J. Sánchez Almeida, J. K. Barrera-Ballesteros, O. Gonzalez-Martín, S. Salim, C. J. Agostino
     

    The usual approach to classifying the ionizing source using optical spectroscopy is based on diagnostic diagrams that compare the relative strength of pairs of collisionally excited metal lines (e.g., [O iii] and [N ii]) with respect to hydrogen recombination lines (e.g., H$\beta$ and H$\alpha$). Despite being accepted as the standard procedure, it presents known problems, including confusion regimes and/or limitations related to the required signal-to-noise of the involved emission lines. These problems affect not only our intrinsic understanding of the interstellar medium and its properties, but also fundamental galaxy properties, such as the star-formation rate and the oxygen abundance, and key questions such as the fraction of active galactic nuclei, among several others. We explore the existing alternatives in the literature to minimize the confusion among different ionizing sources and propose a new simple diagram that uses the equivalent width and the velocity dispersion of one single emission line, H$\alpha$, to classify the ionizing sources. We use aperture-limited and spatially resolved spectroscopic data in the nearby Universe (z$\sim$0.01) to demonstrate that the new diagram, which we call WHaD, segregates the different ionizing sources more efficiently than previously adopted procedures. A new set of regions is defined in this diagram to select between different ionizing sources. The new proposed diagram is well placed to determine the ionizing source when only H$\alpha$ is available, or when the signal-to-noise of the emission lines involved in the classical diagnostic diagrams (e.g., H$\beta$) is too low.
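The classification rule implied by a WHaD-style diagram can be sketched as a pair of cuts in the EW(H$\alpha$)-velocity-dispersion plane. The demarcation values below are placeholders chosen for illustration; the actual boundaries and region names are defined in the paper:

```python
def whad_class(ew_ha, sigma_ha):
    """Toy classifier in the EW(Halpha) [Angstrom] vs velocity
    dispersion [km/s] plane. The cut values (3 A, 60 km/s) are
    hypothetical placeholders, not the paper's calibrated boundaries."""
    if ew_ha < 3.0:          # weak line: retired / hot evolved stars
        return "retired"
    if sigma_ha < 60.0:      # strong and narrow: star formation
        return "star-forming"
    return "AGN"             # strong and broad

print(whad_class(25.0, 40.0))   # star-forming
print(whad_class(25.0, 150.0))  # AGN
```

The appeal of the scheme is exactly what the abstract argues: both quantities come from a single bright line, so the classification survives where H$\beta$ or the metal lines are too weak for the classical diagrams.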

  • Predicting the linear response of self-gravitating stellar spheres and discs with LinearResponse.jl.- [PDF] - [Article]

    Michael S. Petersen, Mathieu Roule, Jean-Baptiste Fouvry, Christophe Pichon, Kerwann Tep
     

    We present LinearResponse.jl, an efficient, versatile public library written in Julia to compute the linear response of self-gravitating (3D spherically symmetric) stellar spheres and (2D axisymmetric razor-thin) discs. LinearResponse.jl can scan the whole complex frequency plane, probing unstable, neutral and (weakly) damped modes. Given a potential model and a distribution function, this numerical toolbox estimates the modal frequencies as well as the shapes of individual modes. The library is validated against a combination of previous results for the spherical isochrone model and Mestel discs, and new simulations for the spherical Plummer model. Beyond linear response theory, the realm of applications of LinearResponse.jl also extends to the kinetic theory of self-gravitating systems through a modular interface.

  • The isolated dark matter-poor galaxy that ran away. An example from IllustrisTNG.- [PDF] - [Article]

    Ana Mitrašinović, Majda Smole, Miroslav Micic
     

    Since the discovery of dark matter-deficient galaxies, numerous studies have shown that these exotic galaxies naturally occur in the $\Lambda$CDM model due to stronger tidal interactions. They are typically satellites, with stellar masses in the $10^8-10^9\;\mathrm{M}_\odot$ range, of more massive galaxies. The recent discovery of a massive galaxy lacking dark matter and also lacking a more massive neighbor is puzzling. Two possible scenarios have been suggested in the literature: either the galaxy lost its dark matter early or it had been lacking ab initio. As a proof of concept for the former assumption, we present an example from IllustrisTNG300. At present, the galaxy has a stellar mass of $M_\star \simeq 6.8 \cdot 10^9\; \mathrm{M}_\odot$, with no gas, $M_\mathrm{DM}/M_\mathrm{B} \simeq 1.31$, and a stellar half-mass radius of $R_{0.5,\star} = 2.45\;\mathrm{kpc}$. It lost the majority of its dark matter early, between $z = 2.32$ and $z = 1.53$. Since then, it has continued to dwell in the cluster environment, interacting with the cluster members without merging, while accelerating on its orbit. Eventually, it left the cluster and it has spent the last $\sim 2\;\mathrm{Gyr}$ in isolation, residing just outside the most massive cluster in the simulation. Thus, the galaxy represents the first example found in simulations of both an isolated dark matter-poor galaxy that lost its extended envelope early and a fairly compact stellar system that has managed to escape.

  • The Large Magellanic Cloud's $\sim30$ Kiloparsec Bow Shock and its Impact on the Circumgalactic Medium.- [PDF] - [Article] - [UPDATED]

    David J. Setton, Gurtina Besla, Ekta Patel, Cameron Hummels, Yong Zheng, Evan Schneider
     

    The interaction between the supersonic motion of the Large Magellanic Cloud (LMC) and the Circumgalactic Medium (CGM) is expected to result in a bow shock that leads the LMC's gaseous disk. In this letter, we use hydrodynamic simulations of the LMC's recent infall to predict the extent of this shock and its effect on the Milky Way's (MW) CGM. The simulations clearly predict the existence of an asymmetric shock with a present day stand-off radius of $\sim6.7$ kpc and a transverse diameter of $\sim30$ kpc. Over the past 500 Myr, $\sim8\%$ of the MW's CGM in the southern hemisphere should have interacted with the shock front. This interaction may have had the effect of smoothing over inhomogeneities and increasing mixing in the MW CGM. We find observational evidence of the existence of the bow shock in recent H$\alpha$ maps of the LMC, providing a potential explanation for the envelope of ionized gas surrounding the LMC. Furthermore, the interaction of the bow shock with the MW CGM may also explain observations of ionized gas surrounding the Magellanic Stream. Using recent orbital histories of MW satellites, we find that many satellites have likely interacted with the LMC shock. Additionally, the dwarf galaxy Ret2 is currently sitting inside the shock, which may impact the interpretation of reported gamma ray excess in Ret2. This work highlights that bow shocks associated with infalling satellites are an under-explored, yet potentially very important dynamical mixing process in the circumgalactic and intracluster media.

  • Formation of dense filaments induced by runaway supermassive black holes.- [PDF] - [Article] - [UPDATED]

    Go Ogiya, Daisuke Nagai
     

    A narrow linear object extending $\sim 60 \,{\rm kpc}$ from the centre of a galaxy at redshift $z \sim 1$ has recently been discovered and interpreted as a shocked gas filament forming stars. The host galaxy presents an irregular morphology, implying recent merger events. Supposing that each of the progenitor galaxies has a central supermassive black hole (SMBH) and the SMBHs are accumulated at the centre of the merger remnant, a fraction of them can be ejected from the galaxy with a high velocity due to interactions between SMBHs. When such a runaway SMBH (RSMBH) passes through the circumgalactic medium (CGM), converging flows are induced along the RSMBH path, and star formation could eventually be ignited. We show that the CGM temperature prior to the RSMBH perturbation should be below the peak temperature in the cooling function to trigger filament formation. While the gas is temporarily heated due to compression, the cooling efficiency increases, and gas can accumulate along the path. When the CGM density is sufficiently high, the gas can cool down and develop a dense filament by $z = 1$. The mass and velocity of the RSMBH determine the scale of filament formation. Hydrodynamical simulations validate the analytical expectations. Therefore, we conclude that the perturbation by RSMBHs is a viable channel to form the observed linear object. Using the analytic model validated by simulations, we show the CGM around the linear object to be warm ($T < 2 \times 10^5 \, K$) and dense ($n > 2 \times 10^{-5} (T/2 \times 10^5 \, K)^{-1} \, {\rm cm^{-3}}$).
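The quoted filament-formation condition can be evaluated directly. A small helper implementing the abstract's two-part criterion (warm, $T < 2\times10^5$ K, and dense, $n > 2\times10^{-5}\,(T/2\times10^5\,{\rm K})^{-1}\,{\rm cm^{-3}}$):

```python
def cgm_forms_filament(n_cm3, T_K):
    """Check the abstract's condition for a dense filament to form:
    T < 2e5 K and n > 2e-5 * (T / 2e5 K)^-1 cm^-3."""
    return T_K < 2e5 and n_cm3 > 2e-5 * (T_K / 2e5) ** -1

print(cgm_forms_filament(1e-4, 1e5))  # warm and dense enough
print(cgm_forms_filament(1e-4, 5e5))  # too hot to cool efficiently
```

Note that the density threshold rises as the gas gets cooler (the $(T)^{-1}$ scaling), so at $T = 10^5$ K the CGM must exceed $4\times10^{-5}\,{\rm cm^{-3}}$ rather than $2\times10^{-5}\,{\rm cm^{-3}}$.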

  • Quality flags for GSP-Phot Gaia DR3 astrophysical parameters with machine learning: Effective temperatures case study.- [PDF] - [Article] - [UPDATED]

    Aleksandra S. Avdeeva, Dana A. Kovaleva, Oleg Yu. Malkov, Gang Zhao
     

    Gaia Data Release 3 (DR3) provides extensive information on the astrophysical properties of stars, such as effective temperature, surface gravity, metallicity, and luminosity, for over 470 million objects. However, as Gaia's stellar parameters in the GSP-Phot module are derived through model-dependent methods and indirect measurements, this can lead to additional systematic errors in the derived parameters. In this study, we compare GSP-Phot effective temperature estimates with two high-resolution and high signal-to-noise spectroscopic catalogues, APOGEE DR17 and GALAH DR3, aiming to assess the reliability of Gaia's temperatures. We introduce an approach to distinguish good-quality Gaia DR3 effective temperatures using machine-learning methods such as XGBoost, CatBoost and LightGBM. The models produce quality flags that help one distinguish good-quality GSP-Phot effective temperatures. We test our models on three independent datasets, including PASTEL, a compilation of spectroscopically derived stellar parameters from different high-resolution studies. The results of the test suggest that with these models it is possible to filter out effective temperatures accurate to within 250 K with ~90 per cent precision, even in complex regions such as the Galactic plane. Consequently, the models developed herein offer a valuable quality assessment tool for GSP-Phot effective temperatures in Gaia DR3. The dataset with flags for all GSP-Phot effective temperature estimates is publicly available, as are the models themselves.
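
    The labelling behind such quality flags can be sketched without the boosted-tree machinery: a GSP-Phot temperature counts as "good" when it matches the spectroscopic reference to within 250 K (the tolerance quoted in the abstract), and precision is the fraction of flagged-good sources that really are good. The classifier output and the temperature values below are hypothetical:

```python
# Sketch of the quality-flag construction: a GSP-Phot temperature is
# labelled "good" when it agrees with the spectroscopic reference
# (APOGEE/GALAH) to within 250 K.  The gradient-boosted classifier
# itself is elided; we only build labels and score hypothetical flags.

def make_labels(t_gaia, t_spec, tol=250.0):
    """1 = good-quality GSP-Phot temperature, 0 = unreliable."""
    return [1 if abs(g - s) < tol else 0 for g, s in zip(t_gaia, t_spec)]

def precision(predicted, truth):
    """Fraction of sources flagged 'good' that really are good."""
    flagged = [(p, t) for p, t in zip(predicted, truth) if p == 1]
    return sum(t for _, t in flagged) / len(flagged) if flagged else 0.0

t_gaia = [5750.0, 4980.0, 6400.0, 5120.0]   # hypothetical GSP-Phot Teff
t_spec = [5800.0, 5600.0, 6350.0, 5100.0]   # hypothetical reference Teff
truth = make_labels(t_gaia, t_spec)          # [1, 0, 1, 1]
pred = [1, 1, 1, 0]                          # hypothetical classifier flags
print(truth, round(precision(pred, truth), 2))  # [1, 0, 1, 1] 0.67
```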

  • The impact of the free-floating planet (FFP) mass function on the event rate for the accurate microlensing parallax determination: application to Euclid and Roman parallax observation.- [PDF] - [Article] - [UPDATED]

    Makiko Ban
     

    Microlensing is the main method used to search for free-floating planets (FFPs), and the microlensing parallax is one of the key parameters for estimating an FFP's mass and distance. Because of the short duration of an FFP microlensing event, it is difficult to measure the parallax from the observer's motion at a recognisable level, so FFP microlensing parallaxes are expected to come from simultaneous observations by multiple telescopes. Here, we approach FFP detection by considering a variety of FFP mass functions and the event rate of accurately measured microlensing parallaxes. We used our FFP microlensing simulator, assuming a parallax observation between upcoming space-based missions (Euclid and Roman) with full kinematics. As a result, we confirmed that the event rate of accurately measured microlensing parallax (i.e. within a factor of 2 uncertainty) does not simply follow the number of FFPs at a given mass but rather the ratio of the FFP population per star. This is because the population ratio determines the optical depth for a given mass and the potential sources. In addition, we found that the probability of an event allowing the FFP mass and distance to be estimated within a factor of 2 is not high: approx. 40% of Earth-mass, 16% of Neptune-mass, and 4% of Jupiter-mass FFP events under our criteria. The probability can be improved by technical approaches such as higher cadence and parallax observation with more than two observers.

  • The ALMA-ALPINE [CII] survey: Kennicutt-Schmidt relation in four massive main-sequence galaxies at z~4.5.- [PDF] - [Article] - [UPDATED]

    M. Béthermin, C. Accard, C. Guillaume, M. Dessauges-Zavadsky, E. Ibar, P. Cassata, T. Devereaux, A. Faisst, J. Freundlich, G. C. Jones, K. Kraljic, H. Algera, R. O. Amorin, S. Bardelli, M. Boquien, V. Buat, E. Donghia, Y. Dubois, A. Ferrara, Y. Fudamoto, M. Ginolfi, P. Guillard, M. Giavalisco, C. Gruppioni, G. Gururajan, N. Hathi, C. C. Hayward, A. M. Koekemoer, B. C. Lemaux, G. E. Magdis, J. Molina, D. Narayanan, L. Mayer, F. Pozzi, F. Rizzo, M. Romano, L. Tasca, P. Theulé, D. Vergani, L. Vallini, G. Zamorani, A. Zanella, E. Zucca
     

    The Kennicutt-Schmidt (KS) relation between the gas and the star formation rate (SFR) surface density ($\Sigma_{\rm gas}$-$\Sigma_{\rm SFR}$) is essential to understand star formation processes in galaxies. So far, it has been measured up to z~2.5 in main-sequence galaxies. In this letter, we aim to put constraints at z~4.5 using a sample of four massive main-sequence galaxies observed by ALMA at high resolution. We obtained ~0.3"-resolution [CII] and continuum maps of our objects, which we then converted into gas and obscured-SFR surface density maps. In addition, we produced unobscured-SFR surface density maps by convolving Hubble ancillary data in the rest-frame UV. We then derived the average $\Sigma_{\rm SFR}$ in various $\Sigma_{\rm gas}$ bins, and estimated the uncertainties using Monte Carlo sampling. Our galaxy sample follows the KS relation measured in main-sequence galaxies at lower redshift and lies slightly below predictions from simulations. Our data points probe the high end both in terms of $\Sigma_{\rm gas}$ and $\Sigma_{\rm SFR}$, and gas depletion timescales (285-843 Myr) remain similar to those of z~2 objects. However, three of our objects are clearly morphologically disturbed, and we could have expected shorter gas depletion timescales (~100 Myr), similar to merger-driven starbursts at lower redshifts. This suggests that the mechanisms triggering starbursts at high redshift may be different from those in the low- and intermediate-z Universe.
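
    The quoted gas depletion timescales are just the ratio $\Sigma_{\rm gas}/\Sigma_{\rm SFR}$. A minimal sketch with illustrative surface densities (not the paper's measurements), assuming $\Sigma_{\rm gas}$ in M$_\odot$ pc$^{-2}$ and $\Sigma_{\rm SFR}$ in M$_\odot$ yr$^{-1}$ kpc$^{-2}$:

```python
# Gas depletion timescale t_dep = Sigma_gas / Sigma_SFR.
# Sigma_gas in Msun/pc^2, Sigma_SFR in Msun/yr/kpc^2; the factor 1e6
# converts pc^-2 to kpc^-2 so the ratio comes out in years, then /1e6 -> Myr.

def depletion_time_myr(sigma_gas, sigma_sfr):
    return sigma_gas * 1e6 / sigma_sfr / 1e6

# Illustrative values (not taken from the paper's measurements):
print(round(depletion_time_myr(100.0, 0.35)))  # ~286 Myr
```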

astro-ph.IM

  • LenSiam: Self-Supervised Learning on Strong Gravitational Lens Images.- [PDF] - [Article]

    Po-Wen Chang, Kuan-Wei Huang, Joshua Fagin, James Hung-Hsu Chan, Joshua Yao-Yu Lin
     

    Self-supervised learning is known for learning good representations from data without the need for annotated labels. We explore the simple siamese (SimSiam) architecture for representation learning on strong gravitational lens images. Commonly used image augmentations tend to change lens properties; for example, zooming in would affect the Einstein radius. To create image pairs representing the same underlying lens model, we introduce a lens augmentation method that preserves lens properties by fixing the lens model while varying the source galaxies. Our research demonstrates that this lens augmentation works well with SimSiam for learning lens image representations without labels, so we name it LenSiam. We also show that a pre-trained LenSiam model can benefit downstream tasks. We open-source our code and datasets at https://github.com/kuanweih/LenSiam .
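
    For reference, the standard SimSiam objective that LenSiam builds on is the symmetrized negative cosine similarity between a predictor output and a stop-gradient projector output. The toy numerical sketch below reflects our reading of the generic SimSiam loss, not code from the LenSiam repository:

```python
import math

# SimSiam loss: D(p, z) = -cos(p, stopgrad(z)), symmetrized over the two
# augmented views.  Numerically the loss value is just the symmetrized
# similarity; the stop-gradient only matters during backpropagation.

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def simsiam_loss(p1, z1, p2, z2):
    # z1, z2 are treated as constants (stop-gradient) in training.
    return 0.5 * (-cos_sim(p1, z2)) + 0.5 * (-cos_sim(p2, z1))

# Identical predictor/projector outputs give the minimum loss of -1:
v = [0.6, 0.8]
print(round(simsiam_loss(v, v, v, v), 6))  # -1.0
```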

  • Calibration Unit Design for High-Resolution Infrared Spectrograph for Exoplanet Characterization (HISPEC).- [PDF] - [Article]

    Ben Sappey, Quinn Konopacky, Nemanja Jovanovic, Ashley Baker, Jerome Maire, Samuel Halverson, Dimitri Mawet, Jean-Baptiste Ruffio, Rob Bertz, Michael Fitzgerald, Charles Beichman, Garreth Ruane, Marc Kassis, Chris Johnson, Ken Magnone, HISPEC Team
     

    The latest generation of high-resolution spectrograph instruments on 10m-class telescopes continues to pursue challenging science cases. Consequently, ever more precise calibration methods are necessary to enable trail-blazing science methodology. We present the High-resolution Infrared SPectrograph for Exoplanet Characterization (HISPEC) Calibration Unit (CAL), designed to facilitate challenging science cases such as Doppler imaging of exoplanet atmospheres, precision radial velocity, and high-contrast high-resolution spectroscopy of nearby exoplanets. CAL builds upon the heritage of the pathfinder instrument Keck Planet Imager and Characterizer (KPIC) and utilizes four near-infrared (NIR) light sources encoded with wavelength information that are coupled into single-mode fibers. They can be used synchronously during science observations or asynchronously during daytime calibrations. A hollow cathode lamp (HCL) and a series of gas absorption cells provide absolute calibration from 0.98 {\mu}m to 2.5 {\mu}m. A laser frequency comb (LFC) provides stable, time-independent wavelength information during observation, and CAL implements a lower-finesse astro-etalon as a backup for the LFC. Design lessons from instrumentation like HISPEC will serve to inform the requirements for similar instruments for the ELTs in the future.

  • Astronomical Images Quality Assessment with Automated Machine Learning.- [PDF] - [Article]

    Olivier Parisot, Pierrick Bruneau, Patrik Hitzelberger
     

    Electronically Assisted Astronomy consists of capturing deep-sky images with a digital camera coupled to a telescope to display views of celestial objects that would be invisible through direct observation. This practice generates a large quantity of data, which may then be enhanced with dedicated image-editing software after observation sessions. In this study, we show how Image Quality Assessment can be useful for automatically rating astronomical images, and we also develop a dedicated model by using Automated Machine Learning.

  • A BRAIN study to tackle image analysis with artificial intelligence in the ALMA 2030 era.- [PDF] - [Article]

    Fabrizia Guglielmetti, Michele Delli Veneri, Ivano Baronchelli, Carmen Blanco, Andrea Dosi, Torsten Enßlin, Vishal Johnson, Giuseppe Longo, Jakob Roth, Felix Stoehr, Łukasz Tychoniec, Eric Villard
     

    An ESO internal ALMA development study, BRAIN, is addressing the ill-posed inverse problem of synthesis image analysis by employing astrostatistics and astroinformatics. These emerging fields of research offer interdisciplinary approaches at the intersection of observational astronomy, statistics, algorithm development, and data science. In this study, we provide evidence of the benefits of employing these approaches to ALMA imaging for operational and scientific purposes. We show the potential of two techniques, RESOLVE and DeepFocus, applied to ALMA calibrated science data. They offer significant advantages, with the prospect of improving the quality and completeness of the data products stored in the science archive as well as the overall processing time for operations. Both approaches demonstrate a logical pathway to address the incoming revolution in data rates dictated by the planned electronic upgrades. Moreover, we bring to the community additional products through a new package, ALMASim, to promote advancements in these fields, providing a refined ALMA simulator usable by a large community for training and/or testing new algorithms.

  • Euclid Near Infrared Spectro-Photometer: spatial considerations on H2RG detectors interpixel capacitance and IPC corrected conversion gain from on-ground characterization.- [PDF] - [Article] - [UPDATED]

    J. Le Graët, A. Secroun, R. Barbier, W. Gillard, JC. Clemens, S. Conseil, S. Escoffier, S. Ferriol, N. Fourmanoit, E. Kajfasz, S. Kermiche, B. Kubik, G. Smadja, J. Zoubian
     

    Euclid is a major ESA mission scheduled for launch in 2023-2024 to map the geometry of the dark Universe using two primary probes, weak gravitational lensing and galaxy clustering. Euclid's instruments, a visible imager (VIS) and an infrared spectrometer and photometer (NISP), have both been designed and built by Euclid Consortium teams. The NISP instrument will hold a large focal plane array of 16 near-infrared H2RG detectors, which are key to the performance of the NISP and therefore to the science return of the mission. Euclid NISP H2RG flight detectors have been individually and thoroughly characterized at the Centre de Physique des Particules de Marseille (CPPM) over a whole year, with a view to producing a reference database of performance pixel maps. Analyses have been ongoing and have shown the relevance of taking spatial variations into account when deriving performance parameters. This paper will concentrate on interpixel capacitance (IPC) and conversion gain. First, per-pixel IPC coefficient maps will be derived from single pixel reset (SPR) measurements, and a new IPC correction method will be defined and validated. Then, the paper will look into correlation effects of IPC and their impact on the derivation of per-super-pixel IPC-free conversion gain maps. Eventually, several conversion gain values will be defined over clearly distinguishable regions.
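
    The idea behind the SPR-based IPC measurement can be sketched with a toy forward model: the measured frame is the true charge map convolved with a small coupling kernel, so a frame with a single reset pixel reveals the kernel directly. The coupling value `alpha` below is an assumed illustrative number, not a NISP measurement:

```python
# Toy interpixel-capacitance (IPC) model: the measured map is the true
# charge map convolved with a 3x3 kernel whose off-center entries are the
# nearest-neighbour coupling coefficients.  A single-pixel-reset (SPR)
# frame - one pixel charged, neighbours empty - reads the kernel off
# directly, which is the idea behind the SPR method.

def ipc_convolve(image, alpha):
    """Apply a symmetric nearest-neighbour IPC kernel to a 2D list."""
    rows, cols = len(image), len(image[0])
    kernel = [[0.0, alpha, 0.0],
              [alpha, 1.0 - 4 * alpha, alpha],
              [0.0, alpha, 0.0]]
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[r][c] += kernel[dr + 1][dc + 1] * image[rr][cc]
    return out

# SPR-like frame: one reset pixel with unit signal; alpha is illustrative.
spr = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
measured = ipc_convolve(spr, alpha=0.01)
print(round(measured[1][1], 6), round(measured[0][1], 6))  # 0.96 0.01
```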

  • Monotone Cubic B-Splines with a Neural-Network Generator.- [PDF] - [Article] - [UPDATED]

    Lijun Wang, Xiaodan Fan, Huabai Li, Jun S. Liu
     

    We present a method for fitting monotone curves using cubic B-splines, which is equivalent to placing a monotonicity constraint on the coefficients. We explore different ways of enforcing this constraint and analyze their theoretical and empirical properties. We propose two algorithms for solving the spline fitting problem: one that uses standard optimization techniques and one that trains a Multi-Layer Perceptron (MLP) generator to approximate the solutions under various settings and perturbations. The generator approach can speed up the fitting process when we need to solve the problem repeatedly, such as when constructing confidence bands using the bootstrap. We evaluate our method against several existing methods, some of which do not use the monotonicity constraint, on monotone curves with varying noise levels. We demonstrate that our method outperforms the other methods, especially in high-noise scenarios. We also apply our method to analyze the polarization-hole phenomenon during star formation in astrophysics. The source code is accessible at \texttt{\url{https://github.com/szcf-weiya/MonotoneSplines.jl}}.
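
    For cubic B-splines on a fixed knot sequence, a nondecreasing coefficient vector is sufficient for a nondecreasing spline, which is the constraint the paper exploits. One generic way to enforce it (not necessarily the paper's optimizer) is to project unconstrained coefficients onto the monotone cone with the pool-adjacent-violators algorithm:

```python
# Pool-adjacent-violators (PAVA): the L2 projection of a sequence onto
# nondecreasing sequences.  Applied to B-spline coefficients, it yields
# a monotone spline; this is a generic sketch, not the paper's solver.

def pava(y):
    """L2 projection of y onto nondecreasing sequences."""
    # Maintain blocks of (mean value, weight); merge while order is violated.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for m, w in blocks:
        out.extend([m] * w)
    return out

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```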

  • A solution to continuous RFI in narrowband radio SETI with FAST: The MultiBeam Point-source Scanning strategy.- [PDF] - [Article] - [UPDATED]

    Bo-Lun Huang, Zhen-Zhao Tao, Tong-Jie Zhang
     

    Narrowband radio searches for extraterrestrial intelligence (SETI) in the 21st century suffer severely from radio frequency interference (RFI), resulting in a high number of false positives, which could be the major reason why we have not yet received any messages from space. We thereby propose a novel observation strategy, called MultiBeam Point-source Scanning (MBPS), to revolutionize the way RFI is identified in narrowband radio SETI and provide a prominent solution to the current situation. The MBPS strategy is a simple yet powerful method that sequentially scans over the target star with different beams of a telescope, creating real-time references in the time domain for cross-verification and thus potentially identifying all long-persistent RFI with a level of certainty never achieved in previous attempts. By applying the MBPS strategy during an observation of TRAPPIST-1 with the FAST telescope, we successfully identified all 16,645 received signals as RFI using the solid criteria introduced by the MBPS strategy. We therefore present the MBPS strategy as a promising tool that should bring us much closer to the first discovery of a genuine galactic greeting.
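
    The cross-verification logic of MBPS can be sketched as follows: a sky-localized signal should follow whichever beam is currently pointed at the star, while persistent RFI also shows up in off-target beams. The data layout below is a hypothetical simplification, not the FAST pipeline:

```python
# Sketch of MBPS cross-verification: each beam takes a turn on the target
# star.  A genuine sky-localized signal appears only in the on-target beam
# at each scan step; a signal seen in an off-target beam, or one that fails
# to track the star through beam swaps, is classified as RFI.

def classify_signal(detections, on_target_beam):
    """detections[t] = set of beams seeing the signal at scan step t;
    on_target_beam[t] = beam pointed at the star at step t."""
    for t, beams in enumerate(detections):
        if beams - {on_target_beam[t]}:   # seen off-target -> local RFI
            return "RFI"
    if all(on_target_beam[t] in detections[t] for t in range(len(detections))):
        return "candidate"                # tracks the star through every swap
    return "RFI"                          # intermittent: fails verification

schedule = [1, 2, 3]                                  # beam on target per step
print(classify_signal([{1}, {2}, {3}], schedule))     # candidate
print(classify_signal([{1, 2}, {2}, {3}], schedule))  # RFI
```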

  • Search for shower's duplicates at the IAU MDC. Methods and general results.- [PDF] - [Article] - [UPDATED]

    T. J. Jopek, L. Neslušan, R. Rudawska, M. Hajduková
     

    Observers submit both new and known meteor shower parameters to the database of the IAU Meteor Data Center (MDC). It may happen that a new observation of an already known meteor shower is submitted as a discovery of a new shower; a duplicate shower then appears in the MDC. On the other hand, observers may provide data which, in their opinion, constitute another set of parameters of an already existing shower. If this is not true, we can speak of a shower that is a false duplicate of a known meteor shower. We aim to develop a method for the objective detection of duplicates among meteor showers and apply it to the MDC. The method also enables us to verify whether various sets of parameters of the same shower are compatible and, thus, to reveal false duplicates. We suggest two methods based on cluster analyses and two similarity functions among the geocentric and heliocentric shower parameters collected in the MDC. Seven new showers represented by two or more parameter sets were discovered. In 30 cases there was full agreement between our results and those reported in the MDC; in 20 cases the duplicates given in the MDC were found by only one method. We found 34 multi-solution showers for which the number of identical duplicates found by both methods is close to the corresponding number in the MDC database. However, for 56 multi-solution showers listed in the MDC, no duplicates were found by either of the applied methods. The obtained results confirm the effectiveness of the proposed approach to identifying duplicates. We have shown that, in order to detect and verify duplicate meteor showers, the proposed objective approach can be applied instead of the subjective approach used so far.
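
    A duplicate search of this kind reduces to a similarity function over shower parameters plus a clustering step. The sketch below uses an illustrative distance over geocentric parameters (radiant RA/Dec and geocentric speed) with greedy single-linkage grouping; it is neither of the two MDC similarity functions, and the threshold and weighting are assumptions:

```python
import math

# Illustrative duplicate detection among shower records: a distance over
# geocentric parameters, then single-linkage grouping within a threshold.

def radiant_distance(a, b):
    """a, b = (ra_deg, dec_deg, vg_kms): great-circle radiant separation
    in degrees combined in quadrature with the speed difference in km/s
    (an assumed 1 deg : 1 km/s weighting, for illustration only)."""
    ra1, de1, v1 = a
    ra2, de2, v2 = b
    ra1, de1, ra2, de2 = map(math.radians, (ra1, de1, ra2, de2))
    cosang = (math.sin(de1) * math.sin(de2)
              + math.cos(de1) * math.cos(de2) * math.cos(ra1 - ra2))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return math.hypot(ang, v1 - v2)

def find_duplicates(records, threshold=5.0):
    """Greedy single-linkage grouping of records within the threshold."""
    groups = []
    for rec in records:
        for g in groups:
            if any(radiant_distance(rec, other) < threshold for other in g):
                g.append(rec)
                break
        else:
            groups.append([rec])
    return groups

showers = [(262.0, 33.0, 59.0),   # two submissions of the same shower
           (262.5, 33.2, 58.5),
           (95.0, -14.0, 28.0)]   # unrelated shower
groups = find_duplicates(showers)
print(len(groups))  # 2
```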

gr-qc

  • Noncompactified Kaluza-Klein theories and Anisotropic Kantowski-Sachs Universe.- [PDF] - [Article]

    S. M. M. Rasouli
     

    We provide an overview of noncompactified Kaluza-Klein theories. The space-time-matter theory (or induced-matter theory) and the modified Brans-Dicke theory are discussed. Finally, an extended version of the Kantowski-Sachs anisotropic model is investigated as a cosmological application of the latter.

  • Potentials for general-relativistic geodesy.- [PDF] - [Article]

    Claus Laemmerzahl, Volker Perlick
     

    Geodesy in a Newtonian framework is based on the Newtonian gravitational potential. The general-relativistic gravitational field, however, is not fully determined by a single potential. The vacuum field around a stationary source can be decomposed into two scalar potentials and a tensorial spatial metric, which together serve as the basis for general-relativistic geodesy. One of the scalar potentials is a generalization of the Newtonian potential, while the second one describes the influence of the rotation of the source on the gravitational field, for which no non-relativistic counterpart exists. In this paper the operational realizations of these two potentials, and also of the spatial metric, are discussed. For some analytically given spacetimes the two potentials are exemplified and their relevance for practical geodesy on Earth is outlined.

  • Quasi-static evolution of axially and reflection symmetric large-scale configuration.- [PDF] - [Article]

    Z. Yousaf, Kazuharu Bamba, M. Z. Bhatti, U. Farwa
     

    We review recently offered notions of quasi-static evolution of axial self-gravitating structures at large scales and the criteria to characterize the corresponding evolutionary aspects under the influence of strong curvature regimes. In doing so, we examine the axial source's dynamic and quasi-static behavior within the parameters of various modified gravity theories. We address the formalism of these notions and their possible implications for studying dissipative and anisotropic configurations. We begin by considering higher-order curvature gravity. The Palatini formalism of $f(R)$ gravity is also taken into consideration to analyze the behavior of the kinematical as well as the dynamical variables of the proposed problem. The set of invariant velocities is defined to comprehend the concept of quasi-static approximation, which enhances the stability of the system in contrast to the dynamic mode. It is identified that vorticity and distinct versions of the structure scalars $Y_{I}$, $Y_{II}$ and $Y_{KL}$ play an important role in revealing the significant effects of a fluid's anisotropy. As another example of evolution, we check the influence of Palatini-based factors on the shearing motion of the object. A comparison-based study of the physical nature of distinct curvature factors on the propagation of the axial source is exhibited. This provides an intriguing platform to grasp the notion of quasi-static evolution together with the distinct curvature factors in the present-day scenario. The importance of slowly evolving axially symmetric regimes is addressed through distinct modified gravitational contexts. Finally, we share a list of queries that, we believe, deserve to be addressed in the near future.

  • Theory-agnostic parametrization of wormhole spacetimes.- [PDF] - [Article]

    Thomas D. Pappas
     

    We present a generalization of the Rezzolla-Zhidenko theory-agnostic parametrization of black-hole spacetimes to accommodate spherically-symmetric Lorentzian, traversable wormholes (WHs) in an arbitrary metric theory of gravity. By applying our parametrization to various known WH metrics and performing calculations involving shadows and quasinormal modes, we show that only a few parameters are important for finding potentially observable quantities in a WH spacetime.

  • Random pure Gaussian states and Hawking radiation.- [PDF] - [Article]

    Erik Aurell, Lucas Hackl, Paweł Horodecki, Robert H. Jonsson, Mario Kieburg
     

    A black hole evaporates by Hawking radiation. Each mode of that radiation is thermal. If the total state is nevertheless to be pure, modes must be entangled. Estimating the minimum size of this entanglement has been an important outstanding issue. We develop a new theory of constrained random symplectic transformations, based on the assumptions that the total state is pure, Gaussian and random, and that every mode is thermal as in Hawking's theory. From this theory we compute the distribution of mode-mode correlations, from which we bound mode-mode entanglement. We find that correlations between thinly populated modes (early-time high-frequency modes and/or late modes of any frequency) are strongly suppressed. Such modes are hence very weakly entangled. Highly populated modes (early-time low-frequency modes) can on the other hand be strongly correlated, but a detailed analysis reveals that they are nevertheless also weakly entangled. Our analysis hence establishes that restoring unitarity after a complete evaporation of a black hole does not require strong quantum entanglement between any pair of Hawking modes. Our analysis further gives exact general expressions for the distribution of mode-mode correlations in random, pure, Gaussian states with given marginals, which may have applications beyond black hole physics.

  • On CCGG, the De Donder-Weyl Hamiltonian formulation of canonical gauge gravity.- [PDF] - [Article]

    D. Vasak, J. Kirsch, A. van de Venn, V. Denk, J. Struckmeier
     

    This paper gives a brief overview of the manifestly covariant canonical gauge gravity (CCGG) that is rooted in the De Donder-Weyl Hamiltonian formulation of relativistic field theories and the proven methodology of canonical transformation theory. That framework derives, from a few basic physical and mathematical assumptions, equations describing generic matter and gravity dynamics, with the spin connection emerging as a Yang-Mills-type gauge field. While the interaction of any matter field with spacetime is fixed just by the transformation property of that field, a concrete gravity ansatz is introduced by the choice of the free (kinetic) gravity Hamiltonian. The key elements of this approach are discussed and its implications for particle dynamics and cosmology are presented. Among the results are, especially:
    - anomalous Pauli coupling of spinors to the curvature and torsion of spacetime,
    - spacetime with an (A)dS ground state, inertia, torsion and geometrical vacuum energy,
    - a zero-energy balance of the Universe, leading to a vanishing cosmological constant and torsional dark energy.

  • Light propagation in a plasma on Kerr spacetime. II. Plasma imprint on photon orbits.- [PDF] - [Article]

    Volker Perlick, Oleg Yu. Tsupko
     

    In this paper, light propagation in a pressure-free non-magnetized plasma on Kerr spacetime is considered, which is a continuation of our previous study [Phys. Rev. D 95, 104003 (2017)]. It is assumed throughout that the plasma density is of the form that allows for the separability of the Hamilton-Jacobi equation for light rays, i.e., for the existence of a Carter constant. Here we focus on the analysis of different types of orbits and find several peculiar phenomena which do not exist in the vacuum case. We start with studying spherical orbits and conical orbits. In particular, it is revealed that in the ergoregion in the presence of a plasma there can exist two different spherical light rays propagating through the same point. Then we study circular orbits and demonstrate that, contrary to the vacuum case, circular orbits can exist off the equatorial plane in the domain of outer communication of a Kerr black hole. Necessary and sufficient conditions for that are formulated. We also find a compact equation for circular orbits in the equatorial plane of the Kerr metric, with several examples developed. Considering the light deflection in the equatorial plane, we derive a new exact formula for the deflection angle which has the advantage of being directly applicable to light rays both inside and outside of the ergoregion. Remarkably, the possibility of a non-monotonic behavior of the deflection angle as a function of the impact parameter is demonstrated in the presence of a non-homogeneous plasma. Furthermore, in order to separate the effects of the black-hole spin from the effects of the plasma, we investigate weak deflection gravitational lensing. We also add some further comments to our discussion of the black-hole shadow which was the main topic of our previous paper.

  • On the degrees of freedom count on singular phase space submanifolds.- [PDF] - [Article]

    Alexey Golovnev
     

    It has recently been noticed that metric $f(R)$ gravity, with $f(R)=R^2$ and no linear term, has no dynamical degrees of freedom when linearised around Minkowski. This is indeed true, in the sense of maximally using the emergent gauge freedom of that limit. In this note, I would like to show that it can easily be seen directly from the equations of motion, with no need of playing games with Lagrangians. Moreover, the scalar part of this "strong coupling" behaviour is actually more interpretational than dynamical, as it arises not because a kinetic energy term disappears but rather because a constraint is lost. On the other hand, the different answer reported from a procedure "a la Stueckelberg" has nothing to do with whatever remnant symmetries in the equations one might imagine. In the way the procedure had been applied, it does radically change the model at hand.

  • Emergent modified gravity coupled to scalar matter.- [PDF] - [Article]

    Martin Bojowald, Erick I. Duque
     

    Emergent modified gravity presents a new set of generally covariant gravitational theories in which the space-time metric is not directly given by one of the fundamental fields. A metric compatible with the modified dynamics of gravity is instead derived from covariance conditions for space-time in canonical form. This paper presents a significant extension of existing vacuum models to the case of a scalar field coupled to emergent modified gravity in a spherically symmetric setting. Unlike in previous attempts for instance in models of loop quantum gravity, it is possible to maintain general covariance in the presence of modified gravity-scalar couplings. In general, however, the emergent space-time metric depends not only on the phase-space degrees of freedom of the gravitational part of the coupled theory, but also on the scalar field. Matter therefore directly and profoundly affects the geometry of space-time, not only through the well-known dynamical coupling of stress-energy to curvature as in Einstein's equation, but even on a kinematical level before equations of motion are solved. This paper introduces further physical requirements that may be imposed for a reduction of modified theories to more specific classes, in some cases eliminating certain modifications that would be possible in a vacuum situation. With minimal coupling, results about the removal of classical black-hole singularities in vacuum emergent modified gravity are found to be unstable under the inclusion of matter, but alternative modifications exist in which singularities are removed even in the presence of matter. Emergent modified gravity is seen to provide a large class of new scalar-tensor theories with second-order field equations.

  • From black hole to one-dimensional chain: parity symmetry breaking and kink formation.- [PDF] - [Article] - [UPDATED]

    Zhi-Hong Li, Han-Qing Shi, Hai-Qing Zhang
     

    The AdS/CFT correspondence is a "first-principles" tool to study strongly coupled many-body systems. While it has been extensively applied to investigate continuous symmetry breaking dynamics, discrete symmetry breaking dynamics are rarely investigated. In this paper, a model of kink formation in a strongly coupled one-dimensional chain is realized from the AdS/CFT correspondence. In doing so, we first construct a model of real scalar fields with parity symmetries in the AdS bulk. By quenching the system across the critical point at a finite rate, kink hairs appear in the bulk due to the spontaneous parity symmetry breaking, which provides a counter-example to the black-hole "no-hair conjecture". Due to the AdS/CFT correspondence, kink hairs in the bulk are dual to kinks on the AdS boundary. The mean of the dual kink numbers is found to satisfy a universal power-law relation to the quench rate, in agreement with the celebrated Kibble-Zurek mechanism. Moreover, the higher cumulants of the kink numbers are proportional to the mean numbers, consistent with the assumption that the formation of kinks satisfies a binomial distribution, which goes beyond the Kibble-Zurek mechanism.

  • Evolutionary algorithms for multi-center solutions.- [PDF] - [Article] - [UPDATED]

    Sami Rawash, David Turton
     

    Large classes of multi-center supergravity solutions have been constructed in the study of supersymmetric black holes and their microstates. Many smooth multi-center solutions have the same charges as supersymmetric black holes, with all centers deep inside a long black-hole-like throat. These configurations are constrained by regularity, absence of closed timelike curves, and charge quantization. Due to these constraints, constructing explicit solutions with several centers in generic arrangements, and with all parameters in physically relevant ranges, is a hard task. In this work we present an optimization algorithm, based on evolutionary algorithms and Bayesian optimization, that systematically constructs numerical solutions satisfying all constraints. We exhibit explicit examples of novel five-center and seven-center machine-precision solutions.

  • A regular black hole as the final state of evolution of a singular black hole.- [PDF] - [Article] - [UPDATED]

    Han-Wen Hu, Chen Lan, Yan-Gang Miao
     

    We propose a novel black hole model in which singular and regular black holes are combined as a whole; more precisely, singular and regular black holes are regarded as different states of parameter evolution. We refer to them as singular and regular states, respectively. Furthermore, the regular state is depicted by the final state of parameter evolution in the model. We also present the sources that can generate such a black hole spacetime in the framework of $F(R)$ gravity. This theory of modified gravity is adopted because it offers a possible resolution to a tough issue in the thermodynamics of regular black holes, namely the discrepancy between the thermal entropy and the Wald entropy. The dynamics and thermodynamics of the novel black hole model are also discussed as a singular state evolves into a regular state during the change of charge or horizon radius from its initial value to its extreme value.

  • Precessing binary black holes as engines of electromagnetic helicity.- [PDF] - [Article] - [UPDATED]

    Nicolas Sanchis-Gual, Adrian del Rio
     

    We show that binary black hole mergers with precessing evolution can potentially excite photons from the quantum vacuum in such a way that total helicity is not preserved in the process. Helicity violation is allowed by quantum fluctuations that spoil the electric-magnetic duality symmetry of the classical Maxwell theory without charges. We show here that precessing binary black hole systems in astrophysics generate a flux of circularly polarized gravitational waves which, in turn, provides the required helical background that triggers this quantum effect. Solving the fully non-linear Einstein equations with numerical relativity, we explore the parameter space of binary systems and extract the detailed dependence of the quantum effect on the spins of the two black holes. We also introduce a set of diagrammatic techniques that allows us to predict when a binary black hole merger can or cannot emit circularly polarized gravitational radiation, based on mirror-symmetry considerations. This framework allows us to understand and correctly interpret the numerical results, and to predict the outcomes in potentially interesting astrophysical systems.

  • A tale of analogies: gravitomagnetic effects, rotating sources, observers and all that.- [PDF] - [Article] - [UPDATED]

    Matteo Luca Ruggiero, Davide Astesiano
     

    Gravitoelectromagnetic analogies are somewhat ubiquitous in General Relativity, and they are often used to explain peculiar effects of Einstein's theory of gravity in terms of familiar results from classical electromagnetism. Perhaps the best known of these analogies pertains to the similarity between the equations of electromagnetism and those of the linearized theory of General Relativity. But the analogy is somewhat deeper and ultimately rooted in the splitting of spacetime, which is a preliminary to the definition of the measurement process. In this paper we review the various approaches that lead to the introduction of a magnetic-like part of the gravitational interaction, briefly called gravitomagnetic, and then we provide a survey of recent developments from both the theoretical and experimental viewpoints.

  • Minimal length scale correction in the noise of gravitons.- [PDF] - [Article] - [UPDATED]

    Soham Sen, Sunandan Gangopadhyay
     

    In this paper we consider a quantized and linearly polarized gravitational wave interacting with a gravitational wave detector (an interferometer) in the generalized uncertainty principle (GUP) framework. Following the analysis in Phys. Rev. Lett. 127 (2021) 081602 (https://link.aps.org/doi/10.1103/PhysRevLett.127.081602), we treat the quantized gravitational wave interacting with the detector (LIGO/VIRGO etc.) using a path integral approach. Although the incoming gravitational wave was quantized, no Planck-scale quantization effects were considered for the detector in the earlier literature. In our work, we consider a modified Heisenberg uncertainty relation, with a quadratic-order correction in the momentum variable, between the two phase-space coordinates of the detector. Using the path integral approach, we obtain a stochastic equation for the separation between two point-like objects. It is observed that random fluctuations (noises) and the correction terms due to the generalized uncertainty relation play a crucial role in dictating such trajectories. Finally, we observe that the solution to the stochastic equation acquires a time-dependent standard deviation due to the GUP insertion, and that for a primordial gravitational wave (where the initial state is a squeezed state) both the noise effect and the GUP effect are exponentially enhanced, which may be detectable by future generations of gravitational wave detectors. We also plot the dimensionless standard deviation against time, showing that the GUP effect carries a distinct signature which may be detectable by future space-based gravitational wave observatories.

  • The effect of environment in the timing of a pulsar orbiting SgrA*.- [PDF] - [Article] - [UPDATED]

    Amodio Carleo, Bilel Ben-Salem
     

    Pulsars are rapidly rotating neutron stars emitting intense electromagnetic radiation that is detected on Earth as regular and precisely timed pulses. By exploiting their extreme regularity and comparing the actual arrival times with a theoretical model (pulsar timing), it is possible to deduce a wealth of physical information, concerning not only the neutron star and its possible companion, but also the properties of the interstellar medium, up to tests of General Relativity. Last but not least, pulsars are used in conjunction with each other as a galactic-sized detector for the cosmic background of gravitational waves. In this paper, we investigate the effect of "matter" on the propagation time delay of photons emitted by a pulsar orbiting a spinning black hole, one of the most important relativistic effects in pulsar timing. We deduce an analytical formula for the time delay from the geodesic equations, showing how it changes as the type of matter around the black hole (radiation, dust or dark energy) varies, in contrast with previous results where matter was not taken into account. It turns out that while the spin $a$ only induces a shift in the phase of the maximum delay, without increasing or decreasing the delay itself, the matter surrounding the black hole noticeably alters it. Our results show that dark energy would give the strongest effect and that, interestingly, when the pulsar is positioned between the observer and the black hole, the pulse delay is slightly lower than in the matter-free case. We estimate these effects for SGR J1745-2900, the closest magnetar orbiting SgrA*.

  • Eternal inflation and collapse theories.- [PDF] - [Article] - [UPDATED]

    R.L. Lechuga, D. Sudarsky
     

    The eternal inflation problem continues to be considered one of standard cosmology's most serious shortcomings. It arises when one considers the effects of "quantum fluctuations" (QF) on the zero mode of the inflaton field during a Hubble time in the inflationary epoch. In the slow-roll regime it is quite clear that such QF could dwarf the classical rolling down of the inflaton, and with overwhelming probability this prevents inflation from ever ending. When one recognizes that QF cannot be taken as synonymous with stochastic fluctuations, but rather as intrinsic levels of indefiniteness in the quantities involved, one concludes that the eternal inflation problem simply does not exist. However, the same argument would serve to invalidate the account of the generation of the primordial seeds of cosmic structure. In order to address that issue, one must explain the breaking of the homogeneity and isotropy of the early inflationary epoch. The so-called spontaneous collapse theories offer an additional element: the stochastic and spontaneous state reduction characteristic of those proposals possesses the basic features needed to break those symmetries. In fact, a version of the CSL theory adapted to the cosmological context has been shown to offer a satisfactory account of the origin of the seeds of cosmic structure with an adequate power spectrum, and it will serve as the basis of our analysis. However, once such stochastic collapse is introduced into the theoretical framework, the eternal inflation problem can potentially reappear. In this manuscript we explore these issues in detail and discuss an avenue that seems to allow for a satisfactory account of the generation of the primordial inhomogeneities and anisotropies while freeing the theory from the eternal inflation problem.

  • Chiral fermion anomaly as a memory effect.- [PDF] - [Article] - [UPDATED]

    Adrián del Río, Ivan Agullo
     

    We study the non-conservation of the chiral charge of Dirac fields between past and future null infinity due to the Adler-Bell-Jackiw chiral anomaly. In previous investigations \cite{dR21}, we found that this charge fails to be conserved if electromagnetic sources in the bulk emit circularly polarized radiation. In this article, we unravel yet another contribution coming from the non-zero, infrared "soft" charges of the external, electromagnetic field. This new contribution can be interpreted as another manifestation of the ordinary memory effect produced by transitions between different infrared sectors of Maxwell theory, but now on test quantum fields rather than on test classical particles. In other words, a flux of electromagnetic waves can leave a memory on quantum fermion states in the form of a permanent, net helicity. We elaborate this idea in both $1+1$ and $3+1$ dimensions. We also show that, in sharp contrast, gravitational infrared charges do not contribute to the fermion chiral anomaly.

  • Exploring the Quantum-to-Classical Vortex Flow: Quantum Field Theory Dynamics in Rotating Curved Spacetimes.- [PDF] - [Article] - [UPDATED]

    Patrik Švančara, Pietro Smaniotto, Leonardo Solidoro, James F. MacDonald, Sam Patrick, Ruth Gregory, Carlo F. Barenghi, Silke Weinfurtner
     

    Gravity simulators are laboratory systems where small excitations like sound or surface waves behave as fields propagating on a curved spacetime geometry. The analogy between gravity and fluids requires vanishing viscosity, a feature naturally realised in superfluids like liquid helium or cold atomic clouds. Such systems have been successful in verifying key predictions of quantum field theory in curved spacetime. In particular, quantum simulations of rotating curved spacetimes indicative of astrophysical black holes require the realisation of an extensive vortex flow in superfluid systems. Despite the inherent instability of multiply quantised vortices, here we demonstrate that a stationary giant quantum vortex can be stabilised in superfluid $^4$He. Its compact core carries thousands of circulation quanta, prevailing over current limitations in other physical systems such as magnons, atomic clouds and polaritons. We introduce a minimally invasive way to characterise the vortex flow by exploiting the interaction of micrometre-scale waves on the superfluid interface with the background velocity field. Intricate wave-vortex interactions, including the detection of bound states and distinctive analogue black hole ringdown signatures, have been observed. These results open new avenues to explore quantum-to-classical vortex transitions and utilise superfluid helium as a finite temperature quantum field theory simulator for rotating curved spacetimes.

  • Null Raychaudhuri: Canonical Structure and the Dressing Time.- [PDF] - [Article] - [UPDATED]

    Luca Ciambelli, Laurent Freidel, Robert G. Leigh
     

    We initiate a study of gravity focusing on generic null hypersurfaces, non-perturbatively in the Newton coupling. We present an off-shell account of the extended phase space of the theory, which includes the expected spin-2 data as well as spin-0, spin-1 and arbitrary matter degrees of freedom. We construct the charges and the corresponding kinematic Poisson brackets, employing a Beltrami parameterization of the spin-2 modes. We explicitly show that the constraint algebra closes, the details of which depend on the non-perturbative mixing between spin-0 and spin-2 modes. We then show that the spin-0 sector encodes a notion of a clock, called dressing time, which is dynamical and conjugate to the constraint. It is well-known that the null Raychaudhuri equation describes how the geometric data of a null hypersurface evolve in null time in response to gravitational radiation and external matter. Our analysis leads to three complementary viewpoints on this equation. First, it can be understood as a Carrollian stress tensor conservation equation. Second, we construct spin-$0$, spin-$2$ and matter stress tensors that act as generators of null time reparametrizations for each sector. This leads to the perspective that the null Raychaudhuri equation can be understood as imposing that the sum of CFT-like stress tensors vanishes. Third, we solve the Raychaudhuri constraint non-perturbatively. The solution relates the dressing time to the spin-$2$ and matter boost charge operators. Finally, we establish that the corner charge corresponding to the boost operator in the dressing-time frame is monotonic. These results show that the notion of an observer can be thought of as emerging from the gravitational degrees of freedom themselves. We briefly mention that the construction offers new insights into focusing conjectures.

  • Equivalence between definitions of the gravitational deflection angle of light for a stationary spacetime.- [PDF] - [Article] - [UPDATED]

    Kaisei Takahashi, Ryuya Kudo, Keita Takizawa, Hideki Asada
     

    The Gibbons-Werner-Ono-Ishihara-Asada method for gravitational lensing in a stationary spacetime has been recently reexamined [Huang and Cao, arXiv:2306.04145], in which the gravitational deflection angle of light based on the Gauss-Bonnet theorem can be rewritten as a line integral of two functions $H$ and $T$. The present paper proves that the Huang-Cao line integral definition and the Ono-Ishihara-Asada one [Phys. Rev. D 96, 104037 (2017)] are equivalent to each other, regardless of the asymptotic regions. A remark is also made concerning the direction of a light ray in the practical use of these definitions.

  • Thermodynamics of Deformed AdS-Schwarzschild Black Hole.- [PDF] - [Article] - [UPDATED]

    Mohammad Reza Khosravipoor, Mehrdad Farhoudi
     

    By implementing the gravitational decoupling method, we find the deformed AdS-Schwarzschild black hole solution when there is also an additional gravitational source, which obeys the weak energy condition. We also deliberately choose its energy density to be a certain monotonic function consistent with the constraints. In the method, there is a positive parameter that can adjust the strength of the effects of the geometric deformations on the background geometry, which we refer to as a deformation parameter. The condition of having an event horizon limits the value of the deformation parameter to an upper bound. After deriving various thermodynamic quantities as a function of the event horizon radius, we mostly focus on the effects of the deformation parameter on the horizon structure, the thermodynamics of the solution and the temperature of the Hawking-Page phase transition. The results show that with the increase of the deformation parameter: the minimum horizon radius required for a black hole to have local thermodynamic equilibrium and the minimum temperature below which there is no black hole decrease, and the horizon radius of the phase transition and the temperature of the first-order Hawking-Page phase transition increase. Furthermore, when the deformation parameter vanishes, the obtained thermodynamic behavior of the black hole is consistent with that stated in the literature.

  • Instability of asymptotically flat (2+1)-dimensional black holes with Gauss-Bonnet corrections.- [PDF] - [Article] - [UPDATED]

    Milena Skvortsova
     

    Using time-domain integration of the wave equation, we show that scalar field perturbations around the $(2+1)$-dimensional asymptotically flat black hole with Gauss-Bonnet corrections are dynamically unstable even when the coupling constant is sufficiently small.

  • Carrollian Structure of the Null Boundary Solution Space.- [PDF] - [Article] - [UPDATED]

    H. Adami, A. Parvizi, M.M. Sheikh-Jabbari, V. Taghiloo, H. Yavartanoo
     

    We study pure $D$ dimensional Einstein gravity in spacetimes with a generic null boundary. We focus on the symplectic form of the solution phase space which comprises a $2D$ dimensional boundary part and a $2(D(D-3)/2+1)$ dimensional bulk part. The symplectic form is the sum of the bulk and boundary parts, obtained through integration over a codimension 1 surface (null boundary) and a codimension 2 spatial section of it, respectively. Notably, while the total symplectic form is a closed 2-form over the solution phase space, neither the boundary nor the bulk symplectic forms are closed due to the symplectic flux of the bulk modes passing through the boundary. Furthermore, we demonstrate that the $D(D-3)/2+1$ dimensional Lagrangian submanifold of the bulk part of the solution phase space has a Carrollian structure, with the metric on the $D(D-3)/2$ dimensional part being the Wheeler-DeWitt metric, and the Carrollian kernel vector corresponding to the outgoing Robinson-Trautman gravitational wave solution.

  • A superradiant black hole rocket.- [PDF] - [Article] - [UPDATED]

    Lucas Acito, Nicolás E. Grandi, Pablo Pisani
     

    We calculate the total thrust resulting from the interaction between charged scalar modes and a superradiant Reissner-Nordstr\"om black hole, when the modes are deflected by a hemispherical perfect mirror located at a finite distance from the black hole's horizon.

  • Topologically modified Einstein equation: a solution with singularities on $\mathbb{S}^3$.- [PDF] - [Article] - [UPDATED]

    Quentin Vigneron, Áron Szabó, Pierre Mourier
     

    In [arXiv:2204.13980], we recently proposed a modification of general relativity in which a non-dynamical term related to topology is introduced in the Einstein equation. The original motivation for this theory is to allow the non-relativistic limit to exist in any physical spacetime topology. In the present paper, we derive an exact static vacuum spherically symmetric solution of this theory. The metric represents a black hole (with positive Komar mass) and a naked white hole (with negative and opposite Komar mass) at opposite poles of an $\mathbb{S}^3$ universe. The solution is similar to the Schwarzschild metric, but the spacelike infinity is cut off and replaced by a naked white hole at finite distance, implying that the spacelike hypersurfaces of the Penrose--Carter diagram are closed. This solution further confirms a result of [arXiv:2212.00675] suggesting that staticity of closed-space models in this theory requires a vanishing total mass. As a subcase of the solution, we also obtain a vacuum homogeneous 3-sphere, something not possible in general relativity. We also put the solution in perspective with other attempts at describing singularities on $\mathbb{S}^3$ and discuss how this theory could be used to study the behaviour of a black hole in an empty closed expanding universe.

hep-ph

  • Closing in on new chiral leptons at the LHC.- [PDF] - [Article]

    Daniele Barducci, Luca Di Luzio, Marco Nardecchia, Claudio Toni
     

    We study the phenomenological viability of chiral extensions of the Standard Model, with new chiral fermions acquiring their mass through interactions with the Higgs. We examine constraints from electroweak precision tests, Higgs physics and direct searches at the LHC. Our analysis indicates that purely chiral scenarios are perturbatively excluded by the combination of Higgs coupling measurements and LHC direct searches. However, allowing for a partial contribution from vector-like masses opens up the parameter space and non-decoupled exotic leptons could account for the observed 2$\sigma$ deviation in $h \to Z\gamma$. This scenario will be further tested in the high-luminosity phase of the LHC.

  • Quark and lepton modular models from the binary dihedral flavor symmetry.- [PDF] - [Article]

    Carlos Arriaga-Osante, Xiang-Gan Liu, Saul Ramos-Sanchez
     

    Inspired by the structure of top-down derived models endowed with modular flavor symmetries, we investigate the yet phenomenologically unexplored binary dihedral group 2D_3. After building the vector-valued modular forms in the representations of 2D_3 with small modular weights, we systematically classify all (Dirac and Majorana) mass textures of fermions with fractional modular weights and all possible 2+1-family structures. This allows us to explore the parameter space of fermion models based on 2D_3, aiming at a description of both quarks and leptons with a minimal number of parameters and best compatibility with observed data. We consider the separate possibilities of neutrino masses generated by either a type-I seesaw mechanism or the Weinberg operator. We identify a model that, besides fitting all known flavor observables, delivers predictions for six not-yet measured parameters and favors normal-ordered neutrino masses generated by the Weinberg operator. It would be interesting to figure out whether it is possible to embed our model within a top-down scheme, such as T2/Z4 heterotic orbifold compactifications.

  • Hierarchies from Landscape Probability Gradients and Critical Boundaries.- [PDF] - [Article]

    Oleksii Matsedonskyi
     

    If the gradient of a probability distribution on a landscape of vacua aligns with the variation of some fundamental parameter, the parameter may be likely to take some non-generic value. Such non-generic values can be associated to critical boundaries, where qualitative changes of the landscape properties happen, or an anthropic bound is located. Assuming the standard volume-weighted and the local probability measures, we discuss ordered landscapes which can produce several types of the aligned probability gradients. The resulting values of the gradients are defined by the "closeness" of a given vacuum to the highest- or the lowest-energy vacuum. Using these ingredients we construct a landscape scanning independently the Higgs mass and the cosmological constant (CC). The probability gradient pushes the Higgs mass to its observed value, where a structural change of the landscape takes place, while the CC is chosen anthropically.

  • The majoron coupling to charged leptons.- [PDF] - [Article]

    A. Herrero-Brocal, Avelino Vicente
     

    The particle spectrum of every Majorana neutrino mass model with spontaneous violation of global lepton number includes a Goldstone boson, the so-called majoron. The presence of this massless pseudoscalar changes the phenomenology dramatically. In this work we derive general analytical expressions for the 1-loop coupling of the majoron to charged leptons. These can be applied to any model featuring a majoron that has a clear hierarchy of energy scales, as required for an expansion in powers of the low-energy scale to be valid. We show how to use our general results by applying them to some example models, finding full agreement with previous results in several popular scenarios and deriving novel ones in other setups.

  • Compatibility between theoretical predictions and experimental data for top-antitop hadroproduction at NNLO QCD accuracy.- [PDF] - [Article]

    Maria Vittoria Garzelli, Javier Mazzitelli, Sven-Olaf Moch, Oleksandr Zenaiev
     

    We compare double-differential normalized production cross sections for top-antitop $+ X$ hadroproduction at NNLO QCD accuracy, as obtained through a customized version of the MATRIX framework interfaced to PineAPPL, with recent data by the ATLAS and CMS collaborations. We take into account theory uncertainties due to scale variation and we study how the predictions vary as a function of the parton distribution function (PDF) choice and the top-quark pole mass value, considering different state-of-the-art PDF fits with their uncertainties. Notwithstanding the overall reasonably good agreement, we observe discrepancies at the level of a few $\sigma$ between data and theoretical predictions in some kinematical regions, which can be alleviated by refitting the top-quark mass value and/or the PDFs and/or $\alpha_s(M_Z)$, considering the correlations between these three quantities. In a standalone fit of the top-quark mass, we notice that, for all considered PDF + $\alpha_s(M_Z)$ sets used as input, some datasets point towards top-quark pole mass values lower by about $2\,\sigma$ than those emerging from fitting other datasets, suggesting a possible tension between experimental measurements using different decay channels and/or the need to better estimate the uncertainties on the latter.

  • Radiative Majorana Neutrino Masses in a Parity Solution to the Strong CP Problem.- [PDF] - [Article]

    Lawrence J. Hall, Keisuke Harigaya, Yogev Shpilman
     

    The strong CP problem can be solved in Parity symmetric theories with electroweak gauge group containing $SU(2)_L \times SU(2)_R$ broken by the minimal Higgs content. Neutrino masses may be explained by adding the same number of gauge singlet fermions as the number of generations. The neutrino masses vanish at tree-level and are only radiatively generated, leading to larger couplings of right-handed neutrinos to Standard Model particles than with the tree-level seesaw mechanism. We compute these radiative corrections and the mixing angles between left- and right-handed neutrinos. We discuss sensitivities to these right-handed neutrinos from a variety of future experiments that search for heavy neutral leptons with masses from tens of MeV to the multi-TeV scale.

  • Toward Realistic Models in $T^2/\mathbb{Z}_2$ Flux Compactification.- [PDF] - [Article]

    Hiroki Imai, Nobuhito Maru
     

    Six-dimensional gauge theories compactified on $T^2/\mathbb{Z}_{2}$ with magnetic flux are considered, where the generation number of fermions can be understood as the degree of degeneracy of fermion zero modes. We investigate whether three-generation models compatible with Yukawa couplings are possible and find various such models, except for the two-Higgs-doublet model.

  • The profile of the Higgs boson -- status and prospects.- [PDF] - [Article]

    Karl Jakobs, Giulia Zanderighi
     

    The Higgs boson, which was discovered at CERN in 2012, stands out as a remarkable elementary particle with distinct characteristics. Unlike any other observed particle, it possesses zero spin within the Standard Model (SM) of particle physics. Theoretical predictions had anticipated the existence of this scalar boson, postulating its interaction with the $W$ and $Z$ bosons as well as through Yukawa interactions with fermions. Furthermore the Higgs boson can interact with itself, commonly referred to as the Higgs self-interaction. In this review, the current state of experimental and theoretical investigations of Higgs boson production at the LHC and the ongoing efforts to unravel its properties are described, and an up-to-date assessment of our understanding of the Higgs sector of the SM is provided. In addition, potential links between the Higgs boson and significant unresolved questions within the realm of particle physics are presented.

  • Freeze-in Dark Matter via Higgs Portal: Small-scale Problem.- [PDF] - [Article]

    Xinyue Yin, Shuai Xu, Sibo Zheng
     

    Observations of small-scale structure favor self-interacting dark matter providing a mildly velocity-dependent dark matter scattering cross section. In this work we study how to address the small-scale problem with freeze-in dark matter. To this end, we impose the small-scale data on the dark matter scattering cross section to extract the dark matter parameter regions, either within the classical or the resonant regime. We then present an analytical derivation of the two-body decay and two-body annihilation freeze-in processes, with the Sommerfeld effect taken into account. Finally, we apply the obtained results to construct a realistic model of freeze-in dark matter with self-interaction via the Standard Model Higgs portal, which is compatible with various constraints, including the out-of-equilibrium condition, the Higgs invisible decay, and dark matter direct and indirect detection.

  • The $\mathbf{\bar{q}q\bar{s}Q}$ $\mathbf{(q=u,\,d;\,Q=c,\,b)}$ tetraquark system in a chiral quark model.- [PDF] - [Article]

    Gang Yang, Jialun Ping, Jorge Segovia
     

    Inspired by the experimentally reported $T_{c\bar{s}}(2900)$ exotic states, the $S$-wave $\bar{q}q\bar{s}Q$ $(q=u,\,d;\,Q=c,\,b)$ tetraquarks, with spin-parity $J^P=0^+$, $1^+$ and $2^+$, in both isoscalar and isovector sectors are systematically studied in a chiral quark model. The meson-meson, diquark-antidiquark and K-type arrangements of quarks, along with all possible color wave functions, are comprehensively considered. The four-body system is solved by means of a highly efficient computational approach, the Gaussian expansion method, along with a complex-scaling formulation of the problem to disentangle bound, resonance and scattering states. This theoretical framework has already been successfully applied to various tetra- and penta-quark systems. In the complete coupled-channel case, and within the complex-range formulation, several narrow resonances of the $\bar{q}q\bar{s}c$ and $\bar{q}q\bar{s}b$ systems are obtained in each allowed $I(J^P)$ channel. In particular, the $T_{c\bar{s}}(2900)$ is well identified as a $I(J^P)=1(0^+)$ $\bar{q}q\bar{s}c$ tetraquark state with a dominant molecular structure. Meanwhile, more resonances in the $\bar{q}q\bar{s}c$ and $\bar{q}q\bar{s}b$ systems are also obtained within the energy regions $2.4-3.4$ GeV and $5.7-6.7$ GeV, respectively. The predicted exotic states, which are an indication of a richer color structure when going towards multiquark systems beyond mesons and baryons, are expected to be confirmed in future high-energy particle and nuclear experiments.

  • Bayesian method for fitting the low-energy constants in chiral perturbation theory.- [PDF] - [Article]

    Hao-Xiang Pan, De-Kai Kong, Qiao-Yi Wen, Shao-Zhou Jiang
     

    The values of the low-energy constants (LECs) are very important in chiral perturbation theory. This paper adopts a Bayesian method incorporating truncation errors to globally fit some next-to-leading order (NLO) LECs $L_i^r$ and some next-to-next-to-leading order (NNLO) LECs $C_i^r$. With the estimates of the truncation errors, the fitted values of $L_i^r$ at NLO and NNLO are very close. The posterior distributions of the $C_i^r$ indicate boundary-dependent relations among them. Ten $C_i^r$ depend only weakly on the boundaries and their values are reliable; the others require more experimental data to constrain their boundaries. Some linear combinations of the $C_i^r$ are also fitted, with more reliable posterior distributions; if more precise values of some $C_i^r$ become known, others can be obtained from these combinations. With the fitted LECs, most observables show good convergence, except for the $\pi K$ scattering lengths $a_0^{3/2}$ and $a_0^{1/2}$. Two examples are also introduced to test the improvement of the method. All the computations indicate that taking the truncation errors into account greatly improves the global fit, and that more prior information yields better fitting results. This fitting method can be extended to other effective field theories and to perturbation theory.
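    As a loose illustration of the general idea (not the paper's actual implementation), a truncation error can be folded into a Gaussian likelihood as extra noise whose scale is set by the first omitted order. The expansion parameter `Q`, the scale `cbar`, and the data below are all invented for the sketch:

```python
import math

# Toy Bayesian fit of a single LEC-like parameter c, with the truncated
# orders contributing an estimated extra variance (cbar * Q**2 * x)**2
# alongside the experimental uncertainty.

def log_likelihood(c, data, Q=0.3, cbar=1.0):
    logL = 0.0
    for x, y, sigma_exp in data:
        pred = c * Q * x                                  # leading-order prediction
        var = sigma_exp**2 + (cbar * Q**2 * x) ** 2       # experiment + truncation
        logL += -0.5 * (y - pred) ** 2 / var - 0.5 * math.log(2 * math.pi * var)
    return logL

# Invented pseudo-data consistent with c close to 1.
data = [(1.0, 0.32, 0.02), (2.0, 0.61, 0.02), (3.0, 0.89, 0.03)]

# Grid posterior under a flat prior on c in [0.5, 1.5].
grid = [i * 0.001 for i in range(500, 1500)]
weights = [math.exp(log_likelihood(c, data)) for c in grid]
norm = sum(weights)
mean_c = sum(c * w for c, w in zip(grid, weights)) / norm
```

    Because the truncation variance grows with the kinematic variable, the data points most affected by higher orders are automatically de-weighted, which is the essential effect of including truncation errors in the fit.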

  • Bridging the $ \mu $Hz Gap in the Gravitational-Wave Landscape: Unveiling Dark Baryons.- [PDF] - [Article]

    Martin Rosenlyst
     

    We study gravitational waves (GWs) with frequencies in the $\mu$Hz range, which arise from phase transitions related to dark confinement in the context of dark versions of Quantum Chromodynamics. Based on several compelling motivations, we posit that these theories predict the existence of GeV-mass asymmetric dark baryons, akin to ordinary baryons, with the potential to contribute to Dark Matter (DM). Furthermore, we emphasize the significance of a particular $\mathcal{O}(\text{TeV})$ scale for multiple reasons. First, to account for the similarity in present-day mass densities between DM and Visible Matter, various TeV-scale mechanisms can elucidate the similarities in both their number densities and masses. Moreover, to address the so-called electroweak hierarchy problem, we consider the introduction of either a Composite Higgs or Supersymmetry at around $\mathcal{O}(\text{TeV})$. These mechanisms lead to intriguing TeV collider signatures and the possibility of detecting mHz GWs in future experiments. In summary, this study provides a strong motivation for advancing GW experiments that can bridge the $\mu$Hz frequency gap in the GW spectrum. Additionally, there is a need for the construction of more powerful particle colliders to explore higher energy regimes. Given the possibility of scrutinizing these models from various perspectives, we strongly advocate their further development.

  • The polarimeter vector for $\tau \rightarrow 3 \pi\nu_{\tau}$ decays.- [PDF] - [Article]

    Vladimir Cherepanov, Christian Veelken
     

    The polarimeter vector of the $\tau$ represents an optimal observable for the measurement of the $\tau$ spin. In this paper we present an algorithm for the computation of the $\tau$ polarimeter vector for the decay channels $\tau^{-} \rightarrow \pi^{-}\pi^{+}\pi^{-}\nu_{\tau}$ and $\tau^{-} \rightarrow \pi^{-}\pi^{0}\pi^{0}\nu_{\tau}$. The algorithm is based on a model for the hadronic current in these decay channels, which was fitted to data recorded by the CLEO experiment.

  • Inferring the initial condition for the Balitsky-Kovchegov equation.- [PDF] - [Article]

    Carlisle Casuga, Mikko Karhunen, Heikki Mäntysaari
     

    We apply Bayesian inference to determine the posterior likelihood distribution of the parameters describing the initial condition of the small-$x$ Balitsky-Kovchegov evolution equation at leading logarithmic accuracy. The HERA structure function data are found to constrain most of the model parameters well. In particular, we find that the HERA data prefer an anomalous dimension $\gamma\approx 1$, unlike previous fits where $\gamma>1$ led to, e.g., the unintegrated gluon distribution and the quark-target cross sections not being positive definite. The determined posterior distribution can be used to propagate the uncertainties of the non-perturbative initial condition into any other observable calculated in the Color Glass Condensate framework. We demonstrate this explicitly for the inclusive quark production cross section in proton-proton collisions and by calculating predictions for the nuclear modification factor of the $F_2$ structure function in EIC and LHeC/FCC-he kinematics.

  • Scalar exotic mesons $bb\overline{c}\overline{c}$.- [PDF] - [Article]

    S. S. Agaev, K. Azizi, B. Barsbay, H. Sundu
     

    Properties of doubly charged scalar tetraquarks $bb\overline{c}\overline{c}$ are investigated in the framework of the QCD sum rule method. We model them as diquark-antidiquark states $X_{\mathrm{1}}$ and $X_{\mathrm{2}}$ built of axial-vector and pseudoscalar diquarks, respectively. The masses and current couplings of these particles are computed using the QCD two-point sum rule method. Results $m_{1}=(12715 \pm 80)~\mathrm{MeV}$ and $m_{2}=(13370 \pm 95)~\mathrm{MeV}$ obtained for the masses of these particles are used to determine their kinematically allowed decay modes. The full width $\Gamma_{ \mathrm{1}}$ of the state $X_{\mathrm{1}}$ is evaluated by taking into account its strong decays to mesons $2B_{c}^{-}$, and $2B_{c}^{\ast -}$. The processes $X_{\mathrm{2}} \to 2B_{c}^{-}$, $2B_{c}^{\ast -}$ and $ B_{c}^{-}B_{c}^{-}(2S)$ are employed to estimate $\Gamma_{\mathrm{2}}$. Predictions obtained for the full widths $\Gamma_{\mathrm{1}}=(63 \pm 12)~ \mathrm{MeV}$ and $\Gamma_{\mathrm{2}}=(79 \pm 14)~\mathrm{MeV}$ of these structures and their masses may be utilized in experimental studies of fully heavy resonances.

  • Hadronization of Heavy Quarks.- [PDF] - [Article]

    Jiaxing Zhao, Jörg Aichelin, Pol Bernard Gossiaux, Andrea Beraudo, Shanshan Cao, Wenkai Fan, Min He, Vincenzo Minissale, Taesoo Song, Ivan Vitev, Ralf Rapp, Steffen Bass, Elena Bratkovskaya, Vincenzo Greco, Salvatore Plumari
     

    Heavy-flavor hadrons produced in ultra-relativistic heavy-ion collisions are a sensitive probe for studying hadronization mechanisms of the quark-gluon plasma. In this work, we survey how different transport models for the simulation of heavy-quark diffusion through a quark-gluon plasma in heavy-ion collisions implement hadronization and how this affects final-state observables. Utilizing the same input charm-quark distribution in all models at the hadronization transition, we find that the transverse-momentum dependence of the nuclear modification factor of various charm hadron species has significant sensitivity to the hadronization scheme. In addition, the charm-hadron elliptic flow exhibits a nontrivial dependence on the elliptic flow of the hadronizing partonic medium.

  • Investigation of the hadronic light-by-light contribution to the muon $g{-}2$ using staggered fermions.- [PDF] - [Article]

    Christian Zimmermann, Antoine Gérardin
     

    Hadronic contributions dominate the uncertainty of the standard model prediction for the anomalous magnetic moment of the muon. In this work, we describe an ongoing lattice calculation of the hadronic light-by-light contribution, performed with staggered fermions. The presence of quarks with different tastes complicates the analysis of the position-space correlation function. We present a suitable adaptation of the "Mainz method". As a first numerical test, we reproduce the well-known lepton-loop contribution. Results at a single lattice spacing for the light quark contribution, using two volumes, are then discussed. Our study of the long-distance behavior and finite-volume effects is supplemented by considering the contribution of the light pseudoscalar pole. The corresponding transition form factors have been evaluated in previous simulations on the same ensembles.

  • QCD Equation of State of Dense Nuclear Matter from a Bayesian Analysis of Heavy-Ion Collision Data.- [PDF] - [Article] - [UPDATED]

    Manjunath Omana Kuttan, Jan Steinheimer, Kai Zhou, Horst Stoecker
     

    Bayesian methods are used to constrain the density dependence of the QCD Equation of State (EoS) of dense nuclear matter using data on the mean transverse kinetic energy and elliptic flow of protons from heavy-ion collisions (HIC) in the beam energy range $\sqrt{s_{\mathrm{NN}}}=2\text{--}10~\mathrm{GeV}$. The analysis yields tight constraints on the density-dependent EoS up to 4 times the nuclear saturation density. The extracted EoS gives good agreement with other observables measured in HIC experiments and with constraints from astrophysical observations, neither of which were used in the inference. The sensitivity of the inference to the choice of observables is also discussed.

  • Thermodynamic and hydrodynamic characteristics of interacting system formed in relativistic heavy ion collisions.- [PDF] - [Article] - [UPDATED]

    Xu-Hong Zhang, Hao-Ning Wang, Fu-Hu Liu, Khusniddin K. Olimov
     

    To study the energy-dependent characteristics of thermodynamic and hydrodynamic parameters within the framework of a multi-source thermal model, we analyze the soft transverse momentum ($p_{T}$) spectra of the charged particles ($\pi^{-}$, $\pi^{+}$, $K^{-}$, $K^{+}$, $\bar{p}$, and $p$) produced in gold-gold (Au-Au) collisions at the center-of-mass energies $\sqrt{s_{NN}}=7.7$, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV from the STAR Collaboration and in lead-lead (Pb-Pb) collisions at $\sqrt{s_{NN}}=2.76$ and 5.02 TeV from the ALICE Collaboration. In the rest frame of the emission source, meson momenta follow a Bose-Einstein distribution and baryon momenta follow a Fermi-Dirac distribution. To simulate the $p_{T}$ of the charged particles, the kinetic freeze-out temperature $T$ and transverse expansion velocity $\beta_{T}$ of the emission source are introduced into the relativistic ideal gas model. Our results, based on Monte Carlo numerical calculations, show good agreement with the experimental data. The excitation functions of the thermodynamic parameter $T$ and the hydrodynamic parameter $\beta_{T}$ are then obtained from the analyses; both show an increasing tendency from 7.7 GeV to 5.02 TeV in collisions of different centralities.
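    The sampling strategy sketched in the abstract — thermal momenta drawn in the source rest frame, then boosted transversely — can be illustrated with a minimal Monte Carlo. This is a sketch under our own simplifying assumptions (pions only, a single boost direction standing in for the radial flow, toy values of $T$ and $\beta_T$), not the paper's implementation:

```python
import numpy as np

M_PI, T = 0.139, 0.110   # pion mass and kinetic freeze-out temperature (GeV), toy values
rng = np.random.default_rng(1)

def mean_pT(beta_T, n=40000):
    """Mean transverse momentum of pions sampled from a Bose-Einstein
    distribution in the source rest frame, then boosted transversely by beta_T."""
    # rejection-sample |p| from f(p) ~ p^2 / (exp(E/T) - 1)
    p_max = 2.0
    grid = np.linspace(1e-4, p_max, 1000)
    f_max = (grid**2 / np.expm1(np.sqrt(grid**2 + M_PI**2) / T)).max()
    samples = []
    while len(samples) < n:
        p = rng.uniform(0.0, p_max, n)
        w = p**2 / np.expm1(np.hypot(p, M_PI) / T)
        samples.extend(p[rng.uniform(0.0, f_max, n) < w])
    p = np.array(samples[:n])
    # isotropic emission angles in the source frame
    cos_th = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_th = np.sqrt(1.0 - cos_th**2)
    px, py = p * sin_th * np.cos(phi), p * sin_th * np.sin(phi)
    E = np.hypot(p, M_PI)
    gamma = 1.0 / np.sqrt(1.0 - beta_T**2)
    px_lab = gamma * (px + beta_T * E)   # boost along x as a stand-in for the radial direction
    return np.hypot(px_lab, py).mean()

print(f"<pT> static source : {mean_pT(0.0):.3f} GeV")
print(f"<pT> beta_T = 0.4  : {mean_pT(0.4):.3f} GeV")
```

    Raising either $T$ or $\beta_T$ hardens the simulated spectrum, which is why the fitted excitation functions of both parameters rise with collision energy.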

  • Productions of $X(3872)$, $Z_c(3900)$, $X_2(4013)$, and $Z_c(4020)$ in $B_{(s)}$ decays offer strong clues on their molecular nature.- [PDF] - [Article] - [UPDATED]

    Qi Wu, Ming-Zhu Liu, Li-Sheng Geng
     

    The exotic states $X(3872)$ and $Z_c(3900)$ have long been conjectured as isoscalar and isovector $\bar{D}^*D$ molecules. In this work, we first propose the triangle diagram mechanism to investigate their productions in $B$ decays, as well as those of their heavy quark spin symmetry partners, $X_2(4013)$ and $Z_c(4020)$. We show that the large isospin breaking of the ratio $\mathcal{B}[B^+ \to X(3872) K^+]/\mathcal{B}[B^0 \to X(3872) K^0] $ can be attributed to the isospin breaking of the neutral and charged $\bar{D}^*D$ components in their wave functions. For the same reason, the branching fractions of $Z_c(3900)$ in $B$ decays are smaller than the corresponding ones of $X(3872)$ by at least one order of magnitude, which naturally explains its non-observation. A hierarchy for the production fractions of $X(3872)$, $Z_c(3900)$, $X_2(4013)$, and $Z_c(4020)$ in $B$ decays, consistent with all existing data, is predicted. Furthermore, with the factorization ansatz we extract the decay constants of $X(3872)$, $Z_c(3900)$, and $Z_c(4020)$ as $\bar{D}^*D^{(*)}$ molecules via the $B$ decays, and then calculate their branching fractions in the relevant $B_{(s)}$ decays, which turn out to agree with all existing experimental data. The mechanism we propose is useful for elucidating the internal structure of the many exotic hadrons discovered so far and for extracting the decay constants of hadronic molecules, which can be used to predict their production in related processes.

  • Further study on the production of P-wave doubly heavy baryons from Z-boson decays.- [PDF] - [Article] - [UPDATED]

    Hai-Jiang Tian, Xuan Luo, Hai-Bing Fu
     

    In this paper, we carry out a systematic investigation of excited doubly heavy baryon production in $Z$-boson decays within the NRQCD factorization approach. Our investigation accounts for all the $P$-wave intermediate diquark states, {\it i.e.} $\langle cc\rangle[^1P_1]_{\bar 3}$, $\langle cc\rangle[^3P_J]_{6}$, $\langle bc\rangle[^1P_1]_{\bar 3/6}$, $\langle bc\rangle[^3P_J]_{\bar 3/6}$, $\langle bb\rangle[^1P_1]_{\bar 3}$, and $\langle bb\rangle[^3P_J]_{6}$ with $J = (0, 1, 2)$. The results show that the contributions from all $P$-wave diquark states are $7\%$, $8\%$, and $3\%$ relative to the $S$-wave for the production of $\Xi_{cc}$, $\Xi_{bc}$, and $\Xi_{bb}$ via $Z$-boson decay, respectively. Based on these results, we predict that about $0.539\times 10^3(10^6)$ events for $\Xi_{cc}$, $1.827\times 10^3(10^6)$ events for $\Xi_{bc}$, and $0.036\times 10^3(10^6)$ events for $\Xi_{bb}$ can be produced annually at the LHC (CEPC). Additionally, we plot the differential decay widths of $\Xi_{cc}$, $\Xi_{bc}$ and $\Xi_{bb}$ as functions of the invariant mass $s_{23}$ and the energy fraction $z$, and analyze the theoretical uncertainties in the decay widths arising from the heavy-quark mass parameters.

  • Physics of the Analytic S-Matrix.- [PDF] - [Article] - [UPDATED]

    Sebastian Mizera
     

    You might've heard about various mathematical properties of scattering amplitudes such as analyticity, sheets, branch cuts, discontinuities, etc. What does it all mean? In these lectures, we'll take a guided tour through simple scattering problems that will allow us to directly trace such properties back to physics. We'll learn how different analytic features of the S-matrix are really consequences of causality, locality of interactions, unitary propagation, and so on. These notes are based on a series of lectures given in Spring 2023 at the Institute for Advanced Study in Princeton and the Higgs Centre School of Theoretical Physics in Edinburgh.

  • Study of the gluonic quartic gauge couplings at muon colliders.- [PDF] - [Article] - [UPDATED]

    Ji-Chong Yang, Yu-Chen Guo, Yi-Fei Dong
     

    The potential of muon colliders opens up new possibilities for the exploration of new physics beyond the Standard Model. It is worthwhile to investigate whether muon colliders are suitable for studying gluonic quartic gauge couplings~(gQGCs), which receive contributions from dimension-8 operators in the framework of the Standard Model effective field theory and have been intensively studied recently. In this paper, we study the sensitivity of the process $\mu^+\mu^-\to j j \nu\bar{\nu}$ to gQGCs. Our results indicate that muon colliders with c.m. energies larger than $4\;{\rm TeV}$ can be more sensitive to gQGCs than the Large Hadron Collider.

  • Higher strangeonium decays into light flavor baryon pairs like $\Lambda\bar{\Lambda}$, $\Sigma\bar{\Sigma}$, and $\Xi\bar{\Xi}$.- [PDF] - [Article] - [UPDATED]

    Zi-Yue Bai, Qin-Song Zhou, Xiang Liu
     

    In this work, we investigate the decay behaviors of several higher strangeonia into $\Lambda\bar\Lambda$ through a hadronic loop mechanism, enabling us to predict some physical observables, including the branching ratios. Furthermore, we assess the reliability of our research by successfully reproducing experimental data related to the cross section of $e^+e^-\to\Lambda\bar{\Lambda}$ interactions. In this context, we account for the contributions arising from higher strangeonia, specifically $\phi(4S)$ and $\phi(3D)$. Additionally, we extend this study to encompass higher strangeonia decays into other light flavor baryon pairs, such as $\Sigma\bar\Sigma$ and $\Xi\bar\Xi$. By employing the same mechanism, we aim to gain valuable insights into the decay processes involving these particles. By conducting this investigation, we hope to shed light on the intricate decay mechanisms of higher strangeonia and their interactions with various baryon pairs.

  • A Hybrid Type I + III Inverse Seesaw Mechanism in $U(1)_{R-L}$-symmetric MSSM.- [PDF] - [Article] - [UPDATED]

    Cem Murat Ayber, Seyda Ipek
     

    We show that, in a $U(1)_{R-L}$-symmetric supersymmetric model, the pseudo-Dirac bino and wino can give rise to three light neutrino masses through effective operators, generated at the messenger scale between a SUSY breaking hidden sector and the visible sector. The neutrino-bino/wino mixing follows a hybrid type I+III inverse seesaw pattern. The light neutrino masses are governed by the ratio of the $U(1)_{R-L}$-breaking gravitino mass, $m_{3/2}$, and the messenger scale $\Lambda_M$. The charged component of the $SU(2)_L$-triplet, here the lightest charginos, mixes with the charged leptons and generates flavor-changing neutral currents at tree level. We find that the resulting lepton-flavor-violating observables yield a lower bound on the messenger scale, $\Lambda_M \gtrsim (500-1000)~{\rm TeV}$ for a simplified hybrid mixing scenario. We identify interesting mixing structures for certain $U(1)_{R-L}$-breaking singlino/tripletino Majorana masses. For example, in some parameter regimes, the bino or wino has no mixing with the electron neutrino. We also describe the rich collider phenomenology expected in this neutrino-mass generation mechanism.

  • Glueball Spectrum with four light dynamical fermions.- [PDF] - [Article] - [UPDATED]

    Andreas Athenodorou, Jacob Finkenrath, Adam Lantos, Michael Teper
     

    We perform a calculation of the glueball spectrum for $N_f=4$ degenerate dynamical fermions with masses corresponding to light pions. We do so by making use of ensembles produced within the framework of maximally twisted fermions by the Extended Twisted Mass Collaboration (ETMC). We obtain masses of states that fall into the irreducible representations of the octahedral group of rotations in combination with the quantum numbers of charge conjugation $C$ and parity $P$; these quantum numbers give 20 distinct irreducible representations. We implement the Generalized Eigenvalue Problem (GEVP) using a basis that consists only of gluonic operators. The purpose of this work is to investigate the effect of light dynamical quarks on the glueball spectrum and how this compares to the statistically more accurate spectrum of $SU(3)$ pure gauge theory. Given that glueball states may have broad widths and thus need to be disentangled from all the relevant mixings, we use large ensembles of $\sim{\cal O}(20\,{\rm K})$ configurations. Despite the large ensembles, the statistical uncertainties allow us to extract the masses for only a few irreducible representations, namely $A_1^{++}$, $A_1^{-+}$, $E^{++}$, and $T_2^{++}$. The results for the scalar $A_1^{++}$ representation show that an additional state appears as the lightest state in this channel of the glueball spectrum, while the next two excited states are consistent with the lightest two states of the pure gauge theory. To further elucidate the nature of this additional state we perform a calculation using $N_f=2+1+1$ configurations, which demonstrates that it possesses a large quark content. Finally, the ground states of the $E^{++}$ and $T_2^{++}$ tensor channels and of the $A_1^{-+}$ pseudoscalar channel show, at most, minor effects due to the inclusion of dynamical quarks.

  • Flavor physics in SU(5) GUT with scalar fields in the 45 representation.- [PDF] - [Article] - [UPDATED]

    Toru Goto, Satoshi Mishima, Tetsuo Shindou
     

    We study a realistic SU(5) grand unified model, where a 45 representation of scalar fields is added to the Georgi-Glashow model in order to realize the gauge coupling unification and the masses and mixing of quarks and leptons. The gauge coupling unification together with constraints from proton decay implies mass splittings in scalar representations. We assume that an SU(2) triplet component of the 45 scalar, which is called $S_3$ leptoquark, has a TeV-scale mass, and color-sextet and color-octet ones have masses of the order of $10^6$ GeV. We calculate one-loop beta functions for Yukawa couplings in the model, and derive the low-energy values of the $S_3$ Yukawa couplings which are consistent with the grand unification. We provide predictions for lepton-flavor violation and lepton-flavor-universality violation induced by the $S_3$ leptoquark, and find that current and future experiments have a chance to find a footprint of our SU(5) model.

  • On the role of soft gluons in collinear parton densities.- [PDF] - [Article] - [UPDATED]

    M. Mendizabal, F. Guzman, H. Jung, S. Taheri Monfared
     

    The role of soft (non-perturbative) gluons in collinear parton densities is investigated with the Parton Branching method as a solution of the DGLAP evolution equations. It is found that soft gluons contribute significantly to collinear parton densities. Within the Parton Branching framework, the Sudakov form factor can be split into a perturbative and a non-perturbative part. The non-perturbative part can be calculated analytically under certain conditions. It is shown that the inclusion of soft (non-perturbative) gluons in the parton-density evolution is essential for the proper cancellation of divergent terms. It is argued that the non-perturbative part of the Sudakov form factor has its correspondence in Transverse Momentum Dependent parton distributions. Within the Parton Branching approach, this non-perturbative Sudakov form factor is constrained by fits of inclusive, collinear parton densities. We show that the non-perturbative Sudakov form factor and soft gluon emissions are essential for inclusive distributions (collinear parton densities and Drell-Yan transverse momentum spectra), while those soft gluons play essentially no role in final-state hadron spectra.

  • An alternative evaluation of the leading-order hadronic contribution to the muon g-2 with MUonE.- [PDF] - [Article] - [UPDATED]

    Fedor Ignatov, Riccardo Nunzio Pilato, Thomas Teubner, Graziano Venanzoni
     

    We propose an alternative method to extract the leading-order hadronic contribution to the muon g-2, $a_{\mu}^\text{HLO}$, with the MUonE experiment. In contrast to the traditional method based on the integral of the hadronic contribution to the running of the effective fine-structure constant $\Delta\alpha_{had}$ in the space-like region, our approach relies on the computation of the derivatives of $\Delta\alpha_{had}(t)$ at zero squared momentum transfer $t$. We show that this approach allows one to extract $\sim 99\%$ of the total value of $a_{\mu}^\text{HLO}$ from the MUonE data, while the remaining $\sim 1\%$ can be computed by combining perturbative QCD and data on $e^+e^-$ annihilation into hadrons. This leads to a competitive evaluation of $a_{\mu}^\text{HLO}$ which is robust against the parameterization used to model $\Delta\alpha_{had}(t)$ in the MUonE kinematic region, thanks to the analyticity properties of $\Delta\alpha_{had}(t)$, which can be expanded as a polynomial around $t\sim 0$.
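    The extraction rests on the standard space-like representation $a_{\mu}^\text{HLO} = (\alpha/\pi)\int_0^1 dx\,(1-x)\,\Delta\alpha_{had}(t(x))$ with $t(x) = -x^2 m_\mu^2/(1-x)$. A toy numerical sketch — using an assumed vector-meson-dominance-like stand-in for $\Delta\alpha_{had}$, not the measured running, with an arbitrary normalization — shows that the bulk of the integral comes from the MUonE-accessible window, with the high-$|t|$ tail left to perturbative QCD and $e^+e^-$ data:

```python
import numpy as np

ALPHA, M_MU, M_RHO = 1.0 / 137.036, 0.105658, 0.770   # fine structure const., masses in GeV
A = 0.0045   # toy normalization of the hadronic running (assumption)

def dalpha_had(t):
    # toy VMD-like model of Delta-alpha_had for space-like t <= 0;
    # an analytic stand-in, not the measured running
    return A * (-t) / (M_RHO**2 - t)

# space-like master integral: a_mu^HLO = (alpha/pi) * int_0^1 dx (1-x) Dalpha(t(x)),
# with t(x) = -x^2 m_mu^2 / (1-x)
x = np.linspace(0.0, 1.0, 200001)[:-1]   # drop the x = 1 endpoint (integrand -> 0 there)
t = -x**2 * M_MU**2 / (1.0 - x)
integrand = (1.0 - x) * dalpha_had(t)

a_mu_full = (ALPHA / np.pi) * np.trapz(integrand, x)
mask = x < 0.93                          # roughly the MUonE-accessible region, |t| < 0.14 GeV^2
a_mu_window = (ALPHA / np.pi) * np.trapz(integrand[mask], x[mask])
print(f"fraction of a_mu^HLO inside the window: {a_mu_window / a_mu_full:.2f}")
```

    In this crude toy the window already captures most of the integral; the paper's derivative-based expansion of $\Delta\alpha_{had}(t)$ at $t=0$ is what pushes the data-driven fraction to $\sim 99\%$.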

  • Kinematic scheme study of the $O(a^4)$ Bjorken sum rule and R ratio.- [PDF] - [Article] - [UPDATED]

    R.H. Mason, J.A. Gracey
     

    The Bjorken sum rule and R ratio are constructed to $O(a^4)$ in the Landau gauge in the three momentum subtraction schemes of Celmaster and Gonsalves, where $a = g^2/(16\pi^2)$. We aim to examine the issue of convergence of observables in the various schemes, as well as to test whether using the discrepancy between different scheme values is a viable and more quantum-field-theoretic alternative to current ways of estimating the theory error on a measurable quantity.

  • Can negative bare couplings make sense? The $\vec{\phi}^4$ theory at large $N$.- [PDF] - [Article] - [UPDATED]

    Ryan D. Weller
     

    Scalar $\lambda\phi^4$ theory in 3+1D, for a positive coupling constant $\lambda>0$, is known to have no interacting continuum limit, which is referred to as quantum triviality. However, it has been recently argued that the theory in 3+1D with an $N$-component scalar $\vec{\phi}$ and a $(\vec{\phi}\cdot\vec{\phi})^{\,2}=\vec{\phi}^{\,4}$ interaction term does have an interacting continuum limit at large $N$. It has been suggested that this continuum limit has a negative (bare) coupling constant and exhibits asymptotic freedom, similar to the $\mathcal{P}\mathcal{T}$-symmetric $-g\phi^4$ field theory. In this paper I study the $\vec{\phi}^{\,4}$ theory in 3+1D at large $N$ with a negative coupling constant $-g<0$, and with the scalar field taking values in a $\mathcal{P}\mathcal{T}$-symmetric complex domain. The theory is non-trivial, has asymptotic freedom, and has a Landau pole in the IR, and I demonstrate that the thermal partition function matches that of the positive-coupling $\lambda>0$ theory when the Landau poles of the two theories (in the $\lambda>0$ case a pole in the UV) are identified with one another. Thus the $\vec{\phi}^{\,4}$ theory at large $N$ appears to have a negative bare coupling constant; the coupling only becomes positive in the IR, which in the context of other $\mathcal{P}\mathcal{T}$-symmetric and large-$N$ quantum field theories I argue is perfectly acceptable.

  • Superallowed nuclear beta decays and precision tests of the Standard Model.- [PDF] - [Article] - [UPDATED]

    Mikhail Gorchtein, Chien Yeah Seng
     

    For many decades, the main source of information on the top-left corner element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix, $V_{ud}$, has been superallowed nuclear beta decays, with an impressive 0.01\% precision. This precision, apart from experimental data, relies on theoretical calculations in which nuclear structure-dependent effects and uncertainties play a prime role. This review is dedicated to a thorough reassessment of all ingredients that enter the extraction of the value of $V_{ud}$ from experimental data. We have tried to keep a balance between historical retrospect and new developments, many of which occurred in just the past five years. They have not yet been reviewed in a complete manner, not least because new results are still coming. This review aims to fill this gap and offers an in-depth yet accessible summary of all recent developments.

  • Renormalon cancellation and linear power correction to double logarithmic factorization of space-like parton correlators.- [PDF] - [Article] - [UPDATED]

    Yizhuang Liu, Yushan Su
     

    In this paper, we show that the common hard kernel of double-log-type factorization for certain space-like parton correlators in the context of lattice parton distributions, the heavy-light Sudakov hard kernel or equivalently, the axial gauge quark field renormalization factor, has linear infrared (IR) renormalon in its phase angle. We explicitly demonstrate how this IR renormalon correlates with ultraviolet (UV) renormalons of next-to-leading power soft contributions in two explicit examples: transverse momentum dependent (TMD) factorization of quasi wave function amplitude and threshold factorization of quark quasi-PDF. Theoretically, the pattern of renormalon cancellation complies with general expectations of perturbative series induced by marginal perturbation to UV fixed point. Practically, this linear renormalon explains the slow convergence of imaginary parts observed in lattice extraction of the Collins-Soper kernel and has the potential to reduce numerical uncertainty.

  • Constraints on Density Dependent MIT Bag Model Parameters for Quark and Hybrid Stars.- [PDF] - [Article] - [UPDATED]

    Soumen Podder, Suman Pal, Debashree Sen, Gargi Chaudhuri
     

    We compute the equation of state (EoS) of strange quark stars (SQSs) in the MIT bag model with a density-dependent bag pressure, characterized by a Gaussian distribution function. The density dependence of the bag pressure is controlled by three key parameters, namely the asymptotic value $B_{as}$, $\Delta B(=B_0 - B_{as})$, and $\beta$. We explore various parameter combinations ($B_{as}$, $\Delta B$, $\beta$) that adhere to the Bodmer-Witten conjecture, a criterion for the stability of SQSs. Our primary aim is to analyze the effects of these parameter variations on the structural properties of SQSs. However, we find that none of the combinations can satisfy the NICER data for PSR J0030+0451 and the constraint on tidal deformability from GW170817, so this model cannot describe reasonable SQS configurations. We also extend our work to calculate the structural properties of hybrid stars (HSs). With the density-dependent bag model (DDBM), these astrophysical constraints are fulfilled by HS configurations within a very restricted range of the three parameters. The present work is the first to constrain the parameters of the DDBM for both SQSs and HSs using the recent astrophysical constraints on tidal deformability from GW170817 and on the mass-radius relationship from NICER data.
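    A common Gaussian parameterization consistent with the quoted parameter names is $B(\rho) = B_{as} + \Delta B\, e^{-\beta(\rho/\rho_0)^2}$, so that $B(0) = B_0$ and $B \to B_{as}$ at high density; whether this is the paper's exact form is our assumption, and the numbers below are purely illustrative:

```python
import numpy as np

RHO0 = 0.16                          # fm^-3, nuclear saturation density
B_AS, D_B, BETA = 50.0, 80.0, 0.2    # MeV fm^-3 (first two); illustrative values, not fits

def bag_pressure(rho):
    """Assumed Gaussian form B(rho) = B_as + dB * exp(-beta * (rho/rho0)**2),
    so B(0) = B0 = B_as + dB and B approaches B_as asymptotically."""
    return B_AS + D_B * np.exp(-BETA * (rho / RHO0)**2)

print(bag_pressure(0.0))         # B0 = B_as + dB
print(bag_pressure(10 * RHO0))   # approaches B_as at high density
```

    The monotonic fall-off from $B_0$ to $B_{as}$ is what lets the three parameters reshape the quark-matter EoS at the densities relevant for SQS and HS structure.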

hep-th

  • Entanglement Edge Modes of General Noncommutative Matrix Backgrounds.- [PDF] - [Article]

    Alexander Frenkel
     

    We explore the structure of entanglement edge modes on noncommutative backgrounds that arise from matrix quantum mechanics. For the fuzzy sphere, despite nonlocality and UV/IR mixing, we find area law behavior in the dominant $U(N)$ representations governing the state of the edge modes. For general noncommutative backgrounds with no global symmetry, nonlocal effects resum into a smoothly varying coupling constant that deforms the metric to a different frame. The effect is analogous to the relationship between string frame and Einstein frame in string theory.

  • AdS Higgs mechanism from double trace deformed CFT.- [PDF] - [Article]

    Andreas Karch, Mianqi Wang, Merna Youssef
     

    Explicit breaking of a global symmetry in a conformal field theory is holographically dual to giving mass to a gauge field living in AdS via the Higgs mechanism. We show that if this breaking is induced via a double trace deformation the Higgs mechanism is induced via a scalar loop diagram. The mass can be calculated analytically in both bulk and field theory and we find perfect agreement. While representing familiar physics, the mechanism is identical to how the graviton picks up a mass in the holographic dual of a conformal field theory coupled to a bath.

  • Worldsheet from worldline.- [PDF] - [Article]

    Umut Gursoy, Guim Planella Planas
     

    We take a step toward a "microscopic" derivation of gauge-string duality. In particular, using the mathematical techniques of Strebel differentials and discrete exterior calculus, we obtain a bosonic string worldsheet action for a string embedded in (d+1)-dimensional asymptotically AdS space from multi-loop Feynman graphs of a quantum field theory of scalar matrices in d dimensions, in the limit of diverging loop number. Our work builds on the program started by 't Hooft in 1974, this time including the holographic dimension, which we show emerges from the continuum of Schwinger parameters of Feynman diagrams.

  • Permutation invariance, partition algebras and large $N$ matrix models.- [PDF] - [Article]

    Adrian Padellaro
     

    In this thesis we will study matrix models with discrete gauge group $S_N$. We will put these matrix models into a generalized Schur-Weyl duality framework where dual algebras, known as partition algebras, emerge. These form generalizations of the symmetric group algebras -- they are semi-simple finite-dimensional associative algebras with a basis labelled by diagrams. We review the structure and representation theory of partition algebras. These algebras are then used to compute expectation values of $S_N$ invariant observables. This is a step towards studying the emergence of new geometric structures in their Feynman diagram expansion. Matrix models also appear in the form of quantum mechanical models of matrix oscillators. We explore the implications of the Schur-Weyl duality framework to matrix quantum mechanics with permutation symmetry.

  • A Chern-Simons approach to self-dual gravity in (2+1)-dimensions and quantisation of Poisson structure.- [PDF] - [Article]

    Prince K. Osei
     

    The (2+1)-dimensional analog of self-dual gravity, obtained via dimensional reduction of the (3+1)-dimensional Holst action without reducing the internal gauge group, is studied. A Chern-Simons formulation for this theory is constructed based on the gauge group $SL(2,\CC)_\RR\rcross \Rsix$; it maps the 3d complex self-dual dynamical variable and connection to $6d$ real variables which combine into a $12d$ Cartan connection. Quantization proceeds by applying the combinatorial quantisation program of Chern-Simons theory. The Poisson structure for the moduli space of flat connections on $(SL(2,\CC)_\RR\rcross \Rsix)^{n+2g}$, which emerges in the combinatorial description of the phase space on $\RR \times \Sigma_{g,n},$ where $\Sigma_{g,n}$ is a genus-$g$ surface with $n$ punctures, is given in terms of the classical $r$-matrix for the quantum double $D(SL(2,\CC)_\RR)$, viewed as the double of a double $ D(SU(2)\dcross AN(2))$. This quantum double encodes the quantum symmetries of the quantum theory of the model.

  • Central Extension of Scaling Poincar\'e Algebra.- [PDF] - [Article]

    Yu Nakayama
     

    We discuss the possibility of a central extension of the Poincar\'e algebra and the scaling Poincar\'e algebra. In more than two space-time dimensions, all the central extensions are trivial and can be removed. In two space-time dimensions, both the Poincar\'e algebra and the scaling Poincar\'e algebra have distinct non-trivial central extensions that cannot be removed. In higher dimensions, the central charges between dilatation and global $U(1)$ symmetry may not be removed. Based on these central extensions, we give some examples of projective representations of the (scaling) Poincar\'e symmetry in two dimensions.

  • Resurgence and self-completion in renormalized gauge theories.- [PDF] - [Article]

    Alessio Maiezza, Juan Carlos Vasquez
     

    Under certain assumptions and independent of the instantons, we show that the logarithm expansion of dimensional regularization in quantum field theory needs a nonperturbative completion to have a renormalization-group flow valid at all energies. Then, we show that such nonperturbative completion has the analytic properties of the renormalons, which we find with no reference to diagrammatic calculations. We demonstrate that renormalon corrections necessarily lead to analyzable functions, namely, resurgent transseries. A detailed analysis of the resurgent properties of the renormalons is provided. The self-consistency of the theory requires these nonperturbative contributions to render the running coupling well-defined at any energy, thus with no Landau pole. We illustrate the point within the case of QED. This way, we explicitly realize the correspondence between the nonperturbative Landau pole scale and the renormalons. What is seen as a Landau pole in perturbation theory is cured by the nonperturbative, resurgent contributions.

  • Unification of Decoupling Limits in String and M-theory.- [PDF] - [Article]

    Chris D. A. Blair, Johannes Lahnsteiner, Niels A. J. Obers, Ziqi Yan
     

    We study and extend the duality web unifying different decoupling limits of type II superstring theories and M-theory. We systematically build connections to different corners, such as Matrix theories, nonrelativistic string and M-theory, tensionless (and ambitwistor) string theory, Carrollian string theory, and Spin Matrix limits of AdS/CFT. We discuss target space, worldsheet, and worldvolume aspects of these limits in arbitrary curved backgrounds.

  • Worldsheet Formalism for Decoupling Limits in String Theory.- [PDF] - [Article]

    Joaquim Gomis, Ziqi Yan
     

    We study the bosonic sector of a decoupling limit of type IIA superstring theory, where a background Ramond-Ramond one-form is fine-tuned to its critical value, such that it cancels the associated background D0-brane tension. The light excitations in this critical limit are D0-branes, whose dynamics are described by Banks-Fischler-Shenker-Susskind (BFSS) Matrix theory that corresponds to M-theory in the Discrete Light-Cone Quantization (DLCQ). We develop the worldsheet formalism for the fundamental string in the same critical limit of type IIA superstring theory. We show that the fundamental string has a nonrelativistic worldsheet, whose topology is described by nodal Riemann spheres as in ambitwistor string theory. We study the T-duality transformations of this string sigma model and provide a worldsheet derivation for the recently revived and expanded duality web that unifies a zoo of decoupling limits in type II superstring theories. By matching the string worldsheet actions, we demonstrate how some of these decoupling limits are related to tensionless (and ambitwistor) string theory, Carrollian string theory, and the Spin Matrix limits of the AdS/CFT correspondence.

  • Hamiltonian Forging of a Thermofield Double.- [PDF] - [Article]

    Daniel Faílde, Juan Santos-Suárez, David A. Herrera-Martí, Javier Mas
     

    We address the variational preparation of Gibbs states as the ground state of a suitably engineered Hamiltonian acting on the doubled Hilbert space. The construction is exact for quadratic fermionic Hamiltonians and gives excellent approximations up to fairly high quartic deformations. We provide a variational circuit whose optimization returns the unitary diagonalizing operator, thus giving access to the whole spectrum. The problem naturally implements the entanglement forging ansatz, allowing the computation of Thermofield Doubles with a higher number of qubits than in competing frameworks.

  • Computable and Faithful Lower Bound for Entanglement Cost.- [PDF] - [Article]

    Xin Wang, Mingrui Jing, Chengkai Zhu
     

    Quantum entanglement is a crucial resource in quantum information processing. However, quantifying the entanglement required to prepare quantum states and implement quantum processes remains challenging. This paper proposes computable and faithful lower bounds for the entanglement cost of general quantum states and quantum channels. We introduce the concept of logarithmic $k$-negativity, a generalization of logarithmic negativity, to establish a general lower bound for the entanglement cost of quantum states under quantum operations that completely preserve the positivity of partial transpose (PPT). This bound is efficiently computable via semidefinite programming and is non-zero for any entangled state that is not PPT, making it faithful in the entanglement theory with non-positive partial transpose. Furthermore, we delve into specific and general examples to demonstrate the advantages of our proposed bounds compared with previously known computable ones. Notably, we affirm the irreversibility of asymptotic entanglement manipulation under PPT operations for full-rank entangled states and the irreversibility of channel manipulation for amplitude damping channels. We also establish the best-known lower bound for the entanglement cost of arbitrary dimensional isotropic states. These findings push the boundaries of understanding the structure of entanglement and the fundamental limits of entanglement manipulation.
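
The logarithmic negativity that the paper's logarithmic $k$-negativity generalizes is straightforward to evaluate for small systems: take the partial transpose, then the base-2 log of its trace norm. A generic numpy sketch for a two-qubit Bell state (illustrative of the standard quantity only, not the paper's new bound):

```python
import numpy as np

def log_negativity(rho, dA=2, dB=2):
    # E_N(rho) = log2 || rho^{T_B} ||_1 : partial transpose on subsystem B,
    # then log2 of the sum of absolute eigenvalues (the trace norm).
    r = rho.reshape(dA, dB, dA, dB)
    r_tb = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    eigs = np.linalg.eigvalsh(r_tb)
    return np.log2(np.sum(np.abs(eigs)))

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)
print(log_negativity(rho))  # -> 1.0 for a maximally entangled two-qubit state
```

A PPT state has a positive partial transpose, so its trace norm is 1 and the negativity vanishes, which is why such bounds are faithful only for non-PPT entangled states.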

  • Simplified algorithm for the Worldvolume HMC and the Generalized-thimble HMC.- [PDF] - [Article]

    Masafumi Fukuma
     

    The Worldvolume Hybrid Monte Carlo method (WV-HMC method) [arXiv:2012.08468] is a reliable and versatile algorithm towards solving the sign problem. Similarly to the tempered Lefschetz thimble method [arXiv:1703.00861], this method mitigates the ergodicity problem inherent in algorithms based on Lefschetz thimbles. In addition to this advantage, the WV-HMC method significantly reduces the computational cost because it does not require the computation of the Jacobian in generating configurations. A crucial step in this method is the RATTLE algorithm, which projects at each molecular dynamics step a transported configuration onto a submanifold (worldvolume) in the complex space. In this paper, we simplify the RATTLE algorithm by using a simplified Newton method with an improved initial guess, which can likewise be implemented in the HMC algorithm for the generalized thimble method (GT-HMC method). We perform a numerical test for the convergence of the simplified Newton equation, and show that the convergence depends only weakly on the system size. The application of this simplified algorithm to various models will be reported in subsequent papers.
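
The flavor of a simplified Newton iteration, where the derivative is computed once and frozen across iterations, can be sketched on a toy constraint projection. This is a generic illustration of the idea only, not the actual WV-HMC RATTLE step (the names and the unit-circle constraint are invented for the example):

```python
import numpy as np

def project_simplified_newton(x0, f, grad, tol=1e-12, max_iter=100):
    # Project x0 onto the constraint surface {f(x) = 0} along the fixed
    # direction g0 = grad(x0), solving f(x0 + lam*g0) = 0 for lam with a
    # simplified Newton iteration: the slope g0.g0 is evaluated once and
    # frozen, so no Jacobian is rebuilt inside the loop.
    g0 = grad(x0)
    denom = g0 @ g0
    lam = 0.0
    for _ in range(max_iter):
        r = f(x0 + lam * g0)
        if abs(r) < tol:
            break
        lam -= r / denom
    return x0 + lam * g0

f = lambda x: x @ x - 1.0          # toy constraint: unit circle
grad = lambda x: 2.0 * x
x = project_simplified_newton(np.array([1.2, 0.4]), f, grad)
print(np.linalg.norm(x))  # -> 1.0 (point projected onto the circle)
```

The fixed-slope iteration converges linearly rather than quadratically, but each step is far cheaper, which is the trade-off the abstract describes.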

  • Jacobian conjecture: coloring Abdesselam-Rivasseau model.- [PDF] - [Article] - [UPDATED]

    Vasily Sazonov
     

    We consider the Abdesselam-Rivasseau (AR) model, which turns the Jacobian Conjecture (JC) into a problem of perturbative quantum field theory. Within the AR model, the JC inverse map is represented by a formal integral generating the tree expansion for this map. By assigning colors to the edges in the vertex of the AR model and performing selective Gaussian integration, we show the termination of the tree series for the inverse map. The latter implies the correctness of the JC.

  • Quantum Chaos and Coherence: Random Parametric Quantum Channels.- [PDF] - [Article] - [UPDATED]

    Apollonas S. Matsoukas-Roubeas, Tomaž Prosen, Adolfo del Campo
     

    The survival probability of an initial Coherent Gibbs State (CGS) is a natural extension of the Spectral Form Factor (SFF) to open quantum systems. To quantify the interplay between quantum chaos and decoherence away from the semi-classical limit, we investigate the relation of this generalized SFF with the corresponding $l_1$-norm of coherence. As a working example, we introduce Parametric Quantum Channels (PQC), a discrete-time model of unitary evolution periodically interrupted by the effects of measurements or transient interactions with an environment. The Energy Dephasing (ED) dynamics arises as a specific case in the Markovian limit. We demonstrate our results in a series of random matrix models.
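
For orientation, the ordinary spectral form factor that the survival probability generalizes can be estimated directly from random-matrix spectra. A generic numpy sketch (matrix size, sample count, and time are arbitrary illustrative choices, not tied to the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

def sff(eigvals, t):
    # Spectral form factor |Tr exp(-i H t)|^2 / N^2 from the spectrum.
    z = np.exp(-1j * eigvals * t).sum()
    return np.abs(z) ** 2 / len(eigvals) ** 2

# Average over a small GUE ensemble.
N, samples, t = 64, 50, 3.0
vals = []
for _ in range(samples):
    a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    h = (a + a.conj().T) / 2
    vals.append(sff(np.linalg.eigvalsh(h), t))
print(np.mean(vals))
```

At $t=0$ the SFF equals 1 by construction; ensemble averaging smooths the late-time dip-ramp-plateau structure that diagnoses level repulsion.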

  • 3D Ising CFT and Exact Diagonalization on Icosahedron: The Power of Conformal Perturbation Theory.- [PDF] - [Article] - [UPDATED]

    Bing-Xin Lao, Slava Rychkov
     

    We consider the transverse field Ising model in $(2+1)$D, putting 12 spins at the vertices of the regular icosahedron. The model is tiny by the exact diagonalization standards, and breaks rotation invariance. Yet we show that it allows a meaningful comparison to the 3D Ising CFT on $\mathbb{R}\times S^2$, by including effective perturbations of the CFT Hamiltonian with a handful of local operators. This extreme example shows the power of conformal perturbation theory in understanding finite $N$ effects in models on regularized $S^2$. Its ideal arena of application should be the recently proposed models of fuzzy sphere regularization.

  • Einstein-Bumblebee-Dilaton black hole solution.- [PDF] - [Article] - [UPDATED]

    L. A. Lessa, J. E. G. Silva
     

    We obtain new black hole solutions in an Einstein-Bumblebee-scalar theory. Starting from an Einstein-Bumblebee theory in D + d dimensions, the scalar dilaton field and its interaction with the gravitational and bumblebee fields are obtained by Kaluza-Klein (KK) reduction over the extra dimensions. Considering the effects of both the bumblebee vacuum expectation value (VEV) and the fluctuations over the VEV, we obtain new charged solutions in (3 + 1) dimensions. For a vanishing dilaton, the black hole turns out to be a charged de Sitter-Reissner-Nordstrom solution, where the transverse mode is the Maxwell field and the longitudinal mode is the cosmological constant. The stability of these new solutions is investigated by means of the analysis of the black hole thermodynamics. The temperature, entropy and heat capacity show that these modified black holes are thermodynamically stable.

  • Scattering with Neural Operators.- [PDF] - [Article] - [UPDATED]

    Sebastian Mizera
     

    Recent advances in machine learning establish the ability of certain neural-network architectures called neural operators to approximate maps between function spaces. Motivated by a prospect of employing them in fundamental physics, we examine applications to scattering processes in quantum mechanics. We use an iterated variant of Fourier neural operators to learn the physics of Schr\"odinger operators, which map from the space of initial wave functions and potentials to the final wave functions. These deep operator learning ideas are put to test in two concrete problems: a neural operator predicting the time evolution of a wave packet scattering off a central potential in $1+1$ dimensions, and the double-slit experiment in $2+1$ dimensions. At inference, neural operators can become orders of magnitude more efficient compared to traditional finite-difference solvers.
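
The core of a Fourier neural operator is a spectral convolution layer: transform the input function to frequency space, act with learned complex weights on a truncated set of low modes, and transform back. A minimal single-layer numpy sketch (untrained random weights; purely illustrative of the architecture, not the paper's iterated Schr\"odinger-operator setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_layer(u, weights, n_modes):
    # One Fourier layer: FFT, multiply the lowest n_modes by learned
    # complex weights, zero out the rest, inverse FFT back to real space.
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

n, n_modes = 128, 16
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                     # toy input function
v = spectral_layer(u, weights, n_modes)
print(v.shape)  # -> (128,)
```

Because the weights act on Fourier modes rather than grid points, the same trained layer can be evaluated on finer discretizations, which is what makes neural operators maps between function spaces rather than between fixed-size arrays.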

  • Krylov Complexity of Fermionic and Bosonic Gaussian States.- [PDF] - [Article] - [UPDATED]

    Kiran Adhikari, Adwait Rijal, Ashok Kumar Aryal, Mausam Ghimire, Rajeev Singh, Christian Deppe
     

    The concept of \emph{complexity} has become pivotal in multiple disciplines, including quantum information, where it serves as an alternative metric for gauging the chaotic evolution of a quantum state. This paper focuses on \emph{Krylov complexity}, a specialized form of quantum complexity that offers an unambiguous and intrinsically meaningful assessment of the spread of a quantum state over all possible orthogonal bases. Our study is situated in the context of Gaussian quantum states, which are fundamental to both bosonic and fermionic systems and can be fully described by a covariance matrix. We show that while the covariance matrix is essential, it is insufficient alone for calculating Krylov complexity due to its lack of relative phase information. Our findings suggest that the relative covariance matrix can provide an upper bound for the Krylov complexity of Gaussian quantum states. We also explore the implications of Krylov complexity for theories proposing complexity as a candidate for holographic duality by computing Krylov complexity for thermofield double (TFD) states and the Dirac field.
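
Krylov complexity is built on the Lanczos recursion, which tridiagonalizes the Hamiltonian in the Krylov basis generated from an initial state. A generic numpy sketch of the recursion (with full reorthogonalization for numerical stability; this is the standard algorithm, not the paper's covariance-matrix machinery):

```python
import numpy as np

def lanczos(H, psi0, m):
    # Lanczos recursion: builds the Krylov basis Q and the tridiagonal
    # coefficients a_n (diagonal) and b_n (off-diagonal).
    n = len(psi0)
    Q = np.zeros((m, n))
    a, b = np.zeros(m), np.zeros(m - 1)
    Q[0] = psi0 / np.linalg.norm(psi0)
    for j in range(m):
        w = H @ Q[j]
        a[j] = Q[j] @ w
        w = w - a[j] * Q[j]
        if j > 0:
            w = w - b[j - 1] * Q[j - 1]
        for k in range(j + 1):          # full reorthogonalization
            w = w - (Q[k] @ w) * Q[k]
        if j < m - 1:
            b[j] = np.linalg.norm(w)
            Q[j + 1] = w / b[j]
    return a, b

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2
a, b = lanczos(H, np.ones(6), 6)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
# Run to full dimension, the tridiagonal matrix reproduces the spectrum.
print(np.allclose(np.sort(np.linalg.eigvalsh(T)), np.sort(np.linalg.eigvalsh(H))))
```

The spread (Krylov) complexity of an evolving state is then the mean position $\sum_n n\,|\langle K_n|\psi(t)\rangle|^2$ in this basis.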

  • A new realization of quantum algebras in gauge theory and Ramification in the Langlands program.- [PDF] - [Article] - [UPDATED]

    Nathan Haouzi
     

    We realize the fundamental representations of quantum algebras via the supersymmetric Higgs mechanism in gauge theories with 8 supercharges on an $\Omega$-background. We test our proposal for quantum affine algebras, by probing the Higgs phase of a 5d quiver gauge theory on a circle. We show that our construction implies the existence of tame ramification in the Aganagic-Frenkel-Okounkov formulation of the geometric Langlands program, a correspondence which identifies $q$-conformal blocks of the quantum affine algebra with those of a Langlands dual deformed ${\cal W}$-algebra. The new feature of ramified blocks is their definition in terms of Drinfeld polynomials for a set of quantum affine weights. In enumerative geometry, the blocks are vertex functions counting quasimaps to quiver varieties describing moduli spaces of vortices. Physically, the vortices admit a description as a 3d ${\cal N}=2$ quiver gauge theory on the Higgs branch of the 5d gauge theory, uniquely determined from the Drinfeld polynomial data; the blocks are supersymmetric indices for the vortex theory supported on a 3-manifold with distinguished BPS boundary conditions. The top-down explanation of our results is found in the 6d $(2,0)$ little string theory, where tame ramification is provided by certain D-branes. When the string mass is taken to be large, we make contact with various physical aspects of the point particle superconformal limit: the Gukov-Witten description of ramification via monodromy defects in 4d Super Yang-Mills (and their S-duality), the Nekrasov-Tsymbaliuk solution to the Knizhnik-Zamolodchikov equations, and the classification of massive deformations of tamely ramified Hitchin systems. In a companion paper, we will show that our construction implies a solution to the local Alday-Gaiotto-Tachikawa conjecture.

  • Superconformal Gravity And The Topology Of Diffeomorphism Groups.- [PDF] - [Article] - [UPDATED]

    Jay Cushing, Gregory W. Moore, Martin Roček, Vivek Saxena
     

    Twisted four-dimensional supersymmetric Yang-Mills theory famously gives a useful point of view on the Donaldson and Seiberg-Witten invariants of four-manifolds. In this paper we generalize the construction to include a path integral formulation of generalizations of Donaldson invariants for smooth families of four-manifolds. Mathematically these are equivariant cohomology classes for the action of the oriented diffeomorphism group on the space of metrics on the manifold. In principle these cohomology classes should contain nontrivial information about the topology of the diffeomorphism group of the four-manifold. We show that the invariants may be interpreted as the standard topologically twisted path integral of four-dimensional $\mathcal{N}=2$ supersymmetric Yang-Mills coupled to topologically twisted background fields of conformal supergravity.

hep-ex

  • ALICE 3: a next-generation heavy-ion detector for LHC Run 5 and beyond.- [PDF] - [Article]

    Nicola Nicassio
     

    The ALICE Collaboration proposes a completely new apparatus, ALICE 3, for the LHC Runs 5 and 6. The detector consists of a large pixel-based tracking system covering eight units of pseudorapidity, complemented by multiple systems for particle identification, including silicon time-of-flight layers, a ring-imaging Cherenkov detector, a muon identification system, an electromagnetic calorimeter and a forward conversion tracker. Track pointing resolution of better than 10~$\mu$m for $p_T$ >200 MeV/$c$ can be achieved by placing the vertex detector on a retractable structure inside the beam pipe. ALICE 3 will, on the one hand, enable novel studies of the quark-gluon plasma (QGP) and, on the other hand, open up important physics opportunities in other areas of QCD and beyond. The main new studies in the QGP sector focus on low-$p_T$ heavy-flavour production, including beauty hadrons, multi-charm baryons and charm-charm correlations, as well as on precise multi-differential measurements of dielectron emission to probe the mechanism of chiral-symmetry restoration and the time-evolution of the QGP temperature. Besides QGP studies, ALICE 3 can uniquely contribute to hadronic physics, with femtoscopic studies of the interaction potentials between charm mesons and searches for nuclei with charm, and to fundamental physics, with tests of the Low theorem for ultra-soft photon emission. This paper will cover the detector concept, the status of novel sensor R\&D and the resulting physics performance.

  • A model-independent measurement of the CKM angle $\gamma$ in partially reconstructed $B^{\pm} \to D^{*} h^{\pm}$ decays with $D \to K_{S}^{0} h^{+}h^{-}$ $(h=\pi, K)$.- [PDF] - [Article]

    R. Aaij, A.S.W. Abdelmotteleb, C. Abellan Beteta, F. Abudinén, T. Ackernley, B. Adeva, M. Adinolfi, P. Adlarson, C. Agapopoulou, C.A. Aidala, Z. Ajaltouni, S. Akar, K. Akiba, P. Albicocco, J. Albrecht, F. Alessio, M. Alexander, A. Alfonso Albero, Z. Aliouche, P. Alvarez Cartelle, R. Amalric, S. Amato, J.L. Amey, Y. Amhis, L. An, L. Anderlini, M. Andersson, A. Andreianov, P. Andreola, M. Andreotti, D. Andreou, A. Anelli, D. Ao, F. Archilli, M. Argenton, S. Arguedas Cuendis, A. Artamonov, M. Artuso, E. Aslanides, M. Atzeni, B. Audurier, D. Bacher, I. Bachiller Perea, S. Bachmann, M. Bachmayer, J.J. Back, A. Bailly-reyre, P. Baladron Rodriguez, V. Balagura, W. Baldini, J. Baptista de Souza Leite, M. Barbetti, I. R. Barbosa, R.J. Barlow, S. Barsuk, W. Barter, M. Bartolini, et al. (1045 additional authors not shown)
     

    A measurement of $C\!P$-violating observables in $B^{\pm} \to D^{*} K^{\pm}$ and $B^{\pm} \to D^{*} \pi^{\pm}$ decays is made where the photon or neutral pion from the $D^{*} \to D\gamma$ or $D^{*} \to D\pi^{0}$ decay is not reconstructed. The $D$ meson is reconstructed in the self-conjugate decay modes, $D \to K_{S}^{0} \pi^{+} \pi^{-}$ or $D \to K_{S}^{0} K^{+} K^{-}$. The distribution of signal yields in the $D$ decay phase space is analysed in a model-independent way. The measurement uses a data sample collected in proton-proton collisions at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to a total integrated luminosity of approximately 9 fb$^{-1}$. The $B^{\pm} \to D^{*} K^{\pm}$ and $B^{\pm} \to D^{*} \pi^{\pm}$ $C\!P$-violating observables are interpreted in terms of hadronic parameters and the CKM angle $\gamma$, resulting in a measurement of $\gamma = (92^{+21}_{-17})^{\circ}$. The total uncertainty includes the statistical and systematic uncertainties, and the uncertainty due to external strong-phase inputs.

  • Photo-Detection Efficiency measurement for FBK HD Near-UV sensitive SiPMs at 10 K temperature.- [PDF] - [Article]

    Meiyuenan Ma, Jiangfeng Zhou, Fengbo Gu, Junhui Liao, Yuanning Gao, Zhaohua Peng, Jian Zheng, Guangpeng An, Lifeng Zhang, Lei Zhang, Zhuo Liang, Xiuliang Zhao
     

    We measured the Photo-Detection Efficiency (PDE) of FBK NUV-HD-Cryo SiPMs at 10 K with 405 nm and 530 nm light. The SiPMs were tested under bias voltages of 6 - 11 V overvoltage (OV). The PDE increases with increasing OV, reaching $\sim$ 40\% at an OV of 9 V for both 405 nm and 530 nm light. The tests demonstrated that this type of SiPM can serve as a photosensor in a liquid helium Time Projection Chamber (TPC), which has been proposed for direct dark matter searches. In addition, we measured the SiPM's PDE at Room Temperature (RT); the results are consistent with other measurements of similar SiPM models.

  • Anomalous spin precession systematic effects in the search for a muon EDM using the frozen-spin technique.- [PDF] - [Article]

    G. Cavoto, R. Chakraborty, A. Doinaki, C. Dutsov, M. Giovannozzi, T. Hume, K. Kirch, K. Michielsen, L. Morvaj, A. Papa, F. Renga, M. Sakurai, P. Schmidt-Wellenburg
     

    At the Paul Scherrer Institut (PSI), we are currently working on the development of a high-precision apparatus with the aim of searching for the muon electric dipole moment (EDM) with unprecedented sensitivity. The underpinning principle of this experiment is the frozen-spin technique, a method that suppresses the spin precession due to the anomalous magnetic moment, thereby enhancing the signal-to-noise ratio for EDM signals. This increased sensitivity facilitates measurements that would be difficult to achieve with conventional $g - 2$ muon storage rings. Given the availability of the $p = 125$ MeV/$c$ muon beam at PSI, the anticipated statistical sensitivity for the EDM after a year of data collection is $6\times 10^{-23}e\cdot$cm. To achieve this goal, it is imperative to meticulously analyse and mitigate any potential spurious effects that could mimic EDM signals. In this study, we present a quantitative methodology to evaluate the systematic effects that might arise in the context of employing the frozen-spin technique within a compact storage ring. Our approach entails the analytical derivation of equations governing the motion of the muon spin in the electromagnetic (EM) fields intrinsic to the experimental setup, validated through subsequent numerical simulations. We also illustrate a method to calculate the cumulative geometric (Berry's) phase. This work complements ongoing experimental efforts to detect a muon EDM at PSI and contributes to a broader understanding of spin-precession systematic effects.

  • Search for Dark Photon Dark Matter in the Mass Range 41--74 $\mu\mathrm{eV}$ using Millimeter-Wave Receiver and Radioshielding Box.- [PDF] - [Article] - [UPDATED]

    S. Adachi, F. Fujinaka, S. Honda, Y. Muto, H. Nakata, Y. Sueno, T. Sumida, J. Suzuki, O. Tajima, H. Takeuchi
     

    Dark photons have been considered potential candidates for dark matter. The dark photon dark matter (DPDM) has a mass and interacts with electromagnetic fields via kinetic mixing with a coupling constant of $\chi$. Thus, DPDMs are converted into ordinary photons at metal surfaces. Using a millimeter-wave receiver set in a radioshielding box, we performed experiments to detect the conversion photons from the DPDM in the frequency range 10--18 GHz, which corresponds to a mass range 41--74 $\mu\mathrm{eV}$. We found no conversion photon signal in this range and set the upper limits to $\chi < (0.5\text{--}3.9) \times 10^{-10}$ at a 95% confidence level.
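
The quoted mass window follows directly from $E = h\nu$ applied to the 10--18 GHz search band; a quick numerical check:

```python
# Dark-photon mass corresponding to a conversion-photon frequency,
# via E = h * nu with the CODATA Planck constant in eV s.
h = 4.135667696e-15

def mass_micro_ev(nu_ghz):
    return h * nu_ghz * 1e9 * 1e6  # eV -> micro-eV

for nu in (10.0, 18.0):
    print(f"{nu:.0f} GHz -> {mass_micro_ev(nu):.1f} micro-eV")
# -> 41.4 and 74.4 micro-eV, matching the quoted 41--74 micro-eV range
```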

  • Review of real-time data processing for collider experiments.- [PDF] - [Article] - [UPDATED]

    V.V. Gligorov, V. Reković
     

    We review the status of, and prospects for, real-time data processing for collider experiments in experimental High Energy Physics. We discuss the historical evolution of data rates and volumes in the field and place them in the context of data in other scientific domains and commercial applications. We review the requirements for real-time processing of these data, and the constraints they impose on the computing architectures used for such processing. We describe the evolution of real-time processing over the past decades with a particular focus on the Large Hadron Collider experiments and their planned upgrades over the next decade. We then discuss how the scientific trends in the field and commercial trends in computing architectures may influence real-time processing over the coming decades.

  • Observation of quantum entanglement in top-quark pairs using the ATLAS detector.- [PDF] - [Article] - [UPDATED]

    ATLAS Collaboration
     

    We report the highest-energy observation of entanglement, in top$-$antitop quark events produced at the Large Hadron Collider, using a proton$-$proton collision data set with a center-of-mass energy of $\sqrt{s}=13$ TeV and an integrated luminosity of 140 fb$^{-1}$ recorded with the ATLAS experiment. Spin entanglement is detected from the measurement of a single observable $D$, inferred from the angle between the charged leptons in their parent top- and antitop-quark rest frames. The observable is measured in a narrow interval around the top$-$antitop quark production threshold, where the entanglement detection is expected to be significant. It is reported in a fiducial phase space defined with stable particles to minimize the uncertainties that stem from limitations of the Monte Carlo event generators and the parton shower model in modelling top-quark pair production. The entanglement marker is measured to be $D=-0.547 \pm 0.002~\text{(stat.)} \pm 0.021~\text{(syst.)}$ for $340 < m_{t\bar{t}} < 380 $ GeV. The observed result is more than five standard deviations from a scenario without entanglement and hence constitutes both the first observation of entanglement in a pair of quarks and the highest-energy observation of entanglement to date.
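
The entanglement marker is extracted from the normalized lepton angular distribution $\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\varphi} = \frac{1}{2}\left(1 - D\cos\varphi\right)$, which gives $D = -3\langle\cos\varphi\rangle$, with $D < -1/3$ signalling entanglement. A toy Monte Carlo sketch of the estimator (the injected value is hypothetical, not ATLAS data):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_cos_phi(D, n):
    # Rejection-sample cos(phi) from p(c) = (1 - D*c)/2 on [-1, 1].
    out = []
    while len(out) < n:
        c = rng.uniform(-1, 1, size=n)
        u = rng.uniform(0, 1, size=n)
        keep = u < (1 - D * c) / (1 + abs(D))
        out.extend(c[keep])
    return np.array(out[:n])

D_true = -0.55                    # hypothetical injected marker value
c = sample_cos_phi(D_true, 100000)
D_hat = -3.0 * c.mean()           # estimator D = -3 <cos(phi)>
print(D_hat, D_hat < -1 / 3)      # below -1/3 signals entanglement
```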

quant-ph

  • Control of individual electron-spin pairs in an electron-spin bath.- [PDF] - [Article]

    H. P. Bartling, N. Demetriou, N. C. F. Zutt, D. Kwiatkowski, M. J. Degen, S. J. H. Loenen, C. E. Bradley, M. Markham, D. J. Twitchen, T. H. Taminiau
     

    The decoherence of a central electron spin due to the dynamics of a coupled electron-spin bath is a core problem in solid-state spin physics. Ensemble experiments have studied the central spin coherence in detail, but such experiments average out the underlying quantum dynamics of the bath. Here, we show the coherent back-action of an individual NV center on an electron-spin bath and use it to detect, prepare and control the dynamics of a pair of bath spins. We image the NV-pair system with sub-nanometer resolution and reveal a long dephasing time ($T_2^* = 44(9)$ ms) for a qubit encoded in the electron-spin pair. Our experiment reveals the microscopic quantum dynamics that underlie the central spin decoherence and provides new opportunities for controlling and sensing interacting spin systems.

  • Optimal twirling depths for shadow tomography in the presence of noise.- [PDF] - [Article]

    Pierre-Gabriel Rozon, Ning Bao, Kartiek Agarwal
     

    The classical shadows protocol is an efficient strategy for estimating properties of an unknown state $\rho$ using a small number of state copies and measurements. In its original form, it involves twirling the state with unitaries from some ensemble and measuring the twirled state in a fixed basis. It was recently shown that for computing local properties, optimal sample complexity (copies of the state required) is remarkably achieved for unitaries drawn from shallow depth circuits composed of local entangling gates, as opposed to purely local (zero depth) or global twirling (infinite depth) ensembles. Here we consider the sample complexity as a function of the depth of the circuit, in the presence of noise. We find that this noise has important implications for determining the optimal twirling ensemble. Under fairly general conditions, we i) show that any single-site noise can be accounted for using a depolarizing noise channel with an appropriate damping parameter $f$; ii) compute thresholds $f_{\text{th}}$ at which optimal twirling reduces to local twirling for arbitrary operators and iii) for $n^{\text{th}}$-order Renyi entropies ($n \ge 2$); and iv) provide a meaningful upper bound $t_{\text{max}}$ on the optimal circuit depth for any finite noise strength $f$, which applies to all operators and entanglement entropy measurements. These thresholds strongly constrain the search for optimal strategies to implement shadow tomography and can be easily tailored to the experimental system at hand.
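
In the zero-depth (local Pauli) limit, classical shadows reduce to a simple snapshot estimator: measure in a random Pauli basis and apply the inverse channel $\hat\rho = 3\,U^\dagger|b\rangle\langle b|U - \mathbb{1}$ per qubit. A single-qubit numpy sketch of this baseline (illustrative of the standard protocol, not the shallow-circuit ensembles studied here):

```python
import numpy as np

rng = np.random.default_rng(3)

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
# Columns are the eigenvectors of X, Y, Z respectively.
bases = [
    np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
    I2,
]

def shadow_estimate(psi, obs, n_shots):
    # Each shot: pick a random Pauli basis, sample an outcome b, form the
    # inverse-channel snapshot rho_hat = 3|u_b><u_b| - I; averaging
    # tr(obs @ rho_hat) over shots gives an unbiased estimate of <obs>.
    total = 0.0
    for _ in range(n_shots):
        U = bases[rng.integers(3)]
        probs = np.abs(U.conj().T @ psi) ** 2
        b = rng.choice(2, p=probs / probs.sum())
        u = U[:, b]
        rho_hat = 3 * np.outer(u, u.conj()) - I2
        total += np.real(np.trace(obs @ rho_hat))
    return total / n_shots

psi = np.array([1.0, 0.0])  # |0>, for which <Z> = 1 exactly
est = shadow_estimate(psi, Z, 20000)
print(est)
```

The single-shot variance of this local estimator is what deeper (shallow-circuit) twirling ensembles improve on for non-local observables, and what noise, as the abstract argues, pushes back toward the local limit.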

  • A Unified Interface Model for Dissipative Transport of Bosons and Fermions.- [PDF] - [Article]

    Y. Minoguchi, J. Huber, L. Garbe, A. Gambassi, P. Rabl
     

    We study the directed transport of bosons along a one dimensional lattice in a dissipative setting, where the hopping is only facilitated by coupling to a Markovian reservoir. By combining numerical simulations with a field-theoretic analysis, we investigate the current fluctuations for this process and determine its asymptotic behavior. These findings demonstrate that dissipative bosonic transport belongs to the KPZ universality class and therefore, in spite of the drastic difference in the underlying particle statistics, it features the same coarse grained behavior as the corresponding asymmetric simple exclusion process (ASEP) for fermions. However, crucial differences between the two processes emerge when focusing on the full counting statistics of current fluctuations. By mapping both models to the physics of fluctuating interfaces, we find that dissipative transport of bosons and fermions can be understood as surface growth and erosion processes, respectively. Within this unified description, both the similarities and discrepancies between the full counting statistics of the transport are reconciled. Beyond purely theoretical interest, these findings are relevant for experiments with cold atoms or long-lived quasi-particles in nanophotonic lattices, where such transport scenarios can be realized.

  • Observation of the non-Hermitian skin effect and Fermi skin on a digital quantum computer.- [PDF] - [Article]

    Ruizhe Shen, Tianqi Chen, Bo Yang, Ching Hua Lee
     

    Non-Hermitian physics has attracted considerable attention in recent years, in particular the non-Hermitian skin effect (NHSE) for its extreme sensitivity and non-locality. While the NHSE has been physically observed in various classical metamaterials and even ultracold atomic arrays, its highly nontrivial implications in many-body dynamics have never been experimentally investigated. In this work, we report the first observation of the NHSE on a universal quantum processor, as well as its characteristic but elusive Fermi skin from many-fermion statistics. To implement NHSE dynamics on a quantum computer, the effective time-evolution circuit not only needs to be non-reciprocal and non-unitary, but must also be scaled up to a sufficient number of lattice qubits to achieve spatial non-locality. We show how such a non-unitary operation can be systematically realized by post-selecting multiple ancilla qubits, as demonstrated through two paradigmatic non-reciprocal models on a noisy IBM quantum processor, with clear signatures of asymmetric spatial propagation and many-body Fermi skin accumulation. To minimize errors from inevitable device noise, time evolution is performed using a trainable optimized quantum circuit produced with variational quantum algorithms. Our study represents a critical milestone in the quantum simulation of non-Hermitian lattice phenomena on present-day quantum computers, and can be readily generalized to more sophisticated many-body models with the remarkable programmability of quantum computers.

  • Robust non-ergodicity of ground state in the $\beta$ ensemble.- [PDF] - [Article]

    Adway Kumar Das, Anandamohan Ghosh, Ivan M. Khaymovich
     

    In various chaotic quantum many-body systems, the ground states show non-trivial athermal behavior despite the bulk states exhibiting thermalization. Such athermal states play a crucial role in quantum information theory and its applications. Moreover, any generic quantum many-body system in the Krylov basis is represented by a tridiagonal Lanczos Hamiltonian, which is analogous to the matrices from the $\beta$ ensemble, a well-studied random matrix model with level repulsion tunable via the parameter $\beta$. Motivated by this, here we focus on the localization properties of the ground and anti-ground states of the $\beta$ ensemble. Both analytically and numerically, we show that both edge states demonstrate non-ergodic (fractal) properties for $\beta\sim\mathcal{O}(1)$ while the typical bulk states are ergodic. Surprisingly, the fractal dimension of the edge states remains three times smaller than that of the bulk states, irrespective of the global phase of the $\beta$ ensemble. In addition to the fractal dimensions, we also consider the distribution of the localization centers of the spectral edge states, their mutual separation, as well as the spatial and correlation properties of the first excited states.

  • Quantum tomography of magnons using Brillouin light scattering.- [PDF] - [Article]

    Sanchar Sharma, Silvia Viola Kusminskiy, Victor ASV Bittencourt
     

    Quantum magnonics, an emerging field focusing on the study of magnons for quantum applications, requires precise measurement methods capable of resolving single magnons. Existing techniques introduce additional dissipation channels and are not apt for magnets in free space. Brillouin light scattering (BLS) is a well-established technique for probing the magnetization known for its high sensitivity and temporal resolution. The coupling between magnons and photons is controlled by a laser input, so it can be switched off when a measurement is not needed. In this article, we theoretically investigate the efficacy of BLS for quantum tomography of magnons. We model a finite optomagnonic waveguide, including the optical noise added by the dielectric, to calculate the signal-to-noise ratio (SNR). We find that the SNR is typically low due to a small magneto-optical coupling; nevertheless, it can be significantly enhanced by injecting squeezed vacuum into the waveguide. We reconstruct the density matrix of the magnons from the statistics of the output photons using a maximum likelihood estimate. The classical component of a magnon state, defined as the regions of positive Wigner function, can be reconstructed with high accuracy, while the non-classical component necessitates either a higher SNR or a larger dataset. The latter requires more compact data structures and advanced algorithms for post-processing. The SNR is limited partially by the input laser power, which can be increased by designing the optomagnonic cavity with a heat sink.

  • Mimicking Classical Noise in Ion Channels by Quantum Decoherence.- [PDF] - [Article]

    Mina Seifi, Ali Soltanmanesh, Afshin Shafiee
     

    The mechanism of selectivity in ion channels is still an open question in biology. According to recent proposals, the selectivity filter of the ion channel, which plays a key role in the channel's function, may show quantum coherence, which could help explain the selection mechanism and conduction of ions. However, according to decoherence theory, the presence of environmental noise causes decoherence and loss of quantum effects. One may hope that the effect of classical environmental noise in ion channels can be modeled through the picture that quantum decoherence theory presents. In this paper, we simulated the behavior of the ion channel system in the spin-boson model using the unitary evolution of a stochastic Hamiltonian operator under the classical noise model. In a different approach, we also modeled the system evolution as a two-level spin-boson model with tunneling, interacting with a bath of harmonic oscillators, using decoherence theory. The results of this system were discussed in different classical and quantum regimes. Examining the results, we found that the spin-boson model at a high hopping rate of potassium ions can simulate the behavior of the system in the classical noise approach. This result is further evidence that ion channels need high speed for high selectivity.

  • Resonance of Geometric Quantities and Hidden Symmetry in the Asymmetric Rabi Model.- [PDF] - [Article]

    Qinjing Yu, Zhiguo Lü
     

    We present the interesting resonance of two kinds of geometric quantities, namely the Aharonov-Anandan (AA) phase and the time-energy uncertainty, and reveal the relation between resonance and the hidden symmetry in the asymmetric Rabi model by numerical and analytical methods. By combining the counter-rotating hybridized rotating-wave method with time-dependent perturbation theory, we solve systematically the time evolution operator and then obtain the geometric phase of the Rabi model. In comparison with the numerically exact solutions, we find that the analytical results accurately describe the geometric quantities in a wide parameter space. We unveil the effect of the bias on the resonance of geometric quantities: (1) the positions of all harmonic resonances shift, stemming from the shift of the Rabi frequency in the presence of the bias; (2) even-order harmonic resonances occur due to the bias. When the driving frequency is equal to a subharmonic of the bias, the odd higher-order harmonic resonances disappear. Finally, the hidden symmetry resembles that of the quantum Rabi model with bias, which indicates that the quasienergy spectra are similar to the energy spectra of the latter.

  • Efficient multipartite entanglement purification with non-identical states.- [PDF] - [Article]

    Hao Qin, Ming-Ming Du, Xi-Yun Li, Wei Zhong, Lan Zhou, Yu-Bo Sheng
     

    We present an efficient and general multipartite entanglement purification protocol (MEPP) for N-photon systems in Greenberger-Horne-Zeilinger (GHZ) states with non-identical input states. As a branch of entanglement purification, besides the cases of successful purification, the recurrence MEPP actually yields reusable discarded items which are usually regarded as failures. Our protocol contains two parts for bit-flip error correction. The first is the conventional MEPP, corresponding to the successful cases. The second includes two efficient approaches, recycling purification with entanglement links and direct residual entanglement purification, that can utilize the discarded items. We also compare the two approaches. Which method to use depends on the initial input states; in most cases the direct residual purification approach is optimal, since it not only may obtain a higher-fidelity entangled state but also requires no additional sophisticated entanglement links. In addition, for phase-flip errors, the discarded items still have available residual entanglement in the case of different input states. With these approaches, this MEPP has a higher efficiency than all previous MEPPs, and it may have potential applications in future long-distance quantum communications and networks.

  • Realization of a programmable multi-purpose photonic quantum memory with over-thousand qubit manipulations.- [PDF] - [Article]

    Sheng Zhang, Jixuan Shi, Zhaibin Cui, Ye Wang, Yukai Wu, Luming Duan, Yunfei Pu
     

    One of the most important building blocks for a quantum network is a photonic quantum memory, which serves as the interface between the communication channel and the local functional unit. A programmable quantum memory that can process a large stream of flying qubits and fulfill the requirements of multiple core functions in a quantum network has yet to be realized. Here we report a high-performance quantum memory which can simultaneously store 72 optical qubits and support up to 1000 consecutive operations in a random-access way. This quantum memory can also be adapted on demand to implement a quantum queue, stack, and buffer, as well as the synchronization and reshuffling of 4 entangled photon pairs from a probabilistic entanglement source, which is an essential requirement for the realization of quantum repeaters and efficient routing in quantum networks.

  • Liouvillian-gap analysis of open quantum many-body systems in the weak dissipation limit.- [PDF] - [Article]

    Takashi Mori
     

    Recent experiments have reported that novel physics emerges in open quantum many-body systems due to an interplay of interactions and dissipation, which stimulates theoretical studies of the many-body Lindblad equation. Although the strong-dissipation regime receives considerable interest in this context, this work focuses on weak bulk dissipation. By examining the spectral property of the many-body Lindblad generator, we find that its spectral gap generically shows a singularity in the weak-dissipation limit when the thermodynamic limit is taken first. Such singular behavior is related to the concept of the Ruelle-Pollicott resonance in chaos theory, which determines the timescale of thermalization of an isolated system. Thus, the many-body Lindblad equation in the weak-dissipation regime contains nontrivial information on intrinsic properties of a quantum many-body system.

  • Coherence in permutation-invariant state enhances permutation-asymmetry.- [PDF] - [Article]

    Masahito Hayashi
     

    A Dicke state and its decohered state are invariant under permutation. However, when another qubit state is attached to each of them, the whole state is no longer permutation invariant and has a certain permutation asymmetry. The amount of asymmetry can be measured by the number of distinguishable states under the group action or by the mutual information. This paper investigates how the coherence of a Dicke state affects the amount of asymmetry. To address this problem asymptotically, we introduce a new type of central limit theorem by using several formulas on hypergeometric functions. We reveal that the amount of asymmetry in the case of a Dicke state has a strictly larger order than that of the decohered state in a specific type of limit.

  • Topological Discrimination of Steep to Supersteep Gap to Emergence of Tunneling in Adiabatic Quantum Processes.- [PDF] - [Article]

    Edmond Jonckheere
     

    It is shown that the gap that limits the speed of a quantum annealing process can take three salient morphologies: (i) the supersteep gap where both the ground and the first excited eigenenergy level curves have topologically related pairs of nearby inflection points giving both the maximum and the minimum a steep aspect, (ii) the steep gap where only the first excited eigenenergy level curve has a pair of inflection points giving its minimum a steep aspect while the maximum of the ground level does not exhibit inflection points, and (iii) the mild gap that has no related inflection points. Classification of the various singularities betrayed by the inflection points relies on the critical value curves of the quadratic numerical range mapping of the matrix H0+iH1, where H0 is the transverse field Hamiltonian and H1 the problem Hamiltonian. It is shown that the ground level is mapped to the generically smooth boundary of the numerical range, while the first excited level is mapped to an interior non-smooth critical value curve exhibiting swallow tails. The major result is that the position of the swallow tails relative to the boundary allows the supersteep versus steep discrimination, while the absence of swallow tail-boundary interaction characterizes the mild gap. As a corollary of the singularity analysis, the highly structured initial and final Hamiltonians of the Grover search create unstable singularities that break into stable swallow tails under perturbation, with the consequence of invalidating the gap scaling estimates computed around the unstable singularity. Classification of all stable singularities from a global viewpoint requires the Legendrian approach where the energy level curves become Legendrian knots in the contact space. Last but not least, it will be shown that a supersteep swallow tail previews tunneling.

  • Sequentially witnessing entanglement by independent observer pairs.- [PDF] - [Article]

    Mao-Sheng Li, Yan-Ling Wang
     

    This study investigates measurement strategies in a scenario where multiple pairs of Alices and Bobs independently and sequentially observe entangled states. The aim is to maximize the number of observer pairs $(A_k,B_l)$ that can witness entanglement. Prior research has demonstrated that arbitrary pairs $(A_k, B_k)$ ($k\leq n$) can observe entanglement in all pure entangled states and a specific class of mixed entangled states [Phys. Rev. A 106, 032419 (2022)]. However, it should be noted that other pairs $(A_k, B_l)$ with $k\neq l \leq n$ may not observe entanglement using the same strategy. Moreover, a novel strategy is presented, enabling every pair among arbitrarily many Alices and Bobs to witness entanglement, whether the initial state is a Bell state or belongs to a particular class of mixed entangled states. These findings contribute to understanding measurement strategies for maximizing entanglement observation in various contexts.

  • Quantifying Subspace Entanglement with Geometric Measures.- [PDF] - [Article]

    Xuanran Zhu, Chao Zhang, Bei Zeng
     

    Determining whether a quantum subspace is entangled and quantifying its entanglement level remains a fundamental challenge in quantum information science. This paper introduces a geometric measure of $r$-bounded rank, $E_r(S)$, for a given subspace $S$. This measure, derived from the established geometric measure of entanglement, is tailored to assess the entanglement within $S$. It not only provides a benchmark for quantifying the entanglement level but also sheds light on the subspace's ability to preserve such entanglement. Utilizing non-convex optimization techniques from the domain of machine learning, our method effectively calculates $E_r(S)$ in various scenarios. Showcasing strong performance in comparison to existing hierarchical and PPT relaxation techniques, our approach is notable for its accuracy, computational efficiency, and wide-ranging applicability. This versatile and effective tool paves the way for numerous new applications in quantum information science. It is particularly useful in validating highly entangled subspaces in bipartite systems, determining the border rank of multipartite states, and identifying genuinely or completely entangled subspaces. Our approach offers a fresh perspective for quantifying entanglement, while also shedding light on the intricate structure of quantum entanglement.

  • Fast algorithms for classical specifications of stabiliser states and Clifford gates.- [PDF] - [Article]

    Nadish de Silva, Wilfred Salmon, Ming Yin
     

    The stabiliser formalism plays a central role in quantum computing, error correction, and fault-tolerance. Stabiliser states are used to encode quantum data. Clifford gates are those which can be easily performed fault-tolerantly in the most common error correction schemes. Their mathematical properties are the subject of significant research interest. Numerical experiments are critical to formulating and testing conjectures involving the stabiliser formalism. In this note, we provide fast methods for verifying that a vector is a stabiliser state, and interconverting between its specification as amplitudes, a quadratic form, and a check matrix. We use these to rapidly check if a given unitary matrix is a Clifford gate and to convert between the matrix of a Clifford gate and its compact specification as a stabiliser tableau. We provide implementations of our algorithms in Python that outperform the best-known brute force methods by some orders of magnitude with asymptotic improvements that are exponential in the number of qubits.
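    One basic ingredient of such verification is binary symplectic algebra over GF(2): the rows of a check matrix must pairwise commute under the symplectic inner product. A minimal sketch of that necessary-condition test (illustrative only, not the authors' implementation):

```python
def symplectic_inner(row_a, row_b, n):
    """Binary symplectic inner product of two check-matrix rows (x|z) of
    length 2n. The corresponding Pauli operators commute iff this is 0 mod 2."""
    xa, za = row_a[:n], row_a[n:]
    xb, zb = row_b[:n], row_b[n:]
    return (sum(x * z for x, z in zip(xa, zb)) +
            sum(x * z for x, z in zip(xb, za))) % 2

def rows_all_commute(rows, n):
    """Necessary condition for a valid stabiliser check matrix:
    every pair of generator rows commutes."""
    return all(symplectic_inner(rows[i], rows[j], n) == 0
               for i in range(len(rows)) for j in range(i + 1, len(rows)))
```

    For example, the two-qubit generators XX and ZZ (rows [1,1,0,0] and [0,0,1,1]) commute, while X on qubit 1 and Z on qubit 1 anticommute.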

  • Quantum-Assisted Simulation: A Framework for Designing Machine Learning Models in the Quantum Computing Domain.- [PDF] - [Article]

    Minati Rath, Hema Date
     

    Machine learning (ML) models are trained using historical data to classify new, unseen data. However, traditional computing resources often struggle to handle the immense amount of data, commonly known as Big Data, within a reasonable timeframe. Quantum computing (QC) provides a novel approach to information processing. Quantum algorithms have the potential to process classical data exponentially faster than classical computing. By mapping quantum machine learning (QML) algorithms into the quantum mechanical domain, we can potentially achieve exponential improvements in data processing speed, reduced resource requirements, and enhanced accuracy and efficiency. In this article, we delve into both the QC and ML fields, exploring the interplay of ideas between them, as well as the current capabilities and limitations of hardware. We investigate the history of quantum computing, examine existing QML algorithms, and aim to present a simplified procedure for setting up simulations of QML algorithms, making it accessible and understandable for readers. Furthermore, we conducted simulations on a dataset using both machine learning and quantum machine learning approaches. We then proceeded to compare their respective performances by utilizing a quantum simulator.

  • Quantum Data Encoding: A Comparative Analysis of Classical-to-Quantum Mapping Techniques and Their Impact on Machine Learning Accuracy.- [PDF] - [Article]

    Minati Rath, Hema Date
     

    This research explores the integration of quantum data embedding techniques into classical machine learning (ML) algorithms, aiming to assess the performance enhancements and computational implications across a spectrum of models. Exploring various classical-to-quantum mapping methods, ranging from basis encoding and angle encoding to amplitude encoding, we conducted an extensive empirical study encompassing popular ML algorithms, including Logistic Regression, K-Nearest Neighbors, Support Vector Machines and ensemble methods like Random Forest, LightGBM, AdaBoost, and CatBoost. Our findings reveal that quantum data embedding contributes to improved classification accuracy and F1 scores, particularly notable in models that inherently benefit from enhanced feature representation. We observed nuanced effects on running time, with low-complexity models exhibiting moderate increases and more computationally intensive models experiencing discernible changes. Notably, ensemble methods demonstrated a favorable balance between performance gains and computational overhead. This study underscores the potential of quantum data embedding in enhancing classical ML models and emphasizes the importance of weighing performance improvements against computational costs. Future research directions may involve refining quantum encoding processes to optimize computational efficiency and exploring scalability for real-world applications. Our work contributes to the growing body of knowledge at the intersection of quantum computing and classical machine learning, offering insights for researchers and practitioners seeking to harness the advantages of quantum-inspired techniques in practical scenarios.
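    Two of the encodings compared here can be illustrated classically. A minimal sketch (illustrative only, not the authors' pipeline) of angle encoding, which maps each feature to a single-qubit rotation, and amplitude encoding, which stores a normalised feature vector in the state amplitudes:

```python
import math

def angle_encode(features):
    """Angle encoding: one qubit per feature. Each feature x sets the
    rotation RY(x)|0> = cos(x/2)|0> + sin(x/2)|1>; return the
    (cos, sin) amplitude pair per qubit."""
    return [(math.cos(x / 2), math.sin(x / 2)) for x in features]

def amplitude_encode(features):
    """Amplitude encoding: the feature vector is L2-normalised and
    stored directly in the amplitudes, so n qubits hold 2**n features."""
    norm = math.sqrt(sum(x * x for x in features))
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return [x / norm for x in features]
```

    Basis encoding (bit string to computational basis state) needs no arithmetic, which is why the two above dominate the running-time comparison.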

  • Relative-intensity squeezing of high-order harmonic "twin beams".- [PDF] - [Article]

    Shicheng Jiang, Konstantin Dorfman
     

    Relative intensity squeezing (RIS) is emerging as a promising technique for performing high-precision measurements beyond the shot-noise limit. A commonly used way to produce RIS in the visible/IR range is to generate correlated "twin beams" through four-wave mixing by driving atomic resonances with weak laser beams. Here, we propose an all-optical strong-field scheme to produce a series of relative-intensity-squeezed high-order harmonic "twin beams". Due to the nature of high-harmonic generation, the frequencies of the "twin beams" can cover a broad range of photon energies. Our proposal paves the way for the development of nonclassical XUV sources and high-precision spectroscopy tools in the strong-field regime.

  • Perspectives of running self-consistent DMFT calculations for strongly correlated electron systems on noisy quantum computing hardware.- [PDF] - [Article]

    Jannis Ehrlich, Daniel Urban, Christian Elsässer
     

    Dynamical Mean Field Theory (DMFT) is one of the powerful computational approaches to study electron correlation effects in solid-state materials and molecules. Its practical applicability is, however, limited by the exponential growth of the many-particle Hilbert space with the number of considered electronic orbitals. Here, the possibility of a one-to-one mapping between electronic orbitals and the states of a qubit register suggests a significant computational advantage in using a Quantum Computer (QC) to solve DMFT models. In this work we present a QC approach to solve a two-site DMFT model based on the Variational Quantum Eigensolver (VQE) algorithm. We discuss the challenges arising from stochastic errors and suggest a means to overcome unphysical features in the self-energy. We thereby demonstrate the feasibility of obtaining self-consistent results for the two-site DMFT model based on VQE simulations with a finite number of shots. We systematically compare results obtained on simulators with calculations on the IBMQ Ehningen QC hardware.

  • Quantum Counting by Quantum Walks.- [PDF] - [Article]

    Gustavo A. Bezerra, Raqueline A. M. Santos, Renato Portugal
     

    Quantum counting is a key quantum algorithm that aims to determine the number of marked elements in a database. This algorithm is based on the quantum phase estimation algorithm and uses the evolution operator of Grover's algorithm because its non-trivial eigenvalues are dependent on the number of marked elements. Since Grover's algorithm can be viewed as a quantum walk on a complete graph, a natural way to extend quantum counting is to use the evolution operator of quantum-walk-based search on non-complete graphs instead of Grover's operator. In this paper, we explore this extension by analyzing the coined quantum walk on the complete bipartite graph with an arbitrary number of marked vertices. We show that some eigenvalues of the evolution operator depend on the number of marked vertices and using this fact we show that the quantum phase estimation can be used to obtain the number of marked vertices. The time complexity for estimating the number of marked vertices in the bipartite graph with our algorithm aligns closely with that of the original quantum counting algorithm.
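    The dependence the abstract describes is the standard counting relation: the non-trivial eigenvalues of the Grover operator have phase $\theta$ with $\sin^2(\theta/2) = M/N$ for $M$ marked items out of $N$. A minimal classical sketch of that inversion (phase estimation itself, the quantum step, is replaced here by the exact phase):

```python
import math

def grover_phase(n_marked, n_items):
    """Phase of the non-trivial Grover-operator eigenvalue for
    n_marked marked items out of n_items: sin^2(theta/2) = M/N."""
    return 2 * math.asin(math.sqrt(n_marked / n_items))

def marked_count_from_phase(theta, n_items):
    """Invert the relation to recover the number of marked items
    from a phase estimate, as quantum counting does classically
    after the phase-estimation step."""
    return n_items * math.sin(theta / 2) ** 2
```

    The paper's contribution is that analogous eigenvalue relations hold for the coined walk on the complete bipartite graph, so the same post-processing applies there.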

  • General limit to thermodynamic annealing performance.- [PDF] - [Article]

    Yutong Luo, Yi-Zheng Zhen, Xiangjing Liu, Daniel Ebler, Oscar Dahlsten
     

    Annealing has proven highly successful in finding minima in a cost landscape. Yet, depending on the landscape, systems often converge towards local minima rather than global ones. In this Letter, we analyse the conditions under which annealing is approximately successful in finite time. We connect annealing to stochastic thermodynamics to derive a general bound on the distance between the system state at the end of the annealing and the ground state of the landscape. This distance depends on the number of state updates of the system and the accumulation of non-equilibrium energy, two protocol- and energy-landscape-dependent quantities which we show are in a trade-off relation. We describe how to bound the two quantities both analytically and physically. This offers a general approach to assessing the performance of annealing from accessible parameters, for both simulated and physical implementations.

  • Bound on annealing performance from stochastic thermodynamics, with application to simulated annealing.- [PDF] - [Article]

    Yutong Luo, Yi-Zheng Zhen, Xiangjing Liu, Daniel Ebler, Oscar Dahlsten
     

    Annealing is the process of gradually lowering the temperature of a system to guide it towards its lowest energy states. In an accompanying paper [Luo et al. Phys. Rev. E 108, L052105 (2023)], we derived a general bound on annealing performance by connecting annealing with stochastic thermodynamics tools, including a speed-limit on state transformation from entropy production. We here describe the derivation of the general bound in detail. In addition, we analyze the case of simulated annealing with Glauber dynamics in depth. We show how to bound the two case-specific quantities appearing in the bound, namely the activity, a measure of the number of microstate jumps, and the change in relative entropy between the state and the instantaneous thermal state, which is due to temperature variation. We exemplify the arguments by numerical simulations on the SK model of spin-glasses.
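    The Glauber rule analysed here accepts a spin flip with probability $1/(1+e^{\Delta E/T})$ while the temperature is gradually lowered. A minimal simulated-annealing sketch on a 1D Ising chain (a toy illustration, not the paper's SK-model simulations):

```python
import math
import random

def glauber_anneal(couplings, steps, t_start, t_end, seed=0):
    """Simulated annealing with Glauber dynamics on a 1D Ising chain
    H = -sum_i J_i s_i s_{i+1}. A proposed flip of spin i is accepted
    with probability 1 / (1 + exp(dE / T)) while T is lowered
    geometrically from t_start to t_end."""
    rng = random.Random(seed)
    n = len(couplings) + 1
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    ratio = (t_end / t_start) ** (1.0 / max(steps - 1, 1))
    temp = t_start
    for _ in range(steps):
        i = rng.randrange(n)
        # energy change of flipping spin i: only adjacent bonds contribute
        de = 0.0
        if i > 0:
            de += 2 * couplings[i - 1] * spins[i - 1] * spins[i]
        if i < n - 1:
            de += 2 * couplings[i] * spins[i] * spins[i + 1]
        if rng.random() < 1.0 / (1.0 + math.exp(de / temp)):
            spins[i] = -spins[i]
        temp *= ratio
    return spins
```

    The "activity" bounded in the paper corresponds to the number of accepted microstate jumps along such a trajectory.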

  • Structure of the Hamiltonian of mean force.- [PDF] - [Article]

    Phillip C. Burke, Goran Nakerst, Masudul Haque
     

    The Hamiltonian of mean force is an effective Hamiltonian that allows a quantum system, non-weakly coupled to an environment, to be written in an effective Gibbs state. We investigate the structure of the Hamiltonian of mean force in extended quantum systems with local interactions. We show that its spatial structure exhibits a "skin effect" -- its difference from the system Hamiltonian dies off exponentially with distance from the system-environment boundary. For spin systems, we identify the terms that can appear in the Hamiltonian of mean force at different orders in the inverse temperature.

  • Ultra-quantum coherent states in a single finite quantum system.- [PDF] - [Article]

    A. Vourdas
     

    A set of $n$ coherent states is introduced in a quantum system with $d$-dimensional Hilbert space $H(d)$. It is shown that they resolve the identity, and also have a discrete isotropy property. A finite cyclic group acts on the set of these coherent states and partitions it into orbits. An $n$-tuple representation of arbitrary states in $H(d)$, analogous to the Bargmann representation, is defined. There are two other important properties of these coherent states which make them `ultra-quantum'. The first property is related to the Grothendieck formalism, which studies the `edge' of the Hilbert space and quantum formalisms. Roughly speaking, the Grothendieck theorem considers a `classical' quadratic form ${\mathfrak C}$ that uses complex numbers in the unit disc, and a `quantum' quadratic form ${\mathfrak Q}$ that uses vectors in the unit ball of the Hilbert space. It shows that if ${\mathfrak C}\le 1$, the corresponding ${\mathfrak Q}$ might take values greater than $1$, up to the complex Grothendieck constant $k_G$. ${\mathfrak Q}$ related to these coherent states is shown to take values in the `Grothendieck region' $(1,k_G)$, which is classically forbidden in the sense that ${\mathfrak C}$ does not take values in it. The second property complements this, showing that these coherent states violate logical Bell-like inequalities (which for a single quantum system are quantum versions of the Frechet probabilistic inequalities). In this sense also, our coherent states are deep in the quantum region.

  • Improving Continuous-variable Quantum Channels with Unitary Averaging.- [PDF] - [Article]

    S. Nibedita Swain, Ryan J. Marshman, Peter P. Rohde, Austin P. Lund, Alexander S. Solntsev, Timothy C. Ralph
     

    A significant hurdle for quantum information processing using bosonic systems is stochastic phase errors, which are likely to occur as the photons propagate through a channel. We propose and demonstrate a scheme of passive, linear-optical unitary averaging for protecting Gaussian channels. The scheme requires only linear optical elements and vacuum detectors, and protects against loss of purity, squeezing and entanglement. We present numerical simulations and an analytical formula, tailored for currently relevant parameters with low noise levels, where our approximations perform exceptionally well. We also show the asymptotic nature of the protocol, highlighting both its current and future relevance.

  • Task Scheduling Optimization from a Tensor Network Perspective.- [PDF] - [Article]

    Alejandro Mata Ali, Iñigo Perez Delgado, Beatriz García Markaida, Aitor Moreno Fdez. de Leceta
     

    We present a novel method for task optimization in industrial plants using quantum-inspired tensor network technology. This method allows us to obtain the best possible combination of tasks on a set of machines with a set of constraints without having to evaluate all possible combinations. We simulate a quantum system with all possible combinations, perform an imaginary time evolution and a series of projections to satisfy the constraints. We improve the scalability by means of a compression method, an iterative algorithm, and a genetic algorithm, and show the results obtained on simulated cases.

  • NISQ-Compatible Error Correction of Quantum Data Using Modified Dissipative Quantum Neural Networks.- [PDF] - [Article]

    Armin Ahmadkhaniha, Marzieh Bathaee
     

    Using a dissipative quantum neural network (DQNN) accompanied by conjugate layers, we upgrade the performance of the existing quantum auto-encoder (QAE) network as a quantum denoiser of a noisy m-qubit GHZ state. Our new denoising architecture requires a much smaller number of learning parameters, which can decrease the training time, especially when a deep or stacked DQNN is needed to approach the highest fidelity in the Noisy Intermediate-Scale Quantum (NISQ) era. In QAE, we reduce the connection between the hidden layer's qubits and the output's qubits to modify the decoder. The Renyi entropy of the hidden and output qubits' states is analyzed with respect to other qubits during learning iterations. During the learning process, if the hidden layer remains connected to the input layers, the network can almost perfectly denoise unseen noisy data with a different underlying noise distribution using the learning parameters acquired from training data.
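    The Renyi entropy tracked during training can be computed from the eigenvalue spectrum of a reduced density matrix. A minimal sketch (assuming the probability vector of eigenvalues is already available; not the authors' code):

```python
import math

def renyi_entropy(probs, alpha):
    """Renyi entropy H_alpha = log(sum_i p_i^alpha) / (1 - alpha) of a
    probability vector, e.g. the eigenvalues of a reduced density
    matrix. The limit alpha -> 1 recovers the von Neumann entropy."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)
```

    A maximally mixed qubit gives $\log 2$ for every $\alpha$, while a pure state gives zero, which is the kind of signature used to monitor hidden-layer correlations here.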

  • Routing and wavelength assignment in hybrid networks with classical and quantum signals.- [PDF] - [Article]

    Lidia Ruiz, Juan Carlos Garcia-Escartin
     

    Quantum Key Distribution has become a mature quantum technology that has outgrown dedicated links and is ready to be incorporated into the classical infrastructure. In this scenario with multiple potential nodes, it is crucial to have efficient ways to allocate the network resources among all the potential users. We propose a simple method for routing and wavelength assignment in wavelength-multiplexed networks in which classical and quantum channels coexist. The proposed heuristics take into account the specific requirements of quantum key distribution and focus on keeping at bay the contamination of the quantum channels by photons generated from the classical signals through non-linear processes, among other effects. These heuristics reduce the shared path between classical and quantum channels. We show that we can reduce the number of quantum key sequences that must be rejected due to excessive error rates that cannot guarantee security, and compare the results to the usual classical RWA approach.
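    The classical baseline against which such heuristics are compared is first-fit wavelength assignment. A minimal sketch of first-fit on a fixed path (the paper's quantum-aware penalties for classical/quantum channel sharing are not modeled here):

```python
def first_fit_wavelength(path_links, used, n_wavelengths):
    """First-fit RWA on a precomputed path: return the lowest-index
    wavelength that is free on every link of the path, or None if the
    request is blocked. `used[link]` is the set of wavelengths already
    occupied on that link."""
    for w in range(n_wavelengths):
        if all(w not in used.get(link, set()) for link in path_links):
            return w
    return None  # request blocked
```

    A quantum-aware variant would additionally penalise wavelengths whose path shares many fibre links with active classical channels, which is the essence of the heuristics proposed here.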

  • (Quantum) complexity of testing signed graph clusterability.- [PDF] - [Article]

    Kuo-Chin Chen, Simon Apers, Min-Hsiu Hsieh
     

    This study examines clusterability testing for a signed graph in the bounded-degree model. Our contributions are two-fold. First, we provide a quantum algorithm with query complexity $\tilde{O}(N^{1/3})$ for testing clusterability, which yields a polynomial speedup over the best classical clusterability tester known [arXiv:2102.07587]. Second, we prove an $\tilde{\Omega}(\sqrt{N})$ classical query lower bound for testing clusterability, which nearly matches the upper bound from [arXiv:2102.07587]. This settles the classical query complexity of clusterability testing, and it shows that our quantum algorithm has an advantage over any classical algorithm.

  • Dimensional Crossover in a Quantum Gas of Light.- [PDF] - [Article]

    Kirankumar Karkihalli Umesh, Julian Schulz, Julian Schmitt, Martin Weitz, Georg von Freymann, Frank Vewinger
     

    We experimentally study the properties of a harmonically trapped photon gas undergoing Bose-Einstein condensation along the dimensional crossover from one to two dimensions. The photons are trapped inside a dye microcavity, where polymer nanostructures provide a harmonic trapping potential for the photon gas. By varying the aspect ratio of the trap we tune from an isotropic two-dimensional confinement to an anisotropic, highly elongated one-dimensional trapping potential. Along this transition, we determine caloric properties of the photon gas, and find a softening of the second-order Bose-Einstein condensation phase transition observed in two dimensions to a crossover behaviour in one dimension.

  • The gauge-relativity of quantum light, matter, and information.- [PDF] - [Article]

    Adam Stokes, Hannah Riley, Ahsan Nazir
     

    We describe the physical relativity of light and matter quantum subsystems, their correlations, and energy exchanges. We examine the most commonly adopted definitions of atoms and photons, noting the significant difference in their localisation properties when expressed in terms of primitive manifestly gauge-invariant and local fields. As a result, different behaviours for entanglement generation and energy exchange occur for different definitions. We explore such differences in detail using toy models of a single photonic mode interacting with one and two dipoles.

  • Fluctuation-induced Forces on Nanospheres in External Fields.- [PDF] - [Article]

    Clemens Jakubec, Pablo Solano, Uroš Delić, Kanu Sinha
     

    We analyze the radiative forces between two dielectric nanospheres mediated via the quantum and thermal fluctuations of the electromagnetic field in the presence of an external drive. We generalize the scattering theory description of fluctuation forces to include external quantum fields, allowing them to be in an arbitrary quantum state. The known trapping and optical binding potentials are recovered for an external coherent state. We demonstrate that an external squeezed vacuum state creates similar potentials to a laser, despite its zero average intensity. Moreover, Schr\"odinger cat states of the field can enhance or suppress the optical potential depending on whether they are odd or even. Considering the nanospheres trapped by optical tweezers, we examine the total interparticle potential as a function of various experimentally relevant parameters, such as the field intensity, polarization, and phase of the trapping lasers. We demonstrate that an appropriate set of parameters could produce mutual bound states of the two nanospheres with potential depth as large as $\sim200$ K. Our results are pertinent to ongoing experiments with trapped nanospheres in the macroscopic quantum regime, paving the way for engineering interactions among macroscopic quantum systems.

  • Kerr-type nonlinear baths enhance cooling in quantum refrigerators.- [PDF] - [Article]

    Tanaya Ray, Sayan Mondal, Aparajita Bhattacharyya, Ahana Ghoshal, Debraj Rakshit, Ujjwal Sen
     

    We study the self-contained three-qubit quantum refrigerator, with a three-body interaction enabling cooling of the target qubit, in the presence of baths composed of anharmonic quantum oscillators with Kerr-type nonlinearity. We show that such baths, locally connected to the three qubits, open up the opportunity to implement superior steady-state cooling compared to using harmonic oscillator baths, aiding access to the free energy required for the refrigerator to function autonomously. We find that, in spite of providing a significant advantage in steady-state cooling, such anharmonic baths do not impart much of an edge over harmonic oscillator baths if one targets transient cooling. However, we gain access to steady-state cooling in the parameter region where only transient cooling could be achieved using harmonic baths. Subsequently, we also study the scaling of the steady-state cooling advantage and the minimum attainable temperature for varying levels of anharmonicity in the bath oscillators. Finally, we analyse heat currents and coefficients of performance of quantum refrigerators using bath modes involving Kerr-type nonlinearity, and present a comparison with the case of bosonic baths made of simple harmonic oscillators. On the way, we derive the decay rates in the Gorini-Kossakowski-Sudarshan-Lindblad quantum master equation for Kerr-type anharmonic oscillator baths.

  • Witnessing quantum coherence with prior knowledge of observables.- [PDF] - [Article]

    Mao-Sheng Li, Wen Xu, Shao-Ming Fei, Zhu-Jun Zheng, Yan-Ling Wang
     

    Quantum coherence is the key resource in quantum technologies, including faster computing, secure communication, and advanced sensing. Its quantification and detection are, therefore, paramount within the context of quantum information processing. Having certain a priori knowledge of the observables may enhance the efficiency of coherence detection. In this work, we posit that the trace of the observables is a known quantity. Our investigation confirms that this assumption indeed extends the scope of coherence detection capabilities. Utilizing this prior knowledge of the trace of the observables, we establish a series of coherence detection criteria. We investigate the detection capabilities of these coherence criteria from diverse perspectives and ultimately ascertain the existence of four distinct and inequivalent criteria. These findings deepen our understanding of coherence detection methodologies, thereby potentially opening new avenues for advancements in quantum technologies.

  • Deterministic preparation of optical squeezed cat and Gottesman-Kitaev-Preskill states.- [PDF] - [Article]

    Matthew S. Winnel, Joshua J. Guanzon, Deepesh Singh, Timothy C. Ralph
     

    Large-amplitude squeezed cat and high-quality Gottesman-Kitaev-Preskill (GKP) states are powerful resources for quantum error correction. However, previous schemes in optics are limited to low success probabilities, small amplitudes, and low squeezing. We overcome these limitations and present scalable schemes in optics for the deterministic preparation of large-amplitude squeezed cat states using only Gaussian operations and photon-number measurements. These states can be bred to prepare high-quality approximate GKP states, showing that GKP error correction in optics is technically feasible in near-term experiments.

  • Quantum intersection and union.- [PDF] - [Article]

    Naqueeb Ahmad Warsi, Ayanava Dasgupta
     

    In information theory, we often use intersections and unions of typical sets to analyze various communication problems. However, in the quantum setting it is not very clear how to construct a measurement which behaves analogously to the intersection and union of typical sets. In this work, we construct a projection operator which behaves very similarly to the intersection and union of typical sets. Our construction relies on Jordan's lemma. Using this construction, we study the problem of communication over authenticated classical-quantum channels and derive its capacity. As another application of our construction, we study the problem of quantum asymmetric composite hypothesis testing. Further, we also prove a converse for the quantum binary asymmetric hypothesis testing problem which is arguably very similar in spirit to the converse given in the book by Thomas and Cover for the classical version of this problem.

  • Analysis of Quantum Steering Measures.- [PDF] - [Article]

    L. Maquedano, A. C. S. Costa
     

    The effect of quantum steering describes a possible action at a distance via local measurements. In the last few years, several criteria have been proposed to detect this type of correlation in quantum systems. However, only a few approaches have been presented to measure the degree of steerability of a given system. In this work, we investigate possible ways to quantify quantum steering, basing our analysis on different criteria presented in the literature.

  • Kondo effect in the isotropic Heisenberg spin chain.- [PDF] - [Article]

    Pradip Kattel, Parameshwar R. Pasnoori, J. H. Pixley, Patrick Azaria, Natan Andrei
     

    We investigate the boundary effects that arise when spin-$\frac{1}{2}$ impurities interact with the edges of the antiferromagnetic spin-$\frac{1}{2}$ Heisenberg chain through spin exchange interactions. We consider both cases, where the couplings are ferromagnetic or antiferromagnetic. We find that in the case of antiferromagnetic interaction, when the impurity coupling strength is much weaker than that in the bulk, the impurity is screened in the ground state via the Kondo effect. The Kondo phase is characterized by a Lorentzian density of states and a dynamically generated Kondo temperature $T_K$. As the impurity coupling strength increases, $T_K$ increases until it reaches its maximum value $T_0=2\pi J$, which is the maximum energy carried by a single spinon. When the impurity coupling strength is increased further, we enter another phase, the bound-mode phase, where the impurity is screened in the ground state by a single-particle bound mode exponentially localized at the edge to which the impurity is coupled. We find that the impurity can be unscreened by removing the bound mode. There exists a boundary eigenstate phase transition between the Kondo and the bound-mode phases, a transition which is characterized by a change in the number of towers of the Hilbert space. The transition also manifests itself in ground-state quantities like the local impurity density of states and the local impurity magnetization. When the impurity coupling is ferromagnetic, the impurity is unscreened in the ground state; however, when the absolute value of the ratio of the impurity and bulk coupling strengths is greater than $\frac{4}{5}$, the impurity can be screened by adding a bound mode that costs energy greater than $T_0$. When two impurities are considered, the phases exhibited by each impurity remain unchanged in the thermodynamic limit, but the system nevertheless exhibits a rich phase diagram.

  • Beating the spectroscopic Rayleigh limit via post-processed heterodyne detection.- [PDF] - [Article]

    Wiktor Krokosz, Mateusz Mazelanik, Michał Lipka, Marcin Jarzyna, Wojciech Wasilewski, Konrad Banaszek, Michał Parniak
     

    Quantum-inspired superresolution methods surpass the Rayleigh limit in imaging, or the analogous Fourier limit in spectroscopy. This is achieved by carefully extracting the information carried in the emitted optical field via engineered measurements. An alternative to complex experimental setups is to use simple homodyne detection and customized data analysis. We experimentally investigate this method in the time-frequency domain and demonstrate spectroscopic superresolution for two distinct types of light sources: thermal and phase-averaged coherent states. The experimental results are backed by theoretical predictions based on estimation theory.

  • Direct Observation of Entangled Electronic-Nuclear Wave Packets.- [PDF] - [Article]

    Gonenc Mogol, Brian Kaufman, Chuan Cheng, Itzik Ben-Itzhak, Thomas Weinacht
     

    We present momentum resolved covariance measurements of entangled electronic-nuclear wave packets created and probed with octave spanning phaselocked ultrafast pulses. We launch vibrational wave packets on multiple electronic states via multi-photon absorption, and probe these wave packets via strong field double ionization using a second phaselocked pulse. Momentum resolved covariance mapping of the fragment ions highlights the nuclear motion, while measurements of the yield as a function of the relative phase between pump and probe pulses highlight the electronic coherence. The combined measurements allow us to directly visualize the entanglement between the electronic and nuclear degrees of freedom and follow the evolution of the complete wavefunction.

  • Assessing Quantum Computing Performance for Energy Optimization in a Prosumer Community.- [PDF] - [Article]

    Carlo Mastroianni, Francesco Plastina, Luigi Scarcello, Jacopo Settino, Andrea Vinci
     

    The efficient management of energy communities relies on the solution of the "prosumer problem", i.e., the problem of scheduling the household loads on the basis of the user needs, the electricity prices, and the availability of local renewable energy, with the aim of reducing costs and energy waste. Quantum computers can offer a significant breakthrough in treating this problem thanks to the intrinsic parallel nature of quantum operations. The most promising approach is to devise variational hybrid algorithms, in which quantum computation is driven by parameters that are optimized classically, in a cycle that aims at finding the best solution with a significant speed-up with respect to classical approaches. This paper provides a reformulation of the prosumer problem, allowing it to be addressed with a hybrid quantum algorithm, namely the Quantum Approximate Optimization Algorithm (QAOA), and with a recent variant, the Recursive QAOA. We report on an extensive set of experiments, on simulators and on real quantum hardware, for different problem sizes. Results are encouraging in that Recursive QAOA is able, for problems involving up to 10 qubits, to provide optimal and admissible solutions with good probability, while the computation time is nearly independent of the system size.
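The reformulation itself is not spelled out in the abstract, but the general shape of such a scheduling problem is a QUBO: binary slot variables, a linear cost built from prices and renewable availability, and a quadratic penalty enforcing the demand constraint. A minimal sketch with invented numbers (the `price`, `solar`, `need`, and `lam` values are illustrative, not the paper's data), solved here by brute force rather than QAOA:

```python
import itertools
import numpy as np

# One shiftable load over four time slots. All numbers are invented
# for illustration; the paper's actual cost model is richer.
price = np.array([0.30, 0.25, 0.10, 0.15])  # grid price per slot
solar = np.array([0.00, 0.05, 0.20, 0.10])  # local renewable credit per slot
need, lam = 2, 10.0                         # required slots, penalty weight

def cost(x):
    """QUBO objective: linear cost plus quadratic demand-constraint penalty."""
    x = np.asarray(x)
    return float((price - solar) @ x + lam * (x.sum() - need) ** 2)

# Brute force stands in for QAOA at this toy size (2^4 candidate bitstrings).
best = min(itertools.product([0, 1], repeat=4), key=cost)
```

Here `best` comes out as `(0, 0, 1, 1)`: the load is scheduled in the two slots where price minus renewable credit is lowest.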

  • Long Lived Electronic Coherences in Molecular Wave Packets Probed with Pulse Shape Spectroscopy.- [PDF] - [Article]

    Brian Kaufman, Philipp Marquetand, Tamas Rozgonyi, Thomas Weinacht
     

    We explore long-lived electronic coherences in molecules using shaped ultrafast laser pulses to launch and probe entangled nuclear-electronic wave packets. We find that under certain conditions, the electronic phase remains well defined despite vibrational motion along many degrees of freedom. The experiments are interpreted with the help of electronic structure calculations, which corroborate our interpretation of the measurements.

  • Robust universal quantum processors in spin systems via Walsh pulse sequences.- [PDF] - [Article]

    Matteo Votto, Johannes Zeiher, Benoît Vermersch
     

    We propose a protocol to realize quantum simulation and computation in spin systems with long-range interactions. Our approach relies on the local addressing of single spins with external fields parametrized by Walsh functions. This enables a mapping from a class of target Hamiltonians, defined by the graph structure of their interactions, to pulse sequences. We then obtain a recipe to implement arbitrary two-body Hamiltonians and universal quantum circuits. Performance guarantees are provided in terms of bounds on Trotter errors and the total number of pulses, and robustness to experimental imperfections. We demonstrate and numerically benchmark our protocol with examples from the dynamics of spin models, quantum error correction, and quantum optimization algorithms.
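The abstract does not give the construction, but the Walsh functions that parametrize the addressing fields form a mutually orthogonal set of ±1 sequences, and this orthogonality is what lets a pulse sequence average away unwanted interaction terms. A minimal sketch of the (Hadamard-ordered) Walsh functions via the recursive Sylvester-Hadamard construction:

```python
import numpy as np

def walsh_matrix(n):
    """Rows are the 2**n Walsh functions (Hadamard ordering), entries ±1,
    built from the recursive Sylvester-Hadamard construction."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_matrix(3)
# Distinct rows are orthogonal: products of two different Walsh functions
# average to zero over a period, so couplings tagged by different functions
# cancel over a full pulse cycle.
```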

  • Simulating photonic devices with noisy optical elements.- [PDF] - [Article]

    Michele Vischi, Giovanni Di Bartolomeo, Massimiliano Proietti, Seid Koudia, Filippo Cerocchi, Massimiliano Dispenza, Angelo Bassi
     

    Quantum computers are inherently affected by noise. While in the long term error correction codes will account for noise at the cost of increasing physical qubits, in the near term the performance of any quantum algorithm should be tested and simulated in the presence of noise. As noise acts on the hardware, the classical simulation of a quantum algorithm should not be agnostic about the platform used for the computation. In this work, we apply the recently proposed noisy gates approach to efficiently simulate noisy optical circuits described in the dual rail framework. The evolution of the state vector is simulated directly, without requiring the mapping to the density matrix framework. Notably, we test the method on both the gate-based and measurement-based quantum computing models, showing that the approach is very versatile. We also evaluate the performance of a photonic variational quantum algorithm to solve the MAX-2-CUT problem. In particular, we design and simulate an ansatz which is resilient to photon losses up to $p \sim 10^{-3}$, making it relevant for near-term applications.

  • Quantum phases of hardcore bosons with repulsive dipolar density-density interactions on two-dimensional lattices.- [PDF] - [Article]

    J.A. Koziol, G. Morigi, K.P. Schmidt
     

    We analyse the ground-state quantum phase diagram of hardcore bosons interacting with repulsive dipolar potentials. The boson dynamics is described by the extended Bose-Hubbard Hamiltonian on a two-dimensional lattice. The ground state results from the interplay between the lattice geometry and the long-range interactions, which we account for by means of a classical spin mean-field approach limited by the size of the considered unit cells. This extended classical spin mean-field theory accounts for the long-range density-density interaction without truncation. We consider three different lattice geometries: square, honeycomb, and triangular. In the limit of zero hopping the ground state is always a devil's staircase of solid (gapped) phases. Such crystalline phases with broken translational symmetry are robust with respect to finite hopping amplitudes. At intermediate hopping amplitudes, these gapped phases melt, giving rise to various lattice supersolid phases, which can have exotic features with multiple sublattice densities. At sufficiently large hoppings the ground state is a superfluid. The stability of the phases predicted by our approach is gauged by comparison to the known quantum phase diagrams of the Bose-Hubbard model with nearest-neighbour interactions as well as to quantum Monte Carlo simulations for the dipolar case on the square lattice. Our results are of immediate relevance for experimental realisations of self-organised crystalline ordering patterns in analogue quantum simulators, e.g., with ultracold dipolar atoms in an optical lattice.

  • QKD Entity Source Authentication: Defense-in-Depth for Post Quantum Cryptography.- [PDF] - [Article]

    John J. Prisco
     

    Quantum key distribution (QKD) was conceived by Charles Bennett and Gilles Brassard in December of 1984. In the ensuing 39 years, QKD systems have been deployed around the world to provide secure encryption for terrestrial as well as satellite communication. In 2016, the National Institute of Standards and Technology (NIST) began a program to standardize a series of quantum-resistant algorithms to replace our current encryption standards, thereby protecting against future quantum computers breaking public-key cryptography. This program is known as post-quantum cryptography (PQC). One of the tenets of cybersecurity is to use an approach that simultaneously provides multiple protections, known as defense-in-depth. This approach seeks to avoid single points of failure. The goal of this paper is to examine the suitability of a hybrid QKD / PQC defense-in-depth strategy. A focus of the paper will be to examine the sufficiency of initial QKD hardware authentication (entity source authentication), which is necessary to guard against man-in-the-middle attacks.

  • Enhancing Electron-Nuclear Resonances by Dynamical Control Switching.- [PDF] - [Article]

    Sichen Xu, Chanying Xie, Zhen-Yu Wang
     

    We present a general method to realize resonant coupling between spins even though their energies are of different scales. Applying the method to the electron and nuclear spin systems such as a nitrogen-vacancy (NV) center with its nearby nuclei, we show that a specific dynamical switching of the electron spin Rabi frequency achieves efficient electron-nuclear coupling, providing a much stronger quantum sensing signal and dynamic nuclear polarization than previous methods. This protocol has applications in high-field nanoscale nuclear magnetic resonances as well as low-power quantum control of nuclear spins.

  • Efficient reconstruction, benchmarking and validation of cross-talk models in readout noise in near-term quantum devices.- [PDF] - [Article]

    Jan Tuziemski, Filip B. Maciejewski, Joanna Majsak, Oskar Słowik, Marcin Kotowski, Katarzyna Kowalczyk-Murynka, Piotr Podziemski, Michał Oszmaniec
     

    Readout errors contribute significantly to the overall noise affecting present-day quantum computers. However, the complete characterization of generic readout noise is infeasible for devices consisting of a large number of qubits. Here we introduce an appropriately tailored quantum detector tomography protocol, the so-called Quantum Detector Overlapping Tomography (QDOT), which enables efficient characterization of $k$-local cross-talk effects in the readout noise, as the sample complexity of the protocol scales logarithmically with the total number of qubits. We show that QDOT data provides information about suitably defined reduced POVM operators, correlations, and coherences in the readout noise, and allows one to reconstruct the correlated clusters-and-neighbours readout noise model. Benchmarks are introduced to verify the utility and accuracy of the reconstructed model. We apply our method to investigate cross-talk effects on 79-qubit Rigetti and 127-qubit IBM devices. We discuss their readout noise characteristics and demonstrate the effectiveness of our approach by showing the superior performance of the correlated clusters-and-neighbours model over models without cross-talk in model-based readout error mitigation applied to energy estimation of MAX-2-SAT Hamiltonians, with an improvement on the order of 20% for both devices.
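As a point of reference for the model-based readout error mitigation mentioned above: in the simplest, cross-talk-free setting the readout noise is a tensor product of single-qubit confusion matrices, and mitigation reduces to inverting that matrix. A toy sketch (the error rates are invented; the paper's correlated clusters-and-neighbours model generalizes exactly this uncorrelated baseline):

```python
import numpy as np

def confusion_1q(p01, p10):
    """Single-qubit readout confusion matrix; columns index the true state,
    rows the observed outcome. p01 = Pr(read 1 | prepared 0), p10 likewise."""
    return np.array([[1 - p01, p10],
                     [p01, 1 - p10]])

# Uncorrelated (no cross-talk) two-qubit model: tensor product of 1q matrices.
# Error rates below are invented for the example.
A = np.kron(confusion_1q(0.02, 0.05), confusion_1q(0.03, 0.04))

true = np.array([0.5, 0.0, 0.0, 0.5])  # ideal Bell-state populations
noisy = A @ true                       # what the noisy readout would report
mitigated = np.linalg.solve(A, noisy)  # model-based mitigation: invert the model
```

With the exact noise model known, the inversion recovers the ideal populations; in practice the model comes from tomography and the inversion is only as good as the model, which is what the paper's benchmarks quantify.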

  • Non-Zero Mean Quantum Wishart Distribution Of Random Quantum States.- [PDF] - [Article]

    Shrobona Bagchi
     

    Random quantum states are useful in various areas of quantum information science. Distributions of random quantum states based on Gaussian distributions have been used in various scenarios in quantum information science. One of these is the distribution of random quantum states derived using the Wishart distribution commonly used in statistics; this distribution has recently been named the quantum Wishart distribution. The quantum Wishart distribution was found for the non-central case with a general covariance matrix and zero mean matrix in an earlier work. Here, we derive the closed-form expression for the distribution of random quantum states pertaining to the non-central Wishart distribution with any general rank-one mean matrix and a general covariance matrix, for arbitrary dimensions, in both real and complex Hilbert spaces. We term this the non-zero mean quantum Wishart distribution.
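Concretely, a random density matrix in the (central) Wishart ensemble is obtained by normalizing $GG^\dagger$ for a Ginibre matrix $G$; adding a non-zero mean matrix to $G$ before forming the product gives the non-central case discussed above. A minimal numerical sketch (the function name and the particular rank-one mean are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_density_matrix(dim, mean=None):
    """Sample a density matrix by normalizing a complex Wishart matrix.

    G is a dim x dim Ginibre matrix (i.i.d. complex Gaussian entries).
    Passing a nonzero `mean` matrix M shifts G, so W = (G + M)(G + M)^dagger
    is drawn from a non-central Wishart ensemble.
    """
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    if mean is not None:
        G = G + mean
    W = G @ G.conj().T
    return W / np.trace(W).real

rho = random_density_matrix(4)                         # central (zero-mean) case
rho_nc = random_density_matrix(4, mean=np.ones((4, 4)))  # rank-one mean matrix
```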

  • An efficient quantum parallel repetition theorem and applications.- [PDF] - [Article]

    John Bostanci, Luowen Qian, Nicholas Spooner, Henry Yuen
     

    We prove a tight parallel repetition theorem for $3$-message computationally-secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove under plausible assumptions that the security of $4$-message computationally secure protocols does not generally decrease under parallel repetition. These mirror the classical results of Bellare, Impagliazzo, and Naor [BIN97]. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent $3$-message argument system, mirroring the transformation for quantum proof systems [KW00, KKMV07]. As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan [Yan22]), EFI pairs (answering a question of Brakerski, Canetti, and Qian [BCQ23]), public-key quantum money schemes (answering a question of Aaronson and Christiano [AC13]), and quantum zero-knowledge argument systems. We also derive an XOR lemma [Yao82] for quantum predicates as a corollary.

  • Realistic Cost to Execute Practical Quantum Circuits using Direct Clifford+T Lattice Surgery Compilation.- [PDF] - [Article]

    Tyler LeBlond, Christopher Dean, George Watkins, Ryan S. Bennink
     

    In this article, we report the development of a resource estimation pipeline that explicitly compiles quantum circuits expressed using the Clifford+T gate set into a lower-level instruction set made out of fault-tolerant operations on the surface code. The cadence of magic state requests from the compiled circuit enables the optimization of magic state distillation and storage requirements in a post-hoc analysis. To compile logical circuits, we build upon the open-source Lattice Surgery Compiler, which is extensible to different surface code compilation strategies within the lattice surgery paradigm. The revised compiler operates in two stages: the first translates logical gates into an abstract, layout-independent instruction set; the second compiles these into local lattice surgery instructions that are allocated to hardware tiles according to a specified resource layout. In the second stage, parallelism in the logical circuit is translated into parallelism within the fault-tolerant layer while avoiding resource contention, which allows the compiler to find a realistic number of logical time-steps to execute the circuit. The revised compiler also improves the handling of magic states by allowing users to specify dedicated hardware tiles at which magic states are replenished according to a user-specified rate, which allows resource costs from the logical computation to be considered independently of magic state distillation and storage. We demonstrate the applicability of our resource estimation pipeline to large, practical quantum circuits by providing resource estimates for the ground state estimation of molecules. We find that, unless carefully considered, the resource costs of magic state storage can dominate in real circuits, which have variable magic state consumption rates.

  • TISCC: A Surface Code Compiler and Resource Estimator for Trapped-Ion Processors.- [PDF] - [Article]

    Tyler LeBlond, Justin G. Lietz, Christopher M. Seck, Ryan S. Bennink
     

    We introduce the Trapped-Ion Surface Code Compiler (TISCC), a software tool that generates circuits for a universal set of surface code patch operations in terms of a native trapped-ion gate set. To accomplish this, TISCC manages an internal representation of a trapped-ion system where a repeating pattern of trapping zones and junctions is arranged in an arbitrarily large rectangular grid. Surface code operations are compiled by instantiating surface code patches on the grid and using methods to generate transversal operations over data qubits, rounds of error correction over stabilizer plaquettes, and/or lattice surgery operations between neighboring patches. Beyond the implementation of a basic surface code instruction set, TISCC contains corner movement functionality and a patch translation that is implemented using ion movement alone. Except in the latter case, all TISCC functionality is extensible to alternative grid-like hardware architectures. TISCC output has been verified using the Oak Ridge Quasi-Clifford Simulator (ORQCS).

  • Machine learning phase transitions: Connections to the Fisher information.- [PDF] - [Article]

    Julian Arnold, Niels Lörch, Flemming Holtorf, Frank Schäfer
     

    Despite the widespread use and success of machine-learning techniques for detecting phase transitions from data, their working principle and fundamental limits remain elusive. Here, we explain the inner workings and identify potential failure modes of these techniques by rooting popular machine-learning indicators of phase transitions in information-theoretic concepts. Using tools from information geometry, we prove that several machine-learning indicators of phase transitions approximate the square root of the system's (quantum) Fisher information from below -- a quantity that is known to indicate phase transitions but is often difficult to compute from data. We numerically demonstrate the quality of these bounds for phase transitions in classical and quantum systems.
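For intuition on the quantity being bounded: for a Gibbs family $p_\beta \propto e^{-\beta E}$, the classical Fisher information with respect to $\beta$ equals the energy variance, which can be evaluated exactly for a small system. A sketch for an $L=8$ periodic Ising chain (system size and coupling chosen for illustration; the paper's machine-learning indicators bound the square root of this quantity from below):

```python
import itertools
import numpy as np

L, J = 8, 1.0  # chain length and coupling, illustrative values

# Exact enumeration of the 2^L spin configurations of a periodic Ising chain.
E = np.array([
    -J * sum(s[i] * s[(i + 1) % L] for i in range(L))
    for s in itertools.product([-1, 1], repeat=L)
], dtype=float)

def fisher_info(beta):
    """Fisher information of the Gibbs family p_beta ~ exp(-beta * E) with
    respect to beta; for this exponential family it equals Var(E)."""
    w = np.exp(-beta * E)
    p = w / w.sum()
    mean = (p * E).sum()
    return float((p * (E - mean) ** 2).sum())
```

At infinite temperature the bond variables are pairwise uncorrelated, so `fisher_info(0.0)` equals `L` exactly, which serves as a check on the enumeration.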

  • Energy transfer in $N$-component nanosystems enhanced by pulse-driven vibronic many-body entanglement.- [PDF] - [Article] - [UPDATED]

    Fernando J. Gómez-Ruiz, Oscar L. Acevedo, Ferney J. Rodríguez, Luis Quiroga, Neil F. Johnson
     

    The processing of energy by transfer and redistribution plays a key role in the evolution of dynamical systems. At the ultrasmall and ultrafast scale of nanosystems, quantum coherence could in principle also play a role and has been reported in many pulse-driven nanosystems (e.g. quantum dots and even the microscopic Light-Harvesting Complex II (LHC-II) aggregate). Typical theoretical analyses cannot easily be scaled to describe these general $N$-component nanosystems; they do not treat the pulse dynamically; and they approximate memory effects. Here our aim is to shed light on what new physics might arise beyond these approximations. We adopt a purposely minimal model such that the time-dependence of the pulse is included explicitly in the Hamiltonian. This simple model generates complex dynamics: specifically, pulses of intermediate duration generate highly entangled vibronic (i.e. electronic-vibrational) states that spread multiple excitons -- and hence energy -- maximally within the system. Subsequent pulses can then act on such entangled states to efficiently channel subsequent energy capture. The underlying pulse-generated vibronic entanglement increases in strength and robustness as $N$ increases.

  • A universal quantum computation scheme with low error diffusion property.- [PDF] - [Article] - [UPDATED]

    Chen Lin, Guowu Yang, Xiaoyu Song, Marek. A. Perkowski, Xiaoyu Li
     

    Quantum concatenated codes are an effective way to realize fault-tolerant universal quantum computing. Still, there are many non-fault-tolerant logical locations at low encoding levels, which increases the probability of error multiplication and limits the ability of such codes to realize a high-fidelity universal gate library. In this work, we propose a general framework based on machine learning for the decoder design of a segmented fault-tolerant quantum circuit. Following this design principle, we adopt a neural network algorithm to give an optimized decoder for such circuits. To assess the effectiveness of our new decoder, we apply it to segmented fault-tolerant logical controlled-NOT gates acting on the tensor product of the Steane 7-qubit logical qubit and the Reed-Muller 15-qubit logical qubit. We simulate these gates in a depolarizing noise environment and compare the gate error thresholds against the minimal-weight decoder. Finally, we provide a fault-tolerant universal gate library based on a 33-qubit non-uniform concatenated code. Furthermore, we offer several level-1 segmented fault-tolerant locations with optimized decoders to construct a non-Clifford gate on this code, which has a smaller circuit depth than in our previous work. We also analyze the pseudo-threshold of the universal scheme of this code.

  • Anomalous criticality with bounded fluctuations and long-range frustration induced by broken time-reversal symmetry.- [PDF] - [Article] - [UPDATED]

    Jinchen Zhao, Myung-Joong Hwang
     

    We consider a one-dimensional Dicke lattice with complex photon hopping amplitudes and investigate the influence of time-reversal symmetry breaking due to synthetic magnetic fields. We show that, by tuning the total flux threading the lattice with a periodic boundary condition, the universality class of the superradiant phase transition (SPT) changes from that of mean-field fully connected systems to one that features anomalous critical phenomena. The anomalous SPT exhibits a closing of the energy gap with different critical exponents on the two sides of the transition, and a discontinuity of correlations and fluctuations despite it being a second-order phase transition. In the anomalous normal phase, we find a non-mean-field critical exponent for the closing energy gap and nondivergent fluctuations and correlations, which we attribute to the asymmetric dispersion relation. Moreover, we show that the nearest-neighbour complex hopping induces effective long-range interactions for the position quadratures of the cavity fields, whose competition leads to a series of first-order phase transitions among superradiant phases with varying degrees of frustration. The resulting multicritical points also show anomalous features, such as two coexisting critical scalings on the two sides of the transition. Our work shows that the interplay between broken time-reversal symmetry and frustration on bosonic lattice systems can give rise to anomalous critical phenomena that have no counterpart in fermionic, spin, or time-reversal symmetric quantum optical systems.

  • Shadow tomography from emergent state designs in analog quantum simulators.- [PDF] - [Article] - [UPDATED]

    Max McGinley, Michele Fava
     

    We introduce a method that allows one to infer many properties of a quantum state -- including nonlinear functions such as R\'enyi entropies -- using only global control over the constituent degrees of freedom. In this protocol, the state of interest is first entangled with a set of ancillas under a fixed global unitary, before projective measurements are made. We show that when the unitary is sufficiently entangling, a universal relationship between the statistics of the measurement outcomes and properties of the state emerges, which can be connected to the recently discovered phenomenon of emergent quantum state designs in chaotic systems. Thanks to this relationship, arbitrary observables can be reconstructed using the same number of experimental repetitions that would be required in classical shadow tomography [Huang et al., Nat. Phys. 16, 1050 (2020)]. Unlike previous approaches to shadow tomography, our protocol can be implemented using only global operations, as opposed to qubit-selective logic gates, which makes it particularly well-suited to analog quantum simulators, including ultracold atoms in optical lattices and arrays of Rydberg atoms.

  • Photon-number moments and cumulants of Gaussian states.- [PDF] - [Article] - [UPDATED]

    Yanic Cardin, Nicolás Quesada
     

    We develop closed-form expressions for the moments and cumulants of Gaussian states when measured in the photon-number basis. We express the photon-number moments of a Gaussian state in terms of the loop Hafnian, a function that, when applied to a $(0,1)$-matrix representing the adjacency of a graph, counts the number of its perfect matchings. Similarly, we express the photon-number cumulants in terms of the Montrealer, a newly introduced matrix function that, when applied to a $(0,1)$-matrix, counts the number of Hamiltonian cycles of that graph. Based on these graph-theoretic connections, we show that the calculation of photon-number moments and cumulants is $\#P$-hard. Moreover, we provide an exponential-time algorithm to calculate Montrealers (and thus cumulants), matching well-known results for Hafnians. We then demonstrate that when every input of a uniformly lossy interferometer is fed with identical single-mode Gaussian states with zero displacement, all odd-order cumulants except the first are zero. Finally, we employ the expressions we derive to study the distribution of cumulants up to the fourth order for different input states in a Gaussian boson sampling setup where $K$ identical states are fed into an $\ell$-mode interferometer. We analyze the dependence of the cumulants as a function of the type of input state, squeezed, lossy squeezed, squashed, or thermal, and as a function of the number of non-vacuum inputs. We find that thermal states perform much worse than other classical states, such as squashed states, at mimicking the photon-number cumulants of lossy or lossless squeezed states.
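The Montrealer itself is beyond a quick sketch, but the simplest case can be checked directly: a single-mode thermal state with mean photon number $\bar n$ has first photon-number cumulant $\bar n$ and second cumulant (variance) $\bar n(1+\bar n)$. A numerical sanity check of those two cumulants from the photon-number distribution:

```python
import numpy as np

def thermal_photon_distribution(nbar, cutoff=200):
    """Photon-number distribution of a single-mode thermal state,
    P(n) = nbar^n / (1 + nbar)^(n + 1), truncated at `cutoff`."""
    n = np.arange(cutoff)
    p = nbar**n / (1.0 + nbar) ** (n + 1)
    return n, p

nbar = 0.5
n, p = thermal_photon_distribution(nbar)
mean = (n * p).sum()               # first cumulant: nbar
var = ((n - mean) ** 2 * p).sum()  # second cumulant: nbar * (1 + nbar)
```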

  • Quantum error correction on symmetric quantum sensors.- [PDF] - [Article] - [UPDATED]

    Yingkai Ouyang, Gavin K. Brennen
     

    Symmetric states of collective angular momentum are good candidates for multi-qubit probe states in quantum sensors because they are easy to prepare and can be controlled without requiring individual addressability. Here, we give quantum error correction protocols for estimating the magnitude of classical fields using symmetric probe states. To achieve this, we first develop a general theory for quantum error correction on the symmetric subspace. This theory, based on the representation theory of the symmetric group, allows us to construct efficient algorithms that can correct any correctable error on any permutation-invariant code. These algorithms involve measurements of total angular momentum, quantum Schur transforms or logical state teleportations, and geometric pulse gates. For deletion errors, we give a simpler quantum error correction algorithm based primarily on geometric pulse gates. Second, we devise a simple quantum sensing scheme on symmetric probe states that works in spite of a linear rate of deletion errors, and analyze its asymptotic performance. In our scheme, we repeatedly project the probe state onto the codespace while the signal accumulates. When the time spent to accumulate the signal is constant, our scheme can do phase estimation with precision that approaches the best possible in the noiseless setting. Third, we give near-term implementations of our algorithms.

  • The Dunkl oscillator on a space of nonconstant curvature: an exactly solvable quantum model with reflections.- [PDF] - [Article] - [UPDATED]

    Angel Ballesteros, Amene Najafizade, Hossein Panahi, Hassan Hassanabadi, Shi-Hai Dong
     

    We introduce the Dunkl-Darboux III oscillator Hamiltonian in N dimensions, defined as a $\lambda$-deformation of the N-dimensional Dunkl oscillator. This deformation can be interpreted either as the introduction of a non-constant curvature related to $\lambda$ on the underlying space or, equivalently, as a Dunkl oscillator with a position-dependent mass function. This new quantum model is shown to be exactly solvable in arbitrary dimension N, and its eigenvalues and eigenfunctions are explicitly presented. Moreover, it is shown that in the two-dimensional case both the Darboux III and the Dunkl oscillators can be separately coupled with a constant magnetic field, thus giving rise to two new exactly solvable quantum systems in which the effect of a position-dependent mass and the Dunkl derivatives on the structure of the Landau levels can be explicitly studied. Finally, the whole 2D Dunkl-Darboux III oscillator is coupled with the magnetic field and shown to define an exactly solvable Hamiltonian, where the interplay between the $\lambda$-deformation and the magnetic field is explicitly illustrated.

  • Random-depth Quantum Amplitude Estimation.- [PDF] - [Article] - [UPDATED]

    Xi Lu, Hongwei Lin
     

    The maximum likelihood amplitude estimation (MLAE) algorithm is a practical solution to the quantum amplitude estimation problem with Heisenberg-limited error convergence. We improve MLAE by using random depths to avoid the so-called critical points, and present numerical experiments showing that, unlike the original MLAE, our algorithm is approximately unbiased and approaches the Heisenberg limit more closely.
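    The likelihood being maximized can be sketched numerically with the standard MLAE ingredients. The specific depths, shot count, grid, and seed below are illustrative assumptions, and the paper's critical-point analysis is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = 0.3                       # amplitude angle, a = sin^2(theta)
    depths = rng.integers(0, 30, size=20)  # randomized Grover depths (the paper's idea)
    shots = 100

    # Simulated hit counts: P(hit | depth m) = sin^2((2m+1) theta)
    hits = [rng.binomial(shots, np.sin((2 * m + 1) * theta_true) ** 2) for m in depths]

    # Grid-search maximum-likelihood estimate of theta
    grid = np.linspace(1e-4, np.pi / 2 - 1e-4, 20000)
    loglik = np.zeros_like(grid)
    for m, h in zip(depths, hits):
        p = np.clip(np.sin((2 * m + 1) * grid) ** 2, 1e-12, 1 - 1e-12)
        loglik += h * np.log(p) + (shots - h) * np.log(1 - p)
    theta_hat = grid[np.argmax(loglik)]    # lands close to theta_true
    ```

    Deeper circuits sharpen the likelihood (each depth-$m$ term oscillates $\propto(2m+1)$ faster in $\theta$), which is the origin of the Heisenberg-like scaling; randomizing the depths is what breaks the degeneracies at the critical points.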

  • Enhancing qubit readout with Bayesian Learning.- [PDF] - [Article] - [UPDATED]

    F. Cosco, N. Lo Gullo
     

    We introduce an efficient and accurate readout measurement scheme for single and multi-qubit states. Our method uses Bayesian inference to build an assignment probability distribution for each qubit state based on a reference characterization of the detector response functions. This allows us to account for system imperfections and thermal noise within the assignment of the computational basis. We benchmark our protocol on a quantum device with five superconducting qubits, testing initial state preparation for single and two-qubit states and an application of the Bernstein-Vazirani algorithm executed on five qubits. Our method shows a substantial reduction of the readout error and promises advantages for near-term and future quantum devices.
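    The Bayesian assignment step can be illustrated with a toy one-dimensional detector model. The Gaussian response functions and all numerical values below are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical detector model: the readout signal for each basis state is
    # Gaussian-distributed around a state-dependent mean.
    mu = {0: -1.0, 1: 1.0}
    sigma = 0.8

    def likelihood(v, state):
        # reference-characterized detector response P(v | state), up to normalization
        return np.exp(-(v - mu[state]) ** 2 / (2 * sigma ** 2))

    def assign(v, prior=(0.5, 0.5)):
        # Bayesian assignment: pick the state with the larger posterior weight
        post = [prior[s] * likelihood(v, s) for s in (0, 1)]
        return int(np.argmax(post))

    print(assign(0.9), assign(-0.9))  # -> 1 0
    ```

    A fixed threshold at $v=0$ would give the same answer for this symmetric model; the Bayesian construction pays off precisely when the calibrated response functions are asymmetric or overlapping, which is the imperfection regime the abstract targets.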

  • Quantum-Enhanced Greedy Combinatorial Optimization Solver.- [PDF] - [Article] - [UPDATED]

    Maxime Dupont, Bram Evert, Mark J. Hodson, Bhuvanesh Sundar, Stephen Jeffrey, Yuki Yamaguchi, Dennis Feng, Filip B. Maciejewski, Stuart Hadfield, M. Sohaib Alam, Zhihui Wang, Shon Grabbe, P. Aaron Lott, Eleanor G. Rieffel, Davide Venturelli, Matthew J. Reagor
     

    Combinatorial optimization is a broadly attractive area for potential quantum advantage, but no quantum algorithm has yet made the leap. Noise in quantum hardware remains a challenge, and more sophisticated quantum-classical algorithms are required to bolster their performance. Here, we introduce an iterative quantum heuristic optimization algorithm to solve combinatorial optimization problems. The quantum algorithm reduces to a classical greedy algorithm in the presence of strong noise. We implement the quantum algorithm on a programmable superconducting quantum system using up to 72 qubits for solving paradigmatic Sherrington-Kirkpatrick Ising spin glass problems. We find the quantum algorithm systematically outperforms its classical greedy counterpart, signaling a quantum enhancement. Moreover, we observe an absolute performance comparable with a state-of-the-art semidefinite programming method. Classical simulations of the algorithm illustrate that a key challenge to reaching quantum advantage remains improving the quantum device characteristics.

  • Quantum Public-Key Encryption with Tamper-Resilient Public Keys from One-Way Functions.- [PDF] - [Article] - [UPDATED]

    Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Takashi Yamakawa
     

    We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Barooti-Grilo-Malavolta-Sattath-Vu-Walter, eprint:2023/877]. However, they have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, to establish secure communication over insecure channels, based only on one-way functions. Moreover, we show a generic compiler to upgrade security against chosen plaintext attacks (CPA security) into security against chosen ciphertext attacks (CCA security) only using one-way functions. As a result, we obtain CCA secure quantum public-key encryption based only on one-way functions.

  • Hybrid quantum physics-informed neural networks for simulating computational fluid dynamics in complex shapes.- [PDF] - [Article] - [UPDATED]

    Alexandr Sedykh, Maninadh Podapaka, Asel Sagingalieva, Karan Pinto, Markus Pflitsch, Alexey Melnikov
     

    Finding the distribution of the velocities and pressures of a fluid (by solving the Navier-Stokes equations) is a principal task in the chemical, energy, and pharmaceutical industries, as well as in mechanical engineering and the design of pipeline systems. With existing solvers, such as OpenFOAM and Ansys, simulations of fluid dynamics in intricate geometries are computationally expensive and require re-simulation whenever the geometric parameters or the initial and boundary conditions are altered. Physics-informed neural networks are a promising tool for simulating fluid flows in complex geometries, as they can adapt to changes in the geometry and mesh definitions, allowing for generalization across different shapes. We present a hybrid quantum physics-informed neural network that simulates laminar fluid flows in 3D Y-shaped mixers. Our approach combines the expressive power of a quantum model with the flexibility of a physics-informed neural network, resulting in a 21% higher accuracy compared to a purely classical neural network. Our findings highlight the potential of machine learning approaches, and in particular hybrid quantum physics-informed neural networks, for complex shape optimization tasks in computational fluid dynamics. By improving the accuracy of fluid simulations in complex geometries, our research using hybrid quantum models contributes to the development of more efficient and reliable fluid dynamics solvers.

  • Tomography of Quantum States from Structured Measurements via quantum-aware transformer.- [PDF] - [Article] - [UPDATED]

    Hailan Ma, Zhenhong Sun, Daoyi Dong, Chunlin Chen, Herschel Rabitz
     

    Quantum state tomography (QST) is the process of reconstructing the state of a quantum system (mathematically described as a density matrix) through a series of different measurements, which can be solved by learning a parameterized function to translate experimentally measured statistics into physical density matrices. However, the specific structure of quantum measurements for characterizing a quantum state has been neglected in previous work. In this paper, we explore the similarity between highly structured sentences in natural language and intrinsically structured measurements in QST. To fully leverage the intrinsic quantum characteristics involved in QST, we design a quantum-aware transformer (QAT) model to capture the complex relationship between measured frequencies and density matrices. In particular, we query quantum operators in the architecture to facilitate informative representations of quantum data and integrate the Bures distance into the loss function to evaluate quantum state fidelity, thereby enabling the reconstruction of quantum states from measured data with high fidelity. Extensive simulations and experiments (on IBM quantum computers) demonstrate the superiority of the QAT in reconstructing quantum states with favorable robustness against experimental noise.
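    For context, the textbook baseline that learning-based QST models aim to improve upon under noise is linear inversion from Pauli expectation values. A one-qubit sketch, with a made-up density matrix and ideal noiseless statistics assumed:

    ```python
    import numpy as np

    # Pauli matrices and a made-up single-qubit density matrix to reconstruct
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0])
    rho_true = np.array([[0.8, 0.3], [0.3, 0.2]])

    # Ideal expectation values <P> = Tr(rho P) for each Pauli ...
    r = [np.trace(rho_true @ P).real for P in (X, Y, Z)]

    # ... and the linear-inversion reconstruction rho = (I + rx X + ry Y + rz Z)/2
    rho = 0.5 * (I + r[0] * X + r[1] * Y + r[2] * Z)
    ```

    With finite-shot frequencies in place of exact expectations, this inversion can return non-physical (negative-eigenvalue) matrices, which is one motivation for the learned, structure-aware reconstruction described in the abstract.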

  • A Proof of Specker's Principle.- [PDF] - [Article] - [UPDATED]

    Guido Bacciagaluppi
     

    Specker's principle, the condition that pairwise orthogonal propositions must be jointly orthogonal, has been much investigated recently within the programme of finding physical principles to characterise quantum mechanics. It largely appears, however, to lack a transparent justification. In this paper, I provide a derivation of Specker's principle from three assumptions (made suitably precise): the existence of maximal entanglement, the existence of non-maximal measurements, and no-signalling. I discuss these three assumptions and describe canonical examples of non-Specker sets of propositions satisfying any two of them. These examples display analogies with various approaches in the interpretation of quantum mechanics, notably ones based on retrocausation. I also discuss connections with the work of Popescu and Rohrlich. The core of the proof (and the main example violating no-signalling) is illustrated by a variant of Specker's tale of the seer of Nineveh, with which I open the paper.

  • Spin squeezing in internal bosonic Josephson junctions via enhanced shortcuts to adiabaticity.- [PDF] - [Article] - [UPDATED]

    Manuel Odelli, Vladimir M. Stojanovic, Andreas Ruschhaupt
     

    We investigate a time-efficient and robust preparation of spin-squeezed states -- a class of states of interest for quantum-enhanced metrology -- in internal bosonic Josephson junctions with a time-dependent nonlinear coupling strength between atoms in two different hyperfine states. We treat this state-preparation problem, which had previously been addressed using shortcuts to adiabaticity (STA), using the recently proposed analytical modification of this class of quantum-control protocols that became known as the enhanced STA (eSTA) method. We characterize the state-preparation process by evaluating the time dependence of the coherent spin-squeezing and number-squeezing parameters and the target-state fidelity. We show that the state-preparation times obtained using the eSTA method compare favourably to those found in previously proposed approaches. We also demonstrate that the increased robustness of the eSTA approach -- compared to its STA counterpart -- leads to additional advantages for potential experimental realizations of strongly spin-squeezed states in bosonic Josephson junctions.

  • A fully ab initio approach to inelastic atom-surface scattering.- [PDF] - [Article] - [UPDATED]

    Michelle M. Kelley, Ravishankar Sundararaman, Tomás A. Arias
     

    We introduce a fully ab initio theory for inelastic scattering of any atom from any surface exciting single phonons, and apply the theory to helium scattering from Nb(100). The key aspect making our approach general is a direct first-principles evaluation of the scattering atom-electron vertex. By correcting misleading results from current state-of-the-art theories, this fully ab initio approach will be critical in guiding and interpreting experiments that adopt next-generation, non-destructive atomic beam scattering.

  • Detailed fluctuation theorem from the one-time measurement scheme.- [PDF] - [Article] - [UPDATED]

    Kenji Maeda, Tharon Holdsworth, Sebastian Deffner, Akira Sone
     

    We study the quantum fluctuation theorem in the one-time measurement (OTM) scheme, whose backward-process work distribution had previously been lacking and which is considered more informative than the two-time measurement (TTM) scheme. We find that the OTM scheme is a quantum nondemolition TTM scheme, in which the final state is a pointer state of the second measurement whose Hamiltonian is conditioned on the first measurement outcome. Then, by clarifying the backward work distribution in the OTM scheme, we derive the detailed fluctuation theorem in the OTM scheme for the characteristic functions of the forward and backward work distributions, which captures detailed information about the irreversibility and can be applied to quantum thermometry. We also verify our conceptual findings on an IBM quantum computer. Our result clarifies that the laws of thermodynamics at the nanoscale depend on the choice of measurement, and may provide experimentalists with a concrete strategy to explore laws of thermodynamics at the nanoscale by protecting quantum coherence and correlations.

  • Synchronization by Magnetostriction.- [PDF] - [Article] - [UPDATED]

    Jiong Cheng, Wenlin Li, Jie Li
     

    We show how to utilize magnetostriction to synchronize two mechanical vibration modes in a cavity magnomechanical system. The dispersive magnetostrictive interaction provides the nonlinearity required for achieving synchronization. Strong phase correlation between the two mechanical oscillators can be established, making the synchronization robust against thermal noise. We develop a theoretical framework to analyze the synchronization by solving the constraint conditions of steady-state limit cycles. We find that strong cavity-magnon linear coupling can enhance and regulate the synchronization, which offers a new path to modulate it. The work reveals a new mechanism for achieving and modulating synchronization and indicates that cavity magnomechanical systems can be an ideal platform to explore rich synchronization phenomena.

  • Quantum Amplitude Estimation by Generalized Qubitization.- [PDF] - [Article] - [UPDATED]

    Xi Lu, Hongwei Lin
     

    We propose a generalized qubitization technique for quantum amplitude estimation (QAE), a fundamental subroutine in problems such as quantum simulation and quantum machine learning. Without prior information on the amplitude, we optimize the number of queries to $\frac{\pi}{\sqrt{6}\epsilon}\approx 1.28\epsilon^{-1}$, exactly half that of the quantum-phase-estimation-based algorithm. We also discuss how our result improves the performance of quantum expectation-value estimation and of estimating nonlinear quantities such as the von Neumann entropy.

  • Continuous Hamiltonian dynamics on digital quantum computers without discretization error.- [PDF] - [Article] - [UPDATED]

    Etienne Granet, Henrik Dreyer
     

    We introduce an algorithm to compute Hamiltonian dynamics on digital quantum computers that requires only a finite circuit depth to reach an arbitrary precision, i.e. it achieves zero discretization error with finite depth. This finite number of gates comes at the cost of an attenuation of the measured expectation value by a known amplitude, requiring more shots per circuit. The gate count for simulation up to time $t$ is $O(t^2\mu^2)$, with $\mu$ the $1$-norm of the Hamiltonian, without any dependence on the desired precision of the result, providing a significant improvement over previous algorithms. This dependence on the norm alone makes it particularly well suited to non-sparse Hamiltonians. The algorithm generalizes to time-dependent Hamiltonians, appearing for example in adiabatic state preparation. These properties make it particularly suitable for present-day, relatively noisy hardware that supports only circuits of moderate depth.

  • Efficient Approximation of Quantum Channel Fidelity Exploiting Symmetry.- [PDF] - [Article] - [UPDATED]

    Yeow Meng Chee, Hoang Ta, Van Khu Vu
     

    Determining the optimal fidelity for the transmission of quantum information over noisy quantum channels is one of the central problems in quantum information theory. Recently, [Berta-Borderi-Fawzi-Scholz, Mathematical Programming, 2021] introduced an asymptotically converging semidefinite programming hierarchy of outer bounds for this quantity. However, the size of the semidefinite programs (SDPs) grows exponentially with respect to the level of the hierarchy, thus making their computation unscalable. In this work, by exploiting the symmetries in the SDP, we show that, for fixed input and output dimensions of the given quantum channel, we can compute the SDP in polynomial time in terms of the level of the hierarchy. As a direct consequence of our result, the optimal fidelity can be approximated with an accuracy of $\epsilon$ in a time that is polynomial in $1/\epsilon$.

  • Dissipative Landau-Zener transitions in a three-level bow-tie model: accurate dynamics with the Davydov multi-D2 Ansatz.- [PDF] - [Article] - [UPDATED]

    Lixing Zhang, Maxim F. Gelin, Yang Zhao
     

    We investigate Landau-Zener (LZ) transitions in the three-level bow-tie model (3L-BTM) in a dissipative environment by using the numerically accurate method of multiple Davydov D2 Ansätze. We first consider the 3L-BTM coupled to a single harmonic mode, study evolutions of the transition probabilities for selected values of the model parameters, and interpret the obtained results with the aid of the energy diagram method. We then explore the 3L-BTM coupled to a boson bath. Our simulations demonstrate that sub-Ohmic, Ohmic and super-Ohmic boson baths have substantially different influences on the 3L-BTM dynamics, which cannot be grasped by standard phenomenological Markovian single-rate descriptions. We also describe novel bath-induced phenomena which are absent in two-level LZ systems.

  • Efficient quantum algorithms for testing symmetries of open quantum systems.- [PDF] - [Article] - [UPDATED]

    Rahul Bandyopadhyay, Alex H. Rubin, Marina Radulaski, Mark M. Wilde
     

    Symmetry is an important and unifying notion in many areas of physics. In quantum mechanics, it is possible to eliminate degrees of freedom from a system by leveraging symmetry to identify the possible physical transitions. This allows us to simplify calculations and characterize potentially complicated dynamics of the system with relative ease. Previous works have focused on devising quantum algorithms to ascertain symmetries by means of fidelity-based symmetry measures. In our present work, we develop alternative symmetry testing quantum algorithms that are efficiently implementable on quantum computers. Our approach estimates asymmetry measures based on the Hilbert--Schmidt distance, which is significantly easier, in a computational sense, than using fidelity as a metric. The method is derived to measure symmetries of states, channels, Lindbladians, and measurements. We apply this method to a number of scenarios involving open quantum systems, including the amplitude damping channel and a spin chain, and we test for symmetries within and outside the finite symmetry group of the Hamiltonian and Lindblad operators.

  • Detailed balance in mixed quantum-classical mapping approaches.- [PDF] - [Article] - [UPDATED]

    Graziano Amati, Jonathan R. Mannouch, Jeremy O. Richardson
     

    The violation of detailed balance poses a serious problem for the majority of current quasiclassical methods for simulating nonadiabatic dynamics. In order to analyze the severity of the problem, we predict the long-time limits of the electronic populations according to various quasiclassical mapping approaches, by applying arguments from classical ergodic theory. Our analysis confirms that regions of the mapping space that correspond to negative populations, which most mapping approaches introduce in order to go beyond the Ehrenfest approximation, pose the most serious issue for reproducing the correct thermalization behaviour. This is because inverted potentials, which arise from negative electronic populations entering into the nuclear force, can result in trajectories unphysically accelerating off to infinity. The recently developed mapping approach to surface hopping (MASH) provides a simple way of avoiding inverted potentials, while retaining an accurate description of the dynamics. We prove that MASH, unlike any other quasiclassical approach, is guaranteed to describe the exact thermalization behaviour of all quantum$\unicode{x2013}$classical systems, confirming it as one of the most promising methods for simulating nonadiabatic dynamics in real condensed-phase systems.

  • Indistinguishability between quantum randomness and pseudo-randomness under efficiently calculable randomness measures.- [PDF] - [Article] - [UPDATED]

    Toyohiro Tsurumaru, Tsubasa Ichikawa, Yosuke Takubo, Toshihiko Sasaki, Jaeha Lee, Izumi Tsutsui
     

    We present a no-go theorem for the distinguishability between quantum random numbers (i.e., random numbers generated quantum mechanically) and pseudo-random numbers (i.e., random numbers generated algorithmically). The theorem states that one cannot distinguish these two types of random numbers if the quantum random numbers are efficiently classically simulatable and the randomness measure used for the distinction is efficiently computable. We derive this theorem by using the properties of cryptographic pseudo-random number generators, which are believed to exist in the field of cryptography. Our theorem is found to be consistent with analyses of actual quantum random numbers generated by IBM Quantum and of those obtained in the Innsbruck experiment for the Bell test, where the degrees of randomness of these two sets of quantum random numbers turn out to be essentially indistinguishable from those of the corresponding pseudo-random numbers. Previous observations on the algorithmic randomness of quantum random numbers are also discussed and reinterpreted in terms of our theorem and data analyses.

  • Charge-parity switching effects and optimisation of transmon-qubit design parameters.- [PDF] - [Article] - [UPDATED]

    Miha Papič, Jani Tuorila, Adrian Auer, Inés de Vega, Amin Hosseinkhani
     

    Enhancing the performance of noisy quantum processors requires improving our understanding of error mechanisms and the ways to overcome them. A judicious selection of qubit design parameters, guided by an accurate error model, plays a pivotal role in improving the performance of quantum processors. In this study, we identify optimal ranges for qubit design parameters, grounded in comprehensive noise modeling. To this end, we commence by analyzing a previously unexplored error mechanism that can perturb diabatic two-qubit gates due to charge-parity switches caused by quasiparticles. We show that such charge-parity switching can be the dominant quasiparticle-related error source in a controlled-Z gate between two qubits. Moreover, we also demonstrate that quasiparticle dynamics, resulting in uncontrolled charge-parity switches, induce a residual longitudinal interaction between qubits in a tunable-coupler circuit. Our analysis of optimal design parameters is based on a performance metric for quantum circuit execution that takes into account the fidelity and frequencies of the appearance of both single and two-qubit gates in the circuit. This performance metric together with a detailed noise model enables us to find an optimal range for the qubit design parameters. Substantiating our findings through exact numerical simulations, we establish that fabricating quantum chips within this optimal parameter range not only augments the performance metric but also ensures its continued improvement with the enhancement of individual qubit coherence properties. Conversely, straying from the optimal parameter range can lead to the saturation of the performance metric. Our systematic analysis offers insights and serves as a guiding framework for the development of the next generation of transmon-based quantum processors.

  • Markovian master equations for quantum-classical hybrid systems.- [PDF] - [Article] - [UPDATED]

    Alberto Barchielli
     

    The problem of constructing a consistent quantum-classical hybrid dynamics is addressed in the case of a quantum component in a separable Hilbert space and a continuous, finite-dimensional classical component. In the Markovian case, the problem is formalized by the notion of hybrid dynamical semigroup. A classical component can be observed without perturbing the system, and information on the quantum component can be extracted thanks to the quantum-classical interaction. This point is formalized by showing how to introduce positive operator valued measures and operations compatible with the hybrid dynamical semigroup; in this way the notion of hybrid dynamics is connected to quantum measurements in continuous time. Then, the case of the most general quasi-free generator is presented and the various quantum-classical interaction terms are discussed. To be quasi-free means to send, in the Heisenberg description, hybrid Weyl operators into multiples of Weyl operators; the results on the structure of quasi-free semigroups were proved in the article arXiv:2307.02611. Even in the pure quantum case, a quasi-free semigroup is not restricted to have only a Gaussian structure, but jump-type terms are also allowed. An important result is that, to have interactions producing a flow of information from the quantum component to the classical one, suitable dissipative terms must be present in the generator. Finally, some possibilities are discussed to go beyond the quasi-free case.

  • Quantum kernels for classifying dynamical singularities in a multiqubit system.- [PDF] - [Article] - [UPDATED]

    Diego Tancara, José Fredes, Ariel Norambuena
     

    Dynamical quantum phase transitions are critical phenomena involving out-of-equilibrium states and broken symmetries without classical analogy. However, when finite-sized systems are analyzed, dynamical singularities of the rate function can appear, leading to a challenging physical characterization when parameters are changed. Here, we report a quantum support vector machine (QSVM) algorithm that uses quantum kernels to classify dynamical singularities of the rate function for a multiqubit system. We illustrate our approach using $N$ long-range interacting qubits subjected to an arbitrary magnetic field, which induces a quench dynamics. Inspired by physical arguments, we introduce two different quantum kernels, one inspired by the ground-state manifold and the other based on a single-state tomography. Our accuracy and adaptability results show that this quantum dynamical critical problem can be efficiently solved using physically inspired quantum kernels. Moreover, we extend our results to the cases of time-dependent fields, a quantum master equation, and an increasing number of qubits.
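    A minimal classical illustration of the fidelity-style kernels such QSVM approaches use: the feature map, data, and nearest-centroid rule below are invented for illustration and are not the paper's construction:

    ```python
    import numpy as np

    def feature(x):
        # hypothetical single-qubit feature map |psi(x)> = cos(x)|0> + sin(x)|1>
        return np.array([np.cos(x), np.sin(x)])

    def kernel(x, y):
        # fidelity-style kernel entry K(x, y) = |<psi(x)|psi(y)>|^2 = cos^2(x - y)
        return float(abs(feature(x) @ feature(y)) ** 2)

    # Toy kernel nearest-centroid classifier on 1D data from two "phases"
    train = {0: [0.0, 0.1, -0.1], 1: [1.4, 1.5, 1.6]}

    def classify(x):
        scores = {c: np.mean([kernel(x, t) for t in pts]) for c, pts in train.items()}
        return max(scores, key=scores.get)

    print(classify(0.05), classify(1.55))  # -> 0 1
    ```

    In the quantum setting the kernel entries $|\langle\psi(x)|\psi(y)\rangle|^2$ are estimated on hardware rather than computed in closed form, and the resulting Gram matrix is handed to a standard SVM solver.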

  • Application of the Fourth-Order Time Convolutionless Master Equation to Open Quantum Systems: Quantum Limit of 1/f-Noise.- [PDF] - [Article] - [UPDATED]

    Elyana Crowder, Lance Lampert, Grihith Manchanda, Brian Shoffeitt, Srikar Gadamsetty, Yiting Pei, Shantanu Chaudhary, Dragomir Davidović
     

    We optimize and simplify the exact fourth-order time-convolutionless master equation (TCL4) for a dense set of open quantum systems weakly coupled to the bath. The master equation has a term proportional to the derivative of the system's spectral density and exhibits $1/f^{1-s}$ noise at zero temperature in the thermodynamic limit, where $s$ is the power-law exponent of the spectral density at small frequency $f$; the noise approaches $1/f$ noise as $s\to 0_+$. The noise is driven by fourth-order relaxation-dephasing hybrid processes. When $s<1$, the reduced dynamics diverges in the long-time limit. However, by imposing a physical infrared cut-off time, we estimate the entanglement entropy between the small quantum system (hard matter) and the emitted soft bosons in the bath (soft matter). We also examine how the master equation represents the approach to a ground state, by comparing the asymptotic states of the dynamics at zero temperature with ground states, both computed to fourth order in the interaction. We find that the approach to the ground state is numerically exact only at second order in the coupling to the bath. The TCL4 master equation embodies the infrared problem from quantum field theory, opening up possibilities to mitigate infrared-divergent noise in quantum computing and to study its impact on the efficiency of photosynthetic light harvesting.

  • Erasure detection of a dual-rail qubit encoded in a double-post superconducting cavity.- [PDF] - [Article] - [UPDATED]

    Akshay Koottandavida, Ioannis Tsioutsios, Aikaterini Kargioti, Cassady R. Smith, Vidul R. Joshi, Wei Dai, James D. Teoh, Jacob C. Curtis, Luigi Frunzio, Robert J. Schoelkopf, Michel H. Devoret
     

    Qubits with predominantly erasure errors present distinctive advantages for quantum error correction (QEC) and fault-tolerant quantum computing. Logical qubits based on dual-rail encoding that exploit erasure detection have recently been proposed in superconducting circuit architectures, either with coupled transmons or cavities. Here, we implement a dual-rail qubit encoded in a compact, double-post superconducting cavity. Using an auxiliary transmon, we perform erasure detection on the dual-rail subspace. We characterize the behaviour of the codespace by a novel method of joint-Wigner tomography, based on modifying the cross-Kerr interaction between the cavity modes and the transmon. We measure an erasure rate of $3.981 \pm 0.003~\mathrm{ms}^{-1}$ and a residual dephasing error rate of up to $0.17~\mathrm{ms}^{-1}$ within the codespace. This strong hierarchy of error rates, together with the compact and hardware-efficient nature of this novel architecture, holds promise for realising QEC schemes with enhanced thresholds and improved scaling.

  • Optomechanical ring resonator for efficient microwave-optical frequency conversion.- [PDF] - [Article] - [UPDATED]

    I-Tung Chen, Bingzhao Li, Seokhyeong Lee, Srivatsa Chakravarthi, Kai-Mei Fu, Mo Li
     

    Phonons traveling in solid-state devices are emerging as a universal excitation that can couple to different physical systems through mechanical interaction. At microwave frequencies and in solid-state materials, phonons have a similar wavelength to optical photons, enabling them to interact efficiently with light and produce strong optomechanical effects that are highly desirable for classical and quantum signal transduction between the optical and microwave domains. It becomes conceivable to build optomechanical integrated circuits (OMIC) that guide both photons and phonons and interconnect discrete photonic and phononic devices. Here, we demonstrate an OMIC including an optomechanical ring resonator (OMR), in which infrared photons and GHz phonons co-resonate to induce significantly enhanced interconversion. The OMIC is built on a hybrid platform where the wide-bandgap semiconductor gallium phosphide (GaP) is used as the waveguiding material and piezoelectric zinc oxide (ZnO) is used for phonon generation. The OMR features photonic and phononic quality factors of $>1\times10^5$ and $3.2\times10^3$, respectively, and resonantly enhances the optomechanical conversion between photonic modes to achieve an internal conversion efficiency $\eta_i=(2.1\pm0.1)\%$ and a total device efficiency $\eta_{tot}=0.57\times10^{-6}$ at a low acoustic pump power of 1.6 mW. The efficient conversion in OMICs enables microwave-optical transduction for many applications in quantum information processing and microwave photonics.

  • Variational Quantum Eigensolver with Constraints (VQEC): Solving Constrained Optimization Problems via VQE.- [PDF] - [Article] - [UPDATED]

    Thinh Viet Le, Vassilis Kekatos
     

    Variational quantum approaches have shown great promise in finding near-optimal solutions to computationally challenging tasks. Nonetheless, enforcing constraints in a disciplined fashion has been largely unexplored. To address this gap, this work proposes a hybrid quantum-classical algorithmic paradigm termed VQEC that extends the celebrated VQE to handle optimization with constraints. As with the standard VQE, the vector of optimization variables is captured by the state of a variational quantum circuit (VQC). To deal with constraints, VQEC optimizes a Lagrangian function classically over both the VQC parameters and the dual variables associated with the constraints. To comply with the quantum setup, variables are updated via a perturbed primal-dual method leveraging the parameter shift rule. Among a wide gamut of potential applications, we showcase how VQEC can approximately solve quadratically-constrained binary optimization (QCBO) problems, find stochastic binary policies satisfying quadratic constraints on average and in probability, and solve large-scale linear programs (LP) over the probability simplex. Under an assumption on the error with which the VQC approximates an arbitrary probability mass function (PMF), we provide bounds on the optimality gap attained by a VQC. Numerical tests on a quantum simulator investigate the effect of various parameters and corroborate that VQEC can generate high-quality solutions.
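    The perturbed primal-dual idea in the abstract can be illustrated with a classically simulated toy. The sketch below is an illustrative assumption, not the paper's algorithm: a single-parameter "RY circuit" stands in for the VQC, the objective is to maximize the probability $p_1$ of outcome $|1\rangle$ subject to $p_1 \le 0.6$, and the $-\frac{\epsilon}{2}\lambda^2$ term is one simple way to perturb (damp) the primal-dual dynamics. Gradients use the parameter-shift rule, which is exact for this circuit.

    ```python
    import math

    def expval_z(theta):
        # <Z> on RY(theta)|0>; stands in for an expectation estimated on a VQC
        return math.cos(theta)

    def p1(theta):
        # probability of outcome |1>; plays the role of the optimization variable
        return (1.0 - expval_z(theta)) / 2.0

    def grad_p1(theta):
        # parameter-shift rule: d<Z>/dtheta = (<Z>(theta + pi/2) - <Z>(theta - pi/2)) / 2
        dz = (expval_z(theta + math.pi / 2) - expval_z(theta - math.pi / 2)) / 2.0
        return -dz / 2.0

    # Toy problem: maximize p1 subject to p1 <= 0.6.
    # Perturbed Lagrangian: L = -p1 + lam * (p1 - 0.6) - (eps / 2) * lam**2,
    # where the -eps*lam^2/2 term damps the primal-dual oscillation (illustrative choice).
    eps, eta, theta, lam = 0.05, 0.1, 1.0, 0.0
    for _ in range(20000):
        g = p1(theta) - 0.6                          # constraint residual
        theta -= eta * (lam - 1.0) * grad_p1(theta)  # primal gradient descent on L
        lam = max(0.0, lam + eta * (g - eps * lam))  # projected dual ascent on L
    ```

    At the saddle point the multiplier settles near $\lambda \approx 1$ and the constraint binds up to an $O(\epsilon)$ perturbation introduced by the damping term.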

  • Synthesis and Arithmetic of Single Qutrit Circuits.- [PDF] - [Article] - [UPDATED]

    Amolak Ratan Kalra, Dinesh Valluri, Michele Mosca
     

    In this paper we study single qutrit quantum circuits consisting of words over the Clifford+$\mathcal{D}$ gate set, where $\mathcal{D}$ consists of cyclotomic gates of the form $\text{diag}(\pm\xi^{a},\pm\xi^{b},\pm\xi^{c}),$ where $\xi$ is a primitive $9$-th root of unity and $a,b,c$ are integers. We characterize classes of qutrit unit vectors $z$ with entries in $\mathbb{Z}[\xi, \frac{1}{\chi}]$ based on the possibility of reducing their smallest denominator exponent (sde) with respect to $\chi := 1 - \xi,$ by applying an appropriate gate from Clifford+$\mathcal{D}$. We do this by studying the notion of `derivatives mod $3$' of an arbitrary element of $\mathbb{Z}[\xi]$ and using it to study the smallest denominator exponent of $HDz$, where $H$ is the qutrit Hadamard gate and $D \in \mathcal{D}.$ In addition, we reduce the problem of finding all unit vectors of a given sde to that of finding integral solutions of a positive definite quadratic form along with some additional constraints. As a consequence we prove that the Clifford+$\mathcal{D}$ gates naturally arise as gates with sde $0$ and $3$ in the group $U(3,\mathbb{Z}[\xi, \frac{1}{\chi}])$ of $3 \times 3$ unitaries with entries in $\mathbb{Z}[\xi, \frac{1}{\chi}]$.
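    The central quantity above, the denominator exponent with respect to $\chi = 1 - \xi$, can be made concrete for single ring elements. The sketch below is an illustrative aid, not the paper's algorithm: it represents elements of $\mathbb{Z}[\xi]$ in the power basis $1, \xi, \dots, \xi^5$ (the minimal polynomial is $\Phi_9(x) = x^6 + x^3 + 1$, so $\xi^6 = -1 - \xi^3$) and counts how many times an element is exactly divisible by $\chi$.

    ```python
    import numpy as np

    # Multiplication-by-(1 - xi) matrix in the power basis 1, xi, ..., xi^5,
    # using xi^6 = -1 - xi^3 (minimal polynomial Phi_9(x) = x^6 + x^3 + 1).
    M = np.zeros((6, 6))
    for i in range(5):
        M[i, i] += 1.0      # xi^i term of (1 - xi) * xi^i
        M[i + 1, i] -= 1.0  # -xi^(i+1) term
    M[5, 5] += 1.0
    M[0, 5] += 1.0          # -xi^6 = 1 + xi^3
    M[3, 5] += 1.0

    def divide_by_chi(z):
        """Return z / (1 - xi) if the quotient lies in Z[xi], else None."""
        w = np.linalg.solve(M, np.asarray(z, dtype=float))
        wi = np.rint(w)
        return wi if np.allclose(w, wi, atol=1e-9) else None

    def chi_valuation(z, cap=60):
        """Number of times a nonzero z in Z[xi] is exactly divisible by chi."""
        assert any(z), "zero element has infinite valuation"
        k = 0
        while k < cap:
            w = divide_by_chi(z)
            if w is None:
                return k
            z, k = w, k + 1
        raise RuntimeError("valuation exceeds cap")
    ```

    For instance, the element $\chi$ itself (coefficients `[1, -1, 0, 0, 0, 0]`) has valuation 1, while the integer 3 has valuation 6, reflecting that 3 is totally ramified in $\mathbb{Q}(\xi)$: $(3) = (\chi)^6$. The sde of a vector over $\mathbb{Z}[\xi, \frac{1}{\chi}]$ is then the minimal $k$ such that $\chi^k$ clears all denominators.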

  • On the Pauli Spectrum of QAC0.- [PDF] - [Article] - [UPDATED]

    Shivam Nadimpalli, Natalie Parham, Francisca Vasconcelos, Henry Yuen
     

    The circuit class $\mathsf{QAC}^0$ was introduced by Moore (1999) as a model for constant depth quantum circuits where the gate set includes many-qubit Toffoli gates. Proving lower bounds against such circuits is a longstanding challenge in quantum circuit complexity; in particular, showing that polynomial-size $\mathsf{QAC}^0$ cannot compute the parity function has remained an open question for over 20 years. In this work, we identify a notion of the Pauli spectrum of $\mathsf{QAC}^0$ circuits, which can be viewed as the quantum analogue of the Fourier spectrum of classical $\mathsf{AC}^0$ circuits. We conjecture that the Pauli spectrum of $\mathsf{QAC}^0$ circuits satisfies low-degree concentration, in analogy to the famous Linial-Nisan-Mansour theorem on the low-degree Fourier concentration of $\mathsf{AC}^0$ circuits. If true, this conjecture immediately implies that polynomial-size $\mathsf{QAC}^0$ circuits cannot compute parity. We prove this conjecture for the class of depth-$d$, polynomial-size $\mathsf{QAC}^0$ circuits with at most $n^{O(1/d)}$ auxiliary qubits. We obtain new circuit lower bounds and learning results as applications: this class of circuits cannot correctly compute the $n$-bit parity function on more than a $(\frac{1}{2} + 2^{-\Omega(n^{1/d})})$-fraction of inputs, nor the $n$-bit majority function on more than a $(1 - 1/\mathrm{poly}(n))$-fraction of inputs. Additionally, we show that this class of $\mathsf{QAC}^0$ circuits with limited auxiliary qubits can be learned with quasipolynomial sample complexity, giving the first learning result for $\mathsf{QAC}^0$ circuits. More broadly, our results add evidence that "Pauli-analytic" techniques can be a powerful tool in studying quantum circuits.
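    The Fourier analogy can be illustrated on a toy example: expand a density matrix in the $n$-qubit Pauli basis and bin the squared coefficients by Pauli weight (the number of non-identity tensor factors), the quantum counterpart of Fourier degree. The two-qubit sketch below illustrates the general notion only; it is not the paper's Choi-state construction for $\mathsf{QAC}^0$ circuits.

    ```python
    import itertools
    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

    def pauli_weight_distribution(rho, n):
        """Squared Pauli coefficients of rho, binned by weight 0..n.

        Normalized by 2^n so the bins sum to the purity tr(rho^2)."""
        dist = np.zeros(n + 1)
        for labels in itertools.product("IXYZ", repeat=n):
            P = np.array([[1.0 + 0j]])
            for l in labels:
                P = np.kron(P, PAULIS[l])
            c = np.trace(rho @ P).real          # Pauli coefficient tr(rho P)
            weight = sum(l != "I" for l in labels)
            dist[weight] += c * c / 2**n
        return dist

    # |00><00| : spectrum supported on II, IZ, ZI, ZZ, each with coefficient 1
    rho00 = np.zeros((4, 4), dtype=complex)
    rho00[0, 0] = 1.0
    ```

    For $|00\rangle\langle 00|$ the distribution is $(\frac14, \frac12, \frac14)$ over weights $0, 1, 2$; low-degree concentration for a circuit class asserts that almost all of this mass sits at low weight.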

other

  • No papers in this section today!