Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2023-11-07 11:30 to 2023-11-10 12:30 | Next meeting is Friday Nov 10th, 11:30 am.
I consider the Festina Lente Swampland bound and argue that taking thermal effects into account, such as those that occur during reheating, significantly strengthens the implications of this bound. I argue that the confinement scale should be higher than a scale proportional to the vacuum energy, while Festina Lente without thermal effects only bounds the confinement scale to be above the Hubble scale. For Higgsing of nonabelian gauge fields, I find that the Higgs mass should be heavier than a bound proportional to the electroweak scale (or, more generally, the scale set by the Higgs VEV). The measured values of the Higgs in the SM satisfy the bound. A way to avoid the bound being violated during inflation is to have a large number of species becoming light. If one wants the inflationary scale to lie below the species scale in this case, this bounds the inflationary scale to be $\ll 10^5$ GeV. These bounds have phenomenological implications for BSM physics such as GUTs, suggesting for example a weak or absent gravitational wave signature from the GUT Higgsing phase transition.
Direct detection experiments and the interpretation of their results are sensitive to the velocity structure of the dark matter in our galactic halo. In this work, we extend the formalism that deals with such astrophysics-driven uncertainties, originally introduced in the context of dark-matter-nuclear scattering, to include dark-matter-electron scattering interactions. Using mock data, we demonstrate that the ability to determine the correct dark matter mass and velocity distribution is degraded for recoil spectra that only populate a few low-lying bins, such as models involving a light mediator. We also demonstrate how this formalism allows one to test the compatibility of existing experimental data sets (e.g. SENSEI and EDELWEISS), as well as make predictions for possible future experiments (e.g. GaAs-based detectors).
We use high-resolution ($\simeq 35$ pc) hydrodynamical simulations of galaxy formation to investigate the relation between gas accretion and star formation in galaxies hosted by dark matter haloes of mass $10^{12}$ $\mathrm{M_\odot}$ at $z = 2$. At high redshift, cold-accreted gas is expected to be readily available for star formation, while gas accreted in a hot mode is expected to require a longer time to cool down before being able to form stars. Contrary to these expectations, we find that the majority of cold-accreted gas takes several hundred Myr longer to form stars than hot-accreted gas after it reaches the inner circumgalactic medium (CGM). Approximately 10 per cent of the cold-accreted gas flows rapidly through the inner CGM onto the galactic disc. The remaining 90 per cent is trapped in a turbulent accretion region that extends up to $\sim$ 50 per cent of the virial radius, from which it takes several hundred Myr for the gas to be transported to the star-forming disc. In contrast, most hot shock-heated gas avoids this 'slow track', and accretes directly from the CGM onto the disc where stars can form. We find that shock-heating of cold gas after accretion in the inner CGM and supernova-driven outflows contribute to, but do not fully explain, the delay in star formation. These processes combined slow down the delivery of cold-accreted gas to the galactic disc and consequently limit the rate of star formation in Milky Way mass galaxies at $z > 2$.
The Conditional Colour-Magnitude Distribution (CCMD) is a comprehensive formalism of the colour-magnitude-halo mass relation of galaxies. With joint modelling of a large sample of SDSS galaxies in fine bins of galaxy colour and luminosity, Xu et al. (2018) inferred parameters of a CCMD model that well reproduces colour- and luminosity-dependent abundance and clustering of present-day galaxies. In this work, we test and investigate the CCMD model by studying the colour and luminosity distribution of galaxies in galaxy groups. An apples-to-apples comparison of group galaxies is achieved by applying the same galaxy group finder to identify groups from the CCMD galaxy mocks and from the SDSS data, avoiding any systematic effect of group finding and mass assignment on the comparison. We find overall good agreement in the conditional luminosity function (CLF), the conditional colour function (CCF), and the CCMD of galaxies in galaxy groups inferred from the CCMD mock and the SDSS data. We also discuss the subtle differences revealed by the comparison. In addition, using two external catalogues constructed to only include central galaxies with halo mass measured through weak lensing, we find that their colour-magnitude distribution shows two distinct and orthogonal components, in line with the prediction of the CCMD model. Our results suggest that the CCMD model provides a good description of the halo mass dependent galaxy colour and luminosity distribution. The halo and CCMD mock catalogues are made publicly available to facilitate other investigations.
Motivated by the stunning projections for future CMB surveys, we evaluate the amount of dark radiation produced in the early Universe by two-body decays or binary scatterings with thermal bath particles via a rigorous analysis in momentum space. We track the evolution of the dark radiation phase space distribution, and we use the asymptotic solution to evaluate the amount of additional relativistic energy density parameterized in terms of an effective number of additional neutrino species $\Delta N_{\rm eff}$. Our approach allows us to study light particles that never reach equilibrium across cosmic history, and to scrutinize the physics of decoupling when they thermalize instead. We incorporate quantum statistical effects for all the particles involved in the production processes, and we account for the energy exchanged between the visible and invisible sectors. Non-instantaneous decoupling is responsible for spectral distortions in the final distributions, and we quantify how they translate into the corresponding value of $\Delta N_{\rm eff}$. Finally, we undertake a comprehensive comparison between our exact results and approximated methods commonly employed in the existing literature. Remarkably, we find that the difference can be larger than the experimental sensitivity of future observations, justifying the need for a rigorous analysis in momentum space.
Beyond-$\Lambda$CDM models, which were proposed to resolve the "Hubble tension", often have an impact on the discrepancy in the amplitude of matter clustering, the "$\sigma_8$-tension". To explore the interplay between the two tensions, we propose a simple method to visualize the relation between the two parameters $H_0$ and $\sigma_8$: For a given extension of the $\Lambda$CDM model and data set, we plot the relation between $H_0$ and $\sigma_8$ for different amplitudes of the beyond-$\Lambda$CDM physics. We use this visualization method to illustrate the trend of selected cosmological models, including non-minimal Higgs-like inflation, early dark energy, a varying effective electron mass, an extra number of relativistic species and modified dark energy models. We envision that the proposed method could be a useful diagnostic tool to illustrate the behaviour of complex cosmological models with many parameters in the context of the $H_0$ and $\sigma_8$ tensions.
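The visualization proposed in this abstract is simple enough to sketch in a few lines. Below is a toy illustration, assuming a hypothetical `fit_cosmology` routine that stands in for a full Boltzmann-code fit; the numbers and shaded bands are illustrative placeholders, not measurements.

```python
# Minimal sketch of the H0-sigma8 visualization: for each amplitude `alpha`
# of the beyond-LambdaCDM physics, plot the resulting best-fit (H0, sigma8).
# `fit_cosmology` is a hypothetical stand-in for a real likelihood analysis.
import numpy as np
import matplotlib.pyplot as plt

def fit_cosmology(alpha):
    """Toy stand-in: early-dark-energy-like response of (H0, sigma8)."""
    h0 = 67.4 + 6.0 * alpha          # H0 rises with the new-physics amplitude
    sigma8 = 0.811 + 0.05 * alpha    # ... but sigma8 is dragged up with it
    return h0, sigma8

alphas = np.linspace(0.0, 1.0, 11)
track = np.array([fit_cosmology(a) for a in alphas])

plt.plot(track[:, 0], track[:, 1], "o-", label="toy beyond-$\\Lambda$CDM model")
plt.axvspan(72.0, 74.2, alpha=0.2, color="r", label="local $H_0$ (illustrative)")
plt.axhspan(0.75, 0.78, alpha=0.2, color="g", label="lensing $\\sigma_8$ (illustrative)")
plt.xlabel("$H_0$ [km/s/Mpc]")
plt.ylabel("$\\sigma_8$")
plt.legend()
plt.show()
```

A track that enters the local-$H_0$ band while leaving the $\sigma_8$ band immediately shows whether a model trades one tension for the other, which is the diagnostic the authors advocate.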
Given the recent successful launch of the James Webb Space Telescope, determining robust calibrations of the slopes and absolute magnitudes of the near- to mid-infrared Tip of the Red Giant Branch (TRGB) will be essential to measuring precise extragalactic distances via this method. Using ground-based data of the Large Magellanic Cloud from the Magellanic Clouds Photometric Survey along with near-infrared (NIR) data from 2MASS and mid-infrared (MIR) data collected as a part of the SAGE survey using the Spitzer Space Telescope, we present slopes and zero-points for the TRGB in the optical (VI), NIR (JHK) and MIR ([3.6] & [4.5]) bandpasses. These calibrations utilize stars $0.3 \pm 0.1$ mag below the tip, providing a substantial statistical improvement over previous calibrations which only used the sample of stars narrowly encompassing the tip.
We analyze the energy density spectrum of scalar-induced gravitational waves (SIGWs) using the NANOGrav 15-year data set, thereby constraining the primordial non-Gaussian parameter $f_{\mathrm{NL}}$. For the first time, we calculate the seventeen missing two-loop diagrams proportional to $f_{\mathrm{NL}}A_{\zeta}^3$ that correspond to the two-point correlation function $\langle h^{\lambda,(3)}_{\mathbf{k}} h^{\lambda',(2)}_{\mathbf{k}'} \rangle$ for local-type primordial non-Gaussianity. The total energy density spectrum of SIGWs can be significantly suppressed by these two-loop diagrams. If SIGWs dominate the stochastic gravitational-wave backgrounds (SGWBs) observed in pulsar timing array (PTA) experiments, the parameter interval $f_{\mathrm{NL}}\in [-5,-1]$ is notably excluded by the NANOGrav 15-year data set. After taking into account the abundance of primordial black holes (PBHs) and the convergence of the cosmological perturbation expansion, we find that the only possible parameter range for $f_{\mathrm{NL}}$ might be $-1\le f_{\mathrm{NL}}< 0$.
High-resolution (HR) simulations in cosmology, in particular when including baryons, can take millions of CPU hours. On the other hand, low-resolution (LR) dark matter simulations of the same cosmological volume use minimal computing resources. We develop a denoising diffusion super-resolution emulator for large cosmological simulation volumes. Our approach is based on the image-to-image Palette diffusion model, which we extend to three dimensions. Our super-resolution emulator is trained to perform outpainting, and can thus upgrade very large cosmological volumes from LR to HR using an iterative outpainting procedure. As an application, we generate a simulation box with 8 times the volume of the Illustris TNG300 training data, constructed with over 9000 outpainting iterations, and quantify its accuracy using various summary statistics.
Knowledge of a spectrograph's instrumental profile (IP) provides important information needed for wavelength calibration and for use in scientific analyses. This work develops new methods for IP reconstruction in high resolution spectrographs equipped with Laser Frequency Comb (LFC) calibration systems and assesses the impact that assumptions on the IP shape have on achieving accurate spectroscopic measurements. Astronomical LFCs produce $\approx10000$ bright, unresolved emission lines with known wavelengths, making them excellent probes of the IP. New methods based on Gaussian Process regression were developed to extract detailed information on the IP shape from these data. Applying them to HARPS, an extremely stable spectrograph installed on the ESO 3.6m telescope, we reconstructed its IP at 512 locations on the detector, covering 60% of the total detector area. We found that the HARPS IP is asymmetric and that it varies smoothly across the detector. Empirical IP models provide wavelength accuracy better than 10 m s$^{-1}$ (5 m s$^{-1}$) with 92% (64%) probability. In comparison, reaching the same accuracy has a probability of only 29% (8%) when a Gaussian IP shape is assumed. Furthermore, the Gaussian assumption is associated with intra-order and inter-order distortions in the HARPS wavelength scale as large as 60 m s$^{-1}$. The spatial distribution of these distortions suggests they may be related to spectrograph optics and therefore may generally appear in cross-dispersed echelle spectrographs when Gaussian IPs are assumed. The methods presented here can be applied to other instruments equipped with LFCs, such as ESPRESSO, as well as ANDES and G-CLEF in the future. The empirical IPs will be crucial for obtaining objective and unbiased measurements of fundamental constants from high resolution spectra, as well as measurements of the redshift drift, isotopic abundances, and other science cases.
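To illustrate the kind of Gaussian Process regression used for IP reconstruction (a sketch, not the authors' actual pipeline), the following fits a GP to synthetic, noisy samples of an asymmetric line profile with scikit-learn; the profile shape and noise level are invented for the example.

```python
# Reconstruct a 1D instrumental profile from scattered, noisy samples
# (standing in for stacked LFC comb-line measurements in pixel space)
# using Gaussian Process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_ip(x):
    """Synthetic 'true' IP: a Gaussian with a mild asymmetry."""
    return np.exp(-0.5 * x**2) * (1.0 + 0.15 * x)

x_obs = rng.uniform(-5, 5, 400)      # pixel offsets of comb-line samples
y_obs = true_ip(x_obs) + 0.02 * rng.normal(size=x_obs.size)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=4e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_obs[:, None], y_obs)

x_grid = np.linspace(-5, 5, 200)
ip_mean, ip_std = gp.predict(x_grid[:, None], return_std=True)
# `ip_mean` is the empirical IP model; `ip_std` propagates into the
# wavelength-accuracy budget when the IP is used for line fitting.
```

The non-parametric GP makes no symmetry assumption, which is the point of contrast with the Gaussian-IP baseline discussed in the abstract.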
We analyse perturbations of self-interacting, scalar field dark matter that contains modes both in a coherent condensate state and an incoherent particle-like state. Starting from the coupled equations for the condensate, the particles and their mutual gravitational potential, first derived from first principles in earlier work by the authors, we derive a hydrodynamic limit of two coupled fluids and study their linearized density perturbations in an expanding universe. We find that away from the condensate-only or particle-only limits, and for certain ranges of the parameters, such self-interacting mixtures can significantly enhance the density power spectrum above the standard linear $\Lambda$CDM value at localised wavenumbers, even pushing structure formation into the non-linear regime earlier than expected in $\Lambda$CDM for these scales. We also note that such mixtures can lead to degeneracies between models with different boson masses and self-coupling strengths, in particular between self-coupled models and non-coupled Fuzzy Dark Matter made up of heavier bosons. These findings open up the possibility of a richer phenomenology in scalar field dark matter models and could further inform efforts to place observational limits on their parameters.
We explore the cosmology of the Dark Dimension scenario taking into account perturbations in the linear regime. In the context of the Dark Dimension scenario, a natural candidate for dark matter in our universe is the excitations of a tower of massive spin-2 KK gravitons. These dark gravitons are produced in the early universe and decay to lighter KK gravitons during the course of cosmological evolution. The decay causes the average dark matter mass to decrease as the universe evolves. In addition, the kinetic energy liberated in each decay gives the dark matter particles a kick velocity, leading to a suppression of structure formation. Using current CMB (Planck), BAO and cosmic shear (KiDS-1000) data, we put a bound on the dark matter kick velocity today of $v_\mathrm{today} \leq 2.2 \times 10^{-4} c$ at 95\% CL. This leads to rather specific regions of parameter space for the Dark Dimension scenario. The combination of the experimental bounds from cosmology, astrophysics and table-top experiments leads to the range $l_5 \sim 1$-$10\,\mu$m for the size of the Dark Dimension. The Dark Dimension scenario is found to be remarkably consistent with current observations and provides signatures that are within reach of near-future experiments.
We propose a model-independent \textit{B\'ezier parametric interpolation} to alleviate the degeneracy between baryonic and dark matter abundances by means of intermediate-redshift data. To do so, we first interpolate the observational Hubble data to extract cosmic bounds on the (reduced) Hubble constant, $h_0$, and interpolate the angular diameter distances, $D(z)$, of galaxy clusters, inferred from the Sunyaev-Zeldovich effect, constraining the spatial curvature, $\Omega_k$. Through the so-determined Hubble points and $D(z)$, we interpolate uncorrelated data of baryonic acoustic oscillations, bounding the baryon ($\omega_b = h^2_0\Omega_b$) and total matter ($\omega_m = h^2_0\Omega_m$) densities and reinforcing the constraints on $h_0$ and $\Omega_k$ with the same technique. Instead of pursuing the usual treatment of fixing $\omega_b$ to the value obtained from the cosmic microwave background to remove the matter sector degeneracy, we here interpolate the acoustic parameter from correlated baryonic acoustic oscillations. The results of our Markov chain Monte Carlo simulations agree at the $1\sigma$ confidence level with the flat $\Lambda$CDM model. While our findings are also broadly consistent at $1\sigma$ with its non-flat extension, the Hubble constant appears in tension at up to the $2\sigma$ confidence level. Accordingly, we also reanalyze the Hubble tension with our treatment and find that our results mildly favor local constraints.
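The core of a Bézier interpolation of Hubble data is a weighted linear fit of Bernstein polynomials to $H(z)$ points. A minimal sketch follows, assuming a quadratic curve and placeholder data points (not the actual observational Hubble data set used in the paper).

```python
# Bezier (Bernstein-polynomial) interpolation of Hubble data:
#   H(z) ~ sum_n beta_n * C(N,n) * t^n * (1-t)^(N-n),  t = z / z_max.
# The coefficients beta_n follow from weighted linear least squares.
import numpy as np
from math import comb

z_data = np.array([0.07, 0.20, 0.40, 0.68, 1.04, 1.53, 1.97])    # illustrative
H_data = np.array([69.0, 72.9, 77.0, 92.0, 119.0, 140.0, 186.5]) # km/s/Mpc
sigma_H = np.array([19.6, 29.6, 10.2, 8.0, 11.2, 14.0, 50.4])

N = 2                        # quadratic Bezier curve
z_m = z_data.max()

def bernstein_basis(z):
    t = z / z_m
    return np.column_stack([comb(N, n) * t**n * (1 - t)**(N - n)
                            for n in range(N + 1)])

# Weighted least squares for the coefficients beta_n.
A = bernstein_basis(z_data) / sigma_H[:, None]
b = H_data / sigma_H
beta, *_ = np.linalg.lstsq(A, b, rcond=None)

# At z = 0 only the n = 0 basis function survives, so beta[0] = H(0)
# and h0 = beta[0] / 100 is the reduced Hubble constant.
print("Bezier coefficients:", beta, "->  h0 ~", beta[0] / 100)
```

Because the fit is linear in the coefficients and agnostic about the underlying cosmology, it supplies the model-independent $h_0$ anchor the abstract describes.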
The redshifted 21-cm signal from neutral hydrogen is a direct probe of the physics of the early universe and has been an important science driver of many present and upcoming radio interferometers. In this study, we use a single night of observations with the New Extension in Nan\c{c}ay Upgrading LOFAR (NenuFAR) to place upper limits on the 21-cm power spectrum from the Cosmic Dawn at a redshift of $z$ = 20.3. NenuFAR is a new low-frequency radio interferometer, operating in the 10-85 MHz frequency range, currently under construction at the Nan\c{c}ay Radio Observatory in France. It is a phased array instrument with a very dense uv-coverage at short baselines, making it one of the most sensitive instruments for 21-cm cosmology analyses at these frequencies. Our analysis adopts the foreground subtraction approach, in which sky sources are modeled and subtracted through calibration, and residual foregrounds are subsequently removed using Gaussian process regression (GPR). The final power spectra are constructed from the gridded residual data cubes in the uv-plane. Signal injection tests are performed at each step of the analysis pipeline, and the relevant pipeline settings are optimized to ensure minimal signal loss; any residual signal suppression is accounted for through a bias correction on our final upper limits. We obtain a best 2$\sigma$ upper limit of $2.4\times 10^7$ $\text{mK}^{2}$ at $z$ = 20.3 and $k$ = 0.041 $h\,\text{cMpc}^{-1}$. We see a strong excess power in the data, making our upper limits two orders of magnitude higher than the thermal noise limit. We investigate the origin and nature of this excess power and discuss further improvements in the analysis pipeline, which can potentially mitigate it and consequently allow us to reach the thermal noise sensitivity when multiple nights of observations are processed in the future.
The hydrogen 21-cm signal is predicted to be the richest probe of the young Universe, including eras known as the cosmic Dark Ages, the Cosmic Dawn when the first stars and black holes formed, and the Epoch of Reionization. This signal holds the key to deciphering processes that take place at the early stages of cosmic history. In this opinion piece, we discuss the potential scientific merit of lunar observations of the 21-cm signal and their advantages over more affordable terrestrial efforts. The Moon is a prime location for radio cosmology which will enable precision observations of the low-frequency radio sky. The uniqueness of such observations is that they will provide an unparalleled opportunity to test cosmology and the nature of dark matter using the Dark Ages 21-cm signal. No less enticing is the opportunity to obtain a much clearer picture of Cosmic Dawn than what is achievable from the ground, which will allow us to probe the properties of the first stars and black holes.
The scalar-induced gravitational waves (SIGW), arising from large-amplitude primordial density fluctuations, provide a unique observational test for directly probing the epoch of inflation. In this work, we provide constraints on the SIGW background by taking into account the non-Gaussianity in the primordial density fluctuations, using the third observing run (O3) data of the LIGO-Virgo-KAGRA collaboration. We find that the non-Gaussianity has a non-negligible effect on the GW energy density spectrum and starts to affect the analysis of the O3 data when the non-Gaussianity parameter is $F_{\rm NL} > 3.55$. Furthermore, the constraints exhibit asymptotic behavior given by $F_{\rm NL} A_g = \rm{const.}$ in the large-$F_{\rm NL}$ limit, where $A_g$ denotes the amplitude of the curvature perturbations. In the limit of large $F_{\rm NL}$, we place a 95% confidence level upper limit of $F_{\rm NL} A_g \leq 0.13, 0.09, 0.10$ at fixed scales of $10^{16}, 10^{16.5}, 10^{17}~{\rm Mpc}^{-1}$, respectively.
We present the final results of our search for new Milky Way (MW) satellites using the data from the Hyper Suprime-Cam (HSC) Subaru Strategic Program (SSP) survey over $\sim 1,140$ deg$^2$. In addition to three candidates that we already reported, we have identified two new MW satellite candidates, in the constellations of Sextans at a heliocentric distance of $D_{\odot} \simeq 126$ kpc and Virgo at $D_{\odot} \simeq 151$ kpc, named Sextans II and Virgo III, respectively. Their luminosities (Sext II: $M_V\simeq-3.9$ mag; Vir III: $M_V\simeq-2.7$ mag) and half-light radii (Sext II: $r_h\simeq 154$ pc; Vir III: $r_h\simeq 44$ pc) place them in the region of size-luminosity space occupied by ultra-faint dwarf galaxies (UFDs). Including four previously known satellites, there are a total of nine satellites in the HSC-SSP footprint. This discovery rate of UFDs is much higher than that predicted by recent models for the expected population of MW satellites in the framework of cold dark matter models, suggesting that we face a 'too many satellites' problem. Possible solutions to this tension are also discussed.
We search for narrow-line optical emission from dark matter decay by stacking dark-sky spectra from the Dark Energy Spectroscopic Instrument (DESI) at the redshifts of nearby galaxies from DESI's Bright Galaxy and Luminous Red Galaxy samples. Our search uses regions separated by 5 to 20 arcseconds from the centers of the galaxies, corresponding to an impact parameter of approximately $50\,\rm kpc$. No unidentified spectral line is found in the search, and we place a line flux limit of $10^{-19}\,\rm{erg}\,\rm{s}^{-1}\,\rm{cm}^{-2}\,\rm{arcsec}^{-2}$ on emission in the optical band ($3000\lesssim\lambda\lesssim9000\,\mathring{\rm A}$), corresponding to an AB magnitude of $34$ for a conventional broadband detection. This detection limit implies that the line surface brightness contributed by all dark matter along the line of sight is two orders of magnitude lower than the measured extragalactic background light (EBL), which rules out the possibility that narrow optical-line emission from dark matter decay is a major source of the EBL.
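The stacking step itself is conceptually simple: each sky spectrum is shifted to the rest frame of its associated galaxy and coadded with inverse-variance weights, so a decay line adds up coherently at a fixed rest-frame wavelength. A minimal sketch follows; the function signature and arrays are placeholders, not the DESI pipeline.

```python
# Rest-frame, inverse-variance-weighted coadd of dark-sky spectra.
import numpy as np

def stack_rest_frame(wave_obs, fluxes, ivars, redshifts, wave_rest_grid):
    """wave_obs: (npix,) observed-frame wavelengths shared by all spectra;
    fluxes, ivars: (nspec, npix); redshifts: (nspec,)."""
    num = np.zeros_like(wave_rest_grid)
    den = np.zeros_like(wave_rest_grid)
    for flux, ivar, z in zip(fluxes, ivars, redshifts):
        wave_rest = wave_obs / (1.0 + z)          # shift to the galaxy rest frame
        f = np.interp(wave_rest_grid, wave_rest, flux)
        w = np.interp(wave_rest_grid, wave_rest, ivar)
        num += w * f
        den += w
    return num / np.maximum(den, 1e-30)           # ivar-weighted mean spectrum

# A dark matter decay line from the haloes would appear as a narrow feature
# in the stacked spectrum; its absence translates into the quoted
# surface-brightness limit.
```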
If gravity is fundamentally quantum, any two quantum particles must get entangled with each other due to their mutual interaction through gravity. This phenomenon, dubbed gravity-mediated entanglement, has led to recent efforts to detect perturbative quantum gravity in table-top experimental setups. In this paper, we generalize this idea to two idealized massive oscillators, prepared in their ground states, which get entangled due to gravity in an expanding universe, and find that the curvature of the background spacetime leaves its imprints on the resulting entanglement profile. Thus, detecting gravity-mediated entanglement from cosmological observations would open up an exciting new avenue for measuring the local expansion rate of the cosmos.
There has been growing interest in $f(Q)$ gravity, which has led to significant advancements in the field. However, it is important to note that most studies in this area were based on the coincident gauge, thus overlooking the impact of the connection degrees of freedom. In this work, we pay special attention to the connection when studying perturbations in general teleparallel, metric teleparallel, and symmetric teleparallel theories of gravity. We do not just examine perturbations in the metric, but also in the affine connection. To illustrate this, we investigate cosmological perturbations in $f(G)$, $f(T)$, and $f(Q)$ gravity with and without matter in the form of an additional scalar field for spatially flat and curved FLRW geometries. Our perturbative analysis reveals that for general $f(Q)$ backgrounds, there are up to seven degrees of freedom, depending on the background connection. This is in perfect agreement with the upper bound on degrees of freedom established for the first time in $\href{https://doi.org/10.1002/prop.202300185}{Fortschr. Phys. 2023, 2300185}$. In $f(G)$ and $f(T)$ gravity theories, only two tensor modes propagate in the gravity sector on generic curved cosmological backgrounds, indicating strong coupling problems. In the context of $f(Q)$ cosmology, we find that for a particular background connection, where all seven modes propagate, there is at least one ghost degree of freedom. For all other choices of the connection, the ghost can be avoided only at the cost of a strong coupling problem, where just four degrees of freedom propagate. Hence, all of the cosmologies within the teleparallel families of theories in the form of $f(G)$, $f(T)$, and $f(Q)$ suffer either from strong coupling or from ghost instabilities. A direct coupling of the matter field to the connection or non-minimal couplings might alter these results.
In this study we analyze Type Ia supernovae (SNe Ia) data from the Pantheon+ compilation to investigate late-time physics effects influencing the expansion history, $H(z)$, at redshifts $z < 2$. Our focus centers on a time-varying dark energy (DE) model that introduces a rapid transition in the equation of state at a specific redshift, $z_a$, from the baseline value, $w = -1$, to the present value, $w_0$, through the implementation of a sigmoid function. The constraints obtained for the DE sigmoid phenomenological parametrization have broad applicability for dynamic DE models that invoke late-time physics. Our analysis indicates that the sigmoid model provides a slightly better, though not statistically significant, fit to the SNe Pantheon+ data compared to standard $\Lambda$CDM. The fit results, assuming a flat geometry and keeping $\Omega_m$ fixed at the Planck 2018 value of $0.3153$, are as follows: $H_0 = 73.3^{+0.2}_{-0.6}$ km s$^{-1}$ Mpc$^{-1}$, $w_{0} = -0.95^{+0.15}_{-0.02}$, $z_a = 0.8 \pm 0.46$. The errors represent statistical uncertainties only. The available SN dataset lacks sufficient statistical power to distinguish between the baseline $\Lambda$CDM and the alternative sigmoid models. A feature of interest offered by the sigmoid model is that it identifies a specific redshift, $z_a = 0.8$, where a potential transition in the equation of state could have occurred. The sigmoid model does not favor DE in the phantom region ($w_0 < -1$). Further constraints on the dynamic DE model have been obtained using CMB data to compute the distance to the last scattering surface. While the sigmoid DE model does not completely resolve the $H_0$ tension, it offers a transition mechanism that can still play a role alongside other potential solutions.
Gravitationally lensed quasars (GLQs) can potentially provide an independent way of determining the value of the Hubble-Lema\^{i}tre parameter $H_{0}$, of probing the dark matter content of lensing galaxies, and of resolving tiny structures in distant active galactic nuclei. That is why multiply imaged quasars are one of the main drivers for photometric monitoring with the 4-m International Liquid Mirror Telescope (ILMT). We would like to answer the following questions: how many multiply imaged quasars should we be able to detect with the ILMT, and how can we derive accurate magnitudes of the GLQ images? We estimate that about $15$ multiply imaged quasars should be detectable, although optimistic forecasts predict up to $50$ of them. We propose to use the adaptive PSF fitting method for accurate flux measurements of the lensed images. During preliminary observations in spring 2022 we were able to detect the quadruply imaged quasar SDSS J1251+2935 in the $\it{i}$ and $\it{r}$ spectral bands.
We use network theory to study topological features in the hierarchical clustering of dark matter halos. We use public halo catalogs from cosmological N-body simulations and construct tree graphs that connect halos within main halo systems. Our analysis shows that these graphs exhibit a power-law degree distribution with an exponent of $-2$, and possess scale-free and self-similar properties according to the criteria of graph metrics. We propose a random graph model with preferential attachment kernels, which effectively incorporate the effects of minor mergers, major mergers, and tidal stripping. The model reproduces the structural and topological properties of simulated halo systems, providing a new way of modeling the complex gravitational dynamics of structure formation.
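A bare-bones version of a preferential-attachment tree model is easy to write down. The sketch below uses a simple degree kernel $K(k) \propto k$ for illustration; the paper's kernels encoding minor mergers, major mergers, and tidal stripping are richer, and the toy model is not claimed to reproduce the measured $-2$ exponent.

```python
# Toy preferential-attachment tree: each new node attaches to an existing
# node with probability proportional to a kernel K(k) of its degree k.
import numpy as np

def grow_tree(n_nodes, kernel=lambda k: k, seed=0):
    rng = np.random.default_rng(seed)
    degree = np.zeros(n_nodes)
    degree[0] = degree[1] = 1                 # start from a single edge 0-1
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        weights = kernel(degree[:new])
        target = rng.choice(new, p=weights / weights.sum())
        edges.append((target, new))
        degree[target] += 1
        degree[new] = 1
    return edges, degree

edges, degree = grow_tree(20000)
counts = np.bincount(degree.astype(int))[1:]  # counts[i] = #nodes of degree i+1
k = np.arange(1, counts.size + 1)
mask = counts > 0
slope = np.polyfit(np.log(k[mask][:30]), np.log(counts[mask][:30]), 1)[0]
print("approximate degree-distribution exponent:", slope)
```

Swapping in different kernels (e.g. sublinear or piecewise forms) changes the degree-distribution exponent, which is the handle the authors use to match the simulated halo trees.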
Comparison of appropriate models to describe observational data is a fundamental task of science. The Bayesian model evidence, or marginal likelihood, is a computationally challenging, yet crucial, quantity to estimate to perform Bayesian model comparison. We introduce a methodology to compute the Bayesian model evidence in simulation-based inference (SBI) scenarios (also often called likelihood-free inference). In particular, we leverage the recently proposed learnt harmonic mean estimator and exploit the fact that it is decoupled from the method used to generate posterior samples, i.e. it requires posterior samples only, which may be generated by any approach. This flexibility, which is lacking in many alternative methods for computing the model evidence, allows us to develop SBI model comparison techniques for the three main neural density estimation approaches, including neural posterior estimation (NPE), neural likelihood estimation (NLE), and neural ratio estimation (NRE). We demonstrate and validate our SBI evidence calculation techniques on a range of inference problems, including a gravitational wave example. Moreover, we further validate the accuracy of the learnt harmonic mean estimator, implemented in the HARMONIC software, in likelihood-based settings. These results highlight the potential of HARMONIC as a sampler-agnostic method to estimate the model evidence in both likelihood-based and simulation-based scenarios.
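The harmonic-mean identity underlying this estimator can be demonstrated in a few lines on a toy problem with a known evidence. The sketch below hand-picks a normalized importance target $\varphi(\theta)$, whereas the learnt estimator in HARMONIC fits $\varphi$ to the samples; the conjugate-Gaussian setup is invented for the example.

```python
# Harmonic-mean evidence estimator on a 1D Gaussian toy problem:
#   E_posterior[ phi(theta) / (L(theta) * pi(theta)) ] = 1 / z
# for any normalized target phi, since the posterior is L*pi/z.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Model: prior N(0, 10^2), likelihood N(theta; x_obs=3, 1).
prior = stats.norm(0.0, 10.0)
x_obs, sig = 3.0, 1.0
post_var = 1.0 / (1.0 / 10.0**2 + 1.0 / sig**2)
post_mean = post_var * (x_obs / sig**2)
evidence_true = stats.norm(0.0, np.sqrt(10.0**2 + sig**2)).pdf(x_obs)

# Posterior samples; in practice these may come from ANY sampler (or SBI).
samples = rng.normal(post_mean, np.sqrt(post_var), 100_000)

# A target narrower than the posterior keeps the estimator's variance finite.
phi = stats.norm(post_mean, 0.5 * np.sqrt(post_var))
recip_z = np.mean(phi.pdf(samples)
                  / (stats.norm.pdf(x_obs, loc=samples, scale=sig)
                     * prior.pdf(samples)))
print("estimated evidence:", 1.0 / recip_z, " true:", evidence_true)
```

The decoupling the abstract emphasizes is visible here: nothing in the estimator cares how `samples` were produced, which is what makes the approach portable to NPE, NLE, and NRE posteriors.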
We present and compare several methods to mitigate time-correlated (1/f) noise within the HI intensity mapping component of the MeerKAT Large Area Synoptic Survey (MeerKLASS). By simulating scan strategies, the HI signal, foreground emissions, white and correlated noise, we assess the ability of various data processing pipelines to recover the power spectrum of HI brightness temperature fluctuations. We use MeerKAT pilot data to assess the level of 1/f noise expected for the MeerKLASS survey and use these measurements to create realistic levels of time-correlated noise for our simulations. We find the time-correlated noise component within the pilot data to be between 10 and 20 times higher than the white noise level at the scale of k = 0.04 Mpc^-1. Having determined that the MeerKAT 1/f noise is partially correlated across all the frequency channels, we employ Singular Value Decomposition (SVD) as a technique to remove both the 1/f noise and Galactic foregrounds but find that over-cleaning results in the removal of HI power at large (angular and radial) scales; a power loss of 40 per cent is seen for a 3-mode SVD clean at the scale of k = 0.04 Mpc^-1. We compare the impact of map-making using weighting by the full noise covariance (i.e. including a 1/f component), as opposed to just a simple unweighted binning, finding that including the time-correlated noise information reduces the excess power added by 1/f noise by up to 30 per cent.
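The SVD cleaning step described above amounts to arranging the maps as a frequency-by-pixel matrix and nulling the leading singular modes. Below is a minimal sketch with synthetic data; the rank-1 "foreground" and noise levels are invented, and real pipelines must balance `n_modes` against the over-cleaning signal loss quantified in the paper.

```python
# Bare-bones SVD cleaning for HI intensity mapping: remove the leading
# singular modes (dominated by smooth foregrounds and correlated 1/f
# noise) from a (frequency x pixel) data matrix.
import numpy as np

def svd_clean(data, n_modes):
    """data: (nfreq, npix) map matrix; removes the first n_modes modes."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s_cut = s.copy()
    s_cut[:n_modes] = 0.0                      # null the leading modes
    return (U * s_cut) @ Vt

# Toy demonstration: bright smooth "foreground" + faint "signal".
rng = np.random.default_rng(2)
freq_profile = 1e3 * np.exp(-2.0 * np.linspace(0, 1, 64))[:, None]
foreground = freq_profile * rng.normal(1.0, 0.01, (1, 4096))
signal = rng.normal(0.0, 1.0, (64, 4096))
residual = svd_clean(foreground + signal, n_modes=3)
print("residual rms:", residual.std(), "(signal rms was 1.0)")
```

Increasing `n_modes` suppresses more correlated noise but also removes signal modes, which is the 40 per cent power loss the authors report for a 3-mode clean at large scales.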
A group of massive galaxies at redshifts of $z\gtrsim 7$ have been recently detected by the James Webb Space Telescope (JWST), which were unexpected to form at such early time within the standard Big Bang cosmology. In this work, we propose that this puzzle can be explained by the presence of some primordial black holes (PBHs) with mass of $\sim 1000 M_\odot$. These PBHs act as seeds for early galaxies formation with masses of $\sim 10^{8}-10^{10}~M_\odot$ at high redshift, hence accounting for the JWST observations. We use a hierarchical Bayesian inference framework to constrain the PBH mass distribution models, and find that the Lognormal model with the $M_{\rm c}\sim 750M_\odot$ is preferred over other hypotheses. These rapidly growing BHs are expected to have strong radiation and may appear as the high-redshift compact objects, similar to those recently discovered by JWST.
Triumvirate is a Python/C++ package for measuring three-point clustering statistics in large-scale structure (LSS) cosmological analyses. Given a catalogue of discrete particles (such as galaxies) with their spatial coordinates, it computes estimators of the multipoles of the three-point correlation function, and of its Fourier-space counterpart the bispectrum, in the tri-polar spherical harmonic (TripoSH) decomposition proposed by Sugiyama et al. (2019). The objective of Triumvirate is to provide efficient end-to-end measurement of clustering statistics which can be fed into downstream galaxy survey analyses to constrain and test cosmological models. To this end, it builds upon the original algorithms in the hitomi code developed by Sugiyama et al. (2018, 2019), and supplies a user-friendly interface with flexible input/output (I/O) of catalogue data and measurement results, with the built program configurable through external parameter files and tracked through enhanced logging and warning/exception handling. For completeness and complementarity, methods for measuring two-point clustering statistics are also included in the package.
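To give a feel for what an FFT-based bispectrum estimator does (the generic shell-filtering idea that TripoSH-style codes generalize to multipoles; this is not Triumvirate's actual API or code), here is a sketch for a density field on a cubic grid. It assumes an even grid side and $k$ bins satisfying the triangle inequality, and omits overall normalization conventions.

```python
# Generic FFT-based estimator of the bispectrum in shells: filter the field
# onto spherical k-shells, inverse-FFT each shell, and average the product
# of the three real-space fields; the same product of "ones" fields counts
# the closed triangles for normalization.
import numpy as np

def shell_field(field_k, kgrid, kc, dk, shape):
    """Inverse FFT of a field restricted to a spherical shell in k."""
    mask = (kgrid > kc - dk / 2) & (kgrid <= kc + dk / 2)
    return np.fft.irfftn(np.where(mask, field_k, 0.0), s=shape)

def bispectrum(delta, boxsize, k1, k2, k3, dk):
    shape = delta.shape
    kf = 2 * np.pi / boxsize
    kx = np.fft.fftfreq(shape[0], d=1.0 / shape[0]) * kf
    kz = np.fft.rfftfreq(shape[2], d=1.0 / shape[2]) * kf
    kgrid = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                    + kz[None, None, :]**2)
    delta_k = np.fft.rfftn(delta)
    ones_k = np.ones_like(delta_k)
    d = [shell_field(delta_k, kgrid, k, dk, shape) for k in (k1, k2, k3)]
    i = [shell_field(ones_k, kgrid, k, dk, shape) for k in (k1, k2, k3)]
    n_tri = np.mean(i[0] * i[1] * i[2])    # closed-triangle normalization
    return np.mean(d[0] * d[1] * d[2]) / n_tri
```

Production codes like Triumvirate add the TripoSH multipole weights, survey-window handling, and shot-noise subtraction on top of this basic construction.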
A recent ultraviolet luminosity function (UVLF) analysis in the Hubble Frontier Fields, behind foreground lensing clusters, has helped solidify estimates of the faint-end of the $z \sim 5-9$ UVLF at up to five magnitudes fainter than in the field. These measurements provide valuable information regarding the role of low luminosity galaxies in reionizing the universe and can help in calibrating expectations for JWST observations. We fit a semi-empirical model to the lensed and previous UVLF data from Hubble. This fit constrains the average star formation efficiency (SFE) during reionization, with the lensed UVLF measurements probing halo mass scales as small as $M \sim 2 \times 10^9 {\rm M}_\odot$. The implied trend of SFE with halo mass is broadly consistent with an extrapolation from previous inferences at $M \gtrsim 10^{10} {\rm M}_\odot$, although the joint data prefer a shallower SFE. This preference, however, is partly subject to systematic uncertainties in the lensed measurements. Near $z \sim 6$ we find that the SFE peaks at $\sim 20 \%$ between $\sim 10^{11}-10^{12} {\rm M}_\odot$. Our best fit model is consistent with Planck 2018 determinations of the electron scattering optical depth, and most current reionization history measurements, provided the escape fraction of ionizing photons is $f_{\rm esc} \sim 10-20\%$. The joint UVLF accounts for nearly $80\%$ of the ionizing photon budget at $z \sim 8$. Finally, we show that recent JWST UVLF estimates at $z \gtrsim 11$ require strong departures from the redshift evolution suggested by the Hubble data.
We present the Simple Intensity Map Producer for Line Emission (SIMPLE), a public code for quickly simulating mock line-intensity maps, and an analytical framework for modeling intensity maps including observational effects. SIMPLE can be applied to any spectral line sourced by galaxies. The SIMPLE code is based on lognormal mock catalogs of galaxies including positions and velocities and assigns luminosities following the luminosity function. After applying a selection function to distinguish between detected and undetected galaxies, the code generates an intensity map, which can be modified with anisotropic smoothing, noise, a mask, and sky subtraction, and calculates the power spectrum multipoles. We show that the intensity autopower spectrum and the galaxy-intensity cross-power spectrum agree well with the analytical estimates in real space. We show analytically that the sky subtraction suppresses the intensity autopower spectrum and the cross-power spectrum on scales larger than the size of an individual observation. As an example application, we make forecasts for the sensitivity of an intensity mapping experiment similar to the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) to the cross-power spectrum of Ly$\alpha$-emitting galaxies and the Ly$\alpha$ intensity. We predict that HETDEX will measure the galaxy-intensity cross-power spectrum with a high signal-to-noise ratio on scales of $0.04\, h\,\mathrm{Mpc}^{-1} < k < 1\, h\,\mathrm{Mpc}^{-1}$.
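The lognormal construction at the heart of such mock-making is compact: draw a Gaussian field with a target power spectrum, then exponentiate so the density contrast satisfies $\delta > -1$. The sketch below uses a toy power law and glosses over two refinements real lognormal mocks include (transforming the target spectrum to the Gaussian-field spectrum, and Hermitian-symmetry subtleties at self-conjugate modes); the normalization is only schematic.

```python
# Minimal lognormal density-field generator.
import numpy as np

def lognormal_field(n, boxsize, pk=lambda k: 2e3 * (k + 1e-3)**-1.5, seed=3):
    rng = np.random.default_rng(seed)
    kf = 2 * np.pi / boxsize
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    k = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                + kz[None, None, :]**2)
    # Gaussian field with power spectrum pk (up to FFT normalization).
    amp = np.sqrt(pk(np.where(k > 0, k, 1.0)) / boxsize**3) * n**3
    amp[0, 0, 0] = 0.0
    phase = rng.normal(size=k.shape) + 1j * rng.normal(size=k.shape)
    delta_g = np.fft.irfftn(amp * phase / np.sqrt(2), s=(n, n, n))
    # Lognormal transform: unit-mean density, so delta >= -1 by construction.
    return np.exp(delta_g - delta_g.var() / 2.0) - 1.0

field = lognormal_field(64, 1000.0)
print("min delta:", field.min(), " mean delta:", field.mean())
```

Galaxies are then Poisson-sampled from the lognormal density and assigned luminosities from the luminosity function, which is the pipeline SIMPLE automates.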
We explore the eccentricity measurement threshold of LISA for gravitational waves radiated by massive black hole binaries (MBHBs) with redshifted BH masses $M_z$ in the range $10^{4.5}$-$10^{7.5}~{\rm M}_\odot$ at redshift $z=1$. The eccentricity can be an important tracer of the environment where MBHBs evolve to reach the merger phase. To account for LISA's motion and apply time delay interferometry, we employ the lisabeta software and produce year-long eccentric waveforms using the inspiral-only post-Newtonian model TaylorF2Ecc. We study the minimum measurable eccentricity ($e_{\rm min}$, defined one year before the merger) analytically by computing matches and Fisher matrices, and numerically via Bayesian inference by varying both intrinsic and extrinsic parameters. We find that $e_{\rm min}$ depends strongly on $M_z$ and weakly on the mass ratio and extrinsic parameters. A match-based signal-to-noise ratio criterion suggests that LISA will be able to detect $e_{\rm min}\sim10^{-2.5}$ for lighter systems ($M_z\lesssim10^{5.5}~{\rm M}_\odot$) and $\sim10^{-1.5}$ for heavier MBHBs with $90$ per cent confidence. Bayesian inference with Fisher initialization and a zero noise realization pushes this limit to $e_{\rm min}\sim10^{-2.75}$ for lower-mass binaries, assuming a $<50$ per cent relative error. Bayesian inference can recover injected eccentricities of $0.1$ and $10^{-2.75}$ for a $10^5~{\rm M}_\odot$ system with $\sim10^{-2}$ per cent and $\sim10$ per cent relative errors, respectively. A stringent Bayesian odds criterion ($\ln{B}>8$) provides nearly the same inference. Both analytical and numerical methodologies provide almost consistent results for our systems of interest. LISA will launch in a decade, making this study valuable and timely for unlocking the mysteries of MBHB evolution.
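The "match" used in such threshold studies is the noise-weighted overlap of two waveforms maximized over time and phase shifts, which a single inverse FFT evaluates for all time shifts at once. A minimal sketch, assuming one-sided frequency-domain waveforms `h1`, `h2` and a PSD on a common grid (all placeholders):

```python
# Noise-weighted match of two frequency-domain waveforms, maximized over
# a relative time shift (via inverse FFT) and a constant phase (via abs).
import numpy as np

def match(h1, h2, psd, df):
    """h1, h2: complex one-sided waveforms; psd: noise PSD on the same grid;
    df: frequency resolution. Returns the match in [0, 1]."""
    inner = lambda a, b: 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))
    # A time shift t0 multiplies the integrand by exp(2*pi*i*f*t0), so an
    # inverse FFT gives the overlap at every discrete time shift at once.
    corr = np.fft.ifft(4.0 * df * h1 * np.conj(h2) / psd) * len(h1)
    return np.max(np.abs(corr)) / np.sqrt(inner(h1, h1) * inner(h2, h2))
```

Comparing an eccentric TaylorF2Ecc waveform against its circular counterpart with such a function, as a function of injected eccentricity, yields the match-based $e_{\rm min}$ criterion quoted in the abstract.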
X-ray binaries (XRBs) are thought to regulate cosmic thermal and ionization histories during the Epoch of Reionization and Cosmic Dawn ($z\sim 5-30$). Theoretical predictions of the X-ray emission from XRBs are important for modelling such early cosmic evolution. Nevertheless, the contribution from Be-XRBs, powered by accretion of compact objects from decretion disks around rapidly rotating O/B stars, has not been investigated systematically. Be-XRBs are the largest class of high-mass XRBs (HMXBs) identified in local observations and are expected to play even more important roles in metal-poor environments at high redshifts. In light of this, we build a physically motivated model for Be-XRBs based on recent hydrodynamic simulations and observations of decretion disks. Our model is able to reproduce the observed population of Be-XRBs in the Small Magellanic Cloud with appropriate initial conditions and binary stellar evolution parameters. We derive the X-ray output from Be-XRBs as a function of metallicity in the (absolute) metallicity range $Z\in [10^{-4},0.03]$ with a large suite of binary population synthesis (BPS) simulations. The simulated Be-XRBs can explain a non-negligible fraction ($\gtrsim 30\%$) of the total X-ray output from HMXBs observed in nearby galaxies for $Z\sim 0.0003-0.02$. The X-ray luminosity per unit star formation rate from Be-XRBs in our fiducial model increases by a factor of $\sim 8$ from $Z=0.02$ to $Z=0.0003$, which is similar to the trend seen in observations of all types of HMXBs. We conclude that Be-XRBs are potentially important X-ray sources that deserve greater attention in BPS of XRBs.
Temperature profiles of the hot galaxy cluster intracluster medium (ICM) have a complex non-linear structure that traditional parametric modelling may fail to fully approximate. For this study, we made use of neural networks, for the first time, to construct a data-driven non-parametric model of ICM temperature profiles. A new deconvolution algorithm was then introduced to uncover the true (3D) temperature profiles from the observed projected (2D) temperature profiles. An auto-encoder-inspired neural network was first trained by learning a non-linear interpolatory scheme to build the underlying model of 3D temperature profiles in the radial range of [0.02-2] R$_{500}$, using a sparse set of hydrodynamical simulations from the THREE HUNDRED PROJECT. A deconvolution algorithm using a learning-based regularisation scheme was then developed. The model was tested using high and low resolution input temperature profiles, such as those expected from simulations and observations, respectively. We find that the proposed deconvolution and deprojection algorithm is robust with respect to the quality of the data, the morphology of the cluster, and the deprojection scheme used. The algorithm can recover unbiased 3D radial temperature profiles with a precision of around 5\% over most of the fitting range. We apply the method to the first sample of temperature profiles obtained with XMM-Newton for the CHEX-MATE project and compare it to parametric deprojection and deconvolution techniques. Our work sets the stage for future studies that focus on the deconvolution of the thermal profiles (temperature, density, pressure) of the ICM and the dark matter profiles in galaxy clusters, using deep learning techniques in conjunction with X-ray, Sunyaev-Zel'dovich (SZ) and optical datasets.
We consider F-term hybrid inflation and supersymmetry breaking in the context of a model which largely respects a global U(1) R symmetry. The Kaehler potential parameterizes the Kaehler manifold with an enhanced U(1) x (SU(1,1)/U(1)) symmetry, where the scalar curvature of the second factor is fixed by requiring a supersymmetry-breaking de Sitter vacuum without unnatural tuning. The magnitude of the emergent soft tadpole term for the inflaton can be adjusted in the range (1.2-460) TeV -- increasing with the dimensionality of the representation of the waterfall fields -- so that the inflationary observables are in agreement with the observational requirements. The mass scale of the supersymmetric partners turns out to lie in the region (0.09-253) PeV, which is compatible with high-scale supersymmetry and the LHC results on the Higgs boson mass. The mu parameter can be generated by conveniently applying the Giudice-Masiero mechanism and assures the out-of-equilibrium decay of the R saxion at a low reheat temperature $T_{\rm rh} \lesssim 163$ GeV.
We combine archival ALMA data targeting the Hubble Ultra Deep Field (HUDF) to produce the deepest currently attainable 1-mm maps of this key, extragalactic survey field. Combining all existing data in Band 6, our deepest map covers 4.2arcmin^2, with a beamsize of 1.49"x1.07" at an effective frequency of 243GHz (1.23mm). It reaches an rms of 4.6uJy/beam, with 1.5arcmin^2 below 9.0uJy/beam, an improvement of >5% over the best previously published map and a 50% improvement in some regions. We also make a wider, but shallower map, covering 25.4arcmin^2. We detect 45 galaxies in the deep map down to 3.6sigma, including 10 more 1-mm sources than previously detected. Of these galaxies, 39 have a JWST ID from the JADES NIRCam imaging, and the new sources are typically faint and red. A stacking analysis on the positions of ALMA-undetected JADES galaxies yields detections for z<4 and stellar masses from 10^(8.4) to 10^(10.4)Msun, extracting 10% of additional stacked signal from our map compared to previous analyses. Detected sources and stacking contribute (10.0+/-0.5)Jy/deg^2 of the cosmic infrared background (CIB) at 1.23mm. Although this is short of the (uncertain) background level of about 20Jy/deg^2, after taking into account intrinsic fluctuations in the CIB, our measurement is consistent with the background if the HUDF is a mild (~2sigma) negative fluctuation. This suggests that within the HUDF, JWST may have detected essentially all of the galaxies that contribute to the CIB. Our stacking analysis predicts that the field contains around 60 additional galaxies with 1.23mm flux densities averaging around 15uJy, and over 300 galaxies at the few-uJy level. However, the contribution of these fainter, more modestly obscured objects to the background is small, and converging, as anticipated from the now well-established strong correlation between galaxy stellar mass and obscured star formation.
Based on recent data about the history of the Hubble factor, it is argued that the second law of thermodynamics holds at the largest scales accessible to observation. This is consistent with previous studies of the same question.
Ever since the discovery of the first Active Galactic Nuclei (AGN), substantial observational and theoretical effort has been invested into understanding how massive black holes have evolved across cosmic time. Circum-nuclear obscuration is now established as a crucial component, with almost every AGN observed known to display signatures of some level of obscuration in their X-ray spectra. But despite more than six decades of effort, substantial open questions remain: How does the accretion power impact the structure of the circum-nuclear obscurer? What are the dynamical properties of the obscurer? Can dense circum-nuclear obscuration exist around intrinsically weak AGN? How many intermediate mass black holes occupy the centers of dwarf galaxies? In this paper, we showcase a number of next-generation prospects attainable with the High Energy X-ray Probe (https://hexp.org) to contribute towards solving these questions in the 2030s. The uniquely broad (0.2--80 keV) and strictly simultaneous X-ray passband of HEX-P makes it ideally suited for studying the temporal co-evolution between the central engine and circum-nuclear obscurer. Improved sensitivities and reduced background will enable the development of spectroscopic models complemented by current and future multi-wavelength observations. We show that the angular resolution of HEX-P both below and above 10 keV will enable the discovery and confirmation of accreting massive black holes at both low accretion power and low black hole masses even when concealed by thick obscuration. In combination with other next-generation observations of the dusty hearts of nearby galaxies, HEX-P will hence be pivotal in paving the way towards a complete picture of black hole growth and galaxy co-evolution.
HEX-P is a probe-class mission concept that will combine high spatial resolution X-ray imaging ($<10"$ full width at half maximum) and broad spectral coverage (0.2--80 keV) with an effective area far superior to current facilities (including XMM-Newton and NuSTAR) to enable revolutionary new insights into a variety of important astrophysical problems. HEX-P is ideally suited to address important problems in the physics and astrophysics of supernova remnants (SNRs) and pulsar-wind nebulae (PWNe). For shell SNRs, HEX-P can greatly improve our understanding via more accurate spectral characterization and localization of non-thermal X-ray emission from both non-thermal-dominated SNRs and those containing both thermal and non-thermal components, and can discover previously unknown non-thermal components in SNRs. Multi-epoch HEX-P observations of several young SNRs (e.g., Cas A and Tycho) are expected to detect year-scale variabilities of X-ray filaments and knots, thus enabling us to determine fundamental parameters related to diffusive shock acceleration, such as local magnetic field strengths and maximum electron energies. For PWNe, HEX-P will provide spatially-resolved, broadband X-ray spectral data separately from their pulsar emission, allowing us to study how particle acceleration, cooling, and propagation operate in different evolution stages of PWNe. HEX-P is also poised to make unique and significant contributions to nuclear astrophysics of Galactic radioactive sources by improving detections of, or limits on, $^{44}$Ti in the youngest SNRs and by potentially discovering rare nuclear lines as evidence of double neutron star mergers. Throughout the paper, we present simulations of each class of objects, demonstrating the power of both the imaging and spectral capabilities of HEX-P to advance our knowledge of SNRs, PWNe, and nuclear astrophysics.
Type Ia supernovae (SNe Ia) are standardizable cosmological candles which led to the discovery of the accelerating universe. However, the physics of how white dwarfs (WDs) explode and lead to SNe Ia is still poorly understood. The initiation of the detonation front which rapidly disrupts the WD is a crucial element of the puzzle, and global 3D simulations of SNe Ia cannot resolve the requisite length scales to capture detonation initiation. In this work, we elucidate a theoretical criterion for detonation initiation in the distributed burning regime. We test this criterion against local 3D driven turbulent hydrodynamical simulations within electron-degenerate WD matter consisting initially of pure helium. We demonstrate a novel pathway for detonation, in which strong turbulent dissipation rapidly heats the helium, and forms carbon nuclei sufficient to lead to a detonation through accelerated burning via $\alpha$ captures. Simulations of strongly driven turbulent conditions lead to detonations at a mean density of $10^6$ g cm$^{-3}$ and mean temperature of $1.4 - 1.8 \times 10^9$ K, but fail to detonate at a lower density of $10^5$ g cm$^{-3}$, in excellent agreement with theoretical predictions.
Analysis of AR Sco optical light curves spanning nine years shows a secular change in the relative amplitudes of the beat pulse pairs generated by the two magnetic poles of its rotating white dwarf. Recent photometry now shows that the primary and secondary beat pulses have similar amplitudes, while in 2015 the primary pulse was approximately twice that of the secondary peak. The equalization of the beat pulse amplitudes is also seen in the linearly polarized flux. This rapid evolution is consistent with precession of the white dwarf spin axis. The observations imply that the pulse amplitudes cycle over a period of $\gtrsim 40$ yrs, but the upper limit is currently poorly constrained. If precession is the mechanism driving the evolution, then over the next 10 years the ratio of the beat pulse amplitudes will reach a maximum, followed by a return to asymmetric beat pulses.
The Milky Way (MW) dwarf spheroidal satellite galaxies (dSphs) are particularly intriguing targets in the search for gamma rays from Weakly Interacting Massive Particle (WIMP) dark matter (DM) annihilation or decay. They are nearby, DM-dominated, and lack significant emission from standard astrophysical processes. Previous studies of DM emission from dSphs using the Fermi Large Area Telescope (LAT) have provided the most robust and stringent constraints on the DM annihilation cross section and mass. We report an analysis of the MW dSphs using over 14 years of LAT data and an updated census of dSphs and $J$-factors. While no individual dSph is significantly detected, we find slight excesses with respect to the background at the $\gtrsim 2\,\sigma$ local significance level in both tested annihilation channels ($b\bar{b}$, $\tau^+\tau^-$) for 7 dSphs. We do not find a significant DM signal from a combined likelihood analysis of the dSphs ($s_{\rm global}\sim 0.5\sigma$), yet a marginal local excess relative to the background at the $2-3\,\sigma$ level is observed at a DM mass of $M_{\chi}=150-230$ GeV ($M_{\chi}=30-50$ GeV) for annihilation into $b\bar{b}$ ($\tau^+\tau^-$). Given the lack of a significant detection, we place updated constraints on the $b\bar{b}$ and $\tau^+\tau^-$ annihilation channels that are generally consistent with recent previous results. As in past studies, tension is found with some WIMP DM interpretations of the Galactic Center Excess (GCE), though the limits are consistent with other interpretations given the uncertainties of the Galactic DM density profile and GCE systematics. Based on conservative assumptions of improved sensitivity with increased LAT exposure and moderate increases in the sample of dSphs, we project that the local $\sim 2\,\sigma$ signal, if real, could approach the $\sim 4\,\sigma$ local confidence level with an additional $\sim 10$ years of observation.
We report results from an all-sky search of the LIGO data from the third LIGO-Virgo-KAGRA run (O3) for continuous gravitational waves from isolated neutron stars in the frequency band [30, 150] Hz and spindown range of $[{-1} \times 10^{-8}, {+1} \times 10^{-9}]$ Hz/s. This search builds upon a previous analysis of the first half of the O3 data using the same PowerFlux pipeline. We search more deeply here by using the full O3 data and by using loose coherence in the initial stage with fully coherent combination of LIGO Hanford (H1) and LIGO Livingston (L1) data, while limiting the frequency band searched and excluding narrow, highly disturbed spectral bands. We detect no signal and set strict frequentist upper limits on circularly polarized and on linearly polarized wave amplitudes, in addition to estimating population-averaged upper limits. The lowest upper limit obtained for circular polarization is $\sim 4.5 \times 10^{-26}$, and the lowest linear polarization limit is $\sim {1.3} \times 10^{-25}$ (both near 144 Hz). The lowest estimated population-averaged upper limit is $\sim {1.0} \times 10^{-25}$. In the frequency band searched here, these limits improve upon the O3a PowerFlux search by a median factor of $\sim 1.4$ and upon the best previous limits obtained for the full O3 data by a median factor of $\sim 1.1$.
We present the extension of GR-Athena++ to general-relativistic magnetohydrodynamics (GRMHD) for applications to neutron star spacetimes. The new solver couples the constrained transport implementation of Athena++ to the Z4c formulation of the Einstein equations to simulate dynamical spacetimes with GRMHD using oct-tree adaptive mesh refinement. We consider benchmark problems for isolated and binary neutron star spacetimes, demonstrating stable and convergent results at relatively low resolutions and without grid symmetries imposed. The code correctly captures magnetic field instabilities in non-rotating stars with a total relative violation of the divergence-free constraint of $10^{-16}$. It handles evolutions with a microphysical equation of state and black hole formation in the gravitational collapse of a rapidly rotating star. For binaries, we demonstrate the correctness of the evolution under gravitational radiation reaction and show convergence of gravitational waveforms. We showcase the use of adaptive mesh refinement to resolve the Kelvin-Helmholtz instability at the collisional interface in a merger of magnetised binary neutron stars. GR-Athena++ shows strong-scaling efficiencies above $80\%$ on more than $10^5$ CPU cores, and excellent weak scaling is shown up to $\sim 5 \times 10^5$ CPU cores in a realistic production setup. GR-Athena++ thus allows for the robust simulation of GRMHD flows in strong and dynamical gravity on exascale computers.
The electromagnetic emission from the non-relativistic ejecta launched in neutron star mergers (either dynamically or through a disk wind) has the potential to probe both the total mass and composition of this ejecta. These observations are crucial in understanding the role of these mergers in the production of r-process elements in the universe. However, many properties of the ejecta can alter the light-curves and we must both identify which properties play a role in shaping this emission and understand the effects these properties have on the emission before we can use observations to place strong constraints on the amount of r-process elements produced in the merger. This paper focuses on understanding the effect of the velocity distribution (amount of mass moving at different velocities) for lanthanide-rich ejecta on the light-curves and spectra. The simulations use distributions guided by recent calculations of disk outflows and compare the velocity-distribution effects to those of ejecta mass, velocity and composition. Our comparisons show that uncertainties in the velocity distribution can lead to factor of 2-4 uncertainties in the inferred ejecta mass based on peak infra-red luminosities. We also show that early-time UV or optical observations may be able to constrain the velocity distribution, reducing the uncertainty in the ejecta mass.
The Extragalactic Background Light (EBL) is the main radiation field responsible for attenuating extragalactic gamma-ray emission at very high energies, but its precise spectral intensity is not fully determined. Therefore, disentangling propagation effects from the intrinsic spectral properties of gamma-ray sources (such as active galactic nuclei, AGN) is the primary challenge to interpreting observations of these objects. We present a Bayesian and Markov Chain Monte Carlo approach to simultaneously infer parameters characterizing the EBL and the intrinsic spectra in a combined fit of a set of sources, which has the advantage of easily incorporating the uncertainties of both sets of parameters into one another through marginalization of the posterior distribution. Taking a sample of synthetic blazars observed by the ideal CTA configuration, we study the effects on the EBL constraints of combining multiple observations and varying their exposure. We also apply the methodology to a set of 65 gamma-ray spectra of 36 different AGNs measured by current Imaging Atmospheric Cherenkov Telescopes, using Hamiltonian Monte Carlo as a solution to the difficult task of sampling in spaces with a high number of parameters. We find robust constraints in the mid-IR region while simultaneously obtaining intrinsic spectral parameters for all of these objects. In particular, we identify Markarian 501 (Mkn 501) flare data (HEGRA/1997) as essential for constraining the EBL above 30$\mu$m.
A fundamental goal of modern-day astrophysics is to understand the connection between supermassive black hole (SMBH) growth and galaxy evolution. Merging galaxies offer one of the most dramatic channels for galaxy evolution known, capable of driving inflows of gas into galactic nuclei, potentially fueling both star formation and central SMBH activity. Dual active galactic nuclei (dual AGNs) in late-stage mergers with nuclear pair separations $<10$ kpc are thus ideal candidates to study SMBH growth along the merger sequence since they coincide with the most transformative period for galaxies. However, dual AGNs can be extremely difficult to confirm and study. Hard X-ray ($>10$ keV) studies offer a relatively contamination-free tool for probing the dense obscuring environments predicted to surround the majority of dual AGN in late-stage mergers. To date, only a handful of the brightest and closest systems have been studied at these energies due to the demanding instrumental requirements involved. We demonstrate the unique capabilities of HEX-P to spatially resolve the soft and - for the first time - hard X-ray counterparts of closely-separated ($\sim2''-5''$) dual AGNs in the local Universe. By incorporating state-of-the-art physical torus models, we reproduce realistic broadband X-ray spectra expected for deeply embedded accreting SMBHs. Hard X-ray spatially resolved observations of dual AGNs - accessible only to HEX-P - will hence transform our understanding of dual AGN in the nearby Universe.
Theoretical studies of angular momentum transport suggest that isolated stellar-mass black holes are born with negligible dimensionless spin magnitudes $\chi \lesssim 0.01$. However, recent gravitational-wave observations indicate $\gtrsim 15\%$ of binary black hole systems contain at least one black hole with a non-negligible spin magnitude. One explanation is that the first-born black hole spins up the stellar core of what will become the second-born black hole through tidal interactions. Typically, the second-born black hole is the ``secondary'' (less-massive) black hole, though it may become the ``primary'' (more-massive) black hole through a process known as mass-ratio reversal. We investigate this hypothesis by analysing data from the third gravitational-wave transient catalog (GWTC-3) using a ``single-spin'' framework in which only one black hole may spin in any given binary. Given this assumption, we show that at least $28\%$ (90% credibility) of the LIGO--Virgo--KAGRA binaries contain a primary with significant spin, possibly indicative of mass-ratio reversal. We find no evidence for binaries that contain a secondary with significant spin. However, the single-spin framework is moderately disfavoured (natural log Bayes factor $\ln {\cal B} = 3.1$) when compared to a model that allows both black holes to spin. If future studies can firmly establish that most merging binaries contain two spinning black holes, it may call into question our understanding of formation mechanisms for binary black holes or the efficiency of angular momentum transport in black hole progenitors.
We present a polarization analysis of PSR J0941$-$39 and PSR J1107$-$5907, which exhibit transitions between pulsar and rotating radio transient (RRAT) states, using the ultra-wide bandwidth low-frequency (UWL) receiver on Murriyang, the Parkes 64\,m radio telescope. The spectral index of each pulsar was measured, revealing distinct variations among the different states. By using the rotating vector model (RVM), we determined that the magnetosphere geometry remains consistent between the RRAT state and the pulsar state for PSR J0941$-$39, with emission originating from the same height in the magnetosphere. The occurrence of the RRAT state could be attributed to variations in currents within the pulsar's magnetosphere. Our results suggest that the emission mechanism of RRATs may share similarities with that of typical pulsars.
In a previous work [1], it was determined whether a putative vortex is non-abelian by studying its radiation channels. The example considered there was an $SU(2)$ gauge model whose internal orientational modes are described by a sphere $S^2$. The non-abelian effects presented in that reference were not very pronounced, owing to the compactness of this space. In the present work, the analysis is extended to a vortex whose internal space is non-compact. This situation may be realised by semi-local supersymmetric vortices [2]-[9]. As the internal space has infinite volume, a highly energetic perturbation may propagate along the object. A specific configuration is presented in which the internal space is the resolved conifold with its Ricci-flat metric. The curious feature is that it corresponds to a static vortex, that is, the perturbation is due only to the internal modes. Despite being static, the vortex emits gravitational radiation of considerable magnitude in the present case. This suggests that the presence of slowly moving objects that can emit a large amount of gravitational radiation is a hint of non-abelianity.
Neutron stars and black holes in X-ray binaries are observed to host strong collimated jets in the hard spectral state. Numerical simulations can act as a valuable tool in understanding the mechanisms behind jet formation and its properties. Although there have been significant efforts in understanding black-hole jets from general-relativistic magnetohydrodynamic (GRMHD) simulations in the past years, neutron star jets still remain poorly explored. We present, for the first time, results from three-dimensional (3D) GRMHD simulations of accreting neutron stars with oblique magnetospheres. The jets in our simulations are produced by the anchored magnetic field of the rotating star, in analogy with the Blandford-Znajek process. We find that for accreting stars the star-disk magnetic field interaction plays a significant role and, as a result, the jet power becomes directly proportional to $\Phi_{\rm jet}^{2}$, where $\Phi_{\rm jet}$ is the open magnetic flux in the jet. The jet power decreases with increasing stellar magnetic inclination; for an orthogonal magnetosphere it is reduced by a factor of $\simeq 2.95$ compared to the aligned case. We also find that in the strong propeller regime, with a highly oblique magnetosphere, the disk-induced collimation of the open stellar flux preserves parts of the striped wind, resulting in a striped jet.
Studies of the pulsar B0823+26 have been carried out using the Large Phased Array (LPA) radio telescope. Over a time span of 5.5 years, the amplitudes of the main pulse (MP), postcursor (PC) and interpulse (IP) were evaluated in daily sessions lasting 3.7 minutes. It is shown that the ratio of the average amplitudes of the MP in the bright (B) and quiet (Q) modes is 60. For the B-mode, the average ratio of MP amplitudes to IP amplitudes is 65, and the ratio of MP amplitudes to PC amplitudes is 28. The number of sessions with nulling is 4% of the total number of sessions. Structure function (SF) and correlation function analysis of the MP, IP and PC amplitude variations over a long time interval allowed us to detect typical time scales of 37 $\pm$ 5 days and one year. The analysis of time variations shows that the 37-day time scale is well explained by refraction on inhomogeneities of the interstellar plasma, which is distributed mostly quasi-uniformly along the line of sight. This scintillation makes the main contribution to the observed variability. Analysis of the structure function showed that there may also be variability on a time scale of a few days. This time scale does not have an unambiguous interpretation but is apparently associated with the refraction of radio waves in the interstellar medium. The one-year variability time scale has not been previously detected. We associate its appearance with the presence of a scattering layer on a nearby screen at a distance of about 50-100 pc from the Earth.
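A minimal sketch of the first-order structure function analysis used above for irregularly sampled amplitude series; the pairwise estimator and logarithmic lag binning are illustrative assumptions rather than the authors' exact procedure. A break or saturation in $D(\tau)$ near $\sim$37 days would mark the scintillation time scale discussed above.

```python
# Hedged sketch: first-order structure function D(tau) of an amplitude
# series, D(tau) = <[a(t + tau) - a(t)]^2>, binned in time lag.
import numpy as np

def structure_function(t, a, lag_bins):
    """Mean squared amplitude difference over all epoch pairs, per lag bin."""
    dt = np.abs(t[:, None] - t[None, :])       # pairwise lags [days]
    da2 = (a[:, None] - a[None, :]) ** 2       # pairwise squared differences
    iu = np.triu_indices(len(t), k=1)          # count each pair once
    dt, da2 = dt[iu], da2[iu]
    centers, sf = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        sel = (dt >= lo) & (dt < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            sf.append(da2[sel].mean())
    return np.array(centers), np.array(sf)

# Mock usage with irregular daily sessions (placeholder amplitudes):
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 500))     # observing epochs [days]
a = rng.normal(size=t.size)                    # pulse amplitudes (mock)
lags, sf = structure_function(t, a, np.geomspace(1.0, 1000.0, 25))
```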
Theoretical predictions suggest that very massive stars have the potential to form through multiple collisions and eventually evolve into intermediate-mass black holes (IMBHs) within Population III star clusters that are embedded in mini dark matter haloes. In this study, we investigate the long-term evolution of Population III star clusters, including models with a primordial binary fraction of $f_{\rm b}=0$ and 1, using the $N$-body simulation code \textsc{petar}. We comprehensively examine the phenomenon of hierarchical triple black holes in the clusters, specifically focusing on the merging of their inner binary black holes (BBHs), with post-Newtonian corrections, using the \textsc{tsunami} code. Our findings suggest a high likelihood of the inner BBHs containing IMBHs with masses of $\mathcal{O}(100)\,M_{\odot}$, and as a result, their merger rate could be up to $0.1\,{\rm Gpc}^{-3}\,{\rm yr}^{-1}$. In the model with $f_{\rm b}=0$, the evolution of these merging inner BBHs is predominantly driven by gravitational-wave radiation at early times, while in the case with $f_{\rm b}=1$ their evolutionary dynamics are dominated by the interaction with tertiary BHs. The orbital eccentricities of some merging inner BBHs oscillate periodically over time due to dynamical perturbations, a behaviour known as the Kozai-Lidov oscillation. Furthermore, the merging inner BBHs tend to have highly eccentric orbits in the low-frequency range; some of those at low redshift could be detected by LISA/TianQin. In the higher frequency range, merging inner BBHs across a wider redshift range could be detected by DECIGO/ET/CE/LIGO/KAGRA.
We propose the use of entropy, measured from the spatial and flux distribution of pixels in the residual image, as a potential diagnostic and stopping metric for the CLEAN algorithm. Despite its broad success as the standard deconvolution approach in radio interferometry, finding the optimum stopping point for the iterative CLEAN algorithm is still a challenge. We show that the entropy of the residual image, measured during the final stages of CLEAN, can be computed without prior knowledge of the source structure or expected noise levels, and that finding the point of maximum entropy as a measure of randomness in the residual image serves as a robust stopping criterion. We also find that, when compared to the expected thermal noise in the image, the maximum entropy of the residuals is a useful diagnostic that can reveal the presence of data editing, calibration, or deconvolution issues that may limit the fidelity of the final CLEAN map.
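A minimal sketch of such an entropy diagnostic, assuming a simple histogram (Shannon) estimator over residual pixel values; the binning and the schematic stopping loop are illustrative, and clean_minor_cycle is a hypothetical stand-in for an actual CLEAN step.

```python
# Hedged sketch: Shannon entropy of the residual-image flux distribution
# as a CLEAN stopping diagnostic. Histogramming choices are assumptions.
import numpy as np

def residual_entropy(residual, nbins=256):
    """Shannon entropy (nats) of the histogram of residual pixel values."""
    counts, _ = np.histogram(residual.ravel(), bins=nbins)
    p = counts / counts.sum()
    p = p[p > 0]                         # drop empty bins
    return -np.sum(p * np.log(p))

# Schematic stopping rule: iterate while entropy still increases, i.e.
# while the residuals keep getting more noise-like; stop at the maximum.
# prev = -np.inf
# while True:
#     residual = clean_minor_cycle(residual)   # hypothetical CLEAN step
#     h = residual_entropy(residual)
#     if h < prev:
#         break                                # past the entropy maximum
#     prev = h
```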
We report on a campaign on the bright black hole X-ray binary Swift J1727.8$-$1613 centered around five observations by the Imaging X-ray Polarimetry Explorer (IXPE). This is the first time it has been possible to trace the evolution of the X-ray polarization of a black hole X-ray binary across a hard to soft state transition. The 2--8 keV polarization degree slowly decreased from $\sim$4\% to $\sim$3\% across the five observations, but remained in the North-South direction throughout. Using the Australia Telescope Compact Array (ATCA), we measure the intrinsic 7.25 GHz radio polarization to align in the same direction. Assuming the radio polarization aligns with the jet direction (which can be tested in the future with resolved jet images), this implies that the X-ray corona is extended in the disk plane, rather than along the jet axis, for the entire hard intermediate state. This in turn implies that the long ($\gtrsim$10 ms) soft lags that we measure with the Neutron star Interior Composition ExploreR (NICER) are dominated by processes other than pure light-crossing delays. Moreover, we find that the evolution of the soft lag amplitude with spectral state differs from the common trend seen for other sources, implying that Swift J1727.8$-$1613 is a member of a hitherto under-sampled sub-population.
We investigate the prospect of performing a null test of binary black hole (BBH) nature using spin-induced quadrupole moment (SIQM) measurements. This is achieved by constraining a deviation parameter ($\delta\kappa$) related to the parameter ($\kappa$) that quantifies the degree of spin-induced deformation of the individual binary components at leading (quadrupolar) order. Throughout the paper, we refer to $\kappa$ as the SIQM parameter and $\delta\kappa$ as the SIQM-deviation parameter. The test presented here extends earlier SIQM-based null tests of BBH nature by employing waveform models that account for double spin-precession and higher modes. We find that waveforms with double spin-precession give tighter constraints on $\delta\kappa$ than waveforms with single spin-precession. We also revisit earlier constraints on the SIQM-deviation parameter for selected GW events observed through the first three observing runs (O1-O3) of the LIGO-Virgo detectors. Additionally, the effects of higher-order modes on the test are explored for a variety of mass-ratio and spin combinations by injecting simulated signals in zero noise. Our analyses indicate that binaries with mass ratios greater than 3 and significant spin precession may require waveforms that account for both spin-precession and higher modes to perform the parameter estimation reliably.
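For orientation, the leading-order spin-induced quadrupole moment underlying this test takes the standard form below (in $G=c=1$ units), where $m$ and $\chi$ are a component's mass and dimensionless spin; $\kappa=1$ for a Kerr black hole, so the null test amounts to checking whether $\delta\kappa=0$:

```latex
% Leading (quadrupolar) spin-induced moment of a compact object,
% with kappa = 1 + delta-kappa and delta-kappa = 0 for a Kerr black hole.
\begin{equation}
  Q = -\kappa\, \chi^{2} m^{3}, \qquad \kappa = 1 + \delta\kappa, \qquad
  \delta\kappa_{\rm BH} = 0 .
\end{equation}
```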
We observed gamma-ray burst (GRB) 221009A using Very Long Baseline Interferometry (VLBI) with the European VLBI Network (EVN) and the Very Long Baseline Array (VLBA), over a period spanning from 40 to 262 days after the initial GRB. The high angular resolution (~mas) of our observations allowed us, for only the second time ever (after GRB 030329), to measure the projected size, $s$, of the relativistic shock caused by the expansion of the GRB ejecta into the surrounding medium. Our observations support the expansion of the shock with a 3.6 $\sigma$-equivalent significance, and confirm its relativistic nature by revealing an apparently superluminal expansion rate. Fitting a power-law expansion model, $s\propto t^a$, to the observed size evolution, we find a slope $a=1.9_{-0.6}^{+0.7}$, which is steeper than expected from either a forward shock (FS) or reverse shock (RS) model, implying an apparent acceleration of the expansion. Fitting the data at each frequency separately, we find different expansion rates, pointing to a frequency-dependent behaviour. We show that the observed size evolution can be reconciled with a RS plus FS in the case of a wind-like circum-burst medium, provided that the two shocks dominate the emission at different frequencies and, possibly, at different times.
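A hedged sketch of the power-law size fit described above; the epochs, sizes, and uncertainties are placeholders rather than the measured values.

```python
# Hedged sketch: weighted least-squares fit of s = s0 * t**a to projected
# source sizes. All data values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def size_model(t, s0, a):
    """Projected size [mas] at time t [days after the burst]."""
    return s0 * t**a

rng = np.random.default_rng(2)
t_obs = np.array([40.0, 80.0, 130.0, 200.0, 262.0])   # epochs [days]
s_true = size_model(t_obs, 2e-3, 1.9)                 # mock truth, a = 1.9
s_err = 0.2 * s_true
s_obs = s_true + s_err * rng.normal(size=t_obs.size)

popt, pcov = curve_fit(size_model, t_obs, s_obs, p0=(1e-3, 1.5),
                       sigma=s_err, absolute_sigma=True)
a_fit, a_err = popt[1], np.sqrt(pcov[1, 1])   # compare with a = 1.9 -0.6/+0.7
```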
The Antarctic Impulsive Transient Antenna (ANITA) detector has observed several radio pulses coming from the surface of the ice cap at the South Pole. These pulses were attributed to upward-going atmospheric particle showers rather than to the downward-going showers induced by cosmic rays, which exhibit a characteristic polarity inversion of the radio signal due to reflection in the ice. Coherent transition radiation from cosmic-ray showers developing in the atmosphere and intercepting the ice surface has been suggested as a possible alternative explanation of these so-called "anomalous" events. To test this interpretation, we have developed an extension of ZHS, a program to calculate coherent pulses from electromagnetic showers, to deal with showers that transit a planar interface between two homogeneous dielectric media, including transition radiation. By considering different geometries, it is found that all pulses from air showers intercepting the ice surface and detected at the height of ANITA display the same polarity as pulses emitted by ultra-high-energy cosmic-ray showers that fully develop in the atmosphere and are reflected off the ice. We find that transition radiation is disfavored as a possible explanation of the anomalous ANITA events.
Recently, the Event Horizon Telescope observed black holes at event-horizon scales for the first time, enabling us to test the existence of event horizons. Although event horizons have by definition no observable features, one can look for evidence of their non-existence. In that case, it is likely that there is some kind of surface, which like any other surface could absorb (and thermally emit) and/or reflect radiation. In this paper, we study the potential observable features of such rotating reflecting surfaces. We construct a general description of reflecting surfaces in arbitrary spacetimes. This is used to define specific models for static and rotating reflecting surfaces, for which we study the corresponding light paths and synthetic images. This is done by numerical integration of the geodesic equation and by use of the general relativistic radiative transfer code RAPTOR. The reflecting surface creates an infinite set of ring-like features in synthetic images inside the photon ring. There is a central ring in the middle, and higher-order rings lie successively exterior to each other, converging to the photon ring. The shape and size of the ring features change only slightly with the radius of the surface $R$, spin $a$ and inclination $i$, resulting in all cases in features inside the 'shadow region'. We conclude that rotating reflecting surfaces have clear observable features and that the Event Horizon Telescope is able to observe the difference between reflecting surfaces and an event horizon for high reflectivities. Such reflecting-surface models can thus be excluded, which strengthens the conclusion that the black hole shadow indeed indicates the existence of an event horizon.
J191213.72-441045.1 is a binary system composed of a white dwarf and an M-dwarf in a 4.03-hour orbit. It shows emission in radio, optical, and X-ray, all modulated at the white dwarf spin period of 5.3 min, as well as at various orbital sideband frequencies. As in AR Scorpii, the prototype of the class of radio-pulsing white dwarfs, the observed pulsed emission seems to be driven by the binary interaction. In this work, we present an analysis of far-ultraviolet spectra obtained with the Cosmic Origins Spectrograph at the Hubble Space Telescope, in which we directly detect the white dwarf in J191213.72-441045.1. We find that the white dwarf has an effective temperature of 11485+/-90 K and a mass of 0.59+/-0.05 solar masses. We place a tentative upper limit on the magnetic field of ~50 MG. If the white dwarf is in thermal equilibrium, its physical parameters would imply that crystallisation has not started in its core. Alternatively, the effective temperature could have been affected by compressional heating, indicating a past phase of accretion. The relatively low upper limit on the magnetic field, and the potential lack of crystallisation that could generate a strong field, pose challenges to pulsar-like models for the system and favour propeller models with a low magnetic field. We also develop a geometric model of the binary interaction which explains many salient features of the system.
We investigate the influence of a specific class of slow Baryon Number Violation (BNV) -- one that induces quasi-equilibrium evolution -- on pulsar spin characteristics. This work reveals how BNV can potentially alter observable parameters, including spin-down rates, the second derivative of spin frequency, and braking indices of pulsars. Moreover, we demonstrate that BNV could lead to anomalies in pulsar timing, along with a wide array of braking indices, both positive and negative. In addition, we examine the possibility of pulsar spin-up due to BNV, which may result in a novel mechanism for the revival of ``dead'' pulsars. We conclude by assessing the sensitivity required for future pulsar timing efforts to detect such BNV effects, thus highlighting the potential for pulsars to serve as laboratories for testing fundamental physics.
The study of core-collapse supernova remnants (SNRs) presents a fascinating puzzle, with intricate morphologies and a non-uniform distribution of stellar debris. Particularly, young remnants (aged less than 5000 years) hold immense value as they can offer crucial insights into the inner processes of the supernova (SN) engine, revealing details about nucleosynthetic yields and large-scale asymmetries arising from the early stages of the explosion. Furthermore, these remnants also bear characteristics that may reflect the nature of their progenitor stars and the interactions between the remnants and the surrounding circumstellar medium (CSM), shaped by the progenitor's mass-loss history. Hence, investigating the connection between young SNRs, parent SNe, and progenitor massive stars can be of paramount importance to delve into the physics of SN engines, and to investigate the final stages of massive star evolution and the elusive mechanisms governing their mass loss. In this contribution, I review recent advances in modeling the path from massive stars to SNe and SNRs achieved by our team. The focus is on investigating the links between the observed physical and chemical properties of SNRs and their progenitor stars and SN explosions. The unraveling of this connection offers us the opportunity to probe the physics of core-collapse SN explosions and the final stages of evolution of massive stars.
The Brazilian Mario Schenberg gravitational-wave detector remained operational until 2016, when it was disassembled. To assess the feasibility of reassembling the antenna, its capability to detect GWs within its designed sensitivity needs to be evaluated. Although the antenna is currently disassembled, insights can be gleaned from the O3 data of the LIGO detectors, given the similarities between Schenberg's ultimate sensitivity and the interferometers' sensitivity in the [3150-3260] Hz band. The search focused on signals lasting from milliseconds to seconds, with no assumptions about their morphology, polarization, or arrival sky direction. Data analysis was performed using the coherent WaveBurst pipeline in the frequency range between 512 Hz and 4096 Hz, specifically targeting signals with bandwidths overlapping the Schenberg frequency band. However, the O3 data did not yield statistically significant evidence of GW bursts. This null result allowed for the characterization of the search efficiency in identifying simulated signal morphologies and for setting upper limits on the GW burst event rate as a function of strain amplitude. The current search is sensitive to sources emitting isotropically $5\times10^{-6} M_{\odot}c^2$ in GWs from a distance of 10 kiloparsecs with a 50\% detection efficiency at a false alarm rate of 1 per 100 years. Moreover, we revisited estimations of detecting f-modes of neutron stars excited by glitches, setting the upper limit of the f-mode energy for the population of Galactic pulsars to $\sim 8 \times 10^{-8} M_{\odot}c^2$ at 3205 Hz. Our simulations suggest f-modes are an unlikely source of gravitational waves for the aSchenberg. Nevertheless, its potential for probing other types of short GW transients, such as those from giant flares of magnetars, the post-merger phase of binary NSs, or the inspiral of binaries of primordial BHs with sub-solar masses, remains promising.
The continued operation of the Advanced LIGO and Advanced Virgo gravitational-wave detectors is enabling the first detailed measurements of the mass, spin, and redshift distributions of the merging binary black hole population. Our present knowledge of these distributions, however, is based largely on strongly parametric models; such models typically assume the distributions of binary parameters to be superpositions of power laws, peaks, dips, and breaks, and then measure the parameters governing these "building block" features. Although this approach has yielded great progress in the initial characterization of the compact binary population, the strong assumptions entailed often leave it unclear which physical conclusions are driven by observation and which by the specific choice of model. In this paper, we instead model the merger rate of binary black holes as an unknown \textit{autoregressive process} over the space of binary parameters, allowing us to measure the distributions of binary black hole masses, redshifts, component spins, and effective spins with near-complete agnosticism. We find the primary mass spectrum of binary black holes to be doubly-peaked, with a fairly flat continuum that steepens at high masses. We identify signs of unexpected structure in the redshift distribution of binary black holes: a uniform-in-comoving-volume merger rate at low redshift followed by a rise in the merger rate beyond redshift $z\approx 0.5$. Finally, we find that the distribution of black hole spin magnitudes is unimodal and concentrated at small but non-zero values, and that spin orientations span a wide range of spin-orbit misalignment angles but are also moderately unlikely to be truly isotropic.
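A minimal sketch of what an autoregressive prior on a binned merger rate looks like, assuming an AR(1) process in the log rate with a fixed correlation length; the hierarchical likelihood that reweights such draws in the actual analysis is omitted, and all values are illustrative.

```python
# Hedged sketch: AR(1) prior draw for ln(rate) over a parameter grid,
# imposing smoothness without any parametric shape (peaks, power laws).
import numpy as np

def ar1_log_rate(n_bins, sigma, tau, dx, rng):
    """Draw ln R on n_bins grid points with correlation length tau."""
    c = np.exp(-dx / tau)                     # bin-to-bin correlation
    ln_rate = np.empty(n_bins)
    ln_rate[0] = sigma * rng.normal()
    for i in range(1, n_bins):
        ln_rate[i] = c * ln_rate[i - 1] + sigma * np.sqrt(1 - c**2) * rng.normal()
    return ln_rate                            # stationary variance sigma**2

# One prior draw over primary mass (illustrative grid and scales):
m = np.linspace(5.0, 100.0, 96)
ln_R = ar1_log_rate(len(m), sigma=1.0, tau=10.0,
                    dx=m[1] - m[0], rng=np.random.default_rng(3))
```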
Eccentric compact binary mergers are significant scientific targets for current and future gravitational wave observatories. To detect and analyze eccentric signals, there is an increasing effort to develop waveform models, numerical relativity simulations, and parameter estimation frameworks for eccentric binaries. Unfortunately, current models and simulations use different internal parameterisations of eccentricity in the absence of a unique natural definition of eccentricity in general relativity, which can result in incompatible eccentricity measurements. In this paper, we adopt a standardized definition of eccentricity and mean anomaly based solely on waveform quantities, and make our implementation publicly available through an easy-to-use Python package, gw_eccentricity. This definition is free of gauge ambiguities, has the correct Newtonian limit, and can be applied as a postprocessing step when comparing eccentricity measurements from different models. This standardization puts all models and simulations on the same footing and enables direct comparisons between eccentricity estimates from gravitational wave observations and astrophysical predictions. We demonstrate the applicability of this definition and the robustness of our implementation for waveforms of different origins, including post-Newtonian theory, effective one body, extreme mass ratio inspirals, and numerical relativity simulations. We focus on binaries without spin-precession in this work, but possible generalizations to spin-precessing binaries are discussed.
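A hedged sketch of the waveform-based estimator underlying such standardized definitions: build interpolants through the periastron and apastron extrema of the (2,2)-mode frequency and compare them at a reference time. The published definition applies a further transformation to this intermediate quantity to recover the correct Newtonian limit (as implemented in gw_eccentricity); the mock frequency track below is illustrative.

```python
# Hedged sketch: intermediate eccentricity estimator e_omega22 built from
# the oscillation of the (2,2)-mode angular frequency omega22(t).
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def e_omega22(t, omega22, t_ref):
    """Estimator from periastron (peak) and apastron (trough) frequencies."""
    peaks, _ = find_peaks(omega22)            # periastron passages
    troughs, _ = find_peaks(-omega22)         # apastron passages
    om_p = CubicSpline(t[peaks], omega22[peaks])(t_ref)
    om_a = CubicSpline(t[troughs], omega22[troughs])(t_ref)
    return (np.sqrt(om_p) - np.sqrt(om_a)) / (np.sqrt(om_p) + np.sqrt(om_a))

# Mock slowly chirping, oscillating frequency track (illustrative only):
t = np.linspace(0.0, 1000.0, 5001)
omega22 = 0.02 * (1 + 1e-3 * t) * (1 + 0.2 * np.sin(0.12 * t))
ecc = e_omega22(t, omega22, t_ref=500.0)
```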
Earlier, the screening condition in the neutron star core was formulated as the equality of the velocities of the superconducting protons and the electrons, $\mathbf{v}_p=\mathbf{u}_e$, at wavenumbers $q\ll\lambda^{-1}$ ($\lambda$ is the London penetration depth), and was used to derive the force exerted by the electrons on a moving flux tube. By calculating the current-current response, I find that $\mathbf{v}_p\neq\mathbf{u}_e$ for $l^{-1}<q\ll\lambda^{-1}$ ($l$ is the electron mean free path). I show that, for typical realistic parameters, the electric field induced by a flux tube moving relative to the electrons is not screened by the electron currents. The implication is that the existing picture of the momentum exchange between the electrons and the flux tubes must be reassessed.
The aim of the current paper is to apply the method of Bambi (2015) to a source that contains two or more simultaneous triads of variability components. The joint chi-square variable that can be composed in this case, unlike in some previous studies, allows the goodness of the fit to be tested. It appears that a good fit requires one of the observation groups to be disregarded. Even then, the model prediction for the mass of the neutron star in the accreting millisecond pulsar IGR J17511-3057 is far too high to be accepted.
We report the discovery of a faint radio filament near PSR J0538+2817 in the NVSS, CGPS, and Rapid ASKAP Continuum Survey data. This pulsar is plausibly associated with the supernova that gave rise to the Spaghetti Nebula (Simeis 147). The structure is one-sided and appears to be almost aligned (within 17 degrees) with the direction of the pulsar's proper motion, but in contrast to the known cases of pulsar radio tails, it is located ahead of the pulsar. At the same time, this direction is also approximately (within 5 degrees) perpendicular to the axis of the extended non-thermal X-ray emission around the pulsar. No X-ray or optical emission is detected from the filament region, although the end point of the radio filament appears to be adjacent to a filament of H$_\alpha$ emission. We speculate that this structure might represent a filament connecting the pulsar wind nebula with the ambient interstellar medium filled with relativistic electrons escaping the nebula, i.e. a radio analogue of the X-ray filaments of the Guitar and Lighthouse PWNe and of the filaments of non-thermal radio emission in the Galactic Center.
Wide-field surveys have recently detected recurring optical and X-ray sources near galactic nuclei, with periods spanning hours to years. These phenomena could result from repeated partial tidal disruptions of stars by supermassive black holes (SMBHs) or from interactions between stars and SMBH accretion discs. We study the physical processes that produce period changes in such sources, highlighting the key role of the interaction between the orbiting star and the accretion disc. We focus on ASASSN-14ko - a repeatedly flaring optical source with a mean period $P_0 = 115 \, \rm d$ and a detected period decay $\dot{P} = -2.6\times 10^{-3}$ (Payne et al. 2022). We argue that the system's $\dot{P}$ is most compatible with true orbital decay produced by hydrodynamical drag as a star passes through the accretion disc on an inclined orbit, twice per orbit. The star is likely a sun-like star whose envelope is somewhat inflated, possibly due to tidal heating. Star-disc interaction inevitably leads to drag-induced stripping of mass from the star, which may be the dominant component in powering the observed flares. We discuss ASASSN-14ko's possible formation history and observational tests of our interpretation of the measured $\dot P$. Our results imply that partial tidal disruption events manifesting as repeating nuclear transients cannot be modeled without accounting for the cumulative impact of tidal heating over many orbits. We discuss the implications of our results for other repeating transients, and predict that the recurrence time of Quasi-Periodic Eruptions should decay at a rate of order $|\dot{P}| \approx 10^{-6}-10^{-5}$.
With recent advances in neutron star observations, major progress has been made in determining the pressure of neutron star matter at high density. This pressure is constrained by the neutron star deformability, determined from gravitational waves emitted in a neutron-star merger, and by measurements of the radii of two neutron stars made with a new X-ray observatory on the International Space Station. Previous studies have relied on nuclear theory calculations to provide the equation of state at low density. Here we use a combination of 15 constraints composed of three astronomical observations and twelve nuclear experimental constraints that extend over a wide range of densities. Bayesian inference is then used to obtain a comprehensive nuclear equation of state. This data-centric result provides benchmarks for theoretical calculations and modeling of nuclear matter and neutron stars. Furthermore, it provides insights into the composition of neutron stars and their cooling via neutrino radiation.
Although low-frequency quasiperiodic oscillations (LFQPOs) are commonly detected in the X-ray light curves of accreting black hole X-ray binaries, their origin still remains elusive. In this study, we conduct phase-resolved spectroscopy in a broad energy band for LFQPOs in MAXI J1820+070 during its 2018 outburst, utilizing Insight-HXMT observations. By employing the Hilbert-Huang transform method, we extract the intrinsic quasiperiodic oscillation (QPO) variability, and obtain the corresponding instantaneous amplitude, phase, and frequency functions for each data point. With well-defined phases, we construct QPO waveforms and phase-resolved spectra. By comparing the phase-folded waveform with that obtained from the Fourier method, we find that phase folding on the phase of the QPO fundamental frequency leads to a slight reduction in the contribution of the harmonic component. This suggests that the phase difference between QPO harmonics exhibits time variability. Phase-resolved spectral analysis reveals strong concurrent modulations of the spectral index and flux across the bright hard state. The modulation of the spectral index could potentially be explained by both the corona and jet precession models, with the latter requiring efficient acceleration within the jet. Furthermore, significant modulations in the reflection fraction are detected exclusively during the later stages of the bright hard state. These findings provide support for the geometric origin of LFQPOs and offer valuable insights into the evolution of the accretion geometry during the outburst in MAXI J1820+070.
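A minimal sketch of the analytic-signal step in such a Hilbert-Huang analysis, with a simple bandpass filter standing in for the empirical mode decomposition that isolates the intrinsic QPO variability; the sampling rate, QPO frequency, and band edges are illustrative.

```python
# Hedged sketch: instantaneous amplitude/phase/frequency of a QPO via the
# Hilbert transform; a bandpass stands in for the EMD intrinsic mode.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 128.0                                    # sampling rate [Hz] (assumed)
t = np.arange(0.0, 64.0, 1.0 / fs)
rng = np.random.default_rng(4)
lc = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.normal(size=t.size)  # mock QPO

sos = butter(4, [0.3, 0.5], btype="bandpass", fs=fs, output="sos")
qpo = sosfiltfilt(sos, lc)                    # isolated QPO variability

analytic = hilbert(qpo)
amplitude = np.abs(analytic)                  # instantaneous amplitude
phase = np.angle(analytic)                    # instantaneous phase
freq = np.gradient(np.unwrap(phase), t) / (2 * np.pi)  # instantaneous frequency
# Photon events folded on `phase` then give QPO waveforms and
# phase-resolved spectra of the kind described above.
```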
We improve a thermodynamically consistent model with density-dependent quark masses ($m'_{u,d,s}$) by introducing effects of the quark confinement/deconfinement phase transition, in the high-density regime at zero temperature, by means of the traced Polyakov loop ($\Phi$). We use realistic values for the current quark masses, provided by the Particle Data Group, and replace the constants of the interacting part of $m'_{u,d,s}$ by functions of $\Phi$, leading to a first-order phase transition structure, for symmetric and stellar quark matter, with $\Phi$ being the order parameter. We show that the improved model points in the direction of chiral symmetry restoration due to the emergence of a deconfined phase. In another application, we construct quark star mass-radius profiles obtained from this new model, and show that it is possible to satisfy recent astrophysical observational data from the LIGO and Virgo Collaboration and from the NICER mission concerning the millisecond pulsars PSR J0030+0451 and PSR J0740+6620.
We study the role of star formation and stellar feedback in a galaxy being ram pressure stripped on its infall into a cluster. We use hydrodynamical wind-tunnel simulations of a massive galaxy ($M_\text{star} = 10^{11} M_\odot$) moving into a massive cluster ($M_\text{cluster} = 10^{15} M_\odot$). We have two types of simulations: with and without star formation and stellar feedback, SF and RC respectively. For each type we simulate four realisations of the same galaxy: a face-on wind, edge-on wind, $45^\circ$ angled wind, and a control galaxy not subject to ram pressure. We directly compare the stripping evolution of galaxies with and without star formation. We find that stellar feedback has no direct effect on the stripping process, i.e. there is no enhancement in stripping via a velocity kick to the interstellar medium gas. The main difference between RC and SF galaxies is due to the indirect effect of stellar feedback, which produces a smoother and more homogeneous interstellar medium. Hence, while the average gas surface density is comparable in both simulation types, the scatter is broader in the RC galaxies. As a result, at the galaxy outskirts overdense clumps survive in RC simulation, and the stripping proceeds more slowly. At the same time, in the inner disc, underdense gas in the RC holes is removed faster than the smoothly distributed gas in the SF simulation. For our massive galaxy, we therefore find that the effect of feedback on the stripping rate is almost negligible, independent of wind angle.
We report spectroscopic identification of the host galaxies of 18 ultra-strong MgII systems (USMgII) at $0.6 \leq z \leq 0.8$. Merging these with 20 host galaxies from our previous survey within $0.4 \leq z \leq 0.6$ yields the largest such sample to date. Using this sample, we confirm that the measured impact parameters ($\rm 6.3\leq D[kpc] \leq 120$ with a median of 19 kpc) are much larger than expected, and that the USMgII host galaxies do not follow the canonical $\rm W_{2796}-D$ anti-correlation. We show that the presence and significance of this anti-correlation may depend on the sample selection. The $\rm W_{2796}-D$ anti-correlation seen for general MgII absorbers shows a mild evolution at the low-$\rm W_{2796}$ end over the redshift range $0.4 \leq z \leq 1.5$, with impact parameters increasing. Compared to the host galaxies of normal MgII absorbers, USMgII host galaxies are brighter and more massive for a given impact parameter. While the USMgII systems preferentially pick star-forming galaxies, these exhibit slightly lower ongoing star-formation rates than main-sequence galaxies of the same stellar mass, suggesting a transition from star-forming to quiescent states. For a limiting magnitude of $m_r < 23.6$, at least $29\%$ of the USMgII host galaxies are isolated, and the width of the MgII absorption in these cases may originate from gas flows (infall/outflow) in isolated halos of massive star-forming, but not starbursting, galaxies. We associate more than one galaxy with the absorber in $\ge 21\%$ of cases, where interactions may cause the wide velocity spread.
We are entering an era in which we will be able to detect and characterize hundreds of dwarf galaxies within the Local Volume. It is already known that a strong dichotomy exists in the gas content and star formation properties of field dwarf galaxies versus satellite dwarfs of larger galaxies. In this work, we study the more subtle differences that may be detectable in galaxies as a function of distance from a massive galaxy, such as the Milky Way. We compare smoothed particle hydrodynamic simulations of dwarf galaxies formed in a Local Volume-like environment (several Mpc away from a massive galaxy) to those formed nearer to Milky Way-mass halos. We find that the impact of environment on dwarf galaxies extends even beyond the immediate region surrounding Milky Way-mass halos. Even before being accreted as satellites, dwarf galaxies near a Milky Way-mass halo tend to have higher stellar masses for their halo mass than more isolated galaxies. Dwarf galaxies in high-density environments also tend to grow faster and form their stars earlier. We show observational predictions that demonstrate how these trends manifest in lower quenching rates, higher HI fractions, and bluer colors for more isolated dwarf galaxies.
The recently-launched James Webb Space Telescope (JWST) can resolve eV-scale emission lines arising from dark matter (DM) decay. We forecast the end-of-mission sensitivity to the decay of axions, a leading DM candidate, in the Milky Way using the blank-sky observations expected during standard operations. Searching for unassociated emission lines will constrain axions in the mass range $0.18$ eV to $2.6$ eV with axion-photon couplings $g_{a\gamma\gamma}\gtrsim 5.5 \times 10^{-12}$ GeV$^{-1}$. In particular, these results will constrain astrophobic QCD axions to masses $\lesssim$ 0.2 eV.
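The search targets the standard two-photon decay: each decay yields a line at half the axion mass, with a width set by the coupling (natural units),

```latex
\begin{equation}
  \Gamma_{a\to\gamma\gamma} = \frac{g_{a\gamma\gamma}^{2}\, m_a^{3}}{64\pi},
  \qquad E_{\gamma} = \frac{m_a}{2} ,
\end{equation}
```

so the quoted mass range of 0.18-2.6 eV corresponds to lines at 0.09-1.3 eV, i.e. wavelengths of roughly 1-14 $\mu$m, within the coverage of the JWST spectrographs.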
We develop a model for galaxy formation and the growth of supermassive black holes (SMBHs), based on the fact that cold dark matter (CDM) halos form their gravitational potential wells through a fast phase with rapid change in the potential, and that the high universal baryon fraction makes cooled gas in halos self-gravitating and turbulent before it can form rotation-supported disks. Gas fragmentation produces sub-clouds so dense that cloud-cloud collision and drag on clouds are not significant, producing a dynamically hot system of sub-clouds that form stars and move ballistically to feed the central SMBH. Active galactic nucleus (AGN) and supernova (SN) feedback is effective only in the fast phase, and the cumulative effects are to regulate star formation and SMBH growth, as well as to reduce the amount of cold gas in halos to allow the formation of globally stable disks. Using a set of halo assembly histories, we demonstrate that the model can reproduce a number of observations, including correlations among SMBH mass, stellar mass of galaxies and halo mass, the number densities of galaxies and SMBH, as well as their evolution over the cosmic time.
We report spectroscopic observations of vB 120 (HD 30712), a 5.7 yr astrometric-spectroscopic binary system in the Hyades cluster. We combine our radial velocities with others from the literature, and with existing speckle interferometry measurements, to derive an improved 3D orbit for the system. We infer component masses of M1 = 1.065 +/- 0.018 MSun and M2 = 1.008 +/- 0.016 MSun, and an orbital parallax of 21.86 +/- 0.15 mas, which we show to be more accurate than the parallax from Gaia DR3. This is the ninth binary or multiple system in the Hyades with dynamical mass determinations, and one of the examples with the highest precision. An analysis of the spectral energy distribution yields the absolute radii of the stars, R1 = 0.968 +/- 0.012 RSun and R2 = 0.878 +/- 0.013 RSun, and effective temperatures of 5656 +/- 56 K and 5489 +/- 60 K for the primary and secondary, respectively. A comparison of these properties with the predictions of current stellar evolution models for the known age and metallicity of the cluster shows only minor differences.
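As a hedged sketch of how an orbital parallax follows from a 3D orbit (the published analysis fits all quantities jointly): Kepler's third law in solar units gives the physical semimajor axis, and the parallax is the angular semimajor axis divided by the physical one in AU. With the quoted masses and period,

```latex
\begin{equation}
  a_{\rm AU} = \left[(M_1 + M_2)\, P^{2}\right]^{1/3}
  \simeq \left(2.07 \times 5.7^{2}\right)^{1/3} \simeq 4.1~\mathrm{AU},
  \qquad
  \pi_{\rm orb} = \frac{a\,[\mathrm{mas}]}{a_{\rm AU}} ,
\end{equation}
```

so the quoted parallax of 21.86 mas corresponds to an angular semimajor axis of roughly $4.1 \times 21.86 \approx 89$ mas (an inferred figure, not a value quoted above).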
Bright quasar samples at high redshift are useful for investigating active galactic nuclei evolution. In this study, we describe XQz5, a sample of 83 ultraluminous quasars in the redshift range $4.5 < z < 5.3$ with optical and near-infrared spectroscopic observations, with unprecedented completeness at the bright end of the quasar luminosity function. The sample is observed with the Southern Astrophysical Research Telescope, the Very Large Telescope, and the ANU 2.3m Telescope, resulting in a high-quality, moderate-resolution spectral atlas of the brightest known quasars within the redshift range. We use established virial mass relations to derive the black hole masses by measuring the observed Mg\,\textsc{ii}$\lambda$2799\AA\ emission line, and we estimate the bolometric luminosity with bolometric corrections to the UV continuum. Comparisons to literature samples show that XQz5 bridges the redshift gap between other X-shooter quasar samples, XQ-100 and XQR-30, and is a brighter sample than both. Luminosity-matched lower-redshift samples host more massive black holes, which indicates that quasars at high redshift are more active than their counterparts at lower redshift, in concordance with recent literature.
In this manuscript, strong evidence is reported for unobscured broad line regions (BLRs) in the Type-1.9 AGN SDSS J1241+2602, which shows reliable broad H$\alpha$ but no broad H$\beta$. Commonly, the disappearance of broad H$\beta$ is explained by the heavily obscured BLRs expected in the AGN unified model for Type-1.9 AGN. Here, based on the properties of two kinds of BH masses, the virial BH mass and the BH mass from the $M_{\rm BH}-\sigma$ relation, an independent method is proposed to test whether there are unobscured central BLRs in a Type-1.9 AGN. Using a reliable measurement of the stellar velocity dispersion, about 110$\pm$12 km/s, from the host galaxy absorption features in SDSS J1241+2602, the BH mass from the $M_{\rm BH}-\sigma$ relation is consistent with the virial BH mass of $(3.43\pm1.25)\times10^7{\rm M_\odot}$ determined from the properties of the observed broad H$\alpha$ without considering effects of obscuration. Meanwhile, if heavily obscured BLRs are assumed in SDSS J1241+2602, the reddening-corrected virial BH mass is tens of times larger than the value expected from the $M_{\rm BH}-\sigma$ relation, making SDSS J1241+2602 an outlier in the $M_{\rm BH}-\sigma$ space at a confidence level higher than $5\sigma$. Therefore, unobscured BLRs are preferred in the Type-1.9 AGN SDSS J1241+2602. The results indicate that it is necessary to check whether unobscured central BLRs are common in Type-1.9 AGN when testing the AGN unified model with the properties of Type-1.9 AGN.
The CGM hosts many physical processes with different kinematic signatures that affect galaxy evolution. We address the CGM-galaxy kinematic connection by quantifying the fraction of HI that is aligned with galaxy rotation using the equivalent width co-rotation fraction, $f_{\rm EWcorot}$. Using 70 quasar sightlines with HST/COS HI absorption (${12<\log (N(HI)/{\rm cm}^{-2})<20}$) within $5R_{\rm vir}$ of $z<0.6$ galaxies, we find that $f_{\rm EWcorot}$ increases with increasing HI column density. $f_{\rm EWcorot}$ is flat at $\sim0.6$ within $R_{\rm vir}$ and decreases beyond $R_{\rm vir}$ to $\sim0.35$. $f_{\rm EWcorot}$ also has a flat distribution with azimuthal and inclination angles within $R_{\rm vir}$, but decreases by a factor of two outside of $R_{\rm vir}$ for minor-axis gas and for edge-on galaxies. Inside $R_{\rm vir}$, co-rotation-dominated HI is located within $\sim 20$ deg of the major and minor axes. Surprisingly, we find equal amounts of HI absorption consistent with co-rotation along both the major and minor axes within $R_{\rm vir}$. However, this co-rotation disappears along the minor axis beyond $R_{\rm vir}$, suggesting that if this gas is from outflows, then it is bound to galaxies. $f_{\rm EWcorot}$ is constant over two decades of halo mass, with no sign of the decrease for $\log(M_{\rm h}/M_{\odot})>12$ expected from simulations. Our results suggest that co-rotating gas flows are best found by searching for higher column density gas within $R_{\rm vir}$ and near the major and minor axes.
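A minimal sketch of an equivalent-width co-rotation fraction of this general kind: the share of total absorption equivalent width at velocities sharing the sign of the galaxy's projected rotation. The depth-based EW estimate, sign convention, and variable names are illustrative assumptions, not the paper's exact definition.

```python
# Hedged sketch: fraction of absorption equivalent width on the
# co-rotating side of the systemic velocity. Illustrative definition.
import numpy as np

def f_ew_corot(velocity, flux, continuum, v_rot_sign):
    """EW fraction at velocities sharing the sign of galaxy rotation.

    velocity   : velocities relative to systemic [km/s], ascending
    flux       : absorption-line flux at each velocity
    continuum  : continuum level at each velocity
    v_rot_sign : +1 or -1, sign of the galaxy's projected rotation
    """
    depth = np.clip(1.0 - flux / continuum, 0.0, None)   # apparent depth
    dv = np.gradient(velocity)
    ew_per_pix = depth * dv                              # EW per pixel
    corot = np.sign(velocity) == np.sign(v_rot_sign)
    return ew_per_pix[corot].sum() / ew_per_pix.sum()
```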
The multiphase CGM hosts critical processes that affect galaxy evolution, such as accretion and outflows. We searched for evidence of these phenomena by using the EW co-rotation fraction ($f_{\rm EWcorot}$) to study the kinematic connection between the multiphase CGM and host galaxy rotation. We examined 27 systems with absorption lines from HST/COS (including, but not limited to, SiII, CII, SiIII, CIII, and OVI) within $21 \leq D \leq 276$ kpc of galaxies. We find that the median $f_{\rm EWcorot}$ for all ions is consistent within errors and that $f_{\rm EWcorot}$ increases with increasing N$(\rm HI)$. The $f_{\rm EWcorot}$ of lower-ionization gas likely decreases with increasing $D/R_{\rm vir}$, while OVI and HI are more consistent with being flat. The $f_{\rm EWcorot}$ varies minimally as a function of azimuthal angle and is similar for all ions at a fixed azimuthal angle. The larger number of OVI detections enables us to investigate where the majority of co-rotating gas is found. Highly co-rotating OVI primarily resides along the galaxies' major axis. Examining $f_{\rm EWcorot}$ as a function of ionization potential ($df_{\rm EWcorot}/d({\rm eV})$), we find a stronger co-rotation signature for lower-ionization gas. There are suggestions of a connection between CGM metallicity and major-axis co-rotation, where low-ionization gas with higher $f_{\rm EWcorot}$ exhibits lower metallicity and may trace large-scale filamentary inflows. Higher-ionization gas with higher $f_{\rm EWcorot}$ exhibits higher metallicity and may instead trace co-planar recycled gas accretion. Our results stress the importance of comparing absorption originating from a range of ionization phases to differentiate between various gas flow scenarios.
The internal structure of the prestellar core G208.68-19.02-N2 (G208-N2) in the Orion Molecular Cloud 3 (OMC-3) region has been studied with the Atacama Large Millimeter/submillimeter Array (ALMA). The dust continuum emission revealed a filamentary structure with a length of $\sim$5000 au and an average H$_2$ volume density of $\sim$6 $\times$ 10$^7$ cm$^{-3}$. At the tip of this filamentary structure, there is a compact object, which we call a ``nucleus", with a radius of $\sim$150--200 au and a mass of $\sim$0.1 M$_{\odot}$. The nucleus has a central density of $\sim$2 $\times$ 10$^9$ cm$^{-3}$ with a radial density profile of $r^{-1.87{\pm}0.11}$. The density scaling of the nucleus is $\sim$3.7 times higher than that of the singular isothermal sphere. This, as well as the very low virial parameter of 0.39, suggests that gravity dominates over pressure everywhere in the nucleus. However, there is no sign of a CO outflow localized to this nucleus. The filamentary structure is traced by the N$_2$D$^+$ 3--2 emission, but not by the C$^{18}$O 2--1 emission, implying significant CO depletion due to the high density and cold temperature. Toward the nucleus, N$_2$D$^+$ also shows a signature of depletion. This could imply either the depletion of the parent molecule, N$_2$, or the presence of an embedded very-low-luminosity central source that could sublimate the CO in a very small area. The nucleus in G208-N2 is considered to be a prestellar core on the verge of first hydrostatic core (FHSC) formation, or a candidate for the FHSC.
In this letter, we measure the rest-frame optical and near-infrared sizes of ten quiescent candidates at 3<z<5, first reported by Carnell et al. (2023a). We use $\textit{James Webb Space Telescope}$ (JWST) Near-Infrared Camera (NIRCam) F277W and F444W imaging obtained through the public CEERS Early Release Science (ERS) program and $\textbf{imcascade}$, an astronomical fitting code that utilizes Multi-Gaussian Expansion, to carry out our size measurements. When compared to the extrapolation of rest-optical size-mass relations for quiescent galaxies at lower redshift, eight out of ten candidates in our sample (80%) are on average more compact by $\sim$40%. Seven out of ten candidates (70%) exhibit rest-frame infrared sizes $\sim$10% smaller than their rest-frame optical sizes, indicative of negative color gradients. Two candidates (20%) have rest-frame infrared sizes $\sim$1.4$\times$ larger than their rest-frame optical sizes; one of these candidates exhibits signs of ongoing or residual star formation, suggesting this galaxy may not be fully quenched. The remaining candidate is unresolved in both filters, which may indicate an Active Galactic Nucleus (AGN). Strikingly, we observe that three of the most massive galaxies in the sample (log(M$_{\star}$/M$_{\odot}$) = 10.74 - 10.95) are extremely compact, with effective radii of ${\sim}$0.7 kpc. Our results indicate that quiescent galaxies may be more compact than previously anticipated beyond $z=3$, even after correcting for potential color gradients. This suggests that the size evolution of quiescent galaxies is steeper than previously anticipated and that our current understanding is biased by the limited wavelength capabilities of the $\textit{Hubble Space Telescope}$ (HST) and the presence of negative color gradients in quiescent galaxies.
Systematic studies have revealed hundreds of ultra-compact dwarf galaxies (UCDs) in the nearby Universe. With half-light radii $r_h$ of approximately 10-100 parsecs and stellar masses $M_*$ $\approx$ $10^6-10^8$ solar masses, UCDs are among the densest known stellar systems. Although similar in appearance to massive globular clusters, the detection of extended stellar envelopes, complex star formation histories, elevated mass-to-light ratios, and supermassive black holes suggests that some UCDs are remnant nuclear star clusters of tidally-stripped dwarf galaxies, or even ancient compact galaxies. However, only a few objects have been found in the transient stage of tidal stripping, and this assumed evolutionary path has never been fully traced by observations. Here we show that 106 galaxies in the Virgo cluster have morphologies that are intermediate between normal, nucleated dwarf galaxies and single-component UCDs, revealing a continuum that fully maps this morphological transition and fills the `size gap' between star clusters and galaxies. Their spatial distribution and redder color are also consistent with stripped satellite galaxies on their first few pericentric passages around massive galaxies. The `ultra-diffuse' tidal features around several of these galaxies directly show how UCDs are forming through tidal stripping, and that this evolutionary path can include an early phase as a nucleated ultra-diffuse galaxy (UDG). These UCDs represent substantial visible fossil remnants of ancient dwarf galaxies in galaxy clusters, and more low-mass remnants probably remain to be found.
Recently, Lian et al. (2023) used Gaia-ESO data to study the chemical evolution of neutron-capture elements in the regime [Fe/H]>-1. We aim here to complement this study down to [Fe/H]=-3, focusing on Ba, Y, Sr, and the abundance ratios [Ba/Y] and [Sr/Y], which give comprehensive views of s-process nucleosynthesis channels. We measured LTE and NLTE abundances of Ba, Y, and Sr in 323 Galactic metal-poor stars using high-resolution optical spectra with high S/N. We used the spectral fitting code TSFitPy, together with 1D model atmospheres and previously determined LTE and NLTE atmospheric parameters. The NLTE effects are on the order of $-0.1$ to $\sim$0.2 dex depending on the element. The ratio between heavy and light s-process elements, [Ba/Y], varies weakly with [Fe/H] even in the metal-poor regime, consistent with its behaviour in the metal-rich regime. The [Ba/Y] scatter at a given metallicity is larger than the abundance measurement uncertainties. Homogeneous chemical evolution models with different yield prescriptions are unable to accurately reproduce the [Ba/Y] scatter at low [Fe/H]. Adopting the stochastic chemical evolution model of Cescutti & Chiappini (2014) allows us to reproduce the observed scatter in the abundance patterns of [Ba/Y] and [Ba/Sr]. With our observations, we rule out the need for an arbitrary scaling of the r-process contribution previously suggested by the model authors. We have shown how important it is to properly include NLTE effects when measuring chemical abundances, especially in the metal-poor regime. This work shows that the choice of the Galactic chemical evolution model (stochastic vs. one-zone) is key when comparing models to observations. Upcoming surveys such as 4MOST and WEAVE will deliver high-quality spectra of many thousands of metal-poor stars, and this work gives a typical case study of what could be achieved with such surveys.
Gravitational waves (GWs) can be distorted, similarly to electromagnetic (EM) waves, by massive objects through the phenomenon of gravitational lensing. The importance of gravitational lensing for GW astronomy is becoming increasingly apparent in the GW detection era, in which nearly 100 events have already been detected. As current ground-based interferometers reach their design sensitivities, it is anticipated that these detectors may observe a few GW signals that are strongly lensed by the dark halos of intervening galaxies or galaxy clusters. Analyzing strong lensing effects on GW signals is thus becoming important for understanding the properties of the lens and correctly inferring the intrinsic GW source parameters. However, one cannot accurately infer lens parameters for complex lens models with only GW observations, because there are strong degeneracies between the parameters of lensed waveforms. In this paper, we discuss how to conduct parameter estimation of strongly lensed GW signals and infer the lens parameters with the help of EM observations, such as the shape and the redshift of the lens. We find that for simple spherically symmetric lens models, the lens parameters can be well recovered using only GW information. For non-axially symmetric lens models, on the other hand, recovering the lens parameters requires systems in which four or more GW images are detected, together with additional EM observations. Combinations of GW and EM observations can further improve the inference of the lens parameters.
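For the simplest (point-mass) lens, the geometric-optics observables take the standard form below; GW data essentially constrain the flux ratio and time delay of the two images, which is why simple spherically symmetric lenses are recoverable from GW information alone while richer models need EM input. Here $y$ is the dimensionless impact parameter and $M_L(1+z_L)$ the redshifted lens mass.

```latex
% Standard point-mass-lens magnifications and time delay between the
% two images (geometric optics), for illustration of the GW observables.
\begin{align}
  \mu_{\pm} &= \frac{1}{2} \pm \frac{y^{2}+2}{2y\sqrt{y^{2}+4}}, \\
  \Delta t  &= \frac{4GM_{L}(1+z_{L})}{c^{3}}
     \left[\frac{y\sqrt{y^{2}+4}}{2}
     + \ln\!\frac{\sqrt{y^{2}+4}+y}{\sqrt{y^{2}+4}-y}\right].
\end{align}
```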
An analytical solution of the perturbed equations is obtained, which exists in all ergodic models of collisionless spherical stellar systems with a single length parameter. This solution corresponds to variations of this parameter, i.e., stretching or shrinking the sphere while preserving the total mass. The system remains in an equilibrium state. The simplicity of the solution allows for explicit expressions for the distribution function, potential, and density at all orders of perturbation theory. This, in turn, helps to clarify the concept of perturbation energy, which, being a second-order quantity in amplitude, cannot be calculated in linear theory. It is shown that the correct expression for the perturbation energy, constructed taking second-order perturbations into account, and the well-known expression for the perturbation energy, constructed as a bilinear form within linear theory from first-order perturbations, do not coincide. However, both of these energies are integrals of motion and differ only by a constant. The obtained solution can be used to check the correctness of codes and the accuracy of calculations in the numerical study of collisionless stellar models.
Blue supergiants are the brightest stars in their host galaxies, and yet their evolutionary status has been a long-standing problem in stellar astrophysics. In this pioneering work, we present a large sample of 59 early B-type supergiants in the Large Magellanic Cloud with newly derived stellar parameters and identify the signatures of stars born from binary mergers among them. We simulate novel 1D merger models of binaries consisting of supergiants with hydrogen-free cores (primaries) and main-sequence companions (secondaries) and consider the effects of the interaction of the secondary with the core of the primary. We follow the evolution of the new-born 16--40\,M$_\odot$ stars until core-carbon depletion, close to their final pre-explosion structure. Unlike stars born single, stars born from such stellar mergers are blue throughout their core helium-burning phase and reproduce the surface gravities and Hertzsprung-Russell diagram positions of most of our sample. This indicates that the observed blue supergiants are structurally similar to merger-born stars. Moreover, the large nitrogen-to-carbon and nitrogen-to-oxygen ratios and helium enhancements exhibited by at least half our sample are uniquely consistent with our model predictions, leading us to conclude that a large fraction of blue supergiants are indeed products of binary mergers.
The 4m International Liquid Mirror Telescope (ILMT) facility continuously scans the same sky strip ($\sim$22$^\prime$ wide) each night with a fixed pointing towards the zenith. By applying an optimal image-subtraction technique to consecutive-night images, it is possible to detect hundreds of supernovae (SNe) each year. Prompt monitoring of ILMT-detected SNe is planned under the secured target-of-opportunity mode using ARIES telescopes (the 1.3m DFOT and the 3.6m DOT). Spectroscopy with the DOT facility will be useful for the classification and detailed investigation of SNe. During the commissioning phase of the ILMT, supernova (SN) 2023af was identified in the ILMT field of view. The SN was further monitored with the ILMT and DOT facilities. Preliminary results based on the light curve and spectral features of SN 2023af are presented.
Nestled in the mountains of northern India is a 4-metre rotating dish of liquid mercury. Over a 10-year period, the International Liquid Mirror Telescope (ILMT) will survey 117 square degrees of sky to study the astrometric and photometric variability of all detected objects. One of the scientific programs will be a survey of variable stars. The data gathered will be used to construct a comprehensive catalog of light curves, an essential resource for astronomers studying the formation and evolution of stars, the structure and dynamics of our Milky Way galaxy, and the properties of the Universe as a whole. This catalog will aid our advance towards understanding the cosmos and provide deeper insights into the fundamental processes that shape our Universe. In this work, we describe the survey and give some examples of variable stars found in the early commissioning data from the ILMT.
Low surface brightness (LSB) galaxies make up a significant fraction of the luminosity density of the local universe. Their low surface brightness suggests a formation and evolution process different from that of more typical high-surface-brightness galaxies. This study presents an analysis of LSB galaxies found in images obtained by the International Liquid Mirror Telescope during the observation period from October 24 to November 1, 2022. A total of 3,092 LSB galaxies were measured and separated into blue and red LSB categories based on their $g'-i'$ colours. In these samples, the median effective radius is 4.7 arcsec, and the median value of the mean surface brightness within the effective radius is 26.1 mag arcsec$^{-2}$. The blue LSB galaxies are slightly brighter than the red LSB galaxies. No significant difference in ellipticity was found between the blue and the red LSB galaxies.
Recent research suggests a correlation between the variability and intrinsic brightness of quasars. If calibrated, this could lead to the use of quasars on the cosmic distance ladder, but this work is currently limited by a lack of quasar light-curve data with high cadence and precision. The Python photometric data pipeline SunPhot is being developed as part of preparations for an upcoming quasar variability survey with the International Liquid Mirror Telescope (ILMT). SunPhot uses aperture photometry to directly extract light curves for a catalogue of sources from calibrated ILMT images. SunPhot v.2.1 is operational, but the project is awaiting completion of ILMT commissioning.
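This style of aperture photometry can be sketched with standard community tools. The following is a minimal illustration using photutils rather than the SunPhot internals; the file name, source positions, and aperture radii are placeholder assumptions:

    import numpy as np
    from astropy.io import fits
    from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

    # Hypothetical calibrated ILMT frame and catalogue positions in pixels.
    image = fits.getdata("ilmt_frame_calibrated.fits")
    positions = [(512.3, 1024.7), (890.1, 333.4)]

    # Sum counts in a source aperture; estimate the sky in a surrounding annulus.
    aperture = CircularAperture(positions, r=5.0)
    annulus = CircularAnnulus(positions, r_in=8.0, r_out=12.0)
    phot = aperture_photometry(image, aperture)
    sky = aperture_photometry(image, annulus)

    # Subtract the scaled sky contribution and convert to instrumental magnitudes.
    sky_per_pix = sky["aperture_sum"] / annulus.area
    flux = phot["aperture_sum"] - sky_per_pix * aperture.area
    inst_mag = -2.5 * np.log10(flux)

Repeating this per epoch, with photometric calibration against a reference catalogue, yields the light curve of each source.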
We present a new determination of the evolving galaxy UV luminosity function (LF) over the redshift range $9.5<z<12.5$ based on a wide-area ($>250$ arcmin$^2$) data set of JWST NIRCam near-infrared imaging assembled from thirteen public JWST surveys. Our relatively large-area search allows us to uncover a sample of 61 robust $z>9.5$ candidates detected at $\geq 8\sigma$, and hence place new constraints on the intermediate-to-bright end of the UV LF. When combined with our previous JWST+UltraVISTA results, this allows us to measure the form of the LF over a luminosity range corresponding to four magnitudes ($M_{1500}$). At these early times we find that the galaxy UV LF is best described by a double power-law function, consistent with results obtained from recent ground-based and early JWST studies at similar redshifts. Our measurements provide further evidence for a relative lack of evolution at the bright-end of the UV LF at $z=9-11$, but do favour a steep faint-end slope ($\alpha\leq-2$). The luminosity-weighted integral of our evolving UV LF provides further evidence for a gradual, smooth (exponential) decline in co-moving star-formation rate density ($\rho_{\mathrm{SFR}}$) at least out to $z\simeq12$, with our determination of $\rho_{\mathrm{SFR}}(z=11)$ lying significantly above the predictions of many theoretical models of galaxy evolution.
We present individual star-formation histories of $\sim3000$ massive galaxies (log($\mathrm{M_*/M_{\odot}}$) > 10.5) from the Large Early Galaxy Astrophysics Census (LEGA-C) spectroscopic survey at a lookback time of $\sim$7 billion years, and quantify the population trends leveraging 20-hr-deep integrated spectra of these $\sim$1800 star-forming and $\sim$1200 quiescent galaxies at $0.6 < z < 1.0$. Essentially all galaxies at this epoch contain stars younger than 3 Gyr, in contrast with older massive galaxies today, facilitating better recovery of previous generations of star formation at cosmic noon and earlier. We conduct spectro-photometric analysis using parametric and non-parametric Bayesian SPS modeling tools, Bagpipes and Prospector, to constrain the median star-formation histories of this mass-complete sample and characterize population trends. A consistent picture arises for the late-time stellar mass growth when quantified as $t_{50}$ and $t_{90}$, corresponding to the age of the universe when galaxies formed 50\% and 90\% of their total stellar mass, although the two sets of models disagree at the earliest formation times (e.g. $t_{10}$). Our results reveal trends in both stellar mass and stellar velocity dispersion, as in the local universe: low-mass galaxies with shallower potential wells grow their stellar masses later in cosmic history compared to high-mass galaxies. Unlike local quiescent galaxies, the median duration of late-time star formation ($\tau_{SF,late}$ = $t_{90}$ - $t_{50}$) does not consistently depend on stellar mass. This census sets a benchmark for future deep spectro-photometric studies of the more distant universe.
The use of machine learning techniques has significantly increased the physics discovery potential of neutrino telescopes. In the coming years, we expect upgrades of existing detectors and new telescopes with novel experimental hardware, yielding more statistics as well as more complicated data signals. This calls for matching upgrades on the software side to handle these more complex data efficiently. Specifically, we seek low-power and fast software methods for real-time signal processing, since current machine learning methods are too expensive to be deployed in the resource-constrained regions where these experiments are located. We present a first attempt at, and a proof-of-concept for, enabling machine learning methods to be deployed in-detector for water/ice neutrino telescopes via quantization and deployment on Google Edge Tensor Processing Units (TPUs). We design a recursive neural network with a residual convolutional embedding and adapt a quantization process to deploy the algorithm on a Google Edge TPU. This algorithm achieves reconstruction accuracy similar to traditional GPU-based machine learning solutions while requiring the same amount of power as CPU-based regression solutions, combining high accuracy with low power consumption and enabling real-time in-detector machine learning in even the most power-restricted environments.
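The quantization route described here can be illustrated with TensorFlow Lite's standard post-training workflow; in this sketch the network and the calibration data are placeholders, not the authors' model:

    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration batches shaped like real detector inputs.
        for _ in range(100):
            yield [tf.random.normal([1, 128, 128, 3])]

    # Stand-in for the trained network; full int8 quantization is required
    # for Edge TPU deployment.
    model = tf.keras.applications.MobileNetV2(input_shape=(128, 128, 3), weights=None)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())

The resulting .tflite file is then compiled for the TPU with Google's edgetpu_compiler command-line tool.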
COSINE-100 is a dark matter direct detection experiment with a 106 kg NaI(Tl) target. $^{210}$Pb and its daughter isotopes are a dominant background in the WIMP region of interest and are detected via beta and alpha decays. Analysis of the alpha channel complements the background model as observed in the beta/gamma channel. We present measurements of the quenching factors and Monte Carlo simulation results for the alpha decay components of the COSINE-100 NaI(Tl) crystals. The data strongly indicate that the alpha decays probabilistically undergo one of two possible quenching factors, a phenomenon that is not yet understood. The fitted results are consistent with independent measurements and improve the overall understanding of the COSINE-100 backgrounds.
We propose an approach to infer large-scale heterogeneities within a small celestial body from measurements of its gravitational potential, provided for instance by spacecraft radio-tracking. The non-uniqueness of the gravity inversion is here mitigated by limiting the solutions to piecewise-constant density distributions, thus composed of multiple regions of uniform density (mass anomalies) dispersed in a background medium. The boundary of each anomaly is defined implicitly as the 0-level surface of a scalar field (called the level-set function), so that by modifying this field the shape and location of the anomaly are varied. The gravitational potential associated with a density distribution is here computed via a line-integral polyhedron method, yielding the coefficients of its spherical harmonics expansion. The density distribution is then adjusted via an iterative least-squares approach with Tikhonov regularization, estimating at every iteration corrections to the level-set function, the density contrast of each anomaly, and the background density, in order to minimize the residuals between the predicted gravity coefficients and those measured. Given the non-convexity of the problem and the lack of prior knowledge assumed (save for the shape of the body), the estimation process is repeated for several random initial distributions, and the resulting solutions are clustered based on global properties independent of the input measurements. This provides families of candidate interior models in agreement with the data, and the spread of the local density values across each family is used to assess the uncertainties associated with the estimated distributions.
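The core update of such an iterative scheme is a Tikhonov-regularized Gauss-Newton step. A rough sketch, assuming a linearized forward operator J that maps corrections of the level-set and density parameters to spherical-harmonic gravity coefficients:

    import numpy as np

    def tikhonov_step(J, residual, alpha):
        # J        : (n_coeffs, n_params) Jacobian of the predicted gravity
        #            coefficients w.r.t. level-set and density parameters.
        # residual : (n_coeffs,) measured minus predicted coefficients.
        # alpha    : regularization weight damping poorly constrained directions.
        n = J.shape[1]
        lhs = J.T @ J + alpha * np.eye(n)
        rhs = J.T @ residual
        return np.linalg.solve(lhs, rhs)

    # Iteration sketch: update the level-set values, the anomaly density
    # contrasts, and the background density until the residuals stop decreasing.
    # params += tikhonov_step(J, d_obs - forward(params), alpha=1e-2)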
Extreme precision radial velocity (EPRV) measurements contend with internal noise (instrumental systematics) and external noise (intrinsic stellar variability) on the road to 10 cm/s "exo-Earth" sensitivity. Both of these noise sources are well probed using "Sun-as-a-star" RVs and cross-instrument comparisons. We built the Solar Calibrator (SoCal), an autonomous system that feeds stable, disc-integrated sunlight to the recently commissioned Keck Planet Finder (KPF) at the W. M. Keck Observatory. With SoCal, KPF acquires signal-to-noise ~1200, R ~ 98,000 optical (445--870 nm) spectra of the Sun in 5 sec exposures at unprecedented cadence for an EPRV facility using KPF's fast readout mode (<16 sec between exposures). Daily autonomous operation is achieved by defining an operations loop using state-machine logic. Data affected by clouds are automatically flagged using a reliable quality-control metric derived from simultaneous irradiance measurements. Comparing solar data across the growing global network of EPRV spectrographs with solar feeds will allow EPRV teams to disentangle internal and external noise sources and benchmark spectrograph performance. To facilitate this, all SoCal data products are immediately available to the public on the Keck Observatory Archive. We compared SoCal RVs to contemporaneous RVs from NEID, the only other immediately public EPRV solar dataset. We find agreement at the 30--40 cm/s level on timescales of several hours, comparable to the combined photon-limited precision. Data from SoCal were also used to assess a detector problem and wavelength-calibration inaccuracies associated with KPF during early operations. Long-term SoCal operations will collect upwards of 1,000 solar spectra per six-hour day using KPF's fast readout mode, enabling stellar activity studies at high signal-to-noise on our nearest solar-type star.
Imaging spectroscopy is intended to be coupled with adaptive optics (AO) on large telescopes, such as the EST, in order to produce high spatial and temporal resolution measurements of velocities and magnetic fields over a 2D FOV. We propose a high spectral resolution slicer (30 m{\AA} typical) for the Multichannel Subtractive Double Pass (MSDP) of the future European Solar Telescope (EST), using a new-generation slicer for thin photospheric lines such as FeI (56 channels, 0.13 mm step), which will benefit from AO and existing polarimeters. The aim is to reconstruct cubes of instantaneous data ($x$, $y$, $\lambda$) at high cadence, allowing the study of photospheric dynamics and magnetic fields.
In the past decade, dual-phase xenon time projection chambers (Xe-TPCs) have emerged as some of the most powerful detectors in the fields of astroparticle physics and rare-event searches. Developed primarily for the direct detection of dark matter particles, experiments presently operating deep underground have reached target masses at the multi-tonne scale, energy thresholds around 1\,keV, and radioactivity-induced background rates similar to those from solar neutrinos. These unique properties, together with demonstrated stable operation over several years, allow for the exploration of new territory via high-sensitivity searches for a plethora of ultra-rare interactions. These include searches for particle dark matter, for second-order weak decays, and the observation of astrophysical neutrinos. We first review some properties of xenon as a radiation detection medium and the operation principles of dual-phase Xe-TPCs, together with their energy calibration and resolution. We then discuss the status of currently running experiments and of proposed next-generation projects, describing some of the technological challenges. We end by looking at their sensitivity to dark matter candidates, to second-order weak decays, and to solar and supernova neutrinos. Experiments based on dual-phase Xe-TPCs are difficult, and, like all good experiments, they are constantly pushed to their limits. Together with many other endeavours in astroparticle physics and cosmology, they will continue to push at the borders of the unknown, hopefully to reveal profound new knowledge about our cosmos.
Astrophysical radio signals are excellent probes of the extreme physical processes that emit them. However, to reach Earth, the electromagnetic radiation passes through the ionised interstellar medium (ISM), which introduces a frequency-dependent time delay (dispersion) into the emitted signal. Removing dispersion enables searches for transient signals like Fast Radio Bursts (FRBs) or repeating signals from isolated pulsars or those in orbit around other compact objects. The sheer volume and high resolution of data that next-generation radio telescopes will produce require High-Performance Computing (HPC) solutions and algorithms in time-domain data processing pipelines to extract scientifically valuable results in real time. This paper presents a state-of-the-art implementation of brute-force incoherent dedispersion on NVIDIA GPUs, and on Intel and AMD CPUs. We show that our implementation is 4x faster (for 8-bit, 8192-channel input) than other available solutions and demonstrate, using 11 existing telescopes, that our implementation is at least 20x faster than real time. This work is part of the AstroAccelerate package.
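Brute-force incoherent dedispersion itself is conceptually simple: every frequency channel is shifted by the cold-plasma delay and the channels are summed for each trial dispersion measure. A minimal single-threaded NumPy reference follows; the GPU and CPU kernels in AstroAccelerate are heavily optimized versions of this same loop:

    import numpy as np

    KDM = 4.148808e3  # dispersion constant, MHz^2 s / (pc cm^-3)

    def dedisperse(data, freqs_mhz, dt_s, dm_trials):
        # data      : (n_chan, n_samp) filterbank intensities.
        # freqs_mhz : (n_chan,) channel centre frequencies in MHz.
        # dt_s      : sampling time in seconds.
        # dm_trials : trial dispersion measures in pc cm^-3.
        f_ref = freqs_mhz.max()
        out = np.zeros((len(dm_trials), data.shape[1]))
        for i, dm in enumerate(dm_trials):
            # Delay of each channel relative to the highest frequency.
            delays = KDM * dm * (freqs_mhz**-2 - f_ref**-2)
            shifts = np.round(delays / dt_s).astype(int)
            for ch, s in enumerate(shifts):
                # Wrap-around at the array edge is ignored in this sketch.
                out[i] += np.roll(data[ch], -s)
        return out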
The Nancy Grace Roman Space Telescope Coronagraph Instrument will enable the polarimetric imaging of debris disks and inner dust belts at optical and near-infrared wavelengths, in addition to the high-contrast polarimetric imaging and spectroscopy of exoplanets. The Coronagraph uses two Wollaston prisms to produce four orthogonally polarized images and is expected to measure the polarization fraction with measurement errors < 3% per spatial resolution element. To simulate polarization observations through the Hybrid Lyot Coronagraph (HLC) and Shaped Pupil Coronagraph (SPC), we model disk scattering, the coronagraphic point-response function, detector noise, speckles, jitter, and instrumental polarization, and calculate the Stokes parameters. To illustrate the potential for discovery and a better understanding of known systems with both the HLC and SPC modes, we model the debris disks around Epsilon Eridani and HR 4796A, respectively. For Epsilon Eridani, using astrosilicates with 0.37+/-0.01 as the peak input polarization fraction in one resolution element, we recover a peak disk polarization fraction of 0.33+/-0.01. Similarly, for HR 4796A, for a peak input polarization fraction of 0.92+/-0.01, we obtain a peak output polarization fraction of 0.80+/-0.03. The Coronagraph design meets the required precision, and forward modeling is needed to accurately estimate the polarization fraction.
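The Stokes reconstruction from the four orthogonally polarized channels follows the usual Wollaston-pair relations; a schematic version, assuming channel orientations of 0, 90, 45, and 135 degrees:

    import numpy as np

    def stokes_from_wollaston(i0, i90, i45, i135):
        # Combine four orthogonally polarized intensity images into
        # Stokes I, Q, U and the linear polarization fraction p.
        I = 0.5 * (i0 + i90 + i45 + i135)
        Q = i0 - i90
        U = i45 - i135
        p = np.sqrt(Q**2 + U**2) / I
        return I, Q, U, p

In practice each channel must first be corrected for the coronagraphic throughput and instrumental polarization before these differences are formed.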
The IAG solar observatory is producing high-fidelity, ultra-high-resolution spectra (R>500,000) of the spatially resolved surface of the Sun using a Fourier Transform Spectrometer (FTS). The radial velocity (RV) calibration of these spectra is currently performed using absorption lines from Earth's atmosphere, limiting the precision and accuracy. To improve the frequency-calibration precision and accuracy, we plan to use, in combination, an iodine cell and a Fabry-Perot etalon (FP) setup that is an evolution of the CARMENES FP design. To create an accurate wavelength solution, the iodine cell is measured in parallel with the FP. The FP can then be used to transfer the accurate wavelength solution provided by the iodine via simultaneous calibration of solar observations. To verify the stability and precision of the FTS, we performed parallel measurements of the FP and an iodine cell. The measurements show an intrinsic stability of the FTS at the level of 1 m/s over 90 hours. The difference between the FP RVs and the iodine cell RVs shows no significant trends over the same time span. The RMS of the RV difference between the FP and the iodine cell is 10.7 cm/s, which can be largely attributed to the intrinsic RV precisions of the iodine cell and the FP (10.2 cm/s and 1.0 cm/s, respectively). This shows that we can calibrate the FTS at the 10 cm/s level, competitive with current state-of-the-art precision RV instruments. Based on these results, we argue that the spectrum of iodine can be used as an absolute reference to reach an RV accuracy of 10 cm/s.
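As a consistency check, adding the two quoted intrinsic precisions in quadrature gives $\sqrt{(10.2)^2+(1.0)^2} \simeq 10.2$ cm/s, close to the observed 10.7 cm/s RMS and leaving only a small residual for other noise sources.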
The Simons Observatory (SO) is a cosmic microwave background instrumentation suite being deployed in the Atacama Desert in northern Chile. The telescopes within SO use three types of dichroic transition-edge sensor (TES) detector arrays, with the 90 and 150 GHz Mid-Frequency (MF) arrays containing 65% of the approximately 68,000 detectors in the first phase of SO. All of the 26 required MF detector arrays have now been fabricated, packaged into detector modules, and tested in laboratory cryostats. Across all modules, we find an average operable detector yield of 84% and median saturation powers of (2.8, 8.0) pW with interquartile ranges of (1, 2) pW at (90, 150) GHz, respectively, falling within their targeted ranges. We measure TES normal resistances and superconducting transition temperatures on each detector wafer to be uniform within 3%, with overall central values of 7.5 m$\Omega$ and 165 mK, respectively. Results on time constants, optical efficiency, and noise performance are also presented and are consistent with achieving instrument sensitivity forecasts.
Orbital debris presents a growing risk to space operations and is becoming a significant source of contamination of astronomical images. Much of the debris population is uncatalogued, making the impact more difficult to assess. We present initial results from the first ten nights of commissioning observations with the International Liquid Mirror Telescope, in which images were examined for streaks produced by orbiting objects including satellites, rocket bodies and other forms of debris. We detected 83 streaks and performed a correlation analysis to attempt to match these with objects in the public database. 48\% of these objects were uncorrelated, indicating substantial incompleteness in the database, even for some relatively bright objects. We were able to detect correlated objects to an estimated magnitude of 14.5, and possibly about two magnitudes fainter for the faintest uncorrelated object.
The International Liquid Mirror Telescope (ILMT) project is a scientific collaboration in observational astrophysics between the Li{\`e}ge Institute of Astrophysics and Geophysics (Li{\`e}ge University, Belgium), the Aryabhatta Research Institute of observational sciencES (ARIES, Nainital, India) and several Canadian universities (British Columbia, Laval, Montr{\'e}al, Toronto, Victoria and York). Meanwhile, several other institutes have joined the project: the Royal Observatory of Belgium, the National University of Uzbekistan and the Ulugh Beg Astronomical Institute (Uzbekistan), as well as the Pozna{\'n} Observatory (Poland). The Li{\`e}ge company AMOS (Advanced Mechanical and Optical Systems) fabricated the telescope structure, which has been erected on the ARIES site in Devasthal (Uttarakhand, India). It is the first liquid mirror telescope dedicated to astronomical observations. First light was obtained on 29 April 2022 and commissioning is currently under way. In this short article, we describe and illustrate the main components of the ILMT. We also highlight the ILMT papers presented during the third BINA workshop, which discuss various aspects of the ILMT science programs.
The 4m International Liquid Mirror Telescope (ILMT) is the first optical survey telescope in India that performs zenithal observations of a 22$'$ wide strip of the sky. To determine the portion of the sky covered by the ILMT over the entire year, we represent the ILMT Field of View (FoV) in three different coordinate systems: galactic, ecliptic, and equatorial. We adopt a constant declination of $+29^{\circ}21'41.4''$ and varying right ascension (RA) ranges corresponding to the Local Sidereal Time (LST). Observations from June to September are hampered by the monsoon season. Such representations make it straightforward to locate a transient event in the ILMT FoV, enabling prompt follow-up observations with other facilities.
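Such a representation maps directly onto standard astropy calls; a small sketch in which the RA grid standing in for the LST coverage is purely illustrative:

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    # Fixed ILMT declination; RA sweeps with the local sidereal time.
    ra_grid = np.linspace(0.0, 360.0, 49)
    strip = SkyCoord(ra=ra_grid * u.deg, dec="+29d21m41.4s", frame="icrs")

    # The same strip in ecliptic and galactic coordinates.
    ecl = strip.barycentricmeanecliptic
    gal = strip.galactic
    print(gal.l.deg[:3], gal.b.deg[:3])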
The International Liquid Mirror Telescope (ILMT) is a 4-metre-class survey telescope. It achieved first light on 29 April 2022 and is now undergoing its commissioning phase. It scans the sky in a fixed $22'$ wide strip centred at a declination of $+29^{\circ}21'41.4''$ and works in Time Delay Integration (TDI) mode. We present a full catalog of sources in the ILMT strip, derived by crossmatching \textit{Gaia} DR3 with SDSS DR17 and Pan-STARRS1 (PS1) to supplement the catalog with apparent magnitudes of these sources in the $g$, $r$, and $i$ filters. These sources can serve as astrometric calibrators. The release of Gaia DR3 provides synthetic photometry in popular broadband photometric systems, including the SDSS $g$, $r$, and $i$ bands, for $\sim$220 million sources across the sky. We have used this synthetic photometry to verify our crossmatching performance and, in turn, to create a subset of the catalog with accurate photometric measurements from two reliable sources.
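The crossmatch itself can be reproduced with astropy's nearest-neighbour matching; a minimal sketch with randomly generated placeholder coordinates and a hypothetical 1 arcsec matching radius:

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    rng = np.random.default_rng(0)
    # Placeholders standing in for Gaia DR3 and SDSS DR17 sources inside
    # the ILMT strip (declination within ~11 arcmin of +29.36 deg).
    gaia = SkyCoord(ra=rng.uniform(0, 360, 1000) * u.deg,
                    dec=rng.uniform(29.178, 29.545, 1000) * u.deg)
    sdss = SkyCoord(ra=rng.uniform(0, 360, 1000) * u.deg,
                    dec=rng.uniform(29.178, 29.545, 1000) * u.deg)

    # Nearest SDSS neighbour for every Gaia source.
    idx, sep2d, _ = gaia.match_to_catalog_sky(sdss)
    matched = sep2d < 1.0 * u.arcsec
    print(f"{matched.sum()} of {len(gaia)} Gaia sources matched")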
The present article is based upon an invited talk delivered on the occasion of the inauguration of the 4m International Liquid Mirror Telescope (ILMT), which took place in Devasthal (ARIES, Uttarakhand, India) on 21 March 2023. We present hereafter a short history of liquid mirror telescopes, and in particular of the 4m ILMT, which is the first liquid mirror telescope entirely dedicated to astrophysical observations. We discuss a few preliminary scientific results and illustrate some direct CCD images taken during the first commissioning phase of the telescope. We invite the reader to refer to the series of ILMT poster papers published in these same proceedings of the BINA3 workshop for more details about the instrument, its operation, first observations, performance, and scientific results.
Dual-phase liquid-xenon time projection chambers (LXe TPCs) deploying a few tonnes of liquid are presently leading the search for WIMP dark matter. Scaling these detectors to 10-fold larger fiducial masses, while improving their sensitivity to low-mass WIMPs, presents difficult challenges in detector design. Several groups are considering a departure from current schemes, towards either single-phase liquid-only TPCs, or dual-phase detectors where the electroluminescence region consists of patterned electrodes. Here, we discuss the possible use of Thick Gaseous Electron Multipliers (THGEMs) coated with a VUV photocathode and immersed in LXe as a building block in such designs. We focus on the transfer efficiencies of ionization electrons and photoelectrons emitted from the photocathode through the electrode holes, and show experimentally that efficiencies approaching 100% can be achieved with realistic voltage settings. The observed voltage dependence of the transfer efficiencies is consistent with electron transport simulations once diffusion and charging-up effects are included.
Conformal Carter-Penrose diagrams are used for the visualization of hyperboloidal slices, which are smooth spacelike slices reaching null infinity. The focus is on the Schwarzschild black hole geometry in spherical symmetry, whose Penrose diagrams are introduced in a pedagogical way. The stationary regime involves time-independent slices. In this case, different options are given for integrating the height function -- the main ingredient for constructing hyperboloidal foliations. The dynamical regime considers slices changing in time, which are evolved together with the spacetime using the eikonal equation. It includes the relaxation of hyperboloidal Schwarzschild trumpet slices and the collapse of a massless scalar field into a black hole, for which Penrose diagrams are presented.
We calculate the holographic three-point function parameters $\mathcal A$, $\mathcal B$, $\mathcal C$ in general $d\geqslant 4$ dimensions from higher curvature gravities up to and including the quartic order. The result is valid both for massless and perturbative higher curvature gravities. It is known that in four-dimensional CFTs the $a$-charge is a linear combination of $\mathcal A$, $\mathcal B$, $\mathcal C$; our result reproduces this, but also shows that no similar relation exists for general $d > 4$. We then compute the Weyl anomaly in $d = 6$ and find that all three $c$-charges are linear combinations of $\mathcal A$, $\mathcal B$, $\mathcal C$, while the $a$-charge is not. We also find that the previously conjectured relation between $t_2$, $t_4$ and $h''$ does not hold in general massless gravities, but does hold for quasi-topological ones, and we obtain the missing coefficient.
We explore the generalized covariant entropy bound in the theory where Einstein gravity is perturbed by quadratic curvature terms, which can be viewed as the first-order quantum correction to Einstein gravity. By replacing the Bekenstein-Hawking entropy with the holographic entanglement entropy of this theory and introducing two reasonable physical assumptions, we demonstrate that the corresponding generalized covariant entropy bound is satisfied under a first-order approximation of the perturbation from the quadratic curvature terms. Our findings suggest that the entropy bound and the generalized second law of black holes are satisfied in Einstein gravity under first-order perturbations from the quadratic curvature corrections. They also imply that the generalized covariant entropy bound may still hold once quantum corrections to gravity are considered, provided that holographic entanglement entropy is used as the formula for gravitational entropy.
Three black-hole models, depending on the choice made for the identification of the cut-off parameter, are known in asymptotically safe gravity. Here we show that all three models share the same feature: a high sensitivity of the overtones to small deformations of the near-horizon zone induced by quantum corrections.
In this paper, we investigate how charge and modified terms affect the viability and stability of traversable wormhole geometry in the framework of $f(R,G)$ theory, where $R$ is the Ricci scalar and $G$ is the Gauss-Bonnet term. For this purpose, we develop a shape function through the Karmarkar condition to examine the wormhole geometry. The resulting shape function satisfies all the necessary conditions and establishes a connection between the asymptotically flat regions of the spacetime. The behavior of energy conditions and sound speed is checked in the presence of higher-order curvature terms and electromagnetic field to analyze the existence of stable traversable wormhole geometry. It is found that the traversable wormhole solutions are viable and stable in this modified theory.
Recently, massless test scalar field perturbations of the holonomy-corrected black holes [Z. Moreira et al., Phys. Rev. D 107 (2023) 10, 104016] were studied in order to estimate quantum corrections to the quasinormal spectrum of a black hole. Here we study both the fundamental mode and the overtones of scalar, electromagnetic, and Dirac fields with the help of the Leaver method and the higher-order WKB formula with Pad\'e approximants. We observe that the overtones depend on the geometry near the event horizon, while the fundamental mode is localized near the peak of the potential barrier, in agreement with previous studies. We show that, unlike a massless field, the massive one possesses arbitrarily long-lived modes. We also obtain the analytical eikonal formula for the quasinormal modes and its extension beyond the eikonal approximation as a series in powers of $1/\ell$, where $\ell$ is the multipole number.
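For orientation, the standard eikonal relation reads $\omega_{\ell n} \simeq \Omega_c\,\ell - i\,(n+\tfrac{1}{2})\,|\lambda|$ for $\ell\gg1$, where $\Omega_c$ is the angular frequency of the unstable circular null geodesic and $\lambda$ its Lyapunov exponent; the series obtained in this work adds corrections in powers of $1/\ell$ to this leading behavior.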
Cosmologies of the lower Bianchi types, i.e., except those of type VIII or IX, admit a two-dimensional Abelian subgroup of the isometry group, the $G_2$. For orthogonal perfect fluid cosmologies of almost all lower Bianchi types the $G_2$ acts orthogonally-transitively, which is related to a cessation of the oscillations observed in the higher Bianchi types. However, in orthogonal perfect fluid cosmologies of type VI$_{-1/9}$ the $G_2$ does not necessarily act orthogonally-transitively. As a consequence, the dynamics of type VI$_{-1/9}$ orthogonal perfect fluid cosmologies have the same degrees of freedom as those of the higher types VIII and IX, and their dynamics are expected to be markedly different from those of the other lower Bianchi types. In this article we take a different approach to quiescence, namely the presence of an orthogonal stiff fluid. On the one hand, this completes the analysis of the initial singularity for Bianchi cosmologies with an orthogonal stiff fluid. On the other hand, it allows us to get a grasp of the underlying dynamics, in particular the effect of orthogonal transitivity as well as possible (asymptotic) polarization conditions. In particular, we show that a generic type VI$_{-1/9}$ cosmology with an orthogonal stiff fluid has asymptotics similar to those of a generic Bianchi type VIII or IX cosmology with an orthogonal stiff fluid. The exceptions to this genericity are solutions satisfying an asymptotic polarization condition and solutions for which the $G_2$ acts orthogonally-transitively. Only in those cases may the limits of the eigenvalues of the expansion-normalized Weingarten map be negative. We also obtain a concise way to represent the dynamics, which works more generally for the exceptional type VI$_{-1/9}$ cosmologies, and we obtain a monotonic function for the case of a non-stiff orthogonal perfect fluid that is stiffer than a radiation fluid.
We show that the Milgromian acceleration of MOND and the cosmological constant can be understood and quantified as the effects of quantum fluctuations of the spin connection, which are described by precanonical quantum gravity, put forward by one of us earlier. We also show that a MOND-like modification of Newtonian dynamics at small accelerations emerges from this picture in the non-relativistic approximation.
This paper uses a newly proposed strategy, known as constant-roll warm inflation (CRWI), for finding exact inflationary solutions to the Friedmann equations in the context of the Rastall theory of gravity (RTG). The dissipative effects produced during warm inflation (WI) are studied by introducing a dissipation factor $Q=\frac{\Gamma}{3H}$, where $\Gamma$ is the dissipation coefficient. We build the model to evaluate the inflaton field, the effective potential required to produce inflation, and the entropy density. These physical quantities lead to the important inflationary observables, namely the scalar/tensor power spectrum, the scalar spectral index, the tensor-to-scalar ratio, and the running of the spectral index, for two choices of the obtained potential, $V_0=0$ and $V_0\neq0$. In this study, we focus on the effects of the theory parameter $\lambda$, the CR parameter $\beta$, and the dissipation factor $Q$ (in a high-dissipation regime for which $Q=$ constant) on inflation, and we constrain the model against the Planck TT+lowP (2013), Planck TT, TE, EE+lowP (2015), Planck 2018, and BICEP/Keck 2021 bounds to check its compatibility. The results are feasible and interesting up to the $2\sigma$ confidence level. Finally, we conclude that the CR technique produces significant changes in the early universe.
We study linear cosmological perturbations in the most general teleparallel gravity setting, where gravity is mediated by the torsion and nonmetricity of a flat connection alongside the metric. For a general linear perturbation of this geometry around a homogeneous and isotropic background geometry, we derive the irreducible decomposition of the perturbation variables, as well as their behavior under gauge transformations, i.e., infinitesimal diffeomorphisms generated by a vector field. In addition, we also study these properties for the most general set of matter variables and gravitational field equations. We then make use of these results to construct gauge-invariant perturbation variables, using a general approach based on gauge conditions. We further calculate these quantities also in the metric and symmetric teleparallel geometries, where nonmetricity or torsion is imposed to vanish. To illustrate our results, we derive the energy-momentum-hypermomentum conservation equations for both the cosmological background and the linear perturbations. As another example, we study the propagation of tensor perturbations in the $f(G)$, $f(T)$ and $f(Q)$ classes of theories.
The article reviews the statistical theory of signal detection as applied to the analysis of deterministic gravitational-wave signals in detector noise. Statistical foundations for the theory of signal detection and parameter estimation are presented. Several tools needed both for the theoretical evaluation of optimal data analysis methods and for their practical implementation are introduced. They include the optimal signal-to-noise ratio, the Fisher matrix, false alarm and detection probabilities, the $\mathcal{F}$-statistic, template placement, and the fitting factor. These tools apply to the case of signals buried in stationary, Gaussian noise. Algorithms to efficiently implement the optimal data analysis techniques are discussed. Formulas are given for a general gravitational-wave signal that includes as special cases most of the deterministic signals of interest.
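For reference, the optimal signal-to-noise ratio of a template $h$ in stationary Gaussian noise with one-sided spectral density $S_n(f)$ is the standard matched-filter expression $\rho^2 = 4\int_0^\infty |\tilde{h}(f)|^2/S_n(f)\, df$, and the Fisher matrix controlling parameter-estimation accuracy is $\Gamma_{ij} = (\partial_i h \,|\, \partial_j h)$ in the same noise-weighted inner product.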
We study the motion of a gyroscope located far away from an isolated gravitational source in an asymptotically flat spacetime. As seen from a local frame tied to distant stars, the gyroscope precesses when gravitational waves cross its path, resulting in a net "orientation memory" that carries information on the wave profile. At leading order in the inverse distance to the source, the memory consists of two terms: the first is linear in the metric perturbation and coincides with the spin memory effect, while the second is quadratic and measures the net helicity of the wave burst. Both are closely related to symmetries of the gravitational radiative phase space at null infinity: spin memory probes superrotation charges, while helicity is the canonical generator of local electric-magnetic duality on the celestial sphere.
We obtain a new class of stationary axisymmetric spacetimes by using the G\"urses-G\"ursey metric with an appropriate mass function in order to generate a rotating core of matter that may be smoothly matched to the exterior Kerr metric. The same stationary spacetimes may be obtained by applying a slightly modified version of the Newman-Janis algorithm to a nonrotating spherically symmetric seed metric. The starting spherically symmetric configuration represents an anisotropic de Sitter-type fluid whose radial pressure $p_r$ satisfies an equation of state of the form $p_r=-\rho$, where the energy density $\rho$ is chosen to be the Tolman type-VII energy density [R. C. Tolman, Phys. Rev. {\bf 55}, 364 (1939)]. The resulting rotating metric is then smoothly matched to the exterior Kerr metric, and the main properties of the obtained geometries are investigated. All the solutions considered in the present study are regular in the sense that they are free of curvature singularities. Depending on the relative values of the total mass $m$ and the rotation parameter $a$, the resulting stationary spacetimes represent different kinds of rotating compact objects, such as regular black holes, extremal regular black holes, and regular starlike configurations.
The Buchdahl star is the limiting-compactness object without a horizon, the limit being indicated by saturation of the Buchdahl bound. It is in general defined by the potential felt by a radially falling timelike particle in the field of a static object, $\Phi(R) = 4/9$; a black hole is similarly characterized by $\Phi(R)=1/2$, which defines the horizon. Remarkably, in terms of gravitational and non-gravitational energy, the Buchdahl star is alternatively defined by its gravitational energy being half of its non-gravitational energy, while for a black hole the two are equal. When an infinitely dispersed system of bare mass $M$ collapses under its own gravity to radius $R$, the total energy contained inside $R$ is $E_{tot}(R)=M-E_{grav}(R)$. That is, the energy inside the object is increased by an amount equivalent to the gravitational energy lying outside, which manifests as internal energy in the interior. If the interior consists of free particles in motion interacting only through gravity, as is the case for Vlasov kinetic matter, the internal (gravitational) energy could be thought of as kinetic energy, and the defining condition for the Buchdahl star would then be kinetic (gravitational) energy equal to half of the non-gravitational (potential) energy. Consequently, it could be envisaged that the equilibrium of the Buchdahl star interior is governed by a relation like the celebrated virial theorem (average kinetic energy equal to half of average potential energy). On the same count, black hole equilibrium is governed by the equality of gravitational and non-gravitational energy.
We show that the Kontsevich-Segal (KS) criterion, applied to the complex saddles that specify the semiclassical no-boundary wave function, acts as a selection mechanism on inflationary scalar field potentials. Completing the observable phase of slow-roll inflation with a no-boundary origin, the KS criterion effectively bounds the tensor-to-scalar ratio of cosmic microwave background fluctuations to be less than 0.08, in line with current observations. We trace the failure of complex saddles to meet the KS criterion to the development of a tachyon in their spectrum of perturbations.
The Friedmann equation, augmented with an additional term that effectively takes on the role of dark energy, is demonstrated to be an exact solution to the recently proposed gravitational theory named "conformal Killing gravity." This theory does not explicitly incorporate dark energy. This finding suggests that there is no necessity to postulate the existence of dark energy as an independent physical entity. The dark energy derived from this theory is characterized by a specific equation of state parameter, denoted as $\omega$, which is uniquely determined to be $-5/3$. If this effective dark energy is present, typically around 5% of the total energy density at the present time, and under the assumption of density parameters for matter and the cosmological constant, $\Omega_{\rm m}\sim 0.25$ and $\Omega_\Lambda \sim 0.7$, respectively, the expansion of the universe at low redshifts ($z < 1.5$) can exceed expectations, while the expansion at $z > 1.5$ remains unchanged. This offers a potential solution to the Hubble tension problem. Alternatively, effective dark energy could be a dominant component in the present-day universe. In this scenario, there is also the potential to address the Hubble tension, and furthermore, it resolves the coincidence problem associated with the cosmological constant.
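The modified expansion history is easy to sketch numerically: a fluid with $\omega=-5/3$ dilutes as $(1+z)^{3(1+\omega)}=(1+z)^{-2}$, so its contribution fades toward high redshift. A rough illustration with the quoted density parameters, where flatness and the exact values are assumptions:

    import numpy as np

    def E(z, om=0.25, ol=0.70, ox=0.05, w=-5.0 / 3.0):
        # Dimensionless Hubble rate H(z)/H0 with an extra fluid of
        # equation of state w; w = -5/3 gives rho_x ~ (1+z)^-2.
        return np.sqrt(om * (1 + z)**3 + ol + ox * (1 + z)**(3 * (1 + w)))

    z = np.array([0.0, 0.5, 1.0, 1.5, 3.0])
    # Compare against a plain LCDM model with the same present-day budget.
    print(np.round(E(z) / E(z, ox=0.0, ol=0.75), 4))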
We consider a quantum Otto engine using an Unruh-DeWitt particle detector model which interacts with a quantum scalar field in curved spacetime. We express a generic condition for extracting positive work in terms of the effective temperature of the detector. This condition reduces to the well-known positive work condition in the literature under the circumstances where the detector reaches thermal equilibrium with the field. We then evaluate the amount of work extracted by the detector in two scenarios: an inertial detector in a thermal bath and a circulating detector in the Minkowski vacuum, which is inspired by the Unruh quantum Otto engine.
Various studies have shown that the late acceleration of the universe can be caused by the bulk viscosity associated with dark matter. Recently, however, it was indicated that a cosmological constant is essential for maintaining Near Equilibrium Conditions (NEC) for the bulk viscous matter during the accelerated expansion of the universe. In the present study, we investigate a model of the universe composed of mixed dark matter components, with viscous dark matter (vDM) and inviscid cold dark matter (CDM) as its constituents, in the context of $f(R,T)$ gravity, and show that the model predicts late acceleration while satisfying the NEC throughout the evolution, without a cosmological constant. We also compare the model predictions with combined Type Ia Supernovae and observational Hubble data sets and thereby estimate the values of the various cosmological parameters.
The time-energy uncertainty relation (TEUR) plays a fundamental role in quantum mechanics, as it allows one to grasp peculiar aspects of a variety of phenomena based on very general principles and symmetries of the theory. Using the Mandelstam-Tamm method, the TEUR has recently been derived for neutrino oscillations by connecting the uncertainty on the neutrino energy with the characteristic time scale of oscillations. Interestingly enough, the suggestive interpretation of neutrinos as unstable-like particles has proved to emerge naturally in this context. Further aspects have later been discussed in semiclassical gravity, by computing corrections to the neutrino energy uncertainty in a generic stationary curved spacetime, and in quantum field theory, where the clock observable turns out to be identified with the non-conserved flavor charge operator. In the present work, we give an overview of the above achievements. In particular, we analyze the implications of the TEUR and explore the impact of gravitational and non-relativistic effects on the standard condition for neutrino oscillations.
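In the Mandelstam-Tamm form used here, the TEUR reads $\Delta E\, \Delta t \geq \hbar/2$ with $\Delta t \equiv \Delta O / |d\langle \hat{O}\rangle/dt|$ for a chosen clock observable $\hat{O}$; for oscillating neutrinos the (non-conserved) flavor charge plays the role of $\hat{O}$, tying $\Delta t$ to the oscillation time scale.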
The general relativistic gravitomagnetic clock effect, in its simplest form, consists of the non-vanishing difference in the orbital periods of two counter-orbiting objects moving in opposite directions along circular orbits lying in the equatorial plane of a central rotating source. We briefly review both the theoretical and observational aspects of such an intriguing consequence of Einstein's theory of gravitation.
We present a comprehensive study exploring the relationship between transport properties and measures of quantum entanglement in the Einstein-Maxwell-Axion-Horndeski theory. Using holographic duality, we study the entanglement measures, holographic entanglement entropy (HEE) and entanglement wedge cross-section (EWCS), and the transport coefficients for this model, and analyze their dependence on the free parameters, which we classify into an action parameter, observable parameters, and an axion factor. We find contrasting behaviors between HEE and EWCS with respect to the observable parameters (charge and temperature) and the axion factor, indicating that they capture different types of quantum correlations. We also find that HEE exhibits a positive correlation with both charge and thermal excitations, whereas EWCS exhibits a negative correlation with charge-related conductivities and thermal fluctuations. Furthermore, we find that the Horndeski coupling term, as the modification to the standard gravity theory, does not change the qualitative behaviors of the conductivities or the entanglement measures.
We obtain a \textit{novel} model of oscillating non-singular cosmology on the spatially flat Randall-Sundrum (RS) II brane. At early times, the universe is dominated by a scalar field with an inflationary emergent potential $V(\phi)=A(e^{B\phi}-1)^2$, $A$ and $B$ being constants. Interestingly, we find that such a scalar field can source a non-singular bounce, replacing the big bang on the brane. The turnaround again happens naturally on the brane dominated by phantom dark energy (favoured by observations \cite{E1,E2,E3} at late times), thus avoiding the big rip singularity and leading up to the following non-singular bounce via a contraction phase. There is a smooth non-singular transition of the brane universe through both the bounce and the turnaround, leading to alternating expanding and contracting phases. This is the \textit{first} model in which a single braneworld of positive tension can be made to recycle, as discussed in detail in the concluding section.
We present comprehensive global fits of the SMEFT under the $\textit{minimal}$ minimal flavour violation (MFV) hypothesis, i.e. assuming that only the flavour-symmetric and CP-invariant operators are relevant at the high scale. The considered operator set is determined by theory rather than by the datasets used. We establish global limits on these Wilson coefficients using leading-order and next-to-leading-order SMEFT predictions for electroweak precision observables, Higgs, top, flavour and dijet data, as well as measurements from parity-violation experiments and lepton scattering. Our investigations reveal an intriguing crosstalk among different observables, underscoring the importance of combining diverse observables from various energy scales in global SMEFT analyses.
Dark Matter models that employ a vector portal to a dark sector are usually treated as an effective theory that incorporates kinetic mixing of the photon with a new U(1) gauge boson, with the $Z$ boson integrated out. However, a more complete theory must employ the full SU(2)$_L\times $U(1)$_Y \times $U(1)$_{Y^\prime}$ gauge group, in which kinetic mixing of the $Z$ boson with the new U(1) gauge boson is taken into account. The importance of the more complete analysis is demonstrated by an example where the parameter space of the effective theory that yields the observed dark matter relic density is in conflict with a suitably defined electroweak $\rho$-parameter that is deduced from a global fit to $Z$ physics data.
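Schematically, the more complete setup extends the kinetic mixing to the hypercharge field before electroweak symmetry breaking, $\mathcal{L} \supset -\frac{1}{4}B_{\mu\nu}B^{\mu\nu} - \frac{1}{4}X_{\mu\nu}X^{\mu\nu} - \frac{\epsilon}{2\cos\theta_W}B_{\mu\nu}X^{\mu\nu}$ (a standard parameterization, with $X_\mu$ the new U(1)$_{Y'}$ boson), so that after diagonalization the new state mixes with both the photon and the $Z$ rather than with the photon alone.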
We study sub-GeV dark matter (DM) particles that may annihilate or decay into SM particles, producing an exotic injection component in the Milky Way that leaves an imprint in both photon and cosmic-ray (CR) fluxes. Specifically, the DM particles may annihilate or decay into $e^+e^-$, $\mu^+\mu^-$ or $\pi^+\pi^-$ and may radiate photons through their $e^\pm$ products. The resulting $e^\pm$ products can be directly observed in probes such as {\sc Voyager 1}. Alternatively, the $e^\pm$ products may produce bremsstrahlung radiation and upscatter the low-energy galactic photon fields via the inverse Compton process, generating a broad emission from $X$-ray to $\gamma$-ray energies observable in experiments such as {\sc Xmm-Newton}. By including best-fit CR propagation and diffusion parameters, we find a significant improvement in the DM annihilation and decay constraints from {\sc Xmm-Newton}, excluding thermally averaged cross sections of $10^{-31}$ cm$^3$\,s$^{-1} \lesssim \langle \sigma v\rangle \lesssim10^{-26}$ cm$^3$\,s$^{-1}$ and decay lifetimes of $10^{26}\,\textrm{s}\lesssim \tau \lesssim 10^{28}\,\textrm{s}$, respectively. These are the strongest astrophysical constraints for DM in the mass range from 1 MeV to a few GeV, and they even surpass cosmological bounds across a wide range of masses.
The quark-gluon plasma (QGP) is an exotic phase of matter, composed of deconfined quarks and gluons, that is briefly created in heavy-ion collisions (HIC) at the LHC and at RHIC. High-energy, self-collimated structures of final-state particles also created in HIC, called jets, probe the QGP, piercing through it on their way to the particle detector. Quantum field theory at finite temperature, or thermal field theory, is then an extremely powerful tool, capable of analytically quantifying how such a high-energy object interacts with a weakly coupled thermal bath. In this thesis, we work towards the computation of corrections to two quantities that dictate how jets are quenched by the QGP. The first is the transverse momentum broadening coefficient, which describes how the jet diffuses in transverse momentum space through its interaction with the medium. We focus on the computation of logarithmically enhanced corrections, carefully showing how the thermal scale affects the logarithmic phase space. The second is the asymptotic mass, which can be thought of as a shift in the jet's dispersion relation as it undergoes forward scattering with the medium. We complete a matching calculation, which rids the mass's classical corrections of any unphysical divergences, and we begin the computation of its full two-loop quantum corrections.
A model-independent expression for the Dalitz plot of the semileptonic decays of a neutral kaon $K_{\mu 3}^0$, including radiative corrections to order $\mathcal{O}[(\alpha / \pi )(q/M_1)]$, where $q$ is the four-momentum transfer and $M_1$ is the mass of the decaying kaon, is presented. In this paper the emitted muon is considered to be polarized, so the analysis is centered on numerically evaluating the radiative corrections to the longitudinal, transverse, and normal polarization components of the muon. The model dependence of the radiative corrections is kept in general form within this approximation, which is useful for model-independent experimental analyses. The final expressions, with the triple integration over the bremsstrahlung photon variables, are ready to be performed numerically. The radiative corrections to the components of the muon polarization are found to be very small compared to their respective uncorrected values.
Within a model with string fusion and the formation of string clusters, we study the correlations between multiplicities in two separated rapidity windows in pp collisions at LHC energies and compare the results with data from the ALICE collaboration at CERN. The simulation is carried out within a Monte Carlo implementation of the colour quark-gluon string model. String fusion effects are taken into account by introducing a finite lattice in the impact-parameter plane. The dependence of the correlation coefficient between multiplicities in two rapidity observation windows on the distance between these windows is calculated for four values of their width and three values of the initial energy. It is shown that the model with string clusters describes the main features of the behavior of the correlation coefficient: its increase with increasing initial energy, its decrease with increasing rapidity distance between the observation windows, and its nonlinear dependence on the width of the rapidity window.
We study di-Higgs and tri-Higgs boson production at a muon collider as a function of the modification of the muon Yukawa coupling resulting from new physics parameterized by the dimension-6 mass operator. We show that the di-Higgs signal can be used to observe a deviation in the muon Yukawa coupling at the 10% level for $\sqrt{s} = 10$ TeV and at the 3.5% level for $\sqrt{s} = 30$ TeV. The tri-Higgs signal improves the sensitivity dramatically with increasing $\sqrt{s}$, reaching 0.8% at $\sqrt{s} = 30$ TeV. We also study all processes involving Goldstone bosons originating from the same operator, discuss possible model dependence resulting from other operators of dimension 6 and higher, and identify multi-Higgs production and one additional process as golden channels. We further extend the study to the type-II two-Higgs-doublet model and show that di-Higgs and tri-Higgs signals involving heavy Higgs bosons can be enhanced by a factor of $(\tan \beta)^6$, which results in potential sensitivity to a modified muon Yukawa coupling at the $10^{-6}$ level already at a $\sqrt{s} = 10$ TeV muon collider. The results can easily be customized for other extensions of the Higgs sector.
We construct the effective Hamiltonian for hadronic parity violation in strangeness-conserving ($\Delta S=0$) processes at next-to-leading order (NLO) in QCD, for all isosectors and at a renormalization scale of 2 GeV, thus extending our earlier leading-order (LO) analysis. Hadronic parity violation, studied in the context of the low-energy interactions of nucleons and nuclei, exposes the complex interplay of weak and strong interactions in these systems, and thus supports our extension to NLO. Here we exploit the flavor-blind nature of QCD interactions to construct the needed anomalous dimension matrices from those computed in flavor physics, which we then use to refine our effective Hamiltonian and, finally, our predicted parity-violating meson-nucleon coupling constants, finding improved agreement with few-body experiments.
Axion-like particles (ALPs), the pseudo Nambu-Goldstone bosons associated with the spontaneous breaking of a global symmetry, have emerged as promising dark matter candidates. Conventionally, in the context of the misalignment mechanism, the non-thermally produced ALPs initially stay frozen due to Hubble friction and, at a later stage (before matter-radiation equality), begin to oscillate at characteristic frequencies set by their masses, behaving like cold dark matter. In this work, we study the influence of electroweak symmetry breaking (EWSB), through a higher-order Higgs portal interaction, on the evolution of ALPs. Such an interaction is found to contribute partially to the ALP mass during EWSB, thereby modifying the oscillation frequencies during EWSB as well as altering the usual correlation between the scale of symmetry breaking and the ALP masses. The novelty of this work lies in broadening the parameter space that satisfies the relic abundance, so that it can be probed in the near future by a wide range of experiments.
Masses of heavy mesons, tetraquarks and pentaquarks are computed in a potential model. Tetraquarks are studied as bound states of a diquark and an antidiquark. Pentaquarks are constructed from a series of two-body interactions between the four quarks and the antiquark.
Baryogenesis requires large CP-violating (CPV) phases, while such phases are constrained by electric dipole moment (EDM) experiments. In the general two-Higgs-doublet model (g2HDM), without $Z_{2}$ symmetry, electroweak baryogenesis (EWBG) can be achieved while evading EDM bounds. In this study, we explore the g2HDM contributions to the electron EDM (eEDM) and neutron EDM (nEDM) and review future experimental prospects. In particular, we show that the combined eEDM-nEDM results not only provide a crucial bound on the top-Yukawa-driven baryogenesis explanation in the g2HDM, but are also poised for discovery as experimental precision increases within the next decade or so.
We present a calculation of inclusive diboson (WZ, ZZ, WW) processes at the LHC in the presence of intermediate polarised weak bosons decaying leptonically, matching next-to-leading-order accuracy in QCD with parton-shower effects. Starting from recent developments in polarised-boson simulation based on the helicity selection at the amplitude level, we have carried out the implementation in the POWHEG-BOX-RES framework, and validated it against independent fixed-order calculations. A phenomenological analysis in realistic LHC setups, as well as a comparison with recent ATLAS measurements, are presented.
The weak radiative decays $D^0\to V\gamma$ with $V=\bar{K}^{*0}$, $\phi$, $\rho^0$, and $\omega$ are systematically studied in the vector meson dominance (VMD) model. This allows us to distinguish the short-distance mechanisms, which can be described by tree-level transitions in the non-relativistic constituent quark model, from the long-distance mechanisms, which are related to final-state interactions (FSIs). We find that FSI effects play a crucial role in $D^0\to V\gamma$ and that SU(3) flavor symmetry provides a natural constraint on the relative phase between the short- and long-distance transition amplitudes. Our analysis suggests that $D$ meson weak radiative decays can serve as a good case for investigating non-perturbative QCD mechanisms in the charm quark mass region.
We study the heavy quarkonium ($c\bar{c}$ and $b\bar{b}$) spectra using the Cornell potential in a non-relativistic framework, with spin-dependent corrections corresponding to the spin-spin, spin-orbit, and tensor interactions added perturbatively. We predict the masses of low-lying and excited heavy quarkonium states up to $n=6$ and $L=2$. We analyze the radial and orbital Regge trajectories of both systems and investigate their departure from linearity. Further, we estimate the wave function at the origin and, consequently, predict the decay widths of charmonium and bottomonium states annihilating to leptons and photons. We also investigate the effect of the scale dependence of the wave function at the origin and of the strong coupling constant on the predicted annihilation decay widths. We compare our predictions for the heavy quarkonium spectra and decay widths with experimental results and other theoretical models.
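As an illustration of the kind of calculation involved, the following minimal sketch (not the authors' code; parameter values are assumed for illustration, in natural GeV units) solves the radial Schrödinger equation for a Cornell potential by finite differences:

```python
# Minimal sketch: radial Schrodinger equation -u''/(2*mu) + V(r)*u = E*u for the
# Cornell potential V(r) = -4*alpha_s/(3r) + sigma*r, discretized on a uniform
# grid. Parameter values are illustrative assumptions, not those of the paper.
import numpy as np
from scipy.linalg import eigh_tridiagonal

alpha_s, sigma, mu = 0.4, 0.18, 0.75      # assumed; mu ~ m_c/2 for charmonium
N, r_max = 4000, 20.0                     # grid points, box size in GeV^-1
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]

V = -4.0 * alpha_s / (3.0 * r) + sigma * r
diag = 1.0 / (mu * h**2) + V              # finite-difference kinetic term, diagonal
off = -1.0 / (2.0 * mu * h**2) * np.ones(N - 1)

E, u = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
print("lowest S-wave eigenvalues (GeV):", E)   # add 2*m_q to get meson masses
```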
Recently, the LHCb collaboration reported a study of the $J/\psi\eta$ mass spectrum in $B^+\rightarrow J/\psi\eta K^+$ decays. The results are reported as ratios of branching fractions, $F_{X}\equiv\frac{\mathcal{B}r(B^+\rightarrow XK^+)\times\mathcal{B}r(X\rightarrow J/\psi\eta)}{\mathcal{B}r(B^+\rightarrow \psi(2S) K^+)\times\mathcal{B}r(\psi(2S)\rightarrow J/\psi \eta)}$ for $X=\psi_2(3823),\psi(4040)$, measured to be $(5.95^{+3.38}_{-2.55})\times10^{-2}$ and $(40.60\pm11.20)\times10^{-2}$, respectively. The corresponding products $B_{X}\equiv\mathcal{B}r(B^+\rightarrow XK^+)\times\mathcal{B}r(X\rightarrow J/\psi\eta)$ are $B_{\psi_2(3823)}=(1.25^{+0.71}_{-0.53}\pm0.04)\times10^{-6}$ and $B_{\psi(4040)}=(8.53\pm2.35\pm0.30)\times10^{-6}$. For the first time, we calculate these branching fractions using factorization. According to our calculations, $F_{\psi_{2}(3823)}=(3.02\pm0.09)\times10^{-2}$ and $F_{\psi(4040)}=(35.32\pm1.83)\times10^{-2}$ at $\mu=m_b/2$. We estimate $B_{\psi_{2}(3823)}=(0.69\pm0.24)\times10^{-6}$ at $\mu=m_b/2$ and $B_{\psi(4040)}=(6.45\pm0.27)\times10^{-6}$ at $\mu=2m_b$. The results are consistent with the experimental measurements.
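Given $F_X$ and the branching fractions of the normalization mode, the measured $B_X$ values can be cross-checked; the sketch below assumes approximate PDG values for the two normalization inputs (they are not quoted in the abstract):

```python
# Hedged consistency check: B_X = F_X * Br(B+ -> psi(2S) K+) * Br(psi(2S) -> J/psi eta).
# The two inputs below are approximate PDG values assumed for illustration.
br_B_psi2S_K = 6.24e-4        # Br(B+ -> psi(2S) K+), approx. PDG
br_psi2S_jpsi_eta = 3.37e-2   # Br(psi(2S) -> J/psi eta), approx. PDG
norm = br_B_psi2S_K * br_psi2S_jpsi_eta

for name, F in [("psi_2(3823)", 5.95e-2), ("psi(4040)", 40.60e-2)]:
    # reproduces ~1.25e-06 and ~8.54e-06, matching the quoted central values
    print(f"B_{name} ~ {F * norm:.2e}")
```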
The effective potential is a widely used phenomenological tool to investigate phase transitions occurring in the early Universe at finite temperature. In the standard perturbative treatment, the potential becomes complex in some region of the background field values due to the non-convex nature of the classical potential in models with spontaneous symmetry breaking. The imaginary part makes minimization of the potential impossible when, at finite temperature, the absolute minimum lies in the complex region. In this talk we introduce a simple method, based on the optimized perturbation theory scheme, to calculate an effective potential that is fully real. We apply the method to models that extend the Standard Model with an additional singlet scalar.
Recent BESIII data on radiative $J/\psi$ decays from $\sim 10^{10}$ $J/\psi$ samples should significantly advance our understanding of the controversial nature of $\eta(1405/1475)$. This motivates us to develop a three-body unitary coupled-channel model for radiative $J/\psi$ decays to three-meson final states of any partial wave ($J^{PC}$). The basic building blocks of the model are bare resonance states such as $\eta(1405/1475)$ and $f_1(1420)$, and $\pi K$, $K\bar{K}$, and $\pi\eta$ two-body interactions that generate resonances such as $K^*(892)$, $K^*_0(700)$, and $a_0(980)$. The model reasonably fits $K_SK_S\pi^0$ Dalitz plot pseudo data generated from the BESIII $J^{PC}=0^{-+}$ amplitude for $J/\psi\to\gamma K_SK_S\pi^0$. The experimental branching ratios of $\eta(1405/1475)\to\eta\pi\pi$ and $\eta(1405/1475)\to\gamma\rho$ relative to that of $\eta(1405/1475)\to K\bar{K}\pi$ are simultaneously fitted. Our $0^{-+}$ amplitude is analytically continued to find three poles, two of which correspond to $\eta(1405)$ on different Riemann sheets of the $K^*\bar{K}$ channel, and the third to $\eta(1475)$. This is the first pole determination of $\eta(1405/1475)$ and, furthermore, the first-ever pole determination from analyzing experimental Dalitz plot distributions with a manifestly three-body unitary coupled-channel framework. Process-dependent $\eta\pi\pi$, $\gamma\pi^+\pi^-$, and $\pi\pi\pi$ lineshapes of $J/\psi\to\gamma(0^{-+})\to \gamma(\eta\pi\pi)$, $\gamma(\gamma\rho)$, and $\gamma(\pi\pi\pi)$ are predicted and are in reasonable agreement with data. A triangle singularity is shown to play a crucial role in causing the large isospin violation in $J/\psi\to\gamma(\pi\pi\pi)$.
We present recent results in semileptonic and non-leptonic exclusive $B_c$ decays to charmonium states both in $S$-wave, $J/\psi$ and $\eta_c$, and in $P$-wave, $\chi_{cJ}$ and $h_c$. The analysis, based on the heavy quark spin symmetry (HQSS), produces relations among form factors that parametrize the hadronic matrix elements in the amplitudes of the decays. These relations are helpful to control the hadronic uncertainty affecting these processes. Furthermore, $B_c$ decays allow us to get hints on the structure of states like $\chi_{c1}(3872)$, whose exotic or ordinary charmonium nature is debated.
The generalized fractional derivative (GFD) version of the parametric Nikiforov-Uvarov method is employed. The energy eigenvalues and the total normalized wave function are obtained in terms of Jacobi polynomials for the proposed Coulomb plus screened exponential hyperbolic potential (CPSEHP). The suggested potential works best at lower values of the screening parameter. The resulting energy eigenvalue is given in a simple form and is extended to investigate thermal properties and superstatistics, presented in terms of the partition function $Z$ and other thermodynamic quantities such as the vibrational mean energy $U$, vibrational specific heat capacity $C$, vibrational entropy $S$, and vibrational free energy $F$. The overall behavior of the partition function and the other thermodynamic quantities is determined for both the thermal properties and the superstatistics, and a comparison with the previous literature is presented. The classical case is recovered from the fractional case at $\alpha=\beta=1$. We conclude that the fractional parameter plays a critical role in the thermal properties and superstatistics of the present model.
We extract the top-quark mass value in the on-shell renormalization scheme from the comparison of theoretical predictions for $pp \rightarrow t\bar{t} + X$ at next-to-next-to-leading order (NNLO) QCD accuracy with experimental data collected by the ATLAS and CMS collaborations for absolute total, normalized single-differential and double-differential cross-sections during Run 1, Run 2 and the ongoing Run 3 at the Large Hadron Collider (LHC). For the theory computations of heavy-quark pair production we use the MATRIX framework, interfaced to PineAPPL for the generation of grids of theory predictions, which can be used efficiently a posteriori during the fit, performed within xFitter. We take several state-of-the-art parton distribution functions (PDFs) as input for the fit and evaluate their associated uncertainties, as well as the uncertainties arising from renormalization and factorization scale variation. Fit uncertainties related to the datasets are also part of the extracted uncertainty of the top-quark mass and turn out to be of a size similar to the combined scale and PDF uncertainty. Fit results from different PDF sets agree with each other within the 1$\sigma$ uncertainty, whereas some datasets related to $t\bar{t}$ decay in different channels (dileptonic vs. semileptonic) point towards top-quark mass values in slight tension with each other, although still compatible within $2.5 \sigma$. Our results are compatible with the PDG 2022 top-quark pole-mass value. Our work opens the way towards more complex simultaneous NNLO fits of PDFs, the strong coupling $\alpha_s(M_Z)$ and the top-quark mass, using the currently most precise experimental data on $t\bar{t} + X$ total and multi-differential cross-sections from the LHC.
We discuss $E_6$ based extensions of the Standard Model (SM) containing two varieties of superheavy metastable cosmic strings (CSs) that respectively have neutral and electrically charged current carriers. We employ an extended version of the velocity-dependent one-scale (VOS) model, recently discussed by some authors, to estimate the gravitational wave (GW) spectrum emitted by metastable strings with a dimensionless string tension $G \mu \approx 10^{-6}$ that carry a right-handed neutrino (RHN) current. We find that with a low to moderate amount of current, the spectrum is compatible with the LIGO O3 run and also consistent at the 1$\sigma$ level with the recent PTA signals.
We explore the limit at which the effective baryonic Y-string model of the junction approaches mesonic string-like behavior. We calculate and compare the numerical values of the static potential and energy-density correlators of diquark-quark and quark-antiquark configurations. The gauge model is pure Yang-Mills $SU(3)$ lattice gauge theory at coupling $\beta=6.0$ and finite temperature. The diquark setup is approximated as two quarks confined within a sphere of radius $0.1$ fm. The lattice data for the potential and energy show that the string binding the diquark-quark configuration behaves identically to the quark-antiquark confining string. However, as the temperature increases into a small enough neighborhood of the critical point $T_{c}$, the gluonic similarities between the two systems no longer manifest at either short or intermediate distance scales $R<1.0$ fm. A comparison of the potential and the second moment of the action-density correlators for the two systems shows significant splitting, suggesting that persisting decoupled baryonic states overlap with the mesonic spectrum. The baryonic junction model for the potential and the profile returns a good fit to the numerical lattice data of the diquark-quark arrangement. Near the critical point, however, the mesonic string displays large deviations from fits to the corresponding quark-antiquark data.
Modifications of large transverse momentum single hadron, dihadron, and $\gamma$-hadron spectra in relativistic heavy-ion collisions are direct consequences of parton-medium interactions in the quark-gluon plasma (QGP). The interaction strength and underlying dynamics can be quantified by the jet transport coefficient $\hat{q}$. We carry out the first global analysis constraining $\hat{q}$ using a next-to-leading order pQCD parton model with higher-twist parton energy loss, combining world experimental data on single hadron, dihadron, and $\gamma$-hadron suppression at both RHIC and LHC energies over a wide range of centralities. The global Bayesian analysis using information field (IF) priors provides the most stringent constraint on $\hat q(T)$. We demonstrate in particular the progressive constraining power of the IF Bayesian analysis on the strong temperature dependence of $\hat{q}$ using data from different centralities and colliding energies. We also discuss the advantage of using both inclusive and correlation observables with different geometric biases. As a verification, the obtained $\hat{q}(T)$ is shown to describe data on single hadron anisotropy at high transverse momentum well. Predictions for future jet quenching measurements in oxygen-oxygen collisions are also provided.
In this work, we consider an extension of the Standard Model consisting of a massive vector doublet under SU(2)$_L$ and a left-handed heavy neutral lepton. We study the production of these exotic leptons together with a same-flavor opposite-sign standard lepton pair and jets, considering Drell-Yan and vector boson fusion as independent cases. We find that, for the latter, the dilepton angular distribution differs enough from the background to serve as a smoking gun for our model. Based on this, we establish limits on the parameter space using previous experimental searches in this final state.
We revisit the short-range baryon-baryon potentials in the flavor SU(3) sector, using the constituent quark model. We employ the color Coulomb, linear confining, and color magnetic forces between two constituent quarks, and solve the three-quark Schr\"{o}dinger equation using the Gaussian expansion method to evaluate the wave functions of the octet $( N , \Lambda , \Sigma , \Xi )$ and decuplet $( \Delta , \Sigma ^{\ast} , \Xi ^{\ast} , \Omega )$ baryons. We then solve the six-quark equation using the resonating group method and systematically calculate equivalent local potentials for the $S$-wave two-baryon systems which reproduce the relative wave functions of two baryons in the resonating group method. As a result, we find that the flavor antidecuplet states with total spin $J = 3$, namely, $\Delta \Delta$, $\Delta \Sigma ^{\ast}$, $\Delta \Xi ^{\ast}$-$\Sigma ^{\ast} \Sigma ^{\ast}$, and $\Delta \Omega$-$\Sigma ^{\ast} \Xi ^{\ast}$ systems, have attractive potentials sufficient to generate dibaryon bound states as hadronic molecules. In addition, the $N \Omega$ system with $J = 2$ in coupled channels has a strong attraction and forms a bound state. We also make a comparison with the baryon-baryon potentials from lattice QCD simulations and try to understand the behavior of the potentials from lattice QCD simulations.
Dark matter particles can form halos gravitationally bound to massive astrophysical objects. The Earth could have such a halo where depending on the particle mass, the halo either extends beyond the surface or is confined to the Earth's interior. We consider the possibility that if dark matter particles are coupled to neutrinos, then neutrino oscillations can be used to probe the Earth's dark matter halo. In particular, atmospheric neutrinos traversing the Earth can be sensitive to a small size, interior halo, inaccessible by other means. Depending on the halo mass and neutrino energy, constraints on the dark matter-neutrino couplings are obtained from the halo corrections to the neutrino oscillations.
We propose a new method to solve the relativistic hydrodynamic equations based on implicit Runge-Kutta methods with a locally optimized fixed-point iterative solver. For numerical demonstration, we implement our idea for ideal hydrodynamics using the one-stage Gauss-Legendre method as an implicit method. The accuracy and computational cost of our new method are compared with those of explicit ones for the (1+1)-dimensional Riemann problem, as well as the (2+1)-dimensional Gubser flow and event-by-event initial conditions for heavy-ion collisions generated by TrENTo. We demonstrate that the solver converges with only one iteration in most cases, and as a result, the implicit method requires a smaller computational cost than the explicit one at the same accuracy in these cases. By showing a relationship between the one-stage Gauss-Legendre method with the iterative solver and the two-step Adams-Bashforth method, we argue that our method benefits from both the stability of the former and the efficiency of the latter.
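The underlying scheme is easy to illustrate: the one-stage Gauss-Legendre method is the implicit midpoint rule, and its stage equation can be solved by fixed-point iteration. Below is a minimal sketch on a toy ODE (not the authors' hydrodynamics solver; all parameters assumed):

```python
# Minimal sketch: one step of the one-stage Gauss-Legendre (implicit midpoint)
# method for u' = f(u),
#   k = f(u + dt/2 * k),  u_next = u + dt * k,
# with the stage equation solved by fixed-point iteration. Note the plain
# fixed-point map contracts only while (dt/2)*L < 1 for Lipschitz constant L;
# the paper's locally optimized solver is designed to converge in ~1 iteration.
import numpy as np

def implicit_midpoint_step(f, u, dt, tol=1e-12, max_iter=50):
    k = f(u)                                # initial guess for the stage slope
    for _ in range(max_iter):
        k_new = f(u + 0.5 * dt * k)         # fixed-point map for the stage equation
        if np.max(np.abs(k_new - k)) < tol:
            return u + dt * k_new
        k = k_new
    return u + dt * k

f = lambda u: -50.0 * u                     # toy linear test problem (assumed)
u = np.array([1.0])
for _ in range(100):
    u = implicit_midpoint_step(f, u, dt=0.01)
print(u)                                    # smooth decay ~ exp(-50), no oscillation
```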
We employ a model based on nucleonic and mesonic degrees of freedom to discuss the competition between isotropic and anisotropic phases in cold and dense matter. Assuming isotropy, the model exhibits a chiral phase transition which is of second order in the chiral limit and becomes a crossover in the case of a realistic pion mass. This observation crucially depends on the presence of the nucleonic vacuum contribution. Allowing for an anisotropic phase in the form of a chiral density wave can disrupt the smooth crossover. We identify the regions in the parameter space of the model where a chiral density wave is energetically preferred. A high-density re-appearance of the chiral density wave with unphysical behavior, as seen in previous studies, is avoided by a suitable renormalization scheme. A nonzero pion mass tends to disfavor the anisotropic phase compared to the chiral limit and we find that, within our model, the chiral density wave is only realized for baryon densities of at least about 6 times nuclear saturation density.
Given a model for self-dual non-linear electrodynamics in four spacetime dimensions, any deformation of this theory which is constructed from the duality-invariant energy-momentum tensor preserves duality invariance. In this work we present new proofs of this known result, and also establish a previously unknown converse: any parameterized family of duality-invariant Lagrangians, all constructed from an Abelian field strength $F_{\mu \nu}$ but not its derivatives, is related by a generalized stress tensor flow, in a sense which we make precise. We establish this and other properties of stress tensor deformations of theories of non-linear electrodynamics using both a conventional Lagrangian representation and using two auxiliary field formulations. We analyze these flows in several examples of duality-invariant models including the Born-Infeld and ModMax theories, and we derive a new auxiliary field representation for the two-parameter family of ModMax-Born-Infeld theories. These results suggest that the space of duality-invariant theories may be characterized as a subspace of theories of electrodynamics with the property that all tangent vectors to this subspace are operators constructed from the stress tensor.
The gluon distribution is obtained from the Golec-Biernat-Wüsthoff (GBW) and Bartels-Golec-Biernat-Kowalski (BGK) models at low $x$. We derive analytical results for the unintegrated color dipole gluon distribution function at small transverse momentum, which provide useful information for constraining the $k_{t}$-shape of the unintegrated gluon distribution in comparison with unintegrated gluon distribution (UGD) models. The longitudinal proton structure function $F_{L}(x,Q^2)$ is computed in the $k_{t}$-factorization scheme using the unintegrated gluon density. We compare the predictions, with on-shell and twist-2 corrections, with the HERA data and with the CJ15 parametrization of $F_{L}$, and show that this method describes the experimental data very well within the on-shell and twist-2 frameworks. The effects of the parameters on $F_{L}$, where the charm contribution is taken into account, are investigated. These results are in good agreement with the data at fixed $W$.
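For orientation, the GBW model referred to here parametrizes the dipole cross section in the standard saturated form (the usual parametrization, assumed rather than quoted from the paper): $$\sigma_{\rm dip}(x,r)=\sigma_0\left[1-\exp\!\left(-\frac{r^2 Q_s^2(x)}{4}\right)\right],\qquad Q_s^2(x)=Q_0^2\left(\frac{x_0}{x}\right)^{\lambda},$$ with fitted parameters $\sigma_0$, $x_0$ and $\lambda$, and $Q_0^2=1$ GeV$^2$.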
Perturbative calculations for processes that involve heavy flavours can be performed in two approaches: the massive scheme and the massless one. The former enables one to fully account for the heavy-quark kinematics, while the latter allows one to resum potentially large mass logarithms. Furthermore, the two schemes can be combined to take advantage of the virtues of each of them. Both massive and massless calculations can be supplemented by soft-gluon resummation. However, matching between massive and massless resummed calculations is difficult, essentially because of the non-commutativity of the soft and massless limits. In this paper, we develop a formalism to combine resummed massless and massive calculations. We obtain an all-order expression that consistently resums both mass and soft logarithms to next-to-leading logarithmic accuracy. We perform detailed calculations for the decay of the Higgs into a heavy-quark pair and discuss applications of this formalism to other processes.
The renormalization-scale dependence of the non-factorizable virtual corrections to Higgs boson production in weak boson fusion at next-to-next-to-leading order in perturbative QCD is unusually strong, due to the peculiar nature of these corrections. To address this problem, we compute the three-loop non-factorizable contribution to this process which accounts for the running of the strong coupling constant, and show that it stabilizes the theoretical prediction.
We extend the recent study of $K_{1}/K^{*}$ enhancement as a signature of chiral symmetry restoration in heavy-ion collisions at the Large Hadron Collider (LHC) via the kinetic approach, including the effects of non-unity hadron fugacities during the evolution of the produced hadronic matter and of a temperature-dependent $K_1$ mass. Although the non-unity fugacity only slightly reduces the $K_1/K^*$ enhancement due to chiral symmetry restoration, the temperature-dependent $K_1$ mass leads to a substantial reduction. Nevertheless, the final $K_1/K^*$ ratio in peripheral collisions still shows a more than factor-of-two enhancement compared to the case without chiral symmetry restoration, and thus remains a good signature of chiral symmetry restoration in the hot, dense matter produced in relativistic heavy-ion collisions.
I discuss the thermodynamics-based derivation of the formula for the entanglement entropy of a system of gluons. The derivation is based on \cite{Kutak:2011rb}, where saturation and the Unruh effect were used to obtain and discuss the entropy of gluons. The formula agrees, in the high-energy limit, up to a numerical factor, with more recent results by \cite{Kharzeev:2017qzs}, where arguments based on the density matrix and bipartition of the proton were used to obtain the formula. Furthermore, I present arguments based on the properties of evolution equations as to why the saturation-based approach, as well as the double leading logarithmic limit of BFKL, agree in the functional form of the expression for entanglement entropy.
We study Horowitz-Polchinski string stars in pure AdS$_3$ near the Hagedorn temperature using the technique of worldsheet conformal perturbation theory. Since the worldsheet CFT for pure AdS$_3$ is known exactly, our methodology provides a systematic way to construct Horowitz-Polchinski backgrounds to all orders in $\alpha'$. We explicitly construct the leading string star equations in a double expansion in temperature and WZW level $k$ which we then solve numerically.
We demonstrate how recent developments in string field theory provide a framework to systematically study type II flux compactifications with non-trivial Ramond-Ramond profiles. We present an explicit example where physical observables can be computed order by order in a small parameter which can effectively be viewed as a string coupling constant. We obtain the corresponding background solution of the string field equations of motion up to second order in the expansion. Along the way, we show how the tadpole cancellation conditions of the string field equations lead to the minimization of the F-term potential of the low-energy supergravity description. The string field action expanded around the obtained background solution furnishes a worldsheet description of the flux compactification.
We classify one-dimensional symmetry-protected topological (SPT) phases protected by dipole symmetries. A dipole symmetry comprises two sets of symmetry generators: charge and dipole operators, which together form a non-trivial algebra with translations. Using matrix product states (MPS), we show that for a $G$ dipole symmetry with $G$ a finite abelian group, the one-dimensional dipolar SPTs are classified by the group $H^2[G\times G,U(1)]/H^2[G,U(1)]^2$. Because of the symmetry algebra, the MPS tensors exhibit an unusual property, prohibiting the fractionalization of charge operators at the edges. For each phase in the classification, we explicitly construct a stabilizer Hamiltonian to realize the SPT phase and derive the response field theories by coupling the dipole symmetry to background tensor gauge fields. These field theories generalize the Dijkgraaf-Witten theories to twisted finite tensor gauge theories.
We compute instanton corrections to the partition function of sine-Liouville (SL) theory, which provides a worldsheet description of two-dimensional string theory in a non-trivial tachyon background. We derive these corrections using a matrix model formulation based on a chiral representation of matrix quantum mechanics and using string theory methods. In both cases we restrict to the leading and subleading orders in the string coupling expansion. Then the CFT technique is used to compute two orders of the expansion in the SL perturbation parameter $\lambda$, while the matrix model gives results which are non-perturbative in $\lambda$. The matrix model results perfectly match those of string theory in the small $\lambda$ expansion. We also generalize our findings to the case of perturbation by several tachyon vertex operators carrying different momenta, and obtain interesting analytic predictions for the disk two-point and annulus one-point functions with ZZ boundary condition.
Six-dimensional superconformal field theories (SCFTs) have an atomic classification in terms of elementary building blocks, conformal systems that generalize matter and can be fused together to form all known 6d SCFTs in terms of generalized 6d quivers. It is therefore natural to ask whether 5d SCFTs can be organized in a similar manner, as the outcome of fusions of certain elementary building blocks, which we call 5d conformal matter theories. In this project we begin exploring this idea and give a systematic construction of 5d generalized ``bifundamental'' SCFTs, building on geometric engineering techniques in M-theory. In particular, we find several examples of $(\mathfrak {e}_6,\mathfrak {e}_6)$, $(\mathfrak {e}_7,\mathfrak {e}_7)$ and $(\mathfrak {e}_8,\mathfrak {e}_8)$ 5d bifundamental SCFTs beyond the ones arising from (elementary) KK reductions of the 6d conformal matter theories. We show that these can be fused together, giving rise to 5d SCFTs captured by 5d generalized linear quivers with exceptional gauge groups as nodes and links given by 5d conformal matter. As a first application of these models, we uncover a large class of novel 5d dualities that generalize the well-known fiber/base dualities beyond the toric realm.
In this paper, we construct the q-boson hopping model from the associated quantum gauge theory. Inspired by the symmetries of quantum field theory, we propose a new Bethe ansatz to describe the wave functions of the q-boson hopping model, and then derive the q-boson algebras from this new ansatz. This study serves as a concrete case of a general program: the correspondence between bosonic integrable systems and their associated two-dimensional gauge theories.
We initiate the study of boundary Vertex Operator Algebras (VOAs) of topologically twisted 3d $\mathcal{N}=4$ rank-0 SCFTs. This is a recently introduced class of $\mathcal{N}=4$ SCFTs that by definition have zero-dimensional Higgs and Coulomb branches. We briefly explain why it is reasonable to obtain rational VOAs at the boundary of their topological twists. When a rank-0 SCFT is realized as the IR fixed point of a $\mathcal{N}=2$ Lagrangian theory, we propose a technique for the explicit construction of its topological twists and boundary VOAs based on deformations of the holomorphic-topological twist of the $\mathcal{N}=2$ microscopic description. We apply this technique to the B-twist of a newly discovered subclass of 3d $\mathcal{N}=4$ rank-0 SCFTs and discuss relations to Virasoro minimal models. In the simplest case, this leads to a novel level-rank duality between $L_1(\mathfrak{osp}(1|2))$ and the minimal model $M(2,5)$. As an aside, we present a 3d $\mathcal{N}=2$ QFT that leads to the $M(3,4)$ minimal model, also known as the Ising model.
To the best of our knowledge, the non-perturbative lowest-order stringy interaction between an NS brane and a Dp brane remains unknown. Here we present the non-perturbative stringy amplitudes for a system of an F-string and a Dp brane, and for a system of an NS5 brane and a Dp brane, for $0 \le p \le 6$. In either case, the F-string or NS5 brane and the Dp brane are placed parallel to each other at a separation. We obtain the respective amplitudes from that for a D1-D3 system in the former case and that for a D5-D3 system in the latter, either of which can be computed via known D-brane techniques, using IIB S-duality together with various T-dualities, along with the respective known long-range amplitudes.
We construct domain-wall skyrmion chains and domain-wall bimerons in chiral magnets with an out-of-plane easy-axis anisotropy and without a Zeeman term coupling to a magnetic field. Domain-wall skyrmions are skyrmions trapped inside a domain wall; they are present in the ferromagnetic (FM) phase of a chiral magnet with an out-of-plane easy-axis anisotropy. In this paper, we explore the stability of domain-wall skyrmions in the FM phase and in a chiral soliton lattice (CSL) or spiral phase, a periodic array of domain walls and anti-domain walls arranged in an alternating manner. In the FM phase, the worldline of a domain-wall skyrmion is bent to form a cusp at the position of the skyrmion. We describe such a cusp using both an analytic method and numerical solutions, finding good agreement between them for small DM interactions. We show that the cusp grows toward the phase boundary with the CSL and eventually diverges at the boundary. Second, if we put one skyrmion trapped inside a domain wall in a CSL, it decays into a pair of merons through a reconnection of the domain wall and its adjacent anti-domain wall. Third, if we put skyrmions and anti-skyrmions alternately in domain walls and anti-domain walls, respectively, such a chain is stable.
Recent progress in string theory has unveiled NS-NS couplings in the bosonic and heterotic effective actions at order $\alpha'^2$, obtained by imposing $O(1,1)$ symmetry on the circle reduction of the classical effective actions. While the bosonic theory features 25 couplings, the heterotic theory encompasses 24 parity-even and 3 parity-odd couplings, excluding the pure gravity couplings. In this study, we redefine the parity-even couplings in the bosonic and heterotic theories through appropriate field redefinitions, resulting in 10 and 8 couplings, respectively. To establish the validity of these couplings, a cosmological reduction is conducted, demonstrating that the cosmological couplings in the heterotic theory vanish, subject to one-dimensional field redefinitions that include the lapse function and total derivative terms. Additionally, it is observed that the cosmological couplings in the bosonic theory can be expressed as $\mathrm{tr}(\dot{\mathcal{S}}^6)$. These results are consistent with the existing literature, where such behavior is attributed to the pure gravity component of the couplings. Furthermore, the consistency of the obtained couplings with 4-point string theory S-matrix elements is confirmed.
In this paper, we study holographic fermionic pole-skipping phenomena for a class of interacting theories in a charged AdS black hole background. We consider two types of fermion-scalar interactions in the bulk: dipole and Yukawa couplings. Depending on the interaction, we introduce either a real or a charged scalar field. In particular, we analyze the effect of scalar condensation on the fermionic pole-skipping points and discuss their behaviour near critical temperatures.
We reveal an intriguing connection between conformal blocks and fractional calculus. By employing a modified form of half-derivatives, we discover a compact representation of the three-dimensional conformal block, expressed as the product of two hypergeometric ${}_4F_3$ functions. Furthermore, our method has several implications for two-dimensional Conformal Field Theory (CFT), which we explore towards the conclusion of this paper.
Real-time holography is studied in the context of the embedding space formalism. In the first part of this paper, we present matching conditions for on-shell integer-spin fields when going from Euclidean to Lorentzian signature on an AdS background. Using the BTZ black hole as an example, we discuss various ways of lifting the physical solution from the AdS surface to the whole embedding space. The BTZ propagator for higher spin fields is expressed elegantly in terms of the embedding coordinates. In the second part of the paper, we develop the proposed duality between higher spin theory and vector models. We obtain a specific map between the field configurations of these two theories in real time, the so-called Lorentzian AdS/CFT map. We conclude by exploring the matching conditions for higher spin fields satisfying the proposed bulk quadratic action. The physical and ghost modes can be treated independently during the Wick rotation; only physical modes are considered to be external modes.
Tensor hierarchy of Exceptional Field Theories contains gauge fields satisfying certain Bianchi identities. We define the full set of fluxes of the SL(5) exceptional field theory containing known gauge field strengths, generalized anholonomy coefficients and two new fluxes. It is shown that the full SL(5) ExFT Lagrangian can be written in terms of the listed fluxes. We derive the complete set of Bianchi identities and identify magnetic potentials of the theory and the corresponding (wrapped) membranes of M-theory.
In this paper, we explore the zoo of 5d superconformal field theories (SCFTs) constructed from M-theory on Isolated Complete Intersection Singularities (ICIS). We systematically investigate the crepant resolution of such singularities, and obtain a classification of rank $\leqslant 10$ models with a smooth crepant resolution and smooth exceptional divisors, as well as a number of infinite sequences with the same smoothness properties. For these models, we study their Coulomb branch properties and compute the flavor symmetry algebra from the resolved CY3 and/or the magnetic quiver. We check the validity of the conjectures relating the properties of the 5d SCFT and the 4d $\mathcal{N}=2$ SCFT from IIB superstring on the same singularity. When the 4d $\mathcal{N}=2$ SCFT has a Lagrangian quiver gauge theory description, one can obtain the magnetic quiver of the 5d theory by gauging flavor symmetry, which encodes the 5d Higgs branch information. Regarding the smoothness of the crepant resolution and integrality of 4d Coulomb branch spectrum, we find examples with a smooth resolved CY3 and smooth exceptional divisors, but fractional 4d Coulomb branch spectrum. Moreover, we compute the discrete (higher)-symmetries of the 5d/4d SCFTs from the link topology for a few examples.
The combinatorially and the geometrically defined partial orders on the set of permutations coincide. We extend this result to $(0,1)$-matrices with fixed row and column sums. Namely, the Bruhat order induced by the geometry of a Cherkis bow variety of type A coincides with one of the two combinatorially defined Bruhat orders on the same set.
The normalization in the path integral approach to quantum field theory, in contrast with statistical field theory, can contain physical information. The main claim of this paper is that the inner product on the space of field configurations, one of the fundamental pieces of data required to be added to quantize a classical field theory, determines the normalization of the path integral. In fact, dimensional analysis shows that the introduction of this structure necessarily introduces a scale that is left unfixed by the classical theory. We study the dependence of the theory on this scale. This allows us to explore mechanisms that can be used to fix the normalization based on cutting and gluing different integrals. "Self-normalizing" path integrals, those independent of the scale, play an important role in this process. Furthermore, we show that the scale dependence encodes other important physical data: we use it to give a conceptually clear derivation of the chiral anomaly. Several explicit examples, including the scalar and compact bosons in different geometries, supplement our discussion.
We compute partition functions of the deformed multiple M5-brane theory on $K3\times T^2$ using the refined topological vertex formalism and the Borcherds lift. The deformation is related to the mass deformation of the corresponding four-dimensional $N=4$ $SU(N)$ gauge theory on $K3$. The seed of the Borcherds lift is calculated by taking the universal part of the type IIB little string free energy of the CY3-fold $X_{N,1}$. We provide explicit modular covariant expressions, as expansions in the mass parameter $m$, of the genus-two Siegel modular forms produced by the Borcherds lift of the first few seed functions. We also discuss the relation between the genus-one free energy and the Ray-Singer torsion, and the automorphic properties of the latter.
We give a geometric interpretation of superconformal quantum mechanics defined on a hyper-K\"ahler cone which has an equivariant symplectic resolution. BPS states are identified with certain twisted Dolbeault cohomology classes on the resolved space, and their index degeneracies can also be related to the Euler characteristic computed in equivariant sheaf cohomology. In the special case of the Hilbert scheme of $K$ points on $\mathbb{C}^2$, we obtain a rigorous estimate for the exponential growth of the index degeneracies of BPS states as $K \to \infty$. This growth serves as a toy model for our recently proposed duality between a seven-dimensional black hole and superconformal quantum mechanics.
We consider macroscopically large 3-partitions $(A,B,C)$ of connected subsystems $A\cup B \cup C$ in infinite quantum spin chains and study the R\'enyi-$\alpha$ tripartite information $I_3^{(\alpha)}(A,B,C)$. At equilibrium in clean 1D systems with local Hamiltonians it generally vanishes. A notable exception is the ground state of conformal critical systems, in which $I_3^{(\alpha)}(A,B,C)$ is known to be a universal function of the cross ratio $x=|A||C|/[(|A|+|B|)(|C|+|B|)]$, where $|A|$ denotes $A$'s length. We identify different classes of states that, under time evolution with translationally invariant Hamiltonians, locally relax to states with a nonzero (R\'enyi) tripartite information, which furthermore exhibits a universal dependency on $x$. We report a numerical study of $I_3^{(\alpha)}$ in systems that are dual to free fermions, propose a field-theory description, and work out their asymptotic behaviour for $\alpha=2$ in general and for generic $\alpha$ in a subclass of systems. This allows us to infer the value of $I_3^{(\alpha)}$ in the scaling limit $x\rightarrow 1^-$, which we call ``residual tripartite information''. If nonzero, our analysis points to a universal residual value $-\log 2$ independently of the R\'enyi index $\alpha$, and hence applies also to the genuine (von Neumann) tripartite information.
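In the standard convention (assumed here), the R\'enyi-$\alpha$ tripartite information is built from the R\'enyi entropies $S_X^{(\alpha)}=\frac{1}{1-\alpha}\log{\rm tr}\,\rho_X^{\alpha}$ as $$I_3^{(\alpha)}(A,B,C)=S_A^{(\alpha)}+S_B^{(\alpha)}+S_C^{(\alpha)}-S_{AB}^{(\alpha)}-S_{AC}^{(\alpha)}-S_{BC}^{(\alpha)}+S_{ABC}^{(\alpha)}.$$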
We show the relation of the non-stationary difference equation proposed by one of the authors and the quantized discrete Painlev\'e VI equation. The five-dimensional Seiberg-Witten curve associated with the difference equation has a consistent four-dimensional limit. We also show that the original equation can be factorized as a coupled system for a pair of functions $\bigl(\mathcal{F}^{(1)},\mathcal{F}^{(2)}\bigr)$, which is a consequence of the identification of the Hamiltonian as a translation element in the extended affine Weyl group. We conjecture that the instanton partition function coming from the affine Laumon space provides a solution to the coupled system.
In this paper we initiate the study of correlation functions of a single-trace operator and a circular supersymmetric Wilson loop in ABJM theory. The single-trace operator is in the scalar sector and is an eigenstate of the planar two-loop dilatation operator. The Wilson loop is in the fundamental representation of the gauge group or a suitable (super-)group. Such correlation functions at tree level can be written as an overlap of the Bethe state corresponding to the single-trace operator and a boundary state corresponding to the Wilson loop. There are various types of supersymmetric Wilson loops in ABJM theory. We show that some of them correspond to tree-level integrable boundary states while others do not. For the tree-level integrable ones, we prove their integrability and obtain analytic formulas for the overlaps. For the non-integrable ones, we give examples of non-vanishing overlaps for Bethe states which violate the selection rules.
We consider $d=2$, $\mathcal{N}=(0,2)$ SCFTs that can arise from M5-branes wrapping four-dimensional, complex, toric manifolds and orbifolds. We use equivariant localization to compute the off-shell central charge of the dual supergravity solutions, obtaining a result which can be written as a sum of gravitational blocks and precisely agrees with a field theory computation using anomaly polynomials and $c$-extremization.
Quiver theories constitute an important class of supersymmetric gauge theories with well-defined holographic duals. Motivated by holographic duality, we use localisation on $S^d$ to study long linear quivers at large-N. The large-N solution shows a remarkable degree of universality across dimensions, including $d = 4$ where quivers are genuinely superconformal. In that case we upgrade the solution of long quivers to quivers of any length.
We develop a systematic method to classify connected \'etale algebras $A$ in a (possibly degenerate) pre-modular category $\mathcal B$. In particular, we find that the categories of $A$-modules, $\mathcal B_A$, have ranks bounded from above by $\lfloor\text{FPdim}(\mathcal B)\rfloor$. For demonstration, we classify connected \'etale algebras in some $\mathcal B$'s that appear in physics. Physically, the results constrain (or fix) the ground state degeneracies of certain $\mathcal B$-symmetric gapped phases. We study massive deformations of rational conformal field theories such as minimal models and Wess-Zumino-Witten models. In most of our examples, the classification suggests that the symmetries $\mathcal B$ are spontaneously broken.
Two theories describing antisymmetric tensor fields in 4 dimensions are well known: the gauge-invariant Kalb-Ramond model, which generalizes the Maxwell Lagrangian, and the Avdeev-Chizhov model, which describes self-dual 2-tensors. Using a theorem of Jackiw and Pi, we study p-forms in D dimensions and prove that the Kalb-Ramond model is conformally invariant only when the rank p of the gauge tensor equals its canonical dimension (D-2)/2, and that the Avdeev-Chizhov model and its new CP generalization, inspired by the SU(2/1) superalgebraic chiral structure of the electroweak interactions, are both conformally invariant in any even dimension.
Large-scale optical neutrino and dark-matter detectors rely on large-area photomultiplier tubes (PMTs) for cost-effective light detection. The new R14688-100 8-inch PMT developed by Hamamatsu provides state-of-the-art timing resolution of around 1 ns (FWHM), which can help improve vertex reconstruction and enable Cherenkov and scintillation light separation in scintillation-based detectors. This PMT also provides excellent charge resolution, allowing for precision photoelectron counting and improved energy reconstruction. The Eos experiment is the first large-scale optical detector to utilize these PMTs. In this manuscript we present a characterization of the R14688-100 single-photoelectron response, including the transit-time spread, dark rate, and afterpulsing. The single-photoelectron response measurements are performed for the 206 PMTs that will be used in Eos.
We present a measurement of neutrino oscillation parameters with the Super-Kamiokande detector using atmospheric neutrinos from the complete pure-water SK I-V (April 1996-July 2020) data set, including events from an expanded fiducial volume. The data set corresponds to 6511.3 live days and an exposure of 484.2 kiloton-years. Measurements of the neutrino oscillation parameters $\Delta m^2_{32}$, $\sin^2\theta_{23}$, $\sin^2 \theta_{13}$, $\delta_{CP}$, and the preference for the neutrino mass ordering are presented with atmospheric neutrino data alone, and with constraints on $\sin^2 \theta_{13}$ from reactor neutrino experiments. Our analysis including constraints on $\sin^2 \theta_{13}$ favors the normal mass ordering at the 92.3% level.
The upgrade of the track classification and selection step of the CMS tracking to a Deep Neural Network is presented. The CMS tracking follows an iterative approach: tracks are reconstructed in multiple passes starting from the ones that are easiest to find and moving to the ones with more complex characteristics (lower transverse momentum, high displacement). The track classification comes into play at the end of each iteration. A classifier using a multivariate analysis is applied after each iteration and several selection criteria are defined. If a track meets the high purity requirement, its hits are removed from the hit collection, thus simplifying the later iterations, and making the track classification an integral part of the reconstruction process. Tracks passing loose selections are also saved for physics analysis usage. The CMS experiment improved the track classification starting from a parametric selection used in Run 1, moving to a Boosted Decision Tree in Run 2, and finally to a Deep Neural Network in Run 3. An overview of the Deep Neural Network training and current performance is shown.
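The control flow of such an iterative approach can be sketched as follows (hypothetical pseudo-implementation: all names, stubs, and thresholds are invented for illustration and this is not CMS software):

```python
# Schematic illustration: a quality score gates which candidates have their
# hits masked between passes, so later (harder) iterations see fewer hits.
from dataclasses import dataclass

@dataclass(frozen=True)
class Track:
    hits: frozenset
    n_hits: int

def reconstruct_tracks(hits, chunk):
    """Stub pattern-recognition pass: group remaining hits into toy candidates."""
    pool = sorted(hits)
    return [Track(frozenset(pool[i:i + chunk]), chunk)
            for i in range(0, len(pool) - chunk + 1, chunk)]

def dnn_score(track):
    """Stub standing in for the trained DNN classifier."""
    return min(1.0, track.n_hits / 10.0)

def iterative_tracking(all_hits, chunk_sizes, high_purity=0.9, loose=0.3):
    tracks, remaining = [], set(all_hits)
    for chunk in chunk_sizes:                 # easiest-to-find tracks first
        for trk in reconstruct_tracks(remaining, chunk):
            score = dnn_score(trk)
            if score > high_purity:
                remaining -= trk.hits         # mask hits, simplifying later passes
                tracks.append(trk)
            elif score > loose:
                tracks.append(trk)            # saved for analysis, hits not masked
    return tracks

print(len(iterative_tracking(range(100), chunk_sizes=[10, 5])))
```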
The longitudinal polarization fraction of the $D^{*}$ meson is measured in $B^0\to D^{*-}\tau^{+}\nu_{\tau}$ decays, where the $\tau$ lepton decays to three charged pions and a neutrino, using proton-proton collision data collected by the LHCb experiment at center-of-mass energies of 7, 8 and 13 TeV and corresponding to an integrated luminosity of 5 fb$^{-1}$. The $D^{*}$ polarization fraction $F_{L}^{D^{*}}$ is measured in two $q^{2}$ regions, below and above 7 GeV$^{2}/c^{4}$, where $q^{2}$ is defined as the squared invariant mass of the $\tau\nu_{\tau}$ system. The $F_{L}^{D^{*}}$ values are measured to be $0.51 \pm 0.07 \pm 0.03$ and $0.35 \pm 0.08 \pm 0.02$ for the lower and higher $q^{2}$ regions, respectively. The first uncertainties are statistical and the second systematic. The average value over the whole $q^{2}$ range is: $$F_{L}^{D^{*}} = 0.43 \pm 0.06 \pm 0.03.$$ These results are compatible with the Standard Model predictions.
We report the first high-repetition-rate generation and simultaneous characterization of nanosecond-scale return currents of kA magnitude produced by the polarization of a target irradiated at relativistic intensities with a PW-class high-repetition-rate Ti:Sa laser system. We present experimental results obtained with the VEGA-3 laser at intensities from $5\times10^{18}$ to $1.3\times10^{20}$ W/cm$^2$. A non-invasive inductive return-current monitor is adopted to measure the time derivative of return currents of the order of kA/ns, and an analysis methodology is developed to derive the return currents. We compare the currents for copper, aluminium and Kapton targets at different laser energies. The data show the stable production of current peaks and clear prospects for tailoring the pulse shape, promising for future applications in high-energy-density science, e.g. electromagnetic interference stress tests, high-voltage pulse response measurements, and charged-particle beam lensing. We compare the target discharge, of the order of hundreds of nC, with theoretical predictions and find good agreement.
The observation of the production of four top quarks in proton-proton collisions is reported, based on a data sample collected by the CMS experiment at a center-of-mass energy of 13 TeV in 2016-2018 at the CERN LHC and corresponding to an integrated luminosity of 138 fb$^{-1}$. Events with two same-sign, three, or four charged leptons (electrons and muons) and additional jets are analyzed. Compared to previous results in these channels, updated identification techniques for charged leptons and jets originating from the hadronization of b quarks, as well as a revised multivariate analysis strategy to distinguish the signal process from the main backgrounds, lead to an improved expected signal significance of 4.9 standard deviations above the background-only hypothesis. Four top quark production is observed with a significance of 5.6 standard deviations, and its cross section is measured to be 17.7 $^{+3.7}_{-3.5}$ (stat) $^{+2.3}_{-1.9}$ (syst) fb, in agreement with the available standard model predictions.
Quantum tangent kernel methods provide an efficient approach to analyzing the performance of quantum machine learning models in the infinite-width limit, which is of crucial importance in designing appropriate circuit architectures for certain learning tasks. Recently, they have been adapted to describe the convergence rate of training errors in quantum neural networks in an analytical manner. Here, we study the connections between the trainability and expressibility of quantum tangent kernel models. In particular, for global loss functions, we rigorously prove that high expressibility of both global and local quantum encodings leads to exponential concentration of quantum tangent kernel values to zero. For local loss functions, the issue of exponential concentration persists owing to high expressibility, but it can be partially mitigated. We further carry out extensive numerical simulations to support our analytical theory. Our findings unveil a pivotal characteristic of quantum neural tangent kernels, offering valuable insights for the design of wide quantum variational circuit models in practical applications.
We study the equilibrium thermodynamics of quantum hard spheres in the infinite-dimensional limit, determining the boundary between liquid and glass phases in the temperature-density plane by means of the Franz-Parisi potential. We find that as the temperature decreases from high values, the effective radius of the spheres is enhanced by a multiple of the thermal de Broglie wavelength, thus increasing the effective filling fraction and decreasing the critical density for the glass phase. Numerical calculations show that the critical density continues to decrease monotonically as the temperature decreases further, suggesting that the system will form a glass at sufficiently low temperatures for any density.
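For context, the thermal de Broglie wavelength that sets this quantum enhancement of the effective radius is, in the standard definition, $$\lambda_{\rm dB}=\frac{h}{\sqrt{2\pi m k_B T}},$$ which grows as the temperature decreases, so the effective filling fraction increases on cooling at fixed density.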
The Casimir interaction and torque are related phenomena originating from the exchange of electromagnetic excitations between objects. While the Casimir force exists between objects of any type, material or geometrical anisotropy drives the emergence of the Casimir torque. Here both phenomena are studied theoretically between dielectric films with immersed parallel single-wall carbon nanotubes in the dilute limit, with their chirality and collective electronic and optical response properties taken into account. It is found that the Casimir interaction is dominated by thermal fluctuations at sub-micron separations, while the torque is primarily determined by quantum mechanical effects. This peculiar quantum vs. thermal separation is attributed to the strong influence of reduced dimensionality and the inherent anisotropy of the materials. Our study suggests that nanostructured anisotropic materials can serve as novel platforms to uncover new functionalities in ubiquitous Casimir phenomena.
The planar grasshopper problem, originally introduced in (Goulko & Kent 2017 Proc. R. Soc. A 473, 20170494), is a striking example of a model with long-range isotropic interactions whose ground states break rotational symmetry. In this work we analyze and explain the nature of this symmetry breaking with emphasis on the importance of dimensionality. Interestingly, rotational symmetry is recovered in three dimensions for small jumps, which correspond to the non-isotropic cogwheel regime of the two-dimensional problem. We discuss simplified models that reproduce the symmetry properties of the original system in N dimensions. For the full grasshopper model in two dimensions we obtain quantitative predictions for optimal perturbations of the disk. Our analytical results are confirmed by numerical simulations.
Generative models are a class of machine learning models that aim to learn the underlying probability distribution of data. Unlike discriminative models, generative models focus on capturing the data's inherent structure, allowing them to generate new samples that resemble the original data. To fully exploit the potential of modeling probability distributions using quantum physics, quantum-inspired generative models known as Born machines have shown great advances in learning classical and quantum data within the matrix product state (MPS) framework. Born machines support tractable log-likelihood evaluation, autoregressive and masked sampling, and have shown outstanding performance in various unsupervised learning tasks. However, much of the current research has centered on improving the expressive power of the MPS, predominantly embedding each token directly via a corresponding tensor index. In this study, we generalize the embedding method to trainable quantum measurement operators that can be honed simultaneously with the MPS. Our study indicates that, combined with trainable embedding, Born machines exhibit better performance and learn deeper correlations from the dataset.
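To make the setup concrete, here is a minimal numerical sketch of an MPS Born machine (illustrative only: random tensors stand in for trained ones, and the paper's trainable measurement embedding is not implemented):

```python
# Minimal MPS Born machine sketch: p(x) = |psi(x)|^2 / Z, with psi(x) a
# periodic-boundary MPS amplitude and Z computed exactly via transfer matrices.
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 6, 2, 3                        # sites, physical dim, bond dim (assumed)
A = [rng.normal(size=(d, D, D)) for _ in range(n)]

def amplitude(x):
    M = np.eye(D)
    for site, s in enumerate(x):
        M = M @ A[site][s]
    return np.trace(M)                   # psi(x) = tr(A_1[x_1] ... A_n[x_n])

def partition_function():
    T = np.eye(D * D)
    for site in range(n):                # E_i = sum_s A_i[s] (x) conj(A_i[s])
        E = sum(np.kron(A[site][s], A[site][s].conj()) for s in range(d))
        T = T @ E
    return np.trace(T).real              # Z = sum_x |psi(x)|^2

Z = partition_function()
x = (0, 1, 1, 0, 1, 0)
print("p(x) =", abs(amplitude(x)) ** 2 / Z)
```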
Computing excited-state properties of molecules and solids is considered one of the most important near-term applications of quantum computers. While many of the current excited-state quantum algorithms differ in circuit architecture, specific exploitation of quantum advantage, or result quality, one common feature is their rooting in the Schr\"odinger equation. However, through contracting (or projecting) the eigenvalue equation, more efficient strategies can be designed for near-term quantum devices. Here we demonstrate that when combined with the Rayleigh-Ritz variational principle for mixed quantum states, the ground-state contracted quantum eigensolver (CQE) can be generalized to compute any number of quantum eigenstates simultaneously. We introduce two excited-state (anti-Hermitian) CQEs that perform the excited-state calculation while inheriting many of the remarkable features of the original ground-state version of the algorithm, such as its scalability. To showcase our approach, we study several model and chemical Hamiltonians and investigate the performance of different implementations.
Nonadiabatic holonomic quantum computation has attracted continuous attention since it was proposed. By now, various schemes of nonadiabatic holonomic quantum computation have been developed, and many of them have been demonstrated experimentally. At the end of a computation, one usually needs to estimate the average value of an observable. However, computation errors severely disturb the final state of a computation, leading to erroneous average-value estimation. An important topic for nonadiabatic holonomic quantum computation is therefore how to better estimate the average value of an observable in the presence of computation errors; despite its importance, this topic has received little attention in the field. In this paper, we show that rescaling the measurement results yields a better estimate of the average value of an observable in nonadiabatic holonomic quantum computation when computation errors are considered. In particular, we show that by rescaling the measurement results, the computation errors can be reduced by $56.25\%$ under the depolarizing noise model, a noise model widely adopted in the quantum computing community, when analysing the benefit of our method.
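The rescaling logic can be illustrated with a global depolarizing channel (a standard textbook relation; the $56.25\%$ figure itself is this paper's result): $$\rho\mapsto(1-p)\,\rho+p\,\frac{I}{d}\quad\Longrightarrow\quad\langle O\rangle_{\rm noisy}=(1-p)\,\langle O\rangle_{\rm ideal}+p\,\frac{{\rm tr}\,O}{d},$$ so for a traceless observable the ideal average is recovered by dividing the measured result by $1-p$.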
We consider stellar interferometry in the continuous-variable (CV) quantum information formalism and use the quantum Fisher information (QFI) to characterize the performance of three key strategies: direct interferometry (DI), local heterodyne measurement, and a CV teleportation-based strategy. In the lossless regime, we show that a squeezing parameter of $r\approx 2$ (18 dB) is required to reach $\approx$ 95% of the QFI achievable with DI; such a squeezing level is beyond what has been achieved experimentally. In the low-loss regime, the CV teleportation strategy becomes inferior to DI, and the performance gap widens as loss increases. Curiously, in the high-loss regime, there is a small region of loss where the CV teleportation strategy slightly outperforms both DI and local heterodyne, representing a transition in the optimal strategy. This advantage is limited, however: it occurs only in a small region of loss, and its magnitude is small. We argue that practical difficulties further impede achieving any quantum advantage, limiting the merits of a CV teleportation-based strategy for stellar interferometry.
We study how the surface states of a Bi$_{2}$Se$_{3}$ topological-insulator ultra-thin film, which are modified by finite-size effects, can be used for quantum computing. We demonstrate that: (i) under the finite-size effect, the surface states effectively form a two-level system whose energy levels lie within the bulk energy gap, from which a logical qubit can be constructed; (ii) the qubit can be initialized and manipulated using electric pulses of simple form; (iii) two-qubit entanglement is achieved through a $\sqrt{\text{SWAP}}$ operation when the two qubits are in a parallel setup; and (iv) alternatively, a Floquet state can be exploited to construct a qubit, and two Floquet qubits can be entangled through a Controlled-NOT operation. The Floquet qubit offers robustness to background noise since an oscillating electric field is always applied, and single-qubit operations are controlled by amplitude modulation of the oscillating field, which is experimentally convenient.
Two-mode squeezed states, which are entangled states with bipartite quantum correlations in continuous-variable systems, are crucial in quantum information processing and metrology. Recently, continuous-variable quantum computing with the vibrational modes of trapped atoms has emerged with significant progress, featuring a high degree of control in hybridizing with spin qubits. Creating two-mode squeezed states in such a platform could enable applications that are only viable with photons. Here, we experimentally demonstrate two-mode squeezed states by employing atoms in a two-dimensional optical lattice as quantum registers. The states are generated by a controlled projection conditioned on the relative phase of two independent squeezed states. The individual squeezing is created by sudden jumps of the oscillators' frequencies, allowing generation of the two-mode squeezed states at a rate that is a fraction of the oscillation frequency. We validate the states with entanglement steering criteria and Fock-state analysis. Our results can be applied to other mechanical oscillators for quantum sensing and continuous-variable quantum information.
Optical communication can be revolutionized by encoding data into the orbital angular momentum of light beams. However, state-of-the-art approaches for dynamic control of complex optical wavefronts are mainly based on liquid-crystal spatial light modulators or miniaturized mirrors, which suffer from intrinsically slow response times. Here, we experimentally realize a hybrid meta-optical system that enables complex control of the wavefront of light with pulse-duration-limited dynamics. Specifically, by combining ultrafast polarization switching in a WSe$_2$ monolayer with a dielectric metasurface, we demonstrate second-harmonic beam deflection and structuring of orbital angular momentum on the femtosecond timescale. Our results pave the way to robust encoding of information for free-space optical links, while reaching response times compatible with real-world telecom applications.
Combinatorial optimization problems are ubiquitous across disciplines and applications. Many heuristic algorithms have been devoted to solving these types of problems. To increase the efficiency of finding optimal solutions, application-specific hardware called a digital annealer (DA) has been developed for solving combinatorial optimization problems using quadratic unconstrained binary optimization (QUBO) formulations. In this study, we formulated the number partitioning problem and the graph partitioning problem in QUBO form and solved them with the DA developed by Fujitsu Ltd. The QUBO formulation of the number partitioning problem is fully connected; the DA found the optimal solution in an overall runtime of less than 30 seconds for 6500 binary variables. For the graph partitioning problem, we adopted modularity as the metric for the quality of the partitions. For Zachary's Karate Club graph, the modularity obtained was 0.445, a 6% increase over the D-Wave quantum annealer and simulated annealing. Moreover, to explore the DA's potential applications to real-world problems, we used the search for communities, or virtual microgrids, in a power distribution network as an example, formulated as graph partitioning. We show that the DA effectively identified community structures in the IEEE 33-bus and IEEE 118-bus networks.
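For reference, a standard QUBO formulation of number partitioning (conventions may differ from the paper's) encodes the squared imbalance $H = (\sum_i a_i s_i)^2$ with $s_i = 2x_i - 1$; the sketch below builds the corresponding QUBO matrix and brute-force checks a small instance.

```python
import numpy as np
from itertools import product

def partition_qubo(a):
    # H = (sum_i a_i s_i)^2 with s_i = 2 x_i - 1 expands, using x_i^2 = x_i,
    # to x^T Q x + S^2 with S = sum_i a_i and Q as below.
    a = np.asarray(a, dtype=float)
    S = a.sum()
    Q = 4.0 * np.outer(a, a)
    Q[np.diag_indices_from(Q)] = 4.0 * a * a - 4.0 * S * a
    return Q, S * S

a = [4, 5, 6, 7, 8]
Q, offset = partition_qubo(a)

# brute-force check on a small instance (a digital annealer targets large n)
best = min(product([0, 1], repeat=len(a)),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
x = np.asarray(best)
print(best, "squared imbalance:", x @ Q @ x + offset)  # {7,8} vs {4,5,6} -> 0
```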
Many-body forces, and especially three-body forces, are sometimes a relevant ingredient in various fields, such as atomic, nuclear, or hadronic physics. As their precise structure is generally difficult to uncover or to implement, phenomenological effective forces are often used in practice. A form commonly used for a many-body variable is the square root of the sum of two-body variables. Even in this case, the problem can be very difficult to treat numerically. But this kind of many-body force can be handled at the same level of difficulty as two-body forces by the envelope theory. The envelope theory is a very efficient technique for computing approximate, but reliable, solutions of many-body systems, especially for identical particles. The quality of this technique is tested here for various three-body forces in non-relativistic systems composed of three identical particles. The energies, the eigenfunctions, and some observables are compared with the corresponding accurate results computed with a numerical variational method.
The potential synergy between quantum communications and future wireless communication systems is explored. By proposing a quantum-native or quantum-by-design philosophy, the survey examines technologies such as quantum-domain (QD) multi-input multi-output (MIMO), QD non-orthogonal multiple access (NOMA), quantum secure direct communication (QSDC), QD resource allocation, QD routing, and QD artificial intelligence (AI). The recent research advances in these areas are summarized. Given the behavior of photonic and particle-like Terahertz (THz) systems, a comprehensive system-oriented perspective is adopted to assess the feasibility of using quantum communications in future systems. This survey also reviews quantum optimization algorithms and quantum neural networks to explore the potential integration of quantum communication and quantum computing in future systems. Additionally, the current status of quantum sensing, quantum radar, and quantum timing is briefly reviewed in support of future applications. The associated research gaps and future directions are identified, including extending the entanglement coherence time, developing THz quantum communications devices, addressing challenges in channel estimation and tracking, and establishing the theoretical bounds and performance trade-offs of quantum communication, computing, and sensing. This survey offers a unique perspective on the potential for quantum communications to revolutionize future systems and pave the way for even more advanced technologies.
Cooling a quantum system to its ground state is important for the characterization of non-trivial interacting systems, and in the context of a variety of quantum information platforms. In principle, this can be achieved by employing measurement-based passive steering protocols, where the steering steps are predetermined and are not based on measurement readouts. However, measurements, i.e., coupling the system to auxiliary quantum degrees of freedom, are rather costly, and protocols in which the number of measurements scales with system size will have limited practical applicability. Here, we identify conditions under which measurement-based cooling protocols can be taken to the dilute limit. For two examples of frustration-free one-dimensional spin chains, we show that steering on a single link is sufficient to cool these systems into their unique ground states. We corroborate our analytical arguments with finite-size numerical simulations and discuss further applications.
Creating dense and shallow nitrogen-vacancy (NV) ensembles with good spin properties is a prerequisite for developing diamond-based quantum sensors with better performance. Ion implantation is a key enabling tool for precisely controlling the spatial localisation and density of NV colour centres in diamond. However, it suffers from a low creation yield, while higher ion fluences significantly damage the crystal lattice. In this work, we realize N$_2$ ion implantation in the 30 to 40 keV range at high temperatures. At 800 °C, the NV-ensemble photoluminescence emission is three to four times higher than for room-temperature-implanted films, while narrow electron-spin-resonance linewidths of 1.5 MHz, comparable to well-established implantation techniques, are obtained. In addition, we find that ion fluences above $2\times10^{14}$ ions per cm$^2$ can be used without graphitization of the diamond film, in contrast to room-temperature implantation. This study opens promising perspectives for optimizing diamond films with implanted NV ensembles that could be integrated into quantum sensing devices.
A promising route towards the heralded creation and annihilation of single-phonons is to couple a single-photon emitter to a mechanical resonator. The challenge lies in reaching the resolved-sideband regime with a large coupling rate and a high mechanical quality factor. We achieve all of this by coupling self-assembled InAs quantum dots to a small-mode-volume phononic-crystal resonator with mechanical frequency $\Omega_\mathrm{m}/2\pi = 1.466~\mathrm{GHz}$ and quality factor $Q_\mathrm{m} = 2.1\times10^3$. Thanks to the high coupling rate of $g_\mathrm{ep}/2\pi = 2.9~\mathrm{MHz}$, and by exploiting a matching condition between the effective Rabi and mechanical frequencies, we are able to observe the interaction between the two systems. Our results represent a major step towards quantum control of the mechanical resonator via a single-photon emitter.
We present experiments in which self-assembled InAs quantum dots are coupled to a thin, suspended-beam GaAs resonator. The quantum dots are driven resonantly and the resonance fluorescence is detected. The narrow quantum-dot linewidths, just a factor of three larger than the transform limit, result in a high sensitivity to the mechanical motion. We show that one quantum dot couples to eight mechanical modes spanning a frequency range from $30$ to $600~\mathrm{MHz}$: one quantum dot provides an extensive characterisation of the mechanical resonator. The coupling spans the unresolved-sideband to the resolved-sideband regimes. Finally, we present the first detection of thermally-driven phonon sidebands (at $4.2~\mathrm{K}$) in the resonance-fluorescence spectrum.
Gaussian boson sampling (GBS) plays a crucially important role in demonstrating quantum advantage. As a major imperfection, the limited connectivity of the linear optical network weakens the quantum-advantage claims of recent experiments. Here we present a faster classical algorithm to simulate GBS processes with limited connectivity. It computes the loop Hafnian of an $n \times n$ symmetric matrix with bandwidth $w$ in $O(nw2^w)$ time, improving on the previous fastest algorithm, which runs in $O(nw^2 2^w)$ time. This classical algorithm helps clarify how limited connectivity affects the computational complexity of GBS and tightens the boundary of quantum advantage in the GBS problem.
In the Bogoliubov-Fr\"ohlich model, we prove that an impurity immersed in a Bose-Einstein condensate forms a stable quasi-particle when the total momentum is less than its mass times the speed of sound. The system thus exhibits superfluid behavior, as this quasi-particle does not experience friction. We do not assume any infrared or ultraviolet regularization of the model, which contains massless excitations and point-like interactions.
Quantum mechanics offers the possibility of unconditionally secure communication between multiple remote parties. Security proofs for such protocols typically rely on bounding the capacity of the quantum channel in use. In a similar manner, Cram\'er-Rao bounds in quantum metrology place limits on how much information can be extracted from a given quantum state about some unknown parameters of interest. In this work we establish a connection between these two areas. We first demonstrate a three-party sensing protocol, where the attainable precision is dependent on how many parties work together. This protocol is then mapped to a secure access protocol, where only by working together can the parties gain access to some high security asset. Finally, we map the same task to a communication protocol where we demonstrate that a higher mutual information can be achieved when the parties work collaboratively compared to any party working in isolation.
Understanding the spatial distribution of P1 centers is crucial for diamond-based sensors and quantum devices. P1 centers serve as a polarization source for DNP quantum sensing and play a significant role in the relaxation of NV centers. Additionally, the distribution of NV centers correlates with the distribution of P1 centers, as NV centers are formed through the conversion of P1 centers. Using dynamic nuclear polarization (DNP) and pulsed electron paramagnetic resonance (EPR) techniques, we reveal strong clustering of a significant population of P1 centers that exhibit exchange coupling and produce asymmetric lineshapes. The $^{13}$C DNP frequency profile at high magnetic field reveals a pattern that requires an asymmetric EPR lineshape of the P1 clusters, with electron-electron (e-e) coupling strengths exceeding the $^{13}$C nuclear Larmor frequency. EPR and DNP characterization at high magnetic fields was necessary to resolve energy contributions from different e-e couplings. We employ a two-frequency pump-probe pulsed electron double resonance (ELDOR) technique to show crosstalk between the isolated and clustered P1 centers, implying that the clustered P1 centers affect all P1 populations. Direct observation of clustered P1 centers and their asymmetric lineshape provides a novel and crucial insight into the magnetic noise sources relevant to quantum information applications of diamond and into the design of diamond-based polarizing agents with optimized DNP efficiency for $^{13}$C and other nuclear spins of analytes. We propose that room-temperature $^{13}$C DNP at high field, achievable through straightforward modifications to existing solution-state NMR systems, is a potent tool for evaluating and controlling diamond defects.
Multidimensional coherent spectroscopy is a powerful tool for characterizing nonlinear optical response functions. Typically, multidimensional spectra are interpreted via a perturbative framework that straightforwardly provides intuition into the density-matrix dynamics that give rise to specific spectral features. However, when the goal is to characterize system coupling to a thermal bath, the perturbative formalism becomes unwieldy and yields less intuition. Here, we extend an approach developed by Vagov et al. to provide an exact expression for multidimensional spectra of a spin-boson Hamiltonian up to arbitrary order of electric-field interaction. We demonstrate the utility of this expression by modeling polaron formation and coherent exciton-phonon coupling in quantum dots, obtaining results that agree strongly with experiment.
Disease gene prioritization assigns scores to genes or proteins according to their likely relevance for a given disease based on a provided set of seed genes. Here, we describe a new algorithm for disease gene prioritization based on continuous-time quantum walks using the adjacency matrix of a protein-protein interaction (PPI) network. Our algorithm can be seen as a quantum version of a previous method known as the diffusion kernel, but, importantly, has higher performance in predicting disease genes, and also permits the encoding of seed node self-loops into the underlying Hamiltonian, which offers yet another boost in performance. We demonstrate the success of our proposed method by comparing it to several well-known gene prioritization methods on three disease sets, across seven different PPI networks. In order to compare these methods, we use cross-validation and examine the mean reciprocal ranks and recall values. We further validate our method by performing an enrichment analysis of the predicted genes for coronary artery disease. We also investigate the impact of adding self-loops to the seeds, and argue that they allow the quantum walker to remain more local to low-degree seed nodes.
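A minimal sketch of the quantum-walk scoring step described above: evolve a seed-localized state under the adjacency matrix (with self-loops added on the seeds) and rank nodes by occupation probability. The toy graph, walk time, and self-loop weight are placeholders, not values from the paper.

```python
import numpy as np
from scipy.linalg import expm

# toy 5-node "PPI network"; node 0 is the seed gene
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

seeds, t, loop = [0], 1.0, 1.0
H = A.copy()
H[seeds, seeds] += loop            # seed self-loops enter the Hamiltonian

psi0 = np.zeros(len(A))
psi0[seeds] = 1.0
psi0 /= np.linalg.norm(psi0)

psi_t = expm(-1j * t * H) @ psi0   # continuous-time quantum walk
scores = np.abs(psi_t) ** 2        # rank candidate genes by occupation
print(np.argsort(-scores), scores.round(3))
```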
Distributing entangled states over potentially long distances provides a key resource for many protocols in quantum communication and quantum cryptography. Ideally, this should be implemented in a heralded manner. By starting with four single-photon states, we cascade two single-photon path-entangled states, coded in orthogonal polarizations, to distribute and herald polarization entanglement in a single quantum repeater link architecture. By tuning the input states to minimize (local) losses, the theoretically achievable fidelity to the target state without postselection approaches 1, while sacrificing heralding rates. We achieve a fidelity to the target state of over 95% after postselection, providing a benchmark for the experimental control. We show that the fidelity of the heralded state without postselection scales predictably and also identify various practical challenges and error sources specific to this architecture, and model their effects on the generated state. While our experiment uses probabilistic photon-pair sources based on spontaneous parametric down-conversion, many of these problems are also relevant for variants employing deterministic photon sources.
We introduce prethermal temperature probes for sensitive, fast and robust temperature estimation. While equilibrium thermal probes with a manifold of quasidegenerate excited states have been previously recognized for their maximal sensitivity, they suffer from long thermalization timescales. When considering time as a critical resource in thermometry, it becomes evident that these equilibrium probes fall short of ideal performance. Here, we propose a different paradigm for thermometry, where setups originally suggested for optimal equilibrium thermometry should instead be employed as prethermal probes, by making use of their long-lived quasiequilibrium state. This transient state emerges from the buildup of quantum coherences among quasidegenerate levels. For a class of physically-motivated initial conditions, we find that energy measurements of the prethermal state exhibit a similar sensitivity as the equilibrium state. However, they offer the distinct benefit of orders of magnitude reduction in the time required for the estimation protocol. Upon introducing a figure-of-merit that accounts for the estimation protocol time, prethermal probes surpass the corresponding equilibrium probes in terms of effective thermal sensitivity, opening avenues for rapid thermometry by harnessing the long-lived prethermal states.
Arrays of Josephson junctions are at the forefront of research on quantum circuitry for quantum computing, simulation and metrology. They provide a testing bed for exploring a variety of fundamental physical effects where macroscopic phase coherence, nonlinearities and dissipative mechanisms compete. Here we realize finite-circulation states in an atomtronic Josephson junction necklace, consisting of a tunable array of tunneling links in a ring-shaped superfluid. We study the stability diagram of the atomic flow by tuning both the circulation and the number of junctions. We predict theoretically and demonstrate experimentally that the atomic circuit withstands higher circulations (corresponding to higher critical currents) by increasing the number of Josephson links. The increased stability contrasts with the trend of the superfluid fraction -- quantified by Leggett's criterion -- which instead decreases with the number of junctions and the corresponding density depletion. Our results demonstrate atomic superfluids in mesoscopic structured ring potentials as excellent candidates for atomtronics applications, with prospects towards the observation of non-trivial macroscopic superpositions of current states.
Learning tasks play an increasingly prominent role in quantum information and computation. They range from fundamental problems such as state discrimination and metrology, through the framework of quantum probably approximately correct (PAC) learning, to the recently proposed shadow variants of state tomography. However, the many directions of quantum learning theory have so far evolved separately. We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data and then testing how well the learned hypothesis generalizes to new data. In this framework, we prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities measuring how strongly the learner's hypothesis depends on the specific data seen during training. To achieve this, we use tools from quantum optimal transport and quantum concentration inequalities to establish non-commutative versions of decoupling lemmas that underlie recent information-theoretic generalization bounds for classical machine learning. Our framework encompasses and gives intuitively accessible generalization bounds for a variety of quantum learning scenarios such as quantum state discrimination, PAC learning quantum states, quantum parameter estimation, and quantumly PAC learning classical functions. Thereby, our work lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
Quantum mechanics imposes fluctuations onto physical quantities, leading to sources of noise absent in the classical world. For light, quantum fluctuations limit many applications requiring high sensitivities, resolutions, or bandwidths. In many cases, taming quantum fluctuations requires dealing with highly multimode systems with both light and matter degrees of freedom -- a regime which has traditionally eluded mechanistic insights, and for which general rules are largely lacking. In this work, we introduce and experimentally test a new theoretical framework for describing quantum noise in multimode systems of light and matter, called quantum sensitivity analysis. The framework leads to new general rules and mechanisms for quantum noise propagation, and accurately models all known quantum noise phenomena in nonlinear optics. We develop experiments to test unexplored aspects of our theory in the quantum noise dynamics of ultrafast multimode systems. For example, in physical effects related to supercontinuum generation, we observe and account for a proliferation of ultra-low-noise pairs of wavelengths, even though individual wavelengths are very noisy due to strong nonlinear amplification of vacuum fluctuations. We then show that, by exploiting the spectral dynamics of vacuum fluctuations undergoing nonlinearity and Raman scattering, it is possible to generate quantum light states, such as squeezed states, even from very noisy and complex light states. Effects like these can widely extend the range of sources that can be used for quantum metrology, bringing quantum optics to higher powers and more complex sources. Broadly, the framework we have developed will enable many new approaches for realizing light sources across the entire electromagnetic spectrum whose performance approaches the ultimate limits set by quantum mechanics.
The financial sector is anticipated to be one of the first industries to benefit from the increased computational power of quantum computers, in areas ranging from portfolio optimisation and risk management to financial-derivative pricing. Financial mathematics, and derivative pricing in particular, are not areas in which quantum physicists are traditionally trained, despite the fact that they often have the raw technical skills needed to understand such topics. On the other hand, most quantum algorithms have largely focused on qubits, which comprise two discrete states, as the information carriers. However, discrete higher-dimensional qudits, in addition to possibly possessing increased noise robustness and allowing for novel error-correction protocols in certain hardware implementations, also have logarithmically greater information storage and processing capacity. In the current NISQ era of quantum computing, a wide array of hardware paradigms is still being studied, and any potential advantage a platform offers is worth exploring. Here we introduce the basic concepts behind financial derivatives for the unfamiliar enthusiast, and outline in detail the quantum algorithm routines needed to price a European option, the simplest derivative. This is done within the context of a quantum computer comprised of qudits, employing the natural higher-dimensional analogue of a qubit-based pricing algorithm with its various subroutines. From these pieces, one should relatively easily be able to tailor the scheme to more complex, realistic financial derivatives. Finally, the entire stack is numerically simulated, with the results demonstrating how the qudit-based scheme's payoff quickly approaches, within error, that of both a similarly-resourced classical computer and the true payoff, for a modest increase in qudit dimension.
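For orientation, the classical benchmark such a quantum scheme is compared against can be as simple as Monte Carlo pricing of a European call under Black-Scholes dynamics; the sketch below is such a baseline with illustrative parameters, not code from the paper.

```python
import numpy as np

def european_call_mc(S0, K, r, sigma, T, n_paths=200_000, seed=1):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    # terminal price under geometric Brownian motion
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)       # European call payoff
    return np.exp(-r * T) * payoff.mean()  # discounted expectation

print(european_call_mc(S0=100.0, K=105.0, r=0.02, sigma=0.2, T=1.0))
```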
We describe tensor network algorithms to optimize quantum circuits for adiabatic quantum computing. To suppress diabatic transitions, we include counterdiabatic driving in the optimization and utilize variational matrix product operators to represent adiabatic gauge potentials. Traditionally, Trotter product formulas are used to turn adiabatic time evolution into quantum circuits and the addition of counterdiabatic driving increases the circuit depth per time step. Instead, we classically optimize a parameterized quantum circuit of fixed depth to simultaneously capture adiabatic time evolution together with counterdiabatic driving over many time steps. The methods are applied to the ground state preparation of quantum Ising chains of sizes $N = 7$ - $31$ with transverse and longitudinal fields. We show that the classically optimized circuits can significantly outperform Trotter product formulas. Furthermore, we discuss how the approach can be used for combinatorial optimization.
Regev recently introduced a quantum factoring algorithm that may be perceived as a $d$-dimensional variation of Shor's factoring algorithm. In this work, we extend Regev's factoring algorithm to an algorithm for computing discrete logarithms in a natural way. Furthermore, we discuss natural extensions of Regev's factoring algorithm to order finding, and to factoring completely via order finding.
Multi-Agent Reinforcement Learning is becoming increasingly important in the age of autonomous driving and other smart industrial applications. Simultaneously, a promising new approach to Reinforcement Learning has arisen that uses the inherent properties of quantum mechanics to significantly reduce a model's trainable parameters. However, gradient-based Multi-Agent Quantum Reinforcement Learning methods often struggle with barren plateaus, holding them back from matching the performance of classical approaches. We build upon an existing approach to gradient-free Quantum Reinforcement Learning and propose three approaches with Variational Quantum Circuits for Multi-Agent Reinforcement Learning using evolutionary optimization. We evaluate our approaches in the Coin Game environment and compare them to classical approaches. We show that our Variational Quantum Circuit approaches perform significantly better than a neural network with a similar amount of trainable parameters. Compared to the larger neural network, our approaches achieve similar results using $97.88\%$ fewer parameters.
Quantum computing offers the potential for superior computational capabilities, particularly for data-intensive tasks. However, the current state of quantum hardware puts heavy restrictions on input size. To address this, hybrid transfer learning solutions have been developed, merging pre-trained classical models, capable of handling extensive inputs, with variational quantum circuits. Yet, it remains unclear how much each component - classical and quantum - contributes to the model's results. We propose a novel hybrid architecture: instead of utilizing a pre-trained network for compression, we employ an autoencoder to derive a compressed version of the input data. This compressed data is then channeled through the encoder part of the autoencoder to the quantum component. We assess our model's classification capabilities against two state-of-the-art hybrid transfer learning architectures, two purely classical architectures and one quantum architecture. Their accuracy is compared across four datasets: Banknote Authentication, Breast Cancer Wisconsin, MNIST digits, and AudioMNIST. Our research suggests that classical components significantly influence classification in hybrid transfer learning, a contribution often mistakenly ascribed to the quantum element. The performance of our model aligns with that of a variational quantum circuit using amplitude embedding, positioning it as a feasible alternative.
We study the probability distribution of the first return time to the initial state of a quantum many-body system subject to stroboscopic projective measurements. We show that this distribution can be interpreted as a continuation of the canonical partition function of a spin chain with non-interacting domains at equilibrium, which is entirely characterised by the Loschmidt amplitude of the quantum many-body system. This allows us to show that this probability may decay either algebraically or exponentially asymptotically in time, depending on whether the spin model displays a ferromagnetic or a paramagnetic phase. We illustrate this idea on the example of the return time of $N$ adjacent fermions in a tight-binding model, revealing a rich phase behaviour, which can be tuned by scaling the probing time with $N$. Our analytical predictions are corroborated by exact numerical computations.
We apply and study a Grover-type method for Quadratic Unconstrained Binary Optimization (QUBO) problems. First, we construct a marker oracle for such problems; for an $n$-dimensional QUBO problem, these oracles have a circuit depth and gate count of $O \left( n^2 \right)$. We then develop a novel Fixed-point Grover Adaptive Search for QUBO problems, combining our oracle design with the hybrid Fixed-point Grover Search of Li et al. [8]. This method has better performance guarantees than the Grover Adaptive Search of Gilliam et al. [5].
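To see the structure of Grover Adaptive Search, the sketch below classically emulates its threshold-update loop on a toy QUBO; the quantum algorithm replaces the uniform inner sampling with amplitude amplification over states flagged by the marker oracle ($f(x) <$ threshold), which is where the speedup comes from. Everything here is illustrative.

```python
import numpy as np

def gas_qubo(Q, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    best = rng.integers(0, 2, n)
    fbest = best @ Q @ best
    for _ in range(n_iters):
        x = rng.integers(0, 2, n)   # stand-in for a Grover-amplified draw
        f = x @ Q @ x
        if f < fbest:               # the marker oracle flags f(x) < fbest
            best, fbest = x, f      # adapt (lower) the threshold
    return best, fbest

Q = np.array([[-3.0, 2.0], [2.0, -3.0]])
print(gas_qubo(Q))
```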
We introduce an adaptable and modular hybrid architecture designed for fault-tolerant quantum computing. It combines quantum emitters and linear-optical entangling gates to leverage the strengths of both matter-based and photonic-based approaches. A key feature of the architecture is its practicality, grounded in the utilisation of experimentally proven optical components. Our framework enables the execution of any quantum error-correcting code, and in particular maintains scalability for low-density parity-check codes by exploiting built-in non-local connectivity through distant optical links. To gauge its efficiency, we evaluated the architecture using a physically motivated error model. It exhibits loss tolerance comparable to existing all-photonic architectures, but without the need for intricate linear-optical resource-state-generation modules that conventionally rely on resource-intensive multiplexing. The versatility of the architecture also offers uncharted avenues for further advancing performance standards.
We show that a single-photon pulse (SPP) incident on two interacting two-level atoms induces a transient entanglement force between them. After absorption of a multi-mode Fock-state pulse, the time-dependent atomic interaction mediated by the vacuum fluctuations changes from the van der Waals interaction to the resonant dipole-dipole interaction (RDDI). We explicitly show that the RDDI force induced by the SPP fundamentally arises from the two-body transient entanglement between the atoms. This SPP-induced entanglement force can be continuously tuned from repulsive to attractive by varying the polarization of the pulse. We further demonstrate that the entanglement force can be enhanced by more than three orders of magnitude if the atomic interactions are mediated by graphene plasmons. These results demonstrate the potential of shaped SPPs as a powerful tool to manipulate this entanglement force and also provide a new approach to witnessing transient atom-atom entanglement.
Boson sampling has been theoretically proposed and experimentally demonstrated as a route to quantum computational advantage. However, a deep understanding of the practical applications of boson sampling is still lacking. Here we propose that boson sampling can be used to efficiently simulate the work distribution of multiple identical bosons. We link the work distribution to boson sampling and numerically calculate the transition-amplitude matrix between the single-boson eigenstates in a one-dimensional quantum piston system, and then map the matrix to a linear optical network for boson sampling. The work distribution can be efficiently simulated from the output probabilities of boson sampling using the method of grouped probability estimation. The scheme requires at most a polynomial number of samples and optical elements. Our work opens up a new path towards calculating complex quantum work distributions using only photons and linear optics.
The Harrow-Hassidim-Lloyd (HHL) algorithm allows for the exponentially faster solution of a system of linear equations. However, the algorithm requires postselection on an ancilla qubit to obtain the solution, which makes its result probabilistic. Here we show conditions under which the HHL algorithm can work without postselection of the ancilla qubit. We derive expectation values of an observable $M$ on the HHL outcome state when the ancilla qubit is measured in $\ket{0}$ and $\ket{1}$, and show the condition for postselection-free HHL running. We provide an explicit example of a practically interesting input matrix and an observable which satisfy the postselection-free HHL condition. Our work can improve the performance of HHL-based algorithms.
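For context, the quantity a postselection-free HHL run is meant to deliver is the expectation of an observable on the normalized solution of $Ax=b$; the sketch below computes this target classically for a toy instance (matrices are illustrative, not the paper's example).

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
M = np.array([[1.0, 0.0], [0.0, -1.0]])  # observable, e.g. Pauli Z

x = np.linalg.solve(A, b)
x_hat = x / np.linalg.norm(x)            # HHL prepares |x> up to normalization
print(x_hat @ M @ x_hat)                 # <x|M|x>, the target expectation value
```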
Measurements take a singular role in quantum theory. While they are often idealized as an instantaneous process, this is in conflict with all other physical processes in nature. In this Letter, we adopt a standpoint where the interaction with an environment is a crucial ingredient for the occurrence of a measurement. Within this framework, we derive lower bounds on the time needed for a measurement to occur. Our bound scales proportionally to the change in entropy of the measured system, and decreases as the number of possible measurement outcomes or the interaction strength driving the measurement increases. We evaluate our bound in two examples where the environment is modelled by bosonic modes and the measurement apparatus is modelled by spins or bosons.
Efficient suppression of errors without full error correction is crucial for applications with NISQ devices. Error mitigation allows us to suppress errors in extracting expectation values without the need for any error-correction code, but its applications are limited to estimating expectation values and cannot provide us with high-fidelity quantum operations acting on arbitrary quantum states. To address this challenge, we propose to use error filtration (EF) for gate-based quantum computation, as a practical error-suppression scheme without resorting to full quantum error correction. The result is a general-purpose error-suppression protocol in which the resources required to suppress errors scale independently of the size of the quantum operation, and no logical encoding of the operation is required. The protocol provides error suppression whenever an error hierarchy is respected -- that is, when the ancillary cSWAP operations are less noisy than the operation to be corrected. We further analyze the application of EF to quantum random access memory, where EF offers hardware-efficient error suppression.
In this work, we explore the PT-symmetric quantum Rabi model, which describes a PT-symmetric qubit coupled to a quantized light field. By employing the adiabatic approximation (AA), we are able to solve this model analytically in the parameter regime of interest and analyze various physical aspects. We investigate the static and dynamic properties of the model, using both the AA and numerical diagonalization. Our analysis reveals a multitude of exceptional points (EPs) that are closely connected with the exactly solvable points in the Hermitian counterpart of the model. Intriguingly, these EPs vanish and revive depending on the light-matter coupling strength. Furthermore, we discuss the time evolution of physical observables under the non-Hermitian Hamiltonian. Rich and exotic behaviors are observed in both the strong and ultra-strong coupling regimes. Our work extends the theory of PT symmetry into the full quantum light-matter interaction regime and provides insights that can be readily extended to a broad class of quantum optical systems.
Rapid progress in developing near- and long-term quantum algorithms for quantum chemistry has provided us with an impetus to move beyond traditional approaches and explore new ways to apply quantum computing to electronic-structure calculations. In this work, we identify the connection between quantum many-body theory and a quantum linear solver, and implement the Harrow-Hassidim-Lloyd (HHL) algorithm to make precise predictions of correlation energies for light molecular systems via the (non-unitary) linearised coupled-cluster theory. We alter the HHL algorithm to integrate two novel aspects: (a) we prescribe a novel scaling approach that allows one to scale any arbitrary symmetric positive-definite matrix $A$, in order to solve $Ax = b$ and obtain $x$ with reasonable precision, all the while without having to compute the eigenvalues of $A$; and (b) we devise techniques to reduce the depth of the overall circuit. In this context, we introduce the following variants of HHL for different eras of quantum computing: AdaptHHLite, in its appropriate forms for the noisy intermediate-scale quantum (NISQ), late-NISQ, and early fault-tolerant eras, and AdaptHHL for the fault-tolerant quantum computing era. We demonstrate the ability of the NISQ variant of AdaptHHLite to capture correlation energy precisely, while simultaneously being resource-lean, using simulation as well as the 11-qubit IonQ quantum hardware.
We provide a new perspective on shadow tomography by demonstrating its deep connections with the general theory of measurement frames. By showing that the formalism of measurement frames offers a natural framework for shadow tomography -- in which ``classical shadows'' correspond to unbiased estimators derived from a suitable dual frame associated with the given measurement -- we highlight the intrinsic connection between standard state tomography and shadow tomography. Such a perspective allows us to examine the interplay between measurements, reconstructed observables, and the estimators used to process measurement outcomes, while paving the way to assessing the influence of the input state and the dimension of the underlying space on estimation errors. Our approach generalizes the method described in [H.-Y. Huang {\it et al.}, Nat. Phys. 16, 1050 (2020)], whose results are recovered in the special case of covariant measurement frames. As an application, we demonstrate that a sought-after target of shadow tomography can be achieved for the entire class of tight rank-1 measurement frames -- namely, that it is possible to accurately estimate a finite set of generic rank-1 bounded observables while avoiding a growth of the number of required samples with the state dimension.
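The covariant special case recovered here is the random-Pauli-basis protocol of Huang et al.; the following single-qubit sketch implements that standard construction, where each snapshot is the unbiased estimator $\hat\rho = 3\,U^\dagger|b\rangle\langle b|U - I$. The toy state and shot count are illustrative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1.0, 1j])
BASIS = {0: H, 1: H @ S.conj().T, 2: I2}   # rotate into the X, Y, Z bases

rho = 0.5 * (I2 + 0.6 * X + 0.8 * Z)       # toy single-qubit state
rng = np.random.default_rng(0)

snapshots = []
for _ in range(20000):
    U = BASIS[rng.integers(3)]
    p = np.real(np.diag(U @ rho @ U.conj().T))       # Born probabilities
    b = rng.choice(2, p=p / p.sum())
    ket = U.conj().T[:, b].reshape(2, 1)             # U^dag |b>
    snapshots.append(3 * (ket @ ket.conj().T) - I2)  # inverse-channel snapshot

rho_hat = np.mean(snapshots, axis=0)
print(np.real(np.trace(rho_hat @ X)), np.real(np.trace(rho_hat @ Z)))  # ~0.6, ~0.8
```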
Quantum key distribution (QKD) networks are expected to enable information-theoretically secure (ITS) communication over a large-scale network. Most research on relay-based QKD networks assumes that all relays or nodes are completely trustworthy. However, the malicious behavior of even a single node can undermine the security of QKD networks. Current research primarily addresses passive attacks, such as eavesdropping, conducted by malicious nodes. Although solutions such as majority voting and secret sharing have been proposed for point-to-point QKD systems to counter active attacks, these strategies are not directly transferable to QKD networks owing to different security requirements. We propose a new paradigm for the security requirements of QKD networks and address active attacks by colluding malicious nodes. First, we introduce an ITS distributed authentication scheme, which additionally offers two crucial security properties to QKD networks: identity unforgeability and non-repudiation. Second, concerning correctness, we propose an ITS fault-tolerant consensus scheme to ensure consistency, enabling participating nodes to collaborate correctly in a more practical manner. Our simulations show that our scheme exhibits a significantly lower growth trend in key consumption than the original pre-shared-keys scheme. For instance, in larger networks, such as when the number of nodes is 80, our scheme's key consumption is only 13.1\% of that of the pre-shared-keys scheme.
Coherent control errors, for which ideal Hamiltonians are perturbed by unknown multiplicative noise terms, are a major obstacle for reliable quantum computing. In this paper, we present a framework for analyzing the robustness of quantum algorithms against coherent control errors using Lipschitz bounds. We derive worst-case fidelity bounds which show that the resilience against coherent control errors is mainly influenced by the norms of the Hamiltonians generating the individual gates. These bounds are explicitly computable even for large circuits, and they can be used to guarantee fault-tolerance via threshold theorems. Moreover, we apply our theoretical framework to derive a novel guideline for robust quantum algorithm design and transpilation, which amounts to reducing the norms of the Hamiltonians. Using the $3$-qubit Quantum Fourier Transform as an example application, we demonstrate that this guideline targets robustness more effectively than existing ones based on circuit depth or gate count. Furthermore, we apply our framework to study the effect of parameter regularization in variational quantum algorithms. The practicality of the theoretical results is demonstrated via implementations in simulation and on a quantum computer.
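Since the bounds are driven by the norms of the Hamiltonians generating the gates, one can tabulate these norms directly; the sketch below does so for two illustrative gates via $H = i\log U$ (principal branch; this is a generic computation, not the paper's code).

```python
import numpy as np
from scipy.linalg import logm

def generator_norm(U):
    # H = i log(U) satisfies U = exp(-iH); its spectral norm drives the bound
    H = 1j * logm(U)
    return np.linalg.norm(H, 2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
for name, U in [("X", X), ("T", T)]:
    print(name, generator_norm(U))   # pi for X, pi/4 for T
```

On this measure, a T gate is far "cheaper" in robustness terms than an X gate, which is the kind of comparison the proposed transpilation guideline exploits.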
A numerical method is proposed for simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. This numerical method is based on a perturbation analysis with an asymptotic expansion. It exploits the formulation of the slow dynamics with reduced dimension. It relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. Second-order expansion can be computed only with local numerical calculations. It avoids computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with complete full model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and gates with three cat-qubits (ZZZ and CCNOT), full model simulations are almost impossible whereas reduced model simulations remain accessible. In particular, they capture both the dominant phase-flip error-rate and the very small bit-flip error-rate with its exponential suppression versus the mean photon number.
Quantum reservoir computing is emerging strongly for sequential and time-series data prediction in quantum machine learning. We advance the quantum noise-induced reservoir, in which reservoir noise is used as a resource to generate expressive, nonlinear signals that are efficiently learned with a single linear output layer. We address the need for quantum reservoir tuning with a novel and generally applicable approach to quantum circuit parameterization, in which tunable noise models are programmed into the quantum reservoir circuit so as to be fully controlled for effective optimization. Our systematic approach also reduces the number of qubits and the entanglement-scheme complexity of quantum reservoir circuits. We show that, with only a single noise model and small memory capacities, excellent simulation results are obtained on nonlinear benchmarks that include the Mackey-Glass system predicted 100 steps ahead in the challenging chaotic regime.
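The post-processing shared by reservoir approaches is a single linear readout trained on reservoir signals; the sketch below illustrates this with a classical random-network stand-in for the quantum noise-induced reservoir, trained by ridge regression to predict a Mackey-Glass series one step ahead. All parameters are illustrative.

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1):
    # crude Euler discretization, enough for a toy benchmark
    x = 1.2 * np.ones(n + tau)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau] ** 10) - gamma * x[t]
    return x[tau:]

rng = np.random.default_rng(0)
series = mackey_glass(1000)

n_res = 50
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state-style scaling
w_in = rng.normal(size=n_res)

states, s = [], np.zeros(n_res)
for u in series[:-1]:
    s = np.tanh(W @ s + w_in * u)     # nonlinear reservoir update (stand-in)
    states.append(s.copy())
X, y = np.array(states), series[1:]   # one-step-ahead targets

ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # linear readout
print("train MSE:", np.mean((X @ w_out - y) ** 2))
```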
Electrically percolating nanowire networks are amongst the most promising candidates for next-generation transparent electrodes. Scientific interest in these materials stems from their intrinsic current-distribution heterogeneity, leading to phenomena like percolating-pathway re-routing and localized self-heating, which can cause irreversible damage. Without an experimental technique to resolve the current distribution, and an underpinning nonlinear percolation model, one must rely on empirical rules and safety factors to engineer these materials. We introduce Bose-Einstein microscopy to address the long-standing problem of imaging active current flow in 2D materials. We report on improvements to the performance of this technique that make observation of the dynamic redistribution of current pathways feasible. We show how this, combined with existing thermal imaging methods, eliminates the need for assumed relations between electrical and thermal properties. This will enable testing and modelling of individual junction behaviour and hotspot formation. Investigating both reversible and irreversible mechanisms will contribute to the advancement of devices with improved performance and reliability.
We present rapid and robust protocols for STIRAP and quantum logic gates. Our gates are based on geometric phases acquired by instantaneous eigenstates of a slowly accelerating inertial Hamiltonian. To begin, we establish the criteria for inertial evolution and subsequently engineer pulse shapes that fulfill these conditions. These tailored pulses are then used to optimize geometric logic gates. We analyze a realization of our protocols with $^{87}$Rb atoms, resulting in gate fidelity that approaches the current state-of-the-art, with marked improvements in robustness.
The variational quantum eigensolver (VQE) is a hybrid algorithm that has the potential to provide a quantum advantage for practical chemistry problems that are currently intractable on classical computers. VQE trains parameterized quantum circuits using a classical optimizer to approximate the eigenvalues and eigenstates of a given Hamiltonian. However, VQE faces challenges in task-specific design and machine-specific architecture, particularly when running on noisy quantum devices. This can have a negative impact on its trainability, accuracy, and efficiency, resulting in noisy quantum data. We propose variational denoising, an unsupervised learning method that employs a parameterized quantum neural network to improve the solution of VQE by learning from noisy VQE outputs. Our approach can significantly decrease energy-estimation errors and increase fidelities with ground states compared to the noisy input data for the $\text{H}_2$, LiH, and $\text{BeH}_2$ molecular Hamiltonians and the transverse-field Ising model. Notably, it requires only noisy data for training. Variational denoising can be integrated into quantum hardware, increasing its versatility as an end-to-end quantum processing step for quantum data.
Quantum computers require high-fidelity quantum gates. These gates are obtained by routine calibration tasks that eat into the availability of cloud-based devices. Restless circuit execution speeds up characterization and calibration by foregoing qubit reset in between circuits; post-processing the measured data recovers the desired signal. However, since the qubits are not reset, leakage -- typically present at the beginning of a calibration -- may cause issues. Here, we develop a simulator of restless circuit execution based on a Markov chain to study the effect of leakage. In the context of error-amplifying single-qubit gate sequences, we show that restless calibration tolerates up to 0.5% leakage, which is large compared to the $10^{-4}$ error rates of modern single-qubit gates. Furthermore, we show that restless circuit execution with leaky gates reduces by 33% the sensitivity of the ORBIT cost function of J. Kelly et al., which is typically used in closed-loop optimal control~[Phys. Rev. Lett. 112, 240504 (2014)]. Our results are obtained with standard qubit-state discrimination, showing that restless circuit execution is resilient against misclassified non-computational states. In summary, the restless method is sufficiently robust against leakage, in both standard and closed-loop optimal-control gate calibration, to provide accurate results.
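A toy version of such a Markov-chain picture, without leakage, already shows why restless execution works: since the qubit is not reset, the signal lives in whether consecutive outcomes differ. The sketch below is a deliberately simplified illustration, not the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n_shots = 0.05, 100_000

state, outcomes = 0, []
for _ in range(n_shots):
    flip = rng.random() > eps   # the intended X gate succeeds w.p. 1 - eps
    state ^= int(flip)
    outcomes.append(state)      # measure without resetting the qubit

o = np.array(outcomes)
changed = o[1:] ^ o[:-1]        # post-processing: did consecutive shots differ?
print("estimated gate success probability:", changed.mean())  # ~ 1 - eps
```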
Dynamical fluctuations or rare events associated with atypical trajectories in chaotic maps due to specific initial conditions can crucially determine the fate of these systems, as they may lead to stability islands or regions of phase space displaying otherwise unusual behavior. Yet, finding such initial conditions is a daunting task precisely because of the chaotic nature of the system. In this work, we circumvent this problem by proposing a framework for finding an effective topologically-conjugate map whose typical trajectories correspond to atypical ones of the original map. This is illustrated by means of examples which focus on counterbalancing the instability of fixed points and periodic orbits, as well as on the characterization of a dynamical phase transition involving the finite-time Lyapunov exponent. The procedure parallels the application of the generalized Doob transform to the stochastic dynamics of Markov chains, diffusive processes, and open quantum systems, which in each case results in a new process having the prescribed statistics in its stationary state. This work thus brings chaotic maps into the growing family of systems whose rare fluctuations -- sustaining prescribed statistics of dynamical observables -- can be characterized and controlled by means of a large-deviation formalism.
Time-dependent drives play a crucial role in quantum computing efforts with circuit quantum electrodynamics. They enable single-qubit control, entangling logical operations, and qubit readout. However, their presence can lead to deleterious effects such as large ac-Stark shifts and unwanted qubit transitions, ultimately reflected in reduced control or readout fidelities. Qubit cloaking was introduced in [C. Lled\'o, R. Dassonneville, A. Moulinas et al., Nat. Commun. \textbf{14}, 6313 (2023)] to temporarily decouple the qubit from the coherent photon population of a driven cavity, allowing for the application of arbitrary displacements to the cavity field while avoiding the deleterious effects on the qubit. For qubit readout, cloaking makes it possible to prearm the cavity with an, in principle, arbitrarily large number of photons in anticipation of the qubit-state-dependent evolution of the cavity field, allowing for improved readout strategies. Here we take a closer look at two of them. The first is arm-and-release readout, introduced together with qubit cloaking, in which, after arming the cavity, the cloaking mechanism is released and the cavity field evolves under a constant drive amplitude. The second is an arm-and-longitudinal readout scheme, in which the cavity drive amplitude is slowly modulated after the release. We show that the two schemes complement each other, offering an improvement over standard dispersive readout for any values of the dispersive interaction and cavity decay rate, as well as any target measurement integration time. Our results provide a recommendation for improving qubit readout without changes to the standard circuit QED architecture.
In this chapter we discuss the Einstein-Podolsky-Rosen (EPR) theorem and its strong relation to Bell's theorem. The central role played by the concept of beable, introduced by Bell, is emphasized. In particular, we stress that the beables involved in the EPR and Bell theorems are not limited to hidden supplementary variables (e.g., as in the de Broglie-Bohm (dBB) pilot-wave theory) but also include the wave function. In full agreement with Bell, this allows us to reformulate the EPR and Bell results as strong theorems concerning nonlocality for quantum mechanics itself, and not only for hidden-variable approaches, as is often mistakenly assumed. Furthermore, we clarify some recurring ambiguities concerning `local realism' and emphasize that neither realism nor determinism nor counterfactual definiteness is a prerequisite of the EPR and Bell theorems.
Nelson's stochastic quantum mechanics provides an ideal arena to test how the Born rule is established from an initial probability distribution that is not identical to the square modulus of the wave function. Here, we investigate numerically this problem for three relevant cases: a double-slit interference setup, a harmonic oscillator, and a quantum particle in a uniform gravitational field. For all cases, Nelson's stochastic trajectories are initially localized at a definite position, thereby violating the Born rule. For the double slit and harmonic oscillator, typical quantum phenomena, such as interferences, always occur well after the establishment of the Born rule. In contrast, for the case of quantum particles free-falling in the gravity field of the Earth, an interference pattern is observed \emph{before} the completion of the quantum relaxation. This finding may pave the way to experiments able to discriminate standard quantum mechanics, where the Born rule is always satisfied, from Nelson's theory, for which an early subquantum dynamics may be present before full quantum relaxation has occurred. Although the mechanism through which a quantum particle might violate the Born rule remains unknown to date, we speculate that this may occur during fundamental processes, such as beta decay or particle-antiparticle pair production.
We consider the relation between three different approaches to defining quantum states across several times and locations: the pseudo-density matrix (PDM), the process matrix, and the multiple-time state approaches. Previous studies have shown that bipartite two-time states can reproduce the statistics of bipartite process matrices. Here, we show that the operational scenarios underlying two-time states can be represented as PDMs, and thereby construct a mapping from process matrices with measurements to PDMs. The existence of this mapping implies that PDMs can, like the process matrix, model processes with indefinite causal orders. We illustrate this ability by showing how the negativity of the PDM, a measure of temporal correlations, is activated by creating a quantum-switched order of operators associated with reset channels. The results contribute to the unification of quantum models of spatiotemporal states.
We design a quantum method for non-linear classical information compression. For compressing data obeying symmetries of the so-called hidden subgroup type, we prove an exponential speedup of quantum algorithm in terms of query complexity. We then generalize the method to a variational quantum algorithm that automatically compresses time-series data stored in a database with a priori unknown symmetries of the hidden subgroup type. The automatic compression exploits an encoder that computes the hidden subgroup and a decoder that reconstructs the data using the group structure. The algorithm can thus be viewed as a synthesis of hidden subgroup quantum computing and quantum autoencoders. The output of our algorithm compares favourably with that of a deep classical autoencoder for a tractable illustrative example. Our results show how quantum computers can efficiently compress certain types of data that cannot be efficiently compressible by classical computers. As an additional application, the computational advantage of the quantum compressor over its classical counterpart can be transformed into a quantum advantage for intelligent energy harvesting.
We give a necessary condition for photon state transformations in linear optical setups preserving the total number of photons. From an analysis of the algebra describing the quantum evolution, we find a conserved quantity that appears in all allowed optical transformations. We discuss some examples and numerical applications, with example code, and give three general no-go results. These include (i) the impossibility of deterministic transformations which redistribute the photons from one mode to two different modes, (ii) a proof that it is impossible to generate a perfect Bell state in heralded schemes with a separable input, for any number of ancillary photons and modes, with a fixed herald, and (iii) a restriction on the conversion between different types of entanglement (converting GHZ to W states).
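To illustrate the flavour of such an algebraic invariant, the sketch below takes, as an assumption, the quadratic Casimir $C = \sum_{ij} E_{ij}E_{ji}$ of $\mathfrak{u}(m)$, with $E_{ij} = a_i^\dagger a_j$, which commutes with every photon-number-preserving linear optical Hamiltonian $H = \sum_{ij} h_{ij} E_{ij}$ and is therefore conserved by any passive setup; whether this is exactly the paper's quantity is not asserted here. The check runs on a truncated two-mode Fock space.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (assumption): the invariant checked is the quadratic Casimir
# C = sum_ij E_ij E_ji with E_ij = adag_i a_j; <C> is conserved under
# U = exp(-iH) for any Hermitian mode-coupling matrix h.
cutoff, modes = 4, 2                       # Fock cutoff per mode; two modes

a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)   # single-mode annihilation
ops = [np.kron(a, np.eye(cutoff)), np.kron(np.eye(cutoff), a)]
E = [[ops[i].conj().T @ ops[j] for j in range(modes)] for i in range(modes)]

C = sum(E[i][j] @ E[j][i] for i in range(modes) for j in range(modes))

rng = np.random.default_rng(1)
h = rng.normal(size=(modes, modes)) + 1j * rng.normal(size=(modes, modes))
h = h + h.conj().T                          # random Hermitian mode coupling
H = sum(h[i, j] * E[i][j] for i in range(modes) for j in range(modes))
U = expm(-1j * H)

psi = np.zeros(cutoff**modes)
psi[1 * cutoff + 1] = 1.0                   # |1,1> input state
for state, label in ((psi, "before"), (U @ psi, "after")):
    print(label, np.real(state.conj() @ C @ state))   # identical values
```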
We present an iterative scheme to estimate the minimal duration in which a quantum gate can be realized while satisfying hardware constraints on the control pulse amplitudes. The scheme performs a sequence of unconstrained numerical optimal control cycles, each of which minimizes the gate infidelity for a given gate duration alongside a penalty term on the control pulse amplitudes. After each cycle, the gate duration is adjusted based on the inverse of the resulting maximum control pulse amplitude, by re-scaling the dynamics to a new duration at which the control pulses satisfy the amplitude constraints. The scaled controls then serve as an initial guess for the next unconstrained optimal control cycle, using the adjusted gate duration. We provide multiple numerical examples, each demonstrating fast convergence of the scheme towards a gate duration that is close to the quantum speed limit given the control pulse amplitude bound. The proposed technique is agnostic to the underlying system and control Hamiltonian models, as well as to the target unitary gate operation, making the time-scaling iteration an easy-to-implement and practically useful scheme for reducing the durations of quantum gate operations.
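A toy version of the iteration (our own single-qubit example with piecewise-constant controls and a target $X$ gate, not the systems of the paper) makes the mechanics concrete: each cycle runs an unconstrained optimization of infidelity plus an amplitude penalty, then rescales the duration by the ratio of the resulting maximum amplitude to the bound, scaling the controls so the pulse area is preserved.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Toy sketch (assumptions): single qubit, one control, H(t) = u(t)*X/2,
# target gate X, piecewise-constant pulses.
X = np.array([[0, 1], [1, 0]], dtype=complex)
target = X
n_seg, u_bound, penalty = 10, 1.0, 1e-3

def propagate(u, T):
    dt = T / n_seg
    U = np.eye(2, dtype=complex)
    for uk in u:                            # time-ordered product of segments
        U = expm(-1j * uk * X / 2 * dt) @ U
    return U

def objective(u, T):
    infid = 1 - abs(np.trace(target.conj().T @ propagate(u, T))) / 2
    return infid + penalty * np.mean(u**2)  # infidelity + amplitude penalty

T = 10.0                                    # deliberately too-long initial duration
u = np.full(n_seg, 0.1)                     # initial guess
for cycle in range(5):
    res = minimize(objective, u, args=(T,), method="BFGS")
    u = res.x
    scale = np.max(np.abs(u)) / u_bound     # how far the pulses are from the bound
    T_new = T * scale                       # rescale duration ...
    u = u * (T / T_new)                     # ... and controls (pulse area preserved)
    print(f"cycle {cycle}: T = {T:.3f} -> {T_new:.3f}, objective = {res.fun:.2e}")
    T = T_new
print(f"estimated minimal duration ~ {T:.3f} (speed limit: pi/u_bound = {np.pi / u_bound:.3f})")
```

In this toy model only the pulse area matters, so the iteration homes in on the speed-limit duration $\pi/u_{\rm max}$ within a couple of cycles; the general scheme applies the same rescaling logic to arbitrary system and control Hamiltonians.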
We realize collective enhancement and suppression of light scattered by an array of tweezer-trapped $^{87}$Rb atoms positioned within a strongly coupled Fabry-P\'{e}rot optical cavity. We illuminate the array with light directed transverse to the cavity axis, in the low saturation regime, and detect photons scattered into the cavity. For an array with integer-optical-wavelength spacing, each atom scatters light into the cavity with nearly identical amplitude, leading to an observed $N^2$ scaling of the cavity photon number as the atom number increases stepwise from $N=1$ to $N=8$. By contrast, for an array with half-integer-wavelength spacing, destructive interference of the scattering amplitudes yields a non-monotonic, sub-radiant cavity intensity versus $N$. By analyzing the polarization of light emitted from the cavity, we find that Rayleigh scattering can be collectively enhanced or suppressed with respect to Raman scattering. We also observe that atom-induced shifts and broadenings of the cavity resonance can be precisely tuned by varying the atom number and positions. Altogether, tweezer arrays provide exquisite control of atomic cavity QED, spanning from the single- to the many-body regime.
The advances in Artificial Intelligence (AI) and Machine Learning (ML) have opened up many avenues for scientific research and are adding new dimensions to the process of knowledge creation. However, even the most powerful and versatile ML applications to date operate primarily in the domain of analysis of associations and boil down to complex data fitting. Judea Pearl has pointed out that Artificial General Intelligence must involve interventions: the acts of doing and imagining. Any machine-assisted scientific discovery must therefore include causal analysis and interventions. In this context, we propose a causal learning model of physical principles which not only recognizes correlations but also brings out causal relationships. We use the principles of causal inference and interventions to study the cause-and-effect relationships in the context of some well-known physical phenomena. We show that this technique can not only figure out associations among data, but is also able to correctly ascertain the cause-and-effect relations among the variables, thereby strengthening (or weakening) our confidence in the proposed model of the underlying physical process.
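A minimal illustration of the role of interventions (a generic toy structural model, not the physical systems analyzed by the authors): in data generated by a force $X$ causing an acceleration $Y$, the observational correlation is symmetric between $X$ and $Y$, while Pearl's do-operations break the symmetry and expose the causal direction.

```python
import numpy as np

# Illustrative sketch (assumption): toy structural model X -> Y, e.g. applied
# force X causing acceleration Y = X/m + noise. Correlation is symmetric;
# interventions (Pearl's "do") are not.
rng = np.random.default_rng(0)
m, n = 2.0, 100_000

# Observational data generated by the mechanism X -> Y
X = rng.normal(1.0, 1.0, n)
Y = X / m + 0.05 * rng.normal(size=n)
print("observational corr(X, Y):", np.corrcoef(X, Y)[0, 1])

# do(X = 3): Y responds, because Y is causally downstream of X
X_do = np.full(n, 3.0)
Y_do = X_do / m + 0.05 * rng.normal(size=n)
print("E[Y | do(X=3)] =", Y_do.mean(), " vs E[Y] =", Y.mean())

# do(Y = 3): X is unaffected, because setting Y severs its incoming arrow
X_after = rng.normal(1.0, 1.0, n)     # X still follows its own mechanism
print("E[X | do(Y=3)] =", X_after.mean(), " vs E[X] =", X.mean())
```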
We here reply to a recent Comment by Vaidman [\href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.048001}{Phys. Rev. Res. 5, 048001 (2023)}] on our paper [\href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.023048}{Phys. Rev. Res. 5, 023048 (2023)}]. In his Comment, Vaidman first admits that he is simply defining (assuming) that the weak trace gives the presence of a particle; however, in this case, he should use a term other than presence, which already has a separate, intuitive meaning distinct from ``where a weak trace is''. Despite this admission, Vaidman then argues for this definition by appealing to an objectively existing notion of presence. We show that these appeals rely on their own conclusion: that there is always a matter of fact about the location of a quantum particle.
A ubiquitous problem in quantum physics is to understand the ground-state properties of many-body systems. Confronted with the fact that exact diagonalisation quickly becomes impossible as the system size increases, variational approaches are typically employed as a scalable alternative: the energy is minimised over a subset of all possible states, and different physical quantities are then computed over the solution state. Despite remarkable success, rigorously speaking, all that variational methods offer are upper bounds on the ground-state energy. On the other hand, so-called relaxations of the ground-state problem based on semidefinite programming represent a complementary approach, providing lower bounds on the ground-state energy. However, in their current implementation, neither variational nor relaxation methods offer provable bounds on observables in the ground state other than the energy. In this work, we show that the combination of the two classes of approaches can be used to derive certifiable bounds on the value of any observable in the ground state, such as correlation functions of arbitrary order, structure factors, or order parameters. We illustrate the power of this approach on paradigmatic examples of 1D and 2D spin-one-half Heisenberg models. To improve the scalability of the method, we exploit the symmetries and sparsity of the considered systems to reach sizes of hundreds of particles at much higher precision than previous works. Our analysis therefore shows how to obtain certifiable bounds on many-body ground-state properties beyond the energy in a scalable way.
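Schematically (a two-qubit toy with a hypothetical variational bound $E_{\rm up} = -2.9$, not the large-scale relaxations of the paper), the certification works by optimizing the target observable over all states whose energy is compatible with the variational upper bound. At this size the full density matrix is affordable, so the semidefinite program below is exact; at scale the same program is posed over a moment-matrix relaxation instead.

```python
import numpy as np
import cvxpy as cp

# Toy sketch (assumptions): two-qubit Heisenberg model H = XX + YY + ZZ
# (ground energy -3, singlet) and an assumed variational upper bound E_up.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
O = np.kron(Z, Z)                     # observable to certify, e.g. a correlator
E_up = -2.9                           # hypothetical variational (upper-bound) input

rho = cp.Variable((4, 4), hermitian=True)
cons = [rho >> 0,
        cp.real(cp.trace(rho)) == 1,
        cp.real(cp.trace(rho @ H)) <= E_up]

bounds = []
for sense in (cp.Minimize, cp.Maximize):
    prob = cp.Problem(sense(cp.real(cp.trace(rho @ O))), cons)
    prob.solve()
    bounds.append(prob.value)
print(f"certified {bounds[0]:.3f} <= <ZZ> <= {bounds[1]:.3f}")   # ~ [-1.0, -0.9]
```

The tighter the variational energy bound, the tighter the certified window on $\langle O\rangle$, which is the mechanism the combined variational-plus-relaxation approach exploits.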
The realization of quantum advantage with noisy intermediate-scale quantum (NISQ) machines has become one of the major challenges in the computational sciences. Maintaining coherence in a physical system with more than ten qubits is a critical challenge that motivates research on compact system representations to reduce algorithm complexity. Toward this end, quantum simulation based on the variational quantum eigensolver (VQE) is considered one of the most promising algorithms for quantum chemistry in the NISQ era. We investigate a reduced mapping of one spatial orbital to a single qubit for analyzing the ground-state energy, in which the Pauli operators of the qubits are mapped to the creation/annihilation of singlet pairs of electrons. To include the effect of non-bosonic (or non-paired) excitations, we introduce a simple correction scheme in the electron correlation model, approximated by the geometric mean of the bosonic (or paired) terms. Employing this scheme in a VQE algorithm, we assess the ground-state energies of H2O, N2, and Li2O in good agreement with full configuration interaction (FCI) models, using only 6, 8, and 12 qubits respectively, with quantum gate depths proportional to the squares of the qubit counts. With the adopted seniority-zero approximation, which uses only half the qubit count of a conventional VQE algorithm, we find that our non-bosonic correction method yields reliable quantum chemistry simulations, at least for the tested systems.
This thesis explores the application of the Symmetry-Breaking/Symmetry-Restoration methodology on quantum computers to better approximate a Hamiltonian's ground state energy within a variational framework in many-body physics. This involves intentionally breaking and restoring the symmetries of the wave function ansatz at different stages of the variational search for the ground state. The Variational Quantum Eigensolver (VQE) is utilized for the variational component together with an ansatz inspired by the Bardeen-Cooper-Schrieffer (BCS) theory. The applications were demonstrated using the pairing and Hubbard Hamiltonians. Two approaches were identified with the VQE method: varying the symmetry-breaking ansatz parameters before or after symmetry restoration, termed Quantum Projection After Variation and Quantum Variation After Projection, respectively. The main contribution of this thesis was the development of a variety of symmetry restoration techniques based on the principles of the Quantum Phase Estimation algorithm, the notion of a Quantum "Oracle," and the Classical Shadow formalism. In the final part, hybrid quantum-classical techniques were introduced to extract an approximation of the low-lying spectrum of a Hamiltonian. Assuming accurate Hamiltonian moment extraction from their generating function with a quantum computer, two methods were presented for spectral analysis: the t-expansion method and the Krylov method, which provides, in particular, information about the evolution of the survival probability. Furthermore, the Quantum Krylov method was introduced, offering similar insights without the need to estimate Hamiltonian moments, a task that can be difficult on near-term quantum computers.
This paper addresses the problem of generating a common random string with min-entropy $k$ using an unlimited supply of noisy EPR pairs or quantum isotropic states, with minimal communication between Alice and Bob. We consider two communication models, one-way classical communication and one-way quantum communication, and derive upper bounds on the optimal common randomness rate for both. We show that, in the case of classical communication, quantum isotropic states offer no advantage over noisy classical correlations [GR16]. In the case of quantum communication, we demonstrate that the common randomness rate can be increased by using superdense coding on quantum isotropic states. We also prove an upper bound on the optimal common randomness rate achievable with one-way quantum communication. As an application, our result yields upper bounds on the classical capacity of the noiseless quantum channel assisted by noisy entanglement [HHH+01].