Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2024-01-05 12:30 to 2024-01-09 11:30 | Next meeting is Tuesday Oct 29th, 10:30 am.
The paradigm-changing possibility of collective neutrino-antineutrino oscillations was recently advanced in analogy to collective flavor oscillations. However, the amplitude for the backward scattering process $\nu_{\mathbf{p}_1}\overline\nu_{\mathbf{p}_2}\to\nu_{\mathbf{p}_2}\overline\nu_{\mathbf{p}_1}$ is helicity-suppressed and vanishes for massless neutrinos, implying that there is no off-diagonal refractive index between $\nu$ and $\overline\nu$ of a single flavor of massless neutrinos. For a nonvanishing mass, collective helicity oscillations are possible, representing de facto $\nu$--$\overline\nu$ oscillations in the Majorana case. However, such phenomena are suppressed by the smallness of neutrino masses, as discussed in previous literature.
The Galactic Center Excess (GCE) remains an enduring mystery, with leading explanations being annihilating dark matter or an unresolved population of millisecond pulsars. Analyzing the morphology of the GCE provides critical clues to identify its exact origin. We investigate the robustness of the inferred GCE morphology against the effects of masking, an important step in the analysis where the gamma-ray emission from point sources and the Galactic disk is excluded. Using different masks constructed from Fermi point source catalogs and a wavelet method, we find that the GCE morphology, particularly its ellipticity and cuspiness, is relatively independent of the choice of mask for energies above 2-3 GeV. The GCE morphology systematically favors an approximately spherical shape, as expected for dark matter annihilation. Compared to various stellar bulge profiles, a spherical dark matter annihilation profile better fits the data across different masks and Galactic diffuse emission backgrounds, except for the stellar bulge profile from Coleman et al. (2020), which provides a similar fit to the data. Modeling the GCE with two components, one from dark matter annihilation and one tracing the Coleman Bulge, we find this two-component model outperforms any single component or combinations of dark matter annihilation and other stellar bulge profiles. Uncertainty remains about the exact fraction contributed by each component across different background models and masks. However, when the Coleman Bulge dominates, its corresponding spectrum lacks characteristics typically associated with millisecond pulsars, suggesting that it mostly models the emission from other sources rather than the GCE, which remains present and spherically symmetric.
Polynomial inflation is a very simple and well-motivated scenario. A potential with a concave ``almost'' saddle point at field value $\phi = \phi_0$ fits the cosmic microwave background (CMB) data well and makes testable predictions for the running of the spectral index and the tensor-to-scalar ratio. In this work we analyze leptogenesis in the polynomial inflation framework. We delineate the allowed parameter space that gives rise to the correct baryon asymmetry while remaining consistent with data on neutrino oscillations. To that end we consider two different reheating scenarios. $(i)$ If the inflaton decays into two bosons, the reheating temperature can be as high as $T_\text{rh} \sim 10^{14}$ GeV without spoiling the flatness of the potential, allowing vanilla $N_1$ thermal leptogenesis to work if $T_\text{rh}> M_1$, where $N_1$ is the lightest right--handed neutrino and $M_1$ its mass. Moreover, if the dominant decay of the inflaton is into Higgs bosons of the Standard Model, we find that rare three--body inflaton decays into a Higgs boson plus one light and one heavy neutrino allow leptogenesis even for $T_\text{rh} < M_1$ if the inflaton mass is of order $10^{12}$ GeV or higher; in the polynomial inflation scenario this requires $\phi_0 \gtrsim 2.5~M_P$. This novel mechanism of non--thermal leptogenesis is quite generic, since the coupling leading to the three--body final state is required in the type I see--saw mechanism. $(ii)$ If the inflaton decays into two fermions, the flatness of the potential implies a lower reheating temperature. In this case inflaton decay to two $N_1$ still allows successful non--thermal leptogenesis if $\phi_0 \gtrsim 0.1~M_P$ and $T_\text{rh} \gtrsim 10^{6}$ GeV.
We investigate quantum decoherence in a class of models which interpolates between expanding (inflation) and contracting (ekpyrosis) scenarios. For the cases which result in a scale-invariant power spectrum, we find that ekpyrotic universes lead to complete decoherence of the curvature perturbation before the bounce. This is in stark contrast to the inflationary case, where recoherence has been previously observed in some situations. Although the purity can be computed for couplings of all sizes, we also study the purity perturbatively and observe that late-time (secular growth) breakdown of perturbation theory often occurs in these cases. To address this, we establish a simple yet powerful late-time purity resummation which captures the exact evolution to a remarkable level, while maintaining analytical control. We conclude that the cosmological background plays a crucial role in the decoupling of the heavy fields during inflation and its alternatives.
Dust attenuation in star-forming galaxies (SFGs), as parameterized by the infrared excess (IRX $\equiv L_{\rm IR}/L_{\rm UV}$), is found to be tightly correlated with star formation rate (SFR), metallicity and galaxy size, following a universal IRX relation up to $z=3$. This scaling relation can provide a fundamental constraint for theoretical models to reconcile galaxy star formation, chemical enrichment, and structural evolution across cosmic time. We attempt to reproduce the universal IRX relation over $0.1\leq z\leq 2.5$ using the EAGLE hydrodynamical simulations and examine which parameters are most important in determining galaxy dust attenuation. Our findings show that while the predicted universal IRX relation from EAGLE approximately aligns with observations at $z\leq 0.5$, noticeable disparities arise at different stellar masses and higher redshifts. Specifically, we investigate how modifying various galaxy parameters affects the predicted universal IRX relation in comparison to the observed data. We demonstrate that the simulated gas-phase metallicity is the critical quantity for the shape of the predicted universal IRX relation. We find that the influence of infrared luminosity and infrared excess is less important, while galaxy size has virtually no effect. Overall, the EAGLE simulations are unable to replicate some of the observed relationships between IRX and galaxy parameters of SFGs, emphasizing the need for further investigation and testing of our current state-of-the-art theoretical models.
In this work we present a novel approach to the study of cosmological particle production in asymptotically Minkowski spacetimes. We emphasize that it is possible to determine the amount of particle production by focusing on the mathematical properties of the mode function equations, i.e., their singularities and monodromies, sidestepping the need to solve those equations. We consider in detail the creation of scalar and spin-1/2 particles in four-dimensional, asymptotically Minkowski, flat FLRW spacetimes. We explain that when the mode function equation for scalar fields has only regular singular points, the corresponding scale factors are asymptotically Minkowski. For Dirac spin-1/2 fields, the requirement of mode function equations with only regular singular points is more restrictive, and picks out a subset of the aforementioned scale factors. For the scalar case, we argue that there are two different regimes of particle production; while most of the literature has focused on only one of these regimes, the other regime presents enhanced particle production. On the other hand, for Dirac fermions we find a single regime of particle production. Finally, we very briefly comment on the possibility of studying particle production in spacetimes that do not asymptote to Minkowski, by considering mode function equations with irregular singular points.
We report on a detailed spatial and spectral analysis of the large-scale X-ray emission from the merging cluster Cygnus A. We use 2.2 Ms Chandra and 40 ks XMM-Newton archival datasets to determine the thermodynamic properties of the intracluster gas in the merger region between the two sub-clusters in the system. The resulting profiles exhibit temperature enhancements that imply significant heating along the merger axis. Possible sources for this heating include the shock from the ongoing merger, past activity of the powerful AGN in the core, or a combination of both. To distinguish between these scenarios, we compare the observed X-ray properties of Cygnus A with simple, spherical cluster models. These models are constructed using azimuthally averaged density and temperature profiles determined from the undisturbed regions of the cluster and folded through MARX to produce simulated Chandra observations. The thermodynamic properties in the merger region from these simulated X-ray observations were used as a baseline for comparison with the actual observations. We identify two distinct components in the temperature structure along the merger axis: a smooth, large-scale temperature excess we attribute to the ongoing merger, and a series of peaks where the temperatures are enhanced by 0.5-2.5 keV. If these peaks are attributable to the central AGN, the location and strength of these features imply that Cygnus A has been active for the past 300 Myr, injecting a total of $\sim$10$^{62}$ erg into the merger region. This corresponds to $\sim$10% of the energy deposited by the merger shock.
Flux excesses in the early time light curves of Type Ia supernovae (SNe\,Ia) are predicted by multiple theoretical models and have been observed in a number of nearby SNe\,Ia over the last decade. However, the astrophysical processes that cause these excesses may affect their use as standardizable candles for cosmological parameter measurements. In this paper, we perform a systematic search for early-time excesses in SNe\,Ia observed by the Zwicky Transient Facility (ZTF) to study whether SNe\,Ia with these excesses yield systematically different Hubble residuals. We analyze two compilations of ZTF SN\,Ia light curves from its first year of operations: 127 high-cadence light curves from \citet{Yao19} and 305 light curves from the ZTF cosmology data release of \citet{Dhawan22}. We detect significant early-time excesses for 17 SNe\,Ia in these samples and find that the excesses have an average $g-r$ color of $0.06\pm0.09$~mag; we do not find a clear preference for blue excesses as predicted by several models. Using the SALT3 model, we measure Hubble residuals for these two samples and find that excess-having SNe\,Ia may have lower Hubble residuals (HR) after correcting for shape, color, and host-galaxy mass, at $\sim$2-3$\sigma$ significance; our baseline result is $\Delta HR = -0.056 \pm 0.026$~mag ($2.2 \sigma$). We compare the host-galaxy masses of excess-having and no-excess SNe\,Ia and find they are consistent, though at marginal significance excess-having SNe\,Ia may prefer lower-mass hosts. Additional discoveries of early excess SNe\,Ia will be a powerful way to understand potential biases in SN\,Ia cosmology and probe the physics of SN\,Ia progenitors.
We present cosmological constraints from the sample of Type Ia supernovae (SN Ia) discovered during the full five years of the Dark Energy Survey (DES) Supernova Program. In contrast to most previous cosmological samples, in which SN are classified based on their spectra, we classify the DES SNe using a machine learning algorithm applied to their light curves in four photometric bands. Spectroscopic redshifts are acquired from a dedicated follow-up survey of the host galaxies. After accounting for the likelihood of each SN being a SN Ia, we find 1635 DES SN in the redshift range $0.10<z<1.13$ that pass quality selection criteria and can be used to constrain cosmological parameters. This quintuples the number of high-quality $z>0.5$ SNe compared to the previous leading compilation of Pantheon+, and results in the tightest cosmological constraints achieved by any SN data set to date. To derive cosmological constraints we combine the DES supernova data with a high-quality external low-redshift sample consisting of 194 SNe Ia spanning $0.025<z<0.10$. Using SN data alone and including systematic uncertainties we find $\Omega_{\rm M}=0.352\pm 0.017$ in a flat $\Lambda$CDM model, and $(\Omega_{\rm M},w)=(0.264^{+0.074}_{-0.096},-0.80^{+0.14}_{-0.16})$ in a flat $w$CDM model. For a flat $w_0w_a$CDM model, we find $(\Omega_{\rm M},w_0,w_a)=(0.495^{+0.033}_{-0.043},-0.36^{+0.36}_{-0.30},-8.8^{+3.7}_{-4.5})$, consistent with a constant equation of state to within $\sim2 \sigma$. Including Planck CMB data, SDSS BAO data, and DES $3\times2$-point data gives $(\Omega_{\rm M},w)=(0.321\pm0.007,-0.941\pm0.026)$. In all cases dark energy is consistent with a cosmological constant to within $\sim2\sigma$. In our analysis, systematic errors on cosmological parameters are subdominant compared to statistical errors; these results thus pave the way for future photometrically classified supernova analyses.
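As a reminder of the distance-redshift relation underlying fits like the flat $w$CDM result above, here is a minimal numerical sketch of the luminosity distance and distance modulus in a flat $w$CDM cosmology. The default $\Omega_M$ and $w$ echo the abstract's best-fit values, but $H_0 = 70$ km/s/Mpc, the function names, and the integration settings are illustrative assumptions, not part of the DES analysis:

```python
import math

def E(z, om, w):
    """Dimensionless Hubble rate E(z) = H(z)/H0 in a flat wCDM model."""
    return math.sqrt(om * (1 + z)**3 + (1 - om) * (1 + z)**(3 * (1 + w)))

def luminosity_distance(z, om=0.264, w=-0.80, h0=70.0, steps=2000):
    """Luminosity distance in Mpc via trapezoidal integration of 1/E(z)."""
    c = 299792.458  # speed of light in km/s
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        integral += 0.5 * dz * (1.0 / E(z1, om, w) + 1.0 / E(z2, om, w))
    return (1 + z) * (c / h0) * integral

def distance_modulus(z, **kw):
    """Distance modulus mu = 5 log10(D_L / 10 pc), with D_L in Mpc."""
    return 5.0 * math.log10(luminosity_distance(z, **kw) * 1e6 / 10.0)
```

At low redshift the relation reduces to Hubble's law; it is the redshift dependence of the integral that lets a sample spanning $0.025 < z < 1.13$ constrain $\Omega_M$ and $w$ simultaneously.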
We present the full Hubble diagram of photometrically classified Type Ia supernovae (SNe Ia) from the Dark Energy Survey supernova program (DES-SN). DES-SN discovered more than 20,000 SN candidates and obtained spectroscopic redshifts of 7,000 host galaxies. Based on the light-curve quality, we select 1635 photometrically identified SNe Ia with spectroscopic redshifts $0.10 < z < 1.13$, which is the largest sample of supernovae from any single survey and increases the number of known $z>0.5$ supernovae by a factor of five. In a companion paper, we present cosmological results of the DES-SN sample combined with 194 spectroscopically classified SNe Ia at low redshift as an anchor for cosmological fits. Here we present extensive modeling of this combined sample and validate the entire analysis pipeline used to derive distances. We show that the statistical and systematic uncertainties on cosmological parameters are $\sigma_{\Omega_M,{\rm stat+sys}}^{\Lambda{\rm CDM}}=$0.017 in a flat $\Lambda$CDM model, and $(\sigma_{\Omega_M},\sigma_w)_{\rm stat+sys}^{w{\rm CDM}}=$(0.082, 0.152) in a flat $w$CDM model. Combining the DES SN data with the highly complementary CMB measurements of Planck Collaboration (2020) reduces uncertainties on cosmological parameters by a factor of 4. In all cases, statistical uncertainties dominate over systematics. We show that uncertainties due to photometric classification make up less than 10% of the total systematic uncertainty budget. This result sets the stage for the next generation of SN cosmology surveys such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time.
The quanta of phantom dark energy models are negative energy particles, whose maximum magnitude of energy $|E|$ must be less than a cutoff $\Lambda\lesssim 20\,$MeV. They are produced by spontaneous decay of the vacuum into phantoms plus normal particles. I review general cosmological constraints that have been derived from the effects of such phantom fluid production, and a possible application: the generation of boosted dark matter or radiation that could be directly detected. Recent excess events from the DAMIC experiment can be well fit by such processes.
The large value of the non-minimal coupling constant $\xi$ required to satisfy CMB observations in Higgs inflation violates unitarity. In this work we study Higgs inflation with a non-canonical kinetic term of the DBI form to determine whether $\xi$ can be reduced. To study the inflationary dynamics, we transform the action to the Einstein frame, in which the Higgs is minimally coupled to gravity with a non-canonical kinetic term and a modified potential. We choose the Higgs self-coupling constant $\lambda=0.14$ for our analysis. We find that the value of $\xi$ can be reduced from $10^{3}-10^{4}$ to $\mathcal{O}(10)$ to satisfy the Planck constraint on the amplitude of the scalar power spectrum. However, this model produces a larger tensor-to-scalar ratio $r$ than Higgs inflation with a canonical kinetic term. We also find that, to satisfy the joint constraints on the scalar spectral index $n_s$ and tensor-to-scalar ratio $r$ from Planck-2018 and the bounds on $r$ from Planck and BICEP3, the value of $\xi$ should be of the order of $10^4$. Thus, the issue of unitarity violation remains even after considering Higgs inflation with a non-canonical kinetic term.
Absolute distances from strong lensing can anchor Type Ia Supernovae (SNe Ia) at cosmological distances, giving a model-independent inference of the Hubble constant ($H_0$). Future observations could provide strong lensing time-delay distances with source redshifts up to $z\,\simeq\,4$, which are much higher than the maximum redshift of SNe Ia observed so far. In order to make full use of time-delay distances measured at higher redshifts, we use quasars as a complementary cosmic probe to measure cosmological distances at redshifts beyond those of SNe Ia and provide a model-independent method to determine $H_0$. In this work, we demonstrate a model-independent, joint constraint from SNe Ia, quasars, and time-delay distances of strongly lensed quasars. We first generate mock data sets of SNe Ia, quasars, and time-delay distances based on a fiducial cosmological model. Then, we calibrate the quasar parameters model-independently using Gaussian process (GP) regression with the mock SNe Ia data. Finally, we determine the value of $H_0$ model-independently using GP regression from mock quasars and time-delay distances from strong lensing systems. As a comparison, we also show the $H_0$ results obtained from mock SNe Ia in combination with time-delay lensing systems whose redshifts overlap with SNe Ia. Our results show that higher-redshift quasars have great potential to extend the redshift coverage of SNe Ia, enabling full use of the strong-lens time-delay distance measurements from ongoing cosmic surveys and improving the accuracy of the estimation of $H_0$ from $2.1\%$ to $1.3\%$ when the uncertainties of the time-delay distances are $5\%$ of the distance values.
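The Gaussian process regression step above can be illustrated with a minimal GP interpolator. This is a generic sketch with a squared-exponential kernel; the kernel choice, hyperparameters, and function names are assumptions for illustration, not the paper's actual calibration code:

```python
import numpy as np

def rbf_kernel(x1, x2, amp=1.0, scale=0.5):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale)**2)

def gp_predict(x_train, y_train, x_test, noise=0.05):
    """Posterior mean and variance of a zero-mean GP conditioned on training data."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)   # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

Conditioning on SN-like training points yields a smooth distance-redshift curve with uncertainties but without assuming a cosmological model, which is the role GP regression plays in the calibration described above.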
We find that combined Planck cosmic microwave background, baryon acoustic oscillation and supernova data analyzed under $\Lambda$CDM are in 4.9$\sigma$ tension with the eBOSS Ly$\alpha$ forest in the inference of the linear matter power spectrum at wavenumber $k \sim 1\,h\,\mathrm{Mpc}^{-1}$ and redshift $z = 3$. Model extensions can alleviate this tension: running of the tilt of the primordial power spectrum ($\alpha_\mathrm{s} \sim -0.01$); a fraction $\sim (1 - 5)\%$ of ultra-light axion dark matter (DM) with particle mass $\sim 10^{-25}$ eV; or warm DM with mass $\sim 90$ eV. The new DESI survey, coupled with high-accuracy modeling, will help distinguish the source of this discrepancy.
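The quoted 4.9$\sigma$ figure is a statement about how far apart two inferences sit relative to their combined uncertainty. For two independent Gaussian measurements the metric reduces to a one-liner (a simplified sketch; the actual analysis compares full posteriors, and the numbers in the example below are made up):

```python
import math

def gaussian_tension(m1, s1, m2, s2):
    """Number of sigma separating two independent Gaussian measurements."""
    return abs(m1 - m2) / math.sqrt(s1 ** 2 + s2 ** 2)

def two_tailed_p(n_sigma):
    """Two-tailed probability of a Gaussian fluctuation at least n_sigma from the mean."""
    return math.erfc(n_sigma / math.sqrt(2.0))
```

For example, `gaussian_tension(1.0, 0.1, 1.49, 0.1)` is about 3.5, and a 4.9$\sigma$ discrepancy corresponds to a two-tailed probability of roughly $10^{-6}$.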
The intrinsic alignment (IA) of galaxies acts as a systematic effect in weak lensing measurements and tends to introduce biases: it mimics the gravitational lensing signal, which makes it difficult to distinguish from the true weak lensing effect. Hence, it is critical to account for this contamination to interpret the results correctly. This study aims at a quantitative analysis of IA using the Tidal Alignment and Tidal Torquing (TATT) model. We also investigate how the signals for shear and galaxy-galaxy lensing (GGL) behave upon changing the parameters of the TATT model. The data for this study were prepared with a computational pipeline based on the Cocoa framework to explore the parameter space of the intrinsic shape signal. Through this work, we identify that the linear terms of the intrinsic shape signal are dominant in the case of GGL, while the higher-order terms dictate the shear signal.
We explore the potential of enhancing LLM performance in astronomy-focused question-answering through targeted, continual pre-training. By employing a compact 7B-parameter LLaMA-2 model and focusing exclusively on a curated set of astronomy corpora -- comprising abstracts, introductions, and conclusions -- we achieve notable improvements in specialized topic comprehension. While general LLMs like GPT-4 excel in broader question-answering scenarios due to superior reasoning capabilities, our findings suggest that continual pre-training with limited resources can still enhance model performance on specialized topics. Additionally, we present an extension of AstroLLaMA: the fine-tuning of the 7B LLaMA model on a domain-specific conversational dataset, culminating in the release of the chat-enabled AstroLLaMA for community use. Comprehensive quantitative benchmarking is currently in progress and will be detailed in an upcoming full paper. The model, AstroLLaMA-Chat, is now available at https://huggingface.co/universeTBD, providing the first open-source conversational AI tool tailored for the astronomy community.
We aim to investigate the nature of time-variable X-ray sources detected in the {\it XMM-Newton} serendipitous survey. The X-ray light curves of objects in the {\it XMM-Newton} serendipitous survey were searched for variability and coincident serendipitous sources observed by {\it Chandra} were also investigated. Subsequent infrared spectroscopy of the counterparts to the X-ray objects that were identified using UKIDSS was carried out using {\it ISAAC} on the VLT. We found that the object 4XMM~J182531.5--144036 detected in the {\it XMM-Newton} serendipitous survey in April 2008 was also detected by {\it Chandra} as CXOU~J182531.4--144036 in July 2004. Both observations reveal a hard X-ray source displaying a coherent X-ray pulsation at a period of 781~s. The source position is coincident with a $K=14$ mag infrared object whose spectrum exhibits strong HeI and Br$\gamma$ emission lines and an infrared excess above that of early B-type dwarf or giant stars. We conclude that 4XMM~J182531.5--144036 is a Be/X-ray binary pulsar exhibiting persistent X-ray emission and is likely in a long period, low eccentricity orbit, similar to X Per.
We present initial results from a JWST survey of the youngest Galactic core-collapse supernova remnant Cassiopeia A (Cas A), made up of NIRCam and MIRI imaging mosaics that map emission from the main shell, interior, and surrounding circumstellar/interstellar material (CSM/ISM). We also present four exploratory positions of MIRI/MRS IFU spectroscopy that sample ejecta, CSM, and associated dust from representative shocked and unshocked regions. Surprising discoveries include: 1) a web-like network of unshocked ejecta filaments resolved to 0.01 pc scales exhibiting an overall morphology consistent with turbulent mixing of cool, low-entropy matter from the progenitor's oxygen layer with hot, neutrino and radioactively heated high-entropy matter, 2) a thick sheet of dust-dominated emission from shocked CSM seen in projection toward the remnant's interior pockmarked with small (approximately one arcsecond) round holes formed by knots of high-velocity ejecta that have pierced through the CSM and driven expanding tangential shocks, 3) dozens of light echoes with angular sizes between 0.1 arcsecond to 1 arcminute reflecting previously unseen fine-scale structure in the ISM. NIRCam observations place new upper limits on infrared emission from the neutron star in Cas A's center and tightly constrain scenarios involving a possible fallback disk. These JWST survey data and initial findings help address unresolved questions about massive star explosions that have broad implications for the formation and evolution of stellar populations, the metal and dust enrichment of galaxies, and the origin of compact remnant objects.
Stars formed with initial mass over 50 Msun are very rare today, but they are thought to be more common in the early universe. The fates of those early, metal-poor, massive stars are highly uncertain. Most are expected to directly collapse to black holes, while some may explode as a result of rotationally powered engines or the pair-creation instability. We present the chemical abundances of J0931+0038, a nearby low-mass star identified in early follow-up of SDSS-V Milky Way Mapper, which preserves the signature of unusual nucleosynthesis from a massive star in the early universe. J0931+0038 has relatively high metallicity ([Fe/H] = -1.76 +/- 0.13) but an extreme odd-even abundance pattern, with some of the lowest known abundance ratios of [N/Fe], [Na/Fe], [K/Fe], [Sc/Fe], and [Ba/Fe]. The implication is that a majority of its metals originated in a single extremely metal-poor nucleosynthetic source. An extensive search through nucleosynthesis predictions finds a clear preference for progenitors with initial mass > 50 Msun, making J0931+0038 one of the first observational constraints on nucleosynthesis in this mass range. However, the full abundance pattern is not matched by any models in the literature. J0931+0038 thus presents a challenge for the next generation of nucleosynthesis models and motivates the study of high-mass progenitor stars impacted by convection, rotation, jets, and/or binary companions. Though rare, more examples of unusual early nucleosynthesis in metal-poor stars should be found in upcoming large spectroscopic surveys.
The recent survey of the core-collapse supernova remnant Cassiopeia A (Cas A) with the MIRI instrument on board the James Webb Space Telescope (JWST) revealed a large structure in the interior region, referred to as the "Green Monster". Although its location suggests that it is an ejecta structure, the infrared properties of the "Green Monster" hint at a circumstellar medium (CSM) origin. In this companion paper to the JWST Cas A paper, we investigate the filamentary X-ray structures associated with the "Green Monster" using Chandra X-ray Observatory data. We extracted spectra along the "Green Monster" as well as from shocked CSM regions. Both the extracted spectra and a principal component analysis show that the "Green Monster" emission properties are similar to those of the shocked CSM. The spectra are well fit by a model consisting of a combination of a non-equilibrium ionization model and a power-law component, modified by Galactic absorption. All the "Green Monster" spectra show a blueshift of ~2500 km/s, suggesting that the structure is on the near side of Cas A. The ionization age is around $n_{e}t$ = $1.4 \times 10^{11}$ cm$^{-3}$s. This translates into a pre-shock density of ~11 cm$^{-3}$, higher than previous estimates of the unshocked CSM. The relatively high $n_{e}t$ and relatively low radial velocity suggest that this structure has a relatively high density compared to other shocked CSM plasma. This analysis provides yet another piece of evidence that the CSM around Cas A's progenitor was not that of a smooth, steady wind profile.
We explore the prospects for identifying differences in simulated gravitational-wave signals of binary neutron star (BNS) mergers associated with the way thermal effects are incorporated in the numerical-relativity modelling. We consider a hybrid approach in which the equation of state (EoS) comprises a cold, zero temperature, piecewise-polytropic part and a thermal part described by an ideal gas, and a tabulated approach based on self-consistent, microphysical, finite-temperature EoS. We use time-domain waveforms corresponding to BNS merger simulations with four different EoS. These are injected into Gaussian noise given by the sensitivity of the third-generation detector Einstein Telescope and reconstructed using BayesWave, a Bayesian data-analysis algorithm that recovers the signals through a model-agnostic approach. The two representations of thermal effects result in frequency shifts of the dominant peaks in the spectra of the post-merger signals, for both the quadrupole fundamental mode and the late-time inertial modes. For some of the EoS investigated those differences are large enough to be told apart, especially in the early post-merger phase when the signal amplitude is the loudest. These frequency shifts may result in differences in the inferred tidal deformability, which might be resolved by third-generation detectors at distances up to a few tens of Mpc.
We report on the detection of radio bursts from the Galactic bulge using the real-time transient detection and localization system, realfast. The pulses were detected commensally on the Karl G. Jansky Very Large Array during a survey of unidentified Fermi $\gamma$-ray sources. The bursts were localized to subarcsecond precision using realfast fast-sampled imaging. Follow-up observations with the Green Bank Telescope detected additional bursts from the same source. The bursts do not exhibit periodicity in a search up to periods of 480 s, assuming a duty cycle of < 20%. The pulses are nearly 100% linearly polarized, show circular polarization up to 12%, have a steep radio spectral index of -2.7, and exhibit variable scattering on timescales of months. The arcsecond-level realfast localization links the source confidently with the Fermi $\gamma$-ray source and places it near (though not coincident with) an XMM-Newton X-ray source. Based on the source's overall properties, we discuss various options for the nature of this object and propose that it could be a young pulsar, a magnetar, or a binary pulsar system.
We present an overview of recent numerical advances in the theoretical characterization of massive binary black hole (MBBH) mergers in astrophysical environments. These systems are among the loudest sources of gravitational waves (GWs) in the universe and particularly promising candidates for multimessenger astronomy. Coincident detection of GWs and electromagnetic (EM) signals from merging MBBHs is at the frontier of contemporary astrophysics. One major challenge in observational efforts searching for these systems is the scarcity of strong predictions for EM signals arising before, during, and after merger. Therefore, a great effort in theoretical work to date has been to characterize EM counterparts emerging from MBBHs concurrently with the GW signal, aiming to determine distinctive observational features that will guide and assist EM observations. To produce sharp EM predictions for MBBH mergers it is key to model the binary inspiral down to coalescence in a fully general-relativistic fashion, by solving Einstein's field equations coupled with the magnetohydrodynamics equations that govern the evolution of the accreting plasma in strong gravity. We review the general relativistic numerical investigations that have explored the astrophysical manifestations of MBBH mergers in different environments and focused on predicting potentially observable smoking-gun EM signatures that accompany the gravitational signal.
This paper explores core-collapse supernovae as crucial targets for neutrino telescopes, addressing uncertainties in their simulation results. We comprehensively analyze eighteen modern simulations and discriminate among supernova models using realistic detectors and interactions. A significant correlation between the total neutrino energy and cumulative counts, driven by heavy-lepton neutrinos and oscillations, is identified, particularly noticeable with the DUNE detector. Bayesian techniques indicate strong potential for model differentiation during a Galactic supernova event, with Hyper-Kamiokande (HK) excelling in distinguishing models based on equation of state, progenitor mass, and mixing scheme.
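The model-discrimination idea above can be sketched with a Poisson likelihood comparison over binned neutrino counts. This is a toy stand-in for the paper's Bayesian analysis; the model names and per-bin expected counts are hypothetical:

```python
import math

def poisson_loglike(observed, expected):
    """Sum of Poisson log-likelihoods log P(n_i | mu_i), up to the n!-dependent constant."""
    return sum(n * math.log(mu) - mu for n, mu in zip(observed, expected))

def preferred_model(observed, models):
    """Return the name of the model whose predicted counts best match the observation."""
    return max(models, key=lambda name: poisson_loglike(observed, models[name]))

# Hypothetical per-energy-bin expected counts for two supernova models.
models = {"model_A": [100.0, 50.0], "model_B": [80.0, 70.0]}
```

With real detector responses, the same likelihoods feed Bayesian model posteriors rather than a simple argmax, but the comparison of predicted versus observed counts is the core of the discrimination.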
In the forthcoming era of big astronomical data, identifying target sources in the data streams of ground-based and space-based telescopes is a formidable task. Although Machine Learning (ML) methods have been extensively utilized to address this issue, incorporating in-depth data analysis can significantly enhance the efficiency of identifying target sources when dealing with massive volumes of astronomical data. In this work, we focus on the task of finding AGN candidates and identifying BL Lac/FSRQ candidates among the 4FGL DR3 uncertain sources. We studied the correlations among the attributes of the 4FGL DR3 catalogue and proposed a novel method, named FDIDWT, to transform the original data. The transformed dataset is low-dimensional and feature-highlighted, with correlation features estimated by Fractal Dimension (FD) theory and multi-resolution analysis performed by the Inverse Discrete Wavelet Transform (IDWT). Combining the FDIDWT method with an improved lightweight MatchboxConv1D model, we accomplished two missions: (1) distinguishing Active Galactic Nuclei (AGNs) from other sources (non-AGNs) among the 4FGL DR3 uncertain sources with an accuracy of 96.65% (Mission A); (2) classifying blazar candidates of uncertain type (BCUs) into BL Lacertae objects (BL Lacs) or Flat Spectrum Radio Quasars (FSRQs) with an accuracy of 92.03% (Mission B). Mission A yielded 1354 AGN candidates; Mission B yielded 482 BL Lac candidates and 128 FSRQ candidates. The results are more than 98% consistent with those of previous works. In addition, our method has the advantage of finding less variable and relatively faint sources compared with ordinary methods.
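The multi-resolution analysis provided by a discrete wavelet transform can be illustrated with the simplest case, a single Haar level. This is a generic sketch only; the authors' FDIDWT method additionally folds in fractal-dimension feature estimation and is not reproduced here:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    approx = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: reconstructs the input signal exactly."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / SQRT2)
        x.append((a - d) / SQRT2)
    return x
```

Each level splits a signal into a coarse approximation and fine-scale detail; because the inverse transform is exact, a transformed, feature-highlighted representation retains all the information in the original attributes.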
It is currently unknown how matter behaves at the extreme densities found within the cores of neutron stars. Measurements of the neutron star equation of state probe nuclear physics that is otherwise inaccessible in a laboratory setting. Gravitational waves from binary neutron star mergers encode details about this physics, allowing the equation of state to be inferred. Planned third-generation gravitational-wave observatories, having vastly improved sensitivity, are expected to provide tight constraints on the neutron star equation of state. We combine simulated observations of binary neutron star mergers by the third-generation observatories Cosmic Explorer and Einstein Telescope to determine future constraints on the equation of state across a plausible neutron star mass range. In one year of operation, a network consisting of one Cosmic Explorer and the Einstein Telescope is expected to detect $\gtrsim 3\times 10^5$ binary neutron star mergers. By considering only the 75 loudest events, we show that such a network will be able to constrain the neutron star radius to $\lesssim 200$ m (90% credibility) in the mass range $1-1.97$ $M_{\odot}$ -- about ten times better than current constraints from LIGO-Virgo-KAGRA and NICER. The constraint is $\lesssim 75$ m (90% credibility) near $1.4-1.6$ $M_{\odot}$, where we assume the binary neutron star mass distribution is peaked. This constraint is driven primarily by the loudest $\sim 20$ events.
In this paper, we constrain the density of the interstellar medium (ISM) around the hadronic PeVatron candidate, supernova remnant (SNR) G106.3+2.7, based on X-ray and $\gamma$-ray observations. The purpose of this investigation is to understand the influence of the gaseous environment on this SNR as a proton PeVatron candidate. By modelling the self-regulated propagation of the CRs injected by the SNR, we calculate the $\gamma$-ray emission of the CRs via hadronuclear interactions with the molecular cloud and the ISM, and use the measured $\gamma$-ray flux to constrain the ISM density around the SNR. Our results support the picture that the SNR is expanding into a low-density ($n<0.05\,\mathrm{cm^{-3}}$) cavity, enabling the SNR to be a potential proton PeVatron even though it is not presently in the very early phase.
Context. In September 2022, the transient neutron star low-mass X-ray binary XTE J1701-462 went into a new outburst. Aims. The objective of this work is to examine the evolution of the accretion geometry of XTE J1701-462 by studying the spectro-polarimetric properties along the Z track of this source. The simultaneous observations archived by the Insight-Hard X-ray Modulation Telescope (Insight-HXMT) and the Imaging X-ray Polarimetry Explorer (IXPE) give us the opportunity to do so. Methods. We present a comprehensive X-ray spectro-polarimetric analysis of XTE J1701-462, using simultaneous observations from IXPE, Insight-HXMT and NuSTAR. For the IXPE observations, two methods are employed to measure the polarization: a model-independent measurement with PCUBE and a model-dependent polarization-spectral analysis with XSPEC. The corresponding spectra from Insight-HXMT and NuSTAR are studied with two configurations, corresponding to a slab-like corona and a spherical shell-like corona, respectively. Results. Significant polarization is detected in XTE J1701-462. The polarization degree (PD) shows a decreasing trend along the Z track, reducing from (4.84 $\pm$ 0.37)% to (3.76 $\pm$ 0.43)% on the horizontal branch and dropping to less than 1% on the normal branch. The simultaneous spectral analysis from Insight-HXMT and NuSTAR suggests that the redistribution between the thermal and Comptonized emission could be the reason for the PD evolution along the Z track. Based on the correlated spectro-polarimetric properties, we propose that this source likely has a slab coronal geometry and that the size/thickness of the corona decreases along the Z track.
The colliding-wind region in binary systems made of massive stars allows us to investigate various aspects of shock physics, including particle acceleration. Particle accelerators of this kind are tagged as Particle-Accelerating Colliding-Wind Binaries, and are mainly identified through their synchrotron radio emission. Our objective is first to validate the idea that obtaining snapshot high-resolution radio images of massive binaries constitutes a relevant approach to unambiguously identify particle accelerators. Second, we intend to exploit these images to characterize the synchrotron emission of two specific targets, HD167971 and HD168112, known as particle accelerators. We traced the radio emission from the two targets at 1.6 GHz with the European Very Long Baseline Interferometry Network, with an angular resolution of a few milli-arcseconds. Our measurements allowed us to obtain images of both targets. For HD167971, our observation occurs close to apastron, at an orbital phase where the synchrotron emission is at its minimum. For HD168112, we resolved the synchrotron emission region for the very first time. The emission region appears slightly elongated, in agreement with expectations for a colliding-wind region. In both cases the measured emission is significantly stronger than the expected thermal emission from the stellar winds, lending strong support to a non-thermal nature. Our study brings a significant contribution to the still poorly addressed question of high angular resolution radio imaging of colliding-wind binaries. We show that snapshot Very Long Baseline Interferometry measurements constitute an efficient approach for investigating these objects, with promising results for identifying additional particle accelerators as well as for revealing long-period binaries.
The Enhanced X-ray Timing and Polarimetry (eXTP) mission is a space mission to be launched in the late 2020s that is currently in development led by China in international collaboration with European partners. Here we provide a progress report on the Czech contribution to the eXTP science. We report on our simulation results performed in Opava (Institute of Physics of the Silesian University in Opava) and Prague (Astronomical Institute of the Czech Academy of Sciences), where the advanced timing capabilities of the satellite have been assessed for bright X-ray binaries that contain an accreting neutron star (NS) and exhibit quasi-periodic oscillations. Measurements of X-ray variability originating in oscillations of fluid in the innermost parts of the accretion region determined by general relativity, such as radial oscillations or Lense-Thirring precession, can serve as sensitive tests enabling us to distinguish between the signatures of different viable dense matter equations of state. We have developed formulae describing non-geodesic oscillations of the accreted fluid, together with simplified practical forms that allow for an expeditious application of the universal relations determining NS properties. These relations, along with our software tools for studying the propagation of light in strong gravity and neutron star models, can be used for precise modeling of the X-ray variability while focusing on properties of the intended Large Area Detector (LAD). We update the status of our program and set up an electronic repository that will provide simulation results and gradual updates as the mission specifications progress toward their final formulation.
The Milky Way galaxy is estimated to be home to between ten million and a billion stellar-mass black holes (BHs). Accurately determining the number and mass distribution of BHs can provide crucial information about the processes involved in BH formation, the possible existence of primordial BHs, and the interpretation of gravitational wave (GW) signals detected by LIGO-Virgo-KAGRA. Sahu et al. recently confirmed one isolated stellar-mass BH in our galaxy using astrometric microlensing. This work proposes a novel method to identify such BHs using the gravitational analog of the Gertsenshtein-Zel'dovich (GZ) effect. We explicitly demonstrate the generation of GWs when a kilohertz (kHz) electromagnetic (EM) pulse from a pulsar encounters a spherically symmetric compact object situated between the pulsar and Earth. Specifically, we show that the curvature of spacetime acts as the catalyst, akin to the magnetic field in the GZ effect. Using the covariant semi-tetrad formalism, we quantify the GW generated from the EM pulse through the Regge-Wheeler tensor and express the amplitude of the generated GW in terms of the EM energy and flux. We demonstrate how GW detectors can detect stellar-mass BHs by considering known pulsars within our galaxy. This approach has a distinct advantage in detecting stellar-mass BHs at larger distances, since the GW amplitude falls only as $1/r$.
Using a 12 ks archival Chandra X-ray Observatory ACIS-S observation of the massive globular cluster (GC) M14, we detect a total of 7 faint X-ray sources within its half-light radius at a 0.5-7 keV depth of $2.5\times 10^{31}\,\mathrm{erg~s^{-1}}$. We cross-match the X-ray source positions with a catalogue of Very Large Array radio point sources and a Hubble Space Telescope (HST) UV/optical/near-IR photometry catalogue, revealing radio counterparts to 2 and HST counterparts to 6 of the X-ray sources. We also identify a radio source coincident with the recently discovered millisecond pulsar PSR 1737-0314A. The brightest X-ray source, CX1, appears to be consistent with the nominal position of the classical nova Ophiuchi 1938 (Oph 1938), and both Oph 1938 and CX1 are consistent with a UV-bright variable HST counterpart, which we argue to be the source of the nova eruption in 1938. This makes Oph 1938 the second classical nova recovered in a Galactic GC since Nova T Scorpii in M80. CX2 is consistent with the steep-spectrum radio source VLA8, which unambiguously matches a faint blue source; the steepness of VLA8 is suggestive of a pulsar nature, possibly a transitional millisecond pulsar with a late K dwarf companion, though an active galactic nucleus (AGN) cannot be ruled out. The other counterparts to the X-ray sources are all suggestive of chromospherically active binaries or background AGNs, so determining their nature requires further membership information.
We present an analysis of the X-ray properties of 10 luminous, dust-reddened quasars from the FIRST-2MASS (F2M) survey based on new and archival Chandra observations. These systems are interpreted to be young, transitional objects predicted by merger-driven models of quasar/galaxy co-evolution. The sources have been well-studied from the optical through mid-infrared, have Eddington ratios above 0.1, and possess high-resolution imaging, most of which shows disturbed morphologies indicative of a recent or ongoing merger. When combined with previous X-ray studies of five other F2M red quasars, we find that the sources, especially those hosted by mergers, have moderate to high column densities ($N_H \simeq 10^{22.5-23.5}$ cm$^{-2}$) and Eddington ratios high enough to enable radiation pressure to blow out the obscuring material. We confirm previous findings that red quasars have dust-to-gas ratios that are significantly lower than the value for the Milky Way's interstellar medium, especially when hosted by a merger. The dust-to-gas ratio for two red quasars that lack evidence for merging morphology is consistent with the Milky Way value, and they do not meet the radiative feedback conditions for blowout. These findings support the picture of quasar/galaxy co-evolution in which a merger results in feeding of and feedback from an AGN. We compare the F2M red quasars to other obscured and reddened quasar populations in the literature, finding that, although morphological information is lacking, nearly all such samples meet blowout conditions and exhibit outflow signatures suggestive of winds and feedback.
The HERMES (High Energy Rapid Modular Ensemble of Satellites) Pathfinder mission aims to develop a constellation of nanosatellites to study astronomical transient sources, such as gamma-ray bursts, in the X-ray and soft $\gamma$-ray energy range, exploiting a novel inorganic scintillator. This study presents the results obtained by describing, with an empirical model, the unusually intense and long-lasting residual emission of the GAGG:Ce scintillating crystal after irradiating it with high-energy protons (70 MeV) and ultraviolet light ($\sim$ 300 nm). From the model so derived, the consequences of this residual luminescence for the detector performance in operational conditions have been analyzed. It was demonstrated that the current generated by the residual emission peaks at 1-2 pA, thus ascertaining the complete compatibility of this detector with the HERMES Pathfinder nanosatellites.
Aims: We perform ab initio global particle-in-cell (PIC) simulations of compact neutron star magnetospheres in the aligned rotator configuration to investigate the role of general relativity (GR) and plasma supply in the polar cap particle acceleration efficiency, a precursor of coherent radio emission. Methods: A new module for the PIC code OSIRIS to model plasma dynamics around compact objects with fully self-consistent GR effects is presented. A detailed description of the extensions and implementation methods is provided for the main sub-algorithms of the PIC loop, including the field solver, particle pusher, and charge-conserving current-deposit scheme. Results: Leptons are efficiently accelerated in the polar caps of neutron stars with force-free magnetospheres. This solution supports strong poloidal currents, which are easily turned spacelike at any stellar compactness by the GR frame-dragging effect. Charge-separated magnetospheric solutions, by contrast, depend on the plasma supply and compactness to activate the polar cap: the further the solution is from force-free, the higher the compactness required for polar cap activation. Conclusions: GR effects are crucial to explain the pulsar mechanism for low-obliquity rotators. Focusing on the aligned rotator, we show that GR relaxes the minimum poloidal magnetospheric current required for the transition of the polar cap to the accelerator regime, thus explaining the observation of weak pulsars beyond the expected death line. We also demonstrate how the interplay between the polar cap and outer gaps might explain the intermittent behaviour of the measured spin-down luminosity and the existence of radio sub-pulse nullings in older pulsars.
Constraining the physical conditions of the ionized media in the vicinity of an active supermassive black hole (SMBH) is crucial to understanding how these complex systems operate. Metal emission lines such as those of iron (Fe) are useful probes to trace the abundance, activity, and evolution of the gaseous media in these accreting systems. Among these, the FeII emission has been the focus of many prior studies investigating the energetics, kinematics, and composition of the broad emission-line region (BELR) from where these lines are produced. In this work, we present the first simultaneous FeII modeling in the optical and near-infrared (NIR) regions. We use the CLOUDY photoionization code to simulate both spectral regions in the wavelength interval 4000-12000 \AA. We compare our model predictions with the observed line flux ratios for IZw1, a prototypical strong FeII-emitting active galactic nucleus (AGN). By examining a broad parameter space, this allows us to place constraints on the BELR cloud density and metal content optimal for the production of the FeII emission, constraints that can be extended to IZw1-like sources. We demonstrate the salient and distinct features of the FeII pseudo-continuum in the optical and NIR, giving special attention to the effect of micro-turbulence on the intensity of the FeII emission.
We searched for thermal gyro-synchrotron radio emission from a sample of five radio-loud stars whose X-ray coronae contain a hot ($T_e>10^7$ K) thermal component. We used the JVLA to measure Stokes I and V/I spectral energy distributions (SEDs) over the frequency range 15--45 GHz, determining the best-fitting model parameters using power-law and thermal gyro-synchrotron emission models. The SEDs of the three chromospherically active binaries (Algol, UX Arietis, HR 1099) were well-fit by a power-law gyro-synchrotron model, with no evidence for a thermal component. However, the SEDs of the two weak-lined T Tauri stars (V410 Tau, HD 283572) had a circularly polarized enhancement above 30 GHz that was inconsistent with a pure power-law distribution. These spectra were well-fit by summing the emission from an extended coronal volume of power-law gyro-synchrotron emission and a smaller region with thermal plasma and a much stronger magnetic field emitting thermal gyro-synchrotron radiation. We used Bayesian inference to estimate the physical plasma parameters of the emission regions (characteristic size, electron density, temperature, power-law index, and magnetic field strength and direction) using independently measured radio sizes, X-ray luminosities, and magnetic field strengths as priors, where available. The derived parameters were well-constrained but somewhat degenerate. The power-law and thermal volumes in the pre-main-sequence stars are probably not co-spatial, and we speculate they may arise from two distinct regions: a tangled-field magnetosphere where reconnection occurs and a recently discovered axisymmetric toroidal magnetic field, respectively.
We investigate the spherically-symmetric gravitational collapse of a massless scalar field in the framework of a type-II minimally modified gravity theory called VCDM. This theory propagates only two local physical degrees of freedom supplemented by the so-called instantaneous (or shadowy) mode. Imposing asymptotically flat spacetime in the standard Minkowski time slicing, one can integrate out the instantaneous mode. Consequently, the equations of motion reduce to those in general relativity (GR) with the maximal slicing. Unlike GR, however, VCDM lacks 4D diffeomorphism invariance, and thus one cannot change the time slicing that is preferred by the theory. We then numerically evolve the system to see if and how a black hole forms. For small amplitudes of the initial scalar profile, we find that its collapse does not generate any black hole, singularity or breakdown of the time slicing. For sufficiently large amplitudes, however, the collapse does indeed result in the formation of an apparent horizon in a finite time. After that, the solution outside the horizon is described by a static configuration, i.e. the Schwarzschild geometry with a finite and time-independent lapse function. Inside the horizon, on the other hand, the numerical results indicate that the lapse function keeps decreasing towards zero so that the central singularity is never reached. This implies the necessity for a UV completion of the theory to describe physics inside the horizon. Still, we can conclude that VCDM is able to fully describe the entire time evolution of the Universe outside the black hole horizon without knowledge about such a UV completion.
In this paper, we investigate the FRB population using the first CHIME/FRB catalog. We first reconstruct the extragalactic dispersion measure -- redshift relation ($\mathrm{DM_E} - z$ relation) from well-localized FRBs, then use it to infer the redshift and isotropic energy of the FRBs in the first CHIME/FRB catalog. The intrinsic energy distribution is modeled by a power law with an exponential cutoff, and the selection effect of the CHIME telescope is modeled by a two-parameter function of specific fluence. For the intrinsic redshift distribution, the star formation history (SFH) model, as well as five other SFH-related models, are considered. We construct the joint likelihood of fluence, energy and redshift, and all the free parameters are constrained simultaneously using Bayesian inference. The Bayesian information criterion (BIC) is used to choose the model that best matches the observational data. For comparison, we fit our models with two data samples, i.e. the Full sample and the Gold sample. The power-law index and cutoff energy are tightly constrained to $1.8 \lesssim \alpha \lesssim 1.9$ and $\mathrm{log}(E_c/{\rm erg}) \approx 42$, almost independently of the redshift distribution model and the data sample we choose. The parameters describing the selection effect depend strongly on the data sample, but are insensitive to the redshift distribution model. According to the BIC, the pure SFH model is strongly disfavored by both the Full sample and the Gold sample. Of the remaining five SFH-related redshift distribution models, most can match the data well if the parameters are properly chosen. Therefore, with the present data, it is still premature to draw a definitive conclusion on the FRB population.
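The intrinsic energy model quoted above, a power law with an exponential cutoff, can be sketched numerically. The index and cutoff below are taken from the quoted best-fit values, but the function is only an illustrative unnormalized form, not the paper's full likelihood:

```python
import math

ALPHA = 1.85   # power-law index, within the quoted 1.8-1.9 range
E_CUT = 1e42   # cutoff energy in erg, from log(E_c/erg) ~ 42

def energy_pdf(e):
    """Unnormalized intrinsic energy distribution p(E) ~ E^(-alpha) * exp(-E/E_c)."""
    return e ** (-ALPHA) * math.exp(-e / E_CUT)

# Below the cutoff the power law dominates; above it the exponential
# suppresses the rate far faster than the power law alone would.
ratio_below = energy_pdf(1e40) / energy_pdf(1e41)   # close to 10**ALPHA
ratio_above = energy_pdf(1e43) / energy_pdf(1e42)   # much smaller than 10**(-ALPHA)
```

The two ratios make the role of the cutoff concrete: a decade step in energy below $E_c$ changes the rate by roughly $10^{\alpha}$, while the same step above $E_c$ is additionally damped by the exponential factor.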
We present results from a systematic search for broad ($\geq$ 400 km s$^{-1}$) H$\alpha$ emission in Integral Field Spectroscopy data cubes of $\sim$1200 nearby galaxies obtained with PMAS and MUSE. We found 19 unique regions that pass our quality cuts, four of which match the locations of previously discovered SNe: one Type IIP, and three Type IIn, including the well-known SN 2005ip. We suggest that these objects are young supernova remnants, with bright and broad H$\alpha$ emission powered by the interaction between the SN ejecta and dense circumstellar material. The stellar ages measured at the locations of these SNR candidates are systematically lower, by about 0.5 dex, than those measured at the locations of core-collapse SNe, implying that their progenitors might be shorter-lived and therefore more massive than a typical CC SN progenitor. The methods laid out in this work open a new window into the study of nearby SNe with Integral Field Spectroscopy.
From the observation of both heavy neutron stars and light ones with small radii, one anticipates a steep rise in the speed of sound of nuclear matter as a function of baryon density, up to values close to the causal limit. A question follows whether such behavior of the speed of sound in neutron-rich matter is compatible with the equation of state extracted from low-energy heavy-ion collisions. In this work, we consider a family of neutron star equations of state characterized by a steep rise in the speed of sound, and use the symmetry energy expansion to obtain equations of state applicable to the almost-symmetric nuclear matter created in heavy-ion collisions. We then compare collective flow data from low-energy heavy-ion experiments with results of simulations obtained using the hadronic transport code SMASH with a mean-field potential reproducing the density dependence of the speed of sound. We show that equations of state featuring a peak in the speed of sound squared at densities between 2-3 times the saturation density of normal nuclear matter, producing neutron stars with maximum masses of nearly $M_{\rm max} \sim 2.5\,M_\odot$, are consistent with heavy-ion collision data.
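The qualitative shape described above — a speed of sound squared that rises steeply to a peak between 2 and 3 times saturation density while staying below the causal limit $c_s^2 < c^2$ — can be sketched with a toy profile. The functional form and numbers below are purely illustrative and are not the paper's EoS family:

```python
import math

def cs2(n):
    """Toy speed of sound squared (units of c^2) vs. baryon density n
    in units of the saturation density n_sat. Illustrative only:
    a low baseline plus a bump peaking near 2.5 n_sat."""
    return 0.1 + 0.75 * math.exp(-((n - 2.5) ** 2) / 0.3)

# Scan 1-5 n_sat, locate the peak, and check causality everywhere.
densities = [1.0 + 0.01 * i for i in range(401)]
peak_density = max(densities, key=cs2)
peak_value = cs2(peak_density)
```

A real analysis would obtain $c_s^2 = dP/d\varepsilon$ from a tabulated equation of state; the point here is only the constraint pattern: a peak located at $2\text{--}3\,n_{\rm sat}$ with $c_s^2$ bounded by the causal limit of 1.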
"Changing-look" Active Galactic Nuclei (CL-AGNs) are challenging our basic ideas about the physics of accretion flows and of circumnuclear gas around supermassive black holes (SMBHs). Using first year Sloan Digital Sky Survey V (SDSS-V) repeated spectroscopy of nearly 29,000 previously-known AGNs, combined with dedicated follow-up spectroscopic observations, and publicly available optical light curves, we have identified 116 CL-AGNs where (at least) one broad emission line has essentially (dis-)appeared, as well as 88 other extremely variable systems. Our CL-AGN sample, with 107 newly identified cases, is among the largest reported to date, and includes $\sim$0.4% of the AGNs re-observed in the first year of SDSS-V operations. Among our CL-AGNs, 67% exhibit dimming while 33% exhibit brightening. Our data and sample probe extreme AGN spectral variability on timescales of months to decades, including some cases of recurring transitions on surprisingly short timescales ($\lesssim$ 2 months in the rest frame). We find that CL events are preferentially found in lower Eddington ratio ($f_{Edd}$) systems: Our CL-AGNs have a $f_{Edd}$ distribution that significantly differs from that of a redshift- and a carefully constructed, luminosity-matched control sample ($p_{KS}$ $\lesssim$ 2 $\times$ $10^{-4}$ ; median $f_{Edd}$ $\approx$ 0.025 vs. 0.043). This preference for low $f_{Edd}$ strengthens previous findings of higher CL-AGN incidence at lower Eddington ratios, found in much smaller samples of spectroscopically confirmed CL-AGNs. Finally, we show that the broad MgII emission line in our CL-AGN sample tends to vary significantly less than the broad H$\beta$ emission line. Our large CL-AGN sample demonstrates the advantages and challenges in using multi-epoch spectroscopy from large surveys to study extreme AGN variability, SMBH fueling, and AGN physics.
Milky Way globular clusters (GCs) display chemical enrichment in a phenomenon called multiple stellar populations (MSPs). While the enrichment mechanism is not fully understood, there is a correlation between a cluster's mass and the fraction of enriched stars found therein. However, present-day GC masses are often smaller than their masses at the time of formation due to dynamical mass loss. In this work, we explore the relationship between mass and MSPs using the stellar stream 300S. We present the chemical abundances of eight red giant branch member stars in 300S with high-resolution spectroscopy from Magellan/MIKE. We identify one enriched star characteristic of MSPs and no detectable metallicity dispersion, confirming that the progenitor of 300S was a globular cluster. The fraction of enriched stars (12.5%) observed in our 300S stars is less than the 50% found enriched in Milky Way GCs of comparable present-day mass ($\sim10^{4.5}\,M_\odot$). We calculate the mass of 300S's progenitor and compare it to the initial masses of intact GCs, finding that 300S aligns well with the trend between system mass at formation and enrichment. 300S's progenitor may straddle the critical mass threshold for the formation of MSPs and can therefore serve as a benchmark for the stellar enrichment process. Additionally, we identify a CH star, with high abundances of $s$-process elements, probably accreted from a binary companion. The rarity of such binaries in intact GCs may imply that stellar streams permit the survival of binaries that would otherwise be disrupted.
We present the first results from a JWST program studying the role played by powerful radio jets in the evolution of the most massive galaxies at the onset of Cosmic Noon. Using NIRSpec integral field spectroscopy, we detect 24 rest-frame optical emission lines from the $z=3.5892$ radio galaxy 4C+19.71. This galaxy hosts one of the most energetic radio jets known, making it an ideal target for testing radio-mode feedback on the interstellar medium (ISM) of a $M_{\star}\sim10^{11}\,\rm M_{\odot}$ galaxy. The rich spectrum enables line ratio diagnostics showing that the radiation from the active galactic nucleus (AGN) dominates the ionization of the entire ISM out to at least $25\,$kpc, the edge of the detection. Sub-kpc resolution reveals filamentary structures and emission blobs in the warm ionized ISM distributed on scales of $\sim5$ to $\sim20\,$kpc. A large fraction of the extended gaseous nebula is located near the systemic velocity. This nebula may thus be the patchy ISM which is illuminated by the AGN after the passage of the jet. A radiatively-driven outflow is observed within $\sim5\,$kpc of the nucleus. The inefficient coupling ($\lesssim 10^{-4}$) between this outflow and the quasar, and the lack of extreme gas motions on galactic scales, stand in contrast to other powerful high-$z$ quasars. Combining our data with ground-based studies, we conclude that only a minor fraction of the feedback processes is happening on $<25\,$kpc scales.
Insights from JWST observations suggest that AGN feedback evolved from a short-lived, high redshift phase in which radiatively cooled turbulence and/or momentum-conserving outflows stimulated vigorous early star formation (``positive'' feedback), to late, energy-conserving outflows that depleted halo gas reservoirs and quenched star formation. The transition between these two regimes occurred at $z\sim 6$, independently of galaxy mass, for simple assumptions about the outflows and star formation process. Observational predictions provide circumstantial evidence for the prevalence of massive black holes at the highest redshifts hitherto observed, and we discuss their origins.
Henize 2-10 is a dwarf starburst galaxy hosting a $\sim10^{6}~M_{\odot}$ black hole (BH) that is driving an ionized outflow and triggering star formation within the central $\sim100$ pc of the galaxy. Here we present ALMA continuum observations from 99 to 340 GHz, as well as spectral line observations of the molecules CO (1-0, 3-2), HCN (1-0, 3-2), and HCO$^{+}$ (1-0, 3-2), with a focus on the BH and its vicinity. Incorporating cm-wave radio measurements from the literature, we show that the spectral energy distribution of the BH is dominated by synchrotron emission from 1.4 to 340 GHz with a spectral index of $\alpha\approx-0.5$. We analyze the spectral line data and identify an elongated molecular gas structure around the BH with a velocity distinct from the surrounding regions. The physical extent of this molecular gas structure is $\approx 130~{\rm pc} \times 30~{\rm pc}$ and the molecular gas mass is $\sim10^{6}~M_{\odot}$. Despite an abundance of molecular gas in this general region, the position of the BH is significantly offset from the peak intensity, which may explain why the BH is radiating at a very low Eddington ratio. Our analysis of the spatially-resolved line ratio between CO J=3-2 and J=1-0 implies that the CO gas in the vicinity of the BH is highly excited, particularly at the interface between the BH outflow and the regions of triggered star formation. This suggests that the cold molecular gas is being shocked by the bipolar outflow from the BH, supporting the case for positive BH feedback.
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is designed to detect and measure the redshifts of more than one million Ly$\alpha$ emitting galaxies (LAEs) between $1.88 < z < 3.52$. In addition to its cosmological measurements, these data enable studies of Ly$\alpha$ spectral profiles and the underlying radiative transfer. Using the roughly half a million LAEs in the HETDEX Data Release 3, we stack various subsets to obtain the typical Ly$\alpha$ profile for the $z \sim 2-3$ epoch and to understand their physical properties. We find clear absorption wings around the Ly$\alpha$ emission, which extend $\sim 2000$ km $\mathrm{s}^{-1}$ both redward and blueward of the central line. Using far-UV spectra of nearby ($0.002 < z < 0.182$) LAEs in the CLASSY treasury and optical/near-IR spectra of $2.8 < z < 6.7$ LAEs in the MUSE-Wide survey, we observe absorption profiles in both redshift regimes. Dividing the sample by volume density shows that the troughs deepen in higher-density regions. This trend suggests that the depth of the absorption depends on the local density of objects near the LAE, a geometry that is similar to damped Lyman-$\alpha$ systems. Simple simulations of Ly$\alpha$ radiative transfer can produce similar troughs due to absorption of light from background sources by HI gas surrounding the LAEs.
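The stacking step described above — combining many individual LAE spectra on a common velocity grid so that faint absorption wings emerge from the noise — can be sketched as a column-wise median over continuum-normalized spectra. This is a generic illustration with made-up toy data, not the HETDEX pipeline:

```python
import statistics

def median_stack(spectra):
    """Median-combine spectra already resampled onto a common velocity grid.

    Each spectrum is a list of continuum-normalized flux values; taking the
    median in each velocity bin keeps features shared across objects while
    rejecting outliers from any single spectrum.
    """
    return [statistics.median(bin_fluxes) for bin_fluxes in zip(*spectra)]

# Toy example: three 5-bin spectra sharing an absorption dip in bin 1,
# with a spurious spike in one spectrum's bin 3.
spectra = [
    [1.0, 0.70, 1.0, 1.0, 1.0],
    [1.0, 0.80, 1.0, 9.0, 1.0],   # the spike would survive a mean stack
    [1.0, 0.75, 1.0, 1.0, 1.0],
]
stacked = median_stack(spectra)
```

The median (rather than the mean) is one common design choice for such stacks precisely because a single anomalous spectrum cannot imprint itself on the result.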
We present here the results of an analysis of the optical spectroscopy of 42 globular cluster (GC) candidates in the nearby spiral galaxy M81 (3.61~Mpc). The spectra were obtained using the long-slit and MOS modes of the OSIRIS instrument at the 10.4~m Gran Telescopio Canarias (GTC) at a spectral resolution of $\sim$1000. We used the classical H$\beta$ vs [MgFe]$'$ index diagram to separate genuine old GCs from clusters younger than 3 Gyr. Of the 30 spectra with continuum signal-to-noise ratio $>10$, we confirm 17 objects to be classical GCs (age $>10$~Gyr, $-1.4<$[Fe/H]$<-0.4$), with the remaining 13 being intermediate-age clusters (1-7.5~Gyr). We combined our age and metallicity data with those of other nearby spiral galaxies ($\lesssim18$~Mpc), obtained using a methodology similar to the one used here, to understand the origin of GCs in spiral galaxies in the cosmological context. We find that the metal-poor ([Fe/H]$<-1$) GCs continued to form for up to 6~Gyr after the first GCs were formed, with all younger systems (age $<8$~Gyr) being metal-rich.
Observed protostellar outflows exhibit a variety of asymmetrical features, including remarkable unipolar outflows and bending outflows. Revealing the formation and early evolution of such asymmetrical protostellar outflows, especially the unipolar outflows, is essential for a better understanding of star and planet formation because they can dramatically change the mass accretion and angular momentum transport to the protostars and protoplanetary disks. Here, we perform three-dimensional non-ideal magnetohydrodynamic simulations to investigate the formation and early evolution of asymmetrical protostellar outflows in magnetized turbulent isolated molecular cloud cores. We find, for the first time to our knowledge, that a unipolar outflow forms even in a single low-mass protostellar system. The results show that the unipolar outflow is driven in weakly magnetized cloud cores with dimensionless mass-to-flux ratios of $\mu=8$ and $16$. Furthermore, we find the $\textit{protostellar rocket effect}$ of the unipolar outflow, which is similar to the launch and propulsion of a rocket. The unipolar outflow ejects the protostellar system from the central dense region to the outer region of the parent cloud core, and the ram pressure caused by its ejection suppresses the driving of additional new outflows. In contrast, a bending bipolar outflow is driven in the moderately magnetized cloud core with $\mu=4$. The ratio of the magnetic to turbulent energies of a parent cloud core may play a key role in the formation of asymmetrical protostellar outflows.
The intracluster light (ICL) fraction is a well-known indicator of the dynamical activity in intermediate-redshift clusters. Merging clusters in the redshift interval $0.18<z<0.56$ have a distinctive peak in the ICL fractions measured between $\sim 3800-4800$ \AA. In this work, we analyze two higher-redshift, clearly merging clusters, ACT-CL J0102-4915 and CL J0152.7-1357, at $z>0.8$, using the HST optical and infrared images obtained by the RELICS survey. We report the presence of a similar peak in the ICL fractions, although wider and redshifted to the wavelength interval $\sim 5200-7300$ \AA. The fact that this excess in the ICL fractions is found at longer wavelengths can be explained by an assorted mixture of stellar populations in the ICL, a direct inheritance of an ICL that was mainly formed by major galaxy mergers with the BCG at $z>1$ and whose production is instantaneously boosted by the merging event. The ubiquity of the ICL fraction merging signature across cosmic time reinforces the ICL as a highly reliable and powerful probe to determine the dynamical stage of galaxy clusters, which is crucial for cluster-based cosmological inferences that require a relaxed sample.
Ly$\alpha$ emission is an exceptionally informative tracer of the life cycle of evolving galaxies and the escape of ionising photons. However, theoretical studies of Ly$\alpha$ emission are often limited by insufficient numerical resolution, incomplete sets of physical models, and poor line-of-sight (LOS) statistics. To overcome such limitations, we utilize here the novel PANDORA suite of high-resolution dwarf galaxy simulations that include a comprehensive set of state-of-the-art physical models for ionizing radiation, magnetic fields, supernova feedback and cosmic rays. We post-process the simulations with the radiative transfer code \textsc{RASCAS} to generate synthetic observations and compare to observed properties of Ly$\alpha$ emitters. Our simulated Ly$\alpha$ haloes are more extended than the spatial region from which the intrinsic emission emanates and our spatially resolved maps of spectral parameters of the Ly$\alpha$ emission are very sensitive to the underlying spatial distribution and kinematics of neutral hydrogen. Ly$\alpha$ and LyC emission display strongly varying signatures along different LOS depending on how each LOS intersects low-density channels generated by stellar feedback. Comparing galaxies simulated with different physics, we find the Ly$\alpha$ signatures to exhibit systematic offsets determined by the different levels of feedback strength and the clumpiness of the neutral gas. Despite this variance, and regardless of the different physics included in each model, we find universal correlations between Ly$\alpha$ observables and LyC escape fraction, demonstrating a robust connection between Ly$\alpha$ and LyC emission. Ly$\alpha$ observations from a large sample of dwarf galaxies should thus give strong constraints on their stellar feedback-regulated LyC escape and confirm their important role for the reionization of the Universe.
Molecular lines are powerful diagnostics of the physical and chemical properties of the interstellar medium (ISM). These ISM properties, which affect future star formation, are expected to differ in starburst galaxies from those of more quiescent galaxies. We investigate the ISM properties in the central molecular zone of the nearby starburst galaxy NGC 253 using the ultra-wide millimeter spectral scan survey from the ALMA Large Program ALCHEMI. We present an atlas of velocity-integrated images at a 1".6 resolution of 148 unblended transitions from 44 species, including the first extragalactic detection of HCNH$^+$ and the first interferometric images of C$_3$H$^+$, NO, and HCS$^+$. We conduct a principal component analysis (PCA) on these images to extract correlated chemical species and to identify key groups of diagnostic transitions. To the best of our knowledge, our dataset is currently the largest astronomical set of molecular lines to which PCA has been applied. The PCA can categorize transitions coming from different physical components in NGC 253 such as i) young starburst tracers characterized by high-excitation transitions of HC$_3$N and complex organic molecules (COMs) versus tracers of on-going star formation (radio recombination lines) and high-excitation transitions of CCH and CN tracing PDRs, ii) tracers of cloud-collision-induced shocks (low-excitation transitions of CH$_3$OH, HNCO, HOCO$^+$, and OCS) versus shocks from star-formation-induced outflows (high-excitation transitions of SiO), as well as iii) outflows showing emission from HOC$^+$, CCH, H$_3$O$^+$, CO isotopologues, HCN, HCO$^+$, CS, and CN. Our findings show that these intensities vary with galactic dynamics, star formation activity, and stellar feedback.
Under the hypothesis of gravitational redshift induced by the central supermassive black hole, and based on line widths and shifts of redward shifted H$\beta$ and H$\alpha$ broad emission lines for more than 8000 SDSS DR7 AGNs, we measure the virial factor used in determining supermassive black hole masses. The virial factor had been believed to be independent of accretion radiation pressure on gas clouds in the broad-line region (BLR), and only dependent on inclination effects of the BLR. The measured virial factor spans a very large range. For the vast majority of AGNs ($>$96%) in our samples, the virial factor is larger than the $f=1$ usually used in the literature. The $f$ correction decreases the fraction of high-accreting AGNs by a factor of about 100. There are positive correlations of $f$ with the dimensionless accretion rate and Eddington ratio. The redward shifts of H$\beta$ and H$\alpha$ are mainly of gravitational origin, confirmed by a negative correlation between the redward shift and the dimensionless radius of the BLR. Our results show that radiation pressure force is a significant contributor to the measured virial factor, which also contains the inclination effects of the BLR. The commonly used values of $f$ should be corrected for high-accreting AGNs, especially high redshift quasars. The $f$ correction increases their masses by one to two orders of magnitude, which will make it more challenging to explain the formation and growth of supermassive black holes at high redshifts.
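The mass scaling described above follows directly from the standard virial estimator $M_{\rm BH} = f\,R_{\rm BLR}\,\Delta V^2/G$: the inferred mass is linear in $f$, so raising $f$ from 1 to $\sim$100 raises the mass by two orders of magnitude. A minimal sketch with illustrative (not measured) values of $R_{\rm BLR}$ and $\Delta V$:

```python
# Standard virial black-hole mass estimate, M_BH = f * R_BLR * dV^2 / G.
# The radius and line width below are illustrative values, not the paper's data.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

def virial_mass(f, r_blr_pc, dv_kms):
    """Black-hole mass in solar masses for virial factor f,
    BLR radius in parsecs, and line width in km/s."""
    return f * (r_blr_pc * PC) * (dv_kms * 1e3) ** 2 / G / M_SUN

m_f1 = virial_mass(1.0, 0.01, 3000.0)     # f = 1, as commonly assumed
m_f100 = virial_mass(100.0, 0.01, 3000.0)  # large f, as measured for some AGNs
print(f"{m_f1:.2e} Msun -> {m_f100:.2e} Msun (x{m_f100 / m_f1:.0f})")
```

Because the estimator is linear in $f$, the ratio of the two masses is exactly the ratio of the virial factors.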
We analyze JWST NIRSpec$+$MIRI/MRS observations of the infrared (IR) gas-phase molecular bands of the most enshrouded source (D1) within the interacting system and luminous IR galaxy II Zw 096. We report the detection of rovibrational lines of H$_2$O $\nu_2$=1-0 ($\sim$5.3-7.2 $\mu$m) and $^{12}$CO $\nu$=1-0 ($\sim$4.45-4.95 $\mu$m) in D1. The CO band shows the R- and P-branches in emission and the spectrum of the H$_2$O band shows the P-branch in emission and the R-branch in absorption. The H$_2$O R-branch in absorption unveils an IR-bright embedded compact source in D1 and the CO broad component features a highly turbulent environment. From both bands, we also identified extended intense star-forming (SF) activity associated with circumnuclear photodissociation regions (PDRs), consistent with the strong emission of the ionised 7.7 $\mu$m polycyclic aromatic hydrocarbon band in this source. By including the 4.5-7.0 $\mu$m continuum information derived from the H$_2$O and CO analysis, we modelled the IR emission of D1 with a dusty torus and SF component. The torus is very compact (diameter of $\sim$3 pc at 5 $\mu$m) and characterised by warm dust ($\sim$ 370 K), giving an IR surface brightness of $\sim$3.6$\times$10$^{8}$ L$_{\rm sun}$/pc$^2$. This result suggests the presence of a dust-obscured active galactic nucleus (AGN) in D1, which has an exceptionally high covering factor that prevents the direct detection of AGN emission. Our results open a new way to investigate the physical conditions of inner dusty tori via modelling the observed IR molecular bands.
(Abridged). We explore the fraction of radio loud quasars in the eHAQ+GAIA23 sample, which contains quasars from the High A(V) Quasar (HAQ) Survey, the Extended High A(V) Quasar (eHAQ) Survey, and the Gaia quasar survey. All quasars in this sample have been found using a near-infrared color selection of target candidates that have otherwise been missed by the Sloan Digital Sky Survey (SDSS). We implemented a redshift-dependent color cut in g-i to select red quasars in the sample and divided them into redshift bins, while using a nearest-neighbors algorithm to control for luminosity and redshift differences between our red quasar sample and a selected blue sample from the SDSS. Within each bin, we cross-matched the quasars to the Faint Images of the Radio Sky at Twenty centimeters (FIRST) survey and determined the radio-detection fraction. We find similar radio-detection fractions for red and blue quasars within 1 sigma, independent of redshift. This disagrees with what has been found in the literature for red quasars in SDSS. It should be noted that the fraction of broad absorption line (BAL) quasars in red SDSS quasars is about five times lower than in our sample. BAL quasars have been observed to be more frequently radio quiet than other quasars; therefore, the difference in BAL fractions could explain the difference in radio-detection fraction. The observed higher proportion of BAL quasars in our dataset relative to the SDSS sample, along with the higher rate of radio detections, indicates an association between the redness of quasars and the inherent BAL fraction within the overall quasar population. This finding highlights the need to explore the underlying factors contributing to both the redness and the frequency of BAL quasars, as they appear to be interconnected phenomena.
Relic galaxies are massive, compact, quiescent objects observed in the local Universe that have not experienced any significant interaction episodes or merger events since about $z = 2$, remaining relatively unaltered since their formation. On the other hand, massive and compact Early Type Galaxies (cETGs) in the local Universe appear to show similar properties to Relic galaxies, despite having a substantial accretion history. Relic galaxies, with their frozen history, can provide important clues about the intrinsic processes related to the evolutionary pathways of ETGs and the role that mergers play in their evolution. Using the high-resolution cosmological simulation TNG50-1 from the IllustrisTNG project, we investigate the assembly history of a sample of massive, compact, old, and quiescent subhalos split by satellite accretion fraction. We compare the evolutionary pathways at three cosmic epochs, $z = 2$, $z = 1.5$, and $z = 0$, using the orbital decomposition numerical method to investigate the stellar dynamics of each galactic kinematical component and their environmental correlations. Our results point to a steady pathway across time that is not strongly dependent on the environment. Relics and cETGs do not show a clear preference for high or low-density environments within the volume explored by TNG50. However, progenitors of Relic galaxies are shown to be located in high-density environments since $z = 2$. The merger history can be recovered from the hot inner stellar halo imprints in the local Universe. In the current scenario, the mergers that drive the growth of cETGs do not give rise to a new and distinct evolutionary pathway when compared to Relics, despite the reported effects on the age and metallicity of the kinematic components.
Stellar population synthesis (SPS) is essential for understanding galaxy formation and evolution. However, the recent discovery of rotation-driven phenomena in star clusters warrants a review of uncertainties in SPS models caused by overlooked factors, including stellar rotation. In this study, we investigate the impact of rotation on SPS using the PARSEC V2.0 rotation model and its implications for high redshift galaxies observed with JWST. Rotation enhances the ultraviolet (UV) flux for up to $\sim 400$ Myr after the starburst, with the UV slope increasing as the population rotates faster and becomes more metal-poor. Using the Prospector tool, we construct simulated galaxies and deduce their properties associated with dust and star formation. Our results suggest that rapid rotation models result in a more gradual UV slope, up to 0.1 dex higher, and an approximately 50\% increase in dust attenuation for identical wide-band spectral energy distributions. Furthermore, we investigate the biases that arise if the stellar population is characterized by rapid rotation and demonstrate that accurate estimation can be achieved for rotation rates up to $\omega_\text{i}=0.6$. Accounting for the bias in the case of rapid rotation aligns specific star formation rates more closely with predictions from theoretical models. Notably, this also implies a slightly higher level of dust attenuation than previously anticipated, while still allowing for a `dust-free' interpretation of the galaxy. The impact of rapid rotation SPS models on the rest-UV luminosity function is found to be minimal. Overall, our findings have potentially important implications for understanding dust attenuation and mass assembly history in the high-redshift Universe.
Isolated galaxies are the ideal reference sample to study galaxy structure while minimising potential environmental effects. We selected a complete sample of 14 nearby, late-type, highly inclined ($i\geq80^{\circ}$), isolated galaxies from the Catalogue of Isolated Galaxies (CIG), offering a vertical view of their disc structure. We aim to study extraplanar Diffuse Ionized Gas (eDIG) by comparing the old and young disc components traced by near-infrared (NIR) and Ultraviolet (UV) imaging with the H$\alpha$ emission structure. We obtained H$\alpha$ monochromatic maps from Fabry-Perot (FP) interferometry, while the old and young disc structures are obtained from the photometric analysis of the 2MASS K$_{s}$-band, and GALEX NUV and FUV images, thereby identifying the stellar disc and whether the eDIG is present. The H$\alpha$ morphology is peculiar in CIG 71, CIG 183, and CIG 593, showing clear asymmetries. In general, geometric parameters (isophotal position angle, peak light distribution, inclination) measured from H$\alpha$, UV and NIR show minimal differences (e.g. $\Delta i\leq\pm$10$^{\circ}$), suggesting that interaction does not play a significant role in shaping the morphology, as expected in isolated galaxies. From the H$\alpha$ maps, the eDIG was detected vertically in 11 out of 14 galaxies. Although the fraction of eDIG is high, the comparison between our sample and a generic sample of inclined spirals suggests that the phenomenon is uncorrelated with the galaxy environment. As suggested by the extraplanar UV emission found in 13 out of 14 galaxies, the star formation extends well beyond the disc defined by the H$\alpha$ map.
The isolated globule B335 contains a single, low luminosity Class 0 protostar associated with a bipolar nebula and outflow system seen nearly perpendicular to its axis. We observed the innermost regions of this outflow as part of JWST/NIRCam GTO program 1187, primarily intended for wide-field slitless spectroscopy of background stars behind the globule. We find a system of expanding shock fronts with kinematic ages of only a few decades emerging symmetrically from the position of the embedded protostar, which is not directly detected at NIRCam wavelengths. The innermost and youngest of the shock fronts studied here shows strong emission from CO. The next older shock front shows less CO and the third shock front shows only H$_2$ emission in our data. This third and most distant of these inner shock fronts shows substantial evolution of its shape since it was last observed with high spatial resolution in 1996 with Keck/NIRC. This may be evidence of a faster internal shock catching up with a slower one and of the two shocks merging.
We study the 10 Myr evolution of parsec-scale stellar disks with initial masses of $M_{\mathrm{disk}} = 1.0$ - $7.5 \times 10^4 M_\odot$ and eccentricities $e_\mathrm{init}=0.1$-$0.9$ around supermassive black holes (SMBHs). Our disk models are embedded in a spherical background potential and have top-heavy single and binary star initial mass functions (IMF) with slopes of $0.25$-$1.7$. The systems are evolved with the N-body code $\texttt{BIFROST}$ including post-Newtonian (PN) equations of motion and simplified stellar evolution. All disks are unstable and evolve on Myr timescales towards similar eccentricity distributions peaking at $e_\star \sim 0.3$-$0.4$. Models with high $e_\mathrm{init}$ also develop a very eccentric $(e_\star\gtrsim0.9)$ stellar population. For higher disk masses $M_\mathrm{disk} \gtrsim3 \times10^4\;\mathrm{M_\odot}$, the disk disruption dynamics is more complex than the standard secular eccentric disk instability, with opposite precession directions at different disk radii -- a precession direction instability. We present an analytical model describing this behavior. A milliparsec population of $N\sim10$-$100$ stars forms around the SMBH in all models. For low $e_\mathrm{init}$ stars migrate inward, while for $e_\mathrm{init}\gtrsim0.6$ stars are captured by the Hills mechanism. Without PN, after $6$ Myr the captured stars have a sub-thermal eccentricity distribution. We show that including PN effects prevents this thermalization by suppressing resonant relaxation, and thus these effects cannot be ignored. The number of tidally disrupted stars is similar to or larger than the number of milliparsec stars. None of the simulated models can simultaneously reproduce the kinematic and stellar population properties of the Milky Way center clockwise disk and the S-cluster.
The discovery of planetary systems beyond our solar system has posed challenges to established theories of planetary formation. Planetary orbits display a variety of architectures not predicted by first principles, and free-floating planets appear ubiquitous. The recent discovery of candidate Jupiter Mass Binary Objects (JuMBOs) by the James Webb Space Telescope (JWST) further expanded this enigma. Here, by means of high-accuracy, direct $N$-body simulations, we evaluate the possibility that JuMBOs may form as a result of ejection after a close stellar flyby. We consider a system of two Jupiter-like planets moving in circular orbits with velocities $v_1$ and $v_2$ at distances $a_1$ and $a_2$ around a Sun-like star. The interloper is another Sun-like star approaching with asymptotic velocity $v_\infty$. We find that JuMBOs can indeed be formed upon ejection if the two planets are nearly aligned as the interloper reaches the closest approach. The ratio of the cross-section of JuMBO production to that of single ejected free-floating planets can approach $\sim 20\%$ for $v_\infty/v_2\sim 0.1 - 0.2$ and $a_1/a_2\sim 0.65-0.7$. JuMBOs formed via this channel are expected to have an average semi-major axis comparable to $\Delta a~\sim 3(a_2-a_1)$ and high eccentricity, with a distinctive superthermal distribution which can help to observationally identify this formation channel and distinguish it from primordial formation. We determine an upper limit on the JuMBO formation efficiency per planetary system. In very dense star clusters like the Trapezium in the Orion Nebula, this efficiency can reach several tens of percent. If the ejection channel is confirmed by these or future JWST observations, these JuMBOs will directly inform us of the conditions where these giant planets formed in protoplanetary disks, putting stringent constraints on giant planet formation theory.
The diffusion of water molecules through mesoporous dust of amorphous carbon (a-C) is a key process in the evolution of prestellar, protostellar, and protoplanetary dust, as well as in that of comets. It also plays a role in the formation of planets. Given the absence of data on this process, we experimentally studied the isothermal diffusion of water molecules desorbing from water ice buried at the bottom of a mesoporous layer of aggregated a-C nanoparticles, a material analogous to protostellar and cometary dust. We used infrared spectroscopy to monitor diffusion in low temperature (160 to 170 K) and pressure (6 $\times$ 10$^{-5}$ to 8 $\times$ 10$^{-4}$ Pa) conditions. Fick's first law of diffusion allowed us to derive diffusivity values on the order of 10$^{-2}$ cm$^2$ s$^{-1}$, which we linked to Knudsen diffusion. Water vapor molecular fluxes ranged from 5 $\times$ 10$^{12}$ to 3 $\times$ 10$^{14}$ cm$^{-2}$ s$^{-1}$ for thicknesses of the ice-free porous layer ranging from 60 to 1900 nm. Approximating the layers of nanoparticles as assemblies of spheres, we attributed to this cosmic dust analog of porosity 0.80-0.90 a geometry correction factor, similar to the tortuosity factor of tubular pore systems, between 0.94 and 2.85. Applying the method to ices and refractory particles of other compositions will provide us with other useful data.
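As an order-of-magnitude check of the link to Knudsen diffusion, the Knudsen diffusivity of water vapour can be estimated as $D_K = (d/3)\,\bar{v}$, with $d$ the pore diameter and $\bar{v}$ the mean thermal speed. The pore size below is an assumed value for illustration, not a parameter taken from the experiment:

```python
# Knudsen diffusivity sketch: D_K = (d/3) * v_mean, with v_mean the mean
# thermal speed of a water molecule. The pore diameter is an assumption.
from math import pi, sqrt

K_B = 1.380649e-23                    # Boltzmann constant, J/K
M_H2O = 18.015e-3 / 6.02214076e23    # mass of one H2O molecule, kg

def knudsen_diffusivity(d_pore_m, T):
    """Knudsen diffusivity in cm^2/s for pore diameter d_pore_m (m) at T (K)."""
    v_mean = sqrt(8 * K_B * T / (pi * M_H2O))  # mean thermal speed, m/s
    return (d_pore_m / 3.0) * v_mean * 1e4     # convert m^2/s -> cm^2/s

# At 165 K with an assumed ~20 nm pore diameter, D_K comes out of order
# 1e-2 cm^2/s, consistent with the diffusivities reported above.
print(f"{knudsen_diffusivity(20e-9, 165.0):.1e} cm^2/s")
```

The agreement in order of magnitude is what supports interpreting the measured diffusivities as Knudsen (pore-wall-limited) rather than bulk gas-phase diffusion.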
We derive a new criterion for estimating characteristic dynamical timescales in N-body simulations. The criterion uses the second, third, and fourth derivatives of particle positions: acceleration, jerk, and snap. It can be used for choosing timesteps in integrators with adaptive step size control. For any two-body problem the criterion is guaranteed to determine the orbital period and pericenter timescale regardless of eccentricity. We discuss why our criterion is the simplest derivative-based expression for choosing adaptive timesteps with the above properties and show its superior performance over existing criteria in numerical tests. Because our criterion uses lower-order derivatives, it is less susceptible to rounding errors caused by finite floating point precision. This significantly decreases the volume of phase space where an adaptive integrator fails or gets stuck due to unphysical timestep estimates. For example, our new criterion can accurately estimate timesteps for orbits around a 50 m sized Solar System object located at 40 AU from the coordinate origin when using double floating point precision. Previous methods were limited to objects larger than 10 km. We implement our new criterion in the high order IAS15 integrator, which is part of the freely available N-body package REBOUND.
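To illustrate how a derivative-based timescale can recover the orbital period, consider the combination $\tau = |a|/\sqrt{|j|^2 + |a||s|}$ built from acceleration, jerk, and snap. This is a sketch in the spirit of the criterion described above, not necessarily the authors' exact expression. For a circular two-body orbit with angular frequency $\omega$, $|a|=\omega^2 r$, $|j|=\omega^3 r$, $|s|=\omega^4 r$, so $\tau = 1/(\sqrt{2}\,\omega)$: proportional to the orbital period and independent of radius.

```python
# Illustrative derivative-based dynamical timescale from acceleration (a),
# jerk (j), and snap (s). Not necessarily the paper's exact criterion.
from math import sqrt

def timescale(a, j, s):
    """tau = |a| / sqrt(|j|^2 + |a|*|s|), for magnitudes a, j, s."""
    return a / sqrt(j * j + a * s)

# Circular orbits with angular frequency omega and radius r:
# |a| = omega^2 r, |j| = omega^3 r, |s| = omega^4 r.
for omega, r in [(1.0, 1.0), (2.0, 5.0), (10.0, 0.3)]:
    a, j, s = omega**2 * r, omega**3 * r, omega**4 * r
    # omega * tau is always 1/sqrt(2), independent of r
    print(omega * timescale(a, j, s))
```

An adaptive integrator would then take a step of some small fraction of the minimum such timescale over all particles.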
Transmission spectroscopy is currently the technique best suited to study a wide range of planetary atmospheres, leveraging the filtering of a star's light by a planet's atmosphere rather than its own emission. However, as both a planet and its star contribute to the information encoded in a transmission spectrum, an accurate accounting of the stellar contribution is pivotal to enabling robust atmospheric studies. As current stellar models lack the required fidelity for such accounting, we investigate here the capability of time-resolved spectroscopy to yield high-fidelity, empirical constraints on the emission spectra of stellar surface heterogeneities (i.e., spots and faculae). Using TRAPPIST-1 as a test case, we simulate time-resolved JWST/NIRISS spectra and demonstrate that with a blind approach incorporating no physical priors, it is possible to constrain the photospheric spectrum to less than 0.5% and the spectra of stellar heterogeneities to within 10%, a precision that enables photon-limited (rather than model-limited) science. Now confident that time-resolved spectroscopy can propel the field into an era of robust high-precision transmission spectroscopy, we introduce a list of areas for future exploration to harness its full potential, including the wavelength dependency of limb darkening and hybrid priors from stellar models as a means to further break the degeneracy between the position, size, and spectra of heterogeneities.
In ultraviolet (UV) astronomical observations, photons from the sources are very few compared to the visible or infrared (IR) wavelength ranges. Detectors operating in the UV usually employ a photon-counting mode of operation. These detectors usually have an image intensifier sensitive to UV photons and a readout mechanism that employs photon counting. The development of readouts for these detectors is resource-intensive and expensive. In this paper, we describe the development of a low-cost UV photon-counting detector processing unit that employs a Raspberry Pi with its built-in readout to perform the photon-counting operation. Our system can operate in both 3x3 and 5x5 window modes at 30 frames per second (fps), where the 5x5 window mode also enables the detection of double events. The system can be built quickly from readily available custom-off-the-shelf (COTS) components and can thus be used in inexpensive CubeSat or small satellite missions. This low-cost solution promises to broaden access to UV observations, advancing research possibilities in space-based astronomy.
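The windowed photon-counting operation described above can be sketched as follows: find pixels that are local maxima above a threshold, then compute the intensity-weighted centroid of the surrounding 3x3 window to localize each photon event. This is an illustrative sketch only, not the actual firmware or software of the system described in the paper:

```python
# Minimal 3x3-window photon-event centroiding sketch (illustrative only).
import numpy as np

def find_events(frame, threshold):
    """Return (y, x) centroids of photon events: pixels above `threshold`
    that are local maxima of their 3x3 neighbourhood, centroided by
    intensity weighting over that window."""
    events = []
    for y in range(1, frame.shape[0] - 1):
        for x in range(1, frame.shape[1] - 1):
            win = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] >= threshold and frame[y, x] == win.max():
                total = win.sum()
                ys, xs = np.mgrid[y - 1:y + 2, x - 1:x + 2]
                events.append(((ys * win).sum() / total,
                               (xs * win).sum() / total))
    return events

# One synthetic photon splash centred on pixel (4, 4):
frame = np.zeros((8, 8))
frame[3:6, 3:6] = [[0, 1, 0], [1, 4, 2], [0, 1, 0]]
print(find_events(frame, threshold=3.0))  # one event near (4.0, 4.1)
```

A 5x5 variant would widen the window, which is what allows neighbouring double events to be separated.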
This thesis introduces an effective theory for the long-distance behaviour of scalar fields in de Sitter spacetime, known as the second-order stochastic theory, with the aim of computing scalar correlation functions that are useful in inflationary cosmology.
Random tensor networks (RTNs) have proved to be fruitful tools for modelling the AdS/CFT correspondence. Due to their flat entanglement spectra, when discussing a given boundary region $R$ and its complement $\bar R$, standard RTNs are most analogous to fixed-area states of the bulk quantum gravity theory, in which quantum fluctuations have been suppressed for the area of the corresponding HRT surface. However, such RTNs have flat entanglement spectra for all choices of $R, \bar R,$ while quantum fluctuations of multiple HRT-areas can be suppressed only when the corresponding HRT-area operators mutually commute. We probe the severity of such obstructions in pure AdS$_3$ Einstein-Hilbert gravity by constructing networks whose links are codimension-2 extremal-surfaces and by explicitly computing semiclassical commutators of the associated link-areas. Since $d=3,$ codimension-2 extremal-surfaces are geodesics, and codimension-2 `areas' are lengths. We find a simple 4-link network defined by an HRT surface and a Chen-Dong-Lewkowycz-Qi constrained HRT surface for which all link-areas commute. However, the algebra generated by the link-areas of more general networks tends to be non-Abelian. One such non-Abelian example is associated with entanglement-wedge cross sections and may be of more general interest.
If the metric is chosen to depend exponentially on the conformal factor, and if one works in a gauge where the conformal factor has the wrong sign propagator, perturbative quantum gravity corrections can be partially resummed into a series of terms each of which is ultraviolet finite. These new terms however are not perturbative in some small parameter, and are not individually BRST invariant, or background diffeomorphism invariant. With appropriate parametrisation, the finiteness property holds true also for a full phenomenologically relevant theory of quantum gravity coupled to (beyond the standard model) matter fields, provided massive tadpole corrections are set to zero by a trivial renormalisation.
Investigating the existence of algebraic structures and finding hidden symmetries in physical systems is one of the most important aspects of understanding their behavior and predicting their evolution. Expanding this method of study to cosmic structures and combining past knowledge with new data can be very interesting and lead to discovering new ways to analyze these systems. However, studying black hole symmetries always presents many complications and sometimes requires computational approximations. For example, checking the existence of Killing vectors and then calculating them is not always an easy task. It becomes much more difficult as the structure and geometry of the system become more complex. In this work, we show that if the wave equations on a black hole background can be converted into the form of the general Heun equation, then, based on its structure and coefficients, the algebra of the system can be easily studied, and computational and geometrical complications can be avoided. For this purpose, we selected two $AdS_5$ black holes, Reissner-Nordstrom (R-N) and Kerr, and analyzed the Klein-Gordon equation on the background of these black holes. Based on this concept, we observed that the radial part for the R-N black hole and both the radial and angular parts for the Kerr black hole could be transformed into the general form of the Heun equation. As a result, according to the algebraic structure that governs the Heun equation and its coefficients, one can easily obtain the generalized $sl(2)$ algebra.
Recent proposals are emerging for the experimental detection of entanglement mediated by classical gravity, carrying significant theoretical and observational implications. In fact, the detection of gravitational waves (GWs) by LIGO provides an alternative laboratory for testing various gravity-related properties. By employing LIGO's arms as oscillators interacting with GWs, our study demonstrates the potential for generating quantum entanglement between two mutually orthogonal modes of simple harmonic oscillators. Our findings reveal unique entanglement dynamics, including periodic ``collapse and revival" influenced by GW oscillations, alongside a distinct ``quantum memory effect." We believe that these forecasts may hold significance for both theoretically probing and experimentally verifying the quantumness of gravitational waves.
We quantize a closed homogeneous and isotropic universe in modified teleparallel gravity, wherein a scalar field is non-minimally coupled to both the torsion and a boundary term. In this regard, we study exact solutions within both the classical and quantum frameworks by utilizing the corresponding Wheeler-DeWitt (WDW) equation of the model. To establish a correspondence between the classical and quantum levels, we propose an appropriate initial condition for the wave packets and observe that they peak along and closely adhere to the classical trajectories. We quantify this correspondence using the de Broglie-Bohm interpretation of quantum mechanics. According to this proposal, the classical and Bohmian trajectories coincide when the quantum potential vanishes along the Bohmian paths. Furthermore, we apply the de-parameterization technique to our model in the context of the problem of time in quantum cosmological models based on the WDW equation, utilizing the global internal time denoted as $\chi$, which represents a scalar field.
We present for the first time a study of the quasinormal modes of rapidly rotating Ellis-Bronnikov wormholes in General Relativity. We compute the spectrum of the wormholes using a spectral decomposition of the metric perturbations on a numerical background. We focus on the $M_z=2,3$ sector of the perturbations, and show that the triple isospectrality of the symmetric and static Ellis-Bronnikov wormhole is broken due to rotation, giving rise to a much richer spectrum than the spectrum of Kerr black holes. We do not find any instabilities for $M_z=2,3$ perturbations.
The symmetry frame formalism is an effective tool for computing the symmetries of a Riemann-Cartan (RC) geometry and, in particular, of metric teleparallel geometries. In the case of non-vanishing torsion in a four-dimensional RC geometry, Minkowski geometry is the only geometry admitting ten affine frame symmetries. Excluding this almost trivial geometry, the maximal number of affine frame symmetries is seven. A natural question is which four-dimensional geometries admit a seven-dimensional group of affine frame symmetries. Such geometries are locally homogeneous and admit the largest isotropy group permitted, and hence are called {\it maximally isotropic}. Using the symmetry frame formalism to compute affine frame symmetries, along with the additional structure of the torsion tensor, we employ the Cartan-Karlhede algorithm to determine all possible seven-dimensional symmetry groups for Riemann-Cartan geometries.
This paper systematically investigates two unexplored scenarios within the realm of stationary and axially symmetric spacetimes influenced by the composition of two Ehlers and/or Harrison transformations of different natures. Specifically, it investigates instances where two magnetic maps are composed and scenarios wherein one electric and one magnetic map are superimposed. In the first part of the study, a magnetic Ehlers-Harrison composition is applied to the Minkowski geometry, resulting in the construction of an Electromagnetic Swirling spacetime. This background geometry manifests electromagnetic and vortex-like fields. The Kundt class nature of this spacetime is revealed, and its connection with the planar--Reissner--Nordstr\"om-NUT black hole is established. Furthermore, the generalization of this background in the presence of a nontrivial cosmological constant is presented. Subsequently, the study incorporates a Schwarzschild black hole onto the Electromagnetic Swirling universe background using the same magnetic map. Detailed insights into the geometrical properties of both spacetimes are provided. In the second part, the study delves into all the possible combinations of at most two Ehlers and/or Harrison transformations, specifically when their natures are mixed: one electric and the other magnetic. This exploration yields four novel background spacetimes: an electromagnetic spacetime endowed with electromagnetic monopolic charges, an electromagnetic spacetime influenced by a NUT charge, a Swirling background influenced by a NUT charge, and a Swirling background with electromagnetic charges. All of these novel backgrounds are shown to be algebraically general, namely, of type I according to Petrov classification. Their main geometrical characteristics are carefully analyzed.
This work tabulates measured and derived values of coefficients for Lorentz and CPT violation in the Standard-Model Extension. Summary tables are extracted listing maximal attained sensitivities in the matter, photon, neutrino, and gravity sectors. Tables presenting definitions and properties are also compiled.
The ability to test general relativity in extreme gravity regimes using gravitational wave observations from current ground-based or future space-based detectors motivates the mathematical study of the symmetries of black holes in modified theories of gravity. In this paper we focus on spinning black hole solutions in two quadratic gravity theories: dynamical Chern-Simons and scalar Gauss-Bonnet gravity. We compute the principal null directions, Weyl scalars, and complex null tetrad in the small-coupling, slow rotation approximation for both theories, confirming that both spacetimes are Petrov type I. Additionally, we solve the Killing equation through rank 6 in dynamical Chern-Simons gravity and rank 2 in scalar Gauss-Bonnet gravity, showing that there is no nontrivial Killing tensor through those ranks for each theory. We therefore conjecture that the still-unknown, exact, quadratic-gravity, black-hole solutions do not possess a fourth constant of motion.
As an alternative gravity model, we consider an extended Einstein-Maxwell gravity that retains gauge invariance. The extension consists of a directional coupling between the spatial electromagnetic fields and the Ricci tensor. We show the importance of this additional term in forming a compact stellar object and in setting the value of its radius. As an application of the model, we substitute a magnetic-field ansatz for a hypothetical magnetic monopole, with only a time-independent radial component, and for the matter part we assume a perfect fluid stress tensor. To obtain the spherically symmetric internal metric of the perfect-fluid compact stellar object, we solve the Tolman-Oppenheimer-Volkoff equation with a polytropic equation of state $p(\rho)=a\rho^2$. Using a dynamical system approach, we study the stability of the solutions, for which the arrow diagrams show a saddle (quasi-stable) point for $a<0$ (dark stars) and a sink (stable) for $a>0$ (normal visible stars). We also check the energy conditions, the speed of sound, and the Harrison-Zeldovich-Novikov static stability criterion for the obtained solutions and confirm that they form a stable state.
We study the massive scalar field equation $\Box_g \phi = m^2 \phi$ on a stationary and spherically symmetric black hole $g$ (including in particular the Schwarzschild and Reissner--Nordstr\"om black holes in the full sub-extremal range) for solutions $\phi$ projected on a fixed spherical harmonic. Our problem involves the scattering of an attractive long-range potential (Coulomb-like) and thus cannot be treated perturbatively. We prove precise (point-wise) asymptotic tails of the form $t^{-5/6} f(t)+ O(t^{-1+\delta})$, where $f(t)$ is an explicit oscillating profile. Our asymptotics appear to be the first rigorous decay result for a massive scalar field on a black hole. Establishing these asymptotics is also an important step in retrieving the assumptions used in work of the third author regarding the interior of dynamical black holes and Strong Cosmic Censorship.
We present a semi-rigorous justification of Bekenstein's Generalized Second Law of Thermodynamics applicable to a universe with black holes present, based on a generic quantum gravity formulation of a black hole spacetime, where the bulk Hamiltonian constraint plays a central role. Specializing to Loop Quantum Gravity, and considering the inspiral and post-ringdown stages of binary black hole merger into a remnant black hole, we show that the Generalized Second Law implies a lower bound on the non-perturbative LQG correction to the Bekenstein-Hawking area law for black hole entropy. This lower bound itself is expressed as a function of the Bekenstein-Hawking area formula for entropy. Results of the analyses of LIGO-VIRGO-KAGRA data recently performed to verify the Hawking Area Theorem for binary black hole merger, are shown to be entirely consistent with this Loop Quantum Gravity-induced inequality. However, the consistency is independent of the magnitude of the Loop Quantum Gravity corrections to black hole entropy, depending only on the negative algebraic sign of the quantum correction. We argue that results of alternative quantum gravity computations of quantum black hole entropy, where the quantum entropy exceeds the Bekenstein-Hawking value, may not share this consistency.
The black-hole laser (BHL) effect is the self-amplification of Hawking radiation between a pair of horizons which act as a resonant cavity. In a flowing atomic condensate, the BHL effect arises in a finite supersonic region, where Bogoliubov-Cherenkov-Landau (BCL) radiation is resonantly excited by any static perturbation. Thus, experimental attempts to produce a BHL unavoidably deal with the presence of a strong BCL background, making the observation of the BHL effect still a major challenge in the analogue gravity field. Here, we perform a theoretical study of the BHL-BCL crossover using an idealized model where both phenomena can be unambiguously isolated. By drawing an analogy with an unstable pendulum, we distinguish three main regimes according to the interplay between quantum fluctuations and classical stimulation: quantum BHL, classical BHL, and BCL. Based on quite general scaling arguments, the nonlinear amplification of quantum fluctuations up to saturation is identified as the most robust trait of a quantum BHL. A classical BHL behaves instead as a linear quantum amplifier, where the output is proportional to the input. The BCL regime also acts as a linear quantum amplifier, but its gain is exponentially smaller as compared to a classical BHL. Complementary signatures of black-hole lasing are a decrease in the amplification for increasing BCL amplitude or a nonmonotonic dependence of the growth rate with respect to the background parameters. We also identify interesting analogue phenomena such as Hawking-stimulated white-hole radiation or quantum BCL-stimulated Hawking radiation. The results of this work not only are of interest for analogue gravity, where they help to distinguish each phenomenon and to design experimental schemes for a clear observation of the BHL effect, but they also open the prospect of finding applications of analogue concepts in quantum technologies.
We revisit the connection between Hawking radiation and high-frequency dispersion for a Schwarzschild black hole, following the work of Brout et al. After confirming the robustness of Hawking radiation for monotonic dispersion relations, we consider non-monotonic dispersion relations that deviate from the standard relation only in the trans-Planckian domain. Contrary to the common belief that Hawking radiation is insensitive to UV physics, it turns out that Hawking radiation is subject to significant modifications after the scrambling time. Depending on the UV physics at the singularity, the amplitude of Hawking radiation may diminish after the scrambling time, while the Hawking temperature remains the same. Our finding is thus not contradictory to earlier works regarding the robustness of the Hawking temperature.
In this paper we present the general spherically symmetric static solution to the vacuum equations of Cotton gravity. The metric solution obtained reveals the presence of singularities at the photosphere of a spherical source, which may obstruct the formation of stellar black holes at the Schwarzschild radius. The solution is characterized by two integration constants, whose values can be restricted by association with the Hubble horizon. We examine the diverse features of the solution, including long-range modifications to Newton's force through the incorporation of a velocity-squared repulsive term to model dark energy.
Configurations of rotating black holes in the cubic Galileon theory are computed by means of spectral methods. The equations are written in the 3+1 formalism, and the coordinates are based on the maximal slicing condition and the spatial harmonic gauge. The black holes are described as apparent horizons in equilibrium. This enables the first fully consistent computation of rotating black holes in this theory. Several quantities are extracted from the solutions; in particular, the vanishing of the mass is confirmed. A link is made between this fact and the observation that the solutions do not obey the zeroth law of black hole thermodynamics.
This work analyzes the asymptotic behavior of the asymptotically flat solutions of Einstein-\ae ther theory in the linear case. The vacuum solutions for the tensor, vector, and scalar modes are first obtained, written as sums of various multipolar moments. Suitable coordinate transformations are then determined, and the so-called pseudo-Newman-Unti coordinate systems are constructed for all radiative modes. In these coordinates, it is easy to identify the asymptotic symmetries. It turns out that all three kinds of modes possess the familiar Bondi-Metzner-Sachs symmetries, or their extensions, as in general relativity. Moreover, there also exist \emph{subleading} asymptotic symmetries parameterized by a time-independent vector field on a unit 2-sphere. The memory effects are also identified. The tensor gravitational wave excites displacement, spin, and center-of-mass memories similar to those in general relativity, while new memory effects due to the vector and scalar modes also exist. The subleading asymptotic symmetry is related to the (leading) vector displacement memory effect, which can be viewed as a linear combination of the electric-type and magnetic-type memory effects. However, the scalar memory effect appears to be unrelated to the asymptotic symmetries, at least in the linearized theory.
It was recently shown that the von Neumann algebras of observables dressed to the mass of a Schwarzschild-AdS black hole or an observer in de Sitter are Type II, and thus admit well-defined traces. The von Neumann entropies of "semi-classical" states were found to be generalized entropies. However, these arguments relied on the existence of an equilibrium (KMS) state and thus do not apply to, e.g., black holes formed from gravitational collapse, Kerr black holes, or black holes in asymptotically de Sitter space. In this paper, we present a general framework for obtaining the algebra of dressed observables for linear fields on any spacetime with a Killing horizon. Assuming the existence of a stationary (but not necessarily KMS) state and suitable decay of solutions, we prove a structure theorem: the algebra of dressed observables always contains a Type II factor "localized" on the horizon. These assumptions have been rigorously proven in most cases of interest. Applied to the algebra in the exterior of an asymptotically flat Kerr black hole, where the fields are dressed to the black hole mass and angular momentum, we find a product of a Type II$_{\infty}$ algebra on the horizon and a Type I$_{\infty}$ algebra at past null infinity. In Schwarzschild-de Sitter, despite the fact that we introduce an observer, the quantum field observables are dressed to the perturbed areas of the black hole and cosmological horizons, and the algebra is the product of Type II$_{\infty}$ algebras on each horizon. In all cases, the von Neumann entropy for semiclassical states is given by the generalized entropy. Our results suggest that whenever there exists another "boundary structure" (e.g., an asymptotic boundary or another Killing horizon) the algebra of observables is Type II$_{\infty}$, and in the absence of such structures (e.g., in de Sitter) the algebra is Type II$_{1}$.
With a 4-form ansatz of 11-dimensional supergravity over a non-dynamical AdS$_4 \times S^7/Z_k$ background, with the internal space a $S^1$ Hopf fibration on CP$^3$, we get a consistent truncation. The (pseudo)scalars in the resulting scalar equations in Euclidean AdS$_4$ space may be viewed as arising from (anti)M-branes wrapping internal directions in the (Wick-rotated) skew-whiffed M2-branes background (as the resulting theory is for anti-M2-branes), thus realizing the modes after swapping the three fundamental representations $8_s$, $8_c$, $8_v$ of SO(8). Taking the backreaction on the external and internal spaces, we get massless and massive modes, corresponding to exactly marginal and marginally irrelevant deformations on the boundary CFT$_3$, write a closed solution for the bulk equation, and compute its correction to the background action. Next, considering the Higgs-like (breathing) mode $m^2=18$, with all supersymmetries, parity, and scale invariance broken, by solving the associated bulk equation with mathematical methods, especially the Adomian decomposition method, and analyzing the near-boundary behavior of the solutions, we identify the boundary duals in SU(4)$\times$U(1)-singlet sectors of the ABJM model. Then, introducing new dual deformation operators with $\Delta_+ = 3, 6$ made of bi-fundamental scalars, fermions, and U(1) gauge fields, we obtain SO(4)-invariant solutions as small instantons on a three-sphere with radius at infinity, which correspond to collapsing bulk bubbles leading to big-crunch singularities.
This paper proposes an alternative regularization method for handling the ultraviolet behavior of entanglement entropy. Utilizing an $i\epsilon$ prescription in the Euclidean double cone geometry, it accurately reproduces the universal behavior of entanglement entropy. The method is demonstrated in the free boson theory in arbitrary dimensions and two-dimensional conformal field theories. The findings highlight the effectiveness of the $i\epsilon$ regularization method in addressing ultraviolet issues in quantum field theory and gravity, suggesting potential applications to other calculable quantities.
We investigate the evolution of a flat Emergent Universe (EU) obtained with a non-linear equation of state (nEoS) in Einstein's general theory of relativity. The nEoS is equivalent to three different types of barotropic cosmic fluids, which are determined by the nEoS parameter. The EU begins expanding with no interaction among the cosmic fluids. Assuming an interaction among the fluid components that sets in at a time $t \geq t_i$, we study the evolution of the EU that leads to the presently observed universe. We adopt a dynamical system analysis to obtain the critical points of the autonomous system and study the evolution of an EU with or without interaction among the fluid components. We also study the stability of the critical points and draw the phase portraits. The density parameters and the corresponding cosmological parameters are obtained for both the non-interacting and interacting phases of the evolution dynamics.
We employ the approach of path integral in the phase space to study the kinetics of state switching associated with black hole phase transitions. Under the assumption that the state-switching process of the black hole is described by a stochastic Langevin equation based on the free energy landscape, we derive the Martin-Siggia-Rose-Janssen-de Dominicis (MSRJD) functional and obtain the path integral expression of the transition probability. The MSRJD functional inherently represents the path integral in the phase space, allowing us to extract the effective Hamiltonian for the dynamics of the state-switching process. By solving the Hamiltonian equations of motion, we obtain the kinetic path in the phase space, using the Reissner-Nordstrom-AdS (RNAdS) black hole as an example. Furthermore, the dominant kinetic path within the configuration space is calculated. We also discuss the kinetic rate by using the functional formalism. Finally, we examine two further examples: the Hawking-Page phase transition and the Gauss-Bonnet black hole phase transition at the triple point. Our analysis demonstrates that, concerning the Hawking-Page phase transition, while a dominant kinetic path in the phase space from the large SAdS black hole to the thermal AdS space is present, there is no kinetic path for the inverse process. For the Gauss-Bonnet black hole phase transition at the triple point, the state-switching processes between the small, the intermediate, and the large Gauss-Bonnet black holes constitute a chemical reaction cycle.
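The stochastic Langevin dynamics on a free-energy landscape described above can be sketched numerically with an Euler-Maruyama integrator. This is a minimal illustration only: the double-well free energy below is a stand-in for two locally stable black-hole states, not the actual RNAdS landscape, and all names are hypothetical.

```python
import numpy as np

def simulate_langevin(grad_F, x0, temperature, friction=1.0,
                      dt=1e-3, n_steps=10000, seed=0):
    """Overdamped Langevin dynamics on a free-energy landscape F:
    dx = -(1/friction) F'(x) dt + sqrt(2 T / friction) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * temperature * dt / friction)
    for i in range(n_steps):
        drift = -grad_F(x[i]) * dt / friction
        x[i + 1] = x[i] + drift + noise_amp * rng.standard_normal()
    return x

# Illustrative double-well free energy F(x) = (x^2 - 1)^2 with minima at
# x = -1 and x = +1 playing the role of the two black-hole states;
# its gradient is F'(x) = 4 x (x^2 - 1).
path = simulate_langevin(lambda x: 4.0 * x * (x**2 - 1.0),
                         x0=-1.0, temperature=0.05)
```

Transition probabilities between the two wells could then be estimated from many such trajectories, which is the quantity the MSRJD path-integral formalism computes analytically.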
We apply machine-learning techniques to the effective-field-theory analysis of the $e^+e^- \to W^+W^-$ processes at future lepton colliders, and demonstrate their advantages in comparison with conventional methods, such as optimal observables. In particular, we show that machine-learning methods are more robust to detector effects and backgrounds, and could in principle produce unbiased results with sufficient Monte Carlo simulation samples that accurately describe experiments. This is crucial for the analyses at future lepton colliders given the outstanding precision of the $e^+e^- \to W^+W^-$ measurement ($\sim 10^{-4}$ in terms of anomalous triple gauge couplings or even better) that can be reached. Our framework can be generalized to other effective-field-theory analyses, such as the one of $e^+e^- \to t\bar{t}$ or similar processes at muon colliders.
New vector bosons that are coupled to conserved currents in the Standard Model exhibit enhanced rates below the electroweak scale from anomalous triangle amplitudes, leading to (energy/vector mass)$^2$ enhancements to rare Z decays and flavor-changing meson decays into the longitudinally polarized vector boson. In the case of a vector boson gauging $U(1)_{B-L}$, the mass gap between the top quark and the remaining SM fermions leads to (energy/vector mass)$^2$ enhancements for processes with momentum transfer below the top mass. In addition, we examine the case of an intergenerational $U(1)_{B_3 - L_2}$ that has been proposed to resolve the $(g-2)_\mu$ anomaly with an MeV scale DM candidate, and we find that these enhanced processes constrain the entire parameter space.
We consider a light scalar dark matter candidate with mass in the GeV range whose $p$-wave annihilation is enhanced through a Breit-Wigner resonance. The annihilation actually proceeds in the $s$-channel via a dark photon mediator whose mass is nearly equal to the masses of the incoming particles. We compute the temperature at which kinetic decoupling between dark matter and the primordial plasma occurs and show that including the effect of kinetic decoupling can reduce the dark matter relic density by orders of magnitude. For typical scalar masses ranging from 200 MeV to 5 GeV, we determine the range of values allowed for the dark photon couplings to the scalar and to the standard model particles after requiring the relic density to be in agreement with the value extracted from cosmological observations. We then show that $\mu$ and $y$-distortions of the CMB spectrum and X-ray data from XMM-Newton strongly constrain the model and rule out the region where the dark matter annihilation cross-section is strongly enhanced at small dispersion velocities. Constraints from direct detection searches and from accelerator searches for dark photons offer complementary probes of the model.
We review recent progress in understanding quarkonium dynamics inside the quark-gluon plasma as an open quantum system with a focus on the definition and nonperturbative calculations of relevant transport coefficients and generalized gluon distributions.
In an optically active medium, such as a plasma that contains a neutrino background, the left-handed and right-handed polarization photon modes acquire different dispersion relations. We study the propagation of photons in such a medium, which is otherwise isotropic, within the framework of the covariant collisionless Boltzmann equation, incorporating a term that parametrizes the optical activity. Using the linear response approximation, we obtain formulas for the components of the photon polarization tensor, expressed in terms of integrals over the momentum distribution function of the background particles. The main result is the formula for the P- and CP-breaking component of the photon polarization tensor in terms of the parameter in the new term introduced in the Boltzmann equation to describe the effects of optical activity. We discuss the results for some particular cases, such as the long-wavelength and non-relativistic limits, for illustrative purposes. We also discuss generalizations of the P- and CP-breaking term included in the Boltzmann equation. In particular, we consider the application to a plasma with a neutrino background and establish contact with calculations of the photon self-energy in such systems in the framework of Thermal Field Theory.
Cosmological observations of gravitational lenses, the cosmic microwave background, and the rotation speeds of stars in galaxies confirm that about 27% of the Universe consists of dark matter. The nature of these particles is unknown; however, there are theoretical models Beyond the Standard Model (BSM), such as superstrings and D-branes, which predict new particles such as WIMPs, dilatons, and axions. Experiments searching for such particles are being actively carried out both in space and at modern accelerators, but unambiguous information regarding the type of particles has not yet been obtained.
This study investigates the potential of a multi-TeV Muon Collider (MuC) for probing the Inert Triplet Model (ITM), which introduces a triplet scalar field with hypercharge $Y=0$ to the Standard Model. The ITM stands out as a compelling Beyond the Standard Model scenario, featuring a neutral triplet $T^0$ and charged triplets $T^\pm$. Notably, $T^0$ is posited as a dark matter (DM) candidate, being odd under a $Z_2$ symmetry. Rigorous evaluations against theoretical, collider, and DM experimental constraints corner the triplet scalar mass into a narrow TeV-scale region, within which three benchmark points are identified, with $T^\pm$ masses of 1.2 TeV, 1.85 TeV, and 2.3 TeV, for the collider study. The ITM's unique $TTVV$ four-point vertex, differing from fermionic DM models, facilitates efficient pair production through Vector Boson Fusion (VBF). This characteristic positions the MuC as an ideal platform for exploring the ITM, particularly due to the enhanced VBF cross-sections at high collision energies. To address the challenge of the soft decay products of $T^\pm$ resulting from the narrow mass gap between $T^\pm$ and $T^0$, we propose using Disappearing Charged Tracks (DCTs) from $T^\pm$ and forward muons as key signatures. We provide event counts for these signatures at MuC energies of 6 TeV and 10 TeV, with respective luminosities of 4 ab$^{-1}$ and 10 ab$^{-1}$. Despite the challenge of beam-induced backgrounds contaminating the signal, we demonstrate that our proposed final states enable the MuC to achieve a $5\sigma$ discovery for the identified benchmark points, particularly highlighting the effectiveness of the final state with one DCT and one forward muon.
We discuss properties of Quantum Chromodynamics at finite temperature obtained by means of lattice simulations with overlap fermions. This fermion discretization preserves chiral symmetry even at finite lattice spacing. We present details of the lattice formulation, first results for the chiral observables and discuss the behaviour of the system near the chiral thermal phase transition.
We perform a simultaneous global analysis of hadron fragmentation functions (FFs) to various charged hadrons at next-to-leading order in QCD. The world data set includes results from electron-positron single-inclusive annihilation, semi-inclusive deep inelastic scattering, as well as proton-proton collisions, including jet fragmentation measurements that place strong constraints on the gluon fragmentation. By carefully selecting hadron kinematics to ensure the validity of QCD factorization and the convergence of perturbative calculations, we achieve a satisfactory best fit with $\chi^2/$d.o.f.$=0.90$ in the simultaneous extraction of FFs for light charged hadrons ($\pi^{\pm}$, $K^{\pm}$ and $p/\bar{p}$). The total momenta of the $u$ and $d$ quarks and of the gluon carried by light charged hadrons have been determined precisely. This motivates future precision measurements of fragmentation into neutral hadrons, which are crucial for testing fundamental sum rules in QCD fragmentation.
One of the celebrated tools for explaining the hydrogen atom is the Born-Oppenheimer approximation. The resemblance of $QQ\bar{q}\bar{q}$ tetraquarks to the hydrogen atom within Quantum Chromodynamics (QCD) suggests using the Born-Oppenheimer approximation for these multiquark states. In this work, we use the dynamical diquark model to calculate the mass spectra and sizes of doubly charmed and charged tetraquark states, denoted $T_{cc}^{++}$. Our results for the mass spectra indicate some bound-state candidates with respect to the corresponding two-meson thresholds. The calculated expectation values $\sqrt{\langle r^2 \rangle}$ show that these doubly charmed and charged tetraquark states are compact.
We present a short overview of the so-called flavour anomalies, discussing their significance and the connections with QCD issues discussed at the HADRON 2023 conference.
We conduct a combined analysis to investigate dark matter (DM) with hypercharge anapole moments, focusing on scenarios where Majorana DM particles with spin 1/2 or 1 interact exclusively with Standard Model particles through U(1)$_{Y}$ hypercharge anapole terms. For completeness, we construct general, effective, and hypercharge gauge-invariant three-point vertices. These enable the generation of interaction vertices for both a virtual photon $\gamma$ and a virtual $Z$ boson with two identical massive Majorana particles of any non-zero spin $s$, after the spontaneous breaking of electroweak gauge symmetry. For complementarity, we adopt an effective operator tailored to each dark matter spin allowing crossing symmetry. We calculate the relic abundance, analyze current constraints and future sensitivities from dark matter direct detection and collider experiments, and apply the conceptual naive perturbativity bound. Our findings demonstrate that the scenario with spin-1 DM is more stringently constrained than that with spin-1/2, primarily due to the reduced annihilation cross-section and/or the enhanced rate of LHC mono-jet events. A significant portion of the remaining parameter space in the spin-1/2 DM scenario can be explored through upcoming Xenon experiments, with more than 30 ton-year exposure equivalent to approximately 7 years of running the XENONnT experiment. The spin-1 scenario can be almost entirely tested after the full run of the high-luminosity LHC, except for a very small parameter region where the DM mass is around 1 TeV. Our estimations, based on a generalized vertex, anticipate even stronger bounds and sensitivities for Majorana dark matter with higher spins.
We study the QCD equation of state and the chiral condensate using the hadron resonance gas with repulsive mean-field interactions. We find that the repulsive interactions improve the agreement with the lattice results on the derivatives of the pressure with respect to the baryon chemical potential up to eighth order. From the temperature dependence of the chiral condensate we estimate the crossover temperature as a function of baryon chemical potential, $T_c(\mu_B)$. We find that the chiral crossover line starts to deviate significantly from the chemical freeze-out line already for $\mu_B>400$ MeV. Furthermore, we find that the chiral pseudo-critical line can be parameterized as $T_c(\mu_B)/T_c(0)=1-\kappa_2 (\mu_B/T_c(0))^2-\kappa_4 (\mu_B/T_c(0))^4$ with $\kappa_2=0.0150(2)$ and $\kappa_4=3.1(6) \cdot 10^{-5}$, in agreement with lattice QCD results for small values of $\mu_B$. For the first time, we find a tiny but non-zero value of $\kappa_4$ in our study.
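The quoted crossover-line parameterization can be evaluated directly with the central values of $\kappa_2$ and $\kappa_4$; a minimal sketch follows, where $T_c(0)=158$ MeV is an illustrative choice and not a value taken from this abstract.

```python
def tc_ratio(mu_B, tc0=158.0, kappa2=0.0150, kappa4=3.1e-5):
    """Chiral pseudo-critical line T_c(mu_B)/T_c(0) from the
    parameterization 1 - kappa2*x^2 - kappa4*x^4 with x = mu_B/T_c(0).
    mu_B and tc0 in MeV; tc0 = 158 MeV is an assumed illustrative value."""
    x = mu_B / tc0
    return 1.0 - kappa2 * x**2 - kappa4 * x**4

print(round(tc_ratio(0.0), 4))    # 1.0 at vanishing chemical potential
print(round(tc_ratio(400.0), 4))  # reduction of T_c at mu_B = 400 MeV
```

At these small $\kappa$ values, the quartic term contributes only at the per-mille level even at $\mu_B = 400$ MeV, consistent with the statement that $\kappa_4$ is tiny but non-zero.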
The recent anomaly observed in the NO$\nu$A and T2K experiments in standard three-flavor neutrino oscillations could potentially signal physics beyond the standard model (SM). For the NSI parameters that can accommodate this anomaly, we explore the violation of Leggett-Garg-type inequalities (LGtI) within the context of three-flavor neutrino oscillations. Our analysis focuses on LGtI violations in scenarios involving complex NSI with $\epsilon_{e\mu}$ or $\epsilon_{e\tau}$ coupling in long-baseline accelerator experiments, for both normal and inverted mass orderings. The LGtI violation is significantly enhanced in normal ordering (NO) for the $\epsilon_{e\tau}$ scenario, whereas it is suppressed for the $\epsilon_{e\mu}$ scenario in the T2K, NO$\nu$A, and DUNE experimental set-ups. We find that for inverted ordering (IO), in the DUNE experimental set-up above $6$ GeV, an LGtI violation can be an indication of the $\epsilon_{e\tau}$ new-physics scenario.
The full data set of the Daya Bay reactor neutrino experiment is used to probe the effect of charged-current non-standard interactions (CC-NSI) on neutrino oscillation experiments. Two different approaches are applied and constraints on the corresponding CC-NSI parameters are obtained, with the neutrino flux taken from the Huber-Mueller model with a $5\%$ uncertainty. Both approaches use analytical expressions for the effective survival probability valid to all orders in the CC-NSI parameters. For the quantum mechanics-based approach (QM-NSI), the constraints on the CC-NSI parameters $\epsilon_{e\alpha}$ and $\epsilon_{e\alpha}^{s}$ are extracted with and without the assumption that the effects of the new physics are the same in the production and detection processes, respectively. The approach based on effective field theory (EFT-NSI) deals with four types of CC-NSI represented by the parameters $[\varepsilon_{X}]_{e\alpha}$. For both approaches, results for the CC-NSI parameters are shown for various fixed values of the CC-NSI and Dirac CP-violating phases, and for the case when they are allowed to vary freely. We find that constraints on the QM-NSI parameters $\epsilon_{e\alpha}$ and $\epsilon_{e\alpha}^{s}$ from the Daya Bay experiment alone can reach the order $\mathcal{O}(0.01)$ for the former and $\mathcal{O}(0.1)$ for the latter, while for the EFT-NSI parameters $[\varepsilon_{X}]_{e\alpha}$ we obtain $\mathcal{O}(0.1)$ in both cases.
We pursue the investigation of the validity of our recently proposed quantum mechanically correlated statistical hadron gas model (HRG), inspired by a Beth-Uhlenbeck correction to the equation of state (EoS) of the ideal hadron resonance gas model (IHRG). We calculate the ratios of some particle yields of equal masses, namely ($\bar{p} / p$, $K^-/ K^+$, $\pi^-/ \pi^+$, $\bar{\Lambda}/ \Lambda$, $\bar{\Sigma}/ \Sigma$, $\bar{\Omega}/ \Omega$), and of some particle yields of unequal masses, namely ($p/ \pi^+$, $K^+/ \pi^+$, $K^-/ \pi^-$, $\Lambda/ \pi^-$, $\bar{p}/ \pi^-$, $\Omega/\pi^-$). We then study the center-of-mass energy variation of these yield ratios in our proposed HRG model. Our model results are confronted with the corresponding calculations from the ideal hadron resonance gas (IHRG) model and the Cosmic Ray Monte Carlo (CRMC) EPOS $1.99$ simulations, as well as with experimental data from AGS, SPS, RHIC, and ALICE. Our proposed HRG model generally shows closer agreement with the experimental data than the other models considered. Especially remarkable is the very good match of our new HRG model to the experimental data for $\bar{p}/ \pi^-$ and $p/ \pi^+$, which suggests that our HRG model might be helpful in describing the well-known proton anomaly at top RHIC and LHC energies. However, some experimental ratios of hadron pairs of unequal mass and (multi)strange content, like $\Lambda/ \pi^-$ and $\Omega/ \pi^-$, appear to be considerably underestimated by both our HRG model and the IHRG model, which calls for further investigation of the suitability of thermal hadron gas models and motivates further modification.
We consider an axion-like particle (ALP) coupled to Standard Model (SM) fermions as a mediator between the SM and a fermionic dark matter (DM) particle. We explore the case where the ALP-SM and/or the ALP-DM couplings are too small to allow for DM generation via standard freeze-out. DM is therefore thermally decoupled from the visible sector and must be generated through either freeze-in or decoupled freeze-out (DFO). In the DFO regime, we present an improved approach to obtain the relic density by solving a set of three stiff coupled Boltzmann equations, one of which describes the energy transfer from the SM to the dark sector. Having determined the region of parameter space where the correct relic density is obtained, we revisit experimental constraints from electron beam dump experiments, rare $B$ and $K$ decays, exotic Higgs decays at the LHC, astrophysics, dark matter searches and cosmology. In particular, for our specific ALP scenario we (re)calculate and improve beam dump, flavour and supernova constraints. Throughout our calculation we implement state-of-the-art chiral perturbation theory results for the ALP partial decay width to hadrons. We find that while the DFO region, which predicts extremely small ALP-fermion couplings, can probably only be constrained by cosmological observables, the freeze-in region covers a wide area of parameter space that may be accessible to other more direct probes. Some of this parameter space is already excluded, but a significant part should be accessible to future collider experiments.
One of the most important features of the QCD phase diagram of strongly interacting matter is the Critical End Point (CEP). Non-monotonic behavior of fluctuations of conserved quantities like net-baryon number ($\Delta B$), net-charge ($\Delta Q$), and net-strangeness ($\Delta S$) as a function of collision energy is believed to be a signature of the CEP. We study the effect of the QCD critical point on moments of the net-baryon number in the Polyakov loop enhanced Nambu-Jona-Lasinio (PNJL) model of QCD with six-quark and eight-quark interactions. The study is performed at energies similar to the RHIC beam energy scan (BES). Since measuring conserved quantities directly is experimentally difficult due to systematic limitations, net-proton, net-pion, and net-kaon numbers are measured as proxies for $\Delta B$, $\Delta Q$, and $\Delta S$, and model estimates therefore become essential for these observables. Higher-order moments like skewness ($S$) and kurtosis ($\kappa$), and their system-volume-independent products ($M/\sigma^{2}$, $S\sigma$, $\kappa\sigma^{2}$), which we calculate in the PNJL model, are sensitive to the correlation length of the hot and dense medium, making them well suited for the critical point search. Recent studies with the subensemble acceptance method (SAM) in the HRG model show the dependence of the measured higher-order moments on the experimental acceptance. We use SAM to analyze the behavior of $\kappa\sigma^{2}$ of the net-baryon distribution within a subvolume for various acceptance fractions. These results can be directly mapped to the fraction of the subvolume (particles) relative to the total volume (conserved quantities). The results are compared to the STAR net-proton and proton data at different energies to probe the existence of the critical point. For reference, results are also compared with the UrQMD and HRG models.
We derive a factorization theorem for the structure-dependent QED effects in the weak exclusive process $B^-\to\ell^-\bar\nu_\ell$, i.e., effects probing the internal structure of the $B$ meson. The derivation requires a careful treatment of endpoint-divergent convolutions common to subleading-power factorization formulas. We find that the decay amplitude is sensitive to two- and three-particle light-cone distribution amplitudes of the $B$ meson as well as to a new hadronic parameter $F(\mu,\Lambda)$, which generalizes the notion of the $B$-meson decay constant in the presence of QED effects. This is the first derivation of a subleading-power factorization theorem in which the soft functions are non-perturbative hadronic matrix elements.
We have studied the strong decay properties of the recently observed $T^a_{c\bar s0}(2900)^{++/0}$ by considering it as a $cu\bar{d}\bar{s}/cd\bar{u}\bar{s}$ fully open-flavor tetraquark state with $I(J^P)=1(0^+)$. In the framework of QCD sum rules, we have calculated the three-point correlation functions of the two-body strong decay processes $T^a_{c\bar s0}(2900)^{++}\rightarrow D_s^+\pi^+$, $D^+K^+, D_s^{\ast +}\rho^+$ and $D_{s1}^+\pi^+$. The full width of $T^a_{c\bar s0}(2900)^{++/0}$ is obtained as $161.7\pm94.8$ MeV, which is consistent with the experimental observation. We predict the relative branching ratios as $\Gamma (T\rightarrow D_s\pi):\Gamma(T\rightarrow DK):\Gamma (T\rightarrow D_s^{\ast} \rho):\Gamma (T\rightarrow D_{s1}\pi)\approx1.00:1.10:0.04:0.43$, implying that the main decay modes of $T^a_{c\bar s0}(2900)^{++/0}$ state are $D_s\pi$ and $DK$ channels in our calculations. However, the $P$-wave decay mode $D_{s1}\pi$ is also comparable and important by including the uncertainties. To further identify the nature of $T^a_{c\bar s0}(2900)^{++/0}$, we suggest confirming them in the $DK$ and $D_{s}\pi$ final states, and measuring the above ratios in future experiments.
We explore the features of interpolating gauge for QCD. This gauge, defined by Doust and by Baulieu and Zwanziger, interpolates between Feynman gauge or Lorenz gauge and Coulomb gauge. We argue that it could be useful for defining the splitting functions for a parton shower beyond order $\alpha_s$ or for defining the infrared subtraction terms for higher order perturbative calculations.
In this work we study the production of $\Sigma$ and $\Lambda$ hyperons in strangeness changing $\Delta S = -1$ charged current interactions of muon antineutrinos on nuclear targets. At the nucleon level, besides quasielastic scattering we consider the inelastic mechanism in which a pion is produced alongside the hyperon. Its relevance for antineutrinos with energies below 2 GeV is conveyed in integrated and differential cross sections. We observe that the distributions on the angle between the hyperon and the final lepton are clearly different for quasielastic and inelastic processes. Hyperon final state interactions, modeled with an intranuclear cascade, lead to a significant transfer from primary produced $\Sigma$'s into final $\Lambda$'s. They also cause considerable energy loss, which is apparent in hyperon energy distributions. We have investigated $\Lambda$ production off ${}^{40}$Ar in the conditions of the recently reported MicroBooNE measurement. We find that the $\Lambda \pi$ contribution, dominated by $\Sigma^*(1385)$ excitation, accounts for about one third of the cross section.
We study weak isosinglet vectorlike leptons that decay through a small mixing with the tau lepton, for which the discovery and exclusion reaches of the Large Hadron Collider and future proposed hadron colliders are limited. We show how an $e^+ e^-$ collider may act as a discovery machine for these $\tau^{\prime}$ particles, demonstrate that the $\tau^{\prime}$ mass peak can be reconstructed in a variety of distinct signal regions, and explain how the $\tau^{\prime}$ branching ratios may be measured.
The $SU(3)\otimes SU(2) \otimes U(1)$ standard model maps smoothly onto a conventional lattice gauge formulation, including the parity violation of the weak interactions. The formulation makes use of the pseudo-reality of the weak group and requires the inclusion of a full generation of both leptons and quarks. As in continuum discussions, chiral eigenstates of the Dirac operator generate the known anomalies, although with rough gauge configurations these are no longer exact zero modes of the Dirac operator.
In this study, the magnetic and quadrupole moments of the $Z_b(10650)$ state are determined using the compact diquark-antidiquark interpolating current through the QCD light-cone sum rule. The values that are obtained as a result of the analysis are as follows: $\mu_{Z_b} = 2.35^{+0.34}_{-0.33}~\mu_N$ and $\mathcal{D}_{Z_b} =(1.82^{+0.35}_{-0.31})\times 10^{-2}~\mbox{fm}^2$. Examining the results obtained, it can be seen that the magnetic moments are large enough to be measured experimentally, while the quadrupole moment is obtained as a small but non-zero value, corresponding to a prolate charge distribution. The magnetic moment is the leading-order response of a bound system to a weak external magnetic field. It therefore provides an excellent platform to probe the internal structures of hadrons governed by the quark-gluon dynamics of QCD.
From a cosmological perspective, scalar fields are well-motivated dark matter and dark energy candidates. Several possibilities for neutrino couplings with a time-varying cosmic field have been investigated in the literature. In this work, we present a framework in which violations of Lorentz invariance (LIV) and $CPT$ symmetry in the neutrino sector could arise from an interaction of neutrinos with a time-varying scalar field. Furthermore, some cosmological and phenomenological aspects and constraints concerning this type of interaction are discussed. Potential violations of Lorentz and $CPT$ symmetries at present and future neutrino oscillation experiments such as IceCube and KM3NeT can probe this scenario.
In two recent papers, Gonuguntla and Singleton claim that the Wu-Yang fiber bundle approach does not lead to a consistent model for magnetic charge. We point out that this claim is false.
The mutual information characterizes correlations between spatially separated regions of a system. Yet, in experiments we often measure dynamical correlations, which involve probing operators that are also separated in time. Here, we introduce a space-time generalization of mutual information which, by construction, satisfies several natural properties of the mutual information and at the same time characterizes correlations across subsystems that are separated in time. In particular, this quantity, which we call the \emph{space-time mutual information}, bounds all dynamical correlations. We construct this quantity based on the idea of quantum hypothesis testing. As a by-product, our definition provides a transparent interpretation in terms of an experimentally accessible setup. We draw connections with other notions in quantum information theory, such as quantum channel discrimination. Finally, we study the behavior of the space-time mutual information in several settings and contrast its long-time behavior in many-body localized and thermalizing systems.
We perform a systematic analysis of the one-loop effective potential of pure $d=5$ supergravities, with supersymmetry fully broken by a Scherk-Schwarz compactification on the circle, as a function of the radial modulus. We discuss the precise correspondence between the effective potential $V_1$ in the full compactified theory and its counterpart $V_{1,red}$ in the reduced theory. We confirm that $V_1$ is finite for any $N>0$, in contrast to $V_{1,red}$. We find that for broken $N=8$ supergravity $V_1$ is negative definite even after accounting for the Kaluza-Klein states. We outline a program for future work in which the study of a different kind of Scherk-Schwarz compactification, still at the field theory level but with at least three extra dimensions, could lead to qualitatively new results.
It is expected that conformal symmetry is an emergent property of many systems at their critical point. This imposes strong constraints on the critical behavior of a given system. Taking them into account in theoretical approaches can lead to a better understanding of the critical physics or improve approximation schemes. However, within the framework of the non-perturbative or functional renormalization group and, in particular, of one of its most used approximation schemes, the Derivative Expansion (DE), non-trivial constraints only apply from third order (usually denoted $\mathcal{O}(\partial^4)$), at least in the usual formulation of the DE that includes correlation functions involving only the order parameter. In this work, we implement conformal constraints on a generalized DE including composite operators and show that new constraints already appear at second order of the DE (or $\mathcal{O}(\partial^2)$). We show how these constraints can be used to fix nonphysical regulator parameters.
For any locality-preserving action of a group $G$ on a quantum spin chain one can define an anomaly index taking values in the group cohomology of $G$. The anomaly index is a kinematic quantity: it does not depend on the Hamiltonian. We prove that a nonzero anomaly index prohibits any $G$-invariant Hamiltonian from having $G$-invariant gapped ground states. Lieb-Schultz-Mattis-type theorems are a special case of this result when $G$ involves translations. In the case when the symmetry group $G$ is a Lie group, we define an anomaly index which takes values in the differentiable group cohomology as defined by J.-L. Brylinski and prove a similar result.
We study $\mathfrak{sl}_2$ and $\mathfrak{sl}_3$ global conformal blocks on a sphere and a torus, using the shadow formalism. These blocks arise in the context of Virasoro and $\mathcal{W}_3$ conformal field theories in the large central charge limit. In the $\mathfrak{sl}_2$ case, we demonstrate that the shadow formalism yields the known expressions in terms of conformal partial waves. We then extend this approach to the $\mathfrak{sl}_3$ case and show that it allows one to build simple integral representations for $\mathfrak{sl}_3$ global blocks. We demonstrate this construction with two examples: the four-point block on the sphere and the one-point torus block.
We present a novel $\mathcal{N} = 2 $ $\mathbb{Z}_2^2$-graded supersymmetric quantum mechanics ($\mathbb{Z}_2^2$-SQM) which has different features from those introduced so far. It is a two-dimensional (two-particle) system and is the first example of the quantum mechanical realization of an eight-dimensional irrep of the $\mathcal{N}=2$ $\mathbb{Z}_2^2$-supersymmetry algebra. The $\mathbb{Z}_2^2$-SQM is obtained by quantizing the one-dimensional classical system derived by dimensional reduction from the two-dimensional $\mathbb{Z}_2^2$-supersymmetric Lagrangian of $\mathcal{N}=1$, which we constructed in our previous work. The ground states of the $\mathbb{Z}_2^2$-SQM are also investigated.
Celestial holography is a new way to understand flat-space amplitudes. Self-dual theories, due to their nice properties, are good subjects for the study of celestial holography. In this paper, we develop a new formula to calculate celestial color-ordered self-dual Yang-Mills amplitudes based on celestial Berends-Giele currents, which makes the leading OPE limit manifest. In addition, we explore some higher-order terms of the OPE in celestial self-dual Yang-Mills theory.
We study the counting problem of BPS D-branes wrapping holomorphic cycles of a general toric Calabi-Yau manifold. We evaluate the Jeffrey-Kirwan residues for the flavoured Witten index for the supersymmetric quiver quantum mechanics on the worldvolume of the D-branes, and find that BPS degeneracies are described by a statistical mechanical model of crystal melting. For Calabi-Yau threefolds, we reproduce the crystal melting models long known in the literature. For Calabi-Yau fourfolds, however, we find that the crystal does not contain the full information for the BPS degeneracy and we need to explicitly evaluate non-trivial weights assigned to the crystal configurations. Our discussions treat Calabi-Yau threefolds and fourfolds on equal footing, and include discussions on elliptic and rational generalizations of the BPS states counting, connections to the mathematical definition of generalized Donaldson-Thomas invariants, examples of wall crossings, and of trialities in quiver gauge theories.
The Coon amplitude is a one-parameter deformation of the Veneziano amplitude. We explore the unitarity of the Coon amplitude through its partial wave expansion using tools from $q$-calculus. Our analysis establishes manifest positivity on the leading and sub-leading Regge trajectories in arbitrary spacetime dimensions $D$, while revealing a violation of unitarity in a certain region of $(q,D)$ parameter space starting at the sub-sub-leading Regge order. A combination of numerical studies and analytic arguments allows us to argue for the manifest positivity of the partial wave coefficients in fixed spin and Regge asymptotics.
In this work, we study a type of commuting SYK model in which all terms in the Hamiltonian commute with each other. Because of the commutativity, this model has a large number of conserved charges and is integrable. After the ensemble average over random couplings, we can solve this model exactly at any $N$. Though this integrable model is not holographic, we do find that it has some holography-like features, especially near-perfect size winding at high temperatures; we therefore call it pseudo-holographic. We also find that the size winding of this model has a narrowly peaked size distribution, which is different from the ordinary SYK model. We apply the traversable-wormhole teleportation protocol to the commuting SYK model and find that the teleportation shares a few features with the semiclassical traversable wormhole, but in different parameter regimes. We show that the underlying physics is not entirely determined by the size-winding mechanism but also involves the peaked-size mechanism and thermalization. Lastly, we comment on the recent simulation of the dynamics of traversable wormholes on Google's quantum processor.
This paper considers a recently-proposed string theory on $AdS_3\times S^3\times T^4$ with one unit of NS-NS flux ($k=1$). We discuss interpretations of the target space, including connections to twistor geometry and a more conventional spacetime interpretation via the Wakimoto representation. We propose an alternative perspective on the role of the Wakimoto formalism in the $k=1$ string, for which no large radius limit is required by the inclusion of extra operator insertions in the path integral. This provides an exact Wakimoto description of the worldsheet CFT. We also discuss an additional local worldsheet symmetry, $Q(z)$, that emerges when $k=1$ and show that this symmetry plays an important role in the localisation of the path integral to a sum over covering maps. We demonstrate the emergence of a rigid worldsheet translation symmetry in the radial direction of the $AdS_3$, for which again the presence of $Q(z)$ is crucial. We conjecture that this radial symmetry plays a key role in understanding, in the case of the $k=1$ string, the encoding of the bulk physics on the two-dimensional boundary.
By scrutinizing the singular sector of the Lounesto spinor classification, we investigate the correct definition of the expansion coefficient functions of local fermionic fields within a fully Lorentz covariant theory. As we observe, a careful definition of the adjoint structure, directed towards local fields, maps singular spinors into class-2 according to a general spinor classification. Furthermore, we investigate all the mathematical tools necessary for constructing local fermionic and bosonic fields, and provide insights into the physical implications for the other singular classes. We also show that incorporating \emph{Wigner degeneracy} keeps the rotational symmetry formalism intact in general.
We present a review of Witten index calculations in different supersymmetric gauge theories in four dimensions: supersymmetric electrodynamics, pure N=1 supersymmetric Yang-Mills theories and also SYM theories including matter multiplets -- both with chirally symmetric and asymmetric content.
Recent measurements of femtoscopic correlations at NA61/SHINE reveal that the shape of the particle-emitting source is not Gaussian. The measurements are based on L\'evy-stable symmetric sources, and we discuss the average pair transverse mass dependence of the source parameters. One of the parameters, the L\'evy exponent $\alpha$, is of particular importance. It describes the shape of the source, which, in the vicinity of the critical point of the QCD phase diagram, may be related to the critical exponent $\eta$. Its measurement may hence contribute to the search for and characterization of the critical point of the phase diagram.
In high-energy physics, precise measurements rely on highly reliable detector simulations. Traditionally, these simulations incorporate experimental data to model detector responses and are fine-tuned by hand. However, due to the complexity of the experimental data, tuning the simulation can be challenging. One crucial aspect of charged-particle identification is the measurement of energy deposition per unit length (referred to as dE/dx). This paper proposes a data-driven dE/dx simulation method using the Normalizing Flow technique, which can learn the dE/dx distribution directly from experimental data. This method not only eliminates the need for manual tuning of the dE/dx simulation but also achieves high-precision simulation.
Realistic environments for prototyping, studying and improving analysis workflows are a crucial element on the way towards user-friendly physics analysis at HL-LHC scale. The IRIS-HEP Analysis Grand Challenge (AGC) provides such an environment. It defines a scalable and modular analysis task that captures relevant workflow aspects, ranging from large-scale data processing and handling of systematic uncertainties to statistical inference and analysis preservation. By being based on publicly available Open Data, the AGC provides a point of contact for the broader community. Multiple different implementations of the analysis task that make use of various pipelines and software stacks already exist. This contribution presents an updated AGC analysis task. It features a machine learning component and expanded analysis complexity, including the handling of an extended and more realistic set of systematic uncertainties. These changes both align the AGC further with analysis needs at the HL-LHC and allow for probing an increased set of functionality. Another focus is the showcase of a reference AGC implementation, which is heavily based on the HEP Python ecosystem and uses modern analysis facilities. The integration of various data delivery strategies is described, resulting in multiple analysis pipelines that are compared to each other.
The general-purpose Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) at CERN includes a hadronic calorimeter to register the energies of the charged and neutral hadrons produced in proton-proton collisions at the LHC at a centre-of-mass energy of 13.6 TeV. The calorimeter is located inside the superconducting solenoid, 6 m in diameter and 12.5 m in length, which creates a central magnetic flux density of 3.8 T. To operate optimally in the high-pileup and high-radiation environment of the High Luminosity LHC, the existing CMS endcap calorimeters will be replaced with a new high granularity calorimeter (HGCAL) comprising an electromagnetic and a hadronic section in each of the two endcaps. The hadronic section of the HGCAL will include 44 stainless steel absorber plates with a relative permeability value well below 1.05. The volume occupied by 22 plates in each endcap is about 21 m$^{3}$. The calculation of the axial electromagnetic forces on the absorber plates is a crucial element in designing the mechanical construction of the device. With a three-dimensional computer model of the CMS magnet, the axial forces on each absorber plate are calculated, and the dependence of the forces on the central magnetic flux density value is presented. The method of calculation and the obtained results are discussed.
The conceptual design study of a Future Circular hadron-hadron Collider (FCC-hh) to be constructed at CERN with a center-of-mass energy of the order of 100 TeV requires superconducting magnetic systems with a central magnetic flux density of the order of 4 T for the experimental detectors. The developed concept of the FCC-hh detector involves the use of an iron-free magnetic system consisting of three superconducting solenoids. A superconducting magnet with a minimal steel yoke is proposed as an alternative to the baseline iron-free design. In this study, both magnetic system options for the FCC-hh detector are modeled with the same electrical parameters using Cobham's program TOSCA. All the main characteristics of both designs are compared and discussed.
The ratio of branching fractions $R(D^{*}) = \mathcal{B}(\overline{B} \rightarrow D^{*} \tau^{-} \overline{\nu}_{\tau})$/$\mathcal{B} (\overline{B} \rightarrow D^{*} \ell^{-} \overline{\nu}_{\ell})$, where $\ell$ is an electron or muon, is measured using a Belle~II data sample with an integrated luminosity of $189~\mathrm{fb}^{-1}$ at the SuperKEKB asymmetric-energy $e^{+} e^{-}$ collider. Data is collected at the $\Upsilon(\mathrm{4S})$ resonance, and one $B$ meson in the $\Upsilon(\mathrm{4S})\rightarrow B\overline{B}$ decay is fully reconstructed in hadronic decay modes. The accompanying signal $B$ meson is reconstructed as $\overline{B}\rightarrow D^{*} \tau^{-}\overline{\nu}_{\tau}$ using leptonic $\tau$ decays. The normalization decay, $\overline{B}\rightarrow D^{*} \ell^{-} \overline{\nu}_{\ell}$, where $\ell$ is an electron or muon, produces the same observable final state particles. The ratio of branching fractions is extracted in a simultaneous fit to two signal-discriminating variables in both channels and yields $R(D^{*}) = 0.262~_{-0.039}^{+0.041}(\mathrm{stat})~_{-0.032}^{+0.035}(\mathrm{syst})$. This result is consistent with the current world average and with standard model predictions.
We report measurements of the branching fractions and direct $\it{CP}$ asymmetries of the decays $B^0 \to K^+ \pi^-$, $B^+ \to K^+ \pi^0$, $B^+ \to K^0 \pi^+$, and $B^0 \to K^0 \pi^0$, and use these for testing the standard model through an isospin-based sum rule. In addition, we measure the branching fraction and direct $\it{CP}$ asymmetry of the decay $B^+ \to \pi^+\pi^0$ and the branching fraction of the decay $B^0 \to \pi^+\pi^-$. The data are collected with the Belle II detector from $e^+e^-$ collisions at the $\Upsilon(4S)$ resonance produced by the SuperKEKB asymmetric-energy collider and contain $387\times 10^6$ bottom-antibottom meson pairs. Signal yields are determined in two-dimensional fits to background-discriminating variables, and range from 500 to 3900 decays, depending on the channel. We obtain $-0.03 \pm 0.13 \pm 0.04$ for the sum rule, in agreement with the standard model expectation of zero and with a precision comparable to the best existing determinations.
We show how to create coherent superpositions between the two ground states of a three-state $\Lambda$ quantum system in which the middle state decays. The idea is to deplete the population of the bright state formed by the two ground states via the population loss channel. The remaining population is trapped in the dark state, which can be designed to equal any desired coherent superposition of the ground states. The present concept is an alternative to the slow adiabatic creation of coherent superpositions and may therefore be realized over short times, especially when the middle state has a short lifetime. However, the price we pay for the fast evolution is an overall 50% population loss. This issue can be removed in an experiment by using post-selection.
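For reference, the bright and dark states invoked here have the standard form for a $\Lambda$ system whose ground states $|1\rangle$, $|2\rangle$ couple to the decaying middle state with Rabi frequencies $\Omega_1$, $\Omega_2$ (textbook definitions, not taken from this abstract):

```latex
|B\rangle = \frac{\Omega_1 |1\rangle + \Omega_2 |2\rangle}{\sqrt{\Omega_1^2+\Omega_2^2}},
\qquad
|D\rangle = \frac{\Omega_2 |1\rangle - \Omega_1 |2\rangle}{\sqrt{\Omega_1^2+\Omega_2^2}}.
```

Only $|B\rangle$ couples to the decaying state, so the population remaining in $|D\rangle$ survives; choosing the ratio $\Omega_2/\Omega_1$ (and the relative phase) selects the final ground-state superposition.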
The states generated by a multiport beam-splitter usually display genuine multipartite entanglement between the many spatial modes. Here we investigate the different classes of multipartite entangled states within the paradigm of Stochastic Local Operations with Classical Communication. We highlight two scenarios, one where the multipartite entanglement classes follow a total number hierarchy, and the other where the various classes follow a nonclassicality degree hierarchy.
Gated InGaAs/InP avalanche photodiodes are the most practical devices for the detection of telecom single photons arriving at regular intervals. Here, we report the development of a compact single-photon detector (SPD) module, measuring just 8.8 cm × 6 cm × 2 cm in size and fully integrated with driving-signal generation, faint-avalanche readout, and discrimination circuits, as well as temperature regulation and compensation. The readout circuit employs our previously reported ultra-narrowband interference circuits (UNICs) to eliminate the capacitive response to the gating signal. We characterize a UNIC-SPD module with a 1.25-GHz clock input and find its performance comparable to its counterpart built upon discrete functional blocks. Setting its detection efficiency to 30% for 1550-nm photons, we obtain an afterpulsing probability of 2.4% and a dark count probability of $8\times10^{-7}$ per gate under a 3-ns hold-off time. We believe that UNIC-SPDs will be useful in important applications such as quantum key distribution.
The present contribution constitutes a brief account of information-theoretical analysis in several representative model systems as well as real quantum mechanical systems. There has been overwhelming interest in studying such measures in various quantum systems, as evidenced by the vast number of publications that have appeared in recent years. However, while such works are numerous for so-called \emph{free} systems, there is a genuine lack of them for their constrained counterparts. With this in mind, this chapter focuses on some of the recent exciting progress made in our laboratory \cite{sen06,roy14mpla,roy14mpla_manning,roy15ijqc, roy16ijqc, mukherjee15,mukherjee16,majumdar17,mukherjee18a,mukherjee18b,mukherjee18c,mukherjee18d,majumdar20,mukherjee21,majumdar21a, majumdar21b}, and elsewhere, with special emphasis on the following prototypical systems: (i) the double-well (DW) potential (symmetric and asymmetric), (ii) the \emph{free} as well as the \emph{confined hydrogen atom} (CHA) enclosed in a spherical impenetrable cavity, and (iii) a many-electron atom under a similar enclosed environment.
Time-periodic (Floquet) systems are among the most interesting nonequilibrium systems. Just as the computation of energy eigenvalues and eigenstates of time-independent Hamiltonians is a central problem in both classical and quantum computation, quasienergies and Floquet eigenstates are important targets. However, their computation is complicated by the time dependence: the problem can be mapped to a time-independent eigenvalue problem via the Sambe-space formalism, but this requires an additional infinite-dimensional space and appears to incur a higher computational cost than the time-independent case. It has remained unclear whether they can be computed with guaranteed accuracy as efficiently as in the time-independent case. We address this issue by rigorously deriving the cutoff of the Sambe space needed to achieve a desired accuracy and by organizing quantum algorithms for computing quasienergies and Floquet eigenstates based on this cutoff. The quantum algorithms return quasienergies and Floquet eigenstates with guaranteed accuracy, like quantum phase estimation (QPE), which is the optimal algorithm for outputting energy eigenvalues and eigenstates of time-independent Hamiltonians. While the time periodicity provides the additional dimension of the Sambe space and ramifies the eigenstates, the query complexity of the algorithms achieves near-optimal scaling in the allowable errors. In addition, as a by-product of these algorithms, we organize a quantum algorithm for Floquet eigenstate preparation, in which a preferred gapped Floquet eigenstate can be deterministically implemented with nearly optimal query complexity in the gap. These results show that, despite the difficulty posed by time dependence, quasienergies and Floquet eigenstates can be computed almost as efficiently as in the time-independent case, shedding light on the accurate and fast simulation of nonequilibrium systems on quantum computers.
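As a toy illustration of the Sambe-space formalism the abstract refers to (not the paper's algorithm), the sketch below compares the quasienergies of a driven qubit obtained from the one-period propagator with those of a truncated Sambe-space matrix. The model, parameters, and the cutoff M are arbitrary choices for the demonstration.

```python
import numpy as np

# Toy model: driven qubit H(t) = (delta/2) sigma_z + v cos(w t) sigma_x.
delta, v, w = 1.0, 0.5, 3.0
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0 = 0.5 * delta * sz
H1 = 0.5 * v * sx            # Fourier component: H(t) = H0 + H1 e^{iwt} + H1 e^{-iwt}

# (a) Quasienergies from the one-period propagator U(T), T = 2*pi/w.
T, steps = 2 * np.pi / w, 4000
dt = T / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    t = (k + 0.5) * dt       # exponential midpoint rule for the time-ordered product
    ev, P = np.linalg.eigh(H0 + v * np.cos(w * t) * sx)
    U = (P @ np.diag(np.exp(-1j * ev * dt)) @ P.conj().T) @ U
quasi_direct = np.sort(-np.angle(np.linalg.eigvals(U)) / T)

# (b) Quasienergies from the truncated Sambe-space matrix: blocks
# H0 + m*w*I on the diagonal, H1 coupling neighbouring Fourier sectors.
M = 10                       # Fourier cutoff, chosen generously here
K = np.zeros((2 * (2 * M + 1), 2 * (2 * M + 1)), dtype=complex)
for m in range(-M, M + 1):
    i = 2 * (m + M)
    K[i:i + 2, i:i + 2] = H0 + m * w * np.eye(2)
    if m < M:
        K[i:i + 2, i + 2:i + 4] = H1
        K[i + 2:i + 4, i:i + 2] = H1
eigs = np.linalg.eigvalsh(K)
quasi_sambe = np.sort(eigs[(eigs > -w / 2) & (eigs <= w / 2)])  # first Floquet zone
print(quasi_direct, quasi_sambe)
```

For a sufficiently large cutoff the two spectra agree, which is exactly the kind of accuracy guarantee the paper derives rigorously.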
Over the last few decades, the study of Bound States in the Continuum (BICs), their formation, and their properties has attracted much attention, especially in optics and photonics. It is particularly noticeable that most of these investigations are based on symmetric systems. In this article, we study the formation of BICs in electronic and photonic transport systems consisting of crossbar junctions formed by one-dimensional waveguides, considering asymmetric junctions with commensurable lengths for the upper and lower arms. We also study how BICs form in linear junction arrays as a function of the distance between consecutive junctions and its commensurability with the upper and lower arms. We solve the Helmholtz equation for the crossbar junctions and calculate the transmission probability, the probability density at the intersections, and the quality factor. The presence of quasi-BICs is reflected in the transmission probability as a sharp resonance in the middle of a symmetric Fano resonance, along with Dirac delta functions in the probability density and divergences in the quality factors.
Multi-photon dynamics beyond linear optical materials is of significant fundamental and technological importance in quantum information processing, yet it remains largely unexplored in nonlinear waveguide QED. In this work, we theoretically propose a structured nonlinear waveguide in the presence of staggered photon-photon interactions, which supports two branches of gapped bands for doublons (i.e., spatially bound photon-pair states). In contrast to linear waveguide-QED systems, we identify two important contributions to its dynamical evolution: single-photon bound states (SPBSs) and doublon bound states (DBSs). Most remarkably, the nonlinear waveguide can mediate long-range four-body interactions between two emitter pairs, even in the presence of disturbance from SPBSs. By appropriately designing the system's parameters, we can achieve high-fidelity four-body Rabi oscillations mediated only by virtual doublons in DBSs. Our findings pave the way for applying structured nonlinear waveguide QED to multi-body quantum information processing and quantum simulation among remote sites.
The AEgIS experiment at CERN recently decided to adopt a control-system solution based on the Sinara/ARTIQ open hardware and software infrastructure. This decision marked a departure from the previously used paradigm of custom-made electronics and software to control the experiment's equipment. Instead, adopting a solution with long-term support that is used in many quantum physics experiments guarantees a vibrant community using similar infrastructures. This transition reduces the risks and the development timeline for integrating new equipment seamlessly into the setup. This work reviews the motivation, the setup, and the chosen hardware, and presents several planned further steps in developing the control system.
The realization of strong nonlinear coupling between single photons has been a long-standing goal in quantum optics and quantum information science, promising wide-impact applications such as all-optical deterministic quantum logic and single-photon frequency conversion. Here, we report the experimental observation of strong coupling between a single photon and a photon pair in an ultrastrongly coupled circuit-QED system. This strong nonlinear interaction is realized by introducing a detuned flux qubit that works as an effective coupler between two modes of a superconducting coplanar waveguide resonator. The ultrastrong light--matter interaction breaks excitation-number conservation, and an external flux bias breaks parity conservation. The combined effect of the two enables the strong one-photon--two-photon coupling. A quantum-Rabi-like avoided crossing is resolved when the two-photon resonance frequency of the first mode is tuned across the single-photon resonance frequency of the second mode. Within this new photonic regime, we observe second-harmonic generation for a mean photon number below one. Our results represent a key step towards a new regime of quantum nonlinear optics, where individual photons can deterministically and coherently interact with each other in the absence of any stimulating fields.
We consider the problem of fast-forward evolution of processes described by the heat equation. The problem is considered in an adiabatically expanding, time-dependent box, with attention paid to the acceleration of heat-transfer processes. So-called shortcuts to adiabaticity, which imply fast forwarding of the adiabatic states, are studied. Heat flux and temperature profiles are analyzed for the standard and fast-forwarded regimes.
At the fundamental level, a full description of light-matter interaction requires a quantum treatment of both matter and light. However, for standard light sources generating intense laser pulses carrying quadrillions of photons in a coherent state, a classical description of light during intense laser-matter interaction has been expected to be adequate. Here we show how the nonlinear optical response of matter can be controlled to generate dramatic deviations from this standard picture, including the generation of multiple harmonics of the incident laser light entangled across many octaves. In particular, non-trivial quantum states of harmonics are generated as soon as one of the harmonics induces a transition between different laser-dressed states of the material system. Such transitions generate an entangled light-matter wavefunction, which emerges as the key condition for generating quantum states of harmonics, sufficient even in the absence of a quantum driving field or material correlations. In turn, entanglement of the material system with a single harmonic generates and controls entanglement between different harmonics. Hence, nonlinear media that are near-resonant with at least one of the harmonics appear most attractive for the controlled generation of massively entangled quantum states of light. Our analysis opens remarkable opportunities at the interface of attosecond physics and quantum optics, with implications for quantum information science.
The quantum dynamics of a Dirac particle in a 1D box with a moving wall is studied. The Dirac equation with a time-dependent boundary condition is mapped onto one with a static boundary condition but a time-dependent mass. An exact analytical solution of this modified Dirac equation is obtained for a massless particle; for a massive particle the problem is solved numerically. The time dependence of the main characteristics of the dynamical confinement, such as the average kinetic energy and the quantum force, is analyzed. It is found that the average kinetic energy remains bounded as long as the interval length is bounded from below, in particular for a periodically oscillating wall.
Establishing quantum advantage for variational quantum algorithms is an important direction in quantum computing. In this work, we apply the Quantum Approximate Optimisation Algorithm (QAOA) -- a popular variational quantum algorithm for general combinatorial optimisation problems -- to a variant of the satisfiability problem (SAT): Not-All-Equal SAT (NAE-SAT). We focus on regimes where the problems are known to have solutions with low probability and introduce a novel classical solver that outperforms existing solvers. Extensively benchmarking QAOA against this solver, we show that while the runtime of both scales exponentially with the problem size, the scaling exponent for QAOA is smaller at large enough circuit depths. This implies a polynomial quantum speedup for solving NAE-SAT.
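For readers unfamiliar with the NAE-SAT variant: a clause is satisfied iff its literals are not all equal, i.e., at least one literal is true and at least one is false. A minimal classical checker (illustrative only, not the paper's solver):

```python
def nae_clause_satisfied(clause, assignment):
    """A NAE-SAT clause is satisfied iff its literals are NOT all equal.
    Literals are nonzero ints; the sign encodes negation (DIMACS-style)."""
    values = [assignment[abs(l)] if l > 0 else not assignment[abs(l)] for l in clause]
    return any(values) and not all(values)

def nae_sat(formula, assignment):
    return all(nae_clause_satisfied(c, assignment) for c in formula)

# NAE-SAT is symmetric under complementation: if an assignment satisfies
# the formula, flipping every variable also satisfies it.
formula = [(1, -2, 3), (2, 3, -1)]
x = {1: True, 2: True, 3: False}
flipped = {v: not b for v, b in x.items()}
assert nae_sat(formula, x) == nae_sat(formula, flipped)
print("NAE-SAT check passed:", nae_sat(formula, x))
```

The complementation symmetry shown in the example is one of the structural features that distinguishes NAE-SAT from ordinary SAT.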
Quantum machine learning with quantum kernels for classification problems is a growing area of research. Recently, quantum kernel alignment techniques that parameterise the kernel have been developed, allowing the kernel to be trained and therefore aligned with a specific dataset. While quantum kernel alignment is a promising technique, it has been hampered by considerable training costs because the full kernel matrix must be constructed at every training iteration. Addressing this challenge, we introduce a novel method that seeks to balance efficiency and performance. We present a sub-sampling training approach that uses a subset of the kernel matrix at each training step, thereby reducing the overall computational cost of the training. In this work, we apply the sub-sampling method to synthetic datasets and a real-world breast cancer dataset and demonstrate considerable reductions in the number of circuits required to train the quantum kernel while maintaining classification accuracy.
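The sub-sampling idea can be sketched classically: train a kernel parameter by evaluating the kernel only on a random subset of the data at each step, instead of the full matrix. The Gaussian kernel below stands in for the quantum kernel, and the dataset and kernel-target-alignment objective are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs with labels +/-1.
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

def kernel(XA, XB, gamma):
    # Gaussian kernel, a classical stand-in for a parameterised quantum kernel
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alignment(K, y):
    # Kernel-target alignment: <K, y y^T> / (||K||_F * ||y y^T||_F)
    yyT = np.outer(y, y)
    return (K * yyT).sum() / (np.linalg.norm(K) * np.linalg.norm(yyT))

# Sub-sampling training loop: each step evaluates the kernel only on a
# random subset of size m, not the full 40x40 matrix.
gamma, lr, m, eps = 5.0, 0.5, 8, 1e-4
for step in range(200):
    idx = rng.choice(len(X), size=m, replace=False)
    Xs, ys = X[idx], y[idx]
    # Finite-difference gradient of the alignment on the subset.
    grad = (alignment(kernel(Xs, Xs, gamma + eps), ys)
            - alignment(kernel(Xs, Xs, gamma - eps), ys)) / (2 * eps)
    gamma += lr * grad

final = alignment(kernel(X, X, gamma), y)
print(f"trained gamma={gamma:.3f}, full-matrix alignment={final:.3f}")
```

Each step costs O(m^2) kernel evaluations instead of O(n^2), which is the cost saving the paper exploits in the quantum setting, where every kernel entry requires circuit executions.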
In this contribution we perform a density matrix renormalization group study of chains of planar rotors interacting via dipolar interactions. By exploring the ground state from weakly to strongly interacting rotors, we find the occurrence of a quantum phase transition between a disordered and a dipole-ordered quantum state. We show that the nature of the ordered state changes from ferroelectric to antiferroelectric when the relative orientation of the rotor planes varies and that this change requires no modification of the overall symmetry. The observed quantum phase transitions are characterized by critical exponents and central charges which reveal different universality classes ranging from that of the (1+1)D Ising model to the 2D classical XY model.
We use the recently introduced lifted product to construct a family of quantum low-density parity-check (QLDPC) codes. The codes we obtain can be viewed as stacks of surface codes that are interconnected, leading to the name lift-connected surface (LCS) codes. LCS codes offer a wide range of parameters; a particularly striking feature is that they show favorable properties compared to the standard surface code already at moderate numbers of physical qubits, on the order of tens. We present and analyze the construction and provide numerical simulation results for the logical error rate under code-capacity and phenomenological noise. These results show that LCS codes attain thresholds comparable to corresponding (non-connected) copies of surface codes, while the logical error rate can be orders of magnitude lower, even for representatives with the same parameters. This provides a code family demonstrating the potential of modern product constructions at small qubit numbers. Their amenability to 3D-local connectivity renders them particularly relevant for near-term implementations.
Quantum sensing enables the ultimate precision attainable in parameter estimation. Circumstantial evidence suggests that certain organisms, most notably migratory songbirds, also harness quantum-enhanced magnetic field sensing via a radical-pair-based chemical compass for the precise detection of the weak geomagnetic field. However, what underpins the acuity of such a compass operating in a noisy biological setting, at physiological temperatures, remains an open question. Here, we address the fundamental limits of inferring geomagnetic field directions from radical-pair spin dynamics. Specifically, we compare the compass precision, as derived from the directional dependence of the radical-pair recombination yield, to the ultimate precision potentially realisable by a quantum measurement on the spin system under steady-state conditions. To this end, we probe the quantum Fisher information and associated Cram\'er--Rao bound in spin models of realistic complexity, accounting for complex inter-radical interactions, a multitude of hyperfine couplings, and asymmetric recombination kinetics, as characteristic for the magnetosensory protein cryptochrome. We compare several models implicated in cryptochrome magnetoreception and unveil their optimality through the precision of measurements ostensibly accessible to nature. Overall, the comparison provides insight into processes honed by nature to realise optimality whilst constrained to operating with mere reaction yields. Generally, the inference of compass orientation from recombination yields approaches optimality in the limits of complexity, yet plateaus short of the theoretical optimal precision bounds by up to one or two orders of magnitude, thus underscoring the potential for improving on design principles inherent to natural systems.
We propose hybrid digital-analog learning algorithms on Rydberg atom arrays, combining the potentially practical utility and near-term realizability of quantum learning with the rapidly scaling architectures of neutral atoms. Our construction requires only single-qubit operations in the digital setting and global driving according to the Rydberg Hamiltonian in the analog setting. We perform a comprehensive numerical study of our algorithm on both classical and quantum data, given respectively by handwritten digit classification and unsupervised quantum phase boundary learning. We show in the two representative problems that digital-analog learning is not only feasible in the near term, but also requires shorter circuit depths and is more robust to realistic error models as compared to digital learning schemes. Our results suggest that digital-analog learning opens a promising path towards improved variational quantum learning experiments in the near term.
Entanglement is a striking feature of quantum mechanics, and it has a key property called unextendibility. In this paper, we present a framework for quantifying and investigating the unextendibility of general bipartite quantum states. First, we define the unextendible entanglement, a family of entanglement measures based on the concept of a state-dependent set of free states. The intuition behind these measures is that the more entangled a bipartite state is, the less entangled each of its individual systems is with a third party. Second, we demonstrate that the unextendible entanglement is an entanglement monotone under two-extendible quantum operations, including local operations and one-way classical communication as a special case. Normalization and faithfulness are two other desirable properties of unextendible entanglement, which we establish here. We further show that the unextendible entanglement provides efficiently computable benchmarks for the rate of exact entanglement or secret key distillation, as well as the overhead of probabilistic entanglement or secret key distillation.
The quantum-walk-based spatial search problem aims to find a marked vertex using a quantum walk on a graph with marked vertices. We describe a framework for determining the computational complexity of spatial search by continuous-time quantum walk on arbitrary graphs by providing a recipe for finding the optimal running time and the success probability of the algorithm. The quantum walk is driven by a Hamiltonian derived from the adjacency matrix of the graph, modified by the presence of the marked vertices. The success of our framework depends on knowledge of the eigenvalues and eigenvectors of the adjacency matrix. The spectrum of the Hamiltonian is subsequently obtained from the roots of the determinant of a real symmetric matrix $M$, the dimensions of which depend on the number of marked vertices. The eigenvectors are determined from a basis of the kernel of $M$. We show each step of the framework by solving the spatial search problem on the Johnson graphs with a fixed diameter and with two marked vertices. Our calculations show that the optimal running time is $O(\sqrt{N})$ with an asymptotic probability of $1+o(1)$, where $N$ is the number of vertices.
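A minimal numerical instance of this setting, using the complete graph rather than the Johnson graphs of the abstract (the complete-graph case admits the textbook hopping rate gamma = 1/N, for which the success probability peaks at time (pi/2)sqrt(N)):

```python
import numpy as np

n = 64
A = np.ones((n, n)) - np.eye(n)       # adjacency matrix of the complete graph K_N
gamma = 1.0 / n                       # critical hopping rate for K_N
H = -gamma * A
H[0, 0] -= 1.0                        # marked-vertex oracle term -|w><w|, w = 0

psi0 = np.ones(n) / np.sqrt(n)        # start in the uniform superposition
vals, vecs = np.linalg.eigh(H)        # spectrum of the search Hamiltonian

def success_prob(t):
    # probability at the marked vertex after evolving under exp(-iHt)
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
    return abs(psi_t[0]) ** 2

t_opt = (np.pi / 2) * np.sqrt(n)      # the O(sqrt(N)) optimal running time
p = success_prob(t_opt)
print(f"P(marked) grows from {success_prob(0.0):.4f} to {p:.4f}")
```

The framework of the paper generalises this recipe -- spectrum of the modified Hamiltonian, then optimal time and success probability -- to arbitrary graphs and multiple marked vertices.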
In this work, we propose a comprehensive design for narrowband and passband composite pulse sequences that involves the dynamics of all states in the three-state system. The design is quite universal, as all pulse parameters can be freely employed to modify the coefficients of the error terms. Two modulation techniques, strength modulation and phase modulation, are used to achieve arbitrary population transfer with a desired excitation profile while the system keeps leakage to the third state minimal. Furthermore, the current sequences are capable of tolerating inaccurate waveforms and detuning errors, and they work well when the rotating-wave approximation is not strictly justified. Therefore, this work provides versatile adaptability for shaping various excitation profiles in both narrowband and passband sequences.
Here we present a quantum algorithm for clustering data based on a variational quantum circuit. The algorithm allows data to be classified into many clusters and can easily be implemented on few-qubit Noisy Intermediate-Scale Quantum (NISQ) devices. The idea relies on reducing the clustering problem to an optimization problem, which is then solved via a Variational Quantum Eigensolver (VQE) combined with non-orthogonal qubit states. In practice, the method uses maximally orthogonal states of the target Hilbert space instead of the usual computational basis, allowing a large number of clusters to be considered even with few qubits. We benchmark the algorithm with numerical simulations using real datasets, showing excellent performance even with a single qubit. Moreover, a tensor-network simulation of the algorithm implements, by construction, a quantum-inspired clustering algorithm that can run on current classical hardware.
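The role of non-orthogonal "maximally orthogonal" states can be illustrated on a single qubit: K > 2 cluster labels are encoded as equally spaced states on the Bloch-sphere equator, whose pairwise overlap is cos^2(pi/K). This sketch (not the paper's circuit) just computes those overlaps:

```python
import numpy as np

def equator_state(theta):
    # Single-qubit state on the Bloch-sphere equator: (|0> + e^{i theta}|1>)/sqrt(2)
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

K = 3  # three cluster labels encoded on ONE qubit via maximally spread states
states = [equator_state(2 * np.pi * k / K) for k in range(K)]

# Pairwise fidelities |<psi_j|psi_k>|^2: 1 on the diagonal, cos^2(pi/K) off it.
overlaps = np.array([[abs(np.vdot(si, sj)) ** 2 for sj in states] for si in states])
print(np.round(overlaps, 3))
```

With orthogonal states a single qubit can only label two clusters; accepting a residual overlap of cos^2(pi/K) = 0.25 for K = 3 is the price of packing more labels into the same Hilbert space, which the variational optimization then has to tolerate.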
A new generation of sensors, hardware random number generators, and quantum and classical signal detectors exploits the strong response of system noise to external perturbations. Here, we study noise amplification by asymmetric dyads in freely expanding non-Hermitian optical systems. We show that modifications of the pumping strengths can counteract bias from natural imperfections of the system's hardware, while couplings between dyads lead to systems with non-uniform statistical distributions. Our results suggest that asymmetric non-Hermitian dyads are promising candidates for efficient sensors and ultra-fast random number generators. We propose that the integrated light emission from such asymmetric dyads can be efficiently used for analog all-optical degenerative diffusion models of machine learning to overcome the digital limitations of such models in processing speed and energy consumption.
In this paper we consider several algorithms for quantum computer vision using Noisy Intermediate-Scale Quantum (NISQ) devices, and benchmark them for a real problem against their classical counterparts. Specifically, we consider two approaches: a quantum Support Vector Machine (QSVM) on a universal gate-based quantum computer, and QBoost on a quantum annealer. The quantum vision systems are benchmarked for an unbalanced dataset of images where the aim is to detect defects in manufactured car pieces. We see that the quantum algorithms outperform their classical counterparts in several ways, with QBoost allowing for larger problems to be analyzed with present-day quantum annealers. Data preprocessing, including dimensionality reduction and contrast enhancement, is also discussed, as well as hyperparameter tuning in QBoost. To the best of our knowledge, this is the first implementation of quantum computer vision systems for a problem of industrial relevance in a manufacturing production line.
When preparing a pure state with a quantum circuit, there is an unavoidable approximation error due to the compilation error in the fault-tolerant implementation. A recently proposed approach called probabilistic state synthesis, in which the circuit is probabilistically sampled, is able to reduce the approximation error compared to conventional deterministic synthesis. In this paper, we demonstrate that optimal probabilistic synthesis quadratically reduces the approximation error. Moreover, we show that a deterministic synthesis algorithm can be efficiently converted into a probabilistic one that achieves this quadratic error reduction. We also numerically demonstrate how this conversion reduces the $T$-count and analytically prove that it halves an information-theoretic lower bound on the circuit size. To derive these results, we prove general theorems about the optimal convex approximation of a quantum state. Furthermore, we demonstrate that these theorems can be used to analyze an entanglement measure.
The widely used experimental technique of continuous-wave detection consists of counting photocurrent pulses from a click-type detector within a given measurement time window. With such a procedure, we miss the photons detected after each photocurrent pulse during the detector dead time. Additionally, each pulse may initiate a so-called afterpulse, which is not associated with a real photon. We derive the corresponding quantum photocounting formula and experimentally verify its validity. The statistics of photocurrent pulses turn out to be nonlinear with respect to the quantum state, which is explained by the memory effect of previous measurement time windows. Expressions -- in general, nonlinear -- connecting the statistics of photons and pulses are derived for different measurement scenarios. We also consider an application of the obtained results to quantum state reconstruction with unbalanced homodyne detection.
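The dead-time loss mechanism described above is easy to reproduce with a toy Monte Carlo (an illustrative model, not the paper's photocounting formula): Poissonian photon arrivals, and a detector that is blind for a fixed dead time after each registered pulse.

```python
import random

random.seed(1)

def detected_pulses(arrival_times, dead_time):
    """Click-detector model: after each registered pulse the detector is
    blind for `dead_time`; photons arriving in that window are missed."""
    pulses, last = 0, float("-inf")
    for t in arrival_times:
        if t - last >= dead_time:
            pulses += 1
            last = t
    return pulses

# Poisson photon arrivals at the given rate in a window [0, T).
rate, T, dead_time, trials = 50.0, 1.0, 0.05, 2000
missed_fraction = 0.0
for _ in range(trials):
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)  # exponential inter-arrival times
        if t >= T:
            break
        times.append(t)
    n_photons = len(times)
    n_pulses = detected_pulses(times, dead_time)
    missed_fraction += (n_photons - n_pulses) / max(n_photons, 1)
missed_fraction /= trials
print(f"average fraction of photons lost to dead time: {missed_fraction:.3f}")
```

The pulse count saturates at roughly 1/(1/rate + dead_time) per unit time, so the mapping from photon statistics to pulse statistics is nonlinear in the intensity, which is the effect the paper's photocounting formula captures exactly.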
Squeezing is essential to many quantum technologies and to our understanding of quantum physics. Here we develop a theory of the steady-state squeezing that can be generated in the closed and open quantum Rabi and Dicke models. To this end, we eliminate the spin dynamics, which effectively leads to an abstract harmonic oscillator whose eigenstates are squeezed with respect to the physical harmonic oscillator. The generated form of squeezing has the unique property of time-independent uncertainties and squeezed dynamics, a novel type of quantum behavior. Such squeezing might find applications in continuous back-action-evading measurements and should already be observable in optomechanical systems and Coulomb crystals.
We introduce a framework to compute upper bounds for temporal correlations achievable in open quantum system dynamics, obtained by repeated measurements on the system. As these correlations arise by virtue of the environment acting as a memory resource, such bounds are witnesses for the minimal dimension of an effective environment compatible with the observed statistics. These witnesses are derived from a hierarchy of semidefinite programs with guaranteed asymptotic convergence. We compute non-trivial bounds for various sequences involving a qubit system and a qubit environment, and compare the results to the best known quantum strategies producing the same outcome sequences. Our results provide a numerically tractable method to determine bounds on multi-time probability distributions in open quantum system dynamics and allow for the witnessing of effective environment dimensions through probing of the system alone.
Recently, the Digitized-Counterdiabatic (CD) Quantum Approximate Optimization Algorithm (QAOA) has been proposed to make QAOA converge to the solution of an optimization problem in fewer steps, inspired by Trotterized counterdiabatic driving in continuous-time quantum annealing. In this paper, we critically revisit this approach, focusing on the paradigmatic weighted and unweighted one-dimensional MaxCut problem. We study two variants of QAOA with first- and second-order CD corrections. Our results show that higher-order CD corrections indeed allow for quicker convergence to the exact solution of the problem at hand, at the cost of increasing the complexity of the variational cost function. Remarkably, however, the total number of free parameters needed to achieve this result is independent of the particular QAOA variant analyzed.
Distributing quantum information between remote systems will necessitate the integration of emerging quantum components with existing communication infrastructure. This requires understanding the channel-induced degradations of the transmitted quantum signals, beyond the typical characterization methods for classical communication systems. Here we report on a comprehensive characterization of a Boston-Area Quantum Network (BARQNET) telecom fiber testbed, measuring the time-of-flight, polarization, and phase noise imparted on transmitted signals. We further design and demonstrate a compensation system that is both resilient to these noise sources and compatible with integration of emerging quantum memory components on the deployed link. These results have utility for future work on the BARQNET as well as other quantum network testbeds in development, enabling near-term quantum networking demonstrations and informing what areas of technology development will be most impactful in advancing future system capabilities.
The random sampling task performed by Google's Sycamore processor gave us a glimpse of the "Quantum Supremacy era". This has shed a spotlight on the power of random quantum circuits in the abstract task of sampling outputs from (pseudo-)random circuits. In this manuscript, we explore a practical near-term use of local random quantum circuits in the dimensional reduction of large low-rank data sets. We make use of the well-studied dimensionality-reduction technique called the random projection method, which has been used extensively in applications such as image processing, logistic regression, and entropy computation of low-rank matrices. We prove that the matrix representations of local random quantum circuits with sufficiently short depths ($\sim O(n)$) serve as good candidates for random projection. We demonstrate numerically that their projection abilities are not far off from the computationally expensive classical principal component analysis on the MNIST and CIFAR-100 image data sets. We also benchmark the performance of quantum random projection against the commonly used classical random projection in the tasks of dimensionality reduction of image datasets and computing Von Neumann entropies of large low-rank density matrices. Finally, using variational quantum singular value decomposition, we demonstrate a near-term implementation for extracting the singular vectors with dominant singular values after quantum random projection of a large low-rank matrix to lower dimensions. All these numerical experiments unequivocally demonstrate the ability of local random circuits to randomize a large Hilbert space at sufficiently short depths while robustly retaining properties of large datasets in reduced dimensions.
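For context, the classical random projection baseline that the quantum circuits are benchmarked against can be sketched in a few lines -- a generic Johnson-Lindenstrauss-style Gaussian projection; the dimensions and data below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Low-rank data: 500 points in R^1024 living near a 10-dimensional subspace.
n, D, r, d = 500, 1024, 10, 64
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, D))

# Classical random projection: a Gaussian matrix scaled so that
# Euclidean norms are preserved in expectation.
R = rng.normal(size=(D, d)) / np.sqrt(d)
Y = X @ R

# Johnson-Lindenstrauss-style check: pairwise distances are approximately
# preserved after projecting from 1024 down to 64 dimensions.
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
print(f"distance ratio after projection: {proj / orig:.3f}")
```

The paper's claim is that the unitary of a shallow local random circuit, restricted to the relevant rows, behaves like such a matrix R while being implementable on quantum hardware.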
Continuous-variable bosonic systems stand as prominent candidates for implementing quantum computational tasks. While various necessary criteria have been established to assess their resourcefulness, sufficient conditions have remained elusive. We address this gap by focusing on promoting circuits that are otherwise simulatable to computational universality. The class of simulatable, albeit non-Gaussian, circuits that we consider is composed of Gottesman-Kitaev-Preskill (GKP) states, Gaussian operations, and homodyne measurements. Based on these circuits, we first introduce a general framework for mapping a continuous-variable state into a qubit state. Subsequently, we cast existing maps into this framework, including the modular and stabilizer subsystem decompositions. By combining these findings with established results for discrete-variable systems, we formulate a sufficient condition for achieving universal quantum computation. Leveraging this, we evaluate the computational resourcefulness of a variety of states, including Gaussian states, finite-squeezing GKP states, and cat states. Furthermore, our framework reveals that both the stabilizer subsystem decomposition and the modular subsystem decomposition (of position-symmetric states) can be constructed in terms of simulatable operations. This establishes a robust resource-theoretical foundation for employing these techniques to evaluate the logical content of a generic continuous-variable state, which can be of independent interest.
Overcoming the issue of qubit-frequency fluctuations is essential to realize stable and practical quantum computing with solid-state qubits. Static ZZ interaction, which causes a frequency shift of a qubit depending on the state of neighboring qubits, is one of the major obstacles to integrating fixed-frequency transmon qubits. Here we propose and experimentally demonstrate ZZ-interaction-free single-qubit-gate operations on a superconducting transmon qubit by utilizing a semi-analytically optimized pulse based on a perturbative analysis. The gate is designed to be robust against slow qubit-frequency fluctuations. The robustness of the optimized gate spans a few MHz, which is sufficient for suppressing the adverse effects of the ZZ interaction. Our result paves the way for an efficient approach to overcoming the issue of ZZ interaction without any additional hardware overhead.
We study non-Hermitian fermionic superfluidity subject to dissipation of Cooper pairs on a honeycomb lattice, for which we analyze the attractive Hubbard model with a complex-valued interaction. Remarkably, we demonstrate the emergence of a dissipation-induced superfluid phase that is anomalously enlarged by a cusp on the phase boundary. We find that this unconventional phase transition originates from the interplay between exceptional lines and the van Hove singularity, which has no counterpart in equilibrium. Moreover, we demonstrate that infinitesimal dissipation induces a nontrivial superfluid solution at the critical point. Our results can be tested in ultracold atoms with photoassociation techniques by postselecting special measurement outcomes with the use of quantum-gas microscopy, and they can lead to an understanding of the non-Hermitian many-body physics triggered by exceptional manifolds in open quantum systems.
Measurement-induced phase transitions (MIPTs) are known to be described by non-unitary conformal field theories (CFTs) whose precise nature remains unknown. Most physical quantities of interest, such as the entanglement features of quantum trajectories, are described by boundary observables in this CFT. We introduce a transfer matrix approach to study the boundary spectrum of this field theory, and consider a variety of boundary conditions. We apply this approach numerically to monitored Haar and Clifford circuits, and to the measurement-only Ising model where the boundary scaling dimensions can be derived analytically. Our transfer matrix approach provides a systematic numerical tool to study the spectrum of MIPTs.
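The abstract above rests on the idea that boundary scaling dimensions can be read off from ratios of transfer matrix eigenvalues. As a minimal, hedged illustration of that extraction step (using the exactly solvable classical 1D Ising transfer matrix as a stand-in, not the monitored-circuit transfer matrix studied in the paper):

```python
import math

def leading_spectral_gap(T):
    """Log-ratio of the two leading eigenvalues of a symmetric 2x2
    transfer matrix; this gap sets the inverse correlation length,
    and in a CFT setting such ratios encode scaling dimensions."""
    tr = T[0][0] + T[1][1]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    disc = max(tr * tr - 4.0 * det, 0.0) ** 0.5
    lam0, lam1 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return math.log(lam0 / lam1)

# 1D Ising transfer matrix at coupling K: eigenvalues 2cosh(K), 2sinh(K),
# so the gap is log(coth K) = -log(tanh K).
K = 1.0
T = [[math.exp(K), math.exp(-K)], [math.exp(-K), math.exp(K)]]
gap = leading_spectral_gap(T)
```

In the paper's setting the matrix is numerically much larger (monitored Haar/Clifford circuits), but the diagnostic, i.e. eigenvalue ratios of a transfer matrix under chosen boundary conditions, is the same.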
Thermodynamic computing exploits fluctuations and dissipation in physical systems to efficiently solve various mathematical problems. For example, it was recently shown that certain linear algebra problems can be solved thermodynamically, leading to an asymptotic speedup scaling with the matrix dimension. The origin of this "thermodynamic advantage" has not yet been fully explained, and it is not clear what other problems might benefit from it. Here we provide a new thermodynamic algorithm for exponentiating a real matrix, with applications in simulating linear dynamical systems. We describe a simple electrical circuit involving coupled oscillators, whose thermal equilibration can implement our algorithm. We show that this algorithm also provides an asymptotic speedup that is linear in the dimension. Finally, we introduce the concept of thermodynamic parallelism to explain this speedup, stating that thermodynamic noise provides a resource leading to effective parallelization of computations, and we hypothesize this as a mechanism to explain thermodynamic advantage more generally.
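For context on the primitive being accelerated, here is a conventional (non-thermodynamic) sketch of real-matrix exponentiation via a truncated Taylor series in pure Python; the oscillator-based protocol itself is not described in the abstract, so this only fixes what "exponentiating a real matrix" computes:

```python
# Conventional reference implementation of exp(A) for a real matrix A,
# via the truncated Taylor series exp(A) = sum_k A^k / k!.
# This is NOT the thermodynamic oscillator protocol from the paper.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[r + p / fact for r, p in zip(rr, pr)]
                  for rr, pr in zip(result, power)]
    return result

# Nilpotent example with a closed form: exp([[0,1],[0,0]]) = [[1,1],[0,1]].
E = expm([[0.0, 1.0], [0.0, 0.0]])
```

The claimed thermodynamic speedup is relative to dense classical routines like this one, whose cost per step is matrix-matrix multiplication.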
Controlled operations are fundamental building blocks of quantum algorithms. Decomposing $n$-controlled-NOT gates ($C^n(X)$) into arbitrary single-qubit and CNOT gates is a crucial but non-trivial task. This study introduces $C^n(X)$ circuits outperforming previous methods in the asymptotic and non-asymptotic regimes. Three distinct decompositions are presented: an exact one using one borrowed ancilla with a circuit depth $\Theta\left(\log(n)^{\log_2(12)}\right)$, an approximate one without ancilla qubits with a circuit depth $\mathcal O \left(\log(n)^{\log_2(12)}\log(1/\epsilon)\right)$, and an exact one with an adjustable-depth circuit using $m\leq n$ ancilla qubits. The resulting exponential speedup is likely to have a substantial impact on fault-tolerant quantum computing by improving the complexities of countless quantum algorithms with applications ranging from quantum chemistry to physics, finance and quantum machine learning.
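The decompositions themselves are not given in the abstract, but the target gate's action is simple to state and useful as a correctness oracle for any decomposition: $C^n(X)$ flips the target qubit exactly when all $n$ controls are $1$. A minimal state-vector sketch (target taken as the least-significant bit, a convention chosen here for illustration):

```python
# Reference action of C^n(X) on a state vector of n controls + 1 target.
# Basis index convention (an assumption for this sketch): bit 0 = target,
# higher bits = controls. Only the two basis states with all controls = 1
# (indices dim-2 and dim-1) are exchanged; everything else is untouched.

def apply_cnx(state, n_controls):
    dim = 2 ** (n_controls + 1)
    assert len(state) == dim
    out = state[:]
    out[dim - 2], out[dim - 1] = out[dim - 1], out[dim - 2]
    return out

# |1110> (controls 111, target 0) maps to |1111>.
state = [0.0] * 16
state[0b1110] = 1.0
out = apply_cnx(state, 3)
```

Checking a candidate decomposition then amounts to comparing its circuit's action on all basis states against this permutation.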
In the framework of distributionally generalized quantum theory, the object $H\psi$ is defined as a distribution. The mathematical significance is a mild generalization of the theory of para- and pseudo-differential operators (as well as a generalization of the weak eigenvalue problem), where the $\psi$-do symbol (which is not a proper linear operator in this generalized case) can have its coefficient functions take on singular distributional values. Here, a distribution is said to be singular if it is not in L$^p(\mathbb{R}^d)$ for any $p\geq 1$. Physically, the significance is a mathematically rigorous method, which does not rely upon renormalization or regularization of any kind, while producing bound state energy results in agreement with the literature. In addition, another benefit is that the method does not rely upon self-adjoint extensions of the Laplace operator. This is important when the theory is applied to non-Schrödinger systems, as is the case for the Dirac equation, and is a necessary property of any finite rigorous version of quantum field theory. The distributional interpretation resolves the need to evaluate a wave function at a point where it fails to be defined. For $d=2$, this occurs as $K_0(a|x|)\delta(x)$, where $K_0$ is the zeroth-order Macdonald function. Finally, there is also the identification of a missing anomalous length scale, owing to the scale invariance of the formal symbol(ic) Hamiltonian, as well as the common identity for the logarithmic function, $\log(ab)=\log(a)+\log(b)$ with $a,\,b\in\mathbb{R}^+$, whose arguments cease to be dimensionless. Consequently, the energy or point spectrum is generalized as a family (a set indexed by the continuum) of would-be spectral values, called the C-spectrum.
Large-scale quantum networks, necessary for distributed quantum information processing, are posited to have quantum entangled systems between distant network nodes. The extent and quality of distributed entanglement in a quantum network, that is, its functionality, depend on its topology, edge-parameter distributions and the distribution protocol. We uncover the parametric entanglement topography and introduce the notion of typical and maximal viable regions for entanglement-enabled tasks in a general model of large-scale quantum networks. We show that such a topographical analysis, in terms of viability regions, reveals important functional information about quantum networks, provides experimental targets for the edge parameters and can guide efficient quantum network design. Applied to a photonic quantum network, such a topographical analysis shows that in a network with a radius of $10^3$ km and 1500 nodes, arbitrary pairs of nodes can establish quantum secure keys at a rate of $R_{sec}=1$ kHz using $1$ MHz entanglement generation sources on the edges and as few as 3 entanglement swappings at intermediate nodes along network paths.
In this paper we show how tensor networks help in developing explainability of machine learning algorithms. Specifically, we develop an unsupervised clustering algorithm based on Matrix Product States (MPS) and apply it to a real use case involving adversary-generated threat intelligence. Our investigation shows that MPS rival traditional deep learning models such as autoencoders and GANs in terms of performance, while providing much richer model interpretability. Our approach naturally facilitates the extraction of feature-wise probabilities, Von Neumann entropy, and mutual information, offering a compelling narrative for the classification of anomalies and fostering an unprecedented level of transparency and interpretability, which is fundamental to understanding the rationale behind artificial-intelligence decisions.
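One of the interpretability quantities named above, Von Neumann entropy across a bipartition, is straightforward to extract from a state's Schmidt spectrum. A minimal two-qubit, real-amplitude sketch (not the paper's MPS pipeline, where the same quantity comes from the singular values at a bond):

```python
# Von Neumann entropy of a two-qubit pure state across the 1|1 cut.
# Assumption for this sketch: real amplitudes, index = 2*a + b for |ab>.
import math

def schmidt_probs(state):
    """Squared Schmidt coefficients = eigenvalues of M M^T,
    where M is the state reshaped into a 2x2 matrix."""
    M = [[state[0], state[1]], [state[2], state[3]]]
    g00 = M[0][0] ** 2 + M[0][1] ** 2
    g11 = M[1][0] ** 2 + M[1][1] ** 2
    g01 = M[0][0] * M[1][0] + M[0][1] * M[1][1]
    tr, det = g00 + g11, g00 * g11 - g01 * g01
    disc = max(tr * tr - 4.0 * det, 0.0) ** 0.5
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def von_neumann_entropy(state):
    return -sum(p * math.log(p) for p in schmidt_probs(state) if p > 1e-12)

bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]   # maximally entangled: entropy log 2
product = [1.0, 0.0, 0.0, 0.0]            # product state: entropy 0
```

In an MPS the analogous computation runs per bond, which is what makes the entropy a cheap, feature-resolved interpretability signal.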
This work provides an error analysis of quantum Krylov algorithms based on real-time evolutions, subject to generic errors in the outputs of the quantum circuits. We establish a collective noise rate to summarize those errors, and prove that the resulting errors in the ground state energy estimates are leading-order linear in that noise rate. This resolves a misalignment between known numerics, which exhibit this linear scaling, and prior theoretical analysis, which only provably obtained square-root scaling. Our main technique is expressing generic errors in terms of an effective target Hamiltonian studied in an effective Krylov space. These results provide a theoretical framework for understanding the main features of quantum Krylov errors.
We study the effect of ferromagnetic metals (FM) on the circularly polarized modes of an electromagnetic cavity and show that broken time-reversal symmetry leads to a dichroic response of the cavity modes. With one spin-split band, the Zeeman coupling between the FM electrons and cavity modes leads to an anticrossing for mode frequencies comparable to the spin splitting. However, this is only the case for one of the circularly polarized modes, while the other is unaffected by the FM, allowing for the determination of the spin splitting of the FM using polarization-dependent transmission experiments. Moreover, we show that for two spin-split bands, the lifetimes of the cavity modes also display a polarization-dependent response. The reduced lifetime of modes of only one polarization could potentially be used to engineer and control circularly polarized cavities.