Linear-cosmology observables, such as the Cosmic Microwave Background (CMB) or the large-scale distribution of matter, have long been used as clean probes of dark matter (DM) interactions with baryons. It is standard to model the DM as an ideal fluid with a thermal Maxwell-Boltzmann (MB) velocity distribution in order to compute the heat- and momentum-exchange rates relevant to these probes. This approximation applies only in the limit where DM self-interactions are frequent enough to efficiently redistribute DM velocities. It does not accurately describe weakly self-interacting particles, whose velocity distribution unavoidably departs from MB once they decouple from baryons. This article lays out a new formalism that accurately models DM-baryon scattering even when DM self-interactions are negligible. The ideal fluid equations are replaced by the collisional Boltzmann equation for the DM phase-space distribution. The collision operator is approximated by a Fokker-Planck operator, constructed to recover the exact heat- and momentum-exchange rates and to allow an efficient numerical implementation. Numerical solutions to the background evolution are presented, showing that the MB approximation can overestimate the heat-exchange rate by factors of ~2-3, especially for light DM particles. A Boltzmann-Fokker-Planck hierarchy for the perturbations is derived. This new formalism makes it possible to explore a wider range of DM models, and will be especially relevant for upcoming ultra-high-sensitivity CMB probes.
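As a schematic illustration, the generic form of such a Fokker-Planck collision operator acting on the DM phase-space distribution $f(\mathbf{p})$ is (this is the generic structure, not necessarily the authors' exact construction):

$$ C[f] \;=\; \frac{\partial}{\partial p^i}\left[ A^i(\mathbf{p})\, f \;+\; \frac{\partial}{\partial p^j}\Big( D^{ij}(\mathbf{p})\, f \Big) \right], $$

where the drift coefficient $A^i$ and the diffusion tensor $D^{ij}$ are chosen so that the first two velocity moments of $C[f]$ reproduce the exact momentum- and heat-exchange rates with baryons; being a total divergence in momentum space, the operator conserves the DM number density by construction.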
Paleo-detectors are ancient minerals that can record and retain tracks induced by nuclear recoils over billion-year timescales. They may represent the most sensitive method for the direct detection of Dark Matter (DM) to date. Here, we improve upon the cut-and-count approach previously employed for paleo-detectors by performing a full spectral analysis of the DM- and background-induced track-length distributions. This spectral analysis allows us to project improved exclusion limits and detection thresholds for DM. Further, we investigate the impact of background-shape uncertainties using realistic background models. We find that in the most optimistic case of a percent-level understanding of the background shape, we can achieve sensitivity to DM-nucleon scattering cross sections up to a factor of 100 smaller than current XENON1T bounds for DM masses above $100\,$GeV. For DM lighter than $10\,$GeV, paleo-detectors can probe DM-nucleon cross sections many orders of magnitude below current experimental limits. Allowing for larger uncertainties in the shape of the backgrounds, we find that the impact on the sensitivity is considerable. However, assuming 10% bin-to-bin shape uncertainties, the sensitivity of paleo-detectors still improves over XENON1T limits by a factor of $\sim 8$ for DM heavier than $100\,$GeV. For lighter DM candidates, even with 50% bin-to-bin background-shape uncertainties, paleo-detectors could achieve sensitivities an order of magnitude better than proposed conventional low-threshold experiments. Finally, we show that, in the case of a DM discovery, the regions in which the mass can be constrained extend to significantly higher DM masses than for proposed conventional experiments. For DM-nucleon cross sections just below current XENON1T limits, paleo-detectors could constrain the DM mass even if the new particle is as heavy as $1\,$TeV.
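To make the role of the bin-to-bin shape uncertainty concrete, the following minimal Python sketch (not the authors' pipeline; the templates, bin counts, and numbers are invented for illustration) implements a binned Poisson likelihood over track-length bins in which each background bin carries a Gaussian-constrained scale factor:

```python
# Minimal sketch: binned Poisson likelihood with per-bin background scale
# factors, each Gaussian-constrained at the assumed bin-to-bin shape
# uncertainty. All templates and numbers are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm

rng = np.random.default_rng(0)

n_bins = 20
bkg = 1e3 * np.exp(-np.linspace(0, 3, n_bins))   # assumed background template
sig = 50.0 * np.exp(-np.linspace(0, 1, n_bins))  # assumed DM signal template
shape_unc = 0.10                                 # 10% bin-to-bin shape uncertainty

data = rng.poisson(bkg)                          # background-only pseudo-experiment

def nll(theta, mu_sig):
    """Negative log-likelihood; theta are per-bin background scale factors."""
    expected = mu_sig * sig + bkg * theta
    ll = poisson.logpmf(data, expected).sum()
    ll += norm.logpdf(theta, loc=1.0, scale=shape_unc).sum()  # shape constraint
    return -ll

def profile_nll(mu_sig):
    """Profile out the nuisance parameters at fixed signal strength."""
    res = minimize(nll, np.ones(n_bins), args=(mu_sig,), method="L-BFGS-B",
                   bounds=[(1e-3, None)] * n_bins)
    return res.fun

# Likelihood-ratio curve relative to the background-only fit: a sketch of how
# shape uncertainties degrade the achievable upper limit on the signal.
q0 = profile_nll(0.0)
for mu in (0.5, 1.0, 2.0):
    print(f"mu = {mu:3.1f}: 2*Delta(NLL) = {2 * (profile_nll(mu) - q0):.2f}")
```

Shrinking `shape_unc` toward the percent level recovers most of the statistical sensitivity, while inflating it to 0.5 mimics the pessimistic 50% scenario discussed above.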
Testing the distance sum rule in strong-lensing systems provides an interesting method for determining the curvature parameter $\Omega_k$ using relatively low-redshift objects. In this paper, we apply this method to a recent data set of strong-lensing systems in combination with intermediate-luminosity quasars calibrated as standard rulers. In the framework of three types of lens models extensively used in strong-lensing studies (the SIS model, the power-law spherical model, and the extended power-law lens model), we show that the assumed lens model has a considerable impact on the cosmic-curvature constraint, which is found to be compatible or marginally compatible with the flat case, depending on the lens model adopted. Analysis of low-, intermediate- and high-mass sub-samples, defined according to the lens velocity dispersion, demonstrates that, although it is not reasonable to characterize all lenses with a uniform model, such a division has little impact on the inferred cosmic curvature. Finally, looking ahead to the yields of future massive surveys, we simulated a mock catalog of strong-lensing systems expected to be observed by the LSST, together with a realistic catalog of quasars. We found that about 16,000 such systems, combined with the distance information provided by 500 compact milliarcsecond radio sources observed in future radio-astronomical surveys, would constrain the cosmic curvature with an accuracy of $\Delta \Omega_k \simeq 10^{-3}$, comparable to the precision of the \textit{Planck} 2015 results.
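For reference, the distance sum rule in an FLRW background reads, in terms of the dimensionless comoving distances $d(z_1,z_2) \equiv (H_0/c)\,(1+z_2)\,D_A(z_1,z_2)$ with $d_l = d(0,z_l)$, $d_s = d(0,z_s)$ and $d_{ls} = d(z_l,z_s)$:

$$ d_{ls} \;=\; d_s\,\sqrt{1+\Omega_k d_l^2} \;-\; d_l\,\sqrt{1+\Omega_k d_s^2}, $$

so that the ratio $d_{ls}/d_s$ inferred from the Einstein radius of each lens, combined with $d_l$ and $d_s$ measured by standard rulers, constrains $\Omega_k$ without assuming any specific dark-energy model.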
The determination of the inflationary energy scale represents one of the first steps towards understanding the physics of the early Universe. The (very mild) non-Gaussian signals that arise in any inflation model carry information about the energy scale of inflation and may leave an imprint on cosmological observables, for instance on the clustering of high-redshift, rare, massive collapsed structures. In particular, the graviton-exchange contribution due to interactions between scalar and tensor fluctuations leaves a specific signature in the four-point function of curvature perturbations, and thus in the clustering properties of collapsed structures. We compute the contribution of graviton exchange to the two- and three-point functions of halos, showing that at large scales, $k\sim 10^{-3}\ \mathrm{Mpc}^{-1}$, its magnitude is comparable to or larger than that of other primordial non-Gaussian signals discussed in the literature. This provides a potential route to probing the existence of tensor fluctuations that is alternative and highly complementary to B-mode polarisation measurements of the cosmic microwave background radiation.
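Schematically, suppressing order-unity coefficients and the graviton polarization structure (a heuristic scaling, not the full result), the graviton-exchange contribution to the curvature trispectrum behaves as

$$ T^{\rm GE}_\zeta(\mathbf{k}_1,\dots,\mathbf{k}_4) \;\propto\; r\; P_\zeta(k_1)\, P_\zeta(k_3)\, P_\zeta(k_{12}) \;+\; \mathrm{perms}, \qquad k_{12} \equiv |\mathbf{k}_1+\mathbf{k}_2|, $$

with the tensor-to-scalar ratio $r$ setting the overall amplitude; this is how the induced halo-clustering signal tracks the inflationary energy scale.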
Forthcoming exascale digital computers will further advance our knowledge of quantum chromodynamics (QCD), but formidable challenges will remain. In particular, Euclidean Monte Carlo methods are not well suited to studying real-time evolution in hadronic collisions, or the properties of hadronic matter at nonzero temperature and chemical potential. Digital computers may never be able to achieve accurate simulations of such phenomena in QCD and other strongly-coupled field theories; quantum computers will do so eventually, though I'm not sure when. Progress toward quantum simulation of quantum field theory will require the collaborative efforts of quantumists and field theorists, and though the physics payoff may still be far away, it's worthwhile to get started now. Today's research can hasten the arrival of a new era in which quantum simulation fuels rapid progress in fundamental physics.