Max Planck Institute for Gravitational Physics (Hannover)

Hannover, Germany

Behnke B.,Max Planck Institute for Gravitational Physics (Hannover) | Papa M.A.,Max Planck Institute for Gravitational Physics (Hannover) | Papa M.A.,University of Wisconsin - Milwaukee | Prix R.,Max Planck Institute for Gravitational Physics (Hannover)
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

The search for continuous gravitational-wave signals requires the development of techniques that can effectively explore the low-significance regions of the candidate set. In this paper we present the methods that were developed for a search for continuous gravitational-wave signals from the Galactic Center [1]. First, we present a data-selection method that increases the sensitivity of the chosen data set by 20%-30% compared to the selection methods used in previous directed searches. Second, we introduce postprocessing methods that reliably rule out candidates that stem from random fluctuations or disturbances in the data. In the context of [J. Aasi et al., Phys. Rev. D 88, 102002 (2013)], their use enabled the investigation of marginal candidates (three standard deviations below the loudest expected candidate in Gaussian noise from the entire search). Such low-significance regions had not been explored in continuous gravitational-wave searches before. We finally present a new procedure for deriving upper limits on the gravitational-wave amplitude, which is several times faster than the standard injection-and-search approach commonly used. © 2015 American Physical Society.
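As a rough illustration of the "loudest expected candidate in Gaussian noise" threshold referred to above, the sketch below (not taken from the paper) estimates the expected maximum of a detection statistic over a large template bank, under the common assumption that the statistic 2F follows a chi-squared distribution with 4 degrees of freedom per template in Gaussian noise; the template count is an illustrative placeholder.

    # Rough illustration: expected loudest value of a detection statistic over a large template
    # bank in Gaussian noise, assuming 2F follows a chi-squared distribution with 4 degrees of
    # freedom per template; the template count is a placeholder, not the paper's search setup.
    import numpy as np
    from scipy.stats import chi2

    def expected_loudest_2F(n_templates, dof=4, x_max=200.0, n_grid=200_000):
        """Approximate expectation of the loudest 2F over n_templates independent templates."""
        x = np.linspace(0.0, x_max, n_grid)
        cdf_max = chi2.cdf(x, dof) ** n_templates       # CDF of the maximum over all templates
        return np.sum(1.0 - cdf_max) * (x[1] - x[0])    # E[max] = integral of its survival function

    print(expected_loudest_2F(1.0e12))                  # e.g. ~10^12 templates (illustrative)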


Krishnan B.,Max Planck Institute for Physics | Krishnan B.,Max Planck Institute for Gravitational Physics (Hannover)
Classical and Quantum Gravity | Year: 2012

We construct the spacetime in the vicinity of a general isolated, rotating, charged black hole. The black hole is modeled as a weakly isolated horizon, and we use the characteristic initial value formulation of the Einstein equations with the horizon as an inner boundary. The spacetime metric and other geometric fields are expanded in a power series in a radial coordinate away from the horizon by solving the characteristic field equations in the Newman-Penrose formalism. This is the first in a series of papers which investigate the near-horizon geometry and its physical applications using the isolated horizon framework. © 2012 IOP Publishing Ltd.


Leaci P.,Max Planck Institute for Gravitational Physics | Leaci P.,University of Rome La Sapienza | Prix R.,Max Planck Institute for Gravitational Physics (Hannover)
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

We derive simple analytic expressions for the (coherent and semicoherent) phase metrics of continuous-wave sources in low-eccentricity binary systems for the two regimes of long and short segments compared to the orbital period. The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte Carlo studies comparing metric mismatch predictions against the measured loss of detection statistics for binary parameter offsets. The agreement is generally found to be within ∼10%-30%. For an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [R. V. Wagoner, Astrophys. J. 278, 345 (1984); L. Bildsten, Astrophys. J. 501, L89 (1998).] up to a frequency of ∼500-600 Hz, if orbital eccentricity is well constrained, and up to a frequency of ∼160-200 Hz for more conservative assumptions about the uncertainty on orbital eccentricity. © Published by the American Physical Society 2015.
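For readers unfamiliar with the metric-mismatch language used above, the following minimal sketch shows how a parameter-space metric g_ij predicts the fractional loss of squared signal-to-noise ratio for a small template offset; the metric values and offsets are placeholders, not the binary-orbital metrics derived in the paper.

    # Hypothetical sketch: fractional loss of squared signal-to-noise ratio ("mismatch")
    # predicted by a parameter-space metric g_ij for a small template offset dlambda,
    # m = g_ij dlambda^i dlambda^j. Metric and offsets are illustrative placeholders.
    import numpy as np

    g = np.array([[2.0e3, 1.0e1],
                  [1.0e1, 5.0e2]])        # illustrative 2x2 metric over two search parameters
    dlam = np.array([5.0e-3, 5.0e-3])     # small illustrative offsets from the signal parameters

    mismatch = dlam @ g @ dlam            # m = g_ij dlambda^i dlambda^j
    print(f"predicted fractional loss of squared SNR: {mismatch:.3f}")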


Rosado P.A.,Max Planck Institute for Gravitational Physics (Hannover)
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

Basic aspects of the background of gravitational waves and its mathematical characterization are reviewed. The spectral energy density parameter Ω(f), commonly used as a quantifier of the background, is derived for an ensemble of many identical sources emitting at different times and locations. For such an ensemble, Ω(f) is generalized to account for the duration of the signals and of the observation, so that one can distinguish the resolvable and unresolvable parts of the background. The unresolvable part, often called confusion noise or stochastic background, is made up of signals that can be neither individually identified nor subtracted out of the data. To account for the resolvability of the background, the overlap function is introduced. This function is a generalization of the duty cycle, which has been commonly used in the literature, in some cases leading to incorrect results. The spectra produced by binary systems (stellar binaries and massive black hole binaries) are presented over the frequencies of all existing and planned detectors. A semi-analytical formula for Ω(f) is derived in the case of stellar binaries (containing white dwarfs, neutron stars or stellar-mass black holes). Besides a realistic expectation of the level of the background, upper and lower limits are given to account for the uncertainties in some astrophysical parameters such as binary coalescence rates. One interesting result concerns all current and planned ground-based detectors (including the Einstein Telescope). In their frequency range, the background of binaries is resolvable and only sporadically present. In other words, there is no stochastic background of binaries for ground-based detectors. © 2011 American Physical Society.
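For reference, the standard textbook definition of the spectral energy-density parameter that the paper generalizes (ρ_c is the critical energy density of the Universe); this is background notation, not an equation quoted from the paper:

    \Omega(f) \;=\; \frac{1}{\rho_c}\,\frac{\mathrm{d}\rho_{\mathrm{GW}}}{\mathrm{d}\ln f},
    \qquad
    \rho_c \;=\; \frac{3\,c^{2}H_0^{2}}{8\pi G}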


Helling Ch.,University of St. Andrews | Jardine M.,University of St. Andrews | Mokler F.,Max Planck Institute for Extraterrestrial Physics | Mokler F.,Max Planck Institute for Gravitational Physics (Hannover)
Astrophysical Journal | Year: 2011

Observations have shown that continuous radio emission and also sporadic Hα and X-ray emission are prominent in singular, low-mass objects later than spectral class M. These activity signatures are interpreted as being caused by coupling of an ionized atmosphere to the stellar magnetic field. What remains a puzzle, however, is the mechanism by which such a cool atmosphere can produce the necessary level of ionization. At these low temperatures, thermal gas processes are insufficient, but the formation of clouds sets in. Cloud particles can act as seeds for electron avalanches in streamers that ionize the ambient gas, and can lead to lightning and indirectly to magnetic field coupling, a combination of processes also expected for protoplanetary disks. However, the precondition is that the cloud particles are charged. We use results from DRIFT-PHOENIX model atmospheres to investigate collisional processes that can lead to the ionization of dust grains inside clouds. We show that ionization by turbulence-induced dust-dust collisions is the most efficient kinetic process. The efficiency is highest in the inner cloud where particles grow quickly and, hence, the dust-to-gas ratio is high. Dust-dust collisions alone are not sufficient to improve the magnetic coupling of the atmosphere inside the cloud layers, but the charges supplied either on grains or within the gas phase as separated electrons can trigger secondary nonlinear processes. Cosmic rays are likely to increase the global level of ionization, but their influence decreases if a strong, large-scale magnetic field is present as on brown dwarfs. We suggest that although thermal gas ionization declines in objects across the fully convective boundary, dust charging by collisional processes can play an important role in the lowest mass objects. The onset of atmospheric dust may therefore correlate with the anomalous X-ray and radio emission in atmospheres that are cool, but charged more than expected by pure thermal ionization. © 2011. The American Astronomical Society. All rights reserved.


Van Haasteren R.,Max Planck Institute for Gravitational Physics (Hannover)
Monthly Notices of the Royal Astronomical Society | Year: 2013

The analysis of pulsar timing data, especially in pulsar timing array (PTA) projects, has encountered practical difficulties: evaluating the likelihood and/or correlation-based statistics can become prohibitively computationally expensive for large data sets. In situations where a stochastic signal of interest has a power spectral density that dominates the noise in a limited bandwidth of the total frequency domain (e.g. the isotropic background of gravitational waves), a linear transformation exists that transforms the timing residuals to a basis in which virtually all the information about the stochastic signal of interest is contained in a small fraction of basis vectors. By only considering such a small subset of these 'generalized residuals', the dimensionality of the data analysis problem is greatly reduced, which can cause a large speedup in the evaluation of the likelihood: the ABC-method (Acceleration By Compression). The compression fidelity, calculable with crude estimates of the signal and noise, can be used to determine how far a data set can be compressed without significant loss of information. Both direct tests on the likelihood, and Bayesian analysis of mock data, show that the signal can be recovered as well as with an analysis of uncompressed data. In the analysis of International PTA Mock Data Challenge data sets, speedups of three orders of magnitude are demonstrated. For realistic PTA data sets the acceleration may become greater than six orders of magnitude due to the low signal-to-noise ratio. © 2012 The Author.
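The following sketch is a toy illustration in the spirit of the compression idea described above (not the paper's implementation): simulated timing residuals are projected onto the dominant eigenvectors of an assumed signal covariance, and the Gaussian likelihood is evaluated in the reduced basis. All covariances, sizes and amplitudes are illustrative placeholders.

    # Toy illustration of likelihood compression: project simulated timing residuals onto the
    # dominant eigenvectors of an assumed signal covariance and evaluate the Gaussian likelihood
    # in that reduced basis. All covariances and dimensions are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_toa = 500
    t = np.linspace(0.0, 5.0, n_toa)                    # observation epochs (years)
    tau = np.abs(t[:, None] - t[None, :])

    C_signal = 1e-12 / (1.0 + tau)                      # placeholder low-frequency signal covariance (s^2)
    C_noise = 1e-13 * np.eye(n_toa)                     # placeholder white measurement noise (s^2)
    residuals = rng.multivariate_normal(np.zeros(n_toa), C_signal + C_noise)

    # Compressed basis: the few eigenvectors that carry most of the signal variance.
    evals, evecs = np.linalg.eigh(C_signal)
    U = evecs[:, np.argsort(evals)[::-1][:20]]          # keep 20 basis vectors out of 500

    def gaussian_loglike(r, C):
        sign, logdet = np.linalg.slogdet(C)
        return -0.5 * (r @ np.linalg.solve(C, r) + logdet + r.size * np.log(2.0 * np.pi))

    # The expensive n_toa x n_toa linear algebra becomes a cheap 20 x 20 problem.
    r_c = U.T @ residuals
    C_c = U.T @ (C_signal + C_noise) @ U
    print(gaussian_loglike(r_c, C_c))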


Van Haasteren R.,Max Planck Institute for Gravitational Physics (Hannover) | Van Haasteren R.,Leiden University | Levin Y.,Leiden University | Levin Y.,Monash University
Monthly Notices of the Royal Astronomical Society | Year: 2013

Although it is widely understood that pulsar timing observations generally contain time-correlated stochastic signals (TCSSs; red timing noise is of this type), most data analysis techniques that have been developed make an assumption that the stochastic uncertainties in the data are uncorrelated, i.e. 'white'. Recent work has pointed out that this can introduce severe bias in the determination of timing-model parameters and that better analysis methods should be used. This paper presents a detailed investigation of timing-model fitting in the presence of TCSSs and gives closed expressions for the post-fit signals in the data. This results in a Bayesian technique to obtain timing-model parameter estimates in the presence of TCSSs, as well as computationally more efficient expressions of their marginalized posterior distribution. A new method to analyse hundreds of mock data set realizations simultaneously without significant computational overhead is presented, as well as a statistically rigorous method to check the internal consistency of the results. As a by-product of the analysis, closed expressions of the rms introduced by a stochastic background of gravitational waves in timing residuals are obtained, valid for regularly sampled data. Using T as the length of the data set and h_c(1 yr^-1) as the characteristic strain, this is σ_GWB^2 = h_c(1 yr^-1)^2 (9 (2π^4)^(1/3) Γ(-10/3) / 8008) yr^(-4/3) T^(10/3). © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
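Assuming the reading of the rms expression quoted above, the short sketch below evaluates it numerically; the characteristic strain and time span are illustrative values, not results from the paper.

    # Hypothetical numerical evaluation of the rms expression quoted in the abstract above;
    # the characteristic strain and time span chosen here are purely illustrative.
    import numpy as np
    from scipy.special import gamma

    YEAR = 3.15576e7                                    # seconds per year

    def rms_gwb(h_c_1yr, T_years):
        """rms timing residual (seconds) induced by the background, per the quoted expression."""
        prefactor = 9.0 * (2.0 * np.pi**4) ** (1.0 / 3.0) * gamma(-10.0 / 3.0) / 8008.0
        sigma2 = h_c_1yr**2 * prefactor * YEAR ** (-4.0 / 3.0) * (T_years * YEAR) ** (10.0 / 3.0)
        return np.sqrt(sigma2)

    print(rms_gwb(1.0e-15, 10.0))                       # roughly 7e-8 s for these illustrative values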


Ellis J.A.,University of Wisconsin - Milwaukee | Siemens X.,University of Wisconsin - Milwaukee | Van Haasteren R.,Max Planck Institute for Gravitational Physics (Hannover)
Astrophysical Journal | Year: 2013

Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10^-7 Hz to 10^-9 Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes. © 2013. The American Astronomical Society. All rights reserved.
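To make the computational burden concrete, the sketch below (not the approximate likelihood developed in the paper) evaluates the full Gaussian pulsar-timing-array likelihood for an isotropic background correlated between pulsars via the Hellings-Downs curve; the covariance of the stacked residual vector grows with both the number of pulsars and the number of observations, which is the cost the paper's approximation reduces. Pulsar positions, noise levels and covariances are illustrative placeholders.

    # Sketch of the *full* Gaussian pulsar-timing-array likelihood for an isotropic stochastic
    # background, correlated between pulsars via the Hellings-Downs curve. Its cost is set by the
    # size of the stacked covariance (n_pulsars * n_toa on a side). All numbers are placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pulsars, n_toa = 10, 100

    p = rng.normal(size=(n_pulsars, 3))                 # random pulsar sky positions (unit vectors)
    p /= np.linalg.norm(p, axis=1, keepdims=True)

    def hellings_downs(cos_theta):
        """Hellings-Downs correlation for two distinct pulsars separated by angle theta."""
        x = (1.0 - cos_theta) / 2.0
        return 1.5 * x * np.log(x) - x / 4.0 + 0.5

    orf = np.eye(n_pulsars)                             # autocorrelation 1 (includes the pulsar term)
    for a in range(n_pulsars):
        for b in range(a + 1, n_pulsars):
            orf[a, b] = orf[b, a] = hellings_downs(p[a] @ p[b])

    t = np.linspace(0.0, 10.0, n_toa)                   # observation epochs (years)
    tau = np.abs(t[:, None] - t[None, :])
    C_red = 1e-14 / (1.0 + tau)                         # placeholder common red-signal covariance (s^2)
    N_white = 1e-13 * np.eye(n_toa)                     # placeholder white-noise covariance (s^2)

    # Covariance of the stacked residual vector of all pulsars.
    C = np.kron(orf, C_red) + np.kron(np.eye(n_pulsars), N_white)
    r = rng.multivariate_normal(np.zeros(n_pulsars * n_toa), C)

    sign, logdet = np.linalg.slogdet(C)
    loglike = -0.5 * (r @ np.linalg.solve(C, r) + logdet + r.size * np.log(2.0 * np.pi))
    print(loglike)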


Dal Canton T.,Max Planck Institute for Gravitational Physics (Hannover) | Lundgren A.P.,Max Planck Institute for Gravitational Physics (Hannover) | Nielsen A.B.,Max Planck Institute for Gravitational Physics (Hannover)
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

The inclusion of aligned-spin effects in gravitational-wave search pipelines for neutron-star-black-hole binary coalescence has been shown to increase the astrophysical reach with respect to search methods where spins are neglected completely, under astrophysically reasonable assumptions about black-hole spins. However, theoretical considerations and population synthesis models suggest that many of these binaries may have a significant misalignment between the black-hole spin and the orbital angular momentum, which could lead to precession of the orbital plane during the inspiral and a consequent loss in detection efficiency if precession is ignored. This work explores the effect of spin misalignment on a search pipeline that completely neglects spin effects and on a recently developed pipeline that only includes aligned-spin effects. Using synthetic but realistic data, which could reasonably represent the first scientific runs of advanced-LIGO detectors, the relative sensitivities of both pipelines are shown for different assumptions about black-hole spin magnitude and alignment with the orbital angular momentum. Despite the inclusion of aligned-spin effects, the loss in signal-to-noise ratio due to precession can be as large as 40%, but this has a limited impact on the overall detection rate: even if precession is a predominant feature of neutron-star-black-hole binaries, an aligned-spin search pipeline can still detect at least half of the signals compared to an idealized generic precessing search pipeline. © 2015 American Physical Society.
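The signal-to-noise loss quoted above is usually quantified through the noise-weighted match (fitting factor) between a signal and the best-fitting template. The sketch below computes such a match for two toy frequency-domain waveforms, maximized over relative time and phase; the waveforms and flat noise spectrum are placeholders, not the neutron-star-black-hole waveform models or detector noise curves used in the paper.

    # Toy computation of the noise-weighted match between a "signal" and a slightly mismatched
    # "template", maximized over relative time and phase. A match of 1 means no loss of
    # signal-to-noise ratio; the waveforms and flat noise spectrum below are placeholders.
    import numpy as np

    def match(h1, h2, psd, df):
        """Overlap of frequency-domain waveforms h1, h2, maximized over time and phase shifts."""
        norm1 = 4.0 * df * np.sum(np.abs(h1) ** 2 / psd)
        norm2 = 4.0 * df * np.sum(np.abs(h2) ** 2 / psd)
        # One inverse FFT evaluates the overlap at every relative time shift at once;
        # taking the absolute value maximizes over the relative phase.
        corr = 4.0 * df * len(h1) * np.fft.ifft(h1 * np.conj(h2) / psd)
        return np.max(np.abs(corr)) / np.sqrt(norm1 * norm2)

    f = np.linspace(20.0, 512.0, 4096)                  # frequency grid (Hz)
    df = f[1] - f[0]
    psd = np.ones_like(f)                               # flat noise spectrum (placeholder)
    h_signal = f ** (-7.0 / 6.0) * np.exp(1j * 1.00e4 * f ** (-5.0 / 3.0))
    h_template = f ** (-7.0 / 6.0) * np.exp(1j * 1.02e4 * f ** (-5.0 / 3.0))

    print(f"match = {match(h_signal, h_template, psd, df):.3f}")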


Wette K.,Max Planck Institute for Gravitational Physics (Hannover)
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

The sensitivity of all-sky searches for gravitational-wave pulsars is primarily limited by the finite availability of computing resources. Semicoherent searches are a widely used method of maximizing sensitivity to gravitational-wave pulsars at fixed computing cost: the data from a gravitational-wave detector are partitioned into a number of segments, each segment is coherently analyzed, and the analysis results from each segment are summed together. The generation of template banks for the coherent analysis of each segment, and for the summation, requires knowledge of the metrics associated with the coherent and semicoherent parameter spaces respectively. We present a useful approximation to the semicoherent parameter-space metric, analogous to that presented in Wette and Prix [Phys. Rev. D 88, 123005 (2013)] for the coherent metric. The new semicoherent metric is compared to previous work in Pletsch [Phys. Rev. D 82, 042002 (2010)], and Brady and Creighton [Phys. Rev. D 61, 082001 (2000)]. We find that semicoherent all-sky searches require orders of magnitude more templates than previously predicted. © 2015 American Physical Society.
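A minimal sketch of the semicoherent strategy summarized above: the data are split into segments, each segment is analysed coherently (here simply via Fourier power at each template frequency), and the per-segment results are summed. The signal parameters and noise level are illustrative, and this is not the F-statistic pipeline or the metric construction presented in the paper.

    # Minimal illustration of a semicoherent search: partition the data into segments, analyse
    # each segment coherently (Fourier power per template frequency), and sum over segments.
    # Signal parameters and noise level are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    fs, T_seg, n_seg = 256.0, 64.0, 16                  # sample rate (Hz), segment length (s), segments
    f_signal = 31.41                                    # illustrative signal frequency (Hz)

    t = np.arange(0.0, n_seg * T_seg, 1.0 / fs)
    data = 0.05 * np.sin(2.0 * np.pi * f_signal * t) + rng.normal(size=t.size)

    n_per_seg = int(T_seg * fs)
    freqs = np.fft.rfftfreq(n_per_seg, 1.0 / fs)        # one template per frequency bin
    semicoherent_stat = np.zeros_like(freqs)

    for k in range(n_seg):
        segment = data[k * n_per_seg:(k + 1) * n_per_seg]
        semicoherent_stat += np.abs(np.fft.rfft(segment)) ** 2   # coherent power, summed over segments

    print(f"loudest template: {freqs[np.argmax(semicoherent_stat)]:.2f} Hz")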
