Adam Hadhazy, writer and editor for The Kavli Foundation, contributed this article to Space.com's Expert Voices: Op-Ed & Insights.

Astronomers are increasingly enlisting volunteer "citizen scientists" to help them examine a seemingly endless stream of images and measurements of the universe, and their combined efforts are having a powerful impact on the study of the cosmos.

Just last November, a citizen science project called Space Warps announced the discovery of 29 new gravitational lenses, regions in the universe where massive objects bend the paths of photons (from galaxies and other light sources) as they travel toward Earth. As cosmic phenomena go, gravitational lenses are highly prized by scientists because they offer tantalizing glimpses of objects too distant and dim to be seen through existing telescopes, as well as information about the objects acting as the lenses.

The Space Warps haul of lenses is all the more impressive because of how it was obtained. Over an eight-month period, about 37,000 volunteers individually combed through more than 430,000 digital images in a huge online photo library of deep space. Automated computer programs have identified most of the roughly 500 gravitational lenses on astronomers' books. However, computers failed to flag the 29 lenses the Space Warps volunteers spotted, a testament to the unique skills we humans possess.

The Kavli Foundation spoke with three researchers, all co-authors of the two papers published in the Monthly Notices of the Royal Astronomical Society ("SPACE WARPS - I. Crowdsourcing the discovery of gravitational lenses" and "SPACE WARPS - II. New gravitational lens candidates from the CFHTLS discovered through citizen science") describing the Space Warps findings. In our roundtable, the researchers discussed the findings and the critical role citizen science is playing in furthering astronomical discovery. The participants were Anupreeta More, Aprajita Verma and Chris Lintott.

The following is an edited transcript of the roundtable discussion.
The participants have been given the opportunity to amend or edit their remarks.

The Kavli Foundation: Anupreeta and Aprajita, where did you get the idea, along with your co-principal investigator Phil Marshall of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University, to put volunteers to work on identifying gravitational lenses, starting back in 2013?

Anupreeta More: A few years ago, Chris Lintott gave a talk on citizen science at the Kavli Institute for Cosmological Physics in Chicago, where I was working at the time. It got me thinking about a lens search by citizen scientists.

Aprajita Verma: For Phil Marshall and me, Space Warps grew out of Galaxy Zoo. Soon after Galaxy Zoo launched, I started looking at some of the galaxies posted on the Galaxy Zoo user forum that had potential lensed features surrounding them. This was a great by-product of the core Galaxy Zoo project. However, we realized that to find these incredibly rare sources, which are often confused with other objects, we really needed a tailored interface to find lenses efficiently. This grew into Space Warps.

TKF: Chris, Galaxy Zoo itself was inspired by Stardust@home, the first astronomy-based citizen science project in which people played an active role. Until then, citizen scientists were often computer owners who offered up free processing power on their devices to aid machine-driven data analysis. Were you concerned when you started Galaxy Zoo in 2007 that it would be hard to attract volunteers?

Chris Lintott: Since Stardust@home involved people looking at images of comet dust grains brought back by NASA's Stardust space probe, we thought, "Well, if people are willing to look at dust grains, then surely they'd be happy to look at our galaxies!" But that turned out to be almost beside the point. As we've run many of these citizen science projects over the years, we've discovered it's not the quality of the images that matters.
After all, our galaxies aren't typically beautiful. They are not the Hubble Space Telescope shots you'd expect to find on the front page of the New York Times. Our galaxies are often fuzzy, little, enigmatic blobs. The Space Warps images are pretty, but again, they're not the kind of thing you would sell as a poster in the gift shop at the Kennedy Space Center. It's actually the ideas that get people excited. I think Space Warps and Galaxy Zoo have been successful because they have done a great job of explaining to people why we need their help. We're saying to them: "Look, if you do this simple task, it allows us to do science." This idea is best shown by Planet Hunters, a citizen science project that searches for exoplanets in data from NASA's Kepler spacecraft. Nobody looks at graphs for fun. But because the idea is the discovery of exoplanets, people will put up with looking at data.

TKF: What sort of unique science is made possible by Space Warps?

Verma: Gravitational lenses allow us to look at objects, such as very distant galaxies, that are fainter, and to see them in much more detail, than the telescopes we have now would otherwise allow. It's enabling the kind of science we'll be doing routinely with extremely large telescopes in the future.

More: That's right. Something unique about gravitational lensing is that it acts like a natural telescope, allowing us to study some really faint, distant galaxies that we wouldn't get to study otherwise. We're seeing these distant galaxies in the early stages of their life cycle, which helps us understand how galaxies evolve over time. Also, in a gravitational lens system, it's possible for us to study the properties of the foreground galaxies or galaxy groups that are gravitationally lensing the background sources. For example, we can measure the mass of these foreground galaxies and also study how that mass is distributed in them.
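The scale of the lensing More describes can be made concrete with the textbook point-mass Einstein-radius formula, theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)). The short Python sketch below is illustrative only: the mass and the angular-diameter distances are invented round numbers, not measurements of any Space Warps lens.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
GPC = 3.086e25       # one gigaparsec in metres
RAD_TO_ARCSEC = 206265.0

def einstein_radius_arcsec(mass_kg, d_lens, d_source, d_ls):
    """Einstein radius of a point-mass lens, in arcseconds.

    theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)), where the D's
    are angular-diameter distances in metres.
    """
    theta_rad = math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens * d_source))
    return theta_rad * RAD_TO_ARCSEC

# Toy example: a 10^12 solar-mass galaxy halfway to a source at 2 Gpc.
theta = einstein_radius_arcsec(1e12 * M_SUN, 1 * GPC, 2 * GPC, 1 * GPC)
print(f"Einstein radius = {theta:.1f} arcsec")  # ~2", a typical galaxy-scale lens
```

Because theta_E scales as the square root of the lens mass, measuring the arc radius directly constrains the mass of the foreground galaxy, which is the measurement More refers to.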
TKF: Space Warps and other citizen science projects flourish because computer programs sometimes struggle to identify features in data. Why do computers have trouble spotting the characteristic arc or blobby shapes of gravitational lenses that humans can?

More: The problem is that these arc-like images of distant galaxies can have very different shapes and profiles. The process of lensing magnifies these galaxies' images and can distort them. Also, these distant galaxies emit light at different wavelengths and can appear to have different colors. Furthermore, there are structures within these galaxies that can change the shape of the arcs.

Verma: Also, lots of spiral galaxies have bluish spiral arms that can look like lenses. We call these objects "lens impostors," and we find many more of these false positives than rare, true gravitational lenses.

More: All these differences make it difficult to automate the process of finding lenses. But human beings are very good at pattern recognition, and the dynamic range that our eyes and brains offer is much greater than that of a computer algorithm.

Lintott: Another thing to bear in mind, in astronomy and particularly in Space Warps, is that we're often looking for rare objects. A computer's performance depends very strongly on how many examples you have to "train" it with. When you're dealing with rare things, that's often very difficult: we can't assemble large collections of hundreds of thousands of examples of gravitational lenses, because we don't have them yet. Also, people, unlike computers, check beyond what we tell them to look for when they review images. One of the great Space Warps examples is the discovery of a "red ring" gravitational lens. All the example lenses on the Space Warps site are blue in color, but because we have human classifiers, they had no trouble noticing this red thing that looks a little like the blue things they had been taught to keep an eye out for.
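Behind the scenes, projects like Space Warps do not take any single volunteer's answer at face value: classifications from many people are combined, with each volunteer's reliability estimated from images whose status is already known. The Python toy below is loosely inspired by the probabilistic aggregation described in the Space Warps papers, but the prior and the volunteer "skill" numbers are invented for illustration; the real pipeline is considerably more elaborate.

```python
# Toy Bayesian aggregation of volunteer classifications.
# PRIOR and the skill tuples below are invented, illustrative numbers.

PRIOR = 2e-4  # assumed prior probability that a random image contains a lens

def update(p_lens, said_lens, skill):
    """One Bayesian update of P(lens | classifications so far).

    skill = (P(says 'lens' | image is a lens),
             P(says 'lens' | image is not a lens)),
    estimated from each volunteer's answers on known training images.
    """
    p_hit, p_false = skill
    if said_lens:
        like_lens, like_not = p_hit, p_false
    else:
        like_lens, like_not = 1 - p_hit, 1 - p_false
    num = like_lens * p_lens
    return num / (num + like_not * (1 - p_lens))

# Three hypothetical volunteers all flag the same image as a lens.
volunteers = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.05)]
p = PRIOR
for skill in volunteers:
    p = update(p, said_lens=True, skill=skill)
print(f"posterior P(lens) = {p:.3f}")
```

Even after three confident "yes" votes, the posterior stays well below certainty because true lenses are so rare, which is why candidates are passed to expert review and follow-up observation rather than declared outright.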
Humans have an ability to make intuitive leaps like that, and that's very important.

Verma: I echo the point that it's very difficult to program diversity and adaptability into any computer algorithm, whereas we kind of get them for free from the citizen scientists! [Laughter]

TKF: Aprajita and Anupreeta, what's the importance of the red ring object Chris just mentioned, which the Space Warps community discovered in 2014 and has nicknamed 9io9?

Verma: This object was a really exciting find, and it's a classic example of something we hadn't seen before that citizen scientists quickly found. We think that inside the background galaxy there is both an active black hole, which is producing radio-wave emission, and regions of star formation. They're both stretched by the lensing into these spectacular arcs. It's just a really nice example of what lensing can do. We're still making further observations to try to really understand what this object is like.

More: In this particular case with 9io9, there is the usual main lensing galaxy, but there is also another small satellite galaxy whose mass and gravity contribute to the lensing. The satellite galaxy produces visible effects on the lensed images, and we can use this to study its mass distribution. There is no other method besides gravitational lensing that can provide as accurate a mass estimate for galaxies at such great distances.

TKF: Besides 9io9, citizen astrophysicists have turned up other bizarre, previously unknown phenomena. One example is Hanny's Voorwerp, a galaxy-size gas cloud discovered in 2007 in Galaxy Zoo. More recently, in 2015, Planet Hunters spotted huge decreases in the starlight coming from a star called KIC 8462852. The cause could be an eclipsing swarm of comets; another, albeit unlikely, possibility that has set off rampant speculation on the Internet is that an alien megastructure is blocking light from the star.
Why does citizen science seemingly work so well at making completely unexpected discoveries?

Lintott: I often talk about the human ability to be distracted as a good thing. If we're doing a routine task and something unusual comes along, we stop to pay attention to it. That's rather hard to build into automated computer systems. They can look for anomalies, but in astronomy most anomalies are boring, such as satellites crossing in front of the telescope or malfunctions in the telescope's camera. Humans, however, are really good at spotting interesting anomalies like Hanny's Voorwerp, which looks like either an amorphous green blob or an evil Kermit the Frog, depending on how you squint at it. [Laughter] The point is, it's something you want to pay attention to. The other great thing about citizen science is that the volunteers who find these unusual things start to investigate them and become their advocates. Citizen scientists will jump up and down and tell us professional scientists that we should pay attention to something. The great Zooniverse discoveries have always come from that combination: somebody who gets distracted and then asks questions about what he or she has found.

TKF: Aprajita and Chris, you are both working on the Large Synoptic Survey Telescope (LSST). It will conduct the largest-ever scan of the sky starting in 2022 and should turn up tons of new gravitational lenses. Do you envision a Space Warps-style citizen science project for LSST?

Verma: Citizens will play a huge role in the LSST, which is a game-changer for lensing. We know of about 500 lenses currently; LSST will find on the order of tens to hundreds of thousands of them. We will potentially require the skill that citizen scientists have in looking for exotic and challenging objects. Also, LSST's dataset will have a time dimension. We're really going to make a movie of the universe, and this will turn up a number of surprises.
I can see citizen scientists being instrumental in a lot of the discoveries LSST will make.

Lintott: One thing that's challenging about LSST is the sheer size of the dataset. Say you were a citizen scientist who had subscribed to receive text-message alerts whenever objects change in the sky as LSST makes its movie of the universe: you would end up with a couple of billion text messages a night. Obviously, that would not work. So that means we need to filter the data. We'll dynamically decide whether to assign a task to a machine, to a citizen scientist, or indeed to a professional scientist.

TKF: Chris, that comment reminds me of something you said to TIME magazine in 2008: "In many parts of science, we're not constrained by what data we can get, we're constrained by what we can do with the data we have. Citizen science is a very powerful way of solving that problem." In this era of big data, how important do you all see citizen science being going forward, given that computers will surely get better at visual recognition tasks?

Lintott: In astronomy, if you're looking at things that are routine, like a spiral galaxy or a common type of supernova, I think the machines will take over. They will do so having been trained on the large datasets that citizen scientists will provide. But I think there will be citizen involvement for a long while yet, and it will become more interesting as we use machines to do more of the routine work and to filter the data. The tasks for citizen scientists will involve more varied things: more of the unusual, Hanny's Voorwerp-type discoveries. Plus, a lot of unusual discoveries will need to be followed up, and I'd like to see citizen scientists get further into the process of analysis. Without them, I think we're going to end up with a pile of interesting objects that professional scientists simply don't have time to deal with.
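The dynamic triage Lintott describes, deciding per event whether a machine, the crowd, or an expert should look, can be sketched in a few lines. This is a minimal illustration, not LSST's actual pipeline; the thresholds and field names are invented.

```python
# A minimal sketch of alert triage for a large survey's event stream:
# route each alert to a machine, to citizen scientists, or to a
# professional, based on how confidently an automated classifier
# handles it. Thresholds and field names are invented for illustration.

def route(alert):
    """Return who (or what) should examine this alert."""
    if alert["known_artifact"]:
        return "discard"          # satellite trails, camera glitches, ...
    score = alert["classifier_confidence"]  # 0..1, from an automated model
    if score > 0.95:
        return "machine"          # routine: classify and archive automatically
    if score > 0.5:
        return "citizens"         # ambiguous: crowd-source the decision
    return "professionals"        # genuinely strange: expert follow-up

alerts = [
    {"classifier_confidence": 0.99, "known_artifact": False},
    {"classifier_confidence": 0.7,  "known_artifact": False},
    {"classifier_confidence": 0.2,  "known_artifact": False},
    {"classifier_confidence": 0.9,  "known_artifact": True},
]
print([route(a) for a in alerts])
# -> ['machine', 'citizens', 'professionals', 'discard']
```

The design point is the middle tier: humans see only the cases where the machine is genuinely unsure, which keeps billions of nightly events down to a tractable stream.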
Verma: We have already seen a huge commitment from citizen scientists, particularly those who've spent a long time on Galaxy Zoo and Space Warps. For example, on Space Warps we have a group of people who are interested in doing gravitational lens modeling, which has long been the domain of the professional astronomer. So we know there's an appetite to do further analysis with the objects they've found. I think that in the future the citizen science community will work hand-in-hand with professional astronomers.

TKF: Are there new citizen astrophysicist opportunities on the horizon related to your projects?

Lintott: Galaxy Zoo has a new lease on life, actually. We just added new galaxies from a telescope in Chile. These galaxies are relatively close, and their images are beautiful. It's our first proper look at the southern sky, so we have an all-new part of the universe to explore. It gives users a chance to be the first to see these galaxies, if they get over to Galaxy Zoo quickly!

Verma: For Space Warps, we are expecting new data and new projects to be online next year.

More: Here in Japan, we are leading an imaging survey called the Hyper Suprime-Cam (HSC) survey, and it's going to be much larger and deeper than anything we have looked at so far. We expect more than an order-of-magnitude increase in the number of lenses. Currently, we are preparing images of the candidates from the HSC survey and hope to start a new lens search with Space Warps soon.

TKF: Is it the thrill of discovery that entices most citizen scientist volunteers? Some of the images in Galaxy Zoo have never been seen before, because they were taken by a robotic telescope and stored away. Volunteers therefore have the chance to see something no one else ever has.

More: That discovery aspect is personal. I think it's always exciting for anyone.
Lintott: When we set up Galaxy Zoo, we thought it would be a huge motivation to see something that's yours, to be the first human to lay eyes on a galaxy. Exploring space in that way is something that, until Galaxy Zoo, only happened on "Star Trek." [Laughter] In the years since, we've also come to realize that citizen science is a collective endeavor. The people who've been through 10,000 images without finding anything have contributed to the discovery of something like the red ring galaxy just as much as the person who happens to stumble across it. You need to get rid of the empty data as well. I've been surprised by how strongly our volunteers believe that. It's a far cry from the traditional public view of scientific discovery, in which the lone genius makes the discovery and gets all the credit.

Verma: We set out with Space Warps for citizen scientists to be part of our collaboration, and they've really enabled us to produce important findings. They've inspired us with their dedication and productivity. We've learned from our analysis that basically anyone who joins Space Warps has an impact on the results. We are also especially grateful to a very dedicated, diligent group that has made most of the lens classifications. We look forward to welcoming everyone back in our future projects!

Follow all of the Expert Voices issues and debates, and become part of the discussion, on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Copyright 2016 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.