IOPscience is a platform for IOP-hosted journal content. It incorporates some of the most innovative technologies to enhance your user experience.
Scientists have identified the chemical 'fingerprints' given off by specific bacteria when present in the lungs, potentially allowing for a quick and simple breath test to diagnose infections such as tuberculosis.
A new method developed by a group of researchers from the USA and Canada could potentially cool trapped antihydrogen atoms to temperatures 25 times colder than previously achieved.
Just published — proceedings from the 26th International Conference on Low Temperature Physics (LT26).
The top ten breakthroughs in physics in 2012, as judged by Physics World magazine, have been announced.
In December's Publisher's Pick, researchers achieve strong tunable colour and bright visible light by adjusting the Yb-Ln composition of silica-coated co-doped upconverting nanocrystals.
A new Materials Science collection is available on IOPscience, bringing top articles from IOP Publishing's portfolio of journals, magazines and websites.
The November 2012 issue of Physics World is devoted to the inspiring field of "animal physics".
For a limited time only, you can download a PDF of the issue free of charge at physicsworld.com
In November's Publisher's Pick, US researchers have used a store-bought HD-DVD worth $10 to fabricate plasmonic structures that can provide fluorescence enhancements by factors of up to 118 for low-level chemical and biological sensing.
M Duocastella and C B Arnold 2013 J. Phys. D: Appl. Phys. 46 075102
Liquid lenses are appealing for applications requiring adaptive control of the focal length, but current methods depend on factors such as liquid inertia that limit their response time to tens of milliseconds. A tunable acoustic gradient index (TAG) lens uses sound energy to radially excite a fluid-filled cylindrical cavity and produce a continuous change in refractive power that, at steady state, enables rapid selection of the focal length on time scales shorter than 1 µs. However, the time to reach steady state is a crucial parameter that is not fully understood. Here we characterize the dynamics of the TAG lens at the initial moments of operation as a function of frequency. Based on this understanding, we develop a model of the lens transients which incorporates driving frequency, fluid speed of sound and viscosity, and we show that it is in good agreement with the experimental results, providing a method to predict the lens behaviour at any given time.
A Pribush et al 2013 Physiol. Meas. 34 139
Hypoosmotic swelling of erythrocytes and the formation of membrane holes were studied by measuring the dc conductance (G). In accordance with the theoretical predictions, these processes are manifested by a decrease in G followed by its increase. Thus, unlike the conventional osmotic fragility test, the proposed methodological approach allows investigations of both the kinetics of swelling and the erythrocyte fragility. It is shown that the initial rate of swelling and the equilibrium size of the cells are affected by the tonicity of a hypotonic solution and the membrane rheological properties. Because the rupture of biological membranes is a stochastic process, a time-dependent increase in the conductance follows an integral distribution function of the membrane lifetime. The main conclusion stemming from the reported results is that information about the rheological properties of red blood cell (RBC) membranes and the resistance of RBCs to a given osmotic shock may be extracted from conductance signals.
E. Aliu et al. 2013 ApJ 764 38
We report the discovery of TeV gamma-ray emission coincident with the shell-type radio supernova remnant (SNR) CTA 1 using the VERITAS gamma-ray observatory. The source, VER J0006+729, was detected as a 6.5 standard deviation excess over background and shows an extended morphology, approximated by a two-dimensional Gaussian of semimajor (semiminor) axis 0.°30 (0.°24) and a centroid 5' from the Fermi gamma-ray pulsar PSR J0007+7303 and its X-ray pulsar wind nebula (PWN). The photon spectrum is well described by a power law dN/dE = N_0 (E/3 TeV)^−Γ, with a differential spectral index of Γ = 2.2 ± 0.2 (stat) ± 0.3 (sys) and normalization N_0 = (9.1 ± 1.3 (stat) ± 1.7 (sys)) × 10^−14 cm^−2 s^−1 TeV^−1. The integral flux, F_γ = 4.0 × 10^−12 erg cm^−2 s^−1 above 1 TeV, corresponds to 0.2% of the pulsar spin-down power at 1.4 kpc. The energetics, colocation with the SNR, and the relatively small extent of the TeV emission strongly argue for the PWN origin of the TeV photons. We consider the origin of the TeV emission in CTA 1.
David P. Palamara et al. 2013 ApJ 764 31
We measure the clustering of extremely red objects (EROs) in 8 deg^2 of the NOAO Deep Wide Field Survey Boötes field in order to establish robust links between ERO (z ~ 1.2) and local galaxy (z < 0.1) populations. Three different color selection criteria from the literature are analyzed to assess the consequences of using different criteria for selecting EROs. Specifically, our samples are (R − Ks) > 5.0 (28,724 galaxies), (I − Ks) > 4.0 (22,451 galaxies), and (I − [3.6]) > 5.0 (64,370 galaxies). Magnitude-limited samples show the correlation length (r_0) to increase for more luminous EROs, implying a correlation with stellar mass. We can separate star-forming and passive ERO populations using the (Ks − [24]) and ([3.6] − [24]) colors to Ks = 18.4 and [3.6] = 17.5, respectively. Star-forming and passive EROs in magnitude-limited samples have different clustering properties and host dark halo masses and cannot be simply understood as a single population. Based on the clustering, we find that bright passive EROs are the likely progenitors of ≳4 L* elliptical galaxies. Bright EROs with ongoing star formation were found to occupy denser environments than star-forming galaxies in the local universe, making these the likely progenitors of L* local ellipticals. This suggests that the progenitors of massive ≳4 L* local ellipticals had stopped forming stars by z ~ 1.2, but that the progenitors of less massive ellipticals (down to L*) can still show significant star formation at this epoch.
R Plackett et al 2013 JINST 8 C01038
Stefan Rahmstorf et al 2012 Environ. Res. Lett. 7 044035
We analyse global temperature and sea-level data for the past few decades and compare them to projections published in the third and fourth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). The results show that global temperature continues to increase in good agreement with the best estimates of the IPCC, especially if we account for the effects of short-term variability due to the El Niño/Southern Oscillation, volcanic activity and solar variability. The rate of sea-level rise of the past few decades, on the other hand, is greater than projected by the IPCC models. This suggests that IPCC sea-level projections for the future may also be biased low.
Francis O'Sullivan and Sergey Paltsev 2012 Environ. Res. Lett. 7 044030
Estimates of greenhouse gas (GHG) emissions from shale gas production and use are controversial. Here we assess the level of GHG emissions from shale gas well hydraulic fracturing operations in the United States during 2010. Data from each of the approximately 4000 horizontal shale gas wells brought online that year are used to show that about 900 Gg CH 4 of potential fugitive emissions were generated by these operations, or 228 Mg CH 4 per well—a figure inappropriately used in analyses of the GHG impact of shale gas. In fact, along with simply venting gas produced during the completion of shale gas wells, two additional techniques are widely used to handle these potential emissions: gas flaring and reduced emission ‘green’ completions. The use of flaring and reduced emission completions reduces the levels of actual fugitive emissions from shale well completion operations to about 216 Gg CH 4, or 50 Mg CH 4 per well, a release substantially lower than several widely quoted estimates. Although fugitive emissions from the overall natural gas sector are a proper concern, it is incorrect to suggest that shale gas-related hydraulic fracturing has substantially altered the overall GHG intensity of natural gas production.
Holger Babinsky 2003 Phys. Educ. 38 497
The popular explanation of lift is common, quick, sounds logical and gives the correct answer, yet also introduces misconceptions, uses a nonsensical physical argument and misleadingly invokes Bernoulli's equation. A simple analysis of pressure gradients and the curvature of streamlines is presented here to give a more correct explanation of lift.
Steven J Davis et al 2013 Environ. Res. Lett. 8 011001
Abstract
Stabilizing CO 2 emissions at current levels for fifty years is not consistent with either an atmospheric CO 2 concentration below 500 ppm or global temperature increases below 2 °C. Accepting these targets, solving the climate problem requires that emissions peak and decline in the next few decades, and ultimately fall to near zero. Phasing out emissions over 50 years could be achieved by deploying on the order of 19 'wedges', each of which ramps up linearly over a period of 50 years to ultimately avoid 1 GtC y −1 of CO 2 emissions. But this level of mitigation will require affordable carbon-free energy systems to be deployed at the scale of tens of terawatts. Any hope for such fundamental and disruptive transformation of the global energy system depends upon coordinated efforts to innovate, plan, and deploy new transportation and energy systems that can provide affordable energy at this scale without emitting CO 2 to the atmosphere.
1. Introduction
In 2004, Pacala and Socolow published a study in Science arguing that '[h]umanity can solve the carbon and climate problem in the first half of this century simply by scaling up what we already know how to do' [1]. Specifically, they presented 15 options for 'stabilization wedges' that would grow linearly from zero to 1 Gt of carbon emissions avoided per year (GtC y −1; 1 Gt = 10^12 kg) over 50 years. The solution to the carbon and climate problem, they asserted, was 'to deploy the technologies and/or lifestyle changes necessary to fill all seven wedges of the stabilization triangle'. They claimed this would offset the growth of emissions and put us on a trajectory to stabilize atmospheric CO 2 concentration at 500 ppm if emissions decreased sharply in the second half of the 21st century.
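As a rough check on the wedge arithmetic, the short sketch below computes the carbon avoided by a single wedge and by the original seven; the 50-year linear ramp to 1 GtC y −1 is taken from the definition above, and the code itself is only an illustration, not part of the original analysis.

    # Minimal sketch of the stabilization-wedge arithmetic described above.
    # A wedge ramps linearly from 0 to 1 GtC/y of avoided emissions over 50 years,
    # so the avoided emissions trace out a triangle in the emissions-time plane.
    RAMP_YEARS = 50      # duration of the linear ramp (years)
    FINAL_RATE = 1.0     # avoided emissions at the end of the ramp (GtC/y)

    carbon_per_wedge = 0.5 * RAMP_YEARS * FINAL_RATE   # triangle area = 25 GtC

    print(f"one wedge avoids {carbon_per_wedge:.0f} GtC over {RAMP_YEARS} years")
    print(f"seven wedges avoid {7 * carbon_per_wedge:.0f} GtC in total")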
The wedge concept has proven popular as an analytical tool for considering the potential of different technologies to reduce CO 2 emissions. In the years since the paper was published, it has been cited more than 400 times, and stabilization wedges have become a ubiquitous unit in assessing different strategies to mitigate climate change (e.g. [2–5]). But the real and lasting potency of the wedge concept was in dividing the daunting problem of climate change into substantial but tractable portions of mitigation: Pacala and Socolow gave us a way to believe that the energy-carbon-climate problem was manageable.
An unfortunate consequence of their paper, however, was to make the solution seem easy (see, e.g. [6, 7]). And in the meantime, the problem has grown. Since 2004, annual emissions have increased and their growth rate has accelerated, so that more than seven wedges would now be necessary to stabilize emissions. More importantly, stabilizing emissions at current levels for 50 years does not appear compatible with either Pacala and Socolow's target of an atmospheric CO 2 concentration below 500 ppm or the international community's goal of limiting the increase in global mean temperature to 2 °C above the pre-industrial era.
Here, we aim to revitalize the wedge concept by redefining what it means to 'solve the carbon and climate problem for the next 50 years'. This redefinition makes clear both the scale and urgency of innovating and deploying carbon-emissions-free energy technologies.
2. Solving the climate problem
Stabilizing global climate requires decreasing CO 2 emissions to near zero [8–11]. If emissions were to stop completely, global temperatures would quickly stabilize and decrease gradually over time [8, 12, 13]. But socioeconomic demands and dependence on fossil-fuel energy effectively commit us to many billions of tons of CO 2 emissions [14], and at the timescale of centuries, each CO 2 emission to the atmosphere contributes another increment to global warming: peak warming is proportional to cumulative CO 2 emissions [15, 16]. Cumulative emissions, in turn, integrate all past emissions as well as those occurring during three distinct phases of mitigation: (1) slowing growth of emissions, (2) stopping growth of emissions, and (3) reducing emissions. Although they noted that stabilizing the climate would require emissions to 'eventually drop to zero', Pacala and Socolow nonetheless defined 'solv[ing] the carbon and climate problem over the next half-century' as merely stopping the growth of emissions (phases 1 and 2). Further reductions (phase 3), they said, could wait 50 years if the level of emissions were held constant in the meantime.
But growth of emissions has not stopped (phase 2) or even slowed (phase 1); it has accelerated [17, 18]. In 2010, annual CO 2 emissions crested 9 GtC. At this level, holding emissions constant for 50 years (phase 2) is unlikely to be sufficient to avoid the benchmark targets of 500 ppm or 2 °C.
To support this assertion, we performed ensemble simulations using the UK Met Office coupled climate/carbon cycle model, HadCM3L (see supplementary material available at stacks.iop.org/ERL/8/011001/mmedia), to project changes in atmospheric CO 2 and global mean temperature in response to emissions scenarios in which seven wedges (W7) and nine wedges (W9) were immediately subtracted from the A2 marker scenario of the Intergovernmental Panel on Climate Change (IPCC)'s Special Report on Emissions Scenarios (SRES) [19] beginning in 2010 (figure 1). In the first half of this century, the A2 scenario is near the center of the plume of variation of the SRES emissions scenarios [20]. Indeed, actual annual emissions have exceeded A2 projections for more than a decade [21, 22]. During this period, strong growth of global emissions has been driven by the rapid, carbon-intensive growth of emerging economies [23, 24], which has continued despite the global financial crisis of 2008–9 [18]. For these reasons we believe that, among the SRES scenarios, A2 represents a reasonable 'business-as-usual' scenario. However, if emissions were to suddenly decline and follow a lower emissions business-as-usual trajectory such as B2, fewer wedges would be necessary to stabilize emissions, and deployment of seven wedges would reduce annual emissions to 4.5 GtC in 2060. Thus, mitigation effort (wedges) required to stabilize emissions is dependent on the choice of baseline scenario, but a half-century of emissions at the current level will have the same effect on atmospheric CO 2 and the climate regardless of what scenario is chosen.
Figure 1. Modeled effects of deploying wedges. (A) Future CO 2 emissions under SRES A2 marker scenario and the A2 scenario reduced by deployment of 7 wedges (W7). The response of (B) atmospheric CO 2 and (C) global mean surface temperature under W7. (D) Future CO 2 emissions under SRES A2 marker scenario and stabilized at 2010 levels (reduced by approximately 9 wedges relative to the A2 scenario) (W9). The response of (E) atmospheric CO 2 and (F) global mean surface temperature under W9. Error bars in ((C) and (F)) are 2-sigma. Dashed lines in (A), (B), (D) and (E) show emissions and concentrations of representative concentration pathways RCP4.5, RCP6, and RCP8.5 [38]. Mean temperatures reflect warming relative to the pre-industrial era.
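The construction of the W7 and W9 emissions trajectories shown in figure 1 can be sketched in a few lines: a baseline pathway is reduced by wedges that each ramp linearly from 0 to 1 GtC y −1 between 2010 and 2060. The baseline below is an illustrative placeholder chosen so that roughly nine wedges hold emissions at their 2010 level, as in the text; it is not the actual SRES A2 data.

    import numpy as np

    years = np.arange(2010, 2061)

    # Placeholder baseline (GtC/y): ~9.8 GtC/y in 2010 growing linearly, chosen so
    # that about nine wedges are needed to hold emissions at the 2010 level by 2060.
    baseline = 9.8 + 0.18 * (years - 2010)

    def wedge_scenario(baseline, years, n_wedges, start=2010, ramp=50):
        """Subtract n_wedges wedges, each ramping 0 -> 1 GtC/y over `ramp` years."""
        ramp_fraction = np.clip((years - start) / ramp, 0.0, 1.0)
        return np.maximum(baseline - n_wedges * ramp_fraction, 0.0)

    w7 = wedge_scenario(baseline, years, 7)   # seven wedges (W7)
    w9 = wedge_scenario(baseline, years, 9)   # roughly stabilizes at 2010 levels (W9)

    print(f"2060 emissions (GtC/y): baseline {baseline[-1]:.1f}, W7 {w7[-1]:.1f}, W9 {w9[-1]:.1f}")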
We also note that the climate model we used, HadCM3L, has a strong positive climate/carbon cycle feedback mainly associated with the dieback of the Amazon rainforest [25]. As a result, HadCM3L projected the highest level of atmospheric CO 2 concentrations among eleven Earth system models that were driven by the same CO 2 emissions scenario [26]. However, this strong positive climate/carbon cycle feedback operates in simulations of both the A2 and wedge (W7 and W9) scenarios. Therefore, the relative effect of wedges, as opposed to the absolute values of projected atmospheric CO 2 and temperature, is expected to be less dependent on the strength of climate/carbon cycle feedback.
Atmospheric CO 2 concentration and mean surface temperatures continue to rise under the modeled W7 scenario (figures 1(A)–(C)). Deploying 7 wedges does not alter projected mean surface temperatures by a statistically significant increment until 2046 (α = 0.05 level), at which time the predicted difference between mean temperatures in the A2 and W7 scenarios is 0.14 ± 0.08 °C. In 2060, the difference in projected mean temperatures under the two scenarios is 0.47 ± 0.07 °C. Further, under the W7 scenario, our results indicate atmospheric CO 2 levels will exceed 500 ppm in 2042 (reaching 567 ± 1 ppm in 2060) (figure 1(B)), and 2 °C of warming in 2052 (figure 1(C)). Immediately stabilizing global emissions at 2010 levels (~10.0 GtCy −1), which would require approximately nine wedges (thus W9) under the A2 scenario, has a similarly modest effect on global mean surface temperatures and atmospheric CO 2, with warming of 1.92 ± 0.4 °C in 2060 and atmospheric CO 2 exceeding 500 ppm by 2049 (figures 1(D)–(F)). Our projections therefore indicate that holding emissions constant at current levels for the next half-century would cause substantial warming, approaching or surpassing current benchmarks [27–29] even before any reduction of emissions (phase 3) begins.
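The significance statements above amount to asking, year by year, whether the ensemble-mean temperatures of the A2 and W7 runs differ at the α = 0.05 level. A minimal sketch of such a comparison is given below; the ensemble values are invented for illustration, and a two-sample t-test is assumed since the paper does not specify the exact test used.

    import numpy as np
    from scipy import stats

    # Hypothetical ensemble warming (°C above pre-industrial) for one year; these
    # numbers are made up purely to illustrate the comparison, not model output.
    a2_members = np.array([2.41, 2.50, 2.47, 2.39, 2.55])
    w7_members = np.array([2.28, 2.35, 2.31, 2.25, 2.40])

    t_stat, p_value = stats.ttest_ind(a2_members, w7_members)

    print(f"difference = {a2_members.mean() - w7_members.mean():.2f} °C, p = {p_value:.3f}")
    print("significant at alpha = 0.05" if p_value < 0.05 else "not significant at alpha = 0.05")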
Insofar as current climate targets accurately reflect the social acceptance of climate change impacts, then, solving the carbon and climate problem means not just stabilizing but sharply reducing CO 2 emissions over the next 50 years.
We are not alone in drawing this conclusion (see, e.g. [30–32]). For example, at least some integrated assessment models have now found that the emissions reductions required to prevent atmospheric CO 2 concentration from exceeding 450 ppm are no longer either physically or economically feasible [11, 33, 34], and that preventing CO 2 concentration from exceeding 550 ppm will also be difficult if participation of key countries such as China and Russia is delayed [11]. Most model scenarios that allow CO 2 concentrations to stabilize at 450 ppm entail negative carbon emissions, for example by capturing and storing emissions from bioenergy [11].
A different body of literature has concluded that cumulative emissions of 1 trillion tons of carbon (i.e. 1000 GtC) are likely to result in warming of 2 °C [15, 35]. Whereas Pacala and Socolow's original proposal implied roughly 944 GtC of cumulative emissions (305 GtC prior to 2004, 389 GtC between 2004 and 2054, and another 250 GtC between 2054 and 2104 if emissions decrease at 2% y −1 as they suggested), stabilizing emissions at 2010 levels for 50 y and decreasing at 2% y −1 afterward increases the cumulative total to 1180 GtC of emissions (356 GtC prior to 2010, 491 GtC between 2010 and 2060, and 336 GtC between 2060 and 2110 at which time annual emissions remain at nearly 3.2 GtC y −1). Lastly, we note that even though emissions in the lowest of the new representative concentration pathways (RCP2.6) peak in 2020 at just 10.3 GtC y −1 and decline sharply to only 2.0 GtC y −1 in 2060 (figure 2), the concentration of atmospheric CO 2 nonetheless reaches 443 ppm in 2050 [36–38]. In contrast, emissions of the intermediate pathway RCP4.5 rise modestly to 11.5 GtC y −1 in 2040 before declining to 9.6 GtC y −1 in 2060, which leads to atmospheric CO 2 concentrations of 509 ppm in 2060 on the way to 540 ppm in 2100. These pathways, along with the integrated assessment models and cumulative emissions simulations all support our finding that 50 y of current emissions is not a solution to climate change.
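The cumulative-emissions bookkeeping in this paragraph can be reproduced approximately with simple arithmetic. The sketch below assumes a clean 2% y −1 exponential decline for the post-2060 phase (the text's slightly higher figure reflects details of the assumed trajectory) and uses the roughly 2 °C per 1000 GtC cumulative-emissions relationship cited above only as a coarse illustration.

    import math

    def decline_phase(start_rate, rate=0.02, years=50):
        """Cumulative emissions (GtC) while declining exponentially at `rate` per year."""
        return start_rate * (1.0 - math.exp(-rate * years)) / rate

    # Stabilize-at-2010-levels case from the text: 356 GtC emitted before 2010,
    # ~9.8 GtC/y held constant for 50 years, then a 2%/y decline for 50 more years.
    historical = 356.0
    constant_phase = 9.8 * 50          # ~490 GtC, close to the quoted 491 GtC
    decline = decline_phase(9.8)       # ~310 GtC under a pure exponential decline

    total = historical + constant_phase + decline
    print(f"cumulative emissions ~ {total:.0f} GtC")          # ~1160 GtC (text: ~1180 GtC)
    print(f"implied peak warming ~ {2.0 * total / 1000:.1f} °C at ~2 °C per 1000 GtC")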
Figure 2. Idealization of future CO 2 emissions under the business-as-usual SRES A2 marker scenario. Future emissions are divided into hidden (sometimes called 'virtual') wedges (brown) of emissions avoided by expected decreases in the carbon intensity of GDP by ~1% per year, stabilization wedges (green) of emissions avoided through mitigation efforts that hold emissions constant at 9.8 GtC y −1 beginning in 2010, phase-out wedges (purple) of emissions avoided through complete transition of technologies and practices that emit CO 2 to the atmosphere to ones that do not, and allowed emissions (blue). Wedges expand linearly from 0 to 1 GtC y −1 from 2010 to 2060. The total avoided emissions per wedge is 25 GtC, such that altogether the hidden, stabilization and phase-out wedges represent 775 GtC of cumulative emissions.
Unless current climate targets are sacrificed, solving the climate problem requires significantly reducing emissions over the next 50 years. Just how significant those reductions need to be will depend on a global trade-off between the damages imposed by climatic changes and the costs of avoiding them. But given substantial uncertainties associated with climate model projections (e.g., climate sensitivity), the arbitrary nature of targets like 500 ppm and 2 °C, and the permanence implied by the term 'solution', the ultimate solution to the climate problem is a complete phase-out of carbon emissions.
3. Counting wedges
But significantly reducing current emissions while also sustaining historical growth rates of the global economy is likely to require many more than seven wedges. Gross world product (GWP) projections embedded in the A2 scenario imply as many as 31 wedges would be required to completely phase out emissions, falling into three distinct groups: (1) 12 'hidden' wedges that represent the continued decarbonization of our energy system at historical rates (i.e. decreases in the carbon intensity of the global economy that are assumed to occur regardless of any additional efforts to mitigate emissions) [9, 39]. (2) 9 'stabilization' wedges that represent additional efforts to mitigate emissions above and beyond the technological progress already assumed by the scenario [1]. And (3), 10 'phase-out' wedges that represent the complete transition from energy infrastructure and land-use practices that emit CO 2 (on net) to the atmosphere to infrastructure and practices which do not (figure 2) [9, 14, 40].
There is good reason to be concerned that at least some number of the hidden wedges will not come to be—that the rates of decarbonization assumed by almost all scenarios of future emissions may underestimate the extent to which rising energy demand will be met by increased use of coal and unconventional fossil fuels [24, 41]. Moreover, there is no way to know whether a wedge created by deploying carbon-free energy technology represents additional mitigation effort (i.e. a stabilization wedge) or something that would have happened in the course of normal technological progress (i.e. a hidden wedge). Thus, in assessing the efficacy of efforts to reduce emissions, it may be more useful to tabulate wedges based only on the current carbon intensity of global energy and food production and projected demand for energy and food, without reference to any particular technology scenario. Doing so would clarify the full level of decarbonization necessary and remove the question of whether emissions reductions that do occur should count as mitigation or not. But even assuming that historical rates of decarbonization will persist and therefore that many hidden wedges will materialize, phasing-out emissions altogether will entail nearly three times the number of additional wedges that Pacala and Socolow originally proposed—a total of 19 wedges under the A2 scenario (figure 2).
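The wedge accounting of this section can be tallied explicitly; the counts are those given in the text, and the 25 GtC per wedge follows from the 50-year linear ramp defined in the introduction.

    # Tallying the wedge groups described in this section (counts from the text).
    hidden = 12          # decarbonization already assumed in the baseline scenario
    stabilization = 9    # additional effort needed to hold emissions at 2010 levels
    phase_out = 10       # further effort needed to bring emissions to near zero

    additional = stabilization + phase_out     # wedges beyond business as usual: 19
    total = hidden + additional                # all wedges if hidden ones falter: 31

    carbon_per_wedge = 25.0                    # GtC avoided per wedge (0.5 * 50 y * 1 GtC/y)
    print(f"{additional} additional wedges, {total} in total, "
          f"{total * carbon_per_wedge:.0f} GtC of avoided emissions")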
4. The urgent need for innovation
Confronting the need for as many as 31 wedges (12 hidden, 9 stabilization and 10 phase-out), the question is whether there are enough affordable mitigation options available, and—because the main source of CO 2 emissions is the burning of fossil fuels—the answer depends upon an assessment of carbon-free energy technologies. There is a longstanding disagreement in the literature between those who argue that existing technologies, improved incrementally, are all that is needed to solve the climate problem (e.g. [1]) and others who argue that more transformational change is necessary (e.g. [42]). Although the disagreement has turned on the definitions of incremental and transformative and the trade-offs of a near-term versus a longer-term focus, the root difference lies in the perceived urgency of the climate problem [6]. The emission reductions required by current targets, let alone a complete phase-out of emissions, demand fundamental, disruptive changes in the global energy system over the next 50 years. Depending on what sort of fossil-fuel infrastructure is replaced and neglecting any emissions produced to build and maintain the new infrastructure (see, e.g. [43]), a single wedge represents 0.7–1.4 terawatts (TW) of carbon-free energy (or an equivalent decrease in demand for fossil energy). Whether the changes to the energy system are called incremental or revolutionary, few would dispute that extensive innovation of technologies will be necessary to afford many terawatts of carbon-free energy and reductions in energy demand [42, 44, 45].
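A rough sense of the wedge-to-terawatt conversion can be obtained as follows. The emission factors below are assumed values for coal- and gas-fired electricity, not numbers from the paper, and the result is average power; the 0.7–1.4 TW range quoted above additionally depends on which fossil infrastructure a wedge displaces and on plant capacity factors.

    # Order-of-magnitude conversion of one wedge (1 GtC/y of avoided emissions)
    # into carbon-free electric power. Emission factors are illustrative assumptions.
    HOURS_PER_YEAR = 8766.0

    emission_factor = {      # assumed kg of carbon emitted per kWh of electricity
        "coal-fired": 0.25,
        "gas-fired": 0.12,
    }

    avoided_kgC_per_year = 1.0e12   # one wedge: 1 GtC/y

    for fuel, kgC_per_kWh in emission_factor.items():
        kwh_per_year = avoided_kgC_per_year / kgC_per_kWh       # electricity displaced
        avg_power_tw = kwh_per_year / HOURS_PER_YEAR / 1.0e9    # kWh/y -> kW -> TW (average)
        print(f"displacing {fuel} generation: ~{avg_power_tw:.2f} TW of carbon-free power")

Installed capacity would be larger than these averages once capacity factors are included, which keeps the estimate broadly in line with the 0.7–1.4 TW per wedge cited in the text.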
Currently, only a few classes of technologies might conceivably provide carbon-free power at the scale of multiple terawatts, among them fossil fuels with carbon capture and storage (CCS), nuclear, and renewables (principally solar and wind, and perhaps biomass) [42, 46, 47]. However, CCS has not yet been commercially deployed at any centralized power plant; the existing nuclear industry, based on reactor designs more than a half-century old and facing renewed public concerns of safety, is in a period of retrenchment, not expansion; and existing solar, wind, biomass, and energy storage systems are not yet mature enough to provide affordable baseload power at terawatt scale. Each of these technologies must be further developed if they are to be deployed at scale and at costs competitive with fossil energy.
Yet because investments in the energy sector tend to be capital intensive and long term, research successes are often not fully appropriable [48], and technologies compete almost entirely on the price of delivered electricity, private firms tend to underinvest in R&D, which has made energy one of the least innovative industry sectors in modern economies [44]. Supporting deployment of newer energy technologies at large scales will undoubtedly lead to further development and reduced costs [49, 50], but additional public support for early stage R&D will also be necessary to induce needed innovation [6, 44, 45, 51–53]. Moreover, it is imperative that policies and programs also address the intermediate stages of development, demonstration, and commercialization, when ideas born of public-funded research must be transferred to and diffused among private industries [44, 54, 55].
5. Conclusions
In 2004, Pacala and Socolow concluded that 'the choice today is between action and delay'. After eight years of mostly delay, the action now required is significantly greater. Current climate targets of 500 ppm and 2 °C of warming will require emissions to peak and decline in the next few decades. Solving the climate problem ultimately requires near-zero emissions. Given the current emissions trajectory, eliminating emissions over 50 years would require 19 wedges: 9 to stabilize emissions and an additional 10 to completely phase-out emissions. And if historical, background rates of decarbonization falter, 12 'hidden' wedges will also be necessary, bringing the total to a staggering 31 wedges.
Filling this many wedges while sustaining global economic growth would mean deploying tens of terawatts of carbon-free energy in the next few decades. Doing so would entail a fundamental and disruptive overhaul of the global energy system, as the global energy infrastructure is replaced with new infrastructure that provides equivalent amounts of energy but does not emit CO 2. Current technologies and systems cannot provide the amounts of carbon-free energy needed soon enough or affordably enough to achieve this transformation. An integrated and aggressive set of policies and programs is urgently needed to support energy technology innovation across all stages of research, development, demonstration, and commercialization. No matter the number required, wedges can still simplify and quantify the challenge. But the problem was never easy.
Acknowledgments
We thank six anonymous reviewers for their comments on various versions of the manuscript. We also especially thank R Socolow for several thoughtful and stimulating discussions of this work.
Jiangjiang Zhu et al 2013 J. Breath Res. 7 016003
The identification of bacteria by their volatilomes is of interest to many scientists and clinicians as it holds the promise of diagnosing infections in situ, particularly lung infections via breath analysis. While there are many studies reporting various bacterial volatile biomarkers or fingerprints using in vitro experiments, it has proven difficult to translate these data to in vivo breath analyses. Therefore, we aimed to create secondary electrospray ionization-mass spectrometry (SESI-MS) pathogen fingerprints directly from the breath of mice with lung infections. In this study we demonstrated that SESI-MS is capable of differentiating infected versus uninfected mice and P. aeruginosa-infected versus S. aureus-infected mice, as well as distinguishing between infections caused by P. aeruginosa strains PAO1 versus FRD1, with statistical significance (p < 0.05). In addition, we compared in vitro and in vivo volatiles and observed that only 25–34% of peaks are shared between the in vitro and in vivo SESI-MS fingerprints. To the best of our knowledge, these are the first breath volatiles measured for P. aeruginosa PAO1, FRD1, and S. aureus RN450, and the first comparison of in vivo and in vitro volatile profiles from the same strains using the murine infection model.
Mario Castro et al 2012 New J. Phys. 14 103039
Chemical vapor deposition (CVD) is a widely used technique to grow solid materials with accurate control of layer thickness and composition. Under mass-transport-limited conditions, the surface of thin films thus produced grows in an unstable fashion, developing a typical motif that resembles the familiar surface of a cauliflower plant. Through experiments on CVD production of amorphous hydrogenated carbon films leading to cauliflower-like fronts, we provide a quantitative assessment of a continuum description of CVD interface growth. As a result, we identify non-locality, non-conservation and randomness as the main general mechanisms controlling the formation of these ubiquitous shapes. We also show that the surfaces of actual cauliflower plants and combustion fronts obey the same scaling laws, proving the validity of the theory over seven orders of magnitude in length scales. Thus, a theoretical justification is provided, which had remained elusive so far, for the remarkable similarity between the textures of surfaces found for systems that differ widely in physical nature and typical scales.
Stine S Korreman 2012 Phys. Med. Biol. 57 R161
This review considers the management of motion in photon radiation therapy. An overview is given of magnitudes and variability of motion of various structures and organs, and how the motion affects images by producing artifacts and blurring. Imaging of motion is described, including 4DCT and 4DPET. Techniques for monitoring motion in real time by use of surrogates are reviewed. Treatment planning for various motion-management treatment delivery strategies is discussed, including choice of planning image, treatment field margins and dose calculation. Imaging techniques displaying motion in the treatment room for pre-treatment as well as real-time imaging for localization and verification are covered, and their use for various motion-management treatment delivery techniques is discussed. Use of motion management for different treatment sites—breast, lung and other sites—is elaborated, and gating, breath-hold and beam tracking strategies are described. Suggestions are given for breast and lung for practicable protocols for routine clinical use of motion management, including decision strategies. Finally, a perspective of the future of motion management in photon radiation therapy is given.
Justin McClellan et al 2012 Environ. Res. Lett. 7 034019
We perform engineering cost analyses of systems capable of delivering 1–5 million metric tonnes (Mt) of albedo modification material to altitudes of 18–30 km. The goal is to compare a range of delivery systems evaluated on a consistent cost basis. Cost estimates are developed with statistical cost estimating relationships based on historical costs of aerospace development programs and operations concepts using labor rates appropriate to the operations. We evaluate existing aircraft cost of acquisition and operations, perform in-depth new aircraft and airship design studies and cost analyses, and survey rockets, guns, and suspended gas and slurry pipes, comparing their costs to those of aircraft and airships. Annual costs for delivery systems based on new aircraft designs are estimated to be $1–3B to deliver 1 Mt to 20–30 km or $2–8B to deliver 5 Mt to the same altitude range. Costs for hybrid airships may be competitive, but their large surface area complicates operations in high altitude wind shear, and development costs are more uncertain than those for airplanes. Pipes suspended by floating platforms provide low recurring costs to pump a liquid or gas to altitudes as high as ∼ 20 km, but the research, development, testing and evaluation costs of these systems are high and carry a large uncertainty; the pipe system’s high operating pressures and tensile strength requirements bring the feasibility of this system into question. The costs for rockets and guns are significantly higher than those for other systems. We conclude that (a) the basic technological capability to deliver material to the stratosphere at million tonne per year rates exists today, (b) based on prior literature, a few million tonnes per year would be sufficient to alter radiative forcing by an amount roughly equivalent to the growth of anticipated greenhouse gas forcing over the next half century, and that (c) several different methods could possibly deliver this quantity for less than $8B per year. We do not address here the science of aerosols in the stratosphere, nor issues of risk, effectiveness or governance that will add to the costs of solar geoengineering.
Rogier Braakman and Eric Smith 2013 Phys. Biol. 10 011001
Metabolism is built on a foundation of organic chemistry, and employs structures and interactions at many scales. Despite these sources of complexity, metabolism also displays striking and robust regularities in the forms of modularity and hierarchy, which may be described compactly in terms of relatively few principles of composition. These regularities render metabolic architecture comprehensible as a system, and also suggest the order in which layers of that system came into existence. In addition, metabolism serves as a foundational layer in other hierarchies, up to at least the levels of cellular integration including bioenergetics and molecular replication, and trophic ecology. The recapitulation of patterns first seen in metabolism, in these higher levels, motivates us to interpret metabolism as a source of causation or constraint on many forms of organization in the biosphere. Many of the forms of modularity and hierarchy exhibited by metabolism are readily interpreted as stages in the emergence of catalytic control by living systems over organic chemistry, sometimes recapitulating or incorporating geochemical mechanisms.
We identify as modules, either subsets of chemicals and reactions, or subsets of functions, that are re-used in many contexts with a conserved internal structure. At the small molecule substrate level, module boundaries are often associated with the most complex reaction mechanisms, catalyzed by highly conserved enzymes. Cofactors form a biosynthetically and functionally distinctive control layer over the small-molecule substrate. The most complex members among the cofactors are often associated with the reactions at module boundaries in the substrate networks, while simpler cofactors participate in widely generalized reactions. The highly tuned chemical structures of cofactors (sometimes exploiting distinctive properties of the elements of the periodic table) thereby act as ‘keys’ that incorporate classes of organic reactions within biochemistry.
Module boundaries provide the interfaces where change is concentrated, when we catalogue extant diversity of metabolic phenotypes. The same modules that organize the compositional diversity of metabolism are argued, with many explicit examples, to have governed long-term evolution. Early evolution of core metabolism, and especially of carbon-fixation, appears to have required very few innovations, and to have used few rules of composition of conserved modules, to produce adaptations to simple chemical or energetic differences of environment without diverse solutions and without historical contingency. We demonstrate these features of metabolism at each of several levels of hierarchy, beginning with the small-molecule metabolic substrate and network architecture, continuing with cofactors and key conserved reactions, and culminating in the aggregation of multiple diverse physical and biochemical processes in cells.
Erik Behrens et al 2012 Environ. Res. Lett. 7 034004
A sequence of global ocean circulation models, with horizontal mesh sizes of 0.5°, 0.25° and 0.1°, is used to estimate the long-term dispersion by ocean currents and mesoscale eddies of a slowly decaying tracer (half-life of 30 years, comparable to that of 137Cs) from the local waters off the Fukushima Dai-ichi Nuclear Power Plants. The tracer was continuously injected into the coastal waters over some weeks; its subsequent spreading and dilution in the Pacific Ocean was then simulated for 10 years. The simulations do not include any data assimilation, and thus, do not account for the actual state of the local ocean currents during the release of highly contaminated water from the damaged plants in March–April 2011. An ensemble differing in initial current distributions illustrates their importance for the tracer patterns evolving during the first months, but suggests a minor relevance for the large-scale tracer distributions after 2–3 years. By then the tracer cloud has penetrated to depths of more than 400 m, spanning the western and central North Pacific between 25°N and 55°N, leading to a rapid dilution of concentrations. The rate of dilution declines in the following years, while the main tracer patch propagates eastward across the Pacific Ocean, reaching the coastal waters of North America after about 5–6 years. Tentatively assuming a value of 10 PBq for the net 137Cs input during the first weeks after the Fukushima incident, the simulation suggests a rapid dilution of peak radioactivity values to about 10 Bq m −3 during the first two years, followed by a gradual decline to 1–2 Bq m −3 over the next 4–7 years. The total peak radioactivity levels would then still be about twice the pre-Fukushima values.
Hiroaki Aihara et al. 2011 ApJS 193 29
The Sloan Digital Sky Survey (SDSS) started a new phase in 2008 August, with new instrumentation and new surveys focused on Galactic structure and chemical evolution, measurements of the baryon oscillation feature in the clustering of galaxies and the quasar Lyα forest, and a radial velocity search for planets around ~8000 stars. This paper describes the first data release of SDSS-III (and the eighth counting from the beginning of the SDSS). The release includes five-band imaging of roughly 5200 deg 2 in the southern Galactic cap, bringing the total footprint of the SDSS imaging to 14,555 deg 2, or over a third of the Celestial Sphere. All the imaging data have been reprocessed with an improved sky-subtraction algorithm and a final, self-consistent photometric recalibration and flat-field determination. This release also includes all data from the second phase of the Sloan Extension for Galactic Understanding and Exploration (SEGUE-2), consisting of spectroscopy of approximately 118,000 stars at both high and low Galactic latitudes. All the more than half a million stellar spectra obtained with the SDSS spectrograph have been reprocessed through an improved stellar parameter pipeline, which has better determination of metallicity for high-metallicity stars.
William J. Borucki et al. 2011 ApJ 736 19
On 2011 February 1 the Kepler mission released data for 156,453 stars observed from the beginning of the science observations on 2009 May 2 through September 16. There are 1235 planetary candidates with transit-like signatures detected in this period. These are associated with 997 host stars. Distributions of the characteristics of the planetary candidates are separated into five class sizes: 68 candidates of approximately Earth-size ( R p < 1.25 R ⊕), 288 super-Earth-size (1.25 R ⊕ ≤ R p < 2 R ⊕), 662 Neptune-size (2 R ⊕ ≤ R p < 6 R ⊕), 165 Jupiter-size (6 R ⊕ ≤ R p < 15 R ⊕), and 19 up to twice the size of Jupiter (15 R ⊕ ≤ R p < 22 R ⊕). In the temperature range appropriate for the habitable zone, 54 candidates are found with sizes ranging from Earth-size to larger than that of Jupiter. Six are less than twice the size of the Earth. Over 74% of the planetary candidates are smaller than Neptune. The observed number versus size distribution of planetary candidates increases to a peak at two to three times the Earth-size and then declines inversely proportional to the area of the candidate. Our current best estimates of the intrinsic frequencies of planetary candidates, after correcting for geometric and sensitivity biases, are 5% for Earth-size candidates, 8% for super-Earth-size candidates, 18% for Neptune-size candidates, 2% for Jupiter-size candidates, and 0.1% for very large candidates; a total of 0.34 candidates per star. Multi-candidate, transiting systems are frequent; 17% of the host stars have multi-candidate systems, and 34% of all the candidates are part of multi-candidate systems.
E. Komatsu et al. 2011 ApJS 192 18
The combination of seven-year data from WMAP and improved astrophysical data rigorously tests the standard cosmological model and places new constraints on its basic parameters and extensions. By combining the WMAP data with the latest distance measurements from the baryon acoustic oscillations (BAO) in the distribution of galaxies and the Hubble constant ( H 0) measurement, we determine the parameters of the simplest six-parameter ΛCDM model. The power-law index of the primordial power spectrum is n s = 0.968 ± 0.012 (68% CL) for this data combination, a measurement that excludes the Harrison-Zel'dovich-Peebles spectrum by 99.5% CL. The other parameters, including those beyond the minimal set, are also consistent with, and improved from, the five-year results. We find no convincing deviations from the minimal model. The seven-year temperature power spectrum gives a better determination of the third acoustic peak, which results in a better determination of the redshift of the matter-radiation equality epoch. Notable examples of improved parameters are the total mass of neutrinos, ∑ m ν < 0.58 eV(95%CL), and the effective number of neutrino species, N eff = 4.34 +0.86 –0.88 (68% CL), which benefit from better determinations of the third peak and H 0. The limit on a constant dark energy equation of state parameter from WMAP+BAO+ H 0, without high-redshift Type Ia supernovae, is w = –1.10 ± 0.14 (68% CL). We detect the effect of primordial helium on the temperature power spectrum and provide a new test of big bang nucleosynthesis by measuring Y p = 0.326 ± 0.075 (68% CL). We detect, and show on the map for the first time, the tangential and radial polarization patterns around hot and cold spots of temperature fluctuations, an important test of physical processes at z = 1090 and the dominance of adiabatic scalar fluctuations. The seven-year polarization data have significantly improved: we now detect the temperature- E-mode polarization cross power spectrum at 21σ, compared with 13σ from the five-year data. With the seven-year temperature- B-mode cross power spectrum, the limit on a rotation of the polarization plane due to potential parity-violating effects has improved by 38% to (68% CL). We report significant detections of the Sunyaev-Zel'dovich (SZ) effect at the locations of known clusters of galaxies. The measured SZ signal agrees well with the expected signal from the X-ray data on a cluster-by-cluster basis. However, it is a factor of 0.5-0.7 times the predictions from "universal profile" of Arnaud et al., analytical models, and hydrodynamical simulations. We find, for the first time in the SZ effect, a significant difference between the cooling-flow and non-cooling-flow clusters (or relaxed and non-relaxed clusters), which can explain some of the discrepancy. This lower amplitude is consistent with the lower-than-theoretically expected SZ power spectrum recently measured by the South Pole Telescope Collaboration.
Adam G. Riess et al. 2011 ApJ 730 119
We use the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) to determine the Hubble constant from optical and infrared observations of over 600 Cepheid variables in the host galaxies of eight recent Type Ia supernovae (SNe Ia), providing the calibration for a magnitude-redshift relation based on 253 SNe Ia. Increased precision over past measurements of the Hubble constant comes from five improvements: (1) more than doubling the number of infrared observations of Cepheids in the nearby SN hosts; (2) increasing the sample size of ideal SN Ia calibrators from six to eight; (3) increasing by 20% the number of Cepheids with infrared observations in the megamaser host NGC 4258; (4) reducing the difference in the mean metallicity of the Cepheid comparison samples between NGC 4258 and the SN hosts from Δlog [O/H] = 0.08 to 0.05; and (5) calibrating all optical Cepheid colors with a single camera, WFC3, to remove cross-instrument zero-point errors. The result is a reduction in the uncertainty in H 0 due to steps beyond the first rung of the distance ladder from 3.5% to 2.3%. The measurement of H 0 via the geometric distance to NGC 4258 is 74.8 ± 3.1 km s –1 Mpc –1, a 4.1% measurement including systematic uncertainties. Better precision independent of the distance to NGC 4258 comes from the use of two alternative Cepheid absolute calibrations: (1) 13 Milky Way Cepheids with trigonometric parallaxes measured with HST/fine guidance sensor and Hipparcos and (2) 92 Cepheids in the Large Magellanic Cloud for which multiple accurate and precise eclipsing binary distances are available, yielding 74.4 ± 2.5 km s –1 Mpc –1, a 3.4% uncertainty including systematics. Our best estimate uses all three calibrations but a larger uncertainty afforded from any two: H 0 = 73.8 ± 2.4 km s –1 Mpc –1 including systematic errors, corresponding to a 3.3% uncertainty. The improved measurement of H 0, when combined with the Wilkinson Microwave Anisotropy Probe ( WMAP) 7 year data, results in a tighter constraint on the equation-of-state parameter of dark energy of w = –1.08 ± 0.10. It also rules out the best-fitting gigaparsec-scale void models, posited as an alternative to dark energy. The combined H 0 + WMAP results yield N eff = 4.2 ± 0.7 for the number of relativistic particle species in the early universe, a low-significance excess for the value expected from the three known neutrino flavors.
Daniel J. Eisenstein et al. 2011 The Astronomical Journal 142 72
Building on the legacy of the Sloan Digital Sky Survey (SDSS-I and II), SDSS-III is a program of four spectroscopic surveys on three scientific themes: dark energy and cosmological parameters, the history and structure of the Milky Way, and the population of giant planets around other stars. In keeping with SDSS tradition, SDSS-III will provide regular public releases of all its data, beginning with SDSS Data Release 8 (DR8), which was made public in 2011 January and includes SDSS-I and SDSS-II images and spectra reprocessed with the latest pipelines and calibrations produced for the SDSS-III investigations. This paper presents an overview of the four surveys that comprise SDSS-III. The Baryon Oscillation Spectroscopic Survey will measure redshifts of 1.5 million massive galaxies and Lyα forest spectra of 150,000 quasars, using the baryon acoustic oscillation feature of large-scale structure to obtain percent-level determinations of the distance scale and Hubble expansion rate at z < 0.7 and at z ~ 2.5. SEGUE-2, an already completed SDSS-III survey that is the continuation of the SDSS-II Sloan Extension for Galactic Understanding and Exploration (SEGUE), measured medium-resolution (R = λ/Δλ ~ 1800) optical spectra of 118,000 stars in a variety of target categories, probing chemical evolution, stellar kinematics and substructure, and the mass profile of the dark matter halo from the solar neighborhood to distances of 100 kpc. APOGEE, the Apache Point Observatory Galactic Evolution Experiment, will obtain high-resolution (R ~ 30,000), high signal-to-noise ratio (S/N ≥ 100 per resolution element), H-band (1.51 μm < λ < 1.70 μm) spectra of 10^5 evolved, late-type stars, measuring separate abundances for ~15 elements per star and creating the first high-precision spectroscopic survey of all Galactic stellar populations (bulge, bar, disks, halo) with a uniform set of stellar tracers and spectral diagnostics. The Multi-object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) will monitor radial velocities of more than 8000 FGK stars with the sensitivity and cadence (10-40 m s –1, ~24 visits per star) needed to detect giant planets with periods up to two years, providing an unprecedented data set for understanding the formation and dynamical evolution of giant planet systems. As of 2011 January, SDSS-III has obtained spectra of more than 240,000 galaxies, 29,000 z ≥ 2.2 quasars, and 140,000 stars, including 74,000 velocity measurements of 2580 stars for MARVELS.
J. Dunkley et al. 2011 ApJ 739 52
We present cosmological parameters derived from the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz and 218 GHz over 296 deg 2 with the Atacama Cosmology Telescope (ACT) during its 2008 season. ACT measures fluctuations at scales 500 < ℓ < 10,000. We fit a model for the lensed CMB, Sunyaev-Zel'dovich (SZ), and foreground contribution to the 148 GHz and 218 GHz power spectra, including thermal and kinetic SZ, Poisson power from radio and infrared point sources, and clustered power from infrared point sources. At ℓ = 3000, about half the power at 148 GHz comes from primary CMB after masking bright radio sources. The power from thermal and kinetic SZ is also estimated. The IR Poisson power at 148 GHz corresponds to C ℓ = 5.5 ± 0.5 nK 2, and a clustered IR component is additionally required, assuming an analytic model for its power spectrum shape. At 218 GHz only about 15% of the power, approximately 27 μK 2, is CMB anisotropy at ℓ = 3000. The remaining 85% is attributed to IR sources (approximately 50% Poisson and 35% clustered), with spectral index α = 3.69 ± 0.14 for flux scaling as S(ν) ∝ ν^α. We estimate primary cosmological parameters from the less contaminated 148 GHz spectrum, marginalizing over SZ and source power. The ΛCDM cosmological model is a good fit to the data (χ 2/dof = 29/46), and ΛCDM parameters estimated from ACT+ Wilkinson Microwave Anisotropy Probe ( WMAP) are consistent with the seven-year WMAP limits, with scale invariant n s = 1 excluded at 99.7% confidence level (CL) (3σ). A model with no CMB lensing is disfavored at 2.8σ. By measuring the third to seventh acoustic peaks, and probing the Silk damping regime, the ACT data improve limits on cosmological parameters that affect the small-scale CMB power. The ACT data combined with WMAP give a 6σ detection of primordial helium, with Y P = 0.313 ± 0.044, and a 4σ detection of relativistic species, assumed to be neutrinos, with N eff = 5.3 ± 1.3 (4.6 ± 0.8 with BAO+ H 0 data). From the CMB alone the running of the spectral index is constrained to be dn s / dln k = –0.034 ± 0.018, the limit on the tensor-to-scalar ratio is r < 0.25 (95% CL), and the possible contribution of Nambu cosmic strings to the power spectrum is constrained to string tension Gμ < 1.6 × 10 –7 (95% CL).
Anton M. Koekemoer et al. 2011 ApJS 197 36
This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ~125 arcmin 2 within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ~800 arcmin 2 across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey are presented in a companion paper.
Ming-Hu Fang et al 2011 EPL 94 27009
(Tl,K)Fe x Se 2 single crystals were successfully synthesized for the first time with the Bridgman method. The physical properties are characterized by electrical resistivity, magnetic susceptibility and Hall coefficient measurements. We found that the (Tl,K)Fe x Se 2 (1.30 ≤ x ≤ 1.65) compounds show an antiferromagnetic (AFM) insulator behavior, which may be associated with the Fe-vacancy ordering in the crystals. In the 1.70 ≤ x < 1.78 crystals, by contrast, superconductivity (SC) coexists with an insulating phase. As the Fe content further increases, bulk SC with T c = 31 K (and a T c onset as high as 40 K) appears in the 1.78 ≤ x ≤ 1.88 crystals. Our discovery represents the first Fe-based high-temperature superconductivity (HTSC) at the verge of an AFM insulator.
Norman A. Grogin et al. 2011 ApJS 197 35
The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) is designed to document the first third of galactic evolution, over the approximate redshift (z) range 8-1.5. It will image >250,000 distant galaxies using three separate cameras on the Hubble Space Telescope, from the mid-ultraviolet to the near-infrared, and will find and measure Type Ia supernovae at z > 1.5 to test their accuracy as standardizable candles for cosmology. Five premier multi-wavelength sky regions are selected, each with extensive ancillary data. The use of five widely separated fields mitigates cosmic variance and yields statistically robust and complete samples of galaxies down to a stellar mass of 10^9 M ☉ to z ~ 2, reaching the knee of the ultraviolet luminosity function of galaxies to z ~ 8. The survey covers approximately 800 arcmin 2 and is divided into two parts. The CANDELS/Deep survey (5σ point-source limit H = 27.7 mag) covers ~125 arcmin 2 within Great Observatories Origins Deep Survey (GOODS)-N and GOODS-S. The CANDELS/Wide survey includes GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-deep Survey) and covers the full area to a 5σ point-source limit of H ~ 27.0 mag. Together with the Hubble Ultra Deep Fields, the strategy creates a three-tiered "wedding-cake" approach that has proven efficient for extragalactic surveys. Data from the survey are nonproprietary and are useful for a wide variety of science investigations. In this paper, we describe the basic motivations for the survey, the CANDELS team science goals and the resulting observational requirements, the field selection and geometry, and the observing design. The Hubble data processing and products are described in a companion paper.
R. J. Bouwens et al. 2011 ApJ 737 90
We identify 73 z ~ 7 and 59 z ~ 8 candidate galaxies in the reionization epoch, and use this large 26-29.4 AB mag sample of galaxies to derive very deep luminosity functions to < –18 AB mag and the star formation rate (SFR) density at z ~ 7 and z ~ 8 (just 800 Myr and 650 Myr after recombination, respectively). The galaxy sample is derived using a sophisticated Lyman-break technique on the full two-year Wide Field Camera 3/infrared (WFC3/IR) and Advanced Camera for Surveys (ACS) data available over the HUDF09 (~29.4 AB mag, 5σ), two nearby HUDF09 fields (~29 AB mag, 5σ, 14 arcmin 2), and the wider area Early Release Science (~27.5 AB mag, 5σ, ~40 arcmin 2). The application of strict optical non-detection criteria ensures the contamination fraction is kept low (just ~7% in the HUDF). This very low value includes a full assessment of the contamination from lower redshift sources, photometric scatter, active galactic nuclei, spurious sources, low-mass stars, and transients (e.g., supernovae). From careful modeling of the selection volumes for each of our search fields, we derive luminosity functions for galaxies at z ~ 7 and z ~ 8 to < –18 AB mag. The faint-end slopes α at z ~ 7 and z ~ 8 are uncertain but very steep at α = –2.01 ± 0.21 and α = –1.91 ± 0.32, respectively. Such steep slopes contrast to the local α ~ –1.4 and may even be steeper than that at z ~ 4 where α = –1.73 ± 0.05. With such steep slopes (α ≲ –1.7) lower luminosity galaxies dominate the galaxy luminosity density during the epoch of reionization. The SFR densities derived from these new z ~ 7 and z ~ 8 luminosity functions are consistent with the trends found at later times (lower redshifts). We find reasonable consistency with the SFR densities implied from reported stellar mass densities being only ~40% higher at z < 7. This suggests that (1) the stellar mass densities inferred from the Spitzer Infrared Array Camera (IRAC) photometry are reasonably accurate and (2) that the initial mass function at very high redshift may not be very different from that at later times.