Pittsburgh Supercomputing Center

Pittsburgh, PA, United States


News Article | April 24, 2017
Site: www.prweb.com

Recent federal recommendations against offering the inhaled nasal influenza vaccine due to lack of effectiveness could lead to more flu illness in the U.S. if the inhaled vaccine becomes effective again or if not having the choice of the needle-less vaccine substantially reduces immunization rates, according to a new analysis led by University of Pittsburgh School of Medicine scientists. The findings, published online and scheduled for a coming issue of the American Journal of Preventive Medicine, indicate that close surveillance will be needed to ensure that the U.S. Centers for Disease Control and Prevention (CDC) recommendation against the nasal vaccine—called the live attenuated influenza vaccine, or LAIV—continues to do more good than harm. “The CDC is being appropriately cautious and doing the right thing based on available data,” said lead author Kenneth J. Smith, M.D., M.S., professor of medicine and clinical and translational science in Pitt’s School of Medicine. “However, our study finds that it would take only relatively small changes to tip the scales back in favor of offering the LAIV, so close monitoring is very important.” The Pittsburgh Vaccination Research Group (PittVax) is one of a few sites across the U.S. that track flu in patients who received and did not receive the annual flu vaccine. The data they collect is shared with the CDC’s Advisory Committee on Immunization Practices and led to the CDC’s recommendation against LAIV last year after data from the two previous flu seasons showed it to be ineffective at preventing influenza A, which is typically the most common strain. In the past, the LAIV was a common vaccine offered to children 2 to 8 years old. Under current conditions, only offering the needle-delivered flu vaccine results in 20.9 percent of children ages 2 to 8 getting the flu, compared with 23.5 percent if both the needle and nasal vaccine are offered. However, if the LAIV effectiveness improves and can prevent flu in more than 63 percent of the people who get it, then it once again becomes beneficial to offer both forms of vaccination. “Interestingly, there has been no decrease in LAIV effectiveness in other countries, and we’re still unsure why this is,” said Smith. “It is possible that future research will find ways to make LAIV more effective in the U.S. again, in which case the CDC recommendations will need to be reexamined.” The researchers also found that if not having the needle-less vaccine as an option drives down vaccination rates by 18.7 percent or more, then offering both options is the better recommendation. “PittVax will continue collecting, analyzing and reporting on flu cases and flu vaccine effectiveness in the Pittsburgh region, helping guide flu immunization recommendations,” said senior author Richard K. Zimmerman, M.D., M.P.H., professor in Pitt School of Medicine’s Department of Family Medicine and Pitt Graduate School of Public Health’s Department of Behavioral and Community Health Sciences. “This kind of surveillance is critical to charting the best course to save lives from influenza, which kills thousands annually.” Additional authors on this study are Mary Patricia Nowalk, Ph.D., R.D., Angela Wateska, M.P.H., and Jonathan M. Raviotta, M.P.H., all of Pitt; Shawn T. Brown, Ph.D. and Jay V. DePasse, B.S., at the Pittsburgh Supercomputing Center at Carnegie Mellon University and Eunha Shim, Ph.D., of Soongsil University in Seoul, Republic of Korea. 
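As a rough, back-of-the-envelope illustration of the trade-off described above (and not the authors' published model), the sketch below compares expected flu attack rates under a "shot only" policy versus offering both vaccines, sweeping the assumed LAIV effectiveness. All parameters in the code are illustrative assumptions; the 63 percent and 18.7 percent thresholds reported by the study come from its own cost-effectiveness model.

    # Toy decision-threshold sketch inspired by the comparison described above.
    # All parameters are illustrative assumptions, not the published model inputs.

    BASELINE = 0.30            # assumed attack rate among unvaccinated children
    IIV_EFF = 0.60             # assumed effectiveness of the injectable vaccine
    COVERAGE_IIV_ONLY = 0.55   # assumed uptake when only the shot is offered
    COVERAGE_BOTH = 0.60       # assumed uptake when the nasal spray is also offered
    LAIV_SHARE = 0.33          # assumed share of vaccinated children choosing LAIV

    def attack_rate(coverage, effectiveness, baseline=BASELINE):
        """Expected fraction of children infected given coverage and vaccine effectiveness."""
        return baseline * (1.0 - coverage * effectiveness)

    def policy_attack_rate(offer_laiv, laiv_eff):
        if not offer_laiv:
            return attack_rate(COVERAGE_IIV_ONLY, IIV_EFF)
        blended_eff = (1 - LAIV_SHARE) * IIV_EFF + LAIV_SHARE * laiv_eff
        return attack_rate(COVERAGE_BOTH, blended_eff)

    # Sweep LAIV effectiveness to find where offering both options becomes preferable,
    # mirroring the threshold logic (the study reports ~63% under its own assumptions).
    for laiv_eff in [0.0, 0.2, 0.4, 0.6, 0.8]:
        both = policy_attack_rate(True, laiv_eff)
        shot_only = policy_attack_rate(False, laiv_eff)
        better = "offer both" if both < shot_only else "shot only"
        print(f"LAIV effectiveness {laiv_eff:.0%}: both={both:.1%}, shot only={shot_only:.1%} -> {better}")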
This project was funded by National Institute of General Medical Sciences grant R01GM111121. About the University of Pittsburgh Schools of the Health Sciences The University of Pittsburgh Schools of the Health Sciences include the schools of Medicine, Nursing, Dental Medicine, Pharmacy, Health and Rehabilitation Sciences and the Graduate School of Public Health. The schools serve as the academic partner to the UPMC (University of Pittsburgh Medical Center). Together, their combined mission is to train tomorrow’s health care specialists and biomedical scientists, engage in groundbreaking research that will advance understanding of the causes and treatments of disease and participate in the delivery of outstanding patient care. Since 1998, Pitt and its affiliated university faculty have ranked among the top 10 educational institutions in grant support from the National Institutes of Health. For additional information about the Schools of the Health Sciences, please visit http://www.health.pitt.edu.


News Article | May 2, 2017
Site: www.futurity.org

Getting half of American 8- to 11-year-olds into 25 minutes of physical activity three times a week would save $21.9 billion in medical costs and lost wages over their lifetimes, new research suggests. The relatively modest increase—from the current 32 percent to 50 percent of kids participating in exercise, active play, or sports that often—would also result in 340,000 fewer obese and overweight youth, a reduction of more than 4 percent, the study calculates. “Physical activity not only makes kids feel better and helps them develop healthy habits, it’s also good for the nation’s bottom line,” says Bruce Y. Lee, executive director of the Global Obesity Prevention Center at Johns Hopkins University. “Our findings show that encouraging exercise and investing in physical activity such as school recess and youth sports leagues when kids are young pays big dividends as they grow up.” The study, published in the journal Health Affairs, suggests an even bigger payoff if every current 8- through 11-year-old in the United States exercised 75 minutes over three sessions weekly. In that case, the researchers estimate, $62.3 billion in medical costs and lost wages over the course of their lifetimes could be avoided and 1.2 million fewer youths would be overweight or obese. And the savings would multiply if not just current 8-to-11 year olds, but every future cohort of elementary school children upped their game. Studies have shown that a high body mass index at age 18 is associated with a high BMI throughout adulthood and a higher risk for diabetes, heart disease, and other maladies linked to excess weight. The illnesses lead to high medical costs and productivity losses. In recent decades, there has been what experts describe as a growing epidemic of obesity in the United States. Lee and colleagues from the Johns Hopkins Bloomberg School of Public Health and the Pittsburgh Supercomputing Center at Carnegie Mellon University developed a computer simulation using their Virtual Population for Obesity Prevention software. They plugged in information representing current US children to show how changes in physical activity as kids could affect them—and the economy—throughout their lifetimes. The model relied on data from the 2005 and 2013 National Health and Nutrition Examination Survey and from the National Center for Health Statistics. Exercise totaling at least 25 minutes a day, three days a week, is a guideline developed for kids by the Sports and Fitness Industry Association. The researchers found that maintaining the current low 32 percent compliance would result in 8.1 million of today’s 8- to 11-year-olds being overweight or obese by 2020. That would trigger $2.8 trillion in additional medical costs and lost wages over their lifetimes. An overweight person’s lifetime medical costs average $62,331 and lost wages average $93,075. For an obese person, these amounts are even greater. “Even modest increases in physical activity could yield billions of dollars in savings,” Lee says. The costs averted are likely an underestimate, he says, as there are other benefits of physical activity that don’t affect weight, such as improving bone density, improving mood, and building muscle. Lee says that the spending averted by healthy levels of physical activity would more than make up for costs of programs designed to increase activity levels. “As the prevalence of childhood obesity grows, so will the value of increasing physical activity,” he says. 
“We need to be adding physical education programs and not cutting them. We need to encourage kids to be active, to reduce screen time and get them running around again. It’s important for their physical health—and the nation’s financial health.” Funding for the research came from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the Agency for Healthcare Research and Quality.
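For a sense of the arithmetic behind these projections, here is a minimal sketch using the per-person lifetime figures quoted in the article. The number of averted cases fed into it is hypothetical, and the published savings estimates come from the full Virtual Population for Obesity Prevention microsimulation, so they will not equal this simple product.

    # Back-of-the-envelope sketch of the cost accounting described above.
    # The per-person lifetime figures come from the article; the averted-case
    # count below is a hypothetical input, not a published model output.

    MEDICAL_COST_OVERWEIGHT = 62_331  # average lifetime medical costs per overweight person (article)
    LOST_WAGES_OVERWEIGHT = 93_075    # average lifetime lost wages per overweight person (article)

    def lifetime_cost_avoided(averted_overweight_cases):
        """Rough lifetime medical costs plus lost wages avoided for a number of averted cases."""
        return averted_overweight_cases * (MEDICAL_COST_OVERWEIGHT + LOST_WAGES_OVERWEIGHT)

    # Hypothetical example: averting 100,000 overweight cases
    print(f"~${lifetime_cost_avoided(100_000) / 1e9:.1f} billion in costs avoided")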


Tarr T.B., University of Pittsburgh | Tarr T.B., Center for the Neural Basis of Cognition | Dittrich M., University of Pittsburgh | Dittrich M., Pittsburgh Supercomputing Center | And 2 more authors.
Trends in Neurosciences | Year: 2013

The frog neuromuscular junction (NMJ) is a strong and reliable synapse because, during activation, sufficient neurotransmitter is released to trigger a postsynaptic action potential (AP). Recent evidence supports the hypothesis that this reliability emerges from the assembly of thousands of unreliable single vesicle release sites. The mechanisms that govern this unreliability include a paucity of voltage-gated calcium channels, a low probability of calcium channel opening during an AP, and the rare triggering of synaptic vesicle fusion even when a calcium channel does open and allows calcium flux. Here, we discuss the evidence that these unreliable single vesicle release sites may be the fundamental building blocks of many types of synapses in both the peripheral and central nervous system (PNS and CNS, respectively). © 2012 Elsevier Ltd.
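A hedged numerical illustration of the central idea, many unreliable release sites summing to a reliable synapse, is sketched below; the site count, per-site release probability, and quantal threshold are illustrative assumptions rather than values from the review.

    # Illustrative Monte Carlo sketch: thousands of individually unreliable
    # single-vesicle release sites can together form a highly reliable synapse.
    # All numbers below are assumptions for illustration only.
    import numpy as np

    N_SITES = 20_000      # assumed number of single-vesicle release sites at the NMJ
    P_RELEASE = 0.01      # assumed probability that a given site releases during one AP
    QUANTA_NEEDED = 50    # assumed quanta required to trigger a postsynaptic AP

    rng = np.random.default_rng(0)
    quanta_per_ap = rng.binomial(N_SITES, P_RELEASE, size=10_000)  # quanta released on each of 10,000 APs
    reliability = np.mean(quanta_per_ap >= QUANTA_NEEDED)

    print(f"mean quanta per AP: {quanta_per_ap.mean():.0f} "
          f"(each site releases only {P_RELEASE:.0%} of the time)")
    print(f"fraction of APs that trigger a postsynaptic AP: {reliability:.3f}")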


News Article | April 20, 2016
Site: www.nature.com

We imaged the primary visual cortex of an awake 9-month-old C57BL/6 male mouse, as described previously10, 13, with a custom-built two-photon microscope12. Using volumetric in vivo two-photon calcium imaging of a genetically encoded calcium indicator (GCaMP3), we measured the time-resolved responses of a population of identified neurons to a wide array of stimuli including drifting gratings (up to 16 directions, 3 spatial, and 2 temporal frequencies). Following 12 days of imaging calcium responses in the same cohort of neurons, we labelled blood vessels with a tail vein injection (rhodamine B-conjugated dextran) and acquired an in vivo fluorescence volume. The animal’s brain was then prepared for large-scale transmission EM as described previously8. 3,700 serial sections (<50 nm thick) were cut and imaged spanning a 450 μm × 450 μm × 150 μm volume at 4 nm × 4 nm × 40 nm per voxel resolution. Sections representing the middle third of the EM volume were aligned and imported into CATMAID16 for distributed, online, manual reconstruction and targeted volumes around identified synapses were exported for volumetric segmentation and PSD analysis. EM reconstructed neurons were identified in the in vivo stack by using the blood vessels as landmarks. Apical dendrites originating from deeper neocortical lamina were similarly identified and corresponded by location and branching geometry of their apical tufts. Permutation tests were used in statistical analyses, unless otherwise noted. All procedures were conducted in accordance with the ethical guidelines of the NIH and approved by the IACUC at Harvard Medical School. For cranial window implant surgery the mouse was anesthetized with isoflurane (1.2–2% in 100% O2). Dexamethasone (3.2 mg per kg body weight, intramuscular) was administered on the day before surgery and atropine (0.2 mg per kg body weight, intraperitoneally) at the beginning of surgery. Using aseptic technique, we secured a headpost in place using cyanoacrylate, dental acrylic, and C&B Metabond (Parkell), and made a 5 mm craniotomy over the left visual cortex (centre: ~2.8 mm lateral, 0.5 mm anterior to lambda) as described previously32. A 5 mm glass cranial window was implanted consisting of an 8 mm coverslip cured to two 5 mm coverslips (Warner #1; total thickness: ~0.5 mm; thickness below skull: ~200 μm) using index-matched adhesive (Norland #71). We secured the window in place using cyanoacrylate and dental acrylic. We habituated the mouse with water scheduling so that water was delivered only during and immediately after head restraint training. We increased the duration of head restraint sessions over the course of 2 weeks, from 3 min to 2 h32. We then performed retinotopic mapping of visual cortical areas using widefield intrinsic autofluorescence imaging, measuring autofluorescence produced by blue excitation (470 nm centre, 40 nm band, Chroma) through a green/red emission filter (longpass, 500 nm cutoff). We collected images using a CCD camera (Sensicam, Cooke, 344 × 260 pixels spanning 4 mm × 3 mm; 2 Hz acquisition rate) through a 5× air objective (0.14 NA, Mitutoyo) using ImageJ acquisition software. For retinotopic mapping, stimuli were presented at 2–6 retinotopic positions for 10 s each, with 10 s of mean luminance between trials. GCaMP3 expression was targeted by viral injection. Dexamethasone (3.2 mg per kg body weight, intramuscular) was administered at least 2 h before coverslip removal. 
The mouse was anesthetized (isoflurane, 1–1.5%) and the cranial window was sterilized with alcohol and the coverslip removed. We then volume injected (50–100 nl min−1, Stoelting) 30–100 nl of a 10:1 mixture of AAV2/1.hSynap.GCaMP3.3.SV40 (ref. 33; Penn Vector Core) and 1 mM sulforhodamine-101 (Invitrogen) to visualize the injection. Using the blood vessel pattern observed during widefield imaging as a guide, we made an injection in the posterior part of primary visual cortex at a depth of ~250 μm below the pial surface. After injection, a new cranial window was sealed in place and the mouse recovered. A 120 Hz LCD monitor (Samsung 2233RZ, 22-inch) was calibrated at each temporal frequency using a spectrophotometer (Photoresearch PR-650). We confirmed waveforms were sinusoidal by measuring luminance fluctuations of a full-field sinusoidally modulated stimulus (using a photomultiplier tube, Hamamatsu). The monitor was positioned so that the stimulus patch was 21 cm from the contralateral eye. Local 40° Gabor-like circular patches (sigmoidal 10–90% falloff in 10°) containing either square-wave (for mapping retinotopy with widefield intrinsic autofluorescence and targeting GCaMP3 injections) or sine-wave (for mapping position of receptive fields with two-photon imaging) drifting gratings (80% contrast) were alternated with periods of uniform mean luminance (59 cd m−2). In an effort to increase the population of responsive cells and explore receptive field parameters, we presented gratings of varying directions at multiple spatial and temporal frequencies or at different positions in the visual field. We presented either 8 directions at 3 spatial frequencies (0.06, 0.12, and 0.24 cycles per degree (cpd)) and 2 temporal frequencies (2 and 8 Hz), 16 directions at 2 spatial frequencies (0.04 and 0.16 cpd) and 2 temporal frequencies (2 and 8 Hz), 8 directions at 6 positions, or 16 directions at 4 positions (45–115° eccentricity and −5–25° elevation), for a total of 64 stimulus types plus 10% blank trials. For experiments that did not vary stimulus position, stimuli were centred on the location eliciting maximum calcium responses in the imaged field (monocular cortex), which most effectively drove responses in the population. All stimuli in a given protocol were presented in a pseudo-random order (sampling without replacement), and presented 3 times per volume experiment with 2–4 experiments per volume per day. Imaging was performed with a custom-designed two-photon laser-scanning microscope12. Excitation light from a Mai Tai HP DeepSee laser (Spectra-Physics) with dispersion compensation was directed into a modulator (Conoptics) and a beam expander (Edmund Optics). The expanded beam was raster scanned into the brain with a resonant (4 kHz, Electro-Optical Products) and a conventional galvanometer (Galvoline) (240 line frames, bidirectional, 31 Hz) through a 0.8 numerical aperture (NA) 16× objective lens (Nikon). Emitted photons were directed through a green filter (centre: 542 nm; band: 50 nm; Semrock) onto GaAsP photomultipliers (no cooling, 50 μA protection circuit, H7422P-40MOD, Hamamatsu). The photomultiplier signals were amplified (DHPCA-100, Femto), and low-pass filtered (cutoff frequency = ~700 kHz). These and the mirror driver signals were acquired at 3.3 MHz using a multifunction data acquisition board (PCI-6115, National Instruments). Images were reconstructed in MATLAB (MathWorks) and continuously streamed onto a RAID array. 
Microscope control was also performed in MATLAB using an analogue output board (PCI-6711, National Instruments). The laser’s dispersion compensation was adjusted to maximize collected fluorescence. A piezoelectric objective translator on the microscope enabled imaging multiple 300 × 300 × 100 μm volumes with 8 planes at 4 Hz separated by ~12.5 μm, allowing us to capture the response properties of many cells through the depth of L2/3. The imaged field of view was 200–300 μm on a side at resolution of 0.8–1.2 μm per pixel (dwell-time ~2.7 μs). GCaMP3 was excited at 920 nm. Laser power was automatically adjusted as a function of imaging depth at the modulator with power exiting the objective ranging from 30–60 mW. During imaging, the mouse was placed on a 6-inch diameter foam ball that could spin noiselessly on an axle (Plasteel). We monitored trackball revolutions using a custom photodetector circuit and recorded eye movements using an IR-CCD camera (Sony xc-ei50; 30 Hz) and infrared illumination (720–2,750 nm bandpass filter, Edmund). Visual stimuli were presented for 4 s with 4 s of mean luminance between trials. Recording sessions were 2–6 h in duration. Use of the genetically encoded calcium indicator GCaMP3 permitted recording from the same neurons over multiple days, with the selectivity of calcium signals stable over several days of imaging (Extended Data Fig. 1)32, 34, 35. Within this volume we obtained calcium signals from cell bodies of superficial layer (L2/3) neurons and large calibre apical dendrites that continued beyond the depth of our imaging volume and had branching morphologies consistent with deep layer pyramidal cells. These were likely from L5 neurons because of their large calibre, and because most L6 pyramidal cells do not project their apical dendrites more superficially than L4 (refs 36, 37). The calcium signals from these deep layer apical dendrites stem from either forward-38, 39 or back-propagating action potentials40, are consistent across days (Extended Data Fig. 1) and along the length of the deep layer apical dendritic trunks (Extended Data Fig. 2), and therefore most likely reflect the response properties of the soma. We relocated the cohort of neurons daily by using the vasculature’s negative staining as fiducial landmarks. For the final in vivo imaging session, we injected the tail vein with a fluorescent dye to label blood vessels (rhodamine B isothiocyanate–Dextran (MW ~70k), 5% v/v, Sigma) and acquired a fluorescence stack to correspond the calcium-imaged neurons in vivo with their identities in the EM volume ex vivo8 (see below, and Extended Data Fig. 4). Following in vivo two-photon imaging the animal was perfused transcardially (2% formaldehyde/2.5% glutaraldehyde in 0.1 M cacodylate buffer with 0.04% CaCl2) and the brain was processed for serial-section TEM. 200 μm thick coronal vibratome sections were cut, post-fixed, and en bloc stained with 1% osmium tetroxide/1.5% potassium ferrocyanide followed by 1% uranyl acetate, dehydrated with a graded ethanol series, and embedded in resin (TAAB 812 Epon, Canemco). 
We located the calcium-imaged region by matching vasculature between in vivo fluorescence and serial thick (1 μm) toluidine blue (EMS) sections cut from an adjacent vibratome section, then cut ~3,700 serial (<50 nm) sections on an ultramicrotome (Leica UC7) using a 35 degree diamond knife (EMS-Diatome) and manually collected sections on 1 mm × 2 mm dot-notch slot grids (Synaptek) that were coated with a pale gold Pioloform (Ted Pella) support film, carbon coated, and glow-discharged. Following section pickup, we post-stained grids with uranyl acetate and lead citrate. Using the custom-built transmission electron microscope camera array (TEMCA)8 we imaged the ~3,700 section series, targeting a ~450 μm × 450 μm region for each section (Fig. 1c). Acquired at 4 nm per pixel in plane, this amounted to ~100 terabytes of raw data to date comprising 30 million cubic microns of brain and >10 million (4,000 × 2,672 pixel) camera images. Magnification at the scope was 2,000×, accelerating potential was 120 kV, and beam current was ~90 microamperes through a tungsten filament. Images suitable for circuit reconstruction were acquired at a net rate of 5–8 million pixels s−1. Approximately the middle third of the series (sections 2,281–3,154) was aligned using open source software developed at Pittsburgh Supercomputing Center (AlignTK)8 and imported into CATMAID16 for distributed online visualization and segmentation. Within the analysed EM series there were 51 missing sections. Nineteen were single section losses. There were 2 instances each of 2, 3, and 4 consecutive missing sections, and 1 instance each of 6 and 8 consecutive missing sections near the series boundaries. Folds, staining artefacts, and sometimes cracks occurred during section processing, but were typically isolated to edges of our large sections and therefore did not usually interfere with manual segmentation. To find the correspondence between the cells imaged in vivo and those in the EM data set, a global 3D affine alignment was used with fiducial landmarks manually specified at successively finer scales of vasculature and then cell bodies to re-locate the calcium-imaged neurons in the EM-imaged volume (Extended Data Fig. 4). Apical dendrites arising from deep layer (putative L5) pyramidal neurons were identified by their characteristic morphology36, 41, 42 (also see below). Their correspondence was facilitated by the unique branching patterns of their apical tufts and those that could not be unambiguously identified were not included in the functional analysis. We first traced the axonal and dendritic arbors of the functionally characterized neurons in the EM data set by manually placing a series of marker points down the midline of each process to generate a skeletonized model of the arbors using CATMAID16 (Figs 1d, 2a, 3a, Extended Data Fig. 6, Supplementary Data 1–3). We identified synapses using classical criteria42. For each synapse on the axon of a functionally characterized cell, dendrites of postsynaptic excitatory neurons were traced either to the boundaries of the aligned volume or centripetally back to the cell body8. We identified deep layer apical dendrites of (putative L5) pyramidal cells by their large calibre, high spine density, and their continuation beyond the bottom border of the EM volume, which spans from the pial surface through L4. For each neuronal target reconstruction included in the analysis, a second independent annotator verified the tracing by working backwards from the most distal end of every process. 
An additional round of validation was done for each synapse between functionally characterized cells, in which a third annotator, who had not previously traced the pre- or post-synaptic process, independently verified the anatomical connectivity blind to previous tracing work. We began this independent round of validation at each synapse and traced the pre- and postsynaptic processes centripetally. If the initial reconstruction and subsequent verification of the reconstruction diverged, that connection and the segmentation work distal from the point of divergence were excluded from further analysis. EM reconstruction and validation were performed blind to cells’ functional characteristics and targeted cells were initially assigned to individual annotators pseudo-randomly weighted by tracing productivity. We performed targeted volumetric reconstructions of synapses connecting functionally characterized cells by developing tools that interface with CATMAID to cut out, locally align, and catalogue volumes of interest based on location (Fig. 4a; for example, 400 pixels × 400 pixels × 41 sections or 3.2 μm × 3.2 μm × 1.64 μm volumes centred on synapses represented by CATMAID connectors). Presynaptic boutons, postsynaptic spines, their parent axons and dendrites, and postsynaptic density (PSD) areas were manually segmented with itk-SNAP (http://www.itksnap.org/). PSD areas were calculated as described previously43 with obliquely cut or en face synapse areas measured using their maximum z-projection. En face or obliquely cut synapses were identified by serial sections that starkly transitioned from a clear presynaptic specialization hosting a vesicle pool to a distinctly different postsynaptic cell, typically with an intervening section of electron dense area representing the postsynaptic density and/or synaptic cleft (for example, Extended Data Fig. 5). In vivo calcium imaging data was analysed in MATLAB and ImageJ (NIH) as described previously12, 13. To correct for motion along the imaged plane (x–y motion), the stack for each imaging plane was registered to the average field of view using TurboReg44. A 5 pixel border at each edge of the field of view was ignored to eliminate registration artefacts. Masks for analysing fluorescence signal from neurons were manually generated corresponding to cells in the EM volume, registered to the in vivo anatomical fluorescence stack, and to individual physiological imaging planes. Time courses of cells spanning multiple physiological imaging planes were weighted by dwell time in each plane and averaged across planes. Evoked responses for each EM-identified cell were measured for each stimulation epoch as the difference in fractional fluorescence (% ΔF/F) between the 5 s after and the 2.5 s before stimulus onset (pre-stimulus activity), and averaged across stimulus repetitions. We quantified visual responsiveness of each cell by calculating the average Pearson correlation coefficient of the responses to all stimuli across repetitions (average trial-to-trial correlation). We defined the significance of visual responses as the probability (P value) that the observed trial-to-trial correlation is larger than the correlation obtained from a full random permutation of the data for spatial and temporal frequency experiments (P < 0.05) and experiments where stimulus position was varied (P < 0.01). In retinotopic experiments designed to increase the number of characterized neurons, we found cells that did not reliably respond to stimuli ± 20° from the centre of the display. 
These cells either had receptive fields smaller than our stimuli, or the stimuli were positioned at the edge of their receptive fields. We considered these cells as potentially driven by stimulus edge effects and therefore excluded such experiments from further analysis. To estimate the preferred orientation, direction, and spatiotemporal frequency, we modelled responses with a combination of a multivariate Gaussian with spatial frequency (x and y, deg), temporal frequency (Hz) and position (x and y, deg) as independent dimensions, a constant gain factor, and a static exponent. We fit the model to data using a large-scale nonlinear optimization algorithm (Trust Region Reflective, MATLAB Optimization Toolbox, MathWorks Inc.), generating multiple fits from randomly selected starting points and selecting the best fit (least-squares criterion). The quality of model fits was inspected visually for all neurons included in the data set. EM connectivity was analysed using custom-written software in MATLAB and Python. Connectivity analysis that did not utilize functional information (Figs 1e and 3, Extended Data Fig. 7) started with the entire population of excitatory neuronal targets in the reconstructed network. Network modularity and neuron connectivity motifs (Fig. 1e and Extended Data Fig. 7) were analysed with code modified from the Brain Connectivity Toolbox45. We used an implementation of the Louvain method17 followed by consensus partitioning46 for weighted and directed graphs to detect communities, or interconnected pyramidal neuron targets, from our EM reconstructed network purely by anatomical connectivity. For this analysis we included only the 201 traced neurons having multiple synaptic partners (degree ≥ 2). The number of synapses reconstructed between neurons was used as weights for all analyses. Modularity Q was given by the standard equation for weighted, directed networks, Q = (1/l) Σ_{i,j∈N} [a_ij − (k_i^out k_j^in)/l] δ(m_i, m_j), where l is the total number of edges, given by l = Σ_{i,j∈N} a_ij; N is the total set of nodes; a_ij is the (i,j) element of the weighted adjacency matrix; δ(m_i, m_j) is 1 if i and j are in the same community and 0 otherwise; and k_j^in and k_i^out are the in and out degrees of the jth and ith nodes respectively, calculated by k_j^in = Σ_i a_ij and k_i^out = Σ_j a_ij. To generate null models of connectivity matrices for hypothesis testing, we shuffled the reconstructed adjacency matrix conditioned on our sample degree, weight and strength distributions (Extended Data Fig. 7)31, 47. Analysis of connectivity with neuronal function restricted our sample population to those cell pairs where both pre- and post-synaptic cells were functionally characterized. For orientation tuning (Figs 1d, f, 2, 4a–c, Extended Data Figs 5, 6, 8, 9), among 50 neurons there were 29 connected pairs. On average, we detected 1.3 synapses per connected pair where we measured orientation selectivity for both cells. We varied retinotopic position and spatial and temporal frequencies of the grating stimulus with the goal of improving the measurement of orientation preference for more cells. The sensory physiology of a subset of cells was simultaneously recorded across multiple stimulus parameters. These 120 cells were used for signal correlation analysis (Extended Data Fig. 10). Potential synapse length (L) represents the degree to which pairs of neurons’ axonal and dendritic arbors come sufficiently close to make a synapse (Figs 2a, c–f, 3b, d, Extended Data Figs 9, 10). 
For excitatory pyramidal cells, we computed this length of potential synaptic connectivity between all pairs by first resampling the dendritic and axonal arbor skeletons to a maximum segment length of 40 nm (the average thickness of the EM sections) and summing the length of all dendrite segments within a maximum spine length distance of the axon (s = 5 μm: Figs 2, 3 and Extended Data Fig. 10; s = 1 μm: Extended Data Fig. 9). We use s = 5 μm based on the longest spine connecting functionally connected neurons (~5 μm). Analysis of neurons connected by multiple synapses (Fig. 3) was not restricted to cell pairs where both pre- and post-synaptic cells were physiologically characterized. This population included 137 neurons connected by 267 synapses in 115 multi-synapse cell pairs whose axonal and dendritic arbors were traced exhaustively in the aligned volume. As a comparison population, we used 25 unique pairs connected by one synapse from the functionally characterized population described above, because they were also reconstructed throughout the aligned volume. To examine whether poly-synaptic connectivity occurs more often than expected by chance, we first computed a population average synapse rate (λ) normalized by potential synapse length, by dividing the total number of synapses reconstructed from the set of 50 functionally characterized neurons by their total pairwise L. We next compared λ for individual neuron pairs each connected by different numbers of synapses (Fig. 3b). This was used to assess whether multiple synapses occurred more often than predicted from a simple Poisson model. We examined the frequency of clustered vs distant synapses by comparing synapse pairs that were separated by >20 μm or <20 μm. For each synapse from each pair of neurons connected by n synapses, we computed the total L within 20 μm or beyond 20 μm from that synapse. We then took the fraction of the overlap beyond 20 μm as the expected probability that each of the (n − 1) other synapses would occur >20 μm away. The expected number of distant synapses was taken as (n − 1) times the fraction of overlap beyond 20 μm, which was compared with the actual number of distant synapses observed (Fig. 3d). 3D renderings were generated using Blender (http://www.blender.org/) (Figs 1d, 2a, 3a, Extended Data Fig. 6, Supplementary Data 1–3), Imaris (Bitplane) (Extended Data Fig. 4 and Supplementary Video 1), and itk-SNAP (Fig. 4a). Cytoscape (http://www.cytoscape.org/) was used for network graph layouts (Fig. 1f). Statistical methods were not used to predetermine sample sizes. Statistical comparisons between sample distributions were done with permutation tests (that is, Monte Carlo-based randomization tests) unless otherwise noted. Permutation tests were ideal because they do not assume the underlying distributions are normal, nor do they require the observations to be independent. For permutation tests, we computed the incidence of differences between means or Pearson’s linear correlation coefficient of randomly drawn samples from combined sample distributions exceeding the empirical difference (Figs 2b–d, f, 4c and Extended Data Figs 7b, 9a, b, 10c, d). Cochran-Armitage two-sided tests for trend were used on proportional binned data with linear weights (Fig. 2b, f). Standard errors were calculated from bootstrapped sample distributions. 
For cumulative distributions (Figs 2c, d, and Extended Data Figs 9a, b, 10c, d), we generated 1,000–10,000 resampled distributions by randomly drawing, with replacement, the number of observed values from the sample distribution, and extracted the standard deviation at each step of the empirical CDF. For binned data (Fig. 2b, f, and Extended Data Fig. 9d), each resampled distribution was binned and the standard deviation was computed from the resampled probabilities or rates within each bin. Custom code is available upon request.
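As a generic illustration of the permutation (randomization) testing described above, and not the authors' own code (which is stated to be available upon request), a minimal sketch for comparing the means of two sample distributions might look like this:

    # Minimal permutation-test sketch: two-sided Monte Carlo p-value for the
    # difference in means between two samples, drawn from the combined distribution.
    import numpy as np

    def permutation_test_mean_diff(a, b, n_perm=10_000, rng=None):
        """Return a two-sided p-value for the difference in means between a and b."""
        rng = np.random.default_rng() if rng is None else rng
        a, b = np.asarray(a, float), np.asarray(b, float)
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)  # randomly reassign observations to the two groups
            diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
            count += diff >= observed
        return (count + 1) / (n_perm + 1)  # conservative Monte Carlo estimate

    # Example with made-up data:
    rng = np.random.default_rng(0)
    x = rng.normal(1.0, 1.0, 30)
    y = rng.normal(1.5, 1.0, 30)
    print(f"permutation p-value ~ {permutation_test_mean_diff(x, y, rng=rng):.3f}")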


News Article | November 13, 2015
Site: www.greencarcongress.com

Researchers led by Ashley Weaver, assistant professor at the Virginia Tech-Wake Forest University Center for Injury Biomechanics, have developed a method to compute crash injury metrics and risks as functions of precrash occupant position. The process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position, to better understand important factors that lead to more severe motor vehicle crash injuries. The modeling results provide details not available from using crash test dummies (anthropomorphic test devices, or ATDs). More than 33,000 Americans die in motor vehicle crashes annually, according to the Centers for Disease Control and Prevention. Modern restraint systems save lives, but some deaths and injuries remain, and restraints themselves can cause some injuries. Although crash-test dummies help engineers design safer cars, they provide only limited information about forces the body experiences during impact. Computer models of vehicle crashes, on the other hand, can provide more sophisticated information on how to improve restraints and other safety systems. The models also help researchers simulate the effects of thousands of variables that would be far too slow to test in physical crash tests. The Crash Injury Research and Engineering Network (CIREN) has created a database of real-world vehicle crashes for researchers to test with computer models. Working with Joel Stitzel and graduate students and staff from the Center for Injury Biomechanics, Weaver developed a 3-phase real-world motor vehicle crash (MVC) reconstruction method to analyze injury variability as a function of precrash occupant position for 2 full-frontal CIREN cases. The researchers used the NSF-supported Blacklight supercomputer at the Pittsburgh Supercomputing Center and the DEAC Cluster at Wake Forest University to run thousands of simulations drawn from hundreds of cases. The simulations used virtual versions of the Toyota Camry and Chevrolet Cobalt. Weaver worked with members of the Extreme Science and Engineering Discovery Environment (XSEDE) Extended Collaborative Support Service team—staff with expertise in many areas of advanced computing—who helped set up the cyberinfrastructure and workflows needed to run the simulations. Supported by a five-year, $121-million NSF grant, XSEDE provides a collection of integrated digital resources that scientists can use to access advanced computing resources, data and expertise. Using the Total Human Model for Safety (THUMS), developed by Toyota Central Research and Development Labs, Weaver and her team showed that simulations can reproduce real-world injury patterns and predict details crash-test dummies can’t provide. Along the way, they demonstrated how injury-causing stress moves from the foot to the lower leg as a driver’s head comes forward into a frontal airbag, and that more reclined seating positions can lead to a higher risk of head and chest injuries. Weaver and her colleagues published their findings in an open-access paper in Traffic Injury Prevention. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position, which is often uncertain in real-world MVCs. 
This study provides perspective on the injury risk sensitivity of precrash occupant positioning within the vehicle compartment. By studying a variety of potential occupant positions, we can understand important factors that lead to more severe injuries and potentially mitigate these injuries with advanced safety systems to protect occupants in more dangerous positions. Evaluating additional cases in further detail will allow for development of new injury metrics and risk functions from real-world crash data to assess the effectiveness of restraint systems to prevent and mitigate injuries that are not easily studied using postmortem human subjects or ATDs.
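As an illustrative sketch of the position-sensitivity idea, and not THUMS or the authors' reconstruction pipeline, one could sample a range of precrash occupant positions, evaluate an assumed injury-risk function for each, and summarize how much the predicted risk varies. The logistic risk model and parameter ranges below are hypothetical placeholders.

    # Hypothetical position-sensitivity sweep: sample occupant positions, score an
    # assumed injury-risk function, and report the spread of predicted risk.
    import math
    import random
    import statistics

    def injury_risk(seat_forward_mm, recline_deg):
        """Hypothetical logistic model: risk grows with a more forward seat and more recline."""
        z = -2.0 + 0.01 * seat_forward_mm + 0.05 * (recline_deg - 20.0)
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(1)
    positions = [(random.uniform(0, 200), random.uniform(10, 40)) for _ in range(1000)]
    risks = [injury_risk(seat, recline) for seat, recline in positions]

    print(f"predicted risk: mean {statistics.mean(risks):.2f}, "
          f"stdev {statistics.stdev(risks):.2f}, "
          f"range {min(risks):.2f}-{max(risks):.2f} across sampled positions")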


News Article | February 3, 2016
Site: www.scientificcomputing.com

A $1.8-million National Institutes of Health grant to the Pittsburgh Supercomputing Center (PSC) will make a next-generation Anton 2 supercomputer developed by D. E. Shaw Research (DESRES) available to the biomedical research community. A specialized system for modeling the function and dynamics of biomolecules, the Anton 2 machine at PSC will be the only one of its kind publicly available to U.S. scientists. The grant also extends the operation of the Anton 1 supercomputer currently at PSC until the new Anton 2 is deployed, expected in the Fall of 2016. “Many life processes important to understanding the molecular basis of cellular events occur over timescales exceeding a millisecond in length,” says Phil Blood, principal investigator of the new grant and senior computational scientist at PSC. “Anton 2’s performance for molecular simulation will exceed that of current general-purpose supercomputing systems by orders of magnitude, enabling the study of biological processes not otherwise possible and offering new possibilities in drug discovery and development.” Molecular dynamics simulations can provide insights into the behavior of proteins, cell membranes, nucleic acids and other molecules at the atomic scale. But even the most advanced general-purpose supercomputers struggle to simulate beyond the microsecond level — a thousand times shorter than the millisecond level — without taking months of computational time. Anton has changed this, giving researchers practical access to simulations at far longer timescales. The Anton 1 supercomputer that has been in use at PSC since 2010 has been a great success and has so far enabled 277 simulation projects by 127 different PIs across the US and resulted in more than 120 peer-reviewed research papers. Three of these studies appeared in the scientific journal Nature, one of the international scientific community’s premier publications. The new 128-node Anton 2 will expand on the power and capabilities of the Anton 1 currently at PSC, increasing simulation speed approximately four-fold and enabling the simulation of biomolecular systems with around five times as many atoms as was possible using the previous machine. These capabilities will allow researchers to study larger biomolecules on timescales that weren’t previously accessible to molecular dynamics modeling. As with Anton 1, DESRES will provide the Anton 2 system without cost for non-commercial use by U.S. researchers. Time on the machine is expected to be allotted on the basis of research proposals submitted to an independent expert committee convened by the National Research Council at the National Academy of Sciences. More information on the Anton project at PSC can be found at https://www.psc.edu/index.php/computing-resources/anton
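Rough arithmetic makes the timescale gap concrete. Assuming illustrative throughput numbers (these are not figures from the article or from PSC or DESRES benchmarks), reaching one millisecond of simulated time would take:

    # Back-of-the-envelope timescale arithmetic with assumed throughputs.
    NS_PER_MS = 1_000_000  # nanoseconds in a millisecond

    def days_to_reach(target_ns, throughput_ns_per_day):
        return target_ns / throughput_ns_per_day

    general_purpose = 100     # assumed ns/day for a typical cluster on a mid-sized biomolecular system
    special_purpose = 10_000  # assumed ns/day for a special-purpose machine (illustrative only)

    print(f"general-purpose: {days_to_reach(NS_PER_MS, general_purpose):,.0f} days "
          f"(~{days_to_reach(NS_PER_MS, general_purpose) / 365:.0f} years)")
    print(f"special-purpose: {days_to_reach(NS_PER_MS, special_purpose):,.0f} days")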


News Article | November 13, 2015
Site: www.scientificcomputing.com

More than 33,000 Americans die in motor vehicle crashes annually, according to the Centers for Disease Control and Prevention. Modern restraint systems save lives, but some deaths and injuries remain — and restraints themselves can cause some injuries. "Crash-test dummies" help engineers design safer cars, but provide only limited information about forces the body experiences during impact. Computer models of vehicle crashes, on the other hand, provide more sophisticated information on how to improve restraints and other safety systems. The models also help researchers simulate the effects of thousands of variables that would be far too slow to test in physical crash tests. "There's really limited information you can get from a crash-test dummy — you get only about 20 data points," says Ashley A. Weaver, an assistant professor at the Virginia Tech-Wake Forest University Center for Injury Biomechanics and a former National Science Foundation (NSF) graduate research fellow. "The human body model gives us much more, predicting injuries in organs that aren't in that dummy, such as lung contusions." The Crash Injury Research and Engineering Network (CIREN) has created a database of real-world vehicle crashes for researchers to test with computer models. Working with Joel Stitzel and graduate students and staff from the Center for Injury Biomechanics, Weaver used the NSF-supported Blacklight supercomputer at the Pittsburgh Supercomputing Center and the DEAC Cluster at Wake Forest University, to run thousands of simulations drawn from hundreds of cases. The simulations used virtual versions of the Toyota Camry and Chevrolet Cobalt. Weaver worked with members of the Extreme Science and Engineering Discovery Environment (XSEDE) Extended Collaborative Support Service team — staff with expertise in many areas of advanced computing — who helped set up the cyberinfrastructure and workflows needed to run the simulations. Supported by a five-year, $121 million NSF grant, XSEDE provides a collection of integrated digital resources that scientists can use to access advanced computing resources, data and expertise. Using the Total Human Model for Safety, developed by Toyota Central Research and Development Labs, Weaver and her team showed that simulations can reproduce real-world injury patterns and predict details crash-test dummies can't provide. Along the way, they demonstrated how injury-causing stress moves from the foot to the lower leg as a driver's head comes forward into a frontal airbag, and that more reclined seating positions can lead to a higher risk of head and chest injuries. Weaver and her colleagues published their findings in Traffic Injury Prevention in October 2015. The simulations allowed the researchers to quantify the sensitivity and uncertainty of the injury risk predictions based on occupant position, which is difficult to determine in real-world motor vehicle crashes. "By studying a variety of potential occupant positions," the team concluded, "we can understand important factors that lead to more severe injuries and potentially mitigate these injuries with advanced safety systems to protect occupants in more dangerous positions."


News Article | November 19, 2015
Site: phys.org

High-performance computing (or HPC) enables discoveries in practically every field of science—not just those typically associated with supercomputers like chemistry and physics, but also in the social sciences, life sciences and humanities. By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored. NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year—from high-school students to Nobel Prize winners—expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically. Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science. Pineapples don't just taste good—they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that allow plants to thrive in water-limited environments. Led by the University of Arizona, Texas Advanced Computing Center, Cold Spring Harbor Laboratory and University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics and democratize access to U.S. supercomputing capabilities. This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots. Software that simulates the effect of an electric charge passing through a transistor—only a few atoms wide—is helping researchers to explore alternative materials that may replace silicon in future nanodevices. The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices. NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually. Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict. The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments. In 2014, the SCEC team investigated the earthquake potential of the Los Angeles Basin, where the Pacific and North American Plates run into each other at the San Andreas Fault. Their simulations showed that the basin essentially acts like a big bowl of jelly that shakes during earthquakes, producing more high-shaking ground motions than the team expected. 
Using the NSF-funded Blue Waters supercomputer at the National Center for Supercomputing Applications and the Department of Energy-funded Titan supercomputer at the Oak Ridge Leadership Computing Facility, the researchers turned their simulations into seismic hazard models. These models describe the probability of an earthquake occurring in a given geographic area, within a given window of time and with ground motion intensity exceeding a given threshold. Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain—and restraints themselves can cause injuries. Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can. By studying a variety of potential occupant positions, they're uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate these injuries, using advanced safety systems. Since Albert Einstein, scientists have believed that when major galactic events like black hole mergers occur, they leave a trace in the form of gravitational waves—ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events. Since gravitational waves are expected to travel at the speed of light, detecting them requires two gravitational wave observatories, located 1,865 miles apart and working in unison, that can triangulate the gravitational wave signals and determine the source of the wave in the sky. In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem. The observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than 1 petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video. To achieve accurate and rapid gravity wave detection, researchers use Extreme Science and Engineering Discovery Environment (XSEDE)—a powerful collection of advanced digital resources and services—to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data. Advanced LIGO came online in September and advanced computing will play an integral part in its future discoveries. What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example. In 2013, after five years as one of NSF's flagship computer systems, the Texas Advanced Computing Center (TACC) disassembled Ranger and shipped it from Austin, Texas to South Africa, Tanzania and Botswana to give root to a young and growing supercomputing community. With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing. The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometer Array, a new radio telescope being built in Australia and Africa which will offer the highest resolution images in all of astronomy. 
In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models in order to provide consistent coverage of the globally significant region. The models will allow researchers to see in detail how warming in the region affects the landscape in remote areas, and allow them to compare changes over time. The project relies, in part, on the computing and data analysis powers of Blue Waters, which will let researchers store, access and analyze large numbers of images and models. To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists. Last summer, TACC hosted 50 rising high school juniors and seniors to participate in an innovative new STEM program, CODE@TACC. The program introduced students to high-performance computing, life sciences and robotics. On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones. High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice. Experimenting with cloud computing on new platforms: in 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally-enabled applications of cloud computing. CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and provides researchers with the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up. Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before. The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new for supercomputer integration, such as neuroscience. SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal, and can significantly improve the productivity of researchers. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.


News Article | November 18, 2015
Site: www.scientificcomputing.com

When researchers need to compare complex new genomes, or map new regions of the Arctic in high-resolution detail, or detect signs of dark matter, or make sense of massive amounts of functional MRI data, they turn to the high-performance computing and data analysis systems supported by the National Science Foundation (NSF). High-performance computing (or HPC) enables discoveries in practically every field of science — not just those typically associated with supercomputers like chemistry and physics, but also in the social sciences, life sciences and humanities. By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored. NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year — from high-school students to Nobel Prize winners — expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically. Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science. Pineapples don't just taste good — they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that allow plants to thrive in water-limited environments. Led by the University of Arizona, Texas Advanced Computing Center, Cold Spring Harbor Laboratory and University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics and democratize access to U.S. supercomputing capabilities. This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots. 2. Designing new nanodevices Software that simulates the effect of an electric charge passing through a transistor — only a few atoms wide — is helping researchers to explore alternative materials that may replace silicon in future nanodevices. The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices. NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually. Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict. The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments. 
Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain, and restraints themselves can cause injuries. Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can provide. By studying a variety of potential occupant positions, they are uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate those injuries using advanced safety systems.

Since Albert Einstein first predicted them, scientists have believed that major cosmic events such as black hole mergers leave a trace in the form of gravitational waves, ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events. Because gravitational waves are expected to travel at the speed of light, detecting them requires two gravitational wave observatories, located 1,865 miles apart and working in unison, that can triangulate the signals and determine the source's location in the sky. In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem: the observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than one petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video. To achieve accurate and rapid gravitational wave detection, researchers use the Extreme Science and Engineering Discovery Environment (XSEDE), a powerful collection of advanced digital resources and services, to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data. Advanced LIGO came online in September, and advanced computing will play an integral part in its future discoveries.
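Those two figures, the yearly data volume and the detector separation, can be sanity-checked with quick back-of-the-envelope arithmetic. The sketch below uses assumed values for the petabyte definition and HD bitrate; they are not taken from the article.

```python
# Back-of-the-envelope checks of two figures quoted for Advanced LIGO.
# Assumptions (not from the article): 1 petabyte = 1e15 bytes and an
# HD video stream of roughly 20 megabits per second.

PETABYTE_BITS = 1e15 * 8
HD_BITRATE_BPS = 20e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600

hd_years = PETABYTE_BITS / HD_BITRATE_BPS / SECONDS_PER_YEAR
print(f"1 PB ~ {hd_years:.1f} years of HD video")  # ~12.7 years, in line with 13.3

# A detector separation of 1,865 miles implies a maximum arrival-time
# difference of about 10 milliseconds for a signal travelling at light speed.
separation_m = 1865 * 1609.344
delay_ms = separation_m / 299_792_458 * 1000
print(f"max inter-detector delay ~ {delay_ms:.0f} ms")  # ~10 ms
```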
What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example. In 2013, after five years as one of NSF's flagship computer systems, the Texas Advanced Computing Center (TACC) disassembled Ranger and shipped it from Austin, TX, to South Africa, Tanzania and Botswana to give root to a young and growing supercomputing community. With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing. The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometre Array, a new radio telescope being built in Australia and Africa that will offer the highest-resolution images in all of astronomy.

In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models to provide consistent coverage of this globally significant region. The models will allow researchers to see in detail how warming affects the landscape in remote areas and to compare those changes over time. The project relies, in part, on the computing and data analysis power of Blue Waters, which lets researchers store, access and analyze large numbers of images and models.
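As a rough sketch of the change-over-time comparison such elevation models enable, differencing two co-registered DEM rasters of the same area produces an elevation-change map. The arrays and thresholds below are synthetic stand-ins, not ArcticDEM data or the project's actual pipeline.

```python
import numpy as np

# Synthetic stand-ins for two co-registered Arctic elevation tiles (metres),
# acquired a couple of years apart; a real workflow would read ArcticDEM
# GeoTIFF tiles instead of generating random surfaces.
rng = np.random.default_rng(42)
dem_earlier = rng.normal(loc=120.0, scale=5.0, size=(500, 500))
dem_later = dem_earlier + rng.normal(loc=-0.3, scale=0.2, size=(500, 500))

# Elevation-change map: negative values indicate surface lowering
# (for example thaw subsidence or erosion) between the two dates.
change = dem_later - dem_earlier

print(f"mean elevation change: {change.mean():+.2f} m")
print(f"fraction of tile lowered by more than 0.5 m: {(change < -0.5).mean():.1%}")
```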
To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists. Last summer, TACC hosted 50 rising high school juniors and seniors for CODE@TACC, an innovative new STEM program that introduced students to high-performance computing, life sciences and robotics. On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones. High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice.

9. Experimenting with cloud computing on new platforms

In 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally enabled applications of cloud computing. CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and gives researchers the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up. Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before.

The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new to supercomputer integration, such as neuroscience. SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal and can significantly improve researchers' productivity. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.


The security of the more than $7 billion in research funded by the National Science Foundation (NSF) will be significantly bolstered, thanks to a $5 million grant awarded to Indiana University, the National Center for Supercomputing Applications (NCSA), the Pittsburgh Supercomputing Center (PSC) and the University of Wisconsin-Madison for a collaborative effort to create the NSF Cybersecurity Center of Excellence. This funding will establish the Center for Trustworthy Scientific Cyberinfrastructure (CTSC), a three-year-old collaboration among those institutions, as the NSF Cybersecurity Center of Excellence, an entity focused on addressing the cybersecurity challenges of NSF-funded scientific research. Ensuring that scientific computing remains trustworthy and uncorrupted is essential to protecting the nation's science.

In its role as a Cybersecurity Center of Excellence, the CTSC will provide readily available cybersecurity services tailored to the NSF science community. These resources will include leadership and coordination across organizations, as well as education and training to expand the pool of available cybersecurity expertise.

"NSF-funded cyberinfrastructure presents unique challenges for operational security personnel and impacts other important areas of research affecting society, including ocean sciences, natural hazards, engineering, biology and physics," said Anita Nikolich, cybersecurity program director within NSF's advanced cyberinfrastructure division. "Organizations that host cyberinfrastructure must find the right balance of security, privacy and usability while maintaining an environment in which data are openly shared. Many research organizations lack expertise in technical and policy security, and could benefit from an independent, shared security resource pool."

The CTSC will collaborate directly with NSF-funded research organizations to address their cybersecurity challenges and provide forums for cybersecurity collaboration across organizations. For example, Jim Basney of the National Center for Supercomputing Applications will lead CTSC support activities on identity and access management for research organizations.

"Cybersecurity is no longer solely a technical matter; it's a critical part of any organization's risk management," said Von Welch, director of Indiana University's Center for Applied Cybersecurity Research (CACR) and CTSC principal investigator. "Addressing the cybersecurity risks to science requires a comprehensive understanding of research and the threats it faces. Many of these threats are those faced by other organizations on the Internet, but others are unique to the science community with its collaborative nature and use of high-end information technology and cyberinfrastructure."

The CTSC will also convene an annual NSF Cybersecurity Summit, led by PSC Chief Information Security Officer James A. Marsteller, to share experiences, provide training and discuss cybersecurity challenges. "Organized with significant input from the NSF community, the annual Summit provides a key opportunity to share experiences, lessons learned and advances with other NSF projects," Marsteller said. "The forum provides an opportunity to discuss serious issues around implementing cybersecurity not only of a technical nature, but also cultural, managerial and budgetary and the like."

One safeguard the CTSC will promote is software assurance: experienced, respected experts in that field, such as Barton Miller, professor at the University of Wisconsin-Madison, will offer their expertise to reduce the risk of vulnerabilities and breaches for researchers.

"Every day, the news continues to document why truly excellent research in highly applied cybersecurity is a national priority," said Brad Wheeler, IU vice president for information technology and interim dean of the IU School of Informatics and Computing. "This award adds to the many national distinctions that CACR has achieved in its 13 years as part of IU's formidable cybersecurity capabilities in education, research and operations."

Additionally, the CTSC will collaborate with the U.S. Department of Energy's Energy Sciences Network (ESnet) to develop a threat profile for open science. "The Department of Energy and NSF enable scientific discovery in a range of domains critical to our nation's future," said Greg Bell, director of ESnet and division director at Lawrence Berkeley National Laboratory. "Working together to understand cybersecurity threat models shared by these collaborations is an important step forward for the two agencies, and ESnet is delighted to be collaborating on this effort."
