Abstract: Kaho Maeda, Dr. Hideto Ito and Professor Kenichiro Itami of the JST-ERATO Itami Molecular Nanocarbon Project and the Institute of Transformative Bio-Molecules (ITbM) of Nagoya University, together with their colleagues, have reported in the Journal of the American Chemical Society the development of a new and simple "helix-to-tube" strategy for synthesizing covalent organic nanotubes. Organic nanotubes (ONTs) are organic molecules with tubular nanostructures. Nanostructures are structures that range between 1 nm and 100 nm in size, and ONTs have a nanometer-sized cavity. Various applications of ONTs have been reported, including molecular recognition materials, transmembrane ion channels/sensors, electro-conductive materials, and organic photovoltaics. Most ONTs are constructed by a self-assembly process based on weak non-covalent interactions such as hydrogen bonding, hydrophobic interactions and π-π interactions between aromatic rings. Because these interactions are relatively weak, most non-covalent ONTs have relatively fragile structures. Covalent ONTs, whose tubular skeletons are cross-linked by covalent bonds (bonds made by the sharing of electrons between atoms), can be synthesized from non-covalent ONTs. While covalent ONTs show higher stability and mechanical strength than non-covalent ONTs, a general synthetic strategy for covalent ONTs had yet to be established. A team led by Hideto Ito and Kenichiro Itami has succeeded in developing a simple and effective method for the synthesis of robust covalent ONTs (tubes) by operationally simple light irradiation of a readily accessible helical polymer (helix). This so-called "helix-to-tube" strategy is based on the following steps: 1) polymerization of a small molecule (monomer) to form a helical polymer, followed by 2) light-induced cross-linking at longitudinally repeating pitches across the whole helix to form covalent nanotubes. 
With this strategy, the team designed and synthesized diacetylene-based helical polymers (acetylenes are molecules that contain carbon-carbon triple bonds), poly(m-phenylene diethynylene)s (poly-PDEs), which have chiral amide side chains that can induce helical folding through hydrogen-bonding interactions. The researchers revealed that light-induced cross-linking at longitudinally aligned 1,3-butadiyne moieties (four-carbon units with triple bonds starting at the first and third carbons) could generate the desired covalent ONT. "This is the first time in the world to show that the photochemical polymerization reaction of diynes is applicable to the cross-linking reaction of a helical polymer," says Maeda, a graduate student who mainly conducted the experiments. The "helix-to-tube" method is expected to be able to generate a range of ONT-based materials by simply changing the arene (aromatic ring) unit in the monomer. "One of the most difficult parts of this research was how to obtain scientific evidence on the structures of poly-PDEs and covalent ONTs," says Ito, one of the leaders of this study. "We had little experience with the analysis of polymers and macromolecules such as ONTs. Fortunately, thanks to the support of our collaborators at Nagoya University, who are specialists in these particular research fields, we finally succeeded in characterizing these macromolecules by various techniques including spectroscopy, X-ray diffraction, and microscopy." "Although it took us about a year to synthesize the covalent ONT, it took another year and a half to determine the structure of the nanotube," says Maeda. "I was extremely excited when I first saw the transmission electron microscopy (TEM) images, which indicated that we had actually made the covalent ONT that we were expecting," she continues. 
"The best part of the research for me was finding that the photochemical cross-linking had taken place on the helix for the first time," says Maeda. "In addition, photochemical cross-linking is known to usually occur in the solid phase, but we were able to show that the reaction takes place in the solution phase as well. As these reactions had never been carried out before, I was dubious at first, but it was a wonderful feeling to succeed in making the reaction work for the first time in the world. I can say for sure that this was a moment where I really found research interesting." "We were really excited to develop this simple yet powerful method to achieve the synthesis of covalent ONTs," says Itami, the director of the JST-ERATO project and the center director of ITbM. "The 'helix-to-tube' method enables molecular-level design and will lead to the synthesis of various covalent ONTs with fixed diameters and tube lengths and with desirable functionalities." "We envisage that ongoing advances in the 'helix-to-tube' method may lead to the development of various ONT-based materials, including electro-conductive materials and luminescent materials," says Ito. "We are currently continuing work on the 'helix-to-tube' methodology, and we hope to synthesize covalent ONTs with interesting properties for various applications."

About Nagoya University JST-ERATO Itami Molecular Nanocarbon Project

The JST-ERATO Itami Molecular Nanocarbon Project was launched at Nagoya University in April 2014. This five-year project seeks to open the new field of nanocarbon science. The project entails the design and synthesis of as-yet largely unexplored nanocarbons as structurally well-defined molecules, and the development of novel, highly functional materials based on these nanocarbons. 
Researchers combine chemical and physical methods to achieve the controlled synthesis of well-defined, uniquely structured nanocarbon materials, and conduct interdisciplinary research encompassing the control of molecular arrangement and orientation, structural and functional analysis, and applications in devices and biology. The goal of this project is to design, synthesize, utilize, and understand nanocarbons as molecules.

About WPI-ITbM

The Institute of Transformative Bio-Molecules (ITbM) at Nagoya University in Japan is committed to advancing the integration of synthetic chemistry, plant/animal biology and theoretical science, all of which are traditionally strong fields at the university. ITbM is one of the research centers of the Japanese MEXT (Ministry of Education, Culture, Sports, Science and Technology) program, the World Premier International Research Center Initiative (WPI). The aim of ITbM is to develop transformative bio-molecules: innovative functional molecules capable of bringing about fundamental change in biological science and technology. Research at ITbM is carried out in a "Mix-Lab" style, in which international young researchers from various fields work side by side in the same lab, enabling interdisciplinary interaction. Through these endeavors, ITbM will create transformative bio-molecules that dramatically change the way research is done in chemistry, biology and related fields, to help solve urgent problems with significant impact on society, such as environmental issues, food production and medical technology.

About JST-ERATO

ERATO (Exploratory Research for Advanced Technology), one of the Strategic Basic Research Programs, aims to form a headstream of science and technology, and ultimately to contribute to science, technology, and innovation that will change society and the economy in the future. 
In ERATO, a Research Director, the principal investigator of an ERATO research project, establishes a new research base in Japan and recruits young researchers to implement his or her challenging research project within a limited time frame.

Site: http://www.nature.com/nature/current_issue/

Male Long–Evans rats were obtained from Charles River at 8–10 weeks old. Rats were pair housed in a colony maintained on a 12 h light/dark cycle, and were given food and water ad libitum outside of behavioural training. During training, rats were given food ad libitum but worked in a closed economy for water, obtaining 15 ml of 5% sucrose solution during the task. Experimental protocols were approved by Stanford University IACUC to meet guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Sample sizes were chosen to meet or exceed those in previously published accounts of cognitive and decision-making tasks in rats (refs 31, 32). Post-hoc tests then verified adequate statistical power given the observed effect sizes (see ‘Power analyses’). All behaviour was assessed in operant chambers (Med Associates). One wall of the operant chamber was arranged such that the sucrose port (Med Associates ENV-200R3BM) was positioned in the bottom centre slot. The nosepoke (Med Associates ENV-114BM) used to initiate each trial was slotted immediately above the sucrose port. The retractable choice levers (Med Associates, ENV-112CM) were on either side of the sucrose port (Extended Data Fig. 1). In the first phase of training, both levers were extended into the chamber, and every press resulted in a 50-μl sucrose reward. Rats were given two hours to earn and retrieve 150 total sucrose rewards. Most rats completed this phase in one day. In the second phase of training, a randomly selected lever entered the chamber and retracted when pressed. Every press resulted in a 50-μl sucrose reward. Rats were given 2 h to earn and retrieve 200 sucrose rewards. In the third phase of training, rats were trained to initiate each trial with a one-second nosepoke. On the first trial, the rat was required to nosepoke for 250 ms, after which both levers would enter the chamber. The rat would then press a lever to obtain a 50-μl sucrose reward. 
In each subsequent trial, the length of the required nosepoke incremented by 5 ms. Rats were given 2 h to complete 200 lever presses. In the final phase of training, rats were exposed to the behavioural task described in Fig. 1a. Each trial was initiated with a 1-s nosepoke. If the rat failed to hold the nosepoke for 1 s, it could try again immediately without penalty, but the 1-s clock would start again from zero. One lever always delivered a 50-μl reward, while the other delivered a 10-μl reward with 75% probability and a 170-μl reward with 25% probability (expected value = 50 μl). These objective expected values were held constant throughout the task. For the first 50 ‘forced choice’ trials, one randomly chosen lever entered the chamber, and the rat pressed it to obtain its reward. For the remaining 200 ‘free choice’ trials, both levers entered the chamber and the rat was allowed to choose. Rats were trained until their fraction of risky choices across three consecutive days varied by less than 10%. On average, rats required approximately 5 sessions in the final phase of training before reaching a stable behavioural baseline (mean = 4.85, s.d. = 2.29). In total, 12 out of 132 rats failed to learn the task. Rats were excluded from experiments if they failed to learn the initial lever pressing task, lost a fibreoptic implant before the conclusion of testing, or failed to develop stable baseline behaviour; these criteria were established in advance of experimentation. All cell counting data collection in Extended Data Figs 5, 6, 7 was conducted blinded to condition; the behavioural experimenter was not blind to the risk preference of each animal, but instead all behaviour was conducted while the experimenter monitored the rats from a different room, so as not to influence the animals’ choices. To validate rat sensitivity to relative expected value across the two levers, rats were trained to a stable baseline, as described above. 
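As a concrete check of these contingencies, the following Python sketch (variable and function names are illustrative, not from the paper) encodes the two reward schedules and verifies that they are matched in expected value:

```python
import random

# Reward contingencies from the task: the safe lever always pays 50 ul;
# the risky lever pays 10 ul with 75% probability and 170 ul with 25%
# probability, so both levers have an expected value of 50 ul.
SAFE_REWARD_UL = 50.0
RISKY_OUTCOMES_UL = (10.0, 170.0)
RISKY_PROBS = (0.75, 0.25)

def expected_value(outcomes, probs):
    """Expected value of a discrete reward schedule."""
    return sum(o * p for o, p in zip(outcomes, probs))

def draw_risky(rng=random):
    """Sample one outcome from the risky lever."""
    return RISKY_OUTCOMES_UL[0] if rng.random() < RISKY_PROBS[0] else RISKY_OUTCOMES_UL[1]

# Both levers are matched in expected value (10*0.75 + 170*0.25 = 50):
assert expected_value(RISKY_OUTCOMES_UL, RISKY_PROBS) == SAFE_REWARD_UL
```

Matching the objective expected values ensures that any preference between the levers reflects the animal's risk attitude rather than a difference in expected payoff.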
The expected value of the safe lever was then systematically increased across days, to map out behavioural response curves (Extended Data Fig. 1b). To validate that rats’ choices were due to preference for the safe or risky reward schedule, rather than simply to side bias or indifference, rats were trained to a stable baseline. The location of the risky lever was then alternated between left and right levers at an uncued time in blocks of 100–250 trials (Extended Data Fig. 1c). Trial lengths for these blocks were on the order of the number of trials used in the main gambling task (200 free choice trials). The loss-sensitive index was determined as shown in equation (1). PPX (Sigma-Aldrich, A1237) and A-77636 hydrochloride (Tocris Biosciences, 1701) were diluted in physiological saline and injected intraperitoneally 30 min before the start of the task at the doses described in Fig. 2. A large cohort of animals was trained for this experiment, and separate animals within the cohort were used for each drug dose. Animals were trained to a stable baseline, as described above, before drug injections were initiated. Surgeries were performed on 8–10-week-old rats. Rats were anaesthetized with 2–3% isoflurane; scalps were shaved, and subjects were placed in a stereotactic head apparatus. Rats received a subcutaneous injection of buprenorphine (0.01 mg kg−1) and a subcutaneous injection of lactated Ringer’s solution (3 ml). Ophthalmic ointment was applied to prevent the eyes from drying. A midline scalp incision was made, and a craniotomy was drilled above each injection or fibre implantation site. For intracranial drug infusion, guide cannulas (PlasticsOne, C313G) were implanted bilaterally. OFC cannulas were implanted at (A/P 4.5, M/L ±1.4, D/V −4.2; all coordinates in mm and relative to bregma (here and below)). NAc cannulas were implanted at (A/P 1.5, M/L ±1.8, D/V −6.5). 
In both NAc and OFC, left cannulas were implanted vertically while right cannulas were implanted at a 20° angle. Dental adhesive (C&B Metabond, Parkell) was applied and dental cement (Stoelting) was added to secure the cannulas to the skull. For photometry and optogenetics experiments, virus was injected with a 10-μl glass syringe and a 33-gauge bevelled metal needle (World Precision Instruments). Importantly, virus should be injected at a titre no greater than 3 × 10^12 viral particles per ml to avoid potential cytotoxicity, and diluted in ice-cold PBS if necessary. The injection volume and flow rate (750 nl at 150 nl min−1) were controlled by an injection pump (Harvard Apparatus). Each NAc received two injections (A/P 1.5, M/L ±1.8, D/V −7.6 and −7.0). After injection, the needle was left in place for 5 additional minutes and then slowly withdrawn. All rats were injected and implanted bilaterally. In each NAc, an 8-mm fibre stub, terminated with a 2.5-mm diameter ferrule, was implanted at (A/P 1.5 mm, M/L ±1.8 mm, D/V −7.2 mm). Left fibres were implanted vertically while right fibres were implanted at a 20° angle. For stimulation, a 300-μm core diameter, 0.37 numerical aperture (NA) fibre was used; for photometry, a 400-μm core diameter, 0.48-NA low-autofluorescence fibre with low-fluorescence epoxy was used (implantable fibres assembled by Doric Lenses, using fibre manufactured by Thorlabs or CeramOptec). Dental adhesive (C&B Metabond; Parkell) was applied and light-curing composite (Flow-It ALC, Pentron Clinical, N11VH) was added to secure the ferrules to the skull. All behavioural experiments occurred at least 3 weeks after virus injection. Rats’ innate behaviour determined their assignment to ‘risk-seeking’ or ‘risk-averse’ groups. For optogenetic manipulations, half of the rats were randomly assigned to ChR2 or YFP (control) groups. For photometry, each excitation source was set to an average power of 30 μW at the fibre tip. 
Light was delivered through a 400-μm core diameter, 0.48-NA low-fluorescence patch cord (Doric Lenses) and joined to the implanted fibre ferrules using zirconia sleeves (Thorlabs). Recording location (left or right NAc) was balanced across subjects. For optogenetic stimulation, light pulses were administered for 1 s at 20 Hz at a power of 15 mW per side (0.75 mW per side corrected for duty cycle). Decision-period stimulation began when the rat initiated a nosepoke. Outcome-period stimulation occurred in the 1 s after sucrose port entry. Light was delivered through a 300-μm core diameter, 0.37-NA fibre (Thorlabs), fed through a fibre-optic rotary joint (Doric Lenses, FRJ_1x1_FC-FC), and split into two beams using a Doric minicube (Doric Lenses, DMC_1x2i_VIS_FC). At each output of the minicube a 0.5-m, 300-μm core diameter, 0.37-NA fibre, terminating in a 2.5-mm ferrule (Thorlabs), was attached. Each fibre was sheathed in a steel spring to protect it from chewing (PlasticsOne) and joined to an implanted fibre ferrule using a zirconia sleeve (Thorlabs). Rats were anaesthetized with 1–2% isoflurane and were placed in a stereotactic head apparatus. PPX was dissolved in saline (10 μg μl−1, 0.9% NaCl). Thirty minutes before the behaviour, 0.5 μl of the PPX solution was infused into each side of OFC or each side of NAc via an internal infusion needle (PlasticsOne, C313I) inserted into the guide cannula. The internal needle was connected to a 10-μl Hamilton syringe (Nanofil; WPI). Flow rate (0.1 μl min−1) was regulated by a syringe pump (Harvard Apparatus). Cannula locations were verified in Nissl-stained sections. Infusions were conducted in an ABABA design, alternating infusions of saline or PPX across days. Rats were anaesthetized with Beuthanasia and perfused transcardially, first with ice-cold PBS (pH 7.4) and then with 4% paraformaldehyde (PFA) dissolved in PBS. The brains were removed and post-fixed in 4% PFA overnight at 4 °C, and then equilibrated in 30% sucrose in PBS. 
Forty-micrometre-thick coronal sections were prepared on a freezing microtome (Leica) and stored in cryoprotectant (25% glycerol and 30% ethylene glycol in PBS, pH 6.7) at 4 °C. Cell counts were conducted by blinded experimenters. For anti-D2R staining, an anti-D2R antibody (Millipore, AB1558) was used as described below. For anti-ChAT staining, an anti-ChAT antibody (Millipore, AB144P) was used as previously described (ref. 33). For anti-GFP staining, an anti-GFP antibody (Life Technologies, A-31852) was used as previously described (ref. 28). For D2R staining, the following protocol was used: (1) rinse 40-μm sections in PBS (pH 7.4), 3 × 10 min. (2) Block in PBS plus 3% normal donkey serum and 0.3% Triton X-100 (PBS++) for 30 min. (3) Incubate in primary antibody (rabbit anti-D2R, Millipore, AB1558) at 1:200 in PBS++ for 24 h at room temperature on a rotary shaker. (4) Wash slices for 4 × 15 min in PBS. (5) Incubate in secondary antibody (Alexa Fluor 647, goat anti-rabbit, Life Technologies, A-21245) at 1:200 in PBS++ overnight at room temperature on a rotary shaker. (6) Wash slices for 4 × 15 min in PBS. (7) Incubate in tertiary antibody (Alexa Fluor 647, donkey anti-goat, Life Technologies, A-21447) at 1:500 in PBS++ for 8 h at room temperature on a rotary shaker. (8) Wash for 15 min in PBS. (9) Wash for 15 min in 1:50,000 DAPI in PBS. (10) Wash for 15 min in PBS and mount with PVA-DABCO. We developed a novel dopamine D2R-specific promoter (D2SP) for expression of transgenes in rat D2R+ cells, compatible with use in a single AAV vector (Extended Data Fig. 5). The new 1.5-kb D2SP fragment was taken from a region immediately upstream of the rat D2R (also known as Drd2) gene (full sequence: Extended Data Fig. 5), differing from a previously reported D2R promoter region (ref. 34) by excluding exon 1 and including a Kozak sequence inserted between the promoter region and the gene that it controls. 
D2SP was amplified from rat genomic DNA using primers 5′-CGCACGCGTTTATCCTCGGTGCATCTCAGAG-3′ and 5′-GGCGGATCCCCCCGGCACTGAGGCTGGACAGCT-3′, digested with MluI and BamHI, and ligated with pAAV-hSYN-eYFP or pAAV-hSYN-hChR2(H134R)-eYFP digested with the same two enzymes to yield pAAV-D2SP-eYFP or pAAV-D2SP-hChR2(H134R)-eYFP, respectively. pAAV-D2RE-eYFP was constructed using the D2R promoter sequence described previously (ref. 34) to replace the hSYN promoter in pAAV-hSYN-eYFP. pAAV-D2SP-eChR2(H134R)-eYFP was constructed with the ER export motif and trafficking signal as described previously (ref. 29). pGP-CMV-GCaMP6m (Addgene plasmid 40754) and pGP-CMV-GCaMP6f (Addgene plasmid 40755) were a gift from D. Kim. The GCaMP DNA was amplified by PCR using 5′-CCGGATCCGCCACCATGGGTTCTCATCATCATCATC-3′ and 5′-CGATAAGCTTGTCACTTCGCTGTCATCATTTGTAC-3′, digested with BamHI and HindIII, and cloned under the CaMKIIa or D2SP promoters to yield pAAV-CaMKIIa-GCaMP6m, pAAV-CaMKIIa-GCaMP6f, pAAV-D2SP-GCaMP6m and pAAV-D2SP-GCaMP6f. All constructs were fully sequenced to check the accuracy of the cloning procedure, and all AAV vectors were tested for in vitro expression before viral production as AAV8/Y733F serotype packaged by the Stanford Neuroscience Gene Vector and Virus Core. Updated maps are available at http://optogenetics.org/. Primary cultured striatal neurons were prepared from P0 Sprague-Dawley rat pups (Charles River). The striatum was isolated, digested with 0.4 mg ml−1 papain (Worthington), and plated onto glass coverslips precoated with 1:30 Matrigel (Becton Dickinson Labware). Cultures were maintained in a humidified 5% CO2 incubator with Neurobasal-A medium (Invitrogen, Carlsbad) containing 1.25% FBS (Hyclone), 4% B-27 supplement (Gibco), 2 mM GlutaMAX (Gibco), and FUDR (10 mg 5-fluoro-2′-deoxyuridine and 25 μg uridine; Sigma), for 6–10 days in a 24-well plate at a density of 65,000 cells per well. 
For each coverslip, a DNA and CaCl2 mix was prepared with 1.5–3.0 μg DNA (Qiagen endotoxin-free preparation) and 1.875 μl 2 M CaCl2 (final Ca2+ concentration 250 mM) in 15 μl total H2O. To the DNA and CaCl2 mix, 15 μl of 2× HEPES-buffered saline (pH 7.05) was added, and the final volume was mixed well by pipetting. After 20 min at room temperature, the 30 μl DNA–CaCl2–HBS mix was added drop-wise into each well (from which the growth medium had been temporarily removed and replaced with 400 μl pre-warmed MEM) and transfection was allowed to proceed at 37 °C for 45–60 min. At the end of the incubation, each well was washed with 3× 1-ml warm MEM before the original growth medium was returned. Opsin expression was generally observed within 24 h. Coverslips of cultured neurons were transferred from the culture medium to a recording bath filled with Tyrode solution (containing, in mM: 125 NaCl, 2 KCl, 2 CaCl2, 2 MgCl2, 30 glucose and 25 HEPES). The coverslip was scanned for GCaMP-expressing neurons and a glass monopolar stimulating electrode filled with Tyrode was placed nearby. A 10-s, 50-Hz stimulation (pulse width 5 ms, intensity 5–6 mA) was used to obtain maximal responses. Wavelengths of either 475 nm or 400 nm, generated using a Spectra X LED light engine (Lumencor), were used to illuminate the cell. Video was recorded at 10 Hz using a CCD camera (RoleraXR, Q-Imaging). Coverslips of cultured neurons were transferred from the culture medium to a recording bath filled with Tyrode solution (containing, in mM: 125 NaCl, 2 KCl, 2 CaCl2, 2 MgCl2, 30 glucose, 25 HEPES, 0.001 TTX, 0.005 NBQX, 0.05 APV and 0.05 picrotoxin). Whole-cell patch-clamp recordings were performed with glass electrodes (resistance 2.5–4.0 MΩ when filled with internal solution, which included (in mM): 120 K-gluconate, 11 KCl, 1 CaCl2, 1 MgCl2, 10 EGTA, 10 HEPES, 2 Mg-ATP and 0.3 Na-GTP, adjusted to pH 7.3 with KOH). 
Signals were amplified with a Multiclamp 700B amplifier, acquired using a Digidata 1440A digitizer, sampled at 10 kHz, and filtered at 2 kHz. All data acquisition and analysis were performed using pCLAMP software (Molecular Devices). ChR2-expressing neurons were visually identified for patching using an upright microscope (Olympus BX51WI) equipped with DIC optics, a filter set for visualizing YFP, and a CCD camera (RoleraXR, Q-Imaging). To stimulate ChR2, 1 s of continuous blue light (~10 mW mm−2) was generated using a Spectra X LED light engine (Lumencor) and delivered to the slice via a ×40/0.8 water-immersion objective focused onto the recorded neuron. Acute 300-μm coronal slices were prepared by transcardially perfusing the rat with room-temperature NMDG slicing solution (containing, in mM: 92 N-methyl-d-glucamine, 2.5 KCl, 30 NaHCO3, 1.2 NaH2PO4-H2O, 20 HEPES, 25 glucose, 5 sodium ascorbate, 2 thiourea and 3 sodium pyruvate, adjusted to pH 7.4 with HCl) and slicing the brain tissue in the same solution using a vibratome (VT1200S, Leica). Slices were allowed to recover for 10 min at 33 °C in the NMDG solution, then another 20 min at 33 °C in a modified HEPES artificial cerebrospinal fluid (containing, in mM: 92 NaCl, 2.5 KCl, 30 NaHCO3, 1.2 NaH2PO4-H2O, 20 HEPES, 25 glucose, 5 sodium ascorbate, 2 thiourea and 3 sodium pyruvate), then another 15 min at room temperature in the HEPES solution. Finally, slices were transferred to standard artificial cerebrospinal fluid (aCSF; containing, in mM: 125 NaCl, 2.5 KCl, 2 CaCl2, 1 MgCl2, 26 NaHCO3, 1.25 NaH2PO4-H2O and 11 glucose) bubbled with 95% O2/5% CO2 and stored at room temperature until recording. Whole-cell patch-clamp recordings were performed in aCSF at 30–32 °C. Synaptic blockers (5 μM NBQX, 50 μM d-AP5 (d(−)-2-amino-5-phosphonovaleric acid) and 50 μM picrotoxin; Tocris) were added to the aCSF to isolate direct ChR2 responses. 
Resistance of the patch pipettes was 2.5–4.0 MΩ when filled with intracellular solution containing the following (in mM): 120 K-gluconate, 11 KCl, 1 CaCl2, 1 MgCl2, 10 EGTA, 10 HEPES, 2 Mg-ATP and 0.3 Na-GTP, adjusted to pH 7.3 with KOH. Signals were amplified with a Multiclamp 700B amplifier, acquired using a Digidata 1440A digitizer, sampled at 10 kHz, and filtered at 2 kHz. All data acquisition and analysis were performed using pCLAMP software (Molecular Devices). ChR2-expressing neurons were visually identified for patching using an upright microscope (Olympus BX51WI) equipped with DIC optics, a filter set for visualizing YFP, and a CCD camera (RoleraXR, Q-Imaging). To stimulate ChR2, 1-s trains of 5-ms blue light pulses (~10 mW mm−2) were generated at 20 Hz using a Spectra X LED light engine (Lumencor) and delivered to the slice via a ×40/0.8 water-immersion objective focused onto the recorded neuron. Ex vivo and cell-culture physiology data were analysed using Clampfit software (Axon Instruments Inc., Molecular Devices). Statistical analyses were performed using MATLAB (Mathworks Inc.) and GraphPad Prism (GraphPad Software). All custom-written MATLAB code is available on request. As described previously (refs 27, 28), we measured bulk fluorescence from deep brain regions using a single optical fibre both to deliver excitation light to, and to collect emitted fluorescence from, the targeted brain region. The fluorescence output of the calcium sensor is modulated by varying the intensity of the excitation light, generating an amplitude-modulated fluorescence signal that can be demodulated to recover the original calcium sensor response. This ‘upconversion’ of the calcium signal to a frequency range of our choice allows us to avoid any contribution to the signal from changes in ambient light levels with behaviour (since these will not be modulated at the appropriate frequency), as well as avoiding drift or low-frequency ‘flicker noise’ in our photodetector. 
We have extended this method to the case of multiple excitation wavelengths delivered over the same fibre, each modulated at a distinct carrier frequency, to allow for ratiometric measurements. Fluorescence excitation was provided by two diode lasers at 488 nm and 405 nm with analogue modulation capabilities (Luxx, Omicron Laserage). A real-time signal processor (RP2.1, Tucker-Davis Technologies) running custom software sinusoidally modulated each laser’s output (average power at the fibre tip was set to 30 μW for each wavelength), and simultaneously demodulated the two output signals from the output of the single photodetector (Model 2151 Femtowatt Photoreceiver) as described below. Carrier frequencies (211 and 531 Hz for 488 and 405 nm excitation, respectively) were chosen to avoid contamination from overhead lights (120 Hz and harmonics) and cross-talk between channels (the bandwidth of GCaMP6M was observed to be <15 Hz), while remaining within the 30–750-Hz bandwidth of the photodetector. Excitation light from the two lasers was combined by a dichroic mirror (425-nm longpass, DMLP425), passed through a clean-up filter (Thorlabs, FES0500) and a dichroic mirror (505-nm long-pass, DMLP505), before being coupled into a large-core, high-NA, low-fluorescence optical fibre patch cord (400 μm diameter, 0.48 NA, Doric Lenses) using a fixed-focused coupler/collimator with a standard FC connector (F240FC-A, NA 0.51, f = 7.9 mm). The far end of the patch cord is butt-coupled to the chronically implanted fibre using standard 2.5 mm ferrules and a zirconia sleeve, allowing for easy connections and repeated measurements across days, as in standard optogenetics preparations. A small amount of the fluorescence emitted in the brain is captured at the tip of the implanted fibre and travels back to the rig, where it is collimated and passes through the last dichroic mirror and is focused onto the photodetector by a lens (NA 0.5, f = 12.7 mm, part 62-561, Edmund Optics). 
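The frequency-multiplexed scheme above can be illustrated with a small Python sketch using synthetic data. The carrier frequencies (211 and 531 Hz) and detector sampling rate (6.1 kHz) follow the text; the signal envelopes, simulation length, and the simple moving-average low-pass are illustrative assumptions, and each channel is recovered with a standard dual-phase (quadrature) lock-in:

```python
import numpy as np

FS = 6100.0                     # photodetector sampling rate (Hz), from the text
t = np.arange(0, 2.0, 1 / FS)   # 2 s of synthetic data (arbitrary length)

f488, f405 = 211.0, 531.0       # carrier frequencies (Hz) for the two lasers
amp488 = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t)  # slow "calcium" envelope (assumed)
amp405 = 0.8 * np.ones_like(t)                    # flat control channel (assumed)

# A single photodetector sees the sum of both amplitude-modulated channels.
detector = (amp488 * np.sin(2 * np.pi * f488 * t)
            + amp405 * np.sin(2 * np.pi * f405 * t))

def demodulate(sig, t, f_carrier, fs, corner_hz=15.0):
    """Dual-phase lock-in: multiply by in-phase and 90-degree-shifted
    references, low-pass each product, and add in quadrature, which makes
    the output insensitive to phase delay between signal and reference."""
    i = sig * np.sin(2 * np.pi * f_carrier * t)
    q = sig * np.cos(2 * np.pi * f_carrier * t)
    win = int(fs / corner_hz)              # crude moving-average low-pass
    kernel = np.ones(win) / win
    i_lp = np.convolve(i, kernel, mode="same")
    q_lp = np.convolve(q, kernel, mode="same")
    return 2.0 * np.hypot(i_lp, q_lp)      # recovered envelope magnitude

rec488 = demodulate(detector, t, f488, FS)
rec405 = demodulate(detector, t, f405, FS)
```

Away from the edges, `rec488` tracks the slow envelope and `rec405` stays near its constant 0.8, with cross-talk between channels strongly attenuated by the low-pass stage.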
The photodetector signal was sampled at 6.1 kHz, and each of the two modulated signals was independently recovered using standard synchronous demodulation techniques: the detector output was routed to two product detectors, one using the selected channel’s modulation signal as a reference, and the other using a 90° phase-shifted copy of the same reference. These outputs were low-pass filtered (corner frequency of 15 Hz), and added in quadrature. This dual-phase detection approach makes the output insensitive to any phase delay between the reference and signal. The resulting fluorescence magnitude signals were then decimated to 382 Hz for recording to disk, and then further filtered using an ~2-Hz low-pass filter. The ratiometric fluorescence signal used throughout the paper was calculated for each behavioural session as follows. A linear least-squares fit between the two timeseries was calculated (that is, the 405-nm control signal values were the independent variable and the 488-nm signal was the dependent variable). Change in fluorescence (dF) was calculated as (488 nm signal−fitted 405 nm signal), adjusted so that a dF of 0 corresponded to the second percentile value of the signal. dF/F was calculated by dividing each point in dF by the 405-nm fit at that time, which scaled transients according to the degree of bleaching estimated at that time. Behavioural variables, such as lever presses and reward port entry times, were fed into the real-time processor as TTL signals from the operant chambers. For each figure, a statistical test matching the structure of the experiment and the structure of the data was employed. For simple comparisons between just two groups, t-tests were used. Where the structure of the data did not fit the assumptions of the test, the non-parametric Mann–Whitney (for unpaired tests) or Wilcoxon matched-pairs (for paired tests) was used instead. 
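The ratiometric dF/F computation described above can be sketched in Python; this is a stand-in for the authors' implementation, with function and variable names of my choosing:

```python
import numpy as np

def ratiometric_dff(sig488, sig405):
    """Return dF/F from a 488-nm signal and a 405-nm control signal,
    following the steps in the text: fit, subtract, re-zero, divide."""
    sig488 = np.asarray(sig488, dtype=float)
    sig405 = np.asarray(sig405, dtype=float)
    # Linear least-squares fit: 405-nm control values predict the 488-nm signal.
    slope, intercept = np.polyfit(sig405, sig488, 1)
    fit405 = slope * sig405 + intercept
    # dF = 488-nm signal minus fitted control, shifted so that dF = 0
    # corresponds to the second percentile of the signal.
    df = sig488 - fit405
    df = df - np.percentile(df, 2)
    # Dividing by the fit scales transients by the bleaching estimated there.
    return df / fit405
```

Because the control channel enters through the fit, slow bleaching common to both wavelengths is removed before transients are normalized.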
When comparing the magnitude of effects of a manipulation across two groups, a two-way ANOVA was used, and where significant interactions were detected, a Bonferroni post-hoc test was used to determine the nature of the differences. When quantifying repeated manipulations within a group, a repeated-measures ANOVA was used, and where significant interactions were detected, a Dunnett’s post-hoc test was used to determine whether the manipulation altered behaviour, while correcting for multiple comparisons. For linear correlation, the Pearson’s r test was used throughout. Variances within each group of data are displayed as s.e.m. throughout. To quantify the temporal stability of individual subjects’ risk preferences across days, we calculated the reliability of percentage risky choices in unmanipulated control animals’ behaviour across 7 days of testing. Odd-versus-even day split-half reliability estimates (as in ref. 35) indicated significant internal consistency in risk preferences for risk-seeking animals (ICC = 0.95, P = 0.0003), risk-averse animals (ICC = 0.99, P < 0.0001), and overall (ICC = 0.99, P < 0.0001). Bootstrap analysis of 10,000 randomly assigned split halves of the data generates an average ICC = 0.987 (P < 0.0001; Extended Data Fig. 2). Across rats, the average standard deviation in percentage risky choices across the 7 days of testing was 6.1%. For each rat, we calculated the median neural activity during each nosepoke, in the 1 s after nosepoke entry, during successfully completed nosepokes, across all free choices, across all days of behaviour. We then sorted nosepoke periods based either on previous trial outcome (Fig. 3g, k) or on the upcoming choice (Fig. 3i, m). 
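The split-half consistency computation can be sketched as below. Since the exact ICC variant used is not specified here, this sketch assumes a one-way random-effects ICC(1,1) over two measurements per subject (for example, each rat's mean percentage of risky choices on odd versus even days):

```python
import numpy as np

def icc_oneway(x, y):
    """One-way random-effects ICC(1,1) for two measurements per subject.
    x and y hold one value per subject (e.g. odd-day and even-day means)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    # Between-subject and within-subject mean squares from one-way ANOVA.
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Subjects whose odd-day and even-day preferences agree perfectly yield an ICC of 1, while inconsistent subjects pull the ICC toward or below zero; the bootstrap in the text repeats this over many random split halves.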
In the case of previous trial outcome, a t-test was used to compare a list of all nosepoke-period signals when the animal received a loss outcome (hundreds of individual trials) against a list of all the signals when the animal received a gain or safe outcome (also hundreds of individual trials). In the case of next decision, a t-test compared the list of all activity during nosepokes when the animal was about to choose safe to the list of all nosepoke activity when the rat was about to choose to take a risk. The signal was larger after loss outcomes than after gain or safe outcomes (Fig. 3e–g). This trend is individually significant in 5 out of 6 rats (t-test, P < 0.0001 in all cases). Decision-period activity was higher in D2R+ cells before safe choices versus risky choices (Fig. 3h, i). This trend held in all rats tested and was individually significant in 5 out of 6 rats (t-test, P < 0.02 in all cases). The logistic regression analysis displayed in Fig. 1b–e is supported by 17 animals and >9,800 individual data points. Post-hoc analyses revealed power of 0.9 and 0.84, respectively, for the subpanels in Fig. 1f and a power of 0.99 for Fig. 1g. The one-way ANOVA in Fig. 2a has a power of 0.96. The Mann–Whitney test in Fig. 2c has a power of 0.96. The repeated-measures ANOVA used in Fig. 2f has a power of 0.99. The data in Fig. 3 comprise 31 recording sessions across the 6 rats, totalling >7,500 trials. Post-hoc power tests on Fig. 3g, i, k, m reveal a power >0.84 for all significant results. Tests on the significant correlations reveal a power of 0.95 for Fig. 3n and a power of 0.86 for Fig. 3p. The optogenetics experiments in Fig. 4 contain a total of 62 animals across the 4 groups. Power analyses reveal that the two-way ANOVA used to evaluate Fig. 4d–i has a power of 0.99. The one-way ANOVA in Fig. 4j has a power of 0.89. 
The goal of this classification is to determine the probability that a rat will choose the risky lever on any given trial, given recent outcome history. We used a soft-max decision function:

    h_θ(x) = 1 / (1 + e^(−θᵀx))    (2)

where x is a vector representing the recent outcome history, y is a dummy variable indicating whether the rat chose risky on a given trial, and θ is the set of weights learned by the model. In this scenario, we know the outcome history (x) and the choice outcomes (y). We seek to use these data to find a set of weights (θ) that minimizes the difference between the prediction (h_θ(x)) and the rat’s actual behaviour (y). To accomplish this, we use the MATLAB gradient descent algorithm fminunc to generate a set of weights (θ) that minimize the cost function:

    J(θ) = (1/m) Σᵢ [ −y⁽ⁱ⁾ log h_θ(x⁽ⁱ⁾) − (1 − y⁽ⁱ⁾) log(1 − h_θ(x⁽ⁱ⁾)) ]

over m training examples. We use the vectorized implementation:

    J(θ) = (1/m) [ −yᵀ log h_θ(X) − (1 − y)ᵀ log(1 − h_θ(X)) ]

We then used the weights generated by running this optimization over the training data to determine how well the model generalized to test data from the same rats. To do this, we plugged the weights from the optimization over training data and the outcome histories from the test data into equation (2). The probabilities generated by equation (2) were then compared to actual choice outcomes on a trial-by-trial basis, such that [h_θ(x) ≥ 0.5 when y = 1] or [h_θ(x) < 0.5 when y = 0] were considered correct predictions.
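The fitting and evaluation procedure above can be sketched in Python. This is a hypothetical translation of the MATLAB fminunc workflow, using scipy.optimize.minimize as the optimizer; the function names, synthetic data, and train/test split below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grad(theta, X, y):
    """Vectorized logistic-regression cost J(theta) and its gradient
    over m training examples. Rows of X are recent outcome histories;
    y = 1 if the rat chose risky on that trial, else 0."""
    m = len(y)
    h = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    J = (-(y @ np.log(h + eps)) - ((1 - y) @ np.log(1 - h + eps))) / m
    grad = X.T @ (h - y) / m
    return J, grad

def fit_and_score(X_train, y_train, X_test, y_test):
    # Minimize the cost on training data (analogue of MATLAB's fminunc).
    res = minimize(cost_and_grad, np.zeros(X_train.shape[1]),
                   args=(X_train, y_train), jac=True, method="BFGS")
    # A predicted P(risky) >= 0.5 counts as a "risky" prediction; compare
    # trial-by-trial against the held-out choices.
    preds = sigmoid(X_test @ res.x) >= 0.5
    return np.mean(preds == (y_test == 1))
```

Generalization is then the fraction of held-out trials on which the thresholded prediction matches the rat's actual choice.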

News Article
Site: http://phys.org/technology-news/

FM Global is one of the world's largest commercial and industrial insurance companies. Providing insurance to one in three Fortune 1000 companies, FM Global attributes its success to offering not only comprehensive property insurance products but also world-class loss prevention research and engineering services that help clients better understand steps they can take to prevent fires and minimize loss if a fire does start. However, for FM Global research scientist Yi Wang, fire suppression research affects far more than his business's bottom line. "The goal of our research is to make protection standards and solutions better," Wang said. "We believe that the majority of property loss is preventable. We develop solutions to prevent losses, share these solutions, and promote improvement of protection standards." Some of these solutions Wang discovered came from research performed on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy Office of Science User Facility located at Oak Ridge National Laboratory. Improving protection standards can be a costly process, though—insurers must build large testing facilities and conduct expensive fire tests or use high-performance computing to simulate how fires spread and how suppression systems perform. FM Global researchers are doing both. To better understand how fires spread and the best methods to suppress them, FM Global built the world's largest fire technology laboratory at its property loss prevention research campus in West Glocester, Rhode Island. The entire laboratory is 108,000 sq. ft., with an enormous area for fire tests measuring 33,600 sq. ft., or the size of a football field. These testing rooms have "movable ceilings" that can go from 15 to 60 ft. high, allowing researchers to evaluate fire hazards and test fire suppression techniques at a variety of heights. 
In addition, FM Global owns the world's largest fire calorimeter—a large hood-like device that is hung above a fire to measure its heat release rate, or the rate at which it is releasing energy. This is one of the best metrics to gauge fire size and is the main driver for determining how many sprinklers should turn on during a fire. In recent years, FM Global has been considering how it might enhance its fire testing capabilities and gain more insight from each test. Its facility is invaluable for gaining insights into fire growth and suppression, but it is also in high demand—researchers often must reserve the space months in advance. In addition, these tests are expensive, with some experiments costing $100,000 or more. And despite the company's football-field-sized fire testing room and world-class calorimeter, FM Global researchers found they might not be able to replicate fires for the world's newest, largest facilities. These industrial "mega" warehouses and distribution centers can exceed 100,000 sq. ft. and rise from 60 to 100 ft. in height. Many companies choose to use this extra height to store their commodities—in corrugated cardboard boxes—on wooden pallets stacked in tiers, each tier about 5 ft. high. As these facilities stack the pallets increasingly higher, they run the risk of helping a fire spread faster. Wang noted that once storage facilities reach a certain height, sprinkler systems may not prevent catastrophic damage as effectively. "Typically, when sprinklers are on the ceiling during a fire, the smoke reaches the ceiling to set off the system," he said. "But when warehouses get very tall, the time of the protection system activation can be delayed. In addition, the strong fire plume can prevent the ceiling water spray from penetrating through and reaching the base of the fire in time to achieve effective suppression." 
The high costs of large-scale fire tests, along with the challenges in generalizing and extrapolating test results, prompted FM Global research scientists in 2008 to develop computational models and simulate fires on an internal cluster computer. Using OpenFOAM, a fluid dynamics code, the team developed FireFOAM, its flagship code for simulating all of the complex physics that occur during an industrial fire. And in keeping with the firm's commitment to sharing important research results, the researchers made their code "open source," available to any researcher studying fires and fire suppression. "Our goal is to develop an efficient computational fluid dynamics (CFD) code for the fire research community which has all the components to catch all the physics that are important for fire suppression, such as heat transfer, material flammability, water spray, chemical transport, and radiation in larger fires," Wang said. Using supercomputers to simulate industrial fires is more complicated than just virtually igniting a fire and letting it burn. Researchers have to account for the roles of soot formation, oxidation, radiation, and sprinkler spray dynamics, among other processes. In addition to calculating so many different physical processes, researchers also require very fine resolution—small time and spatial scales—to fully capture all of the subtle chemical and physics-related processes happening during a fire. Early results for smaller-scale fire simulations using FM Global's internal cluster were encouraging—the simulations showed very good agreement with actual physical tests. However, as the team progressed in its fire modeling experience, it realized the need for substantially more computing power than it had internally to accurately model the much larger fires that could occur in clients' large warehouses. In addition, the team needed access to a very large high-performance computing system to scale FireFOAM up to meet the computational challenge. 
While attending a scientific conference on combustion, Wang and the FM Global team met Ramanan Sankaran, a computational research scientist and combustion expert at the OLCF. After discussing their research interests, Sankaran told them about the OLCF's Industrial Partnerships Program, which offers researchers in industry the opportunity to access America's most powerful supercomputer for open research. FM Global researchers knew that gaining access to a larger supercomputer was necessary to improve their simulations, but it was only part of the challenge—knowing how to make efficient use of a supercomputer with over 299,000 cores was the other part. They successfully scaled FireFOAM from 100 CPUs to thousands of CPUs. And the team's relationship with Sankaran continued paying dividends. "We got support from Ramanan early on and identified a bottleneck in the pyrolysis submodel in our code, which deals with solid fuel burning," Wang's colleague Ning Ren said. "He worked with us to improve the efficiency of that submodel significantly, and that allowed us to scale our model up so we could simulate 7 tiers (35 ft. high) of storage." Through those simulations, the team discovered that stacking storage boxes on wooden pallets impedes the horizontal flame spread, substantially reducing the fire hazard in the early stages of fire growth. As with many large-scale simulations, the team used FireFOAM by dividing its simulations into very fine mesh, with each cell calculating the processes for a very small area and sharing the data with neighboring grid points. The finer the grid, the more computationally demanding the simulation becomes. Wang credits his team's successful simulations to OLCF computing resources. "Without access to leadership computing resources at the OLCF, the team would have no way to accurately study fire spread dynamics in the larger warehouses," he said. 
"With Titan, we are doing predictive simulations of 7-tier stacks and gaining important information about the fire hazard that we simply can't gather through our experimental fire tests." After receiving a second award for computing time, the team has been collaborating with OLCF staff to incorporate the Adaptable I/O System (ADIOS) for its FireFOAM code. OLCF staff developed ADIOS to transfer data more efficiently on and off the computer. The team took the improved FireFOAM code and began simulating other commodities stored in warehouses, beginning with simulations for large paper-roll fires in 2015. Wang sees the collaboration between his group and OLCF staff as a relationship that benefits both parties and society at large. "Our project was the first step, and I think the work we've done is very promising," Wang said. "The collaboration with OLCF adds a lot of value to our research; access to Titan and the experts at the OLCF are enhancing our research capabilities so that we can offer better fire protection solutions for our clients." More information: N. Ren, J. de Vries, K. Meredith, M. Chaos, and Y. Wang, "FireFOAM Modeling of Standard Class 2 Commodity Rack Storage Fires," published in the Proceedings of Fire and Materials 2015 (February 2–4, 2015): 340.

News Article
Site: http://www.materialstoday.com/news/

H.C. Starck has partnered with the Worcester Polytechnic Institute (WPI) Center for Heat Treating Excellence (CHTE). The CHTE is an alliance between the industrial sector and university researchers that aims to address short and long-term needs in the heat treating and thermal processing industry. Members help select research projects aimed at solving today’s global business challenges. ‘We are very excited to join the team at WPI Center for Heat Treating Excellence,’ said Dmitry Shashkov, member of H.C. Starck's executive board and head of the fabricated products division. ‘We are [...] rapidly growing our business to match the demands of the global marketplace, working with OEM and aftermarket furnace manufacturers on their new designs. To match this growth, H.C. Starck has made significant investments in our fabrication capabilities and we are leading the development of new materials for high temperature applications to improve cost and performance for our customers.’ H.C. Starck focuses on developing new materials for high temperature vacuum and inert atmosphere furnaces for industrial processes such as annealing, brazing, heat treating, hot isostatic pressing (HIP), melting, pre-heating, powder processing, sintering, tempering and metal injection molding (MIM). It plans to help improve furnace cycle time, maintain temperature uniformity, and reduce carbon contamination. This story uses material from H.C. Starck, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.

News Article
Site: http://www.asminternational.org/news/industry;jsessionid=E0DE57F2BB9C9051EE13EEC74DEE0FE5?p_p_id=webcontentresults_WAR_webcontentsearchportlet_INSTANCE_bPy7zPdEmZHV&p_p_lifecycle=2&p_p_state=normal&p_p_mode=view&p_p_cacheability=cacheLevelPage&p

The 2016 Bernard M. Gordon Prize for Innovation in Engineering and Technology Education is awarded to Worcester Polytechnic Institute educators Diran Apelian, FASM, (pictured), Arthur C. Heinricher, Richard F. Vaz, and Kristin K. Wobbe, "for a project-based engineering curriculum developing leadership, innovative problem-solving, interdisciplinary collaboration, and global competencies." The project-based engineering curriculum at WPI prepares 21st century leaders to tackle global issues through interdisciplinary collaboration, communication, and critical thinking. The Institute's engineering program engages students with a specially designed sequence in which first-year students complete projects on topics such as energy and water; second-year capstones focus on the humanities and arts; junior-year interdisciplinary projects relate technology to society; and senior design projects are done in conjunction with external sponsors, providing relevant experience upon graduation. Last year, WPI launched its Institute on Project-Based Learning, an initiative to help other colleges and universities make progress toward implementing project-based learning on their campuses. The Bernard M. Gordon Prize for Innovation in Engineering and Technology Education is one of the most prestigious awards in engineering education.
