
Ohashi M.,Fukuoka University | Kawanishi N.,ATR
IEICE Transactions on Communications | Year: 2015

This paper discusses the core ambient sensor network (ASN) technologies in view of their support for global connectivity. First, we enumerate ASN services and use cases and then discuss the underlying core technologies, in particular, the importance of the RESTful approach for ensuring global accessibility to sensors and actuators. We also discuss several profile-handling technologies for context-aware services. Finally, we envisage the ASN trends, including our current work for cognitive behavior therapy (CBT) in mental healthcare. We strongly believe that ASN services will become widely available in the real world and an integral part of daily life and society in the near future. Copyright © 2015 The Institute of Electronics, Information and Communication Engineers.

Cho B.-K.,ATR | Park S.-S.,KAIST | Oh J.-H.,KAIST
2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010 | Year: 2010

This paper discusses the stabilization of a hopping humanoid robot against pushes. Depending on the magnitude of the push, one of two control strategies is selected. The posture balance controller is used when the push is small; when the push is large, the posture balance controller and the foot placement method are activated together. To develop the novel foot placement controller, a simplified model is used and a linearized Poincaré map for single hopping is derived. The control law is designed by the pole placement method. The proposed method is verified through simulation and experiment. In the experiments, HUBO hops well against various pushes. ©2010 IEEE.

Okada M.,Shizuoka University | Tada M.,ATR
Journal of Educational Multimedia and Hypermedia | Year: 2014

Real-world learning is important because it encourages learners to obtain knowledge through various experiences. An educational program should be improved through feedback by evaluating past practices, but no methodology has been proposed to assess experience-based learning in the real world. Thus, we need a new methodology for analyzing the diverse learning activities that occur in real-world learning, developing effective support strategies, and refining the learning environment. We adopted the methodology of empirical studies with field experiments, and accumulated qualitative and quantitative evaluation results to deepen the understanding of the nature of real-world learning. Then, we developed a method to capture learners' utterances, vision, body motions, and knowledge, to grasp the time-series occurrence of real-world learning, and to analyze the spatial characteristics of a learning field that draws out diverse intellectual interactions. By giving real-world learners a task for discovery learning through autonomous and exploratory activities, we achieved the following major findings: (1) our method was able to present a bird's eye view of the results of learners' cognitive activities in the world, (2) our method was able to map the structure of a real-world learning field from the viewpoint of the occurrence probability of important learning behavior and knowledge acquisition, (3) our method was able to extract regions where the learners did not succeed in acquiring discovery knowledge. Based on these findings, we discuss our educational contributions. Our evidence-based spatial analysis of an outdoor field helps to understand the diversity of real-world learning and to clarify the principles for redesigning on-site and off-site activities and formatively improving the learning environment of real-world learning.

News Article
Site: http://www.nature.com/nature/current_issue/

Rebecca Davies remembers a time when quality assurance terrified her. In 2007, she had been asked to lead accreditation efforts at the University of Minnesota's Veterinary Diagnostic Laboratory in Saint Paul. The lab needed to ensure that the tens of thousands of tests it conducts to monitor disease in pets, poultry, livestock and wildlife were watertight. “It was a huge task. I felt sick to my stomach,” recalls Davies, an endocrinologist at the university's College of Veterinary Medicine. She nevertheless accepted the challenge, and soon found herself hooked on finding — and fixing — problems in the research process. She and her team tracked recurring tissue-contamination issues to how containers were being filled and stored; they traced an assay's erratic performance to whether technicians let an enzyme warm to room temperature; and they established systems to eliminate spotty data collection, malfunctioning equipment and neglected controls. Her efforts were crucial to keeping the diagnostic lab in business, but they also forced her to realize how much researchers' work could improve. “That is the beauty of quality assurance,” Davies says. “That is what we were missing out on as scientists.” Davies wanted to spread the word. In 2009, she got permission and financial support to launch an internal consulting group for the college, to help labs with the dry but essential work of quality assurance (QA). The group, called Quality Central, now supports more than half a dozen research labs — helping them to design systems to ensure that their equipment, materials and data are up to scratch, and helping them to improve. She is also part of a small but growing group of professionals around the world who hope to transform basic biomedical research. Many were hired by their universities to help labs to meet certain regulatory standards, but these QA consultants have a broader vision. They are not pushing for universal adoption of formal regulatory certifications. 
Instead, they advocate 'voluntary QA'. With the right strategies, they argue, scientists can strengthen their research and improve reproducibility. When Davies first started proselytizing to her fellow faculty members, the responses were not encouraging. “None of them found the idea compelling at all,” Davies recalls. How important could QA be, they asked, if the US National Institutes of Health did not require it? How could anyone afford to spend money or time on non-essentials? Shouldn't they focus on the discoveries lurking in their data, and not the systems for collecting them? But some saw the potential, based on their own experiences. Before she had heard of Quality Central, University of Minnesota virologist Montserrat Torremorell was grateful when a colleague let her use his instruments to track transmissible disease in swine. But the results made no sense. Samples from pigs experimentally infected with influenza showed extremely low levels of the virus. It turned out that her benefactor had, like many scientists, skimped on equipment maintenance to save money. “It was a real eye-opener,” Torremorell recalls. “It just made me think that I could not rely on other people's equipment.” Quality systems are an integral part of most commercial goods and services, used in manufacturing everything from planes to paint. Some labs that focus on clinical applications implement certified QA systems such as Good Clinical Practice, Good Manufacturing Practice and Good Laboratory Practice for data submitted to regulatory bodies. There have also been efforts to guide research practices outside these schemes. In 2001, the World Health Organization published guidelines for QA in basic research. And in 2006, the British Association of Research Quality Assurance (now simply the RQA) in Ipswich issued guidelines for basic biomedical research. But few academic researchers know that these standards exist (Davies certainly didn't back in 2007). 
Instead, QA tends to be ad hoc in academic settings. Many scientists are taught how to keep lab notebooks by their mentors, supplemented perhaps by a perfunctory training course. Investigators often improvise ways to safeguard data, maintain equipment or catalogue and care for experimental materials. Too often, data quality is as likely to be assumed as assured. Scientific rigour has taken a drubbing in the past few years, with reports that fewer than one-third of biomedical papers can be reproduced (see Nature http://doi.org/477; 2015). Scientific culture, training and incentives have all been blamed for promoting sloppy work; a common refrain is that the status quo values publication counts over careful experimentation and documentation. “There is chaos in academia,” says Masha Fridkis-Hareli, head of ATR, a biotechnology consultancy in Worcester, Massachusetts, that also conducts laboratory work to help move basic research into industry. For every careful researcher she has encountered, there have been others who have thought nothing of scribbling data on paper towels, repeating experiments without running controls and guessing at details months after an experiment. Davies insists that plenty of scientists are doing robust work, but there is always room for improvement (see 'Solutions'). “There are easy fixes to situations that shouldn't be happening, but are,” she says. Michael Murtaugh, a swine biologist at the University of Minnesota, had tried to establish practices to beef up the reliability of his team's lab notebooks, but the attempts that he made on his own never gained traction. Then Davies got on his case. After a year or so of her “planting seeds” — as she puts it — Murtaugh agreed to work with Quality Central and implement a low-tech but effective solution. On designated Mondays, each member of Murtaugh's lab draws a name from a paper bag to determine whose notebook to audit. 
The scientists check that their assigned books include relevant controls for experiments, and indicate where data are stored and which particular machine generated them. The group also makes sure that any problems noted in the previous check have been addressed. It takes about ten minutes per researcher every few weeks, but that's enough to change people's habits. Graduate student Michael Rahe says that the checks ensure that he keeps his notebook legible and up to date. “I never used to put in raw data,” he says. Albert Cirera, a technologist developing gas nanosensors at the University of Barcelona in Spain, has also embraced QA. As his lab group grew to 12 people, he found it difficult to monitor everyone's experiments, and his own efforts to implement a tracking system were inadequate. He turned to a university-based QA consulting service for help. Now, samples, equipment and their data are all linked with tracking numbers printed on stickers and recorded in individuals' notebooks, on samples and in a central tracking file. The system does not slow down experiments, and staying abreast of projects is a breeze, says Cirera. But getting to this point took about four months and frequent consultations. “It was not something that you can create from zero,” he says. Any scientist adopting a QA system has to wager that the up-front hassle will pay off in the future. “It is very difficult to get people to check and annotate everything, because they think it is nonsense,” says Carmen Navarro-Aragay, head of the University of Barcelona quality team that worked with Cirera. “They realize the value only when they get results that they do not understand and find that the answer is lurking somewhere in their notebooks.” Even when experiments go as expected, quality systems can save time, says Murtaugh. Methods and data sections in papers practically write themselves, with no time wasted in frenzied hunting for missing information. 
There are fewer questions about how experiments were done and where data are stored, says Murtaugh. “It allows us to concentrate on biological explanations for results.” The more difficult data are to collect, the more important a good QA system becomes. Catherine Bens, a QA manager at Colorado State University in Fort Collins, says that she remembers getting cold, wet and dirty when she had to monitor a study involving ultrasound scans and blood samples from a population of feral horses in North Dakota. Typical animal-identification practices such as ear tagging were not allowed. So, before the collection started, Bens supported researchers as they rehearsed procedures, pre-labelled tubes, made back-up labels and recruited animal photographers and park volunteers to ensure that samples would be linked to the correct animals. Even in a snow storm with winds so loud that everyone had to shout, the team made sure that each data point could be traced. Rare samples or not, few basic researchers are clamouring to get QA systems in place. Most are unfamiliar with the discipline, says Davies. Others are hostile. “They see it as trying to constrain them, and that you're making them do more work.” Before awarding certain grants, the Found Animals Foundation in Los Angeles, California, which funds research on animal sterilization, requires proof that instruments have been calibrated and that written plans exist for tracing data and dealing with outliers. It can be a struggle, says Shirley Johnston, scientific director of the foundation. One grant recipient argued that QA systems were unnecessary because just looking over the data would reveal their quality. Part of the resistance may be down to how some QA professionals present themselves. 
“A lot of them are there to tell you what you are doing is wrong, and a lot of them are not very nice about it,” says Terry Nett, a reproductive biologist at Colorado State University who experienced this first-hand when he worked with outside consultants to incorporate Good Laboratory Practice principles in his lab. The effort was frustrating. “Instead of helping us understand, they would act like a dictator,” Nett recalls. “I just didn't want them in my lab.” A few years ago, however, the university hired its own quality managers, and things changed. The current manager, Bens, acts more like a partner, Nett says. She points out where labs are already using robust practices, and explains the reasoning behind QA practices that she introduces. To win scientists over, Bens stresses that QA systems produce data that can withstand criticism. “You build a support system around any data point you collect,” she says. When there is a strange result, researchers have documentation to trace its provenance. That can show whether a data point is real, an outlier or a problem — for example if a blood sample was not kept cold or was stored in the wrong tube. Scientists need to take the lead on which QA elements they incorporate, says Melissa Eitzen, director of regulatory operations at the University of Texas Medical Branch in Galveston. “You want to give them tips that they can take or not take,” she says. “If they choose it, they'll do it. If you tell them they have to do it, that's a struggle.” Rapport is paramount, says Michael Jamieson at the University of Southern California in Los Angeles, who helps other faculty members to move research towards clinical applications. Instead of talking about quality systems, he prefers to discuss concrete behaviours, such as labelling bottles with expiry dates and storage conditions. QA jargon puts scientists off, he says. 
“Using the term good research practice makes most researchers want to run the other way.” It's a lesson that many QA specialists have taken to heart. Some say 'assessment' or 'quality improvement' instead of 'audit'. Even 'research integrity' can be an inflammatory phrase, says Davies. “You have to find a way to communicate that QA is not punitive or guilt-inspiring.” Having data that are traceable — down to who did what experiment on which machine, and where the source data are stored — has knock-on benefits for research integrity, says Nett. “You can't pick out the data that you want.” Researchers who must provide strong explanations about why they chose to leave any information out of their analysis will be less tempted to cherry-pick data. QA can also weed out digital meddling: popular spreadsheet programs such as Microsoft Excel can be vulnerable to errors or manipulation if not properly locked, but QA teams can set up instruments to store read-only files and prevent researchers from tampering with data accidentally or intentionally. “I can't help but think that QA is going to make fraud harder,” says Davies. And good quality systems can be contagious. Melanie Graham, who studies diabetes at the University of Minnesota, often collaborates with others to test potential treatments. More than once, she says, collaborators have sent her samples in a polystyrene tube with nothing but a single letter written on it. Graham sends it back and requests a label that specifies the sample's identity and provenance, and a range of storage temperatures. 'Keep frozen' is too vague — she will not risk performing uninformative experiments because reagents stored in a standard freezer were supposed to be kept at −80 °C. When she first sent documentation requirements to collaborators, she expected them to push back. Instead, reactions were overwhelmingly positive. “It's a relief for them,” says Graham. 
“They want us to handle their test article in a trusted way.” The benefits go beyond providing solid data. In 2013, Davies worked with Torremorell and other Minnesota faculty members on a proposal to monitor and calibrate equipment used by several labs. The plan that they put in place helped them to secure US$1.8 million to build shared lab space to deal with animal pathogens, says Torremorell. “If we want to be competitive to get funding, and if we want people to believe our data, we need to be serious about the data that we generate.” Davies is still trying to spread the word. Her invitations to give talks and review grant applications have mushroomed. She and collaborators at other institutions have been developing online training materials and offering classes to technicians, postdocs, graduate students and principal investigators. After a presentation last year, a member of the audience told her that he had reviewed a grant from one of her clients; the QA plan had made the application stand out in a positive way. Davies was delighted. “I could finally come back to my folks and say, 'It was noticed.'” Davies knows it is still an uphill battle, but her ultimate goal is to make QA as much a part of research as peer review. It may not have the flash and dazzle of other efforts to ensure that research is robust and reproducible, but that is not the point. “A QA programme isn't sexy,” says Michael Conzemius, a veterinary researcher at the University of Minnesota and another client of Quality Central. “It's just kind of become the nuts and bolts of the scientific process for us.”

News Article
Site: http://www.nature.com/nature/current_issue/

No statistical methods were used to determine sample size. The experiments were not randomized and the investigators were not blinded during experiments and outcome assessment. DvPdf-GAL4 was provided by J. H. Park; Clk4.1M-GAL4 was from P. Hardin; UAS-dTrpA1 (2nd) was from P. Garrity; UAS-CaLexA was from J. Wang (ref. 41); UAS-TNT and UAS-Tet were from H. Amrein; Pdf-GAL80 and CRY-GAL80 are described by Stoleru et al. (ref. 12); LexA-P2X2 and Clk856-GAL4 were from O. Shafer (refs 11, 29). UAS-CD4::spGFP1-10 and LexAop-CD4::spGFP11 were from K. Scott; Clk4.1M-lexA was from A. Sehgal (ref. 5); LexAop-LUC was generated by X. Gao and L. Luo (ref. 48). LexAop-dTrpA1 was from G. M. Rubin. UAS-VGLUT RNAi 1 (VDRC 104324), UAS-mGluRA RNAi 1 (VDRC 103736) and UAS-mGluRA RNAi 2 (VDRC 1793) were from the Vienna Drosophila Resource Center (VDRC). The following lines were ordered from the Bloomington Stock Center: Pdfr (R18H11)-GAL4 (48832), Pdfr (R18H11)-LexA (52535), UAS-CsChrimson (55136), UAS-eNPHR3.0 (36350), UAS-Denmark (33064), UAS-ArcLight (51056), UAS-GCaMP6f (42747), UAS-syt-GFP (33064), UAS-VGLUT RNAi 2 (40845, 40927), VGlutMI04979-GAL4 (60312). Flies were reared on standard cornmeal/agar medium supplemented with yeast. The adult flies were entrained in 12:12 light:dark cycles at 25 °C. The flies carrying GAL4 and UAS-dTrpA1 were maintained at 21 °C to inhibit dTrpA1 activity. Locomotor activity of individual male flies (aged 3–7 days) was measured with Trikinetics Drosophila Activity Monitors or a video recording system under 12:12 light:dark conditions. The activity and sleep analysis was performed with a signal-processing toolbox implemented in MATLAB (MathWorks). Group activity was also generated and analysed with MATLAB. For dTrpA1-induced neuronal firing experiments (Fig. 3 and Extended Data Fig. 9), flies were entrained in light:dark for 3–4 days at 21 °C, transferred to 27 °C for two days, followed by 2 subsequent days at 21 °C. The evening activity index (Extended Data Fig.
9) was calculated by dividing the average activity from ZT8–12 by the average activity from ZT0–12. The behaviour experiments involving RNAi expression (Extended Data Fig. 10b) were done at 27 °C to enhance knockdown efficiency. All statistical analyses were conducted using IBM SPSS software. The sample size was chosen based on pilot studies to ensure >80% statistical power to detect significant differences between groups. Animals within the same genotype were randomly allocated to experimental groups and then processed. We were not blind to the group allocation because the experimental design required specific genotypes for experimental and control groups. However, the data analyser was blinded when assessing the outcome. The Shapiro–Wilk test was used to determine normality of data. Normally distributed data were analysed with two-tailed, unpaired Student's t-tests, one-way analysis of variance (ANOVA) followed by a Tukey–Kramer HSD post-hoc test, or two-way ANOVA with post-hoc Bonferroni multiple comparisons. Non-normally distributed data were assessed using the Kruskal–Wallis test. Data are presented as mean behavioural responses, and error bars represent the standard error of the mean (s.e.m.). Differences between groups were considered significant if the probability of error was less than 0.05 (P < 0.05). Experiments were repeated at least three times and representative data are shown in the figures. For mechanical stimulation, individual flies from different groups were loaded into 96-well plates and placed close to a small push–pull solenoid. The tap frequency of the solenoid was directly driven by an Arduino UNO board (Smart Projects). One tap was used as a modest stimulus and ten taps (1 Hz) were used as a strong stimulus. Arousal threshold was measured during the middle of the day (ZT6) and evening (ZT10) with different intensities.
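The evening activity index described above (mean activity during ZT8–12 divided by mean activity over the whole light phase, ZT0–12) can be sketched in a few lines. The function name and binning convention are illustrative assumptions, not the published analysis code:

```python
import numpy as np

def evening_activity_index(light_phase_activity):
    """Evening activity index: mean activity in ZT8-12 divided by the
    mean activity over the whole light phase (ZT0-12).

    `light_phase_activity` is a 1-D array of equally spaced activity
    counts spanning ZT0-12; the name and binning are assumptions."""
    a = np.asarray(light_phase_activity, dtype=float)
    zt8 = int(len(a) * 8 / 12)       # index of the first ZT8 bin
    return a[zt8:].mean() / a.mean()
```

With this definition a flat activity profile gives an index of 1, while evening-dominant activity pushes the index above 1.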
The movement of flies before and after the stimulus was monitored by the web camera, and the recorded videos (1 fps) were processed with the MTrack2 plugin in Fiji ImageJ to convert the videos into binary images and to calculate the trajectory and moving area, as well as the percentage of aroused flies. All-trans-retinal (ATR) powder (Sigma) was dissolved in alcohol to prepare a 100 mM stock solution for CsChrimson experiments (ref. 23). 100 μl of this stock solution was diluted in 25 ml of 5% sucrose and 1% agar medium to prepare 400 μM ATR food. Newly eclosed flies were transferred to ATR food for at least 2 days before optogenetic experiments. The behavioural setup for the optogenetics and video recording system is schematized in Supplementary Fig. 1. Briefly, flies were loaded into white 96-well Microfluor 2 plates (Fisher) containing 5% sucrose and 1% agar food with or without 400 μM ATR. Back lighting for night vision was supplied by an 850 nm LED board (LUXEON) located under the plate. Two sets of high-power LEDs (627 nm) mounted on heat sinks (four LEDs per heat sink) were symmetrically placed above the plate to provide light stimulation. The angle and height of the LEDs were adjusted to ensure uniform illumination. The voltage and frequency of red light pulses were controlled by an Arduino UNO board (Smart Projects). The whole circuit is described in ref. 25. The flat surface and compact wells of the 96-well plate allow uniform illumination, which was difficult to achieve in Trikinetics tubes. We used 627 nm red light pulses at 10 Hz (0.08 mW mm−2) to irradiate flies expressing the red-shifted channelrhodopsin CsChrimson within the DN1s (ref. 23). (The CsChrimson illumination protocol had no effect on the halorhodopsin eNpHR3.0.) Fly behaviour was recorded by a web camera (Logitech C910) without an infrared filter. We used time-lapse software to capture snapshots at 10 s intervals.
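As a sanity check on the dilution above, 100 μl of a 100 mM stock diluted in 25 ml of food gives approximately 400 μM. A minimal helper (hypothetical name; it treats 25 ml as the final volume, since the added 100 μl is negligible) confirms the arithmetic:

```python
def final_conc_uM(stock_mM, stock_ul, final_ml):
    """Concentration (in uM) after diluting `stock_ul` microlitres of a
    `stock_mM` stock into a final volume of `final_ml` millilitres.
    Hypothetical helper for a back-of-the-envelope check."""
    umol = stock_mM * stock_ul / 1000.0   # mM x uL / 1000 -> umol of solute
    return umol / (final_ml / 1000.0)     # umol / L -> uM

# 100 ul of 100 mM ATR stock into 25 ml of food:
# final_conc_uM(100, 100, 25) -> 400.0, matching the 400 uM in the text
```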
The light:dark cycle and temperature were controlled by the incubator, and the light intensity was maintained in a range that allowed entrainment of flies without activating CsChrimson. Fly movement was calculated by Pysolo software and transformed into a MATLAB-readable file (ref. 14). A displacement of 5 pixels per second (50% of the full body length) was defined as the minimum movement threshold (refs 15, 16). The activity and sleep analyses were performed with a signal-processing toolbox implemented in MATLAB (MathWorks) as described above. A patent application covering the design of this setup has been filed. To monitor bioluminescence activity in living flies, we used previously described protocols (ref. 49). White 96-well Microfluor 2 plates (Fisher) were loaded with 5% sucrose and 1% agar food containing 20 mM d-luciferin potassium salt (GOLDBIO). 250 μl of food was added to each well. Individual male or female flies expressing CaLexA–LUC were first anaesthetized with CO2 and then transferred to the wells. We used an adhesive transparent seal (TopSeal-A PLUS, Perkin Elmer) to cover the plate and poked 2–3 holes in the seal over each well for air exchange. Plates were loaded into the stacker of a TopCount NXT luminescence counter (Perkin Elmer). Assays were carried out in an incubator under light:dark conditions. Luminescence counts were collected for 5–7 days. For temperature shift experiments (Fig. 4b), the incubator temperature was set to 21 °C for 3 days and then increased to 30 °C at ZT0 of the fourth day. Other experiments were performed at 25 °C. Three different modes were used in our experiments: (1) To record CaLexA–LUC activity only, 9 plates were placed in a stacker, and each plate was sequentially transferred to the TopCount machine for luminescence reading. Every cycle took about 1 h, and the recording was continued for several days. (2) To combine optogenetic stimulation with the luciferase assays (Fig. 4a and Extended Data Fig. 6a), we replaced the stacker with a chamber of our own design (Fig. 4a).
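The minimum-movement rule above (a centroid displacement rate above 5 pixels per second counts as movement) can be sketched as follows. The function name, input layout and frame interval are assumptions for illustration, not the published Pysolo/MATLAB pipeline:

```python
import numpy as np

def classify_movement(xy, dt=1.0, threshold_px_per_s=5.0):
    """Flag frame-to-frame intervals in which a fly moved.

    `xy` is an (n_frames, 2) array of centroid coordinates in pixels,
    `dt` the time between frames in seconds. An interval counts as
    movement when the displacement rate exceeds the 5 px/s threshold
    (about 50% of the full body length, per the text)."""
    xy = np.asarray(xy, dtype=float)
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # px per frame
    return step / dt > threshold_px_per_s

# e.g. classify_movement([[0, 0], [10, 0], [12, 0]]) -> [True, False]
```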
627 nm LEDs mounted on a pair of heat sinks were symmetrically positioned in the chamber to ensure uniform illumination of the 96-well plate (0.08 mW mm−2 for CsChrimson stimulation and 1 mW mm−2 for eNPHR3.0 stimulation). Flies pre-fed with ATR were loaded into a plate. Single plates stayed in the LED chamber for 8 min and were then automatically transferred to the TopCount for luminescence reading for 2 min. (3) To assay fly movement in 96-well plates and CaLexA–LUC activity at the same time, single plates were recorded using a web camera attached to the top of the chamber (Fig. 4a). During each hour, the plate sat in the video chamber for 58 min and then was automatically transferred to the TopCount machine for a 2 min luminescence reading. The raw data were analysed in MATLAB and in Microsoft Excel. All experiments were repeated at least three times. Immunostaining was performed as described (ref. 50). Fly heads were removed and fixed in PBS with 4% paraformaldehyde and 0.008% Triton X-100 for 45–50 min at 4 °C. Fixed heads were washed in PBS with 0.5% Triton X-100 and dissected in PBS. The brains were blocked in 10% goat serum (Jackson ImmunoResearch) and subsequently incubated with primary antibodies at 4 °C overnight or longer. For VGLUT and GFP co-staining, a rabbit anti-DVGlut antibody (1:10,000) and a mouse anti-GFP antibody (Invitrogen; 1:1,000) were used as primary antibodies. For GRASP staining, a mouse anti-GFP monoclonal antibody (Invitrogen; 1:1,000) and a rabbit anti-GFP antibody (Roche; 1:200) were used. After washing with 0.5% PBST three times, the brains were incubated with Alexa Fluor 633-conjugated anti-rabbit and Alexa Fluor 488-conjugated anti-mouse antibodies (Molecular Probes) at 1:500 dilution. The brains were washed three more times before being mounted in Vectashield Mounting Medium (Vector Laboratories) and viewed sequentially in 1.1 μm sections on a Leica SP5 confocal microscope.
To compare the fluorescence signals from different conditions, the laser intensity and other settings were kept at the same level during each experiment. Fluorescence signals were quantified with ImageJ as described. mRNA profiling from E cells and DN1s was performed as previously described (ref. 34). DN1s and E cells were purified from Clk4.1M-GAL4, UAS-EGFP flies (DN1s) and Dv-Pdf-GAL4, UAS-EGFP, PDF-RFP flies (E cells; GFP+RFP− cells), respectively. Flies were entrained for 3 days and then collected every 4 h for a total of six time points. Two replicates of six time points were performed for each cell type. Sequencing data were aligned to the Drosophila genome using TopHat (ref. 51). Gene expression was quantified using the End Sequencing Analysis Toolkit (ESAT; publicly available at http://garberlab.umassmed.edu/software/esat/). ESAT quantifies gene expression using only information from the 3′ end of genes. Imaging experiments were performed as previously described (ref. 52). Adult male fly brains were dissected in ice-cold adult haemolymph-like saline (AHL) (108 mM NaCl, 5 mM KCl, 2 mM CaCl2, 8.2 mM MgCl2, 4 mM NaHCO3, 1 mM NaH2PO4·H2O, 5 mM trehalose, 10 mM sucrose, 5 mM HEPES; pH 7.5). Brains were then pinned to a layer of Sylgard (Dow Corning) silicone under a small bath of AHL contained within a recording/perfusion chamber (Warner Instruments) and bathed with room-temperature AHL. Brains expressing GCaMP6f and ArcLight were exposed to fluorescent light for approximately 30 s before imaging to allow for baseline fluorescence stabilization. Perfusion flow was established over the brain with a gravity-fed ValveLink perfusion system (Automate Scientific). ATP or glutamate was delivered by switching the perfusion flow from the main AHL line to another channel containing the diluted compound after 30 s of baseline recording for the desired durations, followed by a return to AHL flow.
For the mGluRA antagonist imaging experiments, 700 nM LY341495 (Tocris Bioscience) was used to block the glutamate-induced inhibition. Imaging was performed using an Olympus BX51WI fluorescence microscope (Olympus) under an Olympus ×40 (0.80 W, LUMPlanFl) or ×60 (0.90 W, LUMPlanFl) water-immersion objective, and all recordings were captured using a charge-coupled device camera (Hamamatsu ORCA C472-80-12AG). For GCaMP6f and ArcLight imaging, the following filter sets were used (Chroma Technology): excitation, HQ470/40×; dichroic, Q495LP; emission, HQ525/50m. Frames were captured at 2 Hz with 4× binning for either 2 min or 4 min using μManager acquisition software (ref. 52). Neutral density filters (Chroma Technology) were used in all experiments to reduce light intensity and limit photobleaching. For recordings using GCaMP6f and ArcLight, regions of interest (ROIs) were analysed using custom software developed in ImageJ (National Institutes of Health) (ref. 52). The fluorescence change was calculated using the formula ΔF/F = (Fn − F0)/F0 × 100%, where Fn is the fluorescence at time point n and F0 is the fluorescence at time point 0. The fluorescence was corrected by subtracting the background fluorescence value. To compare the fluorescence change between neurons in the same brain, fluorescence activities from different neurons were normalized to the highest fluorescence level during the recording time window.
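The ΔF/F calculation and the within-brain normalization described above can be sketched as follows. Function names are illustrative, and background subtraction is assumed to have been applied upstream:

```python
import numpy as np

def delta_f_over_f(trace):
    """Percent fluorescence change, dF/F = (F_n - F_0) / F_0 x 100%,
    with F_0 the (background-subtracted) fluorescence at time 0."""
    f = np.asarray(trace, dtype=float)
    return (f - f[0]) / f[0] * 100.0

def normalize_within_brain(traces):
    """Normalize each neuron's trace (one row per neuron) to its peak
    fluorescence within the recording window, so that neurons in the
    same brain can be compared."""
    t = np.asarray(traces, dtype=float)
    return t / t.max(axis=1, keepdims=True)

# delta_f_over_f([100, 150, 50]) -> [0.0, 50.0, -50.0]
```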
