News Article | May 10, 2017
Site: www.nature.com

A total of 43 male mice were used in this study and the figure contribution of each mouse is summarized in Supplementary Table 1. All mice tested were between 2 and 12 months of age. C57BL/6J mice were purchased from Taconic Biosciences and VGAT–channelrhodopsin-2 (ChR2) mice were obtained from the Jackson Laboratory. VGAT-cre mice were backcrossed to C57BL/6J mice for at least six generations. All mice were kept on a 12 h–12 h light–dark cycle. All animal experiments were performed according to the guidelines of the US National Institutes of Health and the Institutional Animal Care and Use Committee at the New York University Langone Medical Center.
Behavioural setup. Behavioural training and testing took place in gridded floor-mounted, custom-built enclosures made of sheet metal covered with a thin layer of antistatic coating for electrical insulation (dimensions in cm: length, 15.2; width, 12.7; height, 24). All enclosures contained custom-designed operant ports, each of which was equipped with an IR LED/IR phototransistor pair (Digikey) for nose-poke detection. Trial initiation was achieved through an ‘initiation port’ mounted on the grid floor 6 cm away from the ‘response ports’ located at the front of the chamber. Task rule cues and auditory sweeps were presented with millisecond precision through a ceiling-mounted speaker controlled by an RX8 Multi I/O processing system (Tucker-Davis Technologies). Visual stimuli were presented by two dimmable, white-light-emitting diodes (Mouser) mounted on each side of the initiation port and controlled by an Arduino Mega microcontroller (Ivrea). For the 2AFC and 4AFC tasks, two and four response ports were mounted at the angled front wall 7.5 or 5 cm apart, respectively. Response ports were separated by 1-cm divider walls and each was capable of delivering a milk reward (10 μl of evaporated milk) via a single syringe pump (New Era Pump Systems) when a correct response was performed. For the auditory Go/No-go task environment, response and reward ports were dissociated, with the reward port placed directly underneath the response port. In the 4AFC, the two outermost ports were assigned for ‘select auditory’ responses, whereas the two innermost ports were assigned for ‘select visual’ responses. Access to all response ports was restricted by vertical sliding gates which were controlled by a servo motor (Tower Hobbies). The TDT RX8 sound production system (Tucker-Davis Technologies) was triggered through MATLAB (MathWorks), interfacing with custom-written software running on an Arduino Mega (Ivrea) for trial logic control.
Training. Prior to training, all mice were food restricted to and maintained at 85–90% of their ad libitum body weight. Training was largely similar to our previously described approach8. First, 10 μl of evaporated milk (reward) was delivered randomly to each reward port for shaping and reward habituation. Making response ports accessible signalled reward availability. Illumination of the LED at the spatially congruent side was used to establish the association with visual targets on half of the trials. On the other half, the association was established with auditory targets, where an upsweep (10 to 14 kHz, 500 ms) indicated a left reward and a downsweep (16 to 12 kHz, 500 ms) indicated a right reward. An individual trial was terminated 20 s after reward collection, and a new trial became available 5 s later. Second, mice learned to poke in order to receive reward. All other parameters remained constant.
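As an aside, a minimal sketch (Python, not the authors' MATLAB code) of how the auditory sweep targets described above could be generated; only the frequency ranges and the 500-ms duration come from the text, and the 192-kHz sample rate is an assumption.

```python
import numpy as np

def linear_sweep(f_start, f_stop, duration=0.5, fs=192000):
    """Return a linear frequency sweep from f_start to f_stop (Hz)."""
    n = int(duration * fs)
    freq = np.linspace(f_start, f_stop, n)      # instantaneous frequency (Hz)
    phase = 2 * np.pi * np.cumsum(freq) / fs    # integrate frequency to get phase
    return np.sin(phase)

upsweep = linear_sweep(10000, 14000)    # 10 -> 14 kHz: left reward
downsweep = linear_sweep(16000, 12000)  # 16 -> 12 kHz: right reward
```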
An incorrect poke had no negative consequence. By the end of this training phase, all mice collected at least 20 rewards per 30-min session. Third, mice were trained to initiate trials. Initially, mice had to briefly (50 ms) break the infrared beam in the initiation port to trigger target stimulus presentation and render reward ports accessible. Trial rule (attend to vision or attend to audition) was indicated by 10-kHz low-pass filtered white noise (vision) or 11-kHz high-pass filtered white noise (audition) sound cues. Stimuli were presented in blocks of six trials consisting of single-modality stimulus presentation (no conflict). An incorrect response immediately rendered the response port inaccessible. Rewards were available for 15 s following correct poking, followed by a 5 s inter-trial interval (ITI). Incorrect poking was punished with a time-out, which consisted of a 30 s ITI. During an ITI, mice could not initiate new trials. Fourth, conflict trials were introduced, in which auditory and visual targets were co-presented indicating reward at opposing locations. Four different trial types were presented in repeating blocks: (1) three auditory-only trials; (2) three visual-only trials; (3) six conflict trials with auditory target; and (4) six conflict trials with visual target. The time that mice had to break the IR barrier in the initiation port was continuously increased over the course of this training stage (1–2 weeks) until it reached 0.5 s. At the same time, duration of the target stimuli was successively shortened to a final duration of 0.1 s. Once mice performed successfully on conflict trials, single-modality trials were removed and block length was reduced to three trials. Fifth, during the final stage of training, trial availability and task rule were dissociated. Broadband white noise indicated trial availability, which prompted a mouse to initiate a trial. Upon successful initiation, the white noise was immediately replaced by either low-pass or high-pass filtered noise for 0.1 s to indicate the rule. This was followed by a delay period (variable, but for most experiments it was 0.4 s) before target stimuli presentation. All block structure was removed and trial type was randomized. Particular steps were taken throughout the training and testing periods to ensure that mice used the rules for sensory selection (see Supplementary Discussion 2). For the 4AFC task, the first two training steps were identical to the 2AFC training, except that auditory stimuli consisted of tone clouds (interleaved pure tones (50 ms per tone over 200 ms, 36 tones total) spanning a frequency range of 1–15 kHz) directed to the left or right ear of the mouse to indicate the side of reward delivery. In the third stage, mice were trained to recognize the difference between visual and auditory response port positions. Initially, only two reward ports were available while access to the response ports associated with the non-target modality was restricted. All other parameters were as previously described in the 2AFC. Once mice successfully oriented to both target types (about two weeks), all four response ports were made available for subsequent training. Choosing a response port of the wrong modality was punished by a brief air puff delivered directly to the response port. Mice remained on this paradigm until they reached a performance criterion of 70% accuracy on both modalities. During the fourth training stage, sensory conflict trials were introduced using the same parameters as in the 2AFC.
Trial types and locations were randomized (spatial conflict was also random). Responses were scored as correct or one of three different error types (see confusion matrix in Extended Data Fig. 2e). A total of four mice were trained. A pair of electrostatic speakers (Tucker Davis Technologies) producing the auditory stimuli were placed outside of the training apparatus and sound stimuli were conveyed by cylindrical tubes to apertures located at either side of the initiation port, allowing stereotypical delivery of stimuli across trials. Trial availability was indicated by a light positioned at the top of the box and trial initiation required a 200-ms continuous interruption of the IR beam in the initiation port to ensure that the animal's head was properly positioned to hear the stimuli. Following trial initiation, a second port (the response port) was opened and a pure tone stimulus was played. A 20-kHz tone signalled a ‘Go’ response, whereas frequencies above or below 20 kHz signalled a ‘No-go’ response. The pure tone stimuli were presented for 300 ms before response time, and were pseudo-randomly varied on a trial-by-trial basis, with trials divided between the Go stimulus (approximately 40% of trials) and two No-go stimuli (16 and 24 kHz, approximately 30% of trials per frequency). After stimulus presentation, the response port was made accessible for a 3-s period. In Go trials, correct poking within the trial period (hit) rendered the reward port accessible, and reward was subsequently delivered upon poking. For a ‘miss’ in which the mouse failed to poke within the 3-s period, the reward port remained inaccessible. For a ‘correct rejection’, which involved withholding a response when No-go stimuli were played, the reward port was made accessible at the end of the 3-s period. For a ‘false alarm’, which involved a poke in the response port on a No-go trial, the reward port remained inaccessible and the next trial was delayed by a 15-s time-out, as opposed to the regular 10-s inter-trial interval. For electrophysiological recordings and experiments with optical manipulation, testing conditions were equivalent to the final stage of training. The first cohort of PFC recordings involving ‘manipulation-free mice’ included three C57BL/6 wild-type mice and one VGAT-cre mouse. The VGAT-cre mouse in this cohort, which was also used for experiments involving PFC manipulations, was initially run for an equivalent number of laser-free sessions as the three wild-type mice before any manipulation. This design was used to confirm equivalence in electrophysiological findings across genotypes, and to strengthen the overall conclusions drawn by using transgenic animals. Equivalence across genotypes can be readily appreciated by comparing the four principal component analysis (PCA) plots in Extended Data Fig. 1j. For laser sessions, laser pulses of either blue (473 nm for ChR2 activation) or yellow (560 nm for eNpHR3.0 activation) light at an intensity of 4–5 mW (measured at the tip of the optic fibres) were delivered pseudo-randomly on 50% of the trials. During most optogenetic experiments, laser stimulation occurred during the whole delay period (500 ms) of the task. For temporal-specific manipulations concurrent with electrophysiological recordings (Figs 1k, l, 2e, f and Extended Data Figs 4, 6), laser pulses were delivered for 250 ms either during the first half, after 100 ms (following cue presentation), or the latter half of the delay period.
In the high-resolution optogenetic inactivation experiment (Fig. 2h), laser pulses were 100 ms long, dividing the 500-ms delay period equally into five periods. During a session, only one condition was tested. For stabilized step function opsin (SSFO, hChR2(C128S/D156A)) experiments (Figs 3, 4 and Extended Data Fig. 8), a 50-ms pulse of blue (473 nm, 4 mW intensity) light at the beginning of the delay period was delivered to activate the opsins and a 50-ms pulse of red (603 nm, 8 mW intensity) light to terminate activation at the end of the delay period. Similarly, for MGB manipulations (Extended Data Fig. 10), SSFO was activated by a 50-ms pulse of blue (473 nm, 4 mW intensity) light before stimulus delivery and its activity was terminated by a 50-ms pulse of red (603 nm, 8 mW intensity) light at stimulus offset. An Omicron-Laserage lighthub system (Dudenhofen) was used for all optogenetic manipulations. For all experiments with optogenetic manipulations, only sessions where baseline performance was ≥65% correct were included in the analysis. For all behavioural testing, single-mouse statistics were initially used to evaluate significance and effect size, followed by statistical comparisons across sessions. Performance on the auditory Go/No-go task was assessed on the basis of the number of correct responses to Go stimuli (hit rate) relative to No-go stimuli (false alarm rate) and was considered sufficient if the overall discrimination index (d′ = Z(hit rate) − Z(false alarm rate)) was greater than 2 for the baseline condition. In cases where multiple groups were compared, a Kruskal–Wallis one-way analysis of variance (ANOVA) was used to assess variance across groups, followed by post hoc testing. For pairwise comparisons a Wilcoxon rank-sum test was used. Data are presented as mean ± s.e.m. and significance levels were set to P < 0.05. Injections were performed using a quintessential stereotactic injector (QSI, Stoelting). All viruses were obtained through the UNC Chapel Hill virus vector core. For PFC manipulation during electrophysiological recordings, 200 nl of AAV2-hSyn-DIO-ChR2 was injected bilaterally into the PFC of VGAT-cre mice. Bilateral injections of AAV1-hSyn-eNpHR3.0-eYFP (300 nl) were used for mediodorsal thalamus and LGN manipulations. For SSFO experiments, AAV1-CamKIIa-SSFO-GFP was injected bilaterally into either the PFC (200 nl) or the mediodorsal thalamus (400 nl). To test the effect of mediodorsal activation on functional cortical connectivity, we injected the mediodorsal thalamus with AAV1-CamKIIa-SSFO-GFP (400 nl) ipsilateral, and the PFC with AAV1-hSyn-ChR2-eYFP (200 nl) contralateral, to the recording site. Following virus injection, animals were allowed to recover for at least two weeks for virus expression to take place before the start of behavioural testing or tissue collection. Mice were deeply anaesthetized using 1% isoflurane. For each mouse, up to three pairs of optic fibres (Doric Lenses) were used in behavioural optogenetic experiments and stereotactically inserted at the following coordinates (in mm from Bregma): PFC, AP 2.6, ML ± 0.25, DV −1.25; mediodorsal thalamus, AP −1.4, ML ± 0.6, DV −1.5; LGN, AP −2.2, ML 2.15, DV 2.6. Up to three stainless-steel screws were used to anchor the implant to the skull and everything was bonded together with dental cement. Mice were allowed to recover with ad libitum access to food and water for one week, after which they were brought back to food regulation and behavioural training resumed.
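For reference, a small sketch of the discrimination index used above, d′ = Z(hit rate) − Z(false-alarm rate); this is not the authors' code, and the trial counts in the example call are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # keep rates away from 0 and 1 so the inverse-normal transform stays finite
    hit_rate = min(max(hit_rate, 1e-3), 1 - 1e-3)
    fa_rate = min(max(fa_rate, 1e-3), 1 - 1e-3)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=85, misses=15, false_alarms=10, correct_rejections=90))  # approx. 2.3
```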
A 473-nm laser was used for ChR2 activation, whereas eNpHR3.0 activation was achieved with a laser with a wavelength of 561 nm. Laser intensities were adjusted to be 4–5 mW measured at the tip of the optic fibre, which was generally the minimum intensity required to produce behavioural effects. Custom multi-electrode array scaffolds (drive bodies) were designed using 3D CAD software (SolidWorks) and printed in Accura 55 plastic (American Precision Prototyping) as described previously21. Prior to implantation, each array scaffold was loaded with 12–18 independently movable microdrives carrying 12.5-μm nichrome (California Fine Wire Company) stereotrodes or tetrodes. Electrodes were pinned to custom-designed, 96-channel electrode interface boards (EIB, Sunstone Circuits) along with a common reference wire (A-M Systems). For combined optogenetic manipulations and electrophysiological recordings of the PFC, optic fibres delivering the light beam laterally (45°-angled tips) were embedded adjacent to the electrodes (Extended Data Fig. 3g). In the case of combined optogenetic PFC manipulations with mediodorsal recordings, the optic fibre was placed away from the electrodes at the appropriate spatial offset. For combined unilateral multi-site recordings of PFC and mediodorsal thalamus (four mice) with SSFO manipulations, two targeting arrays (0.5 × 0.5 mm for PFC and 0.5 × 0.35 mm for mediodorsal) were separated by 3.2 mm along the AP axis. For SSFO manipulations, optic fibres delivering a lateral light beam were implanted directly next to the array targeting either PFC or mediodorsal thalamus. To test the effect of mediodorsal activation on functional cortical connectivity, a single electrode array was targeted to the PFC unilaterally, whereas a 400-μm core optic fibre (Doric Lenses) was targeted to the contralateral PFC. In addition, a 200-μm core optic fibre was placed 2.8 mm behind the electrode array for activating SSFO in the ipsilateral mediodorsal thalamus. Similarly, to interrogate the same question in a sensory thalamocortical circuit, an electrode array was implanted unilaterally into V1 and an additional 400-μm core optic fibre (Doric Lenses) was targeted to the contralateral V1. In addition, a 200-μm core optic fibre was placed 0.5 mm anterior to the electrode array for activating SSFO in the ipsilateral LGN. During implantation, mice were deeply anaesthetized with 1% isoflurane and mounted on a stereotaxic frame. A craniotomy was drilled centred at AP 2 mm, ML 0.6 mm for PFC recordings (approximately 1 × 2.5 mm), at AP −3 mm, ML 2.5 mm for V1 (1.5 × 1.5 mm) or at AP −1 mm, ML 1.2 mm for mediodorsal recordings (approximately 2 × 2 mm). The dura was carefully removed and the drive implant was lowered into the craniotomy using a stereotaxic arm until stereotrode tips touched the cortical surface. Surgilube (Savage Laboratories) was applied around electrodes to guard against fixation through dental cement. Stainless-steel screws were implanted into the skull to provide electrical and mechanical stability and the entire array was secured to the skull using dental cement. Signals from stereotrodes (cortical recordings) or tetrodes (thalamic recordings) were acquired using a Neuralynx multiplexing digital recording system (Neuralynx) through a combination of 32- and 64-channel digital multiplexing headstages plugged into the 96-channel EIB of the implant. Signals from each electrode were amplified, filtered between 0.1 Hz and 9 kHz and digitized at 30 kHz.
For thalamic recordings, tetrodes were lowered from the cortex into the mediodorsal thalamus over the course of 1–2 weeks, where recording depths ranged from −2.8 to −3.2 mm DV. For PFC recordings, adjustments accounted for the change of depth of PFC across the AP axis. Thus, in anterior regions, unit recordings were obtained at −1.2 to −1.7 mm DV, whereas for more posterior recordings electrodes were lowered to −2 to −2.4 mm DV. Following acquisition, spike sorting was performed offline on the basis of the relative spike amplitude and energy within electrode pairs using the MClust toolbox (http://redishlab.neuroscience.umn.edu/mclust/MClust.html). Units were divided into fast spiking and regular spiking on the basis of the waveform characteristics as previously described21. In brief, the peak to trough time was measured in all spike waveforms, and showed a distinct bimodal distribution (Hartigan’s dip test, P < 10^−5). These distributions separated at 210 μs, and cells with peak to trough times above this threshold were considered regular-spiking neurons and those with peak to trough times below this threshold were considered fast-spiking cells (Extended Data Fig. 1g). The majority of cells (2,727) in PFC recordings were categorized as regular spiking, whereas approximately one-third (909) were categorized as fast spiking. For histological verification of electrode position, drive-implanted mice were lightly anaesthetized using isoflurane and small electrolytic lesions were generated by passing current (10 μA for 20 s) through the electrodes. All mice were then deeply anaesthetized and transcardially perfused using phosphate-buffered saline (PBS) followed by 4% paraformaldehyde. Brains were dissected and postfixed overnight at 4 °C. Brain sections (50 μm) were cut using a vibratome (Leica) and fluorescent images were obtained on a confocal microscope (LSM800, Zeiss). Confocal images are shown as maximal projections of 10 confocal planes, 20 μm thick. For all PFC and mediodorsal neurons, changes in firing rate associated with task performance were assessed using peri-stimulus time histograms (PSTHs). PSTHs were computed using a 10-ms bin width for individual neurons in each recording session4, convolved with a Gaussian kernel (25 ms full-width at half-maximum) to create a spike density function (SDF)31, 32, which was then converted to a z score by subtracting the mean firing rate in the baseline (500 ms before event onset) and dividing by the variance over the same period. For comparison of overall firing rates across conditions, trial number and window size were matched between groups. Homogeneity of variance for firing rates across conditions was determined using the Fligner–Killeen test for homoscedasticity33. For comparisons of multiple groups, a Kruskal–Wallis one-way ANOVA was used to assess variance across groups before pairwise comparisons. A total of 3,444 single units were recorded within the PFC and 974 single units were recorded in the mediodorsal thalamus across animals. Overall assessment of firing rates during the task delay period showed that individual regular-spiking PFC neurons did not exhibit sustained increases in spiking relative to baseline (population shown in Extended Data Fig. 1) and a comparison of variance homoscedasticity (Fligner–Killeen test) did not reveal changes in variance. In a subset of cells, however, a brief enhancement of spike-timing consistency at a defined moment in the delay period was observed (Fig. 1b). To formally identify these neurons we used the following steps.
First, periods of increased consistency in spike-timing across trials were identified using a matching-minimization algorithm34. This approach was used to determine the best moments of spike time alignment across trials (candidate tuning peaks). The number of these candidate tuning peaks (n) was based on firing rate values during the delay period for each neuron. n was obtained by minimizing the equation, where n_k is the number of observed spikes in trial k. As such, the initial (and maximum) number of candidate peaks is equal to the median number of spikes observed across trials. With an initial number of candidate peaks in hand, their times were subsequently estimated. These times were initially placed randomly within the delay window, and iteratively adjusted to obtain the set of final candidate peak times. The result of this iterative process was the solution to the equation in which the set of final candidate peak times S is obtained by iteratively minimizing the temporal distance between the candidate peak times in each iteration, C, and the observed spike times across trials, S_k, on the basis of a penalty associated with increased temporal distance, computed across all trials k. In the first step, temporal adjustment for each candidate peak time was based on finding the local minimum of the temporal distance function, d (as described in ref. 34), after which spikes were adjusted by linear interpolation. In brief, neighbouring spike times across trials were sorted by their temporal offset to a given candidate peak time, and their linear fit was computed. Each candidate peak time was then moved to the midpoint of that fitted line, to achieve a local minimum. In a second step, cost minimization was jointly computed for all putative peaks using the Lagrange multiplier solution to the global minimization equation34 and intervals between peak times were adjusted on the basis of this global minimum. Both the local and the global minimization steps were iterated until the spike-time variance, defined as the sum of the squared distances between spikes across trials, converged and a set of final candidate peak times was determined. Next, to identify genuine tuning peaks, we applied two further conditions. First, for 75% of the trials, at least one spike was required to fall within ±25 ms of each final candidate peak time. This conservative threshold was based on the median firing rates observed during the delay (around 10 Hz), which predict that inter-trial spike distances would be greater than 50 ms if spikes were randomly distributed, making it highly improbable to fulfil this condition by chance. Second, these candidate peaks needed to have z-score values of >1.5 (equivalent to a one-sided test of significance) to be considered genuine tuning peaks. The z score of spiking across trials during the delay was computed relative to the pre-delay 500-ms baseline (10-ms binning, convolved with a 25-ms full-width at half-maximum Gaussian kernel). Obtaining a genuine tuning peak identified a unit as task-modulated, which was subsequently used for most analyses in this study. The vast majority of units only showed a single tuning peak using this method. Independent validation of this method is discussed in Supplementary Discussion 1. To estimate the extent to which task-modulated units differentially encode task rules, a PCA was first performed as described previously10. Next, linear regression was applied to define the two orthogonal, task-related axes of rule type and movement direction.
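A minimal sketch (Python, not the authors' code) of the first tuning-peak criterion described above: at least one spike within ±25 ms of a candidate peak time on at least 75% of trials. Variable names and the seconds-based time units are assumptions.

```python
import numpy as np

def peak_is_reliable(spike_times_per_trial, peak_time, window=0.025, min_frac=0.75):
    """spike_times_per_trial: list of arrays of spike times (s) relative to delay onset."""
    hits = [np.any(np.abs(np.asarray(st) - peak_time) <= window)
            for st in spike_times_per_trial]
    return np.mean(hits) >= min_frac
```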
These analyses were performed on neural z-score time-series, separately for each comparison (trials separated by rule type or movement direction). In brief, a data matrix X of size N_units × (N_conditions × T) was constructed, in which columns corresponded to the z-scored population response vectors for a given task rule or movement direction at a particular time (T) within the 1-s window following task initiation. This window size was chosen to provide sufficient samples for analysis, but only the delay period data were examined for this study. The contribution of each principal component to the population response across time was quantified by projecting the trial-type-specific z-score time-series (for example, attend to vision rule) onto individual principal components and computing the variance. The first principal component was used for all subsequent analyses as subsequent principal components were found to be uninformative in the initial analysis. Multi-variable linear regression was applied to determine the contribution of task rule and subsequent movement to principal component divergence across time for the corresponding trial-type comparisons. Specifically, linear analysis related the response of unit i at time t to a linear combination of these two task variables using the following equation, where r_i(k) is the z-score response of neuron i in trial set k for each task variable (movement and rule). The regression coefficients (β) were used to describe the extent to which z-score time-series variation in the firing rate of the unit at a given time point describes a particular task variable. This analysis was generally only applied to correct responses. Regression coefficients were then used to identify dimensions in state space corresponding to variance across neural response data for the two task variables. Vectors of these coefficients across z-score time-series matrices separated by trial types (for example, rule 1 versus rule 2) were projected onto subspaces spanned by the previously identified principal component. We next constructed task-variable axes using QR decomposition to identify principal component separation associated with each task variable (v). To identify movement along these axes for each population response, their associated z-score time-series were projected onto these axes across time, where X_c is the population vector for trial type c. This projection resulted in two time-series vectors p for each task variable that compared movement across trial types (rule 1 versus rule 2; right versus left) on their corresponding axes. The difference between these two time-series was used as the main metric for information (task rule or movement) in this study. For evaluating rule information in error trials when their number permitted analysis (>20 error trials; based on empirical assessment of minimum trial numbers required for principal component divergence), trial type axes obtained from correct trials were multiplied by −1 to reverse directionality. The significance bounds for all time-series were obtained using random subsampling and bootstrapping (around 60% of total neurons per bootstrap, 200 replications). The 95% confidence bounds at each time point were then estimated on the basis of the resulting distribution. To test our inference that rule information was related to tuning peaks, task-modulated spike times were randomly jittered by 500 ms and the PCA was repeated.
This resulted in a loss of rule-information-related principal component divergence, validating our inference. To obtain a quantitative estimate of peak fidelity across multiple trials, an internal neural synchrony measurement35 was modified for short-term synchrony associated with identified peaks. This approach was applied to spike trains associated with differing task conditions and responses. Each spike within the train was convolved with a Gaussian kernel with a 9-ms half width. Trials were then summed and divided by the kernel peak size and trial number, giving a maximum value (for perfect alignment) of one at any point. Convolution vector values around the tuning peak in the baseline condition were compared to the value within the same time window in the other condition. To compute cross-correlation histograms (cross-correlograms), the MATLAB function ‘crosscorr’ was applied to whole-session spike trains from pairs of cells. Continuous traces at a 1-kHz sampling rate were first generated on the basis of the spike times, with times at which spikes occurred set to one and all other times to zero. Crosscorr was then applied to trains from all possible cell pairs, using a maximum lag time of ±50 ms. The significance of a cross-correlogram was determined by randomly jittering all spike times independently and re-computing the cross-correlogram. Jitter values were drawn from a Gaussian distribution centred at zero with a s.d. of 3 ms. This process was repeated 100 times for each pair, and if the observed peak cleared the 95% confidence bounds of all shuffled sets, the pair was determined to have a significant cross-correlation. Pairs of cells were grouped as follows: the control group was composed of pairs in which only the first cell was rule-tuned. The test group was composed of pairs in which both cells were tuned. This test group was further broken down into two subgroups: one in which both cells responded to the same rule and one in which the cells responded to different rules. Within these groups, co-modulation was defined as the number of significant cross-correlograms divided by the total number of cross-correlograms. After overall group comparison using a χ2 test, proportion differences were statistically evaluated in a post hoc pairwise fashion using binomial proportion tests. To examine the effect of tuning to the same rule on co-modulation strength, the distributions of cross-correlogram peak heights were also compared for the groups of pairs described above. An empirical CDF (cumulative distribution function) was constructed using the peak heights of each group, and these distributions were compared using a signed-rank test. Finally, the relationship between cross-correlogram peak height and inter-alignment time was explored. The inter-alignment times among neuronal pairs tuned to the same rule were calculated by taking the difference in spike alignment times of each pair. To more effectively assess putative monosynaptic connections, the significant cross-correlograms between tuned pairs were also re-computed at a 50-μs resolution. Significance thresholding at this resolution was repeated by determining whether a sequence of two or more successive bins of the adjusted trace, which exceeded two standard deviations of the overall trace, occurred within 10 ms of the centre bin19. Cross-correlograms containing such outliers were further characterized on the basis of their peak times. Those with peaks at 300 μs or later were categorized as putative monosynaptic connections18, 19.
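The jitter-based significance test for cross-correlograms described above can be sketched roughly as follows. This is a simplified variant, not the authors' implementation: here the observed cross-correlogram peak is compared with the 95th percentile of peaks from spike trains jittered with a 3-ms-s.d. Gaussian, and spike times are assumed to be NumPy arrays in seconds.

```python
import numpy as np

def to_train(times, duration, fs=1000):
    """Binary spike train at 1 kHz built from spike times (s)."""
    train = np.zeros(int(duration * fs))
    idx = np.clip((times * fs).astype(int), 0, train.size - 1)
    train[idx] = 1
    return train

def xcorr_peak(a, b, max_lag=50):
    """Peak of the cross-correlogram within +/-max_lag bins (circular for simplicity)."""
    return max(np.dot(a, np.roll(b, lag)) for lag in range(-max_lag, max_lag + 1))

def pair_is_significant(times_a, times_b, duration, n_shuffles=100, jitter_sd=0.003, seed=0):
    rng = np.random.default_rng(seed)
    observed = xcorr_peak(to_train(times_a, duration), to_train(times_b, duration))
    shuffled = [xcorr_peak(to_train(times_a + rng.normal(0, jitter_sd, times_a.size), duration),
                           to_train(times_b + rng.normal(0, jitter_sd, times_b.size), duration))
                for _ in range(n_shuffles)]
    return observed > np.percentile(shuffled, 95)
```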
Among these putative connections, the pairs were split into two groups: those that were tuned to the same rule, and those that were tuned to opposite rules. To compare peak strength, spike probability was estimated by subtracting a shuffled distribution of spike times with the same average firing rate as the postsynaptic neuron and dividing by the number of spikes in the presynaptic neuron17. The distributions of the resulting peak strengths among same-rule and opposite-rule putative monosynaptic connections were compared using the Kolmogorov–Smirnov test. Finally, the peak strengths of these pairs were plotted against their inter-alignment time. As in the above analysis, only same-rule pairs were included. To further assess the degree of rule representation in the PFC and mediodorsal thalamus, we applied two population decoding approaches, the maximum correlation coefficient (MCC) and Poisson naive Bayes (PNB) classifiers, as implemented in the neural decoding toolbox36. These analyses were applied to all tuned neurons recorded from either structure, which were pooled into a pseudo-population for each structure (n = 604 neurons in the PFC and n = 156 neurons in the mediodorsal thalamus). For MCC decoding, firing rate response profiles in individual correct trials associated with each rule were preprocessed by converting them to a z score using the mean and variance in the corresponding trial to prevent baseline spike-rate differences from affecting classification37. For PNB classification, neuron spiking activity was modelled as a Poisson random variable with each neuron’s activity assumed to be independent. Trial-specific z scores (MCC) or spike counts (PNB) from these pseudo-populations were then repeatedly and randomly subsampled (200 resampling runs) and divided into training and test subsets (six training and two test trials per recording session across n = 360 PFC and n = 116 mediodorsal sessions). For each subsampling, the classifier was trained using the training subset to produce a predictive mean response template for each rule (i). Templates were constructed separately for 100-ms overlapping windows across the trace (step size = 20 ms) and classifiers trained for each template. The windowed classifiers allowed us to estimate the temporal evolution of information in the population. In the cross-validation step, these templates were used to predict the class for each test trial in the test set (x*) by maximizing the correlation decision function in the case of MCC or the log-likelihood decision function in the case of the PNB classifier38. Finally, we estimated the predictive strength of population activity at each time point, that is, the extent to which activity in that time bin predicts the trial type, as the average of the correct predictions in the test set. To determine the variability of this estimate, a bootstrapping procedure was applied in which 25% of neurons were subsampled from the overall population and the same procedure was repeated (50 resampling runs). The resulting traces were used to estimate the 95% confidence intervals of the initial estimate from the full population. To determine the degree of causal connectivity in the ensemble of recorded neurons within the PFC or their counterpart in our simulated network, we used the Wiener–Granger vector autoregressive (VAR) causality analysis as implemented in the multivariate Granger causality toolbox (MVGC)25.
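A toy version (Python, not the neural decoding toolbox) of the maximum-correlation-coefficient classification scheme described above: mean response templates are built from the training trials for each rule, and each test trial is assigned to the template with which it is most correlated. Array shapes and names are assumptions.

```python
import numpy as np

def mcc_decode(train_trials, train_labels, test_trials):
    """train_trials, test_trials: (n_trials, n_neurons) arrays; train_labels: array of rule ids."""
    classes = np.unique(train_labels)
    templates = {c: train_trials[train_labels == c].mean(axis=0) for c in classes}
    predictions = []
    for x in test_trials:
        corrs = {c: np.corrcoef(x, t)[0, 1] for c, t in templates.items()}
        predictions.append(max(corrs, key=corrs.get))   # most correlated template wins
    return np.array(predictions)
```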
Spike train data from each recorded or simulated neuron within a session were converted to a continuous signal by binning in 1-ms increments39, 40 and convolving the resulting signal with a Gaussian filter (half width 5 ms). For all neurons in individual sessions, this analysis used 500-ms segments either within the delay period (delay) or just before (task engagement), along with an equal number of randomly selected segments recorded outside of the behavioural environment (out of task). For assessment of laser effects, a matched number of correct trials in the laser and non-laser condition were compared for each recording session across neurons. To improve stationarity in the signal, segments were adjusted by subtracting the mean and dividing by the s.d. of each segment39, 41 and stationarity was checked by determining whether the spectral radius of the estimated full model was less than one25. All models met this stationarity criterion. Model order was estimated empirically for each subset using Bayesian information criteria, after which VAR model parameters were determined for the selected model order. On the basis of the resulting parameters, time-domain conditional Granger causality measurements were calculated for each cell pair across all trials. Causal density for a given condition in each session was taken as the mean pairwise-conditional causality25. To assess the effect of changes in thalamic excitability on cortical connection strength, we measured intra-cortical responses evoked by ChR2-mediated activation of the contralateral cortex for V1/LGN (94 neurons in two mice) and PFC/mediodorsal thalamus (96 neurons in three mice). Responses to either cortical stimulation alone (10 ms ChR2 activation of the contralateral cortex), thalamic activation alone (500 ms SSFO activation in ipsilateral LGN or mediodorsal thalamus) or the combination were recorded in V1 and PFC (100 interleaved trials per condition). For the combined condition, thalamic activation preceded cortical stimulation by 100 ms.
Network structure and dynamics. We constructed a model that consisted of excitatory (regular-spiking) and inhibitory (fast-spiking) PFC neurons as well as mediodorsal neurons. Within the PFC, regular-spiking cells formed subnetworks representing each task rule, each consisting of multiple interconnected chains. Neurons in each of these chains were locally connected to their nearest neighbour within the chain as well as to other chains within the same subnetwork. While neurons representing different rules were connected, connections were made stronger within each subnetwork (for example, among neurons representing the same rule) on the basis of our cross-correlation experimental data. Regular-spiking neurons of either rule sent overlapping projections to mediodorsal neurons and received reciprocal inputs from the mediodorsal thalamus. Mediodorsal inputs were modulatory with a longer time constant than for the PFC (1 ms versus 10 ms), and resulted in increased spiking of fast-spiking neurons (direct synaptic drive, w = 0.6) while providing an amplifying input (factor, 1.6×) to connections between regular-spiking neurons (regardless of rule tuning). During rule encoding, the arrival of input attributed to one rule simultaneously activated the starter neuron (first neuron in a chain) in chains encoding that rule, engaging mediodorsal neurons and enhancing their firing through synaptic convergence.
In turn, mediodorsal neurons enabled signal propagation that was specific to that rule by amplifying currently active regular-spiking neuronal connections, while preventing irrelevant synchrony elsewhere through augmented inhibition.
Spiking neuron model. We employed the leaky integrate-and-fire (LIF) model to simulate both of the network paradigms described above. LIF is a simplified spiking neuron model that is frequently used to mathematically model the electrical activity of neurons. The evolution of the membrane voltage of neuron j using the LIF equation is as follows, where C is the membrane capacitance, V_j is the jth neuron’s membrane voltage and α is the leak conductance (α = 0.95). I_ext is an externally applied current with amplitude taken independently for each neuron from a uniform distribution (μ = 0.825, s.d. = 0.25 for PFC and mediodorsal neurons). I_syn is the synaptic input to cell j, defined as follows, where ω represents the strength of the connection between presynaptic cell i and the postsynaptic neuron j; A is the connectivity matrix that denotes the connectivity map; τ is the spike duration (1 ms in our simulations); and H(t) is a Heaviside function that is zero for negative values (t < τ) and one for positive values (t > τ). In this model the voltage across the cell membrane grows and, after it reaches a certain threshold (V_th = 1), the cell fires an action potential and its membrane potential is reset to the reset voltage. Here, the resting potential (E) and the reset potential (V_reset) are set to zero. The neuron enters a refractory period (T_ref = 1.5 ms) immediately after it reaches the threshold (V = V_th) and spikes. To integrate the LIF equation, we used the Euler method with a step size of Δt = 0.01 ms. To reproduce the spontaneous activity of the network, we introduced noise that arrives randomly at each cell with a predefined probability (f = 10 Hz). For each statistical analysis provided in the manuscript, an appropriate statistical comparison was performed. For large sample sets, the Kolmogorov–Smirnov normality test was first performed on the data to determine whether parametric or non-parametric tests were required. Variance testing for analyses involving comparisons of firing rates under differing behavioural conditions and following optogenetic manipulations was done using the Fligner–Killeen test of variance homoscedasticity. For small sample sizes (n < 5) non-parametric tests were used by default. Two different approaches were used to calculate the required sample size. For studies in which sufficient information on response variables could be estimated, power analyses were performed to determine the number of mice needed. For studies in which the effect size could be estimated, the required sample size was calculated using power analysis in MATLAB (sampsizepwr) with a β of 0.7 (70%). For studies in which the behavioural effect of the manipulation could not be prespecified, including optogenetic experiments, we used a sequential stopping rule42. This method enables null-hypothesis tests to be performed in sequential stages, by analysing the data at several experimental points using non-parametric pairwise testing. In these cases, the experiment initially uses a small cohort of mice which are tested over multiple behavioural sessions. If the P value for the trial comparison across mice falls below 0.05, the effect is considered significant and the cohort size is not increased.
If the P value is greater than 0.36 following four sessions that met criteria, the investigator stopped the experiment and retained the null hypothesis. Using this strategy, the required number of animals was determined to be between three and five animals per cohort across testing conditions. For multiple comparisons, a non-parametric ANOVA (Kruskal–Wallis H-test) was performed followed by pairwise post hoc analysis. All post hoc pairwise comparisons were two-sided. No randomization or investigator blinding was done for experiments involving electrophysiology. Blinding was used for experiments involving SSFO and behaviour (mediodorsal versus PFC). All computer code used for analysis and simulation in this study was implemented in MATLAB computing software (MathWorks). Code will be made freely available to any party upon request. Requests should be directed to the corresponding author. The data that support the findings of this study are available from the corresponding author upon reasonable request.
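As an illustration of the leaky integrate-and-fire dynamics used in the spiking neuron model described above (Euler integration with Δt = 0.01 ms, α = 0.95, V_th = 1, V_reset = E = 0, T_ref = 1.5 ms), here is a minimal single-neuron sketch in Python rather than the authors' MATLAB code; the capacitance and the constant input current are placeholders, and the synaptic and noise terms of the full network model are omitted.

```python
def simulate_lif(I_ext, T=200.0, dt=0.01, C=1.0, alpha=0.95, v_th=1.0, v_reset=0.0, t_ref=1.5):
    """Simulate a single LIF neuron for T ms and return its spike times (ms)."""
    v, refractory, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        if refractory > 0:
            refractory -= dt
            continue
        v += dt / C * (-alpha * v + I_ext)   # dV/dt = (-alpha*(V - E) + I_ext)/C, with E = 0
        if v >= v_th:                        # threshold crossing: spike, reset, refractory period
            spikes.append(t)
            v = v_reset
            refractory = t_ref
    return spikes

print(len(simulate_lif(I_ext=1.2)))  # number of spikes in 200 ms for a suprathreshold input
```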


One key objective of quantum field theories is to capture the essential physics of complex quantum many-body systems in terms of collective degrees of freedom13, 14. The propagation of and interactions between the collective degrees of freedom of quantum many-body systems are encoded in the correlation functions G^(N)(z_1, …, z_N) = 〈O(z_1) … O(z_N)〉, where z = (z_1, …, z_N), O(z_i) is the Heisenberg operator that describes the collective degrees of freedom evaluated at a general coordinate z_i (here, the coordinate of our one-dimensional system), the angle brackets denote the quantum mechanical expectation value and N is the order of the correlation. In the absence of interactions between these degrees of freedom, all information is contained in the second-order correlation function G^(2) (refs 3, 4). Higher-order correlations G^(N) with N > 2 fully factorize: they can be calculated using the Wick decomposition3, 15, a sum containing products of only G^(N′) with N′ ≤ 2. In this case, the quantum many-body states are Gaussian—that is, they are fully characterized by their first and second moments (mean and variance). Determining, for an interacting system, the collective degrees of freedom that lead to complete factorization corresponds to solving the quantum many-body problem. Observing only approximate factorization points to solutions derived from perturbation theory. More generally, in the presence of interactions between the collective degrees of freedom, G^(N) can be decomposed3, 16 into a disconnected and a connected part, G^(N) = G^(N)_dis + G^(N)_con. The first term, G^(N)_dis, is the disconnected part of the correlation function. It is fully determined by all of the lower-order correlation functions G^(N′) with N′ < N and so does not contain new information at order N. The second term, G^(N)_con, is the connected part of the correlation function and contains genuinely new information about the system at order N. Complete factorization of higher-order correlation functions is therefore equivalent to G^(N)_con = 0 for all N > 2. In a diagrammatic expansion, G^(N)_con is given by a sum of fully connected diagrams with N external lines3, 4. Whereas failure of the Wick decomposition can indicate only the presence of interactions, determining up to which order N the connected correlation function can be reliably estimated gives a direct handle on the level of complexity of the underlying quantum many-body system that is accessible in a given experiment. With the rapid progress in quantum gas experiments17, measuring higher-order correlation functions18, 19, 20, 21 is now possible. To illustrate the power of the above concepts for analysing a non-trivial interacting quantum many-body system, we experimentally investigate two tunnel-coupled one-dimensional (1D) bosonic superfluids, realized with quantum-degenerate 87Rb atoms trapped in a double-well potential with a freely adjustable barrier (Fig. 1)22. Matter–wave interferometry23, 24, 25 provides direct access to the spatially resolved relative phase φ(z) between the superfluids (see Methods and Extended Data Fig. 2). Tunnelling through the double-well barrier drives the relative phase φ(z) towards zero. The strength of this ‘phase locking’ is characterized by 〈cos(φ)〉, a quantity that is zero for completely random phases (no phase locking) and approaches unity in the limit of strong phase locking. The value of 〈cos(φ)〉 depends on the strength of the tunnel coupling and on the temperature22.
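For illustration, the Wick decomposition of the fourth-order function for a Gaussian state with vanishing first moments reads as follows (a standard textbook identity, not an equation quoted from this paper):

```latex
G^{(4)}(z_1,z_2,z_3,z_4) = G^{(2)}(z_1,z_2)\,G^{(2)}(z_3,z_4)
                         + G^{(2)}(z_1,z_3)\,G^{(2)}(z_2,z_4)
                         + G^{(2)}(z_1,z_4)\,G^{(2)}(z_2,z_3)
```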
From the measured phase profiles φ(z) we extract the Nth-order correlation functions of the phase by evaluating G^(N)(z, z′) with coordinates z = (z_1, …, z_N) and z′ = (z′_1, …, z′_N) along the length of the system, and with the brackets denoting averaging over many experimental realizations; see Methods for details on calculating G^(N)_dis and G^(N)_con. In Fig. 2 we show experimental data for the full fourth-order correlation function G^(4)(z, z′)—its disconnected and connected parts—for different strengths of the phase locking between the superfluids. The superfluids are prepared by slow evaporative cooling into the double-well potential, with the aim of creating a thermal equilibrium state. In both limits, 〈cos(φ)〉 ≈ 0 (uncoupled superfluids) and 〈cos(φ)〉 ≈ 1 (strongly coupled superfluids), the connected part vanishes (Fig. 2a, c). The full fourth-order correlation function is given by its disconnected part, calculated from the second-order correlation function; that is, the fourth-order correlation function factorizes. For intermediate phase locking (Fig. 2b), the fourth-order function cannot be described by second-order functions alone, and a substantial connected part remains. We now compare our observations with predictions for thermal states of the sine-Gordon model, which has been proposed as an effective description of the relative degrees of freedom of two tunnel-coupled 1D bosonic superfluids10. Following ref. 10 (see Supplementary Information for details), the Hamiltonian is H_SG = ∫dz [g δρ(z)^2 + (ħ^2 n/4m)(∂_z φ(z))^2 − 2ħJn cos(φ(z))], where δρ(z) describes the relative density fluctuations and φ(z) is the relative phase (see Fig. 1). These fields represent canonically conjugate variables that fulfil appropriate commutation relations. The parameter m is the mass of the atoms, g is the 1D interaction strength, J is the tunnel-coupling strength between the superfluids with equal 1D densities n and ħ is the reduced Planck constant. The correlation functions in equation (3) reflect the correlations in the collective degrees of freedom, constructed from the conjugate fields δρ and φ. The connected correlation function for N > 2 is therefore a direct measure of their interactions. In contrast, the more commonly used correlation functions and their higher-order generalizations21 contain contributions up to arbitrary order even for the second-order function, and so are not suitable for studying these interaction properties (see Methods and Supplementary Information). H_SG nicely reflects the observations in Fig. 2. For 〈cos(φ)〉 ≈ 0, which corresponds to J ≈ 0, only the first part of H_SG, the quadratic Tomonaga–Luttinger Hamiltonian26, 27, 28, remains, leading to Gaussian thermal states that are characterized by a vanishing connected correlation function for N > 2. For 〈cos(φ)〉 ≈ 1, we can replace the cosine in the Hamiltonian in equation (4) by its harmonic approximation29, which also leads to a quadratic Hamiltonian and Gaussian fluctuations. For intermediate phase locking (intermediate 〈cos(φ)〉), we have to consider the full cosine potential, which leads to a non-vanishing fourth-order connected correlation function. The Hamiltonian (equation (4)) represents an effective low-energy description of the underlying microscopic degrees of freedom and processes. Theoretically, the insensitivity to details of the underlying micro-physics can be efficiently phrased in terms of relevant and irrelevant operators of the model used.
The factorization observed for strong and vanishing tunnel coupling provides an experimental demonstration that the contributions from a large set of possible irrelevant operators renormalize to zero in the low-energy effective theory that describes thermal equilibrium. For a quantitative comparison between experiment and equilibrium sine-Gordon theory, we first estimate the density n and the temperature T of our samples from independent measurements (see Methods). We then numerically calculate the theoretical prediction for the higher-order correlation functions (see Methods and Supplementary Information). We compare theory and experiment using the measure plotted in Fig. 3 as a function of the phase-locking strength, quantified by 〈cos(φ)〉. The experimental results for N = 4 agree well with sine-Gordon equilibrium theory. Looking at the Wick decomposition for the sixth-, eighth- and tenth-order functions, specifically whether they factorize into second-order functions, we obtain similar results (see Extended Data Figs 3, 4, 5, 6, 7). For 〈cos(φ)〉 ≈ 0 the higher-order functions can be described by second-order functions only; in the intermediate regime this is not possible; and towards 〈cos(φ)〉 ≈ 1 factorization can be achieved, but conditions become more stringent with increasing order N. Experimentally measuring the connected part of the sixth-, eighth- and tenth-order (or higher) correlation functions to investigate their factorization into all lower-order correlation functions is a much more challenging task. Factorization for very weak and very strong phase locking follows from the observed validity of the Wick decomposition for these cases. In the intermediate regime, the connected part is substantial and there is qualitative agreement between experiment and thermal equilibrium theory (see Extended Data Figs 3, 4, 5, 6, 7). These non-vanishing connected parts are a clear indication that, in our system, three-, four- and five-particle interactions are important. This finding highlights that our method could provide new access to the microscopic aspects of effective few-body dynamics that contribute to many-body dynamics. To arrive at the sine-Gordon model from the original Hamiltonian that describes two tunnel-coupled 1D superfluids, a series of approximations are made that lead to the decoupling of the symmetric and antisymmetric modes of the system (see Supplementary Information). These approximations include only terms that are second-order in δρ and ∂_z φ, and neglect mixed density–phase terms. Showing that the measured correlation functions up to tenth order (containing terms up to φ^10) are faithfully reproduced by H_SG demonstrates that the approximations needed to derive this low-energy effective theory are justified, at least in equilibrium. So far we discussed data for the system prepared by very slow cooling, which can be described by the thermal equilibrium sine-Gordon theory. Systems prepared using a final cooling speed that is a factor of ten faster (see Methods) exhibit different behaviour (Fig. 3). This contrast demonstrates that our method can differentiate between thermal and non-thermal states. For strong phase locking (〈cos(φ)〉 ≈ 1), a substantial connected part remains in the rapidly cooled sample, indicating that in the non-thermal case the cosine in the Hamiltonian in equation (4) is relevant even in this regime.
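A schematic estimator (Python, not the authors' analysis code) of the fourth-order connected part at one set of coordinates, assuming vanishing odd moments so that the disconnected part reduces to the three Wick pairings of G^(2); `samples` is a hypothetical array holding the measured field at four coordinates for each experimental realization.

```python
import numpy as np

def connected_g4(samples):
    """samples: array of shape (n_realizations, 4); columns are the field at z1..z4."""
    a, b, c, d = samples.T
    g2 = lambda x, y: np.mean(x * y)                      # second-order correlator
    g4_full = np.mean(a * b * c * d)                      # full fourth-order correlator
    g4_disconnected = (g2(a, b) * g2(c, d)
                       + g2(a, c) * g2(b, d)
                       + g2(a, d) * g2(b, c))             # Wick pairings (zero-mean case)
    return g4_full - g4_disconnected
```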
To gain insight into the mechanisms that lead to the difference between slow and fast cooling, we analyse the full distribution function of the phase differences Δφ = φ(z) − φ(z′), to which, in principle, all Nth-order phase correlation functions contribute. In Fig. 4a we show the full distributions for one particular pair of coordinates (z, z′) chosen symmetrically around the centre of the trap. For slow cooling and intermediate values of 〈cos(φ)〉, the full distribution functions of the phase differences Δφ are distinctly non-Gaussian. For strong phase locking (〈cos(φ)〉 ≈ 1), we find Gaussian full distribution functions, as anticipated from the observed validity of the Wick decomposition in this case. In contrast, for fast cooling, all coupled cases have non-Gaussian distribution functions. With increasing phase locking, distinct side peaks appear at ±2π, becoming more localized, but at the same time more suppressed. For 〈cos(φ)〉 = 0.94, we observe a Gaussian central peak (see insets of Fig. 4a) as well as a few outliers at ±2π. Studying interference patterns for individual realizations that correspond to the side peaks reveals that the phase rotates through a full circle of 2π within a short distance (see Fig. 4b). These localized kinks represent transitions between different minima of the cosine potential and can be identified as solitons of the sine-Gordon model; they are topological excitations of H_SG (equation (4)). In the case of fast cooling, these sine-Gordon solitons are frozen in, and the phase of the quantum field fluctuates around them. Such states may therefore be interpreted as topologically distinct, ‘false’ vacua30 above which the quasiparticles are being excited. The energy of these false vacua increases with the number of sine-Gordon solitons. The procedure outlined here is the basis for a general principle for extracting information from non-trivial quantum systems in an unbiased and unambiguous way. It represents an important step towards solving complex quantum many-body problems by experiment. Furthermore, higher-order correlation functions hold promise for experimental and theoretical investigations of non-equilibrium dynamics. Our method thus provides an important tool for future quantum simulators.
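One simple way to flag the 2π phase windings described above in a single measured profile is sketched below. This is an illustrative heuristic, not the authors' soliton-identification procedure; the window length and the 0.8 × 2π threshold are arbitrary choices.

```python
import numpy as np

def find_kinks(phi, z, window=5, threshold=0.8 * 2 * np.pi):
    """phi, z: 1D arrays of the relative phase and its coordinates.
    Flags positions where the phase winds by roughly 2*pi within `window` samples."""
    kinks = []
    for i in range(len(phi) - window):
        if abs(phi[i + window] - phi[i]) >= threshold:
            kinks.append(0.5 * (z[i] + z[i + window]))   # approximate kink position
    return kinks
```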


We present improved Mars Odyssey Neutron Spectrometer (MONS) maps of near-surface Water-Equivalent Hydrogen (WEH) on Mars. These maps have intriguing implications for the global distribution of "excess" ice, which occurs when the mass fraction of water ice exceeds the threshold amount needed to saturate the pore volume in normal soils. We have refined the crossover technique of Feldman et al. (2011) by using spatial deconvolution and Gaussian weighting to create the first globally self-consistent map of WEH. At low latitudes, our new maps indicate that WEH exceeds 15% in several near-equatorial regions, such as Arabia Terra, which has important implications for the types of hydrated minerals present at low latitudes. At high latitudes, we demonstrate that the disparate MONS and Phoenix Robotic Arm (RA) observations of near-surface WEH can be reconciled by a three-layer model incorporating dry soil over fully saturated pore ice over pure excess ice: such a three-layer model can also potentially explain the strong anticorrelation of Wdn and D observed at high latitudes. At moderate latitudes, we show that the distribution of recently formed impact craters is also consistent with our latest MONS results, as both the shallowest ice-exposing crater and deepest non-ice-exposing crater at each impact site are in good agreement with our predictions of near-surface WEH. Overall, we find that our new mapping is consistent with the widespread presence at mid-to-high Martian latitudes of recently deposited shallow excess ice reservoirs that are not yet in equilibrium with the atmosphere.
Asmin V. Pathare, William C. Feldman, Thomas H. Prettyman, Sylvestre Maurice (submitted on 16 May 2017)
Comments: 65 pages, 20 figures, submitted to Icarus
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
Cite as: arXiv:1705.05556 [astro-ph.EP] (or arXiv:1705.05556v1 [astro-ph.EP] for this version)
Submission history: [v1] from Asmin Pathare, Tue, 16 May 2017 07:16:13 GMT (6261kb)
https://arxiv.org/abs/1705.05556


News Article | April 7, 2017
Site: www.techtimes.com

The evils of smoking are well known and have even prompted California to raise cigarette prices to deter people from the habit. Now, a new study finds that smoking accounts for more than 11 percent of deaths worldwide. According to the study, nearly 1 billion people around the world smoke every day, and in 2015, smoking was responsible for 11.5 percent of global deaths, roughly one in ten. Strikingly, 52.2 percent of these smoking-related deaths occurred in just four countries: China, India, the United States, and Russia. Smoking has claimed more than 5 million lives every year since 1990, and its toll is growing at an alarming rate, especially in low-income countries. To gauge the prevalence of smoking and its global effects, the researchers analyzed data on smokers from 195 countries and territories spanning 1990 to 2015. "We synthesised 2818 data sources with spatiotemporal Gaussian process regression and produced estimates of daily smoking prevalence by sex, age group, and year for 195 countries and territories from 1990 to 2015," the researchers noted. They found that the percentage of people who smoke has declined globally, but that, owing to population growth, the total number of smokers worldwide has continued to rise: prevalence is lower than it was 25 years ago, yet the absolute number of daily smokers keeps increasing. "Growth in the sheer number of daily smokers still outpaces the global decline in daily smoking rates, indicating the need to prevent more people from starting the tobacco habit and to encourage smokers to quit," said Emmanuela Gakidou, senior author of the study. In 2015, roughly 933 million people smoked every day, and about 80 percent of them were men. From 1990 to 2015, global daily smoking prevalence fell from 29.4 percent to 15.3 percent. In 2015, one in every four men worldwide smoked daily, compared with one in every 20 women. Over the same period, the daily smoking rate fell from 35 percent in 1990 to 25 percent in 2015 among men, and from 8 percent to 5 percent among women. Among the four countries with the most smoking-related deaths, China had about 254 million male smokers in 2015 and India about 91 million. The United States had the most female smokers, with 17 million in 2015, followed by China with 14 million and India with 13.5 million. The highest rate of daily smoking among women in 2015 was in Greenland, where 44 percent of women smoked every day. Gakidou stressed the importance of tobacco control programs and urged people everywhere to take the health consequences of smoking seriously. The study was published in the journal The Lancet on April 5.
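For readers unfamiliar with the method quoted above, the sketch below shows Gaussian process regression on an invented single-country prevalence series using scikit-learn. The data points and kernel are made up, and the study's spatiotemporal model (ST-GPR across 195 countries, sexes, and age groups) is far more elaborate, so this is purely illustrative.

```python
# Minimal illustration of Gaussian process regression on a prevalence time series.
# Invented data and kernel; not the study's spatiotemporal GPR model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

years = np.arange(1990, 2016).reshape(-1, 1)
prevalence = (30.0 - 0.5 * (years.ravel() - 1990)
              + np.random.default_rng(2).normal(0, 0.8, years.size))

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.5)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(years, prevalence)

query = np.arange(1990, 2021).reshape(-1, 1)
mean, std = gpr.predict(query, return_std=True)  # posterior mean and uncertainty
print(f"estimated prevalence in 2020: {mean[-1]:.1f}% +/- {std[-1]:.1f}")
```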


News Article | April 28, 2017
Site: www.cemag.us

A “chemical imaging” system that uses a special type of laser beam to penetrate deep into tissue might lead to technologies that eliminate the need to draw blood for analyses including drug testing and early detection of diseases such as cancer and diabetes. The system, called stimulated Raman projection microscopy and tomography, makes possible “volumetric imaging” without using fluorescent dyes that might affect biological functions and hinder accuracy, said Ji-Xin Cheng, a professor in Purdue University’s Weldon School of Biomedical Engineering, Department of Chemistry and Birck Nanotechnology Center. “Volumetric chemical imaging allows a better understanding of the chemical composition of three-dimensional complex biological systems such as cells,” he said. The technology uses a type of laser beam called a Bessel beam, which maintains focus for a longer distance than a traditional “Gaussian beam” used in other imaging technologies, making it possible to penetrate deep into tissue. Stimulated Raman spectroscopy eliminates the need for fluorescent dyes. The technology yields more accurate data than other methods because it allows imaging of the entire cell by “adding up” signals produced from the scanning beam, Cheng said. Because the Bessel beam makes possible deep-tissue imaging, it could lead to systems that eliminate the need to draw blood for analyses such as drug testing and detection of biomarkers for non-invasive early diagnosis of diseases, Cheng said. “This is a long-term goal,” he said. “In the meantime, much more research is needed to improve the system.” The researchers proved the concept by imaging fat storage in living cells. Findings are detailed in a research paper appearing on April 24 in the journal Nature Communications. The reported technology yields information about chemical composition, collecting a series of images while rotating the sample and reconstructing the 3-D structure through image reconstruction algorithms. The Bessel beam is produced using a pair of cone-shaped “axicon” lenses and is combined with a microscope objective. Its use for volumetric fluorescence imaging was previously demonstrated by physicist Eric Betzig, who won the Nobel Prize in chemistry in 2014 for his pioneering contribution to super-resolution fluorescence microscopy. Super-resolution technology allows researchers to resolve structural features far smaller than the wavelength of visible light, sidestepping the “diffraction limit” that normally prevents imaging of features smaller than about 250 nanometers, which is large compared to certain biological molecules and structures in cells. However, fluorescence microscopy usually requires the use of fluorescent tags, which may interfere with biological processes and hinder accuracy for determining chemical structure. Future research will include work to increase the detection sensitivity of the system and improve the imaging quality and speed. “There is plenty of room for improvement,” Cheng said. “The system is based on a bulky and relatively expensive femtosecond laser, which limits its potential for broad use and clinical translation. Nevertheless, we anticipate that this limitation can be circumvented through engineering innovations to reduce the cost and size of our technology. 
We also note that the Bessel beam can be produced using fibers, which could simplify the system and enable endoscopic applications.” The paper was authored by Xueli Chen, a visiting scholar from Xidian University in China; Purdue postdoctoral research associate Chi Zhang; Purdue doctoral students Peng Lin and Kai-Chih Huang; Xidian University researchers Jimin Liang and Jie Tian; and Cheng. The research was supported by funds from the Keck Foundation and National Institutes of Health.
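The rotate-and-project idea described above is essentially tomography. The toy sketch below (scikit-image, a 2-D phantom, filtered back-projection) illustrates that general scheme only; the paper's own reconstruction algorithm is not specified here and may well differ.

```python
# Toy illustration of projection and reconstruction: simulate projections of a 2-D
# phantom at many rotation angles, then recover the image by filtered back-projection.
# Requires a recent scikit-image (iradon's filter_name argument).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # 2-D stand-in for one slice
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # rotation angles in degrees

sinogram = radon(image, theta=angles)                  # simulated projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```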


News Article | May 5, 2017
Site: www.eurekalert.org

Researchers at the Institute of Acoustics (IOA) of the Chinese Academy of Sciences have designed and fabricated the first underwater acoustic carpet cloak based on transformation acoustics. The research was published online in Scientific Reports on April 6. An acoustic cloak is a material shell that controls the propagation direction of sound waves so that a target becomes undetectable to an acoustic system. A carpet cloak modifies the acoustic signature of the target and mimics the acoustic field of a reflecting plane, so that the cloaked target is indistinguishable from the reflecting surface. The field of transformation acoustics concerns the design of new acoustic structures and prescribes how to control the propagation of acoustic waves; the material parameters of the cloak shell follow directly from the transformation. In most cases, however, these parameters are too complex for practical use. To solve this problem, YANG Jun and his IOA team adopted a scaling factor that simplifies the structure of the carpet cloak at the cost of only a modest impedance mismatch. The research team then used layers of brass plates with small water-filled channels to construct the model cloak. This material possesses an effective anisotropic mass density in the long-wavelength regime. The structure of the carpet cloak, composed of layered brass plates, is therefore greatly simplified while introducing only a small impedance mismatch. "The carpet cloak has a unit cell size of about 1/40 of the wavelength, making it able to control underwater acoustic waves in the deep subwavelength scale," said YANG Jun. The proposed carpet cloak performed well in experiments across a wide frequency range. In tests, a short Gaussian pulse propagates towards a target bump covered with the carpet cloak and the scattered wave returns in the backscattering direction; the cloaked object successfully mimics the reflecting plane and is imperceptible to sound detection. Previously, the IOA researchers had designed and fabricated a carpet cloak in air; the results of that earlier work were published in the Journal of Applied Physics (Volume 113, Issue 2, January 2013).
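The layered brass-and-water construction works because, at long wavelengths, a stack of thin layers behaves like a homogeneous medium with a direction-dependent mass density. The sketch below evaluates the usual long-wavelength effective-medium averages for an assumed two-component stack; the material values and filling fraction are rough textbook numbers, not the cloak's actual design parameters.

```python
# Standard long-wavelength effective-medium estimates for a two-component layered
# acoustic metamaterial (here brass + water). Approximate material values and an
# arbitrary filling fraction -- not the parameters of the fabricated cloak.
rho_brass, rho_water = 8500.0, 1000.0     # densities, kg/m^3
K_brass, K_water = 1.2e11, 2.2e9          # bulk moduli, Pa (approximate)
f = 0.3                                   # brass volume fraction (assumed)

# Density for motion along the stacking direction: arithmetic (volume) average
rho_normal = f * rho_brass + (1 - f) * rho_water
# Density for motion within the layer planes: harmonic average
rho_parallel = 1.0 / (f / rho_brass + (1 - f) / rho_water)
# Effective bulk modulus: harmonic average (Wood's formula)
K_eff = 1.0 / (f / K_brass + (1 - f) / K_water)

print(f"rho_normal   = {rho_normal:.0f} kg/m^3")
print(f"rho_parallel = {rho_parallel:.0f} kg/m^3")
print(f"anisotropy ratio = {rho_normal / rho_parallel:.2f}, K_eff = {K_eff:.2e} Pa")
```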


News Article | May 8, 2017
Site: phys.org

An acoustic cloak is a material shell that controls the propagation direction of sound waves so that a target becomes undetectable to an acoustic system. A carpet cloak modifies the acoustic signature of the target and mimics the acoustic field of a reflecting plane, so that the cloaked target is indistinguishable from the reflecting surface. The field of transformation acoustics concerns the design of new acoustic structures and prescribes how to control the propagation of acoustic waves; the material parameters of the cloak shell follow directly from the transformation. In most cases, however, these parameters are too complex for practical use. To solve this problem, YANG Jun and his IOA team adopted a scaling factor that simplifies the structure of the carpet cloak at the cost of only a modest impedance mismatch. The research team then used layers of brass plates with small water-filled channels to construct the model cloak. This material possesses an effective anisotropic mass density in the long-wavelength regime. The structure of the carpet cloak, composed of layered brass plates, is therefore greatly simplified while introducing only a small impedance mismatch. "The carpet cloak has a unit cell size of about 1/40 of the wavelength, making it able to control underwater acoustic waves in the deep subwavelength scale," said YANG Jun. The proposed carpet cloak performed well in experiments across a wide frequency range. In tests, a short Gaussian pulse propagates towards a target bump covered with the carpet cloak and the scattered wave returns in the backscattering direction; the cloaked object successfully mimics the reflecting plane and is imperceptible to sound detection. More information: Yafeng Bi et al, Design and demonstration of an underwater acoustic carpet cloak, Scientific Reports (2017). DOI: 10.1038/s41598-017-00779-4


News Article | April 27, 2017
Site: phys.org

The system, called stimulated Raman projection microscopy and tomography, makes possible "volumetric imaging" without using fluorescent dyes that might affect biological functions and hinder accuracy, said Ji-Xin Cheng, a professor in Purdue University's Weldon School of Biomedical Engineering, Department of Chemistry and Birck Nanotechnology Center. "Volumetric chemical imaging allows a better understanding of the chemical composition of three-dimensional complex biological systems such as cells," he said. The technology uses a type of laser beam called a Bessel beam, which maintains focus for a longer distance than a traditional "Gaussian beam" used in other imaging technologies, making it possible to penetrate deep into tissue. Stimulated Raman spectroscopy eliminates the need for fluorescent dyes. The technology yields more accurate data than other methods because it allows imaging of the entire cell by "adding up" signals produced from the scanning beam, Cheng said. Because the Bessel beam makes possible deep-tissue imaging, it could lead to systems that eliminate the need to draw blood for analyses such as drug testing and detection of biomarkers for non-invasive early diagnosis of diseases, Cheng said. "This is a long-term goal," he said. "In the meantime, much more research is needed to improve the system." The researchers proved the concept by imaging fat storage in living cells. Findings are detailed in a research paper appearing on April 24 in the journal Nature Communications. The reported technology yields information about chemical composition, collecting a series of images while rotating the sample and reconstructing the 3-D structure through image reconstruction algorithms. The Bessel beam is produced using a pair of cone-shaped "axicon" lenses and is combined with a microscope objective. Its use for volumetric fluorescence imaging was previously demonstrated by physicist Eric Betzig, who won the Nobel Prize in chemistry in 2014 for his pioneering contribution to super-resolution fluorescence microscopy. Super-resolution technology allows researchers to resolve structural features far smaller than the wavelength of visible light, sidestepping the "diffraction limit" that normally prevents imaging of features smaller than about 250 nanometers, which is large compared to certain biological molecules and structures in cells. However, fluorescence microscopy usually requires the use of fluorescent tags, which may interfere with biological processes and hinder accuracy for determining chemical structure. Future research will include work to increase the detection sensitivity of the system and improve the imaging quality and speed. "There is plenty of room for improvement," Cheng said. "The system is based on a bulky and relatively expensive femtosecond laser, which limits its potential for broad use and clinical translation. Nevertheless, we anticipate that this limitation can be circumvented through engineering innovations to reduce the cost and size of our technology. We also note that the Bessel beam can be produced using fibers, which could simplify the system and enable endoscopic applications." The paper was authored by Xueli Chen, a visiting scholar from Xidian University in China; Purdue postdoctoral research associate Chi Zhang; Purdue doctoral students Peng Lin and Kai-Chih Huang; Xidian University researchers Jimin Liang and Jie Tian; and Cheng. More information: Xueli Chen et al. Volumetric chemical imaging by stimulated Raman projection microscopy and tomography, Nature Communications (2017). DOI: 10.1038/ncomms15117


Caricato M., Gaussian Inc
Journal of Chemical Theory and Computation | Year: 2012

The effect of the solvent on the structure of a molecule in an electronic excited state cannot be neglected. However, the computational cost of including explicit solvent molecules around the solute becomes rather onerous when an accurate method such as equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) is employed. Continuum solvation models like the polarizable continuum model (PCM) provide an efficient alternative to explicit models, since the solvent conformational average is implicit and the solute-solvent mutual polarization is naturally accounted for. In this work, the coupling of EOM-CCSD and PCM in a state-specific approach is presented for the evaluation of the energy and analytic energy gradients. Various approximations are also explored to keep the computational cost comparable to that of gas-phase EOM-CCSD. Numerical examples are used to test the different schemes. © 2012 American Chemical Society.
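As a point of reference for the computational cost being discussed, the sketch below runs a plain gas-phase EOM-CCSD calculation with PySCF. It does not include the PCM coupling or the state-specific schemes that are the subject of the paper, and the molecule and basis set are arbitrary.

```python
# Gas-phase EOM-CCSD excitation energies with PySCF, shown only as the gas-phase
# baseline whose cost the PCM-coupled schemes aim to stay close to. This is NOT the
# state-specific EOM-CCSD/PCM implementation described in the article.
from pyscf import gto, scf, cc

mol = gto.M(
    atom="O 0.000 0.000 0.117; H 0.000 0.757 -0.469; H 0.000 -0.757 -0.469",
    basis="cc-pvdz",
)
mf = scf.RHF(mol).run()          # Hartree-Fock reference
mycc = cc.CCSD(mf).run()         # ground-state CCSD

# Lowest three singlet excitation energies from EOM-EE-CCSD
e_exc, vecs = mycc.eomee_ccsd_singlet(nroots=3)
print("EOM-CCSD excitation energies (eV):", [f"{27.2114 * e:.3f}" for e in e_exc])
```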


Scalmani G., Gaussian Inc | Frisch M.J., Gaussian Inc
Journal of Chemical Physics | Year: 2010

Continuum solvation models are appealing because of the simplified yet accurate description they provide of the solvent effect on a solute, described either by quantum mechanical or classical methods. The polarizable continuum model (PCM) family of solvation models is among the most widely used, although its application has been hampered by discontinuities and singularities arising from the discretization of the integral equations at the solute-solvent interface. In this contribution we introduce a continuous surface charge (CSC) approach that leads to a smooth and robust formalism for the PCM models. We start from the scheme proposed over ten years ago by York and Karplus and we generalize it in various ways, including the extension to analytic second derivatives with respect to atomic positions. We propose an optimal discrete representation of the integral operators required for the determination of the apparent surface charge. We achieve a clear separation between "model" and "cavity" which, together with simple generalizations of modern integral codes, is all that is required for an extensible and efficient implementation of the PCM models. Following this approach we are now able to introduce solvent effects on energies, structures, and vibrational frequencies (analytical first and second derivatives with respect to atomic coordinates), on magnetic properties (derivatives with respect to the magnetic field using GIAOs), and in the calculation of more complex properties such as frequency-dependent Raman activities, vibrational circular dichroism, and Raman optical activity. © 2010 American Institute of Physics.
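The key idea behind a continuous surface charge is to represent each apparent surface charge as a spherical Gaussian rather than a point, which keeps the surface electrostatics finite and smooth as surface elements appear and disappear. The toy solver below (a single spherical cavity around a point charge, conductor-like scaling, ad hoc exponents, and no switching function) only illustrates that one ingredient; it is not the CSC formalism of the paper.

```python
# Toy, conductor-like (COSMO-style) solver with Gaussian-blurred surface charges,
# meant only to show why smeared charges keep the equations smooth and finite.
# Cavity, exponents, and the (eps-1)/eps scaling are ad hoc assumptions; atomic units.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(3)

# Spherical cavity of radius R (bohr) around a unit point charge at the origin
R, n_pts, eps = 2.0, 302, 78.4
pts = rng.normal(size=(n_pts, 3))
pts = R * pts / np.linalg.norm(pts, axis=1, keepdims=True)   # points on the sphere
area = 4.0 * np.pi * R**2 / n_pts                            # equal-area elements
zeta = np.full(n_pts, 4.0 / np.sqrt(area))                   # Gaussian exponents (ad hoc)

# Interaction matrix between Gaussian-smeared surface charges: erf(zij*r)/r,
# with the analytic Gaussian self-interaction zeta*sqrt(2/pi) on the diagonal.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, 1.0)                                     # avoid division by zero
zij = zeta[:, None] * zeta[None, :] / np.sqrt(zeta[:, None]**2 + zeta[None, :]**2)
A = erf(zij * d) / d
np.fill_diagonal(A, zeta * np.sqrt(2.0 / np.pi))

# Solute potential at the smeared charges and conductor-like dielectric scaling
V = erf(zeta * R) / R                                        # unit charge at the origin
q = np.linalg.solve(A, -((eps - 1.0) / eps) * V)

E_solv = 0.5 * np.dot(q, V)                                  # reaction-field energy
E_born = -0.5 * (1.0 - 1.0 / eps) / R                        # Born-ion reference value
print(f"toy solvation energy: {E_solv:.4f} hartree (Born model: {E_born:.4f})")
```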
