Amsterdam, Netherlands

Torok A.,Brain Imaging Center | Torok A.,Synetiq Ltd.
7th IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2016 - Proceedings | Year: 2016

In recent years, virtual reality, augmented reality, wearables, and deep learning have become more and more widely used. This has created a new challenge for human-computer interface design: understanding the human user. In the current work, I describe four areas of understanding: human behavior, human limits, human needs, and human cognition. Together these form the basis of a new way of interaction between computers and us: a way where collaboration replaces commands; where respect for each other's limits is emphasized over limitless freedom; where user interfaces become dynamic and responsive instead of static and unaware; and where design is self-explaining instead of relying on user guides. I will discuss how these factors depend on each other and work as an integrated whole to revolutionize interfaces. I present this work from a cognitive-scientific point of view, which focuses on the human factor in information and communications technology and uses quantitative methods to analyze behavior, limits, needs, and cognition. © 2016 IEEE.

News Article | April 25, 2017

Have you ever thought someone was angry at you, but it turned out you were just misreading their facial expression? One specific region of the brain, called the amygdala, is involved in making these (sometimes inaccurate) judgments about ambiguous or intense emotions, according to new research. Identifying the amygdala’s role in social cognition suggests insights into the neurological mechanisms behind autism and anxiety. “We have long known that the amygdala is important in processing emotion from faces,” says Ralph Adolphs, professor of psychology and neuroscience and of biology at the California Institute of Technology. “But now we are starting to understand that it incorporates a lot of complex information to make fairly sophisticated decisions that culminate in our judgments.” When looking at a face, brain cells in the amygdala fire electrical impulses or “spikes” in response. However, the role of such face cells in social cognition remains unclear. Adolphs and his group measured the activity of these cells, or neurons, in patients while they saw images of faces expressing different degrees of happiness or fear. The subjects also saw images of faces with more ambiguous or neutral emotions, such as moderate displeasure or muted happiness. For each type of image, subjects were asked to decide whether the face looked fearful or happy. The researchers then investigated how neurons reacted to different aspects of emotions, and how the activity of the face cells related to the decision made by the subjects. The researchers found that there are two groups of neurons in the amygdala that respond to facial emotions. One group, the emotion-tracking neurons, detects the intensity of a single specific emotion, such as happiness or fear. For example, a happiness-signaling neuron would fire more spikes if the emotion were extreme happiness, and fewer spikes if the emotion were mild happiness. 
Separate groups of neurons within the emotion-tracking neurons code specifically for fear or for happiness. The other group, the ambiguity-coding neurons, indicates the ambiguity of the perceived emotion, irrespective of the nature of that emotion. Showing patients images of emotionally ambiguous faces was the key to understanding how the specialized neurons in the amygdala contribute to decision-making, the researchers say. The faces were so ambiguous that a patient would sometimes judge the same image to be fearful at times and happy at other times. The emotion-coding neurons indicated the subjective decision the patient made about the face. “Most people are familiar with feeling that a face just looks too ambiguous to really decide what emotion the person is having,” says first author and visitor in neuroscience Shuo Wang. “The fact that amygdala neurons signal a decision made about a face, such as which emotion it shows, gives us important insight because it shows that the amygdala is involved in making decisions rather than simply representing sensory input.” In addition to recording single cells from the amygdala, the researchers also carried out a neuroimaging study using fMRI (in a separate group of participants), and additionally studied the emotion judgments of three rare subjects with lesions of the amygdala. The lesion subjects showed an abnormally low threshold for deciding when a face was fearful, and the fMRI study also showed the specific effect of emotion intensity and ambiguity in the amygdala. The study is the first to combine so many different sources of data. These findings also suggest a mechanistic basis for potential treatments involving the painless electrical stimulation of the amygdala, which are currently the subject of ongoing clinical trials. 
“Researchers at multiple institutions are currently evaluating whether deep-brain stimulation of the amygdala is effective in treating severe cases of autism or post-traumatic stress disorder,” says collaborator Ueli Rutishauser of Cedars-Sinai Medical Center in Los Angeles and a visiting associate in biology and biological engineering at Caltech. “Patients with severe PTSD are thought to have a hyperactive amygdala, which electrical stimulation might be able to inhibit. Our findings that amygdala neurons carry signals about the subjective percept of emotions indicates a more specific reason for why such electrical stimulation might be beneficial.” The paper appears in Nature Communications. The Caltech Brain Imaging Center is one of the affiliated research centers in the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech. Funding came from the Autism Science Foundation, the Simons Foundation, the National Institute of Mental Health Conte Center, and the National Science Foundation.

Figee M.,University of Amsterdam | De Koning P.,University of Amsterdam | Klaassen S.,University of Amsterdam | Vulink N.,University of Amsterdam | And 7 more authors.
Biological Psychiatry | Year: 2014

Background Obsessive-compulsive disorder is a chronic psychiatric disorder related to dysfunctional dopaminergic neurotransmission. Deep brain stimulation (DBS) targeted at the nucleus accumbens (NAc) has recently become an effective treatment for therapy-refractory obsessive-compulsive disorder, but its effect on dopaminergic transmission is unknown. Methods We measured the effects of NAc DBS in 15 patients on the dopamine D2/3 receptor availability in the striatum with [123I]iodobenzamide ([123I]IBZM) single photon emission computed tomography. We correlated changes in [123I]IBZM binding potential (BP) with plasma levels of homovanillic acid (HVA) and clinical symptoms. Results Acute (1-hour) and chronic (1-year) DBS decreased striatal [123I]IBZM BP compared with the nonstimulated condition in the putamen. BP decreases were observed after 1 hour of stimulation, and chronic stimulation was related to concurrent HVA plasma elevations, implying DBS-induced dopamine release. BP decreases in the area directly surrounding the electrodes were significantly correlated with changes in clinical symptoms (45% symptom decrease). Conclusions NAc DBS induced striatal dopamine release, which was associated with increased HVA plasma levels and improved clinical symptoms, suggesting that DBS may compensate for a defective dopaminergic system. © 2014 Society of Biological Psychiatry.

News Article | April 27, 2016

MRI data were collected on a 3T Siemens TIM Trio scanner at the UC Berkeley Brain Imaging Center using a 32-channel Siemens volume coil. Functional scans were collected using gradient echo EPI with repetition time (TR) = 2.0045 s, echo time (TE) = 31 ms, flip angle = 70°, voxel size = 2.24 × 2.24 × 4.1 mm (slice thickness = 3.5 mm with 18% slice gap), matrix size = 100 × 100, and field of view = 224 × 224 mm. Thirty axial slices were prescribed to cover the entire cortex and were scanned in interleaved order. A custom-modified bipolar water excitation radiofrequency (RF) pulse was used to avoid signal from fat. Anatomical data were collected using a T1-weighted multi-echo MP-RAGE sequence on the same 3T scanner. Functional data were collected from five male subjects and two female subjects: S1 (male, age 26), S2 (male, age 32), S3 (female, age 31), S4 (male, age 31), S5 (male, age 26), S6 (female, age 25), and S7 (male, age 30). Two of the subjects were authors (S1: A.G.H.; and S3: W.A.d.H.). All subjects were healthy and had normal hearing. The experimental protocol was approved by the Committee for the Protection of Human Subjects at University of California, Berkeley. Written informed consent was obtained from all subjects. Voxel-wise models were estimated and validated independently for each subject using separate data sets reserved for that purpose. Principal components and PrAGMATiC analyses used leave-one-subject-out cross-validation to verify that the group models accurately predict the data recorded in each individual subject. The model estimation data set consisted of ten 10- to 15-min stories taken from The Moth Radio Hour. In each story, a single speaker tells an autobiographical story in front of a live audience. The ten selected stories cover a wide range of topics and are highly engaging. Each story was played during a separate fMRI scan. The length of each scan was tailored to the story, and included 10 s of silence both before and after the story. 
These data were collected during two 2-h scanning sessions that were performed on different days. The model validation data set consisted of one 10-min story, also taken from The Moth Radio Hour. This story was played twice for each subject (once during each scanning session), and then the two responses were averaged. For story synopses and details of story transcription and preprocessing procedures, see Supplementary Methods. Stories were played over Sensimetrics S14 in-ear piezoelectric headphones. A Behringer Ultra-Curve Pro hardware parametric equalizer was used to flatten the frequency response of the headphones based on calibration data provided by Sensimetrics. All stimuli were played at 44.1 kHz using the pygame library in Python. All stimuli were normalized to have peak loudness of −1 dB relative to maximum. However, the stories were performed by different speakers and were not uniformly mastered, so some differences in total loudness remain. Each story was manually transcribed by one listener, and then the transcript was checked by a second listener. Certain sounds (for example, laughter, lip-smacking and breathing) were also marked to improve the accuracy of the automated alignment. The audio of each story was downsampled to 11 kHz and the Penn Phonetics Lab Forced Aligner (P2FA33) was used to automatically align the audio to the transcript. The forced aligner uses a phonetic hidden Markov model to find the temporal onset and offset of each word and phoneme. The Carnegie Mellon University (CMU) pronouncing dictionary was used to guess the pronunciation of each word. When necessary, words and word fragments that appeared in the transcript but not in the dictionary were manually added. After automatic alignment was complete, Praat34 was used to check and correct each aligned transcript manually. The corrected aligned transcript was then spot-checked for accuracy by a different listener. 
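The peak normalization step described above can be sketched in a few lines (a hypothetical helper, not the authors' stimulus-preparation code; 0 dBFS is taken as a full-scale amplitude of 1.0):

```python
import numpy as np

def normalize_peak(audio, peak_db=-1.0):
    """Scale a waveform so its absolute peak sits at `peak_db` dB
    relative to full scale (0 dBFS = amplitude 1.0)."""
    target = 10.0 ** (peak_db / 20.0)               # -1 dBFS is roughly 0.891
    return audio * (target / np.max(np.abs(audio)))
```

As the text notes, peak normalization alone does not equalize perceived loudness across differently mastered recordings.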
Finally, the aligned transcripts were converted into separate word and phoneme representations. The phoneme representation of each story is a list of pairs (p,t), where p is a phoneme and t is the time from the beginning of the story to the middle of the phoneme (that is, halfway between the start and end of the phoneme) in seconds. Similarly, the word representation of each story is a list of pairs (w,t), where w is a word. To account for response variance caused by the semantic content of the stories, we constructed a 985-dimensional semantic feature space based on word co-occurrence statistics in a large corpus of text12, 18, 19. First, we constructed a 10,470-word lexicon from the union of the set of all words appearing in the stories and the 10,000 most common words in the large text corpus. We then selected 985 basis words from Wikipedia's List of 1000 Basic Words (contrary to the title, this list contained only 985 unique words at the time it was accessed). This basis set was selected because it consists of common words that span a very broad range of topics. The text corpus used to construct this feature space includes the transcripts of 13 Moth stories (including the 10 used as stimuli in this experiment), 604 popular books, 2,405,569 Wikipedia pages, and 36,333,459 user comments scraped from In total, the 10,470 words in our lexicon appeared 1,548,774,960 times in this corpus. Next, we constructed a word co-occurrence matrix, M, with 985 rows and 10,470 columns. Iterating through the text corpus, we added 1 to M_ij each time word j appeared within 15 words of basis word i. A window size of 15 was selected to be large enough to suppress syntactic effects (that is, word order) but no larger. Once the word co-occurrence matrix was complete, we log-transformed the counts, replacing M_ij with log(1 + M_ij). Next, each row of M was z-scored to correct for differences in basis word frequency, and then each column of M was z-scored to correct for word frequency.
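A toy sketch of the co-occurrence procedure described above (function names and the guard against zero-variance rows are my own additions; the real pipeline iterated over a corpus of more than a billion words):

```python
import numpy as np

def _zscore(a, axis):
    mu = a.mean(axis=axis, keepdims=True)
    sd = a.std(axis=axis, keepdims=True)
    sd[sd == 0] = 1.0                      # guard degenerate rows/columns
    return (a - mu) / sd

def cooccurrence_matrix(tokens, basis_words, lexicon, window=15):
    """Count how often each lexicon word occurs within `window` words of
    each basis word, then log-transform and z-score rows and columns."""
    basis_idx = {w: i for i, w in enumerate(basis_words)}
    lex_idx = {w: j for j, w in enumerate(lexicon)}
    M = np.zeros((len(basis_words), len(lexicon)))
    for t, word in enumerate(tokens):
        i = basis_idx.get(word)
        if i is None:
            continue
        for neighbor in tokens[max(0, t - window):t + window + 1]:
            j = lex_idx.get(neighbor)
            if j is not None:
                M[i, j] += 1
    M = np.log1p(M)            # replace M with log(1 + M)
    M = _zscore(M, axis=1)     # correct for basis-word frequency
    M = _zscore(M, axis=0)     # correct for word frequency
    return M
```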
Each column of M is now a 985-dimensional semantic vector representing one word in the lexicon. The matrix used for voxel-wise model estimation was then constructed from the stories: for each word–time pair (w,t) in each story we selected the corresponding column of M, creating a new list of semantic vector–time pairs, (M_w,t). These vectors were then resampled at times corresponding to the fMRI acquisitions using a 3-lobe Lanczos filter with the cut-off frequency set to the Nyquist frequency of the fMRI acquisition (0.249 Hz). A linearized finite impulse response (FIR) model14, 17 consisting of four separate feature spaces was fit to every cortical voxel in each subject's brain. These four feature spaces were word rate (1 feature), phoneme rate (1 feature), phonemes (39 features), and semantics (985 features). The word rate, phoneme rate, and phoneme features were used to account for responses to low-level properties of the stories that could contaminate the semantic model weights (see Supplementary Methods for details of how these low-level models were constructed). A separate linear temporal filter with four delays (1, 2, 3, and 4 time points) was fit for each of these 1,026 features, yielding a total of 4,104 features. This was accomplished by concatenating feature vectors that had been delayed by 1, 2, 3, and 4 time points (2, 4, 6, and 8 s). Thus, in the concatenated feature space one channel represents the word rate 2 s earlier, another 4 s earlier, and so on. Taking the dot product of this concatenated feature space with a set of linear weights is functionally equivalent to convolving the original stimulus vectors with linear temporal kernels that have non-zero entries at 1-, 2-, 3-, and 4-time-point delays. Before doing regression, we first z-scored each feature channel within each story. This was done to match the features to the fMRI responses, which were also z-scored within each story. However, this had little effect on the learned weights.
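The delayed-feature construction can be illustrated as follows (a minimal sketch with assumed names; the study used its own custom Python code):

```python
import numpy as np

def make_delayed(stim, delays=(1, 2, 3, 4)):
    """Concatenate copies of the feature matrix `stim` (time x features),
    each shifted forward by one delay and zero-padded at the start, so a
    linear model over the result acts as an FIR filter on the features."""
    T, F = stim.shape
    out = np.zeros((T, F * len(delays)))
    for k, d in enumerate(delays):
        out[d:, k * F:(k + 1) * F] = stim[:T - d]
    return out
```

With a repetition time of about 2 s, delays of 1 to 4 time points let the model capture hemodynamic lags of 2 to 8 s, matching the text above.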
The 4,104 weights for each voxel were estimated using L2-regularized linear regression (also known as ridge regression). To keep the scale of the weights consistent and to prevent bias in subsequent analyses, a single value of the regularization coefficient was used for all voxels in all subjects. This regularization coefficient was found by bootstrapping the regression procedure 50 times in each subject. In each bootstrap iteration, 800 time points (20 blocks of 40 consecutive time points each) were removed from the model estimation data set and reserved for testing. Then the model weights were estimated on the remaining 2,937 time points for each of 20 possible regularization coefficients (log spaced between 10 and 1,000). These weights were used to predict responses for the 800 reserved time points, and then the correlation between actual and predicted responses was found. After the bootstrapping was complete, a regularization–performance curve was obtained for each subject by averaging the bootstrap sample correlations first across the 50 samples and then across all voxels. Next, the regularization–performance curves were averaged across the seven subjects and the best overall value of the regularization parameter (183.3) was selected. The best overall regularization parameter value was also the best value in three individual subjects. For the other four subjects the best regularization parameter value was slightly higher (233.6). To validate the voxel-wise models, estimated semantic feature weights were used to predict responses to a separate story that had not been used for weight estimation. Prediction performance was then estimated as the Pearson correlation between predicted and actual responses for each voxel over the 290 time points in the validation story. Statistical significance was computed by comparing estimated correlations to the null distribution of correlations between two independent Gaussian random vectors of the same length. 
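A simplified sketch of the ridge estimation and regularization sweep described above (a closed-form solve for illustration; the study's bootstrap over 50 resampled blocks is omitted, and all names are my own):

```python
import numpy as np

def ridge_fit(X, Y, alpha):
    """Closed-form L2-regularized (ridge) regression weights."""
    F = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(F), X.T @ Y)

def sweep_alphas(X_train, Y_train, X_test, Y_test, alphas):
    """Mean voxel-wise prediction correlation for each regularization value."""
    scores = []
    for alpha in alphas:
        W = ridge_fit(X_train, Y_train, alpha)
        pred = X_test @ W
        # Pearson correlation between predicted and actual response per voxel
        pz = (pred - pred.mean(0)) / pred.std(0)
        yz = (Y_test - Y_test.mean(0)) / Y_test.std(0)
        scores.append(float(np.mean((pz * yz).sum(0) / len(Y_test))))
    return scores

# 20 log-spaced regularization coefficients between 10 and 1,000
alphas = np.logspace(1, 3, 20)
```

Averaging such regularization-performance curves across voxels, bootstrap samples, and subjects is what yielded the single shared coefficient (183.3) used in the study.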
Resulting P values were corrected for multiple comparisons within each subject using the false discovery rate (FDR) procedure35. All model fitting and analysis was performed using custom software written in Python, making heavy use of NumPy36, SciPy37, and pycortex38. We used principal components analysis (PCA) to recover a low-dimensional semantic space from the estimated semantic model weights. We first selected only the 10,000 best predicted voxels in each subject according to the average bootstrap correlation (for the selected regularization parameter value) obtained during model estimation. This was done to avoid including noise from poorly modelled voxels. Then we removed temporal information from the voxel-wise model weights by averaging across the four delays for each feature. The weights for the word frequency, phoneme frequency, and phoneme features were then discarded, leaving only the 985 semantic model weights for each voxel. Finally, we applied PCA to these weights, yielding 985 principal components (PCs). Partial scree plots showing the amount of variance accounted for by each PC are shown in Extended Data Fig. 2. See Supplementary Methods for details. The PrAGMATiC generative model22 has two components: an arrangement model and an emission model. The arrangement model defines a probability distribution over possible arrangements of the functional areas. This model assumes that the location of each area is defined by a single point called the area centroid. Each centroid is modelled as being joined to nearby centroids by springs. While exact centroid locations can vary from subject to subject, the equilibrium length of each spring is assumed to be consistent across subjects. The probability distribution over possible locations of the centroids is defined using the total potential energy of the spring system. 
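The PCA step (averaging weights across the four delays, then extracting components) might look like the following sketch via SVD; the delay-major column ordering of the weight matrix is an assumption on my part:

```python
import numpy as np

def average_delays(W, n_delays=4):
    """Average a (voxels, features * delays) weight matrix over its delay
    copies, assuming delay-major column blocks -> (voxels, features)."""
    V, FD = W.shape
    return W.reshape(V, n_delays, FD // n_delays).mean(axis=1)

def weight_pca(weights):
    """PCA of voxel-by-feature weights via SVD; returns the principal
    components (as rows) and the fraction of variance each explains."""
    centered = weights - weights.mean(axis=0)
    _, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt, S**2 / np.sum(S**2)
```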
This distribution assigns a high probability to low-energy arrangements of the centroids (that is, where the springs are not stretched much and so store little potential energy) and low probability to high-energy arrangements (where the springs are stretched a lot). The second component is the emission model, which defines a probability distribution over semantic maps given an arrangement of functional areas. In the emission model each area centroid is assigned a particular semantic value in the four-dimensional common semantic space. This value determines what type of semantic information is represented in that area. To generate a semantic map from any particular arrangement, each point on the cortical surface is first assigned to the closest area centroid (creating a Voronoi diagram). Then the semantic value for each point is sampled from a spherical Gaussian distribution in semantic space, centred on the semantic value of the centroid. A consequence of modelling semantic maps using a Voronoi diagram is that every point on the cortex must be assigned to an area, while we know that many points on the cortex are not semantically selective. We distinguished between semantically selective and non-selective areas by testing whether the mean semantic voxel-wise model in each area predicted responses significantly better on a held-out story than a baseline model that accounts for responses to phonemes and word rate. To train the generative model we derived maximum-likelihood estimation (MLE) update rules similar to the Boltzmann learning rule with contrastive divergence25. We used these learning rules to iteratively update the spring lengths and semantic values, maximizing the probability of the observed maps and minimizing the probability of unobserved maps. For details see Supplementary Methods. 
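The emission model's Voronoi assignment and Gaussian sampling can be illustrated with a toy sketch (names and the 2-D coordinates are hypothetical; PrAGMATiC itself operates on cortical surface coordinates and a 4-D semantic space):

```python
import numpy as np

def voronoi_assign(points, centroids):
    """Assign each surface point the index of its nearest centroid,
    forming the Voronoi partition used by the emission model."""
    # squared distances, shape (n_points, n_centroids)
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def sample_semantic_map(points, centroids, centroid_values, sigma, rng):
    """Sample each point's semantic value from a spherical Gaussian
    centred on the semantic value of its assigned centroid."""
    labels = voronoi_assign(points, centroids)
    means = centroid_values[labels]
    return means + sigma * rng.standard_normal(means.shape)
```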
Fusiform face area (FFA), occipital face area (OFA), parahippocampal place area (PPA), occipital place area (OPA), retrosplenial cortex (RSC), extrastriate body area (EBA), visual areas (V1-V4, V3A, V3B, V7), lateral occipital visual area (LO), middle temporal visual area (MT+), intraparietal sulcus visual area (IPS), auditory cortex (AC), primary motor and somatosensory areas for feet (M1F, S1F), hands (M1H, S1H), and mouth (M1M, S1M), secondary somatosensory areas for feet (S2F), and hands (S2H), frontal eye fields (FEF), frontal opercular eye movement area (FO), supplementary motor foot area (SMFA), and hand area (SMHA), supplementary eye fields (SEF), Broca’s area (BA), superior premotor ventral speech area (sPMv), premotor ventral hand area (PMvh).

News Article | December 19, 2016

Dartmouth researchers have found that children with a genetic risk for obesity had greater activity in brain reward centers when watching fast food commercials, which could help explain why some children are more likely to overeat. The study is the first of its kind to examine how a key obesity gene influences brain response to food advertisements and other cues to eat. The findings are published in the Proceedings of the National Academy of Sciences. "By examining the still-developing brain and its reward-related structures, our findings help explain why children who are genetically at risk for obesity may be prone to over-eating unhealthy foods," says first author Kristina M. Rapuano, a graduate student in the Brain Imaging Lab in the department of Psychological and Brain Sciences at Dartmouth College. Television food advertisements were used for this study to better approximate how the brain responds to food cues in the real world, as most studies to date have shown only still images of food. Seventy-eight children, ages nine to 12, watched a children's television show in an MRI scanner. To simulate the experience of watching television at home, the show included 12 minutes of commercial breaks; half were advertisements for fast food and the other half for non-food items. Children were also evaluated on their genetic risk for obesity based on the fat-mass and obesity-associated (FTO) gene, which strongly predicts obesity across the lifespan. The nucleus accumbens, a region of the brain commonly associated with reward craving, was not only physically larger in children with the obesity-risk FTO genotype compared to genetically low-risk children but also showed a stronger craving response to the food commercials.
"About one-third of commercials children see on network television are food advertisements, and each one is a prompt to eat," says senior author Diane Gilbert-Diamond, assistant professor of Epidemiology at Dartmouth's Geisel School of Medicine and member of the Norris Cotton Cancer Center. "We know from our prior work that children with this same genetic obesity risk factor are more likely to overeat after watching food advertisements on TV, even when they are not hungry. The brain scans suggest that these children may be especially vulnerable to food cues, and that limiting food advertisement exposure could be an effective way to combat child obesity." Available for comment are study co-authors Kristina M. Rapuano, Diane Gilbert-Diamond, and William M. Kelley, professor of Psychological and Brain Sciences and director of the Dartmouth Brain Imaging Center. Amanda L. Zieselman, former undergraduate student (2015); James D. Sargent, professor of Pediatrics at the Geisel School of Medicine and adjunct professor of Psychological and Brain Sciences; and Todd F. Heatherton, the Lincoln Filene Professor of Human Relations in the department of Psychological and Brain Sciences and Norris Cotton Cancer Center investigator, all of whom are affiliated with Dartmouth, also served as co-authors of the study. Broadcast studios: Dartmouth has TV and radio studios available for interviews. For more information, visit: http://communications. .

Koch S.B.J.,University of Amsterdam | Koch S.B.J.,Brain Imaging Center | Van Zuiden M.,University of Amsterdam | Nawijn L.,University of Amsterdam | And 6 more authors.
Neuropsychopharmacology | Year: 2016

The neuropeptide oxytocin (OT) has been suggested as a promising pharmacological agent for medication-enhanced psychotherapy in posttraumatic stress disorder (PTSD) because of its anxiolytic and prosocial properties. We therefore investigated the behavioral and neurobiological effects of a single intranasal OT administration (40 IU) in PTSD patients. We conducted a randomized, placebo-controlled, cross-over resting-state fMRI study in male and female police officers with (n=37, 21 males) and without PTSD (n=40, 20 males). We investigated OT administration effects on subjective anxiety and functional connectivity of basolateral (BLA) and centromedial (CeM) amygdala subregions with prefrontal and salience processing areas. In PTSD patients, OT administration resulted in decreased subjective anxiety and nervousness. Under placebo, male PTSD patients showed diminished right CeM to left ventromedial prefrontal cortex (vmPFC) connectivity compared with male trauma-exposed controls, which was reinstated after OT administration. Additionally, female PTSD patients showed enhanced right BLA to bilateral dorsal anterior cingulate cortex (dACC) connectivity compared with female trauma-exposed controls, which was dampened after OT administration. Although caution is warranted, our findings tentatively suggest that OT has the potential to diminish anxiety and fear expression of the amygdala in PTSD, either via increased control of the vmPFC over the CeM (males) or via decreased salience processing of the dACC and BLA (females). Our findings add to accumulating evidence that OT administration could potentially enhance treatment response in PTSD. © 2016 American College of Neuropsychopharmacology.

Santra A.,Brain Imaging Center | Kumar R.,All India Institute of Medical Sciences
Indian Journal of Nuclear Medicine | Year: 2014

Brain single photon emission computed tomography (SPECT) is a well-established and reliable method to assess brain function through measurement of regional cerebral blood flow (rCBF). It can be used to define a patient's pathophysiological status when neurological or psychiatric symptoms cannot be explained by anatomical neuroimaging findings. Though there is ample evidence validating brain SPECT as a technique to track human behavior and to correlate psychiatric disorders with dysfunction of specific brain regions, only a few psychiatrists have adopted brain SPECT in routine clinical practice. It can be utilized to evaluate the involvement of brain regions in a particular patient, to individualize treatment on the basis of SPECT findings, to monitor the treatment response, and to modify treatment if necessary. In this article, we review the available studies from the existing literature and present the evidence for establishing the clinical role of brain SPECT in major psychiatric illnesses.

Beissner F.,Brain Imaging Center | Beissner F.,Goethe University Frankfurt | Baudrexel S.,Brain Imaging Center | Baudrexel S.,Goethe University Frankfurt | And 2 more authors.
NeuroImage | Year: 2010

Dual-echo EPI is based on the acquisition of two images with different echo times per excitation, thus allowing for the calculation of purely T2*-weighted data. The technique can be used for the measurement of functional activation whenever the prerequisite of constant equilibrium magnetization cannot be fulfilled due to variable inter-volume delays. The latter is the case when image acquisition is triggered by physiological parameters (e.g. cardiac gating) or by the subject's response. Despite its frequent application, there is currently no standardized way of combining the information obtained from the two acquired echoes. The goal of this study was to quantify the impact of different echo combination methods (quotients of echoes and quantification of T2*) and calculation modalities, either pre-smoothing data before combination or subjecting unsmoothed combined data to masking (no masking, volume-wise masking, joint masking), on the theoretically predicted signal-to-noise ratio (SNR) of the BOLD response and on activation results of two fMRI experiments using finger tapping and visual stimulation in one group (n=5) and different motor paradigms to activate motor areas in the cortex and the brainstem in another group (n=21). A significant impact of echo combination and masking procedure was found for both SNR and activation results. The recommended choice is a direct calculation of T2* values, either using joint masking on unsmoothed data or pre-smoothing images prior to T2* calculation. This method was most beneficial in areas close to the surface of the brain or adjacent to the ventricles and may be especially relevant to brainstem fMRI. © 2010 Elsevier Inc.
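Under a mono-exponential decay model S(TE) = S0 * exp(-TE / T2*), a voxel-wise T2* map follows directly from the ratio of the two echo images. A minimal sketch (not the study's implementation, which additionally addresses smoothing and masking; echo times here are placeholders):

```python
import numpy as np

def t2star_from_dual_echo(S1, S2, TE1, TE2):
    """Voxel-wise T2* (same time units as TE) from two echo images,
    assuming mono-exponential decay S(TE) = S0 * exp(-TE / T2*)."""
    log_ratio = np.log(S1 / S2)      # equals (TE2 - TE1) / T2*
    return (TE2 - TE1) / log_ratio
```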

Hattingen E.,Goethe University Frankfurt | Jurcoane A.,Goethe University Frankfurt | Bahr O.,Senckenberg Institute | Rieger J.,Senckenberg Institute | And 4 more authors.
Neuro-Oncology | Year: 2011

Bevacizumab shows unprecedented rates of response in recurrent glioblastomas (GBM), but the detailed mechanisms are still unclear. We employed in vivo magnetic resonance spectroscopic imaging (MRSI) and quantitative magnetic resonance imaging to investigate whether bevacizumab alters oxygen and energy metabolism and whether this effect has antitumoral activity in recurrent GBM. 31P and 1H MRSI, apparent diffusion coefficient (ADC), and high-resolution T2 and T2' mapping (an indirect marker of oxygen extraction) were investigated in 16 patients with recurrent GBM at 3 Tesla before and 1.5-2 months after initiation of therapy with bevacizumab. Changes of metabolite concentrations and of the quantitative values in the tumor and normal-appearing brain tissue were calculated. The Wilcoxon signed-ranks test was used to evaluate differences for tumor/edema versus control as well as changes before versus after commencement of therapy. Survival analyses were performed for significant parameters. Tumor T2', pH, ADC, and T2 decreased significantly in patients responding to bevacizumab therapy (n = 10). Patients with at least 25% T2' decrease during treatment showed longer progression-free and overall survival durations. Levels of high-energy metabolites were lower at baseline; these persisted under therapy. Glycerophosphoethanolamine, a catabolic phospholipid metabolite, increased in responders. The MRSI data support the hypothesis that bevacizumab induces relative tumor hypoxia (T2' decrease) and affects energy homeostasis in recurrent GBM, suggesting that bevacizumab impairs vascular function. The antiangiogenic effect of bevacizumab is predictive of better outcome and seems to induce antitumoral activity in the responding GBMs. © The Author(s) 2011.

Klomp A.,Brain Imaging Center
Journal of visualized experiments : JoVE | Year: 2012

Pharmacological MRI (phMRI) is a new and promising method to study the effects of substances on brain function, which can ultimately be used to unravel the neurobiological mechanisms behind drug action and neurotransmitter-related disorders such as depression and ADHD. Like most imaging methods (PET, SPECT, CT), it represents progress in the non-invasive investigation of brain disorders and the related function of neurotransmitter pathways, while respecting the overall neuronal connectivity. Moreover, it provides an ideal tool for translation to clinical investigations. MRI, while still behind PET and SPECT in molecular imaging strategies, has the great advantages of high spatial resolution and no need for the injection of a contrast agent or radiolabeled molecules, thereby avoiding repetitive exposure to ionizing radiation. Functional MRI (fMRI) is extensively used in research and clinical settings, where it is generally combined with a psychomotor task. phMRI is an adaptation of fMRI enabling the investigation of a specific neurotransmitter system, such as serotonin (5-HT), under physiological or pathological conditions following activation via administration of a specific challenging drug. The aim of the method described here is to assess brain 5-HT function in free-breathing animals. By challenging the 5-HT system while simultaneously acquiring functional MR images over time, the response of the brain to this challenge can be visualized. Several studies in animals have already demonstrated that drug-induced increases in extracellular levels of e.g.
5-HT (releasing agents, selective re-uptake blockers, etc.) evoke region-specific changes in blood oxygenation level dependent (BOLD) MRI signals (a signal reflecting changes in oxygenated/deoxygenated hemoglobin levels that occur when brain activation increases blood flow to deliver oxygen and glucose to active neurons), providing an index of neurotransmitter function. It has also been shown that these effects can be reversed by treatments that decrease 5-HT availability(16,13,18,7). In adult rats, BOLD signal changes following acute SSRI administration have been described in several 5-HT related brain regions, i.e. cortical areas, hippocampus, hypothalamus and thalamus(9,16,15). Stimulation of the 5-HT system and its response to this challenge can thus be used as a measure of its function in both animals and humans(2,11).
