UMC Utrecht

Utrecht, Netherlands


News Article | May 8, 2017
Site: www.eurekalert.org

In people with photosensitive epilepsy, flashing lights are well known for their potential to trigger seizures. The results can be quite stunning. For instance, a particular episode of Pokémon sent 685 people in Japan to the hospital. But seizures can be triggered by certain still images, too. Now, researchers reporting in Current Biology on May 8, who have conducted an extensive review of the scientific literature, think they know what it is about some static pictures that can trigger seizures. The key, they propose, is a particular repetitive pattern of neural activity in the brain known as gamma oscillations that occurs when people view certain images, such as black and white bar patterns, and not others. In fact, the researchers say, it's possible that those kinds of images are responsible for other problems, such as migraine headaches, particularly in people who are generally sensitive to light.

"Our findings imply that in designing buildings, it may be important to avoid the types of visual patterns that can activate this circuit and cause discomfort, migraines, or seizures," says Dora Hermes of the University Medical Center (UMC) Utrecht in the Netherlands. "Even perfectly healthy people may feel modest discomfort from the images that are most likely to trigger seizures in photosensitive epilepsy."

Gamma oscillations in the brain can be measured on an electroencephalogram (EEG), a test that measures electrical activity in the brain using small electrodes attached to the scalp. Scientists have studied them since the 1980s, but there's no consensus yet on the significance of those patterns for thought, perception, or neural processing. "Some scientists argue that these oscillations are hugely important and essential for awareness, attention, and neuronal communication, while others say that they are more likely a byproduct of normal neuronal processing, like the exhaust coming out of a car--a potentially useful diagnostic signal, but not one that makes the neuronal machinery work," Hermes says.

One argument against the idea that gamma oscillations are important for neural processing is that they are produced in the brain when viewing some images and not others. Grating patterns produce strong gamma oscillations, while puffy clouds and many natural scenes typically do not, for reasons that scientists don't fully understand. In the new report, Hermes and her colleagues, including Jonathan Winawer at New York University and Dorothée Kasteleijn-Nolst Trenité at UMC Utrecht, conclude that those gamma-oscillation-provoking images are also the most likely to trigger seizures.

There are simple ways to adjust an image so as to dampen that pattern of brain activity, they note. Those adjustments include reducing the contrast, adjusting the width of the bars, or shifting the image from a grating design to something more like plaid. "What we distinguish in this proposal is that the link between images that trigger photosensitive epilepsy and normal brain activity is particular to gamma oscillations, and not to other forms of neuronal responses like the overall rate of action potentials," Winawer says.

The findings suggest that existing studies on gamma oscillations might offer important clues for understanding photosensitive epilepsy. Hermes and her colleagues are now designing studies to explore these patterns of brain response in patients with photosensitive epilepsy and those without.
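Gamma oscillations of the kind described here are usually quantified as spectral power in roughly the 30-80 Hz band of an EEG or intracranial recording. The sketch below is purely illustrative: it builds a synthetic signal in Python and estimates gamma-band power with SciPy's Welch method; the band limits are a common convention, not values taken from the study.

# Estimate gamma-band (~30-80 Hz) power from a 1-D EEG-like signal.
# Synthetic data for illustration only; band edges are a common convention.
import numpy as np
from scipy.signal import welch

fs = 500.0                                     # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 55 * t)  # noise plus a 55 Hz "gamma" rhythm

freqs, psd = welch(signal, fs=fs, nperseg=1024)   # power spectral density
gamma = (freqs >= 30) & (freqs <= 80)
gamma_power = np.trapz(psd[gamma], freqs[gamma])  # integrate the PSD over the gamma band
print(f"gamma-band power: {gamma_power:.3f}")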
They're also working on a model to predict which natural images or scenes--a city scene, train station, or interior design, for instance--are most likely to provoke gamma oscillations and seizures.

This work was supported by the Netherlands Organization for Scientific Research, the National Institutes of Health, and the EU program Marie Curie MEXCT-CT-2005-024224 "Visual Sensitivity."

Current Biology (@CurrentBiology), published by Cell Press, is a bimonthly journal that features papers across all areas of biology. Current Biology strives to foster communication across fields of biology, both by publishing important findings of general interest and through highly accessible front matter for non-specialists. To receive Cell Press media alerts, contact press@cell.com.
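The image adjustments mentioned above (lowering contrast, widening the bars, or turning a single grating into a plaid) are easy to reproduce for demonstration purposes. The sketch below is illustrative only and is not code from the study; it generates those stimulus variants with numpy and matplotlib.

# Generate the kinds of image adjustments described above: a high-contrast
# grating, a reduced-contrast version, wider bars, and a plaid.
import numpy as np
import matplotlib.pyplot as plt

def grating(size=256, cycles=8, contrast=1.0, orientation=0.0):
    """Sinusoidal luminance grating in [0, 1] with the given Michelson contrast."""
    y, x = np.mgrid[0:size, 0:size] / size
    u = x * np.cos(orientation) + y * np.sin(orientation)
    return 0.5 + 0.5 * contrast * np.sin(2 * np.pi * cycles * u)

high_contrast = grating(contrast=1.0)                                # strong black/white bars
low_contrast = grating(contrast=0.2)                                 # same bars, reduced contrast
wider_bars = grating(cycles=3)                                       # fewer cycles means wider bars
plaid = 0.5 * (grating() + grating(orientation=np.pi / 2))           # two orthogonal gratings

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
titles = ["high-contrast grating", "reduced contrast", "wider bars", "plaid"]
for ax, img, title in zip(axes, [high_contrast, low_contrast, wider_bars, plaid], titles):
    ax.imshow(img, cmap="gray", vmin=0, vmax=1)
    ax.set_title(title)
    ax.axis("off")
plt.tight_layout()
plt.show()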


News Article | May 10, 2017
Site: phys.org

"Historically, radiation has been a blunt tool," said Matt Vaughn, Director of Life Science Computing at the Texas Advanced Computing Center. "However, it's become ever more precise because we understand the physics and biology of systems that we're shooting radiation into, and have improved our ability to target the delivery of that radiation." The science of calculating and assessing the radiation dose received by the human body is known as dosimetry - and here, as in many areas of science, advanced computing plays an important role. Current radiation treatments rely on imaging from computed tomography (CT) scans taken prior to treatment to determine a tumor's location. This works well if the tumor lies in an easily detectable and immobile location, but less so if the area is moving, as in the case of lung cancer. At the University of Texas MD Anderson Cancer Center, scientists are tackling the problem of accurately attacking tumors using a new technology known as an MR-linac that combines magnetic resonance (MR) imaging with linear accelerators (linacs). Developed by Elekta in cooperation with UMC Utrecht and Philips, the MR-linac at MD Anderson is the first of its kind in the U.S. MR-linacs can image a patient's anatomy while the radiation beam is being delivered. This allows doctors to detect and visualize any anatomical changes in a patient during treatment. Unlike CT or other x-ray based imaging modalities, which provide additional ionizing radiation, MRI is harmless to healthy tissue. The MR-linac method offers a potentially significant improvement over current image-guided cancer treatment technology. However, to ensure patients are treated safely, scientists must first correct for the influence of the MRI's magnetic field on the measurements used to calibrate the radiation dose being delivered. Researchers use software called Geant4 to simulate radiation within the detectors. Originally developed by CERN to simulate high energy particle physics experiments, the MD Anderson team has adapted Geant4 to incorporate magnetic fields into their computer dosimetry model. "Since the ultimate aim of the MR-linac is to treat patients, it is important that our simulations be very accurate and that the results be very precise," said Daniel O'Brien, a postdoctoral fellow in radiation physics at MD Anderson. "Geant4 was originally designed to study radiation at much higher energies than what is used to treat patients. We had to perform tests to make sure that we had the accuracy that we needed." Using the Lonestar supercomputer at the Texas Advanced Computing Center (TACC), the research team simulated nearly 17 billion particles of radiation per detector to get the precision that they needed for their study. In August 2016, they published magnetic field correction factors in Medical Physics for six of the most-used ionization chamber detectors (gas-filled chambers that are used to ensure the dose delivered from a therapy unit is correct). They are now working on verifying these results experimentally. "The MR-linac is a very promising technology but it also presents many unique challenges from a dosimetry point of view," O'Brien said. "Over time, our understanding of these effects has improved considerably, but there is still work to be done and resources like TACC are an invaluable asset in making these new technologies safe and reliable." 
"Our computer simulations are important because their results will serve as the foundation to extend current national and international protocols to perform calibration of conventional linacs to MR-linacs," said Gabriel Sawakuchi, assistant professor of Radiation Physics at MD Anderson. "However, it is important that our results be validated against measurements and independent simulations performed by other groups before used clinically." X-ray radiation is the most frequently used form of high-energy treatment, but a new treatment is emerging that uses a beam of protons to deliver energy directly to the tumor with minimal damage to surrounding tissues and without the side effects of x-ray therapy. Like x-ray radiation, proton therapy blasts tumors with beams of particles. But whereas traditional radiation uses photons, or focused light beams, proton therapy uses ions - hydrogen atoms that have lost an electron. Proton beams have a unique physical characteristic known as the 'Bragg peak' that allows the greatest part of its energy to be transferred to a specific area within the body, where it has maximum destructive effect. X-ray radiation, on the other hand, deposits energy and kills cells along the whole length of the beam. This can lead to unintended cell damage and even secondary cancer that can develop years later. In comparison with current radiation procedures, proton therapy saves healthy tissue in front of and behind the tumor. Since the patient is irradiated from all directions and the intensity of beams can be well modulated, the method provides further reduction of adverse effects. Proton therapy is particularly effective when irradiating tumors near sensitive organs—for instance near the neck, spine, brain or lungs—where stray beams can be particularly damaging. Medical physicists and radiation oncologists from Mayo Clinic in Phoenix, Arizona in collaboration with MD Anderson researchers, recently published a series of papers describing improved planning and use of proton therapy. Writing in Medical Physics in January 2017, they showed that in the three clinical cases included in this study, their chance-constrained model was better at sparing organs at risk than the current method. The model also provided a flexible tool for users to balance between plan robustness and plan quality and was found to be much faster than the commercial solution. The research used the Stampede supercomputer at TACC to conduct computationally intensive studies of the hundreds of factors that go into maximizing the effectiveness of, and minimizing the risk and uncertainties involved in, these treatments. Proton therapy was first developed in the 1950s and came into mainstream in the 1990s. There are currently 12 proton therapy centers nation-wide and the number is growing. However, the cost of the proton beam devices—$200 million dollars, or 30 to 50 times more expensive than a traditional x-ray system—means they are still rare. They are applied only in cases that require extra precision and doctors must maximize their benefit when they are used. Mayo Clinic and MD Anderson operate the most advanced versions of these devices, which perform scanning beam proton therapy and are able to modulate the intensity of the beam. Wei Liu, one of the lead proton therapy researchers at Mayo Clinic, likens the process to 3-D printing, "painting the tumor layer by layer." However, this is accomplished at a distance, through a protocol that must be planned in advance. 
The specificity of the proton beam, which is its greatest advantage, means that it must be precisely calibrated and that discrepancies from the ideal must be considered. For instance, hospital staff situate patients on the operating surface of the device, and even placing a patient a few millimeters off-center can impact the success of the treatment. Moreover, every patient's body has a slightly different chemical composition, which can make the proton beam stop at a different position from the one intended. Even patients' breathing can throw off the location of the beam placement. "If a patient has a tumor close to the spinal cord and this level of uncertainty exists, then the proton beam can overdose and paralyze the patient," Liu said.

The solution to these challenges is robust optimization, which uses mathematical techniques to generate a plan that can manage and mitigate the uncertainties and human errors that may arise. "Each time, we try to mathematically generate a good plan," he said. "There are many unknown variables. You can choose different beam angles or energy or intensity. There are 25,000 variables or more, so generating a plan that is robust to these mistakes and can still get the proper dose distribution to the tumor is a large-scale optimization problem." To solve these problems, Liu and his team use supercomputers at the Texas Advanced Computing Center. "It's very computationally expensive to generate a plan in a reasonable timeframe," he continued. "Without a supercomputer, we can do nothing."

Liu has been developing proton beam planning protocols for many years. Leading commercial companies have adopted methods similar to those that Liu and his collaborators developed as the basis for their radiation planning solutions. Recently, Liu and his collaborators extended their studies to include the uncertainties presented by breathing patients, an approach they call "4D robust optimization," since it takes the time component into account and not just spatial orientation. In the May 2016 issue of the International Journal of Radiation Oncology, they showed that compared to its 3D counterpart, 4D robust optimization for lung cancer treatment provided more robust target dose distribution and better target coverage, while still offering normal tissue protection. "We're trying to provide the patient with the most effective, most reliable, and most efficient proton therapy," Liu said. "Because it's so expensive, we have to do the best job to take advantage of this new technology."

As with many forms of cancer therapy, clinicians know that proton therapy works, but precisely how it works is a bit of a mystery. The basic principle is not in question: protons collide with water molecules, which make up 70 percent of cells, triggering the release of electrons and free radicals that damage the DNA of cancerous cells. The protons also collide with the DNA directly, breaking bonds and crippling DNA's ability to replicate. Because of their high rate of division and reduced ability to repair damaged DNA, cancerous cells are much more vulnerable to DNA attacks than normal cells and are killed at a higher rate. Furthermore, a proton beam can be focused on a tumor area, thus causing maximum damage to cancerous cells and minimum damage to surrounding healthy cells. However, beyond this general microscopic picture, the mechanics of the process have been hard to determine. "As happens in cancer therapy, they know empirically that it works but they don't know why," said Jorge A.
Morales, a professor of chemistry at Texas Tech University and a leading proponent of the computational analysis of proton therapy. "To do experiments with human subjects is dangerous, so the best way is through computer simulation."

Morales has been running computer simulations of proton-cell chemical reactions using quantum dynamics models on TACC's Stampede supercomputer to investigate the fundamentals of the process. Computational experiments can mimic the dynamics of the proton-cell interactions without causing damage to a patient and can reveal what happens when the proton beam and cells collide from start to finish, with atomic-level accuracy. Quantum simulations are necessary because the electrons and atoms that are the basis for proton cancer therapy's effectiveness do not behave according to the laws of classical physics. Rather, they are governed by the laws of quantum mechanics, which deal in probabilities of location, speed and reaction occurrence rather than in precisely defined values of those three variables.

Morales' studies on Stampede, reported in PLOS ONE in March 2017, as well as in Molecular Physics and Chemical Physics Letters (both 2014), have determined the basic byproducts of protons colliding with water within the cell, and with nucleotides and clusters of DNA bases - the basic units of DNA. The studies shed light on how the protons and their water radiolysis products damage DNA. The results of Morales' computational experiments match the limited data from physical chemistry experiments, leading to greater confidence in their ability to capture the quantum behavior in action. Though fundamental in nature, the insights and data that Morales' simulations produce help researchers understand proton cancer therapy at the microscale, and help modulate factors like dosage and beam direction.

"The results are all very promising and we're excited to extend our research further," Morales said. "These simulations will bring about a unique way to understand and control proton cancer therapy that, at a very low cost, will help to drastically improve the treatment of cancer patients without risking human subjects."

More information: Austin J. Privett et al, Exploring water radiolysis in proton cancer therapy: Time-dependent, non-adiabatic simulations of H+ + (H2O)1-6, PLOS ONE (2017). DOI: 10.1371/journal.pone.0174456
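Liu's description of robust planning earlier in this article (many beamlet intensities, many error scenarios, worst case kept acceptable) maps naturally onto a scenario-based optimization problem. The sketch below is a tiny, hypothetical version of that idea using SciPy's linear programming solver; it is not the planning system used at Mayo Clinic or MD Anderson, and every dose matrix and number in it is invented.

# Toy scenario-based robust optimization of beamlet intensities.
# A few voxels, beamlets and scenarios only; clinical 4D robust planning has
# tens of thousands of variables and uses specialized solvers.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n_beamlets, n_tumor, n_healthy, n_scenarios = 8, 5, 4, 3
d_target, d_healthy_max = 60.0, 20.0   # toy dose prescription and healthy-tissue limit (Gy)

# Nominal dose-influence matrices: dose per unit beamlet intensity.
A_tumor_nom = rng.uniform(1.0, 3.0, size=(n_tumor, n_beamlets))
A_healthy_nom = rng.uniform(0.1, 0.6, size=(n_healthy, n_beamlets))

# Each scenario perturbs the dose-influence matrices (setup error, breathing, ...).
scenarios = [(A_tumor_nom * rng.uniform(0.9, 1.1, size=A_tumor_nom.shape),
              A_healthy_nom * rng.uniform(0.9, 1.1, size=A_healthy_nom.shape))
             for _ in range(n_scenarios)]

# Decision variables: beamlet intensities x (>= 0) plus the worst-case tumor shortfall t.
# Objective: minimize t, with a small penalty on total intensity.
c = np.concatenate([np.full(n_beamlets, 1e-3), [1.0]])

rows, rhs = [], []
for A_t, A_h in scenarios:
    for v in range(n_tumor):        # tumor dose >= d_target - t in every scenario
        rows.append(np.concatenate([-A_t[v], [-1.0]]))
        rhs.append(-d_target)
    for v in range(n_healthy):      # healthy-tissue dose <= limit in every scenario
        rows.append(np.concatenate([A_h[v], [0.0]]))
        rhs.append(d_healthy_max)

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0, None)] * (n_beamlets + 1), method="highs")
x, t = res.x[:-1], res.x[-1]
print("beamlet intensities:", np.round(x, 2))
print("worst-case tumor shortfall (Gy):", round(t, 2))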


The search for genes associated with common epilepsy, including both focal and generalised epilepsies, has been intensive in the past few decades. Consequently, our understanding of the genetic background of common epilepsy has improved considerably, and current genetic studies have optimised their design accordingly, showing much promise for the future. Nevertheless, we can only explain a fraction of the heritability of common epilepsy with the currently known genetic factors. These factors have been identified with a range of different gene mapping techniques, including linkage analysis of epilepsy families, association studies, and recent large scale sequencing studies, which individually are optimal to detect a certain class of genetic variation. Here, we give a selected overview of the genetic studies that illustrate the evolution of epilepsy genetics and contribute to the evidence for a polygenic basis of common epilepsy that likely involves both rare and common disease variants. © 2017 The Author(s).


News Article | April 5, 2016
Site: phys.org

How can we make walking rehabilitation easier, more fun and more effective? This is the question that the University of Twente, De Hoogstraat Revalidatie, UMC Utrecht and LedGo have been working hard to answer over the past two years. LedGo is a world leader in interactive LED video floors, and its work features in high-profile entertainment productions such as The Voice, Victoria's Secret and the Eurovision Song Contest. The collaboration has resulted in a permanent LED video floor for patients in rehabilitation, to be installed in the DesignLab on the University of Twente campus on 5 April.


News Article | December 23, 2015
Site: www.nature.com

C57BL/6 (B6, H-2b) and LP (H-2b) mice were obtained from Jackson Laboratory. B6 Lgr5-LacZ and B6 lgr5-gfp-ires-CreERT2 (Lgr5–GFP) mice were provided by H. Clevers1, 10. Mouse maintenance and procedures were done in accordance with the institutional protocol guideline of the Memorial Sloan Kettering Cancer Center (MSKCC) Institutional Animal Care and Use Committee. Mice were housed in micro-isolator cages, five per cage, in MSKCC pathogen-free facilities, and received standard chow and autoclaved sterile drinking water. To adjust for differences in weight and intestinal flora among other factors, identical mice were purchased from Jackson and then randomly distributed over different cages and groups by a non-biased technician who had no insight or information about the purpose or details of the experiment. The investigations assessing clinical outcome parameters were performed by non-biased technicians with no particular knowledge or information regarding the hypotheses of the experiments and no knowledge of the specifics of the individual groups. Isolation of intestinal crypts and the dissociation of cells for flow cytometry analysis were largely performed as previously described10. In brief, after euthanizing the mice with CO2 and collecting small and large intestines, the organs were opened longitudinally and washed with PBS. To dissociate the crypts, small intestine was incubated at 4 °C in EDTA (10 mM) for 15 min and then in EDTA (5 mM) for an additional 15 min. Large intestine was incubated in collagenase type 4 (Worthington) for 30 min at 37 °C to isolate the crypts. To isolate single cells from small and large intestine crypts, the pellet was further incubated in 1× TrypLE express (Gibco, Life Technologies) supplemented with 0.8 kU ml−1 DNase1 (Roche). For mouse organoids, depending on the experiments, 200–400 crypts per well were suspended in Matrigel composed of 25% advanced DMEM/F12 medium (Gibco) and 75% growth-factor-reduced Matrigel (Corning). After the Matrigel polymerized, complete ENR medium containing advanced DMEM/F12 (Sigma), 2 mM Glutamax (Invitrogen), 10 mM HEPES (Sigma), 100 U ml−1 penicillin, 100 μg ml−1 streptomycin (Sigma), 1 mM N-acetyl cysteine (Sigma), B27 supplement (Invitrogen), N2 supplement (Invitrogen), 50 ng ml−1 mouse EGF (Peprotech), 100 ng ml−1 mouse Noggin (Peprotech) and 10% human R-spondin-1-conditioned medium from R-spondin-1-transfected HEK 293T cells31 was added to small intestine crypt cultures10. For experiments evaluating organoid budding, the concentration of R-spondin-1 was lowered to 1.25–5%. For mouse large intestine, crypts were cultured in ‘WENR’ medium containing 50% WNT3a-conditioned medium in addition to the aforementioned proteins and 1% BSA (Sigma), and supplemented with SB202190 (10 μM, Sigma), ALK5 inhibitor A83-01 (500 nM, Tocris Bioscience) and nicotinamide (10 mM, Sigma). Medium was replaced every 2–3 days. Along with medium changes, treatment wells received different concentrations of rmIL-22 (Genscript). We also tested the effects of F-652 (Generon Corporation). In some experiments, organoids from crypts were cultured in the presence of Stattic (Tocris Bioscience). For passaging of organoids, after 5–7 days of culture, organoids were passaged by mechanically disrupting them with a seropipet and cold media to depolymerize the Matrigel and generate organoid fragments. After washing away the old Matrigel by spinning down at 600 r.p.m., organoid fragments were replated in liquid Matrigel. 
ISCs were isolated from Lgr5–GFP mice using a modified crypt isolation protocol with 20 min of 30 mM EDTA32, 33 followed by several strainer steps and a 5-min incubation with TrypLE and 0.8 kU ml−1 DNase1 under minute-to-minute vortexing to make a single-cell suspension. The Lgr5–GFPhigh cells were isolated by FACS. Approximately 5,000 ISCs were plated in 30 μl Matrigel and cultured in WENR media containing Rho-kinase/ROCK inhibitor Y-27632 (10 μM, Tocris Bioscience) and Jagged1 (1 μM, Anaspec). Starting from day 4, ISCs were cultured without Wnt. For lymphocyte co-culture experiments, ILCs were isolated from the small intestine lamina propria. Washed small intestine fragments were incubated in EDTA/IEL solution (1× PBS with 5% FBS, 10 mM HEPES buffer, 1% penicillin/streptomycin (Corning), 1% l-glutamine (Gibco), 1 mM EDTA and 1 mM dithiothreitol (DTT)) in a 37 °C shaker for 15 min. The samples were strained (100 μm) and put in a collagenase solution (RPMI 1640, 5% FCS, 10 mM HEPES, 1% penicillin/streptomycin, 1% glutamine, 1 mg ml−1 collagenase D (Roche) and 1 U ml−1 DNase1 (Roche)) and incubated twice for 10 min in a 37 °C shaker. Afterwards, the samples were centrifuged at 1,500 r.p.m. for 5 min and washed with RPMI solution without enzymes. After several washes, the cell suspension was transferred into a 40% Percoll solution (in PBS), which was overlaid on an 80% Percoll solution. After spinning, the interface containing the lamina propria mononuclear cells was aspirated and washed in medium. The cell suspension was then stained with extracellular markers and Topro3 for viability. Topro3−CD45+CD11b−CD11c−CD90+ LPLs from B6 wild-type and Il22−/− mice and Topro3−CD45+CD3−RORγt+ ILC3s34 from Rorc(γt)-GFP+ mice (Jackson) were sorted for co-cultures with SI crypts. (For antibodies used, see Supplementary Table 1.) To activate and maintain LPLs and ILCs in culture, rmIL-2 (1,000 U ml−1), rmIL-15 (10 ng ml−1), rmIL-7 (50 ng ml−1) and rmIL-23 (50 ng ml−1) were added to the ENR medium in co-culture experiments. We have also performed co-cultures with the addition of only rmIL-23 (50 ng ml−1) to ENR media. LPLs and SI crypts were cultured in Matrigel with a 7:1 LPL:crypt ratio; ILCs and crypts were cultured in Matrigel with a 25:1 ILC:crypt ratio. Co-cultures were compared to crypts cultured in ENR plus cytokines without LPLs or ILCs present. A neutralizing monoclonal antibody against IL-22 (8E11, Genentech)35 was used to abrogate IL-22-specific effects of ILCs. For specific experiments, organoids were cultured from fresh crypts obtained from specific genetically modified mice, such as the Stat1−/− mice (129S6/SvEv-Stat1tm1Rds, Taconic) and Stat3fl/fl mice (Jackson). Organoids from Stat3fl/fl mice that had been grown for 7 days were dissociated as single cells and incubated with adenoviral-Cre (University of Iowa) to cause the deletion of Stat3 from floxed organoid cells. Frozen passaged organoids from Lgr5DTR (Lgr5-DTR)25 mice were used to culture organoids in which Lgr5+ stem cells could be depleted with daily administration of diphtheria toxin (1 ng μl−1). For Paneth-cell-deficient organoid cultures, frozen crypts from Atoh1ΔIEC mice36 depleted of Paneth cells were used to culture organoids. As previously described36, Atoh1ΔIEC mice (and littermate controls) were given an intraperitoneal injection of tamoxifen (1 mg per mouse, Sigma, dissolved in corn oil) for 5 consecutive days to achieve deletion of ATOH1 from intestinal epithelium. 
Animals were euthanized on day 7 after the first injection, and intestinal crypts were isolated and frozen in 10% dimethylsulfoxide (DMSO) and 90% FBS. To investigate the effect of IL-22 on human small intestine, we generated human duodenal organoids from banked frozen organoids (>passage 7) that had been previously generated from biopsies obtained during duodenoscopy of three independent healthy human donors. All human donors had been investigated for coeliac disease, but turned out to have normal pathology. All provided written informed consent to participate in this study according to a protocol reviewed and approved by the review board of the UMC Utrecht, The Netherlands (protocol STEM study, METC 10-402/K). Human organoids were cultured in 10 μl Matrigel drops in expansion medium containing WENR with 10 nM SB202190, 500 nM A83-01 and 10 mM nicotinamide. For IL-22 stimulation experiments, rhIL-22 10 ng ml−1 (Genscript) was added daily. For the purpose of size measurements at day 6, organoids were passaged as single cells. Where applicable, organoid cultures were performed using conditioned media containing R-spondin-1 and WNT3a produced by stably transfected cell lines. R-spondin-1-transfected HEK293T cells31 were provided by C. Kuo. WNT3a-transfected HEK293T cells were provided by H. Clevers (patent WO2010090513A2). Cell lines were tested for mycoplasma and confirmed to be negative. For size evaluation, the surface area of organoid horizontal cross sections was measured. If all organoids in a well could not be measured, several random non-overlapping pictures were acquired from each well using a Zeiss Axio Observer Z1 inverted microscope and then analysed using MetaMorph or ImageJ software. Organoid perimeters for area measurements have been defined manually and by automated determination using the Analyze Particle function of ImageJ software, with investigator verification of the automated determinations, as automated measurements allowed for unbiased analyses of increased numbers of organoids. For automated size measurements, the threshold for organoid identification was set based on monochrome images. The sizes of the largest and smallest organoids in the reference well were measured manually, and their areas were used as the reference values for setting the minimal and maximal particle sizes. Organoids touching the edge of the images were excluded from the counting. After 5–7 days in culture, total organoid numbers per well were counted by light microscopy to evaluate growth efficiency. All organoid numbers were counted manually in this fashion except for the organoid counts presented in Extended Data Fig. 5b, which were counted using automated ImageJ analysis, as these organoids were too numerous to count manually. To compare organoid efficiency in different conditions, combining experiments with different organoid numbers, the percentage of organoids relative to the number of organoids in ENR-control (rmIL-22 0 ng ml−1) was calculated. The efficiency from sorted ISCs was presented as the percentage of cells forming organoids per number of seeded cells. BMT procedures were performed as previously described37. A minor histocompatibility antigen-mismatched BMT model (LP into B6; H-2b into H-2b) was used. Female B6 wild-type mice were typically used as recipients for transplantation at an age of 8–10 weeks. Recipient mice received 1,100 cGy of split-dosed lethal irradiation (550 cGy × 2) 3–4 h apart to reduce gastrointestinal toxicity. 
To obtain LP bone marrow cells from euthanized donor mice, the femurs and tibias were collected aseptically and the bone marrow canals washed out with sterile media. Bone marrow cells were depleted of T cells by incubation with anti-Thy 1.2 and low-TOX-M rabbit complement (Cedarlane Laboratories). The T-cell-depleted (TCD) bone marrow was analysed for purity by quantification of the remaining T cell contamination using flow cytometry. T cell contamination was usually about 0.2% of all leukocytes after a single round of complement depletion. LP donor T cells were prepared by collecting splenocytes aseptically from euthanized donor mice. T cells were purified using positive selection with CD5 magnetic Microbeads with the MACS system (Miltenyi Biotec). T cell purity was determined by flow cytometry, and was routinely approximately 90%. Recipients typically received 5 × 10^6 TCD bone marrow cells with or without 4 × 10^6 T cells per mouse via tail vein injection. Mice were monitored daily for survival and weekly for GVHD scores with an established clinical GVHD scoring system (including weight, posture, activity, fur ruffling and skin integrity) as previously described38. A clinical GVHD index with a maximum possible score of ten was then generated. Mice with a score of five or greater were considered moribund and euthanized by CO2 asphyxia. Recombinant mouse IL-22 was purchased from GenScript and reconstituted as described by the manufacturer to a concentration of 40 μg ml−1 in PBS. Mice were treated daily via i.p. injection with either 100 μl PBS or 100 μl PBS containing 4 μg rmIL-22. IL-22 administration was started on day 7 after BMT. This schedule was based on the results of rmIL-22 pharmacokinetics tested in untransplanted mice. For in vivo F-652 administration, starting from day 7 after BMT, mice were injected subcutaneously every other day for ten consecutive weeks with PBS or 100 μg kg−1 F-652. Mice were euthanized for organ analysis 21 days after BMT using CO2 asphyxiation. For histopathological analysis of GVHD, the small and large intestines were formalin-preserved, paraffin-embedded, sectioned and stained with haematoxylin and eosin. An expert in the field of GVHD pathology, blinded to allocation, assessed the sections for markers of GVHD histopathology. As described previously38, a semiquantitative score consisting of 19 different parameters associated with GVHD was calculated. For evaluation of stem-cell numbers, small intestines from Lgr5-LacZ recipient mice that were transplanted with LP bone marrow (and T cells where applicable) were collected. β-galactosidase (LacZ) staining was performed as described previously1. Washed 2.5-cm-sized small intestine fragments were incubated with an ice-cold fixative, consisting of 1% formaldehyde, 0.2% NP40 and 0.2% glutaraldehyde. After removing the fixative, organs were stained for the presence of LacZ according to the manufacturer's protocol (LacZ staining kit, Invivogen). The organs were then formalin-preserved, paraffin-embedded, sectioned and counterstained with Nuclear Fast Red (Vector Labs). Immunohistochemistry detection of REG3β was performed at the Molecular Cytology Core Facility of MSKCC using a Discovery XT processor (Ventana Medical Systems). Formalin-fixed tissue sections were deparaffinized with EZPrep buffer (Ventana Medical Systems), antigen retrieval was performed with CC1 buffer (Ventana Medical Systems) and sections were blocked for 30 min with Background Buster solution (Innovex). 
Slides were incubated with anti-REG3β antibodies (R&D Systems, MAB5110; 1 μg ml−1) or isotype (5 μg ml−1) for 6 h, followed by a 60-min incubation with biotinylated goat anti-rat IgG (Vector Laboratories, PK-4004) at a 1:200 dilution. The detection was performed with a DAB detection kit (Ventana Medical Systems) according to the manufacturer’s instructions. Slides were counterstained with haematoxylin (Ventana Medical Systems), and coverslips were added with Permount (Fisher Scientific). See Supplementary Table 1 for full description of antibodies used. Immunofluorescent staining was performed at the Molecular Cytology Core Facility of Memorial Sloan Kettering Cancer Center using a Discovery XT processor (Ventana Medical Systems). Formalin-fixed tissue sections were deparaffinized with EZPrep buffer (Ventana Medical Systems), and antigen retrieval was performed with CC1 buffer (Ventana Medical Systems). Sections were blocked for 30 min with Background Buster solution (Innovex) followed by avidin/biotin blocking for 12 min. IL-22R antibodies (R&D Systems, MAB42; 0.1 μg ml−1) were applied and sections were incubated for 5 h followed by 60 min incubation with biotinylated goat anti-rat IgG (Vector Laboratories, PK-4004) at a 1:200 dilution. The detection was performed with streptavidin–horseradish peroxidase (HRP) D (part of DABMap kit, Ventana Medical Systems), followed by incubation with Tyramide Alexa Fluor 488 (Invitrogen, T20932) prepared according to manufacturer’s instruction with predetermined dilutions. Next, lysozyme antibodies (DAKO, A099; 2 μg ml−1) were applied and sections were incubated for 6 h followed by incubation with biotinylated goat anti-rabbit IgG (Vector Laboratories, PK6101) for 60 min. The detection was performed with streptavidin–HRP D (part of DABMap kit, Ventana Medical Systems), followed by incubation with Tyramide Alexa Fluor 594 (Invitrogen, T20935) prepared according to manufacturer’s instruction with predetermined dilutions. Finally, GFP antibodies were applied and sections were incubated for 5 h followed by incubation with biotinylated goat anti-chicken IgG (Vector Laboratories, BA-9010) for 60 min. The detection was performed with streptavidin–HRP D (part of DABMap kit, Ventana Medical Systems), followed by incubation with Tyramide Alexa Fluor 647 (Invitrogen, T20936) prepared according to manufacturer instruction with predetermined dilutions. Slides were counterstained with DAPI (Sigma Aldrich, D9542; 5 μg ml−1) for 10 min and coverslips were added with Mowiol. For immunofluorescent and other microscopic imaging, including LacZ and immunohistochemistry slides, contrast and white balance were set based on control slides for each experiment, and the same settings were used for all slides to maximize sharpness and contrast. See Supplementary Table 1 for full description of antibodies used. Spleen and small intestine were collected from euthanized BMT recipients, and organs were then homogenized and spun down. The supernatant was stored at −20 °C until use for cytokine analysis. The cytokine multiplex assays were performed on thawed samples with the mouse Th1/Th2/Th17/Th22 13plex (FlowCytomix Multiplex kit, eBioscience) and performed according to the manufacturer’s protocol. For in vivo experiments, lymphoid organs were collected from euthanized mice and processed into single cell suspension. Cells were stained with the appropriate mixture of antibodies. 
For intracellular analysis, an eBioscience Fixation/Permeabilization kit was used per the manufacturer's protocol. After thorough washing, the cells were stained with intracellular and extracellular antibodies simultaneously. Fluorochrome-labelled antibodies were purchased from BD Pharmingen (CD4, CD8, CD24, CD25, CD45, α4β7, P-STAT3 Y705 and P-STAT1 Y701), eBioscience (FOXP3), R&D (IL-22R), and Invitrogen (GFP). DAPI and Fixable Live/Dead Cell Stain Kits (Invitrogen) were used for viability staining. Paneth cells were identified based on bright CD24 staining and side scatter granularity as described previously2. For flow cytometry of small intestine organoid cells, organoids were dissociated using TrypLE (37 °C). After vigorously pipetting through a p200 pipette causing mechanical disruption, the crypt suspension was washed with 10 ml of DMEM/F12 medium containing 10% FBS and 0.8 kU ml−1 DNase1 and passaged through a cell strainer. Where applicable, the cells were directly stained or first fixed (4% paraformaldehyde) and permeabilized (methanol) depending on the extracellular or intracellular location of the target protein. All stainings with live cells were performed in PBS without Mg2+ and Ca2+ with 0.5% BSA. For EdU incorporation experiments, there was a 1 h pre-incubation of EdU in the ENR medium of the intact organoid cultures before dissociating the cells with TrypLE. Cells were stained using Click-it kits for imaging and flow cytometry (Life Technologies). For cell cycle analysis, single cell suspensions obtained from dissociated organoids were fixed and stained with Hoechst 33342 (Life Technologies), then assessed with flow cytometry for DNA content and ploidy. For intracellular pSTAT staining of organoids, organoids were mechanically disrupted into crypt fragments, stimulated for 20 min with 20 ng ml−1 IL-22 at 37 °C, and then fixed with 4% paraformaldehyde (10 min at 37 °C). To assess STAT activation in Lgr5+ cells, after freshly isolating crypts from Lgr5–GFP mice, single-cell suspensions including Y-27632 (10 μM) were stimulated with IL-22. After obtaining a single cell suspension of stimulated and fixed cells, the samples were filtered (40 μm) and permeabilized with ice-cold (−20 °C) methanol. Fixed and permeabilized cells were rehydrated with PBS and thoroughly washed with PBS before staining, then stained with anti-phospho-STAT3 and anti-phospho-STAT1, plus anti-GFP or cell surface markers, for 30 min at 4 °C. All flow cytometry was performed with an LSRII cytometer (BD Biosciences) using FACSDiva (BD Biosciences), and the data were analysed with FlowJo software (Treestar). See Supplementary Table 1 for full description of antibodies used. Western blot analysis was carried out on total protein extracts. Free-floating crypts isolated from small intestine were treated in DMEM supplemented with Y-27632 (10 ng ml−1, Tocris) and IL-22 (5 ng ml−1, 30 min). Vehicle (PBS) was added to control wells. Crypts were then lysed in RIPA buffer containing a cocktail of protease and phosphatase inhibitors (Sigma). After sonication, protein amount was determined using the bicinchoninic acid assay kit (Pierce). Loading 30 μg per lane of lysate, proteins were separated using electrophoresis in a 10% polyacrylamide gel and transferred to nitrocellulose. 
Membranes were blocked for 1 h at room temperature with 1% Blot-Qualified BSA (Promega, W384A) and 1% non-fat milk (LabScientific, M0841) and then incubated overnight at 4 °C with the following primary antibodies: rabbit anti-phospho-STAT1 (7649P), rabbit anti-phospho-STAT3 (9131S), rabbit anti-STAT1 (9172P) and rabbit anti-STAT3 (4904P), all from Cell Signaling. This was followed by incubation with the secondary antibody anti-rabbit HRP (7074P2) and visualization with the Pierce ECL Western Blotting Substrate (Thermo Scientific, 32106). Cell viability in organoids was assessed with a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium (MTT) test, based on the identification of metabolically active cells. The organoids were incubated with MTT (0.9 mg ml−1 final concentration, Sigma) for 2 h at 37 °C. Matrigel and cells containing the intracellular reduction end product formazan were solubilized with acidic isopropanol (isopropanol with HCl), and formazan production was evaluated by spectrophotometry using the Infinite M1000 pro plate reader (Tecan). For qPCR, segments of small intestine or isolated crypts were collected from euthanized mice and stored at −80 °C. Alternatively, RNA was isolated from organoids after in vitro culture. Extracted RNA was also stored at −80 °C. Reverse transcriptase PCR (RT–PCR) was performed with a QuantiTect Reverse Transcription Kit (QIAGEN) or a High-Capacity RNA-to-cDNA Kit (Applied Biosystems). qPCR was performed on a Step-One Plus or QuantStudio 7 Flex System (Applied Biosystems) using TaqMan Universal PCR Master Mix (Applied Biosystems). Specific primers were obtained from Applied Biosystems: Actb: Mm01205647_g1; Hprt: Mm00446968_m1; Reg3b: Mm00440616_g1; Reg3g: Mm00441127_m1; Wnt3: Mm00437336_m1; Egf: Mm00438696_m1; Rspo3: Mm00661105_m1; Axin2: Mm00443610_m1; Ctnnb1: Mm00483039_m1; Defa1: Mm02524428_g1; and Il22ra1: Mm01192943_m1. Other primers were obtained from PrimerBank: Gapdh (ID 6679937a1); Cdkn1a (also known as p21) (ID 6671726a1); Cdkn2d (also known as p19) (ID 31981844a1); Wnt3a (ID 7106447a1); Axin2 (ID 31982733a1); Hes1 (ID 6680205a1); Dll4 (ID 9506547a1); Dll1 (ID 6681197a1), for which cDNAs were amplified with SYBR master mix (Applied Biosystems) in a QuantStudio 7 Flex System (Applied Biosystems). Relative amounts of mRNA were calculated by the comparative ΔCt method with Actb, Hprt or Gapdh as housekeeping genes. For Il22ra1 qPCR on Lgr5+ cells, dissociated crypt cells from Lgr5–GFP mice were stained and isolated using the following monoclonal antibodies/parameters: EpCAM-1 (G8.8; BD Bioscience); CD45 (30F11; Life Technologies); CD31 (390; BioLegend); Ter119 (Ter119; BioLegend); GFP expression; dead cells were excluded using 7AAD. Cells were acquired on a BD ARIAIII and FACS-sorted. Cells were sorted directly into RA-1/TCEP (Macherey-Nagel) lysis buffer and stored at −80 °C until further analysis. RNA of haematopoietic cells (a composite of dendritic cells, ILCs and B cells) was used as a negative control. RNA was extracted using the NucleoSpin RNA XS kit (Macherey-Nagel) and cDNA was prepared with Ovation Pico and PicoSL WTA Systems V2 (NuGen). For qPCR, a Neviti Thermal Cycler (Applied Biosystems) and DyNAmo Flash SYBR Green qPCR kit (Finnzymes) were used, with the addition of MgCl2 to a final concentration of 4 mM. All reactions were done in duplicate and normalized to Gapdh. Relative expression was calculated by the cycling threshold (Ct) method as 2^−ΔCt. 
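The relative expression calculation mentioned above is compact enough to show directly. A minimal sketch of the 2^−ΔCt computation in Python, with invented Ct values and Gapdh as the housekeeping gene:

# Relative expression by the cycling-threshold (Ct) method:
# 2 ** -(Ct_target - Ct_housekeeping). The Ct values below are invented.
import numpy as np

ct = {
    "Il22ra1": np.array([27.1, 26.8, 27.4]),   # replicate Ct values, toy numbers
    "Gapdh":   np.array([18.2, 18.0, 18.3]),
}

delta_ct = ct["Il22ra1"].mean() - ct["Gapdh"].mean()
relative_expression = 2.0 ** (-delta_ct)
print(f"Il22ra1 relative to Gapdh: {relative_expression:.2e}")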
The primer sequences were as follows: Il22ra1: forward 5′-TCGGCTTGCTCTGTTATC-3′, reverse 5′-CCACTGAGGTCCAAGACA-3′. To explore the association of ISC gene signatures (GSE33948 and GSE23672)16 with STAT3-regulated genes, we performed GSEA in a mouse DSS colitis data set (GSE15955)12, comparing Stat3fl/fl;Villin-Cre− (wild type) and Stat3fl/fl;Villin-Cre+ (Stat3ΔIEC) mice with DSS colitis (GSEA2-2.2.0; http://www.broadinstitute.org/gsea)39, 40. A Paneth cell signature gene set was used as a negative control (DLL1+CD24hi, GSE39915)17. Nominal P values are shown. No statistical methods were used to predetermine sample size. To detect an effect size of >50% difference in means, with an assumed coefficient of variation of 30%, common in biological systems, we attempted to have at least five samples per group, particularly for in vivo studies. All experiments were repeated at least once. No mice were excluded from experiments. Experiments that were technical failures, such as experiments in vitro where cultures did not grow or experiments in vivo where transplanted control mice (bone marrow plus T cells) did not develop GVHD, were not included for analysis. Occasional individual mice that died post-transplant before analysis could not be included for tissue evaluation. All data are mean and s.e.m. for the various groups. Statistics are based on ‘n’ biological replicates. All tests performed are two sided. For the comparisons of two groups, a t-test or non-parametric test was performed. Adjustments for multiple comparisons were made. In most cases, non-parametric testing was performed if normal distribution could not be assumed. RT–qPCR reactions and ordinal outcome variables were tested non-parametrically. All analyses of statistical significance were calculated and displayed compared with the reference control group unless otherwise stated. There is large biological variation in organoid size. Statistical analyses of organoid sizes were thus based on all evaluable organoids (at least 25 organoids per group for all experiments). Statistical analyses of organoid numbers and efficiency were based on individual wells. To take into account intra-individual and intra-experimental variation as well, all in vitro experiments were performed at least twice with several wells per condition, and sample material coming from at least two different mice. Statistical analyses of stem-cell numbers (Lgr5-LacZ mice) in vivo were performed on several independent sections from multiple mice. Statistics were calculated and display graphs were generated using Graphpad Prism.
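The sample size rationale stated above (detecting a >50% difference in means with an assumed coefficient of variation of 30%) corresponds to a standardized effect size of roughly 0.5/0.3. The sketch below runs a standard two-sample t-test power calculation with statsmodels for that effect size; it illustrates the stated reasoning and is not a reproduction of the authors' own calculation.

# Two-sample t-test power calculation for the stated design assumptions:
# a 50% difference in means with a coefficient of variation of 30%,
# i.e. a standardized effect size of roughly 0.5 / 0.3.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5 / 0.3          # difference in means divided by SD (Cohen's d)
analysis = TTestIndPower()

# Samples per group needed for 80% power at alpha = 0.05 (two-sided).
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                   power=0.8, alternative="two-sided")
# Power actually achieved with five samples per group, as targeted in the study.
power_at_n5 = analysis.solve_power(effect_size=effect_size, nobs1=5,
                                   alpha=0.05, ratio=1.0,
                                   alternative="two-sided")
print(f"n per group for 80% power: {n_per_group:.1f}")
print(f"power with n = 5 per group: {power_at_n5:.2f}")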


News Article | October 26, 2016
Site: www.nature.com

Redefine excellence: fix incentives to fix science | Do judge: treat metrics only as surrogates

An obsession with metrics pervades science. Our institution, the University Medical Center Utrecht in the Netherlands, is not exempt. On our website, we proudly declare that we publish about 2,500 peer-reviewed scientific publications per year, with higher-than-average citation rates. A few years ago, an evaluation committee spent hours discussing which of several faculty members to promote, only to settle on the two who had already been awarded particularly prestigious grants. Meanwhile, faculty members who spent time crafting policy advice had a hard time explaining how this added to their scientific output, even when it affected clinical decisions across the country. Publications that directly influenced patient care were weighted no higher in evaluations than any other paper, and counted for less if the work appeared in the grey literature, that is, in official reports rather than in scientific journals. Some researchers were actively discouraged from pursuing publications that might improve medicine but would garner few citations. All of this led many faculty members, especially younger ones, to complain that publication pressure kept them from doing what really mattered, such as strengthening contacts with patient organizations or trying to make promising treatments work in the real world.

The institution decided to break free of this mindset. Our university medical centre has just completed its first round of professorial appointments using a different approach, which will continue to be used for the roughly 20 professors appointed each year. The institution is also evaluating research programmes in a new way.

In 2013, senior faculty members and administrators (including F.M.) at the University Medical Center (UMC) Utrecht, Utrecht University and the University of Amsterdam hosted workshops and published a position paper concluding that bibliometric parameters were overemphasized and societal relevance was undervalued1. This led to extensive media attention, with newspapers and television shows devoting sections to the 'crisis' in science. Other efforts have come to similar conclusions2, 3, 4.

In the wake of this public discussion, we launched our own internal debates with two goals. We wanted to create policies that ensured individual researchers would be judged on their actual contributions, not on counts of their publications, and we wanted our research programmes to be geared towards creating societal impact, not just scientific excellence.

Every meeting was attended by 20–60 UMC Utrecht researchers, many explicitly invited for their candour. They ranged from PhD students and young principal investigators to professors and department heads. The executive board, especially F.M., prepared the ground for frank criticism by publicly acknowledging publication pressure, perverse incentives and systemic flaws in science5, 6. Attendees debated the right balance between research driven by curiosity and research inspired by clinical needs. They considered the role of patients' advice in setting research priorities, the definition of a good PhD trajectory, and how to weigh scientific novelty against societal relevance. We published interviews and reports from these meetings on our internal website and in our magazine.

We spent the next year redefining the portfolio that applicants seeking academic promotions are asked to submit.
There were few examples to guide us, but we took inspiration from the approach used at the Karolinska Institute in Stockholm, which asks candidates for a package of scientific, teaching and other achievements. Along with other elements, Utrecht candidates now provide a short essay about who they are and what their plans are as faculty members. They must discuss their achievements in terms of five domains, only one of which is scientific publications and grants. First, candidates describe their managerial responsibilities and academic duties, such as reviewing for journals and contributing to internal and external committees. Second, they explain how much time they devote to students, what courses they have developed and what other teaching responsibilities they have taken on. Then, if applicable, they describe their clinical work as well as their participation in organizing clinical trials and research into new treatments and diagnostics. Finally, the portfolio covers entrepreneurship and community outreach.

We also revamped the applicant-evaluation procedure. The chair of the committee is formally tasked with ensuring that all domains are discussed for each candidate. This keeps us from overlooking someone with hard-to-quantify qualities, such as the motivation to turn 'promising' results into something that really matters for patients, or to seek out non-obvious collaborations.

Another aspect of breaking free of the 'bibliometric mindset' came in how we assess our multidisciplinary research programmes, each of which has on average 80 principal investigators. The evaluation method was developed by a committee of faculty members, most of them in the early stages of their careers. Following processes outlined by the UK Research Excellence Framework, which audits the output of UK institutions, committee members drew on case studies and published literature to define properties that could be used in broad assessments. This led to a suite of semi-qualitative indicators that includes conventional outcome measurements; evaluations of leadership and citizenship across UMC Utrecht and other communities; and assessments of structure and process, such as how research questions are formed and results disseminated. We think that these shifts will reduce waste7, 8, increase impact, and attract researchers geared for collaboration with each other and with society at large.

Researchers at UMC Utrecht are already accustomed to national reviews, so our proposal to revamp evaluations fell on fertile ground. However, crafting these new policies took commitment and patience. Two aspects of our approach were crucial. First, we did not let ourselves become paralysed by the belief that only joint action with funders and journals would bring real change; we were willing to move forward on our own as an institution. Second, we ensured that although change was stimulated from the top, the criteria were set by the faculty members who expect to be judged by those standards. Indeed, after ample debate fuelled by continuing international criticism of bibliometric indicators, the first wave of group leaders has embraced the new system, which will permeate the institute in the years to come.

During the past few years of lectures and workshops, we were struck at first by how little early- and mid-career researchers knew about the 'business model' of modern science and about how science really works. But they were engaged, quick to learn and quick to identify forward-looking ideas to improve science.
Students organized a brainstorming session with high-level faculty members about how to change the medical and life-sciences curriculum to cover reward-and-incentive structures. The PhD council chose a 'supervisor of the year' on the basis of the quality of supervision, rather than, as had been the custom, simply the highest number of PhD students supervised.

Extended community discussions pay off. We believe that selection and evaluation committees are well aware that bibliometrics can be a reductive force, but that assessors may lack the vocabulary to discuss less-quantifiable dimensions. By formally requiring qualitative indicators and a descriptive portfolio, we broaden what can be talked about9. We shape the structures that shape science; we can make sure that they do not warp it.

Some 20 years ago, when I was dean of biological sciences at the University of Manchester, UK, I tried an experiment. At the time, we assessed candidates applying for appointments and promotions using conventional measures: number of publications, quality of journal, h-index and so on. Instead, we decided to ask applicants to tell us what they considered to be their three most important publications and why, and to submit a copy of each. We asked simple, direct questions: what have you discovered? Why is it important? What have you done about your discovery? To make applicants feel more comfortable with this peculiar assessment, we also indicated that they could submit, if they wished, a list of all of their other scientific publications; everyone did. That experience has influenced the work I do now, as director-general of the main science-funding agency in Ireland.

The three publications chosen by the applicant told me a lot about their achievements and judgement. Often, they highlighted unconventional impacts of their work. For example, a would-be professor of medicine whose research concerned safely shortening hospital stays selected an article that he had written in the free, unrefereed magazine Hospital Doctor. Asked why, he replied that hospital managers and most doctors actually read that magazine, so the piece had facilitated rapid adoption of his findings; he later detailed the impact of this work in an eminent medical journal (a paper he chose not to submit).

I believe most committee members actually read the papers submitted, unlike in other evaluations, where panellists have time only to scan exhaustive lists of publications. This approach may not have changed committee decisions, but it did change the incentives of both candidates and panellists. The focus was on work that was important and meaningful. When counts of papers or citations become the dominant assessment criteria, people often overlook the basics: what did this scientist do, and why does it matter?

But committee members often felt uncomfortable; they thought their selection was subjective, and they felt more secure with the numbers. After all, the biological-sciences faculty had just been through a major reform to prioritize research activity. The committee members had a point: bibliometric methods do bring some objectivity and may help to avoid biases and prejudices. Still, such approaches do not necessarily help minorities, young people or those working on particularly difficult problems; nor do they encourage reproducibility (see go.nature.com/2dyn0sq). Exercising judgement is what people making important decisions are supposed to do.
When I moved on from my position as dean, the system reverted to its conventional form. Changes that depart from a cultural norm are difficult to sustain, particularly when they rely on the passion of a small number of people. In the years since, bibliometric assessments have become ever more embedded in evaluations across the world. Lately, rumblings against their influence have grown louder3. To move the scientific enterprise towards better measures of quality, perhaps we need a collective effort by a group of leading international universities and research funders. What you measure is what you get: if funders focus on assessing solid research advances (with potential economic and social impact), they may encourage reliable, important work and discourage bibliometric gaming.

What can funders do? By tweaking rewards, these bodies can shape researchers' choices profoundly. The UK government has commissioned two reports2, 10 on how bibliometrics can be gamed, and is mulling ways to improve nationwide evaluations. Already we have seen a higher value placed on reproducibility by the US National Institutes of Health, with an increased focus on methodology and a policy not to release funds until concerns raised by grant reviewers are explicitly addressed. The Netherlands Organisation for Scientific Research, the country's main funding body, has allocated funding for repeat experiments.

Research funders should also explicitly encourage important research, even at the expense of publication rate. To this end, at Science Foundation Ireland we will experiment with changes to the grant application form similar to my Manchester pilot. We will also introduce prizes, for example for mentorship. We believe that such concrete steps will incentivize high-quality research over the long term, counterbalance some of the distortions in the current system, and help institutions to follow suit.

If enough international research organizations and funders return to basic principles in promotions, appointments and evaluations, then perhaps the surrogates can be used properly: as supporting information, not as endpoints in themselves.


News Article | November 14, 2016
Site: www.sciencedaily.com

At UMC Utrecht, a brain implant has been placed in a patient, enabling her to operate a speech computer with her mind. The researchers and the patient worked intensively to get the settings right, and she can now communicate at home with her family and caregivers via the implant. That a patient can use this technique at home is unique in the world. The research was published in the New England Journal of Medicine.

Because she has ALS (amyotrophic lateral sclerosis), the patient is no longer able to move or speak. Doctors placed electrodes on her brain that pick up brain activity, enabling her to wirelessly control a speech computer that she now uses at home. "This is a major breakthrough in achieving autonomous communication among severely paralyzed patients whose paralysis is caused by either ALS, a cerebral hemorrhage or trauma," says Nick Ramsey, professor of cognitive neuroscience at the University Medical Center (UMC) Utrecht. "In effect, this patient has had a kind of remote control placed in her head, which enables her to operate a speech computer without the use of her muscles."

The patient operates the speech computer by moving her fingers in her mind. This changes the brain signal under the electrodes, and that change is converted into a mouse click. On a screen in front of her she can see the alphabet, plus some additional functions such as deleting a letter or word and selecting words based on the letters she has already spelled. The letters on the screen light up one by one, and she selects a letter by producing the mouse click with her brain at the right moment. In this way she can compose words, letter by letter, which are then spoken by the speech computer. The technique is comparable to operating a speech computer via a push-button with a muscle that still functions, for example in the neck or hand; if a patient lacks any usable muscle activity, a brain signal can be used instead.

The patient underwent surgery during which electrodes were placed on her brain through tiny holes in her skull. A small transmitter was then placed in her body below her collarbone. This transmitter receives the signals from the electrodes via subcutaneous wires, amplifies them and transmits them wirelessly. The mouse click is calculated from these signals, actuating the speech computer.

The patient is closely supervised. Shortly after the operation, she started on a journey of discovery together with the researchers to find the right settings for the device and the best way to bring her brain activity under control. It began with a simple game to practice the art of clicking; once she had mastered clicking, she moved on to the speech computer, which she can now use without the help of the research team.

The UMC Utrecht Brain Center has spent many years researching the possibility of controlling a computer by means of electrodes that capture brain activity. Speech computers driven by brain signals measured with an electrode cap worn on the scalp have long been tested in research laboratories; that a patient can use the technique at home, through invisible, implanted electrodes, is unique in the world. If the implant proves to work well in three people, the researchers hope to launch a larger, international trial. Ramsey: "We hope that these results will stimulate research into more advanced implants, so that some day not only people with communication problems, but also people with paraplegia, for example, can be helped."
This research is part of the Utrecht NeuroProsthesis (UNP) project conducted by the UMC Utrecht Brain Center Rudolf Magnus, and is funded by technology foundation STW. The implant itself was provided by one of the R&D departments of medical technology company Medtronic.
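As a rough illustration of the scanning-speller principle described in the article (items highlighted one at a time, with a brain-derived 'click' selecting the highlighted item), the sketch below simulates the selection logic. It is not the UNP or Medtronic software; the item set, step-based timing model and simulated click schedule are assumptions made purely for illustration.

```python
# Minimal sketch of a scanning speller: items are highlighted one at a time and
# a binary "click" (derived, in the real system, from sensorimotor brain
# activity; here simulated by a fixed schedule) selects the current item.

import itertools

ITEMS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") + ["_SPACE_", "_DELETE_"]

def run_speller(click_schedule, max_steps=200):
    """Scan through ITEMS in a loop; whenever the simulated click signal is
    True, select the currently highlighted item. Returns the spelled text."""
    text = []
    clicks = iter(click_schedule)
    for step, item in zip(range(max_steps), itertools.cycle(ITEMS)):
        clicked = next(clicks, False)   # stand-in for the decoded brain signal
        if clicked:
            if item == "_DELETE_":
                if text:
                    text.pop()
            elif item == "_SPACE_":
                text.append(" ")
            else:
                text.append(item)
    return "".join(text)

# Simulated click times chosen so the highlighted letters spell "HI":
# the scan cycles A, B, C, ...; step 7 highlights "H", step 36 (28 + 8) highlights "I".
schedule = [i in (7, 36) for i in range(40)]
print(run_speller(schedule))   # -> "HI"
```

In the real system the scan advances at a fixed rate and the patient times a single attempted-movement "click" to the moment the desired letter lights up; the sketch collapses that timing into one boolean per scan step.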


News Article | November 17, 2016
Site: www.gizmag.com

The University Medical Center Utrecht (UMC Utrecht) has announced the success of a brain implant that enables a Dutch patient with ALS to operate a speech computer with her mind. Fifty-nine-year-old mother of three Hanneke de Bruijne was diagnosed with ALS (Lou Gehrig's disease) in 2008 and can no longer move or speak, yet her mind is fully functional. The electrodes implanted on her brain pick up brain activity and enable her to wirelessly control a speech computer to communicate with family and caregivers. What's more, she uses the technology not in a laboratory but at home, and it is mobile enough to travel with, promising a new life for those otherwise locked in non-functioning bodies.

Computers are likely to become much better at repairing human cognitive and sensory-motor functions in the not-too-distant future. The UMC achievement marks a milestone in assisting and augmenting, not just repairing, those functions, and gives us a peek at a future world where communication by thought alone might be possible.

"This is a major breakthrough in achieving autonomous communication among severely paralyzed patients whose paralysis is caused by either ALS, a cerebral hemorrhage or trauma," said Nick Ramsey, Professor of Cognitive Neuroscience at UMC Utrecht. "In effect, this patient has had a kind of remote control placed in her head, which enables her to operate a speech computer without the use of her muscles."

Hanneke de Bruijne underwent surgery last year to implant the electrodes on her brain, with the wires passing through tiny holes in her skull; a small transmitter was then placed in her body below her collarbone. This transmitter receives the signals from the electrodes via subcutaneous wires, amplifies them and transmits them wirelessly to the computer. The speech computer is operated by de Bruijne moving her fingers "in her mind." This brain activity is detected by the electrodes and converted into a mouse click. On a screen in front of her, she can construct words and sentences using a dedicated interface designed for the task, and those words are in turn vocalized by the speech computer. This is similar to using a speech computer via a push-button interface, but with a brain signal, rather than a muscle, used to actuate the button.

Trials of the implant are currently underway with three patients, and the researchers hope to progress to a larger, international trial following that phase. "We hope that these results will stimulate research into more advanced implants, so that some day not only people with communication problems, but also people with paraplegia, for example, can be helped," says Ramsey.

The research that enabled this breakthrough was conducted by the UMC Utrecht Brain Center Rudolf Magnus through its Utrecht NeuroProsthesis (UNP) project and is funded by the technology foundation STW. The implant itself came from medical technology company Medtronic.
