National Institute of Biomedical Imaging and Bioengineering

Bethesda, MD, United States


News Article | May 8, 2017
Site: www.eurekalert.org

Chicago, IL (May 8, 2017) -- New research shows that an 18-mm magnetized capsule colonoscope, which can be paired with standard medical instruments, successfully performed intricate maneuvers inside the colon while guided by an external magnet attached to a robotic arm. Researchers believe this technology will reduce the potential discomfort of colonoscopies and lead to more people undergoing the life-saving screening test. This new study was presented at Digestive Disease Week® (DDW) 2017, the largest international gathering of physicians, researchers and academics in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery. Researchers hope the capsule robot, which is inserted rectally, could be used safely and effectively in the future on humans to identify and remove pre-cancerous lesions and tumors detected during colonoscopy.

"There's no doubt in the value of colonoscopies to keep people healthy through preventive screening for colon cancer, but many individuals still avoid this procedure because of fear of the test itself, perceived discomfort or the risk of sedation," said Keith Obstein, MD, MPH, FASGE, the study's corresponding author and associate professor of medicine at Vanderbilt University Medical Center, Nashville, TN. "We developed this capsule robot to make traversing the GI tract much easier, for both the clinician and patient."

Dr. Obstein and his team tested the capsule robot, which has a tether that is smaller in diameter than conventional endoscopes, 30 times in the colon of a pig. They reported that it successfully completed the maneuver of retroflexion, in which it bends backward to give the endoscopist a "reverse-view" of the colon wall, on its own, autonomously, at the press of a button.
"Not only is the capsule robot able to actively maneuver through the GI tract to perform diagnostics, it is also able to perform therapeutic maneuvers, such as biopsies of tissue or polyp removal, due to the tether -- something that other capsule devices are unable to do," added Dr. Obstein. "Since the external magnet pulls the capsule robot with the tether segment from the front or head of the capsule, instead of a physician pushing the colonoscope from behind as in traditional endoscopy, we're able to avoid much of the physical pressure that is placed on the patient's colon -- possibly reducing the need for sedation or pain medication."

The team found that the autonomously controlled capsule robot was successful in completing all 30 retroflexions. The capsule robot completed retroflexion in an average of 12 seconds, which was within the researchers' expectations. Following the success of these tests in a pig, Dr. Obstein indicated that the team will be pursuing human trials, expected to begin at the end of 2018. In the meantime, his team will continue to optimize the algorithms that control the robotic arm to improve their performance in maneuvering the capsule-based robotic system.

This study was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award number R01EB018992. Dr. Keith Obstein and Mr. Piotr Slawinski will present data from the study, "The first autonomously controlled capsule robot for colon exploration," abstract Mo1962, on Monday, May 8, at 9:30 a.m. CT, in South Hall of McCormick Place. For more information about featured studies, as well as a schedule of availability for featured researchers, please visit the DDW website. Dr. Obstein and Mr. Slawinski did not have any disclosures for DDW research.
Digestive Disease Week® (DDW) is the largest international gathering of physicians, researchers and academics in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery. Jointly sponsored by the American Association for the Study of Liver Diseases (AASLD), the American Gastroenterological Association (AGA) Institute, the American Society for Gastrointestinal Endoscopy (ASGE) and the Society for Surgery of the Alimentary Tract (SSAT), DDW takes place May 6-9, 2017, at McCormick Place, Chicago. The meeting showcases more than 5,000 abstracts and hundreds of lectures on the latest advances in GI research, medicine and technology. More information can be found on the DDW website.


CHARLOTTESVILLE, Va.--(BUSINESS WIRE)--The Accuro® automatic spinal navigation system significantly enhanced the accuracy of epidural and spinal anesthesia placement compared to traditional landmark techniques, even for residents-in-training and patients with atypical spinal anatomy, according to three abstracts presented at the annual meeting of the Society for Obstetric Anesthesia and Perinatology (SOAP). Accuro is a pocket-sized ultrasound system that uses specialized algorithms to automatically detect the spinal midline and epidural depth and trajectory. The device is optimized for visualization of bony anatomy and eliminates the steep learning curve required for accurate ultrasound interpretation.

Presented at the meeting were the results of a randomized trial conducted at the University of Virginia Medical Center comparing anesthesiology residents’ success placing spinal anesthesia in C-section patients with Accuro guidance and with conventional methods. Participants were not experienced in ultrasound reading and received a brief 10-minute training session on Accuro operation. The trial found that for residents with prior spinal anesthesia experience, Accuro more than doubled the rate of successful first-attempt needle placement in patients with a high body mass index. For these residents, the average number of needle redirections to achieve placement using Accuro was almost half that of the same sub-group using conventional placement methods.

This clinical trial was funded by the National Institutes of Health (NIH) National Institute of Biomedical Imaging and Bioengineering (NIBIB) under award number R44EB015232. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Stanford Clinical Trial Highlights the Accuracy and Effectiveness of Automated Ultrasound In a second clinical trial conducted at Stanford University Medical Center, automated Accuro imaging technology successfully identified the location and depth for optimal epidural anesthesia administration with essentially equivalent accuracy to traditional ultrasound images read by an experienced interpreter. Researchers also found that real needle depth to the epidural space measured after successful delivery significantly correlated with Accuro’s initial assessment. Additionally, Accuro identified the appropriate spinal interspace for needle insertion in 94% of patients. Its automated image navigation enabled 87% success in first-attempt epidural administration for participating physicians, who were primarily anesthesia residents. This study was conducted under the direction of Brendan Carvalho, MD at Stanford Medical Center and led by Katherine Seligman, MD, who is currently faculty at the University of New Mexico. A third SOAP presentation focused on the case history of a pregnant patient with severe scoliosis who received epidural anesthesia under Accuro guidance at Rutgers-New Jersey Medical School. Prior surgical scoliosis treatment had resulted in significant scarring and additional anatomical distortion. RIVANNA® Accuro successfully identified bony landmarks and the optimal spinal interspaces for anesthesia delivery. Ultimately, the procedure was successfully performed with first-time needle placement, followed by a single manipulation. Authors note that conventionally delivered spinal anesthesia for severe scoliosis patients typically involves multiple needle insertions, extended procedure time and elevated risk of complications. They believe that Accuro’s automated recognition of the spinal midline and epidural depth contributed significantly to the procedure’s success. 
“We are extremely encouraged by this growing body of scientific evidence underscoring Accuro’s accuracy and efficacy in providing image guidance for successful spinal and epidural anesthesia placement,” says RIVANNA Chairman and CEO Will Mauldin. “While numerous studies demonstrate the benefits of ultrasound guidance in epidural and related neuraxial anesthesia delivery, widespread use has been hampered by the need for operator experience and the complex, cumbersome nature of the equipment. Our goal is to provide precise, practical image guidance that supports anesthesiology workflow to enhance patient satisfaction and departmental efficiency, while eliminating the risks of repeated needle placement attempts.” RIVANNA® Accuro is the world’s first ultrasound-guidance system designed to effortlessly enhance spinal and epidural anesthesia placement accuracy. The revolutionary platform features BoneEnhance®, which optimizes ultrasound for the visualization of bony vs. soft tissue anatomy, and SpineNav3D™, which automates measurements of the spinal midline, epidural depth and trajectory. Accuro was engineered and commercialized by RIVANNA, an innovative medical device company headquartered in Charlottesville, VA. The proprietary device is FDA 510(k)-cleared for a variety of imaging applications. For anesthesia providers, certainty can be effortless with Accuro. For more information, visit rivannamedical.com.


News Article | May 11, 2017
Site: www.cemag.us

McAlpine, who gained international acclaim in 2013 for integrating electronics and novel 3D-printed nanomaterials to create a “bionic ear,” says this new discovery could also be used to print electronics on real human skin. This ultimate wearable technology could eventually be used for health monitoring or by soldiers in the field to detect dangerous chemicals or explosives. “While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine says. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

McAlpine and his team made the unique sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device — a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor, and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process. Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. These flexible 3D-printed sensors can stretch up to three times their original size.

“This is a completely new way to approach 3D printing of electronics,” McAlpine says. “We have a multifunctional printer that can print several layers to make these flexible sensory devices. This could take us into so many directions from health monitoring to energy harvesting to chemical sensing.” Researchers say the best part of the discovery is that the manufacturing is built into the process. “With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it’s ready for use,” McAlpine says.
“This time, the manufacturing is built right into the process so it is ready to go now.” The researchers say the next step is to move toward semiconductor inks and printing on a real body. “The possibilities for the future are endless,” McAlpine says. In addition to McAlpine, the research team includes University of Minnesota Department of Mechanical Engineering graduate students Shuang-Zhuang Guo, Kaiyan Qiu, Fanben Meng, and Sung Hyun Park. The research was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (Award No. 1DP2EB020537). The researchers used facilities at the University of Minnesota Characterization Facility and Polymer Characterization Facility for testing.



News Article | May 10, 2017
Site: www.eurekalert.org

Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing stretchable electronic sensory devices that could give robots the ability to feel their environment. The discovery is also a major step forward in printing electronics on real human skin. The research will be published in the next issue of Advanced Materials and is currently online.

"This stretchable electronic fabric we developed has many practical uses," said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study. "Putting this type of 'bionic skin' on surgical robots would give surgeons the ability to actually feel during minimally invasive surgeries, which would make surgery easier instead of just using cameras like they do now. These sensors could also make it easier for other robots to walk and interact with their environment."

McAlpine, who gained international acclaim in 2013 for integrating electronics and novel 3D-printed nanomaterials to create a "bionic ear," says this new discovery could also be used to print electronics on real human skin. This ultimate wearable technology could eventually be used for health monitoring or by soldiers in the field to detect dangerous chemicals or explosives. "While we haven't printed on human skin yet, we were able to print on the curved surface of a model hand using our technique," McAlpine said. "We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time."

McAlpine and his team made the unique sensing fabric with a one-of-a-kind 3D printer they built in the lab.
The multifunctional printer has four nozzles to print the various specialized "inks" that make up the layers of the device--a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor, and a sacrificial layer that holds the top layer in place while it sets. The supporting sacrificial layer is later washed away in the final manufacturing process. Surprisingly, all of the layers of "inks" used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. These flexible 3D-printed sensors can stretch up to three times their original size.

"This is a completely new way to approach 3D printing of electronics," McAlpine said. "We have a multifunctional printer that can print several layers to make these flexible sensory devices. This could take us into so many directions from health monitoring to energy harvesting to chemical sensing." Researchers say the best part of the discovery is that the manufacturing is built into the process. "With most research, you discover something and then it needs to be scaled up. Sometimes it could be years before it's ready for use," McAlpine said. "This time, the manufacturing is built right into the process so it is ready to go now."

The researchers say the next step is to move toward semiconductor inks and printing on a real body. "The possibilities for the future are endless," McAlpine said.

In addition to McAlpine, the research team includes University of Minnesota Department of Mechanical Engineering graduate students Shuang-Zhuang Guo, Kaiyan Qiu, Fanben Meng, and Sung Hyun Park. The research was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (Award No. 1DP2EB020537). The researchers used facilities at the University of Minnesota Characterization Facility and Polymer Characterization Facility for testing.
To read the full research paper entitled "3D Printed Stretchable Tactile Sensors," visit the Advanced Materials website.


News Article | March 1, 2017
Site: www.cemag.us

Researchers at Tufts University's School of Engineering have developed a new bioinspired technique that transforms silk protein into complex materials that are easily programmable at the nano-, micro-, and macro-scales as well as ultralight and robust. Among the varied structures generated was a web of silk nanofibers able to withstand a load 4,000 times its own weight. The research is published online in Nature Nanotechnology. Structural proteins are nature’s building blocks, forming materials that provide stiffness, structure, and function in biological systems. A major obstacle to fabricating comparable synthetic materials is natural materials' hierarchical structure which confers unique properties from the molecular to the macro level. When scientists try to emulate this structure, they often find that control at one scale hinders control at other scales. The Tufts researchers combined bottom-up self-assembly characteristic of natural materials with directed, top-down assembly to simultaneously control geometry at all scales, micro-mechanical constraints, and solvent-removal dynamics — all of which determine biomaterial properties. "We generated controllable, multi-scale materials that could be readily engineered with dopant agents. While silk is our main focus, we believe this approach is applicable to other biomaterials and composites and synthetic hydrogels," says corresponding author Fiorenzo Omenetto, Ph.D., Frank C. Doble Professor in the Department of Biomedical Engineering.  Omenetto also has an appointment in the Department of Electrical and Computer Engineering and in the Department of Physics within the School of Arts and Sciences. With the new technique, centimeter-scale silicone molds were patterned with micro-scale features no thicker than a human hair. 
An aqueous fibroin protein gel derived from silkworm cocoons was injected into the molds and then mechanically stressed, by contracting the gel in the presence of water and ethanol and/or physically deforming the entire mold. As the system dried, the silk protein's structure naturally transformed to a more robust beta-sheet crystal. The material's final shape and mechanical properties were precisely engineered by controlling the micro-scale mold pattern, gel contraction, mold deformation and silk dehydration.

"The final result of our process is a stable architecture of aligned nanofibers, similar to natural silk but offering us the opportunity to engineer functionality into the material," says first author Peter Tseng, Ph.D., postdoctoral scholar in Omenetto's Silk Lab at Tufts' School of Engineering. In some of the experiments the Tufts researchers doped the silk gel with gold nanoparticles, which were able to transport heat when exposed to light.

Tseng notes that webs spun by spiders are structurally dense rather than porous. "In contrast, our web structure is aerated, porous and ultra-light while also robust to human touch, which may enable every-day applications in the future," he says. A 2 to 3 cm diameter web weighing approximately 2.5 mg was able to support an 11-gram weight.

Other paper authors were Bradley Napier, Tufts doctoral student in the Silk Lab; Siwei Zhao, Ph.D., post-doctoral associate in the Silk Lab; Alexander N. Mitropoulos, Ph.D., former Tufts doctoral student in biomedical engineering, now at the United States Military Academy at West Point; Matthew B. Applegate, Ph.D., former Tufts doctoral student in biomedical engineering, now at Boston University; Benedetto Marelli, Ph.D., former post-doctoral associate in the Silk Lab, now at MIT; and David L. Kaplan, Ph.D., Stern Family Professor of Engineering.
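As a quick sanity check on the load figures quoted above (a web of roughly 2.5 mg supporting an 11-gram weight), the load-to-weight ratio can be computed directly; the values come from the article, and the arithmetic confirms the "4,000 times its own weight" claim stated earlier:

```python
# Values reported in the article; the ratio itself is computed here.
web_mass_g = 2.5e-3   # web mass: approximately 2.5 mg, expressed in grams
load_g = 11.0         # supported weight: 11 grams

ratio = load_g / web_mass_g
print(f"Load-to-weight ratio: {ratio:.0f}x")  # 4400x, consistent with "4,000 times"
```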
Kaplan holds additional Tufts faculty appointments in the Department of Chemical and Biological Engineering, the Department of Chemistry in the School of Arts and Sciences, the School of Medicine and the School of Dental Medicine. Support for this research came in part from the Office of Naval Research under award N000141310596. Peter Tseng received support from the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under the Kirschstein National Research Service Awards fellowship number F32EB021159-02. The authors also acknowledge support from the Air Force Office of Scientific Research.


News Article | February 15, 2017
Site: www.eurekalert.org

This first-of-its-kind study used MRIs to image the brains of infants; researchers then used brain measurements and a computer algorithm to accurately predict autism before symptoms set in.

CHAPEL HILL, NC - Using magnetic resonance imaging (MRI) in infants with older siblings with autism, researchers from around the country were able to correctly predict 80 percent of those infants who would later meet criteria for autism at two years of age. The study, published today in Nature, is the first to show it is possible to identify which infants - among those with older siblings with autism - will be diagnosed with autism at 24 months of age.

"Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge," said senior author Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. "Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months."

This research project included hundreds of children from across the country and was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director. The project's other clinical sites included the University of Washington, Washington University in St. Louis, and The Children's Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston, and New York University.
"This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this," said first author Heather Hazlett, PhD, assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher. "We are still enrolling families for this study, and we hope to begin work on a similar project to replicate our findings."

People with Autism Spectrum Disorder (ASD) have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. It is estimated that one out of 68 children develops autism in the United States. For infants with older siblings with autism, the risk may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. Despite much research, it has been impossible to identify those at ultra-high risk for autism prior to 24 months of age, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children.

For this Nature study, Piven, Hazlett, and researchers from around the country conducted MRI scans of infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. Increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year of life. Brain overgrowth was tied to the emergence of autistic social deficits in the second year. Previous behavioral studies of infants who later developed autism - who had older siblings with autism - revealed that social behaviors typical of autism emerge during the second year of life.
The researchers then took these data - MRIs of brain volume, surface area, cortical thickness at 6 and 12 months of age, and sex of the infants - and used a computer program to identify a way to classify babies most likely to meet criteria for autism at 24 months of age. The computer program developed the best algorithm to accomplish this, and the researchers applied the algorithm to a separate set of study participants. The researchers found that brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. "This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis," Piven said. If parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene 'pre-symptomatically' before the emergence of the defining symptoms of autism. Research could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. Such interventions may have a greater chance of improving outcomes than treatments started after diagnosis. "Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible," Piven said. "In Parkinson's for instance, we know that once a person is diagnosed, they've already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective." 
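The two-stage workflow described above (train a classifier on brain measurements from one cohort, then apply it to a separate held-out set) can be sketched in miniature. The snippet below is a deliberately simplified toy with invented, synthetic features, not the study's actual algorithm: the researchers used real MRI-derived measures (surface area, cortical thickness, brain volume, sex) and a more sophisticated machine-learning model.

```python
import random

# Toy sketch of the study's workflow: fit a simple nearest-centroid
# classifier on a training cohort, then classify a held-out infant.
# All features and labels below are synthetic and for illustration only.

def nearest_centroid_fit(samples, labels):
    """Compute the mean feature vector (centroid) for each class."""
    centroids = {}
    for lab in set(labels):
        group = [s for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = [sum(col) / len(group) for col in zip(*group)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

random.seed(0)
# Synthetic cohort: the first feature mimics surface-area growth rate,
# with the ASD group centered higher (echoing the reported hyper-expansion).
train, labels = [], []
for _ in range(50):
    train.append([random.gauss(1.2, 0.1), random.gauss(0.5, 0.1)]); labels.append("ASD")
    train.append([random.gauss(1.0, 0.1), random.gauss(0.5, 0.1)]); labels.append("no ASD")

model = nearest_centroid_fit(train, labels)
# Apply the fitted model to a held-out (hypothetical) infant's features.
print(nearest_centroid_predict(model, [1.25, 0.5]))
```

The point of the sketch is the separation of fitting and prediction: the algorithm never sees the held-out infant during training, which mirrors how the study validated its classifier on a separate set of participants.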
Piven said the idea with autism is similar; once autism is diagnosed at age 2-3 years, the brain has already begun to change substantially. "We haven't had a way to detect the biomarkers of autism before the condition sets in and symptoms develop," he said. "Now we have very promising leads that suggest this may in fact be possible." For this research, NIH funding was provided by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Institute of Mental Health (NIMH), and the National Institute of Biomedical Imaging and Bioengineering. Autism Speaks and the Simons Foundation contributed additional support.


Schuck P., National Institute of Biomedical Imaging and Bioengineering
Biophysical Journal | Year: 2010

Sedimentation velocity analytical ultracentrifugation combines relatively high hydrodynamic resolution of macromolecular species with the ability to study macromolecular interactions, which has great potential for studying dynamically assembled multiprotein complexes. Complicated sedimentation boundary shapes appear in multicomponent mixtures when the timescale of the chemical reaction is short relative to the timescale of sedimentation. Although the Lamm partial differential equation rigorously predicts the evolution of concentration profiles for given reaction schemes and parameter sets, this approach is often not directly applicable to data analysis due to experimental and sample imperfections, and/or due to unknown reaction pathways. Recently, we have introduced the effective particle theory, which explains quantitatively and in a simple physical picture the sedimentation boundary patterns arising in the sedimentation of rapidly interacting systems. However, it does not address the diffusional spread of the reaction boundary from the cosedimentation of interacting macromolecules, which also has been of long-standing interest in the theory of sedimentation velocity analytical ultracentrifugation. Here, effective particle theory is exploited to approximate the concentration gradients during the sedimentation process, and to predict the overall, gradient-average diffusion coefficient of the reaction boundary. The analysis of the heterogeneity of the sedimentation and diffusion coefficients across the reaction boundary shows that both are relatively uniform. These results support the application of diffusion-deconvoluting sedimentation coefficient distributions c(s) to the analysis of rapidly interacting systems, and provide a framework for the quantitative interpretation of the diffusional broadening and the apparent molar mass values of the effective sedimenting particle in dynamically associating systems. © 2010 by the Biophysical Society.
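For context on the abstract's final point: one standard route from sedimentation and diffusion coefficients to an apparent molar mass is the Svedberg equation (a textbook relation in analytical ultracentrifugation, not a result of this paper); applied with the gradient-average coefficients of a reaction boundary, it yields the apparent molar mass of the effective sedimenting particle:

```latex
% Svedberg equation: M_app is the apparent molar mass, s the sedimentation
% coefficient, D the diffusion coefficient, R the gas constant, T the
% absolute temperature, \bar{v} the partial specific volume, and \rho the
% solvent density.
M_{\mathrm{app}} = \frac{s\,R\,T}{D\,\left(1 - \bar{v}\rho\right)}
```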


Grant
Agency: NSF | Branch: Interagency Agreement | Program: | Phase: BIOMEDICAL ENGINEERING | Award Amount: 300.00K | Year: 2011



News Article | February 27, 2017
Site: www.rdmag.com

Scientists at Rutgers and other universities have created a new way to identify the state and fate of stem cells earlier than previously possible. Understanding a stem cell's fate -- the type of cell it will eventually become -- and how far along it is in the process of development can help scientists better manipulate cells for stem cell therapy. The beauty of the method is its simplicity and versatility, said Prabhas V. Moghe, distinguished professor of biomedical engineering and chemical and biochemical engineering at Rutgers and senior author of a study published recently in the journal Scientific Reports. "It will usher in the next wave of studies and findings," he added. Existing approaches to assess the states of stem cells look at the overall population of cells but aren't specific enough to identify individual cells' fates. But when implanting stem cells (during a bone marrow transplant following cancer treatment, for example), knowing that each cell will become the desired cell type is essential. Furthermore, many protein markers used to distinguish cell types don't show up until after the cell has transitioned, which can be too late for some applications. To identify earlier signals of a stem cell's fate, an interdisciplinary team from multiple universities collaborated to use super-resolution microscopy to analyze epigenetic modifications. Epigenetic modifications change how DNA is wrapped up within the nucleus, allowing different genes to be expressed. Some modifications signal that a stem cell is transitioning into a particular type of cell, such as a blood, bone or fat cell. Using the new method, the team of scientists was able to determine a cell's fate days before other techniques. 
"Having the ability to visualize a stem cell's future will take some of the questions out of using stem cells to help regenerate tissue and treat diseases," said Rosemarie Hunziker, program director for Tissue Engineering and Regenerative Medicine at the National Institute of Biomedical Imaging and Bioengineering. "It's a relatively simple way to get a jump on determining the right cells to use." The approach, called EDICTS (Epi-mark Descriptor Imaging of Cell Transitional States), involves labeling epigenetic modifications and then imaging the cells with super-resolution microscopy to see the precise location of the marks. "We're able to demarcate and catch changes in these cells that are actually not distinguished by established techniques such as mass spectrometry," Moghe said. He described the method as "fingerprinting the guts of the cell," and the results are quantifiable descriptors of each cell's organization (for example, how particular modifications are distributed throughout the nucleus). The team demonstrated the method's capabilities by measuring two types of epigenetic modifications in the nuclei of human stem cells cultured in a dish. They added chemicals that coaxed some of the cells to become fat cells and others to become bone cells, while another set served as a control. Within three days, the localization of the modifications varied in cells destined for different fates, two to four days before traditional methods could identify such differences between the cells. The technique is specific enough to detect regional changes within individual cells, while existing techniques can only measure total levels of modifications across the entire cell population. "The levels are not significantly different, but how they're organized is different and that seems to correlate with the fact that these cells are actually exhibiting different fates," Moghe said. 
"It allows us to take out a single cell from a population of dissimilar cells," which can help researchers select particular cells for different stem cell applications. The method is as easy as labeling, staining and imaging cells - techniques already familiar to many researchers, he said. As microscopes capable of super-resolution imaging become more widely available, scientists can use the method to sort and screen different types of cells, understand how a particular drug may disrupt epigenetic signaling, or ensure that stem cells to be implanted won't transform into the wrong cell type.
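The key idea behind EDICTS is that it quantifies *where* epigenetic marks sit inside the nucleus, not just how many there are. As a purely hypothetical illustration (not the authors' code, and a much simpler descriptor than theirs), the sketch below takes 2-D coordinates of localized marks and computes a nearest-neighbor-distance summary: tightly clustered marks yield a small mean distance, while marks scattered across the nucleus yield a larger one.

```python
# Hypothetical sketch of a spatial-organization descriptor for labeled
# epigenetic marks; all names and parameters here are illustrative.
import math
import random
import statistics

def nearest_neighbor_distances(points):
    """Distance from each mark to its closest neighboring mark."""
    dists = []
    for i, (xi, yi) in enumerate(points):
        best = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points)
            if j != i
        )
        dists.append(best)
    return dists

def organization_descriptor(points):
    """Mean and spread of nearest-neighbor distances.

    Clustered marks give a small mean; uniformly scattered marks
    give a larger one, so the descriptor separates the two patterns
    even when the total number of marks is identical.
    """
    d = nearest_neighbor_distances(points)
    return statistics.mean(d), statistics.stdev(d)

# Same number of marks, different spatial organization:
random.seed(0)
clustered = [(random.gauss(0.5, 0.02), random.gauss(0.5, 0.02))
             for _ in range(100)]
scattered = [(random.random(), random.random()) for _ in range(100)]

mean_c, _ = organization_descriptor(clustered)
mean_s, _ = organization_descriptor(scattered)
assert mean_c < mean_s  # clustering is visible in the descriptor
```

This mirrors the article's point that the *levels* of modifications can be indistinguishable between cells while their *organization* differs, and that the organization is what correlates with cell fate.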
