University of Tokyo Hospital


Nakano M., University of Tokyo Hospital | Kida S., University of Tokyo Hospital | Masutani Y., University of Tokyo Hospital | Shiraki T., University of Tokyo Hospital | And 4 more authors.
Medical Physics | Year: 2013

Purpose: Four-dimensional (4D) cone-beam CT (CBCT) techniques have begun to be used clinically as a tool for image-guided radiotherapy (IGRT), especially in the treatment of lung tumors. However, these techniques assume periodic respiratory motion, and few approaches exist for visualizing organs that undergo non-periodic, time-ordered motion with CBCT imaging. The present study proposes a method to visualize time-ordered motion, including peristaltic motion of gastrointestinal organs and adjacent areas. Methods: Projection data sets of clinical patients and of a digital phantom were reconstructed. The patient data sets were acquired with the X-ray Volume Imaging system (XVI, version 4.2) on a Synergy linear accelerator (Elekta, UK) as pre-treatment CBCT imaging for the setup of prostate radiotherapy patients, using a flat-panel detector (FPD) unit offset by 11.5 cm. An elliptic-cylindrical digital phantom containing a moving air sphere of 3 cm diameter was also reconstructed. These projection data sets were reconstructed with our in-house software based on the Feldkamp-Davis-Kress (FDK) algorithm. Angular ranges of 180 degrees or less were used to reconstruct the CBCT image set of each time phase, and the range was advanced as time progressed. Results: Reconstructed sagittal images of the clinical patients showed flatus and stool moving within the rectum as time progressed. Reconstructed sagittal images of the digital phantom at several angular ranges showed that the apparent longitudinal length of the sphere shortened as the angular range was reduced, while its shape became increasingly blurred in the vertical direction. Conclusion: The presented time-ordered 4D CBCT reconstruction visualized deformation of the intestine and rectum and the motion of flatus and stool, although the method trades image quality for improved temporal resolution. © 2013, American Association of Physicists in Medicine. All rights reserved.
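To make the reconstruction scheme concrete, below is a minimal Python sketch (not the authors' in-house software) of the sliding angular-window selection the abstract describes: each time phase is reconstructed from an arc of 180 degrees or less, and the arc advances as acquisition time progresses. The function name and the arc and step sizes are illustrative assumptions; each selected subset would then be passed to an FDK-type reconstruction (e.g., a short-scan FDK with Parker weighting) to produce one volume of the 4D series.

```python
# Sketch of the sliding angular-window selection behind time-ordered 4D CBCT.
import numpy as np

def sliding_windows(angles_deg, arc_deg=180.0, step_deg=30.0):
    """Yield (phase_index, projection_indices) pairs.

    angles_deg : acquisition angle of each projection, in time order.
    arc_deg    : angular range used per time phase (<= 180 per the study).
    step_deg   : how far the window advances between phases (assumed here).
    """
    angles = np.asarray(angles_deg, dtype=float)
    start, phase = angles[0], 0
    while start + arc_deg <= angles[-1]:
        idx = np.nonzero((angles >= start) & (angles < start + arc_deg))[0]
        yield phase, idx
        start += step_deg
        phase += 1

# Example: a 360-degree scan sampled every 0.5 degrees.
angles = np.arange(0.0, 360.0, 0.5)
for phase, idx in sliding_windows(angles):
    # Each subset would be reconstructed into one time-phase volume.
    print(phase, angles[idx[0]], angles[idx[-1]], len(idx))
```

A smaller step gives more time phases (better temporal sampling) at unchanged per-phase image quality, while a shorter arc improves temporal resolution at the cost of the limited-angle artifacts the abstract reports.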


Futaguchi M., University of Tokyo Hospital | Haga A., University of Tokyo Hospital | Sakumi A., University of Tokyo Hospital | Okamoto H., National Cancer Center Hospital | And 6 more authors.
Medical Physics | Year: 2013

Purpose: In the treatment of retinoblastoma, 106Ru brachytherapy is an important method for controlling the tumor while preserving eye function. In our clinical experience, tumors on the optic disc have been effectively shrunk using the COC-type applicator (BEBIG), which has a notch. In this study, the dose distribution was evaluated by Monte Carlo simulation to investigate the dose delivered to the notched area of the COC applicator. Methods: A model of the COC-type applicator was created and registered in the EGS5 Monte Carlo simulation code. The applicator thickness was 1 mm, including a 0.1 mm silver radiation window and a 0.2 mm radioactive layer. The notched area was reproduced from measurements of the actual geometry. The dose to the notched area was represented in polar coordinates: two-dimensional relative dose distributions were calculated on spherical shells 0.2 mm thick at distances of 0.1, 1.3, and 2.5 mm from the surface. For comparison, the dose profile was also calculated with the Plaque Simulator (BEBIG). Results: The Monte Carlo results showed that scattered electrons contributed substantially to the dose in the notched area, and this contribution became dominant as the distance from the surface increased. Around the notched area, the Plaque Simulator result differed significantly from that of the Monte Carlo simulation. Conclusion: The dose distribution of the COC-type applicator could be quantified by Monte Carlo calculation. The notched area of the COC-type applicator was found to be sufficiently irradiated by scattered electrons. The Plaque Simulator could not reproduce the dose distribution around the notched area. © 2013, American Association of Physicists in Medicine. All rights reserved.
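As an illustration of the scoring geometry, here is a minimal Python sketch, not the study's EGS5 setup, of histogramming simulated energy deposits into a relative dose map on a thin spherical shell expressed in polar coordinates, matching the abstract's description of 0.2 mm shells at 0.1, 1.3, and 2.5 mm from the applicator surface. The function name, the shell radius, and the deposit data are placeholders standing in for real Monte Carlo output.

```python
# Sketch of scoring a relative dose map on a thin spherical shell.
import numpy as np

def shell_dose_map(pos_mm, edep, r_inner, thickness=0.2, n_theta=36, n_phi=72):
    """Histogram energy deposits that fall inside a spherical shell.

    pos_mm  : (N, 3) deposit positions relative to the sphere centre, in mm.
    edep    : (N,) deposited energies.
    r_inner : inner shell radius (applicator radius + distance from surface).
    Returns an (n_theta, n_phi) relative dose map normalised to its maximum.
    """
    r = np.linalg.norm(pos_mm, axis=1)
    inside = (r >= r_inner) & (r < r_inner + thickness)
    p, e = pos_mm[inside], edep[inside]
    rr = np.linalg.norm(p, axis=1)
    theta = np.arccos(np.clip(p[:, 2] / rr, -1.0, 1.0))   # polar angle
    phi = np.arctan2(p[:, 1], p[:, 0]) % (2 * np.pi)      # azimuthal angle
    dose, _, _ = np.histogram2d(theta, phi, bins=[n_theta, n_phi],
                                range=[[0, np.pi], [0, 2 * np.pi]], weights=e)
    return dose / dose.max()

# Placeholder deposits (random positions/energies) standing in for EGS5 output;
# 12 mm is an assumed applicator radius, 0.1 mm the distance from its surface.
rng = np.random.default_rng(0)
pos = rng.normal(size=(100_000, 3)) * 5.0
edep = rng.exponential(1.0, size=100_000)
print(shell_dose_map(pos, edep, r_inner=12.1).shape)  # (36, 72)
```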


News Article | November 5, 2016
Site: www.sciencedaily.com

It is virtually impossible to remove all contamination from robotic surgical instruments, even after multiple cleanings, according to a study published today in Infection Control & Hospital Epidemiology, the journal of the Society for Healthcare Epidemiology of America. The results show that complete removal of surface contaminants from these tools may be unattainable, even after following manufacturers' cleaning instructions, leaving patients at risk for surgical site infections.

"One of the top priorities for hospitals is to treat patients safely and with minimal risk of infection," said Yuhei Saito, RN, PHN, MS, lead author of the study and assistant professor at the University of Tokyo Hospital. "Our results show that surgical instruments could be placing patients at risk due to current cleaning procedures. One way to address this issue is to establish new standards for cleaning surgical instruments, including multipart robotic tools."

The study examined 132 robotic and ordinary instruments over a 21-month period. Instruments were collected immediately after use to determine their level of contamination. The researchers used in-house cleaning methods that combined manual procedures with ultrasonication, following the manufacturers' instructions. Protein concentration was measured after each of three successive cleanings to track changes in the total amount of residual protein.

Because of their complex structure, robotic instruments retained more residual protein and were harder to clean than ordinary instruments: cleaning was 97.6 percent effective for robotic instruments versus 99.1 percent for ordinary instruments. The researchers therefore suggest that new cleaning standards may be needed that rely on repeated measurements of residual protein rather than a single post-cleaning measurement.

"These instruments are wonderful tools that allow surgeons to operate with care, but completely decontaminating them has been a challenge for hospitals," said Saito. "By implementing new cleaning procedures that measure the level of contamination on an instrument more than once, we could potentially save many patients from future infections."
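For a sense of the arithmetic behind those percentages, here is a minimal Python sketch with invented protein measurements (chosen only so the first-cycle figures match the reported 97.6 and 99.1 percent): cleaning efficacy as the fraction of protein removed, tracked across repeated cleaning cycles rather than from a single post-cleaning measurement.

```python
# Hypothetical residual protein (micrograms) on one instrument:
# index 0 is the level before cleaning, indices 1-3 follow each cycle.
protein_ug = {
    "robotic":  [5000.0, 120.0, 80.0, 60.0],
    "ordinary": [5000.0, 45.0, 30.0, 25.0],
}

for kind, series in protein_ug.items():
    efficacy = 1.0 - series[1] / series[0]  # fraction removed by first cleaning
    print(f"{kind}: first-cycle efficacy {efficacy:.1%}, "
          f"residual after 3 cycles {series[3]:.0f} ug")
# robotic -> 97.6%, ordinary -> 99.1%, mirroring the reported figures
```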


Kondo M., University of Tokyo Hospital | Kondo M., Medinet Co., Ltd. | Izumi T., University of Tokyo Hospital | Izumi T., Medinet Co., Ltd. | And 8 more authors.
Journal of Visualized Experiments | Year: 2011

Human γδ T cells can recognize and respond to a wide variety of stress-induced antigens, thereby developing innate, broad anti-tumor and anti-infective activity. The majority of γδ T cells in peripheral blood have the Vγ9Vδ2 T cell receptor. These cells recognize antigen in a major histocompatibility complex-independent manner and develop strong cytolytic and Th1-like effector functions [1]. Therefore, γδ T cells are attractive candidate effector cells for cancer immunotherapy. Vγ9Vδ2 T cells respond to phosphoantigens such as (E)-4-hydroxy-3-methyl-but-2-enyl pyrophosphate (HMBPP), which is synthesized in bacteria via isoprenoid biosynthesis, and isopentenyl pyrophosphate (IPP), which is produced in eukaryotic cells through the mevalonate pathway. Under physiological conditions, the generation of IPP in nontransformed cells is not sufficient to activate γδ T cells. Dysregulation of the mevalonate pathway in tumor cells leads to accumulation of IPP and to γδ T cell activation. Because aminobisphosphonates (such as pamidronate or zoledronate) inhibit farnesyl pyrophosphate synthase (FPPS), the enzyme acting downstream of IPP in the mevalonate pathway, intracellular levels of IPP and sensitivity to γδ T cell recognition can be increased therapeutically. At pharmacologically relevant concentrations of aminobisphosphonates, IPP accumulation is less efficient in nontransformed cells than in tumor cells, which makes cancer immunotherapy by aminobisphosphonate-mediated activation of γδ T cells feasible. Interestingly, when PBMC are treated with aminobisphosphonates, IPP accumulates in monocytes because these cells take up the drug efficiently. Monocytes that accumulate IPP become antigen-presenting cells and stimulate Vγ9Vδ2 T cells in the peripheral blood. Based on these mechanisms, we developed a technique for large-scale expansion of γδ T cell cultures using zoledronate and interleukin-2 (IL-2). Other methods for expanding γδ T cells use the synthetic phosphoantigens bromohydrin pyrophosphate (BrHPP) or 2-methyl-3-butenyl-1-pyrophosphate (2M3B1PP). All of these methods allow ex vivo expansion that yields large numbers of γδ T cells for use in adoptive immunotherapy. However, only zoledronate is an FDA-approved, commercially available reagent. Zoledronate-expanded γδ T cells display a CD27- CD45RA- effector memory phenotype, and their function can be evaluated by an IFN-γ production assay. © 2011 Journal of Visualized Experiments.


News Article | October 26, 2016
Site: www.newscientist.com

THE doctors were stumped. After months of cancer treatment at the University of Tokyo Hospital, the patient, a woman in her 60s, was not getting much better. So the medical team plugged the woman's symptoms into IBM's Watson, the supercomputer that once famously trounced human champions in the TV quiz show Jeopardy! Watson rifled through its storehouse of oncology data and announced that she had a rare form of secondary leukemia. The team changed the treatment, and she was soon out of hospital. Watson spotted in minutes what could otherwise have taken weeks to diagnose, one doctor told The Japan Times. "It might be an exaggeration to say AI saved her life, but it surely gave us the data we needed in an extremely speedy fashion."

Is this the future of medicine? Artificial intelligence researchers have long dreamed of creating machines that can diagnose health conditions, suggest treatment plans to doctors, and even predict how a patient's health will change. The main advantage of such an AI wouldn't be speed, but precision. A study published earlier this year found that medical error is the third leading cause of death in the US, and a significant chunk of that is incorrect diagnoses.

There are just too many health conditions, and the literature changes too rapidly, for a primary care physician to retain it all, says Herbert Chase, who works on biomedical informatics at Columbia University in New York City. "We've exceeded where it's humanly possible for doctors to know what they need to know," he says. "There are dozens of conditions that are being missed that could easily be diagnosed by a machine."

Chase once advised the IBM Watson team. These days, he is working on an algorithm that scours doctors' notes for subtle clues that patients may be developing multiple sclerosis. The goal is to build a program that can calculate each person's risk of MS, whether it be 0.5 or 5 per cent. He imagines a future in which software will automatically analyse electronic health records and spit out warnings or recommendations. "It's a partnership. The machine makes a recommendation, then the human gets involved," says Chase. But the spectrum of human illness is complex, so "algorithms will have to be built brick by brick", with the focus on one medical question at a time.

These building blocks often rely on machine learning, a branch of artificial intelligence that seeks patterns in mounds of statistics. Thanks to the ease of collecting and sharing data, researchers are coming up with new algorithms as fast as computers can crunch through the numbers.

For example, a team at Stanford University in California recently unveiled a machine-learning algorithm trained to scrutinise slides of cancerous lung tissue. The computer learned to pick out specific features of each slide, like the cells' size, shape and texture. It could also distinguish between samples from people who had only lived for a short time after diagnosis, say, a few months, and ones from those who survived much longer. The study verified the algorithm's results by testing it on historical data, so the AI could now in principle be used with patients. Stanford's slide-reader is just one in a long string of AIs that are learning to perform medical tasks.
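By way of illustration, the following is a minimal scikit-learn sketch, not the Stanford group's actual pipeline, of the pattern the article describes: train a classifier on handcrafted per-slide features (cell size, shape, texture) to separate short-term from longer-term survivors, then check it on held-out historical cases. The features and labels here are randomly generated stand-ins for real measurements extracted from segmented slides.

```python
# Sketch of feature-based survival-group classification on slide data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Stand-in feature matrix: columns might be mean cell area, eccentricity,
# and texture entropy in a real pipeline.
X = rng.normal(size=(n, 3))
# Stand-in labels: 1 = survived only months after diagnosis, 0 = longer.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=n) > 0).astype(int)

# Hold out historical cases for validation, as the article describes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```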
At a conference last week on machine learning and healthcare in Los Angeles, researchers presented new algorithms to detect seizures, predict the progression of kidney or heart disease, and pick out anomalies in pregnant women or newborn babies. Participants in one programming challenge are getting AIs to listen to recordings of heartbeats, sorting the normal rhythms from the abnormal.

Yet other projects are trying to make medical judgements using more obscure or indirect sources. A Microsoft algorithm, published in June, makes guesses about who has pancreatic cancer based on their web searches. Google DeepMind, based in London, is using masses of anonymised data from the UK's National Health Service to train an AI that will help ophthalmologists. The aim here is to spot looming eye disease earlier than a human can, although the project does raise questions about whether commercial firms are gaining access to health data too cheaply (see "Getting our money's worth").

But is the medical profession ready to hand control over to artificial intelligence? Before that happens, doctors will probably want to see more solid proof that a computer's predictions can improve health outcomes. Some fear that AI diagnosis may backfire, encouraging doctors to overdiagnose and overtest patients. Even if the algorithms work well, there's the question of how to integrate them seamlessly into clinical practice. Doctors, notoriously overworked, aren't likely to want to add yet more items to their checklist. Chase thinks that artificially intelligent diagnostics will end up being integrated right into databases of electronic health records, so that seeking machine insights becomes as routine as getting hold of a patient's data.

Apps that offer diagnostic help already exist, like Isabel, which doctors can run on Google Glass to keep their hands free. But Chase says this approach is unpopular, as doctors must spend time inputting patient data to use the apps. AI diagnostics will only take off when they impose no additional time pressure.

There are social roadblocks, too, says Leo Anthony Celi, a doctor in the intensive care unit of the Beth Israel Deaconess Medical Center in Boston. Down the line, Celi thinks, doctors will function more "like the captain of a ship", delegating most daily tasks either to machines or to highly trained nurses, medical techs and physician assistants. For that system to succeed, doctors must first cede some control, admitting that the machine can perform better than them in some domains. That's a tough ask in a career in which everyone from medical school professors to patients expects doctors to always have the right answers.

Ultimately, there needs to be a cultural shift toward respect for big data and AI's potential in medicine, argues Celi. Only then can we let machines and humans do what each does best. "No one can really replace doctors' ability to talk to patients," he says. "Doctors should focus on what they do better, which is talking to patients and eliciting their values and their advance directives, and leave it up to the machine to make the complex decisions. We're not really good at it."

Artificial intelligence may have a lot to offer in healthcare, but exploiting it means handing over troves of medical data to tech companies. How do we ensure that those transfers are a good deal for the public?
As the recent deal between Google DeepMind and the UK's National Health Service shows, it's not just the quantity of patient data that matters, but its quality. NHS experts have spent a lot of time and money building and tending to the database given to Google. It's not clear that the NHS will get that time and money back. Richard French, legal director at Digital Catapult, a non-profit R&D centre in London, says that the deal may not be the best one for the taxpayer. "One would have expected that Google would pay for access to the records in some form or another." If there was no upfront payment, Google could have told the NHS that any commercial product based on the research would be available to it at a discount, he says.

Hal Hodson

This article appeared in print under the headline "Medicine by machine"
