Baltimore, MD, United States

News Article | May 9, 2017
Site: www.sciencemag.org

Could the Trump administration be changing its mind about slashing funding for the National Institutes of Health (NIH)? Scientific leaders were optimistic yesterday after meeting for 2 hours at the White House with several biotech executives to discuss the “ecosystem” in which federally funded basic research leads to discoveries that companies turn into treatments. The closed meeting, described 2 weeks ago by Bloomberg News as a “summit,” took place on 8 May in a room in the White House residence. Twenty-seven people attended, including NIH Director Francis Collins, Health and Human Services Secretary Tom Price, Food and Drug Administration Acting Commissioner Stephen Ostroff, and nine White House officials including President Donald Trump’s daughter Ivanka Trump. About a dozen outside speakers ranged from Stanford University’s president and the CEOs of the Mayo Clinic and Johns Hopkins Medicine to four CEOs from biotech companies including Vertex Pharmaceuticals and Regeneron Pharmaceuticals. The meeting took place against the backdrop of White House plans for massive cuts to NIH that lawmakers in Congress have so far rebuffed. After the president proposed cutting NIH by about $1 billion in 2017, Congress instead gave the agency a $2 billion raise, to $34 billion, in a bill Trump signed last Friday. However, Trump’s proposed “skinny budget” for the 2018 fiscal year that begins 1 October would trim $5.8 billion from NIH’s budget. Yesterday’s gathering began with remarks by Vice President Mike Pence and a few brief presentations in which NIH and other leaders discussed milestones such as the Human Genome Project and falling rates of heart disease and HIV, and presented data on how NIH research contributes to the U.S. economy. Then the attendees had a discussion. Biotech leaders explained why private investment can’t substitute for NIH’s support for basic research at academic institutions. 
NIH and academic leaders described the crushing one in five odds of winning an NIH grant that early-career investigators are facing after years of flat NIH funding. Another concern was that Trump immigration policies are making it more difficult to recruit foreign talent. Heart disease researcher Helen Hobbs of the University of Texas Southwestern Medical Center in Dallas told the group that her Chinese postdocs are now accepting job offers in China instead of staying in the United States. White House officials were attentive, including Ivanka Trump, who asked a lot of questions, including on the role of women in biomedical research and K–12 science education, Collins says. “She was totally engaged,” he told reporters later. At the end, the group visited the Oval Office for a photo with President Trump. “By far the overwhelming message was the critical role of NIH in supporting fundamental biomedical research that has laid the foundation” for new diagnostics and therapeutics, says human geneticist Rick Lifton, president of The Rockefeller University in New York City. “I certainly came away with the understanding that we were carefully listened to,” he adds. “The message in the room was loud and clear: We need the NIH! And we need it now more than ever,” says Cori Bargmann, president of science for the Chan Zuckerberg Initiative in Palo Alto, California, in a Facebook post. The meeting was organized by Bill Ford, CEO of General Atlantic, a global investment firm, who has ties to Reed Cordish, Trump’s assistant for intragovernmental and technology initiatives, Collins said. Ford, who sits on the boards of The Rockefeller University and Memorial Sloan Kettering Cancer Center in New York City, thought it would be a “good idea” to have experts describe “the whole ecosystem” that biomedical innovation depends on, Collins said. 
Participants did not discuss the proposed $5.8 billion NIH budget cut in 2018—it was one of several “elephants in the room,” including drug pricing, Collins said. The meeting did touch on Price’s proposal to make that cut by slashing indirect costs, the overhead payments for research grants that NIH now disburses to grantee universities. But “it was not the main focus,” Collins says. (One attendee said that academic leaders explained how indirect cost payments don’t come close to covering the full cost of NIH-funded research.) The meeting “was an important step in laying out bold plans to fortify America’s role as the global leader in biomedicine,” Collins said in a statement afterward. He told reporters he thinks there will be more such meetings. Are NIH’s budget prospects looking better? “I think time will tell,” Collins said.


News Article | May 12, 2017
Site: www.prweb.com

Modern Healthcare, a leader in health care business news, research and data, has ranked OSF HealthCare as having one of the top 10 innovation centers in the U.S. OSF Innovation is listed among other world-renowned centers at Cleveland Clinic, Johns Hopkins and Mayo Clinic. Launched in 2016, OSF Innovation is a multidisciplinary innovation center focused on internal and external innovation for the transformation of health care. “We are proud to see Modern Healthcare recognize our progress as a leading health care innovation program,” said Jeffry Tillery, MD, Senior Vice President and Chief Transformation Officer for OSF. “This speaks to our vision of transforming health care into value for the patients and communities we serve as well as the work of an extremely talented team that works tirelessly to integrate innovation within our health care ministry.” According to Modern Healthcare’s “By the Numbers” research, OSF Innovation ranked #2 out of 10 health care systems that are incubating startups. The recognition is based on the number of startup companies OSF HealthCare is working with to test ideas, offer mentorship and feedback, and help with troubleshooting. The organization collaborates with about 50 entrepreneurial companies. The health care publication also listed OSF Innovation #6 out of 10 health care systems accelerating innovations. The ranking is based on the number of innovation projects being tested internally. OSF HealthCare boasts more than 50. “The unsustainable nature of health care today is what is driving our innovation agenda,” said Michelle Conger, Chief Strategy Officer for OSF. 
“We believe working together with startups and piloting new ideas can help us find solutions to some of health care's biggest challenges.” OSF Innovation employs a variety of approaches to innovation such as improving processes and functions to serve patients; mentoring, networking and partnering with external companies working on solutions to health care problems; investing in start-ups through OSF Ventures; and developing and testing internal and external ideas that could revolutionize how health care is delivered.


A peer-support program launched six years ago at Johns Hopkins Medicine to help doctors and nurses recover after traumatic patient-care events such as a patient's death probably saves the institution close to $2 million annually, according to a recent cost-benefit analysis. The findings, published online in the Journal of Patient Safety, could provide impetus for other medical centers to offer similar programs -- whose benefits go far beyond the financial, the Johns Hopkins Bloomberg School of Public Health researchers say. Clinicians who aren't able to cope with the stress or don't feel supported following these events often suffer a decrease in their work productivity, take time off or quit their jobs, they say. "We often refer to medical providers who are part of these stressful events as 'second victims,'" says study leader William V. Padula, PhD, an assistant professor in the Department of Health Policy and Management at the Bloomberg School, using a term coined by Johns Hopkins professor Albert Wu, MD. "Although providers often aren't considered to be personally affected, the impact of these events can last through their entire career." In 2011, Johns Hopkins Medicine started the Resilience In Stressful Events (RISE) program. The program relies on a multidisciplinary network of peer counselors -- nurses, physicians, social workers, chaplains and other professionals -- who arrive in person or call a fellow clinician in need within 30 minutes of a request for help following an emotionally difficult care-related event, such as a patient in extreme pain, an overwhelmed family, or a patient harmed by a medical error. At large academic medical centers such as Johns Hopkins, with a complicated and often very sick patient population, such events happen on a daily basis, Padula says. 
Although Padula says that he and others involved in the RISE program believe in its importance regardless of cost, the program does require Johns Hopkins to redirect some resources. For example, he says, although the peer counselors all volunteer their time, that's time taken away from other billable work, such as patient care. For Johns Hopkins to continue to invest in the program, he explains, showing a financial benefit is key. To explore whether such a benefit exists, Padula and his colleagues developed a model focused just on the nursing population to investigate the likely financial outcomes of a year with or without the RISE program in place. The model used data from a survey delivered to nurses familiar with the RISE program on their probability of quitting or taking a day off after a stressful event with or without the program in place. It also used Johns Hopkins human resources data as well as the average cost of replacing a lost nursing employee available in published literature, among other data. After inputting this information into the model, the researchers found that the annual cost of the RISE program per nurse was about $656. However, they found that the expected annual cost of not having the program in place was $23,232. Thus, the RISE program results in a net cost savings of $22,576 per nurse. Expanding that out to all users of the system -- including doctors, who have a much higher cost per billable hour and dramatically higher replacement costs -- the total savings to the entire institution in one year was expected to be about $1.81 million. The savings alone is an attractive reason to implement a program like RISE at other large, academic medical centers, Padula says. However, he says, helping clinicians get through a stressful event is the right thing to do, regardless of cost. "It's hard to put a true price on the emotional support and coping mechanisms this program provides for clinicians after tragic events," he says. 
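The per-nurse arithmetic described above can be sketched as a simple expected-cost comparison. In the sketch below, only the $656, $23,232 and $22,576 figures come from the study as reported; the quit probability, absence days and unit costs are hypothetical placeholders, since the article does not give the model's actual inputs.

```python
# Sketch of an expected-cost comparison like the one described for RISE.
# Only the 656 / 23,232 / 22,576 dollar figures come from the article;
# every probability and unit cost below is a hypothetical placeholder.

def expected_annual_cost(p_quit, days_absent, turnover_cost,
                         cost_per_absent_day, program_cost=0.0):
    """Expected yearly cost per nurse from turnover and absenteeism."""
    return (program_cost
            + p_quit * turnover_cost
            + days_absent * cost_per_absent_day)

# Hypothetical inputs, chosen only to show the shape of the comparison.
with_rise = expected_annual_cost(p_quit=0.01, days_absent=0.5,
                                 turnover_cost=60_000,
                                 cost_per_absent_day=400,
                                 program_cost=656)
without_rise = expected_annual_cost(p_quit=0.35, days_absent=2.0,
                                    turnover_cost=60_000,
                                    cost_per_absent_day=400)

# The reported result: $23,232 expected cost without the program versus
# $656 with it, a net savings of $22,576 per nurse per year.
reported_net_savings = 23_232 - 656
print(reported_net_savings)  # 22576
```

The point of the model's structure is that even a modest shift in quit and absence rates, multiplied by the high cost of replacing a nurse, swamps the program's per-nurse cost.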
Other Johns Hopkins researchers who participated in this study include Dane Moran, MPH; Albert W. Wu, MD; Cheryl Connors, MS; Meera R. Chappidi, MPH; Sushama K. Sreedhara, MBB; and Jessica J. Selter, MD. Funding for this study was provided by the Josie King Foundation and the Maryland Patient Safety Center.


News Article | May 11, 2017
Site: www.chromatographytechniques.com

The biggest problem in sports today—whether at the pee-wee, high school, collegiate or professional level—is the diagnosis, or non-diagnosis, of concussions. The signs and symptoms can be tricky to identify, even for seasoned medical professionals. With approximately 6 million people a year suffering from a case of mild traumatic brain injury (TBI) that may or may not result in a concussion, the desperate need for an accurate diagnostic tool is obvious. Researchers all over the world are working on a solution, but it’s a Maryland-based company called BrainScope that has taken the lead with its FDA-approved Ahead 300 device. The Ahead 300 is a portable EEG (electroencephalogram) that provides a rapid, objective assessment of the likelihood of the presence of TBI in patients who present with mild symptoms at the point of care, which could be a field, the emergency room, a park, etc. The goal of the device is to offer clinicians a comprehensive panel of data to assist in the diagnosis of the full spectrum of TBI. Eight years and three generations in the making, this iteration of the Ahead 300 was approved by the FDA in September 2016 and became commercially available on a limited basis in January 2017. “This is a first-of-its-kind instrument. We have finally reached the age where there will be objective quantitative measurements made on the mild-TBI patient, and that’s what is most exciting,” Daniel Hanley Jr., M.D., professor of neurological medicine at the Johns Hopkins University School of Medicine, told Laboratory Equipment. Hanley was the lead investigator on the latest clinical trial for the Ahead 300, which showed 97 percent sensitivity in detecting likely brain bleeding in people with head injuries. Development of the device comes in part from grant support by the National Football League through the GE/NFL Head Health Initiative. BrainScope has also been awarded more than $27 million in research contracts since 2011 by the U.S. 
Department of Defense for research and development. These contracts helped support the diagnostic tool through more than 20 clinical studies at 55 sites, as well as its evolution and enhancement through the years. “We’ve learned in the last 15 years, unfortunately, that blasts cause concussions, so the military is very interested in being able to identify who had a serious blast injury and who had no injury at all but was near where the blast went off,” explained Hanley, noting that the device has applications beyond sports. “The military is very interested in triage and who can go back to work and who can’t. It looks like this device has potential in that second area.” The Ahead 300 features BrainScope’s proprietary, patent-protected EEG capabilities in combination with algorithms and machine learning to assess the likelihood a patient presenting with mild-TBI symptoms has more than 1 milliliter of bleeding in the brain, which would require immediate evaluation and further testing/intervention. A disposable electrode headset, powered by smartphone technology, records EEG data from five regions of the forehead and feeds the signals back to the handheld device. For clinicians who have never had a suitable diagnostic tool for TBI before, the Ahead 300 answers two very important questions that can facilitate proper decision-making: 1) Is it likely the mildly presenting head-injured patient has a traumatic structural brain injury that would be visible on a CT scan, which is the gold standard used in emergency rooms?; and 2) Is there evidence of something functionally abnormal with the brain after head injury, which could be a concussion? In addition to these questions, the Ahead 300 also offers two rapid cognitive performance tests, as well as a number of professional society-based concussion assessment tools commonly used in today’s medical landscape. 
The 5- to 10-minute cognitive performance tests allow doctors to assess patient performance compared with healthy individuals in the same age group, providing an objective metric. The device also includes four objective tests and 16 standard concussion assessment tools, all of which can be customized since there is no national protocol for TBI/concussion diagnosis. Johns Hopkins’ Hanley, also the director of the university’s Brain Injury Outcomes program, recently published a study in Academic Emergency Medicine designed to test the accuracy and effectiveness of the Ahead 300 against a CT scan, the current gold standard for TBI/concussion assessment. Hanley and his research team recruited 720 adults from 11 emergency departments across the nation who presented with a closed head injury. After standard clinical assessment tests by a doctor or nurse to characterize the patient’s symptoms, the Ahead 300 device was used to measure EEG data—essentially tracking and recording brain wave patterns. For this study, the device was programmed to read approximately 30 specific features of brain electrical activity, which an algorithm analyzes to compare the patient’s pattern of brain activity with patterns considered normal. For example, it looked for how fast or slow information traveled from one side of the brain to the other, or whether electrical activity in both sides of the brain was coordinated or if one side was lagging. The accuracy of the device was then tested using CT scans from the patients. The presence of any blood within the intracranial cavity was considered a positive finding, indicating brain bleeding. Initially, researchers sorted the patients tested with Ahead 300 into two categories—“yes,” indicating likely traumatic brain injury with over 1 mL of bleeding, and “no,” for those with likely no bleeding in the brain. 
Of the 564 patients without brain bleeding as confirmed by CT scans, 291 were scored on the Ahead 300 as likely not having a brain injury. However, of the 156 patients with confirmed brain bleeding, the Ahead 300 correctly identified 144, or 92 percent. Because the device correctly classified only about half of those without brain bleeding, the researchers then created three categories to sort patients—yes, no and maybe. The maybe category included a small number of patients with greater-than-usual abnormal EEG activity that was not statistically high enough to be definitively positive. When the results were recalculated on the three-tier system, the sensitivity of detecting someone with a traumatic brain injury increased to 97 percent, with 152 of 156 traumatic head injuries detected by the Ahead 300, including 99 percent of those with at least 1 milliliter of bleeding in the brain. None of the four false negatives required surgery, returned to the hospital due to their injury or needed additional brain imaging. The researchers say these predictive capabilities improve on the clinical criteria currently used to assess whether to do a CT scan—known as the New Orleans Criteria and the Canadian Head CT rules—and predicted the absence of brain bleeding more than 70 percent of the time in those people with no more than one symptom of brain injury, such as disorientation, headache or amnesia. “This work opens up the possibility of diagnosing head injury in a very early and precise way,” Hanley said. “This technology is not meant to replace the CT scan in patients with mild head injury, but it provides the clinician with additional information to facilitate routine clinical decision-making. If someone with a mild head injury was evaluated on the sports or battlefield, then this test could assist in the decision of whether or not he or she needs rapid transport to the hospital. 
Alternatively, if there is an accident with many people injured, medical personnel could use the device to triage which patients would need to have CT scans and who should go first.” This specific study only tested the Ahead 300 on adults between the ages of 18 and 85, leaving out a large portion of the younger target audience. But, since EEG readings differ across the spectrum of ages, the effectiveness of the device on children and teens needs to be tested in a separate pediatric study—which BrainScope is already working on. Hanley said he hopes to collaborate on that study, as well as any overall concussion research going forward. “The exciting thing is this has applications across mild TBI,” he said. “This is a measurement that could be used in chronic traumatic encephalopathy (CTE) research. The question there is ‘how many hits add up to CTE?’ This would give you some measure of mild concussion with or without brain bleeding.”
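The accuracy percentages in the study above follow directly from the patient counts it reports. As a quick check, here is a short recomputation using only the counts given in the article (the variable names are mine):

```python
# Recompute the Ahead 300 accuracy figures from the article's counts.
bleed_total = 156          # patients with CT-confirmed brain bleeding
detected_two_tier = 144    # flagged "yes" under the yes/no scheme
detected_three_tier = 152  # flagged under the yes/maybe/no scheme
no_bleed_total = 564       # patients with no bleeding on CT
no_bleed_correct = 291     # scored as likely having no brain injury

sensitivity_two_tier = detected_two_tier / bleed_total      # 144/156
sensitivity_three_tier = detected_three_tier / bleed_total  # 152/156
specificity = no_bleed_correct / no_bleed_total             # 291/564

print(f"{sensitivity_two_tier:.0%}")    # 92%
print(f"{sensitivity_three_tier:.0%}")  # 97%
print(f"{specificity:.0%}")             # 52%
```

The roughly 52 percent specificity corresponds to the article's observation that only about half of the patients without bleeding were classified as negative, which is what motivated adding the "maybe" tier.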


News Article | May 9, 2017
Site: www.npr.org

Almost 100 hospitals reported suspicious data on dangerous infections to Centers for Medicare & Medicaid Services officials, but the agency did not follow up or examine any of the cases in depth, according to a report by the Health and Human Services inspector general's office. Most hospitals report how many infections strike patients during treatment, meaning the infections are likely contracted inside the facility. Each year, Medicare is supposed to review up to 200 cases in which hospitals report suspicious infection-tracking results. The IG said Medicare should have done an in-depth review of 96 hospitals that submitted "aberrant data patterns" in 2013 and 2014. Such patterns could include a rapid change in results, improbably low infection rates or assertions that infections nearly always struck before patients arrived at the hospital. The IG's report, released Thursday, was designed to address concerns over whether hospitals are "gaming" a system in which it falls to the hospitals to report patient infection rates and, in turn, the facilities can see a bonus or a penalty worth millions of dollars. The bonuses and penalties are part of Medicare's Hospital Inpatient Quality Reporting program, which is meant to reward hospitals for low infection rates and give consumers access to the information at the agency's Hospital Compare website. The report zeroes in on a persistent concern about deadly infections that patients develop as a result of being in the hospital. A recent report in the journal BMJ identified medical errors as the third-leading cause of death in the country. Hospital infections particularly threaten senior citizens with weakened immune systems. Rigorous review of hospital-reported data is important to protect patients, says Lisa McGiffert, director of the Consumers Union's Safe Patient Project. "There's a certain amount of blind faith that the hospitals are going to tell the truth," McGiffert says. 
"It's a bit much to expect that if they have a bad record, they're going to fess up to it." Yet there are no uniform standards for reviewing the data that hospitals report to Medicare, says Dr. Peter Pronovost, senior vice president for patient safety and quality at Johns Hopkins Medicine. "There are greater requirements for what a company says about a washing machine's performance than there is for a hospital on quality of care, and this needs to change," Pronovost said. "We require auditing of financial data, but we don't require auditing of [health care] quality data, and what that implies is that dollars are more important than deaths." In 2015, Medicare and the Centers for Disease Control and Prevention issued a joint statement cautioning against efforts to manipulate the infection data. The report said CDC officials heard "anecdotal" reports of hospitals declining to test apparently infected patients so there would be no infection to report. They also warned against overtesting, which helps hospitals assert that patients came into the hospital with a pre-existing infection, thus avoiding a penalty. In double-checking hospital-reported data from 2013 and 2014, Medicare reviewed the results from 400 randomly selected hospitals, about 10 percent of the nation's more than 4,000 hospitals. Officials also examined the data from 49 "targeted" hospitals that had previously underreported infections or had a low score on a prior year's review. All told, only six hospitals failed the review, which included a look at patients' medical records and tissue sample analyses. Those hospitals were subject to a 0.6 percent reduction in their Medicare payments. Medicare did not specify which six hospitals failed the data review, but it did identify dozens of hospitals that received a pay reduction based on their reports on the quality of care. The new IG report recommends that Medicare "make better use of analytics to ensure the integrity of hospital-reported quality data." 
A response letter from Centers for Medicare & Medicaid Services Administrator Seema Verma says Medicare concurs with the finding and will "continue to evaluate the use of better analytics ... as feasible, based on [Medicare's] operational capabilities." Questions about truth in reporting hospital infections have percolated for years, as reports have trickled out from states that double-check data. In Colorado, one-third of the central-line infections that state reviewers found in 2012 were not reported to the state by hospitals, as required. Central lines are inserted into a patient's vein to deliver nutrients, fluids or medicine. Two years later, though, reviewers found that only 2 percent of central-line infections were not reported. In Connecticut, a 2010 analysis of three months of cases found that hospitals reported about half (23 of 48) of the central-line infections that made patients sick. Reviewers took a second look in 2012 and found improved reporting, with about a quarter of the cases unreported, according to the state public health department. New York state officials have a rigorous data-checking system that they described in a report on 2015 infection rates. In 2014, they targeted hospitals that were reporting low rates of infections and urged self-audits that found underreporting rates of nearly 11 percent. Not all states double-check the data, though, which Pronovost says underscores the problem with data tracking the quality of health care. He said common oversight standards, like the accounting standards that apply to publicly traded corporations, would make sense in health care, given that patients make life-or-death decisions based on quality ratings assigned to hospitals. "You'd think, given the stakes, you'd have more confidence that the data is reliable," he said. Kaiser Health News is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation.


News Article | May 10, 2017
Site: motherboard.vice.com

Sitting in traffic on the way to the Brooklyn Navy Yard on Thursday, I wondered to myself why I hadn't simply taken my bike instead. I was en route to interview Dr. Amen Ra Mashariki, the City of New York's chief analytics officer, determined to better understand how data analysis can help make living in this cramped city a little more pleasant for its nearly 8.5 million residents. After all, the purpose of the event where I caught up with Mashariki, Smart Cities NYC 2017, was to explore the intersection of "technology and urban life." Who better to ask how the reams of data the city collects can actually make a difference? What follows is an edited and condensed recreation of my conversation with Mashariki, which took place on Thursday afternoon. Motherboard: When I saw the title "chief analytics officer," I had no idea what that was in the context of a city employee. Can you explain what a chief analytics officer does and how you got here? Amen Ra Mashariki: I think it's a great question primarily because across the country you'll see mostly chief data officers. You'll rarely see chief analytics officers, and I think that speaks to the difference between the role that I play here and what you might see with folks who are referred to as chief data officers. To give you the quick lay of the land, I started in the private sector at Motorola, but then I went back and got my doctorate. And when I went back and got my doctorate my nephew was diagnosed with cancer and I decided that I wanted to be impactful in the technological space. When I did my dissertation I did it on interoperable medical devices, which is essentially on how to share data across medical devices. Then I did a post-doc at the University of Chicago's cancer research center. Again, staying in that vein of, "How do you use technology for good?" Then I went to Johns Hopkins and helped create their first bioinformatics research space. 
I was a computer scientist there, so all of my degrees are in computer science. So I was a core programmer but I had never thought about public service. I had been in the private sector and academia, but I applied for this program called the White House Fellows. What happens is 11 people are appointed by the president every year to function as high level senior advisers to heads of agencies. It's a very prestigious program about leadership and so that was my foray in 2012 into government. And I just absolutely fell in love with the concept of public service. What was jumping into government like? For me it was a learning experience, because I came in thinking the only people who go into government don't want to do work, or they're lazy or not smart. I was absolutely—my whole concept turned on its head! These are hard workers, committed people, extremely smart, who are experienced and knowledgeable in their fields, who have decided, "Hey I want to be maximally impactful." People in the private sector are the same way but in government you have to have a sense of patience. It's more of a mentality like, "We're not planning for the next quarterly results, we're planning how to better society." That's right! And that's a great segue into my role. My office, the way I describe it is, we're a no-cost data analytics consulting firm to the city. There are things that are long plays but there are things that city agencies need to do now. We need to find the top X-number of buildings that are not up to code. We need to find the places where the city is not performing. We need to identify a more efficient route when we do an emergency response or plow snow. So there are things that need to happen virtually immediately, right? When an agency decides that they're looking for a more innovative way to be more efficient and to be more impactful and to drive costs down, that's where we come in. 
We help city agencies determine what data sets they can use to help solve any number of problems that they may have. What sort of questions do these agencies ask you when they want help with, say, snow removal? The way that we function is that all a city agency has to do is essentially talk about their business challenge. Let's take one problem—one of the problems that we did a while back was to help fire inspectors find buildings that were illegally converted [such as an apartment building only being licensed for 10 units actually having 14 units instead]. Why do we want to know which buildings have been illegally converted? Because the business problem was, "We want to minimize the number of fires that exist in the city." We translated that into an analytics question that says, "If we identify buildings that have been illegally converted, you're taking gas and electricity to those additional units that you're not permitted to. So you have to do all sorts of splicing and connections and so forth, so maybe you've got your cousin or your uncle doing the work and they're not qualified to do it properly, which grows the chance of fire. So if we can minimize the number of illegally converted buildings we can minimize the number of fires." The thing that we think about the most is emergency responses. When you look at the recent history of New York City, there's 9/11, Hurricane Sandy—the de Blasio administration is very concerned about making sure we have the right resources and infrastructure in place for emergency response. So my office thinks about emergency response from the standpoint of sharing data. Every month we do a thing called a data drill. Essentially we work with city agencies to create an emergency scenario and then we present that scenario. "This happened on this day, here are the circumstances, and here are the things that the city leadership need to know to be able to respond." 
We think about these agencies, whether it's the NYPD or FDNY, but then there's also questions about data that needs to get passed so that the leadership of these agencies have a better level of insight so that they can move the pieces where they need in an emergency situation. What kind of pieces? If there are downed trees, for example. Let's say there are X-number of downed trees in the city. How do we know which ones to go and remove first? Should we go and remove the downed tree because someone called it in first so it's at the top of the queue but it's in the middle of nowhere so it's not impacting traffic? Or maybe it's the people who called in 20th on the list but the tree is right in front of a building where people with disabilities live? Well, you should probably go there first. Gotcha, right. Mashariki: So there's all these things around data and information that agencies need to know. During a drill, we practice what we call "data at the speed of thought." We want people who are responding to emergencies to have data at the speed of thought. Were those conversations happening 10 years ago, 20 years ago, where the NYPD was talking to the Department of Buildings in terms of sharing data? Not at the level that we're facilitating now, no. I always bristle when I hear people say, "Aren't there silos where this agency isn't talking to that agency?" You have people who have been in government for 20, 30 years who've worked at four different agencies. And because they've worked at four different agencies they know the people to call at those agencies when they need stuff. So there is a mechanism that exists, "Oh I know that guy at the agency, let me call him," but there wasn't a framework or an infrastructure across the board. There's always one to one sharing where this agency can share with that agency, but to create an environment where all agencies can share across everyone else, no that wasn't happening. Right. 
Now we're talking about data-sharing, but let me also add another piece. Let's say 10 years ago everyone was sharing data, but there was something no one was asking, and that's whether the data was even good. That's my job to ask. Just because Agency A asks Agency B for the data and they email it over in an Excel spreadsheet, does that mean the data is going to be useful? It just means that they responded. Awesome, thank you, check, you responded, but is the data useful? Is it even in a format that I can understand?

So who's responsible for ensuring that the data that's being collected is quality data to begin with?

Mashariki: The agencies. The agencies do a good job, but then, say, the Department of Buildings is a huge agency, so the question really should be who in that agency is working to ensure that a particular dataset is accurate. There isn't one "data guru" who oversees all the different sets of data. So in an emergency what happens is someone from City Hall calls the head of that agency. Someone from City Hall doesn't know the middle manager who oversees any particular dataset, but the head of that agency will, or can call on the chief information officer to find that person. At that point you've already exceeded the threshold of how many people you should have to contact to get something out. So the overall question for us is, in an emergency situation, what's the process for getting the right and best data? My sense is that if you gave an agency a couple of days to find the best, highest-quality data on any given topic, they could do that, but if you give them 30 seconds, are you going to get that?

So it sounds like we're limited to moving at the speed of people? It's not so much, data flying everywhere, ahh! It's more, OK, who knows this and how quickly can we find this person?

That's exactly right. We lead with people and not data. It's all about whom you engage, how you engage them, and what tools you give them with which to respond.
It's something I learned during my White House fellowship: government is well equipped with the right people, but we have to build out processes to give them the tools to respond successfully.

And that doesn't seem all that different from any other large organization. You often have the right people, but they don't have the tools, or even know where to get the tools, to be as efficient and productive as they're capable of being.

Absolutely.

So let's step back a bit. What kind of data does the city actually collect? I assume there are the usual things like crime statistics and car accidents, but what else?

That's a big question, so here's how I'll answer it. If you go to our Open Data Portal we have over 1,700 datasets, and the next highest city has like 800, maybe 900. And we've only just begun. As for specifics: location, location, location. Almost every single dataset that we have in the city revolves around location. Some of our bigger data agencies are the Department of Finance, because they manage information around taxes and land use. The Department of Buildings has building information. Every building in New York City has a Building Identification Number, and that BIN corresponds to a host of characteristics. Like you said, the NYPD collects crime statistics, so that's a big dataset. The Department of Sanitation collects data about snow plow routing. The Department of Homeless Services sends us nightly the aggregate number of people in homeless shelters. What we don't get, for all sorts of privacy reasons, is data from the Department of Health, at least that we release to the Open Data Portal. But buildings data is probably your most expansive dataset, primarily because buildings cut across all sorts of different agencies: the Department of Finance for tax data, obviously the Department of Buildings, Housing Preservation and Development, and the NYPD, which stores data.
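The Open Data Portal mentioned above serves its datasets over the Socrata Open Data API (SODA), where each dataset has a resource ID and supports simple query parameters. Here is a minimal sketch of building such a query URL; the dataset ID `abcd-1234` is a placeholder, not a real dataset (real IDs are listed on the portal alongside each dataset).

```python
# Sketch of querying a dataset from NYC's Open Data Portal via the
# Socrata Open Data API (SODA). The dataset ID below is a placeholder;
# real IDs appear on data.cityofnewyork.us for each dataset.
from urllib.parse import urlencode

BASE = "https://data.cityofnewyork.us/resource"

def soda_query_url(dataset_id, limit=10, **filters):
    """Build a SODA query URL with a row limit and simple equality filters."""
    params = {"$limit": limit, **filters}
    return f"{BASE}/{dataset_id}.json?{urlencode(params)}"

# e.g. first 5 rows of a (hypothetical) dataset, filtered by borough
url = soda_query_url("abcd-1234", limit=5, borough="BROOKLYN")
print(url)
```

Fetching that URL with any HTTP client would return JSON rows; the sketch stops at URL construction so it stands alone without a network call.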
What sort of data is there surrounding transportation? I know everyone hates the subway, but de Blasio has taken great pains to remind people that the city doesn't control the subway [a state agency does]. I'm just wondering what role data plays in getting folks from Point A to Point B as quickly and as safely as possible.

We have CitiBike locations. We get TLC [Taxi and Limousine Commission] locations; that's a big dataset. We've got Department of Transportation data in terms of crashes and fatalities. We also have a partnership with Waze, where we get some data from them that can be useful. DOT also has cameras where they can track traffic.

It's funny you say Waze, because I'm wondering if there are any other startups or private companies working with the city to improve the data or improve how it's accessed.

My office did a project called Business Atlas, where we took federal, state, and city data, things like liquor license data and a bunch of Department of Consumer Affairs data, and created a view of businesses such that you could get a better sense of market research if you were trying to open up a business in an area. You'd get a sense of median income, median age, what businesses existed in that area, and so on and so forth. We worked with a startup called PlaceMeter that uses sensors to measure foot traffic in an area. So we created this map where you could put in an address and immediately get a sense of what the business environment is like in that area. We also have strong partnerships with academia.

So the theme of this event is the future. Things are whatever they are today, but how can we make things better tomorrow? When you leave your role as chief analytics officer, what would you like your legacy to be?

That's an easy answer. Two things.
One is, I've already said that my job is to grow the competency of the Mayor's Office of Data Analytics [MODA] in the short term such that we can lessen its role across the city in the long term. And the concept behind that is that creating a culture will be more sustainable than saying, "As long as there's a MODA these things will get done." Because one thing we know about government is, there may not be a MODA, depending on who gets elected years down the line. So my job is not to ensure that MODA maintains this leadership role and is always this hub of analytics excellence. My job is to spread analytics excellence across the city within these agencies.

So it almost sounds like a successful MODA under your leadership is one that doesn't even really need to exist.

That's exactly right.

And the second is oversight of our open data strategy. When I came in we published a vision called Open Data for All. And Open Data for All says that, yes, the city has this data that we want to release to New Yorkers, but it's not useful if only a small, statistically minded few can use it. We need all New Yorkers, from all walks of life, to know that it exists, to know how to use it, and to actually use it to their advantage. We want all New Yorkers to be able to say, "Having access to this data can do things like help me start a business or help me figure out a smart way to engage with my community." So: a strong implementation of Open Data for All, and a growing understanding that everyone in New York, everyone, should have access to the resources of the city.


News Article | May 10, 2017
Site: www.eurekalert.org

Using gene sequencing tools, scientists from Johns Hopkins Medicine and the University of British Columbia have found a set of genetic mutations in samples from 24 women with benign endometriosis, a painful disorder marked by the growth of uterine tissue outside of the womb. The findings, described in the May 11 issue of the New England Journal of Medicine, may eventually help scientists develop molecular tests to distinguish between aggressive and clinically "indolent," or non-aggressive, types of endometriosis. "Our discovery of these mutations is a first step in developing a genetics-based system for classifying endometriosis so that clinicians can sort out which forms of the disorder may need more aggressive treatment and which may not," says Ie-Ming Shih, M.D., Ph.D., the Richard W. TeLinde Distinguished Professor in the Department of Gynecology & Obstetrics at the Johns Hopkins University School of Medicine and co-director of the Breast and Ovarian Cancer Program at the Johns Hopkins Kimmel Cancer Center. Endometriosis occurs when tissue lining the uterus forms and grows outside of the organ, most often into the abdomen. The disease occurs in up to 10 percent of women before menopause, and in as many as half of women with abdominal pain and infertility problems. In the 1920s, Johns Hopkins graduate and trained gynecologist John Sampson coined the term "endometriosis" and proposed the idea that endometriosis resulted when normal endometrial tissue spilled out through the fallopian tubes into the abdominal cavity during menstruation. The new study, Shih says, challenges that view. The presence of the unusual set of mutations they found in their tissue samples, he says, suggests that while the origins of endometriosis are rooted in normal endometrial cells, acquired mutations changed their fate. For reasons the researchers say are not yet clear, the mutations they identified have some links to genetic mutations found in some forms of cancer.
They emphasize that although abnormal tissue growth in endometriosis often spreads throughout the abdominal cavity, the tissue rarely becomes cancerous except in a few cases when ovaries are involved. For the study, Shih and his colleagues sequenced -- or read out the genetic alphabet of -- a part of the genome known as the exome, which contains all of the genes that can be expressed and make proteins. Specifically, they sequenced the exome of both normal tissue and endometriosis tissue removed during laparoscopic biopsies on 24 women, some with more than one abnormal endometrial growth. All had deep infiltrating endometriosis, the type that typically causes pain and infertility. Seven of the 24 women were from Japan; the rest were patients at Lenox Hill Hospital-Northwell Health in New York City. Samples from Japanese women were included because endometriosis before menopause occurs more often in Asian women (13-18 percent) than in Caucasian women (6-10 percent), Shih says. The scientists looked for mutations, or abnormal changes in the DNA, and filtered out normal variations in genes that commonly occur among humans. Of the 24 women, 19 had one or more mutations in their endometriosis tissue that were not present in their normal tissue. The type and number of mutations varied per endometriosis lesion and between each of the women. The most common mutations, found in five of the women, occurred in genes including ARID1A, PIK3CA, KRAS and PPP2R1A, all known for controlling cell growth, cell invasion and DNA damage repair. Mutations in these genes have been associated with one of the deadliest types of ovarian cancer, called clear cell carcinoma. Nickolas Papadopoulos, Ph.D., professor of oncology and pathology at the Johns Hopkins Kimmel Cancer Center, led the team that completed the first sequencing of the clear cell ovarian cancer genome in 2010.
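The core filtering logic described above — keeping only variants found in the lesion but absent from the patient's matched normal tissue, after discarding common population variants — amounts to simple set arithmetic. The sketch below illustrates that idea only; the variant labels and the common-variant list are invented for illustration, and real somatic variant calling involves far more (read depth, quality scores, allele frequencies).

```python
# Simplified sketch of the lesion-vs-normal variant filtering the study
# describes: lesion-specific variants, minus inherited and common ones.
# Variant strings and the common-variant list are invented examples.

def somatic_candidates(lesion_variants, normal_variants, common_population_variants):
    """Variants seen in the lesion but not in matched normal tissue,
    excluding variants common in the general population."""
    return (set(lesion_variants) - set(normal_variants)) - set(common_population_variants)

lesion = {"KRAS:G12V", "ARID1A:Q456*", "TP53:P72R"}
normal = {"TP53:P72R"}        # also present in normal tissue -> inherited, drop
common = {"PIK3CA:I391M"}     # common population variant -> drop if seen

print(sorted(somatic_candidates(lesion, normal, common)))
```

What survives the two subtractions is the candidate set of acquired, lesion-specific mutations, the kind the study reports in genes such as KRAS and ARID1A.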
"We were surprised to find cancer-linked genes in these benign endometriosis samples because these lesions do not typically become cancer," says Papadopoulos, whose Ludwig Center laboratories performed the sequencing. "We don't yet understand why these mutations occur in these tissues, but one possibility is that they could be giving the cells an advantage for growth and spread." In an additional group of endometriosis samples biopsied from 15 women at the University of British Columbia, the scientists looked specifically for mutations in the KRAS gene, whose expression signals proteins that spur cell growth and replication. They found KRAS mutations in five of the 15 patients. The scientists make clear that their sequencing studies may have missed mutations in some of the samples. Their data do not at this point reveal the aggressiveness of the lesions. However, Shih says, he and his team are working on additional studies to determine if the mutations correlate with patients' outcomes. He says a molecular test that sorts lesions as more or less aggressive has the potential to help doctors and patients decide how to treat and monitor the progression and control of the disease. "We may also be able to develop new treatments for endometriosis that use agents that block a gene-related pathway specific to a person's disease," says Shih. Women with endometriosis are typically prescribed anti-hormonal treatments that block estrogen to shrink lesions. When the disease occurs in the ovaries and forms a large cyst, which increases the risk of developing ovarian cancer, the lesion is usually surgically removed. Other scientists involved in the research include M.S. Anglesio, A. Ayhan, T.M. Nazeran, M. Noë, H.M. Horlings, A. Lum, S. Jones, J. Senz, T. Seckin, J. Ho, R.-C. Wu, V. Lac, H. Ogawa, B. Tessier-Cloutier, R. Alhassan, A. Wang, Y. Wang, J.D. Cohen, F. Wong, A. Hasanovic, N. Orr, M. Zhang, M. Popoli, W. McMahon, L.D. Wood, A. Mattox, C. Allaire, J. Segars, C.
Williams, C. Tomasetti, N. Boyd, K.W. Kinzler, C.B. Gilks, L. Diaz, T.-L. Wang, B. Vogelstein, P.J. Yong, and D.G. Huntsman. Funding for the studies was provided by the Richard W. TeLinde Gynecologic Pathology Research Program at The Johns Hopkins University, the Virginia and D.K. Ludwig Fund for Cancer Research, the Ephraim and Wilma Shaw Roseman Foundation, the Endometriosis Foundation of America, the National Institutes of Health and National Cancer Institute (grants P50-CA62924, CA06973, GM07184, GM07309, CA09243, CA57345, P30-CA006973, CA215483, and UO1-CA200469), the Gray Family Ovarian Clear Cell Carcinoma Research Resource, the Canadian Cancer Society (grant 701603), the Canadian Institutes of Health Research (IHD-137431 and MOP-142273), the Canadian Foundation for Innovation (John R. Evans Leaders Fund) and British Columbia Knowledge Development Fund, the Women's Health Research Institute (Nelly Auersperg Grant), and the Canadian Foundation for Women's Health (General Research Grant), the BC Women's Hospital and Health Centre Foundation, The BC Cancer Foundation and the VGH and UBC Hospital Foundation, David and Darrell Mindell, Peter and Shelley O'Sullivan, the Jemini Foundation, the Vancouver Coastal Health Research Institute, the Dr. Chew Wei Memorial Professorship in Gynecologic Oncology, the Canada Research Chairs program (Research Chair in Molecular and Genomic Pathology), and the Dutch Cancer Society translational research fellowship (KWF 2013-5869).


News Article | June 21, 2017
Site: www.eurekalert.org

Johns Hopkins researchers report that a molecular diagnostic test accurately distinguishes among the three most common causes of vaginitis, an inflammation of vaginal tissue they say accounts for millions of visits to medical clinics and offices in the U.S. each year. In an article published June 8 in Obstetrics & Gynecology, the investigators said the new assay -- based on the presence of the genetic footprints of bacteria, yeast and the sexually transmitted protozoan Trichomonas -- was as accurate as, and more objective than, traditional laboratory tests. "Overall, the disease prevalence identified by the traditional and the new molecular methods were similar," according to Charlotte Gaydos, Dr.P.H., M.P.H., professor of medicine and director of the Johns Hopkins Center for the Development of Point of Care Tests for Sexually Transmitted Diseases at the Johns Hopkins University School of Medicine. The comparative data, she added, earned U.S. Food and Drug Administration market authorization for use by diagnostic laboratories on October 28, 2016. The assay is licensed to BD Diagnostics, which will market it under the BD MAX™ Vaginal Panel. "Diagnostic tests traditionally used to distinguish among the causes of vaginitis are archaic, quite subjective and time-intensive, plus they require extensive training for those reading the results," Gaydos notes. Labs must grow cultures, conduct microscopic studies of cells for infection and even smell samples in what is commonly known as the "whiff" test to help differentiate among possible causes and select the proper treatment. "The new test is objective. Either the DNA of the causative agent is there or not; no gray area," says Gaydos.
The new molecular test first uses a real-time polymerase chain reaction (PCR) to amplify large amounts of specific DNA sequences from the three most common causes of vaginitis from patient samples, then reads either a positive or a negative result based on whether enough DNA is present to indicate infection. For the study, researchers used PCR to amplify and test for the DNA of Trichomonas vaginalis, six bacteria species and six species of yeast. Vaginal swabs were collected from 1,740 symptomatic women with typical symptoms of vaginitis, including itching and burning. The women ranged in age from 18 to 81 years old and were of varied educational status and ethnic backgrounds, including American Indian or Alaskan Native, Asian, African American, Caucasian, Hispanic or Latina, and Pacific Islander descent. Researchers collected four vaginal swabs from each patient: two for use in traditional lab testing, one for use with the new molecular test and one for use with a separate comparative genetic method used to validate the results for discrepancy analysis purposes. For the molecular test, researchers prepped the samples and added them to a cartridge equipped with all the reagents needed for PCR. They then inserted the cartridge into the BD MAX™ System, a real-time PCR platform that "reads" the genetic sequences and reports for each of the three microbes. Researchers then compared these results with results from the traditional diagnostic tools and the alternate genetic test. Bacterial vaginosis was found in 37.3 percent of patients according to the traditional methods and 36.1 percent by the molecular method; 14.7 percent of cases were deemed positive for yeast infection by traditional methods, and 16.2 percent by the molecular method; and 1.5 percent of patients tested positive for trichomonas using the traditional method, while 1.6 percent tested positive using the molecular method.
Gaydos says the new platform is faster than performing separate tests for each cause of vaginitis, is more sensitive, and, unlike current tests, can detect species of bacteria that cannot be easily grown in the lab. The molecular test also helps clinicians determine the best course of treatment by testing separately for two species of yeasts, Candida glabrata and C. krusei, which are resistant to some antifungal treatments. Gaydos notes that the new test is more expensive than traditional methods, costing around $75-$125, depending on a lab's existing equipment. In addition, the new test's use is limited by the fact that samples need to be sent to a PCR-capable lab, which can add hours or even days to the time before a diagnosis can be made. However, the economic advantages of using the new test may compensate for the upfront costs. The test's objective results can provide more accurate and detailed diagnoses, which would reduce the number of repeat patient visits for the same illness, saving clinics valuable time and resources. The new test's accuracy remains to be further evaluated because of the subjectivity of the traditional tests that were used for comparison. Gaydos says that, over time, traditional methods have been revealed to be less and less reliable, so the accuracy of the molecular test could be higher when compared with future testing options. In further studies, the research team hopes to develop a better version of the molecular tool that can provide faster results. Other researchers involved in this study include Jenell Coleman, of the Johns Hopkins University School of Medicine; Sajo Beqaj, of Pathology Inc.; Jane R. Schwebke, of the University of Alabama at Birmingham; Joel Lebed, of Planned Parenthood Southeastern Pennsylvania; Bonnie Smith, of Planned Parenthood Gulf Coast; Thomas E. Davis, of the Sidney and Lois Eskenazi Hospital; Kenneth H. 
Fife, of the Indiana University School of Medicine; Paul Nyirjesy, of the Drexel University College of Medicine; Timothy Spurrell, of Planned Parenthood of Southern New England; Dorothy Furgerson, of Planned Parenthood Mar Monte; and Sonia Paradis and Charles K. Cooper, of BD Diagnostics. Funding and materials described in this paper were provided by BD Diagnostics - TriPath. Dr. Gaydos is a paid speaker for BD Diagnostics - TriPath. This arrangement has been reviewed and approved by the Johns Hopkins University in accordance with its conflict-of-interest policies.


Johns Hopkins researchers who distributed a survey at a retreat and medical update for primary care physicians (PCPs) report that the vast majority of the 140 doctors who responded could not identify all 11 risk factors that experts say qualify patients for prediabetes screening. The survey, they say, is believed to be one of the first to formally test PCPs' knowledge of current professional guidelines for such screening. Of the providers who completed the survey, 6 percent correctly identified all of the risk factors that should -- under guidelines issued by the American Diabetes Association -- prompt prediabetes screening, and 17 percent correctly identified the fasting glucose and HbA1c (a measure of glucose attached to hemoglobin, the oxygen-carrying protein in red blood cells) laboratory values for diagnosing prediabetes. On average, the respondents selected eight out of the 11 correct risk factors for prediabetes screening. A report of the survey's findings, published July 20 in the Journal of General Internal Medicine, also found that nearly one-third of the PCPs were unfamiliar with the American Diabetes Association's (ADA) guidelines for prediabetes. "Although this survey was conducted among primary care providers from a large academically affiliated practice and may not represent providers from other types of practice settings, we think the findings are a wake-up call for all primary care providers to better recognize the risk factors for prediabetes, which is a major public health issue," says Eva Tseng, M.D., M.P.H., an assistant professor at the Johns Hopkins University School of Medicine and the paper's first author. An estimated 86 million adults in the United States have prediabetes; 70 percent of these individuals will eventually develop type 2 diabetes, according to the Centers for Disease Control and Prevention (CDC) and ADA expert panel.
Preventive measures such as changes in diet and physical activity and the prescription of metformin, an oral diabetes medication that helps control blood sugar levels, have proven effective in preventing the progression of prediabetes to type 2 diabetes, according to the ADA. An estimated 90 percent of individuals with prediabetes, however, are unaware of their condition, according to the CDC. To better understand why so many with prediabetes go undiagnosed, Tseng and the research team created a survey to test awareness of expert prediabetes guidelines and beliefs regarding prediabetes management. At an annual retreat and medical update held for Mid-Atlantic region primary care physicians in 2015, the researchers invited all 156 PCPs who attended the meeting to participate in the on-site survey. The survey asked PCPs to select prediabetes risk factors from a list of factors recommended by the ADA guidelines for the screening of prediabetes. The survey also asked the PCPs to identify guidelines issued by the ADA about prediabetes screening; numerical values corresponding to the upper and lower limits of the fasting glucose and HbA1c laboratory criteria for diagnosing prediabetes; values corresponding to the ADA's recommendations for minimum weight loss and minimum physical activity for patients with prediabetes; best initial management approach to a patient with prediabetes; prediabetes screening tests used; initial patient management approaches; and intervals used for repeat lab work and follow-up visits. To evaluate attitudes and beliefs regarding prediabetes, the survey asked providers to rate, on a five-point scale (strongly agree to strongly disagree), whether they believe it is important to identify prediabetes and whether they believe that lifestyle modification and metformin can reduce the risk of progression to diabetes. A similar scale was used to evaluate what providers perceive as patient barriers to lifestyle modification and the use of metformin. 
While only 11 percent of physicians selected referral to a behavioral weight loss program as the recommended initial management approach to prediabetes, 96 percent selected counseling on diet and physical activity. Landmark studies such as the Diabetes Prevention Program have shown that behavioral weight loss programs are effective at reducing the risk of developing diabetes and are the recommended initial approach by the ADA. The survey also revealed that metformin use for prediabetes was uncommon: 25 percent of providers never prescribed metformin, and 16 percent of providers did not believe in prescribing metformin for patients with prediabetes. In the 2017 guidelines, the ADA is now recommending that metformin be considered in patients with prediabetes who have failed to decrease their risk of diabetes through lifestyle change. "Primary care providers play a vital role in screening and identifying patients at risk for developing diabetes. This study highlights the importance of increasing provider knowledge and availability of resources to help patients reduce their risk of diabetes," says Nisa Maruthur, assistant professor of medicine at the Johns Hopkins University School of Medicine and the paper's senior author. Prediabetes is diagnosed by labs, specifically an elevated fasting glucose of 100-125 mg/dL or hemoglobin A1c of 5.7-6.4 percent. Diabetes is diagnosed based on labs above those thresholds: fasting glucose greater than or equal to 126 mg/dL, or hemoglobin A1c greater than or equal to 6.5 percent. Johns Hopkins' efforts to prevent diabetes include the implementation of a National Diabetes Prevention Program, a CDC-recognized lifestyle change program. Through this new program, East Baltimore pastors and community members have been trained as Lifestyle Coaches to help fellow community members manage weight, eat more healthfully and increase physical activity.
Called the Power to Stop Diabetes, the program is one of only three coordinated efforts of its kind in Maryland. Other authors on this paper include Raquel C. Greer, Paul O'Rourke, Hsin-Chieh Yeh, Maura M. McGuire and Jeanne M. Clark of The Johns Hopkins University. Tseng is supported by training grant T32HL007180-41. Greer is supported by National Institutes of Health grant K23DK094975. This study received analytic support from the Baltimore Diabetes Research Center (National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Disease, grant P30 DK079637).
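The numeric cutoffs the article quotes (fasting glucose 100-125 mg/dL or HbA1c 5.7-6.4 percent for prediabetes; at or above 126 mg/dL or 6.5 percent for diabetes) can be written out as a small classifier. This is only an illustration of those published thresholds, not clinical guidance; real diagnosis typically requires confirmatory repeat testing.

```python
# Illustration of the ADA lab thresholds quoted in the article.
# Units: fasting glucose in mg/dL, HbA1c in percent. Not clinical advice.

def classify(fasting_glucose=None, hba1c=None):
    """Return 'diabetes', 'prediabetes', or 'normal' from either lab value."""
    if (fasting_glucose is not None and fasting_glucose >= 126) or \
       (hba1c is not None and hba1c >= 6.5):
        return "diabetes"
    if (fasting_glucose is not None and 100 <= fasting_glucose <= 125) or \
       (hba1c is not None and 5.7 <= hba1c <= 6.4):
        return "prediabetes"
    return "normal"

print(classify(fasting_glucose=110))  # prediabetes
print(classify(hba1c=6.7))            # diabetes
print(classify(fasting_glucose=90))   # normal
```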


Grant
Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase I | Award Amount: 80.00K | Year: 2013

We propose a solution to the Navy's laser weapons warning problem. The proposed solution employs a combined approach of extending the wavelength spectrum and dynamic range of current state-of-the-art laser warning systems. The proposed wide optical bandwidth will be achieved by adapting previously developed methods. The system will exploit ongoing design work created for visible multi-threat optical systems and leverage advances in FPA [focal plane array] technologies. The optical bandwidth will require multiple diffraction gratings integrated with reflective wide-FOV [field-of-view] optics. FPA enhancements will address anticipated dynamic range requirements.
