News Article | May 10, 2017
Site: www.techradar.com

To meet the surging demand for expertise in the field of artificial intelligence (AI), US-based graphics processor maker NVIDIA on Tuesday announced it will train 100,000 developers this year via the NVIDIA Deep Learning Institute. The institute provides developers, data scientists and researchers with practical training on the use of the latest AI tools and technology.

"AI is the defining technology of our generation. To meet overwhelming demand from enterprises, government agencies and universities, we are dramatically expanding the breadth and depth of our offerings, so developers worldwide can learn how to leverage this transformative technology," said Greg Estes, Vice President of Developer Programmes at NVIDIA, in a statement.

Analyst firm International Data Corporation (IDC) estimates that 80 percent of all applications will have an AI component by 2020.

The NVIDIA institute has trained developers around the world at public events and onsite training at companies such as Adobe, Alibaba and SAP, and at government research institutions like the U.S. National Institutes of Health, the National Institute of Standards and Technology and the Barcelona Supercomputing Centre. It has also trained developers at institutes of higher learning such as Temasek Polytechnic Singapore and the Indian Institute of Technology Bombay.

NVIDIA is broadening the Deep Learning Institute's curriculum to include the applied use of deep learning for self-driving cars, healthcare, web services, robotics, video analytics and financial services. "There is a real demand for developers who not only understand artificial intelligence, but know how to apply it in commercial applications," added Christian Plagemann, Vice President of Content at Udacity.

NVIDIA is also working with the Microsoft Azure, IBM Power and IBM Cloud teams to port lab content to their cloud solutions.


News Article | May 9, 2017
Site: www.marketwired.com

10x Training Increase from Previous Year to Meet Surging Demand for AI Expertise

SAN JOSE, CA--(Marketwired - May 9, 2017) - GPU Technology Conference -- To meet surging demand for expertise in the field of AI, NVIDIA (NASDAQ: NVDA) today announced that it plans to train 100,000 developers this year -- a tenfold increase over 2016 -- through the NVIDIA Deep Learning Institute. Analyst firm IDC estimates that 80 percent of all applications will have an AI component by 2020.

The NVIDIA Deep Learning Institute provides developers, data scientists and researchers with practical training on the use of the latest AI tools and technology. The institute has trained developers around the world at sold-out public events and onsite training at companies such as Adobe, Alibaba and SAP; at government research institutions like the U.S. National Institutes of Health, the National Institute of Standards and Technology, and the Barcelona Supercomputing Center; and at institutes of higher learning such as Temasek Polytechnic Singapore and the Indian Institute of Technology Bombay.

In addition to instructor-led workshops, developers have on-demand access to training on the latest deep learning technology, using NVIDIA software and high-performance Amazon Web Services (AWS) EC2 P2 GPU instances in the cloud. More than 10,000 developers have already been trained by NVIDIA using AWS on the applied use of deep learning.

"AI is the defining technology of our generation," said Greg Estes, vice president of Developer Programs at NVIDIA. "To meet overwhelming demand from enterprises, government agencies and universities, we are dramatically expanding the breadth and depth of our offerings, so developers worldwide can learn how to leverage this transformative technology."

NVIDIA is broadening the Deep Learning Institute's curriculum to include the applied use of deep learning for self-driving cars, healthcare, web services, robotics, video analytics and financial services. Coursework is being delivered online using NVIDIA GPUs in the cloud through Amazon Web Services and Google's Qwiklabs, as well as through instructor-led seminars, workshops and classes to reach developers across Asia, Europe and the Americas.

NVIDIA currently partners with Udacity to offer Deep Learning Institute content for developing self-driving cars. "There is a real demand for developers who not only understand artificial intelligence, but know how to apply it in commercial applications," said Christian Plagemann, vice president of Content at Udacity. "NVIDIA is a leader in the application of deep learning technologies and we're excited to work closely with their experts to train the next generation of artificial intelligence practitioners."

Deep Learning Institute hands-on labs are taught by certified expert instructors from NVIDIA, partner companies and universities. Each lab covers a fundamental tenet of deep learning, such as using AI for object detection or image classification; applying AI to determine the best approach to cancer treatment; or, in the most advanced courses, using technologies such as NVIDIA DRIVE™ PX 2 and DriveWorks to develop autonomous vehicles.

To meet its 2017 goal, NVIDIA is expanding the Deep Learning Institute on several fronts. It is working with the Microsoft Azure, IBM Power and IBM Cloud teams to port lab content to their cloud solutions. And at this week's GPU Technology Conference in Silicon Valley, the Deep Learning Institute will offer 14 different labs and train more than 2,000 developers on the applied use of AI.
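The hands-on labs described above center on tasks such as image classification. Purely as an illustration of the kind of exercise such a lab walks through, and not actual DLI course material, a minimal Keras digit classifier might look like this (assumes TensorFlow is installed):

```python
# Minimal image-classification sketch, illustrative of a typical intro
# deep learning lab exercise (not NVIDIA DLI material).
import tensorflow as tf

# MNIST: 28x28 grayscale digits, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=2)
print(model.evaluate(x_test, y_test, verbose=0))      # [loss, accuracy]
```

In a DLI lab the same workflow runs on cloud GPU instances; this sketch runs anywhere TensorFlow is available.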
View the schedule and register for a session at www.nvidia.com/dli. Instructors can access the DLI Teaching Kits, which also cover accelerated computing and robotics, at www.developer.nvidia.com/teaching-kits. More information on course offerings is available by contacting NVDLI@nvidia.com.

About NVIDIA
NVIDIA's (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI -- the next era of computing -- with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.


News Article | November 7, 2016
Site: www.chromatographytechniques.com

July 16, 2004. That's the day everything changed for Dr. Dharmendra Modha.

Most people don't remember the exact day they realized what they wanted to do with the rest of their lives. Maybe it was a crisp fall day halfway through high school, or college or even middle school. But that's not the case for Modha. His "day" was July 16, 2004—and he remembers it vividly.

By 2004, Modha was already well on his way to being considered a computing pioneer. He joined IBM after receiving his bachelor's degree in computer science from the Indian Institute of Technology and his Ph.D. in electrical and computer engineering from the University of California, San Diego. Once at IBM, Modha had a series of extremely successful projects. He invented code that went into every IBM disk drive; he invented algorithms to visualize data in tens of thousands of dimensions, which eventually became part of Watson; and he invented caching algorithms for large storage systems, which have generated billions of dollars for IBM over the years.

"But then, I became acutely aware of the finiteness of life," Modha recalled to R&D Magazine. "I wanted to do something that could have a paradigm-shifting effect on the field of computing. Something that would make the world better in a deep sense. But it had to have maybe just a sliver of chance of working. A very high-risk, high-leverage project."

After meditating for a year on what to do next, Modha came up with just what he wanted—the crazy, almost impossible idea to build a brain-inspired computer. But can someone really build a computer inspired by the brain? After all, the human brain boasts about 100 trillion (10¹⁴) synapses and 100 billion (10¹¹) neurons firing anywhere from five to 50 times per second.

The point was never to compete with existing computers, Modha explains. "It was always, how can we complement today's computers?" Cognitive computing, or brain-inspired computing, aims to emulate the human brain's abilities for perception, action and cognition. Traditional computers are symbolic, fast and sequential with a focus on language and analytical thinking—much like the left brain. The neurosynaptic chips Modha and his team design are much more like the right brain—slow, synthetic, capable of addressing the five senses as well as pattern recognition.

Today's chip—called TrueNorth—features 1 million neurons and 256 million synapses, consumes 17 milliwatts of power and is about 4 square centimeters in size. Based on an innovative algorithm published in September, TrueNorth can efficiently implement inference with deep networks to classify image data at 1,200 to 2,600 frames per second while consuming a mere 25 to 275 milliwatts. This means the chip can detect patterns in real time from 50 to 100 cameras at once—each with 32x32 color pixels and streaming information at the standard TV rate of 24 fps—while running on a smartphone battery for days without recharging.

"The new milestone provides a palpable proof-of-concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of cognitive computing spanning mobile, cloud and supercomputers," Modha explained.

The novel algorithm builds off the scaled-up platform IBM delivered to Lawrence Livermore National Laboratory in March 2016. Called NS16e, the configuration consists of a 16-chip array of TrueNorth processors designed to run large-scale networks that do not fit on a single chip. The NS16e system interconnects TrueNorth chips via a built-in chip-to-chip message-passing interface that does not require additional circuitry or firmware. Both the algorithm and the scaled-up version of TrueNorth are the culmination of 12 ½ years of research and development, dating all the way back to that July day in 2004.

The beginning and the middle

Once the project received a green light and funding from IBM in 2006, Modha quickly identified three elements that were crucial to the success of his computer: neuroscience, supercomputing and architecture.

After all, to build a brain-inspired computer, one must first understand how the brain works. Modha and his team consumed every bit of published information available about the brain, including 30 years of research regarding neurons. They ended up mapping out the largest long-distance wiring diagram of the brain—383 regions of the macaque monkey brain, illustrating 6,602 connections. Besides being "the most beautiful illustration" Modha has ever seen, the map successfully provided the researchers with a platform to study the brain as a network.

The team turned to supercomputing simulations next. Luckily, they didn't have to go far, as IBM owns some of the most important milestones in supercomputing history, including the development of the Blue Gene/L, Blue Gene/P and Blue Gene/Q. Modha carried out a series of increasingly larger and more complex simulations on the largest Blue Gene supercomputers IBM has to offer. The largest simulation was done on the Blue Gene/Q—it was able to simulate a brain-like graph at a scale of 100 trillion (10¹⁴) synapses. While that's the same scale as the number of synapses in the human brain, there was a discrepancy—the simulation ran 1,500x slower than real time, even when using much simpler connectivity and computation than the brain.

"We figured a hypothetical computer designed to run the brain's 100 trillion synapses in real-time would require 12 gigawatts of power," Modha said, explaining what he learned from the supercomputer simulations. "That's enough to power NYC and LA. In contrast, the human brain consumes just 20 watts. So, there's a billion-fold disparity between modern computers and what the brain can do. And that's really what led us to the third element."

The third element was perhaps the riskiest, and thereby the most rewarding. Modha wanted to turn 70+ years of computing on its head by designing a brand new architecture, completely different from the traditional von Neumann architecture. Described in 1945 and prevalent in most of today's computers, von Neumann architecture refers to an electronic digital computer that shares a bus between program memory and data memory. This shared bus limits the throughput (data transfer rate) between the CPU and memory relative to the amount of memory, and power must increase as the communication rate (clock frequency) increases.

Of course, Modha turned to the brain for inspiration on how to design a new architecture. His research turned up a neuroscience hypothesis that the brain is composed of canonical cortical microcircuits—tiny circuits that compose the fabric of the cerebral cortex. Applying this to computing, Modha sought to design an architecture based on tiny modules that could be tiled to create an overall system—which is precisely what TrueNorth is.

"To prove the hypothesis, in 2011, we demonstrated a tiny little module, a neurosynaptic core with 256 neurons, the scale of a worm brain," Modha explained. "This tiny little module formed the foundation. Then we shrank this core in area by an order of magnitude, in power by two orders of magnitude, then tiled 4,096 of these tiny cores to create the chip that is now called TrueNorth."

TrueNorth's brain-inspired architecture consists of a network of neurosynaptic cores that are distributed and operate in parallel. Unlike von Neumann architecture, TrueNorth integrates computation, memory and communication, which results in a cool operating environment (allowing the chips to be stacked) and low-power operation. Individual cores can fail and yet, like the brain, the architecture can still function. Cores on the same chip communicate with one another via an on-chip event-driven network; chips communicate via an inter-chip interface, leading to seamless scalability. This version of TrueNorth—literally a supercomputer the size of a postage stamp running on the power of a hearing-aid battery—debuted in 2014.
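The throughput and power claims above are easy to sanity-check with back-of-the-envelope arithmetic. A quick check in Python (our own rough calculation, not IBM's methodology):

```python
# Sanity check of the TrueNorth figures quoted in the article.

cameras = 100                  # upper end of the quoted camera count
fps_per_camera = 24            # standard TV frame rate
required_fps = cameras * fps_per_camera
print(required_fps)            # 2400 frames/s, within the 1200-2600 fps range

brain_watts = 20               # quoted power draw of the human brain
simulated_watts = 12e9         # 12 gigawatts for a real-time simulation
print(simulated_watts / brain_watts)   # 6e8: roughly a billion-fold gap
```

One hundred cameras at 24 fps need 2,400 classifications per second, inside the chip's quoted range, and 12 GW against 20 W is a factor of 600 million, consistent with the article's "billion-fold disparity."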


News Article | November 16, 2016
Site: news.mit.edu

The MIT Energy Initiative is sharing reports from the United Nations Climate Change Conference in Marrakech, Morocco, where MIT community members are observing the climate negotiations and speaking at auxiliary events.

At a side event of COP22, the 2016 United Nations Climate Change Conference in Marrakech, Morocco, researchers and nongovernmental leaders from around the world discussed policy research that can support implementation of the 2015 Paris Agreement to limit global temperature rise. Among the nine panelists was a sole graduate student: MIT's Arun Singh.

On the panel, "New Directions in Climate Change Research and Implications for Policy," Singh and fellow representatives of the COP22 Research and Independent Nongovernmental Organizations (RINGO) constituency gave brief overviews of their research in various areas, from agro-industrial development policies to green social work. Singh shared his research on clean development pathways for India, which applies an energy-economic model he is developing with advisors Valerie Karplus and Niven Winchester. The model simulates policy and technology choices India could make to fulfill its intended nationally determined contributions under the Paris Agreement — and how each of those choices could impact emissions, energy use, and the country's economy. "For example," says Singh, "how would India's ambitious solar targets compare with, say, a price on carbon to achieve similar levels of emissions reductions? Who wins and loses under alternate policy choices? Those are the types of questions we're looking to answer."

In the global effort to address climate change, India's role as a major player is indisputable. The country is the third largest emitter of greenhouse gases, behind China and the U.S., yet nearly 19 percent of India's population, most of which lives in rural areas, still lacks reliable access to electricity — and the population is growing rapidly. "India is in a situation where it has to balance tradeoffs between increasing energy output and ensuring that additional generation does not add significantly to the country's carbon emissions," Singh explains. To weigh these tradeoffs, policymakers and regulators would benefit from access to quantitative analysis of policy impacts, which Singh and his team hope to provide.

"Arun's work stands out because it combines modeling of policies at the country level with an assessment of financial and operational barriers to clean energy investment at the micro level," says Karplus, an assistant professor of global economics and management at the MIT Sloan School of Management, who is also a faculty affiliate of the MIT Energy Initiative and the Joint Program on the Science and Policy of Global Change. "We hope to work with policymakers in India to identify strategies that are cost effective and politically workable. To do that, we need to analyze proposals in terms of both the cost and the distribution of impacts."

For Singh, researching solutions to climate and energy issues is personal: Having grown up in Ayodhya, India, he experienced the challenges firsthand. "Frequent power cuts were a norm while I was growing up. In peak summer months, we would not get power for eight to 10 hours a day. And this was still in a town," he says. Following his undergraduate studies at the Indian Institute of Technology Roorkee, Singh became more interested in understanding energy and environmental policymaking while working at a petroleum refinery.

Then, as a research associate at the Abdul Latif Jameel Poverty Action Lab (J-PAL) South Asia office in Mumbai, he worked on environmental regulation reform projects in India, including a pilot emissions trading scheme for industrial particulate matter emissions, conducted with India's Ministry of Environment and Forests. At J-PAL, he also carried out an impact evaluation of public disclosure of industrial air pollution ratings, for which he analyzed emissions data from more than 5,000 firms and worked closely with his team and with regulators to secure approval for a new disclosure program. As he made field visits to some of the most polluted industrial clusters in India, he learned how nuanced the issues can be. "In India it is common to hold strong positions favoring or opposing development. But that's not helpful, as it's not an either-or question," he says. "Smart policies can be designed that encourage growth while limiting the impact on natural environment and climate. And India already has several forward-looking policies in place."

His work motivated him to come to MIT, where he arrived with a desire to focus on climate and energy policy research for developing countries, though he was not yet sure exactly where his studies would take him. He started in 2015 as a graduate student in the Technology and Policy Program — which is now part of the Institute for Data, Systems, and Society — working with Karplus to study policies and regulation in the electricity sector in India, with funding from the MIT Energy Initiative. Then an opportunity arose to help Karplus and Winchester develop the energy-economic model he now works on as a fellow of the Tata Center for Technology and Design and research assistant in the MIT Joint Program. When Karplus learned of a call for researchers to present at COP22 with the RINGO constituency, she alerted Singh, who applied and was selected to present.

In Marrakech, Singh shared preliminary findings from his model, which offer initial insights into how carbon pricing and renewable energy support policies compare in terms of their impact on carbon dioxide emissions, the energy system, and the economy. To finalize his research, he plans to expand the model's specifications to reflect policy priorities and physical constraints, especially on details in technology choices. He is also investigating the political and economic factors that drive these choices, and viable design options for increasing the political feasibility of cost-effective policies to reduce carbon dioxide emissions.

While at COP22, Singh also had the opportunity to interview developers, investors, and aid organizations involved in the expansion of renewable energy in Morocco, supporting Karplus as she contributes to an upcoming book on the commercialization of renewable energy in several African countries. "I am so pleased and proud that Arun had the opportunity to represent our group in Marrakech. By interacting with diverse stakeholders at the COP, Arun has been able to share his research on India with the world, and compare and contrast its insights with experiences in other countries," Karplus says.

At MIT, Singh co-leads a student group, Energy for Human Development (e4Dev), with fellow graduate student Turner Cotterman, bringing together members of the MIT community to advance understanding of issues facing the developing world through guest lectures from notable experts, outreach programs, and educational opportunities. He plans to share his COP22 experience with the group.

Singh's first experience with UN climate negotiations has been "overwhelming," he says, from the efforts that go into organizing the COP to how the complex negotiation process functions. "It's very encouraging to see enthusiastic participation of all countries and the near unanimous recognition of climate change as a problem requiring strong collective efforts," he says. "There's no room for skepticism or delaying action." Singh looks forward to continuing to play a role in informing energy and climate solutions for India with his research, as part of the MIT community dedicated to making a better world.


Jensen E.C.,University of California at Berkeley | Stockton A.M.,University of California at Berkeley | Stockton A.M.,Jet Propulsion Laboratory | Chiesl T.N.,University of California at Berkeley | And 4 more authors.
Lab on a Chip - Miniaturisation for Chemistry and Biology | Year: 2013

A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (μL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves, followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining-valve concepts is developed for continuous, rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis, as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. © 2013 The Royal Society of Chemistry.
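The abstract mentions a language for combining unit operations into a microfluidic program but does not reproduce it. Purely as an illustration of the idea, here is a hypothetical sketch in Python; the operation names, valve labels and serial-dilution routine are our own inventions, not the paper's actual language:

```python
# Illustrative sketch of a microfluidic "program" composed from unit
# operations, loosely modeled on the automaton described above.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    args: tuple

def load(reagent, valve):
    return Op("load", (reagent, valve))

def mix(v1, v2, dest):
    return Op("mix", (v1, v2, dest))

def transfer(src, dest):
    return Op("transfer", (src, dest))

def rinse(valve):
    return Op("rinse", (valve,))

def serial_dilution(sample, diluent, steps):
    """Compose unit operations into a serial-dilution program."""
    program = [load(sample, "A1"), load(diluent, "A2")]
    for i in range(steps):
        dest = f"B{i + 1}"
        program += [mix("A1", "A2", dest),   # 1:1 dilution into dest
                    transfer(dest, "A1"),    # dilution becomes the new input
                    rinse(dest)]
    return program

for op in serial_dilution("biomarker", "buffer", 3):
    print(op.name, op.args)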


Kumar S.K.,Loughborough University | Tiwari M.K.,Indian Institute of Technology
Computers and Industrial Engineering | Year: 2013

This paper considers a location, production-distribution and inventory system design model for a supply chain, used to determine facility locations and their capacities. The risk-pooling effect, for both safety stock and running inventory (RI), is incorporated in the system to minimize the supply chain cost while determining facility location and capacity. To study the benefit of risk pooling for safety stock and RI, two cases are considered: first, when retailers act independently, and second, when DCs and retailers work jointly. The model is formulated as a mixed-integer nonlinear problem and divided into two stages. The first stage determines the optimal locations for plants and the flow relations between plants and DCs and between DCs and retailers; at this stage the problem is linearized using a piecewise linear function. The second stage determines the required capacity of the opened plants and DCs. The first-stage problem is further divided into two sub-problems using Lagrangean relaxation: the first sub-problem determines the flow relation between plants and DCs, whereas the second determines the flow between DCs and retailers. The solution of the sub-problems provides a lower bound for the main problem. Computational results reveal that the main problem solution is within 8.25% of the lower bound, and that a significant cost reduction can be achieved for safety stock and RI costs when DCs and retailers act jointly. © 2012 Elsevier Ltd. All rights reserved.
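The paper's two-stage decomposition is specific to its model, but the general mechanics of a Lagrangean-relaxation lower bound can be shown on a much simpler problem. Below is a minimal sketch for an uncapacitated facility-location instance with made-up data (not the paper's formulation):

```python
# Subgradient-based Lagrangean relaxation for a toy uncapacitated
# facility-location problem: relax the "each client assigned once"
# constraints and maximize the resulting dual lower bound.

fixed = [4.0, 3.0, 5.0]                      # facility opening costs f_j
cost = [[2.0, 3.0, 4.0],                     # assignment costs c_ij
        [4.0, 1.0, 2.0],
        [3.0, 2.0, 5.0],
        [5.0, 4.0, 1.0]]
m, n = len(cost), len(fixed)

lam = [0.0] * m                              # one multiplier per client
best_lb = float("-inf")

for it in range(100):
    # Relaxed problem decomposes by facility: open j only if its fixed
    # cost plus all profitable (negative reduced cost) assignments < 0.
    open_fac, lb = [], sum(lam)
    for j in range(n):
        red = fixed[j] + sum(min(0.0, cost[i][j] - lam[i]) for i in range(m))
        if red < 0:
            open_fac.append(j)
            lb += red
    best_lb = max(best_lb, lb)

    # Subgradient of the dualized constraints: 1 - (assignments of client i).
    g = [1 - sum(1 for j in open_fac if cost[i][j] - lam[i] < 0)
         for i in range(m)]
    if all(v == 0 for v in g):
        break                                # relaxed solution is feasible
    step = 1.0 / (it + 1)                    # diminishing step size
    lam = [lam[i] + step * g[i] for i in range(m)]

print("Lagrangean lower bound:", round(best_lb, 3))
```

Each iteration solves the relaxed problem exactly, records the best lower bound, and moves the multipliers along the subgradient; this is the same bounding mechanism the paper applies to its two sub-problems.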


Puri R.,Indian Institute of Technology | Jain N.,Indian Institute of Technology | Ganesh S.,Indian Institute of Technology
FEBS Journal | Year: 2011

Recent studies indicate that glycogen, besides being a principal storage product, confers protection against cellular stress through an unknown physiological pathway. Abnormal glycogen inclusions have also been considered to underlie pathology in a few neurodegenerative disorders that are caused by proteolytic dysfunctions, although a link between proteolytic pathways and glycogen accumulation is yet to be established. In the present study, we investigated the subcellular localization of glycogen particles and report that their distribution is altered under physiological stress. Using a cellular model, we show that glycogen particles are recruited to centrosomal aggresomal structures upon proteasomal or lysosomal blockade, and that this recruitment is dependent on microtubule function. We also show that an increase in the glucose concentration leads to decreased cellular proteasomal activity and the formation of glycogen-positive aggresomal structures. Proteasomal blockade also leads to the formation of diastase-resistant polyglucosan bodies. The glycogen particles in aggresomes might provide energy to the proteolytic process and/or function as a scaffold. Taken together, the findings of the present study suggest a functional link between proteasomal function and polyglucosan bodies, and also suggest that these two physiological processes could be linked in neurodegenerative disorders. © 2011 FEBS.


Joung C.B.,U.S. National Institute of Standards and Technology | Carrell J.,U.S. National Institute of Standards and Technology | Carrell J.,Texas Tech University | Sarkar P.,U.S. National Institute of Standards and Technology | And 2 more authors.
Ecological Indicators | Year: 2013

The manufacturing industry is seeking an open, inclusive, and neutral set of indicators to measure the sustainability of manufactured products and manufacturing processes. In these efforts, manufacturers find a large number of stand-alone indicator sets, which has caused complications in understanding interrelated terminology and selecting specific indicators for different aspects of sustainability. This paper reviews a set of publicly available indicator sets and provides a categorization of indicators that are quantifiable and clearly related to manufacturing. The indicator categorization work is also intended to establish an integrated sustainability indicator repository as a means of providing common access for manufacturers, as well as academicians, to learn about current indicators and measures of sustainability. This paper presents a categorization of sustainability indicators, based on mutual similarity, in five dimensions of sustainability: environmental stewardship, economic growth, social well-being, technological advancement, and performance management. Finally, the paper explains how to use this indicator set to assess a company's manufacturing operations. © 2012 Elsevier Ltd. All rights reserved.
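As a rough illustration of what such a common-access repository could look like in code, here is a minimal sketch keyed by the five dimensions named above; the indicators listed are invented examples, not the paper's actual categorization:

```python
# Toy indicator repository organized by the five sustainability dimensions.
repository = {
    "environmental stewardship": ["energy consumed per unit produced",
                                  "water withdrawal", "GHG emissions"],
    "economic growth": ["manufacturing cost per unit", "R&D spending"],
    "social well-being": ["lost-time injury rate", "training hours"],
    "technological advancement": ["process yield", "equipment utilization"],
    "performance management": ["audit frequency", "certification coverage"],
}

def indicators_for(dimension):
    """Look up the quantifiable indicators filed under one dimension."""
    return repository.get(dimension.lower(), [])

print(indicators_for("Environmental Stewardship"))
```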


Li Y.,Anhui University of Science and Technology | Wang J.,City University of Hong Kong | Qiao C.,State University of New York at Buffalo | Gumaste A.,Indian Institute of Technology | Xu Y.,Indian Institute of Technology
Journal of Lightwave Technology | Year: 2010

Integrated fiber-wireless (FiWi) access networks provide a powerful platform to improve the throughput of peer-to-peer communication by enabling traffic to be sent from the source wireless client to an ingress optical network unit (ONU), then to the egress ONU close to the destination wireless client, and finally delivered to the destination wireless client. Such a wireless-optical-wireless communication mode introduced by FiWi access networks can reduce interference in the wireless subnetwork, thus improving network throughput. With support for direct inter-ONU communication in the optical subnetwork, the throughput of peer-to-peer communication in a FiWi access network can be further improved. In this paper, we propose a novel hybrid wavelength-division-multiplexed/time-division-multiplexed passive optical network (WDM/TDM PON) architecture supporting direct inter-ONU communication, a corresponding decentralized dynamic bandwidth allocation (DBA) protocol for inter-ONU communication, and an algorithm to dynamically select the egress ONU. The complexity of the proposed architecture is analyzed and compared with other alternatives, and the efficiency of the proposed system is validated through simulations. © 2010 IEEE.
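As a rough illustration of the egress-ONU selection step, here is a toy heuristic that favors ONUs with short wireless paths to the destination and light load. The cost model, weights and names are our inventions; the paper's actual algorithm is not reproduced in the abstract:

```python
# Toy dynamic egress-ONU selection for a FiWi network: among candidate
# ONUs reachable over the optical subnetwork, pick the one minimizing an
# invented cost of wireless hops plus a load penalty.

def select_egress_onu(onus, dest, hops, load, alpha=0.5):
    """onus: candidate ONU ids; hops[(onu, dest)]: wireless hop count to
    the destination client; load[onu]: queue occupancy in [0, 1]."""
    def cost(onu):
        # Fewer wireless hops means less interference; penalize busy ONUs.
        return hops[(onu, dest)] + alpha * load[onu]
    return min(onus, key=cost)

hops = {("onu1", "client7"): 3, ("onu2", "client7"): 1, ("onu3", "client7"): 2}
load = {"onu1": 0.2, "onu2": 0.9, "onu3": 0.1}
print(select_egress_onu(["onu1", "onu2", "onu3"], "client7", hops, load))
```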
