
Accounting professors have confirmed what we always suspected: companies that are scrambling to meet or just beat Wall Street analysts’ profit projections have worker injury rates that are 12% higher than those of other employers. The research indicates that the frantic efforts of these “benchmark-beating” employers to meet profit projections – increasing employees’ workloads or pressuring them to work faster while cutting safety spending on activities like maintaining equipment or training employees – are the likely source of the increased injuries and illnesses.

Professors Judson Caskey of the UCLA Anderson School of Management and Naim Bugra Ozel of the UT Dallas Jindal School of Management used workplace-level data from the federal Occupational Safety and Health Administration to compare injury rates for two general categories of firms. Companies that barely met or slightly exceeded analysts’ financial benchmarks had higher work-related injury and illness rates than firms that comfortably beat or completely missed analysts’ profit forecasts. Their paper, “Earnings expectations and employee safety,” was published earlier this year in the Journal of Accounting and Economics.

Caskey and Ozel’s findings are the flip side of the often-cited but less frequently followed “business case for safety,” which has shown for two decades now that employer spending on health and safety programs generates two to three times the return on the investment. Effective safety programs reduce the number of injuries, illnesses and fatalities; they reduce all the associated costs, including medical expenses, workers’ compensation and regulatory fines; they increase productivity, and thereby profits and stock price; they result in higher employee morale, higher retention rates, lower absenteeism and reduced turnover and training costs; and they help safer companies attract new talent and improve their corporate reputation.
Among those banging the drum for the business case are federal OSHA, the European Agency for Safety and Health at Work (EU-OSHA), the U.S. Chemical Safety Board (CSB), the Center for Safety & Health Sustainability (CSHS), and even Queen Elizabeth’s Royal Society for the Prevention of Accidents. We now have Caskey and Ozel’s analysis to show that employers that ignore the safety business case have, not surprisingly, higher rates of injury and illness among their workers.

The accounting and business school professors looked at the safety records and financial performance of 35,350 “establishment-years” from 868 unique companies between 2002 and 2011. They classified the companies into several categories based on financial analysts’ profit projections: “meet/just beat,” “comfortably beat,” “small miss” and “large miss.” The researchers used each firm’s total injury case rate as the variable for safety performance and the companies’ earnings as the indicator of financial performance.

On average, the “meet/just beat,” or “suspect,” firms had injury rates that were 5% to 15% higher (with an overall average of 12%) than the other firms. Another way to express the results: one in 24 employees is injured in the “meet/just beat” firms, compared to one in 27 employees in the other firms. The results were “both statistically and economically significant after controlling for various establishment-level and firm-level characteristics.” Ozel also noted that CEOs’ “career outcomes” can be affected by missed benchmarks, and that their pay is increasingly in the form of stock, whose value declines with missed profit projections.
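The two ways the result is stated – a roughly 12% higher rate, and one injury in 24 employees versus one in 27 – are consistent with each other; the short sketch below checks the arithmetic using only the figures quoted in the article:

```python
# Check that the one-in-24 vs. one-in-27 figures match the ~12% gap
# reported in the study (illustrative arithmetic only).
suspect_rate = 1 / 24  # injury rate in "meet/just beat" firms
other_rate = 1 / 27    # injury rate in all other firms

# Relative increase of the suspect firms over the others.
relative_increase = (suspect_rate - other_rate) / other_rate
print(f"{relative_increase:.1%}")  # 12.5%, within the reported 5%-15% range
```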
The authors describe the “tools” that managers use in their scramble to meet analysts’ benchmarks – in the shorthand of standard employer excuses, “The Market made me do it…” Interestingly, their research also found three settings in which this effect is much less evident.

Dr. Ozel noted that most people assume company managers “are rational players, so while performance is important, they will not sacrifice people’s health for this purpose.” But he said their study shows this is not just a matter of random anecdotes about a few companies “willing to sacrifice employees’ safety.” Rather, “we looked at a large sample, and in this sample, we find quite a significant result – a 10 to 15 percent increase in employee injuries.”

I see several important take-away messages from this research. These may not be new or surprising conclusions to you – but now you have additional confirmation from business school bean-counters to back you up.

Garrett Brown is a certified industrial hygienist who worked for Cal/OSHA for 20 years as a field Compliance Safety and Health Officer and then served as Special Assistant to the Chief of the Division before retiring in 2014. He has also been the volunteer Coordinator of the Maquiladora Health & Safety Support Network since 1993 and has coordinated projects in Bangladesh, Central America, China, the Dominican Republic, Indonesia, Mexico and Vietnam.


News Article | May 26, 2017
Site: www.eurekalert.org

In a new study, scientists at The University of Texas at Dallas have found that some types of cancers have more of a sweet tooth than others. "It has been suspected that many cancer cells are heavily dependent on sugar as their energy supply, but it turns out that one specific type -- squamous cell carcinoma -- is remarkably more dependent," said Dr. Jung-whan "Jay" Kim, assistant professor of biological sciences and senior author of the study published May 26 in the online journal Nature Communications.

Kim and his collaborators initially set out to investigate differences in metabolism between two major subtypes of non-small cell lung cancer -- adenocarcinoma (ADC) and squamous cell carcinoma (SqCC). About one quarter of all lung cancers are SqCC, which has been difficult to treat with targeted therapies, Kim said.

The research team, which included a Dallas high school student who interned in Kim's lab, first tapped into a large government database called The Cancer Genome Atlas, which maps information about 33 types of cancer gathered from more than 11,000 patients. Based on that data, they found that a protein responsible for transporting glucose -- a kind of sugar -- into cells was present in significantly higher levels in lung SqCC than in lung ADC. The protein, called glucose transporter 1, or GLUT1, takes up glucose into cells, where the sugar provides a fundamental energy source and fuels cell metabolism. GLUT1 is also necessary for normal cell function, such as building cell membranes.

"Prior to this study, it was thought that the metabolic signatures of these two types of lung cancers would be similar, but we realized that they are very different," Kim said. "These findings lend credence to the idea that cancer is not just one disease, but many diseases that have very different characteristics."
With elevated GLUT1 implicated in SqCC's appetite for sugar, the researchers looked for additional evidence by examining human lung tissue and isolated lung cancer cells, as well as animal models of the disease. "We looked at this from several different experimental angles, and consistently, GLUT1 was highly active in the squamous subtype of cancer. Adenocarcinoma is much less dependent on sugar," Kim said. "Our study is the first to show systematically that the metabolism of these two subtypes are indeed distinct and unique."

The researchers also investigated the effect of a GLUT1 inhibitor in isolated lung cancer cells and in mice with both types of non-small cell lung cancer. "When we gave GLUT1 inhibitors to mice with lung cancer, the squamous cancer diminished, but not the adenocarcinoma," Kim said. "There was not a complete eradication, but tumor growth slowed."

"Taken in total, our findings indicate that GLUT1 could be a potential target for new lines of drug therapy, especially for the squamous subtype of cancer."

In addition to squamous cell lung cancer, the team found that GLUT1 levels were much higher in other types of squamous cell cancer, including head and neck, esophageal and cervical cancers. "These are very different organs and tissues in the body, but somehow squamous cell cancers have a very similar commonality in terms of glucose uptake," Kim said. "This type of cancer clearly consumes a lot of sugar. One of our next steps is to look at why this is the case."

An upcoming study by Kim's group will examine the effect of a sugar-restricted diet on the progression of lung cancer in an animal model of the disease. The U.S. Department of Agriculture estimates that in 2015, on average, each American consumed more than 75 pounds of refined sugar, high fructose corn syrup and other sweeteners combined. "As a culture, we are very addicted to sugar," Kim said.
"Excessive sugar consumption is not only a problem that can lead to complications like diabetes, but also, based on our studies and others, the evidence is mounting that some cancers are also highly dependent on sugar. We'd like to know from a scientific standpoint whether we might be able to affect cancer progression with dietary changes." The research was supported by the National Institutes of Health, the American Lung Association, the Japan Agency for Medical Research and Development, the Takeda Science Foundation, the Welch Foundation, and the Cancer Prevention and Research Institute of Texas. Other UT Dallas researchers involved with the work are co-lead authors and biological sciences graduate students Justin Goodwin and Michael Neugent; undergraduate students Maddox Robinson and Dana Jenkins; former Kim lab members Dr. Shin Yup Lee, Dr. Hyunsung Choi, Robin Ruthenborg and high school student intern Joshua Choe; Dr. Zhenyu Xuan and Dr. Hyuntae Yoo, assistant professors of biological sciences; Dr. Min Chen, associate professor of mathematical sciences; and Dr. Jung-Mo Ahn, associate professor of chemistry and biochemistry. Paper authors also include researchers from UT Southwestern Medical Center, the University of South Carolina, University of Nebraska Medical Center, the University of California, Los Angeles, Kyungpook National University in South Korea, the University of Tokyo and the Osaka Medical Center for Cancer and Cardiovascular Diseases.
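The core comparison behind the TCGA analysis described above – average GLUT1 levels in squamous tumors versus adenocarcinomas – amounts to a simple two-group fold-change calculation. The sketch below uses made-up expression values; the numbers and sample lists are illustrative placeholders, not the study's data:

```python
from statistics import mean

# Hypothetical expression values (arbitrary units) standing in for
# GLUT1 levels in tumor samples -- NOT the study's actual TCGA data.
glut1_sqcc = [8.1, 9.4, 7.8, 10.2, 8.9]  # squamous cell carcinoma samples
glut1_adc = [2.3, 3.1, 2.7, 2.0, 3.4]    # adenocarcinoma samples

# Fold change: how much higher the average SqCC level is than the ADC level.
fold_change = mean(glut1_sqcc) / mean(glut1_adc)
print(f"GLUT1 fold change, SqCC vs. ADC: {fold_change:.1f}x")
```

A real analysis would also test whether the difference is statistically significant across the cohort rather than just comparing means.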


News Article | May 24, 2017
Site: www.prweb.com

Keypath Education announces a partnership with the Executive Education MBA programs in the Naveen Jindal School of Management at The University of Texas at Dallas. Keypath Education, a full-service marketing partner for higher education institutions worldwide, will support the executive education program through market research, inbound marketing and digital media services.

“We trust Keypath to help us find the right students for our renowned Executive Education programs," said Pamela Foster Brady, director of the Jindal School’s Executive MBA programs. “Their experts develop detailed strategies that are supported by comprehensive industry- and school-specific research. Together, we will be able to help students find success and advance in careers meeting the needs and expectations of top employers.”

The Jindal School’s Full-Time MBA was ranked No. 12 among U.S. public university programs by Bloomberg Businessweek in 2016 and No. 16 by U.S. News & World Report in 2017. This year, U.S. News & World Report also ranked the Professional MBA No. 7 among Best Online MBA Programs. The robust suite of MBA, MS and Executive Education programs at the Jindal School includes options that cater to a wide variety of interests and career objectives. With five formats – full-time, evening, flex, online and executive education – as well as 15 concentrations, an Executive MBA and a Global Leadership Executive MBA, the school prepares students at many levels and on many career paths with an education that will equip them for high-demand roles in the C-suite.

“We’re proud to partner with Executive Education programs at Jindal School UT Dallas,” said Keypath Group President Mike McHugh.
“Not only do the high-caliber Executive MBA programs lay the groundwork for successful careers, the faculty and staff are also savvy in knowing in which areas to focus and what is needed to reach and develop today’s student.”

To learn more about Executive Education programs at the UT Dallas Naveen Jindal School of Management, visit http://jindal.utdallas.edu/emba-programs/. For more information, please contact Angela Connelly, senior marketing manager, at angela.connelly(at)marketing.keypathedu.com or 913.254.6964.

About Keypath Education: Keypath Education partners with higher education institutions worldwide to launch programs, grow enrollment, improve learning and connect education to careers by focusing on outcomes. The company has helped more than 4,000 higher education institutions better serve students and graduates, resulting in a strong reputation for its education mission and quality in the United States, Canada, the U.K. and Australia. Since its beginning more than 25 years ago, Keypath Education has been dedicated to changing lives through education.


News Article | April 19, 2017
Site: www.eurekalert.org

Managers of U.S. companies facing market pressures to meet earnings expectations may risk damaging the health and safety of workers to please investors, according to recent research from the Naveen Jindal School of Management at UT Dallas. Companies may create incentives for employees to increase productivity or reduce discretionary expenditures, but often these actions come at the cost of managers and workers paying insufficient attention to safety.

Dr. Naim Bugra Ozel, assistant professor of accounting in the Jindal School, and his co-author, Dr. Judson Caskey of UCLA, recently studied firms that meet or just beat analyst expectations. The study, published in the February issue of the Journal of Accounting and Economics, found that these firms have a roughly 12 percent higher injury rate for employees than other firms do.

"We know that firms try to meet earnings benchmarks because the benchmarks have implications for the firms," Ozel said. "If firms do not meet these benchmarks, then investors punish them, and stock prices go down significantly after a miss of earnings expectations. That gives managers incentive to use the tools they have to ensure they are going to perform at least to the expectations."

Using injury data from the Occupational Safety and Health Administration and companies' financial data, the researchers examined company spending and worker output. They found that lower discretionary expenditures are associated with higher injury rates in firms that meet or just beat expectations, which is consistent with the conclusion that these companies reduce safety-related expenditures such as oversight and employee training. The study also found that higher employee output is associated with higher rates of injuries in these firms. "Our research suggests that there is also an increase in the workload of the employees, so it's not just cutting expenditures, but asking employees to work a little harder," Ozel said.
"That might be in the form of overtime, or that might be in the form of putting in more work in a shorter time period. If employees are forced to work harder, they might inadvertently ignore the safety procedures themselves." The researchers identified three factors that affect the relationship between injuries and meeting or just beating expectations. Ozel said the study shows one way that companies deal with the pressure to meet earnings expectations. Missing expectations not only means lower stock prices, but also can affect CEO career outcomes. "When we think about firms, we always think, 'These are rational players, so performance is important, but they will not sacrifice people's health for this purpose,'" Ozel said. "You may be able to think of some anecdotes where companies might be willing to sacrifice employees' safety, but we looked at a large sample. And in this sample, we find quite a significant result -- a 10 to 15 percent increase in employee injuries. "There's clearly an economic trade-off. Managers are there to think about the best interest of their investors, and they have to make a decision of what would be in the best interest of the investors, and sometimes they might decide to risk injuries."


News Article | May 4, 2017
Site: www.eurekalert.org

A University of Texas at Arlington team has won the 2017 Brain Bowl organized by the Center for Biomedical Neuroscience at the University of Texas Health Science Center at San Antonio, beating out teams from Trinity University and the defending champion, the University of Texas at Dallas. The Brain Bowl is modeled after the 1960s quiz show University Challenge and includes three rounds of short-answer questions that increase in difficulty with each round. The final round consists of a single complex challenge question, on which teams wager points they have accumulated in the previous rounds.

"All five members of our team are active members of my behavioral neuroscience laboratory," said Linda Perrotti, UTA associate professor of psychology and team mentor. "We made a victorious comeback to reclaim the title of Brain Bowl Champions after having lost it to UT Dallas in 2015. We also get to house the Brain Bowl Trophy on our campus for another year."

The questions asked during the Brain Bowl cover many fields of neuroscience research, including neurophysiology, neuroanatomy, neurochemistry, neuropharmacology and behavioral neuroscience. The Brain Bowl is sponsored annually by the Center for Biomedical Neuroscience at the University of Texas Health Science Center San Antonio and is a premier event within Brain Awareness Week for the neuroscience community. The first Brain Bowl was held in 1998. To date, nine Texas universities have competed: Texas Lutheran, Saint Mary's, the University of Texas at San Antonio, Trinity, Southwestern, the University of Texas at Austin, UTA, Baylor and Texas A&M. In 2013, UTA won its first Brain Bowl against the then-defending champion, Trinity University, and the University of Texas at San Antonio, and successfully defended the championship in 2014.
UTA psychology chair Perry Fuchs was among those congratulating the team on this important success: "UTA's team took on this challenge of cross-disciplinary quiz knowledge about neuroscience and beat out great teams from across Texas," Fuchs said. "It also clearly reflects on the leadership of UTA in the growing field of neuroscience."

The team members include: an anthropology and biology 2016 graduate and research technician in Dr. Perrotti's lab group, who intends to apply to graduate school for a neuroscience doctorate to further research in neuronal cell signaling, and enjoys reading, music, walking and Japanese culture; a psychology major minoring in biology, who intends to complete a medical degree and doctorate to improve psychiatric health care, and enjoys reading, meditating and '80s music; a psychology major who will begin pursuing her doctorate in neuroscience at UTA in the fall 2017 semester, and enjoys reading, cooking and hiking; and a biology and psychology major who intends to complete a doctorate in clinical psychology to help people struggling with mental health issues, and enjoys trying new foods, reading, learning random facts and reading comics.

The University of Texas at Arlington is a Carnegie Research-1 "highest research activity" institution. With a projected global enrollment of close to 57,000, UTA is one of the largest institutions in the state of Texas. Guided by its Strategic Plan 2020: Bold Solutions | Global Impact, UTA fosters interdisciplinary research and education within four broad themes: health and the human condition, sustainable urban communities, global environmental impact, and data-driven discovery. UTA was recently cited by U.S. News & World Report as having the second-lowest average student debt among U.S. universities. U.S. News & World Report also lists UTA as having the fifth-highest undergraduate diversity index among national universities.
The University is a Hispanic-Serving Institution and is ranked as the top four-year college in Texas for veterans on Military Times' 2017 Best for Vets list.


News Article | February 15, 2017
Site: www.eurekalert.org

A University of Texas at Dallas team is exploring whether teaching real-world science through a popular computer game may offer a more engaging and effective educational approach than traditional concepts of instruction. In an article recently published in Nature Chemistry, a UT Dallas team -- including a materials scientist, two chemists and a game design expert -- describes how a group of 39 college students from diverse majors played an enhanced version of the popular video game "Minecraft" and learned chemistry in the process, despite being given no in-class science instruction. Dr. Walter Voit led the team that created "Polycraft World," an adaptation or "mod" for "Minecraft" that allows players to incorporate the properties of chemical elements and compounds into game activities. Using the mod and instructions provided on a Wiki website, players can, for example, harvest and process natural rubber to make pogo sticks, or convert crude oil into a jetpack using distillation, chemical synthesis and manufacturing processes. "Our goal was to demonstrate the various advantages of presenting educational content in a gaming format," said Voit, a materials science and engineering professor in the Erik Jonsson School of Engineering and Computer Science. "An immersive, cooperative experience like that of 'Polycraft World' may represent the future of education." Dr. Ron Smaldone, an assistant professor of chemistry, joined the project to give the mod its accuracy as a chemistry teaching tool. Dr. Christina Thompson, a chemistry lecturer, supervised the course in which the research was conducted, and joined Smaldone in mapping out assembly instructions for increasingly complex compounds. Voit spearheaded a team of programmers that spent a full year on development of the platform. "Eventually, we got to the point where we said, 'Hey, we can do something really neat with this,'" Voit said. "We could build a comprehensive world teaching people materials science." 
For Smaldone and Voit, much of the work was finding in-game objectives that provided a proportional difficulty-reward ratio -- worth the trouble to build, but not too easy. "If the game is too difficult, people will get frustrated. If it's too easy, they lose interest," Voit said. "If it's just right? It's addicting, it's engaging, it's compelling." Thompson and Smaldone produced more than 2,000 methods for building more than 100 different polymers from thousands of available chemicals. "We're taking skills 'Minecraft' gamers already have -- building and assembling things -- and applying them to scientific principles we've programmed," Smaldone said. Some of the "Polycraft World" gamers became surprisingly proficient in processes for which they had no prior instruction, Voit said. "We've had complete non-chemists build factories to build polyether ether ketones, which are crazy hard to synthesize," he said. "The demands of the one-hour-a-week class were limited, yet some students went all-out, consuming all this content we put in." Dr. Monica Evans, an associate dean for graduate programs and associate professor in the School of Arts, Technology, and Emerging Communication, is a co-author of the paper and leads the University's game design program, which is ranked as one of the top programs in the country by The Princeton Review. "It's quite difficult to make a good video game, much less the rare good game that is also educational," Evans said. "The ingenuity of the 'Polycraft' team is that they've harnessed the global popularity of an existing game, 'Minecraft,' and transformed it into something that is explicitly educational with a university-level subject." Voit and Smaldone see "Polycraft World" as an early step on the road to a new format for learning without classroom instruction. "The games that already exist mostly serve only as a companion to classroom learning," Smaldone said. "The goal here is to make something that stands alone." 
A significant advantage of using such a tool comes in the volume of data it returns on student performance. "We can measure what each player is doing at every time, how long it takes them to mix chemicals, if they're tabbing back and forth to our Wiki, and so on," Voit said. "It gives us all this extra information about how people learn. We can use that to improve teaching." Smaldone agrees: "With traditional teaching methods, I'd walk into a room of several hundred people, and walk out with the same knowledge of their learning methods," he said. "With our method, it's not just the students learning -- it's the teachers as well, monitoring these player interactions. Even in chemistry, this is a big innovation. Watching how they fail to solve a problem can guide you in how to teach better." Smaldone admits the concept must overcome doubts held by some that gaming cannot serve useful purposes. "There's a preconception among some that video games are an inherent evil," he said. "Yet in a rudimentary form, we've made a group of non-chemistry students mildly proficient in understanding polymer chemistry. I have no doubt that if you scaled that up to more students, it would still work." Voit's plans for the next version of "Polycraft World" will take it beyond teaching chemistry. Perhaps the most ambitious objectives revolve around economics. "We've worked with several economists, and are developing a monetary system," Voit said. "There will be governments and companies you can form. A government can mint and distribute currency, then accumulate goods to prop up that currency. We'll see teams of people learning how to start companies or countries, how to control supply and demand, and how to sustain an economy. "Learning about micro- and macroeconomics by actually doing it can impart a much richer understanding of what monetary policy looks like and why." 
"It's a pleasure to be part of such a unique, transformative project, particularly as it moves forward into the next few stages of development," Evans said. For Smaldone, the appeal of the project comes from both its uniqueness and its potential to yield change. "No one else is doing this to this level. That's why I think we've gotten traction," he said. "I think we have a chance to make an impact, even if only demonstrating how powerful it is to infiltrate a game with real, serious content. That's a proof of concept that so far, at least in chemistry, no one has done."
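The per-player telemetry Voit describes – timing how long each task takes, tracking wiki visits – can be aggregated from an ordinary event log. The sketch below assumes a hypothetical (player, action, timestamp) log format for illustration; the mod's real schema is not given in the article:

```python
from collections import defaultdict

# Hypothetical gameplay events: (player, action, timestamp in seconds).
# The log format and player names are assumptions for illustration only.
events = [
    ("alice", "start_mix", 10.0),
    ("bob", "start_mix", 12.0),
    ("alice", "wiki_visit", 20.0),
    ("alice", "finish_mix", 42.5),
    ("bob", "finish_mix", 95.0),
]

# Pair each player's start/finish events to measure time-to-complete.
starts = {}
durations = defaultdict(list)
for player, action, t in events:
    if action == "start_mix":
        starts[player] = t
    elif action == "finish_mix" and player in starts:
        durations[player].append(t - starts.pop(player))

for player in sorted(durations):
    print(f"{player}: mixed chemicals in {durations[player][0]:.1f}s")
```

Aggregating such per-event timings across a class is what lets instructors see where players struggle, as Smaldone notes.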


News Article | February 15, 2017
Site: www.eurekalert.org

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size -- and, hopefully, the price tag -- of a high-tech device commonly used to characterize material properties. "A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers," said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. "We have managed to miniaturize all of the electromechanical components down onto a single small chip." Moheimani and his colleagues describe their prototype device in this month's issue of the IEEE Journal of Microelectromechanical Systems.

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale -- that's roughly on the scale of individual molecules. The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.

"An AFM is a microscope that 'sees' a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve," said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. "It can capture features that are very, very small."

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach. "A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones," said Dr.
Anthony Fowler, a research scientist in Moheimani's Laboratory for Dynamics and Control of Nanosystems and one of the article's co-authors. "These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece."

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample's features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two. "The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft," Fowler said. "Or, if you are scanning a very hard surface, you could wear down the tip."

The MEMS-based AFM operates in "tapping mode," which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image. "In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with sample," said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. "This device creates an image by maintaining the amplitude of oscillation."

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They're also expensive.
"An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more," Moheimani said. "Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument. "One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars." A reduced size and price tag also could expand the AFMs' utility beyond current scientific applications. "For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made," Moheimani said. "With our technology, you might have an array of AFMs to characterize the wafer's surface to find micro-faults before the product is shipped out." The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device. "This is one of those technologies where, as they say, 'If you build it, they will come.' We anticipate finding many applications as the technology matures," Moheimani said. In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert's doctoral advisor. Moheimani's research has been funded by UT Dallas startup funds, the Von Ehr Distinguished Chair and the Defense Advanced Research Projects Agency.
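The tapping-mode feedback loop Maroufi describes – adjust the tip height so the oscillation amplitude stays at a setpoint, then read the topography off the correction signal – can be illustrated with a toy proportional controller. The amplitude model below (amplitude grows with the tip-sample gap, capped at the free amplitude) is a deliberate simplification for illustration, not the instrument's real tip-sample dynamics:

```python
# Toy tapping-mode simulation: a proportional feedback loop holds the
# cantilever's oscillation amplitude at a setpoint while scanning, and the
# recovered image is the z-correction minus the setpoint.
surface = [0.0, 0.0, 1.0, 1.0, 0.5, 0.0]  # sample topography (arbitrary units)
free_amplitude = 2.0  # oscillation amplitude far from the surface
setpoint = 1.5        # amplitude the loop tries to maintain
gain = 0.8            # proportional gain

z = surface[0] + setpoint  # initial tip height
image = []
for height in surface:
    for _ in range(50):  # let the feedback loop settle at each pixel
        gap = z - height
        # Simplified model: amplitude shrinks as the tip nears the surface.
        amplitude = min(free_amplitude, max(0.0, gap))
        z += gain * (setpoint - amplitude)  # raise/lower tip to hold amplitude
    image.append(z - setpoint)  # reconstructed height at this pixel

print([round(h, 2) for h in image])
```

The recovered profile tracks the surface because, at the controller's fixed point, the tip-sample gap equals the amplitude setpoint everywhere.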


News Article | February 15, 2017
Site: phys.org

"A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers," said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. "We have managed to miniaturize all of the electromechanical components down onto a single small chip." Moheimani and his colleagues describe their prototype device in this month's issue of the IEEE Journal of Microelectromechanical Systems. An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale—that's roughly on the scale of individual molecules. The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image. "An AFM is a microscope that 'sees' a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve," said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. "It can capture features that are very, very small." The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach. "A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones," said Dr. Anthony Fowler, a research scientist in Moheimani's Laboratory for Dynamics and Control of Nanosystems and one of the article's co-authors. 
"These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece." The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Conventional AFMs operate in various modes. Some map out a sample's features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two. "The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft," Fowler said. "Or, if you are scanning a very hard surface, you could wear down the tip," The MEMS-based AFM operates in "tapping mode," which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image. "In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with sample," said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. "This device creates an image by maintaining the amplitude of oscillation." Because conventional AFMs require lasers and other large components to operate, their use can be limited. They're also expensive. "An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more," Moheimani said. 
"Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument. "One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars." A reduced size and price tag also could expand the AFMs' utility beyond current scientific applications. "For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made," Moheimani said. "With our technology, you might have an array of AFMs to characterize the wafer's surface to find micro-faults before the product is shipped out." The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device. "This is one of those technologies where, as they say, 'If you build it, they will come.' We anticipate finding many applications as the technology matures," Moheimani said. More information: Michael G. Ruppert et al, On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach, Journal of Microelectromechanical Systems (2017). DOI: 10.1109/JMEMS.2016.2628890




News Article | February 24, 2017
Site: www.eurekalert.org

Adjusting a firm's capacity can be expensive and difficult for a production manager. A new UT Dallas study derived optimal policies and data-driven problem-solving techniques for firms to learn about demand so they can decide when, and by how much, to adjust their capacity level.

"The structure of the optimal policy tells you, based on current information, whether or not you should change your capacity," said Dr. Anyan Qi, assistant professor of operations management in the Naveen Jindal School of Management and one of the paper's authors. "It's a mapping from what you know and what you have to what your decision should be -- whether you should continue to observe the demand, increase the capacity or decrease the capacity." The study was published in the January-February issue of Operations Research.

Qi said it's important for researchers to investigate capacity -- an indicator of a firm's capability to satisfy demand and, therefore, to earn revenue. Increasing capacity can be costly because it requires an investment, such as buying additional equipment or hiring more workers. Downsizing capacity, which may require layoffs or equipment disinvestment, also can be expensive. "If you do not have enough capacity, and you have a lot of demand, you are losing potential revenue," Qi said. "If you have a lot of capacity, but not enough demand, you suffer from the redundant capacity you have. We would like to see the demand match the supply."

To demonstrate that their method can be implemented with actual demand data, the researchers developed a numerical study using production and financial data related to the Ford Focus. Using data from the first two generations of the Focus in the North American market, the numerical study illustrates how one could use the paper's approach in deciding how to adjust capacity for the third generation.

Qi said it's difficult for firms to make decisions about capacity investment because of demand uncertainties. Production managers typically need to observe demand for a while and then adjust the capacity level based on what they learn about it. Capacity adjustment can be costly and often is subject to managerial hurdles, which can make it difficult to adjust the capacity level multiple times. The study's main finding is that, when this is the case, the production manager needs to maintain a careful balance between observing the demand and changing the capacity. The manager should take time to gather more information, especially if the demand can grow higher, Qi said. Because of the limited opportunity to change the capacity, the manager wants to learn more about the demand so he or she can make the best decision.

Qi said the study -- one of the first papers to combine capacity with demand learning -- also speaks to today's focus on machine learning. "It's important to know how to learn about your demand," Qi said. "How do you analyze big data to learn about the demand and support your operational decision?"

Dr. Hyun-Soo Ahn and Dr. Amitabh Sinha, both of the University of Michigan, are co-authors on the paper. In 2015, the same trio of researchers published a study on investing in a shared supplier in a competitive market.
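The "mapping from what you know and what you have to what your decision should be" can be sketched as a small function. This is not the optimal policy derived in the paper, only a hypothetical illustration of its three-action structure: the sample-mean demand estimate, the `deadband` parameter standing in for adjustment costs, and the `min_obs` cutoff are all invented for the sketch.

```python
# Illustrative sketch only: mirrors the observe / increase / decrease
# structure described in the article; all thresholds here are invented.
from statistics import mean

def capacity_decision(observed_demand, capacity, deadband=0.15, min_obs=4):
    """Map current information (demand history) and the current capacity
    level to one of three actions."""
    if len(observed_demand) < min_obs:
        return "observe"  # too little data: keep learning about demand
    estimate = mean(observed_demand)  # point estimate of demand
    if estimate > capacity * (1 + deadband):
        return "increase"  # lost sales now outweigh the expansion cost
    if estimate < capacity * (1 - deadband):
        return "decrease"  # idle capacity now outweighs downsizing cost
    return "observe"  # inside the deadband, adjusting isn't worth it
```

The deadband captures the article's point that adjustment is costly in both directions: when the demand estimate is close enough to current capacity, the best move is to keep observing rather than pay to change.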
