News Article | December 1, 2016
High-resolution brain scans analyzed by machine learning algorithms could determine whether a patient has a concussion, according to a new study published in PLOS Computational Biology. Currently, doctors diagnose concussions from patient-reported symptoms, which can be challenging and inefficient. Previous studies have used high-resolution imaging to show that concussions cause changes in communication between different brain areas, but these studies have typically only looked at average changes across groups of patients.

Vasily Vakorin, now of Simon Fraser University, British Columbia, and colleagues from the Hospital for Sick Children, Toronto, and Defence Research and Development Canada investigated whether high-resolution imaging could be combined with machine learning algorithms to detect concussions in individual patients.

The researchers scanned the brains of men with and without concussion using magnetoencephalography (MEG), which records brain activity at fast time scales. MEG imaging showed that patients with concussions had distinctive changes in communication among areas of their brains. By applying machine learning algorithms, the scientists were then able to work backwards from individual brain scans and predict whether a given patient had a concussion, with 88% accuracy. This approach also accurately predicted the severity of symptoms reported by individual patients.

"Changes in communication between brain areas, as detected by magnetoencephalography, allowed us to detect concussion from individual scans, in situations wherein typical clinical imaging tools such as MRI or CT fail," says study coauthor Sam Doesburg.

Future research could refine understanding of the specific neural changes associated with concussions in order to improve detection, inform treatment, and monitor recovery.
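The classification step described above can be sketched in a few lines. This is a hedged illustration, not the authors' pipeline: the connectivity features are simulated, the group difference is an invented effect size, and a simple nearest-centroid classifier with leave-one-out validation stands in for whatever method the study actually used.

```python
# Toy sketch: classifying "concussion" vs "control" from simulated
# inter-regional connectivity features. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_features = 20, 50  # e.g. 50 pairwise connectivity values

# Simulate controls and concussed patients; the concussed group gets a
# small shift in a subset of connections, mimicking altered communication.
controls = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients[:, :10] += 1.2  # assumed group difference in 10 connections

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

print(f"leave-one-out accuracy: {nearest_centroid_loo(X, y):.2f}")
```

With a real dataset the features would come from MEG connectivity estimates rather than a random number generator, and the reported 88% figure reflects the study's own classifier and validation scheme, not this sketch.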
Funding: This work was supported by the Canadian Forces Health Services and funding from Defence Research and Development Canada (DRDC) (contract # W7719-135182/001/TOR) to MJT and EWP. The funders had no role in the study. Competing Interests: The authors have declared that no competing interests exist.
News Article | December 7, 2016
Simon Fraser University researchers have found that high-resolution brain scans, coupled with computational analysis, could play a critical role in detecting concussions that conventional scans miss.

In a study published in PLOS Computational Biology, Vasily Vakorin and Sam Doesburg show how magnetoencephalography (MEG), which maps interactions between regions of the brain, can detect neural changes that typical clinical imaging tools such as MRI or CAT scans cannot. Qualified clinicians typically use those tools, along with self-reported symptoms such as headache or fatigue, to diagnose concussion. The researchers also note that related conditions such as mild traumatic brain injury, often associated with football player collisions, don't appear on conventional scans.

"Changes in communication between brain areas, as detected by MEG, allowed us to detect concussion from individual scans, in situations where MRI or CT failed," says Vakorin.

The researchers are scientists with the Behavioural and Cognitive Neuroscience Institute based at SFU, and SFU's ImageTech Lab, a new facility at Surrey Memorial Hospital. Its research-dedicated MEG and MRI scanners make the lab unique in western Canada.

The researchers took MEG scans of 41 men between 20 and 44 years of age; half had been diagnosed with concussions within the past three months. They found that concussions were associated with alterations in the interactions between different brain areas--in other words, there were observable changes in how areas of the brain communicate with one another.

The researchers say MEG offers an unprecedented combination of "excellent temporal and spatial resolution" for reading brain activity, allowing it to diagnose concussion where other methods fail. Relationships between symptom severity and MEG-based classification also suggest that these methods may provide important measurements of changes in the brain during concussion recovery.
The researchers hope to refine their understanding of specific neural changes associated with concussions to further improve detection, treatment and recovery processes. The research was funded by Defence Research and Development Canada (DRDC).
News Article | September 28, 2016
It's a nightmare scenario: In some future war, an adversary flies a plane over the unsuspecting Canadian heartland and sprays anthrax or another deadly agent into the air. It's odourless, colourless, and the innocent people breathing it in are none the wiser. The Canadian army wants to be able to detect bioweapons before they can hurt anybody—using lasers.

According to a request for proposal published to the government's public-facing procurement website on Monday, Defence Research and Development Canada (DRDC) is looking for a bidder to investigate how various airborne biological and chemical agents respond to being flashed with a laser, for the purposes of detection and early warning. The contract is set to run until 2022, and the government will dedicate $850,000 to the project.

"Chemical samples absorb and emit [light] at characteristic wavelengths that may be detected using laboratory, ground-based and airborne [...] sensors in the laboratory or field," the request for proposal states. In other words, when flashed with a laser, airborne chemical particles send back a unique optical signal that can be detected by specialized equipment. But before that can happen, technicians need a large library of recorded signals to check against in order to find out exactly what kind of bioweapon they're dealing with. That, according to the request for proposal, is what scientists will be doing under this research project.

"These activities have an objective to monitor air quality around valuable military assets, in order to detect and mitigate potential aerosolized or gaseous chemical and biological threats," Evan Koronewski, a spokesperson for the Department of National Defence, wrote me in an email. "This is done fundamentally from a defense perspective."
The work won't involve testing real bioweapons—that would probably be pretty dangerous—but will instead focus on "simulants" (non-lethal analogs) of dangerous compounds, Koronewski said, and will be done under controlled conditions.

The Canadian military, and the DRDC in particular, have been investigating ways to detect bioweapons from a distance, even using lasers, for years. Between 1999 and 2002, the DRDC developed a system called SINBAHD, which used lasers to detect biological agents; it's no longer in operation. In the intervening years, the DRDC developed several more laser-based technologies for detecting bioweapons. Hopefully, we'll never have to use any of this advanced tech.
News Article | December 3, 2015
The weapons that will be used to fight tomorrow’s wars will need to address a very old problem: friendly fire. Researchers think complex algorithms can help by telling soldiers where to shoot, and where not to. But how much trust should soldiers place in a machine that helps them decide who to kill?

“The reality is that soldiers are doing a very difficult task under very difficult circumstances,” Greg Jamieson, a researcher at the University of Toronto’s Cognitive Engineering Laboratory (CEL), told me over the phone. “So, if you can provide some kind of tool to help people make better decisions about where there’s a target or who this target is or the identity of that target, that’s in the interest of the civilian or non-aligned people in the environment.”

The problem is that the tool Jamieson is referring to, called automated target detection (ATD), doesn’t really exist in any sort of ready-to-deploy form for individual soldiers. So, in partnership with Defence Research and Development Canada (DRDC), Jamieson and the other researchers at CEL are tackling the research backwards: instead of testing new tech to see how soldiers respond, they’re testing soldiers to understand what they need out of new tech.

Photo: Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014

Essentially, the CEL researchers are studying the trust soldiers place in ATD, and whether soldiers benefit from imperfect automation when they understand its limitations. “People don’t want to tell us how well things work, and they don’t want to tell us how reliable they are, because that’s sensitive information,” Jamieson said.
“So, instead we take the opposite direction and say, OK, how about we provide the designers of this technology with some information about how effective it needs to be in order for it to be an aid to soldiers?”

Studying ATD is a current focus for the Canadian military’s Future of Small Arms Research (FSAR) project, which is investigating and developing the killing machines for tomorrow’s wars. Basically, ATD relies on computer vision to process information about the scene surrounding a soldier and provide live feedback about targets in the area. But while many approaches to this task have been proposed over the years, including laser radar, deep learning, and infrared imaging, the work has been met with limited success. Getting a computer to parse a busy scene with noisy data is hard, especially when you need enough accuracy to justify pulling the trigger, and in the blink of an eye.

In these studies, a soldier is put in a room and surrounded by screens meant to create the illusion of a virtual battlefield. The DRDC calls this the virtual immersive soldier simulator, or VISS. Difficult-to-identify targets fly across the screen as the soldier looks down a modified rifle, with a heads-up display projected inside the sight. The soldier sees yellow boxes around some of the objects in the scene—but not all—to indicate that the hypothetical ATD system has identified a target. The researchers "bias" the system to detect friendlies more readily than enemies, thus helping the soldier make a decision about whether a friend or foe has been targeted. Before the study, the soldier is told how reliable the system is, usually anywhere from 50 to 100 percent, and how likely it is to detect a soldier versus a civilian. The soldier must then decide when to shoot.
Photo: Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014

“We found that if we informed our participants of the ATD bias, they were more likely to identify targets in accordance with the ATD bias,” Justin Hollands, a DRDC researcher working on the project at CEL, wrote me in an email. “Importantly, we also found that detection of targets was much better with the ATD than without, regardless of its bias.” In other words, the automated targeting helped soldiers shoot better, especially when they were informed about how much trust they should place in its performance.

Some past approaches to target identification include the combat identification (CID) systems currently used by many NATO countries. This kind of CID relies on a two-part “call and response” handshake between two sensors, one worn by the soldier or vehicle trying to identify a target, and the other by the friendly. The problem with this approach is that enemies and neutrals obviously don’t wear army-issue CID transponders, and so these systems often leave soldiers in a world of unknowns. According to Jamieson, these technologies sorely need an update, and ATD could be the answer.

The message Jamieson wants to get across based on the work that he’s done at CEL, he tells me, is that automation doesn’t need to be better than, or even as effective as, a human soldier. Of course, the idea of a computer telling a human to shoot to kill an innocent bystander is no doubt unsettling—terrifying, even. The fact that it only happens sometimes doesn’t really help to allay such fears. But, Jamieson says, as long as a human still has to pull the trigger and understands the technology’s pitfalls, then it’s a net positive.

“What that suggests to the people who are designing these technologies is that it doesn’t have to be perfect,” Jamieson said. “Instead of trying to make it perfect, we could invest energy in communicating what that reliability information means.
That’s kind of where we want to go with the research in the future. We want to figure out how to tell a soldier most effectively how reliable the automation is.”

The next step will be to take what the team at CEL has learned about automated targeting and put it into practice with some of the experimental tech that currently exists. “Within the FSAR project we will also conduct field trials where weapons have actual ATD and soldiers will use those,” Hollands wrote me, “so we will look at real weapons with real algorithms in those studies.”

Eventually, automatic targeting tech will make it out of tests and onto the battlefield. But in between, thorny design questions will need to be answered: What imaging technique and algorithm will be used to identify targets? Will the device be mounted on the soldier or their weapon? How large will it be? How heavy? Machines that tell humans on the ground who to shoot at are still years away from being deployed, but Jamieson and Hollands’ work makes one thing clear: technical advances aside, tomorrow’s computer-aided warfare will be about trust.
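The core finding—that an imperfect aid still helps, provided the soldier knows how far to trust it—can be illustrated with a toy signal-detection simulation. Everything here is an invented assumption for illustration (the soldier's perceptual sensitivity, the aid's reliability, and the log-odds weighting of the cue); it is not DRDC's VISS model:

```python
# Hedged sketch: a soldier fuses their own noisy percept with a cue
# from an imperfect automated target detector (ATD). A soldier who is
# "informed" of the system's reliability weights the cue by its log
# odds, so even an unreliable aid is discounted rather than trusted
# blindly. All parameters are illustrative assumptions.
import math
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=20_000, d_prime=1.0, reliability=0.8):
    """Return (unaided, aided) friend/foe decision accuracy."""
    truth = rng.integers(0, 2, n) * 2 - 1            # -1 friend, +1 foe
    evidence = rng.normal(truth * d_prime / 2, 1.0)  # soldier's percept
    cue_correct = rng.random(n) < reliability
    cue = np.where(cue_correct, truth, -truth)       # ATD marking
    w = math.log(reliability / (1 - reliability))    # calibrated trust
    unaided = np.mean(np.sign(evidence) == truth)
    aided = np.mean(np.sign(evidence + w * cue) == truth)
    return unaided, aided

unaided, aided = simulate()
print(f"unaided: {unaided:.3f}  aided: {aided:.3f}")
```

In this toy setup the aided accuracy exceeds the unaided one whenever the cue is better than chance, which mirrors the qualitative result Hollands describes; it says nothing about the actual effect sizes measured in the VISS studies.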
Singh P.A., Jamia Hamdard University |
Brindavanam N.B., DRDC |
Kimothi G.P., DRDC |
Aeri V., Jamia Hamdard University
Asian Pacific Journal of Tropical Disease | Year: 2016
Objective: To evaluate the in vivo anti-inflammatory and analgesic potential of stem bark extract of Dillenia indica f. elongata (Miq.) Miq. (D. indica f. elongata) and to compare it with Shorea robusta Gaertn. (S. robusta) and respective standard drugs in experimental animals. Methods: Analgesic models (hot plate, tail flick and formalin-induced paw licking), along with acute (carrageenan-induced) and chronic (formalin-induced) models of inflammation, were used to evaluate the analgesic and anti-inflammatory potential of the plant extracts. Results: The results of the study showed that the ethyl acetate extracts of D. indica f. elongata (100 and 300 mg/kg) and S. robusta (100 and 300 mg/kg) possessed good central as well as peripheral analgesic activity as compared with pentazocine and indomethacin (10 mg/kg) respectively. The extracts showed significant (P < 0.01) activity in the carrageenan-induced acute and formalin-induced chronic inflammation models, with indomethacin (8 mg/kg) and diclofenac (13.5 mg/kg) as standard drugs respectively. Conclusions: It can be concluded that the presence of major constituents such as flavonoids, tannins and phenols in the ethyl acetate extracts of the stem bark of D. indica f. elongata (100 and 300 mg/kg) and S. robusta (100 and 300 mg/kg) may be responsible for their analgesic and anti-inflammatory activity. © 2016 Asian Pacific Tropical Medicine Press.
News Article | March 31, 2016
On the battlefield, shoddy intelligence means innocent people die. To make its intelligence analysts more effective, the Canadian government is experimenting on them by treating these highly trained personnel like animals.

No, really, intelligence analysts aren’t so different from wild animals, and that’s actually the point. Like hungry little foxes, they go scrounging around different nooks and crannies, except they’re hunting in various databases for satellite images, not a tasty critter to eat. This is called “information foraging,” a theory that originated in the early 1990s at the storied Palo Alto Research Center, where much of modern computing was born.

The Canadian military wants its intelligence analysts to get better at this kind of foraging because, according to Defence Research and Development Canada (DRDC) researchers, analysts face two major challenges: “information overload” and tight time constraints. This makes sense when you consider how Canada and other governments have accelerated and expanded their digital surveillance regimes over the years—that’s a lot of data.

To this end, DRDC launched a project in 2014 involving a whole constellation of research on how to make intelligence analysts do their jobs better. Part of that project is INFOCAT, an experimental platform for testing the information foraging abilities of analysts, which can then be used to design better training and search systems. The work with INFOCAT is being led by cognitive psychologist David Bryant in Toronto, and the team just launched its first experiment, he told me when I interviewed him over the phone. A DRDC spokesperson was also on the line.

“Animals ask themselves: While I’m foraging in this bush, how long should I stay there?” said Bryant. “Should I stay there and completely exhaust the food?
That’s not a very good strategy.” It’s a simple question of economics—how long do you dick around in one bush, er, database, looking for the last few berries before you move to another, more fruitful location, and how fast can you do it? To measure this in people, INFOCAT gives subjects just 20 minutes to answer a research question with lots of databases at their disposal.

“In information foraging, you might Google something and as you go along, you’ll find items that aren’t particularly useful or helpful,” Bryant explained. In other words, how long do you plod through pages of increasingly unhelpful search results before you try a different combination of words? “It’s a question of diminishing returns,” Bryant said.

This work will have applications outside of a military context, Bryant said. After all, in 2016, information overload isn’t just a problem for military types. As a steady stream of individual tweets, news stories, memes, and occasionally relevant information inundates us all, sussing out what’s important and what’s not can be tough. A system that helps you navigate it efficiently would be helpful for academics or businesspeople, or, hell, the average Twitter user.

Canadian intelligence analysts prepare briefings on anything from strategic questions, like whether ISIS is likely to attack a certain region, to tactical ones, like which frequencies a particular radar emits, Bryant said. Analysts must look through a ton of different information sources—signals intelligence such as metadata, human intelligence reports (classic, “boots on the ground”-style spook stuff), satellite imagery, and books or Wikipedia. The job is to pull out relevant information without spending too much time in any one source. You know, like an animal hungrily scurrying from place to place looking for a bite to eat.
“In the experiment, we vary how the information is distributed in databases, we vary the cost associated with moving from one database to another, and the cost of opening an item and processing it,” said Bryant. “By varying these factors, we can create situations where we’d expect different behaviours from an optimal forager. We can see if their behaviours are in line with what information foraging theory predicts.”

The nerdier among us may be asking: instead of going through all this trouble, why not just automate the whole process? According to Bryant, there’s no AI right now that can reliably do the cognitively complex job of an intelligence analyst, although that may one day be the case. For now, many people do this job, and so Bryant and the DRDC are focused on making them better at it, instead of replacing them.

That doesn’t mean that machines don’t have a part to play in that mission, however—a 2014 DRDC research paper saw the military looking into creating an AI-based virtual assistant for intelligence analysts. In the information-drenched future of war, human analysts—animals that they are—are going to need a little help.
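The "diminishing returns" logic Bryant describes is the classic patch-leaving problem from optimal foraging theory. A toy calculation, with an invented gain curve and switching cost (not INFOCAT's actual model), shows how a costlier switch between sources makes it rational to stay longer in the current one:

```python
# Toy "when should the forager leave the patch?" calculation, in the
# spirit of the optimal-foraging models information foraging borrows
# from. The gain curve and travel cost are invented for illustration.
import numpy as np

def optimal_leave_time(total_gain=10.0, tau=2.0, travel=1.0):
    """Diminishing-returns gain g(t) = total_gain * (1 - exp(-t/tau)).
    Pick the leaving time that maximizes long-run rate g(t)/(t + travel),
    where `travel` is the cost of switching to the next source."""
    t = np.linspace(0.01, 20.0, 4000)
    gain = total_gain * (1.0 - np.exp(-t / tau))
    rate = gain / (t + travel)
    return t[np.argmax(rate)]

# Costlier switches between sources => stay longer in the current one.
print(optimal_leave_time(travel=0.5))
print(optimal_leave_time(travel=4.0))
```

The same comparison applies to databases and search queries: the more expensive it is to move to a new source, the longer it pays to keep plodding through the current one before giving up.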
Wojtaszek D., Chalk River Laboratories |
IEEE Transactions on Systems, Man, and Cybernetics: Systems | Year: 2014
The ability of an organization to perform some critical military tasks in a timely manner may depend on the availability of a sufficient number of appropriate vehicles. Therefore, the decision of which is the best military fleet-mix for a given set of requirements should take into consideration, in addition to cost, the ability of the fleet to perform tasks when some of its vehicles are unavailable. In this paper, a measure of the flexibility of military air mobility fleets is presented that evaluates their ability to perform tasks in a timely manner taking into account the possibility that some aircraft in the fleet may be unavailable at any given time. This measure computes the number of aircraft that must be unavailable in order to render a fleet incapable of performing each task in a timely manner. The utility of the flexibility measure is demonstrated by using it as an objective in a multiobjective optimization framework to compute nondominated fleets with respect to cost and flexibility. An artificial data set that is representative of real military air mobility data is used to illustrate how the new flexibility measure may be used to aid decision makers with their fleet mix problems. © 2013 IEEE.
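A simplified reading of that flexibility measure can be sketched as follows. The fleet/task model here is a toy assumption (each task needs some number of aircraft drawn from a set of capable types, and the fleet types are hypothetical), not the paper's actual formulation:

```python
# Toy flexibility measure: for each task, count how many aircraft must
# become unavailable before the task can no longer be covered.
def task_flexibility(fleet, task):
    """fleet: {aircraft_type: count}; task: {'types': set of capable
    aircraft types, 'required': number of aircraft needed}.
    Returns the minimum number of unavailable aircraft that renders the
    task infeasible (0 means the task is already infeasible)."""
    capable = sum(n for t, n in fleet.items() if t in task["types"])
    return max(capable - task["required"] + 1, 0)

# Hypothetical fleet and airlift task for illustration.
fleet = {"C-130": 4, "C-17": 2, "CC-150": 1}
airlift = {"types": {"C-130", "C-17"}, "required": 3}
print(task_flexibility(fleet, airlift))  # 6 capable - 3 required + 1 = 4
```

A higher score means the fleet tolerates more simultaneous unavailability before the task fails, which is the sense in which the measure can serve as an optimization objective alongside cost.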
Wojtaszek D., DRDC |
IEEE Computational Intelligence Magazine | Year: 2012
A major financial expense for any military is the acquisition, operation, and maintenance of vehicles such as ships and aircraft. For example, the U.S. Air Force estimates that the acquisition of the F-35 fighter aircraft will cost $156 million each; hence even slight improvements in fleet efficiency and/or effectiveness can save governments large amounts of money or, on the same budget, buy better equipment. Such high costs have driven the development and application of optimization and simulation methodologies to problems of military fleet mix computation and analysis. The complexity of military fleet mix problems, due in large part to the uncertainty, multi-objectivity, and temporal criticality of military missions, has resulted in the increased use of computational intelligence (CI) methods for solving them. © 2012 IEEE.
Wesolkowski S., DRDC |
2012 IEEE Congress on Evolutionary Computation, CEC 2012 | Year: 2012
Militaries involved in transportation of people and cargo need to be able to assess which tasks they can or cannot do given a specified fleet of heterogeneous platforms (such as vehicles or aircraft). We introduce the Stochastic Fleet Estimation under Steady State Tasking (SaFESST) model to determine which tasks will not be achievable. SaFESST is a bin-packing model which uses a fleet configuration (the assignment of specific platforms to each of the tasks) to fit each task from a scenario within the platform bins (the height of the bin represents the number of platforms). Each individual platform is represented by a strip of scenario length which is packed by sub-tasks it can carry out. SaFESST is run on a set of 10,000 scenarios for a single fleet configuration. Results are reported on various statistics of tasks that are unachievable. © 2012 IEEE.
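The strip-packing idea can be sketched with a greedy first-fit check. This toy version (invented durations, a single shared horizon, no sub-task constraints) only illustrates the flavor of the model, not SaFESST's actual formulation:

```python
# Toy strip-packing feasibility check: each platform is a "strip" of
# scenario length, and sub-task durations are packed onto the strips.
def can_pack(subtasks, n_platforms, horizon):
    """subtasks: list of sub-task durations; each platform strip holds
    `horizon` time units. Greedy first-fit decreasing: return True if
    every sub-task fits on some platform's strip."""
    strips = [0.0] * n_platforms   # time already committed per platform
    for d in sorted(subtasks, reverse=True):
        for i, used in enumerate(strips):
            if used + d <= horizon:
                strips[i] = used + d
                break
        else:
            return False           # no strip has room: task unachievable
    return True

print(can_pack([5, 4, 3, 3, 2], n_platforms=2, horizon=9))   # True
print(can_pack([5, 4, 3, 3, 2], n_platforms=2, horizon=8))   # False
```

Running a check like this across many stochastically generated scenarios, and tallying which tasks fail to pack, is the kind of statistic the model reports for a given fleet configuration.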
Wojtaszek D., DRDC |
IEEE SSCI 2011 - Symposium Series on Computational Intelligence - CISDA 2011: 2011 IEEE Symposium on Computational Intelligence for Security and Defense Applications | Year: 2011
The Non-dominated Sorting Genetic Algorithm-II is applied to a multi-objective air transportation fleet-mix problem for finding flexible fleet mixes. The Stochastic Fleet Estimation model, which is Monte Carlo-based, is used to determine average annual requirements that a fleet must meet. We search for Pareto-optimal combinations of platform-to-task assignments that can be used to complete stochastically generated scenarios. Solutions are evaluated using three objectives, with a goal of maximizing flexibility in accomplishing each task within its closure time, and minimizing fleet cost and total task duration. Optimization over all three objectives found very flexible low cost fleets, which were not discovered using previous two-objective and three-objective optimizations. © 2011 IEEE.
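The Pareto (non-domination) filter at the heart of such multi-objective searches can be shown in miniature. This is not the paper's NSGA-II implementation, just a minimal sketch with invented cost/flexibility numbers:

```python
# Minimal Pareto filter for a cost vs. flexibility trade-off. Fleets
# are (cost, -flexibility) pairs so that both axes are minimized.
def non_dominated(points):
    """Return the points not dominated by any other (minimize all axes).
    q dominates p if q <= p on every axis and q != p."""
    front = []
    for p in points:
        if not any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                   for q in points):
            front.append(p)
    return front

# Hypothetical fleets: (cost in $M, negated flexibility score).
fleets = [(100, -5), (120, -9), (110, -4), (90, -2)]
print(non_dominated(fleets))  # → [(100, -5), (120, -9), (90, -2)]
```

Here (110, -4) drops out because (100, -5) is both cheaper and more flexible; the survivors are the non-dominated trade-offs a decision maker would choose among. NSGA-II layers selection, crossover, and diversity preservation on top of exactly this domination test.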