News Article | March 17, 2016
UPDATE: Two days after we first contacted the RCMP for comment, and one day after this article was posted, Canada’s federal police force answered some of the questions raised in this story. Its response confirms the reporting below. “The RCMP does not currently have an approved project plan to implement a facial recognition system,” reads the statement we received by email, although the new fingerprint system will “allow the RCMP to implement facial recognition as an option.” The RCMP does currently maintain a database of facial images voluntarily sent by “police agencies,” but “they are not being used or accessed by the RCMP at this time,” the statement continues. Despite pushing ahead with the procurement process for the technology needed to access such a database, the RCMP spokesperson wrote that “there is currently no policy on the retention of facial images, including purging rules,” and that these questions will be addressed when RCMP policy is “finalized.”

The RCMP statement noted that the law enforcement agency has not consulted the Office of the Privacy Commissioner about this project, but is part of a biometrics working group created by Defence Research and Development Canada's Centre for Security Science, alongside other national security agencies such as the Canadian Security Intelligence Service and the Canada Border Services Agency.

The Royal Canadian Mounted Police is aiming to upgrade its automated fingerprint identification system (AFIS), and this time, Canada’s top cops want the system to have facial recognition search capabilities. Even more concerning, available documents suggest that the plan flies in the face of Canada’s existing privacy guidelines for facial recognition technology.

The AFIS renewal contract is set to run until 2021, according to a 2015 letter of interest, but there is “no planned implementation time” for the facial recognition aspect, according to another letter of interest published on Wednesday.
Instead, a successful bidder for the AFIS contract only needs to “support” facial recognition capabilities, should the RCMP decide to implement them.

Despite this ambiguity over when facial recognition will be used, the RCMP has some pretty clear ideas about how it should be used. According to a previously released document, the RCMP would like to store and analyze surveillance and cellphone video, “or other non-controlled, poor-quality sources.” The RCMP also expects that these videos may only contain partial facial images. It’s unclear from where, or how, the RCMP plans on acquiring cellphone video.

According to the document, the RCMP will perform one-to-one searches (using one image to confirm the identity of one suspect), as well as one-to-many searches—fishing expeditions involving large databases of photos. If a photo does not contain an identifiable person, it should be stored in an “unknown photo database repository,” according to the letter of interest, which the RCMP can later query.

“What is the criteria for adding photos to that database?” asked lawyer Micheal Vonn, policy director of the British Columbia Civil Liberties Association, who said she isn’t aware of any such RCMP repository. “If they are going to just download all manner of photos and videos into the repository without strict inclusion or exclusion criteria, that is a problem. For example, people marching in a demonstration should not be videoed and have their images placed in an RCMP unknown photo database [to be used as] a repository of suspects.”

Provisions in Bill C-51 that allow for an unprecedented level of information sharing between federal agencies under the aegis of national security, Vonn said, pose additional dangers.
“If the RCMP used a national security rationale for commandeering, say, the passport database, it’s got much more photos of Canadians than it would have in their mugshots.”

The RCMP declined to comment within Motherboard’s publishing timeframe, and we will update this article if we hear from them.

In a 2013 report prepared by the Office of the Privacy Commissioner of Canada (OPC), the nation’s top privacy watchdog listed several guidelines for facial recognition. Two of them include stipulations to record and store descriptions of biometric data instead of the images themselves, so they cannot be improperly re-analyzed, and to stick to one-to-one searches, to minimize the risk of false matches or data breaches. By stating that it wishes to maintain a database of images and perform one-to-many searches, the RCMP appears to be disregarding both of these guidelines.

“We were not specifically aware of this letter of interest,” Tobi Cohen, OPC spokesperson, wrote me in an email. “The issue of facial recognition did come up in a Privacy Impact Assessment (PIA) from the RCMP in relation to body worn video cameras. In our response to the PIA last fall, we indicated that the RCMP would have to update its PIA and assess the privacy risks if it were to apply facial recognition technology to any footage collected. At the time, the RCMP indicated it was not contemplating such a thing.”

“If the RCMP were to use facial recognition in any capacity, we would expect to receive a PIA on the program,” she added.

Facial recognition technology has been used by Canadian passport authorities to detect fraud since 2009. That program has been undergoing PIAs since 2004, according to an OPC report, years before it was actually implemented.

Despite shopping around for a company to supply it with facial recognition-ready technology, the RCMP does not appear to be following the lead of other government agencies when it comes to concern for citizen privacy.
NEW DELHI (Reuters) - India on Wednesday successfully test-fired a new long-range surface-to-air missile capable of countering aerial threats at extended ranges, as Prime Minister Narendra Modi pushes to enhance the country's military capabilities.

India, which shares borders with nuclear-armed China and Pakistan, is likely to spend $250 billion over the next decade to upgrade its military. It is the world's biggest buyer of defense equipment, but Modi is trying to build a defense industrial base in the country to cut overseas purchases.

The test-firing of the missile system, jointly developed by India and Israel, was carried out by the warship INS Kolkata, the Ministry of Defence said in a statement. Defense industry sources told Reuters last year that the value of the Barak 8 project was $1.4 billion.

The aerial defense system includes a radar for detection, tracking and missile guidance. Only a small club of countries including the United States, France, Britain and Israel possess such capability, a Defence Research and Development Organization spokesman said.

Israel is one of India's top three arms suppliers, delivering items such as missiles and unmanned aerial vehicles, but such transactions long went largely unpublicised because of India's fear of upsetting Arab countries and its own large Muslim population.
Jain A., Defence Research and Development
Flora S.J.S., Defence Research and Development
Journal of Environmental Biology | Year: 2012
Nicotine affects a variety of cellular processes, ranging from induction of gene expression to secretion of hormones and modulation of enzymatic activities. The objective of the present study was to examine the dose-dependent toxicity of nicotine, via oxidative stress, in young, adult and old rats administered 0.75, 3 and 6 mg kg-1 nicotine (as nicotine hydrogen tartrate) intraperitoneally for a period of seven days. No changes were observed in blood catalase (CAT) activity or the level of blood reactive oxygen species (ROS) in any of the age groups at the lowest dose of nicotine. However, at the highest dose (6 mg kg-1 nicotine), the ROS level increased significantly from 1.17 to 1.41 µM ml-1 in young rats and from 1.13 to 1.40
News Article | March 31, 2016
On the battlefield, shoddy intelligence means innocent people die. To make its intelligence analysts more effective, the Canadian government is experimenting on them by treating these highly trained personnel like animals.

No, really: intelligence analysts aren’t so different from wild animals, and that’s actually the point. Like hungry little foxes, they go scrounging around different nooks and crannies, except they’re hunting in various databases for satellite images, not a tasty critter to eat. This is called “information foraging,” a theory that originated in the early 1990s at the storied Palo Alto Research Center, where much of modern computing was born.

The Canadian military wants its intelligence analysts to get better at this kind of foraging because, according to Defence Research and Development Canada (DRDC) researchers, analysts face two major challenges: “information overload” and tight time constraints. This makes sense when you consider how Canada and other governments have accelerated and expanded their digital surveillance regimes over the years—that’s a lot of data.

To this end, DRDC launched a project in 2014 involving a whole constellation of research on how to make intelligence analysts do their jobs better. Part of that project is INFOCAT, an experimental platform for testing the information foraging abilities of analysts, which can then be used to design better training and search systems. The work with INFOCAT is being led by cognitive psychologist David Bryant in Toronto, and the team just launched its first experiment, he told me over the phone. A DRDC spokesperson was also on the line.

“Animals ask themselves: While I’m foraging in this bush, how long should I stay there?” said Bryant. “Should I stay there and completely exhaust the food?
That’s not a very good strategy.”

It’s a simple question of economics—how long do you dick around in one bush, er, database looking for the last few berries before you move to another, more fruitful location, and how fast can you do it? To measure this in people, INFOCAT gives subjects just 20 minutes to answer a research question with lots of databases at their disposal.

“In information foraging, you might Google something and as you go along, you’ll find items that aren’t particularly useful or helpful,” Bryant explained. In other words, how long do you plod through pages of increasingly unhelpful search results before you try a different combination of words? “It’s a question of diminishing returns,” Bryant said.

This work will have applications outside of a military context, Bryant said. After all, in 2016, information overload isn’t just a problem for military types. As a steady stream of individual tweets, news stories, memes, and occasionally relevant information inundates us all, sussing out what’s important and what’s not can be tough. A system that helps you navigate it efficiently would be helpful for academics or businesspeople, or, hell, the average Twitter user.

Canadian intelligence analysts prepare briefings on anything from strategic questions, like whether ISIS is likely to attack a certain region, to tactical ones, like which frequencies a particular radar emits, Bryant said. Analysts must look through a ton of different information sources—signals intelligence such as metadata, human intelligence reports (classic, “boots on the ground”-style spook stuff), satellite imagery, and books or Wikipedia. The job is to pull out relevant information, without spending too much time in any one source. You know, like an animal hungrily scurrying from place to place looking for a bite to eat.
“In the experiment, we vary how the information is distributed in databases, we vary the cost associated with moving from one database to another, and the cost of opening an item and processing it,” said Bryant. “By varying these factors, we can create situations where we’d expect different behaviours from an optimal forager. We can see if their behaviours are in line with what information foraging theory predicts.”

The nerdier among us may be asking ourselves: instead of going through all this trouble, why not just automate the whole process? According to Bryant, there’s no AI right now that can reliably do the cognitively complex job of an intelligence analyst, although that may one day be the case. For now, we have many people who do this job, and so Bryant and the DRDC are focused on making them better, instead of replacing them.

That doesn’t mean that machines don’t have a part to play in that mission, however—a 2014 DRDC research paper saw the military looking into creating an AI-based virtual assistant for intelligence analysts. In the information-drenched future of war, human analysts—animals that they are—are going to need a little help.
News Article | December 3, 2015
The weapons that will be used to fight tomorrow’s wars will need to address a very old problem: friendly fire. Researchers think complex algorithms can help by telling soldiers where to shoot, and where not to. But how much trust should soldiers place in a machine that helps them decide who to kill?

“The reality is that soldiers are doing a very difficult task under very difficult circumstances,” Greg Jamieson, a researcher at the University of Toronto’s Cognitive Engineering Laboratory (CEL), told me over the phone. “So, if you can provide some kind of tool to help people make better decisions about where there’s a target or who this target is or the identity of that target, that’s in the interest of the civilian or non-aligned people in the environment.”

The problem is that the tool Jamieson is referring to, called automated target detection (ATD), doesn’t really exist in any sort of ready-to-deploy form for individual soldiers. So, in partnership with Defence Research and Development Canada (DRDC), Jamieson and the other researchers at CEL are tackling the research backwards: instead of testing new tech to see how soldiers respond, they’re testing soldiers to understand what they need out of new tech.

Photo: Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014

Essentially, the CEL researchers are studying the trust soldiers place in ATD, and whether soldiers benefit from imperfect automation when they understand its limitations. “People don’t want to tell us how well things work, and they don’t want to tell us how reliable they are because that’s sensitive information,” Jamieson said.
“So, instead we take the opposite direction and say, OK, how about we provide the designers of this technology with some information about how effective it needs to be in order for it to be an aid to soldiers?”

Studying ATD is a current focus for the Canadian military’s Future of Small Arms Research (FSAR) project, which is investigating and developing the killing machines for tomorrow’s wars. Basically, ATD relies on computer vision to process information about the scene surrounding a soldier and provide live feedback about targets in the area. But while many approaches to this task have been proposed over the years, including laser radar, deep learning, and infrared imaging, the work has been met with limited success. Getting a computer to parse a busy scene with noisy data is hard, especially when you need enough accuracy to justify pulling the trigger, and in the blink of an eye.

In these studies, a soldier is put in a room and surrounded by screens, meant to create the illusion of a virtual battlefield. The DRDC calls this the virtual immersive soldier simulator, or VISS. Difficult-to-identify targets fly across the screen as the soldier looks down a modified rifle, with a heads-up display projected inside the sight. The soldier sees yellow boxes around some of the objects in the scene—but not all—to indicate that the hypothetical ATD system has identified a target. The researchers "bias" the system to detect friendlies more readily than enemies, thus helping the soldier decide whether a friend or foe has been targeted. Before the study, the soldier is told how reliable the system is, usually anywhere from 50 to 100 percent, and how likely it is to detect a soldier versus a civilian. The soldier must then decide when to shoot.
Photo: Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014

“We found that if we informed our participants of the ATD bias, they were more likely to identify targets in accordance with the ATD bias,” Justin Hollands, a DRDC researcher working on the project at CEL, wrote me in an email. “Importantly, we also found that detection of targets was much better with the ATD than without, regardless of its bias.”

In other words, the automated targeting helped soldiers shoot better, especially when they were informed about how much trust they should place in its performance.

Some past approaches to target identification include the combat identification (CID) systems currently used by many NATO countries. This kind of CID relies on a two-part “call and response” handshake between two sensors, one worn by the soldier or vehicle trying to identify a target, and the other by the friendly. The problem with this approach is that enemies and neutrals obviously don’t wear army-issue CID transponders, and so these systems often leave soldiers in a world of unknowns. According to Jamieson, these technologies sorely need an update, and ATD could be the answer.

The message Jamieson wants to get across, based on the work he’s done at CEL, is that automation doesn’t need to be better than, or even as effective as, a human soldier. Of course, the idea of a computer telling a human to shoot at an innocent bystander is no doubt unsettling—terrifying, even. The fact that it only happens sometimes doesn’t really help to allay such fears. But, Jamieson says, as long as a human still has to pull the trigger and understands the technology’s pitfalls, it’s a net positive.

“What that suggests to the people who are designing these technologies is that it doesn’t have to be perfect,” Jamieson said. “Instead of trying to make it perfect, we could invest energy in communicating what that reliability information means.
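The “call and response” handshake behind CID can be pictured as a cryptographic challenge-response: the interrogator sends a random challenge, and only a friendly transponder holding the shared key can compute the expected reply. Anyone without a transponder — enemy or civilian alike — produces the same silence, which is exactly why these systems leave soldiers in a world of unknowns. A toy sketch of that logic (my own illustration; real NATO IFF/CID systems use specified waveforms and cryptography, not this code):

```python
import hmac, hashlib, os

SHARED_KEY = b"friendly-forces-key"  # provisioned only to friendly transponders

def transponder_reply(challenge: bytes, key: bytes) -> bytes:
    # A friendly transponder answers with a keyed MAC of the challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def interrogate(respond) -> str:
    """respond: a callable taking a challenge and returning a reply (or None)."""
    challenge = os.urandom(16)
    reply = respond(challenge)
    expected = transponder_reply(challenge, SHARED_KEY)
    if reply is not None and hmac.compare_digest(reply, expected):
        return "friend"
    # No reply, or a wrong reply: could be an enemy OR a neutral bystander.
    # The handshake cannot distinguish the two -- the gap the article describes.
    return "unknown"

friendly_vehicle = lambda c: transponder_reply(c, SHARED_KEY)
civilian = lambda c: None  # no transponder at all
```

Here `interrogate(friendly_vehicle)` returns "friend" while `interrogate(civilian)` returns "unknown" — never "foe," which is the limitation ATD is meant to address.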
That’s kind of where we want to go with the research in the future. We want to figure out how to tell a soldier most effectively how reliable the automation is.”

The next step will be to take what the team at CEL has learned about automated targeting and put it into practice with some of the experimental tech that currently exists. “Within the FSAR project we will also conduct field trials where weapons have actual ATD and soldiers will use those,” Hollands wrote me, “so we will look at real weapons with real algorithms in those studies.”

Eventually, automatic targeting tech will make it out of tests and onto the battlefield. But in between, thorny design questions will need to be answered: What imaging technique and algorithm will be used to identify targets? Will the device be mounted on the soldier or their weapon? How large will it be? How heavy?

Machines that tell humans on the ground who to shoot at are still years away from being deployed, but Jamieson and Hollands’ work makes one thing clear: technical advances aside, tomorrow’s computer-aided warfare will be about trust.