News Article | September 18, 2009
After seven years of development and internal bickering, the IEEE (Institute of Electrical and Electronics Engineers) recently signed off on the 802.11n wireless standard, meaning it's fully approved (or ratified) for use in wireless kit. The final amendments will be published in October. Replacing the 802.11g specification, 802.11n is the new Wi-Fi. It's capable of delivering greater range, improved reliability and faster data speeds thanks to the introduction of MIMO (Multiple Input, Multiple Output) technology.

1. Seven years? Why the delay?

It's hardly surprising that 802.11n took so long to reach the finish line. According to the IEEE, over 400 equipment and silicon suppliers, service providers, systems integrators, consultant organisations and academic institutions were involved in developing the specification. Bruce Kraemer, Chair of the IEEE Wireless LAN Working Group, also points out that "when [the IEEE] started in 2002, many of the technologies addressed in 802.11n were university research topics and had not been implemented."

A small format war didn't help either. By 2004, the 802.11 Task Group (TGn) had received 32 different proposals to define the core specifications of the 802.11n standard. These were whittled down to two rival proposals by 2005. In the blue corner, the WWiSE consortium gathered together the likes of Airgo Networks, Broadcom, Motorola, Nokia and Texas Instruments; in the red corner, the TGn Sync group was backed by tech heavyweights such as Intel, Atheros Communications, Samsung, Sony, Philips and Panasonic. The two groups merged their specifications into an 802.11n Draft 1.0 a year later (albeit with 12,000 nit-picking comments attached).

2. Haven't we been using 802.11n for the past few years?

Yes. And no. Manufacturers have always been keen to release faster wireless products.
802.11g, for example, was unofficially pushed beyond its 54Mbps limit with channel-bonding 'Super G' and 'Turbo' products, which accelerated performance to 108Mbps. Devices such as Belkin's G Plus MIMO pushed this still further. The first 802.11n-inspired products appeared in 2006 under the 'Pre-N' banner. These models were based on Draft 1.0 of the 802.11n standard.

MIMO: The technology originally appeared in souped-up 802.11g products such as Belkin's G Plus MIMO router

Draft 1.0, however, was criticised for its poor throughput, and interoperability issues between products from different manufacturers scared off many consumers. A more stable Draft 2.0 specification was issued in 2007, and this formed the basis for the 'Draft-N' and 'Wireless-N' products that have been sold to date by manufacturers including Belkin, Linksys, D-Link and Netgear. Although the final 802.11n specification has moved on to Draft 11.0, there haven't been any significant technical changes that would have required new hardware.

3. Will my old kit work with the final standard?

Again: yes and no. It's a "yes" if your router is based on Draft 2.0 of the 802.11n specification and was officially certified by the Wi-Fi Alliance. It's a "no" if your router is based on Draft 1.0 and calls itself Pre-N. According to the Wi-Fi Alliance, all existing Wi-Fi Certified Draft N wireless products will be compatible with the final standard.

DRAFT 2.0: Devices based on Draft 2.0 of the 802.11n specification (such as the Belkin N1 Vision) will be compatible with the final, ratified standard

Some wireless manufacturers have also been quick to reassure consumers, announcing "full compliance" with the final version of 802.11n. Belkin, for example, has already stated that its products currently on the market are "already compliant and do not require firmware upgrades or other software downloads". Netgear told TechRadar that its current Draft-N models "will be upgradeable via a firmware upgrade".
If you're unsure, check your router or its box for the Wi-Fi Alliance's official Wi-Fi Certified logo. If it carries the logo, you're compatible, and new firmware will be released so that you're completely compliant.

4. So what's the advantage of a ratified 802.11n?

To consumers, not that much. Considering that 802.11n (Draft 2.0) products have been available for the past two years, the appearance of official 802.11n hardware in 2010 is unlikely to make much of an impact. As far as many people are concerned, they've already upgraded to 802.11n. The optional extras included in the final specification (including packet aggregation to improve efficiency and 3x3 MIMO configurations for higher throughput) aren't deal-breakers. In contrast, the ratification of 802.11n should give businesses the confidence to upgrade – although it's worth pointing out that the Wi-Fi Alliance has already certified over 80 enterprise-grade Draft 2.0 devices.

5. What comes after 802.11n?

Using its optional 40MHz mode, 802.11n is capable of delivering throughput of up to 600Mbps by combining four 150Mbps spatial streams. It's easily fast enough for the demands of streaming video. But research is already under way on Gigabit wireless networking technology to replace it. The High-Throughput Study Group (HTSG) that dreamt up 802.11n has since birthed two new groups. These are working on future standards using frequencies below 6GHz (dubbed 802.11ac) and in the 60GHz band (802.11ad). These wireless technologies could potentially double the performance and range of 802.11n. But given the IEEE's track record, we might have to wait until 2016...
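Where does the headline 600Mbps figure come from? It falls out of the standard's physical-layer arithmetic. As a rough sketch (assuming the usual published HT parameters: a 40MHz channel carrying 108 data subcarriers, 64-QAM modulation at 6 bits per subcarrier, a 5/6 coding rate, four spatial streams, and the short 3.6-microsecond guard-interval symbol time):

```python
# Back-of-the-envelope check of 802.11n's quoted data rates.
# Parameter defaults reflect the commonly published values for the
# top 40MHz short-guard-interval mode; treat them as illustrative.

def ht_data_rate_mbps(data_subcarriers=108, bits_per_subcarrier=6,
                      coding_rate=5/6, spatial_streams=4,
                      symbol_time_us=3.6):
    """Theoretical PHY data rate in Mbps for an 802.11n HT mode."""
    # Usable data bits carried by one OFDM symbol across all streams.
    bits_per_symbol = (data_subcarriers * bits_per_subcarrier
                       * coding_rate * spatial_streams)
    # Bits per microsecond is numerically equal to Mbps.
    return bits_per_symbol / symbol_time_us

print(ht_data_rate_mbps())                    # four streams: 600.0
print(ht_data_rate_mbps(spatial_streams=2))   # typical Draft-N router: 300.0
```

The same formula explains the 150Mbps-per-stream figure in the article: each extra spatial stream adds another 150Mbps, so a two-stream Draft-N router tops out at 300Mbps and a four-stream device reaches 600Mbps.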
News Article | September 7, 2016
An experimental division of Google parent Alphabet is harnessing Google’s advertising technology to help stop the spread of ISIS. "ISIS is a terrorist group unlike any that we’ve seen before," says Yasmin Green, the head of research and development at Jigsaw, an internal tech incubator focusing on international security issues. "They’ve been successful in capturing both physical territory and digital territory." The extremist group has reportedly lost physical territory in Iraq and Syria in recent months, but security experts have long warned that its sophisticated media strategy—involving videos, social media, and even glossy print publications—still enables the group to attract supporters from around the world. "With the widespread horizontal distribution of social media, terrorists can identify vulnerable individuals of all ages in the United States—spot, assess, recruit, and radicalize—either to travel or to conduct a homeland attack," Federal Bureau of Investigation Director James Comey told Congress last year. "The foreign terrorist now has direct access into the United States like never before." Jigsaw, formerly known as Google Ideas, concluded an 8-week pilot program earlier this year that used the same technologies that let commercial advertisers target internet users most likely to be interested in their products, in this case to identify users demonstrating sympathies toward ISIS. Then, says Green, online ads pointed them toward content, in both English and Arabic, delivering alternative viewpoints in ways that can actually change their minds. "They usually made their decision to join [ISIS] based on partial information," she says of those who've joined the terror group in the past. "That’s really the bet we’re making here, is that with better information, individuals will be empowered to make better choices."
Before launching the test campaign, members of the Jigsaw team did extensive field research, meeting with former ISIS sympathizers and members of targeted communities from Iraq to London, trying to understand everything from how they use mobile phones to what motivated their initial sympathies for the terror group. They then formulated an advertising campaign targeting internet users whose search keywords indicated a potential for radicalization, not just an interest in mainstream news coverage of terrorism or events in the Middle East. "We were factoring in these types of things: supportive slogans, deferential terms for the Islamic State, preferences for ISIS-produced content," says Green. For instance, ISIS sympathizers are more likely to use an Arabic-language slogan meaning "remaining and expanding," and they're more likely to use certain terminology for the group itself and political concepts it embraces, like the return of the Islamic political institution known as the caliphate, she says. A Google advertising tool called the Keyword Planner, which uses Google’s substantial data collections to suggest relevant keywords for an ad campaign, helped find further terms to target with ads, she says. The company didn’t do any offline tracking of targeted users, so it can’t say how many people may have actually been dissuaded from joining ISIS, but it still saw encouraging signs from the pilot. "Over 8 weeks, in Arabic and English, that this pilot ran, it reached an estimated 320,000 unique individuals, half of which we believed showed signs of positive sentiment toward the Islamic State," Green says. And, she says, the click-through rate of the ads Jigsaw placed was on average 70% higher than that of others targeting the same keywords. But simply placing advertisements is only half the battle: Green and her team also had to decide what kinds of content those ads would promote.
They quickly decided to curate existing content online, rather than producing new material, but that still left a lot of choices to be made. And sending users to videos or blog posts that just offer "facile parody" of ISIS, use terminology that's seen as overly derogatory, or simply come off as overly preachy will just alienate the people Jigsaw is trying to reach, she says. "It turns out a lot of the content being produced in this space, I liken it to showing smokers their lungs with nicotine [damage] on the side of the cigarette packet," she says. And even mainstream Western news sources, like the BBC, can be seen as biased by potential ISIS recruits, Green says the team learned through field research. They decided, instead, to focus on citizen journalism and documentary footage showing the realities of life under ISIS and the struggles the group has been having militarily, along with material highlighting the religious debate around some of the Islamic concepts ISIS cites and testimony from former ISIS supporters who had left the group. "Those were among the most compelling," Green says. "These were individuals who had just come back—they were until very recently subscribing to ISIS ideology." Ultimately, users targeted by the ads collectively watched about 500,000 minutes of video, she says. And as the project expands, Green hopes to work with external funding organizations and advertising groups to expand to other languages and potentially even enable deradicalization experts to work one-on-one with potential ISIS recruits who are posting on YouTube and social media networks. The efforts may one day expand to combat other forms of extremism, such as white supremacist movements, she says. Since shortly after it was founded in 2010 as Google Ideas, the group has been in contact with former extremists of a variety of stripes, looking to learn why young people are drawn to such movements. 
Jigsaw’s project isn’t the only effort to focus on countering ISIS propaganda: Obama administration officials have met with executives from Hollywood movie studios and social media companies like Snapchat and Facebook to discuss ways to limit and counter the group’s global reach. And while the pilot program arose within Alphabet, a company central to internet advertising, Green says there’s no reason a similar project couldn’t begin elsewhere. "There’s no really secret sauce here," she says. "This is really just about setting the target audience as those who already engaged, informing the campaigns with insights from defectors and former members, and getting the insights to design really good campaigns."
One of the top science stories of 2012 involved a furore about the wisdom of enhancing the transmissibility of the H5N1 avian influenza virus in ferrets. In that same year, fears mounted that do-it-yourself (DIY) biologists would cook up their own versions of the virus using information published in the academic press. Now, journalists and others are again targeting the citizen-science community — a group of people with or without formal training who pursue research either as a hobby or to foster societal learning and open science — amid fears about the nascent gene-editing technology CRISPR–Cas9. In January, the San Jose Mercury News ran an article under a pearl-clutching headline: “Bay Area biologist's gene-editing kit lets do-it-yourselfers play God at the kitchen table.” And although they are much less alarmist, scholars are advising policymakers to consider the potential uses of gene editing “outside the traditional laboratory setting” (Am. J. Bioeth. 15, 11–17; 2015). The reality is that the techniques and expertise needed to create a deadly insect or virus are far beyond the capabilities of the typical DIY biologist or community lab. Moreover, pursuing such a creation would go against the culture of responsibility that DIY biologists have developed over the past five years. In fact, when it comes to thinking proactively about the safety issues thrown up by biotechnology, the global DIY-biology community is arguably ahead of the scientific establishment. The equipment and reagents that are needed to use CRISPR–Cas9 are already readily available to DIY biologists. Members of the teams that participated in the 2015 International Genetically Engineered Machine (iGEM) competition — including high-school students and users of community labs around the world — received CRISPR–Cas9 plasmids in their starting kits.
These kits contain more than 1,000 standard biological parts known as BioBricks, the DNA-based building blocks that participants need to engineer a biological system for entering into the competition. Other components of the CRISPR–Cas9 system are also available from the iGEM registry (http://parts.igem.org/CRISPR). Yet few DIY biologists seem to be using the technology. Both Tom Burkett, founder of the Baltimore Under Ground Science Space in Maryland, and Ellen Jorgensen, executive director of Genspace — a community lab in Brooklyn, New York — say that their users are interested in CRISPR–Cas9, and Genspace will be offering a workshop on it in March. But none of the projects currently being pursued in these spaces require it. Users of the La Paillasse community lab in Paris are similarly focused on projects that do not need CRISPR–Cas9. The materials might be available, but the knowledge and understanding needed to make edits that have the desired effects are not. Also, most DIY biologists are interested in building genetic circuits in bacteria or yeast, and they can generally do this using well-established techniques, such as SLiCE (seamless ligation cloning extract), and with genes that have been synthesized by commercial suppliers or that can be obtained from the iGEM registry. CRISPR–Cas9 is a fast-moving technology that may well become more popular with DIY biologists in the coming months and years. Even if this happens, there is no a priori reason to expect this community to cause more harm when using it than anyone else. The DIY-biology community developed codes of conduct in mid-2011 (https://diybio.org/codes). At this point, the community comprised one shared laboratory (Genspace), which opened in December 2010, and a loose-knit collection of groups from across the globe, each with different levels of expertise, resources and protocols. 
In discussions online and in face-to-face gatherings, it emerged that if the DIY-biology community was to advance and start pursuing more-sophisticated projects, it would need to develop a set of governance principles. Together with Jason Bobe, a co-founder of DIYbio.org, an online hub for people interested in pursuing DIY biology, I convened a series of workshops that brought together groups from the United Kingdom, Denmark, France and Germany. We then repeated the exercise with six groups in the United States. We knew that a set of rules outlining appropriate practices would be effective only if those rules had been developed and agreed on together. Today, Genspace and other community labs around the world have their own advisory boards or can seek advice from the 'Ask a biosafety professional your question' portal (http://ask.diybio.org). The portal's panels review proposals for projects and flag potential safety issues. In the United States, community labs have even developed relationships with the Federal Bureau of Investigation, which has introduced members to local police and fire departments to maximize preparedness for security issues that could arise. In many ways, this proactive culture of responsibility is an advance on the post hoc scrambling that often occurs within the scientific establishment. Much of the debate about the pros and cons of the H5N1 experiments took place while the work was under review for publication. And in the case of gene editing, even the US National Academy of Sciences was caught on the hop. It did not begin to seriously discuss the risks associated with using the approach to engineer genes that could quickly spread through wild populations — known as gene drives — until after experiments demonstrating the concept in fruit flies had been published in a peer-reviewed journal (Science 348, 442–444; 2015).
Of course, community norms will have little effect on the behaviour of rogue individuals who are intent on causing mischief or harm. But such people could just as easily be scientists working in government, university or commercial labs as DIY biologists. Indeed, the current culture of responsibility among DIY biologists, their collaborative style of working and the fact that community labs are open spaces in which everyone can see what is going on reduce, if not eliminate, doomsday scenarios of mutant organisms escaping from basements and causing harm. One development that has increased anxiety about the use of CRISPR–Cas9 by DIY biologists is a crowdsourcing venture by synthetic biologist Josiah Zayner, founder of the Open Discovery Institute in Burlingame, California. Thirty days after launching his campaign on the crowdfunding website Indiegogo last November, Zayner had raised almost US$34,000 to fund the production and distribution of DIY CRISPR kits — supposedly to help people “learn modern science by doing”. (He has since raised more than $62,000, six times his original goal.) But the concern about Zayner's project arises not because it gives people outside conventional labs more capabilities than they would otherwise have had. DIY biologists already use various tools to assemble DNA fragments in bacteria and yeast — the microorganisms that he supplies in his kits. Zayner's campaign is worrisome because it does not seem to comply with the DIYbio.org code of conduct. The video that accompanies his campaign zooms in on Petri dishes containing samples that are stored next to food in a refrigerator. More than anything, Zayner's campaign is a reminder of the myriad ways in which researchers — conventional or otherwise — can now get their work funded. With the ready availability of tools such as CRISPR–Cas9 and crowdfunding, a more-decentralized governance is needed for everyone, not just DIY biologists. 
Codes of conduct will be needed to establish appropriate norms for government funding and regulatory agencies, for people working both within and outside conventional research settings, for the directors of community labs and for the developers of crowdfunding platforms. The DIY-biology community, as a stakeholder that has already addressed many of the underlying issues, should take part in a robust public dialogue about the use of CRISPR–Cas9 and how governance models can ensure safe, responsible research.
News Article | January 25, 2016
The U.S. Federal Bureau of Investigation (FBI) has confirmed that it took over the operations of the largest known child pornography website as part of a sting operation. The U.S. Justice Department acknowledged in court filings that the FBI operated a website called Playpen for two weeks in early 2015. During the sting operation, the FBI found that the website had over 215,000 registered users and hosted links to over 23,000 sexually explicit videos and images of children, including around 9,000 files that could be downloaded directly from the agency's servers. The details of the operation remain largely secret, but this was the third time the FBI had taken control of a child pornography website and left it operational to catch people who accessed the content. In the latest operation, the FBI used software that could identify users even when they employed security tools to hide their identities. In the past, the government did not allow FBI agents to make child pornographic images available online to catch those who viewed them. The Justice Department's position is that the children in the images are harmed each time someone views them, and that the FBI has no way to stop the images from being copied, re-copied and circulated on the Internet. FBI agents acknowledge the associated risks, but say there was no other way to identify those who accessed these websites. "We had a window of opportunity to get into one of the darkest places on Earth, and not a lot of other options except to not do it," says Ron Hosko, a former senior FBI official who was involved in planning one of the agency's first efforts to take over a child porn site. "There was no other way we could identify as many players." The FBI says it noticed Playpen soon after the site became operational in August 2014.
The website was hidden on the "dark web," a part of the Internet accessible only via Tor, network software that bounces users' Internet traffic from one system to another so that it is not easily traceable. The FBI says that by March 2015, Playpen had become the largest known child pornography service in the world. The agency traced the website's servers to North Carolina and secretly moved them to its own facility in Newington to run the sting operation. FBI agents say that shutting the website down immediately after it was found would not have enabled law enforcement officers to identify offenders and rescue victims from abuse.
News Article | January 28, 2016
As local police departments turn more to digital systems to manage evidence and communicate with the public, they become increasingly vulnerable to cyberattacks, experts warn. "U.S. law enforcement will be breached," security firm PKWare said earlier this month in its list of digital security predictions for this year. "From body cameras to police databases, cyberattacks against law enforcement could become widespread in 2016." Hackers have targeted agencies involved in political controversies in recent years, with police departments and other local agencies in Baltimore, Cleveland, and Madison, Wisconsin, all seeing various forms of digital attacks by groups like political hacker collective Anonymous after controversial shootings by police. "You can expect that if you have a questionable shooting that occurs, you’re gonna get hacked," says Terry Sult, chief of police in Hampton, Virginia. Sult has written and spoken about cybersecurity for the International Association of Chiefs of Police (IACP). Sophisticated attackers could access police systems to learn the identities of witnesses, tamper with evidence, or try to blackmail the targets of investigations, says Winnie Callahan, the director of the University of San Diego Center for Cyber Security Engineering and Technology. "It does require being extremely careful, and assuming that someone wants to get in, and that you’re very, very up to date on the cyberhacking techniques," says Callahan, who’s worked on efforts to teach law enforcement officers about electronic crime. "The thing is that their records that they’re holding really do have tremendous impact on the people—the victims of crime and the criminals themselves." Once hacked, police information can be leaked. 
An Arizona state police agency was hacked multiple times by political hacker groups in 2011, with information about officers leaked to the public, and multiple police departments in Maine paid hackers to restore files held ransom by malware last year, according to the Portland Press Herald. Those kinds of risks mean that it’s essential for officers who are interacting with digital systems to know the basics of digital evidence preservation—like not turning off a computer at a crime scene that could have encryption enabled—and security, like not putting thumb drives that could have malware on them into police computers, says Callahan. Departments also need to make sure that digital tools they use are properly secure, which often means bringing in outside experts to evaluate vendors’ promises and audit police IT systems, she says. "Get a third party that doesn’t have an axe to grind or a dog in the fight, so to speak, to take a look at what a vendor is selling, and be sure that you can verify that what they say a particular piece of equipment can do, does that, and nothing more," she says. "Sometimes you can put things in, and they do a particular activity for you, but they do other things in their spare time, and that’s extremely dangerous, and that happens quite a bit." A security audit at a police department where Sult previously worked was an "eye opener," he recalls, turning up vulnerabilities like former employees who still had active accounts on departmental systems. "We found some surprising things, and I don’t think it’s unique to police departments," he says. "We found out that what we thought we had, and what we actually had, were not the same thing." In other cases, police departments have apparently unintentionally left sensitive data accessible to the public at large. 
The Electronic Frontier Foundation (EFF) reported last year that more than 100 license plate recognition systems were misconfigured, making live footage and plate information available on publicly accessible websites. And the weekly newspaper DigBoston reported last fall that Boston authorities had made license plate information, including people’s addresses, available on another public server. "Law enforcement agencies love to get new technological toys, but what they don’t necessarily keep in mind as they purchase this is that there’s an ongoing cost of upgrading, making sure it’s security tested—there’s a lot of upkeep that goes into it," says Dave Maass, an investigative researcher at the EFF. If systems aren’t patched and maintained, they can become vulnerable over time, and insecure systems can be more easily discovered, thanks to search engines like Shodan that index Internet-connected devices. "It could be all sorts of stuff that are just out there and connected to the Internet and nobody thought to lock down, or at least when they installed it, there weren’t the kind of threats that there are now," he says. Ideally, Maass says, police departments think carefully about how to protect data before they collect or store it—including taking into account the risk of insiders abusing legitimate access rights—and lawmakers should make sure agencies budget for maintenance, not just the initial installation of new tools, he says. "You don’t approve it just based on the initial pilot program or initial expenditure—you need to make sure the police officers have a five- or 10-year [plan] for updating the system or maintaining the system, with all of those costs built in," he says. 
Police departments are themselves becoming more aware of the risks, says Sult, thanks in part to efforts by groups like the IACP, which maintains its own Law Enforcement Cyber Center, and agencies like the Federal Bureau of Investigation, which offers training and tools to state and local agencies through its Cyber Shield Alliance program. "It’s individual—agency by agency," he says. "Some agencies are more prepared than others."