New York, NY, United States

Napoli P.M., Duke University | Caplan R., Data and Society
First Monday | Year: 2017

A common position amongst social media platforms and online content aggregators is their resistance to being characterized as media companies. Rather, companies such as Google, Facebook, and Twitter have regularly insisted that they should be thought of purely as technology companies. This paper critiques the position that these platforms are technology companies rather than media companies, explores the underlying rationales, and considers the political, legal, and policy implications associated with accepting or rejecting this position. As this paper illustrates, this is no mere semantic distinction: historically, the precise classification of communications technologies and services has had profound ramifications for how they are treated by policy-makers and the courts. © 2017, Philip M. Napoli and Robyn Caplan. All Rights Reserved.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA | Phase: INFRA-2007-3.0-03 | Award Amount: 4.06M | Year: 2008

PESI provides standardised and authoritative taxonomic information by integrating and securing Europe's taxonomically authoritative species name registers and nomenclators (name databases) that underpin the management of biodiversity in Europe.

PESI defines and coordinates strategies to enhance the quality and reliability of European biodiversity information by integrating the infrastructural components of four major community networks on taxonomic indexing into a joint work programme. This will result in functional knowledge networks of taxonomic experts and regional focal points, which will collaborate on the establishment of standardised and authoritative taxonomic (meta-)data. In addition, PESI will coordinate the integration and synchronisation of the European taxonomic information systems into a joint e-infrastructure and the set-up of a common user interface disseminating the pan-European checklists and associated user services.

The organisation of national and regional focal point networks as projected not only assures efficient access to local expertise, but is also important for the synergistic promotion of taxonomic standards throughout Europe, for instance to liaise with national governmental bodies on the implementation of European biodiversity legislation. In addition, PESI will start with the geographic expansion of the European expertise networks to eventually cover the entire Palaearctic biogeographic region.

PESI supports international efforts on the development of a Global Names Architecture by building a common intelligent name-matching device in consultation with the principal initiatives (GBIF, TDWG, EoL, SpeciesBase). PESI contributes to the development of a unified cross-reference system and provides high-quality taxonomic standards. PESI will further involve the Europe-based nomenclatural services and link the planned joint European taxonomic e-infrastructure's middle layer to the global e-gateway.
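The "intelligent name-matching device" mentioned above is, at its core, a fuzzy string matcher over species names. As a rough illustration only, assuming nothing about PESI's actual implementation, a checklist lookup might be sketched in Python like this (the checklist entries and the 0.85 cutoff are invented for the example):

    # Rough illustration only: fuzzy-matching a queried taxon name against a
    # checklist, in the spirit of the name-matching device described above.
    # The checklist entries and similarity cutoff are invented for the example.
    from difflib import SequenceMatcher

    CHECKLIST = ["Abies alba", "Quercus robur", "Quercus rubra", "Fagus sylvatica"]

    def best_match(query, names, cutoff=0.85):
        """Return the most similar checklist name and its score, or (None, score)."""
        score, name = max((SequenceMatcher(None, query.lower(), n.lower()).ratio(), n)
                          for n in names)
        return (name, score) if score >= cutoff else (None, score)

    print(best_match("Quercus robor", CHECKLIST))  # ('Quercus robur', ~0.92)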


Fukuma S., Kyoto University | Yamaguchi T., Tohoku University | Hashimoto S., Data and Society | Nakai S., Data and Society | And 3 more authors.
American Journal of Kidney Diseases | Year: 2012

Background: Patient responsiveness to erythropoiesis-stimulating agents (ESAs), notoriously difficult to measure, has attracted attention for its association with mortality. We defined categories of ESA responsiveness and attempted to clarify their association with mortality. Study Design: Cohort study. Setting & Participants: Data from Japan's dialysis registry (2005-2006), including 95,460 adult hemodialysis patients who received ESAs. Predictor: We defined 6 categories of ESA responsiveness based on a combination of ESA dosage (low [<6,000 U/wk] or high [≥6,000 U/wk]) and hemoglobin level (low [<10 g/dL], medium [10-11.9 g/dL], or high [≥12 g/dL]), with medium hemoglobin level and low-dose ESA therapy as the reference category. Outcomes: All-cause and cardiovascular mortality during 1-year follow-up. Measurements: HRs were estimated using a Cox model for the association between responsiveness categories and mortality, adjusting for potential confounders such as age, sex, postdialysis weight, dialysis duration, comorbid conditions, serum albumin level, and transferrin saturation. Median ESA dosage (4,500-5,999 U/wk) was used as the cutoff point, and mean hemoglobin level was 10.1 g/dL in our cohort. Results: Of 95,460 patients during follow-up, 7,205 (7.5%) died of all causes, including 5,586 (5.9%) cardiovascular deaths. Low hemoglobin levels and high-dose ESA therapy were both associated with all-cause mortality (adjusted HRs, 1.18 [95% CI, 1.09-1.27] for low hemoglobin level with low-dose ESA and 1.44 [95% CI, 1.34-1.55] for medium hemoglobin level with high-dose ESA). Adjusted HRs for high-dose ESA with low hemoglobin level (hyporesponsiveness) were 1.94 (95% CI, 1.82-2.07) for all-cause and 2.02 (95% CI, 1.88-2.17) for cardiovascular mortality. We also noted an interaction between ESA dosage and hemoglobin level on all-cause mortality (likelihood ratio test, P = 0.002). Limitations: Potential residual confounding from unmeasured factors and single measurement of predictors. Conclusions: Mortality can be affected by ESA responsiveness, which may include independent and interactive effects of ESA dose and hemoglobin level. Responsiveness category has prognostic importance and clinical relevance in anemia management. © 2011 by the National Kidney Foundation, Inc.
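For readers unfamiliar with the method, the hazard ratios above come from a Cox proportional hazards regression. A minimal sketch of such a fit, using the open-source lifelines library on a hypothetical data frame (the file and column names are invented; this is not the registry analysis itself):

    # Minimal sketch of a Cox proportional hazards fit of the kind described
    # above, using the lifelines library. All file and column names here are
    # hypothetical; this is not the registry analysis itself.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("cohort.csv")  # one row per patient (hypothetical file)

    cph = CoxPHFitter()
    cph.fit(
        df,
        duration_col="followup_years",   # follow-up time
        event_col="died",                # 1 = death observed, 0 = censored
        formula="C(esa_category) + age + sex + albumin + tsat",
    )
    cph.print_summary()  # the exp(coef) column gives the adjusted hazard ratios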


News Article | October 26, 2016
Site: www.newscientist.com

“Uber, your ratings are a complete joke,” complained a driver recently on UberPeople, an online forum for workers using the popular ride-sharing service. Their post inspired a heap of comments from drivers upset about their own unexplained low scores. Others groaned – whinging about the rating system was old news. “Does it suck? Yes. Is it unfair? Yes,” wrote one. “Just keep being diligent and you’ll be fine. And yes, let’s stop talking about this now.”

But as well as causing annoyance to drivers, a study suggests the ratings mechanism could allow customers’ biases to creep into the system, as has been seen with other “gig economy” platforms like Airbnb.

To better understand how customer ratings affect Uber drivers, a team led by technology ethnographer Alex Rosenblat at the Data and Society research institute in New York scoured months of forum posts and interviewed drivers. They presented the work at the Internet, Politics & Policy conference at the University of Oxford last month.

The trouble is that many passengers may not understand how important the ratings are to drivers, or which criteria Uber expects users to consider, says Rosenblat. People might think that a four-star rating is good, for example. But as drivers risk being kicked off the platform if their average score drops below 4.6, anything under the maximum five stars constitutes a failing grade.

The rating system could also open the back door to discrimination against drivers, say the team. They acknowledge that it is hard to determine the role prejudice plays in Uber ratings, if any, without access to the company’s internal data. But studies of other online services, such as Craigslist and Airbnb, have already shown that people of colour, for example, often experience hidden bias from users. One study found that people with names perceived as African-American are 16 per cent less likely to be accepted as Airbnb guests than those with white-sounding names. As a result, the study authors write: “Consumer-sourced ratings of this sort are highly likely to be influenced by bias on the basis of factors like race or ethnicity.”

Watching what Uber does is important, because the firm sets the tone for how other gig economy companies might choose to manage their workers. Uber has fought to describe itself as a technological platform, not a taxi company, meaning that drivers are considered independent contractors rather than employees. Using software to keep tabs on so many scattered workers from afar may introduce a new crop of workplace problems.

Rosenblat’s team hopes Uber will keep an eye out for any problematic patterns regarding bias against drivers. In the meantime, they offer a few technological solutions, such as using smartphone sensors or human evaluators to validate customer ratings. Alternatively, ratings could be unlinked from employment decisions entirely, replaced with an internal evaluation system, or perhaps used just to rank the drivers. “I think decoupling ratings from employment makes a lot of sense,” says Rosenblat. “You could also go back to the more old-fashioned way where you use ratings as a red flag and then send a human evaluator to check in.”

Uber had not responded to a request for comment as New Scientist went to press.

This article appeared in print under the headline “Help, my boss is an algorithm”


Komaba H., Tokai University | Taniguchi M., Data and Society | Wada A., Data and Society | Iseki K., Data and Society | And 2 more authors.
Kidney International | Year: 2015

Parathyroidectomy (PTx) drastically improves biochemical parameters and clinical symptoms related to severe secondary hyperparathyroidism (SHPT), but the effect of PTx on survival has not been adequately investigated. Here we analyzed data on 114,064 maintenance hemodialysis patients from a nationwide registry of the Japanese Society for Dialysis Therapy to evaluate the associations of severity of SHPT and history of PTx with 1-year all-cause and cardiovascular mortality. We then compared the mortality rate between 4428 patients who had undergone PTx and 4428 propensity score-matched patients who had not, despite severe SHPT. During a 1-year follow-up, 7926 patients of the entire study population died, of whom 3607 died from cardiovascular disease. Among patients without a history of PTx, severe SHPT was associated with an increased risk for all-cause and cardiovascular mortality. However, such an increased risk of mortality was not observed among patients with a history of PTx. In the propensity score-matched analysis, patients who had undergone PTx had a 34% and 41% lower risk for all-cause and cardiovascular mortality, respectively, compared to the matched controls. The survival benefit associated with PTx was robust in several sensitivity analyses and consistent across subgroups, except for those who had persistent postoperative SHPT. Thus, successful PTx may reduce the risk for all-cause and cardiovascular mortality in hemodialysis patients with severe, uncontrolled SHPT. © 2015 International Society of Nephrology.
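A bare-bones sketch of the propensity score matching step described above, offered only as an illustration (hypothetical column names, matching with replacement, no caliper; the actual study's procedure was more careful):

    # Bare-bones 1:1 propensity-score matching in the spirit of the analysis
    # above. Column names are hypothetical; matching is with replacement and
    # without a caliper, so this is an illustration, not the study's method.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    df = pd.read_csv("shpt_cohort.csv")  # hypothetical file
    covariates = ["age", "sex", "dialysis_years", "pth", "calcium", "phosphate"]

    # 1. Estimate each patient's propensity of having undergone PTx.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["ptx"])
    df["pscore"] = model.predict_proba(df[covariates])[:, 1]

    # 2. Match each PTx patient to the non-PTx patient with the closest score.
    treated = df[df["ptx"] == 1]
    control = df[df["ptx"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control.iloc[idx.ravel()]

    # 3. Compare 1-year mortality between the matched groups.
    print(treated["died"].mean(), matched["died"].mean())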


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: Big Data Science & Engineering | Award Amount: 296.62K | Year: 2016

Data-driven modeling has moved beyond the realm of consumer predictions and recommendations into areas of policy and planning that have a profound impact on our daily lives. The tools of data analysis are being harnessed to predict crime, select candidates for jobs, identify security threats, determine credit risk, and even decide treatment plans and interventions for patients. Automated learning and mining tools can crunch an incredible amount and variety of data in order to detect patterns and make predictions. As is rapidly becoming clear, these tools can also introduce discriminatory behavior and amplify biases present in the data they are trained on. In this project, the PIs will study the problems of discrimination and bias in algorithmic decision-making. By studying all aspects of the data pipeline (from data preparation to learning, evaluation, and feedback), they will develop tools for analyzing, auditing, and designing automated decision-making systems that will be fair, accountable, and transparent. As specific goals to broaden the impact of this research, the PIs will develop a course curriculum to educate the next generation of data scientists on the ethical, legal, and societal implications of algorithmic decision-making, with the intent that they will then take this understanding into their jobs as they enter the workforce. Initial efforts by the PIs have attracted students from underrepresented groups in computer science, and they will continue these efforts. The PIs will also explore the legal and policy ramifications of this research, and develop best practice guidelines for the use of their tools by policy makers, lawyers, journalists, and other practitioners.

The PIs will explore the technical subject of this project in three ways. First, they will develop a sound theoretical framework for reasoning about algorithmic fairness. This framework carefully separates mechanisms, beliefs, and assumptions in order to make explicit the implicitly held assumptions about the nature of fairness in learning. Second, by examining the entire pipeline of tasks associated with learning, they will identify hitherto unexplored areas where bias may be unintentionally introduced into learning, as well as novel problems associated with ensuring fairness. These include the initial stages of data preparation, various kinds of fairness-aware learning, and evaluation. Third, they will investigate the problem of feedback: when actions based on a biased learned model might cause a feedback loop that changes reality and leads to more bias.
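As a concrete, if simplified, example of the kind of quantity such a framework formalizes, demographic parity can be checked by comparing positive-prediction rates across a protected attribute. The following sketch computes disparate impact, a standard fairness metric, not necessarily the PIs' own formulation:

    # Simplified example of one standard fairness check (disparate impact),
    # not necessarily the PIs' own formulation.
    import numpy as np

    def disparate_impact(y_pred, group):
        """Ratio of positive-prediction rates between groups 0 and 1.
        Values near 1.0 indicate demographic parity; the common '80% rule'
        flags ratios below 0.8 as potentially discriminatory."""
        rate0 = y_pred[group == 0].mean()
        rate1 = y_pred[group == 1].mean()
        return min(rate0, rate1) / max(rate0, rate1)

    # Toy model that approves ~60% of group 0 but only ~30% of group 1.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)
    y_pred = np.where(group == 0, rng.random(1000) < 0.6, rng.random(1000) < 0.3)
    print(f"disparate impact: {disparate_impact(y_pred, group):.2f}")  # ~0.5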


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: INFORMATION TECHNOLOGY RESEARC | Award Amount: 299.84K | Year: 2014

Big data are changing the ways in which individuals are tracked and the corresponding ways in which science must respond. Data uses run the gamut from public health modeling to personalization of online content to potentially discriminatory practices. Big data involve not just new tools for getting better answers but also new social, cultural, and ethical issues. This award establishes a model for dealing with the ethical, social, and legal impacts of big data projects from their outset, with an eye to developing next-generation protocols to cover these kinds of impacts. A multidisciplinary council will be established to engage with and complement discussions already underway in mathematics, the sciences, engineering, and computer science. The goal of the Council will be to address issues such as security, privacy, equality, and access in order to help guard against the repetition of earlier mistakes and inadequate preparation. Through public commentary, events, white papers, and direct engagement with data analytics projects, the Council will develop frameworks to help researchers, practitioners, and the public understand the social, ethical, legal, and policy issues that underpin big data phenomena.

Functionally, the goal will be to help guard against the repetition of known mistakes and inadequate preparation by working across the domains and disciplines involved in big data projects. In the process, those working on this project will develop new and powerful paradigms for identifying and understanding leading-edge social, political, ethical, and legal issues. Both the research outputs and the coordinated network will help inform the design of scientific projects. Furthermore, the public-oriented nature and accessible outputs of this project will provide input for public discussions surrounding big data phenomena by engaging with journalists, educators, and public policy makers. This project will create an influential community of thought leaders that can help shape the understanding of the complexities of big data and also provide engagement opportunities for young scientists.


News Article | December 16, 2016
Site: phys.org

In their paper, "Discriminating Tastes: Customer Ratings as Vehicles for Bias," researchers examine how bias creeps into evaluations of Uber drivers from their customers.

Researchers Karen Levy and Solon Barocas, assistant professors in the Department of Information Science, Alex Rosenblat of the Data and Society Research Institute, and Tim Hwang of Google, show that through Uber's rating system consumers can directly assert their preferences and biases in ways that companies are prohibited from doing under federal law. "The study collaborators are interested in ways bias can enter into algorithmic systems while not being prohibited by anti-discrimination law," Levy said.

Ratings are especially important to Uber drivers because they risk getting kicked off the platform if their average score falls below a certain level. "The algorithms impact if these drivers get terminated or advanced," Levy said. "We know that people tend to have implicit biases that affect how they evaluate people from different groups. It would be illegal for an employer to discriminate directly, but this creates the possibility for backdoor bias creeping in from customers. These new technologies challenge the traditional way law prevents discrimination in the workplace."

The paper concludes that consumer-sourced ratings are highly likely to be influenced by bias on the basis of factors like race or ethnicity. To help solve this issue, the authors suggest 10 proposed interventions, such as data-quality measures, better design elements in the rating system, or human evaluators to validate ratings.

"Ratings are really subjective and mean different things to different people. Because they're so general, there's a lot of potential for bias to enter into them," Levy said. "There have already been situations in which companies in the sharing economy are dealing with discrimination between their users. The law has not caught up with how to protect people in these new environments."

More information: Rosenblat, Alex and Levy, Karen EC and Barocas, Solon and Hwang, Tim, Discriminating Tastes: Customer Ratings as Vehicles for Bias (October 19, 2016). Available at SSRN: ssrn.com/abstract=2858946
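The paper's ten interventions are not spelled out in this article, but one commonly discussed data-quality idea, offered here purely as a hypothetical illustration and not as a method from the paper, is to normalize each customer's ratings against that customer's own average, so habitual low-raters and habitual five-star raters land on a comparable scale:

    # Purely hypothetical illustration of a rater-normalization idea; not a
    # method taken from the paper. Each rating becomes a deviation from that
    # rider's personal average, so a 4 from a tough rater can outweigh a 5
    # from an easy one.
    import pandas as pd

    ratings = pd.DataFrame({
        "rider":  ["a", "a", "a", "b", "b", "b"],
        "driver": ["x", "y", "z", "x", "y", "z"],
        "stars":  [3, 4, 3, 5, 5, 4],
    })
    ratings["adjusted"] = (ratings["stars"]
                           - ratings.groupby("rider")["stars"].transform("mean"))
    print(ratings.groupby("driver")["adjusted"].mean())  # x: 0.0, y: +0.5, z: -0.5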


News Article | February 23, 2017
Site: www.cnet.com

There's a reason people caution you not to read the comments. Rather than serve as a forum for debate, they often devolve into, well, a cesspool. In an effort to tone down the hate in comments sections, Google and its Alphabet corporate sibling Jigsaw, a technology incubator, launched on Thursday a machine learning tool that weeds out the nastier comments.

"Because of harassment, many people give up on sharing their thoughts online or end up only talking to people who already agree with them," Jigsaw product manager CJ Adams said in a statement.

The software, called Perspective, applies a score to comments based on similarities with other comments categorized as "toxic" by human reviewers. Perspective has reviewed hundreds of thousands of such comments as a way of learning what's toxic and what's not. Drawing on its artificial intelligence roots, it learns as it goes.

Publishers have a few options for what to do with the info Perspective provides. Via the Perspective software, they can flag comments and let moderators take it from there, or they can show commenters themselves if their comments are considered toxic. Another use could be allowing readers to sort through comments.

Online harassment is all too common -- 72 percent of internet users in the US have witnessed it and 47 percent say they've experienced it themselves. Aside from simply being unpleasant, it can have a chilling effect on expression. Twenty-seven percent say they self-censor out of fear, according to a November study from the Data and Society Research Institute.

"To tackle the biggest and most important problems we face, we need better ways to have conversations at scale," Lucas Dixon, Jigsaw chief research scientist, said in a statement.

Other tech companies have introduced tools to fight hate online. In October, Microsoft launched a way to report online abuse for its services including Skype, Outlook and Xbox. Social media platforms like Twitter are also reacting to pressure to curb online harassment. In November, for example, Twitter expanded its mute function.

The New York Times is already testing Perspective to moderate comments, and other publishers will be able to apply for the tool Thursday, free of charge.
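For publishers experimenting with the tool, Perspective is exposed through Google's Comment Analyzer API. A minimal sketch of a scoring request, assuming an API key and the request shape from Google's public documentation at the time (details may have changed):

    # Minimal sketch of a toxicity-scoring request to the Comment Analyzer API
    # behind Perspective. The endpoint and request shape follow Google's public
    # documentation at the time; the API key is a placeholder.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: request one from Google
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1"
           f"/comments:analyze?key={API_KEY}")

    body = {
        "comment": {"text": "You are a wonderful person."},
        "requestedAttributes": {"TOXICITY": {}},
    }

    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(f"toxicity: {score:.2f}")  # 0.0 (benign) to 1.0 (very toxic)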
