Leibniz Institute for the Social Sciences

Köln, Germany

Hienert D., Leibniz Institute for the Social Sciences | Wegener D., Leibniz Institute for the Social Sciences | Paulheim H., TU Darmstadt
CEUR Workshop Proceedings | Year: 2012

Wikipedia is a rich source of knowledge from all domains. As part of this knowledge, historical and daily events (news) are collected for different languages on special pages and in event portals. As only a small number of events are available in structured form in DBpedia, we extract these events from Wikipedia pages with a rule-based approach. In this paper we focus on three aspects: (1) extending our prior method to extract events at a daily granularity, (2) automatically classifying events, and (3) finding relationships between events. As a result, we have extracted a data set of about 170,000 events covering different languages and granularities. On the basis of one language set, we have automatically built categories for about 70% of the events of another language set. For nearly every event, we have been able to find related events.
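The rule-based idea can be sketched as follows: on Wikipedia's year and day pages, events typically appear as lines of the form "Month Day - description". The pattern and helper below are a minimal illustration of that idea, not the extraction rules actually used in the paper:

```python
import re

# Minimal sketch of rule-based event extraction, assuming event lines follow
# the "Month Day - description" pattern used on Wikipedia year pages.
# Illustrative only; these are not the paper's actual extraction rules.
EVENT_LINE = re.compile(
    r"^(?P<month>January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+(?P<day>\d{1,2})\s*[-–]\s*"
    r"(?P<text>.+)$"
)

def extract_events(year, lines):
    """Yield (year, month, day, text) tuples for lines matching the rule."""
    for line in lines:
        m = EVENT_LINE.match(line.strip())
        if m:
            yield (year, m.group("month"), int(m.group("day")), m.group("text"))

sample = [
    "April 15 - The RMS Titanic sinks in the North Atlantic.",
    "Not an event line.",
]
print(list(extract_events(1912, sample)))
```

Non-matching lines are simply skipped, which is what makes such rules robust on pages that mix event lists with other prose.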


Hienert D., Leibniz Institute for the Social Sciences | Luciano F., Leibniz Institute for the Social Sciences
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

The DBpedia project extracts structured information from Wikipedia and makes it available on the web. Information is gathered mainly from the infoboxes that hold the structured portion of a Wikipedia article. A lot of information is contained only in the article body and is not yet included in DBpedia. In this paper we focus on the extraction of historical events from Wikipedia articles, which are available for roughly 2,500 years of history in different languages. We have extracted about 121,000 events with more than 325,000 links to DBpedia entities and provide access to this data via a Web API, a SPARQL endpoint, a Linked Data interface, and a timeline application. © Springer-Verlag Berlin Heidelberg 2015.
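Once such an event data set is exposed via a SPARQL endpoint, it can be queried programmatically. The sketch below composes a SELECT query over a hypothetical endpoint; the URL and the LODE-style properties are placeholders, not the service's actual interface:

```python
from urllib.parse import urlencode

# Sketch of querying an event SPARQL endpoint for one year's events.
# ENDPOINT and the vocabulary below are hypothetical placeholders.
ENDPOINT = "http://example.org/sparql"

def build_event_query(year, limit=10):
    """Compose a SPARQL SELECT for events of one year with linked entities."""
    return f"""
    PREFIX lode: <http://linkedevents.org/ontology/>
    SELECT ?event ?date ?entity WHERE {{
        ?event lode:atTime ?date ;
               lode:involved ?entity .
        FILTER (YEAR(?date) = {year})
    }} LIMIT {limit}
    """

def request_url(query):
    """HTTP GET URL for the query, asking for JSON results."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": "json"})

url = request_url(build_event_query(1914))
```

Sending `url` with any HTTP client would return SPARQL JSON bindings ready for further processing, e.g. feeding a timeline view.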


Mayr P., Leibniz Institute for the Social Sciences | Mutschke P., Leibniz Institute for the Social Sciences
Proceedings - 2013 IEEE International Conference on Big Data, Big Data 2013 | Year: 2013

Bibliometric techniques are not yet widely used to enhance retrieval processes in digital libraries, although they offer value-added effects for users. In this paper we explore how statistical modelling of scholarship, such as Bradfordizing or network analysis of co-authorship networks, can improve retrieval services for specific communities as well as for large, cross-domain collections. This paper aims to raise awareness of the missing link between information retrieval (IR) and bibliometrics/scientometrics and to create a common ground for incorporating bibliometric-enhanced services into retrieval at the digital library interface. © 2013 IEEE.
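Bradfordizing re-ranks a result list so that documents from the journals contributing the most hits to the result set (the "core" journals) come first. A minimal sketch, assuming each result carries a journal field (the record structure here is an assumption for illustration):

```python
from collections import Counter

# Minimal sketch of Bradfordizing: re-rank results by the productivity of
# their source journals within the result set itself.
def bradfordize(results):
    """results: list of dicts with 'title' and 'journal' keys."""
    freq = Counter(doc["journal"] for doc in results)
    # Stable sort: journal productivity descending, original order otherwise.
    return sorted(results, key=lambda doc: -freq[doc["journal"]])

hits = [
    {"title": "A", "journal": "J1"},
    {"title": "B", "journal": "J2"},
    {"title": "C", "journal": "J1"},
    {"title": "D", "journal": "J3"},
    {"title": "E", "journal": "J1"},
]
print([doc["title"] for doc in bradfordize(hits)])  # J1 papers first
```

Because the sort is stable, the original (e.g. relevance-based) order is preserved within each journal tier.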


Petersohn S., Leibniz Institute for the Social Sciences
Education for Information | Year: 2016

Quantitative metrics in research assessment are proliferating all over the world. The demand has led to an increase in bibliometric practitioners and service providers. Their professional roles and competencies have not yet been subject to systematic study. This paper focuses on one important group of service providers in evaluative bibliometrics, academic librarians, and analyzes their professional competencies from a sociology-of-professions perspective. To this end, expert interviews with 25 British and German information professionals, along with several documents, have been analyzed qualitatively. Academic librarians compete with other occupations for professional jurisdiction in quantitative research assessment. The main currency in this competition is their expert knowledge. Our results show that academic librarians rely strongly on the know-how gained in their academic Library and Information Science (LIS) training and develop a specific jurisdictional claim towards research assessment, consisting primarily in training, informing and empowering users to proficiently manage the task of evaluating scientific quality themselves. Based on these findings, and informed by the theoretical framework of Andrew Abbott, our conceptual proposal is to adapt formal training in bibliometrics to the various specific professional approaches prevalent in the jurisdictional competition surrounding quantitative research assessment. © 2016 IOS Press and the authors. All rights reserved.


Mayr P., Leibniz Institute for the Social Sciences
CEUR Workshop Proceedings | Year: 2016

In this paper we describe a case study in which researchers in the social sciences (n=19) assess the topical relevance of controlled search terms, journal names and author names that have been compiled by recommender services. We call these services Search Term Recommender (STR), Journal Name Recommender (JNR) and Author Name Recommender (ANR). The researchers in our study (practitioners, PhD students and postdocs) were asked to assess the top n preprocessed recommendations from each recommender for specific research topics they had named in an interview before the experiment. Our results show clearly that the presented search term, journal name and author name recommendations are highly relevant to the researchers' topics and can easily be integrated into search in digital libraries. The average precision for top-ranked recommendations is 0.749 for author names, 0.743 for search terms and 0.728 for journal names. The relevance distribution differs considerably across topics and researcher types. Practitioners seem to favor author name recommendations, while postdocs rated author name recommendations the lowest; in the experiment, the small postdoc group favors journal name recommendations.
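Precision figures like the 0.749 reported above are simply the fraction of judged recommendations rated relevant, averaged over topics. A minimal sketch with hypothetical judgments (not the study's data):

```python
# Precision over binary relevance judgments, averaged across topics.
# The judgment lists below are hypothetical examples.
def precision_at_n(assessments, n=None):
    """assessments: 1 = relevant, 0 = not relevant, in ranked order."""
    top = assessments if n is None else assessments[:n]
    return sum(top) / len(top)

def mean_precision(per_topic):
    """Average the per-topic precision values, as in the reported figures."""
    return sum(precision_at_n(a) for a in per_topic) / len(per_topic)

author_topics = [[1, 1, 1, 0, 1], [1, 0, 1, 1, 1]]
print(mean_precision(author_topics))
```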


Stier S., Leibniz Institute for the Social Sciences
WebSci 2016 - Proceedings of the 2016 ACM Web Science Conference | Year: 2016

This article analyzes political communication by partisan elites on Twitter. Based on framing theory, it investigates whether tweets on U.S. political debates by Democratic and Republican party actors diverge with regard to their semantics. Applying computational text analysis to the most discussed political topics in 2015, the paper identifies topically varying degrees of partisan framing. © 2016 Copyright held by the owner/author(s).
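One common computational approach to such semantic divergence is to score each word by its smoothed log-odds of appearing in one party's tweets versus the other's. The sketch below illustrates this general technique (with toy token lists), not the paper's actual pipeline:

```python
import math
from collections import Counter

# Smoothed log-odds association of words with one of two corpora.
# Toy tokens below are illustrative, not data from the study.
def log_odds(dem_tokens, rep_tokens):
    dem, rep = Counter(dem_tokens), Counter(rep_tokens)
    vocab = set(dem) | set(rep)
    n_d, n_r = sum(dem.values()), sum(rep.values())
    scores = {}
    for w in vocab:
        p_d = (dem[w] + 1) / (n_d + len(vocab))  # add-one smoothing
        p_r = (rep[w] + 1) / (n_r + len(vocab))
        scores[w] = math.log(p_d / p_r)  # > 0: Democratic-leaning wording
    return scores

dem = "undocumented immigrants deserve a path".split()
rep = "illegal aliens broke the law".split()
scores = log_odds(dem, rep)
```

Words with strongly positive or negative scores are the framing vocabulary that distinguishes the two sides on a topic.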


Schaer P., Leibniz Institute for the Social Sciences
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

During the last three years we conducted several information retrieval evaluation series with more than 180 LIS students who made relevance assessments on the outcomes of three specific retrieval services. In this study we focus not on the retrieval performance of our system but on the relevance assessments and the inter-assessor reliability. To quantify the agreement we apply Fleiss' Kappa and Krippendorff's Alpha. Comparing these two statistical measures, Kappa values averaged 0.37 and Alpha values 0.15. We use the two agreement measures to drop overly unreliable assessments from our data set. When computing the differences between the unfiltered and the filtered data sets, we see a root mean square error between 0.02 and 0.12. We see this as a clear indicator that disagreement affects the reliability of retrieval evaluations. We suggest not working with unfiltered results, or at least clearly documenting the disagreement rates. © 2012 Springer-Verlag.
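Fleiss' Kappa, the first of the two agreement measures, can be computed directly from an item-by-category count table. A minimal pure-Python sketch of the standard formula:

```python
# Fleiss' kappa from a count table: rows are items, columns are categories,
# cells count how many assessors assigned that category to that item.
def fleiss_kappa(table):
    n_items = len(table)
    n_raters = sum(table[0])
    total = n_items * n_raters
    # Proportion of all assignments falling into each category.
    p_cat = [sum(row[j] for row in table) / total for j in range(len(table[0]))]
    # Mean observed agreement across items.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ) / n_items
    p_exp = sum(p * p for p in p_cat)  # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

# Two items, 3 raters, 2 categories (relevant / not relevant).
print(fleiss_kappa([[3, 0], [1, 2]]))  # 0.25
```

Values near 0 mean agreement is barely above chance, which is why averages of 0.37 (Kappa) and 0.15 (Alpha) motivate filtering out unreliable assessments.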


Bosch T., Leibniz Institute for the Social Sciences
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Designing domain ontologies from scratch is a time-consuming endeavor requiring close collaboration with domain experts. However, domain descriptions such as XML Schemas are often available in early stages of the ontology development process. For my dissertation, I propose a method to convert XML Schemas to OWL ontologies automatically. The approach addresses the transformation of arbitrary XML Schema documents by using the XML Schema metamodel, which is completely represented by the XML Schema Metamodel Ontology. All schema declarations and definitions are automatically converted to class axioms, which are intended to be enriched with additional domain-specific semantic information in the form of domain ontologies. © 2012 Springer-Verlag Berlin Heidelberg.
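The direction of the transformation can be illustrated with a toy mapping that turns named complex types into OWL classes in Turtle syntax. The real approach works over the full XML Schema metamodel; this sketch covers only one construct, and the ontology namespace is a placeholder:

```python
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"

# Toy XSD-to-OWL mapping: each named complexType becomes an owl:Class.
# The base namespace is a hypothetical placeholder.
def complex_types_to_owl(schema_xml, base="http://example.org/onto#"):
    root = ET.fromstring(schema_xml)
    lines = ["@prefix owl: <http://www.w3.org/2002/07/owl#> ."]
    for ct in root.iter(f"{{{XSD}}}complexType"):
        name = ct.get("name")
        if name:
            lines.append(f"<{base}{name}> a owl:Class .")
    return "\n".join(lines)

schema = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Person"/>
  <xs:complexType name="Organization"/>
</xs:schema>"""
print(complex_types_to_owl(schema))
```

A full transformation would also map element declarations, attributes, and type hierarchies, which is where the metamodel ontology comes in.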


Schaan B., Leibniz Institute for the Social Sciences
Journals of Gerontology - Series B Psychological Sciences and Social Sciences | Year: 2013

Objectives. This study investigates the role of gender, caregiving, and marital quality in the correlation between widowhood and depression among older people within a European context by applying the theory of Social Production Functions as a theoretical framework. Method. Fixed-effects linear regression models are estimated using the first 2 waves (2004, 2006) of "The Survey of Health, Ageing and Retirement in Europe" (SHARE). A subsample of 7,844 respondents aged 50 and older in 11 countries, who were married at baseline and are either continuously married or widowed at follow-up, is analyzed. Results. Respondents who experienced widowhood between the 2 waves report significantly more depressive symptoms than those continuously married, with respondents living in Denmark and Sweden reporting a lower increase in depressive symptoms than those living in Greece, Spain, or Italy. There is no statistically significant interaction between gender and widowhood. Widowed persons who report higher marital quality at baseline show a larger increase in the number of symptoms of depression than those with low marital quality; widowed persons who report being a caregiver for their partner at baseline report a smaller increase in the symptoms of depression compared with widowed noncaregivers. Discussion. The results support the findings of previous studies using longitudinal data. Furthermore, the effect of widowhood varies among the 11 countries in the subsample, although only a small amount of the variation in the increase of depressive symptoms after becoming widowed can be explained by such contextual factors. © 2013 The Author.
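With only two waves, the fixed-effects estimator for a binary event like widowhood reduces to a first-difference comparison: the mean change in symptoms among those widowed between waves minus the mean change among the continuously married. A sketch with fabricated toy scores (not SHARE data):

```python
# Two-wave fixed effects as first differences. Regressing the change in the
# outcome on a binary change in status yields exactly this difference in
# mean changes. All values below are fabricated toy data.
def first_difference_effect(wave1, wave2, treated):
    def mean(xs):
        return sum(xs) / len(xs)
    d_treat = [b - a for a, b, t in zip(wave1, wave2, treated) if t]
    d_ctrl = [b - a for a, b, t in zip(wave1, wave2, treated) if not t]
    return mean(d_treat) - mean(d_ctrl)

# Toy depressive-symptom scores; 1 = widowed between the waves.
w1 = [2, 3, 1, 4, 2, 3]
w2 = [5, 6, 1, 4, 3, 3]
widowed = [1, 1, 0, 0, 0, 0]
print(first_difference_effect(w1, w2, widowed))
```

Differencing removes all time-constant individual characteristics, which is what the fixed-effects design buys over a simple cross-sectional comparison.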


Zapilko B., Leibniz Institute for the Social Sciences | Mathiak B., Leibniz Institute for the Social Sciences
Proceedings of the International Conference on Dublin Core and Metadata Applications | Year: 2011

In recent years, many government agencies have published statistical information as Linked Open Data (e.g. Eurostat, data.gov.uk). Yet, while there are a number of visualization tools, researchers need the data for scientific statistical analysis to answer their research questions. Currently, they have to download the statistical data in a table-based format in order to use their statistics software, unfortunately losing all the benefits Linked Data provides, such as interlinking with other data sets. In this paper, we present an approach specifically designed to help researchers perform statistical analysis on Linked Data. By combining distributed sources with SPARQL, we are able to apply simple statistical calculations, such as linear regression, and present the results to the user. Testing these calculations with heterogeneous data sources exposes a wide range of typical data-integration issues that one has to be aware of when working with heterogeneous statistical data.
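The core analysis step can be sketched as follows: join observations retrieved from two sources on a shared key, then fit an ordinary least-squares line. The dictionaries below stand in for bindings returned by hypothetical SPARQL endpoints; the values are illustrative only:

```python
# Toy stand-ins for observations from two statistical Linked Data sources,
# keyed by country code. Values are fabricated for illustration.
gdp = {"DE": 3.4, "FR": 2.6, "IT": 1.9}
spend = {"DE": 0.31, "FR": 0.25, "IT": 0.20}

def linear_regression(pairs):
    """Ordinary least squares for y = a + b*x over (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Integration step: keep only keys present in both sources.
pairs = [(gdp[k], spend[k]) for k in sorted(gdp) if k in spend]
a, b = linear_regression(pairs)
```

The join on shared keys is exactly where the data-integration issues mentioned above surface: mismatched identifiers, units, or reference periods must be reconciled before the regression is meaningful.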
