London, United Kingdom

Johansson C.,Mendeley | Derelov M.,Linköping University | Olvander J.,Linköping University
Nuclear Engineering and Technology | Year: 2017

To help decision-makers in the early design phase improve the system safety and reliability baselines of aircraft design concepts while keeping them cost-efficient, a method (Multi-objective Optimization for Safety and Reliability Trade-off) is used that can handle trade-offs among system safety, system reliability, and other characteristics, for instance weight and cost. Multi-objective Optimization for Safety and Reliability Trade-off has been developed and implemented at SAAB Aeronautics. The aim of this paper is to demonstrate how the implemented method can aid the selection of optimal design alternatives. The method consists of three steps: step 1 involves the modelling of each considered target, step 2 is optimization, and step 3 is the visualization and selection of results (results processing). The analysis is performed within the Architecture Design and Preliminary Design steps, according to the company's Product Development Process. The lessons learned from using the implemented trade-off method in the three cases are presented. The results are a handful of solutions that serve as a basis for selecting a design alternative. Although the trade-off method was implemented for a specific company, nothing prevents it from being adapted, with minimal modifications, to other industrial applications. © 2017
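The optimization and selection steps described in the abstract typically reduce a large set of candidate designs to the non-dominated (Pareto-optimal) ones, from which decision-makers pick. The following is only an illustrative sketch of that general idea, not SAAB's actual implementation; the design names and objective values are hypothetical:

```python
def pareto_front(designs):
    """Return the non-dominated designs, minimizing every objective.

    designs: list of (name, objectives) pairs, where objectives is a
    tuple such as (safety_risk, unreliability, weight, cost).
    """
    front = []
    for name, obj in designs:
        # A design is dominated if some other design is at least as good
        # in every objective and differs in at least one.
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
            for _, other in designs
        )
        if not dominated:
            front.append((name, obj))
    return front

# Hypothetical design alternatives: (safety risk, unreliability, weight, cost)
candidates = [
    ("A", (0.2, 0.10, 120, 5.0)),
    ("B", (0.3, 0.05, 100, 4.0)),
    ("C", (0.3, 0.10, 130, 6.0)),  # worse than A in every objective
]
front = pareto_front(candidates)
```

Here `front` keeps only A and B, the "handful of solutions" among which the trade-off (e.g. safety versus weight and cost) must then be resolved by the decision-maker.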

Kraker P.,Know Center | Schlogl C.,University of Graz | Jack K.,Mendeley | Lindstaedt S.,Know Center
Journal of Informetrics | Year: 2015

In this paper, we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating knowledge domain visualizations. First, we investigate the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we use co-readership patterns to map the field of educational technology. The resulting visualization prototype, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent. © 2015 Elsevier Ltd.

Schlogl C.,University of Graz | Gorraiz J.,University of Vienna | Gumpenberger C.,University of Vienna | Jack K.,Mendeley | Kraker P.,Know Center
Scientometrics | Year: 2014

In our article we compare downloads from ScienceDirect, citations from Scopus and readership data from the social reference management system Mendeley for articles from two information systems journals (“Journal of Strategic Information Systems” and “Information and Management”) published between 2002 and 2011. Our study shows a medium to high correlation between downloads and citations (Spearman r = 0.77/0.76) and between downloads and readership data (Spearman r = 0.73/0.66). The correlation between readership data and citations, however, was only medium-sized (Spearman r = 0.51/0.59). These results suggest that there is at least “some” difference between the two usage measures and the citation impact of the analysed information systems articles. As expected, downloads and citations have different obsolescence characteristics. While the highest number of downloads are usually made in the publication year and immediately afterwards, it takes several years until the citation maximum is reached. Furthermore, there was a re-increase in the downloads in later years which might be an indication that citations also have an effect on downloads to some degree. © 2014, Akadémiai Kiadó, Budapest, Hungary.
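Spearman's rank correlation, as reported above, is simply Pearson's correlation computed on ranks, with tied values sharing an average rank. A self-contained sketch, using hypothetical per-article counts rather than the study's data:

```python
def rankdata(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-article counts (not the study's data)
downloads = [120, 85, 240, 33, 150, 98]
citations = [10, 4, 25, 1, 9, 12]
rho = spearman(downloads, citations)
```

Because only ranks matter, the measure is robust to the very skewed count distributions typical of downloads and citations, which is why it is the standard choice in such comparisons.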

Schlogl C.,University of Graz | Gorraiz J.,University of Vienna | Gumpenberger C.,University of Vienna | Jack K.,Mendeley | Kraker P.,Know Center
Proceedings of ISSI 2013 - 14th International Society of Scientometrics and Informetrics Conference | Year: 2013

In our article we compare downloads from ScienceDirect, citations from Scopus and readership data from the social reference management system Mendeley for articles from the Journal of Strategic Information Systems (publication years: 2002-2011). Our study shows a medium to high correlation between downloads and readership data (Spearman r=0.73) and between downloads and citations (Spearman r=0.77). However, there is only a medium-sized correlation between readership data and citations (Spearman r=0.51). These results suggest that there is at least "some" difference between the two usage measures and the (citation) impact of the analysed information systems articles. As expected, downloads and citations have different obsolescence characteristics. While the highest downloads accrue in the first years after publication, it takes several years until the citation maximum is reached. © AIT Austrian Institute of Technology GmbH Vienna 2013.

Kraker P.,Know Center | Korner C.,University of Graz | Jack K.,Mendeley | Granitzer M.,University of Passau
WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion | Year: 2012

Social reference management systems provide a wealth of information that can be used for the analysis of science. In this paper, we examine whether user library statistics can produce meaningful results with regard to science evaluation and knowledge domain visualization. We conduct two empirical studies, using a sample of library data from Mendeley, the world's largest social reference management system. Based on the occurrence of references in users' libraries, we perform a large-scale impact factor analysis and an exploratory co-readership analysis. Our preliminary findings indicate that the analysis of user library statistics can produce accurate, timely, and content-rich results. We find that there is a significant relationship between the impact factor and the occurrence of references in libraries. Using a knowledge domain visualization based on co-occurrence measures, we are able to identify two topic areas within the emerging field of technology-enhanced learning. Copyright is held by the International World Wide Web Conference Committee (IW3C2).
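The co-readership analysis mentioned above rests on a simple primitive: counting how often two papers occur together in the same user library. A minimal sketch of that counting step, with hypothetical library data (the paper IDs and libraries are invented for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(libraries):
    """Count how often each pair of papers appears in the same user library.

    libraries: iterable of sets of paper identifiers.
    Returns a Counter keyed by sorted (paper, paper) pairs.
    """
    counts = Counter()
    for library in libraries:
        # Sorting makes the pair key order-independent.
        for pair in combinations(sorted(library), 2):
            counts[pair] += 1
    return counts

# Hypothetical user libraries, not Mendeley's data
libraries = [{"paperA", "paperB"}, {"paperA", "paperB", "paperC"}]
counts = cooccurrence(libraries)
```

The resulting pair counts can then feed a similarity measure and a layout algorithm to produce the kind of knowledge domain visualization the paper describes.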

News Article | November 2, 2016

Provalis Research, a leading provider of text analytics software, announces the release of QDA Miner 5 (Qualitative Data Analysis). This latest version provides enhanced data portability, sharpened analysis of unstructured text, and increased visualization capabilities. The new version introduces more than 25 new features that facilitate data import from external sources and uses powerful analysis tools to present results. With QDA Miner 5, you can import directly from:
● Web survey platforms
● Social media and email programs
● Reference management tools
QDA Miner 5 allows direct data import from well-known platforms such as SurveyGizmo, SurveyMonkey, Qualtrics, QuestionPro, Voxco, Facebook, Twitter, EndNote, Zotero, Mendeley, Outlook, Mbox, and many others. Information collected from those sources may be analyzed using the manual coding and analysis functions of QDA Miner 5 or the automatic content analysis and text mining features of WordStat. QDA Miner 5’s new GIS mapping tool lets you use geographic information to enhance your qualitative data analysis by creating three different types of maps: interactive maps of data points, thematic maps, and heat maps. The new visual displays and charting options offer a quick glimpse of the spatial distribution of your coding through the document overview feature, display code co-occurrence using the Link Analysis feature, and let you view your codes according to the codebook’s hierarchical structure using the new Tree Grid report. QDA Miner is an easy-to-use qualitative data analysis software package for coding, annotating, retrieving, and analyzing small and large collections of documents and images. The QDA Miner qualitative data analysis tool may be used to analyze interview or focus group transcripts, legal documents, journal articles, speeches, even entire books, as well as drawings, photographs, paintings, and other visual documents.
Its seamless integration with WordStat, a quantitative content analysis and text mining module, and SimStat, a statistical data analysis tool, gives data analysts unprecedented flexibility for analyzing text and relating its content to structured information, including numerical and categorical data. A complete list of the new features can be found on the What’s New page, or download a 30-day free trial version to assess the new unstructured-text analysis capabilities. To schedule a demo of QDA Miner 5, contact us today.
About Provalis Research
Provalis Research is a world-leading developer of text analysis software with ground-breaking qualitative, quantitative and mixed methods programs. Developing text analysis programs for over 18 years, Provalis Research has a proven record of accomplishment in designing and bringing to market tools that have today become essential to researchers and analysis specialists worldwide. Headquartered in Montreal, Canada, the company was founded in 1989. Provalis Research software products are used by more than 4,000 institutions on 5 continents, in a wide range of applications such as business intelligence, market research, political sciences, media analysis, survey analysis, risk and fraud detection and international crime analysis.

News Article | November 5, 2015

It took three years for Richard Price, a PhD in philosophy, to get a paper published. That slow turnaround inspired him to start what is essentially a social network called Academia, where academics can publish their papers and have them reviewed by other experts called editors. Now, Price wants to take the next step and surface the best papers with a score. It’s called PaperRank, and it’s a way to help academics quickly determine the quality and validity of a paper. Experts can already recommend and comment on papers as a sort of live peer review process, but now those recommendations feed into an algorithm that ranks the paper.

“In the journal model, the editor of the journal is a paid employee of the journal. They go and email a couple people and say, can you peer review this?” Price said. “And then they do it for free. It’s just a sniff test. It’s reading it and saying, yeah, I recommend it. What we thought was, what does peer review look like when you have a network, and that’s what we tried to build.”

The number of recommendations a paper has and the scores of the authors recommending it determine the paper’s rank. It’s an attempt to distribute the credentialing process across an entire network, rather than relying on editors emailing various experts to peer review a paper before it ends up in a journal. It’s not entirely dissimilar to Google’s PageRank in terms of the mathematics, Price said, though there are some nuanced differences. The credentialing process is slow, and generally, to get a paper widely viewed and accepted, it has to land in a major publication like Science or Nature. Editors receive papers and then have to seek out experts in their networks to review them, to essentially tell whether a paper is a valid scientific study or not. “The basic thought was, right now the system collects sentiment from two people, and we thought why stop at two? Why have this cap?” Price said. “What I’m trying to do is take the flat distribution right now of two peer reviews per paper and augment it and collect more sentiment. And the way we’re getting it is by asking people who read the paper just what they thought.”

There’s always the chance that the credentialing process might not be generally accepted by the scientific community. But Academia does have 27 million registered users who upload 20,000 papers each day, and editors recommend around 3.5 papers per month. Price said that Academia editors are basically putting their reputations behind their recommendations. There’s another benefit to improving the credentialing process and more quickly identifying the best papers: it can increase the speed at which follow-up papers stemming from a first study emerge, which in turn improves the speed of scientific innovation, if it works, Price said. And papers that build on earlier research emerge all the time, so there is a real opportunity here. Price also said the ranking system helps address what he calls the “reproducibility crisis,” where the results of the experiments in a paper can’t be replicated. This happens in a large number of papers, he said, and in theory the higher the PaperRank, the higher the confidence in the experiments.

Authors also get a ranking, which feeds back into how strong their recommendations are. The AuthorRank, as it’s called, is a function of the scores of their papers. If it seems like there’s a lot of self-reference happening here, that’s by design: it’s essentially a giant recursive function. So why would additional editors peer review? For starters, to get things rolling, Academia has enlisted about 2,000 academics to seed reviews and build up initial scores for papers. And in general, editors want to get their names out there, since audiences of various sizes see the names of the people who have recommended papers, Price said.
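Academia has not published the actual PaperRank/AuthorRank algorithm, but the mutual recursion described in the article (a paper is scored by its recommenders' AuthorRanks, and an author by the scores of their papers) can be sketched as a simple fixed-point iteration. Every name, constant, and normalization choice below is a hypothetical illustration, not the real system:

```python
def paperrank(recommendations, authorship, iters=50):
    """Toy recursive paper/author scoring, loosely in the spirit of PageRank.

    recommendations: paper id -> set of recommending author ids
    authorship: paper id -> set of author ids who wrote the paper
    """
    authors = set().union(*authorship.values(), *recommendations.values())
    author_rank = {a: 1.0 for a in authors}
    paper_rank = {}
    for _ in range(iters):
        # A paper's score: base credit plus its recommenders' AuthorRanks.
        paper_rank = {
            p: 1.0 + sum(author_rank[a] for a in recommendations.get(p, set()))
            for p in authorship
        }
        # An author's score: mean score of the papers they wrote.
        new_rank = {}
        for a in authors:
            own = [paper_rank[p] for p, writers in authorship.items() if a in writers]
            new_rank[a] = sum(own) / len(own) if own else 1.0
        # Normalize so the mutual recursion settles instead of growing.
        top = max(new_rank.values())
        author_rank = {a: v / top for a, v in new_rank.items()}
    return paper_rank, author_rank

# Hypothetical data: "p1" is recommended by two editors, "p2" by none
papers, ranks = paperrank(
    recommendations={"p1": {"ed1", "ed2"}, "p2": set()},
    authorship={"p1": {"alice"}, "p2": {"bob"}},
)
```

The recursion converges because the normalization caps author scores at 1.0; a paper recommended by high-ranked authors ends up above one with no recommendations, which is the basic behavior the article attributes to PaperRank.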
To be sure, Academia is not the only player in this space. There are other networks like Mendeley and ResearchGate, and the service has to try to get ahead of incumbent publishers like Springer and Wiley, Price said. But the startup has twice the traffic of ResearchGate and is much larger, and PaperRank is the first peer review service implemented on a network, Price said. Academia is based in San Francisco and has raised $18 million from firms like Spark Capital, Khosla Ventures and True Ventures.

News Article | November 19, 2015

The Research Data Alliance (RDA) is a global organization — supported by funding bodies in Europe, the U.S., and Australia — that has been established to improve data sharing for research. This article focuses on the “Data and computing infrastructures for open scholarship” track of RDA's e-Infrastructures and RDA for data-intensive science workshop, which took place on September 22, 2015, in Paris, France. Open scholarship is important for an open society and has the power to improve lives across the globe. However, achieving this vision may require the redesign, enhancement, or adaptation of the e-infrastructures used for conducting research and disseminating results. Jarkko Siren, a project officer in the European Commission’s e-infrastructure unit, opened the first session of the workshop track. As well as drawing attention to the emphasis that has been placed on open data in the Horizon 2020 funding program, he used his presentation to speak about the need for transparency in research architectures: “Open scholarship requires new designs and architectures,” he says. “It requires transparency at all levels of the research life-cycle, which effectively leads to trust and uptake.” Read more about this in a recent discussion post from Siren. Giulia Ajmone Marsan presented a new report by the Organisation for Economic Co-operation and Development (OECD) entitled 'Making Open Science a Reality'. The report opens with these jarring words, which neatly sum up the scale of the challenge faced: “Science is the mother of the digital age. And yet, 22 years after CERN [the European Organization for Nuclear Research] placed the World Wide Web software in the public domain, effectively creating the open Internet, science itself has struggled not only to ‘go digital’ but also to ‘go open’”. The full report, which includes an assessment of the progress made in several countries towards making the vision of open science a reality, can be read on the OECD website.
William Gunn, director of scholarly communication at Mendeley, spoke about the problem of irreproducibility in science. He cited research showing that around half of all research results cannot be reproduced, for a variety of reasons, and argued that digital infrastructures have a key role to play in remedying this. Gunn suggested a number of specific steps for reducing reliance on contacting the original authors of research papers. In particular, he stressed the need to build tools that better capture the full research workflow, and to make these at least semi-automated where possible. Several new EC-supported projects, funded under the Horizon 2020 program, were showcased as part of this workshop track. Among these was the THOR project (‘technical and human infrastructure for open research’), which was presented by Sünje Dallmeier-Tiessen of CERN. This 30-month project builds on the success of ODIN. It is working, through better integration of persistent identifiers, to establish seamless links between articles, data, and researchers across the research lifecycle. The project collaborators thus aim to create a wealth of open resources and foster a sustainable international e-infrastructure. By doing so, they hope to reduce unnecessary duplication of work, improve economies of scale, enrich research services, and create new opportunities for innovation. Another CERN representative to speak at the event was Tim Smith, who presented the research repository Zenodo in support of open science. Read more about this in our feature article from the repository’s launch back in 2013.

Sharing for the good of society

The workshop track concluded with a panel discussion featuring several high-profile figures. During this, Jean-Pierre Bourguignon, president of the European Research Council, argued passionately for the importance of training people to have the right skills. “Research is very dynamic,” says Bourguignon. “The landscape is changing very, very quickly, and will continue to do so. We need to train people to become data scientists.” Kathleen Shearer, director of the Confederation of Open Access Repositories (COAR), highlighted the need to look at infrastructure from a global perspective. This is key, she believes, if we are going to use infrastructure to help tackle major global problems. She also highlighted the importance of openness, reusability, standards, and interoperability. Finally, Marie Farge, director of research at the French National Center for Scientific Research, emphasized the importance of not just sharing data, but also properly preserving it. “Sharing is not just about sharing in space, but also in time,” she says. “We’re building on past work and preparing work for future generations to build on”. “Sharing is at the core of science,” continues Farge. “Science is part of culture and part of knowledge. We need to protect our right to share knowledge... it is a commons: something that everyone can use, but which no one owns.”

News Article | February 27, 2017

Join the Biomaterials Editors, Professors Abhay Pandit and Hanry Yu, for a discussion on why and how to be a referee, and on outstanding challenges in peer review, on Tuesday 7 March, 9-10 AM (GMT). The session will end with a live question and answer segment where you are welcome to query the presenters on these topics. Would you like to post your pressing questions to our editors ahead of the webinar, or continue the conversation afterwards? This event has an associated group on Mendeley. Join the discussion by emailing Community Manager Sophie de Koning. We look forward to seeing you there!

Elsevier, a world-leading provider of scientific, technical and medical information products and services, today announced that it has implemented the FORCE11 Joint Declaration of Data Citation Principles for over 1,800 journals. This means that authors publishing with Elsevier are now able to cite the research data underlying their article, contributing to attribution and encouraging research data sharing with research articles. The FORCE11 data citation principles were launched in 2014 with the aim of making research data an integral part of the scholarly record. The principles recognized that a critical driver for increasing the availability of research data was to ensure authors receive credit for sharing through proper citation of research data. Elsevier was involved in drafting these principles and, along with many other publishers, data repositories and research institutions, endorsed them as an industry standard. Now, after working closely with other publishers within the Data Citation Implementation Pilot, Elsevier has incorporated them in its production and publication workflow in order to recognize and process data citations. Combined with new author guidance and education, this will encourage and reward researchers for sharing their research data. Data citation provides a persistent and consistent way to link an article to a dataset. Authors can cite the data they generated for their research to ensure easy access to their data, or they can cite existing datasets they used in their research, thereby providing an indication of reuse. For readers, articles with data citations provide a more complete picture of the research that was carried out. "To make archived and cited data really actionable, both for validation and verification of results, and for reuse in meta-analysis and discovery, we need a common approach by publishers and data archives," said Dr. Tim Clark, co-leader of the NIH-funded Data Citation Implementation Pilot within FORCE11.
"Developing and promoting these common approaches has been the driving rationale of the Data Citation Implementation Pilot, and Elsevier and other early adopters of data citation are playing a crucial role in ensuring data citation is implemented uniformly across the ecosystem." "Research data is of the utmost importance for researchers in their advancement of knowledge," said Philippe Terheggen, Managing Director, Elsevier Journals. "Therefore, Elsevier fully supports the proper citation of such data. The implementation of data citation in our journals is an important step towards encouraging and rewarding authors for sharing research data. This is part of our wider efforts around research data, which include the data repository Mendeley Data and data journals such as Data in Brief." As with article references, the dataset will be cited at the relevant place within the text of the article and will appear in the reference list. The data citations will look the same as other citations but will also contain new elements, such as the repository where the dataset is stored. In most cases, there will be a direct link to the stored dataset, making it even easier for authors to find relevant datasets. Read more on Elsevier Connect or join our Publishing Campus webinar "Data Citation: How can you as a researcher benefit from citing data?" on December 6 at 3-4pm CET. Elsevier is a world-leading provider of information solutions that enhance the performance of science, health, and technology professionals, empowering them to make better decisions, deliver better care, and sometimes make groundbreaking discoveries that advance the boundaries of knowledge and human progress. Elsevier provides web-based, digital solutions - among them ScienceDirect, Scopus, Research Intelligence and ClinicalKey - and publishes over 2,500 journals, including The Lancet and Cell, and more than 35,000 book titles, including a number of iconic reference works.
Elsevier is part of RELX Group, a world-leading provider of information and analytics for professional and business customers across industries.
