Along with colleagues from the universities of Zurich and Bern, Cambridge's Adrian Leemann has developed the free app English Dialects (available on iOS and Android), which asks you to choose your pronunciation of 26 different words before guessing where in England you're from. The app, officially launched today on the App Store and Google Play, also encourages you to make your own recordings in order to help researchers determine how dialects have changed over the past 60 years.

The English app follows the team's hugely successful apps for German-speaking Europe, which accumulated more than one million hits in four days on Germany's Der Spiegel website and more than 80,000 downloads by German speakers in Switzerland.

"We want to document how English dialects have changed, spread or levelled out," said Dr Leemann, a researcher at Cambridge's Department of Theoretical and Applied Linguistics. "The first large-scale documentation of English dialects dates back 60-70 years, when researchers were sent out into the field – sometimes literally – to record the public. It was called the 'Survey of English Dialects'. Over a decade, they documented accents and dialects, mainly of farm labourers, in 313 localities across England."

The researchers used this historical material for the dialect-guessing app, which allows them to track how dialects have evolved into the 21st century.

"We hope that people in their tens of thousands will download the app and let us know their results – which means our future attempts at mapping dialect and language change should be much more precise," added Leemann. "Users can also interact with us by recording their own dialect terms, and this will let us see how the English language is evolving and moving from place to place."
The app asks users how they pronounce certain words, or which dialect term they most associate with commonly used expressions, then produces a heat map of the likely location of their dialect based on their answers. For example, the app asks how you might say the word 'last' or 'shelf', giving you various pronunciations to listen to before choosing which one most closely matches your own. Likewise, it asks questions such as 'A small piece of wood stuck under the skin is a…' and gives answers including spool, spile, speel, spell, spelk, shiver, spill, sliver, splinter and splint. At the end of the quiz, the app lets you view which areas of the country use which variations.

It also asks the endlessly contentious English question of whether 'scone' rhymes with 'gone' or 'cone'.

"Everyone has strong views about the pronunciation of this word, but, perhaps surprisingly, we know rather little about who uses which pronunciation and where," said Professor David Britain, a dialectologist and member of the app team based at the University of Bern in Switzerland. "Much of our understanding of the regional distribution of different accent and dialect features is still based on the wonderful but now outdated Survey of English Dialects – we haven't had a truly country-wide survey since. We hope the app will harness people's fascination with dialect to enable us to paint a more up-to-date picture of how dialect features are spread across the country."

At the end of the 26 questions, the app gives its best three guesses as to the geography of your accent based on your dialect choices. However, while the Swiss version of the app proved highly accurate, Leemann and his colleagues have sounded a more cautious note on the accuracy of the English version. Dr Leemann said: "English accents and dialects are likely to have changed over the past decades. This may be due to geographical and social mobility, the spread of the mass media and other factors.
If the app guesses where you are from correctly, then the accent or dialect of your region has not changed much in the last century. If the app does not guess correctly, it is probably because the dialect spoken in your region has changed quite a lot over time."

At the end of the quiz, users are invited to share their location, age, gender, education, ethnicity and how many times they have moved in the last decade. This anonymous data will help academics understand the spread, evolution or decline of certain dialects and dialect terms, and provide answers as to how language changes over time. "The more people participate and share this information with us, the more accurately we can track how English dialects have changed over the past 60 years," added Dr Leemann.

After taking the quiz, users can also listen to both historic and contemporary pronunciations, taking the public on an auditory journey through England and allowing them to hear how dialects have altered over the decades. The old recordings are now held by the British Library and were made available for use in the app. One of them features a speaker from Devon who discusses haymaking and reflects on working conditions in his younger days.

Dr Leemann added: "Our research on dialect data collected through smartphone apps has opened up a new paradigm for analyses of language change. Nearly 80,000 speakers participated in the Swiss version. Results revealed that phonetic variables (e.g. whether you say 'sheuf' or 'shelf') tended to remain relatively stable over time, while lexical variables (e.g. whether you say 'splinter', 'spelk', 'spill' etc.) changed more over time. The recordings from the Swiss users also showed clear geographical patterns; for example, people spoke consistently faster in some regions than others. We hope to carry out further analyses of this kind with the English data in the near future."
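The top-three guessing step described above can be sketched as a simple scoring procedure. This is a hypothetical illustration, not the app's actual algorithm: the region names and variants below are invented stand-ins for the Survey of English Dialects data, and each region is simply scored by how many of the user's variant choices match the variants recorded for it.

```python
# Hypothetical sketch of a dialect-guessing step: score each region by how
# many of the user's answers match the variants recorded for that region in
# a historical survey, then return the top three regions.

SURVEY = {
    "North East": {"last": "lasst", "splinter": "spelk", "scone": "gone"},
    "South West": {"last": "laast", "splinter": "spill", "scone": "cone"},
    "London":     {"last": "laast", "splinter": "splinter", "scone": "gone"},
}

def guess_regions(answers, survey=SURVEY, top_n=3):
    """Rank regions by the number of matching variant choices."""
    scores = {
        region: sum(variants.get(question) == answer
                    for question, answer in answers.items())
        for region, variants in survey.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

user_answers = {"last": "laast", "splinter": "spelk", "scone": "gone"}
print(guess_regions(user_answers))
```

A real app would weight questions by how strongly each variant discriminates between regions and smooth the scores into a heat map, but the ranking idea is the same.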
The findings of the German-speaking experiments were published last week in PLOS ONE. More information: Adrian Leemann et al., Crowdsourcing Language Change with Smartphone Applications, PLOS ONE (2016). DOI: 10.1371/journal.pone.0143060


News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

Many cultural institutions have accelerated the development of their digital collections and data sets by allowing citizen volunteers to help with the millions of crucial tasks that archivists, scientists, librarians and curators face. One of the ways institutions are addressing these challenges is through crowdsourcing. In this post, I’ll look at a few sample crowdsourcing projects from libraries and archives in the U.S. and around the world. This is strictly a general overview. For more detailed information, follow the linked examples or search online for crowdsourcing platforms, tools or infrastructures. In general, volunteers help with tasks such as tagging, transcription, OCR correction, map rectification and citizen science.

The Library of Congress utilizes public input for its Flickr project. Visitors analyze and comment on the images in the Library’s general Flickr collection of over 20,000 images and the Library’s Flickr “Civil War Faces” collection. “We make catalog corrections and enhancements based on comments that users contribute,” said Phil Michel, digital conversion coordinator at the Library.

In another type of image analysis, Cancer Research UK’s Cellslider project invites volunteers to analyze and categorize cancer cell cores. Volunteers are not required to have a background in biology or medicine for the simple tasks. They are shown what visual elements to look for and instructed on how to categorize what they see into the webpage. Cancer Research UK states on its Web site that, as of the original publication of this story, 2,571,751 images had been analyzed.

Both of the examples above use descriptive metadata, or tagging, which helps make the images more findable by means of the specific keywords associated with — and mapped to — the images. The British National Archives runs a project titled “Operation War Diary,” in which volunteers help tag and categorize diaries of World War I British soldiers.
The tags are fixed in a controlled vocabulary list, a menu from which volunteers can select keywords, which helps avoid the typographical variations and errors that may occur when a crowd of individuals freely types in text.

The New York Public Library’s “Community Oral History Project” makes oral history videos searchable by means of topic markers tagged into the slider bar by volunteers; the tags map to time codes in the video. So, for example, instead of sitting through a one-hour interview to find a specific topic, you can click on the tag — as you would select from a menu — and jump to that tagged topic in the video.

The National Archives and Records Administration offers a range of crowdsourcing projects on its Citizen Archivist Dashboard. Volunteers can tag records and subtitle videos to be used for closed captions; they can even translate non-English videos into English subtitles. One NARA project enables volunteers to transcribe old handwritten ships’ logs that, among other things, contain weather information for each daily entry. Such historic weather data is an invaluable addition to the growing body of data in climate-change research.

Transcription is one of the most in-demand crowdsourcing tasks. In the Smithsonian’s Transcription Center, volunteers can select transcription projects from at least 10 of the Smithsonian’s 19 museums and archives. The source material consists of handwritten field notes, diaries, botanical specimen sheets, sketches with handwritten notations and more. Transcribers read the handwriting and type into the Web page what they think it says. The Smithsonian staff then runs the data through a quality-control process before finally accepting it.
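The controlled-vocabulary idea described above can be sketched in a few lines. This is a hypothetical illustration (the vocabulary terms and the `add_tag` helper are invented, not part of any institution's actual platform): a tag is accepted only if it matches an entry on the fixed menu, which rules out the spelling variants that free-text tagging produces.

```python
# Hypothetical sketch of controlled-vocabulary tagging: a tag is accepted only
# if it appears on a fixed menu of keywords, avoiding typographical variants
# ("artilery", "Artillery ", "artilleries") that free-text entry produces.

CONTROLLED_VOCABULARY = {"artillery", "cavalry", "trench", "casualty"}

def add_tag(record_tags, tag):
    """Normalise a tag and accept it only if it is in the controlled vocabulary."""
    tag = tag.strip().lower()
    if tag not in CONTROLLED_VOCABULARY:
        raise ValueError(f"{tag!r} is not in the controlled vocabulary")
    record_tags.add(tag)
    return record_tags

tags = add_tag(set(), "Trench")   # normalised to 'trench' and accepted
# add_tag(tags, "trenches")       # would raise ValueError: not on the menu
print(sorted(tags))
```

In practice the menu is presented as a selection widget rather than validated after typing, but the effect is the same: every volunteer's tag maps to one canonical keyword.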
In all, the process comprises three steps: volunteer transcription, staff review and final acceptance. Notable transcription projects from other institutions are the British Library’s Card Catalogue project, Europeana’s World War I documents, the Massachusetts Historical Society’s “The Diaries of John Quincy Adams,” the University of Delaware’s “Colored Conventions,” the University of Iowa’s “DIY History,” and the Australian Museum’s Atlas of Living Australia.

Optical Character Recognition (OCR) is the process of taking text that has been scanned into solid images — sort of a photograph of text — and machine-transforming that text image into text characters and words that can be searched. The process often generates incomplete or mangled text; OCR is often a “best guess” by the software and hardware. Institutions ask for help comparing the source text image with its OCR text-character results and hand-correcting the mistakes. Newspapers comprise much of the source material. The Library of Virginia, the Cambridge Public Library and the California Digital Newspaper Collection are a sampling of OCR-correction sites. Examples outside the U.S. include the National Library of Australia and the National Library of Finland.

The New York Public Library was featured in the news a few years ago for the overwhelming number of people who volunteered to help with its “What’s on the Menu” crowdsourcing transcription project, in which the NYPL asked volunteers to review a collection of scanned historic menus and type the menu contents into a browser form.

NYPL Labs has gotten even more creative with map-oriented projects. With “Building Inspector” (whose peppy motto is “Kill time. Make history.”), it reaches out to citizen cartographers to review scans of very old insurance maps and identify each building — lot by lot, block by block — by its construction material, its address and its spatial footprint; in an OCR-like twist, volunteers are also asked to note the name of the then-existent business handwritten on the old city map (e.g. MacNeil’s Blacksmith, The Derby Emporium). Given the population density of New York, and the propensity of most of its citizens to walk almost everywhere, there is potential for millions of eyes to look for this information in their daily environment, then go home and record it in the NYPL databases.

Volunteers can also use the NYPL Map Warper to rectify the alignment differences between contemporary maps and digitized historic maps. The British Library has a similar map-rectification crowdsourcing project called Georeferencer, in which volunteers are asked to rectify maps scanned from 17th-, 18th- and 19th-century European books. In the course of the project, maps become geospatially enabled, accessible and searchable through Old Maps Online.

Citizen science projects range from the cellular level to the astronomical. The Audubon Society’s Christmas Bird Count asks volunteers to go outside and report on what birds they see; the data goes toward tracking the migratory patterns of bird species. Geo-Wiki is an international platform that crowdsources monitoring of the earth’s environment: volunteers give feedback about spatial information overlaid on satellite imagery or contribute new data.

Gamification makes a game out of potentially tedious tasks. Malariaspot, from the Universidad Politécnica de Madrid, makes a game of identifying the parasites that lead to malaria. Its Web site states, “The analysis of all the games played will allow us to learn (a) how fast and accurate is the parasite counting of non-expert microscopy players, (b) how to combine the analysis of different players to obtain accurate results as good as the ones provided by expert microscopists.” Carnegie Mellon and Stanford collaboratively developed EteRNA, a game in which users play with puzzles to design RNA sequences that fold up into target shapes and contribute to a large-scale library of synthetic RNA designs.
MIT’s “Eyewire” uses gamification to get players to help map the brain, and MIT’s “NanoDoc” enables game players to design new nanoparticle strategies for the treatment of cancer. The University of Washington’s Center for Game Science offers “Nanocrafter,” a synthetic biology game that enables players to use pieces of DNA to create new inventions. “Purposeful Gaming,” from the Biodiversity Heritage Library, is a gamified method of cleaning up sloppy OCR. Harvard uses the data from its “Test My Brain” game to test scientific theories about the way the brain works.

Crowdsourcing enables institutions to tap vast resources of volunteer labor and to gather and process information faster than ever, despite the daunting volume of raw data and the limitations of in-house resources. Sometimes the volunteers’ work goes directly into a relational database that maps to target digital objects, and sometimes it resides in a holding area until a human can review it and accept or reject it. The process requires institutions to trust “outsiders” — average people, citizen archivists, historians, hobbyists. If a project is well structured and the user instructions are clear and simple, there is little reason for institutions not to ask the general public for help. It’s a collaborative partnership that benefits everyone.

This blog was originally published on The Signal. Read the original post.


News Article
Site: http://news.yahoo.com/green/

Women's rights, the foundations of capitalism and the warping of space-time can all take a backseat to meticulous descriptions of long-beaked finches, at least if public opinion is any measure. "On the Origin of Species," Charles Darwin's famous tome on evolution, has been voted the most influential academic book in history, according to an online survey answered by the public.

The biology bombshell edged out competitors such as "The Complete Works of William Shakespeare"; "A Vindication of the Rights of Woman," by Mary Wollstonecraft; "The Wealth of Nations," by Adam Smith; and even physics classics such as the theory of general relativity by Albert Einstein and "A Brief History of Time," by Stephen Hawking. A group of academic booksellers, publishers and librarians conducted the survey in advance of Academic Book Week in the United Kingdom. [Creationism vs. Evolution: 6 Big Battles]

Darwin's famous book made a splash when it was first published in 1859, and it has been making waves ever since. The book, which emerged from the naturalist's observations as he traveled aboard the ship HMS Beagle, lays the groundwork for modern evolutionary theory, which describes how organisms change over generations through heritable variation. In Darwin's theory, species emerge through natural selection, in which heritable changes make some individuals in a population more fit for their environment than their competitors. Over time, those with the advantageous traits may outcompete or outbreed their counterparts, causing those changes to become widespread. In one of the most iconic examples in the book, Darwin noted that, over time, the finches of the Galapagos Islands had evolved long or short beaks, depending on whether they needed to dig deep to access the food inside cactus fruit.

From almost the instant the book was published, it sparked controversy, with many taking issue with its implications for religion and the origin of human beings.
In the United States, for instance, conflicts arose when public schools began teaching the theory of evolution after World War I, with Tennessee passing a law stating that no theory of human origins taught in public schools could contradict the Bible. The law was tested in the famous Scopes Trial and stayed on the books until 1968, when the Supreme Court ruled that such laws violated the separation of church and state.

Despite the controversy, the theories laid out in the classic book have been validated time and again, and there is now broad scientific consensus that evolutionary theory explains how species, including humans, got to be the way they are. Despite this near-unanimous agreement among scientists, roughly half of Americans continue to reject the notion that humans evolved from earlier primates.

Academic Book Week is a week of activities, held from Nov. 9 to Nov. 16, related to the Academic Book of the Future project, launched in 2014 by the Arts and Humanities Research Council and the British Library to brainstorm what the book of the future will look like against the backdrop of open-access publishing and the evolution of digital publishing.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Hockx-Yu H.,British Library
Proceedings of the 3rd International Web Science Conference, WebSci 2011 | Year: 2011

This paper takes a critical look at the efforts since the mid-1990s in archiving and preserving websites by memory institutions around the world. It contains an overview of the approaches and practices to date, and a discussion of the various technical, curatorial and legal issues related to web archiving. It also looks at a number of current projects which take a different approach to dealing with the temporal aspects or persistence of the web. The paper argues for closer collaboration with the mainstream web science research community and the use of technology developed for the live web, such as visualisation and data analytics, to advance the web archiving agenda. Copyright 2011 ACM.


Knight B.,British Library
Polymer Degradation and Stability | Year: 2014

The evidence for the existence of a critical concentration of acetic acid, or autocatalytic point, at which deacetylation of cellulose triacetate film accelerates markedly, is re-examined. It is concluded that in fact deacetylation is autocatalytic at all concentrations of acid, and also that there is no evidence for the existence of an induction period. © 2013 Elsevier Ltd. All rights reserved.
