Ontotext is a Bulgarian software company headquartered in Sofia. It is the semantic technology branch of Sirma Group. Its main domain of activity is the development of software products and solutions based on the Semantic Web languages and standards, in particular RDF, OWL and SPARQL.
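For readers unfamiliar with these standards: RDF models data as subject-predicate-object triples, and SPARQL queries match patterns over those triples. The plain-Python sketch below is purely illustrative (the graph contents and the `match` helper are invented for this example; no Ontotext product works this way):

```python
# Illustrative only: an RDF graph is a set of (subject, predicate, object)
# triples; a SPARQL basic graph pattern matches against them. This sketch
# mimics a single-pattern query without any RDF library.

# A tiny graph, with IRIs abbreviated as plain strings (hypothetical data).
graph = {
    ("ex:Ontotext", "rdf:type", "ex:Company"),
    ("ex:Ontotext", "ex:headquarteredIn", "ex:Sofia"),
    ("ex:Ontotext", "ex:partOf", "ex:SirmaGroup"),
}

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None plays the role of a SPARQL variable."""
    return [
        t for t in graph
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Roughly: SELECT ?o WHERE { ex:Ontotext ex:headquarteredIn ?o }
results = match(graph, s="ex:Ontotext", p="ex:headquarteredIn")
```

A real engine would add indexes, joins over multiple patterns, and OWL reasoning, but the triple-matching core is the same idea.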
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: INFRAIA-1-2014-2015 | Award Amount: 7.97M | Year: 2015
The European Holocaust Research Infrastructure (EHRI) project seeks to transform archival research on the Holocaust. The vision of EHRI is to integrate the data, services and expertise of existing Holocaust infrastructures on an unprecedented scale. It will give researchers from across the globe transnational and virtual access to the integrated infrastructure, and provide them with innovative digital tools and methods to (collaboratively) explore and analyse Holocaust sources. EHRI will thereby become an indispensable tool for the study of the Holocaust from a pan-European perspective. EHRI is based on an advanced community that has already achieved a significant co-ordination of its efforts, not least thanks to the activities undertaken during EHRI's first phase. The aim of the second phase is to further expand this community. The EHRI consortium includes 22 partners, spread across Europe and beyond. This consortium, as well as a network of regional contact points, enables EHRI to reach those regions where much valuable Holocaust source material is located but where access has hitherto been problematic, especially South-Eastern and Eastern Europe. EHRI includes measures to build capacity in such regions, thereby ensuring that institutions and people across Europe can contribute to, and make use of, the EHRI infrastructure. EHRI will continue to serve as a best-practice model for other humanities projects, and its innovative approach to data integration, management and retrieval will have impact in the wider cultural and IT industries. Although EHRI is geared towards scholarly communities, open online availability of reliable Holocaust material is important for the larger public, as the Holocaust is deeply rooted in the development of European societies. European support for the study of this most traumatic historical event is essential to achieve a comprehensive approach to the history of the Holocaust as a shared European phenomenon.
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-14-2016-2017 | Award Amount: 3.60M | Year: 2017
Corporate information, including basic company firmographics (e.g., name(s), incorporation data, registered addresses, ownership and related entities), financials (e.g., balance sheets, ratings) and contextual data (e.g., cadastral data on corporate properties, geo data, data about directors and shareholders, public tenders data, press mentions), is the foundation that many data value chains are built on. Furthermore, this type of information contributes to the transparency and accountability of enterprises, is instrumental input to the process of marketing and sales, and plays a key role in many business interactions. Existing initiatives to increase the interoperability of and access to corporate data are mostly fragmented across borders, limited in scope and size, and siloed within specific business communities with limited accessibility from outside their originating sectors and countries. As a result, collecting and aggregating data about a business entity from several sources (private or public, official or unofficial), especially across country borders and languages, is a tedious, time-consuming, error-prone, and very expensive operation, which renders many potential business models unviable. euBusinessGraph represents a key initiative to simplify and disrupt the cross-border and cross-lingual collection, reconciliation, aggregation, provisioning and analytics of company-related data from authoritative and non-authoritative public or private sector sources, with the aim of enabling cross-sectorial innovation. Through a combination of large companies, SMEs, public organizations, and technology transfer providers, euBusinessGraph lays the foundations of a European cross-border and cross-lingual business graph, aggregating, linking, and provisioning (open and non-open) high-quality company-related data, and demonstrating innovation across sectors where company-related data value chains are relevant.
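One concrete instance of the reconciliation problem described above is matching company records from different sources whose names differ in legal form or spelling. The sketch below is a hedged illustration using only Python's standard library; the normalisation rules, legal-form list and threshold are assumptions made for this example, not part of euBusinessGraph:

```python
# Toy company-name reconciliation: normalise away legal-form suffixes,
# then fuzzy-match the remainder. Real pipelines also use identifiers,
# addresses and registries; this only illustrates the name-matching step.
from difflib import SequenceMatcher

# A small, illustrative list of legal-form tokens to strip.
LEGAL_FORMS = {"ltd", "gmbh", "sa", "as", "ad", "ood", "inc", "plc"}

def normalise(name: str) -> str:
    """Lowercase, strip punctuation, and drop legal-form tokens."""
    tokens = [t.strip(".,") for t in name.lower().split()]
    return " ".join(t for t in tokens if t and t not in LEGAL_FORMS)

def same_company(a: str, b: str, threshold: float = 0.85) -> bool:
    """Heuristic match on normalised names; threshold is an assumption."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

matched = same_company("Ontotext AD", "Ontotext")
```

In practice a single string-similarity threshold is too coarse; it merely shows why cross-source aggregation without shared identifiers is error-prone.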
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2013.4.1 | Award Amount: 4.18M | Year: 2013
The consumption of large amounts of multilingual and multimedia content, regardless of its reliability and cross-validation, can have important consequences for society. An indicative example is the current crisis of the financial markets in Europe, which has created an extremely unstable ground for economic transactions and caused insecurity in the population. The exaggerated and contradictory information provided by national mass media, together with the inability to understand local contexts in different countries, has a considerable share in the aggravation of the crisis. To break this spiral, we need multilingual technologies with sentiment, social and spatiotemporal competence that are able to interpret, relate and summarize economic information and news created from various local subjective and biased views and disseminated via TV, radio, mass media websites and social media. To address these challenges, MULTISENSOR will mine heterogeneous data from the aforementioned resources and apply multidimensional content integration. MULTISENSOR will go beyond the state of the art by pursuing the following scientific objectives: a) content distillation of heterogeneous multimedia and multilingual data; b) sentiment and context analysis of content and social interactions; c) semantic integration of heterogeneous multimedia and multilingual data; d) semantic reasoning and intelligent decision support; and e) multilingual and multimodal summarization and presentation of the information to the user. In order to achieve multidimensional integration of heterogeneous resources, MULTISENSOR proposes a content integration framework that builds upon multimedia mining, knowledge extraction, analysis of computer-mediated interaction, topic detection, semantic and multimodal representation, as well as hybrid reasoning. The developed technologies will be validated with the aid of two main use cases: a) international mass media news monitoring and b) SME international investments.
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2011.4.4 | Award Amount: 3.46M | Year: 2012
Non-relational data management is emerging as a critical need for the new data economy based on large, distributed, heterogeneous, and complexly structured data sets. This new data management paradigm also provides an opportunity for research results to impact young innovative companies working on new RDF and graph data management technologies, helping them play a significant role in this new data economy. Standards and benchmarking are two of the most important factors for the development of new information technology, yet there is still no comprehensive suite of benchmarks and benchmarking practices for RDF and graph databases, nor is there an authority for setting benchmark definitions and auditing official results. Without them, the future development and uptake of these technologies is at risk, as industry lacks clear, user-driven targets for performance and functionality. The goal of the Linked Data Benchmark Council (LDBC) project is to create the first comprehensive suite of open, fair and vendor-neutral benchmarks for RDF/graph databases, together with the LDBC foundation, which will define processes for obtaining, auditing and publishing results. The core scientific innovation of LDBC is therefore to define meaningful benchmarks derived from actual usage scenarios combined with the technical insight of top database systems researchers and architects into the choke points of current technology. LDBC will bring together a broad community of researchers and RDF and graph database vendors to establish an independent authority, the LDBC foundation, responsible for specifying benchmarks, benchmarking procedures and verifying/publishing results. The forum created will become a long-lived, industry-supported association similar to the TPC. Vendors and user organisations will participate in order to influence benchmark design and to make use of the obvious marketing opportunities.
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2011.4.1 | Award Amount: 2.00M | Year: 2012
ANNOMARKET aims to revolutionise the text annotation market by delivering an affordable, open marketplace for pay-as-you-go, cloud-based extraction resources and services in multiple languages. The project is driven by a commercially-dominated consortium from 3 EU countries, with 43% of the budget assigned to SMEs. The key differentiating feature of ANNOMARKET is its open marketplace concept. In addition, the Software-as-a-Service (SaaS) model reduces the complexity of deployment, maintenance, customisation, and sharing of text processing resources and services, making them affordable to SMEs, both as users and as resource providers. The main beneficiaries will be the SME providers of text analysis resources and services, who will be able to deploy their custom components/applications and receive revenue via the AnnoMarket marketplace. There will be a mixture of paid-for proprietary resources and services and free open-source ones, in different languages. AnnoMarket will also promote customisation and re-targeting to new vertical domains and languages. The open-source nature of the underlying infrastructure will foster a strong developer community and enable easy deployment on private and public cloud infrastructures. Pricing will be transparent (based on data volumes) and the business model self-sustainable. The techniques will be generic, with many business applications, e.g. large-volume multi-lingual information management, business intelligence, social media monitoring, and customer relations management. The project will also benefit society and ordinary citizens by enabling affordable enrichment of government data archives and health-related web content. The marketplace architecture will be refined and evaluated with early adopters from our focus group, covering these vertical domains, in five target languages.
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2013.4.1 | Award Amount: 4.27M | Year: 2014
Social media poses three major computational challenges, dubbed by Gartner the 3Vs of big data: volume, velocity, and variety. Content analytics methods have faced additional difficulties, arising from the short, noisy, and strongly contextualised nature of social media. In order to address the 3Vs of social media, new language technologies have emerged, e.g. using locality-sensitive hashing to detect breaking news stories from media streams (volume), predicting stock market movements from microblog sentiment (velocity), and recommending blogs and news articles based on user content (variety). PHEME will focus on a fourth crucial, but hitherto largely unstudied, challenge: veracity. It will model, identify, and verify phemes (internet memes with added truthfulness or deception) as they spread across media, languages, and social networks. PHEME will achieve this by developing novel cross-disciplinary social semantic methods, combining document semantics, a priori large-scale world knowledge (e.g. Linked Open Data) and a posteriori knowledge and context from social networks, cross-media links and spatio-temporal metadata. Key novel contributions are dealing with multiple truths, reasoning about rumour and the temporal validity of facts, and building longitudinal models of users, influence, and trust. Results will be validated in two high-profile case studies: healthcare and digital journalism. The techniques will be generic, with many business applications, e.g. brand and reputation management, customer relationship management, semantic search and knowledge management. In addition to its high commercial relevance, PHEME will also benefit society and citizens by enabling government organisations to keep track of and react to rumours spreading online. PHEME addresses Objective ICT-2013.4.1 Content analytics and language technologies; a) cross-media analytics.
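As an aside on the volume example above, locality-sensitive hashing can be illustrated with MinHash, a common LSH family for estimating Jaccard similarity between token sets, which is one way near-duplicate stories are spotted at scale. This is a generic stdlib sketch, not PHEME's actual pipeline; all names, data and parameters are illustrative:

```python
# MinHash sketch: for each of NUM_HASHES seeded hash functions, keep the
# minimum hash over a document's token set. The fraction of positions on
# which two signatures agree estimates the Jaccard similarity of the sets.
import hashlib

NUM_HASHES = 64  # illustrative signature length

def _h(token: str, seed: int) -> int:
    """Deterministic 64-bit hash of a token under a given seed."""
    data = f"{seed}:{token}".encode()
    return int.from_bytes(hashlib.md5(data).digest()[:8], "big")

def minhash(tokens: set[str]) -> list[int]:
    """Signature: per seed, the minimum hash over the token set."""
    return [min(_h(t, seed) for t in tokens) for seed in range(NUM_HASHES)]

def similarity(sig_a: list[int], sig_b: list[int]) -> float:
    """Fraction of agreeing signature positions (Jaccard estimate)."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Invented example documents, tokenised naively by whitespace.
story = set("markets fall sharply after bank collapse in europe".split())
near_dup = set("markets fall sharply after big bank collapse in europe".split())
unrelated = set("local team wins cup final on penalties last night".split())
```

Signatures are fixed-length regardless of document size, so candidate duplicates can be found by comparing (or bucketing) signatures instead of full texts, which is what makes the approach viable at stream volume.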
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2013.4.3 | Award Amount: 2.17M | Year: 2013
While in recent years a large number of datasets have been published as open (and often linked) data, applications utilizing these open and distributed data have been rather few. Reasons include, amongst others, the technical complexity and cost of publishing and providing access to the data, the lack of monetization incentives on the provider side, and the lack of simplified and unified solutions for data consumption in a multi-platform way. The DaPaaS project directly addresses these challenges by developing a software infrastructure combining Data-as-a-Service (DaaS) and Platform-as-a-Service (PaaS) for open data, with the aim of optimizing the publication of open data and the development of data applications. By addressing the data consumption aspect through novel cross-platform interfaces to data applications, DaPaaS covers the full life cycle of cost-efficient data publishing and consumption. Backed by the development of a methodology for data use in the DaPaaS infrastructure, the project will deliver an intuitive platform that simplifies data publication as well as cross-platform data consumption, thus enabling a sustainable infrastructure for efficient and simplified reuse of open data. Core innovations include: an open DaaS and PaaS, unified Linked Data access, integrated DaaS and PaaS for open data, and lowering the complexity of open data publishing and consumption for non-experts. Sustainable exploitation of the project results is ensured through a strong participation of SMEs in the consortium. The participating SMEs are among the world's leading organizations in the field of Open Data, with strong knowledge transfer experience and unique technologies in Linked Data, Semantic Web, data integration and mobile development, with strong links across both public and private sectors, and are committed to a joint development of unique technologies for effectively and efficiently supporting the life cycle of reuse of open data.
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-15-2014 | Award Amount: 3.89M | Year: 2015
The overall objective of the KConnect project is to create a medical text Data-Value Chain with a critical mass of participating companies using cutting-edge commercial cloud-based services for multilingual Semantic Annotation, Semantic Search and Machine Translation of Electronic Health Records and medical publications. The commercial cloud-based services will be the result of productisation of the multilingual medical text processing tools developed in the Khresmoi FP7 project, allowing wide adoption of these tools by industry. The critical mass will be created by the KConnect Professional Services Community, which will consist of at least 30 companies by the end of the project. These companies will be trained to build solutions based on the KConnect services, hence serving as multipliers for commercial exploitation of the KConnect services. The KConnect project will facilitate the straightforward adaptation of the commercialised services to new languages by providing toolkits that enable the adaptation to be done by people with a software engineering skillset, as opposed to the rarer language engineering skillset. The KConnect services will also be adapted to handle text in Electronic Health Records, which is particularly challenging due to misspellings, neologisms, organisation-specific acronyms, and heavy use of negation and hedging. The consortium is driven by a core group of four innovative SMEs pursuing complementary business perspectives related to medical text analysis and search. These companies will build solutions for their customers based on KConnect technology. Two partners from the medical domain will use KConnect services to solve their medical record analysis challenges. Two highly-used medical search portal providers will implement the KConnect services to innovate the services offered by their search portals. Through these search portals, the KConnect technologies will be used by over 1 million European citizens before the end of the project.
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-15-2014 | Award Amount: 4.48M | Year: 2015
Property data are among the most valuable datasets managed by governments worldwide and are extensively used in various domains by private and public organizations. Unfortunately, these data are not always easy to access. House and property data are used in a variety of ways to produce value-added information within and across several business sectors, including real estate and debt collection. Such sectors suffer from a lack of innovation due to a fragmented data ecosystem which makes it difficult to access relevant datasets. This hampers innovation, protects incumbents and promotes rent-seeking business models. The difficulty in creating a single, open data market partly depends on the fact that some governmental agencies currently make significant revenues from selling data to a restricted number of business players in the private sector. However, several studies have demonstrated that the transaction costs for government agencies tend to be very high, and often make selling the data unprofitable. proDataMarket aims to disrupt the property data market and demonstrate innovation across sectors where property data are relevant, by integrating a technical framework for effective publishing and consumption of property-related data and showcasing novel data-driven business products and services based on property data. proDataMarket will provide a digital data marketplace for open and non-open property data and related contextual data, making it easier for data providers to publish and distribute their data (for free or for a fee) and for data consumers to access the data they need for their businesses. The consortium is formed by large companies ensuring high-impact business cases, technology transfer providers supplying technologies for the creation and maintenance of the data market platform, and large data providers contributing data for the business cases. With strong industry involvement, proDataMarket will cover a wide range of data value chains related to property data.
Ontotext | Date: 2013-06-21
An RDF reason maintenance system avoids imposing a scalability restriction on the number of explicit statements stored in RDF databases. The system can identify the inferred statements that should be removed whenever an explicit statement is deleted (retracted). It dynamically computes truth values using a combination of forward-chaining and backward-chaining hardware. The system is time-efficient, computing results faster than a full re-computation, and space-efficient, since it need not store any long-lived truth maintenance information.
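To make the idea concrete, here is a toy Python sketch of reason maintenance over RDF-like triples: forward chaining materialises inferences, and on retraction a backward-chaining check keeps only the inferences still derivable from the remaining explicit statements, instead of recomputing everything. A single transitivity rule stands in for a full rule set; this is an invented illustration of the general technique, not the patented algorithm:

```python
# Toy reason maintenance over triples, using one rule:
# (a subClassOf b) and (b subClassOf c) => (a subClassOf c).

SUB = "rdfs:subClassOf"

def forward_close(explicit):
    """Forward chaining: repeatedly apply the transitivity rule; return inferred triples."""
    inferred = set()
    closure = set(explicit)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(closure):
            for (c, _, d) in list(closure):
                if b == c:
                    t = (a, SUB, d)
                    if t not in closure:
                        closure.add(t)
                        inferred.add(t)
                        changed = True
    return inferred

def derivable(triple, explicit):
    """Backward-style check: is the triple still provable from explicit facts alone?"""
    a, _, d = triple
    seen, frontier = set(), {a}
    while frontier:
        x = frontier.pop()
        if x == d:
            return True
        seen.add(x)
        frontier |= {o for (s, _, o) in explicit if s == x} - seen
    return False

def retract(explicit, inferred, triple):
    """Delete an explicit triple; drop inferences that are no longer derivable."""
    explicit = explicit - {triple}
    inferred = {t for t in inferred if derivable(t, explicit)}
    return explicit, inferred

explicit = {("ex:Cat", SUB, "ex:Mammal"), ("ex:Mammal", SUB, "ex:Animal")}
inferred = forward_close(explicit)  # {("ex:Cat", SUB, "ex:Animal")}
explicit, inferred = retract(explicit, inferred, ("ex:Mammal", SUB, "ex:Animal"))
```

After the retraction, the inferred triple ("ex:Cat", SUB, "ex:Animal") is dropped because no remaining explicit path supports it; rechecking only the affected inferences, rather than rebuilding the whole closure, is the efficiency point the abstract makes.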