EEDIS Laboratory

El Abiodh Sidi Cheikh, Algeria


Larbi A., EEDIS Laboratory | Larbi A., Energarid Laboratory | Mimoun M., École Supérieure d'Ingénieurs en Informatique et Génie des Télécommunications | Boukhalfa K., University of Science and Technology Houari Boumediene
CEUR Workshop Proceedings | Year: 2017

According to most researchers in this domain, the success or failure of a data warehouse (DW) project is largely determined by the design phase. When analyzing decision-making system requirements, many recurring problems appear and difficulties in modeling the requirements are detected. We also encounter the problem of requirements being expressed by non-IT professionals and by decision makers who are not experts in design models. The ambiguity of decision-making requirements leads to their misinterpretation, which results in data warehouse design failures and incorrect OLAP analyses. Many studies have addressed the inclusion of vague data in information systems in general, but few have examined this issue in data warehouses. This article describes one of the shortcomings of current data warehouse design approaches, namely the imprecision of requirements expression, and shows how ontologies can help us overcome it. We present a survey of this topic showing that few works take imprecision into account in this crucial phase of the decision-making process, and we outline the challenges and problems that arise and that require more attention from researchers to improve DW design. To our knowledge, no rigorous study of vagueness in this area has been made.


Abdelmadjid L., EEDIS Laboratory | Bachir S., ENERGARID Laboratory | Ismail L., ENERGARID Laboratory | Merzoug A., ENERGARID Laboratory
CEUR Workshop Proceedings | Year: 2017

The use of ontologies for the enrichment (expansion) of the user query can be a solution to the problem of semantic variations. In our case, when the object sought (a drug) is not found, similar products are proposed by taking into account the constraints related to the patient's profile, the orientations of the physician, and the semantics given by the pharmaceutical ontology. Ontologies offer resources in the form of semantic relations; they make it possible to extend the search scope of a query, which results in improved search results.
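As an illustration of the query-expansion idea, here is a minimal sketch assuming a small hand-made drug ontology and a hypothetical expand_query function; the actual pharmaceutical ontology, patient-profile model, and physician orientations used in the paper are not reproduced here.

```python
# Minimal sketch of ontology-driven query expansion for a drug search,
# using a hypothetical in-memory ontology (not the paper's real resource).

# Hypothetical semantic relations: drug -> related concepts.
DRUG_ONTOLOGY = {
    "ibuprofen":   {"same_class": ["ketoprofen", "naproxen"], "class": "NSAID"},
    "ketoprofen":  {"same_class": ["ibuprofen", "naproxen"], "class": "NSAID"},
    "paracetamol": {"same_class": [], "class": "analgesic"},
}

def expand_query(drug: str, in_stock: set[str], contraindicated: set[str]) -> list[str]:
    """Return the requested drug if available; otherwise, semantically
    similar drugs that are in stock and safe for the patient profile."""
    if drug in in_stock:
        return [drug]
    related = DRUG_ONTOLOGY.get(drug, {}).get("same_class", [])
    return [d for d in related if d in in_stock and d not in contraindicated]

# Example: ibuprofen is out of stock and the patient cannot take naproxen.
print(expand_query("ibuprofen",
                   in_stock={"ketoprofen", "naproxen", "paracetamol"},
                   contraindicated={"naproxen"}))   # -> ['ketoprofen']
```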


Malki A., LIRIS Laboratory | Barhamgi M., LIRIS Laboratory | Benslimane S.-M., EEDIS Laboratory | Benslimane D., LIRIS Laboratory | Malki M., EEDIS Laboratory
IEEE Transactions on Knowledge and Data Engineering | Year: 2015

With the emergence of the open data movement, hundreds of thousands of datasets from various concerns are now freely available on the Internet. Access to a good number of these datasets is carried out through Web services, which provide a standard way to interact with data. In this context, users' queries often require the composition of multiple data Web services to be answered. Defining the semantics of data services is the first step towards automating their composition. An interesting approach to defining the semantics of data services is to describe them as semantic views over a domain ontology. However, defining such semantic views cannot always be done with certainty, especially when the service's outputs are too complex. In this paper, we propose a probabilistic approach to model the semantic uncertainty of data services. In our approach, a data service with uncertain semantics is described by several possible semantic views, each associated with a probability. Services along with their possible semantic views are represented in a Block-Independent-Disjoint (denoted BID) probabilistic service registry, and interpreted based on the Possible Worlds Semantics. Based on our modeling, we study the problem of interpreting an existing composition involving services with uncertain semantics. We also study the problem of composing uncertain data services to answer a user query, and propose an efficient method to compute the different possible compositions and their probabilities. © 2014 IEEE.
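The following minimal sketch illustrates the Block-Independent-Disjoint reading of such a registry: each service has a block of mutually exclusive candidate views, blocks are independent across services, and an interpretation of a composition picks one view per service with a probability equal to the product of the chosen views' probabilities. The registry contents below are invented for illustration and are not the paper's examples.

```python
# Minimal sketch of the Block-Independent-Disjoint (BID) idea: each service
# has several mutually exclusive possible semantic views (a "block"); blocks
# of different services are independent, so an interpretation of a
# composition is one view per service, weighted by the product of the
# selected views' probabilities (possible-worlds style enumeration).

from itertools import product

# Hypothetical registry: service -> list of (possible semantic view, probability).
registry = {
    "S1": [("returns physicians by city", 0.7), ("returns hospitals by city", 0.3)],
    "S2": [("returns patients by physician", 0.6), ("returns visits by physician", 0.4)],
}

def composition_interpretations(services):
    """Enumerate all possible interpretations of a composition and their probabilities."""
    blocks = [registry[s] for s in services]
    for combo in product(*blocks):
        views = tuple(v for v, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        yield views, prob

for views, prob in sorted(composition_interpretations(["S1", "S2"]),
                          key=lambda x: -x[1]):
    print(f"p={prob:.2f}  {views}")
```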


Abdelkader M., Saida University | Mimoun M., EEDIS Laboratory
Proceedings of 2015 IEEE World Conference on Complex Systems, WCCS 2015 | Year: 2015

This paper presents a novel approach to detecting code clones. The proposed approach formulates clone detection as a problem of querying and mining time series data [18]. The approach is composed of three steps: the first extracts modules (i.e., methods, functions, etc.) from the software system, the second transforms the modules into time series, and the third calculates the similarity degree between modules using the Dynamic Time Warping (DTW) algorithm. Two modules are reported as clones if the DTW similarity value between them is greater than a predefined threshold. The results of experiments conducted on well-known software systems show that our approach has the potential to detect clones of type I, type II, and type III in an effective manner. © 2015 IEEE.
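To make the matching step concrete, here is a minimal sketch of the DTW comparison; the per-line token-count encoding used to turn a module into a time series is only an assumed placeholder for the paper's transformation, and the threshold value is arbitrary.

```python
# Minimal sketch: encode modules as numeric time series (assumed encoding:
# tokens per non-empty line), compare them with classic dynamic time warping,
# and flag a pair as a clone candidate when the DTW distance is small enough.

def to_series(module_source: str) -> list[float]:
    # Assumed placeholder encoding, not the paper's actual transformation.
    return [float(len(line.split())) for line in module_source.splitlines() if line.strip()]

def dtw_distance(a: list[float], b: list[float]) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def are_clones(m1: str, m2: str, threshold: float = 5.0) -> bool:
    # Arbitrary threshold; the paper's threshold and similarity scaling may differ.
    return dtw_distance(to_series(m1), to_series(m2)) <= threshold

f1 = "def add(a, b):\n    return a + b\n"
f2 = "def plus(x, y):\n    return x + y\n"
print(are_clones(f1, f2))   # -> True for these near-identical functions
```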


Karima A., École Supérieure d'Informatique | Zakaria E., EEDIS Laboratory | Yamina T.G., Annaba University
Journal of Theoretical and Applied Information Technology | Year: 2012

The quantity of accessible information on the Internet is phenomenal, and its categorization remains one of the most important problems. A lot of work currently focuses on English, rightly so, since it is the dominant language of the Web. However, a need arises for other languages, because the Web becomes more multilingual every day. The need is much more pressing for the Arabic language. Our research concerns the categorization of Arabic texts; its originality lies in the use of a conceptual representation of the text. For that, we use Arabic WordNet (AWN) as a lexical and semantic resource. To understand its effect, we incorporate it in a comparative study with the other usual modes of representation (bag of words and N-grams), and we use different similarity measures. The results show the benefits and advantages of this representation compared to the more conventional methods, and demonstrate that the addition of the semantic dimension is one of the most promising approaches for the automatic categorization of Arabic texts. © 2005 - 2012 JATIT & LLS. All rights reserved.
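The following minimal sketch illustrates what the conceptual representation buys over a plain bag of words: two documents using different surface words for the same concept become identical once words are mapped to shared concepts. The tiny word-to-concept table stands in for an Arabic WordNet synset lookup and is not real AWN data.

```python
# Minimal sketch: map words to concepts (a hand-made stand-in for AWN synsets)
# and compare documents with cosine similarity, versus a plain bag of words.

from collections import Counter
from math import sqrt

# Hypothetical word -> concept mapping (stand-in for an AWN synset lookup).
WORD_TO_CONCEPT = {"سيارة": "vehicle#1", "مركبة": "vehicle#1", "طريق": "road#1"}

def concept_vector(tokens: list[str]) -> Counter:
    # Unknown words fall back to their surface form.
    return Counter(WORD_TO_CONCEPT.get(t, t) for t in tokens)

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

doc1, doc2 = ["سيارة", "طريق"], ["مركبة", "طريق"]
print(cosine(concept_vector(doc1), concept_vector(doc2)))   # 1.0: same concepts
print(cosine(Counter(doc1), Counter(doc2)))                 # 0.5: bag of words only
```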


Elberrichi Z., EEDIS Laboratory | Abidi K., EEDIS Laboratory
International Arab Journal of Information Technology | Year: 2012

The quantity of accessible information on the Internet is phenomenal, and its categorization remains one of the most important problems. A lot of work currently focuses on English, rightly so, since it is the dominant language of the Web. However, a need arises for other languages, because the Web becomes more multilingual every day. The need is much more pressing for the Arabic language. Our research concerns the categorization of Arabic texts; its originality lies in the use of a conceptual representation of the text. For that, we use Arabic WordNet (AWN) as a lexical and semantic resource. To understand its effect, we incorporate it in a comparative study with the other usual modes of representation (bag of words and N-grams), and we use the K-NN learning scheme with different similarity measures. The results show the benefits and advantages of this representation compared to the more conventional methods, and demonstrate that the addition of the semantic dimension is one of the most promising ways for the automatic categorization of Arabic texts.
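The sketch below illustrates the K-NN scheme with a pluggable similarity measure (cosine or Jaccard here), as the abstract pairs K-NN with different measures; the training vectors and labels are invented placeholders for bag-of-words, N-gram, or AWN concept representations, not data from the paper.

```python
# Minimal sketch of K-NN text categorization with an interchangeable
# similarity measure; documents are term-frequency Counters.

from collections import Counter
from math import sqrt

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def jaccard(u: Counter, v: Counter) -> float:
    union = len(set(u) | set(v))
    return len(set(u) & set(v)) / union if union else 0.0

def knn_predict(query: Counter, train: list[tuple[Counter, str]], k: int = 3, sim=cosine) -> str:
    """Label the query with the majority class among its k most similar documents."""
    neighbors = sorted(train, key=lambda ex: sim(query, ex[0]), reverse=True)[:k]
    labels = Counter(label for _, label in neighbors)
    return labels.most_common(1)[0][0]

# Placeholder training documents (term frequencies) and labels.
train = [(Counter({"رياضة": 2, "فريق": 1}), "sport"),
         (Counter({"مباراة": 1, "فريق": 2}), "sport"),
         (Counter({"اقتصاد": 2, "سوق": 1}), "economy")]
query = Counter({"فريق": 1, "مباراة": 1})
print(knn_predict(query, train, k=3, sim=cosine))    # -> 'sport'
print(knn_predict(query, train, k=1, sim=jaccard))   # -> 'sport'
```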
