Research Center on Scientific and Technical Information

Algiers, Algeria


Chaa M.,Research Center on Scientific and Technical Information | Chaa M.,University of Abderrahmane Mira de Béjaïa | Nouali O.,Research Center on Scientific and Technical Information | Bellot P.,Aix-Marseille University
CEUR Workshop Proceedings | Year: 2016

In this paper, we describe our participation in the INEX 2016 Social Book Search Suggestion Track (SBS). We exploit machine learning techniques to rank query terms and assign an appropriate weight to each one before applying a probabilistic information retrieval model (BM15); only the top-k terms are then used in the matching model. Several features describe each term, such as statistical and syntactic features, as well as whether the term appears in similar books and in the profile of the topic starter. The model was trained on the 2014 and 2015 topics and tested on the 2016 topics. Our experiments show that our approach improves the search results.
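
The abstract does not include code, but the pipeline it describes can be outlined roughly as follows. This is a minimal sketch under stated assumptions: the feature set is illustrative, `model` stands for any fitted regressor acting as the learned term ranker, and BM15 is taken to be BM25 without document-length normalisation (b = 0).

```python
import math
from collections import Counter

def term_features(term, topic_text, profile_text, similar_books_text):
    """A few illustrative features for one query term (hypothetical feature set)."""
    return [
        topic_text.split().count(term),           # frequency in the topic
        len(term),                                # term length
        int(term in profile_text.split()),        # present in the topic starter's profile
        int(term in similar_books_text.split()),  # present in similar books
    ]

def top_k_terms(terms, feature_rows, model, k=10):
    """Score each term with the learned ranker and keep the k best as term -> weight."""
    scores = model.predict(feature_rows)          # model: any fitted regressor
    ranked = sorted(zip(terms, scores), key=lambda pair: pair[1], reverse=True)
    return dict(ranked[:k])

def bm15_score(doc_tokens, weighted_terms, df, n_docs, k1=1.2):
    """BM15 = BM25 with b = 0 (no length normalisation), using the learned term weights."""
    tf = Counter(doc_tokens)
    score = 0.0
    for term, weight in weighted_terms.items():
        if term not in tf:
            continue
        idf = math.log(1 + (n_docs - df.get(term, 0) + 0.5) / (df.get(term, 0) + 0.5))
        score += weight * idf * tf[term] * (k1 + 1) / (tf[term] + k1)
    return score
```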


Aliane A.A.,University of Science and Technology Houari Boumediene | Aliane A.A.,Research Center on Scientific and Technical Information | Aliane H.,Research Center on Scientific and Technical Information | Ziane M.,University of Science and Technology Houari Boumediene | And 2 more authors.
Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA | Year: 2017

With the recently increasing interest in opinion mining from different research communities, there is a growing body of work on Arabic sentiment analysis. Few polarity-annotated datasets are available for this language, so most existing works use them to test the best-known supervised algorithms. Naïve Bayes and SVM are the best-reported algorithms in the Arabic sentiment analysis literature. The work described in this paper shows that using a genetic algorithm to select features and improving the quality of the training dataset significantly increase the accuracy of the learning algorithm. We use the LABR dataset of book reviews and compare our results with those of LABR's authors. © 2016 IEEE.
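
A minimal sketch of the kind of genetic-algorithm feature selection the abstract describes, not the authors' implementation: X is assumed to be a dense document-term matrix over the reviews, y the polarity labels, and a Naïve Bayes classifier provides the fitness via cross-validated accuracy.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of Naive Bayes on the selected feature columns."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(MultinomialNB(), X[:, mask.astype(bool)], y, cv=3).mean()

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.02):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))      # 0/1 masks over features
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return best.astype(bool)  # boolean mask over the feature columns
```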


Nouali O.,Research Center on Scientific and Technical Information | Kechadi T.,University College Dublin
2015 1st International Conference on Anti-Cybercrime, ICACC 2015 | Year: 2015

The Chinese wall security policy (CWSP) is an attractive solution for controlling information flows between competing companies (CC), whether in multi-tenant cloud computing or in social networks. Its principle is to build impenetrable walls between the competing companies so that subjects cannot cause information to flow from one to another. However, the previously proposed models [1-6] contain a number of errors and cannot solve the composite information flow (CIF) problem between competing companies (the malicious Trojan horse problem). In this article, we propose a new approach based on the type of access a subject requests on an object and on the philosophy of the CWSP. Our model uses two kinds of wall placement, one built around the subject and one built around the object, such that no single wall ever encloses the data of two competing objects. © 2015 IEEE.
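
As a simplified illustration of the Chinese wall principle (a Brewer-Nash style check, not the paper's exact two-wall model), the following sketch tracks, per subject, which company's data has been accessed in each conflict-of-interest class and refuses access to a competitor's data.

```python
from collections import defaultdict

class ChineseWall:
    def __init__(self, coi_of_company):
        # e.g. {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}
        self.coi = coi_of_company
        self.history = defaultdict(dict)  # subject -> {coi_class: company}

    def can_access(self, subject, company):
        coi_class = self.coi[company]
        seen = self.history[subject].get(coi_class)
        return seen is None or seen == company

    def access(self, subject, company):
        if not self.can_access(subject, company):
            raise PermissionError(f"{subject} already holds data of a competitor of {company}")
        self.history[subject][self.coi[company]] = company

wall = ChineseWall({"BankA": "banks", "BankB": "banks", "OilCo": "oil"})
wall.access("alice", "BankA")
wall.access("alice", "OilCo")             # different conflict-of-interest class: allowed
print(wall.can_access("alice", "BankB"))  # False: BankB competes with BankA
```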


Aliane H.,Research Center on Scientific and Technical Information | Guendouzi A.,University of Science and Technology Houari Boumediene | Mokrani A.,University of Science and Technology Houari Boumediene
International Conference Recent Advances in Natural Language Processing, RANLP | Year: 2013

We present in this paper an unsupervised approach to recognizing events, time expressions and place expressions in Arabic texts. Arabic is a resource-scarce language, and annotated corpora, lexicons and other NLP tools are not readily available. We show in this work that events, time expressions and place expressions can be recognized in Arabic texts without a POS-annotated corpus and without a lexicon. We apply an unsupervised segmentation algorithm, after which a minimal set of rules yields a partial POS annotation of our corpus. This partially annotated corpus then serves as the basis for the recognition process, which implements a set of rules over specific linguistic markers to recognize events and expressions of time and place.
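
A minimal sketch of the marker-based rule idea, assuming illustrative marker lists rather than the paper's actual ones: after segmentation, a simple rule tags the token that follows a known time or place marker.

```python
# Illustrative marker sets (assumptions), not the paper's lists.
TIME_MARKERS = {"يوم", "سنة", "عام", "شهر"}   # day, year, year, month
PLACE_MARKERS = {"مدينة", "ولاية", "قرية"}    # city, province, village

def tag_expressions(tokens):
    """Return (index, label) pairs for tokens introduced by a known marker."""
    tags = []
    for i, tok in enumerate(tokens[:-1]):
        if tok in TIME_MARKERS:
            tags.append((i + 1, "TIME"))
        elif tok in PLACE_MARKERS:
            tags.append((i + 1, "PLACE"))
    return tags

# "The president visited the city of Algiers on Thursday"
print(tag_expressions("زار الرئيس مدينة الجزائر يوم الخميس".split()))
```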


Bessai-Mechmache F.Z.,Research Center on Scientific and Technical Information | Alimazighi Z.,University of Science and Technology Houari Boumediene
International Journal of Intelligent Information and Database Systems | Year: 2012

In this paper, we are interested in content-oriented XML information retrieval, which aims to retrieve not a set of relevant documents but a set of elements (parts of documents) relevant to a query. Our goal is to revisit the granularity of the unit to be returned. More precisely, instead of returning the whole document or a list of disjoint elements of a document, as is usually done in most XML information retrieval systems, we attempt to build the best aggregation of elements (a set of non-redundant elements) likely to be relevant to a keyword query. Our approach is based on possibilistic networks. The network structure provides a natural representation of the links between a document, its elements and its content, and allows the automatic selection of a combination of independent elements (i.e., a set of non-redundant elements from different parts of the document tree) that best answers the user's query. Experiments carried out on a sub-collection of the INitiative for the Evaluation of XML retrieval (INEX) showed the effectiveness of the approach. Copyright © 2012 Inderscience Enterprises Ltd.
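
The possibilistic network itself is not reproduced here; the following is only an illustrative greedy stand-in showing what "building an aggregation of non-redundant elements" can look like, with elements identified by their paths in the document tree and redundancy approximated by an ancestor/descendant (path-prefix) test.

```python
def build_aggregate(elements, query_terms):
    """elements: dict mapping an element path (e.g. '/article/sec[1]') to its text."""
    query = set(query_terms)
    chosen = []  # paths already selected for the aggregate

    def redundant(path):
        # An element is redundant if it is an ancestor or descendant of a chosen one.
        return any(path.startswith(p) or p.startswith(path) for p in chosen)

    ranked = sorted(elements.items(),
                    key=lambda kv: len(query & set(kv[1].lower().split())),
                    reverse=True)
    for path, text in ranked:
        if query & set(text.lower().split()) and not redundant(path):
            chosen.append(path)
    return chosen  # a set of non-overlapping elements relevant to the query
```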


Aliane H.,Research Center on Scientific and Technical Information | Alimazighi Z.,University for Information Science and Technology
International Journal of Computer Applications in Technology | Year: 2011

We present a new approach to discovering Arabic language structures from electronic texts. The method is based on a distributional analysis inspired by the Arabic Grammatical Tradition (AGT) and by Harris, and uses minimal knowledge about the Arabic language. The idea underlying this research is to ask what, in the absence of a formal model of the Arabic language and of freely usable Natural Language Processing (NLP) tools for it, we can learn about the structures of this language through a purely formal analysis of raw corpora written in it. By 'formal' we mean here that the analysis is based only on the form of the written texts and uses only minimal knowledge about the language; this knowledge consists of a few hypotheses inspired by the AGT that never refer to the meaning of words, utterances or texts. We do not presuppose these structures: rather, we aim to discover them formally through automatic algorithms. © 2011 Inderscience Enterprises Ltd.


Bouchama S.,Research Center on Scientific and Technical Information | Aliane H.,Research Center on Scientific and Technical Information | Hamami L.,Polytechnic School of Algiers
2013 International Conference on Information Science and Applications, ICISA 2013 | Year: 2013

Very few reversible data hiding methods have been proposed for compressed video, and particularly for the H.264/AVC video codec, despite the importance of both the reversibility criterion and the codec. Reversible watermarking techniques designed for images, when applied to compressed video, can significantly degrade video quality and bitrate. To keep these techniques applicable, the embedding capacity is therefore usually reduced, and an adaptation is necessary to improve the trade-off between the embedding capacity, the visual quality and the bitrate of the watermarked video. In this paper, we investigate introducing a DCT-based reversible data hiding method, initially proposed for compressed images, into the H.264/AVC codec. The embedding is applied during the encoding process to the quantized DCT coefficients of I and P frames. To enhance the embedding capacity, a mapping rule is used to embed three bits in one coefficient. Results show that exploiting the P frames considerably improves the video quality and the embedding capacity. © 2013 IEEE.
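
As a toy illustration of a reversible mapping rule that hides three bits in one quantized DCT coefficient, the sketch below uses a simple coefficient-expansion mapping; this is not the mapping evaluated in the paper, and the H.264/AVC encoding loop is not reproduced.

```python
def embed3(coeff, bits):
    """Hide a 3-bit value (0..7) in one quantized coefficient, reversibly."""
    assert 0 <= bits <= 7
    return 8 * coeff + bits if coeff >= 0 else 8 * coeff - bits

def extract3(marked):
    """Recover (original coefficient, hidden bits) from a marked coefficient."""
    if marked >= 0:
        return marked // 8, marked % 8
    return -((-marked) // 8), (-marked) % 8

coeff, bits = -2, 0b101
marked = embed3(coeff, bits)
assert extract3(marked) == (coeff, bits)  # the original coefficient is restored exactly
```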


Chaa M.,Research Center on Scientific and Technical Information | Chaa M.,University of Abderrahmane Mira de Béjaïa | Nouali O.,Research Center on Scientific and Technical Information
CEUR Workshop Proceedings | Year: 2015

In this paper, we describe our participation in the INEX 2015 Social Book Search Suggestion Track (SBS). In our experiments, we exploit only the tags assigned to books by users of LibraryThing (LT). We investigate the impact of the weight of each topic term in the retrieval model using two methods. In the first, we use the TF-IQF formula to assign a weight to each topic term. In the second, we use the Rocchio algorithm to expand the query and compute the weights of the tags assigned to the example books mentioned in the book search request. The parameters of our models were tuned on the INEX 2014 topics and tested on the INEX 2015 Social Book Search track.
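
A minimal sketch of the Rocchio step over tag vectors, with assumed parameter values (alpha, beta) and a toy tag representation; the TF-IQF weighting of the first method is not reproduced here.

```python
from collections import Counter

def rocchio_expand(query_tags, example_books_tags, alpha=1.0, beta=0.75):
    """query_tags: list of tags; example_books_tags: one tag list per example book."""
    expanded = Counter({t: alpha for t in query_tags})
    if example_books_tags:
        centroid = Counter()
        for tags in example_books_tags:
            centroid.update(tags)
        for tag, freq in centroid.items():
            # q' = alpha * q + beta * mean(example-book tag vectors)
            expanded[tag] += beta * freq / len(example_books_tags)
    return expanded  # tag -> weight, usable directly in a weighted retrieval model

weights = rocchio_expand(["fantasy", "dragons"],
                         [["fantasy", "epic", "dragons"], ["fantasy", "magic"]])
print(weights.most_common(3))
```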


Maredj A.-E.,Research Center on Scientific and Technical Information | Tonkin N.,Research Center on Scientific and Technical Information
Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing, IEEE ICSC 2015 | Year: 2015

We propose an approach for the dynamic adaptation of multimedia documents modeled as an over-constrained constraint satisfaction problem (OCSP). In addition to the solutions it provides for determining which relations do not comply with the user profile and for the combinatorial explosion that arises when searching for alternative relations, it ensures a certain quality of service for the adapted document's presentation: (i) if the required constraints are not satisfied, no document is generated, unlike other approaches, which generate a document even when its presentation is completely different from the initial one; (ii) the constraint hierarchy (strong constraints and medium constraints) preserves as many of the initial document's relations as possible in the adapted one. As a result, the adapted presentations are consistent and close to the initial ones. © 2015 IEEE.
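
A minimal sketch of the constraint-hierarchy idea under illustrative assumptions: constraints are plain predicates over candidate adapted documents, required constraints act as a hard filter (no document is produced if none survive), and strong constraints dominate medium ones in the final choice.

```python
def adapt(candidates, required, strong, medium):
    """candidates: adapted documents; each constraint is a predicate over a candidate."""
    valid = [c for c in candidates if all(r(c) for r in required)]
    if not valid:
        return None  # required profile constraints cannot be met: no document is generated
    # Prefer candidates keeping the most strong constraints, then the most medium ones.
    return max(valid, key=lambda c: (sum(s(c) for s in strong),
                                     sum(m(c) for m in medium)))
```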


Maredj A.-E.,Research Center on Scientific and Technical Information | Tonkin N.,Research Center on Scientific and Technical Information
World Academy of Science, Engineering and Technology | Year: 2011

Recent developments in computing and communication technology allow users to access multimedia documents on a variety of devices (PCs, PDAs, mobile phones...) with heterogeneous capabilities. This diversification of devices has created the need to adapt multimedia documents to their execution contexts. A semantic framework for multimedia document adaptation based on conceptual neighborhood graphs was previously proposed; in this framework, adaptation consists in finding another specification that satisfies the target constraints and is as close as possible to the initial document. In this paper, we propose a new way of building the conceptual neighborhood graphs that better preserves the proximity between the adapted and original documents and handles more elaborate relation models, by integrating relation relaxation graphs that account for the delays and distances defined within the relations.
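
A minimal sketch of how a conceptual neighborhood graph can drive relation relaxation, using a small fragment of Allen's interval relations as an assumed graph rather than the paper's relaxation graphs: a breadth-first search returns the closest substitute relation that the target profile still allows.

```python
from collections import deque

NEIGHBORS = {                      # small illustrative fragment of a neighborhood graph
    "before":   ["meets"],
    "meets":    ["before", "overlaps"],
    "overlaps": ["meets", "starts"],
    "starts":   ["overlaps", "equals"],
    "equals":   ["starts"],
}

def closest_substitute(relation, allowed):
    """Breadth-first search: nearest neighbor of `relation` that the profile allows."""
    seen, queue = {relation}, deque([relation])
    while queue:
        current = queue.popleft()
        if current in allowed:
            return current
        for nxt in NEIGHBORS.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # no acceptable relaxation exists

print(closest_substitute("overlaps", allowed={"before", "meets"}))  # -> "meets"
```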
