Jbara Y.H.F.,Arab Academy for Banking and Financial Sciences
Proceedings of the IASTED International Conference on Modelling and Simulation | Year: 2010

This paper proposes a novel ant-based approach for solving continuous function optimization problems in 2D and 3D search spaces. The proposed approach, called the 2D-3D Continuous Ant Colony Approach (2D-3D-CACA), is able to handle continuous 3D real-world applications. Two novel concepts distinguish this algorithm from other heuristics: (1) the inclusion of a dynamic discretization representation to move from a discrete formulation to a continuous one; the paper briefly discusses the benefits and necessity of applying dynamic discretization to resolve 3D application representation issues; and (2) the use of a novel simulated annealing-based local search for local optimization, with the aim of improving overall performance. The proposed approach consists of two phases: a global phase and a local phase. The global phase provides high-quality starting solutions, while the local phase operates on these solutions to obtain still better ones. By iterating these two phases, the global optimum is obtained. Extensive experimental results show that the proposed algorithm performs very well against other continuous ant-related approaches from the literature on a set of four well-known algebraic benchmark problems and one 3D real-world application.
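The abstract does not give the algorithm's details, but the two-phase structure it describes can be sketched generically: a global sampling phase (standing in for the ant-colony construction step, whose pheromone mechanics are not specified here) followed by simulated-annealing local refinement, iterated. All function names, parameter values, and the sphere benchmark below are illustrative assumptions, not the paper's method.

```python
import math
import random

def sphere(x):
    """Benchmark objective: sum of squares; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def sa_local_search(f, x, steps=500, t0=1.0, cooling=0.99, step=0.5):
    """Simulated-annealing refinement of a starting solution x."""
    best, best_f = list(x), f(x)
    cur, cur_f, t = list(x), best_f, t0
    for _ in range(steps):
        cand = [v + random.gauss(0, step) for v in cur]
        cand_f = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_f < cur_f or random.random() < math.exp((cur_f - cand_f) / t):
            cur, cur_f = cand, cand_f
            if cur_f < best_f:
                best, best_f = list(cur), cur_f
        t *= cooling
    return best, best_f

def two_phase_optimize(f, dim=3, iterations=20, colony=30, bounds=(-5.0, 5.0)):
    """Alternate a global sampling phase with SA-based local refinement."""
    lo, hi = bounds
    best, best_f = None, float("inf")
    for _ in range(iterations):
        # Global phase: sample candidate solutions (a stand-in for ant construction).
        ants = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(colony)]
        start = min(ants, key=f)
        # Local phase: refine the best candidate with simulated annealing.
        x, fx = sa_local_search(f, start)
        if fx < best_f:
            best, best_f = x, fx
    return best, best_f
```

The point of the split is that the global phase only needs to land in the right basin; the annealing schedule then does the fine convergence.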


Hmeidi I.I.,Jordan University of Science and Technology | Al-Shalabi R.F.,Arab Academy for Banking and Financial Sciences | Al-Taani A.T.,Yarmouk University | Najadat H.,Jordan University of Science and Technology | Al-Hazaimeh S.A.,Jordan University of Science and Technology
Journal of the American Society for Information Science and Technology | Year: 2010

Root extraction is one of the most important topics in information retrieval (IR), natural language processing (NLP), text summarization, and many other important fields. In the last two decades, several algorithms have been proposed to extract Arabic roots. Most of these algorithms dealt with triliteral roots only, and some with fixed-length words only. In this study, a novel approach to the extraction of roots from Arabic words using bigrams is proposed. Two measures are used: a dissimilarity measure, the "Manhattan distance," and Dice's measure of similarity. The proposed algorithm is tested on the Holy Qur'an and on a corpus of 242 abstracts from the Proceedings of the Saudi Arabian National Computer Conferences. The two files used contain a wide range of data: the Holy Qur'an contains most of the ancient Arabic words, while the other file contains some modern Arabic words and some words borrowed from foreign languages in addition to the original Arabic words. The results of this study showed that combining N-grams with the Dice measure gives better results than using the Manhattan distance measure. © 2009 ASIS&T.
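The Dice measure on bigrams, which the study found to perform best, has a compact standard form: twice the number of shared bigrams divided by the total bigram count of both words. A minimal sketch follows; the full root-extraction pipeline (candidate root generation, affix handling) is not described in the abstract and is not reproduced here.

```python
def bigrams(word):
    """Set of adjacent character pairs in the word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice_similarity(w1, w2):
    """Dice's coefficient on bigram sets: 2|A ∩ B| / (|A| + |B|)."""
    a, b = bigrams(w1), bigrams(w2)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

For example, "night" and "nacht" share only the bigram "ht" out of four bigrams each, giving a Dice score of 2·1/(4+4) = 0.25. The same bigram sets could feed a Manhattan (L1) distance over bigram counts, though the abstract does not specify that formulation exactly.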


Amro I.,Al-Quds University | Zitar R.A.,New York Institute of Technology | Bahadili H.,Arab Academy for Banking and Financial Sciences
International Journal of Speech Technology | Year: 2011

In this paper, a replacement algorithm for Linear Prediction Coefficients (LPC), combined with the Hamming Correction Code based Compressor (HCDC) algorithm, is investigated for speech compression. We started with a CELP system of order 12 with Discrete Cosine Transform (DCT)-based residual excitation; forty coefficients at a transmission rate of 5.14 kbps were first used. For each frame of the test signals we applied a multistage HCDC and measured compression performance for parities from 2 to 7; compression was achieved only at parity 4. This rate reduction was made with no compromise in the original CELP signal quality, since the compression is lossless. The compression approach is based on constructing a dynamic reflection-coefficient codebook, which is built and used simultaneously according to a store/retrieve threshold. The initial linear prediction codec is excited by a DCT residual. The results were evaluated using the MOS and SSNR; we obtained acceptable MOS values (average 3.6) and small variations in SSNR (±5 dB). © Springer Science+Business Media, LLC 2011.
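The "dynamic codebook with a store/retrieve threshold" idea can be illustrated in a few lines: when an incoming coefficient vector is close enough to a stored entry, only the entry's index is transmitted; otherwise the vector is added to the codebook and sent in full. This is a hedged interpretation of the abstract's one-sentence description; the distance metric, threshold value, and data layout below are assumptions.

```python
def quantize_with_dynamic_codebook(frames, threshold=0.1):
    """Build and use a codebook simultaneously: emit ("index", i) when a
    stored entry lies within `threshold` (Euclidean distance) of the input
    vector, otherwise store the vector and emit it verbatim."""
    codebook = []
    stream = []
    for vec in frames:
        best_i, best_d = -1, float("inf")
        for i, entry in enumerate(codebook):
            d = sum((a - b) ** 2 for a, b in zip(vec, entry)) ** 0.5
            if d < best_d:
                best_i, best_d = i, d
        if best_d < threshold:
            stream.append(("index", best_i))       # retrieve: cheap reference
        else:
            codebook.append(list(vec))
            stream.append(("vector", list(vec)))   # store: send full vector
    return stream, codebook
```

Because the decoder rebuilds the identical codebook from the received vectors, the scheme is lossless with respect to the quantized coefficients, consistent with the paper's claim of no quality compromise.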


Al-Bahadili H.,Arab Academy for Banking and Financial Sciences | Hussain S.M.,University of Petra
International Journal of Automation and Computing | Year: 2010

This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each satisfying the condition that the number of decimal values (d) of the n-bit characters is equal to or less than 256. Therefore, the new scheme is referred to as ACW(n,s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n,s) scheme achieves a higher compression ratio than many widely used compression algorithms and competitive performance compared with state-of-the-art compression tools. © 2010 Institute of Automation, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg.
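One way to read the subsequence condition is as a greedy scan: cut the bit stream into n-bit characters and start a new subsequence whenever the current one would otherwise exceed 256 distinct values. This is an illustrative reading of the abstract, not the paper's stated procedure; the `limit` parameter and the handling of trailing bits are assumptions.

```python
def split_into_subsequences(bits, n, limit=256):
    """Partition a bit string into n-bit characters, opening a new
    subsequence whenever adding a character would push the number of
    distinct values in the current one past `limit`. Trailing bits
    shorter than n are dropped in this sketch."""
    chars = [bits[i:i + n] for i in range(0, len(bits) - len(bits) % n, n)]
    subsequences, current, seen = [], [], set()
    for c in chars:
        if c not in seen and len(seen) == limit:
            subsequences.append(current)
            current, seen = [], set()
        current.append(c)
        seen.add(c)
    if current:
        subsequences.append(current)
    return subsequences
```

Keeping each subsequence's alphabet at 256 values or fewer means every n-bit character can be remapped to a single byte within its subsequence, which is what lets a larger n still yield a compact encoding.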


Mohammad A.H.,University of Jordan | Hamdeh M.A.,Arab Academy for Banking and Financial Sciences | Sabri A.T.,Jerash Private University
European Journal of Scientific Research | Year: 2010

Eliciting knowledge is one of the most critical tasks that many organizations face, particularly when experts leave their jobs. This paper presents a framework for knowledge acquisition that provides a consistent method for capturing the knowledge of a particular enterprise, organization, or human (domain) expert. The developed framework can be applied to most large organizations with little adaptation, depending on the nature of the organization's work. The goal of this paper is to present a framework that a knowledge engineer can use to successfully capture, apply, and validate knowledge. The framework covers both types of knowledge (explicit and tacit), and applying it does not require a large amount of effort or cost. © EuroJournals Publishing, Inc. 2010.
