
Islamic Azad University-Arak is a private university in Arak, Iran. The university was established in 1985 to meet the growing need for an academic institution in the city of Arak. Today, it operates 7 research centers and hosts 22,000 students, offering degrees in 148 fields across 6 colleges to both undergraduate and graduate students. It is the largest institute of higher education in the province in terms of student enrollment, eclipsing both the University of Arak and the Arak University of Medical Sciences. (Source: Wikipedia.)



Akbari Torkestani J.,Islamic Azad University of Arak
Journal of Network and Computer Applications | Year: 2012

Centralized or hierarchical administration, as used by classical grid resource discovery approaches, cannot efficiently manage highly dynamic, large-scale grid environments. Peer-to-peer (P2P) overlays offer a dynamic, scalable, and decentralized alternative for grids. However, structured P2P methods do not fully support multi-attribute range queries, and unstructured P2P resource discovery methods suffer from the network-wide broadcast storm problem. In this paper, a decentralized learning automata-based resource discovery algorithm is proposed for large-scale P2P grids. The proposed method supports multi-attribute range queries and forwards resource queries along the shortest path ending at the grid peers most likely to hold the requested resource. Several simulation experiments are conducted to show the efficiency of the proposed algorithm. Numerical results reveal the superiority of the proposed model over other methods in terms of average hop count, average hit ratio, and control message overhead. © 2012 Elsevier Ltd. All rights reserved.
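The learning-automaton machinery behind this line of work can be illustrated with a minimal linear reward-inaction (L_RI) sketch. The class name, the reward step `a`, and the peer labels below are illustrative, not taken from the paper:

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over a set of actions.

    Sketch of how a grid peer could learn which neighbor most likely
    holds a requested resource: each successful lookup via a neighbor
    rewards that action; on failure the probabilities are left unchanged.
    """

    def __init__(self, actions, a=0.1):
        self.actions = list(actions)
        self.a = a  # reward step size (illustrative value)
        n = len(self.actions)
        self.p = [1.0 / n] * n  # start with uniform action probabilities

    def choose(self, rng=random):
        # Sample an action index according to the current probabilities.
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, i):
        # L_RI update: boost p_i, scale the others down; sum stays 1.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.a * (1.0 - self.p[j])
            else:
                self.p[j] *= (1.0 - self.a)

# A peer that keeps finding the resource via its third neighbor
# concentrates nearly all probability on that neighbor.
la = LearningAutomaton(["peerA", "peerB", "peerC"])
for _ in range(200):
    la.reward(2)
```

After repeated rewards, queries are forwarded to the consistently successful neighbor almost every time, which is the mechanism behind the improved hit ratio and hop count.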


Akbari Torkestani J.,Islamic Azad University of Arak
Journal of Network and Computer Applications | Year: 2012

In realistic mobile ad hoc network scenarios, hosts usually travel toward pre-specified destinations and often exhibit non-random motion behaviors. Under such mobility patterns, the future motion behavior of a mobile host is correlated with its past and current mobility characteristics, so memoryless mobility models cannot realistically emulate such behavior. In this paper, an adaptive learning automata-based mobility prediction method is proposed in which the prediction is based on the Gauss-Markov random process and exploits the correlation of the mobility parameters over time. Using a continuous-valued reinforcement scheme, the proposed algorithm learns to predict future mobility behavior relying only on the mobility history; it therefore requires no a priori knowledge of the distribution parameters of the mobility characteristics. Furthermore, since hosts in realistic mobile ad hoc networks move under a wide variety of mobility models, the proposed algorithm can be tuned to replicate a wide spectrum of mobility patterns with various degrees of randomness. Since the proposed method predicts the basic mobility characteristics of the host (i.e., speed, direction, and degree of randomness), it can also be used to estimate various ad hoc network parameters such as link availability time, path reliability, and route duration. The convergence properties of the proposed algorithm are also studied, and a strong convergence theorem is presented showing that the algorithm converges to the actual characteristics of the mobility model. The simulation results conform to the theoretically expected convergence results and show that the proposed algorithm precisely estimates the motion behaviors. © 2012 Elsevier Ltd.
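As a reference point, the Gauss-Markov process the predictor builds on can be sketched as a one-dimensional speed trace. The function name and parameter values below are illustrative; the paper's contribution is learning the parameters (including the randomness degree alpha) online rather than assuming them, as this sketch does:

```python
import math
import random

def gauss_markov_speed(steps, alpha, mean_speed, sigma, s0, rng):
    """One-dimensional Gauss-Markov speed trace:

        s_n = alpha * s_{n-1} + (1 - alpha) * s_mean + sqrt(1 - alpha^2) * w_{n-1}

    where w is Gaussian noise.  alpha in [0, 1] tunes the randomness
    degree: alpha near 1 gives strongly correlated (memoryful) motion,
    alpha = 0 gives memoryless motion around the mean speed.
    """
    s = s0
    trace = [s]
    for _ in range(steps):
        w = rng.gauss(0.0, sigma)
        s = alpha * s + (1 - alpha) * mean_speed + math.sqrt(1 - alpha ** 2) * w
        trace.append(s)
    return trace

rng = random.Random(42)
trace = gauss_markov_speed(1000, alpha=0.9, mean_speed=5.0, sigma=1.0,
                           s0=0.0, rng=rng)
# After a burn-in, the long-run average drifts toward mean_speed.
avg = sum(trace[200:]) / len(trace[200:])
```

Because consecutive samples are correlated when alpha is high, a predictor that tracks the recent history can anticipate the next speed value, which is exactly what makes memoryless models inadequate here.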


Akbari Torkestani J.,Islamic Azad University of Arak
Applied Intelligence | Year: 2012

Recent years have witnessed the birth and explosive growth of the Web. Its exponential growth has made the Web a huge source of information in which finding a document without an efficient search engine is unimaginable. Web crawling has therefore become an important aspect of Web search on which the performance of search engines strongly depends. Focused Web crawlers concentrate the crawling process on topic-relevant Web documents; such topic-oriented crawlers are widely used in domain-specific Web search portals and personalized search tools. This paper designs a decentralized learning automata-based focused Web crawler. Taking advantage of learning automata, the proposed crawler learns the most relevant URLs and the promising paths leading to the target on-topic documents, and can effectively adapt its configuration to the Web's dynamics. The crawler is expected to achieve a higher precision rate because it constructs a small Web graph of only on-topic documents. The convergence of the proposed algorithm is proved using the martingale theorem. Extensive simulation experiments show the superiority of the proposed crawler over several existing methods in terms of precision, recall, and running time, and a t-test is used to verify the statistical significance of the precision results. © 2012 Springer Science+Business Media, LLC.
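A best-first frontier is the core of most focused crawlers. Below is a minimal sketch with a stand-in relevance scorer and link fetcher; both names, and the toy web graph, are illustrative (the learning-automata crawler in the paper learns such scores from experience rather than being given them):

```python
import heapq

def focused_crawl(seed_urls, score, fetch_links, budget):
    """Best-first focused crawl: always expand the frontier URL with the
    highest topic-relevance score, until `budget` pages are visited.
    `score(url)` and `fetch_links(url)` are caller-supplied stand-ins.
    """
    frontier = [(-score(u), u) for u in seed_urls]  # max-heap via negation
    heapq.heapify(frontier)
    visited, order = set(), []
    while frontier and len(order) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in fetch_links(url):
            if link not in visited:
                heapq.heappush(frontier, (-score(link), link))
    return order

# Toy web: scores stand in for on-topic-ness; high scores are crawled first.
web = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
scores = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.95}
order = focused_crawl(["a"], scores.get, lambda u: web.get(u, []), budget=3)
```

With this toy data the crawler visits `a`, then jumps to the high-scoring `c` before the low-scoring `b`, illustrating how scoring steers the crawl toward on-topic regions.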


Akbari Torkestani J.,Islamic Azad University of Arak
Soft Computing | Year: 2012

Given a graph G and a bound d ≥ 2, the bounded-diameter minimum spanning tree problem seeks a spanning tree on G of minimum weight subject to the constraint that its diameter does not exceed d. This problem is NP-hard, and several heuristics have been proposed to find near-optimal solutions to it in reasonable time. A decentralized learning automata-based algorithm creates spanning trees that honor the diameter constraint. The algorithm rewards a tree if it has the smallest weight found so far and penalizes it otherwise. As the algorithm proceeds, the choice probability of the best tree converges to one, and the algorithm halts when this probability exceeds a predefined value. Experiments confirm the superiority of the algorithm over other heuristics in terms of both speed and solution quality. © 2012 Springer-Verlag.
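A crude stochastic baseline for the problem, assuming small graphs, is to sample spanning trees by running Kruskal on shuffled edge orders and keep the lightest tree meeting the diameter bound. The paper's algorithm instead learns sampling probabilities that bias the search toward good trees; everything below is an illustrative simplification:

```python
import random

def tree_diameter(adj):
    """Diameter (in edges) of a tree given as {node: [neighbors]}, via double BFS."""
    def bfs_far(start):
        dist = {start: 0}
        queue = [start]
        for u in queue:                 # iterating while appending = BFS
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        far = max(dist, key=dist.get)
        return far, dist[far]
    a, _ = bfs_far(next(iter(adj)))
    _, d = bfs_far(a)
    return d

def random_bounded_tree(nodes, edges, d, tries, rng):
    """Keep the lightest randomly sampled spanning tree with diameter <= d."""
    best, best_w = None, float("inf")
    for _ in range(tries):
        parent = {v: v for v in nodes}  # union-find forest
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        adj = {v: [] for v in nodes}
        weight = 0
        for u, v, c in rng.sample(edges, len(edges)):  # random edge order
            ru, rv = find(u), find(v)
            if ru != rv:                # accept edge if it joins two components
                parent[ru] = rv
                adj[u].append(v)
                adj[v].append(u)
                weight += c
        if tree_diameter(adj) <= d and weight < best_w:
            best, best_w = adj, weight
    return best, best_w

# Complete graph on 4 nodes: the cheap star rooted at "a" (weight 3) meets d = 2.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 1), ("a", "c", 1), ("a", "d", 1),
         ("b", "c", 5), ("b", "d", 5), ("c", "d", 5)]
best, best_w = random_bounded_tree(nodes, edges, d=2, tries=200, rng=random.Random(0))
```

On four nodes every diameter-2 tree is a star, so the sampler's best feasible answer is always star-shaped; the learning-automata approach scales this idea by concentrating probability on edges of previously rewarded trees instead of sampling blindly.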


Asady B.,Islamic Azad University of Arak
Expert Systems with Applications | Year: 2010

Recently, Wang, Liu, Fan, and Feng (in press) proposed an approach, based on the LR deviation degree of fuzzy numbers, that overcomes the limitations of existing studies and simplifies the computational procedures. However, their ranking method has some problems. In this paper, we point out these problems in Wang's method and then propose a revised method for ranking fuzzy numbers that avoids them. Since the revised method is based on Wang's method, it ranks fuzzy numbers in a way similar to the original. The method is illustrated with numerical examples and compared with other methods. © 2009 Elsevier Ltd. All rights reserved.


Akbari Torkestani J.,Islamic Azad University of Arak
Ad Hoc Networks | Year: 2013

The connected dominating set (CDS) concept has recently emerged as a promising approach to area coverage in wireless sensor networks (WSNs). However, the major problem affecting the performance of existing CDS-based coverage protocols is that they aim to maximize the number of sleeping nodes to save energy. This places a heavy load on the active sensors (dominators), each of which must handle a large number of neighbors. The rapid exhaustion of the active sensors may disconnect the network topology and leave the area uncovered. Therefore, to strike a good trade-off between network connectivity, coverage, and lifetime, a proper number of sensors must be activated. This paper presents a degree-constrained minimum-weight extension of the CDS problem, called DCDS, to model area coverage in WSNs. A proper choice of the degree constraint of DCDS balances the network load on the active sensors and significantly improves the network coverage and lifetime. A learning automata-based heuristic named LAEEC is proposed for finding a near-optimal solution to the proxy-equivalent DCDS problem in WSNs. The computational complexity of the proposed algorithm to find a 1/(1-ε)-optimal solution of the area coverage problem is approximated. Several simulation experiments show the superiority of the proposed area coverage protocol over existing CDS-based methods in terms of control message overhead, percentage of covered area, residual energy, number of active nodes (CDS size), and network lifetime. © 2013 Elsevier B.V. All rights reserved.
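The effect of the degree constraint shows up even in a greedy toy version of the problem: activate the sensor covering the most uncovered nodes, but cap how many dominatees each active sensor serves. This sketch ignores the connectivity requirement that the real DCDS keeps, and all names and the topology are illustrative:

```python
def dcds_greedy(adj, max_load):
    """Greedy degree-constrained coverage sketch: repeatedly activate the
    sensor that covers the most uncovered nodes, but let each active
    sensor serve at most `max_load` neighbors (its dominatees).
    """
    uncovered = set(adj)
    active, served = [], {}
    while uncovered:
        def gain(v):
            if v in served:
                return -1  # already active
            covers_self = 1 if v in uncovered else 0
            return covers_self + min(max_load, len(set(adj[v]) & uncovered - {v}))
        v = max(adj, key=gain)
        clients = sorted(set(adj[v]) & uncovered - {v})[:max_load]  # cap the load
        served[v] = clients
        active.append(v)
        uncovered -= {v, *clients}
    return active, served

# Star topology: the hub "c" may serve only 2 of its 4 neighbors, so the
# remaining sensors must stay awake to cover themselves.
adj = {"c": ["s1", "s2", "s3", "s4"],
       "s1": ["c"], "s2": ["c"], "s3": ["c"], "s4": ["c"]}
active, served = dcds_greedy(adj, max_load=2)
```

Without the cap, the hub alone would dominate everything and drain first; with the cap, the load is spread across more active sensors, which is the trade-off the DCDS formulation tunes.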


Akbari Torkestani J.,Islamic Azad University of Arak
Decision Support Systems | Year: 2012

Recent years have witnessed the birth and explosive growth of the web, which has become a huge interconnected source of information in which finding a document without a search tool is unimaginable. Today's search engines try to provide the most relevant suggestions for user queries, and different strategies are used to enhance the precision of the information retrieval process. In this paper, a learning method is proposed to rank the web documents returned by a search engine. The proposed method takes advantage of user feedback to enhance the precision of the search results, using a learning automata-based approach to train the search engine. In this method, user feedback is defined as the user's interest in reviewing an item: within the search results, a document visited by the user is more likely to be relevant to the user's query, so its choice probability is increased by the learning automaton. In this way, the rank of the most relevant documents increases while that of the others decreases. To investigate the efficiency of the proposed method, extensive simulation experiments are conducted on well-known data collections. The obtained results show the superiority of the proposed approach over existing methods in terms of mean average precision, precision at position n, and normalized discounted cumulative gain. © 2012 Elsevier B.V. All rights reserved.
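The feedback loop described above can be sketched in a few lines of linear reward-inaction-style updating; the learning rate, document ids, and click pattern below are illustrative, not from the paper:

```python
def update_ranking(probs, clicked, a=0.05):
    """One feedback step: boost the choice probability of the clicked
    document and scale the others down, keeping the total at 1.
    `probs` maps document id -> choice probability for one query.
    """
    return {doc: p + a * (1 - p) if doc == clicked else p * (1 - a)
            for doc, p in probs.items()}

def ranked(probs):
    """Final ranking: documents sorted by choice probability."""
    return sorted(probs, key=probs.get, reverse=True)

# Four equally ranked candidates; the user repeatedly clicks d3.
probs = {"d1": 0.25, "d2": 0.25, "d3": 0.25, "d4": 0.25}
for _ in range(30):
    probs = update_ranking(probs, "d3")
```

After enough consistent feedback, `d3` rises to the top of the ranking while the unclicked documents sink, which is the mechanism behind the reported precision gains.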


Mehrabadi F.A.,Islamic Azad University of Arak
Materials and Design | Year: 2013

This paper deals with mode III and mixed-mode (III + II) delamination behavior of woven glass fiber reinforced polymer laminates, both numerically and experimentally. An improved test method was employed to study fracture under pure out-of-plane shear (mode III) using the edge crack torsion (ECT) specimen. Load-displacement curves indicated that the deviation-from-linearity load, PC(NL), and the maximum failure load, PC(Max), varied by about 35% with crack length. The actual R-curve, obtained by the compliance combination method, showed that fracture toughness rose with increasing crack length. In addition, finite element analyses suggested that a pure mode III fracture state exists in the mid-section of the ECT specimen, despite a small mode II component near the free edges of the delamination front. The six-point bending plate (6PBP) test was proposed and developed to obtain mixed-mode (III + II) ratios. In contrast to the pure mode III case, the experimental results showed a quasi-linear load-displacement curve up to failure. Complementing the fracture tests, three-dimensional finite element analyses were undertaken; the distributions of the mode I, mode II, and mode III strain energy release rates (SERRs) along the delamination front were evaluated by the stress intensity factor (SIF) approach and revealed non-uniform crack advance behavior in the 6PBP test. © 2012 Elsevier Ltd.


Torkestani J.A.,Islamic Azad University of Arak
Journal of Intelligent Information Systems | Year: 2012

Due to the massive amount of heterogeneous information on the web, insufficient and vague user queries, and the use of the same query by different users with different aims, the information retrieval process deals with a great deal of uncertainty. Under such circumstances, designing an efficient retrieval function and ranking algorithm that provides the most relevant results is of the greatest importance. In this paper, a learning automata-based ranking function discovery algorithm is proposed in which different sources of information are combined. The learning automaton adjusts the portion of the final ranking assigned to each source of evidence based on user feedback. All sources of information are initially given the same importance; the proportion of a given source increases if the documents it provides are reviewed by the user, and decreases otherwise. As the proposed algorithm proceeds, the probability of each source's appearance in the final ranking becomes proportional to its relevance to the user's queries. Several simulation experiments are conducted on well-known data collections and query types to show the performance of the proposed algorithm. The obtained results demonstrate that the proposed algorithm outperforms several existing methods in terms of precision at position n, mean average precision, and normalized discounted cumulative gain. © 2012 Springer Science+Business Media, LLC.
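The combination scheme can be illustrated with fixed weights; the source names, scores, and weights below are made up, and the paper's point is precisely that the automaton adapts the weights from feedback rather than fixing them as this sketch does:

```python
def combine_rankings(scores_by_source, weights):
    """Combine per-source relevance scores into one ranking via a
    weighted sum, then sort documents by combined score (descending).
    """
    combined = {}
    for source, scores in scores_by_source.items():
        for doc, s in scores.items():
            combined[doc] = combined.get(doc, 0.0) + weights[source] * s
    return sorted(combined, key=combined.get, reverse=True)

# Two evidence sources that disagree about which document is best.
sources = {"bm25":  {"d1": 0.9, "d2": 0.5},
           "links": {"d1": 0.1, "d2": 0.8}}
order_text_heavy = combine_rankings(sources, {"bm25": 0.7, "links": 0.3})
order_link_heavy = combine_rankings(sources, {"bm25": 0.2, "links": 0.8})
```

Shifting weight between the sources flips the final ranking, which is exactly the lever the learning automaton pulls as user feedback accumulates.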


Akbari Torkestani J.,Islamic Azad University of Arak
Cluster Computing | Year: 2012

Job scheduling is one of the most challenging issues in Grid resource management and strongly affects the performance of the whole Grid environment. The major drawback of existing Grid scheduling algorithms is that they cannot adapt to the dynamics of the resources and the network conditions. Furthermore, the network model used for resource information aggregation in most scheduling methods is centralized or semi-centralized; such methods neither scale well as the Grid grows nor perform well as environmental conditions change over time. This paper proposes a learning automata-based job scheduling algorithm for Grids. In this method, the workload placed on each Grid node is proportional to its computational capacity and varies with time according to the Grid constraints. The performance of the proposed algorithm is evaluated through several simulation experiments under different Grid scenarios, and the results are compared with those of several existing methods. Numerical results confirm the superiority of the proposed algorithm over the others in terms of makespan, flowtime, and load balancing. © 2011 Springer Science+Business Media, LLC.
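The capacity-proportional placement goal can be sketched greedily; the node names, capacities, and unit jobs are illustrative, and this static sketch omits the online adaptation to changing conditions that the automata provide:

```python
def proportional_assignment(jobs, capacities):
    """Assign each incoming job to the node whose post-assignment
    load-to-capacity ratio would be smallest, so that total load
    stays roughly proportional to computational capacity.
    """
    load = {n: 0.0 for n in capacities}
    placement = []
    for job in jobs:
        # Node that would be least loaded relative to its capacity.
        node = min(load, key=lambda n: (load[n] + job) / capacities[n])
        load[node] += job
        placement.append(node)
    return load, placement

# A node 4x as fast should receive about 4x the work.
caps = {"fast": 4.0, "slow": 1.0}
load, placement = proportional_assignment([1.0] * 10, caps)
```

With ten unit jobs, the fast node ends up with 8 and the slow node with 2, matching the 4:1 capacity ratio; balancing relative load this way is what shortens makespan and flowtime.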
