Coimbatore, India

Sashi K., SNR Sons College | Thanamani A.S., NGM College
Future Generation Computer Systems | Year: 2011

Grid computing is emerging as a key part of the infrastructure for a wide range of disciplines in science and engineering, including astronomy, high-energy physics, molecular biology and earth sciences. These applications handle large data sets that must be transferred and replicated among different grid sites. A data grid deals with data-intensive applications in scientific and enterprise computing; the technology was developed to permit data sharing across many organizations in geographically dispersed locations. Replicating data to different sites helps researchers around the world analyse results and initiate future experiments. The general idea of replication is to store copies of data in different locations so that data can easily be recovered if a copy at one location is lost or unavailable. In a large-scale data grid, replication provides a suitable solution for managing data files and enhances data reliability and availability. In this paper, a Modified BHR algorithm is proposed to overcome the limitations of the standard BHR algorithm. The algorithm is simulated using OptorSim, a data grid simulator developed by the European DataGrid project. The proposed algorithm improves performance by minimizing data access time and avoiding unnecessary replication. © 2010 Elsevier B.V. All rights reserved.
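The core decision behind avoiding unnecessary replication can be illustrated with a minimal sketch. This is not the paper's Modified BHR algorithm, only an assumed region-aware rule in the spirit of BHR: replicate a file into the requester's region only when no copy already exists there. All names (`should_replicate`, `region_of`, `replica_sites`) are hypothetical.

```python
def should_replicate(file_id, requester_region, replica_sites, region_of):
    """Replicate into the requester's region only when no replica of the
    file is already stored at any site in that region, avoiding
    unnecessary (redundant) replication."""
    return not any(region_of[site] == requester_region
                   for site in replica_sites.get(file_id, []))
```

Under this rule a request served from within the same region never triggers a new copy, which is one way to keep wide-area bandwidth consumption down.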


Vijendran A.S., SNR Sons College
International Review on Computers and Software | Year: 2014

This paper proposes a noise removal method for reducing noise in digital images. An efficient Rao-Blackwellized Particle Filter (RBPF) with a Maximum Likelihood Estimation (MLE) approach is used to improve the learning stage of the image structural model and to guide the particles in the most appropriate direction, increasing the efficiency of particle transitions. The proposal distribution is computed from conditionally Gaussian state-space models via Rao-Blackwellized particle filtering. The discrete state of operation is identified from continuous measurements corrupted by Gaussian noise, and the analytical structure of the model is used to compute the distribution of the continuous states, so the posterior distribution can be approximated by a recursive, stochastic mixture of Gaussians. Rao-Blackwellized particle filtering combines a particle filter (PF) with a bank of Kalman filters: the distribution of the discrete states is computed with the particle filter, while the distribution of the continuous states is computed with the bank of Kalman filters. Maximum likelihood estimation is used to model the noise distribution. RBPF with MLE is very effective in eliminating noise; it is compared with the particle filter, the Markov Random Field particle filter and plain RBPF. Several performance metrics are evaluated for this noise removal technique: Mean Square Error, Root Mean Square Error, Peak Signal-to-Noise Ratio, Normalized Absolute Error, Normalized Cross-Correlation, Mean Absolute Error and Signal-to-Noise Ratio. Experimental results show that RBPF with MLE outperforms the alternatives on degraded medical images. © 2014 Praise Worthy Prize S.r.l. - All rights reserved.
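The split of labour described above, a particle filter for the discrete mode and a Kalman filter for the continuous state, can be sketched for a one-dimensional conditionally Gaussian model. This is a generic illustrative RBPF step under assumed random-walk dynamics, not the paper's image model; all names and the mode/noise parameters are hypothetical.

```python
import math
import random

def kalman_update(mean, var, z, q, r):
    """One Kalman predict-correct step for a 1-D random walk with process
    noise q and measurement noise r; also returns the innovation likelihood."""
    mean_p, var_p = mean, var + q          # predict
    s = var_p + r                          # innovation variance
    k = var_p / s                          # Kalman gain
    mean_n = mean_p + k * (z - mean_p)     # correct
    var_n = (1 - k) * var_p
    lik = math.exp(-(z - mean_p) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
    return mean_n, var_n, lik

def rbpf_step(particles, z, modes, rng):
    """particles: list of (mode, mean, var, weight).
    modes: {mode: (q, r)}. The PF samples the discrete mode; a Kalman
    filter per particle handles the continuous state analytically, and
    weights are updated with the analytic innovation likelihood."""
    updated = []
    for mode, mean, var, w in particles:
        mode = rng.choice(list(modes))                      # PF part
        q, r = modes[mode]
        mean, var, lik = kalman_update(mean, var, z, q, r)  # Kalman part
        updated.append((mode, mean, var, w * lik))
    total = sum(w for *_, w in updated) or 1.0
    return [(m, mu, v, w / total) for m, mu, v, w in updated]
```

Because each particle's continuous state is a Gaussian (mean, var) rather than a sample, the posterior is exactly the recursive mixture of Gaussians mentioned in the abstract.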


Sashi K., SNR Sons College | Santhanam T., DG Vaishnav College
ARPN Journal of Engineering and Applied Sciences | Year: 2013

Grid computing is one of the fastest-emerging technologies in the high-performance computing environment. Grid deployments that require access to and processing of data are called data grids; they are optimized for data-oriented operation. In a data grid environment, data replication is an effective way to improve data accessibility. However, because storage is limited, a replica replacement strategy is needed to make dynamic replica management efficient. In this paper, a dynamic replacement strategy is proposed for a region-based network in which a weight is calculated for each replica to make the replacement decision. It is implemented using OptorSim, a data grid simulator developed by the European DataGrid project. © 2006-2013 Asian Research Publishing Network (ARPN).
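A weight-based replacement decision can be sketched as follows. The abstract does not give the weight formula, so the one below (favouring frequently accessed, recently used, small replicas, with exponential decay of age) is purely an assumption for illustration; the function and field names are hypothetical.

```python
import math

def replica_weight(hits, last_access, now, size, decay=0.5):
    """Hypothetical replica weight: more hits and more recent use raise
    the weight, larger size lowers it. Not the paper's actual formula."""
    return hits * math.exp(-decay * (now - last_access)) / size

def choose_victim(replicas, now):
    """When storage is full, evict the replica with the lowest weight."""
    return min(replicas, key=lambda r: replica_weight(
        r["hits"], r["last"], now, r["size"]))
```

The design choice here is standard for cache-style replacement: the eviction policy is isolated in a single scoring function, so alternative weightings can be swapped in without touching the storage manager.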


Sashi K., SNR Sons College | Thanamani A.S., NGM College
DSDE 2010 - International Conference on Data Storage and Data Engineering | Year: 2010

Grid computing is emerging as a key enabling infrastructure for a wide range of disciplines in science and engineering. Data grids provide distributed resources for large-scale applications that generate huge volumes of data. Data replication, a technique much discussed by data grid researchers in recent years, creates multiple copies of a file and stores them at suitable locations to shorten file access times. Two of the challenges in data replication are replica creation and replica placement. Dynamically creating replicas at suitable sites can increase system performance. When creating replicas, a decision must be made on when to create them and which files to replicate; this decision is based on file popularity. Replica placement then selects the best site at which each replica should be stored; placing replicas at appropriate sites reduces bandwidth consumption and job execution time. This paper discusses dynamic replica creation and replica placement. It is implemented using OptorSim, a data grid simulator developed by the European DataGrid project. © 2010 IEEE.
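The two decisions described above, which files to replicate (popularity) and where to place them, can be sketched with simple access-count heuristics. This is an assumed illustration, not the paper's strategy; the threshold rule and "most-requesting site" placement are hypothetical choices.

```python
from collections import Counter

def popular_files(access_log, threshold):
    """A file becomes a replication candidate once its access count
    (a simple popularity measure) reaches the threshold.
    access_log is a list of (file_id, site) request records."""
    counts = Counter(f for f, _site in access_log)
    return sorted(f for f, c in counts.items() if c >= threshold)

def best_site(access_log, file_id):
    """Place the new replica at the site that requested the file most
    often, so future accesses stay local."""
    sites = Counter(site for f, site in access_log if f == file_id)
    return sites.most_common(1)[0][0]
```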


Jeevarathinam R., SNR Sons College | Santhanam T., D G Vaishnav College
ARPN Journal of Engineering and Applied Sciences | Year: 2013

Software systems are generally scantily documented, and incomplete specifications in the documents result in high maintenance costs. To lower maintenance effort, automated tools are needed to help software developers understand their existing code base by extracting specifications from the software. Specification mining helps document intended software behaviour, supports software maintenance, refactoring and the addition of new features, and aids in detecting software bugs. In this paper a new technique called CT Trace Miner is proposed to efficiently mine software specifications from program execution traces. The specifications mined by the CT Trace Miner approach are then given as input to an ANOVA two-way feature selection approach to select the best features. Finally, the ARC-BC classifier is used to categorize the selected features. Experimental results show that the proposed approach provides better results than the existing approach. © 2006-2013 Asian Research Publishing Network (ARPN).
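The general idea of mining specifications from execution traces can be illustrated with a minimal sketch that extracts precedence rules ("event a occurs before event b") supported by enough traces. This is a generic trace-mining example, not the CT Trace Miner technique itself; the function name and rule form are assumptions.

```python
from collections import Counter

def mine_precedence_rules(traces, min_support):
    """Mine (a, b) pairs where a occurred before b, keeping pairs that
    hold in at least min_support of the execution traces."""
    pair_counts = Counter()
    for trace in traces:
        seen, pairs = set(), set()
        for event in trace:
            # every earlier event in this trace precedes the current one
            pairs.update((prev, event) for prev in seen)
            seen.add(event)
        pair_counts.update(pairs)        # count each pair once per trace
    return {p for p, c in pair_counts.items() if c >= min_support}
```

Rules mined this way (e.g. "open precedes close") are the kind of behavioural specification that could then be fed to a downstream feature-selection and classification stage.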
