VR Siddhartha Engineering College

Vijayawāda, India

Kiran Kumar R.,Krishna University | Saichandana B.,Gandhi Institute of Technology and Management | Srinivas K.,VR Siddhartha Engineering College
Indonesian Journal of Electrical Engineering and Computer Science | Year: 2016

This paper presents genetic algorithm based band selection and classification of a hyperspectral image data set. Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that is used to classify objects with great detail and precision. In this paper, filtering based on the 2-D Empirical Mode Decomposition method is first used to remove noisy components from each band of the hyperspectral data. After filtering, band selection is performed using a genetic algorithm in order to remove bands that convey little information. This dimensionality reduction lowers the storage, computational, and communication-bandwidth requirements imposed on the unsupervised classification algorithms. Next, image fusion is performed on the selected hyperspectral bands to selectively merge the maximum possible features from the selected images into a single image. This fused image is classified using a genetic algorithm. Indices such as the K-means Index (KMI) and the Jm measure are used as objective functions. This method yields higher classification accuracy and performance on hyperspectral images than classification without dimensionality reduction. © 2016 Institute of Advanced Engineering and Science. All rights reserved.
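The GA-based band-selection step can be sketched as a minimal genetic algorithm over binary band masks. The variance-based fitness below is a hypothetical stand-in for the paper's KMI/Jm objectives, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, cube):
    """Proxy fitness: total variance of the selected bands, penalised
    per band kept (a hypothetical stand-in for the paper's objectives)."""
    if mask.sum() == 0:
        return -np.inf
    var = cube[:, :, mask.astype(bool)].var(axis=(0, 1)).sum()
    return var - 0.1 * mask.sum()

def ga_band_select(cube, pop_size=20, generations=30, p_mut=0.05):
    n_bands = cube.shape[2]
    pop = rng.integers(0, 2, size=(pop_size, n_bands))
    for _ in range(generations):
        scores = np.array([fitness(ind, cube) for ind in pop])
        # truncation selection: keep the top half as parents
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_bands)        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_bands) < p_mut    # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.array(children)
    best = pop[np.argmax([fitness(ind, cube) for ind in pop])]
    return best.astype(bool)

# toy cube: 8x8 pixels, 12 bands, 3 high-variance informative bands
cube = rng.normal(0, 0.1, size=(8, 8, 12))
cube[:, :, [2, 5, 9]] += rng.normal(0, 2.0, size=(8, 8, 3))
selected = ga_band_select(cube)
```

The per-band penalty in the fitness plays the role of the dimensionality-reduction pressure described above: bands whose variance contribution is below the penalty are driven out of the mask.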


Saichandana B.,Jawaharlal Nehru Technological University Kakinada | Srinivas K.,VR Siddhartha Engineering College | Kiran Kumar R.,Krishna University
Indonesian Journal of Electrical Engineering and Computer Science | Year: 2016

Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that is used to classify objects with great detail and precision. This paper presents a hyperspectral image classification mechanism using a genetic algorithm, with empirical mode decomposition and image fusion used in the preprocessing stage. The 2-D Empirical Mode Decomposition method is used to remove noisy components from each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images into a single image. This fused image is classified using a genetic algorithm. Different indices, such as the K-means Index (KMI), Davies-Bouldin Index (DBI), and Xie-Beni Index (XBI), are used as objective functions. This method increases the classification accuracy of hyperspectral images. © 2016 Institute of Advanced Engineering and Science. All rights reserved.
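The feature-selective fusion step can be illustrated with a minimal pixel-wise max-activity rule: at each pixel, keep the value from the band that deviates most from its own mean. This is a common simplification, not necessarily the fusion rule used in the paper:

```python
import numpy as np

def fuse_bands(bands):
    """Pixel-wise max-activity fusion: at each pixel keep the value from
    the band whose deviation from that band's mean is largest (a
    hypothetical simplification of feature-selective image fusion)."""
    bands = np.asarray(bands, dtype=float)              # (B, H, W)
    activity = np.abs(bands - bands.mean(axis=(1, 2), keepdims=True))
    choose = activity.argmax(axis=0)                    # (H, W) band index
    rows, cols = np.indices(choose.shape)
    return bands[choose, rows, cols]

b1 = np.zeros((4, 4)); b1[0, 0] = 5.0      # feature only in band 1
b2 = np.zeros((4, 4)); b2[3, 3] = -7.0     # feature only in band 2
fused = fuse_bands([b1, b2])
```

The fused image keeps the strong feature from each source band, which is the behaviour the preprocessing stage above relies on.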


Swamy G.N.,VR Siddhartha Engineering College
Proceedings - 2012 International Conference on Communication, Information and Computing Technology, ICCICT 2012 | Year: 2012

In this paper the concepts of frequency-domain watermarking and secret sharing are combined to protect the copyright of a digital image. The proposed method employs XOR-based Visual Secret Sharing (XVSS) to split a watermark into two unexpanded parts, a public watermark and a private watermark. The private watermark is extracted from the features of the host image and the original watermark. The public watermark can be extracted from the controversial image at any time without the need for the original host image and/or watermark. Since the private watermark is needed to recover the original watermark, the proposed scheme falls under the category of semi-blind watermarking. The proposed watermarking technique aims to improve the security of similar watermarking schemes by processing the LL sub-band coefficients of the Discrete Wavelet Transform (DWT) of the host image to satisfy the central limit theorem. The simulation results reveal that the proposed method can resist a variety of common image processing manipulations. © 2012 IEEE.
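The XOR-based splitting and recovery at the heart of XVSS can be sketched in a few lines; the random feature map here is a hypothetical stand-in for the processed DWT LL-band features described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary (1-bit) watermark to protect.
watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

# Hypothetical binary feature map standing in for the paper's
# processed LL sub-band features of the host image.
host_features = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

# Splitting: the private share is watermark XOR features -- a 2-share
# XVSS with no pixel expansion. Alone, it reveals nothing about the
# watermark when the feature map is balanced.
private_share = watermark ^ host_features

# Recovery: XOR-ing the two shares restores the watermark exactly,
# the defining property of XOR-based (as opposed to OR-based) VSS.
recovered = private_share ^ host_features
```

Because recovery needs the private share, the scheme is semi-blind, exactly as the abstract states.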


Turlapaty A.,University of Maryland Eastern Shore | Turlapaty A.,VR Siddhartha Engineering College | Jin Y.,University of Maryland Eastern Shore
IEEE Transactions on Signal Processing | Year: 2016

In this paper, we consider the problem of multi-parameter estimation in the presence of compound Gaussian clutter for cognitive radar using the variational Bayesian method. The advantage of the variational Bayesian method is that the estimation of multivariate parameters is decomposed into univariate estimation problems via variational approximation, enabling analytically tractable approximate posterior densities in complex statistical models consisting of observed data, unknown parameters, and hidden variables. We derive the asymptotic Bayesian Cramér-Rao bounds and demonstrate by numerical simulations that the proposed approach achieves higher estimation accuracy than the expectation-maximization method and the exact Bayesian method in the case of non-Gaussian nonlinear signal models and small data sample sizes. © 2016 IEEE.
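The decomposition into univariate updates can be shown on the standard conjugate Gaussian mean-precision model (not the paper's compound Gaussian radar model): the joint posterior over (mu, tau) is approximated by a factorised q(mu)q(tau) and updated by coordinate ascent. Priors and data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=0.5, size=200)   # synthetic observations
N, xbar = len(x), x.mean()

# Weakly informative conjugate priors (hypothetical values):
# mu ~ N(mu0, (lam0 * tau)^-1), tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

# Coordinate-ascent VB: alternate the q(mu) and q(tau) updates,
# each of which only needs moments of the other factor.
E_tau = 1.0
for _ in range(50):
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)      # mean of q(mu)
    var_mu = 1.0 / ((lam0 + N) * E_tau)              # variance of q(mu)
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N * var_mu
                      + lam0 * ((mu_N - mu0) ** 2 + var_mu))
    E_tau = a_N / b_N                                # mean of q(tau)
```

Each factor update is a univariate problem with a closed form, which is the tractability the abstract highlights.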


Srinivasa Rao B.,VR Siddhartha Engineering College | Vaisakh K.,Andhra University
International Journal of Electrical Power and Energy Systems | Year: 2013

This paper presents a new multi-objective optimization approach based on an Adaptive Clonal Selection Algorithm (ACSA) to solve the complex Environmental/Economic Dispatch (EED) problem of thermal generators in a power system. The proposed methodology incorporates the power-demand equality constraint and enforces various operating constraint limits while solving the EED problem. In this algorithm, an adaptive clonal selection principle with a non-dominated sorting technique and crowding distance is used to find and manage the Pareto-optimal set. The clonal selection principle is one of the models used to capture the behavior of the artificial immune system; the biological principles of clone generation, proliferation, and maturation are mimicked and incorporated into the algorithm. To show the effectiveness of the proposed Multi-Objective Adaptive Clonal Selection Algorithm (MOACSA) in solving the EED problem, two test systems with various objectives have been considered: an IEEE 30-bus 6-unit test system and an 82-bus 10-unit Indian utility real-life power system network, solved without and with load uncertainty. Simulation results are compared with three other standard algorithms: Non-dominated Sorting Genetic Algorithm-II (NSGA-II), Multi-Objective Particle Swarm Optimization (MOPSO), and Multi-Objective Differential Evolution (MODE). © 2013 Elsevier Ltd. All rights reserved.
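The non-dominated ranking used to manage the Pareto-optimal set can be sketched directly. The (cost, emission) points below are hypothetical, and minimisation of both objectives is assumed:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimisation convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """First non-dominated front: the solutions no other point
    dominates, as ranked in NSGA-II-style multi-objective selection."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (fuel cost, emission) trade-off points
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = pareto_front(pts)
```

Within a front, a crowding-distance measure (not shown) then favours solutions in sparsely populated regions, preserving the spread of the trade-off curve.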


Kumari K.S.,VR Siddhartha Engineering College | Amulya B.,VR Siddhartha Engineering College | Prasad R.S.,Acharya Nagarjuna University
2014 International Conference on Circuits, Power and Computing Technologies, ICCPCT 2014 | Year: 2014

Software reliability is one of the most important characteristics of software quality. Measurement and management technologies employed during the software life cycle are essential for producing and maintaining quality, reliable software systems. Over the last several decades, many Software Reliability Growth Models (SRGMs) have been developed to help engineers and managers track and measure the growth of reliability as software is improved. Statistical process control (SPC) is a branch of statistics that combines rigorous time-series analysis methods with graphical presentation of data, often yielding insights into the data more quickly and in a way more understandable to lay decision makers. SPC has been applied to forecast software failures and improve software reliability. In this paper we propose a Pareto Type II distribution model with an order-statistics approach and apply SPC to monitor the failures. The proposed model is also compared with the Half Logistic distribution on time-domain data based on a Non-Homogeneous Poisson Process (NHPP). The parameters are estimated using maximum likelihood estimation. The failure data is analyzed with both models and the results are exhibited through control charts. © 2014 IEEE.
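A sketch of the kind of model and chart limits involved: an NHPP whose per-fault detection time follows a Pareto Type II (Lomax) distribution, with the 0.00135/0.5/0.99865 probability limits conventionally used in place of 3-sigma limits on SPC charts. The notation and parameter values are illustrative, not taken from the paper:

```python
def m(t, a, b, c):
    """NHPP mean value function m(t) = a * F(t), where F is the
    Pareto Type II (Lomax) CDF: F(t) = 1 - (1 + t/b)^(-c).
    a = expected total failures; b, c = scale and shape (illustrative)."""
    return a * (1.0 - (1.0 + t / b) ** (-c))

def control_limits(a):
    """Probability-based chart limits: LCL, CL, UCL at the 0.00135,
    0.5 and 0.99865 fractions of the expected total failures a."""
    return 0.00135 * a, 0.5 * a, 0.99865 * a

a, b, c = 100.0, 50.0, 1.5          # hypothetical MLE estimates
lcl, cl, ucl = control_limits(a)
```

Plotting the fitted m(t) at successive failure times against these limits flags failure behaviour that drifts out of statistical control.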


Raj V.N.P.,VR Siddhartha Engineering College | Venkateswarlu T.,Sri Venkateswara University
2011 IEEE Recent Advances in Intelligent Computational Systems, RAICS 2011 | Year: 2011

In medical diagnosis, operations such as feature extraction and object recognition play a key role. These tasks become difficult if the images are corrupted by noise, so the development of effective noise-removal algorithms has become an important research area. Designing image denoising algorithms is difficult because the fine details in a medical image that carry diagnostic information must not be destroyed during noise removal. Many wavelet-based denoising algorithms use the Discrete Wavelet Transform (DWT) in the decomposition stage, which suffers from shift variance. To overcome this, this paper proposes a denoising method that uses the Undecimated Wavelet Transform to decompose the image and applies a shrinkage operation to eliminate the noise. In the shrinkage step we used semi-soft and Stein thresholding operators along with the traditional hard and soft thresholding operators, and verified the suitability of different wavelet families for denoising medical images. The results show that images denoised using the Undecimated Discrete Wavelet Transform (UDWT) have a better balance between smoothness and accuracy than those denoised with the DWT. We used the Structural Similarity Index Measure (SSIM) along with PSNR to assess the quality of the denoised images. © 2011 IEEE.
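The four shrinkage operators named above can be written down directly as functions of a coefficient array and threshold(s); the semi-soft (firm) and Stein forms follow common literature definitions, which may differ in detail from the paper's:

```python
import numpy as np

def hard(w, t):
    """Hard threshold: zero coefficients at or below t, keep the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft(w, t):
    """Soft threshold: shrink surviving coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def semi_soft(w, t1, t2):
    """Semi-soft (firm) threshold: zero below t1, linear ramp on
    (t1, t2], identity above t2 (requires t1 < t2)."""
    aw = np.abs(w)
    ramp = np.sign(w) * t2 * (aw - t1) / (t2 - t1)
    return np.where(aw <= t1, 0.0, np.where(aw > t2, w, ramp))

def stein(w, t):
    """Stein-type (non-negative garrote style) threshold:
    w * max(1 - t^2 / w^2, 0)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = w * np.maximum(1.0 - (t ** 2) / (w ** 2), 0.0)
    return np.where(w == 0.0, 0.0, shrunk)

w = np.array([-3.0, -1.0, 0.0, 0.5, 2.0])   # sample detail coefficients
```

In a UDWT pipeline, these operators are applied to the detail coefficients of each level before the inverse transform; hard keeps surviving coefficients unbiased, soft gives smoother results, and the semi-soft and Stein forms interpolate between the two.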


Raj V.N.P.,VR Siddhartha Engineering College | Venkateswarlu T.,Sri Venkateswara University
ICECT 2011 - 2011 3rd International Conference on Electronics Computer Technology | Year: 2011

The electrocardiogram (ECG) is a technique for recording the bioelectric currents generated by the heart, which helps clinicians evaluate the condition of a patient's heart. It is therefore very important to obtain the parameters of the ECG signal free of noise. Many wavelet-based denoising algorithms use the Discrete Wavelet Transform (DWT) in the decomposition stage, which suffers from shift variance. To overcome this, this paper proposes a denoising method that uses the Undecimated Wavelet Transform to decompose the raw ECG signal and applies a shrinkage operation to eliminate the noise. In the shrinkage step we used semi-soft and Stein thresholding operators along with the traditional hard and soft thresholding operators, and verified the suitability of different wavelet families for denoising ECG signals. The results show that signals denoised using the Undecimated Discrete Wavelet Transform (UDWT) have a better balance between smoothness and accuracy than those denoised with the DWT. © 2011 IEEE.


Ramanaiah K.,VR Siddhartha Engineering College | Ratna Prasad A.V.,VR Siddhartha Engineering College | Hema Chandra Reddy K.,Jawaharlal Nehru Technological University Anantapur
Materials and Design | Year: 2013

The objective of the present work is to introduce sansevieria natural fiber as reinforcement in the preparation of partially biodegradable green composites. The effect of fiber content on the mechanical properties of the composite was investigated; the tensile strength and impact strength at maximum fiber content were found to be 2.55 and 4.2 times those of the pure resin, respectively. The transverse thermal conductivity of unidirectional composites was investigated experimentally by a guarded heat flow meter method. The thermal conductivity of the composite decreased with increasing fiber content, while the opposite trend was observed with respect to temperature. In addition, the experimental thermal conductivities at different volume fractions were compared with a theoretical model. The response of the specific heat capacity of the composite to temperature, as measured by differential scanning calorimetry, was discussed. The lowest thermal diffusivity of the composite was observed at 90°C, with a value of 0.9948 × 10^-7 m^2/s. The fire behavior of the composite was studied using the oxygen-consumption cone calorimeter technique. The addition of sansevieria fiber effectively reduced the heat release rate (HRR) and peak heat release rate (PHRR) of the matrix by 10.4% and 25.7%, respectively. However, the composite ignites earlier and releases more carbon dioxide and total smoke during combustion than neat polyester resin. © 2013 Elsevier Ltd.


Ramanaiah K.,VR Siddhartha Engineering College | Ratna Prasad A.V.,VR Siddhartha Engineering College | Hema Chandra Reddy K.,Jawaharlal Nehru Technological University Anantapur
Materials and Design | Year: 2012

The main focus of this study is to utilize waste grass broom natural fibers as reinforcement and polyester resin as the matrix for making partially biodegradable green composites. Thermal conductivity, specific heat capacity, and thermal diffusivity of the composites were investigated as functions of fiber content and temperature. The waste grass broom fiber has a tensile strength of 297.58 MPa, a modulus of 18.28 GPa, and an effective density of 864 kg/m^3. The volume fraction of fibers in the composites was varied from 0.163 to 0.358. The thermal conductivity of unidirectional composites was investigated experimentally by a guarded heat flow meter method. The results show that the thermal conductivity of the composite decreased with increasing fiber content, while the opposite trend was observed with respect to temperature. Moreover, the experimental thermal conductivities at different volume fractions were compared with two theoretical models. The specific heat capacity of the composite, as measured by differential scanning calorimetry, showed a trend similar to that of the thermal conductivity. The variation in thermal diffusivity with respect to fiber volume fraction and temperature was not significant. The tensile strength and tensile modulus of the composites showed maximum improvements of 222% and 173%, respectively, over the pure matrix. The work of fracture of the composites with the maximum volume fraction of fibers was found to be 296 J/m. © 2012 Elsevier Ltd.
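The two classic rule-of-mixtures bounds are the usual starting point for theoretical models of composite conductivity; the abstract does not state which two models the authors used, and the conductivity values below are hypothetical:

```python
def parallel_model(vf, kf, km):
    """Rule of mixtures (upper bound): heat conduction along the
    fibres, k = vf*kf + (1 - vf)*km."""
    return vf * kf + (1.0 - vf) * km

def series_model(vf, kf, km):
    """Inverse rule of mixtures (lower bound): conduction across the
    fibres, the usual first model for transverse conductivity."""
    return 1.0 / (vf / kf + (1.0 - vf) / km)

# Hypothetical conductivities (W/m K): natural fibre below polyester
kf, km = 0.12, 0.17
k_low_vf = series_model(0.163, kf, km)    # fibre fractions from the study
k_high_vf = series_model(0.358, kf, km)
```

With a fibre less conductive than the matrix, both models predict conductivity falling as fibre fraction rises, consistent with the trend reported above.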
