
The Loyola-ICAM College of Engineering and Technology is an engineering and technology school in Chennai, India. It is approved by AICTE and affiliated with Anna University, Chennai. It is a Jesuit institution.

Alamelu S.,Loyola-ICAM College of Engineering and Technology | Baskar S.,Thiagarajar College of Engineering | Babulal C.K.,Thiagarajar College of Engineering
International Journal of Electrical Power and Energy Systems | Year: 2015

This paper addresses an application of evolutionary algorithms to the optimal siting and sizing of a Unified Power Flow Controller (UPFC), formulated as single- and multi-objective optimization problems. The decision variables considered in the optimization are the optimal location (both the line and the distance of the UPFC from the sending end), the control parameters of the UPFC, and the system reactive power reserves. The objectives are minimization of total costs, including the installation cost of the UPFC, and enhancement of the loadability limit. To reduce the modelling complexity and the number of variables and constraints, the transformer model of the UPFC is used for simulation. The CMAES and NSGA-II algorithms are applied to the optimal siting and sizing of the UPFC on the IEEE 14- and 30-bus test systems; the NSGA-II algorithm is additionally tested on the IEEE 118-bus system to demonstrate its applicability to large systems. To validate the results obtained with the transformer model, results using other UPFC models are also considered. In the single-objective problem, the CMAES algorithm with the transformer model yields better results than the other UPFC models, and statistical analysis over 20 independent trials of the CMAES algorithm confirms these results. To validate the NSGA-II results with the transformer model, a reference Pareto front is generated by multiple runs of the CMAES algorithm minimizing a weighted objective. In the multi-objective problem, the similarity between the generated Pareto front and this reference Pareto front validates the results obtained. © 2015 Elsevier Ltd. All rights reserved.
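A toy sketch of the weighted-objective formulation the abstract uses to generate its reference Pareto front: candidate (line, position, size) sitings are scored by a convex combination of installation cost and (negated) loadability gain, and the minimum is picked. All cost and loadability functions, line IDs, and parameter values below are illustrative placeholders, not the paper's power-flow model.

```python
# Hypothetical UPFC installation cost (arbitrary units).
def installation_cost(size_mva):
    return 0.003 * size_mva ** 2 + 20.0 * size_mva + 1500.0

# Hypothetical loadability gain for a UPFC of a given size placed at a
# fractional position along one of three candidate lines.
def loadability(line, position, size_mva):
    line_factor = {3: 1.00, 7: 1.15, 12: 0.90}[line]
    return line_factor * size_mva * (1.0 - abs(position - 0.5))

def weighted_objective(line, position, size_mva, w=0.5):
    # Minimise cost, maximise loadability -> subtract the (scaled) gain term.
    return (w * installation_cost(size_mva)
            - (1 - w) * 100.0 * loadability(line, position, size_mva))

def best_siting(candidates, w=0.5):
    return min(candidates, key=lambda c: weighted_objective(*c, w=w))

candidates = [(line, pos / 10, size)
              for line in (3, 7, 12)
              for pos in range(11)
              for size in (10, 20, 40)]
line, position, size = best_siting(candidates)
```

Sweeping the weight `w` from 0 to 1 and collecting the winners is what traces out a reference Pareto front in the weighted-sum sense.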

Karuppanan K.,Anna University | Jayakumaran C.,Loyola-ICAM College of Engineering and Technology
International Journal of Environment and Pollution | Year: 2013

Regionalisation organises a large set of spatial objects into spatially contiguous regions while optimising the homogeneity of the derived regions and representing social and economic geography. To address this problem, the regions must be classified into groups that are homogeneous in their air quality attributes. The aim is to develop a system that applies data-mining techniques to study the distribution of air pollutants in Chennai, a metropolitan city in India, using vehicular ad hoc networking, and to map that distribution on a geographical map for effective policy making. In conventional regionalisation methods, each data point is assigned to a single region in the multidimensional attribute space that affects the air pollution response. However, some data points have distributed membership across more than one region and cannot be justifiably allocated to a single region. A rough set-based clustering technique is therefore applied to the regionalisation problem to resolve vague or overlapping regions. The overlapping regions are restructured to guarantee the homogeneity of the regions formed or altered. Cluster validity tests confirm the effectiveness of rough set-based regionalisation in air quality modelling. © 2013 Inderscience Enterprises Ltd.
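A minimal sketch of the rough-set assignment idea described above: a data point whose two nearest region prototypes are almost equally close is placed in the upper approximation (boundary) of both regions instead of being forced into one. The 1-D centroids and the ratio threshold below are illustrative, not the paper's air-quality data or parameters.

```python
def rough_assign(point, centroids, epsilon=1.2):
    """Assign a point to a cluster's lower approximation if it is clearly
    closest to one centroid; otherwise place it in the upper approximations
    of the two nearest clusters (a boundary / overlapping point)."""
    dists = sorted((abs(point - c), i) for i, c in enumerate(centroids))
    (d1, c1), (d2, c2) = dists[0], dists[1]
    if d1 == 0 or d2 / d1 > epsilon:
        return {"lower": c1, "upper": {c1}}       # unambiguous member
    return {"lower": None, "upper": {c1, c2}}     # boundary point

centroids = [10.0, 20.0]                # hypothetical region prototypes (1-D)
clear = rough_assign(12.0, centroids)   # clearly closest to centroid 0
fuzzy = rough_assign(15.2, centroids)   # nearly equidistant -> overlapping
```

Points that land only in upper approximations are exactly the "vague or overlapping" cases the abstract says conventional hard assignment cannot justify.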

Ramachandran B.,University of West Florida | Ramanathan A.,Loyola-ICAM College of Engineering and Technology
2015 Clemson University Power Systems Conference, PSC 2015 | Year: 2015

The recent push for electrified vehicles, including both plug-in hybrid vehicles and pure electric vehicles, may further increase the peak electrical load if left unmitigated, resulting in more demand for generation and transmission capacity. Fortunately, PEVs can be treated as controllable loads, or even as power sources in exceptional situations, for demand side management (DSM). Although a centralized approach can perform demand side management effectively, the privacy of consumer information has not yet been adequately considered. The objective function for this multi-objective problem of demand-side power management and decentralized control is the total electricity generation cost plus the cost of implementing the demand side management programs. Saving a unit of electricity through DSM can thus be treated like producing a unit of electricity at a power plant. The main constraint related to DSM is that the maximum expected saving achievable through DSM is capped at a realistic limit. The framework is flexible and can incorporate any meta-heuristic for multi-objective optimization. The approach is applied to a test system comprising 2516 domestic consumers, 296 small consumption firms, 150 medium consumption firms and 4 large consumption firms. It is observed that PEVs can exchange information with the grid to shape their effect on the overall load. The numerical results also show that this approach improves PEV market penetration, especially relative to centralized strategies that could deter consumers who wish to determine their charging strategy independently. © 2015 IEEE.
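A toy illustration of the cost trade-off described above: a unit of demand saved by DSM is valued like a unit generated, and total DSM savings are capped at a realistic maximum. The prices, the cap, and the load figures are made-up numbers, not values from the paper.

```python
GEN_COST = 50.0   # hypothetical generation cost, $/MWh
DSM_COST = 30.0   # hypothetical cost of achieving 1 MWh of demand reduction
DSM_CAP = 100.0   # realistic maximum achievable reduction, MWh

def total_cost(demand_mwh, dsm_mwh):
    """Generation cost plus DSM-program cost, with DSM savings capped."""
    dsm_mwh = min(dsm_mwh, DSM_CAP)        # hard cap on expected savings
    generated = demand_mwh - dsm_mwh       # saved units need not be generated
    return generated * GEN_COST + dsm_mwh * DSM_COST

baseline = total_cost(1000.0, 0.0)      # no DSM
with_dsm = total_cost(1000.0, 100.0)    # DSM pushed to the cap
```

Because DSM cost per unit is below the generation cost in this sketch, running DSM up to its cap strictly lowers the total objective, which is what makes "a saved unit is a produced unit" a useful modelling device.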

Senthilkumar K.K.,Anna University | Kunaraj K.,Loyola-ICAM College of Engineering and Technology | Seshasayanan R.,Anna University
Eurasip Journal on Image and Video Processing | Year: 2015

The discrete cosine transform (DCT) plays an important role in lossy compression, representing the pixel values of an image with a smaller number of coefficients, and many algorithms have been devised to compute it. In the initial stage of image compression, the image is generally subdivided into smaller subblocks, and these subblocks are converted into DCT coefficients. In this paper, we present a novel DCT architecture that reduces power consumption by decreasing the computational complexity based on the correlation between two successive rows. Unnecessary forward DCT computations in each 8 × 8 sub-image are eliminated, yielding a significant reduction in forward DCT computation for the whole image. The algorithm is verified on various highly and weakly correlated images, and the results show that image quality is not much affected when only the 4 most significant bits per pixel are considered for the row comparison. The proposed architecture is synthesized using Cadence SoC Encounter® with the TSMC 180 nm standard cell library. It consumes 1.257 mW instead of 8.027 mW when the pixels of two rows differ only slightly. The experimental results show that the proposed DCT architecture reduces the average power consumption by 50.02 % and the total processing time by 61.4 % for highly correlated images; for weakly correlated images, the reductions in power consumption and total processing time are 23.63 % and 35 %, respectively. © 2015, Senthilkumar et al.
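A sketch of the row-similarity test the abstract describes: only the 4 most significant bits of each 8-bit pixel are compared, and the forward DCT of a row can be skipped (reusing the previous row's coefficients) when the rows match. The 4-MSB comparison comes from the abstract; the sample rows are toy data.

```python
def msb4(row):
    """Keep the top 4 of 8 bits of each pixel (values 0-255 -> 0-15)."""
    return [p >> 4 for p in row]

def rows_similar(row_a, row_b):
    """Two rows are 'similar' if they agree in their 4 MSBs pixel-by-pixel,
    so low-bit noise between successive rows does not trigger a new DCT."""
    return msb4(row_a) == msb4(row_b)

row1 = [120, 122, 119, 121, 118, 120, 123, 121]
row2 = [121, 123, 118, 120, 119, 121, 122, 120]   # differs only in low bits
row3 = [200, 201, 199, 198, 202, 200, 197, 203]   # different brightness band

skip_row2 = rows_similar(row1, row2)   # reuse row1's DCT coefficients
skip_row3 = rows_similar(row2, row3)   # must compute DCT for row3
```

In a highly correlated image most successive rows pass this test, which is why the savings the paper reports are much larger there than for weakly correlated images.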

Nirmala D.E.,Loyola-ICAM College of Engineering and Technology | Vaidehi V.,Anna University
2015 IEEE International Conference on Computer Graphics, Vision and Information Security, CGVIS 2015 | Year: 2015

With recent developments in visual sensor technology, multiple imaging sensors are used in several applications, such as surveillance, medical imaging and machine vision, to improve their capabilities. The goal of an efficient image fusion algorithm is to combine the visual information obtained from a number of disparate imaging sensors into a single fused image without introducing distortion or losing information. Existing fusion algorithms employ either the mean or the choose-max fusion rule to select the best features for fusion: the choose-max rule distorts constant background information, whereas the mean rule blurs edges. In this paper, two feature-level fusion schemes based on the Non-Subsampled Contourlet Transform (NSCT) are proposed and compared. In the first method, fuzzy logic is applied to determine the weight assigned to each segmented region from the computed salient-region feature values. The second method employs the Golden Section Algorithm (GSA) to obtain the optimal fusion weight of each region based on its Petrovic metric. The regions are then merged adaptively using the determined weights. Experiments show that the proposed feature-level fusion methods provide better visual quality, with clearer edge information and better objective quality metrics, than individual multi-resolution methods such as the Dual Tree Complex Wavelet Transform and NSCT. © 2015 IEEE.
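A minimal sketch of the region-weighted merge step described above: each segmented region of the fused output is a convex combination of the two source images with a per-region weight. Here the weights are fixed by hand for illustration; in the paper they come from the fuzzy-logic rules or the Golden Section Algorithm, and the blending happens on NSCT coefficients rather than raw pixels.

```python
def fuse_region(region_a, region_b, weight):
    """Blend one region from two source images with a single weight:
    weight=1 keeps image A, weight=0 keeps image B."""
    return [weight * a + (1 - weight) * b for a, b in zip(region_a, region_b)]

# Tiny 1-D stand-ins for two co-registered source images.
img_a = [10.0, 20.0, 30.0, 40.0]
img_b = [40.0, 30.0, 20.0, 10.0]

# (start, end, weight): hypothetical segmentation with a weight per region.
regions = [(0, 2, 0.75), (2, 4, 0.25)]

fused = []
for start, end, w in regions:
    fused.extend(fuse_region(img_a[start:end], img_b[start:end], w))
```

A global weight of 0.5 would reduce this to the mean rule, and weights forced to 0 or 1 would reduce it to choose-max; the per-region adaptive weight is what avoids both failure modes the abstract mentions.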
