Inamdar K.,Sardar Vallabhbhai National Institute of Technology, Surat | Kosta Y.P.,Marwadi Education Foundation Group of Institutions | Patnaik S.,Xavier Institute of Engineering
Radioelectronics and Communications Systems | Year: 2015

Metamaterials have been an attractive research topic in electromagnetics in recent years. In this paper, a criss-cross structure inspired by the famous Jerusalem Cross is suggested. The software analysis of the proposed unit-cell structure has been validated experimentally, confirming negative values of ɛ and μ. A microstrip patch antenna based on the suggested metamaterial has then been designed. The theory and design formulas for calculating the various parameters of the proposed antenna are presented. The design of the metamaterial-based microstrip patch antenna has been optimized to provide improved gain, bandwidth and multiple-frequency operation. All antenna performance parameters are compared and presented in tables and response graphs. It has also been observed that the physical dimensions of the metamaterial-based patch antenna are smaller than those of its conventional counterpart operating in the same frequency band. The response of the patch antenna has also been verified experimentally. An important part of the research was to develop metamaterials based on signature structures and techniques that offer advantages in bandwidth and multiple-frequency operation, as demonstrated in the paper. The unique shape suggested in this paper improves bandwidth without reducing the gain of the antenna. © 2015, Allerton Press, Inc.
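The conventional rectangular patch that serves as the baseline for such comparisons is commonly sized with the transmission-line design formulas; a minimal sketch follows. The substrate values in the example call (2.4 GHz, FR-4-like permittivity, 1.6 mm height) are illustrative assumptions, not values from the paper.

```python
import math

def patch_dimensions(f0, er, h):
    """Transmission-line model for a rectangular microstrip patch.
    f0: resonant frequency [Hz], er: substrate relative permittivity,
    h: substrate height [m]. Returns (width, length) in metres."""
    c = 3e8
    W = c / (2 * f0) * math.sqrt(2 / (er + 1))          # patch width
    eps_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) \
         / ((eps_eff - 0.258) * (W / h + 0.8))          # fringing extension
    L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL      # physical length
    return W, L

W, L = patch_dimensions(2.4e9, 4.4, 1.6e-3)  # assumed example substrate
```

For these assumed inputs the formulas give a patch a few centimetres on a side, which is the conventional-size reference a miniaturized metamaterial design would be measured against.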

Deshpande A.M.,Sardar Vallabhbhai National Institute of Technology, Surat | Deshpande A.M.,College of Engineering, Pune | Patnaik S.,Xavier Institute of Engineering
11th IEEE India Conference: Emerging Trends and Innovation in Technology, INDICON 2014 | Year: 2015

This paper presents an effective technique for removing spatially variant motion blur from a single image. Motion blur occurs during image capture due to relative motion between the capturing device and the scene; the blur becomes spatially variant when it varies with position in the image. Removing such space/shift-variant blur from a single image is a challenging problem. To solve it, the blurred image is divided into smaller subimages under the assumption that each subimage is uniformly blurred. In the proposed technique, each subimage is transformed into the frequency domain to estimate the motion blur parameters: a dual Fourier spectrum computation yields the blur length, and a Radon transform step yields the blur angle. The estimated parameters are used to build a local parametric blur model for each subimage, and the blurry subimages are then deconvolved with these models using parametric Wiener filtering to restore the original image. Because of the piecewise-uniform assumption, the technique introduces some blocking artifacts, which are removed by post-processing the restored image. To demonstrate its usefulness for natural scenes and real blurred images, the technique is tested on the Berkeley Segmentation dataset and standard test images, and it shows effective deblurring under spatially variant conditions. © 2014 IEEE.
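The frequency-domain idea behind the blur-length step can be illustrated with a rough 1-D sketch (not the authors' exact pipeline): a length-L box-blur PSF puts periodic nulls in the magnitude spectrum at multiples of N/L, so locating the first null recovers L. The Radon-transform step for the blur angle is omitted here.

```python
import numpy as np

def estimate_blur_length(blurred_row):
    """Locate the first spectral null away from DC; for a length-L box
    PSF (circular blur) the nulls sit at multiples of N / L."""
    N = len(blurred_row)
    mag = np.abs(np.fft.fft(blurred_row))
    floor = 1e-6 * mag[1:N // 2].max()   # anything this small is a null
    for k in range(1, N // 2):
        if mag[k] < floor:
            return round(N / k)
    return None

# synthetic check: circularly blur a random row with an 8-sample box PSF
rng = np.random.default_rng(0)
row = rng.random(256)
psf = np.ones(8) / 8
blurred = np.real(np.fft.ifft(np.fft.fft(row) * np.fft.fft(psf, 256)))
```

On this synthetic row the first null appears at bin 32, giving the correct length 256 / 32 = 8.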

Jose J.,Sardar Vallabhbhai National Institute of Technology, Surat | Patel J.N.,Sardar Vallabhbhai National Institute of Technology, Surat | Patnaik S.,Xavier Institute of Engineering
Optik | Year: 2015

The recent past has witnessed the use of sparse coding of images with learned dictionaries for image compression, denoising and deblurring. However, few of the works reported in the literature address how to determine the optimum size of the dictionary to be learned and the extent of learning required; practice largely depends on trial and error. This paper analyses the dictionary learning process and models it using multiple regression analysis, a mathematical tool for determining the statistical relationship among variables. The model can be used as a reference when learning dictionaries from the same training set for different applications. Although the analysis returns a well-fitting model, it lacks generality because of the specific training image set used. Nevertheless, such an analysis is extremely useful when learning a dictionary from a larger or content-specific image set. © 2015 Elsevier GmbH.
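The modelling idea can be sketched with ordinary least squares: regress an observed quality measure on the learning parameters. The measurements and variable choices below (dictionary size K, training iterations T, reconstruction RMSE) are entirely hypothetical stand-ins for the paper's data.

```python
import numpy as np

# Hypothetical runs: dictionary size K, training iterations T, and the
# reconstruction RMSE observed for each (K, T) combination.
K = np.array([64, 128, 256, 64, 128, 256], dtype=float)
T = np.array([10, 10, 10, 30, 30, 30], dtype=float)
rmse = np.array([9.1, 7.8, 6.9, 8.4, 7.1, 6.2])

# Multiple regression: rmse ≈ b0 + b1*K + b2*T, fit by least squares
X = np.column_stack([np.ones_like(K), K, T])
beta, *_ = np.linalg.lstsq(X, rmse, rcond=None)
predicted = X @ beta
```

With such a fit, the signs and magnitudes of b1 and b2 indicate how much additional dictionary size or training is worth before returns diminish, which is the trial-and-error question the regression model is meant to replace.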

Deshpande A.M.,College of Engineering, Pune | Deshpande A.M.,Sardar Vallabhbhai National Institute of Technology, Surat | Patnaik S.,Sardar Vallabhbhai National Institute of Technology, Surat | Patnaik S.,Xavier Institute of Engineering
Optik | Year: 2014

In this paper we propose a single-image motion deblurring algorithm based on a novel use of the dual Fourier spectrum combined with a bit-plane slicing algorithm and the Radon transform (RT) for accurate estimation of PSF parameters such as blur length and blur angle. Even after very accurate PSF estimation, deconvolution algorithms tend to introduce ringing artifacts at boundaries and near strong edges. To prevent this post-deconvolution effect, a post-processing method is also proposed within the framework of the traditional Richardson-Lucy (RL) deconvolution algorithm. Experimental results, evaluated with both qualitative and quantitative (PSNR, SSIM) metrics on datasets of grayscale and color blurred images, show that the proposed method outperforms existing algorithms for removal of uniform blur. A comparison with state-of-the-art methods confirms the usefulness of the proposed algorithm for deblurring real-life images and photographs. © 2014 Elsevier GmbH. All rights reserved.
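The traditional RL iteration that the post-processing wraps around can be sketched in one dimension; this is the generic textbook update, not the paper's implementation.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution (1-D sketch): multiplicative
    updates that keep the estimate non-negative."""
    est = np.full_like(blurred, blurred.mean())   # flat positive init
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # avoid divide-by-0
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# toy check: two spikes blurred by a 5-sample box PSF
x = np.zeros(64)
x[20], x[40] = 1.0, 2.0
psf = np.ones(5) / 5
b = np.convolve(x, psf, mode="same")
est = richardson_lucy(b, psf)
```

The ringing the paper targets appears precisely when such iterations run against strong edges; the sketch shows only the core update that the proposed post-processing would follow.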

Patel J.N.,Sardar Vallabhbhai National Institute of Technology, Surat | Jose J.,Sardar Vallabhbhai National Institute of Technology, Surat | Patnaik S.,Xavier Institute of Engineering
IEICE Transactions on Information and Systems | Year: 2015

The concept of sparse representation has been gaining momentum in image processing applications, especially image compression, over the last decade. Sparse coding algorithms represent signals as a sparse linear combination of atoms of an overcomplete dictionary. Earlier work shows that sparse coding of images using learned dictionaries outperforms the JPEG standard for image compression. The conventional sparse-coding approach to image compression, though successful, does not adapt the compression rate to the local characteristics of image blocks. Here, we propose a new framework in which the image blocks are classified into three classes by measuring block activity, and each class is then sparse-coded using a dictionary learned specifically for that class. The K-SVD algorithm is used for dictionary learning. The sparse coefficients for each class are Huffman-encoded and combined into a single bit stream. The model imparts rate-distortion attributes to the compression, since a different constraint can be set for each class depending on its characteristics. We analyse and compare this model with the conventional one. The outcomes are encouraging, and the model paves the way for efficient sparse-representation-based image compression. Copyright © 2015 The Institute of Electronics, Information and Communication Engineers
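Coding a block over a learned dictionary is typically done with a greedy pursuit; below is a minimal Orthogonal Matching Pursuit sketch (OMP is the pursuit commonly paired with K-SVD, though the paper's exact coder and per-class sparsity constraints may differ).

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then refit all picked atoms by least squares.
    D columns are assumed unit-norm."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# toy check: a signal built from 3 atoms of a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
y = 1.5 * D[:, 5] + 0.8 * D[:, 17] - 0.6 * D[:, 30]
x = omp(D, y, 3)
```

A class-adaptive scheme like the paper's would call such a coder with a different dictionary D and a different sparsity budget n_nonzero per activity class.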

Santiago J.R.,Santa Clara University | Samuel S.J.,Xavier Institute of Engineering | Sawn R.,Xavier Institute of Engineering
Proceedings of 3rd International Conference on Context-Aware Systems and Applications, ICCASA 2014 | Year: 2014

Even amidst the hustle and bustle of busy lives, numerous people dream of playing a musical instrument. Unfortunately, many may never get a chance to touch one. But this doesn't stop them from 'air drumming' or playing 'air guitar' passionately while listening to their favorite tunes. To encourage this passion for music, especially in the absence of a real instrument, we introduce the Virtual Air Guitar. This application allows users to showcase their guitar skills, regardless of their knowledge of playing a real guitar. It uses color tracking to detect inputs and a sound module incorporating the Karplus-Strong algorithm to generate musical notes as output. Thus, only a simple webcam and brightly colored gloves are required to use the application. The application detects each gloved hand as a separate point and renders a horizontal line passing through the left point (representing a simple guitar). The line is divided into four differently colored segments, representing four different frequencies (or 'notes' separated by 'frets' in guitar lingo). These notes are used in the intro of the classic rock song, Smoke on the Water. When the point on the right (representing the right glove) crosses a particular colored segment, the sound (note) assigned to that segment is generated. This application has enormous potential as a base for interactive guitar games, teaching music, and of course, composing guitar-based songs. Copyright © 2015 ICST.
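The Karplus-Strong loop at the heart of the sound module can be sketched as follows; this is the generic algorithm, with sample rate, decay factor and note parameters chosen as illustrative assumptions.

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, decay=0.996):
    """Karplus-Strong plucked-string synthesis: a noise burst circulates
    through a delay line with a two-tap averaging (low-pass) feedback."""
    n = int(sr * duration)
    delay = int(sr / freq)                # delay length sets the pitch
    buf = np.random.default_rng(0).uniform(-1, 1, delay)  # the "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # average with the next sample and damp slightly each pass
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = karplus_strong(440.0, 0.5)   # half a second of A4
```

Each colored segment of the virtual guitar string would map to one such call with a different frequency, mixed into the audio output when the right-hand point crosses it.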

Morade S.S.,SVNIT | Morade S.S.,Kk Wagh Institute Of Engineering And Research | Patnaik S.,SVNIT | Patnaik S.,Xavier Institute of Engineering
Optik | Year: 2015

Automatic lip reading is a technique for understanding uttered speech by visually interpreting the lip movements of the speaker. The two major components that play a crucial role in a lip reading system are feature extraction and the classifier. Many competing methods for both have been published. In this paper, we compare some of the leading ones: the Support Vector Machine (SVM), Back-Propagation Neural Network (BPNN), K-Nearest Neighbor (KNN), Random Forest Method (RFM) and Naive Bayes (NB) classifiers are compared on the basis of recognition performance and training time, while the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) are studied for extracting feature vectors. The CUAVE and Tulips databases are used for experimentation and comparison. It is observed that SVM outperforms the rest on the CUAVE database, and its training time is also lower than that of the others. © 2015 Elsevier GmbH. All rights reserved.
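DCT features for such a comparison are typically taken by keeping a low-frequency block of the 2-D transform of the mouth-region frame; the sketch below builds the orthonormal DCT-II directly in numpy. The block size and the simple row-major (non-zig-zag) scan are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2_features(frame, keep=6):
    """2-D DCT of a mouth-region frame; retain the top-left keep x keep
    low-frequency block as the feature vector."""
    C = dct_matrix(frame.shape[0])
    D = dct_matrix(frame.shape[1])
    coeffs = C @ frame @ D.T
    return coeffs[:keep, :keep].ravel()
```

A flat (constant) frame produces energy only in the DC coefficient, which is a quick sanity check that the transform is built correctly.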

Jain K.,Xavier Institute of Engineering | Vidhate A.V.,Xavier Institute of Engineering | Wangikar V.,k-Technology | Shah S.,Vidyalankar Institute of Technology
International Conference and Workshop on Emerging Trends in Technology 2011, ICWET 2011 - Conference Proceedings | Year: 2011

Grid computing enables effective sharing of computational and storage resources among geographically distributed users. Most organizations have large amounts of underutilized computing power and storage; indeed, most desktop machines are busy less than 5 percent of the time. Grid computing provides a framework for exploiting these underutilized resources and thus increases the efficiency of resource usage. Nowadays, many commercial, business and research institutes produce huge amounts of data that must be stored on the secondary storage of machines, while the users of that data are spread across geographical boundaries and want to collaborate on the same problems. Data grids focus on providing secure access to distributed, heterogeneous pools of data: they harness data, storage, and network resources located in distinct administrative domains and provide high-speed, reliable access to data. Data access can be optimized via data replication, whereby identical copies of data are generated and stored at various sites. A good replication strategy should ideally minimize latencies and reduce access time while optimizing resource usage. In this paper we therefore focus on improving data grid performance. We first present a detailed analysis of various replication strategies (No replication, Always replication, and the Economic model) simulated with OptorSim. We then propose a dynamic replication strategy that switches between No replication and Always replication based on file size and type of access. We argue that the proposed file-size- and access-type-based replication algorithm will minimize access latencies and job execution times on the grid. In the next phase we shall implement the algorithm on the simulator and observe its performance. Copyright © 2011 ACM.
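The proposed switching rule can be sketched as a simple decision function; the threshold, labels and the specific rule below are illustrative assumptions, not values or logic taken from the paper.

```python
def choose_strategy(file_size_mb, access_type, size_threshold_mb=500):
    """Hypothetical dynamic-replication rule: replicate files that are
    small and read sequentially (cheap to copy, likely reused whole);
    fall back to no replication for large or randomly accessed files."""
    if access_type == "sequential" and file_size_mb <= size_threshold_mb:
        return "always-replicate"
    return "no-replicate"
```

Under such a rule a 100 MB sequentially-read dataset would be copied to the requesting site, while a multi-gigabyte randomly accessed file would be served remotely, avoiding the transfer and storage cost of a replica that is never read in full.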

Morade S.S.,SVNIT | Patnaik S.,Xavier Institute of Engineering
2015 International Conference on Pervasive Computing: Advance Communication Technology and Application for Society, ICPC 2015 | Year: 2015

In lip reading, the selection of features plays a crucial role. Since the database in lip reading applications consists of video, a 3-dimensional transformation is appropriate for extracting lip motion information. State-of-the-art lip reading is based on frame normalization and frame-wise feature extraction; however, this is not ideal, because information may be lost during frame normalization, and the frames cannot all be treated equally, since they carry varying amounts of motion information. In this paper a 3D-transform-based method is proposed for feature extraction. These features are the input to a Genetic Algorithm (GA) model for discriminative analysis; the GA is used for dimensionality reduction and to improve classifier performance at low computational cost. The compact feature size reduces both the testing and training time of the classifier. The CUAVE and Tulips databases are used for experiments on digit utterances. The results obtained are compared with various feature selectors from the WEKA software, and in terms of classification accuracy the proposed method is found to be better than the others. © 2015 IEEE.
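GA-based feature selection over a fixed feature pool can be sketched as a search over binary masks; the operator choices below (truncation selection, uniform crossover, bit-flip mutation) are generic assumptions, and the paper's encoding and fitness function will differ.

```python
import numpy as np

def ga_feature_select(score_fn, n_features, pop=20, gens=30, seed=0):
    """Minimal genetic algorithm over binary feature masks: keep the best
    half each generation, refill with uniform crossover plus mutation."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_features))
    for _ in range(gens):
        fitness = np.array([score_fn(m) for m in P])
        elite = P[np.argsort(fitness)[-pop // 2:]]          # best half
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        cross = np.where(rng.random((len(pairs), n_features)) < 0.5,
                         elite[pairs[:, 0]], elite[pairs[:, 1]])
        cross ^= (rng.random(cross.shape) < 0.02).astype(cross.dtype)
        P = np.vstack([elite, cross])
    fitness = np.array([score_fn(m) for m in P])
    return P[np.argmax(fitness)]
```

In the lip reading setting, score_fn would be (cross-validated) classifier accuracy on the subset of 3D-transform coefficients that the mask keeps, so the GA trades feature-vector size against recognition performance.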

Morade S.S.,SVNIT | Morade S.S.,Kk Wagh Institute Of Engineering Education And Research | Patnaik S.,Xavier Institute of Engineering
Optik | Year: 2014

Lip contour tracking is an integral part of lip reading applications, and fast, accurate tracking is an important step in lip reading. This paper uses a novel active contour model for lip tracking and proposes a geometrical feature extraction approach for lip reading. The effects of individual features are compared, and a joint feature model is obtained by combining the weighted decisions derived from a feature vector of differences in inner area, height and width of the lips. An ergodic Hidden Markov Model (HMM) is used as the classifier, and for each digit the Markov model is tested with 3 and 5 states. Videos of the English digits 0 to 9 have been recorded for the recognition test; the CUAVE database is used for comparison along with an in-house database. When computing feature vectors, only significant frames are used to reduce computational complexity. Results of experiments on digit utterances show that the maximally recognized digits can be used as programming commands for computerized numerical control machines. © 2014 Elsevier GmbH. All rights reserved.
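Classification with per-digit HMMs scores each model by the likelihood of the observed feature sequence; the scaled forward algorithm behind that scoring can be sketched as follows (a generic discrete-emission version with a toy example, not the paper's continuous-feature models).

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-emission
    model. pi: initial state probs, A: transition matrix, B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# toy 2-state model: starts in state 0, emissions reveal the state exactly
pi = np.array([1.0, 0.0])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.eye(2)
ll = forward_loglik([0, 0, 1], pi, A, B)
```

Each digit would get its own (pi, A, B) estimated from training utterances, and an input sequence is labelled with the digit whose model yields the highest log-likelihood.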
