Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT) is a technological university located in Gandhinagar, Gujarat, India. It is named after the Gujarati entrepreneur and Reliance group founder Dhirubhai Ambani, is run by the Dhirubhai Ambani Foundation, and is promoted by the Anil Dhirubhai Ambani Group. DA-IICT began admitting students in August 2001, with an intake of 240 undergraduate students for its Bachelor of Technology programme in Information and Communication Technology (ICT). Since then, it has expanded to include postgraduate courses such as the Master of Technology in ICT, Master of Science in Information Technology, Master of Science, and Master in Design, a five-year dual-degree programme, and a doctoral programme. The Bachelor's programme is of four years' duration. The first batch of DA-IICT postgraduates graduated in 2004 and the first batch of undergraduates in 2005.
Mehta P.,Dhirubhai Ambani Institute of ICT
54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Student Research Workshop | Year: 2016
The availability of large document-summary corpora has opened up new possibilities for using statistical text generation techniques for abstractive summarization. Progress in extractive text summarization has been stagnant for a while now, and in this work we compare the two possible alternatives to it. We present an argument in favor of abstractive summarization over an ensemble of extractive techniques. Further, we explore the possibility of using statistical machine translation as a generative text summarization technique and present possible research questions in this direction. We also report our initial findings and future directions of research. © 2016 Association for Computational Linguistics.
Pillutla L.S.,Dhirubhai Ambani Institute of ICT
Wireless Personal Communications | Year: 2017
We consider the problem of indoor target tracking using wireless sensor networks. To facilitate ease of deployment and keep costs to a minimum, we focus on devising a target tracking system based on received signal strength indicator (RSSI) measurements. We adopt a model-based approach in which the targets are assumed to evolve in time according to a certain maneuver model and the deployed sensors record RSSI measurements governed by an appropriate observation model. To devise an accurate target tracking algorithm that accounts for the radio environment, we use a mixed maximum likelihood (ML)-Bayesian framework: the radio environment is estimated using the ML approach, and the target tracking is accomplished using a Bayesian filtering technique, namely particle filtering. Next, to create a distributed tracking algorithm that guarantees every sensor node access to the RSSI measurements of all the other sensor nodes, we introduce a dissemination mechanism based on the technique of random linear network coding (RLNC). In this technique, every sensor node encodes the RSSI measurements it has received from other nodes (including its own) into a network-coded packet, which in turn is transmitted using a carrier-sense multiple access based mechanism. Our simulation results demonstrate that the root mean square tracking error (RMSE) obtained using RLNC is strictly lower than that achieved with a competing scheme based on localized aggregation. This can be attributed to the rapid dissemination capability of the RLNC technique. Further, the growth of RMSE with noise variance in a strongly connected network was found to be much slower than in a weakly connected network. This points to the potential of RLNC for improving tracking performance, especially in strongly connected networks. © 2017 Springer Science+Business Media New York
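The RLNC dissemination step can be illustrated with a toy implementation over GF(2), where encoding is an XOR of a random subset of source packets and decoding is Gaussian elimination. The abstract does not specify the field; GF(2) is assumed here for simplicity (real deployments often use GF(2^8)), and the demo mixes random combinations with a systematic pass to keep decoding deterministic:

```python
import random

def rlnc_encode(packets, rng):
    """Produce one network-coded packet: a random GF(2) linear combination
    (XOR) of the source packets, together with its coefficient vector."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):                       # avoid the useless all-zero combination
        coeffs[rng.randrange(len(packets))] = 1
    coded = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, p))
    return coeffs, coded

def rlnc_decode(coded, n):
    """Recover n source packets from >= n coded packets by Gauss-Jordan
    elimination over GF(2). Returns None if the system is rank-deficient."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:  # eliminate this column everywhere else
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

rng = random.Random(7)
msgs = [bytes([i] * 8) for i in range(4)]     # 4 fixed-size RSSI payloads
# A few random combinations plus one systematic pass (each node also forwards
# its own measurement), shuffled to mimic out-of-order delivery.
coded = [rlnc_encode(msgs, rng) for _ in range(6)]
coded += [([1 if j == i else 0 for j in range(4)], msgs[i]) for i in range(4)]
rng.shuffle(coded)
assert rlnc_decode(coded, 4) == msgs
```

Any node that has collected four linearly independent coded packets can reconstruct all four measurements, which is what makes the dissemination rapid.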
Patel T.B.,Dhirubhai Ambani Institute of ICT |
Patil H.A.,Dhirubhai Ambani Institute of ICT
IEEE Journal on Selected Topics in Signal Processing | Year: 2017
Countermeasures used to detect synthetic and voice-converted spoofed speech are usually based on excitation source or system features. However, in the natural speech production mechanism, there exists nonlinear source-filter (S-F) interaction as well. This interaction is an attribute of natural speech and is rarely present in synthetic or voice-converted speech. Therefore, we propose features based on the S-F interaction for a spoofed speech detection (SSD) task. To that effect, we estimate the voice excitation source (i.e., the differenced glottal flow waveform, g′(t)) and model it using the well-known Liljencrants-Fant model to get the coarse structure, gc(t). The residue or difference, gr(t), between g′(t) and gc(t) is known to capture the nonlinear S-F interaction. In the time domain, the L2 norms of gr(t) in the closed, open, and return phases of the glottis are considered as features. In the frequency domain, the Mel representation of gr(t) showed significant contribution in the SSD task. The proposed features are evaluated on the first ASVspoof 2015 challenge database using a Gaussian mixture model based classification system. On the evaluation set, for vocoder-based spoofs (i.e., S1-S9), the score-level fusion of residual energy features, the Mel representation of the residual signal, and Mel frequency cepstral coefficients (MFCC) gave an equal error rate (EER) of 0.017%, which is much less than the 0.319% obtained with MFCC alone. Furthermore, the residues of the spectrogram (as well as the Mel-warped spectrogram) of the estimated g′(t) and gc(t) are also explored as features for the SSD task. The features are evaluated for robustness in the presence of additive white, babble, and car noise at various signal-to-noise-ratio levels on the ASVspoof 2015 database and for the channel mismatch condition on the Blizzard Challenge 2012 dataset. For both cases, the proposed features gave significantly lower EER than MFCC on the evaluation set. © 2017 IEEE.
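The EER figures quoted above are computed from genuine and spoof score distributions. As a reminder of the metric, here is a minimal threshold-sweep computation (a simplification; evaluation toolkits usually interpolate the DET curve rather than averaging the rates at the closest crossing):

```python
def eer(genuine, spoof):
    """Equal error rate: sweep a decision threshold over all observed scores
    and report the operating point where false-acceptance (spoof accepted)
    and false-rejection (genuine rejected) rates are closest."""
    thresholds = sorted(set(genuine) | set(spoof))
    best = (2.0, None)                                    # (|FAR-FRR|, EER)
    for t in thresholds:
        far = sum(s >= t for s in spoof) / len(spoof)     # spoof accepted
        frr = sum(s < t for s in genuine) / len(genuine)  # genuine rejected
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Perfectly separated scores give EER 0; overlapping scores give a higher EER.
print(eer([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))
print(eer([0.6, 0.8], [0.7, 0.2]))
```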
Joshi M.,Dhirubhai Ambani Institute of ICT |
Jalobeanu A.,University of Évora
IEEE Transactions on Geoscience and Remote Sensing | Year: 2010
In this paper, we propose a model-based approach for the multiresolution fusion of satellite images. Given the high-spatial-resolution panchromatic (Pan) image and the low-spatial- and high-spectral-resolution multispectral (MS) image acquired over the same geographical area, the problem is to generate a high-spatial- and high-spectral-resolution MS image. This is clearly an ill-posed problem, and hence we need a proper regularization. We model each of the low-spatial-resolution MS images as the aliased and noisy version of its corresponding high-spatial-resolution, i.e., fused (to be estimated), MS image. A proper aliasing matrix is assumed to take care of the undersampling process. The high-spatial-resolution MS images to be estimated are then modeled as separate inhomogeneous Gaussian Markov random fields (IGMRF), and a maximum a posteriori (MAP) estimation is used to obtain the fused image for each of the MS bands. The IGMRF parameters are estimated from the available high-resolution Pan image and are used in the prior model for regularization purposes. Since the method does not directly operate on Pan pixel values as most other methods do, spectral distortion is minimal, and the spatial properties are better preserved in the fused image because the IGMRF parameters are learned at every pixel. We demonstrate the effectiveness of our approach over some existing methods by conducting experiments on synthetic data as well as on images captured by the QuickBird satellite. © 2009 IEEE.
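In symbols, the MAP estimate for the m-th band can be sketched as follows; the notation is assumed for illustration, with z the fused HR band, y_m the observed LR band, A the aliasing matrix, and b^x, b^y the spatially varying IGMRF parameters learned from the Pan image:

```latex
\hat{z}_m \;=\; \arg\min_{z}\;
  \underbrace{\frac{\lVert y_m - A z\rVert^2}{2\sigma^2}}_{\text{data term}}
  \;+\;
  \underbrace{\sum_{i,j} b^{x}_{ij}\,\bigl(z_{i,j}-z_{i,j-1}\bigr)^2
            + b^{y}_{ij}\,\bigl(z_{i,j}-z_{i-1,j}\bigr)^2}_{\text{IGMRF prior}}
```

Because b^x and b^y are estimated per pixel from the Pan image, the prior adapts to local structure rather than smoothing uniformly.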
Rajwade A.,Dhirubhai Ambani Institute of ICT |
Rangarajan A.,University of Florida |
Banerjee A.,University of Florida
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013
In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented. © 1979-2012 IEEE.
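The pipeline described above (stack similar patches, transform, hard-threshold, invert) can be sketched for a single patch stack. The per-mode SVDs below are the standard HOSVD construction, but the 3-sigma threshold is an illustrative rule of thumb, not the paper's exact parameter choice:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given axis first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product of tensor T with matrix M."""
    rest = [T.shape[i] for i in range(T.ndim) if i != mode]
    out = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(out, 0, mode)

def hosvd_denoise(stack, sigma):
    """HOSVD denoising of a 3D stack of similar patches: compute per-mode
    singular bases, hard-threshold the core tensor, and invert."""
    Us = [np.linalg.svd(unfold(stack, m), full_matrices=False)[0]
          for m in range(3)]
    core = stack
    for m, U in enumerate(Us):                 # project onto HOSVD core
        core = mode_mult(core, U.T, m)
    tau = 3.0 * sigma                          # assumed threshold rule
    core = np.where(np.abs(core) > tau, core, 0.0)
    out = core
    for m, U in enumerate(Us):                 # invert the transform
        out = mode_mult(out, U, m)
    return out

rng = np.random.default_rng(0)
clean = np.full((8, 8, 10), 5.0)               # stack of 10 similar 8x8 patches
noisy = clean + rng.normal(size=clean.shape)
den = hosvd_denoise(noisy, sigma=1.0)
```

With `sigma = 0` the transform is exactly invertible, which is a quick sanity check that the unfolding and mode products are consistent.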
Chi Y.-T.,University of Florida |
Ali M.,University of Florida |
Rajwade A.,Dhirubhai Ambani Institute of ICT |
Ho J.,University of Florida
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2013
This paper proposes a dictionary learning framework that combines the proposed block/group sparse coding (BGSC) or reconstructed block/group sparse coding (R-BGSC) schemes with the novel Intra-block Coherence Suppression Dictionary Learning algorithm. An important and distinguishing feature of the proposed framework is that all dictionary blocks are trained simultaneously with respect to each data group, while intra-block coherence is explicitly minimized as an important objective. We provide both empirical evidence and heuristic support for this feature, which can be considered a direct consequence of incorporating both the group structure of the input data and the block structure of the dictionary in the learning process. The optimization problems for both dictionary learning and sparse coding can be solved efficiently using block-gradient descent, and the details of the optimization algorithms are presented. We evaluate the proposed methods using well-known datasets, and favorable comparisons with state-of-the-art dictionary learning methods demonstrate the viability and validity of the proposed framework. © 2013 IEEE.
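As a point of reference for block-structured sparse coding, generic group-sparse coding (the standard group-lasso problem, not the paper's BGSC/R-BGSC schemes) can be solved with proximal gradient descent and block soft-thresholding:

```python
import numpy as np

def block_sparse_code(x, D, blocks, lam=0.1, step=None, iters=500):
    """Solve min_a 0.5*||x - D a||^2 + lam * sum_b ||a_b||_2 by ISTA.
    `blocks` is a list of index arrays partitioning the coefficient vector.
    This is a generic group-lasso sketch, not the paper's algorithm."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - step * (D.T @ (D @ a - x))       # gradient step on data term
        for b in blocks:                         # prox: block soft-threshold
            nrm = np.linalg.norm(z[b])
            z[b] = 0.0 if nrm <= step * lam else (1 - step * lam / nrm) * z[b]
        a = z
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(10, 6))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
blocks = [np.arange(3), np.arange(3, 6)]         # two dictionary blocks
x = D[:, :3] @ np.array([1.0, -2.0, 1.5])        # signal uses block 0 only
a = block_sparse_code(x, D, blocks, lam=0.1)
```

Because the penalty acts on whole block norms, coefficients outside the active block are driven toward zero together, which is the behavior group/block structure is meant to induce.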
Nathani A.,Dhirubhai Ambani Institute of ICT |
Chaudhary S.,Dhirubhai Ambani Institute of ICT |
Future Generation Computer Systems | Year: 2012
In the present scenario, most Infrastructure as a Service (IaaS) clouds use simple resource allocation policies such as immediate and best effort. The immediate allocation policy allocates resources if they are available; otherwise the request is rejected. The best-effort policy also allocates the requested resources if available; otherwise the request is placed in a FIFO queue. A cloud provider cannot satisfy all requests at a given time because its resources are finite. Haizea is a resource lease manager that tries to address these issues by introducing more sophisticated resource allocation policies. Haizea uses resource leases as its resource allocation abstraction and implements these leases by allocating virtual machines (VMs). Haizea supports four kinds of resource allocation policies: immediate, best effort, advance reservation, and deadline sensitive. This work provides a better way to support deadline-sensitive leases in Haizea while minimizing the total number of leases it rejects. The proposed dynamic-planning-based scheduling algorithm is implemented in Haizea and can admit new leases and recompute the schedule whenever a new lease can be accommodated. Experimental results show that it improves resource utilization and lease acceptance compared to the existing algorithm of Haizea. © 2010 Elsevier B.V. All rights reserved.
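The admission idea, accepting a lease only if a feasible schedule still exists for everything accepted so far, can be sketched for a single resource using an earliest-deadline-first (EDF) feasibility test. This is a deliberate simplification of Haizea's multi-resource slot table:

```python
def edf_feasible(leases):
    """leases: list of (duration, deadline) pairs. On a single resource,
    an EDF schedule meets all deadlines iff every prefix of the
    deadline-sorted list finishes by its deadline."""
    t = 0
    for dur, dl in sorted(leases, key=lambda l: l[1]):
        t += dur
        if t > dl:
            return False
    return True

def admit(accepted, new):
    """Dynamic-planning admission test (simplified): accept the new lease
    only if a schedule still exists for all accepted leases plus it."""
    if edf_feasible(accepted + [new]):
        accepted.append(new)
        return True
    return False

accepted = []
print(admit(accepted, (2, 5)))    # fits alone
print(admit(accepted, (2, 4)))    # still schedulable via EDF reordering
print(admit(accepted, (3, 6)))    # would push total work past a deadline
print(admit(accepted, (1, 10)))   # fits in the remaining slack
```

Rescheduling already-accepted leases (here, by re-sorting under EDF) rather than fixing them in place is what lets such a planner accept more leases than a first-come-first-served policy.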
Raut M.K.,Dhirubhai Ambani Institute of ICT
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014
An algorithm to compute prime implicates and prime implicants in modal logic has been suggested in earlier work. In this paper, we suggest an incremental algorithm to compute the prime implicates of a knowledge base KB and a new knowledge base F from Π(KB) ∪ F in modal logic, where Π(KB) is the set of prime implicates of KB, and we also prove the correctness of the algorithm. © 2014 Springer International Publishing Switzerland.
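For intuition, the classical propositional computation of prime implicates (the base case the modal setting generalizes) closes the clause set under resolution and then discards subsumed clauses:

```python
from itertools import combinations

def prime_implicates(clauses):
    """Prime implicates of a propositional clause set. Literals are nonzero
    ints, with -n the negation of n. Propositional simplification only;
    the modal case requires modal resolution and consequence checks."""
    # drop tautologous clauses (containing l and -l) up front
    cls = {frozenset(c) for c in clauses if not any(-l in c for l in c)}
    changed = True
    while changed:                           # close under resolution
        changed = False
        for a, b in combinations(list(cls), 2):
            for lit in a:
                if -lit in b:
                    res = (a - {lit}) | (b - {-lit})
                    if any(-l in res for l in res):   # tautologous resolvent
                        continue
                    if res not in cls:
                        cls.add(frozenset(res))
                        changed = True
    # a clause is prime iff no strictly smaller implicate subsumes it
    return {c for c in cls if not any(d < c for d in cls)}
```

For example, from (p ∨ q) and (¬p ∨ r) resolution adds (q ∨ r), and all three clauses are prime; from p and (¬p ∨ q), the unit q subsumes (¬p ∨ q), leaving {p, q}.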
Gajjar P.P.,Dhirubhai Ambani Institute of ICT |
Joshi M.V.,Dhirubhai Ambani Institute of ICT
IEEE Transactions on Image Processing | Year: 2010
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low-spatial-resolution test image and a database consisting of low- and high-spatial-resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF), and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where memory, transmission bandwidth, and camera cost are the main constraints. © 2010 IEEE.
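A heavily simplified version of the MAP gradient descent step can be sketched as follows. The aliasing matrix is reduced to plain decimation and the learned IGMRF prior is replaced by a homogeneous smoothness term (with periodic boundaries), so this illustrates the optimization, not the paper's method:

```python
import numpy as np

def sr_map(y, scale=2, lam=0.05, step=0.2, iters=300):
    """MAP super-resolution by gradient descent on
    ||decimate(x) - y||^2 + lam * ||grad x||^2 (simplified model)."""
    x = np.kron(y, np.ones((scale, scale)))          # initial HR estimate
    for _ in range(iters):
        g = np.zeros_like(x)
        g[::scale, ::scale] = x[::scale, ::scale] - y    # data-term gradient
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +    # periodic Laplacian
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (g - 2 * lam * lap)                  # MAP gradient step
    return x

y = np.full((4, 4), 3.0)     # a flat LR image: its HR fixed point is flat too
hr = sr_map(y)
```

In the paper, lam would be replaced by per-pixel IGMRF parameters learned from the initial HR estimate, so smoothing adapts to local image structure instead of being uniform.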
Das M.L.,Dhirubhai Ambani Institute of ICT
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013
RFID (Radio Frequency IDentification) systems have found enormous application in retail, health care, transport, and home appliances. Over the years, many protocols have been proposed for RFID security using symmetric key and public key cryptography. Based on the nature of RFID tag usage in various applications, existing RFID protocols primarily focus on the tag identification or authentication property. The Internet of Things (IoT) is emerging as a global network in which every object in the physical world would be able to connect to the web via the Internet. As a result, IoT infrastructure requires the integration of several complementary technologies such as sensor networks, RFID systems, embedded systems, conventional desktop environments, mobile communications, and so on. RFID systems are therefore expected to play a significant role in IoT infrastructure, and in that context they should support further security properties, such as mutual authentication, key establishment, and data confidentiality, for wide-spread adoption in future Internet applications. In this paper, we present a strongly secure and privacy-preserving RFID protocol suitable for IoT infrastructure. The proposed protocol provides the following security goals:
- mutual authentication of tags and readers;
- authenticated key establishment between tag and reader;
- secure data exchange between tag-enabled objects and reader-enabled things.
The protocol also provides narrow-strong privacy and forward secrecy. © 2013 Springer-Verlag.
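A generic symmetric-key mutual-authentication and key-establishment exchange of the kind listed above can be sketched with nonces and HMACs. This is a textbook challenge-response pattern, not the paper's protocol (which additionally targets narrow-strong privacy and forward secrecy):

```python
import hmac, hashlib, os

def mac(key, *parts):
    """Keyed MAC over the concatenated message parts."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

class Party:
    """One side of a shared-key challenge-response mutual authentication.
    k is the long-term symmetric key shared between tag and reader."""
    def __init__(self, k, name):
        self.k, self.name = k, name
        self.nonce = os.urandom(16)           # fresh per session

    def respond(self, their_nonce):
        # prove knowledge of k over both nonces and our identity
        return mac(self.k, b"auth", self.name, self.nonce, their_nonce)

    def verify(self, peer_name, peer_nonce, peer_mac):
        expected = mac(self.k, b"auth", peer_name, peer_nonce, self.nonce)
        return hmac.compare_digest(expected, peer_mac)

    def session_key(self, peer_nonce):
        # both sides derive the same key from the two fresh nonces
        return mac(self.k, b"sess", *sorted([self.nonce, peer_nonce]))

k = os.urandom(32)
tag, reader = Party(k, b"tag"), Party(k, b"reader")
# exchange nonces, then each side MACs over both nonces
ok_reader = reader.verify(b"tag", tag.nonce, tag.respond(reader.nonce))
ok_tag = tag.verify(b"reader", reader.nonce, reader.respond(tag.nonce))
assert ok_reader and ok_tag
assert tag.session_key(reader.nonce) == reader.session_key(tag.nonce)
```

Fresh nonces on both sides prevent replay, and binding the identity into the MAC prevents reflection of a party's own challenge back at it; privacy-preserving RFID protocols additionally hide the tag's identity from eavesdroppers, which this sketch does not attempt.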