Deshpande D.S., MGMs Jawaharlal Nehru Engineering College |
Manthalkar R.R., Shri Guru Gobind Singhji Institute of Engineering and Technology
Smart Innovation, Systems and Technologies | Year: 2015
Breast cancer is rapidly becoming the most common cancer in females. It is a serious health problem and a leading cause of death for middle-aged women. Mammography is one of the most reliable methods for early detection of breast cancer, but mammograms are among the most difficult images to interpret and may lead to false diagnoses. There is therefore a significant need for automatic extraction of actionable information from mammogram data in order to improve diagnosis. To address this issue, we propose an automatic classification system for breast cancer using a Texture Based Associative Classifier (TBAC), which automatically classifies breast mammograms into three basic categories, i.e. normal, benign and malignant, based on their texture associations. Our experimental results on the MIAS dataset demonstrate that the proposed TBAC classifier is superior to existing associative classifiers for mammogram classification. © Springer India 2015.
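The abstract does not detail how the texture associations are represented; a common choice in mammogram texture analysis is gray-level co-occurrence matrix (GLCM) statistics, which could feed such a classifier. A minimal NumPy sketch (the function names, single pixel offset and 8-level quantization are illustrative assumptions, not the paper's method):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def texture_features(img, levels=8):
    """Contrast, energy and homogeneity of a quantized grayscale patch."""
    p = glcm(img, levels=levels)
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))            # local variation
    energy = float(np.sum(p ** 2))                        # uniformity
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return contrast, energy, homogeneity
```

A perfectly uniform patch gives zero contrast and maximal energy and homogeneity; patches from normal, benign and malignant regions would yield different feature vectors for the classifier to associate with class labels.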
Zambare A.S., MGMs Jawaharlal Nehru Engineering College
Mini-Reviews in Medicinal Chemistry | Year: 2014
4,5,6,7-Tetrahydrothieno pyridine is an important class of heterocyclic nucleus. Various 4,5,6,7-tetrahydrothieno pyridine derivatives have been synthesized and evaluated for a range of biological activities in different models, with promising findings. Some analogs have shown potent biological activities and may be considered lead molecules for the development of future drugs. A number of drug molecules containing the 4,5,6,7-tetrahydrothieno pyridine nucleus as an important core are available in the market, and many more are in clinical development. This review is an attempt to organize the chemical and biological aspects of the 4,5,6,7-tetrahydrothieno pyridine analogs reported over the last 20 years. The review focuses mainly on the important role of this core in the synthesis of drugs and drug intermediates, with emphasis on the synthetic schemes and biological activities of the different analogs. © 2014 Bentham Science Publishers.
Joshi M.S., MGMs Jawaharlal Nehru Engineering College
International Journal of Ambient Computing and Intelligence | Year: 2016
The enormous growth in internet usage has resulted in great amounts of digital data to handle. Data sharing has become significant and unavoidable. Data owners want their data to be secure and perennially available, so data protection, and the detection of any violations, becomes crucial. This work proposes a traitor identification system that securely embeds a fingerprint to protect numeric relational databases. Because the data is numeric in nature, its usability must be preserved, which is done by achieving minimum distortion. The proposed insertion technique, with reduced time complexity, ensures that the fingerprint, inserted in the form of an optimized error, leads to minimum distortion. Collusion attacks are an inherent concern in fingerprinting, and a provision to mitigate them is suggested. The robustness of the system against several attacks, such as tuple insertion and tuple deletion, is also demonstrated.
Joshi M., MGMs Jawaharlal Nehru Engineering College
Proceedings - 1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015 | Year: 2015
With the ever-increasing usage of the internet, the availability of digital data is in tremendous demand. In this context, it is essential to protect the ownership of the data and to be able to identify the guilty user. In this paper, a fingerprinting scheme is proposed to protect Numeric Relational Databases (RDB), focusing on challenges such as: 1. minimum distortion in the numeric database, 2. usability preservation, and 3. non-violation of the requirement of blind decoding. When the digital data in question is numeric in nature, its usability needs to be keenly preserved; this is made possible by achieving minimum distortion. © 2015 IEEE.
Ghosh S., MGMs Jawaharlal Nehru Engineering College |
Lohani B., Indian Institute of Technology Kanpur
International Journal of Digital Earth | Year: 2015
Light Detection and Ranging (LiDAR) technology generates dense and precise three-dimensional datasets in the form of point clouds. Conventional methods of mapping with airborne LiDAR datasets rely on classification or feature-specific segmentation. These processes have been observed to be time-consuming and unfit for scenarios where topographic information is required quickly, so there is a need for methods that process the data and reconstruct the scene in a short amount of time. This paper presents several pipelines for visualizing LiDAR datasets without going through classification and compares them using statistical methods to rank the processes in order of depth and feature perception. To make the comparison more meaningful, a manually classified and computer-aided design (CAD) reconstructed dataset is also included in the list of compared methods. Results show that a heuristics-based method previously developed by the authors performs almost equivalently to the manually classified and reconstructed dataset for the purposes of visualization. This paper makes two distinct contributions: (1) it gives a heuristics-based visualization pipeline for LiDAR datasets, and (2) it presents an experimental design, supported by statistical analysis, for comparing different pipelines. © 2014 Taylor & Francis.
Kokate R.D., MGMs Jawaharlal Nehru Engineering College |
Waghmare L.M., Shri Guru Gobind Singhji Institute of Engineering and Technology
International Journal of Systems Signal Control and Engineering Application | Year: 2010
A Generalized Predictive Controller (GPC) in transfer function representation is proposed for the cascade control task. The recommended cascade GPC (CGPC) applies one predictor and one cost function, which results in several advantageous features: the disturbance regulation of the inner and outer loops can be totally decoupled; the inner disturbance regulation is well damped; and the typical overshoot of the traditional cascade control structure is avoided. The investigation is based on simulation experiments with a heat exchanger model identified from SISO input-output data. © Medwell Journals, 2010.
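The specific predictor and cost used by the CGPC are not given in the abstract; for reference, the standard GPC criterion (in the Clarke-style formulation) that such a scheme minimizes over a prediction horizon N1..N2 and control horizon Nu is:

```latex
J = \sum_{j=N_1}^{N_2} \left[\hat{y}(t+j \mid t) - w(t+j)\right]^2
    + \lambda \sum_{j=1}^{N_u} \left[\Delta u(t+j-1)\right]^2
```

where ŷ(t+j|t) is the model prediction, w the reference trajectory, Δu the control increment and λ the control-weighting factor; the CGPC's "one cost function" would span both the inner and outer loop within a single such criterion.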
Kawarkhe M., MGMs Jawaharlal Nehru Engineering College |
Musande V., MGMs Jawaharlal Nehru Engineering College
Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2014 | Year: 2014
Cotton crop classification is a significant task in crop management. The literature has exploited unsupervised fuzzy-based classification and various vegetation indices for cotton crop classification. However, fuzzy-based classification is adversely affected by inliers and outliers in the image, making it unreliable for investigating the performance of vegetation indices for cotton crop classification. To overcome this drawback, this paper introduces possibilistic fuzzy c-means (PFCM) clustering for labeling the learning data and exploits a support vector machine (SVM), which enables supervised learning, for cotton crop classification. Five vegetation indices, namely the Simple Ratio (SR), Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index (SAVI), Triangular Vegetation Index (TVI) and Transformed Normalized Difference Vegetation Index (TNDVI), are considered for investigation. LISS-III multispectral images from the IRS-P6 sensor, acquired over the Aurangabad region, India, are subjected to experimental study. Three image sets are investigated, and the proposed classifier outperforms an existing classifier on all of them. Comparison in terms of vegetation indices demonstrates that SR outperforms the other indices, achieving accuracy values of 88.72%, 88.71% and 89.15% for image sets 1, 2 and 3, respectively. © 2014 IEEE.
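Four of the five indices depend only on the red and near-infrared reflectance bands and can be sketched directly (TVI is omitted because its common triangular form also requires the green band). A minimal NumPy sketch using the usual textbook definitions, with an assumed SAVI soil-brightness factor L = 0.5:

```python
import numpy as np

def vegetation_indices(nir, red, L=0.5):
    """Per-pixel vegetation indices from NIR and red reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    sr = nir / red                                    # Simple Ratio
    ndvi = (nir - red) / (nir + red)                  # NDVI, in [-1, 1]
    savi = (1 + L) * (nir - red) / (nir + red + L)    # soil-adjusted NDVI
    tndvi = np.sqrt(ndvi + 0.5)                       # Transformed NDVI
    return {"SR": sr, "NDVI": ndvi, "SAVI": savi, "TNDVI": tndvi}
```

Applied band-wise to the LISS-III rasters, each index yields one feature image; the per-pixel index values (or feature vectors built from them) are what the PFCM labeling and SVM classification would then operate on.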
Tamane S., MGMs Jawaharlal Nehru Engineering College
ACM International Conference Proceeding Series | Year: 2016
Big data is becoming an essential component for industries, where large volumes of data arriving at very high speed are used to solve particular data problems. Generally, big data is first analyzed and then used together with other data available in the company to make it more effective; big data is therefore never operated on in isolation. A variety of non-relational data stores (databases) are available, and these data stores can be used in combination with big data. In the last few years, companies have required that these databases operate very fast, scale out and in whenever required, and generate reports quickly; different means should also be available to manage and organize these massive databases. This paper focuses mainly on methods for data management such as key-value databases, document databases, tabular databases, object databases and graph databases. Using an RDBMS for big data implementation is often impractical because of performance, scale or even cost, so companies have adopted non-relational databases, known as NoSQL databases. Programmers and analysts may benefit from non-relational databases, as they impose simpler modeling constraints than relational databases, and analysts can perform different types of analysis by choosing different types of non-relational databases, for example key-value databases or graph databases. Non-relational databases do not depend on the traditional relational database management systems. © 2016 ACM.
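As a toy illustration of the difference between two of the data models listed above, the snippet below contrasts a key-value store (values are opaque blobs, addressable only by key) with a document store (the value's nested structure is visible and queryable); the data and function names are invented for illustration, using plain in-memory dicts rather than a real NoSQL engine:

```python
# Key-value model: the store knows nothing about the value's structure.
kv_store = {
    "user:42": b'{"name": "Asha", "city": "Aurangabad"}',
}

# Document model: nested structure the database itself can query.
doc_store = {
    "user:42": {
        "name": "Asha",
        "city": "Aurangabad",
        "orders": [{"id": 1, "total": 350.0}],
    },
}

def total_spent(store, key):
    """A query over document structure; impossible on opaque key-value blobs
    without first fetching and parsing the whole value in the application."""
    doc = store[key]
    return sum(order["total"] for order in doc.get("orders", []))
```

The design trade-off this illustrates: key-value stores scale simply because they only ever look up by key, while document stores pay for richer queries with more complex storage and indexing.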
Mangulkar M.N., MGMs Jawaharlal Nehru Engineering College |
Jamkar S.S., Government College of Engineering, Aurangabad
Jordan Journal of Civil Engineering | Year: 2016
Aggregate characteristics have a significant effect on the properties of concrete in both the fresh and hardened states. They also influence the quantity of cement paste required to fill the voids between aggregate particles. The manual methods suggested for the measurement of aggregate characteristics are laborious, time-consuming and approximate. This paper presents the development of a Digital Image Processing (DIP) based system for the measurement of the sphericity, shape factor, elongation ratio and flatness ratio of coarse aggregate particles. The system is calibrated using standard objects such as marbles and coins, and then used for the measurement of coarse aggregate particles with varied characteristics. Samples of rounded gravels and crushed aggregates from different crushers are considered for the study. The results indicate that the system can be used for the accurate measurement of aggregate characteristics. © 2016 JUST. All Rights Reserved.
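The paper's DIP measurements are not specified beyond the named descriptors; as a toy sketch of how shape descriptors can be read off a segmented binary particle mask, the function below uses the pixel count as area and the axis-aligned bounding box as a crude stand-in for the particle's longest and shortest in-plane dimensions (a real system would use calibrated images and proper axis estimates; definitions of elongation ratio vary, and here it is taken as longest/shortest):

```python
import numpy as np

def shape_descriptors(mask):
    """Crude 2-D shape descriptors for a single particle in a binary mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size                                  # particle area in pixels
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    longest, shortest = max(width, height), min(width, height)
    equiv_diameter = 2.0 * np.sqrt(area / np.pi)    # circle of equal area
    return {
        "area": int(area),
        "elongation_ratio": longest / shortest,     # 1.0 for a square box
        "equivalent_diameter": float(equiv_diameter),
    }
```

Calibration against objects of known size (the marbles and coins mentioned above) would supply the pixels-to-millimetres scale needed to convert these pixel measurements into physical dimensions.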
Kaleem A.Md., MGMs Jawaharlal Nehru Engineering College |
Tamboli A.I., Shri Guru Gobind Singhji Institute of Engineering and Technology
Proceedings - 2012 International Conference on Communication, Information and Computing Technology, ICCICT 2012 | Year: 2012
Combination approaches provide a flexible and versatile solution for improving adaptive filter performance. In this paper, the mean square performance of an affine combination of two time-varying (TV) LMS adaptive filters is studied. The purpose of this combination is to obtain a TV LMS adaptive filter with fast convergence and small mean square error (MSE). Two practical schemes proposed in prior work are used for this case. Simulation results indicate that these schemes yield an overall MSE that is less than the MSE of either component filter. Moreover, this combination is better than a combination of two standard LMS adaptive filters. © 2012 IEEE.
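A minimal NumPy sketch of the underlying idea: an affine combination y = λy1 + (1-λ)y2 of a fast and a slow LMS filter, with the mixing parameter λ adapted by stochastic gradient on the overall error. The step sizes, filter length and the λ update rule here are illustrative choices, not the specific schemes evaluated in the paper:

```python
import numpy as np

def affine_lms_combination(x, d, mu1=0.05, mu2=0.005, mu_lam=0.1, M=8):
    """y[n] = lam*y1[n] + (1-lam)*y2[n]: affine mix of a fast (mu1) and a
    slow (mu2) LMS filter; lam is adapted and not clipped to [0, 1]."""
    w1, w2 = np.zeros(M), np.zeros(M)
    lam = 0.5
    y = np.zeros(len(d))
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]          # regressor, newest sample first
        y1, y2 = w1 @ u, w2 @ u
        y[n] = lam * y1 + (1.0 - lam) * y2    # affine combination
        e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y[n]
        w1 += mu1 * e1 * u                    # fast LMS update
        w2 += mu2 * e2 * u                    # slow LMS update
        lam += mu_lam * e * (y1 - y2)         # stochastic-gradient lam update
    return y, lam
```

The affine constraint (the two weights sum to one, with λ unconstrained) is what lets the combination track the better of the two filters: λ drifts toward the fast filter during convergence and can shift back when the slow filter's lower steady-state error wins.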