MGMs Jawaharlal Nehru Engineering College

Aurangābād, India


Mahadwad O.K., MGMs Jawaharlal Nehru Engineering College | Wagh D.D., MGMs Jawaharlal Nehru Engineering College | Kokil P.L., Sanjay Techno Plast Pvt. Ltd.
IOP Conference Series: Materials Science and Engineering | Year: 2017

Pyrolysis of wood is a possible path for converting biomass into higher-value products such as bio-oil, bio-char and bio-gas. Bio-oil, or liquid biofuel, has a higher heating value, so it can be stored and transported more conveniently. The by-products, bio-char and bio-gas, can be used to provide the heat required in the process. This work focused on the formation, analysis and characterization of bio-oil obtained from mixed wood pyrolysis. A GC-MS technique was used to determine the families of lighter chemicals in the pyrolyzed oil. Karl Fischer titration and other analytical methods were used for the characterization of the pyrolyzed oil. In all, sixty-six compounds were found in the GC-MS analysis of the bio-oil; the major compounds were acetic acid (19.06 wt%), formic acid (4.90 wt%), 1,2-benzenediol (4.43 wt%) and furfural (3.46 wt%). Along with this analysis, the pyrolyzed oil was characterized by measuring its viscosity, density, calorific value, acid value, fire point, flash point, and carbon, hydrogen, nitrogen, ash and water content. Most of the above-mentioned properties of the bio-oil match those of crude oil, except that it shows higher water content. © Published under licence by IOP Publishing Ltd.


Bhosle K., Maharashtra Institute of Technology | Musande V., MGMs Jawaharlal Nehru Engineering College
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives | Year: 2017

Stress on crops can be monitored using different indices. The red edge position (REP) is a good estimator for stress monitoring, as it is strongly correlated with foliar chlorophyll content. Strong chlorophyll absorption causes the abrupt change in the 680-800 nm region of the reflectance spectra of green vegetation; the red edge contains the point of maximum slope between the red and near-infrared wavelengths. REP can be used to recognize green zones in the observation area. The REP is present in vegetation spectra recorded by remote sensing methods, and it is clearer and more significant in hyperspectral data, since hyperspectral sensors provide more numerous and contiguous bands. In this paper, experiments were carried out for the mulberry crop using USGS EO-1 Hyperion data. Atmospherically corrected data were used for classification, which was carried out on a small cluster of 14 field samples. Ground truth was verified and classified by comparison with the Hyperion data. REP differs with the stress condition of the crop and thus distinguishes healthy from diseased crops; nutritional stress, disease and drought in plants can be detected using it. Stressed and healthy mulberry fields were distinguished by calculating REP using the maximum first derivative, linear interpolation and linear extrapolation methods, and the resulting REPs were compared. There is a noticeable difference in REP between healthy and stressed crops. The research indicates that the overall accuracy using the maximum first derivative was 92.85%, higher than that of the other methods, and that linear extrapolation gives lower accuracy than linear interpolation.
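The abstract names the maximum first derivative and linear interpolation estimators without giving their formulas; below is a minimal sketch assuming the standard definitions (REP as the wavelength of maximum spectral slope, and the common Guyot-Baret four-point linear interpolation). The reflectance values are illustrative, not the paper's Hyperion measurements.

```python
import numpy as np

def rep_max_first_derivative(wavelengths, reflectance, lo=680.0, hi=800.0):
    """REP as the wavelength of maximum slope of the reflectance spectrum."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    wl, refl = wavelengths[mask], reflectance[mask]
    slope = np.gradient(refl, wl)              # first derivative dR/d(lambda)
    return wl[np.argmax(slope)]

def rep_linear_interpolation(r670, r700, r740, r780):
    """Guyot-Baret four-point method: assumes the red edge is roughly
    linear between 700 and 740 nm."""
    r_edge = (r670 + r780) / 2.0               # reflectance at the inflection
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Hypothetical green-vegetation spectrum sampled every 10 nm.
wl = np.arange(660.0, 801.0, 10.0)
refl = np.array([0.05, 0.05, 0.06, 0.10, 0.18, 0.28, 0.38, 0.45,
                 0.48, 0.50, 0.51, 0.51, 0.52, 0.52, 0.52])
print(rep_max_first_derivative(wl, refl))                 # ~710 nm
print(rep_linear_interpolation(0.05, 0.10, 0.45, 0.51))   # ~720.6 nm
```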


Joshi M.S., MGMs Jawaharlal Nehru Engineering College
International Journal of Ambient Computing and Intelligence | Year: 2016

An enormous growth in internet usage has resulted in great amounts of digital data to handle. Data sharing has become significant and unavoidable. Data owners want the data to be secured and perennially available, so protecting the data and detecting any violations become crucial. This work proposes a traitor identification system which securely embeds a fingerprint to protect numeric relational databases. Digital data of a numeric nature calls for preservation of usability, which must be achieved with minimum distortion. The proposed insertion technique, with reduced time complexity, ensures that the fingerprint, inserted in the form of an optimized error, leads to minimum distortion. Collusion attacks are an integral concern in fingerprinting, and a provision to mitigate them is suggested. Robustness of the system against several attacks such as tuple insertion and tuple deletion is also demonstrated.
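The abstract does not disclose the insertion technique itself; the following is a minimal sketch of a generic fingerprinting scheme for numeric relations in the same spirit: a secret-keyed bit is embedded as a small error in the least significant digit of pseudo-randomly selected tuples, and decoding is blind (it needs only the key, never the original relation). The key, GAMMA and all names here are hypothetical, not the paper's.

```python
# Hedged sketch: embed one fingerprint bit as a +/-1 least-significant-digit
# error in pseudo-randomly selected tuples. Generic scheme, not the paper's.
import hmac, hashlib

SECRET_KEY = b"owner-secret"       # hypothetical owner key
GAMMA = 3                          # mark roughly 1 in GAMMA tuples

def _prf(key, pk):
    digest = hmac.new(key, str(pk).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")

def embed(rows, fingerprint_bits):
    """rows: list of (primary_key, integer_value); returns a marked copy."""
    marked = []
    for pk, value in rows:
        h = _prf(SECRET_KEY, pk)
        if h % GAMMA == 0:                         # tuple selected for marking
            bit = fingerprint_bits[h % len(fingerprint_bits)]
            value = value - (value % 2) + bit      # force LSB to the bit
        marked.append((pk, value))
    return marked

def detect(rows, n_bits):
    """Blind, majority-vote recovery from a (possibly attacked) copy."""
    votes = [[0, 0] for _ in range(n_bits)]
    for pk, value in rows:
        h = _prf(SECRET_KEY, pk)
        if h % GAMMA == 0:
            votes[h % n_bits][int(value) % 2] += 1
    return [0 if v0 >= v1 else 1 for v0, v1 in votes]

data = [(i, 100 + i) for i in range(1000)]
fp = [1, 0, 1, 1]
print(detect(embed(data, fp), len(fp)) == fp)      # True
```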


Joshi M., MGMs Jawaharlal Nehru Engineering College
Proceedings - 1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015 | Year: 2015

With the ever-increasing usage of the internet, the availability of digital data is in tremendous demand. In this context, it is essential to protect the ownership of the data and to be able to find the guilty user. In this paper, a fingerprinting scheme is proposed to provide protection for numeric relational databases (RDB), focusing on challenges such as: 1. minimum distortion in the numeric database, 2. usability preservation, 3. non-violation of the requirement of blind decoding. When the digital data concerned is numeric in nature, the usability of the data must be carefully preserved; this is made possible by achieving minimum distortion. © 2015 IEEE.


Deshpande D.S., MGMs Jawaharlal Nehru Engineering College
International Journal of Data Mining, Modelling and Management | Year: 2016

In today's emerging field of descriptive data mining, association rule mining (ARM) has proven helpful for describing essential characteristics of data in large databases. Mining frequent item sets is the fundamental task of ARM. Apriori, the most influential traditional ARM algorithm, adopts an iterative search strategy for frequent item set generation. However, multiple scans of the database, candidate item set generation and a heavy system I/O load are major drawbacks that degrade its mining performance. Therefore, we propose a new method for mining frequent item sets which overcomes these shortcomings. It judges the importance of the occurrence of an item set by counting the present and absent counts of each individual item. Performance evaluation against the Apriori algorithm shows that the proposed method is more efficient, as it finds fewer items per frequent item set in 50% less time and without backtracking. It also reduces the system I/O load by scanning the database only once. Copyright © 2016 Inderscience Enterprises Ltd.
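The abstract gives the building block (present and absent counts of individual items from a single database scan) rather than the full method; below is a minimal sketch of that counting step with a naive frequent-item filter on top. The transactions and threshold are invented, and the paper's complete item-set procedure is not reproduced.

```python
# Hedged sketch: one scan records, for every item, how many transactions
# contain it (present count) and how many do not (absent count).
from collections import defaultdict

def single_scan_counts(transactions):
    present = defaultdict(int)
    for t in transactions:
        for item in set(t):
            present[item] += 1
    n = len(transactions)
    return {item: (c, n - c) for item, c in present.items()}

def frequent_items(transactions, min_support):
    n = len(transactions)
    counts = single_scan_counts(transactions)
    return {item for item, (p, _) in counts.items() if p / n >= min_support}

db = [["bread", "milk"], ["bread", "butter"],
      ["milk", "butter", "bread"], ["milk"]]
print(frequent_items(db, 0.5))   # {'bread', 'milk', 'butter'} (set order varies)
```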


Chavan S.M., Government College of Engineering, Aurangabad | Tamane S.C., MGMs Jawaharlal Nehru Engineering College
Proceedings - International Conference on Global Trends in Signal Processing, Information Computing and Communication, ICGTSPICC 2016 | Year: 2017

Among the various applications of high performance computing, security of cloud computing is essential and is gaining greater awareness in India. Security issues in the cloud are very challenging, as the data roam without any control, and the need to keep the information in the cloud both secure and private becomes a critical issue. On the basis of cloud architectures, we can specify security rules for web application attacks. In this paper we go through the study, design and purpose of an ontology containing encryption algorithms. The ontology describes the security rule specification, and the encryption algorithms are compared with one another in terms of performance. The main focus of this work is to present a study of cloud security policy using encryption algorithms based on an ontology, following the ontology framework built for security policy rules. This paper also discusses attacks on cloud-based web services, the design of semantic rules, the detection of malicious traffic over the internet, and tools for handling ontologies. © 2016 IEEE.
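The ontology and rule set themselves are not reproduced in the abstract; as one concrete piece, here is a hedged sketch of the kind of encryption performance comparison mentioned above, timing AES-256-CTR against ChaCha20 with the Python cryptography package. The payload size and the choice of these two algorithms are assumptions, not the paper's test set.

```python
# Hedged sketch: throughput comparison of two symmetric ciphers.
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

payload = os.urandom(16 * 1024 * 1024)        # 16 MiB of random data
key, nonce = os.urandom(32), os.urandom(16)

ciphers = {
    "AES-256-CTR": Cipher(algorithms.AES(key), modes.CTR(nonce)),
    "ChaCha20":    Cipher(algorithms.ChaCha20(key, nonce), mode=None),
}

for name, cipher in ciphers.items():
    enc = cipher.encryptor()
    start = time.perf_counter()
    enc.update(payload)                        # encrypt the whole payload
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(payload) / elapsed / 1e6:.1f} MB/s")
```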


Mangalge S., MGMs Jawaharlal Nehru Engineering College | Jawale S., MGMs Jawaharlal Nehru Engineering College
International Conference on Signal Processing, Communication, Power and Embedded System, SCOPES 2016 - Proceedings | Year: 2017

In this paper, a detailed study and analysis is carried out to investigate a technique for locating faults in a long high-voltage DC transmission line, up to 1000 km long. To achieve this, the paper proposes a combination of two methods: the first is based on the travelling wave principle, and the second is the wavelet transform. The wave-front, i.e. the arrival time of the travelling wave, is measured with the help of the wavelet transform; specifically, the continuous wavelet transform is used because of its prominent and accurate results. The presented technique can accurately predict the exact fault location from correctly measured surge arrival times obtained from the continuous wavelet transform. The studies are carried out with a detailed simulation model as well as programming in MATLAB. © 2016 IEEE.
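The paper's studies use MATLAB and a detailed simulation model, neither reproduced here; the following Python/PyWavelets sketch only illustrates one common variant of the principle (two-terminal measurement), where CWT coefficients pick the surge arrival time at each line end and the fault distance follows from d = (L + v * (tA - tB)) / 2. All signal parameters are invented.

```python
# Hedged sketch: travelling-wave fault location on a synthetic step surge.
import numpy as np
import pywt

fs = 1e6                      # 1 MHz sampling
v = 2.9e8                     # wave propagation speed, m/s
L = 1000e3                    # 1000 km line
d_true = 380e3                # assumed fault position from terminal A

t = np.arange(0, 8e-3, 1 / fs)

def surge(arrival):           # unit step arriving at `arrival`, plus noise
    return (t >= arrival).astype(float) + 0.01 * np.random.randn(t.size)

sig_a = surge(d_true / v)
sig_b = surge((L - d_true) / v)

def arrival_time(sig):
    # CWT with a Mexican-hat wavelet at a fine scale; the wavefront appears
    # as the coefficient-magnitude maximum (within a few samples).
    coef, _ = pywt.cwt(sig, scales=[4], wavelet="mexh", sampling_period=1 / fs)
    return t[np.argmax(np.abs(coef[0]))]

ta, tb = arrival_time(sig_a), arrival_time(sig_b)
d_est = (L + v * (ta - tb)) / 2
print(f"estimated fault distance: {d_est / 1e3:.1f} km (true {d_true / 1e3:.0f} km)")
```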


Ghosh S., MGMs Jawaharlal Nehru Engineering College | Lohani B., Indian Institute of Technology Kanpur
International Journal of Digital Earth | Year: 2015

Light Detection and Ranging (LiDAR) technology generates dense and precise three-dimensional datasets in the form of point clouds. Conventional methods of mapping with airborne LiDAR datasets involve classification or feature-specific segmentation, processes which have been observed to be time-consuming and unfit for scenarios where topographic information is required in a short time. There is thus a need for methods that process the data and reconstruct the scene quickly. This paper presents several pipelines for visualizing LiDAR datasets without going through classification and compares them using statistical methods to rank the pipelines in order of depth and feature perception. To make the comparison more meaningful, a manually classified and computer-aided design (CAD) reconstructed dataset is also included in the list of compared methods. Results show that a heuristics-based method previously developed by the authors performs almost equivalently to the manually classified and reconstructed dataset for the purposes of visualization. This paper makes two distinct contributions: (1) it gives a heuristics-based visualization pipeline for LiDAR datasets, and (2) it presents an experimental design supported by statistical analysis to compare different pipelines. © 2014 Taylor & Francis.
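The authors' heuristic pipeline is not detailed in this abstract; as a minimal sketch of the "visualize without classifying" idea only, the following renders a raw point cloud directly, coloured by elevation. The synthetic tile and all parameters are invented stand-ins for a real LiDAR dataset.

```python
# Hedged sketch: classification-free LiDAR visualization.
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a LiDAR tile: ground plane plus two "buildings".
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(20000, 2))
z = 0.2 * rng.standard_normal(20000)
for cx, cy, h in [(30, 30, 12.0), (70, 60, 8.0)]:
    inside = (abs(xy[:, 0] - cx) < 10) & (abs(xy[:, 1] - cy) < 10)
    z[inside] += h

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(xy[:, 0], xy[:, 1], z, c=z, cmap="terrain", s=1)
ax.set_xlabel("x (m)"); ax.set_ylabel("y (m)"); ax.set_zlabel("elevation (m)")
plt.show()
```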


Tamane S., MGMs Jawaharlal Nehru Engineering College
ACM International Conference Proceeding Series | Year: 2016

Big data is becoming an essential component for industries where large volumes of data arriving at very high speed are used to solve particular data problems. Generally, big data is first analyzed and then used together with other data available in the company to make it more effective; big data is never operated on in isolation. A variety of non-relational data stores (databases) are available, and these data stores can be used in combination with big data. Attributes of these databases are available to companies where big data is used. In the last few years, companies have required that such databases operate very fast, can be extended or contracted whenever required, and can generate reports quickly. Different means are also required to manage and organize these massive databases. This paper mainly focuses on methods of data management such as key-value databases, document databases, tabular databases, object databases and graph databases. Using an RDBMS for big data implementation is often impractical because of performance, scale or even cost, so companies have nowadays adopted non-relational databases, known as NoSQL databases. Programmers and analysts may benefit from non-relational databases, as they impose simpler modeling constraints than relational databases, and analysts can perform various types of analysis by choosing a different type of non-relational database each time, for example a key-value database or a graph database. Non-relational databases do not depend on the common traditional relational database management systems. © 2016 ACM.
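As a small illustration of three of the data models named above, here is a hedged sketch using plain Python structures as stand-ins for real key-value, document and graph stores; all keys, documents and names are invented.

```python
import json

# Key-value model: an opaque value looked up by a single key.
kv_store = {}
kv_store["session:42"] = "user-17"                 # SET session:42 user-17
print(kv_store["session:42"])                      # GET session:42

# Document model: self-describing JSON documents, queryable by field.
doc_store = [
    json.loads('{"_id": 1, "name": "Asha", "city": "Aurangabad"}'),
    json.loads('{"_id": 2, "name": "Ravi", "city": "Pune"}'),
]
print([d["name"] for d in doc_store if d["city"] == "Aurangabad"])

# Graph model: adjacency lists capture relationships directly.
follows = {"Asha": ["Ravi"], "Ravi": []}
print("Ravi" in follows["Asha"])                   # does Asha follow Ravi?
```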


Mangulkar M.N., MGMs Jawaharlal Nehru Engineering College | Jamkar S.S., Government College of Engineering, Aurangabad
Jordan Journal of Civil Engineering | Year: 2016

Aggregate characteristics have a significant effect on the properties of concrete in both the fresh and hardened states. They also influence the quantity of cement paste required to fill the voids between aggregate particles. The manual methods suggested for the measurement of aggregate characteristics are laborious, time-consuming and approximate. This paper presents the development of a Digital Image Processing (DIP) based system for the measurement of the sphericity, shape factor, elongation ratio and flatness ratio of coarse aggregate particles. The system is calibrated using standard objects such as marbles and coins, and then used for the measurement of coarse aggregate particles having varied characteristics. Samples of rounded gravels and crushed aggregates from different crushers are considered for the study. The results indicate that the system can be used for the accurate measurement of aggregate characteristics. © 2016 JUST. All Rights Reserved.
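The paper's exact descriptor definitions and calibration procedure are not given in the abstract; the sketch below computes common 2-D proxies (circularity as a sphericity proxy, the minor-to-major axis ratio as an elongation measure) from a synthetic binary particle image with scikit-image, which may differ from the authors' DIP system.

```python
# Hedged sketch: shape descriptors from a segmented particle image.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

# Synthetic binary image with one elliptical "particle" (stand-in for a
# thresholded photo of an aggregate on a uniform background).
img = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(100, 100, 40, 25)
img[rr, cc] = 1

for p in regionprops(label(img)):
    circularity = 4 * np.pi * p.area / p.perimeter ** 2   # 1.0 for a circle
    elongation = p.minor_axis_length / p.major_axis_length
    print(f"circularity={circularity:.2f}, elongation ratio={elongation:.2f}")
```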
