Sinha S.,JSSATE |
Mehfiiz S.,Jamia Millia Islamia University |
Urooj S.,Gautam Buddha University
Proceedings of the 2014 International Conference on Issues and Challenges in Intelligent Computing Techniques, ICICT 2014 | Year: 2014
Cognitive Radio offers a solution by utilizing the spectrum holes that represent potential opportunities for non-interfering use of spectrum. This paper focuses on spectrum sensing of an unknown signal over a multipath channel using energy detection techniques. The paper considers the no-diversity case and presents closed-form expressions for the probability of missed detection and the probability of false alarm that are alternatives to those recently reported in the literature. Probabilities are calculated for a Rayleigh fading channel. © 2014 IEEE.
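The abstract above does not reproduce the closed-form expressions, but the energy-detection setting it describes can be illustrated with a small Monte Carlo sketch. The function below is an assumption-laden illustration (not the paper's method): it estimates how often the average received energy exceeds a threshold, with noise only (false alarm) or with a constant-envelope signal through a flat Rayleigh fade (detection, i.e. one minus the miss probability). The sample count, SNR, and threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_prob(n, trials, snr_db, thresh, signal_present):
    """Monte Carlo estimate of P(test statistic > thresh) for an energy
    detector averaging n samples; illustrative only."""
    hits = 0
    snr = 10.0 ** (snr_db / 10.0)
    for _ in range(trials):
        # Complex Gaussian noise, unit average power per sample.
        w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
        y = w
        if signal_present:
            # Rayleigh-faded constant-envelope signal with average SNR `snr`.
            h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
            y = h * np.sqrt(snr) * np.ones(n) + w
        energy = np.mean(np.abs(y) ** 2)
        hits += int(energy > thresh)
    return hits / trials
```

At, say, 10 dB average SNR with 50 samples and a threshold of 1.5, the estimated detection probability comfortably exceeds the false-alarm probability, which is the qualitative behavior the closed-form expressions capture exactly.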
Padmashree M.G.,JSSATE |
Proceedings of the 2016 2nd International Conference on Applied and Theoretical Computing and Communication Technology, iCATccT 2016 | Year: 2017
Job scheduling is an important function of the operating system. Computer resources are scheduled before use; by switching the CPU among processes, the operating system can make the computer more productive. All fundamental CPU scheduling algorithms aim at maximizing CPU utilization and throughput while minimizing waiting time, turnaround time, and context switches. To improve the efficiency of the CPU and reduce its overhead, CPU cycles must be allocated fairly among the processes in a multiprocessor environment; otherwise, efficiency will suffer. Fairness is an essential requirement for every OS scheduler. Previous scheduling algorithms suffer from poor fairness, high overhead, or incompatibility with existing scheduler policies. As the hardware industry continues to push multi-core designs, it is essential for operating systems to keep up with accurate, efficient, and scalable fair scheduling designs. This paper describes S2R2, a multiprocessor fair scheduling algorithm that achieves these goals. Round-Robin is regarded as probably the most widely adopted CPU scheduling algorithm, but the existing algorithm exhibits poor average waiting time and too many context switches in the multiprocessor environment. Hence, we propose a method to maximize the efficiency of the multiprocessor using S2R2, a Sort-Split Round-Robin algorithm based on the Round-Robin scheduling algorithm. Both results and calculations show that our proposed method is more efficient than the existing Round-Robin scheduling algorithm. © 2016 IEEE.
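The abstract does not detail S2R2 itself, but the baseline it improves on can be sketched. The simulation below is a minimal illustration of classic single-queue Round-Robin (processes arriving at time 0, hypothetical burst times and quantum), measuring the two metrics the paper targets: average waiting time and the number of dispatches (an upper bound on context switches).

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate single-queue Round-Robin for jobs all arriving at t=0.
    Returns (average waiting time, number of dispatches)."""
    n = len(burst_times)
    remaining = list(burst_times)
    ready = deque(range(n))
    time = 0
    dispatches = 0
    completion = [0] * n
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run for one quantum or to completion
        time += run
        remaining[i] -= run
        dispatches += 1
        if remaining[i] > 0:
            ready.append(i)                # preempted: back of the queue
        else:
            completion[i] = time
    # waiting time = turnaround - burst, with arrival at t=0
    waiting = [completion[i] - burst_times[i] for i in range(n)]
    return sum(waiting) / n, dispatches
```

For bursts [10, 5, 8] with quantum 2 this yields an average waiting time of 12.0 across 12 dispatches; a fairness-aware multiprocessor variant such as S2R2 would aim to reduce both.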
Srimani P.K.,Bangalore University |
Patil M.M.,JSSATE |
Patil M.M.,Bhartiyaar University
Advances in Intelligent Systems and Computing | Year: 2014
Mining educational data is an emerging interdisciplinary research area that mainly deals with the development of methods to explore the data stored in educational institutions, which is referred to as Edu-Data. Data mining is concerned with the analysis of data for finding patterns which are previously unknown and are presently useful for future analysis. The technique of mining Edu-Data is referred to as Edu-mining. On the other hand, statistics is a mathematical science concerned with the collection, analysis, interpretation or explanation, and presentation of data, and it plays a very important role in the process of data mining. The paper aims at developing a simple linear regression model for Edu-Data using the statistical approach. The results obtained help the management to predict semester results and also support proper decision making in the Technical Education System. It is also found that the predictions are close to the actual values. The present work is the first of its kind in the literature. © Springer International Publishing Switzerland 2014.
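A simple linear regression model of the kind the paper develops can be sketched with the closed-form ordinary-least-squares fit. The data below are hypothetical (e.g. a predictor such as a previous semester's score against the current semester's score), not taken from the paper.

```python
def linear_regression(x, y):
    """Ordinary least squares fit of y = a + b*x (closed form)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # slope: covariance(x, y) / variance(x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx                      # intercept
    return a, b

# Hypothetical scores: perfect linear trend for illustration.
a, b = linear_regression([1, 2, 3, 4], [2, 4, 6, 8])
prediction = a + b * 5                   # predict for the next input
```

On this toy data the fit recovers slope 2 and intercept 0, so the prediction for x = 5 is 10; on real Edu-Data the residuals quantify how close predictions come to actual semester results.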
International Journal of Applied Engineering Research | Year: 2014
Finishing surfaces without subsurface damage is very difficult and in high demand. Dimensional accuracy and precision are of utmost importance in industries such as automotive. Surface finishing in the nanometer range with a defect-free surface is a challenging task for existing fine finishing processes. A few advanced finishing processes have been developed, but their applications are limited to a few geometries owing to the restricted movements of the workpiece and the finishing medium. To overcome such limitations, a new fine finishing process, Ball End Magnetorheological (MR) Finishing, has been developed for flat as well as 3D surfaces, for both ferromagnetic and non-ferromagnetic materials. In this process, the smart behavior of the magnetorheological polishing (MRP) fluid is utilized, by controlling the finishing forces, to achieve the final surface finish. A computer-controlled experimental setup is designed and developed to study process performance and characteristics. In the present study, the magnetostatic force of the ball end magnetorheological tool on a non-ferromagnetic copper workpiece is simulated while increasing or decreasing the number of carbonyl iron particles. © Research India Publications.
Chayadevi M.L.,JSSATE |
Smart Innovation, Systems and Technologies | Year: 2015
Malaria is an endemic, global and life-threatening disease. A technically skilled person or an expert is needed to analyze microscopic blood smears for long hours. This paper presents an efficient approach to segmenting malaria parasites. Three different colour spaces, namely LAB, HSI and grayscale, are used effectively to pre-process and segment the parasite in digital images containing noise, debris and stain. The L and B planes of LAB and the S plane of HSI of the input image are extracted with convolution and DCT. Fuzzy-based segmentation is proposed to segment the malaria parasite. Colour features and fractal features are extracted, and feature vectors are prepared as a result of segmentation. Adaptive Resonance Theory Neural Network (ARTNN), Back Propagation Network (BPN) and SVM classifiers are used with the fuzzy segmentation and fractal feature extraction methods. Automated segmentation with ARTNN recorded an accuracy of 95 % compared to the other classifiers. © Springer India 2015.
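One pre-processing step the abstract mentions, extracting the S plane of HSI, follows directly from the standard HSI colour model. The per-pixel formula below is that textbook definition, shown only to illustrate the plane extraction; the paper's full pipeline (LAB planes, convolution, DCT, fuzzy segmentation) is not reproduced here.

```python
def saturation(r, g, b):
    """S component of the HSI colour model for one RGB pixel with
    channels in [0, 1]: S = 1 - min(R, G, B) / I, where I is intensity."""
    i = (r + g + b) / 3.0       # intensity (the I plane)
    if i == 0.0:
        return 0.0              # pure black: saturation defined as 0
    return 1.0 - min(r, g, b) / i
```

A pure red pixel has saturation 1.0, while any gray pixel (equal channels) has saturation 0, which is why the S plane helps separate stained parasite regions from unstained background.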
Patnaik A.,N.I.T. |
Mamatha T.G.,JSSATE |
Biswas S.,National Institute of Technology Rourkela |
Materials and Design | Year: 2012
The present research work describes the development of particulate-filled alloy composites consisting of ZA-27 alloy as the base material and titania particulate as the reinforcing material. The objective of the present work is to study the erosion wear behavior of the particulate-filled alloy composites by the Taguchi experimental design technique at four different impact velocities (25-79 m/s), erodent temperatures (40-100 °C), impingement angles (30-90°) and erodent sizes (250-550 μm). A finite element simulation model (ANSYS/AUTODYN) for damage assessment in erosion is developed and validated by a well-designed set of experiments. It is observed that the experimental results are in good agreement with the computational results and that the proposed simulation model is very useful for damage assessment. A series of steady-state erosion analyses is also conducted by varying the impingement angle and impact velocity one at a time, while keeping the other factors constant, before studying the Taguchi experimental results. From this analysis it is observed that the peak erosion rate occurs at about a 60° impingement angle at constant impact velocity (66 m/s), erodent size (450 μm), stand-off distance (75 mm) and erodent temperature (60 °C) for all the composites. However, the steady-state erosion rate as a function of impact velocity is maximal for the unfilled composite compared to the filled composites at constant impingement angle (45°), erodent size (450 μm), stand-off distance (75 mm) and erodent temperature (60 °C). Finally, the damage mechanisms of the eroded surfaces are investigated using scanning electron microscopy (SEM). © 2011 Elsevier Ltd.
Kumar M.R.M.,JSSATE |
Procedia Computer Science | Year: 2015
One of the main goals of the operating system for disk drives is to use the hardware efficiently. We can meet this goal with fast access time and large disk bandwidth, both of which depend on the relative positions of the read-write head and the requested data. Since memory management allows multiprogramming, the operating system keeps several read/write requests in memory. In order to service these requests, the hardware (disk drive and controller) must be used efficiently; a request can be serviced immediately only when the hardware is available. If the hardware is busy, the request cannot be serviced immediately and is placed in the queue of pending requests. Several disk scheduling algorithms are available to service pending requests; among them, the algorithm that yields the fewest head movements is the most efficient. In this research paper, we propose a new disk scheduling algorithm that reduces the number of head movements, thereby reducing the seek time, and improves the disk bandwidth for modern storage devices. Our results and calculations show that the proposed disk scheduling algorithm improves disk I/O performance by reducing the average seek time compared to existing disk scheduling algorithms. For a few request patterns, the seek time and the total number of head movements equal those of SSTF or LOOK scheduling. © 2015 The Authors. Published by Elsevier B.V.
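The head-movement metric the abstract uses can be made concrete with the two classic baselines it compares against. The sketch below (not the paper's proposed algorithm) computes total cylinder traversal for FCFS and SSTF on the standard textbook request sequence; the proposed algorithm's claim is to match or beat SSTF/LOOK on this measure.

```python
def fcfs_movement(head, requests):
    """Total head movement servicing requests in arrival (FCFS) order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_movement(head, requests):
    """Total head movement under Shortest-Seek-Time-First: always serve
    the pending request closest to the current head position."""
    pending = list(requests)
    total = 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total
```

With the head at cylinder 53 and the queue [98, 183, 37, 122, 14, 124, 65, 67], FCFS moves the head 640 cylinders while SSTF moves it only 236, which is the kind of gap a better scheduling order exploits.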
AIP Conference Proceedings | Year: 2011
A great deal of work is being done in the national and international standards communities to reach a consensus on standardizing metadata and on repositories for organizing the metadata. Descriptions of several metadata standards and their importance to statistical agencies are provided in this paper. Existing repositories based on these standards help to promote interoperability between organizations, systems, and people. Repositories are vehicles for collecting, managing, comparing, reusing, and disseminating the designs, specifications, procedures, and outputs of systems, e.g. statistical surveys. © 2011 American Institute of Physics.
2014 International Conference on Computing for Sustainable Global Development, INDIACom 2014 | Year: 2014
GPUs (Graphics Processing Units) are designed to solve large data-parallel problems encountered in the fields of image processing, scene rendering, video playback, and gaming. GPUs are therefore designed to handle a higher degree of parallelism than conventional CPUs. GPGPU (General Purpose computing on Graphics Processing Units) enables users to do parallel computing on the graphics hardware commonly available in current personal computers. Today's systems ship with multi-core GPUs that provide the necessary hardware infrastructure, thereby enabling high-performance computing on personal computers. NVIDIA's CUDA (Compute Unified Device Architecture) and the industry-standard OpenCL (Open Computing Language) provide the software platform required to utilize the graphics hardware to solve, using parallel algorithms, computational problems otherwise solvable mostly in supercomputing environments. This paper presents two parallel CREW (Concurrent Read Exclusive Write) PRAM algorithms for optimal coloring of general graphs on stream processing architectures such as the GPU. The algorithms are implemented using OpenCL. The first algorithm presents the techniques for computing vertex independent sets on the GPU and then assigns colors to them. The second algorithm optimizes the vertex independent set computation for edge-transitive graphs by taking advantage of the structure of such graphs, and then assigns a color to each of the normalized independent sets. © 2014 IEEE.
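The core idea of the first algorithm, compute independent sets and give each one a color, can be shown with a sequential sketch. The code below is a simplified serial analogue, not the paper's OpenCL implementation: it repeatedly extracts a maximal independent set greedily and assigns it the next color, which is the step the paper parallelizes on the GPU.

```python
def color_by_independent_sets(adj):
    """Color a graph by repeatedly extracting a maximal independent set
    and giving all of its vertices one color.
    adj: dict mapping each vertex to the set of its neighbours."""
    color = {}
    uncolored = set(adj)
    c = 0
    while uncolored:
        mis = set()
        for v in sorted(uncolored):       # deterministic scan order
            if not (adj[v] & mis):        # v has no neighbour already chosen
                mis.add(v)                # so the set stays independent
        for v in mis:
            color[v] = c                  # one color per independent set
        uncolored -= mis
        c += 1
    return color
```

On a 4-cycle (0-1-2-3-0) this produces a proper 2-coloring: vertices 0 and 2 form the first independent set, 1 and 3 the second. The parallel CREW PRAM version lets many threads test membership concurrently while writing each vertex's decision exclusively.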
Tiwari S.,Sri Krishna College of Engineering And Technology |
ICECT 2011 - 2011 3rd International Conference on Electronics Computer Technology | Year: 2011
Web services and web service composition are a new phase of distributed computing, built on top of distributed models. Web services are being used largely for business integration. One of the major concerns for today's businesses is to implement adequate security for web services and web service compositions. Attacks on web services can sometimes destroy an entire network and can also expose an organization's valuable resources to the outside world. As a result of our study, we present in this paper some important vulnerabilities and possible attacks on web services and web service composition. Further, we list important vulnerabilities in the context of atomic and composite web services along with possible countermeasures. © 2011 IEEE.