Innovation Research Laboratory IRL

Hāora, India


Banerjee C.,Netaji Subhash Engineering College | Banerjee C.,Innovation Research Laboratory IRL | Kundu A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | And 3 more authors.
Communications in Computer and Information Science | Year: 2013

The user of a single cloud service relies on that service's capabilities to query relevant data. Consistent querying is difficult when data is distributed across multiple services. In this paper, we develop an algorithm that crawls the services within a cloud service inventory, invoking their fetch capabilities and following Internet protocol addresses among the resources, and assembles the retrieved data in a centralized, query-enabled indexing service. Crawling techniques are used to pre-cache information [1] that a user needs for post-processing at the currently available resource(s), improving user latency for subsequent requests. © 2013 Springer-Verlag.
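The crawl-and-index idea in the abstract can be sketched as a breadth-first traversal over service addresses. The `fetch` and `extract_addresses` callables below are hypothetical stand-ins for the service-specific fetch capabilities the paper mentions; the abstract does not specify their form.

```python
from collections import deque

def crawl_inventory(seed_addresses, fetch, extract_addresses):
    """Starting from seed addresses in a cloud service inventory, invoke
    each resource's fetch capability, follow the addresses it references,
    and assemble everything into a centralized, query-enabled index."""
    index = {}                      # centralized index of retrieved data
    frontier = deque(seed_addresses)
    seen = set(seed_addresses)
    while frontier:
        addr = frontier.popleft()
        data = fetch(addr)          # invoke the service's fetch capability
        index[addr] = data          # pre-cache for later queries
        for nxt in extract_addresses(data):
            if nxt not in seen:     # follow addresses among the resources
                seen.add(nxt)
                frontier.append(nxt)
    return index
```

A query layer would then run against `index` alone, which is what removes the cross-service inconsistency the abstract describes.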


Bhadra S.,Swami Vivekananda Institute of Science and Technology | Bhadra S.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | Kundu A.,Innovation Research Laboratory IRL | And 2 more authors.
International Conference on Advanced Computing and Communication Technologies, ACCT | Year: 2014

This paper proposes a new methodology for automating a traffic system so that it produces dynamic output. Typical traffic systems operate on specific predefined rules and cannot yield the best possible signaling, owing to increasing traffic volume and the non-deterministic behavior of modern traffic. The proposed method instead generates the most feasible result dynamically: agent technology operating on real-time data produces the dynamic traffic system, with the number of cars on a specific lane at a particular time instant and the clock time serving as decision-making parameters. Lane status is classified into three types: busy, moderate, and idle. Real-time data is collected at specific time intervals, and the collected data is processed by the respective agent(s) performing specific operations. Fuzzy logic is the mathematical model underlying the agent implementation. The proposed traffic system is dynamic in nature, since the outcome is generated automatically in each time interval, independently of previous results. The objective of the paper is to exhibit an advanced traffic signaling system using fuzzy rules. A comparison between the existing system and the proposed system is demonstrated in the experimental section to show the effectiveness of our proposal. © 2014 IEEE.
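One way the fuzzy classification of lane status could look is sketched below. The triangular membership breakpoints (10 and 25 cars) and the defuzzification into a green-signal duration are illustrative assumptions, not values from the paper.

```python
def lane_memberships(car_count):
    """Fuzzy membership of a lane in the three states named by the paper
    (idle, moderate, busy). Breakpoints are illustrative assumptions."""
    idle = max(0.0, min(1.0, (10 - car_count) / 10))
    busy = max(0.0, min(1.0, (car_count - 10) / 15))
    moderate = max(0.0, 1.0 - idle - busy)
    return {"idle": idle, "moderate": moderate, "busy": busy}

def green_time(car_count, base=10, span=50):
    """Defuzzify: weight each state's nominal green duration by its
    membership degree to obtain a dynamic signal timing."""
    m = lane_memberships(car_count)
    nominal = {"idle": base, "moderate": base + span // 2, "busy": base + span}
    total = sum(m.values())
    return sum(m[s] * nominal[s] for s in m) / total
```

Recomputing `green_time` at each collection interval, from fresh counts only, gives the memory-free dynamic behavior the abstract emphasizes.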


Mitra A.,Adamas Institute of Technology | Mitra A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | Kundu A.,Innovation Research Laboratory IRL
Studies in Computational Intelligence | Year: 2014

In today's world, the advancement of information technology has been accompanied by cyber crimes that produce offensive and distressful digital content. Threats to digital content have created the need for forensic activity in the digital field, seeking evidence against cyber crimes of any type for the sake of reinforcing law and order. Digital forensics is an interdisciplinary branch of computer science and the forensic sciences that applies recovery and/or investigation work to digital data found on electronic memory-based devices in connection with cyber-based unethical, illegal, and unauthorized activities. A typical digital forensic investigation follows three steps to collect evidence: content acquisition, content analysis, and report generation. In digital content analysis, the large data volumes involved and the exposure of human investigators to distressing and offensive material are the major concerns. The lack of technological support for processing large amounts of offensive data makes the analytical procedure time consuming and expensive, and it degrades the mental health of the investigators concerned. Processing backlogs in law enforcement departments and financial limitations create a strong demand for digital forensic investigations that return trustworthy results within reasonable time. Forensic analysis is therefore performed on a randomly drawn sample, instead of the entire population, for a faster yet reliable analysis of digital content. The present work reports an efficient design methodology to facilitate the random sampling procedure used in digital forensic investigations. A Cellular Automata (CA) based approach is used in our random sampling method: an Equal Length Cellular Automata (ELCA) based pseudo-random pattern generator (PRPG) is proposed as a cost-effective random pattern generator.
A high degree of randomness is demonstrated through randomness quality testing. The claimed cost effectiveness refers to time complexity, space complexity, design complexity, and searching complexity. The research includes a comparative study of some well-known random number generators: a recursive pseudo-random number generator (RPRNG), an atmospheric-noise-based true random number generator (TRNG), a Monte-Carlo (M-C) pseudo-random number generator, a Maximum Length Cellular Automata (MaxCA) random number generator, and the proposed Equal Length Cellular Automata (ELCA) random number generator. The sequences produced by the proposed generator show significant improvement in randomness quality over those generators, and the associated fault coverage is improved using iterative methods. Emphasis is placed on the cost effectiveness of the proposed random sampling in forensic analysis. © Springer International Publishing Switzerland 2014.
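A CA-based pseudo-random pattern generator of the kind the abstract describes can be sketched with a one-dimensional, 3-neighborhood automaton. Rule 30 with cyclic boundaries is used here only as a well-known illustrative rule; the paper's ELCA uses its own rule configuration, which the abstract does not specify.

```python
def ca_step(state, rule=30):
    """One synchronous update of a 1-D, 3-neighborhood cellular automaton
    with cyclic boundaries. The next state of each cell is read off the
    rule number, indexed by the (left, center, right) neighborhood."""
    n = len(state)
    return [
        (rule >> (state[(i - 1) % n] << 2 | state[i] << 1 | state[(i + 1) % n])) & 1
        for i in range(n)
    ]

def prpg(seed, n_patterns, rule=30):
    """Emit pseudo-random patterns as successive CA configurations,
    each packed into an integer."""
    state, patterns = list(seed), []
    for _ in range(n_patterns):
        state = ca_step(state, rule)
        patterns.append(int("".join(map(str, state)), 2))
    return patterns
```

Sampling file blocks or documents by these pattern values, modulo the population size, is one plausible way such a PRPG could drive the random sampling step of a forensic triage.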


Mitra A.,Adamas Institute of Technology | Mitra A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | Kundu A.,Innovation Research Laboratory IRL
Advances in Intelligent and Soft Computing | Year: 2012

This research work emphasizes a cost-effective approach to generating high-quality random numbers using a one-dimensional cellular automaton. Maximum length cellular automata with higher cell counts are already established generators of the highest-quality pseudo-random numbers. The randomness quality of the proposed sequences has been evaluated with DIEHARD tests, reflecting a varying linear complexity compared to maximum length cellular automata. The mathematical treatment of the proposed methodology, set against existing maximum length cellular automata, emphasizes flexibility and a cost-efficient generation procedure for pseudo-random pattern sets. © 2012 Springer-Verlag GmbH.
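DIEHARD is a full battery of statistical tests; the simplest test of that kind, shown here as an illustration of how randomness quality is scored, is the frequency (monobit) test: in a truly random bit stream the proportion of ones should be close to one half.

```python
import math

def monobit_pvalue(bits):
    """Frequency (monobit) test in the spirit of batteries such as
    DIEHARD: normalize the +1/-1 running sum of the bit stream and
    convert it to a p-value. Small p-values indicate poor randomness.
    This is only the simplest of the tests such a battery applies."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s / math.sqrt(2))
```

A generator passes when its p-values, across many sequences and many tests, look uniformly distributed rather than clustering near zero.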


Mitra A.,Adamas Institute of Technology | Mitra A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | Kundu A.,Innovation Research Laboratory IRL | Das C.,Netaji Subhash Engineering College
1st International Conference on Automation, Control, Energy and Systems - 2014, ACES 2014 | Year: 2014

This work emphasizes the generation of pseudo-random numbers using Cellular Automata (CA) for Built-In Self-Test (BIST) applications. The proposed Equal Length CA (ELCA) achieves enhanced cost effectiveness in generating pseudo-random patterns with reference to Maximum Length CA (MaxCA). The proposed ELCA scheme generates patterns for BIST and achieves higher fault coverage than MaxCA and Linear Feedback Shift Register (LFSR) based BIST pattern generators, contributing significantly to reducing the overhead of achieving fault coverage in a BIST pattern generator. A high degree of randomness is maintained in the generated patterns, while the design focuses on reducing major associated complexities such as design complexity, time complexity, and searching complexity. Experimental results confirm that a high degree of randomness and comparatively high fault coverage are achieved for the proposed ELCA based BIST application. © 2014 IEEE.
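Fault coverage, the metric the abstract compares generators on, can be stated generically: a modeled fault is detected if some test pattern makes the faulty circuit's output differ from the fault-free output. The circuit and fault models below are placeholders, not the paper's specific benchmark circuits.

```python
def fault_coverage(patterns, circuit, faults):
    """Fraction of modeled faults detected by a pattern set. `circuit`
    maps a pattern to an output; each entry of `faults` is an
    alternative circuit function modeling one fault (e.g. a stuck-at
    line). A fault counts as detected if any pattern distinguishes it
    from the fault-free circuit."""
    detected = sum(
        any(circuit(p) != faulty(p) for p in patterns)
        for faulty in faults
    )
    return detected / len(faults)
```

Comparing generators then amounts to feeding each one's pattern stream into this metric and seeing which reaches a coverage target with fewer patterns.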


Mitra A.,Adamas Institute of Technology | Mitra A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology | Kundu A.,Innovation Research Laboratory IRL
ACM International Conference Proceeding Series | Year: 2012

Monte-Carlo (M-C) simulations are widely used today to evaluate the reliability of distributed systems. M-C simulations require high-quality pseudo-random numbers, and different pseudo-random number generators (PRNGs) are used as sources of random patterns for M-C simulation. In this research, we emphasize a cost-effective design methodology for generating pseudo-random numbers using Cellular Automata (CA). We introduce a behavioural discussion and examine the benefits of our proposed Equal Length Cellular Automata (ELCA) based PRNG. This research is expected to reduce the overhead of achieving fault coverage when generating random integers using Cellular Automata. In the proposed design methodology, a high degree of randomness is maintained in the generated patterns, while various associated complexities, such as design complexity, time complexity, and searching complexity, are reduced. The experimental results reflect the higher quality of randomness in the patterns generated by the ELCA PRNG compared with the Maximum Length Cellular Automata (Max CA) PRNG and the M-C PRNG. Copyright 2012 ACM.
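How a PRNG plugs into a Monte-Carlo reliability evaluation can be sketched as follows: sample each component's up/down state from its reliability, then ask a structure function whether the system is up. Python's stock generator is used here for brevity; the paper's point is that an ELCA-based generator can take its place.

```python
import random

def mc_reliability(component_rel, works, trials=100_000, rng=None):
    """Monte-Carlo estimate of system reliability. `component_rel` lists
    each component's probability of being up; `works` is the system's
    structure function mapping component states to up/down. Any PRNG
    can drive the sampling step."""
    rng = rng or random.Random(42)   # fixed seed for reproducibility
    up = 0
    for _ in range(trials):
        state = [rng.random() < r for r in component_rel]
        up += works(state)
    return up / trials
```

For two components in series, each with reliability 0.9, the estimate converges on 0.81, matching the analytic product.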


Kundu A.,Netaji Subhash Engineering College | Kundu A.,Innovation Research Laboratory IRL | Roy D.,Netaji Subhash Engineering College | Roy D.,Innovation Research Laboratory IRL
2010 1st International Conference on Parallel, Distributed and Grid Computing, PDGC - 2010 | Year: 2010

In this paper, we propose a Cellular Automata (CA) based classification scheme for handling the huge amount of data on the Web in an efficient way. We concentrate on Multiple Attractor Cellular Automata (MACA) as well as Single Cycle Multiple Attractor Cellular Automata (SMACA), since these can classify various types of patterns. The CA-based implementation minimizes the storage space needed for the downloaded Web pages within a search engine. We use the 3-neighborhood concept of CA for classification. The efficiency of our approach lies in the use of CA as a classifier in different forms; experimental results demonstrate its higher efficiency. © 2010 IEEE.
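The core idea behind attractor-based CA classification is that iterating the automaton drives any initial pattern into an attractor, and the attractor reached serves as the pattern's class label. The sketch below captures only that mechanism; the specific MACA/SMACA rule vectors are not given in the abstract, so `step` is left abstract.

```python
def attractor(state, step, max_iters=1_000):
    """Iterate a cellular automaton from an initial pattern until a
    configuration repeats, and return that configuration. In an MACA
    classifier the attractor a pattern falls into identifies its class,
    so only the attractor, not the full pattern, needs to be stored."""
    seen = {}
    for i in range(max_iters):
        key = tuple(state)
        if key in seen:
            return key          # entered a cycle: use it as class label
        seen[key] = i
        state = step(state)
    raise RuntimeError("no attractor found within iteration budget")
```

The storage saving the abstract claims follows from this: a search engine can index a page's attractor (one configuration per class) rather than the page's full feature pattern.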


Kundu A.,Netaji Subhash Engineering College | Kundu A.,Innovation Research Laboratory IRL | Guha S.Kr.,Netaji Subhash Engineering College | Guha S.Kr.,Innovation Research Laboratory IRL | And 4 more authors.
Proceedings of the International Conference on Management of Emergent Digital EcoSystems, MEDES'10 | Year: 2010

Most available Web prediction techniques follow a Markov model. Any Web page contains many Web links or URLs, so predicting the next Web page from this large set of links is hard. Existing approaches predict successfully on a private (personal) computer using different Markov models; on public computers (such as in a cyber cafe), prediction cannot be done at all, since many people use the same machine. In this paper, we propose a new policy for Web prediction that uses the dynamic behavior of users, and we demonstrate four procedures that make Web-based prediction faster. Our technique does not require any Web log or usage history on the client machine: instead of a Markov model, we track the mouse position and its direction of movement to predict the next Web page. The scheme is fully dynamic, since no Web log or any other static or historical information is used. We minimize the number of Web links of a page that must be considered at runtime, achieving better accuracy in dynamic Web prediction. Our approach shows the step-wise build-up of a solid Web prediction program that is appropriate in both the private and the public scenario. Overall, this method offers a new way of predicting based on the dynamic behavior of the respective users. Copyright © 2010 ACM.
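One plausible reading of mouse-direction-based prediction is to rank a page's links by how well each one lies along the cursor's current direction of travel. The coordinate model and scoring rule below are illustrative assumptions; the paper's actual tracking procedure is more elaborate.

```python
import math

def rank_links(cursor, direction, links):
    """Rank candidate links by angular closeness to the mouse direction.
    `cursor` is the (x, y) mouse position, `direction` the movement
    angle in radians, and `links` a list of ((x, y), url) pairs giving
    each link's on-page position. Links whose bearing from the cursor
    best matches the movement direction come first."""
    cx, cy = cursor
    def angular_gap(link):
        (x, y), _url = link
        bearing = math.atan2(y - cy, x - cx)
        gap = abs(bearing - direction)
        return min(gap, 2 * math.pi - gap)   # wrap the gap into [0, pi]
    return [url for (_pos, url) in sorted(links, key=angular_gap)]
```

Keeping only the top few entries of this ranking is how the set of links considered at runtime shrinks, which is the reduction the abstract aims at.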


Kundu A.,Netaji Subhash Engineering College | Kundu A.,Innovation Research Laboratory IRL | Banerjee C.,Netaji Subhash Engineering College | Banerjee C.,Innovation Research Laboratory IRL
Journal of Convergence Information Technology | Year: 2010

In this paper, we propose a multi-layered service suite for the cloud computing environment. Researchers have already classified different cloud computing services, each of which serves a distinct purpose. Here, we propose seven layers of service abstraction, together named the multi-layered service suite, for the cloud computing scenario. Each service layer is depicted by combining different utility services, and the proposed layers have distinct tasks aimed at improving service handling. Services are grouped based on the common context of specific services. A service handling algorithm is further proposed to guide each 'cloud service request' efficiently, so that overall cloud utilization increases while a huge number of client-side requests is handled. We also propose a formula for determining cloud utilization; utilization increases as the number of requests handled by the services grows. The proposed multi-layered service suite is a logical, modularized separation, which helps in describing each service layer during the design process.
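The abstract states only one property of its utilization formula: utilization grows with the number of requests handled. A minimal sketch consistent with that property, assuming per-layer capacities (the paper's exact formula may differ), is:

```python
def cloud_utilization(handled_per_layer, capacity_per_layer):
    """Hypothetical reading of the utilization formula: the ratio of
    requests handled across the service layers to the total capacity
    those layers offer, capped at 1. Captures only the stated property
    that utilization increases as handled requests grow."""
    total_capacity = sum(capacity_per_layer)
    return min(1.0, sum(handled_per_layer) / total_capacity)
```

A service handling algorithm can then route each incoming request to the layer whose context matches it, and track this ratio as its optimization target.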


Kundu A.,Innovation Research Laboratory IRL | Kundu A.,CAS Shenzhen Institutes of Advanced Technology
International Journal of Computers and Applications | Year: 2014

A Web crawler is a computer program that browses the World Wide Web in a methodical and automated manner. The latest crawling techniques in use are parallel crawling and hierarchical crawling. In the latter case, the whole Web site is extracted by dividing it into levels: the homepage from which the crawling process starts is the first level, all the hyperlinks present on that page together form the next level, and so on. In this crawling process, all the Web pages at a single level are downloaded simultaneously by dynamically creating multiple crawlers, one per hyperlink on that level. In a real-life scenario, however, the available bandwidth is limited and acts as a deterrent. In this paper, a scheduling algorithm is proposed that uses the sizes of the Web pages to make full utilization of the available bandwidth. To achieve this, a modified type of queue (Y-type) is introduced in which the URLs of the Web pages are kept in an orderly manner and released in such a way that the total size of the issued Web pages is closest to the available bandwidth.
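The size-based release policy can be sketched with a simple greedy selection: pick queued URLs whose page sizes pack the available bandwidth as fully as possible without exceeding it. The abstract's Y-type queue keeps URLs ordered and releases them along these lines; the largest-first greedy choice here is a simplification, not the paper's exact structure.

```python
def release_batch(url_sizes, bandwidth):
    """Greedy sketch of a size-based release step. `url_sizes` maps each
    queued URL to its page size; the returned batch is a set of URLs
    whose total size approaches, but does not exceed, the bandwidth,
    along with the bandwidth actually used."""
    batch, remaining = [], bandwidth
    # Consider larger pages first so big items are not crowded out.
    for url, size in sorted(url_sizes.items(), key=lambda kv: -kv[1]):
        if size <= remaining:
            batch.append(url)
            remaining -= size
    return batch, bandwidth - remaining
```

An exact best-fit selection is a subset-sum problem; the greedy pass trades a little packing quality for linear-time scheduling per release.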
