News Article | September 29, 2017
Said Mr. Sebastian, "The trucking industry is rapidly being transformed by technology ranging from autonomous vehicles to sophisticated predictive safety system analytics. Idelic's state-of-the-art artificial intelligence technology brings an unsurpassed level of integration, sophistication, and functionality to trucking safety systems."

Idelic uses proprietary machine learning analytics to help transportation fleets identify at-risk drivers and assign corrective action before accidents happen. The company is collaborating with Carnegie Mellon University's Master of Science in Information Technology (MSIT) program to develop algorithms that predict which drivers are most at risk for accidents, incidents, injuries, violations, and citations. Idelic was founded by Carnegie Mellon alumni Hayden Cardiff, Andrew Russell, and Nick Bartel as a spinout of Pittsburgh-based less-than-truckload (LTL) carrier PITT OHIO, a company with industry-leading safety and performance records.

PITT OHIO CEO Chuck Hammel added, "This is a safety product that all safety professionals will love because of its functionality and predictive analysis capabilities. There is nothing like it on the market."

Fleets today deal with safety and compliance by using complex spreadsheets and disparate third-party systems. Idelic's safety and risk management system, SafetyBox, provides a cutting-edge, comprehensive data management platform that integrates all of these systems to deliver unique insights into fleet and driver data.

"Our latest machine learning algorithm predicted 90% of large truck accidents based on rich customer data. Now we're rolling that same model into predicting violations, incidents, injuries, and other risk metrics that cost fleets significant amounts of money each year," said Idelic CEO Hayden Cardiff.
Nick Bartel, Idelic's COO, sees the future of transportation safety incorporating all fleet data systems: "By uncovering new insights hidden within complex fleet data, we can positively impact all drivers out on the road. Our goal is not only to help fleets save time, money, and the lives of their drivers, but also to help make our roads and highways safer for everyone."

About Idelic
Idelic helps transportation fleets save time, money, and lives. By leveraging its powerful machine learning platform, comprehensive data management system, and extensive third-party integrations, the company provides a custom enterprise solution for safety managers to automate compliance, predict at-risk drivers, and prescribe action before accidents happen. More information about Idelic and SafetyBox can be found at www.idelictech.com.

About Birchmere Ventures
Since 1996, Birchmere Ventures has partnered with entrepreneurs to help build high-growth technology companies. The firm, with offices in Pittsburgh and San Francisco, has invested in many dozens of seed-stage startups, resulting in numerous IPOs and M&A transactions representing over $14B in value. More information about Birchmere Ventures can be found at www.birchmerevc.com.

About McCune Capital
McCune Capital is a New York-based venture capital firm founded by Jason Cahill. Focused on seed-stage ventures disrupting transportation, energy, agriculture, and manufacturing, McCune's portfolio currently has ten active investments. The fund's hands-on approach, asymmetric sourcing methods, and diverse bench of advisors have allowed McCune to establish itself as a key partner in the New York seed-stage arena. More can be found at mccune.vc.

About M25 Group
M25 Group is a Chicago-based, Midwest-focused venture capital firm run by Victor Gutwein and Mike Asem. Since the fund's inception in 2015, the firm has invested in over fifty early-stage, Midwest-based tech startups. M25's objective, analytical model has helped support its thesis and craft an 'index fund of Midwest startups.' The firm's collaborative, forward-thinking approach and diverse investments across industries and business models throughout the region have allowed M25 to establish itself as a key node in the Midwest startup ecosystem. More can be found at m25group.com.
Proceeding - IEEE International Conference on Computing, Communication and Automation, ICCCA 2016 | Year: 2016
Visual speech processing is a key concern of researchers working in the fields of speech processing and computer vision. Audio speech processing was long the popular approach to speech recognition, but its performance deteriorates in the presence of noise; variation in accent is another challenge that affects the performance of such systems. In this paper, we explore variations in lip texture features for the recognition of visemes. The temporal behavior of lip texture features is encoded using Local Binary Pattern features in three orthogonal planes. Classification is carried out using a back-propagation neural network with a hidden layer; the added advantage of the hidden layer for lip reading is that it accounts for the nonlinear variation of lip features during speech. The proposed approach is applied to Hindi word recognition and achieves high accuracy at the cost of computation time. © 2016 IEEE.
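The Local Binary Pattern encoding in three orthogonal planes (often called LBP-TOP) can be sketched as follows. This is an illustrative simplification, assuming a grey-level lip-region video volume: it computes basic 8-neighbour LBP codes on one central XY, XT, and YT slice each and concatenates their histograms, whereas a full implementation would aggregate codes over every pixel's three orthogonal neighbourhoods.

```python
import numpy as np

def lbp_plane(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D plane."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit  # one bit per neighbour
    return codes

def lbp_top(volume):
    """Concatenated LBP histograms from one central XY, XT and YT slice
    of a (T, H, W) grey-level video volume of the lip region."""
    t, h, w = volume.shape
    planes = [volume[t // 2],        # XY: spatial appearance of one frame
              volume[:, h // 2, :],  # XT: horizontal texture change over time
              volume[:, :, w // 2]]  # YT: vertical texture change over time
    hists = [np.bincount(lbp_plane(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists).astype(float)

vol = np.random.default_rng(0).integers(0, 256, size=(12, 16, 16))
feat = lbp_top(vol)  # 3 planes x 256-bin histogram = 768-D feature vector
```

A vector like `feat` would then be fed to the back-propagation network for viseme classification.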
Yadav A.,MSIT |
International Journal of Future Generation Communication and Networking | Year: 2016
Wireless Sensor Networks (WSNs) are an active area of research. A WSN consists of a large number of sensor nodes deployed randomly in the sensor field, but it suffers from several constraints, such as energy imbalance, limited processing power, storage, and transmission range, as well as the energy-hole problem. Designing and developing an optimized routing protocol for WSNs is therefore a challenging task. This paper focuses on the energy-consumption and network-lifespan issues of WSNs and presents an improved version of the Distributed Energy-Efficient Clustering (DEEC) protocol. To enhance the network lifetime and reliability of DEEC, three improvements are incorporated into the algorithm; the resulting algorithm is named improved DEEC (IDEEC). Experimental results show that the proposed improvements make the protocol more robust and efficient in terms of node lifetime, stability, and energy consumption in comparison with the other algorithms evaluated. © 2016 SERSC.
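The energy-aware cluster-head election that DEEC builds on can be sketched roughly as below. The `P_OPT` value, the rotating-epoch threshold form (borrowed from LEACH-style protocols), and all parameter names are illustrative assumptions, not taken from the paper, and none of IDEEC's three specific improvements are reproduced here.

```python
import random

P_OPT = 0.1  # desired fraction of cluster heads per round (illustrative)

def deec_probability(e_residual, e_avg):
    """DEEC election probability: nodes holding more residual energy than
    the current network average are more likely to become cluster heads."""
    return P_OPT * e_residual / e_avg

def elect_cluster_heads(energies, round_no, seed=42):
    """One round of threshold-based cluster-head election."""
    rng = random.Random(seed)
    e_avg = sum(energies) / len(energies)
    heads = []
    for i, e in enumerate(energies):
        p = min(deec_probability(e, e_avg), 1.0)
        # rotating-epoch threshold: a node that served recently waits out
        # roughly 1/p rounds before becoming eligible again
        epoch = max(round(1 / p), 1)
        threshold = p / (1 - p * (round_no % epoch))
        if rng.random() < threshold:
            heads.append(i)
    return heads

# heterogeneous network: half the nodes start with twice the energy
heads = elect_cluster_heads([1.0] * 10 + [2.0] * 10, round_no=3)
```

Biasing the election toward high-energy nodes is what balances energy drain across a heterogeneous network and delays the first node death.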
Arora S.M.,MSIT |
Khanna K.,The NorthCap University
Communications in Computer and Information Science | Year: 2017
In most video sequences, especially those containing slow motion, a large number of blocks are stationary. Early identification of these blocks can save a large number of computations in any motion estimation (ME) algorithm. The decision to declare a block stationary can be made by comparing the block distortion with a predetermined threshold, whose value, whether large or small, affects the speed and accuracy of an ME algorithm; accurately predicting this threshold is a challenging problem. In this manuscript, a dynamic two-level threshold estimation technique is proposed. The two-level scheme not only detects constant variations in the neighboring blocks but is also capable of detecting stationary blocks with abrupt variations. The performance of the proposed technique is evaluated by implementing zero-motion prejudgment (ZMP) before the ME process in the adaptive rood pattern search (ARPS) algorithm. Simulation results show better performance of the proposed technique in comparison with single-level dynamic and fixed threshold predictors. © Springer Nature Singapore Pte Ltd. 2017.
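A two-level stationary-block prejudgment can be sketched as follows. The threshold factors `k_low` and `k_high`, the use of the mean neighbour distortion as the base, and the fallback value are all illustrative assumptions; the paper's own dynamic threshold derivation is not reproduced here.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two co-located blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def classify_block(cur, ref, neighbour_sads, k_low=1.0, k_high=2.5):
    """Two-level zero-motion prejudgment (illustrative thresholds).

    The base threshold adapts to distortions already observed in
    neighbouring blocks; the second, looser level is meant to catch
    stationary blocks whose SAD is inflated by abrupt local variation.
    """
    d = sad(cur, ref)
    base = float(np.mean(neighbour_sads)) if neighbour_sads else 512.0  # fallback (assumed)
    if d <= k_low * base:
        return "stationary"          # level 1: clearly static, skip the search
    if d <= k_high * base:
        return "stationary-abrupt"   # level 2: static despite abrupt variation
    return "search"                  # hand the block to the ARPS motion search
```

Any block classified as stationary skips the motion search entirely, which is where the computational saving comes from.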
Progress In Electromagnetics Research M | Year: 2017
The modulational instability of a lower hybrid wave in a dusty plasma slab is investigated by developing a non-local theory of this four-wave parametric interaction process. The immersed dust grains modify the dispersion relation and the growth-rate expression of the low-frequency unstable mode. A numerical analysis shows that the frequency and growth rate of the unstable mode are higher in a dusty plasma than in one without dust grains. The growth rate of the unstable mode is proportional to the pump amplitude and depends strongly on the pump frequency. © 2017, Electromagnetics Academy. All rights reserved.
Tushir M.,MSIT |
Applied Soft Computing Journal | Year: 2010
A possibilistic approach was initially proposed for c-means clustering. Although the possibilistic approach is sound, the algorithm tends to find identical clusters. To overcome this shortcoming, the possibilistic fuzzy c-means (PFCM) algorithm was proposed, which produces memberships and possibilities simultaneously, along with the cluster centers. PFCM addresses the noise-sensitivity defect of fuzzy c-means (FCM) and overcomes the coincident-cluster problem of possibilistic c-means (PCM). Here we propose a new model, kernel-based hybrid c-means clustering (KPFCM), in which PFCM is extended by adopting a kernel-induced metric in the data space in place of the original Euclidean norm. The kernel function makes it possible to cluster data that is linearly non-separable in the original space into homogeneous groups in the transformed high-dimensional space. In our experiments, we found that different kernels and kernel widths lead to different clustering results, so a key point is choosing an appropriate kernel width; we also propose a simple approach to determine appropriate values for it. The performance of the proposed method is extensively compared with several state-of-the-art clustering techniques on a test suite of artificial and real-life data sets. Computer simulations show that our model gives better results than the previous models. © 2009 Elsevier B.V. All rights reserved.
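The kernel-induced metric at the heart of KPFCM can be illustrated directly. For a Gaussian kernel, K(x, x) = 1, so the squared feature-space distance between two points reduces to 2(1 − K(x, v)); a minimal sketch:

```python
import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return np.exp(-np.sum((x - v) ** 2) / sigma ** 2)

def kernel_distance_sq(x, v, sigma=1.0):
    """Kernel-induced squared distance in the transformed feature space:
    ||phi(x) - phi(v)||^2 = K(x,x) + K(v,v) - 2 K(x,v) = 2 (1 - K(x,v))
    for a Gaussian kernel, since K(x, x) = 1."""
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))
```

Note that this distance saturates at 2 as points move apart, so distant outliers exert only bounded influence on the cluster centers, consistent with the noise robustness described above; the kernel width `sigma` remains the key tuning decision, as the abstract notes.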
Sheoran K.,MSIT |
Tomar P.,Engineering School of Information Technology and Communication |
Mishra R.,Engineering School of Information Technology and Communication
Procedia Computer Science | Year: 2016
Software quality is regarded as one of the most important factors in assessing the global competitive position of any software product. To assure quality and to assess the reliability of software products, many software quality prediction models have been proposed over the past decades. The proposed method is a hybrid approach to quality prediction: prediction is performed by an advanced neural network incorporating a Hybrid Cuckoo Search (HCS) optimization algorithm for better accuracy. The application software is first subjected to test-case generation; the generated test cases are then fed to the neural network for quality prediction. The network is improved by HCS, which optimizes the weight factors. Quality metrics such as maintainability and reliability are estimated to predict software quality, and the results are compared with other existing techniques to verify the effectiveness of the proposed method. © 2016 The Authors. Published by Elsevier B.V.
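The weight-optimization step can be sketched with a generic cuckoo search minimising a stand-in objective. The paper's specific Hybrid Cuckoo Search variant and the actual network error function are not reproduced here, so the step scale, nest count, abandonment fraction, and toy loss below are all illustrative assumptions.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(loss, dim, n_nests=15, pa=0.25, iters=200):
    """Minimise `loss` over weight vectors with a basic cuckoo search."""
    nests = rng.normal(0.0, 1.0, (n_nests, dim))
    fitness = np.array([loss(n) for n in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        for i in range(n_nests):
            # new cuckoo: Levy flight biased by distance from the best nest
            cand = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
            j = rng.integers(n_nests)       # random nest to challenge
            f = loss(cand)
            if f < fitness[j]:
                nests[j], fitness[j] = cand, f
        # a fraction pa of the worst nests is abandoned and rebuilt at random
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.normal(0.0, 1.0, (len(worst), dim))
        fitness[worst] = [loss(n) for n in nests[worst]]
    return nests[fitness.argmin()]

# toy quadratic standing in for the network's prediction error on test cases
best_w = cuckoo_search(lambda w: float(np.sum((w - 0.5) ** 2)), dim=4)
```

In the hybrid scheme, `loss` would be the network's prediction error over the generated test cases, and the returned vector would seed or replace the network's weights.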
Kherwa P.,MSIT |
Sachdeva A.,MSIT |
Mahajan D.,MSIT |
Pande N.,MSIT |
Souvenir of the 2014 IEEE International Advance Computing Conference, IACC 2014 | Year: 2014
The World Wide Web can be viewed as a repository of opinions from users spread across various websites and networks; today's netizens look up reviews and opinions to judge commodities and visit forums to debate events and policies. With this explosion in the volume of, and reliance on, user reviews and opinions, manufacturers and retailers face the challenge of automating the analysis of such large amounts of data (user reviews, opinions, sentiments). Armed with these results, sellers can enhance their products and tailor the customer experience, while policy makers can analyze such posts for instant, comprehensive feedback or for new ideas that democratize the policy-making process. This paper is the outcome of our research in gathering opinion and review data from popular portals, e-commerce websites, forums, and social networks, and processing the data using the rules of natural language and grammar to determine exactly what a user's review discusses and the sentiments being expressed. Our approach diligently scans every line of data and generates a cogent summary of every review, categorized by aspect, along with various graphical visualizations. A novel application of this approach is helping product manufacturers or the government gauge response. We aim to provide summarized positive and negative features of products, laws, or policies by mining reviews, discussions, forums, etc. © 2014 IEEE.
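The aspect-level summarisation idea can be sketched with a toy lexicon-and-window approach. The word lists, aspect set, and context-window size below are illustrative assumptions; the actual system applies natural-language and grammar rules rather than fixed word lists.

```python
import re
from collections import defaultdict

# tiny illustrative lexicons; a real system would use a full sentiment
# dictionary plus POS tagging and grammar rules
POSITIVE = {"great", "excellent", "good", "amazing", "sharp"}
NEGATIVE = {"poor", "terrible", "bad", "weak", "short"}
ASPECTS = {"battery", "screen", "camera", "price", "service"}

def summarize(reviews):
    """Count positive/negative opinion words near each aspect mention."""
    summary = defaultdict(lambda: {"pos": 0, "neg": 0})
    for review in reviews:
        tokens = re.findall(r"[a-z']+", review.lower())
        for i, tok in enumerate(tokens):
            if tok in ASPECTS:
                window = tokens[max(0, i - 3):i + 4]  # +/- 3 words of context
                summary[tok]["pos"] += sum(w in POSITIVE for w in window)
                summary[tok]["neg"] += sum(w in NEGATIVE for w in window)
    return dict(summary)

summary = summarize(["The battery life is great but the screen is terrible.",
                     "Amazing camera, poor battery."])
```

Aggregating such per-aspect counts across thousands of reviews yields the kind of categorized positive/negative feature summary the paper describes.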
Chaturvedi K.K.,University of Delhi |
Sing V.B.,University of Delhi |
Proceedings of the 2013 13th International Conference on Computational Science and Its Applications, ICCSA 2013 | Year: 2013
Mining software repositories (MSR) is an important area of research. An international workshop on MSR was established under the umbrella of the International Conference on Software Engineering (ICSE) in 2004, and the quality of the papers received and presented there led to the launch, in 2007, of a full-fledged conference focused purely on mining software engineering data. This paper is the result of reviewing all papers published in the proceedings of the Mining Software Repositories (MSR) conference and in related conferences and journals. We analyzed the papers containing experimental analyses of software projects related to data mining in software engineering, and identified the data sets, techniques, and tools used, developed, or proposed in them. More than half of the papers accomplish their task by building or using data mining tools to mine software engineering data. The results of analyzing these papers make it apparent that MSR authors process raw data that is, in general, publicly available. We categorize the tools used in MSR as newly developed tools, traditional data mining tools, prototypes, and scripts, and we show the type of mining task performed with these tools along with the data sets used in the studies. © 2013 IEEE.
2016 International Conference on Computational Techniques in Information and Communication Technologies, ICCTICT 2016 - Proceedings | Year: 2016
Lip reading, also known as visual speech processing, is the recognition of spoken words based on the pattern of lip movements while speaking. Audio speech recognition systems have been popular for many decades and have achieved great success, but visual speech recognition has recently drawn the interest of researchers toward lip reading, which offers the added advantages of high accuracy and noise independence. This paper presents an algorithm for automatic lip reading consisting of two main steps: feature extraction and classification for word recognition. Lip information is extracted using lip geometric and lip appearance features, and word recognition is performed by a Learning Vector Quantization (LVQ) neural network. The proposed approach achieves an accuracy of 97%. It is applied to the recognition of ten words of the Hindi language and can easily be extended to include more words of other languages. The presented approach can help people with hearing or speech impairments communicate with humans or machines, and the proposed algorithm is fast as well as robust to various occlusions. © 2016 IEEE.
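The LVQ classification stage can be sketched generically. This is a toy LVQ1 on synthetic two-dimensional data standing in for the lip geometric and appearance features; the prototype initialisation and decaying learning rate are assumptions, not details from the paper.

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.1, epochs=30, seed=0):
    """LVQ1 with one prototype per class: the winning prototype is pulled
    toward same-class samples and pushed away from other-class samples."""
    rng = np.random.default_rng(seed)
    # initialise each prototype at the mean of its class (assumed scheme)
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    labels = np.arange(n_classes)
    for epoch in range(epochs):
        step = lr * (1 - epoch / epochs)  # decaying learning rate (assumed)
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(protos - X[i], axis=1)))  # winner
            if labels[j] == y[i]:
                protos[j] += step * (X[i] - protos[j])
            else:
                protos[j] -= step * (X[i] - protos[j])
    return protos, labels

def predict(protos, labels, x):
    """Assign x the label of its nearest prototype."""
    return int(labels[np.argmin(np.linalg.norm(protos - x, axis=1))])

# toy 2-class data standing in for lip feature vectors of two words
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
protos, labels = train_lvq1(X, y, n_classes=2)
```

In the paper's setting each class would correspond to one of the ten Hindi words, with one or more prototypes per word in the lip-feature space.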