Intelligent Systems Technology, Inc.

Los Angeles, CA, United States

"For over 15 years, Decernis has developed similar partnerships and client relationships with over 40 governmental entities.  This cooperation ensures the accuracy of regulatory compliance information Decernis provides to its thousands of users globally," stated Kevin C. Kenny, Chief Operating Officer of Decernis LLC.  "Decernis is very pleased to cooperate with FSSAI to improve safety for the food production supply chain, which will also support our clients that manufacture and process foods and beverages." Decernis delivers global solutions for product compliance, safety, and risk management through smart technology, Intelligent Systems Technology, Big Data Analytics, and global expertise.  Our horizon scanning, enterprise solutions, supply chain management and experts in global food, consumer, and industrial product safety support our clients in meeting complex market demands and key decisions. Decernis tracks regulatory developments in more than two hundred countries on a day-by-day basis. To view the original version on PR Newswire, visit:http://www.prnewswire.com/news-releases/decernis-and-food-safety-and-standards-authority-of-india-announce-regulatory-cooperation-300444653.html

Xu C., Peking University | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2015

It is practical to assume that an individual view is unlikely to be sufficient for effective multi-view learning. Therefore, integration of multi-view information is both valuable and necessary. In this paper, we propose the Multi-view Intact Space Learning (MISL) algorithm, which integrates the encoded complementary information in multiple views to discover a latent intact representation of the data. Even though each view on its own is insufficient, we show theoretically that by combining multiple views we can obtain abundant information for latent intact space learning. Employing the Cauchy loss (a technique used in statistical learning) as the error measurement strengthens robustness to outliers. We propose a new definition of multi-view stability and then derive the generalization error bound based on multi-view stability and Rademacher complexity, and show that the complementarity between multiple views is beneficial for the stability and generalization. MISL is efficiently optimized using a novel Iteratively Reweight Residuals (IRR) technique, whose convergence is theoretically analyzed. Experiments on synthetic data and real-world datasets demonstrate that MISL is an effective and promising algorithm for practical applications. © 2015 IEEE.
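To illustrate why the Cauchy loss confers the robustness to outliers mentioned above (this is a generic property of the loss, sketched here independently of the paper's code; the function name and scale parameter are illustrative):

```python
import numpy as np

def cauchy_loss(residual, c=1.0):
    """Cauchy loss: grows only logarithmically in the residual, so
    outliers with large residuals contribute far less to the objective
    than they would under the squared loss."""
    return (c**2 / 2.0) * np.log1p((residual / c) ** 2)

residuals = np.array([0.1, 1.0, 10.0])
robust = cauchy_loss(residuals)        # sub-quadratic growth
squared = 0.5 * residuals**2           # quadratic growth, outlier-dominated
```

For the outlier residual 10.0 the squared loss contributes 50, while the Cauchy loss (with c = 1) contributes only about 2.3, which is what keeps a few bad samples from dominating the fit.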

Zhang Z., CAS Institute of Automation | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

Slow Feature Analysis (SFA) extracts slowly varying features from a quickly varying input signal [1]. It has been successfully applied to modeling the visual receptive fields of cortical neurons. Extensive experimental results in neuroscience suggest that the temporal slowness principle is a general learning principle in visual perception. In this paper, we introduce the SFA framework to the problem of human action recognition by incorporating discriminative information into SFA learning and considering the spatial relationship of body parts. In particular, we consider four kinds of SFA learning strategies, including the original unsupervised SFA (U-SFA), the supervised SFA (S-SFA), the discriminative SFA (D-SFA), and the spatial discriminative SFA (SD-SFA), to extract slow feature functions from a large number of training cuboids obtained by random sampling in motion boundaries. Afterward, to represent action sequences, the squared first-order temporal derivatives are accumulated over all transformed cuboids into one feature vector, which is termed the Accumulated Squared Derivative (ASD) feature. The ASD feature encodes the statistical distribution of slow features in an action sequence. Finally, a linear support vector machine (SVM) is trained to classify actions represented by ASD features. We conduct extensive experiments, including two sets of control experiments, two sets of large scale experiments on the KTH and Weizmann databases, and two sets of experiments on the CASIA and UT-interaction databases, to demonstrate the effectiveness of SFA for human action recognition. Experimental results suggest that the SFA-based approach 1) is able to extract useful motion patterns and improves the recognition performance, 2) requires fewer intermediate processing steps but achieves comparable or even better performance, and 3) has good potential to recognize complex multiperson activities. © 2012 IEEE.
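For readers unfamiliar with SFA, a minimal linear-SFA sketch (a textbook-style implementation assumed here, not the paper's discriminative variants) extracts the projection of a multivariate signal whose output varies most slowly over time, subject to unit variance:

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Minimal linear SFA: whiten the centered signal so the unit-variance
    constraint becomes the identity, then pick the directions with the
    smallest mean squared temporal derivative (the slowest outputs)."""
    Xc = X - X.mean(axis=0)
    s, U = np.linalg.eigh(np.cov(Xc.T))   # eigendecompose the covariance
    W = U / np.sqrt(s)                    # whitening transform
    Z = Xc @ W
    Zdot = np.diff(Z, axis=0)             # temporal derivative of outputs
    d, V = np.linalg.eigh(np.cov(Zdot.T)) # ascending: slowest first
    return W @ V[:, :n_components]

# a slow sine mixed with a fast one: SFA should recover the slow mixture
t = np.linspace(0, 4 * np.pi, 500)
X = np.column_stack([np.sin(t) + 0.1 * np.sin(20 * t), np.sin(20 * t)])
w = linear_sfa(X)
```

The recovered projection varies far more slowly than either raw input channel, which is exactly the slowness objective the abstract describes.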

Marvel J.A., Intelligent Systems Technology, Inc.
IEEE Transactions on Automation Science and Engineering | Year: 2013

A set of metrics is proposed that evaluates speed and separation monitoring efficacy in industrial robot environments in terms of the quantification of safety and the effects on productivity. The collision potential is represented by separation metrics and sensor uncertainty based on perceived noise and bounding region radii. In the event of a bounding region collision between a robot and an obstacle during algorithm evaluation, the severity of the separation failure is reported as a percentage of volume penetration. © 2004-2012 IEEE.
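The reported severity metric can be illustrated under the simplifying assumption of spherical bounding regions; the function name and the choice of normalizing by the smaller sphere's volume are hypothetical, not taken from the paper:

```python
import math

def penetration_percent(c1, r1, c2, r2):
    """Severity of a bounding-region collision as a percentage of volume
    penetration: 0 means the spheres touch or are apart, 100 means the
    smaller sphere is fully engulfed."""
    d = math.dist(c1, c2)
    small_vol = (4 / 3) * math.pi * min(r1, r2) ** 3
    if d >= r1 + r2:
        return 0.0                       # no bounding-region collision
    if d <= abs(r1 - r2):                # one sphere inside the other
        overlap = small_vol
    else:                                # lens-shaped intersection volume
        overlap = (math.pi * (r1 + r2 - d) ** 2 *
                   (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)) / (12 * d)
    return 100.0 * overlap / small_vol
```

For example, two unit spheres whose centers are one radius apart overlap in a lens of volume 5π/12, i.e. a severity of 31.25%.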

Liu W., China University of Petroleum - East China | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Image Processing | Year: 2013

The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, LR is observed to bias the classification function toward a constant function, which can result in poor generalization. In addition, LR was developed to handle single-view data, although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address these two problems in LR-based image annotation. In particular, mHR optimally combines multiple Hessian regularizers, each obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR. © 1992-2012 IEEE.
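A minimal sketch of the Laplacian-regularization baseline that mHR builds on may help; this is standard LR in closed form, with an illustrative toy graph and labels, not code from the paper:

```python
import numpy as np

def laplacian_regularized_ls(W, y, gamma=1.0):
    """Laplacian-regularized least squares: smooth noisy labels y over a
    similarity graph W by penalizing gamma * f^T L f, which has the
    closed-form solution f = (I + gamma * L)^{-1} y."""
    L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + gamma * L, y)

# two tight two-node clusters: smoothing pulls each pair's labels together
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = laplacian_regularized_ls(W, np.array([1.0, 0.9, -1.0, -0.8]))
```

The smoothed labels within each connected pair move closer to one another while keeping their sign, which is the manifold-smoothness behavior LR encodes (and whose constant-function bias mHR corrects).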

Zhou T., Intelligent Systems Technology, Inc. | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Image Processing | Year: 2013

Learning tasks such as classification and clustering usually perform better and cost less (time and space) on compressed representations than on the original data. Previous works mainly compress data via dimension reduction. In this paper, we propose 'double shrinking' to compress image data on both dimensionality and cardinality via building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate a double shrinking model (DSM) as an ℓ1-regularized variance maximization with the constraint ∥x∥2 = 1, and develop a double shrinking algorithm (DSA) to optimize DSM. DSA is a path-following algorithm that can build the whole solution path of locally optimal solutions at different sparsity levels. Each solution on the path is a 'warm start' for searching the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrangian multiplier are deduced from the Karush-Kuhn-Tucker conditions. The magnitudes of trivial variables are shrunk and the importance of critical variables is simultaneously augmented along the selected direction with the determined step length. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression. © 1992-2012 IEEE.
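The shrink step central to such sparse methods can be illustrated by the generic ℓ1 soft-thresholding operator; this shows the style of shrinkage (zeroing trivial variables while merely shrinking important ones), not the paper's KKT-based DSA update:

```python
import numpy as np

def soft_threshold(x, t):
    """Generic ell-1 shrink operator: entries with magnitude below t are
    zeroed out (trivial variables vanish from the model), while larger
    entries are moved toward zero by t but survive."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([3.0, -0.2, 0.5, -4.0])
shrunk = soft_threshold(x, 0.5)   # small entries vanish, large ones shrink
```

Sweeping the threshold t from 0 upward traces a path of increasingly sparse solutions, the same path-following picture the DSA description gives.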

Bian W., Intelligent Systems Technology, Inc. | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2011

We propose a new criterion for discriminative dimension reduction, max-min distance analysis (MMDA). Given a data set with C classes, represented by homoscedastic Gaussians, MMDA maximizes the minimum pairwise distance of these C classes in the selected low-dimensional subspace. Thus, unlike Fisher's linear discriminant analysis (FLDA) and other popular discriminative dimension reduction criteria, MMDA duly considers the separation of all class pairs. To deal with the general case of data distributions, we also extend MMDA to kernel MMDA (KMMDA). Dimension reduction via MMDA/KMMDA leads to a nonsmooth max-min optimization problem with orthonormal constraints. We develop a sequential convex relaxation algorithm to solve it approximately. To evaluate the effectiveness of the proposed criterion and the corresponding algorithm, we conduct classification and data visualization experiments on both synthetic data and real data sets. Experimental results demonstrate the effectiveness of MMDA/KMMDA associated with the proposed optimization algorithm. © 2006 IEEE.
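The inner quantity of the MMDA criterion, the minimum pairwise class distance after projection, can be sketched as follows; the class means and candidate projections are illustrative, and this sketch omits the shared covariance that the paper's homoscedastic-Gaussian distance involves:

```python
import numpy as np
from itertools import combinations

def min_pairwise_distance(means, W):
    """Smallest pairwise distance between class means after projecting
    onto the columns of W.  MMDA selects W to maximize this minimum,
    so that no pair of classes collapses together in the subspace."""
    projected = [m @ W for m in means]
    return min(np.linalg.norm(a - b) for a, b in combinations(projected, 2))

# three class means in 2-D: projecting onto the x-axis merges two classes,
# while the y-axis keeps every pair separated
means = [np.array([0.0, 0.0]), np.array([0.0, 3.0]), np.array([4.0, 1.0])]
e_x = np.array([[1.0], [0.0]])
e_y = np.array([[0.0], [1.0]])
```

A criterion like FLDA, which averages separation over pairs, could still prefer the x-axis here; maximizing the minimum is what rules it out.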

Cao L., Intelligent Systems Technology, Inc.
IEEE Transactions on Knowledge and Data Engineering | Year: 2010

Traditional data mining research mainly focuses on developing, demonstrating, and pushing the use of specific algorithms and models, and the process of data mining stops at pattern identification. Consequently, a widely seen fact is that 1) many algorithms have been designed, of which very few are repeatable and executable in the real world, 2) many patterns are mined, but a major proportion of them are either commonsense or of no particular interest to business, and 3) end users generally cannot easily understand them or take them over for business use. In summary, the findings are not actionable and lack soft power in solving real-world complex problems. Thorough efforts are essential for promoting the actionability of knowledge discovery in real-world smart decision making. To this end, domain-driven data mining (D3M) has been proposed to tackle the above issues and promote the paradigm shift from "data-centered knowledge discovery" to "domain-driven, actionable knowledge delivery". In D3M, ubiquitous intelligence is incorporated into the mining process and models, and a corresponding problem-solving system is formed as the space for knowledge discovery and delivery. Based on our related work, this paper presents an overview of the driving forces, theoretical frameworks, architectures, techniques, case studies, and open issues of D3M. D3M raises many critical issues for which no thorough and mature solutions are yet available, which indicates both the challenges and the prospects of this new topic. © 2006 IEEE.

Agency: Department of Defense | Branch: Office of the Secretary of Defense | Program: SBIR | Phase: Phase II | Award Amount: 1.00M | Year: 2014

Complex systems are inherently difficult to analyze because of the hidden interactions and dependencies among components. These interactions and dependencies often produce unpredictable, potentially detrimental overall system behaviors (outcomes).

Agency: Department of Defense | Branch: Army | Program: SBIR | Phase: Phase I | Award Amount: 150.00K | Year: 2013

In complex systems there tends to be a greater likelihood of interactions and dependencies between components. These interactions and dependencies can give rise to unpredictable and often detrimental behaviors. To ameliorate this problem, there is a need for methods and tools that enable: detection, diagnosis, and visualization of component interactions; identification and alerting of undesirable interactions; and methods for circumventing undesirable interactions. Phase I of this effort is concerned with developing and demonstrating an interactive tool capable of helping users visualize interdependencies, test and analyze the interdependencies using diagnostic workflow routines, detect and visually depict undesirable interactions, and alert the user about detrimental interactions and their likely consequences.
