Menezes N.N.C., Decision Simulation
Proceedings - IEEE International Symposium on Distributed Simulation and Real-Time Applications, DS-RT | Year: 2014

The initialization of distributed heterogeneous simulation systems presents challenges in parallelizing object construction and setup. This paper presents a method for the parallel initialization of distributed simulation systems based on a two-phase setup. Object instantiation and setup are split into Config and Post-Bind phases, which permits fast creation times, allows initialization tasks to be distributed among different nodes, and removes the ordering requirement between the initialization of interdependent objects. A framework of references is presented to facilitate the use of remote objects in an MPI environment, using proxies to access local and remote variables, served by a reference name server built into the simulation engine. © 2014 IEEE.
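The key idea of the two-phase setup can be sketched in a few lines. This is a minimal single-process illustration, not the paper's MPI implementation: all class names (`NameServer`, `Proxy`, `SimObject`) and the method names `config`/`post_bind` are our own stand-ins for the Config and Post-Bind phases described above.

```python
class NameServer:
    """Stand-in for the engine's reference name server (illustrative only)."""
    def __init__(self):
        self._registry = {}
    def register(self, name, obj):
        self._registry[name] = obj
    def lookup(self, name):
        return self._registry[name]

class Proxy:
    """Defers resolution of a named object until every node has registered."""
    def __init__(self, server, name):
        self._server, self._name = server, name
    def resolve(self):
        return self._server.lookup(self._name)

class SimObject:
    def __init__(self, name, server):
        self.name = name
        self._server = server
        self.peer = None
    def config(self, peer_name):
        # Phase 1 (Config): construct and register. The peer may not exist
        # yet, so only a proxy is stored -- no ordering requirement.
        self._server.register(self.name, self)
        self.peer = Proxy(self._server, peer_name)
    def post_bind(self):
        # Phase 2 (Post-Bind): resolve proxies once all objects are configured.
        self.peer = self.peer.resolve()

server = NameServer()
a = SimObject("a", server)
b = SimObject("b", server)
a.config("b")   # "b" is not registered yet -- acceptable in the Config phase
b.config("a")
a.post_bind()
b.post_bind()
```

Because references are resolved only in the second phase, interdependent objects can be configured in any order, and the Config phase can run in parallel across nodes.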


Xu Y., Harbin Institute of Technology | Zhu Q., Harbin Institute of Technology | Fan Z., Harbin Institute of Technology | Fan Z., East China Jiaotong University | And 2 more authors.
Neurocomputing | Year: 2013

Transformation methods have been widely used in biometrics such as face recognition, gait recognition and palmprint recognition. Conventional transformation methods are "optimal" for the training samples but not necessarily for every test sample to be classified, because they use only the information of the training samples to obtain the transform axes. For example, if the transformation method is linear discriminant analysis (LDA), then in the new space obtained by the corresponding transformation the training samples have the maximum between-class distance and the minimum within-class distance; however, there is no guarantee that the transformation also maximizes the between-class distance and minimizes the within-class distance of the test samples in the new space. Similarly, principal component analysis (PCA) best represents the training samples with minimum error, but it is not guaranteed that every test sample is also represented with minimum error. In this paper, we propose to improve conventional transformation methods by relating the training phase to the test sample. The proposed method uses both the training samples and the test sample simultaneously to obtain an "optimal" representation of the test sample. In other words, the proposed method is not only an improvement on conventional transformation methods but also inherits the merits of representation-based classification, which has shown very good performance on various problems. Unlike conventional distance-based classification, the proposed method evaluates only the distances between the test sample and the "closest" training samples and relies only on them to perform classification. Moreover, the proposed method uses a weighted distance to classify the test sample, where the weight is the representation coefficient of a linear combination of the training samples that well represents the test sample.
© 2013 Elsevier B.V.
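The core of representation-based classification can be illustrated with a simple least-squares variant. This sketch only shows the general idea (represent the test sample as a linear combination of training samples, then score each class by how well its coefficient-weighted samples reconstruct the test sample); the paper's actual method, with its "closest"-sample selection and weighted distance, differs in detail. The function name is our own.

```python
import numpy as np

def representation_classify(X_train, y_train, x_test):
    """Toy representation-based classifier.
    X_train: (n_samples, n_features); y_train: (n_samples,); x_test: (n_features,)."""
    # Represent the test sample as a linear combination of ALL training samples.
    coef, *_ = np.linalg.lstsq(X_train.T, x_test, rcond=None)
    classes = np.unique(y_train)
    # Class-wise residual: how well each class's coefficient-weighted samples
    # reconstruct the test sample; the best-reconstructing class wins.
    residuals = [np.linalg.norm(x_test - X_train[y_train == c].T @ coef[y_train == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

Because the coefficients are computed from the test sample itself, the decision adapts to each test sample rather than relying on transform axes fixed at training time, which is the motivation stated above.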


Van Nieuwenhuyse I., Catholic University of Leuven | De Boeck L., Decision Simulation | Lambrecht M., Catholic University of Leuven | Vandaele N.J., Catholic University of Leuven
Computers in Industry | Year: 2011

The planning and decision support capabilities of the manufacturing planning and control system, which forms the core of any enterprise resource planning package, can be enhanced substantially by including a decision support module as an add-on at the mid-term planning level. This module, called advanced resource planning (ARP), provides a parameter-setting process whose ultimate goal is to yield realistic information about production lead times for scheduling purposes, sales and marketing, strategic and operational decision making, and suppliers and customers. This article illustrates the ARP approach with reports from several real-life implementations at large industrial companies. © 2010 Elsevier B.V. All rights reserved.


Xie B.-L., Harbin Institute of Technology | Xie B.-L., Decision Simulation | Li Y., Harbin Institute of Technology | Liu M., Harbin Institute of Technology
Xitong Gongcheng Lilun yu Shijian/System Engineering Theory and Practice | Year: 2014

Optimizing the routing of deicing-salt spreading vehicles plays an important role in improving the operational efficiency of snow removal, reducing environmental pollution and lowering road maintenance costs. Based on the characteristics of deicing-salt spreading operations, and taking into account the road network structure, capacity constraints and load-balance constraints, a vehicle routing model for spreading deicing salt and a variant with temporary supply points were built. A genetic algorithm was applied to solve each model. Numerical examples showed that the model with temporary supply points yields less total vehicle mileage, with a deadheading distance only 19.3% of that of the previous model, meaning that the efficiency of the spreading operation was improved significantly.
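A genetic algorithm for such routing models can be sketched as follows. This is a generic GA skeleton over visiting orders, minimizing total mileage on a distance matrix; the paper's models additionally encode capacity and load-balance constraints, and all parameter values here (population size, mutation rate, etc.) are illustrative, not the authors'.

```python
import random

def route_length(route, dist):
    # Total mileage of a depot (node 0) -> segments -> depot tour.
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def genetic_route(dist, pop_size=30, generations=200, mut_rate=0.2, seed=1):
    """Toy GA: evolve permutations of the non-depot nodes to minimize mileage."""
    rng = random.Random(seed)
    nodes = list(range(1, len(dist)))
    pop = [rng.sample(nodes, len(nodes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(nodes))
            # Order crossover: prefix from p1, remaining nodes in p2's order.
            child = p1[:cut] + [n for n in p2 if n not in p1[:cut]]
            if rng.random() < mut_rate:          # swap mutation
                i, j = rng.sample(range(len(nodes)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)
```

The crossover keeps every child a valid permutation, which is why order crossover (rather than plain one-point crossover) is the usual choice for routing chromosomes.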


Lu Y., Harbin Institute of Technology | Fang X., Harbin Institute of Technology | Xie B., Decision Simulation
Neural Computing and Applications | Year: 2014

Linear regression classification (LRC) uses the least-squares algorithm to solve a class-specific linear regression equation and shows good classification performance on face image data. However, when the regression axes of class-specific samples intersect, LRC cannot reliably classify samples distributed near the intersections; it also performs poorly under severe lighting variations. This paper proposes a new classification method, kernel linear regression classification (KLRC), based on LRC and the kernel trick. KLRC is a nonlinear extension of LRC that mitigates these drawbacks: it implicitly maps the data into a high-dimensional kernel space using the nonlinear mapping determined by a kernel function. Through this mapping, KLRC makes the data more linearly separable and performs well for face recognition under varying lighting. We conduct experiments on three standard databases under several evaluation protocols. The proposed methodology not only outperforms LRC but also performs better than typical kernel methods such as kernel linear discriminant analysis and kernel principal component analysis. © 2013 Springer-Verlag London.
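The baseline LRC scheme that KLRC extends can be sketched directly from the description above: regress the test sample onto each class's samples by least squares and classify to the class with the smallest reconstruction residual. The kernelized version replaces these Euclidean computations with kernel evaluations; only plain LRC is shown here, and the function name is our own.

```python
import numpy as np

def lrc_classify(X_train, y_train, x_test):
    """Linear regression classification (LRC).
    X_train: (n_samples, n_features); y_train: (n_samples,); x_test: (n_features,)."""
    best_class, best_residual = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c].T              # columns = samples of class c
        # Least-squares projection of x_test onto the class-c subspace.
        beta, *_ = np.linalg.lstsq(Xc, x_test, rcond=None)
        residual = np.linalg.norm(x_test - Xc @ beta)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```

When two class subspaces intersect, samples near the intersection have nearly equal residuals for both classes, which is exactly the ambiguity the kernel mapping is meant to relieve.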
