Chen Z.,National Laboratory for Parallel and Distributed Processing | Liu Z.,United International University Dhanmondi | Wang J.,National Laboratory for Parallel and Distributed Processing
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Compensating CSP (cCSP) extends CSP for the specification and verification of long-running transactions. The original cCSP is a modest extension to a subset of CSP that does not consider non-deterministic choice, synchronized composition, or recursion. A few further extensions exist. However, it remains a challenge to develop a fixed-point theory of process refinement in cCSP. This paper provides a complete solution to this problem and develops a theory of cCSP corresponding to the theory of CSP, so that verification techniques and their tools, such as FDR, can be extended to compensating processes. © 2011 Springer-Verlag.


Chen Z.,National Laboratory for Parallel and Distributed Processing | Liu Z.,United International University Dhanmondi
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Compensating CSP (cCSP) is an extension to CSP for modeling long-running transactions. It can be used to specify service orchestration programs written in a programming language such as WS-BPEL. So far, only an operational semantics and a trace semantics have been given for cCSP. In this paper, we extend cCSP with more operators and define for it a stable failures semantics in order to reason about non-determinism and deadlock. We give some important algebraic laws for the new operators; these laws can be justified and understood through the stable failures semantics. A case study demonstrates the extended cCSP. © 2010 Springer-Verlag.
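The core idea behind compensable processes can be illustrated outside the process algebra. The sketch below is only an operational analogue of compensation pairing (hypothetical booking steps, not the paper's formal semantics): each step pairs a forward action with a compensation, and on failure the installed compensations run in reverse order.

```python
# Minimal sketch of the compensation-pair idea from long-running transactions
# (illustrative only, not cCSP's formal semantics): a compensable step pairs a
# forward action with a compensation, and an aborted transaction runs the
# compensations of completed steps in reverse order.

def run_transaction(steps):
    """Each step is (name, forward, compensation); forward returns True on
    success. On failure, installed compensations run in reverse order."""
    installed = []          # compensation stack
    log = []
    for name, forward, comp in steps:
        if forward():
            log.append(name)
            installed.append(comp)
        else:                # abort: compensate in reverse order
            for c in reversed(installed):
                log.append(c())
            return False, log
    return True, log

# Hypothetical example: the second step fails, so the first is compensated.
steps = [
    ("book_flight", lambda: True,  lambda: "cancel_flight"),
    ("book_hotel",  lambda: False, lambda: "cancel_hotel"),
]
ok, log = run_transaction(steps)
# ok is False; log is ["book_flight", "cancel_flight"]
```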


Wei D.,National University of Defense Technology | Wang T.,National University of Defense Technology | Wang J.,National Laboratory for Parallel and Distributed Processing
IEICE Transactions on Information and Systems | Year: 2011

With the aim of improving the effectiveness of SAWSDL service discovery, this paper proposes a novel discovery method for SAWSDL services based on the matchmaking of so-called fine-grained data semantics, defined via sets of atomic elements with built-in data types. The fine-grained data semantics can be obtained by a transformation algorithm that decomposes message-level parameters into a set of atomic elements, taking into account the characteristics of SAWSDL service structure and semantic annotations. A matchmaking algorithm is then proposed for matching fine-grained data semantics, which avoids complex and expensive structural matching at the message level. Because the fine-grained data semantics is independent of the specific data structure of message-level parameters, it can successfully match similar Web services whose parameters have different data structures. Moreover, a comprehensive measure is proposed that considers several important components of SAWSDL service descriptions at the same time. Finally, the method is evaluated on the SAWSDL service discovery test collection SAWSDL-TC2 and compared with other SAWSDL matchmakers. The experimental results show that our method improves the effectiveness of SAWSDL service discovery with low average query response time. The results imply that fine-grained parameters are well suited to representing the data semantics of SAWSDL services, especially when the data structures of parameters are not important for semantics. © 2011 The Institute of Electronics, Information and Communication Engineers.
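The decomposition idea can be sketched as follows. This is our reading of the abstract, not the paper's exact algorithm; the parameter structures and annotation names are hypothetical: flatten a nested message parameter into a set of atomic (semantic-annotation, built-in type) pairs, then match two services by set overlap, ignoring how the atoms were nested.

```python
# Illustrative sketch of fine-grained data semantics matching (hypothetical
# structures; not the paper's algorithm): decompose a nested parameter into
# atomic (annotation, built-in type) elements and match by set overlap.

def atoms(param):
    """Recursively collect leaf elements with built-in types."""
    if isinstance(param, dict):            # complex type: recurse into fields
        out = set()
        for child in param.values():
            out |= atoms(child)
        return out
    return {param}                         # atomic element: (annotation, type)

def match_degree(query, service):
    """Fraction of the query's atomic elements covered by the service."""
    q, s = atoms(query), atoms(service)
    return len(q & s) / len(q) if q else 0.0

# Two hypothetical services with different structures but the same atoms:
flat   = {"a": ("Price", "float"), "b": ("Currency", "string")}
nested = {"cost": {"amount": ("Price", "float"),
                   "unit":   ("Currency", "string")}}
# match_degree(flat, nested) == 1.0 despite the different nesting
```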


Yang X.,National Laboratory for Parallel and Distributed Processing | Wang J.,National Laboratory for Parallel and Distributed Processing | Yi X.,National Laboratory for Parallel and Distributed Processing
Computer Journal | Year: 2010

Model abstraction plays an important role in model checking program source code. Slicing execution is a lightweight symbolic execution procedure that extracts models of C programs in an over-approximated way. In this paper, we present an approach to improving slicing execution with a novel concept called partial weakest precondition (PWP), which alleviates the state space explosion problem. PWPs specify the corresponding weakest precondition conservatively by considering only part of the program variables. We show how to integrate PWP with slicing execution, which leads to a compact model with a much smaller state space than the one obtained by the original slicing execution. A new PWP implementation is also presented that avoids a possibly exponential PWP formula size and supports pointers and aliases. The distinguishing features of the implementation are that it does not need to translate the program into passive form beforehand, and that it supports loops well. Compared with slicing execution without PWP, experiments on the SSL protocol, based on the C source code of openssl-0.9.6c, show that the state space can be reduced to as little as 1/10 after applying PWP.
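For readers unfamiliar with weakest preconditions, the textbook rule for assignments can be sketched in a few lines. This shows standard wp, not the paper's PWP implementation; a partial WP in the paper's spirit would additionally drop conjuncts that mention untracked variables.

```python
# Textbook weakest-precondition rule for assignments (not the paper's PWP
# implementation): wp(x := e, Q) is Q with every occurrence of x replaced
# by e. Predicates are nested tuples like (">", "x", 5).

def subst(expr, var, repl):
    """Substitute repl for var throughout a nested-tuple expression."""
    if expr == var:
        return repl
    if isinstance(expr, tuple):
        return tuple(subst(e, var, repl) for e in expr)
    return expr

def wp_assign(var, rhs, post):
    """wp(var := rhs, post) = post[rhs / var]."""
    return subst(post, var, rhs)

# wp(x := x + 1, x > 5)  ==  x + 1 > 5
post = (">", "x", 5)
pre = wp_assign("x", ("+", "x", 1), post)
# pre == (">", ("+", "x", 1), 5)
```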


Chen J.,National Laboratory for Parallel and Distributed Processing | Chen J.,National University of Defense Technology | Mao X.,National University of Defense Technology
Proceedings of the 2012 IEEE 6th International Conference on Software Security and Reliability Companion, SERE-C 2012 | Year: 2012

Buffer overflow is one of the most dangerous and common vulnerabilities in CPS software. Despite static and dynamic analysis, manual analysis is still heavily used; it is useful but costly. Human computation harnesses human time and energy through game play to solve computational problems. In this paper, we propose a human computation method to detect buffer overflows that does not ask a person directly whether there is a potential vulnerability, but instead asks what a randomly paired person would think. We implement this method as a game called Bodhi, in which each player is shown a code snippet and asked to decide whether their partner would think there is a buffer overflow vulnerability at a given position in the code. The purpose of the game is to exploit rich, distributed human resources to increase the effectiveness of manual detection of buffer overflows. The game has proven efficient and enjoyable in practice. © 2012 IEEE.


Wei D.P.,National University of Defense Technology | Wang T.,National University of Defense Technology | Wang J.,National Laboratory for Parallel and Distributed Processing
Science China Information Sciences | Year: 2012

Semantic Web service matchmaking, one of the most challenging problems in Semantic Web services (SWS), aims to filter and rank a set of services with respect to a service query using a certain matching strategy. In this paper, we propose a logistic regression based method to aggregate several matching strategies, instead of a fixed integration (e.g., the weighted sum), for SWS matchmaking. The logistic regression model is trained on data derived from the binary relevance assessments of existing test collections, and then used to predict the probability of relevance between a new query-service pair from the matching values obtained by the various matching strategies. Services are then ranked according to their probabilities of relevance with respect to each query. Our method is evaluated on two main test collections, SAWSDL-TC2 and the Jena Geography Dataset (JGD). Experimental results show that the logistic regression model effectively predicts the relevance between a query and a service, and hence improves the effectiveness of service matchmaking. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
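The aggregation step can be sketched with plain stochastic gradient descent on toy data (the scores, labels, and training setup below are hypothetical, not the paper's): each training example is a vector of per-strategy matching scores with a binary relevance label, and the learned model maps new score vectors to a probability of relevance.

```python
# Hedged sketch of logistic-regression aggregation of matching strategies
# (toy data, plain SGD; not the paper's training setup).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit bias + one weight per matching strategy by SGD on log-loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                       # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, x):
    """Probability that the query-service pair is relevant."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Toy examples: (syntactic score, semantic score) -> relevant?
X = [(0.9, 0.8), (0.7, 0.9), (0.2, 0.1), (0.3, 0.2)]
y = [1, 1, 0, 0]
w = train(X, y)
# Services are then ranked by predict(w, scores) per query.
```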


Simon A.,TU Munich | Chen L.,National Laboratory for Parallel and Distributed Processing
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

While the revised widening for polyhedra is defined in terms of inequalities, most implementations use the double description method as a means to an efficient implementation. We show how standard widening can be implemented simply and efficiently using a normalized H-representation (constraint-only), which has become popular in recent approximations to polyhedral analysis. We then detail a novel heuristic for this representation that is tuned to capture linear transformations of the state space while ensuring quick convergence for non-linear transformations for which no precise linear invariants exist. © 2010 Springer-Verlag.
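The constraint-based view of standard widening has a simple one-dimensional analogue. The sketch below uses intervals rather than general polyhedra (so it is only an illustration of the principle, not the paper's method): keep the constraints of the first value that the second still satisfies, and drop unstable ones to force convergence.

```python
# Box (interval) analogue of standard widening, illustrating the
# constraint-only idea: a bound of the first abstract value is kept only if
# the second value still satisfies it; unstable bounds go to infinity.
import math

def widen(a, b):
    """Widen interval a = (lo_a, hi_a) by interval b = (lo_b, hi_b)."""
    lo = a[0] if b[0] >= a[0] else -math.inf   # lower bound stable? keep it
    hi = a[1] if b[1] <= a[1] else math.inf    # upper bound stable? keep it
    return (lo, hi)

# A loop head that sees [0, 1] and then [0, 2]: the growing upper bound is
# widened away, guaranteeing termination of the fixpoint iteration.
# widen((0, 1), (0, 2)) == (0, inf)
```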


Xia F.,National Laboratory for Parallel and Distributed Processing | Dou Y.,National Laboratory for Parallel and Distributed Processing | Jin G.,Wuhan Naval University of Engineering
Journal of Supercomputing | Year: 2012

In the field of RNA secondary structure prediction, MFE, SCFG, and homologous comparative sequence analysis are three classical computational analysis approaches. However, the parallel efficiency of many implementations on general-purpose computers is greatly limited by complicated data dependencies and tight synchronization, and large-scale parallel computers are too expensive for many research institutes to use easily. Recently, FPGA chips have provided a new way to accelerate these algorithms through fine-grained custom design. We propose a unified parallelism scheme and logic circuit architecture for three classical algorithms, Zuker, RNAalifold, and CYK, based on a systolic-like master-slave PE (Processing Element) array for fine-grained hardware implementation on FPGA. We partition tasks by columns and assign them to PEs for load balance, and we exploit data reuse schemes to reduce the need to load matrices from external memory. The experimental results show a 12-14x speedup over the three software versions running on a PC platform with an AMD Phenom 9650 quad-core CPU. The computational power of our prototype is comparable to a PC cluster of 20 Intel Xeon CPUs for RNA secondary structure prediction, while its power consumption is only about 10% of the latter. © Springer Science+Business Media, LLC 2011.
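The dependency pattern these accelerators exploit can be seen in a much simpler relative of the Zuker algorithm. The sketch below is the Nussinov base-pair maximization recurrence (not MFE, and not the paper's hardware design): it fills the same triangular matrix, and all cells on one anti-diagonal are independent, which is the parallelism a column-partitioned PE array exploits.

```python
# Nussinov base-pair maximization: a simplified stand-in for Zuker-style
# RNA folding DP, shown here for its triangular dependency structure.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq):
    """Maximum number of nested base pairs in seq."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # cells with the same span lie on
        for i in range(n - span):            # one anti-diagonal and are
            j = i + span                     # mutually independent
            best = max(dp[i + 1][j], dp[i][j - 1])      # i or j unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                   # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

# e.g. nussinov("GGGAAACCC") finds the three nested G-C pairs
```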


Wei D.,National University of Defense Technology | Wei D.,University of Zürich | Wang T.,National University of Defense Technology | Wang J.,National Laboratory for Parallel and Distributed Processing | Bernstein A.,University of Zürich
Journal of Web Semantics | Year: 2011

As the number of publicly available services grows, discovering proper services becomes an important issue and has attracted considerable attention. This paper presents a new customizable and effective matchmaker, called SAWSDL-iMatcher. It supports a matchmaking mechanism, named iXQuery, which extends XQuery with various similarity joins for SAWSDL service discovery. Using SAWSDL-iMatcher, users can flexibly customize their preferred matching strategies according to different application requirements. SAWSDL-iMatcher currently supports several matching strategies, including syntactic and semantic matching strategies as well as several statistical-model-based strategies that can effectively aggregate similarity values from matching on various types of service description information, such as service name, description text, and semantic annotation. Moreover, we propose a semantic matching strategy to measure the similarity among SAWSDL semantic annotations. These matching strategies have been evaluated in SAWSDL-iMatcher on SAWSDL-TC2 and the Jena Geography Dataset (JGD). The evaluation shows that different matching strategies suit different tasks and contexts, which implies the necessity of a customizable matchmaker. It also provides evidence that the effectiveness of SAWSDL service matching can be significantly improved by statistical-model-based matching strategies. Our matchmaker is competitive with other matchmakers in benchmark tests at the S3 contest 2009. © 2011 Elsevier B.V. All rights reserved.
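A similarity join in this spirit can be sketched in a few lines. The services, weights, threshold, and token-level Jaccard measure below are all hypothetical stand-ins for the paper's strategies: the query is joined with every service whose aggregated name/description similarity clears a threshold, and results are ranked by score.

```python
# Hedged sketch of a similarity join in the spirit of iXQuery (hypothetical
# services and weights; token-level Jaccard stands in for the paper's
# matching strategies).

def jaccard(a, b):
    """Token-level Jaccard similarity of two strings."""
    x, y = set(a.lower().split()), set(b.lower().split())
    return len(x & y) / len(x | y) if x | y else 0.0

def similarity_join(query, services, w_name=0.6, w_desc=0.4, threshold=0.3):
    """Join query with services by aggregated similarity, ranked by score."""
    scored = []
    for s in services:
        score = (w_name * jaccard(query["name"], s["name"])
                 + w_desc * jaccard(query["desc"], s["desc"]))
        if score >= threshold:
            scored.append((score, s["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

query = {"name": "city weather forecast", "desc": "forecast for a city"}
services = [
    {"name": "weather forecast service", "desc": "forecast for a given city"},
    {"name": "currency converter",       "desc": "convert currency rates"},
]
# similarity_join(query, services) == ["weather forecast service"]
```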


Wang L.,National Laboratory for Parallel and Distributed Processing | Xue J.,Programming Languages and Compilers Group | Yang X.,National Laboratory for Parallel and Distributed Processing
Proceedings -Design, Automation and Test in Europe, DATE | Year: 2010

This paper presents reuse-aware modulo scheduling to maximize stream reuse and improve concurrency for stream-level loops running on stream processors. The novelty lies in a new representation of an unrolled and software-pipelined stream-level loop as a set of reuse equations, which enables the simultaneous optimization of two performance objectives for the loop, reuse and concurrency, in a unified framework. We have implemented this work in the compiler developed for our 64-bit FT64 stream processor. Experimental results obtained on FT64 and by simulation using nine representative stream applications demonstrate the effectiveness of the proposed approach. © 2010 EDAA.
