Li F.,Digital China Information System Co. | Li F.,Key Laboratory of High Confidence Software Technologies | Li B.,Beihang University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2017

In the past fifteen years, government departments and business service providers in China have each established their own information service systems to publish services for citizens. However, these systems are isolated from each other and the services they provide are heterogeneous. It would be more beneficial to build a common infrastructure that connects, aggregates, and utilizes all of these services. In this paper, a Smart Citizen Fusion Service Platform Software Product Line, called CrowdService, is introduced as a solution. Based on open data, crowdsourcing, user profiles, and an application framework, the architecture of CrowdService supports the development of personalized applications in smart cities. It consists of resource-gathering components, an open API component, PaaS components, service recommendation, and other components that support rapid development of personalized applications in different cities. In terms of system management, CrowdService minimizes total product development cost, increases productivity, supports on-time delivery and full control over all lifecycles, decreases defect rates in both assets and products, and promotes mission focus, architectural conformance, process compliance, high quality, and customer satisfaction. Moreover, one practice of the smart citizen fusion service platform in China is introduced to demonstrate the effectiveness of CrowdService. © Springer International Publishing AG 2017.
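
The abstract describes the platform architecture only at a high level; purely as an illustration (not the authors' actual API), the following Java sketch shows how resource-gathering, open-API, and recommendation components of such a fusion platform might be wired together behind common interfaces. All type and method names here are hypothetical.

import java.util.ArrayList;
import java.util.List;

// Hypothetical component interfaces for a citizen fusion service platform; the
// real CrowdService architecture is not specified at this level of detail.
interface ResourceGatherer { List<String> gather(String city); }
interface ServiceRecommender { List<String> recommend(String userProfile, List<String> services); }

class OpenApiGateway {
    private final List<ResourceGatherer> gatherers = new ArrayList<>();
    private final ServiceRecommender recommender;

    OpenApiGateway(ServiceRecommender recommender) { this.recommender = recommender; }

    void register(ResourceGatherer gatherer) { gatherers.add(gatherer); }

    // Aggregate heterogeneous services for one city, then personalize them for a user profile.
    List<String> servicesFor(String city, String userProfile) {
        List<String> all = new ArrayList<>();
        for (ResourceGatherer g : gatherers) all.addAll(g.gather(city));
        return recommender.recommend(userProfile, all);
    }

    public static void main(String[] args) {
        OpenApiGateway gateway = new OpenApiGateway((profile, services) -> services);  // no-op recommender
        gateway.register(city -> List.of(city + ":bus-schedule", city + ":health-clinic"));
        System.out.println(gateway.servicesFor("Shenzhen", "commuter"));
    }
}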


Ye W.,Peking University | Luo R.,Peking University | Zhang S.,Peking University | Zhang S.,Key Laboratory of High Confidence Software Technologies
Proceedings of International Conference on Service Science, ICSS | Year: 2013

In the Web 2.0 setting, where applications are usually constructed 'on the fly' for transient or personal needs by integrating existing services and information, the complexity and overhead of Web services models and service composition become a problem. As an emerging Web 2.0 technology, mashup enables end-users to integrate separate Web sources to create innovative applications. However, today's research on mashups primarily focuses on data integration and presentation integration rather than business processes. This paper proposes a process mashup model based on complex event processing, which leverages the event-driven Publish/Subscribe communication paradigm to orchestrate browser-side components. The model also provides a definition language for composite events to help end-users define the process behavior, and consequently shields them from traditional process constructs like conditional branching and looping, allowing them to create articulated business processes in a lightweight fashion. © 2013 IEEE.
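
The abstract does not give the concrete composite-event syntax; as a minimal sketch of the underlying idea only, the Java fragment below shows an event-driven publish/subscribe bus in which a composite event (here, a simple conjunction of two primitive events) triggers the next step of a browser-side process. The class and event names are assumptions made for illustration.

import java.util.*;
import java.util.function.Consumer;

// Minimal publish/subscribe bus: components publish primitive events, and a
// composite-event subscription fires once all constituent events have been seen.
class EventBus {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    void subscribe(String event, Consumer<String> handler) {
        handlers.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    void publish(String event, String payload) {
        for (Consumer<String> h : handlers.getOrDefault(event, List.of())) h.accept(payload);
    }

    // Conjunction of primitive events: the action runs after every named event has occurred.
    void subscribeAll(List<String> events, Runnable action) {
        Set<String> pending = new HashSet<>(events);
        for (String e : events) {
            subscribe(e, payload -> {
                pending.remove(e);
                if (pending.isEmpty()) action.run();
            });
        }
    }
}

class MashupDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // End-user-defined composite event: proceed once both source widgets have produced data.
        bus.subscribeAll(List.of("mapLoaded", "weatherLoaded"),
                () -> System.out.println("compose map + weather view"));
        bus.publish("mapLoaded", "{}");
        bus.publish("weatherLoaded", "{}");
    }
}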


Wang Z.,Peking University | Wang Z.,Key Laboratory of High Confidence Software Technologies
Advances in Intelligent and Soft Computing | Year: 2012

Addressing the main problems and difficulties that non-computer-science students may face in learning a data structures and algorithms course, this paper, based on years of teaching experience, discusses some ideas for teaching data structures and algorithms, such as using algorithm design methods as clues to introduce various types of algorithms, using data logical structures as modules to organize course content, and using real-life examples to improve students' interest and application ability. © 2012 Springer-Verlag GmbH.


Sun L.,University of Science and Technology of China | Huang G.,Key Laboratory of High Confidence Software Technologies | Huang G.,Peking University
Journal of Systems Architecture | Year: 2011

Access control is a common concern in most software applications. In component-based systems, although developers can implement access control requirements (ACRs) by simply declaring role-based access control configurations (ACCs) of components, it is still difficult for them to define and evolve ACCs that accurately implement ACRs, due to the gap between the complex high-level ACRs and the voluminous ACCs enforced by underlying middleware platforms, and due to ad hoc human mistakes. This paper introduces and clarifies the concept of accuracy of ACCs relative to ACRs, and presents a set of metrics and algorithms that can be used to automatically evaluate and improve the accuracy of ACCs by evaluating and reconfiguring the software architecture with ACCs. We apply our approach to a composed e-shop application. © 2010 Elsevier B.V. All rights reserved.
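
The paper's metrics operate on architecture-level models; as a simplified illustration only (not the authors' actual metric definitions), the sketch below compares a set of required permissions (the ACRs) against the permissions implied by declared role-based ACCs and reports the mismatches an accuracy metric could be built on. All data in it is invented for the example.

import java.util.*;

// Toy comparison of access control requirements (ACRs) against role-based
// configurations (ACCs); the paper defines its accuracy metrics on richer
// architectural models, so this is only an illustrative approximation.
class AccAccuracyCheck {
    public static void main(String[] args) {
        // ACRs: permission -> roles that must be allowed.
        Map<String, Set<String>> required = Map.of(
                "placeOrder", Set.of("customer"),
                "refundOrder", Set.of("manager"));
        // ACCs as declared on components: permission -> roles actually granted.
        Map<String, Set<String>> configured = Map.of(
                "placeOrder", Set.of("customer", "guest"),   // over-granted
                "refundOrder", Set.of());                     // under-granted

        int mismatches = 0;
        for (var entry : required.entrySet()) {
            Set<String> granted = configured.getOrDefault(entry.getKey(), Set.of());
            Set<String> missing = new HashSet<>(entry.getValue());
            missing.removeAll(granted);                       // required but not granted
            Set<String> extra = new HashSet<>(granted);
            extra.removeAll(entry.getValue());                // granted but not required
            mismatches += missing.size() + extra.size();
            System.out.println(entry.getKey() + " missing=" + missing + " extra=" + extra);
        }
        System.out.println("total mismatches: " + mismatches);
    }
}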


Mei H.,Peking University | Mei H.,Key Laboratory of High Confidence Software Technologies | Hao D.,Peking University | Hao D.,Key Laboratory of High Confidence Software Technologies | And 6 more authors.
IEEE Transactions on Software Engineering | Year: 2012

Test case prioritization is used in regression testing to schedule the execution order of test cases so as to expose faults earlier in testing. Over the past few years, many test case prioritization techniques have been proposed in the literature. Most of these techniques require data on dynamic execution in the form of code coverage information for test cases. However, the collection of dynamic code coverage information on test cases has several associated drawbacks, including cost increases and reduction in prioritization precision. In this paper, we propose an approach to prioritizing test cases in the absence of coverage information that operates on Java programs tested under the JUnit framework, an increasingly popular class of systems. Our approach, JUnit test case Prioritization Techniques operating in the Absence of coverage information (JUPTA), analyzes the static call graphs of JUnit test cases and the program under test to estimate the ability of each test case to achieve code coverage, and then schedules the order of these test cases based on those estimates. To evaluate the effectiveness of JUPTA, we conducted an empirical study on 19 versions of four Java programs ranging from 2K to 80K lines of code, and compared several variants of JUPTA with three control techniques and several existing dynamic coverage-based test case prioritization techniques, assessing the abilities of the techniques to increase the rate of fault detection of test suites. Our results show that the test suites constructed by JUPTA are more effective than those in random and untreated test orders in terms of fault-detection effectiveness. Although the test suites constructed by dynamic coverage-based techniques retain fault-detection effectiveness advantages, the fault-detection effectiveness of the test suites constructed by JUPTA is close to that of the test suites constructed by those techniques, and the fault-detection effectiveness of the test suites constructed by some of JUPTA's variants is better than that of the test suites constructed by several of those techniques. © 2012 IEEE.
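
JUPTA's estimates come from static call graphs of the JUnit tests and the program under test; as a rough sketch of that idea (with a hand-made call-graph map standing in for real static analysis), the fragment below scores each test by the number of methods transitively reachable from it and schedules tests in descending score order, a "total"-style prioritization. The names and graph are invented for the example.

import java.util.*;

// Rough sketch of coverage estimation from a static call graph: each JUnit test
// is scored by how many methods are transitively reachable from it, and tests
// are ordered by descending score. JUPTA builds the graph by static analysis;
// here it is a hard-coded map.
class StaticPrioritization {
    static Set<String> reachable(String start, Map<String, List<String>> callGraph) {
        Set<String> seen = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(start));
        while (!stack.isEmpty()) {
            String m = stack.pop();
            if (seen.add(m)) stack.addAll(callGraph.getOrDefault(m, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        Map<String, List<String>> callGraph = Map.of(
                "testCheckout", List.of("Cart.add", "Cart.total"),
                "testTotal",    List.of("Cart.total"),
                "Cart.total",   List.of("Price.round"));
        List<String> tests = new ArrayList<>(List.of("testTotal", "testCheckout"));
        tests.sort(Comparator.comparingInt(
                (String t) -> reachable(t, callGraph).size()).reversed());
        System.out.println(tests);   // testCheckout first: larger estimated coverage
    }
}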


Hao D.,Key Laboratory of High Confidence Software Technologies | Hao D.,Peking University | Zhang L.,Key Laboratory of High Confidence Software Technologies | Zhang L.,Peking University | And 5 more authors.
Proceedings - International Conference on Software Engineering | Year: 2012

Most test suite reduction techniques aim to select, from a given test suite, a minimal representative subset of test cases that retains the same code coverage as the suite. Empirical studies have shown, however, that test suites reduced in this manner may lose fault detection capability. Techniques have been proposed to retain certain redundant test cases in the reduced test suite so as to reduce the loss in fault-detection capability, but these still concede some degree of loss. Thus, these techniques may be applicable only in cases where loose demands are placed on the upper limit of loss in fault-detection capability. In this work we present an on-demand test suite reduction approach, which attempts to select a representative subset satisfying the same test requirements as an initial test suite, conceding at most l% loss in fault-detection capability for at least c% of the instances in which it is applied. Our technique collects statistics about loss in fault-detection capability at the level of individual statements and models the problem of test suite reduction as an integer linear programming problem. We have evaluated our approach in the contexts of three scenarios in which it might be used. Our results show that most test suites reduced by our approach satisfy given fault detection capability demands, and that the approach compares favorably with an existing test suite reduction approach. © 2012 IEEE.
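
The abstract states that reduction is modeled as an integer linear program; a schematic formulation consistent with that description (the notation below is assumed here, and the paper's exact constraints differ in detail) is

\min \sum_{i=1}^{n} x_i \quad \text{s.t.} \quad \sum_{i\,:\,t_i \text{ covers } r_j} x_i \ge 1 \;\; \forall r_j, \qquad \widehat{L}(x_1,\dots,x_n) \le l, \qquad x_i \in \{0,1\},

where x_i = 1 means test case t_i is kept, r_j ranges over the test requirements, and \widehat{L} is the statement-level estimate of fault-detection-capability loss (in percent) built from the collected statistics; the "at least c% of instances" guarantee then rests on how that estimate is calibrated.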


Liu H.,Beijing Institute of Technology | Liu H.,Key Laboratory of High Confidence Software Technologies | Ma Z.,Key Laboratory of High Confidence Software Technologies | Shao W.,Key Laboratory of High Confidence Software Technologies | Niu Z.,Beijing Institute of Technology
IEEE Transactions on Software Engineering | Year: 2012

Bad smells are signs of potential problems in code. Detecting and resolving bad smells, however, remain time-consuming for software engineers despite proposals on bad smell detection and refactoring tools. Numerous bad smells have been recognized, yet the sequences in which the detection and resolution of different kinds of bad smells are performed are rarely discussed because software engineers do not know how to optimize sequences or determine the benefits of an optimal sequence. To this end, we propose a detection and resolution sequence for different kinds of bad smells to simplify their detection and resolution. We highlight the necessity of managing bad smell resolution sequences with a motivating example, and recommend a suitable sequence for commonly occurring bad smells. We evaluate this recommendation on two nontrivial open source applications, and the evaluation results suggest that a significant reduction in effort ranging from 17.64 to 20 percent can be achieved when bad smells are detected and resolved using the proposed sequence. © 2006 IEEE.
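
The recommended sequence itself appears in the paper, not the abstract; purely to illustrate the mechanism, the sketch below applies a fixed, hypothetical ordering of smell-resolution passes so that each kind of smell is resolved before kinds whose detection it may affect. The ordering, smell names, and string-based "code base" are all invented for the example.

import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative only: detect-and-resolve passes applied in a fixed order. The
// ordering below is hypothetical; the paper derives its recommended sequence
// from an analysis of how resolving one smell affects detecting others.
class SmellPipeline {
    record Pass(String smell, UnaryOperator<String> resolve) {}

    public static void main(String[] args) {
        List<Pass> orderedPasses = List.of(
                new Pass("Duplicated Code", code -> code.replace("<dup>", "")),
                new Pass("Long Method",     code -> code.replace("<long>", "")),
                new Pass("Feature Envy",    code -> code.replace("<envy>", "")));
        String codeBase = "core<dup><long>module<envy>";
        for (Pass p : orderedPasses) {
            codeBase = p.resolve().apply(codeBase);   // resolve all instances of this smell kind
            System.out.println("after " + p.smell + ": " + codeBase);
        }
    }
}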


Liu H.,Beijing Institute of Technology | Liu H.,Key Laboratory of High Confidence Software Technologies | Gao Y.,Beijing Institute of Technology | Niu Z.,Beijing Institute of Technology
Proceedings - International Computer Software and Applications Conference | Year: 2012

Software refactoring may be carried out with two different tactics. The first is XP-style small-step refactoring, also called floss refactoring. The other tactic, called root canal refactoring, is to set aside an extended period specially for refactoring. Floss refactoring, as one of the cornerstones of XP, is well acknowledged. In contrast, root canal refactoring is viewed with doubt, especially by XP advocates. Despite the doubts, however, no large-scale empirical study on refactoring tactics has been reported. At the same time, cases of root canal refactoring have been reported from industry, e.g., at Microsoft, and researchers from academia have proposed various approaches to facilitating root canal refactoring. To this end, this paper investigates the following questions. (1) How often are the two different tactics employed, respectively? (2) Is there any correlation between refactoring tactics and categories of refactorings? In other words, are some kinds of refactorings more likely than others to be done as floss refactorings or root canal refactorings? To answer these questions, we analyze refactoring histories collected by the Eclipse Usage Data Collector (UDC). The data are collected from 753,367 engineers worldwide. Analysis results suggest that about 11.5 percent of refactorings collected by UDC are root canal refactorings, whereas the others (88.5 percent) are floss refactorings. We also find that some kinds of refactorings, e.g., Introduce Parameter, are more likely than others to be performed as root canal refactorings. © 2012 IEEE.
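
The abstract does not spell out how a refactoring in the UDC logs is classified as floss or root canal; one plausible approximation, stated here as an assumption rather than the paper's actual criterion, is to group a developer's consecutive refactorings into sessions and count long, refactoring-dense sessions as root canal. The thresholds and log below are made up.

import java.util.List;

// Assumption for illustration: refactorings separated by less than a gap threshold
// belong to one session, and sessions containing many refactorings are counted as
// root canal; the rest are floss. The paper's actual classification may differ.
class TacticClassifier {
    public static void main(String[] args) {
        long gapMs = 5 * 60 * 1000;        // maximum gap within one session
        int rootCanalThreshold = 10;       // refactorings per session to count as root canal
        List<Long> timestamps = List.of(0L, 60_000L, 90_000L, 7_200_000L);  // example UDC-style log

        int sessionSize = 1, floss = 0, rootCanal = 0;
        for (int i = 1; i <= timestamps.size(); i++) {
            boolean sessionEnds = i == timestamps.size()
                    || timestamps.get(i) - timestamps.get(i - 1) > gapMs;
            if (sessionEnds) {
                if (sessionSize >= rootCanalThreshold) rootCanal += sessionSize;
                else floss += sessionSize;
                sessionSize = 1;
            } else {
                sessionSize++;
            }
        }
        System.out.println("floss=" + floss + " rootCanal=" + rootCanal);
    }
}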


Cai P.,East China Normal University | Gao W.,Chinese University of Hong Kong | Zhou A.,East China Normal University | Wong K.-F.,Chinese University of Hong Kong | Wong K.-F.,Key Laboratory of High Confidence Software Technologies
SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval | Year: 2011

Learning to adapt in a new setting is a common challenge to our knowledge and capability. New life would be easier if we actively pursued supervision from the right mentor, chosen with our relevant but limited prior knowledge. This variant principle of active learning seems intuitively useful to many domain adaptation problems. In this paper, we substantiate its power for advancing automatic ranking adaptation, which is important in web search since it is prohibitive to gather enough labeled data in every search domain to fully train domain-specific rankers. For cost-effectiveness, only the most informative instances in the target domain should be collected for annotation, while the abundant ranking knowledge in the source domain is still utilized. We propose a unified ranking framework to mutually reinforce the active selection of informative target-domain queries and the appropriate weighting of source training data as related prior knowledge. We select for annotation those target queries on whose document order the members of a committee, built on the mixture of source training data and the already selected target data, most disagree. The replenished labeled set is then used to adjust the importance of source queries for enhancing their rank transfer. This procedure iterates until the labeling budget is exhausted. Based on the LETOR3.0 and Yahoo! Learning to Rank Challenge data sets, our approach significantly outperforms the random query annotation commonly used in ranking adaptation and the active rank learner trained on target-domain data only.
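
The committee-based selection can be illustrated with a toy disagreement measure (an assumption for this sketch; the paper defines its own): each committee ranker orders a query's documents, pairwise ordering disagreements are counted, and the query with the highest disagreement is chosen for annotation. Queries, documents, and rankings below are invented.

import java.util.*;

// Toy query-by-committee selection: for each candidate target-domain query, count
// pairwise ordering disagreements between committee members' rankings of its
// documents, and pick the query the committee disagrees on most. The paper's
// disagreement measure and committee construction are more involved.
class CommitteeSelection {
    // Each inner list is one committee member's ranking of document ids for a query.
    static int disagreement(List<List<Integer>> rankings) {
        int docs = rankings.get(0).size(), count = 0;
        for (int a = 0; a < rankings.size(); a++)
            for (int b = a + 1; b < rankings.size(); b++)
                for (int i = 0; i < docs; i++)
                    for (int j = i + 1; j < docs; j++) {
                        boolean orderA = rankings.get(a).indexOf(i) < rankings.get(a).indexOf(j);
                        boolean orderB = rankings.get(b).indexOf(i) < rankings.get(b).indexOf(j);
                        if (orderA != orderB) count++;   // pair (i, j) ordered differently
                    }
        return count;
    }

    public static void main(String[] args) {
        Map<String, List<List<Integer>>> queries = Map.of(
                "q1", List.of(List.of(0, 1, 2), List.of(0, 1, 2)),   // committee agrees
                "q2", List.of(List.of(0, 1, 2), List.of(2, 1, 0)));  // committee disagrees
        String pick = Collections.max(queries.keySet(),
                Comparator.comparingInt((String q) -> disagreement(queries.get(q))));
        System.out.println("annotate " + pick);   // q2
    }
}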


Li X.,Key Laboratory of High Confidence Software Technologies | Liu T.,Key Laboratory of High Confidence Software Technologies
Theoretical Computer Science | Year: 2010

We prove an Ω(2^{0.69n}/√n) time lower bound for the Knapsack problem under the adaptive priority branching trees (pBT) model. The pBT model is a formal model of algorithms covering backtracking and dynamic programming [M. Alekhnovich, A. Borodin, A. Magen, J. Buresh-Oppenheim, R. Impagliazzo, T. Pitassi, Toward a model for backtracking and dynamic programming, ECCC TR09-038, 2009. Earlier version in Proc 20th IEEE Computational Complexity, 2005, pp. 308-322]. Our result improves the Ω(2^{0.5n}/√n) lower bound of Alekhnovich et al. and the Ω(2^{0.66n}/√n) lower bound of Li et al. [X. Li, T. Liu, H. Peng, L. Qian, H. Sun, J. Xu, K. Xu, J. Zhu, Improved exponential time lower bound of Knapsack problem under BT model, in: Proc 4th TAMC 2007, in: LNCS, vol. 4484, 2007, pp. 624-631] through optimized arguments. © 2010 Elsevier B.V. All rights reserved.
