Cai P.,East China Normal University | Gao W.,Chinese University of Hong Kong | Zhou A.,East China Normal University | Wong K.-F.,Chinese University of Hong Kong | Wong K.-F.,Key Laboratory of High Confidence Software Technologies
SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval | Year: 2011

Learning to adapt to a new setting is a common challenge to our knowledge and capability. New life would be easier if we actively pursued supervision from the right mentor, chosen with our relevant but limited prior knowledge. This variant of the active learning principle seems intuitively useful for many domain adaptation problems. In this paper, we substantiate its power for advancing automatic ranking adaptation, which is important in web search since it is prohibitive to gather enough labeled data for every search domain to fully train domain-specific rankers. For cost-effectiveness, only the most informative instances in the target domain should be collected for annotation, while we still utilize the abundant ranking knowledge in the source domain. We propose a unified ranking framework that mutually reinforces the active selection of informative target-domain queries and the appropriate weighting of source training data as related prior knowledge. We select for annotation those target queries on whose documents' order the members of a committee, built on the mixture of source training data and the already selected target data, most disagree. The replenished labeled set is then used to adjust the importance of source queries to enhance their rank transfer. This procedure iterates until the labeling budget is exhausted. On the LETOR 3.0 and Yahoo! Learning to Rank Challenge data sets, our approach significantly outperforms both the random query annotation commonly used in ranking adaptation and an active rank learner trained on target-domain data only.
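The committee-based selection step can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the disagreement measure (a Kendall-style count of pairwise ordering flips between committee members' rankings) and all names are assumptions.

```python
from itertools import combinations

def disagreement(rankings):
    """Average fraction of document pairs that two committee members
    order differently. `rankings` is a list of permutations of document
    ids (best first), one per committee member."""
    n_docs = len(rankings[0])
    doc_pairs = list(combinations(range(n_docs), 2))
    member_pairs = list(combinations(rankings, 2))
    flips = sum(
        (r1.index(i) < r1.index(j)) != (r2.index(i) < r2.index(j))
        for r1, r2 in member_pairs
        for i, j in doc_pairs
    )
    return flips / (len(member_pairs) * len(doc_pairs))

def select_query(candidate_queries):
    """Pick the unlabeled target query whose documents' order the
    committee most disagrees on; that query is sent for annotation."""
    return max(candidate_queries, key=lambda q: disagreement(q["rankings"]))

# Hypothetical candidate queries, each with the committee's rankings
# of that query's documents.
queries = [
    {"id": "q1", "rankings": [[0, 1, 2], [0, 1, 2], [0, 2, 1]]},
    {"id": "q2", "rankings": [[0, 1, 2], [2, 1, 0], [1, 0, 2]]},
]
print(select_query(queries)["id"])  # q2: the committee disagrees most here
```

In the full framework this selection would alternate with re-weighting the source queries after each round of annotation, until the budget runs out.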


Ye W.,Peking University | Luo R.,Peking University | Zhang S.,Peking University | Zhang S.,Key Laboratory of High Confidence Software Technologies
Proceedings of International Conference on Service Science, ICSS | Year: 2013

Under Web 2.0 conditions, where applications are usually constructed 'on the fly' for transient or personal needs by integrating existing services and information, the complexity and overhead of Web service models and service composition become a problem. As an emerging Web 2.0 technology, mashup enables end users to integrate separate Web sources to create innovative applications. However, today's research on mashup primarily focuses on data integration and presentation integration rather than on business processes. This paper proposes a process mashup model based on complex event processing, which leverages the event-driven publish/subscribe communication paradigm to orchestrate browser-side components. The model also provides a definition language for composite events that helps end users define process behavior, shielding them from traditional process constructs such as conditional branching and looping and allowing them to create articulated business processes in a lightweight fashion. © 2013 IEEE.
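A minimal sketch of the event-driven publish/subscribe idea with a composite event that fires once all of its constituent events have been observed. The class names, topic names, and "all events seen" composition rule are hypothetical, not taken from the paper's definition language.

```python
class EventBus:
    """Minimal publish/subscribe bus, standing in for the messaging
    layer that would connect browser-side components."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload=None):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

class CompositeEvent:
    """Fires `on_complete` once every required event has been published,
    so the end user never writes explicit branching or looping."""
    def __init__(self, bus, required_topics, on_complete):
        self._pending = set(required_topics)
        self._on_complete = on_complete
        self._fired = False
        for topic in required_topics:
            bus.subscribe(topic, lambda _payload, t=topic: self._seen(t))

    def _seen(self, topic):
        self._pending.discard(topic)
        if not self._pending and not self._fired:
            self._fired = True
            self._on_complete()

bus = EventBus()
fired = []
CompositeEvent(bus, ["order.placed", "payment.confirmed"],
               lambda: fired.append("ship"))
bus.publish("order.placed")
bus.publish("payment.confirmed")
print(fired)  # ['ship'] — the composite fires only after both events
```

A real composite-event language would also cover sequencing, timing windows, and negation; the point here is only that process behavior is expressed declaratively over events rather than as imperative control flow.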


Wang Z.,Peking University | Wang Z.,Key Laboratory of High Confidence Software Technologies
Advances in Intelligent and Soft Computing | Year: 2012

Addressing the main problems and difficulties that non-computer-science students may face when learning a data structures and algorithms course, this paper, based on years of teaching experience, discusses several teaching ideas: using algorithm design methods as threads for introducing the various types of algorithms, organizing course content into modules by logical data structure, and using real-life examples to improve students' interest, engagement, and ability to apply what they learn. © 2012 Springer-Verlag GmbH.


Song H.,Key Laboratory of High Confidence Software Technologies | Huang G.,Key Laboratory of High Confidence Software Technologies | Chauvel F.,Key Laboratory of High Confidence Software Technologies | Xiong Y.,University of Waterloo | And 3 more authors.
Journal of Systems and Software | Year: 2011

Runtime software architectures (RSA) are architecture-level, dynamic representations of running software systems, which help monitor and adapt those systems at a high level of abstraction. The key issue in supporting RSA is maintaining the causal connection between the architecture and the system: the architecture must represent the current system, and modifications to the architecture must cause the proper system changes. The main challenge here is the abstraction gap between the architecture and the system. In this paper, we investigate the synchronization mechanism between architecture configurations and system states for maintaining this causal connection. We identify four required properties for such synchronization and provide a generic solution satisfying them. Specifically, we utilize bidirectional transformation to bridge the abstraction gap between architecture and system, and design an algorithm based on it that addresses issues such as conflicts between architecture and system changes and exceptions during system manipulation. We provide a generative tool set that helps developers implement this approach on a wide class of systems. We have successfully applied the approach to the JOnAS JEE system, supporting it with a C2-styled runtime software architecture, as well as to other cases involving practical systems and typical architecture models. © 2010 Elsevier Inc. All rights reserved.
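The bidirectional-transformation idea can be illustrated with a toy get/put pair: `get` abstracts low-level system state into an architecture view, and `put` writes architecture-level edits back down. The state shapes and field names here are hypothetical, not the paper's API; the round-trip check at the end corresponds to the kind of consistency property such a synchronizer must preserve.

```python
def get(system):
    """Abstract low-level system state into an architecture model
    (here: component name -> running flag), hiding platform detail
    such as process ids."""
    return {name: cfg["process"]["alive"] for name, cfg in system.items()}

def put(system, architecture):
    """Propagate architecture-level edits back to the system, touching
    only the fields the architecture view controls."""
    for name, running in architecture.items():
        system[name]["process"]["alive"] = running
    return system

# Hypothetical running system with platform-level detail.
system = {
    "web": {"process": {"alive": True, "pid": 101}},
    "db":  {"process": {"alive": True, "pid": 102}},
}
arch = get(system)       # architecture reflects the current system
arch["db"] = False       # architect stops the db component in the model
put(system, arch)        # the edit is pushed down to the system
# Round-trip: after put, get must reproduce the edited architecture.
assert get(system) == arch
print(system["db"]["process"]["alive"])  # False
```

The hard parts the paper addresses, detecting conflicts when architecture and system change concurrently and recovering when a system manipulation fails, sit on top of this basic get/put synchronization loop.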


Mei H.,Peking University | Mei H.,Key Laboratory of High Confidence Software Technologies | Hao D.,Peking University | Hao D.,Key Laboratory of High Confidence Software Technologies | And 6 more authors.
IEEE Transactions on Software Engineering | Year: 2012

Test case prioritization is used in regression testing to schedule the execution order of test cases so as to expose faults earlier in testing. Over the past few years, many test case prioritization techniques have been proposed in the literature. Most of these techniques require data on dynamic execution in the form of code coverage information for test cases. However, collecting dynamic code coverage information has several drawbacks, including increased cost and reduced prioritization precision. In this paper, we propose an approach to prioritizing test cases in the absence of coverage information that operates on Java programs tested under the JUnit framework, an increasingly popular class of systems. Our approach, JUnit test case Prioritization Techniques operating in the Absence of coverage information (JUPTA), analyzes the static call graphs of JUnit test cases and the program under test to estimate the ability of each test case to achieve code coverage, and then schedules the test cases based on those estimates. To evaluate the effectiveness of JUPTA, we conducted an empirical study on 19 versions of four Java programs ranging from 2K to 80K lines of code, comparing several variants of JUPTA with three control techniques and several existing dynamic coverage-based test case prioritization techniques, and assessing each technique's ability to increase the rate of fault detection of test suites. Our results show that the test suites constructed by JUPTA are more effective than random and untreated test orders in terms of fault-detection effectiveness. Although dynamic coverage-based techniques retain an advantage in fault-detection effectiveness, the effectiveness of the test suites constructed by JUPTA is close to that of those techniques, and some of JUPTA's variants outperform several of them. © 2012 IEEE.
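JUPTA's core heuristic, estimating each test's coverage ability from a static call graph instead of dynamic coverage, can be sketched roughly as follows. The call graph, method names, and the greedy "order by total reachable methods" strategy are illustrative assumptions; the paper's variants refine this idea.

```python
def reachable(call_graph, entry):
    """Methods transitively reachable from `entry` in the static
    call graph (a dict: caller -> list of callees)."""
    seen, stack = set(), [entry]
    while stack:
        method = stack.pop()
        if method not in seen:
            seen.add(method)
            stack.extend(call_graph.get(method, []))
    return seen

def prioritize(tests, call_graph):
    """Order tests by how many methods their static call graphs can
    reach, as a cheap proxy for the code coverage each would achieve."""
    return sorted(tests,
                  key=lambda t: len(reachable(call_graph, t)),
                  reverse=True)

# Hypothetical static call graph for two JUnit-style tests.
call_graph = {
    "testCheckout":      ["Cart.add", "Cart.total"],
    "testAdd":           ["Cart.add"],
    "Cart.add":          ["Inventory.reserve"],
    "Cart.total":        [],
    "Inventory.reserve": [],
}
order = prioritize(["testAdd", "testCheckout"], call_graph)
print(order)  # ['testCheckout', 'testAdd'] — testCheckout reaches more methods
```

No instrumentation or prior test run is needed, which is exactly the situation the paper targets; the price is that static reachability over-approximates what a test actually executes.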
