Wang X.,Chinese Academy of Sciences | Wang X.,CAS Institute of Computing Technology | Wang X.,National Engineering Laboratory for Information Security Technologies | Shi J.,Chinese Academy of Sciences | And 3 more authors.
Procedia Computer Science | Year: 2013

On the Internet, service accessibility is directly related to the reachability of the entry points to the service infrastructure. Distributing these entry points poses a central challenge: how to disseminate them widely while preventing an adversary from enumerating them. This paper presents a model of the distribution problem under enumeration attacks. Based on a probabilistic analysis of the model, several factors that influence service accessibility are identified. To improve accessibility, we propose dynamic distribution, in which the distribution parameter (the number of resources assigned to each user) is adjusted dynamically according to the confrontation situation. Simulation experiments validate the effectiveness of this idea in improving accessibility. © 2013 The Authors. Published by Elsevier B.V.
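The abstract describes the scheme only at a high level; the following Python sketch is a rough, illustrative simulation of dynamic distribution under an enumeration attack. The parameter values, the exposure metric, and the adjustment rule for k are all assumptions for illustration, not the authors' model.

```python
# Illustrative sketch (not the authors' model): simulating dynamic adjustment of
# the distribution parameter k (entry points handed to each user) while an
# adversary registers accounts to enumerate entry points.
import random

def simulate(total_points=1000, users=500, attacker_accounts=50,
             k_init=5, rounds=20, seed=0):
    rng = random.Random(seed)
    points = list(range(total_points))
    k = k_init
    for r in range(rounds):
        enumerated = set()
        # Attacker-controlled accounts each receive k entry points this round.
        for _ in range(attacker_accounts):
            enumerated.update(rng.sample(points, k))
        exposure = len(enumerated) / total_points
        # A legitimate user stays served if at least one of its k points survives.
        served = sum(
            any(p not in enumerated for p in rng.sample(points, k))
            for _ in range(users)
        )
        print(f"round {r:2d}: k={k:2d} exposure={exposure:.2f} "
              f"accessibility={served / users:.2f}")
        # Assumed adjustment rule: hand out fewer points per user when exposure
        # is high, more when it is low.
        k = max(1, k - 1) if exposure > 0.5 else min(20, k + 1)

if __name__ == "__main__":
    simulate()
```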


Liu T.,CAS Institute of Computing Technology | Liu T.,University of Chinese Academy of Sciences | Yang Y.,CAS Institute of Computing Technology | Liu Y.,CAS Institute of Computing Technology | And 5 more authors.
Proceedings - IEEE INFOCOM | Year: 2011

Deep packet inspection plays an increasingly important role in network security devices and applications, which increasingly use regular expressions to describe patterns. The DFA engine is the classical representation used to perform regular expression matching, because it needs only O(1) time to process each input character. However, DFAs built from regular expression sets require a large amount of memory, which limits the practical use of regular expressions in high-speed networks. Several compression algorithms have been proposed in the recent literature to address this issue. In this paper, we reconsider the problem from a new perspective: we examine the distribution of transitions inside each state, whereas previous algorithms examine transition characteristics among states. We then introduce a new compression algorithm that stably reduces DFA memory usage by 95% without a significant impact on matching speed. Moreover, our work is orthogonal to previous compression algorithms such as D2FA and δFA. Our experimental results show that applying our technique to them yields a further several-fold memory reduction and, in a software implementation, matching speeds up to dozens of times faster than the original δFA. © 2011 IEEE.
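To make the intra-state observation concrete, here is a minimal Python sketch that compresses each DFA state by storing its dominant next state as a default plus a small exception map. This illustrates the general idea of exploiting the transition distribution inside a state; it is not the paper's exact algorithm, and the data structures are assumptions.

```python
# Hedged sketch: per-state compression of a DFA transition table, exploiting the
# fact that many characters in one state often lead to the same next state.
from collections import Counter

def compress(dfa):
    """dfa: list of 256-entry lists, dfa[s][c] = next state."""
    compact = []
    for row in dfa:
        default = Counter(row).most_common(1)[0][0]        # dominant next state
        exceptions = {c: t for c, t in enumerate(row) if t != default}
        compact.append((default, exceptions))
    return compact

def step(compact, state, char):
    default, exceptions = compact[state]
    return exceptions.get(char, default)

# Toy DFA with 3 states over a 256-symbol alphabet.
dfa = [[0] * 256 for _ in range(3)]
dfa[0][ord('a')] = 1
dfa[1][ord('b')] = 2
compact = compress(dfa)
state = 0
for ch in b"ab":
    state = step(compact, state, ch)
print(state)  # -> 2
```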


Liu Y.,CAS Institute of Computing Technology | Liu Y.,University of Chinese Academy of Sciences | Liu Y.,National Engineering Laboratory for Information Security Technologies | Guo L.,CAS Institute of Computing Technology | And 5 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Regular expression matching has recently become a research focus, driven by the urgent demand for Deep Packet Inspection (DPI) in many network security systems. The Deterministic Finite Automaton (DFA), which recognizes a set of regular expressions, is usually adopted to meet the need for real-time processing of network traffic. However, the huge memory usage of a DFA prevents it from being applied to even a medium-sized pattern set. In this article, we propose a matrix decomposition method for DFA table compression. The basic idea is to decompose a DFA table into the sum of a row vector, a column vector and a sparse matrix, all of which occupy very little space. Experiments on typical rule sets show that the proposed method significantly reduces memory usage while still running at high search speed. © 2011 Springer-Verlag Berlin Heidelberg.
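The decomposition D[s, c] = row[s] + col[c] + sparse[s, c] can be demonstrated in a few lines of Python. The particular choice of row minima and per-column modes below is an illustrative assumption, not necessarily the construction used in the paper.

```python
# Sketch of the row-vector + column-vector + sparse-matrix decomposition idea.
import numpy as np

def decompose(D):
    """Split D so that D[s, c] == row[s] + col[c] + sparse[s, c]."""
    row = D.min(axis=1)                                      # one value per state
    R = D - row[:, None]
    col = np.array([np.bincount(c).argmax() for c in R.T])   # per-column mode
    sparse = R - col[None, :]                                # mostly zero when the model fits
    return row, col, sparse

def lookup(row, col, sparse, s, c):
    return row[s] + col[c] + sparse[s, c]

# Toy DFA-like table: regular structure plus a few "exception" transitions.
states, alphabet = 8, 256
rng = np.random.default_rng(0)
base_row = rng.integers(0, states, size=states)
base_col = rng.integers(0, states, size=alphabet)
D = base_row[:, None] + base_col[None, :]
D[2, 97] += 3
D[5, 10] += 1

row, col, sparse = decompose(D)
assert all(lookup(row, col, sparse, s, c) == D[s, c]
           for s in range(states) for c in range(alphabet))
print("explicit residual entries:", np.count_nonzero(sparse), "of", D.size)
```

Only the non-zero residual entries need explicit storage, which is where the memory saving comes from.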


Liu T.,CAS Institute of Computing Technology | Liu T.,University of Chinese Academy of Sciences | Sun Y.,National Engineering Laboratory for Information Security Technologies | Liu A.X.,Michigan State University | And 3 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Regular expression (RegEx) matching is widely used in networking and security applications. Despite much effort, it remains a fundamentally difficult problem: DFA-based solutions achieve high throughput but require too much memory to fit in high-speed SRAM, while NFA-based solutions require little memory but are too slow. In this paper, we propose RegexFilter, a prefiltering approach. The basic idea is to generate a RegEx print of the RegEx set and use it to filter out most unmatched items. There are two key technical challenges: generating the RegEx print and matching against it. Generating the RegEx print is tricky because we need to trade off two conflicting goals: filtering effectiveness, meaning that the print should filter out as many unmatched items as possible, and matching speed, meaning that the print should be matched as quickly as possible. To address the first challenge, we propose measures of RegEx complexity and filtering effectiveness and use them to guide the generation of the RegEx print. To address the second challenge, we propose a fast RegEx print matching solution using Ternary Content Addressable Memory (TCAM). We implemented our approach and conducted experiments on real-world data sets. The results show that RegexFilter can speed up the potential throughput of RegEx matching by 21.5 times and 20.3 times for the RegEx sets of the Snort and L7-Filter systems, respectively, at the cost of less than 0.2 Mb of TCAM. © 2012 Springer-Verlag.
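The prefiltering idea can be sketched in software as follows. Here the "print" of each RegEx is approximated by a required literal fragment, which is an assumption for illustration rather than the paper's RegEx-print construction, and the TCAM lookup is replaced by a plain substring check.

```python
# Hedged sketch: cheap prefilter first, full RegEx only on candidates that pass.
import re

patterns = [r"GET /.*\.php\?id=\d+", r"User-Agent: sqlmap.*", r"\x00\x01admin\x00"]
compiled = [re.compile(p.encode()) for p in patterns]
# Assumed "print": a literal substring that any match of the pattern must contain.
prints = [b".php?id=", b"sqlmap", b"admin"]

def match(payload: bytes):
    hits = []
    for prnt, rx in zip(prints, compiled):
        if prnt in payload:            # cheap prefilter pass
            if rx.search(payload):     # expensive RegEx only on candidates
                hits.append(rx.pattern)
    return hits

print(match(b"GET /index.php?id=42 HTTP/1.1"))
print(match(b"harmless traffic"))      # filtered out without any RegEx work
```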


Liu T.,CAS Institute of Computing Technology | Liu T.,University of Chinese Academy of Sciences | Liu T.,National Engineering Laboratory for Information Security Technologies | Sun Y.,CAS Institute of Computing Technology | And 3 more authors.
Proceedings - 2010 IEEE International Conference on Networking, Architecture and Storage, NAS 2010 | Year: 2010

Traffic classification is important to many network applications, such as network monitoring. The classic way of identifying flows, e.g., examining port numbers in packet headers, has become ineffective. In this context, deep packet inspection, which examines packet payloads as well as packet headers, plays an increasingly important role in traffic classification. Meanwhile, regular expressions are replacing plain strings for representing patterns because of their expressive power, simplicity and flexibility. However, regular expression matching incurs high memory usage and processing cost, which results in low throughput. In this paper, we analyze the application-level protocol distribution of network traffic and summarize its characteristics. We then design a fast and memory-efficient traffic classification system with a two-layer architecture based on regular expressions for multi-core platforms, in contrast to previous one-layer architectures. To reduce DFA memory usage, we use a compression algorithm called CSCA to perform regular expression matching, which reduces DFA memory usage by 95%. We also introduce several optimizations to accelerate matching. We conduct experiments with real-world traffic and the full set of L7-filter protocol patterns, and the results show that the system achieves Gbps-level throughput on a 4-core server. © 2010 IEEE.
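As a rough illustration of a two-layer pipeline on a multi-core machine, the sketch below runs a cheap first-layer screen before the full RegEx layer and fans flows out across worker processes. The screening rule, the pattern set and the process-pool layout are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of a two-layer classification pipeline on multiple cores.
import re
from multiprocessing import Pool

LAYER2_PATTERNS = {
    "http": re.compile(rb"^(GET|POST|HEAD) [^ ]+ HTTP/1\.[01]"),
    "ssh":  re.compile(rb"^SSH-\d\.\d"),
}

def layer1(payload: bytes) -> bool:
    # Cheap screen: only ASCII-looking payloads go to the expensive RegEx layer.
    return bool(payload) and all(32 <= b < 127 or b in (10, 13) for b in payload[:16])

def layer2(payload: bytes) -> str:
    for proto, rx in LAYER2_PATTERNS.items():
        if rx.match(payload):
            return proto
    return "unknown"

def classify(payload: bytes) -> str:
    return layer2(payload) if layer1(payload) else "binary/other"

if __name__ == "__main__":
    flows = [b"GET /index.html HTTP/1.1\r\n", b"SSH-2.0-OpenSSH_8.9", b"\x00\x01\x02"]
    with Pool(4) as pool:                      # 4 workers ~ a 4-core server
        print(pool.map(classify, flows))
```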
