Cui X.,Jilin University | Cui X.,Key Laboratory of Computation and Knowledge Engineering | Ouyang D.,Jilin University | Ouyang D.,Key Laboratory of Computation and Knowledge Engineering | And 4 more authors.
Journal of Computational Information Systems | Year: 2011

Most existing ontology stores use a relational database as the backend for managing RDF data. This motivates translating SPARQL queries, the proposed standard for querying RDF, into equivalent SQL queries. Building on an IC-based storage scheme, which stores the ontology together with its integrity constraints, we present a corresponding SPARQL-to-SQL translation called the IC-based translation. According to the different table structures in the relational database, SPARQL query statements are divided into four types, and a translation is given for each type. Copyright © 2011 Binary Information Press.
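The core idea of SPARQL-to-SQL translation can be illustrated with a minimal sketch. This is not the paper's IC-based translation; it is a toy that handles one triple pattern over an assumed generic `triples(subject, predicate, object)` table, which is one common relational layout for RDF:

```python
def triple_pattern_to_sql(subj, pred, obj, table="triples"):
    """Translate one SPARQL triple pattern into SQL over a generic
    (subject, predicate, object) triple-store table.  Terms starting
    with '?' are variables (projected); all others become constants
    in the WHERE clause."""
    cols = {"subject": subj, "predicate": pred, "object": obj}
    select = [c for c, t in cols.items() if t.startswith("?")]
    where = [f"{c} = '{t}'" for c, t in cols.items() if not t.startswith("?")]
    sql = "SELECT " + ", ".join(select) + f" FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql
```

For example, the pattern `?x rdf:type Person` becomes `SELECT subject FROM triples WHERE predicate = 'rdf:type' AND object = 'Person'`. A full translator would additionally join one such subquery per pattern in the basic graph pattern.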


Wang Y.,Jilin University | Wang Y.,Key Laboratory of Computation and Knowledge Engineering | Li H.,Jilin University | Zuo W.,Jilin University | And 4 more authors.
Advanced Science Letters | Year: 2012

A significant amount of information is accessible only through query interfaces, so integrating these interfaces into a global interface that allows uniform access to disparate relevant sources plays an important role. In this paper, a novel ontology-based approach to integrating Deep Web query interfaces is proposed, comprising three steps: schema extraction, schema matching, and schema merging. First, a schema extraction algorithm based on visual features grasps and analyzes the labels and control constraints of semantic attributes. Then, the semantic relationships between attributes of different query interfaces are captured by exploiting the "bridging" effect of the ontology during attribute matching. Lastly, the source query interfaces are merged into a unified schema based on the mapping relationships identified during schema matching. A detailed experimental evaluation shows that the ontology-assisted approach to interface integration is feasible and effective. © 2012 American Scientific Publishers. All Rights Reserved.
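The "bridging" effect in attribute matching can be sketched in a few lines. This is an assumption-laden toy, not the paper's algorithm: the `SYNONYMS` map stands in for the ontology, and similarity is plain Jaccard overlap of synonym-expanded label tokens:

```python
# Toy ontology fragment: the real system derives such links from a domain ontology.
SYNONYMS = {"author": {"writer"}, "price": {"cost"}}

def normalize(label):
    """Tokenize a label and expand each token with its ontology synonyms,
    so 'Author' can bridge to 'Writer' even without a direct string match."""
    tokens = set(label.lower().replace("-", " ").split())
    expanded = set(tokens)
    for t in tokens:
        expanded |= SYNONYMS.get(t, set())
    return expanded

def match_attributes(schema_a, schema_b, threshold=0.4):
    """Greedy 1:1 matching by Jaccard similarity of expanded token sets."""
    matches = []
    for a in schema_a:
        best, best_sim = None, threshold
        for b in schema_b:
            ta, tb = normalize(a), normalize(b)
            sim = len(ta & tb) / len(ta | tb)
            if sim > best_sim:
                best, best_sim = b, sim
        if best:
            matches.append((a, best))
    return matches
```

Here `match_attributes(["Author", "Price"], ["Writer", "Cost"])` pairs the semantically equivalent fields even though no label strings overlap, which is exactly the bridging behavior the abstract describes.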


Wang Y.,Jilin University | Wang Y.,Key Laboratory of Computation and Knowledge Engineering | Zuo W.,Jilin University | Zuo W.,Key Laboratory of Computation and Knowledge Engineering | And 5 more authors.
Communications in Computer and Information Science | Year: 2011

The Deep Web contains a significant amount of frequently visited information; to effectively guide users to the appropriate searchable web databases, it must be organized by domain. Since ontology plays an important role in locating Deep Web content, this paper proposes a new ontology-based Deep Web database selection framework. First, a domain ontology content model (DOCM) is constructed; then an ontology-assisted similarity algorithm is designed, which adds semantic information when forming feature vectors; lastly, the databases with mapping relationships are selected as domain-specific databases. Experiments show that the method can effectively select Deep Web databases. © 2011 Springer-Verlag Berlin Heidelberg.


Pan Z.,Jilin University | Pan Z.,Key Laboratory of Computation and Knowledge Engineering | Ouyang D.,Jilin University | Ouyang D.,Key Laboratory of Computation and Knowledge Engineering
Journal of Computational Information Systems | Year: 2014

In this letter, we present a novel Object Shrunken (OS) algorithm for the image classification task. Unlike prior art, this letter considers the foreground, i.e., the object's location in the image, to obtain a more discriminative image-level representation. The OS algorithm provides a straightforward procedure for boxing the object location: it first applies a Weighted Local Outlier Factor (WLOF) to remove interest-point outliers, then localizes the object according to the distribution of the remaining interest points. We evaluate the proposed algorithm on the well-known Caltech-101 dataset, where the OS algorithm outperforms state-of-the-art approaches in image classification. Copyright © 2014 Binary Information Press.
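The two-step pipeline (outlier removal, then boxing the surviving interest points) can be sketched with a simplified density test in place of the paper's WLOF, whose exact weighting is not given in the abstract. Points whose mean k-nearest-neighbor distance far exceeds the median are treated as outliers:

```python
import math

def knn_outlier_filter(points, k=3, factor=2.0):
    """Simplified stand-in for the WLOF step: discard points whose mean
    k-NN distance exceeds `factor` times the median such distance.
    A real WLOF would weight each neighbor's local reachability density."""
    def mean_knn(p):
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        return sum(dists[:k]) / k
    scores = [mean_knn(p) for p in points]
    median = sorted(scores)[len(scores) // 2]
    return [p for p, s in zip(points, scores) if s <= factor * median]

def bounding_box(points):
    """Box the object location from the surviving interest points."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))
```

On a tight cluster plus one far-away spurious detection, the filter drops the stray point and the box shrinks to the cluster, which is the behavior the OS procedure relies on.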


Cui X.,Jilin University | Cui X.,Key Laboratory of Computation and Knowledge Engineering | Ouyang D.,Jilin University | Ouyang D.,Key Laboratory of Computation and Knowledge Engineering | And 5 more authors.
Journal of Computational Information Systems | Year: 2012

OWL ontologies may change continually to meet users' dynamic requests, and such changes may damage the integrity of the ontology. It is therefore urgent to propose an effective strategy for maintaining the integrity of a continually changing ontology. In this paper, we propose a framework for integrity maintenance of continually changing OWL ontologies. Based on the connectedness between ontology data and integrity constraints, we define the maximal relevant subontology of the changed ontology and perform integrity checking on this subontology instead of on the entire ontology. Further, for the different change operations, integrity checking is realized by finding the corresponding maximal relevant subontologies. Finally, for ontology data that violates integrity, the ontology axioms are modified to obtain an ontology that satisfies all the integrity constraints, thus maintaining the integrity of the continually changing ontology. © 2012 Binary Information Press.
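The idea of checking only a relevant fragment can be sketched as a reachability computation. This is a guess at the flavor of "connectedness", not the paper's definition: axioms are modeled as (id, set-of-entities) pairs, and the fragment is grown transitively from the changed entities:

```python
def relevant_subontology(changed_entities, axioms):
    """Collect the axioms reachable from the changed entities via shared
    entities, so integrity checking can run on this fragment instead of
    the whole ontology.  axioms: list of (axiom_id, set_of_entities)."""
    frontier = set(changed_entities)
    relevant, seen = [], set()
    grew = True
    while grew:                       # fixpoint: stop when nothing new is pulled in
        grew = False
        for aid, ents in axioms:
            if aid not in seen and ents & frontier:
                seen.add(aid)
                relevant.append(aid)
                frontier |= ents      # entities of a relevant axiom become relevant too
                grew = True
    return relevant
```

Changing entity `A` pulls in an axiom over `{A, B}`, which in turn pulls in one over `{B, C}`, while an unconnected axiom over `{D}` is never checked.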


Chen K.,Jilin University | Chen K.,Key Laboratory of Computation and Knowledge Engineering | Zuo W.,Jilin University | Zuo W.,Key Laboratory of Computation and Knowledge Engineering | And 3 more authors.
2010 2nd Conference on Environmental Science and Information Application Technology, ESIAT 2010 | Year: 2010

Because most data contained in the Deep Web is structured or semi-structured, it is considerably easier to extract it and to construct an ontology from it. The constructed ontology has multiple applications, such as data integration and data extraction assistance. During ontology construction, attribute matching is a significantly important step, and the quality of its result directly affects the ontology's quality. The method proposed in this paper works well at identifying and solving both simple and complex matching cases. Experimental results show that the ontology constructed under the guidance of this method is of high quality, as well as effective. © 2010 IEEE.


Zuo W.,Jilin University | Zuo W.,Key Laboratory of Computation and Knowledge Engineering | Wang Y.,Jilin University | Wang Y.,Key Laboratory of Computation and Knowledge Engineering | And 5 more authors.
Journal of Computational Information Systems | Year: 2010

The Deep Web contains a significant amount of frequently visited information; to make full use of it, we need to organize it by domain. It is therefore imperative that Deep Web databases be classified by domain automatically. In this paper, a new Deep Web database classification framework is proposed. It adds semantic information to the feature vectors and the centroid vector by extracting the synsets of terms from WordNet, and replaces the terms in both with their corresponding synsets, thereby reducing the dimensionality of the vectors. Lastly, the semantic feature vectors are emphasized using the semantic centroid vector, and the resulting vectors are classified with a classification algorithm. Experiments show that experiment 3, which combines experiments 1 and 2, can effectively improve the classification accuracy of Deep Web databases. Copyright © 2010 Binary Information Press.
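The synset-replacement step can be sketched as follows. The `SYNSET` map here is a toy stand-in for WordNet lookups (with NLTK one would query `wordnet.synsets(term)`); the point is that synonymous terms collapse onto one vector dimension:

```python
from collections import Counter

# Toy synset map standing in for WordNet: both surface forms map to one synset id.
SYNSET = {"car": "auto.n.01", "automobile": "auto.n.01",
          "price": "cost.n.01", "cost": "cost.n.01"}

def to_synset_vector(terms):
    """Build a term-frequency vector keyed by synset id instead of raw term,
    falling back to the term itself when no synset is known.  Synonyms
    merge into one dimension, reducing the vector's dimensionality."""
    return Counter(SYNSET.get(t, t) for t in terms)
```

The three raw terms `["car", "automobile", "price"]` yield a two-dimensional vector, since `car` and `automobile` share a synset, which is exactly the dimensionality reduction the framework exploits.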


Wang Y.,Jilin University | Wang Y.,Key Laboratory of Computation and Knowledge Engineering | Zuo W.,Jilin University | Zuo W.,Key Laboratory of Computation and Knowledge Engineering | And 4 more authors.
Journal of Information and Computational Science | Year: 2010

This paper proposes a method for semi-automatic ontology construction based on text learning. First, a core ontology is constructed with an ontology tool; then domain-related terms are extracted by both a text classification method and a keyword extraction method to produce a candidate set of ontology concepts. Lastly, the semantic relationships between the domain terms and the ontology concepts are judged by a phrase similarity method, and the matching concepts are added to the core ontology to complete the ontology expansion automatically. Experimental results show that this method is feasible: it not only improves construction efficiency but also ensures ontology quality. Copyright © 2010 Binary Information Press.
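The expansion step can be sketched with one concrete phrase-similarity choice. The abstract does not specify the measure, so this toy uses character-bigram Dice similarity, a common string-level option, and attaches a candidate term when it is close enough to some existing concept:

```python
def bigrams(s):
    """Character bigrams of a lowercased phrase."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a, b):
    """Dice coefficient over character bigrams, in [0, 1]."""
    ba, bb = bigrams(a), bigrams(b)
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def expand_ontology(core_concepts, candidates, threshold=0.6):
    """Add each candidate term whose phrase similarity to some existing
    concept clears the threshold; the rest are discarded."""
    added = [t for t in candidates
             if any(dice_similarity(t, c) >= threshold for c in core_concepts)]
    return core_concepts + added
```

With core concept `"database"`, the candidate `"databases"` clears the threshold and is attached, while an unrelated term like `"banana"` is filtered out. A semantic measure (e.g. WordNet path similarity) could be swapped in for `dice_similarity` without changing the expansion loop.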


Chen K.,Jilin University | Chen K.,Key Laboratory of Computation and Knowledge Engineering | Zuo W.,Jilin University | Zuo W.,Key Laboratory of Computation and Knowledge Engineering | And 5 more authors.
Journal of Information and Computational Science | Year: 2010

In the Deep Web, there are various methods for extracting data records. After carefully reviewing and analyzing these methods, we find that they still struggle with queries returning zero or few results and with nested data structures. With the cooperation of an ontology, these problems can be solved more easily. We observe that the query interfaces and data pages of the sources often contain rich information about concepts, instances, and concept relationships in the application domain. This paper proposes a method that combines the query interface and the query results to automatically extract a domain-specific ontology. In addition, the ontology model, designed around the structured and semi-structured characteristics of the Deep Web, can also assist in the extraction of Deep Web result pages. Experimental results show that our method obtains more comprehensive concepts, and that a simple domain-specific ontology is more convenient to apply in data extraction. Copyright © 2010 Binary Information Press.


Zhu J.,Jilin University | Zhu J.,Key Laboratory of Computation and Knowledge Engineering | Liu Y.,Jilin University | Liu Y.,Key Laboratory of Computation and Knowledge Engineering | And 4 more authors.
Journal of Computational Information Systems | Year: 2010

The severity and number of intrusions on computer networks are rapidly increasing, which requires advances not only in detection algorithms but also in automatic response techniques. In this paper, a new approach to automatic response called DGII_IDR is presented, which employs a game-theoretic response strategy against adversaries. It transforms the dynamic game of incomplete information into a game of complete but imperfect information via the Harsanyi transformation. The optimal response is chosen using a perfect Bayesian equilibrium, continuously revising the posterior belief about each player's type. Moreover, DGII_IDR resolves the conflict between the rationality assumption and the irrational behavior of attackers and defenders by restricting their strategies within the game. We also show, via a case study, how our game-theoretic framework can be applied to configure intrusion detection strategies in realistic scenarios, and demonstrate the effectiveness of DGII_IDR. Copyright © 2010 Binary Information Press.
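The belief-revision-then-respond loop can be sketched in two functions. This is a generic Bayesian-game toy, not DGII_IDR itself; the type names, observation, and payoffs below are all invented for illustration:

```python
def bayes_update(prior, likelihood, observation):
    """Revise the posterior belief over attacker types after an observation.
    prior: {type: p}; likelihood: {type: {obs: p(obs | type)}}."""
    unnorm = {t: prior[t] * likelihood[t][observation] for t in prior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

def best_response(belief, payoff):
    """Pick the defender action maximizing expected utility under the
    current belief.  payoff: {action: {type: utility}}."""
    return max(payoff, key=lambda a: sum(belief[t] * payoff[a][t] for t in belief))
```

After a strong alert, the posterior on a malicious type rises (0.5 to 0.9 in the test below), and the expected-utility comparison flips the defender toward blocking. A perfect Bayesian equilibrium additionally requires the attacker's strategy to be a best response to this updating, which this sketch does not model.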
