Ju H.,Beijing University
Proceedings - 2012 5th International Conference on Intelligent Computation Technology and Automation, ICICTA 2012 | Year: 2012

In this paper, a new method for handling multicriteria fuzzy decision-making problems based on interval-valued fuzzy sets is presented. The proposed method represents the degrees of satisfiability and non-satisfiability of each alternative with respect to a set of criteria as interval-valued fuzzy sets. Furthermore, it allows the decision-maker to assign degrees of membership and non-membership of the criteria to the fuzzy concept "importance." The proposed method provides a practical way to help the decision-maker reach a decision efficiently. © 2012 IEEE.
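A rough sketch of how such interval-valued scoring could be set up, assuming a simple interval-arithmetic weighted aggregation; the ratings, importance intervals, and ranking rule below are hypothetical, and the paper's exact aggregation operators are not reproduced here.

```python
# Minimal sketch: interval-valued multicriteria scoring (illustrative only).
# Satisfiability of each alternative per criterion and criterion importance
# are intervals [lower, upper]; alternatives are ranked by the midpoint of
# the aggregated score interval. All names and numbers are hypothetical.

def interval_mult(a, b):
    """Product of two non-negative intervals."""
    return (a[0] * b[0], a[1] * b[1])

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def score(ratings, importance):
    """Weighted interval score of one alternative over all criteria."""
    total = (0.0, 0.0)
    for r, w in zip(ratings, importance):
        total = interval_add(total, interval_mult(r, w))
    norm = sum(w[1] for w in importance) or 1.0  # one possible normalisation
    return (total[0] / norm, total[1] / norm)

# Two hypothetical alternatives rated against three criteria.
importance = [(0.6, 0.8), (0.3, 0.5), (0.7, 0.9)]
alternatives = {
    "A1": [(0.5, 0.7), (0.8, 0.9), (0.4, 0.6)],
    "A2": [(0.6, 0.8), (0.5, 0.6), (0.7, 0.8)],
}
ranked = sorted(alternatives, key=lambda a: -sum(score(alternatives[a], importance)) / 2)
print(ranked)  # alternatives ordered by score-interval midpoint
```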


Ju H.,Beijing University
Lecture Notes in Electrical Engineering | Year: 2011

We discuss the concept of a level set of a fuzzy set and the related ideas of the representation theorem and the extension principle. We then describe the extension of these ideas to the case of interval-valued fuzzy sets (IVFS). What is important to note is that in the case of interval-valued fuzzy sets, the number of distinct level sets can be greater than the number of distinct membership grades of the fuzzy set being represented. In particular, the minimum of each subset of membership grades provides a level set. Moreover, the membership grades are not linearly ordered, and hence taking the minimum of a subset of them can result in a value that was not one of the members of the subset. © Springer-Verlag Berlin Heidelberg 2011.
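The counting argument can be illustrated with a small sketch: interval membership grades are only partially ordered, so the component-wise minimum (meet) of a subset of grades may be a new interval, and the number of candidate level values can exceed the number of distinct grades. The grades below are illustrative, not taken from the paper.

```python
# Minimal sketch: meets of subsets of interval membership grades (illustrative).
from itertools import combinations

# Distinct interval membership grades appearing in a small IVFS.
grades = [(0.2, 0.6), (0.4, 0.5), (0.3, 0.9)]

def meet(intervals):
    """Component-wise minimum of a collection of interval grades."""
    return (min(i[0] for i in intervals), min(i[1] for i in intervals))

levels = set()
for r in range(1, len(grades) + 1):
    for subset in combinations(grades, r):
        levels.add(meet(subset))

print(len(grades), "distinct grades ->", len(levels), "candidate level values")
# e.g. the meet of (0.2, 0.6) and (0.4, 0.5) is (0.2, 0.5), which is not a member of the set.
```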


Hongmei J.,Beijing University
Proceedings - 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, IHMSC 2013 | Year: 2013

We consider the system of interval-valued fuzzy sets (IVFS) in a universe X and study the cuts of an IVFS. Suppose a left-continuous triangular norm is given. The t-norm based cut (level set) of an IVFS is defined in a way that binds the membership lower bound and membership upper bound functions via the triangular norm. This is an extension of the usual cuts of an IVFS. We show that the system of these cuts fulfills properties analogous to those of the usual system of cuts. However, it is not possible to reconstruct an IVFS from the system of t-norm based cuts. © 2013 IEEE.
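One plausible reading of such a cut, sketched below with the Łukasiewicz t-norm (a standard left-continuous triangular norm): the cut at level alpha collects the points whose lower and upper membership bounds, combined through the t-norm, reach alpha. The exact binding used in the paper may differ; the universe and membership values are illustrative.

```python
# Minimal sketch of a t-norm based cut of an IVFS (illustrative assumptions).

def lukasiewicz(a, b):
    """Lukasiewicz t-norm, a standard left-continuous triangular norm."""
    return max(0.0, a + b - 1.0)

# A small IVFS on a finite universe: x -> (lower bound, upper bound).
ivfs = {"x1": (0.3, 0.7), "x2": (0.6, 0.9), "x3": (0.8, 0.8)}

def t_cut(ivfs, alpha, t_norm=lukasiewicz):
    """t-norm based cut: elements where T(lower, upper) >= alpha."""
    return {x for x, (lo, hi) in ivfs.items() if t_norm(lo, hi) >= alpha}

print(t_cut(ivfs, 0.5))   # {'x2', 'x3'}
print(t_cut(ivfs, 0.55))  # {'x3'}
# The cuts are nested as in the ordinary cut system (raising alpha never adds
# elements), but the original pair (lower, upper) cannot be recovered from them.
```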


Dong P.,Beijing University
Proceedings - 5th International Conference on Instrumentation and Measurement, Computer, Communication, and Control, IMCCC 2015 | Year: 2015

In general, website evaluation covers website function, content, credibility, customer service, enterprise strength, security, interface design, and underlying technology. Existing studies are mainly based on evaluation approaches for traditional websites. The arrival of the big data era brings new opportunities and challenges to the construction and application of e-commerce websites. This paper analyzes the evaluation indices of e-commerce websites and introduces an evaluation method for them, with particular attention to website construction with big data. © 2015 IEEE.
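A toy illustration of such an evaluation index, under the assumption of a simple weighted sum over the dimensions the abstract lists; the weights and scores are hypothetical and do not come from the paper.

```python
# Minimal sketch: weighted evaluation index over website dimensions (illustrative).
weights = {
    "function": 0.15, "content": 0.15, "credibility": 0.10, "customer_service": 0.10,
    "enterprise_strength": 0.10, "security": 0.15, "interface_design": 0.10,
    "technology": 0.15,
}

def site_score(scores):
    """Weighted sum of per-dimension scores in [0, 1]."""
    return sum(weights[d] * scores.get(d, 0.0) for d in weights)

example = {"function": 0.8, "content": 0.7, "credibility": 0.9, "customer_service": 0.6,
           "enterprise_strength": 0.5, "security": 0.85, "interface_design": 0.7,
           "technology": 0.75}
print(round(site_score(example), 3))
```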


Yan F.,Beijing Institute of Technology | Yan F.,Beijing University | Tan Y.,Beijing University
Journal of Networks | Year: 2011

Today, the world is increasingly awash in unstructured data, not only because of the Internet, but also because data that used to be collected on paper or on media such as film, DVDs, and compact discs has moved online [1]. Most of this data is unstructured and in diverse formats such as e-mail, documents, graphics, images, and videos. In managing the complexity and scalability of unstructured data, object storage has a clear advantage. Object-based data de-duplication is currently the most advanced method and an effective solution for detecting duplicate data. It can detect common embedded data in the first backup, across completely unrelated files, and even when the physical block layout changes. However, almost all current research on data de-duplication does not consider the content of different file types and has no knowledge of the backup data format. It has been shown that such methods cannot achieve optimal performance for compound files. In our proposed system, we first extract objects from files; Object_IDs are then obtained by applying a hash function to the objects. The resulting Object_IDs are used as indexing keys in a B+ tree-like index structure; this avoids the need for a full object index and reduces the search time for duplicate objects to O(log n). We introduce the new concept of a duplicate object resolver. The object resolver mediates access to all objects and is a central point for managing all their metadata and indexes. All objects are addressable by their IDs, which are globally unique. The resolver stores metadata in triple format. This improved metadata management strategy allows us to set, add, and resolve object properties with high flexibility, and allows the repeated use of the same metadata among duplicate objects. © 2011 ACADEMY PUBLISHER.
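As a rough illustration of the flow described above, the sketch below hashes extracted objects into Object_IDs and checks them against an ordered index. The fixed-size extractor, the bisect-based index, and the triple list are simplified stand-ins for the paper's format-aware object extraction, B+ tree-like index, and resolver metadata; all names are hypothetical.

```python
# Minimal sketch: object-level de-duplication with hashed Object_IDs (illustrative).
import bisect
import hashlib

def extract_objects(data: bytes, size: int = 4096):
    """Stand-in extractor: fixed-size slices. The paper extracts format-aware
    objects (e.g. images embedded in a compound document) instead."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class ObjectResolver:
    def __init__(self):
        self.index = []        # sorted Object_IDs (stand-in for the B+ tree-like index)
        self.metadata = []     # (object_id, property, value) triples

    def add(self, obj: bytes, source: str) -> bool:
        """Store an object; return True if it was already present."""
        oid = hashlib.sha256(obj).hexdigest()
        pos = bisect.bisect_left(self.index, oid)       # O(log n) duplicate lookup
        duplicate = pos < len(self.index) and self.index[pos] == oid
        if not duplicate:
            bisect.insort(self.index, oid)
        self.metadata.append((oid, "seen_in", source))  # triple-format metadata
        return duplicate

resolver = ObjectResolver()
objects = extract_objects(b"embedded image bytes" * 400)          # pretend file content
first_pass = [resolver.add(o, "fileA.doc") for o in objects]
second_pass = [resolver.add(o, "fileB.ppt") for o in objects]     # same objects reappear
print(sum(second_pass), "of", len(objects), "objects detected as duplicates")
```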
