Rodriguez-Gomez R.A.,CITIC UGR |
Macia-Fernandez G.,CITIC UGR |
IEEE Communications Magazine | Year: 2017
Nowadays, a great part of Internet content is not reachable from search engines. Studying the nature of this content from a cybersecurity perspective is of high interest, as it could be part of malware distribution processes, child pornography or copyrighted material exchange, botnet command and control messaging, and so on. Although the research community has put a lot of effort into this challenge, most existing works focus on content hidden in websites. However, other relevant services, such as P2P protocols, are also used to store and transmit hidden resources. In the present work, we propose the concept of Deep Torrent to refer to those torrents available in BitTorrent that cannot be found through public websites or search engines. We present an implementation of a complete system to crawl the Deep Torrent and evaluate its existence and size. We describe a basic experiment crawling the Deep Torrent for 39 days, which yields an initial estimate of its size of 67 percent of the total number of resources shared in the BitTorrent network. © 2017 IEEE.
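The 67 percent estimate boils down to comparing the set of torrent info-hashes discovered by crawling the network with the subset that is also reachable through public indexes. A minimal sketch of that final computation (the function name and the toy hash sets below are hypothetical illustrations, not part of the paper's crawler):

```python
def deep_torrent_fraction(crawled_hashes: set, indexed_hashes: set) -> float:
    """Fraction of crawled torrents that are absent from public indexes,
    i.e. the estimated size of the 'Deep Torrent'."""
    if not crawled_hashes:
        return 0.0
    deep = crawled_hashes - indexed_hashes
    return len(deep) / len(crawled_hashes)

# Toy example: 6 torrents discovered by the crawler, 2 of which
# also appear on public torrent sites or search engines.
crawled = {f"hash{i}" for i in range(6)}
indexed = {"hash0", "hash1"}
print(round(deep_torrent_fraction(crawled, indexed), 4))  # → 0.6667
```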
Ortuno F.M.,CITIC UGR |
Valenzuela O.,University of Granada |
Rojas F.,CITIC UGR |
Pomares H.,CITIC UGR |
And 3 more authors.
Bioinformatics | Year: 2013
Motivation: Multiple sequence alignments (MSAs) are widely used in bioinformatics to support other tasks such as structure prediction, biological function analysis or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, especially when the sequences are less similar. Consequently, researchers and biologists do not agree on the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores that include further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. Results: The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. Its performance on the BAliBASE benchmark was statistically significant according to the Kruskal-Wallis test (P < 0.01). This algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), whereas its results are not significantly different from those of 3D-COFFEE (P > 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate the obtained alignments more accurately.
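Two of the three objectives are simple to compute directly from an alignment, and candidate alignments are then compared by Pareto dominance, the comparison on which non-dominated sorting is built. A minimal sketch under those assumptions (the STRIKE structural score is omitted here since it requires 3D structure data, and all function names are hypothetical, not the paper's code):

```python
def non_gaps_percentage(alignment):
    """Objective: percentage of non-gap characters in the alignment."""
    total = sum(len(row) for row in alignment)
    gaps = sum(row.count('-') for row in alignment)
    return 100.0 * (total - gaps) / total

def totally_conserved_columns(alignment):
    """Objective: number of columns where all sequences share one residue."""
    return sum(1 for col in zip(*alignment)
               if '-' not in col and len(set(col)) == 1)

def dominates(a, b):
    """Pareto dominance for maximization: a dominates b if it is at least
    as good in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Toy alignment of three sequences: 2 gaps out of 15 characters,
# and 4 totally conserved columns.
aln = ["ACG-T", "ACGAT", "ACG-T"]
print(totally_conserved_columns(aln))  # → 4
```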
Availability: The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip. Supplementary Information: Supplementary material is available at Bioinformatics online. © 2013 The Author.
Lopez-Vega J.M.,CITIC UGR |
Povedano-Molina J.,CITIC UGR |
Pardo-Castellote G.,Real-time Innovations, Inc. |
Lopez-Soler J.M.,ETSIIT UGR
Journal of Systems and Software | Year: 2013
The OMG DDS (Data Distribution Service) standard specifies a middleware for distributing real-time data using a publish-subscribe, data-centric approach. Until now, DDS systems have been restricted to a single, isolated DDS domain, normally deployed within a single multicast-enabled LAN. As systems grow larger, the need to interconnect different DDS domains arises. In this paper, we consider the problem of communicating disjoint data-spaces that may use different schemas to refer to similar information. In this regard, we propose a DDS interconnection service capable of bridging DDS domains as well as adapting between different data schemas. A key benefit of our approach is that it is compliant with the latest OMG specifications; thus the proposed service does not require any modifications to DDS applications. The paper identifies the requirements for DDS data-space interconnection, presents an architecture that responds to those requirements, and concludes with experimental results gathered on our prototype implementation. We show that the impact of the service on communications performance is well within the acceptable limits for most real-world uses of DDS (latency overhead is on the order of hundreds of microseconds). Reported results also indicate that our service interconnects remote data-spaces efficiently and reduces network traffic almost N times, with N being the number of final data subscribers. © 2012 Elsevier Inc. All rights reserved.
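The "almost N times" reduction follows from where fan-out happens: without a bridging service, each remote subscriber receives its own copy of every sample across the inter-domain link, whereas with the service a single copy crosses and is redistributed inside the remote domain. A back-of-the-envelope sketch of that arithmetic (the function name and rates are illustrative assumptions, not measurements from the paper):

```python
def wan_samples_per_second(publish_rate, n_remote_subscribers, bridged):
    """Samples per second crossing the link between two DDS domains.
    Unbridged: one copy per remote subscriber. Bridged: one copy total,
    fanned out locally on the far side."""
    return publish_rate * (1 if bridged else n_remote_subscribers)

rate, n = 100, 8  # 100 samples/s published, 8 remote subscribers
unbridged = wan_samples_per_second(rate, n, bridged=False)
bridged = wan_samples_per_second(rate, n, bridged=True)
print(unbridged // bridged)  # reduction factor equals N → 8
```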
Lopez-Vega J.M.,CITIC UGR |
Camarillo G.,Ericsson AB |
Povedano-Molina J.,CITIC UGR |
Lopez-Soler J.M.,ETSIIT UGR
Computer Standards and Interfaces | Year: 2013
Data-Centric Publish-Subscribe (DCPS) is an architectural and communication paradigm in which applications exchange data content through a common data-space using publish-subscribe interactions. Due to its focus on data content, DCPS is especially suitable for deploying IoT systems. However, some problems must be solved to support large deployments. In this paper we define a novel extension to the IETF REsource LOcation And Discovery (RELOAD) protocol specification for providing content discovery and transfer in large-scale IoT deployments. We have conducted a set of experiments over multiple simulated networks of 500 to 10,000 nodes that demonstrate the viability, scalability, and robustness of our proposal. © 2013 Elsevier B.V.
Banos O.,Kyung Hee University |
Bang J.,Kyung Hee University |
Hur T.,Kyung Hee University |
Siddiqi M.H.,Kyung Hee University |
And 6 more authors.
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS | Year: 2015
The monitoring of human lifestyles has gained much attention in recent years. This work presents a novel approach to combining multiple context-awareness technologies for the automatic analysis of people's conduct in a comprehensive and holistic manner. Activity recognition, emotion recognition, location detection, and social analysis techniques are integrated with ontological mechanisms as part of a framework to identify human behavior. Key architectural components, methods, and evidence are described in this paper to illustrate the interest of the proposed approach. © 2015 IEEE.
Luengo J.,CITIC UGR |
Herrera F.,CITIC UGR
Information Sciences | Year: 2012
In this work we jointly analyze the performance of three classic Artificial Neural Network models and one Support Vector Machine with respect to a series of data complexity measures known as measures of separability of classes. In particular, we consider a Radial Basis Function Network, a Multi-Layer Perceptron, and a Learning Vector Quantization network, while the Sequential Minimal Optimization method is used to train the Support Vector Machine. We consider five measures of separability of classes over a wide range of data sets built from real data, which have proved to be very discriminative when analyzing the performance of classifiers. We find that two of them allow us to extract common behavior patterns for the four learning methods due to their related nature. Using these two metrics, we obtain rules that describe both good and bad performance of the Artificial Neural Networks and the Support Vector Machine. With the obtained rules, we characterize the performance of the methods in terms of the data set complexity metrics, and thereby establish their common domains of competence. Using these domains of competence, the shared strengths and weaknesses of these four models make it possible to know in advance whether they will perform well or poorly on a given problem, or whether a more complex model configuration is needed. © 2011 Elsevier Inc. All rights reserved.
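As one concrete illustration of a class-separability measure, Fisher's discriminant ratio compares the distance between class means to the spread within each class along a single feature. The sketch below is an assumed minimal implementation of that general measure for two classes, not the paper's code or its exact set of five metrics:

```python
import statistics

def fisher_discriminant_ratio(feature_c1, feature_c2):
    """Fisher's discriminant ratio for one feature and two classes:
    (mu1 - mu2)^2 / (var1 + var2). Higher values indicate classes that
    are easier to separate along this feature."""
    m1 = statistics.fmean(feature_c1)
    m2 = statistics.fmean(feature_c2)
    v1 = statistics.pvariance(feature_c1)
    v2 = statistics.pvariance(feature_c2)
    return (m1 - m2) ** 2 / (v1 + v2)

# Two tight, well-separated classes: large ratio, easy problem.
print(round(fisher_discriminant_ratio([1.0, 1.1, 0.9], [3.0, 3.1, 2.9])))  # → 300
```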
Garcia J.A.,CITIC UGR |
Rodriguez-Sanchez R.,CITIC UGR |
Fdez-Valdivia J.,CITIC UGR
Journal of Visual Communication and Image Representation | Year: 2010
For progressive transmission, the limited bit reserves available at low transmission cost constrain both the growing low-cost bit allocation and the sustainable number of significant coefficients to which these exhaustible low-cost reserves can be assigned. Following a bit-saving path in progressive image transmission, bit streams are prioritized by a prioritization protocol, but only a part of the prioritized low-cost bit streams is transmitted to the decoder at that time. The level of transmission technology may also be augmented by investing part of the gross income in transmission improvement. The remaining prioritized bit streams, which were not sent to the decoder at their prioritization time, are added to a knowledge base of bit savings: this reduces the number of bits transmitted at the present time, while additions to the knowledge base may be transmitted later, at higher bit rates, once transmission technology has improved. Our aim is to find the paths of transmission, bit-reserve use, and sustained improvement in transmission technology that maximize the utility of future transmission. This uncovers the exact role and relevance of knowledge from bit-saving in the optimal transmission path. © 2010 Elsevier Inc. All rights reserved.