Paris, France


Macedo D.F.,Federal University of Minas Gerais | Dos Santos A.L.,Federal University of Paraná | Nogueira J.M.,Federal University of Minas Gerais | Pujolle G.,Paris Universitas
Computer Networks | Year: 2011

In wireless networks, the peer-to-peer (P2P) paradigm has been extensively used for information sharing due to the similarities between the two kinds of networks. Both P2P overlays and wireless networks are decentralized and designed to cope with node and link failures. As a consequence of this synergy, P2P services are employed for file sharing, resource discovery, instant messaging and online collaboration in wireless environments. Unstructured P2P protocols are the most commonly used protocols in dynamic wireless networks, due to their high degree of failure tolerance and their support for more sophisticated query mechanisms. It has been demonstrated that the performance of P2P protocols depends heavily on the deployment; thus, the parameters of the protocol must be tuned to maximize performance. This article proposes a controller that adjusts the load of the P2P application in real time by tuning the "number of neighbors" parameter of unstructured P2P protocols. This adaptation avoids saturating the network, allowing more messages to be delivered and reducing the transmission delay at the MAC layer. Simulations of a Gnutella-like protocol show that, on each deployment, the controller-based network achieves a hit rate and response time comparable to the best manual configuration while maintaining acceptable energy consumption and fairness. © 2011 Elsevier B.V. All rights reserved.
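A minimal sketch of the kind of feedback loop the abstract describes, tuning the neighbor count from observed MAC-layer delay; the thresholds, step sizes, and the control period are illustrative assumptions, not the authors' actual controller design:

    # Sketch: tune the "number of neighbors" of an unstructured P2P
    # overlay from measured MAC-layer delay. Target delay, margin and
    # step size are illustrative assumptions.

    class NeighborController:
        def __init__(self, min_neighbors=2, max_neighbors=8,
                     target_delay_ms=50.0, margin=0.2):
            self.min_n = min_neighbors
            self.max_n = max_neighbors
            self.target = target_delay_ms
            self.margin = margin
            self.neighbors = min_neighbors

        def update(self, measured_delay_ms):
            """Adjust the neighbor budget once per control period."""
            if measured_delay_ms > self.target * (1 + self.margin):
                # Network is saturating: shed overlay load.
                self.neighbors = max(self.min_n, self.neighbors - 1)
            elif measured_delay_ms < self.target * (1 - self.margin):
                # Headroom available: widen the query horizon.
                self.neighbors = min(self.max_n, self.neighbors + 1)
            return self.neighbors

    ctrl = NeighborController()
    for delay in [30.0, 35.0, 70.0, 80.0, 45.0]:   # simulated MAC delays
        print(delay, "ms ->", ctrl.update(delay), "neighbors")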


Macedo D.F.,Federal University of Minas Gerais | Dos Santos A.L.,Federal University of Paraná | Nogueira J.M.,Federal University of Minas Gerais | Pujolle G.,Paris Universitas
Computer Communications | Year: 2011

Wireless multi-hop networks can vary both the transmission power and the modulation of links. These two parameters offer several design choices that influence the performance of wireless multi-hop networks, e.g. minimizing energy consumption, increasing throughput, reducing contention, and maximizing link quality. However, previous works consider only network-wide metrics; per-flow performance metrics, such as end-to-end energy consumption and latency, have not been studied. These metrics directly impact the experience of users and should therefore be considered in capacity and performance studies. Our model incorporates per-flow metrics while also considering fading, contention, hidden terminals and packet error probabilities. We instantiate the model in an IEEE 802.11 multi-hop scenario and evaluate common routing decisions such as maximizing link quality, maximizing data rate or minimizing transmission power. © 2011 Elsevier B.V. All rights reserved.
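As a toy illustration of a per-flow metric (not the paper's model, which additionally captures fading, contention, and hidden terminals), end-to-end energy and latency can be accumulated hop by hop from per-link transmission power, data rate, and packet error rate:

    # Toy per-flow metric: accumulate energy and latency along a route.
    # The retransmission term assumes independent packet errors per
    # attempt; all parameter values below are illustrative assumptions.

    def flow_metrics(route, packet_bits=12000):
        """route: list of (tx_power_watts, rate_bps, packet_error_rate)."""
        energy_j, latency_s = 0.0, 0.0
        for power, rate, per in route:
            tx_time = packet_bits / rate
            attempts = 1.0 / (1.0 - per)      # expected transmissions
            energy_j += power * tx_time * attempts
            latency_s += tx_time * attempts
        return energy_j, latency_s

    # Two hypothetical 3-hop routes: high rate/high power vs. low/low.
    fast = [(0.1, 54e6, 0.1)] * 3
    slow = [(0.01, 6e6, 0.01)] * 3
    print("fast:", flow_metrics(fast))
    print("slow:", flow_metrics(slow))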


Macedo D.F.,Federal University of Minas Gerais | Dos Santos A.L.,Federal University of Paraná | Correia L.H.A.,Federal University of Lavras | Nogueira J.M.,Federal University of Minas Gerais | Pujolle G.,Paris Universitas
Computer Networks | Year: 2010

Wireless networks can vary both the transmission power and the modulation of links. Existing routing protocols do not take transmission power control (TPC) and modulation adaptation (also known as rate adaptation, RA) into account at the same time, even though the performance of wireless networks can be significantly improved when routing algorithms use link characteristics to build their routes. This article proposes and evaluates extensions to routing protocols to cope with TPC and RA. The enhancements can be applied to any link state or distance vector routing protocol. An evaluation considering node density, node mobility and link error shows that TPC- and RA-aware routing algorithms improve the average latency and the end-to-end throughput, while consuming less energy than traditional protocols. © 2010 Elsevier B.V. All rights reserved.
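A hedged sketch of how a link state or distance vector protocol might fold transmission power and data rate into a single additive link cost; the alpha weighting and the Dijkstra usage are assumptions for illustration, not the article's actual metric:

    # Sketch: an additive link cost mixing transmission power (TPC) and
    # data rate (RA), usable by any link-state or distance-vector
    # protocol. The weighting is an illustrative assumption.
    import heapq

    def link_cost(tx_power_w, rate_bps, packet_bits=12000, alpha=0.5):
        airtime = packet_bits / rate_bps        # favors fast modulations
        energy = tx_power_w * airtime           # favors low power
        return alpha * airtime + (1 - alpha) * energy

    def shortest_path(graph, src, dst):
        """graph: {node: [(neighbor, cost), ...]} -- plain Dijkstra."""
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, c in graph.get(u, []):
                nd = d + c
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))

    g = {"A": [("B", link_cost(0.1, 54e6)), ("C", link_cost(0.01, 6e6))],
         "B": [("D", link_cost(0.1, 54e6))],
         "C": [("D", link_cost(0.01, 6e6))]}
    print(shortest_path(g, "A", "D"))   # picks the cheaper route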


Gaillard N.,Paris Universitas | Deltour S.,Paris Universitas | Vilotijevic B.,Center GEAT | Hornych A.,Center GEAT | And 4 more authors.
Neurology | Year: 2010

BACKGROUND: Paroxysmal atrial fibrillation (PAF) may remain underdiagnosed after stroke, as suggested by long-duration EKG monitoring. Here we report the sensitivity of transtelephonic EKG monitoring (TTM) for the detection of PAF in patients with a recent stroke or TIA and a negative 24-hour Holter. METHODS: We analyzed data from 98 consecutive patients with TTM and noncardioembolic TOAST stroke (n = 78) or TIA (n = 20). Most were cryptogenic events (82%). Patients started TTM 0.8 months (interquartile range 0.4-2.5) after the index event and randomly recorded about one EKG per day for 1 month. Univariate and multivariate analyses were run to identify predictors of PAF. RESULTS: Seventeen PAF episodes were detected in 9.2% (9/98) of the patients. The estimated duration of PAF episodes ranged from 4 to 72 hours. Two predictors were identified: premature atrial ectopic beats (more than 100) on 24-hour routine Holter (odds ratio [OR] = 11.0; 95% confidence interval [CI] 1.9-62; p = 0.007) and nonlacunar anterior circulation DWI hypersignals (OR = 9.9; 95% CI 1.1-90.6; p = 0.04). The PAF detection rate varied from 42.6% for patients meeting both criteria to 0% for patients meeting neither. CONCLUSIONS: Transtelephonic EKG monitoring increases the detection rate of paroxysmal atrial fibrillation in stroke and TIA patients whose 24-hour Holter result was negative, especially if they had frequent premature atrial ectopic beats, a recent anterior circulation infarct on MRI, or both. Copyright © 2010 by AAN Enterprises, Inc.


Falleri J.-R.,University of Bordeaux 1 | Blanc X.,University of Bordeaux 1 | Bendraou R.,Paris Universitas | Da Silva M.A.A.,Paris Universitas | Teyton C.,University of Bordeaux 1
Software - Practice and Experience | Year: 2014

Ensuring the consistency of models is a key concern when using a model-based development approach. Model inconsistency detection has therefore received significant attention in recent years. To be useful, inconsistency detection has to be sound, efficient, and scalable. Incremental detection is one way to achieve efficiency in the presence of large models. In most existing approaches, incrementality is achieved at the expense of memory consumption, which becomes proportional to the model size and the number of consistency rules. In this paper, we propose a new incremental inconsistency detection approach that consumes only a small, model-size-independent amount of memory. It therefore scales better to projects using large models and many consistency rules. Copyright © 2012 John Wiley & Sons, Ltd.
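A rough sketch of the general idea of scoping re-checks by a static rule index, so memory stays proportional to the rule set rather than the model; the rule representation and model API are assumptions, not the paper's actual data structures:

    # Sketch: incremental consistency checking that keeps only a static
    # rule index (rule -> element types it reads), so memory does not
    # grow with model size. Rules and model shape are assumptions.
    from collections import defaultdict

    class IncrementalChecker:
        def __init__(self, rules):
            # rules: list of (name, {element types read}, check(model))
            self.by_type = defaultdict(list)   # size ~ |rules|, not |model|
            for name, types, check in rules:
                for t in types:
                    self.by_type[t].append((name, check))

        def on_change(self, model, changed_type):
            """Re-run only the rules that read the changed element type."""
            return [name for name, check in self.by_type[changed_type]
                    if not check(model)]

    # Hypothetical model and rule: class names must be unique.
    model = {"classes": ["A", "B", "B"]}
    unique = ("unique-class-names", {"Class"},
              lambda m: len(set(m["classes"])) == len(m["classes"]))
    checker = IncrementalChecker([unique])
    print(checker.on_change(model, "Class"))   # -> ['unique-class-names']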


Ghedira E.,Paris Universitas | Molinier L.,Paris Universitas | Pujolle G.,Paris Universitas
Lecture Notes in Business Information Processing | Year: 2011

We present in this paper a task allocation mechanism for a distributed, complex environment, in our case a home network. In addition to the usual constraints of multi-agent systems, we face restrictions on communication (available throughput) and resources (bandwidth). Our multi-agent system maintains a knowledge base and must ensure a proper quality of service in the home network by adjusting routing in real time, applying alternative routes to the main ones. Our approach uses a first-price sealed-bid auction: when an agent does not have an alternative route, it launches an auction, and the agent offering the best price becomes the next hop of the route. © 2011 Springer-Verlag Berlin Heidelberg.
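A minimal sketch of the sealed-bid round described above; "best price" is read here as the lowest offered cost (a procurement-style auction), and the agent names and bid values are illustrative assumptions:

    # Sketch of a first-price sealed-bid round for choosing the next
    # hop. The winner pays its own bid (first price); lowest cost wins
    # under our procurement reading, which is an assumption.

    def run_auction(auctioneer, bids):
        """bids: {agent: sealed price offered to carry the traffic}."""
        winner = min(bids, key=bids.get)
        print(f"{auctioneer}: next hop is {winner} at price {bids[winner]}")
        return winner

    # An agent with no alternative route launches the auction; its
    # neighbors answer with sealed bids (e.g. reflecting spare bandwidth).
    sealed_bids = {"agent-B": 4.0, "agent-C": 2.5, "agent-D": 3.1}
    run_auction("agent-A", sealed_bids)        # -> agent-C wins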


Da Silva M.A.A.,Paris Universitas | Blanc X.,University of Bordeaux 1 | Bendraou R.,Paris Universitas
2011 26th IEEE/ACM International Conference on Automated Software Engineering, ASE 2011, Proceedings | Year: 2011

Software development companies have been putting a lot of effort into adopting process models; however, two main issues remain. On the one hand, process models are inherently incomplete, since companies cannot capture all possible situations in a single model. On the other hand, managers cannot force process participants (agents) to strictly follow these models. The effect of both issues is that companies need to be able to handle deviations during process enactment. To make sure that process agents follow the process model and that their deviations are detected and handled, companies adopt so-called Process-centered Software Engineering Environments (PSEEs). Unfortunately, when it comes to handling a deviation, the options proposed by these tools are essentially limited to ignoring or forbidding it. In the present work, we address this limitation by presenting an approach for detecting, managing and tolerating agent deviations. In addition, we present the formal specification of this approach in Linear Temporal Logic (LTL), which has been used as the basis of our PSEE prototype. © 2011 IEEE.
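As a flavor of deviation detection (a hedged toy, not the authors' actual LTL encoding), a deviation can be flagged when an observed enactment trace violates a safety property such as "an activity never starts before its prerequisite finishes":

    # Toy runtime check of an LTL-style safety property over an
    # enactment trace: G(start(b) -> previously finished(a)). The
    # process model and event format are illustrative assumptions.

    def detect_deviations(trace, precedes):
        """trace: list of (event, activity); precedes: {activity: prereq}."""
        finished, deviations = set(), []
        for i, (event, act) in enumerate(trace):
            pre = precedes.get(act)
            if event == "start" and pre is not None and pre not in finished:
                # Tolerate rather than forbid: record, do not abort.
                deviations.append((i, f"{act} started before {pre} finished"))
            if event == "finish":
                finished.add(act)
        return deviations

    trace = [("start", "design"), ("start", "code"),   # premature start
             ("finish", "design"), ("finish", "code")]
    print(detect_deviations(trace, {"code": "design"}))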
