Bakhshandeh M.,Information Systems Group | Pesquita C.,University of Lisbon | Borbinha J.,University of Lisbon
CEUR Workshop Proceedings

Current Enterprise Architecture (EA) approaches tend to be generic, based on broad meta-models that cross-cut distinct architectural domains. Integrating these models is necessary for an effective EA process, in order to support, for example, benchmarking of business processes or assessing compliance with structured requirements. However, the integration of EA models faces challenges stemming from structural and semantic heterogeneities that could be addressed by ontology matching techniques. To that end, we used AgreementMakerLight, an ontology matching system, to evaluate a set of state-of-the-art matching approaches that could adequately address some of the heterogeneity issues. We assessed the matching of EA models based on the ArchiMate and BPMN languages, which made it possible to draw conclusions not only about the potential but also about the limitations of these techniques in exploring the more complex semantics present in these models. Copyright © 2015 for the individual papers by the papers' authors.
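A common baseline in ontology matching is lexical alignment: comparing the token sets of element names across the two models. The sketch below illustrates that idea with Jaccard similarity over hypothetical ArchiMate and BPMN element labels; it is not the AgreementMakerLight API, and the labels and threshold are illustrative assumptions.

```python
# Minimal lexical ontology-matching sketch (hypothetical labels and
# threshold; not the AgreementMakerLight API).

def tokens(label):
    """Lowercase a label and split it into a set of word tokens."""
    return set(label.lower().replace("_", " ").split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match(model_a, model_b, threshold=0.5):
    """Return (a, b, score) pairs whose label similarity passes threshold."""
    pairs = []
    for a in model_a:
        for b in model_b:
            score = jaccard(tokens(a), tokens(b))
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

archimate = ["Handle Claim", "Customer File Service"]
bpmn = ["Handle Insurance Claim", "Archive Customer File"]
print(match(archimate, bpmn))
```

Purely lexical scores like these capture surface similarity only; the semantic heterogeneities the abstract mentions (same concept, different structure and vocabulary) are exactly what such a baseline misses.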

Korman A.,University Paris Diderot | Kutten S.,Information Systems Group | Peleg D.,Weizmann Institute of Science
Distributed Computing

This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as "how expensive is local verification?" and, more specifically, "how expensive is local verification compared to computation?" A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms: one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly. © Springer-Verlag 2010.
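The flavour of a proof labeling scheme can be shown on a toy property: "each label is the vertex's distance to a designated root." The prover assigns BFS distances; each vertex then checks only its own and its neighbours' labels. This is a standard illustrative example, not the paper's construction, and the assumption that every vertex knows whether it is the root is a simplification.

```python
# Toy proof-labeling-scheme sketch: labels = distances to a root.
# Correct labels pass every local check; any tampering is caught locally.

from collections import deque

def assign_labels(adj, root):
    """Prover: label every vertex with its BFS distance from root."""
    labels = {root: 0}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for w in adj[u]:
            if w not in labels:
                labels[w] = labels[u] + 1
                frontier.append(w)
    return labels

def verify_at(v, labels, adj, root):
    """Verifier at v: inspects only v's and its neighbours' labels."""
    d = labels[v]
    if v == root:
        return d == 0
    return (d >= 1
            and any(labels[w] == d - 1 for w in adj[v])
            and all(abs(labels[w] - d) <= 1 for w in adj[v]))

# A path 0-1-2-3: all vertices accept the honest labels.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = assign_labels(adj, root=0)
print(all(verify_at(v, labels, adj, 0) for v in adj))   # True
labels[2] = 7                                           # tamper with one label
print(all(verify_at(v, labels, adj, 0) for v in adj))   # False
```

The point mirrors the abstract's question: verification is purely local (each vertex reads a constant neighbourhood of small labels), yet acceptance everywhere certifies a global property of the whole graph.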

Das Sarma A.,Google | Trehan A.,Information Systems Group
Proceedings - IEEE INFOCOM

Healing algorithms play a crucial part in distributed peer-to-peer networks, where failures occur continuously and frequently. Whereas there are approaches for robustness that rely largely on built-in redundancy, we adopt a responsive approach that is more akin to that of biological networks, e.g., the brain. The general goal of self-healing distributed graphs is to maintain certain network properties while recovering from failure quickly and making bounded alterations locally. Several self-healing algorithms have been suggested in the recent literature [IPDPS'08, PODC'08, PODC'09, PODC'11]; they heal various network properties while fulfilling competing requirements such as achieving low degree increase while maintaining connectivity, expansion, and low stretch of the network. In this work, we augment the previous algorithms by adding the notion of edge-preserving self-healing, which requires the healing algorithm not to delete any edges originally present or adversarially inserted. This reflects the cost of adding additional edges; more importantly, it immediately follows that edge preservation helps maintain any monotonic subgraph-induced property, in particular important properties such as graph and subgraph densities. Density is an important network property, and in certain distributed networks maintaining it preserves high connectivity among certain subgraphs and backbones. We introduce a general model of self-healing, and introduce xheal+, an edge-preserving version of xheal [PODC'11]. © 2012 IEEE.
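A minimal way to see the edge-preserving idea: when a vertex fails, patch its surviving neighbours together with new edges while never deleting an existing edge. The toy routine below (an illustration of the notion only, not the xheal+ algorithm) ring-connects the failed vertex's neighbours.

```python
# Toy edge-preserving self-healing sketch: on a vertex failure, its former
# neighbours are joined in a ring. Edges are only ever added, never deleted,
# which is why monotone subgraph-induced properties survive healing.

def heal(adj, failed):
    """Remove `failed` from the graph and ring-connect its neighbours."""
    neighbours = sorted(adj.pop(failed))
    for u in neighbours:
        adj[u].discard(failed)
    if len(neighbours) >= 2:
        for i, u in enumerate(neighbours):
            v = neighbours[(i + 1) % len(neighbours)]
            adj[u].add(v)
            adj[v].add(u)
    return adj

# A star loses its centre; the leaves are patched into a ring,
# so the network stays connected without deleting any surviving edge.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
heal(star, 0)
print(star)   # 1, 2, 3 now form a ring
```

Each surviving vertex gains at most two new edges here, a crude stand-in for the bounded local degree increase that the real algorithms guarantee.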

Mani D.,Information Systems Group | Mani D.,Srini Raju Center for Technology and the Networked economics | Barua A.,University of Texas at Austin
Journal of Management Information Systems

Information technology (IT) is central to the process execution and management of an ongoing relationship in outsourcing, both of which are fraught with challenges, and often lead to poor business outcomes. Thus, it is important for IT groups in organizations to understand how to deal with such difficulties for improved outsourcing performance. We study whether firms learn over time to deal with these two related but distinct issues in IT and business process outsourcing. Does such learning affect financial value appropriation through outsourcing? We build on the literature in information systems and strategy to investigate whether value creation in outsourcing depends on relational learning that results from prior association with the vendor, and procedural learning that results from prior experience in managing interfirm relationships. We estimate value in terms of long-term abnormal stock returns to the client relative to an industry, size, and book-to-market matched sample of control firms following the implementation of the outsourcing contract. We also analyze announcement period returns and allied wealth effects. Using data from the hundred largest outsourcing deals between 1996 and 2005, we find that whereas relational learning influences value creation in both simple and complex outsourcing engagements, procedural learning impacts value only in complex initiatives. Financial markets are slow to price the value of learning. The results suggest that caution should be exercised when firms without the experience of managing interfirm relationships externalize complex tasks to vendors they have not worked with in the past. Furthermore, IT groups can help improve learning-based outcomes by developing processes and systems that enable a firm to improve outsourcing procedures in a cumulative manner and also to coordinate and collaborate with the vendor. Copyright © Taylor & Francis Group, LLC.

Barateiro J.,National Laboratory for Civil Engineering | Borbinha J.,Information Systems Group
Proceedings of the Mediterranean Electrotechnical Conference - MELECON

The goal of Risk Management is to define prevention and control mechanisms to address the risks attached to specific activities and valuable assets. Many Risk Management efforts operate in silos with narrowly focused, functionally driven, and disjointed activities. That fact leads to a fragmented view of risks, where each activity uses its own language, customs, and metrics. That limits an organization-wide perception of risks, where interdependent risks are not anticipated, controlled, or managed. The lack of integrated solutions to manage risk information leads experts to use spreadsheets as their main tool, impeding collaboration, communication, and reuse of risk information. To address these issues, this paper presents a solution that integrates a Risk Management framework, including an XML-based Domain Specific Language for Risk Management. The proposed framework is supported by an information system to manage the definition of risks. © 2012 IEEE.
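To give a concrete feel for an XML-based risk vocabulary, the sketch below parses a small risk register and ranks risks by exposure. The element names, the `likelihood`/`impact` scales, and the exposure formula are illustrative assumptions, not the paper's actual DSL schema.

```python
# Sketch of loading a hypothetical XML risk register (element names and
# the exposure = likelihood * impact scoring are assumptions for
# illustration, not the paper's DSL).

import xml.etree.ElementTree as ET

RISK_REGISTER = """\
<riskRegister>
  <risk id="R1">
    <asset>Digital archive</asset>
    <threat>Storage media degradation</threat>
    <likelihood>3</likelihood>
    <impact>4</impact>
  </risk>
  <risk id="R2">
    <asset>Process model repository</asset>
    <threat>Unauthorised modification</threat>
    <likelihood>2</likelihood>
    <impact>5</impact>
  </risk>
</riskRegister>
"""

def load_risks(xml_text):
    """Parse the register and rank risks by exposure, highest first."""
    risks = []
    for risk in ET.fromstring(xml_text).findall("risk"):
        likelihood = int(risk.findtext("likelihood"))
        impact = int(risk.findtext("impact"))
        risks.append({
            "id": risk.get("id"),
            "asset": risk.findtext("asset"),
            "exposure": likelihood * impact,
        })
    return sorted(risks, key=lambda r: r["exposure"], reverse=True)

for r in load_risks(RISK_REGISTER):
    print(r["id"], r["asset"], r["exposure"])
```

A shared machine-readable vocabulary like this is what lets risk information be aggregated across silos instead of living in per-team spreadsheets.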
