New York, New York, United States

CA, Inc., formerly Computer Associates International, Inc., and known as CA for short, is an American multinational, publicly held company headquartered in New York, New York, and one of the largest independent software corporations in the world. The company creates systems software that runs in mainframe, distributed computing, virtual machine, and cloud computing environments. Although the company once sold anti-virus and Internet security software for personal computers during its venture into the business-to-consumer market, it has remained primarily known for its business-to-business mainframe and distributed information technology infrastructure applications since spinning off its security products into Total Defense. CA Technologies claims that its software products are used by a majority of the Forbes Global 2000 companies.

CA Technologies posted $4.4 billion in revenue for fiscal year 2010 and maintains 100 offices in more than 45 countries. The company employs 13,200 people, including 5,900 engineers in software development. CA holds more than 400 patents worldwide and has more than 700 patent applications pending. In 2010 the company acquired eight companies to support its cloud strategy: 3Tera, Nimsoft, NetQoS, Oblicore, Cassatt, 4Base Technology, Arcot Systems, and Hyperformix. (Source: Wikipedia.)

Agency: Cordis | Branch: H2020 | Program: CSA | Phase: INSO-4-2015 | Award Amount: 2.85M | Year: 2016

Science2Society creates, pilots and shares good practices, guidelines and training materials that improve awareness and practical performance in seven concrete university-industry-society interfacing schemes especially affected by Science 2.0 and open innovation. It covers a very wide range of interfacing/co-creation approaches (and the synergies between them) and advances far beyond the traditional role of the interface as a facilitator of knowledge transfer from university to business. Sound methodological frameworks will be combined with real-life experience from practitioners in science and industry, making the transition from promising blueprints to actual change within some 3,000 actors in Europe by 2020. Science2Society not only collects knowledge and models; it also analyses in depth, and innovatively, how these can be improved (using advanced methods pioneered in business practice such as process re-engineering, design thinking and change management) and runs substantial experiments to validate the resulting optimized interfacing schemes. A complete package of dissemination activities will ensure that these results measurably impact the performance of European universities (and other stakeholders) in this area. Our project brings together practitioners as well as method and system experts; it brings together universities, industries, research and technology organizations, and SMEs. The project is endorsed by large (EU-level) networks of peers and ecosystem partners, allowing the project to engage in direct dialogue during project execution with hundreds of actors far beyond the consortium itself. Moreover, by building and establishing a Community-of-Practice-style Learning and Implementation Alliance, we will ensure that a self-sustained cross-sector community on the subject of Science 2.0-enabled innovation ecosystems (and the key role of universities interfacing with their ecosystem partners) will be in place and operational by the end of the project.

Agency: Cordis | Branch: H2020 | Program: RIA | Phase: ICT-07-2014 | Award Amount: 3.57M | Year: 2015

The most challenging applications in heterogeneous cloud ecosystems are those that are able to maximise the benefits of the combination of the cloud resources in use: multi-cloud applications. They have to deal with the security of the individual components as well as with the overall application security, including the communications and the data flow between the components. The main objective of MUSA is to support the security-intelligent lifecycle management of distributed applications over heterogeneous cloud resources, through a security framework that includes: security-by-design mechanisms to allow application self-protection at runtime, and methods and tools for integrated security assurance in both the engineering and operation of multi-cloud applications. The MUSA framework leverages security-by-design, agile and DevOps approaches for multi-cloud applications, and enables their security-aware development and operation. The framework will be composed of: a) an IDE for creating the multi-cloud application taking into account its security requirements together with functional and business requirements; b) a set of security mechanisms embedded in the multi-cloud application components for self-protection; c) an automated deployment environment that, based on an intelligent decision support system, will allow for the dynamic distribution of the components according to security needs; and d) a security assurance platform in the form of SaaS that will support multi-cloud application runtime security control and transparency to increase user trust. The project will demonstrate and evaluate the economic viability and practical usability of the MUSA framework in highly relevant industrial applications representative of the multi-cloud application development potential in Europe. The project duration will be 36 months, with an overall budget of 3,574,190 euros.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.1.2 | Award Amount: 9.00M | Year: 2012

Current cloud offerings are becoming broader day by day, providing a vibrant technical environment in which SMEs can create innovative solutions and evolve their services. The cloud promises cheap and flexible services to end users at a much larger scale than before. However, cloud business models and technologies are still at an early, hype-driven stage characterized by critical issues, which pose specific challenges and require advanced software engineering methods.

The main goal of MODAClouds is to provide methods, a decision support system, an open-source IDE and a run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multi-clouds with guaranteed QoS.

Model-driven development combined with novel model-driven risk analysis and quality prediction will enable developers to specify cloud-provider-independent models enriched with quality parameters, implement them, perform quality prediction, monitor applications at run-time and optimize them based on the feedback, thus filling the gap between design and run-time. Additionally, MODAClouds provides techniques for data mapping and synchronization among multiple clouds.

MODAClouds' innovations thus: (i) simplify cloud provider selection, favoring the emergence of European clouds; (ii) avoid vendor lock-in problems, supporting the development of cloud-enabled Future Internet applications; and (iii) provide quality assurance during the application life-cycle and support migration from cloud to cloud when needed.

The research is multi-disciplinary and will be grounded in expertise from several research areas. The MODAClouds consortium consists of highly recognized universities and research institutions that will assure sound scientific progress, SME partners providing expertise on modelling tools, and large companies that assure industrial relevance. The MODAClouds approach and tools will be applied to four industrial cases from different domains.

Agency: Cordis | Branch: H2020 | Program: MSCA-ITN-ETN | Phase: MSCA-ITN-2014-ETN | Award Amount: 3.80M | Year: 2015

The consortium of this European Training Network (ETN), BigStorage: Storage-based Convergence between HPC and Cloud to handle Big Data, will train future data scientists, enabling them to apply holistic and interdisciplinary approaches to take advantage of a data-overwhelmed world. This requires HPC and cloud infrastructures whose underpinning storage architectures are redefined to meet highly ambitious performance and energy-usage objectives. There has been an explosion of digital data, which is changing our knowledge about the world. This huge data collection, which cannot be managed by current data management systems, is known as Big Data. Techniques to address it are gradually combining with what has traditionally been known as High Performance Computing. This ETN will therefore focus on the convergence of Big Data, HPC, and cloud data storage, and its management and analysis. To gain value from Big Data it must be addressed from many different angles: (i) applications, which can exploit this data; (ii) middleware, operating in cloud and HPC environments; and (iii) infrastructure, which provides the storage and computing capable of handling it. Big Data can only be effectively exploited if techniques and algorithms are available that help to understand its content, so that it can be processed by decision-making models. This is the main goal of Data Science. We claim that this ETN project will be the ideal means to educate new researchers on the different facets of Data Science (across storage hardware and software architectures, large-scale distributed systems, data management services, data analysis, machine learning, and decision making). Such multifaceted expertise is mandatory to enable researchers to propose appropriate answers to application requirements, while leveraging advanced data storage solutions unifying cloud and HPC storage facilities.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2013.4.2 | Award Amount: 6.15M | Year: 2014

LeanBigData aims at addressing three open challenges in big data analytics: 1) the cost, in terms of resources, of scaling big data analytics for streaming and static data sources; 2) the lack of integration of existing big data management technologies and their high response time; 3) the insufficient end-user support, leading to extremely lengthy big data analysis cycles. LeanBigData will address these challenges by:

Architecting and developing three resource-efficient big data management systems typically involved in big data processing: a novel transactional NoSQL key-value data store, a distributed complex event processing (CEP) system, and a distributed SQL query engine. We will achieve at least one order of magnitude improvement in efficiency by removing overheads at all levels of the big data analytics stack, taking into account technology trends in multicore processors and non-volatile memories.

Providing an integrated big data platform combining these three main technologies used for big data (NoSQL, SQL, and streaming/CEP) that will improve response time for unified analytics over multiple sources and large amounts of data, avoiding the inefficiencies and delays introduced by existing extract-transform-load approaches. To achieve this we will use fine-grain intra-query and intra-operator parallelism that will lead to sub-second response times.

Supporting an end-to-end big data analytics solution removing the four main sources of delay in data analysis cycles by using: 1) automated discovery of anomalies and root-cause analysis; 2) incremental visualization of long analytical queries; 3) drag-and-drop declarative composition of visualizations; and 4) efficient manipulation of visualizations through hand gestures over 3D/holographic views.

Finally, LeanBigData will demonstrate these results in a cluster with 1,000 cores in four real industrial use cases with real data, paving the way for deployment in the context of realistic business processes.

Ryu K.-S., IBM | Thomas L., CA Technologies | Yang S.-H., IBM | Parkin S., IBM
Nature Nanotechnology | Year: 2013

Spin-polarized currents provide a powerful means of manipulating the magnetization of nanodevices, and give rise to spin transfer torques that can drive magnetic domain walls along nanowires. In ultrathin magnetic wires, domain walls are found to move in the opposite direction to that expected from bulk spin transfer torques, and also at much higher speeds. Here we show that this is due to two intertwined phenomena, both derived from spin-orbit interactions. By measuring the influence of magnetic fields on current-driven domain-wall motion in perpendicularly magnetized Co/Ni/Co trilayers, we find an internal effective magnetic field acting on each domain wall, the direction of which alternates between successive domain walls. This chiral effective field arises from a Dzyaloshinskii-Moriya interaction at the Co/Pt interfaces and, in concert with spin Hall currents, drives the domain walls in lock-step along the nanowire. Elucidating the mechanism for the manipulation of domain walls in ultrathin magnetic films will enable the development of new families of spintronic devices. © 2013 Macmillan Publishers Limited. All rights reserved.

Mate C.M., CA Technologies
IEEE Transactions on Magnetics | Year: 2011

As the clearances in disk drives approach subnanometer values, accurately predicting lubricant behavior is becoming more critical to designing reliable slider-disk interfaces. Central to any analysis of lubricant in disk drives is the disjoining pressure of the lubricant films on the disk and slider surfaces. This paper reviews current measurement techniques of the disjoining pressure of the lubricants used in disk drives and theoretical expressions of the disjoining pressure as a function of lubricant thickness. This paper also discusses what disjoining pressure analyses will be needed for future disk drive technologies. © 2006 IEEE.
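As a hedged illustration of the kind of theoretical expression the paper reviews, the van der Waals contribution to the disjoining pressure of a thin flat film is often written as Π(h) = A/(6πh³), where A is the Hamaker constant of the film/substrate system. The constant value and sign convention below are illustrative assumptions, not figures from the paper:

```python
import math

def vdw_disjoining_pressure(h_nm, hamaker_j=1e-19):
    """Van der Waals contribution to the disjoining pressure (Pa) of a
    flat lubricant film of thickness h_nm (nanometres).

    `hamaker_j` is an assumed Hamaker constant in joules; real values
    depend on the lubricant and disk overcoat materials.
    """
    h = h_nm * 1e-9  # convert nm to metres
    return hamaker_j / (6 * math.pi * h ** 3)
```

The 1/h³ dependence is why disjoining pressure dominates lubricant behavior as clearances shrink toward subnanometer values: halving the film thickness raises this contribution eightfold.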

Hasanbeigi A.,CA Technologies | Price L.,CA Technologies
Renewable and Sustainable Energy Reviews | Year: 2012

The textile industry is a complicated manufacturing industry because it is a fragmented and heterogeneous sector dominated by small and medium enterprises (SMEs). Various energy-efficiency opportunities exist in every textile plant. However, even cost-effective options often are not implemented, mostly because of limited information on how to implement energy-efficiency measures. Know-how on energy-efficiency technologies and practices should therefore be prepared and disseminated to textile plants. This paper provides information on energy use and on the energy-efficiency technologies and measures applicable to the textile industry. It includes case studies from textile plants around the world, with energy savings and cost information when available. A total of 184 energy-efficiency measures applicable to the textile industry are introduced. The paper also gives a brief overview of the textile industry around the world, along with an analysis of the type and share of energy used in different textile processes. Subsequently, energy-efficiency improvement opportunities available within some of the major textile sub-sectors are given, with a brief explanation of each measure. This paper shows that a large number of energy-efficiency measures exist for the textile industry, and most of them have a low simple payback period. © 2012 Elsevier Ltd. All rights reserved.
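The simple payback period the abstract mentions is just the investment cost divided by the annual cost savings. A minimal sketch, with figures that are hypothetical and not drawn from the paper's case studies:

```python
def simple_payback_years(capital_cost, annual_savings):
    """Simple payback period in years: investment divided by yearly savings.

    Ignores discounting, energy price changes, and maintenance costs,
    which is why it is called the *simple* payback period.
    """
    return capital_cost / annual_savings

# Hypothetical example: a $12,000 efficiency retrofit that saves
# $4,000 per year in energy costs pays back in 3 years.
payback = simple_payback_years(12000, 4000)
```

A measure with a payback of a few years or less is typically considered cost-effective for SMEs, which is the sense in which the paper calls most of its 184 measures "low payback".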

Braithwaite R.N., CA Technologies
IEEE Transactions on Microwave Theory and Techniques | Year: 2013

A combined approach to digital predistortion (DPD) and crest factor reduction (CFR) is proposed. The new CFR is structured similarly to the DPD and is implemented by introducing a steady-state offset into the DPD coefficients. The DPD and CFR coefficients are estimated using separate adaptive processes but applied to the transmission path in a common module. The DPD/CFR module provides the means to exploit margins in transmitter performance, allowing a tradeoff among peak-to-average power ratio (PAPR), error vector magnitude, and adjacent channel power ratio (ACPR). The proposed approach applies CFR to the predistorted signal instead of the input signal. This allows the envelope clipping module, which is typically present to protect the power amplifier (PA), to be removed, thereby avoiding divergence problems during the iterative closed-loop estimation of the DPD coefficients. Results show that post-CFR lowers the PAPR of the predistorted signal by 5 dB, which reduces the stress on the peaking transistor in a Doherty PA. The combined DPD/CFR reduces the ACPR of the transmitter by 21 dB compared with the unlinearized PA. © 1963-2012 IEEE.
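The paper's CFR is built into the DPD coefficients rather than applied as envelope clipping, but the PAPR metric it targets can be sketched with a generic magnitude-clipping example. This is a simplified stand-in for illustration, not the authors' method:

```python
import math
import random

def papr_db(samples):
    """Peak-to-average power ratio of a list of complex samples, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip_cfr(samples, threshold):
    """Crest factor reduction by magnitude clipping: samples whose
    magnitude exceeds `threshold` keep their phase but are limited
    in magnitude, which lowers the peak power and hence the PAPR."""
    out = []
    for s in samples:
        mag = abs(s)
        out.append(s if mag <= threshold else s * (threshold / mag))
    return out

# A complex Gaussian test signal stands in for a multicarrier waveform,
# which has similarly high peaks relative to its average power.
rng = random.Random(0)
x = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(10000)]
y = clip_cfr(x, threshold=2.0)
```

Simple clipping like this distorts the signal (degrading EVM and ACPR), which is exactly the tradeoff the paper manages by merging CFR into the DPD's adaptive estimation instead.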

CA Technologies | Date: 2011-01-14

Methods and systems are described for format-preserving encryption. Format-preserving encryption on an entire format F may be achieved by performing format-preserving encryption on one or more subsets of F and then applying one or more permutation rounds in such a way that all elements of F enter a subset to be encrypted. A predetermined number of encryption rounds and a predetermined number of permutation rounds may be interleaved until all elements are thoroughly mixed. The resultant output data may be saved in a database in the same format as the original input data, meet all constraints of the database, and pass all validity checks applied by software supporting the database.
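The interleaving of subset-encryption rounds with domain-wide permutation rounds can be illustrated with a toy sketch. The keyed round functions, round counts, and rotation-based permutation below are illustrative assumptions, not the patented construction, and are not cryptographically secure:

```python
import hashlib

def prf(key, label, rnd):
    """Toy keyed pseudorandom function (illustrative only, not secure)."""
    digest = hashlib.sha256(f"{key}:{label}:{rnd}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def fpe_encrypt(key, x, domain_size, subset_size, rounds=8):
    """Toy format-preserving encryption over the domain 0..domain_size-1.

    Even rounds permute only values inside a subset [0, subset_size);
    odd rounds rotate the whole domain so that, across rounds, elements
    outside the subset are carried into it to be mixed. Every round is
    a bijection, so the output stays inside the original domain.
    """
    for r in range(rounds):
        if r % 2 == 0:
            # Encryption round: keyed rotation of the subset only.
            if x < subset_size:
                x = (x + prf(key, "enc", r)) % subset_size
        else:
            # Permutation round: keyed rotation of the full domain.
            x = (x + prf(key, "perm", r)) % domain_size
    return x
```

Because each round is a bijection on the domain, the composition is too: ciphertexts have the same format as plaintexts and can be stored in a database column with the original constraints, as the patent abstract describes.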
