Cambridge, MA, United States


Kiayias A.,National and Kapodistrian University of Athens | Papadopoulos S.,HKUST | Triandopoulos N.,RSA Laboratories | Zacharias T.,National and Kapodistrian University of Athens
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2013

We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy and introduce a novel cryptographic primitive called delegatable pseudorandom functions, or DPRFs for short: A DPRF enables a proxy to evaluate a pseudorandom function on a strict subset of its domain using a trapdoor derived from the DPRF secret key. The trapdoor is constructed with respect to a certain policy predicate that determines the subset of input values which the proxy is allowed to compute. The main challenge in constructing DPRFs is to achieve bandwidth efficiency (which mandates that the trapdoor is smaller than the precomputed sequence of the PRF values conforming to the predicate), while maintaining the pseudorandomness of unknown values against an attacker that adaptively controls the proxy. A DPRF may be optionally equipped with an additional property we call policy privacy, where any two delegation predicates remain indistinguishable in the view of a DPRF-querying proxy: achieving this raises new design challenges, as policy privacy and bandwidth efficiency are seemingly conflicting goals. For the important class of policy predicates described as (1-dimensional) ranges, we devise two DPRF constructions and rigorously prove their security. Built upon the well-known tree-based GGM PRF family, our constructions are generic and feature only logarithmic delegation size in the number of values conforming to the policy predicate. At only a constant-factor efficiency reduction, we show that our second construction is also policy private. Finally, we show how these new security and efficiency properties render our DPRF schemes particularly useful in numerous security applications, including RFID, symmetric searchable encryption, and broadcast encryption. © 2013 ACM.
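The GGM-style delegation the abstract refers to can be illustrated with a toy sketch. This is not the paper's actual construction: here HMAC-SHA256 stands in for the length-doubling PRG, and all names are invented for illustration. The point it demonstrates is that delegating a subtree of the input domain requires handing over only that subtree's internal-node seed, which is what makes the trapdoor logarithmic in size.

```python
# Toy GGM-style PRF: walk a binary tree from the root seed, deriving a
# child seed per input bit. HMAC-SHA256 plays the role of one half of a
# length-doubling PRG (an illustrative assumption, not the paper's scheme).
import hashlib
import hmac

def prg(seed: bytes, bit: int) -> bytes:
    # Derive the left (bit=0) or right (bit=1) child seed.
    return hmac.new(seed, bytes([bit]), hashlib.sha256).digest()

def ggm_eval(seed: bytes, bits: str) -> bytes:
    # Walk the GGM tree from `seed` along the given bit path.
    for b in bits:
        seed = prg(seed, int(b))
    return seed

key = b"master secret key"

# The key holder evaluates the PRF on input 1010.
full = ggm_eval(key, "1010")

# Delegating all inputs with prefix "10": the trapdoor is just the
# internal node's seed; the proxy then evaluates only the suffix.
trapdoor = ggm_eval(key, "10")
delegated = ggm_eval(trapdoor, "10")
assert delegated == full
```

One subtree seed covers every leaf beneath it, so a range that decomposes into k subtrees needs only k seeds rather than one PRF value per input.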

News Article | December 15, 2016

From Yahoo, MySpace and TalkTalk to Ashley Madison and Adult Friend Finder, personal information has been stolen by hackers from around the world. But with each hack there’s the big question of how well the site protected its users’ data. Was it open and freely available, or was it hashed, secured and practically unbreakable? From cleartext to hashed, salted, peppered and bcrypted, here’s what the impenetrable jargon of password security really means.

When something is described as being stored as “cleartext” or as “plain text” it means that thing is in the open as simple text – with no security beyond a simple access control to the database which contains it. If you have access to the database containing the passwords you can read them just as you can read the text on this page.

When a password has been “hashed” it means it has been turned into a scrambled representation of itself. The user’s password is run through a set algorithm that derives a fixed-length hash value from it. To verify a user’s password is correct, it is hashed again each time they log in and the value compared with the one stored on record. You cannot directly turn a hashed value back into the password, but you can work out what the password is by continually generating hashes from candidate passwords until you find one that matches – a so-called brute-force attack – or by similar methods.

Passwords are often described as “hashed and salted”. Salting is simply the addition of a unique, random string of characters to each password before it is hashed; typically this “salt” is placed in front of the password. The salt value needs to be stored by the site, and sometimes sites use the same salt for every password. This makes it less effective than if individual salts are used.
The use of unique salts means that common passwords shared by multiple users – such as “123456” or “password” – aren’t immediately revealed when one such hashed password is identified, because despite the passwords being the same, the salted and hashed values are not. Large salts also protect against certain methods of attack on hashes, including rainbow tables and logs of hashed passwords previously broken. Both hashing and salting can be repeated more than once to increase the difficulty of breaking the security.

Cryptographers like their seasonings. A “pepper” is similar to a salt – a value added to the password before being hashed – but typically placed at the end of the password. There are broadly two versions of pepper. The first is simply a known secret value added to each password, which is only beneficial if it is not known by the attacker. The second is a value that’s randomly generated but never stored. That means every time a user attempts to log into the site, it has to try multiple combinations of the pepper and hashing algorithm to find the right pepper value and match the hash value. Even with a small range in the unknown pepper value, trying all the values can take minutes per login attempt, so peppers are rarely used.

Encryption, like hashing, is a function of cryptography, but the main difference is that encryption is something you can undo, while hashing is not. If you need to access the source text to change it or read it, encryption allows you to secure it but still read it after decrypting it. Hashing cannot be reversed, which means you can only know what a hash represents by matching it against another hash of what you think is the same information. If a site such as a bank asks you to verify particular characters of your password, rather than enter the whole thing, it is encrypting your password, as it must decrypt it to verify individual characters rather than simply match the whole password to a stored hash.
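The second, never-stored kind of pepper can be sketched as follows. This is an illustrative toy (a single random byte as the pepper, SHA-256 as the hash): because the pepper is discarded after hashing, every login must brute-force the pepper's small range until one value reproduces the stored hash.

```python
# Sketch of a "never stored" pepper: a random byte appended after the
# password (peppers typically go at the end), hashed, then thrown away.
import hashlib
import os

def hash_with_pepper(password: str, salt: bytes) -> bytes:
    pepper = os.urandom(1)  # 256 possible values, never stored anywhere
    return hashlib.sha256(salt + password.encode() + pepper).digest()

def verify_with_pepper(password: str, salt: bytes, stored: bytes) -> bool:
    # The verifier must try every possible pepper value on each login.
    for p in range(256):
        candidate = hashlib.sha256(salt + password.encode() + bytes([p])).digest()
        if candidate == stored:
            return True
    return False

salt = os.urandom(16)
stored = hash_with_pepper("hunter2", salt)
assert verify_with_pepper("hunter2", salt, stored)
assert not verify_with_pepper("wrong", salt, stored)
```

With a one-byte pepper the extra cost is 256 hashes per login; the article's point is that with a slow hash and a wider pepper range, this overhead quickly becomes minutes per attempt, which is why the approach is rarely used.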
Encrypted passwords are typically used for second-factor verification, rather than as the primary login factor.

A hexadecimal number, also simply known as “hex” or “base 16”, is a way of representing the values zero to 15 using 16 separate symbols. The numbers 0-9 represent values zero to nine, with a, b, c, d, e and f representing 10-15. They are widely used in computing as a human-friendly way of representing binary numbers. Each hexadecimal digit represents four bits, or half a byte.

Originally designed as a cryptographic hashing algorithm, first published in 1992, MD5 has been shown to have extensive weaknesses, which make it relatively easy to break. Its 128-bit hash values, which are quite easy to produce, are more commonly used for file verification – to make sure that a downloaded file has not been tampered with. It should not be used to secure passwords.

Secure Hash Algorithm 1 (SHA-1) is a cryptographic hashing algorithm originally designed by the US National Security Agency in 1993 and published in 1995. It generates a 160-bit hash value that is typically rendered as a 40-digit hexadecimal number. As of 2005, SHA-1 was deemed no longer secure, as the exponential increase in computing power and increasingly sophisticated methods meant that it was possible to mount a so-called collision attack on the hash without spending millions on computing resources and time.

The successor to SHA-1, Secure Hash Algorithm 2 (SHA-2), is a family of hash functions that produce longer hash values with 224, 256, 384 or 512 bits, written as SHA-224, SHA-256, SHA-384 or SHA-512. It was first published in 2001, again designed by the NSA, and an effective attack has yet to be demonstrated against it. That means SHA-2 is generally recommended for secure hashing.
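The digest sizes and hex lengths quoted above are easy to check directly with Python's hashlib, which reports each digest as a hexadecimal string in which every hex digit encodes four bits.

```python
# Each algorithm's hex digest length equals its bit size divided by four.
import hashlib

msg = b"password"
md5 = hashlib.md5(msg).hexdigest()        # 128 bits -> 32 hex digits
sha1 = hashlib.sha1(msg).hexdigest()      # 160 bits -> 40 hex digits
sha256 = hashlib.sha256(msg).hexdigest()  # 256 bits -> 64 hex digits
sha512 = hashlib.sha512(msg).hexdigest()  # 512 bits -> 128 hex digits

assert (len(md5), len(sha1), len(sha256), len(sha512)) == (32, 40, 64, 128)
```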
SHA-3, while not a replacement for SHA-2, was developed not by the NSA but by Guido Bertoni, Joan Daemen, Michaël Peeters and Gilles Van Assche from STMicroelectronics and Radboud University in Nijmegen, Netherlands. It was standardised in 2015.

As computational power has increased, the number of brute-force guesses a hacker can make against an efficient hashing algorithm has increased exponentially. Bcrypt, which is based on the Blowfish cipher and includes a salt, is designed to protect against brute-force attacks by intentionally being slower to operate. It has a so-called work factor that effectively puts your password through a definable number of rounds of extension before it is hashed. By increasing the work factor, it takes longer to brute-force the password and match the hash. The theory is that the site owner sets a sufficiently high work factor to reduce the number of guesses today’s computers can make at the password, extending the time from days or weeks to months or years and making the attack prohibitively time consuming and expensive.

Password-Based Key Derivation Function 2 (PBKDF2), developed by RSA Laboratories, is another algorithm for key extension that makes hashes more difficult to brute force. It is considered slightly easier to brute force than Bcrypt at a comparable work setting because it requires less computer memory to run.

Scrypt, like Bcrypt and PBKDF2, is an algorithm that extends keys and makes it harder to brute-force attack a hash. Unlike PBKDF2, however, scrypt is designed to use either a large amount of computer memory or to force many more calculations as it runs. For legitimate users, who only have to hash one password to check whether it matches a stored value, the cost is negligible. But for someone attempting to try hundreds of thousands of passwords, it makes the cost of doing so much higher, or makes it take prohibitively long.

If a password is properly hashed using SHA-2 or newer, and is salted, then breaking it requires a brute-force attack.
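The key stretching described above can be sketched with the PBKDF2 and scrypt functions in Python's standard hashlib. The iteration count and scrypt cost parameters below are illustrative placeholders, not recommendations.

```python
# Key stretching: derive a hash slowly and tunably, so each brute-force
# guess costs the attacker real time (and, for scrypt, real memory).
import hashlib
import os

salt = os.urandom(16)

# PBKDF2: the iteration count is the tunable work factor.
dk1 = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

# scrypt: n (CPU/memory cost), r (block size) and p (parallelism) force
# the attacker to spend memory as well as time.
dk2 = hashlib.scrypt(b"correct horse", salt=salt, n=2**14, r=8, p=1)

assert len(dk1) == 32  # default dklen for SHA-256
assert len(dk2) == 64  # scrypt's default dklen
```

Raising the PBKDF2 iteration count (or scrypt's n) is how a site owner dials up the work factor as hardware gets faster; the legitimate user pays the cost once per login, while the attacker pays it per guess.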
The longer the password, the longer the brute-force attack is going to last. And the longer the brute-force attack required, the more time-consuming and expensive it is to match the hash and discover the password. Which means the longer the password the better – but the composition of the password also makes a difference. A truly random eight-character password will be more secure than an eight-letter dictionary word, because brute-force attacks use dictionaries, names and other lists of words as fodder. However, if the site stores your password as plain text, it doesn’t matter how long your password is if the database is stolen.
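Back-of-the-envelope arithmetic makes the random-versus-dictionary point concrete. The dictionary size below is an assumed, generous figure for illustration.

```python
# Search-space comparison: random 8-character passwords vs a word list.
random_space = 94 ** 8    # 8 characters drawn from ~94 printable ASCII symbols
dictionary = 1_000_000    # assumed size of a generous word/name list

# Roughly six quadrillion candidates versus a million.
assert random_space > 6 * 10**15
assert random_space // dictionary > 10**9  # billions of times more guesses
```

An attacker's word list covers the dictionary-word case almost immediately, while the truly random password forces a search over the full character space.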

Rostami M.,Rice University | Juels A.,RSA Laboratories | Koushanfar F.,Rice University
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2013

We present Heart-to-Heart (H2H), a system to authenticate external medical device controllers and programmers to Implantable Medical Devices (IMDs). IMDs, which include pacemakers and cardiac defibrillators, are therapeutic medical devices partially or wholly embedded in the human body. They often have built-in radio communication to facilitate non-invasive reprogramming and data readout. Many IMDs, though, lack well-designed authentication protocols, exposing patients to over-the-air attack and physical harm. H2H makes use of ECG (heartbeat data) as an authentication mechanism, ensuring access only by a medical instrument in physical contact with an IMD-bearing patient. Based on statistical analysis of real-world data, we propose and analyze new techniques for extracting time-varying randomness from ECG signals for use in H2H. We introduce a novel cryptographic device pairing protocol that uses this randomness to protect against attacks by active adversaries, while meeting the practical challenges of lightweight implementation and noise tolerance in ECG readings. Finally, we describe an end-to-end implementation in an ARM-Cortex M-3 microcontroller that demonstrates the practicality of H2H in current IMD hardware. Previous schemes have had goals much like those of H2H, but with serious limitations making them unfit for deployment, such as naively designed cryptographic pairing protocols (some of them recently broken). In addition to its novel analysis and use of ECG entropy, H2H is the first physiologically-based IMD device pairing protocol with a rigorous adversarial model and protocol analysis. © 2013 ACM.

Zhang Y.,University of North Carolina at Chapel Hill | Juels A.,RSA Laboratories | Reiter M.K.,University of North Carolina at Chapel Hill | Ristenpart T.,University of Wisconsin - Madison
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2012

This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox applications or OS compromises, to clouds that co-locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library. Copyright © 2012 ACM.

Fletcher C.,Massachusetts Institute of Technology | Van Dijk M.,RSA Laboratories | Devadas S.,Massachusetts Institute of Technology
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2012

This paper considers encrypted computation where the user specifies encrypted inputs to an untrusted program, and the server computes on those encrypted inputs. To this end we propose a secure processor architecture, called Ascend, that guarantees privacy of data when arbitrary programs use the data running in a cloud-like environment (e.g., an untrusted server running an untrusted software stack). The key idea to guarantee privacy is obfuscated instruction execution; Ascend does not disclose what instruction is being run at any given time, be it an arithmetic instruction or a memory instruction. Periodic accesses to external instruction and data memory are performed through an Oblivious RAM (ORAM) interface to prevent leakage through memory access patterns. We evaluate the processor architecture on SPEC benchmarks running on encrypted data and quantify overheads. Copyright © 2012 ACM.

Juels A.,RSA Laboratories | Oprea A.,RSA Laboratories
Communications of the ACM | Year: 2013

We have described new techniques that secure cloud data by ensuring a range of protections, from integrity and freshness verification to high data availability. We also proposed an auditing framework that offers tenants visibility into the correct operation of the cloud. These techniques enable an extension of the trust perimeter from enterprise internal data centers into public clouds, as in Figure 1. Our hope is that these techniques will alleviate some of the concern over security in the cloud and facilitate migration of enterprise resources into public clouds. We conclude here by mentioning several open problems of interest in this context. © 2013 ACM.

Bowers K.D.,RSA Laboratories | Van Dijk M.,RSA Laboratories | Juels A.,RSA Laboratories | Oprea A.,RSA Laboratories | Rivest R.L.,Massachusetts Institute of Technology
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2011

This paper presents a new challenge - verifying that a remote server is storing a file in a fault-tolerant manner, i.e., such that it can survive hard-drive failures. We describe an approach called the Remote Assessment of Fault Tolerance (RAFT). The key technique in a RAFT is to measure the time taken for a server to respond to a read request for a collection of file blocks. The larger the number of hard drives across which a file is distributed, the faster the read-request response. Erasure codes also play an important role in our solution. We describe a theoretical framework for RAFTs and offer experimental evidence that RAFTs can work in practice in several settings of interest. © 2011 ACM.
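The timing intuition behind RAFT can be captured in a toy model. This is not the paper's framework, just an illustrative sketch with assumed numbers: reads of c blocks spread across d drives proceed in parallel, so the response time scales roughly with ceil(c / d) sequential seeks.

```python
# Toy timing model for remote fault-tolerance assessment: more drives
# behind a file means a faster response to a multi-block read challenge.
# seek_ms is an assumed per-seek latency, for illustration only.
from math import ceil

def expected_read_time(blocks: int, drives: int, seek_ms: float = 12.0) -> float:
    # Blocks are assumed evenly spread; each drive services its share serially.
    return ceil(blocks / drives) * seek_ms

# A 100-block challenge cleanly separates a 5-drive layout from one drive.
assert expected_read_time(100, 5) < expected_read_time(100, 1)
```

A verifier issuing such a challenge and timing the response can thus distinguish a server that really distributes the file across many drives from one that stores it on a single disk.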

Zhang Y.,University of North Carolina at Chapel Hill | Juels A.,RSA Laboratories | Oprea A.,RSA Laboratories | Reiter M.K.,University of North Carolina at Chapel Hill
Proceedings - IEEE Symposium on Security and Privacy | Year: 2011

Security is a major barrier to enterprise adoption of cloud computing. Physical co-residency with other tenants poses a particular risk, due to pervasive virtualization in the cloud. Recent research has shown how side channels in shared hardware may enable attackers to exfiltrate sensitive data across virtual machines (VMs). In view of such risks, cloud providers may promise physically isolated resources to select tenants, but a challenge remains: Tenants still need to be able to verify physical isolation of their VMs. We introduce HomeAlone, a system that lets a tenant verify its VMs' exclusive use of a physical machine. The key idea in HomeAlone is to invert the usual application of side channels. Rather than exploiting a side channel as a vector of attack, HomeAlone uses a side-channel (in the L2 memory cache) as a novel, defensive detection tool. By analyzing cache usage during periods in which "friendly" VMs coordinate to avoid portions of the cache, a tenant using HomeAlone can detect the activity of a co-resident "foe" VM. Key technical contributions of HomeAlone include classification techniques to analyze cache usage and guest operating system kernel modifications that minimize the performance impact of friendly VMs sidestepping monitored cache portions. Our implementation of HomeAlone on Xen-PVM requires no modification of existing hypervisors and no special action or cooperation by a cloud provider. © 2011 IEEE.

Juels A.,RSA Laboratories | Rivest R.L.,Massachusetts Institute of Technology
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2013

We propose a simple method for improving the security of hashed passwords: the maintenance of additional "honeywords" (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the "honeychecker") can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted. © 2013 ACM.
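The split of knowledge the abstract describes can be sketched in a few lines. This is a minimal illustration with invented names: the login server stores hashes of all "sweetwords" (real password plus honeywords), while only the honeychecker knows which index is the real one.

```python
# Minimal honeywords sketch: the login server sees only that some
# sweetword matched; the honeychecker's index decides accept vs alarm.
import hashlib

def h(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

sweetwords = ["red7dragon", "letmein77", "Bos+on2019"]  # one is real
hashes = [h(w) for w in sweetwords]   # what the login server stores
real_index = 1                        # known only to the honeychecker

def login(attempt: str) -> str:
    digest = h(attempt)
    if digest not in hashes:
        return "reject"
    # Honeychecker's decision: real password, or alarm on a honeyword.
    return "accept" if hashes.index(digest) == real_index else "ALARM"

assert login("letmein77") == "accept"
assert login("red7dragon") == "ALarm".upper()  # honeyword triggers the alarm
assert login("nope") == "reject"
```

An adversary who inverts the stolen hash file recovers all three sweetwords but cannot tell which is real, and submitting a wrong guess raises the alarm.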

Van Dijk M.,RSA Laboratories | Juels A.,RSA Laboratories | Oprea A.,RSA Laboratories | Rivest R.L.,Massachusetts Institute of Technology
Journal of Cryptology | Year: 2013

Recent targeted attacks have increased significantly in sophistication, undermining the fundamental assumptions on which most cryptographic primitives rely for security. For instance, attackers launching an Advanced Persistent Threat (APT) can steal full cryptographic keys, violating the very secrecy of "secret" keys that cryptographers assume in designing secure protocols. In this article, we introduce a game-theoretic framework for modeling various computer security scenarios prevalent today, including targeted attacks. We are particularly interested in situations in which an attacker periodically compromises a system or critical resource completely, learns all its secret information and is not immediately detected by the system owner or defender. We propose a two-player game between an attacker and defender called FlipIt or The Game of "Stealthy Takeover." In FlipIt, players compete to control a shared resource. Unlike most existing games, FlipIt allows players to move at any given time, taking control of the resource. The identity of the player controlling the resource, however, is not revealed until a player actually moves. To move, a player pays a certain move cost. The objective of each player is to control the resource a large fraction of time, while minimizing his total move cost. FlipIt provides a simple and elegant framework in which we can formally reason about the interaction between attackers and defenders in practical scenarios. In this article, we restrict ourselves to games in which one of the players (the defender) plays with a renewal strategy, one in which the intervals between consecutive moves are chosen independently at random from a fixed probability distribution. We consider attacker strategies ranging in increasing sophistication from simple periodic strategies (with moves spaced at equal time intervals) to more complex adaptive strategies, in which moves are determined based on feedback received during the game. 
For different classes of strategies employed by the attacker, we determine strongly dominant strategies for both players (when they exist): strategies that achieve higher benefit than all other strategies in a particular class. When strongly dominant strategies do not exist, our goal is to characterize the residual game consisting of strategies that are not strongly dominated by other strategies. We also prove equivalence or strict inclusion of certain classes of strategies under different conditions. Our analysis of different FlipIt variants teaches cryptographers, system designers, and the community at large some valuable lessons:
1. Systems should be designed under the assumption of repeated total compromise, including theft of cryptographic keys. FlipIt provides guidance on how to implement a cost-effective defensive strategy.
2. Aggressive play by one player can motivate the opponent to drop out of the game (essentially not to play at all). Therefore, moving fast is a good defensive strategy, but it can only be implemented if move costs are low. We believe that virtualization has huge potential in this respect.
3. Close monitoring of one's resources is beneficial in detecting potential attacks faster, gaining insight into the attacker's strategies, and scheduling defensive moves more effectively.
Interestingly, FlipIt finds applications in other security realms besides the modeling of targeted attacks. Examples include cryptographic key rotation, password changing policies, refreshing virtual machines, and cloud auditing. © 2012 International Association for Cryptologic Research.
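The game mechanics can be illustrated with a tiny discrete-time simulation. This is not from the paper: it pits two purely periodic players against each other (the simplest strategy class mentioned above) and tallies what fraction of time the defender controls the resource, ignoring move costs.

```python
# Toy FlipIt round in discrete time: both players move periodically;
# whoever moved last controls the resource. Move costs are omitted.
def flipit_control(defender_period: int, attacker_period: int,
                   horizon: int = 10_000) -> float:
    """Fraction of time steps the defender controls the resource.
    The defender moves first at t=0; on ties, the attacker moves last."""
    owner = "defender"
    defended = 0
    for t in range(horizon):
        if t % defender_period == 0:
            owner = "defender"
        if t != 0 and t % attacker_period == 0:
            owner = "attacker"
        defended += owner == "defender"
    return defended / horizon

# Moving faster than the opponent wins most of the time; the paper's
# analysis is about whether that pace is worth its move cost.
assert flipit_control(2, 10) > flipit_control(10, 2)
```

Even this crude sketch shows lesson 2 in miniature: the fast mover dominates control of the resource, so fast play is attractive exactly when moves are cheap.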
